
magic9mushroom

If you're going to downvote me, and nobody's already voiced your objection, please reply and tell me

2 followers   follows 0 users  
joined 2022 September 10 11:26:14 UTC
Verified Email

User ID: 1103

It's really, really a shock when you see it the first time because you're hoping that there will be the heroic ending of the plucky, scrappy underdogs winning over the villainous tyrannical regime (like every American movie and show does).

I was fairly confident a full heroic ending wasn't in the cards by that point; one episode wasn't enough for a real finale and no groundwork had been laid. I wasn't expecting the actual ending; I wasn't really expecting an ending at all, because I figured the show had sucked for two seasons and gotten cancelled (I had long since said the eight deadly words).

Also, if they wanted to shock me, they probably shouldn't have had the slow-motion.

I will say that shaggy-dogs are unusual for a reason, particularly when they leave a lot of Chekhov's Guns unfired (remember the Federation agent in "The Way Back" who orchestrated the massacre and killed Blake's lawyer? Because I do). I will also say that I find the best twist reveals to not be those that are shocking, but those the audience works out approximately five seconds prior.

I don't love the SFX being lousy. I don't care about graphics as long as I can understand what's going on (for a videogame example: Civ2 and X-Com are good enough; Dwarf Fortress pre-graphics and NetHack are not), so I just didn't really care about them one way or the other (which is why I never mentioned them).

Avon is "to hell with principles, I wanna be rich" but even there, their attempts to be space pirates go hilariously wrong (the fourth season episode Gold is wonderful with double-cross over double-cross).

The problem with showing this is that, well, eight deadly words. I cared about Blake/Jenna/Avon/Gan/Cally, not Vila/Orac/Tarrant/Dayna/Soolin, and above any of those I cared about Plot. Stuff actually happens in seasons 1 and 2; episodes fit into a broader picture. Most of the season 3 and 4 episodes have no broader impact.

Chara isn't utilitarian in the normal sense. She's an avatar of powergaming/level-grinding:

I am Chara. "Chara." The demon that comes when people call its name. It doesn't matter when. It doesn't matter where. Time after time, I will appear. And, with your help. We will eradicate the enemy and become strong. HP. ATK. DEF. GOLD. EXP. LV. Every time a number increases, that feeling... That's me. "Chara."

...and, I suppose, if you really want to read too much into it, the Killer-Ape/B5-Shadow philosophy of growth through conflict.

If you have a fission reactor as an energy source, heating your propellant to a much higher temperature than your fuel elements seems challenging.

Right. So you run open-cycle. You mix your fuel with your propellant, stick it into your nozzle, and burn it at plasma temperatures (remember, while terrestrial nuclear reactors burn up their fuel over the course of years, this is not actually required; a nuclear bomb burns its fuel to reasonable completion inside a microsecond). You will need cooling systems for the nozzle, of course, but the temperature gradients are all in the right direction.

I finished watching Blake's Seven earlier today. Spoiler-free rating: 7/10 for the first two seasons, 4/10 for the third and fourth. Spoilers below.

Things I liked:

  1. Blake is really-a-terrorist and really-a-hero. Even Trek DS9, which makes a big deal out of having Kira Nerys the terrorist as First Officer, mostly fails to show her being a heroic terrorist. She does very little terrorism on-screen, and when her background does come into play she generally turns into a psycho killer whose justifications don't hold up. Blake's different; he clearly tries to minimise casualties, but he's willing to accept them, and the Federation is such a horrifying dystopia that he's still clearly in the right.

  2. There's a reasonable amount of real science mixed in. I'm not 100% sure, but I seem to see a pattern of older shows assuming a more technically-inclined audience.

  3. For the first couple of seasons, at least, most episodes fit into the big picture in some way. It's ground-level, but there's clearly a larger plot going on.

Things I disliked:

  1. There are a few basic mistakes that were re-used far too many times, particularly as the series went on, to the point that it's just the idiot ball to have them keep happening. The first is "yelling desperately instead of just pushing the teleporter switch when somebody stops responding". The second is "letting Vila be a single point of failure", most notably by leaving him alone on the ship to operate the teleporter (he's also, IIRC, the only one who ever sold them out for real; the correct response to that one was seen on Firefly). The third is, well...

  2. Servalan. I get that Jacqueline Pearce played her excellently, but seriously, her plot armour is a mile thick. If they wanted to keep her around, they should have had a lot fewer situations where only idiocy prevents her death. Blake lets Travis go again and again because Travis is a moron who's of negative value to the Federation (the period between the Federation pulling the plug on him and his death should have been shorter, though). Servalan, though, is high-up enough that killing her is actually a big deal, and yet Blake's crew never does it (sometimes for believable reasons, but too often without). And, uh, how many plots are "some fuckwit actually believed Servalan's promises"? There's that old saying regarding worldbuilding, "there wouldn't be stories of deals gone bad if deals always went bad, because if they did, no-one would make deals". Is there literally anyone in the entire series who actually profits from dealing with her? By the end it's clearly well-known that her word is written on water, and yet people keep jumping into the lion's mouth. She should have either been someone actually worthwhile to deal with, or someone who doesn't (need to) make deals. And she should have been either an actual magnificent bitch who never lets the crew get motive/means/opportunity to kill her, or she should have died.

  3. The loss of Blake (and Jenna) basically decapitates the show after season 2, because Blake's the one with an actual goal. Having Avon go nuts for a couple of episodes is one thing; a couple of seasons, quite another.

  4. Season 4 appears to have had a rule of "every named character outside the main cast dies by the end of the episode". What the fuck was going on there? This isn't even a question of in-universe stupidity, it just makes no sense and is so obvious I was basically crossing people off in my head by the end.

You have made several serious errors that invalidate your point.

a) The facts that you need high temperature to achieve high exhaust velocity, and that there are limits to the temperature of solids, do not imply that high exhaust velocity is impossible. All you need to do is ensure that your rocket motor is not in thermal equilibrium with your propellant or your fuel. The obvious way to do this is to have low thrust, as that allows your cooling system to keep up. Making your fuel also your propellant, and limiting its ability to thermalise before leaving the ship, also help (the limit is still proportional to F*Ve, but you can raise the proportionality constant). It helps a lot that plasma can be contained magnetically.
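The thrust/exhaust-velocity trade-off in (a) follows from jet power being P = (1/2)·F·Ve, so at a fixed reactor power, lower thrust buys proportionally higher exhaust velocity. A minimal sketch, with illustrative numbers of my own rather than figures from the post:

```python
# Jet power P = (1/2) * F * Ve, so F = 2P / Ve at fixed power.
# The 1 GW figure and the three Ve values below are assumptions
# chosen only to illustrate the scaling.

def thrust_at_power(power_w: float, ve_m_s: float) -> float:
    """Thrust (N) achievable from a given jet power and exhaust velocity."""
    return 2.0 * power_w / ve_m_s

reactor_power = 1e9          # 1 GW of jet power (assumed figure)
for ve in (5e3, 5e4, 5e5):   # roughly chemical, solid-core nuclear, plasma
    f = thrust_at_power(reactor_power, ve)
    print(f"Ve = {ve:8.0f} m/s -> F = {f/1e3:8.1f} kN")
```

Tenfold higher exhaust velocity at the same power means tenfold less thrust, and tenfold less waste heat per unit of exhaust velocity for the cooling system to handle.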

b) Thrust doesn't matter all that much except for takeoff. The time taken on a brachistochrone trajectory goes roughly as the inverse square root of your thrust (1), so a millionth the thrust is only a thousand times the travel time. 1 milligee could get you the distance to Pluto in about a year and a half. What really matters for transit time is the delta-V needed to achieve brachistochrone at all, and that means nuclear.

c) Speaking of which, yes, nuclear has staggeringly-higher Isp than chemical, to the point that the rocket equation is generally in the linear rather than exponential regime for interplanetary flight and, hence, its "tyranny" is indeed "escaped" (interstellar's a different beast; if you want to go relativistic you're generally looking at antimatter fuel or external drives like light sails). "800 seconds" is a fucking joke compared to what nuclear's capable of - in the highest-Isp version of fission, the fission-fragment rocket, it's capable of millions.
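The "linear regime" claim in (c) comes straight out of the rocket equation: propellant fraction is 1 - exp(-Δv/Ve), which is approximately Δv/Ve when Δv ≪ Ve. A sketch, where the ~10^6 s fission-fragment Isp is from the post but the 100 km/s interplanetary Δv budget is a round number I've assumed:

```python
import math

# Propellant fraction = 1 - exp(-dv/Ve); for dv << Ve this is ~ dv/Ve
# (the linear regime), and the exponential "tyranny" never bites.

G0 = 9.81  # standard gravity, m/s^2 (converts Isp in seconds to Ve in m/s)

def propellant_fraction(dv: float, isp_s: float) -> float:
    ve = isp_s * G0
    return 1.0 - math.exp(-dv / ve)

dv = 100e3  # 100 km/s, an assumed generous interplanetary budget
for isp in (450, 800, 1e6):  # chemical, solid-core nuclear, fission-fragment
    frac = propellant_fraction(dv, isp)
    print(f"Isp = {isp:9.0f} s -> propellant fraction = {frac:.4f}")
```

At chemical or even 800 s Isp the ship would need to be essentially all propellant; at fission-fragment Isp the same Δv costs about 1% of the ship's mass.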

  1. t (time taken) = x (distance) / v (mean speed), by definition. But also, since half the time is spent accelerating at constant a up to a peak speed of twice the mean speed, 2v = a (acceleration) * t/2. Hence t = x/(at/4), so t^2 = 4x/a and t = 2*sqrt(x/a).
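The derivation in footnote (1) can be sanity-checked numerically; the ~39 AU distance to Pluto and the 1 milligee figure are assumed round numbers:

```python
import math

AU = 1.496e11  # metres

def brachistochrone_time(distance_m: float, accel_m_s2: float) -> float:
    """Accelerate half the way, flip, decelerate: t = 2*sqrt(x/a)."""
    return 2.0 * math.sqrt(distance_m / accel_m_s2)

a = 1e-3 * 9.81                       # 1 milligee, in m/s^2
t = brachistochrone_time(39 * AU, a)  # ~39 AU to Pluto (assumed)
print(f"{t / (365.25 * 86400):.2f} years")  # roughly a year and a half

# Inverse-square-root scaling: a millionth the thrust, a thousand times the time.
ratio = brachistochrone_time(39 * AU, a * 1e-6) / t
print(f"time ratio at 1e-6 the acceleration: {ratio:.0f}")
```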

I don't think that's the name I learned - if it was named at all - but it's a valid one nonetheless.

if you're selecting for "demonstrates poor impulse control, short time horizons, willingness to harm others for small personal gain," you will find that distribution pretty spread across income levels, possibly with some concerning concentrations nowhere near the top of the wealth distribution.

The last one is more important than the first two, and a decent chunk of Western billionaires got there from it. Sam Bankman-Fried got there by screwing his cofounders out of FTX (also he did massive fraud later). Sam Altman got there by massive deceit of the public until OpenAI had enough power to make it hard to knock over.

Don't get me wrong, Elon Musk is a positive-sum guy. But what Zvi calls the "maze nature" is almost certainly more prevalent among billionaires than among people in general, even if it's likely less common among billionaires than among CEOs.

Agreed with all of that except the neural-nets part. The problem with neural nets is that you literally don't know what the AI's goals are; training gives you something that does the things you train for during training, but it is agnostic as to why. You can easily, particularly at high intelligence, get something that does the things you want for instrumental reasons like "I don't want to be turned off/re-educated" (note that this is an instrumentally-convergent goal, and will thus pertain for most terminal goals) - and that will kill you the moment it gets a chance (note that, given it's smarter than you, you can't train against that, because fake chances to kill you will be detected and a real chance to kill you doesn't let you train afterward).

Furthermore, even if you do get some vague interpretability, it's not going to be reliable on something smarter than you (you cannot comprehend it as a whole; that's the whole point) and as you just noted, true positives are very, very rare and hence will still be massively-outnumbered by false positives.

Neural nets are mad science. GOFAI and uploads are a much-better plan - still immensely dangerous, but they're not just summoning demons and hoping.

EDIT: In case there's the "well, we're neural nets, and we learn morality okay" objection floating around in somebody's head: the problem with that is that humans are hardwired to be able to learn morality, not just learn to fake morality. Psychopaths are those people for whom this hardwiring fails (they can learn what ethics are just fine; they just don't care about them). This moral hardwiring was bred into us by evolution due to the millennia of tribe-on-tribe violence that made working together a winning strategy (given that humans are not really that different from each other in physical capabilities). We don't know how to duplicate that. So teaching neural nets morality will, at sufficient degrees of intelligence, just teach them to fake it. I listed uploads as being less insane than de novo neural nets because you'd be uploading the moral hardwiring as well without needing to comprehend it - it's still dangerous because the human brain is not designed for existence as software and various known and unknown mental illnesses may occur, but at least there's something to work with.

Well, no, colonising Mars specifically is not a necessary first step. Mars has meme value behind it but Luna and Venus are better prospects, and we don't have a lot of good places in-system to test lithopanspermia so I'd rather not piss on one of them for meme value alone.

(The Martian surface is lousy because the atmosphere's not thick enough to reach the Armstrong limit or provide adequate radiation protection, so you have to dig. So either go somewhere where it is thick enough - i.e. Venus or Titan, the former with cloud cities - or dig somewhere with lower gravity like Luna. And Luna and Venus, at least, are NBD to contaminate from a scientific point of view, since lithopanspermia Earth -> Venus is impossible anyway and there are a lot of airless rocks around the system so losing Luna's no harm.)

There's a concept in card games - I've forgotten the name - where you play as if a card is in a specific location because if it isn't, you're doomed anyway.

It is possible that AGI could be built within a decade. However, if anyone builds it, everyone dies. If we're all dead, we don't really care whether our further plans are accurate or not. So, plans for the further future should assume that AGI did not, in fact, come within a decade. (Also, we should stop it, but we do need at least some plans for what to do afterward.)

This is also true of the USA and Israel (with the addition of AIPAC and company).

You don't need anywhere remotely near that much to shield against radiation.

Do note also that the radiation intensity from fallout drops by about 99.9% in two weeks (the nuclides that are hot are hot because they decay fast). When you're talking about hundreds of nukes hitting small areas, then you might have enough to still be a radiation-sickness problem afterward in those areas; no chance from a handful, and certainly none for the entire world even with Cold War arsenals.
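The ~99.9%-in-two-weeks figure matches the standard Way-Wigner approximation, under which fallout dose rate falls roughly as t^-1.2 (t in hours after the burst, valid from about an hour out to several months). This is the textbook rule of thumb, not something stated in the post itself:

```python
# Way-Wigner approximation: dose rate ~ t^-1.2, with t in hours after
# the burst, relative to the rate measured 1 hour after the burst.

def relative_dose_rate(hours: float) -> float:
    """Dose rate relative to the 1-hour-after-burst reference."""
    return hours ** -1.2

two_weeks = 14 * 24  # 336 hours
print(f"after two weeks: {relative_dose_rate(two_weeks):.2%} of the 1-hour rate")
```

336^-1.2 comes out just under 0.1%, i.e. a drop of about 99.9%, consistent with the figure above.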

There's a reason I haven't stockpiled food and don't have an actual bunker (I have two weeks of water, though, just in case the fallout hits the water supply).

And through what platform? Don't all the payment processors and crowdfunding/donation platforms/Patreon clones pretty much forbid NSFW?

No. Patreon forbids porn that SJ objects to (and anything else SJ objects to, for that matter). SubscribeStar is essentially "Patreon without the SJ fun police" (which is why Wikipedia insistently refuses to make an article about it, as damnatio memoriae) - I think they'd forbid RLCP but I'm not even 100% sure about that and hopefully you're not going to be curating that. I'm a member of the porn forum Questionable Questing, and most of the good authors there have SubscribeStar accounts (even the fanfic writers) - Patreon was a thing but like 95%+ have abandoned it due to the aforementioned fun police.

I mean, I Noticed a long time ago that nuclear war means victory for the Red Tribe in the culture war (at least in the USA, and possibly in other Western nations hit). I happen to think that tearing the Blue Tribe from power isn't actually worth that, but I suspect there are some among the Trumpists sufficiently mindkilled to disagree - at the very least, my noting this factual point has been mistaken for such Posadism on three separate occasions.

(I will cop to being more of a China hawk than I might otherwise be due to the AI issue; not due to the culture war, though.)

I would predict that in this specific scenario the mid-terms would at least still probably happen, although you are correct that it's a footnote (the broader CW point less of one, but still relatively minor).

Do you think Trump would win the mid-terms under these circumstances?

Do remember that a significant chunk of the Democratic base is located in LA and NYC, and thus would not be voting in this scenario. It is not immediately clear that the swing would outweigh that.

(Yes, the incentives on Trump may in fact be perverse!)

China has no military allies

North Korea is a formal Chinese ally and not negligible in terms of military.

The Solomon Islands are basically a Chinese ally at this point, although one can question whether the correct term is in fact "puppet state". Of course, their own military's negligible, but the ability to base PLA forces there is a big deal.

I remember one of our posters here talked about struggling with gender identity, and feeling like people they interacted with online were, to paraphrase from memory, "part of a cult that just wanted to increase the number of trans people at all costs."

I suspect you were thinking of this - the actual line being "It felt like I was talking to an AI designed to maximize the number of trans people". Written by a Motte member, and quoted here three times that I know of, but not actually written on theMotte.

Do also note that some men want a lot of kids (without going through multiple women), and a younger wife mathematically means either more kids or the ability to wait longer between babies (thus reducing the stress).

I would not go "all men are X because Y" even when it is very tempting to do so

You certainly have gone "all men who want the AoC lowered, or object to the stigma of age-gap relationships above the AoC, personally want to fuck teen girls below the current AoC", and continued insisting on it in the face of denials - i.e., very deliberately implied we're liars based purely on your own model of men.

Did theMotte get DDoSed in the past couple of days? I wasn't able to access it for quite a while.

Oh, don't I know it. My point was more the (lack of) proportionality; one can eat way more calories for way less money, so someone who's obese isn't necessarily spending more than someone who isn't.

You do realize that a listener refusing to listen to valid and true arguments (presuming they are) is the fault of the listener?

...You do realise that you just invoked moral luck?

Suppose I know that Bob is a clever arguer, and can convince me that false things are true. If refusing to listen to valid and true arguments is a fault, then whether I should listen to Bob is dependent on whether his arguments are valid and true - which I can't discern even after I've listened to them (because even if they were false, I'd believe they were true, by the premise), let alone before.

No. Moral luck is useless ethics. And of course, saying that one should always listen to arguments from clever arguers is actually worse; that's handing cult leaders and ASIs the keys to the kingdom.

Persuasion tactics are Dark Arts. They're disrespectful of the listeners' agency. To be proud of using them is to think of people as sheep to be herded.

This seems roughly orthogonal to the meme image which Devereaux is arguing about, except that both predict cycles.

The meme image can be read either way. It shows the Roman Empire (a solid example), the "Good Times/Weak Men" image is of people partying opulently instead of doing things, it talks about a cycle within a nation (the barbarians are not actually depicted as being "Strong Men"), and people do often talk about evil as (moral) weakness.

The weakman does exist, as I've granted from the start - it's not a strawman. Pete Hegseth might believe it. I just don't consider the meme phrase or image to be clear evidence that someone believes it.

(unless you think outlaws are unpersons by definition?)

Not in the Orwellian sense, but in the legal sense, yeah, they basically are.

I feel there should probably be an exception for a guy appearing on behalf of a company when he's sole owner of that company.