Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?
This is your opportunity to ask questions. No question too simple or too silly.
Culture war topics are accepted, and proposals for a better intro post are appreciated.
No sci fi really captures the possibilities inherent in generative AI, because they're so significant. Even far-out stuff like the Culture or Revelation Space doesn't really.
The problem is that there's limited room for human protagonists' agency once you have AI (or AI plus decent robots, which almost all science fiction assumes), and agency is kind of the core of storytelling. It's the same reason sci fi struggles to move away from human pilots and captains and soldiers and so on. I think a lot of science fiction moving forward will be retro-future stuff that imagines we went to space with something like 1960s-to-1990s technology and that AI wasn't invented, or at least not the way it was here. Starfield seems to be taking this approach.
I'm so, so tired of stories that follow human narrative sensibilities. Are there any books that ask the reader to fall in love with a well crafted structure that completely defies human narrative convention? That aims to map the reader to the alien rather than mapping the alien to the reader?
Stanisław Lem's Solaris might be worth a read if you haven't already. It doesn't play with narrative convention much, but it does convey a sense of something truly alien.
I'd say there are, but the more you do this the more avant-garde and surreal things become, and the more skilled a writer you have to be to make it work. Flatland is probably one of the most famous examples.
Flatland was good. I was also a fan of the aliens in Slaughterhouse-Five, though they weren't central. I like nature documentaries, but I don't think they go far enough. Ant YouTubers who get really passionate about morphology and behavioral analysis are okay. Sometimes I get my jollies just by reading ML whitepapers. Animorphs had a lot going for it, though I read it all as a kid and don't know if I would again.
I'm really not asking for much, just that an author writing about year X account for something that clearly existed in year X-1.
Or at the very least, it would be trivial for the author to make up some excuse for their absence. Say, normally the AI was a standard AGI, but it was intentionally sabotaged during the flight and is in a crippled position. Or the section carrying all the heavy robots in storage was hit by debris that made it past the shielding.
At least some sign that the author is aware of the issue and is attempting to placate my suspension of disbelief.
As per Yudkowsky's take on Vinge's law, it's pretty much impossible for a human to write a compelling superhuman intelligence in the first place, so I am willing to look the other way most of the time.
Still, I was so ticked off at this point that I went to the Author's discord, and boy did I end up on the wrong side of the tracks.
They/Thems for miles (even the author, which I sort of suspected from the emphasis on neo-pronouns and weird additional genders, but I actually don't mind that because it's set hundreds of years in the future and it would be weird if there weren't any running around).
I was confused to see half a dozen bot accounts replying to me, before someone informed me that this was people using "alters", some kind of DID bullshit I presume, since the bot's description explained it was a way for people to talk to themselves as a different persona (???).
I more or less copy pasted my complaints, and was immediately inundated by more They/Thems spouting the absolute worst takes on AI, to the point my eyes bled. At least they were mostly polite about it, but I'm sure they're already well acquainted with accommodating people with weird personal quirks, if you count my intolerance for gaping plot holes as one.
Then the author themselves showed up, and informed me that they were aware of modern AI, yet apparently disagreed on their capabilities and future.
This pretty much killed me outright, so I politely agreed to disagree and left. I am unsure what mechanism they're using to extrapolate the future that requires AI to be worse than it is today after hundreds of years, and I'd rather not even ask.
I guess if all science fiction written after 2024 includes something suspiciously like a Butlerian Jihad, then be careful what you wish(ed) for?
Well, in my setting, I chose the route of having the initial singularity be aborted by humanity panicking the fuck out regardless of whether the AI was actually aligned or not.
And the justification for limiting the spread of generalized ASI was to prevent that from happening again, with the operational solution being either having AGIs locked to the last human level proven empirically safe, or only allowing narrowly superhuman AGI.
It's a world where Yudkowskian fighter jets dropping bombs on data centers isn't a joke, but they usually go for nukes and antimatter explosives.
I'll leave it to the reader to decide whether that's a bad thing, but at the very least I don't commit the sin of writing about technology worse than today's without any explanation at all. A Butlerian Jihad it isn't, though.