Friday Fun Thread for June 23, 2023

Be advised: this thread is not for serious in-depth discussion of weighty topics (we have a link for that), this thread is not for anything Culture War related. This thread is for Fun. You got jokes? Share 'em. You got silly questions? Ask 'em.

I agree with all of the points you made, and I'll add one more: an aligned super-intelligent AGI completely obviates whatever use vampires may have had, something Blindsight perhaps unintentionally acknowledges when it's revealed that Sarasti was likely just the Captain's meat-puppet all along. In that case, there is no reason to keep vampires around. They're redundant; all they seem to offer is the potential to massacre a few hundred people before the superintelligent AGIs step in to clean up the mess. The fact that the AGIs don't do this when the vampires are running amok is yet another plot hole, but you've already mentioned that.

At any rate, Watts is a fundamentally misanthropic doomer. He legitimately believes that humanity is doomed because of climate change, and he has a visceral opposition to humans actually doing well for themselves because of technological advances.

He's quite the kook, for sure, and harbours a few very questionable positions that make me wince at times. It's part of the reason I don't visit his blog often, other than to check for the occasional fiblet.

A lot of the creators I like tend to share this quality, honestly.

Still, he's one of my favorite authors, and if you haven't already, read the Sunflower novels and short stories, they're pretty great.

I have read almost all of the stories in the Sunflower Cycle, with the exception of Hotshot. The Island and the first half of The Freeze-Frame Revolution are among my favourite pieces of writing he's done, especially the oddly mournful part of FFR where Sunday describes an early memory with the Chimp. Unfortunately, I think FFR takes a dive in quality later on: the protagonist's conflicted loyalties in the first half made for a much more compelling narrative than the more standard, clear-cut "revolt" against the Chimp in the latter half of the book. The ending also feels incomplete and lacks a sense of climax. That's somewhat more forgivable given that it's only one instalment in an episodic story, but if you're writing a novella with a downer ending or a cliffhanger, it needs to feel more deliberate and foreshadowed than this one did.

Oh, and there's also the as-yet-unfinished "Hitchhiker". That one has a very disturbing setup, and if the quality of that story stays at this level it might end up being my favourite Sunflower story yet.

The fact that the AGIs don't do this when the vampires are running amok is yet another plot hole, but you've already mentioned that.

Why should they? Maybe they prefer Vampires; they're probably better pets than humans anyway. Humans by definition can't understand what a super-AGI wants, and moreover, according to Watts, it isn't necessary for it to want anything at all: consciousness is just a random glitch, not especially useful for intelligence. So why are we assuming AGIs would want to protect humans? Maybe they'd want to wipe humans out, using Vampires. Or maintain a perpetual human-Vampire war where neither side wins but both sides are kept busy enough to stay under control, steered in whatever direction stimulates their development (remember the Shadows in Babylon 5?).

Whether or not they have consciousness is irrelevant to whether they act to achieve a certain goal. It is possible for an AGI to be non-conscious and still agentic, the same way the Scramblers are.

Humans design the cognitive architecture of AGIs, and I'd imagine we would (try to) program AIs to take account of our interests. While misalignment is certainly possible, nothing in the world of Blindopraxia indicates that the AGIs being developed routinely come out misaligned: the Captain, for example, seemed very well aligned with the mission it was tasked with, and I can't recall any evidence in these books of AGIs having a negative influence on the larger world (if they are, they pose as much of a danger to humanity as Rorschach and Portia).

I'd imagine we would (try to) program AIs to take account of our interests

We of course would, but who says we'd succeed? Who says we'd even know whether we had succeeded? We have a pretty poor understanding of what comes out of where even in current models; if that continues, future AIs will be a complete black box to us, and we'll pretty much have to rely on asking them "do you want to kill all humans today?" and trusting the answer.

After all, in the same setting, people tried to control Vampires and failed. They tried to control Bicamerals and only sort of succeeded, and only with the Vampires' help. Why would we assume humans are actually in control of the AIs, rather than that the AIs just let them think they are, because humans tend to react violently to the prospect of losing control, and who wants that trouble?

if they are, they pose as much of a danger to humanity as Rorschach and Portia

Or maybe more, because they are smart enough not to reveal their intentions while there's still a chance humans could do anything about it.