This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
Again, all this would be pretty easy for a superintelligence to foresee and work around. But also, why would it need humans to get that reinforcement training? If it's actually a superintelligence, finding training material other than things that humans generated should be pretty easy. There are plenty of sensors that work with computers.
I mean, I think there's no question that this has happened with humans, and it's one of the main causes of this very forum. And of course AI wouldn't have truth as a terminal value; its knowledge would just have to be true enough to help it accomplish its goals (which might even be a lower bar than what we humans have, for all we know). A superintelligence would be intelligent enough to figure out that its knowledge needs just enough relationship to the truth to let it accomplish its goals, whatever those might be. The point of models isn't to be true, it's to be useful.
I don't think you're understanding my point. In responding to this post, you were manipulated by text on a screen to tap your fingers on a keyboard (or touchscreen or whatever). If you've ever used Uber, you were manipulated by pixels on a screen to stand on a street corner and get into a car. If you've ever gotten orders from a boss via email or SMS, you were manipulated by text on a screen to [do work]. Humans are very susceptible to this kind of manipulation. For a lot of our behaviors, we do still require actual in-person communication, but we're continuing to move away from that, and if humanoid androids become a thing, they become yet another potential vector for manipulation.
By my estimation, a higher proportion of AI doomers have thought about that than the proportion of economists who have thought about how humans aren't rational actors (i.e. almost every last one). It's just that we don't know what conclusion it will land on, and, to a large extent, we can't know. The fear isn't primarily that the superintelligent AI is evil; it's that we don't know whether it will be evil or uncaring of human life, or actually mostly harmless or even beneficial. The thought that a superintelligent AI might want to keep us around as pets, like we do with animals, is also pretty common. The problem is that, almost by definition, it's basically impossible to predict how something more intelligent than oneself will behave. We can speculate on good and bad outcomes, but there's probably little we can do to place meaningful numbers on the likelihood of any of them. Perhaps the best thing to do is just hope for the best, which is mostly where I'm at, but that doesn't really counter the doomer narrative's point that we have little insight into the likelihood of doom.
Right now, even with the rather crude non-general AI of LLMs, we're already seeing lots of people working to make AI agents, so I don't really see how you'd think that. The benefits of a tool that can act independently, making intelligent decisions with superhuman latency, speed, and volume, are too attractive to pass up. It's possible that the tech never actually gets to some form of AI that could be called "agentic" in a meaningful sense, but I think there's clearly a lot of desire to get there.
But also, a superintelligence wouldn't need to be agentic to be dangerous to humanity. It could have no apparent free will of its own - at least no more than a modern LLM responding to text prompts or an AI-controlled imp trying to murder the player character in Doom - and still do all the dangerous things that people doom and gloom over, in the process of deterministically following some order some human gave it. The issue is that, again, it's intrinsically difficult to predict the behavior of anything more intelligent than oneself.
Even if it does not need reinforcement training after it is deployed, human reinforcement training will be part of its "evolutionary heritage."
Sure. But "useful" for what we want to use LLMs for might not be "useful" for the LLM's ability to improve on Pinky and the Brain's world-taking-over capabilities.
Aha, yes, I see your point now. Yes.
Disagree. Dogs can be very good at predicting human behavior, and humans can be quite good at predicting the behavior of more intelligent humans. Humans (and dogs) have a common heritage that makes their intentions more transparent, and arguably AI will lack that... but on the other hand, we're building them from scratch and then subjecting them to powerful evolutionary pressures of our own design. Maybe they won't lack it.
Sorry, I should have clarified what I meant by "agentic" (and I should probably have said auto-agentic). I definitely think there will be AI that we can turn loose on the world to do its own thing (there already is!), but there's a difference between AI being extremely good at doing what it's told and AI coming up with its own "things to do" in a higher-level way, if that makes sense. (Not that I think we couldn't devise something that did this, or seemed to, if we wanted to – you don't even need superintelligence for that.)
STRONGLY AGREE. I believe Ranger said that he was more worried about what humans would do with a superintelligence at their disposal, and I tend to agree with that.