This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
- Recruiting for a cause.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
The issue is that there are two distinct dangers in play, and to emphasize the differences I'll use a concrete example for the first danger instead of talking abstractly.
First danger: we replace judges with GPT17. There are real advantages. The averaging implicit in large-scale statistics makes GPT17 less flaky than human judges. GPT17 doesn't take bribes. But clever lawyers figure out how to bamboozle it, leading to extreme errors, different in kind from the errors that humans make. The necessary response is to unplug GPT17 and rehire human judges. This proves difficult because those who benefit from bamboozling GPT17 have gained wealth and power and want to preserve the flawed system because of the flaws. But GPT17 doesn't defend itself; the Artificial Intelligence side of the unplugging is easy.
Second danger: we build a superhuman intelligence whose only flaw is that it doesn't really grasp the "don't monkey paw us!" thing. It starts to accidentally monkey paw us. We pull the plug. But it has already arranged a backup power supply. Being genuinely superhuman, it easily outwits our attempts to turn it off, and we get turned into paper clips.
The conflict is that talking about the second danger tends to persuade people that GPT17 will be genuinely intelligent, and that in its role as RoboJudge it will not be making large, inhuman errors. This tendency is due to the emphasis on Artificial Intelligence being so intelligent that it outwits our attempts to unplug it.
I see the first danger as imminent. I see the second danger as real, but well over the horizon.
I base the previous paragraph on noticing the human reaction to Large Language Models. LLMs are slapping us in the face with the non-unitary nature of intelligence. They are beating us with clue-sticks labelled "Human-intelligence and LLM-intelligence are different" and we are just not getting the message.
Here is a bad take; you are invited to notice that it is seductive: LLMs learn to say what an ordinary person would say. Human researchers have created synthetic midwit normies. But that was never the goal of AI. We already know that humans are stupid. The point of AI was to create genuine intelligence which can then save us from ourselves. Midwit normies are the problem and creating additional synthetic ones makes the problem worse.
There is some truth in the previous paragraph, but LLMs are more fluent and more plausible than midwit normies. There is an obvious sense in which Artificial Intelligence has been achieved and is ready for prime time; roll on RoboJudge. But I claim that this is misleading because we are judging AI by human standards. Judging AI by human standards contains a hidden assumption: intelligence is unitary. We rely on our axiom that intelligence is unitary to justify taking the rules of thumb that we use for judging human intelligence and using them to judge LLMs.
Think about the law firm that got into trouble by asking an LLM to write its brief. The model did a plausible job, except that the cases it cited didn't exist. The LLM made up plausible citations, but was unaware of the existence of an external world and the need for the cases to have actually happened in that external world. A mistake, and a mistake beyond human comprehension. So we don't comprehend. We laugh it off. Or we call it a "hallucination". Anything to avoid recognizing the astonishing discovery that there are different forms of intelligence with wildly different failure modes.
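To make the failure mode concrete: the check the LLM was missing is a lookup against the external world. Here is a minimal sketch in Python of that kind of check, where `lookup_case` is a hypothetical stand-in for a real case-law database query and the citations are invented for illustration, not drawn from the actual incident:

```python
# Sketch of an external-world check for LLM-drafted briefs: flag any cited
# case that a (hypothetical) case-law lookup cannot find.

from typing import Callable, Iterable


def find_fabricated_citations(
    citations: Iterable[str],
    lookup_case: Callable[[str], bool],
) -> list[str]:
    """Return every citation the external database cannot confirm exists."""
    return [c for c in citations if not lookup_case(c)]


if __name__ == "__main__":
    # Illustrative stand-in for a real case-law database.
    known_cases = {"Marbury v. Madison, 5 U.S. 137 (1803)"}

    draft_citations = [
        "Marbury v. Madison, 5 U.S. 137 (1803)",
        "Smith v. Imaginary Corp., 123 F.4th 456 (2031)",  # plausible-looking but fictitious
    ]

    suspect = find_fabricated_citations(draft_citations, lambda c: c in known_cases)
    print("Citations needing human verification:", suspect)
```

The point of the sketch is only that the verification step lives outside the model: something other than the LLM has to ask whether the cited cases ever happened.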
All the AIs that we create in the foreseeable future will have alarming failure modes, which offer this consolation: we can use them to unplug the AI if it is misbehaving. An undefeatable AI is over the horizon.
The issue for the short term is that humans are refusing to see that intelligence is a heterogeneous concept, and we are going to have to learn new ways of assessing intelligence before we install RoboJudges. We are heading for disasters where we rely on AIs that go on to manifest new kinds of stupidity and make incomprehensible errors. Fretting over the second kind of danger focuses on intelligence and takes us away from starting to comprehend the new kinds of stupidity that are manifested by new kinds of intelligence.