This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
Notes -
I don’t think that most doomers actually believe in a very high likelihood of doom. Their actions indicate that they don’t take the whole thing seriously.
If you actually believed that AI was an existential risk in the short- or medium-term, then you would be advocating for the government to seize control of OpenAI’s datacenters effective immediately, because that’s basically the only rational response. And yet almost none of them advocate for this. “If we don’t do it then someone will” and “but what about China?” are very lame arguments when the future of the entire species is on the line.
It’s very suspicious that the most commonly recommended course of action in response to AI risk is “give more funding to the people working on AI alignment, also me and my friends are the people working on AI alignment”.
For what it’s worth, I don’t think that capabilities will advance as fast as the hyper-optimists expect, but I also don’t think that p(doom) is 0, so I would be quite fine with the government seizing control of OpenAI (and all other relevant top-tier labs) and either carrying on the project in a highly sequestered environment or shutting it down completely.
They (as in LW-ish AI safety people / PauseAI) are directly advocating for the government to regulate OpenAI and prevent it from training more advanced models, which I think is close enough for this.
They DON'T want the Aschenbrenner plan where AI becomes hyper-militarized and hyper-securitized. They know the US government wants to sustain and increase any lead in AI because of its military and economic significance. They know China knows this. They don't want a race between the superpowers.
They want a single, globally dominant, centralized superintelligence body that they'd help run. It's naive and unrealistic, but that is what they want.
This one is valid. If this might kill us all, then we especially don't want China getting it first. I judge their likelihood of not screwing this up to be lower than ours, so we need to get it first and most, even if that is playing Russian roulette.
What makes the government less likely to create an AI apocalypse with the technology than OpenAI? And just claiming an argument is lame does not refute it.
The important part was this:
Obviously the safest thing would be shutting it down altogether, if the risk is really that great. But if that's not an option for some reason, then at least treat it like the Manhattan Project. Stop sharing methods and results, stop letting the public access the newest models. Minimizing attack surface is a pretty basic principle of good security.
The main LLM developers don't share methods or model weights. But they claim that if they didn't make enough money to train the best models, no one would care what they say.
There is an argument to be made that if you want to stop the development of a technology dead in its tracks, you let the government (or any immensely large organization with no competition) do the resource allocation for it.
If the US government had a monopoly on space travel by law, we wouldn't have satellite internet the way we do right now. And we might actually have lost access to space for non-military applications altogether.
Of course, this argument only holds insofar as the technology isn't core to those few areas of actual competition for the organization, namely war.
But I feel like doomers are merely trying to stop AI from escaping the control of the managerial class. Placing it in the hands of the most risk-averse of the managers and burdening it with law is a neat way of achieving that end and securing jobs as ethicists and controllers.
It's never really been about p(doom) so much as p(ingroup totally unable to influence the fate of humanity in the slightest going forward).
Yes, I think this is what it actually comes down to for a lot of people. The claim is that our current course of AI development will lead to the extinction of humanity. Ok, maybe we should just stop developing AI in that case... but then the counter is that no, that just means that China will get to ASI first and they'll use it to enslave us all. But hasn't the claim suddenly changed in that case? Surely if AI is an existential risk, then China developing ASI would also lead to the extinction of humanity, right? How come if we get to ASI first it's an existential risk, but if China gets there first, it "merely" installs them as the permanent rulers of the earth instead of wiping us all out?
I suppose there are non-zero values you could assign to p(doom) and p(AGI-is-merely-a-superweapon), with appropriate weights on those outcomes, that would make it all consistent. But I think the simpler explanation is that the doomers just don't seriously believe in the possibility of doom in the first place. Which is fine. If you just think that AI is going to be a powerful superweapon and you want to make sure that your tribe controls it then that's a reasonable set of beliefs. But you should be honest about that.
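For concreteness, here's a minimal sketch of the kind of bookkeeping that could make the two positions consistent. Every probability and utility below is a made-up placeholder for illustration, not anyone's actual estimate:

```python
# Toy expected-value comparison with made-up numbers (purely illustrative).
# Outcomes: "doom" (extinction, assumed equally likely whoever builds ASI),
# "superweapon" (ASI works and whoever controls it dominates), and "benign".

p_doom = 0.10                 # assumed chance ASI ends in extinction
p_superweapon = 0.30          # assumed chance ASI is "merely" a decisive superweapon
p_benign = 1.0 - p_doom - p_superweapon

u_doom = -100.0               # utility of extinction (arbitrary scale)
u_we_control = 0.0            # our side controls the superweapon
u_rival_controls = -20.0      # the rival controls the superweapon
u_benign = 10.0               # things turn out fine

def expected_utility(we_build_it: bool) -> float:
    """Expected utility of racing ahead vs. ceding the race to the rival."""
    superweapon_u = u_we_control if we_build_it else u_rival_controls
    return p_doom * u_doom + p_superweapon * superweapon_u + p_benign * u_benign

print("race ahead:   ", expected_utility(True))   # -4.0
print("cede to rival:", expected_utility(False))  # -10.0
```

With these arbitrary numbers, racing beats ceding despite a nonzero p(doom), which is the kind of weighting that would make the combination coherent; the point stands that doomers rarely spell the weighting out.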
Only minor quibble I have with your post is when you said "doomers are merely trying to stop AI from escaping the control of the managerial class". I think there are multiple subsets of "doomers". Some of them are as you describe, but some of them are actually just accelerationists who want to imagine themselves as the protagonist of a sci-fi movie (which is how you get doomers with the very odd combination of beliefs "AI will kill us all" and "we should do absolutely nothing whatsoever to impede the progress of current AI labs in any way, and in fact we should probably give them more money because they're also the people who are best equipped to save us from the very AI that they're developing!")
That's fair; this is an intellectual space rife with people who have complicated beliefs, so generalizing has to be merely instrumental.
That said, I think it is an accurate model of politically relevant doomerism. The revealed preference of Yuddites is to get paid by the establishment to make sure the tech doesn't rock the boat and respects the right moral fads. If they really wanted to just avoid doom at any cost, they'd be engaging in a lot more terrorism.
It's the same argument Linkola deploys against the NGO environmentalist movement: if you really think that the world is going to end if a given problem isn't solved, and you're not willing to discard bourgeois morality to solve the problem, then you are either a terrible person by your own standards, or actually value bourgeois morality more than you do solving the problem.
I’m coming to this discussion late, but this assumes that discarding bourgeois morality will be better at achieving your goals, when we see from BLM and Extinction Rebellion that domestic terrorism can have its own counterproductive backlash. How do we know they aren’t entirely willing to give up bourgeois morality, they just don’t see it as conducive to their cause?
It doesn't assume. Linkola actually builds the argument, convincingly in my opinion, that if radical change is required to solve the problem, as conceptualized by ecologists, that change is incompatible with democracy, equality and the like. Most people cannot be convinced peacefully to act against their objective interest in the name of ideas they do not share.
ER and BLM are exactly the sort of people criticized here. When your idea of eco-terror is vandalizing paintings to call out people doing nothing, you're not a terrorist, you're a clown.
Serious radical eco-terrorists would destroy infrastructure, kill politicians, coup countries, sabotage on a large scale and generally plot to make industrial society impossible.
In many ways, the Houthis and Covid are better at this than the NGOs who say they are doing it, and that's entirely by accident.
Good points. So why are the eco extremists risking jail time for mere clownery rather than bona fide terrorism on the level of the Houthis?
I feel like this is unfair. The hardcore Yuddites are not on the Trust & Safety teams at big LLM companies. However, I agree that there are tons of "AI safety" people who've chosen lucrative corporate jobs whose output feeds into the political-correctness machine. But at least they get to know what's going on that way and have some potential, if minor, influence. The alternative is... be a full-time protester with few resources, little clout, and no up-to-date knowledge?
The hardcore Yuddites were pissed at those teams using the word "Safety" for a category that included sometimes-reading-naughty-words risk as a central problem and existential risk as an afterthought at most. Some were pissed enough to rename their own philosophy from "AI Safety" to "AI Notkilleveryoneism" just because being stuck with a stupid-sounding neologism is a cheap price to pay to have a word that can't be so completely hijacked again.
The way this could work is that, if you believe that any ASI or even AGI has a high likelihood of leading to human extinction, then you want to stop everyone, including China, from developing it. But it's difficult to prevent them from doing so if their pre-AGI AI systems are better than our pre-AGI AI systems. Thus we must make sure our own pre-AGI AI is ahead of China's pre-AGI AI, to better allow us to prevent them from evolving their pre-AGI AI into actual AGI.
This is quite the needle to try to thread, though! And likely unstable, since China isn't the only powerful entity with the ability to develop AI, and so you'd need to keep evolving your pre-AGI AI to keep ahead of every other pre-AGI AI, which might be hard to do without actually turning your pre-AGI AI into actual AGI.
To be fair to doomers, this is a needle that was threaded by scientists before. The fact that there is a strong taboo against nuclear weapons today is for the most part the result of a deliberate conspiracy of scientists to make nuclear weapons special, to associate them with total war, and to get the world to think in terms of the probability of that total war so as to make their use irrational.
That reading of their use is not a foregone conclusion from the nature of the destruction they wreak, but rather a matter of policy.
And to apply the analogy to this, it did require both that those scientists actually shape nukes into a superweapon and that they denounce it and its uses utterly.
I see a lot of doomer advocacy as an attempt to manifest AI's own Operation Candor.
From my reading of Nina Tannenwald’s The Nuclear Taboo: The United States and the Non-Use of Nuclear Weapons Since 1945, it appears that while the scientists were generally opposed to widespread use of nukes, and while they did play a large part in raising public consciousness around the dangerous health effects of radiation, they ultimately had minimal influence on the development of the international nuclear taboo compared to domestic policy makers, Soviet propaganda efforts, and third world politics.
According to that book at least, far from trying to stigmatize nukes, the Eisenhower administration was very much trying to counter their stigmatization and present them as just another part of conventional warfare, due to the huge cost savings involved. Seen in this light, Operation Candor was more of a public relations campaign around justifying the administration’s spending on nukes rather than a way to stop nuclear proliferation.
So if history is any indication, the scientists can make all the noise they want, but it’s not going to matter unless it aligns with the self-interests of major institutional stakeholders.
That "non-military" is critical. Governments can develop technology when it suits their purposes, but those purposes are usually exactly what you don't want if you're afraid of AI.
This would be great, yes. To the extent I'm not advocating for it in a bigger way, that's because I'm not in the USA or a citizen there and because I'm not very good at politics.
This has less to do with nobody saying the sane things, and more to do with the people saying "throw money at me" tending to have more reach. There may also be some direct interference from Big Tech; I've heard that YouTube sinks videos calling for Big Tech to be destroyed, for instance.