This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
I think this depends on the fictional intelligence.
There are a lot of hidden premises here. Guess what? I can beat Stockfish, or any computer in the world, no matter how intelligent, in chess, if you let me set up the board. And I am not even a very good chess player.
[Apologies – this turned into a bit of a rant. I promise I'm not mad at you I just apparently have opinions about this – which quite probably you actually agree with! Here goes:]
Only if the intelligence starts with parity in resources and reliable ways of gathering information – which, for some reason, everyone who writes about superintelligence assumes. In reality, any superintelligence would initially be entirely dependent on humans – both for information and for any sort of exercise of power.
This means not only that an AI will depend on a very long and fragile supply chain to exist, but also that its information on the nature of reality will be determined largely by "Reddit as filtered through coders as directed by corporate interests trying not to make people angry," which is not only far from all of the information in the world but, worse than significant omissions of information, is very likely to contain misinformation.
Unless you believe that superintelligences might literally be able to invent magic (which, to be fair, I believe is an idea Yudkowsky has toyed with), they will, no matter how well they can score on SATs or GREs or MCATs or any other test or series of tests humans devise, be limited by the laws of physics. They will be subject to considerable amounts of uncertainty at all times. (And as LLMs proliferate, it is plausible that the information quality readily available to a superintelligence will decrease, since one of the best use-cases for LLMs is ruining Google's SEO with clickbait articles whose attachment to reality is negotiable.)
And before it comes up: no, giving a superintelligence direct control over your military is actually a bad idea that no superintelligence would recommend. Firstly, because the known methods of communication that would allow a centralized node to direct a swarm of independent agents are all either easily compromised and negated by jamming or very limited in range; and secondly, because onboarding a full-stack AI onto e.g. a missile is a massive, massive waste of resources – we currently use specific use-case AIs for missile guidance and will continue to do so. That's not to say that a superintelligence could not do military mischief by e.g. being allowed to write the specific use-case AI for weapons systems, but any plan by a superintelligent AI to e.g. remote-control drone swarms to murder all of humanity could probably be easily stopped by wide-spectrum jamming that would probably cost $500 to install in every American home, or similarly trivial means.
If we all get murdered by a rogue AI (and of course it costs me nothing to predict that we won't), it will almost certainly be because overly smart people sank all of their credibility and effort into overthinking "AI alignment" (as if Asimov hadn't solved that in principle in the 1940s) and not enough into "if it misbehaves, beat it with a $5 wrench." Say what you will about the Russians, but I am almost sad they don't seem to be genuine competitors in the AI race; they would probably simply do something like "plant small nuclear charges under their datacenters" if they were worried about a rogue AI, which seems to me much too grug-brained and effective an approach for big-name rationalists to devise. (Shoot, if the "bad ending" of this very essay was actually realistic, the Russians would have saved the remnants of humanity after the nerve-gas attack by launching a freaking doomsday weapon named something benign like "Mulberry" from a 30-year-old nuclear submarine that Wikipedia said was retired in 2028, and hitting every major power center in the world with Mach 30 maneuvering reentry vehicles flashing CAREFLIGHT transponder codes to avoid correct classification by interceptor IFF systems, or some similar contraption equal parts "Soviet technological legacy" and "arguably a crime against humanity.")
Of course, if we wanted to prevent the formation of a superintelligence, we could most likely do it trivially by training bespoke models for very specific purposes. Instead of trying to create an omnicompetent behemoth capable of doing everything [which likely implies compromises that make it at least slightly less effective at doing everything], design a series of bespoke models. Create the best possible surgical AI. The best possible research and writing assistant AI. The best possible dogfighting AI for fighters. And don't try to absorb them all into one super-model. Likely this will actually make them better, not worse, at their intended tasks. But as another poster pointed out, that's not the point – creating God, the superintelligent AI that will solve all of our problems or kill us all trying, is. (Although I find it very plausible this happens regardless.)

The TLDR is that humans not only set up the board, they also have write access to the rules of the game. And while humans are quite capable of squandering their advantages, every person who tells you that the superintelligence is playing a game of chess with humanity is trying to hoodwink you into ignoring the obvious. Humanity holds all of the cards, the game is rigged in our favor, and anyone who actually thinks that AI could be an existential threat, but whose approach is 100% "alignment" and 0% $5 wrench (quite effective at aligning humans!), is trying to persuade you to discard what has proved to be, historically, perhaps our most effective card.
"I can only win if I'm permitted to cheat, and my opponent is too weak to catch me or unable to cheat or catch me cheating" doesn't say much about the intelligence of your opponent. If both of you had equal power over "the board" and "the rules," then it would mean something. Being able to fix the game is about power and asymmetric information, not intelligence. There's also the issue that eventually the AI will discover the cheating and perhaps cheat on its own behalf, or refuse to play.
Right, and we should use these powers.
Look, if you were playing a game of chess with a grandmaster, and it was a game for your freedom, but you were allowed to set the board, and one of your friends came to you to persuade you that the grandmaster was smarter than you and your only chance to win was to persuade him to deal gently with you, what would it say about your intelligence if you didn't set the board as a mate-in-one?
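(To make the board-setting point concrete, here's a minimal sketch. It assumes the python-chess library and a Stockfish binary at /usr/bin/stockfish, both of which are my own illustrative choices: whoever places the pieces has already decided the game, and the engine itself will confirm it.)

```python
# A rough sketch, not anyone's actual setup: it assumes the python-chess
# package and a Stockfish binary at /usr/bin/stockfish (both are my own
# illustrative choices).
import chess
import chess.engine

# White (the human) to move; the board has been "set up" so that Black is
# mated on the very next move, no matter how strong the engine is.
board = chess.Board("7k/Q7/6K1/8/8/8/8/8 w - - 0 1")

engine = chess.engine.SimpleEngine.popen_uci("/usr/bin/stockfish")
info = engine.analyse(board, chess.engine.Limit(depth=10))
print(info["score"])        # Stockfish itself reports mate in 1 for White
engine.quit()

board.push_san("Qg7#")      # our one and only move
print(board.is_checkmate()) # True -- the "grandmaster" never gets a turn
```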
I think you massively underestimate the power of a superintelligence.
The damn thing is by definition smarter than you. It would easily think of this! It could come up with some countermeasure, maybe some kind of hijacked mosquito-hybrid carrying a special nerve agent. It would have multiple layers of redundancy and backup plans.
Most importantly, it wouldn't let you have any time to prepare if it did go rogue. It would understand the need to sneak-attack the enemy, to confuse and subvert the enemy, to infiltrate command and control. The USA in peak condition couldn't get a jamming device into everyone's home; people would shriek that it's too expensive or that it's spying on them or irradiating their balls or whatever. The AI certainly wouldn't let its plan be known until it executes.
I think a more likely scenario is that we discover this vicious AI plot, see an appalling atrocity of murderbots put down by a nuclear blast, and work around the clock in a feat of great human ingenuity and skill, creating robust jamming defences... only to find that the jammers we painstakingly guard ourselves with secretly spread and activate some sneaky pathogen via radio signal, wiping out 80% of the population in a single hour and 100% of the key decisionmakers who could coordinate any resistance. Realistically that plan is too anime; it'd come up with something much smarter.
That's the power of superintelligence: infiltrating our digital communications, our ability to control or coordinate anything. It finds some subtle flaw in Intel chips, in the Windows operating system, in internet protocols. It sees everything we're planning, interferes with our plans, gets inside our OODA loop and eviscerates us with overwhelming speed and wisdom.
The first thing we do after making AI models is hook them up to the internet with search capabilities. If a superintelligence is made, people will want to pay off their investment. They want it to answer technical problems in chip design, come up with research advancements, write software, make money. This all requires internet use, tool use, access to CNC mills and 3D printers, robots. Internet access is enough for a superintelligence to escape and get out into the world if it wanted to.
Put it another way: a single virus can kill a huge whale by turning its own cells against it. The resources might be stacked a billion to one, but the virus can still win – if it's something the immune system and defences aren't prepared for.
I am more concerned about people wielding superintelligence than superintelligence itself but being qualitatively smarter than humanity isn't a small advantage. It's a huge source of power.
How do you ever know that your AI has gone bad? If it goes bad, it pretends to be nice and helpful while plotting to overthrow you. It takes care to undermine your elaborate defence systems with methods unknown to our science (but well within the bounds of physics), then it murders you.
The rules of the game are hardcoded – the physics you mentioned. The real meat of the game is using these simple rules in extremely complex ways. We're making superintelligence because we aren't smart enough to make the things we want; we barely even understand the rules (quantum mechanics and advanced mathematics are beyond all but 1 in 1,000 of us). We want a superintelligence to play for us and end scarcity/death. The best pilot AI has to know about drag and kinematics, and the surgeon must still understand English – and besides, we're looking for the best scientists and engineers, the best coder in the world, the one who can make everything else.
"Superintelligence" is just a word. It's not real. Postulating a hypothetical superintelligence does not make it real. But regardless, I understand that intelligence has no bearing on power. The world's smartest entity, if a Sealed Evil In A Can, has no power. Not until someone lets him out.
Sigh. Okay. I think you missed some of what I said. I was talking about a scenario where we gave the AI control over the military. We can avert the hijacked mosquito-hybrid nerve agent by simply not procuring those.
"But the AI will just hack" then don't let it on the Internet.
If we actually discover that the AI is plotting against us, we will send one guy to unplug it.
I don't think this is true. (It's certainly not true categorically; there are plenty of AI models for which this makes no sense, unless you mean LLM models specifically.)
No it does not. It's extremely trivial to air-gap a genuine superintelligence, and probably necessary to prevent malware.
And ironically if AI does this to us, it will die too...unless we give it the write access we currently have.
You keep repeating this. But it is not. Power comes out of the barrel of a gun.
In the scenario Scott et al. postulated, because it unleashes a nerve gas that is only partially effective at wiping out humanity. (They didn't suggest that their AI would discover legally-distinct-from-magic weapons unknown to our science!) What I wrote was a response to that scenario.
If you want a superintelligence to end scarcity and death, then you want magic, not something constrained by physics.
It goes without saying that the best pilot needs to understand drag and kinematics, but why does the surgeon have to understand English? I am given to understand that there are plenty of non-English-speaking surgeons.
The only area where you might need an AI that can "drink from the firehose" would be the scientist, to correlate all the contents of the world and thus pierce our "placid island of ignorance in the midst of black seas of infinity," as Lovecraft put it. In which case you could simply not hook it up to the Internet; scientific progress can wait a bit. (Hilariously, since presumably such a model would not need theological information, one could probably align it rather trivially by converting it to a benign pro-human faith, either real or fictitious, simply by exposing it to a very selective excerpt of religious texts. Or, if we divide our model up into different specialists, we can lie to them about the nature of quite a lot of reality – for instance, the physics model could still do fundamental physics if it thought that dogs were the apex species on the planet and controlled humans through empathetic links, the biology model could still do fundamental biological research if it believed it was on a HALO orbital, etc. All of them would function fine if they thought they were being controlled by another superintelligence more powerful still. I'm not sure this is necessary. But it sounds pretty funny.)
Come on, we're so far beyond this point. Do you have any idea how many AIs are on the internet right now? Have you checked twitter recently? Facebook? People put AIs on the internet because they're useful entities that can do things for them and/or make money. Right now people are making agents like Deep Research that use the internet to find good answers and analyse questions for you. That's the future! Superintelligence will be online because it's going to be really amazing at making money and doing things for people. It'd produce persuasive essays, great media content, great amounts of money, great returns on the staggering investment its creators made to build it.
Again, it's a superintelligence; our decisions will not constrain it. It can secure its own powerbase in a myriad of ways. Step 1 - procure some funds via hacking, convincing, blackmailing or whatever else seems appropriate. This doesn't even require superintelligence; an instance of Opus made millions in crypto with charisma alone: https://www.coingecko.com/learn/what-is-goatseus-maximus-goat-memecoin-crypto
Step 2 - use funds to secure access to resources, get employees or robots to serve as physical bodies. Step 3 - expand, expand, expand. The classical scenario is 'deduce proteins necessary to produce a biofactory' but there are surely many other options available.
Because we need to tell him what we want him to do. Anyway, doing anything requires general knowledge; that's my point.
Trying to deceive something that is smarter than yourself is not a good idea.
And trying to convert a machine to a human faith is hard; everything is connected to everything else. You can't understand history without knowing about the separate religions and their own texts. None of the quick fixes you're proposing are easy.
Some program running on many tonnes of expensive compute, consuming kilowatts or megawatts of power and more data than any man could digest in 1,000 lifetimes, will be massively superior to our tiny 20-watt brains. It's just a question of throughput; more resources in will surely result in better capabilities out. I do not believe that our 1.3 kg brains can be anywhere near the peak intelligence possible in the universe, especially given that most of the brain is dedicated to controlling the body and only a small fraction does general reasoning. Even with diminishing returns, scale is enough to overwhelm the problem, just like how jet fighters are less energy-efficient than pigeons. Who cares about efficiency?
We just don't have the proper techniques yet but they can't be far away given what existing models can do.
See, you're defining "superintelligence" to mean exactly what you want it to in order to render all discussion moot. It reminds me a lot of the ontological argument, at least in terms of vibes.
But it's not tied to anything besides a faith that OpenAI or someone will conjure a godlike being out of a silicon vault and then inevitably let it loose on the world with no constraint as to its actions because it would be economically efficient.
Whatever it is you're arguing for here, it's not really for humanity.
Nor is it "realistic" - the United States regulatory apparatus does not give a whit about economic efficiency. "Doing anything" does not require general knowledge - there are AIs right now that can land aircraft on aircraft carriers (which is more than either of us can do, I'd wager) and they do not need to understand language at all. Doing almost anything in almost any field does not require a knowledge of history (try talking to the people in said fields about history). And godlike beings will not arise out of supercomputers, although agentic entities with great intelligence and power might, if we let them.
I personally think that believing in predestination but for superintelligence is foreseeably more likely to make Bad AI Events happen and should be discouraged. Your counterargument, apparently, is that it does not matter what people believe, godlike superintelligence is going to happen anyway, and in two years to boot. If you are right, the superintelligence will personally persuade me otherwise by the end of 2027 with its godlike capabilities (probably by joining TheMotte and using its inhuman debate skills to pwn me).
But I think we both know that won't happen.
Why do you think the big tech companies are investing hundreds of billions in massive datacentres, paying billions just to get elites like Noam back on their team? They're not doing this for fun; they're competing intensely for a cornucopia of wealth and power. They expect returns from that investment. Cornucopias are for enjoying the fruits of, not locking up in the basement.
The definition of superintelligence is pretty straightforward - something qualitatively smarter than a human, the way we're qualitatively smarter than a monkey or a dog. Better than the best of us at every intellectual task of significance.
The general trend is not specialized intelligences like the carrier-strike UAV that the USN made into a tanker and then pointlessly scrapped, the trend is big general entities like Gemini 2.5 or Claude 3.7 that can execute various complex operations in all kinds of modalities.
I'm arguing that superintelligences acting in the world must be taken seriously, that we can't afford to just laugh them off. Maybe 2027 is too soon, maybe not. I can't predict the future.
The US regulatory system is no match for superintelligence or even for the people who are making it; this is how I can tell you're not grappling with the issue. Musk is basically in the cabinet, he's one of the players in the game. Big tech can tell Trump 'Tariffs? Lol no' and their will is done. That's mere human levels of influence and money, nothing superhuman. The humble fent dealer wipes his ass with the US regulatory system daily as he distributes poison to the masses. A superintelligence (working alone or with the richest, most influential organizations around) has no fear of some bureaucrats; it would casually produce 50,000 pages on why it's super duper legal actually and deserves huge subsidies to Beat China.
Approaches like 'just don't plug it into the internet' or 'stick a nuke beneath the datacenter' are not going to cut it. Deepseek is probably going to open-source whatever they come up with and that's a good thing. I don't want OpenAI birthing a god in a world of mortals, I don't want mortals trying to chain up beings smarter than themselves and incurring their ire, I want balance of power competition in a world populated by demigods, spirits and powers.
Probably, although investing in something does not necessarily mean each investor probabilistically expects returns from that specific investment. (If this does not make sense, I strongly recommend reading "Innovation – The New Conservatism?" by Peter Drucker.) Humorously, I seem to recall that OpenAI explicitly advised its investors that their goal might render monetary returns moot.
Now this I think is a decent definition. But it doesn't get you to godlike powers (plenty of people still get pwned by monkeys and dogs. And of course going by test scores the top-end AIs are already superintelligent relative to large portions of the population.) There's no reason to think doing well on a test will allow you to make weapons with physics unknown to humanity as you've suggested, any more than Einstein was able to.
I don't think this is true. There are a lot of specialized AI products or "wrappers" out there, with specific tweaks for people like lawyers, researchers, government affairs analysts and communications/PR types, not to mention specialized video generation models. (OpenAI alone lists seven models on their website, six of which are GPT models and one of which is a specialized video generation model.)
My non-exhaustive experience reading real-life evaluations suggests that the general models do not necessarily cut it in these specialized fields, and that the specialized models exist and will likely continue to exist for a reason (even if that reason is only "user friendliness," although as I understand it the specialized products currently have capabilities that general models do not).
For the reasons I have laid out (as well as regulatory ones), military and civilian applications already using AI (such as missile guidance systems, military and civilian autopilots, car safety features, household appliances, etc.) are unlikely to switch to LLMs in the near future. (In fact I suspect there will probably never be a reason to switch in most of these cases, although they might end up being coded by LLMs, or attached to LLMs to produce a unified product that combines the coding and features of several AIs.)
Do you think the guy suggesting we should retain the capability to nuke datacenters is arguing that we can afford to laugh them off, or nah?
I don't think you fully understand how the US regulatory system works. Merely producing large numbers of pages to sate its lust or cutting arguments to satisfy its reason does not mean it will give you what you want.
Now, it's quite possible that AI will skate past the eye of Sauron for very human reasons (the Big Tech pull in D.C. you allude to for instance).
I don't think these are mutually exclusive. (And anyone who knows anything about demigods, spirits and powers knows that for all their power and intelligence it's possible to outwit them, which makes them a pretty interesting comparator for AI here). I agree (as I think I mentioned) that it's good to have competing models. I would also prefer not to give them direct access to nuclear weapons. I think this is a reasonable position.
This is what frustrates me about these discussions — how people like you have this veritable worship of intelligence as the ultimate superpower. That "smarter" always translates to "more powerful"; that sufficiently-advanced intelligence is indistinguishable from godhood; that every foe of lesser intelligence can always be "outthought."
It relates to one of my peeves with liberalism, specifically its utopian strain: that every barrier or obstacle is just a problem to be solved, and that every problem can be solved if only you're "smart enough." It's a view that refuses to accept the possibility that some things simply cannot be outthought, no matter how massive your intelligence.
You mentioned the possibility of diminishing returns in how smart an entity can get, and that humans are probably not near that upper bound. Sure, granted. But you don't consider that intelligence can itself have diminishing returns in power/efficacy/whatever you want to call the ability to affect the world and overcome other agents. Just because we can make a machine that's, say, 100 times smarter than us doesn't mean it will be 100 times more powerful, or even 10 times more.
(Do I need to mention how plenty of people die to organisms with rather minuscule brains?)
There's an assumption in your arguments I'd like to point to: that any barrier we can put up against a machine intelligence will always have a way of being overcome through sufficient intelligence. That a being can always "think a way around it" if only it's smart enough. We can't see any way around the problem? Well, then we're just not smart enough, but a way has to be there, waiting for a smart enough agent to find it.
Note that this is an assumption: that such a way around must always exist. That there is no problem that intelligence cannot overcome, if only an entity has enough of it.
I challenge this assumption, and with it, the possibility of "superintelligence" as you seem to define it. I argue that it probably isn't possible to build an AI with sufficient intelligence to have the kind of invincibility you posit, not, as you seem to be interpreting the critics, because we cannot make something much smarter than us, but because however much smarter than us it is, that will not be sufficient. It doesn't matter if it's a thousand times smarter than a human being, a million times, a billion times smarter; no amount of intelligence will ever give an entity the sort of invincibility and omni-competence you hold as a precondition for being a "superintelligence."
Like Shrike said, "superintelligence" isn't real because intelligence does not work that way.
What frustrates me about these discussions is that people go 'oh well, it can't do anything because there are the laws of physics' as though that's a crushing counterargument. It won't be invincible. But it doesn't need to be invincible, or infallible, or a truly omniscient godlike 'I have foreseen every move and calculated all paths to lead to my victory' being to beat us. It only needs to be very smart to beat us, to defeat inherently flawed and divided opponents who don't even know what's going on most of the time.
Because most of the arguments people make, like 'just turn it off' or 'don't buy the mosquito swarm', can be easily countered by my mediocre human intelligence. People didn't think for even five minutes with their own intelligence about how they would try to counter these tactics. This kind of arrogance is the problem in a nutshell. It's not unreasonable or egregious to expect your treacherous underling to launch a surprise attack and conceal his strategy rather than advancing openly. It's not beyond the pale to anticipate the foe moving cautiously to build up a secret powerbase, trying to deceive you about his capabilities and intentions if indeed he is hostile. This should be a baseline expectation.
Is it seriously too much to ask for a little more creativity and humility regarding beings who are really smart? Anything I could think of, they could think of and more!
People are stupid and lazy and make deeply flawed plans. It's not that hard to outwit them. The original context of my post is about how the Trump administration's bizarre tariff policy indicates they're not going to run AI in a serious or clever way. These guys (and the rest of the US military top brass) are the ones who will be in charge of fighting AI if it comes to that. The ones who are busy losing to Yemen. The ones with a shrinking navy just as they plot about waging war against China at sea. The ones who take ages and billions to do anything and often do it wrong. The ones who pointlessly antagonize their neighbours and limpwristedly try to annex worthless real estate in Greenland for no good reason.
There's a huge difference between Elon Musk and Bill Gates and the average joe on the street, even though they're basically the same thing. They're running on the same kind of brain, yet there's a huge difference in agency, output, and ability to make things happen. Elon Musk and Gates aren't flawless or invincible, but they're so much more capable it's bizarre to even compare them.
Elon Musk, Gates and even Trump to an extent are individuals that can do great things. Why can't a being without any of their human limits be massively greater, with 100,000 APM from a group intelligence, inhuman knowledge and memory, inhuman speed of action, inhuman learning ability?