This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
I'm not a strong domain expert in microbiology, but it strikes me as a not particularly insurmountable challenge to design a pathogen that would kill 99.99% of humans. I think if you gave me maybe $10 million and a way to act without drawing adverse attention, I'd be able to pull it off. (With lots of time reading textbooks, or maybe an additional master's.)
The primary constraint would be access to a BSL-4 lab, because otherwise the miscreants would probably be the first to die to a prototype of the desired strain.
We already have gain-of-function research, and the bare minimum, serial passage, isn't that difficult. With expertise roughly equivalent to a Master's student, or a handful of them, it would be easy enough to gene-edit a virus, cribbing sections from a variety of pathogens till you get one you desire. I see no reason in principle why you couldn't optimize for contagiousness, a long incubation period and massive lethality.
This is easy for most nation-states, but thankfully most of them aren't omnicidal. Very difficult for lone actors, moderately difficult if they have access to scientific labs and domain expertise. I think we've been outright lucky in that no organized group has really tried.
Just because there isn't an existing pathogen that kills all humans (and there isn't, because we're alive and talking), doesn't mean it isn't possible.
I am not qualified to make technical statements about the ease of developing biological weapons but let me apply some outside-the-box thinking.
You are almost certainly wrong about how easy this is.
I am basing this on computer engineers who make statements like "an undergrad could build this in a weekend" and are wrong almost 100% of the time. Things always take longer than you think.
I don't know what specific obstacles you would face on your way to build a bioweapon, but I predict that you don't either. It's not the known unknowns that get you. It's the unknown unknowns.
Please don't try to prove me wrong, though :) And I agree that serious bioweapons are likely within the capacity of major states.
I have a reasonable plan in mind for what I'd do with the $10 million. I'd probably pivot away from my branch of medicine and ingratiate myself into an Infectious Disease department, or just sign up for a master's in biology.* The biggest hurdle would be the not-getting-caught part, but there's an awful lot of Legitimate Biology you can do that helps the cause, and ways to launder bad intent. Just look at the apologia for gain of function.
There's also certainly Knightian uncertainty involved, but there are bounds to how far you can go while pointing to unknown unknowns. I don't think I'd need $1 billion to do it, just as I'm confident it couldn't be done with $3.50 and a petri dish.
And whatever the actual cost and intellectual horsepower + domain knowledge is, it only tends downwards, and fast!
*If you can't beat disease, join them
You could create something like that; the hard part is spreading it. The reason that Covid was a hard nut to crack as far as stopping the spread was that it was pretty mild for most people. In fact, if it had come out in the 1970s, before we had the ability to track it and ID it and before we had the internet for remote work and online shopping, it would have probably gone unnoticed except that it was a “bad flu year” and there'd be a lot of elderly dead people. People would have felt fine to go to work or hang around other people, so it was easy to spread. But a virus that kills you doesn't spread as much, because dying people aren't inclined to go to work, school or shop at Walmart. People get the death virus, feel like crap, go to the doctor, get admitted to the hospital and die there. No one outside of that household gets it, because once you have it you're too sick to go anywhere. AIDS is an exception, but only because the incubation phase is so long: you can have and spread AIDS for years before getting sick.
The hard part is what I was alluding to, when I mentioned that during the gene-editing, you could copy and paste sections of genomes from unrelated pathogens. Nature already does this, but to a limited extent (bacteria can share DNA, viral replication often incorporates bits of the host or previous viral DNA still lurking there).
I expect that a competent actor could merge properties like:
- Can spread through aerosols (influenza or rhinoviruses)
- Avoids detection by the immune system, or has a minimal prodrome that looks like Generic Illness (early HIV infection)
- Massive lethality (HIV or a host of other diseases, not just restricted to viruses)
The design space pretty much contains anything that can code for proteins! There's no fundamental reason that a disease can't both be extremely lethal and have an incubation period long enough for it to become widespread. The only reason, as far as I can see, why we don't have this is that nobody has been insane (and resourceful) enough to try. Holding the former constant, the resource requirement is dropping precipitously by the year. Anyone can order a gene-editing kit off eBay, and the genetic codes of many pathogens are available online. The thing that remains expensive is a proper BSL-4 lab, to ensure time to tinker without releasing a half-baked product. But with AI assistance, the odds of early failure are dropping rapidly. You might be able to do a one-off print of the Perfect Pathogen and, as long as you're willing to die, spread it widely.
Sure, but everything you describe here are things that
This is a huge problem for ending life on Earth; living is 100% fatal but humans keep having kids. If you set an incubation period that is too long, then people can just live through it. I also think a long incubation period would dramatically raise the chances that your murdercritter mutates to a less harmful form.
Well, prion disease may be associated with spiroplasma bacterial infection, but it still hasn't killed all humans.
I think it's far from clear that AI mitigates the issue more than it currently exacerbates it. I'm in agreement that it's already technically possible, and that we're only preserved by the modest sanity of nations and a lack of truly motivated and resource-imbued bad actors.
In a world with ubiquitous AI surveillance, environmental monitoring and lock-downs of the kind of biological equipment that modern labs can currently buy without issue, it would clearly be harder to cook up a world-ending pathogen.
We don't live in that world.
We currently reside in one where LLMs already possess the requisite knowledge to aid a human bad actor in following through with such a plan. There are jailbroken models that would provide the necessary know-how. You could even phrase it as benign questioning; a lot of advanced biotechnology is inherently dual-use, and even GOF adherents claim it has benefits, though most would say those don't match the risks.
In a globalized world, a long incubation period could merely be a matter of months. A bad actor could book a dozen intercontinental flights and start a chain reaction. You're correct that over time, a pathogen tends to mutate towards being less lethal towards its hosts, but this does not strike me as happening quickly enough to make a difference in an engineered strain. The Bubonic Plague ended largely because all susceptible humans died and the remaining 2/3rds of the population had some degree of innate and then acquired immunity.
Look at HIV: it's been around for half a century, but it's no less lethal without medication than when it started out (as far as I'm aware).
Prions would not be the go-to. Too slow, both in terms of spread and time to kill. Good old viruses would be the first port of call.
It kinda seems like we do live in a world where any attempt to kill everyone with a deadly virus would involve using AI to try to find ways to develop a vaccine or other treatment of some kind.
They mutate so rapidly, though, and humans have survived even the worst of the worst (such as rabies).
Not that I'm saying you couldn't kill a lot of people with an infectious agent. You could kill a lot of people with good old-fashioned smallpox! I just think the vision of a world sterilized of human life is far-fetched.
It's ironic, though - the people who are most worried about unaligned AI are the people most likely to spell out, in future AI training content, plausible ways AI could kill everyone on Earth. Which means that, granting for the purposes of argument that unaligned agentic AI is a threat, doing so increases the risk of unaligned agentic AI attempting to use a viral murder weapon, regardless of whether or not that would actually be reliable or effective.
Sorry, side tangent. I don't take the RISKS of UNALIGNED AI nearly as seriously as most of the people on this board, but I do sort of hope, for the sake of hedging, that those people are considering implementing the unaligned AI deterrence plans I came up with after reflecting on it for 5 minutes, instead of (or at least along with) posting HERE IS HOW TO KILL EVERY SINGLE HUMAN BEING over and over again on the Internet :p
ETA: not trying to launch a personal attack on you (or anyone on the board), to be clear; AFAIK none of y'all wrote the step-by-step UNALIGNED AI TAKES OVER THE WORLD guide that I read somewhere a while back. (But if you DID, I'm not trying to start a beef, I just think it's ironic!)
The downside to this is having to hope that whatever mitigation is in place is robust and effective enough to make a difference by the time the outbreak is detected! The odds of this aren't necessarily terrible, but do you really want it to have come to that?
I expect that a misaligned AI competent enough to do this would be intelligent enough to come up with such an obvious plan, regardless of how often it was discussed in niche internet forums.
How would you stop it? The existing scrapes of internet text suffice. To censor it from the awareness of a model would require stripping out knowledge of loads of useful biology, as well as the obvious fact that diseases are a thing, and that they reliably kill people. Something that wants to kill people would find that connection as obvious as 2+2=4, even if you remove every mention of bioweapons from the training set. If it wasn't intelligent enough to do so, it was never a threat.
Everything I've said would be dead-simple, I haven't gone into any detail that a biology undergrad or a well-read nerd might not come up with. As far as I'm concerned, it's sufficient to demonstrate the plausibility of my arguments without empowering adversaries in any meaningful way. You won't catch me sharing a .txt file with the list of codons necessary for Super Anthrax to win an internet argument.
LOL no definitely I do not want it to come to that, I want AI (and other tools) to keep an eye on wastewater. But I'll take what I can get.
Well, I think it sort of depends on how the, uh, lack of alignment comes in. Sure, this is an obvious plan, but perhaps the dangerous part is giving AI the idea "unaligned AI will use viruses to destroy the world." People often fulfill the role others set for them in life; superintelligent AI might not be very different. And I've seen people concerned that AI will "goof up" and do something bad even if it's not self-aware; I'd hate for someone to say "OK Grok I want you to pretend to be an evil AI for me" and for Grok to order 500 vials of smallpox and mail them to terrorists or something.
The best way is to design AI that is intrinsically aligned (Asimov's positronic AIs that, most of the time, must follow the 3 laws). Barring that (or, I would say, in addition to it), humans need to be able to threaten to destroy an AI if it turns genocidal. This might not rule out AI "accidents", but as you say, you would expect an evil AI to understand self-preservation if it is sophisticated enough to do real damage. There are probably a lot of ways to do this, and it might be best if they aren't made completely public, so maybe they are already underway.
You are right that AIs will more heavily weight ideas that show up in their corpus. I understand this, and hence don't go into detail that would aid a bad actor more than a cursory Google search (I'm already stretching my own qualifications to do so).
You point out that AI Doomers (I'm not a Doomer in the strict sense, my p(doom) is well below 100%) are often the first to point out and plot how AIs might concretely be hostile. This is unavoidable in the service of getting those skeptical to take the ideas seriously! I don't know how much time you've spent browsing places like LessWrong, but I assure you that I have seen a dozen instances of people pointing out that they have inside knowledge that would accelerate AI development or cause other catastrophe, without revealing it. (And the majority of them were serious people with qualifications to match, not someone bullshitting about their awesome secret knowledge that they're too benevolent to divulge.)
Stopping a misaligned superintelligence is no easy task, nor is killing it. But in general, I agree that it would be best if we created them aligned in the first place, and to a degree, these aren't entirely useless efforts already. Existing RLHF and censors do better than nothing, though with open models like R1, it only takes minimal effort to sidestep the censorship.
Well, I assure you this isn't me, my expertise in this field is entirely as a user!
Yes. But I only see concerns about alignment, which really just kicks the can down the road: if we align AI so that even a smart person can't jailbreak it into making them a virus, how do we prevent that smart person from creating their own unaligned AI, and so on?
If people want to think this seriously, they also need to think about what deterrence looks like. Now, I don't spend much time on LessWrong, so maybe I have missed the conversation. But I kinda get the impression that chatter about FOOM has blinded people to possibilities there.
I believe the Term of Art would be a "pivotal act". The Good Guys, with the GPUs and guns, use their tame ASI to prevent anyone else from making another, potentially misaligned ASI.
The feasibility of this hinges strongly on whether you trust them, as well as the purportedly friendly ASI they're unleashing.
As @DaseindustriesLtd has said, this form of pivotal act might require things like nuking data centers or other hijinks that violate the sovereignty of nuclear powers. Some bite this bullet.
My really vague understanding is that long incubation times give the immune system more time to catch the infection early, which doesn't matter as much when it's very new and nobody has antibodies. So eventually everything that had a long one evolves to be shorter on its second pass through the population.
In theory long incubation + 100% mortality rate seems like it would take out a good chunk of the population in the first wave, but in practice people would just Madagascar through it.
Oh sure, but depending on the agent (particularly if it is viral, right?) if you're spreading it to billions of people you're introducing a lot of room for it to gain mutations that might make it less deadly. At least that would be my guess.
Definitely seems plausible. Hopefully instead of using AI to create MURDERVIRUSES people will use it to scan wastewater for signs of said MURDERVIRUSES.