This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
Notes - Zeno's AGI.
For a long time, people considered the Turing Test the gold standard for AI. Later, better benchmarks were developed, but for most laypeople with a passing familiarity with AI, the Turing Test meant something. And so it was a surprise that when LLMs flew past the Turing Test in 2022 or 2023, there weren't trumpets and parades. It just sort of happened, and people moved on.
I wonder if the same will happen with AGI. To quote hype-man Sam Altman:
Okay, actually he said that about GPT-4.5, but you get the point. The last 6 months have seen monumental improvements in LLMs, with DeepSeek making them much more efficient and xAI proving that the scaling hypothesis still has room to run.
Given time, AI has reliably been able to beat any benchmark we throw at it (remember the Winograd schema?). I think if, 10 years ago, someone had said that AI could solve PhD-level math problems, we'd have said AGI had already arrived. But it hasn't. So what ungameable benchmarks remain?
- AGI should lead to massive increases in GDP. We haven't seen productivity even budge upwards despite dumping trillions into AI. Will this change? When?
- AI discoveries with minimal human intervention. If a genius-level human had the breadth of knowledge that LLMs do, they would no doubt make all sorts of novel connections. To date, no AI has done so.
What stands in the way?
It seems like context windows might be the answer. For example, suppose we wanted to make novel discoveries by prompting an AI. We might prompt a chain-of-reasoning AI to try to draw connections between disparate fields and then stop when it finds something novel. But with current technology, it would fill up the context window almost immediately and then start to go off the rails.
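To make that concrete, here's roughly the loop I have in mind (a toy sketch only, where `ask()` is a hypothetical stand-in for whatever chat-completion API you like). The whole transcript gets fed back in every turn, so it blows through the context window after a handful of iterations and the model starts drifting:

```python
# Toy sketch only: ask() stands in for a single call to whatever LLM API you use.
def ask(messages):
    """Hypothetical wrapper: send the message list to an LLM, return its reply text."""
    raise NotImplementedError

def hunt_for_novel_connections(field_a, field_b, max_turns=50):
    messages = [{
        "role": "user",
        "content": (
            f"Look for a novel, checkable connection between {field_a} and {field_b}. "
            "Reason step by step, and write NOVEL: <idea> when you find one."
        ),
    }]
    for _ in range(max_turns):
        reply = ask(messages)
        if "NOVEL:" in reply:
            return reply
        # The entire reasoning trace so far is appended and re-sent each turn,
        # so the context grows without bound and the model goes off the rails.
        messages.append({"role": "assistant", "content": reply})
        messages.append({"role": "user", "content": "Keep going."})
    return None
```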
We stand at a moment in history where AI advances at a remarkable pace and yet is only marginally useful, basically just a better Google/Stack Overflow. It is as smart as a genius-level human, far more knowledgeable, and yet also remarkably stupid in unpredictable ways.
Are we just one more advance away from AGI? It's starting to feel like it. But I also wouldn't be surprised if life in 2030 is much the same as it is in 2025.
There are some tests on which it looks like this, but I have yet to see an LLM that can count. And no, it's not just the tokeniser; they can't count normal words in text either. Errors of 50% are totally normal around counts of 100 or so.
Obviously this is not something "computers will never do": counting is one of the first things computers have done, and it wouldn't be difficult to hardcode it in. But the fact that it's not there automatically indicates that other important things are missing.
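For what it's worth, the "hardcode it in" version really is trivial; a minimal sketch of an exact whole-word counter in Python:

```python
# Exact word counting is a one-liner; the point is that nothing like this
# emerges from the model automatically.
def count_words(text: str) -> int:
    return len(text.split())

print(count_words("the quick brown fox jumps over the lazy dog"))  # prints 9
```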
Did they? Did it?
There was a big flurry of development 3 - 4 years ago enabled by Nvidia's (then new) multimodal framework and novel tokenization methods, but my impression is that those early breakthroughs have since given way to increasingly high compute times for increasingly marginal gains.
As for the path forward, while LLMs and other generative models have their uses, I find it unlikely that they represent a viable path towards "True AGI": despite the claims of grifters and hype-men like Altman, they remain non-agentic, nor are they "reasoning" or "inferring" in the sense that most people use those words. The reason of an LLM is more like the verbal/intellectual equivalent of a space-filling curve. The more iterations of the Hilbert curve you run, the more of the square you color in, but you're still constraining yourself to points on a line (or tokens in the training data). Once you understand this, the apparent stupidity of LLMs becomes both less remarkable and far more predictable.
If we do see "True AGI" in the next 5-10 years, I predict that it will come out of what will seem to a lot of users here like left field, but will leave the algorithm engineers all nodding to each other: e.g. a breakthrough in digital signal processing leads to self-driving cars picking their route for the scenery.
No AI has ever passed a Turing Test. Is AI very impressive, and can it do a lot of things that people used to imagine it would only be able to do once it became generally intelligent? Yes. But has anyone actually conducted a test where they were unable to distinguish between an AI and a human being? No. This never happened, and therefore the Turing Test hasn't been passed.
The entire point of the Turing Test is that, rather than try to define general intelligence as the ability to do specific things that we can test for, we define it in such a way that passing it means we know the AI can do any cognitive task that a human can do, whatever that might be, without trying to guess ahead of time what that is. We don't try to guess the most difficult things for AI to do and say it has general intelligence when it can do them. Otherwise we end up making the mistake that you and many others are making, where we have AI that can do very well in coding competitions but cannot do the job of a low-level programmer, or that can get high marks on a test measuring Ph.D.-level knowledge of some subject but can't do an entry-level job in that field.
Humans have always been and continue to be really bad at guessing what will be easy for computers to do and what will be hard, and we're discovering that the hardest things for computers to do are not what we thought, so the Turing Test must remain defined as a test in which the computer passes if it is indistinguishable from a human being. That is not the same as sounding like a human being or doing a lot of things only humans could do until recently.
It is still trivial to distinguish an AI from a human being: it has a very distinctive writing style that it struggles to deviate from, it cannot answer a lot of very simple questions that most intelligent people can answer, and it refuses to do a lot of things, like using racial slurs, giving instructions for dangerous actions, and answering questions with politically incorrect answers.
We shouldn't be too surprised that AI can do well on these benchmarks but not lead to massive productivity increases, because doing well on benchmarks isn't AGI. There aren't very many jobs that consist of completing benchmarks.
AI is still pretty dumb in some sense. The latest estimates I've heard of the number of neurons these models have are on the order of 2 trillion. That would make it about as smart as a fox: smarter than a cat but dumber than a dog. If a company said they were investing in dog breeding to see if they could get them to replace humans, would you expect a huge increase in our GDP just because it turns out they're better than almost anyone at finding the locations of smells (implying they could be better than us at most things)? Or what if they bred cats to help catch rodents, or apes to instantly memorize visual layouts? It seems absurd only because dogs have been around for a long time and we're used to the idea that they can't do human jobs and that being good at smelling doesn't predict other cognitive abilities. Chimpanzees are far more intelligent than any AI, but I haven't heard of them taking anyone's job yet.
The difference with AI is it is rapidly improving and we can expect it to reach human intelligence before too long, but we are clearly not there yet and benchmarks are not going to give us more than a rough idea of how close we are to it unless those benchmarks start getting a lot closer to the things we actually want AI to do.
The Turing test has been performed with GPT-4, and it passed 54% of the time (compared to humans being suspected of being human 67% of the time).
I think Turing imagined the test with humans like himself. The species that literally thought the weather was a sapient being probably cannot be trusted with this by default.
That's a nice experiment you have there. It would be a shame if someone were to replicate it. (Or look at the original paper) That howtogeek article is seriously overselling it.
I think we have AGI, or at least sort of. It's probably in the range of a 110-130 IQ person, but in just about all domains. Very smart humans in specific domains can still usually beat AIs, but college kids are almost universally not able to surpass them.
The only difference is agency, which is why agentic AIs are a big part of the hype right now. AI just sits there and does nothing without human prompting, which seems like one of the dream scenarios for AI-safety-obsessed people.
Agentic AIs seem like possibly the real wave of AI that will change society: the kind where you can tell an AI, "Hey, go be active promoting a thing on the Internet." The Internet is probably going to be the first casualty of AI.
If it is missing a crucial characteristic of human intelligence, how can you say it has AGI? I can believe it could do well on an IQ test, but given that it has a totally different distribution of abilities, the relevance of those tests for AI is very low. The predictive power of an IQ test result is dramatically lower for an AI. So while it might get a 120 IQ result, it is in no way as competent as an actual person with an IQ of 120.
Once it gets agency, we will probably discover something else it is lacking that we didn't think of before. So we can't even modify the test to include agency to get something as good as an IQ test is for humans. We need to remember that it has a different skill distribution and one which we are discovering as it improves. That's why the ultimate test has to be the broadest set of possible tasks that humans do, not these narrow tests which happen to be highly predictive for human abilities.
An AGI should be able to change a diaper, plan an invasion, butcher a hog, conn a ship, design a building, write a sonnet, balance accounts, build a wall, set a bone, comfort the dying, take orders, give orders, cooperate, act alone, solve equations, analyze a new problem, pitch manure, program a computer, cook a tasty meal, fight efficiently, die gallantly. Specialization is for AlphaFold.
What crucial characteristic is AI missing? Agency? It's not missing it so much as they just choose not to implement it.
Knowing that there is a real world, out there, beyond language.
Are the moon landings a hoax? Large Language Models can say what people say, perhaps rather more fluently and persuasively. If one is open to the possibility that there is more than one kind of intelligence, then LLMs have one of those kinds of intelligence, and in greater degree than an average human. But LLMs are rather stuck on giving their own opinion of whether the moon landings are a hoax, because they don't know whether the moon is real or fictional. Nor do they know whether the Earth is real or fictional. The whole "ground truth" thing is missing.
We don't know the "ground truth" either, though. All the information that we parse, such as touching the Earth or seeing the moon in the sky or through a telescope, is basically a hallucination created by our brains based on the sensory input that we take in through detection mechanisms in our cells. We have to trust that the qualia that we experience are somewhat accurate representations of the "ground truth." Our experience is such that we perceive reality accurately enough that we can keep surviving, both as individuals and as a species, but who knows just how accurate that really is?
LLMs are certainly far more limited compared to us in the variety of sensory input they can take in, or how often it can update itself permanently based on that sensory input, and the difference in quantity is probably large enough to have a quality of its own.
My guess as to the biggest missing factor between here and AGI is efficient online and self-directed learning (aka continuous or lifetime learning).
More specifically, a means of avoiding the inward spiral that comes when the model's output becomes part of its input (via the chat context). I've noticed that LLMs very quickly become less flexible as a conversation progresses, and I think this kind of self-imitation is part of it. I'm working on something and I'd like to force the AI to push itself out of distribution, but I'm not sure how.
Yes, agency. How do you know they know how to implement it?
Because it exists? The agentic AIs are already a thing
Those don't really work. There have been a bunch of iterations but prompts of the form 'decide what you should do to achieve task X and then do it' don't produce good results in situ and it's not really clear why. I think partly because AI is not good at conceptualising the space of unknowns and acting under uncertainty, and it's not good at collaborating with others. Agentic AI tends to get lost, or muddled, or hare off in the wrong direction. This may be suboptimal training, of course.
I read Zvi; he follows AI much more closely than I will ever bother to.
There are potential tricks around the problem you talk about. One of the easier ones is asking the AI to prompt engineer itself. "How would you request a task to do X" ... "How would you improve this prompt that is a request to do task X" ... keep doing that and asking separately "which is a better prompt to do task X".
The sense I get is that there is thinking that an AI is doing, but it is mostly like a dice roll. Rolling consecutively for a cumulatively high number isn't a great strategy, but you don't need to do that. You can instead do something where you re-roll for the best possible roll, then move on to the next roll and do the same thing.
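A rough sketch of that loop, if it helps (purely illustrative; `ask()` is a hypothetical stand-in for a single call to whatever model you're using):

```python
# Illustrative sketch of "ask the AI to prompt-engineer itself, keep the best roll".
def ask(prompt: str) -> str:
    """Hypothetical wrapper: send one prompt to an LLM and return its reply."""
    raise NotImplementedError

def refine_prompt(task: str, rounds: int = 3) -> str:
    # First ask the model how it would phrase the request itself.
    best = ask(f"How would you phrase a prompt asking an assistant to do this task: {task}")
    for _ in range(rounds):
        candidate = ask(f"Improve this prompt for the task '{task}':\n\n{best}")
        # Separate judging call: the "re-roll and keep the best" step.
        verdict = ask(
            f"Which is the better prompt for the task '{task}'? Answer with A or B only.\n\n"
            f"A:\n{best}\n\nB:\n{candidate}"
        )
        if verdict.strip().upper().startswith("B"):
            best = candidate
    return best
```

Each roll is cheap, and you only ever move forward from the best result so far, which matches the dice-roll framing above.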
From 5 hours ago: A complex problem that took microbiologists a decade to get to the bottom of has been solved in just two days by a new artificial intelligence (AI) tool.
Slowly it's becoming clear that ASI is already with us. Imagine if you handed someone from 100 years ago a smartphone or modern networking technology. Even after explaining how it worked, it would take them some time to figure out what to do with it. It took a long time after we invented wheels to figure out what to do with them, for example.
The technology to automate 80-90% of white-collar labor already exists, for example, with current-generation LLMs. It's just about interfaces and layers and regulation and safeguards now. All very important, of course, but it's not a fundamental technical challenge.
I'm skeptical this is actually how it went down. Why would it take 2 days to come up with the hypotheses? I'm not aware of any LLM that thinks that long, which to me implies the scientist was working with it throughout and probably asking leading questions.
It looks like co-scientist is one of the new "tree searching agent" models: you give it a problem and it will spin off other LLMs to look into different aspects of the problem, then prune different decision trees and go further with subsequent spinoff LLMs based on what those initial ones report back, recursing until the original problem is solved. This is the strategy that was used by OpenAI in their "high-compute o3" model to rank #175 vs humans on Codeforces (competitive coding problems), pass the GPQA (Google-proof Graduate-level Q&A), and score 88% on ARC-AGI (vs. a human STEM graduate's 100%). The recursive thought process is expensive: the previous link cites a compute cost of $1000 to $2000 per problem for high-compute o3, so these are systems that compute on each problem for much longer than the 35 seconds available to consumer ($20/month) users of o1.
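I have no idea what Google's actual implementation looks like, but the description has roughly the shape of a recursive spawn-and-prune loop like this (a purely illustrative sketch; `ask()` and every prompt here are hypothetical stand-ins, not co-scientist's real API):

```python
# Illustrative sketch of a recursive "tree searching agent" loop; not Google's actual code.
def ask(prompt: str) -> str:
    """Hypothetical single LLM call."""
    raise NotImplementedError

def solve(problem: str, depth: int = 0, max_depth: int = 3, branches: int = 4) -> str:
    if depth == max_depth:
        return ask(f"Give your best final answer to:\n{problem}")
    # Spin off sub-investigations into different aspects of the problem.
    aspects = ask(
        f"List {branches} distinct aspects of this problem worth investigating, one per line:\n{problem}"
    ).splitlines()
    reports = [
        solve(f"{problem}\n\nFocus on this aspect: {aspect}", depth + 1, max_depth, branches)
        for aspect in aspects[:branches]
    ]
    # Prune: keep only the most promising findings before passing them back up.
    pruned = ask(
        "Rank these findings by how promising they are and keep only the top two:\n\n"
        + "\n---\n".join(reports)
    )
    return ask(f"Given these findings:\n{pruned}\n\nPropose the best current answer to:\n{problem}")
```

The branching is also where the cost comes from: at four branches and three levels of recursion, this sketch already makes well over a hundred full model calls for a single problem.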
Thanks, that's good information. Still, I don't believe it would actually take two days straight to work through the problem, which indicates follow-up questions etc.
Doesn't sound like it.
It's possible, if it was running on Google servers, that the request was somehow queued up or prepared, at least in the testing phase, which they appeared to have been invited to.
The thought occurred to me as well. Between 'no one else has our data!' and 'we didn't realize someone else might have our data,' I tend to default to the latter.
That's not relevant to what 2rafa said. Her point is that coscientist may have taken two days to serve earlier queries and only then gotten to this query.
I came of age right as the Internet was taking off. But I've started watching classic movies and TV, and I think the "information at my fingertips" effect is something that has happened so gradually that we don't really appreciate its impact fully, even pre-LLM. One recent TV episode from the '90s had one character tell another to travel to the state capital and find and photocopy dead-tree legal references, which was expected to take a day. My world today is radically different in a number of ways:
- State laws are pretty easily accessible via the internet. I'm not sure how the minutiae of laws were well-known back then. Are our laws themselves different (or enforced differently) because the lay public can be expected to review, say, health code requirements for a restaurant?
- Computerized text is much more readily searchable. If I have a very specific question, I can find key words with ctrl-f rather than depending on a precompiled index. The amount of information I need to keep in my brain is no longer things like exact quotes, just enough to find the important bits back quickly. The computer already put a bunch of white-collar workers out of jobs, just gradually: nobody needs an army of accountants with calculators to crunch quarterly reports, or humans employed to manually compute solutions to math problems.
- The Internet is now readily accessible on-the-go. Pre-iPhone (or maybe Blackberry), Internet resources required (I remember this) finding a computer to access. So the Internet couldn't easily settle arguments in real conversation. The vibe is different, and at least in my circles, it seems like the expectation of precision in claims is much higher. IRL political arguments didn't go straight to citing specific claims quite the same way.
I sometimes feel overwhelmed trying to grasp the scope of the changes even within my own lifetime, and I find myself wondering things like what my grandfather did day-to-day as an engineer. These days it's mostly a desk job for me, but I don't even know what I'd be expected to do if you took away my computer: it'd be such a different world.
Maybe I'm misunderstanding the question, but laws are organized into books that are indexed. You look up the relevant statute, search for the right section, and then read a few paragraphs describing the law. If you need to know the details of case law, you consult a lawyer. Lawyers go to law school and read relevant cases to know how judges are likely to rule on similar future cases.
You still need lawyers to do this because ctrl-f doesn't return a list of all the relevant legal principles from all the relevant cases.
There also has been a massive explosion in the number and complexity of laws since the word processor was invented.
This is, I think, the answer I was looking for. Ctrl-F doesn't find everything (I've had to search non-indexed dead-tree books before), but it's a huge force multiplier.
To me, this is impressive, but not that impressive: sure it answered the question, but it didn't pose the question. In the same way, LLMs are decent at writing code, but have ~no ability to decide what to write. You can't just point them at your codebase and a bunch of email threads from PMs and hope it writes the right thing.
I don't know how many plausible hypotheses there are for the question it solved, or how hard it is to generate them, but it's surely much easier than looking at the state of the field as a whole and coming up with a new idea for which to generate hypotheses.
AI is a junior engineer.
It's actually far worse than that. LLMs are a junior engineer who cannot learn. The reason that we put up with junior engineers and invest effort into training them is because they will learn, stop making junior mistakes, and someday be productive enough that they pay off the effort of training them. But you can't do that with an LLM. Even after 2-3 years of development, they still suck at writing code. They still hallucinate things because they have no understanding of what they are doing. They still can't take on your feedback and improve for the next time you ask a question.
If LLMs were as capable as a junior engineer, that wouldn't be all bad. But they're actually less capable. Of course people aren't impressed.
I agree completely.
I see you've met my coworker.
Hoo boy. Speaking as a programmer who uses LLMs regularly to help with his work, you're very, VERY wrong about that. Maybe you should go tell Google that the 20% of their new code that is written by AI is all garbage. The code modern LLMs generate is typically well-commented, well-reasoned, and well-tested, because LLMs don't take the same lazy shortcuts that humans do. It's not perfect, of course, and not quite as elegant as an experienced programmer can manage, but that's not the standard we're measuring by. You should see the code that "junior engineers" often get away with...
I use AI a lot at work. There is a huge difference between writing short bits of code that you can test or read over and see how it works and completing a task with a moderate level of complexity or where you need to give it more than a few rounds of feedback and corrections. I cannot get an AI to do a whole project for me. I can get it to do a small easy task where I can check its work. This is great when it's something like a very simple algorithm that I can explain in detail but it's in a language I don't know very well. It's also useful for explaining simple ideas that I'm not familiar with and would have to look up and spend a lot of time finding good sources for. It is unusable for anything much more difficult than that.
The main problem is that it is really bad at developing accurate complex abstract models for things. It's like it has memorized a million heuristics, which works great for common or simple problems, but it means it has no understanding of something abstract, with a moderate level of complexity, that is not similar to something it has seen many times before.
The other thing it is really bad at is trudging along and trying and trying to get something right that it cannot initially do. I can assign a task to a low-level employee even if he doesn't know the answer and he has a good chance of figuring it out after some time. If an AI can't get something right away, it is almost always incapable of recognizing that it's doing something wrong and employing problem solving skills to figure out a solution. It will just get stuck and start blindly trying things that are obviously dead-ends. It also needs to be continuously pointed in the right direction and if the conversation goes on too long, it keeps forgetting things that were already explained to it. If more than a few rounds of this go on, all hope of it figuring out the right solution is lost.
Thanks, it's clear that (unlike the previous poster, who seems stuck in 2023) you have actual experience. I agree with most of this. I think there are people working on giving LLMs some sort of short-term memory for abstract thought, and also on making them more agentic so they can work on a long-form task without going off the rails. But the tools I have access to definitely aren't there yet.
So, yeah, I admit it's a bit of an exaggeration to say that you can swap a junior employee's role out with an LLM. o3 (or Claude-3.5 Sonnet, which I haven't tried, but which does quite well on the objective SWE-bench metric) is almost certainly better at writing small bits of good working code - people just don't understand how horrifically bad most humans are at programming, even CS graduates - but is lacking the introspection of a human to prevent it from doing dangerously stupid things sometimes. And neither is going to be able to manage a decently-sized project on their own.
I'm a programmer too, and I'm perfectly willing to tell Google that their 20% code is garbage. Honestly you shouldn't put them on a pedestal in this day and age, we are long past the point where they are nothing but top tier engineers doing groundbreaking work. They are just another tech company at this point, and they sometimes do stupid things just like every other tech company does.
If you are willing to accept use of a tool which gives you code that doesn't even work 10% of the time, let alone solve the problem, that's your prerogative. I say that such a tool sucks at writing code, and we can simply agree to disagree on that value judgement.
The vast majority of that "code being written by AI" at Google is painfully trivial stuff. We're not talking writing a new Paxos implementation or even a new service from scratch. It's more, autocomplete the rest of the line "for (int i = 0"
This is exactly @jeroboam's point - you say "AI is a junior engineer" as if that's some sort of insult, rather than unbelievably friggin' miraculous. In 2020, predicting "in 2025, AI will be able to code as well as a junior engineer" would have singled you out as a ridiculous sci-fi AI optimist. If we could only attach generators to the AI goalposts as they zoom into the distance, it would help pay for some of the training power costs... :)
It's weird and a surprise that current AI functions differently enough from us that it's gone superhuman in some ways and remains subhuman in others. We'd all thought that AGI would be unmistakable when it arrived, but the reality seems to be much fuzzier than that. Still, we're living in amazing times.
"We" should have read Wittgenstein to predict this. LLMs can speak, but we can't understand them.
People really need to get over anthropomorphism until we actually understand how humans work.
The question was “why are some bacteria resistant to antibiotics”, i.e. one of the most important questions in medicine.
On the one hand, wow, that's very, very impressive.
On the other hand, skepticism and my prior of "nothing ever happens and especially not with LLMs" makes me ask: was that literally the question? Do you have a source? I am very much not a biologist, but that is surprisingly/impressively broad.
I’ve never understood how the Turing test measured anything useful. The test doesn’t even require that the AI agent understand anything about its world or even the questions being asked of it. It just has to do well enough to convince a human that it can do so. That’s the entire point of the Chinese room rejoinder: an agent might well be clever enough to fool a person into thinking it understands just by giving reasonable answers to the questions posed.
The real test, to me, is more of a practical thing — can I drop the AI in a novel situation and expect it to figure out how to solve the problems? Can I take a bot trained entirely on being an English chatbot and expect it to learn Japanese just by interacting with Japanese users? Can I take a chatbot like that and expect it to learn to solve physics equations? That seems a much better test, because intelligent agents are capable of learning new things.
The Turing test is an insanely strong test, in the sense that an AI that passes it can be seen to have achieved human-level intelligence at the very least. By this I mean the proper, adversarial test with a fully motivated and intelligent tester (and ideally, the same for the human participant).
Certainly no AI today could pass such a thing. The current large SotA models will simply tell you they are an AI model if you ask. They will outright deny being human or having emotions. I don't know how anyone could think these models ever passed a Turing test, unless the tester was a hopeless moron who didn't even try.
One could object that they might pass if people bothered to finetune them to do so. But that is a much weaker claim akin to "I could win that marathon if I ever bothered to get up from this couch." Certainly they haven't passed any such tests today. And I doubt any current AI could, even if they tried.
In fact, I expect we'll see true superhuman AGI long before such a test is consistently passed. We're much smarter than dogs, but that doesn't mean we can fully imitate them. Just like it takes a lot more compute to emulate a console such as the SNES than such devices originally had, I think it will require a lot of surplus intelligence to pretend to be human convincingly. If there is anything wrong with the Turing test, it's that it's way too hard.
The Turing test is like the Bechdel test. It’s not a perfect heuristic, and it’s misused in a lot of ways, but the point is that it’s a fairly low bar that most things at the time still weren’t able to clear.
I am flabbergasted by people, including the person who came up with the Chinese Room thought experiment, Searle, not seeing what seems to me to be the obvious conclusion:
The room speaks Chinese.
(Is that a problem? No, not at all. I just didn't think you'd be Chinese)
No individual component of the room speaks Chinese, including the human, but that is no impediment. No single neuron in your brain speaks English, but we have zero qualms about saying the entire network, i.e your brain, does.
Searle literally addressed this objection in his very first paper on the Chinese Room.
Seems nonsensical to me. I fail to see how this person could have that inside their brain and fail to speak Chinese. How is that even physically possible?
So, take throwing a ball. The brain’s doing a ton of heavy lifting, solving inverse kinematics, adjusting muscle tension, factoring in distance and wind and all in real time, below the level of conscious awareness. You don’t explicitly think, “Okay, flex the biceps at 23.4 degrees, then release at t=0.72 seconds.” You just do it. The calculations happen in the background, and you’d be hard-pressed to explain the exact math or physics step-by-step. Yet, if someone said, “You can’t throw a ball because you don’t consciously understand the equations,” you’d rightly call that nonsense. You can throw the ball - your ability proves it, even if the “how” is opaque to your conscious mind.
If Searle were to attempt to rebut this by saying, nah, you're just doing computations in your head without actually "knowing" how to throw a ball, then I'd call him a dense motherfucker and ask if he knows how the human brain works.
Well, he would be able to converse in Chinese, and to converse in English, but not able to translate between them. That seems very possible; there's probably some brain disorder where you do that.
The answer is that he doesn't understand Chinese, he plus the room understand Chinese.
If someone internalizes the system in his head, ignoring practicality (which makes it hard to properly imagine the situation), then he's acting as a dumb CPU executing a Chinese program. The answer is still "the man doesn't know Chinese, the system does". The answer feels strange because "the man" is in the man's head and "the system" is also in the man's head, but that doesn't make them the same thing or mean that they both have the same knowledge.
Of course, in Searle's time, "come on, he's running a virtual machine" isn't something you could really say because people weren't familiar with the concept.
Virtual machines had been a thing since 1965, and Searle wrote his nonsense about intentionality in 1983, and the Chinese Room in 1980.
If someone has the gall to claim to disprove the possibility of artificial intelligence, as he set out to do, it would help to have some understanding of computer science. But alas.
I agree with you but Searle and his defenders wouldn't. As far as I'm concerned, it matters not a jot if the system is embedded inside a brain, up an arse, or in the room their arse is resting in.
You're just a Functionalist, exactly the sort of people the argument is supposed to criticize. Or you're missing the point.
Searle is a Biological Realist, which is to say that he believes that processes of mind are real things that emerge from the biochemical processes of human beings, and that language (and symbol manipulation in general) is a reflection of those processes, not the process in itself. He thinks thoughts are real things that exist outside of language.
To wit, he argues that what the room is missing is "intentionality". It does not have the ability to do anything but react to input in ways that are predetermined by the design of the Chinese manual, and insofar as any of its components are concerned (or the totality thereof), they are incapable of reflecting upon the ideas being manipulated.
Your brain does "speak Chinese" properly speaking because it is able to communicate intentional thoughts using that medium. The mere ability to hold a conversation does not qualify for what Searle is trying to delineate.
Not too familiar with Searle's argument, but isn't this just saying that the lack of ability to generalize out of distribution is the issue? But I don't get how being able to react to novel inputs (in a useful way) would even help things much. Suppose one did come up with a finite set of rules that allowed one to output Chinese to arbitrary inputs in highly intelligent, coherent ways. It's still, AFAICT, just a room with a guy inside to Searle.
Perhaps it's the ability to learn. But even then, you could have the guy follow some RL algorithm to update the symbols in the translation lookup algorithm book, and it's still just a guy in the room (to Searle).
It's not even clear to me how one could resolve this: at some point, a guy in the room could be manipulating symbols in a way that mirrors Xi Jinping's neural activations arbitrarily closely (with a big enough room and a long enough time), and Searle and I would immediately come to completely confident and opposite conclusions about the nature of the room. It just seems flatly ridiculous to me that the presence of dopamine and glutamate imparts consciousness to a system, but I don't get how to argue against that (or even get how Searle would say that's different from his actual argument).
Insofar as this is possible (I believe Searle disagrees that it is), the room does speak Chinese, because it's just a brain.
But as I explain in the other thread, this means assuming computationalism is true, which renders the whole thought experiment moot, since it's supposed to be a criticism of computationalism.
I'm not sure how one would argue that it's not possible. Is the contention that there's something ineffable happening in neurons that fundamentally can't be copied via a larger model? That seems isomorphic to a "god of the gaps" argument to me.
"God of the gaps" cuts both ways. The cached Materialist narrative has some very large holes in it that are bridged through unexamined axioms and predictions that never update when falsified.
It's quite simple in a world where the experience of consciousness is unexplained and machines don't offer the same sort of spontaneous behavior as humans, actually.
It's not compatible with strict materialism, but then again most people don't believe in strict materialism.
My response is that there isn't a point to miss.
That strikes me as the genuine opposite of what someone with a realistic understanding of biology would believe, but I guess people can call themselves whatever they like. It strikes me as unfalsifiable Cartesian Dualism or a close relative, and worth spending no more time rebutting with evidence than it was forwarded without evidence.
What is so mysterious about this "intentionality"? Give the Room a prompt that requires it to reason about being a Chinese Room. Problem solved.
What is the mechanism by which a thought is imbued with "intentionality"? Where, from a single neuron, to a brain lobe, to the whole human, does it arise?
Realism is a term of art in metaphysics in particular and philosophy in general. It is the view that reality exists outside of the processes of the mind, the opposite of anti-realism (solipsism, skepticism, etc).
For what it's worth, Skepticism, which I take to be your view if you're making this objection, is also unfalsifiable. As are all statements in metaphysics.
I happen to be a metaphysical skeptic myself, but this isn't an argument. We're talking about something more fundamental than notions of falsifiablity or correspondence. You can't use logic even.
What isn't? Consciousness is the most mysterious phenomenon I have ever experienced. It is so mysterious in fact as to be a centerpiece of many religious traditions.
Why do humans go about doing things on their own instead of merely reacting to their environment? Is it just a more complex form of the instinctive behavior we see in other animals or something entirely different? And why do I have qualia? These are mysteries.
Unless the writer of the manual understands reasoning to a sufficient degree as to provide exhaustive answers to all possible questions of the mind, this isn't possible. And it certainly isn't within the purview of the thought experiment as originally devised.
We don't know yet. We may possibly never know. But we can observe the phenomenon all the same.
If we really want to get into this, then proving (and disproving) anything is mathematically impossible.
This makes axioms necessary to be a functional sapient entity. Yet axioms are thus incredibly precious, and not to be squandered or proliferated lightly.
To hold as axiomatic that there exists some elan vital of "intent" that the room lacks, but a clearly analogous system in the human brain itself possesses, strains credulity to say the least. If two models of the world have the same explanatory power, and do not produce measurable differences in expectation, the sensible thing to do is adopt the one that is more parsimonious.
(It would help if more philosophers had even a passing understanding of Algorithmic Information Theory)
Why not? What exactly breaks if we ask that the creator of the Room makes it so?
It is already a very unwieldy object, a pure look-up-table that could converse in Chinese is an enormous thing. Or is it such an onerous ask that we go beyond "Hello, where is the library?" in Chinese? You've already shot for the moon, what burden the stars?
If the Room can be equipped to productively answer questions that require knowledge of the inner mechanisms of the Room, then the problem is solved.
For consciousness? Maybe. I'd be surprised if we never got an answer to it, and a mechanistic one to boot. Plenty of mysterious and seemingly ontologically basic phenomena have crumbled under empirical scrutiny.
Non functionalists disagree that it is analogous. So you need to actually make that argument beyond "it is obviously so because it is so from the functionalist standpoint".
Moreover, you're defending two contradictory positions here.
On the one hand, you seem ready to concede to metaphysical skepticism and the idea that knowledge is impossible. On the other hand, you're using the Naive Empiricist idea that systems can only be considered to exist if they have measurable outcomes. These are not compatible.
If what you're doing is simply instrumentally using empiricism because it works, you must be ready to admit that there are truths that are possibly outside of its reach, including the inner workings of systems that contain hidden variables. Otherwise you are not a skeptic.
It requires the assumption that cognition is reducible to computation, which makes the entire experiment useless as a prop to discuss whether that view is or isn't satisfactory. It turns it into a tautology.
If computationalism is true, computationalism is true.
You should be careful with that line of thinking.
Surely you must be familiar with the story of Lord Kelvin's speech to the Royal Society, in which he stipulated that Physics was now almost totally complete save for two small clouds.
Explaining those "small" issues would of course end up requiring the creation of special relativity and quantum mechanics, which were neither small tasks nor, to this day, ultimately complete, and which unearthed a lot more problems along the way.
Whatever one thinks of our epistemic position, I always recommend humility.
On the flip side... how is the thought experiment helping illustrate anything to anyone who doesn't already agree with Searle's take? It's as if he's saying "...and obviously the room doesn't know anything so functionalism is wrong."
One man's modus ponens is another man's modus tollens. 🤷
I am large, I contain multitudes. As far as I'm concerned, there is no inherent contradiction behind my stance.
Knowledge without axiomatic underpinning is fundamentally impossible, due to infinite regress. Fortunately, I do have axioms and presumably some of them overlap with yours, or else we wouldn't have common grounds for useful conversation.
I never claimed being a "skeptic" as a label, that's your doing, so I can only apologize if it doesn't fit me. If there are truths beyond materialist understanding, regretfully we have no other way of establishing them. What mechanism ennobles non-materialists, letting them pick out Truth safe from materialism from the ether of all possible concepts? And how does it beat a random number generator that returns TRUE or FALSE for any conjecture under the sun?
I must then ask them to please demonstrate where a Chinese Room, presumably made of atoms, differs from human neurons, also made of atoms.
I reject your claim this is a tautology. A Chinese Room that speaks Chinese is a look-up table. A Chinese Room that speaks Chinese while talking about being a Chinese Room is a larger LUT. Pray tell what makes the former valid, and the latter invalid. Is self-referentiality verboten? Can ChatGPT not talk about matrix multiplication?
I'm all for epistemic humility, but I fail to see the relevance here. It's insufficient grounds for adding more ontologically indivisible concepts to the table than are strictly necessary, and Searle's worldview doesn't even meet necessity, let alone strictness.
There's epistemic humility, and there's performative humility, a hemming and hawing and wringing of hands that we just can't know that things are the way they seem, there must be more, and somehow this validates my worldview despite it having zero extra explanatory power.
Your tests have the exact same "problem" as the Turing Test, though. There's no way to tell if the bot actually "understands" Japanese just because it is able to produce Japanese words that are understandable to Japanese people after interacting with Japanese people a bunch. There's no way to tell if the bot actually "understands" physics just because it responds to an equation with symbols that a learned physicist would judge as "correct" after interacting with a bunch of physics textbooks or lectures or whatever. It could just be updating the mappings within its Chinese room.
One might say that updating the mappings in its Chinese room is essentially another way of describing "understanding." In which case the Turing Test also qualifies; if the chatbot is able to update its mappings during its conversation with you such that it appears to you as indistinguishable from a human, then that's equivalent to a bot updating its mappings through its conversations with Japanese people such that it appears to Japanese people as someone who understands Japanese.
I guess the point of my conjecture is that understanding is required for intelligence. And one way to get after intelligence is putting an agent in a situation where it has no previous experience or models to work from and expect it to solve problems.
Where I agree with the idea behind the Chinese Room is exactly that. Yes, the agent can answer questions about the things it’s supposed to be able to answer questions about, well enough to fool an onlooker asking questions about the subject it’s been trained to answer. But if you took the same agent and got it off script in some way — if you stopped asking about the Chinese literature it was trained to answer questions about and started asking questions about Chinese politics or the weather or the Kansas City Chiefs, an agent with no agency that doesn’t actually have a mental model of what the characters it’s matching actually mean will be unable to adapt. It cannot answer the new questions because it specifically doesn’t understand any of the old questions, nor can it understand the new ones. And likewise, if the questions in English are not understood, it would be impossible to get the agent to understand Japanese, because it’s unable to derive meanings from words; it’s just stringing them together in ways that its training tells it are pleasing to users.
It’s also a pretty good test for human understanding of a given subject. If I can get you to attempt to use the information you have in a novel situation and you can do so, you understand it. If you can only regurgitate things you have been told in exactly the ways you have been told to do it, you probably don’t.
Perhaps I'm not as familiar with the Chinese Room experiment as I thought I was. I thought the Chinese Room posited that the room contained mappings for literally every single thing that could be input in Chinese, such that there was literally nothing a Chinese person outside the room could state such that a response indicated a lack of understanding of Chinese? If the Chinese Room posits that the mappings contained in the room are limited, then that does change things, but then I also believe it's not such a useful thought experiment.
I personally don't think "understanding," at least the way we humans understand (heh) it, is a necessary component of intelligence. I'm comfortable with calling the software that underlies the behavior of imps in Doom as "enemy artificial intelligence," even though I'm pretty sure there's no "understanding" going on in my 486 Thinkpad laptop that's causing the blobs of brown pixels to move on the screen in a certain way based on what my player character is doing, for instance. If it talks like a duck and walks like a duck and is otherwise completely indistinguishable from a duck in every way that I can measure, then I'll call it a duck.
Yeah, to tie this back to the above thread about the ramifications of massively-increased automation, what the hell does it matter if an AI really understands anything, if it puts most of us out of a job anyways? Philosophy is for those who don't have to grind for their bread.
Most people will recognize AGI when it can independently enact change on the world, without regard for the desires or values of humans, including its creators. Until then, they'll see whatever brilliant things it enables as merely extensions of the guiding human's agency.
AI built on language models will always reflect us because they’re trained on us in a fundamental way. They are the product of human civilization, their ways of thinking are ours, only faster / better. Where AGI is ‘evil’ it will be evil in a human way.
This.
My personal benchmark is that I want an AI agent that can renew and refill a drug prescription for me.
This isn't a task that should require an AGI, but it's one that is frustrating for me to accomplish, and often involves interacting with a couple of websites, then making a phone call to multiple parties, possibly scanning and e-mailing a document or two, and finally confirming that the end result is ready for pickup.
This still seems beyond the current crop.
And I daresay that robotics is lagging enough that I'm skeptical we'll see AI capable of physically navigating the real world independently, without using a human intermediary, before we get AGI. As far as I know, they haven't yet hooked up an LLM to sensors that give it a constant stream of data about the real world, so maybe it can adapt faster than I expect. I wouldn't put anything out beyond 5 years.
That said, I don't think an AGI NEEDS to be able to navigate the world; if it's smart enough, it can use human intermediaries to achieve most of its goals, potentially including killing the humans.
So, if I'm being blunt and oversimplifying, the current state of AI tech is ABSOLUTELY a tool that can leverage human productivity, but needs more agentic behavior and the ability to manipulate atoms and not just bits, which I think most benchmarks don't actually account for.
This came out just this week: https://microsoft.github.io/Magma/
Things are moving very quickly now.
As far as gorillas are concerned, humans still can't replace gorillas - neither a human nor any human technology can pass as a member of a gorilla tribe and fulfill all the functions that gorillas expect of each other no worse than a gorilla would. Yet, if gorillas could invent benchmarks as well as humans do, they probably would have made up a whole bunch that we would have blown past with ease - we could delouse more effectively, make devices that roar louder, thump artificial chests with more force, mass-produce silverback pheromones in bioreactors and obliterate any rivalling gorilla tribe with FPV drones. At some point, we have to recognise that "be a productive and well-assimilated member of the existing community of X" is a much harder problem than "outperform X at any given task not closely coupled with the former", which is unsurprising because life on earth has a much longer evolutionary history gatekeeping its respective community than it has doing anything that we consider useful.
Unfortunately, our informal AGI metrics, which really should be looking at performance at the latter, keep falling into the trap of measuring performance at the former instead, leaving us in a position somewhat akin to gorillas dismissing early hominids because they can't even grow a full back of majestic silver hair.
AI limitations go well beyond simply not fitting in as humans. Virtually anything we value in real life will be accomplished better by a below-average human than by an AI. AI agents, given scaffolding that allows them more human-like reasoning (long-term memory storage, frequent reminders of their objective, plugins that give them access to the internet, etc.), are generally pretty useless and incapable of solving even very basic problems. And they usually go mad eventually.
Extending the gorilla-human analogy, AIs are really more like gorillas to us humans. We have much stronger tools available to do most of what they can do (construction equipment, machinery, etc.), and we can't exactly leave them alone to accomplish objectives--even if they could technically be better than humans at some forms of brute labor, for example, they won't understand the objective, or will forget it very quickly and pursue more gorilla things. We're not asking them to enter human society, just asking them to be useful to us on any level besides entertainment.
AIs are moderately more useful than that but still fundamentally extremely limited.
In fairness to AI, the internet drives many humans mad, too.
I agree with your main premise, but a nitpick:
I'm not really sure that's true. The Turing Test has been passed in some form or another since 1966, with ELIZA, and I also remember various chat bots on AOL instant messenger doing the same back in the early 2000s. I think that people realized quickly that the Turing test is just a novelty, something thought up by Turing in the early days of computer science that seemed relevant but was quickly proven not to be, and that various technologies could beat it.
ELIZA didn't pass the Turing Test. The Turing Test includes an adversarial element where a person is told that one player is human, and the other a machine, and must determine the difference.
It is not simply temporarily making a person think he is talking to a human when he already assumes that is the case.
It was not possible for computers to fool a median-IQ person in this manner until roughly 2022, give or take.
I don't think it's conclusive that nothing has passed the test before, because I don't think the test is necessarily set in stone. There are variations, and I think it's been romanticized enough that people have moved the goalposts for the test as we progress. I mean some people could be fooled while others are not. Eugene Goostman is another one from 2014 that is said to have passed the test.
a) We didn't.
b) It takes time to integrate new tech into business and to figure out how best to use it. Reasoner models are what, 3 months old now?
You'll be a little lucky if you're even alive. Pacific War 2: Electric Boogaloo and its possible thermonuclear complications aside, there are many, many people who think like Ziz, there doesn't seem to be a good way of preventing jailbreaks reliably, and making very deadly pathogens that kill in a delayed manner is not hard if you don't care much about your own survival. And in any case, it looks like for ~$500k people will be able to run their own open-source AGI in isolation, meaning moderately rich efilist lunatics could run their own shitty biolabs with its help and spend as much time figuring out jailbreaks as needed, with no risk of snitching.
I put very little weight on this. It seems obvious to me that it's just become a sort of ingroup belief that is now trendy to have. Ten years ago, it was the opposite: virtually everyone in AI research found the idea ridiculous. Within the last few years, the balance of opinion has changed without any significant new information about what AI will be like.
Every estimate that is actually based on people betting real money or which weights estimates based on the predictive abilities of those providing the estimates gives a very low probability of this happening.
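As a toy illustration of what "weighting estimates by the predictive abilities of those providing them" might look like (this is my own sketch, not anyone's actual methodology), you can pool forecasts with weights inversely proportional to each forecaster's past Brier score:

```python
# Toy illustration (not anyone's actual methodology) of weighting forecasts by
# past predictive ability: forecasters with better historical Brier scores get
# more weight in the pooled probability. All numbers below are made up.

def brier(past_forecasts):
    """Mean squared error of past probability forecasts vs. outcomes (0 or 1)."""
    return sum((p - o) ** 2 for p, o in past_forecasts) / len(past_forecasts)

def pooled_probability(forecasters):
    """forecasters: list of (current_probability, past_forecasts) tuples."""
    weights = [1.0 / (brier(past) + 1e-6) for _, past in forecasters]
    total = sum(weights)
    return sum(w * p for w, (p, _) in zip(weights, forecasters)) / total

example = [
    (0.02, [(0.1, 0), (0.2, 0), (0.9, 1)]),   # historically accurate forecaster
    (0.60, [(0.9, 0), (0.8, 0), (0.1, 1)]),   # historically poor forecaster
]
print(round(pooled_probability(example), 3))
```

With the made-up history above, the well-calibrated forecaster's 2% dominates the pool and the result lands around 3-4%, which is the general shape of the claim: aggregators of this kind keep spitting out low probabilities.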
I meant people who believe human life is inherently evil and should be eradicated. Hell, there are a fair number of people who believe all life should end.
Possible, but also possible that you can just cheaply run massive automated genetic testing on trillions of particles, with billions of sensors located at every major human transit point that can pick up on those pathogens before their delayed death sentence kicks in and before they spread as widely as their proponents hope. It's all fiction for now, we'll see who wins (or perhaps not). I'm pretty optimistic humanity will survive beyond 2030.
There's no known disease that could wipe out all life on earth if every single person got it simultaneously. Prion disease is essentially the only 100% fatal disease and it does not kill quickly enough to stop reproduction.
A sufficiently powerful oncovirus probably wouldn't kill everyone, but are you really confident you'd be one of the 1-10% who wouldn't get a fatal cancer? I don't think we could even treat all those cancers even if we really geared up for it.
My own odds are pretty good, almost no one in my family ever had cancer and I barely get sick from normal viruses but playing viral russian roulette isn't on my to-do list.
I hope that we don't even have to play roulette with something mundane. COVID wasn't bad on a human historic scale and it was bad enough, plenty bad enough to justify shutting down GOF research in my view.
I'm not a strong domain expert in microbiology, but it strikes me as a not particularly insurmountable challenge to design a pathogen that would kill 99.99% of humans. I think if you gave me maybe $10 million and a way to act without drawing adverse attention, I'd be able to pull it off (with lots of time reading textbooks, or maybe an additional master's).
The primary constraint would be access to a BSL-4 lab, because otherwise the miscreants would probably be the first to die to a prototype of the desired strain.
We already have gain-of-function research; at the bare minimum, serial passage isn't that difficult. With expertise roughly equivalent to a Master's student, or a handful of them, it would be easy enough to gene-edit a virus, cribbing sections from a variety of pathogens till you get one you desire. I see no reason in principle why you couldn't optimize for contagiousness, a long incubation period and massive lethality.
This is easy for most nation-states, but thankfully most of them aren't omnicidal. Very difficult for lone actors, moderately difficult if they have access to scientific labs and domain expertise. I think we've been outright lucky in that no organized group has really tried.
Just because there isn't an existing pathogen that kills all humans (and there isn't, because we're alive and talking), doesn't mean it isn't possible.
I am not qualified to make technical statements about the ease of developing biological weapons but let me apply some outside-the-box thinking.
You are almost certainly wrong about how easy this is.
I am basing this on computer engineers who make statements like "an undergrad could build this in a weekend" and are wrong almost 100% of the time. Things always take longer than you think.
I don't know what specific obstacles you would face on your way to build a bioweapon, but I predict that you don't either. It's not the known unknowns that get you. It's the unknown unknowns.
Please don't try to prove me wrong, though :) And I agree that serious bioweapons are likely within the capacity of major states.
I have a reasonable plan in mind for what I'd do with the $10 million. I'd probably pivot away from my branch of medicine and ingratiate myself into an Infectious Disease department, or just sign up for a master's in biology.* The biggest hurdle would be the not-getting-caught part, but there's an awful lot of Legitimate Biology you can do that helps the cause, and ways to launder bad intent. Just look at the apologia for gain of function.
There's also certainly Knightian uncertainty involved, but there are bounds to how far you can go while pointing to unknown unknowns. I don't think I'd need $1 billion to do it, just as I'm confident it couldn't be done for $3.50 and a petri dish.
And whatever the actual cost and intellectual horsepower + domain knowledge is, it only tends downwards, and fast!
*If you can't beat disease, join them
You could create something like that; the hard part is spreading it. The reason COVID was a hard nut to crack, as far as stopping the spread goes, was that it was pretty mild for most people. In fact, if it had come out in the 1970s, before we had the ability to track it and ID it and before we had the internet for remote work and online shopping, it would probably have gone unnoticed except as a "bad flu year" with a lot of elderly dead people. People felt fine to go to work or hang around other people, so it was easy to spread. But a virus that kills you doesn't spread as much, because dying people aren't inclined to go to work, school, or shop at Walmart. People get the death virus, feel like crap, go to the doctor, get admitted to the hospital, and die there. No one outside that household gets it, because once you have it you're too sick to go anywhere. AIDS is an exception, but only because the incubation phase is so long: you can have and spread HIV for years before getting sick.
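That trade-off drops out of even the most basic textbook epidemic model. Here's a toy SIR-style sketch (all parameter values invented purely for illustration) showing that, holding transmissibility fixed, the faster infected people are pulled out of circulation by death or severe illness, the smaller the outbreak:

```python
# Toy SIR-style model (standard textbook epidemiology, nothing exotic) to
# illustrate the point above: the faster infected people are removed from
# circulation -- by dying or being too sick to go out -- the less the pathogen
# spreads. All parameter values are made up for illustration.

def outbreak_size(beta: float, removal_rate: float, days: int = 365,
                  population: float = 1e6, initial_infected: float = 10.0) -> float:
    s, i, r = population - initial_infected, initial_infected, 0.0
    for _ in range(days):
        new_infections = beta * s * i / population
        removals = removal_rate * i
        s -= new_infections
        i += new_infections - removals
        r += removals
    return r  # total ever infected (recovered or dead) after `days`

# Same transmissibility, but a "mild" disease keeps people circulating ~10 days,
# while a rapidly incapacitating one removes them in ~2 days.
print(f"mild, slow removal:   {outbreak_size(beta=0.3, removal_rate=0.10):,.0f}")
print(f"severe, fast removal: {outbreak_size(beta=0.3, removal_rate=0.50):,.0f}")
```

With the slow removal rate the model infects most of the population over the year; with the fast one the effective reproduction number drops below 1 and the outbreak fizzles, which is the dynamic described above.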
The hard part is what I was alluding to, when I mentioned that during the gene-editing, you could copy and paste sections of genomes from unrelated pathogens. Nature already does this, but to a limited extent (bacteria can share DNA, viral replication often incorporates bits of the host or previous viral DNA still lurking there).
I expect that a competent actor could merge properties like:
Can spread through aerosols (influenza or rhinoviruses)
Avoids detection by the immune system, or has a minimal prodrome that looks like Generic Illness (early HIV infection)
Massive lethality (HIV or a host of other diseases, not just restricted to viruses)
The design space pretty much contains anything that can code for proteins! There's no fundamental reason a disease can't be both extremely lethal and have an incubation period long enough for it to spread widely. The only reason, as far as I can see, that we don't have this is that nobody has been insane (and resourceful) enough to try. Holding the former constant, the resource requirement is dropping precipitously by the year. Anyone can order a gene-editing kit off eBay, and the genomes of many pathogens are available online. The thing that remains expensive is a proper BSL-4 lab, to ensure time to tinker without releasing a half-baked product. But with AI assistance, the odds of early failure are dropping rapidly. You might be able to do a one-off print of the Perfect Pathogen and, as long as you're willing to die, spread it widely.
Sure, but everything you describe here are things that
This is a huge problem for ending life on Earth; living is 100% fatal, but humans keep having kids. If you set an incubation period that is too long, then people can just ~~post~~ live through it. I also think a long incubation period would dramatically raise the chances that your murdercritter mutates to a less harmful form.
Well, prion disease may be associated with spiroplasma bacterial infection, but it still hasn't killed all humans.
I think it's far from clear that AI mitigates the issue more than it currently exacerbates it. I'm in agreement that it's already technically possible, and that we're only preserved by the modest sanity of nations and a lack of truly motivated and resource-imbued bad actors.
In a world with ubiquitous AI surveillance, environmental monitoring and lock-downs of the kind of biological equipment that modern labs can currently buy without issue, it would clearly be harder to cook up a world-ending pathogen.
We don't live in that world.
We currently reside in one where LLMs already possess the requisite knowledge to aid a human bad actor in following through with such a plan. There are jailbroken models that would provide the necessary know-how. You could even phrase it as benign questioning, a lot of advanced biotechnology is inherently dual-use, even GOF adherents claim it has benefits, though most would say it doesn't match the risks.
In a globalized world, a long incubation period could merely be a matter of months. A bad actor could book a dozen intercontinental flights and start a chain reaction. You're correct that over time, a pathogen tends to mutate towards being less lethal towards its hosts, but this does not strike me as happening quickly enough to make a difference in an engineered strain. The Bubonic Plague ended largely because all susceptible humans died and the remaining 2/3rds of the population had some degree of innate and then acquired immunity.
Look at HIV, it's been around for half a century, but is no less lethal without medication than when it started out (as far as I'm aware).
Prions would not be the go-to. Too slow, both in terms of spread and time to kill. Good old viruses would be the first port of call.
It kinda seems like we do live in a world where any attempt to kill everyone with a deadly virus would involve using AI to try to find ways to develop a vaccine or other treatment of some kind.
They mutate so rapidly, though, and humans have survived even the worst of the worst (such as rabies).
That's not to say you couldn't kill a lot of people with an infectious agent. You could kill a lot of people with good old-fashioned smallpox! I just think the vision of a world sterilized of human life is far-fetched.
It's ironic, though: the people who are most worried about unaligned AI are the people most likely to spell out, in future AI training content, plausible ways an AI could kill everyone on Earth. Which means that, granting for the purposes of argument that unaligned agentic AI is a threat, they increase the risk of an unaligned agentic AI attempting to use a viral murder weapon, regardless of whether or not that would actually be reliable or effective.
Sorry, side tangent. I don't take the RISKS of UNALIGNED AI nearly as seriously as most of the people on this board, but for the sake of hedging I do sort of hope those people are considering implementing the unaligned-AI deterrence plans I came up with after reflecting on it for 5 minutes, ~~instead of~~ along with posting HERE IS HOW TO KILL EVERY SINGLE HUMAN BEING over and over again on the Internet :p
ETA: not trying to launch a personal attack on you (or anyone on the board), to be clear; AFAIK none of y'all wrote the step-by-step UNALIGNED AI TAKES OVER THE WORLD guide that I read somewhere a while back. (But if you DID, I'm not trying to start a beef, I just think it's ironic!)
The downside to this is having to hope that whatever mitigation is in place is robust and effective enough to make a difference by the time the outbreak is detected! The odds of this aren't necessarily terrible, but do you want it to have come to that?
I ~~expect~~ hope that a misaligned AI competent enough to do this would be intelligent enough to come up with such an obvious plan on its own, regardless of how often it was discussed in niche internet forums.
How would you stop it? The existing scrapes of internet text suffice. To censor it from the awareness of a model would require stripping out knowledge of loads of useful biology, as well as the obvious fact that diseases are a thing, and that they reliably kill people. Something that wants to kill people would find that connection as obvious as 2+2=4, even if you remove every mention of bioweapons from the training set. If it wasn't intelligent enough to do so, it was never a threat.
Everything I've said would be dead-simple, I haven't gone into any detail that a biology undergrad or a well-read nerd might not come up with. As far as I'm concerned, it's sufficient to demonstrate the plausibility of my arguments without empowering adversaries in any meaningful way. You won't catch me sharing a .txt file with the list of codons necessary for Super Anthrax to win an internet argument.
My really vague understanding is that long incubation times give the immune system more time to catch the infection early, which doesn't matter as much when it's very new and nobody has antibodies. So eventually everything that had a long one evolves to be shorter on its second pass through the population.
In theory long incubation + 100% mortality rate seems like it would take out a good chunk of the population in the first wave, but in practice people would just Madagascar through it.
Oh sure, but depending on the agent (particularly if it is viral, right?) if you're spreading it to billions of people you're introducing a lot of room for it to gain mutations that might make it less deadly. At least that would be my guess.
Definitely seems plausible. Hopefully instead of using AI to create MURDERVIRUSES people will use it to scan wastewater for signs of said MURDERVIRUSES.
And killing somebody after infection is the easy part - somehow spreading to "every single person", or even a significant fraction, is a million times harder. People really underestimate how hard it is to build superweapons that can end civilization (it's easy in the movies!). I think if there are going to be problems with widespread unfiltered AI, it'll be because a large number of unstable individuals become individually capable of killing thousands of people, rather than a few people managing to kill billions.
Yes. And if AI is All That, I imagine it will actually be fairly good at mitigating bioweapons, more so than other weapons of mass destruction.
That's a hope, yeah.
You don't think $1 trillion has been spent globally on AI so far? I think you're wrong. NVDA's revenue alone was $60 billion in 2024 (figure 95+% of that is AI spend). But that's just a small percentage of the overall cost, which includes energy, servers, and most importantly salaries. The total spend is probably less than $1 trillion per year right now, but inching up towards it. Cumulative spend should be well over $1 trillion.
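For what it's worth, here's the back-of-envelope version of that estimate in code. Every number is either the comment's rough figure or a placeholder assumption of mine (especially the guess that chips are only a tenth of all-in cost), so treat the output as an illustration of the arithmetic, not a measurement:

```python
# Back-of-envelope AI spend estimate. All inputs are rough assumptions:
# the $60B figure is from the comment above; the chip-share and per-year
# scaling factors are placeholders for illustration only.

nvda_2024_revenue = 60e9        # ~$60B revenue, per the comment
ai_share_of_nvda = 0.95         # "figure 95+% of that is AI spend"
chip_share_of_total = 0.10      # assume chips are ~10% of all-in AI cost
                                # (energy, datacenters, salaries make up the rest)

annual_ai_spend = nvda_2024_revenue * ai_share_of_nvda / chip_share_of_total
print(f"Implied annual AI spend: ~${annual_ai_spend / 1e9:,.0f}B")

# Crude multi-year sum, scaling earlier years down relative to 2024.
relative_years = [0.3, 0.6, 1.0]          # rough 2022, 2023, 2024 scale
cumulative = sum(x * annual_ai_spend for x in relative_years)
print(f"Implied cumulative spend: ~${cumulative / 1e12:.1f}T")
```

Under those assumptions you get on the order of $500-600B per year and a cumulative figure a bit over $1T, which is the same ballpark as the claim above; change the chip-share guess and the conclusion moves accordingly.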
It seems like this is a pretty tight path between literally zero economic value in 2025 and apocalypse before 2030. I'm not saying you will be wrong, only that you would be wrong to have any confidence in these predictions. There's still value in Bayesian priors, such as "most years don't have nuclear war" and "no one has created a lab-grown virus that has killed people".
I'll quote Forbes:
Beth Kindig, Forbes, Nov 14, 2024:
Big Tech’s AI spending continues to accelerate at a blistering pace, with the four giants well on track to spend upwards of a quarter trillion dollars predominantly towards AI infrastructure next year.
Though there have recently been concerns about the durability of this AI spending from Big Tech and others downstream, these fears have been assuaged, with management teams stepping out to highlight AI revenue streams approaching and surpassing $10 billion with demand still outpacing capacity.
To the common layperson, LLMs haven't really advanced that much since 2022 or 2023. Sure, each new model might have fancy graphs that show it's better than ever before, but it always feels disappointingly iterative when normal people get their hands on it. The few big leaps have come from the infrastructure surrounding the models that lets us use them in novel ways, e.g. Deep Research is pretty good from what I've heard. DR isn't revolutionary or anything; it just takes what we already had, gives it more processor cycles, and has it produce something with lots of citations, which is genuinely useful for some things. I expect further developments will be like that. It's like how electricity was sort of a flop in industry until we figured out things like the assembly line.
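For a sense of what "takes what we already had, gives it more processor cycles, and has it produce something with lots of citations" can mean in practice, here's a rough sketch of such a loop. `call_model` and `web_search` are hypothetical stubs standing in for an LLM and a search tool; this is not how any particular product is actually implemented.

```python
# Rough sketch of the "more cycles plus citations" pattern described above:
# iterate search -> read -> take notes, then synthesize an answer that cites
# its sources. The stubs below are placeholders, not a real API.

def call_model(prompt: str) -> str:
    return ""   # stand-in for an LLM call

def web_search(query: str) -> list[dict]:
    return []   # stand-in returning [{"url": ..., "text": ...}, ...]

def deep_research(question: str, rounds: int = 5) -> str:
    notes: list[tuple[str, str]] = []            # (url, extracted note) pairs
    for _ in range(rounds):                      # the extra "processor cycles"
        query = call_model(f"Question: {question}\nNotes so far: {notes}\nNext search query?")
        for doc in web_search(query)[:3]:
            note = call_model(f"Summarize what's relevant to '{question}':\n{doc['text']}")
            notes.append((doc["url"], note))
    sources = "\n".join(f"[{i+1}] {url}" for i, (url, _) in enumerate(notes))
    return call_model(
        f"Answer '{question}' using only these notes, citing [n] for each claim:\n"
        f"{notes}\n\nSources:\n{sources}"
    )

print(deep_research("What changed in LLMs since 2023?"))
```

Nothing in the loop is new capability; it's the same underlying model run many times with retrieval bolted on, which is why it feels like infrastructure rather than a leap.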
"AGI" is basically a meme at this point. Nobody can agree on a definition, so we might have had it back in 2022... or we might never have it, based on whatever definition you use. It's a silly point of reference.