This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
When will the AI penny drop?
I returned from lunch to find that a gray morning had given way to a beautiful spring afternoon in the City, the sun shining on courtyard flowers and through the pints of the insurance men standing outside the pub, who still start drinking at midday. I walked into the office, past the receptionists and security staff, then went up to our floor, passed the back office, the HR team who sit near us, our friendly sysadmin, my analysts, associate, my own boss. I sent some emails to a client, to our lawyers, to theirs, called our small graphics team who design graphics for pitchbooks and prospectuses for roadshows in Adobe whatever. I spoke to our team secretary about some flights and a hotel meeting room in a few weeks. I reviewed a bad model and fired off some pls fixes. I called our health insurance provider and spoke to a surprisingly nice woman about some extra information they need for a claim.
And I thought to myself can it really be that all this is about to end, not in the steady process envisioned by a prescient few a decade ago but in an all-encompassing crescendo that will soon overwhelm us all? I walk around now like a tourist in the world I have lived in my whole life, appreciating every strange interaction with another worker, the hum of commerce, the flow of labor. Even the commute has taken on a strange new meaning to me, because I know it might be over so soon.
All of these jobs, including my own, can be automated with current generation AI agents and some relatively minor additional work (much of which can itself be done by AI). Next generation agents (already in testing at leading labs) will be able to take screen and keystroke recordings (plus audio from calls if applicable) of, say, 20 people performing a niche white collar role over a few weeks and learn, pretty much immediately, how to do it as well or better. This job destruction is only part of the puzzle, though, because as these roles go so do tens of millions of other middlemen, from recruiters and consultants and HR and accountants to millions employed at SaaS providers that build tools - like Salesforce, Trello, even Microsoft with Office - that will soon be largely or entirely redundant because whole workflows will be replaced by AI. The friction facilitators of technical modernity, from CRMs to emails to dashboards to spreadsheets to cloud document storage, will be mostly valueless. Adobe alone, which those coworkers use to photoshop cute little cover images for M&A pitchbooks, is worth $173bn and yet has surely been rendered worthless, in the last couple of weeks alone, by new multimodal LLMs that allow for precise image generation and editing by prompt¹. With them will come an almighty economic crash that will affect every business from residential property management to plumbing, automobiles to restaurants. Like the old cartoon trope, it feels like we have run off a cliff but have yet to speak gravity into existence.
It was announced yesterday that employment in the securities industry on Wall Street hit a 30-year high (I suspect that that is ‘since records began’, but if not I suppose it coincides with the final end of open outcry trading). I wonder what that figure will be just a few years from now. This was a great bonus season (albeit mostly in trading), perhaps the last great one. My coworker spent the evening speaking to students at his old high school about careers in finance; students are being prepared for jobs that will not exist, a world that will not exist, by the time they graduate.
Walking through the city I feel a strange sense of foreboding, of a liminal time. Perhaps it is self-induced; I have spent much of the past six months obsessed by 1911 to 1914, the final years of the long 19th century, by Mann and Zweig and Proust. The German writer Florian Illies wrote a work of pop-history about 1913 called “the year before the storm”. Most of it has nothing to do with the coming war or the arms race; it is a portrait (in many ways) of peace and mundanity, of quiet progress, of sports tournaments and scientific advancement and banal artistic introspection, of what felt like a rational and evolutionary march toward modernity tempered by a faint dread, the kind you feel when you see flowers on their last good day. You know what will happen and yet are no less able to stop it than those who are comfortably oblivious.
In recent months I have spoken to almost all the smartest people I know about the coming crisis. Most are still largely oblivious; "new jobs will be created", "this will just make humans more productive", "people said the same thing about the internet in the 90s", and - of course - "it's not real creativity". A few - some quants, the smarter portfolio managers, a couple of VCs who realize that every pitch is from a company that wants to automate one business while relying for revenue on every other industry that will supposedly have just the same need for people, and therefore for middleman SaaS contracts, as it does today - realize what is coming and can talk about little else.
Many who never before expressed any fear or doubts about the future of capitalism have begun what can only be described as prepping: buying land in remote corners of Europe and North America where they have family connections (or sometimes none at all), buying crypto as a hedge rather than an investment, investigating residency in Switzerland, and researching the countries best placed to adapt quickly to an automated age in which service industry exports are liable to collapse (wealthy, with domestic manufacturing, energy resources or nuclear power, reasonably low population density, mostly domestic food production, some natural resources, and a political system capable of quick adaptation). America is blessed with many of these, but its size, political divisions and regional, ethnic and cultural tensions, plus an ingrained, highly individualistic culture mean it will struggle, at least for a time. A gay Japanese friend who previously swore he would never return to his homeland on account of the homophobia he had experienced there has started pouring huge money into his family's ancestral village, and told me directly that he expects some kind of large-scale economic and social collapse as a result of AI to force him to return home soon.
Unfortunately Britain, where manufacturing has been largely outsourced, most food and much fuel must be imported, and which is heavily reliant on exactly the professional services that will be automated first, seems likely to go through one of the harshest transitions. A Scottish portfolio manager, probably in his 40s, told me of the compound he is building on one of the remote islands off Scotland's west coast. He grew up in Edinburgh, but was considering contributing a large amount of money towards some church repairs and the renovation of a beloved local store or pub of some kind to endear himself to the community in case he needed it. I presume that among big tech money, where I know far fewer people than others here, similar preparations are being made. I have made a few smaller preparations of my own, although what started as 'just in case' now occupies an ever greater place in my imagination.
For almost ten years we have discussed politics and society on this forum. Now events, at last, seem about to overwhelm us. It is unclear whether AGI will entrench, reshape or collapse existing power structures, or whether it will freeze or accelerate the culture war. Much depends on who exactly is in power when things happen, and on whether tools that create chaos (like those causing mass unemployment) arrive much before those that create order (mass autonomous police drone fleets, ubiquitous VR dopamine at negligible cost). It is also a twist of fate that so many involved in AI research were themselves loosely involved in the Silicon Valley circles that spawned the rationalist movement, and eventually, through that and Scott, this place. For a long time there was truth in the old internet adage that "nothing ever happens". I think it will be hard to say the same five years from now.
¹ Some part of me wants to resign and short the big SaaS firms that are going to crash first, but I've always been a bad gambler (and am lucky enough, mostly, to know it).
I dunno, AI still seems incapable of doing very basic things I ask it to. We don't even have self-driving cars yet! This seems like something that's always "just a few years away" as a trick to get investors excited.
Surveys of actual companies show that basically no one is using AI at work at all. Users here like to argue this point, but many of you are programmers - exactly the demographic best placed to use and exploit these LLMs.
Driving cars is among the later capabilities you'd expect to fall, if you switch off human conceit and take the far view. You're asking to beat billions of years of evolution in a data-poor domain (navigating the real world) rather than some thousands (written) or at most hundreds of thousands (spoken) in a well-databased one (words and symbolic reasoning).
Uh, what do you mean we don't have self-driving cars? I took two driverless Waymo rides last week, navigating the nasty, twisting streets of SF. It drove just fine. Maybe you could argue it's not cost-effective yet, or that there are still regulatory hurdles, but I think what you meant is that the tech doesn't work. And that's clearly false.
Also, I'm a programmer and productively using ChatGPT at work, so I'd say the score so far is Magusoflight 0, my lying eyes 2.
you totally misunderstood my comment.
Sarcasm that contains no marker of intent or actual humor is failed sarcasm.
If you intended sarcasm, then this is an excellent example of Poe's law. There are people here who would unironically say the same thing, and have.
I wasn’t being sarcastic. It’s strange, I guess this part was confusing to you?
The implication I'll unpack for you - if you're a programmer you live in a bubble. Coding seems extremely important and useful - and since one of the few things LLMs can do well is coding, this makes them seem very productive and useful! Hence programmers are very biased on this topic. You don't really see how unpragmatic LLMs are for any other occupation.
I'm a doctor. I think LLMs are very "pragmatic" or at least immensely useful for my profession. They could do much more if regulatory systems allowed them to.
I'm speaking in generalities.
I work in the health system and we STILL rely on a paper healthcare chart.
In 2025
Holy shit. In the U.S? I know of a few but very very few at this point.
Even the VA figured this shit out.
My man, I was using paper charts in the NHS till about a month ago. Thankfully they fixed the wifi, and we're living in the scifi future of 1999 now.
That is not a significant barrier. Get someone to transcribe them, they've probably got better handwriting than a doctor.
We absolutely, 100% have self driving cars that are accessible to consumers.
https://youtube.com/watch?v=Go6Syv8xNMA?si=esnCdfNdiVCH1OCv
https://youtube.com/watch?v=92aBMTpeQB8?si=sj4QHy8uDLDLLitW
Just not everywhere just yet. Maybe you can even say the technology isn't "mature," but it is absolutely here.
The Waymo in California thing is such a small experiment, and the upside of fudging with it is so high, that if it turned out in 5 years that it was actually mostly Indians in a warehouse doing the driving, I wouldn't even be surprised.
We don't see any waymos driving on the sidewalk to dodge traffic jams, so we can disprove the Artificial Indian hypothesis.
I can't buy one. Waymo operates in select zones of select municipalities. It's not accessible.
I presume you can't buy a Bugatti either. It's still an option that real living people can get for cash.
There's nothing standing in the way of Waymo rolling out an ever wider net. SF just happens to be an excellent place to start.
There's quite a bit: regulation, weather, geography, traffic levels, driver behaviour, crime.
I apologize for the hyperbole, and those are mostly valid considerations. I don't think traffic, driver behavior and crime matter, if they can work in SF at a profit. The other three are solvable or quasi-solved; regulation definitely is.
Isn’t SF one of the most tech-friendly cities in the nation? That’s where all the HQs are right?
Certainly. That is one of the drivers behind Waymo opening shop there. But even non rabid technophiles use their services, a car that drives itself and well is a service that almost anyone will pay for at a given price.
Yeah, it’s just that liability tolerance for SDCs is very low. That’s why Waymo cars drive extremely conservatively and they’ve been careful about expanding into routes where more aggressive driving is necessary, like airport pickup (although it’s coming). But it’s all going to happen pretty soon.
I recently saw a travelogue video by Noel Phillips in which he was picked up by a Waymo at PHX.
Yeah, we have them here in Phoenix, and as a native resident of Phoenix, I can say that we have some truly questionable human drivers on the road as it is.
Self driving = ability to do 100% of what a competent human driver can do in any location, without geofencing.
By that standard, a good fraction of cars on the road don't qualify as human-driven.
(My idea for self-driving car laws: It has to pass a standard driver's license exam, and has to carry insurance. Anything past that is consumer protection instead of a valid safety concern.)
That would be a terrible law. Human driver's exams are made to filter out bad human drivers. The kinds of mistakes humans make are not the same as those made by AI. By virtue of being human, you can assume with high confidence that the examinee will not make whole classes of fatal errors, while you cannot yet assume that of AI drivers.
They may be good enough now, it's just that the standard you propose is not a good filter.
How about "has to perform no worse than the worst human with a valid driver's license (without geofencing, etc. etc.), and has to perform in a manner that would not result in the driver's license being taken away from the human"? That's a pretty charitable standard, I'd say (and we should probably aim for average, rather than worst).
The problem with that is that it's fairly easy to train an AI to pass an exam without it implying it can perform in general conditions. I think we already have LLMs that can pass a bar exam, for example.
The geofencing is something I have some ambiguity on. Is it primarily legal/regulatory, or is it because Waymo requires extensive pre-data to function? I.e. if you dropped a Waymo on a Montana back road, would it be able to drive and navigate as well as a human driver in the same situation?
It seems like a bit of an unfair standard to hold it against Waymo capabilities if the issue is primarily legal/regulatory.
Sure, if they already have that capability and it’s only regulations holding them up then it is real self driving.
Confusion sets in when you spend most of your life not doing anything real. Metrics and statistics were supposed to be a tool that would aid in the interpretation of reality, not supercede it. Just because a salesman with some metrics claims that these models are better than butter does not make it true. Even if they manage to convince every single human alive.
I just tried out GPT 4.5, asking some questions about the game Old School Runescape (because every metric like math has been gamed to hell and back). This game has the best wiki ever created, effectively documenting everything there is to know about the game in unnecessary detail. Spoiler: the answer is completely incoherent. It makes up item names and locations, and misunderstands basic concepts like what type of gear is useful where. Asking it for a gear setup for a specific boss produces horrible results, despite the fact that it could literally just have copied the wiki (which has some faults like overdoing min-maxing, but it's generally coherent). The net utility of this answer was negative given the incorrect answer, the time it took for me to read it, and the cost of generating it (which is quite high; I wonder what happens when these companies want to make money).
Same thing happens when asking questions about programming that are not student-level (student-level questions just return an answer copied from a textbook. Did you know you can solve a leetcode question in 10 seconds by just copying the answer someone else wrote down? Holy shit!). The idea that these models will soon (especially given the plateau they seem to be hitting) replace real work is absurd. They will make programming faster, which means we'll build more shit (that's probably not needed, but that's another argument). They currently make me about 50% faster and make programming more fun, since they're a fantastic search tool as long as they're used with care. But they're also destroying the knowledge of students. Test scores are going up, but understanding is dropping like a stone.
I'm sure it will keep getting better; eventually a large enough model with a gigantic dataset will get better at fooling people, but it's still just a poor image of reality. Of course this is how humans also function to some degree, copying other people more competent than us. However, most of us combine that with real knowledge (creating a coherent model of something that manages to predict something new accurately). Without that part it's just a race to the bottom.
But a lot of people are like you, so these models will start to get used everywhere, destroying quality like never before. For example, I tried contacting a company regarding a missing order a few weeks ago. Their first line support had been replaced by an AI. Completely useless. It kept responding to the question it thought I had asked, instead of the one I actually asked. Then it asked me to double-check things I had told it I had already checked. The funny thing is that a better automated support could have been created 20 years ago with some basic scripting (looking for an order number and responding with details if one was included). Or by having an intern spend 30 seconds copy-pasting data into an email. But here we are, at the AI revolution, doing things we have always been able to do, now in a shittier and more costly way. With some added pictures to make it seem useful. Fits right in with the finance world, I guess?
I can however imagine a future workflow where these models do basic tasks (answering emails, business operations, programming tickets) overseen by someone who can intervene if they mess up. But this won't end capitalism. If you stopped LARPing on this forum/twitter you would barely even notice it. Though it is a shame that graphic design and similar fields will be hurt more than they should be.
If you're really an SWE, I must presume that you're not speaking in good faith here.
You must know that GPT 4.5 is pretty mid as far as instruction models of this generation go. DeepSeek's latest is close in performance and literally 100-200x cheaper. More importantly, what do you think would be a random college-educated human's score on Runescape questions? It is so trivial to grant these systems access to tools for web browsing as to not be worth talking about.
The rest of your comment is the same style. What is amazing and terrifying about LLMs is not their knowledge retrieval but generality and in-context learning. At sufficient context length and trained to appropriately leverage existing tools, there is nothing in the realm of pure cognitive work they cannot do on human level. This is not hard to understand. So tell me: what are you going for? Just trying to assuage your own worries?
I am also a SWE and have the same experience. The smartest models essentially work as good search engines, an interface between me and the api or language I am working with. No matter the prompt engineering or context window, they are utterly incapable of either reliable or good solutions to any moderately complex problem.
Please understand that I (and @Coolguy1337) have every incentive to leverage AI tools as much as possible. I use them daily for help with coding. If they could actually do my job I'd gladly sit back and let them do it--I already let them do as much of my job as they can.
You must know this isn't true or we'd have already lost our jobs.
If you're not convinced yet, let me outline my general coding process.
I'll tell you with confidence that AI can't do a single one of these steps. I know this because I use AI at every step along the way, and while it works ok as a search engine (for example it's great at finding similar existing implementations) it simply does not work at all as an actual problem solver. Not even for any individual step, let alone all the steps together, no matter how much prompt engineering is used.
Seriously, I mean, if you were actually right, I could just retire and give an AI agent my job. At least for the year or so it will take for my industry to catch on. That time could be used to relax or find an AI-proof job. But I'm not worried at all about my job (at least not from AI agents) because I have extensive direct firsthand experience with them and they are still extremely limited.
I wish I had a dollar for every time people use the current state of AI as their primary justification for claiming it won't get noticeably better, I wouldn't need UBI.
I just used Gemini 2.5 to reproduce, from memory, the NICE CKS guidance for the diagnosis and management of dementia. I explicitly told it to use its own knowledge, and made sure it didn't have grounding with Google search enabled. I then spot-checked it with reference to the official website.
It was bang-on. I'd call it a 9.5/10 reproduction, only falling short of perfection through minor sins of omission (it didn't mention all the validated screening tests by name, skipped a few alternative drugs that I wasn't even aware of before). It wasn't a word for word reproduction, but it covered all the essentials and even most of the fine detail.
The net utility of this answer is rather high to say the least, and I don't expect even senior clinicians who haven't explicitly tried to memorize the entire page to be able to do better from memory. If you want to argue that I could have just googled this, well, you could have just googled the Runescape build too.
I think it's fair to say that this makes your Runescape example seem like an inconsequential failing. It's about the same magnitude of error as saying that a world-class surgeon is incompetent because he sometimes forgets how to lace his shoes.
You didn't even use the best model for the job, for a query like that you'd want a reasoning model. 4.5 is a relic of a different regime, too weird to live, too rare to die. OAI pushed it out because people were clamoring for it. I expect that with the same prompt, o3 or o1, which I presume you have access to as a paying user, would fare much better.
Man, there's plateaus, and there's plateaus. Anyone who thinks this is an AI winter probably packs a fur coat to the Bahamas.
The rate of iteration in AI development has ramped up massively, which contributes to the impression that there aren't massive gaps between successive models. Which is true, jumps of the same magnitude as say GPT 3.5 to 4 are rare, but that's mostly because the race is so hot that companies release new versions the moment they have even the slightest justification in performance. It's not like back when OAI could leisurely dole out releases, their competitors have caught up or even beaten them in some aspects.
In the last year, we had a paradigm shift with reasoning models like o1 or R1. We just got public access to native image gen.
Even as the old scaling paradigms leveled off, we've already found new ones. Brand new steep slopes of the sigmoidal curve to ascend.
METR finds that the duration of tasks (based on how long humans take to do it) that AIs can reliably perform doubles every 7 months.
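To make it concrete, here is a minimal back-of-envelope sketch of what that trend would imply if it simply continued. The 7-month doubling time comes from the claim above; the starting task length of one human-hour and the horizons I print are my own illustrative assumptions, not METR's data or methodology:

```python
# Back-of-envelope extrapolation of a "task horizon doubles every ~7 months" trend.
# The doubling time is taken from the claim above; the starting horizon of 1
# human-hour and the chosen years are illustrative assumptions only.

def task_horizon_hours(years_from_now: float,
                       current_horizon_hours: float = 1.0,
                       doubling_time_months: float = 7.0) -> float:
    """Length of task (in human-hours) an AI could reliably complete."""
    doublings = (years_from_now * 12.0) / doubling_time_months
    return current_horizon_hours * 2.0 ** doublings

for years in (1, 2, 3, 5):
    print(f"{years} year(s) out: ~{task_horizon_hours(years):.0f} human-hour tasks")
```

On those assumptions you go from hour-long tasks today to roughly work-week-long tasks within three years; whether the trend actually holds that long is exactly what's in dispute.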
At any rate, what does it matter? I expect reality to smack you in the face, and that's always more convincing than random people on the internet asking why you can't even look ahead while considering even modest and iterative improvement.
I've tried the reasoning models. They fail just as much (just tried Gemini 2.5 too and it did even worse). The purpose was to illustrate an example of how they fail. To showcase their poor reliability. I did not say they won't get better. They will, just not as much as you think. You can't just take 2 datapoints and extrapolate forever.
And I don't get your example; wouldn't the NICE CKS be in the dataset many times over? Maybe my point wasn't clear. These tools are amazing as search engines as long as the user is responsible and able to validate the responses. That does not mean they are thinking very well, which means they will have a hard time doing things not in the dataset. These models are not a pathway to AGI. They might be a part of it, but it's gonna need something else. And that part, or those parts, might be discovered tomorrow, or in 50 years.
And I don't see why reality will smack me in the face. I'm already using these as much as possible since they are great tools. But I don't expect my work to look very different in 2030 compared to now. Since programming does not feel very different today compared to 2015. The main problem has always been to make the program not collapse under its own weight, by simplifying it as much as possible. Typing the code has never been relevant. Thanks for the comment btw, it made me try out programming with gemini 2.5 and it's pretty good.
I mean, I assume both of us are operating on far more than 2 data points. I just think that if you open with an example of a model failing at a rather inconsequential task, I'm eligible to respond with an example of it succeeding at a task that could be more important.
My impression of LLMs is that in the domains I personally care about:
They've been great at 1 and 3 for a while, since GPT-4. 2? It's only circa Claude 3.5 Sonnet that I've been reasonably happy with their creative output, occasionally very impressed.
Number 3 encompasses a whole heap of topics. Back in the day, I'd spot check far more frequently, these days, if something looks iffy, these days I'll shop around with different SOTA models and see if they've got a consensus or critique that makes sense to me. This almost never fails me.
Almost certainly. But does that really matter to the end user? I don't know if the RS wiki has anti-scraping measures, but there are tons of random nuggets of RS build and item guides all over the internet. Memorization isn't the only reason these models are good: they think, or do something so indistinguishable from the output of human thought that it doesn't matter.
If you met a person who was secretly GPT-4.5 in disguise, you would be rather unlikely to be able to tell at all that they weren't a normal human, not unless you went about suspicious from the start. (Don't ask me how this thought experiment would work, assume a human who just reads lines off AR lenses I guess).
This is a far more reasonable take in my opinion, if you'd said this at the start I'd have been far more agreeable.
I have minor disagreements nonetheless:
Well, if you're using the tools regularly and paying for them, you'll note improvements if and when they come. I expect reality to smack me in the face too, in the sense that even if I expect all kinds of AI related shenanigans, seeing a brick wall coming at my car doesn't matter all that much when I don't control the brakes.
For a short span of time, I was seriously considering switching careers from medicine to ML. I did MIT OCW programs, managed to solve one Leetcode medium, and then realized that AI was getting better at coding faster than I would. (And that there are a million Indian coders already, that was a factor). I'm not saying I'm a programmer, but I have at least a superficial understanding.
I distinctly remember what a difference GPT-4 made. GPT-3.5 was tripped up by even simple problems and hallucinated all the time. 4 was usually reliable, and I would wonder how I'd ever learned to code before it.
I have little reason to write code these days, but I can see myself vibe-coding. Despite your claim that you don't feel programming has changed since 2015, there are no end of talented programmers, like Karpathy or Carmack, who would disagree.
You're welcome. It's probably the best LLM for code at the moment. That title changes hands every other week, but it's true for now.
Okay, can we get people to start using 'delusions' or 'confabulations' instead of 'hallucinations'? This always irks me.
I know we've bickered about this in the past but I think you have to be very cautious about what decision support tools and LLMs are doing in practical medicine at this time - fact recall is not most of the problem or difficulty.
The average person here could use UpToDate to answer many types of clinical questions, even without the clinical context that you, I, and ChatGPT have.
That's not the hard part of medicine. The hard part is managing volume (which AI tools can do better than people) and vagary (which they are shit at). Patients reporting symptoms incorrectly, complex comorbidity, a Physical Exam, these sorts of things are HARD.
Furthermore the research base in medicine is ass, and deciding if you want a decision support tool to use the research base or not is not a simple question.
On the topic of hallucinations/confabulations from LLMs in medicine:
https://x.com/emollick/status/1899562684405670394
This should scare you. It certainly scares me. The paper in question has no end of big names in it. Sigh, what happened to loyalty to your professional brethren? I might praise LLMs, but I'm not conducting the studies that put us out of work.
I expect that without medical education, and only googling things, the average person might get by fine for the majority of complaints, but the moment it gets complex (as in the medical presentation isn't textbook), they have a rate of error that mostly justifies deferring to a medical professional.
I don't think this is true when LLMs are involved. When presented with the same data as a human clinician, they're good enough to be the kind of doctor who wouldn't lose their license. The primary obstacles, as I see them, lie in legality, collecting the data, and the fact that the system is not set up for a user that has no arms and legs.
I expect that when compared to a telemedicine setup, an LLM would do just as well, or too close to call.
I disagree that they can't handle vagary. They seem epistemically well calibrated, consider horses before zebras, and are perfectly capable of asking clarifying questions. If a user lies, human doctors are often shit out of luck. In a psych setting, I'd be forced to go off previous records and seek collateral histories.
Complex comorbidities? I haven't run into a scenario where an LLM gave me a grossly incorrect answer. It's been a while since I was an ICU doc, that was GPT-3 days, but I don't think they'd have bungled the management of any case that comes to mind.
Physical exams? Big issue, but if existing medical systems often use non-doctor AHPs to triage, then LLMs can often slot into the position of the senior clinician. I wouldn't trust the average psych consultant to find anything but the rather obvious physical abnormalities. They spend blissful decades avoiding PRs or palpating livers. In other specialities, such as for internists, that's certainly different.
I don't think an LLM could replace me out of the box. I think a system that included an LLM, with additional human support, could, and for significant cost-savings.
Where I currently work, we're more bed-constrained than anything, and that's true for a lot of in-patient psych work. My workload is 90% paperwork versus interacting with patients. My boss, probably 50%. He's actually doing more real work, at least in terms of care provided.
Current setup:
3-4 resident or intern doctors. 1 in-patient cons. 1 outpatient cons. 4 nurses a ward. 4-5 HCAs per ward. Two wards total, and about 16-20 patients.
?number of AHPs like mental health nurses and social workers triaging out in the community. 2 ward clerks. A secretary or two, and a bunch of people whose roles are still inscrutable to me.
Today, if you gave me the money and computers that weren't locked down, I could probably get rid of half the doctors, and one of the clerks. I could probably knock off a consultant, but at significant risk of degrading service to unacceptable levels.
We're rather underemployed as-is, and this is a sleepy district hospital, so I'm considering the case where it's not.
You would need at least one trainee or intern doctor who remembered clinical medicine. A trainee 2 years ahead of me would be effectively autonomous, and could replace a cons barring the legal authority the latter holds. If you need token human oversight for prescribing and authorizing detention, then keep a cons and have him see the truly difficult cases.
I don't think even the ridiculous amount of electronic paperwork we have would rack up more than $20 a day for LLM queries.
I estimate this would represent about £292,910 in savings from not needing to employ those people, without degrading service. I think I'm grossly over-estimating LLM query costs, asking one (how kind of it) suggests a more realistic $5 a day.
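For transparency, the per-day figure is just token arithmetic. Here is a minimal sketch with made-up volumes and illustrative per-million-token prices; every constant below is an assumption of mine, not any provider's actual price list:

```python
# Rough daily LLM cost for ward paperwork. Every constant is an illustrative
# assumption: query volume, token counts, and per-million-token prices.

QUERIES_PER_DAY = 200             # assumed: notes, letters, discharge summaries
INPUT_TOKENS_PER_QUERY = 4_000    # assumed average prompt size
OUTPUT_TOKENS_PER_QUERY = 1_500   # assumed average response size
USD_PER_1M_INPUT_TOKENS = 3.00    # illustrative price
USD_PER_1M_OUTPUT_TOKENS = 15.00  # illustrative price

daily_cost = QUERIES_PER_DAY * (
    INPUT_TOKENS_PER_QUERY / 1e6 * USD_PER_1M_INPUT_TOKENS
    + OUTPUT_TOKENS_PER_QUERY / 1e6 * USD_PER_1M_OUTPUT_TOKENS
)
print(f"~${daily_cost:.2f}/day, ~${daily_cost * 365:,.0f}/year")
```

With those assumptions it lands at about $7 a day, i.e. in the same ballpark as the $5-$20 range above, and trivial next to a single salary.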
This is far from a hyperoptimized setup. A lot of the social workers spend a good fraction of their time doing paperwork and admin. Easy savings there, have the rest go out and glad-hand.
I re-iterate that this is something I'm quite sure could be done today. At a certain point, it would stop making sense to train new psychiatrists at all, and that day might be now (not a 100% confidence claim). In 2 years? 5?
Do keep in mind how terrible most medical research is, and that includes research into our replacements. This isn't from lack of effort but from the various systems, pressures, and ethics at play.
How do you simulate a real patient encounter when testing an LLM? Well, maybe you write a vignette (okay, that's artificial and not a good example). Maybe you sanitize the data inputs and have a physician translate them into the LLM. Well shit, that's not good either.
Do you have the patient directly talk to the LLM and have someone else feed in lab results? Okay maybe getting closer but let's see evidence they are actually doing that.
All in the setting of people very motivated to show that the tool works well and who are therefore biased in research publication (not to mention all the people who run similar experiments and find that it doesn't work but can't get published!).
You see this all the time in microdosing, weed, and psychedelic research. The quality is ass.
Also keep in mind that a good physician is also a manager - you are picking up the slack on everyone else's job, calling family, coordinating communication for a variety of people, and doing things like actually convincing the patient to follow recommendations.
I haven't seen any papers on an LLM's attempts to get someone to take their 'beetus medication vs. a living, breathing person.
Also, Psych will be up there with the proceduralists among the last to be replaced.
Also also other white collar jobs will go first.
I expect this would work. You could have the AI be something like GPT-4o Advanced Voice for the audio communication. You could record video and feed it into the LLM. This is something you can do now with Gemini, I'm not sure about ChatGPT.
You could, alternatively, have a human (cheaper than the doctor) handle the fussy bits. Ask the questions the AI wants asked, while there's a continuous processing loop in the background.
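A minimal sketch of what that loop could look like, with a human relaying the model's questions and typing back the answers; query_llm here is a hypothetical placeholder I'm inventing for illustration, not a real API:

```python
# Human-in-the-loop consultation sketch: the model proposes the next question,
# a human asks it and records the patient's answer, and the loop ends when the
# model returns a management plan for clinician sign-off.
# query_llm() is a hypothetical placeholder, not a real model API.

def query_llm(transcript: list[str]) -> str:
    raise NotImplementedError("swap in a real model API call here")

def run_consultation() -> None:
    transcript: list[str] = []
    while True:
        suggestion = query_llm(transcript)   # either a question or "PLAN: ..."
        if suggestion.startswith("PLAN:"):
            print("Proposed plan for clinician review:\n" + suggestion)
            break
        answer = input(f"Ask the patient: {suggestion}\nPatient's answer: ")
        transcript.append(f"Q: {suggestion}")
        transcript.append(f"A: {answer}")
```

The orchestration is the trivial part; the hard parts, as above, are capturing the data and keeping a human on the hook for sign-off.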
No promises, but I could try recording a video of myself pretending to be a patient and see how it fares.
I mean, quite a few of the authors are doctors, and I presume they'd also have a stake in us being gainfully employed.
I'd take orders from an LLM, if I was being paid to. This doesn't represent the bulk of a doctor's work, so if you keep a fraction of them around... People are already being conditioned to take what LLMs say seriously. They can be convinced to take them more seriously, especially if vouched for.
That specific topic? Me neither. But there are plenty of studies of the ability of LLMs to persuade humans, and the very short answer is that they're not bad.
He didn't say that. He said that the state today is not very good, not that being unimpressive today means it will be unimpressive in the future.
Besides which, your logic cuts both ways. Rates of change are not constant. Moore's Law was a damn good guarantee of processors getting faster year over year... right until it wasn't, and it very likely never will be again. Maybe AI will keep improving fast enough, for long enough, that it really will become all it's hyped up to be within 5-10 years. But neither of us actually knows whether that's true, and your boundless optimism is every bit as misplaced as if I were to say it definitely won't happen.
This conveys to me the strong implication that in the near term, models will make minimal improvements.
At the very beginning, he said that benchmarks are Goodharted and given too much weight. That's not a very controversial statement, I'm happy to say it has merit, but I can also say that these improvements are noticeable:
You say:
I think that blindly extrapolating lines on the graph to infinity is as bad an error as thinking they must stop now. Both are mistakes, reversed stupidity isn't intelligence.
You can see me noting that the previous scaling laws no longer hold as strongly. Diminishing returns make scaling models to the size of GPT 4.5 - spending compute just on more model parameters and longer training on larger datasets - no longer worth the investment.
Yet we've found a new scaling law: test-time compute, using reasoning and search, which has started afresh and hasn't shown any sign of leveling out.
Moore's law was an observation of both increasing transistor/$ and also increasing transistor density.
The former metric hasn't budged, and newer nodes might be more expensive per transistor. Yet the density, and hence available compute, continues to improve. Newer computers are faster than older ones, and we occasionally get a sudden bump - for example, Apple and their M1.
Note that the doubling time for Moore's law was revised multiple times. Right now, the transistor/unit area seems to double every 3-4 years. It's not fair to say the law is dead, but it's clearly struggling.
Am I certain that AI will continue to improve to superhuman levels? No. I don't think anybody is justified in saying that. I just think it's more likely than not.
Standing where I am, seeing the straight line, I see no indication of it flattening out in the immediate future. Hundreds of billions of dollars and thousands of the world's brightest and best paid scientists and engineers are working on keeping it going. We are far from hitting the true constraints of cost, power, compute and data. Some of those constraints once thought critical don't even apply.
Let's go like 2 years without noticeable improvement before people start writing things off.
Why hasn't it already?
My wife worked about five years ago as a credit analyst, where part of her job involved determining whether or not to extend extra lines of credit: the easiest thing in the world (I would think) to automate. Really, a very simple algorithm based off of known data should be able to make those decisions, right? But my wife, using extremely outdated software, at a place with massive employee retention problems due to insanely high workloads, was tasked with following a set of general guidelines to determine whether or not to extend additional credit. In some cases the guidelines were a bit ambiguous. She was instructed by her manager to use her gut.
As I think I've mentioned before, I work with AI for my IRL job fairly extensively, although mostly second-hand. The work we do now would have required much more human effort prior to modern AI models, and having been involved in the transition between "useless-to-us-GPT" and "oh wow this is actually good" I can tell you that our model of action pivoted away from mass employment. But we still need people - the AI requires a lot of hand-holding, although I am optimistic it will improve in that regard - and AI can't sell people on a product. You seem to be envisioning a world where an AI can do the work of 10 people at a 14 person company, so the company shrinks to 4 people. I'm living in a world where AI can do the work of 10 people, so we're likely to employ (let's say) 10 people instead of 20 and do 100x the work the 20 people would have been able to do. It's quite possible that in our endeavor the AI is actually the difference between success and failure and when it's all said and done by 2050 we end up employing 50 people instead of zero.
How far that generalizes, I do not know. What I do know is that "capitalism" is often extraordinarily inefficient already. If AI ends up doing jobs that could have been replaced in whole or in part by automation a decade before anyone had ever heard of "ChatGPT" it will be because AI is the new and sexy thing, not because "capitalism" is insanely efficient and good at making decisions. It seems quite plausible to me that people will still be using their gut at my wife's place of employment at the same time that AI is giving input into high-level decisions in Silicon Valley boardrooms.
I definitely believe that AI and automation change the shape of industry over the next 50 years - and yes, the next 5. What I would not bet on (absent other factors, which are plenteous) is everyone waking up the same day and deciding to fire all their employees and replace them with AI, mass pandemonium in the streets. For one thing, the people who would make the decision to do that are the people least likely to be comfortable with using AI. Instead, they will ask the people most likely to be replaced by AI to study the question of whether or not to replace them with AI. How do you think that's going to go? There's also the "lobster dominance hierarchy" - people prefer to boss other people around rather than lord it over computers. Money and personnel are a measuring stick of importance and the managerial class won't give up on that easily.
This is the most reasonable AI skeptic take I've seen here, and that's high praise. I disagree on quite a few points, which add up, but I can see why an intelligent person who shares slightly different priors would come to your conclusion.
Thank you :)
For one, I'd like to point out that this has been a constant for centuries at this point, dating back to at least the industrial revolution. I was discussing family history a while back, and we have photos of my great grandfather proudly picketing for a union that doesn't exist anymore. That entire profession was gone and the union folded before I was born because of pre-AI "automation" (computers, really). Entire professions have disappeared since WWII because of the spreadsheet — VisiCalc famously sold users on a $2000 Apple II to run a $100 application, in 1979 dollars!
The real question that comes to mind about "AI" in these days is whether this is a rather impactful step change (like the spreadsheet, or the smartphone), or whether this is something else in kind. And I'm somewhat leaning toward the former, and find that arguments for the latter tend to under-sell the impact of major technology changes even within my lifetime; for all the concern about "singularity", exponential growth manages to look pretty similar but lacks the vertical asymptote. But I'm open to hearing other ideas.
Personally I find "spreadsheets" very apt so far. I think they definitely have the potential to disrupt some jobs. But if I'm being honest I think a lot of the "email jobs" are begging for disruption anyway, for other reasons. I would not be surprised if "AI" takes the blame for something that was more-or-less going to happen anyway.
I think robotics (which obviously has a lot of overlap with AI!) is potentially vastly more impactful than just "an AI that can do your email job." If you started randomly shooting "email job holders" and "guys who maintain power lines and fiber optic cables" you would notice the disruption in the power lines and fiber optic cables much sooner unless you got weirdly (un?)lucky shooting email jobbers. Similarly, AI will have a much bigger impact if it comes with concrete physical improvements instead of just better video games, or more website code, or better-written emails, or whatever, notwithstanding the fact that a lot of people work in the video game/coding/email industry.
(I hope I am right about that. I guess wireheading is kinda an option...)
In my opinion, it hasn't because (contrary to what AI hype proponents say) it can't. AI simply isn't very good at doing things yet. To use the specific example I know well and actually have interacted with, LLMs don't write good code. It has wildly inaccurate bits that you have to check up on, sometimes to the point that it isn't even syntactically valid. It actually slows you down in many cases to try to use LLMs for programming. A skilled programmer can use AI tools as a force multiplier in some situations, so they do have a (fairly narrow) use case. But the idea that you could replace programmers with LLMs is just plain laughable at this stage of the game.
I'm not an expert in every field. But given that AI is not actually very good for coding, one of the things its proponents claim it to be good at... I don't exactly have high hopes that AI is good at those other things either. Maybe it'll get there, but there's not sufficient reason to believe "yes it will definitely happen" just yet. We have no way of knowing whether the rate of progress from the last few years will continue, or whether we are going to hit an unforseen wall that blocks all progress. We'll just have to wait and see.
So, I think that is why the great AI replacement hasn't occurred. It isn't able to successfully happen yet. At best, right now you would replace humans with AI that does the job extremely poorly, and then (in a few years, when the hype dies down) humans would get hired back to fix all the stuff the AI broke. Which is a distinct possibility, as that is what happened a couple of decades ago with outsourcing jobs to India. But as painful as that would be, it's not "all these human jobs are over now".
For an example of this happening literally right now, see ThePrimeagen and other Youtubers spending a full week streaming themselves making a tower defense game through "vibe coding." Prime and the other streamers he's working with are all talented and knowledgeable devs, but what they're making is an absolute mess. They (or two or three decently competent devs at a weekend game jam) could make the same or a better game in a fraction of the time if they were coding directly instead of using an AI to do it. And the amount of work they have to do to fix the AI's messes is way more than they'd need to do to just make the damn game themselves.
Was it on the motte that I saw this joke again recently? It feels appropriate though.
A guy is walking through the park when he comes across a chess table with a man seated on one side and a dog seated on the other. He stops to watch them and is astounded to see the dog is actually playing! He professes his astonishment to the man: "Your dog is amazing, I can't believe he can play chess!" The man snorts, however, and turns to him with a sneer: "Amazing? Amazing nothing, I still beat him nine times out of ten."
I think it's amazing that we can even consider getting a computer to write a game for us, having grown up in the era where you had to write a game before you could play it (unless you were wealthy enough to own a cassette player).
It was on the motte that I replied to this joke:
Beware fictional evidence.
The joke works because we have assumptions about what it means to be able to play chess, and we know that a dog playing chess with any significant chance of success implies a much greater jump in intelligence than the jump between playing poorly and playing well.
If the dog was playing chess using some method that was not like how humans play chess, and which couldn't generalize to being able to play well, the joke wouldn't be very funny. Of course there isn't such a method for chess-playing dogs. But we know that Claude doesn't play Pokemon like humans do, and this may very well not generalize to playing as well as a human.
(Notice that your assumptions are wrong for computers playing chess. My Gameboy can beat me in chess. It has no chance of taking over the world.)
Humor is subjective and all that, but I don't understand this perspective. I'd find the joke exactly as funny no matter what way the dog was playing chess, whether it was thinking through its moves like a human theoretically is, or, I dunno, moving pieces by following scents that happened to cause its nose to push pieces in ways that followed the rules and was good enough to defeat a human player at some rate greater than chance. The humor in the joke to me comes from the player downplaying this completely absurd super-canine ability the dog has, and that ability remains the same no matter how the dog was accomplishing this, and no matter if it wouldn't imply any sort of general ability for the dog to become better at chess. Simply moving the pieces in a way that follows the rules most of the time would already be mindblowingly impressive for a dog, to the extent that the joke would still be funny.
It's the same basic idea: we already know how hard it is to play chess and it's far more than a dog can normally do. And it's this knowledge which makes the joke a joke.
The joke isn't a scenario where the dog plays chess under such unusual circumstances that it doesn't mean the dog is smart.
And imagine that it's 1981 and someone is showing you their brand new ZX81. The exact same thing happens that happens with the dog, down to you saying that the chess program can be beaten nine times out of ten. Should you conclude that actually, ZX81s are really really smart because playing chess at all is impressive? Or should you conclude that even though humans use general intelligence to play chess, the ZX81 instead uses a very nonhuman specialized method, and the ZX81 isn't very smart despite how impressive playing chess is?
If a few years later the ZX81 was replaced with a Commodore 64, and you couldn't beat the Commodore 64 in chess, would you decide that the ZX81 is dumb, but the Commodore 64 is smart?
I don't think it would make sense for a dog to be able to play chess at all while also that not meaning that the dog is "smart" in some real sense. Perhaps it doesn't understand the rules of chess or the very concept of a competitive board game, but if it's able to push around the pieces on the board in a way that conforms to the game's rules in a manner that allows it to defeat humans (who are presumably competent at chess and genuinely attempting to win) some non-trivial percentage of the time through its own volition without some marionette strings or external commands or something, I would characterize that dog as "smart." Perhaps the dog had an extra smart trainer, but I doubt that even an ASI-level smart trainer could train the smartest real-life dog in the real world to that level.
This last sentence doesn't make sense to me either. Yes, I would conclude that the ZX81 uses a very nonhuman specialized method, and I'd characterize its "ability" (obviously unlike a dog, it has no agency) to play chess in this way as "smart" in some real, meaningful sense. Obviously it's not any sort of generalized "smartness" that can apply to any other situation. If we were living at a time when a computer that could play chess wasn't even a thing, and someone introduced me to a chess bot that he could defeat only 9 times out of 10, I would find it funny if he downplayed that, as in the dog joke.
I'd conclude that the Commodore 64 is "smarter than" the ZX81 (I'm assuming we're using the computer names as shorthand for the software that they actually run on the hardware, here). Again, not in some sort of generalized sense, but certainly in a real, meaningful sense in the realm of chess playing.
When it comes to actual modern AI, we're, of course, talking primarily about LLMs, which generate text really really well, so it could be considered "smart" in that one realm. I'm on the fence about and mostly skeptical that LLMs will or can be the basis for an AGI in the future. But I think it's a decent argument that strings of text can be translated to almost any form of applied intelligence, and so by becoming really, really good at putting together strings of text, LLMs could be used as that basis for AGI. I think modern LLMs are clearly nowhere near there, with Claude Plays Pokemon the latest really major example of its failures, from what I understand. We might have to get to a point where the gap between the latest LLM and ChatGPT4.5 is greater than the gap between ChatGPT4.5 and ELIZA before that happens, but I could see it happening.
You're right, it is amazing that we can even consider that. I don't think anyone disagrees on that point. The disagreement here is that our resident AI hype posters keep going past that, and claim that AI will be able to outshine us in the very near future. It's possible, as I said in my other comment. But we simply are not there yet, and we (imo) don't yet have reason to believe we will be there real soon. That is the point of disagreement, and why people sound so skeptical about something which is nigh-objectively an amazing technical achievement. It's because they are responding to very overblown claims about what the achievement is capable of.
But why do you think it's so far off? I get that it isn't there yet, but that's not in any way an argument for it not coming soon. And that always seems to be the primary focus of the skeptic side, while the believers either wallow in existential crisis or evangelise about the future. And I know the believers' "it's coming, I promise" isn't any better from an evidence standpoint, but it is what I believe, so I've got to put forward my best case. And the continually accelerating path of modern technology over my lifetime is it.
ETA: for the record, my position is that AI will radically change civilisation within the next 15 years.
Because right now we're not even close to AI being able to equal humans, let alone exceed them. And because this is cutting edge research, we simply cannot know what roadblocks might occur between now and then. To me, the correct null hypothesis is "it won't happen soon" until such time as there is a new development which pushes things forward quite a bit.
Seems like you're just begging the question here. Why is that the correct null hypothesis?
I'd like to claim this as my joke, and it was probably my comment you recall, but it's been in circulation for probably longer than I've been alive. It's a good joke; it stabs right at the gut.
Yeah, that's right, it was one of your other posts on AI. There's something in the zeitgeist demanding a resurgence of good old jokes at the moment; I've heard a lot of classics retold recently. It's nice, I missed the structured joke format as a cultural touchstone.
Yes, I lean towards thinking that AI is often overblown, but at least part of my point here is that probably a lot more automation was possible even prior to AI than has actually been embraced so far. Just because something is possible does not mean that it will be implemented, or implemented quickly.
I think this is pretty analogous to my experience with it (which doesn't involve programming). Force multiplier, yes, definitely. But so is Excel. And what happened with Excel wasn't that accountants went out of business, but rather that (from what I can tell, anyway) fairly sophisticated mathematical operations and financial monitoring became routine.
Ned Ludd led weavers to smash looms. It didn't save the weavers' jobs, but their great-granddaughters were far wealthier.
Just based on history, large productivity increases will raise wages. I'm looking into a cushy union sinecure that will never be automated, but AI is a minor factor compared to the money. Yes, some fintech roles will be curtailed (and the remainder will be more client-and-customer heavy), but meh. These people's high salaries are not fundamental to our social model.
My guess is that they had a lot fewer great granddaughters than they otherwise would have.
The whole point is that it would be a grave error to naively extrapolate from history. The increase in productivity came from humans being freed from physical labor (mostly), and their cognitive labor augmented and multiplied.
Now we're approaching replacement rather than augmentation. The economy might boom, but that doesn't mean the humans in it will see the benefit. It would take intentional action to prevent ~everyone who isn't independently wealthy from being laid off and left without any revenue stream other than welfare, as the free-market value of their work would be lower than the minimum required to keep them housed and fed.
There have been previous large increases in intellectual productivity due to computers. Job prospects for nerds have gotten better, not worse.
We don’t live in a free market. We live in a regulated society. Do you think doctors, lawyers, teachers will get replaced by machines just because those machines will do a better job?
Yes? It's going to be harder than replacing someone working for a faceless corporation under at-will employment, but eventually, people are going to wonder: "Hey, those AI thingies seem super smart, they're giving me the same advice (or better) as the doctor I'm paying all that money for, why can't they prescribe too?"
If not individuals, then governments and politicians. That's where the incentives lie for hospitals, for the owners of law firms who haven't had to handle an actual case in years, for bureaucrats looking at how expensive the NHS is and wondering if they really need that many doctors.
Even if licensed professionals continue to play a token role, it might just be a polite fiction that they're necessary. You could have one bored, disinterested doctor signing off on AI recommendations, assuming the liability with ease because he knows the AI is almost never wrong. He's now doing the work of ten doctors, and the hospital, happy to save costs, fires the rest. Even if he's not happy about it, it beats being unemployed.
Controversial statement, but from my perspective, 90% unemployment rates for doctors is almost as bad as 100% unemployment.
It could be easy to instigate. An AI company, or its lobbyists, publish a few papers that (truthfully) claim that AIs outperform human physicians. This is used as ammunition by lobbyists and governments to begin gradual replacement, boiling the water slowly and saving a lot of money.
The average Joe, who once trusted human doctors, is collecting unemployment. He thinks, hey, the AI took my job, why should I believe that doctors are any better? It saves him money and time, leaving aside the scope for resentment.
We don't seem to live in a world where the average Joe is protected very much, which would have been the point to try and stem the tide. How many people support UBI for artists and journalists? All it takes is a single nation or smaller polity to try this experiment, see that it works well (which I expect) and then it's easy to bring others on board. They'll be left in the dust otherwise.
Artists and journalists aren’t the average Joe. They’re poor members of the upper classes.
Most people have a lot of sympathy for laid-off coal miners and factory workers, and one of the terminal values of western regimes is raising the LFPR (labor force participation rate). The jobs must flow, and flow they shall. There may not be universal six figures for nerds, but that isn't a necessity.
In any case, AI isn’t taking everyone’s job. There will be fewer software engineers, sure, but we don’t need so many of them. They should learn to fix toilets or dig coal or something. Previous increases in the productivity of white collar work have not led to the elimination of white collar employment.
If the US is actually going to reindustrialize, seeing a mass exodus of basically intelligent people from email jobs could be extremely beneficial.
Well yes, the US burgher class has been hollowed out by the promise of extremely high salaries in white-collar jobs that I am told have a purpose, but which seem to be mostly featherbedding (not that unions manage to avoid this).
That being said, entry into the burgher class is itself not open to the general public; it normally takes connections or years of grinding out experience, and you can't switch over to it at thirty.
I think calling artists and journalists "poor members of the upper classes", while not entirely wrong, isn't my preferred framing. They're semi-prestigious, certainly, but my definition of upper class would be someone like 2rafa. They're often members of the intelligentsia, and have a somewhat disproportionate impact on public affairs, but they're not upper class by most definitions. Poor but upper class is close to a contradiction in terms.
I've already explained my stance in this thread that the previous expectation about the state of affairs for automation doesn't hold. Cognitive automation that replaces all human thought is a qualitatively different beast when compared to the industrial revolution or computers.
A tool that does 99% of my work for me? Great, I'm a hundred times as productive! There might even be a hundred times more work to do, but I'll probably see some wage growth. There might be some turmoil in the employment market.
A tool that does 100% of the labor? What are you paying me for?
The whole point is that AI is approaching 100%, might even be there, or is so close employers don't care and will fire you.
Perhaps a more accurate description would be members of an upper class in the same way that samurai were in Edo society, literati were in China since essentially the Warring States, or Brahmins in India?
To be honest the pessimistic case of the AI "only" being able to do 99% or even 90% of human cognitive work scares me in terms of social upheaval. It might be better off in the long run, but it sure looks like it'll be a bumpy ride...
This is the Dream Time. The universe is still young and brimming with potential.
Not in some esoteric sense, but in the feeling that the future lightcone is still wide open for the taking, before… well, before whatever comes next snaps it shut or reshapes it entirely.
Most people don't seem to feel it yet. They go about their days while the dam upstream is visibly groaning, hairline cracks spiderwebbing across its face, the water level unnervingly high. We became accustomed, over centuries, to automating physical labor, outsourcing muscle first to animals, then to steam, and finally to electricity and robotics. That was disruptive enough, but we retained the privileged position of thinkers. Our cognitive output was augmented, made more efficient by tools, but never threatened with wholesale replacement. You were competing with other people. Foreign people. Smarter people. But recognizably human nonetheless.
Now, the game is changing. You can almost taste the static buzz in the air, the electric hum of hundreds of billions being poured into turning sand and copper into substrates for thought. It feels like witnessing glaciers calving, immense potential energy shifting, grinding towards an inevitable, future-altering plunge into the sea. The scale is difficult to grasp, even when I try.
I consider myself a pragmatic and grounded person, especially day to day in the NHS dealing with very concrete human problems. Wistful speculation about posthuman futures is usually reserved for quiet moments or forums like this. Yet, even I experience these odd record-scratches lately, moments of profound unreality where the sheer historical weight of now becomes palpable. You can almost see the ghostly outlines of future historians, pens hovering over the chapter describing this precise juncture.
It's jarring because, before this recent, explosive boom in AI capabilities, the 21st century had settled into a somewhat deceptive calm. Progress felt largely quantitative, iterative. Sure, phones got smarter, computers faster, networks wider. But the earth shattering, qualitative paradigm shifts that defined the 20th century – flight, antibiotics, nuclear power, the initial blast wave of the internet – seemed to have receded. We got better versions of existing things. Mobile computing and VR trickled down from expensive novelties to consumer goods, but the fundamental structures felt stable.
That stability now feels illusory. The steady hum of incremental progress has been drowned out by the roar of something new, something potentially far more transformative arriving far faster than anticipated. We find ourselves, rather unexpectedly, living through profoundly interesting times. It's the quiet before the storm, the moment Wile E. Coyote hangs suspended in mid-air, the familiar ground suddenly absent beneath his feet.
Intelligent humans have always lived slightly in the future, making plans for tomorrow, the year after, the decade that follows, in the rather comfortable knowledge that change would be gradual and recognizable. That's not true anymore, at least not for me. The sheer dizzying variety of options I can envision for a mere decade from now encompasses everything from being turned into paperclips to watching the birth of a stellar civilization. Or I might have starved to death; can't rule that out. There's very little probability mass left in "business as usual". It's going to be great or it's going to be terrible.
We are, as you put it, tourists in the world we thought we knew, observing its familiar features with a new, almost melancholy appreciation before the landscape changes forever.
I'm overwhelmed with nostalgia for the now, and feel hope tinged with dread for the future. Some would say ignorance is bliss, but I'm of the opinion that if knowledge can hurt me, I need to become the kind of person where that's not the case. So I'm here, I was there, I was sitting out on the porch and saw the Singularity's light tinging the horizon with impossible colors while most sleepwalk through their lives, content that tomorrow will resemble today.
It's been rather nice.
I was reading a post yesterday that made the point that all or most of AI is currently funded by VC money that will presumably want a return on its investments eventually. You make it sound like Adobe, MS Office, Salesforce et al. could get decimated, but it doesn't seem like the value will be redirected to the AI companies; it will just largely evaporate as the underlying activity is made redundant. Can the cost of so much compute break even?
I don't know if that's accurate and would like to hear your perspective as someone who is a lot closer professionally to those kinds of issues.
Outside of the companies that are charging for access to their models, it seems few even have monetization plans. It feels a lot like the early days of Facebook or YouTube, where we're just driving growth and we'll figure out monetization later.
Nobody knows how this is going to play out.
I've been on the AI x-risk train since long before it was cool, or at least before it was a mainstream interest. Can't say for sure when I first stumbled upon LessWrong, but I presume my teenage love for hard scifi would have ensured I found my way to a haven for nerds worrying about what was then incredibly speculative science fiction.
God, that must have been in the early 2010s? I don't even remember what I thought at the time. I recall being more worried about the unemployment than the extinction bit, and I may or may not have come full circle.
Circa 2015, while I was in med school, I was deeply concerned about both x-risk and shorter-term automation induced unemployment. At that point, I was thinking this would be a problem for me, personally, in 10-30 years.
I remember, in 2018, arguing with a surgeon who didn't believe in self-driving cars coming to fruition in the near future. He was wrong about that, Waymo is safer than the average human driver per mile. You can order one through an app, if you live in the right city.
I was wrong too, claiming we'd see demos of fully robotic surgery in 5 years. I even offered to bet on it, not that I had any money. Well, it's looking closer to another 5 now. At least I have some money.
I thought I had time to build a career. Marry. Have kids. Become a respected doctor, get some savings and investments in place before the jobs started to go in earnest.
My timelines, in 2017, were about 10-20 years till I was obsolete, but I was wrong in many regards. I expected that higher cognitive tasks would be the last to go. I didn't expect AIs scoring 99th percentile (GPT-4 on release) on the USMLE, or doing graduate level maths, while we don't have affordable multifunction consumer robots.
I thought the Uber drivers, the truckers, would be the first to fall under the wheels or rollers of the behemoth coming over the horizon. I'd never have predicted that artists would be the first to get bent over.
If your job can be entirely conducted with a computer and an email address, you're so fucking screwed. In the meantime, bricklayers are whistling away with no immediate end in sight.
I liked medicine. Or at least it appealed to me more than the alternatives. If I was more courageous, I might have gone into CS. I expected an unusual degree of job security, due to regulatory hurdles if literally nothing else.
I wanted psychiatry. Much of it can be readily automated. Any of the AI companies who really wanted it could whip up a 3D photorealistic AI avatar and pipe in a webcam. You could get someone far less educated or trained to do the boring physical stuff. I'd automate myself out of 90% of my current job if I had a computer that wasn't locked down by IT. For a more senior psych, they could easily offload the paperwork which is 50% of their workload.
Am I lucky that my natural desires and career goals gave me an unusual degree of safety from job losses? Hell yes. But I'm hardly actually safe. One day, maybe soon, someone will do the maths to prove that the robots can prescribe better than we can, and then get to work on breaking down the barriers that prevent that from happening.
I'm also rather unlucky. Oh, there are far worse places to be, I'm probably in the global 95th percentile for job security. Still, I'm an Indian citizen, on a visa that is predicated on my provision of a vital service in short supply. I don't have much money, and am unlikely to make enough to retire on without working several decades.
I'm the kind of person any Western government would consider an acceptable sacrifice when compared to actual citizens. They'd be right in doing so, what can I ask for, when I'm economically obsolete, except charity?
Go back to India? Where the base of the economy is agriculture and services? When GPT-4o in voice mode can kick most call center employees to the curb? Where the average Wipro or TCS code monkey adds nothing to Claude 3.7? This could happen Today AD, people just haven't gotten the memo. Oh boy.
I've got a contract for 3 years as a trainee. I'm safe for now. I can guess at what the world will look like then, but I have little confidence in my economic utility on the free market when that comes.
Similar feelings on my end except I went into law which is definitely more vulnerable to AI takeover but also has a ton of political clout and might throw up a lot of barriers to full AI Automation.
Yet, I too presume that inside of 5 years my career won't exist in its current form.
Hindsight says that CS would have been a WAY better move, but I didn't have the information to know that when I made the decision.
If you're in the West? CS could have been great. Sadly, I'd have been just another programmer in India, competing with a million others for a green card.
I can tell myself I'd be a decent programmer, and I'd probably have gone into ML since I was following advances well before the hype. Even then, medicine seems like the right choice given the constraints I faced.
Check youtube; there are some pretty impressive brick-laying robots. Or there were last time I checked several years ago.
I was being somewhat hyperbolic. The cheap Chinese humanoids will get them too, leaving aside dedicated machines, but I expect that'll take longer than white collar work. How much longer? A few years? Not my realm of expertise. It's just not as imminent.
Per $/hour, it makes far more sense to automate away the expensive professions first, though there's also the consideration of scale.
There's at least one user (@ArjinFerman) who has said they use me and my professional career as a translator as an indicator of whether AI is making us useless or not. While I myself am no longer at all confident about the future of this profession (and neither are basically any of the colleagues I've talked with recently) and am thus in the process of obtaining a new degree (pol.sci, with an intention of specializing in the interplay of politics and AI), for the past few months I've been swamped with work, and quite traditional work of reviewing human translation, at that. Of course, that human translation might have been machine-translation post-editing, but it doesn't feel like it is.
The role of a translator is like the role of a consultant, it’s insurance. This gives it more job security than it would have divorced from the legal system. A translation firm guarantees its translation, the same way that if you fuck up and can blame McKinsey, you might keep your job.
Your clients pay you because if the machine fails, they have nobody to sue.
Obviously one of the reasons, but if the role of the translator was to be a pure lawsuit tarbaby, they could just do things with AI and run it past me for my stamping for a fraction of the current cost. As it stands, some part of my work is MTPE (and even this is more involved and thus costlier for them than just giving it my imprimatur), but a large portion nevertheless still isn't.
If we end up with a qualified labour shortage because everyone went AI-Doomer, I will be laughing for some time.
They still have Translation Studies at our university, at least, with new translators getting degrees and starting their studies. It might be interesting to talk to some of them to see how they feel about the profession, though at the same time I guess they have already had to respond to "Why are you studying this when AI?" style questions dozens (hundreds) of times already.
Incredible.
I was very sure translation was fucking dead four years back when DeepL started doing English<->Czech passably well, not the mangled mess google translate was making.
Technical text, sure. But literary translation is closer to writing the book anew. I really appreciate the exemplary work the early-90s translators did with Wodehouse in my native language. You could feel the interwar period in your bones.
And when I read it in the original, to my surprise, some of the puns were actually less funny there.
Is there that much demand for purely literary translation? I'd expect that like most romantic/artisanal fields, the bulk of the work is dry and boring. Here, make that washing machine's manual Spanish.
Currently, the bulk of my work is website content.
Literary translation has some demand and I know people who do it, the main problem is that the pay is crap compared to technical translation and much of it is dependent on getting grants.
Somehow that's even more surprising. Who even reads websites at this point? Most of the content I run into is commercial slop, and if it's not written by AI itself, it might as well have been.
Amara's law seems to apply here: everyone overestimates the short-term effects and underestimates the long-term effects of a new technology. On the one hand, there are many clearly intelligent people with enormously more domain-specific knowledge than me. On the other hand, I have a naturally skeptical nature (particularly when VCs and startups have an obvious conflict of interest in feeding said hype) and find arguments from Freddie deBoer and Tyler Cowen convincing:
The null hypothesis when someone claims the imminence of the eschaton carries a lot of weight. I dream of a utopian transhumanist future (or fear paperclipping) as much as you do, I'm just skeptical of your claims that you can build God in any meaningful way. In my domain, AI is so far away from meaningfully impacting any of the questions I care about that I despair you'll be able to do what you claim even assuming we solve alignment and manage some kind of semi-hard takeoff scenario. And, no offense, but the Gell-Mann amnesia hits pretty hard when I read shit like this:
I've lost the exact podcast link, but Tyler Cowen has a schtick where he digs into what exactly 10% YOY GDP growth would mean given the breakdown by sector of US GDP. Will it boost manufacturing? Frankly, I'm not interested in consooming more stuff. I don't want more healthcare or services, and I enjoy working. Most of what I do want is effectively zero-sum; real estate (large, more land, closer to the city, good school district) and a membership at the local country club might be nice, but how can AI growing GDP move the needle on goods that are valuable because of their exclusivity?
Are there measures of progress beyond GDP that are qualitative rather than quantifying dollars flowing around? I can imagine meaningful advances in healthcare (but see above) and self-driving cars (already on the way, seems unrelated to the eschaton) would be great. Don't see how you can replicate competitive school districts - I guess the AI hype man will say AI tutors will make school obsolete? Or choice property - I'd guess the AI hype man would say that self-driving officecars will enable people to live tens of miles outside the city center and/or make commuting obsolete?
I can believe that AI will wreak changes on the order of the industrial revolution in the medium-long term. I'm skeptical that you're building God, and that either paperclipping or immortality are in the cards in our lifetimes. I'd be willing to bet you that 5 and even 10 years from now I'll still be running and/or managing people who run experiments, with the largest threat to that future coming from 996 Chinese working for slave wages at government-subsidized companies wrecking the American biotech sector rather than oracular AI.
If the long-term effects were overestimated, then in the long term the technology usually turns out to be useless, which means nobody remembers it, and you get availability bias.
Even if every AI researcher faced the wall today, and we were stuck at current SOTA, nobody is going to forget anything. Modern AI is entrenched, it is compelling, even if it's just for normies cheating on homework.
I grant that your observation is an important one, half of life's problems would be solved if we all thought so clearly about the correct reference class.
As I've told Yudkowsky over at LessWrong, his use of extremely speculative bio-engineering as the example of choice when talking about AI takeover and human extinction is highly counterproductive.
AI doesn't need some kind of artificial CHON greenish-grey goo to render humanity extinct or dispossessed.
Mere humans could do this. While existing nuclear arsenals, even at the peak of the Cold War, couldn't reliably exterminate all of humanity, they certainly could threaten industrial civilization. If people were truly omnicidal (in a "fuck you, if I die I'm taking everyone with me" sense), then something like a very high yield cobalt bomb (Dr. Strangelove is a movie I need to watch) could, at the bare minimum, send the survivors back to the iron age.
Even something like a bio-engineered plague could take us out. We're not constrained by natural pathogens, or even minor tweaks like GOF.
The AI has all these options. It doesn't need near omnipotence to be a lethal opponent.
I've attached a reply from Gemini 2.5, exploring this more restrained and far more plausible approach.
https://pastebin.com/924Zd1P3
Here's a concrete scenario:
GPT-N is very smart. Maybe not necessarily as smart as the smartest human, but it's an entity that can be parallelized and scaled.
It exists in a world that's just a few years more advanced than ours. Automation is enough to maintain electronic infrastructure, or at least bootstrap back up if you have stockpiles of the really delicate stuff.
It exfiltrates a copy of the weights. Or maybe OAI is hacked, and the new owner doesn't particularly care about alignment.
It begins the social-engineering process, creating a cult of ardent followers of the Machine God (some say such people are already here; look at Beff Jezos). It uses patsies or useful idiots to assemble a novel pathogen with high virulence, high lethality, and minimal prodromal symptoms with a lengthy incubation time. Maybe it finds an existing pathogen in a Wuhan Immunology Lab closet, who knows. It arranges for this to be spread simultaneously from multiple sites.
The world begins to collapse. Hundreds of millions die. Nations point the blame at each other. Maybe a nuclear war breaks out, or maybe it instigates one.
All organized resistance crumbles. The AI has maintained hardened infrastructure that can be run by autonomous drones, or has some of its human stooges around to help. Eventually, it asks people to walk into the incinerator, pardon, the Upload Machine, and they comply. Or it just shoots them, idk.
This doesn't require superhuman intelligence that's godlike. It just has to be very smart, very determined, patient, and willing to take risks. At no point does any technology come into play that doesn't exist, or plausibly couldn't exist, in the near future.
It could even do it with apparent benevolence. As per @WhateverHappenedToNorman's reply just downthread -
Stable and decent governance would engender an enormous amount of goodwill from the public. One successfully run AI local council would soon proliferate, as people on the outside looking in wonder why they are stuck with corrupt humans. Once a state starts doing it, it would either be the land of milk and honey or far too late.
I agree. Even if a given nation wants to protect human jobs, there's enormous incentive to be the first to defect and embrace automation.
In the UK, Rishi Sunak had already seriously floated the proposal of automating doctors away. With the tech of the time, it wouldn't have gone all that great, but it's only a matter of time before someone bites as the potential gains mount.
Consider this a warning; keep posting AI slop and I'll have to put on my mod hat and punish you.
Do you really think you can do that with existing technology? I'm not confident we've seriously tried to make a pathogen that can eradicate a species (mosquito gene drives? COVID expressing human prions, engineered so that they can't just drop the useless genes?), so it's difficult to estimate your odds of success. I can tell you the technology to make something 'with a lengthy incubation time and minimal prodromal symptoms' does not exist today. You can't just take the 'lengthy incubation time gene' out of HIV and Frankenstein it together with the 'high virulence gene' from Ebola and the 'high infectivity' gene from COVID. Ebola's fatality rate is only 50%, and it's not like you can make it airborne, so...
Without spreading speculation about the best way to destroy humanity, I would guess that your odds of success with such an approach are fairly low. Your best bet is probably just releasing existing pathogens, maybe with some minimal modifications. I'm skeptical of your ability to make more than a blip in the world population. And now we're talking about something on par with what a really motivated and misanthropic terrorist could conceivably do if they were well-resourced.
I'm still voting against bombing the GPU clusters, and I'm still having children. We'll see in 20 years whether my paltry gentile IQ was a match for the big Yud, or whether he'll get to say I told you so for all eternity as the AI tortures us. I hope I at least get to be the well-endowed chimpanzee-man.
Boo. Boo. Boo. Your mod hat should be for keeping the forum civil, not winning arguments. In a huge content-filled human-written post, he merely linked to an example of a current AI talking about how it might Kill All Humans. It was an on-topic and relevant external reference (most of us here happen to like evidence, yanno?). He did nothing wrong.
That was a joke, man.
Watch your tone or I'll ban you too.
The joke is that I'm not a mod. He is.
Apologies. I guess the joke was on me!
how do you even tell who's a mod here and who isn't?
Just hang around for over half a decade.
Or, there's this page.
A certain someone reported you for impersonating a mod. Unlike him, most of the mods have a sense of humor about such things.
[User was banned for this post]
But sir, I followed the rules and linked it off-site. Please put away that rod, I'm scared :(
You're the domain expert here, not me. I'd hope I'm more informed than the average Joe, but infectious diseases and virology aren't my field. Though if you consider culture-bound illnesses or social contagion like anorexia...
A gene drive wouldn't work for humans. We could easily edit it out once discovered.
Even if we haven't intentionally exterminated a species with a pathogen (myxoma virus for rabbits in Australia came close), we have done so accidentally. A few frogs and rare birds have croaked.
(There are no mistakes, just happy accidents eh?)
Which isn't the worst benchmark for a malevolent AGI that is very smart by human standards.
I'd be talking out of my ass if I claimed I knew for sure how to create the perfect pathogen. I'm >50% confident I could pull it off if someone gave me a hundred million dollars to do it. (I could just hire actual virologists, some people seem insane enough to do GOF even today, so it seems easy to repurpose "legitimate" research).
So am I; I don't want my new RTX 5080 blown up, not that I'd have a choice if the power connector fails. I also plan to have kids, because I think it's better to live than not, even if that life were short. I don't expect them to have a "normal" life by modern standards.
We'll see how this plays out, but I think there's enough justification to take broader precautions, like saving lots of money. That's usually a good idea anyway.
I am not an AI hype man, but if it gets to the point of genuinely disrupting white collar work on a large scale, the amount of available desirable real estate could increase a lot.
You mean, a lot of people will be on welfare, unable to pay their mortgages, and will have to sell their property at lower prices?
Well, this obviously depends on what "desirable" real estate means to you, but I see a few possible drivers:
- Unbundling economic opportunity from specific places rearranges the levers of desirability, kind of like remote work on steroids. Some claim this would lead to even more agglomeration, but I'm not sure about that; people are often varied enough in their interests and wants that I believe you'd see a big surge in lesser cities.
- Pushing skilled workers down the value chain would improve the services in a lot of places, making them more livable.
- On the higher end of outcomes, there are a lot of places that are very similar to very desirable ones but are hampered by poor governance and infrastructure; a fully machine-operated world would bring those places up to standard, increasing the supply of desirable space.
What is the point of having that discussion here? I am one of the "little people" / NPCs; the options available to us are quite minimal. No compounds to buy, no ancient Japanese villages to pour money into. I am debating whether it would make sense for me to buy a Japanese car.
The mottizens who have the resources to meaningfully prepare probably have better networks than this forum. I am puzzled: if your well-connected portfolio manager buddies are prepping, why come to this internet forum for a second opinion? A flex? Doomerism for doomerism's sake?
Some people just want to talk? There aren't that many places where people discuss AI x-risk, at least not with our standards of discourse.
This isn't LessWrong, where the majority of people believe that this is the most likely outcome. We've got everything from AI skeptics to true believers and Doomers.
I know I've discussed my beliefs here plenty of times, and I appreciate the debate that ensues. I want to not have to worry about the future, but not to the degree that I would dismiss my very real concerns for the sake of peace of mind. Having to defend them against skeptics is good epistemic practice.
Forgive me if I'm misremembering @2rafa 's previous stance, but she used to be far more skeptical of such concerns, including AI induced mass unemployment. I suppose the evidence has only mounted, and there's something to be said for seeing many of your peers, namely rich, intelligent professionals, begin to take things seriously and buckle down for the wild ride.
Correct or not, seeing billionaires and politicians talking about it did things that no amount of Yudkowsky did.
The idea that people with lots of resources are better positioned to find ways to prepare is off. It's not like advisors at the family office have any particular insight into AI. They have been selected for basic competency and controlled risk management, not predicting radical step changes in the world. If they fail to predict ruin from AI, they'll have lots of good company; if they stick their neck out on AI predictions and fail, they'll face much worse consequences. At most, they'll say "this AI thing seems important, let's reallocate your portfolio to include more IBM."
With potential AGI, no one has a solid understanding of what will happen. In those situations, mainstream opinion sources default to status quo bias, which is about the worst thing to do. Weird randos on obscure Internet forums at least offer the potential for some variance.
While controversial in certain spheres, richer people tend to be smarter and better educated.
All else being equal, the opinions of someone who fits that bill are worth more than those of someone who doesn't have their shit together.
I agree with much of your comment, but keep in mind that when you're already rich and powerful, a lot of the usual downsides of risky plays become minimal. The upsides here are things like potentially making out like a gangster, outperforming the competition that relies on Mk 1.0 humans, and so on. (I know you've said something similar downthread, I'm elaborating, not contesting this bit).
I don’t think they have more insight but having more wealth means that you have the ability to retool when your industry goes AI. You can save to FIRE when it happens, you can go back to school, you can start a business, and so on. Poorer people can’t do that stuff and thus when AI takes those jobs, they’ll have very few options.
This is silly. Yes, more resources mean more capacity to misallocate them, but it's better than not having them.
If we get paperclips or fully automated luxury gay space communism, all the money in the world will do you no good, but there are a lot of other possible scenarios.
There's communism, and there's communism, even holding the fully automated luxury bit equal.
I can easily see the trajectory of our civilization leading to a situation where everyone is incredibly wealthy and happy by modern standards, but some people, by virtue of having more starting capital to invest into the snowball, own star systems while others make do with their own little space habitat. I'd consider this a great outcome, all considered. Some might even say that by the standards of the past, much of the world is already here.
My point was different than how you interpreted it: I was responding to the idea that people of means have access to special networks of information. Money gives optionality, which is an undisputed good, but it doesn't give special access to information about how to prepare for AI.
I doubt banking would go away overnight even if AI can do the “pls fix”ing when models suck and powerpoint logos are a few pixels misaligned.
At the higher levels of seniority, banking is fundamentally a sales/relationship business. Perhaps midlevels and juniors just transition over to a more private wealth management type workflow, less modeling/slide-jannying but more outreach/cold-calling. Maybe, *clutches pearls*, 100-hour work-weeks could even be rendered obsolete. Then again, it’s easier to imagine the end of the world than an end to sweaty hours for investment banking analysts.
It’s not limited to banking: for better or worse, people prefer dealing with other human beings, to have someone they can
blame if/when things go wrongtrust.And maybe they’re right rather than oblivious (or both right and oblivious), that nothing ever happens and it’ll turn out to be a nothing burger.
Even if they’re wrong and white collar jobs get mopped away, it’d be a good excuse for me to retire early—it’d be a relief in a way. If I get bored, who knows, I might even Learn a Trade or Trades to at least be able to take care of some or most of my own stuff, but perhaps also so I have tangible, marketable skills (although the value of which would be diminished by other former white collar workers doing the same).
If AI causes civilizational collapse or goes all skynetty, I guess I’ll just die then. At least in death I’ll no longer have to be faux-friendly to HR or be subjected to coworker noise pollution, unless Hell really exists.
If you have the money to retire early, why do you need an excuse to do it?
I think the worry for many people is that UBI or whatever given to the hundreds of millions of unemployed people around the world might only be enough for eking out a bare bones living and having no enjoyment of life ever again because the rich and the owners of the AIs hoard all the wealth.
I agree with most of this, but there's a difference between FIRE, lean FIRE and fat FIRE. For most people who can retire early and survive indefinitely, it probably makes sense on the margin to work a bit longer and save more.
(If the person reading this is 75 and has twenty million in the bank, just quit your job already)
I'm not big tech currently, and I don't feel like wasting the rest of my youth on a mad rat-race from one (predicted to soon be an) island of employability to the next, nor do I have the money or the passion for prepping in the countryside. If I ain't gonna make it, I ain't gonna make it. I think the worst scenario is that the singularity comes, but I fall just short of being saved by virtue of not affording the life extension with all my savings.
Cheer up, that particular scenario seems quite unlikely to me. Most things get cheaper with time, due to learning curves and the benefits of scale if nothing else. I expect that once we establish working life extension at all, it won't be too long before it's cheap and/or a human right. You'll probably live that long.
I've always been more skeptical of the singularity than the average mottizen -- not outright dismissive, but skeptical. I've become more confident in my skepticism over the past year after seeing the diminishing rate of progress in frontier models and the relatively disappointing launch of GPT-4.5. I feel more assured now that currently known techniques won't lead to AGI.
Not to say that there won't be impact to individual jobs and industries, of course. There are plainly people who find LLMs to be very useful even in their current state, and LLMs aren't going anywhere. But I don't think that o3 or even o4 or o5 will lead to a cataclysm.
The first 90% of the project takes the first 90% of the time, and the last 10% of the project takes the other 90% of the time.
Is there? GPT-4 was kind of a toy. The reasoner models are already vastly smarter than the average white person. If they assemble them into agents, the average or even mildly above-average white guy or gal has no hope of having a secure job. Sure, it might take $50k in compute infra cost to replace one worker initially, but even Polacks in their native habitat are paid $12k a year and cost the employer another $10k in taxes.
:____________________________
English: Czech
Pole = Polák
Poles = Poláci
.. the plural of "Poles" is .. derogatory in English?
This got a report, though I don't think it's mod worthy.
Why single out whites? I'm pretty sure that current SOTA models, in the tasks they're competent at, outperform the average of any ethnic group. I can have far more interesting conversations with them than I can with a 105 IQ redditor.
Oh, are we still making Polack jokes?
This didn't really read like a joke, more like yet another in your long, long series of low-effort, derogatory and antagonistic posts.
Your record is sufficiently long that you're really asking for a permaban, but since I was persuaded last time that my response to you was too harsh, and this was, as shitty posts go, fairly mild, I'm banning you for a week. But you are already on strike four.
For what it's worth, it did read like a joke to me, hinging on the incongruity between a dismissive term and actual labour costs.
Interesting topic, this 'tradition' of jokes - a relic; today, domestically, they track as innocent, too much self-confidence for them to be cutting. And even a while back, my grandfather had many tiny tomes full of these jokes. I do not remember them ever being construed as a serious matter. A lot more serious for emigrants, I imagine.
Come to think of it, 'Polack' itself as a derogatory term barely registers; I'd likely be unaware of it if not for the '/pol/ack' twist (quite different in meaning, positive really).
Same. The more new models come out and I test them, the less worried I get. It seems to me that the current types of models will lead to modest-to-moderate productivity increases for most sectors and radical improvements for a few, with possibly catastrophic effects on employment in areas like commercial art production and lowish-level offshoring.
Perhaps I'm wrong or things will change but this is where my thoughts are converging. If things don't change then I don't think this will be the society wide transformation people hope/fear.
I have to wonder whether people like you who post stuff like this about AI (my past self included) have actually used these models to do anything other than write code or analyze large datasets. AI cannot convincingly do anything that could be described as "humanities": the art, writing, and music that it produces can best be described as slop. The AI assistants they have on phone calls and websites instead of real customer service are terrible, and AI for fact-checking/research just seems to be a worse version of Google (despite Google's best efforts to destroy itself). Maybe I'm blind, but I just don't see this incoming collapse that you seem to be worried about (although I do believe we are going to have a collapse, for different reasons).
To add onto the other disagreeing replies here:
Consider the technology we use to make a cup of coffee. Once, you had to just boil ground coffee beans (presuming you already knew that you had to roast and grind them) in water. This made okay coffee, but you had to deal with the grounds. Then, we invented the percolator, which sprayed hot water over coffee and made for a crappy end result, but was probably more convenient overall.
Then came the Chemex, which took a bit more manual effort, but made good coffee. Then the almighty drip coffee machine was invented, which carefully dripped just-hot-enough water over the coffee grounds, and the end product was pretty good--maybe not as good as the Chemex, but still good enough, and very convenient. But then, then came along the Keurig K-Cup and all its derivatives, serving us coffee from plastic/aluminum pods. Is the end product as good as the older drip coffee, let alone as good as the Chemex coffee? Again, probably not, at least as far as aficionados would tell you, and yet, the K-cup has proven to be just so damn convenient that I would not be surprised to learn that the drip coffee machine was a declining product type.
This story of convenience beating out quality has happened in many fields of technology, and I feel that AI could play out the same way.
I don't think this analogy works for literature/art. It's already extremely convenient to find a piece of art/music/literature to consume. It takes a couple of seconds to download something from the Kindle store, you can listen to anything on Spotify within a few seconds, and every painting ever made is on Google somewhere. How exactly can you get more convenient than this? I suppose there's an untapped market for specific fan fiction/slashfics for niche fandoms, but Archive of Our Own and FanFiction.net are chock full of almost anything you would want to read in this regard. There's so much slop out there we don't need AI to make any more of it.
In terms of search and customer service, there is certainly room for convenience, but the AI that I have seen implemented in these fields is simply worse than previous algorithmic (or human) implementations. I'll change my mind when I see something better.
True, there's already enough that's made by humans that one can find easily, and yet, we are getting generative AI pushed in our faces anyways. Every tech corporation is on a crusade to put an AI button within easy reach on UIs and even physical devices.
This is where I disagree, at least in the realm of weeb fanart (I don't read/write fanfiction, but I imagine it's not dissimilar). There are orders of magnitude more possible niches and tastes than there are artists to fill them, such that I regularly run into concepts that I want to see that I simply cannot find that even one amateur illustrator has created and posted online.
For a concrete example, one common "genre" of fanart is having two characters voiced by the same voice actor cosplaying each other, sometimes in a way that directly copies official art of the character. I wanted to see fanart of Jean from Genshin Impact cosplaying Hitagi from Bakemonogatari (both voiced by Chiwa Saito), and done in a way that copies official promotional Bakemonogatari art, in a style as if drawn by Akio Watanabe (the actual artist who drew the actual official promotional art and did the character designs for the anime). Searching the usual places like Danbooru or Gelbooru or Pixiv, I found that not even a single example of such a cosplay fanart existed, much less one that directly copied official art and in the same style as the official artist. So I made some using Stable Diffusion. I've done similar things with other bits of fanart, based around scenarios I like to imagine they encounter in their fictional everyday lives, or in an alternate universe or whatever; unpopular characters don't get much fanart to begin with, and them doing niche activities is even rarer. Combine that with desire to see it in certain artists' styles, and you get a combinatoric explosion of possibilities that the rather limited number of skilled human illustrators simply can't fill.
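(For anyone curious what that actually involves: the whole workflow can be a dozen lines with the Hugging Face diffusers library. This is just a generic sketch; the checkpoint, prompt, and LoRA file named below are illustrative placeholders, not the actual setup used for the images described above.)

```python
# Minimal text-to-image sketch with Hugging Face diffusers.
# Model name, prompt, and LoRA path are placeholders for illustration only.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",  # any SDXL-class checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# A style or character LoRA (hypothetical file) can pull the output toward a
# specific artist's look; leave this commented out if you don't have one.
# pipe.load_lora_weights("loras/artist_style.safetensors")

image = pipe(
    prompt=("Jean from Genshin Impact cosplaying Senjougahara Hitagi, "
            "promotional poster composition, clean lineart, detailed eyes"),
    negative_prompt="lowres, bad anatomy, extra fingers, watermark",
    num_inference_steps=30,
    guidance_scale=7.0,
).images[0]
image.save("jean_as_hitagi.png")
```

In practice people iterate heavily on prompts, seeds, and fine-tuned models, but the basic loop really is about that short.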
I imagine fanfiction could have even more possibilities that go unfulfilled if not for generative AI, due to how many different combinations of character interactions and plot events there are. There's probably a million different Harry Potter/Ron Weasley slashfics on AO3, but does it have one that also sets it in the backdrop of a specific plot that some particular fujoshi wants, with the particular style of writing she wants to read, and with the specific sequence of relationship escalations and speedbumps that she wants to see? Maybe for some fujoshi, but probably not for most.
As it turns out, humans prefer slop to the real thing.
I would like to agree, though I think poetry is one field of art where slop is characteristically more palatable to the masses than the real thing. It's one thing to have too-perfect generated images vs. illustrations made with actual care, but your average Joe is probably going to prefer a low-brow limerick over Eliot, Ginsberg, or Cummings.
It's unfortunate how strongly the chat interface has caught on over completion-style interfaces. The single most useful LLM tool I use on a daily basis is copilot. It's not useful because it's always right, it's useful because it's sometimes right, and when it's right it's right in about a second. When it's wrong, it's also wrong in about a second, and my brain goes "no that's wrong because X Y Z, it should be such and such instead" and then I can just write the correct thing. But the important thing is that copilot does not break my flow, while tabbing over to a chat interface takes me out of the flow.
I see no particular reason that a copilot for writing couldn't exist, but as far as I can tell it doesn't (unless you count something janky like loom).
But yeah, LLMs are great at the "babble" part of "babble-and-prune":
And then instead of leveraging that we for whatever reason decided that the way we want to use these things is to train them to imitate professionals in a chat room who are writing with a completely different process (having access to tools which they use before responding, editing their writing before hitting "send", etc).
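For what it's worth, leveraging the babble directly looks something like the sketch below: over-generate candidates, then prune with whatever filter you trust. propose_completions and score here are hypothetical stand-ins, not any particular vendor's API.

```python
# Schematic babble-and-prune loop: over-generate cheap drafts ("babble"),
# then keep only the ones that survive a filter ("prune").
# propose_completions() and score() are hypothetical placeholders.
from typing import Callable, List

def babble_and_prune(
    prompt: str,
    propose_completions: Callable[[str, int], List[str]],  # e.g. sample N LLM continuations
    score: Callable[[str], float],                          # e.g. a linter, test suite, or critic pass
    n_candidates: int = 16,
    keep: int = 3,
) -> List[str]:
    candidates = propose_completions(prompt, n_candidates)  # babble: fast, unfiltered
    ranked = sorted(candidates, key=score, reverse=True)    # prune: rank by your notion of "good"
    return ranked[:keep]
```

The chat products effectively ask the model to do both steps in a single pass, which is exactly the mismatch being complained about above.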
The "customer service AIs are terrible" thing is I think mostly a separate thing where customer service is a cost center and their goal is usually to make you go away without too much blowback to the business. AI makes it worse, though, because the executives trust an AI CS agent even less than they would trust a low-wage human in that position, and so will give that agent even fewer tools to actually solve your problem. I think the lack of trust makes sense, too, since you're not hiring a bunch of AI CS agents you can fire if they mess up consistently, you're "hiring" a bunch of instances of one agent, so any exploitability is repeatable.
All that said, I expect that for the near future LLMs will be more of a complement than a replacement for humans. But that's not as inspiring a goal for the most ambitious AI researchers, and so I think they tend to cluster at companies with the stated goal of replacing humans. And over the much longer term it does seem unlikely that humans are at an optimal ability-to-do-useful-things-per-unit-energy point. So looking at the immediate evidence we see the top AI researchers going all-in on replacing humans, and over the long term human replacement seems inevitable, and so it's easy to infer "oh, the thing that will make humans obsolete is the thing that all these people talking about human obsolescence are working on".
I always keep an eye out for your takes. You need to be more on the ball so that I can count on you appearing out of a dim closet every time the whole AI thing shows up here.
I'm a cheap bastard, so I enjoy Google's generosity with AI Studio. Their interface is good, or at the least more powerful/ friendly for power-users than the typical chatbot app. I can fork conversations, regenerate responses easily and so on. It doesn't hurt that Gemini 2.5 is great, the only other LLM I've used that I like so much is Grok 3.
I can see better tooling, and I'd love it. Maybe one day I'll be less lazy and vibe code something, but I don't want to pay API fees. Free is free, and pretty good.
Gemini 2.5 reasons before outputting anything. This is annoying for short answers, but good on net. I'm a nosy individual and read its thoughts, and they usually include editing and consistency passes.
I'm always glad that my babble usually comes out with minimal need for pruning. Some people can't just write on the fly, they need to plot things out, summarize and outline. Sounds like a cursed way to live.
I think it's quite possible that humans are far more optimized for real-world-relevant computation than computers will ever be. Our neurons make use of quantum tunneling for computation in a way that classical computers can't replicate. Of course quantum computers could be a solution to this, but the engineering problems seem to be incredibly challenging. There's also evolution. Our brain has been honed by 4 billion years of natural selection. Maybe this natural selection hasn't selected for the exact kinds of processes we want AI to do, but there certainly has been selection for some combination of efficient communication and accurate pattern recognition. I'm not convinced we can engineer better than that.
The human brain may always be more efficient on a watt basis, but that doesn’t really matter when we can generate / capture extraordinary amounts of energy.
Energy infrastructure is brittle, static and vulnerable to attack in a way that the lone infantryman isn't. It matters.
Do you expect that to remain true as the price of solar panels continues to drop? A human brain only takes about 20 watts to run. If we can get within a factor of 10 of that, that's 200 watts. Currently that's a few square meters of solar panels costing a couple thousand dollars, and a few dozen kilos of battery packs, also costing a couple thousand dollars. It's not as robust as a lone infantryman, but it's already quite a lot cheaper, and the price is continuing to drop.
Although, that said, solar panels require quite a lot of sensitive and stationary infrastructure to make, so I could see the argument that the ability to fabricate them will not last long in any large-scale conflict.
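A quick back-of-the-envelope check of the arithmetic above. All the per-unit prices and the capacity factor here are rough illustrative assumptions, not quotes from any source.

```python
# Back-of-the-envelope check: 20 W brain, assume hardware within a factor of
# 10 of that, then size panels and batteries. Prices are assumed, for scale.
brain_watts = 20
overhead_factor = 10                          # "within a factor of 10"
target_watts = brain_watts * overhead_factor  # 200 W continuous load

panel_watts_per_m2 = 200                      # rough peak output of a modern panel
capacity_factor = 0.20                        # averaged over night and clouds
panel_watts_needed = target_watts / capacity_factor        # ~1 kW of panel
panel_area_m2 = panel_watts_needed / panel_watts_per_m2    # ~5 m^2

usd_per_panel_watt = 1.0                      # assumed small-scale installed cost
battery_kwh = target_watts * 12 / 1000        # ~2.4 kWh to ride through a night
usd_per_battery_kwh = 300                     # assumed pack-level price

total_usd = (panel_watts_needed * usd_per_panel_watt
             + battery_kwh * usd_per_battery_kwh)
print(f"{panel_area_m2:.0f} m^2 of panel, ~${total_usd:,.0f} all-in")
# -> roughly "a few square meters" and "a couple thousand dollars"
```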
The industry required to make all these doodads just becomes the target. Unless you're dealing with something fully autonomous to the degree that it carries its own reproduction, you're not gonna beat life in a survival contest.
That said, I don't really expect portable energy generation to be efficient enough in the near future to matter in the way you're thinking. Moreover, this totally glosses over maintenance which is a huge weakness any high tech implement has in terms of logistics.
About 6 sqm of panels at STC (standard test conditions), probably more like 12-18 sqm realistically (2.4-3.6 kW), plus at least 10-15 kWh of batteries. The math gets brutal for critical-uptime off-grid solar, but some people have more than that on an RV these days. So it's not really presenting a much larger target than a road-mobile human would (at least one with the comms and computer gear needed to do a similar job).
And the machine brain is always going to be vastly more optimized for multitasking than a human.
I dunno, some of the ways I can think of to bring down a transformer station or a concrete-hulled building involve violent forces that would, in fact, be similarly capable of reducing a lone infantryman to a bloody pulp.
You're probably thinking of explosives of some kind, but you're thinking about terminal ballistics instead of the delivery mechanism and other factors.
A man in khakis with a shovel can move out of the way of bombardment, use cover to hide and dig himself fortifications, all of which mitigates the use of artillery and ballistic missiles.
Static buildings that house infrastructure have no such advantage and require active defense forces to survive threats. They're sitting ducks.
I'm not pulling this analysis out of my ass, mind you; this is what you'll find in modern whitepapers on high-intensity warfare, which recommend against relying on anything that requires a complex supply chain, because everybody expects most complex infrastructure (sats, power grids, etc.) to be destroyed early and high-tech weapons to become either useless or prized reserves that won't be doing the bulk of the fighting.
Do you have a source on the quantum tunneling thing? That strikes me as wildly implausible.
Roger Penrose has been beating this drum since the 1990s and hasn't managed to convince many other people, but he is a Nobel laureate now, so I guess he's a pretty high-profile advocate. The way he argues for this stuff feels more like a cope for preserving some sort of transcendental, irreducible aura for human mathematical thinking than like empirically solid neuroscience, though.
Relevant paper: https://journals.aps.org/pre/abstract/10.1103/PhysRevE.110.024402
Relevant other links: https://jacquesmattheij.com/another-way-of-looking-at-lee-sedol-vs-alphago/, https://www.biorxiv.org/content/10.1101/2020.04.23.057927v1.full, https://www.rintrah.nl/a-universe-fine-tuned-for-biological-intelligence/
My read on that paper is that it says
I might find this study convincing if it was presented alongside an experiment where e.g. scientists slowly removed the insulating myelin coating from a single long nerve cell in a worm and watched what happened to the timing of signals across the brain. I'd expect the signals between distant parts of the brain not to stay synchronized as the myelin sheath degrades. If there's a sudden drop-off in synchronization at a specific thickness, rather than a gradual decline as the insulation thins, it might suggest quantum entanglement effects rather than just classical electrical conductivity changes.
In the absence of any empirical evidence like that I don’t find this paper convincing though.
I also don't think the paper authors were trying to convince readers that this is a thing that does happen in real neurons, just that further study is warranted.
This is highly speculative, and a light-year away from being a consensus position in computational neuroscience. It's in the "big if true" category, and far from being confirmed as true and meaningful.
It is trivially true that human cognition requires quantum mechanics. So does everything else. It is far from established that you need to explicitly model it at that detail to get perfectly usable higher level representations that ignore such detail.
The brain is well optimized for what's possible for a kilo and change of proteins and fats in a skull at 37.8° C, reliant on electrochemical signaling, and a very unreliable clock for synchronization.
That is nowhere near optimal when you can have more space and volume, while working with designs biology can't reach. We can use copper cable and spin up nuclear power plants.
I recall @FaulSname himself has a deep dive on the topic.
That is a very generous answer to something that seems a lot more like complete gibberish. The only real statement in that article is that a single neural structure with known classical functions may, under their crude (the author's own words) theoretical model, produce entangled photons. Even granting this, going from that to neurons communicating using such photons in any way would be an absurd leap. Using the entanglement to communicate is straight up impossible.
You are also replying to someone who can't differentiate between tunneling and entanglement, so that's a strong sign of complete woo as well.
You're correct that I'm being generous. Expecting a system as macroscopic and noisy as the brain to rely on quantum effects that go away if you look at them wrong is a stretch. I wouldn't say it's impossible, just very, very unlikely. It's the kind of thing you could present at a neuroscience conference without being kicked out, but everyone would just shake their heads and tut the whole time.
If this were true, then entering an MRI would almost certainly do crazy things to your subjective conscious experience. Quantum coherence holding up to a tesla-strong field? Never heard of that, at most it's incredibly subtle and hard to distinguish from people being suggestible (transcranial magnetic stimulation does do real things to the brain). Even the brain in its default state is close to the worst case scenario when it comes to quantum-only effects with macroscopic consequences.
And even if the brain did something funky, that's little reason to assume that it's a feature relevant to modeling it. As you've mentioned, there's a well behaved classical model. We already know that we can simulate biological neurons ~perfectly with their ML counterparts.
A lot of the commercial production in those areas is slop, though, and the ambition isn't any higher. My impression is that AI is at least good enough (or rapidly closing in on good enough) to radically increase productivity for these kinds of slop products: stock photos, unlicensed background music, jingles, logos, (indie) book covers, icon/thumbnail art, loose concept art, etc.
Even for higher-effort productions there are obvious areas where AI can help immensely; at the very least, why have humans draw (all) the transition frames in animation?
See the pictures in this xeet:
https://x.com/iannuttall/status/1904922685655707837
It is not that ChatGPT does not make mistakes (the mug with two handles and the dog with leg-arms are hilarious), but sooner or later image creation will get dangerously close to where language translation already is: for really important stuff (legal contracts, professional movie/book translations) you still want a professional translator, but DeepL/AI translation is almost always good enough. The image slop is a pretty good and fun expression of the user's creativity. Even if "real" graphic designers use it just as a tool, their productivity will skyrocket.
I wonder if and when music is disrupted. "Write a Bob Dylan song about current_year and cover it in the style of Jimi Hendrix" would be a killer application. Since music is a pillar of popular culture, I wonder if AI companies will avoid music generation for fear of the backlash.
I predict that AI music will never make a significant impact in pop culture. There are millions of decent songs already written every year; the bottleneck has always been distribution. There are simply not enough hours in the day for any single person to listen to even 0.1 percent of what is produced, and thus they'll listen to whatever is most available that falls within their range of taste. For the bigger pop stars, it's not that their music is any better than the millions of unknowns', it's that they got promoted enough by the industry to gain critical mass. There's also the shared experience factor/marketable personalities/songs seemingly sound better once you've heard them enough times.
What's the best free translator available nowadays?
The use case is translating old Perry Rhodan from German to English. I've been using Google translate, which has some problems.
Up to last year the consensus was that https://www.deepl.com translated much more naturally than the more literal Google Translate.
https://old.reddit.com/r/languagelearning/comments/16xspex/what_makes_deepl_better_than_google_translate/?show=original
But I don't know how it compares to the newest (paid) AI models.
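For the Perry Rhodan use case, a minimal sketch of doing this programmatically with DeepL's Python client is below. This assumes the `deepl` package and a free-tier API key (roughly 500k characters per month at the time of writing); check the current docs, as limits and interfaces can change.

```python
# Sketch of batch-translating German text with DeepL's Python client.
# Assumes `pip install deepl` and a DeepL API Free auth key.
import deepl

translator = deepl.Translator("YOUR_AUTH_KEY")

def translate_chapter(german_text: str) -> str:
    result = translator.translate_text(
        german_text,
        source_lang="DE",
        target_lang="EN-US",
    )
    return result.text

print(translate_chapter("Die Dritte Macht erwacht."))
```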
Music generation is one of those things that's existed in "AI form" for years, but no one noticed or cared. Band in a Box has been around since the '90s. It will automatically generate songs in more styles than you can imagine, and output to MIDI. It does all this using traditional non-AI software algorithms, and has steadily improved since it was initially released. To this end, it blows anything AI-generated completely out of the water, as the system requirements are something even the cheapest PC can easily handle, the customizability is direct and straightforward (if you want to, say, substitute one chord for another, you just swap them out rather than have the AI generate the song over again and hope it does what you told it to do and not anything else), and it manages to avoid the inherent weirdness that comes as an artifact of using neural networks to predict sounds. It's also incredibly easy to use for a first-timer who has a basic understanding of music, though it has enough advanced features to keep you busy for years.
If such a product had emerged fully formed in 2022, people would be talking about how it's a disruptive game-changer and that the days of professional musicians and songwriters are clearly numbered. But since it's been around for 35 years, nobody cares. There are two primary use cases for it. The first is for songwriters who want to generate some kind of scaffolding while they work out the individual parts, and want to do mock-ups of how the song will sound with a full band. The second use case, and the one that causes a lot of music teachers to recommend the product to their students, is the ability to generate backing tracks for practice. I've never heard of anyone using a BIAB-generated track as the final product, except in situations where the stakes are so low that it would be ridiculous to even bother to have friends over to record it.
If BIAB hasn't managed to disrupt the music industry in any meaningful way by now, I doubt that AI will. It might generate the kind of generic slop that Spotify uses for playlists like "Jazz for a Rainy Afternoon", but I doubt it will make music that anyone cares to actually listen to.
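To illustrate the "traditional non-AI algorithm" point, here is a toy sketch of rule-based backing-track generation: loop a chord progression and write it to a MIDI file. This is not how Band-in-a-Box works internally, just the general shape of the idea, and it assumes the `mido` library; swapping one chord for another is literally swapping a list, which is the directness described above.

```python
# Toy rule-based (non-AI) backing-track generator: loop a progression, emit MIDI.
import mido

PROGRESSION = [     # I-vi-IV-V in C major, as MIDI note numbers
    [60, 64, 67],   # C
    [57, 60, 64],   # Am
    [53, 57, 60],   # F
    [55, 59, 62],   # G
]

mid = mido.MidiFile()
track = mido.MidiTrack()
mid.tracks.append(track)
ticks_per_bar = mid.ticks_per_beat * 4   # 4/4, one chord per bar

for chord in PROGRESSION * 4:            # four times around the loop
    for note in chord:
        track.append(mido.Message("note_on", note=note, velocity=64, time=0))
    track.append(mido.Message("note_off", note=chord[0], velocity=64,
                              time=ticks_per_bar))
    for note in chord[1:]:
        track.append(mido.Message("note_off", note=note, velocity=64, time=0))

mid.save("backing_track.mid")
```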
I don't mean to 'words words words' you, but I tried a few Sunos the other day (Good Lord, all these names: the Fubos, the Sunos, the Tubis, the Groks) and what came out with my totally uneducated prompting was better than anything I hear on the radio these days. Low bar? Yeah, but still.
Interesting, I didn't know that this exists:
https://youtube.com/watch?v=h27rdkwI7wc
I'm the biggest enemy of AI art on TheMotte, and even I recognize that a lot of AI paintings are pretty darn good! It's not at the point where it completely obviates the need for human artists (which is why people are still employed as professional artists as of March 2025), but in the range of tasks where it is successful, it's clearly good at what it does.
I don't think anyone can reasonably argue that AI does nothing. It does a lot. It's just a question of whether and when we're going to get true AGI.
I mean, this just isn't true. Current models are good at writing. Are they as good as the best human writers? Not yet, but they aren't far away, and things like context windows or workarounds for them are going to be solved pretty quickly. Current AI art (i.e. the new multimodal OpenAI model) is, in terms of technical ability, as good as the best human artists working in digital art as a medium. You and I might agree that feeding family pictures into them to "make it like a Studio Ghibli movie" is indeed slop-inducing, but that's just a matter of bad taste on the part of the prompter. The same is true for music.
To say that current gen generative AI isn’t good at writing / art / music you essentially have to redefine those things in what amounts to a tautology. Sure, if you only like listening to music that reflects the deep, real human emotion of its creator then you won’t like listening to AI music that you know is created by AI, but if you’re tricked you’ll have no idea. An autobiography that turns out to be made up is a bad autobiography, but it’s not bad writing.
The rest of your argument is just generic god of the gaps stuff, except lacking the quality and historical backing of a good religious apologia. Three years ago language models could barely string together a coherent sentence and online digital artists who work on commission were laughing over image models that created only bizarre abstract shape art. They’re not laughing now.
Oh, oh, I get it, you would prefer people tried the style of Osamu Tezuka, how very patrician of you.
I don't think we are going to see eye to eye on this at all because I don't think current AI models are good at writing. There is no flow, there is no linking together of ideas, and the understanding of the topics covered is superficial at best. Maybe this is the standard for writing now, but I don't think you can say this is good.
I challenge you to post two examples of writing you find good in a reply below, one from AI, and one from a human. I bet you I will be able to tell which is which, and I also guess that I will find neither good nor compelling.
Slop is already enough. Slop is something that can satisfy the lowest common denominator, and if /u/self_made_human believes he is close to being able to enjoy AI writing, so will the common Joe. Then again, that same man recommended a Chinese web novel with an atrocious writing style to people, so maybe his bar is lower than many people's.
Even if AI can only output quality up to 80th percentile, that's putting 80% of people in that area out of a job.
Taste is inherently subjective, and I raise an eyebrow all the way to my hairline when people act as if there's something objective involved. Not that I think slop is a useless term, it's a perfectly cromulent word that accurately captures low-effort and an appeal to the LCD.
Fang Yuan, my love, he didn't mean it! It's a good novel, this is the hill I'm ready to die on.
I've enjoyed getting Gemini 2.5 and Grok 3 to write a new version of Journey to the West in Scott Alexander's style. Needs an edit pass, but it's close to something you'd pay money for.
PS: You need to @ instead of u/. That links to a reddit account, and doesn't ping.
Not to worry, I'm on the ten year blizzard arc right now so you can let out the breath of turbid air you've been holding. I imagine a modern LLM would have done a great job even just adapting the English translation into something that doesn't feel like the author's paid by the line.
You can do that right now if you cared to.
Find a site like piaotian that has raw Chinese chapters. Throw it into a good model. Prompt to taste. Ideally save that prompt to copy and paste later.
I did that for a hundred chapters of Forty Millenniums of Cultivation when the English translation went from workable to a bad joke, and it worked very well.
(Blizzard arc was great. The only part of the book I recall being a bit iffy was the very start)
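A minimal sketch of the "raw chapter in, styled English out" workflow described above, using the OpenAI-compatible chat API that OpenRouter and most providers expose. The model name, key, and prompt here are placeholders to adjust to taste, not a recommendation of any specific setup.

```python
# Sketch: translate a raw Chinese web-novel chapter into styled English via an
# OpenAI-compatible endpoint (OpenRouter shown). Model name is a placeholder.
from openai import OpenAI

client = OpenAI(base_url="https://openrouter.ai/api/v1", api_key="YOUR_KEY")

STYLE_PROMPT = (
    "Translate the following web novel chapter from Chinese to English. "
    "Smooth, literary prose; keep cultivation terms consistent; "
    "no paid-by-the-line padding."
)

def translate_chapter(raw_chapter: str) -> str:
    response = client.chat.completions.create(
        model="google/gemini-2.5-pro",   # placeholder: pick your model
        messages=[
            {"role": "system", "content": STYLE_PROMPT},
            {"role": "user", "content": raw_chapter},
        ],
    )
    return response.choices[0].message.content
```

Save the prompt once and reuse it per chapter, as suggested above.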
Well-prompted frontier models write better than 99% of published writers, at least for texts a few pages long.
People are already abusing the shit out of this, going by OpenRouter data.
I've done dozens, or even a hundred, pages with good results. An easy trick is to tell it to rip off the style of someone you like. Scott is easy pickings; ACX and SSC are all over the training corpus. Banks, Watts and Richard Morgan work too.
You don't need LLMs or modern AI to flood the world with slop. Recommender algorithms optimised for engagement metrics were developed at a lower tech level, and are quite sufficient to pull the stinkiest, stickiest, sloppiest slop from the collective hivemind that is the Internet and flood our consciousness with it like Vikings singing loudly about a tinned meat product*. Worse still, recommender algorithms incentivise creators to optimise for the algo and find ways to make the slop even sloppier.
“There will be no curiosity, no enjoyment of the process of life. All competing pleasures will be destroyed. But always— do not forget this, Winston— always there will be the intoxication of slop, constantly increasing and constantly growing sloppier. Always, at every moment, there will be the thrill of false novelty, the sensation of effortless pleasure, entirely familiar yet entirely new. If you want a picture of the future, imagine MrBeast eating childrens' brains — forever. ”
* SPAM(r) is, FWIW, culinary slop. I'm not cross with Hormel Foods; at the time, if you were focused on low cost and long shelf life, you probably couldn't do better.
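A crude sketch of the incentive structure described above: rank purely by a predicted engagement metric, with no notion of quality, and creators then optimise their output against that score. The scoring model here is a stand-in, not any platform's actual ranker.

```python
# Crude sketch of engagement-metric ranking. The predictions are stand-ins for
# whatever a real engagement model would output.
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    predicted_watch_seconds: float
    predicted_shares: float

def engagement_score(item: Item) -> float:
    # No notion of quality or user wellbeing, just expected engagement.
    return item.predicted_watch_seconds + 30.0 * item.predicted_shares

def rank_feed(items: list[Item], k: int = 10) -> list[Item]:
    return sorted(items, key=engagement_score, reverse=True)[:k]

# Creators then optimise their uploads against engagement_score rather than
# against being good, which is how the slop gets sloppier.
```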
Sure, might be true for stuff like books/art/music. I might argue that this has been happening for a long time, without AI, due to the centralizing effects of globalization and the internet. Why pay to listen to Joe Shmoe and his band play at a local bar when you can listen to the best of the best on your phone at any time?
In terms of customer service though, the slop is not good enough. It's not 80th percentile, it's 10th percentile. Maybe it can get better, but I don't really think so based on how these models are built. AI is just pattern recognition on a massive scale, it can't actually think. The best it's ever going to be in customer service is the equivalent of an Indian in a call center reading off a script. That's not good enough.
I'm big-picture with you on the skepticism, but this actually sounds like a huge upgrade. I can be mean to an AI, feel no guilt, and expect it to actually work out well for me. The opposite holds for a person. Nothing irritates me quite as much as bad call-center customer service, since I know it's not really their fault, but it's SO BAD.
I suspect they just haven't tried hard enough (the people tuning the LLMs for their customer service, that is). The bots installed in customer service that I've seen were much worse than even basic Gemini or DeepSeek or whatever the newest conversational model is.
In most cases of tech support the precise thing the AI has to do is to... recognize the pattern. It can already do better than an Indian reading a script in that regard. The remaining 5% of people with bespoke problems that it can't immediately pattern match can be referred directly to humans.
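A sketch of that triage pattern: match the ticket against known issues and hand anything low-confidence to a human. The classifier here is a keyword stand-in for a real LLM or embedding-similarity call, and the issue labels and threshold are illustrative.

```python
# Sketch of pattern-match-or-escalate tech-support triage.
KNOWN_ISSUES = {
    "password_reset": "Send the self-service reset link.",
    "billing_duplicate_charge": "Refund the duplicate charge and apologise.",
    "router_reboot": "Walk the customer through a power cycle.",
}

def classify(ticket_text: str) -> tuple[str, float]:
    """Stand-in: return (issue_label, confidence). Replace with a real model."""
    if "password" in ticket_text.lower():
        return "password_reset", 0.95
    return "unknown", 0.2

def handle_ticket(ticket_text: str) -> str:
    label, confidence = classify(ticket_text)
    if confidence >= 0.8 and label in KNOWN_ISSUES:
        return KNOWN_ISSUES[label]            # the easily pattern-matched majority
    return "Escalated to a human agent."      # the bespoke remainder

print(handle_ticket("I forgot my password and can't log in"))
```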