
Culture War Roundup for the week of March 24, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


I have to wonder whether people like you who post stuff like this about AI (and my past self is included) have actually used these models to do anything other than write code or analyze large datasets. AI cannot convincingly do anything that can be described as "humanities": the art, writing, and music that it produces can best be described as slop. The AI assistants they have on phone calls and websites instead of real customer service are terrible, and AI for fact-checking/research just seems to be a worse version of Google (despite Google's best efforts to destroy itself). Maybe I'm blind, but I just don't see this incoming collapse that you seem to be worried about (although I do believe we are going to have a collapse, for different reasons).

To add onto the other disagreeing replies here:

Consider the technology we use to make a cup of coffee. Once, you had to just boil ground coffee beans (presuming you already knew that you had to roast and grind them) in water. This made okay coffee, but you had to deal with the grounds. Then, we invented the percolator, which sprayed hot water over coffee and made for a crappy end result, but was probably more convenient overall.

Then came the Chemex, which took a bit more manual effort, but made good coffee. Then the almighty drip coffee machine was invented, which carefully dripped just-hot-enough water over the coffee grounds, and the end product was pretty good--maybe not as good as the Chemex, but still good enough, and very convenient. But then, then came along the Keurig K-Cup and all its derivatives, serving us coffee from plastic/aluminum pods. Is the end product as good as the older drip coffee, let alone as good as the Chemex coffee? Again, probably not, at least as far as aficionados would tell you, and yet, the K-Cup has proven to be just so damn convenient that I would not be surprised to learn that the drip coffee machine is a declining product type.

This story of convenience beating out quality has happened in many fields of technology, and I feel that AI could play out the same way.

I don't think this analogy works for literature/art. It's already extremely convenient to find a piece of art/music/literature to consume. It takes a couple of seconds to download something from the Kindle store, you can listen to anything on Spotify within a few seconds, and every painting ever made is on Google somewhere. How exactly can you get more convenient than this? I suppose there's an untapped market for specific fanfiction/slashfics for niche fandoms, but Archive of Our Own and FanFiction.net are chock full of almost anything you would want to read in this regard. There's so much slop out there we don't need AI to make any more of it.

In terms of search and customer service, there is certainly room for convenience, but the AI that I have seen implemented in these fields is simply worse than previous algorithmic (or human) implementations. I'll change my mind when I see something better.

True, there's already enough that's made by humans that one can find easily, and yet, we are getting generative AI pushed in our faces anyways. Every tech corporation is on a crusade to put an AI button within easy reach on UIs and even physical devices.

I suppose there's an untapped market for specific fanfiction/slashfics for niche fandoms, but Archive of Our Own and FanFiction.net are chock full of almost anything you would want to read in this regard.

This is where I disagree, at least in the realm of weeb fanart (I don't read/write fanfiction, but I imagine it's not dissimilar). There are orders of magnitude more possible niches and tastes than there are artists to fill them, such that I regularly run into concepts I want to see for which I cannot find even one amateur illustration posted online.

For a concrete example, one common "genre" of fanart has two characters voiced by the same voice actor cosplaying each other, sometimes in a way that directly copies official art of the character. I wanted to see fanart of Jean from Genshin Impact cosplaying Hitagi from Bakemonogatari (both voiced by Chiwa Saito), done in a way that copies official promotional Bakemonogatari art, in a style as if drawn by Akio Watanabe (the artist who actually drew the official promotional art and did the character designs for the anime). Searching the usual places like Danbooru or Gelbooru or Pixiv, I found that not even a single example of such a cosplay fanart existed, much less one that directly copied official art in the same style as the official artist. So I made some using Stable Diffusion. I've done similar things with other bits of fanart, based around scenarios I like to imagine the characters encounter in their fictional everyday lives, or in an alternate universe or whatever; unpopular characters don't get much fanart to begin with, and them doing niche activities is even rarer. Combine that with the desire to see it in certain artists' styles, and you get a combinatoric explosion of possibilities that the rather limited number of skilled human illustrators simply can't fill.
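For the curious, the mechanics are simple enough that a few lines of Python cover the whole workflow. This is a minimal sketch using the diffusers library; the checkpoint name is a placeholder (in practice you'd use an anime-tuned checkpoint, plus a LoRA for a specific artist's style), and the tags are just an example of the Danbooru-style prompting these models expect.

```python
# Minimal sketch: generating niche character/style fanart locally.
# The checkpoint below is a placeholder; anime-tuned checkpoints
# (plus artist-style LoRAs) are what this use case actually calls for.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # placeholder checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# Anime checkpoints are typically prompted with Danbooru-style tags.
prompt = (
    "jean (genshin impact), cosplay, school uniform, "
    "official art, promotional art composition"
)
negative = "lowres, bad anatomy, extra digits, watermark"

image = pipe(prompt, negative_prompt=negative, num_inference_steps=30).images[0]
image.save("cosplay_fanart.png")
```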

I imagine fanfiction could have even more possibilities that go unfulfilled if not for generative AI, due to how many different combinations of character interactions and plot events there are. There's probably a million different Harry Potter/Ron Weasley slashfics on AO3, but does it have one that also sets it in the backdrop of a specific plot that some particular fujoshi wants, with the particular style of writing she wants to read, and with the specific sequence of relationship escalations and speedbumps that she wants to see? Maybe for some fujoshi, but probably not for most.

the art, writing, and music that it produces can best be described as slop

As it turns out, humans prefer slop to the real thing.

We conducted two experiments with non-expert poetry readers and found that participants performed below chance levels in identifying AI-generated poems (46.6% accuracy, χ2(1, N = 16,340) = 75.13, p < 0.0001). Notably, participants were more likely to judge AI-generated poems as human-authored than actual human-authored poems (χ2(2, N = 16,340) = 247.04, p < 0.0001). We found that AI-generated poems were rated more favorably in qualities such as rhythm and beauty, and that this contributed to their mistaken identification as human-authored.

I would like to agree, though I think poetry is one field of art where slop is characteristically more palatable to the masses than the real thing. It's one thing to have too-perfect generated images vs. illustrations made with actual care, but your average Joe is probably going to prefer a low-brow limerick over Eliot, Ginsberg, or Cummings.

It's unfortunate how strongly the chat interface has caught on over completion-style interfaces. The single most useful LLM tool I use on a daily basis is copilot. It's not useful because it's always right, it's useful because it's sometimes right, and when it's right it's right in about a second. When it's wrong, it's also wrong in about a second, and my brain goes "no that's wrong because X Y Z, it should be such and such instead" and then I can just write the correct thing. But the important thing is that copilot does not break my flow, while tabbing over to a chat interface takes me out of the flow.

I see no particular reason that a copilot for writing couldn't exist, but as far as I can tell it doesn't (unless you count something janky like loom).

But yeah, LLMs are great at the "babble" part of "babble-and-prune":

The stricter and stronger your Prune filter, the higher quality content you stand to produce. But one common bug is related to this: if the quality of your Babble is much lower than that of your Prune, you may end up with nothing to say. Everything you can imagine saying or writing sounds cringey or content-free. Ten minutes after the conversation moves on from that topic, your Babble generator finally returns that witty comeback you were looking for. You'll probably spend your entire evening waiting for an opportunity to force it back in.

And then instead of leveraging that we for whatever reason decided that the way we want to use these things is to train them to imitate professionals in a chat room who are writing with a completely different process (having access to tools which they use before responding, editing their writing before hitting "send", etc).

The "customer service AIs are terrible" thing is I think mostly a separate thing where customer service is a cost center and their goal is usually to make you go away without too much blowback to the business. AI makes it worse, though, because the executives trust an AI CS agent even less than they would trust a low-wage human in that position, and so will give that agent even fewer tools to actually solve your problem. I think the lack of trust makes sense, too, since you're not hiring a bunch of AI CS agents you can fire if they mess up consistently, you're "hiring" a bunch of instances of one agent, so any exploitability is repeatable.

All that said, I expect that for the near future LLMs will be more of a complement than a replacement for humans. But that's not as inspiring a goal for the most ambitious AI researchers, and so I think they tend to cluster at companies with the stated goal of replacing humans. And over the much longer term it does seem unlikely that humans sit at an optimal ability-to-do-useful-things-per-unit-energy point. So looking at the immediate evidence we see the top AI researchers going all-in on replacing humans, and over the long term human replacement seems inevitable, and so it's easy to infer "oh, the thing that will make humans obsolete is the thing that all these people talking about human obsolescence are working on".

I always keep an eye out for your takes. You need to be more on the ball so that I can count on you appearing out of a dim closet every time the whole AI thing shows up here.

I see no particular reason that a copilot for writing couldn't exist, but as far as I can tell it doesn't (unless you count something janky like loom).

I'm a cheap bastard, so I enjoy Google's generosity with AI Studio. Their interface is good, or at the least more powerful and power-user-friendly than the typical chatbot app. I can fork conversations, regenerate responses easily, and so on. It doesn't hurt that Gemini 2.5 is great; the only other LLM I've used that I like so much is Grok 3.

I can see room for better tooling, and I'd love it. Maybe one day I'll be less lazy and vibe code something, but I don't want to pay API fees. Free is free, and pretty good.

And then instead of leveraging that we for whatever reason decided that the way we want to use these things is to train them to imitate professionals in a chat room who are writing with a completely different process (having access to tools which they use before responding, editing their writing before hitting "send", etc).

Gemini 2.5 reasons before outputting anything. This is annoying for short answers, but good on net. I'm a nosy individual and read its thoughts, and they usually include editing and consistency passes.

The stricter and stronger your Prune filter, the higher quality content you stand to produce. But one common bug is related to this: if the quality of your Babble is much lower than that of your Prune, you may end up with nothing to say. Everything you can imagine saying or writing sounds cringey or content-free. Ten minutes after the conversation moves on from that topic, your Babble generator finally returns that witty comeback you were looking for. You'll probably spend your entire evening waiting for an opportunity to force it back in.

I'm always glad that my babble usually comes out with minimal need for pruning. Some people can't just write on the fly, they need to plot things out, summarize and outline. Sounds like a cursed way to live.

I don't think it's unlikely that humans are far more optimized for real-world relevant computation than computers will ever be. Our neurons make use of quantum tunneling for computation in a way that classical computers can't replicate. Of course quantum computers could be a solution to this, but the engineering problems seem to be incredibly challenging. There's also evolution. Our brain has been honed by 4 billion years of natural selection. Maybe this natural selection hasn't selected for the exact kinds of processes we want AI to do, but there certainly has been selection for some combination of efficient communication and accurate pattern recognition. I'm not convinced we can engineer better than that.

The human brain may always be more efficient on a watt basis, but that doesn’t really matter when we can generate / capture extraordinary amounts of energy.

Energy infrastructure is brittle, static and vulnerable to attack in a way that the lone infantryman isn't. It matters.

Do you expect that to remain true as the price of solar panels continues to drop? A human brain only takes about 20 watts to run. If we can get within a factor of 10 of that, that's 200 watts. Currently that's a few square meters of solar panels costing a couple thousand dollars, and a few dozen kilos of battery packs, also costing a couple thousand dollars. It's not as robust as a lone infantryman, but it's already quite a lot cheaper, and the price is continuing to drop.

Although, that said, solar panels require quite a lot of sensitive and stationary infrastructure to make, so I could see the argument that the ability to fabricate them will not last long in any large-scale conflict.

The industry required to make all these doodads just becomes the target. Unless you're dealing with something fully autonomous to the degree that it carries its own reproduction, you're not gonna beat life in a survival contest.

That said, I don't really expect portable energy generation to be efficient enough in the near future to matter in the way you're thinking. Moreover, this totally glosses over maintenance, which is a huge logistical weakness of any high-tech implement.

About 6 sqm of panels at STC, probably more like 12-18 sqm realistically (2.4-3.6 kW), plus at least 10-15 kWh of batteries. The math gets brutal for critical-uptime off-grid solar, but some people have more than that on an RV these days. So it's not really presenting a much larger target than a road-mobile human would be (at least one with the comms and computer gear needed to do a similar job).
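To make the sizing arithmetic concrete, here's a back-of-the-envelope sketch in Python. The load, panel efficiency, sun-hour, derating, and autonomy figures are rough assumptions in the spirit of the numbers above, not measurements; sizing for worst-month sun and critical uptime pushes the results toward the larger figures quoted.

```python
# Back-of-the-envelope off-grid solar sizing for a 200 W continuous load.
# Every constant here is a rough assumption for illustration only.

LOAD_W = 200                # hypothetical "10x human brain" continuous draw
PANEL_W_PER_M2 = 200        # ~20% efficient panels under 1000 W/m^2 at STC
PEAK_SUN_HOURS = 4.0        # rough daily full-sun equivalent, mid-latitudes
DERATE = 0.75               # wiring, dirt, temperature, charging losses
AUTONOMY_DAYS = 2           # battery reserve for cloudy stretches

daily_need_kwh = LOAD_W * 24 / 1000                    # 4.8 kWh/day
array_kw = daily_need_kwh / (PEAK_SUN_HOURS * DERATE)  # ~1.6 kW nameplate
area_m2 = array_kw * 1000 / PANEL_W_PER_M2             # ~8 m^2 of panels
battery_kwh = daily_need_kwh * AUTONOMY_DAYS           # ~9.6 kWh of storage

print(f"array: {array_kw:.1f} kW nameplate, ~{area_m2:.0f} m^2 of panels")
print(f"battery: {battery_kwh:.1f} kWh for {AUTONOMY_DAYS} days of autonomy")
```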

And the machine brain is always going to be vastly more optimized for multitasking than a human.

I dunno, some of the ways I can think of to bring down a transformer station or a concrete-hulled building involve violent forces that would, in fact, be similarly capable of reducing a lone infantryman to a bloody pulp.

You're probably thinking of explosives of some kind, but you're thinking about terminal ballistics instead of the delivery mechanism and other factors.

A man in khakis with a shovel can move out of the way of bombardment, use cover to hide and dig himself fortifications, all of which mitigates the use of artillery and ballistic missiles.

Static buildings that house infrastructure have no such advantage and require active defense forces to survive threats. They're sitting ducks.

I'm not pulling this analysis out of my ass, mind you; this is what you'll find in modern whitepapers on high-intensity warfare, which recommend against relying on anything that requires a complex supply chain, because everybody expects most complex infrastructure (sats, power grids, etc.) to be destroyed early and high-tech weapons to become either useless or prized reserves that won't be doing the bulk of the fighting.

Do you have a source on the quantum tunneling thing? That strikes me as wildly implausible.

Roger Penrose has been beating this drum since the 1990s and hasn't managed to convince many other people, but he is a Nobel laureate now, so I guess he's a pretty high-profile advocate. The way he argues for this stuff feels more like a cope for preserving some sort of transcendental, irreducible aura around human mathematical thinking than empirically solid neuroscience, though.

My read on that paper is that it says

  1. The mechanism behind neural synchronization across distances in the brain is not fully understood
  2. Maybe that's because quantum effects play a significant role
  3. Here's a theoretical quantum mechanism that could potentially operate in myelinated axons
  4. Our mathematical model suggests this mechanism is physically possible under certain conditions
  5. The myelin sheath structure might provide sufficient isolation for quantum effects to persist despite body temperature
  6. We have no empirical evidence confirming or even suggesting that this actually happens in real neurons

I might find this study convincing if it was presented alongside an experiment where e.g. scientists slowly removed the insulating myelin coating from a single long nerve cell in a worm and watched what happened to the timing of signals across the brain. I'd expect the signals between distant parts of the brain not to stay synchronized as the myelin sheath degrades. If there's a sudden drop-off in synchronization at a specific thickness, rather than a gradual decline as the insulation thins, it might suggest quantum entanglement effects rather than just classical electrical conductivity changes.

In the absence of any empirical evidence like that I don’t find this paper convincing though.

I also don't think the paper authors were trying to convince readers that this is a thing that does happen in real neurons, just that further study is warranted.

This is highly speculative, and a light-year away from being a consensus position in computational neuroscience. It's in the big if true category, and far from being confirmed as true and meaningful.

It is trivially true that human cognition requires quantum mechanics. So does everything else. It is far from established that you need to explicitly model it at that detail to get perfectly usable higher level representations that ignore such detail.

The brain is well optimized for what's possible for a kilo and change of proteins and fats in a skull at 37.8° C, reliant on electrochemical signaling, and a very unreliable clock for synchronization.

That is nowhere near the optimal when you can have more space and volume, while working with designs biology can't reach. We can use copper cable and spin up nuclear power plants.

I recall @FaulSname himself has a deep dive on the topic.

That is a very generous answer to something that seems a lot more like complete gibberish. The only real statement in that article is that a single neural structure with known classical functions may, under their crude (the authors' own words) theoretical model, produce entangled photons. Even granting this, going from that to neurons communicating using such photons in any way would be an absurd leap. Using the entanglement to communicate is straight-up impossible.

You are also replying to someone who can't differentiate between tunneling and entanglement, so that's a strong sign of complete woo as well.

You're correct that I'm being generous. Expecting a system as macroscopic and noisy as the brain to rely on quantum effects that go away if you look at them wrong is a stretch. I wouldn't say that's impossible, just very, very unlikely. It's the kind of thing you could present at a neuroscience conference, without being kicked out, but everyone would just shake their heads and tut the whole time.

If this were true, then entering an MRI would almost certainly do crazy things to your subjective conscious experience. Quantum coherence holding up to a tesla-strong field? Never heard of that, at most it's incredibly subtle and hard to distinguish from people being suggestible (transcranial magnetic stimulation does do real things to the brain). Even the brain in its default state is close to the worst case scenario when it comes to quantum-only effects with macroscopic consequences.

And even if the brain did something funky, that's little reason to assume that it's a feature relevant to modeling it. As you've mentioned, there's a well behaved classical model. We already know that we can simulate biological neurons ~perfectly with their ML counterparts.

We know for a fact that the electron transport chain of mitochondria relies on quantum tunneling to move electrons between complexes, and MRI doesn't seem to affect that very much, so I wouldn't be surprised if an MRI had no effect on conscious experience (although I couldn't tell you, I've never had one).

I don't buy the claim that we can simulate biological neurons perfectly with their ML counterparts. We can barely simulate the function of an entire bacterial cell, which, for context, is about as big as a mitochondrion. Can we approximate neuronal function? Sure. But something is clearly lost: what else would explain the great efficiency of biological systems versus human-engineered ones in terms of power consumption?


AI cannot convincingly do anything that can be described as "humanities": the art, writing, and music that it produces can best be described as slop.

A lot of the commercial production in those areas is slop, though, and the ambition isn't any higher. My impression is that AI is at least good enough (or rapidly closing in on being good enough) to radically increase productivity for these kinds of slop products (think stock photos, unlicensed background music, jingles, logos, (indie) book covers, icon/thumbnail art, loose concept art, etc.).

Even for higher-effort productions there are obvious areas where AI can help immensely; at the very least, why have humans draw (all) the transition frames in animation?

See the pictures in this xeet:

https://x.com/iannuttall/status/1904922685655707837

It's not that ChatGPT doesn't make mistakes; the mug with two handles or the dog with leg-arms are hilarious. But sooner or later image creation will dangerously approach the territory of language translation: for really important stuff (legal contracts, professional movie/book translations) you still want a professional translator, but almost always DeepL/AI-translate is good enough. The image slop is a pretty good and fun expression of the user's creativity. Even if "real" graphic designers use it just as a tool, their productivity will skyrocket.

I wonder if and when music will be disrupted. "Write a Bob Dylan song about current_year and cover it in the style of Jimi Hendrix" would be a killer application. Since music is a pillar of popular culture, I wonder if AI companies will avoid music generation for fear of the backlash.

I predict that AI music will never make a significant impact in pop culture. There are millions of decent songs already written every year; the bottleneck has always been distribution. There are simply not enough hours in the day for any single person to listen to even 0.1 percent of what is produced, and thus they'll listen to whatever is most available that falls within their range of taste. For the bigger pop stars, it's not that their music is any better than the millions of unknowns', it's that they got promoted enough by the industry to gain critical mass. There's also the shared-experience factor, marketable personalities, and the way songs seem to sound better once you've heard them enough times.

deepl/AI-translate

What's the best free translator available nowadays?

The use case is translating old Perry Rhodan from German to English. I've been using Google translate, which has some problems.

Up to last year the consensus was that https://www.deepl.com translated much more naturally than the more literal Google Translate.

https://old.reddit.com/r/languagelearning/comments/16xspex/what_makes_deepl_better_than_google_translate/?show=original

Google Translate tends to do a much more literal translation, DeepL tends to be more idiomatic and free. Sometimes one is preferable, sometimes the other.

But I don't know how it compares to the newest (paid) AI models.

Music generation is one of those things that's existed in "AI form" for years, but no one noticed or cared. Band in a Box has been around since the '90s. It will automatically generate songs in more styles than you can imagine, and output to MIDI. It does all this using traditional non-AI software algorithms, and has steadily improved since it was initially released. In these respects, it blows anything AI-generated completely out of the water: the system requirements are something even the cheapest PC can easily handle; the customizability is direct and straightforward (if you want to, say, substitute one chord for another, you just swap them out rather than having the AI regenerate the song and hoping it does what you told it to do and nothing else); and it manages to avoid the inherent weirdness that comes as an artifact of using neural networks to predict sounds. It's also incredibly easy to use for a first-timer who has a basic understanding of music, though it has enough advanced features to keep you busy for years.

If such a product emerged fully formed in 2022, people would be talking about how it's a disruptive game-changer and how the days of professional musicians and songwriters are clearly numbered. But since it's been around for 35 years, nobody cares. There are two primary use cases for it. The first is for songwriters who want to generate some kind of scaffolding while they work out the individual parts, and want to do mock-ups of how the song will sound with a full band. The second use case, and the one that causes a lot of music teachers to recommend the product to their students, is the ability to generate backing tracks for practice. I've never heard of anyone using a BIAB-generated track as the final product, except in situations where the stakes are so low that it would be ridiculous to even bother having friends over to record it.

If BIAB hasn't managed to disrupt the music industry in any meaningful way by now, I doubt that AI will. It might generate the kind of generic slop that Spotify uses for playlists like "Jazz for a Rainy Afternoon", but I doubt it will make music that anyone cares to actually listen to.

I don't mean to 'words words words' you, but I tried a few Sunos the other day (good Lord, all these names, the Fubos, the Sunos, the Tubis, the Groks) and what came out with my totally uneducated prompting was better than anything I hear on the radio these days. Low bar? Yeah, but still.

Interesting, I didn't know that this exists:

https://youtube.com/watch?v=h27rdkwI7wc

I'm the biggest enemy of AI art on TheMotte, and even I recognize that a lot of AI paintings are pretty darn good! It's not at the point where it completely obviates the need for human artists (which is why people are still employed as professional artists as of March 2025), but in the range of tasks where it is successful, it's clearly good at what it does.

I don't think anyone can reasonably argue that AI does nothing. It does a lot. It's just a question of whether and when we're going to get true AGI.

AI cannot convincingly do anything that can be described as "humanities": the art, writing, and music that it produces can best be described as slop.

I mean, this just isn't true. Current models are good at writing. Are they as good as the best human writers? Not yet, but they aren't far away, and limitations like context windows (or workarounds for them) are going to be solved pretty quickly. Current AI art (i.e. the new multimodal OpenAI model) is, in terms of technical ability, as good as the best human artists working in digital art as a medium. You and I might agree that feeding family pictures into them to "make it like a Studio Ghibli movie" is indeed slop-inducing, but that's just a matter of bad taste on the part of the prompter. The same is true for music.

To say that current gen generative AI isn’t good at writing / art / music you essentially have to redefine those things in what amounts to a tautology. Sure, if you only like listening to music that reflects the deep, real human emotion of its creator then you won’t like listening to AI music that you know is created by AI, but if you’re tricked you’ll have no idea. An autobiography that turns out to be made up is a bad autobiography, but it’s not bad writing.

The rest of your argument is just generic god of the gaps stuff, except lacking the quality and historical backing of a good religious apologia. Three years ago language models could barely string together a coherent sentence and online digital artists who work on commission were laughing over image models that created only bizarre abstract shape art. They’re not laughing now.

You and I might agree that feeding family pictures into them to "make it like a Studio Ghibli movie" is indeed slop-inducing, but that's just a matter of bad taste on the part of the prompter.

Oh, oh, I get it, you would prefer people tried the style of Osamu Tezuka, how very patrician of you.

I don't think we are going to see eye to eye on this at all because I don't think current AI models are good at writing. There is no flow, there is no linking together of ideas, and the understanding of the topics covered is superficial at best. Maybe this is the standard for writing now, but I don't think you can say this is good.

I challenge you to post two examples of writing you find good in a reply below, one from AI, and one from a human. I bet you I will be able to tell which is which, and I also guess that I will find neither good nor compelling.

Slop is already enough. Slop is something that can satisfy the lowest common denominator, and if /u/self_made_human believes he is close to being able to enjoy AI writing, so will the common Joe. Then again, that same man recommended a Chinese web novel with atrocious writing style to people, so maybe his bar is lower than many.

Even if AI can only output quality up to 80th percentile, that's putting 80% of people in that area out of a job.

Taste is inherently subjective, and I raise an eyebrow all the way to my hairline when people act as if there's something objective involved. Not that I think slop is a useless term, it's a perfectly cromulent word that accurately captures low-effort and an appeal to the LCD.

Then again, that same man recommended a Chinese web novel with atrocious writing style to people, so maybe his bar is lower than many.

Fang Yuan, my love, he didn't mean it! It's a good novel, this is the hill I'm ready to die on.

I've enjoyed getting Gemini 2.5 and Grok 3 to write a new version of Journey to the West in Scott Alexander's style. Needs an edit pass, but it's close to something you'd pay money for.

PS: You need to @ instead of u/. That links to a reddit account, and doesn't ping.

It's a good novel, this is the hill I'm ready to die on.

Not to worry, I'm on the ten year blizzard arc right now so you can let out the breath of turbid air you've been holding. I imagine a modern LLM would have done a great job even just adapting the English translation into something that doesn't feel like the author's paid by the line.

You can do that right now if you cared to.

Find a site like piaotian that has raw Chinese chapters. Throw it into a good model. Prompt to taste. Ideally save that prompt to copy and paste later.

I did that for a hundred chapters of Forty Millenniums of Cultivation when the English translation went from workable to a bad joke, and it worked very well.

(Blizzard arc was great. The only part of the book I recall being a bit iffy was the very start)
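For anyone wanting to replicate that workflow, here's a minimal sketch in Python. It assumes the openai client library (any chat-completions-compatible endpoint works); the model name, the URL, and the style prompt are placeholders to adjust to taste, and real raw-chapter sites would need a bit of per-site HTML parsing first.

```python
# Minimal sketch of the "raw chapter -> LLM translation" workflow.
# Assumes the openai library and an API key in OPENAI_API_KEY;
# the model name, prompt, and URL are placeholders, not recommendations.
import requests
from openai import OpenAI

client = OpenAI()

# The saved, reusable prompt, per the advice above.
SYSTEM_PROMPT = (
    "You are translating a Chinese cultivation web novel into English. "
    "Translate faithfully, but write natural, flowing English prose. "
    "Keep genre terms (qi, dao, cultivation stages) consistent throughout."
)

def translate_chapter(url: str) -> str:
    page = requests.get(url, timeout=30)
    page.encoding = "gbk"  # many raw-chapter sites serve GBK, not UTF-8
    # In practice, extract just the chapter text from the HTML here.
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use whatever model you prefer
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": page.text},
        ],
    )
    return resp.choices[0].message.content

print(translate_chapter("https://example.com/chapter-101.html"))  # hypothetical URL
```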

Well-prompted frontier models write better than 99% of published writers, at least for texts a few pages long.

People are already abusing the shit out of this, going by OpenRouter data.

I've done dozens, even a hundred pages, with good results. An easy trick is to tell it to rip off the style of someone you like. Scott is easy pickings; ACX and SSC are all over the training corpus. Banks, Watts, and Richard Morgan work too.

You don't need LLMs or modern AI to flood the world with slop. Recommender algorithms optimised for engagement metrics were developed at a lower tech level, and are quite sufficient to pull the stinkiest, stickiest, sloppiest slop from the collective hivemind that is the Internet and flood our consciousness with it like Vikings singing loudly about a tinned meat product*. Worse still, recommender algorithms incentivise creators to optimise for the algo and find ways to make the slop even sloppier.

“There will be no curiosity, no enjoyment of the process of life. All competing pleasures will be destroyed. But always— do not forget this, Winston— always there will be the intoxication of slop, constantly increasing and constantly growing sloppier. Always, at every moment, there will be the thrill of false novelty, the sensation of effortless pleasure, entirely familiar yet entirely new. If you want a picture of the future, imagine MrBeast eating children's brains — forever.”

* SPAM(r) is, FWIW, culinary slop. I'm not cross with Hormel Foods - at the time, if you were focussed on low cost and long shelf life, you probably couldn't do better.

Sure, might be true for stuff like books/art/music. I might argue that this has been happening for a long time, without AI, due to the centralizing effects of globalization and the internet. Why pay to listen to Joe Shmoe and his band play at a local bar when you can listen to the best of the best on your phone at any time?

In terms of customer service though, the slop is not good enough. It's not 80th percentile, it's 10th percentile. Maybe it can get better, but I don't really think so based on how these models are built. AI is just pattern recognition on a massive scale; it can't actually think. The best it's ever going to be in customer service is the equivalent of an Indian in a call center reading off a script. That's not good enough.

The best it's ever going to be in customer service is the equivalent of an Indian in a call center reading off a script. That's not good enough.

I'm big picture with you on the skepticism, but this actually sounds like a huge upgrade. I can be mean to an AI, feel no guilt, and expect it to actually work out well for me. The opposite for a person. Nothing irritates me quite as much as bad call center customer service, since I know it's not really their fault, but it's SO BAD.

I suspect they just haven't tried hard enough (the people tuning the LLMs for their customer service, that is). The bots installed in customer service that I've seen were much worse than even the basic Gemini or Deepseek or whatever is the newest conversational model.

In most cases of tech support the precise thing the AI has to do is to... recognize the pattern. It can already do better than an Indian reading a script in that regard. The remaining 5% of people with bespoke problems that it can't immediately pattern match can be referred directly to humans.