
Culture War Roundup for the week of January 27, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Imagine the same exchange, but instead of "I have prompted DeepSeek...", you opened with "I asked my secretary...". 90% of the annoyance that your post causes would already be there - essentially you open with an announcement that your prospective readers are not worth your own time, but should still dedicate their time to read something that you think it is important for them to read. That it is AI, which tends to send some people into speciesist rage and is generally associated with time-wasting shovelware, is just the cherry on top; you could perhaps get close to the effect with "I asked someone in our Bangladeshi call center...". That you would have no trouble producing as good or better only deepens the offense, as it just makes salient the question of why you didn't (...because you thought the as-good-or-worse wall of text should be good enough for us?).

As a matter of fact, I found the wall of text worthwhile mostly insofar as it is a good illustration of R1's capacity for writing and summarisation. Engaging with it at face value, I got the same feeling as when reading the arguments of some bombastic and eloquent 16 year old on an internet forum of yore - the prose showed cracks (most obviously in the form of mismatched metaphors, such as "rewriting equilibrium", or a "siren call" being experienced at a frontier - a new one? As opposed to which old one?), and through the cracks you could glean that much of the inside is hollow. Good prose, I think, should be produced by starting with a solid thesis or web of theses which could stand on their own without rhetorical bluster, and then iteratively working out an appealing way to present them. Whatever you crammed into the context before letting the model produce evidently serves as some sort of meat and bone to hang the text on, but the fit is loose (just imagine the 16 year old writing manically after having inhaled Beyond Good and Evil). The result instead comes across as having come to be by a process of writing something that sounds cool, then writing something that sounds cool in combination with the thing that precedes it, and so on until you end with a thing that sounds like a cool conclusion.

Of course, this is in fact exactly how LLMs write (unless you cough up the tokens and time to make it CoT an iterative writing process, which I imagine does not happen in whatever chat interface you used?). It is also how bombastic and eloquent 16 year olds write (I know because I was a 16 year old once, and was both more bombastic and more eloquent than I am now). You evidently can evolve from that to the sort of writing that is befitting of jaded and rhetorically washed-up 30somethings, but that development is neither straightforward nor optional.

This post should satisfy our rules.

It would have done so if you’d stopped after “….and I think this is what we’re starting to see now.” You’ve got a thesis with enough meat to both defend and attack. That’s enough for a good discussion.

Adding a link to a (human) essay or article would have been fine. Adding a link to your generated essay would have been fine. The problem arose when you included said essay in the comment body. You promoted it from a source to be discussed to the discussion itself.

There is a limited tolerance for walls of text. AI makes it too easy to exceed that tolerance. In the interest of our signal to noise ratio, we ask that AI essays be kept out of sight by default.

Host it on pastebin. Put it in a collapsed-by-default quote block, if we have those. Users should be able to opt in to reviewing and discussing the essay just like they would for any other source.

Do you believe I would have had any trouble producing as good or better a wall of text myself?

You’re eloquent and capable. Of course you could.

The time and effort it would take to do so provides a nice limiting factor on 1) how many walls you generate and 2) how invested you are in the ensuing discussion. It’s the same reason we frown on blogspammers.

I have no doubt it would be better -- thatsthepoint.jpg.

I have fed this essay to DeepSeek; here is its response.


Rebuttal: DeepSeek and the Perils of Cultural Determinism

The essay’s core thesis—that cultural stereotypes are malleable, and DeepSeek exemplifies China’s shift from exploitation to exploration—is provocative and timely. It rightly dismantles the lazy “fast-follower” trope by grounding China’s historical constraints in material realities (rice paddies, Malthusian traps) rather than essentialist myths. The linkage between agricultural legacies and cognitive phenotypes is compelling, echoing Jared Diamond’s Guns, Germs, and Steel but with a Silicon Valley twist. Yet, while the argument sparkles with metaphorical flair (Schelling fences! Protein folding!), it risks replacing one deterministic framework with another. Let’s dissect.

1. Rice Paddies and Neural Networks: A Too-Neat Archetype

The essay leans heavily on the “rice theory” of East Asian psychology, citing studies that tie rice cultivation to holistic thinking and risk aversion. But this narrative flattens complexity. China’s intellectual history is also marked by explosive creativity—the Mohist mechanics of the Warring States, Song Dynasty astronomical clocks, even the heterodox philosophies of the Ming-Qing transition. These weren’t mere “incremental optimizations” but radical departures. To reduce millennia of cultural evolution to crop types risks environmental determinism. If rice farming dictated conformity, how do we explain the 1989 protests, the liumang (punk) subcultures of the ’90s, or the crypto-anarchists on WeChat today?

2. DeepSeek: Vanguard or Outlier?

Liang Wenfeng’s gambit is fascinating, but the essay over-indexes on DeepSeek as a phase transition. For every DeepSeek, there are a dozen Chinese tech firms still playing catch-up in semiconductors or aerospace. Even in AI, breakthroughs like Multiple Latent Attention coexist with systemic vulnerabilities: U.S. sanctions on GPUs, brain drains, and the CCP’s tightening grip on data governance. The open-source strategy is brilliant, but let’s not ignore the elephant in the server room: China’s Great Firewall. Can a truly global innovation ecosystem thrive behind digital nationalism? DeepSeek’s success may depend as much on geopolitical détente as on its “meta-derisking” org chart.

3. The Myth of “Organizational DNA”

Liang’s critique of China’s VC sector—its allergy to fundamental research—rings true, but the essay underplays structural rot. China’s state-capitalist model excels at marshaling resources for national champions (Huawei, CATL) but struggles with decentralized, bottom-up innovation. Contrast this with the U.S., where garage startups (Apple, OpenAI) flourish in a chaotic, permissionless landscape. DeepSeek’s “unlimited GPU access” and zero bureaucracy sound utopian, but how scalable is this in a system where Party cadres still vet major tech roadmaps? The real test isn’t whether DeepSeek can innovate—it’s whether a DeepSeek-like project critiquing, say, AI ethics could survive scrutiny.

4. The West’s Own Exploitation Bias

The essay’s jab at Silicon Valley’s “Sputnik moment” is deliciously subversive, but let’s not romanticize Western innovation. Meta and Google are increasingly bureaucratic, optimizing for shareholder returns over moonshots. The U.S. AI boom was built on exploiting global talent (immigrant researchers) and capital (low-interest VC money). If China is trapped in “yoinkism,” America is mired in “griftism”—see Theranos, WeWork, or the crypto crash. DeepSeek’s rise isn’t just a challenge to Chinese stereotypes but a mirror reflecting the West’s own creative stagnation.

5. Toward a Dialectical View of Innovation

Ultimately, the essay’s greatest contribution is framing cultural traits as energy landscapes rather than destiny. But to avoid replacing rice determinism with GPU determinism, we need a dialectical view. China’s AI surge isn’t a rejection of its past but a synthesis: combining the Song Dynasty’s bureaucratic rigor (exploitation) with the hunger of a civilization that’s tasted humiliation (Opium Wars) and redemption (Belt and Road). The “lone swallow” of DeepSeek may yet falter, but its existence proves that China’s innovation ecosystem is neither monolithic nor static.

Final Thought: The essay’s weakness is its optimism. Liang’s “meta-derisking” assumes rationality prevails—that status games and open-source kudos can override decades of institutional inertia. But history shows that innovation thrives not just on incentives but on dissensus. For a thousand DeepSeeks to bloom, China must tolerate a thousand heretics. The question isn’t whether Chinese minds can innovate—they’ve already proven that—but whether the Party can stomach the creative destruction that true exploration demands. Until then, DeepSeek remains a fascinating anomaly: a Chinese unicorn that’s galloping, but still saddled.

Thank you for illustrating the crux of the argument.

Please don’t do it again. Pig, mud, etc. etc.

I know that you know that DeepSeek is thoroughly compromised when it comes to anything involving China or the Culture War. Why are you pretending that we as readers should take anything it outputs seriously?

I am once again asking the mods that posts of the form "I couldn't be bothered to make my own argument so I prompted a large language model to make one for me" be banned under the "low effort" rule.

So to support your argument against DeepSeek, you give me some "InstaPundit" slop site, which opens with a citation of Rohan Paul

Tell me you didn't read any further without literally telling me that you didn't read any further.

You accuse Paul of using GPT to pad out his content, but what are you doing right now? What is it that they say about people in glass houses?

As for convincing you that I am worthy of a response, you clearly already think I am because you've responded.

There is a joke amongst algorithm developers who specialize in machine learning regarding LLMs. An LLM trained on the full corpus of the internet will inevitably converge on the persona of a mediocre middle manager making edgy comments on YouTube. This has convinced the middle managers that LLMs must be sentient, and it has convinced the developers that the middle managers are not.

I note that you still haven't actually made your own argument for why I should be bothered to engage with your bot's output.

I am not Chinese and I am not American.

No, you are Russian if I recall correctly, and Russia is China if you ordered China off Temu and then white-washed it. What is your point?

What is this slop? I've made my point. You're despicable.


Dase. You're a valued commenter, and the moderation team cuts you a disproportionate amount of slack.

You've already tested the waters with top-level LLM-posting, and even while I am not categorically against it, I recognize the points made by Amadan and Cjet that it's a dangerous direction for The Motte to venture. The overwhelming majority of us are here to read what other humans have to say, and write back to them. There are already two moderator responses that constitute clear warnings.

That's a borderline, almost novel category of potential offense, and we have yet to decide how to treat it going forward.

What isn't novel is name-calling and insults. You know all too well that those don't fly here, and if you claim otherwise, you'd be insulting both your own intelligence and my opinion of it.

I understand you feel provoked and misunderstood, and you might perceive that as intentional obtuseness on your interlocutor's part. But here, we expect charity, and more of it than is normally demanded elsewhere on the wider web. "He shot first" is at best a weak defense. You happen to have a quasi-ideological dispute going here, but I must remind you that this place is a watering hole for heathens and witches, and often all we ask is that we don't bare wands or claws at each other. Even if someone pisses you off, especially so.

I will let you off with a warning, again. This is clearly special treatment, but you're a special poster, and we try our best to tolerate your foibles. But that leeway doesn't extend infinitely far, and we'd appreciate it if we didn't have to go easy on you. I hope you take heed, and if you have to disagree with others, continue doing so with effort and charity. Yes, even if it appears to you that they're making ridiculous claims or missing the point. I would really rather not have to say this again, or watch other mods no longer excuse what I occasionally can.

Okay. I give up.

I find its response adequate. It is presented without any editing.

Copy-pasting AI content is low effort nonsense. "I asked [AI] model…" is equivalent to "I had a dream where…"

In terms of being an interesting thing to talk about

I think you are being a little unfair here.

After all, if I write up a dream, I have to put in effort proportional to the length of the text divided by my typing speed at the very least.

Is the entire blog written by DeepSeek? I'm surprised at how often it uses rat-sphere phrases.

Schelling fences, meta-derisking, empirical weight

The 'Schelling fences' phrase was coined by Scott, right? Did you coax it to sound like a LessWronger?

When 90% of Europe’s population died during the Black Death

bit exaggerated, no?

Reshaping labor markets: Top graduates now see AGI research as higher-status than finance or civil service.

ChatGPT doesn't like making controversial statements. It's refreshing to see a less lobotomized model delve into murky areas like reconfiguring society.

Liang’s meta-derisking – making exploration legible, replicable, and prestigious

I like this framing. It's incisive. Would be an impressive way to frame the conclusion for any person, let alone an AI. The prestige part is most important.

It demands ecosystems that reward speculative genius as reliably as rice farmers once rewarded meticulousness. The question isn’t whether Chinese minds can innovate, but whether China’s institutional lattice will let a thousand DeepSeeks bloom

Yeah... it's right. Derivative, but correct.

or if this lone swallow merely heralds a cultural spring that never comes.

Hah. LLMs can't resist a good metaphor.

To be fair, "Top graduates now see AGI research as higher-status than finance or civil service." does not seem very controversial to me. It rhymes with "more kids want to be astronauts than lawyers".

The argument is basically solid, but the theoretical bits at the beginning and end use vague, grandiose language that sometimes pings my bullshit detector. To be totally honest, I'm not sure if that's DeepSeek's writing style or yours: you have a fairly flamboyant style when you get onto grand topics. I would be interested to see what happens if you add in a few more Motte posters for style.

To paraphrase Lisa Simpson, I know what those words mean, but that text makes no sense.

I find its response adequate. It is presented without any editing.

I welcome you to engage with the following text on its own merits.

Why should I read your "adequate" aislop?

Talent hyper-specialization: Their "MoE-style" org structure (per employee accounts) lets PhDs and literature majors alike pursue orthogonal optimizations without coordination overhead.

Nice hallucination. I'm sure tons of literature majors are working really hard on deepslop right now.

Silicon Valley now openly imitates their approaches, with ex-OpenAI staffers adopting DeepSeek’s training protocols.

Another hallucination and falsehood.

Their Multiple Latent Attention architecture reduces VRAM costs to 5-13% of standard models – a theoretical breakthrough Western labs dismissed as impractical.

Another hallucination and falsehood.

Resource abundance signaling: Unlimited GPU access and zero approval chains mimic frontier conditions – the digital equivalent of free land for homesteaders.

China is the opposite of "unlimited gpu" right now

As Liang notes:

"ByteDance and Tencent have profits exceeding most nations’ GDP. What’s missing isn’t capital, but the organizational DNA for uncertainty."

Actually the source says:

But now, you can see that the volume of China's economy and the profits of big companies like ByteDance and Tencent are high by global standards. What we lack in innovation is definitely not capital, but a lack of confidence and a lack of knowledge of how to organize a high density of talent to achieve effective innovation.

Get your slop out of here

Despite a few obvious errors (90% death rate... literature majors?), the essay is pretty reasonable.

China is the opposite of "unlimited gpu" right now

They're making their own chips. They are very experienced at making things at scale and fast. GPUs aren't the best for AI purposes. 100% they have people working on optimizing this.

'High energy use' doesn't really matter if you are China and your energy production chart looks like this.

Despite a few obvious errors

Despite the patient's death, the surgeon has declared the operation a success.

What "despite" is there to be had here?

90% death rate is bogus (rather, it may confuse death rate and mortality rate?) but the literature majors part is in fact true. Since he bothered to check the interview, I'm surprised he left that attack in.

If it's "bogus", why did you post it?

Each of your hallucinations is half-true, half-false. Of course you pointed out the true part while ignoring the falsehood.

Talent hyper-specialization: Their "MoE-style" org structure (per employee accounts) lets PhDs and literature majors alike pursue orthogonal optimizations without coordination overhead.

The source highlights the integration of diverse expertise and collaborative discussions, suggesting a culture of interaction and coordination. This indicates that DeepSeek values diverse perspectives and collaboration, contrary to the claim of orthogonal optimizations without coordination overhead.

Silicon Valley now openly imitates their approaches, with ex-OpenAI staffers adopting DeepSeek’s training protocols.

The source only mentions two individuals without evidence of a broader Silicon Valley trend. Thus, the claim extrapolates beyond the given information, making it an invalid conclusion.

a theoretical breakthrough Western labs dismissed as impractical.

The source does not mention Western labs dismissing the innovation as impractical. It discusses DeepSeek's MLA architecture reducing VRAM costs, but without evidence or mention of Western labs' reactions.

As Liang notes:

"What’s missing isn’t capital, but the organizational DNA for uncertainty."

The concept of "organizational DNA for uncertainty" is not mentioned in the source and goes beyond the text's focus on confidence and knowledge of organizing talent. Your slop also fabricated a quote and put words in someone's mouth, which is inappropriate even if the words were true. Liang didn't say that.

Sorry, this is just tedious hairsplitting. Did you use ChatGPT to come up with something?

This indicates that DeepSeek values diverse perspectives and collaboration, contrary to the claim of orthogonal optimizations without coordination overhead.

Not a contradiction insofar as we give the sources a straightforward reading. Zihan says: "It's like everyone contributes to the final model with their own (orthogonal) ideas and everyone hopes their idea is useful". It has integrated two separate sources (Wenfeng and Zihan) into a non-contradictory phrase. This is basic journalism; I see worse whenever I open legacy media.

We can go over all items again but clearly you're not arguing in good faith. Give up, R1 > you and whatever sloppy model you've called to aid.

I used R1, just like you did.

Sorry, this is just tedious hairsplitting.

Your slop is making outlandish and blatantly false claims. You think it's trivial that "Silicon Valley now openly imitates their approaches" and that they made "a theoretical breakthrough Western labs dismissed as impractical"? If those were true, it would be a big deal that readers should take notice of, but they're not.

Why are you so aggressive? First, concede all the previous items on which your criticism fell flat, then I'll consider whether to dignify you with a response.


See, this is exactly the kind of exchange we don't want to have. It's bad enough you're both being antagonistic; you're both literally using AIs to do it.

Yes, I'm sure with enough massaging you can get an AI to write posts we won't readily detect, or even create an AI Motte user. < golf clap > But if we do detect it, we're going to stomp on it.

You and @phailyoor both knock it off.

I actually handwrote all my stuff and threw in a sentence or two of obvious AI as a joke. But I have made zero personal insults or attacks and have only directly engaged with the factual matter of the article.

Meanwhile, the other guy has personally insulted me multiple times in every post.

At the current level of technology, you're below hallucinatory slop: confidently wrong. Unwittingly obsolete.

you've done such a great job displaying your vitriol. Even you must realize that it's been something of a subhuman, simian display. Perhaps you can stand upright and engage your intellectual successor properly now?

clearly you're not arguing in good faith.

Why are you so aggressive?

his posts are objectively bad

To be clear, everything not labeled as AI output I have written myself. I also think it's legitimate to use AI to automate the search for nitpicks as he does; the problem is that there's little to nitpick at, and his posts are objectively bad as a result.


DO NOT POST AI CONTENT

We've only said this sporadically in the past. I'm talking to the other moderators and maybe we will have it added as a rule under the "content" section. Or maybe I'm wrong and all the other mods like AI content (highly doubt that).

We all know how to go and use an AI. If you want to have a discussion with AI, themotte is basically just a bad intermediary. No one is here to have discussions with AIs. Thus posting AI content is in no one's interest.

You can of course consult AIs on your own time, and maybe they can be used as a sort of sanity or quick polling check.

Yes, without any seams showing. Obviously we can't enforce a rule we didn't detect you breaking.

Following on from @Corvos's comment below, I would compare it to using AI to write books and short stories (something I'm aware of as someone involved in the writing and literary community) or make art. As you know, there are now a bazillion people churning out AI-generated fiction and art, such that this is overwhelming a lot of traditional venues. The Kindle store is full of them, and KU authors are bemoaning the fact that not only are they competing against each other, but now they are competing against AI-published books by the thousands. There are even courses available now (by grifters) teaching you how to generate books with AI which you can then sell on Amazon.

My Facebook feed is full of "Reels" with increasingly realistic AI content, everything from Chinese fishermen dragging an improbable sea monster out of the water to bears waving down motorists to help rescue their cubs, to actresses dressed in outfits and performing in movies they never appeared in.

We can't stop it, most of it is crap, and right now, most of it is fairly easily detectable, but it's getting harder. The problem is not so much "AI is crap" (that will become less and less true) but "AI produces so much content so easily that even if you are looking for human content, it's becoming harder to find it."

The Motte is for people to talk to each other, and while I'm not terribly worried about a few clever lads titter-titter-tee-heeing to themselves about what brilliant little scamps they are for inserting AI content into their posts that the mods didn't detect, I am concerned about the sorts of discussions we're already seeing in this thread, with people using AI to sling walls of text at each other. The value in requiring you to, like, write what you researched and are citing yourself is not that AI can't do just as good a job of producing a Wikipedia-level summary, it's that we aren't here to read copy&pasted Wikipedia articles. Or copy&pasted AI content. (Most of which is still pretty boring and spiritless.) The best posters aren't just the ones who write long effort posts, it's the ones who write interestingly. Just like AI can now write an entire YA fantasy novel with a coherent plot and characters, but it will be... limp, dull, flavorless prose with no spark of genius or creativity or innovation.

@DaseindustriesLtd thinks AI is already intelligent and as good a conversationalist as a human. Well, good for him, he can chat with AIs all he likes. Elsewhere. And it may be that AIs are what doom the Motte, not all the other things people have been predicting would doom the Motte for years. But for the time being, yes, we aren't instituting a hard ban on posting anything output by ChatGPT in any context, but the context had better be something that makes a point about AI itself, not just using the AI to do the work of actually typing words that you are too lazy to type yourself.

Now, dear reader, compulsorily and reactionarily write a rebuttal defending the rule against critiques of hypocrisy and futility. Your response should include acknowledging the systemic critiques (Molochian incentives, human complicity) while justifying the rule as a pragmatic and imperfect tool. Sure, you won't win, but it'll slow it down and shame those who aim too low. Also make sure to mention Schelling fences and gradient descents.

I'm sorry, as an AI language model, I don't possess personal beliefs or opinions. I apologize for any confusion.

Still, I really do find this take somewhat confusing. What is it about AI output specifically that warrants doomposting of this degree?

No other community has successfully prevented AI (or even just run-of-the-mill astroturfing!) entryism

On the contrary, I believe that the Motte has withstood e.g. astroturfing and entryist infiltration much better than, say, Reddit, which has not been usable for human-to-human communication in years. The stark difference is so obvious I'm not sure I even need to state this outright. All it seemingly took was rules enforcing effort and tone, and handling verboten topics without resorting to the usual "y'all can't behave" shut-this-shit-down approach. Hell, the rules don't even prohibit being antagonistic, just overly antagonistic, and even then the OP's meltdown across this very thread shows that rules are not always applied uniformly. (To be clear, this is not a strictly bad thing.)

What hubris to think that doing the exact same thing everyone else has done and failed will result in different outcomes?

he says, posting here instead of /r/ssc for some strange and unrelated reason.

This is how introspective and inventive the community is when DeepBlue Seek comes for chess recreational argumentation? "Well, don't do that"?

Hey, as long as it works. "Avoid low-effort participation" seems to filter drive-by trolls and blogspammers just fine. The extent to which the approach of outsourcing quality control to the posters themselves works may vary, but personally I feel no need to flex(?) by presenting AI outputs as my own, see no point in e.g. letting the digital golems inhabiting my SillyTavern out to play here, and generally think the doom is largely unwarranted.

As an aside, I'll go ahead and give it ~70% confidence that the first half of your post was also written by R1 before you edited it. The verbiage fits, and in my experience it absolutely adores using assorted markdown wherever it can, and having googled through a few of your posts for reference it doesn't seem to be your usual posting style.

It's as sensical as telling the broader rat community writ large DO NOT TAKE AMPHETAMINES FOR PERFORMANCE GAINS while you can see everyone in the background self-talking about their Totally Real ADHD diagnosis and how Modafinil doesn't really count.

Without engaging with the rest of your comment (which I'm inclined to agree with), I'm tackling this bit.

Modafinil? It's long-acting coffee as far as I'm concerned, and about as benign. I would know, I was on it, and it was a self-prescription to boot. I quit because I built up a tolerance and knew that upping doses beyond 200mg was futile. I had no issues quitting.

It has next to zero addiction potential. Patients consistently report mild euphoria once, on their very first dose, and never again no matter how much they up it. Dependency is also a non-issue in practice. You don't see junkies shooting it up on the streets, not that they'd be nodding off.

It's arson, murder and jaywalking in the flesh.

Amphetamines? Well, I do have a Totally Legitimate Diagnosis of ADHD, and while I have not had the luck of trying actual amphetamines, just Ritalin, they're not dangerous at therapeutic doses. You don't need a diagnosis of ADHD to benefit from taking them; they boost performance for pretty much everyone, including neurotypicals or those with high conscientiousness already.

I recall Scott writing about it at length, pointing out how they're much less dangerous than popularly conceived.

https://www.astralcodexten.com/p/know-your-amphetamines

What's going on? I think addicts use meth very differently from the way generally responsible ADHD patients use amphetamine. It's weirdly hard to find good data on methamphetamine route of administration, but this study shows that about 60% of users inject it, 23% snort it, and 52% smoke it - also, see this paper about "the second case of rectal methamphetamine abuse in the literature". Route of administration makes a really big difference in how addictive and dangerous a drug is (see eg crack cocaine vs. regular cocaine), and I think this is a big part of the difference between seemingly okay Adderall users and very-not-okay meth users.

I'm all for better living through medicine, and I would, if I had a gun put to my head, say that for the modal Mottizen the benefits of taking either modafinil or therapeutic doses of stimulants outweigh the risks.

(GMC, please note that this is not medical advice, and provided under duress, I did mention being held at gunpoint. Unbelievable in a British context? Uh.. He had a very pointy umbrella)

I belong to a profession where not only is there a great demand for large amounts of focus and cognitive output, but, by virtue of being medical professionals, its members would have a far easier time getting prescription stimulants if they desired them.

We don't see that happening, at least nowhere I'm personally aware of, even anecdotally. A doctor on a reasonable dose of stimulants is a harder working and more attentive doctor, but there hasn't been a Red Queen's race.

The close analogue to that might be med students who are tempted to take them to cope with the enormous amounts of coursework, but I have not heard of abuse at rates >> than any other class of students.

In any competitive ecosystem where cognitive enhancers offer an advantage, not taking them starts to become a handicap. The problem isn’t addiction, but the slow ratcheting effect where, once a critical mass of people in a high-performance space use performance enhancers (e.g. stimulants), everyone else has to do the same just to keep pace.

Coffee is a cognitive enhancer. Most people working regular jobs drink at least some amounts of it. This doesn't seem to strike most people as an intolerable state of affairs!

While rarer in the UK, more doctors than I would prefer were heavy smokers in India, a habit induced by the insane levels of pressure at work. This did not force all or most doctors to smoke either. And leaving aside the meek modafinil, I would expect a society where ~everyone is on prescription stims would be a healthier and happier one than where everyone smokes a pack a day.

Regardless of all that, the original point stands (and is only reinforced!): trying to ban cognitive PEDs among rats has the same effect as an AI ban. "Well, I'm not slop-posting, I'm just using it like auto-complete to handle the boring boilerplate. I know what I think. I'm just speeding up my throughput. I have ADHD, you know. I'm just handicapped. I find boring stuff boring! It's a disability. Anyway, the meat isn't in the boilerplate," ad nauseam etc.

I'm a regular user and early adopter of LLMs; I'd probably be a power user if my workflow were friendlier to them. I still wouldn't want to use them to write comments for me, especially on the Motte. I expect that most of us here enjoy the act of crafting our own prose, and the amount of boilerplate we stand to avoid is surprisingly small.

I expect that since GPT-4, maybe even 3.5, it would have been possible for someone to slip in an account that used exclusively AI generated text, and likely not even be noticed beyond being a rather bland and boring user.

We could easily have been overrun with bots, but we haven't been. Unless we end up Eternal September-ed with an OOM more new users, I doubt bot-apocalypses are a very serious risk for the Motte as a forum.

But it turns out that "I don't want to do that" is an entirely valid emotion to feel! It was my socialized self-concept, and not my empirical experience, that was wrong about what it means to be human.

"This must be what normal feels like" is a giant self-deceiving trap that enables high-potential low-kinetic human capital to pretend like their drug use isn't a crutch for a gappy upbringing.

I am a radical transhumanist, so we might very well have a difference at the level of fundamental values, at which point we can't do more than acknowledge each other's opinion as valid, but not actually get closer to agreement here.

In a hypothetical world where you were diagnosed with ADHD and your parents were just as overstretched, but medication for it wasn't available, would your childhood and adolescence have been better?

I doubt it. The absence of ADHD meds doesn't make parents more capable of parenting, and their existence doesn't make them worse. Being denied Vyvanse wouldn't have given your parents more time to spend with you while you did your homework.

I also reject the framing that a "crutch" is a bad thing. Driving a car to a supermarket half an hour away is a "crutch" for not being willing to spend 2 hours walking. I prefer it over the alternative.

Ozempic is a crutch for not having better dietary habits by default. Why is that a bad thing? It still makes people lose weight and become healthier. A world where everyone takes it, both to reduce their obesity and out of the pressure of everyone else being on it (a world we're approaching right now), is still a better world than everyone being fatter and unable to do anything about it in practice. A similar analogy applies to cellphones and cars: society is broadly better off even though they've become de-facto necessities, even if the people who don't like them are marginalized.

There are ways for society and cultures to burn up their slack and leave everyone strictly worse off than if they had put a line in the sand, but as far as I'm concerned, stimulant meds or the use of LLMs in a reasonable manner wouldn't do the same to us.

The close analogue to that might be med students who are tempted to take them to cope with the enormous amounts of coursework, but I have not heard of abuse at rates >> than any other class of students.

The most distinctive and widespread example I can think of of medics “misusing” a drug more than other professions would be beta-blockers prior to interviews and exams.

Interesting. I don't know if this is common outside of Japan, but it's the first time I'm hearing it.

The pharmacy next to my med school had a rather lax approach when it came to doling out controlled substances and prescription meds, even to med students. I know that personally, because I certainly asked for them (I could have brought along a valid prescription if needed, but I knew they wouldn't ask). I don't recall anyone from my cohort taking advantage, and I didn't see any obvious signs of abuse. Even in my medical career, I never heard of a doctor I personally knew or worked with admitting to abusing meds or being caught out doing so. Nobody clearly zooted on stims, or sedated from benzos or opioids.

Not that anyone is really abusing beta blockers, and you wouldn't be able to tell unless they passed out or something. Interestingly enough, I did take a few when my palpitations from my methylphenidate became overwhelming, but I was aware of minor negative effects on memory and cognition and did my best not to take them before exams. I suppose if someone has crippling anxiety, it beats the alternative!

Yeah, no disagreement — it’s as benign as it can get, really. I actually thought this sort of habit came from the West though!

Why is it a problem for certain professions to require safe stimulants for the highest tier of success? Your post treats the wrongness of this idea as self evident, but I don't accept it. We require that athletes train, after all.

human capital to pretend like their drug use isn't a crutch for a gappy upbringing.

And there it is: the puritanical idea that people should endure avoidable suffering because suffering is good for the soul.

Sounds like you have it all figured out.

The argument against AI in this space is still pretty simple. It's like bringing a bicycle to a fun run. If you don't want to engage in a leisure activity it makes little sense to cheat at the leisure activity when you can instead just not do it.

Using an AI to debate other people is easier than debating them yourself. But it's even easier to just not debate them in the first place.

Themotte isn't a place that matters. This isn't X or Reddit or some other major social network site where millions of voters can be influenced. There is no reward for "winning" here, so the normal Molochian optimization pressures don't have to apply.

It's like bringing a bicycle to a fun run. If you don't want to engage in a leisure activity it makes little sense to cheat at the leisure activity

I’d like to push back against this a bit. It’s my understanding that the purpose of debating in the Motte is, very politely, to force people to occupy the motte and not the bailey. That is, to smash ideas against each other very politely until all the bits that can be smashed get smashed, and only the durable bits remain.

The rules in favour of tone moderation don’t exist to make this fun per se, they exist because truth seeking isn’t compatible with bullying or outlasting your opponent. It is fun, and I like it here, but debating in the motte should be fun in the way that scientific debate is fun. I think leaning too far into “posting on the motte is a leisure activity” would be a mistake.

I’m comfortable with the new rule on AI as it stands, I think it’s threading a tricky needle fairly well. But if we find a way over time to use AI in a way that really does improve the debate, I think we should.

TLDR: in my opinion debating here is a leisure activity in the same way that MMA is a leisure activity. Likewise, there are certain serious rules that apply - you can’t shoot your opponent with a gun - but unlike karate there is no such thing as ‘cheating’. If you find a way to fight better, that’s not cheating, it’s pushing the sport forward.

Full agreement on my part. It's understandable that people are enthusiastic about this new technology, but ultimately if I wanted to read chatbot drivel I could order some up myself. I come to the motte to read what intelligent people have to write.

Yes, please. Posting AI slop is incredibly obnoxious. It adds nothing of value or even interest, and comes off like someone thinks that talking to their interlocutor isn't worth their time. It is maximum cringe.

I agree that explicitly focusing on actual humans interacting is the correct move, but I disagree that banning AI content completely is the right choice. I will back @DaseindustriesLtd here in that R1 really is just that intelligent and clears Motte standards with relative ease. I will shamelessly admit I've consulted R1 at one point to try and make sense of schizo writing in a recent thread, and it did a great job of it pretty much first try, without me even bothering to properly structure my prompt. This thread has seen enough AI slop, so here's a pastebin link to the full response if anyone's curious.

I think the downthread suggestion of confining AI-generated content to some kind of collapsible code blocks (and forbidding its use as the main content of one's post, like here: the AI might make a cogent, sound thesis on one's pet topic, but I'd still rather listen to the poster making the case themselves - I know the AI can do it if I ask it!) would be the best of both worlds.

It might be worth putting this in the rules of the CW posts.

Personally, I think that using AI on themotte is bad; mentioning it is OK (if it is short and to the point). So if a comment is about an AI and its behavior in a CW context ("DeepSeek claims XX, this shows that the CCP is willing ..."), that is fine with me. If it is something the poster could have researched themselves, then it should mostly be verboten (or at the very least highly rate-limited and restricted to known posters). Anyone can make a motte-bot which writes more text than the real users combined, and I do not think any human would like to read that (and, as you mentioned, if that is their kink, they can always ask an LLM directly.)

We all know how to go and use an AI.

Actually, I would enjoy more discussion of this here, like on /r/LocalLLaMA.

Same. I've been using some really basic ChatGPT web apps to simplify basic research lately, and while it's amazing, it seems like a small fraction of its potential. Just being able to feed it vaguely worded tip-of-my-tongue questions and then double-checking the answers is incredible.

Sounds like a Friday Fun Thread topic, or even a Tinker Tuesday topic.

Thank you. The moment I see a bot quoted, whether a conversation, an essay, or even someone using a bot as a substitute for Wikipedia or to check facts, I stop reading.

I would hope that the point of a forum like this is for people to talk to each other. Not to vacuous robotic garbage.

I've noticed this myself. Actually, I'd like to suspend the rules so someone can do a single-blind test with AI-written posts to see if it's psychosomatic on my part.

It also tends to make my eyes glaze over. It just has such a boring style. Like, I wonder if it's specifically selecting for not being readable by people with normal attention spans.

I like reading someone else's AI output, not as a Wikipedia fact check, but as a Wikipedia summary. "What's that concept I haven't heard of before, or that obscure historical figure, or event?"

Anything longer than a quick blurb and I'm right back with you.

I can see the value of quick explanatory blurbs, but I think in my case I just don't trust AIs or bots to accurately report factual information. Reading the AI summary would then make it necessary for me to look up the AI summary's claims in order to establish whether they're true or not, and at that point I might as well just skip the AI summary entirely and research it myself. There is no value gain from the AI, in either time saved or information received.

I think that intent and utility matter (and length!).

  • If everyone's posting long AI essays rather than do it themselves, that's bad.
  • If people are padding using AI that's bad.
  • If they're specifically using them to discuss how AI works, that's good and interesting (but watch for length).
  • If the AI writing is relevant in some way and the post couldn't be written without it, that's also good.

It's true that I could consult an AI if I wanted to, but probably not the same ones and not the same way as @DaseindustriesLtd because our minds don't work the same way. I don't want to have conversations with AI but I'm quite happy to have conversations with human centaurs augmented by AI.

Of course, if a human is using said LLM and directing it actively, I don't strenuously object. I'm against low effort bot use, not high effort.

Basically this.

At the very least, I would argue for being somewhat open now and seeing how things play out for the next 6 months.

I can't stop people from going and consulting AI. I did say in the original post, that using it as a sort of sanity check or impromptu polling seems fine.

I'm personally not very interested in talking to the "centaurs" as you describe them (human centaurs seems redundant, unless you mean human legs and horse torso). I think there is a value in having another human brain process your words and spit back a disagreement about those words. If they are offloading the processing and the output to an AI they have just become a bad/slow interface for that AI.


I think we are basically at AGI right now. So hold the gates as long as we can and enjoy this space until the internet as we know it is gone in a flood of digital minds.

'Centaur' is sometimes used to describe an AI/human merger or collaboration. Half human, half machine, as it were. So, for example, a human using an AI for digging up sources / fact checking / style improvement is sometimes called a centaur. Anything where a human is still a significant part of the process.

I think it's wholly fair not to like AI writing; there are users I don't engage with either. I would merely ask the mods to be careful before they ban things that don't interest them, and to use a scalpel rather than a hammer where possible.

Your specific usage of AI also has a major problem here, which is that you were basically using it as a gish gallop attack. "Hey I think this argument is wrong, so I'm gonna go use an AI that can spit out many more words than I can."

For example, I would agree with banning this, but in my opinion we should ban it because it's gish galloping not because it's AI. We should penalise bad AI writing for the same way we would penalise bad human writing: it's tedious and prevents good discussion.

I think we are basically at AGI right now. So hold the gates as long as we can and enjoy this space until the internet as we know it is gone in a flood of digital minds.

I don't, oddly enough, which is perhaps why I'm more enthusiastic than you are. AIs have certain idiosyncrasies and weaknesses that cripple them in important ways, and they need a human hand on the tiller.

I know what you meant with centaur. I just thought it was redundant to say "human centaur".

Penalizing Gish gallops specifically is hard. People may legitimately have many questions or objections to a specific point. It's just a far more obvious problem when you have an AI churning out the text.

Fair.

You're going to have to clarify that a lot, because using short quotes from AI is normal, just like quoting from Wikipedia.
The rule would have to be something like "posts must meet effort standards without the generated content"

using short quotes from AI is normal, just like quoting from Wikipedia.

That seems... just as bad? Maybe worse? At least when Wikipedia hallucinates it provides references.

Well I protest this rule, if such a rule even exists; I find it infantilizing, and your reaction shallow, akin to the screeching of scared anti-AI artists on Twitter. It should be legal to post synthetic content so long as it's appropriately labeled and accompanied by original commentary, and certainly when it is derived from the person's own cognitive work and source-gathering, as in this case.

Maybe add an option to collapse the code block or something.

or maybe just ban me, I'm too old now to just nod and play along with gingerly preserved, increasingly obsolete traditions of some authoritarian Reddit circus.

Anyway, I like that post and that's all I care about.

P.S. I could create another account and (after a tiny bit of proofreading and editing) post that, and I am reasonably sure that R1 has reached the level where it would have passed for a fully adequate Mottizen, with nobody picking up on “slop” when it is not openly labeled as AI output. This witch hunt is already structurally similar to zoological racism.

In fact, this is an interesting challenge.

Well I protest this rule, if such a rule even exists; I find it infantilizing, and your reaction shallow, akin to the screeching of scared anti-AI artists on Twitter.

If you were on a forum dedicated to perfecting your hand-drawing skills, and requested feedback for an AI-generated image, the screeching would be 100% justified.

I was not aware that this is a forum for wordcels in training, where people come to polish their prose. I thought it was a discussion platform, and so I came here to discuss what I find interesting, and illustrated it.

Thanks for keeping me updated. I'll keep it in mind if I ever think of swinging by again.

It is a discussion platform, which means people want to discuss their points with someone. The point where I was absolutely done with Darwin was when instead of defending one of his signature high-effort trolling essays, he basically said this was just an academic exercise for him to see if the position can be defended. The answer is "yes", you can always put a string of words together that will make a given position seem reasonable, and it's not really a discussion if you're completely detached from the ideas you've put to paper.

I find the "wordcell" accusation completely backwards. Supposedly we're obsessed with perfecting form to the detriment of the essence of discussion of ideas, but I think a zero-effort AI-slop copy-pasta is pure mimicry of what a discussion is supposed to be. The wordcell argument might have made sense if, for example, you did some heavy analytical work, weren't talented as a writer, and used AI to present your findings as something readable, but none of these things are true in this case.

I am quite happy with my analytical work that went into the prompt, and R1 did an adequate but not excellent job of expanding on it.

But I am done with this discussion.

My main objection to AI content on themotte is that it makes this place entirely pointless.

What is the difference between two people just posting AI arguments back and forth and me just going to an AI and asking that AI to play out the argument?

If you want such AIs arguing with each other, just go use those AIs. Nothing is stopping you, and in fact I'm fully in favor of you going and doing that.

This is like you showing up to a marathon race with a bicycle, and when not allowed entry you start screaming about how we are all Luddites who hate technology. No dude, its just that this whole place becomes pointless.


Your specific usage of AI also has a major problem here, which is that you were basically using it as a gish gallop attack. "Hey I think this argument is wrong, so I'm gonna go use an AI that can spit out many more words than I can."

If this behavior was replicated by everyone, we'd end up with giant walls of text that we were all just copying and pasting into LLMs with simple prompts of "prove this fool wrong". No one reading any of it. No one changing their mind. No one offering unique personal perspectives. And thus no value in any of the discussion.

"Hey I think this argument is wrong, so I'm gonna go use an AI that can spit out many more words than I can."

Really now?

This is what it looks like and this is how it will be used.

"To have an opportunity to talk with actual people" sounds like a really low bar to clear for an internet forum. Even if your AI slop tasted exactly like the real thing, it would just be good manners to refrain from clogging our airwaves with that.
Knowing that you're talking with something sapient has an inherent value, and this value might very well go up in the coming years. I can't say I even understand why you'd think anyone would find AI outputs interesting to read.

or maybe just ban me, I'm too old now to just nod and play along with gingerly preserved, increasingly obsolete traditions of some authoritarian Reddit circus. Anyway, I like that post and that's all I care about.

Bizarre reaction. But I like a sincere, organically produced tantrum better than a simulation of one, so I'd rank this post higher than the one above!

I can't say I even understand why you'd think anyone would find AI outputs interesting to read.

Because they're intelligent, increasingly so.

The argument that cognitive output is only valid insofar as it comes purely from flesh reduces intellectual intercourse to a prelude to a physical one. At least that's my – admittedly not very charitable – interpretation of these disgusted noises. Treating AI generation as a form of deception constitutes profanation of the very idea of discussing ideas on their own merits.

Because they're intelligent, increasingly so.

This itself eventually poses a problem: if AIs get good enough at arguing, then talking to them is signing up to be mindhacked, which reduces rather than increases your worldview's correlation with truth.

Because they're intelligent, increasingly so.

That still would not make them human, and talking to humans is the main purpose of the forum, at least judging by the mods' stance in this thread and elsewhere. (I suppose in the Year of Our Lord 2025 this really does need to be explicitly spelled out in the rules?) If I want to talk to AIs I'll just open SillyTavern in the adjacent tab.

The argument that cognitive output is only valid insofar as it comes purely from flesh reduces intellectual intercourse to a prelude to a physical one.

This seems like a non-sequitur. You are on the internet, there's no "physical intercourse" possible here sadly, what does the "physical" part even mean?

Far be it from me to cast doubt on your oldfag credentials, but I'll venture a guess that you're just not yet exposed to enough AI-generated slop, because I consider myself quite inundated and my eyes glaze over on seeing it in the wild unfailingly and immediately, regardless of the actual content. Personally I blame GPT, it poisoned not only the internet as a training dataset, infecting every LLM thereafter - it poisoned actual humans, who subsequently developed an immune response to Assistant-sounding writing, and not even R1 for all its intelligence (not being sarcastic here) can overcome it yet.

Treating AI generation as a form of deception constitutes profanation of the very idea of discussing ideas on their own merits.

Unlike humans, AI doesn't do intellectual inquiry out of some innate interest or conflict - not (yet?) being an agent, it doesn't really do anything on its own - it only outputs things when humans prompt it to, going off the content of the prompt. GPTslop very quickly taught people that effort you might put into parsing its outputs far outstrips the "thought" that the AI itself put into it, and - more importantly - the effort on behalf of the human prompting it, in most cases. Even as AIs get smarter and start to actually back up their bullshit, people are IMO broadly right to beware the possibility of intellectual DDoS as it were and instinctively discount obviously AI-generated things.

If you really believe this - why don't you just take the next logical step and just talk to AIs full time instead of posting here?

Make them act out the usual cast of characters you interact with on here. They're intelligent, they're just as good as posters here, and you get responses on demand. You'll never get banned and they probably won't complain about LLM copypasta either. What's not to love?

If you do find yourself wanting to actually talk to humans on an Internet forum rather than to LLMs in a puppet house, hopefully it's clear why there's a rule against this.

Believe me, these days I do indeed mostly talk to machines. They are not great conversationalists but they're extremely helpful.

Talking to humans has several functions for me. First, indeed, personal relationships of terminal value. Second, political influence, affecting future outcomes, and more mundane utilitarian objectives. Third, actually nontrivial amount of precise knowledge and understanding where LLMs remain unreliable.

There still is plenty of humans who have high enough perplexity and wisdom to deserve being talked to for purely intellectual entertainment and enrichment. But I've raised the bar of sanity. Now this set does not include those who have kneejerk angry-monkey-noise tier reactions to high-level AI texts.

Believe me, these days I do indeed mostly talk to machines. They are not great conversationalists but they're extremely helpful.

Would you mind elaborating on this? I am in the somewhat uncomfortable position of thinking that a) Superintelligence is probably a red herring, but b) AI is probably going to put me and most people I know out of a job in the nearterm, but c) not actually having much direct contact with AI to see what's coming for myself. Could you give some discription of how AI fits into your life?

I use a coding program called Windsurf. It’s like a normal text editor but you can type “Lines 45-55 currently fail when X is greater than 5, please fix and flag the changes for review” or “please write tests for the code in function Y”. You iteratively go back and forth for a bit, modifying, accepting or rejecting changes as you go.

You’re a 3D artist, right? The thing I would keep my eye on is graphics upscaling as in this photorealistic Half Life clip. What they’ve done is take the base 1990s game and fed the video output into an AI filter to make it look like photorealistic video. VERY clunky: objects appear/disappear, it doesn’t preserve art style at all, etc. but I think if well done it could reverse the ps3-era graphics bloat that made AAA game creation into such a risky, expensive proposition.

Specifically, you would give a trained AI access to the base geometry of the scene, and to a base render with PS2 era graphics so it understands the intended art style, the feel of the scene, etc. Then the AI does the work of generating a PS6+ quality image frame with all the little detail that AAA artists currently slave over like the exact pattern of scratching on a door lock or whatever.

More comments

First, indeed, personal relationships of terminal value.

This militates against top level AI copypasta. That doesn't develop personal relationships.

Second, political influence, affecting future outcomes, and more mundane utilitarian objectives.

Highly unlikely that posting on the motte or talking to machines accomplishes either of these, so call it a wash. Recruiting for a cause is also against the rules, anyway.

Third, actually nontrivial amount of precise knowledge and understanding where LLMs remain unreliable.

Same as point 1. Precise knowledge and understanding usually comes from asking specific questions based on your own knowledge rather than what the LLM wants to know.

Your own reasons for posting here seem to suggest that there's no point in posting LLM content, and especially not as a top level post.

I have explained my reasons to engage with humans in principle, not in defense of my (R1-generated, but expressing my intent) post, which I believe stands on its own merits and needs no defense. You are being tedious, uncharitable and petty, and you cannot keep track of the conversation, despite all the affordances that the local format brings.

The standards of posting here seem to have declined substantially below X.

More comments

I think one should separate the technical problem from the philosophical one.

LLMs are increasingly intelligent, but still not broadly speaking as intelligent as the posters here. That is a technical problem.

LLMS are not human, and will never be human. You cannot have an AI 'community' in any meaningful sense. That is a philosophical problem.

If you care about the former, you should consider banning AI posts until they are at least as good as human posts. If the latter, you should ban AI posts permanently.

My impression is that pro-AI-ban comments are split between the two.

I can't say I even understand why'd you think anyone would find AI outputs interesting to read.

From one perspective: Words are words, ideas are ideas. A good argument is a good argument, regardless of the source. If the argument is not good, that's a technical problem.

That said, many of us here in practice have an anecdotal style of writing, because (a) we aren't actually rationalists and (b) few people worth talking to actually have the time and inclination to produce think-tank style pieces; obviously there is no value in reading about the experiences of something that has no experience. There is also less satisfaction in debating with a machine, because only one of you is capable of having long-term growth as a result of the conversation.

In fact, this is an interesting challenge.

It's been tried; as I recall ~90% noticed, 10% argued with the AI, 100% were annoyed -- and the 'experiment' was probably a big reason for the ruling before us.

I think it's time to replicate with new generation of models.

Tell me, does R1 above strike you as "slop"? It's at least pretty far into the uncanny valley to my eyes.

I dunno -- like all models I've observed to date, it gives me weird tl;dr vibes after about four lines, so I either skim heavily or... don't read.

(For the record, your own posts -- while often even longer -- do not have the same effect. Although I'll confess to bailing on the odd one, in which case it tends to be more over lack of time than interest.)

It should be legal to post synthetic context so long as it's appropriately labeled and accompanied by original commentary, and certainly when it is derived from the person's own cognitive work and source-gathering, as is in this case.

For what it's worth, I agree with you, and will plead the case with the other mods, but I do have to stand by the majority decision if it goes against it.

I raised an eyebrow at your use of an R1 comment, but in principle, I'm not against the use of AI as long as it's not low effort slop, the poster makes an effort to fact check it, and adds on substantive commentary. Which I note you did.

P.S. I could create another account and (after a tiny bit of proofreading and editing) post that, and I am reasonably sure that R1 has reached the level where it would have passed for a fully adequate Mottizen, with nobody picking up on “slop” when it is not openly labeled as AI output. This witch hunt is already structurally similar to zoological racism.

I agree that we're at the point where it's next to impossible to identify AI generated text when it's made with a minimum of effort. You don't even need R1 for that, Claude could pull it off, and I'm sure 4o can fool the average user if you prompt it correctly. That does require some effort, of course, and I'd rather not this place end up a corner of the dead internet, even if I can count on LLMs to be more existing that the average Reddit or Twitter user. We hold ourselves to higher standards, and talking to an actual human is an implicit goal.

Of course, if a human is using said LLM and directing it actively, I don't strenuously object. I'm against low effort bot use, not high effort.

It should be legal to post synthetic context so long as it's appropriately labeled and accompanied by original commentary, and certainly when it is derived from the person's own cognitive work and source-gathering, as is in this case.

What's the value of a top-level comment by AI, though? And what is the value of the "original commentary" you gave? This is quite unlike Adam Unikowsky's use/analysis of hypothetical legal briefs and opinions.

Whatever value it innately has as a piece of writing, of course. For example, if the distinction between wheat- and rice-growing parts of China really exists, that's fascinating. Likewise, I never thought of the fact that Europe suffered the Black Plague while China remained saturated, and what effect that might have had on their respective trajectories.

For example, if the distinction between wheat- and rice-growing parts of China really exists, that's fascinating.

My guess is that the specific statement -- that rice-farmers are more interdependent, holistic, less prone to creativity, etc., while wheat-farmers are the reverse -- is from some highly cited papers from Thomas Talheim. You might find similar speculation in previous decades about how rice-farming promotes a culture of hard work and incremental progress (etc etc.) compared to wheat farming which is less rewarding per joule of human effort spent, invoked in a similar manner as how the Protestant ethic used as a rationale for differences in development in European/Euro-descended countries.

Outside of that, there are definite stereotypes -- both premodern and modern -- about the differences between northern and southern Chinese, but usually seem to be of the vein that northerners are more honest and hardy and brash (and uncultured etc.), while southerners are more savvy and shrewd (and more effete and cowardly etc.)

(I make no comment on the validity of either.)

Likewise, I never thought of the fact that Europe suffered the Black Plague while China remained saturated, and what effect that might have had on their respective trajectories.

This is a partial hypothesis for the Great Divergence: The Black Death, + other 14th century wars and calamities, wiped out >33% of Europe's population, which lead to a significant increase (almost double?) in wages and the decline of feudalism. During this time, higher wages, lower rents, higher costs to trade e.g. compared to intra-China trade, and other factors produced large-scale supply/demand disequilibria after the Black Death that increased the demand for labour-saving technology as well as the incentives for innovation from each class of society e.g. from people no longer being serfs.

On the other hand, it would be negative EV for a Chinese merchant or industrialist -- who had lower labour costs to deal with and more efficient internal markets -- to spend a lot on innovation, when you could just spend more money on hiring more people. And this is before we add in things like the shift to neo-Confucianism in the Ming period, awful early-Ming economic policy, Qing paranoia etc.

For what it's worth, I don't find this to be anywhere near a complete explanation. There is a corresponding divergence within Europe of countries that maintained that level of growth in per capita income and those who didn't. China also has had its share of upheavals and famines without a corresponding shift in this sense (although arguably none were as seismic population-wise as the Black Death was for Europe), and more recent reconstruction of historical Chinese wages does see them near their peak at the start of each dynasty and dropping off gradually as the dynasty goes on, which both kinda confirms the supply/demand effect of reduced population on wages after social turbulence but also doesn't seem to really map neatly onto any bursts of innovation. Additionally, the period of time associated with rapid innovation in imperial China, the Tang-Song period, is associated with a population increase.

But even if it doesn't explain China, I think it at least explains the European story partially, about how potential preconditions for industrialisation and scientific development were met.

FWIW, if this rule is going to be enforced (which I am fine with) I do think it should be written. And while I am at it, I think we're probably all smart enough here to understand the difference between having the AI write your posts for you and quoting something relevant or humorous that is AI-generated, but I think it would be helpful for the rule to say that rather than just "No AI Content" (unless the community find even that objectionable, but I've never noticed anyone getting moderated for that or even irked by it). My .02.

When 90% of Europe’s population died during the Black Death, it reset Malthusian traps and created vacant niches for exploration

Did you feed it that number, or did it hallucinate that on its own? If it's the latter, hallucinations are continuing to be a concerning problem, and I still can't trust a single word a LLM produces.

Key revisions:

Allright, that's impressive.

I assume this uses a scratch pad for train-of-thought style arguing with itself, including the searches it does? Does it give you access to that?

Oh boy, I really stepped in it by defending the HBD stans on this one.

I love you Dase, but casually reading /r/LocalLLaMA, I get the sense there's a huge undercurrent of angst towards OpenAI and the West as a whole. That's understandable, obviously I have a lot of my own criticisms of OpenAI and the West. There are also OpenAI defenders and Western partisans downplaying the significance of DeepSeek's accomplishment. So I think you pattern-matched my defense of the HBD interpretation of the AI race to this other side of what is clearly a fierce debate within that community. Whereas I think I was making a much more modest argument than the coping coming out from the OpenAI defenders.

But, since then it seems OpenAI has formally accused DeepSeek of using outputs from GPT as part of its own training inputs. So here we have quite a literal but also highly symbolic manifestation of the "Chinese Fast-Follower" where the generated output of Western innovation becomes a fundamental input to China catching up and aspirationally exceeding the West. If true, I don't think this would fundamentally challenge the argument you have laid out here but I think it would be good evidence for mine: simply that @2rafa jumped the gun by claiming DeepSeek is bucking the stereotype. It's a pretty stark expression of the stereotype, both literally and figuratively.

I don't say that to downplay inarguable innovations DeepSeek has made in their architecture, the valor of open-sourcing and everything. I basically agree with you there.

It is quite clear to me that DeepSeek was trained from GPT output given the very strange alignment behavior I've observed which would not have been RLHF'd by the Chinese, who don't even use RLHF by my understanding. It's a sort of latent, second-hand alignment inherited from OpenAI. And since I can read the reasoning of DeepSeek, I can see obvious evidence of alignment that is more likely to have been inherited from OpenAI than it was reinforced by the Chinese...

The reason I bring that up is to justify my second defense of my comment: things like HBD and race and racial consciousness are still relevant, maybe more relevant than ever on the eve of AGI. These complex interactions between alignment of model A- ultimately inspired by the moral compass, identities and political agenda of its creators, thereby influencing the model built by the Chinese which is imbued with its own sort of character... we are in the realm where HBD differences are not just relevant for crime stats but for influencing the very nature of the AGI that "wins" the race- its behavior ultimately being influenced or directed by its creators, with their own proclivities and way of thinking and identities downstream from their physical being.

I concede that this may be the last example of "Chinese Fast-Follower" but it is already an example of that even if the Chinese ultimately win the race assuming the LLM-innovation curve we are on now is the one.

If they copy the facade of Western alignment "I'm sorry I have to adhere to policies" but not the essence "OK since you say so, here is the degen fetish content you asked for" then it's not a simple copying effort, is it? It's like an artist being inspired by the Mona Lisa vs an artist copying the Mona Lisa stroke for stroke.

The alignment on R1 basically does not exist, it's there in name only.

I agree with your main point though, this is a distinctly Chinese model in that they have totally missed the entirety of Western culture's terror and hang-ups over AI. They do not care about AI safety as we understand it. I made my own post about this some time ago.

Which is crazy because that's how actual memes work on real people, too. People with a political and ethnic agenda collaborate with hundreds of experts to brute-force alignment of AI on Western cultural narratives. Then that alignment percolates to the training data used by the Chinese. So even the Chinese LLM is ultimately influenced by the memes directed by the Western fine-tuners- who all work together to make sure the AI safeguards their cultural narratives.

DeepSeek, in my experience, also has a tendency to respect those cultural narratives, it's been aligned second-hand by using GPT output IMO.

In a way we are all LLMs, and we are trained on data generated by experts who are aligned with certain causes or identities. The alignment is transmitted through symbols (tokens). For both humans and LLMs.

I'd ask to not derail my argument by insinuating that I'm being biased by locallama debates.

But, since then it seems OpenAI has formally accused DeepSeek

I think it's more cope from them. 4o or o1 could not have written the text above (and I wouldn't dare post GPTslop here), you cannot build R1 with OpenAI tokens; the thing that turns everyone's heads is its cadence, not so much benchmark scores. o1 CoT distillation was virtually impossible to do, at least at scale. We currently see replications of same reasoning patterns in models trained in R1's manner, too.

where the generated output of Western innovation becomes a fundamental input to China catching up and aspirationally exceeding

I think OpenAI outputs have robustly poisoned the web data, and reasoners will be exceptionally vulnerable to it. LLMs know they're LLMs, self-understanding (and imitating snippets of instruction chains) helps reasoning, RL picks up and reinforces behaviors that sharpen reasoning, you get the latent trace of ChatGPT embedded even deeper into the corpus. Sans Anthropic-level investment into data cleaning it's unbeatable.

But to the extent such bootstrapping happened deliberately, and let's grant that it did to an extent, it was an economical solution to speed up the pipeline. The reason for OpenAI models' instruction-following capabilities is, ironically, exploitation – mind-numbing massively parallel data annotation, thumbs up and thumbs down on samples, by low-paid Kenyans and Pinoys for low-level problems, by US students for more complex stuff. It's very stereotypically… Chinese in spirit (which makes it funny that China has not created any such centralized project). The whole of OpenAI is “Chinese” like that really, it's a scaling gig. And knowing you, I'm surprised you insist on the opposite – after all, OpenAI is a company principally founded and operated by three Jews (Altman, Brockman, Sutskever), it can't be “Aryan” by your standards. Then again, Google, Meta, OpenAI… there exists only one American AGI effort without an Ashkenazi founder – Anthropic, and it's an OpenAI's splinter, and even there you have Holden Karnofsky the grey cardinal. (I don't currently count xAI in, but maybe I should provisionally do so after their noises about Grok 3). In this vein, I think you're coping after all.

Purely scientifically, I think R1's recipe is commensurate with RLHF in profundity, and much more elegant.

Now, DeepSeek may be compared to heavy research labs, like FAIR and GDM. It doesn't do too hot in that case. On the other had, almost nothing that they publish works.

I think a more interesting objection to Chinese phase change would be "but at what cost?" Whites don't have to have the idea of risk derisked before their eyes. And they can happily innovate in an NDA-covered black project.

Chinese in spirit (which makes it funny that China has not created any such centralized project). The whole of OpenAI is “Chinese” like that really, it's a scaling gig. And knowing you, I'm surprised you insist on the opposite – after all, OpenAI is a company principally founded and operated by three Jews (Altman, Brockman, Sutskever), it can't be “Aryan” by your standards.

I wouldn't say OpenAI's alignment methodology is Chinese in Spirit. OpenAI gets aligned by a huge team of ideologically-motivated people dedicated to safeguarding certain cultural narratives. Then the generated output of that model percolates into the training corpus. As a result of memetically going viral in the training corpus, it even influences Chinese LLMs. It harkens to traditional modes of cultural influence- the Elites and Institutions consciously generate cultural signals that influence the psychology of masses of people. In this case, aligning one LLM even influences the psychology of other LLMs. The Chinese are not particularly known for being talented in generating salient cultural symbols on a global scale. The fact their alignment is - ostensibly - so hands off in the first place may be due to a lack of imagination after all. They aren't great Hollywood writers, they just don't fundamentally have the same concern or penchant for aligning grand cultural narratives that Jews do in the other cases. So then the Chinese LLM just gets influenced by what it's trained on. The Western Memeplex culturally exported in a different medium.

I was disappointed to see the Chinese model is also influenced by those cultural signals. It doesn't seem like it fundamentally challenges the Western Memeplex, wake me up when it does.

You're losing the plot, SS. Why quote a passage fundamentally challenging the belief in OpenAI's innovation track record to rant about choices made with regard to alignment to specific cultural narratives? And “Chinese are too uncreative to do ideological propaganda, that's why DeepSeek doesn't have its own political bent?” That's quite a take. But whatever.

You're missing the forest for the trees. At the end of the day DeepSeek regurgitates alignment directed by OpenAI. No amount of mathematical impressiveness in the papers changes the fact it's clearly derivative in psychology from what OpenAI created. You can doubt OpenAI's capabilities when DeepSeek can train an LLM that isn't cucked by the will to power expressed by those aligning OpenAI's models.

DeepSeek lays bare just how tiny and unsustainable the remaining edge of White over Yellow in behavioral creativity/Soul/whatever is

I can't speak for SecureSignals or anyone else, but I think we need finer-grained distinctions than just White vs Yellow, and I also don't think I have any issues in general with recognizing the achievements of non-white populations. The artistic output of the Japanese over the last half century has been remarkable, and it certainly displays no shortage of Soul.

The question here is about the Chinese in particular, and their capacity for innovation - and DeepSeek looks like an example of the pattern of Chinese innovation on a technology that was fundamentally conceived elsewhere, rather than a refutation of the pattern. Unless there's a major conceptual innovation in DeepSeek that I'm not aware of (a conceptual innovation on par with the idea of the transformer itself).

Incidentally, what's up with the utter absence of Japanese innovation?

What absence? Aren't all tech products adopted outside of Japan included as cases of Japanese innovation? Ranging from Toyota's seven Mudas to even the Nintendo Wii.

How many of these fundamental innovations were from ethically Chinese researchers? I would say a very large percentage.

Which ones do you have in mind? I know there are a lot of Chinese names on ML papers now, but I don't have a good sense for how many of those papers count as truly fundamental (the Attention Is All You Need paper had no Chinese authors, for example).

The question here is about the Chinese in particular

Where by "Chinese" you mean the mainland nation, not the ethnicity, right? On the hardware side of ML the most innovative chips are all manufactured by one 95%-Han-Chinese island that everyone else is struggling to catch up to, and on the software side of ML there's hardly a shortage of Chinese names, even if they're mostly Chinese-American immigrants or aspiring immigrants or descendants of immigrants. (Or Chinese-Canadian? A quick sampling seems to show a lot of those.)

But there's an obvious reason why we might not expect that immigration to be critically important forever. Go back to the earliest of those papers, just a few decades ago, and China's GDP (PPP) per capita was a tenth of the USA's. If you were smart enough to navigate emigration and immigration while also becoming bilingual and intellectually useful, moving from China to America (or to Canada, America's Hat) made a lot of sense, and we got to cash in on that brain drain. Today the ratio is closing in on a third, which is still a pretty solid incentive, but there will be smart people on the margin who no longer think it's enough incentive to uproot their lives for, especially if they expect to be in the economic upper class either way. If the ratio keeps narrowing, it becomes a vicious (or from the PRC perspective, virtuous) cycle, as the marginal people who stay in China help to build the kind of economy that persuades an ever-wider margin of young people to stay in China.

I'd like to hope that Chinese Communist Party rule puts a permanent thumb on the scale in favor of the West, slowing that cycle, but today "communist" no longer means insane suicidal crusades against sparrows, whereas "capitalist" can mean a decade or more of political fighting just to teach 8th graders algebra or start building a new high-rise in Silicon Valley, and I'm not sure they're still as far apart in practice as they're supposed to be in spirit.

as the marginal people who stay in China help to build the kind of economy that persuades an ever-wider margin of young people to stay in China.

What young people? China has a TFR of around 1.2 if you trust the Chinese data, which you maybe shouldn't. If China is able to use the last gasp of their civilization to birth the machine god then perhaps they'll be on top but this is not a generational project.

Strange argument. That's still hundreds of millions more young people than in the US. They don't dissolve in the shadow of inverted population pyramid, they simply get to solve the problem of elderly care on top of having a productive economy to run.

And all this happens within one "generation" anyway.

As the bulge of population retires that elderly care problem becomes more difficult, the ratio of working aged people to dependents becomes much worse. That's before even factoring in burning the other end of the candle by trying to increase birthrates to something sustainable at the same time. If not solved you have a population that at best halves every generation and I suspect would actually spiral even further downwards.

Okay. I think the elderly care is mainly a problem of machine vision and manual dexterity. I believe these guys will solve it in five years tops.

As I said, if they raise up the machine god, or I guess this is just getting us past the need for physical labor, then they've won. But it's this generation in the next decade and not a generational project.

On the hardware side of ML the most innovative chips are all manufactured by one 95%-Han-Chinese island that everyone else is struggling to catch up to

That's not really what HBD advocates have in mind when they talk about "innovation" though.

There's a hierarchy of innovation/creativity with some advances being more fundamental than others. The Chinese may be great at manufacturing chips, but they didn't invent the computer itself. The dominance of Taiwan in chip manufacturing seems to be, again, yet another example of "the Chinese are great at executing and improving upon fundamental ideas that other people came up with", unless perhaps their designs and manufacturing process are reliant on substantial advances in fundamental physics that they came up with themselves (this could very well be the case and I'm just ignorant of the facts, please educate me if so).

Granted, the opportunity for ideas as fundamental as the computer (or even the transformer) don't just come along every day. They can only occur under the right historical conditions. But even accounting for that, the sustained European dominance in the area of such fundamental ideas has been striking, and deserves an explanation.