
Culture War Roundup for the week of January 27, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


DO NOT POST AI CONTENT

We've only said this sporadically in the past. I'm talking to the other moderators and maybe we will have it added as a rule under the "content" section. Or maybe I'm wrong and all the other mods like AI content (highly doubt that).

We all know how to go and use an AI. If you want to have a discussion with an AI, themotte is basically just a bad intermediary. No one is here to have discussions with AIs. Thus posting AI content is in no one's interest.

You can of course consult AIs on your own time, and maybe they can be used as a sort of sanity or quick polling check.

Yes, provided there are no seams showing. Obviously we can't enforce a rule we didn't detect you breaking.

Following on from @Corvos's comment, below, I would compare it to using AI to write books and short stories (something I'm aware of as someone involved in the writing and literary community) or make art. As you know, there are now a bazillion people churning out AI-generated fiction and art, to the point that it is overwhelming a lot of traditional venues. The Kindle store is full of them, and KU authors are bemoaning the fact that not only are they competing against each other, but now they are competing against AI-published books by the thousands. There are even courses available now (run by grifters) teaching you how to generate books with AI which you can then sell on Amazon.

My Facebook feed is full of "Reels" with increasingly realistic AI content, everything from Chinese fishermen dragging an improbable sea monster out of the water to bears waving down motorists to help rescue their cubs, to actresses dressed in outfits and performing in movies they never appeared in.

We can't stop it, most of it is crap, and right now, most of it is fairly easily detectable, but it's getting harder. The problem is not so much "AI is crap" (that will become less and less true) but "AI produces so much content so easily that even if you are looking for human content, it's becoming harder to find it."

The Motte is for people to talk to each other, and while I'm not terribly worried about a few clever lads titter-titter-tee-heeing to themselves about what brilliant little scamps they are for inserting AI content into their posts that the mods didn't detect, I am concerned about the sorts of discussions we're already seeing in this thread, with people using AI to sling walls of text at each other. The value in requiring you to, like, write what you researched and are citing yourself is not that AI can't do just as good a job of producing a Wikipedia-level summary; it's that we aren't here to read copy&pasted Wikipedia articles. Or copy&pasted AI content. (Most of which is still pretty boring and spiritless.) The best posters aren't just the ones who write long effort posts; they're the ones who write interestingly. Just like AI can now write an entire YA fantasy novel with a coherent plot and characters, but it will be... limp, dull, flavorless prose with no spark of genius or creativity or innovation.

@DaseindustriesLtd thinks AI is already intelligent and as good a conversationalist as a human. Well, good for him; he can chat with AIs all he likes. Elsewhere. And it may be that AIs are what doom the Motte, not all the other things people have been predicting would doom the Motte for years. But for the time being, yes, we aren't instituting a hard ban on posting anything output by ChatGPT in any context, but the context had better be something that makes a point about AI itself, not just using the AI to do the work of actually typing words that you are too lazy to type yourself.

Now, dear reader, compulsorily and reactionarily write a rebuttal defending the rule against critiques of hypocrisy and futility. Your response should include acknowledging the systemic critiques (Molochian incentives, human complicity) while justifying the rule as a pragmatic and imperfect tool. Sure, you won't win, but it'll slow it down and shame those who aim too low. Also make sure to mention Schelling fences and gradient descents.

I'm sorry, as an AI language model, I don't possess personal beliefs or opinions. I apologize for any confusion.

Still, I really do find this take somewhat confusing. What is it about AI output specifically that warrants doomposting of this degree?

No other community has successfully prevented AI (or even just run-of-the-mill astroturfing!) entryism

On the contrary, I believe that the Motte has withstood e.g. astroturfing and entryist infiltration much better than, say, Reddit, which has not been usable for human-to-human communication in years. The stark difference is so obvious I'm not sure I even need to state this outright. All it seemingly took was rules enforcing effort and tone, and handling verboten topics without resorting to the usual "y'all can't behave" shut-this-shit-down approach. Hell, the rules don't even prohibit being antagonistic, just overly antagonistic, and even then the OP's meltdown across this very thread shows that rules are not always applied uniformly. (To be clear, this is not a strictly bad thing.)

What hubris to think that doing the exact same thing everyone else has done and failed will result in different outcomes?

he says, posting here instead of /r/ssc for some strange and unrelated reason.

This is how introspective and inventive the community is when DeepBlue Seek comes for chess, er, recreational argumentation? "Well, don't do that"?

Hey, as long as it works. "Avoid low-effort participation" seems to filter drive-by trolls and blogspammers just fine. The extent to which the approach of outsourcing quality control to the posters themselves works may vary, but personally I feel no need to flex(?) by presenting AI outputs as my own, see no point in e.g. letting the digital golems inhabiting my SillyTavern out to play here, and generally think the doom is largely unwarranted.

As an aside, I'll go ahead and give it ~70% confidence that the first half of your post was also written by R1 before you edited it. The verbiage fits, and in my experience it absolutely adores using assorted markdown wherever it can, and having googled through a few of your posts for reference it doesn't seem to be your usual posting style.

It's as sensical as telling the broader rat community writ large DO NOT TAKE AMPHETAMINES FOR PERFORMANCE GAINS while you can see everyone in the background self-talking about their Totally Real ADHD diagnosis and how Modafinil doesn't really count.

Without engaging with the rest of your comment (which I'm inclined to agree with), I'm tackling this bit.

Modafinil? It's long-acting coffee as far as I'm concerned, and about as benign. I would know, I was on it, and it was a self-prescription to boot. I quit because I built up a tolerance and knew that upping doses beyond 200mg was futile. I had no issues quitting.

It has next to zero addiction potential. Patients consistently report mild euphoria once, on their very first dose, and never again no matter how much they up it. Dependency is also a non-issue in practice. You don't see junkies shooting it up on the streets, not that they'd be nodding off.

It's arson, murder and jaywalking in the flesh.

Amphetamines? Well, I do have a Totally Legitimate Diagnosis of ADHD, and while I have not had the luck of trying actual amphetamines, just Ritalin, they're not dangerous at therapeutic doses. You don't need a diagnosis of ADHD to benefit from taking them; they boost performance for pretty much everyone, including neurotypicals or those who are already highly conscientious.

I recall Scott writing about it at length, pointing out how they're much less dangerous than popularly conceived.

https://www.astralcodexten.com/p/know-your-amphetamines

What's going on? I think addicts use meth very differently from the way generally responsible ADHD patients use amphetamine. It's weirdly hard to find good data on methamphetamine route of administration, but this study shows that about 60% of users inject it, 23% snort it, and 52% smoke it - also, see this paper about "the second case of rectal methamphetamine abuse in the literature". Route of administration makes a really big difference in how addictive and dangerous a drug is (see eg crack cocaine vs. regular cocaine), and I think this is a big part of the difference between seemingly okay Adderall users and very-not-okay meth users.

I'm all for better living through medicine, and I would, if I had a gun put to my head, say that for the modal Mottizen the benefits of taking either modafinil or therapeutic doses of stimulants outweigh the risks.

(GMC, please note that this is not medical advice, and that it was provided under duress; I did mention being held at gunpoint. Unbelievable in a British context? Uh... he had a very pointy umbrella.)

I belong to a profession where not only is there a great demand for large amounts of focus and cognitive output, but whose members, by virtue of being medical professionals, would have a far easier time getting prescription stimulants if they desired them.

We don't see that happening, at least nowhere I'm personally aware of, even anecdotally. A doctor on a reasonable dose of stimulants is a harder-working and more attentive doctor, but there hasn't been a red queen's race.

The closest analogue to that might be med students who are tempted to take them to cope with the enormous amounts of coursework, but I have not heard of abuse at rates >> those of any other class of students.

In any competitive ecosystem where cognitive enhancers offer an advantage, not taking them starts to become a handicap. The problem isn’t addiction, but the slow ratcheting effect where, once a critical mass of people in a high-performance space use performance enhancers (e.g. stimulants), everyone else has to do the same just to keep pace.

Coffee is a cognitive enhancer. Most people working regular jobs drink at least some amounts of it. This doesn't seem to strike most people as an intolerable state of affairs!

While it's rarer in the UK, more doctors than I would prefer were heavy smokers in India, a habit induced by the insane levels of pressure at work. This did not force all or most doctors to smoke either. And leaving aside the meek modafinil, I would expect a society where ~everyone is on prescription stims to be a healthier and happier one than one where everyone smokes a pack a day.

Regardless of all that, the original point stands (and is only reinforced!): trying to ban cognitive PEDs among rats has the same effect as an AI ban. "Well, I'm not slop-posting, I'm just using it like auto-complete to handle the boring boilerplate. I know what I think. I'm just speeding up my throughput. I have ADHD, you know. I'm just handicapped. I find boring stuff boring! It's a disability. Anyway, the meat isn't in the boilerplate," ad nauseam etc.

I'm a regular user and early adopter of LLMs; I'd probably be a power user if my workflow were friendlier to them. I still wouldn't want to use them to write comments for me, especially on the Motte. I expect that most of us here enjoy the act of crafting our own prose, and the amount of boilerplate we stand to avoid is surprisingly small.

I expect that since GPT-4, maybe even 3.5, it would have been possible for someone to slip in an account that used exclusively AI-generated text, and likely not even be noticed beyond being a rather bland and boring user.

We could easily have been overrun with bots, but we haven't been. Unless we end up Eternal September-ed with an OOM more new users, I doubt bot-apocalypses are a serious risk for the Motte as a forum.

But it turns out that "I don't want to do that" is an entirely valid emotion to feel! It was my socialized self-concept, and not my empirical experience, that was wrong about what it means to be human.

"This must be what normal feels like" is a giant self-deceiving trap that enables high-potential low-kinetic human capital to pretend like their drug use isn't a crutch for a gappy upbringing.

I am a radical transhumanist, so we might very well differ at the level of fundamental values, at which point we can't do more than acknowledge each other's opinions as valid, but not actually get closer to agreement here.

In a hypothetical world where you were diagnosed with ADHD and your parents were just as overstretched, but medication for it wasn't available, would your childhood and adolescence have been better?

I doubt it. The absence of ADHD meds doesn't make parents more capable of parenting, and their existence doesn't make them worse. Being denied Vyvanse wouldn't have given your parents more time to spend with you while you did your homework.

I also reject the framing that a "crutch" is a bad thing. Driving a car to a supermarket half an hour away is a "crutch" for not being willing to spend 2 hours walking. I prefer it over the alternative.

Ozempic is a crutch for not having better dietary habits by default. Why is that a bad thing? It still makes people lose weight and become healthier. A world where everyone takes it, both to reduce their obesity and out of the pressure of everyone else being on it (a world we're approaching right now), is still a better world than everyone being fatter and unable to do anything about it in practice. A similar analogy applies to cellphones and cars: society is broadly better off even though they've become de facto necessities, even if the people who don't like them are marginalized.

There are ways for society and cultures to burn up their slack and leave everyone strictly worse off than if they had put a line in the sand, but as far as I'm concerned, stimulant meds or the use of LLMs in a reasonable manner wouldn't do the same to us.

The closest analogue to that might be med students who are tempted to take them to cope with the enormous amounts of coursework, but I have not heard of abuse at rates >> those of any other class of students.

I think the most distinctive and widespread example of medics “misusing” a drug more than other professions would be beta-blockers prior to interviews and exams.

Interesting. I don't know if this is common outside of Japan, but it's the first time I'm hearing it.

The pharmacy next to my med school had a rather lax approach when it came to doling out controlled substances and prescription meds, even to med students. I know that personally, because I certainly asked for them (I could have brought along a valid prescription if needed, but I knew they wouldn't ask). I don't recall anyone from my cohort taking advantage, and I didn't see any obvious signs of abuse. Even in my medical career, I never heard of a doctor I personally knew or worked with admitting to abusing meds or being caught out doing so. Nobody clearly zooted on stims, or sedated from benzos or opioids.

Not that anyone is really abusing beta blockers, and you wouldn't be able to tell unless they passed out or something. Interestingly enough, I did take a few when my palpitations from my methylphenidate became overwhelming, but I was aware of minor negative effects on memory and cognition and did my best not to take them before exams. I suppose if someone has crippling anxiety, it beats the alternative!

Yeah, no disagreement — it’s as benign as it can get, really. I actually thought this sort of habit came from the West though!

Why is it a problem for certain professions to require safe stimulants for the highest tier of success? Your post treats the wrongness of this idea as self-evident, but I don't accept it. We require that athletes train, after all.

human capital to pretend like their drug use isn't a crutch for a gappy upbringing.

And there it is: the puritanical idea that people should endure avoidable suffering because suffering is good for the soul.

Sounds like you have it all figured out.

The argument against AI in this space is still pretty simple. It's like bringing a bicycle to a fun run. If you don't want to engage in a leisure activity, it makes little sense to cheat at the leisure activity when you can instead just not do it.

Using an AI to debate other people is easier than debating them yourself. But it's even easier to just not debate them in the first place.

Themotte isn't a place that matters. This isn't X or reddit or some other major social network site where millions of voters can be influenced. There is no reward for "winning" here, so the normal molochian optimization pressures don't have to apply.

It's like bringing a bicycle to a fun run. If you don't want to engage in a leisure activity, it makes little sense to cheat at the leisure activity

I’d like to push back against this a bit. It’s my understanding that the purpose of debating in the Motte is, very politely, to force people to occupy the motte and not the bailey. That is, to smash ideas against each other very politely until all the bits that can be smashed get smashed, and only the durable bits remain.

The rules in favour of tone moderation don’t exist to make this fun per se, they exist because truth seeking isn’t compatible with bullying or outlasting your opponent. It is fun, and I like it here, but debating in the motte should be fun in the way that scientific debate is fun. I think leaning too far into “posting on the motte is a leisure activity” would be a mistake.

I’m comfortable with the new rule on AI as it stands, I think it’s threading a tricky needle fairly well. But if we find a way over time to use AI in a way that really does improve the debate, I think we should.

TLDR: in my opinion debating here is a leisure activity in the same way that MMA is a leisure activity. Likewise, there are certain serious rules that apply - you can’t shoot your opponent with a gun - but unlike karate there is no such thing as ‘cheating’. If you find a way to fight better, that’s not cheating, it’s pushing the sport forward.

Full agreement on my part. It's understandable that people are enthusiastic about this new technology, but ultimately if I wanted to read chatbot drivel I could order some up myself. I come to the motte to read what intelligent people have to write.

Yes, please. Posting AI slop is incredibly obnoxious. It adds nothing of value or even interest, and comes off like someone thinks that talking to their interlocutor isn't worth their time. It is maximum cringe.

I agree that explicitly focusing on actual humans interacting is the correct move, but I disagree that banning AI content completely is the right choice. I will back @DaseindustriesLtd here in that R1 really is just that intelligent and clears Motte standards with relative ease. I will shamelessly admit I've consulted R1 at one point to try and make sense of schizo writing in a recent thread, and it did a great job of it pretty much on the first try, without me even bothering to properly structure my prompt. This thread has seen enough AI slop, so here's a pastebin link to the full response if anyone's curious.

I think the downthread suggestion of confining AI-generated content to some kind of collapsible code blocks (and forbidding its use as the main content of one's post, like here: the AI might make a cogent, sound thesis on one's pet topic, but I'd still rather listen to the poster making the case themselves - I know AI can do it if I ask it!) would be the best of both worlds.

It might be worth putting this in the rules of the CW posts.

Personally, I think that using AI on themotte is bad, but mentioning it is ok (if it is short and to the point). So if a comment is about an AI and its behavior in a CW context ("Deepseek claims XX, this shows that the CCP is willing ..."), that is fine with me. If it is something the poster could have researched themselves, then it should mostly be verboten (or at the very least highly rate-limited and restricted to known posters). Anyone can make a motte-bot which writes more text than all the real users together, and I do not think any human would like to read that (and, as you mentioned, if that is their kink, they can always ask an LLM directly).

We all know how to go and use an AI.

Actually, I would enjoy more discussion of this here, like on /r/LocalLLaMA.

Same. I've been using some really basic ChatGPT web apps to simplify research lately, and while it's amazing, it seems like a small fraction of its potential. Just being able to feed it vaguely worded tip-of-my-tongue questions and then double-check the answers is incredible.

Sounds like a Friday Fun Thread topic, or even a Tinker Tuesday topic.

Thank you. The moment I see a bot quoted, whether a conversation, an essay, or even someone using a bot as a substitute for Wikipedia or to check facts, I stop reading.

I would hope that the point of a forum like this is for people to talk to each other. Not to vacuous robotic garbage.

I've noticed this myself. Actually, I'd like to suspend the rules so someone can do a single-blind test with AI-written posts to see if it's psychosomatic on my part.

It also tends to make my eyes glaze over. It just has such a boring style. Like, I wonder if it's specifically selecting for not being readable by people with normal attention spans.

I like reading someone else's AI output not as a Wikipedia fact check, but as a Wikipedia summary. "What's that concept I haven't heard of before, or that obscure historical figure, or event?"

Anything longer than a quick blurb and I'm right back with you.

I can see the value of quick explanatory blurbs, but I think in my case I just don't trust AIs or bots to accurately report factual information. Reading the AI summary would then make it necessary for me to look up the AI summary's claims in order to establish whether they're true or not, and at that point I might as well just skip the AI summary entirely and research it myself. There is no value gain from the AI, in either time saved or information received.

I think that intent and utility matters (and length!).

  • If everyone's posting long AI essays rather than do it themselves, that's bad.
  • If people are padding using AI that's bad.
  • If they're specifically using them to discuss how AI works, that's good and interesting (but watch for length).
  • If the AI writing is relevant in some way and the post couldn't be written without it, that's also good.

It's true that I could consult an AI if I wanted to, but probably not the same ones and not the same way as @DaseindustriesLtd because our minds don't work the same way. I don't want to have conversations with AI but I'm quite happy to have conversations with human centaurs augmented by AI.

Of course, if a human is using said LLM and directing it actively, I don't strenuously object. I'm against low effort bot use, not high effort.

Basically this.

At the very least, I would argue for being somewhat open now and seeing how things play out for the next 6 months.

I can't stop people from going and consulting AI. I did say in the original post that using it as a sort of sanity check or impromptu polling seems fine.

I'm personally not very interested in talking to the "centaurs" as you describe them ("human centaurs" seems redundant, unless you mean human legs and horse torso). I think there is value in having another human brain process your words and spit back a disagreement about those words. If they are offloading the processing and the output to an AI, they have just become a bad/slow interface for that AI.


I think we are basically at AGI right now. So hold the gates as long as we can and enjoy this space until the internet as we know it is gone in a flood of digital minds.

'Centaur' is sometimes used to describe an AI/human merger or collaboration. Half human, half machine, as it were. So, for example, a human using an AI for digging up sources / fact checking / style improvement is sometimes called a centaur. Anything where a human is still a significant part of the process.

I think it's wholly fair not to like AI writing; there are users I don't engage with either. I would merely ask the mods to be careful before they ban things that don't interest them, and to use a scalpel rather than a hammer where possible.

Your specific usage of AI also has a major problem here, which is that you were basically using it as a gish gallop attack. "Hey I think this argument is wrong, so I'm gonna go use an AI that can spit out many more words than I can."

For example, I would agree with banning this, but in my opinion we should ban it because it's gish galloping, not because it's AI. We should penalise bad AI writing the same way we would penalise bad human writing: it's tedious and prevents good discussion.

I think we are basically at AGI right now. So hold the gates as long as we can and enjoy this space until the internet as we know it is gone in a flood of digital minds.

I don't, oddly enough, which is perhaps why I'm more enthusiastic than you are. AIs have certain idiosyncrasies and weaknesses that cripple them in important ways, and they need a human hand on the tiller.

I know what you meant with centaur. I just thought it was redundant to say "human centaur".

Penalizing Gish gallop specifically is hard. People may legitimately have many questions or objections to a specific point. It's just a far more obvious problem when you have an AI churning out text like that.

Fair.

You're going to have to clarify that a lot, because using short quotes from AI is normal, just like quoting from Wikipedia.
The rule would have to be something like "posts must meet effort standards without the generated content".

using short quotes from AI is normal, just like quoting from Wikipedia.

That seems... just as bad? Maybe worse? At least when Wikipedia hallucinates it provides references.

Well, I protest this rule, if such a rule even exists; I find it infantilizing, and your reaction shallow, akin to the screeching of scared anti-AI artists on Twitter. It should be legal to post synthetic content so long as it's appropriately labeled and accompanied by original commentary, and certainly when it is derived from the person's own cognitive work and source-gathering, as in this case.

Maybe add an option to collapse the code block or something.

or maybe just ban me, I'm too old now to just nod and play along with gingerly preserved, increasingly obsolete traditions of some authoritarian Reddit circus.

Anyway, I like that post and that's all I care about.

P.S. I could create another account and (after a tiny bit of proofreading and editing) post that, and I am reasonably sure that R1 has reached the level where it would have passed for a fully adequate Mottizen, with nobody picking up on “slop” when it is not openly labeled as AI output. This witch hunt is already structurally similar to zoological racism.

In fact, this is an interesting challenge.

Well, I protest this rule, if such a rule even exists; I find it infantilizing, and your reaction shallow, akin to the screeching of scared anti-AI artists on Twitter.

If you were on a forum dedicated to perfecting your hand-drawing skills, and requested feedback for an AI-generated image, the screeching would be 100% justified.

I was not aware that this is a forum for wordcels in training, where people come to polish their prose. I thought it was a discussion platform, and so I came here to discuss what I find interesting, and illustrated it.

Thanks for keeping me updated. I'll keep it in mind if I ever think of swinging by again.

It is a discussion platform, which means people want to discuss their points with someone. The point where I was absolutely done with Darwin was when, instead of defending one of his signature high-effort trolling essays, he basically said it was just an academic exercise for him to see if the position could be defended. The answer is "yes": you can always put a string of words together that will make a given position seem reasonable, but it's not really a discussion if you're completely detached from the ideas you've put to paper.

I find the "wordcel" accusation completely backwards. Supposedly we're obsessed with perfecting form to the detriment of the essence, the discussion of ideas, but I think a zero-effort AI-slop copy-pasta is the pure mimicry of what a discussion is supposed to be. The wordcel argument might have made sense if, for example, you had done some heavy analytical work, weren't talented as a writer, and used AI to present your findings as something readable, but none of these things are true in this case.

I am quite happy with my analytical work that went into the prompt, and R1 did an adequate but not excellent job of expanding on it.

But I am done with this discussion.

My main objection to AI content on themotte is that it makes this place entirely pointless.

What is the difference between two people just posting AI arguments back and forth and me just going to an AI and asking that AI to play out the argument?

If you want such AIs arguing with each other, just go use those AIs. Nothing is stopping you, and in fact I'm fully in favor of you going and doing that.

This is like you showing up to a marathon race with a bicycle, and when not allowed entry you start screaming about how we are all Luddites who hate technology. No dude, it's just that this whole place becomes pointless.


Your specific usage of AI also has a major problem here, which is that you were basically using it as a gish gallop attack. "Hey I think this argument is wrong, so I'm gonna go use an AI that can spit out many more words than I can."

If this behavior was replicated by everyone, we'd end up with giant walls of text that we were all just copying and pasting into LLMs with simple prompts of "prove this fool wrong". No one reading any of it. No one changing their mind. No one offering unique personal perspectives. And thus no value in any of the discussion.

"Hey I think this argument is wrong, so I'm gonna go use an AI that can spit out many more words than I can."

Really now?

This is what it looks like and this is how it will be used.

"To have an opportunity to talk with actual people" sounds like a really low bar to clear for an internet forum. Even if your AI slop tasted exactly like the real thing, it would just be good manners to refrain from clogging our airwaves with that.
Knowing that you're talking with something sapient has an inherent value, and this value might very well go up in the coming years. I can't say I even understand why you'd think anyone would find AI outputs interesting to read.

or maybe just ban me, I'm too old now to just nod and play along with gingerly preserved, increasingly obsolete traditions of some authoritarian Reddit circus. Anyway, I like that post and that's all I care about.

Bizarre reaction. But I like a sincere, organically produced tantrum better than a simulation of one, so I'd rank this post higher than the one above!

I can't say I even understand why you'd think anyone would find AI outputs interesting to read.

Because they're intelligent, increasingly so.

The argument that cognitive output is only valid insofar as it comes purely from flesh reduces intellectual intercourse to a prelude to a physical one. At least that's my – admittedly not very charitable – interpretation of these disgusted noises. Treating AI generation as a form of deception constitutes a profanation of the very idea of discussing ideas on their own merits.

Because they're intelligent, increasingly so.

This itself eventually poses a problem: if AIs get good enough at arguing, then talking to them is signing up to be mindhacked which reduces rather than increases your worldview correlation with truth.

Because they're intelligent, increasingly so.

That still would not make them human, which is the main purpose of the forum, at least judging by the mods' stance in this thread and elsewhere. (I suppose in the Year of Our Lord 2025 this really does need to be explicitly spelled out in the rules?) If I want to talk to AIs I'll just open SillyTavern in the adjacent tab.

The argument that cognitive output is only valid insofar as it comes purely from flesh reduces intellectual intercourse to a prelude to a physical one.

This seems like a non-sequitur. You are on the internet; there's no "physical intercourse" possible here, sadly. What does the "physical" part even mean?

Far be it from me to cast doubt on your oldfag credentials, but I'll venture a guess that you're just not yet exposed to enough AI-generated slop, because I consider myself quite inundated and my eyes glaze over on seeing it in the wild, unfailingly and immediately, regardless of the actual content. Personally, I blame GPT: it poisoned not only the internet as a training dataset, infecting every LLM thereafter, but actual humans as well, who subsequently developed an immune response to Assistant-sounding writing, and not even R1 for all its intelligence (not being sarcastic here) can overcome it yet.

Treating AI generation as a form of deception constitutes a profanation of the very idea of discussing ideas on their own merits.

Unlike humans, AI doesn't do intellectual inquiry out of some innate interest or conflict - not (yet?) being an agent, it doesn't really do anything on its own - it only outputs things when humans prompt it to, going off the content of the prompt. GPTslop very quickly taught people that the effort you might put into parsing its outputs far outstrips the "thought" that the AI itself put into them and, more importantly, the effort on behalf of the human prompting it, in most cases. Even as AIs get smarter and start to actually back up their bullshit, people are IMO broadly right to beware the possibility of intellectual DDoS, as it were, and instinctively discount obviously AI-generated things.

If you really believe this - why don't you just take the next logical step and just talk to AIs full time instead of posting here?

Make them act out the usual cast of characters you interact with on here. They're intelligent, they're just as good as posters here, and you get responses on demand. You'll never get banned and they probably won't complain about LLM copypasta either. What's not to love?

If you do find yourself wanting to actually talk to humans on an Internet forum rather than to LLMs in a puppet house, hopefully it's clear why there's a rule against this.

Believe me, these days I do indeed mostly talk to machines. They are not great conversationalists but they're extremely helpful.

Talking to humans has several functions for me. First, indeed, personal relationships of terminal value. Second, political influence, affecting future outcomes, and more mundane utilitarian objectives. Third, an actually nontrivial amount of precise knowledge and understanding where LLMs remain unreliable.

There are still plenty of humans who have high enough perplexity and wisdom to deserve being talked to for purely intellectual entertainment and enrichment. But I've raised the bar of sanity. Now this set does not include those who have kneejerk, angry-monkey-noise tier reactions to high-level AI texts.

Believe me, these days I do indeed mostly talk to machines. They are not great conversationalists but they're extremely helpful.

Would you mind elaborating on this? I am in the somewhat uncomfortable position of thinking that a) superintelligence is probably a red herring, b) AI is probably going to put me and most people I know out of a job in the near term, and c) I don't actually have much direct contact with AI to see what's coming for myself. Could you give some description of how AI fits into your life?

I use a coding program called Windsurf. It’s like a normal text editor but you can type “Lines 45-55 currently fail when X is greater than 5, please fix and flag the changes for review” or “please write tests for the code in function Y”. You iteratively go back and forth for a bit, modifying, accepting or rejecting changes as you go.
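To make that loop concrete, here's a toy before/after of the kind of fix such a prompt asks for. Everything below is invented for illustration; it's not real Windsurf output or anyone's actual code:

```python
# Hypothetical snippet: fails when x is greater than 5, because the
# label table only covers indices 0..5.
def bucket_score(x: int) -> str:
    labels = ["low", "low", "mid", "mid", "mid", "high"]
    return labels[x]  # IndexError for x > 5

# After a prompt like "this fails when x is greater than 5, please fix
# and flag the changes for review", the assistant might propose:
def bucket_score_fixed(x: int) -> str:
    labels = ["low", "low", "mid", "mid", "mid", "high"]
    return labels[min(max(x, 0), len(labels) - 1)]  # changed: clamp the index

print(bucket_score_fixed(9))  # "high" (the old version raised IndexError)
```

The point is the review loop: you accept, reject, or refine each flagged change rather than writing the diff yourself.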

You’re a 3D artist, right? The thing I would keep my eye on is graphics upscaling, as in this photorealistic Half-Life clip. What they’ve done is take the base 1990s game and feed the video output into an AI filter to make it look like photorealistic video. VERY clunky: objects appear/disappear, it doesn’t preserve art style at all, etc., but I think if well done it could reverse the PS3-era graphics bloat that made AAA game creation into such a risky, expensive proposition.

Specifically, you would give a trained AI access to the base geometry of the scene, and to a base render with PS2-era graphics so it understands the intended art style, the feel of the scene, etc. Then the AI does the work of generating a PS6+ quality image frame with all the little detail that AAA artists currently slave over, like the exact pattern of scratching on a door lock or whatever.
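If it helps to see the shape of that pipeline, here's a minimal sketch. Every name in it is invented for illustration; no real upscaler exposes this interface, and the "model" is a stub so the example actually runs:

```python
from dataclasses import dataclass

@dataclass
class Frame:
    width: int
    height: int
    pixels: bytes  # placeholder for raw image data

def restyle_frame(geometry: dict, style_ref: Frame, generate) -> Frame:
    # Ask a (hypothetical) trained model for a high-detail frame,
    # conditioned on scene geometry (keeps object placement stable)
    # and a low-fi reference render (pins down the intended art style).
    return generate(geometry=geometry, style_ref=style_ref)

def stub_generate(geometry: dict, style_ref: Frame) -> Frame:
    # Stand-in for the real model: pretend to emit a 4x-detail frame.
    return Frame(style_ref.width * 4, style_ref.height * 4, b"")

base = Frame(640, 480, b"")  # the "PS2-era" style reference render
out = restyle_frame({"meshes": [], "depth": None}, base, stub_generate)
print(out.width, out.height)  # 2560 1920
```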

Is it me, or do the Half-Life 2 segments of the clip look much worse than the Half-Life 1 segments? Particularly the sand buggy one, which looks to be at about the same level of graphics as the source material.


First, indeed, personal relationships of terminal value.

This militates against top level AI copypasta. That doesn't develop personal relationships.

Second, political influence, affecting future outcomes, and more mundane utilitarian objectives.

Highly unlikely that posting on the motte or talking to machines accomplishes either of these, so call it a wash. Recruiting for a cause is also against the rules, anyway.

Third, actually nontrivial amount of precise knowledge and understanding where LLMs remain unreliable.

Same as point 1. Precise knowledge and understanding usually comes from asking specific questions based on your own knowledge rather than what the LLM wants to know.

Your own reasons for posting here seem to suggest that there's no point in posting LLM content, and especially not as a top level post.

I have explained my reasons to engage with humans in principle, not in defense of my (R1-generated, but expressing my intent) post, which I believe stands on its own merits and needs no defense. You are being tedious, uncharitable and petty, and you cannot keep track of the conversation, despite all the affordances that the local format brings.

The standards of posting here seem to have declined substantially below X.

Friendo, you are the one who can't keep track of the conversation.

  1. You say it's dumb to have a rule against AI posts.

  2. Someone asks you why anyone would want to read AI posts.

  3. You say talking to AIs is great, maybe even better than talking to humans.

  4. I asked why you post here at all instead of talking to LLMs all the time.

  5. You responded with three reasons to prefer talking to humans vs LLMs.

  6. I point out that these very reasons suggest that this forum should remain free from LLM posts.

  7. You bristle and say that your post needs no defense (why are you defending it up and down this thread then?)

At risk of belaboring the point, my response in point 6 is directly on the topic of point 1. To make it as clear as I can possibly make it, people come to this forum to talk to people because they prefer to talk to people. It should be clear that anyone who prefers to read LLM outputs can simply cut out the middleman and talk to them off of the motte.


I think one should separate the technical problem from the philosophical one.

LLMs are increasingly intelligent, but still not broadly speaking as intelligent as the posters here. That is a technical problem.

LLMs are not human, and will never be human. You cannot have an AI 'community' in any meaningful sense. That is a philosophical problem.

If you care about the former, you should consider banning AI posts until they are at least as good as human posts. If the latter, you should ban AI posts permanently.

My impression is that pro-AI-ban comments are split between the two.

I can't say I even understand why you'd think anyone would find AI outputs interesting to read.

From one perspective: Words are words, ideas are ideas. A good argument is a good argument, regardless of the source. If the argument is not good, that's a technical problem.

That said, many of us here in practice have an anecdotal style of writing, because (a) we aren't actually rationalists and (b) few people worth talking to actually have the time and inclination to produce think-tank style pieces; obviously there is no value in reading about the experiences of something that has no experience. There is also less satisfaction in debating with a machine, because only one of you is capable of having long-term growth as a result of the conversation.

In fact, this is an interesting challenge.

It's been tried; as I recall ~90% noticed, 10% argued with the AI, 100% were annoyed -- and the 'experiment' was probably a big reason for the ruling before us.

I think it's time to replicate it with the new generation of models.

Tell me, does R1 above strike you as "slop"? It's at least pretty far into the uncanny valley to my eyes.

I dunno -- like all models I've observed to date, it gives me weird tl;dr vibes after about four lines, so I either skim heavily or... don't read.

(For the record, your own posts -- while often even longer -- do not have the same effect. Although I'll confess to bailing on the odd one, in which case it tends to be more over lack of time than interest.)

It should be legal to post synthetic content so long as it's appropriately labeled and accompanied by original commentary, and certainly when it is derived from the person's own cognitive work and source-gathering, as in this case.

For what it's worth, I agree with you, and will plead the case with the other mods, but I do have to stand by the majority decision if it goes against it.

I raised an eyebrow at your use of an R1 comment, but in principle, I'm not against the use of AI as long as it's not low effort slop, the poster makes an effort to fact check it, and adds on substantive commentary. Which I note you did.

P.S. I could create another account and (after a tiny bit of proofreading and editing) post that, and I am reasonably sure that R1 has reached the level where it would have passed for a fully adequate Mottizen, with nobody picking up on “slop” when it is not openly labeled as AI output. This witch hunt is already structurally similar to zoological racism.

I agree that we're at the point where it's next to impossible to identify AI-generated text when it's made with a minimum of effort. You don't even need R1 for that; Claude could pull it off, and I'm sure 4o can fool the average user if you prompt it correctly. That does require some effort, of course, and I'd rather not have this place end up a corner of the dead internet, even if I can count on LLMs to be more interesting than the average Reddit or Twitter user. We hold ourselves to higher standards, and talking to an actual human is an implicit goal.

Of course, if a human is using said LLM and directing it actively, I don't strenuously object. I'm against low effort bot use, not high effort.

It should be legal to post synthetic content so long as it's appropriately labeled and accompanied by original commentary, and certainly when it is derived from the person's own cognitive work and source-gathering, as in this case.

What's the value of a top-level comment by AI, though? And what is the value of the "original commentary" you gave? This is quite unlike Adam Unikowsky's use/analysis of hypothetical legal briefs and opinions.

Whatever value it innately has as a piece of writing, of course. For example, if the distinction between wheat- and rice-growing parts of China really exists, that's fascinating. Likewise, I never thought of the fact that Europe suffered the Black Plague while China remained saturated, and what effect that might have had on their respective trajectories.

For example, if the distinction between wheat- and rice-growing parts of China really exists, that's fascinating.

My guess is that the specific statement -- that rice-farmers are more interdependent, holistic, less prone to creativity, etc., while wheat-farmers are the reverse -- is from some highly cited papers by Thomas Talhelm. You might find similar speculation in previous decades about how rice-farming promotes a culture of hard work and incremental progress (etc., etc.) compared to wheat-farming, which is less rewarding per joule of human effort spent, invoked in a similar manner to how the Protestant ethic was used as a rationale for differences in development in European/Euro-descended countries.

Outside of that, there are definite stereotypes -- both premodern and modern -- about the differences between northern and southern Chinese, but they usually seem to be of the vein that northerners are more honest and hardy and brash (and uncultured, etc.), while southerners are more savvy and shrewd (and more effete and cowardly, etc.).

(I make no comment on the validity of either.)

Likewise, I never thought of the fact that Europe suffered the Black Plague while China remained saturated, and what effect that might have had on their respective trajectories.

This is a partial hypothesis for the Great Divergence: the Black Death, plus other 14th-century wars and calamities, wiped out >33% of Europe's population, which led to a significant increase (almost double?) in wages and to the decline of feudalism. During this time, higher wages, lower rents, higher costs to trade (compared to intra-China trade, for example), and other factors produced large-scale supply/demand disequilibria that increased the demand for labour-saving technology, as well as the incentives for innovation from each class of society, e.g. from people no longer being serfs.

On the other hand, it would be negative EV for a Chinese merchant or industrialist -- who had lower labour costs to deal with and more efficient internal markets -- to spend a lot on innovation when he could just spend more money on hiring more people. And this is before we add in things like the shift to neo-Confucianism in the Ming period, awful early-Ming economic policy, Qing paranoia, etc.

For what it's worth, I don't find this to be anywhere near a complete explanation. There is a corresponding divergence within Europe of countries that maintained that level of growth in per capita income and those who didn't. China also has had its share of upheavals and famines without a corresponding shift in this sense (although arguably none were as seismic population-wise as the Black Death was for Europe), and more recent reconstruction of historical Chinese wages does see them near their peak at the start of each dynasty and dropping off gradually as the dynasty goes on, which both kinda confirms the supply/demand effect of reduced population on wages after social turbulence but also doesn't seem to really map neatly onto any bursts of innovation. Additionally, the period of time associated with rapid innovation in imperial China, the Tang-Song period, is associated with a population increase.

But even if it doesn't explain China, I think it at least explains the European story partially, about how potential preconditions for industrialisation and scientific development were met.

FWIW, if this rule is going to be enforced (which I am fine with), I do think it should be written down. And while I am at it, I think we're probably all smart enough here to understand the difference between having the AI write your posts for you and quoting something relevant or humorous that is AI-generated, but I think it would be helpful for the rule to say that rather than just "No AI Content" (unless the community finds even that objectionable, but I've never noticed anyone getting moderated for that or even irked by it). My .02.