This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
Well, I protest this rule, if such a rule even exists. I find it infantilizing, and your reaction shallow, akin to the screeching of scared anti-AI artists on Twitter. It should be legal to post synthetic content so long as it's appropriately labeled and accompanied by original commentary, and certainly when it is derived from the person's own cognitive work and source-gathering, as it is in this case.
Maybe add an option to collapse the code block or something.
Or maybe just ban me; I'm too old now to just nod and play along with the gingerly preserved, increasingly obsolete traditions of some authoritarian Reddit circus.
Anyway, I like that post and that's all I care about.
P.S. I could create another account and (after a tiny bit of proofreading and editing) post that, and I am reasonably sure that R1 has reached the level where it would have passed for a fully adequate Mottizen, with nobody picking up on “slop” when it is not openly labeled as AI output. This witch hunt is already structurally similar to zoological racism.
In fact, this is an interesting challenge.
If you were on a forum dedicated to perfecting your hand-drawing skills, and requested feedback for an AI-generated image, the screeching would be 100% justified.
I was not aware that this is a forum for wordcels in training, where people come to polish their prose. I thought it was a discussion platform, so I came here to discuss what I find interesting, and illustrated it.
Thanks for keeping me updated. I'll keep it in mind if I ever think of swinging by again.
It is a discussion platform, which means people want to discuss their points with someone. The point where I was absolutely done with Darwin was when, instead of defending one of his signature high-effort trolling essays, he basically said it was just an academic exercise to see if the position could be defended. The answer is "yes": you can always put a string of words together that will make a given position seem reasonable, and it's not really a discussion if you're completely detached from the ideas you've put to paper.
I find the "wordcel" accusation completely backwards. Supposedly we're obsessed with perfecting form to the detriment of the essence of discussion, but I think zero-effort AI-slop copy-pasta is itself pure mimicry of what a discussion is supposed to be. The wordcel argument might have made sense if, for example, you had done some heavy analytical work, weren't talented as a writer, and used AI to present your findings as something readable, but none of these things are true in this case.
I am quite happy with my analytical work that went into the prompt, and R1 did an adequate but not excellent job of expanding on it.
But I am done with this discussion.
My main objection to AI content on themotte is that it makes this place entirely pointless.
What is the difference between two people just posting AI arguments back and forth and me just going to an AI and asking that AI to play out the argument?
If you want such AIs arguing with each other, just go use those AIs. Nothing is stopping you, and in fact I'm fully in favor of you going and doing that.
This is like you showing up to a marathon race with a bicycle, and when not allowed entry you start screaming about how we are all Luddites who hate technology. No dude, it's just that this whole place becomes pointless.
Your specific usage of AI also has a major problem here, which is that you were basically using it as a gish gallop attack. "Hey I think this argument is wrong, so I'm gonna go use an AI that can spit out many more words than I can."
If this behavior was replicated by everyone, we'd end up with giant walls of text that we were all just copying and pasting into LLMs with simple prompts of "prove this fool wrong". No one reading any of it. No one changing their mind. No one offering unique personal perspectives. And thus no value in any of the discussion.
Really now?
This is what it looks like and this is how it will be used.
"To have an opportunity to talk with actual people" sounds like a really low bar to clear for an internet forum. Even if your AI slop tasted exactly like the real thing, it would just be good manners to refrain from clogging our airwaves with that.
Knowing that you're talking with something sapient has an inherent value, and this value might very well go up in the coming years. I can't say I even understand why you'd think anyone would find AI outputs interesting to read.
Bizarre reaction. But I like a sincere, organically produced tantrum better than a simulation of one, so I'd rank this post higher than the one above!
Because they're intelligent, increasingly so.
The argument that cognitive output is only valid insofar as it comes purely from flesh reduces intellectual intercourse to a prelude to a physical one. At least that's my – admittedly not very charitable – interpretation of these disgusted noises. Treating AI generation as a form of deception constitutes a profanation of the very idea of discussing ideas on their own merits.
This itself eventually poses a problem: if AIs get good enough at arguing, then talking to them is signing up to be mindhacked which reduces rather than increases your worldview correlation with truth.
That still would not make them human, which is the main purpose of the forum, at least judging by the mods' stance in this thread and elsewhere. (I suppose in the Year of Our Lord 2025 this really does need to be explicitly spelled out in the rules?) If I want to talk to AIs I'll just open SillyTavern in the adjacent tab.
This seems like a non-sequitur. You are on the internet; there's no "physical intercourse" possible here, sadly. What does the "physical" part even mean?

Far be it from me to cast doubt on your oldfag credentials, but I'll venture a guess that you're just not yet exposed to enough AI-generated slop. I consider myself quite inundated, and my eyes glaze over on seeing it in the wild, unfailingly and immediately, regardless of the actual content. Personally I blame GPT: it poisoned not only the internet as a training dataset, infecting every LLM thereafter, but actual humans too, who subsequently developed an immune response to Assistant-sounding writing, and not even R1, for all its intelligence (not being sarcastic here), can overcome it yet.
Unlike humans, AI doesn't do intellectual inquiry out of some innate interest or conflict. Not (yet?) being an agent, it doesn't really do anything on its own; it only outputs things when humans prompt it to, going off the content of the prompt. GPTslop very quickly taught people that the effort you might put into parsing its outputs far outstrips the "thought" that the AI itself put into them, and, more importantly, the effort on behalf of the human prompting it, in most cases. Even as AIs get smarter and start to actually back up their bullshit, people are IMO broadly right to beware the possibility of intellectual DDoS, as it were, and instinctively discount obviously AI-generated things.
If you really believe this - why don't you just take the next logical step and just talk to AIs full time instead of posting here?
Make them act out the usual cast of characters you interact with on here. They're intelligent, they're just as good as posters here, and you get responses on demand. You'll never get banned and they probably won't complain about LLM copypasta either. What's not to love?
If you do find yourself wanting to actually talk to humans on an Internet forum rather than to LLMs in a puppet house, hopefully it's clear why there's a rule against this.
Believe me, these days I do indeed mostly talk to machines. They are not great conversationalists but they're extremely helpful.
Talking to humans has several functions for me. First, indeed, personal relationships of terminal value. Second, political influence, affecting future outcomes, and more mundane utilitarian objectives. Third, actually nontrivial amount of precise knowledge and understanding where LLMs remain unreliable.
There are still plenty of humans who have high enough perplexity and wisdom to deserve being talked to for purely intellectual entertainment and enrichment. But I've raised the bar of sanity, and this set no longer includes those who have kneejerk, angry-monkey-noise-tier reactions to high-level AI texts.
Would you mind elaborating on this? I am in the somewhat uncomfortable position of thinking that a) superintelligence is probably a red herring, but b) AI is probably going to put me and most people I know out of a job in the near term, while c) not actually having much direct contact with AI to see what's coming for myself. Could you give some description of how AI fits into your life?
I use a coding program called Windsurf. It’s like a normal text editor but you can type “Lines 45-55 currently fail when X is greater than 5, please fix and flag the changes for review” or “please write tests for the code in function Y”. You iteratively go back and forth for a bit, modifying, accepting or rejecting changes as you go.
You’re a 3D artist, right? The thing I would keep my eye on is graphics upscaling, as in this photorealistic Half-Life clip. What they’ve done is take the base 1990s game and feed the video output into an AI filter to make it look like photorealistic video. It's VERY clunky: objects appear/disappear, it doesn’t preserve the art style at all, etc., but I think if well done it could reverse the PS3-era graphics bloat that made AAA game creation into such a risky, expensive proposition.
Specifically, you would give a trained AI access to the base geometry of the scene, and to a base render with PS2 era graphics so it understands the intended art style, the feel of the scene, etc. Then the AI does the work of generating a PS6+ quality image frame with all the little detail that AAA artists currently slave over like the exact pattern of scratching on a door lock or whatever.
Is it me, or do the Half-Life 2 segments of the clip look much worse than the Half-Life 1 segments? The sand buggy one in particular looks to be at the same level of graphics as the source material.
They’re misusing an AI trained on GoPro footage. I don’t know about HL1 vs 2 specifically, but if you watch the rest of the series the quality of the output varies significantly between games and between scenes. The author was basically running AI multiple times for each clip, and keeping all the ones that looked best as a proof of concept.
There are other issues: it’s trained on real-life footage, so it doesn’t do violence or gore (also because the API's moderation forbids it, I believe), and it produces very weird results on futuristic or fantasy stuff.
This militates against top level AI copypasta. That doesn't develop personal relationships.
Highly unlikely that posting on the motte or talking to machines accomplishes either of these, so call it a wash. Recruiting for a cause is also against the rules, anyway.
Same as point 1. Precise knowledge and understanding usually comes from asking specific questions based on your own knowledge rather than what the LLM wants to know.
Your own reasons for posting here seem to suggest that there's no point in posting LLM content, and especially not as a top level post.
I have explained my reasons to engage with humans in principle, not in defense of my (R1-generated, but expressing my intent) post, which I believe stands on its own merits and needs no defense. You are being tedious, uncharitable and petty, and you cannot keep track of the conversation, despite all the affordances that the local format brings.
The standards of posting here seem to have declined substantially below X.
Friendo, you are the one who can't keep track of the conversation.
1. You say it's dumb to have a rule against AI posts.
2. Someone asks you why anyone would want to read AI posts.
3. You say talking to AIs is great, maybe even better than talking to humans.
4. I ask why you post here at all instead of talking to LLMs all the time.
5. You respond with three reasons to prefer talking to humans over LLMs.
6. I point out that these very reasons suggest that this forum should remain free of LLM posts.
7. You bristle and say that your post needs no defense (why are you defending it up and down this thread, then?)
At risk of belaboring the point, my response in point 6 is directly on the topic of point 1. To make it as clear as I can possibly make it, people come to this forum to talk to people because they prefer to talk to people. It should be clear that anyone who prefers to read LLM outputs can simply cut out the middleman and talk to them off of the motte.
Okay, fair. #6 is contrived non sequitur slop, barely intelligible in context as a response to #5, so that has confused me.
In conclusion, I think my preference to talk to people when I want to, to AI when I want to, and to use any mix of generative processes I want to, has higher priority than the comfort of people who have nothing to contribute to the conversation or to pretraining data, and who would not recognize AI without direct labeling.
I think one should separate the technical problem from the philosophical one.
LLMs are increasingly intelligent, but still not broadly speaking as intelligent as the posters here. That is a technical problem.
LLMs are not human, and will never be human. You cannot have an AI 'community' in any meaningful sense. That is a philosophical problem.
If you care about the former, you should consider banning AI posts until they are at least as good as human posts. If the latter, you should ban AI posts permanently.
My impression is that pro-AI-ban comments are split between the two.
From one perspective: Words are words, ideas are ideas. A good argument is a good argument, regardless of the source. If the argument is not good, that's a technical problem.
That said, many of us here in practice have an anecdotal style of writing, because (a) we aren't actually rationalists and (b) few people worth talking to actually have the time and inclination to produce think-tank style pieces; obviously there is no value in reading about the experiences of something that has no experience. There is also less satisfaction in debating with a machine, because only one of you is capable of having long-term growth as a result of the conversation.
It's been tried; as I recall ~90% noticed, 10% argued with the AI, 100% were annoyed -- and the 'experiment' was probably a big reason for the ruling before us.
I think it's time to replicate it with a new generation of models.
Tell me, does R1 above strike you as "slop"? It's at least pretty far into the uncanny valley to my eyes.
I dunno -- like all models I've observed to date, it gives me weird tl;dr vibes after about four lines, so I either skim heavily or... don't read.
(For the record, your own posts -- while often even longer -- do not have the same effect. Although I'll confess to bailing on the odd one, in which case it tends to be more over lack of time than interest.)
For what it's worth, I agree with you, and will plead the case with the other mods, but I do have to stand by the majority decision if it goes against my plea.
I raised an eyebrow at your use of an R1 comment, but in principle, I'm not against the use of AI as long as it's not low effort slop, the poster makes an effort to fact check it, and adds on substantive commentary. Which I note you did.
I agree that we're at the point where it's next to impossible to identify AI-generated text when it's made with a minimum of effort. You don't even need R1 for that; Claude could pull it off, and I'm sure 4o can fool the average user if you prompt it correctly. That does require some effort, of course, and I'd rather this place not end up a corner of the dead internet, even if I can count on LLMs to be more interesting than the average Reddit or Twitter user. We hold ourselves to higher standards, and talking to an actual human is an implicit goal.
Of course, if a human is using said LLM and directing it actively, I don't strenuously object. I'm against low effort bot use, not high effort.
What's the value of a top-level comment by AI, though? And what is the value of the "original commentary" you gave? This is quite unlike Adam Unikowsky's use/analysis of hypothetical legal briefs and opinions.
Whatever value it innately has as a piece of writing, of course. For example, if the distinction between wheat- and rice-growing parts of China really exists, that's fascinating. Likewise, I never thought of the fact that Europe suffered the Black Death while China remained saturated, and what effect that might have had on their respective trajectories.
My guess is that the specific statement -- that rice-farmers are more interdependent, holistic, and less prone to creativity, while wheat-farmers are the reverse -- comes from some highly cited papers by Thomas Talhelm. You can find similar speculation from previous decades about how rice-farming promotes a culture of hard work and incremental progress (etc.) compared to wheat-farming, which is less rewarding per joule of human effort spent, invoked in a similar manner to how the Protestant ethic was used as a rationale for differences in development among European and Euro-descended countries.
Outside of that, there are definite stereotypes -- both premodern and modern -- about the differences between northern and southern Chinese, but they usually run along the lines that northerners are more honest, hardy, and brash (and uncultured, etc.), while southerners are more savvy and shrewd (and more effete and cowardly, etc.)
(I make no comment on the validity of either.)
This is a partial hypothesis for the Great Divergence: the Black Death, plus other 14th-century wars and calamities, wiped out more than a third of Europe's population, which led to a significant increase in wages (almost a doubling?) and the decline of feudalism. Higher wages, lower rents, higher costs to trade (compared to, e.g., intra-China trade), and other factors produced large-scale supply/demand disequilibria after the Black Death that increased the demand for labour-saving technology, as well as the incentives for innovation from every class of society, e.g. from people who were no longer serfs.
On the other hand, it would be negative EV for a Chinese merchant or industrialist -- who had lower labour costs to deal with and more efficient internal markets -- to spend a lot on innovation, when you could just spend more money on hiring more people. And this is before we add in things like the shift to neo-Confucianism in the Ming period, awful early-Ming economic policy, Qing paranoia etc.
For what it's worth, I don't find this to be anywhere near a complete explanation. There is a corresponding divergence within Europe between countries that maintained that level of growth in per capita income and those that didn't. China has also had its share of upheavals and famines without a corresponding shift (although arguably none were as seismic, population-wise, as the Black Death was for Europe). More recent reconstructions of historical Chinese wages do show them near their peak at the start of each dynasty and dropping off gradually as the dynasty goes on, which kinda confirms the supply/demand effect of reduced population on wages after social turbulence, but doesn't seem to map neatly onto any bursts of innovation. Additionally, the period associated with rapid innovation in imperial China, the Tang-Song era, coincided with a population increase.
But even if it doesn't explain China, I think it at least partially explains the European story: how the potential preconditions for industrialisation and scientific development were met.