
Rule Change Discussion: AI produced content

There has been some recent usage of AI that has garnered a lot of controversy.

There were multiple highlighted moderator responses where we weighed in with different opinions.

The mods have been discussing this in our internal chat. We've landed on some shared ideas, but there are also some differences left to iron out. We'd like to open up the discussion to everyone to make sure we are in line with general sentiments. Please keep this discussion civil.

Some shared thoughts among the mods:

  1. No retroactive punishments. The users linked above who used AI will not face any form of mod sanctions. We didn't have a rule, so they didn't break it. And I thought in all cases it was good that they were honest and up front about the AI usage. Do not personally attack them; follow the normal rules of courtesy.
  2. AI generated content should be labelled as such.
  3. The user posting AI generated content is responsible for that content.
  4. AI generated content seems ripe for different types of abuse and we are likely to be overly sensitive to such abuses.

The areas of disagreement among the mods:

  1. How AI generated content can be displayed. (off-site links only, or quoted just like any other speaker)
  2. What AI usage implies for the conversation.
  3. Whether a specific rule change is needed to make our new understanding clear.

I hope this isn't too consensus building, but I think the way AI posts (meaning posts that mainly consist of AI-generated text, not discussion of AI generally) get ratio'd already gives a decent if rough impression of the community's general sentiment. ...eh, on second thought it's too subjective and unreliable a measure, nevermind.

If we allow AI content but disallow "low-effort" AI content, I guess the real question here is - does anyone really want to be in the business of properly reading into (explicitly!) AI-generated posts and discerning which poster is the soyjak gish-galloping slopper and which is the chad well-researched prompt engineer, when - crucially - both outputs sound exactly the same, and will likely be reported as such? If prompted right, AI can make absolutely any point with a completely straight "face", providing or hallucinating proofs where necessary. I should know; Common Sense Modification is the funniest shit I've ever prompted. You can argue these are shitty heuristics, and that judging the merits of a post by how it "sounds" is peak redditor thinking and heresy unbecoming of a trve mottizen, and I would even partly agree - but this is exactly what I meant by intellectual DDoS earlier. I still believe the instinctive "ick", as it were, that people get from AI text is directionally correct: automatically discarding anything AI-written is unwise, but the reflexive mental "downgrade" is both understandable and justified.

Another obvious failure mode is handily demonstrated by the third link in the OP: AI slop all too easily begets AI slop. I actually can't see anything wrong with, or argue against, the urge to respond to a mostly AI-generated post with a mostly AI-generated reply - indeed, why wouldn't you outsource your response to AI, if the OP evidently can? (But of course you'd use a carefully fleshed-out prompt that gets a thoughtful gen, not the slop you just read, right.) If you choose to respond by yourself anyway, what stops them from feeding your reply right back in once more? Goose, gander, etc. And it's all well and good, but at this point you have a thread of basically two AIs talking to each other, and permitting AI posts while forbidding specifically this, to avoid such spirals, again requires someone to judge which AI is the soyjak and which is the chad.

TL;DR: it's perfectly reasonable to use AI to supplement your own thinking, I've done it myself, but I still think that the actual output that goes into the thread should be 100% your own. Anything less invites terrible dynamics. Since nothing can be done about "undeclared" AI output worded such that nobody can detect it (insofar as it is meaningfully different from the thing called "your own informed thoughts") - it should be punishable on the occasion it is detected or very heavily suspected.

My take on the areas of disagreement:

  1. Disallow AI text in the main body of a post, except perhaps when summarized in block quotes no longer than one paragraph to make a point. Anything longer should go under an outside link (pastebin et al.) or, if we have the technology, into embedded codeblocks collapsed by default.

  2. I myself post a lot of excerpts/screenshots so no strong opinion. AI is still mostly a tool, so as with other rhetorical "tools" existing rules apply.

  3. Yes, absolutely; the last few days showed a lot of different takes on AI posting, so an official "anchor" would be helpful.

If AI posting is normalized, I will skip over any post that doesn't get to the point in the first two sentences. Length was always a bad predictor of how much effort someone put into their post, but with AI it will become a negative one.

I will skip over any post that doesn't get to the point in the first two sentences.

I already do that for anything that doesn't get to the point within the first paragraph. I strongly recommend everyone else do the same, and I think this place would become much better if everyone did.

TLDR: mod on content, not provenance.

A good post is enjoyable to read and it is well argued. Somebody who is using AI in some way to post more interesting, well-argued essays than they could write entirely by hand is improving the Motte, and should be encouraged. Using AI to post low-effort walls of text should be a bannable offence.

Specifically:

  • AI-written or edited content should be labelled clearly.
  • AI use should be considered a strong aggravating factor for low-effort or poor discussion, and should quickly escalate to bans if needed. The quality bar should be kept high for AI-adjacent content.
  • Otherwise, do nothing.

Yes, this is subjective, but all of our rules are subjective. In practice, I trust the mods to handle it.

TLDR: mod on content, not provenance.

Except the use of AI qualitatively changes the nature of the content; your own suggestions hint at this. A "handwritten low-effort wall of text" is pretty much a contradiction in terms; it probably deserves a gentleman's C by default. If someone put in the time to write it, even if the arguments are hot garbage, other things being equal you can assume they care, that they want to be taken seriously, that they want to improve, etc. None of this holds true when you post AI slop, because you can generate it with all the effort of writing a one-line sneer.

If you're asking for clear labelling and recommending that the use of AI be taken with a presumption of low-effort, you're already moderating on provenance.

A "handwritten low-effort wall of text" is pretty much a contradiction in terms

If average American political consumers started writing walls of text here, we would (and should) start moderating them. Doing the same to AI is fine.

Kind of? On a technical level, the median AI essay is both easier to create and lower quality than the median motte post. I want to strongly discourage people from spamming bad content because it’s bad content, especially at first while norms are being established.

But lots of other posters are arguing that posting AI-generated words is inherently wrong or goes against the purpose of the site. That if the words were not crafted in the brain of a human then discussing them is worthless and they should be banned regardless of their content. I think some people would be more offended by a good AI post than a bad one, because they’d been lured into paying attention to non-human writing. THAT is what I mean by ‘moderating for provenance’.

I should note that I’m mostly thinking of top-level and effort-posts here. If you’re involved in a downthread debate with a specific person then I can see that drafting in a more eloquent AI to continue the battle when you lose interest is poor form, at least unless you both agree to it.

(The labelling is partly practical and partly a moral conviction that you shouldn’t take credit for ideas you didn’t have).

I find that the concept of "interesting" is often used here and on DSL in toxic ways. It's too easy to call a post "interesting" when people are responding to it a lot because of its flaws, deliberate (for trolling) or otherwise (for AI). Well-argued is fine. Interesting shouldn't even be on the table.

Edited. I'm using 'interesting' as 'enjoyable to read'. That is, a good post is (a) something you want to read, and (b) something you gain by reading. Does that help?

There are some people who claim that they will never find anything that AI writes enjoyable simply because they know it's not produced by a human, but I think that's cutting off their nose to spite their face.

Because I know what Internet pedants are like, and because even non-pedants will fake being pedants to score points, we need a use/mention clause. Using AI-produced material as an example when discussing the topic of AI, rather than to promote the ideas expressed in the AI-produced material, doesn't count.

I'm not a fan of AI generated text, but mostly because I believe it to be incapable of any original insight (ignoring the fact it's poisoned with a certain brand of selfishness).

I have a hard enough time when actual human users do that; a pithy one-liner is superior to an AI wall of text because in the former case I know the conversation's going nowhere, whereas AI use in that case is basically just a sophisticated version of a Gish gallop.

By that token I find "I asked an AI tool because I couldn't be bothered to make an actual fucking argument, here is what it said" to simply be egregiously obnoxious, as that is the general tone of the comments that do this. It's not leveraged to create meaningful discussion, that's for sure- otherwise, why go through the trouble of saying "AI says X"?

There has yet to be an effective suppressing meme for "I'm using AI, and it has drawn you as the soyjak, and me as the chad" types of comments, much like "tits or GTFO" is used on 4chan as a rejection of trying to cover up insubstantive commenting with social privilege.

I think the difficult thing, as with most arguments here, is that context matters.

When I've seen AI used here, it's usually as a stick to beat someone else with ("look, it agrees with me!"), or a stick to beat AI with ("look at how wrong it is!"), or a stick to beat those in control of the AI with ("look at how wrong they made it!").

The only other case where I've seen it used is "I'm too lazy to write my argument to the level of wordcel you guys want for a top level post, AI wrote it for me".

The easiest thing to do is to just ban all of it, because it's onerous to sort the wheat from the chaff. But I also see it becoming non-trivial to do so in the near future. The smarter artists are already using AI as part of their workflows or to make some of their work easier. And I've definitely seen worse arguments by people here than Deepseek manages to make on occasion.

  1. When talking about AI content, put examples in quotes if they are brief, provide an off-site link if they are verbose.
  2. When using AI content, treat it like citing Wikipedia in your high school paper: using it for research purposes is fine as long as you understand its limitations. Using AI content in your messages is not allowed; trying to hide the fact and pass the content off as your own is just as bad as saying "here's what ChatGPT thinks about X".

But these cases are easy to judge. Here's one where it's less clear-cut: someone believes they are discriminated against by the wordcels of The Motte because they cannot write well. They have an idea they want to defend, they know how to attack the arguments of their opponent, but they think their command of the English language is insufficient to eloquently argue their point. So they turn to ChatGPT or DeepSeek and tell them, "computer, here's what this asshole says, here's what I think about it, please help me put my rebuttal into words".

On one hand, I can sympathize with them. AAQC compilations highlight a lot of passionate prose that can be hard to emulate. On the other hand, allowing people to ask AI for editorial and rhetorical help is a small step to "computer, here's what this asshole says, here's what I think about it, please help me destroy him like I'm one of these fancy Substack writers". On the gripping hand, forcing people to post their original argument and link to an off-site AI-edited version of it makes them sound unnecessarily passive-aggressive: "here's what I think about your argument, and here's my reply in moar words, because that's what you really care about, don't you".

If someone does post AI because they have their own ideas, but can't express them well, they should be willing to stand by everything in their post. If they use AI, we should be entitled to assume that they edited out everything they don't agree with and that even if what's left isn't literally their own words, we can treat it as their own words.

I greatly appreciate your insight. It has occurred to me that since EoPs as a group have different political inclinations than ESLs, the site language being English means the former possess an unfair advantage, thus making ideologies favoured by EoPs seem more justified. AIs can thus also be thought of as free legal aid, to use your more general framework, or as a way for perspectives which Americans are less likely to espouse to be given a fair shake.

Edit: Mods have on several occasions expressed that diversity of opinions is something they strive towards, when it is on the basis of US left vs US right. (They faced problems, as attracting members of the underrepresented side would mean banning some members of the overrepresented one, purely on the basis of their beliefs.) Thus the principle to which I am appealing isn't alien to the mods.

I didn't mean just native speakers vs ESLs, but also wordcels vs shape rotators. Some people are just good with words, you pick up their book or blogpost and it just slides into you like the best Mormon jello.

The basis of the state is self-preservation; treason is the first crime. Yet themotte.org has no rules against activity aimed at destroying themotte.org itself. One is allowed to argue that reddit was right to [remove] innocuous comments on /r/themotte. One is allowed to argue that reddit would be in the right even if it banned /r/themotte, that the hosting of themotte.org is allowed to end with no justification, or that patreon is allowed to seize the donations to themotte.org.

Thus, as one is allowed to gnaw at the very foundation of themotte.org, any rule whose alleged aim is allowing the continued existence of themotte.org is arbitrary. And I consider its true goal to be something else. In this case, it is insecurity: if a merely large enough matrix can be shown to produce greater insights than many flesh and blood men, ideologies would have to take this fact into account. And perhaps some would cope more easily than others.

if a merely large enough matrix can be shown to produce greater insights than many flesh and blood men

If I get the matrix to write a response pointing out that the original great insight was garbage, does it mean we were right all along to want to exclude matrix-generated comments?

That it is AI2 which disproves AI1 is no more proof that AI in general is wrong than Human2 disproving Human1 is proof that all human comments are wrong.

Ok, I can also get it to write text that argues all AI is wrong.

Could we hear from a mod who wants an AI policy even as permissive as "quoted just like any other speaker"?

My two cents:

How AI generated content can be displayed. (off-site links only, or quoted just like any other speaker)

off-site links only, other than very short quotes not making up the bulk of a comment, and even that I kinda hate

What AI usage implies for the conversation.

the end of my interest in a thread and a sharp drop in my respect for the user

Whether a specific rule change is needed to make our new understanding clear.

yes, please. otherwise, it's far too easy to spam, lowering quality and increasing moderator effort.

Bottom line, I think we need to discourage AI heavily. Otherwise, long form content - the hallmark of much of the best content here - is immediately suspicious, and I am likely to skip it.

This is the thing I usually say about moderation, but - the problem with most AI posts isn't that they're AI, it's that they're bad. It is, in principle, possible to use AI to help you write interesting posts; I saw a perfectly good one on Twitter written with DeepSeek recently. But AI makes it much easier to quickly spit out something low-effort and uninteresting, so people do a lot of that.

The thing is, it's fine to have a rule that says 'no bad posts'. Indeed, the 'avoid low-effort participation' rule works for this purpose. So I don't think we should discourage AI overall, but just discourage using AI to make bad posts. And similarly, if someone's posting thousands of words of vacuous text every day, mods should feel free to discourage them even if it's artisanal hand-made text.

Can you make any argument in defense of your apparently instinctual reactions?

the end of my interest in a thread and a sharp drop in my respect for the user

Otherwise, long form content - the hallmark of much of the best content here - is immediately suspicious, and I am likely to skip it.

It sounds like you just feel entitled to an arbitrary terminal preference. That's not compelling.

I'd be the person you're looking for.

I think AI is a useful tool, and has some utility in discourse, the most pertinent example that comes to mind being fact-checking lengthy comments (though I still expect people to read them).

I'm fine with short excerpts being quoted. I am on the fence for anything longer, and entirely AI generated commenting or posting without human addition is beyond the pale as far as I'm concerned.

My stance is that AI use is presumed to be low effort by default: the onus is on the user to put their own time and effort into vetting and fact-checking it, and to quote from it only when necessary. I ask that longer pieces of prose be linked off-site; pastebin would be a good option.

While I can tolerate people using AI to engage with me, I can clearly see, like the other mods, that it's a contentious topic, and it annoys people reflexively, with some immediately using AI back as a gotcha, or refusing to engage with the text on its own merits. I'm not going to go "am I out of touch, no it's the users who are wrong" here, the Motte relies on consensus both in its moderation team, and in its user base. If people who would otherwise be happy and productive users check out or disengage, then I am happy to have draconian restrictions for the sake of maintaining the status quo.

People come here to talk to humans. They perceive AI text to be a failure in that regard (even I at least want a human in the loop, or I'd talk to Claude). If this requires AI to be discouraged, that's fine. I'm for it, though I would be slightly unhappy with a categorical ban. If that's the way things turn out, this is not a hill I care to die on, especially when some users clearly would be happy to take advantage of our forbearance.

Could we hear from a mod who wants an AI policy even as permissive as "quoted just like any other speaker"?

I imagine it's @self_made_human

I strongly believe that AI generated posts should be disallowed. Like @cjet79, I think it destroys the entire point of the forum. A bunch of text some LLM came up with is not interesting, it's not worth discussing, and it is really disrespectful to everyone else to even post such a thing. It's basically saying "I don't feel like actually putting any effort into talking to you so I'm gonna have my assistant do it". Hell to the no on this one.

I would say that we should stop short of a full ban on AI generated content, because sometimes you can have an interesting meta discussion about the stuff. See for example recent posts where people were showing off examples of how that Chinese LLM differed from American models. That is actually an interesting discussion, even though it involves posting AI content. So IMO linking for the purposes of discussion is fine, but not having it write for you.

+1. The more AI content TM gets and starts to rely on, the more destroyed TM becomes.

I think LLM output is sometimes useful but should never be put directly in posts on this site, it has to be linked either from the platform itself when possible or dumped on pastebin etc. As far as topics of discussion go, any of 'LLM says X'/'LLM won't say Y'/'I asked an LLM to summarize Z' are not meaningful events to discuss and should never be top level threads or even the majority substance of a reply.

The starting post is my attempt to be neutral and just give everyone the lay of the land.

This post is me being opinionated.

I am very against AI generated content being on themotte. I think it makes the whole place pointless. I'm not against AI in general, it just seems specifically at odds with what makes this place useful to me.

I can have a discussion with AI without showing up here. I do this quite often for various topics. Cooking recipes, advice on fiction writing, understanding some complex physics topics, getting feedback on my own writing, generating business writing, etc. If I am here it is because I specifically do not want a conversation with an AI.

To me there is a value in discussion: of my brain processing an idea, spitting it out onto a page, and then having other brains process what I have written and spit their opinions back out at me. Writing things is part of my thinking process. Without writing or talking about an idea, I can't really claim to have thought much about it. I believe this is true of many people. If someone else is offloading either part of their thinking process to an AI, then the degree to which they have offloaded their thinking (either the initial reading/processing, or the writing/responding) is the degree to which I'm not getting additional value out of them.

Realistically there might not be much of a way to enforce this. Everyone could be going and feeding my writing to an AI and regurgitating its answers back to me. I just have to hope that they recognize how pointless and silly it is to do such a thing. I compared using AI on themotte to bringing a bike or a car to a fun run. It might be objectively better at accomplishing the "thing" we are doing. But everyone has a sense it is pointless and dumb to do so.


My other objection, which may be mitigated in the future, is that many of the AIs available have a sort of generic sameness to them.

Imagine all viewpoints can be summed up by a number ranging from 1 to 100, with 50 being an average viewpoint. Most AIs are going to spit out viewpoint 50 by default. I think you can currently make character AIs and have them spit out 40s or 60s, with a higher rate of hallucinations. Google's "black nazi" image generator is, I think, a good example of them trying to push the AI's default opinion in one direction and ending up with some crazy hallucinations.

But there are plenty of real people with extreme viewpoints on any given topic, and plenty of them are here on themotte. Today's 50 is not yesterday's 50, and it likely won't be tomorrow's 50 either. Talking with the tail-end viewpoints is something I find interesting, and potentially useful. It is also generally more difficult to find viewpoints outside of the center. Themotte is a place that is often outside the center.

If everyone secretly started posting with AIs tomorrow to play a cruel trick on me, I don't think I'd immediately figure it out. But within a month I'd have lost interest and be gone from this place. The center viewpoint is widely available and easy to find. I don't need to come to a dark and unknown corner of the internet to get it.

+1. I think it would be good to have a sort of carve-out for talking about AI (for example in a pattern like: "Musk's new MechaTrump-3T model has some weird quirks, which will be concerning if the presidency is handed over to it next month as planned. Here is what it had to say about Mexico when I ran its 1-bit quantization:").

Other than that, I want to add that the assumption of good faith for me depends on a line of reasoning along the lines of: "Another sentient being, ultimately not that different from you, has taken the time to write this. You can appreciate how long it took and how hard it was. Treat the product with the amount of respect you would like to receive if you put in a similar effort." This goes out of the window if the other party may have just written a line, clicked a button, and copypasted.

Pretty much sums up my thoughts on the matter. +1 for the Butlerian Jihad from me.

I'm ok with people posting AI generated content as long as it is clearly marked and hidden away in some kind of drop-down/quote. Sometimes you might want to post some AI-generated content as a piece of evidence or the like.

Strong agree that posting it without attribution and labelling should be a bannable offence though.

This is more or less where I stand. I think sometimes using it as evidence (particularly in arguments about AI!) is potentially helpful in small doses.

If the preponderance of your response is not human generated, you probably are doing something wrong.

I like your shared thoughts.

But I also really want the AI to go away and would be supportive of more extreme moderation in the future. I haven't seen a single thread with AI content that I thought was productive.

Thou shalt not make a machine in the likeness of a human mind.