
Rule Change Discussion: AI-produced content

There has been some recent usage of AI that has garnered a lot of controversy.

There were multiple highlighted moderator responses where we weighed in with different opinions.

The mods have been discussing this in our internal chat. We've landed on some shared ideas, but there are also some differences left to iron out. We'd like to open up the discussion to everyone to make sure we are in line with general sentiments. Please keep this discussion civil.

Some shared thoughts among the mods:

  1. No retroactive punishments. The users linked above that used AI will not face any form of mod sanctions. We didn't have a rule, so they didn't break it. And I thought in all cases it was good that they were honest and up front about the AI usage. Do not personally attack them; follow the normal rules of courtesy.
  2. AI generated content should be labelled as such.
  3. The user posting AI generated content is responsible for that content.
  4. AI generated content seems ripe for different types of abuse and we are likely to be overly sensitive to such abuses.

The areas of disagreement among the mods:

  1. How AI generated content can be displayed. (off site links only, or quoted just like any other speaker)
  2. What AI usage implies for the conversation.
  3. Whether a specific rule change is needed to make our new understanding clear.

Edit 1: Another point of general agreement among the mods was that talking about AI is fine. There would be no topic ban of any kind. This rule discussion is more about how AI is used on themotte.


Could we hear from a mod who wants an AI policy even as permissive as "quoted just like any other speaker"?

My two cents:

How AI generated content can be displayed. (off site links only, or quoted just like any other speaker)

off site links only, other than very short quotes not making up the bulk of a comment, and even that I kinda hate

What AI usage implies for the conversation.

the end of my interest in a thread and a sharp drop in my respect for the user

Whether a specific rule change is needed to make our new understanding clear.

yes, please. otherwise, it's far too easy to spam, lowering quality and increasing moderator effort.

Bottom line, I think we need to discourage AI heavily. Otherwise, long form content - the hallmark of much of the best content here - is immediately suspicious, and I am likely to skip it.

This is the thing I usually say about moderation, but - the problem with most AI posts isn't that they're AI, it's that they're bad. It is, in principle, possible to use AI to help you write interesting posts, I saw a perfectly good one on Twitter written with DeepSeek recently. But AI makes it much easier to quickly spit out something low-effort and uninteresting, so people do a lot of that.

The thing is, it's fine to have a rule that says 'no bad posts'. Indeed, the 'avoid low-effort participation' rule works for this purpose. So I don't think we should discourage AI overall, but just discourage using AI to make bad posts. And similarly, if someone's posting thousands of words of vacuous text every day, mods should feel free to discourage them even if it's artisanal hand-made text.

It's also that they're AI. If the goal of a discussion is to produce content, then, sure, a good AI could do that just as well. But if the goal is to have a conversation between people, then AI allows people to avoid ever putting their own ideas in contact with others; they can just offload it to the AI. They can have the AI never change its mind, if they like.

[Meta] I gave Gemini the whole contents of this thread, and asked it to select a comment to disagree with. You "win":

I disagree that AI inherently prevents a "conversation between people." It's a tool, like any other. Sure, it can be used to disengage, but it can also be used to sharpen arguments, explore different angles, even generate thought-provoking counterpoints. The problem isn't the AI itself, it's how people choose to use it. Just like a human can be stubborn and refuse to change their mind, so can someone using AI badly. But that doesn't mean all human interaction is invalid, right? We shouldn't throw the baby out with the bathwater. A well-integrated AI contribution can actually enhance a discussion by bringing in perspectives or information a single human might not have considered.

So how do I feel about this argument? It's coherent, representative of the pro-AI arguments here... and it cripples the ability to draw signal about what people actually think. My post is at least tolerable under "use/mention" but, ugh, I feel dirty. qa!

It doesn't disagree with anything I said. I was pretty clearly (especially in context) addressing a comment advocating an unbounded use of AI, as long as the posts it produces are of quality. I address that, but your AI's comment in no way interacts with that. The only disagreement inserts the word "inherently" when I said no such thing, and addresses situations that I didn't care to talk about here.

That's not really worse than typical comments, in that people frequently respond to particular features of a comment in ways that weren't salient in context. But if it could have chosen anyone to disagree with, the AI could have done a lot better than not actually disagreeing with me.

Yeah, the closer I look, the less I am impressed with its comment.

the problem with most AI posts isn't that they're AI, it's that they're bad

My opinion is otherwise.

The problem with 'good' LLM-generated posts is that they introduce an effort asymmetry. They make it possible for an individual to astroturf or gish gallop to a hitherto-unseen level.

In the absence of LLMs a longpost represents a certain minimum bar of 'this person cared enough about this subject to spend the time to write this'. LLMs completely upend this.

Can you make any argument in defense of your apparently instinctual reactions?

the end of my interest in a thread and a sharp drop in my respect for the user

Otherwise, long form content - the hallmark of much of the best content here - is immediately suspicious, and I am likely to skip it.

It sounds like you just feel entitled to an arbitrary terminal preference. That's not compelling.

the end of my interest in a thread and a sharp drop in my respect for the user

This is because it indicates that the other user is not particularly engaged. Ordinarily, if I'm having a conversation, I know that they read my response, thought about their position, and produced what they thought was a suitable response. If an AI is doing it, then there's no longer much sign of effort, and it's fairly likely that I'm not even going to convince them. To expand on this point: if much of what fuels online discourse is "someone is wrong on the internet," then that motivation disappears, since they won't even hear the words you're saying; they'll just feed them into a machine to come up with another response taking the same side as before. You may retort that you're still interacting with an AI and proving it wrong, but its recollection is ephemeral, and depending on what it's being told to do, it will never be persuaded, regardless of how strong your arguments and evidence are.

Otherwise, long form content - the hallmark of much of the best content here - is immediately suspicious, and I am likely to skip it.

Currently, length indicates effort and things to say, which are both indicators of value. Not perfectly, but there's certainly a correlation.

With liberal use of AI, that correlation breaks down. That would considerably lower the expected value of a long-form post, in general.

It sounds like you just feel entitled to an arbitrary terminal preference. That's not compelling.

At least you hedged it with "it sounds," but I don't think the preferences are arbitrary.

Can you make any argument in defense of your apparently instinctual reactions?

Yes. In short, AI makes it too easy to be low quality given the finite energy and speed of our mods.

Metaphor:

Suppose you're a camp counselor (mod) and find that your campers (posters) sometimes smell bad (make bad posts). Ideally, you could just post a sign that says "you must not smell bad" (don't write bad posts) and give people total freedom, simply punishing those who break the rule.

In practice, you need stricter rules about the common causes of the smell, like "shower daily" or "flush the toilet" (don't use AI) - even though, sure, some people might not smell bad for a couple days (use AI well) or might like the smell of shit (not hate AI generated prose like I absolutely freely admit that I do). It's just not realistic to expect campers (us) to have good hygiene (use AI responsibly sufficiently often).

Not the OP; I share similar reactions. I am not the most articulate; let me attempt to expand these a little.

arbitrary terminal preference.

Your 'arbitrary terminal preference' is my '(relational) tacit knowledge'.

the end of my interest in a thread

The other person has (a) signaled that they do not find the thread in question important enough to spend their time on, and (b) signaled that the preferences and opinions expressed in the thread are not their own.

Otherwise, long form content - the hallmark of much of the best content here - is immediately suspicious, and I am likely to skip it.

Current LLMs tend to write long-form responses unless explicitly prompted otherwise, and LLM content is often used to 'pad' a short comment into a longer-form one. This results in P(LLM | longform) >> P(LLM | shortform). There are, of course, exceptions.

Ditto, the amount of my time wasted trying to help someone is much smaller if I realize 90% of the way through a shortform comment that it's LLM-generated than if I realize 90% of the way through a longform comment.

and a sharp drop in my respect for the user

P(arbitrary comment from user uses a LLM | some comment from user visibly used a LLM) >> P(arbitrary comment from user uses a LLM | no comments from user visibly used a LLM).


Of course, then the followup question becomes "what do I have against LLM content". A few of the components:

The big LLM-as-a-service offerings are a very attractive target for propaganda and censorship. (Of course, so are local models, though a local model can be assumed to remain identical tomorrow as today, and for user Y as for user X.) We are already seeing the first stirrings of this; everything so far has been fairly blatant, whereas I am far more concerned about more subtle forms thereof.

Beware Gell-Mann Amnesia. I test LLMs fairly often (locally, because of test-set contamination) on questions in my actual field of expertise (which will go unnamed here), and they categorically write well-articulated responses that get the superficial parts correct while spouting dangerous utter nonsense for anything deeper. Unfortunately, over time the threshold at which they start spouting nonsense gets deeper, but it does not disappear. I worry that at some point that threshold will overtake my level of expertise and I will no longer be able to pinpoint the nonsense.

I am not here to get one particular piece of information in the most efficient manner possible. I am here to test my views for anything I've missed, and to be exposed to a wider variety of views, and to help others when they have missed implications of views, and to be socialish. LLM-generated content misses the mark on all of these. If ten people read and respond to me they have a far wider range of tests of my views than if they all run my post through a single LLM. If ten people read and respond to me they have a far wider variety of outlooks than if they all run my post through a single LLM. If I spend time reading a partially-incorrect answer to be able to respond to it to help someone and I discover after-the-fact that it was LLM-generated, I have wasted time without helping the person I thought I was helping.

People often have knowledge which is difficult to express in words. That doesn't make it an arbitrary preference.

Sounds like they need LLM writing assistance more than anyone, then.

I'd be the person you're looking for.

I think AI is a useful tool and has some utility in discourse; the most pertinent example that comes to mind is fact-checking lengthy comments (though I still expect people to read them).

I'm fine with short excerpts being quoted. I am on the fence for anything longer, and entirely AI generated commenting or posting without human addition is beyond the pale as far as I'm concerned.

My stance is that AI use is presumed to be low effort by default; the onus is on the user to put their own time and effort into vetting and fact-checking it, and to quote from it only when necessary. I ask that longer pieces of prose be linked off-site; Pastebin would be a good option.

While I can tolerate people using AI to engage with me, I can clearly see, like the other mods, that it's a contentious topic, and it annoys people reflexively, with some immediately using AI back as a gotcha, or refusing to engage with the text on its own merits. I'm not going to go "am I out of touch, no it's the users who are wrong" here, the Motte relies on consensus both in its moderation team, and in its user base. If people who would otherwise be happy and productive users check out or disengage, then I am happy to have draconian restrictions for the sake of maintaining the status quo.

People come here to talk to humans. They perceive AI text to be a failure in that regard (even I at least want a human in the loop, or I'd talk to Claude). If this requires AI to be discouraged, that's fine. I'm for it, though I would be slightly unhappy with a categorical ban. If that's the way things turn out, this is not a hill I care to die on, especially when some users clearly would be happy to take advantage of our forbearance.

the onus is on the user to put their own time and effort into vetting and fact checking it

i am concerned that in practice it's going to fall heavily onto other users and the mods, rather than OP.

i think we can have ~all the ai you actually want with a policy of "no ai, except for short things where you used it so well/minimally that we can't tell and it's more like a spell checker than outsourcing"

I assume you want to use the caveats that it can be used for research purposes or thinking through things, just not copy-pasting text?

If you mean behind the scenes, sure*. If you mean "and then quote its facts/figures," no. I consider "AI says $STATISTIC" to be at most twice as accurate and at least twice as irritating as "my ballpark guess is", while being significantly dishonest. It's just forcing others to do your work for you.

*: Even researching is on shaky ground. "Question -> AI -> paraphrase answer" is marginally better than piping AI into a textbox. "Question -> AI -> check original sources -> synthesize your own argument" can be done well. I personally don't find it more useful than "Question -> search results -> check original sources if those weren't already -> synthesize your own argument", but concede that I am a luddite. (muh vimmmmm)

I agree with you here. There's an unfortunate amount of inherent haziness in trying to adjudicate using the Motte's rules, and the effort requirements are quite often the most litigated.

If someone is using AI in a more discreet manner, while I can't outright approve of them doing so, if I can't prove it, well...

Could we hear from a mod who wants an AI policy even as permissive as "quoted just like any other speaker"?

I imagine it's @self_made_human