
Rule Change Discussion: AI produced content

There has been some recent usage of AI on the site that has garnered a lot of controversy.

There were multiple highlighted moderator responses where we weighed in with differing opinions.

The mods have been discussing this in our internal chat. We've landed on some shared ideas, but there are also some differences left to iron out. We'd like to open up the discussion to everyone to make sure we are in line with general sentiments. Please keep this discussion civil.

Some shared thoughts among the mods:

  1. No retroactive punishments. The users linked above who used AI will not face any form of mod sanctions. We didn't have a rule, so they didn't break it. And in all cases I thought it was good that they were honest and up front about the AI usage. Do not personally attack them; follow the normal rules of courtesy.
  2. AI generated content should be labelled as such.
  3. The user posting AI generated content is responsible for that content.
  4. AI generated content seems ripe for different types of abuse and we are likely to be overly sensitive to such abuses.

The areas of disagreement among the mods:

  1. How AI generated content can be displayed (off-site links only, or quoted just like any other speaker).
  2. What AI usage implies for the conversation.
  3. Whether a specific rule change is needed to make our new understanding clear.

Edit 1: Another point of general agreement among the mods was that talking about AI is fine. There would be no sort of topic ban of any kind. This rule discussion is more about how AI is used on themotte.


I think this sounds fine in principle.

But suppose you make that post, and it actually sucks, and you didn't realize. I've definitely polished a few turds and posted them before without realizing, these things happen to the best of us. Now what? Does subsequent discussion get derailed by an intellectually honest disclosure of AI usage, and we end up relitigating the AI usage rules every time this happens?

On the one hand, I'd like to charitably assume that my interlocutors are responsible AI users, the same way we're usually responsible Google users. I don't necessarily indicate every time I look up some half-remembered factoid on Google before posting about it; I want to say that responsible AI usage similarly doesn't warrant disclosure[1].

On the other hand, a norm of non-disclosure whenever posters feel like they put in the work invites paranoid accusations of AI ghostwriting in place of actual criticisms. I've already witnessed this interaction play out with the mods a few days ago - it was handled well in this case, but I can easily imagine this getting out of hand when a post touches on hotter culture war fuel.

I don't think there's a practical way to allow widespread AI usage without discussion inevitably becoming about AI usage. I'd rather you didn't use it; and if you do, it should be largely undetectable; and if it's detectable, we charitably assume you're just a bad writer; and if you aren't, we can spin our wheels on AI in discourse again - if only to avoid every bad longpost on the Motte becoming another AI rule debate.

[1] A big part of my hesitation for AI usage is the blurry epistemology of its output. Google gives me traceable references to which I can link, and high quality sources include citations; AI doesn't typically cite sources, and sometimes hallucinates stuff. It's telling that Google added an AI summarizer to the search function, and they immediately caught flak for authoritatively encouraging people to make pizza out of glue. AI as a prose polisher doesn't have this epistemological problem, but please prompt it to be terse.