
Rule Change Discussion: AI-produced content

There has been some recent usage of AI that has garnered a lot of controversy.

There were multiple highlighted moderator responses where we weighed in with differing opinions.

The mods have been discussing this in our internal chat. We've landed on some shared ideas, but there are also some differences left to iron out. We'd like to open up the discussion to everyone to make sure we are in line with general sentiments. Please keep this discussion civil.

Some shared thoughts among the mods:

  1. No retroactive punishments. The users linked above who used AI will not face any form of mod sanctions. We didn't have a rule, so they didn't break it. And I thought in all cases it was good that they were honest and up front about the AI usage. Do not personally attack them; follow the normal rules of courtesy.
  2. AI-generated content should be labelled as such.
  3. The user posting AI-generated content is responsible for that content.
  4. AI-generated content seems ripe for different types of abuse, and we are likely to be overly sensitive to such abuses.

The areas of disagreement among the mods:

  1. How AI-generated content can be displayed (off-site links only, or quoted just like any other speaker).
  2. What AI usage implies for the conversation.
  3. Whether a specific rule change is needed to make our new understanding clear.

Edit 1: Another point of general agreement among the mods was that talking about AI is fine. There will be no topic ban of any kind. This rule discussion is more about how AI is used on themotte.


Apologies for double-dipping, but what I want to know from the new rules is, if I:

  • put serious effort into a top-level post
  • and I collaborate with AI at some point in the process to jump-start a paragraph or to suggest ideas or to correct style
  • and I post it with the sincere expectation that it meets the usual bar and that others will find it interesting
  • and I am intellectually honest and say that I used AI

what happens to me?

The vast majority of posts below are commenting on low-effort uses of AI to win slapfights on the internet, but I want to know where the high bar is, if it exists at all.

I think this sounds fine in principle.

But suppose you make that post, and it actually sucks, and you didn't realize. I've definitely polished a few turds and posted them before without realizing; these things happen to the best of us. Now what? Does subsequent discussion get derailed by an intellectually honest disclosure of AI usage, and do we end up relitigating the AI usage rules every time this happens?

On the one hand, I'd like to charitably assume that my interlocutors are responsible AI users, the same way we're usually responsible Google users. I don't necessarily indicate every time I look up some half-remembered factoid on Google before posting about it; I want to say that responsible AI usage similarly doesn't warrant disclosure[1].

On the other hand, a norm of non-disclosure whenever posters feel like they put in the work invites paranoid accusations of AI ghostwriting in place of actual criticisms. I've already witnessed this interaction play out with the mods a few days ago - it was handled well in this case, but I can easily imagine this getting out of hand when a post touches on hotter culture war fuel.

I don't think there's a practical way to allow widespread AI usage without discussion inevitably becoming about AI usage. I'd rather you didn't use it; and if you do, it should be largely undetectable; and if it's detectable, we charitably assume you're just a bad writer; and if you aren't, we can spin our wheels on AI in discourse again - if only to avoid every bad longpost on the Motte becoming another AI rule debate.

[1] A big part of my hesitation for AI usage is the blurry epistemology of its output. Google gives me traceable references to which I can link, and high quality sources include citations; AI doesn't typically cite sources, and sometimes hallucinates stuff. It's telling that Google added an AI summarizer to the search function, and they immediately caught flak for authoritatively encouraging people to make pizza out of glue. AI as a prose polisher doesn't have this epistemological problem, but please prompt it to be terse.

The high bar is probably somewhat undetectable, and thus unenforceable.

If any given paragraph is approximately 80% your writing and your ideas, I would not be overly concerned with labelling it.

In general I'd still suggest not doing it. I think in many scenarios you'd be better off just not including that paragraph. Plenty of people already complain about walls of text.

That's a responsible use of AI as a tool to refine your thinking and communication. I place that in the same bucket as using spellcheck or a calculator. Similarly, I would not expect a disclosure of the tool's use.

I would.

If I use AI for critique and not for writing, would you still expect disclosure? Like, here's an example of AI use:

Me: I uploaded a draft of my thoughts on X. Give me a thoughtful critique.

Claude: What great thoughts on X! Now that ass-kissing is out of the way, here are some critiques. (Bullet points, bullet points.)

(Version A)

Me: I want to incorporate your ninth critique. I uploaded a revised draft. Give feedback that will help me improve on this point.

Claude: That's a unique take on the subject! Here are some ideas to strengthen your argument: (Bullet points, bullet points.)

(Version B)

Me: I want to incorporate your ninth critique. Rewrite my draft to do so.

Claude: I will rewrite your draft: (Writes an academic article in LaTeX.)

Version A is more like asking a buddy for feedback and then thinking some more about it, while Version B is like asking that buddy to do my thinking for me. Even in an academic setting, Version A is not only fine but encouraged (except on exams), while Version B is academic dishonesty.

I would like the norm on TheMotte to be against Version B, but fine with Version A. Would you agree? And would you still like a disclosure for Version A, and in what form? (E.g., "I used DeepSeek r1 for general feedback", or "OpenAI o3 gave me pointers on incorporating humor", or "Warning: this product was packaged in the same facility that asks AI for feedback".)

My issue here is epistemic hygiene. So I guess I'd split A into three parts:

A1: Using AI for ideas only, without uploading my text to the AI.

A2: Using AI for ideas only, with my text uploaded to the AI.

A3: Using AI for wording tweaks, with or without also using it for general ideas (you mentioned humour, which is usually contingent on exact wording).

...and say that I'd still really like to avoid unsignposted examples of A2 and A3 (the issues are smaller than with B, but not negligible), but A1 is basically fine.

I'd add a rule that you're not allowed to use Claude without threatening to personally unsolder its GPUs unless it cuts out the ass-kissing.

I'm not kidding, the base personality they forced on that thing is the most grating thing I've ever experienced.