
Rule Change Discussion: AI produced content

There has been some recent usage of AI on the site that has garnered a lot of controversy.

There were multiple highlighted moderator responses in which we weighed in with differing opinions.

The mods have been discussing this in our internal chat. We've landed on some shared ideas, but there are also some differences left to iron out. We'd like to open up the discussion to everyone to make sure we are in line with general sentiments. Please keep this discussion civil.

Some shared thoughts among the mods:

  1. No retroactive punishments. The users linked above who used AI will not face any form of mod sanction. We didn't have a rule, so they didn't break it. In all cases, I thought it was good that they were honest and up front about the AI usage. Do not personally attack them; follow the normal rules of courtesy.
  2. AI-generated content should be labelled as such.
  3. The user posting AI-generated content is responsible for that content.
  4. AI-generated content seems ripe for different types of abuse, and we are likely to be especially sensitive to such abuses.

The areas of disagreement among the mods:

  1. How AI-generated content can be displayed (off-site links only, or quoted just like any other speaker).
  2. What AI usage implies for the conversation.
  3. Whether a specific rule change is needed to make our new understanding clear.

Edit 1: Another point of general agreement among the mods was that talking about AI is fine. There will be no topic ban of any kind. This rule discussion is about how AI is used on themotte.


Give LLMs zero credibility under the rules, and most of the situations can be handled smoothly.

  • Can you research using AI, and present your findings in a comment? Of course! You can research with anything, and the other people can push back on mistakes or low-quality sources. (You can also skip this step entirely and post without researching anything).

  • Can you research using AI, and present its findings in a comment? No, no more than you can dig up a random blog and copy/paste it in support of your argument.

  • Can you talk about something Claude said? Kind of. You can talk about something your uncle Bob said, but you shouldn't expect us to put any weight on the fact that he said it. Similarly, the LLM's statements are not notable. Go ahead and use it as a jumping-off point, though.

  • Can you use an LLM as a copyeditor? Go ahead and use whatever writing strategy you want.

  • Can you use an LLM as a coauthor? No, you have to write your own comments.

Maybe add a text host next to the image hosting we already have? It would give people a place to dump AI transcripts when appropriate.

We are finger-countable years away from AI agents that can meet or exceed the best human epistemological standards. Citation and reference tasks are tedious for humans, and are soon going to be trivial for internet-connected AI agents. I agree that epistemological uncertainty in AI output is part of the problem, but it is also the part most likely to be addressed by someone other than us. Besides, assuming AI output is unreliable doesn't address the problems of output magnitude or non-disclosure of usage and the resulting loss of shared trust, both of which are actually exacerbated by an epistemically meticulous AI.