
Rule Change Discussion: AI produced content

There has been some recent usage of AI that has garnered a lot of controversy.

There were multiple highlighted moderator responses in which we weighed in with differing opinions.

The mods have been discussing this in our internal chat. We've landed on some shared ideas, but there are also some differences left to iron out. We'd like to open up the discussion to everyone to make sure we are in line with general sentiments. Please keep this discussion civil.

Some shared thoughts among the mods:

  1. No retroactive punishments. The users linked above that used AI will not face any form of mod sanctions. We didn't have a rule, so they didn't break it. In all cases, I thought it was good that they were honest and up front about their AI usage. Do not personally attack them; follow the normal rules of courtesy.
  2. AI generated content should be labelled as such.
  3. The user posting AI generated content is responsible for that content.
  4. AI generated content seems ripe for different types of abuse and we are likely to be overly sensitive to such abuses.

The areas of disagreement among the mods:

  1. How AI generated content can be displayed (off-site links only, or quoted just like any other speaker).
  2. What AI usage implies for the conversation.
  3. Whether a specific rule change is needed to make our new understanding clear.

Edit 1: Another point of general agreement among the mods was that talking about AI is fine. There will be no topic ban of any kind. This rule discussion is about how AI is used on themotte.


But that's qualitatively different from such a containment thread. The posts in such a containment thread would be determined by things like: what type of person would enjoy posting/reading in such a thread, what type of prompts would such people use, what LLMs such people would choose to use, and what text output such people would deem as meeting the threshold of being good enough to share in such a thread. You'd get none of that by simulating a forum via LLM by yourself.

You can just ask your LLM of choice what it would enjoy reading, what it deems as meeting the threshold of being good enough to share, etc., and go from there. And/or ask it to simulate a wide variety of personas with varying tastes and proclivities.

But an LLM simulating those humans is qualitatively different from those actual humans sitting at their computers or their phones all around the actual Earth tapping their fingers on the keyboards or screens in front of them.

Yes, it is qualitatively different. Which is precisely the reason why people don’t want AI content here in the first place.

Indeed, and that's why the above comment suggested a method for getting AI content in a way that preserves that quality (i.e., individual real humans making decisions with their own minds about what they post online) while keeping it contained so that people who still don't like it can avoid it. You've just circled back to the original point.

Like, it's reasonable to say that such prompting, filtering, and curating don't meet the bar you want humans to meet when posting on this forum. I actually lean in that direction myself, though I'm mostly ambivalent. But the idea that you can literally already do what was suggested by just using an LLM on your own is simply false.

But the idea that you can literally already do what was suggested by just using an LLM on your own is simply false.

I acknowledge that "my terminal value is that I'm ok with reading 100% AI-generated text as long as human hands physically copy and pasted it into the textbox on themotte.org" is a clever counterexample that I hadn't considered. I'm skeptical that any substantial number of people actually hold such a value set, however.

At any rate, I'm universally opposed to "containment zones," whether related to AI or not, for much the same reasons I opposed the bare links repository: one of the functions of rules is to cultivate a certain type of culture, and exceptions to those rules serve to erode that culture.

I acknowledge that "my terminal value is that I'm ok with reading 100% AI-generated text as long as human hands physically copy and pasted it into the textbox on themotte.org" is a clever counterexample that I hadn't considered. I'm skeptical that any substantial number of people actually hold such a value set, however.

I don't understand where you're getting the idea that it's a terminal value; could you explain your reasoning? In any case, the fact that the posted text was filtered through a human mind is information about that text, and it makes the contents of the forum substantively and meaningfully different from an LLM simulation of the forum. My point is that, for anyone who considers human thought and human input valuable in a web forum like this, the theoretical containment thread provides that value. Whether it provides enough value to make the thread worth having is a separate question.