There has been some recent usage of AI that has garnered a lot of controversy:
- (top level comment) https://www.themotte.org/post/1657/culture-war-roundup-for-the-week/293580?context=8#context
- (top level comment, but now deleted post) https://www.themotte.org/post/1657/culture-war-roundup-for-the-week/292693?context=8#context
- (response to the deleted top level comment) https://www.themotte.org/post/1657/culture-war-roundup-for-the-week/292999?context=8#context
There were multiple highlighted moderator responses where we weighed in with differing opinions:
- (@amadan) https://www.themotte.org/post/1657/culture-war-roundup-for-the-week/293601?context=8#context
- (@netstack) https://www.themotte.org/post/1657/culture-war-roundup-for-the-week/293094?context=8#context
- (@netstack) https://www.themotte.org/post/1657/culture-war-roundup-for-the-week/293068?context=8#context
- (@self_made_human) https://www.themotte.org/post/1657/culture-war-roundup-for-the-week/293159?context=8#context
- (@cjet79) https://www.themotte.org/post/1657/culture-war-roundup-for-the-week/292776?context=8#context
The mods have been discussing this in our internal chat. We've landed on some shared ideas, but there are also some differences left to iron out. We'd like to open up the discussion to everyone to make sure we are in line with general sentiments. Please keep this discussion civil.
Some shared thoughts among the mods:
- No retroactive punishments. The users linked above who used AI will not face any form of mod sanctions. We didn't have a rule, so they didn't break it. And in all cases I thought it was good that they were honest and up front about the AI usage. Do not personally attack them; follow the normal rules of courtesy.
- AI generated content should be labelled as such.
- The user posting AI generated content is responsible for that content.
- AI generated content seems ripe for different types of abuse and we are likely to be overly sensitive to such abuses.
The areas of disagreement among the mods:
- How AI generated content can be displayed. (off site links only, or quoted just like any other speaker)
- What AI usage implies for the conversation.
- Whether a specific rule change is needed to make our new understanding clear.
Edit 1: Another point of general agreement among the mods was that talking about AI is fine. There will be no topic ban of any kind. This rule discussion is more about how AI is used on themotte.
Can you make any argument in defense of your apparently instinctual reactions?
It sounds like you just feel entitled to an arbitrary terminal preference. That's not compelling.
This is because it indicates that the other user is not particularly engaged. Ordinarily, if I'm having a conversation, I know that the other person read my response, thought about their position, and produced what they thought was a suitable reply. If an AI is doing it, there's no longer much sign of effort, and it's fairly likely that I'm not even going to convince them. To expand on that point: if much of what fuels online discourse is "someone is wrong on the internet," that motivation disappears, because they won't even hear the words you're saying; they'll just feed them into a machine to come up with another response taking the same side as before. You may retort that you're still interacting with an AI and proving it wrong, but its recollection is ephemeral, and depending on what it's being told to do, it will never be persuaded, regardless of how strong your arguments and evidence are.
Currently, length indicates effort and things to say, which are both indicators of value. Not perfectly, but there's certainly a correlation.
With liberal use of AI, that correlation breaks down. That would considerably lower the expected value of a long-form post, in general.
At least you hedged it with "it sounds," but I don't think the preferences are arbitrary.
Yes. In short, AI makes it too easy to produce low-quality posts, given the finite energy and speed of our mods.
Metaphor:
Suppose you're a camp counselor (mod) and find that your campers (posters) sometimes smell bad (make bad posts). Ideally, you could just post a sign that says "you must not smell bad" (don't write bad posts) and give people total freedom, simply punishing those who break the rule.
In practice, you need stricter rules about the common causes of the smell, like "shower daily" or "flush the toilet" (don't use AI) - even though, sure, some people might not smell bad for a couple of days (use AI well) or might like the smell of shit (not hate AI-generated prose the way I freely admit I do). It's just not realistic to expect campers (us) to have good hygiene (use AI responsibly often enough).
Not the OP; I share similar reactions. I am not the most articulate; let me attempt to expand these a little.
Your 'arbitrary terminal preference' is my '(relational) tacit knowledge'.
The other person has signaled (a) that they do not find the thread in question important enough to spend their time on, and (b) that the preferences and opinions expressed in the thread are not their own.
Current LLMs tend to write long-form responses unless explicitly prompted otherwise, and LLM content is often used to 'pad' a short comment into a longer-form one. This results in P(LLM | longform) >> P(LLM | shortform). There are, of course, exceptions.
Ditto: far less of my time is wasted trying to help someone if I realize 90% of the way through a short-form comment that it's LLM-generated than if I realize the same thing 90% of the way through a long-form comment.
P(an arbitrary comment from a user uses an LLM | some comment from that user visibly used an LLM) >> P(an arbitrary comment from a user uses an LLM | no comment from that user visibly used an LLM).
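To make that concrete, here is a minimal sketch with entirely made-up numbers (the prior, hit rate, and false-positive rate below are assumptions for illustration, not measurements): applying Bayes' rule, a single visibly LLM-generated comment moves the estimate that a user habitually leans on an LLM from a small prior to a much larger posterior.

```python
# Illustrative only: made-up numbers showing why one visibly LLM-generated
# comment shifts the odds that a user's other comments are LLM-generated.

prior = 0.05              # assumed base rate: fraction of users who lean on LLMs
p_visible_if_llm = 0.60   # assumed chance an LLM-reliant user posts a visibly LLM comment
p_visible_if_not = 0.01   # assumed chance a non-LLM user posts something that looks LLM-generated

# Bayes' rule: P(LLM-reliant | visibly LLM-generated comment observed)
posterior = (p_visible_if_llm * prior) / (
    p_visible_if_llm * prior + p_visible_if_not * (1 - prior)
)

print(f"prior:     {prior:.2f}")      # 0.05
print(f"posterior: {posterior:.2f}")  # ~0.76 with these numbers
```

The exact numbers don't matter; as long as visibly LLM-generated comments are far more likely to come from LLM-reliant users than from everyone else, the posterior jumps well above the prior.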
Of course, then the followup question becomes "what do I have against LLM content". A few of the components:
The big LLM-as-a-service offerings are a very attractive target for propaganda and censorship. (Of course, so are local models - though a local model can be assumed to remain the same tomorrow as it is today, and the same for user Y as for user X.) We are already seeing the first stirrings of this; everything so far has been fairly blatant, whereas I am far more concerned about more subtle forms thereof.
Beware Gell-Mann Amnesia. I test LLMs fairly often (locally, because of test-set contamination) on questions in my actual field of expertise (which will go unnamed here), and they categorically write well-articulated responses that get the superficial parts correct while spouting dangerous, utter nonsense for anything deeper. Unfortunately, over time the threshold at which they start spouting nonsense gets deeper, but it does not disappear. I worry that at some point that threshold will overtake my level of expertise and I will no longer be able to pinpoint the nonsense.
I am not here to get one particular piece of information in the most efficient manner possible. I am here to test my views for anything I've missed, and to be exposed to a wider variety of views, and to help others when they have missed implications of views, and to be socialish. LLM-generated content misses the mark on all of these. If ten people read and respond to me they have a far wider range of tests of my views than if they all run my post through a single LLM. If ten people read and respond to me they have a far wider variety of outlooks than if they all run my post through a single LLM. If I spend time reading a partially-incorrect answer to be able to respond to it to help someone and I discover after-the-fact that it was LLM-generated, I have wasted time without helping the person I thought I was helping.
People often have knowledge which is difficult to express in words. That doesn't make it an arbitrary preference.
Sounds like they need LLM writing assistance more than anyone, then.