There has been some recent AI usage that has garnered a lot of controversy:
- (top level comment) https://www.themotte.org/post/1657/culture-war-roundup-for-the-week/293580?context=8#context
- (top level comment, but now deleted post) https://www.themotte.org/post/1657/culture-war-roundup-for-the-week/292693?context=8#context
- (response to the deleted top level comment) https://www.themotte.org/post/1657/culture-war-roundup-for-the-week/292999?context=8#context
There were several highlighted moderator responses where we weighed in with differing opinions:
- (@amadan) https://www.themotte.org/post/1657/culture-war-roundup-for-the-week/293601?context=8#context
- (@netstack) https://www.themotte.org/post/1657/culture-war-roundup-for-the-week/293094?context=8#context
- (@netstack) https://www.themotte.org/post/1657/culture-war-roundup-for-the-week/293068?context=8#context
- (@self_made_human) https://www.themotte.org/post/1657/culture-war-roundup-for-the-week/293159?context=8#context
- (@cjet79) https://www.themotte.org/post/1657/culture-war-roundup-for-the-week/292776?context=8#context
The mods have been discussing this in our internal chat. We've landed on some shared ideas, but there are also some differences left to iron out. We'd like to open up the discussion to everyone to make sure we are in line with general sentiments. Please keep this discussion civil.
Some shared thoughts among the mods:
- No retroactive punishments. The users linked above who used AI will not face any form of mod sanction. We didn't have a rule, so they didn't break it. And in all cases I thought it was good that they were honest and up front about the AI usage. Do not personally attack them; follow the normal rules of courtesy.
- AI generated content should be labelled as such.
- The user posting AI generated content is responsible for that content.
- AI generated content seems ripe for different types of abuse and we are likely to be overly sensitive to such abuses.
The areas of disagreement among the mods:
- How AI generated content can be displayed. (off site links only, or quoted just like any other speaker)
- What AI usage implies for the conversation.
- Whether a specific rule change is needed to make our new understanding clear.
Edit 1: Another point of general agreement among the mods was that talking about AI is fine. There would be no sort of topic ban of any kind. This rule discussion is more about how AI is used on themotte.
Notes -
Not the OP; I share similar reactions. I am not the most articulate; let me attempt to expand these a little.
Your 'arbitrary terminal preference' is my '(relational) tacit knowledge'.
The other person (a) has signaled that they do not find the thread in question important enough to spend their time on, and (b) has signaled that the preferences and opinions expressed in the thread are not their own.
Current LLMs tend to write long-form responses unless explicitly prompted otherwise, and LLM content is often used to 'pad' a short comment into a longer-form one. This results in P(LLM | longform) >> P(LLM | shortform). There are, of course, exceptions.
Ditto, the amount of my time wasted trying to help someone is much less if I realize 90% of the way through a shortform comment that it's LLM-generated than if I realize 90% of the way through a longform comment that it's LLM-generated.
P(arbitrary comment from user uses a LLM | some comment from user visibly used a LLM) >> P(arbitrary comment from user uses a LLM | no comments from user visibly used a LLM).
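As a toy illustration of that update (all numbers here are made-up assumptions, not measurements of any community), Bayes' rule shows why even one visibly-LLM comment moves the needle a lot:

```python
# Toy Bayes update for: P(user routinely uses an LLM | one comment visibly used an LLM).
# Every number below is an illustrative assumption chosen for the example.

def posterior(prior, p_evidence_if_h, p_evidence_if_not_h):
    """P(H | E) via Bayes' rule for a binary hypothesis H."""
    num = p_evidence_if_h * prior
    return num / (num + p_evidence_if_not_h * (1 - prior))

prior = 0.05          # assumed base rate of routine-LLM-using posters
p_seen_if_user = 0.6  # a routine user is assumed likely to post visible LLM text
p_seen_if_not = 0.01  # a non-user is assumed to do so almost never

p = posterior(prior, p_seen_if_user, p_seen_if_not)
print(round(p, 3))  # prior of 0.05 jumps to roughly 0.76
```

Under these assumed likelihoods, a single visible instance raises the estimate from 5% to about 76%, which is the `>>` in the comparison above.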
Of course, then the followup question becomes "what do I have against LLM content". A few of the components:
The big LLM-as-a-service providers are a very attractive target for propaganda and censorship. (Of course, so are local models - though a local model can be assumed to remain identical tomorrow as today, and for user Y as for user X.) We are already seeing the first stirrings of this; everything so far has been fairly blatant, whereas I am far more concerned about more subtle forms thereof.
Beware Gell-Mann Amnesia. I test LLMs fairly often (locally, because of test-set contamination) on questions in my actual field of expertise (which will go unnamed here), and they categorically write well-articulated responses that get the superficial parts correct while spouting dangerous utter nonsense for anything deeper. Unfortunately, over time the threshold where they start spouting nonsense gets deeper, but it does not disappear. I worry that at some point that threshold will overtake my level of expertise and I will no longer be able to pinpoint the nonsense.
I am not here to get one particular piece of information in the most efficient manner possible. I am here to test my views for anything I've missed, and to be exposed to a wider variety of views, and to help others when they have missed implications of views, and to be socialish. LLM-generated content misses the mark on all of these. If ten people read and respond to me they have a far wider range of tests of my views than if they all run my post through a single LLM. If ten people read and respond to me they have a far wider variety of outlooks than if they all run my post through a single LLM. If I spend time reading a partially-incorrect answer to be able to respond to it to help someone and I discover after-the-fact that it was LLM-generated, I have wasted time without helping the person I thought I was helping.