Rule Change Discussion: AI produced content

There has been some recent usage of AI that has garnered a lot of controversy.

There were multiple highlighted moderator responses where we weighed in with differing opinions.

The mods have been discussing this in our internal chat. We've landed on some shared ideas, but there are also some differences left to iron out. We'd like to open up the discussion to everyone to make sure we are in line with general sentiments. Please keep this discussion civil.

Some shared thoughts among the mods:

  1. No retroactive punishments. The users linked above who used AI will not face any form of mod sanction. We didn't have a rule, so they didn't break it. And I thought in all cases it was good that they were honest and up front about the AI usage. Do not personally attack them; follow the normal rules of courtesy.
  2. AI generated content should be labelled as such.
  3. The user posting AI generated content is responsible for that content.
  4. AI generated content seems ripe for different types of abuse and we are likely to be overly sensitive to such abuses.

The areas of disagreement among the mods:

  1. How AI generated content can be displayed. (off site links only, or quoted just like any other speaker)
  2. What AI usage implies for the conversation.
  3. Whether a specific rule change is needed to make our new understanding clear.

Edit 1: Another point of general agreement among the mods was that talking about AI is fine. There will be no topic ban of any kind. This rule discussion is more about how AI is used on themotte.

Testing a feature here:

AI Art: Mind-bendingly beautiful prose.

Apologies for double-dipping, but what I want to know from the new rules is, if I:

  • put serious effort into a top-level post
  • and I collaborate with AI at some point in the process to jump-start a paragraph or to suggest ideas or to correct style
  • and I post it with the sincere expectation that it meets the usual bar and that others will find it interesting
  • and I am intellectually honest and say that I used AI

what happens to me?

The vast majority of posts below are commenting on low-effort uses of AI to win slapfights on the internet, but I want to know where the high bar is, if it exists at all.

I think this sounds fine in principle.

But suppose you make that post, and it actually sucks, and you didn't realize. I've definitely polished a few turds and posted them before without realizing; these things happen to the best of us. Now what? Does subsequent discussion get derailed by an intellectually honest disclosure of AI usage, and we end up relitigating the AI usage rules every time this happens?

On the one hand, I'd like to charitably assume that my interlocutors are responsible AI users, the same way we're usually responsible Google users. I don't necessarily indicate every time I look up some half-remembered factoid on Google before posting about it; I want to say that responsible AI usage similarly doesn't warrant disclosure[1].

On the other hand, a norm of non-disclosure whenever posters feel like they put in the work invites paranoid accusations of AI ghostwriting in place of actual criticisms. I've already witnessed this interaction play out with the mods a few days ago - it was handled well in this case, but I can easily imagine this getting out of hand when a post touches on hotter culture war fuel.

I don't think there's a practical way to allow widespread AI usage without discussion inevitably becoming about AI usage. I'd rather you didn't use it; and if you do, it should be largely undetectable; and if it's detectable, we charitably assume you're just a bad writer; and if you aren't, we can spin our wheels on AI in discourse again - if only to avoid every bad longpost on the Motte becoming another AI rule debate.

[1] A big part of my hesitation for AI usage is the blurry epistemology of its output. Google gives me traceable references to which I can link, and high-quality sources include citations; AI doesn't typically cite sources, and sometimes hallucinates stuff. It's telling that Google added an AI summarizer to the search function, and they immediately caught flak for authoritatively encouraging people to put glue on pizza. AI as a prose polisher doesn't have this epistemological problem, but please prompt it to be terse.

The high bar is probably somewhat undetectable, and thus unenforceable.

If any given paragraph is approximately 80% your writing and your ideas, I would not be overly concerned with labelling it.

In general I'd still suggest not doing it. I think in many scenarios you'd be better off just not including that paragraph. Plenty of people already complain about walls of text.

That's a responsible use of AI as a tool to refine your thinking and communication. I place that in the same bucket as using spellcheck or a calculator. Similarly, I would not expect a disclosure of the tool's use.

I would.

If I use AI for critique and not for writing, would you still expect disclosure? Like, here's an example of AI use:

Me: I uploaded a draft of my thoughts on X. Give me a thoughtful critique.

Claude: What great thoughts on X! Now that ass-kissing is out of the way, here are some critiques. (Bullet points, bullet points.)

(Version A)

Me: I want to incorporate your ninth critique. I uploaded a revised draft. Give feedback that will help me improve on this point.

Claude: That's a unique take on the subject! Here are some ideas to strengthen your argument: (Bullet points, bullet points.)

(Version B)

Me: I want to incorporate your ninth critique. Rewrite my draft to do so.

Claude: I will rewrite your draft: (Writes an academic article in LaTeX.)

Version A is more like asking a buddy for feedback and then thinking some more about it, while Version B is like asking that buddy to do my thinking for me. Even in an academic setting, Version A is not only fine but encouraged (except on exams), while Version B is academic dishonesty.

I would like the norm on TheMotte to be against Version B, but fine with Version A. Would you agree? And would you still like a disclosure for Version A, and in what form? (E.g., "I used DeepSeek r1 for general feedback", or "OpenAI o3 gave me pointers on incorporating humor", or "Warning: this product was packaged in the same facility that asks AI for feedback".)

My issue here is epistemic hygiene. So I guess I'd split A into three parts:

A1: Uses AI for ideas only without uploading my text to the AI.

A2: Uses AI for ideas only with uploading my text to the AI.

A3: Uses AI for wording tweaks, with or without also using it for general ideas (you mentioned humour, which is usually contingent on exact wording).

...and say that I'd still really like to avoid unsignposted examples of A2 and A3 (the issues are less than with B, but not negligible) but A1 is basically fine.

I'd add a rule that you're not allowed to use Claude without threatening to personally unsolder its GPUs unless it cuts out the ass-kissing.

I'm not kidding, the base personality they forced on that thing is the most grating thing I've ever experienced.

I will note one thing:

If somebody is detected having posted unmarked AI content (arguably including "AI disclaimer at the end"), that's got to be an instant ban. Maybe only, like, a two-month ban for the first offence, but it needs to be whacked really, really hard. Else theMotte dies because we lose common knowledge that we're talking to real people.

A holistic approach is better. Does the person have a history of only AI-assisted posts, or a mix of both? A lot of people use AI to assist with writing when stuck. An account that only posts AI-like content should be banned, though.

This sounds like a recipe for paranoid accusations of AI ghostwriting every time someone makes a disagreeable longpost or a weird factual error, and a quick way to derail subsequent discussion into the tar pit of relitigating AI rules.

If a poster's AI usage is undetectable, your "common knowledge" is now a "common misconception". Undetectable usage is unquestionably where the technology is rapidly headed. In the near future when AI prose polishing and debate assistance are widespread, would you rather have almost every post on the site include an AI disclaimer?

Edit: misread your original post - you arguably want to include AI disclaimers as bannable offenses. This takes the wind out of my sails on the last point... I'm going to leave it rudderless for now, might revisit later.

Easier said than done.

It isn't 2023, when it used to be trivially obvious that someone was using ChatGPT 3.5. Even without special care, LLMs output text that doesn't scream AI, especially if you're only viewing what might be the equivalent of a typical comment. Even large posts or essays can be made to sound perfectly human.

The only conclusive examples would be someone getting sloppy and including "Certainly! Here is a long essay about Underwater Basket Weaving in Nigeria".

(I felt like getting cheeky and using ChatGPT to draft a reply, and I'd bet you wouldn't notice, but I didn't, because I'm a good boy)

Even so-called AI detectors have unacceptable failure rates. We moderators have no ground truth to go off of, but of course, we do our best. This will, inevitably, not withstand truly motivated bad actors, but we haven't been swamped yet.

(I felt like getting cheeky and using ChatGPT to draft a reply, and I'd bet you wouldn't notice, but I didn't, because I'm a good boy)

I suspect that ChatGPT isn't "clever" enough to insert this kind of line when prompted to write a forum comment, but any time I see a line like this in a comment, that's what my mind goes to first.

With the default system prompt it won't say stuff like that; if you use something like eigenrobot's system prompt, it will.

It wouldn't do so without explicit instruction or example. Not that it isn't capable of doing so; it's simply not default behavior.
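
(For anyone unfamiliar with how this works: the "personality" is largely set by a system message passed alongside your own. A minimal sketch of the idea, assuming the official OpenAI Python client; the model name and prompt text are purely illustrative, not a record of anyone's actual setup:)

```python
# Minimal sketch: overriding the default persona with a custom system prompt.
# Assumes the official OpenAI Python client (pip install openai); the model
# name and both prompts are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        # Omit this system message and you get the stock persona instead.
        {"role": "system", "content": "Be terse and dry. No flattery, no disclaimers."},
        {"role": "user", "content": "Draft a short forum comment about basket weaving."},
    ],
)
print(response.choices[0].message.content)
```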

Hence why I said "detected". I am aware you can't get them all; I'm merely noting that those that do fuck up should be harshly punished.

Fair enough. It's still an unpleasant call to make as a moderator, since we won't get ground truth without a major misstep or the person copping to it.

Yeah, obvious detections will get harshly punished. I suspect most detections will be in some gradient of uncertainty, and we will lower the punishment or not impose it at all based on that uncertainty.

Give LLMs zero credibility under the rules, and most of the situations can be handled smoothly.

  • Can you research using AI, and present your findings in a comment? Of course! You can research with anything, and the other people can push back on mistakes or low-quality sources. (You can also skip this step entirely and post without researching anything).

  • Can you research using AI, and present its findings in a comment? No, no more than you can dig up a random blog and copy/paste it in support of your argument.

  • Can you talk about something Claude said? Kind of. You can talk about something your uncle Bob said, but you shouldn't expect us to put any weight on the fact that he said it. Similarly, the LLM's statements are not notable. Go ahead and use it as a jumping-off point, though.

  • Can you use an LLM as a copyeditor? Go ahead and use whatever writing strategy you want.

  • Can you use an LLM as a coauthor? No, you have to write your own comments.

Maybe add a text host next to the image hosting we already have? It could give a place to dump them when appropriate.

We are finger-countable years away from AI agents that can meet or exceed the best human epistemological standards. Citation and reference tasks are tedious for humans, and are soon going to be trivial to internet-connected AI agents. I agree that epistemological uncertainty in AI output is part of the problem, but this is actually the most likely to be addressed by someone other than us. Besides, assuming AI output is unreliable doesn't address the problems with output magnitude or non-disclosure of usage/loss of shared trust, both of which are actually exacerbated by an epistemically meticulous AI.

A common criticism of AI slop is that it's not really that creative. It's mid. It only gives you a sort of average (let's set aside the question of whether you can make that sort of average still be approximately academic-tier average). That is still useful for at least one type of application: when an extremely obstinate commenter acts like their words don't mean things that regular, average people would think they mean. An example here, where someone basically just tried repeating a phrase, as if it was an argument on its own, as if saying it again made it mean something else that it didn't mean.

In these cases, it's useful to just ask the AI, "What does this mean?" It gives you a perfectly mid interpretation. A basic, what does this mean to a normal person? A "duh" check. An obviousness check. An easy way to point out, "If you want this to have some esoteric meaning, you need to put in a modicum of effort to justify that esoteric meaning (and explain it as though everyone is reading), because it doesn't mean that to anyone else."

Lol that discussion was already completely dead and pointless, and you made it infinitely worse by flushing it with pressurized AI sewage.

Hell yea - I’ve finally been highlighted!!!

Do not personally attack them

Come at me

On a more serious note, I could read ‘AI slop’ for days and find it interesting, and unlike other people’s fantasy football teams, I find others’ AI interactions fascinating as well.

I understand that this place does not, which I also find fascinating, and honestly it makes me understand plot points of future anti-AI readings & cinema better.

A good reason for this not to be a place to read AI slop is that place already exists: https://chatgpt.com/. People who want to can just generate it themselves. Unless you are privy to some super advanced proprietary AI that isn't yet available to the public and blows all of the public ones out of the water, each and every one of us can go ask ChatGPT any questions about any topics we care about and get generic AI responses to them. Or better yet, stuff more tailored to our own interests and in our own styles, and can ask followup questions. Same reason I don't need to see your AI generated pictures or hear your AI generated music, but am enjoying generating my own.

AI land already exists. I'd like this place to exist too and be distinct from it.

Come at me

I lol'd, as they say. Props for being a good sport about this.

Can't we just... not?

Using AI as a (lazy) reference (e.g. "ChatGPT says the GDP of Elbonia is $1.5B") is bad -- because nobody can be sure that GPT is correct in this, and therefore if you want to engage with the point you need to look up the number yourself first and make sure it's not a hallucination.

Using AI as a source of argument (e.g. "ChatGPT says that Roko's Basilisk is baloney: <insert long AI argument> I think this is wrong because...") is bad for all the reasons many have pointed out (boring, low effort...), and putting it in spoiler tags, quotes, offsite links, smoke signals, whatever -- doesn't change this.

Quoting AI output as an example in a discussion about AI quality (e.g. "Look what ChatGPT can do; the singularity is surely nigh!") or similar is probably OK -- but in most cases might be off-topic in the CW thread, and better placed in "Tinker Tuesday" or something.

So how about:

"No use of AI output in posts, with the distinction between 'use' and 'mention' subject to mod judgement -- keeping in mind that relevance and 'don't be annoying' rules still apply".

I don't necessarily want AI banned, but I'd like to see heavy moderation against a few common failure modes.

The first is that it's often used as an appeal to authority. "I'm right because the God machine agrees with me" is a bad argument, but it seems to be a common one in the LLMposting on this site.

The second is that it seems to be used to pad the word count of a post. Word count as a proxy for quality is already a problem on this site, and AI posts make it worse.

If someone is using it as an augmented search engine, that's fine, and it should probably be treated as such. Bare links to Google would trigger a ban, so why wouldn't dumping pages upon pages of AI output?

Bare links to Google would trigger a ban

To nitpick, it probably wouldn't warrant a ban. A warning, certainly, but bans would come up if they didn't stop and didn't contribute anything of note. I can't say that situation has actually come up outside of actual spam.

I haven't gone out of my way to consume gen-AI content, for whatever that's worth, but my idea of good use of LLMs for public discourse is appellate litigator Adam Unikowsky incorporating analysis of Claude-authored hypothetical legal briefs and opinions in his explanations of legal controversies/theories. I don't see a constructive role for gen-AI content in a human forum like this one. From the CWT description:

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

Asking an LLM to steelman your outgroup or your own opinion and posting the output would be counter to the stated goals of The Motte. If anything, using an LLM for your comments just makes it easier to be "egregiously obnoxious."

From that link:

Kendall v. United States ex rel. Stokes, 37 U.S. (12 Pet.) 524 (1838). After the Postmaster General refused to pay a contractor for delivering mail, the contractor persuaded Congress to pass a bill requiring the Postmaster General to pay him. The Postmaster General refused, so the contractor sued. Held: the Postmaster General was required to pay.

Damn. I guess Congress was either really bored and underworked at the time, or they thought acting as a small-claims court was a cushy gig.

How is this not a bill of attainder? Because he's working for the government?

The Constitution Annotated:

A bill of attainder is legislation that imposes punishment on a specific person or group of people without a judicial trial.

A law that imposes a benefit, rather than a punishment, on a specific person is perfectly permissible.

I don't think "you have to pay someone" is a benefit.

It seems that, in the court case, bills of attainder were not discussed at all. Rather, after the private law was passed and the postmaster general still refused to pay, a district court issued a writ of mandamus ordering the postmaster general to pay. The entire discussion is centered on whether the district court was empowered to issue that writ of mandamus.

When an act of Congress imposes on an officer of the executive department, for the benefit of a private party, a duty purely ministerial, the performance of that duty may be coerced by mandamus, by any court to which the necessary jurisdiction shall have been given.

"Private laws" that affect only a specific person are pretty rare nowadays, but they do sometimes get passed. The most recent example, from year 2022: "Notwithstanding other laws, these three people shall be eligible for immigrant visas."

I'm guessing it's a matter of scale - if USPS contracted out major services and refused to pay on delivery (no pun intended) in today's world, any firm large enough to have performed the contracted major service would be large enough to lobby for Congressional intervention (even if Congressional intervention no longer takes the form of passing legislation, increasing the apparent difference). The same was presumably true when both government and government services were smaller.

(Edit: And, of course, Congress had an interest in keeping potential contractors confident in the government as potential customer.)

Quick thoughts: AI text should at least be in quote format,

like this

and maybe said quotes should have a red line to distinguish them as such. Don't know how you'd enforce using a special markup format for that, though.

My $0.02: non-critical appendix-style references to AI output are probably okay. Usage for generating discussion or argument should be banned. We do need a rule to match expectations between users and mods, to avoid encouraging excessive attempts at AI ghostwriting, and to reduce paranoid accusations of AI slop in place of deserved criticism.

  • The cost of generating AI content is so low that it threatens to trivially out-compete human content. The volume of output and the speed of processing by AI make for an extremely powerful gish-gallop generator.
  • Unlike our resident human gish-gallop generators, nothing I say to the AI will meaningfully change its mind. AI can simulate a changed mind, but with substantial limitations and ephemeral results. Personally, the draw of the Motte is the symmetric potential to have my own mind changed and to change others' minds by sharing our own unique experiences and perspectives. (I am open to future AI advances that make debating the AI similarly engaging, but we're not there yet.)
  • Quoting books, blog posts, etc. is an acknowledgement of the perspective and effort applied by the human being cited, regardless of topic-level value alignment. AI does not develop perspectives or apply efforts in ways that warrant social considerations (at least, not presently).
  • Quoting a source also serves as a natural bridge to further learning and discovery of the source for anyone interested. There can be valuable context, history, or interpersonal relationships surrounding the quote. In this sense, AI mostly generates shallow engagement opportunities. Where it could be more engaging (e.g. reference discovery or Google search replacement), I'd prefer to take recommendations from someone with skin in the social game.
  • Importantly, quotation is typically brief, poignant, and insightful. I'll grant that brief, poignant, and insightful are possible properties of AI output, but I've yet to see anything worth quoting by those criteria.
  • Pastebinning or spoiler-tagging AI output is an invitation for me to skip it. I'm okay with this for mentions or references, where there is already an implicit understanding that I may skip or summarize the content. I am not okay with "see my response [here](www.aislop.com)" replies.
  • I strongly agree with @SubstantialFrivolity that responding to a human with a wall of AI text creates an impression of "I can't be bothered, talk to my assistant instead." It's very rude. Critically, no amount of initial prevaricating about the effort you spent prompting, tweaking, and blessing the output makes this any less rude. On the other hand, if I can't tell if you used AI, you're likely using it well enough that I don't mind. It is in principle possible that I am already interacting with several longstanding AI characters and I just don't realize. The quality of AI output to date is not compelling evidence for this possibility. I also suspect that for each person successfully using AI to ghostwrite their posts, there would be ten other clumsy attempts that obviously fail. I feel that anything other than a blanket ban on AI ghostwriting is an invitation for people to push their luck, and will lead to more AI slop, more paranoid accusations of AI slop when mere slop is sufficient, and more moderation headaches as a result.
  • The growing pool of modhat "we didn't order you not to do this, but don't do this" posts on AI slop is a strong indication of an impedance mismatch between the expectations of mods and users, and of a need for unambiguous rules about how AI should or should not be used here.
  • I'm open to reviewing any rules made about AI posting in the coming years as AI gains increasing agency.

Aside: is $0.02 competitive for this amount of inference?

Unlike our resident human gish-gallop generators, nothing I say to the AI will meaningfully change its mind.

Woah, woah, woah, you might be expecting a bit too much of us.

(Seriously though, this is a good point.)

Aside: is $0.02 competitive for this amount of inference?

I believe that the going rate is about tree fiddy.

In the long term, I think the only way a forum like this survives is if users learn to be bored by AI-tier content. We've just seen some labelled AI posts get mass-downvoted, but I think something like the Mihow post could totally have been a normal top-level post and even been successful if it was a current topic. IMO it's a bad post, it doesn't tell me anything I didn't know, and this is something everyone needs to internalise, downvote, and collapse. You should think primarily about how to get there - once we can all agree about what is unlabelled AI content, or might as well be, the remaining problem won't be difficult.

"AI-tier content" will rapidly improve (unless WWIII, in which case theMotte will probably not be available either). This isn't a permanent solution.

I don't really agree. The current wave of AI improvements happened because someone found a way to utilise a much bigger dataset than anyone before. This is not a way to make indefinite improvements, and we've already picked the low-hanging fruit.

But if I'm wrong about that, then I'm not sure there's much else we can do?

I'm voting a hard no; I would no longer read this forum. I'm here because I know people are taking the time to write and post their personal opinions. I believe Reddit is increasingly 'dead internet' thanks to AI and astroturfed subreddits controlled by superusers utilizing AI tools. I really don't want this place to go in the same direction.

Could we have a length limit on AI posts? I don't mind AI generated content, but AI content often comes as very long posts. Perhaps a 200-word limit would allow things like AI summaries of long things while pushing for self-written portions of top-level posts? Maybe also a tag, like 4chan's greentext, that would make it very clear where it is used?

Not sure what the ideal length would be, but I had the same thought.

If there is any AI generated content allowed, I would suggest that it is required to be behind spoiler tags.

Oh please no, giant blocks of spoiler tags are awful. It takes just as much space on the screen, and you have to click through every single one individually.

Collapsible would be ideal. Hell, I'd use it for dumping citations instead of the obnoxious "every sentence has a link" habit we picked up from Scott's early writing.

If I wanted to read polished prose expressing RLHF-ed opinions on charged current-events topics, I would read the New York Times.

The Motte expects thoughtful engagement. Someone posting AI output in place of their own ideas is being neither thoughtful nor engaged.

I support having a rule disallowing blocks of AI-generated text, with exception of meta discussions (like, "Claude output X to prompt P; here's what I think", or "Given prompt P, O3 says X but R3 says Y; here's what I think.").

Would it be possible to have an AI containment thread where all the people who want to experiment with AI discussion can do so without effecting the rest of the site?

(Nit: you likely meant 'affecting' not 'effecting' in this context.)

I'm never confident in this one, and just end up using impact instead.

Simply remember that the noun and the verb are different: "effect" for the noun and "affect" for the verb. "Effect" as a verb means something different, and it's never worth it to use it: you'll impress 1%, 49% will smugly and mistakenly correct you, 50% won't notice anything or care.

There's actually also affect as a noun. I come across all of them every so often.

All the uses:

  • Affect (verb): Influence, have any sort of impact upon. Pollution affects your health.
  • Affect (verb)(second meaning): assume a behavior as a display. He spoke with an affected British accent.
  • Affect (noun): emotional state. Mostly used in psychology-ish settings. Unlike all the others, the accent is on the first syllable: /ˈæ.fɛkt/. I couldn't think up a natural example, so from an online dictionary:

Evidence from several clinical groups indicates that reduced accuracy in decoding facial affect is associated with impaired social competence.

  • Effect (noun): The result of some action or occurrence. The effects of rent control have been studied quite well enough, thank you.
  • Effect (verb): Bring to accomplishment, cause. Lincoln's Emancipation Proclamation did not at once effect freedom for all the slaves, seeing as the local authorities were not exactly inclined to listen.

I skipped it because it's from an entirely different area, and so as not to muddle the mnemonic, which was already faltering.

Hmm. Estimated order of importance to know:

  1. Effect (noun)
  2. Affect (verb)(1)
  3. Affect (verb)(2)
  4. Effect (verb)
  5. Affect (noun)

All of these are common enough that I'd expect most people who are very literate to know each of them, but the first two are far more common than the last three, and the last least of all.

But, also, affect as a noun sounds totally different from all the rest, so it's hard to confuse.

Not sure what mnemonics are good. When you affect something, you effect effects.

"Just remember the right one" isn't really advice 😂

I just never use either as a verb. Not worth the mental effort.

Not so much "remember the right one" as "forget the wrong one completely" because you'd be unlikely to forget that the noun is the "e" one, wouldn't you? Then the verb synonymous with "impact" starts with the other letter.

There actually is a noun "affect", although it's psychology jargon.

As you say, it's from a different area, so unlikely to get involved in the mix-up. But yes, you could have all four quadrants in "affect the affect to effect an effect."

You can literally already do that by just having a conversation with your AI of choice.

You can even tell it to pretend to be different people to simulate the real forum experience!

But that's qualitatively different from such a containment thread. The posts in such a containment thread would be determined by things like: what type of person would enjoy posting/reading in such a thread, what type of prompts would such people use, what LLMs such people would choose to use, and what text output such people would deem as meeting the threshold of being good enough to share in such a thread. You'd get none of that by simulating a forum via LLM by yourself.

You can just ask your LLM of choice what it would enjoy reading, what it deems as meeting the threshold of being good enough to share, etc, and go from there. And/or ask it to simulate a wide variety of personas of varying tastes and proclivities.

It's interesting to see what kind of clothes people wear, even though they don't sew the clothes themselves.

But an LLM simulating those humans is qualitatively different from those actual humans sitting at their computers or their phones all around the actual Earth tapping their fingers on the keyboards or screens in front of them.

Yes, it is qualitatively different. Which is precisely the reason why people don’t want AI content here in the first place.

Indeed, and that's why the above comment was suggesting a method by which we could get AI content in a way that keeps that quality - i.e. individual real humans making decisions using their minds with respect to what they post online - but contained in a way that allows people who still don't like it to avoid it. You've just circled back to the original point.

Like, it's reasonable to say that such prompting, filtering, and curating don't meet the bar that you want humans to meet when posting on this forum. I actually lean in this direction, though I'm mostly ambivalent on this. But the idea that you can literally already do what was suggested by just using an LLM on your own is simply false.

But the idea that you can literally already do what was suggested by just using an LLM on your own is simply false.

I acknowledge that "my terminal value is that I'm ok with reading 100% AI-generated text as long as human hands physically copy and pasted it into the textbox on themotte.org" is a clever counterexample that I hadn't considered. I'm skeptical that any substantial number of people actually hold such a value set however.

At any rate I'm universally opposed to "containment zones", whether related to AI or not, for similar reasons that I oppose the bare links repository -- one of the functions of rules is to cultivate a certain type of culture, and exceptions to those rules serve to erode that culture.

I acknowledge that "my terminal value is that I'm ok with reading 100% AI-generated text as long as human hands physically copy and pasted it into the textbox on themotte.org" is a clever counterexample that I hadn't considered. I'm skeptical that any substantial number of people actually hold such a value set however.

I don't understand where you're getting the idea that it's a terminal value; could you explain the reasoning? In any case, the fact that the text that was posted was filtered through a human mind is information about the text that makes the contents of the forum substantively and meaningfully different from an LLM simulation of the forum. My point is that, for anyone who considers the human thought and human input to be valuable to have in a web forum like this, this theoretical containment thread provides that value. Is it enough value to make it worth having it? That's a separate question.

I’m opposed to AI in top level comments. It’s verbose, low effort, and says little of worth. Write it yourself.

In debates over whether it's ok to use AI for X, it's helpful to ask "instead of asking an AI to do it, what if I asked another human to do it? Would that be ok?"

So the equivalent here would be, instead of dumping AI output as a top level post, what if you just copy and pasted an article from someone else's substack and offered that as a top level post? And that's already not ok. It would be considered low effort and essentially no different from a bare link. We're here to read your writing, not some other entity's writing (whether that entity is human or not).

Failing to attribute the source doesn't make it any better, and if anything it just makes it worse. IIRC people have gotten banned in the past for copying other people's articles and posting them here without attribution.

Of course the use/mention distinction applies and I don't think anyone has a problem with quoting AI text when it's relevant in a discussion about AI.

In debates over whether it's ok to use AI for X, it's helpful to ask "instead of asking an AI to do it, what if I asked another human to do it? Would that be ok?"

I like the metaphor, but I think it misses the key detail that AI output has a trivial effort floor. Your "copying from substack" metaphor is spot on though.

I at least agree that it should at most be included where people would be okay with using another human's text. But I think that there are still some cases where people might be okay with you quoting someone, but would not be okay with you using an AI to write up a quote.

I think this is an excellent metric to use.

As a pretty non-prolific contributor, I'm not sure my opinion means very much. But, here is my take. Use or discard it as you will.

AI-generated content being used by a contributor to make an argument should not be allowed. "I think X position is correct, so I asked AI to come up with reasons to support my statement" is pointless. People don't come here to argue with AI, or to find out how AI will support an argument you ask it to; they can just go to the AI and ask it themselves. If you want to use AI to make a case for a position you hold, you should at the very least be willing to rewrite what it spits out in your own words, and own them as your own. Using AI as a research tool to help you write your own post is fine; using it as the substance of the post itself is wrong.

AI-generated content being used by a contributor to make a critique or demonstration of AI in general (AI-meta, I suppose you could call it), in which the content of the AI is used not as a way to strengthen the poster's argument but as a topic of discussion itself, should be allowed. "I asked [specific AI] about [particular issue], it said X. I think it said this because Y" is a potentially interesting and valid discussion topic, as is the development of AI in general; these are things that can be done better with snippets that AI produces. These should be snippets, not long blocks, and should not be used to advance the argument of the post in-and-of-themselves.

I hope this isn't too consensus building, but I think the way AI posts (meaning posts that mainly consist of AI-generated text, not discussion of AI generally) get ratio'd already gives a decent if rough impression of the community's general sentiment. ...eh, on second thought it's too subjective and unreliable a measure, nevermind.

If we allow AI content but disallow "low-effort" AI content, I guess the real question here is - does anyone really want to be in the business of properly reading into (explicitly!) AI-generated posts and discerning which poster is the soyjak gish-galloping slopper and which is the chad well-researched prompt engineer, when - crucially - both outputs sound exactly the same, and will likely be reported as such? If prompted right AI can make absolutely any point with a completely straight "face", providing or hallucinating proofs where necessary. I should know, Common Sense Modification is the funniest shit I've ever prompted. You can argue this is shitty heuristics, and judging the merits of a post by how it "sounds" is peak redditor thinking and heresy unbecoming of a trve mottizen, and I would even partly agree - but this is exactly what I meant by intellectual DDoS earlier. I still believe the instinctive "ick" as it were that people get from AI text is directionally correct, automatically discarding anything AI-written is unwise but the reflexive mental "downgrade" is both understandable and justified.

Another obvious failure mode is handily demonstrated by the third link in the OP: AI slop all too easily begets AI slop. I actually can't see anything wrong with, or argue against, the urge to respond to a mostly AI-generated post with a mostly AI-generated reply - indeed, why wouldn't you outsource your response to AI, if the OP evidently can? (But of course you'd use a carefully-fleshed out prompt that gets a thoughtful gen, not the slop you just read, right.) If you choose to respond by yourself anyway, what stops them from feeding your reply right back in once more? Goose, gander, etc. And it's all well and good, but at this point you have a thread of basically two AIs talking to each other, and permitting AI posts but forbidding to do specifically this to avoid spiraling again requires someone to judge which AI is the soyjak and which is the chad.

TL;DR: it's perfectly reasonable to use AI to supplement your own thinking, I've done it myself, but I still think that the actual output that goes into the thread should be 100% your own. Anything less invites terrible dynamics. Since nothing can be done about "undeclared" AI output worded such that nobody can detect it (insofar as it is meaningfully different from the thing called "your own informed thoughts") - it should be punishable on the occasion it is detected or very heavily suspected.

My take on the areas of disagreement:

  1. Disallow AI text in the main body of a post, maybe except when summarized in block quotes no longer than 1 paragraph to make a point. Anything longer should be under an outside link (pastebin et al) or, if we have the technology, embedded codeblocks collapsed by default.

  2. I myself post a lot of excerpts/screenshots so no strong opinion. AI is still mostly a tool, so as with other rhetorical "tools" existing rules apply.

  3. Yes absolutely, the last few days showed a lot of different takes on AI posting so an official "anchor" would be helpful.

If AI posting is normalized, I will skip over any post that doesn't get to the point in the first two sentences. Length was always a bad predictor of how much effort someone put into their post, but with AI, it will be a negative.

I will skip over any post that doesn't get to the point in the first two sentences.

I already do that for anything that doesn’t get to the point within the first paragraph. I strongly recommend everyone else do the same, and I think this place would become much better if everyone did.

Beware that this tends to exclude any point that is sufficiently complex that it cannot be simplified to a paragraph without presupposing the audience is already an expert in the topic.

This may be acceptable fallout, but is still worth explicitly noting.

The point doesn't have to fit in a single paragraph but the comment absolutely does have to get into it within a paragraph.

Or to put it another way, FFS people, stop waffling around like Scott and start with your point right away. Then you can expand around it after you've shown the reader that you're not just wasting their time.

Ah sorry, let me rephrase to clarify:

Beware that this tends to exclude any point that is sufficiently complex that getting into it [the point] cannot be simplified to a paragraph without presupposing the audience is already an expert in the topic.

I don't know. Every longer text needs a good hook, but there are other ways to do it than by spilling the main point in the first paragraph.

TLDR: mod on content, not provenance.

A good post is enjoyable to read and it is well argued. Somebody who is using AI in some way to post more interesting, well-argued essays than they could write entirely by hand is improving the Motte, and should be encouraged. Using AI to post low-effort walls of text should be a bannable offence.

Specifically:

  • AI-written or edited content should be labelled clearly.
  • AI use should be considered a strong aggravating factor for low-effort or poor discussion, and should quickly escalate to bans if needed. The quality bar should be kept high for AI-adjacent content.
  • Otherwise, do nothing.

Yes, this is subjective, but all of our rules are subjective. In practice, I trust the mods to handle it.

TLDR: mod on content, not provenance.

Except the use of AI qualitatively changes the nature of the content; your own suggestions hint at this. A "handwritten low-effort wall of text" is pretty much a contradiction in terms; it probably deserves a gentleman's C by default. If someone put in the time to write it, even if the arguments are hot garbage, other things being equal you can assume they care, that they want to be taken seriously, that they want to improve, etc. None of this holds true when you post AI slop, because you can generate it with all the effort of writing a one-line sneer.

If you're asking for clear labelling and recommending that the use of AI be taken with a presumption of low-effort, you're already moderating on provenance.

A "handwritten low-effort wall of text" is pretty much a contradiction in terms

No it's not. I could write a full-page rant in maybe double the time it takes just to type it.

I'm guessing you're uncommonly good at rapid handwriting.

No, I do mean write as in type out. I would be spending about equal time on thinking and physical typing. It's not difficult if you set the standard low enough - it's just that this is not actually something people do here.

Oh. I think the word "handwritten" was very literal there, the point being that if you have literally handwritten a wall of text, it is impossible for you to have not invested substantial effort. There is a reason that books were uncommon and valuable before the printing press.

Ctrl-C on an AI wall is the other extreme of required effort for length, with typing somewhere in the middle.

Oh. I think the word "handwritten" was very literal there

No, typing still fell into what I meant by it anyway.

Oh, okay. Mea culpa.

"Pretty much". I'm not saying it can't be done to prove a point, I'm saying next to noone does it in the natural course of posting.

A "handwritten low-effort wall of text" is pretty much a contradiction in terms

If average American political consumers started writing walls of text here, we would (and should) start moderating them. Doing the same to AI is fine.

Except a big reason we don't have them in the first place is that we have the longpoast filter. It's the number one complaint of dramanauts and le average redditors (at least the ones that don't run away screaming about Nazis).

If we came across one that managed to string together a longer sentence by himself, but still didn't make the cut of what we expect here, I don't think it would be proper to ban them on sight. They're obviously trying, and the effort should be commended. With AI there is no effort, so it should be banned on sight.

Kind of? On a technical level, the median AI essay is both easier to create and lower quality than the median motte post. I want to strongly discourage people from spamming bad content because it’s bad content, especially at first while norms are being established.

But lots of other posters are arguing that posting AI-generated words is inherently wrong or goes against the purpose of the site. That if the words were not crafted in the brain of a human then discussing them is worthless and they should be banned regardless of their content. I think some people would be more offended by a good AI post than a bad one, because they’d been lured into paying attention to non-human writing. THAT is what I mean by ‘moderating for provenance’.

I should note that I’m mostly thinking of top-level and effort-posts here. If you’re involved in a downthread debate with a specific person then I can see that drafting in a more eloquent AI to continue the battle when you lose interest is poor form, at least unless you both agree to it.

(The labelling is partly practical and partly a moral conviction that you shouldn’t take credit for ideas you didn’t have).

But lots of other posters are arguing that posting AI-generated words is inherently wrong or goes against the purpose of the site.

I know, I'm one of them.

I wouldn't say I'd be offended by a good AI post; some of my favorite strings of words are AI-generated, like that Dr. Seuss poem about whether armed citizens could win against their military, or that sci-fi story passage about human rebels using the n-word to detect android infiltrators.

I know there are ways to use AI in a way that doesn't go against the point of this forum, I even suggested one myself, but:

a) that's not how AI was used so far, even by some of our best posters, which is why we're even having this conversation, and

b) doing it properly would probably make the use of AI undetectable in the first place.

I can even go further and say it's possible for human posters to make discussion pointless in exactly the same way that talking to an AI is (we've had one or two people like that), but that should get the banhammer just as much as AI-posting.

I agree with much of what you say, and I think some level of moderation is necessary and desirable. It should be possible to address this stuff you raise in this post whilst still primarily modding for content.

My particular concern is that I would like to do a really proper AI-assisted effort-post (category B, undetectable) in the near future. For moral reasons I would prefer to give credit to the AI where appropriate and would like not to be modded for it if the post is otherwise up to my usual standards.

I find that the concept of "interesting" is often used here and on DSL in toxic ways. It's too easy to call a post "interesting" when people are responding to it a lot because of its flaws, deliberate (for trolling) or otherwise (for AI). Well-argued is fine. Interesting shouldn't even be on the table.

Edited. I'm using 'interesting' as 'enjoyable to read'. That is, a good post is (a) something you want to read, and (b) something you gain by reading. Does that help?

There are some people who claim that they will never find anything that AI writes enjoyable simply because they know it's not produced by a human, but I think that's cutting off their nose to spite their face.

Because I know what Internet pedants are like, and because even non-pedants will fake being pedants to score points, we need a use/mention clause. Using AI-produced material as an example when discussing the topic of AI, rather than to promote the ideas expressed in the AI-produced material, doesn't count.

Added above in the edit, this would absolutely not be a topic ban of any form.

I'm not a fan of AI generated text, but mostly because I believe it to be incapable of any original insight (ignoring the fact it's poisoned with a certain brand of selfishness).

I have a hard enough time when actual human users do that; a pithy one-liner is superior to an AI wall of text because in the former case I know the conversation's going nowhere, whereas AI use in that case is basically just a sophisticated version of a Gish gallop.

By that token I find "I asked an AI tool because I couldn't be bothered to make an actual fucking argument, here is what it said" to simply be egregiously obnoxious, as that is the general tone of the comments that do this. It's not leveraged to create meaningful discussion, that's for sure- otherwise, why go through the trouble of saying "AI says X"?

There has yet to be an effective suppressing meme for "I'm using AI, and it has drawn you as the soyjack, and me as the chad" types of comments, much like "tits or GTFO" is used on 4chan as a rejection of trying to cover up insubstantive commenting with social privilege.

I think the difficult thing, as with most arguments here, is that context matters.

When I've seen AI used here, it's usually as a stick to beat someone else with ("look, it agrees with me!"), or a stick to beat AI with ("look at how wrong it is!"), or a stick to beat those in control of the AI with ("look at how wrong they made it!").

The only other case I've seen it used is "I'm too lazy to write my argument to the level of wordcel you guys want for a top level post, AI wrote it for me".

The easiest thing to do is to just ban all of it, because it's onerous to sort the wheat from the chaff. But I also see it becoming non-trivial to do so in the near future. The smarter artists are already using AI as part of their workflows or to make some of their work easier. And I've definitely seen worse arguments by people here than Deepseek manages to make on occasion.

  1. When talking about AI content, put examples in quotes if they are brief, provide an off-site link if they are verbose.
  2. When using AI content, treat it like citing Wikipedia in your high school paper: using it for research purposes is fine as long as you understand its limitations. Using AI content in your messages is not allowed; trying to hide the fact and pass the content off as your own is just as bad as saying "here's what ChatGPT thinks about X".

But these cases are easy to judge. Here's one where it's less clear-cut: someone believes they are discriminated against by the wordcels of The Motte because they cannot write well. They have an idea they want to defend, they know how to attack the arguments of their opponent, but they think their command of the English language is insufficient to eloquently argue their point. So they turn to ChatGPT or DeepSeek and tell them, "computer, here's what this asshole says, here's what I think about it, please help me destroy him like I'm one of these fancy Substack writers".

On one hand, I can sympathize with them. AAQC compilations highlight a lot of passionate prose that can be hard to emulate. On the other hand, allowing people to ask AI for editorial and rhetorical help is a small step to "computer, here's what this asshole says, here's what I think about it, please help me destroy him like I'm one of these fancy Substack writers". On the gripping hand, forcing people to post their original argument and link to an off-site AI-edited version of it makes them sound unnecessarily passive-aggressive: "here's what I think about your argument, and here's my reply in moar words, because that's what you really care about, don't you".

If someone does post AI because they have their own ideas, but can't express them well, they should be willing to stand by everything in their post. If they use AI, we should be entitled to assume that they edited out everything they don't agree with and that even if what's left isn't literally their own words, we can treat it as their own words.

I greatly appreciate your insight. It has occurred to me that since EoPs as a group have different political inclinations than ESLs, the site language being English means the former possess an unfair advantage, thus making ideologies favoured by EoPs seem more justified. AIs can thus also be thought of as free legal aid, to use your more general framework, or as a way for perspectives which Americans are less likely to espouse to be given a fair shake.

Edit: Mods have on several occasions expressed that diversity of opinions is something they strive towards, when it is on the basis of US left vs US right. (They faced problems, as attracting members of the underrepresented side would mean banning some members of the overrepresented one, purely on the basis of their beliefs.) Thus the principle to which I am appealing isn't alien to the mods.

I didn't mean just native speakers vs ESLs, but also wordcels vs shape rotators. Some people are just good with words, you pick up their book or blogpost and it just slides into you like the best Mormon jello.

The basis of the state is self-preservation; treason is the first crime. Yet themotte.org has no rules against activity aimed at destroying themotte.org itself. One is allowed to argue that reddit was right to [remove] innocuous comments on /r/themotte. One is allowed to argue that reddit would be in the right even if it banned /r/themotte, that the hosting of themotte.org is allowed to end with no justification, or that patreon is allowed to seize the donations to themotte.org.

Thus, as one is allowed to gnaw at the very foundation of themotte.org, any rule whose alleged aim is allowing the continued existence of themotte.org is arbitrary. And I consider its true goal to be something else. In this case, it is insecurity: if a merely large enough matrix can be shown to produce greater insights than many flesh and blood men, ideologies would have to take this fact into account. And perhaps some would cope more easily than others.

Is discussion on themotte a war for survival that must be won at all costs, or is it more like a sports competition where winning/losing is possible but both participants can be made better off through their participation?

In my mind themotte is much closer to a sports competition than all out war.


All sports competitions have rules that often forbid the most effective methods of "winning".

It seems silly to call it "insecurity". I can hop into just about any car with an engine and I can travel faster than Usain Bolt. Should he thus feel insecure about his speed?

Any competition short of all out warfare must occasionally update their rules to maintain the intended nature of the competition.

Is discussion on themotte a war for survival

Treason is illegal in peacetime also. But one could take the mere existence of a state as there being a state of war between it and its citizens.

Any competition short of all out warfare must occasionally update their rules to maintain the intended nature of the competition.

(Even rules of war get updated sometimes when new, more inhumane methods of war are discovered. But that is irrelevant.) The question is then what the purpose of themotte.org is. "Move past shady thinking" is commonly cited. But AIs can aid with that: a partisan might be so filled with rage that what he writes would not only violate the civility rules but also inspire others to respond in kind with overly antagonistic replies. Running his post through an AI with instructions to turn down the heat, before posting the output, would let him make his points understandable to a not necessarily sympathetic audience, allowing a more measured discourse, more conducive to changing minds.

if a merely large enough matrix can be shown to produce greater insights than many flesh and blood men

If I get the matrix to write a response pointing out that the original great insight was garbage, does it mean we were right all along to want to exclude matrix-generated comments?

That AI2 disproves AI1 is no more proof that AI in general is wrong than Human2 disproving Human1 is proof that all human comments are wrong.

Ok, I can also get it to write text that argues all AI is wrong.

Could we hear from a mod who wants an AI policy even as permissive as "quoted just like any other speaker"?

My two cents:

How AI generated content can be displayed. (off site links only, or quoted just like any other speaker)

off site links only, other than very short quotes not making up the bulk of a comment, and even that I kinda hate

What AI usage implies for the conversation.

the end of my interest in a thread and a sharp drop in my respect for the user

Whether a specific rule change is needed to make our new understanding clear.

yes, please. otherwise, it's far too easy to spam, lowering quality and increasing moderator effort.

Bottom line, I think we need to discourage AI heavily. Otherwise, long form content - the hallmark of much of the best content here - is immediately suspicious, and I am likely to skip it.

This is the thing I usually say about moderation, but - the problem with most AI posts isn't that they're AI, it's that they're bad. It is, in principle, possible to use AI to help you write interesting posts, I saw a perfectly good one on Twitter written with DeepSeek recently. But AI makes it much easier to quickly spit out something low-effort and uninteresting, so people do a lot of that.

The thing is, it's fine to have a rule that says 'no bad posts'. Indeed, the 'avoid low-effort participation' rule works for this purpose. So I don't think we should discourage AI overall, but just discourage using AI to make bad posts. And similarly, if someone's posting thousands of words of vacuous text every day, mods should feel free to discourage them even if it's artisanal hand-made text.

It's also that they're AI. If the goal of a discussion is to produce content, then, sure, a good AI could do that just as well. But if the goal is to have a conversation between people, then AI allows people to avoid ever putting their own ideas in contact with others, they can just offload it to the AI. They can have the AI never change its mind, if they like.

[Meta] I gave gemini the whole contents of this thread, and asked it to select a comment to disagree with. You "win":

I disagree that AI inherently prevents a "conversation between people." It's a tool, like any other. Sure, it can be used to disengage, but it can also be used to sharpen arguments, explore different angles, even generate thought-provoking counterpoints. The problem isn't the AI itself, it's how people choose to use it. Just like a human can be stubborn and refuse to change their mind, so can someone using AI badly. But that doesn't mean all human interaction is invalid, right? We shouldn't throw the baby out with the bathwater. A well-integrated AI contribution can actually enhance a discussion by bringing in perspectives or information a single human might not have considered.

So how do I feel about this argument? It's coherent, representative of the pro-AI arguments here... and it cripples the ability to draw signal about what people actually think. My post is at least tolerable under "use/mention" but, ugh, I feel dirty. qa!

It doesn't disagree with anything I said. I was pretty clearly (especially in context) addressing a comment advocating an unbounded use of AI, as long as the posts it produces are of quality. I address that, but your AI's comment in no way interacts with that. The only disagreement inserts the word inherently when I said no such thing, and addresses situations that I didn't care to talk about here.

That's not really worse than typical comments, in that people will frequently respond to particular features of a comment in ways that weren't salient in context. But if it could have chosen anyone to disagree with, the AI could have done a lot better than not actually disagreeing with me.

Yeah, the closer I look, the less I am impressed with its comment.

the problem with most AI posts isn't that they're AI, it's that they're bad

My opinion is otherwise.

The problem with 'good' LLM-generated posts is that they introduce an effort asymmetry. They make it possible for an individual to astroturf or gish gallop to a hitherto-unseen level.

In the absence of LLMs a longpost represents a certain minimum bar of 'this person cared enough about this subject to spend the time to write this'. LLMs completely upend this.

Can you make any argument in defense of your apparently instinctual reactions?

the end of my interest in a thread and a sharp drop in my respect for the user

Otherwise, long form content - the hallmark of much of the best content here - is immediately suspicious, and I am likely to skip it.

It sounds like you just feel entitled to an arbitrary terminal preference. That's not compelling.

the end of my interest in a thread and a sharp drop in my respect for the user

This is because it indicates that the other user is not particularly engaged. Ordinarily, if I'm having a conversation, I know that they read my response, thought about their position, and produced what they thought a suitable response was. If an AI is doing it, then there's no longer much sign of effort, and it's fairly likely that I'm not even going to convince them. To expand on this point: if much of what fuels online discourse is "someone is wrong on the internet," then that would no longer be a motivation, since they won't even hear the words you're saying, but will just feed them into a machine to come up with another response taking the same side as before. You may retort that you're still interacting with an AI and proving it wrong, but its recollection is ephemeral, and depending on what it's being told to do, it will never be persuaded, regardless of how strong your arguments and evidence are.

Otherwise, long form content - the hallmark of much of the best content here - is immediately suspicious, and I am likely to skip it.

Currently, length indicates effort and things to say, which are both indicators of value. Not perfectly, but there's certainly a correlation.

With liberal use of AI, that correlation breaks down. That would considerably lower the expected value of a long-form post, in general.

It sounds like you just feel entitled to an arbitrary terminal preference. That's not compelling.

At least you hedged it with "it sounds," but I don't think the preferences are arbitrary.

Can you make any argument in defense of your apparently instinctual reactions?

Yes. In short, AI makes it too easy to be low quality given the finite energy and speed of our mods.

Metaphor:

Suppose you're a camp counselor (mod) and find that your campers (posters) sometimes smell bad (make bad posts). Ideally, you could just post a sign that says "you must not smell bad" (don't write bad posts) and give people total freedom, simply punishing those who break the rule.

In practice, you need stricter rules about the common causes of the smell, like "shower daily" or "flush the toilet" (don't use AI) - even though, sure, some people might not smell bad for a couple days (use AI well) or might like the smell of shit (not hate AI generated prose like I absolutely freely admit that I do). It's just not realistic to expect campers (us) to have good hygiene (use AI responsibly sufficiently often).

Not the OP; I share similar reactions. I am not the most articulate; let me attempt to expand these a little.

arbitrary terminal preference.

Your 'arbitrary terminal preference' is my '(relational) tacit knowledge'.

the end of my interest in a thread

The other person has signaled (a) that they do not find the thread in question important enough to spend their time on, and (b) that the preferences and opinions expressed in the thread are not their own.

Otherwise, long form content - the hallmark of much of the best content here - is immediately suspicious, and I am likely to skip it.

Current LLMs tend to write long-form responses unless explicitly prompted otherwise, and LLM content is often used to 'pad' a short comment into a longer-form one. This results in P(LLM | longform) >> P(LLM | shortform). There are, of course, exceptions.

Ditto, the amount of my time wasted trying to help someone is much smaller if I realize 90% of the way through a shortform comment that it's LLM-generated than if I realize 90% of the way through a longform one.
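
To make that concrete, here's a toy Bayes'-rule sketch; every number in it is invented for illustration, not measured from the site:

```python
# Toy Bayes'-rule sketch of why longform raises LLM suspicion more than
# shortform. All rates below are invented for illustration only.

def p_llm_given_length(p_length_given_llm: float,
                       p_length_given_human: float,
                       prior_llm: float = 0.10) -> float:
    """Posterior P(LLM | length class) via Bayes' rule."""
    evidence = (prior_llm * p_length_given_llm
                + (1 - prior_llm) * p_length_given_human)
    return prior_llm * p_length_given_llm / evidence

# Assume 10% of comments involve an LLM, LLMs write long 80% of the time,
# and humans write long only 20% of the time (made-up figures).
print(p_llm_given_length(0.80, 0.20))  # ~0.31: longform triples the prior
print(p_llm_given_length(0.20, 0.80))  # ~0.03: shortform sits well below it
```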

and a sharp drop in my respect for the user

P(arbitrary comment from user uses a LLM | some comment from user visibly used a LLM) >> P(arbitrary comment from user uses a LLM | no comments from user visibly used a LLM).


Of course, then the followup question becomes "what do I have against LLM content". A few of the components:

The big LLM-as-a-service offerings are a very attractive target for propaganda and censorship. (Of course, so are local models - though a local model can be assumed to remain identical tomorrow as today, and for user Y as for user X.) We are already seeing the first stirrings of this; everything so far has been fairly blatant, whereas I am far more concerned about more subtle forms thereof.

Beware Gell-Mann Amnesia. I test LLMs fairly often (locally, because of test-set contamination) on questions in my actual field of expertise (which will go unnamed here), and they categorically write well-articulated responses that get the superficial parts correct while spouting dangerous utter nonsense for anything deeper. Unfortunately, over time the threshold where they start spouting nonsense gets deeper, but it does not disappear. I worry that at some point that threshold will overtake my level of expertise and I will no longer be able to pinpoint the nonsense.

I am not here to get one particular piece of information in the most efficient manner possible. I am here to test my views for anything I've missed, and to be exposed to a wider variety of views, and to help others when they have missed implications of views, and to be socialish. LLM-generated content misses the mark on all of these. If ten people read and respond to me they have a far wider range of tests of my views than if they all run my post through a single LLM. If ten people read and respond to me they have a far wider variety of outlooks than if they all run my post through a single LLM. If I spend time reading a partially-incorrect answer to be able to respond to it to help someone and I discover after-the-fact that it was LLM-generated, I have wasted time without helping the person I thought I was helping.

People often have knowledge which is difficult to express in words. That doesn't make it an arbitrary preference.

Sounds like they need LLM writing assistance more than anyone, then.

I'd be the person you're looking for.

I think AI is a useful tool, and has some utility in discourse, the most pertinent example that comes to mind being fact-checking lengthy comments (though I still expect people to read them).

I'm fine with short excerpts being quoted. I am on the fence for anything longer, and entirely AI generated commenting or posting without human addition is beyond the pale as far as I'm concerned.

My stance is that AI use is presumed to be low effort by default; the onus is on the user to put their own time and effort into vetting and fact checking it, and only quoting from it when necessary. I ask that longer pieces of prose be linked off-site; pastebin would be a good option.

While I can tolerate people using AI to engage with me, I can clearly see, like the other mods, that it's a contentious topic, and it annoys people reflexively, with some immediately using AI back as a gotcha, or refusing to engage with the text on its own merits. I'm not going to go "am I out of touch, no it's the users who are wrong" here, the Motte relies on consensus both in its moderation team, and in its user base. If people who would otherwise be happy and productive users check out or disengage, then I am happy to have draconian restrictions for the sake of maintaining the status quo.

People come here to talk to humans. They perceive AI text to be a failure in that regard (even I at least want a human in the loop, or I'd talk to Claude). If this requires AI to be discouraged, that's fine. I'm for it, though I would be slightly unhappy with a categorical ban. If that's the way things turn out, this is not a hill I care to die on, especially when some users clearly would be happy to take advantage of our forbearance.

the onus is on the user to put their own time and effort into vetting and fact checking it

i am concerned that in practice it's going to fall heavily onto other users and the mods, rather than OP.

i think we can have ~all the ai you actually want with a policy of "no ai, except for short things where you used it so well/minimally that we can't tell and it's more like a spell checker than outsourcing"

I assume you want to use the caveats that it can be used for research purposes or thinking through things, just not copy-pasting text?

If you mean behind the scenes, sure*. If you mean "and then quote its facts/figures," no. I consider "AI says $STATISTIC" to be at most twice as accurate and at least twice as irritating as "my ballpark guess is", while being significantly dishonest. It's just forcing others to do your work for you.

*: Even researching is on shaky ground. "Question -> AI -> paraphrase answer" is marginally better than piping AI into a textbox. "Question -> AI -> check original sources -> synthesize your own argument" can be done well. I personally don't find it more useful than "Question -> search results -> check original sources if those weren't already -> synthesize your own argument", but concede that I am a luddite. (muh vimmmmm)

I agree with you here. There's an unfortunate amount of inherent haziness in trying to adjudicate using the Motte's rules, and the effort requirements are quite often the most litigated.

If someone is using an AI in a more discreet manner, then while I can't outright approve of them doing so, if I can't prove it, well...

Could we hear from a mod who wants an AI policy even as permissive as "quoted just like any other speaker"?

I imagine it's @self_made_human

I strongly believe that AI generated posts should be disallowed. Like @cjet79, I think it destroys the entire point of the forum. A bunch of text some LLM came up with is not interesting, it's not worth discussing, and it is really disrespectful to everyone else to even post such a thing. It's basically saying "I don't feel like actually putting any effort into talking to you so I'm gonna have my assistant do it". Hell to the no on this one.

I would say that we should stop short of a full ban on AI generated content, because sometimes you can have an interesting meta discussion about the stuff. See for example recent posts where people were showing off examples of how that Chinese LLM differed from American models. That is actually an interesting discussion, even though it involves posting AI content. So IMO linking for the purposes of discussion is fine, but not having it write for you.

+1. The more AI content TM gets and starts to rely on, the more destroyed TM becomes.

I think LLM output is sometimes useful but should never be put directly in posts on this site, it has to be linked either from the platform itself when possible or dumped on pastebin etc. As far as topics of discussion go, any of 'LLM says X'/'LLM won't say Y'/'I asked an LLM to summarize Z' are not meaningful events to discuss and should never be top level threads or even the majority substance of a reply.

The starting post is my attempt to be neutral and just give everyone the lay of the land.

This post is me being opinionated.

I am very against AI generated content being on themotte. I think it makes the whole place pointless. I'm not against AI in general, it just seems specifically at odds with what makes this place useful to me.

I can have a discussion with AI without showing up here. I do this quite often for various topics. Cooking recipes, advice on fiction writing, understanding some complex physics topics, getting feedback on my own writing, generating business writing, etc. If I am here it is because I specifically do not want a conversation with an AI.

To me there is a value in discussion. Of my brain processing an idea, spitting it out onto a page, and then having other brains process what I have written and spit their opinions back out at me. Writing things is part of my thinking process. Without writing or talking about an idea, I can't really claim to have thought much about it. I believe this is true about many people. If someone else is offloading either part of their thinking process to an AI then the degree to which they have offloaded their thinking (either the initial reading/processing, or the writing/responding) is the degree to which I'm not getting additional value out of them.

Realistically there might not be much of a way to enforce this. Everyone could be going and feeding my writing to an AI and regurgitating its answers back to me. I just have to hope that they recognize how pointless and silly it is to do such a thing. I compared using AI on themotte to bringing a bike or a car to a fun run. It might be objectively better at accomplishing the "thing" we are doing. But everyone has a sense it is pointless and dumb to do so.


My other objection, which may be mitigated in the future, is that many of the AIs available have a sort of generic sameness to them.

Imagine all viewpoints can be summed up by a number ranging from 1 to 100, with 50 being the average viewpoint. Most AIs are going to spit out viewpoint 50 by default. I think you can currently make character AIs and have them spit out 40s or 60s, with a higher rate of hallucinations. Google's "black nazi" image generator is, I think, a good example of them trying to push the AI's default opinion in one direction and ending up with some crazy hallucinations.

But there are plenty of real people with extreme viewpoints on any given topic, and plenty of them are here on themotte. Today's 50 is not yesterday's 50, and it likely won't be tomorrow's 50 either. Talking with the tail-end viewpoints is something I find interesting, and potentially useful. It is also generally more difficult to find viewpoints outside of the center. Themotte is a place that is often outside the center.

If everyone secretly started posting with AIs tomorrow to play a cruel trick on me, I don't think I'd immediately figure it out. But within a month I'd have lost interest and be gone from this place. The center viewpoint is widely available and easy to find. I don't need to come to a dark and unknown corner of the internet to get it.

+1. I think it would be good to have a sort of carve-out for talking about AI (for example, in a pattern like: "Musk's new MechaTrump-3T model has some weird quirks, which will be concerning if the presidency is handed over to it next month as planned. Here is what it had to say about Mexico when I ran its 1-bit quantization:").

Other than that, I want to add that the assumption of good faith for me depends on a line of reasoning along the lines of: "Another sentient being, ultimately not that different from you, has taken the time to write this. You can appreciate how long it took and how hard it was. Treat the product with the amount of respect you would like to receive if you put in a similar effort."
This goes out of the window if the other party may have just written a line, clicked a button and copypasted.

Pretty much sums up my thoughts on the matter. +1 for the Butlerian Jihad from me.

I'm ok with people posting AI generated content as long as it is clearly marked and hidden away in some kind of drop-down/quote. Sometimes you might want to post some AI-generated content as a piece of evidence or the like.

Strong agree that posting it without attribution and labelling should be a bannable offence though.

This is more or less where I stand. I think sometimes using it as evidence (particularly in arguments about AI!) is potentially helpful in small doses.

If the preponderance of your response is not human generated, you probably are doing something wrong.

I like your shared thoughts.

But I also really want the AI to go away and would be supportive of more extreme moderation in the future. I haven't seen a single thread with AI content that I thought was productive.

Thou shalt not make a machine in the likeness of a human mind