
Showing 25 of 275676 results for

domain:parrhesia.co

Hell, I can look up one of the listed beliefs on Wikipedia right now and it is directly labelled a "right wing conspiracy theory".

Wikipedia is biased to the left. I wouldn't go to Wikipedia for information about whether it's correct to call something a conspiracy theory.

Your definition, while it might be more accurate, very clearly isn't the one being used by the rest of society.

There are a lot of things that have a real definition, but are also abused to attack political opponents. "Conspiracy theory" is one just like "Nazi". Would you suggest that because Trump and the president of Ukraine are called Nazis, but I would not call them that, "Nazi" is a useless term?

What is the term actually communicating beyond "I think this theory is dumb and wrong, and the person who believes it does so due to faulty reasoning"?

It communicates that it is a particular type of faulty reasoning.

I greatly appreciate your insight. It has occurred to me that since EoPs as a group have different political inclinations than ESLs, the site language being English means the former possesses an unfair advantage, thus making ideologies favoured by EoPs seem more justified. AIs can thus also be thought of as free legal aid, to use your more general framework, or as a way for perspectives which Americans are less likely to espouse to be given a fair shake.

Edit: Mods have on several occasions expressed that diversity of opinions is something they strive towards, when it is on the basis of US left vs US right. (They faced problems, as attracting members of the underrepresented side would mean banning some members of the overrepresented one, purely on the basis of their beliefs.) Thus the principle to which I am appealing isn't alien to the mods.

The only real reason to think that DOGE wasn't meant seriously was the name. DOGE as a thing made perfect sense as something he'd do. And the name isn't going to affect what it actually does, and this isn't a GIMP situation where people really have a choice to stay away because of the name, so the name is just there to thumb his nose at the media.

Trump: have you guys ever considered just ethnically cleansing them?

Trump: like, duh

Maybe he really is a genius? If he actually makes the Palestinians relocate and turns Gaza into American Ibiza he'll be the most competent President of my life.

His recent obsession with acquiring foreign territory is really strange. It’s been two weeks, and already there are four or so territories that he’s consistently talked about trying to take.

I don’t know, but I’m starting to shift my assumption toward there being something even more wrong with his brain than I previously thought, rather than him doing this as posturing or to get some kind of outcome.

I know Trump is just uniquely Trump but even for him this is getting pretty out there.

I am fine with just transnational bare link roundups. CW link roundups are only good for people with hypotension.

I am not convinced that Trump is serious about this. I get the impression he's trolling.

It's like buying Greenland. We haven't heard much about that recently, and even though buying Greenland isn't possible, he certainly could be a lot more obnoxious about still wanting to buy it than he has been. He's just mastered the art of saying outrageous things to get a media reaction that 1) earns publicity and 2) distracts them from attacking the things he actually wants to do. Notice that all the things Trump has actually tried to do this term have been things that he's clearly wanted to do for a while.

And Trump has already done things that are a lot more obvious trolling, like the DOGE name.

If taking over Gaza is still on the radar in two weeks, I'll be surprised.

I'm not a fan of AI generated text, but mostly because I believe it to be incapable of any original insight (ignoring the fact it's poisoned with a certain brand of selfishness).

I have a hard enough time when actual human users do that; a pithy one-liner is superior to an AI wall of text because in the former case I know the conversation's going nowhere, whereas AI use in that case is basically just a sophisticated version of a Gish gallop.

By that token I find "I asked an AI tool because I couldn't be bothered to make an actual fucking argument, here is what it said" to be simply egregiously obnoxious, as that is the general tone of the comments that do this. It's not leveraged to create meaningful discussion, that's for sure; otherwise, why go to the trouble of saying "AI says X"?

There has yet to be an effective suppressing meme for "I'm using AI, and it has drawn you as the soyjack, and me as the chad" types of comments, much like "tits or GTFO" is used on 4chan as a rejection of trying to cover up insubstantive commenting with social privilege.

I think the difficult thing, as with most arguments here, is that context matters.

When I've seen AI used here, it's usually as a stick to beat someone else with ("look, it agrees with me!"), or a stick to beat AI with ("look at how wrong it is!"), or a stick to beat those in control of the AI with ("look at how wrong they made it!").

The only other case I've seen it used is "I'm too lazy to write my argument to the level of wordcel you guys want for a top level post, AI wrote it for me".

The easiest thing to do is to just ban all of it, because it's onerous to sort the wheat from the chaff. But I also see it becoming non-trivial to do so in the near future. The smarter artists are already using AI as part of their workflows or to make some of their work easier. And I've definitely seen worse arguments by people here than Deepseek manages to make on occasion.

Interesting way of looking at it. Now with that said, don’t you think the president of Haiti or some other third world country would be an even more lethal job?

if a merely large enough matrix can be shown to produce greater insights than many flesh and blood men

If I get the matrix to write a response pointing out that the original great insight was garbage, does it mean we were right all along to want to exclude matrix-generated comments?

It’s a meme stock. If you’re looking for how research analysts are justifying their hold ratings, it’s by suggesting that the AI and self driving stuff will all pan out perfectly.

  1. When talking about AI content, put examples in quotes if they are brief, provide an off-site link if they are verbose.
  2. When using AI content, treat it like citing Wikipedia in your high school paper: using it for research purposes is fine as long as you understand its limitations. Using AI content in your messages is not allowed; trying to hide that fact and pass the content off as your own is just as bad as saying "here's what ChatGPT thinks about X".

But these cases are easy to judge. Here's one where it's less clear-cut: someone believes they are discriminated against by the wordcels of The Motte because they cannot write well. They have an idea they want to defend, they know how to attack the arguments of their opponent, but they think their command of the English language is insufficient to eloquently argue their point. So they turn to ChatGPT or DeepSeek and tell them, "computer, here's what this asshole says, here's what I think about it, please help me destroy him like I'm one of these fancy Substack writers".

On one hand, I can sympathize with them. AAQC compilations highlight a lot of passionate prose that can be hard to emulate. On the other hand, allowing people to ask AI for editorial and rhetorical help is a small step to "computer, here's what this asshole says, here's what I think about it, please help me destroy him like I'm one of these fancy Substack writers". On the gripping hand, forcing people to post their original argument and link to an off-site AI-edited version of it makes them sound unnecessarily passive-aggressive: "here's what I think about your argument, and here's my reply in moar words, because that's what you really care about, don't you".

+1. I think it would be good to have a sort of cutout for talking about AI (for example in a pattern like: "Musk's new MechaTrump-3T model has some weird quirks, which will be concerning if the presidency is handed over to it next month as planned. Here is what it had to say about Mexico when I ran its 1bit quantization:").

Other than that, I want to add that the assumption of good faith for me depends on reasoning along the lines of: "Another sentient being, ultimately not that different from you, has taken the time to write this. You can appreciate how long it took and how hard it was. Treat the product with the amount of respect you would like to receive if you put in a similar effort."
This goes out of the window if the other party may have just written a line, clicked a button and copypasted.

The basis of the state is self-preservation; treason is the first crime. Yet themotte.org has no rules against activity aimed at destroying themotte.org itself. One is allowed to argue that reddit was right to [remove] innocuous comments on /r/themotte. One is allowed to argue that reddit would have been in the right even if it had banned /r/themotte, that the hosting of themotte.org is allowed to end with no justification, or that patreon is allowed to seize the donations to themotte.org.

Since one is thus allowed to gnaw at the very foundation of themotte.org, any rule whose alleged aim is allowing the continued existence of themotte.org is arbitrary. And I consider its true goal to be something else. In this case, it is insecurity: if a merely large enough matrix can be shown to produce greater insights than many flesh-and-blood men, ideologies would have to take this fact into account. And perhaps some would cope more easily than others.

I'd be the person you're looking for.

I think AI is a useful tool, and has some utility in discourse, the most pertinent example that comes to mind being fact-checking lengthy comments (though I still expect people to read them).

I'm fine with short excerpts being quoted. I am on the fence for anything longer, and entirely AI generated commenting or posting without human addition is beyond the pale as far as I'm concerned.

My stance is that AI use is presumed to be low effort by default; the onus is on the user to put their own time and effort into vetting and fact-checking it, and to quote from it only when necessary. I ask that longer pieces of prose be linked off-site; pastebin would be a good option.

While I can tolerate people using AI to engage with me, I can clearly see, like the other mods, that it's a contentious topic, and it annoys people reflexively, with some immediately using AI back as a gotcha, or refusing to engage with the text on its own merits. I'm not going to go "am I out of touch, no it's the users who are wrong" here, the Motte relies on consensus both in its moderation team, and in its user base. If people who would otherwise be happy and productive users check out or disengage, then I am happy to have draconian restrictions for the sake of maintaining the status quo.

People come here to talk to humans. They perceive AI text to be a failure in that regard (even I at least want a human in the loop, or I'd talk to Claude). If this requires AI to be discouraged, that's fine. I'm for it, though I would be slightly unhappy with a categorical ban. If that's the way things turn out, this is not a hill I care to die on, especially when some users clearly would be happy to take advantage of our forbearance.

Could we hear from a mod who wants an AI policy even as permissive as "quoted just like any other speaker"?

I imagine it's @self_made_human

Dr. hbd nrx is pretty good.

Could we hear from a mod who wants an AI policy even as permissive as "quoted just like any other speaker"?

My two cents:

How AI generated content can be displayed. (off site links only, or quoted just like any other speaker)

off site links only, other than very short quotes not making up the bulk of a comment, and even that I kinda hate

What AI usage implies for the conversation.

the end of my interest in a thread and a sharp drop in my respect for the user

Whether a specific rule change is needed to make our new understanding clear.

yes, please. otherwise, it's far too easy to spam, lowering quality and increasing moderator effort.

Bottom line, I think we need to discourage AI heavily. Otherwise, long form content - the hallmark of much of the best content here - is immediately suspicious, and I am likely to skip it.

This is more or less where I stand. I think sometimes using it as evidence (particularly in arguments about AI!) is potentially helpful in small doses.

If the preponderance of your response is not human generated, you probably are doing something wrong.

Pretty much sums up my thoughts on the matter. +1 for the Butlerian Jihad from me.

I strongly believe that AI generated posts should be disallowed. Like @cjet79, I think it destroys the entire point of the forum. A bunch of text some LLM came up with is not interesting, it's not worth discussing, and it is really disrespectful to everyone else to even post such a thing. It's basically saying "I don't feel like actually putting any effort into talking to you so I'm gonna have my assistant do it". Hell to the no on this one.

I would say that we should stop short of a full ban on AI generated content, because sometimes you can have an interesting meta discussion about the stuff. See for example recent posts where people were showing off examples of how that Chinese LLM differed from American models. That is actually an interesting discussion, even though it involves posting AI content. So IMO linking for the purposes of discussion is fine, but not having it write for you.

I think LLM output is sometimes useful but should never be put directly in posts on this site, it has to be linked either from the platform itself when possible or dumped on pastebin etc. As far as topics of discussion go, any of 'LLM says X'/'LLM won't say Y'/'I asked an LLM to summarize Z' are not meaningful events to discuss and should never be top level threads or even the majority substance of a reply.

lol

strikethrough_regex = re.compile('''~{1,2}([^~]+)~{1,2}''', flags=re.A)

Used here

# turn ~something~ or ~~something~~  into <del>something</del>
sanitized = strikethrough_regex.sub(r'<del>\1</del>', sanitized)

Anyway maybe like ~this~?

Which looks like &#126;this&#126;

In unrelated news I'm not sure how much I trust the variable named sanitized to contain what it says on the tin.
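For what it's worth, the quoted pattern can be exercised on its own. The `render` wrapper below is hypothetical (the site presumably applies the substitution inside a larger sanitizer pipeline), but the regex and replacement are exactly as quoted above. One quirk worth noting: since `~{1,2}` is matched independently on each side, mismatched tilde counts still trigger the substitution.

```python
import re

# The pattern and substitution quoted above, reproduced verbatim.
strikethrough_regex = re.compile('''~{1,2}([^~]+)~{1,2}''', flags=re.A)

def render(text: str) -> str:
    # turn ~something~ or ~~something~~ into <del>something</del>
    return strikethrough_regex.sub(r'<del>\1</del>', text)

print(render("maybe like ~this~?"))      # maybe like <del>this</del>?
print(render("or ~~like this~~"))        # or <del>like this</del>
# {1,2} is applied to each delimiter independently, so mismatched
# tilde counts still match:
print(render("mismatched ~~like this~")) # mismatched <del>like this</del>
```

Escaped tildes (`&#126;`) survive untouched, since the pattern only looks for literal `~` characters.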

He's an executive branch appointee acting with the permission of the elected President. That's how the executive branch has always worked: The President appoints people to oversee various aspects of the executive branch, because it does way too much for one person to micromanage. The President can fire him at any time if he disapproves.

If there's a problem here, it's that the executive branch is doing things the executive branch has no authority to do, not that an unelected appointee is making decisions.