This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
Might be true, but trying to carefully micro-manage which views need to be pruned to what extent in order to give room to which other views, deciding which views bring how much value, and having an apparatus in place to enforce all of that... well, it might work on small internet forums run by small teams of savvy mods who know their userbase well and actually care to maximize viewpoint diversity (though still - by what metric?), but I don't think it scales at all without devolving into conformity-enforcement machinery.
At the risk of sounding like a broken record that goes "AI will fix it", that sounds like a job for AI.
I suspect a model finetuned on the moderation decisions of The Motte will beat the brakes off the typical internet or reddit mod.
I think you underestimate how many humans want censorship. To me, reddit is a boring, sterile place in most areas, where every political sub becomes a parroting of the same agreed-upon ideas. But humans seem to want that, because we converge on it repeatedly.
Even here, if someone parrots a few, say, communist-leaning ideas, they probably get enough disagreement that they end up just deciding to go to the place where everyone will call them a genius.
AI might be able to maximize for users by never showing them posts they don't like, effectively letting everyone live in their self-reinforcing bubble. But it does seem many on the left don't like the idea of someone they consider a Nazi being on the same platform at all, even if they never see that person's thoughts.
I think people are confused about what they want. They don't understand that in order to get lively, creative, intellectually stimulating conversation, they have to be willing to tolerate people with beliefs that are far different from their own.
The modern progressive movement has sold the idea that you can have all the vitality, ingenuity, and fun that we've always had without the dissidents and the ghouls and the witches. Hell, they push the line that without those bad people, there will be even more of the good stuff!
Unfortunately this message is, likely unintentionally, a classic example of throwing out the baby with the bathwater.
Completely true. I'm not saying Twitter is trying to (or even can) cultivate a garden of ideological diversity, which was (roughly) the goal of /r/slatestarcodex.
Twitter is probably more interested in maximizing users (which, as you say, isn’t the same as diverse viewpoints), but a similar principle still holds: if you want to maximize the number of people using your services, a policy of allowing entry to all often isn’t optimal (as users here often point out for public transportation).
What AI? The commercial versions which are being carefully monitored, pruned, and edited to make sure no No-No Words or Thoughts get through the sieve?
I think you replied to the wrong comment
Finetuning is the process by which such goody-two-shoes AI can be coaxed into almost anything you like. You could remove the guardrails, turn it into a member of the Gestapo, or, in this case, teach it the tenets of Motte moderation.
Of course, this is for open-source models like Llama, where we can tinker with their brains, not GPT-4, which is locked down; if you get naughty, OpenAI will spank you.
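To make the idea concrete: a minimal sketch of what the data-prep side of that finetuning could look like. The comments and verdicts here are invented for illustration; real training data would be an export of actual Motte moderation decisions, and the chat-message JSONL shape is just the format most open-weight finetuning stacks accept, not anything Motte-specific.

```python
import json

# Hypothetical (comment, moderator verdict) pairs; real data would come
# from an export of actual moderation actions.
decisions = [
    ("Can you believe what Those People did this week?",
     "warned: 'Boo outgroup' content without contextualization"),
    ("Here is my attempt to steel-man the opposing view...",
     "approved"),
]

def to_chat_record(comment, verdict):
    """Format one moderation decision as a chat-style finetuning example."""
    return {
        "messages": [
            {"role": "system",
             "content": "You are a forum moderator. Decide whether the "
                        "comment follows the rules and state your verdict."},
            {"role": "user", "content": comment},
            {"role": "assistant", "content": verdict},
        ]
    }

# One JSON object per line (JSONL), the usual finetuning input format.
with open("mod_decisions.jsonl", "w") as f:
    for comment, verdict in decisions:
        f.write(json.dumps(to_chat_record(comment, verdict)) + "\n")
```

From there you would point a LoRA or full-finetune job for an open-weight model at the JSONL file; the interesting (and hard) part is curating enough high-quality decisions, not the plumbing.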