A brief argument that “moderation” is distinct from censorship mainly when it’s optional.
I read this as a corollary to Scott's Archipelago and Atomic Communitarianism. It certainly raises similar issues, especially the existence of exit rights. Currently, even heavily free-speech platforms retain the option of deleting content, whether for legal or practical reasons. But doing so is incompatible with an "exit" right to opt back in to the deleted material.
Scott also suggests that if moderation becomes “too cheap to meter,” it’s likely to prevent the conflation with censorship. I’m not sure I see it. Assuming he means something like free, accurate AI tagging/filtering, how does that remove the incentive to call [objectionable thing X] worthy of proper censorship? I suppose it reduces the excuse of “X might offend people,” requiring more legible harms.
As a side note, I’m curious if anyone else browses the moderation log periodically. Perhaps I’m engaging with outrage fuel. But it also seems like an example of unchecking (some of) the moderation filters to keep calibrated.
The true problem with censorship is when it silences certain ideas. Child porn, which he mentions, is not an idea; it's a red herring, since nobody is truly arguing in favor of allowing it. The philosophical position that no ideas should be censored has been debated for centuries, and it has a name: freedom of speech.
The problem is that today nobody really knows what freedom of speech actually is. The conflation of moderation and censorship is one problem, but so is the conflation of the philosophical position with the law (the First Amendment). It shows when people claim that freedom of speech is a right.
Freedom of speech was meant to safeguard heliocentrism; it wasn't meant to be a right belonging to Galileo.
Copying a comment I made on the SSC subreddit.
I don't like this distinction, and this isn't the argument I think should be made in the first place. Moderation is censorship. I cannot speak to the public on a platform in any manner I want, because some third party (the mods) has decided that I may not violate their arbitrary rules.
The key point is the distribution of power. In a setting with very few global rules and many variations of local rules, individuals reign mostly supreme. Don't like how a locality runs their things and they won't change? Leave and make your own. This is like Reddit, where admins handle site-wide (but ideally limited) rules and violations of said rules, and volunteer moderators who have a stake in the group's success manage their own areas.
In contrast, a place where the global:local rule ratio is more equal (or just weighted more to the global side) is one that is engaging in censorship. This is equivalent to Facebook or Twitter, where one centralized ruleset governs everybody (leaving no one happy except those who agree with the status quo).
In my opinion, the argument should be "make Reddit-like segregation the norm".
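As a toy sketch of that global:local split (community names and rules invented purely for illustration, not Reddit's actual system), the healthy version keeps the global set tiny and lets each locality stack its own rules on top:

```python
# Toy illustration of a small global ruleset plus per-community local rules.
# All rule text and community names here are made up.
GLOBAL_RULES = {"no illegal content", "no spam"}

COMMUNITY_RULES = {
    "strict_history": {"primary sources required", "no jokes"},
    "freewheeling": set(),  # a locality that adds nothing on top
}

def applicable_rules(community: str) -> set[str]:
    """A post answers to the global rules plus its community's own."""
    return GLOBAL_RULES | COMMUNITY_RULES.get(community, set())

print(applicable_rules("strict_history"))  # global rules plus both local rules
print(applicable_rules("freewheeling"))    # only the minimal global rules
```

Censorship, in this framing, is what happens when the global set grows until the local sets stop mattering.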
That's removing the conflation between moderation and censorship. It takes away the excuse that censorship is necessary for a good user experience because users don't want to see the content, and it forces censorship advocates to admit that they don't think users should be allowed to see it.
The closest thing that actually exists for this sort of moderation is Reddit. Each community has its own mods and its own rules.
Though of course there is no golden goose that someone won't choose to slaughter. So Reddit has slowly been adding site-wide moderation that defeats the whole point of its model.
The only reason Reddit was so good for so many years was the complete incompetence of the people running it. The Reddit leadership was so bad that they failed at every pro-monetization move they tried, and the website kept its 2010-esque charm.
The people who run Reddit do not understand Reddit.
We have already seen Tumblr be run into the ground by idiots. From the looks of it, Reddit is fast headed downhill too.
Can you imagine seeing Twitter's shakeup coming from six months away, and still having nothing ready to welcome what would've been a massive influx of people looking for a new social media home?
If you are going to ruin a product, at least do it while laughing your way to the bank. Reddit somehow manages to stay broke and worsen the product.
First, on a personal note, this is exactly what I stoner-hot-take predicted Musk would do with Twitter in a prior Motte thread. This freaks me out. Not that it's all that creative a take, but it's something I've noticed before when I was spending too much time in narrow epistemic corners (team fan blogs, fashion blogs), where I'd start to think the same thoughts that showed up on the blogs a week later. I'd have the same trade ideas as the fan blogs, or I'd pick something up at a thrift store that caught my eye and a week later it would get anointed a trend. It's sort of a weird hivemind thing: we're all thinking about the same issues based on the same influences at the same time, because we're all consuming the same set of blogs and news sites. So you can look there for my joking-not-joking predictions as to how this will go; it's a good plan, but it won't survive contact with the enemy (users).
Second, to address a specific point SA makes:
Twitter would be completely unusable without any bans or filters, filled with bots and scams and obvious harassment. That's not a viable product. The minimum viable product has to filter out enough to make the product usable. I don't want to see all "banned" posts; I want to see posts banned for political incorrectness. Maybe. Shout out to the mods of themotte: would themotte be usable, in your judgment, without that kind of basic filtering?
"Accurate" is a point of contention here. It's not unusual to have certain topics be overwhelmingly dominated by a particularly numerous or energetic viewpoint on the topic, even just something simple like Toronto sports can get weird with sportswriters admitting they softpedaled coverage of the Raptors and Blue Jays because the articles got a ton of clicks from Canadian fans, along with a ton of comments yelling at the writer if they insulted Toronto's honor. If you're any country other than the USA, you risk being flooded by American content and American viewpoints. While I'm not defending Chinese censorship per se, I do think that saying opening up to all sources of information increases accuracy can be disputed.
People (including myself) have been coming up with the idea of reframing freedom of speech as "freedom to listen" for most (but not all!) purposes since forever. It has a bunch of obvious benefits: it's much easier to defend one's right to read Mein Kampf than Hitler's right to have it read (should he even have that right?); it's easy to go on the counteroffensive and ask who exactly, and on what grounds, reserves the right to read something themselves and then decide that I don't have that right; and it consequently forces into the open the usually unspoken but implied idea that some people are too stupid to be allowed to read dangerous things. I don't disagree, actually, but who is deciding, and how do I qualify for unrestricted access?
And freedom to listen flat-out contradicts the naive interpretation of freedom of speech as the freedom to call people n-words on the internet. Because obviously freedom to listen is a freedom to choose what to listen to, and someone interfering with it by screaming the n-word violates it. When you think about how to implement it technically, you naturally arrive at moderation as a service that readers subscribe to based on their individual preferences, rather than something applied to writers in a one-size-fits-all fashion.
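To make that concrete, here's a minimal sketch of reader-subscribed moderation, under the assumption of simple blocklist-style filters; all the names (Post, ModerationList, render_feed) are hypothetical, not any real platform's API:

```python
# A minimal sketch of reader-side, subscription-based moderation.
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    text: str

@dataclass
class ModerationList:
    """A filter maintained by a mod team; readers opt in to it."""
    name: str
    blocked_authors: set[str] = field(default_factory=set)
    blocked_words: set[str] = field(default_factory=set)

    def allows(self, post: Post) -> bool:
        if post.author in self.blocked_authors:
            return False
        return not any(w in post.text.lower() for w in self.blocked_words)

def render_feed(posts: list[Post], subscriptions: list[ModerationList]) -> list[Post]:
    """Each reader sees the raw feed minus whatever their chosen lists reject."""
    return [p for p in posts if all(m.allows(p) for m in subscriptions)]

posts = [Post("alice", "Interesting argument."), Post("troll", "screaming slurs")]
strict = ModerationList("civility", blocked_words={"slurs"})
print([p.author for p in render_feed(posts, [strict])])  # ['alice']
print([p.author for p in render_feed(posts, [])])        # everyone, unfiltered
```

The design point is that filtering happens at read time, per reader; nothing is ever removed from the underlying store.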
The problem doesn't really change either way: a platform can still insist that you go listen elsewhere. Just as they argue you don't have an unlimited right to speak anywhere, they'd argue that you don't have an unlimited right to hear whatever you want wherever you want, if it would require someone else to be within listening distance.
The argument against censorship should emphasize that people can be, and have been, hilariously wrong in the past, and that there's no proof we're any better at discerning what's true, so we should be willing to listen to ideas that go against what we currently believe.
Funnily enough, ~same for me. Though I suggested this nearly half a year ago, so maybe that's a little different... Link
Zorba responded, explaining why it might not work; not quoting it here because of the length. So probably not(?).
I'm not actually a big fan of Zorba or the moderation history here (especially on the old subreddit), and am a fan and supporter of subscription-based moderation, but I'll be a good "Motteizen" and try to steelman what I see as the strong argument against this idea (without tracking down the original Zorba post you mentioned, so maybe he said something similar).
Ultimately, subscription-based moderation is commonly presented by its supporters as 100% frictionless and without consequence for the non-consenting (and thus basically impossible to reasonably object to): if you like the mods, then you get the modded version (potentially from different sets of mods per your choice as in many proposals), and if I don't, then I get the raw and uncut edition. Both of us therefore get what we want without interfering with the other, right? How could you say no unless you're a totalitarian who wants to force censorship on others?
But when you factor in social/community dynamics, is that actually true? Let's say you're browsing the modded version of the site. You see a response from User A that isn't by itself rule-violating enough to be modded away, but that takes a very different tone from what you're otherwise seeing, and maybe even comments on a general tone among other users that you're not perceiving.
Maybe he starts his post off with something like "Obviously [Y proposition] isn't very controversial here, but...", but you're confused, because, as far as you knew, [Y proposition] is at least a little controversial among the userbase from what you've seen. What gives? Is this the forum you've known all along or did it get replaced by a skinwalker? Well, this is all easily explainable by the fact that the other user is browsing the unmodded version of the site (and the same thing could easily apply in reverse too). So you're both essentially responding to two semi-different conversations conducted by two semi-different (though also partially overlapping) communities, but your posts are still confusingly mixed in at times. You've probably heard of fuzzy logic and this is the fuzzy equivalent for socialization/communities.
The above example also shows that merely having a free unmodded view available would almost certainly make the amount of borderline content just below the moddable threshold explode, even on the modded version of the site. After all, for the users posting it, it's not even borderline under their chosen ruleset. So the median tone of the conversation will inevitably shift, even for the users who have not opted into (or who have opted out of) unmodded mania. (This could also happen in reverse if you offer an optional, more restrictive ruleset. Suddenly you start seeing a bunch of prissy, apparently bizarrely self-censoring nofuns in your former universal wild west, which was previously inhabited only by people who like that environment and thus have it in common as their shared culture. But from the perspective of the newer users who don't fit in by your standards, they're just following the rules: their rules.)
In essence, I don't think the idea that you can have users viewing different versions of a site without cross-contamination, contagion, and direct fragmentation between them is correct. This is especially true if you not only allow modded vs. unmodded views, but also let users select their own custom mod team from amongst any user who volunteers (so you have potentially thousands of different views of the site).
The "chain links" of users making posts that aren't moddable under the rules of view A but who aren't themselves browsing the site under moderation view A (and so on for views B, C, etc.) and thus don't come from a perspective informed by it will inevitably cause the distinct views to mesh together and interfere, directly or indirectly, with each other, invalidating the idealistic notion that it's possible for me to just view what I want without affecting what you end up viewing. (One modification to the proposal you could make is to have it so that you only view posts from other users with the same or perhaps similar to X degree moderation lens applied as you, but that's veering into the territory of just having different forums/subforums entirely. With that being said, you could always make that the user's choice too.)
To be clear, I don't think the above argument is by any means fatal to the essential core of subscription-based moderation proposals, which I still think are superior to the status quo. Nor do I think it proves that subscription-based moderation isn't still essentially libertarian, or that it is an unjustifiable non-consensual imposition on others (most of the effects on those who didn't opt in are, as described above, essentially indirect, and I think people could easily learn to adapt to them), or that most people against it aren't still motivated primarily by censoriousness. One important reason among many to favor it is its marvelous potential to eliminate the network effect's tyrannical suppression of freedom of association and the right to exit; then again, I'm also heavily tilted toward thinking that most jannies are corrupt and biased and that most moderation is unnecessary. If I had to argue against subscription-based moderation, though, an appeal to the above line of reasoning is what I'd use. (And while it's a decent argument for subreddits, Discords, small forums like this one, etc., it's a lot less appropriate for larger open platforms like Twitter or Facebook, which shouldn't necessarily be expected to have one unified culture. So I'd say bring on the subscription-based jannyism only there.)
Yeah, generally I agree with this now.
Yep; while it wouldn't work that well in communities like themotte, it makes sense on a platform. Reddit, if not for the admins interfering, is a pretty good model, I think. Instead of a global, unified forum with global moderators, there are subreddits for entire communities, each with its own mods. It makes sense.
An improvement on that might be a tag-based system: allow posting into multiple places at once, with mods moderating the tags they set up. Tags would allow some fancy solutions; for example, mods of #programming might enforce a rule that humor/memes must also be tagged #humor. A user can then look at "#programming -#humor" if they want only serious posts, while #humor aggregates humor across all kinds of topics.
Though that would cause the problems you describe, to some extent, in the comments. But maybe it's still better UX than crossposting?
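For illustration, a minimal sketch of how a query like "#programming -#humor" could be evaluated, assuming a simple include/exclude tag syntax; the names here (Post, parse_query, matches) are made up, not a description of any existing system:

```python
# Toy evaluator for "#include -#exclude" tag queries.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    tags: set[str]

def parse_query(query: str) -> tuple[set[str], set[str]]:
    """Split a query string into required and excluded tag sets."""
    include, exclude = set(), set()
    for token in query.split():
        if token.startswith("-#"):
            exclude.add(token[2:])
        elif token.startswith("#"):
            include.add(token[1:])
    return include, exclude

def matches(post: Post, query: str) -> bool:
    include, exclude = parse_query(query)
    return include <= post.tags and not (exclude & post.tags)

posts = [
    Post("Monad tutorial", {"programming"}),
    Post("Compiler memes", {"programming", "humor"}),
]
print([p.title for p in posts if matches(p, "#programming -#humor")])
# ['Monad tutorial']
```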
I don't remember it, but I wonder if I read your post and forgot about it. Cool that you already asked and answered that question.
You are a real influencer, man! Not the job, the vocation. What used to be called a memelord, or a trendsetter. Yeah, it's more likely that by reading and watching certain things you are primed to think more about that topic, and you just did it faster than others. But rule number one of being an influencer is thinking you're the best and everyone listens to you, regardless of what reality says.
Without any bans or filters whatsoever, maybe, but why would it be unusable with hidden-but-accessible shit like that? It seems unlikely any social media company would make a filter that differentiates between banned-for-spam and banned-for-political-incorrectness. Even if the Supreme Court said they have to display such posts (but can hide them) to comply with the First Amendment, the reason they censored the political speech they don't like is to stop others from seeing it, so mixing it in with spam would be the next best thing.
I'm not sure I phrased that right. I'm saying that the proposed unfiltered product isn't viable if it doesn't distinguish between actual pure trash and political incorrectness. Defiltering makes the product instantly unusable; no one would really use it. The filtered product would still be usable, and dominant. There would be a certain value in the existence of a short-form 4chan corner of Twitter, but that reminds me of another thing I was trying to get my head around:
To what extent are social media platforms interested in excluding, from their ecosystem entirely, groups that habitually organize to break their internal systems, precisely to avoid giving them space to organize? If unfiltered Twitter existed, and rdrama-type trolls were allowed to hang out there provided they all had their filters off, one of the things they're going to do is organize sallies out into the filtered world, and they'll always figure out how to grief users in the overworld. Inviting those people to use a dark corner of your platform is inviting the bikers into the bar; even if they just get a table over there, they're eventually going to cause trouble.
Selection bias. I can't even know of the times when I hypothetically would have thought of something slower: if I read an article on RiverAveBlues two days before I would have thought "the Yankees should trade for Trevor Story," then I'm never going to think of trading for Trevor Story on my own. It's more like @daseindustriesltd's idea of modernity as a distributed conspiracy: if we're all educated in the same universities, reading the same blogs, and listening to the same podcasts, the same ideas will occur to everyone.
Ah, you read it as *minimum* viable, I read it as minimum *viable*. I have talked with many people who would consider Scott's proposal not censorship, so I could see it being instituted. And I think social media companies would comply with that and stop there. Then everyone would talk about how they love free speech on the filtered version, and just treat you like a criminal when they hear you use the unfiltered version. You would be told the information is free and it's up to you to go find it, which is true.
It sounds like you are describing an egregore? Or maybe the zeitgeist as an egregore? That's one way you can view history: as a bunch of different egregores fighting it out on the conceptual plane. First there were family egregores, then tribe, then village, and so on (although all those smaller ones remained). You are noticing your integration into those egregores, I think, and as a trendsetter that's how it looks. As a regular member it just looks like everyone is saying "on fleek" all of a sudden for no particular reason.
Thanks for turning me on to that term, egregore. That's an interesting rabbit hole.
They are quite fascinating, yeah. Make sure you search the vault here too; I think some of our smarties have talked about it. (Although it's a rather old concept, and while it has always had mystical elements, it has always been sociological or anthropological. The mystical stuff was because we didn't understand memes and so didn't have a scientific framework to use, I think.)
I don't think this is a real thing, tbh. I don't see anyone who practices censorship saying they actually just want moderation. They all use the three arguments he doesn't have hard counters to.
Scott seems to be carving out a very novel definition of 'moderation': virtually no forum has ever practiced moderation in that sense. (A few, like Reddit, may come close, with highly downvoted comments sometimes hidden by default, but that's not the primary form moderation takes there or anywhere else.)
I don't know what the point of this analogy is. If China merely discouraged the spread of information it didn't like instead of brutally repressing it, China would be very different? Yes, obviously, but what does that have to do with moderation policies on social media platforms?
This is unlikely to satisfy the people who are upset about getting booted from social media platforms. People already routinely construe criticism as a form of censorship. How happy are they likely to be when they're tagged as a Twitter-certified anti-semite? It also doesn't satisfy the platforms or their customers*, since they generally don't want the association with the sort of content we're talking about.
*a reminder that for most social media platforms the customers are not the users and vice versa.
Isn't this basically Slashdot moderation? Nothing that doesn't break the law goes away; the really unpopular stuff just sits hidden at -1, unless you go looking through the trough of Natalie Portman petrified-grits proto-memes and GNAA/WIPO trolling. If you want to see the best of the hivemind, browse at 5; if you want to see the iconoclasts, browse at 0.
They seem to have figured out a way to do this two decades ago; surely we can do it for less cost with machine learning today. I feel like that old joke about free software fans living in yurts trying to give away tanks that get 90 mpg.
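For reference, a minimal sketch of the Slashdot-style scheme described above: scores run from -1 to 5, nothing is deleted, and each reader picks their own threshold. The names are hypothetical, not Slashdot's actual implementation:

```python
# Threshold browsing: comments keep a score, readers choose a floor.
from dataclasses import dataclass

@dataclass
class Comment:
    author: str
    text: str
    score: int  # Slashdot-style scores run from -1 (buried) to 5 (highly rated)

def browse(comments: list[Comment], threshold: int) -> list[Comment]:
    """Return only comments at or above the reader's chosen threshold."""
    return [c for c in comments if c.score >= threshold]

thread = [
    Comment("insightful_user", "A careful argument.", 5),
    Comment("contrarian", "An unpopular but legal take.", 0),
    Comment("troll", "Obvious flamebait.", -1),
]
print([c.author for c in browse(thread, 5)])   # the hivemind's best
print([c.author for c in browse(thread, 0)])   # includes the iconoclasts
print([c.author for c in browse(thread, -1)])  # everything; nothing is deleted
```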
Presumably posts, not people, would be tagged. And also presumably, that's preferable to such posts being removed outright and the person posting them banned.
That people would sometimes dispute the application of such a tag to their post is to be expected, but it still leads to a world with freer speech.
To oppose such a plan is to let the perfect be the enemy of the good.
Why? If one of your objectives is to curtail harassment and enable people to self-segregate away from content they don't want to see, that is going to require identifying not only offensive posts but also the people who make them.
Disputes over social media moderation have very little to do with free speech and a great deal more to do with people with unpopular views/behavior demanding that others not be allowed to dissociate from them.
The "see banned posts" opt-in is a non-starter because then what about replies to such posts? If you allow them but also hide them, then you create a shadow platform of absolute free speech below your sanitized platform. If you disallow replies, then you do kill off that strand of conversation so it's still censorship in effect.