A brief argument that “moderation” is distinct from censorship mainly when it’s optional.
I read this as a corollary to Scott's Archipelago and Atomic Communitarianism. It certainly raises similar issues—especially the existence of exit rights. Currently, even heavily free-speech platforms retain the option of deleting content, whether for legal or practical reasons. But deletion is incompatible with an "exit" right to opt back in to the removed material.
Scott also suggests that if moderation becomes "too cheap to meter," the conflation with censorship is likely to dissolve. I'm not sure I see it. Assuming he means something like free, accurate AI tagging and filtering, how does that remove the incentive to call [objectionable thing X] worthy of outright censorship? I suppose it weakens the excuse that "X might offend people," forcing censors to articulate more legible harms.
As a side note, I'm curious whether anyone else browses the moderation log periodically. Perhaps I'm just engaging with outrage fuel. But it also seems like an example of unchecking (some of) the moderation filters to stay calibrated.
Notes -
The true problem with censorship is when it silences certain ideas. Child porn, which he mentions, is not an idea; it's a red herring, since nobody is seriously arguing in favor of allowing it. The philosophical position that no ideas should be censored has been debated for centuries, and it has a name: freedom of speech.
The problem is that today nobody really knows what freedom of speech actually is. The conflation of moderation with censorship is one problem, but so is the conflation of the philosophical position with the laws (e.g., the First Amendment). It shows when people claim that freedom of speech is a right.
Freedom of speech was meant to safeguard heliocentrism; it wasn't meant to be a right of Galileo's.