For ten long years, we fought the good fight on content moderation, and by the start of 2022, it seemed like things were on a pretty reasonable track. Platform moderation was far from perfect, but most of the major players agreed they should be doing it, and they assigned resources accordingly. The platforms and their leaders were often reluctant, often whiny, but they did what they had to do to keep critics and regulators at bay. Trust and Safety departments were staffed with mostly good people who believed in the task that had fallen to them, even if their CEOs did not.
One year later, it’s not such a pretty picture. Led by Elon Musk’s Twitter, the major platforms have laid off tens of thousands of people, including vast swathes of the Trust and Safety apparatus. Meanwhile, new platforms like Post, Substack Notes and Bluesky are launching with a wish and a prayer that their enlightened free speech ideologies will empower good engagement and discourage bad engagement.
In short, despite vast amounts of experience and data showing the necessity and effectiveness of robust content moderation, no one has learned anything.
These problems are very visible on some of the new platforms, including Substack Notes, which is linked to the engine that (for now) powers this newsletter. On Mastodon, you might do better, depending on your server, since it was designed with combating abuse in mind, but the federated model means that there aren’t consistent rules across the entire platform, and there’s no singular authority to deal with emerging problems.
So we’re back to 2013, or thereabouts, and rebooting the five stages of content moderation. If you’ve been following me for a while, you will have seen these before, but for newcomers, the stages are:
- Denial: Because our platform is inherently good, we don’t have any need for robust content moderation.
- Free-speech mouth noises: OK, so despite being inherently good, we can see some problems, but our unwavering commitment to free speech precludes us from taking