Social Ethics Online After The Great Deplatforming

Twitter has deplatformed Donald Trump, and that’s not even the most important digital news story of the cycle.

Even more consequential, I think: the decision by Apple and Google, the makers of the physical devices on which most people use the Parler app, to insist that Parler adopt a content moderation policy that doesn’t allow its users to threaten lives and plot sedition. Since January 6, Parler has hosted an online orgy of hate and gore worthy of a Roman emperor clinging to a column as the legions surround him. 

The two stories are related, of course.

Banning people from a default “normie” platform like Twitter will push them onto niche platforms, where their ideas are never intruded upon by alternative views or any social sanction. As they move into narrower and narrower spaces, the chances grow that their radical ideas cross the threshold from aspiration to action.  But the reason that’s true is because, to a large extent, social sanctions for behavior that would cause shame and a loss of prestige in the world of meat are absent or inverted when the behavior is filtered through social media. 

Offline, when you lie regularly, or when you spread unreliable cures for social ills, people tend to distrust you and move away from you, especially if the lie is clear. Online, when you lie regularly, you are just as likely to attract people with similar world views, who will sustain you and give you esteem, as you are to be shunned.

Disinformation researchers have puzzled for years over how to discourage people from regularly and willfully spreading harmful misinformation online.

Translating the ethics of three-dimensional space into the digital dimension is complicated even further because cause and effect online is far less intuitive to human mammals than cause and effect in person.

The free-for-all online and the lack of transparent, consistent (or even semi-consistent) policies about harmful misinformation have allowed us to avoid defining the scale of the problem: what should the ethical and legal consequences be for someone who abuses the physics of the digital world to repeatedly and willfully spread ideas that could harm other people in the non-digital world? Part of the problem is conceptual: the worlds are the same; the distinction has been a mirage, one aided by our own cognitive errors. We did not evolve to Tweet or be @-ted at.

Section 230 of the Communications Decency Act, which gives the platforms safe harbor for speech acts hosted or posted on their sites, has never been the expansive writ of license for illegality that its opponents insist it is:  inciting violence, committing crimes, and harassment are still illegal, and those who use platforms to commit them have been held accountable.  (Facebook has thousands of people who do nothing but help law enforcement track down criminals who misuse the site.) 

What’s been missing are the ethical consequences for violations of social norms, of which spreading harmful misinformation is the most malicious.  From 2016 through 2019, platforms slowly adopted policies about hate speech and enforced them with confounding irregularity. In 2020, when it became abundantly clear that President Trump’s supporters took their cues on public health matters from his Tweets and comments, the fiction that platforms could somehow take no action to stop harmful behavior disappeared.  Suddenly, there were warning labels and attempts to add friction. In late 2020 and early 2021, the platforms and technology companies realized that harmful misinformation about the integrity of elections could break the mythic offline-online barrier rather quickly, resulting in harm – and then death, after an actual insurrection.

There is a lot to worry about and a lot that we don’t know.  Apple and Google will have to figure out whether their content review standards should be applied to hundreds if not thousands of other apps that cater to specific communities.  Twitter will need to articulate why Iran can promote the hatred of Jews and China can claim victory for “re-educated” Muslims – there are real-world consequences there – and yet both still have a presence on its site. 

But a line has been crossed, and I'm glad we've moved beyond it. There are now absolute consequences for willfully malicious behavior online.  The debate about what they should be has been joined, finally, by the power to imagine a different way forward.