We dodged more than one societal bullet with the Derek Chauvin guilty verdict. The streets remained quiet and the phalanxes of heavily armed police girding for civil unrest in dozens of cities went home without a shot fired or a head cracked. We also avoided an explosion of online hate speech that many expected to fuel street violence — although the preparations for that didn't involve armored personnel carriers. Rather, it was a sterile operation of turning the dials on algorithms.
Facebook announced the day before the verdict that it was preparing both to remove content that "praises, celebrates or mocks" the death of George Floyd, and to "limit the spread" of hate speech and incitement. In other words, Facebook was ready to aggressively censor speech on its platforms, either by eliminating it or reducing its amplification so that fewer people would see and share it. The announcement itself was not surprising. Facebook's Community Standards already forbid such toxic speech, although they have been — to put it mildly — inconsistently and selectively enforced.
What should also be no surprise is the extraordinary latitude the social media platforms have to manhandle speech, even in a country where government censorship is almost entirely forbidden. As the U.S. struggles to deal with hateful and divisive speech, particularly given the recent spike in anti-Asian violence, the questions of who may address it, and how, have taken on new urgency. Unlike in Europe, where hate speech is increasingly criminalized and governments penalize the platforms for not suppressing it en masse, our government is impotent to interfere with even the most divisive speech, while private entities have tremendous power to censor anything under their control — and to mess up doing it time and again.
The result is a chaotic mess that satisfies no one, but the alternative of government speech mandates on the European model is even worse. Mark Zuckerberg and Jack Dorsey have made themselves easy to loathe, but unlike governments they have no power to arrest, jail, or tear gas anyone. It's the combination of guns and speech control that poses the biggest threat, and we should be grateful our fractured system keeps them separate.
The Constitution isn't a ceiling on acceptable speech, but rather a low floor. The government must allow almost all speech unless it poses an imminent threat of violence. Free speech, the Supreme Court ruled back in 1949, in fact serves "its high purpose when it induces a condition of unrest … or even stirs people to anger." To give the First Amendment "breathing space," the Court said in another case, the government must permit speech that is "insulting, even outrageous." That leaves hate speech and a raft of other offensive material well outside the reach of police. Even many of the communications used to organize the January 6 insurrection at the Capitol would survive constitutional scrutiny.
The truth is that many Americans aren't comfortable with such permissiveness. A late-2019 survey found that about half the American public believes hate speech should be illegal. This attitude is not new. From at least the early 20th century, minority groups have pushed for protections from speech advancing hateful stereotypes, but as they were left without recourse in the courts, the private sector picked up the slack. Coffee shops, private universities, and even immense internet platforms are free to censor or amplify almost any message. As a man who participated in the 2017 Charlottesville rally discovered, the law will protect you when you march for white supremacy, but will leave you in the cold when you are fired from your job for doing so.
Where does this leave us? In Germany, the larger Internet platforms are being forced, on pain of enormous fines, to remove often vaguely defined hate speech in a matter of hours. This has resulted in massive sweeps that have ensnared much unpleasant but legal speech along with the truly bad. Police have also conducted surprise raids in seven European countries, arresting online posters and confiscating their phones and computers for crimes as earth-shattering as insulting politicians.
Despite their overreach, these efforts still appear well-intentioned. It is chilling to think of what Donald Trump — who saw criticism of his administration as treasonous — would have done with the force of criminal law to censor and jail speakers he didn't like. Even before he was deplatformed for his own hate speech and incitement, it's safe to assume his administration would have made aggressive use of any law allowing it to punish internet platforms for their content moderation decisions. Laws are only as good as those enforcing them, and Trump's excesses are an object lesson in the virtues of restricting government power over speech.
If the First Amendment saved us from some of Trump's worst depredations, it has also given the platforms room to experiment with ways to rein in hate speech without the distorting pressure of fines or criminal penalties. In the period surrounding the 2020 election, Facebook took measures to deemphasize incitements to violence, and they worked: actual news climbed into the lists of most-viewed posts alongside extreme right-wing content. It and the other platforms should be encouraged — and under threat of antitrust enforcement and increased privacy regulation, nudged — to make this their policy all the time, and to keep finding ways to deemphasize harmful content along the lines Facebook planned for the Chauvin verdict.
America's two-tiered system of speech governance is exasperating, but the alternatives, especially in the hands of an authoritarian-leaning government, are far more harmful.