Why regulating “bad speech” online is one of society’s biggest conundrums
- What can we do about "bad" speech on the internet? It may be that the longstanding reliance on the self-correcting mechanisms of the marketplace of ideas will work again. But perhaps not.
- The current debates about the threats to free speech, and even to democracy itself, triggered by the evolution of our newest technology of communication call into question the whole edifice of freedom of speech and press.
- The debate is crucial. It is ultimately through speaking and listening that human beings become who they are.
Excerpted with permission from Social Media, Freedom of Speech, and the Future of our Democracy, edited by Lee C. Bollinger and Geoffrey R. Stone. Copyright © 2022 by Oxford University Press.
One of the most fiercely debated issues of the current era is what to do about “bad” speech on the internet, primarily speech on social media platforms such as Facebook and Twitter. “Bad” speech encompasses a range of problematic communications — hate speech, disinformation and propaganda campaigns, encouragement of and incitement to violence, limited exposure to ideas one disagrees with or that compete with preexisting beliefs, and so on. Because the internet is inherently a global communications system, “bad” speech can arise from foreign as well as domestic sources. No one doubts that these kinds of very harmful expression have existed forever, but the premise of the current debate is that the ubiquity and structure of this newest and most powerful communications technology magnify these harms exponentially beyond anything we have encountered before. Some argue that, if it is left unchecked, the very existence of democracy is at risk.
The appropriate remedies for this state of affairs are highly uncertain, and this uncertainty is complicated by the fact that some of these forms of “bad” speech are ordinarily protected by the First Amendment. Yet the stakes in how we answer the question are very high, because it is now evident that much of public discourse about public issues has migrated onto this new technology and is likely to continue that course into the future.
Current First Amendment jurisprudence has evolved on the premise that, apart from certain minimal areas of well-established social regulation (e.g., fighting words, libel, threats, incitement), we should place our trust in the powerful antidote of counter-speech to deal with the risks and harms of “bad” speech. Of course, that may well turn out to be the answer to our contemporary dilemmas. Indeed, one can already see the rise of public pressures on internet companies to increase public awareness of the dangers of “bad” speech, and there are discussions daily in the media raising alarms over dangerous speech and speakers. Thus, it may be that the longstanding reliance on the self-correcting mechanisms of the marketplace of ideas will work again.
But perhaps not. There is already a counter risk — that the increase in “editorial” control by internet companies will be biased against certain ideas and speakers and will effectively censor speech that should be free. On the other hand, even those who fear the worst from “bad” speech being uninhibited often assert that internet company owners will never do enough on their own to initiate the needed controls because their basic, for-profit motivations are in direct conflict with the public good and the management of civic discourse. There is understandable concern that those who control the major internet companies will have an undue and potentially dangerous effect on American democracy through their power to shape the content of public discourse. On this view, public intervention is necessary.
It is important to remember that the last time we encountered a major new communications technology we established a federal agency to provide oversight and to issue regulations to protect and promote “the public interest, convenience, and necessity.” That, of course, was the new technology of broadcasting, and the agency was the Federal Communications Commission. The decision to subject private broadcasters to some degree of public control was, in fact, motivated by some of the very same fears about “bad” speech that we now hear about the internet. People thought the risks of the unregulated private ownership model in the new media of radio and television were greater than those inherent in a system of government regulation. And, like today, those who established this system felt unsure about what regulations would be needed over time (in “the public interest, convenience, and necessity”), and they therefore set up an administrative agency to review the situation and to evolve the regulations as circumstances required.
On multiple occasions, the Supreme Court has upheld this system under the First Amendment. The formal rationale for those decisions may not apply to the internet, but there is still plenty of room for debate about the true principles underlying that jurisprudence and their continued relevance. In any event, the broadcasting regime stands as arguably the best example in our history of ways to approach the contemporary concerns about new technologies of communication. But, of course, it may be that government intervention in this realm is so dangerous that social media platforms should be left to set their own policies, just as the New York Times and the Wall Street Journal are free to do.
Section 230 of the Communications Decency Act of 1996 famously shields internet companies from liability for speech on their platforms. Many critics of internet companies have advocated the repeal of this law and have used the idea of its repeal as a threat to get these companies’ owners to change their editorial policies (either to stop censoring or to censor more). Another approach would be to enforce existing laws that forbid foreign states and certain actors from interfering in US domestic elections and politics.
Everyone accepts the proposition that efforts by Russia to spread disinformation in order to foster civil strife in America are highly dangerous and properly subject to criminal prohibitions. But, in a much more integrated world, especially one facing global problems (climate change, and so on), it is also true that the American public has a vital First Amendment interest in hearing and communicating with the broader international community. The problem, therefore, will be in finding the right balance between improper foreign interference and the healthy and necessary exchange of ideas on the global stage.
We also need to take stock of the precise nature of the problems we are facing with “bad” speech on social media platforms, as well as what means other than legal intervention might be available to address the problems. Public education, changes in algorithms, the development of a more journalistic culture within the management of these platforms, government pressures on “bad” actors abroad, and other non-legal solutions all need to be explored.
It is also possible that the constraints in existing First Amendment jurisprudence should themselves be amended, not only because the circumstances and contexts are different today but also because experience over time with those doctrines and principles might lead some to doubt their original or continuing validity. Overall, we need to imagine as best we can what a new equilibrium should look like as we experience the impacts on our democracy of this new technology of communication.
Every now and then in the history of the First Amendment an issue comes along that not only poses a perplexing and challenging question about some aspect of First Amendment doctrine or some incremental move, but also calls into question the whole edifice of freedom of speech and press as we have come to know it in the United States. The current debates about the threats to free speech, and even to democracy itself, triggered by the evolution of our newest technology of communication — the internet, and especially social media platforms — constitute such an occasion. The extraordinarily rapid embrace of this method of communication (in less than two decades), together with its pervasive presence in our lives, is both astonishing and revolutionary. This is especially true because the internet and social media are controlled by a few corporations that are structured to reserve primary control of this powerful new means of communication to themselves. It is now a central question in the United States and around the world whether this new means of communication strengthens what freedom of speech has marked as the ideal or threatens everything we have so painstakingly built.
This book is dedicated to exploring that question and what follows from the answers we give to it. At this moment in the history of the United States, there is arguably no conundrum of greater importance. When an overwhelming majority of citizens communicates, receives information, and forms political alliances in a single place, and when that place is effectively controlled and curated by a single person or entity (or mathematical model), alarms built over decades of thought about freedom of speech and democracy are triggered. Too much censorship? Or too little? Those, in a sense, are the central concerns. The balance struck is always the test of a free and democratic society, because it is ultimately through speaking and listening that human beings become who they are and decide what to believe. Put simply, do entities like Facebook, Twitter, and YouTube have too much power under existing law to determine what speech we will or will not have access to on social media? Are there changes that can constitutionally be made to the current system that will improve rather than worsen the current state of affairs? And how should we think about the multinational implications of the internet and about how policies adopted in other nations affect freedom of speech in the United States?