
Verified by Psychology Today


Morality, Violence, and Free Speech on Social Media

Should TikTok have taken down the Bin Laden letter?

Recently, a letter written by the late terrorist Osama Bin Laden, advocating the killing of Americans and Jews, circulated on TikTok. The site’s administrators subsequently took the letter down, along with commentary defending it. Should TikTok have removed these posts, or did doing so violate these individuals’ right to free speech? In the digital age, such questions raise new considerations that warrant analysis, especially in a period of history marked by a growing trend toward systemic racism, xenophobia, and other forms of discrimination.

A classic statement of whether and when free speech may be censored in a democratic society comes from the philosopher John Stuart Mill in his essay On Liberty (1859). Mill wrote:

An opinion that corn-dealers are starvers of the poor, or that private property is robbery, ought to be unmolested when simply circulated through the press, but may justly incur punishment when delivered orally to an excited mob assembled before the house of a corn-dealer, or when handed about among the same mob in the form of a placard. Acts, of whatever kind, which, without justifiable cause, do harm to others, may be, and in the more important cases absolutely require to be, controlled by the unfavourable sentiments, and, when needful, by the active interference of mankind (On Liberty, Ch. 3).

In 1919, U.S. Supreme Court Justice Oliver Wendell Holmes, in Schenck v. United States, aligned with Mill, stating that:

The most stringent protection of free speech would not protect a man in falsely shouting fire in a theatre and causing a panic. It does not even protect a man from an injunction against uttering words that may have all the effect of force. … The question in every case is whether the words used are used in such circumstances and are of such a nature as to create a clear and present danger that they will bring about the substantive evils that Congress has a right to prevent.

Neither Mill nor Holmes could have foreseen the challenge presented by the internet as a vehicle for inciting an angry mob to do violence to others by uttering words that could have all the effect of force. The theatre of cyberspace has the far-reaching capacity to incite millions globally, especially via social media platforms such as TikTok, Facebook, and X, among others. Statements intended to encourage violence to other human beings that are uttered in a relatively small physical space may be much less dangerous, when weighed on a social scale, than the same statements made over the internet.

Hence, while free speech should be preserved on the internet, thereby permitting it to endure as the greatest experiment in democracy ever attempted, it is not thereby exempt from limitation. Such limitations include restricting hateful, prejudicial speech that poses a “clear and present danger” to others.

There is, of course, the controversial question of whether the government should be involved in restricting free speech in such cases, given its considerable power to encroach on freedom of speech if not itself constrained. Alternatively, it may arguably be the professional responsibility of the administrators of such behemoth bastions of cyber speech to regulate such speech when it occurs. In other words, the latter may be treated as part of the professional code of ethics of those who stand as the gatekeepers of social media.

The grounds of such a moral responsibility, according to Mill, would be harm to others. Freedom of speech, argued Mill, should not be restricted unless it portends physical harm to others. The fact that some may be emotionally upset by a certain line of speech, even hateful speech, is not itself grounds for censoring it. Mill suggested that such censorship ultimately deprives humanity of a forum in which all ideas can be displayed side-by-side, both true and false, thereby allowing the truth to be heard while the false is exposed against the background of truth.

The exception, however, should not devour the rule. Thus, the criterion for restricting speech needs to be narrow so as not to infringe on other types of speech. “Hate speech” is itself too broad a category for these purposes. Someone can express hatred without threatening to harm the object of that hatred. Hence, not all hate speech would qualify as speech that creates a “clear and present danger.”

Speech has what philosophers of language call “illocutionary force.” That is, it can be used to perform diverse acts. For example, in saying “I hate Blacks (Jews, Muslims, Buddhists, Hindus, Christians, Americans, or whomever),” one is reporting or expressing one’s own subjective, negative emotion toward the group in question. On the other hand, in saying something equivalent to “American civilians should be killed” (due to their paying taxes to the American government, an argument advanced by Bin Laden in the letter removed by TikTok), one is also making a threat or, at least, recommending the killing of American civilians. Such speech acts, committed in a forum that reaches millions of people, at least some of whom are likely to be inclined toward violence, arguably create a “clear and present danger” to innocent civilians. It does not matter whether the intended targets are Americans, Israelis, Palestinians, Blacks, Whites, or whomever. The threat, or recommendation, is one which arguably falls under Mill’s Harm Principle and can, therefore, on this criterion, be censored. Thus, a criterion consonant with Mill’s principle, as applied to social media, can take the following form:

Does the post in question threaten or recommend doing substantial physical harm to a certain individual or group of individuals?

If the answer to this question is yes, then the administrators of a social media network do, indeed, have a professional responsibility to take down, or disallow, the post, pursuant to Mill’s Harm Principle.

More from Elliot D. Cohen Ph.D.