
Why Some Tweets Trigger Hate Speech

As it turns out, it's the ones that use moral language.

Social media can clearly become a hotbed of hate speech. Even after so many years, I can still be surprised by the aggressive, expletive-filled, hateful responses to even relatively tame posts—and sometimes even by explicit threats. That's not what most of us want to see in our feeds. And while hate speech on Twitter, for example, has clearly gotten worse, the reasons why are not clear.

But in the age of big data, with huge and relatively easy-to-handle datasets, this question can be tackled empirically. In some sense, the vast number of publicly available tweets is a dream opportunity to determine what it is about a social media post that triggers various responses, including hate speech.

A recently published study did exactly that. The database the researchers studied was impressively large: 691,234 tweets and 35.5 million responses. They were looking for some kind of correlation between the language used in the original tweets and the hatefulness of the responses. Importantly, the focus was not on the content of the tweets (which would be difficult to quantify), but merely on the language used.

What the researchers found was a very clear correlation between the moral and moral-emotional language used in the original tweets and the hatefulness of the responses they received. By moral and moral-emotional language, the researchers mean the use of words that refer to values and to features of groups. One example the authors give is "The LGBTQ community is nearly four times more likely to be victims of violent crime." This tweet uses moral and moral-emotional language because it includes words like community, victims, violent, and crime. Tweets like this attract many more hate-speech responses than tweets that use less moral and moral-emotional language. The correlation holds regardless of whether politicians, journalists, or activists posted the original tweets.

These results are disturbing in and of themselves. But they are even more disturbing when put together with another piece of empirical evidence: tweets with emotional or moral content are more likely to be diffused (retweeted) within ideological groups, but much less likely to be retweeted across ideological boundaries. So Democrats retweet Democrats and Republicans retweet Republicans; that’s hardly surprising. More unexpectedly, the moral and emotional language of one ideological group is very consistent within that group but differs sharply from the language used in other ideological groups. In other words, there are major differences in the emotional language used by Democrats and Republicans, yet two random Democrats are likely to post in a remarkably similar manner.

If we put the results of these two studies together, what we get is extremely worrying. First, social media posts are more popular—that is, more widely retweeted—if they use moral and moral-emotional language. This drives social influencers to use more and more of this kind of language. But it is exactly this language that acts as a magnet for hate speech. So this process inevitably leads to more and more hate speech. Still, understanding what makes people go wild behind their screens can help us all at least try to tone down the use of the words that serve as hate-speech triggers.
