
Social Media Really Is Making Us More Morally Outraged

Platforms like Twitter may amplify ingrained human behaviors, but a future filled with healthy discourse and productive conversations isn't impossible.

Popular Science



Social media algorithms rank posts that express outrage higher in the feed. Max Pixel

To no one’s surprise, scientists from Yale University found that social media platforms like Twitter amplify our collective moral outrage. Additionally, they found that it was mostly politically moderate users who learned to be more outraged over time. Their findings are detailed in a 2021 study in Science Advances.

“We were interested in broadly trying to understand this phenomenon that most people who use social media platforms, especially Facebook and Twitter, are aware of—which is that when you log in, there’s often a lot of political content that floats around in your newsfeed, and it usually comes with a lot of moral outrage, especially during key times in American politics,” says William Brady, a postdoctoral researcher in the department of psychology at Yale University. 

Further, Brady and his team wanted to tease out why moral outrage seems to be amplified in online social networks, and whether social learning—how we learn new behaviors by socializing with other humans—can play a role in influencing our decisions to express moral outrage.

Training Computers to Recognize Moral Outrage

To gather information for their study, they first had to define what moral outrage was. They used theoretical definitions from moral psychology and emotion science, and broke it down into three main components. Statements had to convey a negative feeling, typically a combination of anger and disgust, and the users had to be genuinely worked up about the topic. In the realm of morality and politics, people usually “express outrage when they feel that someone has transgressed against their sense of right and wrong,” Brady explains. And finally, the statement has to evoke certain consequences: “Someone wants to hold someone else accountable, or punish them, or call them out.”

They focused on Twitter reactions to select political events from 2017 to 2019 that were likely to elicit moral outrage from both sides of the political spectrum, like the confirmation hearing for Supreme Court Justice Brett Kavanaugh and the Trump administration’s ban on transgender individuals serving in the military.

The researchers looked at two forms of social learning while following the tweets and tweeters over the course of the events. One type of social learning was reinforcement learning, where people learn to adjust their behavior based on social feedback that they receive. For example, getting likes and shares can be interpreted as positive social feedback. The other was norm learning, where people can adjust their behavior through observing what the most common, or normative, behaviors are in their social network. This can happen the instant that users log onto the platform, and see a snapshot of what their social network, or people they follow, are talking about. 
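The reinforcement-learning idea above can be illustrated with a toy sketch. This is not the study's actual model; it is a hypothetical simulation in which a user's tendency to express outrage drifts toward the social feedback signal (likes and shares as reward), showing how repeated positive feedback could gradually shift a moderate user toward more outrage.

```python
# Toy illustration (NOT the study's model): a simple prediction-error
# update, where positive feedback pulls a user's outrage tendency upward.

def update_outrage_tendency(tendency, feedback, learning_rate=0.1):
    """One reinforcement-learning step.

    tendency: current probability (0..1) that the user expresses outrage
    feedback: observed social reward, e.g. 1.0 for likes/shares, 0.0 for none
    """
    # Move the tendency toward the feedback signal by a fraction
    # of the prediction error (feedback - tendency).
    return tendency + learning_rate * (feedback - tendency)

# A hypothetical politically moderate user starts with a low tendency...
tendency = 0.2
# ...but every outraged post gets rewarded with engagement.
for _ in range(20):
    tendency = update_outrage_tendency(tendency, feedback=1.0)
print(round(tendency, 2))  # the tendency climbs steadily toward 1.0
```

Under consistently rewarded outrage, the tendency converges toward the reward value; with no feedback (`feedback=0.0`), the same rule would decay it back down.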

Initially, human annotators went through a dataset of 26,000 tweets and labeled whether these tweets expressed moral outrage or not. 

For example, while sorting tweets that were related to the transgender ban in January 2019, a tweet that said “This is a disgusting display of hatred and oppression. #FUCKYOUTRUMP and your criminal cabinet!” was coded as moral outrage, while another tweet that said “Hillary Clinton said some thoughtful words about the ban” was not coded as moral outrage. 

They then trained a machine learning model to pick up the linguistic features (type of words, arrangement of words, and punctuation) that are most associated with the tweets labeled as containing outrage. The model then extrapolates to new tweets, estimating the probability that any given tweet contains an expression of moral outrage. The model performs very well with political topics, but it’s unclear if it can also be used for non-political topics.
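The classifier the researchers built is more sophisticated, but the basic idea of learning which words are associated with outrage-labeled tweets can be sketched with a minimal naive Bayes model. The training examples below are hypothetical stand-ins echoing the annotation examples quoted earlier, not data from the study.

```python
# Minimal bag-of-words sketch of an outrage classifier (naive Bayes with
# add-one smoothing). The study's actual model was trained on 26,000
# human-annotated tweets and richer linguistic features.
import math
from collections import Counter

def tokenize(text):
    return text.lower().split()

def train(labeled_tweets):
    """Count word frequencies separately for outrage / non-outrage tweets."""
    counts = {True: Counter(), False: Counter()}
    totals = {True: 0, False: 0}
    for text, is_outrage in labeled_tweets:
        for word in tokenize(text):
            counts[is_outrage][word] += 1
            totals[is_outrage] += 1
    return counts, totals

def outrage_probability(text, counts, totals):
    """Naive Bayes estimate of P(outrage | text), assuming uniform priors."""
    vocab = set(counts[True]) | set(counts[False])
    log_odds = 0.0
    for word in tokenize(text):
        p_out = (counts[True][word] + 1) / (totals[True] + len(vocab))
        p_not = (counts[False][word] + 1) / (totals[False] + len(vocab))
        log_odds += math.log(p_out) - math.log(p_not)
    return 1 / (1 + math.exp(-log_odds))

# Hypothetical labeled examples mirroring the annotations described above
training = [
    ("this is a disgusting display of hatred", True),
    ("criminal cabinet should be punished", True),
    ("some thoughtful words about the ban", False),
    ("a calm summary of the hearing", False),
]
counts, totals = train(training)
print(outrage_probability("disgusting and criminal", counts, totals) > 0.5)
```

With even this toy vocabulary, words like “disgusting” push the estimated probability above 0.5, while neutral wording like “thoughtful words” pulls it below.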

The most interesting finding for the team was that some of the more politically moderate people tended to be the ones who were most influenced by social feedback. “What we know about social media now is that a lot of the political content we see is actually produced by a minority of users—the more extreme users,” Brady says.

One question that’s come out of this study is: under what conditions do moderate users become socially influenced to conform to a more extreme tone, as opposed to getting turned off by it and leaving the platform or simply disengaging? “I think both of these potential directions are important because they both imply that the average tone of conversation on the platform will get increasingly extreme.”

Social Media Can Exploit Base Human Psychology

Moral outrage is a natural tendency. “It’s very deeply ingrained in humans, it happens online, offline, everyone, but there is a sense that the design of social media can amplify in certain contexts this natural tendency we have,” Brady says. But moral outrage is not always bad. It can have important functions, and therefore, “it’s not a clear-cut answer that we want to reduce moral outrage.”

“There’s a lot of data now that suggest that negative content does tend to draw in more engagement on the average than positive content,” says Brady. “That being said, there are lots of contexts where positive content does draw engagement. So it’s definitely not a universal law.” 

It’s likely that multiple factors are fueling this trend. People could be attracted to posts that are more popular or go viral on social media, and past studies have shown that we want to know what the gossip is and what people are doing wrong. But the more people engage with these types of posts, the more platforms push them to us. 

Jonathan Nagler, a co-director of the NYU Center for Social Media and Politics, who was not involved in the study, says it’s not shocking that moral outrage gets rewarded and amplified on social media.

“It’s in line with what people who study social media worry about on social media, that the incentives are not great,” says Nagler. “In this perfect world, people would learn new things on social media and become better informed. That would make for a more informed society, and people could make decisions in their interests. But if what we’re seeing instead is that people just have an incentive to be noticed, a lot of what is going to show up online isn’t going to be related to people becoming informed about facts or ideas, but instead it’s going to be about more extreme views.” 

How Social Media Changed Politics

So, how did we get here? Nagler notes that it’s important to acknowledge that the way in which news and information related to politics is disseminated has changed vastly over the last century. 

In the pre-cable US media landscape, consumers had three networks that said generally the same thing. But when cable news took off, new networks emerged that said something different, “and that led to a different world, and also let people opt out of the news,” Nagler says. “Once cable came on, they could watch Gilligan’s Island instead.”

Then, the social media world came along. “I think it is important to realize that those things are very much connected, because a huge amount of what gets spread on social media starts on cable news.” 

What social media has done is give anyone with an internet connection a voice, and of course, what they say spreads faster. “We used to live in a world where the media essentially acted as gatekeepers. So, if you had a really outrageous claim to make, you could stand on a street corner with a bullhorn, you weren’t getting very far. And what social media has done is open the door to those people,” says Nagler. “There’s a market for reasoned debate out there, but there seems to be a bigger market for really outrageous or extreme claims.”

This has changed the way that organizations and even individual users message about political news. “There was the rise of clickbait headlines, although that has been regulated a bit. Politicians, since Trump and even before that, they’re engaging and they’re messaging with people in a way that is evocative of emotions like outrage,” Brady says. “There’s also the rise of political pundits who have made careers out of saying more and more extreme things. And I feel like a lot of that has been reinforced by social media. It definitely preceded social media—hyperpartisan radio was a thing since the 80s—it’s not that social media invented this stuff, but it does tend to amplify it.” 

Due to pressure from the growing “techlash,” some companies are installing small “behavioral nudges” such as accuracy prompts that ask users if they want to read an article before they share it. “Certain design features could potentially help conversational health,” Brady says. “I think that those things can have small effects, and there’s data suggesting that they do have some effects.”

Rather than eliminating all expressions of genuine moral outrage, Brady argues that it’s more useful to give people accurate information so they can choose when to express outrage, and to make them aware of how they’re being influenced by other people and by the platforms themselves.

“I don’t think there’s this one big thing that platforms can do to suddenly change how online discourse is, just because it’s not just the platform design, but it’s also our psychology,” says Brady. “So to me, it has to be this combination where you have the companies do these small nudges that can help conversational health, while also empowering users to be aware of the ways that the design of the technology can potentially influence the social information they see.” 

Charlotte Hu is the assistant technology editor at Popular Science. She’s interested in understanding how our relationship with technology is changing, and how we live online.



This post originally appeared on Popular Science and was published August 13, 2021. This article is republished here with permission.
