Today, it seems as though we are constantly bombarded not just with information, but with emotionally charged information intended to shape the way we perceive and react to the world: information that seems designed to cause outrage. This is especially true of political content on social media, so much so that a new term has been coined to describe how social media is navigated today: “doomscrolling,” the act of excessively consuming negative content.

We already know that online content has the power to change the way we feel and think. As one 2017 study of advocacy organizations’ social media activity demonstrates, “Neuroscientists and psychologists have uncovered evidence that physical interaction is not necessary for the activation of mirror neurons enabling emotional or cognitive synchrony. Instead, they find people change their mental state in response to audiovisual cues or text alone” (Bail, “Channeling” 5). This is a frightening prospect: individuals can be exposed to any variety of emotionally charged content and misinformation on social media, which can change the way they act in the real world, as techno-sociologist Zeynep Tufekci pointed out in her talk “We’re Building a Dystopia Just to Make People Click on Ads.” This phenomenon, coupled with the divisive, polarizing political content that has taken over social media in recent years, could prove to have devastating consequences, up to and including civil war, as Tristan Harris points out in “The Social Dilemma.”

To resolve this issue, it is crucial to understand why this brand of political content spreads so easily through social media in the first place. In the same talk, Tufekci places a significant portion of the blame on “the algorithms,” the recommender systems used by platforms like Facebook and Twitter to choose what content to show users. Though it is unclear exactly what signals these algorithms are trained on, Harris’s “The Social Dilemma” proposes a few intuitive ones, including (but not limited to) prioritizing content that grabs users’ attention and, therefore, content that users are more likely to engage with. Engagement itself can be quantified by a variety of criteria. On Twitter, likes, retweets, reading and writing replies, and visiting other users’ profiles are all well-established ways of tracking engagement; these metrics were used in a 2020 study by Alina Pavlova and Pauwke Berkers to understand what kind of content on Twitter most effectively raises awareness about widespread mental health issues. It is therefore plausible that Twitter selects what content to show users based in part on the content they “engage” with most, whether that engagement takes the form of liking or retweeting a tweet, viewing or writing replies, or visiting the profile of a tweet’s author.

Given this well-established phenomenon of the rapid spread of hostile, emotionally charged political discourse on Twitter, an obvious question arises: does something about the way people use Twitter contribute to this spread? More specifically, are users’ engagement patterns contributing to the spread of outrage-inducing political content on Twitter? Intuitively, algorithms trained to maximize engagement or time spent on a service may learn to recommend outrage-inducing, divisive content if users are more likely to engage with such content. The purpose of this study is to investigate this possibility among a population that is deeply emotionally invested in the subject matter of these tweets: college students. More specifically, this study will investigate whether college students are more likely to engage with political Twitter content that causes them to experience negative emotions than with content that does not.


The hypothesis of this study is that college students will engage more with political content that elicits negative emotions than content that does not. The following section draws on past works in related areas to provide theoretical justification for this hypothesis and the assumptions it is based upon.

Before focusing on political content specifically, we must ask why content creators attempt to appeal to our emotions in the first place. After all, if appealing to our emotions were not an effective strategy for promoting a message or idea, these polarizing, divisive messages would likely not be so prevalent throughout social media. Pavlova and Berkers’s study explored the role of emotion in the propagation of social media messages after first establishing that “emotional energy is a more likely driving force for the public domain discourse than reason.” One of the study’s key findings was that among tweets pertaining to mental health, “topics with higher emotional energy were persistently driving the discourse (p < 0.001), mostly by engagement (p < 0.001) and to a lesser extent by high confidence and solidarity (p < 0.01),” where “emotional energy” is a term derived from earlier work: “Emotional energy arises from deep engagement with something (Csikszentmihalyi, 1996), or in interaction by intense involvement and commitment, often accompanied by strong emotions and feelings of solidarity, confidence, conviction, and collective effervescence.” This finding supported their initial claim that emotion, rather than reason, drives discourse, and verified the concept in the virtual world.

Emotional language on Twitter can be both positive and negative, and it is important to distinguish between the two to better understand the effect of emotionally charged content. In a 2016 article, Christopher Bail describes a study analyzing social media messages regarding Autism Spectrum Disorders (ASDs). On the role of emotional language, both positive and negative, in the virality of social media campaigns about Autism Spectrum Disorders, Bail writes, “Although numerous studies indicate that fear-based messages attract more attention than do dispassionate appeals, my results show that exchanges of emotional language between advocacy organizations and social media users—particularly positive emotional language—further increase the virality of advocacy messages” (Bail, “Emotional”). While political content occasionally involves encouraging, positive messages, given the increasingly divisive political climate of the United States, such messages may account for only a small share of the emotionally charged political content on Twitter. Rather, “negative” or controversial political messages may make up the majority, which Bail addresses immediately after the quote above: “Second, my results showed that exchanges of negative emotional language between advocacy organizations and social media users—although less common—are also associated with viral views” (Bail, “Emotional”). This claim hints at the possibility that users will be more likely to engage with tweets that elicit negative emotions than with those that do not. Assuming users view emotionally charged political content they agree with as objective fact rather than as an emotional outcry, they may perceive equally emotional content that contradicts their views as far more emotionally charged.
As a result of this distorted perception, such users may be more likely to interact with content that upsets or frustrates them out of dissension or fear, further spreading this polarizing content and adding more fuel to the fire.

Though it is the purpose of this study to verify this notion, the potential for it to exist is frightening, especially when its effects are considered beyond the individual. If recommender algorithms optimize for attention and time spent on an application, and negatively charged messages receive more attention, these algorithms could create a vicious cycle: users are shown more negative content because it is more likely to hold their attention, they share and interact with that content, and in doing so spread it to even more users.

Though the connection between Bail’s claim and this study’s hypothesis may seem far-fetched given the differing contexts of his claim and of this study, the notion of emotions driving political behavior is not. George Marcus speaks to the effect of emotion on individuals’ beliefs and political involvement in his article “Emotions and Politics: Hot Cognitions and the Rediscovery of Passion.” He cites a 1986 study which showed that “variations in demeanor [of political candidates] had much more influence than party identification or ideology,” a finding confirmed by a 1988 study which concluded that “[emotionally] affective responses to candidates have greater consequence on voter preferences than do issue or ideological statements” (Marcus 209).

One emotional response that individuals can have to politics, of course, is anger. In his investigation into the emotional nature of anti-elite politics, Paul Marx finds that anger “has the potential to mobilize even disadvantaged or inattentive citizens to participate in politics” (Marx). Together, what Bail and Marx observed could produce a two-sided problem. On one hand, these findings only encourage content creators to stoke emotion in, and even enrage, their audiences. On the other hand, when users’ emotions are aroused on social media, they can conveniently act on those emotions and participate politically by engaging with the content and spreading it. This two-sided problem could feed the same vicious cycle described above: content creators pump out emotionally charged content, users interact with and share it, and it reaches ever more individuals, propagating outrage-inducing content. It is therefore crucial to understand how this polarizing, outrage-inducing content spreads on social media, because many systems and biases are in place that allow such a spread to occur.

Collectively, these works provide a theoretical foundation for this study’s hypothesis: that undergraduate college students will be more likely to engage with political content that they react negatively to than with content that they do not.


This section will briefly discuss how data was collected, but a more detailed description can be found in the appendix below.

To test the research question, a Google Form that simulated Twitter was shared with college students across the country. The form contained three parts. The first presented screenshots of 18 political tweets on controversial subject matter and asked respondents how they would engage with each tweet if they were to see it on their Twitter feed. For each tweet, respondents were given the options “Like,” “Retweet with quote,” “Retweet without quote,” “Click on author’s profile,” “View replies,” and “Write reply.” Respondents could also choose not to engage with a tweet at all.

In the next part of the form, respondents were presented the same tweets in the same random order, but this time were asked to report an emotional response to each tweet. The options provided were fear, hope, sadness, joy, distress, relief, frustration, empathy, dissension, and agreement, based on the model Ira Roseman describes in his 1996 article “Appraisal Determinants of Emotions: Constructing a More Accurate and Comprehensive Theory.”

Finally, respondents were asked about their political beliefs, self-identifying as “Extremely liberal,” “Liberal,” “Moderately liberal,” “Centrist,” “Moderately conservative,” “Conservative,” or “Extremely conservative.” Responses were written to a CSV file, which then underwent additional formatting via a Python script.
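As an illustration only, the reformatting step might resemble the sketch below. The column names (“Tweet 1” … “Tweet 18,” “Political beliefs”), the comma-separated encoding of multi-select answers, and the `flatten_responses` helper are all assumptions for the sake of the example, not the study’s actual script; it merely shows how a wide one-row-per-respondent Google Form export could be flattened into one row per (respondent, tweet) pair for analysis.

```python
import csv
import io

def flatten_responses(raw_csv: str, n_tweets: int = 18):
    """Flatten a wide Google Form export (one row per respondent, one
    hypothetical 'Tweet N' column per tweet) into one record per
    (respondent, tweet) pair."""
    rows = []
    for respondent_id, record in enumerate(csv.DictReader(io.StringIO(raw_csv))):
        for i in range(1, n_tweets + 1):
            actions = record.get(f"Tweet {i}", "") or ""
            rows.append({
                "respondent": respondent_id,
                "tweet": i,
                # Multiple selections arrive comma-separated; an empty cell
                # means the respondent chose not to engage at all.
                "actions": [a.strip() for a in actions.split(",") if a.strip()],
                "politics": record.get("Political beliefs", ""),
            })
    return rows

# Tiny two-tweet example using the assumed column layout.
sample = (
    "Tweet 1,Tweet 2,Political beliefs\n"
    '"Like, View replies",,Moderately liberal\n'
)
flat = flatten_responses(sample, n_tweets=2)
print(flat[0]["actions"])  # prints ['Like', 'View replies']
```

A long format like this makes it straightforward to cross-tabulate engagement actions against the emotional responses and political self-identification collected in the other parts of the form.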

Tweets Used

The following screenshots show some of the tweets that were included in the survey. For a full list of all 18, see the appendix.