Introduction

Today, it seems as though we are constantly being bombarded not just by information, but by emotionally charged information intended to affect the way we perceive and react to the world: information that seems intended to cause outrage. This seems to be especially true with political content on social media, so much so that a new term has been coined to describe how social media is navigated today: “doomscrolling”, defined as the act of excessively consuming negative content.

We already know that online content has the power to change the way we feel and think. As one 2017 study on advocacy organizations’ social media activity demonstrates, “Neuroscientists and psychologists have uncovered evidence that physical interaction is not necessary for the activation of mirror neurons enabling emotional or cognitive synchrony. Instead, they find people change their mental state in response to audiovisual cues or text alone” (Bail, “Channeling” 5). This is a frightening prospect. In theory, individuals can be exposed to any variety of emotionally charged content and misinformation on social media, which can change the way they act in the real world, as technological sociologist Zeynep Tufekci pointed out in her talk “We’re Building a Dystopia Just to Make People Click on Ads”. This phenomenon, coupled with the divisive, polarizing political content that has taken over social media over the last few years, could prove to have devastating consequences, including civil war, as Tristan Harris points out in “The Social Dilemma.”

To resolve this issue, it is crucial to understand why this brand of political content spreads so easily throughout social media in the first place. In the same aforementioned talk, Tufekci places a significant portion of the blame on “the algorithms”, the recommender systems used by platforms like Facebook and Twitter to choose what content to show to users. Though it is unclear exactly what these algorithms are trained on, Harris’ “The Social Dilemma” proposes a few intuitive ideas, including (but not limited to) prioritizing content that grabs users’ attention and, therefore, content that users are more likely to engage with. Engagement can be quantified by a variety of criteria, though. On Twitter, likes, retweets, reading/writing replies, and interacting with users’ profiles are all well-established ways of tracking engagement. These metrics were used in a 2020 study conducted by Alina Pavlova and Pauwke Berkers to understand what kind of content on Twitter most effectively raises awareness about widespread mental health issues. It is therefore reasonable to believe that Twitter can select what content to show users in part based on the content that they “engage” the most with, whether that engagement takes the form of liking or retweeting a tweet, viewing the replies of or replying to a tweet, or visiting the profile of the author of a tweet.

Given this well-established phenomenon of the rapid spread of hostile, emotionally charged political discourse on Twitter, the obvious question arises: does something about the way in which people use Twitter contribute to this spread? More specifically, are users’ engagement patterns contributing to the spread of outrage-inducing political content on Twitter? Intuitively, algorithms trained to maximize engagement or time spent on the service may learn to suggest outrage-inducing, divisive content if users are more likely to engage with such content. The purpose of this study is to investigate this possibility among a population that is deeply emotionally invested in the subject matter of these tweets: college students. More specifically, this study will investigate whether or not college students are more likely to engage with political Twitter content that causes them to experience negative emotions than content that does not.

Background

The hypothesis of this study is that college students will engage more with political content that elicits negative emotions than content that does not. The following section draws on past works in related areas to provide theoretical justification for this hypothesis and the assumptions it is based upon.

Before focusing on political content specifically, we must ask why those who generate content attempt to appeal to our emotions in the first place. After all, if appealing to our emotions were not an effective strategy for promoting a message or idea, these polarizing, divisive messages would likely not be popular throughout social media to begin with. Pavlova and Berkers’ study explored the role of emotion in the propagation of social media messages after first establishing that “emotional energy is a more likely driving force for the public domain discourse than reason.” One of the study’s key findings was that among Tweets pertaining to mental health, “topics with higher emotional energy were persistently driving the discourse (p < 0.001), mostly by engagement (p < 0.001) and to a lesser extent by high confidence and solidarity (p < 0.01),” where “emotional energy” was a term derived from a different study: “Emotional energy arises from deep engagement with something (Csikszentmihalyi, 1996), or in interaction by intense involvement and commitment, often accompanied by strong emotions and feelings of solidarity, confidence, conviction, and collective effervescence.” This finding supported their initial claim that emotion, rather than reason, drives public discourse, and verified that the concept holds in the virtual world.

Emotional language on Twitter can be both positive and negative, and it is important to make the distinction between the two to better understand the effect of emotionally charged content. In one 2016 article, Christopher Bail describes a study analyzing social media messages regarding Autism Spectrum Disorders (ASDs). When discussing the role of emotional language, both positive and negative, in the virality of social media campaigns about Autism Spectrum Disorders, Bail writes, “Although numerous studies indicate that fear-based messages attract more attention than do dispassionate appeals, my results show that exchanges of emotional language between advocacy organizations and social media users—particularly positive emotional language—further increase the virality of advocacy messages” (Bail, “Emotional”). While political content occasionally involves encouraging, positive messages, given the increasingly divisive political climate of the United States, this may not apply to a significant share of the overly emotional political content on Twitter. Rather, “negative” or controversial political messages may make up the majority, which Bail discusses immediately after the aforementioned quote: “Second, my results showed that exchanges of negative emotional language between advocacy organizations and social media users—although less common—are also associated with viral views” (Bail, “Emotional”). This claim hints at the possibility that users will be more likely to engage with Tweets that elicit negative emotions than with those that do not. Under the assumption that users may view emotionally charged political content that they agree with as objective fact rather than as an emotional outcry, they may perceive equally emotional content that goes against their views as being much more emotionally charged. As a result of this distorted perception, such users may be more likely to interact with content that upsets or frustrates them due to dissension or fear, further spreading this polarizing content and thereby adding more fuel to the fire.

Though it is the purpose of this study to verify this notion, the potential for it to exist is frightening, especially when analyzed beyond its impact on the individual. If recommender algorithms optimize for attention and time spent on an application, and negatively charged messages receive more attention, these algorithms could create a vicious cycle: users are exposed to more negative content since it’s more likely to hold their attention, they share and interact with that content, and as a result, spread it to even more users.

Though the connection between Bail’s claim and this study’s hypothesis may seem far-fetched given the different contexts of his claim and of this study, the notion of emotions driving political behavior is not. George Marcus speaks to the effect of emotion in influencing individuals’ beliefs and involvement in his article “Emotions and Politics: Hot Cognitions and the Rediscovery of Passion.” He cites a study from 1986 that showed that “variations in demeanor [of political candidates] had much more influence than party identification or ideology,” which was confirmed by a later study in 1988 that concluded that “[emotionally] affective responses to candidates have greater consequence on voter preferences than do issue or ideological statements” (Marcus, 209).

One of the emotional responses that individuals can have to politics, of course, is anger. In his investigation into the emotional nature of anti-elite politics, Paul Marx finds that anger “has the potential to mobilize even disadvantaged or inattentive citizens to participate in politics” (Marx). What Bail and Marx observed could lead to a two-sided issue. On one hand, this would only encourage content generators to stir emotion in, and even enrage, their audiences. On the other hand, if one’s emotions are aroused on social media, one can conveniently act on those emotions and participate politically by engaging with the content and/or spreading it. This two-sided issue could be another source of the same vicious cycle described above: content generators pump out emotionally charged content, users interact with and share that content, and the content reaches even more individuals, propagating outrage-inducing material. It is therefore crucial to understand the nature of the spread of this polarizing, outrage-inducing content on social media, because there are many systems and biases in place that can allow such a spread to take place.

Collectively, these works lay the theoretical foundation for this study’s hypothesis: that undergraduate college students will be more likely to engage with content they react negatively to than with content they do not.

Procedure

This section will briefly discuss how data was collected, but a more detailed description can be found in the appendix below.

To test the research question, a Google Form that simulated Twitter was shared among college students all over the country. The form contained three parts. The first presented screenshots of 18 political tweets on controversial subject matter and asked respondents how they would engage with each tweet if they were to see it on their Twitter feed. For each tweet, respondents were given the option to “Like”, “Retweet with quote”, “Retweet without quote”, “Click on author’s profile”, “View replies”, and “Write reply”. Respondents also had the option to not engage with a tweet at all.

In the next part of the form, users were presented the same tweets in the same random order, but this time, were asked to provide an emotional response to each tweet. The options provided were fear, hope, sadness, joy, distress, relief, frustration, empathy, dissension, and agreement, which were based on Ira Roseman’s model described in his article “Appraisal Determinants of Emotions: Constructing a More Accurate and Comprehensive Theory” published in 1996.

Last, respondents were asked about their political beliefs, where they were asked to self-identify as either “Extremely liberal”, “Liberal”, “Moderately liberal”, “Centrist”, “Moderately conservative”, “Conservative”, or “Extremely conservative”. Responses were written to a CSV, which underwent additional formatting via this Python script.

Tweets Used

The following screenshots show some of the tweets that were included in the survey. For a full list of all 18, see the appendix.

Results

Exploratory data analysis has been omitted from this section, but is included in the appendix. This section will focus on how respondents’ political beliefs and emotional responses contributed to their engagement.

The Influence of Political Beliefs on Engagement

Before diving directly into the relationship between individuals’ emotional response to political content and their engagement with it, it may be valuable to first understand how their political beliefs affect their engagement. After all, it only makes sense that one’s emotional disposition regarding political tweets would be heavily influenced by their political beliefs. Therefore, any relationship between people’s ideological stances and their engagement may provide context necessary to this study’s attempt at answering the focal question.

The following graphs show, for several tweets, how engagement varied across two political groups: respondents who identified as moderately liberal, liberal, or extremely liberal, and respondents who identified as centrist, moderately conservative, conservative, or extremely conservative. Centrists were grouped in with the second category since, even without it, the first group already holds the vast majority of the respondents. Analysis for all 18 tweets, in order, can be found in the appendix.
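As a minimal illustration of this grouping in R, under the assumption of a data frame with one row per response and an ideology column (the names `responses` and `Ideology` are assumed for illustration and are not taken from the study’s actual code):

library(dplyr)

# Collapse the seven self-identified ideology labels into two groups
responses <- responses %>%
  mutate(Political_Group = if_else(
    Ideology %in% c("Extremely liberal", "Liberal", "Moderately liberal"),
    "left",
    "non-left"
  ))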

Tweet 3 criticized Democratic presidential candidate Joe Biden’s refusal to condemn the “Marxist” leaders of the leftist “Black Lives Matter” movement. Given the aggressive nature of the tweet, very few respondents opted to retweet it, and though the only respondents that liked it were in the non-left group, less than 10% of respondents in that group did so. Many respondents completely ignored the tweet, with more than 50% of left-identifying respondents opting to do so. Interestingly enough, frequencies of private engagement remained relatively high for both groups, with more than 40% of left-identifying respondents viewing the replies (and nearly as many for non-left respondents), and just over 20% of both groups clicking on the author’s profile.

Tweet 5 made the skeptical claim that Trump was trying to push through a Supreme Court nominee so that, if the outcome of the election were to be decided by the court, it would be more likely to rule in Trump’s favor. Non-left respondents ignored this tweet at an extremely high rate, with almost 80% indicating that they would do so. However, nearly 20% of the respondents in this group opted to view the tweet’s replies, with just over 30% of left-identifying respondents indicating they would do the same.

Tweet 8 was an exclamation by Florida Representative Matt Gaetz that Republicans wanted to “OPEN AMERICA UP” rather than lock it down as a response to the pandemic. More than 60% of both groups ignored the tweet. Non-left respondents liked and retweeted the tweet at a higher rate, but left-identifying respondents indicated they would click on the author’s profile, view replies, or write a reply more often than non-left respondents.

Tweet 10 contained a message defending the “Proud Boys”, a white supremacist group, citing the fact that the author hadn’t heard of this group doing any kind of “looting, torching, or rioting.” Once again, most respondents either ignored the controversial tweet or viewed its replies. Over 10% of non-left respondents indicated that they would like the tweet, suggesting another potential means by which controversial tweets could easily spread: users who agree with the message simply propagating it to their followers.

In tweet 12, Bernie Sanders points out that Amazon has paid nearly nothing in federal income taxes over the last three years. A significant number of both left-identifying and non-left respondents “liked” the tweet, with 60% of the former and over 20% of the latter doing so. Retweets and profile clicks were more common among left-identifying respondents, but non-left respondents more frequently indicated that they would view and write replies. This is another occurrence of individuals crossing ideological lines to engage with tweets (albeit in just a few specific ways) more frequently than individuals whose ideological stances align more closely with the message of the tweet.

Tweet 16 points out the fallacy in claiming that wearing a mask in the midst of a pandemic is tyranny, but that an 8pm curfew in the midst of a protest isn’t – clearly a left-leaning message. This tweet was liked and retweeted frequently by left-identifying respondents (almost 60% and 20% of the time, respectively). While the portion of non-left respondents that liked the tweet was around 10%, the portion that would retweet was nearly 15%: an interesting discrepancy given that retweeting generally shows more support for a message than does liking a tweet, and given that the tweet’s message aligns much more with the left than the right. Another interesting feature of this graph is the substantial proportion of non-left respondents who indicated they would view the replies of the tweet, 30%, especially when compared to the smaller proportion of left-identifying respondents who would do the same.

The tweets above summarize the mixed results of all 18. Political belief seemed to be a decent predictor of users’ choices to like, retweet, or ignore a tweet: users tended to do the first two when the tweet’s message aligned with their beliefs and the third when it did not. These patterns, and others, can be illuminated by summarizing all 18 of the tweets into one larger graph. The visualization below shows how users of each group, left and not left, engaged with tweets that were aligned more with the left and the right:

The graph reveals information about both groups. First, left-identifying respondents were much more likely to like and retweet left-aligned tweets than right-aligned tweets. They also ignored right-aligned tweets far more often than they ignored left-aligned tweets, but they viewed the replies of left-aligned tweets almost exactly as often as they viewed the replies of right-aligned tweets.

Meanwhile, non-left respondents liked right-aligned tweets almost exactly as frequently as they liked left-aligned tweets. This may be due to the fact that centrists made up a significant portion of the non-left sample population. Nevertheless, it is interesting that the conservative respondents in this group didn’t create a larger separation in the frequencies at which each kind of tweet was liked. There also wasn’t much of a difference between the frequencies at which non-left respondents ignored left-aligned and right-aligned tweets, and the same seems to hold true for performing retweets without quotes. As was the case with the left-identifying respondents, non-left respondents also wished to view the replies of right aligned and left aligned tweets at roughly the same frequency.

The Influence of Emotional Response on Engagement

Now that some relationship between political ideology and engagement has been observed, adequate context has been established to analyze the relationship between emotion and engagement. This relationship can be quantified, for each kind of engagement, through a logistic regression model. Each model predicts whether or not a respondent performed that kind of engagement based on their emotional response to the tweet. Only coefficients with a p-value under the standard alpha level of 0.05 will be discussed, as these are the only relationships that are statistically significant at that level.
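The study’s modeling code is not reproduced in this document, but a minimal sketch of how one such model could be fit in R is shown below. The data layout (a data frame named `responses` with one row per respondent-tweet pair, 0/1 columns for each emotional reaction, and a 0/1 column such as `Liked` for each engagement type) is an assumption for illustration, not the study’s actual code:

library(broom)

# Logistic regression: did the respondent like the tweet, given their reactions?
like_model <- glm(
  Liked ~ Reaction_Fear + Reaction_Hope + Reaction_Sadness + Reaction_Joy +
    Reaction_Distress + Reaction_Relief + Reaction_Frustration +
    Reaction_Empathy + Reaction_Dissension + Reaction_Agreement,
  data = responses,     # assumed data frame: one row per respondent-tweet pair
  family = binomial()   # models the log-odds of liking
)

# Coefficient table in the same form as the tables shown below
tidy(like_model, conf.int = TRUE)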

term estimate std.error statistic p.value conf.low conf.high
(Intercept) -2.579 0.099 -26.112 0.000 -2.777 -2.390
Reaction_Fear1 0.608 0.180 3.383 0.001 0.253 0.958
Reaction_Hope1 1.697 0.246 6.902 0.000 1.219 2.184
Reaction_Sadness1 0.584 0.155 3.768 0.000 0.279 0.886
Reaction_Joy1 0.776 0.404 1.921 0.055 -0.011 1.574
Reaction_Distress1 -0.170 0.182 -0.934 0.350 -0.531 0.184
Reaction_Relief1 -0.255 0.470 -0.544 0.587 -1.157 0.692
Reaction_Frustration1 0.411 0.123 3.350 0.001 0.171 0.652
Reaction_Empathy1 0.333 0.218 1.528 0.127 -0.093 0.761
Reaction_Dissension1 -2.705 0.352 -7.692 0.000 -3.468 -2.074
Reaction_Agreement1 2.879 0.110 26.251 0.000 2.666 3.097

The first engagement that users had the option to select was liking the tweet. The model above shows that, in accordance with the hypothesis, multiple negative emotions made users more likely to like the tweet. For instance, holding all else constant, if a user experienced fear from a tweet, the odds of them liking the tweet were roughly 1.84 times as high as if they hadn’t experienced fear. Moreover, holding all else constant, feeling sadness or frustration increased the odds that a respondent would like a tweet by factors of about 1.79 and 1.51, respectively. The remaining statistically significant emotional responses were hope, dissension, and agreement, which had coefficients of 1.697, -2.705, and 2.879, respectively. All three had a dramatic effect on the odds that a respondent would like a tweet: with all else held constant, they changed those odds by factors of roughly 5.46, 0.07, and 17.80, in the same order.
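The multiplicative factors quoted above follow directly from exponentiating the model’s log-odds estimates. For example, using the estimates from the table above:

# Converting log-odds coefficients into odds multipliers
exp(0.608)    # fear:        ~1.84
exp(0.584)    # sadness:     ~1.79
exp(0.411)    # frustration: ~1.51
exp(1.697)    # hope:        ~5.46
exp(-2.705)   # dissension:  ~0.07
exp(2.879)    # agreement:   ~17.80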

term estimate std.error statistic p.value conf.low conf.high
(Intercept) -4.973 0.242 -20.511 0.000 -5.473 -4.520
Reaction_Fear1 0.544 0.305 1.780 0.075 -0.072 1.128
Reaction_Hope1 0.318 0.393 0.809 0.419 -0.503 1.046
Reaction_Sadness1 0.251 0.281 0.893 0.372 -0.319 0.786
Reaction_Joy1 0.923 0.555 1.664 0.096 -0.248 1.947
Reaction_Distress1 0.095 0.306 0.309 0.757 -0.521 0.681
Reaction_Relief1 0.332 0.602 0.551 0.582 -0.943 1.450
Reaction_Frustration1 0.955 0.234 4.088 0.000 0.496 1.413
Reaction_Empathy1 0.643 0.333 1.929 0.054 -0.050 1.264
Reaction_Dissension1 -0.097 0.333 -0.291 0.771 -0.774 0.542
Reaction_Agreement1 1.731 0.247 6.998 0.000 1.257 2.230

The only statistically significant predictors of the log-odds that a respondent would retweet a tweet with a quote were frustration and agreement, which increased the odds of doing so by factors of roughly 2.60 and 5.65, respectively, with all else held equal.

term estimate std.error statistic p.value conf.low conf.high
(Intercept) -4.256 0.185 -22.969 0.000 -4.635 -3.907
Reaction_Fear1 0.926 0.254 3.646 0.000 0.417 1.415
Reaction_Hope1 0.984 0.254 3.873 0.000 0.474 1.473
Reaction_Sadness1 0.083 0.241 0.345 0.730 -0.405 0.541
Reaction_Joy1 0.865 0.408 2.121 0.034 0.036 1.643
Reaction_Distress1 -0.216 0.278 -0.776 0.438 -0.778 0.316
Reaction_Relief1 -0.363 0.471 -0.771 0.441 -1.336 0.524
Reaction_Frustration1 0.521 0.184 2.824 0.005 0.155 0.879
Reaction_Empathy1 0.607 0.257 2.361 0.018 0.086 1.097
Reaction_Dissension1 -2.064 0.602 -3.427 0.001 -3.488 -1.043
Reaction_Agreement1 2.109 0.191 11.049 0.000 1.744 2.494

There were a few more statistically significant emotional responses for retweets without quotes than for retweets with quotes. Fear and hope each increased the odds that a respondent would retweet a tweet without a quote, by factors of roughly 2.52 and 2.68, respectively, with all else held equal. Experiencing joy also increased those odds, by a factor of about 2.38 with all else held equal. A similar effect was observed when respondents experienced frustration, empathy, or agreement, which increased the odds of retweeting without a quote by factors of roughly 1.68, 1.83, and 8.24, respectively, holding all else equal. Dissension was the lone emotional response to decrease the odds of retweeting without a quote, doing so by a factor of about 0.13 with everything else held constant.

term estimate std.error statistic p.value conf.low conf.high
(Intercept) -2.587 0.100 -25.936 0.000 -2.786 -2.395
Reaction_Fear1 0.648 0.177 3.670 0.000 0.296 0.989
Reaction_Hope1 0.896 0.235 3.818 0.000 0.421 1.343
Reaction_Sadness1 -0.251 0.177 -1.420 0.156 -0.608 0.087
Reaction_Joy1 0.576 0.382 1.508 0.132 -0.213 1.293
Reaction_Distress1 0.042 0.179 0.235 0.814 -0.316 0.387
Reaction_Relief1 -0.210 0.461 -0.457 0.648 -1.175 0.647
Reaction_Frustration1 0.223 0.131 1.705 0.088 -0.035 0.478
Reaction_Empathy1 0.415 0.240 1.730 0.084 -0.076 0.867
Reaction_Dissension1 -0.007 0.154 -0.049 0.961 -0.314 0.291
Reaction_Agreement1 0.300 0.136 2.208 0.027 0.032 0.564

Fear, hope, and agreement were the only emotional responses that had a statistically significant effect on respondents’ odds of clicking on the profile of a tweet’s author. Each increased those odds, by factors of roughly 1.91, 2.45, and 1.35, respectively, with all else held equal.

term estimate std.error statistic p.value conf.low conf.high
(Intercept) -1.240 0.063 -19.660 0.000 -1.364 -1.117
Reaction_Fear1 0.633 0.121 5.217 0.000 0.395 0.871
Reaction_Hope1 0.653 0.187 3.494 0.000 0.284 1.017
Reaction_Sadness1 -0.146 0.112 -1.302 0.193 -0.367 0.071
Reaction_Joy1 -0.086 0.320 -0.269 0.788 -0.735 0.526
Reaction_Distress1 -0.003 0.117 -0.029 0.977 -0.234 0.224
Reaction_Relief1 0.050 0.352 0.141 0.888 -0.654 0.732
Reaction_Frustration1 0.375 0.084 4.475 0.000 0.211 0.539
Reaction_Empathy1 0.519 0.173 3.003 0.003 0.177 0.856
Reaction_Dissension1 0.024 0.097 0.244 0.807 -0.167 0.212
Reaction_Agreement1 0.287 0.089 3.205 0.001 0.111 0.462

A wide range of emotions affected respondents’ odds of viewing the replies of a tweet. Fear, hope, frustration, empathy, and agreement were all statistically significant, and each increased the odds of viewing a tweet’s replies, by respective factors of roughly 1.88, 1.92, 1.45, 1.68, and 1.33, with all else held equal.

term estimate std.error statistic p.value conf.low conf.high
(Intercept) -5.167 0.295 -17.513 0.000 -5.788 -4.626
Reaction_Fear1 -0.009 0.369 -0.023 0.981 -0.757 0.695
Reaction_Hope1 1.428 0.671 2.127 0.033 -0.061 2.617
Reaction_Sadness1 -0.527 0.370 -1.425 0.154 -1.286 0.172
Reaction_Joy1 -0.094 1.124 -0.084 0.933 -2.534 1.905
Reaction_Distress1 1.164 0.315 3.693 0.000 0.536 1.776
Reaction_Relief1 1.825 0.942 1.938 0.053 -0.223 3.509
Reaction_Frustration1 0.839 0.329 2.550 0.011 0.208 1.504
Reaction_Empathy1 0.605 0.643 0.941 0.347 -0.877 1.724
Reaction_Dissension1 1.068 0.299 3.570 0.000 0.493 1.671
Reaction_Agreement1 -1.202 0.558 -2.155 0.031 -2.418 -0.197

The last form of engagement that users had the option to perform was writing a reply, which, according to the model, was most significantly related to emotional responses of hope, distress, frustration, dissension, and agreement. The first four all increased the odds that respondents would write a reply, by factors of roughly 4.17, 3.20, 2.31, and 2.91, respectively, with all else held equal. The only emotional response that significantly decreased the odds of writing a reply was agreement, which did so by a factor of about 0.30.

The last option that participants had on the survey wasn’t an engagement, but rather, a lack thereof: to ignore the Tweet, and “keep scrolling” through their Twitter feed. Understanding the emotional responses that caused respondents to select this option can provide valuable information, though. The emotional responses that significantly decreased the odds that a respondent would ignore the tweet may be interpreted to be the emotional responses that made users the most likely to engage with the tweet in some nonspecific form.

term estimate std.error statistic p.value conf.low conf.high
(Intercept) 0.893 0.061 14.736 0.000 0.775 1.013
Reaction_Fear1 -0.445 0.126 -3.544 0.000 -0.691 -0.199
Reaction_Hope1 -1.591 0.256 -6.221 0.000 -2.113 -1.106
Reaction_Sadness1 -0.097 0.110 -0.887 0.375 -0.312 0.119
Reaction_Joy1 -0.415 0.366 -1.134 0.257 -1.148 0.292
Reaction_Distress1 0.117 0.119 0.986 0.324 -0.114 0.351
Reaction_Relief1 0.810 0.434 1.868 0.062 -0.058 1.651
Reaction_Frustration1 -0.545 0.085 -6.387 0.000 -0.712 -0.378
Reaction_Empathy1 -0.362 0.198 -1.828 0.067 -0.755 0.023
Reaction_Dissension1 0.605 0.098 6.160 0.000 0.414 0.799
Reaction_Agreement1 -1.898 0.095 -20.002 0.000 -2.085 -1.713

The statistically significant emotional responses with negative coefficients in this model were fear, hope, frustration, and agreement. In other words, with all else held constant, each of these emotions decreased the odds that a respondent would ignore a tweet, by factors of roughly 0.64, 0.20, 0.58, and 0.15, respectively.

These findings can be corroborated by examining which emotional responses best predicted total engagement, computed by adding up all the forms of engagement that a respondent indicated they would perform for a given tweet. Because this total engagement value is no longer binary, a linear regression model is more appropriate:

term estimate std.error statistic p.value conf.low conf.high
(Intercept) 0.366 0.021 17.267 0.000 0.324 0.407
Reaction_Fear1 0.320 0.045 7.060 0.000 0.231 0.409
Reaction_Hope1 0.659 0.071 9.332 0.000 0.521 0.798
Reaction_Sadness1 0.005 0.039 0.139 0.890 -0.071 0.082
Reaction_Joy1 0.307 0.114 2.687 0.007 0.083 0.531
Reaction_Distress1 0.017 0.042 0.400 0.689 -0.065 0.098
Reaction_Relief1 0.018 0.131 0.140 0.889 -0.238 0.275
Reaction_Frustration1 0.199 0.030 6.725 0.000 0.141 0.257
Reaction_Empathy1 0.308 0.065 4.738 0.000 0.181 0.436
Reaction_Dissension1 -0.112 0.034 -3.308 0.001 -0.178 -0.046
Reaction_Agreement1 0.779 0.032 24.565 0.000 0.717 0.841

Not only were fear, hope, frustration, and agreement statistically significant predictors of total engagement (this time with coefficients of 0.320, 0.659, 0.199, and 0.779), but the linear model also shows that joy and empathy had significant positive associations with total engagement (with coefficients of 0.307 and 0.308, respectively). These coefficients can be interpreted as follows: according to the model, holding all else equal, a respondent’s total number of engagements with a tweet is expected to increase, on average, by the value of the coefficient if they feel that emotion.
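As with the logistic regressions, a minimal sketch of this final model in R might look like the following, again assuming the hypothetical `responses` layout described earlier, with an assumed `Total_Engagement` column counting the number of engagement types a respondent selected for a tweet:

library(broom)

# Linear regression: total number of engagement types selected for a tweet
total_model <- lm(
  Total_Engagement ~ Reaction_Fear + Reaction_Hope + Reaction_Sadness +
    Reaction_Joy + Reaction_Distress + Reaction_Relief +
    Reaction_Frustration + Reaction_Empathy + Reaction_Dissension +
    Reaction_Agreement,
  data = responses   # assumed data frame, as above
)

tidy(total_model, conf.int = TRUE)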

Additional Discussion of Results

Given the significant number of results that have been extracted from the survey, there is much to unpack. To begin, it is important to reintroduce the hypothesis of this study: tweets that incite negative emotional responses in college students will draw more engagement from them than tweets that do not.

This hypothesis can first be understood from the angle of individuals’ political beliefs. As was demonstrated above, there seems to be a complex relationship between individuals’ political beliefs and the way in which they interact with content, whether that content aligns with or goes against their views. It is not the case that individuals will always engage more with tweets that align with their views, nor is it the case that they will always engage more with tweets that go against them.

As is the case with any study, it is possible that trends may have been either exaggerated or obfuscated simply due to the methods of data collection. In the case of this study, this may have taken place with the attempt to find a relationship between one’s political views and their engagement with political content. This is apparent in how different tweets of the same political alignment often drew noticeably different patterns of engagement, thereby obfuscating any trend that may have otherwise arisen. For instance, tweet 8, in which Florida Representative Matt Gaetz calls for America to be opened back up, doesn’t exhibit any clear pattern between political alignment and how users engaged with the tweet. While non-left respondents were more likely to retweet and like the tweet, left-identifying respondents were noticeably more likely to view replies and click on the author’s profile. The first pattern aligned with the trend seen across all 18 tweets, while the second deviated from it. It is difficult to know whether the patterns observed for this tweet are more representative of reality or less so, and since the general conclusions of this study are drawn, in part, from this tweet, it is impossible to know whether those conclusions are nearer to or further from reality because of it.

On a similar note, this study was carried out with intentionally controversial, polarizing tweets to mimic the extremes of Twitter’s political environment. However, it is difficult to make generalizations about “polarizing tweets” when these tweets cover subject matter of varying levels of controversy (which is difficult to quantify to begin with). Therefore, it becomes difficult to make generalizations about the way in which users interact with these controversial tweets and to tie that to their political stance. Tweet 6, for example, displayed scenes of conflict between protesters and law enforcement officials shortly after the murder of George Floyd. While it was a tweet whose message was more aligned with that of the right, the highly controversial nature of the tweet, along with its brash language, may have caused individuals to act differently from how they may have acted towards other controversial political tweets. While the data for this tweet supported some trends observed in other tweets, as it was liked significantly more by non-left respondents and ignored more by left-identifying respondents, the data contradicted other general patterns, like how left-identifying respondents retweeted the tweet with a quote at a higher frequency than non-left respondents.

Nearly 40% of both groups indicated that they would want to view the replies of the tweet, likely due to its highly controversial nature, which is no insignificant piece of information. First, it continues the trend of respondents in both political groups viewing the replies of right-aligned tweets at roughly the same frequency. Second, and perhaps more importantly, it shows that users are interacting with outrage-inducing tweets to a significant degree. As was mentioned in the beginning of this document, the purpose of this study was to understand why social media platforms like Twitter so often propagate such content. As was also mentioned, it seems intuitive that a content-recommendation algorithm would prioritize content that keeps users on the app longer. If such controversial tweets are drawing engagement from users and prompting them to spend more time on the application by viewing the tweet’s replies, which, again, almost 40% of respondents from both groups indicated they would do, then it makes sense that any recommender using this criterion would prioritize similarly hostile content.

The same phenomenon can be observed with Tweet 14, which called the COVID-19 pandemic a hoax and made the claim that masks were “merely symbolic”. Similar patterns held: non-left respondents were more likely to like the tweet, left-identifying respondents were more likely to retweet it with a quote, and over 70% of both groups opted to ignore the tweet entirely, but over 20% of both sets of respondents indicated that they would view the replies. Again, this shows how tweets of varying controversy can draw varying patterns of engagement, so any generalized conclusions drawn from the data above may have been skewed by the varying levels of controversy among the tweets in the survey. It also shows how extremely controversial tweets can incite users to spend more time on the application by viewing the tweet’s replies, likely influencing individuals’ recommender systems to continue to prioritize such polarizing content.

Given these variations that exist between the tweets, it may be difficult to make any factual statement about the relationship between college students’ emotional responses to political tweets and the way in which they engage with them, and therefore, it may be difficult to verify the hypothesis of this study. However, an attempt will be made to do so based on the data that was collected.

The data collected show that negative emotions increased the odds that respondents would perform just about every kind of engagement. Fear, for example, significantly increased the odds that a respondent would like a tweet, retweet it without a quote, click on the profile of its author, and view its replies. While sadness only significantly increased the odds that a user would like a tweet, frustration significantly increased the odds that a user would like a tweet, retweet it with and without a quote, and view and write replies. Distress and dissension only significantly increased the odds that a user would write a reply to a tweet. It is also worth noting that fear and frustration were both statistically significant in the last model as well, which predicted the number of types of engagement that users would perform on a given tweet based on their emotional response to it. This may be interpreted to mean that both emotions frequently elicited multiple types of engagement from respondents.

The findings above largely uphold the hypothesis. Every negative emotion apart from dissension significantly increased, and never decreased, the odds that a respondent would perform at least one type of engagement on a tweet. As has been mentioned, the role of dissension, and therefore of one’s political beliefs, is much more complex, and neither justifies nor contradicts the hypothesis directly.

Conclusion

The observed differences between the effect of political stance on engagement and the effect of negative emotion on engagement point to a variety of things, including that political disagreement may not necessarily connote negative emotions. Rather, they may be two separate phenomena. If disagreement does not have to connote negative emotions, perhaps it is possible to engage in online discourse in which individuals’ ideas and perceptions are challenged without the accompaniment of sour feelings. This sounds easier in theory than it is to perform in practice, though. A 2018 study by Christopher Bail showed that being exposed to virtual content from the other side of the aisle only drove those who identified as Republican to become even more conservative, while those who identified as Democratic were driven slightly left, though the second effect did not appear to be statistically significant (Bail, “Exposure”). The issue of divisive, polarizing political content is relevant because it is clear that the content we are presented with can significantly sway our emotions and political views. Therefore, it is as important as ever to understand why such outrage-inducing content seems to dominate the discourse that takes place online. Understanding why we engage with the political content that we do, though, is a step in the right direction.

As was noted earlier, this study was imperfect in a variety of ways, and much more future research would be needed before any factual statement can be made about the relationship between emotional response and engagement with political content. These imperfections begin with the survey. For example, multiple respondents conveyed that in the first section, they would have liked an option to engage with the media linked in tweets, whether that meant playing a video or clicking on a link. Multiple respondents also asked for a wider range of emotions, such as confusion, to be represented in the second part. Another issue with the survey had to do with when it was sent out: just before a presidential election. As discussed earlier, it is very likely that respondents were “burnt out” from having been overloaded with political content prior to taking the survey, which may have altered how they would engage with political content.

The EDA in this study (see appendix) made clear yet another issue: bias of the respondents. The overwhelming majority of respondents were left of center, which makes sense given that they were all college students. As a result of this bias, not only were aggregate statistics concerning the entire population likely thrown off, but statistics concerning non-left respondents may have been based on far too small of a sample size to arrive at any meaningful conclusions. The fact that respondents had to be grouped into just two relatively non-specific groups, left and non-left, is another issue that arose due to this bias in the respondents. Not only should future studies attempt to use a larger, more ideologically-balanced pool of respondents, but they should also conduct similar group-based analysis with more specific groups. Doing so may entail making three groups of respondents, liberals, conservatives, and centrists, or making even more groups to highlight the subtleties that can exist between, for instance, moderately liberal and extremely liberal respondents.

Something that was not thoroughly discussed in this study was the role of positive emotions in engagement. Numerous results above showed that respondents engaged more with content that elicited positive emotions, like joy and empathy, than with content that did not, with all else held equal. These results may indicate that, again, an online political discourse that is not characterized by hate, outrage, and polarization is possible, and that it may be just as easy to create one focused on bringing about positivity. Again, it is impossible to make any factual claim about this solely from the data used in this study. However, this finding absolutely provides hope that, even if individuals are more likely to engage with political content that elicits negative emotions than political content that does not, there may be brighter days ahead when it comes to how politics are discussed on social media – hopefully ones that are not characterized by “doomscrolling.”

Bibliography

Bail, Christopher A., et al. “Channeling Hearts and Minds: Advocacy Organizations, Cognitive-Emotional Currents, and Public Conversation.” American Sociological Review, vol. 82, no. 6, Dec. 2017, pp. 1188–1213, doi:10.1177/0003122417733673.

Bail, Christopher A. “Emotional Feedback and the Viral Spread of Social Media Messages About Autism Spectrum Disorders.” American Journal of Public Health, vol. 106, no. 7, July 2016, pp. 1173–1180, doi:10.2105/AJPH.2016.303181.

Bail, Christopher A., et al. “Exposure to Opposing Views on Social Media Can Increase Political Polarization.” Proceedings of the National Academy of Sciences, vol. 115, no. 37, 2018, pp. 9216–9221.

Marcus, George E. “Emotions and Politics: Hot Cognitions and the Rediscovery of Passion.” Social Science Information, vol. 30, no. 2, June 1991, pp. 195–232, doi:10.1177/053901891030002001.

Marx, Paul. “Anti-elite Politics and Emotional Reactions to Socio-economic Problems: Experimental Evidence on ‘Pocketbook Anger’ from France, Germany, and the United States.” The British Journal of Sociology, vol. 71, 2020, pp. 608–624, https://doi-org.proxy.lib.duke.edu/10.1111/1468-4446.12750.

Pavlova, Alina, and Pauwke Berkers. “Mental Health Discourse and Social Media: Which Mechanisms of Cultural Power Drive Discourse on Twitter.” Social Science & Medicine, vol. 263, Oct. 2020, article 113250, doi:10.1016/j.socscimed.2020.113250.

Roseman, Ira J. “Appraisal Determinants of Emotions: Constructing a More Accurate and Comprehensive Theory.” Cognition & Emotion, vol. 10, no. 3, 1996, pp. 241–278.

Tufekci, Zeynep. “We’re Building a Dystopia Just to Make People Click on Ads.” TED: Ideas Worth Spreading, Oct. 2017, https://www.ted.com/talks/zeynep_tufekci_we_re_building_a_dystopia_just_to_make_people_click_on_ads?language=en.

The Social Dilemma. Directed by Jeff Orlowski, performance by Tristan Harris, Exposure Labs, 2020. Netflix. www.netflix.com/title/81254224.


Appendix

Detailed Procedure

To test the research question, a Google Form that simulated Twitter was shared among college students all over the country. The form contained three parts. The first presented screenshots of 18 political tweets on controversial subject matter, including the coronavirus pandemic, Donald Trump’s nomination of Amy Coney Barrett to the Supreme Court, and racial justice protests that took place in the summer of 2020. The tweets were selected to be controversial to simulate the divisive, polarizing nature of the content that dominates political Twitter today, and they were found both by using Twitter’s search functionality (to find tweets containing specific words or phrases that were published within a specific timeframe) and by going through popular political Twitter accounts. Respondents were asked how they would engage with each tweet if they were to see it on their Twitter feed, and for each tweet they were given the option to “Like”, “Retweet with quote”, “Retweet without quote”, “Click on author’s profile”, “View replies”, and “Write reply”. Respondents could select any number of options, but if they did not wish to engage with the tweet in any way, they also had the option to “Ignore/keep scrolling”. This way, the questions could be made mandatory so that respondents had to answer, but were not required to engage with each tweet. Each question in this first section took a form similar to the question pictured below:

In the next part of the form, respondents were presented the same tweets in the same random order, but this time were asked to provide an emotional response to each tweet. The options provided were fear, hope, sadness, joy, distress, relief, frustration, empathy, dissension, and agreement, which were based on Ira Roseman’s model described in his article “Appraisal Determinants of Emotions: Constructing a More Accurate and Comprehensive Theory,” published in 1996. Again, all questions on the survey were made mandatory, so if respondents had none of these emotional responses, they could indicate apathy with an option marked “NONE”. Respondents were asked about their emotional response after they were asked about their engagement, since they may have acted differently towards each tweet if they were consciously aware of their emotional reaction to it. Questions in this section took the following form:

At the very end of the survey, respondents were asked about their political beliefs, where they were asked to self-identify as either “Extremely liberal”, “Liberal”, “Moderately liberal”, “Centrist”, “Moderately conservative”, “Conservative”, or “Extremely conservative”. Respondents were told multiple times throughout the survey that their responses would be completely anonymized, and that any information they provided would not be used to identify them, nor would their responses be associated with any component of their individual identity. This was emphasized strongly so that respondents would not hesitate to engage with the content as they would if they were on Twitter.

To attract as many participants as possible, the survey was spread throughout various social media groups primarily made up of college students, and individuals were incentivized to take the study by being offered a chance to enter a raffle for one of three gift cards.

Responses were written to a CSV, which underwent additional formatting via this Python script. In addition to anonymizing the responses, the script also formatted responses from the first two sections into binary variables.
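The Python script itself is not reproduced here. Purely as an illustration of the kind of transformation it performs, and assuming a hypothetical layout in which each tweet’s engagement answer is stored as one delimited string (e.g. “Like;View replies”), a roughly equivalent step written in R (the analysis language suggested by the output elsewhere in this document) might look like:

# Illustration only: expand one tweet's multi-select engagement answer
# into 0/1 indicator columns (data frame and column names are assumed).
engagement_options <- c("Like", "Retweet with quote", "Retweet without quote",
                        "Click on author's profile", "View replies", "Write reply")

answers <- strsplit(responses$Tweet1_Engagement, ";", fixed = TRUE)
for (opt in engagement_options) {
  col_name <- paste0("Tweet1_", gsub("[^A-Za-z]+", "_", opt))
  responses[[col_name]] <- vapply(answers, function(x) as.integer(opt %in% x),
                                  integer(1))
}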

Full list of tweets used

The following tweets were used in the survey, in this order:

The git repo for this project can be found here

Exploratory Data Analysis - Respondents

The following exploratory data analysis consists of visualizations to understand how respondents’ responses were distributed.

## [1] 203
## [1] 48

Through the data collection methods above, responses were collected from 203 students attending 48 different universities. These were both public and private universities scattered throughout the country from the University of Florida to Stanford University.

This variety of colleges likely had an impact on the next statistic observed - the respondents’ political stances.

## # A tibble: 7 x 2
##   Ideology                 count
##   <chr>                    <dbl>
## 1 Centrist                 8.87 
## 2 Conservative             4.43 
## 3 Extremely Conservative   0.493
## 4 Extremely liberal       15.3  
## 5 Liberal                 33.5  
## 6 Moderately Conservative  8.37 
## 7 Moderately Liberal      29.1

As expected, the vast majority of college students identified themselves as being left of center (the values above are the percentage of respondents in each category). Given that college students generally lean left of center, there was concern that right-wing viewpoints would not be well represented. However, nearly a quarter of respondents identified as either centrist or right of center, a greater portion than initially expected.

Exploratory Data Analysis - Engagement

The graphs below show, on average, the portion of respondents that performed each kind of engagement on each tweet.

First, most tweets either received very few likes, or were liked by about 50% of the respondents. Given the political nature of these tweets, these disparate rates may be a result of the strong left-leaning bias among the respondents. Since most of the respondents were left leaning, it makes sense that most of the respondents would act similarly, either uniformly liking or not liking a tweet. While it may be pointed out that a “like” rate of 0.5 does not indicate a “uniform liking” of the tweet by the respondents, the distinct separation between the two peaks in the histogram is worth noting.

The next engagement type, retweets without quotes, shows a slightly different story. While a few tweets were retweeted occasionally (by between 10 and 20% of the respondents), the vast majority of tweets were retweeted rarely, if ever. While not as indicative of any political bias among the respondents, this pattern reflects a different expected trend: people’s reluctance to publicly engage with political content in general, and especially with the controversial subject matter intentionally selected for this study. Retweets, more than any other kind of engagement, associate the user with the content they retweet in the public eye. This is a consequence that many college students, whether or not they are seeking professional and academic opportunities, may not want to inflict upon themselves. As a result, they may be hesitant to retweet this controversial, polarizing content.

A similar pattern held for retweets with quotes, which respondents did even less frequently; likely for the same aforementioned reasons.

Yet another distinct pattern can be observed for viewing replies, which makes sense. Unlike the first three forms of engagement, viewing replies is not a form of engagement that is directly visible to one’s following. As a result, users may be more likely to engage with tweets in this manner. This was, to some extent, reflected in the above histogram: almost every tweet had its replies viewed between 20 and 40 percent of the time, with a single outlier on each side of that interval.

Next, though viewing author profiles is also a private form of engagement, users seemed far less likely to do so than to view replies, with most tweets having their author’s profile clicked less than 10 percent of the time.

By far the least common form of engagement taken was writing replies to the tweet, likely for the same reasons that caused low frequencies of retweets. Only one tweet had more than five percent of the respondents indicate that they would write a reply to it.

In a move somewhat contradictory to many of the assumptions introduced in the background of this study, ignoring tweets seemed to be by far the most common response to the tweets in this study, with the least-ignored tweets being ignored 30-40% of the time, and the most frequently ignored tweets being ignored over 70% of the time. This outcome may be due to a number of factors, including the time at which this survey was conducted: the weeks leading up to the 2020 Presidential Election. Most respondents had probably been overexposed to political content leading up to the survey, and as a result, may have been desensitized to much of the content on the survey. This may have resulted in a higher rate of apathy towards the tweets than if the survey was conducted at a different, less politicized time of year. One may compare this phenomenon to the playing of Christmas carols in public spaces throughout the United States during the holiday season. The first listening of Mariah Carey’s “All I Want for Christmas is You” may fill many with a warm, pleasant feeling inside, but by late December, most are numb to the amorous holiday-themed lyrics and upbeat melody.

Exploratory Data Analysis - Emotional Responses

The histograms below show the portion of users that had each emotional response to each tweet.

To begin, little to no joy was felt by respondents for just about every tweet in the corpus, with most tweets instilling joy in close to 0% of the participants. The single outlier in the data seems to be a tweet for which only 8% of respondents felt joy. This graph can be said to demonstrate, to at least some capacity, how little joy political Twitter brings to college students. While these data can be interpreted to convey that college students simply don’t enjoy politics, perhaps the more likely reason for this outcome is the current emotional climate of political Twitter.

The negative counterpart to joy in this study was sadness, which was a much more popular response across the entire corpus. Though still reported in less than half of the respondents for every tweet, it seemed to be a much more common response across most tweets.

The same trend seems to be even more exaggerated in the next pair of emotions, relief and distress. In a response that aligned perfectly with the above description of today’s political climate, distress seemed to be a much more common response to the tweets than relief. While most tweets incited distress in 10 to 30 percent of respondents, most tweets fostered relief in less than 1% of the respondents. These trends are indicative of the issue that motivated this paper: political content on Twitter is outrage-inducing. Whether or not this phenomenon is intentional, it is an issue that absolutely needs to be better understood.

While the distributions are not dramatically different for hope and fear, the graph for the former is more right skewed than that of the latter. While fear was not exactly common, with most tweets triggering fear in less than 15% of the respondents, there were several that did so in more than 20% of the respondents. Meanwhile, only two tweets inspired hope in 10% or more of the respondents, indicating that for these controversial tweets, fear tended to be a much more common reaction than hope.

A clear demonstration of the polarizing nature of political tweets is the difference in how often participants reported frustration versus empathy. While most tweets inspired empathy in less than 10% of the respondents, almost every tweet provoked frustration in more than 30% of the respondents, with several provoking frustration in more than 60% of the respondents. These widely-frustrating tweets were tweets 14, 6, and 10, which, in order, called the coronavirus a hoax, showed protesters clashing with law enforcement officers to the glee of the tweet author, and defended a white supremacist group. If political tweets are, on average, generating more frustration than empathy in their audiences, then it only makes sense for those audiences to become increasingly polarized.

The last pair of emotions provided to respondents were dissension and agreement. The two had similar distributions: tweets generally either incited these emotions in more than 30% of the respondents or in less than 10%. One feature worth noting, though, is that several tweets garnered agreement from more than 50% of the users, while there was only one such tweet for dissension. This may be indicative of a political bias in the respondents or in the survey. On one hand, the respondents may have disproportionately agreed upon certain political messages. On the other, the survey may have contained a disproportionately large number of tweets that were widely agreeable across all political groups. Determining which kind of bias exists, and the extent to which it exists, would likely require more testing with a larger set of tweets and a larger sample population.

In addition to the aforementioned emotional responses, respondents also had the option to mark “NONE”, indicating that they had no significant emotional response to the tweet. Half of the tweets had between 20 and 40% of the respondents mark this as their response, while the rest of the tweets were just outside this range. This was especially interesting given one of the assumptions reached in the background of this paper, which stated that political content generators probably try to produce emotionally-charged content since such content will draw more engagement. However, it is possible that the content was emotionally charged, and that, as was mentioned before, respondents were simply overexposed to political content around the time that they took the survey. As a result, they may have been more likely to be apathetic towards the content than if they had taken the survey at a different time of year.

Per-Tweet Analysis of Political Beliefs vs Engagement

Tweet 1, written by a Democrat running for Congress, expressed enthusiasm for policies currently being pushed by the left, including an expansion of public education and governmental healthcare and, most notably, a “Green New Deal”. Almost all kinds of engagement were significantly more common among respondents in the “left” group, apart from viewing and writing replies. Slightly more than 20% of respondents in both political groups viewed the tweet’s replies, with respondents not identifying with the left doing so slightly more frequently. Although relatively few respondents indicated they would do so, writing replies was selected much more frequently by non-left respondents than by left-identifying respondents.

Tweet 2 discussed the controversial, widespread destruction of property that took place after the murder of George Floyd in late May and early June. Likes and both kinds of retweets were more common among left-identifying respondents, but non-left respondents viewed replies with almost the same frequency and wrote replies with greater frequency. Nearly twice as many non-left respondents ignored the tweet as left-identifying respondents.

Tweet 3 criticized Democratic presidential candidate Joe Biden’s refusal to condemn the “Marxist” leaders of the leftist “Black Lives Matter” movement. Given the aggressive nature of the tweet, very few respondents opted to retweet it, and though the only respondents that liked it were in the non-left group, less than 10% of respondents in that group did so. Many respondents completely ignored the tweet, with more than 50% of left-identifying respondents opting to do so. Interestingly enough, frequencies of private engagement remained relatively high for both groups, with more than 40% of left-identifying respondents viewing the replies (and nearly as many for non-left respondents), and just over 20% of both groups clicking on the author’s profile.

Tweet 4 discussed a video in which individuals spray-painted messages on the road in front of a home with pro-Trump signage in a predominantly liberal neighborhood. All forms of engagement were relatively low, with roughly 60% of both groups simply ignoring the tweet. Once again, viewing the tweet’s replies was quite common: nearly 40% of respondents in both groups indicated that they would do so. This was another tweet that exhibited a pattern of individuals stepping across ideological lines to view, and thereby engage with, the replies of content that they may disagree with.

Tweet 5 skeptically claimed that Trump was pushing a Supreme Court nominee so that, if the outcome of the election were to be decided by the court, it would be more likely to rule in his favor. Non-left respondents ignored this tweet at an extremely high rate, with almost 80% indicating that they would do so. However, nearly 20% of the respondents in this group opted to view the tweet’s replies, with just over 30% of left-identifying respondents indicating they would do the same.

Tweet 6 described a video in which, in the author’s words, the “Secret Service pushes back liberal thugs at the white house”. The tweet’s author also called the COVID-19 pandemic a hoax. While about 60% of respondents in both groups opted to ignore the tweet, nearly 40% of both groups indicated that they would view the tweet’s replies. This is especially interesting: despite the controversial nature of the tweet, significant portions of both groups chose to engage with it.

Tweet 7 made a comparison between left-wing and right-wing “extremism”, putting the former in a much more positive light. This clear bias may explain why over 40% of left-identifying respondents liked the tweet. However, once again, both groups viewed replies quite frequently, with non-left respondents at just over 20% and left-identifying respondents at nearly 30%. Nearly 10% of both groups also chose to click on the author’s profile.

Tweet 8 was an exclamation by Florida Representative Matt Gaetz that Republicans wanted to “OPEN AMERICA UP” rather than lock it down as a response to the pandemic. More than 60% of both groups ignored the tweet. Non-left respondents liked and retweeted the tweet at a higher rate, but left-identifying respondents indicated they would click on the author’s profile, view replies, or write a reply more often than non-left respondents.

Tweet 9, written by Bernie Sanders, pointed out that Donald Trump has paid substantially more in tax dollars to other countries than he has in the United States. Unsurprisingly, left-identifying respondents engaged with the content more frequently in almost every category. Another detail worth noting is that over 20% of both left-identifying and non-left respondents wanted to view the tweet’s replies.

Tweet 10 contained a message defending the “Proud Boys”, a white supremacist group, on the grounds that the author had not heard of the group doing any kind of “looting, torching, or rioting.” Once again, most respondents either ignored the controversial tweet or viewed its replies. Over 10% of non-left respondents indicated that they would like the tweet, suggesting another potential means by which controversial tweets could easily spread: users who agree with a message simply propagating it to their followers.

Tweet 11 discussed the recent confirmation of Amy Coney Barrett to the Supreme Court, saying that the left has no “moral authority to stand on… after what they did to Brett Kavanaugh”. Left-identifying respondents indicated they would retweet with a quote more frequently than non-left respondents, but both likes and retweets without a quote were significantly more frequent among non-left respondents. Once again, a significant portion of respondents from both groups indicated that they would view the tweet’s replies, with a higher portion of left-identifying respondents (nearly 30%) indicating that they would do so despite the fact that the tweet promotes a right-wing viewpoint.

In Tweet 12, Bernie Sanders pointed out that Amazon has paid nearly nothing in federal income taxes over the last three years. A significant number of both left-identifying and non-left respondents “liked” the tweet, with 60% of the former and over 20% of the latter doing so. Retweets and profile clicks were more frequent among left-identifying respondents, but non-left respondents more frequently indicated that they would view and write replies. This is another occurrence of individuals crossing ideological lines to engage with tweets (albeit in just a few specific ways) more frequently than individuals whose ideological stances align more closely with the message of the tweet.

Tweet 13 attempted to bring attention to the fact that Donald Trump, whom the author called a white supremacist, has been responsible for placing three judges on the Supreme Court. The tweet was ignored by over 80% of non-left respondents, which largely accounts for why left-identifying respondents performed every type of engagement much more frequently than non-left respondents.

Tweet 14 was written by a radio show host advertising his podcast in which he and his co-host “talk the coronavirus hoax, and how masks are merely symbolic, and do nothing to protect you”. Despite the significant emotional energy contained in this tweet, over 70% of respondents from both groups ignored the tweet. The only significant form of engagement was viewing replies, which over 20% of respondents from both groups indicated they would do.

Tweet 15 was a proclamation by Rep. Matt Gaetz that Donald Trump is leading the fight against corruption and against “the establishment”. The tweet was frequently ignored by both left-identifying and non-left respondents (over 80% and 60%, respectively), but when it was engaged with, it was engaged with by a larger portion of non-left respondents in just about every way. The only significant form of engagement among left-identifying respondents was viewing the tweet’s replies, which just under 12.5% of them opted to do.

Tweet 16 pointed out the fallacy in claiming that wearing a mask in the midst of a pandemic is tyranny but that an 8pm curfew in the midst of a protest is not, clearly a left-leaning message. This tweet was liked and retweeted frequently by left-identifying respondents (almost 60% and 20% of the time, respectively). While the portion of non-left respondents that liked the tweet was around 10%, the portion that would retweet it was nearly 15%: an interesting discrepancy given that retweeting generally signals more support for a message than liking does, and that the tweet’s message aligns much more with the left than the right. Another interesting feature of this graph is the substantial proportion of non-left respondents, 30%, who indicated they would view the tweet’s replies, especially when compared to the smaller proportion of left-identifying respondents who would do the same.

Tweet 17 accused conservatives of “loving” violence, given that they defend their decision to own firearms while patronizing the left for engaging in non-peaceful protests. Almost 80% of non-left respondents indicated that they would ignore the tweet, but a significant number of them, nearly 20%, still indicated that they wished to view the tweet’s replies. Non-left respondents were also significantly more likely to write replies than left-identifying respondents, who had higher frequencies of every other kind of engagement.

The last tweet of the corpus criticized the left’s claim that Mitch McConnell’s attempt to rapidly push through Trump’s latest Supreme Court nominee was hypocritical, arguing that the claim is baseless since doing so is the Senate’s job. Non-left respondents engaged with the tweet more frequently in every form of engagement apart from viewing replies. Both left-identifying and non-left respondents indicated they wanted to view the tweet’s replies over 20% of the time.
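Each of the per-tweet comparisons above reports, for a given tweet, the share of each political group selecting each engagement option. The following is a minimal sketch of how such shares could be tabulated, again assuming a hypothetical long-format table with one row per respondent, tweet, and selected action; the column names and values are illustrative and do not come from the actual survey data.

    import pandas as pd

    # Hypothetical long-format engagement data: one row per (respondent, tweet,
    # selected action). Column names and values are illustrative only.
    engagement = pd.DataFrame({
        "respondent_id": [1, 1, 2, 3, 3, 4],
        "group":         ["left", "left", "left", "non-left", "non-left", "non-left"],
        "tweet_id":      [1, 1, 1, 1, 1, 1],
        "action":        ["like", "view_replies", "ignore",
                          "view_replies", "write_reply", "ignore"],
    })

    # Number of respondents in each political group (the denominator for each share).
    group_sizes = engagement.groupby("group")["respondent_id"].nunique()

    # For a single tweet, the share of each group performing each action.
    tweet_1 = engagement[engagement["tweet_id"] == 1]
    counts = pd.crosstab(tweet_1["group"], tweet_1["action"])
    shares = counts.div(group_sizes, axis=0)

    print(shares.round(2))

Normalizing by group size rather than by total respondents is what makes the left and non-left groups directly comparable even though the two groups differ in size.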