Scholars and commentators contend that China has become one of the few issue areas in which the highly polarized U.S. Congress shows a great extent of unity (Gentzkow, Shapiro, and Taddy 2019; Goldberg 2020). As the two countries are the world's leading economies, their collaboration has widely acknowledged importance for sustainability and global health (Wu 2020; Change 2014). However, US-China relations have been on a downward spiral over the past decade, with increasing conflicts in areas including trade, technology, human rights, and national security (Zaidi and Saud 2020). The tension began accumulating during Obama's presidency and surged under Trump's administration, with tariffs on hundreds of billions of dollars of Chinese products, sanctions on Chinese companies with military ties, restrictions on tech purchases from Chinese businesses, and bills that punish China for its human rights abuses (Gries 2020; Sutter 2017). Despite the myriad well-documented differences between the two leaders and their parties, their actions on China have been remarkably consistent, and lawmakers in both parties seem to have reached an unusual degree of consensus on a national strategy against China (Tan 2020). In light of the recent election of Democratic leader Joe Biden as the new president, opinions vary as to whether the bilateral tension might be eased under his leadership. While most recent comments and analyses focus on updates from Biden's transition team (Swanson 2020), it is important to recognize that the opinions of the rest of Congress carry significant weight in determining whether proposed policies can come into effect (Kennedy 2020). I argue that whether or not Biden exhibits a "harsh" attitude toward China, the tension between the two countries will not be alleviated anytime soon, as a united front on China has formed within Congress across party lines.
This project therefore sets out to examine how much the two parties have converged in their opinions on China compared with their general discourse. To quantify such subtle and intricate patterns, I leveraged several computational text analysis techniques applied to a comprehensive dataset of tweets sent by members of the current (116th) U.S. Congress (February 2019 - May 2020), merged with the members' party information. Twitter is frequently used by members of Congress to communicate thoughts and stances directly to the public, and their political opinions are highly influential and can evoke national responses (Green et al. 2020). These tweet texts can therefore not only represent the members' personal opinions but also reflect, to a large extent, how an issue is perceived by the general public. Together, the breadth and substance of this dataset allow for generalization about both inter-party patterns and intra-party temporal dynamics, and the results might be cautiously generalized to national opinions toward China-related issues.
Following classic works on computational text mining and drawing insights from recent studies on polarization (Silge and Robinson 2016a; Green et al. 2020; Gentzkow, Shapiro, and Taddy 2019), I utilized and customized a variety of text analysis techniques for my research purposes. I captured the inter-party convergence on China-related issues along four dimensions: sentiment, moral inclination, strategic positioning of China, and topic overlap. The ultimate dependent variable is the similarity between the two parties, and each dimension has a different index variable, depending on whether and how such similarity can be quantitatively measured. The main independent variable is the linguistic context (China-related or general), assisted by a time (year-month) variable to check pattern persistence. Combining these, I operationalize inter-party convergence as the degree to which the two parties (1) show similar sentiment (positive/negative) and (2) adopt similar moral foundations toward China relative to their general tweets, (3) view China similarly in relation to other countries, and (4) overlap in the salient topics on the issue of China. Five hypotheses were developed to pin down these questions, with two devoted to the second point of moral foundations.
China's image has, by and large, been negative in the Western media narrative (Raj 2020). Though recent decades have witnessed China's economic rise and internationalization ever since its reform and opening up and its entrance into the global market, its media coverage is still largely negative, dwelling on its unfair trade practices, human rights abuses, and global influence operations (Lee 2016). The disputes between China and the U.S. on these topics extend from fundamental differences in political ideology (socialism vs. capitalism) and political regime (democratic vs. authoritarian) (Guo, Mays, and Wang 2019). Given that these two political systems operate on distinct principles, it is unlikely that two parties from the same system, no matter how divergent they are, will lean toward a rival with an opposing ideology. This line of reasoning, combined with the ongoing tension between the two countries, leads to my first hypothesis.
Hypothesis 1: Democrats and Republicans both tweet more negatively about China than they do about other topics.
Building on the first hypothesis, my second hypothesis concerns the underlying moral motivation for such negative attitudes. I rely on moral foundations theory, developed by a group of social psychologists to explain how people make moral judgments (Graham et al. 2013). This theory proposes five moral foundations on which we rely in our moral evaluations: care/harm, fairness/cheating, authority/subversion, loyalty/betrayal, and sanctity/degradation (Haidt 2012). The care and fairness foundations are tied to the ethic of autonomy (Graham et al. 2013) and trigger moral responses that take the individual as the fundamental unit. As these two foundations can potentially be applied universally to all individuals, they are denoted together as universalism (Graham et al. 2013). The authority, loyalty, and sanctity foundations, on the other hand, are values that raise the cohesion of human groups over individuals and are therefore constrained by group boundaries; they are denoted together as particularism (Graham et al. 2013). Empirical evidence suggests that Democrats tend to emphasize universalism more than Republicans in moral evaluations, while Republicans rely more heavily on particularism for moral judgments (Waytz et al. 2019). However, I argue that when it comes to China, a rising Other with competing interests operating on conflicting principles, this pattern will be flipped. Viewing China as the "outgroup", Democrats will be less likely to extend their universalist moral values in discussing relevant topics, whereas Republicans may be more inclined to use universalism in order to reach beyond the national boundary and hold China accountable. This gives my second hypothesis:
Hypothesis 2: Compared with their normative patterns of moral inclination, Democrats are less universalist in discussing China, whereas Republicans may be more universalist around this topic.
In order to better compare how similar or dissimilar the two parties are on the universalist-particularist spectrum across the two corpora (China-related tweets and general tweets), I also deployed the Jensen-Shannon divergence, an entropy-based statistical measure of the similarity between probability distributions, to calculate the inter-party divergence of moral inclination. My third hypothesis is therefore:
Hypothesis 3: When measured on the universalist-particularist moral spectrum, Democrats and Republicans converge significantly in their China-related tweets compared with their general tweets.
From there I dig further to compare the strategic positioning of China within each party's discourse. I approach this by analyzing, with a word embedding algorithm, how the frames adopted to talk about China coincide with or differ from those used to describe four of China's contemporary analogues: Russia, North Korea, Japan, and India. In terms of potential threats to national security, the elements of state-controlled campaigns, political espionage, and censorship may make China appear similar to Russia (Kydd 2020), whereas its growing military power and mode of governance may resemble North Korea in the eyes of U.S. lawmakers (Jisi and Ran 2019). In terms of economic growth, the rise of China may be reminiscent of Japan's rise in the last century (Foot 2017), both constituting key elements of the Asia-Pacific market, while its population (and hence market size) as well as its current speed of growth may seem similar to India's (Malik 2016). Comparing how China and these contemporary analogues are perceived thus sheds light on how lawmakers in a superpower depict a rising "Other" in military and economic terms, and on how the domestic divide shrinks or deepens on this issue. My fourth hypothesis thus states:
Hypothesis 4: Both parties position China similarly when it is viewed comparatively with other countries at the same points in time.
Perhaps a more revealing approach is to look beyond linguistic patterns and examine the content or meaning more closely. Therefore, I also conducted topic modeling to examine the overlapping themes and threads in the two parties' China discourse. A large intersection in the topics emphasized would support my last hypothesis below:
Hypothesis 5: The topics the two parties focus on regarding the issue of China will largely overlap, though likely with different topic loadings.
I started with a comprehensive dataset of tweet IDs for 2,817,747 tweets by members of the 116th U.S. Congress, collected between Jan 27, 2019 and May 7, 2020 (Wrubel and Kerchner 2020). The corresponding Twitter JSON data was then collected via the Twitter API. A comprehensive documentation of the Twitter data frame and variables can be found on the Twitter Developer website (Twitter 2020). For the purposes of this project, which mainly adopts text mining techniques, only the text and date variables were used for the main analysis. The dataset acquired from the Twitter API was then merged with data on the Congress members, including their Twitter handles and political affiliations. Noting that most Congress members operate several accounts, I screened out the official staff-managed accounts and included only their personal Twitter handles, as the opinions expressed in these accounts are usually more prominent and can better represent the members' genuine views (Green et al. 2020). This gives a total of 229,471 tweets, which constitutes the general corpus (general tweets henceforth) used as the baseline for measuring intra-party characteristics and inter-party patterns.
To better identify tweets related to China and relevant topics, I first coerced all the text to lower case and combined key phrases into single words, including "xijinping", "northkorea", "hongkong", "humanright", and "tradewar". I then extracted tweets that match at least one of the regular expressions "china", "chinese", "beijing", "xijinping", "ccp", and "prc". This gives a set of 3,303 tweets, which constitutes the China-related corpus of my main research interest here (China-related tweets henceforth).
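This filtering step can be sketched as follows. The original pipeline was implemented in R; the Python sketch below is an illustrative equivalent, and the three-tweet mini-corpus is hypothetical, not part of the actual dataset.

```python
import re

# Hypothetical mini-corpus; the real general corpus holds 229,471 tweets.
tweets = [
    "Tariffs on China hurt American farmers",
    "Happy holidays to everyone back home!",
    "The CCP must be held accountable",
]

# Collapse key phrases into single tokens before matching.
PHRASE_MAP = {
    "xi jinping": "xijinping",
    "north korea": "northkorea",
    "hong kong": "hongkong",
    "human right": "humanright",
    "trade war": "tradewar",
}

# The China-related keyword set described in the text.
CHINA_PATTERN = re.compile(r"\b(china|chinese|beijing|xijinping|ccp|prc)\b")

def is_china_related(text):
    text = text.lower()
    for phrase, token in PHRASE_MAP.items():
        text = text.replace(phrase, token)
    return bool(CHINA_PATTERN.search(text))

china_tweets = [t for t in tweets if is_china_related(t)]
```

Collapsing multi-word phrases first ensures that a tweet mentioning only "Xi Jinping" still matches the single-token pattern "xijinping".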
A set of standard preprocessing steps was applied before the text was fed into analysis: removing stop words, punctuation, numbers, URLs, and emojis, word stemming, and tokenization. For topic modeling, tokens that were extremely frequent (sparsity ≥ 0.95) or extremely infrequent (sparsity ≤ 0.05) were additionally removed to enhance the interpretability of the topic word clusters.
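A rough sketch of these preprocessing steps is given below. The actual analysis used R tooling; here the stop-word list and the crude suffix-stripping "stemmer" are toy stand-ins for a full stop-word lexicon and a real stemmer such as Porter's.

```python
import re

# Toy stop-word subset; a real pipeline uses a full stop-word lexicon.
STOP_WORDS = {"the", "a", "an", "to", "of", "is", "are", "and", "on", "for"}

def preprocess(text):
    """Lowercase, strip URLs/punctuation/numbers, tokenize, drop stop
    words, then apply a naive stand-in for stemming."""
    text = text.lower()
    text = re.sub(r"https?://\S+", " ", text)   # remove URLs
    text = re.sub(r"[^a-z\s]", " ", text)       # remove punctuation, numbers, emojis
    tokens = [t for t in text.split() if t not in STOP_WORDS]
    # Naive suffix stripping; a real pipeline would use a proper stemmer.
    return [re.sub(r"(ing|ed|s)$", "", t) if len(t) > 4 else t for t in tokens]
```

For example, `preprocess("Tariffs are hurting farmers https://t.co/abc")` reduces the tweet to the content tokens `["tariff", "hurt", "farmer"]`.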
I take a dictionary-based approach to measure the sentiment in Congress tweets, utilizing a dictionary developed by Bing Liu et al. (2005), available in the tidytext R package (Silge and Robinson 2016b). This dictionary contains a comprehensive lexicon of sentiment words in English, each labeled with its sentiment orientation (positive or negative) (Hu and Liu 2004). By identifying and counting the occurrences of positive and negative words, we can get a rough estimate of the sentiment of a tweet corpus in a certain period, calculated as follows:
Sentiment = [N(positive) - N(negative)]/N(total tweets)
Given that tweets are fairly short texts within a similar range of word counts, the absolute sentiment is measured as the number of positive words minus the number of negative words per tweet. To minimize the impact of individual differences in language use and also to test the persistence of relative sentiment, each party-month is used as the unit of corpus, which gives the main metric, relative sentiment, used to test hypothesis 1:
Relative sentiment = Sentiment(China-related tweets) - Sentiment (General tweets)
It's worth noting that the exact sentiment score can only be interpreted as an ordinal, as opposed to a cardinal, measure. Nevertheless, since hypothesis 1 sets out to measure the general attitude toward China compared with the baseline, it is informative enough if a consistently positive or negative relative sentiment is observed. To better observe the trends, I also plotted the absolute sentiment to assist interpretation. A corresponding surge of the same sentiment tendency observed for both parties around the same time can further support the hypothesis.
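A minimal sketch of how these two metrics combine is shown below. The tiny positive/negative word sets are hypothetical stand-ins for the Bing Liu lexicon, and the actual implementation used tidytext in R.

```python
# Toy sentiment lexicon standing in for the Bing Liu opinion lexicon.
POSITIVE = {"great", "support", "progress", "win"}
NEGATIVE = {"threat", "hurt", "lie", "abuse"}

def corpus_sentiment(tweets):
    """Sentiment = [N(positive) - N(negative)] / N(total tweets)."""
    pos = neg = 0
    for tweet in tweets:
        words = tweet.lower().split()
        pos += sum(w in POSITIVE for w in words)
        neg += sum(w in NEGATIVE for w in words)
    return (pos - neg) / len(tweets)

def relative_sentiment(china_tweets, general_tweets):
    """Relative sentiment = Sentiment(China-related) - Sentiment(general),
    computed per party-month in the actual analysis."""
    return corpus_sentiment(china_tweets) - corpus_sentiment(general_tweets)
```

A consistently negative relative sentiment across party-months is what hypothesis 1 predicts; the sign, not the magnitude, carries the interpretation.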
Differences in moral judgments spill over into individuals' habitual language use (Haidt 2012). To examine the moral inclinations manifested in the two parties' habitual language use, I used a validated dictionary developed by Frimer et al. (2019) to capture the moral framework in the studied corpus. The original vocabulary divides each foundation into virtue words and vice words, with virtue words indicating an act of respect for the principles of the foundation and vice words signaling an act of violation. Since I aim to characterize universalist-particularist framing, whether a speaker refers to the vice or the virtue category should not make a difference. Therefore, virtue words and vice words of the same foundation are treated as the same category. Based on the extensiveness of each foundation as detailed in the hypothesis section, I categorized care and fairness as universalism, and authority, loyalty, and sanctity as particularism. The weight w for a moral foundation is calculated as the total occurrences of words that belong to its dictionary category. To normalize the weight, I divide it by the sum of word occurrences across all moral categories within each corpus, denoted as o below. The metric relative universalism is therefore calculated as follows:
o(foundation_X) = w(foundation_X)/sum(w(foundations))
Univ = o(care) + o(fairness)
Part = o(loyalty) + o(authority) + o(sanctity)
Relative Universalism = Univ - Part
Like the sentiment score, the relative universalism score does not have meaning on its own and will thereby be interpreted in comparison with a synchronous intra-party baseline or inter-party benchmark.
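The computation above can be sketched as follows. The foundation word lists are illustrative stand-ins for the Frimer et al. dictionary, with virtue and vice words already pooled per foundation as described in the text.

```python
# Toy moral foundation dictionary (virtue and vice words pooled).
FOUNDATION_WORDS = {
    "care": {"harm", "protect", "suffer"},
    "fairness": {"unfair", "cheat", "justice"},
    "loyalty": {"betray", "loyal", "patriot"},
    "authority": {"obey", "law", "order"},
    "sanctity": {"pure", "degrade", "sacred"},
}

def relative_universalism(tokens):
    # w: raw count of dictionary hits per foundation
    w = {f: sum(t in words for t in tokens) for f, words in FOUNDATION_WORDS.items()}
    total = sum(w.values())
    if total == 0:
        return 0.0
    # o: weight normalized by total moral-word occurrences in the corpus
    o = {f: w[f] / total for f in w}
    univ = o["care"] + o["fairness"]
    part = o["loyalty"] + o["authority"] + o["sanctity"]
    return univ - part
```

Because the normalized shares sum to 1, the score is bounded in [-1, 1] and is meaningful only when compared across corpora, as noted above.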
To characterize how much the moral inclinations of the two parties have converged (or diverged) in tweeting about China compared with their normative patterns, I used the Jensen-Shannon divergence to measure how the two parties differ on the universalist-particularist spectrum. The Jensen-Shannon divergence is a statistical measure of the similarity between two probability distributions (Lu, Henchion, and Mac Namee 2020). In this case, the distributions of universalism and particularism word frequencies were used for the calculation:
Moral Divergence = JSD([Universalism_Democrat_sequence, Particularism_Democrat_sequence], [Universalism_Republican_sequence, Particularism_Republican_sequence])
This method returns finite values bounded by 0 and log(2), with 0 indicating that the two distributions are identical and log(2) indicating that they are completely different.
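A self-contained sketch of the Jensen-Shannon divergence (natural-log version, so values fall between 0 and log(2)) is given below; the per-party universalism/particularism frequency distributions would be passed in as p and q.

```python
import math

def jsd(p, q):
    """Jensen-Shannon divergence between two discrete probability
    distributions, using natural logs: 0 for identical distributions,
    log(2) for distributions with disjoint support."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]

    def kl(a, b):
        # Kullback-Leibler divergence, skipping zero-probability terms.
        return sum(ai * math.log(ai / bi) for ai, bi in zip(a, b) if ai > 0)

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

For instance, `jsd([1, 0], [0, 1])` returns log(2) (maximal divergence), while identical distributions return exactly 0.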
All three methods above share the limitation of not taking linguistic context into account. To study how the strategic positioning of China coincides with or differs from that of contemporarily analogous countries, I adopted a word2vec model trained with a neural network word embedding algorithm (Řehůřek and Sojka 2010). Word embedding creates high-dimensional vector representations for words based on their embedded context (Yin and Shen 2018). This algorithm is built upon the theory that word similarity can be represented by distributional similarity; thus two semantically similar words, when represented as vectors, will also be close to each other in the high-dimensional space (Jurafsky and Martin 2019). Since the word2vec model maps out word distances based on context-specific metrics like co-occurrences, as opposed to sheer frequency, a word's prevalence in a corpus does not affect the functioning of the model, which allows me to compare China and other countries in the whole corpus even though China-related tweets comprise little more than 1% of the general tweets. After turning the words into vector representations, I compute the cosine similarity, a measure of similarity between non-zero vectors in high-dimensional space (Kozlowski and Rybinski 2019), between different country words within the corpus of the same party. Note that due to the stochastic, context-specific nature of word2vec models, word vectors cannot be compared across contexts. To work around this limitation while still examining whether the perception of China is consistent across party lines, I generated a heat map to visualize China's positioning in relation (or resemblance) to the other countries. I conclude that China is perceived similarly if the pattern of resemblance intensity with the other countries is consistent across party lines.
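The cosine similarity computation can be sketched in plain Python. In practice the vectors come from a trained gensim word2vec model with 100+ dimensions; the 3-dimensional vectors below are hypothetical toy values, not actual embeddings.

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two non-zero vectors: 1 means the
    vectors point in the same direction, 0 means orthogonal."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical toy embeddings for three country words within one party's corpus.
vectors = {
    "china": [0.9, 0.2, 0.1],
    "russia": [0.8, 0.3, 0.1],
    "japan": [0.1, 0.9, 0.2],
}

# Pairwise similarity of "china" with each analogue, as in the heat map.
sim = {c: cosine_similarity(vectors["china"], vectors[c])
       for c in vectors if c != "china"}
```

With these toy values, "china" sits closer to "russia" than to "japan"; the actual analysis fills a full country-by-country heat map with such pairwise scores, separately per party.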
To explore the topics discussed in the Congress members' China-related tweets, I applied structural topic modeling (Roberts, Stewart, and Tingley 2019), an unsupervised learning algorithm that treats topics as probability distributions over individual words and returns word clusters as "topics".
The dimension of the topic space, a predetermined parameter, is the number of topics. There are formal criteria for choosing this parameter, but choices based on these criteria often contradict human interpretation (Chang et al. 2009). For this project, I relied on both automated parameter optimization and manual experimentation to find the number that gives the most interpretable and distinct topics. Based on prior theoretical insight into key issues in US-China relations, I first narrowed the number of topics down to the range of 5 to 12. I then examined the tweets of Democrats and Republicans separately within this range. This series of experiments revealed five prevalent topics that persist regardless of the topic number and party context. The top words chosen for these five topics were most distinctive when the topic number was set to 10 (though the remaining five topics were barely interpretable), so I settled on 10 as the most appropriate topic number. I then applied this model to the whole corpus of China-related tweets and estimated the effect of the party covariate on topic proportions.
Figure 1 below plots the relative sentiment of China-related tweets compared with general tweets from February 2019 to May 2020. A below-zero score indicates that the sentiment exhibited in China-related tweets is relatively more negative than in the general tweets. As the figure shows, this pattern remains consistent across party lines throughout the period examined. It is also interesting to observe that the sentiment trended significantly downward for both parties after the onset of COVID-19 in 2020, with the average sentiment of this period dropping below the lowest point previously reached.
Figure 3 shows results from examining the moral frameworks across four sub-corpora, divided by party and scope of the tweets (you may click on the legend to show/hide series for better comparison within one variable while controlling the other). These results support my second hypothesis. When viewed comparatively through intra-party lenses, Democrats adopted a much less universalist moral framework in China-related tweets than in general tweets, whereas the opposite is true for Republicans. When observing the inter-party normative patterns, Democrats are systematically more universalist than Republicans in their moral leaning; this difference is almost reversed when it comes to discussing China, with Republicans talking in more universalist terms than Democrats in most periods.
Also note that the normative pattern of a clear inter-party divide does not hold in the context of China. This trend of convergence is demonstrated with more statistical rigor in Figure 4 below. Starting from June 2019, the two parties in general converged relative to the baseline degree of divergence, with several values close to 0 (extremely similar). What is perhaps interesting to point out is that during the periods in which the divergence is above baseline, the pattern was reversed without exception (Democrats less universalist than Republicans). This forms a contrast with their normative divergence, which remains relatively steady between 0.05 and 0.1. Together with the sentiment analysis, these results show that China is an exceptional issue on which both intra-party characteristics and inter-party patterns deviate from the norm.
The heatmap in Figure 5 visualizes the semantic similarity between China and its four contemporary analogues (Russia, North Korea, Japan, and India) in the context of all Democratic/Republican tweets. For the two military/ideology analogues, Russia and North Korea, linguistic similarity with China is higher in Republican tweets for both, more notably so for Russia. For the two economic analogues, the resemblance between Japan and China is significantly higher in Republican tweets, while that between China and India is much lower. When accounting for the relative positioning of all five countries, in Democratic tweets we see that China effectively "links" the four countries together, which in themselves can be structured into two clusters that are extremely dissimilar to each other (Japan and India vs. Russia and North Korea). This metaphoric "linkage" generally holds in Republican tweets as well, with the resemblance of China-Russia-North Korea and China-Japan both higher. This reveals the complex nature of US-China relations as perceived by both parties: China can play the role of a national security threat, a military rival, and a trading partner simultaneously.
In terms of tweet content, five prominent topics emerged that endured the stochastic perturbations of the topic models. They persisted as the intersection of both parties' discourse, though with different weightings. Figure 6-1 shows these five topics and their proportions in all China-related tweets, as well as the top words associated with each topic. As the topic words indicate, the first four topics are, respectively, trade, national security, COVID-19, and human rights. The last topic refers generally to holding the Chinese Communist Party accountable. Though it may be less interpretable as a topic, its prevalence across all the topic models I experimented with may validate it as a theme that connects the other topics. Figure 6-2 compares the difference in topic loading between the two parties. A positive value means Democrats talk about the topic more than Republicans (and vice versa), and a line not coinciding with the zero axis adds statistical significance to such a difference. Aligned with the general paradigm, Democrats place significantly more weight on the topics of public health and human rights, whereas the topics of national security and blaming the Communist Party load higher proportions in Republican tweets. Trade is at the center of both parties' discourse, emphasized slightly more by Republicans than by Democrats. To be clear, since these five topics are persistent across parties and models with notable proportions (> 0.1), the fact that one party talks relatively less about a certain topic should not undermine the absolute weight that topic carries in that party's discourse. The party difference in the words associated with each topic is not shown in the visualization but will be used to draw important implications in the discussion section.
Together with the country comparison, these results suggest that the two parties are largely aligned on what constitutes the most salient concerns in handling relations with China, though normative patterns of polarization persist in their more specific strategies and emphases.
These results highlight the degree to which Democratic and Republican members of the U.S. Congress converged in their framing of China-related issues over the past year, in sharp contrast with the highly polarized political landscape in most other issue areas. Combining analyses that gauge general attitude and framework with closer investigations of content and meaning, this project gives a detailed account of how Congress can be unified by the threat of a rising "Other". Utilizing a variety of computational text analysis methods while leveraging theoretical insight from an interdisciplinary body of literature, my analysis provides creative measures to quantify intra-party dynamics and inter-party differences through both temporal and contextual lenses. Despite some obvious limitations, such as the absence of context in dictionary-based methods and the agnostic nature of machine learning, I worked around them by constructing relative measures against a baseline (the context of general tweets) or benchmarks (other countries). The methods detailed in this paper can therefore be easily transferred to explore other issue areas or topics. This line of thinking translates abstract, qualitative characteristics into concrete, quantitative measures that we can better visualize.
Apart from methodology, the examined subject of US-China relations is of special importance in the midst of a global pandemic as of this writing. Whether the two countries confront or collaborate with each other can change our fate in the battle with the coronavirus (Wu 2020; Evans et al. 2020). The bilateral political consensus needed for coordinated global health action has yet to be reached between the two nations, which seem to be battling each other more than the virus in a narrative of mutual blame (Jaworsky and Qiaoan 2020). Though it is yet to be seen whether Biden can bring positive changes, the results of my analysis suggest that hope is dim. Aside from the consistently negative attitude and parochial moral inclination that Democrats share with Republicans, the topic modeling results suggest that the extra emphasis Democrats put on human rights issues will likely be perceived by China as a violation of sovereignty (Renouard 2020), thereby deepening the cleavage. On the issue of national security, the finding that Democrats view Russia and North Korea as more similar to China than Japan or India suggests that China is seen more as a foe than as an ally or partner. However, I argue that we may see a window of opportunity on trade and COVID-19. Drawing insights from Democrats' words associated with trade ("tradewar", "tariff", "will", "hurt", "lose", "farmer"), the framing of trade tariffs as a war and the emphasis on their negative domestic impact suggest that Democrats may relax the blunt strategy of imposing tariffs and take a more constructive approach. When it comes to COVID-19, the Democratic topic words "health", "wuhan", "public", "american", "people", "travel", "asian", and "xenophobic" also exhibit a different framing compared with Republicans' "worldhealth", "communist", "ccp", "china", "lie", and "accountable".
While Republicans' topic words clearly play to the strategy of outsourcing responsibility, Democrats' positive framing of the pandemic as a national public health issue and their condemnation of xenophobic voices give us some hope that the backfiring politics of blame may be reduced.
It is important to emphasize and elaborate on some key limitations in addition to the ones already noted. Although the subjectivity of the sentiment score was reduced with various normalization and comparative measures, these do not make up for its vital shortcoming of context-independence. The most apparent example might be the omission of possible negation in context, which can potentially flip the sentiment orientation and completely invalidate the measure. Moreover, the sentiment dictionary I used was not customized to the domain of foreign policy, and the binary labeling of sentiment orientation does not account for differences in intensity across words within the same category. This method therefore relies heavily on the assumptions that affirmation is more prevalent in daily language use than negation, and that frequent mentions of positive or negative words can indicate the general positiveness or negativeness associated with the subject matter. In other words, a negation of positive or negative sentiment still suggests the positiveness or negativeness originally associated with a topic, and is usually less intense than its affirmative counterpart. To avoid such problems, future studies looking to gauge sentiment may benefit from recent developments in natural language processing (Hii, n.d.; Liang et al. 2019), which enable the detection of local and global context to adjust for these biases.
These limitations are less pronounced in measuring moral foundations, as mentions of the vice and virtue words of the same moral foundation are equally indicative of the underlying moral framework. However, it should be noted that the number of words for each moral foundation in the dictionary is not the same and that, even if it were, the validity of using relative frequency to infer a document's leaning on the universalist-particularist spectrum is not well established. The scores can only be viewed comparatively and do not have meaning on their own. For example, a relative universalism score below 0 (as in most cases) does not by itself indicate a more particularist leaning. The overall negativeness is likely due to universalism containing fewer words and moral categories (two vs. three) than particularism.
The same "ordinal, not cardinal" interpretation criterion also holds for the cosine similarity between word vectors. Though one may conclude from values close to 0 or 1 that two words are very dissimilar or very similar, it is hard to characterize the similarity for values in between. Even for values like 0 or 1, the similarity can only be stated in metaphoric, not absolute, terms. The explanation of why and how exactly two countries are viewed similarly will have to draw insight from the more rigorous study of international relations and foreign policy. Also, given the context, the similarity should only be interpreted as perceived resemblance from a U.S. (Congress) standpoint. This method therefore merely serves as a visual aid for a simplified view of the countries' strategic positioning. Nevertheless, with more comprehensive text data, future studies may be able to expand the range to more countries or even other subjects and enhance the extent of interpretability. It is recommended that a network visualization be used in addition to heat maps, with edge lengths indicating the cosine distance (1 - cosine similarity; the smaller, the more similar) between countries, so as to better view relative positioning. It would also be ideal if we could compare the similarity of the same word, in this case "China", across the contexts of different parties or periods. To that end, methods that temporally align different word embedding models have been developed and validated with high accuracy (Di Carlo, Bianchi, and Palmonari 2019), enabling us to examine the meaning shift of the same word across different corpora.
The simplicity of the content exploration limits this project from mining deeper insights into future trends of US-China relations. A small step further might be examining text surrounding specific topic events and looking more closely into how the two parties act and react in subdomains under the issue area of China. A more ambitious long-term goal might be applying more advanced computational techniques to validate some of the established theories or findings in the field of US-China relations. Such future studies, combined with the ones suggested previously, if placed in conversation with traditional international relations research, can create a synergistic flow, checking the validity of methods in the emerging field of computational social science while also shedding light on how traditional research can be enhanced with the analysis of large text data.
Chang, Jonathan, Sean Gerrish, Chong Wang, Jordan Boyd-Graber, and David Blei. 2009. “Reading Tea Leaves: How Humans Interpret Topic Models.” Advances in Neural Information Processing Systems 22: 288–96.
Change, Global Environmental. 2014. “Same Dream, Different Beds: Can America and China Take Effective Steps to Solve the Climate Problem?” Global Environmental Change 24: 2–4.
Di Carlo, Valerio, Federico Bianchi, and Matteo Palmonari. 2019. “Training Temporal Word Embeddings with a Compass.” In Proceedings of the AAAI Conference on Artificial Intelligence, 33:6326–34.
Evans, Tierra Smiley, Zhengli Shi, Michael Boots, Wenjun Liu, Kevin J Olival, Xiangming Xiao, Sue Vandewoude, et al. 2020. “Synergistic China–US Ecological Research Is Essential for Global Emerging Infectious Disease Preparedness.” EcoHealth, 1–14.
Foot, Rosemary. 2017. “Power Transitions and Great Power Management: Three Decades of China–Japan–US Relations.” The Pacific Review 30 (6): 829–42.
Frimer, JA, R Boghrati, J Haidt, J Graham, and M Dehgani. 2019. “Moral Foundations Dictionary for Linguistic Analyses 2.0.” Unpublished Manuscript.
Gentzkow, Matthew, Jesse M. Shapiro, and Matt Taddy. 2019. “Measuring Group Differences in High-Dimensional Choices: Method and Application to Congressional Speech.” Econometrica 87 (4): 1307–40. https://doi.org/10.3982/ECTA16566.
Glaser, Bonnie S., and Kelly Flaherty. 2020. “US-China Relations Hit New Lows Amid Pandemic.” Comparative Connections: A Triannual E-Journal on East Asian Bilateral Relations 22 (1).
Goldberg, Coby, and Jordan Schneider. 2020. “A Divided Washington Is (Sort of) United on China.” Foreign Policy. https://foreignpolicy.com/2020/11/09/biden-china-republicans-democrats-congress/.
Graham, Jesse, Jonathan Haidt, Sena Koleva, Matt Motyl, Ravi Iyer, Sean P Wojcik, and Peter H Ditto. 2013. “Moral Foundations Theory: The Pragmatic Validity of Moral Pluralism.” In Advances in Experimental Social Psychology, 47:55–130. Elsevier.
Green, Jon, Jared Edgerton, Daniel Naftel, Kelsey Shoub, and Skyler J Cranmer. 2020. “Elusive Consensus: Polarization in Elite Communication on the COVID-19 Pandemic.” Science Advances 6 (28): eabc2717.
Gries, Peter. 2020. “Humanitarian Hawk Meets Rising Dragon: Obama’s Legacy in US China Policy.” In The United States in the Indo-Pacific. Manchester University Press.
Guo, Lei, Kate Mays, and Jianing Wang. 2019. “Whose Story Wins on Twitter? Visualizing the South China Sea Dispute.” Journalism Studies 20 (4): 563–84.
Haidt, Jonathan. 2012. The Righteous Mind: Why Good People Are Divided by Politics and Religion. Vintage.
Hii, Doreen. n.d. “Using Meaning Specificity to Aid Negation Handling in Sentiment Analysis.”
Hu, Minqing, and Bing Liu. 2004. “Mining and Summarizing Customer Reviews.” In Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 168–77.
Jaworsky, Bernadette Nadya, and Runya Qiaoan. 2020. “The Politics of Blaming: The Narrative Battle Between China and the US over COVID-19.” Journal of Chinese Political Science, 1–21.
Jisi, Wang, and Hu Ran. 2019. “From Cooperative Partnership to Strategic Competition: A Review of China–US Relations 2009–2019.” China International Strategy Review 1 (1): 1–10.
Jurafsky, Daniel, and JH Martin. 2019. “Vector Semantics and Embeddings.” Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition, 94–122.
Kennedy, Scott. 2020. “Thunder Out of Congress on China.” Center for Strategic and International Studies, November.
Kozlowski, Marek, and Henryk Rybinski. 2019. “Clustering of Semantically Enriched Short Texts.” Journal of Intelligent Information Systems 53 (1): 69–92.
Kydd, Andrew. 2020. “Switching Sides: Changing Power, Alliance Choices and US–China–Russia Relations.” International Politics, 1–30.
Lee, Paul SN. 2016. “The Rise of China and Its Contest for Discursive Power.” Global Media and China 1 (1-2): 102–20.
Liang, Bin, Jiachen Du, Ruifeng Xu, Binyang Li, and Hejiao Huang. 2019. “Context-Aware Embedding for Targeted Aspect-Based Sentiment Analysis.” arXiv Preprint arXiv:1906.06945.
Lu, Jinghui, Maeve Henchion, and Brian Mac Namee. 2020. “Diverging Divergences: Examining Variants of Jensen Shannon Divergence for Corpus Comparison Tasks.” In Proceedings of the 12th Language Resources and Evaluation Conference, 6740–4.
Malik, Mohan. 2016. “Balancing Act: The China-India-Us Triangle.” World Affairs 179 (1): 46–57.
Verma, Raj. 2020. “Rebranding China: Contested Status Signaling in the Changing Global Order.” Asian Journal of Political Science, 1–3.
Renouard, Joe. 2020. “Sino-Western Relations, Political Values, and the Human Rights Council.” Journal of Transatlantic Studies, 1–23.
Roberts, Margaret E., Brandon M. Stewart, and Dustin Tingley. 2019. “stm: An R Package for Structural Topic Models.” Journal of Statistical Software 91 (2): 1–40. https://doi.org/10.18637/jss.v091.i02.
Řehůřek, Radim, and Petr Sojka. 2010. “Software Framework for Topic Modelling with Large Corpora.” In Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, 45–50. Valletta, Malta: ELRA.
Silge, Julia, and David Robinson. 2016a. “Tidytext: Text Mining and Analysis Using Tidy Data Principles in R.” Journal of Open Source Software 1 (3): 37. https://doi.org/10.21105/joss.00037.
———. 2016b. “Tidytext: Text Mining and Analysis Using Tidy Data Principles in R.” JOSS 1 (3). https://doi.org/10.21105/joss.00037.
Sutter, Robert. 2017. “Barack Obama, Xi Jinping and Donald Trump—Pragmatism Fails as US-China Differences Rise in Prominence.” American Journal of Chinese Studies, 69–85.
Swanson, Ana. 2020. “Biden’s China Policy? A Balancing Act for a Toxic Relationship.” The New York Times, November. https://www.nytimes.com/2020/11/16/business/economy/biden-china-trade-policy.html.
Tan, Weizhen. 2020. “China Alienates Its Washington Allies as Its Relationship with the U.S. Worsens.” CNBC. https://www.cnbc.com/2020/07/27/us-china-tensions-escalate-amid-rivalry-in-the-south-china-sea.html.
Twitter, Inc. 2020. “Tweet Object Documentation | Twitter Developer.” Twitter. Twitter. https://developer.twitter.com/en/docs/twitter-api/v1/data-dictionary/overview/tweet-object.
Waytz, Adam, Ravi Iyer, Liane Young, Jonathan Haidt, and Jesse Graham. 2019. “Ideological Differences in the Expanse of the Moral Circle.” Nature Communications 10 (1): 1–12.
Wrubel, Laura, and Daniel Kerchner. 2020. “116th U.S. Congress Tweet Ids.” Harvard Dataverse. https://doi.org/10.7910/DVN/MBOJNS.
Wu, Tong. 2020. “COVID-19, the Anthropocene, and the Imperative of US–China Cooperation.” EcoHealth, 1–2.
Yin, Zi, and Yuanyuan Shen. 2018. “On the Dimensionality of Word Embedding.” In Advances in Neural Information Processing Systems, 887–98.
Zaidi, Syed Muhammad Saad, and Adam Saud. 2020. “Future of US-China Relations: Conflict, Competition or Cooperation?” Asian Social Science 16 (7): 1.