Political uses of the ancient past on social media are predominantly negative and extreme

  • Chiara Bonacchi ,

    Contributed equally to this work with: Chiara Bonacchi, Jessica Witte, Mark Altaweel

    Roles Conceptualization, Formal analysis, Writing – original draft, Writing – review & editing

    chiara.bonacchi@ed.ac.uk

    Affiliation Department of Archaeology, School of History, Classics and Archaeology, University of Edinburgh, Edinburgh, United Kingdom

  • Jessica Witte ,

    Contributed equally to this work with: Chiara Bonacchi, Jessica Witte, Mark Altaweel

    Roles Conceptualization, Formal analysis, Writing – original draft, Writing – review & editing

    Affiliation Department of Archaeology, School of History, Classics and Archaeology, University of Edinburgh, Edinburgh, United Kingdom

  • Mark Altaweel

    Contributed equally to this work with: Chiara Bonacchi, Jessica Witte, Mark Altaweel

    Roles Conceptualization, Formal analysis, Writing – original draft, Writing – review & editing

    Affiliation Institute of Archaeology, University College London, London, United Kingdom

Abstract

This study assesses whether references to the ancient past in debates about political issues on social media over-represent negative and extreme views. Using precision-recall tests, we evaluate the performance of three sentiment analysis methods (VADER, TextBlob and Flair Sentiment) on a corpus of 1,478,483 posts, comments and replies published on Brexit-themed Facebook pages between 2015 and 2017. Drawing on the results of VADER and manual coding, we demonstrate that: 1) texts not containing keywords relating to the Iron Age, Roman and medieval (IARM) past are mostly neutral and 2) texts with IARM keywords express more negative and extreme sentiment than those without keywords. Our findings show that mentions of the ancient past in political discourse on multi-sided issues on social media are likely to indicate the presence of hostile and polarised opinions.

1. Introduction

For centuries, the past has been leveraged as a powerful means of framing and legitimising political identities [1–4]. Today, such identities are often expressed on social media. However, most of the existing literature on political uses of the past online has analysed populist nationalist and far-right speech [5–9]. Very few studies have examined how references to the past feature in multi-sided discussions about a specific political issue [10, 11]. Therefore, although substantial knowledge outlines how different ‘myths’ and heritage symbols are invoked to support extreme ideologies in online environments, there is virtually no information on whether people with more moderate views similarly mobilise the past to make sense of the present and plan for the future. The philosopher Jörn Rüsen defines historical consciousness as the ‘mental procedure by which the past is interpreted for the sake of understanding the present and anticipating the future’ [12 p. 45]. Studying historical consciousness not only in the context of nationalism and extremism, but also in social media debates about political issues between users with more nuanced or milder opinions, is key to fully grasping how conceptions of the past shape political identities and decision-making.

Furthermore, researchers have yet to formally investigate the significance of sentiment polarity and its extremity, or strength of polarity, in relation to social media users’ positions in heritage-based political debates. Some research has speculatively reflected on the relationship between emotions, historical thinking and political activism online [8, 11]. This scholarship highlighted several topoi present in populist and far-right discourse referencing the past on Twitter/X: threat to the ingroup and their heritage; a quest for justice for those who belong to the ingroup; and heroism and collective action to restore justice for the ingroup [7, 11, 13, 14]. This literature also specifically stressed that heritage on social media is used to create affective ingroups ‘along religious-cultural lines’ that exclude those who do not belong [8, 14]. However, existing studies do not rigorously measure the polarity and extremity of this ‘affective’ dimension. Yet detecting negatively polarised sentiment can be useful for identifying and combatting hostility linked with extremism and ‘group-based anger’ online [15].

Extremist speech can be hidden using seemingly neutral language [16]. More frequently, however, extremity in ideology is positively correlated with extremity in sentiment polarity. As Weismueller and colleagues have shown, Twitter/X users with politically extreme views tend to share strongly negative content more frequently than those who hold moderate opinions [17]. In turn, tweets displaying extreme sentiment correlate with a higher number of retweets [17], especially if they have negative polarity [18]. Furthermore, some research suggests that communities on social media function as negatively polarised ‘echo chambers’ where users holding similar opinions discuss topics amongst each other, rarely encountering different beliefs. For example, in examining the consumption of Brexit-related information on news media Facebook pages, Del Vicario et al. found ‘two distinct communities of news outlets’ where individuals did not interact with the opposing viewpoint and expressed content with primarily negative sentiment [19 p. 6]. However, other research has concluded that the ‘echo chamber’ effect might be overstated. For instance, participants in multi-sided discussions online still develop polarised perspectives as a result of exchanging emotionally heightened content [20]. Texts of this kind trigger motivated reasoning, a bias ‘directly related to ideological beliefs’ such as those ‘which signify and promote loyalty to an in-group’ [21 p. 5]. In turn, motivated reasoning leads to opinion polarisation [20].

Our study examines whether the Iron Age, Roman and early medieval (IARM) past is leveraged to express overtly negative and extreme political views. We address these questions by conducting sentiment analysis on a corpus of posts, comments and replies collected from public Facebook pages related to the 2016 Referendum on the UK’s membership of the European Union. The absence of comparable formal assessments of sentiment in heritage studies does not allow us to formulate clear hypotheses. Furthermore, the more speculative literature available on the affective power of heritage in political discourse online focuses on far-right and extreme nationalist ideologies. Given the impossibility of making directional predictions, we explore whether texts referencing the IARM past in multi-sided social media discussions about Brexit are:

  1. prevalently negative and extreme;
  2. more negative and extreme than content not containing mentions of the ancient past.

2. Materials and methods

2.1. Materials

The dataset consists of a corpus of 1,478,483 posts, comments and replies published in English on 364 public Facebook pages that had the word ‘Brexit’ in the title or description. These documents were extracted from 1 March to 30 April 2017 using Facebook’s public API; they were anonymised by substituting usernames and IDs with random numbers. Within the corpus, we identified a subset of 2,528 documents containing at least one reference to the Iron Age, Roman and early medieval (IARM) past of Britain via a keyword-based approach. IARM heritage keywords comprised place names, names of key historical figures and terms used to refer to the period between 800 BCE and 800 CE. The detection of keywords was undertaken as part of prior research (for details on how it was conducted, see [22, p. 178]). We used the Natural Language Toolkit (NLTK) library in Python to prepare the corpus for sentiment analysis, performing word tokenisation and removing English-language stop words, punctuation and symbols (e.g. &, %, ?), non-ASCII characters and extra white space.
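As an illustration of the preprocessing described above, a minimal NLTK pipeline might look like the sketch below. This is an assumption-laden reconstruction rather than the authors' exact code (which is available via GitHub [38]), and the example sentence is invented.

```python
import re
import string

import nltk
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize

nltk.download("punkt", quiet=True)      # tokeniser models
nltk.download("stopwords", quiet=True)  # English stop-word list

STOP_WORDS = set(stopwords.words("english"))

def preprocess(text: str) -> list[str]:
    """Tokenise one document, stripping stop words, symbols and extra spaces."""
    text = text.encode("ascii", errors="ignore").decode()             # drop non-ASCII
    text = text.translate(str.maketrans("", "", string.punctuation))  # drop &, %, ? etc.
    text = re.sub(r"\s+", " ", text).strip()                          # collapse whitespace
    return [t for t in word_tokenize(text.lower()) if t not in STOP_WORDS]

print(preprocess("Hadrian's Wall was built by the Romans & it still stands!"))
```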

This corpus comprises Facebook pages and, within them, views representing different positions towards Brexit, with some in favour and others against [10]. It was compiled as part of previous research [10, 11], but ethical approval for new analyses was sought and obtained in 2022 from the University of Edinburgh. We chose to analyse our existing dataset for two reasons. First, examining social media data about Brexit, a high-profile event that has been intensely studied, offers significant opportunities for comparing our findings to existing work whilst contributing to ongoing scholarship on public discourse about political phenomena. Second, in the UK and in many other countries, institutional ethics policies require researchers to acquire data in compliance with platforms’ Terms of Service (ToS) agreements. Like those of other major platforms including X (formerly Twitter), Instagram and TikTok, Facebook’s ToS state that data must be extracted using its application programming interface (API). Yet following the Cambridge Analytica scandal, Facebook closed its public API, making it challenging to acquire additional data from the platform. Although the so-called ‘post-API age’ may appear to introduce a new barrier to reproducibility in studies examining social media data, such research has always been difficult to reproduce due to a host of known quality issues affecting platform-sourced data. Despite these limitations, however, it is critical to continue studying social media data, acknowledging that platforms function as public spaces for discourse on a range of political topics.

2.2. Background to methods

Sentiment analysis is a natural language processing (NLP) approach for studying human emotion in text [23]. Recently created frameworks allow measurement of both sentiment polarity, that is, a negative, positive or neutral orientation, and extremity, defined as the overall strength of sentiment. Such frameworks have been used to examine a variety of textual data [24, 25]. Sentiment analysis methods can be subdivided into three distinct groups. The first consists of dictionary-based methods, which pair keywords (or phrases) with corresponding emotion or polarity values [26, 27]. For example, Almatarneh and Gamallo applied a lexicon-based method to assess extreme opinions, defined as the most positive or negative [26]. Similarly, Heidenreich and colleagues utilised a dictionary to examine the level of extreme sentiment in status updates about migration published by the Facebook accounts of 1,702 political actors [27].
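As a toy illustration of how a dictionary-based method works, the sketch below averages the polarity values of lexicon terms matched in a text. The lexicon entries and their scores are invented for the example and are not drawn from any of the tools cited above.

```python
# Invented toy lexicon: term -> polarity value in [-1, 1]
LEXICON = {"disaster": -0.8, "betrayal": -0.9, "hope": 0.5, "great": 0.6}

def lexicon_score(tokens: list[str]) -> float:
    """Average the polarity of matched terms; 0.0 means neutral or no match."""
    hits = [LEXICON[t] for t in tokens if t in LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

print(lexicon_score("this deal is a disaster and a betrayal".split()))  # -0.85
```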

The second group of frameworks relies on machine learning. In this case, sentiment is investigated using support vector machines (SVM), Naive Bayes (NB), deep learning techniques including artificial neural networks, and regression-based methods [28, 29]. For instance, Sofat and Bansal chose multiple methods, including convolutional neural network long short-term memory (CNN-LSTM), to detect radical content in tweets and blog posts [30]. Jamil and co-authors identified extreme sentiment with the language representation model BERT (Bidirectional Encoder Representations from Transformers), which facilitates context awareness in determining word meanings, thereby improving score accuracy [31].

Finally, a third approach to sentiment analysis combines dictionary-based and machine learning methods [32–34]. The results of individual methods are deemed stronger if other techniques lead to comparable conclusions or one method can be shown to support higher accuracy and consistency for desired predictions. Dictionaries are flexible and adaptable, providing the possibility to analyse specific thematic domains with bespoke sets of keywords. On the other hand, machine learning techniques report relatively higher accuracy and precision, but require the creation of a sufficiently large training corpus [35–37].

2.3. Methods

As prior work has highlighted, a multi-method approach to identifying sentiment most accurately captures both polarity and extremity relative to the subject and linguistic features of the corpus. Therefore, we initially chose this strategy, deploying and subsequently testing the accuracy of VADER (Valence Aware Dictionary for sEntiment Reasoning), TextBlob Sentiment, and Flair Sentiment. We selected these specific techniques because they have an established track record of being deployed in comparable studies, which we discuss below. The code used to undertake the analysis is available via GitHub [38].

Both VADER and TextBlob are dictionary-based methods. VADER maps lexical features to emotional intensities, providing sentiment scores from -1 (negative) to 1 (positive), with 0 being neutral [39–42]. TextBlob also incorporates aspect-based sentiment analysis, that is, tools to identify the subject target of a sentiment as well as its polarity [41]. We evaluated this capability during our initial search for methods to compare, alongside TextBlob’s other libraries, but because we aimed to capture more general sentiment, aspect-based analysis was not a key focus of our results. Finally, Flair Sentiment [43, 44] uses an LSTM neural network model and multiple embedding types (GloVe, BERT, and ELMo) to contextualise the sentiment of terms based on their surrounding text. Sentences are scored between 0 and 1, with ‘negative’ or ‘positive’ designations. It is possible to train the LSTM neural network with either a bespoke corpus created by the user or a standard pre-trained library. We tried both approaches and, surprisingly, found the latter to be more accurate and sensitive than training with a bespoke corpus. This was likely because the subset of our corpus containing IARM keywords was too small for satisfactory training.
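The three libraries can be invoked along the following lines; this is a minimal sketch using the publicly documented APIs of vaderSentiment, TextBlob and Flair (with Flair's standard pre-trained 'en-sentiment' model), not the authors' exact analysis code [38], and the example text is invented.

```python
from flair.data import Sentence
from flair.models import TextClassifier
from textblob import TextBlob
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

vader = SentimentIntensityAnalyzer()
flair_tagger = TextClassifier.load("en-sentiment")  # standard pre-trained model

def score_all(text: str) -> dict:
    """Score one document with all three methods."""
    sentence = Sentence(text)
    flair_tagger.predict(sentence)
    flair_label = sentence.labels[0]  # NEGATIVE/POSITIVE plus a confidence in [0, 1]
    return {
        "vader": vader.polarity_scores(text)["compound"],  # [-1, 1], 0 = neutral
        "textblob": TextBlob(text).sentiment.polarity,     # [-1, 1]
        "flair": (flair_label.value, round(flair_label.score, 3)),
    }

print(score_all("Leaving was an absolute disaster for this country."))
```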

To establish which sentiment analysis technique might generate results consistent with empirical evidence, we compared the outputs of the different analyses discussed above using 500 randomly sampled texts without keywords (Sample 1) and 300 with keywords (Sample 2). Neither sample included neutrally polarised texts. Sample 2 comprised 300 texts because these were all that remained available for coding once neutral texts had been excluded. Thereafter, we performed precision and recall tests on these samples to obtain accuracy measures for positive predictions and sensitivity, or completeness [45]. Precision, recall and F1 scores were initially calculated for positive and negative sentiment and, subsequently, for extreme (>0.75 or <-0.75) and mild sentiment (between -0.75 and 0.75). The results of this analysis were split into categories based on validity and polarity: true extreme positive, false extreme positive, true mild positive, false mild positive, true extreme negative, false extreme negative, true mild negative and false mild negative.
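To make the evaluation logic concrete, the sketch below derives extreme/mild labels from polarity scores at the ±0.75 cut-off and computes precision, recall and F1 with scikit-learn. The manual codes and scores are invented, and this illustrates the procedure rather than reproducing the authors' implementation.

```python
from sklearn.metrics import precision_recall_fscore_support

EXTREME = 0.75  # |score| > 0.75 counts as extreme, otherwise mild

def extremity_label(score: float) -> str:
    return "extreme" if abs(score) > EXTREME else "mild"

# Invented manual codes (ground truth) and predicted polarity scores
manual = ["extreme", "mild", "extreme", "mild", "extreme", "mild"]
scores = [-0.91, 0.42, 0.83, -0.30, -0.60, 0.95]
predicted = [extremity_label(s) for s in scores]

precision, recall, f1, _ = precision_recall_fscore_support(
    manual, predicted, labels=["extreme", "mild"], zero_division=0
)
for label, p, r, f in zip(["extreme", "mild"], precision, recall, f1):
    print(f"{label:8s} precision={p:.2f}  recall={r:.2f}  F1={f:.2f}")
```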

3. Results

In this section, we present the results of the precision and recall tests on the randomly sampled texts. We then discuss the polarity scores of the most accurate method when applied to the corpus of Brexit-themed public Facebook pages. Finally, we integrate this analysis with the outcomes of manual coding of randomly selected documents from Sample 1 (without IARM heritage keywords) and Sample 2 (with IARM heritage keywords).

3.1. Precision-recall results

Overall, the precision and recall tests demonstrate that the different approaches to sentiment analysis can reliably detect general negative or positive sentiment in texts that do not contain mentions of the IARM past (Sample 1), but are less successful in capturing sentiment extremity (Table 1). Additionally, we find that precision and recall for texts with IARM heritage keywords (Sample 2) are much weaker. For Sample 1, VADER and the lexical term lists scored highest in the Positive-Negative precision and recall tests, while VADER and Flair scored highest for the Extreme-Non-Extreme tests. Therefore, when considering the results of all tests together, VADER is the most accurate and the most sensitive method for analysing texts with no mentions of the IARM past.

Table 1. Precision-recall tests for negative and positive sentiment, and for extreme and non-extreme sentiment, applied to Sample 1 and Sample 2.

https://doi.org/10.1371/journal.pone.0308919.t001

3.2. VADER results

Because VADER displayed the best overall precision-recall scores for Sample 1, we utilised this method for the full analysis of the 974,053 posts, comments and replies that do not reference the IARM past. The precision-recall tests suggest lower precision for extremity measures than for polarity measures across all methods, including VADER. Therefore, we discuss the total counts for the negative, positive and neutral sentiment categories. We found mean sentiment polarity to be approximately 0 for this no-keywords subset, with a standard deviation of around 0.39. Furthermore, dispersion within texts not containing mentions of IARM heritage is relatively low (Gini coefficient of 0.21).
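The descriptive statistics reported above can be reproduced along the lines below. The paper does not state exactly how the Gini coefficient was calculated over signed polarity scores, so this sketch applies one common closed-form formulation to absolute score magnitudes, and the simulated scores are stand-ins for the real VADER output.

```python
import numpy as np

def gini(scores: np.ndarray) -> float:
    """Gini coefficient of score magnitudes (one common formulation)."""
    v = np.sort(np.abs(scores))  # sorted absolute polarity scores
    n = v.size
    cum = np.cumsum(v)
    return (n + 1 - 2 * cum.sum() / cum[-1]) / n

# Simulated stand-in for the VADER compound scores (mean ~0, sd ~0.39)
rng = np.random.default_rng(42)
scores = rng.normal(loc=0.0, scale=0.39, size=100_000).clip(-1, 1)
print(f"mean={scores.mean():.3f}  sd={scores.std():.3f}  gini={gini(scores):.2f}")
```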

Most of the posts, comments and replies without keywords are neutral in polarity. In addition, the similar number of negative and positive texts indicates a relatively even polarity distribution across time (Figs 1–3). However, the percentage of neutral sentiment, compared to mild positive or negative sentiment, was higher from July 2013 to December 2014.

Fig 1. Proportion of negative, positive, and neutral texts with no IARM heritage keywords, calculated using VADER.

https://doi.org/10.1371/journal.pone.0308919.g001

Fig 2. VADER polarity scores for the subset of the data without IARM heritage keywords.

Polarity scores range from negative to positive and five categories are identified: extreme negative (<-0.75), negative (≥-0.75 and <0), neutral (0), positive (>0 and ≤0.75), and extreme positive (>0.75).

https://doi.org/10.1371/journal.pone.0308919.g002
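The five categories defined in the caption above translate directly into a small helper function; this is an illustrative reading of the stated thresholds, not the authors' code.

```python
def polarity_category(score: float) -> str:
    """Map a compound polarity score to the five categories used in Fig 2."""
    if score < -0.75:
        return "extreme negative"
    if score < 0:
        return "negative"   # >= -0.75 and < 0
    if score == 0:
        return "neutral"
    if score <= 0.75:
        return "positive"   # > 0 and <= 0.75
    return "extreme positive"

for s in (-0.9, -0.2, 0.0, 0.4, 0.8):
    print(f"{s:+.2f} -> {polarity_category(s)}")
```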

Fig 3. Changes in VADER polarity scores measured in six-month intervals.

Percentages reflect the proportion of extreme negative, negative, neutral, positive and extreme positive scores within each six-month period. Results are shown for the subset of the data without IARM heritage keywords.

https://doi.org/10.1371/journal.pone.0308919.g003
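The six-month aggregation shown in Fig 3 can be sketched with pandas as below; the timestamps, categories and column names are invented for illustration.

```python
import pandas as pd

# Invented example: one row per document, with timestamp and polarity category
df = pd.DataFrame({
    "created": pd.to_datetime([
        "2015-02-01", "2015-03-20", "2015-08-15", "2016-01-10", "2016-04-02",
    ]),
    "category": ["negative", "neutral", "neutral", "extreme negative", "positive"],
})

# Count documents per category within each six-month window...
counts = (
    df.groupby([pd.Grouper(key="created", freq="6MS"), "category"])
      .size()
      .unstack(fill_value=0)
)
# ...then convert counts to percentages within each window, as in Fig 3
percentages = counts.div(counts.sum(axis=1), axis=0) * 100
print(percentages.round(1))
```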

3.3. Follow-up investigation

Given the output of the precision-recall tests (Table 1), we relied on manual coding of Samples 1 and 2 to assess both sentiment polarity (positive or negative) and strength (mild or extreme), comparatively, for the subsets with and without references to the IARM past (Table 2). We observed that sentiment is mostly extreme, especially for the keywords subset (91% compared to 71% for no-keywords). Confirming VADER results, we find that sentiment polarity is relatively evenly split in texts without IARM heritage mentions (58% negative and 42% positive). However, sentiment polarity is mostly negative (84%) in the subset with keywords relating to the IARM past. Additionally, whereas the percentage of texts with extreme negative sentiment was only 49% in Sample 1, we find that extreme negative polarity was 92% in Sample 2.

Table 2. Manual coding of sentiment polarity and extremity undertaken for the precision-recall test on sampled documents from the dataset without keywords (Sample 1) and from the dataset with keywords relating to the IARM past (Sample 2).

https://doi.org/10.1371/journal.pone.0308919.t002

4. Discussion

After testing different methods, we identified VADER as the most accurate technique for detecting sentiment polarity and, to a lesser extent, sentiment extremity in texts that are not heritage-specific. However, none of the methods accurately captured the polarity and extremity of sentiment in politically-themed discourse on social media that includes references to the Iron Age, Roman and early medieval past. Dictionary-based methods inadequately assessed domain-specific meanings, while machine learning techniques fell short due to the unavailability of a training set of sufficient size. There are only a few corpora of texts that mention ancient periods when expressing political opinions in multi-sided discussions. Integrating multiple datasets from existing and new studies of political uses of the past online may provide a way forward for future research in this area.

Generative transformer models, and specifically large language models (LLMs), could be one way to enhance the analysis and capture more linguistic nuance. Some studies suggest that existing LLMs still face limitations in richer sentiment understanding, for example in detecting sarcasm [46]. Nevertheless, this is likely a promising avenue for further research as LLMs continue to improve. Despite current shortcomings in the ability to automatically capture sentiment polarity and extremity, our manual coding strongly suggests that IARM heritage is leveraged primarily within political social media discourse characterised by negative and extreme sentiment. This result is particularly important if one considers that both VADER scores and our manual analysis found the number of positively and negatively polarised posts, comments and replies to be somewhat evenly distributed in texts without keywords related to the ancient past. The subset with heritage keywords displays an overrepresentation of negative posts, which are likely to express anger, hostility and criticism.

This finding is crucial since, as previous research has shown, the past is often invoked to express political identities [3–11]. Because such references appear in online discourse that tends to be overtly negative and heightened, they are likely to lead to motivated reasoning and polarisation [21]. Our study demonstrates that research on political identities based on social media data allows the assessment of people’s historical consciousness. However, this research over-represents individuals who relate to their present realities with negative dispositions and extreme sentiments. These conclusions should be taken into account when designing future research on heritage and identity politics.

At the same time, the differences highlighted between texts with and without IARM heritage keywords might be less prominent in analyses of different kinds of public Facebook pages or of other social media platforms. In our study, VADER results demonstrated that texts that did not contain references to the ancient past were mostly neutral. This finding is in line with the tendency towards neutral valency (average 0.56) revealed in a study of 771,036 Facebook comments from the political campaign pages Stronger In, Vote Leave, and Leave.EU for the period between 14 April 2014 and 23 June 2016 [47]. However, research by Del Vicario and colleagues [19] registered a predominantly negative sentiment for posts, comments and replies about Brexit published on the Facebook pages of news outlets. Although the discrepancy could perhaps be attributed to the different techniques deployed, it may also suggest that negative sentiment about a topic is expressed more frequently on the Facebook pages of news media outlets than on themed pages dedicated to public debates.

Furthermore, the VADER analysis we completed shows a generally even split between positive and negative sentiment in texts that did not contain keywords. These results do not align with broadly comparable analyses undertaken for Twitter. Calisir and Brambilla used the AFINN lexicon-based sentiment analyser on a corpus of tweets in English that contained the keyword ‘Brexit’ and were posted between January 2016 and September 2019 [48]. They found that the number of tweets with negative sentiment was consistently higher (by an average of 13 percentage points) than the number with positive sentiment over the period considered [48]. These findings suggest that higher proportions of negative sentiment about a particular political event are expressed on Twitter than on Facebook. To confirm this hypothesis, testing must be performed using the same sentiment analysis method to compare datasets covering a broader range of political issues.

5. Conclusion

This study demonstrates that posts that reference the ancient past in political discourse on social media are significantly more negative and more extremely polarised than those that do not contain these references. We therefore conclude that heritage keywords in politically-themed debates on social media are likely to signal the presence of more polarised and, potentially, extremist views. Furthermore, we show that social media research on political uses of the past is likely to over-represent people with very strong opinions compared to individuals whose views are more moderate.

References

  1. Anderson B. Imagined Communities: Reflections on the Origin and Spread of Nationalism. London and New York: Verso; 1991.
  2. Hingley R. Roman Officers and English Gentlemen: The Imperial Origins of Roman Archaeology. London and New York: Routledge; 2000.
  3. De Cesari C, Kaya A, editors. European Memory in Populism: Representations of Self and Other. Abingdon and New York: Routledge; 2020.
  4. Sebastiani A. Ancient Rome and the Modern Italian State: Ideological Placemaking, Archaeology, and Architecture, 1870–1945. Cambridge: Cambridge University Press; 2023.
  5. Fuchs C. Fascism 2.0: Twitter Users’ Social Media Memories of Hitler on His 127th Birthday. Fascism. 2017;6(2):228–63.
  6. Richardson-Little N, Merrill S. Who Is the Volk? PEGIDA and the Contested Memory of 1989 on Social Media. In: Merrill S, Keightley E, Daphi P, editors. Social Movements, Cultural Memory and Digital Media: Mobilising Mediated Remembrance. Cham: Springer International Publishing; 2020, pp. 59–84.
  7. Esteve-Del-Valle M, Costa López J. Reconquest 2.0: The Spanish Far Right and the Mobilization of Historical Memory during the 2019 Elections. European Politics and Society. 2023;24(4):494–517.
  8. Farrell-Banks D. Affect and Belonging in Political Uses of the Past. Abingdon and New York: Routledge; 2023.
  9. Richardson-Little N, Merrill S, Arlaud L. Far-Right Anniversary Politics and Social Media: The Alternative for Germany’s Contestation of the East German Past on Twitter. Memory Studies. 2022;15(6):1360–77.
  10. Bonacchi C, Altaweel M, Krzyzanska M. The heritage of Brexit: Roles of the past in the construction of political identities through social media. Journal of Social Archaeology. 2018;18(2):174–192.
  11. Bonacchi C. Heritage and Nationalism: Understanding populism through big data. London: UCL Press; 2022. https://doi.org/10.14324/111.9781787358010
  12. Rüsen J. Tradition: a principle of historical sense-generation and its logic and effect in historical culture. History and Theory. 2012;51(4):45–59.
  13. Farrell-Banks D. 1215 in 280 characters: Talking about Magna Carta on Twitter. In: Galani A, Mason R, Arrigoni G, editors. European heritage, dialogue and digital practices. New York and London: Routledge; 2020, pp. 86–106.
  14. Van Den Hemel E. Social Media and Affective Publics: Populist Passion for Religious Roots. In: De Cesari C, Kaya A, editors. European Memory in Populism. London and New York: Routledge; 2020.
  15. Tanesini A. Affective Polarisation and Emotional Distortions on Social Media. Royal Institute of Philosophy Supplement. 2022;92:87–109.
  16. Ajala I, Feroze S, El Barachi M, Oroumchian F, Mathew S, Yasin R, et al. Combining Artificial Intelligence and Expert Content Analysis to Explore Radical Views on Twitter: Case Study on Far-Right Discourse. Journal of Cleaner Production. 2022;362:132263.
  17. Weismueller J, Harrigan P, Coussement K, Tessitore T. What Makes People Share Political Content on Social Media? The Role of Emotion, Authority and Ideology. Computers in Human Behavior. 2022;129:107150.
  18. Schöne JP, Garcia D, Parkinson B, Goldenberg A. Negative Expressions Are Shared More on Twitter for Public Figures than for Ordinary Users. PNAS Nexus. 2023;2(7):pgad219. pmid:37457891
  19. Del Vicario M, Zollo F, Caldarelli G, Scala A, Quattrociocchi W. Mapping Social Dynamics on Facebook: The Brexit Debate. Social Networks. 2017;50:6–16.
  20. Asker D, Dinas E. Thinking Fast and Furious: Emotional Intensity and Opinion Polarization in Online Media. Public Opinion Quarterly. 2019;83(3):487–509.
  21. Priniski JH, Solanki P, Horne Z. A Bayesian decision-theoretic framework for studying motivated reasoning. PsyArXiv. 2022.
  22. Ancient Identities. Ancient Identities: Keywords; 2022 [accessed 2024 July 6]. Available from: https://docs.google.com/spreadsheets/d/e/2PACX-1vSQB3A8Bfa5CtDg6Weh35gVLVbYOAwrIG9HEDYjMMri5xr_d3fEvvCa34FYGUJEMnwFivO6i3tXcn96/pubhtml?gid=0&single=true.
  23. Liu B. Sentiment Analysis: Mining Opinions, Sentiments, and Emotions. New York: Cambridge University Press; 2020.
  24. Abirami AM, Gayathri V. A survey on sentiment analysis methods and approach. 2016 Eighth International Conference on Advanced Computing (ICoAC). Chennai, India: IEEE; 2017, pp. 72–76. https://doi.org/10.1109/ICoAC.2017.7951748
  25. Torregrosa J, Bello-Orgaz G, Martínez-Cámara E, Del Ser J, Camacho D. A Survey on Extremism Analysis Using Natural Language Processing: Definitions, Literature Review, Trends and Challenges. Ambient Intell Human Comput. 2023;14:9869–9905. pmid:35039755
  26. Almatarneh S, Gamallo P. A lexicon based method to search for extreme opinions. PLOS ONE. 2018;13(5):e0197816. pmid:29799867
  27. Heidenreich T, Eberl JM, Lind F, Boomgaarden H. Political Migration Discourses on Social Media: A Comparative Perspective on Visibility and Sentiment across Political Facebook Accounts in Europe. Journal of Ethnic and Migration Studies. 2020;46(7):1261–80.
  28. Jindal K, Aron R. A systematic study of sentiment analysis for social media data. Materials Today: Proceedings. 2021;S2214785321000705.
  29. Matalon Y, Magdaci O, Almozlino A, Yamin D. Using sentiment analysis to predict opinion inversion in Tweets of political communication. Sci Rep. 2021;11:7250. pmid:33790339
  30. Sofat C, Bansal D. RadScore: An Automated Technique to Measure Radicalness Score of Online Social Media Users. Cybernetics and Systems. 2022:1–26.
  31. Jamil ML, Pais S, Cordeiro J, et al. Detection of extreme sentiments on social networks with BERT. Soc Netw Anal Min. 2022;12(1):55.
  32. Verma B, Thakur RS. Sentiment Analysis Using Lexicon and Machine Learning-Based Approaches: A Survey. In: Tiwari B, Tiwari V, Das KC, Mishra DK, Bansal JC, editors. Proceedings of International Conference on Recent Advancement on Computer and Communication. Singapore: Springer Singapore; 2018, pp. 441–447. https://doi.org/10.1007/978-981-10-8198-9_46
  33. Mujahid M, Lee E, Rustam F, Washington PB, Ullah S, Reshi AA, et al. Sentiment Analysis and Topic Modeling on Tweets about Online Education during COVID-19. Applied Sciences. 2021;11:8438.
  34. Subramanian RR, Akshith N, Murthy GN, Vikas M, Amara S, Balaji K. A Survey on Sentiment Analysis. 2021 11th International Conference on Cloud Computing, Data Science & Engineering (Confluence), Noida, India; 2021, pp. 70–75. https://doi.org/10.1109/Confluence51648.2021.9377136
  35. van Atteveldt W, van der Velden MACG, Boukes M. The Validity of Sentiment Analysis: Comparing Manual Annotation, Crowd-Coding, Dictionary Approaches, and Machine Learning Algorithms. Communication Methods and Measures. 2021;15:121–140.
  36. Nandwani P, Verma R. A review on sentiment analysis and emotion detection from text. Soc Netw Anal Min. 2021;11:81. pmid:34484462
  37. Yang P, Chen Y. A survey on sentiment analysis by using machine learning methods. 2017 IEEE 2nd Information Technology, Networking, Electronic and Automation Control Conference (ITNEC). Chengdu: IEEE; 2017, pp. 117–121. https://doi.org/10.1109/ITNEC.2017.8284920
  38. Bonacchi C, Witte J, Altaweel M. Measuring sentiment in political uses of the past. 2024 [accessed 2024 August 11]. Available from: https://github.com/maltaweel/Extreme_Sentiment_Ancient_Past.
  39. Hutto CJ, Gilbert EE. VADER: A Parsimonious Rule-based Model for Sentiment Analysis of Social Media Text. Proceedings of the International AAAI Conference on Web and Social Media. 2014;8(1):216–25.
  40. VADER. VADER Sentiment Analysis. 2022 [accessed 2024 July 6]. Available from: https://github.com/cjhutto/vaderSentiment.
  41. TextBlob. TextBlob: Simplified Text Processing. 2022 [accessed 2024 July 6]. Available from: https://textblob.readthedocs.io/en/dev/
  42. Mas Diyasa IGS, Marini Mandenni NMI, Fachrurrozi MI, Pradika SI, Nur Manab KR, Sasmita NR. Twitter Sentiment Analysis as an Evaluation and Service Base On Python Textblob. IOP Conf Ser: Mater Sci Eng. 2021;1125:012034.
  43. Akbik A, Bergmann T, Blythe D, Rasul K, Schweter S, Vollgraf R. FLAIR: An Easy-to-Use Framework for State-of-the-Art NLP. Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations). Minneapolis, Minnesota: Association for Computational Linguistics; 2019, pp. 54–59. https://doi.org/10.18653/v1/N19-4010
  44. Flair. Flair NLP Library. 2022 [accessed 2024 July 6]. Available from: https://github.com/flairNLP/flair.
  45. Carterette B. Precision and Recall. In: Liu L, Özsu MT, editors. Encyclopaedia of Database Systems. Boston, MA: Springer US; 2009, pp. 2126–2127. https://doi.org/10.1007/978-0-387-39940-9_5050
  46. Zhang W, Deng Y, Liu B, Pan SJ, Bing L. Sentiment Analysis in the Era of Large Language Models: A Reality Check. arXiv. 2023.
  47. Bossetta M, Segesten AD, Zimmerman C, Bonacci D. Shouting at the Wall: Does Negativity Drive Ideological Cross-Posting in Brexit Facebook Comments? In: Proceedings of the 9th International Conference on Social Media and Society. Copenhagen, Denmark: ACM; 2018, pp. 246–50. https://doi.org/10.1145/3217804.3217922
  48. Calisir E, Brambilla M. The Long-Running Debate about Brexit on Social Media. In: Proceedings of the International AAAI Conference on Web and Social Media. 2020;14(1):848–52. https://doi.org/10.1609/icwsm.v14i1.7349