
Mapping automatic social media information disorder. The role of bots and AI in spreading misleading information in society


This paper presents an analysis of information disorder on social media platforms. The study employed methods such as Natural Language Processing, Topic Modeling, and Knowledge Graph building to gain new insights into the phenomenon of fake news and its impact on critical thinking and knowledge management. The analysis focused on four research questions: 1) the distribution of misinformation, disinformation, and malinformation across different platforms; 2) recurring themes in fake news and their visibility; 3) the role of artificial intelligence as an authoritative and/or spreader agent; and 4) strategies for combating information disorder. The role of AI was highlighted, both as a tool for fact-checking and building truthiness-identification bots, and as a potential amplifier of false narratives. Strategies proposed for combating information disorder include improving digital literacy skills and promoting critical thinking among social media users.


Social networking platforms such as Facebook, Twitter, and Instagram expose their users to an unprecedented amount of information, where purchase suggestions from recommendation systems, information and opinions from other users, and breaking news coexist; this is rather worrying considering the growing importance of social media networks for millions of people worldwide [1–3]. The rise of social media as a source of news and information has been marked by several concurrent phenomena: firstly, the convenience and accessibility of such media facilitate access to news and information from a wide range of sources, generally unverified [4]; the pervasiveness and ubiquity associated with the mode of use (e.g., mobile phone applications) mean that one does not have to wait for the next edition of a newspaper or television program [5]; and the underlying social nature of such applications favors the rapid, immediate, and therefore uncontrolled dissemination of content among one’s contacts (both close and acquaintances) and, in a chain, among contacts’ contacts [6]. The well-established phenomenon of homophily (i.e., the tendency of similar individuals to associate) creates online communities that are strengthened by shared interests, values, and worldviews, amplifying the pervasiveness of ideas that can thus find fertile ground (e.g., viral ideas and memes) [7,8]. While the spread of news and information via social networks has, in some cases, made a significant positive contribution (e.g., the Arab Spring, Black Lives Matter, the Iranian women’s demands for freedom, and similar civil rights uprisings) [6,9–11], in many other cases there are considerable concerns about the quality and reliability of the information shared on these platforms [12,13].
Social media platforms have been widely criticized for their role in spreading misinformation, fake news, and disinformation, which can have a significant negative impact on individuals, communities, and societies [14,15], as well as on the platforms themselves [16]. Although several review works have considered the importance of social media in relation to various phenomena involving the dissemination of untruthful information, to the best of our knowledge it remains unclear how these phenomena are distributed over the different existing platforms [17–20]. As social media continue to evolve and play an increasingly central role in the lives of millions of people in an increasingly globalized world, it is important to create a snapshot of these developments. To avoid confusion, we first need to clarify the differences between the various Information Disorders (ID), which may appear very similar at first glance (Fig 1) [21].

“Misinformation”: incorrect information disseminated without intent to deceive or harm a third party; “Disinformation”: intentional dissemination of manipulated and/or false information with the specific intent to harm and manipulate someone; “Malinformation”: real information ‐ but presented in a distorted manner ‐ used for the purpose of harming or manipulating the judgement of others [22,23].

It is essential to identify the different actors behind the dissemination of false or harmful information, their motivations, and the methods they use [24,25]. The various social media platforms have unique characteristics that make them more susceptible to misinformation, disinformation and malinformation and this should be taken into account when designing interventions to mitigate their spread.

Moreover, the significant advancement of Artificial Intelligence has multiplied the complexity and multifaceted nature of the problem of source verifiability by several orders of magnitude [26–28]. From an ontological perspective, deception is a fundamental characteristic associated with human intelligence. For this reason, given the inability to define exactly what intelligence is, the Turing test was created to evaluate whether a machine can be considered intelligent, and it is based on verisimilar interaction between humans and computers [29,30]. The test only verifies whether the machine is able to dissimulate in a credible and convincing manner, as a human would [31]. In this sense, deception can be considered the "original sin" of A.I. It is humans who project humanity and intelligence onto machines that appear to possess abilities similar to ours, stimulating authentic empathy and, sometimes, authority. For example, it is important to carefully consider the ease and speed with which a cyber-sociotechnical agent, like a conversational bot, can generate seemingly valid content [32]. A.I. is generating new opportunities to create or manipulate texts, images, and audio or video content [33]. Moreover, A.I. systems developed and deployed by online platforms to enhance their users’ engagement significantly contribute to the effective and rapid dissemination of disinformation online [34]. Finally, specific bots connected to social network platforms might be designed with the aim of acting as fake-news super-spreaders [35].

In such a world, where information can be easily accessed, evaluated, and disseminated on an unprecedented scale, individuals must therefore possess the skills needed to assess the credibility of sources and the content they encounter [36]. Critical thinking plays a crucial role in the fight against disinformation, malinformation, and misinformation on social media platforms [37]. Critical thinking enables the identification of logical fallacies, the evaluation of evidence, and the assessment of the validity and reliability of claims [38]. By cultivating critical thinking skills, individuals can more effectively identify and avoid false, misleading, or manipulative information on social media platforms, reducing the risk of falling prey to disinformation, malinformation, and misinformation [39–42].

Considering the above, this review aims to identify new insights into the phenomenon of fake news on social networking platforms, a phenomenon that raises ethical and cultural questions about the need for interdisciplinary reflection. Specifically, it addresses:

  1. How are misinformation, disinformation, and malinformation distributed across different social media platforms?
  2. What are the recurring themes in fake news? On which platforms do they find greater visibility?
  3. How does artificial intelligence relate to the issue of fake news? As an authoritative agent or as a spreader agent on social networks?
  4. What is the role of Critical Thinking as identified in the scientific literature related to the investigated problem?

The remainder of the article is developed as follows: the next section outlines the methodologies used; Section 3 sets out the findings, which are discussed in Section 4; Section 4.4 draws conclusions and outlines the limitations and future developments of this work.


The research team (consisting of: 2 psychologists experienced in Critical Thinking Assessment; 2 psychologists experienced in the construction of cognitive-behavioural models; 2 engineers versed in computational document management in complex socio-technical systems; 2 engineers experienced in network analysis; and 1 engineer experienced in social network platforms) sought to take an agnostic approach to the distribution of ID and related topics on social media, as detailed in the following paragraphs. In more detail, after a preliminary focus group, the team developed the general idea that relationships between ID themes and social network platforms could be identified as categories emerging from relevant documents drawn from the existing literature [43]. The team’s multidisciplinarity proved essential in the retrieval and screening stages, as well as during validation. The data analysis was performed by the engineers, while the entire team worked on the interpretation of results.

The core concept behind this proposition is that scholarly articles pertaining to specific platforms ought to encompass comprehensive discussions on relevant ID subjects as well. The less stringent the search query, the larger and more statistically valid the documentary sample that will form the basis for concept extraction. Alongside the latter consideration, the team also attempted to design a practicable methodology workflow (Fig 2).

Fig 2. The PRISMA workflow.

Left panel shows the PRISMA part of the literature review. Right panel reports the processing part of the workflow.

Primary sources selection and extraction of articles

The research team identified the Scopus scientific database as a trustworthy and adequate source of articles to answer the research questions in this enquiry. Indeed, Scopus covers over 76 million records of scientific articles published by over 40,000 publishers worldwide, although it is important to note that Scopus does not cover all existing scientific journals, but only a selection of those considered to be of high quality and scholarly relevance. Some estimates suggest that Scopus coverage is over 86%. While aware that Scopus does not encompass the entirety of existing scientific journals, the decision to rely exclusively on this database was driven by a thorough evaluation of its coverage and representativeness in our specific research field. Most seminal works and leading studies within our area of interest are included in Scopus, which indicates that the percentage of potentially omitted research is significantly low [44,45]. Consequently, we maintain that despite this limitation, the robustness and validity of our findings remain intact, accurately reflecting current trends and significant discoveries in the field of study.

In our literature review methodology, we prioritized both the relevance and reliability of the sources through meticulous adherence to the PRISMA protocol and the exclusive use of the Scopus database. By employing the PRISMA protocol, we ensured a systematic, transparent, and rigorous approach to selecting studies directly related to our research objectives, thereby upholding the relevance criterion. Simultaneously, the reliance on Scopus guaranteed the inclusion of only peer-reviewed publications, affirming the reliability and scholarly merit of our sources. This dual emphasis on the PRISMA protocol and Scopus’s peer review process underscored our commitment to basing our review on literature that is both directly pertinent to our study and of verified quality, thereby reinforcing the credibility and robustness of our findings.

The search query submitted to the Scopus engine on 10 May 2023 included original articles and journal reviews in English, with no date restrictions (i.e., from the very first publication about the topic queried to 10 May 2023). In more detail, the query sought to identify in texts all possible declinations of the terms ‘misinformation’, ‘disinformation’, ‘malinformation’, and ‘fake news’ no more than five words away from the name of one of today’s most relevant social networks. The query is reported below:

TITLE-ABS-KEY((disinformation OR misinformation OR "fake news" OR malinformation) AND (("social network" OR media OR platform) W/5 (facebook OR twitter OR instagram OR whatsapp OR youtube OR "TikTok" OR linkedin OR telegram OR wechat OR douyin OR snapchat OR kuaishou OR vkontakte OR "Sina Weibo" OR odnoklassniki OR livejournal OR "Moi Mir")) AND NOT (tv OR newspaper OR radio)) AND (LIMIT-TO(DOCTYPE,"ar") OR LIMIT-TO(DOCTYPE,"re")) AND (LIMIT-TO(LANGUAGE,"English"))

The query returned 496 documents in total. Fig 3 depicts the evolution over time of scholarly publications attempting to establish the phenomenon of ID through the different social networking platforms.

Fig 3. Evolution over time of academic production.

Misinformation, disinformation, and fake-news are investigated collectively.

It is worth noting how scholarly production accelerates greatly after 2017, even though the major social networking platforms were launched more than 10 years earlier. The steep increase in production, a consequence of increased interest in the subject, might be related to a combination of cultural and social changes (e.g., a change in consumption by news users, who abandoned traditional media such as newspapers, magazines, radio, and television broadcasts) and epochal historical events (e.g., the Brexit vote in 2016, Trump’s election, the pandemic outbreak in 2019, as well as terrorist attacks).

Drawing general interpretations at this stage of the analysis is not meaningful, given the potentially biased sample. In this regard, Fig 4 points out that original articles and journal reviews were the primary sources chosen, excluding conference papers ‐ notoriously shorter-lived but more capable of capturing the immediacy of events ‐ and books ‐ generally texts of deeper and more thoughtful reflection.

Fig 4. Stratification of the documents collected.

Amount of documents analyzed per type (left panel). 94.8% Original Articles vs. 5.2% Reviews; Percentage of subject area to which the articles pertain (right panel).

While caution is necessary, the distribution of interest across the subject areas is shown in Fig 4 (right panel). Analogously, the distribution of documents per subject area (Fig 5) clearly captures the importance of the political and health issues that have been pressing in recent years for the global landscape, which is confirmed by the results of the analysis (cfr. §3.2).

Fig 5. Documents distribution per subject area.

As highlighted by the Pareto diagram, ID topics are mostly (∼70%) addressed by four areas: Social Science (196 papers), Medicine (169), Computer Science (162), and Engineering (60).

Retrieval of actually available documents

The pool of potential documents identified in the previous stage was downloaded: 352 articles out of the 496 were retrieved. This loss is due to the fact that only open access or subscription-covered documents could be accessed. After removing duplicates, the actual number dropped further to 287, accounting for 58% of the original corpus; in other words, the final loss stands at 42%.

File conversion from pdf to txt format

The final graphic layout adopted by individual journals in PDF format is ideal for human reading, but completely unsuitable for computational linguistics or automatic text-interpretation software. An R library was used for this purpose, as it is one of the most efficient tools for converting PDF files to txt files [46]. After this stage, a document matrix was created, where each row represents a document within the corpus and the columns carry key information, e.g., a document identifier code, the name of the corresponding txt file, authors, title, and, most importantly, a column containing the whole paper text, allowing for subsequent natural language processing (NLP) tasks.

File preprocessing

The text parsed from the PDFs in the previously described step must be considered partial, as various residual PDF structures remain, such as headers, page numbers, XML definitions, figure references, notes, cross-references, and graphic frames. These translation ‘artefacts’ are not the only textual elements to be removed to enable the subsequent natural language analysis steps; rather, standardized text processing is required.


The text is converted to lower-case for uniformity, promoting comparability and consistency across the papers being analyzed. When the text is presented in a consistent lower-case format, the algorithm can focus solely on analyzing the content and semantic patterns without being influenced or misled by variations in capitalization. This standardized approach simplifies the computational and linguistic processing involved, enabling more efficient tokenization (cfr. § 2.4.4), word normalization, and language modeling, reducing the complexity of these tasks and enhancing performance and accuracy.

Stop words removal.

As is well known, a text is not a random sequence of words, which is why in every language there are words that are much more frequent than others [47,48]. Such words are mostly connectors and articles: terms that serve the correct syntactic and morphological construction of the sentence but do not contribute to its semantic content. Other words that also bring no semantics to the text are those belonging to the jargon of scientific journals, such as “authors”, “methods”, “elsevier”, “springer”, “results”, “figure”, “table”, etc. All of these combined constitute the set of stop words, that is, words to be ignored when processing text.

Removal of numeric and punctuation.

Like stop words, punctuation marks and numbers are also uninformative when it comes to discerning content themes in texts.
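The normalization steps above (lower-casing, stop-word removal, stripping digits and punctuation) can be sketched in a few lines of Python; the stop-word list here is a small illustrative subset, not the one used in the study:

```python
import re

# Illustrative stop-word subset; the study's list also includes journal
# jargon such as "authors", "methods", "results", "figure", "table".
STOP_WORDS = {"the", "a", "an", "and", "or", "of", "in", "is", "are",
              "authors", "methods", "results", "figure", "table"}

def preprocess(text: str) -> list[str]:
    """Lower-case, strip punctuation and digits, drop stop words."""
    text = text.lower()                    # uniform casing
    text = re.sub(r"[^a-z\s]", " ", text)  # remove digits and punctuation
    return [w for w in text.split() if w not in STOP_WORDS]

print(preprocess("The Authors present 3 Results in Table 2."))  # ['present']
```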


Once irrelevant information has been eliminated, tokenization can begin. It refers to the process of breaking down a text into individual tokens or words, which can then be analyzed and processed by a computer program. Tokenization involves identifying the boundaries between words in a text and assigning each word its own unique identifier, known as a token [49].


Lemmatization is the process of reducing a word to its base form (i.e., root). It involves taking away any inflectional suffix or prefix from a word to obtain its simplest and most basic form, making it easier to compare and analyze them across different documents [49]. At this point, the document matrix becomes a document × term matrix, where each row represents a document, and each column a unique word in the entire corpus. The cells of the matrix contain the frequency or presence/absence information of each word in each document. That is the starting point for representing the corpus and its documents as numeric vectors and, therefore, allowing several machine learning techniques, e.g., clustering.
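The tokenization and lemmatization steps, and the resulting document × term matrix, can be sketched as follows on a toy two-document corpus; the suffix-stripping lemmatizer is a naive stand-in purely for illustration (the study relied on a proper NLP lemmatizer):

```python
from collections import Counter

def lemmatize(word: str) -> str:
    # Naive suffix stripping, for illustration only.
    for suffix in ("ing", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

docs = ["bots spread fake news", "spreading news on platforms"]

# Tokenize and lemmatize each document, then build the document x term
# matrix: one row per document, one column per unique lemma in the corpus.
tokenized = [[lemmatize(w) for w in d.split()] for d in docs]
vocab = sorted({w for doc in tokenized for w in doc})
dtm = [[Counter(doc)[term] for term in vocab] for doc in tokenized]

print(vocab)  # ['bot', 'fake', 'new', 'on', 'platform', 'spread']
print(dtm)    # [[1, 1, 1, 0, 0, 1], [0, 0, 1, 1, 1, 1]]
```

Each row of `dtm` is the numeric vector representing one document, the starting point for the clustering techniques described next.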


Document clustering focuses on identifying groups of related papers with similar content. This process relies on feature extraction techniques to represent each document’s content in a numerical vector space model. The resulting vectors capture the semantic meaning of each document and allow algorithms to calculate similarities between them. The idea is that, once grouped together, these documents can reveal important information about the overall structure and organization of the documental corpus obtained during the retrieval stage (§ 2.2). The clustering stage helped to focus on key areas of research more efficiently than sifting through every single paper individually. Fig 6 shows the implementation of the elbow method for the corpus retrieved [50,51]. The line plots the Sum of Squared Errors (SSE) versus the number of clusters.

Fig 6. Line plot implementing the elbow method for our corpus.

Sum of Squared Errors (SSE) vs Number of clusters. The absence of any clear elbow means that further explorative evaluations are needed.

SSE calculates the sum of the squared Euclidean distances between each data point and its centroid within a cluster, quantifying the compactness of the clusters, with lower values indicating tighter and more well-defined clusters. In the elbow method, the SSE is plotted against the number of clusters k, and the “elbow” point represents the optimal k value (adding more clusters does not significantly decrease the SSE); here, however, the line does not show any clear elbow. Therefore, while the elbow method provides a useful heuristic for determining the number of clusters, it should not be relied upon exclusively: other factors, such as domain knowledge, interpretability, and practical considerations, should be taken into account when deciding on the final number of clusters. In the present case, through repeated analyses of the significance of the emerged topics, using a trial-and-error approach, we managed to identify 6 clusters/themes: 5 meaningful ones and a remaining one accounting for the “others” category. These 6 clusters represent the potential topics to be identified in the following topic modeling stage.
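The SSE underlying the elbow plot can be sketched as follows; the points, centroids, and cluster assignment are toy values, not data from the corpus:

```python
def sse(points, centroids, assignment):
    """Sum of squared Euclidean distances of each point to its centroid."""
    total = 0.0
    for p, c_idx in zip(points, assignment):
        c = centroids[c_idx]
        total += sum((pi - ci) ** 2 for pi, ci in zip(p, c))
    return total

# Toy 2-D document vectors grouped into two clusters.
points = [(0.0, 0.0), (0.0, 2.0), (10.0, 10.0)]
centroids = [(0.0, 1.0), (10.0, 10.0)]
assignment = [0, 0, 1]

print(sse(points, centroids, assignment))  # 2.0
```

In the elbow method this quantity would be recomputed for each candidate k and plotted, as in Fig 6.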

Topic modeling

In this stage the research team implemented Latent Dirichlet Allocation (LDA), a generative probabilistic model that assumes each document in the corpus is a mixture of a few latent topics, each topic being characterized by a distribution of words. The number of latent topics must be known in advance to apply LDA; hence the preceding clustering phase. An algorithm then iteratively assigns words to topics and topics to documents based on statistical distributions, aiming to find the optimal topic-word assignments that best explain the observed numeric data. This stage is the so-called “model training”, after which the results can be analyzed. This includes examining the topic-word distributions, which show the probability of each word belonging to a particular topic (i.e., cluster). Fig 7 gives an informative insight into the topic distribution.

Fig 7. LDA algorithm results.

The topics projection on the most informative subspace (ℝ2) of the space derived from the Principal Components; the size of the circles is proportional to the marginal topic distribution (i.e., the number of words/terms covered by the topic).

The whole information of the corpus has been translated into numeric format, and the entire document-word matrix defines a vector space of cardinality as large as the rank of the matrix. The eigenvectors of such a matrix can be thought of as the components along which the information principally distributes. By choosing a subspace that is as informative as possible, the conceptual distance between the topics can be represented visually.
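The projection described above can be sketched as follows, assuming a small toy topic × term matrix (not the paper's data): center the data, eigendecompose the covariance, and keep the two most informative components:

```python
import numpy as np

# Illustrative topic x term matrix (rows: topics, columns: term weights).
X = np.array([[2.0, 0.0, 1.0],
              [1.8, 0.2, 0.9],
              [0.1, 3.0, 0.2],
              [0.0, 2.9, 0.1]])

Xc = X - X.mean(axis=0)                      # center the data
cov = np.cov(Xc, rowvar=False)               # covariance across terms
vals, vecs = np.linalg.eigh(cov)             # eigen-decomposition (ascending)
top2 = vecs[:, np.argsort(vals)[::-1][:2]]   # two most informative components
coords = Xc @ top2                           # 2-D coordinates for plotting

print(coords.shape)  # (4, 2)
```

Plotting `coords` with marker sizes proportional to the marginal topic distribution would reproduce the kind of view shown in Fig 7.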

LDA assigns a probability distribution of topics to each document in the corpus. This allocation allows researchers to understand the primary topics present in individual documents and analyze the document-topic relationships. To interpret the topics, we have analyzed the most probable words associated with each topic and then we were able to infer the underlying theme or meaning of each topic. The final identification of these meanings took place during a face-to-face discussion in a focus group of all the researchers, which ended when consensus was reached (Fig 8).

Fig 8. Bag-of-words relative to the six clusters identified.

Note a certain degree of terms and topics overlapping, reflecting the unclear behavior of the line plotted for the elbow method.

In this way it becomes possible to uncover hidden thematic structures in the next steps, and in the end understand the content of the retrieved corpus of local text files in a more systematic and objective manner.

Topics’ meaning elicitation throughout Obsidian software

To structure the process of topic identification and unlock other potential information-investigation techniques, the corpus document files were imported into the Obsidian software, a powerful markdown interpreter. The documents, now in .md format, are treated within the software as "notes", which in the ordinary intent of the software constitute the atomic elements of a knowledge management system. In Obsidian, each note is identified by a name (in our use case derived from the filename), and relationships between notes are set by using tags or direct links. Preceding a word with the hash symbol # turns it into a tag, while the name of a destination note within double square brackets in the body of the originating note defines a direct link. Once these links are created, it is possible to represent the collection of notes as a network or, more appropriately, as a knowledge graph (Fig 9).
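A hypothetical note illustrating the tag and wiki-link syntax described above (the filename, tags, and note names are invented for illustration):

```markdown
Tags: #facebook #twitter

This paper analyses rumor cascades on [[Facebook]] and compares
them with the diffusion patterns observed on [[Twitter]].
```

Here `#facebook` and `#twitter` become tags, while `[[Facebook]]` and `[[Twitter]]` create direct links to the corresponding platform notes, producing the edges of the knowledge graph.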

Fig 9. The documental corpus in an initial stage of the topic elicitation through the Obsidian software.

Note the cluster around tag #facebook formed by red nodes (papers containing the word “facebook”) and light-blue nodes (papers containing the word "twitter”).

This feature, along with the numerous free plugins developed by a thriving and active user community, makes this software tool versatile and particularly powerful for structuring and retrieving information, as well as for eliciting knowledge. This tool has already been tested in several academic research projects, but this is the first time it has been used for the elicitation of predefined topics in a corpus of documents. In this case, indeed, the topics are defined by the research questions presented in the introductory section. Therefore, we are interested in knowing:

  • Which documents cite the different social networking platforms or, from the reverse perspective, how a specific social network (e.g., Twitter) gathers certain documents rather than others.
  • Given a bag-of-words (the words that cluster around a topic), which documents contribute to its saturation. This reasoning translates the fact that the topic emerges from the recurrence of semantically related themes as they are distributed within the corpus of documents. In this context, it is possible to define the "topic" note as the node in the graph that acts as a broker between the words in the bag-of-words and the documents reflecting it. Essentially, the topic note is the note that points to all those documents containing the terms of the corresponding bag-of-words (Fig 10).
  • The previous point makes it possible to directly define the literature matrix (the document-feature matrix of the literature analysis) based on the graph analysis. In fact, the adjacency matrix of this new graph, in which both documents and "topic" nodes appear, corresponds precisely to the literature matrix of the investigated corpus.
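The construction of the literature matrix from graph links can be sketched as follows, with hypothetical document and topic node names (the cell values count the links between a document node and a topic node, i.e., the bi-adjacency block of the graph):

```python
# Hypothetical document -> topic links extracted from the knowledge graph.
links = [("doc1", "Politics"), ("doc1", "Politics"), ("doc2", "Health")]

docs = sorted({d for d, _ in links})
topics = sorted({t for _, t in links})

# Literature matrix: rows = documents, columns = topic nodes,
# cells = number of links connecting them.
matrix = [[sum(1 for d, t in links if d == doc and t == topic)
           for topic in topics] for doc in docs]

print(matrix)  # [[0, 2], [1, 0]]
```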
Fig 10. The birth of topic 4.

The newly born Topic 4 emerges during the initial phase of topic elicitation in the knowledge graph.


Finally, the documents are read and analyzed, and the literature matrix is validated.


After the screening stage, the corpus includes 283 documents (listed in Table 1) that have been converted into Obsidian notes. That allowed for both knowledge graph construction and the implementation of advanced text search tools (e.g., regular expressions pattern matching is natively implemented in Obsidian) over the whole corpus.

Documents versus social networks

The set of social media platforms included in the initial query on the Scopus database was deemed sufficiently complete in terms of user adoption. We assumed that other scholars would address a certain topic and the social network platforms relevant to that topic in the same article. Moreover, we also assumed that the frequency of usage of a social media name (e.g., Facebook) within a document’s text is a meaningful proxy measure of the relevance of the corresponding social media platform for that particular paper. Starting from these considerations, fifteen nodes were added to the Obsidian vault, one for each social media platform present in the corpus, since neither “Moi Mir” nor “Kuaishou” is present. The relationships between documents and social media can be found in the graph topology, as they become the ties between document nodes and social media nodes, in principle allowing evaluation of the importance of each social medium throughout the corpus. In Fig 11, the numbers in the cells represent the number of links connecting a document (row) with a social media platform name (column).

Fig 11. Excerpt of the documents-social networks matrix.

The matrix (size 283×15) reports the strength of relationships between documents and social media platforms.

For each social media platform, the greyscale intensity gives visual feedback (the darker, the higher) on the number of outgoing links from the document to it. The total number of incoming links for each social network (Representativity) can be computed by summing each column separately, and it represents how much the social network is addressed by the entire corpus (Twitter is the most represented platform in the corpus; the rightmost, Odnoklassniki, is the least represented). The documents have also been sorted by the Ranking score, that is, the sum of the outgoing links scaled by the Representativity of the social media, reported in the greenish rightmost column.
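The Representativity and Ranking computations can be sketched on a toy 3 × 3 link-count matrix; "scaled by" is read here as a Representativity-weighted sum, which is one plausible interpretation of the paper's wording:

```python
# Toy documents x platforms link-count matrix (rows: documents).
matrix = [[4, 1, 0],
          [2, 0, 3],
          [0, 1, 1]]

# Representativity: column sums = total incoming links per platform.
representativity = [sum(row[j] for row in matrix) for j in range(3)]

# Ranking score per document: outgoing links weighted by Representativity
# (an assumed reading of "scaled by"; division would be the alternative).
ranking = [sum(row[j] * representativity[j] for j in range(3))
           for row in matrix]

print(representativity)  # [6, 2, 4]
print(ranking)           # [26, 24, 6]
```

The same scheme applies unchanged to the documents-topics matrix of the next subsection, with Relevance in place of Representativity.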

Documents versus topics

The nodes corresponding to the topics have been constructed associatively from bag-of-words and clusters obtained in the previous phases. It is necessary to first detect the corresponding tags and then observe how they are highlighted in the various notes/documents. Clusters have been formed and then identified as topics according to the customary process of topic modeling. The identified topics are: Politics (addressing political events and issues); Health and Science (mainly regarding Covid-19 outbreak, debates about vaccines and drugs, but also environmental pollution and climate change, as well as technological and scientific development); Social Issues (Current social issues, such as immigration, wars, gender issues, poverty, and racism); Disasters and Tragedies (Criminal events, massacres, terrorist attacks, and natural disasters that have polarized social media users); Economy and Finance (topics related to the performance of financial markets, cryptocurrencies, investors, and relationships with various stakeholders); Other (Cluster gathering minor topics not falling under the previous ones, such as gossip about celebrities, unclassifiable conspiracy theories, internet memes, and generic hoaxes).

There are considerable differences in the amount of ID addressed per specific topic in the 283 articles analyzed. In Fig 12 the values in the cells represent the number of links connecting a document (row) with a topic (column).

Fig 12. Excerpt of the documents-topics matrix.

The matrix (size 283×6) reports the strength of relationships between documents and topics.

For each topic, the blue intensity gives visual feedback (the darker, the higher) on the number of outgoing links from the document to the topic. The total number of incoming links (Relevance) can be computed for each topic by summing the values along each column; it represents how much the topic is addressed by the entire corpus. From the column sorting, it is evident that Health and Science is the most represented topic in the corpus, while the rightmost, Disasters and Tragedies, is the least represented.

The documents have also been sorted by the Ranking score, that is, the sum of the outgoing links scaled by the Relevance of the topics, reported in the reddish rightmost column.

Once the overall Relevance of the topics over the entire documental corpus has been assessed, it is possible to rank them from the most to the least relevant in terms of total links to documents: Health and Science (27197 links), Politics (15082), Social Issues (5885), Economy and Finance (1092), Other (663), and Disasters and Tragedies (565). Health and Science, Politics, and Social Issues are the most relevant, which is consistent with the analysis performed in the early stage of the primary sources’ selection (§ 2.1).

From the bags-of-words it is clear that the relevance of Health and Science is mainly due to the recent global pandemic, which has been the subject of both correct and false information. The rush to find vaccines to tackle the Covid-19 pandemic ignited a heated discourse on big pharma companies, on which many conspiracy theories thrived. The phenomenon, however, is often conflated with the search for information alternative to traditional sources [332].

ID on the Politics topic mainly relates to political events that happened after 2016, such as Brexit, the USA presidential election, Russiagate, the rise of nationalist movements worldwide, the cold conflict between the USA and North Korea, and the actions of dissidents against Vladimir Putin.

Issues regarding migratory phenomena, cultural and religious identity, sexual autonomy, or gender self-determination invariably stir up heated debates among individuals. This instinctive response to topics that touch personal spheres and intimate beliefs is often exploited as a mechanism to deactivate critical control over one's conscious actions. Users of various social media platforms, driven by fervor, tend to share messages with other users regardless of their positions on the matter. The content-sharing mechanism, facilitated by design through the interfaces of the major social networking platforms, is constantly exploited to disseminate ID, as evidenced by the ranking of Social Issues in Fig 12.

When it comes to tragedies and natural disasters, however, this sharing mechanism is seldom utilized. Apparently, events that touch people not only from the perspective of beliefs but also through empathetic proximity to their fellow human beings do not spread false information as effectively [333,334].

Social media platforms versus topics

As described earlier, both the connections between documents and social networks and the connections between documents and topics were obtained as relationships between corresponding nodes within the Obsidian vault. Each of these connections represents a pathway from a document to a topic or a social network. Consequently, it is possible to identify the documents that bridge topics and social networks and to determine the level of connection between these two node types. Following this logic, we have obtained the social networks versus topics matrix shown in Fig 13.
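This derivation can be sketched as a bipartite projection: multiplying the transpose of a documents-platforms matrix by a documents-topics matrix sums the link strengths over the bridging documents. The matrices below are small illustrative assumptions, not the study's data.

```python
import numpy as np

# Hypothetical link-strength matrices extracted from the vault:
# doc_platform[i, j]: links from document i to platform j
# doc_topic[i, k]:    links from document i to topic k
doc_platform = np.array([
    [3, 1, 0],
    [0, 2, 1],
    [4, 0, 0],
])
doc_topic = np.array([
    [5, 0],
    [1, 2],
    [0, 3],
])

# Platform-topic strength: for each (platform, topic) pair, sum the products
# of link strengths over every document bridging both node types.
platform_topic = doc_platform.T @ doc_topic
print(platform_topic)
```

Row and column sums of `platform_topic` then give the per-platform and per-topic totals reported in the margins of Fig 13.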

Fig 13. The social media platforms-topics matrix.

Each cell reports the corresponding strength of relationship. The bottom row represents the total sums of the strengths per topic. The rightmost column reports a weighted score for each social media platform.

Such a relationship can be interpreted as the echo of that topic in the particular echo chamber represented by the social media platform corresponding to that row. For example, Health and Science resonates, in descending order, in Twitter, Facebook, YouTube, WhatsApp, Instagram, WeChat, and so on, as visually suggested by the color scale: reddish is worse, greenish better in terms of ID spreading. The overall echo is reported in the last row. As previously done, the score accounts for the relative importance of the social media platforms as echo chambers.

As expected, reverberation as an echo chamber is proportional to the diffusion of the corresponding platform, since this is precisely the mechanism on which its relevance is based: the revenue model of all social networks rests on the number of subscribers and on the possibility of showing advertising content to, or collecting data from, as many people as possible. As a consequence of how they are designed, social networking platforms favor the sharing of content as the basic mechanism for establishing and strengthening social relations between users. This is reflected both in the economic importance of the platform and in its ability to amplify shared information, whether true or false. Therefore, it is not surprising that the ordering of the score corresponds to the ordering of notoriety of the platforms. Actually, the score reflects the economic importance of the platform only partially [335]. This discrepancy could be associated with the sharing mechanism and with the type of content on which the platform is based. Tagging someone else's tweet is much easier and faster than watching an entire video on YouTube and posting a comment. From another perspective, spreading ID content is harder on platforms like TikTok, even though they have a large diffusion, especially among younger generations. Fig 14 shows how the different types of ID move through the major social networks.

Fig 14. Correlation matrix between ID types.

The ID types, misinformation, disinformation, malinformation, are correlated with the most widely used social networks worldwide.

As noted above, Twitter and Facebook are the social networks where fake news spreads the most. Among the ID types, it is clear that misinformation, i.e., incorrect information disseminated without intent to deceive or harm, is the most prevalent. Disinformation (the intentional manipulation of false news) is in second place. The dissemination of distorted news with intent to deceive or harm (malinformation) accounts for a much lower proportion. This result shows that most users are not aware that they are spreading ID. There is thus evidence of strong users' ingenuousness in sharing content: users often share for the sake of an exchange of any kind rather than out of real critical thinking [336–339].

The role of AI: Aid or pitfall?

In this literature review, several studies have been screened to explore the role of artificial intelligence (AI) in the dissemination of fake news. Surprisingly, the findings reveal that AI acts both as a spreader of fake news and as an authoritative agent. On one hand, the power of AI can be harnessed to uncover and identify fake news, potentially aiding users in distinguishing between genuine and fabricated information [340,341]. On the other hand, AI can also serve as a harmful agent, amplifying and spreading false or incorrect information, thereby posing a significant challenge in accurately assessing the authenticity of news sources [33,342]. These contradictory findings highlight the complexity and potential pitfalls associated with relying on AI alone for the analysis of news authenticity. Further research and innovative approaches are required to mitigate the negative impact of AI in spreading fake news and to develop effective mechanisms for its verification. Fig 15 shows AI's behavior with respect to the dissemination of ID through social networks.

Fig 15. AI bots vs social media platform.

Correlation matrix between AI behavior addressed in the screened papers and the most widely used social networks worldwide.

In most cases, AI is shown to act more as a spreader than as an authoritative agent. This applies mainly to the most commonly used social networks (Twitter, Facebook, and YouTube). Values are less significant for the platforms less used for news dissemination in general (i.e., Instagram, LinkedIn, Sina Weibo, etc.), and the behavior is not present at all in the analyzed papers for the social platforms reported with a green background.

It is also interesting to consider the specific areas or topics in which AI operates as a disseminator of fake news or as an authoritative agent (see Fig 16).

Fig 16. AI bots vs topics.

Correlation matrix between AI behavior addressed in the screened papers and the major topics of dissemination of the ID.

This analysis too shows that AI behaves more like a fake news spreader than like an authoritative agent protecting users from ID. This finding is due to several factors. First, AI algorithms rely heavily on data and patterns, often without fully understanding the nuances of context and credibility [343]. If content aligns with popular narratives or generates high engagement, this can lead to the unintentional amplification of misleading or false information. Additionally, the algorithms used by AI systems may prioritize maximizing user attention and interaction rather than accuracy and authenticity [344]. This can result in the promotion of sensationalized or controversial content, including fake news, as it tends to generate more clicks, likes, and shares. Furthermore, the ever-evolving nature of fake news makes it challenging for AI systems to consistently and effectively identify and combat it. The manipulation tactics employed by purveyors of fake news continue to evolve, often surpassing the capabilities of the AI systems designed to detect them. Consequently, the limitations of AI in accurately discerning between genuine and false information contribute to its tendency to inadvertently spread fake news instead of acting as a reliable authoritative agent.

Empowering critical thinking in tackling fake news

The role of Critical Thinking in the papers analyzed in this literature review is essential and multifaceted. Critical Thinking serves as a crucial tool in combating the detrimental effects of fake news by encouraging individuals to question, analyze, and evaluate the information they encounter [345,346]. It can alleviate the fear and panic that false alarms and sensationalized headlines can trigger, promoting a more rational and measured approach to news consumption [347,348]. Critical Thinking also aids in mitigating the impact of consensus bias, wherein individuals tend to believe information that aligns with their preexisting beliefs or the prevailing narrative [349]. By fostering a mindset of skepticism and inquiry, Critical Thinking helps to counteract narrative division and confusion by promoting a more nuanced understanding of complex issues [350,351]. Additionally, Critical Thinking acts as a shield against the allure of clickbait, which often leads to the spread of misinformation. By empowering individuals to assess the credibility and reliability of sources, Critical Thinking mitigates the distress and panic caused by ID [349,352]. Overall, the scientific literature recognizes that, although it often requires triggers to be activated [353], Critical Thinking is an essential component in navigating the landscape of fake news and its detrimental consequences, offering a potential solution to combat its spread and protect individuals from its harmful effects.

It is also important to note that Critical Thinking plays a crucial role in addressing fake news, as relying on AI alone as a trained critical thinker on behalf of the user is not effective enough [354]: there are inherent limitations to AI systems that prevent them from effectively emulating the nuanced cognitive processes involved in Critical Thinking [355,356]. First, AI lacks the ability to grasp the intricacies of human emotions, values, and biases, which are essential components in critically evaluating information [357,358]. Critical Thinking requires an understanding of the broader context, cultural nuances, and the ability to discern subjective intent, factors that AI struggles to interpret accurately. AI systems primarily rely on algorithms and data patterns, which can themselves be manipulated or biased, leading to potential inaccuracies and reinforcing existing biases. Additionally, AI algorithms are not equipped to adapt and evolve at the same pace as the ever-changing tactics employed by those spreading fake news [343,344]. The dynamic nature of fake news necessitates human judgment and reasoning, which AI currently falls short of replicating. Therefore, while AI can assist in certain aspects, it cannot replace the inherent cognitive abilities of human critical thinking when it comes to detecting and combating fake news effectively.

Understanding the fragilities of the human mind is crucial to fully harnessing the potential of AI. By recognizing the limitations and biases that humans possess, we can better leverage AI as a complementary tool in the fight against fake news. By combining the strengths of AI, such as its ability to analyze vast amounts of data and detect patterns, with human critical thinking skills, we can create a more robust system for identifying and countering fake news. This approach acknowledges that AI can aid in information processing, fact-checking, and identifying inconsistencies, but human judgment is required to interpret the findings and consider the broader context. By bridging the gap between human cognition and AI capabilities, we can maximize the potential of both, effectively combating fake news and protecting users from its detrimental effects.

Discussions and conclusions

Navigating the information landscape: Partisan bias and fact-checkers

The issue of fake news on social media is a pressing concern with significant implications. While social media platforms have implemented measures to combat the spread of misinformation, it is evident that partisan bias can still influence fact-checking efforts [359]. Researchers have made efforts to study this phenomenon by creating data repositories that provide insights into the spread of fake news on social media [341,360–362]. However, a different dimension of the issue is highlighted by pointing out that anti-critical-thinking practices can be detrimental to the development of critical thinking skills [363,364]. Such practices can limit free speech, suppress dissenting opinions, and promote misinformation, which can hinder the understanding of complex topics [365,366]. Therefore, it is essential to address anti-critical thinking to ensure that individuals develop the skills necessary to navigate the complex information landscape of social media.

Partisan bias refers to the tendency of people to interpret or report information in a way that is consistent with their political beliefs or affiliations [367,368]. In the context of fact-checking efforts on social media platforms, partisan bias can influence the way in which information is evaluated and classified as true or false [369]. For example, if fact-checkers have a political bias toward a particular party or ideology, they may be more likely to label information that corresponds with their beliefs as true and information that contradicts their beliefs as false. This can lead to a situation where misinformation is labeled as true or facts are labeled as false, which can further exacerbate the problem of fake news on social media [370]. Therefore, it is essential to mitigate the impact of partisan bias on fact-checking efforts to ensure that the information provided is accurate and unbiased. One example of how partisan bias has affected fact-checking efforts is the controversy surrounding Facebook's third-party fact-checking program [371,372]. In 2019, it was revealed that some of the fact-checkers hired by Facebook had political biases that influenced their decisions. For example, one of the fact-checkers, affiliated with a conservative think tank, was found to have labeled true posts from left-leaning sources as false while labeling false posts from right-leaning sources as true. This led to accusations of bias and raised concerns about the effectiveness of Facebook's fact-checking program. Similarly, in 2020, Twitter received criticism for labeling a tweet from a conservative commentator as "manipulated media" while leaving tweets with similar content from left-leaning sources unchecked [373]. These examples illustrate how partisan bias can influence fact-checking efforts and highlight the need for more rigorous and transparent fact-checking processes to combat the spread of misinformation on social media.

It can be challenging for users to identify fact-checkers with political biases, as these biases may not always be apparent [374]. However, there are some steps that users can take to evaluate the credibility of fact-checkers and the sources they use [375]. First, users can check the credentials of the fact-checkers to determine if they have expertise in the relevant area. Secondly, users can examine the sources cited by the fact-checkers to determine if they are reputable and unbiased. Additionally, users can compare the fact-checkers’ conclusions with those of other fact-checkers to see if there is a consensus. Finally, users can look for any evidence of political biases in the fact-checkers’ work, such as consistently labeling posts from a particular political ideology as false or true. However, it’s important to note that identifying political biases in fact-checkers can be a difficult task, and users should be cautious when evaluating the credibility of fact-checkers and the information they provide. There are several ways to determine if a source is reputable and unbiased:

  • Check the author or organization behind the source: Look for information about the author or organization to see if they have a reputation for producing accurate and unbiased information. You can do this by searching for the author or organization on search engines or checking their website.
  • Look for other sources to corroborate the information: Check other sources to see if they are reporting the same information. If multiple sources are reporting the same information, it is more likely to be accurate.
  • Check the date of the source: Make sure that the source you are using is current and up-to-date, as information can become outdated quickly.
  • Check for bias: Look for any signs of bias in the source, such as a clear political or ideological agenda. If the source appears biased, it may not be the most reliable source of information.
  • Pay attention to the tone of the source: Look for any emotional language or inflammatory statements that could indicate bias or an agenda.

By considering these factors, it is possible to get a better sense of whether a source is reputable and unbiased. However, it is important to remember that no source is completely unbiased, and it is always a good idea to check multiple sources to get a more comprehensive understanding of a topic [376]. It is also important to approach information with caution: if it is impossible to find any corroborating sources or additional information, it may be best to withhold judgment or refrain from using the information until more reliable information becomes available [377].

Visualizing information: How a knowledge graph can streamline your data management

In today's information-saturated world, the volume of available knowledge presents a significant challenge. Traditional taxonomic structures, such as Linnaean trees or encyclopedias, are no longer sufficient to navigate this complex landscape, and the direct verification of reliable sources has become increasingly difficult. To address this issue, we propose an organizational framework derived from a comprehensive review. This framework aims to systematize and simplify knowledge organization, providing a solution to the overwhelming influx of information. By adopting this systematic approach, we can effectively manage and navigate the vast sea of information that surrounds us. Cognitive artifacts, i.e., tools or objects that assist in performing cognitive tasks more efficiently and accurately [378,379], can greatly enhance cognitive capacities. Knowledge graphs, structured representations of knowledge, offer a powerful cognitive artifact of this kind: by organizing and capturing information about concepts, they facilitate a deeper understanding of the underlying information [380,381]. These tools prove valuable in representing and containing a huge amount of information and allow it to be navigated to grasp interesting findings and connections. They enable the comprehensive representation of both the topics and the social networks addressed in the analyzed papers, fostering a holistic understanding of these domains. Knowledge graphs not only provide an efficient means of representing complex relationships between concepts but also facilitate the discovery of new patterns and relationships [382]. Additionally, they can recommend personalized pathways based on topic interests or social network use, improving the dataset exploitation experience.
Knowledge graphs allow a simplex management method [383] for literature reviews, playing a crucial role in streamlining data management and overcoming the issue of information silos [384,385]. Information silos refer to the isolated storage and limited accessibility of information within specific domains or disciplines, which can hinder interdisciplinary collaboration and impede the comprehensive understanding of complex topics [384]. Simplex management makes it possible to overcome the challenges posed by information silos, enabling the integration of diverse sources and creating a unified, holistic, and interconnected knowledge-base view of the research field [383]. The simplex management approach involves the systematic organization and synthesis of literature to extract key insights and findings. By consolidating information from various sources, it enables researchers to navigate the vast amount of literature and identify relevant studies more effectively [383]. Combining the power of knowledge graphs and simplex management results in a streamlined and comprehensive approach to data management: the knowledge graph serves as a visual representation of information, facilitating the exploration and understanding of complex relationships, while simplex management ensures the systematic organization and synthesis of literature, preventing the fragmentation of knowledge and enabling a more cohesive and informed research process. Together, these characteristics enhance cognitive capacities, streamline data management, and support a deeper understanding of information belonging to complex domains. Consequently, researchers can navigate the vast amount of information more efficiently and uncover new insights.
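As a minimal illustration of how such a graph supports navigation, the sketch below models the vault as an adjacency map from documents to the topic and platform nodes they link to, and finds the documents bridging a topic and a platform. All node names and weights are hypothetical.

```python
# Minimal sketch: the vault as an adjacency map from documents to the
# topic and platform nodes they link to (weights = link counts).
vault = {
    "doc_001": {"Health and Science": 12, "Twitter": 7},
    "doc_002": {"Politics": 4, "Twitter": 2},
}

def bridging_docs(node_a, node_b, graph):
    """Return the documents linking to both nodes.

    Documents that link to a topic and a platform are the bridges
    through which topic-platform relationships are derived.
    """
    return [d for d, links in graph.items()
            if node_a in links and node_b in links]

print(bridging_docs("Health and Science", "Twitter", vault))
```

A dedicated graph library (e.g., networkx) would offer the same bridging query plus visualization, but a plain dictionary is enough to show the principle.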

Unraveling the dynamics of fake news through literature

The prevalence of fake news and its impact on individuals’ beliefs requires a comprehensive understanding of the underlying communication processes. This study delves into the intricate stages involved in the dissemination of false information, emphasizing the crucial need to understand the factors that contribute to individuals’ susceptibility to misleading content. Particularly in scenarios where false beliefs can lead to adverse outcomes, unraveling the mechanisms behind belief formation becomes imperative. Notably, the landscape of fake news propagation has evolved, with a growing shift towards closed social media applications. Within these closed networks, fake news effortlessly traverses from sender to receiver, concealing itself from the scrutiny of those outside the conversation. This hidden transmission poses significant challenges in combating misinformation and underscores the urgency of comprehending its dynamics.

[386] states that falsehoods diffuse considerably faster and more broadly than truths on Twitter. The study analyzed over 126,000 Twitter stories tweeted by about 3 million people more than 4.5 million times and found that false political news had more pronounced effects than false news about less partisan topics such as terrorism, natural disasters, science, urban legends, finance, or health issues such as COVID-19 pandemic information. The study also documents the growing trend of accessing news and information through social technologies: an increasing proportion of adults prefer to get their news online, including through social media platforms. The paper further discusses how AI can be used to detect and combat fake news on social media, along with the ethical concerns surrounding this use. AI algorithms can also be used for "dark creativity", generating emotionally loaded fakes for profit and notoriety; such systems, with explicitly deceptive intentions, put AI technology at a disservice to society. Moreover, there are concerns about potential biases in AI algorithms that could lead to false positives or negatives in detecting fake news. Humans are not always good at distinguishing between real and fake news, especially when the content aligns with their pre-existing beliefs or biases; this is known as confirmation bias. Additionally, humans may not have the time or resources to fact-check every piece of information they encounter online. AI can thus complement human abilities in detecting fake news and improve overall accuracy, amplifying human critical thinking by mimicking the procedures and know-how of experts or by introducing entirely new systematic approaches, and by providing additional information and context that may not be immediately apparent to humans.
However, it is important to note that AI should not replace critical thinking skills but rather enhance them.

According to [387], some examples of misinformation spreading on social media include rumors and unverified information shared during breaking news situations. For instance, after a terror attack on the Champs Élysées in Paris in April 2017, individuals on social media unwittingly published rumors, such as the news that a second policeman had been killed. People sharing this type of content are rarely doing so to cause harm; they are caught up in the moment and fail to adequately inspect and verify the information they are sharing. The authors mention that various third-party actors have created websites that use a set of criteria to fact-check trending online content or to certify the credibility (trustworthiness) of popular online news websites. Social media platforms have begun fact-checking what is posted and shared on their sites by users, although the jury is still out on how vigorously and successfully they do this. As for reporting misinformation, most platforms have reporting features that allow users to flag content as false or misleading.

In the paper titled "Creating Chaos Online", [388] argues that the impact of disinformation on society as a whole can be significant. Disinformation can render publics vulnerable to propaganda and influence attitudes and behaviors in target populations. Anonymity and automation are two factors that can contribute to the proliferation of disinformation on online platforms. Anonymity allows users to assume masked or faceless identities, which can make it easier for them to generate posts on news portals or social networking sites without being held accountable for their actions. Similarly, automation can foster the amplification and proliferation of disinformation by allowing certain ideas or information to spread rapidly from the margins to the mainstream. This can occur through the use of AI, bots, and other automated tools designed to amplify certain messages or content. These factors can make it easier for disinformation campaigns to gain traction online and reach a wider audience than they might otherwise be able to. Anonymity and automation are both typical features of the sociotechnical structure of online platforms. The term "sociotechnical" refers to the interplay between social and technical factors in shaping the design, use, and impact of technology. In the case of online platforms, the sociotechnical structure includes both the technical features of the platform (such as its algorithms, user interface, and data architecture) and the social practices and norms that emerge around its use (such as how users interact with each other, what types of content are shared, and how information is evaluated). Anonymity and automation are two examples of technical features that can have significant social consequences. By enabling users to remain anonymous or by amplifying certain types of content over others, these features can shape how information is produced, circulated, and consumed on online platforms.
As a result, understanding the sociotechnical structure of online platforms is crucial for understanding how disinformation spreads online and what can be done to address it. According to the aforementioned article "Creating Chaos Online," disinformation tactics used online can include the deployment of propaganda that involves affective, deflective, and misleading information. The work also notes the recurrence of justification frames, which are similar to disinformation propaganda tactics of past and present dictatorships.

[389] discusses the concept of polarization. This concept refers to the phenomenon whereby people with similar beliefs and values become more extreme in their views after taking a position. This can lead to a widening gap between different groups in society, as each group becomes more entrenched in its own beliefs and less willing to consider alternative perspectives. Polarization can be influenced by various factors, including media consumption, social networks, and political discourse. Empirical studies have shown that blogs and personalized news portals can contribute to political polarization in society. In the USA, for example, supporters of the Republican Party have moved further to the right in recent years, while Democrats have drifted further to the left. The paper also covers topics that contribute to shaping opinions through polarization and societal divisions, including the transformation brought by the Internet, the influence of search engines like Google, and the role of blogs and social media platforms. All these factors lead to the analysis of the power of framing and narratives, the creation of filter bubbles and echo chambers through social media algorithms, and the detrimental effects of conspiracy theories. Overall, Zoglauer's article underscores the erosion of trust in traditional sources of authority and calls for critical examination of beliefs and open dialogue to foster a more nuanced understanding of truth.

In [390], the "One-Dimensional Discourse" is analyzed. This concept refers to limited communication characterized by a lack of critical thinking and analysis, reinforcing dominant ideologies and power structures. It is associated with authoritarianism, consumerism, and technological progress, leading to the colonization of human experience. Social media, considered a "new communicative paradigm," enables various forms of electronic communication and content production. However, within the context of communicative capitalism, social media can foster one-dimensional discourse by capturing resistance and promoting capitalist ideals, thereby playing a significant role in shaping public discourse and influencing political opinions. The paper also analyzes the impact of social media on communication, highlighting its transformative nature and its potential for reinforcing dominant ideologies and power structures, ultimately affecting public discourse and political opinions.

In "Optimising Emotions, Incubating Falsehoods," by [391] practical strategies are provided to protect against disinformation and misinformation, such as fact-checking and critical thinking. Disinformation is intentionally spread to deceive, while misinformation may be spread without deceptive intent. The book highlights real-life examples of the impact of false information on global events, including the rise of populist movements and its influence on political elections and public health. It also discusses deepfakes and shallowfakes, manipulated videos that misrepresent reality. The dynamics of false information online involve the economics and politics of emotion, optimizing emotional content for financial and political gain. The authors emphasize the scale and virality of false information, involving bots and various types of spreaders that use emotionalized presentation to amplify their reach.

In [392] the authors discuss the relationship between fake news, conspiracy theories, and digital media. They argue that conspiracy theories are a dangerous form of fake news facilitated by the affordances of the digital media ecology: conspiracy theorists not only believe in these theories but also generate content to spread them. The authors also trace the emergence of fake news in recent years, which has caused public anxiety and fueled debates on truth, media responsibility, and audience literacy. They connect fake news to postmodern culture, where spectacle triumphs over substance, truth becomes relative, and reality is constructed through media representations. Drawing parallels between fake news and propaganda, they suggest a similar impact on Donald Trump's election, and they emphasize the challenge posed by deepfake videos, which masquerade as authentic and manipulate viewers in an era of hyperreality and disinformation.

In "Building Back Truth in an Age of Misinformation" [393], the author emphasizes the importance of being a critical consumer of media in order to identify reliable sources. This involves evaluating source credibility, checking for bias, and verifying information against other sources. Social media platforms have accelerated the spread of false information, rewarding pages that share misinformation with greater engagement, while often evading responsibility as publishers. Educators play a crucial role in teaching students to combat misinformation by evaluating sources and incorporating critical media skills into the curriculum. Designers and developers can create healthier online communities by implementing features such as limiting the visibility of likes and shares, providing context for posts, promoting diverse perspectives, and reducing anonymity to discourage harmful behavior.

Conclusions: Illuminating insights and future directions

In conclusion, this literature review examined the phenomenon of information disorder on social media platforms, with a particular focus on the dissemination of fake news related to politics, health, and science. Our findings shed light on the distinct ways in which misinformation, disinformation, and malinformation spread across platforms, with Twitter a common venue for political propaganda and Facebook for health-related misinformation. We also emphasized the dual role of artificial intelligence in both perpetuating and combating false narratives. To counter information disorder, we proposed several strategies, including enhancing digital literacy skills and fostering critical thinking among social media users. Our review thus contributes fresh insights into the intricate issue of information disorder on social media platforms and presents potential solutions to this pressing concern. By fostering collaboration and continuing research in this field, while harnessing the power of knowledge graph simplexity data management techniques, we can build a more informed and responsible digital society.
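As a minimal illustration of the knowledge-graph style of data management mentioned above, literature findings can be organized as subject-relation-object triples and queried. This is a hypothetical, standard-library-only sketch, not the actual pipeline used in the study; the class, relation names, and example triples are illustrative:

```python
from collections import defaultdict

class KnowledgeGraph:
    """Minimal triple store: (subject, relation, object) edges
    with simple neighbor queries."""
    def __init__(self):
        self.triples = set()
        self.out_edges = defaultdict(set)  # subject -> {(relation, object)}

    def add(self, subj, rel, obj):
        self.triples.add((subj, rel, obj))
        self.out_edges[subj].add((rel, obj))

    def objects(self, subj, rel):
        """All objects linked to `subj` via `rel`."""
        return {o for r, o in self.out_edges[subj] if r == rel}

# Illustrative triples echoing the review's findings.
kg = KnowledgeGraph()
kg.add("misinformation", "spreads_on", "Facebook")
kg.add("disinformation", "spreads_on", "Twitter")
kg.add("bot", "amplifies", "disinformation")
```

Even this toy structure supports the kind of cross-cutting query (e.g., "which platforms does a given category of information disorder spread on?") that motivates graph-based organization of review findings.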
While we have identified several strategies for combating fake news, there are limitations to our review. For instance, our analysis is limited to the scientific literature and may not capture all aspects of the phenomenon. Additionally, the fast-paced nature of social media makes it difficult to keep up with the latest trends in fake news. Our study assumes that the extent of informational disorders on social media and AI bot behaviors is accurately reflected in the volume of scientific articles on these topics. This assumption becomes more credible as the number and recency of relevant articles increase. However, this approach has limitations due to the scientific literature’s potential lag in capturing the rapid evolution of digital behaviors. Factors such as publication bias and the academic community’s response time to emerging trends could affect the comprehensiveness of our analysis. Thus, while our methodology provides a substantial basis for understanding these phenomena, it necessitates cautious interpretation of findings, acknowledging the possibility of underrepresentation or delayed recognition of new developments in social media and AI bot activities. An additional limitation of the study concerns the exclusive use of the Scopus database for identifying articles relevant to our review. Although Scopus is renowned for its broad coverage and the high quality of indexed publications, it does not capture the entire spectrum of scientific publications. This approach has the potential to omit relevant studies published outside Scopus. However, given Scopus’s high coverage percentage in our specific research domain and the inclusion of major influential works, we believe that this limitation does not significantly compromise the robustness and representativeness of the results obtained. 
Future research could extend the analysis to additional databases to compare results and assess the impact of this methodological choice on the overall understanding of the field. It should also explore new ways to combat information disorder on social media platforms. One promising avenue is to leverage emerging technologies such as blockchain or machine learning algorithms to verify the authenticity of information. Furthermore, efforts should be made to promote digital literacy skills among users and to encourage critical thinking when consuming information online.
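To make the blockchain idea concrete, the sketch below shows a minimal hash chain over content records: each block's hash covers the previous block, so tampering with any earlier record invalidates the chain. The design, function names, and data layout are illustrative assumptions, not a proposal from the reviewed literature:

```python
import hashlib
import json

def _digest(body):
    """Deterministic SHA-256 digest of a block body."""
    payload = json.dumps(body, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, content):
    """Append a content record whose hash covers the previous
    block's hash, linking the records into a tamper-evident chain."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "content": content, "prev_hash": prev_hash}
    block["hash"] = _digest({k: v for k, v in block.items() if k != "hash"})
    chain.append(block)
    return block

def verify_chain(chain):
    """True iff every block's stored hash matches its contents
    and links correctly to its predecessor."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if block["hash"] != _digest(body):
            return False
        expected_prev = chain[i - 1]["hash"] if i else "0" * 64
        if block["prev_hash"] != expected_prev:
            return False
    return True
```

A real deployment would add distributed consensus and signatures; the point here is only that hash linking makes after-the-fact edits to published claims detectable, which is the property authenticity-verification proposals rely on.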


Special thanks to Professor Garito, the Uninettuno coordinator of the Titan Project, and Professor Caprara, the scientific advisor, for their valuable suggestions and discussions on the subject. Thanks to all Titan project partners for sharing their insights on the issues addressed in this work. Many thanks to all members of the research team for sharing their ideas and experiences. Special thanks to the Editor and the reviewers for their suggestions, which significantly enhanced the work.


  1. Needham A. Word of mouth, youth and their brands. Young Consumers. 2008;9: 60–62.
  2. Yavetz G, Aharony N. Social media in government offices: usage and strategies. Aslib Journal of Information Management. 2020;72: 445–462.
  3. Zhang XS, Zhang X, Kaparthi P. Combat Information Overload Problem in Social Networks With Intelligent Information-Sharing and Response Mechanisms. IEEE Transactions on Computational Social Systems. 2020;7: 924–939.
  4. Asamoah DA, Sharda R. What should I believe? Exploring information validity on social network platforms. Journal of Business Research. 2021;122: 567–581.
  5. Zhang W, Lu J, Huang Y. Research on the Dissemination of Public Opinion on the Internet Based on the News Channels. 2021 18th International Computer Conference on Wavelet Active Media Technology and Information Processing (ICCWAMTIP). 2021. pp. 485–488.
  6. Vese D. Governing Fake News: The Regulation of Social Media and the Right to Freedom of Expression in the Era of Emergency. European Journal of Risk Regulation. 2022;13: 477–513.
  7. McPherson M, Smith-Lovin L, Cook JM. Birds of a Feather: Homophily in Social Networks. Annual Review of Sociology. 2001;27: 415–444.
  8. Weng L, Menczer F, Ahn Y-Y. Virality Prediction and Community Structure in Social Networks. Sci Rep. 2013;3: 2522. pmid:23982106
  9. Esposito E, Sinatora FL. Social media discourses of feminist protest from the Arab Levant: digital mirroring and transregional dialogue. Critical Discourse Studies. 2022;19: 502–522.
  10. Literat I, Boxman-Shabtai L, Kligler-Vilenchik N. Protesting the Protest Paradigm: TikTok as a Space for Media Criticism. The International Journal of Press/Politics. 2023;28: 362–383.
  11. O’Leary H, Smiles D, Parr S, El-Sayed MMH. “I Can’t Breathe:” The Invisible Slow Violence of Breathing Politics in Minneapolis. Society & Natural Resources. 2023;0: 1–21.
  12. Bhadani S, Yamaya S, Flammini A, Menczer F, Ciampaglia GL, Nyhan B. Political audience diversity and news reliability in algorithmic ranking. Nat Hum Behav. 2022;6: 495–505. pmid:35115677
  13. Moorhead SA, Hazlett DE, Harrison L, Carroll JK, Irwin A, Hoving C. A New Dimension of Health Care: Systematic Review of the Uses, Benefits, and Limitations of Social Media for Health Communication. Journal of Medical Internet Research. 2013;15: e1933. pmid:23615206
  14. Franceschi J, Pareschi L. Spreading of fake news, competence and learning: kinetic modelling and numerical approximation. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences. 2022;380: 20210159. pmid:35400178
  15. Oqlu Kazimi PF. Global Information Network and Conflicts of Interest (Parties, Interests and Conflicts). 2021 IEEE 16th International Conference on Computer Sciences and Information Technologies (CSIT). 2021. pp. 453–456.
  16. Velichety S, Shrivastava U. Quantifying the impacts of online fake news on the equity value of social media platforms–Evidence from Twitter. International Journal of Information Management. 2022;64: 102474.
  17. Chen S, Xiao L, Kumar A. Spread of misinformation on social media: What contributes to it and how to combat it. Computers in Human Behavior. 2023;141: 107643.
  18. Soler-Costa R, Lafarga-Ostáriz P, Mauri-Medrano M, Moreno-Guerrero A-J. Netiquette: Ethic, Education, and Behavior on Internet—A Systematic Literature Review. International Journal of Environmental Research and Public Health. 2021;18: 1212. pmid:33572925
  19. Suarez-Lledo V, Alvarez-Galvez J. Prevalence of Health Misinformation on Social Media: Systematic Review. Journal of Medical Internet Research. 2021;23: e17187. pmid:33470931
  20. van der Linden S. Misinformation: susceptibility, spread, and interventions to immunize the public. Nat Med. 2022;28: 460–467. pmid:35273402
  21. Wardle C, Derakhshan H, Burnes A, Dias N. Information disorder: Toward an interdisciplinary framework for research and policy making. Council of Europe; 2017. p. 109.
  22. Carmi E, Yates SJ, Lockley E, Pawluczuk A. Data citizenship: rethinking data literacy in the age of disinformation, misinformation, and malinformation. Internet Policy Review. 2020;9.
  23. Santos-d’Amorim K, Miranda MF de O. Informação incorreta, desinformação e má informação: Esclarecendo definições e exemplos em tempos de desinfodemia. Encontros Bibli: revista eletrônica de biblioteconomia e ciência da informação. 2021;26: 01–23.
  24. Carson A, Gibbons A, Phillips JB. Recursion theory and the ‘death tax’: Investigating a fake news discourse in the 2019 Australian election. Journal of Language and Politics. 2021;20: 696–718.
  25. Hameleers M. Disinformation as a context-bound phenomenon: toward a conceptual clarification integrating actors, intentions and techniques of creation and dissemination. Communication Theory. 2023;33: 1–10.
  26. Azamfirei R, Kudchadkar SR, Fackler J. Large language models and the perils of their hallucinations. Critical Care. 2023;27: 120. pmid:36945051
  27. Dwivedi YK, Kshetri N, Hughes L, Slade EL, Jeyaraj A, Kar AK, et al. “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. International Journal of Information Management. 2023;71: 102642.
  28. Krügel S, Ostermaier A, Uhl M. ChatGPT’s inconsistent moral advice influences users’ judgment. Sci Rep. 2023;13: 4569. pmid:37024502
  29. Hernández-Orallo J. The Measure of All Minds: Evaluating Natural and Artificial Intelligence. Cambridge: Cambridge University Press; 2017.
  30. Turing AM. I.—Computing Machinery and Intelligence. Mind. 1950;LIX: 433–460.
  31. Warwick K, Shah H. Effects of lying in practical Turing tests. AI & Soc. 2016;31: 5–15.
  32. Himelein-Wachowiak M, Giorgi S, Devoto A, Rahman M, Ungar L, Schwartz HA, et al. Bots and Misinformation Spread on Social Media: Implications for COVID-19. Journal of Medical Internet Research. 2021;23: e26933. pmid:33882014
  33. Shahid W, Li Y, Staples D, Amin G, Hakak S, Ghorbani A. Are You a Cyborg, Bot or Human?—A Survey on Detecting Fake News Spreaders. IEEE Access. 2022;10: 27069–27083.
  34. Moffitt JD, King C, Carley KM. Hunting Conspiracy Theories During the COVID-19 Pandemic. Social Media + Society. 2021;7: 20563051211043212.
  35. Dourado T. Who Posts Fake News? Authentic and Inauthentic Spreaders of Fabricated News on Facebook and Twitter. Journalism Practice. 2023;0: 1–20.
  36. Balestrucci A, De Nicola R, Petrocchi M, Trubiani C. Do You Really Follow Them? Automatic Detection of Credulous Twitter Users. In: Yin H, Camacho D, Tino P, Tallón-Ballesteros AJ, Menezes R, Allmendinger R, editors. Intelligent Data Engineering and Automated Learning–IDEAL 2019. Cham: Springer International Publishing; 2019. pp. 402–410.
  37. Brisola AC, Doyle A. Critical Information Literacy as a Path to Resist “Fake News”: Understanding Disinformation as the Root Problem. Open Information Science. 2019;3: 274–286.
  38. Joshi SC, Gupta K, Manektala S. Misinformation, Public Opinion, and the Role of Critical Thinking. International Journal of Management and Humanities. 2022;8: 15–18.
  39. Babii A-N. The Use of Critical Thinking Against Fake News. NORDSCI Conference proceedings, Book 1 Volume 3. SAIMA Consult Ltd; 2020.
  40. Dingler T, Tag B, Lorenz-Spreen P, Vargo AW, Knight S, Lewandowsky S. Workshop on Technologies to Support Critical Thinking in an Age of Misinformation. Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems. ACM; 2021.
  41. Machete P, Turpin M. The use of critical thinking to identify fake news: A systematic literature review. Springer; 2020. pp. 235–246.
  42. Vandamme F, Kaczmarski P, Lin W. Disinformation, Critical Thinking and Dyssocial Techniques and Methods. Communication & Cognition. 2022;55: 49–114.
  43. Falegnami A, Tronci M, Costantino F. The occupational health and safety risks of ongoing digital transformation. A knowledge management software powered literature review. 2021.
  44. Falagas ME, Pitsouni EI, Malietzis GA, Pappas G. Comparison of PubMed, Scopus, Web of Science, and Google Scholar: strengths and weaknesses. FASEB J. 2008;22: 338–342. pmid:17884971
  45. Mongeon P, Paul-Hus A. The journal coverage of Web of Science and Scopus: a comparative analysis. Scientometrics. 2016;106: 213–228.
  46. Ooms J. Text extraction, rendering and converting of PDF documents: R package version 2.3.1. 2020.
  47. Powers DM. Applications and explanations of Zipf’s law. 1998.
  48. Zipf G. The Psychobiology of Language. 1935.
  49. Kao A, Poteet SR, editors. Natural Language Processing and Text Mining. London: Springer; 2007.
  50. Bholowalia P, Kumar A. EBK-means: A clustering technique based on elbow method and k-means in WSN. International Journal of Computer Applications. 2014;105.
  51. Shi C, Wei B, Wei S, Wang W, Liu H, Liu J. A quantitative discriminant method of elbow point for the optimal number of clusters in clustering algorithm. EURASIP Journal on Wireless Communications and Networking. 2021;2021: 1–16.
  52. Jasser RA, Sarhan MA, Otaibi DA, Oraini SA. Awareness toward COVID-19 precautions among different levels of dental students in King Saud University, Riyadh, Saudi Arabia. Journal of Multidisciplinary Healthcare. 2020;13: 1317–1324. pmid:33154648
  53. Abul-Fottouh D, Song MY, Gruzd A. Examining algorithmic biases in YouTube’s recommendations of vaccine videos. International Journal of Medical Informatics. 2020;140. pmid:32460043
  54. Agusto FB, Numfor E, Srinivasan K, Iboi EA, Fulk A, Saint Onge JM, et al. Impact of public sentiments on the transmission of COVID-19 across a geographical gradient. PeerJ. 2023;11. pmid:36819996
  55. Amoudi G, Albalawi R, Baothman F, Jamal A, Alghamdi H, Alhothali A. Arabic rumor detection: A comparative study. Alexandria Engineering Journal. 2022;61: 12511–12523.
  56. Fletcher R, Nielsen RK. Are people incidentally exposed to news on social media? A comparative analysis. New Media and Society. 2018;20: 2450–2468.
  57. Arbane M, Benlamri R, Brik Y, Alahmar AD. Social media-based COVID-19 sentiment classification model using Bi-LSTM. Expert Systems with Applications. 2023;212. pmid:36060151
  58. Atehortua NA, Patino S. COVID-19, a tale of two pandemics: Novel coronavirus and fake news messaging. Health Promotion International. 2021;36: 524–534. pmid:33450022
  59. Beletsky L, Seymour S, Kang S, Siegel Z, Sinha MS, Marino R, et al. Fentanyl panic goes viral: The spread of misinformation about overdose risk from casual contact with fentanyl in mainstream and social media. International Journal of Drug Policy. 2020;86. pmid:32949901
  60. Bempong N-E, De Castañeda RR, Schütte S, Bolon I, Keiser O, Escher G, et al. Precision Global Health ‐ The case of Ebola: A scoping review. Journal of Global Health. 2019;9. pmid:30701068
  61. Anderson AA, Huntington HE. Social Media, Science, and Attack Discourse: How Twitter Discussions of Climate Change Use Sarcasm and Incivility. Science Communication. 2017;39: 598–620.
  62. Shimizu K. 2019-nCoV, fake news, and racism. The Lancet. 2020;395: 685–686. pmid:32059801
  63. Brigo F, Ponzano M, Sormani MP, Clerico M, Abbadessa G, Cossu G, et al. Digital work engagement among Italian neurologists. Therapeutic Advances in Chronic Disease. 2021;12. pmid:34367544
  64. Calleja-Solanas V, Pigani E, Palazzi MJ, Sole-Ribalta A, Suweis S, Borge-Holthoefer J, et al. Quantifying the drivers behind collective attention in information ecosystems. Journal of Physics: Complexity. 2021;2.
  65. Diaz MI, Hanna JJ, Hughes AE, Lehmann CU, Medford RJ. The Politicization of Ivermectin Tweets during the COVID-19 Pandemic. Open Forum Infectious Diseases. 2022;9. pmid:35855004
  66. Gareau S, Bailey J, Halberstadt ES, James T, Kenison K, Robb SW, et al. COVID-19 in South Carolina: Experiences Using Facebook as a Self-Organizing Tool for Grassroots Advocacy, Education, and Social Support. Journal of Humanistic Psychology. 2022.
  67. Gaviria-Mendoza A, Mejía-Mazo DA, Duarte-Blandón C, Castrillón-Spitia JD, Machado-Duque ME, Valladales-Restrepo LF, et al. Self-medication and the ‘infodemic’ during mandatory preventive isolation due to the COVID-19 pandemic. Therapeutic Advances in Drug Safety. 2022;13. pmid:35237406
  68. Grundmann O, Veltri CA, Morcos D, Knightes D III, Smith KE, Rogers JM. How essential is kratom availability and use during COVID-19? Use pattern analysis based on survey and social media data. Substance Abuse. 2022;43: 865–877. pmid:35179453
  69. Imam NH, Vassilakis VG, Kolovos D. OCR post-correction for detecting adversarial text images. Journal of Information Security and Applications. 2022;66.
  70. Isa SM, Nico G, Permana M. IndoBERT for Indonesian Fake News Detection. ICIC Express Letters. 2022;16: 289–297.
  71. Jain L. An entropy-based method to control COVID-19 rumors in online social networks using opinion leaders. Technology in Society. 2022;70. pmid:35765463
  72. Kaddoura S, Chandrasekaran G, Popescu DE, Duraisamy JH. A systematic literature review on spam content detection and classification. PeerJ Computer Science. 2022;8. pmid:35174265
  73. Kothari A, Foisey L, Donelle L, Bauer M. How do Canadian public health agencies respond to the COVID-19 emergency using social media: A protocol for a case study using content and sentiment analysis. BMJ Open. 2021;11. pmid:33888527
  74. Lentzen M-P, Huebenthal V, Kaiser R, Kreppel M, Zoeller JE, Zirk M. A retrospective analysis of social media posts pertaining to COVID-19 vaccination side effects. Vaccine. 2022;40: 43–51. pmid:34857421
  75. Loeb S, Reines K, Abu-Salha Y, French W, Butaney M, Macaluso JN, et al. Quality of Bladder Cancer Information on YouTube. European Urology. 2021;79: 56–59. pmid:33010986
  76. Malhotra P. A Relationship-Centered and Culturally Informed Approach to Studying Misinformation on COVID-19. Social Media and Society. 2020;6. pmid:34192033
  77. McDowell ZJ, Vetter MA. It Takes a Village to Combat a Fake News Army: Wikipedia’s Community and Policies for Information Literacy. Social Media and Society. 2020;6.
  78. Murdock I, Carley KM, Yağan O. Identifying cross-platform user relationships in 2020 U.S. election fraud and protest discussions. Online Social Networks and Media. 2023;33.
  79. Baltar F, Brunet I. Social research 2.0: Virtual snowball sampling method using Facebook. Internet Research. 2012;22: 57–74.
  80. Segado-Fernández S, Herrera-Peco I, Jiménez-Gómez B, Ruiz Núñez C, Jiménez-Hidalgo PJ, Benítez de Gracia E, et al. Realfood and Cancer: Analysis of the Reliability and Quality of YouTube Content. International Journal of Environmental Research and Public Health. 2023;20. pmid:36981954
  81. Galli A, Masciari E, Moscato V, Sperlí G. A comprehensive Benchmark for fake news detection. Journal of Intelligent Information Systems. 2022;59: 237–261. pmid:35342227
  82. Kemei J, Alaazi DA, Tulli M, Kennedy M, Tunde-Byass M, Bailey P, et al. A scoping review of COVID-19 online mis/disinformation in Black communities. Journal of Global Health. 2022;12: 05026. pmid:35866205
  83. Li Z, Wang M, Zhong J, Ren Y. Improving the Communication and Credibility of Government Media in Response to Public Health Emergencies: Analysis of Tweets From the WeChat Official Accounts of 10 Chinese Health Commissioners. Frontiers in Public Health. 2022;10. pmid:35937264
  84. Subramanian KN, Ganapathy T. Light weight recommendation system for social networking analysis using a hybrid BERT-SVM classifier algorithm. Scientific and Technical Journal of Information Technologies, Mechanics and Optics. 2022;22: 769–778.
  85. Lee J-W, Kim J-H. Fake Sentence Detection Based on Transfer Learning: Applying to Korean COVID‐19 Fake News. Applied Sciences (Switzerland). 2022;12.
  86. Hajli N, Saeed U, Tajvidi M, Shirazi F. Social Bots and the Spread of Disinformation in Social Media: The Challenges of Artificial Intelligence. British Journal of Management. 2022;33: 1238–1253.
  87. Leahy R, Restrepo NJ, Sear R, Johnson NF. Connectivity Between Russian Information Sources and Extremist Communities Across Social Media Platforms. Frontiers in Political Science. 2022;4.
  88. Ajaegbu O, Ajaegbu C, Quilling R. Nigeria EndSARS Protest: False Information Mitigation Hybrid Model. Ingenierie des Systemes d’Information. 2022;27: 447–455.
  89. Mahmoudi O, Bouami MF, Badri M. Arabic Language Modeling Based on Supervised Machine Learning. Revue d’Intelligence Artificielle. 2022;36: 467–473.
  90. Borukhson D, Lorenz-Spreen P, Ragni M. When Does an Individual Accept Misinformation? An Extended Investigation Through Cognitive Modeling. Computational Brain and Behavior. 2022;5: 244–260. pmid:35578705
  91. Quinn EK, Fenton S, Ford-Sahibzada CA, Harper A, Marcon AR, Caulfield T, et al. COVID-19 and Vitamin D Misinformation on YouTube: Content Analysis. JMIR Infodemiology. 2022;2. pmid:35310014
  92. Akintunde TY, Tassang AE, Okeke M, Isangha SO, Musa TH. Perceived Vaccine Efficacy, Willingness to Pay for COVID-19 Vaccine and Associated Determinants among Foreign Migrants in China. Electronic Journal of General Medicine. 2022;19.
  93. Chidambaram S, Maheswaran Y, Chan C, Hanna L, Ashrafian H, Markar SR, et al. Misinformation about the Human Gut Microbiome in YouTube Videos: Cross-sectional Study. JMIR Formative Research. 2022;6. pmid:35576578
  94. Olszowski R, Zabdyr-Jamróz M, Baran S, Pięta P, Ahmed W. A Social Network Analysis of Tweets Related to Mandatory COVID-19 Vaccination in Poland. Vaccines. 2022;10. pmid:35632506
  95. 95. Papadopoulou O, Makedas T, Apostolidis L, Poldi F, Papadopoulos S, Kompatsiaris I. MeVer NetworkX: Network Analysis and Visualization for Tracing Disinformation. Future Internet. 2022;14.
  96. 96. Kim MG, Kim M, Kim JH, Kim K. Fine-Tuning BERT Models to Classify Misinformation on Garlic and COVID-19 on Twitter. International Journal of Environmental Research and Public Health. 2022;19. pmid:35564518
  97. 97. Green J, Petty J, Whiting L, Orr F, Smart L, Brown A-M, et al. ‘Blurred boundaries’: When nurses and midwives give anti-vaccination advice on Facebook. Nursing Ethics. 2022;29: 552–568. pmid:35142239
  98. 98. Wang AH-E. PM Me the Truth? The Conditional Effectiveness of Fact-Checks Across Social Media Sites. Social Media and Society. 2022;8.
  99. 99. Sainju KD, Zaidi H, Mishra N, Kuffour A. Xenophobic Bullying and COVID-19: An Exploration Using Big Data and Qualitative Analysis. International Journal of Environmental Research and Public Health. 2022;19. pmid:35457691
  100. 100. Sylvia JJ, Moody K. BreadTube Rising: How Modern Creators Use Cultural Formats to Spread Countercultural Ideology. CLCWeb ‐ Comparative Literature and Culture. 2022;24.
  101. 101. Kiruthika NS, Thailambal DG. Dynamic Light Weight Recommendation System for Social Networking Analysis Using a Hybrid LSTM-SVM Classifier Algorithm. Optical Memory and Neural Networks (Information Optics). 2022;31: 59–75.
  102. 102. De Magistris G, Russo S, Roma P, Starczewski JT, Napoli C. An Explainable Fake News Detector Based on Named Entity Recognition and Stance Classification Applied to COVID-19. Information (Switzerland). 2022;13.
  103. 103. Tian H, Gaines C, Launi L, Pomales A, Vazquez G, Goharian A, et al. Understanding Public Perceptions of Per- and Polyfluoroalkyl Substances: Infodemiology Study of Social Media. Journal of Medical Internet Research. 2022;24. pmid:35275066
  104. 104. Gabarron E, Dechsling A, Skafle I, Nordahl-Hansen A. Discussions of Asperger Syndrome on Social Media: Content and Sentiment Analysis on Twitter. JMIR Formative Research. 2022;6. pmid:35254265
  105. 105. Boguslavsky DV, Sharova NP, Sharov KS. Public Policy Measures to Increase Anti-SARS-CoV-2 Vaccination Rate in Russia. International Journal of Environmental Research and Public Health. 2022;19. pmid:35329076
  106. 106. Rivera YM, Moran MB, Thrul J, Joshu C, Smith KC. Contextualizing Engagement With Health Information on Facebook: Using the Social Media Content and Context Elicitation Method. Journal of Medical Internet Research. 2022;24. pmid:35254266
  107. 107. Mourali M, Drake C. The Challenge of Debunking Health Misinformation in Dynamic Social Media Conversations: Online Randomized Study of Public Masking During COVID-19. Journal of Medical Internet Research. 2022;24. pmid:35156933
  108. 108. Raj C, Meel P. People lie, actions Don’t! Modeling infodemic proliferation predictors among social media users. Technology in Society. 2022;68.
  109. 109. Palani B, Elango S, Vignesh Viswanathan K. CB-Fake: A multimodal deep learning framework for automatic fake news detection using capsule neural network and BERT. Multimedia Tools and Applications. 2022;81: 5587–5620. pmid:34975284
  110. 110. Averza A, Slhoub K, Bhattacharyya S. Evaluating the Influence of Twitter Bots via Agent-Based Social Simulation. IEEE Access. 2022;10: 129394–129407.
  111. 111. Desty RT, Arumsari W. Receiving COVID-19 Messages on Social Media to the People of Semarang City. Kemas. 2022;18: 217–224.
  112. 112. Xu Q, McMann T, Godinez H, Nali MC, Li J, Cai M, et al. Impact of COVID-19 on HIV Prevention Access: A Multi-platform Social Media Infodemiology Study. AIDS and Behavior. 2023;27: 1886–1896. pmid:36471205
  113. 113. Ghazy RM, Yazbek S, Gebreal A, Hussein M, Addai SA, Mensah E, et al. Monkeypox Vaccine Acceptance among Ghanaians: A Call for Action. Vaccines. 2023;11. pmid:36851118
  114. 114. Boulianne S, Lee S. Conspiracy Beliefs, Misinformation, Social Media Platforms, and Protest Participation. Media and Communication. 2022;10: 30–41.
  115. 115. Bojic L, Nikolic N, Tucakovic L. State vs. anti-vaxxers: Analysis of Covid-19 echo chambers in Serbia. Communications. 2022.
  116. 116. Ali I, Ayub MNB, Shivakumara P, Noor NFBM. Fake News Detection Techniques on Social Media: A Survey. Wireless Communications and Mobile Computing. 2022;2022.
  117. 117. Qalaja EK, Al-Haija QA, Tareef A, Al-Nabhan MM. Inclusive Study of Fake News Detection for COVID-19 with New Dataset using Supervised Learning Algorithms. International Journal of Advanced Computer Science and Applications. 2022;13: 1–12.
  118. 118. Cárcamo-Ulloa L, Cárdenas-Neira C, Scheihing-García E, Sáez-Trumper D, Vernier M, Blaña-Romero C. On Politics and Pandemic: How Do Chilean Media Talk about Disinformation and Fake News in Their Social Networks? Societies. 2023;13.
  119. 119. Hurford B, Rana A, Sachan RSK. COMMENT: Narrative-based misinformation in India about protection against Covid-19: Not just another “moo-point.” Indian journal of medical ethics. 2022;VII: 1–10. pmid:34730095
  120. 120. Amaral ADR, Jung A-K, Braun L-M, Blanco B. Narratives of Anti‐Vaccination Movements in the German and Brazilian Twittersphere: A Grounded Theory Approach. Media and Communication. 2022;10: 144–156.
121. Srikanth J, Damodaram A, Teekaraman Y, Kuppusamy R, Thelkar AR. Sentiment Analysis on COVID-19 Twitter Data Streams Using Deep Belief Neural Networks. Computational Intelligence and Neuroscience. 2022;2022. pmid:35535182
122. Turco C, Ruvolo CC, Cilio S, Celentano G, Califano G, Creta M, et al. Looking for cystoscopy on YouTube: Are videos a reliable information tool for internet users? Archivio Italiano di Urologia e Andrologia. 2022;94: 57–61. pmid:35352526
123. Rohera D, Shethna H, Patel K, Thakker U, Tanwar S, Gupta R, et al. A Taxonomy of Fake News Classification Techniques: Survey and Implementation Aspects. IEEE Access. 2022;10: 30367–30394.
124. Elbarazi I, Saddik B, Grivna M, Aziz F, Elsori D, Stip E, et al. The Impact of the COVID-19 “Infodemic” on Well-Being: A Cross-Sectional Study. Journal of Multidisciplinary Healthcare. 2022;15: 289–307. pmid:35228802
125. Yeung AWK, Tosevska A, Klager E, Eibensteiner F, Tsagkaris C, Parvanov ED, et al. Medical and Health-Related Misinformation on Social Media: Bibliometric Study of the Scientific Literature. Journal of Medical Internet Research. 2022;24. pmid:34951864
126. Vijaykumar S, Rogerson DT, Jin Y, De Oliveira Costa MS. Dynamics of social corrections to peers sharing COVID-19 misinformation on WhatsApp in Brazil. Journal of the American Medical Informatics Association. 2022;29: 33–42. pmid:34672323
127. Nobre GP, Ferreira CHG, Almeida JM. A hierarchical network-oriented analysis of user participation in misinformation spread on WhatsApp. Information Processing and Management. 2022;59.
128. Mutanga MB, Abayomi A. Tweeting on COVID-19 pandemic in South Africa: LDA-based topic modelling approach. African Journal of Science, Technology, Innovation and Development. 2022;14: 163–172.
129. Zhu L, Peng Z, Li S. Factors Influencing the Accessibility and Reliability of Health Information in the Face of the COVID-19 Outbreak—A Study in Rural China. Frontiers in Public Health. 2021;9. pmid:35004558
130. Fedoruk B, Nelson H, Frost R, Ladouceur KF. The Plebeian Algorithm: A Democratic Approach to Censorship and Moderation. JMIR Formative Research. 2021;5. pmid:34854812
131. Tan EYQ, Wee RRE, Saw YE, Heng KJQ, Chin JWE, Tong EMW, et al. Tracking Private WhatsApp Discourse about COVID-19 in Singapore: Longitudinal Infodemiology Study. Journal of Medical Internet Research. 2021;23. pmid:34881720
132. Rovetta A. The Impact of COVID-19 on Conspiracy Hypotheses and Risk Perception in Italy: Infodemiological Survey Study Using Google Trends. JMIR Infodemiology. 2021;1. pmid:34447925
133. Kou Z, Zhang D, Shang L, Wang D. What and Why? Towards Duo Explainable Fauxtography Detection Under Constrained Supervision. IEEE Transactions on Big Data. 2023;9: 133–146.
134. Roe C, Lowe M, Williams B, Miller C. Public perception of SARS-CoV-2 vaccinations on social media: Questionnaire and sentiment analysis. International Journal of Environmental Research and Public Health. 2021;18. pmid:34948638
135. Balasubramaniam T, Nayak R, Luong K, Bashar MA. Identifying COVID-19 misinformation tweets and learning their spatio-temporal topic dynamics using Nonnegative Coupled Matrix Tensor Factorization. Social Network Analysis and Mining. 2021;11. pmid:34149960
136. de Oliveira DVB, Albuquerque UP. Cultural Evolution and Digital Media: Diffusion of Fake News About COVID-19 on Twitter. SN Computer Science. 2021;2. pmid:34485922
137. Muric G, Wu Y, Ferrara E. COVID-19 vaccine hesitancy on social media: Building a public Twitter data set of antivaccine content, vaccine misinformation, and conspiracies. JMIR Public Health and Surveillance. 2021;7. pmid:34653016
138. Starr TS, Oxlad M. News media stories about cancer on Facebook: How does story framing influence response framing, tone and attributions of responsibility? Health (United Kingdom). 2021;25: 688–706. pmid:32186197
139. Boothby C, Murray D, Waggy AP, Tsou A, Sugimoto CR. Credibility of scientific information on social media: Variation by platform, genre and presence of formal credibility cues. Quantitative Science Studies. 2021;2.
140. Al-Jalabneh AA. Health Misinformation on Social Media and its Impact on COVID-19 Vaccine Inoculation in Jordan. Communication and Society. 2023;36: 185–200.
141. Popiołek M, Hapek M, Barańska M. Infodemia – an analysis of fake news in Polish news portals and traditional media during the coronavirus pandemic. Communication and Society. 2021;34: 81–98.
142. Asare M, Lanning BA, Isada S, Rose T, Mamudu HM. Feasibility of utilizing social media to promote HPV self-collected sampling among medically underserved women in a rural southern city in the United States (U.S.). International Journal of Environmental Research and Public Health. 2021;18. pmid:34682565
143. Alasmari A, Addawood A, Nouh M, Rayes W, Al-Wabil A. A retrospective analysis of the COVID-19 infodemic in Saudi Arabia. Future Internet. 2021;13.
144. Alenezi MN, Alqenaei ZM. Machine learning in detecting COVID-19 misinformation on Twitter. Future Internet. 2021;13.
145. Li L, Aldosery A, Vitiugin F, Nathan N, Novillo-Ortiz D, Castillo C, et al. The Response of Governments and Public Health Agencies to COVID-19 Pandemics on Social Media: A Multi-Country Analysis of Twitter Discourse. Frontiers in Public Health. 2021;9. pmid:34650948
146. Buller DB, Pagoto S, Henry K, Berteletti J, Walkosz BJ, Bibeau J, et al. Human Papillomavirus Vaccination and Social Media: Results in a Trial With Mothers of Daughters Aged 14–17. Frontiers in Digital Health. 2021;3. pmid:34713152
147. Alsudias L, Rayson P. Social media monitoring of the COVID-19 pandemic and influenza epidemic with adaptation for informal language in Arabic Twitter data: Qualitative study. JMIR Medical Informatics. 2021;9. pmid:34346892
148. Alshahrani R, Babour A. An infodemiology and infoveillance study on COVID-19: Analysis of Twitter and Google Trends. Sustainability (Switzerland). 2021;13.
149. Naseem U, Razzak I, Khushi M, Eklund PW, Kim J. COVIDSenti: A Large-Scale Benchmark Twitter Data Set for COVID-19 Sentiment Analysis. IEEE Transactions on Computational Social Systems. 2021;8: 976–988. pmid:35783149
150. Ianni M, Masciari E, Sperlí G. A survey of Big Data dimensions vs Social Networks analysis. Journal of Intelligent Information Systems. 2021;57: 73–100. pmid:33191981
151. Nazar S, Pieters T. Plandemic Revisited: A Product of Planned Disinformation Amplifying the COVID-19 “infodemic.” Frontiers in Public Health. 2021;9. pmid:34336759
152. Rogers R. Marginalizing the Mainstream: How Social Media Privilege Political Information. Frontiers in Big Data. 2021;4. pmid:34296078
154. Calvo D, Campos-Domínguez E, Simón-Astudillo I. Towards a critical understanding of social networks for the feminist movement: Twitter and the women’s strike. Tripodos. 2021; 91–109.
155. Onder ME, Zengin O. YouTube as a source of information on gout: a quality analysis. Rheumatology International. 2021;41: 1321–1328. pmid:33646342
156. Stecula DA, Pickup M. Social Media, Cognitive Reflection, and Conspiracy Beliefs. Frontiers in Political Science. 2021;3.
157. Argyris YA, Monu K, Tan P-N, Aarts C, Jiang F, Wiseley KA. Using machine learning to compare provaccine and antivaccine discourse among the public on social media: Algorithm development study. JMIR Public Health and Surveillance. 2021;7. pmid:34185004
158. Bossu R, Corradini M, Cheny J-M, Fallou L. A social bot in support of crisis communication: 10 years of @LastQuake experience on Twitter. Frontiers in Communication. 2023;8.
159. Bryanov K, Vziatysheva V. Determinants of individuals’ belief in fake news: A scoping review. PLoS ONE. 2021;16. pmid:34166478
160. Jennings W, Stoker G, Bunting H, Valgarðsson VO, Gaskell J, Devine D, et al. Lack of trust, conspiracy beliefs, and social media use predict COVID-19 vaccine hesitancy. Vaccines. 2021;9. pmid:34204971
161. Neely S, Eldredge C, Sanders R. Health information seeking behaviors on social media during the COVID-19 pandemic among American social networking site users: Survey study. Journal of Medical Internet Research. 2021;23. pmid:34043526
162. Zotova E, Agerri R, Rigau G. Semi-automatic generation of multilingual datasets for stance detection in Twitter. Expert Systems with Applications. 2021;170.
163. Kochan A, Ong S, Guler S, Johannson KA, Ryerson CJ, Goobie GC. Social media content of idiopathic pulmonary fibrosis groups and pages on Facebook: Cross-sectional analysis. JMIR Public Health and Surveillance. 2021;7. pmid:34057425
164. Helmstetter S, Paulheim H. Collecting a large scale dataset for classifying fake news tweets using weak supervision. Future Internet. 2021;13.
165. Basch CE, Basch CH, Hillyer GC, Meleo-Erwin ZC, Zagnit EA. YouTube videos and informed decision-making about COVID-19 vaccination: Successive sampling study. JMIR Public Health and Surveillance. 2021;7. pmid:33886487
166. Alnajrany SM, Asiri Y, Sales I, Alruthia Y. The commonly utilized natural products during the COVID-19 pandemic in Saudi Arabia: A cross-sectional online survey. International Journal of Environmental Research and Public Health. 2021;18. pmid:33924884
167. Balestrucci A, De Nicola R, Petrocchi M, Trubiani C. A behavioural analysis of credulous Twitter users. Online Social Networks and Media. 2021;23.
168. Yas H, Jusoh A, Streimikiene D, Mardani A, Nor KM, Alatawi A, et al. The negative role of social media during the COVID-19 outbreak. International Journal of Sustainable Development and Planning. 2021;16: 219–228.
169. Ahmed W, Das R, Vidal-Alaball J, Hardey M, Fuster-Casanovas A. Twitter’s Role in Combating the Magnetic Vaccine Conspiracy Theory: Social Network Analysis of Tweets. Journal of Medical Internet Research. 2023;25. pmid:36927550
170. Schück S, Foulquié P, Mebarki A, Faviez C, Khadhar M, Texier N, et al. Concerns discussed on Chinese and French social media during the COVID-19 lockdown: Comparative infodemiology study based on topic modeling. JMIR Formative Research. 2021;5. pmid:33750736
171. Katz M, Nandi N. Social media and medical education in the context of the COVID-19 pandemic: Scoping review. JMIR Medical Education. 2021;7. pmid:33755578
172. Zhang L, Li J, Zhou B, Jia Y. Rumor Detection Based on SAGNN: Simplified Aggregation Graph Neural Networks. Machine Learning and Knowledge Extraction. 2021;3: 84–94.
173. Preston S, Anderson A, Robertson DJ, Shephard MP, Huhe N. Detecting fake news on Facebook: The role of emotional intelligence. PLoS ONE. 2021;16. pmid:33705405
174. Guarino S, Pierri F, Di Giovanni M, Celestini A. Information disorders during the COVID-19 infodemic: The case of Italian Facebook. Online Social Networks and Media. 2021;22. pmid:34604611
175. Zhang Y, Wang L, Zhu JJH, Wang X. Conspiracy vs science: A large-scale analysis of online discussion cascades. World Wide Web. 2021;24: 585–606. pmid:33526966
176. Fenwick M, McCahery JA, Vermeulen EPM. Will the World Ever Be the Same After COVID-19? Two Lessons from the First Global Crisis of a Digital Age. European Business Organization Law Review. 2021;22: 125–145.
177. Wang H, Li Y, Hutch M, Naidech A, Luo Y. Using tweets to understand how COVID-19–related health beliefs are affected in the age of social media: Twitter data analysis study. Journal of Medical Internet Research. 2021;23. pmid:33529155
178. Reuter K, Wilson ML, Moran M, Le N, Angyan P, Majmundar A, et al. General audience engagement with antismoking public health messages across multiple social media sites: Comparative analysis. JMIR Public Health and Surveillance. 2021;7. pmid:33605890
179. Yüce MÖ, Adalı E, Kanmaz B. An analysis of YouTube videos as educational resources for dental practitioners to prevent the spread of COVID-19. Irish Journal of Medical Science. 2021;190: 19–26. pmid:32700083
180. Bangyal WH, Qasim R, Rehman NU, Ahmad Z, Dar H, Rukhsar L, et al. Detection of Fake News Text Classification on COVID-19 Using Deep Learning Approaches. Computational and Mathematical Methods in Medicine. 2021;2021. pmid:34819990
181. Ulizko MS, Antonov EV, Grigorieva MA, Tretyakov ES, Tukumbetova RR, Artamonov AA. Visual analytics of Twitter and social media dataflows: A case study of COVID-19 rumors. Scientific Visualization. 2021;13: 144–163.
182. Grandinetti J. Examining embedded apparatuses of AI in Facebook and TikTok. AI and Society. 2021. pmid:34539095
183. Alshareef M, Alotiby A. Prevalence and perception among Saudi Arabian population about resharing of information on social media regarding natural remedies as protective measures against COVID-19. International Journal of General Medicine. 2021;14: 5127–5137. pmid:34511995
184. Olise FP. Level of acceptance of news stories on social media platforms among youth in Nigeria. Jurnal Komunikasi: Malaysian Journal of Communication. 2021;37: 210–225.
185. Larrondo-Ureta A, Fernández S-P, Morales-i-Gras J. Disinformation, vaccines, and COVID-19. Analysis of the infodemic and the digital conversation on Twitter. Revista Latina de Comunicacion Social. 2021;2021: 1–18.
186. Yang LWY, Ng WY, Lei X, Tan SCY, Wang Z, Yan M, et al. Development and testing of a multi-lingual Natural Language Processing-based deep learning system in 10 languages for COVID-19 pandemic crisis: A multi-center study. Frontiers in Public Health. 2023;11. pmid:36860378
187. Chang MC, Park D. YouTube as a source of information on epidural steroid injection. Journal of Pain Research. 2021;14: 1353–1357. pmid:34045894
188. Al-Zaman MS. An exploratory study of social media users’ engagement with COVID-19 vaccine-related content. F1000Research. 2021;10. pmid:34853675
189. Vasconcelos C, Da Costa RL, Dias ÁL, Pereira L, Santos JP. Online influencers: Healthy food or fake news. International Journal of Internet Marketing and Advertising. 2021;15: 149–175.
190. Yafooz WMS, Alsaeedi A. Sentimental Analysis on Health-Related Information with Improving Model Performance using Machine Learning. Journal of Computer Science. 2021;17: 112–122.
191. Chang H-CH, Haider S, Ferrara E. Digital civic participation and misinformation during the 2020 Taiwanese presidential election. Media and Communication. 2021;9: 144–157.
192. Macnamara J. Challenging post-communication: Beyond focus on a ‘few bad apples’ to multi-level public communication reform. Communication Research and Practice. 2021;7: 35–55.
193. Guimarães VHA, de Oliveira-Leandro M, Cassiano C, Marques ALP, Motta C, Freitas-Silva AL, et al. Knowledge about COVID-19 in Brazil: Cross-sectional web-based study. JMIR Public Health and Surveillance. 2021;7. pmid:33400684
194. Tang L, Fujimoto K, Amith M, Cunningham R, Costantini RA, York F, et al. “Down the rabbit hole” of vaccine misinformation on YouTube: Network exposure study. Journal of Medical Internet Research. 2021;23. pmid:33399543
195. Kantartopoulos P, Pitropakis N, Mylonas A, Kylilis N. Exploring Adversarial Attacks and Defences for Fake Twitter Account Detection. Technologies. 2020;8.
196. Ridout B, Mckay M, Amon K, Campbell A, Wiskin AJ, Seng Du PML, et al. Social Media Use by Young People Living in Conflict-Affected Regions of Myanmar. Cyberpsychology, Behavior, and Social Networking. 2020;23: 876–888. pmid:33326325
197. Dong X, Victor U, Qian L. Two-Path Deep Semisupervised Learning for Timely Fake News Detection. IEEE Transactions on Computational Social Systems. 2020;7: 1386–1398.
198. Islam MR, Liu S, Wang X, Xu G. Deep learning for misinformation detection on online social networks: a survey and new perspectives. Social Network Analysis and Mining. 2020;10. pmid:33014173
199. Shang L, Zhang Y, Zhang D, Wang D. FauxWard: a graph neural network approach to fauxtography detection using social media comments. Social Network Analysis and Mining. 2020;10.
200. Berriche M, Altay S. Internet users engage more with phatic posts than with health misinformation on Facebook. Palgrave Communications. 2020;6.
202. Havey NF. Partisan public health: how does political ideology influence support for COVID-19 related misinformation? Journal of Computational Social Science. 2020;3: 319–342. pmid:33163686
203. Ahmed W, Seguí FL, Vidal-Alaball J, Katz MS. COVID-19 and the “Film Your Hospital” conspiracy theory: Social network analysis of Twitter data. Journal of Medical Internet Research. 2020;22. pmid:32936771
204. Faraon M, Jaff A, Nepomuceno LP, Villavicencio V. Fake news and aggregated credibility: Conceptualizing a co-creative medium for evaluation of sources online. International Journal of Ambient Computing and Intelligence. 2020;11: 93–117.
205. Narain K, Appiah Bimpong K, Kosasia Wamukota O, Ogunfolaji O, Nelson U-AU, Dutta A, et al. COVID-19 Information on YouTube: Analysis of Quality and Reliability of Videos in Eleven Widely Spoken Languages across Africa. Global Health, Epidemiology and Genomics. 2023;2023. pmid:36721521
206. Sutton J, Renshaw SL, Butts CT. COVID-19: Retransmission of official communications in an emerging pandemic. PLoS ONE. 2020;15. pmid:32936804
207. Stens O, Weisman MH, Simard J, Reuter K. Insights from Twitter conversations on lupus and reproductive health: Protocol for a content analysis. JMIR Research Protocols. 2020;9. pmid:32844753
208. Pobiruchin M, Zowalla R, Wiesner M. Temporal and location variations, and link categories for the dissemination of COVID-19-related information on Twitter during the SARS-CoV-2 outbreak in Europe: Infoveillance study. Journal of Medical Internet Research. 2020;22. pmid:32790641
209. Arce-García S, Menéndez-Menéndez M-I. Inflaming public debate: a methodology to determine origin and characteristics of hate speech about sexual and gender diversity on Twitter. Profesional de la Informacion. 2023;32.
210. Eysenbach G. How to fight an infodemic: The four pillars of infodemic management. Journal of Medical Internet Research. 2020;22. pmid:32589589
211. Larrouquere L, Gabin M, Poingt E, Mouffak A, Hlavaty A, Lepelley M, et al. Genesis of an emergency public drug information website by the French Society of Pharmacology and Therapeutics during the COVID-19 pandemic. Fundamental and Clinical Pharmacology. 2020;34: 389–396. pmid:32394481
212. Li HO-Y, Bailey A, Huynh D, Chan J. YouTube as a source of information on COVID-19: A pandemic of misinformation? BMJ Global Health. 2020;5. pmid:32409327
213. Ahmad AR, Murad HR. The impact of social media on panic during the COVID-19 pandemic in Iraqi Kurdistan: Online questionnaire study. Journal of Medical Internet Research. 2020;22. pmid:32369026
214. Ahmed W, Vidal-Alaball J, Downing J, Seguí FL. COVID-19 and the 5G conspiracy theory: Social network analysis of Twitter data. Journal of Medical Internet Research. 2020;22. pmid:32352383
215. Míguez-González M-I, Martínez-Rolán X, García-Mirón S. From disinformation to fact-checking: How Ibero-American fact-checkers on Twitter combat fake news. Profesional de la Informacion. 2023;32.
216. Chen E, Lerman K, Ferrara E. Tracking social media discourse about the COVID-19 pandemic: Development of a public coronavirus Twitter data set. JMIR Public Health and Surveillance. 2020;6. pmid:32427106
217. Wahbeh A, Nasralah T, Al-Ramahi M, El-Gayar O. Mining physicians’ opinions on social media to obtain insights into COVID-19: Mixed methods analysis. JMIR Public Health and Surveillance. 2020;6. pmid:32421686
218. Pulido CM, Ruiz-Eugenio L, Redondo-Sama G, Villarejo-Carballido B. A new application of social impact in social media for overcoming fake news in health. International Journal of Environmental Research and Public Health. 2020;17. pmid:32260048
219. Fuentes-Lara C, Arcila-Calderón C. Islamophobic hate speech on social networks. An analysis of attitudes to Islamophobia on Twitter. Revista Mediterranea de Comunicacion. 2023;14: 225–239.
220. Lara-Navarra P, Falciani H, Sánchez-Pérez EA, Ferrer-Sapena A. Information management in healthcare and environment: Towards an automatic system for fake news detection. International Journal of Environmental Research and Public Health. 2020;17. pmid:32046238
221. Jamison AM, Broniatowski DA, Dredze M, Wood-Doughty Z, Khan D, Quinn SC. Vaccine-related advertising in the Facebook Ad Archive. Vaccine. 2020;38: 512–520. pmid:31732327
222. Jabardi MH, Hadi AS. Ontology Meter for Twitter Fake Accounts Detection. International Journal of Intelligent Engineering and Systems. 2020;14: 410–419.
223. Yerlikaya T, Aslan ST. Social media and fake news in the post-truth era: The manipulation of politics in the election process. Insight Turkey. 2020;22: 177–196.
224. Rumata VM, Nugraha FK. An analysis of fake narratives on social media during 2019 Indonesian presidential election. Jurnal Komunikasi: Malaysian Journal of Communication. 2020;36: 351–368.
225. Bahja M, Safdar GA. Unlink the link between COVID-19 and 5G Networks: an NLP and SNA based Approach. IEEE Access. 2020. pmid:34812369
226. Ahmad I, Yousaf M, Yousaf S, Ahmad MO. Fake News Detection Using Machine Learning Ensemble Methods. Complexity. 2020;2020.
227. Pascual-Ferrá P, Alperstein N, Barnett DJ. Social Network Analysis of COVID-19 Public Discourse on Twitter: Implications for Risk Communication. Disaster Medicine and Public Health Preparedness. 2020. pmid:32907685
228. Al-Rakhami MS, Al-Amri AM. Lies Kill, Facts Save: Detecting COVID-19 Misinformation in Twitter. IEEE Access. 2020;8: 155961–155970. pmid:34192115
229. Milani E, Weitkamp E, Webb P. The visual vaccine debate on Twitter: A social network analysis. Media and Communication. 2020;8: 364–375.
230. Caddy C, Cheong M, Lim MSC, Power R, Vogel JP, Bradfield Z, et al. “Tell us what’s going on”: Exploring the information needs of pregnant and postpartum women in Australia during the pandemic with ‘Tweets’, ‘Threads’, and women’s views. PLoS ONE. 2023;18. pmid:36638130
231. Armitage L, Lawson BK, Whelan ME, Newhouse N. Paying SPECIAL consideration to the digital sharing of information during the COVID-19 pandemic and beyond. BJGP Open. 2020;4. pmid:32345692
232. Shrestha P, Sathanur A, Maharjan S, Saldanha E, Arendt D, Volkova S. Multiple social platforms reveal actionable signals for software vulnerability awareness: A study of GitHub, Twitter and Reddit. PLoS ONE. 2020;15. pmid:32208431
233. Jang Y, Park C-H, Seo Y-S. Fake news analysis modeling using quote retweet. Electronics (Switzerland). 2019;8.
234. Shah Z, Surian D, Dyda A, Coiera E, Mandl KD, Dunn AG. Automatically appraising the credibility of vaccine-related web pages shared on social media: A Twitter surveillance study. Journal of Medical Internet Research. 2019;21. pmid:31682571
235. Ritonga R, Syahputra I. Citizen journalism and public participation in the Era of New Media in Indonesia: From street to tweet. Media and Communication. 2019;7: 79–90.
236. Noguera-Vivo JM, Del Mar Grandío-Pérez M, Villar-Rodríguez G, Martín A, Camacho D. Disinformation and vaccines on social networks: Behavior of hoaxes on Twitter. Revista Latina de Comunicacion Social. 2023;2023: 44–62.
237. Krishnamurthi S. Fiji’s coup culture: Rediscovering a voice at the ballot box. Pacific Journalism Review. 2019;25: 39–51.
238. Del Vicario M, Quattrociocchi W, Scala A, Zollo F. Polarization and fake news: Early warning of potential misinformation targets. ACM Transactions on the Web. 2019;13.
239. Dias da Silva MA, Walmsley AD. Fake news and dental education. British Dental Journal. 2019;226: 397–399. pmid:30903059
240. Papadopoulou O, Zampoglou M, Papadopoulos S, Kompatsiaris I. A corpus of debunked and verified user-generated videos. Online Information Review. 2019;43: 72–88.
241. Kabha R, Kamel A, Elbahi M, Narula S. Comparison study between the UAE, the UK, and India in dealing with WhatsApp fake news. Journal of Content, Community and Communication. 2019;10: 176–186.
242. Campinho BB. Constitution, democracy, regulation of the internet and electoral fake news in the Brazilian elections. Publicum. 2019;5: 232–256.
243. de Valk M. Recycling old strategies and devices: What remains, an art project addressing disinformation campaigns (Re)using strategies to delay industry regulation. Artnodes. 2019;2019: 34–43.
244. Heldt A. Reading between the lines and the numbers: An analysis of the first NetzDG reports. Internet Policy Review. 2019;8.
245. Bruns A. After the ‘APIcalypse’: social media platforms and their fight against critical scholarly research. Information Communication and Society. 2019;22: 1544–1566.
246. Eckert S, Sopory P, Day A, Wilkins L, Padgett D, Novak J, et al. Health-Related Disaster Communication and Social Media: Mixed-Method Systematic Review. Health Communication. 2018;33: 1389–1400. pmid:28825501
247. Bora K, Das D, Barman B, Borah P. Are internet videos useful sources of information during global public health emergencies? A case study of YouTube videos during the 2015–16 Zika virus pandemic. Pathogens and Global Health. 2018;112: 320–328. pmid:30156974
248. Haber N, Smith ER, Moscoe E, Andrews K, Audy R, Bell W, et al. Causal language and strength of inference in academic and media articles shared in social media (CLAIMS): A systematic review. PLoS ONE. 2018;13. pmid:29847549
249. Boididou C, Papadopoulos S, Zampoglou M, Apostolidis L, Papadopoulou O, Kompatsiaris Y. Detection and visualization of misleading content on Twitter. International Journal of Multimedia Information Retrieval. 2018;7: 71–86.
250. Sharma A, Goyal A. Tweet, truth and fake news: A study of BJP’s official tweeter handle. Journal of Content, Community and Communication. 2018;4: 22–28.
251. Sidhu S. Social media, dietetic practice and misinformation: A triangulation research. Journal of Content, Community and Communication. 2018;4: 29–34.
252. Fullwood MD, Kecojevic A, Basch CH. Examination of YouTube videos related to synthetic cannabinoids. International Journal of Adolescent Medicine and Health. 2018;30. pmid:27639268
253. McClain CR. Practices and promises of Facebook for science outreach: Becoming a “Nerd of Trust.” PLoS Biology. 2017;15. pmid:28654674
254. Lin Y-R, Keegan B, Margolin D, Lazer D. Rising tides or rising stars?: Dynamics of shared attention on Twitter during media events. PLoS ONE. 2014;9. pmid:24854030
255. Syed-Abdul S, Fernandez-Luque L, Jian W-S, Li Y-C, Crain S, Hsu M-H, et al. Misleading health-related information promoted through video-based social media: Anorexia on YouTube. Journal of Medical Internet Research. 2013;15: e30. pmid:23406655
256. Onder ME, Zengin O. Quality of healthcare information on YouTube: psoriatic arthritis. Zeitschrift fur Rheumatologie. 2023;82: 30–37. pmid:34468808
257. Soares FB, Salgueiro I, Bonoto C, Vinhas O. YouTube as a source of information about unproven drugs for COVID-19: the role of the mainstream media and recommendation algorithms in promoting misinformation. Brazilian Journalism Research. 2022;19: 462–491.
258. Brockinton A, Hirst S, Wang R, McAlaney J, Thompson S. Utilising online eye-tracking to discern the impacts of cultural backgrounds on fake and real news decision-making. Frontiers in Psychology. 2022;13. pmid:36582319
259. Elhariry M, Malhotra K, Solomon M, Goyal K, Kempegowda P. Top 100 #PCOS influencers: Understanding who, why and how online content for PCOS is influenced. Frontiers in Endocrinology. 2022;13. pmid:36568090
260. Germone M, Wright CD, Kimmons R, Coburn SS. Twitter Trends for Celiac Disease and the Gluten-Free Diet: Cross-sectional Descriptive Analysis. JMIR Infodemiology. 2022;2. pmid:37113453
261. Yiannakoulias N, Darlington JC, Slavik CE, Benjamin G. Negative COVID-19 Vaccine Information on Twitter: Content Analysis. JMIR Infodemiology. 2022;2. pmid:36348980
262. DePaula N, Hagen L, Roytman S, Alnahass D. Platform Effects on Public Health Communication: A Comparative and National Study of Message Design and Audience Engagement Across Twitter and Facebook. JMIR Infodemiology. 2022;2. pmid:36575712
263. Eggleston A, Cook R, Over H. The influence of fake news on face-trait learning. PLoS ONE. 2022;17. pmid:36542558
264. Papadopoulou O, Kartsounidou E, Papadopoulos S. COVID-Related Misinformation Migration to BitChute and Odysee. Future Internet. 2022;14.
265. Moran R, Nguyen S, Bui L. Sending News Back Home: Misinformation Lost in Transnational Social Networks. Proceedings of the ACM on Human-Computer Interaction. 2023;7.
266. Weng Z, Lin A. Public Opinion Manipulation on Social Media: Social Network Analysis of Twitter Bots during the COVID-19 Pandemic. International Journal of Environmental Research and Public Health. 2022;19. pmid:36554258
267. Bovet A, Grindrod P. Organization and evolution of the UK far-right network on Telegram. Applied Network Science. 2022;7. pmid:36408456
268. Gangwar SS, Rathore SS, Chouhan SS, Soni S. Predictive modeling for suspicious content identification on Twitter. Social Network Analysis and Mining. 2022;12. pmid:36217359
269. Gongane VU, Munot MV, Anuse AD. Detection and moderation of detrimental content on social media platforms: current status and future directions. Social Network Analysis and Mining. 2022;12. pmid:36090695
270. Ng LHX, Cruickshank IJ, Carley KM. Cross-platform information spread during the January 6th capitol riots. Social Network Analysis and Mining. 2022;12. pmid:36105923
271. Hangloo S, Arora B. Combating multimodal fake news on social media: methods, datasets, and future perspective. Multimedia Systems. 2022;28: 2391–2422. pmid:35818516
272. Galbraith E, Li J, Rio-Vilas VJD, Convertino M. In.To. COVID-19 socio-epidemiological co-causality. Scientific Reports. 2022;12. pmid:35388071
273. Ruan T, Kong Q, McBride SK, Sethjiwala A, Lv Q. Cross-platform analysis of public responses to the 2019 Ridgecrest earthquake sequence on Twitter and Reddit. Scientific Reports. 2022;12. pmid:35102161
274. Van Natta J, Masadeh S, Hamilton B. Investigating the Impacts of YouTube’s Content Policies on Journalism and Political Discourse. Proceedings of the ACM on Human-Computer Interaction. 2023;7.
275. Malla SJ, Alphonse PJA. Fake or real news about COVID-19? Pretrained transformer model to detect potential misleading news. European Physical Journal: Special Topics. 2022;231: 3347–3356. pmid:35039760
276. Javed RT, Usama M, Iqbal W, Qadir J, Tyson G, Castro I, et al. A deep dive into COVID-19-related messages on WhatsApp in Pakistan. Social Network Analysis and Mining. 2022;12. pmid:34804253
277. Tokojima Machado DF, Fioravante de Siqueira A, Rallo Shimizu N, Gitahy L. It-which-must-not-be-named: COVID-19 misinformation, tactics to profit from it and to evade content moderation on YouTube. Frontiers in Communication. 2022;7.
278. Yoon HY, You KH, Kwon JH, Kim JS, Rha SY, Chang YJ, et al. Understanding the Social Mechanism of Cancer Misinformation Spread on YouTube and Lessons Learned: Infodemiological Study. Journal of Medical Internet Research. 2022;24. pmid:36374534
279. Tripathi J, de Vries RAJ, Lemke M. The three-step persuasion model on YouTube: A grounded theory study on persuasion in the protein supplements industry. Frontiers in Artificial Intelligence. 2022;5. pmid:36311552
280. Bacsu J-D, Cammer A, Ahmadi S, Azizi M, Grewal K, Green S, et al. Examining the Twitter Discourse on Dementia During Alzheimer’s Awareness Month in Canada: Infodemiology Study. JMIR Formative Research. 2022;6. pmid:36287605
281. Ghasiya P, Sasahara K. Rapid Sharing of Islamophobic Hate on Facebook: The Case of the Tablighi Jamaat Controversy. Social Media and Society. 2022;8.
282. Gagnon-Dufresne M-C, Azevedo Dantas M, Abreu Silva K, Souza dos Anjos J, Pessoa Carneiro Barbosa D, Porto Rosa R, et al. Social Media and the Influence of Fake News on Global Health Interventions: Implications for a Study on Dengue in Brazil. International Journal of Environmental Research and Public Health. 2023;20. pmid:37047915
283. Aleksandric A, Anderson HI, Melcher S, Nilizadeh S, Wilson GM. Spanish Facebook Posts as an Indicator of COVID-19 Vaccine Hesitancy in Texas. Vaccines. 2022;10. pmid:36298580
284. Melton CA, White BM, Davis RL, Bednarczyk RA, Shaban-Nejad A. Fine-tuned Sentiment Analysis of COVID-19 Vaccine-Related Social Media Data: Comparative Study. Journal of Medical Internet Research. 2022;24. pmid:36174192
285. Jain S, Dhaon SR, Majmudar S, Zimmermann LJ, Mordell L, Walker G, et al. Empowering Health Care Workers on Social Media to Bolster Trust in Science and Vaccination During the Pandemic: Making IMPACT Using a Place-Based Approach. Journal of Medical Internet Research. 2022;24. pmid:35917489
286. Stoner MCD, Browne EN, Tweedy D, Pettifor AE, Maragh-Bass AC, Toval C, et al. Exploring Motivations for COVID-19 Vaccination among Black Young Adults in 3 Southern US States: Cross-sectional Study. JMIR Formative Research. 2022;6. pmid:35969516
287. Denniss E, Lindberg R, McNaughton SA. Development of Principles for Health-Related Information on Social Media: Delphi Study. Journal of Medical Internet Research. 2022;24. pmid:36074544
288. Nistor A, Zadobrischi E. The Influence of Fake News on Social Media: Analysis and Verification of Web Content during the COVID-19 Pandemic by Advanced Machine Learning Methods and Natural Language Processing. Sustainability (Switzerland). 2022;14.
289. Albertus RW, Makoza F. Habermasian analysis of reports on Presidential tweets influencing politics in the USA. International Politics. 2023;60: 330–349.
290. Varshney D, Vishwakarma DK. A unified approach of detecting misleading images via tracing its instances on web and analyzing its past context for the verification of multimedia content. International Journal of Multimedia Information Retrieval. 2022;11: 445–459. pmid:35847991
291. Ramos MM, Machado RO, Cerqueira-Santos E. “It’s true! I saw it on WhatsApp”: Social Media, COVID-19, and Political-Ideological Orientation in Brazil. Trends in Psychology. 2022;30: 570–590.
292. Zinke-Allmang A, Hassan R, Bhatia A, Gorur K, Shipow A, Ogolla C, et al. Use of digital media for family planning information by women and their social networks in Kenya: A qualitative study in peri-urban Nairobi. Frontiers in Sociology. 2022;7. pmid:35992509
293. Tong C, Margolin D, Chunara R, Niederdeppe J, Taylor T, Dunbar N, et al. Search Term Identification Methods for Computational Health Communication: Word Embedding and Network Approach for Health Content on YouTube. JMIR Medical Informatics. 2022;10. pmid:36040760
294. Ruiz-Núñez C, Segado-Fernández S, Jiménez-Gómez B, Hidalgo PJJ, Magdalena CSR, Pollo MDCÁ, et al. Bots’ Activity on COVID-19 Pro and Anti-Vaccination Networks: Analysis of Spanish-Written Messages on Twitter. Vaccines. 2022;10. pmid:36016126
295. Skafle I, Nordahl-Hansen A, Quintana DS, Wynn R, Gabarron E. Misinformation About COVID-19 Vaccines on Social Media: Rapid Review. Journal of Medical Internet Research. 2022;24. pmid:35816685
296. Regmi PR, Dhakal Adhikari S, Aryal N, Wasti SP, van Teijlingen E. Fear, Stigma and Othering: The Impact of COVID-19 Rumours on Returnee Migrants and Muslim Populations of Nepal. International Journal of Environmental Research and Public Health. 2022;19. pmid:35897356
297. Pang H, Liu J, Lu J. Tackling fake news in socially mediated public spheres: A comparison of Weibo and WeChat. Technology in Society. 2022;70.
  298. 298. Röchert D, Shahi GK, Neubaum G, Ross B, Stieglitz S. The Networked Context of COVID-19 Misinformation: Informational Homogeneity on YouTube at the Beginning of the Pandemic. Online Social Networks and Media. 2021;26. pmid:34493994
  299. 299. Hernandez-Sanchez S, Moreno-Perez V, Garcia-Campos J, Marco-Lledó J, Navarrete-Muñoz EM, Lozano-Quijada C. Twelve tips to make successful medical infographics. Medical Teacher. 2021;43: 1353–1359. pmid:33342338
  300. 300. Satu MS, Khan MI, Mahmud M, Uddin S, Summers MA, Quinn JMW, et al. TClustVID: A novel machine learning classification model to investigate topics and sentiment in COVID-19 tweets. Knowledge-Based Systems. 2021;226. pmid:33972817
  301. 301. Barfar A. Cognitive and affective responses to political disinformation in Facebook. Computers in Human Behavior. 2019;101: 173–179.
  302. 302. Aswani R, Kar AK, Ilavarasan PV. Experience: Managing misinformation in social media-insights for policymakers from Twitter analytics. Journal of Data and Information Quality. 2019;12.
  303. 303. Chen X-X, Wagner AL, Zheng X-B, Xie J-Y, Boulton ML, Chen K-Y, et al. Hepatitis E vaccine in China: Public health professional perspectives on vaccine promotion and strategies for control. Vaccine. 2019;37: 6566–6572. pmid:31353258
  304. 304. Maweu JM. “Fake Elections”? Cyber Propaganda, Disinformation and the 2017 General Elections in Kenya. African Journalism Studies. 2019;40: 62–76.
  305. 305. Alsyouf M, Stokes P, Hur D, Amasyali A, Ruckle H, Hu B. ‘Fake News’ in urology: evaluating the accuracy of articles shared on social media in genitourinary malignancies. BJU International. 2019;124: 701–706. pmid:31044493
  306. 306. Soron TR. “I will kill myself”–The series of posts in Facebook and unnoticed departure of a life. Asian Journal of Psychiatry. 2019;44: 55–57. pmid:31323535
  307. 307. Workneh TW. Ethiopia’s Hate Speech Predicament: Seeking Antidotes Beyond a Legislative Response. African Journalism Studies. 2019;40: 123–139.
  308. 308. Valenzuela S, Halpern D, Katz JE, Miranda JP. The Paradox of Participation Versus Misinformation: Social Media, Political Engagement, and the Spread of Misinformation. Digital Journalism. 2019;7: 802–823.
  309. 309. Hemphill TA. ‘Techlash’, responsible innovation, and the self-regulatory organization. Journal of Responsible Innovation. 2019;6: 240–247.
  310. 310. Goobie GC, Guler SA, Johannson KA, Fisher JH, Ryerson CJ. YouTube videos as a source of misinformation on idiopathic pulmonary fibrosis. Annals of the American Thoracic Society. 2019;16: 572–579. pmid:30608877
  311. 311. Loeb S, Sengupta S, Butaney M, Macaluso JN, Czarniecki SW, Robbins R, et al. Dissemination of Misinformative and Biased Information about Prostate Cancer on YouTube. European Urology. 2019;75: 564–567. pmid:30502104
  312. 312. Deshpande AK, Deshpande SB, O’Brien CA. Hyperacusis and social media trends. Hearing, Balance and Communication. 2019;17: 1–11.
  313. 313. Gutiérrez-Martín A, Torrego-González A, Vicente-Mariño M. Media education with the monetization of YouTube: The loss of truth as an exchange value. Cultura y Educacion. 2019;31: 267–295.
314. Mehta N, Gupta A, Nissan M. All I Have Learned, I Have Learned from Google: Why Today’s Facial Rejuvenation Patients are Prone to Misinformation, and the Steps We Can Take to Contend with Unreliable Information. Facial Plastic Surgery. 2019;35: 387–392. pmid:31412380
315. Duncombe C. Digital diplomacy: Emotion and identity in the public realm. The Hague Journal of Diplomacy. 2019;14: 102–116.
316. Al Khaja KAJ, AlKhaja AK, Sequeira RP. Drug information, misinformation, and disinformation on social media: a content analysis study. Journal of Public Health Policy. 2018;39: 343–357. pmid:29795521
317. Liu Q, Yu F, Wu S, Wang L. Mining significant microblogs for misinformation identification: An attention-based approach. ACM Transactions on Intelligent Systems and Technology. 2018;9.
318. Aquino F, Donzelli G, De Franco E, Privitera G, Lopalco PL, Carducci A. The web and public confidence in MMR vaccination in Italy. Vaccine. 2017;35: 4494–4498. pmid:28736200
319. Bombaci SP, Farr CM, Gallo HT, Mangan AM, Stinson LT, Kaushik M, et al. Using Twitter to communicate conservation science from a professional conference. Conservation Biology. 2016;30: 216–225. pmid:26081769
320. Mazer JP, Thompson B, Cherry J, Russell M, Payne HJ, Gail Kirby E, et al. Communication in the face of a school crisis: Examining the volume and content of social media mentions during active shooter incidents. Computers in Human Behavior. 2015;53: 238–248.
321. Chen B, Zhang JM, Jiang Z, Shao J, Jiang T, Wang Z, et al. Media and public reactions toward vaccination during the “hepatitis B vaccine crisis” in China. Vaccine. 2015;33: 1780–1785. pmid:25731787
322. Zhao J, Cao N, Wen Z, Song Y, Lin Y-R, Collins C. #FluxFlow: Visual analysis of anomalous information spreading on social media. IEEE Transactions on Visualization and Computer Graphics. 2014;20: 1773–1782. pmid:26356891
323. Lau AYS, Gabarron E, Fernandez-Luque L, Armayones M. Social media in health – what are the safety concerns for health consumers? Health Information Management Journal. 2012;41: 30–35. pmid:23705132
324. Fortinsky KJ, Fournier MR, Benchimol EI. Internet and electronic resources for inflammatory bowel disease: A primer for providers and patients. Inflammatory Bowel Diseases. 2012;18: 1156–1163. pmid:22147497
325. Pierpoint L. Fukushima, Facebook and Feeds: Informing the Public in a Digital Era. Electricity Journal. 2011;24: 53–58.
326. Abulaish M, Kumari N, Fazil M, Singh BK. A graph-theoretic embedding-based approach for rumor detection in Twitter. 2019. pp. 466–470.
327. Théro H, Vincent EM. Investigating Facebook’s interventions against accounts that repeatedly share misinformation. Information Processing and Management. 2022;59.
328. Thomas MJ, Lal V, Baby AK, Rabeeh VP M, James A, Raj AK. Can technological advancements help to alleviate COVID-19 pandemic? A review. Journal of Biomedical Informatics. 2021;117. pmid:33862231
329. Reddy PS, DeBord LC, Gupta R, Kapadia P, Mohanty A, Dao H. Antibiotics for acne vulgaris: using Instagram to seek insight into the patient perspective. Journal of Dermatological Treatment. 2021;32: 188–192. pmid:31190574
330. Zenone M, Kenworthy N. Pre-emption strategies to block taxes on sugar-sweetened beverages: A framing analysis of Facebook advertising in support of Washington state initiative-1634. Global Public Health. 2022;17: 1854–1867. pmid:34542004
331. COVID-19: fighting panic with information. The Lancet. 2020;395: 537. pmid:32087777
332. Zhong B. Going beyond fact-checking to fight health misinformation: A multi-level analysis of the Twitter response to health news stories. International Journal of Information Management. 2023;70: 102626.
333. Koch H, Franco ZE, O’Sullivan T, DeFino MC, Ahmed S. Community views of the Federal Emergency Management Agency’s “whole community” strategy in a complex US city: Re-envisioning societal resilience. Technological Forecasting and Social Change. 2017;121: 31–38.
334. Roud E. Collective improvisation in emergency response. Safety Science. 2021;135: 105104.
335. Social Networking App Revenue and Usage Statistics (2023). In: Business of Apps [Internet]. [cited 25 May 2023]. Available:
336. Asare-Donkoh F. Impact of social media on Ghanaian High School students. Library Philosophy and Practice. 2018; 1–33.
337. Boll S. Multitube—where web 2.0 and multimedia could meet. IEEE MultiMedia. 2007;14: 9–13.
338. Hansen D, Shneiderman B, Smith MA. Analyzing social media networks with NodeXL: Insights from a connected world. Morgan Kaufmann; 2010.
339. Lemke C. Innovation through technology. 21st century skills: Rethinking how students learn. 2010; 243–272.
340. Al-Asadi MA, Tasdemir S. Using artificial intelligence against the phenomenon of fake news: a systematic literature review. Combating Fake News with Computational Intelligence Techniques. 2022; 39–54.
341. Ozbay FA, Alatas B. Fake news detection within online social media using supervised artificial intelligence algorithms. Physica A: Statistical Mechanics and its Applications. 2020;540: 123174.
342. Giglou HB, Razmara J, Rahgouy M, Sanaei M. LSACoNet: A Combination of Lexical and Conceptual Features for Analysis of Fake News Spreaders on Twitter. 2020.
343. De Bruyn A, Viswanathan V, Beh YS, Brock JK-U, Von Wangenheim F. Artificial intelligence and marketing: Pitfalls and opportunities. Journal of Interactive Marketing. 2020;51: 91–105.
344. Xu R, Chen H, Liang X, Wang H. Priority-based constructive algorithms for scheduling agile earth observation satellites with total priority maximization. Expert Systems with Applications. 2016;51: 195–206.
345. De Paor S, Heravi B. Information literacy and fake news: How the field of librarianship can help combat the epidemic of fake news. The Journal of Academic Librarianship. 2020;46: 102218.
346. Rose J. To believe or not to believe: An epistemic exploration of fake news, truth, and the limits of knowing. Postdigital Science and Education. 2020;2: 202–216.
347. Altheide DL. Terrorism and the Politics of Fear. Cultural Studies ↔ Critical Methodologies. 2006;6: 415–439.
348. Islam AN, Laato S, Talukder S, Sutinen E. Misinformation sharing and social media fatigue during COVID-19: An affordance and cognitive load perspective. Technological Forecasting and Social Change. 2020;159: 120201. pmid:32834137
349. Lutzke L, Drummond C, Slovic P, Árvai J. Priming critical thinking: Simple interventions limit the influence of fake news about climate change on Facebook. Global Environmental Change. 2019;58: 101964.
350. LaGarde J, Hudgins D. Fact vs. fiction: Teaching critical thinking skills in the age of fake news. International Society for Technology in Education; 2018.
351. Waisbord S. Truth is what happens to news: On journalism, fake news, and post-truth. Journalism Studies. 2018;19: 1866–1878.
352. Roozenbeek J, Schneider CR, Dryhurst S, Kerr J, Freeman AL, Recchia G, et al. Susceptibility to misinformation about COVID-19 around the world. Royal Society Open Science. 2020;7: 201199. pmid:33204475
353. Kim A, Moravec PL, Dennis AR. When do details matter? News source evaluation summaries and details against misinformation on social media. International Journal of Information Management. 2023;72: 102666.
354. Jarrahi MH. Artificial intelligence and the future of work: Human-AI symbiosis in organizational decision making. Business Horizons. 2018;61: 577–586.
355. Spector JM, Ma S. Inquiry and critical thinking skills for the next generation: from artificial intelligence back to human intelligence. Smart Learning Environments. 2019;6: 1–11.
356. Willingham D. How to teach critical thinking. 2019.
357. Larson DA. Artificial Intelligence: Robots, Avatars and the Demise of the Human Mediator.
359. Walter N, Cohen J, Holbert RL, Morag Y. Fact-Checking: A Meta-Analysis of What Works and for Whom. Political Communication. 2020;37: 350–375.
360. Nasir JA, Khan OS, Varlamis I. Fake news detection: A hybrid CNN-RNN based deep learning approach. International Journal of Information Management Data Insights. 2021;1: 100007.
361. Ng LHX, Taeihagh A. How does fake news spread? Understanding pathways of disinformation spread through APIs. Policy & Internet. 2021;13: 560–585.
362. Shu K, Sliva A, Wang S, Tang J, Liu H. Fake News Detection on Social Media: A Data Mining Perspective. ACM SIGKDD Explorations Newsletter. 2017;19: 22–36.
363. Nosich GM. The Need for Comprehensiveness in Critical Thinking Instruction. Inquiry: Critical Thinking Across the Disciplines. 1996;16: 50–66.
364. Veit WC. Culture Against Critical Thinking: Help Wanted. Inquiry: Critical Thinking Across the Disciplines. 1995;14: 88–91.
365. Horne CL. Internet governance in the “post-truth era”: Analyzing key topics in “fake news” discussions at IGF. Telecommunications Policy. 2021;45: 102150.
366. Vivian B. Campus Misinformation: The Real Threat to Free Speech in American Higher Education. Oxford University Press; 2022.
367. Ditto PH, Liu BS, Clark CJ, Wojcik SP, Chen EE, Grady RH, et al. At Least Bias Is Bipartisan: A Meta-Analytic Comparison of Partisan Bias in Liberals and Conservatives. Perspectives on Psychological Science. 2019;14: 273–291. pmid:29851554
368. Weeks BE. Emotions, Partisanship, and Misperceptions: How Anger and Anxiety Moderate the Effect of Partisan Bias on Susceptibility to Political Misinformation. Journal of Communication. 2015;65: 699–719.
369. Bago B, Rand DG, Pennycook G. Fake news, fast and slow: Deliberation reduces belief in false (but not true) news headlines. Journal of Experimental Psychology: General. 2020;149: 1608–1613. pmid:31916834
370. Pereira A, Harris E, Van Bavel JJ. Identity concerns drive belief: The impact of partisan identity on the belief and dissemination of true and false news. Group Processes & Intergroup Relations. 2023;26: 24–47.
371. Garz M, Sörensen J, Stone DF. Partisan selective engagement: Evidence from Facebook. Journal of Economic Behavior & Organization. 2020;177: 91–108.
372. Hopp T, Ferrucci P, Vargo CJ. Why Do People Share Ideologically Extreme, False, and Misleading Content on Social Media? A Self-Report and Trace Data–Based Analysis of Countermedia Content Dissemination on Facebook and Twitter. Human Communication Research. 2020;46: 357–384.
373. Huszár F, Ira Ktena S, O’Brien C, Hardt M. Algorithmic amplification of politics on Twitter. 2021 [cited 29 May 2023]. Available: pmid:34934011
374. Singer JB. Border patrol: The rise and role of fact-checkers and their challenge to journalists’ normative boundaries. Journalism. 2021;22: 1929–1946.
375. Hameleers M, Powell TE, Van Der Meer TGLA, Bos L. A Picture Paints a Thousand Lies? The Effects and Mechanisms of Multimodal Disinformation and Rebuttals Disseminated via Social Media. Political Communication. 2020;37: 281–301.
376. Mullainathan S, Shleifer A. The Market for News. American Economic Review. 2005;95: 1031–1053.
377. Acredolo C, O’Connor J. On the Difficulty of Detecting Cognitive Uncertainty. Human Development. 2010;34: 204–223.
378. Lomas D. Cognitive artifacts: an art-science engagement. Proceedings of the 6th ACM SIGCHI Conference on Creativity & Cognition. New York, NY, USA: Association for Computing Machinery; 2007. p. 289.
379. McLane S, Turley JP, Esquivel A, Engebretson J, Smith KA, Wood GL, et al. Concept Analysis of Cognitive Artifacts. Advances in Nursing Science. 2010;33: 352–362. pmid:21068556
380. De Nicola A, Villani ML, Costantino F, Di Gravio G, Falegnami A, Patriarca R. A Knowledge Graph to Digitalise Functional Resonance Analyses in the Safety Area. Resilience in a Digital Age. Springer; 2022. pp. 259–269.
381. Wang Q, Mao Z, Wang B, Guo L. Knowledge graph embedding: A survey of approaches and applications. IEEE Transactions on Knowledge and Data Engineering. 2017;29: 2724–2743.
382. Falegnami A, Bernabei M, Colabianchi S, Tronci M. Yet Another Warehouse KPI’s Collection. Sanremo, Riviera dei Fiori; 2022.
383. Patriarca R, Falegnami A, Bilotta F. Embracing simplexity: the role of artificial intelligence in peri-procedural medical safety. Expert Review of Medical Devices. 2019;16: 77–79. pmid:30602324
384. Phillips D, Watson L, Willis M. Benefits of comprehensive integrated reporting: by standardizing disparate information sources, financial executives can eliminate the narrow perspectives of the elephant and the blind man parable and “see” beyond merely information silos or reports. Financial Executive. 2011;27: 26–31.
385. Rilling J, Witte R, Schuegerl P, Charland P. Beyond information silos—An omnipresent approach to software evolution. International Journal of Semantic Computing. 2008;2: 431–468.
386. Rubin VL. Misinformation and Disinformation: Detecting Fakes with the Eye and AI. Springer Nature; 2022.
387. Omoregie U, Ryall K. Misinformation Matters: Online Content and Quality Analysis. CRC Press; 2023.
388. Zelenkauskaitė A. Creating chaos online: Disinformation and subverted post-publics. University of Michigan Press; 2022.
389. Zoglauer T. Post-Truth Phenomenology. In: Zoglauer T, editor. Constructed Truths: Truth and Knowledge in a Post-truth World. Wiesbaden: Springer Fachmedien; 2023. pp. 1–33.
390. Gounari P. From Twitter to Capitol Hill: far-right authoritarian populist discourses, social media and critical pedagogy. 2021 [cited 30 May 2023]. Available:
391. Bakir V, McStay A. Optimising Emotions, Incubating Falsehoods: How to Protect the Global Civic Body from Disinformation and Misinformation. Springer Nature; 2022.
392. Cover R, Haw A, Thompson JD. Fake News in Digital Cultures: Technology, Populism and Digital Misinformation. Emerald Group Publishing; 2022.
393. Stebbins LF. Building Back Truth in an Age of Misinformation. Rowman & Littlefield; 2023.