
Users Polarization on Facebook and Youtube

Abstract

Users online tend to select information that supports and adheres to their beliefs, and to form polarized groups sharing the same view—e.g. echo chambers. Algorithms for content promotion may favour this phenomenon by accounting for users' preferences and thus limiting the exposure to unsolicited content. To shed light on this question, we perform a comparative study of how the same content (videos) is consumed on different online social media—i.e. Facebook and YouTube—over a sample of 12M users. Our findings show that content drives the emergence of echo chambers on both platforms. Moreover, we show that users' commenting patterns are accurate predictors of the formation of echo chambers.

Introduction

The diffusion of social media caused a paradigm shift in the creation and consumption of information. We passed from a mediated (e.g., by journalists) to a more disintermediated selection process. Such disintermediation elicits the tendency of users to a) select information adhering to their system of beliefs—i.e. confirmation bias—and b) form groups of like-minded people where they polarize their views—i.e. echo chambers [1–6]. Polarized communities emerge around diverse and heterogeneous narratives, often reflecting extreme disagreement with mainstream news and recommended practices. The emergence of polarization in online environments might reduce viewpoint heterogeneity, which has long been viewed as an important component of democratic societies [7, 8].

Confirmation bias has been shown to play a pivotal role in the diffusion of rumors online [9]. However, on online social media, different algorithms foster personalized content according to users' tastes—i.e. they show users viewpoints they already agree with. The role of these algorithms in the emergence of echo chambers is still a matter of debate. Indeed, little is known about the factors affecting the algorithms' outcomes. Facebook promotes posts according to the News Feed algorithm, which helps users see more stories from the friends they interact with the most; the number of comments and likes a post receives, and what kind of story it is—e.g. photo, video, status update—can also make a post more likely to appear [10]. Conversely, YouTube promotes videos through Watch Time, which prioritizes videos that lead to a longer overall viewing session over those that receive more clicks [11]. Not much is known about the role of cognitive factors in driving users to aggregate in echo chambers supporting their preferred narrative. Recent studies suggest confirmation bias as one of the driving forces of content selection, which eventually leads to the emergence of polarized communities where users acquire confirmatory information and ignore dissenting content [12–17].

To shed light on the role of content promotion algorithms in the emergence of echo chambers, we analyze the behavior of users exposed to the same content on different platforms—i.e. YouTube and Facebook. We focus on Facebook posts linking YouTube videos posted by Science and Conspiracy pages. We then compare users' interactions with these videos on both platforms.

We limit our analysis to Science and Conspiracy for two main reasons: a) scientific news and conspiracy-like news are two very distinct and conflicting narratives; b) scientific pages share the common mission of diffusing scientific knowledge and rational thinking, while the alternative ones resort to unsubstantiated rumors.

Indeed, conspiracy-like pages disseminate myth narratives and controversial information, usually lacking supporting evidence and often contradicting official news. Moreover, the spreading of misinformation on online social media has become so widespread a phenomenon that the World Economic Forum listed massive digital misinformation as one of the main threats to modern society [16, 18].

In spite of different debunking strategies, unsubstantiated rumors—e.g. those supporting anti-vaccine claims, climate change denial, and alternative medicine myths—keep proliferating in polarized communities emerging in online environments [9, 14], leading to a climate of disengagement from mainstream society and recommended practices. A recent study [19] pointed out the inefficacy of debunking and the concrete risk of a backfire effect [20, 21] among the usual and most committed consumers of conspiracy-like narratives.

We believe that additional insights about cognitive factors and behavioral patterns driving the emergence of polarized environments are crucial to understand and develop strategies to mitigate the spreading of online misinformation.

In this paper, using a quantitative analysis of a massive dataset (12M users), we compare consumption patterns of videos supporting scientific and conspiracy-like news on Facebook and YouTube. We extend our analysis by investigating the polarization dynamics—i.e. how users become polarized comment after comment. On both platforms, we observe that some users interact only with a specific kind of content from the beginning, whereas others start their commenting activity by switching between contents supporting different narratives. The vast majority of the latter—after the initial switching phase—start consuming mainly one type of information, becoming polarized towards one of the two conflicting narratives. Finally, by means of a multinomial logistic model, we are able to predict with good precision whether a user will become polarized towards a given narrative or will continue to switch between information supporting competing narratives. The evolution of polarization is so similar on Facebook and YouTube that the statistical learning model trained on Facebook is able to predict with good precision the polarization of YouTube users, and vice versa. Our findings show that content, more than the algorithms, drives the aggregation of users into different echo chambers.

Results and Discussion

We start our analysis by focusing on the statistical signatures of video consumption on Facebook and YouTube. The focus is on all videos posted by conspiracy-like and scientific pages on Facebook. We compare the consumption patterns of the same video on both Facebook and YouTube. On Facebook, a like stands for positive feedback to the post; a share expresses the will to increase the visibility of a given piece of information; and a comment is the way in which online collective debates take form around the topic promoted by posts. Similarly, on YouTube a like stands for positive feedback to the video, and a comment is the way in which online collective debates grow around the topic promoted by videos.

Contents Consumption across Facebook and YouTube

As a preliminary analysis we measure the similarity of users' reactions to the same videos on both platforms. Focusing on the consumption patterns of YouTube videos posted on Facebook pages, we compute the Spearman's rank correlation coefficients between users' actions on Facebook posts and the related YouTube videos (see Fig 1). We find strong correlations in how users like, comment on, and share videos on Facebook and YouTube. Despite the different content promotion algorithms, information reverberates in a similar way on the two platforms.

Fig 1. Correlation Matrix.

Spearman’s rank correlation coefficients between users’ actions on Facebook posts and the related YouTube videos.

https://doi.org/10.1371/journal.pone.0159641.g001

By means of the Mantel test [22] we find a statistically significant (simulated p-value < 0.01, based on 10^4 Monte Carlo replicates), high, and positive (r = 0.987) correlation between the correlation matrices of Science and Conspiracy. In particular, we find high and positive correlations between users' actions on YouTube videos for both Science and Conspiracy, indicating a strong monotone increasing relationship between views, likes, and comments. Furthermore, we observe mild and positive correlations between users' actions on Facebook posts linking YouTube videos for both Science and Conspiracy, suggesting a monotone increasing relationship between likes, comments, and shares. Conversely, we find positive yet low correlations between users' actions across YouTube videos and the Facebook posts linking those videos for both Science and Conspiracy, implying that the success—in terms of received attention—of videos posted on YouTube does not ensure a comparable success on Facebook, and vice versa. This evidence suggests that the social response to information is similar across contents and platforms.
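
For illustration, a minimal Python sketch of this step: it computes the Spearman correlation matrix of per-video engagement counts and compares two such matrices with a permutation-based Mantel test. The table layout and column names are hypothetical placeholders, not the original data schema.

```python
import numpy as np
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical per-video table: one row per video, with its Facebook and YouTube
# engagement counts (column names are illustrative, not the original schema).
COLS = ["fb_likes", "fb_comments", "fb_shares", "yt_views", "yt_likes", "yt_comments"]

def spearman_matrix(videos: pd.DataFrame) -> np.ndarray:
    """Spearman rank-correlation matrix over the engagement columns."""
    rho, _ = spearmanr(videos[COLS].values)
    return rho

def mantel_test(m1, m2, n_perm=10_000, seed=0):
    """Permutation (Monte Carlo) Mantel test between two correlation matrices."""
    rng = np.random.default_rng(seed)
    iu = np.triu_indices_from(m1, k=1)           # off-diagonal upper triangle
    r_obs = np.corrcoef(m1[iu], m2[iu])[0, 1]    # observed matrix correlation
    count = 0
    for _ in range(n_perm):
        p = rng.permutation(m1.shape[0])         # permute rows and columns of one matrix
        count += np.corrcoef(m1[p][:, p][iu], m2[iu])[0, 1] >= r_obs
    return r_obs, (count + 1) / (n_perm + 1)     # simulated one-sided p-value

# r, p_value = mantel_test(spearman_matrix(science_videos), spearman_matrix(conspiracy_videos))
```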

As a further analysis we focus on the volume of actions received by each post. In Fig 2 we show the empirical Complementary Cumulative Distribution Functions (CCDFs) of the consumption patterns of videos supporting conflicting narratives—i.e. Science and Conspiracy—in terms of comments and likes on Facebook and YouTube. The double-log scale plots highlight the power law behavior of each distribution. The top right panel shows the CCDFs of the number of likes received by Science (xmin = 197 and θ = 1.96) and Conspiracy (xmin = 81 and θ = 1.91) on Facebook. The top left panel shows the CCDFs of the number of comments received by Science (xmin = 35 and θ = 2.37) and Conspiracy (xmin = 22 and θ = 2.23) on Facebook. The bottom right panel shows the CCDFs of the number of likes received by Science (xmin = 1,609 and θ = 1.65) and Conspiracy (xmin = 1,175 and θ = 1.75) on YouTube. The bottom left panel shows the CCDFs of the number of comments received by Science (xmin = 666 and θ = 1.70) and Conspiracy (xmin = 629 and θ = 1.77) on YouTube.
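
A minimal sketch of how such tail estimates (xmin, θ) can be obtained, assuming the engagement counts are available as plain arrays and using the `powerlaw` package (which implements the ML procedure of Clauset et al. [28]); the array name in the usage comment is hypothetical.

```python
import numpy as np
import powerlaw  # Alstott et al. implementation of the ML procedure in [28]

def fit_tail(counts):
    """ML fit of a discrete power-law tail to engagement counts.

    Returns the estimated lower cutoff x_min and the scaling exponent theta."""
    data = np.asarray(counts)
    data = data[data > 0]                    # the tail fit needs positive counts
    fit = powerlaw.Fit(data, discrete=True)  # x_min chosen by minimising the KS distance
    return fit.power_law.xmin, fit.power_law.alpha

# Hypothetical usage on the likes received by Science videos on Facebook:
# xmin, theta = fit_tail(science_fb_likes)
# The Fit object also exposes plot_ccdf() to draw the empirical 1 - F(x) of Fig 2.
```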

Fig 2. Consumption Patterns of Videos on Facebook and YouTube.

The empirical CCDFs, 1 − F(x), show the consumption patterns of videos supporting conflicting narratives—i.e. Science and Conspiracy—in terms of comments (A and C) and likes (B and D) on Facebook and YouTube.

https://doi.org/10.1371/journal.pone.0159641.g002

The social response to different contents does not present significant differences between Facebook and YouTube. Users' response to content is similar on both platforms and for both types of content: Science and Conspiracy videos receive similar amounts of attention and reverberate in a similar way.

Polarized and Homogeneous Communities

As a secondary analysis we want to check whether the content has a polarizing effect on users. Hence, we focus on users' activity across the different types of content. Fig 3 shows the Probability Density Functions (PDFs) of how about 12M users distribute their comments between Science and Conspiracy posts (polarization ρ) on both Facebook and YouTube. We observe sharply peaked bimodal distributions: users concentrate their activity on one of the two narratives. To quantify the degree of polarization we use the Bimodality Coefficient (BC), and we find that the BC is very high for both Facebook and YouTube; in particular, BCFB = 0.964 and BCYT = 0.928. Moreover, we observe that the percentage of polarized users (users with ρ < 0.05 or ρ > 0.95) is 93.6% on Facebook and 87.8% on YouTube; therefore, two well separated communities support competing narratives in both online social networks.
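
A minimal sketch of this step, assuming a hypothetical comment table with a user id and a content category per row, and using the definition of ρ given in Methods:

```python
import pandas as pd

def user_polarization(comments: pd.DataFrame) -> pd.Series:
    """rho_u = fraction of a user's comments left on Conspiracy posts (videos).

    `comments` is a hypothetical table with one row per comment and columns
    "user_id" and "category" taking values "science" or "conspiracy"."""
    counts = comments.groupby(["user_id", "category"]).size().unstack(fill_value=0)
    s = counts.get("science", 0)       # comments on Science content
    c = counts.get("conspiracy", 0)    # comments on Conspiracy content
    return c / (s + c)

# rho = user_polarization(fb_comments)
# share_polarized = ((rho < 0.05) | (rho > 0.95)).mean()   # 0.936 reported for Facebook
```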

Fig 3. Polarization on Facebook and YouTube.

The PDFs of the polarization ρ show that the vast majority of users is polarized towards one of the two conflicting narratives—i.e. Science and Conspiracy—on both Facebook and YouTube.

https://doi.org/10.1371/journal.pone.0159641.g003

Content has a polarizing effect: users focus on specific types of content and aggregate into separate groups—echo chambers—independently of the platform and of the content promotion algorithm.

To further detail such segregation, we analyze how polarized users—i.e. users having more than 95% of their interactions with one narrative—behave with respect to their preferred content. Fig 4 shows the empirical CCDFs of the number of comments left by all polarized users on Facebook and YouTube, with estimated scaling exponents θFB = 2.13 and θYT = 2.29. We observe a very narrow difference (HDI90 = [−0.18, −0.13]) between the tail behaviors of the two distributions. Moreover, Fig 5 shows the empirical CCDFs of the number of comments left by users polarized on either Science or Conspiracy on both Facebook (θSci = 2.29 and θCon = 2.31, with HDI90 = [−0.018, −0.009]) and YouTube (θSci = 2.86 and θCon = 2.41, with HDI90 = [0.44, 0.46]). Users supporting conflicting narratives behave similarly on Facebook, whereas on YouTube the power law distributions slightly differ in their scaling parameters.

Fig 4. Commenting Activity of Polarized Users.

The empirical CCDFs, 1 − F(x), of the number of comments left by polarized users on Facebook and YouTube.

https://doi.org/10.1371/journal.pone.0159641.g004

Fig 5. Commenting Activity of Users Polarized towards Conflicting Narratives.

The empirical CCDFs, 1 − F(x), of the number of comments left by users polarized on scientific narratives and conspiracy theories on Facebook (A) and YouTube (B).

https://doi.org/10.1371/journal.pone.0159641.g005

The aggregation of users around conflicting narratives leads to the emergence of echo chambers. Once inside such homogeneous and polarized communities, users supporting either narrative behave in a similar way, irrespective of the platform and content promotion algorithm.

Prediction of Users Polarization

We now want to characterize how content attracts users—i.e. how users' polarization evolves comment after comment. We consider random samples of 400 users who left at least 100 comments, and we compute the mobility of each user across different contents over time. On both Facebook and YouTube, we observe that some users interact with a specific kind of content only, whereas others start their commenting activity by switching between contents supporting different narratives. The latter—after an initial switching phase—start focusing on only one type of information, becoming polarized towards one of the two conflicting narratives. We exploit such a regularity to derive a data-driven model to forecast users' polarization. Indeed, by means of a multinomial logistic model, we are able to predict whether a user will become polarized towards a given narrative or will continue to switch between information supporting competing narratives. In particular, we consider the users' polarization after n comments, ρn with n = 1, …, 100, as a predictor to classify users into three different classes: Polarized in Science (N = 400), Not Polarized (N = 400), Polarized in Conspiracy (N = 400).

Fig 6 shows precision, recall, and accuracy of the classification tasks on Facebook and YouTube as a function of n. On both online social networks, we find that the model's performance measures monotonically increase as a function of n for each class. Focusing on accuracy, significant results (greater than 0.70) are obtained already for low values of n. A suitable compromise between classification performance and the required number of comments seems to be n = 50, which provides an accuracy greater than 0.80 for each class on both YouTube and Facebook. To assess how the results generalize to independent datasets and to limit problems such as overfitting, we split the YouTube and Facebook user datasets into training sets (N = 1000) and test sets (N = 200), and we perform Monte Carlo cross validations with 10^3 iterations. Results of the Monte Carlo validations are shown in Table 1 and confirm the goodness of the model.
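
A minimal sketch of this classification step, assuming hypothetical arrays X (the polarization ρn after n comments) and y (the three class labels), and using scikit-learn's logistic regression as a stand-in for whatever implementation the authors actually used:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split

def monte_carlo_cv(X, y, n_iter=1_000, test_size=200, seed=0):
    """Monte Carlo cross validation of the multinomial logistic classifier.

    X: polarization after n comments, shape (n_users, 1), e.g. rho_50 for n = 50.
    y: class labels in {"conspiracy", "not_polarized", "science"} (hypothetical names)."""
    rng = np.random.default_rng(seed)
    prec, rec = [], []
    for _ in range(n_iter):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=test_size, stratify=y,
            random_state=int(rng.integers(2**31 - 1)))
        # With the default lbfgs solver, scikit-learn fits a multinomial (softmax)
        # model for multi-class targets.
        clf = LogisticRegression(max_iter=1_000)
        y_hat = clf.fit(X_tr, y_tr).predict(X_te)
        prec.append(precision_score(y_te, y_hat, average=None))
        rec.append(recall_score(y_te, y_hat, average=None))
    return np.mean(prec, axis=0), np.mean(rec, axis=0)

# mean_prec, mean_rec = monte_carlo_cv(rho_50.reshape(-1, 1), labels)
# The per-class accuracy defined in Methods can be computed from the confusion matrix
# (see the sketch in the Classification Performance Measures section).
```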

Table 1. Monte Carlo Cross Validation.

Mean and standard deviation (obtained by averaging the results of 10^3 iterations) of precision, recall, and accuracy of the classification task for users Polarized in Conspiracy, Not Polarized, and Polarized in Science.

https://doi.org/10.1371/journal.pone.0159641.t001

Fig 6. Performance measures of the classification task.

Precision, recall, and accuracy of the classification task for users Polarized in Conspiracy, Not Polarized, Polarized in Science on Facebook and YouTube as a function of n. On both online social networks, we find that the model’s performance measures monotonically increase as a function of n. Focusing on the accuracy, significant results (greater than 0.70) are obtained for low values of n.

https://doi.org/10.1371/journal.pone.0159641.g006

We conclude that the early interactions of users with content are an accurate predictor of the preferential attachment to a community and thus of the emergence of echo chambers. Moreover, in Table 2, we show that the evolution of polarization on Facebook and YouTube is so alike that the same model (with n = 50), when trained on Facebook users (N = 1200) to classify YouTube users (N = 1200), achieves an accuracy greater than 0.80 for each class. Similarly, using YouTube users as the training set to classify Facebook users leads to similar performance.
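
The cross-platform check reduces to training on one platform's users and scoring on the other's; a brief hedged sketch, with the same hypothetical arrays as above:

```python
from sklearn.linear_model import LogisticRegression

def cross_platform_accuracy(X_train, y_train, X_test, y_test):
    """Train the multinomial logit on one platform's users, score it on the other's."""
    clf = LogisticRegression(max_iter=1_000)   # multinomial for multi-class targets
    return clf.fit(X_train, y_train).score(X_test, y_test)

# Hypothetical usage, with rho after n = 50 comments as the single predictor:
# acc_fb_to_yt = cross_platform_accuracy(X_facebook, y_facebook, X_youtube, y_youtube)
# acc_yt_to_fb = cross_platform_accuracy(X_youtube, y_youtube, X_facebook, y_facebook)
```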

Table 2. Performance measures of classification.

Precision, recall, and accuracy of the classification task for users Polarized in Conspiracy, Not Polarized, Polarized in Science when YouTube users are used as training set to classify Facebook users (top table), and when Facebook users are used as training set to classify YouTube users (bottom table).

https://doi.org/10.1371/journal.pone.0159641.t002

Conclusions

Algorithms for content promotion are supposed to be the main determinants of the polarization effect arising out of online social media. Still, not much is known about the role of cognitive factors in driving users to aggregate in echo chambers supporting their favorite narrative. Recent studies suggest confirmation bias as one of the driving forces of content selection, which eventually leads to the emergence of polarized communities [12–15].

Our findings show that conflicting narratives lead to the aggregation of users in homogeneous echo chambers, irrespective of the online social network and the algorithm of content promotion.

Indeed, in this work, we characterize the behavioral patterns of users dealing with the same contents but different mechanisms of content promotion. In particular, we investigate whether the different mechanisms regulating content promotion on Facebook and YouTube lead to the emergence of homogeneous echo chambers.

We study how users interact with two very distinct and conflicting narratives—i.e. conspiracy-like and scientific news—on Facebook and YouTube. Using extensive quantitative analysis, we find the emergence of polarized and homogeneous communities supporting competing narratives that behave similarly on both online social networks. Moreover, we analyze the evolution of polarization, i.e. how users become polarized towards a narrative. Again, we observe strong similarities between the behavioral patterns of users supporting conflicting narratives on the different online social networks.

Such a common behavior allows us to derive a statistical learning model to predict with good precision whether a user will become polarized towards a certain narrative or will continue to switch between contents supporting different narratives. Finally, we observe that the behavioral patterns on Facebook and YouTube are so similar that we are able to predict with good precision the polarization of Facebook users by training the model on YouTube users, and vice versa.

Methods

Ethics Statement

The entire data collection process has been carried out exclusively through the Facebook Graph API [23] and the YouTube Data API [24], which are both publicly available, and for the analysis we used only publicly available data (users with privacy restrictions are not included in the dataset). The pages from which we downloaded data are public Facebook and YouTube entities. User content contributing to such entities is also public unless the user's privacy settings specify otherwise, in which case it is not available to us. We abided by the terms, conditions, and privacy policies of the websites (Facebook/YouTube).

Data Collection

The Facebook dataset is composed of 413 US public pages divided into Conspiracy and Science news. The first category (Conspiracy) includes pages diffusing alternative information sources and myth narratives—pages which disseminate controversial information, usually lacking supporting evidence and often contradicting official news. The second category (Science) includes scientific institutions and scientific press having the main mission of diffusing scientific knowledge. Such a space of investigation is defined with the same approach as in [19], with the support of different Facebook groups very active in monitoring conspiracy narratives. Pages were carefully selected and verified according to their self-description. For both categories of pages we downloaded all the posts (and their respective users' interactions) over a timespan of 5 years (Jan 2010 to Dec 2014). To our knowledge, the final dataset is the complete set of all scientific and conspiracy-like information sources active in the US Facebook scenario to date.

We selected all posts on Facebook linking a video on YouTube and then, through the API, downloaded the videos' related metadata. To build the YouTube video database we downloaded likes, comments, and descriptions of each video cited/shared in Facebook posts using the YouTube Data API [25]. Each video link on Facebook contains a unique id that identifies the resource on both Facebook and YouTube. The comment thread on YouTube, with its time sequence, is the equivalent of the feed timeline on a Facebook page. The techniques used to analyse Facebook data can therefore be applied to YouTube data with minimal modifications. The YouTube dataset is composed of about 17K videos linked by Facebook posts supporting Science or Conspiracy news. Videos linked by posts in Science pages are considered as videos disseminating scientific knowledge, whereas videos linked by posts in Conspiracy pages are considered as videos diffusing controversial information and supporting myth and conspiracy-like theories. Such a categorization was validated by all the authors and by Facebook groups very active in monitoring conspiracy narratives. The exact breakdown of the data is shown in Tables 3, 4, 5 and 6. Summarizing, the dataset is composed of all public videos posted by the Facebook pages listed in the Page List section and their related instances on YouTube.
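
As an illustration of the YouTube side of this pipeline, a minimal sketch that extracts the video id from a linked URL and fetches the public statistics of a video through the YouTube Data API v3; the helper names are hypothetical and the Facebook Graph API crawling of posts is omitted.

```python
import requests
from urllib.parse import parse_qs, urlparse

YT_VIDEOS_ENDPOINT = "https://www.googleapis.com/youtube/v3/videos"

def youtube_id(url):
    """Extract the YouTube video id from a link shared in a Facebook post."""
    parsed = urlparse(url)
    if parsed.hostname and "youtu.be" in parsed.hostname:
        return parsed.path.lstrip("/")
    return parse_qs(parsed.query).get("v", [None])[0]

def video_statistics(video_id, api_key):
    """Fetch public view/like/comment counts for one video via the YouTube Data API v3."""
    params = {"part": "snippet,statistics", "id": video_id, "key": api_key}
    response = requests.get(YT_VIDEOS_ENDPOINT, params=params, timeout=10)
    item = response.json()["items"][0]
    return item["statistics"]  # e.g. viewCount, likeCount, commentCount
```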

Preliminaries and Definitions

Polarization of Users.

Polarization of users, ρu ∈ [0, 1], is defined as the fraction of comments that a user u left on posts (videos) supporting conspiracy-like narratives on Facebook (YouTube). In mathematical terms, given su, the number of comments left on Science posts by user u, and cu, the number of comments left on Conspiracy posts by user u, the polarization of u is defined as

ρu = cu / (su + cu).

We then consider users with ρu > 0.95 as users polarized towards Conspiracy, and users with ρu < 0.05 as users polarized towards Science.

Bimodality Coefficient.

The Bimodality Coefficient (BC) [26] is defined as

BC = (μ3² + 1) / (μ4 + 3 (n − 1)² / ((n − 2)(n − 3))),

with μ3 referring to the skewness of the distribution and μ4 referring to its excess kurtosis, both moments being corrected for sample bias using the sample size n.

The BC of a given empirical distribution is then compared to a benchmark value of BCcrit = 5/9 ≈ 0.555 that would be expected for a uniform distribution; higher values point towards bimodality, whereas lower values point toward unimodality.
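
A short sketch of this computation, assuming the per-user polarization values are available as a plain array and using SciPy's bias-corrected sample moments:

```python
import numpy as np
from scipy.stats import kurtosis, skew

def bimodality_coefficient(x):
    """Bimodality Coefficient with sample-bias-corrected moments (Pfister et al. [26])."""
    x = np.asarray(x)
    n = len(x)
    g = skew(x, bias=False)        # sample skewness (mu_3)
    k = kurtosis(x, bias=False)    # sample excess kurtosis (mu_4)
    return (g**2 + 1) / (k + 3 * (n - 1)**2 / ((n - 2) * (n - 3)))

# Values above BC_crit = 5/9 ≈ 0.555 point towards bimodality; the paper reports
# BC = 0.964 for Facebook and BC = 0.928 for YouTube.
```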

Multinomial Logistic Model.

Multinomial logistic regression is a classification method that generalizes logistic regression to multi-class problems, i.e. with more than two possible discrete outcomes [27]. Such a model is used to predict the probabilities of the different possible outcomes of a categorically distributed dependent variable, given a set of independent variables. In the multinomial logistic model we assume that the log-odds of each response follow a linear model,

ηij = log(πij / πiJ) = αj + xi βj,

where αj is a constant and βj is a vector of regression coefficients, for j = 1, 2, …, J − 1. Such a model is analogous to a logistic regression model, except that the probability distribution of the response is multinomial instead of binomial, and we have J − 1 equations instead of one. The J − 1 multinomial logistic equations contrast each of the categories j = 1, 2, …, J − 1 with the baseline category J. If J = 2 the multinomial logistic model reduces to the simple logistic regression model.

The multinomial logistic model may also be written in terms of the original probabilities πij rather than the log-odds. Indeed, assuming that ηiJ = 0, we can write

πij = exp(ηij) / Σk=1,…,J exp(ηik), for j = 1, 2, …, J.
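
A tiny numeric sketch of this mapping from linear predictors to class probabilities; the coefficient values in the usage comment are purely illustrative.

```python
import numpy as np

def multinomial_probs(x, alphas, betas):
    """pi_j = exp(eta_j) / sum_k exp(eta_k), with eta_J = 0 for the baseline class J.

    alphas: (J-1,) intercepts; betas: (J-1, p) coefficients; x: (p,) predictors."""
    eta = np.append(alphas + betas @ x, 0.0)  # baseline category gets eta_J = 0
    eta -= eta.max()                          # for numerical stability only
    p = np.exp(eta)
    return p / p.sum()

# Illustrative (made-up) coefficients with the single predictor rho_n = 0.9:
# multinomial_probs(np.array([0.9]), np.array([-2.0, 0.5]), np.array([[4.0], [0.1]]))
```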

Classification Performance Measures.

To assess the goodness of our model we use three different measures of classification performance: precision, recall, and accuracy. For each class i, we compute the number of true positive cases TPi, true negative cases TNi, false positive cases FPi, and false negative cases FNi. Then, for each class i the precision of the classification is defined as

precisioni = TPi / (TPi + FPi),

the recall is defined as

recalli = TPi / (TPi + FNi),

and the accuracy is defined as

accuracyi = (TPi + TNi) / (TPi + TNi + FPi + FNi).
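
These per-class quantities can be read off a confusion matrix; a minimal sketch (the one-vs-rest accuracy follows the definition above rather than scikit-learn's overall accuracy):

```python
from sklearn.metrics import confusion_matrix

def per_class_metrics(y_true, y_pred, labels):
    """Precision, recall, and one-vs-rest accuracy per class, as defined above."""
    cm = confusion_matrix(y_true, y_pred, labels=labels)
    total = cm.sum()
    metrics = {}
    for i, label in enumerate(labels):
        tp = cm[i, i]
        fp = cm[:, i].sum() - tp   # predicted as class i but belonging to another class
        fn = cm[i, :].sum() - tp   # belonging to class i but predicted as another class
        tn = total - tp - fp - fn
        metrics[label] = {
            "precision": tp / (tp + fp),
            "recall": tp / (tp + fn),
            "accuracy": (tp + tn) / total,
        }
    return metrics
```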

Power law distributions.

Scaling exponents of power law distributions are estimated via maximum likelihood (ML) as shown in [28]. To provide a full probabilistic assessment about whether two distributions are similar, we estimate the posterior distribution of the difference between the scaling exponents through an Empirical Bayes method.

Suppose we have two samples of observations, A and B, following power law distributions. For sample A, we use the ML estimate of the scaling parameter, θ̂A, as the location hyper-parameter of a Normal distribution with a given scale hyper-parameter. Such a Normal distribution represents the prior distribution, p(θA), of the scaling exponent θA. Then, according to the Bayesian paradigm, the prior distribution p(θA) is updated into a posterior distribution p(θA|xA):

p(θA|xA) ∝ p(xA|θA) p(θA),

where p(xA|θA) is the likelihood. The posterior distribution is obtained via the Metropolis-Hastings algorithm, i.e. a Markov Chain Monte Carlo (MCMC) method used to obtain a sequence of random samples from a probability distribution for which direct sampling is difficult [29–31]. To obtain reliable posterior distributions, we run 50,000 iterations (5,000 burned), which proved to ensure the convergence of the MCMC algorithm.

The posterior distribution of θB can be computed following the same steps. Once both posterior distributions, p(θA|xA) and p(θB|xB), are derived, we compute the distribution of the difference between the scaling exponents, p(θA − θB), by subtracting samples drawn from the two posteriors.

Then, by observing the 90% High Density Interval (HDI90) of p(θA − θB), we can draw a full probabilistic assessment of the similarity between the two distributions.
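
A minimal sketch of this procedure under stated assumptions: a continuous power-law likelihood above xmin, a Normal prior centred on the ML estimate, and a random-walk Metropolis-Hastings sampler. The prior scale, the proposal width, and the percentile-based stand-in for the HDI are illustrative choices, not the paper's exact settings.

```python
import numpy as np

def power_law_loglik(theta, x, xmin):
    """Continuous power-law log-likelihood above xmin (Clauset et al. [28])."""
    if theta <= 1:
        return -np.inf
    x = x[x >= xmin]
    n = len(x)
    return n * np.log(theta - 1) - n * np.log(xmin) - theta * np.sum(np.log(x / xmin))

def posterior_samples(x, xmin, theta_ml, prior_sd=0.5, step=0.05,
                      n_iter=50_000, burn=5_000, seed=0):
    """Random-walk Metropolis-Hastings draws from p(theta | x) with a
    Normal(theta_ml, prior_sd) prior centred on the ML estimate."""
    rng = np.random.default_rng(seed)

    def log_post(t):
        return power_law_loglik(t, x, xmin) - 0.5 * ((t - theta_ml) / prior_sd) ** 2

    theta, draws = theta_ml, []
    for _ in range(n_iter):
        proposal = theta + rng.normal(0.0, step)
        if np.log(rng.uniform()) < log_post(proposal) - log_post(theta):
            theta = proposal
        draws.append(theta)
    return np.array(draws[burn:])

# Posterior of the difference theta_A - theta_B and an interval summary
# (a central 90% interval is used here as a simple stand-in for the HDI):
# diff = posterior_samples(x_a, xmin_a, th_a, seed=1) - posterior_samples(x_b, xmin_b, th_b, seed=2)
# interval_90 = np.percentile(diff, [5, 95])
```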

Acknowledgments

Funding for this work was provided by EU FET project MULTIPLEX nr. 317532, SIMPOL nr. 610704, DOLFINS nr. 640772, SOBIGDATA nr. 654024. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. We want to thank Geoff Hall and “Skepti Forum” for providing fundamental support in defining the atlas of conspiracy news sources in the US Facebook.

Author Contributions

  1. Conceived and designed the experiments: AB AS WQ.
  2. Performed the experiments: AB MP.
  3. Analyzed the data: AB MP AS GC BU WQ.
  4. Contributed reagents/materials/analysis tools: AB MP FZ MD.
  5. Wrote the paper: AB FZ MD MP AS GC BU WQ.

References

  1. Cacciatore MA, Scheufele DA, Iyengar S. The End of Framing As We Know it … and the Future of Media Effects. Mass Communication and Society;
  2. Brown J, Broderick AJ, Lee N. Word of mouth communication within online communities: Conceptualizing the online social network. Journal of Interactive Marketing. 2007;21(3):2–20.
  3. Kahn R, Kellner D. New media and internet activism: from the ‘Battle of Seattle’ to blogging. New Media and Society. 2004;6(1):87–95.
  4. Quattrociocchi W, Conte R, Lodi E. Opinions Manipulation: Media, Power and Gossip. Advances in Complex Systems. 2011;14(4):567–586.
  5. Quattrociocchi W, Caldarelli G, Scala A. Opinion dynamics on interacting networks: media competition and social influence. Scientific Reports. 2014;4. pmid:24861995
  6. Kumar R, Mahdian M, McGlohon M. Dynamics of Conversations. In: Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. KDD’10. New York, NY, USA: ACM; 2010. p. 553–562.
  7. Dewey J. The Public and Its Problems. Holt; 1927.
  8. Habermas J. Between Facts and Norms: Contributions to a Discourse Theory of Law and Democracy. MIT Press; 1998.
  9. Del Vicario M, Bessi A, Zollo F, Petroni F, Scala A, Caldarelli G, et al. The spreading of misinformation online. Proceedings of the National Academy of Sciences. 2016;113(3):554–559.
  10. Facebook. How News Feed Works; 2015. Website. Available from: https://www.facebook.com/help/327131014036297/.
  11. YouTube. Watch Time optimization tips; 2015. Website. Available from: https://support.google.com/youtube/answer/141805?hl=en.
  12. Bessi A, Scala A, Rossi L, Zhang Q, Quattrociocchi W. The economy of attention in the age of (mis)information. Journal of Trust Management. 2014.
  13. Bessi A, Coletto M, Davidescu GA, Scala A, Caldarelli G, Quattrociocchi W. Science vs Conspiracy: Collective Narratives in the Age of Misinformation. PLoS ONE. 2015;10(2):e0118093. pmid:25706981
  14. Anagnostopoulos A, Bessi A, Caldarelli G, Del Vicario M, Petroni F, Scala A, et al. Viral misinformation: the role of homophily and polarization. arXiv preprint arXiv:1411.2893. 2014.
  15. Bessi A, Zollo F, Del Vicario M, Scala A, Caldarelli G, Quattrociocchi W. Trend of Narratives in the Age of Misinformation. PLoS ONE. 2015;10(8):e0134641. pmid:26275043
  16. Quattrociocchi W. How does misinformation spread online? World Economic Forum. 2015.
  17. Zollo F, Novak PK, Del Vicario M, Bessi A, Mozetič I, Scala A, et al. Emotional dynamics in the age of misinformation. PLoS ONE. 2015;10(9):e0138740. pmid:26422473
  18. Howell L. Digital wildfires in a hyperconnected world. Report 2013, World Economic Forum. 2013.
  19. Zollo F, Bessi A, Del Vicario M, Scala A, Caldarelli G, Shekhtman L, et al. Debunking in a World of Tribes. 2015.
  20. Nyhan B, Reifler J. When Corrections Fail: The Persistence of Political Misperceptions. Political Behavior. 2010;32:303–330.
  21. Bessi A, Caldarelli G, Vicario MD, Scala A, Quattrociocchi W. Social determinants of content selection in the age of (mis)information. Proceedings of SOCINFO 2014. 2014;abs/1409.2651.
  22. Mantel N. The detection of disease clustering and a generalized regression approach. Cancer Research. 1967;27(2):209–220. pmid:6018555
  23. Facebook. Using the Graph API; 2013. Website. Available from: https://developers.facebook.com/docs/graph-api/using-graph-api/.
  24. YouTube. YouTube Data API; 2015. Website. Available from: https://developers.google.com/youtube/v3/.
  25. YouTube. YouTube API; 2013. Website. Available from: https://developers.google.com/youtube/v3/docs/.
  26. Pfister R, Schwarz KA, Janczyk M, Dale R, Freeman J. Good things peak in pairs: A note on the Bimodality Coefficient. Frontiers in Psychology. 2013;4(700).
  27. Greene W. Econometric Analysis. Prentice Hall; 2011.
  28. Clauset A, Shalizi CR, Newman MEJ. Power-Law Distributions in Empirical Data. SIAM Review. 2009;51(4):661–703.
  29. Gelman A, Carlin J, Stern H, Rubin D. Bayesian Data Analysis. CRC Press; 2003.
  30. Andrieu C, de Freitas N, Doucet A, Jordan M. An introduction to MCMC for machine learning. Machine Learning. 2003;(50):5–43.
  31. Hartig F, Calabrese JM, Reineking B, Wiegand T, Huth A. Statistical inference for stochastic simulation models–theory and application. Ecology Letters. 2011;(14):816–827. pmid:21679289