Modeling the spread of fake news on Twitter

Fake news can have a significant negative impact on society because of the growing use of mobile devices and the worldwide increase in Internet access. It is therefore essential to develop a simple mathematical model to understand the online dissemination of fake news. In this study, we propose a point process model of the spread of fake news on Twitter. The proposed model describes the spread of a fake news item as a two-stage process: initially, fake news spreads as a piece of ordinary news; then, when most users start recognizing the falsity of the news item, the information about that falsity spreads as another news story. We validate this model using two datasets of fake news items spread on Twitter. We show that the proposed model is superior to the current state-of-the-art methods in accurately predicting the evolution of the spread of a fake news item. Moreover, a text analysis suggests that our model appropriately infers the correction time, i.e., the moment when Twitter users start realizing the falsity of the news item. The proposed model contributes to understanding the dynamics of the spread of fake news on social media. Its ability to extract a compact representation of the spreading pattern could be useful in the detection and mitigation of fake news.


Introduction
As smartphones become widespread, people are increasingly seeking and consuming news from social media rather than from the traditional media (e.g., newspapers and TV). Social media has enabled us to share various types of information and to discuss it with other readers. However, it also seems to have become a hotbed of fake news with potentially negative influences on society. For example, Carvalho et al. [1] found that a false report of United Airlines parent company's bankruptcy in 2008 caused the company's stock price to drop by 76% in a few minutes; it closed at 11% below the previous day's close, with a negative effect persisting for more than six days. In the field of politics, Bovet and Makse [2] found that 25% of the news outlets linked from tweets before the 2016 U.S. presidential election were either fake or extremely biased, and their causal analysis suggests that the activities of Trump's supporters influenced the activities of the top fake news spreaders. In addition to stock markets and elections, fake news has emerged for other events, including natural disasters such as the Great East Japan Earthquake in 2011 [3,4], often facilitating widespread panic or criminal activities [5]. In this study, we investigate the question of how fake news spreads on Twitter. This question is relevant to an important research question in social science: how does unreliable information or a rumor diffuse in society? It also has practical implications for fake news detection and mitigation [6,7]. Previous studies mainly focused on the path taken by fake news items as they spread on social networks [8,9], which clarified the structural aspects of the spread. However, little is known about the temporal or dynamic aspects of how fake news spreads online.
Here we focus on Twitter and assume that fake news spreads as a two-stage process. In the first stage, a fake news item spreads as an ordinary news story. The second stage occurs after a correction time when most users realize the falsity of the news story. Then, the information regarding that falsehood spreads as another news story. We formulate this assumption by extending the Time-Dependent Hawkes process (TiDeH) [10], a state-of-the-art model for predicting re-sharing dynamics on Twitter. To validate the proposed model, we compiled two datasets of fake news items from Twitter.
The contributions of this study are summarized as follows:
• We propose a simple point process model based on the assumption that fake news spreads as a two-stage process.
• We evaluate the predictive performance of the proposed model, which demonstrates its effectiveness.
• We conduct a text mining analysis to validate the assumption of the proposed model.

Related work
Predicting the future popularity of online content has been studied extensively [11,12]. A standard approach for predicting popularity is to apply a machine learning framework, in which the prediction problem is formulated as a classification [13,14] or regression [15] task. Another approach to the prediction problem is to develop a temporal model and fit the model parameters using a training dataset. This approach consists of two types of models: time series and point process models. A time series model describes the number of posts in a fixed window. For example, Matsubara et al. [16] proposed SpikeM to reproduce temporal activities on blogs, Google Trends, and Twitter. In addition, Proskurnia et al. [17] proposed a time series model that considers a promotion effect (e.g., promotion through social media and the front page of the petition site) to predict the popularity dynamics of an online petition. A point process model describes the posted times in a probabilistic way by incorporating the self-exciting nature of information spreading [18,19]. Point process models have also motivated theoretical studies about the effect of a network structure and event times on the diffusion dynamics [20]. Various point process models have been proposed for predicting the final number of re-shares [19,21] and their temporal pattern [10] on social media. Furthermore, these models have been applied to interpret the endogenous and exogenous shocks to the activity on YouTube [22] and Twitter [23]. To the best of our knowledge, the proposed model is the first to incorporate the two-stage process that is an essential characteristic of the spread of fake news. Although some studies [24] proposed models for the spread of fake news, they focused on modeling the qualitative aspects and did not evaluate prediction performance on a real dataset. Our contribution is also related to the study of fake news detection.
There have been numerous attempts to detect fake news and rumors automatically [6,7]. Typically, fake news is detected based on the textual content. For instance, Hassan et al. [25] extracted multiple categories of features from the sentences and applied a support vector machine classifier to detect fake news. Rashkin et al. [26] developed a long short-term memory (LSTM) neural network model for the fact-checking of news. The temporal information of a cascade, e.g., timings of posts and re-shares triggered by a news story, might improve fake news detection performance. Kwon et al. [27] showed that temporal information improves rumor classification performance. It has also been shown that temporal information improves the fake news detection performance [28], rumor stance classification [29], source identification of misinformation [30], and detection of fake retweeting accounts [31]. A deep neural network model [28] can also incorporate temporal information to improve the fake news detection performance. However, a limitation of the neural network model is that it can utilize only a part of the temporal information and cannot handle cascades with many user responses. The proposed model parameters can be used as a compact representation of temporal information, which helps us overcome this limitation.

Modeling the information spread of fake news
We develop a point process model for describing the dynamics of the spread of a fake news item. A schematic of the proposed model is shown in Fig 1. The proposed model is based on the following two assumptions.
• Users do not know the falsity of a news item in the early stage. The fake news spreads as an ordinary news story (Fig 1: 1st stage).
• Users recognize the falsity of the news item around a correction time t_c. The information that the original news is fake spreads as another news story (Fig 1: 2nd stage).

Fig 1. Schematic of the proposed model. We propose a model that describes how posts or re-shares related to a fake news item spread on social media (fake news tweets). Blue circles represent the time stamps of the tweets. The proposed model assumes that the information spread is described as a two-stage process. Initially, a fake news item spreads as a novel news story (1st stage). After a correction time t_c, Twitter users recognize the falsity of the news item. Then, the information that the original news item is false spreads as another news story (2nd stage). The posting activity related to the fake news, λ(t) (right: black), is given by the summation of the activity of the two stages (left: magenta and green). https://doi.org/10.1371/journal.pone.0250419.g001
In other words, the proposed model assumes that the spread of a fake news item consists of two cascades: 1) the cascade of the original news story and 2) the cascade asserting the falsity of the news story. In this study, we use the term "cascade" to mean the tweets or retweets triggered by a piece of information. To describe each cascade, we use the Time-Dependent Hawkes process model, which properly accounts for the circadian activity of the users and the aging of information.

Time-Dependent Hawkes process (TiDeH): Model of a single cascade
We describe a point process model of a single cascade: the information spreading triggered by a news story. In point process models [32], the probability of obtaining a post or reshare in a small time interval [t, t + Δt] is written as λ(t)Δt, where λ(t) is the instantaneous rate of the cascade, that is, the intensity function. The intensity function of the TiDeH model [10] depends on the previous posts in the following manner:

λ(t) = p(t) h(t),    (1)

and the memory function h(t) is defined as follows:

h(t) = Σ_{i: t_i < t} d_i ϕ(t − t_i),    (2)

where p(t) is the infection rate, t_i is the time of the i-th post, and d_i is the number of followers of the i-th posting user. The infection rate p(t) incorporates two main properties of the cascade, the circadian rhythm and the decay owing to the aging of information:

p(t) = a [1 − r sin(2π/T_m (t + θ_0))] e^{−(t − t_0)/τ},

where the time of the original post is assumed to be t_0 = 0 and T_m = 24 hours is the period of oscillation. The parameters a, r, θ_0, and τ correspond to the intensity, the relative amplitude, the phase of the oscillation, and the time constant of the decay, respectively. The memory kernel ϕ(s) represents the probability distribution of the reaction time of a follower. A heavy-tailed distribution was adopted for the memory kernel [10,19]:

ϕ(s) = c_0 for 0 < s ≤ s_0,  ϕ(s) = c_0 (s/s_0)^{−(1+γ)} for s > s_0,  ϕ(s) = 0 otherwise.

The parameters were set to c_0 = 6.94 × 10^{−4} (/second), s_0 = 300 seconds, and γ = 0.242.
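For concreteness, the TiDeH intensity above can be sketched in Python. This is a minimal illustration written by us, not the authors' released code; all function and variable names are our own.

```python
# Minimal sketch of the TiDeH intensity lambda(t) = p(t) * h(t).
# Constants follow the text: c0, s0, gamma are the memory-kernel parameters.
import numpy as np

C0, S0, GAMMA = 6.94e-4, 300.0, 0.242   # kernel constants (times in seconds)
T_M = 24 * 3600.0                        # circadian period: 24 hours

def memory_kernel(s):
    """Heavy-tailed reaction-time distribution phi(s)."""
    s = np.asarray(s, dtype=float)
    out = np.zeros_like(s)
    early = (s > 0) & (s <= S0)
    late = s > S0
    out[early] = C0
    out[late] = C0 * (s[late] / S0) ** (-(1.0 + GAMMA))
    return out

def infection_rate(t, a, r, theta0, tau):
    """p(t): circadian modulation times exponential aging (t0 = 0)."""
    return a * (1.0 - r * np.sin(2.0 * np.pi / T_M * (t + theta0))) * np.exp(-t / tau)

def intensity(t, event_times, followers, a, r, theta0, tau):
    """lambda(t) = p(t) * sum over past posts of d_i * phi(t - t_i)."""
    past = event_times < t
    h = np.sum(followers[past] * memory_kernel(t - event_times[past]))
    return infection_rate(t, a, r, theta0, tau) * h
```

The kernel is flat for reaction times up to s_0 = 300 seconds and then decays as a power law, so early followers dominate the memory term.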

Proposed model of the spread of fake news
We formulate a point process model for the spread of a fake news item. Let us assume that the spread consists of two cascades, namely, one owing to the original news item and the other owing to the correction of the news item. The activity of the fake news cascade can be written as the sum of two cascades using TiDeH:

λ(t) = p_1(t) h_1(t) + p_2(t) h_2(t),    (3)

where h_1(t) and h_2(t) are memory functions of the same form as the memory function h(t) of TiDeH. The first term p_1(t)h_1(t) represents the rate of the cascade caused by the original news item:

p_1(t) = a_1 [1 − r sin(2π/T_m (t + θ_0))] e^{−min(t, t_c)/τ_1},    (4)

where a_1 represents the impact of the original news item on the spreading, τ_1 is the decay time constant, min(t, t_c) represents the smaller of the two values (t or t_c), and t_c is the correction time of the fake news item. The second term p_2(t)h_2(t) represents the cascade induced by the correction:

p_2(t) = a_2 [1 − r sin(2π/T_m (t + θ_0))] e^{−(t − t_c)/τ_2} for t > t_c, and p_2(t) = 0 otherwise,    (5)

where a_2 represents the impact of the falsity of the news on the spreading, and τ_2 is the decay time constant. It is assumed that the circadian parameters of p_2(t) are the same as those of p_1(t). Mathematically, the proposed model includes TiDeH as a special case. Let us consider the proposed model that satisfies the following conditions:

a_1 = ã,  τ_1 = τ̃,  a_2 = 0,  t_c ≥ T_obs.    (6)

We can see that the proposed model is equivalent to TiDeH (with parameters a = ã and τ = τ̃) by substituting Eq (6) into Eqs (3), (4) and (5).
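The two-stage infection rate can be sketched as follows. The functional forms are our reconstruction from the surrounding text (the min(t, t_c) cap and the shared circadian parameters are stated there); the naming is ours, and setting r = 0 disables the circadian term.

```python
# Sketch of the two-stage infection rate of the proposed model.
import numpy as np

T_M = 24 * 3600.0  # circadian period (seconds)

def circadian(t, r, theta0):
    """Shared circadian modulation of both stages."""
    return 1.0 - r * np.sin(2.0 * np.pi / T_M * (t + theta0))

def p1(t, a1, tau1, r, theta0, tc):
    """First-stage rate: the original news story; aging is capped at t_c."""
    return a1 * circadian(t, r, theta0) * np.exp(-np.minimum(t, tc) / tau1)

def p2(t, a2, tau2, r, theta0, tc):
    """Second-stage rate: the correction, active only after t_c."""
    t = np.asarray(t, dtype=float)
    rate = a2 * circadian(t, r, theta0) * np.exp(-(t - tc) / tau2)
    return np.where(t > tc, rate, 0.0)
```

The total activity is then the sum of the two rates, each multiplied by its memory function, as in the model equations.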

Parameter fitting
Here, we describe the procedure for fitting the parameters from the event time series (e.g., the tweeted times). Seven parameters {a_1, τ_1; a_2, τ_2; r, θ_0; t_c} were determined by maximizing the log-likelihood function

log L = Σ_i log λ(t_i) − ∫_0^{T_obs} λ(t) dt,

where t_i is the i-th tweeted time, λ(t) is the intensity given by Eq (3), and T_obs is the observation time. We first fix the correction time t_c and optimize the other parameters using the Newton method [33], provided by SciPy [34], within a range of 12 < τ_1, τ_2 < 2T_obs (hours). The correction time is separately optimized using Brent's method [35] within a range of 0.1 T_obs < t_c < 0.9 T_obs. The code for fitting the parameters from the tweeted times is available on GitHub [36]. We validate the fitting procedure by applying it to synthetic data generated by the proposed model (Eq 3). Fig 2 shows the dependence of the estimation accuracy on the observation time T_obs. To evaluate the accuracy, we calculated the median and interquartile ranges of the estimates from 100 trials. The estimation error decreases as the observation time increases. The result suggests that this fitting procedure can reliably estimate the parameters for sufficiently long observations (≥ 36 hours). The medians of the absolute relative errors obtained from 36 hours of synthetic data are 18%, 11%, 38%, 38%, and 10% for a_1, τ_1, a_2, τ_2, and t_c, respectively. The estimation accuracy of the second cascade parameters (a_2, τ_2) is worse than that of the first cascade parameters (a_1, τ_1). This seems to be caused by the insufficiency of the observed data: while the first cascade parameters are estimated from the entire data, the second cascade parameters are estimated only from the observations after the correction time t_c. Moreover, the model parameters are not identifiable [37,38] in the case of a_1 = a_2 e^{−t_c/τ_2} and τ_1 = τ_2. Because the proposed model is equivalent to TiDeH (a_2 = 0, t_c ≥ T_obs) in this case, other parameter sets can also reproduce the observed data. Fig 3 shows that the fitting procedure can estimate the parameters accurately except in the non-identifiable domain.
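The likelihood evaluation at the core of this fitting procedure can be sketched as follows. This is a simplified illustration with our own naming: the integral is approximated on a time grid, and SciPy's generic bounded scalar optimizer stands in for the exact Newton/Brent setup described above.

```python
# Sketch of point process likelihood evaluation for parameter fitting.
import numpy as np
from scipy.optimize import minimize_scalar

def neg_log_likelihood(intensity_fn, event_times, t_obs, n_grid=2000):
    """-log L = -sum_i log lambda(t_i) + integral of lambda over [0, T_obs]."""
    grid = np.linspace(1e-3, t_obs, n_grid)
    lam_grid = np.array([intensity_fn(t) for t in grid])
    # Trapezoidal approximation of the compensator integral.
    integral = np.sum(0.5 * (lam_grid[1:] + lam_grid[:-1]) * np.diff(grid))
    lam_events = np.array([intensity_fn(t) for t in event_times])
    return -np.sum(np.log(np.maximum(lam_events, 1e-12))) + integral
```

In the full procedure, one would minimize this function over {a_1, τ_1, a_2, τ_2, r, θ_0} for a fixed t_c, and then optimize t_c over [0.1 T_obs, 0.9 T_obs] in an outer loop. As a sanity check, for a constant rate c the minimizer of the negative log-likelihood is the event count divided by T_obs.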

Dataset
We evaluate the proposed model and examine the correction time of fake news based on two datasets of the spread of fake news items. Datasets of the spread of fake news based on retweets of the original news post [39,40] are publicly available. However, the sharing of fake news can be more complex than simple retweeting. To cover the information spread in detail, we manually compiled two datasets of fake news items spread on Twitter. In our datasets, 61% and 20% of the tweets are retweets of original posts in the Recent Fake News dataset and the 2011 Tohoku Earthquake and Tsunami dataset, respectively.

Recent Fake News (RFN)
We collected the spread of 10 fake news items from two fact-checking sites, Politifact.com [41] and Snopes.com [42], between March and May 2019. PolitiFact is an independent, non-partisan site for online fact-checking, mainly of U.S. political news and politicians' statements. Snopes.com, one of the first online fact-checking websites, handles political and other social and topical issues. Using the Twitter API, tweets highly relevant to the fake news stories were crawled based on the keywords and the URLs. We selected six fake news stories based on two conditions: 1) the number of posts must be greater than 300, and 2) the observation period must be longer than 36 hours (as indicated by the experiments conducted on synthetic data, Fig 2). A summary of the collected fake news stories is presented in Table 1.

Fake news on the 2011 Tohoku earthquake and tsunami (Tohoku)
Numerous fake news stories emerged after the 2011 earthquake off the Pacific coast of Tohoku [3,4]. We collected tweets posted in Japanese from March 12 to March 24, 2011, by using sample streams from the Twitter API. There were a total of 17,079,963 tweets. We first identified 80 fake news items based on a fake news verification article [43] and obtained the keywords and related URLs of the news items. Then, we extracted the tweets highly relevant to the fake news. Finally, we selected 19 fake news stories using the same conditions as in the RFN dataset. A summary of the collected fake news items is presented in Table 2.

Experimental evaluation
To evaluate the proposed model, we consider the following prediction task: For the spread of a fake news item, we observe a tweet sequence {t i , d i } up to time T obs from the original post (t 0 = 0), where t i is the i-th tweeted time, d i is the number of followers of the i-th tweeting person, and T obs represents the duration of the observation. Then, we seek to predict the time series of the cumulative number of posts related to the fake news item during the test period [T obs , T max ], where T max is the end of the period. In this section, we describe the experimental setup and the proposed prediction procedure, and compare the performance of the proposed method with state-of-the-art approaches.
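One simple way to forecast the test-period time series from a fitted intensity is to sample future events, e.g., with Ogata's thinning algorithm. The sketch below is our own illustration, not the authors' prediction procedure: it assumes a known upper bound lam_max on the intensity, and for a self-exciting model intensity_fn would additionally have to be updated with each accepted event.

```python
# Sketch: sampling future event times from a bounded intensity by thinning.
import numpy as np

def simulate_future(intensity_fn, t_obs, t_max, lam_max, rng=None):
    """Sample event times in [t_obs, t_max) from a rate bounded by lam_max."""
    rng = np.random.default_rng() if rng is None else rng
    t, events = t_obs, []
    while True:
        t += rng.exponential(1.0 / lam_max)            # candidate from the bound
        if t >= t_max:
            break
        if rng.uniform() < intensity_fn(t) / lam_max:  # accept w.p. lambda/lam_max
            events.append(t)
    return events
```

Averaging the cumulative event counts over many simulated runs yields a prediction of the cumulative number of posts during [T_obs, T_max].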

Setup
The total time interval [0, T_max] was divided into the training and test periods. The training period was set to the first half of the total period, [0, 0.5 T_max], and the test period was the remaining period, [0.5 T_max, T_max]. The prediction performance was evaluated by the mean and median absolute error between the actual time series and its predictions:

Mean error = (1/n_b) Σ_{k=1}^{n_b} |N̂_k − N_k|,  Median error = median_k |N̂_k − N_k|,

where N̂_k and N_k are the predicted and actual cumulative numbers of tweets in the k-th bin [(k − 1)Δ + T_obs, kΔ + T_obs], respectively, n_b is the number of bins, and Δ = 1 hour is the bin width.
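The error metric above can be sketched in a few lines (our own illustration; the inputs are the per-bin cumulative counts):

```python
# Sketch of the evaluation metric: absolute errors between predicted and
# actual cumulative tweet counts over the hourly bins of the test period.
import numpy as np

def prediction_errors(predicted_cum, actual_cum):
    """Return (mean, median) absolute error over the n_b bins."""
    abs_err = np.abs(np.asarray(predicted_cum, dtype=float)
                     - np.asarray(actual_cum, dtype=float))
    return abs_err.mean(), float(np.median(abs_err))
```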

Prediction results
We evaluated the prediction performance of the proposed model and compared it with three baseline methods: linear regression (LR) [15], the reinforced Poisson process (RPP) [44], and TiDeH [10]. We used the Python code on GitHub [45] to implement TiDeH. Details of the LR and RPP methods are summarized in the S1 Appendix. Fig 4 shows three examples of the time series of the cumulative number of posts related to fake news items and their prediction results. The proposed method (Fig 4: magenta) follows the actual time series more accurately than the baselines. While the proposed method reproduces the slowing-down effect in the posting activity, the baseline models tend to overestimate the number of posts.

Next, we examine the distribution of the proposed model's parameters. The spreading effect of the falsity of the news item, a_2, is weaker than that of the news story itself, a_1, for most fake news items (67% and 79% in the RFN and Tohoku datasets, respectively). This result can be attributed to the news story itself being more surprising to users than its falsity. The decay time constant of the first cascade, τ_1, is approximately 40 hours in both datasets: the median (interquartile range) was 35 (22−92) hours and 40 (19−54) hours for the RFN and Tohoku datasets, respectively. The time constant of the second cascade, τ_2, is widely distributed in both datasets, which is consistent with the result observed in the synthetic data (Fig 2). The correction time t_c tends to be around 30−40 hours after the original post: 32 (21−54) hours and 37 (31−61) hours for the RFN and Tohoku datasets, respectively. A previous study [46] reported that fact-checking sites detect fake news 10−20 hours after the original post. This result implies that Twitter users recognize the falsity of a fake news item 10−20 hours after the initial report by the fact-checking sites.
Finally, we evaluated the prediction performance using the two fake news datasets (Table 3). Table 3 demonstrates that the proposed method outperforms the baseline methods in both datasets and metrics. Comparison of the mean error for the proposed model and TiDeH suggests that the two-stage spreading mechanism reduces the mean error by 32% and 42% in the RFN and Tohoku datasets, respectively. Consistent with previous studies [10,19], the methods based on the point process model (the proposed method, TiDeH, and RPP) perform better than the linear regression (LR) method. Indeed, the proposed model performs best for most fake news items (100% and 89% in the RFN and Tohoku datasets, respectively). While TiDeH performs better than the proposed model for a small fraction of the fake news items (8%), the proposed model still performs much better than the other baselines (RPP and LR). Furthermore, we evaluated the goodness-of-fit of the model using Akaike's information criterion (AIC) [47]. Comparison of AIC values implies that the proposed model achieves a better fit than TiDeH for most fake news items (100% and 89% in the RFN and Tohoku datasets, respectively). These results suggest that fake news occasionally spreads in a single cascade rather than in two cascades. This might happen when the users already know the falsity of the news in advance (e.g., April Fool's Day) or they are not interested in the falsity of the news at all. Overall, these results show that the proposed method is effective for predicting the spread of fake news posts on Twitter.

Inferring the correction time
We have demonstrated that the proposed method outperforms the existing methods for predicting the evolution of the spread of a fake news item. The proposed model assumes that Twitter users realize the falsity of the news around the correction time t c . In this section, we examine the validity of this assumption through text mining.
First, we compared the frequency of fake words with the inferred correction time t_c (Fig 5). The fake word frequency is defined as the number of tweets containing fake words (e.g., false rumor, fake, not true, and not real) in each hour. The spread of fake news items in the RFN dataset contained fewer fake words than those in the Tohoku dataset: 29 and 277 fake words in the tweets of b. Notredome and f. Sonictrans in the RFN dataset, and 1,752, 1,616, 1,723, and 1,930 fake words in the tweets of a. Saveenergy, l. Taiwan, q. Cartoonist, and s. Turkey in the Tohoku dataset during the observation period (150 hours), respectively. This is because most of the tweets in the RFN dataset are retweets of the original post. We observed that the fake words were posted around the correction time. The peak of the fake word frequency is close to the correction time for Taiwan and Cartoonist in the Tohoku dataset (Fig 5). Next, we compared the word clouds before and after the correction time t_c. Fig 6 shows an example for the fake news item "Turkey" in the Tohoku dataset. The fake news story is about huge financial support (10 billion yen) from Turkey to Japan. The word cloud before the correction time implies that this fake news item spread because Turkey is considered a pro-Japanese country. The term "false rumor" starts to appear frequently after the correction time. The word "Taiwan" also appears after the correction time, which is related to another fake news story about Taiwan. These results suggest that Twitter users realize the falsity of the news after the correction time, which supports the key assumption of the proposed model.
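The fake-word frequency computation described above can be sketched as follows. The keyword list is illustrative (the actual analysis of the Tohoku dataset would have used Japanese terms), and the function names are ours.

```python
# Sketch: counting, per hour, the tweets containing any correction keyword.
from collections import Counter

FAKE_WORDS = ("false rumor", "fake", "not true", "not real")  # illustrative list

def fake_word_frequency(tweets):
    """tweets: iterable of (time_in_hours, text) pairs. Returns hour -> count."""
    counts = Counter()
    for t, text in tweets:
        lowered = text.lower()
        if any(w in lowered for w in FAKE_WORDS):
            counts[int(t)] += 1
    return counts
```

The peak hour of this frequency can then be compared against the inferred correction time t_c.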

Conclusion
We have proposed a point process model for predicting the future evolution of the spread of fake news on Twitter (i.e., tweets and re-tweets related to a fake news story). The proposed model describes the fake news spread as a two-stage process. First, a fake news item spreads as an ordinary news story. Then, the users recognize the falsity of the news story, and that information spreads as another news story. We have validated this model by compiling two datasets of fake news items spread on Twitter. We have shown that the proposed model outperforms the state-of-the-art methods for accurately predicting the spread of fake news items. Moreover, the proposed model was able to infer the correction time of the news story. Our results based on text mining indicate that Twitter users realize the falsity of the news story around the inferred correction time.
There are several interesting directions for future work. The first is to investigate cascades exhibiting multiple bursts. While most fake news cascades exhibit the two-stage spreading pattern, this pattern can also be observed in cascades in general. A previous study [48] found that the cascades of image memes on Facebook consist of multiple popularity bursts and argued that content virality is the primary driver of cascade recurrence. Our work implies that a change in the perception of the content can be another driver. Additional research is needed to determine whether this hypothesis explains cascade recurrence better than content virality. A second direction would be to extend the proposed model. While we simply assumed a two-stage process for the spread of a fake news item, the model could be extended to describe the spread of fake news in more detail. For example, we could consider multiple types of tweets, or a hidden variable that incorporates a soft switch from the first stage to the second. Another direction would be to apply the proposed model to practical problems such as fake news detection and mitigation. We believe that the proposed model provides an important contribution to the modeling of the spread of fake news, and that it is also beneficial for extracting a compact representation of the temporal information related to the spread of a fake news item.
Supporting information S1 Appendix. Baseline methods. We summarize the baseline methods for predicting the evolution of the spread of a fake news item: linear regression (LR) and reinforced Poisson process (RPP). (PDF)