Abstract
The proliferation of harmful content and misinformation on social networks necessitates content moderation policies to maintain platform health. One such policy is shadow banning, which limits content visibility. The danger of shadow banning is that it can be misused by social media platforms to manipulate opinions. Here we present an optimization-based approach to shadow banning that can shape opinions into a desired distribution and scale to large networks. Simulations on real network topologies show that our shadow banning policies can shift opinions and increase or decrease opinion polarization. We find that if one shadow bans with the aim of shifting opinions in a certain direction, the resulting shadow banning policy can appear neutral. This shows the potential for social media platforms to misuse shadow banning without being detected. Our results demonstrate the power and danger of shadow banning for opinion manipulation in social networks.
Citation: Chen Y-S, Zaman T (2024) Shaping opinions in social networks with shadow banning. PLoS ONE 19(3): e0299977. https://doi.org/10.1371/journal.pone.0299977
Editor: Poowin Bunyavejchewin, Thammasat University Institute of East Asian Studies, THAILAND
Received: November 27, 2023; Accepted: February 19, 2024; Published: March 27, 2024
Copyright: © 2024 Chen, Zaman. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: Data and code for the simulations are available at https://github.com/ysghysgh/Shadowbanning.git.
Funding: The author(s) received no specific funding for this work.
Competing interests: The authors have declared that no competing interests exist.
1 Introduction
The digital age has borne witness to the rapid rise of social networks, which influence the dynamics of public conversation. Owing to their structure and expansive reach, these platforms possess the potential to shape public discourse, often wielding influence that transcends geographical boundaries. However, this powerful capacity can also serve as a conduit for the propagation of harmful content or disinformation. The ramifications of this can be significant, including societies being ensnared in a web of misinformation, or becoming perilously polarized to the brink of internal conflict. This necessitates strategies designed to stymie the potential exploitation and misuse of these influential platforms.
For content that manifestly constitutes a threat, platforms have the authority to expunge the user responsible. This course of action is typically employed in scenarios involving explicit threats of violence, unequivocal disinformation posing potential danger, or other instances that breach the platform’s stipulated policies. However, not all content resides within such clear-cut boundaries of propriety.
Certain types of content may straddle the periphery of policy violation without crossing the explicit threshold. Even though such content does not transgress the policies directly, its rampant dissemination can subtly skew the tenor of online discourse in ways that could engender undesirable outcomes. Consider, for instance, an ongoing political debate on the platform marked by heightened tension and polarization. In such circumstances, the platform may deem it necessary to curb the spread of emotionally-charged content that could further inflame the situation and potentially instigate violent acts. The content in question here may not represent a clear-cut violation of the platform’s policies. Nonetheless, its unchecked proliferation could exacerbate polarization at a critical juncture, thereby posing potential risks.
To address content that does not directly breach the platform’s rules but still presents certain risks, the platform can employ various content moderation strategies. One of these strategies is referred to as shadow banning. Characterized by its clandestine nature, shadow banning operates by limiting the visibility of a user’s content, effectively curtailing the user’s reach without their awareness [1]. Shadow banning allows platforms to exert control over the content they host, without disrupting user engagement significantly. It can be employed at different levels of precision. The platform could limit the visibility of all content of a user, or it could be more selective and limit the visibility of the user’s content to a set of specific users. In either case, the net effect is that certain content posted by a user will not be seen by others.
Shadow banning can serve as an effective strategy for maintaining the health of social media discourse. It was found that Twitter shadow banned accounts that exhibited automated or bot-like behavior, along with offensive posts [2]. While there are obvious benefits to shadow banning, it is not without potential drawbacks. Of significant concern is the potential to upset users who may perceive this practice as a form of covert censorship, infringing on their freedom of expression. Given the clandestine nature of shadow banning, users may feel betrayed or manipulated upon discovering that their content reach has been curtailed without their knowledge. These sentiments could lead to decreased user engagement, trust erosion, and potentially a mass exodus from the platform. For instance, many conservative users of Twitter accused the platform of shadow banning them as an exercise of political censorship [3]. Instagram has been accused of disproportionately shadow banning women in an attempt to limit the spread of inappropriate content, accusations which Instagram denies [4]. The negative stigma surrounding shadow banning has caused Elon Musk, owner of Twitter, to publicly state that he will not allow shadow-banning on the platform [5]. Thus, while the judicious application of shadow banning policies can be effective at content moderation, it is imperative that such measures are deployed sparingly and transparently. This careful balancing act between user satisfaction and content moderation underscores the intricate challenge of managing contemporary social media platforms.
The danger of content moderation policies such as shadow banning is that they can result in the manipulation of opinions by the platform. Traditionally, opinion manipulation has been considered from the perspective of a user in the network. The goal is to select target users to receive content in order to maximize an objective, such as the reach of the content [6–11] or the mean opinion in the network [12–14]. Other approaches to opinion manipulation focus on increasing the credibility of opinion leaders [15]. In contrast, content moderation is done by the platform itself and does not introduce new content into the network. Rather, it modifies the audience for existing content. However, content moderation can still manipulate opinions. For instance, one type of content moderation is recommendations, where the platform uses an algorithm to choose what content to show users. The recommendation algorithm is typically designed to show users content they are likely to prefer. Many studies have found that the bias of content recommendation algorithms creates a positive feedback loop that can lead to increased polarization [16–21]. It has been found that uniformly limiting the visibility of content in a social network can also inadvertently increase polarization [22]. From these results it is clear that content moderation can manipulate opinions in a social network, even when this was not the intention.
The ability of content moderation to affect opinions is very concerning. It raises the question of whether or not a social media platform could design content moderation policies with the explicit objective of manipulating the opinions into an arbitrary target distribution. If this was possible, it could be very dangerous for a society. Furthermore, one can ask whether this opinion manipulation could be done without being detected. For instance, can a social media platform deploy content moderation methods, like shadow banning, with a partisan intent, yet still uphold an outward semblance of political neutrality? This scenario suggests that a society could be covertly swayed by a social media platform, with the populace remaining unaware until potentially harmful consequences have firmly taken hold.
In this work, we demonstrate how a social media platform can employ a different form of content moderation, specifically shadow banning, to arbitrarily shape the opinions of its users. We frame this as an optimization problem which allows one to calculate shadow banning policies that shape opinions into a specified target distribution. The shadow banning policy is obtained by solving a simple linear program. Because of this, our approach can scale to large networks and can accommodate a variety of opinion dynamics models encompassing complex phenomena such as bounded confidence [23]. When determining shadow banning policies, we focus on two principal characteristics of the opinion distribution: the mean and the variance.
Altering the mean enables the platform to steer the prevalent sentiment regarding a topic in a designated direction. When utilized with upright intentions, this can permit the platform to curtail the spread of hazardous content and diminish the influence of misinformation. However, if employed with unethical intentions, manipulating the mean may allow the platform to forge an artificial bias either in favor or against a particular topic. This holds a potential for substantial risk, especially if, for example, implemented during an election year.
In contrast, manipulating the variance does not generate a bias towards a topic, but instead alters the overall character of the online dialogue. Reducing the variance has the potential to moderate online polarization and suppress intense sentiment. On the other hand, amplifying the variance enhances polarization and escalates the severity of sentiment. There are scarce justifiable reasons to amplify variance unless the objective is to destabilize a populace via information warfare. Nonetheless, it is an action that can be effortlessly executed through shadow banning. This shows the potential risks of shadow banning and emphasizes that it must be used with great care.
This paper is organized as follows. We begin by presenting the underlying opinion dynamics model used in our analysis. We then show how to calculate shadow banning policies by solving a linear program. Shadow banning policies are calculated for synthetic networks to provide intuition for their behavior. We then calculate shadow banning policies on two large-scale Twitter networks for multiple opinion objectives. We find that substantial manipulation of opinions can be achieved over time, even with limited shadow banning. Finally, we show that if one shadow bans with a politically biased objective in mind, such as maximizing the opinion mean, the resulting shadow banning policy appears to be politically neutral, or biased in a counter-intuitive way.
2 Methods
2.1 Opinion dynamics model
Shadow banning can be used to control the movement of opinions. However, we must first have a model for the underlying dynamics of the opinions in order to apply shadow banning. There are a variety of such models in existence, but they can all be reduced to a set of continuous time differential equations. We now present this differential equation framework and our choice of opinion dynamics model.
We represent the social network as a directed graph G = (V, E) where V is the set of vertices, which are the users of the social network platform, and E is the set of edges which represent following relationships. This model is appropriate for social networks with a follower/following structure, such as Twitter, Instagram, or TikTok. An edge (i, j) pointing from user i to user j means that user j follows user i, and will subsequently be shown content posted by i. User i posts content to user j at a rate λij, which is the number of posts per unit time. In practice this rate would only depend on i as it would correspond to his posting rate. However, it is possible that the rate could vary with j, if for instance j does not check the platform often and thus does not see all of i's content. Therefore, we can consider λij as an effective posting rate from i to j. Also, we will consider shadow banning policies that limit the rate at which content flows along individual edges in the social network, so separating posting rates by edge simplifies our analysis.
Each user i has a time dependent latent opinion θi(t) which is a real number. The opinion of content posted by a user at any given time matches their latent opinion. More general models allow for the content to have a random opinion which equals θi(t) in expectation [13]. However, we will not consider such stochastic generalizations here.
Each time a user i posts in the network, all users update their opinions. Assume i posts at time t and consider a user j. If j does not follow i then there is no change in j’s opinion. However, if j does follow i, then j changes his opinion by an amount given by f(θi(t) − θj(t)), where f is the opinion shift function and its argument is the difference of the opinions of i and j. This form for the opinion shift function is in accordance with many popular opinion dynamics models [23, 24].
To simplify this analysis, we approximate the opinions as continuous functions. This is a good approximation for large networks. We first assume that users independently post content according to a Poisson process. Then the number of posts in the entire network is a merged Poisson process of the individual user posting processes. We define δ as the mean time between posts in the network. First consider the case where posts on each edge are independent. In this case, δ = 1/∑(i, j)∈E λij. Second, consider the more realistic case where users post independently, but their posts are broadcast to all of their followers simultaneously. In this case δ = 1/∑i∈V λi where λi is the posting rate of user i (and λij = λi). In either case, we see that as the network grows large, δ becomes increasingly small. Therefore, for large networks, a continuous time approximation is reasonable. We assume there was a post in the network at time t + δ and write down the update rule for user j's opinion as

θj(t + δ) = θj(t) + ∑i:(i,j)∈E Xij(t) f(θi(t) − θj(t))
The random variable Xij(t) is one if there is a post on edge (i, j) and zero otherwise. Given that a post occurred, the mean value of Xij(t) is λijδ by properties of merged Poisson processes [25]. Taking the expectation over Xij(t) and doing some simple manipulations, the update rule becomes

(θj(t + δ) − θj(t)) / δ = ∑i:(i,j)∈E λij f(θi(t) − θj(t))
As the network size increases, δ will approach zero, and the term on the left can be replaced with a time derivative dθj/dt. This then gives us our continuous time opinion dynamics model

dθj/dt = ∑i:(i,j)∈E λij f(θi(t) − θj(t))  (1)
This differential equation model is a good approximation to the opinion dynamics on large networks. In our application, we are considering a platform shadow banning users in the entire social network, so this approximation is valid.
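To make the discretization concrete, the following minimal Python sketch (our own illustration with hypothetical names, not the released simulation code) advances Eq (1) with a single forward Euler step. The shift function f is left generic here and is specified below.

```python
import numpy as np

def step_opinions(theta, edges, rates, f, dt):
    """One forward Euler step of the opinion dynamics in Eq (1).

    theta : array of current opinions, one per user
    edges : list of (poster, follower) pairs, i.e. (i, j) means j follows i
    rates : dict mapping (i, j) to the posting rate lambda_ij
    f     : opinion shift function
    dt    : integration step size (e.g. one day)
    """
    dtheta = np.zeros_like(theta)
    for (i, j) in edges:
        # follower j of poster i drifts by lambda_ij * f(theta_i - theta_j)
        dtheta[j] += rates[(i, j)] * f(theta[i] - theta[j])
    return theta + dt * dtheta
```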
The last piece to specify in this model is the opinion shift function f. There are several options here. The classic DeGroot model has f(x) = ωx for some non-negative constant ω which measures how much a single post can shift one's opinion [24]. This term captures how reliable one considers the opinions of others. Hearing an opinion from someone deemed more reliable will cause one to change their opinion more than an opinion from someone unreliable. DeGroot's model leads to opinion consensus on most networks. This is one flaw of the model, as many researchers have observed persistent polarization in real social networks [26–30]. Another flaw of the model is that the opinion shift is proportional to the difference between the opinion of the post and one's own opinion. However, it is unlikely that an opinion vastly different from one's own would be persuasive in modern online social media. Instead, these opinions may be ignored.
To allow for persistent polarization and limit the persuasive power of posts with opinions with vastly different from their audience, the bounded confidence model was proposed [23, 31]. In this model the shift function is given by
f(x) = ωx if |x| ≤ ϵ, and f(x) = 0 otherwise  (2)
where ϵ is the size of the confidence interval. The bounded confidence model places a limit on the range of trusted opinions. Opinions deviating too far (by more than ϵ) from one's own opinion have no persuasive power. The bounded confidence model can result in consensus or persistent polarization depending upon the value of the confidence interval, the initial opinions, and the network structure [15, 32, 33]. It is a more complex model that better captures behavior in real social networks. In this work we use this bounded confidence model for the opinion dynamics. We note that there are more complex variations of this model, including labeling community membership [34], factoring in various sociological phenomena [35], and taking into account opposing opinions when compromises happen [36]. Our framework would allow for the utilization of these model variations for the underlying opinion dynamics.
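As a concrete sketch, Eq (2) is a one-line function; the default parameter values shown here are the ones used later in our simulations.

```python
def bounded_confidence_shift(x, omega=0.003, eps=0.1):
    """Shift function of Eq (2): linear inside the confidence interval,
    zero outside of it."""
    return omega * x if abs(x) <= eps else 0.0
```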
2.2 Shadow banning control
We can easily incorporate shadow banning into our opinion dynamics model. We define the shadow banning strength on an edge (i, j) at time t as uij(t) which is a real number between zero and one. Shadow banning reduces the posting rate λij by a multiplicative factor 1 − uij(t). At one extreme, uij(t) = 1 corresponds to total censorship of content from i to j. At the other extreme, uij(t) = 0 corresponds to no shadow banning. Under shadow banning, the opinion dynamics model is slightly modified to become
dθj/dt = ∑i:(i,j)∈E λij (1 − uij) f(θi − θj)  (3)
where we have dropped the time arguments to simplify notation.
To determine the shadow banning policy, the social network platform must have a target distribution for the opinions of its users. This is described by an objective function, or an instantaneous reward, r(θ(t)) of the opinions, where θ(t) refers to the opinions of each user in the network at time t. The objective can be any function of the opinions, but here we will consider the important cases where the objective is the opinion mean, the variance, or the negative variance. The negative variance allows the platform to minimize the variance under our objective maximization framework.
The platform can have different types of goals with respect to the objective. One possible goal is to maximize the objective at a final or terminal time T. This can be formulated as the following control theory problem:

maximize r(θ(T)) over the controls uij(t) for 0 ≤ t ≤ T, subject to the opinion dynamics in Eq (3) and 0 ≤ uij(t) ≤ 1 for all (i, j) ∈ E.
Solving this problem is non-trivial, but could possibly be done using techniques from control theory [37]. However, there is an issue with scalability. The shadow banning control problem has one control variable for each edge in the network and one state variable for each user in the network. If one is performing shadow banning on an entire social media platform, this can result in hundreds of millions of state variables and billions of control variables. Standard control theory techniques will not work on such large problems. To avoid this issue, we use the following approximation. We can rewrite the final objective as

r(θ(T)) = r(θ(0)) + ∫₀ᵀ (dr(θ(t))/dt) dt
An optimal solution to this problem will choose the shadow ban controls in a manner to maximize this integral. However, the size of the problem prevents such a solution from being found. A more scalable approach is to find a greedy solution. Instead of maximizing the integral, we maximize the integrand at each time step sequentially. This means we choose the shadow banning policy to maximize dr/dt at each time t. It turns out that this objective can be maximized in a manner that scales to large networks. To see why, we can rewrite it as

dr/dt = ∑i∈V (∂r/∂θi) (dθi/dt) = ∑(j,i)∈E (1 − uji(t)) Bji(t)

where we have defined

Bji(t) = λji (∂r/∂θi) f(θj(t) − θi(t)).

Above we have used Eq (3) for dθi/dt. We see from this expression that the shadow banning appears linearly in the reward derivative through the opinion dynamics. This observation gives us an efficient method to find the shadow banning policy: at time t we maximize the time derivative of the instantaneous reward. Because the derivative is a linear function in the shadow ban controls, this maximization problem can be formulated as a linear program. This allows us to solve for the policy in very large networks using a variety of well-known methods [38–40].
In addition to maximizing the objective, the platform also has constraints on the shadow banning policy. If the shadow banning is too strong, the user experience will be affected negatively. Therefore, the constraints limit the strength of the shadow banning. This limitation can be done at different levels. One can set a limit on the mean shadow banning strength in the entire network, or one can limit the shadow banning strength on individual edges. We refer to these limits as snetwork for the network average and sedge for individual edges. Combining these constraints with the greedy approximation leads to the following linear program for the shadow banning policy at time t:

maximize ∑(j,i)∈E (1 − uji(t)) Bji(t) over u(t)
subject to (1/|E|) ∑(j,i)∈E uji(t) ≤ snetwork
and 0 ≤ uji(t) ≤ sedge for all (j, i) ∈ E,
where the decision vector u(t) denotes the set of uji(t) for every (j, i) ∈ E. uji(t) indicates the shadow banning strength on the following/follower edge (j, i). The resulting tweet rate on an edge (j, i) then reduces to λji(1 − uji(t)). The coefficients Bji(t) in the objective are determined by the opinions in the network θ(t), the network structure (which is contained in the edge set E), the posting rates λji, the derivative of the reward with respect to the opinions ∂r/∂θ, and the opinion shift function f. Because we assume the bounded confidence model for the dynamics, we use the shift function in Eq 2. As for the shadow banning constraints, the first inequality corresponds to the limit on the mean shadow banning strength in the network, while the second inequality corresponds to the limit on the shadow banning strength on each individual edge.
The solution of the linear program gives the shadow ban policy at time t. Solving this linear program at every time step will give the complete dynamic shadow banning policy. The policy is dynamic because as time progresses, the user opinions change, leading to a potentially different shadow banning policy. The platform has the flexibility to decide the frequency of policy recalculations, whether it is a daily update or longer intervals like weekly or monthly. To put this policy into action, we can consider uji(t) as a probability. Therefore, at time t, after computing the policy by solving the shadow banning linear program, one potential approach for the platform to implement shadow banning is by making each post from user j invisible to user i with a probability of uji(t).
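A minimal sketch of one such policy computation is shown below, assuming the edge list, posting rates, and reward gradient are available as Python objects (the names are illustrative, not from our released code). Since maximizing ∑(1 − uji)Bji is equivalent to minimizing ∑ ujiBji, the coefficients Bji can be passed directly to scipy's linear programming solver as the cost vector.

```python
import numpy as np
from scipy.optimize import linprog

def shadow_ban_policy(theta, edges, rates, f, dr_dtheta,
                      s_network=0.05, s_edge=1.0):
    """Solve the greedy shadow banning linear program at one time step.

    edges     : list of (j, i) pairs, i.e. i follows j
    dr_dtheta : partial derivatives of the reward w.r.t. each opinion
                (see Table 1)
    Returns one shadow ban strength u_ji per edge.
    """
    m = len(edges)
    # B_ji = lambda_ji * (dr/dtheta_i) * f(theta_j - theta_i)
    B = np.array([rates[(j, i)] * dr_dtheta[i] * f(theta[j] - theta[i])
                  for (j, i) in edges])
    res = linprog(c=B,                          # minimize sum(u * B)
                  A_ub=np.ones((1, m)) / m,     # mean strength constraint
                  b_ub=[s_network],
                  bounds=[(0.0, s_edge)] * m,   # per-edge constraint
                  method="highs")
    return res.x
```

Each entry of the returned vector can then be treated as the probability of hiding a post from j to i until the policy is next recomputed.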
The impact of the particular choice of reward function on the resulting shadow banning policy is expressed through the partial derivative of the reward with respect to the opinions. We list the partial derivatives for the objectives we consider in Table 1. One nice feature of our approach is that the shadow banning policy can be found by solving a linear program for any objective function. This allows one to use more novel objective functions beyond those considered here.
Table 1. Partial derivatives of the reward with respect to user opinions.
Mean: r(θ) = (1/n) ∑k θk, with ∂r/∂θi = 1/n
Variance: r(θ) = (1/n) ∑k (θk − μ)², with ∂r/∂θi = 2(θi − μ)/n
Negative variance: r(θ) = −(1/n) ∑k (θk − μ)², with ∂r/∂θi = −2(θi − μ)/n
We have used μ to refer to the mean of the opinions.
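The gradients in Table 1 translate directly into code; a sketch consistent with the table (again with illustrative names) is:

```python
import numpy as np

def grad_mean(theta):
    """Gradient of the opinion mean (1/n) * sum(theta)."""
    return np.full(len(theta), 1.0 / len(theta))

def grad_variance(theta):
    """Gradient of the opinion variance (1/n) * sum((theta - mu)^2)."""
    return 2.0 * (theta - theta.mean()) / len(theta)

def grad_neg_variance(theta):
    """Gradient of the negative variance, used to minimize the variance."""
    return -grad_variance(theta)
```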
We can gain insight to the behavior of the shadow banning policy for different objectives by examining the linear program. Consider an edge (j, i) corresponding to node i following node j. Because we want to maximize the time derivative of the reward, we will shadow ban edges where the coefficient Bji(t) is negative (recall that uji(t) = 1 corresponds to maximum shadow banning). We first consider the case where the goal is to maximize the opinion mean. Using the partial derivative of the mean in Table 1 and the definition of Bji(t), we find that there will be shadow banning on edge (j, i) when f(θj(t) − θi(t)) is negative. The opinion shift functions we consider have odd symmetry and their sign matches the sign of their argument. This means that there is shadow banning if θi(t) > θj(t). In this case node j is pulling down the opinion of node i, which decreases the opinion mean. Therefore, the policy shadow bans the edge.
To understand the variance shadow banning policies, it is useful to define μ(t) as the mean of the user opinions at time t. If the goal is to minimize the opinion variance, then Bji(t) is negative when the partial derivative of the reward is negative and the opinion shift is positive, or vice versa. This corresponds to θi(t) > μ(t) and θi(t) < θj(t), or θi(t) < μ(t) and θi(t) > θj(t). In the first case node i’s opinion is above the mean and it is being pulled up by node j, which increases the variance. In the other case, i’s opinion is below the mean and it is being pulled down by j, which also increases the variance. Therefore, under either of these conditions this edge gets shadow banned. A similar analysis for maximizing the variance shows that the edges which are shadow banned correspond to a node being above the mean and being pulled down or a node being below the mean and being pulled up.
Comparing the mean and variance policies, we see that the policy for the mean depends only on the opinion shift on an edge since the goal is to shift the distribution in a particular direction. The global position of an opinion is not relevant. However, the policy for the variance is more complex since the goal is to either stretch out or compress the opinion distribution around the global mean. In this case the policy takes into account the position of the opinions relative to the mean in addition to the shift direction on the edge.
3 Results
We test our shadow banning algorithm in a variety of networks with different opinion objectives. We consider maximizing the mean, minimizing the variance, and maximizing the variance. We first calculate shadow banning policies on small synthetic networks to illustrate some of the intuition for the policies discussed earlier. We then calculate shadow banning policies on larger Twitter networks to demonstrate the scalability of the algorithm and show how it performs on real network topologies and opinion distributions. In our analysis we update the shadow banning policies daily, as this is a practical implementation scheme for social media platforms.
We use the bounded confidence model for the opinion dynamics. We must choose the parameters ϵ and ω to specify the opinion dynamics model. Larger values for these parameters correspond to stronger persuasion between nodes (wider confidence interval and larger shift magnitude). We choose to be conservative and use small values for both parameters to limit the speed of the natural opinion dynamics and keep the persistent polarization that is observed in real social networks. For ϵ we choose 0.1 so the confidence interval is fairly narrow (the initial opinions are distributed between zero and one). We set ω = 0.003, which indicates that users have much more confidence in their own opinions relative to the opinions of others. This aligns with several studies of persuasion which have found a single message can cause a very small opinion shift in a controlled environment [41–44]. We use a value for ω less than what is implied in these works as we expect there to be many factors that reduce the persuasive power of social media posts, such as users not seeing a post at all or scrolling past it without reading it. Note that we use constant values for ϵ and ω for all users at all time steps. In reality, users are likely to have heterogeneous and time-varying values for these parameters [45]. We do not have a good sense of how these parameters are distributed, so we instead choose to use a constant value for all users. However, if such information was available, it can easily be incorporated into our simulation framework.
3.1 Synthetic networks
3.1.1 Path network.
We begin with a path network shown in Fig 1. The network has 11 nodes whose opinions increase linearly from zero at one end to one at the opposite end. The posting rate of each user is set to one post per day. Our simulation runs for 365 days, with the shadow banning policy updated daily. We set no limit on the maximum edge shadow banning strength (sedge = 1), but we limit the maximum mean shadow banning strength to snetwork = 0.5. To avoid issues with numerical rounding, we set ϵ = 0.101 so that the confidence interval is strictly greater than the difference between neighboring opinions.
Fig 1. The node colors indicate the opinion (lower are blue, higher are red). The direction of the edges indicates the flow of information on the network. The objectives are (top left) no shadow banning, (top right) maximize mean, (bottom left) minimize variance, and (bottom right) maximize variance.
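A sketch of this setup, assuming adjacent users on the path follow each other (hypothetical variable names):

```python
import networkx as nx
import numpy as np

# 11-node path where adjacent users follow each other, so information
# flows in both directions along the path.
G = nx.path_graph(11).to_directed()
theta0 = np.linspace(0.0, 1.0, 11)   # opinions increase linearly 0 to 1
rates = {e: 1.0 for e in G.edges}    # one post per day on every edge
```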
We calculate the shadow banning policy for each objective function and show the evolution of the resulting opinions in Fig 2. With no shadow banning, the opinions do not converge and the mean is slightly above 0.5. When trying to maximize the mean, the opinions are pulled up to 0.6. For minimizing the variance, we see the opinions converge to 0.5, but with less polarization than with no shadow banning. For maximizing the variance, the opinions become more polarized as the simulation progresses. For each objective we also show the mean shadow banning strength in the network. As can be seen, the shadow banning remains near the 0.5 limit set by the linear program throughout the simulations. This is because the opinions move slowly, so the shadow banning can continue to increase the objective over the simulation duration.
Fig 2. For the opinions, the purple region is the 25th to 75th quantiles, and the pink region is the 5th to 95th quantiles. The objectives are (top left) no shadow banning, (top right) maximize mean, (bottom left) minimize variance, and (bottom right) maximize variance.
To understand what the different shadow banning control policies are doing for each objective, we visualize the initial decisions (t = 0) of the shadow banning policy. We draw the network keeping only non-shadow banned edges to show where the shadow banning occurred. We show the resulting networks in Fig 1. The structure of these shadow banned networks reflects the intuition from the linear program. We see that to maximize the mean, the control shadow bans edges pointing from a lower opinion node to a higher opinion node. This is being done to prevent any nodes from pulling their neighbor opinions down. For minimizing the variance, we see that initially the edges pointing from more extreme opinions are shadow banned, causing the opinions to shift towards the middle. By having all opinion shifts point towards the middle, the opinions will converge to 0.5 more quickly. For maximizing the variance, the opposite edges are blocked, causing the opinions to drift towards the extremes.
3.1.2 Stochastic block model network.
Real-world networks exhibit an assortative structure where users of similar opinions exist in distinct clusters in the network, often referred to as echo chambers [26, 27, 29, 46–51]. One popular model for this network structure is known as the stochastic block model [52]. In this model, one specifies the number of clusters k and the number of nodes in each cluster. Then one specifies a k × k probability matrix p where element pab is the probability of an edge between a node in cluster a and a node in cluster b. All edges are formed independently. If all values in p are equal, then the stochastic block model reduces to the well-known Erdős-Rényi model [53]. Generally, the off-diagonal elements of p are smaller than the diagonal elements to make intra-cluster edges more likely than inter-cluster edges. This is how the assortative structure is achieved.
We utilize a stochastic block model network with ten nodes equally divided between two clusters. The intra-cluster probabilities are one and the inter-cluster probabilities are 0.05. This produces a network of two cliques connected by a small number of directed edges, as shown in Fig 3. The nodes in each cluster have the same opinion, which is 0.35 in cluster one and 0.65 in cluster two. We chose these values so that they are close enough to allow some persuasion between the clusters under our model specification. To allow for non-trivial opinion dynamics, we set ϵ equal to 0.4. This allows persuasion to occur between the clusters under natural dynamics. Otherwise the two clusters do not interact in any meaningful way.
Fig 3. The node colors indicate the opinion (lower are blue, higher are red). The direction of the edges indicates the flow of information on the network. The objectives are (top left) no shadow banning, (top right) maximize mean, (bottom left) minimize variance, and (bottom right) maximize variance. For the no shadow banning policy, the node colors correspond to opinions at time t = 0. For the other objectives, the node colors correspond to opinions at time t = 10.
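A sketch of this construction with networkx (the random seed is an arbitrary illustration):

```python
import networkx as nx
import numpy as np

sizes = [5, 5]             # two clusters of five nodes each
p = [[1.0, 0.05],          # intra-cluster edge probability 1,
     [0.05, 1.0]]          # inter-cluster edge probability 0.05
G = nx.stochastic_block_model(sizes, p, directed=True, seed=1)

theta0 = np.array([0.35] * 5 + [0.65] * 5)   # initial cluster opinions
rates = {e: 1.0 for e in G.edges}
```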
The shadow banning is applied in the same manner as with the path network (daily update of the policy with sedge = 1 and snetwork = 0.5). We calculate the shadow banning policy for each objective function and show the resulting evolution of the opinions in Fig 3. With no shadow banning the network slowly approaches consensus at 0.5. When maximizing the mean, the shadow banning is able to push the opinions near 0.65, which is the maximum value in the initial opinions. Since shadow banning only removes content from the platform, the final opinions cannot exceed the maximum of the initial opinions. Minimizing the variance causes the opinions to approach consensus, but faster than without any shadow banning, as can be seen by the narrower spread in the final opinion distribution. When maximizing the variance, the opinions in each cluster stay at their initial values throughout the simulation. Like with the path network, the mean shadow banning strength stays above zero for the simulation duration for each objective. However, the value is lower than for the path network because fewer edges are shadow banned, as we will discuss next.
We visualize the early shadow banning policies for the stochastic block model network in Fig 3 as was done for the path network. The networks shown correspond to policies at t = 10. We did not use t = 0 because the equality of the initial opinions within the clusters resulted in no shadow banning. As the dynamics evolve the opinions take on different values and we obtain non-trivial shadow banning policies. For maximizing the mean, the shadow banning policy blocks the inter-cluster edges pointing from the lower opinion cluster to the higher opinion cluster. This prevents the lower opinion cluster from pulling down the higher opinion cluster. Within the lower opinion cluster, edges pointing from the more extreme nodes to the boundary nodes are blocked to avoid these boundary connectors being pulled away from their higher opinion neighbors. Minimizing variance removes the edges pointing to the nodes on the cluster boundaries. These edges pull the opinions to the extremes, so when trying to minimize the variance it is expected that they will be blocked. When maximizing the variance, the policy blocks all inter-cluster edges. This is to be expected as those are the only edges that pull the opinions together. In fact, we see in Fig 4 that these edges remain blocked for the entire simulation. In contrast, the other two objective functions have the shadow banning turn off once the opinions reach consensus.
Fig 4. For the opinions, the purple region is the 25th to 75th quantiles, and the pink region is the 5th to 95th quantiles. The objectives are (top left) no shadow banning, (top right) maximize mean, (bottom left) minimize variance, and (bottom right) maximize variance.
One important observation here is that shadow banning does not move the opinions beyond the maximum and minimum values in the initial condition. Shadow banning cannot drive anyone to an extreme opinion unless those opinions already exist in the network. This is in contrast to other methods to shift opinions which utilize bots that can drive opinions to arbitrary extremes [13]. The difference is that bots inject new content into the network which can have an extreme opinion. Shadow banning can only remove content produced naturally in the network, so it cannot move anyone beyond the bounds determined by the users’ initial opinions.
3.2 Twitter networks
3.2.1 Datasets.
We now apply shadow banning to a set of Twitter networks which have been utilized in previous studies on opinion dynamics [13, 54]. These datasets are ideal for us as they provide a network structure, posting rates, and opinions for a set of social media users engaged in an online conversation on politically polarizing topics. The topics these networks cover are the 2016 United States presidential election and the Gilets Jaunes protests in France. The raw datasets include tweets and also the follower graph formed by the users posting these tweets. For each dataset, tweets were collected that contained specific keywords (a full list of keywords can be found in [54]).
Each user’s posting rate was set equal to the number of their tweets in the dataset divided by the data collection period length. The tweets’ opinions on specific topics were measured using a neural network trained on a set of hand-labeled tweets. The opinions were real numbers between zero and one. For the U.S. election dataset an opinion of one represented a pro-Trump sentiment. For the Gilets Jaunes dataset an opinion of one represented a pro-Gilets Jaunes sentiment. Each user’s opinion was calculated as the mean of the opinions of their tweets in the dataset. Specific details on the collection and processing of these datasets can be found in [54]. We provide some summary statistics about the datasets in Table 2.
Table 2. Summary statistics of the Twitter datasets.
U.S. election: 77.6K users, 5.4M follower graph edges, 2.4M tweets.
Gilets Jaunes: 40.5K users, 4.6M follower graph edges, 2.3M tweets.
M is millions and K is thousands.
The U.S. election dataset consists of tweets by Twitter users who posted about the second debate of the 2016 U.S. presidential election between Hillary Clinton and Donald Trump. This dataset has 2.4 million tweets posted by 77,563 users. The resulting follower graph contained 5.4 million edges. The Gilets Jaunes, also known as the Yellow Vests movement, emerged in France in November 2018. Initially sparked by a significant hike in fuel prices, it rapidly expanded into a widespread protest against the policies of President Emmanuel Macron's government. The Gilets Jaunes dataset consists of tweets posted between January 26, 2019 and April 29, 2019 that contained Gilets Jaunes related keywords. The resulting dataset contained 2.3 million tweets, 40,456 users, and 4.6 million edges in the associated follower graph.
For our simulations we use the subgraph induced by a random subset of users for each dataset. Each subgraph has 15,000 users with opinions less than or equal to 0.5 and 15,000 users with opinions greater than 0.5. For the U.S. election dataset, the resulting sampled network has 30,000 users and 844,563 edges. The Gilets Jaunes sampled network has 30,000 users and 1,084,678 edges. The network sizes are chosen to resemble the size of the networks used in a field experiment and observational study concerning content moderation. The field study in [55] recruited 23,377 US-based adult Facebook users to assess the impact of modifying the polarity of content seen by users on their political polarization. The observational study in [2] audited a random sample of 25,000 Twitter accounts to identify if they were shadow banned. In addition to replicating the size of networks in these works, using a subgraph of our data also reduces the computational time of the simulations.
3.2.2 Simulation results.
Our shadow banning simulations have a similar form to those for the synthetic networks. Shadow banning policies are calculated daily, with the maximum mean shadow banning strength snetwork set to 0.05, and no limit on the shadow banning strength on each individual edge (sedge = 1). The maximum mean shadow banning strength of 5% is chosen based on [2], which found that 6.2% of sampled Twitter accounts were shadow banned at least once within a year of data collection. In addition, [56] estimated that between 0.5% and 2.3% of users were banned in the Twitter networks they studied. We do not limit sedge as shadow banning usually lasts at least 24 hours and up to two weeks on commonly used social media platforms, and we update our control daily. Our simulations cover 365 days. The opinion dynamics model is the bounded confidence model with ϵ = 0.1 and ω = 0.003. We also repeat our analysis on variations of these model parameters (see Appendix).
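Tying the pieces together, the complete simulation is a daily loop: recompute the policy by solving the linear program, then advance the controlled dynamics of Eq (3) by one day. The sketch below reuses the illustrative helpers from the Methods section; it is a schematic of the procedure, not the released simulation code.

```python
import numpy as np

def run_simulation(theta0, edges, rates, f, grad_r, days=365,
                   s_network=0.05, s_edge=1.0):
    """Daily loop: solve the shadow banning linear program, then take
    one Euler step (dt = 1 day) of the controlled dynamics in Eq (3)."""
    theta = theta0.astype(float).copy()
    for _ in range(days):
        u = shadow_ban_policy(theta, edges, rates, f,
                              grad_r(theta), s_network, s_edge)
        dtheta = np.zeros_like(theta)
        for k, (j, i) in enumerate(edges):
            # shadow banning scales the rate on edge (j, i) by (1 - u)
            dtheta[i] += rates[(j, i)] * (1.0 - u[k]) * f(theta[j] - theta[i])
        theta += dtheta
    return theta
```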
We plot the terminal objective value in the simulation for each dataset in Fig 5. As can be seen, the shadow banning policy is able to improve the objective value relative to no shadow banning by 7% to 60%, depending on the dataset and objective. We see that the shadow banning policy is able to shift the opinion mean, decrease the variance, and also increase the variance. Therefore, we see the variety of opinion manipulations we can achieve with shadow banning, even with limited mean shadow banning strength. Next we explore the evolution of the opinions in more detail to understand how the shadow banning is affecting the opinions.
Fig 5. Bar plots of terminal objective values with (blue) no shadow banning versus (orange) shadow banning for the U.S. election and Gilets Jaunes datasets, with objectives being (left) maximize mean, (middle) minimize variance, and (right) maximize variance. For the variance objectives, the terminal variances are reported. The objective improvements from shadow banning relative to no shadow banning are (for the U.S. election and Gilets Jaunes datasets, respectively) 9% and 12% for maximizing the mean, 7% and 23% for minimizing the variance, and 40% and 60% for maximizing the variance.
We begin with the U.S. presidential election dataset. We show the opinion evolution under no shadow banning and with shadow banning for different objectives in Fig 6. We also show density plots of the initial and final opinions in Fig 7. Our first observation is that over the one-year simulation the opinion quantiles move very little. This is due to our bounded confidence model specification, as we would expect social media users not to experience a major change in opinion over this time period. While the changes are small, they differ significantly depending on the shadow banning objective. With no shadow banning, the opinions show a slight movement towards the center. The density plot shows that opinions are converging around prominent values in the initial distribution, resulting in three major modes: left, center, and right. This pattern closely resembles real-world election polls. When maximizing the mean, we see that the 75th quantile is driven upwards from an initial value of 0.65 to a final value of 0.75. Looking at the density plots, we see that the increase in the upper quantiles is primarily due to the creation of a mode centered around 0.8. Minimizing the variance does not impact the median opinion, but slightly pulls in the 25th and 75th quantiles. From the density plot we see that the shadow banning has pulled the opinions towards the center at 0.5. Maximizing the variance appears to widen the 25th and 75th quantiles over time. The density plot shows that this policy has removed opinions from the center.
Fig 6. For the opinions, the purple region is the 25th to 75th quantiles, and the pink region is the 5th to 95th quantiles. The objectives are (top left) no shadow banning, (top right) maximize mean, (bottom left) minimize variance, and (bottom right) maximize variance.
Fig 7. The objectives are (top left) no shadow banning, (top right) maximize mean, (bottom left) minimize variance, and (bottom right) maximize variance.
The mean shadow banning strength shows different behavior for the different objectives. For minimizing variance, the shadow banning strength decays to zero. For maximizing the mean and variance, it remains at maximum strength over time. The opinion dynamics are attractive, so less shadow banning is needed for minimizing variance as the natural dynamics assists in driving the variance towards zero. However, maximizing variance requires driving opinions apart, which goes against the natural opinion dynamics, and so constant shadow banning is needed. We see the same behavior for maximizing the mean, and this is due to the initial opinion distribution in the network not having a large proportion of users with high opinions. The constant shadow banning is needed so that these users are not pulled down and can continuously pull other users up.
We next look at the Gilets Jaunes Twitter network, with the opinion evolutions shown in Fig 8 and initial and final opinion densities shown in Fig 9. The natural dynamics of the network do not appear to move the opinion quantiles much. Maximizing the mean and variance result in similar final opinion distributions. The difference between the final 25th and 75th quantiles is large for both objectives, but slightly larger for maximizing the variance. The final median is slightly higher for maximizing the mean. Apart from these differences, we find that maximizing either objective results in a network with a slightly increased opinion median and highly polarized opinions. Looking at the final opinion densities in Fig 9 we see that they are nearly the same for the two objectives. Minimizing the variance results in the opinions becoming concentrated at the center, as can be seen by the decrease in separation between the 25th and 75th quantiles in Fig 8. From the density plot we see that the shadow banning policy is creating a mode at 0.5 with a narrow width.
Fig 8. For the opinions, the purple region is the 25th to 75th quantiles, and the pink region is the 5th to 95th quantiles. The objectives are (top left) no shadow banning, (top right) maximize mean, (bottom left) minimize variance, and (bottom right) maximize variance.
Fig 9. The objectives are (top left) no shadow banning, (top right) maximize mean, (bottom left) minimize variance, and (bottom right) maximize variance.
The mean shadow ban strength has a similar behavior for the Gilets Jaunes network as for the U.S. presidential debate network. Minimizing the variance requires less shadow banning as the natural dynamics assist in creating consensus. Maximizing the mean and variance require more shadow banning. The difference is that the shadow banning strength for maximizing the variance decreases slowly over time as the opinions reach the extreme ends of the spectrum. The reason for the shadow banning decay is that once the opinions are away from the middle, then the natural attractive opinion dynamics takes over, pulling the opinions towards the extremes.
Our simulation studies provide valuable insights into the characteristics of shadow banning. Firstly, shadow banning proves to be a versatile tool for influencing opinions, with the potential to produce a range of effects, including steering opinions in specific directions, moderating polarization, or amplifying it. Secondly, the initial distribution of opinions within the network emerges as a significant determinant in shaping the shadow banning strategy. Even when pursuing the same objective, the resulting policies and trajectories of opinion evolution can exhibit substantial variations based on the network’s structure and the initial opinion distribution. Lastly, shadow banning demonstrates a notable degree of adaptability. Its necessary intensity fluctuates over time, depending on the evolving state of the network. In certain scenarios, as the network naturally progresses toward a desired state, the need for intense shadow banning diminishes. Conversely, in other cases, continuous shadow banning is required for maintaining the desired opinion trajectory.
3.2.3 Partisan bias in shadow banning.
One can choose an objective with a partisan bias when shadow banning. For instance, one can make the objective function be the mean (or negative mean) if one wants to shift the opinions up (or down). This is a clearly biased objective favoring one extreme of a topic. However, the implemented policy will not appear overtly partisan. To measure the overt partisan nature of a shadow banning policy at any given point in time, we segment users into two political groups based on their current opinion. For the U.S. presidential election dataset, we label Democrats as those with opinion less than or equal to 0.5, and Republicans as those with opinion greater than 0.5. For the Gilets Jaunes dataset, Gilets Jaunes opponents have opinion less or equal to 0.5, and Gilets Jaunes supporters have opinion greater than 0.5. We then look at the fraction of users shadow banned in each political group at a given time, which we refer to as the shadow ban rate. A user i is considered shadow banned at time t if the shadow ban strength uij(t) is greater than zero for at least one j. This means at least one follower of i is not seeing all content posted by i.
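A sketch of this measurement, given a policy vector u over the edge list (illustrative names, consistent with the earlier sketches):

```python
import numpy as np

def shadow_ban_rates(u, edges, theta, n_users, split=0.5):
    """Fraction of users in each opinion group that have at least one
    shadow banned outgoing edge (some follower misses their posts)."""
    banned = np.zeros(n_users, dtype=bool)
    for k, (j, i) in enumerate(edges):
        if u[k] > 0:
            banned[j] = True   # some follower of j is not seeing all posts
    low = theta <= split       # e.g. Democrats / Gilets Jaunes opponents
    return banned[low].mean(), banned[~low].mean()
```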
We would expect a political bias in the shadow ban rates given that the objective is to maximize the opinion mean. However, we find this is not the case. We plot the shadow ban rate at the initial time (t = 0) in our simulations in Fig 10. For the U.S. presidential election dataset, the two values are nearly identical, with the Republicans being shadow banned at a slightly higher rate than the Democrats. A more extreme result is found for Gilets Jaunes. We see that the pro-Gilets Jaunes users are shadow banned at nearly three times the rate of the anti-Gilets Jaunes users. These findings are counter-intuitive as they indicate that the shadow banning policies have a bias that is opposite the bias of the objective. However, from the opinion evolution plots in Figs 6 and 8, we see that these policies lead to opinion distributions which exhibit the bias suggested by the objective.
Fig 10. The shadow banning objective is to maximize the opinion mean. For the U.S. election, this means shifting the mean towards Republicans. For Gilets Jaunes, this means shifting the opinions towards pro-Gilets Jaunes. The shadow ban rate here is the fraction of accounts, or vertices, that have at least one outgoing edge that is shadow banned. Error bars indicate the 95% confidence interval of the mean estimate.
To understand why a shadow banning policy can appear unbiased while being very biased, it is useful to consider again the path network discussed earlier. The initial shadow banning policy for maximizing the mean is shown in Fig 1. From the figure, we see that every node is shadow banned except for the red node with the highest opinion located at the end of the path. Specifically, the edge pointing to the neighbor with higher opinion is shadow banned. The remaining edges indicate that the posts can flow from nodes with higher opinion to those with lower opinion. This has the effect of only allowing upward opinion shifts, which causes the opinion mean to increase over time. However, every node (except for the maximum opinion node) has a neighbor with higher opinion. This means that all of these nodes are shadow banned, which causes the policy to appear unbiased.
In general, for maximizing the opinion mean, the shadow banning policy blocks any edge which pulls opinions downwards. These edges can be incident on nodes of either partisan group. In this manner the policy appears unbiased, or even possibly biased in the opposite direction, depending upon the network structure and opinion distribution. The natural approach is to consider which users are shadow banned, but this would allow biased shadow banning to go undetected. Our results suggest that to measure a bias in a shadow banning policy, one must look at the edges which are shadow banned, and not the users. In particular, one must look at the sign of the opinion shift on the shadow banned edges to identify the bias.
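A sketch of such an edge-level audit, counting shadow banned edges by the direction of the opinion shift they would have produced (illustrative names):

```python
def edge_shift_bias(u, edges, theta):
    """Among shadow banned edges, count how many would shift the
    receiver's opinion upward versus downward. A skew toward
    downward-shifting edges reveals a policy pushing the mean up."""
    up, down = 0, 0
    for k, (j, i) in enumerate(edges):
        if u[k] > 0:
            if theta[j] > theta[i]:
                up += 1      # j's posts would pull theta_i upward
            elif theta[j] < theta[i]:
                down += 1    # j's posts would pull theta_i downward
    return up, down
```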
Our result shows the danger of shadow banning. One would think that if a social media platform employed an overtly biased content moderation policy, this bias would be easily observed. However, we find that the platform can employ a shadow banning policy which appears to be unbiased, yet over time creates a bias in the users’ opinions. The platform’s efforts at shifting opinions would likely go undetected as the actual implemented policy seems unbiased, or even biased in the opposite direction. One would not realize there was a bias in the policy until after it has been employed for a long period of time.
3.2.4 Sensitivity analysis.
We investigate the sensitivity of the performance of shadow banning policies as a function of the maximum mean shadow ban strength snetwork and the maximum edge shadow ban strength sedge. We investigate the sensitivity with respect to the opinion dynamics model parameters in the Appendix.
We first see how performance changes if we vary snetwork with sedge = 1. We consider the terminal value of the objective over the duration of the simulation. Fig 11 shows the change of the different objectives in the simulation relative to no shadow banning as snetwork is increased for the two datasets. We find that the objectives plateau for values of snetwork greater than 10% for both networks. There appears to be no benefit to applying stronger global shadow banning beyond this value. This most likely occurs because the shadow banning is applied to a limited number of critical edges at each time step. Therefore, the opinions can be shifted without shadow banning a significant fraction of the edges.
Fig 11. Here sedge = 1. The y-axis shows the relative magnitude of the objective value compared to that of no shadow banning.
We next investigate the impact of sedge on performance. We provide plots of how the terminal objective changes with respect to both sedge and snetwork for each dataset in Figs 12 and 13. We find that for values of sedge less than 0.5 there is very little change in the objective relative to no shadow banning. For larger values of sedge we see the shadow banning causing a non-trivial change in the objective values. This shows that strong shadow banning needs to be allowed on the targeted edges in order to produce a non-trivial shift in the opinion distribution. Therefore, while the shadow banning strength can be very low across the network, the critical edges that are targeted require a substantial amount of shadow banning.
Fig 12. The objectives are (top) maximize mean, (middle) minimize variance, and (bottom) maximize variance. Values in the cells are the magnitude relative to no shadow banning in percent.
Fig 13. The objectives are (top) maximize mean, (middle) minimize variance, and (bottom) maximize variance. Values in the cells are the magnitude relative to no shadow banning in percent.
4 Discussion
Our findings show the power and flexibility of shadow banning as a content moderation tool for online social media platforms. Precise shadow banning policies can be easily calculated for large networks by solving a linear program. By applying these shadow banning policies, platforms can exert delicate influence over the distribution of user opinions. While this can serve goals like reducing polarization or curbing misinformation, it also holds the potential for misuse. Shadow banning can intensify polarization within a network. Platforms might use shadow banning to steer opinions towards or away from specific topics. Additionally, platforms might employ such biased shadow banning while remaining unnoticed due to the outward appearance of political neutrality.
The danger of a social media platform engaging in biased shadow banning is significant. The effects are slow, and the bias can go undetected. Over time, this can lead to dangerous outcomes that are detected only when it is too late to prevent them. Election outcomes can potentially be changed by such manipulation. Societies can be polarized to the point of instability. Intelligent policies should be enacted to prevent such abuse by social media platforms. Conventional measures such as shadow ban rates may not reveal the bias exerted by the platforms. However, more precise measures, such as shadow ban rates for edges of different opinion shift polarity, can reveal this bias. Such measures should be employed to ensure that social media platforms use shadow banning to maintain platform health and safety and not for other malicious purposes.
Our research provides a framework for investigating how content moderation, particularly in the context of shadow banning, impacts user opinions. This framework can also be expanded to analyze the influence of content recommendation algorithms on opinions. An intriguing path for future research is to utilize our framework to evaluate the effects of content recommendation algorithms on opinion polarization. This avenue of exploration could pave the way for designing content recommendation algorithms that not only enhance user experiences but also proactively address the potential for increased polarization.
5 Appendix
5.1 Robustness for variations of the bounded confidence model
We present here the performance of different shadow banning objectives on the U.S. presidential election and Gilets Jaunes Twitter networks under different specifications of the bounded confidence model. All combinations of ϵ ∈ {0.01, 0.1, 0.3, 0.5, 1} and ω ∈ {0.001, 0.003, 0.01} are simulated, and the terminal objective values relative to no shadow banning are shown in the heat maps in Figs 14 and 15 for each dataset (a minimal sketch of the update rule appears after the figure captions below). Shadow banning strength limits are fixed at snetwork = 0.05 and sedge = 1.
Fig 14. The objectives are (top) maximize mean, (middle) minimize variance, and (bottom) maximize variance. Values in the cells are the magnitude relative to no shadow banning in percent.
Fig 15. The objectives are (top) maximize mean, (middle) minimize variance, and (bottom) maximize variance. Values in the cells are the magnitude relative to no shadow banning in percent.
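For reference, the dynamics varied in this sweep are of the bounded confidence type: an agent is influenced only by neighbors whose opinions lie within the confidence bound ϵ, and it moves toward them with persuasion strength ω. The sketch below is a generic synchronous update with shadow banning folded in as hypothetical edge multipliers; the exact specification used in the simulations may differ.

```python
import numpy as np

def bounded_confidence_step(x, edges, eps, omega, ban=None):
    """One synchronous bounded confidence update over an undirected edge list.
    Agents i and j influence each other only if |x[i] - x[j]| <= eps; a shadow
    ban strength on the edge (hypothetical coupling) scales the influence."""
    x = np.asarray(x, dtype=float)
    x_new = x.copy()
    for k, (i, j) in enumerate(edges):
        if abs(x[j] - x[i]) <= eps:
            w = 1.0 if ban is None else 1.0 - ban[k]  # banned edges exert less pull
            x_new[i] += w * omega * (x[j] - x[i])
            x_new[j] += w * omega * (x[i] - x[j])
    return x_new
```

Under an update of this kind, small ω leaves opinions nearly frozen between interventions, so shadow banning dominates, while large ϵ and ω pull the network toward consensus, matching the reduced variance-maximization gains at ϵ = 1 reported below.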
Our first observation is that regardless of the choice of ϵ and ω in the bounded confidence model, our policy improves the objective relative to no shadow banning. This indicates that our shadow banning policies are robust, to a degree, with respect to the specification of the bounded confidence model.
For maximizing the mean, the objective increases smoothly as the persuasion strength increases. The variance objectives, however, show more interesting behavior. In the U.S. election dataset, when minimizing the variance, larger ϵ values do not offer as much improvement as ϵ = 0.1. This is because stronger opinion dynamics play a more dominant role in determining the location of the opinion consensus, overshadowing the impact of shadow banning. When maximizing the variance, the most substantial terminal variance increase occurs at ϵ = 0.5 when ω = 0.001 and 0.003, and at ϵ = 0.1 when ω = 0.01. At ϵ = 1, however, the improvements are smaller because the network is more resistant to polarization under stronger attractive opinion dynamics. Similar trends are observed in the Gilets Jaunes dataset: for minimizing variance, ϵ = 0.1 leads to the smallest terminal variance, while for maximizing variance, ϵ = 0.3 results in the largest terminal variance.
These findings provide useful guidance when designing shadow banning policies. For objectives involving the opinion mean, the precise choice of opinion dynamics parameters is not critical. For objectives involving the opinion variance, one must determine whether the opinion dynamics exhibit strong or weak persuasion, as strong persuasion makes it harder for shadow banning to overcome the natural attractive opinion dynamics. Since real-world social networks exhibit persistent polarization, better shadow banning policies will result from using opinion dynamics models with weak persuasion.