
Infodemics: Do healthcare professionals detect corona-related false news stories better than students?

  • Sven Grüner ,

    Roles Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Supervision, Writing – original draft, Writing – review & editing

    sven.gruener@landw.uni-halle.de

    Affiliation Institute of Agricultural and Nutritional Sciences, Martin Luther University Halle-Wittenberg, Halle (Saale), Germany

  • Felix Krüger

    Roles Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Writing – original draft, Writing – review & editing

    Affiliation Faculty of Law and Economics, Martin Luther University Halle-Wittenberg, Halle (Saale), Germany

Abstract

False news stories cause welfare losses and fatal health consequences. To limit their dissemination, it is essential to know what determines the ability to distinguish between true and false news stories. In our experimental study, we present subjects with corona-related stories taken from the media from various categories (e.g. social isolation, economic consequences, direct health consequences, and strong exaggeration). The subjects’ task is to evaluate the stories as true or false. Besides students with and without a healthcare background, we recruit healthcare professionals to increase the external validity of our study. Our main findings are: (i) Healthcare professionals perform similarly to students in correctly distinguishing between true and false news stories. (ii) The propensity to engage in analytical thinking and actively open-minded thinking is positively associated with the ability to distinguish between true and false. (iii) The residence of the subjects (East or West Germany) plays only a minor role. (iv) If news stories are in line with existing narratives, subjects tend to think that the stories are true.

1 Introduction

The corona crisis has provided many examples of what Tedros Adhanom Ghebreyesus, Director-General of the World Health Organization [1], denoted as an “infodemic.” For example, it has been suggested to cure Covid-19 with the help of smoking, cocaine, or even cow urine. Hundreds of Iranians died from drinking methanol to cure Covid-19 and many others suffered serious health consequences [2]. There are also far-fetched explanations of its origins, including a bioweapon or the 5G wireless technology [3]. Moreover, conspiracy theories and false claims went viral when, for example, about 20,000 people demonstrated in Berlin (Germany) in June 2020 against the corona measures of Chancellor Merkel. In social networks, some people (mostly supporters of right-wing parties) shared posts claiming that far more people had joined the demonstration. Such false claims are dangerous in that they give the impression that many more people oppose these measures. As a consequence of the rise of corona-related false news stories, various state governments and institutions have taken action. For example, the German Federal Ministry of Health released a warning against Covid-19-related false news stories [4]. Twitter removed a tweet of President Donald Trump in which he retweeted a supposed cure for Covid-19, when in fact no such cure existed [5]. In several countries, laws against the spread of false news have been passed. In Hungary, for example, people can be sentenced to prison for up to 5 years if they violate the law, which, however, creates an atmosphere of uncertainty among journalists [6].

What is the problem with false news stories in general? Lazer et al. [7] argue: “We define “fake news” to be fabricated information that mimics news media content in form but not in organizational process or intent. Fake-news outlets, in turn, lack the news media’s editorial norms and processes for ensuring the accuracy and credibility of information.” We follow this definition but prefer to speak of false news stories throughout this paper. The reason is that the term fake news has been used not only to describe false information but also derogatorily for information that does not reflect one’s own (e.g. a politician’s) opinion [8]. Beyond terminology, a key problem of false news stories is that they restrict the functioning of markets as well as democratic, political decision processes. They hamper the competition of ideas, which, in turn, can lead to societal misallocations, for example, by influencing public opinion and voting [9]. Moreover, trust in media and institutions in general can be eroded. Even worse, false news stories spread fast. For example, Vosoughi et al. [10] found in their Twitter study that false news spreads faster than true news since the former is mostly topical and causes emotional reactions. That makes false news hard to correct, in particular when a huge amount of it is generated, as is the case in the corona pandemic [11].

Since its outbreak, there has been a huge amount of information on Covid-19 every day [12]. Many news items were correct, but there were also a large number of false news stories. Thus, it is no surprise that Donovan [13] demands in a Nature article that “Social-media companies must flatten the curve of misinformation.” To avoid welfare losses in general and adverse health consequences due to false claims in particular, it is socially desirable that people are able to distinguish between true and false news information. But who is good at this exercise? The objective of this paper is to identify determinants that help to distinguish between true and false news stories. Knowledge about such determinants can help to reduce the dissemination of false news stories.

This question is not new. For example, Pennycook and Rand [14,15] have tackled it before. However, our study differs from former studies in the design and context of the news stories. While other studies mostly present only the headline of a news story, we also show experimental subjects a couple of sentences or a short paragraph. Instead of analyzing political news stories (e.g. Presidential Election Campaign; [14,15]), we address corona-related news stories. Furthermore, many experimental studies deal with students only [cf., 16–18]. Students are easy to recruit because of their low opportunity costs. However, the external validity of a study can be questioned if only students are considered, since they are a rather untypical population with regard to age, income, and education. We do not restrict ourselves to the population of students but also recruit healthcare professionals to increase the external validity of our study. It seems to be an interesting question whether the expertise and experience of healthcare professionals help to more adequately process corona-related news information.

The rest of the paper is structured as follows. In Section 2, we describe our behavioral research questions. After presenting the methods (experimental design, approach to data analysis, recruitment procedure) in Section 3, we describe our results in Section 4. Section 5 concludes.

2 Behavioral research question

As indicated above, this paper aims at analyzing the ability to distinguish between true and false corona-related news stories. We tackle the following research questions:

  i. Do students who are enrolled in medicine or healthcare perform better in identifying false news stories than other students? Do healthcare professionals (e.g. physicians) perform best in differentiating between true and false news stories?

According to Pennycook et al. [19], COVID-19 is a scientific issue. We expect that students of medicine and the health care sciences are more capable of processing and classifying information on corona due to their theoretical knowledge than students of other degree programs. Thus, the former should be better in distinguishing between true and false news stories. We assume that healthcare professionals (e.g. physicians) are best at differentiating between true and false news stories by virtue of their theoretical knowledge and practical experience.

  ii. Are the propensity to engage in analytical reasoning (= cognitive sophistication) and actively open-minded thinking (AOT) positively correlated with the ability to correctly distinguish between true and false news?

Pennycook and Rand [14,15] find that the propensity to engage in analytical reasoning helps to differentiate between true and false news stories. In line with that, analyzing data from Canada, the U.K., and the U.S.A., Pennycook et al. [19] find a negative association between cognitive sophistication and misperceptions about COVID-19. Another safeguard against false news stories is actively open-minded thinking: experimental evidence has shown that there is a positive correlation between the AOT score and the ability to differentiate between true and false news stories [20].

  iii. Does the familiarity with the stories (i.e., subjects report that they have seen the story before) increase the probability that people think that the story is true?

In their experimental study, Pennycook et al. [21] show that even a single exposure increases the perceived accuracy of false news stories. The repetition of a news story promotes familiarity and higher familiarity increases, in turn, the probability that it is perceived as true [22]. This is also known as the illusory truth effect. Moreover, experimental results in the field of environmental economics provide evidence that even the perception of having seen a news story before, increases the likelihood that the story is considered to be true [23]. As a consequence, false news stories are more likely to be accepted as true. We investigate this relationship in the context of COVID-19.

  iv. Are there differences between the eastern and western population of Germany in the perception of news stories?

Before reunification, Germany was divided into two distinct economic systems: capitalism in West Germany and socialism in East Germany. This different socialization could lead individuals to perceive information differently and to react differently to political measures (e.g. lockdowns). Different socialization and experiences of the corona crisis may have led to different emotions and evaluations of media content. However, it remains unclear whether this affects the ability to differentiate between true and false news.

  v. How do anxiety and personal experiences influence the ability to distinguish between true and false news stories?

According to confirmation bias, individuals are more likely to believe information that is consistent with their own views [e.g. 24]. Accordingly, people who are afraid of corona could be more likely to believe news stories that stress its negative consequences. Similarly, personal experiences and involvement may be relevant: the more affected individuals are, the more likely they are to uncritically accept news items that address strong negative consequences. However, it remains an open question whether fears or personal experiences are important determinants of the ability to distinguish between true and false news stories.

3 Methods

This study has been approved by the German Association for Experimental Economic Research e.V. (No. 8ScdfpyT). The participants were informed about the background of the study (problem of false news stories in the health-care sector) and what had to be done in the study (to evaluate news stories, answer questions about experiences and opinions, etc.). They were also told that participation is voluntary and that data processing is anonymous and confidential. The study is in line with the General Data Protection Regulation (EU) 2016/679. To participate, individuals had to confirm (by actively checking the respective boxes in the web-based study) that they are at least 18 years old and accept the conditions of participation. Moreover, the study has been pre-registered before any data have been collected (AsPredicted #40327).

3.1 Experimental design

The study consists of two parts: In the first part, experimental subjects are shown 8 stories taken from the news media. Note that the study was launched in May 2020 and the news stories, therefore, reflect the early stages of the corona pandemic. In the second part, we collect data on a variety of socio-demographic variables, attitudes, and personality traits.

A. News stories.

We present each experimental subject with 8 corona-related stories taken from the media (cf., Table 1 for a short description of the stories; the sources of the stories can be found in Table A1 of the Appendix). Subjects were presented with a headline and a couple of sentences (e.g. a small paragraph) of a news article. By not only showing a headline but also a couple of sentences, we provide the subjects with background information. For example, Germans who are not living in Saxony-Anhalt may never have heard of Haseloff. Thus, subjects can read some details about a topic if they want. In reality, people can also look for additional information, for example, by using a search engine. However, we cannot say whether there are differences at all between our design and providing headlines only. This is left open for further research.

The overall topics of the stories can be roughly divided into 4 categories: Social isolation (stories 1 & 2), economic consequences (stories 3 & 4), direct health consequences (stories 5 & 6), and strong exaggeration (stories 7 & 8). For each story, we have a true and a false version. We refer to a story as true if we did adopt the story from the media without manipulating its content. The false news stories contain any kind of false news information. After presenting the subjects a story (either true or false), we asked them whether they believe that the story is accurate (i.e., does not contain any kind of false news information). We randomly assign subjects either to the correct or false version of a story. Randomization allows us to interpret the results in terms of causality and not only correlation. Moreover, we attached three further questions to each story: (i) how confident are subjects in their assessment, (ii) have the subjects seen the news story before, and (iii) has the news story surprised the subjects when they read it. Overall, we randomized the order of the news stories to mitigate possible order and anchoring effects. For example, the news stories could cause emotions to an unknown extent, which might affect the response behavior to other stories.
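The assignment procedure described above can be sketched as follows. This is an illustrative reconstruction, not the authors’ actual implementation; the helper `assign_treatment` and the seed are hypothetical.

```python
import random

STORIES = list(range(1, 9))  # 8 corona-related stories

def assign_treatment(rng):
    """For one subject, randomly assign the true or false version of
    each story and randomize the presentation order (illustrative)."""
    versions = [(story, rng.choice(["true", "false"])) for story in STORIES]
    rng.shuffle(versions)  # mitigate possible order and anchoring effects
    return versions

plan = assign_treatment(random.Random(42))
```

Randomizing the version per story (rather than per subject) is what allows a causal reading of the version effect; shuffling the order addresses the emotional spillover concern mentioned above.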

What do the manipulations of the stories (i.e., false news stories) look like? With the exception of the category strong exaggeration, we have changed the sign of the core statement of each story. For example, the correct version of story 1 is about Saxony-Anhalt’s Prime Minister Reiner Haseloff who has argued that East Germans are better prepared for the corona crisis. The key message of the false story is reversed in the sense that the politician said that East Germans are less well prepared for the corona crisis. Let us take a look at another example, for illustration purposes. The fifth story, which deals with direct health consequences due to corona, is about gender. The correct version argues that men are more vulnerable to the new coronavirus. The false version, where we completely change the central message, claims that women are more vulnerable to the new coronavirus. The stories that we label as strong exaggeration are made much more extreme in the false version. For example, story 8 is about the transmission of corona via farts. In its correct version, it is only said that an Australian medical doctor has made some statements on this. In the false version, it is claimed that the Texas vice-governor initiated a general pants duty and that there is some Twitter activity (e.g. #Pants duty).

B. Further variables.

Propensity to engage in analytical reasoning. Frederick [25] introduced the cognitive reflection test (CRT) to measure whether people can be described as intuitive or reflective thinkers. The test consists of a set of questions that have an intuitive but wrong answer. The correct answer can be found at a second look (i.e., after rechecking the result). This test is often used to elicit the propensity to engage in analytical reasoning: a high score is associated with analytical thinking, whereas a low score is related to intuitive thinking. We slightly changed the wording of the original items of Frederick’s test. For example, we asked: “A safety mask and a disinfectant product together cost €11.10. The protective mask costs €10 more than the disinfectant. How much does the disinfectant cost?” The correct answer is €0.55 (the intuitive but false answer is €1.10).

Actively open-minded thinking (AOT). AOT measures whether actively open-minded thinking is perceived as good. We adopt the 7-item scale from Haran et al. [26]. For example, one of their items is: “People should take into consideration evidence that goes against their beliefs.”

Beyond CRT and AOT, we collected socio-demographic variables (e.g. age, education, gender, residence) and variables about fears and anxiety. The latter include not only worries about immediate adverse health consequences, social isolation, and economic consequences but also individual actions taken in response to corona (basic food reserves, hygiene products, and homeopathy). We also captured data on the consumption of information and related attitudes (e.g. trust in media, change of trust in media, trust in government).
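The CRT item above reduces to a simple linear equation, which makes clear why €0.55 (not the intuitive €1.10) is correct:

```python
# With d the price of the disinfectant: d + (d + 10) = 11.10,
# so d = (11.10 - 10) / 2 = 0.55.
total, difference = 11.10, 10.0
disinfectant = (total - difference) / 2
mask = disinfectant + difference
# The intuitive answer (1.10) would imply a mask price of 10.00,
# i.e., a difference of only 8.90 rather than the stated 10.
```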

C. Financial incentives.

We raffled 5 x €50 among all participants. In order to separate the answers given in the study and personal data, the participants were asked to send us an informal e-mail if they would like to participate in the raffle.

3.2 Statistical methods used for data analysis

Our variable of interest is whether subjects correctly identify news stories taken from the media. Correct identification means that correct news stories are identified as correct and false news stories are identified as false.

I. Overall correct identification.

On the aggregate level, we sum up how often the stories are correctly identified by the subjects. Since we examine a total of 8 news stories, the dependent variable can take values from 0 to 8. This allows us to run a simple OLS regression. The regression contains Population (i.e., whether subjects identified themselves as healthcare professionals, healthcare students, non-healthcare students, or something else). The category (i.e., vector) Thinking captures both CRT and AOT. Familiarity measures whether subjects have seen the stories before (aggregated over all stories), whether the stories are surprising, and whether subjects are confident in their answers. The dummy East covers the residence of the subjects (East Germany or West Germany). Anxiety&Involvement captures the worries of the subjects (e.g. social isolation, immediate health consequences, consequences for the economy; reactions: food reserves or more disinfection; risk factors, such as smoking behavior and age). The other variables in the regression serve as exploratory controls. Education measures the highest formal degree of the subjects. Gender captures the gender the subjects identify with. Information measures the consumption and perception of news and how it is communicated (e.g. trust in media, media consumption). Week controls for the point in time when the subjects joined the study and Time measures the number of minutes the subjects needed to finish the study.

$$\text{OverallCorrect}_i = \beta_0 + \boldsymbol{\beta}_1'\,\text{Population}_i + \boldsymbol{\beta}_2'\,\text{Thinking}_i + \boldsymbol{\beta}_3'\,\text{Familiarity}_i + \beta_4\,\text{East}_i + \boldsymbol{\beta}_5'\,\text{Anxiety\&Involvement}_i + \boldsymbol{\beta}_6'\,\text{Controls}_i + \varepsilon_i \tag{1}$$

II. Story-by-story correct identification.

In the story-by-story analysis, we look at each story separately (i.e., no aggregation over the stories). Since the dependent variable is binary, we run logit regressions. To interpret the results meaningfully, we provide (average) marginal effects [27,28].

$$\Pr(\text{Correct}_{is} = 1) = \Lambda\left(\mathbf{x}_i'\boldsymbol{\beta}\right), \tag{2}$$

where $\Lambda(\cdot)$ is the logistic function and $\mathbf{x}_i$ contains the same regressors as in the overall regression.

3.3 Recruitment strategy

We planned to recruit healthcare professionals, healthcare students, and non-healthcare students. The starting point to recruit students was a list of universities in Germany from Wikipedia (https://de.wikipedia.org/wiki/Liste_der_Hochschulen_in_Deutschland). From this list, we selected the largest universities (in terms of the number of students) and contacted the deans/deans of studies with the request to advertise the study. In addition, we directly contacted several professors from different departments and student councils. We put emphasis on covering subjects from different regions of Germany to obtain meaningful results (e.g. not only subjects from the south of Germany). In order to recruit healthcare professionals, we used the publicly available physician lists of the Association of Statutory Health Insurance Physicians of various federal states of Germany. Furthermore, subjects from various university hospitals were considered as long as contact details were publicly available.

3.4 Data manipulation

Before analyzing the data, we carried out some plausibility checks. As a result, we dropped a total of three subjects. Two of them were fast straightliners who took less than 5 minutes to finish the study. The third subject gave implausible answers (e.g. age = 99).
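The plausibility checks can be expressed as simple filters on the raw data. This is a hypothetical sketch with toy values; the column names are illustrative, but the thresholds (less than 5 minutes, age = 99) follow the description above.

```python
import pandas as pd

# Toy responses; column names and values are illustrative.
raw = pd.DataFrame({
    "subject": [1, 2, 3, 4],
    "minutes": [12.0, 3.5, 20.0, 18.0],
    "age": [25, 30, 99, 40],
})

# Drop fast straightliners (< 5 minutes) and implausible answers (age = 99).
clean = raw[(raw["minutes"] >= 5) & (raw["age"] < 99)]
```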

4 Results

4.1 Description of the subjects

During the period from 18.5.2020–2.8.2020, we recruited a total of 2,077 experimental subjects. The vast majority of our subjects are university students (N = 1,457). Among them, there are 208 healthcare students. We recruited 367 professionals from the non-healthcare sector. Our sample contains 213 healthcare professionals (of whom 128 subjects identified themselves as physicians). The description of the subjects is depicted in Table 2 (for details on the variables and their measurement, see Appendix A2). In the following, we do not attempt to describe the variables in detail, but rather to communicate a broad sense of the data set. This is sufficient (but also necessary) to better understand the (regression) results which we describe later. To our surprise, a considerable number of people whom we approached with the request to advertise the study participated themselves. Therefore, the level of education of the non-healthcare professionals is relatively high. This is important since non-healthcare professionals are a significant part of our control group in the population analysis. There are several differences between professionals and students. It is no surprise that professionals are on average older than students. However, there are interesting gender differences in the willingness to participate in the study. While slightly less than 50% of the professionals identified themselves as women, there was a surplus of women among the students. This surplus was particularly evident among the healthcare students. Moreover, we consider subjects who identified themselves with the third gender. However, this sample size is very small and, thus, any related results are preliminary.

4.2 Analysis of the decision behavior

4.2.1 First view on the decision behavior.

Overall, the healthcare professionals performed slightly better than the students in distinguishing between true and false news stories (Table 3). Non-healthcare professionals performed best, but the gap to the other populations is quite small. Across the stories, the performance of the subpopulations is similar. Story 3 stands out because of the poor performance of all subpopulations. A first educated guess is that there is a gap between the viral narrative (that there are shortages of medical goods and commodities, such as toilet paper and disinfectant, as reported by the media) and the key message of the story (a reliance on foreign countries for medical goods cannot be empirically proven).

Table 3. Fraction of correct identification–overall and story by story (N = 2,074) (1).

https://doi.org/10.1371/journal.pone.0247517.t003

We want to mention another point that is related to the performance of healthcare professionals in stories 5 and 6, i.e., the stories that address direct health implications of Covid-19 and in which we could have expected the medical professionals to perform much better than the others. However, the healthcare professionals performed quite similarly to the other subpopulations (or only slightly better). One reason for this might be that they “speak another language.” In other words, they might have perceived everyday articles from the media as incorrect due to the words chosen by the author of the news story.

4.2.2 Regression analysis.

I. Overall correct identification. The regression results are depicted in Table 4. Panel Ia is our main specification, which we describe in detail; the other two estimations serve as robustness checks. Healthcare professionals (β = -0.2635, p-value = 0.019) seem to perform less well than students with (β = 0.0339, p-value = 0.800) and without (β = 0.0127, p-value = 0.906) healthcare background: the sign of the former is negative, whereas the student variables are slightly but positively associated with the overall correct identification of corona-related news stories. CRT and AOT are positively related to the ability to correctly distinguish between true and false. The magnitude of CRT (β = 0.0886, p-value = 0.003) seems to be more pronounced than that of AOT (β = 0.0686, p-value = 0.124). The association between Familiarity with the stories and correct identification is positive, but very small (β = 0.0177, p-value = 0.514). The same pattern can be observed for the variable Surprising (β = 0.0136, p-value = 0.425). A somewhat larger effect results from Certainty (β = 0.0381, p-value = 0.040). Overall, subjects from East Germany do not seem to differ much from the other subjects: the sign is positive, but the effect is quite small in magnitude (β = 0.0467, p-value = 0.461). The associations between the variables Quality health system to Smoker in the regression output (which capture the subjects’ anxiety and personal experiences) and the ability to distinguish between true and false show no clear pattern. They all have in common that the effect size is relatively low, but the sign of the respective variables varies seemingly at random. Interestingly, Age is positively related to our variable of interest (β = 0.0405, p-value = 0.019); however, the effect declines with age (β = -0.0449, p-value = 0.027). The other variables are addressed for exploratory purposes; we refer to them only if they seem to contribute substantially to the distinction between true and false.
Our focus is on education and the time spent to finish the study. Education seems to have considerable explanatory power for our research interest: the coefficient is positive and large in magnitude (β = 0.1042, p-value = 0.005). The more time the subjects took to finish the study (i.e., Duration participation), the better they were at distinguishing between true and false (β = 0.0217, p-value = 0.012), but this effect diminishes with additional time (β = -0.0218, p-value = 0.072).

Table 4. OLS regressions to explain “Overall correct identification” (N = 2,053).

https://doi.org/10.1371/journal.pone.0247517.t004

A robustness check revealed interesting insights: If we refrain from controlling for Education and Age, there are changes in the variables Population and AOT (the other variables remain the same). Most notably, the seemingly worse performance of the Healthcare professionals (compared to students with and without healthcare background) vanishes. In contrast, they even perform slightly better. Moreover, the magnitude of AOT increases. Considering panel IIa, the drivers of the effects seem to be mainly Education in the case of AOT, and Age in the case of the Healthcare professionals and students.

In the following story-by-story analysis, we check whether the results change if we do not control for age and/or education. If so, we report it. Otherwise, we stick to our main regression.

II. Story-by-story correct identification. In this section, we want to present insights that are related to the individual stories. We focus on some highlights for each story only (Tables 5 and 6).

Table 5. Marginal effects after logit regressions to explain “Correct identification” (N = 2,053).

https://doi.org/10.1371/journal.pone.0247517.t005

Table 6. Marginal effects after logit regressions to explain “Correct identification” (N = 2,053), cont.

https://doi.org/10.1371/journal.pone.0247517.t006

Story 1. The key driver to explain the ability to distinguish between true and false news is Familiarity (β = 0.1875, p-value<0.001). The East dummy is positive, i.e., subjects from East Germany performed better in this story (β = 0.0699, p-value = 0.005). This is in line with a confirmation bias: people from each of the two regions might have thought that they are better prepared for the corona crisis. But only in the case of East Germany is this answering behavior associated with a correct answer (because Haseloff actually said that the East is better prepared).

Story 2. Men perform slightly worse than women in distinguishing between true and false (β = -0.0355, p-value = 0.016). Possible explanations might include own experiences, empathy, and, in the case of the students, training in their respective degree programs. Moreover, it is interesting that Certainty increases the probability of performing well (β = 0.0256, p-value<0.001).

Story 3. In this story, Familiarity (β = -0.1841, p-value<0.001) and (to a lesser extent) Certainty (β = -0.0331, p-value<0.001) are drivers of the decision behavior of the subjects. Both signs are negative and the magnitudes of the variables seem to be important. Probably the subjects had heard news stories in this realm before. We guess that the pervasive narrative (i.e., there is a shortage of medical goods and commodities as well as a considerable reliance on foreign countries) contradicts the finding of the correct version of the news story. This is in line with a confirmation bias.

Story 4. Familiarity is a strong predictor in this story (β = 0.2678, p-value<0.001). If people believe they know the story, they perform better. Immunosuppression is positively related to the variable of interest (β = 0.0750, p-value = 0.042), and the effect size is large. This is interesting because one might have expected that this group of people fears a lack of hospital capacity most. Maybe the unexpected finding can be explained by the experience of regular visits to physicians and hospitals.

Story 5. Familiarity is a strong predictor in this story as well (β = 0.2337, p-value<0.001). Compared to women, subjects who identified themselves as Diverse performed less well in distinguishing between true and false (β = -0.2155, p-value = 0.019). This result is at best preliminary since the sample size is small.

Story 6. As expected, Reaction homeopathy is negatively (but only slightly) associated with our variable of interest (β = -0.0155, p-value = 0.016). This is in line with a confirmation bias: the correct version of this story was that homeopathic remedies themselves have no effect. People who made heavier use of homeopathic remedies probably did not rule out a positive effect. The effect of Reaction disinfection (β = -0.0138, p-value = 0.015) is similar to that of Reaction homeopathy.

Story 7. Small, positive effects can be observed for the variables Surprising (β = 0.0464, p-value = 0.086) and (to a lesser extent) Certainty (β = 0.0171, p-value<0.001). The variable Trustmedia (β = 0.0305, p-value = 0.024) has a considerable positive effect. Compared to women, subjects who identified themselves as Diverse performed less well in distinguishing between true and false (β = -0.1940, p-value = 0.041).

Story 8. CRT (β = 0.0392, p-value<0.001) and AOT (β = 0.0418, p-value = 0.007) protect against falling for this kind of strong exaggeration.
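The story-level results above rest on regressions of a binary correctness indicator on subject-level predictors such as Familiarity, CRT, and AOT. The following is a minimal sketch of that kind of analysis using a logistic model fitted by gradient ascent; the data, variable names, and effect sizes are entirely synthetic and for illustration only, not the study's actual data or estimation procedure.

```python
import math
import random

random.seed(42)

def simulate(n=2000):
    """Generate synthetic subjects: higher CRT/AOT/Familiarity raise the
    probability of correctly classifying a story (hypothetical effects)."""
    rows = []
    for _ in range(n):
        crt = random.random()          # analytical thinking score, 0..1
        aot = random.random()          # actively open-minded thinking, 0..1
        familiarity = random.random()  # self-reported familiarity, 0..1
        z = -0.5 + 1.5 * crt + 1.0 * aot + 0.8 * familiarity
        p = 1.0 / (1.0 + math.exp(-z))
        correct = 1 if random.random() < p else 0
        rows.append(([1.0, crt, aot, familiarity], correct))
    return rows

def fit_logit(rows, lr=0.1, epochs=300):
    """Fit a logistic regression by batch gradient ascent on the log-likelihood."""
    k = len(rows[0][0])
    beta = [0.0] * k
    n = len(rows)
    for _ in range(epochs):
        grad = [0.0] * k
        for x, y in rows:
            z = sum(b * xi for b, xi in zip(beta, x))
            p = 1.0 / (1.0 + math.exp(-z))
            for j in range(k):
                grad[j] += (y - p) * x[j]
        for j in range(k):
            beta[j] += lr * grad[j] / n
    return beta

beta = fit_logit(simulate())
for name, b in zip(["Intercept", "CRT", "AOT", "Familiarity"], beta):
    print(f"{name}: {b:+.3f}")
```

Because the synthetic data generator uses positive coefficients for CRT, AOT, and Familiarity, the fitted slopes come out positive, mirroring the sign pattern (though not the magnitudes) reported for CRT and AOT in Story 8.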

5 Conclusions and discussion

Infodemics, the spread of false news claims, constitute a great societal challenge during the corona crisis. This paper addressed the question of who is good at distinguishing between true and false news stories about corona. For this purpose, we recruited not only students but also healthcare professionals. The main findings of the study can be summarized as follows: Healthcare professionals and students with and without a healthcare background perform similarly in distinguishing between true and false news stories. Moreover, the residence of the subjects (East or West Germany) plays only a minor role. Furthermore, we found evidence that the propensity to engage in analytical thinking (CRT) and actively open-minded thinking (AOT) is positively associated with the ability to correctly distinguish between true and false news stories. When this study was carried out, there was a shortage of commodities; most notably, pictures of empty shelves where toilet paper and disinfectant had sold out went viral in the German media. Probably as a consequence of having these pictures in mind, people incorrectly thought that Germany's medical care heavily depends on foreign countries. If news stories are in line with existing narratives, subjects tend to think that the stories are true.

With regard to CRT and AOT, our results are in line with Pennycook and Rand [14,15], who also found that these two determinants help to distinguish between true and false news stories. Our finding that narratives seem to matter is related to the literature on confirmation bias [24]: prior beliefs influence whether individuals agree with something or not. Narratives are stories that go viral at a specific time and may influence the beliefs of individuals. To the best of our knowledge, the other two findings have not been systematically studied before. Overall, the residence of the subjects (East or West Germany) does not seem to matter much for our topic. Perhaps, about 30 years after reunification, the different socialization is no longer important when it comes to distinguishing between true and false news stories. To our surprise, healthcare professionals did not perform better than non-healthcare professionals or students even when the news stories were linked to immediate health implications. Perhaps healthcare professionals use a different language than laypeople and perceive everyday media articles as imprecise because of the wording authors use to reach a broad audience.

Our study shows that individuals are vulnerable to false news, regardless of their level of education and expertise. In this realm, narratives seem to matter: communication in the mass media influences people's perception of the state of the world. We identified AOT and CRT as protective factors; teaching activities in this area might help people better distinguish between true and false news stories and, in turn, reduce the spread of false news stories. However, our study suffers from some limitations. For example, we deal with a non-representative convenience sample of mostly highly educated individuals; further research should address the general population. Furthermore, the role of narratives should be investigated further. For example, it is important to find out how narratives and the perception of news stories (as either true or false) are correlated with each other. Moreover, it would add value to the literature to find out under which circumstances people think about news stories, accept them uncritically, or even ignore them.

Appendix

Table A2. Codebook of the collected variables and their measurement.

https://doi.org/10.1371/journal.pone.0247517.t008

References

1. World Health Organization (2020) https://www.who.int/director-general/speeches/detail/munich-security-conference. (Accessed: 02 February 2021).
2. Shokoohi M, Nasiri N, Sharifi H, Baral S, Stranges S (2020) A syndemic of COVID-19 and methanol poisoning in Iran: Time for Iran to consider alcohol use as a public health challenge? Alcohol 87: 25–27. pmid:32505493
3. Caulfield T (2020) Pseudoscience and COVID-19—we’ve had enough already. Nature. pmid:32341556
4. Aerzteblatt (2020) https://www.aerzteblatt.de/nachrichten/111060/Bun%C2%ADdes%C2%ADge%C2%ADsund%C2%ADheits%C2%ADmi%C2%ADnis%C2%ADter%C2%ADium-warnt-vor-Falschnachrichten. (Accessed: 05 January 2021).
5. Rosenberg H, Syed S, Rezaie S (2020) The Twitter pandemic: The critical role of Twitter in the dissemination of medical information and misinformation during the COVID-19 pandemic. Canadian Journal of Emergency Medicine 22(4): 418–421. pmid:32248871
6. Koltay A (2020) The Punishment of Scaremongering in the Hungarian Legal System. Freedom of Speech in the Times of the COVID-19 Pandemic. http://dx.doi.org/10.2139/ssrn.3735867.
7. Lazer DMJ et al. (2018) The science of fake news. Science 359(6380): 1094–1096. pmid:29590025
8. Grüner S (2020a) An empirical study on Internet-based false news stories: experiences, problem awareness, and responsibilities. International Journal of Applied Decision Sciences (forthcoming).
9. Waldman AE (2018) The Marketplace of Fake News. Journal of Constitutional Law 20(4). https://scholarship.law.upenn.edu/jcl/vol20/iss4/3.
10. Vosoughi S, Roy D, Aral S (2018) The spread of true and false news online. Science 359(6380): 1146–1151. pmid:29590045
11. van Bavel JJ et al. (2020) Using social and behavioural science to support COVID-19 pandemic response. Nature Human Behaviour 4: 460–471. pmid:32355299
12. Fleming N (2020) Coronavirus misinformation, and how scientists can help to fight it. Nature 583: 155–156. pmid:32601491
13. Donovan J (2020) Social-media companies must flatten the curve of misinformation. Nature. pmid:32291410
14. Pennycook G, Rand DG (2019) Lazy, not biased: Susceptibility to partisan fake news is better explained by lack of reasoning than by motivated reasoning. Cognition 188: 39–50. pmid:29935897
15. Pennycook G, Rand DG (2020) Who falls for fake news? The roles of bullshit receptivity, overclaiming, familiarity, and analytic thinking. Journal of Personality 88(2): 185–200. pmid:30929263
16. Croson R (2006) The Method of Experimental Economics. In: Carnevale P, de Dreu CKW (Eds.): Methods of Negotiation Research. Martinus Nijhoff Publishers, Leiden: 289–306.
17. Druckman JN, Kam CD (2011) Students as Experimental Participants: A Defense of the “Narrow Data Base”. In: Druckman JN, Green DP, Kuklinski JH, Lupia A (Eds.): Cambridge Handbook of Experimental Political Science. Cambridge University Press, Cambridge: 41–57.
18. Fréchette GR (2015) Laboratory Experiments: Professionals versus Students. In: Fréchette GR, Schotter A (Eds.): Handbook of Experimental Economic Methodology. Oxford University Press, Oxford: 360–390.
19. Pennycook G, McPhetres J, Bago B, Rand DG (2020) Attitudes about COVID-19 in Canada, the U.K., and the U.S.A.: A novel test of political polarization and motivated reasoning. https://doi.org/10.31234/osf.io/zhjkp.
20. Bronstein MV, Pennycook G, Bear A, Rand DG, Cannon TD (2019) Belief in Fake News is Associated with Delusionality, Dogmatism, Religious Fundamentalism, and Reduced Analytic Thinking. Journal of Applied Research in Memory and Cognition 8(1): 108–117.
21. Pennycook G, Cannon TD, Rand DG (2018) Prior exposure increases perceived accuracy of fake news. Journal of Experimental Psychology: General 147(12): 1865–1880. pmid:30247057
22. Cook J, Lewandowsky S (2012) The Debunking Handbook. St. Lucia, Australia: University of Queensland. https://skepticalscience.com/Debunking-Handbook-now-freely-available-download.html.
23. Grüner S (2020b) Identifying and debunking environmental-related false news stories—An experimental study. https://doi.org/10.31235/osf.io/zmx5p.
24. Lord CG, Ross L, Lepper MR (1979) Biased assimilation and attitude polarization: The effects of prior theories on subsequently considered evidence. Journal of Personality and Social Psychology 37: 2098–2109.
25. Frederick S (2005) Cognitive Reflection and Decision Making. Journal of Economic Perspectives 19(4): 25–42.
26. Haran U, Ritov I, Mellers BA (2013) The role of actively open-minded thinking in information acquisition, accuracy, and calibration. Judgment and Decision Making 8: 188–201.
27. Cameron AC, Trivedi PK (2010) Microeconometrics Using Stata. Rev. ed. College Station, TX: Stata Press.
28. Long JS, Freese J (2014) Regression Models for Categorical Dependent Variables Using Stata. College Station, TX: Stata Press.