
Affirmative citation bias in scientific myth debunking: A three-in-one case study

  • Kåre Letrud

    Roles Conceptualization, Investigation, Methodology, Visualization, Writing – original draft, Writing – review & editing

    kare.letrud@inn.no

    Affiliation Inland School of Business and Social Sciences, Inland Norway University of Applied Sciences, Lillehammer, Norway

  • Sigbjørn Hernes

    Roles Conceptualization, Investigation, Methodology, Writing – review & editing

    Affiliation Lillehammer Campus Library, Inland Norway University of Applied Sciences, Lillehammer, Norway

Abstract

Several uncorroborated, false, or misinterpreted conceptions have for years been widely distributed in academic publications, thus becoming scientific myths. How can such misconceptions persist and proliferate within the inimical environment of academic criticism? Examining 613 articles, we demonstrate that the reception of three myth-exposing publications is skewed by an ‘affirmative citation bias’: the vast majority of articles citing a critical article affirm the very idea it criticizes. 468 of the articles affirmed the myth, 105 were neutral, and 40 took a negative stance. Once misconceptions have proliferated widely and for long enough, criticizing them not only becomes increasingly difficult; such efforts may even contribute to the continued spread of the myths.

Introduction

Some misconceptions become engrained in academic publishing and debates. Examples include the low risk of addiction from opioids prescribed for chronic pain [1], the ‘Patient Zero’ supposedly responsible for the U.S. AIDS epidemic [2], the Yerkes-Dodson law [3, 4], the endless behavioral loops of the digger sphex [5], the Learning Styles [6, 7], the Learning Pyramid models [8, 9], and the Hawthorne Effect [10]. Despite fundamental flaws, these claims have proliferated in academic publications for decades, some of them for more than a century. Even though the number of positively identified myths appears to be limited, there is reason to suspect that scientific myths such as these are not a marginal phenomenon. Several unwarranted claims have become part of the scientific corpus [11, 12], and these claims could potentially become entrenched as common knowledge in the way exemplified by the above myths. Citation bias [13–15] contributes to the academic cementation of ideas by favoring studies with positive evidence over those with negative evidence.

Active efforts at countering scientific myth proliferation are required, lest they misinform descriptive and normative deliberations, and clutter encyclopedias, review articles, and topic searches in databases. However, once claims such as these become entrenched in academic discourses, efforts at criticizing them are counteracted by an affirmative citation bias:

We theorize that there are three main ways of citing a critical paper (apart from remaining neutral). Consider the following scenario: a paper sets out to challenge a flawed yet widely distributed theory, and makes a well-argued case against it. If, on the one hand, readers accept the critical arguments and reject the theory, their citation will presumably reiterate the critique. On the other hand, those who disagree with the paper will cite it and make their case for why they choose to sustain the theory, engaging it in a debate. A third group of readers will cite the critical paper as corroborating the theory, an instance of what Greenberg [15] terms ‘citation diversion’, presumably because they have not read it, or have failed to understand it [16–21]. If these three groups are of equal size, those who uphold the criticized theory will outnumber those who echo the critique.
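The arithmetic behind this expectation is straightforward; the following is a minimal, purely illustrative sketch (the total and the equal group sizes are hypothetical, assumed only as posited above):

```python
# Illustrative only: assume, as posited above, that readers citing a
# critical paper split into three equally sized groups.
total_citing_articles = 300  # hypothetical total

echo_critique = total_citing_articles // 3       # accept the critique, reject the theory
defend_theory = total_citing_articles // 3       # disagree and argue for keeping the theory
citation_diversion = total_citing_articles // 3  # cite the critique as if it supported the theory

# Both the defenders and the diverters end up affirming the theory in print.
affirmative = defend_theory + citation_diversion

print(f"affirmative: {affirmative}, echoing the critique: {echo_critique}")
# -> affirmative: 200, echoing the critique: 100, i.e. a 2:1 skew against the critique
```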

Searching for evidence of affirmative citation bias, we perform case studies of the academic reception of three articles critical of the widely cited yet contentious Hawthorne Effect. By consulting the papers citing these critical works, we seek to establish whether, and to what degree, the accumulated citations are indeed skewed in favor of the Hawthorne Effect, which would suggest the existence of an affirmative citation bias. We shall also search for evidence of citation diversion.

The Hawthorne effect

The idea of a Hawthorne Effect originated from studies on workplace behavior at the Western Electric Company’s Hawthorne Plant during the 1920s and 1930s, and was primarily based on studies of the relay assembly room, and on informal studies of worker response to changes in lighting [22, 23]. Surprisingly, both higher and lower lighting levels supposedly led to increased productivity. In 1941 Roethlisberger described the altered behavior of workers who know that they are being observed [24], and in 1953 John French introduced the term ‘Hawthorne Effect’, describing an increase in productivity due to social position and social treatment [22]. ‘The Hawthorne Effect’ is now an ambiguous and vague, yet widely used, term, primarily associated with an observer effect: subjects altering their behavior when aware of being observed [25, 26]. The Hawthorne studies have been on the receiving end of extensive, and sometimes harsh, criticism [24, 27–29], as has the Hawthorne Effect [10, 22, 30].

Method

We based the case studies on articles arguing against the Hawthorne Effect, interpreted as various observer effects. The selection criteria were that the articles be unequivocally critical of the effect, that their argumentation be substantial, and that they be extensively cited by peer-reviewed articles. We limited our study to reviews and articles indexed by Scopus and Web of Science to ensure that they had been peer reviewed, and to facilitate retrieval. Consulting the references in, and citations of, Jones’ 1992 seminal critique of the Hawthorne Effect, we sought additional critiques. Among those available to us, we found only two articles that met all the above criteria. We retrieved the available articles indexed in Scopus and Web of Science citing these three critiques. A handful were available as preprints.

Assessing these citations, we categorized the publications as affirming the effect, as neutral (a category that included those not taking a stance on the issue, those that were ambiguous, and those that did not address the effect), or as negative. We reviewed the articles separately before comparing notes. Where our assessments differed, we sought agreement, and if no agreement was reached, we classified the article as neutral.
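The reconciliation rule can be summarized schematically; the following minimal Python sketch is purely illustrative and does not reflect any actual tooling used in the assessment:

```python
# Purely illustrative sketch of the reconciliation rule described above;
# no such script was part of the actual assessment process.
from typing import Optional

CATEGORIES = {"affirmative", "neutral", "negative"}

def final_category(reviewer_a: str, reviewer_b: str,
                   agreed: Optional[str] = None) -> str:
    """Combine two independent assessments of one citing article."""
    assert reviewer_a in CATEGORIES and reviewer_b in CATEGORIES
    if reviewer_a == reviewer_b:
        return reviewer_a      # the assessments match
    if agreed in CATEGORIES:
        return agreed          # disagreement resolved through discussion
    return "neutral"           # no agreement reached: default to neutral

print(final_category("affirmative", "affirmative"))                     # affirmative
print(final_category("affirmative", "negative", agreed="affirmative"))  # affirmative
print(final_category("affirmative", "negative"))                        # neutral
```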

We found that a few authors cited the source correctly as being critical of the Hawthorne Effect, making the reader aware that it is a contentious theory. However, while taking a neutral, or perhaps even negative, stance on the issue, they still approached the Hawthorne Effect as if it were real when discussing their method or results. We categorized these as de facto affirming.

We also sought to discern whether the authors of the affirmative citations cited the critical articles as affirming the Hawthorne Effect. We found that the majority of affirmative citations simply referred to a critical article when discussing how to avoid the Hawthorne Effect, thus implicitly presenting it as affirmative. A handful of these affirmative citations, however, cited the article as a source for how they understood the Hawthorne Effect. Although not serving an argumentative role, the affirmative context nevertheless left the reader with the impression that the cited source affirmed the effect. We chose to classify these, too, as instances of citation diversion.

Findings

Case 1: Franke and Kaul 1978

Franke and Kaul perform the first statistical analysis of the Hawthorne Studies data, and draw conclusions ‘different from those heretofore drawn’ [24]. They do not analyze the data for a Hawthorne Effect, but when they address the effect, they concur with several earlier critics on the issue:

Other social scientists have been diverted by the Hawthorne effect, described by Roethlisberger (1941:14): "… If a human being is being experimented upon, he is likely to know it. Therefore, his attitudes toward the experiment and toward the experimenters become very important factors in determining his responses to the situation" (cf. also Dickson and Roethlisberger, 1966, and Bishop and Hall, 1971). This concept of influence upon an experiment through the experiment itself was found either erroneous or misleading by Cook and Campbell (1976), Katz and Kahn (1966), Parsons (1974), and Rubeck (1975). Sommer's (1968) conclusion, that the "errors" called placebo or Hawthorne effect need themselves to be evaluated and understood, is most pertinent. [24]

Web of Science and Scopus (search date 7 September 2018) indexed a total of 285 articles, all published between 1979 and 2018. We were able to retrieve the texts of 277 of these (Table 1; Fig 1).

In Figs 1–3, each colored line represents an article. Green lines represent articles affirming the validity of the Hawthorne Effect, while red lines represent those that reject it. Yellow lines are articles that are neutral, ambiguous, or do not address the effect. Black lines mark citation diversion, citing the critical articles as affirming the effect.

17 articles cited Franke and Kaul, while taking a negative stance towards the Hawthorne Effect, and 63 were neutral (of which the majority addressed the analysis of the Hawthorne studies, not the effect). 197 affirmed the Hawthorne Effect, and of these 189 cited Franke and Kaul as affirming the Hawthorne Effect.

Case 2: Jones 1992

Jones performs an analysis of the data from the relay studies, searching for evidence of the Hawthorne Effect (interpreted as the subjects being aware of changes in experimental conditions before or during the experimental period), and finds none. His conclusion:

In this context, I must conclude that there is slender or no evidence of a Hawthorne effect in the Hawthorne Relay Assembly Test Room. Finally, in light of these results, I must also conclude that the Hawthorne effect is largely a construction of subsequent interpreters of the Hawthorne experiments. [22]

Web of Science (search date 24 May 2018) and Scopus (search date 6 September 2018) indexed a total of 176 articles citing Jones. We managed to retrieve 141 of these. One contained no citation of Jones 1992 and was rejected, leaving a total of 140 peer-reviewed articles, all published between 1996 and 2018. Consulting these 140 articles, we found that 19 were neutral on the matter and 18 cited Jones while criticizing the Hawthorne Effect, whereas 103 affirmed its validity. Of the affirmative articles, 60 cited Jones as affirming the Hawthorne Effect (Table 2; Fig 2).

Case 3: Wickström and Bendix 2000

Citing earlier reanalyses, Wickström and Bendix [25] argue that the original Hawthorne Studies did not show adequate evidence of the effect. The term, they argue, has come to signify several non-specific outcomes of participating in a study; it is superfluous, truistic, and too vague to be useful:

Instead of referring to the ambiguous and disputable Hawthorne effect when evaluating intervention effectiveness, researchers should introduce specific psychological and social variables that may have affected the outcome under study but were not monitored during the project, along with the possible effect on the observed results. [25]

We were able to retrieve the texts of 196 of the 198 titles citing Wickström and Bendix, published between 2001 and 2018 (search date 9 September 2018). Merely five articles took a critical stance towards the Hawthorne Effect, 23 were neutral, while 168 affirmed the effect. 155 of these 168 articles cited Wickström and Bendix as affirming the Hawthorne Effect (Table 3; Fig 3).

Discussion and conclusion

The ratios between articles that took an affirmative stance towards the Hawthorne Effect and those that rejected it were roughly 6:1 for Jones and 11:1 for Franke and Kaul. The reception of Wickström and Bendix appears to be an outlier at 34:1. The citation diversion group was extensive: of the 197 affirmative citations of Franke and Kaul, 189 cited the critical article as affirming the Hawthorne Effect. For Jones, the number was 60 of 103; for Wickström and Bendix, 155 of 168.
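These ratios and diversion shares follow directly from the per-case counts reported in the Findings section; the short sketch below merely reproduces that arithmetic and involves no additional data:

```python
# Recomputing the approximate ratios from the counts reported in the Findings
# section (illustrative only).
cases = {
    "Franke and Kaul 1978":      {"affirmative": 197, "negative": 17, "diverted": 189},
    "Jones 1992":                {"affirmative": 103, "negative": 18, "diverted": 60},
    "Wickström and Bendix 2000": {"affirmative": 168, "negative": 5,  "diverted": 155},
}

for name, c in cases.items():
    ratio = c["affirmative"] / c["negative"]
    diversion_share = c["diverted"] / c["affirmative"]
    print(f"{name}: affirmative-to-negative ratio {ratio:.1f}:1, "
          f"citation diversion in {diversion_share:.0%} of affirmative citations")
# -> roughly 11.6:1, 5.7:1, and 33.6:1, i.e. the approximate 11:1, 6:1, and 34:1 ratios reported above
```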

Considering the numerous citation diversions, a major explanation for the asymmetry between the affirmative and the negative articles appears to be that citing authors have not read, or not understood, the cited paper. However, we suspect that additional factors may have contributed to the skewed reception:

First, there are incentives for not reiterating the critique even when one is convinced by it. For authors who accept the arguments, the Hawthorne Effect becomes irrelevant, unless, of course, they take issue with the effect specifically. Consequently, we expect them to leave out any mention of the effect, without accounting for their deliberations or citing the critical publication, owing to common standards of conciseness and textual continuity. This can explain the small number of authors reiterating a critical stance towards the theory. Second, the findings may also reflect that the majority of the citers were initially partial to the Hawthorne Effect: citing a critical article while applying the Hawthorne Effect in methodological discussions is presumably more frequent than citing it with the intent of criticizing the effect.

Of course, to assess whether the three articles were successful at communicating their critique of the Hawthorne Effect, we ought to consider the number of readers who have been dissuaded from believing in and using the Hawthorne Effect in their research. For all we know, this group is in the majority. It is, however, a silent one. When it comes to academic publishing, the affirming articles dominate the issue of the Hawthorne Effect and are likely the major contributors to the formation of the published consensus. These publications, we surmise, will efficiently recruit new believers in the effect, and in turn new affirmative citations in the literature. The findings not only demonstrate that the three efforts at criticizing the Hawthorne Effect were, to varying degrees, unsuccessful; they also suggest that if the intention behind the critiques was to reduce the frequency of affirmations of the claim in the scientific corpus, they may have achieved the very opposite.

Supporting information

S1 File. Classification of citing articles.

https://doi.org/10.1371/journal.pone.0222213.s001

(XLSX)

Acknowledgments

We owe thanks to Finnur Dellsen, Espen Dragstmo, Anstein Gregersen, Anders Nes, Knut Olav Skarsaune, and Terje Ødegaard for suggestions that have greatly improved the manuscript.

References

  1. Leung PTM, Macdonald EM, Stanbrook MB, Dhalla IA, Juurlink DN. A 1980 Letter on the Risk of Opioid Addiction. The New England Journal of Medicine. 2017;376:2194–5. pmid:28564561
  2. Worobey M, Watts TD, McKay RA, Suchard MA, Granade T, Teuwen DE, et al. 1970s and ‘Patient 0’ HIV-1 genomes illuminate early HIV/AIDS history in North America. Nature. 2016;539(7627):98–101. pmid:27783600
  3. Teigen KH. Yerkes-Dodson: A Law for all Seasons. Theory & Psychology. 1994;4(4):525–47.
  4. Corbett M. From law to folklore: work stress and the Yerkes-Dodson Law. Journal of Managerial Psychology. 2015;30(6):741–52.
  5. Keijzer F. The Sphex story: How the cognitive sciences kept repeating an old and questionable anecdote. Philosophical Psychology. 2013;26(4):502–19.
  6. Kirschner P. Stop propagating the learning styles myth. Computers & Education. 2017;106:166–71.
  7. Cedar R, Willingham DT. The Myth of Learning Styles. Change: The Magazine of Higher Learning. 2010:32–5.
  8. Letrud K, Hernes S. The diffusion of the learning pyramid myths in academia: an exploratory study. Journal of Curriculum Studies. 2016;48(3):291–302.
  9. Letrud K, Hernes S. Excavating the origin of the learning pyramid myth. Cogent Education. 2018;5(1).
  10. Kompier MA. The “Hawthorne effect” is a myth, but what keeps the story going? Scandinavian Journal of Work, Environment & Health. 2006;32(5):402–12.
  11. Ioannidis JPA. Why Most Published Research Findings Are False. PLoS Medicine. 2005;2:e124.
  12. Tatsioni A, Bonitsis NG, Ioannidis JPA. Persistence of Contradicted Claims in the Literature. JAMA. 2007;298(21). pmid:18056905
  13. Kivimäki M, Batty GD, Kawachi I, Virtanen M, Singh-Manoux A, Brunner EJ. Don't Let the Truth Get in the Way of a Good Story: An Illustration of Citation Bias in Epidemiologic Research. American Journal of Epidemiology. 2014;180(4):446–8. pmid:24989242
  14. Nieminen P, Rucker G, Miettunen J, Carpenter J, Schumacher M. Statistically significant papers in psychiatry were cited more often than others. Journal of Clinical Epidemiology. 2007;60(9):939–46. pmid:17689810
  15. Greenberg SA. How citation distortions create unfounded authority: analysis of a citation network. BMJ. 2009;339:b2680. Available from: https://www.bmj.com/content/bmj/339/bmj.b2680.full.pdf.
  16. van de Weert M, Stella L. The dangers of citing papers you did not read or understand. Journal of Molecular Structure. 2019;1186:102–3. https://doi.org/10.1016/j.molstruc.2019.03.024
  17. Wetterer JK. Quotation error, citation copying, and ant extinctions in Madeira. Scientometrics. 2006;67(3):351–72.
  18. Simkin M, Roychowdhury V. Do you sincerely want to be cited? Or: read before you cite. Significance. 2006;3(4):179–81.
  19. Stordal B. Citations, citations everywhere, but did anyone read the paper? Colloids and Surfaces B: Biointerfaces. 2009;72(2):312. pmid:19411168
  20. Jung D. “Assessing citizen adoption of e-government initiatives in Gambia: A validation of the technology acceptance model in information systems success”. A critical article review, with questions to its publishers. Government Information Quarterly. 2019;36(1):5–7.
  21. Teigen KH. En artikkel for alle årstider [An article for all seasons]. Psykologitidsskriftet. 2017;55(5):472–7.
  22. Jones SRG. Was There a Hawthorne Effect? American Journal of Sociology. 1992;98(3):451–68.
  23. Adair JG. The Hawthorne Effect: A Reconsideration of the Methodological Artifact. Journal of Applied Psychology. 1984;69(2):334–45.
  24. Franke RH, Kaul JD. The Hawthorne experiments: First statistical interpretation. American Sociological Review. 1978;43(5):623–43.
  25. Wickström G, Bendix T. The “Hawthorne effect”—what did the original Hawthorne studies actually show? Scandinavian Journal of Work, Environment & Health. 2000;26(4):363–7.
  26. Lück HE. Der Hawthorne-Effekt–ein Effekt für viele Gelegenheiten? [The Hawthorne effect–an effect for many occasions?]. Gruppendynamik und Organisationsberatung. 2009;40(1):102–14.
  27. Carey A. The Hawthorne Studies: A Radical Criticism. American Sociological Review. 1967;32(3):403–16.
  28. Muldoon J. The Hawthorne studies: an analysis of critical perspectives, 1936–1958. Journal of Management History. 2017;23(1).
  29. Busse R, Warner M. The legacy of the Hawthorne experiments: A critical analysis of the human relations school of thought. History of Economic Ideas. 2017;25(2):91–114.
  30. Paradis E, Sutkin G. Beyond a good story: from Hawthorne Effect to reactivity in health professions education research. Medical Education. 2017;51(1):31–9. pmid:27580703