Abstract
In scientific discourse, the prevalence of overwhelming consensus obscures the presence of dissenting views as well as their characteristics. This paper explores the potential to meaningfully measure dissent within such a consensus, using climate change research as a case study. Using citation analysis to explore the dynamics of scientific publications and the reception of dissenting opinions, this project asks whether there may be a methodological framework for quantifying dissent. The study employs analysis of citation networks to assess the visibility and impact of minority viewpoints, as well as the viability of such a study. The findings indicate that because dissent in climate change research is minuscule, such measurements are limited. Despite that finding, researchers on the fringe of scientific consensus have an outsized impact on social viewpoints. This project has the potential to disrupt the ways researchers critically consider the relevance of dissenting research in their own fields, and to suggest ways to embrace the impact of research that expands those fields.
Citation: Grunert J (2025) Can dissent be meaningfully measured in an overwhelming consensus? A citation network case study in climate change research. PLOS Clim 4(6): e0000666. https://doi.org/10.1371/journal.pclm.0000666
Editor: Diogo Guedes Vidal, University of Coimbra: Universidade de Coimbra, PORTUGAL
Received: September 13, 2024; Accepted: June 6, 2025; Published: June 25, 2025
Copyright: © 2025 Jonathan Grunert. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: Data for this project is supplementary to this article: https://doi.org/10.1088/1748-9326/8/2/024024
Funding: The author(s) received no specific funding for this work.
Competing interests: The authors have declared that no competing interests exist.
Introduction
Networks of research help to corroborate the validity of that research. Not only do these networks demonstrate the progression of scientific work through frequently cited articles, they also enhance the visibility of fact-making research within a specific field. Importantly, networks of citations can reveal influential disciplinary works, and they can also show the impact and constructed communities that develop around smaller works, not yet recognized as important research contributions.
Dissent is a valuable component of scientific discourse, though efforts to quantify dissenting research have been unsuccessful. This present study examines published articles that promulgate dissenting arguments about the nature of climate change. These articles, though, do more than dissent from the overwhelming consensus about climate change; they contradict generally accepted scientific models. Regardless, the social position of climate change dissent elevates dissenters’ arguments in public perception. A 2019 study analyzed this phenomenon, finding that climate consensus only slightly outpaces climate anti-consensus in media outlets, despite the significant difference in the rate of consensus among climate scientists [1]. These researchers utilize some citation analysis to evaluate the “flow of citations” among climate change researchers, finding that prominent climate dissenters cited consensus research in efforts to discredit it [1]. The vocality of public dissent suggests near-parity with the consensus view, which is, of course, not true among climate scientists.
Climate research, like any scientific endeavor, has detractors who challenge the core findings of that research. An important review of climate change denial research identifies several important questions in that area [2]: When and where was this research published? Who is the central figure in the denial? What are significant characteristics of denial research? And what, exactly, is being denied? This final question is most significant for the present project, in that denying climate data trends and trajectories, attributions of anthropogenic global warming, potential and realized climate impacts, and consensus among climate science researchers all fall under the broad category of climate denial. Rejecting trends, attributions, impacts, and consensus reports are all dominant strands of dissenting research, which aims to disrupt efforts toward policies that could mitigate the effects of climate change by scattering seeds of skepticism as widely as possible. Quantifying how skepticism appears in publications, though, is the aim of this present research.
The practice of mapping citations is not new. In 1979, the development of the Science Citation Index and publication of Citation Indexing helped researchers and librarians identify new areas of bibliometrics [3]. In recent decades, Egghe and Rousseau weighed in on the values of citation network analysis for evaluating similarities in scientific research, as well as applied realizations of scientific research [4,5]. Differentiating research nodes has also been an important part of bibliometrics and citation networks, especially in analyses of scientific research publications [6]. More specifically, citation network analysis has been utilized in efforts to identify prominent publications in a given field, and to identify fracturing within disciplinary research [7].
In climate change research, bibliometrics have clarified certain characteristics of researchers, publications, and disciplinary topics. Bibliometric methodologies have been used to measure shifts in climate change research subtopics and disciplinary terminology over time [8]. This kind of usage for bibliometrics revealed the potential for Garfield’s initial ideas, which have become realized through advancements in computing technologies. Climate change research also relies on differentiating consensus studies that endorse anthropogenic global warming (AGW) from publications that reject it. Separate from consensus studies, bibliometric researchers have engaged with climate skepticism to better understand the use of that research within the greater climate change research environment. Analysis of discourse among skeptics revealed some methods by which they spread their dissenting ideas [9]. Yet another publication developed a survey to measure skeptics’ distrust of science, especially their skepticism around credibility [10].
This paper argues that there has yet to emerge a measuring tool to meaningfully evaluate the social strength of research among climate researchers who dissent from the consensus view. Social strength, in this regard, identifies the supportive structures that exist among peers in research circles: more social connections within a network provide more strength to that network. By critically examining methods of researching dissent, this research suggests that any strength of social connectivity among papers that deny climate consensus is severely undermined by the minuscule number of papers that present such research.
In this regard, I identify the rejection of anthropogenic climate change as a fringe position. Measurements of fringe science are scarce; indeed, accurately evaluating the rejection of a widely held position is a difficult task. The interactions in AGW anti-consensus in print strongly suggest that the connections among those authors are tenuous. Thus, two questions guided this project: 1— How can bibliometric analysis be employed in such a way that it measures meaningful characteristics of publications that dissent from the consensus? 2— Can the strength of social connectivity among papers that deny climate consensus overcome the minuscule number of papers that present such research?
Context
Climate change consensus research
In 2004, Naomi Oreskes clarified the measure of consensus among researchers studying anthropogenic global warming. Her measurement divided the 928 abstracts indexed in ISI between 1993 and 2003 into six categories, ranging from explicit endorsement of the consensus view that human activity contributes to global warming, to no explicit position taken, to outright rejection of anthropogenic global warming. In her assessment, Oreskes found that only 25% of those abstracts took no position; all the others offered some form of endorsement of the consensus view. “Remarkably,” Oreskes writes, “none of the papers disagreed with the consensus position” [11].
Nearly a decade after Oreskes, John Cook et al. conducted similar research to evaluate changes in climate change consensus [12]. The twenty years of data, 1991–2011, encompasses Oreskes’ data, but their larger contribution is in differentiating levels of endorsement for anthropogenic global warming and in clarifying authors’ self-evaluation of their own positions. Cook’s dataset contains 11,944 articles from 1991 to 2011, written by 29,083 authors. Based on a reading of the abstracts, Cook’s team assigned each article a level of endorsement (levels 1–3), no position (level 4), or rejection (levels 5–7). Through subsequent communication with nearly 1,200 authors, they also determined the views of those authors regarding their endorsement or rejection of anthropogenic global warming. Among scientists who research climate change, Cook found that more than 97% endorsed anthropogenic global warming.
James Lawrence Powell, too, conducted a search that extended Oreskes’ initial argument. In 2015, he replicated Oreskes’ search in the Web of Science database and focused on a sample of articles published in two years (2013 and 2014), where he located 24,210 articles by 69,406 authors. Powell focused on the articles that explicitly rejected the consensus view and did not analyze abstracts that did not take a particular perspective on AGW. Powell’s analysis found explicit rejection of AGW in only five (5) articles, by four (4) authors.
In the following decade, yet another consensus study appeared, largely based on the methodology Cook had used in 2013 [13]. Lynas’ research team identified climate research articles from 2012 to 2020, and they found 88,125 relevant publications. Of these articles, they closely examined 3,000 abstracts and produced a list of words common among climate skepticism papers. The team developed an algorithm from this list, applied it to the full set of more than 88,000 papers, and concluded that the rate of consensus remained high in 2020: more than 99%.
Research conducted by Oreskes, Cook, Powell, and Lynas supports the position that researchers who reject anthropogenic global warming are in the minority of research scientists, and that the publication of such research is far smaller than the vocality of climate change dissent suggests. Despite this work that demonstrates a clear consensus position around global warming, the research stops at the identification of the consensus line. That is, existing research measures the rate of consensus without deeply examining characteristics of that consensus or dissent. Cook [12] does some of this work as he surveyed authors of “No Position” papers for a self-assessment of their research, and the data set that accompanies that 2013 paper shows the diversity of research categories among climate papers irrespective of their endorsement or rejection of AGW. This paper argues that examining the characteristics of dissent is important work, as is measuring the strength of citation networks.
Critiques of consensus research
Consensus studies, especially Oreskes’ and Cook’s, have undergone a fair share of criticism. There is a general “Why bother?” component to some discussion of ongoing consensus studies, especially as the rate of consensus is consistently above 90%. Quibbling over specific percentages is not unimportant, but it does distract from actions that could be taken around climate change, especially mitigation, policy implementation, public education, and actionable research [14]. In addition to this broad critique, there are some specific problems that researchers have identified in consensus research.
Bias is a significant challenge to the quality of Cook’s data. The personnel who evaluated abstracts for Cook’s research may have become fatigued or otherwise burdened as they conducted this research; this alone is a sign of bias in their work [15]. Additionally, their ratings contain “a hitherto overlooked flaw: confirmation bias in the consensus rate” [16]. These claims do take into consideration the amount of data gathered for Cook’s research, but they challenge its consistency and how it may contribute to valuations of consensus.
In addition to claims about bias, the process and clarity of abstract review are specific categories of critique of consensus study. As Cook’s evaluators examined abstracts, their methods of addressing discrepancies and preventing communication of their ratings are unclear [17]. That is, raters were from a similar group, and transparency about this process may increase confidence in this consensus study. Similarly, there is a noted lack of clarity as to how Cook’s researchers defined the various levels of endorsement [18].
Furthermore, the rhetoric around Cook’s claim, especially regarding the “No position” papers as tacit AGW endorsement, is unsupported [19]. In assuming this of all publications that do not offer a clear or implied endorsement or rejection of the consensus position, Cook offers an unearned conclusion. As Jankó writes, “We need ‘no position’ papers to grasp the whole picture regarding the climate skeptic repost” [19]. Without a position on these papers— and especially as they comprise nearly two-thirds of the whole dataset— confidence in the 97% consensus rate wanes.
It is clear that undertaking a project based on consensus studies— especially problematized ones— implies a general acceptance of those studies. This present project does not ignore the problems identified by Tol, Dean, and Jankó, nor does it seek to debunk their claims. Instead, this paper utilizes the data around papers found to be explicit dissent from the larger consensus, and it leaves further evaluation of the “no position” papers to others.
Methods of analysis
Research around rates of climate change consensus provides some useful methodological frames for analyzing climate change dissent. While it would be most useful for these frames to provide, or at least suggest, a rule for identifying publications that do not align with the consensus, none of them does so. Nevertheless, they are helpful in identifying certain characteristics of consensus and dissenting research.
Oreskes, Cook et al., and Lynas et al. conducted their research into identifying and measuring consensus through a manual textual analysis of abstracts. Such an analysis offered a detailed consideration of the meanings contained within those abstracts, however nuanced. The large set of abstracts allows for significant conclusions, as Web of Science (where these researchers retrieved their data) indexes a variety of climate papers from respected publications. Cook et al. extended this research to survey authors of climate research in order to measure their evaluation of their articles’ climate change stance against the stance presented in the abstract. However, this method has its flaws. Most notably, not all abstracts will explicate or imply a position on the larger state of the field, as is evident in the large number of “No position” papers present in Cook’s data. As Jankó clearly writes, “Abstract rating is a suboptimal way to measure consensus” [19].
In all these cases— Oreskes’ research and both parts of Cook’s work— the data clearly identifies the strong consensus among climate scientists that climate change is anthropogenic. As such, their scholarship identifies two communities among climate researchers: consensus and dissent (or anti-consensus).
Analysis of these communities is important to identifying their characteristics and how they engage with each other. The social networks of researchers can provide insight into key pieces of literature that establish certain perspectives in the field, or track the emergence of subfields of research. A 2023 study examined the social networks of climate researchers who participate in the Intergovernmental Panel on Climate Change, an organization that aligns with the strong consensus of climate research, and the Nongovernmental International Panel on Climate Change, an organization that, despite the similarity in name, dissents from that consensus. This research found that dissenting researchers are disconnected from the broader collective of researchers as well as from each other [20].
Examination of social networks demonstrates the connectivity that exists or is absent from a group of similarly situated researchers. In the case of Light et al., dissenting researchers do not collaborate with other researchers, irrespective of climate change position, at a rate anywhere near that of the consensus researchers. These research networks simply do not exist among dissenting scientists.
A third way may exist. Research into citation networks may identify key characteristics of dissenting papers: strength of their citation networks relative to consensus networks, prominent dissenting papers, or discrete networks within the larger field. Instead of examining researchers, it may be worthwhile to examine their contributions and what research they consider to be significant.
This comparison of citation networks may offer value to identify some salient, touchstone publications in the arena of research that dissents from the consensus view. And those significant papers may reveal distinctive characteristics of climate change denialist research, especially when it relies on data gathered by research conducted by consensus scientists. To examine networks in this different way, I focused on the publications other researchers have identified and annotated as rejecting AGW and the citation networks those publications created.
Analyzing Cook’s data
This present research into co-citations in climate publications focused on four datasets. Within Cook’s research are subsets that analyze the citation networks for all dissenters (n = 78), for a sample of consensus publications (n = 78), and for all work published in 2011 (n = 1,621). I included a specific one-year collection of publications to compensate for possible disadvantages in the pool of citable literature. Examining published work from 2011, selected both for its large number of items and for its relative recency within the larger dataset, I examined the citation networks apparent from the full set (n = 1,621), from all consensus levels, including tacit consent (Cook endorsement levels 1–4, n = 1,612), and from dissenters (Cook endorsement levels 5–7, n = 9).
Cook’s dataset allows for several different analyses. This project sought to analyze dissenting articles, but some results, outlined below, led to analyses of different subsets. The initial analysis looked at all dissenting articles in Cook’s dataset. That is, it examined citation networks that emerged out of the bibliographies of articles that implicitly or explicitly rejected anthropogenic climate change— Cook’s endorsement levels labeled 5, 6, or 7.
Second, this project sought to compare citation networks of dissenting articles with those that endorsed anthropogenic climate change (Cook endorsement levels 1, 2, or 3). The purpose of this comparison was to see whether the dissenting papers’ citation network was stronger or weaker than that of the consensus papers. Examining the full set was impractical, and the sheer number of consensus papers (n = 3,896) may have skewed the results through its massive number of articles and potential citation connectors, in comparison to the seventy-eight articles that were dissenters. To rectify both problems, I created a sample set of consensus articles. A sample, randomly selected from the 3,896 consensus articles, allowed for an estimate of the strength of a citation network among the consensus, especially in comparison with the dissenters. This sample was evaluated in terms of three characteristics, to ensure that the seventy-eight articles in the sample are proportionately similar to the larger set of consensus articles. Those characteristics— Category, Endorsement, and Year of publication— are balanced, so the sample set does not favor a selection of articles that cover the same topic within climate research, or that come from a subnetwork of ardent supporters of AGW, or from a narrow time period.
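The balancing described above amounts to a proportionate stratified sample. The following Python sketch is illustrative only: the records, field names, and stratum keys are hypothetical stand-ins for Cook's dataset, and the sampling routine simply preserves each stratum's share of the whole.

```python
import random
from collections import defaultdict

def stratified_sample(articles, keys, n, seed=0):
    """Draw a sample of roughly size n that preserves the proportion of
    each stratum, where a stratum is the combination of values for `keys`."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for article in articles:
        strata[tuple(article[k] for k in keys)].append(article)
    total = len(articles)
    sample = []
    for members in strata.values():
        take = round(n * len(members) / total)  # proportionate allocation
        sample.extend(rng.sample(members, min(take, len(members))))
    return sample

# Hypothetical records standing in for the consensus articles:
# two topic categories, three endorsement levels, years 1991-2011.
articles = [{"category": c, "endorsement": e, "year": y}
            for c in ("impacts", "mitigation")
            for e in (1, 2, 3)
            for y in range(1991, 2012)]

sample = stratified_sample(articles, ("category", "endorsement"), 78)
print(len(sample))  # 78: thirteen articles from each of the six strata
```

Balancing on Year as well would follow the same pattern with finer strata; with very small strata, per-stratum rounding can push the final sample size slightly above or below the target.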
The final analysis was of all the articles from one year: 2011. This set included climate research articles of all degrees of endorsement, from explicit endorsement of anthropogenic global warming, to articles without clear position, to those that explicitly rejected that position. The purpose of this third subset was to look at articles that had the same pool of literature to cite. One problem with examining citation networks from the entire dataset was the potential disparity of citations. That is, articles from 1991 could not cite the same articles as those articles from 2011. Networks that emerged would be incomparable, and it would be difficult to infer nodes of significance in climate research.
For all these sets I located each of the articles in these subsets in Web of Science, putting them in separate lists— one for each analysis. From that list, I created a text file that included article information, as well as a list of cited references. These files became the basis for the data analysis, and from these files, I also created spreadsheets to identify specific shared citations from articles in the dataset.
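That export-and-compare workflow can be approximated in code. The sketch below assumes a simplified version of the Web of Science plain-text export layout (two-letter field tags, indented continuation lines, records terminated by "ER"); the two sample records and their cited references are invented for illustration.

```python
def parse_wos(text):
    """Split a plain-text export into records of {field tag: [values]}."""
    records, current, field = [], {}, None
    for line in text.splitlines():
        tag, value = line[:2], line[3:]
        if tag == "ER":                # end of record
            records.append(current)
            current, field = {}, None
        elif tag == "  " and field:    # continuation of the previous field
            current.setdefault(field, []).append(value.strip())
        elif tag.strip():              # new field
            field = tag
            current.setdefault(field, []).append(value.strip())
    return records

export = """\
PT J
TI A dissenting article
CR Smith J, 2001, J CLIMATE
   Jones K, 2005, NATURE
ER

PT J
TI A consensus article
CR Smith J, 2001, J CLIMATE
ER
"""

records = parse_wos(export)
shared = set(records[0]["CR"]) & set(records[1]["CR"])
print(shared)  # the one reference the two articles cite in common
```

Intersecting the "CR" (cited references) lists in this way is the spreadsheet step described above: it surfaces the citations that two articles in the dataset share.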
This research utilized the program VOSviewer to visualize networks of citations in all these cases. VOSviewer uses text files from Web of Science (and other databases) to display several characteristics of the set of articles, such as funding source(s), author affiliations, and citation networking. It also identifies clusters of research: “non-overlapping” communities of research papers, which connect to each other through shared citations [21]. This research relied on VOSviewer’s co-citation analysis, as the important piece of network building was in the number of times references were cited by climate research articles, and the relative strength of those networks.
The results drawn from an analysis of Cook’s data are presented in Table 1. Each set contains the number of articles considered and the number of articles that contain at least one cited reference. Additionally, the analysis considered several data points gleaned from VOSviewer: the number of clusters that developed from citation networks, the number of articles with at least one connection to another bibliography, the number of links to other citations, and the total link strength. Total link strength indicates “the number of links of an item with other items and the total strength of the links of an item with other items,” or the number of linked citations and the number of common citations between two articles [21]. For example, two articles, A and B, that cite three articles in common have more link strength than three articles that all cite the same common article.
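VOSviewer computes these quantities internally, but the definitions are easy to illustrate. In the Python sketch below, the bibliographies are invented: articles A and B share three references, while C and D each share only one reference with the others, so A accumulates more total link strength.

```python
from itertools import combinations

# Hypothetical bibliographies: article -> set of cited references
bibs = {
    "A": {"r1", "r2", "r3", "r4"},
    "B": {"r1", "r2", "r3"},
    "C": {"r1"},
    "D": {"r1"},
}

# A link exists between two articles when they share at least one
# reference; the link's strength is the number of shared references.
links = {}
for a, b in combinations(bibs, 2):
    strength = len(bibs[a] & bibs[b])
    if strength:
        links[(a, b)] = strength

def total_link_strength(item):
    """Sum of link strengths over every link the item participates in."""
    return sum(s for pair, s in links.items() if item in pair)

print(total_link_strength("A"))  # 5: three shared with B, one each with C and D
print(total_link_strength("C"))  # 3: one shared with each of A, B, and D
```

The sketch shows why a pair sharing three references contributes more strength than several articles sharing a single reference: strength accumulates per shared citation, not per link.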
Link strength revealed a stronger citation network among consensus research than exists among dissenting research. The larger numbers of shared references, along with the much higher numbers of links and link strengths reveal an interconnectedness among most climate researchers that does not exist among climate denialists.
However, the numbers produced are incomparable: 82,485 is staggeringly different from 4, and there must be a way to offer a comparison that can produce some meaning. One way is to identify a relationship between the link strength and the number of links. A valuable comparison may be found in dividing the link strength as calculated in VOSviewer by the total number of links. This provided comparable numbers that were not reliant on the number of links, as raw link strength is; neither does it disadvantage small pool sizes. Instead, this method assesses the numerical relationship between the link strength and the number of links, which clarifies the strength of the citation network among a collection of publications. Using this calculation, dissenting research maintained a significantly stronger network than did the consensus, with this value measuring 1.85 to the consensus research sample’s 1.67.
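The proposed measure is simple arithmetic: total link strength divided by the number of links. A minimal sketch, with hypothetical counts chosen only so that the ratios reproduce the values reported above:

```python
def relative_link_strength(total_link_strength, n_links):
    """Ratio of total link strength to the number of links; unlike raw
    link strength, it does not grow with the size of the article pool."""
    return total_link_strength / n_links

# Hypothetical link counts; only the ratios match the reported results.
dissent = relative_link_strength(37, 20)
consensus = relative_link_strength(167, 100)
print(dissent, consensus)  # 1.85 1.67
```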
Discussion and conclusion
Research into the characteristics of climate change dissent publications is inherently challenging, primarily due to their minuscule number. The sets from Cook (2013) are insufficient for drawing conclusions. Bibliometric research varies widely among studies that analyze 200 or fewer pieces of data [22], and the sample sizes for dissenters in this study (n = 7; n = 73) fall far short of that number. Even adding the “Skeptical” papers from Lynas et al. [13] only yields an additional thirty-one (31) papers, keeping the total well below 200. Given the small number of dissenting climate change research papers published in scholarly journals, it may be a significant challenge to locate meaningful sets. Perhaps the scarcity of dissenting research, then, corroborates the fringe nature of anti-consensus claims. That same scarcity, though, further complicates the apparent strength of anti-consensus— or dissenting— research relative to the consensus.
Furthermore, the data sets are incomparable. Table 1 shows the results, but its utility in drawing conclusions is scant, largely due to the lack of comparable data between sets. Calculating the relationship between link strength and the number of links may identify a more consistent measurement that could be useful for comparing various samples. Such a “relative link strength” (my term) might provide comparable numbers that assess the numerical relationship between the link strength and the number of links. Even with such a tool, the small number of data points would not give significant meaning to the study.
Additional research into the next decade of climate publications— 2012–2021— might provide enough data to meet the threshold of 200 datapoints for meaningful bibliometric research. However, the number of datapoints may still fall short of 200: if the 51,052 articles from 2012–2021 (copying Cook’s search terms) follow the same 0.55% rate of dissent as articles from 2011, they may yield only 283 additional publications. And if they follow a rate similar to that which Powell found in 2015, the number of dissenting articles could be as low as 11. The work of identifying, evaluating, and annotating those abstracts would require a large, skilled team, and it may not be able to offer meaning to the bibliometrics of co-citation analysis.
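The projection follows directly from the observed rates. As a check on the arithmetic, using the figures given in the text:

```python
n_future = 51052            # articles from 2012-2021 using Cook's search terms
rate_2011 = 9 / 1621        # dissenters among all 2011 articles, about 0.55%
rate_powell = 5 / 24210     # Powell's 2015 rate of explicit rejection

print(round(n_future * rate_2011))    # 283 projected dissenting articles
print(round(n_future * rate_powell))  # 11 at Powell's much lower rate
```

Either way, the projected pool of dissenting publications stays above or near 200 only under the more generous 2011 rate, and far below it under Powell's.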
Repeated replications of and expansions on Naomi Oreskes’ initial study, as well as Cook’s and Powell’s follow-up studies, can demonstrate the stability of climate consensus— that the high level of consensus has not waned. Especially, valuable work could be conducted by scientists who personally know or work with fringe researchers, by social scientists who recognize larger social patterns around these ideas, and by librarians who study and teach information credibility, dissemination, and retrieval. Disciplinary affiliations of climate change dissenters, locations for publications and affiliations, and in-depth bibliographic analysis can all contribute to research around the fringe of this important topic.
Cook’s [12] data also presented different kinds of problems. First, data in Web of Science are inconsistently available. That is, inconsistent metadata for cited references may have prevented links from forming between two articles; similarly, though highly unlikely, links to a common cited reference may have been falsely created. Second, the age of the dataset may cause some to worry about its current validity. As this project examines characteristics of existing work and is not necessarily applicable to future work, the age of the dataset is of less importance than its relative completeness.
This research project does not offer any satisfactory conclusions, and it raises the question of whether a study of dissenting research in a field of overwhelming consensus can even reach a satisfactory conclusion. Textual analysis requires nuanced readings, and it may be costly in terms of personnel time. Details about authors require annotations or a single source that aligns with the dissenting view. Finally, citation network comparison contains too few data points to be meaningful. Each of these methods offers description of dissenters rather than prescription. That is, none of them creates a rule one can use to identify fringe research in this field.
Perhaps the only conclusion that this research offers is that the small number of articles dissenting from the consensus that rampant global warming is human in origin demonstrates that it is, indeed, a fringe view. However, other factors need to be considered, such as the researcher’s perspective and the connectivity of a single publication to other publications.
In this analysis of clusters of dissent in the midst of consensus, the lack of unique citation networks suggests a certain dependence on consensus research to perpetuate their arguments. Perhaps this conclusion is an obvious one; even so, it bears repeating. In a research environment dominated by one perspective, the minority researchers (in this case, those who deny human impact on global warming) rely on the work done by the consensus view, selecting pieces of data that help them further their own analyses. Indeed, descriptions of citation networks are valuable tools for identifying certain characteristics of fringe research groups, and they can provide some defensive arguments against the vocal minority who perpetuate those lines of research.
While the calculations of networks revealed a strength among dissenters’ networks, the network is too small to suggest any conclusion about bibliographic mining, the practice of gathering citations from another publication’s bibliography. Furthermore— and to be perfectly direct— this does not suggest that dissenting climate researchers communicated amongst themselves to pick key data points and research that could be used to support their own perspective. Such a conclusion would require additional research into epistemologies of citations, comparing how dissenting and consensus researchers utilize cited references.
Instead of seeing how climate change dissenters share a common set of research (whatever its quality might be), it may be helpful to reframe them as lonely researchers. They exist on the fringes of scientific research, not just as anti-consensus voices, but as researchers without a specific network. When consensus numbers exceed 95%, this confirms that scientists do agree about the climatological reality, and that the numbers on the fringe are indeed minuscule. However, for researchers who want to argue for this dissenting, fringe position, there must be some contextualization, whether that research be climate change denialism, vaccine skepticism, or Holocaust denialism. Presenting researchers with the citation networks and connectedness of research can demonstrate that such an important context exists for the consensus, but not for dissenters. And in that contextualization, the dissent remains separate, with few connections to the world outside.
References
- 1. Petersen AM, Vincent EM, Westerling AL. Discrepancy in scientific authority and media visibility of climate change scientists and contrarians. Nat Commun. 2019;10(1):3502. pmid:31409789
- 2. Björnberg KE, Karlsson M, Gilek M, Hansson SO. Climate and environmental science denial: a review of the scientific literature published in 1990–2015. J Clean Prod. 2017;167:229–41.
- 3. Garfield E. Citation indexing - its theory and application in science, technology, and humanities. New York: Wiley; 1979.
- 4. Egghe L, Rousseau R. Co-citation, bibliographic coupling and a characterization of lattice citation networks. Scientometrics. 2002;55(3):349–61.
- 5. Egghe L, Rousseau R. A measure for the cohesion of weighted networks. J Am Soc Inf Sci Technol. 2003;54(3):193–202.
- 6. Shibata N, Kajikawa Y, Takeda Y, Matsushima K. Comparative study on methods of detecting research fronts using different types of citation. J Am Soc Inf Sci Technol. 2009;60(3):571–80.
- 7. Vasudevan RK, Ziatdinov M, Chen C, Kalinin SV. Analysis of citation networks as a new tool for scientific research. MRS Bull. 2016;41(12).
- 8. Haunschild R, Bornmann L, Marx W. Climate change research in view of bibliometrics. PLOS ONE. 2016;11(7):e0160393.
- 9. Adam S, Reber U, Häussler T, Schmid-Petri H. How climate change skeptics (try to) spread their ideas: using computational methods to assess the resonance among skeptics’ and legacy media. PLoS One. 2020;15(10):e0240089. pmid:33017444
- 10. Sarathchandra D, Haltinner K. A survey instrument to measure skeptics’ (dis)trust in climate science. Climate. 2021;9(2):18.
- 11. Oreskes N. The scientific consensus on climate change. Science. 2004;306(5702):1686. pmid:15576594
- 12. Cook J, Nuccitelli D, Green SA, Richardson M, Winkler B, Painting R. Quantifying the consensus on anthropogenic global warming in the scientific literature. Environ Res Lett. 2013;8(2):024024.
- 13. Lynas M, Houlton BZ, Perry S. Greater than 99% consensus on human caused climate change in the peer-reviewed scientific literature. Environ Res Lett. 2021;16(11):114005.
- 14. Skuce AG, Cook J, Richardson M, Winkler B, Rice K, Green SA, et al. Does it matter if the consensus on anthropogenic global warming is 97% or 99.99%? Bull Sci Technol Soc. 2016;36(3):150–6.
- 15. Tol RSJ. Quantifying the consensus on anthropogenic global warming in the literature: a re-analysis. Energy Policy. 2014;73:701–5.
- 16. Tol RSJ. Quantifying the consensus on anthropogenic global warming in the literature: rejoinder. Energy Policy. 2014;73:709.
- 17. Tol RSJ. Comment on ‘Quantifying the consensus on anthropogenic global warming in the scientific literature’. Environ Res Lett. 2016;11(4):048001.
- 18. Dean BJF. Comment on ‘Quantifying the consensus on anthropogenic global warming in the scientific literature’. Environ Res Lett. 2015;10(3):039001.
- 19. Jankó F, Drüszler Á, Gálos B, Móricz N, Papp-Vancsó J, Pieczka I, et al. Recalculating climate change consensus: The question of position and rhetoric. J Clean Prod. 2020;254:120127.
- 20. Light R, Theis N, Edelmann A, Moody J, York R. Clouding climate science: A comparative network and text analysis of consensus and anti-consensus scientists. Soc Netw. 2023;75:148–58.
- 21. van Eck NJ, Waltman L. VOSviewer Manual. 2017. pp. 49.
- 22. Rogers G, Szomszor M, Adams J. Sample size in bibliometric analysis. Scientometrics. 2020;125(1):777–94.