
The geography of references in elite articles: Which countries contribute to the archives of knowledge?


  • Lutz Bornmann, 
  • Caroline Wagner, 
  • Loet Leydesdorff

Abstract

This study asks on which national “shoulders” the world’s top-level research stands. Traditionally, the number of citations to national papers has been the evaluative measure of national scientific standing. We raise a different question: instead of analyzing the citations to a country’s articles (the forward view), we examine references to prior publications from specific countries cited in the most elite publications (the backward, or citing, view). “Elite publications” are operationalized as the top-1% most-highly-cited articles. Using articles published from 2004 to 2013, we examine the research referenced in these works. Our results confirm the well-known fact that China has emerged to become a major player in science. However, China still ranks among the low contributors when countries are ranked by their contribution to the cited references in top-1% articles. From this perspective, the results do not support a decreasing trend for the USA; in fact, the USA exceeds expectations (relative to its publication share) in terms of references in the top-1% articles. Switzerland, Sweden, and the Netherlands also appear at the top of the list. The results for Germany, however, are lower than statistically expected.

1 Introduction

The invention of the Science Citation Index in the 1960s was welcomed by citation analysts as well as historians and philosophers of science as offering an opportunity to study science empirically [1, 2, 3, 4, 5]. As it developed, citation analysis split into two research traditions with different perspectives on the reference: citation analysts are interested in counting “times cited” as a measure of quality or impact to aid evaluation; references can be used in the history and philosophy of science as a tool to retrieve “revolutions and reconstructions” in science [6]. Wouters [7] noted that transposing the cited/citing matrix by the database owner adds value: the references become citations. The Science Citation Index thus shaped the field of citation analysis as a domain analytically different from historical reconstructions.

The two perspectives are reflected in citation analysis as the difference between co-citation [8, 9] and bibliographic coupling [10]. By citing, a scholar reconstructs the intellectual context of one’s knowledge claim [11]. Being cited, however, is rewarding in terms of providing credit and reputation [12]. The intellectual organization of the sciences, however, evolves in—often anonymized—texts that are cleaned in a process of validation from the contingencies of the context of discovery [13]. In this study, we focus on the relative positions of countries: what is referenced in the worldwide top-level research in terms of national contributions?

Which countries provide the longer-term intellectual context of top-level research? We are inspired to assume this approach by bibliometric analyses for the Science and Engineering Indicators report of the US National Science Foundation [14]. In this context, Mervis [15] covered the shares gained by specific countries (and transnational units such as the European Union) in the 1% most-highly-cited papers. These shares were size-normalized by using the countries’ numbers of published papers as a baseline. In order to reveal on which “shoulders” the worldwide top-level research stands, we decided upon a similar approach as Mervis [15], but used the cited references instead of the citations perspective [16, 17]: In other words, we investigated which country’s literature is incorporated in the archive of work cited at the top of the pyramid.

Operationally, we include the 21 countries in this study that published more than 1% of the articles worldwide during the period under study. Using the years 2004 to 2013, we investigate a substantially longer and more recent time period than Mervis [15]. We focus on countries as units of analysis for three reasons: 1) national systems represent underlying cultural, social, economic, and political models; 2) national governments seek to encourage knowledge creation, diffusion, and exploitation; and 3) most basic research is paid for by public funds. In addition to activities at the research front, participation, prestige, and the build-up of intellectual capital are longer-term objectives of national policies. Prestige can be considered as generalized from performance [18].

Reputation and prestige have real consequences for attracting resources and market shares. For example, the label “Made in Italy” has a value that can be compared with “Made in China” in terms of assumptions about quality; the ‘capital’ or reputation attached to “Made in Italy” has been built up over time and with attention to maintaining quality. Prestige in science attracts foreign students and collaborators who can contribute to the vitality of a system [19, 20, 21, 22, 14]. For this study, we consider publications as investments in intellectual capital, reflecting the investments made in maintaining quality. Citations of publications can be a way of measuring a return on investment [23]. To what extent do authors of top-1% publications make references to contributions from the same country or internationally?

In other work, we showed that, in terms of field-normalized performance for the top-1% and top-10% most-frequently cited publications, the USA held the lead in the 2000s, but increasing numbers of citations were going to EU28 nations, which had also increased their share of articles in the top 10% highly cited articles [24]. Several smaller European nations—Switzerland, Denmark, Sweden, and the Netherlands—had surpassed the USA in percentage share of highly cited articles. As noted, Mervis [15] showed that Asian scientists increasingly cite other Asian articles. Mervis’ [15] report was limited to only five countries or aggregates of countries (e.g., the Asia-8 and the EU). In this study, we expand upon Mervis’ [15] report and use the cited references to view national contributions to the archives of knowledge for the 21 countries that contributed 1% or more of all published material with the document type “article” in Web of Science (WoS) between 2004 and 2013.

2 Data and methods

The 21 countries that contributed 1% or more of all published material with the document type “article” in Web of Science (WoS) between 2004 and 2013 published 86% of all articles indexed in these years (across all subject categories). Using this 1% threshold, most countries worldwide with a substantial contribution to the archive are included; however, small-sized but potentially top-performing countries in terms of relative citation impact such as Denmark are unfortunately excluded because of the threshold [24].

From the set of all articles published between 2004 and 2013, we select the top-1% most highly cited research worldwide. We call these papers “elite” articles. “Elite” articles are those articles which belong to the 1% most-frequently cited papers in the corresponding WoS subject categories and publication years. From this set of articles, we use all the cited references. This results in a subset of the data that we further cleaned in three ways, removing about 40% of the material. 1) We include only references to papers with the document type “article”. 2) We remove articles lacking country information in the address lines—otherwise we could not link knowledge contributions to countries, which is the point of the analysis. 3) We eliminate referenced articles dated prior to 1980 because the database does not contain reliable address information prior to this date.

The final data contains articles listing one or more countries in the address lines. If an article has multiple country names in the address lines, we count contributions to these articles fractionally based upon the number of countries listed. The count goes to countries, not to authors: if multiple authors are listed from the same country, the count is still “one” for this country. If two authors from two different countries are listed on the article, the article is assigned to each country with a value of 0.5; for three countries, the value is 0.33, and so on.
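
This fractional counting scheme can be sketched in a few lines of Python (an illustrative sketch of the rule described above, not the authors’ actual processing code):

```python
from collections import defaultdict

def fractional_counts(articles):
    """Fractionally count articles per country.

    `articles` is a list of country-name lists taken from the address
    lines; each article contributes a total weight of 1.0, split evenly
    across its distinct countries (multiple authors from the same
    country still count only once).
    """
    counts = defaultdict(float)
    for countries in articles:
        unique = set(countries)
        for country in unique:
            counts[country] += 1.0 / len(unique)
    return dict(counts)

# Example: a US-only paper, a US-China collaboration, and a paper with
# two UK authors and one French author.
counts = fractional_counts([["USA"], ["USA", "China"], ["UK", "UK", "France"]])
print(sorted(counts.items()))
# [('China', 0.5), ('France', 0.5), ('UK', 0.5), ('USA', 1.5)]
```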

Table 1 shows the countries with the largest shares of all articles between 2004 and 2013. As expected, the United States (USA) is at the top of the list of countries, followed by China, Japan, the UK, and Germany as the countries contributing the largest numbers of articles. (The European Union is not considered as a single unit in this analysis.) China appears second in the total numbers of articles. However, it did not begin the decade in this second position, but grew much more rapidly than other countries to finally claim the second spot [25].

Table 1. Twenty-one countries with the largest shares of all articles indexed in WoS between 2004 and 2013.

Only countries with more than 1% of the fractionally-counted articles are listed, in decreasing order of the percentage share of articles.

https://doi.org/10.1371/journal.pone.0194805.t001

3 Exploring international contributions

Table 2 lists countries in the same order as Table 1, but it shows the number of references per country and their respective shares of contributions to elite publications. For example, China published 9.85% of the worldwide articles (Table 1) but contributed only 4.24% of the cited references in the top-level research papers. The opposite is seen for the USA: it contributes 24.02% of worldwide articles (Table 1) and 44.1% of the references in the top-1% articles (Table 2). This large share for the USA led us to consider weighting the data, since a factor accounting for the differences may be the publication volume of the various countries [26]. The more articles a country has published, the more citations can, ceteris paribus, be expected.

Table 2. Country counts from cited references in elite publications.

Sorting order matches Table 1.

https://doi.org/10.1371/journal.pone.0194805.t002

Fig 1 shows the same data as Table 2, but broken down by year of publication. As expected, the USA’s contribution is decreasing [27], whereas China’s contribution is increasing over the years. The USA’s share of cited references in top-1% articles dropped by approximately nine percentage points—more than the drop in publication volume [24]. However, China’s share increased by 5.7 percentage points, which is proportional to its gain in shares of publications. We included 95% confidence intervals in Fig 1 for China and the UK as interval estimates indicating the accuracy of our point estimates (the percentages) [28]. The absence of overlap of the 95% confidence intervals for the UK and China shows that the lead of the UK versus China is (still) statistically significant at the end of the period [29, 30]. The shares of the other countries are more or less constant over the years.
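
The interval estimates can be computed with a standard normal-approximation confidence interval for a proportion. This is one common choice; the text cites Williams and Bornmann [28] but does not spell out the exact formula used, so the sketch below is an assumption:

```python
import math

def share_ci(k, n, z=1.96):
    """Normal-approximation 95% confidence interval for a share k/n,
    e.g. a country's cited references among all cited references in a year."""
    p = k / n
    se = math.sqrt(p * (1 - p) / n)
    return p - z * se, p + z * se

# Hypothetical numbers: 4,240 of 100,000 references attributed to a country.
low, high = share_ci(4240, 100_000)
print(f"{low:.4f} .. {high:.4f}")
```

Non-overlapping intervals for two countries, as for the UK and China in Fig 1, indicate a statistically significant difference between their shares.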

Fig 1. Countries’ shares of references cited in the elite articles between 2004 and 2013 (articles belonging to the 1% most frequently cited articles, fractionally counted).

95% confidence intervals are added to the UK’s and China’s shares.

https://doi.org/10.1371/journal.pone.0194805.g001

In Table 3, we report the ratio of cited references in the elite articles to citing articles for each country. Assuming that many papers reach their citation peak in the third year after publication, we use the ratio of cited references in year t to published articles from year t-3. This ratio reveals whether a country received more citations than expected on the basis of the number of published articles. The findings show that the USA has an average ratio of 1.7 between cited references and citing articles during this period. Thus, the USA contributed much more to the archive of knowledge than can be expected on the basis of its publication volume. Maisonobe, Milard, Jégou, Eckert, and Grossetti [31] report similar results for the US. Besides the US, Switzerland, the Netherlands, the UK, and Sweden had higher-than-expected citedness compared to publication volume in this study.
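
Under this three-year-peak assumption, the ratio pairs each reference-share year t with the publication-share year t-3. A minimal sketch (variable names and example shares are ours, not from the study):

```python
def cited_to_citing_ratios(ref_share, pub_share, lag=3):
    """Ratio of a country's share of cited references in year t to its
    share of published articles in year t - lag (lag=3 reflects the
    assumed citation peak three years after publication)."""
    return {t: ref_share[t] / pub_share[t - lag]
            for t in ref_share if t - lag in pub_share}

# Hypothetical shares (in percent) for one country.
ref_share = {2012: 5.0, 2013: 5.4}
pub_share = {2009: 4.0, 2010: 4.5}
ratios = cited_to_citing_ratios(ref_share, pub_share)
print(ratios)  # ratios above 1 mean more references than publication share predicts
```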

Table 3. Mean ratios and standard deviations of shares (cited references versus citing articles) across 10 years as well as the difference between the ratios in the last (2013/2010) and first years (2004/2001) (sorted by the means).

https://doi.org/10.1371/journal.pone.0194805.t003

Table 3 shows the mean ratio and standard deviation of shares (cited references versus published articles) calculated across 10 years, as well as the difference between the ratios in the last (2013/2010) and first years (2004/2001). Using the mean values, the countries can be categorized into three groups, indicated as grey-shaded areas in the table: high, average, and low performers. The high performers have a ratio of at least 1.2, which means that they received substantially more citations in the elite publications than would be expected from publication volume (on average across the years). The average performers approximately meet the expectations (values between 0.8 and 1.19). The low performers fall substantially below the expectations based upon volume (values below 0.8). Table 3 shows that China is still at a low performance level (rank position 13 for the mean) but grows more quickly than the other countries across the years (rank position 1 for the standard deviation). The differences between the last (2013/2010) and first years (2004/2001) in the table show that China’s ratio increased strongly (by 0.34), but other countries made gains as well (e.g., the Netherlands: 0.28; Germany: 0.27).
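
The three-group classification follows directly from the stated thresholds; a sketch of the rule (not the authors’ code):

```python
def performance_group(mean_ratio):
    """Classify a country by its mean cited/citing ratio, using the
    thresholds given in the text: >= 1.2 high, 0.8-1.19 average, < 0.8 low."""
    if mean_ratio >= 1.2:
        return "high"
    if mean_ratio >= 0.8:
        return "average"
    return "low"

# The USA's mean ratio of 1.7 puts it in the high group; a hypothetical
# country at 0.95 is an average performer; one at 0.5 is a low performer.
print(performance_group(1.7), performance_group(0.95), performance_group(0.5))
# high average low
```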

Fig 2 shows the developments of the countries’ ratios of cited references versus citing articles over time. The countries are categorized into three groups of performers as per Table 3: the top box includes USA, UK, the Netherlands, Sweden, and Switzerland—the high performers. Since the group consists of both large and small countries (in terms of published articles), the size-normalization used above seems to function properly. As Fig 2a reveals, the USA performs at the highest level across all years. In recent years, Switzerland has reached a very high level, too. The UK and the Netherlands show an increasing trend.

Fig 2.

Ratios of shares based on (1) cited articles: countries’ shares of references cited in the top-level research (fractionally counted). (2) Published articles: countries’ shares of articles published between 2001 and 2010 (fractionally counted). The countries are categorized as high (a), average (b), and low (c) performers (see Table 3).

https://doi.org/10.1371/journal.pone.0194805.g002

The average performing group in Fig 2b consists of four countries with ratios around 1 across the years (Germany, France, Canada, and Australia). With the exception of Canada, these countries show an increasing trend. The largest number of countries (n = 12), however, are categorized as belonging to the low-performing group, shown in Fig 2c. Several countries show an upward trend, notably China, which shows an upward turn beginning in 2005. Other countries also show upwards trends, including Italy (since 2009/2006), which performs in the average range by the end of the decade.

4 Exploring domestic contributions

As a second research question, we distinguish between each country’s contribution to the archive of the international literature and the domestic return on investment: how much does a country itself profit from this longer-term incorporation of its contribution to the elite literature? We operationalize this domestic effect by normalizing the country-level contributions to the cited references against the set of elite articles published by the country, without considering internationally (co)authored articles. Thus, we focus on each country’s stand-alone strength by including only domestic (cited) references and (citing) top-1% articles.

The result is shown in Fig 3, which is rather similar to Fig 2. However, it is the similarity that is telling. The focus on domestic articles in Fig 3 supports the previous results and suggests that the global-level contributions reflect the national efforts and strengths. However, two interesting differences are visible for countries in the top group: 1) When the analysis is limited to domestic articles, the excellent performance of the USA becomes more pronounced, suggesting that US authors of papers in the top-1% articles are more likely to cite other US work than work from abroad. 2) Since there is a larger gap between the USA and the other countries, the contribution of the USA seems to reflect its domestic strength.

Fig 3. Moving averages of ratios of shares between (1) countries’ shares of references cited in elite articles (fractionally counted) and (2) countries’ shares of articles published between 2001 and 2010 (fractionally counted).

The countries are categorized as high (a), average (b), and low (c) performers (as listed in Table 3).

https://doi.org/10.1371/journal.pone.0194805.g003

We suggest that the more articles from a country receive citations in its own top-level research papers, the more it can be considered to have contributed to its own knowledge base. Researchers in a country may cite international literature more than domestic papers; alternatively, one may find an overrepresentation of domestic literature, reflecting self-reliance at the national level [32]. With a high share of cited references in the national top-level research (as shown in Fig 3), the investments in science seem to be used efficiently, at least from the perspective of a national government.

Since a country’s share of cited references in the national top-level research papers depends not only on the quality of research but also on publication volume, the shares of cited references have to be size-normalized. After normalization, the comparison of country shares in cited references and citing articles can reveal how a country’s publication system uses the nation’s investment in research. It can further reveal whether these investments spill over national borders. In order to make this assessment, the domestic shares of cited articles from each country are contrasted with the shares of published articles. If the ratio is larger than 1, a benefit to the country can be inferred, or a “gain” on investment from a national perspective.

Table 4 shows the result of this domestic analysis for the five countries identified as top performers. The ratio relates 1) the share of cited references in a country’s top-level research papers that were published by the country itself among all cited references from that country to 2) the share of articles published by the country among all published articles. For example, the share of domestically cited references among all cited references in the top-level research articles from 2005 is 66.71% for the USA. The share of US articles among all articles worldwide in 2002 is 28.47%. The resulting ratio is 2.34.
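
The worked example can be reproduced directly from the percentages quoted above:

```python
def domestic_ratio(domestic_ref_share, publication_share):
    """Ratio of (1) the share of domestically published cited references
    among all cited references in a country's top-1% articles (year t) to
    (2) the country's share of worldwide articles (year t - 3)."""
    return domestic_ref_share / publication_share

# USA example from the text: 66.71% domestic references in 2005 against
# a 28.47% worldwide publication share in 2002.
ratio = round(domestic_ratio(66.71, 28.47), 2)
print(ratio)  # 2.34
```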

Table 4.

Ratios of two shares: (1) the share of cited references in the country’s top-1% articles that were published by the country itself among all cited references in those articles; (2) the country’s share of articles published between 2001 and 2010 (fractionally counted).

https://doi.org/10.1371/journal.pone.0194805.t004

Return on investment at the national level far outperforms expectation for the Netherlands, Sweden, and Switzerland: domestic articles by authors in these countries are significantly more frequently cited in the top-level research than one would expect from their respective shares of published papers. However, the trend for Switzerland is decreasing. Thus, Switzerland’s research becomes less important for its own top-level research papers across the years.

5 Discussion

When Garfield [33, 34] introduced the Journal Impact Factor (JIF) as a two-year moving average of journal citations, he based this decision on Martyn and Gilchrist’s [35] evaluation of British scientific journals. However, these authors had focused mainly on journals in molecular biology and biochemistry. In these fields, there is a rapidly moving research front with more than 25% of the citations provided in the first two years after publication [36]. The JIF thus discounts the effects of “citation classics” [37] since it privileges the rapidly moving research fronts.

The sciences differ in terms of how relevant this “research front” is for the development of a field [38]. Short-term citation at a research front can be distinguished from longer term processes of incorporation and codification of knowledge claims into bodies of knowledge. Citation classics may not be highly cited in the first few years, but peak later [39].

For example, the American Journal of Sociology (AJS) and American Sociological Review (ASR), two leading sociology journals, have cited and citing half-life times of more than ten years. In other words, more than half of the citations of these journals are from issues published more than ten years ago, and more than half of the references are to publications older than ten years. Coleman’s [40] study entitled “Social Capital in the Creation of Human Capital,” for example, became a most highly cited paper almost two decades after its publication [41]. Citation classics do not decay like the citation curves of “normal science.”

This raises a caution about focusing on short-term impact, because from this perspective one measures not quality but variation in the positioning of the contribution at the research front. The selection mechanisms of high quality can be expected to develop much more slowly [6]. By choosing long citation windows and only top-1% citing papers, we focus on the most notable scientific papers that are referenced by the top papers. Our approach is “citing” as distinct from “cited”: using the top-1% elite papers, normalized for fields of science, we can retrieve co-reference patterns (bibliographic coupling [10]) among the previous literature.

In a so-called “linked” citation database, one can retrieve bibliometric characteristics of the referenced literature as backward and/or forward citation rates. In this study, we focused on the geographical origins of the knowledge contributions, elaborating on a design similar to that reported by Mervis [15]. We used the addresses in the bylines of the “citation classics” to attribute them to countries. In order to avoid noise from imprecise referencing in the margins of scientific developments, we focused on the top-1% elite of scientific papers (normalized for fields). One can assume that the authors of top-level articles have worked carefully on their papers, including precise and selective referencing.

Based on previous studies, we expected to see a wider field of countries contributing to these top-level papers, but our results show that the US science system is very strong in contributions to the global knowledge pool, has a persistent reputation, and is heavily relied upon as a source of knowledge by both US and non-US authors. The USA remains the center of science in terms of the citation practices of other scientists who are seeking to advance research. Although the USA may be losing ground in science in other respects [21, 42, 43], this analysis suggests that American authors contribute more to the archives of elite global science than would be indicated by the USA’s number of published articles. The size-normalized contribution reveals that the USA has gained ground over other countries instead of losing it. This becomes especially visible if the analysis focuses on domestic publications.

Rodríguez-Navarro and Narin [44], who compared the European Union with the USA, have published similar results. Their analyses of publications belonging to the 1% most frequently cited demonstrate the ongoing dominance of the US in science. On the other hand, the analyses of Rodríguez-Navarro and Narin [44] also reveal the deficiencies of the European Union in the top-level segment. The European Union is, however, a very heterogeneous set of countries in terms of scientific performance. The results of our analyses show that Switzerland, Sweden, and the Netherlands contribute strongly to the cited references used in the top-1% elite articles, as does the USA, a finding similar to the results of studies showing these nations as garnering more citations to their work. The surplus capacity of these countries, measured by the shares of cited references and citing articles, is very high, placing them among the most productive (and probably most effective) contributors to elite science in the world. These high-performing countries exist alongside many other European countries with medium or low performance.

Somewhat to our surprise, our results show that Germany does not belong to the top-performing group of countries. This result differs from the results of impact-oriented studies, which have demonstrated high performance for Germany in recent years [45] and historically [23]. The results also differ from those recently presented by Abbott [46] in a Nature comment. The comment is entitled “The secret to Germany’s scientific excellence,” and the numbers presented (e.g., the field-weighted citation impact) “tell a positive story for science” (p. 22). While German articles generally show strong short-term citation impact, Germany’s long-term contributions to the elite literature are not as strong.

It has been argued that German scientists are not as likely to publish in high impact journals as others [47], which may reduce visibility of German research. It appears that German scientists spend less time on collaboration than peers in other top nations [48] and this may reduce the opportunity to contribute to cutting-edge problems. However, one can also argue that Germany is successful in optimizing profit from its investments by maintaining a national publication system. This question of Germany’s efficiency requires further study.

The analysis further confirms that China has emerged as a major player in science, at least in terms of numbers of articles [49, 50, 25]. We expected to find that others increasingly draw upon China’s science; however, our analysis suggests that China’s contributions to the literature are still less relevant for elite publications. There may be many reasons for this beyond quality of research—social networks and language capabilities play roles in the dissemination of scientific knowledge, and these may remain obstacles for many Chinese scientists [51, 52].

Two other countries deserve mention for their poor showing: the Russian Federation and Japan. These two nations have declined in science from former leading positions. King [23] showed Japan as the fourth strongest country in the world using data from 1997 to 2001 based upon the top 1% of highly cited publications. In this study, we show that Japan contributes to the references in elite research articles below expectation. This may be due, in part, to the fact that Japan is among the least internationalized nations in percentage terms [53]. Within Japanese culture, it is important to publish findings in Japanese-language journals, which may reduce the dissemination of knowledge.

The position of the Russian Federation is more difficult to interpret because historical continuities have been disrupted by the break-up of the Soviet Union. Even so, in the 1990s, Russia was an average performing country, counted by King [23] as close to Finland and Denmark in producing science and claiming citations. Unlike Japan—which has continued to fund R&D at a high rate—Russian investment in R&D has declined. As in Japan, publishing in the national language is important in Russia for one’s reputation; but this orientation may hinder international visibility.

6 Conclusions

The measure presented in this paper is a way to reveal the “shoulders” on which worldwide research stands. Rather than a measure of impact (that is, citedness), this measure examines references in top papers to see from where top researchers draw their knowledge base. Thus, it is more a measure of prestige and reputation than of research impact. We expect the ensuing credit of top-cited papers to be persistent, since it reflects not the frontier but the archive. In this article, we show the persistent position held in the archive by the United States, the rising recognition of Chinese papers, and a stable stronghold for Switzerland, the Netherlands, and the UK; Russia and Japan have a poor showing.

The available measures allow us to examine the extent to which a country’s authors rely upon national work as opposed to foreign research. This can be considered a measure of return on national investment in the research base. The Netherlands, Sweden, and Switzerland far exceeded expectations in terms of using national research in their work, perhaps indicating a strong return on investment. However, we know from other research that these rather small nations are also strong in collaborating with other nations. Thus, it seems that smaller nations’ investments in research are especially effective in fostering their own top-level research if they are open to foreign research [53].

Acknowledgments

The bibliometric data used in this paper are from an in-house database developed and maintained by the Max Planck Digital Library (MPDL, Munich) and derived from the Science Citation Index Expanded (SCI-E), Social Sciences Citation Index (SSCI), and Arts and Humanities Citation Index (AHCI) prepared by Clarivate Analytics, formerly the IP & Science business of Thomson Reuters. The underlying data for the paper are available at https://doi.org/10.6084/m9.figshare.5853483.v2

References

  1. Cole JR, Cole S. Social Stratification in Science. Chicago/London: University of Chicago Press; 1973.
  2. Elkana Y, Lederberg J, Merton RK, Thackray A, Zuckerman H. Toward a Metric of Science: The Advent of Science Indicators. New York: Wiley; 1978.
  3. Gingras Y. Bibliometrics and Research Evaluation: An Overview. Cambridge, MA: MIT Press; 2016.
  4. de Solla Price DJ. Quantitative Measures of the Development of Science. Arch Int Hist Sci (Paris). 1951;14: 85–93.
  5. de Solla Price DJ. Networks of scientific papers. Science. 1965;149: 510–515. pmid:14325149
  6. Hesse M. Revolutions and Reconstructions in the Philosophy of Science. London: Harvester Press; 1980.
  7. Wouters P. The signs of science. Scientometrics. 1998;41: 225–241.
  8. Small H. Co-citation in the scientific literature: A new measure of the relationship between two documents. Journal of the American Society for Information Science. 1973;24(4): 265–269.
  9. Marshakova IV. Bibliographic coupling system based on references. Nauchno-Tekhnicheskaya Informatsiya Seriya, Ser. 1973;2: 3–8.
  10. Kessler MM. Bibliographic coupling between scientific papers. American Documentation. 1963;14: 10–25.
  11. Fujigaki Y. Filling the Gap Between Discussions on Science and Scientists’ Everyday Activities: Applying the Autopoiesis System Theory to Scientific Knowledge. Soc Sci Inf. 1998;37: 5–22.
  12. Whitley RD. The Intellectual and Social Organization of the Sciences. Oxford: Oxford University Press; 1984.
  13. Popper KR. The Logic of Scientific Discovery. London: Hutchinson; [1935] 1959.
  14. National Science Board. Science and Engineering Indicators 2012. Arlington, VA: National Science Foundation; 2012.
  15. Mervis J. Report Notes China’s Influence in Emerging Asian Science Zone. Science. 2012;335: 274–275. pmid:22267783
  16. Bornmann L, de Moya-Anegón F, Leydesdorff L. Do scientific advancements lean on the shoulders of giants? A bibliometric investigation of the Ortega hypothesis. PLoS ONE. 2010;5: e11344.
  17. Merton RK. On the Shoulders of Giants. New York: Free Press; 1965.
  18. Brewer DJ, Gates SM, Goldman CA. In Pursuit of Prestige: Strategy and Competition in U.S. Higher Education. Piscataway, NJ: Transaction Publishers, Rutgers University; 2001.
  19. Adams J. Global Research Report: United Kingdom. Leeds, UK: Evidence; 2010.
  20. Adams J, Pendlebury D, Stembridge B. Building Bricks: Exploring the Global Research and Innovation Impact of Brazil, Russia, India, China and South Korea. Philadelphia, PA: Thomson Reuters; 2013.
  21. Leydesdorff L, Wagner C. Is the United States losing ground in science? A global perspective on the world science system. Scientometrics. 2009;78: 23–36.
  22. Marshall E, Travis J. UK scientific papers rank first in citations. Science. 2011;334: 443.
  23. King DA. The scientific impact of nations: What different countries get for their research spending. Nature. 2004;430: 311–316.
  24. Leydesdorff L, Wagner CS, Bornmann L. The European Union, China, and the United States in the top-1% and top-10% layers of most-frequently cited publications: Competition and collaborations. J Informetr. 2014;8: 606–617.
  25. Zhou P, Leydesdorff L. China ranks second in scientific publications since 2006. ISSI Newsletter. 2008 March: 7–9.
  26. Harzing AW, Giroud A. The competitive advantage of nations: An application to academia. J Informetr. 2014;8: 29–42.
  27. Wagner C. The shifting landscape of science. Issues in Science and Technology. 2011;28: 77–81.
  28. Williams R, Bornmann L. Sampling issues in bibliometric analysis. J Informetr. 2016;10: 1253–1257.
  29. Cumming G. Understanding the New Statistics: Effect Sizes, Confidence Intervals, and Meta-Analysis. London, UK: Routledge; 2012.
  30. 30. Cumming G, Finch S. Inference by eye—Confidence intervals and how to read pictures of data. American Psychologist. 2005;60: 170–180. pmid:15740449
  31. 31. Maisonobe M, Milard B, Jégou L, Eckert D, Grossetti M. The spatial de-concentration of scientific production activities: what about citations? A world-scale analysis at city level (1999–2011). In: Proceedings of the science and technology indicators conference 2017 Paris “Open indicators: innovation, participation and actor-based STI indicators”. Paris, France: ESIEE Paris; 2017.
  32. 32. Yan E, Ding Y, Cronin B, Leydesdorff L. A bird’s-eye view of scientific trading: Dependency relations among fields of science. J Informetr. 2013;7: 249–264.
  33. 33. Garfield E. Citation analysis as a tool in journal evaluation: journals can be ranked by frequency and impact of citations for science policy studies. Science. 1972;178(4060): 471–479.
  34. 34. Garfield E, Sher IH. New factors in the evaluation of scientific literature through citation indexing. Am Doc. 1963;14(3): 195–201.
  35. 35. Martyn J, Gilchrist A. An Evaluation of British Scientific Journals. London, UK: Aslib; 1968.
  36. 36. Garfield E. The Meaning of the Impact Factor. Int J Clin Health Psychol. 2003;3: 363–369.
  37. 37. Bensman SJ. Garfield and the impact factor. Annual Review of Information Science and Technology. 2007;41: 93–155.
  38. 38. de Solla Price DJ. Citation Measures of Hard Science, Soft Science, Technology, and Nonscience. In: Nelson CE, Pollock DK, editors. Communication among Scientists and Engineers. Lexington, MA: Heath; 1970. pp. 3–22.
  39. 39. Leydesdorff L, Bornmann L, Comins J, Milojević S. Citations: Indicators of Quality? The Impact Fallacy. Front Res Metr Anal. 2016;1: Article 1.
  40. 40. Coleman JS. Social Capital in the Creation of Human Capital. American Journal of Sociology. 1988;94: S95–S120.
  41. 41. van Raan AFJ. Sleeping Beauties in science. Scientometrics. 2004;59(3):467–72.
  42. 42. National Science Board. Science and engineering indicators 2004. Arlington, VA, USA: National Science Foundation; 2004.
  43. 43. National Science Board. Science and engineering indicators 2010. Arlington, VA, USA: National Science Foundation; 2010.
  44. 44. Rodríguez-Navarro A, Narin F. European Paradox or Delusion—Are European Science and Economy Outdated? Science and Public Policy. 2018;45: 14–23.
  45. 45. Bornmann L, Leydesdorff L. Macro-indicators of citation impacts of six prolific countries: InCites data and the statistical significance of trends. PLoS ONE. 2013;8: e56768. pmid:23418600
  46. 46. Abbott A. The secret to Germany’s scientific excellence. With a national election this month, Germany proves that foresight and stability can power research. Nature. 2017;549: 18–22.
  47. 47. Murmann JP. The Coevolution of Industries and Important Features of their Environments. Organization Science. 2013;24: 58–78.
  48. 48. Perkmann M, Tartari V, McKelvey M, Autio E, Brostrom A, D’Este P, et al. Academic engagement and commercialisation: A review of the literature on university-industry relations. Research Policy. 2013;42: 423–442.
  49. 49. Fu HZ, Chuang KY, Wang MH, Ho YS. Characteristics of research in China assessed with Essential Science Indicators. Scientometrics. 2011;88: 841–862.
  50. 50. Zhou P, Leydesdorff L. The emergence of China as a leading nation in science. Research Policy. 2006;35: 83–104.
  51. 51. Cao C, Li N, Li X, Liu L. Reforming China’s S& T System. Science. 2013;341: 460–462.
  52. 52. van Leeuwen TN, Moed HF, Tijssen RJW, Visser MS, van Raan AFJ. Language biases in the coverage of the Science Citation Index and its consequences for international comparisons of national research performance. Scientometrics. 2001;51: 335–346.
  53. 53. Wagner CS, Jonkers K. Open countries have strong science. Nature. 2017;550: 32–33. pmid:28980660