
The scientific standing of nations and its relationship with economic competitiveness

Abstract

In the current knowledge-based economy, the abilities of the national research system are a key driver of the country’s competitiveness and socio-economic development. This paper compares the scientific standing of the OECD countries and eight other relevant economies. We use a bibliometric indicator of research performance, applied first at the individual level. This approach avoids the distortions of the aggregate-level analyses extant in literature and practice, which overlook the different publication intensities across research fields. We find a strong correlation between research performance and the economic competitiveness of nations and a moderate but significant correlation between research performance and the propensity to spend on research.

1. Introduction

The importance of research as a driver of innovation, competitiveness, and socio-economic development has been amply demonstrated, beginning with Robert Solow’s 1956 milestone work [1] and continuing through countless subsequent studies [2–7]. On this basis, with ever more conviction, governments across OECD countries and partner economies have applied financial and non-financial instruments to stimulate investment in research and to increase the effectiveness and efficiency of national research systems.

Over the years, economists and bibliometricians have developed and applied indicators and models that can effectively portray a nation’s research profile. Indicator systems, some of which do not rely solely on publication data, were formulated by entities such as the U.S. National Science Foundation and the UK government as early as the 1950s (refer to [8] for a historical examination of the development of science and research taxonomies, along with statistical insights). Additionally, various nations initiated assessments of their scientific and technological competitiveness during the 1970s. Subsequently, routine reporting systems were established; for instance, the Office of Science and Technology in France and the Ministry of Research and Technology in Germany implemented such systems in the late 1980s and 1990s. Rankings of nations’ research performance have been produced to inform decisions at different levels. Research-based multinational corporations that can access reliable accounts of the different countries’ scientific standing can better inform their strategies for locating R&D activities. For governments, such reports enable checks on the effectiveness of policies and initiatives in support of research, adjustments in light of the strengths and weaknesses of the different scientific fields, and effective allocation of resources. The publication of performance rankings can also stimulate continuous improvement within institutions and at the level of individual research teams and researchers.

The resort to citation-based indicators to measure the scientific standing of nations began with the pioneering work by Robert May in 1997 [9], after which several scholars followed suit [10–15]. These scholars have gradually expanded the number of countries and research fields analyzed and refined the bibliometric performance indicators. Some have attempted to free the performance scores from their size dependency by dividing the overall bibliometric score by some measure of input (total number of researchers, R&D expenditures, or GDP), though ignoring that publication intensity varies across research fields [16].

This study aims to overcome some limitations of the methods and indicators adopted so far by starting with performance measurement at the level of the individual researcher. This involves identifying the researchers in each country (disambiguating their real identity), their publications in a period, their contributions to those publications, and their prevailing field of research; next, measuring the total scholarly impact of each researcher and comparing it with that of all other researchers in the world in the same field and period. The performance of a country in a field or area is then given by the average of the normalized performance of the country’s researchers in that field or area, and the overall performance by the average of the normalized performance of all researchers in the country.

In this paper, we present the proposed method and the relevant scientific performance indicator, applying them to the calculation of performance scores and rankings in the pre-COVID 2015–2019 period for the 38 OECD countries and eight non-member economies (chosen for their economic or scientific significance: China, India, Brazil, Russia, Taiwan, Argentina, Singapore, and South Africa), in 222 research fields, 11 research areas, and overall. The authors have already applied the same indicator to compare the research performance of the USA and Russia [17]. Given the scale of the analysis, computation is the main challenge here. The rankings of nations provided in each research field, area, and overall should convey valuable information for the above-mentioned stakeholders of research systems.

Furthermore, in this work, we investigate the possible association between a country’s scientific standing and i) its propensity to invest in R&D, as measured by gross domestic expenditure on R&D (GERD) as a percentage of GDP [18]; and ii) its economic competitiveness as measured by labour productivity [19].

2. The literature on evaluating a nation’s scientific standing

Assessing a country’s scientific standing at a field level presents a formidable challenge [20,21]. Definitions of “scientific standing” vary, and there is no consensus on the right approach to gauge it. Nevertheless, scientific standing inherently involves making comparisons and achieving superiority in terms of quality [22].

The fundamental question is understanding the concept of research quality and whether it differs from research impact. Some argue that impact is merely one dimension of research quality, with other dimensions including relevance and research rigor [23–25]. Conversely, some claim that quality and impact are separate components of scientific standing [26].

A chronological analysis of studies applying citation-based metrics to evaluate a nation’s scientific standing reflects the evolution of bibliometric indicators and methodologies in recent years. May’s landmark study in 1997 compared the scientific standing of 15 nations in STEM fields over a 14-year period using indicators such as WoS-indexed publications, citations, and citations per unit of spending. Subsequently, Adams [10] compared England’s performance in 47 fields with that of six other countries. An extended analysis by King [11] involved 31 countries over a decade, introduced additional bibliometric indicators, and used field-normalized citations to account for country size differences. Cimini et al. [14] used citations to scientific articles to assess the scientific standing of 238 countries in 27 scientific domains and 307 subdomains. More recently, Patelli et al. [27] applied a framework leveraging the Economic Fitness and Complexity algorithm [28] to quantify the scientific standing of nations and regions.

In parallel, research on assessing a nation’s relative research standing has increasingly focused on excellence, particularly as measured by highly cited articles (HCAs). HCAs offer a transparent framework for domestic and international comparisons [29]. Bornmann and Leydesdorff [12] developed a mapping approach to locate field-specific centers of excellence worldwide, while Pislyakov and Shukshina [30] utilized HCAs to identify “centers of excellence” in Russia. Abramo and D’Angelo [15] introduced an innovative output-to-input methodology enabling the assessment of research strengths and weaknesses by considering the ratios of leading scientists and HCAs to research funding in specific fields. Furthermore, publications like the Nature Index Annual Tables and organizations like the CWTS at Leiden University and the SCImago group have long provided yearly country rankings based on total publications, mean field-normalized citations per article, and the share of HCAs.

3. Methods and data

A significant concern regarding the approaches found in the literature is the limited adoption of efficiency indicators, which account for output relative to input. Many studies either do not incorporate efficiency indicators or do so at the aggregate level, i.e. by dividing total output or impact by total research expenditures or by the total number of researchers, overlooking the different throughput across fields (for example, mathematicians will produce less than clinicians, and, within the latter, vascular surgeons will produce less than haematologists). Consequently, the studies that do not account for input generally rank the USA at the top in most scientific fields, yet they do not clarify whether the USA’s high ranking results from greater research investment or superior scientific performance. Those that divide the overall output or impact by total R&D spending produce unreliable performance scores.

To circumvent the problem of lack of input data, size-independent indicators like “average normalized citations per publication” or MNCS were introduced [31,32]. Unfortunately, they prove ineffective, violating a key principle of production theory: if output increases with constant input, performance should not decline. With the “average normalized citations per paper” approach, this violation occurs when organizations or individuals produce additional publications with even slightly lower normalized impact than their previous average [33].
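To make the violation concrete, the following minimal sketch (with hypothetical field-normalized citation scores) shows a portfolio whose total normalized impact grows with constant input while its MNCS-style mean falls:

```python
# Illustrative (hypothetical numbers): why a size-independent mean like MNCS
# can fall even though total impact rises with constant input.

def mncs(norm_citations):
    """Mean normalized citation score of a publication portfolio."""
    return sum(norm_citations) / len(norm_citations)

before = [2.0, 1.5, 1.0, 1.5]   # field-normalized citations of 4 papers
after = before + [1.2]          # one extra paper, above the world average (1.0)
                                # but below the unit's previous mean

print(f"MNCS before: {mncs(before):.3f}")   # 1.500
print(f"MNCS after:  {mncs(after):.3f}")    # 1.440 -> the score drops
print(f"Total normalized impact: {sum(before):.1f} -> {sum(after):.1f}")  # 6.0 -> 7.2 rises
```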

To the best of our knowledge, as far as national-level rankings are concerned, besides Italy [34], successful attempts to account for inputs when evaluating individual, field, and institutional performance are limited to Norway [35] and Sweden [36]. The first two studies use an approach based on actual input and output levels, while the latter relies on changes in input and output levels.

The availability of input data at the field level represents the main obstacle to measuring and comparing nations’ research performance. Our methodology is built on the premise that the strength of one research field in a country can be determined by the superior performance of researchers in that field compared to others. Since scientific publication rates vary across fields, a direct comparison of performance at the aggregate level (total number of publications or citations) would favor those countries specialized in fields with higher publication rates, leading to distorted results [37].

To address this, we proceed in four steps. First, we identify the researchers in each country and classify them into research fields on the basis of the prevalent domain of their publications. Second, we measure the total scholarly impact of each researcher. Third, we normalize the performance of each researcher by the average performance of all world researchers in the same field. Finally, we average the field-normalized performance of all researchers in a country at the field, area, and overall level, leading to the relevant world rankings.
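As an illustration only, the following sketch outlines steps three and four in Python, assuming hypothetical researcher records that carry a country code, a prevalent subject category, and an individual impact score; the record layout and field names are ours, not part of the actual data pipeline:

```python
# Minimal sketch of the normalization and aggregation steps (hypothetical data layout).
from collections import defaultdict
from statistics import mean

researchers = [
    # {"country": "DE", "sc": "Soil Science", "fss_p": 1.8}, ...
]

# Step 3: world average performance per field, used for normalization.
by_field = defaultdict(list)
for r in researchers:
    by_field[r["sc"]].append(r["fss_p"])
world_avg = {sc: mean(scores) for sc, scores in by_field.items()}

# Step 4: country score per field = mean of field-normalized individual scores.
by_country_field = defaultdict(list)
for r in researchers:
    by_country_field[(r["country"], r["sc"])].append(r["fss_p"] / world_avg[r["sc"]])
country_field_score = {key: mean(vals) for key, vals in by_country_field.items()}
```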

3.1 Dataset construction

The analyses were conducted first at the individual level and then at the aggregate level by research field (subject category, SC), research area, and country, and concerned the 2015–2019 period; a more recent period would have compromised the accuracy of the measurement of the publications’ scholarly impact. In fact, the larger the citation time window, the closer the measurement approximates the overall impact [38]. To identify the countries’ researchers, we resorted to the rule-based scoring and clustering algorithm by Caron and Van Eck, or CvE [39]. We applied it to the in-house Web of Science (WoS) database of the Centre for Science and Technology Studies (CWTS) at Leiden University (updated to the 13th week of 2022).

In the author disambiguation process carried out by Caron and Van Eck in 2014, bibliometric data related to authors and their publications is used as input to identify clusters of publications likely authored by the same individual. The CvE method comprises three main stages: In the initial pre-processing phase, author name blocks are generated to reduce computational workload in subsequent phases.

During the rule-based scoring and oeuvre identification phase, potential author oeuvres are determined. For each author name block, the associated publication-author combinations (PACs) are identified. The score for a pair of PACs is computed using four sets of scoring rules that involve comparing author information, publication details, source data, and analyzing citation relationships.

The final score for a pair of PACs is the sum of scores obtained from these different scoring rules. These scores are assigned based on expert knowledge and fine-tuned by evaluating their accuracy using a test dataset. Experimental thresholds are also applied to decide whether two PACs belong to the same author oeuvre.

Candidate author oeuvres are identified separately for each author name block. Consequently, in the post-processing stage, candidate author oeuvres are merged if they share the same email address. This process results in the creation of final author oeuvres, which are referred to as “clusters.”
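As a rough illustration of the scoring logic described above, the sketch below scores a pair of publication–author combinations with placeholder rule weights and an arbitrary threshold; the actual rules, weights, and thresholds were calibrated by Caron and Van Eck on expert knowledge and test data and are not reproduced here, and the record fields are hypothetical:

```python
# Schematic of the rule-based scoring step (weights and threshold are placeholders,
# not the calibrated values used by Caron and Van Eck).

def pac_pair_score(pac_a, pac_b):
    """Sum of rule scores for a pair of publication-author combinations (PACs)."""
    score = 0
    if pac_a["email"] and pac_a["email"] == pac_b["email"]:
        score += 100                      # strong evidence: shared e-mail address
    if pac_a["affiliation"] == pac_b["affiliation"]:
        score += 20                       # matching affiliation string
    if set(pac_a["coauthors"]) & set(pac_b["coauthors"]):
        score += 30                       # shared co-authors
    if pac_a["journal"] == pac_b["journal"]:
        score += 10                       # same publication source
    if pac_a["pub_id"] in pac_b["cited_refs"] or pac_b["pub_id"] in pac_a["cited_refs"]:
        score += 40                       # direct citation relationship
    return score

THRESHOLD = 50  # placeholder: pairs scoring above it are assigned to the same oeuvre
```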

When the final author oeuvres have been obtained, meta-data is generated for each of the associated clusters. Table 1 reports, for example, the information that the algorithm associates with the cluster of publications by Nees Jan van Eck.

Table 1. Description of the output information of the CvE algorithm for Nees Jan van Eck.

https://doi.org/10.1371/journal.pone.0304299.t001

We use the algorithm’s output to identify each country’s research staff (i.e., via the “country” field). Of course, this algorithm is not error-free, for example, in dealing with authors with very common names or those with highly diversified and heterogeneous bibliographies, whose portfolios could be split into two or more clusters. Specifically, the CvE algorithm values precision over recall: if insufficient evidence exists for assigning publications to the same cluster, the method will assign them to different clusters. Consequently, an author’s publications may be split over multiple clusters. The validation of the algorithm conducted by its authors, based on two datasets of Dutch authors, resulted in an average precision of 95% and an average recall of 90%, with errors increasing for more common author names. Moreover, Tekles and Bornmann [40] found that the CvE algorithm was the best-performing approach compared to other unsupervised disambiguation approaches. Finally, it should be noted that for this paper, when moving to the aggregate level, errors in author disambiguation tend to compensate for one another, being independent of the authors’ country, and so should have a negligible effect on results.

To exclude “occasional” and no-longer-active researchers and improve the robustness of the analysis, we discard those clusters that fail to meet one or more of the following requirements over the 1980–2022 period (a sketch of these filters follows the list):

  • include at least ten publications (excludes “occasional” authors, for whom clustering has lower confidence levels);
  • include at least one publication published in 2020 or later (designed to exclude researchers no longer active);
  • with a “research activity” (measured by the difference between the first and the last publication year) of a minimum of five years (designed to include only “established” authors).
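A minimal sketch of these filters, assuming hypothetical cluster records that carry a list of publications and their publication years, could look as follows:

```python
# Sketch of the cluster-level filters over the 1980-2022 window
# (the field names of the cluster records are illustrative).

def is_retained(cluster):
    """Keep only established, still-active authors with reliable clusters."""
    years = cluster["pub_years"]                      # e.g. [1998, 2001, ..., 2021]
    enough_pubs = len(cluster["publications"]) >= 10  # excludes "occasional" authors
    still_active = max(years) >= 2020                 # excludes terminated researchers
    established = (max(years) - min(years)) >= 5      # at least 5 years of activity
    return enough_pubs and still_active and established

# retained = [c for c in clusters if is_retained(c)]
```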

To assign authors to research fields, we adopt the WoS field classification scheme, consisting of 254 subject categories (SCs) falling into 13 areas. We associate each publication with the WoS SC of the hosting journal and then identify the “prevalent” SC of the publications in a cluster, which becomes the author’s research field. Clusters with more than one prevalent SC do occur (around 7% of total researchers); in such cases, the prevalent SC is randomly assigned among those with the highest frequency.
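For illustration, a simple implementation of the prevalent-SC assignment with random tie-breaking (input and field names are hypothetical) might look like this:

```python
# Sketch of assigning each author their "prevalent" SC (the most frequent subject
# category among the hosting journals), with random tie-breaking as described above.
import random
from collections import Counter

def prevalent_sc(publication_scs, rng=random.Random(0)):
    counts = Counter(publication_scs)              # SC -> number of publications
    top = max(counts.values())
    candidates = [sc for sc, n in counts.items() if n == top]
    return rng.choice(candidates)                  # random choice among ties (~7% of cases)

# prevalent_sc(["Soil Science", "Agronomy", "Soil Science"])  -> "Soil Science"
```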

To avoid distortions due to WoS limitations in the coverage of literature, we limit the dataset to all SCs of the sciences and several SCs of the social sciences [4143]. The field of observation then includes 222 SCs grouped in 11 areas. In such fields, we count 2,250,698 clusters/authors affiliated with 206 different countries. We limit the analysis to the 38 OECD countries and another eight non-OECD economies selected for economic/scientific relevance: China, India, Brazil, Russia, Taiwan, Argentina, Singapore, and South Africa.

Fig 1 illustrates the workflow for the data selection and refinement, while Table 2 shows the breakdown of clusters by area, for the main countries in the dataset and overall. The 46 countries considered account for almost 2.1 million researchers, or 92.5% of the world total. Leading the way is the USA with just under half a million researchers, or 21.3% of the total, followed by China (242k, 10.7%), Japan and the UK (both with 110k researchers, or 4.9% of the total). Clinical Medicine is the area with the largest number of researchers worldwide, at just over 607k (27.0% of the total), followed by Engineering (377k, 16.7%), Biology (346k, 15.4%), and Biomedical research (297k, 13.2%).

Table 2. Number of clusters (x 1000) by area, for seven main countries and overall.

https://doi.org/10.1371/journal.pone.0304299.t002

3.2 Measuring the scientific standing of a country

We measure the scientific performance of a country starting from the individual researcher, through the Fractional Scientific Strength indicator, or FSSp, defined as:

FSS_p = \sum_{i=1}^{N} c_i f_i

where:

N = number of WoS publications by the author in the period under observation.

ci = impact of publication i, i.e. the weighted average of the field-normalized citations received by publication i and the field-normalized impact factor of the hosting journal, as suggested in Abramo et al. [44] (citations are normalized to the mean of the distribution referring to all cited publications of the same year and WoS SC as publication i; the impact factor of the journal refers to the year of publication and is normalized with respect to the average of the distribution of IFs of all journals in the same SC as publication i);

fi = fractional contribution of the author to publication i, given by the reciprocal of the number of co-authors in the byline.
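Putting the definitions together, a minimal sketch of the individual-level computation could look as follows; the weight combining normalized citations and normalized journal impact factor follows Abramo et al. [44] and is left here as an unspecified parameter (an assumption of this sketch), as is the publication record layout:

```python
# Sketch of the individual-level FSS_p computation from the definitions above.
# "alpha" (the weight combining normalized citations and normalized journal IF)
# and the record fields are placeholders, not values stated in this paper.

def publication_impact(citations, avg_cit_same_sc_year, journal_if, avg_if_same_sc, alpha):
    """c_i: weighted average of field-normalized citations and field-normalized IF."""
    norm_citations = citations / avg_cit_same_sc_year
    norm_if = journal_if / avg_if_same_sc
    return alpha * norm_citations + (1 - alpha) * norm_if

def fss_p(publications, alpha):
    """FSS_p = sum over the author's N publications of c_i * f_i."""
    total = 0.0
    for p in publications:
        c_i = publication_impact(p["citations"], p["avg_cit"], p["journal_if"],
                                 p["avg_if"], alpha)
        f_i = 1.0 / p["n_coauthors"]      # fractional contribution of the author
        total += c_i * f_i
    return total
```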

A thorough description of the methodology, assumptions and limitations, and underlying theory can be found in Abramo and D’Angelo [34].

Note that the dataset contains 73,679 clusters with nil FSS. These are:

  • “inactive” researchers, i.e. those with no eligible publications (articles, letters, reviews, and conference proceedings) during the observation period (2015–2019);
  • “active” researchers whose 2015–2019 eligible publications show nil impact.

The performance of countries, which are heterogeneous in the research fields of their staff, cannot be directly measured at the aggregate level [37]. So, after measuring the performance of individual authors (FSSp), we normalize individual performance by the average of the relevant SC world distribution. At the aggregate level, then, the yearly performance FSS_A for the aggregate unit A (the national research staff in an SC/area/overall) is:

FSS_A = \frac{1}{RS} \sum_{j=1}^{RS} \frac{FSS_{p_j}}{\overline{FSS_p}}

where:

RS = number of authors in the unit, in the observed period;

FSS_{p_j} = performance of author j in the unit;

\overline{FSS_p} = average performance of all world authors in the same SC as author j.

A value of FSS_A = 1.20 means that country unit A employs authors whose average performance is 20% higher than expected, i.e. than the world average.
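For illustration, the aggregation can be sketched as follows, assuming each author is represented by their FSS_p value and prevalent SC (the data layout is hypothetical):

```python
# Sketch of the country-level aggregation FSS_A: the average, over the RS authors in
# the unit, of each author's FSS_p divided by the world-average FSS_p of their SC.
from statistics import mean

def fss_a(unit_authors, world_avg_fss_by_sc):
    """unit_authors: list of (fss_p, sc) pairs for the country unit (SC/area/overall)."""
    return mean(fss / world_avg_fss_by_sc[sc] for fss, sc in unit_authors)

# fss_a([(1.8, "Soil Science"), (0.9, "Agronomy")],
#       {"Soil Science": 1.5, "Agronomy": 1.0})   -> mean(1.2, 0.9) = 1.05
```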

In this way, we can measure the performance of a country at SC, area, and overall level, avoiding distortions due to the different intensities of publication and citation across SCs. For significance, the ranking lists include all and only the countries with at least:

  • 30 clusters, for analysis at SC level,
  • 100 clusters, for analysis at area level.

4. Results

Below, we present the outcome of applying the proposed approach through some examples. For the research performance indicator and method adopted, one country will perform better than another if one or more of the following conditions occur: its researchers, on average, (i) are more capable than others; (ii) devote more time to research; or (iii) have more resources available to them. The performance indicator used (Fractional Scientific Strength, or FSS) is normalized to the world average; e.g. a score of FSS = 1.20 indicates a performance 20% higher than the world average.

A first application of the measurements made allows us to identify the scientific strengths and weaknesses of each country by field. As an example, in Table 3 we show the top ten and bottom ten research fields by research performance (FSS) of German researchers. Germany ranks top in three fields: Business; Soil Science; Mathematical & Computational Biology. At the same time, the country ranks bottom in four fields: Education, Scientific Disciplines; Materials Science, Composites; Engineering, Environmental; Materials Science, Characterization & Testing.

Table 3. Top ten and bottom ten research fields (SC) in Germany by research performance (FSS).

For the sake of significance, the ranking lists contain all and only countries with at least 30 researchers in the field and research fields with at least five countries.

https://doi.org/10.1371/journal.pone.0304299.t003

Complete data for all countries can be seen in S1 File.

Scaling up the aggregation level, we can observe the performance of each country in each research area. In this regard, Table 4 shows the scientific standing of New Zealand by area. The country ranks top in Biomedical research and close to the top in Psychology and Clinical Medicine. In contrast, performance in Mathematics, Economics, and Political and social sciences is below the median.

Table 4. Research performance of New Zealand, by research area.

For the sake of significance, the ranking lists contain all and only countries with at least 100 researchers in the area.

https://doi.org/10.1371/journal.pone.0304299.t004

Full data can be seen in S2 File.

Adopting a complementary perspective, the analysis can provide rankings of countries by research field and area. Table 5 shows the case of “Meteorology & Atmospheric Sciences”, where 33 of the 46 countries considered employ at least 30 researchers. Leading the ranking is Switzerland, whose 199 researchers show an average performance 58 percent above the world average (FSS = 1.580), followed by Portugal (1.359), the USA (1.273), and the UK (1.221). Conversely, Argentina, Hungary, Mexico, and finally Russia occupy the trailing positions.

Table 5. Research staff and research performance of countries in “Meteorology & Atmospheric Sciences”.

For the sake of significance, the ranking lists contain all and only countries with at least 30 researchers in the research field.

https://doi.org/10.1371/journal.pone.0304299.t005

Finally, Table 6 reports the performance of all 46 countries considered, obtained by averaging the FSS of all researchers in each one. Leading the way is Singapore, a relatively small country with 9,501 researchers that operates with at least 30 researchers in 79 of the 222 fields: its average research performance is about 60 percent higher than the world average. Australia is second, followed by Denmark, the Netherlands, Switzerland, the UK, and the USA. At the tail end, in addition to Argentina, are four countries of the former Soviet Union and Eastern bloc: Lithuania, Slovakia, Russia, and Latvia. China, the second largest country with over 240,000 researchers, places twelfth, ahead of Sweden and Germany; France and Japan fall below the median.

Table 6. The dataset’s research staff and overall research performance of countries.

https://doi.org/10.1371/journal.pone.0304299.t006

5. Discussion and conclusions

While assessing a country’s scientific standing is crucial for governments, businesses, and funding agencies as they determine their scientific priorities and allocate resources, it is also interesting to judge the extent of association between scientific standing and i) propensity to invest in research and ii) economic competitiveness.

The relationship between the scientific standing of nations and their propensity to invest in research, while complex and multifaceted, is often characterized by a positive feedback loop: a nation’s commitment to research can enhance its scientific standing, and a solid scientific standing can, in turn, encourage greater investment in research. Robust scientific ecosystems attract talent, encourage innovation, and provide a knowledge base for further research. It is important to note that the relationship between scientific standing and research investment is not linear, and there can be variations among countries. Some nations may prioritize research as a means to improve their scientific standing, while others may initially invest in building a strong scientific foundation to attract further research investment.

Fig 2 shows the position of each country in terms of scientific standing and propensity to invest in research. A significant correlation occurs between a country’s propensity to invest its wealth in R&D (measured through GOVERD+HERD as a percentage of GDP) and its scientific standing (Spearman rho: 0.610). In particular, among the countries that show a much better ranking in terms of research performance than in terms of propensity to invest in R&D are Ireland, the United Kingdom, China, and South Africa. Conversely, among those that show a better ranking in the propensity to invest in R&D than in research performance, Norway, the Czech Republic, Lithuania, and Finland stand out.

Fig 2. GOVERD+HERD (as a percentage of GDP) and research performance rankings across 43 countries in the dataset (India, Brazil, and Costa Rica are not listed in the OECD report).

https://doi.org/10.1371/journal.pone.0304299.g002

There are several reasons why the scientific standing of a nation is closely intertwined with its economic competitiveness. Countries with a strong scientific standing tend to have well-educated and skilled labour forces. This human capital is critical for driving economic growth, as it enables businesses to be more productive and competitive. A solid scientific standing is often associated with a culture of innovation and the development of advanced technologies. Innovation, in turn, is a key driver of economic competitiveness. A robust scientific standing can also attract foreign direct investment and business investment: companies are often attracted to regions with a strong research and innovation ecosystem because it offers access to talent, resources, and a supportive environment for research and development activities. Scientific research contributes to the development of high-quality products and services: when a nation’s scientific community collaborates with industries, it creates goods and services that are more competitive in terms of quality, cost-effectiveness, and innovation. Finally, strong scientific capabilities often lead to economic diversification. A diversified economy is more resilient and competitive because it is not overly reliant on one specific industry or sector. A nation’s scientific capacity can facilitate this diversification by fostering innovation across multiple domains.

We therefore correlated the revealed scientific standing of nations with their economic competitiveness, as measured by labour productivity as defined and calculated by the International Labour Organization [19]: the average 2015–2019 output per worker (GDP, constant 2017 international $ at PPP).

The comparison for 46 countries in the dataset, shown in Fig 3, indicates a strong positive correlation (Spearman rho of 0.657) between the two rankings.
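For reference, a rank correlation of this kind can be computed directly from the two country rankings; the sketch below uses scipy.stats.spearmanr on hypothetical rank values, not the study’s actual data:

```python
# Minimal sketch of the rank-correlation check between the two country rankings
# (illustrative rank values only; the paper reports rho = 0.657 for all 46 countries).
from scipy.stats import spearmanr

fss_rank = {"SGP": 1, "AUS": 2, "DNK": 3, "NLD": 4, "CHE": 5}            # hypothetical
productivity_rank = {"SGP": 2, "AUS": 5, "DNK": 3, "NLD": 4, "CHE": 1}   # hypothetical

countries = sorted(fss_rank)
rho, p_value = spearmanr([fss_rank[c] for c in countries],
                         [productivity_rank[c] for c in countries])
print(f"Spearman rho = {rho:.3f} (p = {p_value:.3f})")
```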

Fig 3. Economic competitiveness (labour productivity) and research performance rankings across countries in the dataset.

https://doi.org/10.1371/journal.pone.0304299.g003

In particular, the figure makes it easy to discriminate between countries above and below the bisector. Among those with ranking differences of more than twelve positions in absolute value, we find:

  • China, South Africa, India, New Zealand, the UK, and Australia—countries whose ranking, in terms of research performance, is better than that for competitiveness;
  • Turkey, France, Norway, and Ireland—countries whose ranking in terms of competitiveness is better than that for research performance.

It should be remembered that translating good scientific performance into equally good economic competitiveness requires ability in cross-sector technology transfer and good absorptive capacity of the production system [45]. In the absence of these requisites, because non-proprietary knowledge is a public good, easily transmitted, and transnational, the country’s scientific production may even benefit competing countries that have better developed these capabilities. Moreover, economic competitiveness can also be supported by factors other than research, such as natural resource endowments, macroeconomic stability, health, infrastructure, ICT adoption, and business dynamism.

This study overcomes the limitations of the methodological approaches and bibliometric indicators previously adopted for the measurement of the research competitiveness of nations. Indeed, it is not correct to consider as an indicator of performance i) the sum of publications, because publications do not have the same value, i.e. the same impact on scientific advances; ii) full counting of co-authored publications instead of the individual contribution to the publications [46]; iii) the sum of citations, even if field-normalized, because the more recent the publication date, the weaker the power of early citations to predict overall future citations [44,47,48]; and iv) the mean normalized citations per paper because, among other issues, a country producing more papers, with all else remaining equal (i.e. inputs, mean citations per paper, etc.), would not improve its performance score [33]. Methodologically, it is also incorrect to measure productive efficiency by dividing research outcome by macro quantities, such as total research staff, R&D expenditures, and the like, because publication intensity, and thus total impact, varies considerably across research fields [16].

However much the proposed approach overcomes the above limitations, others remain, typical of bibliometric techniques, and must be taken into account when interpreting performance results. Bibliometrics assumes that all new knowledge produced is codified in publications indexed in bibliographic repertories; it therefore excludes publications not indexed in them, as well as tacit knowledge. It also treats citation-based indicators as a measure of scholarly impact, assuming that when a scholar cites a publication they have drawn on it, more or less heavily, for the further advancement of knowledge. But it is known that this is not always true, as unjustified (self-)citations may occur, as well as uncitedness and undercitation [49,50]. Furthermore, citations are unable to capture impact outside the scientific community.

Finally, the performance indicator adopted is only partially an indicator of productive efficiency, as other input data, such as time spent on research and resources available for researchers to do research, are not available.

Some possible distortions in the rankings therefore remain. Indeed, countries with a greater propensity to publish in national-language journals not surveyed in the Web of Science are disadvantaged. This is the case for Japan [51], China, and countries of the former Soviet Union and Eastern bloc [52]. Also disadvantaged are countries with a higher proportion of researchers in the private sector, e.g. Japan and the US [53], whose publication intensity is on average lower than that of colleagues in the government and higher education sectors. In fact, in order to maximize the returns on R&D investments, the private sector tends to keep research results proprietary and avoid knowledge spillovers, so as not to favor competitors.

Supporting information

S1 File. Research performance in research fields.

https://doi.org/10.1371/journal.pone.0304299.s001

(XLSX)

S2 File. Research performance in research areas and overall.

https://doi.org/10.1371/journal.pone.0304299.s002

(XLSX)

Acknowledgments

We are indebted to the Centre for Science and Technology Studies (CWTS) at Leiden University for providing us with access to the in-house WoS database, from which we extracted the data used in our analyses.

References

  1. Solow RM. A contribution to the theory of economic growth. Q J Econ. 1956;70(1):65–94.
  2. Terleckyj N. Direct and indirect effects of industrial research and development on the performance growth of industries. In: Kendrick J, Vaccara B, editors. New Developments in Performance Measurement and Analysis. University of Chicago Press; 1980.
  3. Link AN. Basic research and performance increase in manufacturing: some additional evidence. Am Econ Rev. 1981;71:1111–2.
  4. Nadiri MI, Mamuneas TP. The effects of public infrastructure and R&D capital on the cost structure and performance of US manufacturing industries. Rev Econ Stat. 1994;76(1):22–37.
  5. Griliches Z. R&D, education, and performance: A retrospective. Harvard University Press; 2000.
  6. Scotchmer S. Innovation and incentives. MIT Press; 2004.
  7. Deleidi M, De Lipsis V, Mazzucato M, Ryan-Collins J, Agnolucci P. The macroeconomic impact of government innovation policies: A quantitative assessment. 2019.
  8. Godin B. Outline for a history of science measurement. Sci Technol Hum Values. 2002;27(1):3–27.
  9. May RM. The scientific wealth of nations. Science. 1997;275(5301):793–6.
  10. Adams J. Benchmarking international research. Nature. 1998;396(6712):615–8. pmid:9872303
  11. King DA. The scientific impact of nations. Nature. 2004;430(6997):311–6. pmid:15254529
  12. Bornmann L, Leydesdorff L. Which cities produce more excellent papers than can be expected? A new mapping approach, using Google Maps, based on statistical significance testing. J Am Soc Inf Sci Technol. 2011;62(10):1954–62.
  13. Bornmann L, Leydesdorff L, Walch-Solimena C, Ettl C. Mapping excellence in the geography of science: An approach based on Scopus data. J Informetr. 2011;5(4):537–46.
  14. Cimini G, Gabrielli A, Labini FS. The scientific competitiveness of nations. PLoS One. 2014;9(12). pmid:25493626
  15. Abramo G, D’Angelo CA. A novel methodology to assess the scientific standing of nations at field level. J Informetr. 2020;14(1).
  16. D’Angelo CA, Abramo G. Publication rates in 192 research fields of the hard sciences. In: Proceedings of ISSI 2015 Istanbul: 15th International Society of Scientometrics and Informetrics Conference; 2015. p. 909–19.
  17. Abramo G, D’Angelo CA, Costa FD. USA vs Russia in the scientific arena. PLoS One. 2023;18(7). pmid:37410762
  18. OECD. Gross domestic expenditure on R&D (GERD) as a percentage of GDP. In: Main Science and Technology Indicators. Paris: OECD Publishing; 2022.
  19. International Labour Organization (ILO). The Competitiveness Indicators (COMP) database. 2023. https://ilostat.ilo.org/data/#
  20. Werner BM, Souder WE. Measuring R&D Performance-State of the Art. Res Manag. 1997;40(2):34–42.
  21. Hauser JR, Zettelmeyer F. Metrics to Evaluate R,D&E. Res Manag. 1997;40(4):32–8.
  22. Tijssen RJW. Scoreboards of research excellence. Res Eval. 2003;12(2):91–103.
  23. Martin BR, Irvine J. Assessing basic research. Res Policy. 1983;12(2):61–90.
  24. Boaz A, Ashby D. Fit for purpose? Assessing research quality for evidence based policy and practice. 2003.
  25. OECD. The evaluation of scientific research: Selected experiences. Paris; 1997.
  26. Grant J, Brutscher PC, Kirk SE, Butler L, Wooding S. Capturing Research Impacts: A review of international practice. Cambridge, UK: Rand Europe; 2010.
  27. Patelli A, Napolitano L, Cimini G, Gabrielli A. Geography of science: Competitiveness and inequality. J Informetr. 2023;17(1):101357.
  28. Cristelli M, Gabrielli A, Tacchella A, Caldarelli G, Pietronero L. Measuring the intangibles: A metrics for the economic complexity of countries and products. PLoS One. 2013;8(8):e70726. pmid:23940633
  29. Tijssen RJW, Visser MS, van Leeuwen TN. Benchmarking international scientific excellence: Are highly cited research papers an appropriate frame of reference? Scientometrics. 2002;54(3):381–97.
  30. Pislyakov V, Shukshina E. Measuring excellence in Russia: Highly cited papers, leading institutions, patterns of national and international collaboration. J Assoc Inf Sci Technol. 2014;65(11):2321–30.
  31. Waltman L, van Eck NJ, van Leeuwen TN, Visser MS, van Raan AFJ. Towards a new crown indicator: Some theoretical considerations. J Informetr. 2011;5(1):37–47.
  32. Moed HF. CWTS crown indicator measures citation impact of a research group’s publication oeuvre. J Informetr. 2010;4(3):436–8.
  33. Abramo G, D’Angelo CA. A farewell to the MNCS and like size-independent indicators: Rejoinder. J Informetr. 2016;10(2):679–83.
  34. Abramo G, D’Angelo CA. How do you define and measure research productivity? Scientometrics. 2014;101(2):1129–44.
  35. Abramo G, Aksnes DW, D’Angelo CA. Unveiling the distinctive traits of a nation’s research performance: The case of Italy and Norway. Quant Sci Stud. 2022;3(3):732–54.
  36. Sandström U, Van den Besselaar P. Funding, evaluation, and the performance of national research systems. J Informetr. 2018;12(1):365–84.
  37. Abramo G, D’Angelo CA, Di Costa F. Assessment of sectoral aggregation distortion in research productivity measurements. Res Eval. 2008;17(2):111–21.
  38. Abramo G, D’Angelo CA, Cicero T. What is the appropriate length of the publication period over which to assess research performance? Scientometrics. 2012;93(3):1005–17.
  39. Caron E, van Eck NJ. Large scale author name disambiguation using rule-based scoring and clustering. In: Proceedings of the 2014 Science and Technology Indicators Conference. Leiden: Universiteit Leiden-CWTS; 2014. p. 79–86.
  40. Tekles A, Bornmann L. Author name disambiguation of bibliometric data: A comparison of several unsupervised approaches. In: 17th International Conference on Scientometrics and Informetrics, ISSI 2019—Proceedings; 2019. p. 1548–59.
  41. Hicks D. The difficulty of achieving full coverage of international social science literature and the bibliometric consequences. Scientometrics. 1999;44(2):193–215.
  42. Archambault É, Vignola-Gagné É, Côté G, Larivière V, Gingras Y. Benchmarking scientific output in the social sciences and humanities: The limits of existing databases. Scientometrics. 2006;68(3):329–42.
  43. Larivière V, Archambault É, Gingras Y, Vignola-Gagné É. The place of serials in referencing practices: Comparing natural sciences and engineering with social sciences and humanities. J Am Soc Inf Sci Technol. 2006;57(8):997–1004.
  44. Abramo G, D’Angelo CA, Felici G. Predicting publication long-term impact through a combination of early citations and journal impact factor. J Informetr. 2019;13(1):32–49.
  45. Haskel J, Hughes A, Bascavusoglu-Moreau E. The economic significance of the UK science base: a report for the Campaign for Science and Engineering. London; 2014.
  46. Waltman L, van Eck NJ. Field-normalized citation impact indicators and the choice of an appropriate counting method. J Informetr. 2015;9(4):872–94.
  47. Bornmann L, Leydesdorff L, Wang J. How to improve the prediction based on citation impact percentiles for years shortly after the publication date? J Informetr. 2014;8(1):175–80.
  48. Stegehuis C, Litvak N, Waltman L. Predicting the long-term citation impact of recent publications. J Informetr. 2015;9(3):642–57.
  49. Tahamtan I, Bornmann L. Core elements in the process of citing publications: Conceptual overview of the literature. J Informetr. 2018;12(1):203–16.
  50. Tahamtan I, Safipour Afshar A, Ahamdzadeh K. Factors affecting number of citations: a comprehensive review of the literature. Scientometrics. 2016;107(3):1195–225.
  51. Pendlebury DA. When the data don’t mean what they say: Japan’s comparative underperformance in citation impact. In: Daraio C, Glänzel W, editors. Evaluative Informetrics: The Art of Metrics-Based Research Assessment. Springer; 2020.
  52. Macháček V. Globalization of science: Evidence from authors in academic journals by country of origin. In: 17th International Conference on Scientometrics and Informetrics, ISSI 2019—Proceedings; 2019. p. 339–50.
  53. OECD. Main Science and Technology Indicators. 2023.