
Fair ranking of researchers and research teams

Abstract

The main drawback of ranking researchers by the number of papers, citations or by the Hirsch index is that it ignores the problem of distributing authorship among the authors of multi-author publications. At present, single-author and multi-author publications contribute equally to the publication record of a researcher. This full counting scheme is apparently unfair and causes unjust disproportions, in particular if the ranked researchers have distinctly different collaboration profiles. These disproportions are removed by the less common fractional or authorship-weighted counting schemes, which distribute the authorship credit more properly and suppress the tendency towards an unjustified inflation of co-authors. The urgent need to adopt a fair ranking scheme widely in practice is exemplified by analysing the citation profiles of several highly-cited astronomers and astrophysicists. While the full counting scheme often leads to completely incorrect and misleading ranking, the fractional and authorship-weighted schemes are more accurate and applicable to the ranking of researchers as well as research teams. In addition, they suppress differences in ranking among scientific disciplines. These more appropriate schemes should urgently be adopted by scientific publication databases such as the Web of Science (Thomson Reuters) or Scopus (Elsevier).

Introduction

The simplest way to measure the quality of scientists is to evaluate three integer numbers: the number of published papers, the number of citations, and the h-index introduced by Hirsch [1] and defined as the maximum number h of a scientist's papers that are cited at least h times each. Although ‘a single number can never give more than a rough approximation to an individual's multifaceted profile, and many other factors should be considered in combination in evaluating an individual’ [1], it is believed that this metric provides a useful measure of the productivity of a scientist and the impact of his or her research. In particular, the h-index has become popular and widely accepted because it reflects both the quality and the quantity of the scientific output. It suppresses the disproportionate weight of a few highly cited papers and ignores less significant papers with no or few citations. The h-index is usually determined as an integer, but it can be modified to be real valued [2]. Other generalizations or modifications of the h-index have also been proposed [3–9], including a specification of which types of papers (e.g., peer-reviewed papers, proceedings, book chapters) should be considered for ranking [10].
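As a concrete illustration of this definition, the h-index can be computed from a list of citation counts in a few lines. The following minimal sketch (my own, not from [1]) ranks the counts in decreasing order and finds the largest rank r whose paper has at least r citations:

```python
# A minimal sketch of the h-index computation (illustrative).
def h_index(citations):
    ranked = sorted(citations, reverse=True)      # rank-citation profile c_r
    # Because ranked is non-increasing, c_r >= r holds for a prefix of ranks,
    # so counting the ranks that satisfy the condition gives the h-index.
    return sum(1 for r, c in enumerate(ranked, 1) if c >= r)

print(h_index([70, 50, 35, 25, 20, 3, 1, 0]))     # -> 5
```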

Obviously, research evaluation and the definition of ranking criteria have a long-term impact on the research itself. Researchers try to increase their ranking by complying with the currently accepted criteria. As a result, the number of citations is inflated by self-citations and coercive citations [11], and the number of published papers and the h-index rise over time with the inflated number of multi-author publications and of authors [12,13]. As discussed by Papatheodorou et al. [14], the inflation of authors is not just due to increasing research complexity but is also shaped by the interplay of ‘publish or perish’ pressures, collaborative needs and the visibility of research. Kwok [15] also discusses the unethical behaviour of some scientists who demand co-authorship to improve their ranking.

In this paper, I focus on the problem of how to deal properly with multi-author papers when ranking researchers and research teams. I review several alternative approaches to the standard ranking scheme and discuss their pros and cons. By analysing synthetic examples as well as the citation profiles of several highly-cited astronomers and astrophysicists, I expose the failure of the standard ranking criteria and point to the urgent need to adopt a more accurate and fairer ranking scheme in evaluation practice. I show that the standard ranking can lead to completely incorrect and misleading evaluations. The behaviour and trends of team ranking as a function of the quality and the number of individual team members are also discussed.

Authorship counting

Full counting

The main drawback of ranking scientists by the total number of papers, citations or by the standard h-index is that it ignores the problem of co-authorship in multi-author publications. At present, single-author and multi-author publications contribute equally to the publication record of a scientist [1]. This full counting method is very simple and easy to apply, but it is apparently unfair and causes unjust disproportions in the evaluation [12–14]. Obviously, the individual contributions of ten co-authors to a paper are very different from the contribution of a single author who writes a paper alone. Moreover, papers with ten or more co-authors are not exceptional; some papers have more than 500 co-authors [16]. For example, a physics paper with more than 5000 authors was published in 2015 [17]. Since research fields are characterized by different extents of collaboration, full counting also produces significant differences among scientific disciplines [18].

Fractional counting

Fractional counting credits papers fractionally according to the number of authors [18–20]. For example, Batista et al. [18] substitute the h-index by the index $h_I = h^2/N_a$, where $N_a$ is the total number of authors in the h papers considered. Another possibility, proposed by Schreiber [21,22] and Egghe [4], is to distribute the authorship credit of each paper uniformly among its authors. For example, each of the three authors of a paper receives one third of the authorship credit. Both approaches [18,21,22] yield similar rankings, which remove the evident disproportions between authors’ contributions to single- and multi-author papers and thus represent a significant improvement over the original h-index. Moreover, this counting quite effectively removes differences among scientific disciplines [18].
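To make the two fractional variants concrete, the following sketch (helper names are mine) computes the $h_I$ index of Batista et al. and the equal per-paper credit of Schreiber and Egghe; it assumes the papers are supplied as parallel lists of citation counts and author counts:

```python
# Sketch of two fractional-counting variants (illustrative).
def h_index(citations):
    ranked = sorted(citations, reverse=True)
    return sum(1 for r, c in enumerate(ranked, 1) if c >= r)

def h_I(citations, n_authors):
    """Batista et al. [18]: h_I = h^2 / N_a, where N_a is the total
    number of authors over the h core papers."""
    papers = sorted(zip(citations, n_authors), reverse=True)
    h = h_index([c for c, _ in papers])
    N_a = sum(n for _, n in papers[:h])
    return h * h / N_a

def fractional_credit(n_authors):
    """Schreiber/Egghe [4,21,22]: each of n authors receives credit 1/n."""
    return [1.0 / n for n in n_authors]

# Five core papers written alone give h_I = h = 5; the same papers with
# five co-authors each give h_I = 25 / 25 = 1.
print(h_I([10, 9, 8, 7, 6], [1, 1, 1, 1, 1]))  # -> 5.0
print(h_I([10, 9, 8, 7, 6], [5, 5, 5, 5, 5]))  # -> 1.0
```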

Nevertheless, some authors argue that this scheme: (1) discourages collaboration, which is essential for progress in science, and (2) divides the credit equally, which is not necessarily accurate and can neglect the crucial role of some co-authors [23]. However, as mentioned by Waltman & Van Eck [24], these arguments are not fully justified because:

  • The primary goal of ranking is to evaluate the scientific credit of researchers, not their collaboration abilities. If needed, the collaboration of a researcher can be quantified independently by the mean number of co-authors per paper or by the c-index, defined in analogy to the h-index but counting the number of co-authors instead of citations.
  • A fruitful collaboration results in a high number of high-quality papers (i.e., more papers with more citations), so co-authors benefit from a productive collaboration even under the fractional scheme.
  • The equal distribution of credit among all authors cannot be taken as an argument for preferring full counting over fractional counting, because the equal-credit distribution is common to both schemes. Moreover, the equal-credit distribution can easily be removed by modifying the simple fractional scheme into more sophisticated authorship-weighted schemes, which better reflect the contributions of the individual authors.

Authorship-weighted counting

This type of counting attempts to distribute credit among authors more properly than the simple fractional counting. The authorship credit of each paper is usually normalized to 1 and split according to various rules attempting to quantify the contributions of the individual authors. I review several possible schemes for distributing the authorship among co-authors [25–27]:

  • The ‘equal-contribution’ (EC) scheme, in which the authorship credit is distributed among all authors equally. This is the standard fractional scheme [4,21,22], appropriate for papers whose authors use the alphabetical sequence to emphasize similar contributions within the collaborating group.
  • The ‘sequence-determines-credit’ (SDC) scheme, in which the sequence of authors reflects the declining importance of each co-author’s contribution. This scheme is appropriate if the authors do not use the alphabetical sequence and the number of authors is not too large. The first author is the main contributing author and receives the highest credit. The credit of the other authors gradually decreases with their position in the list. The distribution of credits among the authors can be calculated using harmonic counting [28–31], geometric counting [25], arithmetic counting [30] or other counting methods [8]; see the sketch after this list.
  • The ‘first-author-emphasis’ (FA) scheme, in which the first author, as the main contributor, has higher credit than the other co-authors. In an extreme case, the first author can receive the full credit and the other authors no credit [32,33]. A more appropriate approach is, however, to allocate only a limited bonus to the first author. The other authors receive either equal credit, as in the EC scheme, or gradually decreasing credit, as in the SDC scheme [34].
  • The ‘first-last-author-emphasis’ (FLA) scheme, in which the first author, as the main contributor, and the last author, as the project leader, have higher credits than the other co-authors [26,35,36]. This approach is, however, somewhat confusing because it mixes scientific credit with leadership abilities. As with quantifying success in collaboration, success in research leadership and in supervising young researchers should be evaluated separately by another factor.
  • The ‘corresponding-author-emphasis’ (CA) scheme, in which extra credit is allocated to the corresponding author [24,33,34,37–40]. This scheme is particularly suitable when the authors are listed in the alphabetical sequence.
  • The ‘contribution-indicated’ (CI) scheme, in which the individual contributions are explicitly acknowledged by the authors themselves according to the policy of some journals.
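As a sketch of the SDC counting methods referenced above, the following functions implement the usual harmonic, geometric and arithmetic credit formulas (my transcription of the cited schemes; author position i runs from 1, the first author, to n, and each list sums to 1):

```python
# Three SDC credit distributions (sketches of the cited counting methods).
def harmonic(n):
    # credit_i proportional to 1/i [28-31]
    s = sum(1.0 / j for j in range(1, n + 1))
    return [(1.0 / i) / s for i in range(1, n + 1)]

def geometric(n):
    # credit_i proportional to 2^(n-i) [25]
    return [2.0 ** (n - i) / (2.0 ** n - 1) for i in range(1, n + 1)]

def arithmetic(n):
    # credit_i proportional to n + 1 - i [30]
    return [2.0 * (n + 1 - i) / (n * (n + 1)) for i in range(1, n + 1)]

print(harmonic(3))    # [0.545, 0.273, 0.182] (rounded)
print(geometric(3))   # [0.571, 0.286, 0.143]
print(arithmetic(3))  # [0.500, 0.333, 0.167]
```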

Combined weighted counting scheme

The variety of counting schemes for evaluating the authorship indicates the complexity of the problem. It is evident that, except for the CI scheme, no scheme is fully accurate and general [26,41]. Since the CI scheme is not applicable to all papers at present, it is desirable: (1) to find and accept some compromise measure of the authorship, and (2) to know how sensitive the ranking of researchers is to the applied counting scheme. For this purpose, I propose a simple authorship-weighted scheme which combines the basic features of the most important weighted schemes listed in the previous section, and I compare the h-index calculated by this scheme with the full and fractional counting schemes.

The combined weighted scheme is defined as follows:

  • The sum of authorships of individual authors equals 1 for each paper. This is a basic condition which ensures that all papers have an equal weight irrespective of the number of co-authors. This condition is crucial for fair ranking of researchers and it is violated in the standard full ranking scheme.
  • If possible, the authorship is defined by the authors themselves (CI scheme).
  • In the other cases, the authorship is allocated as follows:
    1. If the authors are listed in alphabetical order and the corresponding author is indicated, the corresponding author receives a bonus and the rest is divided equally among all co-authors. If the bonus is zero, we obtain the simple fractional scheme. For bonus b = 20%, the authorship is 6/10 and 4/10 for two authors, and 7/15, 4/15 and 4/15 for three authors; for other examples, see Tables 1 and 2. If the corresponding author is not indicated, the full authorship is divided equally among all co-authors.
    2. If the authors are not listed in alphabetical order, the first author and the corresponding author receive the same bonus b and the rest is divided equally among all co-authors. If the first author is also the corresponding author, the total bonus is 2b. Hence, for bonus b = 20%, the authorship is 7/10 and 3/10 for two authors if the first author is also the corresponding author, but 5/10 and 5/10 if the corresponding author is the second author. For other examples, see Tables 1 and 2.
    3. In the case of several corresponding authors, the bonus b is split equally among them. The same applies to several equal-first authors, which may also occasionally occur [42].
Table 1. Authorship weights for the combined counting scheme with bonus b = 20%.

https://doi.org/10.1371/journal.pone.0195509.t001

Table 2. Authorship weights for the combined counting scheme with bonus b = 30%.

https://doi.org/10.1371/journal.pone.0195509.t002

This counting scheme is simple and similar to the fractional counting of Schreiber [21,22,43] except for the bonus for the first and corresponding authors. Allocating a bonus to the first author in a non-alphabetical authorship reflects the principal role of this author. Allocating a bonus to the corresponding author in alphabetical and non-alphabetical authorships is desirable for several reasons. First, one of the authors always has a major role in preparing the paper, even in publications with alphabetical authorship. Second, the alphabetical authorship might not be intentional, in particular when the number of authors is low [11]; in this case, the first author can receive a bonus as the corresponding author. Third, the scheme is also able to distribute credit between young authors and their supervisors and to emphasize the role of group leaders, who can receive extra credit as the corresponding author(s). The value of the bonus for the first and corresponding authors should be between 10% and 40%. A low value of the bonus suppresses the role of the principal contributors, while a high value leaves almost negligible authorship to the other authors. The authorship distributions for b of 20% and 30% are summarized in Tables 1 and 2.

Mathematical definition of the weighted scheme

The following series of numbers are needed for quantifying the publication career of a scientist:

  • ‘rank-citation profile’ $c_r$, r = 1,…,N, which is the number of citations to his/her paper r, with the papers ranked in decreasing order of citations,
  • ‘author-number profile’ $n_r$, r = 1,…,N, which is the number of authors of his/her paper r,
  • ‘authorship profile’ $a_r$, r = 1,…,N, which quantifies the authorship fraction of his/her paper r,
  • ‘cumulative authorship profile’ $A_r = \sum_{i=1}^{r} a_i$, r = 1,…,N, which cumulatively sums the authorship of the ranked papers,

where N is the total number of papers published by a given scientist.

The authorship a of an author of a paper is calculated for the alphabetical order of authors as

$$a = \frac{1-b}{n} + b\,A_C \qquad (1)$$

and for the non-alphabetical order of authors as

$$a = \frac{1-2b}{n} + b\,A_F + b\,A_C \qquad (2)$$

where n is the number of authors of the paper, b is the bonus, $A_F$ is 1 for the first author and 0 for the other authors, and $A_C$ is 1 for the corresponding author and 0 for the other authors. The bonus b can range from 0 (the simple fractional scheme) to 0.5 (the full credit is distributed between the first and corresponding authors). Eqs (1) and (2) ensure that the sum of the authorships of all authors of any individual paper equals 1.
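A direct transcription of Eqs (1) and (2) into code might look as follows (a sketch; the function and variable names are mine):

```python
# Authorship weight of one author of an n-author paper (Eqs (1) and (2)).
def authorship(n, b, is_first, is_corresponding, alphabetical):
    A_F = 1 if is_first else 0
    A_C = 1 if is_corresponding else 0
    if alphabetical:
        return (1 - b) / n + b * A_C            # Eq (1)
    return (1 - 2 * b) / n + b * A_F + b * A_C  # Eq (2)

# Two authors, b = 0.2, non-alphabetical, first author also corresponding:
print(authorship(2, 0.2, True, True, False))    # -> 0.7
print(authorship(2, 0.2, False, False, False))  # -> 0.3
```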

Considering the authorship-weighted scheme, the number of published papers N and the number of citations C are replaced by the weighted number of papers $N_W$ and the weighted number of citations $C_W$:

$$N_W = \sum_{r=1}^{N} a_r, \qquad (3)$$

$$C_W = \sum_{r=1}^{N} a_r c_r. \qquad (4)$$

Furthermore, the original h-index, defined as

$$h = \max \left\{ r : c_r \ge r \right\}, \qquad (5)$$

is replaced by the ‘authorship-weighted’ (or simply ‘weighted’) $h_W$-index, defined as

$$h_W = A_{r_W}, \qquad (6)$$

where $r_W$ is the number of papers contributing to the $h_W$-index,

$$r_W = \max \left\{ r : c_r \ge A_r \right\}. \qquad (7)$$
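The weighted quantities of Eqs (3)-(7) follow directly from the two profiles. A minimal sketch, assuming the profiles are given as Python lists with $c_r$ already ranked in decreasing order:

```python
from itertools import accumulate

# Sketch of Eqs (3)-(7): weighted counts and the h_W-index.
def weighted_indices(c, a):
    """c: rank-citation profile (decreasing); a: matching authorship weights."""
    N_W = sum(a)                                          # Eq (3)
    C_W = sum(ci * ai for ci, ai in zip(c, a))            # Eq (4)
    h = sum(1 for r, ci in enumerate(c, 1) if ci >= r)    # Eq (5)
    A = list(accumulate(a))                               # cumulative profile A_r
    # c decreases and A increases, so c_r >= A_r holds for a prefix of ranks.
    r_W = sum(1 for ci, Ai in zip(c, A) if ci >= Ai)      # Eq (7)
    h_W = A[r_W - 1] if r_W else 0.0                      # Eq (6): h_W = A_{r_W}
    return N_W, C_W, h, h_W
```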

The weighted numbers of publications and citations and the $h_W$-index are no longer integers but positive real numbers. The meaning of the $h_W$-index is graphically illustrated in Fig 1. For rank-citation profiles with single-author publications only, the h-index and the $h_W$-index yield identical values. For highly collaborative authors, the two indices can differ remarkably. The collaboration of authors might be quantified using the collaboration index c,

$$c = \max \left\{ r : \tilde{n}_r \ge r \right\}, \qquad (8)$$

where $\tilde{n}_r$ is the author-number profile $n_r$ sorted in descending order. The definition of the c-index is analogous to that of the h-index, so a c-index of 5 means that a researcher published 5 papers with at least 5 authors. The c-index in (8) is defined using the whole author-number profile, but it can also be restricted to the papers which contribute to the original or weighted h-index [18]. Research teams or research institutions can be ranked in a similar way as individual researchers. The weighted numbers of publications and citations of a research team formed by M scientists are obtained by summing the weighted numbers of publications and citations of its members, and the weighted $h_W^{\mathrm{team}}$-index is calculated analogously as for the individual members,

$$h_W^{\mathrm{team}} = A_{r_W}^{\mathrm{team}}, \qquad (9)$$

where $r_W^{\mathrm{team}}$ is the number of papers contributing to the $h_W^{\mathrm{team}}$-index,

$$r_W^{\mathrm{team}} = \max \left\{ r : c_r^{\mathrm{team}} \ge A_r^{\mathrm{team}} \right\}, \qquad (10)$$

where $c_r^{\mathrm{team}}$ is the rank-citation profile of the team, formed by gathering the rank-citation profiles of its individual members and ordering them in a decreasing sequence, and $A_r^{\mathrm{team}}$ is the corresponding cumulative authorship profile.
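The c-index and the team index of Eqs (8)-(10) can be sketched in the same style (the merging of member profiles follows the description above; names are mine):

```python
from itertools import accumulate

def c_index(n_authors):
    """Eq (8): the largest c such that c papers have at least c authors."""
    ranked = sorted(n_authors, reverse=True)   # descending profile
    return sum(1 for r, n in enumerate(ranked, 1) if n >= r)

def team_h_w(member_profiles):
    """Eqs (9)-(10): member_profiles is a list, one entry per member, of
    (citations, authorship) pairs per paper; all papers are gathered and
    re-ranked by citations in decreasing order."""
    papers = sorted((p for m in member_profiles for p in m), reverse=True)
    c = [ci for ci, _ in papers]
    A = list(accumulate(ai for _, ai in papers))
    r_W = sum(1 for ci, Ai in zip(c, A) if ci >= Ai)
    return A[r_W - 1] if r_W else 0.0
```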

Fig 1.

Definition of the $h_W$-index for (a) single-author publications, and (b) a mix of single- and multi-author publications. Quantity $c_r$ is the rank-citation profile of a scientist (or simply ‘citations’); $A_r$ is the corresponding cumulative authorship profile of the published papers (or simply ‘rank’). The blue dot shows the threshold value controlling the $h_W$-index. The $h_W$-index in (a) is identical to the standard h-index.

https://doi.org/10.1371/journal.pone.0195509.g001

Synthetic example

The ranking of researchers and research teams is illustrated with the following example. We assume three research groups, A, B and C, each with 10 researchers. For simplicity, the researchers have an identical rank-citation profile, $c_r$ = (70, 50, 35, 25, 20, 17, 15, 12, 11, 10, 9, 7, 6, 4, 3, 2, 2, 1, 1, 0). Hence, each researcher is an author/co-author of 20 publications with a total number of 300 citations. As indicated by $c_r$, the most cited paper has 70 citations and the least cited paper has no citations. The A-researchers are single authors, the B-researchers publish papers with 5 co-authors and the C-researchers publish papers with 10 co-authors (see Table 3). Hence, the productivity of the A-researchers is 5 times higher than that of the B-researchers and 10 times higher than that of the C-researchers. For papers with 5 co-authors, 3 co-authors are external (i.e., not members of the team); for papers with 10 co-authors, 8 co-authors are external. The authors’ names in all multi-author papers are in alphabetical order. The corresponding authors are the external researchers.
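Using the weighted_indices sketch given earlier, this set-up can be reproduced in a few lines. With b = 20% and external corresponding authors, Eq (1) gives every team member the weight $a_r = (1-b)/n$ (the exact values of Table 3 are not asserted here, only the computation):

```python
c_r = [70, 50, 35, 25, 20, 17, 15, 12, 11, 10,
       9, 7, 6, 4, 3, 2, 2, 1, 1, 0]
b = 0.2
for team, n in [("A", 1), ("B", 5), ("C", 10)]:
    a = (1 - b) / n if n > 1 else 1.0   # Eq (1) with A_C = 0 (external CA)
    print(team, weighted_indices(c_r, [a] * len(c_r)))
```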

Table 3. Standard, fractional and authorship-weighted ranking of researchers.

https://doi.org/10.1371/journal.pone.0195509.t003

Fig 2 (upper panels) and Table 3 illustrate the differences between the full, fractional and authorship-weighted counting in the ranking of researchers (the h-index, hm-index and hW-index, respectively). The full counting yields the same h-index for the researchers of all three teams, irrespective of their actual workload in producing the papers. By contrast, the fractional and authorship-weighted counting are sensitive to the number of co-authors of the published papers. Obviously, the more co-authors a paper has, the less it contributes to the ranking. This is reflected in the publications $N_W$ (or $N_m$), the citations $C_W$ (or $C_m$) and the $h_W$-index ($h_m$-index). Fig 2 (lower panels) demonstrates the confusing results produced by applying the full counting criteria to the evaluation of research teams. Even though team C has half the weighted number of publications and citations of team B, the full-counting values are identical for both teams. By contrast, the authorship-weighted quantities distinguish the productivity of all three teams more properly.

Fig 2.

The rank-citation profiles for individual researchers (upper panels) and research teams (lower panels). Left: the B-researchers; right: the C-researchers. Light grey: full counting; dark grey: authorship-weighted counting. The red line marks the threshold defining the h-index. The plots are analogous to those in Fig 1 except for the citations axis, which is logarithmic. Consequently, the threshold line becomes curved.

https://doi.org/10.1371/journal.pone.0195509.g002

By calculating the $h_W$-index for teams with a varying number of researchers, we can also address the problem of how to build a team with the highest index. Fig 3 shows the team $h_W$-index as a function of the number of A-, B- and C-researchers in the team for two scenarios. First, we assume teams formed and gradually extended by including either A-, B- or C-researchers, so the teams are homogeneous, consisting of researchers with the same authorship profile. Second, the teams initially have three A-researchers (single-author researchers) who form the core of the team, and the teams are then extended by including either A-, B- or C-researchers. Hence, the initial $h_W$-index of the team core is 17 in all cases.
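The first scenario can be reproduced with the team_h_w sketch given earlier, reusing c_r and b from the previous snippet and growing a homogeneous team profile by profile (the printed values are illustrative of the trend, not asserted from Fig 3; for three A-researchers the computation reproduces the core index 17 quoted above):

```python
# Homogeneous teams of A-, B- or C-researchers of increasing size M.
profiles = {"A": 1.0, "B": (1 - b) / 5, "C": (1 - b) / 10}
for team, a in profiles.items():
    member = list(zip(c_r, [a] * len(c_r)))
    print(team, [round(team_h_w([member] * M), 1) for M in (1, 3, 5, 10)])
```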

Fig 3. The team hW-index as a function of the number of researchers in the team.

(a) The teams are formed by researchers with identical citation profiles: A (black line), B (blue line) or C (red line), respectively. (b) The teams are initially formed by three core A-researchers. The teams are further gradually extended either by the A-researchers (black line), the B-researchers (blue line) or the C-researchers (red line).

https://doi.org/10.1371/journal.pone.0195509.g003

In the case of the homogeneous teams, the $h_W$-index increases linearly with the number of researchers (Fig 3A). The rate of the increase is, however, different: single authors improve the team $h_W$-index at the steepest rate. When researchers who publish papers with a higher number of co-authors are included, the team $h_W$-index increases at a lower rate. In the case of the teams formed initially by the three core A-researchers, the contributions of including A-, B- or C-researchers to the team ranking are even more distinct. The increase of the team $h_W$-index is much lower when the teams are extended by B- or C-researchers. It may even happen that including a new B- or C-researcher does not change the team index or slightly decreases it (Fig 3B, red and blue curves).

Ranking of selected astronomers and astrophysicists

The differences between the standard full counting and the authorship-weighted counting schemes are exemplified using selected highly-cited researchers working in astronomy and astrophysics. This scientific discipline is particularly suitable for this purpose because it offers a variety of research profiles, from predominantly single authors to highly collaborative authors publishing as members of large research teams. We selected the following 9 reputed researchers: M. Colless, B.T. Draine, A.V. Filippenko, S.W. Hawking, Z. Ivezic, J.A. Peacock, P.J.E. Peebles, K.S. Thorne, and D.G. York, who have h-indices in the range from 66 to 109 and collaboration indices (c-indices) from 3 to 95 according to the Web of Science (WOS) in December 2016 (see Table 4). The researchers with a low c-index are mainly theorists (Hawking, Peebles), while those with a high c-index are partially or dominantly involved in large-scale experiments (Ivezic, Thorne, York). The selection of the researchers is subjective, with no intention to produce statistically relevant results applicable to all researchers in astronomy and astrophysics. The data sample is designed just to exemplify how large the differences between the various ranking schemes can be for researchers with a high h-index but a diverse c-index.

Table 4. Standard, fractional and authorship-weighted ranking of selected astronomers and astrophysicists.

https://doi.org/10.1371/journal.pone.0195509.t004

Figs 4 and 5 show the rank-citation profile and the histogram of the number of papers as a function of the number of authors (the collaboration profile) for four selected researchers. The profiles show that the differences between the individual researchers are substantial. For some researchers, the distribution of the number of co-authors has distinct peaks caused by a high number of publications reporting the results of a specific large-scale experiment (e.g., the papers with 28–30 co-authors in the histograms of M. Colless and J.A. Peacock are related to the 2dF Galaxy Redshift Survey, see http://www.2dfgrs.net/). The collaboration profiles are clipped at a maximum of 40 co-authors, but some researchers published papers with a remarkably higher number of co-authors.

Fig 4.

The rank-citation profiles (left) and the collaboration profiles (right) for Peacock and Colless. Left: light grey, full counting; dark grey, authorship-weighted counting. The red line marks the threshold defining the h-index. The axis showing the number of authors is clipped at a maximum value of 40 authors. The collaboration profiles distinguish whether the researcher is the first author (in red) or not (in grey). The data are taken from the Web of Science (December 2016).

https://doi.org/10.1371/journal.pone.0195509.g004

Fig 5.

The rank-citation profiles (left) and the collaboration profiles (right) for Hawking and Draine. For details, see the caption of Fig 4.

https://doi.org/10.1371/journal.pone.0195509.g005

Since the number of co-authors is fully ignored by the standard h-index, its value is overestimated for highly collaborative researchers. This becomes apparent when the more appropriate authorship-weighted ranking scheme is applied. Figs 4 and 5 (left-hand panels) show a comparison of both ranking schemes. The reduction of the h-index is enormous for some researchers. The differences between the full, fractional and authorship-weighted counting are summarized in Table 4. The authorship-weighted ranking is calculated for a bonus b of 20% of the authorship, applied equally to the first and corresponding authors. Table 4 indicates:

  • If the full counting is substituted by the authorship-weighted counting, the reduction of the h-index ranges from 15.6 to 90.3. This corresponds to a relative reduction of 23% to 84% of the original h-index. The highest discrepancy is for Z. Ivezic, whose ranking drops from 107 to 16.7. Similar values are obtained when the full counting is substituted by the fractional counting.
  • The full counting scheme completely fails for 5 of the 9 researchers, for whom the difference between the standard and weighted h-index is higher than 60%.
  • The differences between the simple fractional scheme and the authorship-weighted scheme are rather minor, amounting to 4.4 at most. Calculations not shown here indicate that the authorship-weighted ranking usually decreases with increasing bonus, except for S.W. Hawking and B.T. Draine, who published a high number of papers as first authors.

The discrepancy between the standard and authorship-weighted ranking is illustrated in Fig 6 (upper panel). The differences between the h-index and the $h_W$-index are so high for some researchers that the h-index cannot be considered even a rough indication of the quality of a researcher. The standard ranking is simply wrong. Fig 6 (lower panel) compares the differences between the standard and weighted ranking (in grey) and the differences between the weighted and fractional ranking (in red). The small differences between the fractional and weighted schemes confirm that: (1) applying even the simple fractional scheme leads to a significant improvement of the h-index, and (2) developing authorship-weighted schemes [8,23,30,39,40,44,45] more complicated than those analysed here is not very reasonable because the corrections will be minor.

Fig 6. A comparison of the standard h-index and the fractional and authorship-weighted indices for selected highly-cited astronomers and astrophysicists.

Upper panel: the standard h-index (grey) and the authorship-weighted $h_W$-index (red). Lower panel: the differences between the standard h-index and the authorship-weighted $h_W$-index (grey) and between the fractional $h_m$-index and the authorship-weighted $h_W$-index (red). Absolute values of the differences are shown. The lower panel indicates a good consistency between $h_m$ and $h_W$.

https://doi.org/10.1371/journal.pone.0195509.g006

Finally, we calculate the $h_W$-index for teams with a varying number of researchers whose citation and authorship profiles are identical with those of the selected astronomers and astrophysicists. Fig 7 shows the increase of the team $h_W$-index with the number of team members for two different citation profiles. As in the synthetic example (Fig 3), the less collaborative researchers contribute more to the team index than the more collaborative researchers. Even though the $h_W$-indices of the less and more collaborative researchers are similar, the increase of the team index with the number of researchers can be remarkably different.

Fig 7. The team hW-index as a function of the number of researchers in two teams.

The teams are formed by researchers with citation profiles identical to those of Draine and Filippenko.

https://doi.org/10.1371/journal.pone.0195509.g007

Discussion and conclusions

The current policy of evaluating scientific output adopted by the Web of Science (Thomson Reuters) and Scopus (Elsevier) databases is unsatisfactory because it ignores the problem of authorship in multi-author publications. Although many authors have pointed to this flaw in the ranking of scientists and proposed alternative schemes [4,16,21,22,26,27,36,43,44,46,47], common practice has not changed. This is unfortunate because fair or unfair ranking has a feedback effect on science. Fair ranking of researchers can positively influence their publication habits. A fair distribution of authorship among co-authors can automatically suppress the tendency towards an unjustified inflation of co-authors, because authors will be more reluctant to share the authorship with colleagues not truly involved in the research or in preparing the publications. The fair distribution of authorship will also remove the existing evident disproportions between the rankings of more and less collaborative researchers.

The synthetic examples and the analysis of real data show that substituting the full counting by the fractional or authorship-weighted counting systematically reduces the h-index of researchers and research teams. This reduction varies from 20% to 80% for the selected highly-cited astronomers and astrophysicists, who are characterized by collaboration indices from 3 to 95. The h-index is reduced from 70 to 55 (Peebles), but also from 107 to 17 (Ivezic). These enormous disproportions point to a complete failure of the ranking based on full counting when applied to researchers with a high collaboration index. The disproportions are removed by applying a more appropriate counting scheme such as the fractional or authorship-weighted scheme. Applying the fractional scheme is elementary, and the improvement in the ranking is enormous. The authorship-weighted scheme is even more accurate because it is capable of distributing the authorship credit non-uniformly, for example, by allocating extra credit to the first and/or corresponding authors. However, the analysis of real data shows that the improvement of the authorship-weighted scheme over the fractional scheme is not as high as one might expect. Hence, the first priority for the fair ranking of researchers is to substitute the standard scheme by the fractional scheme in scientific publication databases such as the Web of Science (Thomson Reuters) or Scopus (Elsevier). At later stages, a simple authorship-weighted scheme, as described in this paper, can be adopted for more accurate evaluations.

Acknowledgments

I thank three anonymous reviewers for their helpful comments. All data used in the paper are available from the Web of Science database (https://webofknowledge.com/).

References

  1. Hirsch JE. An index to quantify an individual’s scientific research output. Proc Nat Acad Sci. 2005; 102(46): 16569–16572. pmid:16275915
  2. Rousseau R. A note on the interpolated or real-valued h-index with a generalization for fractional counting. ASLIB J Inform Manag. 2014; 66(1): 2–12.
  3. Egghe L. Theory and practise of the g-index. Scientometrics. 2006; 69(1): 131–152.
  4. Egghe L. Mathematical theory of the h- and g-index in case of fractional counting of authorship. J Am Soc Inform Sci Tech. 2008; 59: 1608–1616.
  5. Jin BH, Liang L, Rousseau R, Egghe L. The R- and AR-indices: complementing the h-index. Chin Sci Bull. 2007; 52: 855–863.
  6. Van Eck NJ, Waltman L. Generalizing the h- and g-indices. J Informetrics. 2008; 2(4): 263–271.
  7. Alonso S, Cabrerizo FJ, Herrera-Viedma E, Herrera F. H-index: A review focused in its variants, computation and standardization for different scientific fields. J Informetr. 2009; 3(4): 273–289.
  8. Stallings J, Vance E, Yang J, Vannier MW, Liang J, Pang L, et al. Determining scientific impact using a collaboration index. Proc Nat Acad Sci. 2013; 110(24): 9680–9685. pmid:23720314
  9. Yong A. Critique of Hirsch’s citation index: A combinatorial Fermi problem. Notices Am Math Soc. 2014; 61(9): 1040–1050.
  10. Miskiewicz J. Effects of publications in proceedings on the measure of the core size of coauthors. Physica A. 2013; 392: 5119–5131.
  11. Herteliu C, Ausloos M, Ileanu BV, Rotundo G, Andrei T. Quantitative and qualitative analysis of editor behaviour through potentially coercive citations. Publications. 2017; 5(2): 15.
  12. Wuchty S, Jones BF, Uzzi B. The increasing dominance of teams in production of knowledge. Science. 2007; 316: 1036–1039. pmid:17431139
  13. Waltman L. An empirical analysis of the use of alphabetical authorship in scientific publishing. J Informetr. 2012; 6(4): 700–711.
  14. Papatheodorou SI, Trikalinos TA, Ioannidis JPA. Inflated numbers of authors over time have not been just due to increasing research complexity. J Clinic Epid. 2008; 61: 546–551.
  15. Kwok LS. The White Bull effect: abusive coauthorship and publication parasitism. J Med Ethics. 2005; 31: 554–556. pmid:16131560
  16. Sekercioglu CH. Quantifying coauthor contributions. Science. 2008; 322(5900): 371.
  17. Aad G, Abbott B, Abdallah J, Abdinov O, Aben R, Abolins M, et al. Combined measurement of the Higgs boson mass in pp collisions at root s = 7 and 8 TeV with the ATLAS and CMS experiments. Phys Rev Lett. 2015; 114: 191803. pmid:26024162
  18. Batista PD, Campiteli MG, Kinouchi O, Martinez AS. Is it possible to compare researchers with different scientific interests? Scientometrics. 2006; 68(1): 179–189.
  19. Lindsey D. Production and citation measures in the sociology of science: The problem of multiple authorship. Soc Stud Sci. 1980; 10: 145–162.
  20. Lindsey D. Further evidence for adjusting for multiple authorship. Scientometrics. 1982; 4(5): 389–395.
  21. Schreiber M. To share the fame in a fair way, hm modifies h for multi-authored manuscripts. New J Phys. 2008; 10(040201): 1–9.
  22. Schreiber M. A modification of the h-index: The hm-index accounts for multi-authored manuscripts. J Informetr. 2008; 2: 211–216.
  23. Hirsch JE. An index to quantify an individual’s scientific research output that takes into account the effect of multiple coauthorship. Scientometrics. 2010; 85(3): 741–754.
  24. Waltman L, Van Eck NJ. Field-normalized citation impact indicators and the choice of an appropriate counting method. J Informetr. 2015; 9: 872–894. arXiv:1501.04431.
  25. Egghe L, Rousseau R, Van Hooydonk G. Methods for accrediting publications to authors or countries: Consequences for evaluation studies. J Am Soc Inform Sci. 2000; 51(2): 145–157.
  26. Tscharntke T, Hochberg ME, Rand TA, Resh VH, Krauss J. Author sequence and credit for contributions in multiauthored publications. PLOS Biology. 2007; 5(1): e18. pmid:17227141
  27. Waltman L. A review of the literature on citation impact indicators. 2015. arXiv:1507.02099.
  28. Hodge SE, Greenberg DA. Publication credit. Science. 1981; 213: 950.
  29. Hagen NT. Harmonic allocation of authorship credit: Source-level correction of bibliometric bias assures accurate publication and citation analysis. PLoS ONE. 2008; 3(12): e4021. pmid:19107201
  30. Hagen NT. Harmonic publication and citation counting: sharing authorship credit equitably–not equally, geometrically or arithmetically. Scientometrics. 2010; 84(3): 785–793. pmid:20700372
  31. Jian D, Xiaoli T. Perceptions of author order versus contribution among researchers with different professional ranks and the potential of harmonic counts for encouraging ethical co-authorship practices. Scientometrics. 2013; 96(1): 277–295.
  32. Lange LL. Citation counts of multi-authored papers—First-named authors and further authors. Scientometrics. 2001; 52(3): 457–470.
  33. Lin CS, Huang MH, Chen DZ. The influences of counting methods on university rankings based on paper count and citation count. J Informetr. 2013; 7(3): 611–621.
  34. Zhang CT. A proposal for calculating weighted citations based on author rank. Eur Mol Biol Org Rep. 2009; 10(5): 416–417.
  35. Romanovsky AA. Revised h index for biomedical research. Cell Cycle. 2012; 11(22): 4118–4121. pmid:22983124
  36. Kosmulski M. The order in the lists of authors in multi-author papers revisited. J Informetr. 2012; 6(4): 639–644.
  37. Hu X, Rousseau R, Chen J. In those fields where multiple authorship is the rule, the h-index should be supplemented by role-based h-indices. J Inform Sci. 2010; 36(1): 73–85.
  38. Huang MH, Lin CS, Chen DZ. Counting methods, country rank changes, and counting inflation in the assessment of national research productivity and impact. J Am Soc Inform Sci Tech. 2011; 62(12): 2427–2436.
  39. Liu XZ, Fang H. Fairly sharing the credit of multi-authored papers and its application in the modification of h-index and g-index. Scientometrics. 2012; 91(1): 37–49.
  40. Liu XZ, Fang H. Modifying h-index by allocating credit of multi-authored papers whose author names rank based on contribution. J Informetr. 2012; 6(4): 557–565.
  41. Kennedy D. Multiple authors, multiple problems. Science. 2003; 301: 733. pmid:12907762
  42. Hu X. Loads of special authorship functions: linear growth in the percentage of ‘equal first authors’ and corresponding authors. J Am Soc Inform Sci Tech. 2009; 60(11): 2378–2381.
  43. Schreiber M. A case study of the modified Hirsch index hm accounting for multiple coauthors. J Am Soc Inform Sci Tech. 2009; 60(6): 1274–1282.
  44. Ausloos M. A scientometrics law about co-authors and their ranking: the co-author core. Scientometrics. 2013; 95: 895–909.
  45. Aziz NA, Rozing MP. Profit (p)-index: The degree to which authors profit from co-authors. PLoS ONE. 2013; 8(4): e59814. pmid:23573211
  46. Galam S. Tailor based allocations for multiple authorship: a fractional gh-index. Scientometrics. 2011; 89: 365–379.
  47. Abramo G, D’Angelo CA, Rosati F. The importance of accounting for the number of co-authors and their order when assessing research performance at the individual level in the life sciences. J Informetr. 2013; 7: 198–208.