
The Altmetrics Collection

  • Jason Priem,

    Affiliation School of Information & Library Science, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, United States of America

  • Paul Groth,

    Affiliation Department of Computer Science and The Network Institute, VU University Amsterdam, Amsterdam, The Netherlands

  • Dario Taraborelli

    Affiliation The Wikimedia Foundation, San Francisco, California, United States of America


What paper should I read next? Who should I talk to at a conference? Which research group should get this grant? Researchers and funders alike must make daily judgments on how best to spend their limited time and money, judgments that are becoming increasingly difficult as the volume of scholarly communication increases. Not only does the number of scholarly papers continue to grow; it is also joined by new forms of communication, from data publications to microblog posts.

To deal with incoming information, scholars have always relied upon filters. At first these filters were manually compiled compendia and corpora of the literature. But by the mid-20th century, filters built on manual indexing began to break under the weight of booming postwar science production. Garfield [1] and others pioneered a solution: automated filters that leveraged scientists' own impact judgments, aggregating citations as “pellets of peer recognition” [2].

These citation-based filters have grown dramatically in importance and have become the foundation of how research impact is measured. But, like manual indexing 60 years ago, they may today be failing to keep up with the literature’s growing volume, velocity, and diversity [3].

Citations are heavily gamed [4]–[6], are painfully slow to accumulate [7], and overlook increasingly important societal and clinical impacts [8]. Most importantly, they overlook new scholarly forms like datasets, software, and research blogs that fall outside the scope of citable research objects. In sum, citations reflect only formal acknowledgment, and thus provide only a partial picture of the science system [9]. Scholars may discuss, annotate, recommend, refute, comment on, read, and teach a new finding before it ever appears in the formal citation registry. We need new mechanisms to create a subtler, higher-resolution picture of the science system.

The Quest for Better Filters

The scientometrics community has not been blind to the limitations of citation measures, and has collectively proposed methods to gather evidence of broader impacts and provide more detail about the science system: tracking acknowledgements [10], patents [11], mentorships [12], news articles [8], usage in syllabuses [13], and many others, separately and in various combinations [14]. The emergence of the Web, a “nutrient-rich space for scholars” [15], has held particular promise for new filters and lenses on scholarly output. Webometrics researchers have uncovered evidence of informal impact by examining networks of hyperlinks and mentions on the broader Web [16]–[18]. An important strand of webometrics has also examined the properties of article download data [7], [19], [20].

The last several years, however, have presented a promising new approach to gathering fine-grained impact data: tracking large-scale activity around scholarly products in online tools and environments. These tools and environments include, among others:

  • social media like Twitter and Facebook
  • online reference managers like CiteULike, Zotero, and Mendeley
  • collaborative encyclopedias like Wikipedia
  • blogs, both scholarly and general-audience
  • scholarly social networks like ResearchGate
  • conference organization sites

Growing numbers of scholars are using these and similar tools to mediate their interaction with the literature. In doing so, they leave valuable tracks behind them: tracks with the potential to show informal paths of influence with unprecedented speed and resolution. Many of these tools offer open APIs, supporting large-scale, automated mining of online activities and conversations around research objects [21].
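To make the kind of mining described above concrete, the sketch below aggregates mention events around a research object by source. It is purely illustrative: the event records, field names (`doi`, `source`), and values are assumptions invented for the example, not the schema of any actual tool's API.

```python
import json
from collections import Counter

# Hypothetical events, shaped roughly like what an online tool's open API
# might return. The DOIs, sources, and field names here are illustrative
# assumptions, not any real service's response format.
SAMPLE_EVENTS = json.loads("""
[
  {"doi": "10.1371/journal.pone.0000000", "source": "twitter"},
  {"doi": "10.1371/journal.pone.0000000", "source": "mendeley"},
  {"doi": "10.1371/journal.pone.0000000", "source": "twitter"},
  {"doi": "10.1371/journal.pone.0000001", "source": "wikipedia"}
]
""")

def mentions_by_source(events, doi):
    """Count online mentions of one research object, grouped by source."""
    return Counter(e["source"] for e in events if e["doi"] == doi)

counts = mentions_by_source(SAMPLE_EVENTS, "10.1371/journal.pone.0000000")
# counts maps each source to its number of mentions for this DOI
```

In practice such events would be fetched from each tool's API and normalized before aggregation; the counting step, however, stays this simple.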

Altmetrics [22], [23] is the study and use of scholarly impact measures based on activity in online tools and environments. The term has also been used to describe the metrics themselves; in the plural, one might speak of a “set of new altmetrics.” Altmetrics is in most cases a subset of both scientometrics and webometrics; it is a subset of the latter in that it focuses more narrowly on scholarly influence as measured in online tools and environments, rather than on the Web more generally.

Altmetrics may support finer-grained maps of science, broader and more equitable evaluations, and improvements to the peer-review system [24]. On the other hand, the use and development of altmetrics should be pursued with appropriate scientific caution. Altmetrics may face attempts at manipulation similar to those Google must deal with in web search ranking. Addressing such manipulation may, in turn, affect the transparency of altmetrics. New and complex measures may distort our picture of the science system if not rigorously assessed and correctly understood. Finally, altmetrics may further promote an evaluation system for scholarship that many argue has already become overly focused on metrics.

Scope of this Collection

The goal of this collection is to gather an emerging body of research for the further study and use of altmetrics. We believe it is greatly needed, as important questions regarding altmetrics’ prevalence, validity, distribution, and reliability remain incompletely answered. Importantly, the present collection, which has the virtue of being online and open access, allows altmetrics researchers to experiment on themselves.

The collection’s scope includes:

  • Statistical analysis of altmetrics data sources, and comparisons to established sources
  • Metric validation, and identification of biases in measurements
  • Validation of models of scientific discovery/recommendation based on altmetrics
  • Qualitative research describing the scholarly use of online tools and environments
  • Empirically-supported theory guiding altmetrics’ use
  • Other research relating to scholarly impact in online tools and environments.

The current collection includes articles that address many of these areas. It will publish new research on an ongoing basis, and we hope to see additional contributions appear in the coming months. We look forward to building a foundation of early research to support this new field.

Author Contributions

Wrote the paper: PG JP DT.


  1. Garfield E (1955) Citation indexes to science: a new dimension in documentation through association of ideas. Science 123: 108–111.
  2. Merton RK (1988) The Matthew Effect in Science, II. ISIS 79: 606–623.
  3. Tenopir C, King D (2008) Electronic journals and changes in scholarly article seeking and reading patterns. D-Lib Magazine 14.
  4. Falagas M, Alexiou V (2008) The top-ten in journal impact factor manipulation. Archivum Immunologiae et Therapiae Experimentalis 56: 223–226.
  5. Wilhite AW, Fong EA (2012) Coercive Citation in Academic Publishing. Science 335: 542–543.
  6. The PLoS Medicine Editors (2006) The Impact Factor Game. PLoS Med 3: e291.
  7. Brody T, Harnad S, Carr L (2006) Earlier Web usage statistics as predictors of later citation impact. Journal of the American Society for Information Science and Technology 57: 1060–1072.
  8. Lewison G (2002) From biomedical research to health improvement. Scientometrics 54: 179–192.
  9. de Solla Price DJ, Beaver D (1966) Collaboration in an invisible college. American Psychologist 21: 1011–1018.
  10. Cronin B, Overfelt K (1994) The scholar’s courtesy: A survey of acknowledgement behaviour. Journal of Documentation 50: 165–196.
  11. Pavitt K (1985) Patent statistics as indicators of innovative activities: Possibilities and problems. Scientometrics 7: 77–99.
  12. Marchionini G, Solomon P, Davis C, Russell T (2006) Information and library science MPACT: A preliminary analysis. Library and Information Science Research 28: 480–500.
  13. Kousha K, Thelwall M (2008) Assessing the impact of disciplinary research on teaching: An automatic analysis of online syllabuses. Journal of the American Society for Information Science and Technology 59: 2060–2069.
  14. Martin BR, Irvine J (1983) Assessing basic research: Some partial indicators of scientific progress in radio astronomy. Research Policy 12: 61–90.
  15. Cronin B, Snyder HW, Rosenbaum H, Martinson A, Callahan E (1998) Invoked on the Web. Journal of the American Society for Information Science 49: 1319–1328.
  16. Almind TC, Ingwersen P (1997) Informetric Analyses on the World Wide Web: Methodological Approaches to “WEBOMETRICS”. Journal of Documentation 53: 404–426.
  17. Thelwall M, Vaughan L, Björneborn L (2005) Webometrics. Annual Review of Information Science and Technology 39.
  18. Vaughan L, Shaw D (2005) Web citation data for impact assessment: a comparison of four science disciplines. Journal of the American Society for Information Science 56: 1075–1087.
  19. Bollen J, Van de Sompel H, Hagberg A, Chute R (2009) A principal component analysis of 39 scientific impact measures. PLoS ONE 4. doi:10.1371/journal.pone.0006022.
  20. Kurtz MJ, Eichhorn G, Accomazzi A, Grant CS, Demleitner M, et al. (2005) The bibliometric properties of article readership information. Journal of the American Society for Information Science 56: 111–128.
  21. Priem J, Hemminger BH (2010) Scientometrics 2.0: Toward new metrics of scholarly impact on the social Web. First Monday 15.
  22. jasonpriem (2010) I like the term #articlelevelmetrics, but it fails to imply *diversity* of measures. Lately, I’m liking #altmetrics. Accessed 2012 Oct 8.
  23. Priem J, Taraborelli D, Groth P, Neylon C (2010) alt-metrics: a manifesto. Accessed 2011 August 15.
  24. Taraborelli D (2008) Soft peer review: Social software and distributed scientific evaluation. Proceedings of the 8th International Conference on the Design of Cooperative Systems (COOP ’08). Carry-Le-Rouet.