
Time to publish? Turnaround times, acceptance rates, and impact factors of journals in fisheries science

  • Brendan J. Runde

    Roles Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Software, Validation, Visualization, Writing – original draft, Writing – review & editing

    Brendan.Runde@noaa.gov

    Current address: Southeast Fisheries Science Center, National Marine Fisheries Service, Beaufort, North Carolina, United States of America

    Affiliation Department of Applied Ecology, North Carolina State University, Morehead City, North Carolina, United States of America

Abstract

Selecting a target journal is a universal decision faced by authors of scientific papers. Components of the decision, including expected turnaround time, journal acceptance rate, and journal impact factor, vary in terms of accessibility. In this study, I collated recent turnaround times and impact factors for 82 journals that publish papers in the field of fisheries sciences. In addition, I gathered acceptance rates for the same journals when possible. Findings indicated clear among-journal differences in turnaround time, with median times-to-publication ranging from 79 to 323 days. There was no clear correlation between turnaround time and acceptance rate nor between turnaround time and impact factor; however, acceptance rate and impact factor were negatively correlated. I found no field-wide differences in turnaround time since the beginning of the COVID-19 pandemic, though some individual journals published significantly more slowly or more quickly during the pandemic. Depending on their priorities, authors choosing a target journal should use the results of this study as guidance toward a more informed decision.

Introduction

Settling on a target journal for a completed scientific manuscript can be a non-scientific process. Some critical elements of the decision are intangible, e.g., attempting to reach a certain target audience or how well the paper “fits” within the scope of the journal [1–3]. Others, such as turnaround time, acceptance rate, and journal impact, can be measured, but (other than impact) these metrics are often challenging to locate, leading authors to make decisions without full information [3, 4].

Timeliness of publication has been reported as among the most important factors in the choice of a target journal [4–8]. Prolonged peer review and/or production can be a major hindrance to authors [9]. Aarssen et al. [4] surveyed authors of ecological papers and found that 72.2% considered likelihood of a rapid decision a “very important” or “important” factor in choosing a journal. In some fields, research outcomes may be time-sensitive, so lengthy review can render results obsolete even before publication [10]. Desires and expectations for turnaround time are often not met: Mulligan et al. [11] found that 43% of survey respondents rated “time-to-first-decision” of their most recent article as “slow” or “very slow.” Allen et al. [12] found that authors in the life sciences expect peer review to take less than 30 days (although this may be unrealistic). Moreover, Nguyen et al. [7] conducted a survey of authors in conservation biology in which the vast majority (86%) of respondents reported that their perceived optimal duration for peer review was eight weeks or less, though the peer review time they experienced averaged 14.4 weeks. Over half of the respondents in Nguyen et al. [7] believed that lengthy peer review can have a detrimental impact on their careers, including individuals who reported that the lack of timely publication obstructed their acceptance into educational institutions and delayed degree conferral.

Despite the obvious and documented importance of journal turnaround time, published per-journal values are almost non-existent (BR, personal observation). Some journals do publicize “time-to-first-decision” on their (or their publisher’s) webpages (e.g., ICES Journal of Marine Science), but summary statistics of times to acceptance and publication remain generally unavailable to the public. Lewallen and Crane [13] recognized the importance of turnaround time and recommended authors contact potential target journals and request information directly. However, this approach is time-consuming and unlikely to result in universal acquiescence from potential target journals. Moreover, because the duration of the review process is unpredictable, journals are more likely to give an average or a range—as an indicator—rather than guarantee a specific turnaround time (H. Browman, Ed. in Chief, ICES J. Mar. Sci., personal communication).

In many biological journals, individual papers contain metadata that can be used to generate turnaround times. Specifically, a majority of journals in the sciences report “Date Received,” “Date Accepted,” and at least one of “Date Published,” “Date Available,” or similar on the webpage or in the downloadable PDF of each paper (BR, personal observation). Aggregating these dates on a per-journal basis allows for the calculation of turnaround time statistics, which would be extremely valuable to authors seeking to identify an ideal target journal.

In this study, I present summary data on turnaround times for over 80 journals that regularly publish papers in fisheries science and the surrounding disciplines. I restrict my analyses to this field out of personal interest and because cross-discipline comparisons may not be apt. Moreover, my goal in this study is to provide field-specific information, and data on journals in other disciplines were beyond that scope. In addition, I provide per-journal information on impact factor and acceptance rate (where available), which are also key factors in deciding on a target journal [4]. The information presented herein is intended to be used in concert with other factors, including authors’ notions of their paper’s “fit,” to refine the process of selecting a target journal.

Methods

Literature review and journal selection

I began by developing a list of journals that regularly publish papers in fisheries science. On 20 March 2021, I searched the Web of Science Core Collection (Clarivate Analytics; v.5.35) for published articles with “fisheries or fishermen or fishes or fish or fishing” as the topic. These terms were used by Branch and Linnell [14] for a similar purpose. I refined this search by selecting only “Articles” and “Proceedings Papers,” thereby excluding reviews, meeting abstracts, brief communications, et cetera. Finally, I truncated the search to include only documents that were published during 2010–2020. This search resulted in 242,280 published works. Using Web of Science’s “Analyze Results” tool, I compiled a list of source titles (i.e., journals) that had published >400 papers meeting the specifics of my query. This threshold was used because it emerged as a natural break in the list of journals. A total of 85 journals met these requirements. I removed from this list journals that publish strictly in the field of food sciences (e.g., Food Chemistry) as well as hyper-regional journals that may not be of broad interest to authors in the field (though their exclusion is not indicative of their quality). Finally, I added several journals ad hoc that had not met the 400-paper minimum. These additions were included either because of my personal interest (e.g., Marine and Coastal Fisheries and Global Change Biology) or because of their relevance and value in among-journal comparisons (e.g., Science and Nature). After removals and additions, the list included 82 total journals.

Turnaround time.

In the spring of 2021, I accessed webpages of each of the 82 journals selected for inclusion. For each journal, I located publication history information (i.e., dates received, accepted, and published) on the webpages or in the PDFs of individual papers. I tabulated these dates for each paper. Generally, I aspired to gather dates for all papers published from the present day back to at least the beginning of 2018. Because my explicit goal was to compare timeliness of publication only for original research, I excluded papers that were not original research articles wherever possible. Some journals publish a higher proportion of reviews, brief communications, errata, or editorials, all of which likely have a shorter turnaround time than original research. Most journals list the paper type on each document, allowing for easy exclusion of papers that were not original research.

I examined distributions of time-to-acceptance (calculated as date accepted minus date received) and time-to-publication (calculated as date published minus date received). For date published, I used the earliest date after acceptance, i.e., if “date published online” and “date published in an issue” were both provided, I used only “date published online.” Some articles reported acceptance times that are inconsistent with the usual paradigm of peer review (for instance, progressing from received to accepted in 0 days). It is highly unlikely (perhaps impossible) that an unsolicited original research article could be accepted or published within 30 days of submission. I assumed that any implausibly short publication histories either were typographical errors, artifacts of that journal’s methods for tracking papers, or the papers were simply not unsolicited original research articles. I therefore excluded from further analysis any papers with a time-to-acceptance or time-to-publication of fewer than 30 days; by-journal proportions of such papers ranged from zero to 0.06 (Table 1). Similarly, some papers reported publication times on the order of several years or more since receipt. While extreme delays in publication are certainly possible, I assumed that any paper with a time-to-publication of over 600 days was either a typographical error or a result of extenuating circumstances for which the journal staff and reviewers likely played no role. I therefore excluded papers with a time-to-acceptance or a time-to-publication of over 600 days from further analysis; by-journal proportions of such papers ranged from zero to 0.08 (Table 1). Paper-by-paper information on the duration from receipt until reviews are received is generally not available. However, this so-called “time-to-first-decision” is often available on journal websites. Where available, I obtained time-to-first-decision for each journal.
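As an illustration of these calculations, the following minimal R sketch (not the script used in the study; the data frame and column names are hypothetical) computes both durations from tabulated dates and drops records with implausibly short (< 30 d) or extremely long (> 600 d) histories:

# Hypothetical example data; the real dataset held dates for 83,797 papers
papers <- data.frame(
  journal        = c("Journal A", "Journal A", "Journal B"),
  date_received  = as.Date(c("2019-01-10", "2019-02-01", "2019-03-15")),
  date_accepted  = as.Date(c("2019-05-20", "2019-02-10", "2019-09-01")),
  date_published = as.Date(c("2019-06-30", "2019-02-20", "2019-10-05"))
)

# Time-to-acceptance and time-to-publication, in days
papers$time_to_acceptance  <- as.numeric(papers$date_accepted  - papers$date_received)
papers$time_to_publication <- as.numeric(papers$date_published - papers$date_received)

# Exclude publication histories of fewer than 30 days or more than 600 days
papers <- subset(papers,
                 time_to_acceptance  >= 30 & time_to_acceptance  <= 600 &
                 time_to_publication >= 30 & time_to_publication <= 600)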

Table 1. Publication histories for individual papers included in this study.

https://doi.org/10.1371/journal.pone.0257841.t001

I generated summary data for each journal in this study in R [15]. Specifically, I examined median time-to-acceptance, median time-to-publication, median time between acceptance and publication, proportion of papers published in under six months, and proportion of papers published in over one year. For the latter two metrics, I selected six months and one year because, though arbitrary, these durations may be representative of many authors’ notions of short versus long turnaround times. Medians were used because distributions of time-to-acceptance and time-to-publication were usually skewed right (see Results).
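Continuing the hypothetical sketch above (again, not the study's actual code), the per-journal summaries described here can be produced in base R roughly as follows; the six-month and one-year cutoffs are taken here as approximately 183 and 365 days:

# Per-journal summary statistics of turnaround time
summarise_journal <- function(d) {
  data.frame(
    median_days_to_acceptance  = median(d$time_to_acceptance),
    median_days_to_publication = median(d$time_to_publication),
    median_acceptance_to_pub   = median(d$time_to_publication - d$time_to_acceptance),
    prop_under_six_months      = mean(d$time_to_publication < 183),  # ~six months
    prop_over_one_year         = mean(d$time_to_publication > 365)
  )
}

# Apply to each journal and bind the results into one table
do.call(rbind, lapply(split(papers, papers$journal), summarise_journal))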

Some journals included in this study have an extremely broad scope. Specifically, Nature, PeerJ, PLOS ONE, Proceedings of the National Academy of Sciences, and Science publish papers on topics reaching far beyond fisheries or ecology. I hypothesized that turnaround times of fisheries papers published in these journals may be dissimilar to turnaround times for these journals overall since internal editorial structure at the journals may differ among disciplines. I queried Web of Science for “fisheries or fishermen or fishes or fish or fishing” for each of these five journals individually, obtained turnaround times for the resulting papers, and compared median times to publication for fisheries papers and for all papers in each journal.

COVID-19 pandemic effects

During the COVID-19 pandemic, some journals offered leniency to authors and reviewers when setting deadlines to account for the increased probability of extenuating personal or professional circumstances (B. Runde, personal observation). Because of this phenomenon, I hypothesized that turnaround times for each journal may have differed before and after the start of the COVID-19 pandemic. Hobday et al. [16] showed that for seven leading journals in marine science, times in review were shorter in February–June 2020 as compared to the previous year. For each journal in my study, I compared times-to-publication of all papers published during the year prior to the pandemic (1 March 2019–29 February 2020) and the year following the beginning of the pandemic (1 March 2020–28 February 2021). As above, papers were excluded from this analysis if their time-to-publication was extremely short (< 30 days) or extremely long (> 600 days). I conducted two-sample Wilcoxon tests to test for differences in publication times between these two periods. Significance was evaluated at the α = 0.05 level. Analyses were performed in R [15].
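A minimal sketch of this comparison for a single journal, using made-up times-to-publication rather than the study's data, might look like this in R:

# Hypothetical times-to-publication (days) for one journal
pre_pandemic  <- c(120, 150, 180, 200, 95, 160)   # papers published 1 Mar 2019 - 29 Feb 2020
post_pandemic <- c(140, 210, 175, 230, 190, 165)  # papers published 1 Mar 2020 - 28 Feb 2021

# Two-sample Wilcoxon test; a difference is considered significant if p < 0.05
wilcox.test(pre_pandemic, post_pandemic)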

Impact factors

The most widely used metric of impact, impact factor, is considered flawed by some scientists due to the disproportionate influence of review articles and its propensity for manipulation [17–19]. Nonetheless, impact factor is still listed on many journal webpages and is relied on by many authors [20–22]. I obtained impact factor for 2018 (the most recent year for which it was available for all journals) from https://www.resurchify.com/impact-factor.php. Impact factor is calculated as the number of citations received in a given year by all articles published in that journal during the previous two years, divided by the number of articles published in that journal during the previous two years.
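For example (using hypothetical numbers), a journal that published 200 articles across 2016 and 2017, and whose 2016–2017 articles received 500 citations during 2018, would have a 2018 impact factor of 500 / 200 = 2.5.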

Acceptance rates

I searched the web for reliable (i.e., not anecdotal) information on per-journal acceptance rates, which was generally limited. Most journals reject a percentage of submissions at the editorial stage prior to peer review (so-called “desk rejections”) due to a lack of fit within the journal’s scope, deficiencies in writing quality, and/or insignificant scientific merit [23]. Of course, rejections after peer review also occur, and overall rejection rates are increasingly made available on journals’ or publishers’ websites or in compendium papers [e.g., 20]. Unfortunately, rates of desk rejections are still rarely available online [23]. However, many journals’ overall acceptance rates are reported either on their own page or on the publisher’s website. For instance, Elsevier and Springer both offer acceptance rates for some (but not all) of their journals on their JournalFinder (https://journalfinder.elsevier.com/) and Journal Suggester (https://journalsuggester.springer.com/), respectively. I extracted reported acceptance rates wherever available and tabulated them per journal. In addition, I sent email correspondence to Editors-in-Chief and/or publishers of each of the journals included in this study asking for their journal’s desk rejection rate and overall acceptance rate. When information was provided, it was tabulated on a per-journal basis. In some cases, acceptance rates provided via email were not equal to the rate provided on the journal’s webpage. In these cases, the value provided by the editor or publisher was used, as it is likely more recent and thus more valid; the two figures did not differ by more than 10% in any such case. It is possible that there are discrepancies in the calculation of acceptance rates, e.g., resubmissions may be tabulated differently among journals. I made no attempt to account for these potential differences in the present study.

Data analysis

I examined summary data for each journal and calculated correlations between median time-to-publication, difference in median publication time during COVID-19 as compared to the prior year, impact factor, and acceptance rate (where available). I plotted correlations using the R package ‘corrplot’ [24]. In addition, I plotted relationships between median time-to-publication and impact factor.
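A minimal sketch of this step, using a placeholder per-journal data frame whose column names and values are invented for illustration (not taken from the study), is:

# Hypothetical per-journal metrics; the study used the 61 journals for which all four values were available
library(corrplot)

journals <- data.frame(
  pub_time     = c(150, 210, 300, 120, 180),      # median time-to-publication (d)
  covid_change = c(-10, 25, 5, -30, 12),          # change in median publication time during COVID-19 (d)
  impact       = c(2.1, 1.4, 0.9, 3.5, 2.8),      # 2018 impact factor
  accept_rate  = c(0.35, 0.55, 0.60, 0.20, 0.40)  # overall acceptance rate
)

m <- cor(journals, method = "pearson")  # pairwise Pearson correlation matrix
corrplot(m)                             # visualize the matrix (as in Fig 5)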

Results

From the 82 journals in this study, I extracted publication information for 83,797 individual papers. Median times-to-acceptance ranged from 64 to 269 days and median times-to-publication ranged from 79 to 323 days (Fig 1). Turnaround times for fisheries papers did not differ substantially from those for all papers in any of the five broad-scope journals in this study (Fig 2); therefore, for the other analyses in this study, data from these journals were not restricted to fish-only papers. The ranges of times-to-publication for each journal were generally broad (Fig 3); the middle 50% often spanned a range of 100 days or more. Distributions were typically skewed right. Virtually every journal in the study published one or more papers that took close to 600 days to publish (the maximum timespan retained in the analysis). Percentages of papers published in over one year ranged from 0 to 28%; percentages of papers published in under six months ranged from 2 to 99% (Table 1). Of 82 journals examined, 28 had significantly different (Wilcoxon p < 0.05) times-to-publication in the year following the start of the COVID-19 pandemic as compared to the previous year. Of these 28, 12 were significantly faster and 16 were significantly slower during the pandemic (Table 1).

Fig 1. Histograms of median days-to-acceptance (A) and median days-to-publication (B) for 82 journals that publish papers in fisheries and related topics.

https://doi.org/10.1371/journal.pone.0257841.g001

Fig 2. Median days-to-publication for all papers (green) and papers related to fish or fisheries (pink) for five broad-scope journals included in this study.

PNAS is Proceedings of the National Academy of Sciences.

https://doi.org/10.1371/journal.pone.0257841.g002

Fig 3. Boxplots showing days from submission to publication for 82 journals that publish papers in fisheries and related topics organized in descending order of medians.

Central vertical lines represent medians, hinges represent the 25th and 75th percentiles, and lower and upper whiskers extend to either the lowest and highest values respectively or 1.5 * the inter-quartile range. Black dots represent papers that were outside 1.5 * the inter-quartile range. Boxes are shaded to correspond with 2018 Impact Factor, where darker green represents higher impact.

https://doi.org/10.1371/journal.pone.0257841.g003

I was able to obtain overall acceptance rate information for 60 journals in this study. Of these 60, I gathered desk rejection rates for 27 journals. For each of these 27, I calculated acceptance rates for papers that were peer-reviewed (i.e., not desk rejected). There was a weak positive correlation between this value and the proportion of articles that were peer-reviewed, implying that rates of the two types of rejections are not independent (Fig 4A). Higher impact journals tended to have higher desk rejection rates and lower percentages of acceptance given that peer review occurred. Of the 60 journals with overall acceptance rate information, I obtained time-to-first-decision for 48 journals; I plotted overall acceptance rate against these values (Fig 4B). There was no clear relationship between these variables; however, journals with higher impact tended to have lower acceptance rates and shorter times-to-first-decision.

Fig 4.

A) The proportion of submissions that are peer-reviewed (i.e., 1 minus the desk rejection rate) versus the acceptance rate of submissions given that they are peer-reviewed for 27 journals that publish in fisheries and related topics. B) Time-to-first-decision (d) versus overall acceptance rate for 48 journals that publish in fisheries and aquatic sciences. Points in both panels are shaded to reflect 2018 Impact Factor of each journal, where darker green means higher impact.

https://doi.org/10.1371/journal.pone.0257841.g004

There was no strong correlation between any pairwise combination of median time-to-publication, difference in median publication time during COVID-19 as compared to the prior year, impact factor, and acceptance rate (Fig 5). A moderate correlation (Pearson correlation = -0.43) was found between impact factor and overall acceptance rate, a phenomenon that has been documented previously [4]. The relationship between a journal’s median time-to-publication and impact factor was broadly scattered (Fig 6).

Fig 5. Pearson correlations between time from submission to publication (PubTime; d), change in time from submission to publication since the start of the COVID-19 pandemic (COVID), 2018 Impact Factor (IF), and overall acceptance rate (Acc) for 61 journals that publish in fisheries and related topics (i.e., all journals in this study for which these four metrics were available).

Correlation bubbles are colored and shaded based on the calculated Pearson correlation coefficient, where negative correlations are pink, positive correlations are green, and darker shades and larger sizes represent stronger correlations.

https://doi.org/10.1371/journal.pone.0257841.g005

Fig 6. Median time-to-publication (d) versus 2018 impact factor for 82 journals that publish in fisheries and related topics.

The inset panel shows a broader view to include Science and Nature, which have high impact factors.

https://doi.org/10.1371/journal.pone.0257841.g006

Discussion

There are clearly intrinsic differences in turnaround time among journals that publish in fisheries science (Fig 3). The causes for these differences are varied, and some are artifacts of the journal’s specific publishing paradigm. For instance, some journals publish uncorrected, non-typeset versions of accepted manuscripts very shortly after acceptance; for the purposes of this study, such papers were considered published even if they were not yet in their final form. I elected to consider any post-acceptance online version “published” because such versions can be shared and cited, thereby fulfilling the desires of many authors [7] and meeting one of the overall goals of science—disseminating research results. However, some journals do not publish any manuscript version other than the finalized document. Such journals have inherently longer turnaround times than those hosting unpolished versions online, and I made no attempt to specify or account for those differences in this study.

In addition to differences in which versions are published online first, differences in journal production formats can influence turnaround time. Some journals publish monthly, some publish quarterly, and some publish on a rolling basis (particularly those that are online only). Strictly periodical journals may choose to allow accepted papers to accumulate prior to publishing several in an issue all at once. Such journals, especially those with page limitations, may have a backlog of papers that are accepted but not yet published. I made no attempt to differentiate between journals based on these format differences, which certainly influence time-to-publication.

Similarly, some journals (or publishers) may enter revised manuscripts into their system as new submissions. This practice artificially deflates apparent turnaround times and may also artificially deflate acceptance rates. Unfortunately, to my knowledge no journals state publicly whether this is their modus operandi, precluding the possibility of applying any correction factor or per-journal caveat herein.

Beyond these differences in production time that stem from journal structure, the time it takes to publish a paper can be divided into the time the paper spends with editorial staff, with reviewers, and with authors after review. Differences may exist in author revision time among journals; it is possible that reviews of manuscripts submitted to higher-impact journals are more thorough and therefore require longer response times. However, I found no association between impact factor and turnaround time (Fig 6), so it may be that no such differences exist. Further, extenuating circumstances on the part of the author(s) of a paper may result in extremely lengthy revision times. There are no data available on per-journal rates of extension requests, but presumably these rates are low and approximately equivalent across journals. I removed from my dataset any papers that took longer than 600 days to publish. In addition, I present median turnaround times in this study as a measure that is robust to outliers.

In contrast to time with the authors, it seems likely that among-journal differences in time with editorial staff and reviewers are responsible for a large portion of differences in overall turnaround time. Delays at the editorial and reviewer level may be inherent to each journal, and could be a result of editorial workload (i.e., number of submissions per editor), level of strictness of the editor-in-chief when communicating with the associate editors, or differences in persistence on the part of the editors when asking reviewers to be expeditious. In addition, some journals may have a more difficult time finding a suitable number of agreeable reviewers; this may be especially true for lower-impact journals although no association between IF and turnaround time was found. A majority of authors surveyed by Mulligan et al. [11] had declined to review at least one paper in the preceding 12 months, mainly due to the paper being outside the reviewer’s area of expertise or the reviewer being too busy with work and/or prior reviewing commitments. If among-journal differences do exist in acceptance rates of review requests, this could possibly alter turnaround times.

In this study, I treated impact factor as a proxy for the quality of individual journals. While impact factor is often still used in this way [22], its limitations are well-documented by authors across many disciplines [e.g., 25–27]. For instance, the calculation of how many “citable” documents a single journal has produced is often dubious, as this may or may not include errata, letters, and book reviews depending on the publisher [28]; misclassification can inflate or deflate a given journal’s impact factor, and the rate of misclassification may depend on the individual journal’s publishing paradigm [29]. Alternatives to impact factor, such as SCImago Journal Rank (SJR) and H-index, have been proposed and may in some cases be more valid metrics of journal prestige or quality [30, 31]. Comparison of these bibliometrics among journals in fisheries was beyond the scope of this paper, and I elected to use only impact factor given its ubiquity and despite its known disadvantages.

The COVID-19 pandemic had no discernible field-wide effect on turnaround time, and differences in turnaround time during the pandemic were not correlated with acceptance rate or impact factor (Fig 5). Hobday et al. [16] found minor changes in turnaround time during COVID-19 (through June 2020) for seven marine science journals; they reported only slight disruptions to scientific productivity in this field. Overall, my results corroborate those of Hobday et al. [16], although some journals published significantly more slowly or more quickly during COVID-19. It is unclear whether these differences were caused by the pandemic, as non-pandemic factors may have affected turnaround times at these individual journals.

The turnaround times, acceptance rates, and impact factors presented in this paper are snapshots and may change over time. The degree to which these metrics change is likely variable among journals. However, barring major changes in journal formats or editorial regimes, the data presented here are probably applicable for the next several years at least. Indeed, median monthly turnaround times for most journals in this study were approximately static for the period from January 2018 to April 2021 (Fig 7). Similarly, acceptance rates and impact factors [32] are generally strongly auto-correlated from one year to the next. I therefore suggest that the metrics presented here can be used by authors as a baseline, but if more than several years have passed it may behoove the reader to obtain updated information (particularly on impact factor and acceptance rate, which are generally more accessible than turnaround time). In addition, it is theoretically possible that this paper itself may alter turnaround times and/or acceptance rates for some journals. Enlightened readers may elect to change their submission habits in favor of certain journals that are more expeditious or that otherwise meet their priorities for a given paper. Authors without a preconceived notion of a specific target journal should still consider the paper’s “fit” to be the most important factor in their decision [1]. I suggest that after assembling a shortlist based on fit, authors should use the results of this paper to select a journal that best aligns with their priorities.

Fig 7. Median monthly publication time (d) as a proportion of January 2018 median publication time for 82 journals that publish in fisheries and related topics.

The dashed horizontal line at 1.0 represents the baseline proportion.

https://doi.org/10.1371/journal.pone.0257841.g007

Acknowledgments

This manuscript benefited greatly from discussions with H. I. Browman, D. D. Aday, W. L. Smith, R. C. Chambers, N. M. Bacheler, K. W. Shertzer, S. R. Midway, S. M. Lombardo, and C. A. Harms. My thanks to K. W. Shertzer and H. I. Browman for reviewing early drafts of this paper. I am grateful to my advisor, J. A. Buckel, for allowing me the time to pursue this side project while I worked on my dissertation. Thanks to the numerous editors, publishers, and other journal staff who replied to my requests for journal information.

References

  1. Knight LV, Steinbach TA. Selecting an Appropriate Publication Outlet: A Comprehensive Model of Journal Selection Criteria for Researchers in a Broad Range of Academic Disciplines. International Journal of Doctoral Studies. 2008;3.
  2. Carroll-Johnson RM. Submitting a Manuscript for Review. Clinical Journal of Oncology Nursing. 2001;5.
  3. Tenopir C, Dalton E, Fish A, Christian L, Jones M, Smith M. What motivates authors of scholarly articles? The importance of journal attributes and potential audience on publication choice. Publications. 2016;4(3):22.
  4. Aarssen LW, Tregenza T, Budden A, Lortie CJ, Koricheva J, Leimu R. Bang for your buck: rejection rates and impact factors in ecological journals. The Open Ecology Journal. 2008;1(1).
  5. Gasparyan AY. Choosing the target journal: do authors need a comprehensive approach? Journal of Korean Medical Science. 2013;28(8):1117. pmid:23960434
  6. Thompson PJ. How to choose the right journal for your manuscript. Chest. 2007;132(3):1073–6. pmid:17873202
  7. Nguyen VM, Haddaway NR, Gutowsky LF, Wilson AD, Gallagher AJ, Donaldson MR, et al. How long is too long in contemporary peer review? Perspectives from authors publishing in conservation biology journals. PLoS ONE. 2015;10(8):e0132557. pmid:26267491
  8. Solomon DJ, Björk BC. Publication fees in open access publishing: Sources of funding and factors influencing choice of journal. Journal of the American Society for Information Science and Technology. 2012;63(1):98–107.
  9. Swan A. ‘What authors want’: the ALPSP research study on the motivations and concerns of contributors to learned journals. Learned Publishing. 1999;12(3):170–2.
  10. Lang M. Communicating academic research findings to IS professionals: An analysis of problems. Informing Sci Int J an Emerg Transdiscipl. 2003;6:21–9.
  11. Mulligan A, Hall L, Raphael E. Peer review in a changing world: An international study measuring the attitudes of researchers. Journal of the American Society for Information Science and Technology. 2013;64(1):132–61.
  12. Allen H, Boxer E, Cury A, Gaston T, Graf C, Hogan B, et al. What does better peer review look like? Definitions, essential areas, and recommendations for better practice. 2018.
  13. Lewallen LP, Crane PB. Choosing a publication venue. Journal of Professional Nursing. 2010;26(4):250–4. pmid:20637447
  14. Branch TA, Linnell AE. What makes some fisheries references highly cited? Fish and Fisheries. 2016;17(4):1094–133.
  15. R Core Team. R: a language and environment for statistical computing. Vienna, Austria. URL http://www.R-project.org/. 2021.
  16. Hobday AJ, Browman HI, Bograd SJ. Publishing and peer reviewing as indicators of the impact of COVID-19 on the productivity of the aquatic science community. ICES Journal of Marine Science. 2020;77(7–8):2439–44.
  17. Seglen PO. Why the impact factor of journals should not be used for evaluating research. BMJ. 1997;314(7079):497.
  18. Kupiec-Weglinski JW. Journal Impact Factor (JIF): The Good, the Bad, and the Ugly. Nowotwory Journal of Oncology. 2015;65(6):481–2.
  19. Ioannidis JP, Thombs BD. A user’s guide to inflated and manipulated impact factors. European Journal of Clinical Investigation. 2019;49(9):e13151. pmid:31206647
  20. Salinas S, Munch SB. Where should I send it? Optimizing the submission decision process. PLoS ONE. 2015;10(1):e0115451. pmid:25616103
  21. Smith R. Commentary: The power of the unrelenting impact factor—Is it a force for good or harm? International Journal of Epidemiology. 2006;35(5):1129–30. pmid:16987843
  22. Archambault É, Larivière V. History of the journal impact factor: Contingencies and consequences. Scientometrics. 2009;79(3):635–49.
  23. Björk B-C. Acceptance rates of scholarly peer-reviewed journals: a literature survey. El Profesional de la Información. 2019.
  24. Wei T, Simko V. R package "corrplot": Visualization of a Correlation Matrix (Version 0.84). https://github.com/taiyun/corrplot. 2017.
  25. Bordons M, Fernández M, Gómez I. Advantages and limitations in the use of impact factor measures for the assessment of research performance. Scientometrics. 2002;53(2):195–206.
  26. Bornmann L, Marx W, Gasparyan AY, Kitas GD. Diversity, value and limitations of the journal impact factor and alternative metrics. Rheumatology International. 2012;32(7):1861–7. pmid:22193219
  27. Sharma M, Sarin A, Gupta P, Sachdeva S, Desai A. Journal impact factor: its use, significance and limitations. World Journal of Nuclear Medicine. 2014;13(2).
  28. Rossner M, Van Epps H, Hill E. Irreproducible results: a response to Thomson Scientific. Rockefeller University Press; 2008.
  29. Golubic R, Rudes M, Kovacic N, Marusic M, Marusic A. Calculating impact factor: how bibliographical classification of journal items affects the impact factor of large and small journals. Science and Engineering Ethics. 2008;14(1):41–9. pmid:18004672
  30. Falagas ME, Kouranos VD, Arencibia-Jorge R, Karageorgopoulos DE. Comparison of SCImago journal rank indicator with journal impact factor. The FASEB Journal. 2008;22(8):2623–8. pmid:18408168
  31. Ranjan C. Bibliometric Indices of Scientific Journals: Time to overcome the obsession and think beyond the Impact Factor. Medical Journal Armed Forces India. 2017;73(2):175–7. pmid:28924319
  32. Chew M, Villanueva EV, Van Der Weyden MB. Life and times of the impact factor: retrospective analysis of trends for seven medical journals (1994–2005) and their Editors’ views. Journal of the Royal Society of Medicine. 2007;100(3):142–50. pmid:17339310