
The influence of publication ranking specifications on publication strategy and academic careers in business administration

  • Gerhard Reichmann ,

    Roles Conceptualization, Formal analysis, Methodology, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing

    gerhard.reichmann@uni-graz.at

    Affiliation Department of Operations and Information Systems, Karl-Franzens University Graz, Graz, Austria

  • Christian Schlögl,

    Roles Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Supervision, Validation

    Affiliation Department of Operations and Information Systems, Karl-Franzens University Graz, Graz, Austria

  • Margit Sommersguter-Reichmann

    Roles Conceptualization, Methodology, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing

    Affiliation Department of Finance, Karl-Franzens University Graz, Graz, Austria

Abstract

This study examines the impact of methodological variations in publication-based rankings on the evaluation of individual research performance in business administration. Drawing on a unique dataset comprising complete personal publication lists of 233 professors from Austrian public universities (2009–2018), we apply ten distinct ranking variants that differ in their treatment of data sources, co-authorship, publication languages, article lengths, and journal qualities. These variants are categorized into purely quantity-focused and predominantly quality-focused rankings. Our results demonstrate that researcher rankings are susceptible to specification choices. While quantity-focused rankings produce relatively small performance differentials and high variability, quality-focused variants consistently identify a stable group of leading researchers. These scholars publish more frequently in English, in journals indexed by Web of Science (WoS), and in top-tier outlets according to the JOURQUAL ranking. Notably, leading researchers publish over twice as many articles in high-ranking journals as their peers. The findings underscore the significant implications of ranking design for career advancement and research strategy. For early-career researchers, aligning publication efforts with the logic of quality-focused rankings—favoring English-language publications in highly ranked, peer-reviewed journals—is crucial for enhancing academic visibility and competitiveness. Moreover, our study offers a methodological stress test for ranking systems, revealing the extent to which technical design influences outcomes. By leveraging comprehensive and multilingual publication data and systematically comparing multiple ranking methodologies, this study contributes to both the academic evaluation literature and practical guidance for researchers navigating the demands of a metric-driven academic environment.

1. Introduction

1.1. Background

Evaluation plays a crucial role in universities, influencing administration, teaching, and research. The academic literature reflects this, with studies evaluating administrative processes [1,2], teaching quality [3,4], and, in particular, research performance [5–7]. This paper focuses on research performance measurement as a core element of evaluation. Researchers assess research output at various levels, including universities [8], faculties or subject areas [9], departments [10], and individual scholars [11,12]. This study concentrates on the last group, individual researchers.

Research performance at the individual level can be measured in several ways, including publication and citation metrics, awards, prizes, editorial roles, and third-party funding [13–15]. We focus on publication output, which, along with citations, is the most widely used indicator in practice [16,17]. To ensure complete coverage, we use researchers’ personal publication lists rather than relying on external databases [18].

Multidisciplinary databases such as Web of Science (WoS), Scopus, or Google Scholar [19–21] often fail to capture non-English publications. This limitation is particularly significant for researchers in non-English-speaking regions. Meyer et al. [22] demonstrate that, in German-speaking business administration, the data source has a greater impact on ranking results than the use of publication counts versus citation counts.

Research rankings have evolved into more than mere documentation tools, as they increasingly influence career trajectories. High rankings signal achievement and offer career benefits, such as better job prospects or increased funding [23,24]. As a result, researchers have strong incentives to optimize their publication strategies.

Similar to other disciplines, rankings in business administration aim to identify ‘leading’ scholars at national or international levels. Most existing league tables rely on a single ‘best’ or most established variant. Yet a change in one specification, such as using fractional instead of full counting, can significantly shift ranking positions. Despite this, little is known about how such technical choices affect researcher rankings and the strategies behind them.

This study addresses this gap by systematically comparing multiple ranking variants and assessing their effects. We aim to:

  • Test the stability of the top 10% of researchers (hereafter referred to as leading researchers) across various ranking variants.
  • Identify the most influential publication characteristics (e.g., co-authorship, language, article length, or journal quality).

Our goal is twofold: (i) to offer evidence-based guidance to researchers, especially early-career scholars, on how publication strategies affect rankings, and (ii) to ‘stress test’ ranking systems by showing how sensitive they are to design choices. A key strength of our approach is the use of complete personal publication lists, which ensure full coverage and avoid the data gaps found in multidisciplinary databases, such as Web of Science (WoS) or Scopus.

This paper proceeds as follows: Section 1.2 reviews research rankings in the social sciences and economics, including those specific to German-speaking countries. Section 2 discusses the methods for individual and aggregate ranking variants, along with their practical relevance. Section 3 provides individual and aggregate ranking results, while Section 4 discusses the findings and the study's limitations. Section 5 concludes with a concise summary.

1.2. Literature overview

First, we searched the WoS for literature on publication rankings using the search term ‘publication-based rankings.’ Although we did not restrict the publication period, the search yielded only a few publications focusing on ranking business administration units. Next, we expanded the subject area to encompass all social and economic sciences, ultimately identifying 46 relevant articles. As we found no studies covering business administration in German-speaking countries, we conducted another search outside WoS, yielding four additional papers.

1.2.1. Search within the WoS.

Using the WoS search, we retrieved 46 papers that span the publication period from 1996 [25] to 2022 [26]. On average, each article covered ten years. The number of units analysed ranged from five to over 18,000, though most studies considered fewer than 100. Universities were the most frequently examined units. Other units included countries, schools, research programs, departments, and, as in our study, individual researchers.

Table 1 illustrates that 26 of the 46 studies (57%) restrict their analysis to a specific country or region, with the United States as the most frequent geographical focus. A single publication provided an economics ranking for Germany [27].

Table 1. Ranking studies retrieved from a WoS search (n = 46).

https://doi.org/10.1371/journal.pone.0336492.t001

Nearly all (41 studies) limited their focus to a specific scientific discipline. Among these, 15 focused on economics (e.g., [28]), making it the most common field of study. Nine studies addressed business administration or its subfields, such as marketing [29], but none specifically focused on German-speaking countries.

The dominant data sources are (subject-specific) journal lists, used in 22 studies, followed by WoS (14), and other sources (10). The number of journals used per study ranges from 2 to 258. Scopus and Google Scholar appear only twice each; only one study used personal publication lists, likely due to the considerable effort required to compile them. However, personal publication lists generally provide more complete and higher-quality data.

Journal articles are by far the most common publication type assessed (41 of 46 studies), reflecting standard practice in most disciplines [6,30].

While scholars critically view language restrictions, especially English-only criteria [31], such filters are rarely applied explicitly: 44 studies accepted all languages, with only one limited to English and another to Chinese. Nonetheless, the reliance on multidisciplinary databases like WoS or Scopus implicitly narrows the scope to English-language publications, as these sources predominantly index English-language journals. In one variant of our study, we limited the sample to English-language articles to explore how results differ when relying solely on WoS instead of personal publication lists. The substantial differences we found suggest that even some English-language articles by our authors are not indexed in WoS.

Across the 46 studies, we identified 60 distinct ranking variants, exceeding the number of articles because some studies treated multiple authorship in more than one way. Of these variants, 33 used full counting and 27 fractional counting (26 of the latter applying the formula 1/n, with n representing the number of authors), even though full counting is considered unfair because it disadvantages single authors [32–34]. Reflecting this concern, eight of the ten variants in our study apply fractional counting.

Only nine of the 46 articles adjusted for publication length, typically using unstandardized page counts, while 37 did not (e.g., [35,36]). Similarly, only 20 studies applied journal-ranking weights. These weights reflect a journal's value, based either on subjective, discipline-specific rankings (e.g., JOURQUAL; [37]) or objective metrics, such as the journal impact factor (IF) [38,39]. Among the 20 studies, WoS-IF was the most frequently used (7 cases), followed by various subject-specific rankings, especially in economics.

Overall, discipline-specific rankings of journal articles, often compiled from bespoke journal lists, dominate the literature. Language restrictions are rare, while studies paid limited attention to publication length or journal quality metrics.

1.2.2. Search outside the WoS.

To capture potential discipline- and region-specific aspects, we supplemented the WoS search with a review of business administration studies from the German-speaking region, resulting in four additional papers (Table 2). Studies ranking individual researchers in this context are scarce. Meyer et al. [22] evaluated the research performance of 298 accounting and marketing scholars in German-speaking countries, utilizing WoS, Scopus, Google Scholar, and Handelsblatt as data sources. These sources yielded substantial variation in publication counts due to differences in completeness levels. The authors applied full counting for co-authorship and measured journal quality in one case using the Handelsblatt ranking [40,41]. The findings highlight the substantial impact of the data source on individual rankings. Our study addresses these issues by utilizing personal publication lists, which provide greater completeness, by applying fractional counting, and by considering several journal-quality variants.

Table 2. Ranking studies retrieved from a search outside the WoS (n = 4)*.

https://doi.org/10.1371/journal.pone.0336492.t002

Fülbier and Weller [42] analysed 175 German financial accounting researchers between 1950 and 2005, using journals ranked at least JOURQUAL B. They identified 733 articles, with approximately 87% published in German. Using full counting, they found that 20 leading researchers (11%) accounted for 28% of all publications. We avoid two limitations of their approach: exclusive reliance on full counting and the restriction to discipline-specific journals, which can disadvantage both single authors and interdisciplinary researchers.

Rost and Frey [43] examined the relationship between quantitative (publications, citations) and qualitative (editorial board memberships) performance indicators for 851 management scholars. They used 11 high-quality journals, selected based on various rankings, over a ten-year period. Rankings were based on raw and weighted publication counts, accounting for article length, journal IF, and co-authorship through fractional counting. While exploring multiple ranking variants, the authors relied on a narrow database of just 11 discipline-specific journals, which likely facilitated broader researcher inclusion but limited generalisability.

Macharzina et al. [44] ranked researchers and practitioners based on their publications in six major German-language business journals, identifying 2,255 articles over ten years. They used fractional counting (1/n) and included 142 individuals with at least three research points (research points are publication counts weighted by 1/n for co-authorship; in our study, we retain the term ‘publications’ for this measure), comprising 105 professors, 16 junior researchers, and 21 practitioners. However, the study is over 20 years old, relies exclusively on German-language journals, and omits international outlets, an increasingly important venue for German-speaking researchers. In contrast, our study ensures a more homogeneous sample (professors only) and includes both German- and English-language publications.

1.2.3. Synthesis gaps.

Table 3 contrasts six recurring shortcomings in previous researcher-ranking studies with the approach taken in our research to address these shortcomings. Although many studies zero in on a single discipline or country, none cover Austrian business economists; moreover, most still rely on incomplete databases (WoS, Scopus, Google Scholar, or narrow journal lists), overlook language bias, credit every co-author fully, build on tiny sets of field journals, and analyse datasets that are now decades old. Our study addresses each of these issues by examining the publication records of business administration professors working in Austria, utilizing complete personal publication lists, conducting language sensitivity checks, applying mainly fractional counting, and covering output through 2018.

These gaps motivate our central aims: (a) test the stability of ‘top-10%’ researcher status across ten distinct ranking variants, and (b) identify publication attributes (e.g., co-authorship, language, length, journal quality) that drive rank shifts. By doing so, we provide actionable guidance to early-career scholars and a methodological ‘stress test’ for ranking producers.

2. Materials and methods

This study focuses on researchers with professorial titles (full, associate, or assistant professors) at Austrian public universities offering Bachelor's or Master's programs in business administration. We further limited the sample to individuals affiliated with faculties of social and economic sciences, primarily in business administration departments or related chairs. We justify this approach based on the homogeneity of the scholars. We define homogeneity as follows:

  • Academic comparability: All researchers hold a professorial title, ensuring similar levels of education, research experience, and academic independence.
  • Organizational focus: All scholars work in units with a clear business administration orientation, regardless of individual educational or research backgrounds.

To facilitate a comprehensive comparison, we limited the sample to Austria, as covering the entire German-speaking region would have exceeded our available resources. We further limited the scope to public universities to ensure comparable institutional conditions; in practice, this had no effect, as no private universities in Austria offered relevant programs at the time. We excluded researchers from economics and sociology: both are separate disciplines with distinct degree programs, economics has its own established rankings, and sociology follows a different publication culture with a strong focus on monographs. However, we included staff from operations research and business informatics when their organizational assignment placed them within a Business School.

We only included researchers who (a) held a permanent university position, (b) earned a doctoral degree before 2009, and (c) were employed at one of six designated Austrian public universities as of July 31, 2019. Based on university websites, we identified 233 researchers who met all criteria. For these 233 individuals, we collected all journal articles (research and review papers) published between 2009 and 2018. The ten-year window and the focus on journal articles align with standard practices in researcher-level assessments. In business administration, journal articles carry more weight for career advancement than monographs, proceedings, or other types of publications [16,44]. This preference is partly due to the challenges of objectively assessing non-journal outputs [45]. We compiled the data between March and June 2020, using the researchers’ personal publication lists, which yielded 4,246 journal articles. Personal publication lists are considered the most complete data source, essentially the gold standard, because they typically include all publications, not just those published in journals covered by the bibliometric databases. Most Austrian public universities require researchers to maintain these records through internal databases, which can automatically generate personal publication lists.

In addition to standard bibliographic information, we recorded i) the number of authors, ii) language (German or English), iii) number of pages, and iv) journal coverage in WoS for each relevant article.

To assess journal quality, we documented the JOURQUAL-3 rating and the 2-year WoS-IF as of summer 2023. JOURQUAL-3, the third edition of the ranking, is the standard in German-speaking business administration, while the WoS-IF is an international standard across disciplines.

Methodological choices made before evaluation can significantly influence ranking outcomes [46]. Despite this, most studies apply only one preferred ranking method [26,47,48]. These methods vary in terms of data source, treatment of co-authorship, publication language, and journal weighting.

Based on our literature review (see Section 1.2), we selected ten ranking variants (V1–V10) to assess the 233 researchers. Table 4 summarizes their specifications regarding data source, co-authorship, language, length, and journal quality.

We also grouped the rankings into two categories. The key distinctions lie in 1) whether the ranking is quality-focused or quantity-focused, i.e., whether it explicitly considers journal quality, typically operationalized through journal rankings; and 2) whether its practical relevance is high. We assigned variants V1 to V5 to Category 1 and V6 to V10 to Category 2. While V7–V10 incorporate journal rankings, V6 does not. Nevertheless, we assigned V6 to Category 2, as it is widely used as a de facto standard in evaluation practice [5], despite not accounting for journal quality. In this sense, V6 functions similarly to quality-focused rankings by shaping research evaluation and academic careers.

Table 4 shows that we applied fractional counting (1/n) in eight cases. We also used full counting in two variants (V1, V9) to assess its effect, despite its being considered outdated [22,42]. To investigate the impact of language filters, V5 considered English-only publications. Regarding publication length, V3 used page counts, whereas V4 required a minimum of five pages [6,44]. We incorporated journal quality in four Category-2 rankings (V7–V10). This approach reflects the pivotal role of journal rankings in hiring decisions and academic careers. Our literature review reports similar findings: 20 of 46 studies utilized journal rankings, most often subject-specific, survey-based lists. We used JOURQUAL in three variants (V8–V10), filtering for A+, A, and B journals only [42,49]. In V7, we weighted articles using the WoS-IF and excluded articles without a WoS-IF, reflecting our focus on consistent quality standards.
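The scoring logic behind such variants can be sketched in a few lines. The following is an illustrative sketch only, not the authors’ actual procedure; the record fields (`n_authors`, `jourqual`) and the example articles are assumptions made for illustration. It shows how fractional counting (1/n) and a JOURQUAL rating filter, in the spirit of variants such as V2 (all articles, fractional counting) and V10 (fractional counting, A+/A/B journals only), change a researcher’s score.

```python
def score_fractional(articles, allowed_ratings=None):
    """Sum 1/n author-fraction credits per article, optionally
    restricted to articles in journals with an allowed rating."""
    total = 0.0
    for art in articles:
        # A quality-focused variant skips articles outside the rating filter.
        if allowed_ratings is not None and art.get("jourqual") not in allowed_ratings:
            continue
        total += 1.0 / art["n_authors"]  # fractional counting: 1/n per article
    return total

# Hypothetical publication records for one researcher.
articles = [
    {"n_authors": 1, "jourqual": "A"},
    {"n_authors": 2, "jourqual": "B"},
    {"n_authors": 4, "jourqual": None},  # journal not rated by JOURQUAL
]

print(score_fractional(articles))                    # all articles: 1 + 0.5 + 0.25 = 1.75
print(score_fractional(articles, {"A+", "A", "B"}))  # rated journals only: 1 + 0.5 = 1.5
```

Under full counting (as in V1 and V9), each article would instead contribute 1 regardless of the number of authors, which is the numerical advantage for heavily co-authored work discussed in Section 1.2.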

We then applied the ten variants to rank the 233 researchers. We focused on the top 10%, the so-called leading researchers (23 individuals; see Section 3.1), and analysed which attributes exerted the strongest influence on the top positions.

To check robustness, we created two aggregate rankings (see Section 3.2):

  • The first includes all researchers ranked in the top 10% in at least one of the ten variants.
  • The second incorporates only those ranked in the top 10% in at least one of the five Category-2 variants (V6–V10).

Finally, we identified the attributes that most strongly contributed to being among the leading researchers.

3. Results

3.1. Leading researchers: individual rankings

Table 5 presents the ten ranking variants detailed in Table 4, highlighting the upper echelon of researchers, specifically the 23 leading researchers among the 233 evaluated. Each ranking is based on the third column, which, depending on the variant, reports publications (V1, V2, V4, V5, V6, V8, V9, and V10), publication points (V7), or number of pages (V3). For fractional counting (V2, V3, V4, V5, V6, V7, V8, and V10), we applied the formula 1/n.

Table 5. Leading researchers: individual rankings V1-V10.

https://doi.org/10.1371/journal.pone.0336492.t005

A key element of our analysis is the final two rows of each ranking, which compare the average metrics of the leading researchers (‘Mean value per Leading Researcher’) with those of the remaining group (‘Mean value per Non-Leading Researcher’). These values enable a clear distinction between the two cohorts. Fig 1 summarizes these comparisons and highlights the performance of the leading researchers, using the non-leading group as the reference point (100%).

Fig 1. Performance advantage of leading researchers compared to non-leading researchers (Mean values for leading researchers).

https://doi.org/10.1371/journal.pone.0336492.g001

As shown in Fig 1 and the final two rows of Table 5, the mean performance of leading researchers exceeds that of the non-leading group by a factor ranging from 1:3.55 (V3) to 1:14.21 (V9) across all ranking variants. The ratio 1:14.21 for V9, which considers publication lists, full counting, and A+/A JOURQUAL journals, shows that the leading researchers have, on average, fourteen times more publications in top-ranked journals than their peers. These results highlight the substantial performance gap between the two cohorts.
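The group comparison behind these ratios can be expressed compactly. The sketch below is illustrative only: the scores are invented, and the function simply splits a list of variant scores at the top 10% and returns the ratio of the two group means, mirroring the kind of leading versus non-leading comparison reported in Fig 1.

```python
from statistics import mean

def leading_ratio(scores, top_share=0.10):
    """Ratio of the mean score of the top `top_share` of researchers
    ("leading") to the mean score of the rest ("non-leading")."""
    ranked = sorted(scores, reverse=True)
    k = max(1, round(len(ranked) * top_share))  # size of the leading group
    return mean(ranked[:k]) / mean(ranked[k:])

# Invented scores for ten researchers under one hypothetical variant.
scores = [12.0, 9.0, 4.0, 3.0, 2.5, 2.0, 1.5, 1.0, 0.5, 0.5]
print(round(leading_ratio(scores), 2))  # → 4.5, i.e., a ratio of 1:4.5
```

A highly skewed variant (e.g., one counting only A+/A publications, which many researchers lack entirely) concentrates the score mass in the leading group and therefore yields a larger ratio, which is the pattern the Category 2 results exhibit.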

Notably, Category 1 rankings (V1–V5) show considerably smaller differences (ratios from 1:3.55 to 1:4.97) than Category 2 rankings (V6–V10), where the ratios range from 1:4.99 to 1:14.21.

3.2. Leading researchers: aggregate rankings

Table 6 identifies 63 researchers who achieved leading status in at least one of the ten rankings. Three appear as leading researchers in all variants, and nine in at least six. To be part of this elite group, a researcher must have achieved leading status in both Category 1 and Category 2 rankings.

Table 6. Leading researchers: aggregate ranking 1 (Categories 1 and 2).

https://doi.org/10.1371/journal.pone.0336492.t006

Among the remaining 51 leading researchers, a clear division emerges between the two categories. Notably, each of the eleven researchers who led in exactly three rankings did so exclusively within a single category, further validating our classification of ranking variants. As a result, our discussion will now focus on the five Category 2 rankings (V6–V10).

Table 7 ranks 26 leading researchers based on their performance in Category 2 rankings (V6–V10). All appear among the leading researchers in at least three of these variants; six lead in all five. Comparing their performance to the non-leading cohort (see the last two rows of Table 7 and Fig 2) reveals clear differences. Leading researchers published more frequently in English, in WoS-indexed journals, and in A+/A/B JOURQUAL outlets. Specifically, 19 of the 26 scholars published exclusively in English; across the group, the average English-language share is 97%.

Table 7. Leading researchers: aggregate ranking 2 (Category 2).

https://doi.org/10.1371/journal.pone.0336492.t007

Fig 2. Research attributes of leading vs. non-leading researchers: aggregate ranking (Category 2).

https://doi.org/10.1371/journal.pone.0336492.g002

On average, 64% of their publications are WoS-indexed, and 63% appear in A+/A/B JOURQUAL journals, representing more than double the shares in the non-leading group (26% and 27%, respectively). For A+ and A JOURQUAL publications, the contrast is even sharper: leading researchers average 34%, compared to just 7% in the non-leading cohort, nearly a fivefold difference.

4. Discussion

This study set out to address two key questions: first, how sensitive researcher rankings are to the choice of methodological specifications; and second, which publication attributes most strongly influence researchers’ positions within those rankings. The findings provide important insights for both research evaluation practice and the strategic considerations of early-career researchers.

4.1. Stability of leading positions across ranking variants

Our results demonstrate that Category 2 rankings, specifically those that incorporate journal rankings or impact factors (V7–V10), consistently identify a relatively stable group of leading researchers. In contrast, the quantity-focused Category 1 rankings (V1–V5), which do not consider journal quality, produce more variation and instability in identifying leading performers (Fig 1). This finding confirms earlier concerns raised in the literature that rankings based solely on publication counts are limited in their ability to identify consistent research excellence [50].

From a practical standpoint, these results support the prevailing approach in many academic appointment procedures, particularly in German-speaking countries, where quality-focused indicators, such as publications in highly ranked JOURQUAL or WoS-indexed journals, are prioritised. These measures not only better reflect the long-term academic impact of researchers but also align with studies suggesting that past high-quality productivity is a reliable predictor of future performance [51]. Hence, evaluation systems that rely heavily on quantity-based metrics risk overlooking those researchers who produce fewer but more impactful publications.

4.2. Publication attributes and their impact on rankings

Turning to our second research aim, the analysis of individual ranking variants reveals that specific publication attributes play a decisive role in determining a researcher's position.

Above all, journal quality emerges as the most influential factor. Leading researchers publish substantially more of their work in top-tier journals, with an average share of A+/A/B JOURQUAL publications more than twice that of their non-leading peers. This insight aligns with Mingers and Young (2016) [52], who emphasise the critical role of journal reputation in the business and management disciplines. These results also highlight the persistent tension between quality and quantity in academic publishing. The widespread ‘publish or perish’ culture has long incentivized quantity over quality, resulting in a flood of publications with few citations and reads [53]. However, critics question whether publishing in highly ranked journals is overrated, as these journals often prioritize methodological complexity over practical relevance [54]. Our results suggest that in quality-sensitive rankings, a small number of high-quality publications contributes more to a researcher's standing than a large volume of low-tier outputs, supporting recent calls to shift academic incentives away from volume-based metrics and toward impact- and reputation-sensitive measures [55].

Publication language also proves to be a strong differentiator. Leading researchers publish almost exclusively in English (between 93% and 98% across rankings V5–V10), reflecting the international orientation of high-impact journals and evaluation systems. This has clear implications for early-career scholars: publishing in English is essential for global visibility and academic recognition. However, as Wei and Zhang (2020) [56] point out, the decline in national-language publications can create a disconnect between academic research and professional practice, particularly in non-English-speaking regions. A balanced strategy may therefore be warranted: one that prioritises English-language publications for career progression, while also maintaining contributions in national-language outlets to retain practitioner relevance.

Co-authorship, by contrast, does not significantly distinguish leading from non-leading researchers in our study. Nonetheless, it remains an important strategic factor. While full counting favours extensive co-authorship, fractional counting (used in most Category 2 rankings) eliminates its numerical advantage. Still, collaborations, particularly with experienced scholars, can offer early-career researchers essential learning opportunities and access to academic networks. As Xu and Pole [57] highlight, such networks are often crucial for building academic visibility and reputation, even when rankings may dilute individual credit.

Publication length, finally, appears to have a limited impact. It only influences rankings in variants where it is explicitly considered (V3 and V4). Even then, its practical relevance is minor, as most current evaluation systems do not include article length as a performance criterion (see Section 1.2). Early-career researchers are therefore better advised to focus on clarity, methodological rigour, and journal fit, rather than aiming for excessive length.

4.3. Strategic implications for early-career researchers

Taken together, the results provide several recommendations for early-career researchers seeking to establish a strong academic profile.

  1. Publishing in English and journals indexed in WoS, carrying a high WoS-IF, and ranked highly in JOURQUAL is essential. For early-career researchers, however, focusing exclusively on quality is challenging. Publishing in top-tier journals often takes years and may not be successful at all. Early-career researchers must therefore carefully consider whether multiple publications in lower-tier or unranked journals will be viewed more favourably, particularly in future job applications, than time-consuming or unsuccessful attempts to publish in prestigious outlets. Moreover, regarding impact factors, early-career researchers must also consider that the WoS-IF changes annually, with the common 2-year WoS-IF fluctuating more than the 5-year WoS-IF. Ideally, rankings should use the WoS-IF as it stood at the time of publication; however, most studies, including ours, apply the WoS-IF current at the time of compiling the ranking. JOURQUAL ratings, however, remain stable for long periods, offering consistency but may lag behind recent shifts in journal influence. When in doubt, let JOURQUAL guide journal selection, as it comprehensively captures the top tier. Additionally, young researchers could also consult best-practice guidance on publishing in premier outlets (e.g., [58]). By targeting reputable, WoS‑indexed, and highly ranked journals, researchers maximize their visibility and competitiveness in evaluation procedures that heavily weight these metrics.
  2. Collaborating strategically with experienced researchers can accelerate learning, extend networks, and enhance the quality and credibility of the work, provided that all co-authors make meaningful contributions [57]. Although full counting is increasingly regarded as outdated in research evaluation, and fractional counting places greater pressure on early-career researchers to meet publication targets, they can still benefit greatly from collaboration. Joint work can support topic selection, broaden research perspectives, and help identify appropriate journals for publication.
  3. Prioritizing clarity and rigour over article length and aligning submissions with the standards and expectations of high-impact journals remains critical. Given the limited importance of article length in evaluation practices and the growing emphasis on word limits in academic publishing, early-career researchers should identify a suitable outlet before writing. This procedure allows them to tailor the article's length and structure to the journal's specific requirements. Crucially, early-career researchers should be aware that evaluation practices increasingly favour journal quality and international visibility over sheer publication volume.
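The difference between the 2-year and 5-year WoS-IF mentioned in recommendation 1 can be made concrete with a minimal sketch. The function below is an illustration of the standard impact-factor logic (citations received in a census year to items published in the preceding window, divided by the citable items in that window); the journal, its citation counts, and the variable names are all invented for this example and are not part of the study's data.

```python
def impact_factor(citations, items, census_year, window=2):
    """Citations received in the census year to items published in the
    preceding `window` years, divided by the citable items in those years."""
    years = range(census_year - window, census_year)
    received = sum(citations.get(y, 0) for y in years)
    citable = sum(items.get(y, 0) for y in years)
    return received / citable if citable else 0.0

# Invented counts for a hypothetical journal: keys are publication years,
# citation values are citations received in 2020 to items from that year.
citations = {2015: 120, 2016: 90, 2017: 110, 2018: 40, 2019: 160}
items = {2015: 50, 2016: 50, 2017: 55, 2018: 60, 2019: 45}

two_year = impact_factor(citations, items, 2020, window=2)   # 200 / 105 ≈ 1.90
five_year = impact_factor(citations, items, 2020, window=5)  # 520 / 260 = 2.00
```

Because the 2-year variant rests on only two publication cohorts, a single strong or weak year (here, 2019) moves it far more than the 5-year variant, which averages such swings out; this is the fluctuation the recommendation above refers to.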

4.4. Limitations

Our study is not without limitations. Notably, university rankings, regardless of level, are often viewed critically or outright rejected by parts of the academic community [59]. While we acknowledge these concerns, our use of multiple ranking systems seeks to mitigate inherent biases.

Our decision to focus exclusively on publications in scientific journals merits critical reflection. This approach excludes other relevant forms of scholarly output, such as citations, different types of publications (e.g., monographs or conference papers), academic awards, or editorial roles, thereby potentially overlooking critical dimensions of academic performance. However, this limitation is grounded in the widely acknowledged centrality of journal publications within the field of business administration [16]. Expanding the scope to include additional output types would require reliance on individual publication lists, as international databases primarily index journal articles. Yet this approach introduces a further challenge: the need to compare fundamentally different formats, such as a comprehensive monograph from a reputable publisher versus a brief, non-indexed journal article, which complicates the fair and consistent assessment of academic quality.

Despite incorporating ten different rankings from the literature, our aggregate rankings inevitably emphasized specific variants over others. This insight may prompt early-career researchers to reflect critically on the influence of particular ranking methodologies. Some variants rely on questionable criteria but were included due to their practical relevance. A prominent example is full counting of co-authored publications, which tends to overestimate individual contributions [60], especially as the number of co-authors increases. This problem is even more pronounced in citation analyses, where full counting remains the norm [15], despite a few exceptions [61].
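The overestimation under full counting noted above can be illustrated with a minimal sketch. The author names and records below are invented for this example; the two counting rules themselves (one full credit per co-author versus an equal 1/n share) are the standard definitions discussed in the counting-methods literature [60].

```python
from collections import defaultdict

# Hypothetical records: each entry lists the co-authors of one article.
publications = [
    ["Huber", "Maier"],
    ["Huber", "Maier", "Bauer", "Wagner"],
    ["Maier"],
]

full = defaultdict(float)        # full counting: 1 credit per co-author
fractional = defaultdict(float)  # fractional counting: 1/n credit per co-author
for authors in publications:
    for author in authors:
        full[author] += 1.0
        fractional[author] += 1.0 / len(authors)

# Huber receives 2.0 full credits but only 0.75 fractional credits;
# the gap between the two schemes widens as the number of co-authors grows.
```

Under full counting, the three articles above yield seven author credits in total for only three publications, which is precisely the inflation that grows with team size.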

We also considered publication language and length in our analysis, albeit with caution. Restricting the scope to specific languages does not inherently improve the validity of the results. Likewise, favoring longer publications risks prioritizing volume over quality. Nevertheless, setting a minimum length threshold can act as a quality proxy, as databases like WoS and JOURQUAL typically exclude short articles. Continued vigilance is required to monitor trends that may favor shorter formats.

Notwithstanding the general reservations about using the WoS-IF as a proxy for quality [62], we acknowledge alternative metrics such as the SCImago Journal Rank and CiteScore. We justify the use of the WoS-IF by its frequent application in academic evaluations. Regarding subject-specific rankings such as JOURQUAL [37], it is crucial to consider their subjective nature, in contrast to ostensibly rule-based indicators developed by commercial providers, which, however, may lack transparency. Despite the limitations of the WoS-IF and JOURQUAL, we believe they offer an adequate basis for assessing publication quality.

Due to the considerable effort involved in collecting data from personal publication lists, we limited our sample to experienced researchers at Austrian public universities. While this approach diverges from broader methods in the literature, it allowed us to reduce noise and focus on the qualitative nuances of performance measurement. We excluded early-career researchers (pre-docs and post-docs) intentionally to minimize bias in the interpretation of rankings and their implications.

Additionally, our analysis does not differentiate among the various sub-disciplines within business administration, partly due to the limited sample size. This approach may be problematic in light of diverse evaluation standards across different fields [22]. For instance, publishing in English may not be as advantageous in disciplines such as external accounting, where national legal frameworks prevail. However, for scholars pursuing international careers, a focus on globally relevant themes and English-language publications may be beneficial. While the study's findings may be transferable to other national contexts, applying them across disciplines would require adapting to field-specific publication standards and norms.

Finally, we want to emphasize that while our recommendations for early-career researchers may seem self-evident, even seemingly obvious strategies require empirical confirmation through systematic research.

5. Conclusions

Our study demonstrates that the specification of ranking criteria can substantially influence the measured research performance of individual scholars, confirming findings from earlier studies [22,43,63]. Researchers who aim to be classified as leading in business administration are therefore well advised to be familiar with the diversity and frequency of ranking variants applied in practice. However, accomplishing this is far from straightforward. While rankings commonly used in academic research can be identified through literature analyses, gaining insight into ranking procedures used in university hiring decisions, particularly academic appointments, remains far more challenging. Information on the actual use and design of such methods remains opaque, and the scholarly literature on this topic is limited.

Our study aimed to help close this information gap by drawing on existing academic sources as well as additional insights from appointment processes for senior academic positions. Classifying ranking approaches into quantity-oriented (Category 1) and predominantly quality-oriented (Category 2) variants supported our hypothesis that the latter are more relevant for evaluating research performance in business administration. Indeed, Category 2 rankings yielded substantially greater performance differentials between leading researchers and their peers, highlighting their discriminatory power and practical relevance. Based on these findings, we recommend that early-career researchers align their publication strategies with the logic of Category 2 rankings to strengthen their career prospects.

To provide practical guidance, we synthesised the results across all Category 2 variants to identify consistent traits associated with leading researchers. Most notably, placing articles in journals indexed by Web of Science (WoS) or in discipline-specific rankings such as JOURQUAL (particularly relevant in German-speaking contexts), and publishing in English, significantly enhances visibility and perceived quality. Given the considerable overlap between high-ranking JOURQUAL journals (A+ and A) and the journals indexed in WoS, the use of WoS data alone is generally sufficient for performance evaluations. For highly ranked journals, time-consuming analyses of individual publication lists offer no advantage, as they are unlikely to yield additional relevant information when high-quality journal publications are the sole focus.

Acknowledgments

The authors gratefully acknowledge financial support from their home institution, the Karl-Franzens University Graz. We also thank the reviewers and editors for their valuable comments and constructive feedback, which significantly improved the quality of this manuscript. We also want to take this opportunity to express our heartfelt gratitude to our friend and co-author, Christian Schlögl, who passed away unexpectedly during the preparation of this article.

References

  1. Casu B, Thanassoulis E. Evaluating cost efficiency in central administrative services in UK universities. Omega. 2006;34(5):417–26.
  2. Reichmann G, Sommersguter-Reichmann M. Efficiency measures and productivity indexes in the context of university library benchmarking. Appl Economics. 2010;42(3):311–23.
  3. Berezvai Z, Lukáts GD, Molontay R. Can professors buy better evaluation with lenient grading? The effect of grade inflation on student evaluation of teaching. Assessment & Evaluation in Higher Education. 2020;46(5):793–808.
  4. Böttcher F, Thiel F. Evaluating research-oriented teaching: a new instrument to assess university students’ research competences. High Educ. 2017;75(1):91–110.
  5. Jappe A. Professional standards in bibliometric research evaluation? A meta-evaluation of European assessment practice 2005–2019. PLoS One. 2020;15(4):e0231735. pmid:32310984
  6. Reichmann G, Schlögl C. On the possibilities of presenting the research performance of an institute over a long period of time: the case of the Institute of Information Science at the University of Graz in Austria. Scientometrics. 2022;127(6):3193–223.
  7. de Rijcke S, Wouters PF, Rushforth AD, Franssen TP, Hammarfelt B. Evaluation practices and effects of indicator use—a literature review. Research Evaluation. 2015;25(2):161–9.
  8. Aguillo IF, Bar-Ilan J, Levene M, Ortega JL. Comparing university rankings. Scientometrics. 2010;85(1):243–56.
  9. Schlögl C, Boric S, Reichmann G. Publication and citation patterns of Austrian researchers in operations research and other sub-disciplines of business administration as indexed in Web of Science and Scopus. Cent Eur J Oper Res. 2023;32(3):711–36.
  10. Lazaridis T. Ranking university departments using the mean h-index. Scientometrics. 2009;82(2):211–6.
  11. Abramo G, D’Angelo CA, Di Costa F. The effects of gender, age and academic rank on research diversification. Scientometrics. 2017;114(2):373–87.
  12. Vavryčuk V. Fair ranking of researchers and research teams. PLoS One. 2018;13(4):e0195509. pmid:29621316
  13. Stock W, Dorsch I, Reichmann G, Schlögl C. Counting research publications, citations, and topics: A critical assessment of the empirical basis of scientometrics and research evaluation. J Inf Sci Theory Pract. 2023;11:37–66.
  14. Rost K, Frey BS. Quantitative and Qualitative Rankings of Scholars. Schmalenbach Bus Rev. 2011;63(1):63–91.
  15. Moed H. Citation analysis in research evaluation. Dordrecht: Springer; 2005.
  16. Ayaita A, Pull K, Backes-Gellner U. You get what you ‘pay’ for: academic attention, career incentives and changes in publication portfolios of business and economics researchers. J Bus Econ. 2017;89(3):273–90.
  17. Robinson-García N, Calero-Medina C. What do university rankings by fields rank? Exploring discrepancies between the organizational structure of universities and bibliometric classifications. Scientometrics. 2013;98(3):1955–70.
  18. Dorsch I, Askeridis J, Stock W. Truebounded, overbounded, or underbounded? Scientists’ personal publication lists versus lists generated through bibliographic information services. Publications. 2018;6(1):7.
  19. Aguillo IF. Is Google Scholar useful for bibliometrics? A webometric analysis. Scientometrics. 2011;91(2):343–51.
  20. Birkle C, Pendlebury DA, Schnell J, Adams J. Web of Science as a data source for research on scientific and scholarly activity. Quantitative Science Stud. 2020;1(1):363–76.
  21. Baas J, Schotten M, Plume A, Côté G, Karimi R. Scopus as a curated, high-quality bibliometric data source for academic research in quantitative science studies. Quantitative Science Studies. 2020;1(1):377–86.
  22. Meyer M, Waldkirch RW, Zaggl MA. Relative performance measurement of researchers: The impact of data source selection. Schmalenbach Bus Rev. 2012;64(4):308–30.
  23. De Fraja G, Facchini G, Gathergood J. How much is that star in the window? Professorial salaries and research performance in UK universities. SSRN Journal. 2016.
  24. Hicks D. Performance-based university research funding systems. Research Policy. 2012;41(2):251–61.
  25. Scott LC, Mitias PM. Trends in rankings of economics departments in the U.S.: an update. Economic Inquiry. 1996;34(2):378–400.
  26. Kubiczek J, Derej W, Kantor A. Scientific achievements of economic academic workers in Poland: bibliometric analysis. Ekonomista. 2022;1.
  27. Ketzler R, Zimmermann KF. Publications: German economic research institutes on track. Scientometrics. 2009;80(1):231–52.
  28. Chatzimichael K, Kalaitzidakis P, Tzouvelekas V. Measuring the publishing productivity of economics departments in Europe. Scientometrics. 2017;113(2):889–908.
  29. Runyan RC, Hyun J. Author and institution rankings in retail research: an analysis of the four retail journals from 1994–2008. The International Review of Retail, Distribution and Consumer Research. 2009;19(5):571–86.
  30. Huang M, Chang Y. Characteristics of research output in social sciences and humanities: From a research evaluation perspective. J Am Soc Inf Sci. 2008;59(11):1819–28.
  31. Stockemer D, Wigginton MJ. Publishing in English or another language: An inclusive study of scholars’ language publication preferences in the natural, social and interdisciplinary sciences. Scientometrics. 2019;118(2):645–52.
  32. Zhu J, Hassan S-U, Mirza HT, Xie Q. Measuring recent research performance for Chinese universities using bibliometric methods. Scientometrics. 2014;101(1):429–43.
  33. Waltman L. A review of the literature on citation impact indicators. Journal of Informetrics. 2016;10(2):365–91.
  34. Sivertsen G, Rousseau R, Zhang L. Measuring scientific contributions with modified fractional counting. J Informetrics. 2019;13(2):679–94.
  35. Albers S. Misleading rankings of research in business. German Economic Review. 2009;10(3):352–63.
  36. Fabel O, Hein M, Hofmeister R. Research productivity in business economics: An investigation of Austrian, German and Swiss universities. German Economic Review. 2008;9(4):506–31.
  37. Schrader U, Hennig-Thurau T. VHB-JOURQUAL2: Method, results, and implications of the German Academic Association for Business Research’s journal ranking. Bus Res. 2009;2(2):180–204.
  38. Balaban AT. Positive and negative aspects of citation indices and journal impact factors. Scientometrics. 2012;92(2):241–7.
  39. Garfield E. The history and meaning of the journal impact factor. JAMA. 2006;295(1):90–3. pmid:16391221
  40. Buehling K. Changing research topic trends as an effect of publication rankings – The case of German economists and the Handelsblatt Ranking. Journal of Informetrics. 2021;15(3):101199.
  41. Hofmeister R, Ursprung HW. Das Handelsblatt Ökonomen-Ranking 2007: Eine kritische Beurteilung [The Handelsblatt economist ranking 2007: A critical appraisal]. Perspektiven der Wirtschaftspolitik. 2008;9(3):254–66.
  42. Fülbier RU, Weller M. A glance at German financial accounting research between 1950 and 2005: a publication and citation analysis. Schmalenbach Bus Rev. 2011;63(1):2–33.
  43. Rost K, Frey BS. Quantitative and Qualitative Rankings of Scholars. Schmalenbach Bus Rev. 2011;63(1):63–91.
  44. Macharzina K, Wolf J, Rohn A. Quantitative evaluation of German research output in business administration: 1992–2001. Manag Int Rev. 2004;44:335–59.
  45. Verleysen FT, Engels TCE. How arbitrary are the weights assigned to books in performance-based research funding? An empirical assessment of the weight and size of monographs in Flanders. AJIM. 2018;70(6):660–72.
  46. Kao C, Hung H. Efficiency analysis of university departments: An empirical study. Omega. 2008;36(4):653–64.
  47. Korkeamäki T, Sihvonen J, Vähämaa S. Evaluating publications across business disciplines: Inferring interdisciplinary “exchange rates” from intradisciplinary author rankings. J Business Res. 2018;84:220–32.
  48. Usman M, Mustafa G, Afzal MT. Ranking of author assessment parameters using logistic regression. Scientometrics. 2020;126(1):335–53.
  49. Fiedler M, Welpe IM, Lindlbauer K, Sattler K. Denn wer da hat, dem wird gegeben: Publikationsproduktivität des BWL-Hochschullehrernachwuchses und deren wissenschaftlicher Betreuer [For whoever has, to him shall be given: Publication productivity of early-career business administration academics and their supervisors]. Z Betriebswirtsch. 2008;78(5):477–508.
  50. Coulthard D, Keller S. Publication anxiety, quality, and journal rankings: Researcher views. AJIS. 2016;20.
  51. Kwiek M, Roszka W. Once highly productive, forever highly productive? Full professors’ research productivity from a longitudinal perspective. High Educ. 2023;87(3):519–49.
  52. Mingers J, Yang L. Evaluating journal quality: A review of journal citation indicators and ranking in business and management. arXiv. 2016.
  53. van Dalen HP. How the publish-or-perish principle divides a science: the case of economists. Scientometrics. 2020;126(2):1675–94.
  54. Burton FG, Heninger WG, Summers SL, Wood DA. Perceptions of accounting academics on the review and publication process: An update and commentary. Issues in Accounting Education. 2024;39(1):29–45.
  55. Haslam N, Laham SM. Quality, quantity, and impact in academic publication. Euro J Social Psych. 2009;40(2):216–20.
  56. Wei F, Zhang G. Measuring the scientific publications of double first‐class universities from mainland China. Learned Publishing. 2020;33(3):230–44.
  57. Xu W, Poole A. ‘Academics without publications are just like imperial concubines without sons’: the ‘new times’ of Chinese higher education. Journal of Education Policy. 2023;39(6):861–78.
  58. Paul J. Publishing in premier journals with high impact factor and Q1 journals: Dos and don’ts. Int J Consumer Studies. 2024;48(3).
  59. Proserpio L, Kandiko Howson C, Lall M. The university ranking game in East Asia: the sensemaking of academic leaders between pressures and fatigue. Asia Pacific Educ Rev. 2024;26(1):159–71.
  60. Gauffriau M. Counting methods introduced into the bibliometric research literature 1970–2018: A review. Quantitative Science Stud. 2021;2(3):932–75.
  61. Stock WG, Dorsch I, Reichmann G, Schlögl C. Labor productivity, labor impact, and co-authorship of research institutions: publications and citations per full-time equivalents. Scientometrics. 2022;128(1):363–77.
  62. Migheli M, Ramello GB. The unbearable lightness of scientometric indices. Manage Decis Econ. 2021;42(8):1933–44.
  63. Reichmann G, Schlögl C, Stock W, Dorsch I. Forschungsevaluation auf Institutsebene: Der Einfluss der gewählten Methodik auf die Ergebnisse [Research evaluation at the institute level: The influence of the chosen methodology on the results]. Beiträge zur Hochschulforschung. 2022;44:74–97.
  64. Serenko A, Jiao C. Investigating information systems research in Canada. Can J Adm Sci. 2011;29(1):3–24.
  65. Yu X, Gao Z. An updated ranking of the economic research institutions in China (2000–2009). China Economic Review. 2010;21(4):571–81.