Abstract
University scientific research ability is an important indicator of the strength of universities. In this paper, the evaluation of university scientific research ability is investigated based on the output of sci-tech papers. Four university alliances, from North America, the UK, Australia, and China, are selected as the case study for the evaluation. Data collected from Thomson Reuters InCites are used to support the evaluation. The work contributes a new framework for the evaluation of university scientific research ability. First, we establish a hierarchical structure showing the factors that impact the evaluation of university scientific research ability. Then, a new MCDM method called the D-AHP model is used to implement the evaluation and ranking of the university alliances, in which a data-driven approach is proposed to automatically generate the D numbers preference relations. Next, a sensitivity analysis shows the impact of the weights of factors and sub-factors on the evaluation result. Finally, the results obtained by different methods are compared and discussed to verify the effectiveness and reasonability of this study, and some suggestions are given to promote China’s scientific research ability.
Citation: Zong F, Wang L (2017) Evaluation of university scientific research ability based on the output of sci-tech papers: A D-AHP approach. PLoS ONE 12(2): e0171437. https://doi.org/10.1371/journal.pone.0171437
Editor: Yong Deng, Southwest University, CHINA
Received: September 18, 2016; Accepted: January 20, 2017; Published: February 17, 2017
Copyright: © 2017 Zong, Wang. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: All relevant data are within the paper.
Funding: The work is supported by the Higher Education Research Fund in Northwestern Polytechnical University (Program No.2014GJY06).
Competing interests: The authors have declared that no competing interests exist.
1 Introduction
Research and Development (R & D) ability is a crucial indicator to reflect the innovation capability of a country. Universities, as the highest-level academic institutions, are the most important sources to produce new knowledge and accelerate the advance of human civilization. In order to better promote the development of universities and quantify the performance of universities, the evaluation of university scientific research ability is of great significance [1–5].
In recent years, some organizations have regularly released university rankings, such as the QS World University Rankings, US News Top World University Rankings, Times Higher Education World University Rankings, Academic Ranking of World Universities (ARWU), etc. Within these rankings, Multi-Criteria Decision Making (MCDM) [6–8] is one of the most popular methodologies used to implement the ranking and evaluation of world universities. Some classical MCDM methods include the Analytic Hierarchy and Network Processes (AHP/ANP) [9], the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) [10], the VIKOR method [11], the Preference Ranking Organization Method for Enrichment Evaluation (PROMETHEE) [12], Data Envelopment Analysis (DEA) [13], and so on.
With the rapid elevation of China’s economic strength and international status, the government has invested more and more effort in promoting the research performance of China’s universities. A series of ambitious programs, for example the 211 Project, the 985 Project, and the Double First-Class Project, have been carried out. Many universities have also formed alliances to share educational resources and promote cooperation, so as to rapidly boost their scientific research ability as a whole. Given the vast resources invested in universities, the impact of these projects has drawn wide concern, and the evaluation of the research performance of universities has become an important research field [14, 15]. Zhang et al. [16] assessed the impact of the 985 Project on increasing the rate of publication in international journals at 24 universities by using a regression analysis approach. Departing from the early-stage practice of measuring research performance simply by the Science Citation Index (SCI) [17], Li et al. [18] presented a two-dimensional approach balancing “quantity” and “quality” to evaluate the research performance of universities in Mainland China, Hong Kong, and Taiwan. In [19], the authors developed a framework of performance measure indicators for universities which includes 18 measurement dimensions and 78 performance measure indicators. Chen and Kenney [20] gave a comparative study of the role of universities and research institutes in the development of the Beijing and Shenzhen technology clusters. Moreover, a Chinese perspective on world university ranking, the Academic Ranking of World Universities [21], has been released annually since 2003, which partially provides an evaluation of Chinese universities’ performance compared with other universities around the world.
In this paper, inspired by the idea of MCDM, the evaluation of university scientific research ability is studied. Four famous university alliances are considered: the Association of American Universities (AAU) of North America, the Russell Group (Rg) of the UK, the Group of Eight (Go8) of Australia, and the C9 League (C9) of China. First, the data are collected from a well-known science information dataset, Thomson Reuters InCites [22]. Then, a hierarchical structure for the scientific research ability evaluation is established. The proposed hierarchical structure contains three main aspects: quantity of publications, quality of publications, and influence of papers and subjects. Specifically, the quantity refers to the number of Total Publications (TP); the quality includes three sub-factors, which are Total Citations (TC), Citation Impact (CI), and % Documents Cited (%DC); and the influence is composed of Impact Relative to World (IRW) and Number of Preponderant Disciplines (NPD). After that, a D-AHP approach [23], which is a new AHP method extended by D numbers [24], is applied to implement the evaluation and rank the four university alliances in terms of their sci-tech paper output. Within the evaluation process, a data-driven approach is proposed to automatically generate the D numbers preference relations, which are also called D matrices. Next, a sensitivity analysis is presented to show the impact of the weights of factors and sub-factors on the evaluation result. Finally, the results obtained by different methods are compared and discussed to verify the effectiveness and reasonability of this study, and some suggestions are given to promote China’s scientific research ability.
The remainder of this paper is organized as follows. A brief review of China’s key programs on improving universities’ scientific research ability is given in section 2. A brief introduction to the methodology, including D numbers and the D-AHP approach, is presented in section 3. Then, the evaluation objects and data are collected in section 4. After that, the evaluation process of university scientific research ability using the D-AHP approach is illustrated in section 5. Next, a sensitivity analysis is given in section 6. Comparison and discussion among different methods are shown in section 7. Finally, section 8 concludes the paper.
2 Review of China’s key programs on improving universities’ scientific research ability
With the fast progress of China’s economic strength, higher education, as the intellectual foundation and talent reserve for sustainable development, has been given more and more importance by the Chinese government. Governments, central and local alike, have implemented a series of programs to improve the scientific research ability of China’s universities. Some of the most important programs are reviewed as follows.
Since 1995, the Chinese central government has implemented a project entitled “High-level Universities and Key Disciplinary Fields”, also known as the 211 Project, to create around 100 world-class universities as a national priority for the 21st century to meet the demands of socio-economic development. There are now 112 universities designated as 211 Project institutions, which receive focused support from the government including funding, construction of key laboratories, student enrollment rights, and so on. From 1996 to 2000, during the first phase of the project, approximately 2.2 billion US dollars was distributed among the 211 Project universities [25]. The impact of the project on the participating universities is enormous; a typical case is given in [26], which takes Yanbian University as an example.
In 1998, a project named the 985 Project was announced by Chinese President Jiang Zemin at the centenary celebration of Peking University. The 985 Project is entitled “World Class Universities”, which is exactly consistent with its goal: to build a number of first-rate universities of internationally advanced level. Currently, 39 universities participate in the 985 Project. Zhang et al. [16] presented a work assessing the impact of the 985 Project. According to their research, after the implementation of the 985 Project the growth rate of publications for the 985 Project universities increased more quickly. Additionally, discussion and reflection on the effects of the 985 Project have also drawn attention [27, 28].
The 211 Project and the 985 Project are the two most important projects for improving the research performance of China’s universities; currently, both are closed to the participation of new universities. As the progression and continuation of the 211 Project and the 985 Project, the Higher Education Innovative Capacity Improvement Project, or 2011 Project, was developed in light of Chinese President Hu Jintao’s speech at Tsinghua University in 2011. This project aims to improve the innovation capability of universities and research institutions through a mechanism of collaborative partnerships, so as to speed up the establishment of China as an innovative country generating high-quality and relevant research outcomes. In addition to the projects mentioned above, the central government of China has successively worked out a series of other projects for revitalizing China’s higher education and research and development strength, for example the 111 Project, which aims to attract high-level talents to build a number of world-class innovation bases; the 985 Project Innovation Platform, which endeavors to construct high-level innovation platforms for some designated key disciplines; and the National Basic Ability Construction Project of Western and Central China, which is for the revitalization of higher education in western and central China. A new major plan, unofficially called the “Double First-Class Project”, is now being implemented; it is an upgraded version of the former 985 Project and 211 Project and is designed to construct a number of world-class universities and disciplines by 2020 and 2030.
With the leap of China’s higher education strength, a number of university alliances, analogous to the AAU in the US, the Go8 in Australia, and the Russell Group in the UK, have been formed officially or unofficially. The foremost university alliance in China is the C9 League, which consists of 9 elite universities and is regarded as the Chinese counterpart of the Ivy League. In addition, other famous university alliances in China include the Excellence League, composed of 10 excellent technological universities; the University Alliance of the New Silk Road (UANSR); E8, which consists of 8 key universities located in the Yangtze River delta region; the Federation of Beijing Hi-Tech Universities (12 schools located in Beijing areas); Z14, which is composed of 14 universities from western and central China; etc. Considering the vast investment, how to scientifically evaluate the scientific research ability of different university alliances has become an important issue, which is the concern of this study.
3 Methodology
3.1 D numbers
D numbers [23, 24, 29, 30] are a new model for representing and handling uncertain information, and an effective extension of the basic probability assignment (BPA) of Dempster-Shafer evidence theory [31–36]. Theoretically, D numbers overcome two typical deficiencies of Dempster-Shafer theory, namely the exclusiveness hypothesis and the completeness constraint. Owing to these advantages in dealing with uncertain information, D numbers have attracted increasing attention and been used in environmental impact assessment [29], supplier selection [23], failure mode and effects analysis [37], new product development [38], curtain grouting efficiency assessment [39], etc. Some basic knowledge about D numbers is given as follows.
Definition 1 Let Ω be a finite nonempty set. A D number is a mapping D: 2^Ω → [0, 1] such that
D(∅) = 0 (1)
with
∑_{B⊆Ω} D(B) ≤ 1 (2)
where ∅ is the empty set and B is a subset of Ω.
If ∑_{B⊆Ω} D(B) = 1, the information is complete; if ∑_{B⊆Ω} D(B) < 1, the information is incomplete. An illustrative example of a D number is given below.
Example 1 Suppose a project is assessed and the assessment score is represented by the interval [0, 100]. In the framework of D numbers, an expert may give an assessment of the form D({b1}) = v1, D({b3}) = v2, D({b1, b2, b3}) = v3, where b1 = [0, 20], b2 = [35, 65], b3 = [40, 100]. Here, since D({b1}) + D({b3}) + D({b1, b2, b3}) = 0.9 < 1, the information in this D number is incomplete. More importantly, the elements of the set {b1, b2, b3} are not mutually exclusive in the D number.
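The completeness check of Example 1 can be sketched in a few lines of Python. The individual belief values below are assumed purely for illustration; the paper states only that they sum to 0.9.

```python
# Sketch of a D number over non-exclusive focal elements.
# Individual belief values are hypothetical; only their sum (0.9) is stated.
d_number = {
    ("b1",): 0.5,             # b1 = [0, 20]   (belief assumed for illustration)
    ("b3",): 0.3,             # b3 = [40, 100] (belief assumed for illustration)
    ("b1", "b2", "b3"): 0.1,  # b2 = [35, 65] overlaps b3: not mutually exclusive
}

def is_complete(d, tol=1e-9):
    """A D number is complete iff its beliefs sum to 1."""
    return abs(sum(d.values()) - 1.0) < tol

print(sum(d_number.values()))  # ~0.9: the information is incomplete
print(is_complete(d_number))
```

Unlike a BPA in Dempster-Shafer theory, neither exclusiveness of the focal elements nor completeness of the belief sum is required here.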
For a discrete set Ω = {b1, b2, ⋯, bi, ⋯, bn}, where bi ∈ R and bi ≠ bj if i ≠ j, a special form of D numbers can be expressed by
D({bi}) = vi, i = 1, 2, ⋯, n, (3)
or simply denoted as D = {(b1, v1), (b2, v2), ⋯, (bi, vi), ⋯, (bn, vn)}, where vi > 0 and ∑_{i=1}^{n} vi ≤ 1.
D numbers have the following properties, which come from the literature [29].
Definition 2 Permutation invariability. If there are two D numbers D1 = {(b1, v1), ⋯, (bi, vi), ⋯, (bn, vn)} and D2 = {(bn, vn), ⋯, (bi, vi), ⋯, (b1, v1)}, then D1 ⇔ D2.
Example 2 If there are two D numbers D1 = {(b1, 0.4), (b2, 0.6)} and D2 = {(b2, 0.6), (b1, 0.4)}, then D1 ⇔ D2; the order in which the pairs are listed carries no information.
Definition 3 For D = {(b1, v1), (b2, v2), ⋯, (bi, vi), ⋯, (bn, vn)}, the integration representation of D is defined as
I(D) = ∑_{i=1}^{n} bi vi (4)
where bi ∈ R, vi > 0 and ∑_{i=1}^{n} vi ≤ 1.
Example 3 Let D = {(1, 0.2), (2, 0.1), (3, 0.3), (4, 0.3), (5, 0.1)}. Then I(D) = 1 × 0.2 + 2 × 0.1 + 3 × 0.3 + 4 × 0.3 + 5 × 0.1 = 3.0.
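The integration representation of Example 3 can be checked with a minimal Python sketch:

```python
# Integration representation I(D) = sum(b_i * v_i) of a discrete D number, Eq (4).
def integration(d_pairs):
    """Collapse a discrete D number into a single crisp value."""
    return sum(b * v for b, v in d_pairs)

D = [(1, 0.2), (2, 0.1), (3, 0.3), (4, 0.3), (5, 0.1)]
print(integration(D))  # ~3.0, matching Example 3
```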
In addition, in [24, 29, 38], the authors addressed combination rules for D numbers and a distance function of D numbers. These studies have further enriched the theoretical framework of D numbers.
3.2 D-AHP approach
The D-AHP approach was first proposed in [23] to solve the supplier selection problem under an uncertain environment. As the first model based on D numbers, the D-AHP approach extends the classical AHP method, as shown in Fig 1. Similar to the AHP method, the D-AHP model also has three levels: goal, criteria, and alternatives. And it still uses the weighted averaging method to integrate the weights at each level, as shown in Table 1. However, within the D-AHP model the pairwise comparison matrix is replaced by the D numbers preference relation, which is also called the D matrix.
Essentially, the D matrix is a fuzzy preference relation extended by D numbers. A conventional fuzzy preference relation [40–42] is represented by an n × n matrix
R = [rij]n×n (5)
satisfying (i) rij ≥ 0; (ii) rij + rji = 1, ∀i, j ∈ {1, 2, ⋯, n}; (iii) rii = 0.5, ∀i ∈ {1, 2, ⋯, n}, where rij = μR(Ai, Aj) denotes the preference degree of alternative Ai over alternative Aj. Here, rij = 0 means Aj is absolutely preferred to Ai; rij < 0.5 means Aj is preferred to Ai to some degree; rij = 0.5 means indifference between Ai and Aj; rij > 0.5 means Ai is preferred to Aj to some degree; rij = 1 means Ai is absolutely preferred to Aj. By contrast, a D matrix is
RD = [Dij]n×n (6)
where each entry Dij = {(b1^{ij}, v1^{ij}), ⋯, (bm^{ij}, vm^{ij})} is a D number with bk^{ij} ∈ [0, 1] and ∑k vk^{ij} ≤ 1, and the reciprocal entries satisfy bk^{ji} = 1 − bk^{ij} and vk^{ji} = vk^{ij}. Obviously, Dii = {(0.5, 1.0)}, ∀i ∈ {1, 2, ⋯, n} in RD.
A key point in the D-AHP model is how to obtain the weight of each alternative from the D matrix. To solve that problem, [23] proposed a unified framework to obtain the ranking and weights of alternatives according to a D matrix, as shown in Fig 2. Briefly, it contains four steps.
- First, the D matrix is taken as input and converted into its corresponding crisp matrix Rc by using the integration representation of D numbers given in Eq (4).
- Second, construct a probability matrix Rp based on Rc.
- Third, convert the probability matrix Rp into a triangulated probability matrix.
- Finally, integrate the crisp matrix Rc and the triangulated probability matrix to derive a triangulated crisp matrix, from which the weights of the alternatives are generated.
For more details about the procedure of solving a D matrix, please refer to [23]. In section 5, a numerical example will also illustrate the calculation process in detail.
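The four steps above can be sketched in Python. This is a simplified sketch, not the full method of [23]: in particular, the triangularization step is approximated here by sorting alternatives by their number of pairwise wins, and the toy D matrix uses degenerate single-pair D numbers.

```python
import numpy as np

def integration(d_pairs):
    """Integration representation of a discrete D number, Eq (4)."""
    return sum(b * v for b, v in d_pairs)

def solve_d_matrix(d_matrix):
    """Simplified sketch of the four-step D-AHP solution procedure."""
    n = len(d_matrix)
    # Step 1: crisp matrix Rc via the integration representation.
    rc = np.array([[integration(d_matrix[i][j]) for j in range(n)]
                   for i in range(n)])
    # Step 2: probability matrix Rp with Rp[i, j] = 1 iff Rc[i, j] > 0.5.
    rp = (rc > 0.5).astype(float)
    # Step 3: rank alternatives by pairwise wins, which triangulates Rp
    # (a simplification; [23] uses a dedicated triangularization method).
    order = np.argsort(-rp.sum(axis=1), kind="stable")
    # Step 4: reorder Rc accordingly to get the triangulated crisp matrix.
    rc_t = rc[np.ix_(order, order)]
    return order, rc_t

# Toy 3-alternative D matrix with degenerate (single-pair) D numbers.
d = [
    [[(0.5, 1.0)], [(0.4, 1.0)], [(0.3, 1.0)]],
    [[(0.6, 1.0)], [(0.5, 1.0)], [(0.2, 1.0)]],
    [[(0.7, 1.0)], [(0.8, 1.0)], [(0.5, 1.0)]],
]
order, rc_t = solve_d_matrix(d)
print(order)  # ranking indices, best alternative first
```

For this toy input the third alternative dominates both others, so it ranks first.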
4 Evaluation objects and data
In this paper, four representative university alliances are selected to show the process of evaluating and comparing the scientific research ability of different universities. The four university alliances are: (1) “C9”, an alliance of 9 prestigious Chinese universities including Peking University, Tsinghua University, Fudan University, Shanghai Jiao Tong University, Nanjing University, University of Science and Technology of China, Zhejiang University, Xi’an Jiao Tong University, and Harbin Institute of Technology; (2) “Go8”, a coalition of leading Australian universities, intensive in research and comprehensive in general and professional education, including Monash University, Australian National University, University of Adelaide, University of Melbourne, University of Queensland, University of Sydney, University of Western Australia, and UNSW Australia; (3) “Rg”, a group of 24 leading universities in the UK; and (4) “AAU”, a nonprofit organization that comprises 62 leading public and private research universities in the United States and Canada.
With respect to the data, they come from a well-known science information dataset, Thomson Reuters InCites. In this study, we collected the related data from 2003 to 2013. These data fall into three categories: the quantity of papers, the quality of papers, and the influence of papers and subjects. They are introduced in detail in the InCites Indicators Handbook [22]. The related indicators are described briefly as follows.
4.1 “Quantity”
In this paper, quantity is the number of Total Publications (TP) within a period of time. Table 2 gives the quantity of published papers for the four university alliances from 2003 to 2013.
4.2 “Quality”
The quality of papers includes three sub-factors: Total Citations (TC), Citation Impact (CI), and % Documents Cited (%DC). Total Citations is the number of citations received within a period of time. The Citation Impact of a set of publications is calculated by dividing the total number of citations by the total number of publications; it shows the average number of citations that a publication has received. The %DC indicator is the percentage of publications in a set that have received at least one citation. The “Quality” data for the four university alliances are collected in Table 3.
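The two derived indicators above can be sketched directly; the numbers in the usage lines are hypothetical, chosen only to illustrate the definitions.

```python
# Citation Impact (CI) and % Documents Cited (%DC) as defined above.
def citation_impact(total_citations, total_publications):
    """Average number of citations per publication."""
    return total_citations / total_publications

def pct_documents_cited(cited_count, total_publications):
    """Percentage of publications cited at least once."""
    return 100.0 * cited_count / total_publications

# Hypothetical example values (not from the paper's tables).
print(citation_impact(34_000, 17_000))   # -> 2.0 citations per paper
print(pct_documents_cited(850, 1000))    # -> 85.0 percent
```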
4.3 “Influence”
The influence includes two aspects. One is the Impact Relative to World (IRW), which is the Citation Impact of a set of documents divided by the world Citation Impact for a given period of time. This indicator shows the impact of the research in relation to the impact of global research and is an indicator of relative research performance. The world average is always equal to one. If the numerical value of the IRW exceeds one, the assessed entity performs above the world average; if it is less than one, the entity performs below the world average. Table 4 gives the IRW for the four university alliances AAU, Rg, Go8, and C9.
The other is the Number of Preponderant Disciplines (NPD), which is based on the IRW in particular subject areas. Table 5 gives the IRW of each university alliance in different disciplines. For a discipline, if the numerical value of its IRW is greater than one, we say that it is a preponderant discipline of the university alliance. Therefore, the NPD can serve as an indicator of the research strength of an institution. From Table 5, the NPD of AAU, Rg, Go8, and C9 are 22, 22, 20, and 3, respectively.
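The IRW and NPD definitions can be sketched as follows. The discipline names follow Table 5 (C9’s three preponderant disciplines), but the IRW values themselves are hypothetical.

```python
# IRW = entity CI / world CI; a discipline with IRW > 1 is preponderant.
def impact_relative_to_world(entity_ci, world_ci):
    """Ratio of an entity's Citation Impact to the world Citation Impact."""
    return entity_ci / world_ci

def npd(irw_by_discipline):
    """Number of Preponderant Disciplines: count of disciplines with IRW > 1."""
    return sum(1 for irw in irw_by_discipline.values() if irw > 1.0)

# Hypothetical IRW values per discipline (Table 5 holds the real ones).
irw = {"Mathematics": 1.10, "Agricultural Sciences": 1.05,
       "Plant & Animal Science": 1.02, "Chemistry": 0.90}
print(npd(irw))  # -> 3 preponderant disciplines
```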
5 Evaluation of university scientific research ability using the D-AHP approach
In this section, the process of using the D-AHP approach to evaluate university scientific research ability is illustrated based on the data collected in the previous section.
5.1 Hierarchical structure for the scientific research ability evaluation
By consulting domain experts, we built a hierarchical structure for the scientific research ability evaluation, which mainly determines the relative weight of each factor at each level, as shown in Fig 3. According to Fig 3, the absolute weight of each sub-factor can be calculated, as given in Table 7. From Table 7, NPD has the largest weight in the scientific research ability evaluation, and TC has the smallest. Next, we can use the D-AHP approach to evaluate the scientific research ability of the different university alliances.
5.2 Construction of D matrix
In order to implement the scientific research ability evaluation based on the D-AHP approach, the key step is to construct the D numbers preference relation, namely the D matrix. In this paper, a data-driven approach is proposed to generate the D matrix as follows.
Let us take the preference relation between AAU and Rg as an example. According to Table 6, the TP of AAU is 2,071,303 and that of Rg is 629,399, so their sum is 2,700,702, of which AAU accounts for 76.69% and Rg for 23.31%. This implies that, on factor TP, AAU performs better than Rg with a preference degree of 0.7669, and Rg performs better than AAU with a preference degree of 0.2331. Therefore, u(AAU, Rg) = 0.7669 and u(Rg, AAU) = 0.2331. Since the absolute weight of TP is 0.20, the belief assigned to u(AAU, Rg) = 0.7669 is 0.20. Similarly, we have:
- On TC, the belief of u(AAU, Rg) = 0.7855 is 0.06;
- On CI, the belief of u(AAU, Rg) = 0.5267 is 0.20;
- On %DC, the belief of u(AAU, Rg) = 0.5029 is 0.14;
- On IRW, the belief of u(AAU, Rg) = 0.5263 is 0.16;
- On NPD, the belief of u(AAU, Rg) = 0.50 is 0.24.
As a result, the D number denoting the preference degree of AAU over Rg is
D(AAU, Rg) = {(0.7669, 0.20), (0.7855, 0.06), (0.5267, 0.20), (0.5029, 0.14), (0.5263, 0.16), (0.5, 0.24)}. (7)
In this way, the D numbers preference relations (the D matrix) among AAU, Rg, Go8, and C9 can be derived, as given in Table 8.
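The data-driven construction of this D-matrix entry can be sketched in Python. The TP figures are from the text; the remaining preference degrees and the beliefs (the absolute sub-factor weights) are those listed above.

```python
# Data-driven construction of one D-matrix entry (AAU vs Rg), as in Eq (7).
def preference_degree(value_a, value_b):
    """Share-based preference degree of A over B on one indicator."""
    return value_a / (value_a + value_b)

tp_aau, tp_rg = 2_071_303, 629_399
u_tp = preference_degree(tp_aau, tp_rg)  # ~0.7669

# Pairs of (preference degree, belief = absolute weight of the indicator).
d_aau_rg = [(u_tp,   0.20),   # TP
            (0.7855, 0.06),   # TC
            (0.5267, 0.20),   # CI
            (0.5029, 0.14),   # %DC
            (0.5263, 0.16),   # IRW
            (0.5000, 0.24)]   # NPD
print(sum(v for _, v in d_aau_rg))  # ~1.0: beliefs sum to one (complete)
```

Because the six sub-factor weights sum to 1, each D-matrix entry built this way carries complete information.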
5.3 Solving the D matrix
Once the D matrix has been constructed, the approach shown in Fig 2 can be used to solve it so as to obtain the priority weights and ranking of university alliances. Let us present the process step by step.
First, based on Eq (4), the D matrix shown in Table 8 is converted into a crisp matrix Rc. (8)
Second, according to the crisp matrix Rc, we generate a probability matrix Rp to represent the preference probability between pairwise alternatives. The rule is: (i) Rp(Ai ≻ Aj) = 1 if Rc(i, j) > 0.5; (ii) Rp(Ai ≻ Aj) = 0 if Rc(i, j) ≤ 0.5. Hence, for the order (AAU, Rg, Go8, C9),
Rp =
0 1 1 1
0 0 1 1
0 0 0 1
0 0 0 0 (9)
Third, convert the probability matrix Rp into a triangulated probability matrix using the triangularization method [23]. In this example, the triangulated matrix has the same form as Rp, namely (10)
According to the triangulated probability matrix, the ranking of the university alliances is obtained:
AAU ≻ Rg ≻ Go8 ≻ C9, (11)
which means that AAU has the best scientific research ability, C9 has the worst performance, and Rg and Go8 are located in the middle. The ranking is only a qualitative result. Based on the D-AHP approach, the quantitative priority weight of each university alliance can be obtained next.
Fourth, calculate the priority weights of the university alliances. A triangulated crisp matrix is derived by integrating the crisp matrix Rc and the triangulated probability matrix: (12)
In the triangulated crisp matrix, the elements directly above the main diagonal (namely 0.5805, 0.5868, and 0.6260) indicate the weight relationships of the university alliances. We have (13) By solving the above equations, we obtain (14) where the parameter λ expresses the credibility of the information. If the comparison information is provided by an authoritative expert, λ takes a smaller value; if it comes from an expert whose judgment carries low belief, λ takes a higher value. An increase of λ reflects a decline in the expert’s ability to discern slight differences; as a result, the weights of the alternatives move closer to each other. Fig 4 shows the priority weight of each university alliance as λ changes.
With respect to the selection of λ, the authors of [23] proposed a scheme to determine its value: (15) where a lower bound of λ is defined in terms of n, the number of alternatives.
In this study, we do not develop a new scheme to determine the value of λ, but simply use the scheme presented in [23]. According to that scheme, we have: (i) λ = 1 if the information has high credibility; (ii) λ = 4 if the information has medium credibility; (iii) λ = 8 if the information has low credibility. The weights associated with different information credibility can thus be obtained, as shown in Table 9.
For the sake of comparison, we normalize all weights to the interval [0, 100] by dividing by the maximum one, and the results are shown in Table 10. From Table 10, we find that AAU always has the highest score, which indicates that it has the best scientific research ability. By contrast, C9’s scores are always the lowest; in particular, its score is 23.9 under high information credibility. Therefore, the results show that C9 falls behind the other university alliances in scientific research ability, and the overall ranking is AAU ≻ Rg ≻ Go8 ≻ C9.
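The normalization used for Table 10 can be sketched as follows; the weight values in the usage line are hypothetical placeholders, not the paper’s computed weights.

```python
def normalize_scores(weights):
    """Scale a list of weights to [0, 100] by dividing by the maximum."""
    m = max(weights)
    return [100.0 * w / m for w in weights]

# Hypothetical priority weights for (AAU, Rg, Go8, C9).
print(normalize_scores([0.40, 0.28, 0.22, 0.10]))  # top entry becomes 100.0
```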
6 Sensitivity analysis
In this section, several different settings of the factors’ weights are investigated to study the impact of weight changes on the evaluation result. Note that we only compare the results for the situation of high information credibility in the D-AHP approach.
6.1 Reducing the weight of Quantity
Some experts may argue that the weight of Quantity, which is 0.2 as shown in Fig 3, is too high. We therefore reduce it to 0.1 and assign the remaining 0.1 to Quality or Influence, respectively. Case 1 denotes Weight(Quantity, Quality, Influence) = (0.2, 0.4, 0.4), Case 2 denotes (0.1, 0.5, 0.4), and Case 3 denotes (0.1, 0.4, 0.5). The new results are given in Table 11.
From Table 11, it is found that reducing the weight of Quantity obviously increases the scores of Rg and Go8 in both Case 2 and Case 3; however, it slightly increases the score of C9 in Case 2 and decreases it in Case 3. These results imply that AAU has a distinct advantage in Quantity. If the importance of Quantity is reduced, Rg and Go8 can narrow the gap with AAU. However, this measure does not always work for C9, which must invest more effort in enhancing its Influence in the future.
6.2 Reducing the weight of %DC and increasing the weight of CI
In this case, we reduce the weight of %DC and increase the weight of CI, keeping the weight of TC unchanged. The new results are given in Table 12, where Case 1 denotes Weight(TC, CI, %DC) = (0.15, 0.50, 0.35), Case 2 denotes (0.15, 0.60, 0.25), Case 3 denotes (0.15, 0.70, 0.15), and Case 4 denotes (0.15, 0.80, 0.05).
According to Table 12, as the weight of %DC decreases and the weight of CI increases, the gap between Rg and AAU slightly widens, as does the gap between Go8 and AAU; however, the gap between C9 and AAU widens markedly. Therefore, the gap between C9 and AAU on CI is more pronounced than that on %DC. So, in order to enhance C9’s Quality more quickly, decision makers should pay more attention to promoting the citation impact of papers.
7 Comparison and discussion
In this section, the results obtained by using the D-AHP approach are compared with those obtained by other methods, to verify the effectiveness and reasonability of this study. Moreover, the performance of the university alliances on each factor is assessed to explore measures for promoting their scientific research ability.
First, Table 13 compares the university alliances’ scientific research ability under different methods: the D-AHP, the conventional AHP [9], and TOPSIS [10]. The D-AHP results correspond to the case of high information credibility. In the AHP method, the pairwise comparison matrix is generated by converting the crisp matrix in Eq (8) using the transformation aij = 3^{2(2rij − 1)} [43]; the classical eigenvector method [44] is then employed to calculate the weight of each alliance, and finally all weights are normalized to [0, 100] by dividing by the maximum one. TOPSIS is also a very popular MCDM method; the process of applying it to MCDM problems can be found in [45]. In this paper, the classical crisp-valued TOPSIS method is used since the collected data given in Table 6 are crisp values. From Table 13, these methods generate the same ranking AAU ≻ Rg ≻ Go8 ≻ C9, which verifies the reasonability of the results obtained by the D-AHP approach. In addition, by investigating the concrete values in Table 13, we find that the scores generated by the D-AHP and AHP are similar, but the score of 2.2 from TOPSIS is questionable: if AAU’s performance is set to 100, C9’s score under TOPSIS is only 2.2, which is counterintuitive. Therefore, the D-AHP and AHP are more effective in this application.
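The AHP comparison above can be sketched in Python: a fuzzy preference value rij is mapped to a multiplicative judgment via aij = 3^{2(2rij − 1)}, and the principal eigenvector is approximated by power iteration. The fuzzy preference matrix below is hypothetical, not the paper’s Eq (8).

```python
import numpy as np

def to_multiplicative(r):
    """Map fuzzy preference values r_ij to multiplicative judgments [43]."""
    return 3.0 ** (2.0 * (2.0 * np.asarray(r) - 1.0))

def principal_eigenvector(a, iters=200):
    """Approximate the principal eigenvector by power iteration [44]."""
    w = np.ones(a.shape[0])
    for _ in range(iters):
        w = a @ w
        w /= w.sum()
    return w

# Hypothetical fuzzy preference matrix (r_ij + r_ji = 1, r_ii = 0.5).
r = [[0.50, 0.58, 0.59, 0.63],
     [0.42, 0.50, 0.52, 0.57],
     [0.41, 0.48, 0.50, 0.55],
     [0.37, 0.43, 0.45, 0.50]]
w = principal_eigenvector(to_multiplicative(r))
scores = 100.0 * w / w.max()  # normalized to [0, 100] as in Table 13
print(scores)
```

Note that the transformation preserves reciprocity: aij × aji = 3^{2(2rij − 1)} × 3^{2(2rji − 1)} = 3^0 = 1 whenever rij + rji = 1.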
Second, let us investigate the scores of the university alliances on each assessment factor. Tables 14, 15 and 16 correspond to the cases of D-AHP, AHP with the eigenvector method, and TOPSIS, respectively. These results are graphically illustrated in Fig 5. In Figs 5(a) and 5(b), associated with the use of D-AHP and AHP respectively, AAU scores 100 on every assessment factor, and C9 always performs the worst on all factors except TP, where Go8 does the worst; Rg and Go8 are in the middle in most cases. Considering C9 in particular, it is very close to the other university alliances on %DC but falls far behind in the other aspects. The score rankings of C9 on these factors are TP < TC < NPD < IRW < CI < %DC in the case of D-AHP and TC < TP < NPD < IRW < CI < %DC in the case of AHP. The two rankings are basically consistent and provide valuable reference for reducing the gap between C9 and world first-class university alliances. For China’s policy makers:
- The quality of publications should be increasingly emphasized through a variety of means, because the score on TC is very low, which means that these publications do not attract much attention. The reasons are complicated; for example, domestic researchers may devote too much interest to outdated research topics or fields. In response, policy makers could reduce the funding support for such fields so as to steer researchers toward new research directions.
- The quantity of publications can be given less attention. Although the score of C9 on TP is very low, C9 consists of just nine universities. Compared with Go8, which has 8 affiliated universities, the total publications of C9 already show a slight advantage. AAU and Rg get high scores because they comprise more universities. Therefore, C9 just needs to keep its current growth rate of publications.
- The coordinated and balanced development of multiple disciplines must be encouraged with much greater strength. According to the rankings, the NPD score of C9 is the third-lowest. From Table 5, C9 has only three preponderant disciplines, namely “Agricultural Sciences”, “Mathematics” and “Plant & Animal Science”. On one hand, the number of preponderant disciplines is small; on the other hand, these preponderant disciplines are all traditional ones. Therefore, policy makers must pay more attention to the development of emerging disciplines by various means so as to realize the coordinated and balanced development of multiple disciplines.
Correspondingly, according to Fig 5(c) for the case of TOPSIS, although the ranking of university alliances on each factor is the same as in the cases of D-AHP and AHP, the score of Go8 on TP and the scores of C9 on all factors except TP are all 0, which is obviously unreasonable. Moreover, based on these scores, the performance of C9 on the factors TC, CI, %DC, IRW, and NPD cannot be differentiated.
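One way to see why these zeros arise, assuming the per-factor scores were obtained by applying TOPSIS to each criterion separately: on a single criterion, the distance to the positive ideal is (max − x) and to the negative ideal is (x − min), so the relative closeness collapses to min-max scaling and the worst performer always scores exactly 0. The values below are illustrative, not the paper’s data.

```python
def topsis_single_criterion(values):
    """Relative closeness for TOPSIS restricted to one benefit criterion.

    C = d_neg / (d_pos + d_neg) = (x - min) / (max - min),
    i.e. plain min-max scaling: the worst alternative gets exactly 0.
    """
    lo, hi = min(values), max(values)
    return [(x - lo) / (hi - lo) for x in values]

# Illustrative per-factor data for four alternatives
scores = topsis_single_criterion([120.0, 95.0, 80.0, 40.0])
print(scores)  # smallest value maps to 0.0, largest to 1.0
```

This also explains why several alternatives can be pinned at 0 across factors: being worst on a criterion is enough, regardless of how close the actual values are.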
Through the above two aspects of comparison, the effectiveness and reasonability of using the D-AHP in this study are demonstrated. By contrast, the conventional TOPSIS is not appropriate for this work since it generates many counterintuitive results. The AHP method can produce reasonable results, but since the collected data given in Table 6 are not in the form of a pairwise comparison matrix, the AHP method cannot be applied directly. Therefore, the D-AHP approach is more suitable than the AHP for this study.
8 Conclusion
In this paper, the issue of university scientific research ability evaluation has been studied. Four university alliances, namely AAU from North America, Rg from the UK, Go8 from Australia, and C9 from China, have been chosen to illustrate the evaluation process. Data from InCites were collected first. Then, a hierarchical structure was built for the evaluation task. Within the study, a data-driven approach was proposed to automatically construct the D matrix. After that, a new MCDM method called the D-AHP model was utilized to evaluate and rank the scientific research ability of these university alliances. Next, a sensitivity analysis was conducted on the weights of factors and sub-factors within the established hierarchical structure of evaluation. Finally, the results obtained by different methods were compared and discussed to verify the effectiveness and reasonability of this study, and some suggestions were given to promote China’s scientific research ability. The contributions of this work are as follows. First, a new framework for university scientific research ability evaluation is constructed, which can be extended and enriched in other evaluation tasks of universities in the future. Second, a data-driven approach is proposed to automatically generate the D numbers preference relations, which is an original contribution to research on D numbers. Third, the latest data, covering 2003–2013, are used to evaluate the scientific research ability of C9, which gives fresh information on the research performance of C9. Fourth, some suggestions to improve China’s scientific research ability, for example emphasizing the quality of publications and focusing on the coordinated and balanced development of multiple disciplines, are given based on the analysis of concrete data.
The limitation of this study is that the established assessment indicator structure is mainly based on universities’ performance on publications, which is not sufficient for a comprehensive evaluation of universities. Future research will improve the assessment indicator structure to make it more comprehensive and rational.
Acknowledgments
The work is supported by the Higher Education Research Fund in Northwestern Polytechnical University (Program No.2014GJY06).
Author Contributions
- Conceptualization: FZ LW.
- Data curation: FZ LW.
- Formal analysis: FZ LW.
- Methodology: FZ LW.
- Software: FZ.
- Supervision: FZ LW.
- Validation: FZ LW.
- Visualization: FZ.
- Writing – original draft: FZ.
- Writing – review & editing: FZ LW.
References
- 1. Jaffe AB. Real effects of academic research. The American Economic Review. 1989;79(5):957–970.
- 2. Geuna A, Martin BR. University research evaluation and funding: An international comparison. Minerva. 2003;41(4):277–304.
- 3. Hicks D. Evolving regimes of multi-university research evaluation. Higher Education. 2009;57(4):393–404.
- 4. Moed HF. Citation Analysis in Research Evaluation. vol. 9. Springer Science & Business Media; 2006.
- 5. Jin B, Rousseau R. Evaluation of research performance and scientometric indicators in China. In: Handbook of Quantitative Science and Technology Research. Springer; 2004. p. 497–514. https://doi.org/10.1007/1-4020-2755-9_23
- 6. Triantaphyllou E. Multi-criteria decision making methods: a comparative study. vol. 44. Springer Science & Business Media; 2013.
- 7. Kahraman C. Fuzzy multi-criteria decision making: theory and applications with recent developments. vol. 16. Springer Science & Business Media; 2008. https://doi.org/10.1007/978-0-387-76813-7
- 8. Ishizaka A, Nemery P. Multi-criteria decision analysis: methods and software. John Wiley & Sons; 2013. https://doi.org/10.1002/9781118644898
- 9. Saaty TL. Decision making-the analytic hierarchy and network processes (AHP/ANP). Journal of Systems Science and Systems Engineering. 2004;13(1):1–35.
- 10. Hwang CL, Yoon K. Multiple attribute decision making: methods and applications a state-of-the-art survey. vol. 186. Springer Science & Business Media; 2012. https://doi.org/10.1007/978-3-642-48318-9
- 11. Yu PL. A class of solutions for group decision problems. Management Science. 1973;19(8):936–946.
- 12. Brans JP, Vincke P, Mareschal B. How to select and how to rank projects: The PROMETHEE method. European Journal of Operational Research. 1986;24(2):228–238.
- 13. Cooper WW, Seiford LM, Zhu J. Data envelopment analysis. In: Handbook on data envelopment analysis. Springer; 2004. p. 1–39. https://doi.org/10.1007/1-4020-7798-X_1
- 14. Liang L, Wu Y. Selection of databases, indicators and models for evaluating research performance of Chinese universities. Research Evaluation. 2001;10(2):105–113.
- 15. Feng Y, Lu H, Bi K. An AHP/DEA method for measurement of the efficiency of R&D management activities in universities. International Transactions in Operational Research. 2004;11(2):181–191.
- 16. Zhang H, Patton D, Kenney M. Building global-class universities: Assessing the impact of the 985 Project. Research Policy. 2013;42(3):765–775.
- 17. Moed H. Measuring China’s research performance using the Science Citation Index. Scientometrics. 2002;53(3):281–296.
- 18. Li F, Yi Y, Guo X, Qi W. Performance evaluation of research universities in Mainland China, Hong Kong and Taiwan: based on a two-dimensional approach. Scientometrics. 2012;90(2):531–542.
- 19. Chen SH, Wang HH, Yang KJ. Establishment and application of performance measure indicators for universities. The TQM Journal. 2009;21(3):220–235.
- 20. Chen K, Kenney M. Universities/research institutes and regional innovation systems: The cases of Beijing and Shenzhen. World Development. 2007;35(6):1056–1074.
- 21. Liu NC, Cheng Y. The academic ranking of world universities. Higher Education in Europe. 2005;30(2):127–136.
- 22. InCites Indicators Handbook. Thomson Reuters.
- 23. Deng X, Hu Y, Deng Y, Mahadevan S. Supplier selection using AHP methodology extended by D numbers. Expert Systems with Applications. 2014;41(1):156–167.
- 24. Deng Y. D Numbers: Theory and applications. Journal of Information and Computational Science. 2012;9(9):2421–2428.
- 25. Lixu L. China’s higher education reform 1998–2003: A summary. Asia Pacific Education Review. 2004;5(1):14–22.
- 26. Choi S. Globalization, China’s drive for world-class universities (211 Project) and the challenges of ethnic minority higher education: the case of Yanbian university. Asia Pacific Education Review. 2010;11(2):169–178.
- 27. Qi W. A discussion on the 985 Project from a comparative perspective. Chinese Education & Society. 2011;44(5):41–56.
- 28. Ying C. A Reflection on the Effects of the 985 Project. Chinese Education & Society. 2011;44(5):19–30.
- 29. Deng X, Hu Y, Deng Y, Mahadevan S. Environmental impact assessment based on D numbers. Expert Systems with Applications. 2014;41(2):635–643.
- 30. Deng X, Lu X, Chan FT, Sadiq R, Mahadevan S, Deng Y. D-CFPR: D numbers extended consistent fuzzy preference relations. Knowledge-Based Systems. 2015;73:61–68.
- 31. Dempster AP. Upper and lower probabilities induced by a multivalued mapping. Annals of Mathematical Statistics. 1967;38(2):325–339.
- 32. Shafer G. A Mathematical Theory of Evidence. Princeton: Princeton University Press; 1976.
- 33. Smets P, Kennes R. The transferable belief model. Artificial Intelligence. 1994;66(2):191–234.
- 34. Deng X, Han D, Dezert J, Deng Y, Shyr Y. Evidence combination from an evolutionary game theory perspective. IEEE Transactions on Cybernetics. 2016;46(9):2070–2082. pmid:26285231
- 35. Yager RR. Combining various types of belief structures. Information Sciences. 2015;303:83–100.
- 36. Yager RR, Alajlan N. Dempster-Shafer belief structures for decision making under uncertainty. Knowledge-Based Systems. 2015;80:58–66.
- 37. Liu HC, You JX, Fan XJ, Lin QL. Failure mode and effects analysis using D numbers and grey relational projection method. Expert Systems with Applications. 2014;41(10):4670–4679.
- 38. Li M, Hu Y, Zhang Q, Deng Y. A novel distance function of D numbers and its application in product engineering. Engineering Applications of Artificial Intelligence. 2016;47:61–67.
- 39. Fan G, Zhong D, Yan F, Yue P. A hybrid fuzzy evaluation method for curtain grouting efficiency assessment based on an AHP method extended by D numbers. Expert Systems with Applications. 2016;44:289–303.
- 40. Tanino T. Fuzzy preference orderings in group decision making. Fuzzy Sets and Systems. 1984;12(12):117–131.
- 41. Herrera-Viedma E, Herrera F, Chiclana F, Luque M. Some issues on consistency of fuzzy preference relations. European Journal of Operational Research. 2004;154(1):98–109.
- 42. Xu Z. A survey of preference relations. International Journal of General Systems. 2007;36(2):179–203.
- 43. Herrera-Viedma E, Herrera F, Chiclana F, Luque M. Some issues on consistency of fuzzy preference relations. European Journal of Operational Research. 2004;154(1):98–109.
- 44. Saaty TL. Decision-making with the AHP: Why is the principal eigenvector necessary. European Journal of Operational Research. 2003;145(1):85–91.
- 45. Dymova L, Sevastjanov P, Tikhonenko A. A direct interval extension of TOPSIS method. Expert Systems with Applications. 2013;40(12):4841–4847.