
Research on performance and dynamic competency evaluation of bid evaluation experts based on weight interval number

  • Tie Li,

    Roles Data curation, Investigation, Methodology, Visualization, Writing – original draft, Writing – review & editing

    Affiliation Faculty of Civil Engineering and Mechanics, Kunming University of Science and Technology, Kunming, Yunnan Province, China

  • Guoliang Li ,

    Roles Funding acquisition, Methodology, Project administration, Supervision

    liguoliang365@126.com

    Affiliation Faculty of Civil Engineering and Mechanics, Kunming University of Science and Technology, Kunming, Yunnan Province, China

  • Mi Zhang,

    Roles Supervision, Writing – original draft, Writing – review & editing

    Affiliation Faculty of Civil Engineering and Mechanics, Kunming University of Science and Technology, Kunming, Yunnan Province, China

  • Yuan Qin,

    Roles Methodology, Software, Visualization

    Affiliation Faculty of Civil Engineering and Mechanics, Kunming University of Science and Technology, Kunming, Yunnan Province, China

  • Guolong Wei

    Roles Investigation, Methodology

    Affiliation Faculty of Civil Engineering and Mechanics, Kunming University of Science and Technology, Kunming, Yunnan Province, China

Abstract

Purpose/Significance

Over the years, scholars have studied bid evaluation experts, focusing mainly on aspects such as their behavior. However, previous research has largely ignored the performance and competency of bid evaluation experts, so this paper aims to provide a theoretical basis for incentive and constraint mechanisms and for the hierarchical or dynamic management of bid evaluation experts by evaluating their performance and dynamic competency.

Method/Process

Firstly, the evaluation index system for the performance and dynamic competency of bid evaluation experts is preliminarily constructed by referring to the relevant literature, and the constructed indices are then modified and improved by consulting experts from the relevant stakeholders. Secondly, considering the hesitation and consistency of expert weighting, the calculation methods for the expert weight coefficient and the index score interval number are improved. Based on the theory of weight interval numbers, a mathematical optimization model is constructed to calculate the index weights according to the purposes of performance judgment and dynamic competency clustering of bid evaluation experts. Finally, data on the performance and dynamic competency of bid evaluation experts are obtained by questionnaire survey, and an empirical analysis is carried out by simulating bid evaluation experts consistent with the actual situation.

Results/Conclusion

After the calculation method of the index score interval number is improved and the index weight interval number is then calculated from it, the length of the index weight interval number decreases and its calculation accuracy increases. In addition, the index weights calculated by the constructed mathematical optimization model yield smaller intra-class discrimination and larger inter-class discrimination. Finally, suggestions are provided for the management of bid evaluation experts.

1. Introduction

Engineering bidding is a widely used transaction method around the world [1–3]. For project owners, the selection of contractors has a significant impact on project cost and quality [4]. In China, in order to select the bidder who best meets the bidding conditions, the relevant departments randomly select bid evaluation experts in the relevant fields to form a temporary bid evaluation committee (Chinese law stipulates that the bid evaluation committee consists of an odd number of no fewer than five members) based on the provisions of relevant laws and regulations and the needs of the project; the bid evaluation committee then evaluates and selects the bidder according to the bidders' quotations, technical measures, and so on. This arrangement leads to an imbalance between the legal responsibility and the power of the bid evaluation subjects, and to a contradiction between the temporary nature of the bid evaluation committee and the long-term nature of the project; both aspects have become important research topics [1].

The evaluation and selection of contractors is a difficult and challenging task [5], and the decisions of bid evaluation and award are often considered key links in auctions [6]. Therefore, assessing contractors and selecting the best bidder requires complex knowledge and experience to ensure that the selected contractors are able to implement projects as required by owners [7]. At the same time, the bid evaluation committee can decide by itself, but not in an arbitrary way [8]. Its members hold the dominant power in the evaluation work and have the most direct and fundamental impact on the evaluation results [9]. Therefore, bid evaluation experts must have high competency. Although the relevant laws and regulations in China stipulate the qualifications required of bid evaluation experts when they enter the expert database, there are no reliable measures for implementing a periodic assessment system after they have entered the database. Hence, the quality of bid evaluation experts is worrying [9]. Some provinces have put forward hierarchical or dynamic management of bid evaluation experts. In the long run, the ability of bid evaluation experts will change with the accumulation of knowledge and experience. Therefore, it is necessary to take periodic assessment of the competency of bid evaluation experts as the theoretical basis of hierarchical management. In addition, the records of China's bid evaluation experts participating in evaluation work are generally used only for archiving and verification. Most provinces do not assess experts on the basis of their bid evaluation performance. Although a few provinces propose 'scoring system' management according to the performance of bid evaluation experts, this still has large limitations, as it only considers whether the bid evaluation experts have violated regulations.

At present, China is in a transition from offline bid evaluation (i.e. the traditional bid evaluation method) to online bid evaluation. The bid evaluation processes of the two methods are shown in Fig 1. Through comparison, the common points between the two methods and the advantages of online bid evaluation are as follows:

Fig 1. Two bid evaluation mechanisms.

a. Offline bid evaluation. b. Online bid evaluation.

https://doi.org/10.1371/journal.pone.0269467.g001

(1) Common points between the two methods: no matter which method of bid evaluation is adopted, bid evaluation experts need to put forward bid evaluation suggestions, score the bids and put forward bid evaluation conclusions in the process of bid evaluation according to their professional knowledge and work experience. Meanwhile, they must comply with relevant laws and regulations.

(2) Advantages of online bid evaluation: no matter which method of bid evaluation is adopted, the performance of bid evaluation experts can be evaluated. However, online bid evaluation can automatically evaluate the performance of bid evaluation experts according to the evaluation process and results, and can conduct periodic evaluation. At the same time, the digital footprint of the experts' evaluation process can be collected through technical means. Previous studies have shown that digital footprints provide an effective way to reduce information asymmetry and moral hazard [10–13]. Therefore, the performance of bid evaluation experts can be evaluated through their digital footprint (such as the seriousness of their performance detected through equipment and the time they spend browsing bids). In addition, online bid evaluation can be conducted off-site: experts do not need to meet and do not influence one another, which realizes a static game with incomplete information and independent bid evaluation.

Therefore, how to evaluate the performance of bid evaluation experts and periodically assess their competency under online bid evaluation, so as to provide a theoretical basis for the hierarchical or dynamic management of bid evaluation experts and for incentive and constraint mechanisms, is a very meaningful research topic.

2. Related research review

2.1 Research on bid evaluation experts

The first section reviewed the current management situation of bid evaluation experts in China and expounded the importance of competency and performance evaluation of bid evaluation experts, as well as the limitations of current management in the context of information technology. Research on bid evaluation experts mainly covers two aspects: integrity, and bid evaluation behavior and results. As for the integrity of bid evaluation experts, References [14, 15] constructed evaluation index systems and evaluation models to evaluate the integrity of bid evaluation experts from different perspectives. As for bid evaluation behavior and results, existing research has focused on the behavior of the bid evaluation committee [16], the antagonistic or uncooperative behavior of bid evaluation expert groups (i.e. the technical group and the business group) [6, 17–20], the bid evaluation behavior [21] and collusive behavior [22] of bid evaluation experts, as well as abnormal scores given by bid evaluation experts [23] and differences in scoring results [24]. In addition, References [25, 26] proposed incentivizing and constraining bid evaluation experts by analyzing the principal-agent relationship, and Reference [27] further designed an incentive and constraint mechanism. Research on the consensus-building process of bid evaluation experts, the generation of the collective decision-making matrix, and rank-oriented decision-making methods addresses the expert decision-making problem, which differs from the perspective of this paper and is not discussed further.

The bid evaluation process of experts can be regarded as an expert service process. Owing to information asymmetry in expert services, various kinds of hidden moral behavior exist in the expert service market, such as fraud, improper service, and internalization of the clients' objective functions [28]. Bid evaluation work also shares characteristics of the 'gig economy', such as its temporary and project-based nature. Based on network platforms, the 'new gig economy' [29] of the Internet era has given rise to online labor platforms with algorithms as the underlying technical logic [30] by deeply integrating digital technology with the on-demand gig economy. Highly automated, data-driven methods replace the functions of managers in the labor execution management of platform workers through algorithmic management [31]; the incomplete and asymmetric information available to both sides [32] and the principal-agent problem of incomplete labor contracts under asymmetric information are overcome through the exchange of large amounts of information [33]. As a result, the individual behavior of platform workers in the labor process is almost completely exposed to continuous and rigorous algorithmic monitoring, so they must exhibit behavior consistent with organizational goals and platform specifications and complete the assigned tasks [34, 35]. Consequently, human resource management activities such as performance management differ significantly from the traditional model [36]. By reshaping work modes, the digital economy has triggered a series of new problems in behavior, efficiency and ethics in the workplace, so research on organizational behavior and human resource management at the micro level is urgently needed [37]. Big data and artificial intelligence simplify data acquisition and provide research data that were previously difficult to obtain and trace [38]. They cover all aspects of the production process, can penetrate each production link, and offer insight into relevant factors including human emotions and preferences [39]. Therefore, it is feasible to evaluate the performance of online bid evaluation experts based on digital footprints, which is also a topic worthy of in-depth discussion.

A review of the relevant research on bid evaluation experts shows that research on their performance evaluation is lacking. In terms of China's relevant policy provisions, the practical needs of management, and theoretical research, it is urgent to study the performance evaluation of online bid evaluation experts, and to consider that changes in performance, integrity, knowledge, experience and other factors within a period may lead to changes in their competency, so as to support management practice and extend the relevant theories. According to the definition of performance, the performance of bid evaluation experts should comprehensively consider bid evaluation behavior and results. Referring to competency theory [40], competency should consider performance and other related indicators within the period. Based on the dynamic view of competency theory [41] and the static and dynamic content characteristics of competency [42], this paper defines the periodic competency of bid evaluation experts as dynamic competency. In view of this, this paper considers the relevant factors as a whole to evaluate the performance and dynamic competency of bid evaluation experts.

2.2 Research on subjective weighting method

Based on the above contents, this paper constructs the evaluation index system in order to realize the performance evaluation and dynamic competency evaluation of bid evaluation experts. Therefore, the calculation of reasonable and effective index weight has become the key issue of evaluation.

The common practice is to invite some stakeholders (i.e. all those who have sufficient professional knowledge to carry out a reasonable evaluation) [43] and to combine the importance of the indices with linguistic values to construct a judgment matrix through linguistic variables [44, 45]. In this way, the limitations of individual expert opinions can be avoided and the reliability of the evaluation results improved by integrating multiple expert opinions, as in the analytic hierarchy process (AHP) [46] and the order relationship analysis method (G1) [47, 48]. In this paper, the interval AHP (IAHP) method of Reference [49] is used to construct the evaluation model. However, Reference [49] also has some shortcomings. Firstly, in judging the importance of evaluation indices, the main characteristics of expert judgment information (i.e. the hesitation arising from cognitive limitations and the consistency across different preferences) [50] reflect the credibility of the expert evaluation and affect the final evaluation results, whereas Reference [49] considers only evaluation consistency when calculating the expert coefficient. Secondly, it is also unreasonable that the expert coefficient is eliminated in the calculation method of the index score interval number constructed in Reference [49], because the size of the expert coefficient represents the credibility of the judgment results. Therefore, this paper improves the method proposed in Reference [49].

At the same time, the purpose of the dynamic competency evaluation in this paper is to cluster bid evaluation experts and provide a theoretical basis for hierarchical management. Some scholars have considered optimization under weight interval numbers, taking the highest satisfaction [51], the minimum weight deviation [52], or the minimum total projection deviation [53] as the optimization objective. Therefore, this paper draws on this idea and constructs a mathematical optimization model under the condition of weight interval numbers.

The structure of this paper is as follows. Section 1 introduces the significance of performance and dynamic competency evaluation of bid evaluation experts. Section 2 reviews the relevant research on bid evaluation experts and the related theories of subjective weighting method. Section 3 constructs the evaluation index system of performance and dynamic competency of bid evaluation experts. Section 4 improves the calculation method of expert weight coefficient and index score interval number based on the evaluation method proposed in Reference [49], and constructs a mathematical optimization model according to the purpose of performance and dynamic competency evaluation of bid evaluation experts. Section 5 makes an empirical analysis. Section 6 expounds the research conclusions of this paper, and puts forward suggestions for the management of bid evaluation experts based on the research and related theories.

3. Evaluation index system

3.1 Principles of constructing evaluation index system

  1. Purpose principle. Realize the performance and dynamic competency evaluation of bid evaluation experts, and provide a theoretical basis for hierarchical management, incentive and constraint of bid evaluation experts.
  2. Scientific principle. Fully follow the law of bid evaluation activities, and the selected indices, calculation methods and standards meet the characteristics of bid evaluation.
  3. Practical principle. Conform to the objective reality, the selected index data are collectable and easy to operate.
  4. Systematic principle. Comprehensively reflect the performance and dynamic competency of bid evaluation experts.

3.2 Construction process of evaluation index system

Based on an analysis of the management laws and regulations for bid evaluation experts in some provinces and the current situation of bid evaluation in China, this paper sorts the relevant evaluation indices of existing bid evaluation experts [14, 15], refers to indices for other project evaluation experts [54, 55], and follows the above principles to preliminarily construct the evaluation index system of bid evaluation experts' performance and dynamic competency according to the common points of offline and online bid evaluation and the digital footprint of online bid evaluation. The preliminary performance evaluation index system includes 3 first-level indices, namely bid evaluation performance, bid evaluation quality and code of conduct, and 10 corresponding second-level indices; the preliminary dynamic competency evaluation index system includes 3 first-level indices, namely interim performance, code of conduct and database-entry competency, and 8 corresponding second-level indices. The expert consultation method is then used to consult a total of 12 experts, comprising 5 owners, 3 from the regulatory agency, 2 from construction organizations and 2 bid evaluation experts, so as to modify and improve the evaluation indices. The final performance evaluation index system includes the 3 first-level indices of bid evaluation performance, bid evaluation quality and code of conduct, and 11 corresponding second-level indices, as shown in Table 1; the final dynamic competency evaluation index system includes the 2 first-level indices of interim comprehensive situation and capacity improvement, and 6 corresponding second-level indices, as shown in Table 2. Finally, referring to the relevant references [54–56], the index calculation methods are determined according to the actual situation of bid evaluation, as shown in Tables 1 and 2.

Table 1. Performance evaluation index system of bid evaluation experts.

https://doi.org/10.1371/journal.pone.0269467.t001

4. Evaluation model

4.1 Related theoretical knowledge

Definition 1 [58]. Let X be a non-empty domain; an intuitionistic fuzzy set A on X is A = {⟨x, μA(x), νA(x)⟩ | x ∈ X}. In the formula, μA(x) and νA(x) are respectively the degrees of membership and non-membership of element x in A, satisfying 0 ≤ μA(x) + νA(x) ≤ 1; πA(x) = 1 − μA(x) − νA(x) is the degree of hesitation of element x in A, indicating the degree of uncertainty that x belongs to A. All intuitionistic fuzzy sets on the non-empty domain X are denoted by IFS(X), and a = (μa, νa, πa), with πa = 1 − μa − νa, is called an intuitionistic fuzzy number, written IFN in the remainder of this paper.

Definition 2 [59, 60]. Let R denote the set of real numbers. If a−, a+ ∈ R and a− ≤ a+, then a = [a−, a+] is called an interval number. If a is a positive interval number, then a = [a−, a+] = {x | 0 < a− ≤ x ≤ a+}.

4.2 Semantic information and intuitionistic fuzzy number

In this paper, referring to Reference [61], hesitation is divided into three levels, 'very small', 'small' and 'general'; the semantic evaluation granularity is r = 5, and π = 0.1, 0.2, 0.3 respectively represent the three levels of hesitation. The linguistic evaluation values are quantified by referring to References [49, 62–64], as shown in Table 3. During the evaluation, N experts independently evaluate the importance of each index aM,i of layer M (M ≥ 2) with respect to the associated index aM−1,j of the upper layer.
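To make the encoding concrete, the following sketch shows one possible way a linguistic importance judgment with a stated hesitation level could be represented as an IFN. The membership scaling is a hypothetical placeholder chosen only for illustration; the paper's actual quantification is the one given in Table 3.

```python
# Illustrative encoding of a linguistic judgment as an IFN (mu, nu, pi).
# The scaling of mu below is hypothetical; the paper's quantification is in Table 3.
HESITATION = {"very small": 0.1, "small": 0.2, "general": 0.3}

def to_ifn(importance_level, hesitation_label, r=5):
    """importance_level in 1..r; returns (mu, nu, pi) with mu + nu + pi = 1."""
    pi = HESITATION[hesitation_label]
    mu = (importance_level / r) * (1.0 - pi)   # hypothetical scaling of the linguistic value
    nu = 1.0 - mu - pi
    return mu, nu, pi

# An expert rates an index 4 out of r = 5 with 'small' hesitation.
print(to_ifn(4, "small"))   # (0.64, 0.16, 0.2)
```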

4.3 Expert weight coefficient and index score interval number

Reference [49] combined the basic theory of interval numbers with the analytic hierarchy process, and proposed and proved the 'positive interval number' theorem and the 'consistency of the interval number judgment matrix' theorem. In its calculation of the index score interval number, the expert weight coefficient is first calculated from the consistency of the expert evaluations, and the index score interval number is then calculated from the expert weight coefficient; however, the expert weight coefficient is cancelled out during the calculation, so the index score interval number actually obtained is independent of the expert weight coefficient. Moreover, the calculation of the expert weight coefficient considers only the consistency of the expert evaluation and not its hesitation, which is also incomplete. Therefore, this paper calculates the expert weight coefficient by comprehensively considering both the consistency and the hesitation of the expert evaluation, improves the method for calculating the index score interval number, and proves that the improved calculation method satisfies the 'positive interval number' theorem. The improved calculation steps are as follows:

Step 1: Calculate the expert weight coefficient based on evaluation consistency [65], according to the experts' evaluations of the importance of the indices.

(1)

In the formula, the deviation coefficient [66] measures how far an expert's judgment departs from those of the other experts: the larger the deviation, the smaller the expert weight coefficient, and the smaller the deviation, the larger the expert weight coefficient. According to Reference [66], the parameter ∂ is an adjustment coefficient, and ∂ = 10 is generally appropriate in practical applications. According to Reference [65], ε is a moderator variable with a value greater than 0; ε = 0.2 is adopted based on the characteristics of the index importance evaluation scale.

Step 2: Calculate the expert weight coefficient based on hesitation, according to the experts' evaluations of the importance of the indices and the corresponding IFNs. Because experts differ in professional knowledge and work experience, they show different degrees of hesitation when evaluating the importance of the same index. For the importance judgment of a given index, the greater the degree of hesitation, the smaller the expert weight coefficient; the smaller the degree of hesitation, the greater the expert weight coefficient.

(2)

Step 3: Calculate the expert weight coefficient in the evaluation [61] based on comprehensive consideration of the consistency and hesitation of the expert evaluation.

(3)

In this formula, the parameters ϑ1, ϑ2 ∈ [0,1] satisfy ϑ1 + ϑ2 = 1. When ϑ1 > 0.5, more attention is paid to the consistency of the expert evaluation information; when ϑ2 > 0.5, more attention is paid to the certainty of the expert evaluation information. Since the evaluation experts are experts and scholars in this field who are very familiar with each index, their hesitation is low and the consistency information is more important, so ϑ1 = 0.8 and ϑ2 = 0.2 are adopted.
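The exact forms of formulas (1)–(3) are not reproduced above, so the sketch below only illustrates the overall logic of Steps 1–3 under the stated parameter choices: a consistency-based coefficient that shrinks with an expert's deviation from the group, a hesitation-based coefficient that shrinks with the hesitation degree, and a convex combination of the two with ϑ1 = 0.8 and ϑ2 = 0.2. The specific functional forms are illustrative stand-ins, not the authors' formulas.

```python
import numpy as np

def consistency_weights(scores, eps=0.2, adj=10.0):
    """Illustrative consistency-based coefficients (Step 1): experts whose
    scores deviate more from the group mean receive smaller coefficients."""
    scores = np.asarray(scores, dtype=float)
    deviation = np.abs(scores - scores.mean())
    raw = 1.0 / (eps + adj * deviation)     # larger deviation -> smaller weight
    return raw / raw.sum()

def hesitation_weights(pi):
    """Illustrative hesitation-based coefficients (Step 2): a larger hesitation
    degree pi leads to a smaller coefficient."""
    pi = np.asarray(pi, dtype=float)
    raw = 1.0 - pi
    return raw / raw.sum()

def combined_weights(scores, pi, theta1=0.8, theta2=0.2):
    """Step 3: combine the two coefficients, with theta1 + theta2 = 1."""
    lam = theta1 * consistency_weights(scores) + theta2 * hesitation_weights(pi)
    return lam / lam.sum()

# Five experts score one index on the r = 5 scale, with hesitation degrees.
print(combined_weights([4, 5, 4, 3, 4], [0.1, 0.2, 0.1, 0.3, 0.2]))
```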

Step 4: Calculate the index score interval number.

(4)

The calculation method given in Reference [49], and the reasons for improving it, are as follows.

Description: in that calculation of the interval number, the evaluation experts' weight coefficients cancel out during the computation, which is unreasonable.

The improved calculation method of this paper is given below. It is first proved that the improved method satisfies the 'positive interval number' theorem, and its rationality is then explained.

Proof: the index score interval number is a positive interval number; its lower endpoint is positive and does not exceed its upper endpoint (the equality holds, and the interval number degenerates into a real number, if and only if the scores of all evaluation experts are equal). The proof is completed.

Description: the rationality of the improved calculation method in this paper.

The index score interval number calculated by the method of Reference [49] is:

The improved index score interval number calculated in this paper is:

The expert weight coefficient reflects the credibility of the evaluation results: the greater the expert weight coefficient, the higher the credibility. The calculation method in Reference [49] cancels the expert weight coefficient during the computation, so that its result is independent of the expert weight coefficient; the improved formula in this paper avoids this situation.

4.4 Evaluation model

According to the provisions of the bidding law, the bid evaluation committee consists of an odd number of no fewer than five members; in practice the committee generally has 5, 7 or 9 members, which is not a large number. The purpose of performance evaluation is to judge the bid evaluation results of each bid evaluation expert, so a high degree of discrimination among the experts' performance values is not required, and the normalized weight vector of the indices is calculated by the method of Reference [49]. The purpose of periodic evaluation, by contrast, is to judge changes in the competency of bid evaluation experts and to realize their classified management, which requires low discrimination among bid evaluation experts within a class and high discrimination between classes. Therefore, on the basis of the calculated weight interval numbers, this paper determines the calculation method of the normalized weight vector of the indices according to the needs of performance evaluation and dynamic competency evaluation. The specific calculation process is as follows:

Step 1: Calculate the index score interval number by steps 1 to 4 in section 4.3, and then calculate the interval number judgment matrix [49].

(5)

In the formula, m is the number of indices of layer M associated with index aM−1,j of layer M−1; pik indicates the result of comparing the importance, with respect to aM−1,j, of any two indices aM,i and aM,k of layer M associated with aM−1,j, and is determined by formula (6).

(6)

Step 2: Transform the interval number judgment matrix into ordinary judgment matrices PL and PR.

(7)

In the formula, the matrix PL is the left matrix of the interval number judgment matrix P.

(8)

In the formula, the matrix PR is the right matrix of the interval number judgment matrix P.

Step 3: Calculate the transfer matrices AL, AR of PL, PR.

(9)(10)

Step 4: Calculate the optimal transfer matrices BL, BR of transfer matrices AL, AR.

(11)(12)


Step 5: Calculate the quasi-optimal matrices CL and CR of PL and PR [67].

(13)(14)


Step 6: Calculate the normalized eigenvectors corresponding to the largest eigenvalues of CL and CR, and obtain the weight interval number matrix [68].

(15)(16)(17)

In this formula, α and β are determined by the following formulas.

(18)(19)

Step 7: Calculate the normalized weight vector of the performance evaluation indices according to the formulas in Reference [49], namely formulas (20) and (21).

(20)(21)
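Since formulas (5)–(21) are not reproduced above, the following sketch only illustrates the general shape of Steps 1–6 under three explicit assumptions: the interval judgment matrix entries are formed as ratios of the score intervals, the quasi-optimal consistent matrices are obtained with a standard optimal-transfer-matrix construction (in the spirit of Reference [67]), and α and β take the form commonly used in the interval eigenvector method (in the spirit of Reference [68]). It is a sketch of the approach, not the authors' exact implementation.

```python
import numpy as np

def interval_judgment_matrix(lo, hi):
    """Assumed form of formula (6): p_ik = [s_i^- / s_k^+, s_i^+ / s_k^-],
    returned as the left matrix P_L and the right matrix P_R."""
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    PL = lo[:, None] / hi[None, :]            # lower bounds of p_ik
    PR = hi[:, None] / lo[None, :]            # upper bounds of p_ik
    return PL, PR

def quasi_optimal_consistent(P):
    """Optimal-transfer-matrix construction: a_ij = lg p_ij,
    b_ij = mean_k(a_ik - a_jk), c_ij = 10^{b_ij} (fully consistent)."""
    A = np.log10(P)
    B = (A[:, None, :] - A[None, :, :]).mean(axis=2)
    return 10.0 ** B

def principal_eigvec(C):
    """Normalized eigenvector of the largest eigenvalue of C."""
    vals, vecs = np.linalg.eig(C)
    v = np.abs(vecs[:, np.argmax(vals.real)].real)
    return v / v.sum()

def weight_intervals(lo, hi):
    """Steps 1-6: from index score intervals to weight interval numbers."""
    PL, PR = interval_judgment_matrix(lo, hi)
    xL = principal_eigvec(quasi_optimal_consistent(PL))
    xR = principal_eigvec(quasi_optimal_consistent(PR))
    alpha = np.sqrt(np.sum(1.0 / PR.sum(axis=0)))   # assumed form of (18)
    beta = np.sqrt(np.sum(1.0 / PL.sum(axis=0)))    # assumed form of (19)
    return alpha * xL, beta * xR                    # [w^-, w^+] per index

# Three indices with hypothetical score intervals on a 1-5 importance scale.
w_lo, w_hi = weight_intervals([3.5, 4.0, 2.5], [4.0, 4.5, 3.0])
print(np.round(w_lo, 3), np.round(w_hi, 3))
```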

The weight vector of the dynamic competency evaluation indices is calculated with the goal of small intra-class discrimination and large inter-class discrimination. The smaller the standard deviation, the more concentrated the data, indicating smaller discrimination between the evaluation objects. Therefore, the intra-class discrimination is represented by the standard deviation and the inter-class discrimination by the deviation between classes, and the following mathematical optimization model is constructed to calculate the normalized weight vector of the indices.

Objective function: (22)

Constraint conditions:

Optimization method: the weight values are iterated within the weight interval numbers and the clustering is continually updated, so as to achieve the optimization goal of minimizing the intra-class discrimination and maximizing the inter-class discrimination.

In the formula, the first term denotes the values of the final-layer indices of evaluation object p; Gp and Gq represent the dynamic competency of evaluation objects p and q, and Gp,z,min > Gp,z+1,max, so that the classes are ordered; Vz indicates the standard deviation of the dynamic competency of the evaluation objects within class z, the corresponding deviation term denotes the difference in dynamic competency between classes z and z+1, and the decision variable is the normalized weight vector of the index interval numbers of layer M associated with index j of layer M−1.
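The iteration scheme behind formula (22) is not spelled out above, so the sketch below shows one plausible realization under explicit assumptions: candidate weight vectors are drawn inside the weight interval numbers and normalized, the resulting competency scores are clustered into five classes with a simple one-dimensional k-means, and the candidate minimizing "sum of intra-class standard deviations minus sum of inter-class gaps" is kept. The data, bounds and search strategy are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

def cluster_1d(values, k=5, iters=20):
    """Simple 1-D k-means on competency scores; returns cluster labels 0..k-1."""
    centers = np.quantile(values, np.linspace(0, 1, k))
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for z in range(k):
            if np.any(labels == z):
                centers[z] = values[labels == z].mean()
    return labels

def objective(values, labels, k=5):
    """Sum of intra-class standard deviations minus sum of gaps between
    adjacent class means (smaller is better)."""
    means = sorted(values[labels == z].mean() for z in range(k) if np.any(labels == z))
    intra = sum(values[labels == z].std() for z in range(k) if np.any(labels == z))
    inter = sum(b - a for a, b in zip(means, means[1:]))
    return intra - inter

def optimize_weights(X, w_lo, w_hi, k=5, trials=300):
    """X: experts x final-layer-indices matrix; search weights inside the
    weight interval numbers [w_lo, w_hi]."""
    best_score, best_w = np.inf, None
    for _ in range(trials):
        w = rng.uniform(w_lo, w_hi)
        w = w / w.sum()                     # normalized weight vector
        g = X @ w                           # dynamic competency scores G_p
        score = objective(g, cluster_1d(g, k), k)
        if score < best_score:
            best_score, best_w = score, w
    return best_w

# Synthetic example: 1000 virtual experts, 6 final-layer indices.
X = rng.random((1000, 6))
w_lo = np.array([0.10, 0.10, 0.10, 0.05, 0.05, 0.05])
w_hi = np.array([0.30, 0.30, 0.25, 0.20, 0.15, 0.15])
print(np.round(optimize_weights(X, w_lo, w_hi), 3))
```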

5. Empirical analysis

5.1 Calculation of index weight

5.1.1 Performance of virtual bid evaluation experts.

In view of the particularity of the bid evaluation expert group, it is difficult to obtain relevant data. In order to make the performance and dynamic competency of the virtual bid evaluation experts more realistic, this paper obtains some characteristics of the performance of bid evaluation experts through an expert survey of the relevant departments, as shown in Table 4, and then simulates the performance and dynamic competency of the virtual bid evaluation experts according to these expert opinions.

(1) Performance of virtual bid evaluation experts

In this paper, 10,000 performance records of bid evaluation experts are simulated as the basis for calculating the interim performance in the dynamic competency evaluation indices. In addition, 11 of these performance records are randomly selected as the performance of the 11 bid evaluation experts of one bid evaluation committee in a single bid evaluation, and are used for the empirical analysis, as shown in Table 5. The specific method is as follows: firstly, analyze the dependency relationships among the indices noted in Table 4 and, among the interdependent indices, generate first the index involved in the most dependencies; then generate the other dependent indices, check the cross-dependencies among indices, and correct any generated data that violate them; finally, randomly combine the interdependent indices with the independent indices.

The performance of bid evaluation experts is: (23)

In the formula, ρ1 = 1 or 0 indicates timely or untimely submission of the bid evaluation report, respectively; ρ2 = 0 or 1 indicates the existence or absence of an impostor, respectively; and wi indicates the weight of the final-layer index.
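Formula (23) itself is not shown above; the snippet below illustrates one plausible reading of it, in which the weighted sum of the final-layer index values is gated by the two 0/1 factors ρ1 and ρ2. The weights and index values are made up for illustration.

```python
# Plausible reading of formula (23): a weighted sum of final-layer index
# values gated by rho1 (report submitted on time) and rho2 (no impostor).
def performance(weights, values, rho1, rho2):
    return rho1 * rho2 * sum(w * x for w, x in zip(weights, values))

# Hypothetical weights/values for three final-layer indices.
print(performance([0.3, 0.4, 0.3], [0.8, 0.9, 0.7], rho1=1, rho2=1))   # 0.81
```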

(2) Dynamic competency of virtual bid evaluation experts

The dynamic competency evaluation of bid evaluation experts is carried out on the basis of the performance evaluation. Taking Kunming city as an example, a survey shows that there are about 1000 experts in the engineering field in the bid evaluation expert database in Kunming. With an evaluation cycle of 2 years, the experts drawn account for about 95% of the total; each expert is drawn about 1–100 times and is most likely to be drawn 10–20 times. Therefore, x (x ∈ [1,100]) performance records consistent with the actual situation are extracted from the 10,000 simulated records as the calculation basis of the interim performance in the dynamic competency, and x = 10–20 is set as the number of times most experts are drawn in one cycle.
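As a small illustration of this setup, the following snippet simulates how many times each virtual expert might be drawn in one cycle so that the counts fall in [1, 100] and concentrate around 10–20. The gamma-based distribution is purely an assumption for generating plausible data, not the authors' procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_draw_counts(n_experts=1000):
    """Draw counts in [1, 100], concentrated around 10-20 (assumed distribution)."""
    x = np.round(rng.gamma(shape=4.0, scale=4.0, size=n_experts))
    return np.clip(x, 1, 100).astype(int)

counts = simulate_draw_counts()
print(counts.min(), int(np.median(counts)), counts.max())
```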

In addition, considering that performance evaluation is the basis of the incentive and constraint mechanism for bid evaluation experts, it is assumed that the performance of bid evaluation experts within a cycle will not deteriorate under the effect of the incentive and constraint mechanism, and the interim performance of the bid evaluation experts within a cycle is virtualized accordingly. The dynamic competency of a total of 1010 bid evaluation experts is virtualized; 1000 are used to calculate the index weights, and 10 are used for the empirical analysis. Owing to space limitations, only the dynamic competency of the 10 bid evaluation experts used for the empirical analysis is shown in Table 6.

Table 6. Dynamic competency of 10 bid evaluation experts.

https://doi.org/10.1371/journal.pone.0269467.t006

5.1.2 Index weight calculation.

Because experts from the relevant stakeholders have different preferences regarding the performance evaluation indices, a total of 18 experts, consisting of 4 owners, 3 from the regulatory agency, 3 from construction organizations, 3 from bidding agencies, and 5 bid evaluation experts (3 from universities and 2 from enterprises), judged the importance of the evaluation indices (due to space limitations, only some of the evaluation results are shown in Table 7).

Table 7. Experts' judgments on the importance of some indices.

https://doi.org/10.1371/journal.pone.0269467.t007

The weight interval numbers of the performance and dynamic competency evaluation indices calculated by formulas (1)–(19) are shown in Tables 8 and 9.

Table 8. Weight interval number of performance indices by improved method.

https://doi.org/10.1371/journal.pone.0269467.t008

Table 9. Weight interval number of dynamic competency indices by improved method.

https://doi.org/10.1371/journal.pone.0269467.t009

The normalized weight vector of the performance evaluation indices is calculated according to formulas (20) and (21), as shown in Table 10.

Table 10. Normalized weight vector of performance evaluation index.

https://doi.org/10.1371/journal.pone.0269467.t010

Through the optimization of formula (22), the calculated normalized weight vector of the final-layer indices of dynamic competency is shown in Table 11.

Table 11. Normalized weight vector of final layer of dynamic competency.

https://doi.org/10.1371/journal.pone.0269467.t011

Through the calculation of the above weight interval number and index weight, it can be found that bid evaluation performance A1 and bid evaluation quality A2 have the same weight interval number and the same index weights in the performance evaluation. The weight interval number of code of conduct A3 is relatively close to the left side of bid evaluation performance A1 and bid evaluation quality A2, and its weight is also relatively close, indicating that experts from relevant stakeholders attach great importance to evaluation performance A1, evaluation quality A2 and code of conduct A3, but pay more attention to evaluation performance A1 and evaluation quality A2.

In the dynamic competency evaluation, the interim comprehensive situation D1 is on the right side of the competency improvement D2, indicating that the experts of relevant stakeholders pay more attention to the interim comprehensive situation D1 in the dynamic competency evaluation of bid evaluation experts, pay more attention to the interim performance E1 in the interim comprehensive situation D1, and pay more attention to the credit E6 in the competency improvement D2.

5.2 Comparative analysis

5.2.1 Comparison of weight interval numbers.

For comparison, the weight interval numbers of the performance and dynamic competency evaluation indices calculated by the method of Reference [49] are shown in Tables 12 and 13.

Table 12. Weight interval number of performance evaluation indices calculated in reference [49].

https://doi.org/10.1371/journal.pone.0269467.t012

Table 13. Weight interval number of dynamic competency evaluation indices in reference [49].

https://doi.org/10.1371/journal.pone.0269467.t013

Comparing the length (len) [69, 70] of each index weight interval number in Tables 8, 9, 12 and 13, it can be found that the lengths of 22 interval numbers become smaller when the index weight interval numbers are calculated by the improved method. Therefore, the improved calculation method increases the calculation accuracy of the weight interval numbers, which further demonstrates the rationality of the improved calculation method of the index score interval number.
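For reference, the interval length used in this comparison is simply the upper endpoint minus the lower endpoint; the small snippet below makes the comparison explicit with made-up interval values.

```python
# len of an interval number [a_minus, a_plus]; shorter means a more precise weight.
def interval_len(a_minus, a_plus):
    return a_plus - a_minus

# Hypothetical weight intervals for the same index under the two methods.
print(interval_len(0.21, 0.29))   # improved method: ~0.08
print(interval_len(0.18, 0.33))   # reference method: ~0.15
```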

5.2.2 Comparison of clustering results.

After using the improved method to calculate the score interval numbers of the dynamic competency evaluation indices, the index weight interval numbers are calculated (Table 9), and the normalized ranking weight vector of the final-layer indices is then calculated according to steps (16)–(22) of Reference [49], as shown in Table 14.

Table 14. Normalized ranking weight vector of final layer of dynamic competency.

https://doi.org/10.1371/journal.pone.0269467.t014

Because ratings are generally set to 5 levels, the number of clusters is set to 5. The normalized ranking weight vector of the final-layer indices above (Table 14) and the optimization method are used respectively to cluster the dynamic competency of the 10,000 virtual bid evaluation experts. The clustering intervals of dynamic competency and the number of experts in each class are obtained, and the length (len) of each clustering interval and the inter-class distances are calculated; the results are shown in Table 15. It can be found that the number of experts in each class is similar under the two approaches, while the optimized clustering intervals are shorter and the inter-class distances are larger. Therefore, the results obtained by optimizing within the weight interval numbers are reliable.

Table 15. Dynamic competency clustering in reference [49] and this paper.

https://doi.org/10.1371/journal.pone.0269467.t015

Using the normalized ranking weight vector of the final-layer indices of dynamic competency (Table 14) and the normalized weight vector of the final-layer indices of dynamic competency obtained in this paper (Table 11), the dynamic competency of the 10 bid evaluation experts is classified according to the clustering intervals of this paper. The results are shown in Table 16.

Table 16. Comparison of clustering results of bid evaluation experts’ dynamic competency.

https://doi.org/10.1371/journal.pone.0269467.t016

According to the results of dynamic competency and clustering of 10 bid evaluation experts (Table 16), the reliability of optimization within the weight interval number is further proved.

5.2.3 Comparison of clustering discrimination.

The goal of optimization is to minimize the intra-class discrimination and maximize the inter-class discrimination. According to the clustering results in Table 16, the intra-class discrimination and inter-class discrimination are compared by referring to formula (22), and the calculation results are shown in Table 17.

Table 17. Comparison of the clustering discrimination of dynamic competency of 10 bid evaluation experts.

https://doi.org/10.1371/journal.pone.0269467.t017

Through the data of Table 17, it can be found that the bid evaluation expert competency calculated in this paper has smaller intra-class discrimination and larger inter-class discrimination, which is conducive to the hierarchical management of bid evaluation experts in the expert database and the implementation of incentive and constraint mechanism. Therefore, the evaluation results of this paper are more in line with the actual needs.

6. Conclusions and suggestions

By constructing the evaluation index system and evaluation model for the performance and dynamic competency of bid evaluation experts, simulating bid evaluation experts consistent with the actual situation, calculating the weight vectors of the performance and dynamic competency evaluation indices on the basis of weight interval numbers, and finally carrying out an empirical analysis, the following conclusions and suggestions are drawn:

  1. In the process of bid evaluation experts performing their duties, experts from relevant stakeholders attach great importance to the bid evaluation performance, bid evaluation quality and code of conduct of bid evaluation experts, but pay more attention to the bid evaluation performance and quality of bid evaluation experts.
  2. In the dynamic competency evaluation of bid evaluation experts, experts from relevant stakeholders pay more attention to the interim comprehensive situation of dynamic competency evaluation of bid evaluation experts, pay more attention to the interim performance in the interim comprehensive situation, and pay more attention to credit in the competency improvement.
  3. The improved calculation method of expert coefficient takes into account expert consistency and hesitation, which is more reasonable. The improved calculation method of index score interval number calculates the index score interval number and then calculates the weight interval number, which improves the calculation accuracy of the weight interval number, and the proposed mathematical optimization model meets the needs of hierarchical management of bid evaluation experts.
  4. The proposed idea of optimization in weight interval numbers has good generality, which can also be used to set other optimization objectives or to evaluate other personnel.
  5. The judgment results of the relevant stakeholders on the importance of evaluation indices reveal which aspects of quality of bid evaluation experts they pay more attention to, and also indicate which aspects of the bid evaluation experts may have prominent problems. Therefore, relevant management departments can strengthen the management in the future.
  6. After entering the expert database, bid evaluation experts participate in project reviews, and 'scoring system' management can be carried out through performance evaluation (scoring according to performance and the number of bid evaluations: high scores for good performance and low scores for poor performance; experts are scored each time they participate in a bid evaluation, and the scores are accumulated). After each cycle, dynamic competency is re-evaluated and re-classified, and the cycle is repeated, so as to achieve the hierarchical and dynamic management of bid evaluation experts.
  7. The relevant management departments may pay expert fees according to the performance of bid evaluation experts, give priority to experts with high scores and high competency when selecting participants for project reviews, and remove experts with frequently poor performance from the expert database.

The assumption in this paper that the performance of bid evaluation experts will not deteriorate under the effect of the incentive and constraint mechanism is an idealization. Judging from the performance curves of other staff under performance evaluation, the relationship between the performance of bid evaluation experts and the number of bid evaluations is complex: the performance curve may rise first and then level off, or it may be an inverted U-shaped curve. Future research can focus on the effect of the incentive and constraint mechanism on the performance curve of bid evaluation experts, so as to improve the reliability of the virtual data.

Acknowledgments

The authors thank Shuang Zheng, Yankun Peng, Tao Huang, Shaopeng Huang, Tonghai Li, Xianhai Qin, et al. for expert technical assistance.

References

  1. 1. Zhang XZ, Ni YL. Mechanism of Agent Evaluation Based on Infinite Repeated Game. 2016 13th International conference on services systems and service management. 2016.
  2. 2. Marzouk M, Mohamed E. Modeling bid/no bid decisions using fuzzy fault tree. Construction innovation. 2018;18(1):90–108. http://doi.org/10.1108/CI-11-2016-0060
  3. 3. Takano Y, Ishii N, Muraki M. Determining bid markup and resources allocated to cost estimation in competitive bidding. Automation in Construction.2018;85:358–68. https://doi.org/10.1016/j.autcon.2017.06.007
  4. 4. Chen ZS, Zhang X, Rodríguez RM, Pedrycz W, Martínez L. Expertise-based bid evaluation for construction-contractor selection with generalized comparative linguistic ELECTRE III. Automation in Construction.2021;125:103578. http://doi.org/10.1016/j.autcon.2021.103578
  5. 5. Watt DJ, Kayis B, Willey K. Identifying key factors in the evaluation of tenders for projects and services. International Journal of Project Management. 2009;27(3):250–60. http://doi.org/10.1016/j.ijproman.2008.03.002
  6. 6. Liu XW, Wang DW, Jiang ZZ. Simulation and analysis of bid evaluation behaviors for multiattribute reverse auction. IEEE Systems Journal.2015;9(1):165–76. http://doi.org/10.1109/JSYST.2013.2258737
  7. 7. Alsugair AM. Framework for Evaluating Bids of Construction Contractors. Journal of Management in Engineering. 1999;15(2):72–8. http://doi.org/10.1061/(ASCE)0742-597X(1999)15:2(72)
  8. 8. Bana e Costa CA, Corrêa ÉC, De Corte J, Vansnick J. Facilitating bid evaluation in public call for tenders: a socio-technical approach. Omega. 2002;30(3):227–42. https://doi.org/10.1016/S0305-0483(02)00029-4
  9. 9. China Tendering and Bidding Association. China Tendering and Bidding Development Report (2018 Edition).Beijing: China Planning Press;2019.
  10. 10. Huang YP, Qiu H. Big Tech Lending: A New Credit Risk Management Framework. Management World.2021;37(02):12–21+50+2+16. https://doi.org/10.19744/j.cnki.11-1235/f.2021.0016
  11. 11. Berg T, Burg V, Gombović A, Puri M. On the Rise of FinTechs: Credit Scoring Using Digital Footprints. The Review of Financial Studies. 2020;33(7):2845–97. http://doi.org/10.1093/rfs/hhz099
  12. 12. Blakey R, Askelund AD, Boccanera M, Immonen J, Plohl N, Popham C, et al. Communicating the Neuroscience of Psychopathy and Its Influence on Moral Behavior: Protocol of Two Experimental Studies. Frontiers In Psychology. 2017;8:294. pmid:28352238
  13. 13. Orlova EV. Methodology and Models for Individuals’ Creditworthiness Management Using Digital Footprint Data and Machine Learning Methods. Mathematics (Basel). 2021;9(15):1820. http://doi.org/10.3390/math9151820
  14. 14. Zheng L, Zhang YJ, Peng SN. Research on Evaluation Index System of Bid Evaluation Experts’ Integrity. Modern Management Science.2012;(08):39–41. http://doi.org/ 10.3969/j.issn.1007-368X.2012.08.014
  15. 15. Yu Y, Wang HZ, Chen JL, Ma XN. Construction of Credit Evaluation Index System and Model for Bid Evaluation Expert. Standard Science.2016;(07):20–23. http://doi.org/10.3969/j.issn.1674-5698.2016.07.004
  16. 16. Rodriguez MA, Bollen J, Van de Sompel H. Mapping the bid behavior of conference referees. Journal of Informetrics. 2007;1(1):68–82. http://doi.org/10.1016/j.joi.2006.09.006
  17. 17. Liu XW, Wang DW. Bidding Evaluation Behavior Analysis of Grouped Multi-attribute Reverse Auction Based on Qualitative Simulation. Journal of Northeastern University (Natural Science). 2012;33(03):314–317+322. https://kns.cnki.net/kcms/detail/detail.aspx?FileName=DBDX201203002&DbName=CJFQ2012
  18. 18. Liu XW, Qi W, Wang DW, Liu LL. The Experimental Study of Grouped Bid Evaluation Behavior about Multi-Attribute Reverse Auction. Mathematics in Practice and Theory.2014;44(19):40–47. http://doi.org/ CNKI:SUN:SSJS.0.2014-19-005
  19. 19. Liu XW, Wang DW. Analysis of referee experts’ behaviors for grouped bid evaluation based on evolutionary game. Journal of Management Sciences in China.2015;18(01):50–61. https://kns.cnki.net/kcms/detail/detail.aspx?FileName=JCYJ201501005&DbName=CJFQ2015
  20. 20. Wang DW, Liu XW. Analysis of administrative behaviors for bid evaluation based on evolutionary game. Journal of Systems Engineering.2014;29(06):771–779. http://doi.org/10.13383/j.cnki.jse.2014.06.006
  21. 21. Liu XW, Zhang ZX, Qi W, Wang DW. Behavior Analysis With an Evolutionary Game Theory Approach for Procurement Bid Evaluation Involving Technical and Business Experts. IEEE Systems Journal. 2019;13(4):4374–85. http://doi.org/10.1109/JSYST.2019.2925773
  22. 22. Huo ZG, Zhang ML. Bid Evaluation Experts to Participate in the Bidding Collusion Game Analysis and Prevention Countermeasures. Journal of Technical Economics & Management.2016;(09):20–24. http://doi.org/10.3969/j.issn.1004-292X.2016.09.004
  23. 23. Liang J. Reinforcement of Judgment and Supervision on Abnormal Score by Bidding Evaluation Experts. Construction Economy.2013;(02):97–99. http://doi.org/10.14181/j.cnki.1002-851x.2013.02.003
  24. 24. Liu XW, Wang DW. Simulation Research of Bidding Evaluation Mechanism in Multi-Attribute Auction Based on Multi-Agent. Journal of System Simulation.2013;25(10):2367–2373. http://doi.org/10.16182/j.cnki.joss.2013.10.001
  25. 25. Shu MY. Study of IC and IR in the principal-agent relationship in evaluating bids of construction projects. Journal of Hefei University of Technology(Natural Science).2009;32(11):1749–1752. http://doi.org/ 10.3969/j.issn.1003-5060.2009.11.029
  26. 26. Shu MY, Cai JG, Fan YR. Game Analysis of Principal-Agent Relation in Evaluating Bids of Construction Project. Journal of Henan University(Natural Science).2009;39(03):323–326. http://doi.org/10.15991/j.cnki.411100.2009.03.023
  27. 27. Tai SL, Zhu YX. Incentive Mechanism for Bid Evaluation Experts of Construction Projects. Journal of Shenyang Jianzhu University (Social Science).2020;22(01):37–41. http://doi.org/ CNKI:SUN:SJSH.0.2020-01-006
  28. 28. Debo L G, Toktay L B, Van Wassenhove L N. Queuing for Expert Services. Management Science.2008;54(8):1497–1512. http://doi.org/ 10.1287/mnsc.1080.0867
  29. 29. Beijing: School of Social Sciences. Tsinghua University. 2020. https://www.tioe.tsinghua.edu.cn/info/1109/1801.htm
  30. 30. Kuhn K M, Maleki A. Micro-entrepreneurs, Dependent Contractors, and Instaserfs: Understanding Online Labor Platform Workforces. Academy of Management Perspectives.2017;31(3):183–200. http://doi.org/ 10.5465/amp.2015.0111
  31. 31. Li LX, Zhou GS. Platform Economy, Labor Market, and Income Distribution: Recent Trend and Policy Options. International Economic Review.2022;(02):46–59+5. https://kns.cnki.net/kcms/detail/detail.aspx?FileName=GJPP202202004&DbName=CJFQTEMP
  32. 32. Wei W, Liu BN, Ling YR. The Impact of Gig Worker's Occupational Stigma Perception on Turnover Intention Under the Platform Algorithm. Human Resources Development of China.2022;39(02):18–30. https://doi.org/10.16471/j.cnki.11-2822/c.2022.2.002.
  33. 33. Omohundro Steve. Cryptocurrencies, smart contracts, and artificial intelligence. Ai Matters.2014;1(2):19–21. http://doi.org/ 10.1145/2685328.2685334
  34. 34. Pignot E. Who is pulling the strings in the platform economy? Accounting for the dark and unexpected sides of algorithmic control. Organization.2021;28(1):208–235. https://doi.org/10.1177/1350508420974523
  35. 35. Duggan J, Sherman U, R Carbery, et al. Algorithmic management and app-work in the gig economy: A research agenda for employment relations and HRM. Human Resource Management Journal.2020;30(1):114–132. http://doi.org/ 10.1111/1748-8583.12258
  36. 36. Li JS, Li RY. Current Situation and Future Research Prospect of Human Resource Man-agement in Gig Economy. Leadership Science.2021;(20):83–85. https://doi.org/10.19572/j.cnki.ldkx.2021.20.022
  37. 37. Baum JA, Haveman HA. Editors’ comments: the future of organizational theory. Academy of Management Review.2020;45(2):268–272. https://doi.org/10.5465/amr.2020.0030
  38. 38. Zhang ZX, Zhao SM, Shi JQ, Qin X, He W, Zhao XY, et al. An Academic review of the 254th Shuangqing Forum: Critical Scientific Issues of Organization and Management Research in the Digital Economy. Bulletin of National Natural Science Foundation of China.2021;35(05):774–781. https://doi.org/10.16262/j.cnki.1000-8217.2021.05.020
  39. 39. Jia K. Can Algorithm be Neutral?:Possibilities of Gig Economy. Beijing Cultural Review.2021;(04):117–124+159.https://kns.cnki.net/kcms/detail/detail.aspx?FileName=WHZH202104017&DbName=CJFQ2021
  40. 40. McClelland David C. Testing for competence rather than for "intelligence". American psychologist.1973;28(1):1–14. pmid:4684069.
  41. 41. Pan QQ, Wei HM. The Relationship between Founder Competency and Entrepreneurial Team Member’s Trust under Different Stages of New Venture. Science & Technology Progress and Policy.2016;33(01):114–20. http://doi.org/10.6049/kjjbydc.2015050703
  42. 42. Zhao TL, Lu W, Li CH. Research on the Evolutionary Logic of Dynamic Competency of Family Entrepreneurs based on Enterprise Life Cycle. Science & Technology Progress and Policy.2021;38(21):63–72. http://doi.org/10.6049/kjjbydc.2020120543
  43. 43. Sinclair SJ, Bruce MJ, Griffioen P, Dodd A, White MD. A condition metric for Eucalyptus woodland derived from expert evaluations. Conservation Biology. 2018;32(1):195–204. pmid:28370297
  44. 44. Fan Z, Feng B. A multiple attributes decision making method using individual and collaborative attribute data in a fuzzy environment. Information Sciences. 2009;179(20):3603–18. http://doi.org/10.1016/j.ins.2009.06.037
  45. 45. Zadeh LA. Linguistic variables, approximate reasoning and dispositions. Medical Informatics. 1983;8(3):173–86. pmid:6600041
  46. 46. Saaty TL. The Analytic Hierarchy Process. New York: McGraw-Hill; 1980.
  47. 47. Guo YJ. Comprehensive evaluation theory, method and application. Beijing: Science Press; 2007.
  48. 48. Jin M, Zhang JY, Cui S, Kang M, Xiao Y, Xiang RX, et al. Research on comprehensive evaluation of data link based on G1 method and entropy weight method. Journal of Physics: Conference Series. 2021;1820(1):12115. http://doi.org/10.1088/1742-6596/1820/1/012115
  49. 49. Lu F, Zhang J, Zhang ZY. An interval AHP method for comprehensive quality evaluation of the air traffic controllers. Journal of Safety and Environment.2019;19(02):547–553. http://doi.org/10.13637/j.issn.1009-6094.2019.02.028
  50. 50. Sellak H, Ouhbi B, Frikh B, Ikken B. Expertise-based consensus building for MCGDM with hesitant fuzzy linguistic information. Information Fusion.2019;50:54–70. http://doi.org/10.1016/j.inffus.2018.10.003
  51. 51. Wang Q, Geng XL. Multi-attribute Group Decision-making Method of Evaluation Based on Hesitant Fuzzy COPRAS. Statistics & Decision.2019;35(03):45–49. http://doi.org/10.13546/j.cnki.tjyjc.2019.03.010
  52. 52. Wang B, Li G, Cao Y, Peng XH, Chen K. Research on the Weighting Method for Group Decision Based on the Weighting of the Indices. Operations Research and Management Science.2018;27(11):22–25. http://doi.org/CNKI:SUN:YCGL.0.2018-11-005
  53. 53. Shao LB, Zhao LL, Wen TX, Kong XB. Bidirectional projection method with interval valued intuitionistic fuzzy information based on prospect theory. Control and Decision.2016;31(06):1143–1147. http://doi.org/10.13195/j.kzyjc.2015.0590
  54. 54. Wang J, Wang PP, Ding J. A comprehensive model on the meta-evaluation of science and technology projects by review experts. Science Research Management.2020;41(02):183–192. http://doi.org/10.19571/j.cnki.1000-2995.2020.02.018
  55. 55. Zhao YJ. Reliability analysis of group experts and testing of evaluation outcomes. Journal of University of Science and Technology of China.2016;46(02):165–172. http://doi.org/CNKI:SUN:ZKJD.0.2016-02-010
  56. 56. He X Y. Research on a TOPSIS Credit Evaluation Model of Peer Reviewers in Science and Technology Projects. Science and Technology Management Research.2020;40(03):32–38. http://doi.org/CNKI:SUN:KJGL.0.2020-03-006
  57. 57. Liu JW, Shi YJ, Bai Y, Yan CW. An attention analysis method based on EEG supervising face images. Computer Engineering & Science.2018;40(02):298–303. http://doi.org/10.3969/j.issn.1007-130X.2018.02.015
  58. 58. Xu ZS, Ronald RY. Some geometric aggregation operators based on intuitionistic fuzzy sets. International Journal of General Systems.2006;35(4):417–433. http://doi.org/10.1080/03081070600574353
  59. 59. Hu QZ, Zhang WH. Research and Application of the Interval Number Theory. Beijing: Science Press; 2010.
  60. 60. Ai CA, Feng FD, Li J, Liu KX. AHP Method of Subjective Group Decision-making Based on Interval Number Judgment Matrix and Fuzzy Clustering Analysis. Statistics & Decision.2019;35(02):39–43. http://doi.org/10.13546/j.cnki.tjyjc.2019.02.009
  61. 61. Lin Y, Zhan RJ, Wu HS. Expert Weights Determination Method and Application Based on Hesitancy Degree and Similarity Measure. Control and Decision.2021;36(06):1482–1488.
  62. 62. Guo KH, Li WL. A Method for Multiple Attribute Group Decision Making with Complete Unknown Weight Information and Its Extension. Chinese Journal of Management Science.2011;19(05):94–103. http://doi.org/10.16381/j.cnki.issn1003-207x.2011.05.025
  63. 63. Guo KH, Li WL. Evidential Reasoning-Based Approach for Multiple Attribute Decision Making Problems under Uncertainty. Journal of Industrial Engineering and Engineering Management.2012;26(02):94–100. http://doi.org/10.13587/j.cnki.jieem.2012.02.025
  64. 64. Zhang S, Liu S. A GRA-based intuitionistic fuzzy multi-criteria group decision making method for personnel selection. Expert Systems with Applications.2011;38(9):11401–5. http://doi.org/10.1016/j.eswa.2011.03.012
  65. 65. Xie CS, Lu F, Zhang ZN, Pan W. Research on Evaluation of Air Traffic Controllers' Overall Quality Based on Interval Number. Science Technology and Engineering.2014;14(32):302–308. http://doi.org/10.3969/j.issn.1671-1815.2014.32.061
  66. 66. Xu ZS. Research on Methods for Deriving Experts' Weights in Group Decision Making. Communication on Applied Mathematics and Computation.2001;(01):19–22. http://doi.org/CNKI:SUN:YONG.0.2001-01-002
  67. 67. Wang JP. A New Point of View of the Method of Optimal Transfer Matrix. Systems Engineering-Theory & Practice.1999;(10):125–126. http://doi.org/10.1002/14651858.CD007948
  68. 68. Wei YQ, Liu JS, Wang XZ. Concept of Consistence and Weights of the Judgement M-atrix in the Uncertain Type of AHP. Systems Engineering-Theory & Practice.1994;(04):16–22. http://doi.org/CNKI:SUN:XTLL.0.1994-04-002
  69. 69. Niu YT, Huang GH, Zhang XX, Yang YP. Study on Interval-parameter Linear Programming and Its Interval Solutions. Operations Research and Management Science.2010;19(03):23–29. http://doi.org/10.3969/j.issn.1007-3221.2010.03.004
  70. 70. Da QL, Liu XW. Interval Number Linear Programming and Its Satisfactory Solution. Systems Engineering-Theory & Practice.1999;(04):4–8. http://doi.org/10.3321/j.issn:1000–6788.1999.04.002