A fast combination method in DSmT and its application to recommender system

In many applications involving epistemic uncertainties usually modeled by belief functions, it is often necessary to approximate general (non-Bayesian) basic belief assignments (BBAs) by subjective probabilities (called Bayesian BBAs). This necessity arises when one needs to embed the fusion result in a system based on the probabilistic framework and Bayesian inference (e.g. tracking systems), or when one needs to make a decision. In this paper, we present a new fast combination method, called modified rigid coarsening (MRC), to obtain the final Bayesian BBAs based on a hierarchical decomposition (coarsening) of the frame of discernment. In this method, focal elements with their probabilities are coarsened efficiently, using a disagreement vector and a simple dichotomous approach, in order to reduce the computational complexity of the combination. To demonstrate the practicality of our approach, it is applied to combine users' soft preferences in recommender systems (RSs). Additionally, to make a comprehensive performance comparison, the proportional conflict redistribution rule #6 (PCR6) is regarded as a baseline in a range of experiments. According to the experimental results, MRC yields more accurate recommendations than the original rigid coarsening (RC) method, with comparable computational time.


Introduction
The theory of belief functions, known as Dempster-Shafer Theory (DST), was developed by Shafer [1] in 1976 from Dempster's works [2]. Belief functions allow one to model epistemic uncertainty [3] and they have already been used in many applications since the 1990s [4], mainly those relevant to expert systems, decision-making support and information fusion. To palliate some limitations of DST (such as its high computational complexity), Dezert and Smarandache proposed an extended mathematical framework of belief functions with new efficient quantitative and qualitative rules of combination, called DSmT (Dezert and Smarandache Theory) in the literature [5,6], with applications listed in [7]. One of the major drawbacks of DST and DSmT is their high computational complexity when the fusion space (i.e. the frame of discernment, FoD) and the number of sources to combine are large.
DSmT is more complex than DST, and the Proportional Conflict Redistribution rule #6 (PCR6 rule) becomes computationally intractable in the worst case as soon as the cardinality of the Frame of Discernment (FoD) is greater than six.
To reduce the computational cost of operations with belief functions when the number of focal elements is very large, several approaches have been proposed by different authors. Basically, the existing approaches rely either on efficient implementations of the computations, as proposed for instance in [8,9], or on approximation techniques of the original Basic Belief Assignments (BBAs) to combine [10][11][12][13][14], or both. From a fusion standpoint, two approaches are usually adopted: 1) approximate each BBA by a subjective probability first and use Bayes' fusion rule to get the final Bayesian BBA [11,12], or 2) fuse all the BBAs with a fusion rule, typically Dempster-Shafer's or the proportional conflict redistribution rule #6 (PCR6), which is very costly in computations, and then convert the combined BBA into a subjective probability measure [10,14]. The former method is the simplest, but it incurs a high loss of the information included in the original BBAs, whereas the latter is intractable for high-dimension problems.
This paper presents a new combination method, called modified rigid coarsening (MRC), to get the final Bayesian BBAs based on a hierarchical decomposition (coarsening) of the frame of discernment, which can be seen as intermediate between the two aforementioned approaches. This hierarchical structure combines a bintree decomposition of the FoD with masses defined on the coarsened frames. To demonstrate the practicality of our proposed method, MRC is applied to combine users' preferences so as to provide suitable recommendations in RSs. Preliminary work on the original rigid coarsening (RC) was published in our recent work [15] (this is an extended version of the paper presented at the 20th IEEE International Conference on Information Fusion, Xi'an, China). In this paper, more detailed analyses of this new combination method are provided and, more importantly, the method is applied to a real application. These are the added contributions of this paper.
The main contributions of this paper are: 1. the presentation of the bintree decomposition of the FoD on which the BBA approximations are performed; 2. the modeling of user preferences in Recommender Systems (RSs) by the DSmT-modeling function.
In order to measure the efficiency and effectiveness of MRC, it is integrated into RSs based on DSmT and compared to traditional methods in the experiments. The results show that the recommendation accuracy of MRC is extremely close to that of classical PCR6, while its computational time is clearly shorter.
The remainder of this paper is organized as follows. In section 2, we review relevant prior work on DST and DSmT first. In section 3, MRC is presented. In section 4, a recommendation system based on DSmT, that employs MRC to combine users' preferences, is shown. In section 5, we evaluate our proposed algorithm based on two public datasets: Movielens and Flixster. Finally, we conclude and discuss future work.

Mathematical background
This section provides a brief reminder of the basics of DST and DSmT, which is necessary for the presentation and understanding of the more general MRC of Section 3.
In the DST framework, the frame of discernment (here, the symbol ≜ means "equals by definition") Θ ≜ {θ₁, …, θₙ} (n ≥ 2) is a set of exhaustive and exclusive elements (hypotheses) which represents the possible solutions of the problem under consideration; thus Shafer's model assumes θᵢ ∩ θⱼ = ∅ for i ≠ j in {1, …, n}. A basic belief assignment (BBA) m(·) is defined by the mapping 2^Θ → [0, 1], verifying m(∅) = 0 and ∑_{A∈2^Θ} m(A) = 1. In DSmT, one can abandon Shafer's model (if it does not fit the problem) and refute the principle of the excluded middle, which assumes the existence of the complement of any element/proposition belonging to the power set 2^Θ. Instead of defining the BBAs on the power set 2^Θ ≜ (Θ, ∪) of the FoD, the BBAs are defined on the so-called hyper-power set (or Dedekind's lattice) denoted D^Θ ≜ (Θ, ∪, ∩), whose cardinality follows the sequence of Dedekind numbers; see [6], Vol. 1, for details and examples. A (generalized) BBA, called a mass function, m(·) is defined by the mapping D^Θ → [0, 1], verifying m(∅) = 0 and ∑_{A∈D^Θ} m(A) = 1. The DSmT framework encompasses the DST framework because 2^Θ ⊂ D^Θ. In DSmT, one can also take into account a set of integrity constraints on the FoD (if known), by specifying all the pairs of elements which are truly disjoint. Stated otherwise, Shafer's model is the specific DSm model in which all elements are deemed disjoint. A ∈ D^Θ is called a focal element of m(·) if m(A) > 0. A BBA is called Bayesian if all of its focal elements are singletons and Shafer's model is assumed; otherwise it is called non-Bayesian [1]. A fully ignorant source is represented by the vacuous BBA m_v(Θ) = 1. The belief (or credibility) and plausibility functions are respectively defined by Bel(X) ≜ ∑_{Y∈D^Θ, Y⊆X} m(Y) and Pl(X) ≜ ∑_{Y∈D^Θ, Y∩X≠∅} m(Y). BI(X) ≜ [Bel(X), Pl(X)] is called the belief interval of X; its length U(X) ≜ Pl(X) − Bel(X) measures the degree of uncertainty of X.
In 1976, Shafer proposed Dempster's rule (we use the DS index to refer to the Dempster-Shafer rule because Shafer really promoted Dempster's rule in his milestone book [1]) to combine BBAs in the DST framework. The DS rule is defined by m_DS(∅) = 0 and, ∀A ∈ 2^Θ\{∅},

m_DS(A) = (1/(1 − K)) ∑_{B,C∈2^Θ | B∩C=A} m₁(B)m₂(C),

where K = ∑_{B,C∈2^Θ | B∩C=∅} m₁(B)m₂(C) is the degree of conflict. The DS rule is commutative and associative and can easily be extended to the fusion of S > 2 BBAs. Unfortunately, the DS rule has been highly disputed during the last decades by many authors because of its counter-intuitive behavior in high (and even some low) conflict situations, which is why many alternative rules of combination have been proposed in the literature to combine BBAs [16].
To palliate the drawbacks of the DS rule, the PCR6 rule was proposed in DSmT and is usually adopted (the PCR6 rule coincides with PCR5 when combining only two BBAs [6]). For two BBAs, it is defined by m_PCR6(∅) = 0 and, ∀A ∈ D^Θ\{∅},

m_PCR6(A) = m₁₂(A) + ∑_{B∈D^Θ | A∩B=∅} [ m₁(A)²m₂(B)/(m₁(A)+m₂(B)) + m₂(A)²m₁(B)/(m₂(A)+m₁(B)) ],   (2)

where m₁₂(A) = ∑_{B,C∈D^Θ | B∩C=A} m₁(B)m₂(C) is the conjunctive operator, and the elements A and B are expressed in their disjunctive normal form. If the denominator of a fraction is zero, that fraction is discarded. The general PCR6 formula for combining more than two BBAs altogether is given in [6], Vol. 3. We adopt the generic notation m₁₂^PCR6(·) = PCR6(m₁(·), m₂(·)) to denote the fusion of m₁(·) and m₂(·) by the PCR6 rule. PCR6 is not associative, and it can also be applied in the DST framework (with Shafer's model of the FoD) by replacing D^Θ with 2^Θ in Eq (2).
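For two Bayesian BBAs, all pairwise intersections of distinct singletons are empty, so Eq (2) takes a particularly simple form. The following minimal sketch (in Python, with BBAs encoded as plain dicts, an implementation choice of ours rather than the paper's) illustrates it:

```python
def pcr5_bayesian(m1, m2):
    """PCR5 fusion (= PCR6 for two sources) of two Bayesian BBAs.

    m1, m2: dicts mapping singleton labels to masses summing to 1.
    The conjunctive part keeps only the products m1(a)m2(a); every
    cross product m1(a)m2(b) with a != b is conflicting mass that is
    redistributed back to a and b proportionally to m1(a) and m2(b).
    """
    theta = set(m1) | set(m2)
    fused = {a: m1.get(a, 0.0) * m2.get(a, 0.0) for a in theta}
    for a in theta:
        for b in theta:
            if a == b:
                continue
            x, y = m1.get(a, 0.0), m2.get(b, 0.0)
            if x + y > 0:  # a zero denominator means no conflict to share
                fused[a] += x * x * y / (x + y)
                fused[b] += y * y * x / (x + y)
    return fused
```

Because every conflicting product is redistributed in full, the fused masses still sum to one, so no final normalization step is needed.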

Modified rigid coarsening for fusion of Bayesian BBAs
Here, we introduce the principle of the MRC of the FoD to reduce the computational complexity of the PCR6 combination of original Bayesian BBAs. In the case of non-Bayesian BBAs, all non-singleton focal elements must first be decoupled using DSmP, as will be explained in Section 4.

Rigid coarsening
This proposal was initially called rigid coarsening (RC) in our previous works [17][18][19] and was recently improved in [15]. The goal of this coarsening is to replace the original (refined) FoD Θ by a set of coarsened ones to make the computation of the PCR6 rule tractable. Because we consider here only Bayesian BBAs to combine, their focal elements are only singletons of the FoD Θ ≜ {θ₁, …, θₙ}, with n ≥ 2, and we assume Shafer's model of the FoD Θ. Coarsening the FoD Θ means replacing it with another, less specific FoD of smaller dimension Ω = {ω₁, …, ω_k}, with k < n, built from the elements of Θ. This can be done in many ways depending on the problem under consideration. Generally, the elements of Ω are singletons of Θ and disjunctions of elements of Θ. For example, if Θ = {θ₁, θ₂, θ₃, θ₄}, a possible coarsened frame built from Θ is, for instance, Ω = {θ₁∪θ₂, θ₃∪θ₄}. The rigid coarsening process is a simple dichotomous coarsening obtained as follows:
• If n = |Θ| is even: the disjunction of the first n/2 elements θ₁ to θ_{n/2} of Θ defines the element ω₁ of Ω, and the disjunction of the last n/2 elements θ_{n/2+1} to θₙ defines ω₂; based on Eq (3), the coarsened mass of each ωᵢ is the sum of the masses of its elements.
• If n = |Θ| is odd: the disjunction of the first ⌈n/2⌉ elements defines ω₁, and the disjunction of the remaining ⌊n/2⌋ elements defines ω₂. For example, if Θ = {θ₁, θ₂, θ₃, θ₄, θ₅}, one considers ω₁ = θ₁∪θ₂∪θ₃ and ω₂ = θ₄∪θ₅.
Of course, the same coarsening strategy applies to all the original BBAs m_s^Θ(·), s = 1, …, S, of the S > 1 sources of evidence, so as to work with less specific BBAs m_s^Ω(·), s = 1, …, S. The less specific BBAs (called coarsened BBAs by abuse of language) can then be combined with the PCR6 rule of combination according to Eq (2). This dichotomous coarsening is repeated iteratively l times, as schematically represented by a bintree. Here we consider a bintree only for simplicity, which means that each coarsened frame Ω consists of two elements only; a similar method can of course be used with tri-trees, quad-trees, etc.
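The dichotomous split above can be sketched as follows (a minimal illustration; the list encoding and the function name are ours, not the paper's):

```python
import math

def coarsen_once(m):
    """One dichotomous (rigid) coarsening step of a Bayesian BBA.

    m: list of singleton masses ordered theta_1..theta_n.
    omega_1 gathers the first ceil(n/2) singletons, omega_2 the rest;
    each coarsened mass is the sum of its members' masses.
    """
    half = math.ceil(len(m) / 2)
    left, right = m[:half], m[half:]
    return left, right, [sum(left), sum(right)]
```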
The last step of this hierarchical process is to calculate the combined (Bayesian) BBA of all focal elements according to the connection weights of the bintree structure, where the number of layers l of the tree depends on the cardinality |Θ| of the original FoD Θ. Specifically, the mass of each focal element is updated according to the connection weights along the link paths from the root to the terminal nodes. This principle is illustrated in detail in the following example.
Example 1: Consider Θ = {θ₁, θ₂, θ₃, θ₄, θ₅} and the three Bayesian BBAs given in Table 1. The rigid coarsening and fusion of the BBAs proceed through the following steps:
Step 1: We define the bintree structure based on the iterative half-split of the FoD, as shown in Fig 1. The connection weights are denoted λ₁, …, λ₈. The elements of the frames Ω_l are defined layer by layer: at layer l = 1, ω₁ = θ₁∪θ₂∪θ₃ and ω₂ = θ₄∪θ₅; at layer l = 2, ω₁₁ = θ₁∪θ₂, ω₁₂ = θ₃, ω₂₁ = θ₄ and ω₂₂ = θ₅.
Step 2: The BBAs of the elements of the (sub-)frames Ω_l are obtained as follows. At layer l = 1, we use Eqs (6) and (7) because |Θ| = 5 is odd, which gives the BBAs in Table 2. At layer l = 2, we obtain the BBAs in Tables 3 and 4; these mass values result from the proportional redistribution of the mass of each focal element with respect to the mass of its parent focal element in the bintree. The other masses of the coarsened focal elements are computed similarly by this proportional redistribution.
At layer l = 3, the proportional redistribution method again gives the BBAs of the sub-frames Ω₃ in Table 5.
Step 3: The connection weights λᵢ are computed from the assignments of the coarsened elements. In each layer l, we fuse the three BBAs sequentially using the PCR6 formula, Eq (2). Because PCR6 fusion is not associative, the general PCR6 formula would give the best results; here we use sequential fusion to reduce the computational complexity, even though the fusion result is then approximate. At layer l = 2 we obtain m^PCR6(ω₁₁) = 0.4137, m^PCR6(ω₁₂) = 0.5863 and m^PCR6(ω₂₁) = 0.8121; at layer l = 3, m^PCR6(ω₁₁₁) = 0.3103 and m^PCR6(ω₁₁₂) = 0.6897.
Step 4: The final assignments of the elements of the original FoD Θ are calculated as the products of the connection weights along the link paths from the root (top) node to the terminal nodes (leaves). We eventually obtain the combined and normalized Bayesian BBA.
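The whole RC procedure (half-split of the frame, sequential PCR6 at each node, proportional redistribution inside each half, and product of the connection weights along root-to-leaf paths) can be sketched recursively. This is an illustrative reimplementation under our own encoding, not the authors' code:

```python
import math

def pcr5_2(m1, m2):
    # PCR5 fusion of two two-element Bayesian BBAs [p, q]
    fused = [m1[0] * m2[0], m1[1] * m2[1]]
    for a, b in ((0, 1), (1, 0)):
        x, y = m1[a], m2[b]
        if x + y > 0:
            fused[a] += x * x * y / (x + y)
            fused[b] += y * y * x / (x + y)
    return fused

def rc_fuse(bbas):
    """Hierarchical (rigid-coarsening) fusion of Bayesian BBAs.

    bbas: S mass vectors over theta_1..theta_n, each summing to 1.
    The frame is half-split recursively; at each node the coarsened
    two-element BBAs of all sources are fused sequentially with PCR5
    (giving the connection weights), each source's masses are
    redistributed proportionally inside each half, and every leaf mass
    is the product of the weights on its root-to-leaf path.
    """
    n = len(bbas[0])
    if n == 1:
        return [1.0]
    half = math.ceil(n / 2)
    weights = [sum(bbas[0][:half]), sum(bbas[0][half:])]
    for m in bbas[1:]:
        weights = pcr5_2(weights, [sum(m[:half]), sum(m[half:])])

    def sub(lo, hi):
        # proportional redistribution of each source inside one half
        out = []
        for m in bbas:
            s = sum(m[lo:hi])
            out.append([x / s if s > 0 else 1.0 / (hi - lo)
                        for x in m[lo:hi]])
        return out

    left = rc_fuse(sub(0, half))
    right = rc_fuse(sub(half, n))
    fused = [weights[0] * x for x in left] + [weights[1] * x for x in right]
    total = sum(fused)
    return [x / total for x in fused]
```

With a single source, the procedure reproduces the input BBA exactly, which is a useful sanity check of the weight products.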

Modified rigid coarsening
One issue with the RC described in the previous section is that no additional information about the focal elements themselves is embedded in the coarsening process. In this paper, the elements θᵢ assigned to the same group are determined using the consensus information drawn from the BBAs provided by the sources. Specifically, the degrees of disagreement between the sources on the decisions (θ₁, θ₂, …, θₙ) are first calculated using the belief-interval-based distance d_BI [20] to obtain a disagreement vector. All focal elements of the FoD are then sorted in ascending order of disagreement. Finally, the simple dichotomous approach is used to hierarchically coarsen the re-sorted focal elements.
Definition 2: The disagreement of the opinions of two sources about θᵢ is defined as the L₁-distance between the d_BI distances of the BBAs m_s^Θ(·), s = 1, 2, to the categorical BBA m_{θᵢ}(·), i.e., the quantity |d_BI(m₁, m_{θᵢ}) − d_BI(m₂, m_{θᵢ})|.
Definition 3: The disagreement of the opinions of S ≥ 3 sources about θᵢ is defined analogously over all pairs of sources, where the d_BI distance is defined in [20]; the proof of Definition 3 is given in S1 Appendix. For simplicity, we assume Shafer's model so that |2^Θ| = 2ⁿ; otherwise the number of elements in the summation of Eq (10) would be |D^Θ| − 1, with another normalization constant n_c. Based on the disagreement vector D_{1−3}, a new bintree structure is obtained, shown in Fig 2. Compared with Fig 1, the elements of the FoD Θ are grouped more reasonably: in the vector D_{1−3}, θ₁ and θ₅ have similar degrees of disagreement, so they are put in the same group, and similarly for θ₂ and θ₄; the element θ₃ stands apart and is therefore kept alone in the coarsening process. Once this new bintree decomposition is obtained, the remaining steps are identical to the rigid coarsening of the previous section and yield the final combined BBA.
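A simplified sketch of the disagreement vector follows. For Bayesian BBAs, Bel = Pl = m on singletons, so the belief-interval distance d_BI of [20] is approximated here by a normalised Euclidean distance restricted to singletons; the full d_BI sums Wasserstein interval distances over all of 2^Θ, so this is only an approximation of the paper's measure:

```python
import itertools
import math

def d_bi_singletons(m1, m2):
    # normalised Euclidean distance between two Bayesian mass vectors;
    # for Bayesian BBAs the belief intervals are degenerate, and this
    # stands in for the d_BI of [20] restricted to singletons
    n = len(m1)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(m1, m2)) / n)

def disagreement_vector(bbas):
    """Disagreement of S sources about each theta_i, computed as the
    sum over source pairs of |d(m_s, m_theta_i) - d(m_s', m_theta_i)|,
    followed by an ascending re-sort of the frame elements (sketch)."""
    n = len(bbas[0])
    disagreement = []
    for i in range(n):
        cat = [0.0] * n
        cat[i] = 1.0  # categorical BBA focused on theta_i
        dists = [d_bi_singletons(m, cat) for m in bbas]
        disagreement.append(sum(abs(a - b)
                                for a, b in itertools.combinations(dists, 2)))
    order = sorted(range(n), key=lambda i: disagreement[i])
    return disagreement, order
```

The returned ordering is what drives the MRC bintree: elements with similar disagreement end up adjacent and are grouped by the dichotomous split.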
Step 1: According to Fig 2, the elements of the frames Ω_l are defined layer by layer, starting at layer l = 1.
Step 2: The BBAs of the elements of the (sub-)frames Ω_l are obtained as follows: at layer l = 1, we use Eqs (6) and (7), which gives Table 6; at layer l = 2, the proportional redistribution method gives Tables 7 and 8; the next layer gives Table 9.
Step 3: The connection weights λᵢ are computed from the assignments of the coarsened elements, which yields the connecting weights of the bintree, starting at layer l = 1.

Summary of the proposed method
The fusion of BBAs into a combined Bayesian BBA based on the hierarchical decomposition of the FoD consists of the steps of Algorithm 1, illustrated in Fig 3. It is worth noting that when the given BBAs are not Bayesian, the first step is to apply an existing probabilistic transformation (PT) to transform them into Bayesian BBAs. In order to use the proposed combination method in RSs, modified rigid coarsening is denoted mathematically as L in the following sections.
• Computational efficiency: As mentioned before, another advantage of the hierarchical combination method is its computational efficiency. Two experiments are conducted (all experiments are implemented on a PC with an i3 CPU, integrated graphics and 4 GB of DDR memory): 1) the number of singletons is fixed while the number of BBAs to be fused increases; 2) the number of BBAs is fixed while the number of singletons in the FoD increases. The results are illustrated in Figs 5 and 6. In experiment 1, all three methods (classical PCR6, rigid coarsening and modified rigid coarsening) run quickly (less than 1.2 s) even as the number of BBAs increases from 100 to 1000. However, the situation deteriorates when the number of focal elements increases. In Fig 6, when the number of focal elements reaches 500, the time consumption of the three combinations is: PCR6, 20.6857 s; modified rigid coarsening, 7.3320 s; rigid coarsening, 5.9748 s. This also shows that it is reasonable to map the original FoD to the coarsened FoD in order to reduce the number of focal elements at fusion time. In any case, the computational efficiency of rigid coarsening and modified rigid coarsening remains better than that of PCR6. On the other hand, modified rigid coarsening achieves a significant improvement in accuracy at the expense of part of the computational efficiency.

A recommender system integrating with hierarchical coarsening combination method
In today's e-commerce, online providers often recommend suitable goods or services to each consumer based on their personal opinions or preferences [21], [22]. However, providing appropriate recommendations is a tough task which faces several difficulties. One difficulty is that users' preferences are usually uncertain, imprecise or incomplete [23], [24], and therefore cannot be used directly in RSs. Besides, it is easy to understand that the more information about user preferences is available, the more accurate the predictions of RSs will be [25], [26]. The problem, then, is which method to adopt to integrate multi-source uncertain information.
As a general framework for information fusion, DST can not only model uncertain information, but also provide an efficient way to combine multi-source information. These features give this theory a wide range of applications [27][28][29], especially in RSs [23,25,[30][31][32]. According to DST, users' comments on products in RSs are described using mass functions, and rules of combination are frequently used in order to provide appropriate recommendations.
As mentioned in previous sections, the performances of combination rules both in DST and in DSmT suffer from computational complexity, which is ignored in [23,25]. Thus, in this paper, the modified rigid coarsening method is applied to combine imprecise users' preferences in RSs. First, we introduce the relevant background on RSs; almost all the characteristics of RSs used here have been introduced in [23,25,[30][31][32].
First, we give the corresponding mathematical notation of RSs based on DSmT. RSs usually involve two kinds of objects: {Users, Items}. A set of M users and a set of N items are denoted by U = {U₁, U₂, …, U_M} and I = {I₁, I₂, …, I_N}, respectively. Besides, we assume that users give ratings to items on L rating levels (Θ = {θ₁, θ₂, …, θ_L}); the L preference levels correspond to multi-level evaluation results. For example, a four-level user evaluation of a product could be {Excellent, Good, Fair, Poor}. r_{i,k} denotes the rating of user Uᵢ on item I_k, and the rating matrix R = {r_{i,k}} comprises all the ratings of users on items. Note that r_{i,k} is originally modeled as a mass function m_{i,k}: D^Θ → [0, 1]. Additionally, let I_i^R and U_k^R denote the set of items rated by user Uᵢ and the set of users having rated item I_k, respectively.
Contextual information can often be summarized into several genres that significantly affect a user's rating of items. We represent contextual information by a set of P genres, denoted S = {S₁, S₂, …, S_P}, and each genre S_p, with 1 ≤ p ≤ P, contains at most Q groups, denoted S_p = {g_{p,1}, g_{p,2}, …, g_{p,q}, …, g_{p,Q}}, 1 ≤ q ≤ Q. For a genre S_p ∈ S, a user Uᵢ ∈ U can be interested in several groups, and an item I_k ∈ I can belong to one or several groups of this genre, as illustrated in Fig 7 (contextual information).
Definition 4: To facilitate this description, two functions κ(·) and φ(·) are defined to determine the groups in which user Uᵢ is interested and the groups to which item I_k belongs, respectively. Generally, the main steps of a recommendation system are illustrated in Fig 8 and presented in detail in what follows.
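Definition 4 can be realised with simple set-valued lookups; the user, item and group names below are purely hypothetical:

```python
# Set-valued lookups realising kappa (user -> groups of interest) and
# phi (item -> groups it belongs to); all names are hypothetical.
user_groups = {"U1": {"g_comedy", "g_action"}, "U2": {"g_drama"}}
item_groups = {"I1": {"g_comedy"}, "I2": {"g_drama", "g_action"}}

def kappa(u):
    """Groups in which user u is interested (Definition 4)."""
    return user_groups.get(u, set())

def phi(i):
    """Groups to which item i belongs (Definition 4)."""
    return item_groups.get(i, set())

# a user and an item "meet" on the groups they share within a genre:
shared = kappa("U1") & phi("I2")  # {"g_action"}
```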

DSmT-Modeling Function
Following the DS partial probability models proposed in [23], each existing rating r_{i,k} of user Uᵢ on item I_k is modeled by the DSmT-modeling function M(·) in order to transform such hard ratings into the corresponding soft ratings, represented as m_{i,k}.
Definition 5: The mass of a hard rating θ_l is kept partly on the singleton θ_l and, otherwise, spread over the intersections (θ_{l−1} ∩ θ_l, θ_l ∩ θ_{l+1}) with the adjacent levels, where α_{i,k} ∈ [0, 1] is a trust factor and σ_{i,k} a dispersion factor [23].
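Since the full expression of M(·) is not reproduced above, the following sketch only illustrates its shape: the trust factor α keeps most of the mass on the hard rating θ_l, and the remainder is spread over the adjacent intersections. The dispersion-factor weighting σ_{i,k} of [23] is omitted, and the tuple encoding of focal elements is ours:

```python
def soft_rating(l, L, alpha=0.9):
    """Hypothetical sketch of the DSmT-modeling function M(.):
    the trust factor alpha stays on the singleton theta_l; the rest is
    spread evenly over the intersections with the adjacent levels.
    Focal elements are encoded as tuples (an encoding of ours)."""
    neigh = []
    if l > 1:
        neigh.append(("inter", l - 1, l))
    if l < L:
        neigh.append(("inter", l, l + 1))
    if not neigh:  # degenerate single-level frame
        return {("theta", l): 1.0}
    m = {("theta", l): alpha}
    for f in neigh:
        m[f] = (1.0 - alpha) / len(neigh)
    return m
```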
Referring to the partial probability model analysis in [23], we also give the corresponding user profiles, shown in Fig 9. Compared to [23], the difference is that we consider not only the union (black and gray rectangles) but also the intersection (red rectangle) of the hard ratings, which is precisely the distinction between DS theory and DSmT.
Lemma 1: Referring to Definition 5, we can also generate the relative refined BBA in the framework of DS theory, where α_{i,k} ∈ [0, 1] and σ_{i,k} are the trust factor and the dispersion factor, respectively [23]. After the soft ratings are generated, DSmP [33] is applied to decouple the non-Bayesian m_{i,k}, since the hierarchical fusion algorithm is currently only available for Bayesian BBAs.
As shown in [33], DSmP is a notable improvement over BetP and CuzzP, because it redistributes the ignorance masses to the singletons more judiciously.
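The idea of DSmP can be sketched as follows: the mass of each non-singleton focal element is redistributed to the singletons it contains, proportionally to their own singleton masses, with a small tuning parameter ε to handle zero masses. This is a sketch of the transformation of [33] for a frame under Shafer's model, not a full implementation over D^Θ:

```python
def dsmp(m, theta, eps=1e-3):
    """Sketch of the DSmP_eps transformation under Shafer's model.

    m maps frozensets of singleton labels to masses; the mass of each
    focal element is redistributed to the singletons it contains,
    proportionally to their singleton masses (+ eps so that zero-mass
    singletons still receive a share)."""
    p = {t: 0.0 for t in theta}
    for focal, mass in m.items():
        members = [t for t in theta if t in focal]
        denom = (sum(m.get(frozenset([t]), 0.0) for t in members)
                 + eps * len(members))
        for t in members:
            p[t] += mass * (m.get(frozenset([t]), 0.0) + eps) / denom
    return p
```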

Predicting unrated items:
Assuming that users who are keen on similar groups tend to have common preferences, it is necessary in this RS to predict the unrated items first. Considering a group g_{p,q} ∈ S_p with g_{p,q} ∈ φ(I_k), every soft rating m_{i,k} on item I_k from a user Uᵢ who is keen on group g_{p,q} is regarded as a block of common preference for group g_{p,q}. Thus G_m^{p,q,k}: D^Θ → [0, 1], which represents all users' group preference on item I_k regarding group g_{p,q}, is computed from these soft ratings. Supposing that item I_k has not been rated by user Uᵢ, generating the unprovided rating r_{i,k} of user Uᵢ usually involves three steps:
• Step one: Considering a genre S_p, for each group g_{p,q} ∈ κ_p(Uᵢ) ∩ φ_p(I_k), it is assumed that all users' group preference on item I_k regarding group g_{p,q} implies the common preference of Uᵢ on I_k regarding group g_{p,q}. Furthermore, this group preference is regarded as a piece of user Uᵢ's concept preference on item I_k regarding genre S_p, denoted by the mass function S_m^{p,q,k}: D^Θ → [0, 1].
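The paper's exact aggregation operator for G_m^{p,q,k} is not reproduced above; as a placeholder, the sketch below simply averages the soft ratings (mass vectors) of the users who are keen on the group:

```python
def group_preference(soft_ratings):
    """Group preference on one item for one group: a simple average of
    the member users' soft ratings (mass vectors over the L levels).
    This averaging is a placeholder for the paper's aggregation."""
    n = len(soft_ratings[0])
    s = len(soft_ratings)
    return [sum(m[i] for m in soft_ratings) / s for i in range(n)]
```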
So far, all unprovided ratings are predicted in this RS. Subsequently, user-user similarities are computed depending on both provided and predicted ratings in the following steps.

Computing user-user similarities:
Here, we use the distance measure proposed in [34] to calculate the distance between two users Uᵢ and U_j with i ≠ j, where m_{i,k} and m_{j,k} are the soft ratings of users Uᵢ and U_j on item I_k, respectively. From this distance, the degree of similarity between Uᵢ and U_j, denoted s_{i,j}, is calculated. Obviously, a high value of s_{i,j} means that users Uᵢ and U_j are very close, and vice versa. Eventually, the matrix S = {s_{i,j} | Uᵢ, U_j ∈ U, i ≠ j} represents the similarities among all users.
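A sketch of this step follows, substituting a normalised Euclidean distance for the distance of [34] (an assumption on our part) and mapping the average distance d over co-rated items to a similarity 1 − d:

```python
import math

def bba_distance(m1, m2):
    # stand-in for the distance of [34]: Euclidean distance between
    # two Bayesian mass vectors, scaled into [0, 1]
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(m1, m2)) / 2)

def similarity(ratings_i, ratings_j):
    """Similarity s_ij of two users from the soft ratings of their
    co-rated items: 1 minus the average rating distance (sketch)."""
    d = (sum(bba_distance(a, b) for a, b in zip(ratings_i, ratings_j))
         / len(ratings_i))
    return 1.0 - d
```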

Selecting neighbors based on user-user similarities:
For an active user Uᵢ and each item I_k unrated by Uᵢ, a set of K nearest neighbors, denoted N_{i,k}, is chosen using the method proposed in [35]. The two steps of this method are:
• Step one: the selection depends on two criteria: 1. the users must have rated I_k, and 2. their user-user similarity with Uᵢ must be equal to or greater than the threshold τ.
• Step two: all members of the candidate set are sorted in descending order of s_{i,j}, and the top K members are selected as the neighborhood set N_{i,k}.
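The two-step selection can be sketched directly (the dict-based encoding is ours):

```python
def select_neighbors(sim_row, raters, tau, K):
    """Neighbour selection for user U_i on item I_k.

    sim_row: dict user -> similarity s_ij with U_i;
    raters: users who have rated I_k.
    Step one keeps raters with similarity >= tau; step two sorts them
    by descending similarity and keeps the top K."""
    candidates = [u for u in raters if sim_row.get(u, 0.0) >= tau]
    candidates.sort(key=lambda u: sim_row[u], reverse=True)
    return candidates[:K]
```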

Estimating ratings according to neighborhoods:
Supposing that item I_k has not been rated by user Uᵢ, the predicted rating of Uᵢ on item I_k is denoted m̂_{i,k} and is calculated from the ratings of user Uᵢ's nearest neighbors, i.e., from the neighborhood's whole preference for item I_k in the set of Eq (23). Considering a user U_j ∈ N_{i,k} with similarity s_{i,j} to user Uᵢ, we use the discount rate 1 − s_{i,j} to discount the rating of user U_j on item I_k before combination.
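As a simplification of the evidential combination, the sketch below weights each neighbour's soft rating by s_{i,j} (equivalently, discounts it at rate 1 − s_{i,j}) and renormalises the weighted sum; the paper's actual rule fuses the discounted BBAs instead:

```python
def estimate_rating(neighbor_ratings, sims):
    """Predicted soft rating of U_i on I_k from the neighbourhood:
    each neighbour's mass vector is weighted by s_ij (i.e. discounted
    at rate 1 - s_ij) and the weighted sum is renormalised. A simple
    weighted mean replaces the paper's evidential combination here."""
    n = len(neighbor_ratings[0])
    out = [0.0] * n
    for m, s in zip(neighbor_ratings, sims):
        for i in range(n):
            out[i] += s * m[i]
    total = sum(out)
    return [x / total for x in out] if total > 0 else [1.0 / n] * n
```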

Experiments
To evaluate the performance of modified rigid coarsening in terms of recommendation precision and computational time, the original rigid coarsening method and the classical PCR6 combination method are taken as baselines. Besides, we use DS-MAE [23] to measure the precision of recommendations.
Definition 7: DS-MAE is the mean absolute error computed per rating level, where D_j is the testing set identifying the user-item pairs whose true rating is θ_j ∈ Θ.
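Since the DS-MAE formula itself is given in [23], the sketch below only illustrates the idea: collapse each predicted soft rating to an expected rating level, take the absolute error against the true level, average within each true-rating set D_j, then average over the levels. The exact weighting in [23] may differ:

```python
def ds_mae(predictions, truths, levels):
    """DS-MAE-style error (sketch): each predicted Bayesian mass
    vector is collapsed to its expected rating level, compared with
    the true hard rating, averaged within each true-rating set D_j,
    then averaged over the levels that occur."""
    per_level = []
    for lvl in levels:
        errs = [abs(sum(p * l for p, l in zip(pred, levels)) - t)
                for pred, t in zip(predictions, truths) if t == lvl]
        if errs:
            per_level.append(sum(errs) / len(errs))
    return sum(per_level) / len(per_level)
```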
The users' specific interest information about genres is unknown. Thus, we adopt the rule that if a user has rated an item, then this user is interested in all the genres to which the item belongs.

Experiment One:
Movielens (http://grouplens.org/datasets/movielens) is a movie recommendation dataset widely used for benchmarking. It contains nearly 100,000 hard ratings on 19 different types of movies (Action, Comedy and so on). The rating domain in Movielens has 5 levels, denoted Θ = {1, 2, 3, 4, 5}. Moreover, each user has rated at least 20 movies, which ensures adequate rating information.
The relevant parameters used in the RSs are set as γ = 10⁻⁴ and ∀(i, k) {α_{i,k}, σ_{i,k}} = {0.9, 2/3}. However, setting the parameter τ to a fixed value would be unreasonable, because the similarities between users differ considerably across combination methods. Hence, in this paper, the value of τ is not set in advance; instead, it is determined from the similarities in the matrix S. Specifically, the similarity value delimiting the top 30% in S is selected as τ. Additionally, we adopt the robust strategy of 10-fold cross-validation, which is widely used in experimental verification. The specific steps are as follows: the original ratings in Movielens are first randomly divided into 10 folds and the experiment is carried out 10 times; in each sub-experiment, nine tenths of the ratings are used as training data and the remaining ratings as testing data. All results reported in the following experiments are averages over the 10 runs. Fig 10 shows the overall DS-MAE as the neighborhood size K varies (smaller DS-MAE values are better). As can be seen in Fig 10, for K ≤ 70 the performances of the three methods improve sharply and are essentially identical; for K ≥ 70 the performances become stable. In particular, the performance of the modified rigid coarsening method is very close to that of the classical PCR6 rule, while the original rigid coarsening is slightly worse than the other two algorithms. Fig 11 depicts the computational time as the neighborhood size K varies. In this figure, the time taken by the hierarchical coarsening combination methods (both rigid coarsening and modified rigid coarsening) is much shorter than that of classical PCR6; modified rigid coarsening is somewhat slower than the original rigid coarsening.
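The 10-fold protocol described above can be sketched as follows (the record encoding is ours):

```python
import random

def ten_fold_splits(ratings, seed=0):
    """Yield the 10 (train, test) splits of the cross-validation
    protocol: shuffle the rating records once, cut them into 10 folds,
    and let each fold serve as the test set exactly once."""
    idx = list(range(len(ratings)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::10] for i in range(10)]
    for k in range(10):
        test = set(folds[k])
        train = [ratings[i] for i in idx if i not in test]
        yield train, [ratings[i] for i in folds[k]]
```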
All these results illustrate that the modified rigid coarsening method sacrifices some computational efficiency in exchange for improved approximation accuracy.
2. Experiment Two: Flixster (http://www.cs.ubc.ca/jamalim/datasets/) is a classical recommendation dataset containing nearly 535,013 hard ratings on 19 different types of movies (Drama, Comedy and so on). The rating domain in Flixster has 10 levels, denoted Θ = {0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0}. Moreover, each user has rated at least 15 movies, which ensures adequate rating information. The relevant parameters used in the RSs are set as γ = 10⁻⁴ and ∀(i, k) {α_{i,k}, σ_{i,k}} = {0.9, 2/3}. As before, the value of τ is not set in advance but determined from the similarities in the matrix S; here, the similarity value delimiting the top 50% in S is selected as τ. Fig 12 shows the overall DS-MAE as the neighborhood size K varies (smaller values are better). As can be seen in Fig 12, the results are similar to those of the previous dataset (Movielens). In particular, the performance of the modified rigid coarsening method lies between those of the compared methods, while the original rigid coarsening is worse than the other two algorithms. Fig 13 depicts the computational time as the neighborhood size K varies. From this figure, we reach the same conclusion: the time taken by the hierarchical coarsening combination methods (both rigid coarsening and modified rigid coarsening) is much shorter than that of classical PCR6.

Conclusion
In this paper, we proposed a new combination method, called the modified rigid coarsening method, which maps the original refined FoD to a coarsened FoD during the combination. Compared to the traditional PCR6 fusion rule of DSmT, this approach not only reduces the computational complexity but also ensures high approximation accuracy. Besides, to verify the practicality of our approach, we applied it to fuse soft ratings in RSs. Specifically, user preferences are first transformed by the DSmT partial probability model to accurately represent uncertain information; then, information about user preferences from different sources can easily be combined. In future work, more helpful information will be mined to group the focal elements of the FoD so as to improve the approximation accuracy, and more datasets will be used.