PLOS ONE | Research Article

A fast combination method in DSmT and its application to recommender system

Yilin Dong^1, Xinde Li^1*, Yihai Liu^2

1 Key Laboratory of Measurement and Control of CSE, Ministry of Education, School of Automation, Southeast University, Nanjing, Jiangsu Province, China
2 Jiangsu Automation Research Institute, Lianyungang, Jiangsu Province, China

Editor: Yong Deng, Southwest University, China
The authors have declared that no competing interests exist.
* E-mail: xindeli@seu.edu.cn

Received: August 28, 2017; Accepted: November 30, 2017

© 2018 Dong et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Abstract

In many applications involving epistemic uncertainties, usually modeled by belief functions, it is often necessary to approximate general (non-Bayesian) basic belief assignments (BBAs) by subjective probabilities (called Bayesian BBAs). This necessity arises when one needs to embed the fusion result in a system based on the probabilistic framework and Bayesian inference (e.g. tracking systems), or when one needs to make a decision. In this paper, we present a new fast combination method, called modified rigid coarsening (MRC), which obtains the final Bayesian BBAs through a hierarchical decomposition (coarsening) of the frame of discernment. In this method, focal elements with probabilities are coarsened efficiently, using a disagreement vector and a simple dichotomous approach, to reduce the computational complexity of the combination. To demonstrate the practicality of our approach, it is applied to combine users' soft preferences in recommender systems (RSs). Additionally, to make a comprehensive performance comparison, the proportional conflict redistribution rule #6 (PCR6) is taken as a baseline in a range of experiments. According to the experimental results, MRC produces more accurate recommendations than the original rigid coarsening (RC) method while remaining comparable in computational time.
Funding: This work was supported in part by the National Natural Science Foundation of China under Grants 61573097 and 91748106, in part by the Key Laboratory of Integrated Automation of Process Industry (PAL-N201704), in part by the Qing Lan Project and Six Major Top-talent Plan, and in part by the Priority Academic Program Development of Jiangsu Higher Education Institutions, to XL. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Data Availability: All relevant data are within the paper and its Supporting Information files. The two datasets, Movielens and Flixster, are available on Figshare at DOIs 10.6084/m9.figshare.5677750 and 10.6084/m9.figshare.5677741, respectively.

Introduction
The theory of belief functions, known as Dempster-Shafer Theory (DST), was developed by Shafer [1] in 1976 from Dempster's works [2]. Belief functions allow one to model epistemic uncertainty [3] and have been used in many applications since the 1990s [4], mainly those relevant to expert systems, decision-making support and information fusion. To palliate some limitations of DST (such as its high computational complexity), Dezert and Smarandache proposed an extended mathematical framework of belief functions with new efficient quantitative and qualitative rules of combination, called DSmT (Dezert and Smarandache Theory) in the literature [5, 6], with applications listed in [7]. One of the major drawbacks of both DST and DSmT is their high computational complexity when the fusion space (i.e. the frame of discernment, FoD) and the number of sources to combine are large. DSmT is more complex than DST, and the proportional conflict redistribution rule #6 (PCR6) becomes computationally intractable in the worst case as soon as the cardinality of the FoD exceeds six.
To reduce the computational cost of operations with belief functions when the number of focal elements is very large, several approaches have been proposed. Basically, the existing approaches rely either on efficient implementations of the computations, as proposed for instance in [8, 9], or on approximation techniques applied to the original basic belief assignments (BBAs) to combine [10–14], or both. From a fusion standpoint, two approaches are usually adopted: 1) approximate each BBA first by a subjective probability and use Bayes' fusion rule to get the final Bayesian BBA [11, 12], or 2) fuse all the BBAs with a fusion rule, typically Dempster-Shafer's rule or the proportional conflict redistribution rule #6 (PCR6), which is very costly in computations, and convert the combined BBA into a subjective probability measure [10, 14]. The former method is the simplest but incurs a high loss of the information included in the original BBAs, whereas the latter is intractable for high-dimensional problems.
This paper presents a new combination method, called modified rigid coarsening (MRC), to get the final Bayesian BBAs based on a hierarchical decomposition (coarsening) of the frame of discernment, which can be seen as an intermediary approach between the two aforementioned methods. This hierarchical structure encompasses both the bintree decomposition and the masses of the coarsened FoD defined on it. To demonstrate the practicality of the proposed method, MRC is applied to combine users' preferences so as to provide suitable recommendations in RSs. Preliminary work on the original rigid coarsening (RC) was published in our recent work [15] (this is an extended version of the paper presented at the 20th IEEE International Conference on Information Fusion, Xi'an, China). In this paper, more detailed analyses of this new combination method are provided and, more importantly, the method is also applied to a real application. These are the added contributions of this paper.
The main contributions of this paper are:
the presentation of the bintree decomposition of the FoD, on which the BBA approximations are performed;
the modeling of user preferences in recommender systems (RSs) by a DSmT-modeling function.
In order to measure the efficiency and effectiveness of MRC, it is integrated in RSs based on DSmT and compared to traditional methods in the experiments. The results show that, regarding the accuracy of recommendations, MRC is extremely close to classical PCR6, while its computational time is clearly superior to that of PCR6.
The remainder of this paper is organized as follows. In section 2, we review relevant prior work on DST and DSmT first. In section 3, MRC is presented. In section 4, a recommendation system based on DSmT, that employs MRC to combine users’ preferences, is shown. In section 5, we evaluate our proposed algorithm based on two public datasets: Movielens and Flixster. Finally, we conclude and discuss future work.
Mathematical background
This section provides a brief reminder of the basics of DST and DSmT, which is necessary for the presentation and understanding of the more general MRC of Section 3.
In the DST framework, the frame of discernment (here, the symbol ≜ means "equals by definition") $\Theta \triangleq \{\theta_1, \ldots, \theta_n\}$ (n ≥ 2) is a set of exhaustive and exclusive elements (hypotheses) representing the possible solutions of the problem under consideration; Shafer's model thus assumes θ_i ∩ θ_j = ∅ for i ≠ j in {1, …, n}. A basic belief assignment (BBA) m(⋅) is defined by the mapping 2^Θ ↦ [0, 1], verifying m(∅) = 0 and $\sum_{A \in 2^{\Theta}} m(A) = 1$. In DSmT, one can abandon Shafer's model (if it does not fit the problem) and refute the principle of the third excluded middle, which assumes the existence of the complement of any element/proposition belonging to the power set 2^Θ. Instead of defining the BBAs on the power set $2^{\Theta} \triangleq (\Theta, \cup)$ of the FoD, the BBAs are defined on the so-called hyper-power set (or Dedekind's lattice) $D^{\Theta} \triangleq (\Theta, \cup, \cap)$, whose cardinality follows Dedekind's numbers sequence; see [6], Vol. 1, for details and examples. A (generalized) BBA, called a mass function, m(⋅) is defined by the mapping D^Θ ↦ [0, 1], verifying m(∅) = 0 and $\sum_{A \in D^{\Theta}} m(A) = 1$. The DSmT framework encompasses the DST framework because 2^Θ ⊂ D^Θ. In DSmT, one can also take into account a set of integrity constraints on the FoD (if known), by specifying all the pairs of elements which are really disjoint; stated otherwise, Shafer's model is the specific DSm model in which all elements are deemed disjoint. A ∈ D^Θ is called a focal element of m(.) if m(A) > 0. A BBA is called Bayesian if all of its focal elements are singletons and Shafer's model is assumed; otherwise it is called non-Bayesian [1]. A fully ignorant source is represented by the vacuous BBA m_v(Θ) = 1. The belief (or credibility) and plausibility functions are respectively defined by $Bel(X) \triangleq \sum_{Y \in D^{\Theta} \mid Y \subseteq X} m(Y)$ and $Pl(X) \triangleq \sum_{Y \in D^{\Theta} \mid Y \cap X \neq \emptyset} m(Y)$. $BI(X) \triangleq [Bel(X), Pl(X)]$ is called the belief interval of X; its length $U(X) \triangleq Pl(X) - Bel(X)$ measures the degree of uncertainty of X.
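For concreteness, the belief and plausibility functions above can be sketched in a few lines of Python. This is a minimal illustration under Shafer's model, with focal elements stored as frozensets; the frame and mass values below are invented for the example:

```python
def belief(m, X):
    """Bel(X): total mass of focal elements included in X."""
    return sum(v for A, v in m.items() if A <= X)

def plausibility(m, X):
    """Pl(X): total mass of focal elements intersecting X."""
    return sum(v for A, v in m.items() if A & X)

# A toy BBA on Theta = {1, 2, 3}: m({1}) = 0.5, m({2, 3}) = 0.3, m(Theta) = 0.2
m = {frozenset({1}): 0.5, frozenset({2, 3}): 0.3, frozenset({1, 2, 3}): 0.2}
X = frozenset({1})
bel, pl = belief(m, X), plausibility(m, X)
# BI(X) = [0.5, 0.7]; U(X) = Pl(X) - Bel(X) = 0.2 measures the uncertainty of X
```

The same two functions work unchanged for Bayesian BBAs, where every focal element is a singleton and Bel(X) = Pl(X) for every X.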
To combine BBAs in the DST framework, Shafer proposed Dempster's rule in 1976 (we use the DS index to refer to Dempster-Shafer's rule because Shafer strongly promoted Dempster's rule in his milestone book [1]). The DS rule is defined by m_DS(∅) = 0 and, ∀A ∈ 2^Θ\{∅},
$$ m_{DS}(A) = \frac{\sum_{B,C \in 2^{\Theta} \mid B \cap C = A} m_1(B)\, m_2(C)}{1 - \sum_{B,C \in 2^{\Theta} \mid B \cap C = \emptyset} m_1(B)\, m_2(C)} \tag{1} $$
The DS rule formula is commutative and associative and can be easily extended to the fusion of S > 2 BBAs. Unfortunately, DS rule has been highly disputed during the last decades by many authors because of its counter-intuitive behavior in high or even low conflict situations, and that is why many rules of combination were proposed in literature to combine BBAs [16]. To palliate DS rule drawbacks, the very interesting PCR6 was proposed in DSmT and it is usually adopted (PCR6 rule coincides with PCR5 when combining only two BBAs [6]) in recent applications of DSmT. The fusion of two BBAs m_{1}(.) and m_{2}(.) by the PCR6 rule is obtained by m_{PCR6}(∅) = 0 and ∀A ∈ D^{Θ}\{∅}
$$ m_{PCR6}(A) = m_{12}(A) + \sum_{B \in D^{\Theta} \setminus \{A\} \mid A \cap B = \emptyset} \left[ \frac{m_1(A)^2\, m_2(B)}{m_1(A) + m_2(B)} + \frac{m_2(A)^2\, m_1(B)}{m_2(A) + m_1(B)} \right] \tag{2} $$
where $m_{12}(A) = \sum_{B,C \in D^{\Theta} \mid B \cap C = A} m_1(B) m_2(C)$ is the conjunctive operator, and the elements A and B are expressed in their disjunctive normal forms. If the denominator of a fraction is zero, that fraction is discarded. The general PCR6 formula for combining more than two BBAs altogether is given in [6], Vol. 3. We adopt the generic notation $m_{12}^{PCR6}(.) = PCR6(m_1(.), m_2(.))$ to denote the fusion of m_1(.) and m_2(.) by the PCR6 rule. PCR6 is not associative, and it can also be applied in the DST framework (with Shafer's model of the FoD) by replacing D^Θ with 2^Θ in Eq (2).
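As an illustration, Eq (2) for two sources (where PCR6 coincides with PCR5) can be implemented directly. This is only a sketch for Shafer's model, with focal elements as frozensets; the two example BBAs are invented:

```python
def pcr6_two(m1, m2):
    """PCR6 (= PCR5) fusion of two BBAs under Shafer's model.
    Non-conflicting products go to the intersection (conjunctive part);
    each conflicting product m1(A)m2(B), with A and B disjoint, is sent
    back to A and B proportionally to m1(A) and m2(B), as in Eq (2)."""
    fused = {}
    for A, a in m1.items():
        for B, b in m2.items():
            inter = A & B
            if inter:
                fused[inter] = fused.get(inter, 0.0) + a * b
            elif a + b > 0:  # discard the fraction when its denominator is zero
                fused[A] = fused.get(A, 0.0) + a * a * b / (a + b)
                fused[B] = fused.get(B, 0.0) + b * b * a / (a + b)
    return fused

t1, t2 = frozenset({'t1'}), frozenset({'t2'})
m = pcr6_two({t1: 0.6, t2: 0.4}, {t1: 0.3, t2: 0.7})
# the fused masses still sum to one; the conflict is split between t1 and t2
```

Because all of the conflicting mass is redistributed rather than discarded, the output remains normalized without the division used in the DS rule.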
Modified rigid coarsening for fusion of Bayesian BBAs
Here, we introduce the principle of MRC of the FoD to reduce the computational complexity of the PCR6 combination of original Bayesian BBAs. In the case of non-Bayesian BBAs, all non-singleton focal elements must first be decoupled using DSmP, as will be explained in Section 4.
Rigid coarsening
This proposal was initially called rigid coarsening (RC) in our previous works [17–19] and was recently improved in [15]. The goal of this coarsening is to replace the original (refined) FoD Θ by a set of coarsened ones to make the computation of the PCR6 rule tractable. Because we consider here only Bayesian BBAs to combine, their focal elements are only singletons of the FoD $\Theta \triangleq \{\theta_1, \ldots, \theta_n\}$, with n ≥ 2, and we assume Shafer's model of the FoD Θ. A coarsening of the FoD Θ means replacing it with another, less specific FoD of smaller dimension, Ω = {ω_1, …, ω_k} with k < n, built from the elements of Θ. This can be done in many ways depending on the problem under consideration. Generally, the elements of Ω are singletons of Θ or disjunctions of elements of Θ. For example, if Θ = {θ_1, θ_2, θ_3, θ_4}, then a possible coarsened frame built from Θ could be, for instance, Ω = {ω_1 = θ_1, ω_2 = θ_2, ω_3 = θ_3 ∪ θ_4}, or Ω = {ω_1 = θ_1 ∪ θ_2, ω_2 = θ_3 ∪ θ_4}, etc.
Definition 1: When dealing with Bayesian BBAs, the projection m^Ω(.) of the original BBA m^Θ(.) (for clarity and convenience, the FoD to which a belief mass refers is written explicitly as an upper index) is simply obtained by taking
$$ m^{\Omega}(\omega_i) = \sum_{\theta_j \subseteq \omega_i} m^{\Theta}(\theta_j) \tag{3} $$
The rigid coarsening process is a simple dichotomous approach of coarsening obtained as follows:
If n = |Θ| is an even number:
The disjunction of the first n/2 elements θ_1, …, θ_{n/2} of Θ defines the element ω_1 of Ω, and the last n/2 elements θ_{n/2+1}, …, θ_n of Θ define the element ω_2 of Ω, that is
$$ \Omega \triangleq \{\omega_1 = \theta_1 \cup \ldots \cup \theta_{n/2},\ \omega_2 = \theta_{n/2+1} \cup \ldots \cup \theta_n\} \tag{4} $$
and based on Eq (3), one has
$$ m^{\Omega}(\omega_1) = \sum_{j=1}^{n/2} m^{\Theta}(\theta_j), \qquad m^{\Omega}(\omega_2) = \sum_{j=n/2+1}^{n} m^{\Theta}(\theta_j) \tag{5} $$
For example, if Θ = {θ_{1}, θ_{2}, θ_{3}, θ_{4}}, and one considers the Bayesian BBA m^{Θ}(θ_{1}) = 0.1, m^{Θ}(θ_{2}) = 0.2, m^{Θ}(θ_{3}) = 0.3 and m^{Θ}(θ_{4}) = 0.4, then Ω = {ω_{1} = θ_{1} ∪ θ_{2}, ω_{2} = θ_{3} ∪ θ_{4}} and m^{Ω}(ω_{1}) = 0.1 + 0.2 = 0.3 and m^{Ω}(ω_{2}) = 0.3 + 0.4 = 0.7.
If n = |Θ| is an odd number:
In this case, the element ω_1 of the coarsened frame Ω is the disjunction of the first [n/2 + 1] elements of Θ (the notation [x] means the integer part of x), and the element ω_2 is the disjunction of the remaining elements of Θ. That is
$$ \Omega \triangleq \{\omega_1 = \theta_1 \cup \ldots \cup \theta_{[n/2+1]},\ \omega_2 = \theta_{[n/2+1]+1} \cup \ldots \cup \theta_n\} \tag{6} $$
and based on Eq (3), one has
$$ m^{\Omega}(\omega_1) = \sum_{j=1}^{[n/2+1]} m^{\Theta}(\theta_j), \qquad m^{\Omega}(\omega_2) = \sum_{j=[n/2+1]+1}^{n} m^{\Theta}(\theta_j) \tag{7} $$
For example, if Θ = {θ_{1}, θ_{2}, θ_{3}, θ_{4}, θ_{5}}, and one considers the Bayesian BBA m^{Θ}(θ_{1}) = 0.1, m^{Θ}(θ_{2}) = 0.2, m^{Θ}(θ_{3}) = 0.3, m^{Θ}(θ_{4}) = 0.3 and m^{Θ}(θ_{5}) = 0.1, then Ω = {ω_{1} = θ_{1} ∪ θ_{2} ∪ θ_{3}, ω_{2} = θ_{4} ∪ θ_{5}} and m^{Ω}(ω_{1}) = 0.1 + 0.2 + 0.3 = 0.6 and m^{Ω}(ω_{2}) = 0.3 + 0.1 = 0.4.
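Both the even and odd splitting rules (Eqs (4)–(7)) amount to sending the first half (rounded up) of the singletons to ω_1 and the rest to ω_2. A small sketch, where the list order is assumed to match θ_1, …, θ_n:

```python
def dichotomous_coarsen(masses):
    """Project a Bayesian BBA (list of singleton masses, in frame order)
    onto the coarsened frame {omega_1, omega_2}: the first n/2 singletons
    (or [n/2 + 1] of them when n is odd) go to omega_1, the rest to omega_2."""
    n = len(masses)
    k = n // 2 if n % 2 == 0 else n // 2 + 1
    return sum(masses[:k]), sum(masses[k:])

# Even case (|Theta| = 4) and odd case (|Theta| = 5) from the two examples above
w_even = dichotomous_coarsen([0.1, 0.2, 0.3, 0.4])       # approx. (0.3, 0.7)
w_odd = dichotomous_coarsen([0.1, 0.2, 0.3, 0.3, 0.1])   # approx. (0.6, 0.4)
```

The returned pair (m^Ω(ω_1), m^Ω(ω_2)) always sums to one, since the projection of Eq (3) only regroups the singleton masses.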
Of course, the same coarsening strategy applies to all original BBAs msΘ(.), s = 1, …S of the S > 1 sources of evidence to work with less specific BBAs msΩ(.), s = 1, …S. The less specific BBAs (called coarsened BBAs by abuse of language) can then be combined with the PCR6 rule of combination according to formula Eq (2). This dichotomous coarsening method is repeated iteratively l times as schematically represented by a bintree. Here, we consider bintree only for simplicity, which means that the coarsened frame Ω consists of two elements only. Of course a similar method can be used with tri-tree, quad-tree, etc. The last step of this hierarchical process is to calculate the combined (Bayesian) BBA of all focal elements according to the connection weights of the bintree structure, where the number of layers l of the tree depends on the cardinality |Θ| of the original FoD Θ. Specifically, the mass of each focal element is updated depending on the connection weights of link paths from root to terminal nodes. This principle is illustrated in details in the following example.
Example 1: Let us consider Θ = {θ_1, θ_2, θ_3, θ_4, θ_5} and the three Bayesian BBAs given in Table 1:
Table 1. Three Bayesian BBAs for Example 1.

Focal elem.   m_1^Θ(.)   m_2^Θ(.)   m_3^Θ(.)
θ_1           0.1        0.4        0
θ_2           0.2        0          0.1
θ_3           0.3        0.1        0.5
θ_4           0.3        0.1        0.4
θ_5           0.1        0.4        0
The rigid coarsening and fusion of BBAs is deduced from the following steps:
Step 1: We define the bintree structure based on iterative half split of FoD as shown in Fig 1.
Fig 1. Fusion of Bayesian BBAs using bintree coarsening for Example 1.
The connecting weights are denoted as λ_{1}, …, λ_{8}. The elements of the frames Ω_{l} are defined as follows:
At layer l = 1: Ω1={ω1≜θ1∪θ2∪θ3,ω2≜θ4∪θ5}
At layer l = 2:
Ω2={ω11≜θ1∪θ2,ω12≜θ3,ω21≜θ4,ω22=θ5}
At layer l = 3: Ω3={ω111≜θ1,ω112≜θ2}
Step 2: The BBAs of elements of the (sub-) frames Ω_{l} are obtained as follows:
At layer l = 1, we use Eqs (6) and (7) because |Θ| = 5 is an odd number. Therefore, we get the BBAs in Table 2:
At layer l = 2: We work with the two subframes Ω21≜{ω11,ω12} and Ω22≜{ω21,ω22} of Ω_{2} with the BBAs in Tables 3 and 4:
These mass values are obtained by the proportional redistribution of the mass of each focal element with respect to the mass of its parent focal element in the bin tree. For example, m2Ω21(ω11)=4/5 is derived by taking
$$ m_2^{\Omega_{21}}(\omega_{11}) = \frac{m_2^{\Theta}(\theta_1) + m_2^{\Theta}(\theta_2)}{m_2^{\Theta}(\theta_1) + m_2^{\Theta}(\theta_2) + m_2^{\Theta}(\theta_3)} = \frac{0.4}{0.5} = \frac{4}{5} $$
Other masses of coarsening focal elements are computed similarly using this proportional redistribution method.
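The proportional redistribution used above is simply a conditioning of the singleton masses on the mass of the parent node. A sketch, with singleton masses indexed 0-based:

```python
def conditional_mass(m, group, parent):
    """Mass of the coarsened element `group`, conditioned on its parent
    focal element `parent` (a superset of `group`) in the bintree.
    Returns None when the parent carries no mass (undefined case)."""
    num = sum(m[i] for i in group)
    den = sum(m[i] for i in parent)
    return num / den if den > 0 else None

m2 = [0.4, 0.0, 0.1, 0.1, 0.4]                  # m_2^Theta of Example 1
w11 = conditional_mass(m2, [0, 1], [0, 1, 2])   # (0.4 + 0)/0.5 = 4/5
```

When the parent mass is zero the conditional mass is left undefined, which corresponds to the dashes in the tables of the MRC example further below.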
At layer l = 3: We use again the proportional redistribution method which gives us the BBAs of the sub-frames Ω_{3} in Table 5:
Table 2. The BBAs of the elements of the sub-frame Ω_1 for Example 1.

Focal elem.             m_1^{Ω_1}(.)   m_2^{Ω_1}(.)   m_3^{Ω_1}(.)
ω_1 ≜ θ_1 ∪ θ_2 ∪ θ_3   0.6            0.5            0.6
ω_2 ≜ θ_4 ∪ θ_5         0.4            0.5            0.4
Table 3. The BBAs of the elements of the sub-frame Ω_21 for Example 1.

Focal elem.        m_1^{Ω_21}(.)   m_2^{Ω_21}(.)   m_3^{Ω_21}(.)
ω_11 ≜ θ_1 ∪ θ_2   1/2             4/5             1/6
ω_12 ≜ θ_3         1/2             1/5             5/6
Table 4. The BBAs of the elements of the sub-frame Ω_22 for Example 1.

Focal elem.   m_1^{Ω_22}(.)   m_2^{Ω_22}(.)   m_3^{Ω_22}(.)
ω_21 ≜ θ_4    3/4             1/5             1
ω_22 ≜ θ_5    1/4             4/5             0
Table 5. The BBAs of the elements of the sub-frame Ω_3 for Example 1.

Focal elem.   m_1^{Ω_3}(.)   m_2^{Ω_3}(.)   m_3^{Ω_3}(.)
ω_111 ≜ θ_1   1/3            1              0
ω_112 ≜ θ_2   2/3            0              1
Step 3: The connection weights λ_i are computed from the assignments of the coarsened elements. In each layer l, we fuse the three BBAs sequentially using the PCR6 formula Eq (2). Because PCR6 fusion is not associative, the general PCR6 formula would give the best results; here we use sequential fusion to reduce the computational complexity, even though the fusion result is then only approximate. More precisely, we compute first $m_{12}^{PCR6,\Omega_l}(.) = PCR6(m_1^{\Omega_l}(.), m_2^{\Omega_l}(.))$ and then $m_{(12)3}^{PCR6,\Omega_l}(.) = PCR6(m_{12}^{PCR6,\Omega_l}(.), m_3^{\Omega_l}(.))$. Hence, we obtain the following connection weights in the bintree:
At layer l = 1:
$$ \lambda_1 = m_{(12)3}^{PCR6,\Omega_1}(\omega_1) = 0.6297, \qquad \lambda_2 = m_{(12)3}^{PCR6,\Omega_1}(\omega_2) = 0.3703 $$
At layer l = 2:
$$ \lambda_3 = m_{(12)3}^{PCR6,\Omega_{21}}(\omega_{11}) = 0.4137, \qquad \lambda_4 = m_{(12)3}^{PCR6,\Omega_{21}}(\omega_{12}) = 0.5863 $$
$$ \lambda_5 = m_{(12)3}^{PCR6,\Omega_{22}}(\omega_{21}) = 0.8121, \qquad \lambda_6 = m_{(12)3}^{PCR6,\Omega_{22}}(\omega_{22}) = 0.1879 $$
At layer l = 3:
$$ \lambda_7 = m_{(12)3}^{PCR6,\Omega_3}(\omega_{111}) = 0.3103, \qquad \lambda_8 = m_{(12)3}^{PCR6,\Omega_3}(\omega_{112}) = 0.6897 $$
Step 4: The final assignments of elements in original FoD Θ are calculated using the product of the connection weights of link paths from root (top) node to terminal nodes (leaves). We eventually get the combined and normalized Bayesian BBA:
$$
\begin{aligned}
m^{\Theta}(\theta_1) &= \lambda_1 \lambda_3 \lambda_7 = 0.6297 \cdot 0.4137 \cdot 0.3103 = 0.0808\\
m^{\Theta}(\theta_2) &= \lambda_1 \lambda_3 \lambda_8 = 0.6297 \cdot 0.4137 \cdot 0.6897 = 0.1797\\
m^{\Theta}(\theta_3) &= \lambda_1 \lambda_4 = 0.6297 \cdot 0.5863 = 0.3692\\
m^{\Theta}(\theta_4) &= \lambda_2 \lambda_5 = 0.3703 \cdot 0.8121 = 0.3007\\
m^{\Theta}(\theta_5) &= \lambda_2 \lambda_6 = 0.3703 \cdot 0.1879 = 0.0696
\end{aligned}
$$
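Step 4 is just a product of connection weights along each root-to-leaf path. This can be sketched with the weights computed in Step 3 (the path table encodes Fig 1):

```python
# Connection weights lambda_1, ..., lambda_8 from Step 3 of Example 1
lam = {1: 0.6297, 2: 0.3703, 3: 0.4137, 4: 0.5863,
       5: 0.8121, 6: 0.1879, 7: 0.3103, 8: 0.6897}

# Root-to-leaf paths of the bintree of Fig 1, one per singleton of Theta
paths = {'theta1': [1, 3, 7], 'theta2': [1, 3, 8], 'theta3': [1, 4],
         'theta4': [2, 5], 'theta5': [2, 6]}

m_final = {}
for leaf, path in paths.items():
    w = 1.0
    for i in path:          # multiply the link weights along the path
        w *= lam[i]
    m_final[leaf] = w
# m_final approx. {theta1: 0.0808, theta2: 0.1797, theta3: 0.3692,
#                  theta4: 0.3007, theta5: 0.0696}
```

Because the weights at each node sum to one, the products over all leaves again form a (Bayesian) BBA up to rounding.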
Modified rigid coarsening
One issue with the RC method described in the previous section is that no extra self-information of the focal elements is embedded into the coarsening process. In this paper, the elements θ_i selected to belong to the same group are determined using the consensus information drawn from the BBAs provided by the sources. Specifically, the degrees of disagreement between the sources on the decisions (θ_1, θ_2, ⋯, θ_n) are first calculated using the belief-interval-based distance d_BI [20] to obtain a disagreement vector. All focal elements of the FoD are then sorted in ascending order of disagreement. Finally, the simple dichotomous approach is used to hierarchically coarsen the re-sorted focal elements.
Calculating the disagreement vector
Let us consider several BBAs m_s^Θ(·), s = 1, …, S, defined on the same FoD Θ of cardinality |Θ| = n. The specific BBA m_{θ_i}(.), i = 1, …, n, entirely focused on θ_i is defined by m_{θ_i}(θ_i) = 1 and m_{θ_i}(X) = 0 for X ≠ θ_i.
Definition 2: The disagreement of opinions of two sources about θ_i is defined as the L_1-distance between the d_BI distances of the BBAs m_s^Θ(.), s = 1, 2, to m_{θ_i}(.), which is expressed by
$$ D_{12}(\theta_i) \triangleq \left| d_{BI}(m_1^{\Theta}(\cdot), m_{\theta_i}(\cdot)) - d_{BI}(m_2^{\Theta}(\cdot), m_{\theta_i}(\cdot)) \right| \tag{9} $$
Definition 3: The disagreement of opinions of S ≥ 3 sources about θ_i is defined as
$$ D_{1-S}(\theta_i) \triangleq \frac{1}{2} \sum_{j=1}^{S} \sum_{k=1}^{S} \left| d_{BI}(m_j^{\Theta}(\cdot), m_{\theta_i}(\cdot)) - d_{BI}(m_k^{\Theta}(\cdot), m_{\theta_i}(\cdot)) \right| \tag{10} $$
where the d_BI distance is defined in [20] and the proof of Definition 3 is given in S1 Appendix. For simplicity, we assume Shafer's model so that |2^Θ| = 2^n; otherwise, the number of elements in the summation below should be |D^Θ| − 1, with another normalization constant n_c.
$$ d_{BI}^{E}(m_1, m_2) \triangleq \sqrt{n_c \cdot \sum_{i=1}^{2^n - 1} \left[ d^{I}(BI_1(A_i), BI_2(A_i)) \right]^2} $$
Here, A_i, i = 1, …, 2^n − 1 are the nonempty elements of 2^Θ, n_c = 1/2^{n−1} is the normalization constant, BI(A_i) = [Bel(A_i), Pl(A_i)], and d^I([a, b], [c, d]) is Wasserstein's distance, defined by
$$ d^{I}([a,b],[c,d]) = \sqrt{\left[ \frac{a+b}{2} - \frac{c+d}{2} \right]^2 + \frac{1}{3}\left[ \frac{b-a}{2} - \frac{d-c}{2} \right]^2} $$
The disagreement vector D_{1−S} is defined by
D1-S≜[D1-S(θ1),…,D1-S(θn)]
Modified rigid coarsening by using the disagreement vector
Once D_{1−S} is derived, all focal elements {θ_{1}, θ_{2}, ⋯, θ_{n}} are sorted according to their corresponding values in D_{1−S}.
Let us revisit Example 1 presented in the previous section. It can be verified, by applying Eq (10), that the disagreement vector D_{1−3} for this example is equal to
D1-3=[0.4085,0.2156,0.3753,0.2507,0.4086]
The derivation of D_{1−3}(θ_{1}) is given below for convenience.
$$
\begin{aligned}
D_{1-3}(\theta_1) = {} & \left| d_{BI}(m_1^{\Theta}(\cdot), m_{\theta_1}(\cdot)) - d_{BI}(m_2^{\Theta}(\cdot), m_{\theta_1}(\cdot)) \right| \\
& + \left| d_{BI}(m_2^{\Theta}(\cdot), m_{\theta_1}(\cdot)) - d_{BI}(m_3^{\Theta}(\cdot), m_{\theta_1}(\cdot)) \right| \\
& + \left| d_{BI}(m_1^{\Theta}(\cdot), m_{\theta_1}(\cdot)) - d_{BI}(m_3^{\Theta}(\cdot), m_{\theta_1}(\cdot)) \right| = 0.4085.
\end{aligned}
$$
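The computation of D_{1−S} can be sketched as follows for Bayesian BBAs, for which Bel(A) = Pl(A) on every subset, so that Wasserstein's d^I collapses to |Bel_1(A) − Bel_2(A)|. The BBAs are passed as lists of singleton masses; this is a direct, exponential-cost transcription of the definitions, assuming the Euclidean d_BI of [20] with the outer square root:

```python
from itertools import combinations
from math import sqrt

def d_bi_to_singleton(m, i):
    """d_BI between a Bayesian BBA m and the categorical BBA m_{theta_i}.
    Sums over all nonempty subsets A of the frame; for Bayesian BBAs,
    d^I(BI_1(A), BI_2(A)) reduces to |Bel_m(A) - Bel_{theta_i}(A)|."""
    n = len(m)
    nc = 1.0 / 2 ** (n - 1)
    total = 0.0
    for r in range(1, n + 1):
        for A in combinations(range(n), r):
            bel_m = sum(m[j] for j in A)
            bel_i = 1.0 if i in A else 0.0
            total += (bel_m - bel_i) ** 2
    return sqrt(nc * total)

def disagreement_vector(bbas):
    """Eq (10): half the sum of pairwise |d_BI| differences, per singleton."""
    n = len(bbas[0])
    D = []
    for i in range(n):
        d = [d_bi_to_singleton(m, i) for m in bbas]
        D.append(0.5 * sum(abs(a - b) for a in d for b in d))
    return D
```

The singletons are then sorted in ascending order of their D value before the bintree is built. Note that the sum over subsets is exponential in |Θ|, so this sketch is only meant for small frames.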
Based on the disagreement vector, a new bintree structure is obtained, shown in Fig 2. Compared with Fig 1, the elements of the FoD Θ are grouped more reasonably. In the vector D_{1−3}, θ_1 and θ_5 have similar degrees of disagreement, so they are put in the same group; similarly for θ_2 and θ_4. Element θ_3, however, stands apart from the others and is therefore kept alone in the coarsening process. Once this new bintree decomposition is obtained, the remaining steps, identical to those of rigid coarsening in the previous section, are carried out to get the final combined BBA.
Fig 2. Fusion of Bayesian BBAs using MRC for Example 1.
Step 1: According to Fig 2, the elements of the frames Ω_{l} are defined as follows:
At layer l = 1: Ω1={ω1≜θ2∪θ4∪θ3,ω2≜θ1∪θ5}
At layer l = 2: Ω2={ω11≜θ2∪θ4,ω12≜θ3,ω21≜θ1,ω22≜θ5}
At layer l = 3: Ω3={ω111≜θ2,ω112≜θ4}
Step 2: The BBAs of elements of the (sub-) frames Ω_{l} are obtained as follows:
At layer l = 1, we use Eqs (6) and (7) and get the BBAs in Table 6.
At layer l = 2: We use again the proportional redistribution method which gives us Tables 7 and 8. Here, masses of ω_{21}, ω_{22} in m3Ω22(.) are not considered because the mass of their parent focal element (m3Ω1(ω2)) in bintree is 0.
At layer l = 3: We work with the two subframes Ω3≜{ω111,ω112} of Ω_{3} with the BBAs in Table 9:
Table 6. The BBAs of the elements of the sub-frame Ω_1 using MRC for Example 1.

Focal elem.             m_1^{Ω_1}(.)   m_2^{Ω_1}(.)   m_3^{Ω_1}(.)
ω_1 ≜ θ_2 ∪ θ_4 ∪ θ_3   0.8            0.2            1.0
ω_2 ≜ θ_1 ∪ θ_5         0.2            0.8            0.0
Table 7. The BBAs of the elements of the sub-frame Ω_21 using MRC for Example 1.

Focal elem.        m_1^{Ω_21}(.)   m_2^{Ω_21}(.)   m_3^{Ω_21}(.)
ω_11 ≜ θ_2 ∪ θ_4   5/8             1/2             1/2
ω_12 ≜ θ_3         3/8             1/2             1/2
Table 8. The BBAs of the elements of the sub-frame Ω_22 using MRC for Example 1.

Focal elem.   m_1^{Ω_22}(.)   m_2^{Ω_22}(.)   m_3^{Ω_22}(.)
ω_21 ≜ θ_1    1/2             1/2             –
ω_22 ≜ θ_5    1/2             1/2             –
Table 9. The BBAs of the elements of the sub-frame Ω_3 using MRC for Example 1.

Focal elem.   m_1^{Ω_3}(.)   m_2^{Ω_3}(.)   m_3^{Ω_3}(.)
ω_111 ≜ θ_2   2/5            0.0            1/5
ω_112 ≜ θ_4   3/5            1.0            4/5
Step 3: The connection weights λ_{i} are computed from the assignments of coarsening elements. Hence, we obtain the following connecting weights in the bintree:
At layer l = 1:
λ1=0.8333;λ2=0.1667.
At layer l = 2:
λ3=0.5697;λ4=0.4303;λ5=0.5000;λ6=0.5000.
At layer l = 3:
λ7=0.0669;λ8=0.9331;
Step 4: We finally get the following combined and normalized Bayesian BBA
mΘ(·)={0.0833,0.0318,0.3586,0.4430,0.0834}.
Summary of the proposed method
The fusion method of BBAs to get a combined Bayesian BBA based on hierarchical decomposition of the FoD consists of the steps of Algorithm 1 below, illustrated in Fig 3. It is worth noting that when the given BBAs are not Bayesian, the first step is to use an existing probabilistic transformation (PT) to transform them into Bayesian BBAs. In order to use the proposed combination method in RSs, modified rigid coarsening is mathematically denoted as ⨁ in the following sections.
Fig 3. Modified rigid coarsening of the FoD for fusion.
Algorithm 1: Modified Rigid Coarsening Method
Input: All original BBAs m_1^Θ(·), ⋯, m_S^Θ(·), s = 1, 2, ⋯, S
Output: The final combined BBA m^{Θ}(⋅)
1 if compound focal elements exist in Θ: θ_i ∪ θ_j ≠ ∅ or θ_i ∩ θ_j ≠ ∅ then
20  then the connection weights λ are calculated: PCR6(m^Ω(ω_1), m^Ω(ω_2))
21 end
22 foreachfocal element θ_{i}, i ∈ 1, ⋯, ndo
23  m^Θ(θ_i) equals the product of the link-path weights from the root to the terminal node.
24 end
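The recursive core of Algorithm 1 (coarsen, fuse with sequential PCR6, condition each branch on its parent mass, recurse) can be sketched compactly as below. This sketch assumes Bayesian input BBAs whose singletons have already been re-sorted by the disagreement vector (passing them in the original frame order reproduces plain RC); sources with zero mass on a branch are dropped there, as in Table 8, and at least one source is assumed to have mass on every multi-element branch:

```python
from functools import reduce
from math import ceil

def pcr6_2(m1, m2):
    """PCR6 for two Bayesian BBAs over the same (coarse) frame."""
    out = {}
    for A, a in m1.items():
        for B, b in m2.items():
            if A == B:
                out[A] = out.get(A, 0.0) + a * b
            elif a + b > 0:
                out[A] = out.get(A, 0.0) + a * a * b / (a + b)
                out[B] = out.get(B, 0.0) + b * b * a / (a + b)
    return out

def mrc_fuse(labels, bbas):
    """Recursive bintree fusion of Bayesian BBAs (dicts label -> mass)."""
    if len(labels) == 1:
        return {labels[0]: 1.0}
    k = ceil(len(labels) / 2)       # [n/2 + 1] first singletons when n is odd
    halves = (labels[:k], labels[k:])
    # coarsen every source onto {omega_1, omega_2}, then fuse sequentially
    coarse = [{0: sum(m.get(t, 0.0) for t in halves[0]),
               1: sum(m.get(t, 0.0) for t in halves[1])} for m in bbas]
    lam = reduce(pcr6_2, coarse)    # connection weights of this node
    out = {}
    for side, group in enumerate(halves):
        kept = []
        for m in bbas:
            s = sum(m.get(t, 0.0) for t in group)
            if s > 0:               # drop sources with no mass on this branch
                kept.append({t: m.get(t, 0.0) / s for t in group})
        for t, v in mrc_fuse(group, kept).items():
            out[t] = lam.get(side, 0.0) * v
    return out

# Example 1 revisited, in plain RC order
bbas = [{'t1': 0.1, 't2': 0.2, 't3': 0.3, 't4': 0.3, 't5': 0.1},
        {'t1': 0.4, 't3': 0.1, 't4': 0.1, 't5': 0.4},
        {'t2': 0.1, 't3': 0.5, 't4': 0.4}]
res = mrc_fuse(['t1', 't2', 't3', 't4', 't5'], bbas)
# res approx. {'t1': 0.0808, 't2': 0.1797, 't3': 0.3692, 't4': 0.3007, 't5': 0.0696}
```

Multiplying the recursive results by the local weights is equivalent to the root-to-leaf weight products of Step 4, so this call reproduces the combined BBA of Example 1 up to rounding.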
Simulation considering accuracy and computational efficiency
Accuracy:
Assume the FoD is Θ = {θ_1, θ_2, ⋯, θ_20}; 1000 BBAs are randomly generated and fused with three methods: modified rigid coarsening, rigid coarsening and PCR6. The distances between the fusion results are then computed using d_BI for two pairs: modified rigid coarsening versus PCR6, and rigid coarsening versus PCR6. The comparisons in Fig 4 show the superiority of the new approach proposed in this paper (the average approximation of modified rigid coarsening is 97.5%, versus 94.5% for original rigid coarsening). Here, similarity represents the degree of agreement between the fusion results of the hierarchical approximate methods (both rigid and modified rigid coarsening) and those of PCR6.
Computational efficiency:
As mentioned before, another advantage of the hierarchical combination method is its computational efficiency. Two experiments are conducted (all experiments are implemented on a PC with an i3 CPU, integrated graphics and 4 GB DDR memory): 1) the number of singletons is fixed while the number of BBAs to be fused increases; 2) the number of BBAs is fixed while the number of singletons in the FoD increases. The results are illustrated in Figs 5 and 6. In experiment 1, all three methods (classical PCR6, rigid coarsening and modified rigid coarsening) remain fast (less than 1.2 s) even as the number of BBAs increases from 100 to 1000. The situation deteriorates, however, when the number of focal elements increases. In Fig 6, when the number of focal elements reaches 500, the time consumption of the three combinations is: PCR6, 20.6857 s; modified rigid coarsening, 7.3320 s; rigid coarsening, 5.9748 s. This also confirms that it is reasonable to map the original FoD to the coarsened FoD so as to reduce the number of focal elements at fusion time. In any case, the computational efficiency of rigid or modified rigid coarsening remains better than that of PCR6; on the other hand, modified rigid coarsening achieves a significant improvement in accuracy at the expense of part of that efficiency.
Fig 4. Accuracy comparisons between MRC and PCR6 (only singletons).

Fig 5. Efficiency comparisons between MRC, RC and PCR6 (with the number of BBAs increasing).

Fig 6. Efficiency comparisons between MRC, RC and PCR6 (with the number of focal elements increasing).

A recommender system integrating with hierarchical coarsening combination method
In today's e-commerce, online providers often recommend suitable goods or services to each consumer based on their personal opinions or preferences [21], [22]. However, providing appropriate recommendations is a tough task that faces several difficulties. One is that users' preferences are usually uncertain, imprecise or incomplete [23], [24], and cannot be used directly in RSs. Besides, it is easy to understand that the more information about user preferences is available, the more accurate the predictions of RSs will be [25], [26]. The question, then, is which method to adopt to integrate multi-source uncertain information.
As a general framework for information fusion, DST can not only model uncertain information but also provides an efficient way to combine multi-source information. These features give the theory a wide range of applications [27–29], especially in RSs [23, 25, 30–32]. Under DST, users' comments on products in RSs are described by mass functions, and combination rules are frequently used in order to provide appropriate recommendations.
As mentioned in previous sections, the combination rules of both DST and DSmT suffer from computational complexity, an issue that is ignored in [23, 25]. Thus, in this paper, the modified rigid coarsening method is applied to combine imprecise users' preferences in RSs. First, we introduce the relevant background on RSs; almost all of the characteristics of RSs used here have been introduced in [23, 25, 30–32].
First, we give the corresponding mathematical notation for RSs based on DSmT. RSs usually involve two kinds of objects: {Users, Items}. A set of M users and a set of N items are denoted by U = {U_1, U_2, ⋯, U_M} and I = {I_1, I_2, ⋯, I_N}, respectively. We assume that users give ratings to items on L rating levels, Θ = {θ_1, θ_2, ⋯, θ_L}; the L preference levels form a multi-level evaluation scale. For example, a four-level user evaluation of a product is {Excellent, Good, Fair, Poor}. r_{i,k} denotes the rating of user U_i on item I_k, and the rating matrix R = {r_{i,k}} comprises all ratings of users on items. It should be noted that r_{i,k} is originally modeled as a mass function m_{i,k}: D^Θ → [0, 1]. Additionally, let I_i^R and U_k^R denote the set of items rated by user U_i and the set of users having rated item I_k, respectively.
Contextual information can often be summarized into several genres that significantly affect user’s rating of items. Normally, we represent contextual information by a set containing P genres, denoted by S = {S_{1}, S_{2}, ⋯, S_{P}}. And each genre S_{p}, with 1 ≤ p ≤ P contains at most Q groups, denoted by S_{p} = {g_{p,1}, g_{p,2}, ⋯, g_{p,q}, ⋯, g_{p,Q}}, 1 ≤ q ≤ Q. For a genre S_{p} ∈ S, a user U_{i} ∈ U can be interested in several groups and also an item I_{i} ∈ I can belong to one or some groups of this genre, which can be seen in Fig 7.
Definition 4: To facilitate this expression, two functions κ(⋅) and φ(⋅) are defined to determine the groups in which user U_{i} is interested and the groups to which item I_{k} belongs, respectively:
\[
\kappa_p : U_i \mapsto \kappa_p(U_i) \subseteq S_p, \qquad \varphi_p : I_k \mapsto \varphi_p(I_k) \subseteq S_p
\]
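As a minimal illustration of Definition 4, the two functions can be sketched with plain dictionaries of sets; the user names, item names, and group labels below are purely hypothetical:

```python
# Sketch of the group-membership functions kappa_p and phi_p for one genre S_p.
# All data here is illustrative, not taken from the paper's datasets.

user_groups = {"U1": {"g1", "g2"}, "U2": {"g2"}}   # kappa_p(U_i): groups U_i likes
item_groups = {"I1": {"g2", "g3"}, "I2": {"g1"}}   # phi_p(I_k): groups I_k belongs to

def kappa_p(user):
    """Groups of genre S_p that the user is interested in."""
    return user_groups.get(user, set())

def phi_p(item):
    """Groups of genre S_p that the item belongs to."""
    return item_groups.get(item, set())

# A user and an item "share" a group when the intersection is non-empty,
# which is the condition used later when predicting unrated items.
shared = kappa_p("U1") & phi_p("I1")
print(shared)  # {'g2'}
```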
Generally, the main steps of a recommender system are illustrated in Fig 8 and presented in detail below:
DSmT-Modeling Function
Following the DS partial probability models proposed in [23], each existing rating r_{i,k} of user U_{i} on item I_{k} is modeled by the DSmT-modeling function M(⋅), which transforms such hard ratings into the corresponding soft ratings m_{i,k} as below:
Definition 5:
\[
m_{i,k}(A) =
\begin{cases}
\alpha_{i,k}(1-\sigma_{i,k}), & \text{for } A = \theta_l;\\
\frac{1}{2}\alpha_{i,k}\sigma_{i,k}, & \text{for } A = B;\\
\frac{1}{2}\alpha_{i,k}\sigma_{i,k}, & \text{for } A = C;\\
1-\alpha_{i,k}, & \text{for } A = \Theta;\\
0, & \text{otherwise},
\end{cases}
\]
with
\[
B =
\begin{cases}
\theta_1 \cup \theta_2, & \text{if } l = 1;\\
\theta_{L-1} \cup \theta_L, & \text{if } l = L;\\
\theta_{l-1} \cup \theta_l \cup \theta_{l+1}, & \text{otherwise};
\end{cases}
\qquad
C =
\begin{cases}
\theta_1 \cap \theta_2, & \text{if } l = 1;\\
\theta_{L-1} \cap \theta_L, & \text{if } l = L;\\
(\theta_{l-1} \cap \theta_l,\ \theta_l \cap \theta_{l+1}), & \text{otherwise},
\end{cases}
\]
where α_{i,k} ∈ [0, 1] and σ_{i,k} are a trust factor and a dispersion factor, respectively [23].
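A minimal sketch of the DSmT-modeling function of Definition 5 follows, assuming L = 5 rating levels and the default factors used later in the experiments (α = 0.9, σ = 2/3). The string encoding of union ("u") and intersection ("n") focal elements is an illustrative choice, not the paper's representation:

```python
# Sketch of the DSmT-modeling function M(.) from Definition 5.
# Focal elements are encoded as strings; "u" marks a union, "n" an intersection.

def dsmt_model(l, L=5, alpha=0.9, sigma=2/3):
    """Turn a hard rating level l (1..L) into a soft rating (a BBA)."""
    if l == 1:
        B, C = "t1 u t2", "t1 n t2"
    elif l == L:
        B, C = f"t{L-1} u t{L}", f"t{L-1} n t{L}"
    else:
        B = f"t{l-1} u t{l} u t{l+1}"
        C = f"(t{l-1} n t{l}, t{l} n t{l+1})"
    return {
        f"t{l}": alpha * (1 - sigma),   # the rated level itself
        B: 0.5 * alpha * sigma,         # neighbouring union (imprecision)
        C: 0.5 * alpha * sigma,         # neighbouring intersection (DSmT part)
        "Theta": 1 - alpha,             # total ignorance
    }

m = dsmt_model(3)
assert abs(sum(m.values()) - 1.0) < 1e-12   # masses sum to one
```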
Referring to the partial probability model analysis in [23], we also give the corresponding user profiles, shown in Fig 9. Compared with [23], the difference is that we consider not only the unions (black and gray rectangles) but also the intersections (red rectangles) of the hard ratings, which is precisely the distinction between DS theory and DSmT.
Lemma 1: Referring to Definition 5, we can also generate the corresponding refined BBA in the framework of DS theory:
\[
m^{\mathrm{Refined}}_{i,k}(A) =
\begin{cases}
\alpha_{i,k}(1-\sigma_{i,k}), & \text{for } A = \theta_l;\\
\alpha_{i,k}\sigma_{i,k}, & \text{for } A = B;\\
1-\alpha_{i,k}, & \text{for } A = \Theta;\\
0, & \text{otherwise},
\end{cases}
\]
with
\[
B =
\begin{cases}
\theta_1 \cup \theta_2, & \text{if } l = 1;\\
\theta_{L-1} \cup \theta_L, & \text{if } l = L;\\
\theta_{l-1} \cup \theta_l \cup \theta_{l+1}, & \text{otherwise},
\end{cases}
\]
where α_{i,k} ∈ [0, 1] and σ_{i,k} are a trust factor and a dispersion factor, respectively [23].
After the soft ratings are generated, DSmP [33] is applied to transform the non-Bayesian m_{i,k} into Bayesian BBAs, since the hierarchical fusion algorithm is currently only available for Bayesian BBAs.
Definition 6: DSmP is a generalized pignistic transformation defined by DSmP_{ε}(∅) = 0 and, for any singleton θ_{i} ∈ Θ, by
\[
\mathrm{DSmP}_{\varepsilon}(\theta_i) \triangleq m(\theta_i) + \big(m(\theta_i)+\varepsilon\big)
\sum_{\substack{A \in 2^{\Theta},\ \theta_i \subset A\\ |A| \geq 2}}
\frac{m(A)}{\displaystyle\sum_{\substack{B \in 2^{\Theta},\ B \subset A\\ |B| = 1}} m(B) + \varepsilon \cdot |A|}.
\]
As shown in [33], DSmP is a remarkable improvement over BetP and CuzzP, since it redistributes the ignorance masses to the singletons more judiciously.
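A sketch of the DSmP transformation of Definition 6 for a BBA whose focal elements are frozensets of singletons; the encoding and the small example BBA are illustrative assumptions:

```python
# Sketch of DSmP_epsilon for a BBA m represented as a dict
# mapping frozenset-of-singletons -> mass.

def dsmp(m, theta, eps=1e-3):
    """Redistribute non-singleton masses to the singletons in `theta`."""
    p = {}
    for t in theta:
        val = m.get(frozenset([t]), 0.0)
        extra = 0.0
        for A, mA in m.items():
            if len(A) >= 2 and t in A:
                # denominator: singleton masses inside A, plus eps * |A|
                denom = sum(m.get(frozenset([b]), 0.0) for b in A) + eps * len(A)
                extra += mA / denom
        p[t] = val + (val + eps) * extra
    return p

theta = ["t1", "t2", "t3"]
m = {frozenset(["t1"]): 0.5, frozenset(["t2"]): 0.2,
     frozenset(["t1", "t2", "t3"]): 0.3}
p = dsmp(m, theta)
assert abs(sum(p.values()) - 1.0) < 1e-9   # DSmP yields a probability
```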
Predicting unrated items:
We assume that users who are keen on similar groups tend to have common preferences. In this RS, the unrated items must be predicted first. Considering a group g_{p,q} ∈ S_{p} with g_{p,q} ∈ φ(I_{k}), every soft rating m_{i,k} on item I_{k} given by a user U_{i} who is keen on group g_{p,q} is regarded as a block of common preference for group g_{p,q}. Thus, Gm_{p,q,k}: D^{Θ} → [0, 1], which represents all users’ group preference on item I_{k} regarding group g_{p,q}, is computed as follows:
\[
Gm_{p,q,k} = \bigoplus_{\{j \,\mid\, I_k \in I_j^{R},\; g_{p,q} \in \kappa_p(U_j),\; g_{p,q} \in \varphi_p(I_k)\}} m_{j,k}
\]
Supposing that item I_{k} has not been rated by user U_{i}, the unprovided rating r_{i,k} of user U_{i} is generated in the following three steps:
Step one: Considering a genre S_{p}, for each group g_{p,q} ∈ κ_{p}(U_{i}) ∩ φ_{p}(I_{k}), it is assumed that all users’ group preference on item I_{k} regarding group g_{p,q} implies the common preference of U_{i} on I_{k} regarding that group. This group preference is regarded as a piece of user U_{i}’s concept preference on item I_{k} regarding genre S_{p}. Therefore, the concept preference of user U_{i} on item I_{k} regarding genre S_{p}, denoted by the mass function Sm_{p,i,k}: D^{Θ} → [0, 1], is computed as follows:
\[
Sm_{p,i,k} = \bigoplus_{\{q \,\mid\, g_{p,q} \in \kappa_p(U_i),\; g_{p,q} \in \varphi_p(I_k)\}} Gm_{p,q,k}
\]
Step two: If there exists at least one group in genre S_{p} to which item I_{k} belongs and in which user U_{i} is also interested, then U_{i}’s concept preference on item I_{k} regarding S_{p} is regarded as a piece of context preference. Therefore, this user’s contextual preference on item I_{k}, denoted by the mass function Sm_{i,k}: D^{Θ} → [0, 1], is obtained as follows:
\[
Sm_{i,k} = \bigoplus_{p = 1,\cdots,P} Sm_{p,i,k}
\]
Step three: The context preference of U_{i} on item I_{k} is assigned to the unprovided rating \(\bar{m}_{i,k}\):
\[
\bar{m}_{i,k} = Sm_{i,k}
\]
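The three prediction steps can be sketched structurally as below. Note that the combiner shown is a simple normalized pointwise product on Bayesian BBAs (Dempster's rule restricted to singletons), used only as a stand-in for the paper's hierarchical DSmT combination ⊕; the group-preference data is illustrative:

```python
# Structural sketch of the three-step prediction of an unprovided rating.
# `combine` is a stand-in for the paper's combination operator, valid only
# for Bayesian BBAs over the same singletons.

def combine(bbas):
    """Normalized pointwise product of Bayesian BBAs."""
    out = None
    for m in bbas:
        if out is None:
            out = dict(m)
        else:
            out = {t: out[t] * m[t] for t in out}
            z = sum(out.values())
            out = {t: v / z for t, v in out.items()}
    return out

# Step one: group preferences Gm_{p,q,k} for the groups shared by U_i and I_k
group_prefs = [{"t1": 0.6, "t2": 0.4}, {"t1": 0.7, "t2": 0.3}]
concept_pref = combine(group_prefs)        # Sm_{p,i,k}

# Step two: combine concept preferences over the genres S_1..S_P
context_pref = combine([concept_pref])     # Sm_{i,k}

# Step three: assign the context preference to the unprovided rating
m_bar = context_pref
assert abs(sum(m_bar.values()) - 1.0) < 1e-12
```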
So far, all unprovided ratings are predicted in this RS. Subsequently, user-user similarities are computed depending on both provided and predicted ratings in the following steps.
Computing user-user similarities:
Here, we use the distance measure proposed in [34] to calculate distances between two users U_{i} and U_{j} with i ≠ j, which is defined as below
\[
D(U_i, U_j) = \sum_{k=1}^{N} \left( \ln \max_{\theta \in \Theta} \frac{m_{j,k}(\theta)}{m_{i,k}(\theta)} - \ln \min_{\theta \in \Theta} \frac{m_{j,k}(\theta)}{m_{i,k}(\theta)} \right)
\]
where m_{i,k} and m_{j,k} are the soft ratings of user U_{i} and user U_{j} on item I_{k} respectively. Afterwards, the degree of similarity between U_{i} and U_{j}, denoted by s_{i,j}, is calculated as follows
\[
s_{i,j} = e^{-\gamma \times D(U_i, U_j)}, \quad \text{where } \gamma \in (0, \infty).
\]
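The distance of [34] and the similarity above can be sketched for Bayesian soft ratings as follows; the dict-of-dicts storage layout and the toy ratings are assumptions for illustration:

```python
import math

# Sketch of the user-user distance D(U_i, U_j) and the similarity s_ij.
# ratings_*: dict mapping item id -> Bayesian BBA (dict over singletons).

def distance(ratings_i, ratings_j):
    """Sum, over co-rated items, of the log-ratio spread between two BBAs."""
    d = 0.0
    for k in ratings_i.keys() & ratings_j.keys():
        ratios = [ratings_j[k][t] / ratings_i[k][t] for t in ratings_i[k]]
        d += math.log(max(ratios)) - math.log(min(ratios))
    return d

def similarity(ratings_i, ratings_j, gamma=1e-4):
    return math.exp(-gamma * distance(ratings_i, ratings_j))

ri = {"I1": {"t1": 0.5, "t2": 0.5}}
rj = {"I1": {"t1": 0.6, "t2": 0.4}}
s = similarity(ri, rj, gamma=1.0)
assert 0.0 < s <= 1.0
```

Identical rating profiles give D = 0 and hence s = 1, so higher similarity indeed corresponds to closer users, as the text states.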
Obviously, a high value of s_{i,j} means that users U_{i} and U_{j} are very close, and vice versa. Eventually, a matrix S = {s_{i,j}|U_{i}, U_{j} ∈ U, i ≠ j} is employed to represent the similarities among all users.
Selecting neighbors based on user-user similarities:
Considering an active user U_{i}, for each item I_{k} unrated by U_{i}, a set of K nearest neighbors, denoted by N_{i,k}, is chosen using the method proposed in [35]. The two simple steps of this method are shown below.
Step one: the selection depends on two criteria: 1. the users must have rated I_{k}, and 2. their user-user similarities with user U_{i} must be equal to or greater than the threshold τ. The selected set, denoted N_{i,k}, is acquired as follows:
\[
N_{i,k} = \{U_j \in U \mid I_k \in I_j^{R},\ s_{i,j} \geq \tau\}
\]
Step two: the members of N_{i,k} are sorted in descending order of s_{i,j}, and the top K members are selected as the neighborhood set N_{i,k}.
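The two selection steps can be sketched as below; the similarity values and user ids are illustrative:

```python
# Sketch of the two neighbour-selection steps: filter by tau, then keep the
# top-K most similar users among those who rated item I_k.

def select_neighbors(sim_to_i, raters_of_k, tau, K):
    """sim_to_i: dict user -> similarity with the active user U_i."""
    # Step one: users who rated I_k and whose similarity reaches tau
    candidates = [u for u in raters_of_k if sim_to_i.get(u, 0.0) >= tau]
    # Step two: sort by similarity (descending) and keep the top K
    candidates.sort(key=lambda u: sim_to_i[u], reverse=True)
    return candidates[:K]

sims = {"U2": 0.9, "U3": 0.4, "U4": 0.7}
print(select_neighbors(sims, ["U2", "U3", "U4"], tau=0.5, K=2))  # ['U2', 'U4']
```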
Estimating ratings according to neighborhoods:
Suppose that item I_{k} has not been rated by user U_{i}, and denote the predicted rating of U_{i} on item I_{k} by \(\hat{m}_{i,k}\). Then \(\hat{m}_{i,k}\) is calculated from the ratings of user U_{i}’s nearest neighbors. Mathematically, \(\hat{m}_{i,k}\) is given as below:
\[
\hat{m}_{i,k} = m_{i,k} \oplus \tilde{m}_{i,k}
\]
where \(\tilde{m}_{i,k}\) is the mass representing the overall preference on item I_{k} of the neighbors in the set N_{i,k}. Considering a user U_{j} ∈ N_{i,k}, and supposing that s_{i,j} is the similarity between user U_{i} and user U_{j}, we use a discount rate 1 − s_{i,j} to discount the rating of user U_{j} on item I_{k}. Therefore, \(\tilde{m}_{i,k}\) is:
\[
\tilde{m}_{i,k} = \bigoplus_{\{j \,\mid\, U_j \in N_{i,k}\}} \dot{m}^{\,s_{i,j}}_{j,k},
\qquad \text{where} \qquad
\dot{m}^{\,s_{i,j}}_{j,k}(A) =
\begin{cases}
s_{i,j} \times m_{j,k}(A), & \text{for } A \subset \Theta;\\
s_{i,j} \times m_{j,k}(\Theta) + (1 - s_{i,j}), & \text{if } A = \Theta.
\end{cases}
\]
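The discounting step above can be sketched as follows; the BBA layout (a dict with a "Theta" key for total ignorance) and the example masses are illustrative assumptions:

```python
# Sketch of the discounting step: the rating of neighbour U_j is weakened by
# the rate 1 - s_ij, and the discounted mass is transferred to Theta.

def discount(m, s_ij):
    """Shafer-style discounting of a BBA m (dict with a 'Theta' key)."""
    out = {A: s_ij * v for A, v in m.items() if A != "Theta"}
    out["Theta"] = s_ij * m.get("Theta", 0.0) + (1 - s_ij)
    return out

m_j = {"t1": 0.6, "t2": 0.3, "Theta": 0.1}
m_disc = discount(m_j, s_ij=0.8)
assert abs(sum(m_disc.values()) - 1.0) < 1e-12   # discounting keeps a valid BBA
```

With s_{i,j} = 1 the rating passes through unchanged; with s_{i,j} = 0 it collapses to total ignorance, so less similar neighbors contribute less evidence.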
Generating recommendations:
In order to generate appropriate recommendations for the candidate user U_{i}, predicted ratings of U_{i} on all unprovided items are sorted, and then based on the sorted list, the appropriate recommendations are generated.
10.1371/journal.pone.0189703.g008 General process of recommendations.
10.1371/journal.pone.0189703.g009 DSmT modeling function.
Experiments
To evaluate the performance of the modified rigid coarsening method in terms of recommendation precision and computational time, the original rigid coarsening method and the classical PCR6 combination method are selected as baselines. Besides, we use DS-MAE [23] to measure the precision of recommendations.
Definition 7: DS-MAE is mathematically defined as
\[
\mathrm{DS\text{-}MAE}(\theta_j) = \frac{1}{|D_j|} \sum_{(i,k) \in D_j} \left| \hat{m}_{i,k}(\theta_j) - M(\theta_j) \right|
\]
where D_{j} is the testing set identifying the user-item pairs whose true rating is θ_{j} ∈ Θ.
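DS-MAE can be sketched directly from Definition 7; the predicted and modeled masses below are illustrative numbers, not results from the paper:

```python
# Sketch of DS-MAE: the mean absolute gap, over the test pairs whose true
# rating is theta_j, between the predicted mass on theta_j and the mass the
# modeling function M(.) assigns to it.

def ds_mae(predicted, modeled, test_pairs, theta_j):
    """predicted/modeled: dict (i, k) -> BBA over the singletons."""
    errs = [abs(predicted[p][theta_j] - modeled[p][theta_j]) for p in test_pairs]
    return sum(errs) / len(errs)

pred = {(1, 1): {"t3": 0.25}, (2, 1): {"t3": 0.35}}
model = {(1, 1): {"t3": 0.30}, (2, 1): {"t3": 0.30}}
mae = ds_mae(pred, model, [(1, 1), (2, 1)], "t3")
assert abs(mae - 0.05) < 1e-9   # lower DS-MAE means better recommendations
```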
The users’ specific interests regarding genres are unknown. Thus, we adopt the rule that if a user has rated an item, then this user is interested in all genres to which that item belongs.
Experiment One:
Movielens (http://grouplens.org/datasets/movielens) is a movie recommendation dataset widely used for benchmarking. It contains nearly 100,000 hard ratings on 19 different types of movies (Action, Comedy, and so on). The rating domain in Movielens includes 5 levels, denoted as Θ = {1, 2, 3, 4, 5}. Moreover, each user is required to evaluate at least 20 movies, so as to ensure adequate rating information.
The relevant parameters of the RS are set as follows: γ = 10^{−4} and, ∀(i, k), {α_{i,k}, σ_{i,k}} = {0.9, 2/3}. Setting parameter τ to a fixed value would be unreasonable, however, because the similarity between two users differs considerably across combination methods. Hence, in this paper, the value of τ is not set in advance; instead, it is determined from the similarities in the matrix S. Specifically, τ is set to the boundary value of the top 30% of similarities in S.
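One plausible reading of this data-driven threshold is sketched below: τ is taken as the smallest similarity still inside the top 30% of entries of S (roughly a 70th percentile). This reading, and the toy similarity values, are assumptions:

```python
# Sketch of the data-driven choice of tau: the boundary similarity of the
# top `top_fraction` of entries in the similarity matrix S (flattened here
# into a plain list of values).

def pick_tau(similarities, top_fraction=0.3):
    """Return the smallest similarity among the top `top_fraction` of values."""
    vals = sorted(similarities, reverse=True)
    n_top = max(1, int(round(top_fraction * len(vals))))
    return vals[n_top - 1]   # boundary of the top fraction

sims = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]
tau = pick_tau(sims, 0.3)
assert tau == 0.8   # the top 30% of ten values are {1.0, 0.9, 0.8}
```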
Additionally, we adopt the robust strategy of 10-fold cross validation, which is widely applied in experimental verification. The procedure is as follows: the original ratings in Movielens are first randomly divided into 10 folds, and the experiment is carried out 10 times; in each sub-experiment, nine tenths of the ratings are chosen as training data and the remaining ratings serve as testing data. It is worth noting that all results reported below are averages over the 10 runs.
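The 10-fold split described above can be sketched with the standard library alone; the seed and the interleaved fold assignment are implementation choices, not details from the paper:

```python
import random

# Sketch of a 10-fold cross-validation split: ratings are shuffled once,
# cut into 10 folds, and each fold serves as the test set exactly once.

def ten_fold_splits(ratings, seed=0):
    idx = list(range(len(ratings)))
    random.Random(seed).shuffle(idx)        # fixed seed for reproducibility
    folds = [idx[f::10] for f in range(10)] # interleave the shuffled indices
    for f in range(10):
        test = set(folds[f])
        train = [i for i in idx if i not in test]
        yield train, sorted(test)

ratings = list(range(25))
splits = list(ten_fold_splits(ratings))
assert len(splits) == 10
```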
Fig 10 shows the overall DS-MAE as the neighborhood size K varies; smaller values of DS-MAE indicate better performance. As can be seen in Fig 10, for K ≤ 70 the performance of all three methods improves sharply and remains almost identical. For K ≥ 70, the performance of all methods becomes stable. In particular, the performance of the modified rigid coarsening method is very close to that of the classical PCR6 rule, whereas the original rigid coarsening method is slightly worse than the other two algorithms.
Fig 11 depicts the computational time as the neighborhood size K varies. In this figure, the time taken by the hierarchical coarsening combination methods (both the rigid coarsening and the modified rigid coarsening method) is much shorter than that of classical PCR6. The modified rigid coarsening method is somewhat slower than the original rigid coarsening method. These results illustrate that the modified rigid coarsening method sacrifices some computational efficiency in exchange for improved approximation accuracy.
Experiment Two:
Flixster (http://www.cs.ubc.ca/jamalim/datasets/) is a classical recommendation dataset containing nearly 535,013 hard ratings on 19 different types of movies (Drama, Comedy, and so on). The rating domain in Flixster includes 10 levels, denoted as Θ = {0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0}. Moreover, each user is required to evaluate at least 15 movies, so as to ensure adequate rating information. The relevant parameters of the RS are set as follows: γ = 10^{−4} and, ∀(i, k), {α_{i,k}, σ_{i,k}} = {0.9, 2/3}. As before, fixing τ in advance would be unreasonable because the similarity between two users differs considerably across combination methods; hence τ is determined from the similarities in the matrix S. Specifically, τ is set to the boundary value of the top 50% of similarities in S.
Fig 12 shows the overall DS-MAE as the neighborhood size K varies; again, smaller values of DS-MAE indicate better performance. As can be seen in Fig 12, the results are similar to those on the previous dataset (Movielens). In particular, the performance of the modified rigid coarsening method lies between those of the two comparison methods, while the original rigid coarsening method is worse than the other two algorithms. Fig 13 depicts the computational time as the neighborhood size K varies. From this figure, we draw the same conclusion as before: the time taken by the hierarchical coarsening combination methods (both the rigid coarsening and the modified rigid coarsening method) is much shorter than that of classical PCR6.
10.1371/journal.pone.0189703.g010 Overall DS-MAE between three combination methods.
(Movielens).
10.1371/journal.pone.0189703.g011 Overall computational time between three combination methods.
(Movielens).
10.1371/journal.pone.0189703.g012 Overall DS-MAE between three combination methods.
(Flixster).
10.1371/journal.pone.0189703.g013 Overall computational time between three combination methods.
(Flixster).
Conclusion
In this paper, we propose a new combination method, called the modified rigid coarsening method. This method maps the original refined FoD to a new coarsened FoD during combination. Compared with the traditional PCR6 fusion method in DSmT, this approach not only reduces computational complexity but also maintains high approximation accuracy. To verify the practicality of our approach, we apply it to fuse soft ratings in RSs. Specifically, user preferences are first transformed by the DSmT partial probability model to accurately represent uncertain information; then, information about user preferences from different sources can easily be combined. In future work, more helpful information will be mined to discern focal elements in the FoD so as to improve the approximation accuracy, and more datasets will be used.
Supporting informationThe compressed file package of all datasets used in this paper.
(RAR)
Proof of D_{1−S} in Eq (9).
(DOCX)
This work was supported in part by the National Natural Science Foundation of China under Grant 61573097, 91748106, in part by Key Laboratory of Integrated Automation of Process Industry (PAL-N201704), in part by the Qing Lan Project and Six Major Top-talent Plan, and in part by the Priority Academic Program Development of Jiangsu Higher Education Institutions. The authors thank the reviewers and editors for giving valuable comments, which are very helpful for improving this manuscript.
References
1. Shafer G.
2. Dempster A. Upper and lower probabilities induced by a multivalued mapping.
3. Jiang W, Wang S, Liu X, Zheng H, Wei B. Evidence conflict measure based on OWA operator in open world.
4. Smets P. Practical uses of belief functions. In: Laskey KB, Prade H, editors. 15th Conf. on Uncertainty in Artificial Intelligence, Stockholm, Sweden, 1999. p. 612–621.
5. Dezert J. Foundations for a new theory of plausible and paradoxical reasoning.
6. Smarandache F, Dezert J, editors. Advances and applications of DSmT for information fusion. American Research Press, Rehoboth, NM, USA, Vol. 1–4, 2004–2015. Available at webpage of [7].
7. http://www.onera.fr/staff/jean-dezert
8. Kennes R. Computational aspects of the Möbius transform of graphs.
9. Shafer G, Logan R. Implementing Dempster’s rule for hierarchical evidence.
10. Yang Y, Liu YL. Iterative approximation of basic belief assignment based on distance of evidence.
11. Denœux T. Inner and outer approximation of belief structures using a hierarchical clustering approach.
12. Yang Y, Han DQ, Han CZ, Cao F. A novel approximation of basic probability assignment based on rank-level fusion.
13. Han DQ, Yang Y, Dezert J. Two novel methods of BBA approximation based on focal element redundancy. Proc. of Fusion 2015, Washington, D.C., USA, July 2015.
14. Li MZ, Zhang Q, Deng Y. A new probability transformation based on the ordered visibility graph.
15. Dong YL, Li XD, Dezert J. A hierarchical flexible coarsening method to combine BBAs in probabilities. Accepted in 20th International Conference on Information Fusion (Fusion 2017), Xi’an, China, July 10–13, 2017.
16. Smets P. Analyzing the combination of conflicting belief functions.
17. Li XD, Dezert J, Huang XH, Meng ZD, Wu XJ. A fast approximate reasoning method in hierarchical DSmT (A).
18. Li XD, Yang WD, Wu XJ, Dezert J. A fast approximate reasoning method in hierarchical DSmT (B).
19. Li XD, Yang WD, Dezert J. A fast approximate reasoning method in hierarchical DSmT (C).
20. Han DQ, Dezert J, Yang Y. Belief interval-based distance measures in the theory of belief functions. IEEE Transactions on Systems, Man, and Cybernetics: Systems. 2016:1–18.
21. Bobadilla J, Ortega F, Hernando A, Gutierrez A. Recommender systems survey.
22. Chen T. Ubiquitous hotel recommendation using a fuzzy-weighted-average and backpropagation-network approach.
23. Wickramarathne TL, Premaratne K, Kubat M, Jayaweera DT. CoFiDS: a belief-theoretic approach for automated collaborative filtering.
24. Ladyzynski P, Grzegorzewski P. Vague preferences in recommender systems.
25. Nguyen VD, Huynh VN. Two-probabilities focused combination in recommender systems.
26. Bagher RC, Hassanpour H, Mashayekhi H. User trends modeling for a content-based recommender system.
27. Denoeux T. Maximum likelihood estimation from uncertain data in the belief function framework.
28. Kanjanatarakul O, Sriboonchitta S, Denoeux T. Forecasting using belief functions: an application to marketing econometrics.
29. Masson M, Denoeux T. Ensemble clustering in the belief functions framework.
30. Troiano L, Rodriguez-Muniz LJ, Diaz J. Discovering user preferences using Dempster-Shafer theory.
31. Nguyen VD, Huynh VN. A reliably weighted collaborative filtering system. ECSQARU 2015:429–439.
32. Iglesias J, Bernardos AM, Casar JR. An evidential and context-aware recommendation strategy to enhance interactions with smart spaces. HAIS 2013:242–251.
33. Dezert J, Smarandache F. A new probabilistic transformation of belief mass assignment. In: Proc. of 11th Int. Conf. on Information Fusion, Cologne, Germany, June–July 2008. p. 1–8.
34. Chan H, Darwiche A. A distance measure for bounding probabilistic belief change. International Journal of Approximate Reasoning. 2005;38(2):149–174.
35. Herlocker JL, Konstan JA, Borchers A, Riedl J. An algorithmic framework for performing collaborative filtering. SIGIR’99, ACM, 1999:230–237.