
FctClus: A Fast Clustering Algorithm for Heterogeneous Information Networks

  • Jing Yang,

    Affiliation Institute of Computer Science and Technology, Harbin Engineering University, Harbin, China

  • Limin Chen,

    chenlimin_clm@126.com

    Affiliations Institute of Computer Science and Technology, Harbin Engineering University, Harbin, China, Institute of Computer Science and Technology, Mudanjiang Teachers College, Mudanjiang, China

  • Jianpei Zhang

    Affiliation Institute of Computer Science and Technology, Harbin Engineering University, Harbin, China


Abstract

Clustering heterogeneous information networks is an important task. In this paper, a fast clustering algorithm based on an approximate commute time embedding for heterogeneous information networks with a star network schema is proposed by utilizing the sparsity of such networks. First, a heterogeneous information network is transformed into multiple compatible bipartite graphs from the compatible point of view. Second, the approximate commute time embedding of each bipartite graph is computed using random mapping and a linear time solver. All of the indicator subsets in each embedding simultaneously indicate the target dataset. Finally, a general model is formulated from these indicator subsets, and a fast algorithm is derived by simultaneously clustering all of the indicator subsets using the sum of the weighted distances over all indicators of an identical target object. Theoretical analysis and experimental verification show that the proposed fast algorithm, FctClus, is generalizable and exhibits high clustering accuracy and fast computation speed.

Introduction

Information networks are ubiquitous; examples include social information networks and DBLP bibliographic networks. Numerous studies have addressed homogeneous information networks, which consist of a single type of data object; however, little research has examined the clustering of heterogeneous information networks, which consist of multiple types of data objects. Clustering a heterogeneous network can lead to a better understanding of its hidden structures and deeper meanings[1].

The star network schema is popular and important in the field of heterogeneous information networks. A star network schema includes one target type and multiple attribute types of data objects, whereby each relation links a target data object to the attribute data objects associated with it.

Algorithms based on compatible bipartite graphs can effectively consider multiple types of relational data. Various classical clustering algorithms, such as algorithms based on semi-definite programming[2,3], algorithms based on information theory[4] and spectral clustering algorithms for multi-type relational data[5], have been proposed for heterogeneous data from the compatible point of view. These algorithms are generalizable, but the computational complexity of these algorithms is too great for use in clustering heterogeneous information networks.

Sun et al. present NetClus[6] and a PathSim-based clustering algorithm[7] for clustering heterogeneous information networks. NetClus is effective for DBLP bibliographic networks, but it is not a general model for clustering other heterogeneous information networks, and it is not sufficiently stable. The concept behind NetClus has also been used for clustering service webs[8,9]. The PathSim-based clustering algorithm requires user guidance, and the clustering quality reflects the requirements of users rather than those of the network. ComClus[10] is a derivative of NetClus for hybrid networks that simultaneously include heterogeneous and homogeneous relations. NetClus and ComClus are not general and depend on the given application.

Dynamic link inference in heterogeneous networks[11] requires accurate initial clustering. High clustering quality is necessary for network analysis, but low computation speed is intolerable because of the large network scales involved. The LDCC algorithm[12] improves accuracy by exploring both the heterogeneous and homogeneous data relations. The CESC algorithm[13] is very effective for clustering homogeneous data using an approximate commute time embedding. A heterogeneous information network with a star network schema can be transformed into multiple compatible bipartite graphs from the compatible point of view. When the relation between any two nodes of a bipartite graph is represented by the commute time, the relations of both heterogeneous and homogeneous data objects can be explored, and the clustering accuracy can be improved. Heterogeneous information networks are large but very sparse; therefore, the approximate commute time embedding of each bipartite graph can be quickly computed using random mapping and a linear time solver[14]. All of the indicator subsets in each embedding indicate the target dataset, and a general model for clustering heterogeneous information networks is then formulated based on all indicator subsets. All weighted distances between the indicators and the cluster centers in the respective indicator subsets are computed, and all indicator subsets can be simultaneously clustered according to the sum of the weighted distances over all indicators of an identical target object. Based on the above discussion, an effective clustering algorithm, FctClus, based on the approximate commute time embedding for heterogeneous information networks is proposed in this paper. The computation speed and clustering accuracy of FctClus are high.

Methods

Commute Time Embedding of the Bipartite Graph

Given two types of datasets, $X_0 = \{x_i^{(0)}\}_{i=1}^{n_0}$ and $X_1 = \{x_j^{(1)}\}_{j=1}^{n_1}$, the graph $G_b = \langle V, E \rangle$ is called a bipartite graph if $V(G_b) = X_0 \cup X_1$ and $E \subseteq \{\langle x_i^{(0)}, x_j^{(1)} \rangle\}$, where $1 \le i \le n_0$, $1 \le j \le n_1$. $W \in \mathbb{R}^{n_0 \times n_1}$ is the relation matrix between $X_0$ and $X_1$, where the element $w_{ij}$ is the edge weight between $x_i^{(0)}$ and $x_j^{(1)}$. Then, the adjacency matrix of the bipartite graph $G_b$ can be denoted as
$$A = \begin{pmatrix} 0 & W \\ W^T & 0 \end{pmatrix}.$$

$D_1$ and $D_2$ are diagonal matrices, where the $i$-th diagonal element of $D_1$ is $\sum_{j=1}^{n_1} w_{ij}$ and the $j$-th diagonal element of $D_2$ is $\sum_{i=1}^{n_0} w_{ij}$; thus, the Laplacian matrix of the bipartite graph $G_b$ is
$$L = \begin{pmatrix} D_1 & -W \\ -W^T & D_2 \end{pmatrix}.$$
$L$ can be eigen-decomposed into $L = \Phi\Lambda\Phi^T$, where $\Lambda = \mathrm{diag}(\lambda_1, \lambda_2, \cdots, \lambda_n)$ is a diagonal matrix composed of the eigenvalues of $L$ with $\lambda_1 \le \lambda_2 \le \cdots \le \lambda_n$, and $\Phi = (\phi_1, \phi_2, \cdots, \phi_n)$ is the eigenvector matrix in which $\phi_i$ is the eigenvector corresponding to the eigenvalue $\lambda_i$. Let $L^+ = \Phi\Lambda^+\Phi^T$ be the pseudo-inverse matrix of $L$. The bipartite graph is also an undirected weighted graph. According to the literature[15], the commute time $c_{ij}$ between nodes $i$ and $j$ of $G_b$ can be computed from the pseudo-inverse matrix $L^+$:
$$c_{ij} = g_v \left( l_{ii}^+ + l_{jj}^+ - 2 l_{ij}^+ \right), \tag{1}$$
where $l_{ij}^+$ is the $(i, j)$ element of $L^+$, $g_v = \sum w_{ij}$ is the graph volume, and $e_i$ is a unit column vector in which the $i$-th element is 1; that is, $c_{ij} = g_v (e_i - e_j)^T L^+ (e_i - e_j)$.

According to the literature[15,16], the commute time $c_{ij}$ between nodes $i$ and $j$ of $G_b$ is
$$c_{ij} = \left\| \sqrt{g_v}\, \Lambda^{-1/2} \Phi^T (e_i - e_j) \right\|^2.$$

Thus, the commute time $c_{ij}$ is the squared pairwise Euclidean distance between the row vectors in the space $\sqrt{g_v}\, \Phi \Lambda^{-1/2}$ or the column vectors in the space $\sqrt{g_v}\, \Lambda^{-1/2} \Phi^T$ [13], and $\sqrt{g_v}\, \Lambda^{-1/2} \Phi^T$ is called the commute time embedding of the bipartite graph $G_b$. $c_{ij}$ reflects the average path length between two nodes rather than the shortest path between them. Using the commute time for clustering noisy data increases robustness and captures complex clusters; therefore, clustering in the commute time embedding can also effectively capture complex clusters. $\sqrt{g_v}\, \Lambda^{-1/2} \Phi^T$ is used in this paper. If the normalized Laplacian matrix $L_n = D^{-1/2} L D^{-1/2}$ is used, the commute time embedding is $\sqrt{g_v}\, \Lambda_n^{-1/2} \Phi_n^T D^{-1/2}$ [13].
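
As a numeric sanity check of Eq (1), the following small Python sketch (our own illustration with a toy relation matrix; the paper's code is MATLAB) verifies that the pseudo-inverse formula and the squared distance in the embedding agree:

```python
# A toy check of Eq (1): commute time via the pseudo-inverse L^+ versus the
# squared distance in the embedding sqrt(g_v) * Lambda^{-1/2} * Phi^T.
# Illustration only; the matrix W and all names are arbitrary assumptions.
import numpy as np

W = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 3.0]])                 # relation matrix: 2 target x 3 attribute objects
n0, n1 = W.shape
A = np.block([[np.zeros((n0, n0)), W],
              [W.T, np.zeros((n1, n1))]])       # bipartite adjacency matrix
L = np.diag(A.sum(axis=1)) - A                  # Laplacian of G_b
gv = W.sum()                                    # graph volume g_v as defined in the text
Lp = np.linalg.pinv(L)                          # pseudo-inverse L^+

i, j = 0, 3                                     # a target node and an attribute node
c_ij = gv * (Lp[i, i] + Lp[j, j] - 2 * Lp[i, j])   # Eq (1)

lam, Phi = np.linalg.eigh(L)
keep = lam > 1e-10                              # drop the zero eigenvalue (connected graph)
emb = np.sqrt(gv) * (Phi[:, keep] / np.sqrt(lam[keep])).T
d_ij = np.sum((emb[:, i] - emb[:, j]) ** 2)     # squared embedding distance

print(c_ij, d_ij)                               # the two values agree
```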

Approximate Commute Time Embedding of the Bipartite Graph

If $\sqrt{g_v}\, \Lambda^{-1/2} \Phi^T$ or $\sqrt{g_v}\, \Lambda_n^{-1/2} \Phi_n^T D^{-1/2}$ is computed directly, the process requires $O(n^3)$ time for the eigen-decomposition of the Laplacian matrix $L$ or $L_n$, where $n = n_0 + n_1$ is the number of nodes and $s$ is the number of edges in the bipartite graph $G_b$. According to the literature[17], if the edges in $G_b$ are oriented and
$$B(e, v) = \begin{cases} 1 & \text{if } v \text{ is the head of edge } e, \\ -1 & \text{if } v \text{ is the tail of edge } e, \\ 0 & \text{otherwise}, \end{cases}$$
where $e$ is an edge and $v$ is a node of $G_b$, then $B_{s \times n}$ is a directed edge-node incidence matrix. Using $W_s$ as the $s \times s$ diagonal matrix whose entries are the edge weights, $L = B^T W_s B$. Furthermore,
$$c_{ij} = g_v (e_i - e_j)^T L^+ B^T W_s B L^+ (e_i - e_j) = \left\| \sqrt{g_v}\, W_s^{1/2} B L^+ (e_i - e_j) \right\|^2; \tag{2}$$
thus, $\psi = \sqrt{g_v}\, W_s^{1/2} B L^+$ is the commute time embedding of the bipartite graph $G_b$, where the square root of the commute time is the Euclidean distance between $i$ and $j$ in $\psi$ because $c_{ij} = \| \psi (e_i - e_j) \|^2$.

According to the literature[18], given vectors $v_1, \cdots, v_n \in R^s$ and $\varepsilon > 0$, let $Q_{k_r \times s}$ be a random matrix whose entries are $\pm 1/\sqrt{k_r}$ with equal probability, where $k_r = O(\log n / \varepsilon^2)$. Then, with probability at least $1 - 1/n$,
$$(1 - \varepsilon) \| v_i - v_j \|^2 \le \| Q v_i - Q v_j \|^2 \le (1 + \varepsilon) \| v_i - v_j \|^2 \tag{3}$$
for all pairs.
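
A small self-contained illustration of this random projection, with arbitrary assumed sizes (not the paper's code):

```python
# Illustration of Eq (3): a random +-1/sqrt(k_r) projection approximately
# preserves pairwise squared distances. Sizes are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(1)
n, s, eps = 200, 5000, 0.25
kr = int(np.ceil(np.log(n) / eps ** 2))        # k_r = O(log n / eps^2)
V = rng.normal(size=(n, s))                    # n vectors in R^s
Q = rng.choice([-1.0, 1.0], size=(kr, s)) / np.sqrt(kr)
P = V @ Q.T                                    # projected vectors in R^{k_r}

d_orig = np.sum((V[0] - V[1]) ** 2)
d_proj = np.sum((P[0] - P[1]) ** 2)
print(d_proj / d_orig)                         # close to 1, within (1-eps, 1+eps) w.h.p.
```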

Therefore, given the bipartite graph $G_b$ with $n$ nodes and $s$ edges and $\varepsilon > 0$, the matrix $\tilde{Y} = \sqrt{g_v}\, Q W_s^{1/2} B L^+$ satisfies, with probability at least $1 - 1/n$,
$$(1 - \varepsilon)\, c_{ij} \le \| \tilde{Y} (e_i - e_j) \|^2 \le (1 + \varepsilon)\, c_{ij} \tag{4}$$
for any nodes $i, j \in G_b$, where $k_r = O(\log n / \varepsilon^2)$.

The proof of Eq (4) comes directly from Eq (2) and Eq (3), and $c_{ij} \approx \| \tilde{Y} (e_i - e_j) \|^2$ with an error $\varepsilon$ based on Eq (4). If $\tilde{Y}$ is computed directly, $L^+$ must first be computed, but the computational complexity of directly computing $L^+$ is excessive. However, using the method in the literature[19,20] to compute $\tilde{Y}$, the complexity is decreased. Let $\theta = \sqrt{g_v}\, Q W_s^{1/2} B$; then, $\tilde{Y} = \theta L^+$, which is equivalent to $\tilde{Y} L = \theta$. First, $\theta$ is computed, and then $\tilde{Y} L = \theta$ is solved. Each row of $\tilde{Y}$, $\tilde{y}_i$, is computed by solving the system $\tilde{y}_i L = \theta_i$, where $\theta_i$ is the $i$-th row of $\theta$. The linear time solver of Spielman and Teng[19,20] requires only $\tilde{O}(s \log(1/\delta))$ time to solve each such system to precision $\delta$. Because the solution $\hat{y}_i$ returned by the solver satisfies $\| \hat{y}_i - \tilde{y}_i \|_L \le \delta \| \tilde{y}_i \|_L$ [17], where $\| y \|_L = \sqrt{y L y^T}$, the distances in the resulting embedding $\hat{Y}$ satisfy[17]
$$(1 - \varepsilon)^2\, c_{ij} \le \| \hat{Y} (e_i - e_j) \|^2 \le (1 + \varepsilon)^2\, c_{ij}.$$

Therefore, $c_{ij} \approx \| \hat{Y} (e_i - e_j) \|^2$ with an error bound of $\varepsilon^2$. The component algorithm for the approximate commute time embedding of the bipartite graph is illustrated as follows.

Algorithm 1 ApCte (Approximate Commute Time Embedding of the Bipartite Graph)

  1. input the relation matrix $W \in \mathbb{R}^{n_0 \times n_1}$;
  2. compute the matrices $B$, $W_s$ and $L$ using $W$;
  3. compute $\theta = \sqrt{g_v}\, Q W_s^{1/2} B$;
  4. compute each $\hat{y}_i$ using the system $\hat{y}_i L = \theta_i$ by calling the Spielman-Teng solver $k_r$ times[14], $1 \le i \le k_r$;
  5. output the approximate commute time embedding $\hat{Y}$.
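
To make Algorithm 1 concrete, the following is a minimal Python sketch of ApCte (the paper's reference implementation is in MATLAB with the Koutis CMG solver; here scipy's conjugate-gradient routine stands in for the nearly linear time solver, and all names are illustrative assumptions):

```python
# A minimal sketch of ApCte, assuming scipy's CG solver as a stand-in for the
# Spielman-Teng / CMG solver used in the paper.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

def apcte(W, kr=60, seed=0):
    """Approximate commute time embedding of the bipartite graph defined by W (n0 x n1)."""
    W = sp.csr_matrix(W)
    n0, n1 = W.shape
    n = n0 + n1
    rows, cols, w = sp.find(W)                  # one entry per edge
    s = len(w)
    # Step 2: directed edge-node incidence matrix B (s x n) and Laplacian L = B^T W_s B.
    B = sp.csr_matrix(
        (np.concatenate([np.ones(s), -np.ones(s)]),
         (np.tile(np.arange(s), 2),
          np.concatenate([rows, n0 + cols]))),
        shape=(s, n))
    Ws_half = sp.diags(np.sqrt(w))              # W_s^{1/2}
    L = (B.T @ sp.diags(w) @ B).tocsc()
    gv = w.sum()
    # Step 3: theta = sqrt(gv) * Q * W_s^{1/2} * B with a random +-1/sqrt(kr) matrix Q.
    rng = np.random.default_rng(seed)
    Q = sp.csr_matrix(rng.choice([-1.0, 1.0], size=(kr, s)) / np.sqrt(kr))
    Theta = np.sqrt(gv) * (Q @ Ws_half @ B).toarray()
    # Step 4: solve y_i L = theta_i for each row (L is symmetric, and for a
    # connected graph each theta_i lies in the range of L, so CG converges).
    Y = np.zeros((kr, n))
    for i in range(kr):
        Y[i], _ = cg(L, Theta[i])
    return Y                                    # first n0 columns indicate X0, last n1 indicate X1
```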

All data objects of $X_0$ and $X_1$ are mapped into a common subspace $R^{k_r}$, where the first $n_0$ column vectors of $\hat{Y}$ indicate $X_0$ and the last $n_1$ column vectors of $\hat{Y}$ indicate $X_1$. The dataset composed of the $n = n_0 + n_1$ column vectors of $\hat{Y}$ is called an indicator dataset. The input matrix $W$ is a sparse matrix with $s$ nonzero elements. Therefore, the complexity of computing the matrices $B$, $W_s$ and $L$ in step 2 is $O(2s) + O(s) + O(n)$. The sparse matrix $B$ has $2s$ nonzero elements, and the diagonal matrix $W_s$ has $s$ nonzero elements. Computing $\theta$ takes $O(2 s k_r + s)$ time in step 3. Because the linear time solver of Spielman and Teng[19,20] requires only $\tilde{O}(s \log(1/\delta))$ time to solve for each $\hat{y}_i$ of the system $\hat{y}_i L = \theta_i$, constructing $\hat{Y}$ takes $\tilde{O}(k_r s \log(1/\delta))$ time in step 4. Therefore, the complexity of algorithm 1, ApCte, is only $O(2s) + O(s) + O(n) + O(2 s k_r + s) + \tilde{O}(k_r s \log(1/\delta)) = \tilde{O}(k_r s)$. In practice, $k_r = O(\log n / \varepsilon^2)$ is small and does not vary between different datasets. The indicator dataset consists of low-dimensional homogeneous data; therefore, traditional algorithms can be used on the indicator dataset.

A General Model Formulation

Given a dataset $\chi = X_0 \cup X_1 \cup \cdots \cup X_T$ with $T + 1$ types, where $X_t$ is the dataset belonging to the $t$-th type, a weighted graph $G = \langle V, E, W \rangle$ on $\chi$ is called an information network if $V(G) = \chi$, $E(G)$ is a binary relation on $V$ and $W: E \to R^+$. Such an information network is called a heterogeneous information network when $T \ge 1$ and a homogeneous information network when $T = 0$[6].

An information network $G = \langle V, E, W \rangle$ on $\chi$ is called a heterogeneous information network with a star network schema if $\forall e = \langle x_i, x_j \rangle \in E$, $x_i \in X_0$ and $x_j \in X_t\ (t \ne 0)$. $X_0$ is the target dataset, and $X_t\ (t \ne 0)$ is an attribute dataset.

To derive a general model for clustering the target dataset, a heterogeneous information network with a star network schema on the dataset $\chi = X_0 \cup X_1 \cup \cdots \cup X_T$ with $T + 1$ types is given, where $X_0$ is the target dataset and $X_1, \cdots, X_T$ are the attribute datasets. $X_t = \{x_i^{(t)}\}_{i=1}^{n_t}$, where $n_t$ is the object number of $X_t$. $W^{(0t)} \in \mathbb{R}^{n_0 \times n_t}$ denotes the relation matrix between the target dataset $X_0$ and the attribute dataset $X_t$, where the element $w_{ij}^{(0t)}$ denotes the relation between $x_i^{(0)}$ of $X_0$ and $x_j^{(t)}$ of $X_t$. If an edge between $x_i^{(0)}$ and $x_j^{(t)}$ exists, its edge weight is $w_{ij}^{(0t)}$; if no edge exists, $w_{ij}^{(0t)} = 0$. $T$ relation matrices exist in the heterogeneous information network with a star network schema.

The target dataset $X_0$ and the attribute dataset $X_t$ constitute a bipartite graph, $G^{(0t)}$, which corresponds to the relation matrix $W^{(0t)}$. The indicator dataset $Y^{(0t)}$, which is also the approximate commute time embedding of $G^{(0t)}$, can be quickly computed by ApCte, where the first $n_0$ data of $Y^{(0t)}$ indicate $X_0$ and the last $n_t$ data of $Y^{(0t)}$ indicate the attribute dataset $X_t$. $Y_0^{(t)}$ consists of the first $n_0$ data of $Y^{(0t)}$, and $Y^{(t)}$ consists of the last $n_t$ data of $Y^{(0t)}$. $Y_0^{(t)}$ and $Y^{(t)}$ are called the indicator subsets. $y_{0i}^{(t)} \in Y_0^{(t)}$ indicates the $i$-th object of $X_0$ and is called an indicator, $1 \le i \le n_0$. There exists a one-to-one correspondence between the indicators of $Y_0^{(t)}$ and the objects of $X_0$. Because $T$ bipartite graphs correspond to $T$ indicator datasets, the target dataset $X_0$ is simultaneously indicated by the $T$ indicator subsets $Y_0^{(1)}, \cdots, Y_0^{(T)}$, and each object of $X_0$ is simultaneously indicated by $T$ indicators.

$\beta^{(t)}$ is the weight of the relation matrix $W^{(0t)}$, where $\sum_{t=1}^{T} \beta^{(t)} = 1$ and $\beta^{(t)} > 0$. The target dataset $X_0$ is partitioned into $K$ clusters. The $T$ indicators of $Y_0^{(1)}, \cdots, Y_0^{(T)}$ that indicate an identical object of $X_0$ belong to $T$ clusters; these $T$ clusters are in $T$ different indicator subsets and are denoted using the same label. Let
$$F = \sum_{t=1}^{T} \beta^{(t)} \sum_{j=1}^{K} \sum_{i=1}^{n_0} \gamma_{ij} \left\| y_{0i}^{(t)} - m_j^{(t)} \right\|^2, \tag{5}$$
where $m_j^{(t)}$ is the $j$-th cluster center of the indicator subset $Y_0^{(t)}$. There exists a one-to-one correspondence between the indicator variables $\gamma_{ij}$ and the objects of $X_0$. If all indicators $y_{0i}^{(t)}$, $1 \le t \le T$, that indicate the $i$-th object of $X_0$ belong to the $j$-th cluster, $\gamma_{ij} = 1$; otherwise, $\gamma_{ij} = 0$.

If the objective function F in Eq (5) is minimized, the clusters of X0 are optimal from the compatible point of view because each indicator subset reflects the relation between the target dataset and an attribute dataset. Obviously, determining the global minimum of Eq (5) is NP-hard. As a concrete reference point, a small helper that evaluates F for a given partition is sketched below.
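
A minimal sketch of evaluating F, assuming the array layout used in the Python sketches elsewhere in this section (indicator subsets as kr × n0 arrays, centers as kr × K arrays); this layout is our own convention:

```python
# A sketch of evaluating the objective F of Eq (5)/(7), assuming Y0[t] is a
# kr x n0 array and centers[t] is a kr x K array of cluster centers.
import numpy as np

def objective_F(Y0, centers, labels, betas):
    """labels[i] is the cluster of the i-th target object (the gamma_ij of Eq (5))."""
    F = 0.0
    for beta, Y, M in zip(betas, Y0, centers):
        # each object contributes the weighted squared distance to its own center
        F += beta * np.sum((Y - M[:, labels]) ** 2)
    return F
```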

Derivation of Fast Algorithm for Clustering Heterogeneous Information Networks

The following steps allow for the local minimum of F in Eq (5) to be quickly achieved by simultaneously clustering all of the indicator subsets.

Setting the Cluster Label

When the cluster label of each indicator subset is given, the modeling process can be simplified. Suppose that the labels of the $K$ clusters of each $Y_0^{(t)}$ are set. Let $q_1, q_2 \in X_0$, where $y_{q_1}^{(t)} \in Y_0^{(t)}$ indicates $q_1$ and $y_{q_2}^{(t)} \in Y_0^{(t)}$ indicates $q_2$, $1 \le t \le T$. The clusters to which the indicators for an identical target object belong have the same label: if one indicator of $\{y_{q_1}^{(t)}\}_{t=1}^{T}$ belongs to the $j$-th cluster, all of the other indicators of $\{y_{q_1}^{(t)}\}_{t=1}^{T}$ also belong to the $j$-th cluster in their respective indicator subsets. If $y_{q_1}^{(t)}$ belongs to the $j$-th cluster, then the indicators $\{y_{q_2}^{(t)}\}_{t=1}^{T}$ either all belong to the $j$-th cluster in their respective indicator subsets or none of them do.

Each cluster of $Y_0^{(t)}$ has an initial center. $K$ random objects are selected from the target dataset $X_0$. In each $Y_0^{(t)}$, the indicators indicating these $K$ objects are taken as the initial cluster centers, and the clusters whose centers indicate an identical target object are given the same label. Then, all of the other indicators for an identical target object either belong to the $j$-th cluster in every $Y_0^{(t)}$ or in none of them, where $1 \le j \le K$. Therefore, the labels of the $K$ clusters of each $Y_0^{(t)}$ are set.

The Sum of the Weighted Distances

An object of $X_0$ is indicated by $T$ indicators, and all $T$ distances between these indicators and the centers in each $Y_0^{(t)}$ affect the object allocation. The target object allocation is therefore determined by the sum of the weighted distances for the $T$ indicators. Let $q_i \in X_0$ be indicated by $y_{0i}^{(t)} \in Y_0^{(t)}$, $1 \le t \le T$. The weighted distance between $y_{0i}^{(t)}$ and the $j$-th cluster center $m_j^{(t)}$ in $Y_0^{(t)}$ is $\beta^{(t)} \| y_{0i}^{(t)} - m_j^{(t)} \|^2$. The sum of the weighted distances, $\sum_{t=1}^{T} \beta^{(t)} \| y_{0i}^{(t)} - m_j^{(t)} \|^2$, determines the cluster to which the object $q_i$ belongs:
$$j^* = \arg\min_{j} \sum_{t=1}^{T} \beta^{(t)} \left\| y_{0i}^{(t)} - m_j^{(t)} \right\|^2, \tag{6}$$
where $j$ is the cluster label.

The Local Minimum of F

$F$ in Eq (5) can also be expressed as
$$F = \sum_{j=1}^{K} \sum_{i=1}^{n_0} \gamma_{ij} \sum_{t=1}^{T} \beta^{(t)} \left\| y_{0i}^{(t)} - m_j^{(t)} \right\|^2. \tag{7}$$

Obviously, Eq (7) is another representation of Eq (5).

Given the initial centers and the cluster labels in the $T$ indicator subsets $Y_0^{(1)}, \cdots, Y_0^{(T)}$, $X_0$ is first partitioned by computing Eq (6), and the corresponding value in Eq (7) is $F = F_0$. The cluster centers of $Y_0^{(2)}, \cdots, Y_0^{(T)}$ remain the same, and $\gamma_{ij}$ is unchanged. The new center of each cluster in $Y_0^{(1)}$ is computed; the new center is the mean of all data of each cluster. The new centers of $Y_0^{(1)}$ replace the old centers, and subsequently, Eq (7) is used to set $F = F_1$. Then,
$$F_1 \le F_0, \tag{8}$$
which is proven as follows.

Because only the new centers of $Y_0^{(1)}$ replace the old centers, $\gamma_{ij}$ remains unchanged. Therefore,
$$F_1 - F_0 = \beta^{(1)} \sum_{j=1}^{K} \sum_{i=1}^{n_0} \gamma_{ij} \left( \left\| y_{0i}^{(1)} - \bar{m}_j^{(1)} \right\|^2 - \left\| y_{0i}^{(1)} - m_j^{(1)} \right\|^2 \right),$$
where $\bar{m}_j^{(1)}$ is the new center of the $j$-th cluster of $Y_0^{(1)}$. Because the cluster centers of $Y_0^{(2)}, \cdots, Y_0^{(T)}$ also remain unchanged, $\sum_{t=2}^{T} \beta^{(t)} \sum_{j=1}^{K} \sum_{i=1}^{n_0} \gamma_{ij} \| y_{0i}^{(t)} - m_j^{(t)} \|^2$ is constant, and the mean $\bar{m}_j^{(1)}$ minimizes the sum of the squared distances within each cluster. Subsequently,
$$\sum_{i=1}^{n_0} \gamma_{ij} \left\| y_{0i}^{(1)} - \bar{m}_j^{(1)} \right\|^2 \le \sum_{i=1}^{n_0} \gamma_{ij} \left\| y_{0i}^{(1)} - m_j^{(1)} \right\|^2, \quad 1 \le j \le K.$$

Thus, the cluster centers of $Y_0^{(1)}$ are replaced, and $F_1 \le F_0$.

The new centers of $Y_0^{(1)}$ replace the old centers, while the centers of $Y_0^{(2)}, \cdots, Y_0^{(T)}$ remain unchanged. Re-clustering using Eq (6), where the corresponding value in Eq (7) is $F = F_2$, gives $F_2 \le F_1$, because Eq (6) assigns each target object to the cluster that minimizes its contribution to $F$.

After partitioning using Eq (6), the new cluster centers of $Y_0^{(2)}$ are computed, and the new centers replace the old centers of $Y_0^{(2)}$. Then, the same procedure is repeated for each $Y_0^{(t)}$, $3 \le t \le T$. The value of $F$ decreases in each case. The above procedures are repeated until $F$ in Eq (7) converges; then, a local minimum of $F$ in Eq (7) is obtained. The algorithm based on the approximate commute time embedding for heterogeneous information networks is shown below.

Algorithm 2 FctClus (Fast Clustering Algorithm based on the Approximate Commute Time Embedding for Heterogeneous Information Networks)

  1. Input the relation matrices $W^{(0t)}$, the weights $\beta^{(t)}$, $1 \le t \le T$, and the cluster number $K$;
  2. for t = 1 to T do
  3. Compute the indicator dataset $Y^{(0t)}$ of the bipartite graph corresponding to $W^{(0t)}$ using algorithm 1;
  4. Constitute the indicator subset $Y_0^{(t)}$ that indicates $X_0$;
  5. end for
  6. Initialize the $K$ initial cluster centers of each $Y_0^{(t)}$ and set the cluster labels;
  7. loop
  8. for t = 1 to T do
  9. Partition $X_0$ into $K$ clusters by computing Eq (6);
  10. Re-compute the new cluster centers of $Y_0^{(t)}$;
  11. Replace the old cluster centers of $Y_0^{(t)}$ with the new centers;
  12. end for
  13. end loop when $F$ in Eq (7) converges
  14. Output the clusters of $X_0$.
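
A condensed Python sketch of FctClus under the same assumptions as the ApCte sketch above (it reuses the hypothetical apcte function; all names are illustrative):

```python
# A condensed sketch of Algorithm 2 (FctClus); the paper's reference
# implementation is in MATLAB. apcte is the sketch given with Algorithm 1.
import numpy as np

def fctclus(Ws, betas, K, n0, kr=60, u=40, seed=0):
    """Cluster the target dataset X0 given the T relation matrices Ws (each n0 x nt)."""
    rng = np.random.default_rng(seed)
    # Steps 2-5: one indicator subset per bipartite graph; the first n0 columns
    # of each embedding indicate X0.
    Y0 = [apcte(W, kr=kr, seed=seed)[:, :n0] for W in Ws]
    # Step 6: K random target objects give the initial centers in every subset,
    # so clusters whose centers indicate the same object share a label.
    init = rng.choice(n0, size=K, replace=False)
    centers = [Y[:, init].copy() for Y in Y0]
    labels = np.zeros(n0, dtype=int)
    for _ in range(u):                       # steps 7-13
        # Eq (6): assign each target object by the sum of weighted distances.
        dist = np.zeros((K, n0))
        for beta, Y, M in zip(betas, Y0, centers):
            for j in range(K):
                dist[j] += beta * np.sum((Y - M[:, [j]]) ** 2, axis=0)
        labels = dist.argmin(axis=0)
        # Re-compute each subset's centers as the cluster means.
        for Y, M in zip(Y0, centers):
            for j in range(K):
                if np.any(labels == j):
                    M[:, j] = Y[:, labels == j].mean(axis=1)
    return labels                            # cluster of each object of X0
```

Note that, unlike the derivation above, this condensed sketch updates the centers of all $T$ indicator subsets in each pass rather than one subset at a time; the one-subset-at-a-time schedule is what the convergence argument covers.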

The computational complexity of steps 2~5 in algorithm 2 is $\sum_{t=1}^{T} \tilde{O}(k_r s_t)$, where $T$ is the number of relation matrices in the heterogeneous information network and $k_r$ is the data dimension of $Y_0^{(t)}$. $n_t$ and $s_t$ are the node number and edge number of the $t$-th bipartite graph, respectively. Step 6 requires only $O(K)$ time, which is constant. The object number of $X_0$ equals the indicator number of each indicator subset; thus, the computational complexity of steps 7~13 is $O(u T K k_r n_0)$, where $K$ is the number of clusters of each $Y_0^{(t)}$, $n_0$ is the data number of each $Y_0^{(t)}$, and $u$ is the iteration number for the convergence of $F$ in Eq (7). Therefore, the computational complexity of algorithm 2, FctClus, is $\sum_{t=1}^{T} \tilde{O}(k_r s_t) + O(u T K k_r n_0)$, where $k_r$ and $u$ are small and $T$ and $K$ are constant.

Experiments

The Experimental Dataset

The experimental datasets are composed of real data selected from the DBLP data. DBLP is a typical heterogeneous information network in the computer science domain and contains 4 types of objects: papers, authors, terms and venues. Two heterogeneous datasets of different scales, called Ssmall and Slarge, are used in the experiments.

Ssmall is the small test dataset and is called the "four-area dataset", as in the literature[6]. Ssmall, extracted from the DBLP dataset downloaded in 2011, covers four areas related to data mining: databases, data mining, information retrieval and machine learning. Five representative conferences for each area are chosen, and all papers and the terms that appear in their titles are included. Ssmall is shown in S1 File.

Slarge is the large test dataset and is extracted from the Chinese DBLP dataset, a shared resource released by the Institute of Automation, Chinese Academy of Sciences. Slarge includes 34 computer science journals, 16,567 papers, 47,701 authors and 52,262 terms (keywords). Slarge is shown in S2 File.

When papers are analyzed, papers are the target dataset, and the other objects are the attribute datasets. There are no direct links between papers because the DBLP provides very limited citation information. When authors are analyzed, authors are the target dataset, while papers and venues are the attribute datasets. Moreover, direct links exist between authors because of the co-author relation; therefore, authors also serve as an attribute dataset related to the target dataset.

The experiments are performed in the MATLAB 7.0 programming environment. The MATLAB source code for our algorithm is provided in S3 File and is available online at https://github.com/lsy917/chenlimin; it includes a main program and three function programs. FctClus.m is the main program, which outputs the clusters of the target dataset; ApCte.m, Prematrix.m and Net_Branches.m are function programs. The Koutis CMG solver[14] is used in all experiments as the nearly linear time solver to create the embedding. The solver works with symmetric, diagonally dominant matrices and is available online at http://www.cs.cmu.edu/~jkoutis/cmg.html.

The Relational Matrix

Papers are the target dataset, while authors, venues and terms are the attribute datasets. $X_0$ denotes papers, and $X_1$, $X_2$ and $X_3$ denote authors, venues and terms, respectively. $W^{(0t)}$ is the relation matrix between $X_0$ and $X_t$, $1 \le t \le 3$. The element $w_{ij}^{(0t)}$ of $W^{(0t)}$ is the edge weight between paper $x_i^{(0)}$ and attribute object $x_j^{(t)}$ if an edge exists between them, and $w_{ij}^{(0t)} = 0$ otherwise.

When authors are the target dataset, papers and venues are the attribute datasets. Authors are also an attribute dataset because of the co-author relation existing between authors. $X_0$ denotes authors, and $X_1$ and $X_2$ denote papers and venues, respectively. $W^{(0t)}$ is the relation matrix between $X_0$ and $X_t$, $0 \le t \le 2$, where $W^{(00)}$ encodes the co-author relation. The element $w_{ij}^{(0t)}$ of $W^{(0t)}$ is the edge weight between $x_i^{(0)}$ and $x_j^{(t)}$ if an edge exists between them, and $w_{ij}^{(0t)} = 0$ otherwise.

All of the algorithms use the same relation matrices in all experiments.
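
As an illustration of how such a relation matrix can be assembled from link pairs (a sketch under our own conventions, not the paper's Prematrix.m; the 0/1 weighting is an assumption):

```python
# Building a sparse relation matrix, e.g. the paper-author matrix W^{(01)},
# from (target_index, attribute_index) link pairs with unit edge weights.
import scipy.sparse as sp

def relation_matrix(pairs, n_target, n_attr):
    """pairs: iterable of (target_index, attribute_index) links."""
    rows, cols = zip(*pairs)
    data = [1.0] * len(rows)                # unit edge weight per link
    return sp.csr_matrix((data, (rows, cols)), shape=(n_target, n_attr))

# e.g. paper 0 written by authors 0 and 2, paper 1 by author 1:
W01 = relation_matrix([(0, 0), (0, 2), (1, 1)], n_target=2, n_attr=3)
```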

Parameter Analysis

Analysis of Parameter kr.

The equation
$$r = \frac{1}{n} \sum_{i=1}^{n} \delta(\mathrm{label}(i), c_i) \ [13]$$
is used to compute the clustering accuracy in the experiments, where $n$ is the object number of the dataset, $\mathrm{label}(i)$ is the true cluster label of object $i$, and $c_i$ is the predicted label of object $i$ after the predicted cluster labels are matched to the true labels. $\delta(\cdot)$ is an indicator function:
$$\delta(x, y) = \begin{cases} 1 & \text{if } x = y, \\ 0 & \text{otherwise.} \end{cases}$$
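
A sketch of this accuracy measure, assuming the predicted-to-true label matching is computed with the Hungarian algorithm (this matching detail is our assumption, and labels are assumed to be in 0..K-1):

```python
# Clustering accuracy with best label matching via the Hungarian algorithm.
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(true_labels, pred_labels, K):
    true_labels = np.asarray(true_labels)
    pred_labels = np.asarray(pred_labels)
    # count[c, l] = number of objects in predicted cluster c with true label l
    count = np.zeros((K, K))
    for c, l in zip(pred_labels, true_labels):
        count[c, l] += 1
    rows, cols = linear_sum_assignment(-count)   # maximize total agreement
    mapping = dict(zip(rows, cols))
    matched = sum(mapping[c] == l for c, l in zip(pred_labels, true_labels))
    return matched / len(true_labels)
```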

kr is small in practice, and minimal differences exist among various datasets[13]. The literature[13] shows that the accuracy curve is flat for clustering different homogeneous datasets when kr ≥ 50.

Using the small dataset Ssmall, the clustering accuracy as a function of kr in a heterogeneous information network is studied.

An experiment with different values of kr is conducted on the small dataset Ssmall. In the FctClus algorithm, the weights of the three relation matrices are taken as β(1) = 0.3, β(2) = 0.4 and β(3) = 0.3 for clustering papers and as β(1) = 0.4, β(2) = 0.2 and β(3) = 0.4 for clustering authors. The effect of kr on the clustering accuracy is shown in Fig 1 and Fig 2.

The parameter kr can be kept quite small because the accuracy curve is flat once kr reaches a certain value; kr = 60 is suitable for the dataset in the experiment. kr is small and does not considerably affect the computation speed of FctClus. It is advantageous that FctClus is not sensitive to kr in terms of both accuracy and performance. The same relation matrix weights and kr = 60 are used in the other experiments.
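
With the sketches above, the paper-clustering run on Ssmall would be invoked roughly as follows (W01, W02 and W03 denote the paper-author, paper-venue and paper-term relation matrices; K = 4 for the four areas is our assumption):

```python
# Hypothetical invocation of the fctclus sketch with the weights used here;
# W01, W02, W03 and K = 4 are our assumptions, not the paper's code.
labels_papers = fctclus([W01, W02, W03], betas=[0.3, 0.4, 0.3], K=4,
                        n0=W01.shape[0], kr=60, u=40)
```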

Analysis of Iteration u

An experiment is conducted on the small dataset Ssmall to examine the influence of the iteration number u on the clustering result, where kr = 60. The influence of u on clustering papers and authors is shown in Fig 3 and Fig 4. The algorithm converges quickly, by approximately u = 30; u = 40 is used in the other experiments.

Comparison of Clustering Accuracy and Computation Speed

The computational complexity of the algorithms based on semi-definite programming[2,3] and of the spectral clustering algorithms for multi-type relational data[5] is too high for large-scale networks. The low-complexity algorithms CIT[4], NetClus[6] and ComClus[10] are therefore selected for comparison with the FctClus algorithm in terms of clustering accuracy and computation speed; the datasets Ssmall and Slarge are used for this experiment.

The initial cluster centers of FctClus and the initial cluster partitions of the other three algorithms are randomly selected 3 times. The best clustering accuracy of the 3 runs is reported for each of the four algorithms, and the computation speed of the corresponding run is taken as the measured computation speed. The parameters from the literature[6] are used for NetClus, and the parameters from the literature[10] are used for ComClus. The comparison results are shown in Table 1 and Table 2.

The clustering accuracy of FctClus is the highest of the four algorithms. The clustering accuracy of CIT is lower than that of FctClus because the bipartite graphs of the heterogeneous information networks are sparse; the computational complexity of CIT is O(n²), and the convergence speed of CIT is low when the heterogeneous information network is sparse. The clustering accuracy of NetClus is low because only heterogeneous relations are used. Homogeneous and heterogeneous relations are both used in ComClus; therefore, the accuracy of ComClus is higher than that of NetClus. FctClus is based on the commute time embedding: the data relations are explored using the commute time, and the direct relations of the target dataset are considered. FctClus is not affected by the sparsity of the networks; thus, FctClus is highly accurate.

The computation speed of FctClus is nearly as fast as that of NetClus. The experiment demonstrates that FctClus is effective. FctClus is more universal and can be adapted for clustering any heterogeneous information network with a star network schema, whereas NetClus and ComClus can only be adapted for clustering bibliographic networks because they depend on a ranking function of a specific application field.

Comparison of Clustering Stability

To compare the stability of the FctClus, NetClus and CIT algorithms, the small dataset Ssmall is used for clustering papers in this experiment. ComClus is a derivative of NetClus and has the same properties as NetClus, so it is not considered in this study.

The initial cluster centers of FctClus and the initial cluster partitions of NetClus and CIT are randomly selected 10 times, and each of the three algorithms is executed 10 times. The clustering accuracy of the three algorithms over the 10 runs is shown in Fig 5. Although the computation speeds of FctClus and NetClus are both high, Fig 5 shows that the stability of FctClus is higher than that of NetClus and that the initial centers do not greatly impact the clustering result of FctClus. However, NetClus is very unstable, and the initial clusters greatly impact its clustering accuracy and convergence speed. CIT is more stable than NetClus, but its clustering accuracy is low.

Running Time Analysis of the FctClus Algorithm

The running time distributions of FctClus on the two datasets are shown in Table 3. The experimental data show that FctClus is effective. The running time for serially computing the three embeddings is less than 50% of the total running time. If the three embeddings are computed in parallel, the computation speed is higher; clustering the indicator subsets in parallel may also increase the computation speed.

Conclusions

The relation between the original data described by the commute time guarantees the accuracy and performance of the FctClus algorithm. Because heterogeneous information networks are sparse, FctClus can use random mapping and a linear time solver[14] to compute the approximate commute time embedding, which guarantees its high computation speed. FctClus is effective and may be broadly applied to large heterogeneous information networks, as demonstrated in theory and experimentally. The weights of the relation matrices impact the objective function, but the weights cannot yet be determined self-adaptively; this requires further research. The relations of data in the real world are typically high-order and heterogeneous, so effective clustering algorithms for heterogeneous information networks with arbitrary schemas will be studied in the future.

Author Contributions

Conceived and designed the experiments: JY LMC. Performed the experiments: LMC. Analyzed the data: JPZ. Contributed reagents/materials/analysis tools: JY LMC. Wrote the paper: LMC.

References

  1. Sun Y, Han J (2012) Mining heterogeneous information networks: principles and methodologies. Synthesis Lectures on Data Mining and Knowledge Discovery, 3(2). pp. 1-159.
  2. Gao B, Liu TY, Qin T, Zheng X, Cheng QS, Ma WY (2005) Web image clustering by consistent utilization of visual features and surrounding texts. In Proceedings of the 13th annual ACM international conference on Multimedia. pp. 112-121.
  3. Gao B, Liu TY, Zheng X, Cheng QS, Ma WY (2005) Consistent bipartite graph co-partitioning for star-structured high-order heterogeneous data co-clustering. In Proceedings of the eleventh ACM SIGKDD international conference on Knowledge discovery in data mining. pp. 41-50.
  4. Gao B, Liu TY, Ma WY (2006) Star-structured high-order heterogeneous data co-clustering based on consistent information theory. In Data Mining, 2006. ICDM '06. Sixth International Conference on. pp. 880-884.
  5. Long B, Zhang ZM, Wu X, Yu PS (2006) Spectral clustering for multi-type relational data. In Proceedings of the 23rd international conference on Machine learning. pp. 585-592.
  6. Sun Y, Yu Y, Han J (2009) Ranking-based clustering of heterogeneous information networks with star network schema. In Proceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining. pp. 797-806.
  7. Sun Y, Norick B, Han J, Yan X, Yu PS, Yu X (2012) Integrating meta-path selection with user-guided object clustering in heterogeneous information networks. In Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining. pp. 1348-1356.
  8. Li P, Wen J, Li X (2013) SNTClus: A novel service clustering algorithm based on network analysis and service tags. Przegląd Elektrotechniczny. p. 89.
  9. Li P, Chen L, Li X, Wen J (2013) RNRank: Network-based ranking on relational tuples. In Behavior and Social Computing. Springer International Publishing. pp. 139-150.
  10. Wang R, Shi C, Yu PS, Wu B (2013) Integrating clustering and ranking on hybrid heterogeneous information network. In Advances in Knowledge Discovery and Data Mining. pp. 583-594.
  11. Aggarwal CC, Xie Y, Philip SY (2012) Dynamic link inference in heterogeneous networks. In SDM. pp. 415-426.
  12. Zhang L, Chen C, Bu J, Chen Z, Cai D, Han J (2012) Locally discriminative coclustering. Knowledge and Data Engineering, IEEE Transactions on, 24(6). pp. 1025-1035.
  13. Khoa NLD, Chawla S (2011) Large scale spectral clustering using approximate commute time embedding. arXiv preprint arXiv:1111.4541.
  14. Koutis I, Miller GL, Tolliver D (2011) Combinatorial preconditioners and multilevel solvers for problems in computer vision and image processing. Computer Vision and Image Understanding, 115(12). pp. 1638-1646.
  15. Fouss F, Pirotte A, Renders JM, Saerens M (2007) Random walk computation of similarities between nodes of a graph with application to collaborative recommendation. Knowledge and Data Engineering, IEEE Transactions on, 19(3). pp. 355-369.
  16. Qiu H, Hancock ER (2007) Clustering and embedding using commute times. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 29(11). pp. 1873-1890. pmid:17848771
  17. Spielman DA, Srivastava N (2008) Graph sparsification by effective resistances. In Proceedings of the 40th annual ACM symposium on Theory of computing, STOC '08. pp. 563-568.
  18. Achlioptas D (2001) Database-friendly random projections. In Proceedings of the twentieth ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems, PODS '01. pp. 274-281.
  19. Spielman DA, Teng SH (2004) Nearly-linear time algorithms for graph partitioning, graph sparsification, and solving linear systems. In Proceedings of the thirty-sixth annual ACM symposium on Theory of computing, STOC '04. pp. 81-90.
  20. Spielman DA, Teng SH (2014) Nearly-linear time algorithms for preconditioning and solving symmetric, diagonally dominant linear systems. SIAM Journal on Matrix Analysis and Applications, 35(3). pp. 835-885.