A chaotic viewpoint-based approach to solve haplotype assembly using hypergraph model

The decreasing cost of high-throughput DNA sequencing technologies provides a huge amount of data that enables researchers to determine haplotypes for diploid and polyploid organisms. Although various methods have been developed to reconstruct haplotypes in the diploid form, achieving high accuracy remains a challenging task. Moreover, most current methods cannot be applied to the polyploid form. In this paper, an iterative method is proposed that employs a hypergraph model to reconstruct haplotypes and enhances the result from a chaotic viewpoint. For this purpose, a haplotype set is randomly generated as an initial estimate, and its consistency with the input fragments is described by constructing a weighted hypergraph. Partitioning the hypergraph specifies those positions in the haplotype set that need to be corrected. This procedure is repeated until no further improvement can be achieved. Each element of the finalized haplotype set is then mapped to a line by chaos game representation, and a coordinate series is defined based on the positions of the mapped points. Positions with low quality can then be reassessed by applying a local projection. Experimental results on both simulated and real datasets demonstrate that this method outperforms most other approaches and is a promising way to perform haplotype assembly.


Introduction
Improvements in high-throughput DNA sequencing technologies have dramatically decreased the cost of genome sequencing. This achievement helps researchers to understand the variation in an individual's genomic data and paves the way toward individualized strategies for diagnostic or therapeutic decision-making [1]. The most frequent type of genetic variation is the single nucleotide polymorphism (SNP). Each SNP is a mutation at a specific position on the DNA sequences of a homologous pair of chromosomes in an individual, and among the corresponding DNA sequences of the whole population. Similarly, the term "allele" refers to the different forms of a gene at a given locus. In principle, four different alleles are possible for a given SNP site; nonetheless, most SNPs are bi-allelic, containing only two kinds of alleles.

In [26], a heuristic method named FastHap was developed, which builds a weighted fuzzy conflict graph based on the MEC model; the constructed graph is then used to cluster the input fragments. A fuzzy C-means (FCM) approach was applied in [25] to enhance the clustering of the fragments. However, this method performs poorly in dealing with noisy fragments. Some popular methods, including MCMC [31], HapCUT [27], and HapCUT2 [32], construct the graph differently. These methods start with a set of arbitrary sequences as initial haplotypes and improve it step by step with respect to the input fragments. They build a similar weighted graph in their respective models; however, instead of fragments, SNPs are used as the vertices of the graph. Each pair of SNPs is connected if they are covered by at least one input fragment, and the weight of each edge reflects the consistency of the corresponding positions with the current haplotypes.
Although this model efficiently determines the consistency of the current haplotype with the input fragments, the existing gaps and noise lead to a loss of accuracy in determining the edge weights. In [33], it has been shown that a hypergraph can describe the distances between input fragments more precisely.
Although various methods have been developed to solve the SIH problem, most of them can only be applied to diploid organisms and fail to consider polyploid ones. It should be noted that haplotype reconstruction for a polyploid is more complicated than for a diploid. Suppose that P is the number of ploids and m is the length of the haplotype sequences. In this case, there are at least 2^(m−1)(P−1)^m different solutions for phasing the haplotypes [23]. Recently, several studies, such as [23, 34–36], have been conducted on polyploid organisms. AltHap [23] and SCGD [36] are two recently developed methods based on matrix factorization to solve the SIH problem. H-PoP [34] is a heuristic method that divides the input fragments into P clusters, such that the members of each cluster have the minimum distance to each other and are far from the fragments of the other clusters. Belief propagation (BP) [35] is another method addressing the SIH problem by mapping the MEC model to a decoding mechanism involving message transmission over a noisy channel. In this context, it has been reported that haplotype blocks of sufficient length can exhibit chaotic behavior; this feature has recently been used to improve the reconstruction rate in the single individual haplotyping problem [37].
Considering the chaotic nature of haplotype sequences, in this paper an iterative algorithm is proposed to reconstruct the haplotypes using the hypergraph model. The method includes two main steps. First, an iterative mechanism is applied to the SNP matrix to construct the haplotype set, and the consistency between SNPs is modeled with a hypergraph; the parts of the haplotypes to be corrected are then determined by partitioning the hypergraph. Second, the obtained haplotypes are transformed into a line using chaos game representation, where a coordinate series is defined based on the positions of the mapped points, and a local projection (LP) method is applied to refine the remaining ambiguous measures and increase the quality of the reconstructed haplotypes.
The significant contributions of the proposed method are as follows:

• The similarity between the input fragments can be described more accurately by utilizing the hypergraph model. Moreover, it helps to overcome the challenges originating from the huge number of gaps and sequencing errors.
The rest of the paper is organized as follows. Section 2 provides a brief review of the problem statement. In Section 3, the proposed method is described in detail. Experimental results are presented in Section 4. Finally, Section 5 concludes the paper.

Preliminaries and assumptions
The SIH problem for polyploid organisms is to reconstruct the whole set H = {h_1, h_2, . . ., h_P} containing P haplotype sequences, based on the available aligned input fragments. As in the diploid case, the input fragments can be represented in a standard form. Let X be the SNP matrix in which each row corresponds to an input fragment and each column indicates a specific SNP. For bi-allelic haplotypes, x_ij ∈ {0, 1, '−'} indicates the obtained allele of fragment f_i at SNP s_j; the '−' sign indicates information missing during the sequencing process. Also, each haplotype h_i (i = 1, 2, . . ., P) belongs to {0, 1}^N. In the diploid case, the positions where h_1k equals h_2k are called homozygous sites, while the sites with different measures are called heterozygous positions. Homozygous sites are usually removed from the input matrix, as they do not provide useful information for the haplotype assembly problem. For two fragments originating from different haplotypes, some dissimilarity between them is expected, and several relations have been developed to describe the difference between two fragments. Hamming distance (HD) is the most practical one; for two input fragments f_i and f_j it can be written as

HD(f_i, f_j) = Σ_k d(x_ik, x_jk),

where d is defined as

d(x, y) = 1 if x ≠ y, x ≠ '−', and y ≠ '−'; otherwise d(x, y) = 0.

When the SNP matrix is error-free, two fragments sequenced from the same haplotype are compatible, as their distance equals zero. On the other hand, with a noisy SNP matrix, the dissimilarity between two arbitrary fragments f_i and f_j cannot be interpreted directly, as it may originate either from noise or from the fragments having been sequenced from different haplotypes.
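The gap-aware Hamming distance above can be sketched in a few lines; this is an illustrative implementation (the function name is ours), where fragments are strings over {'0', '1', '-'} and '-' marks a missing call:

```python
# Positions where either fragment has '-' do not contribute to the distance,
# exactly as in the definition of d(x, y) above.
def hamming_distance(f_i: str, f_j: str) -> int:
    """Count positions where both fragments are observed and disagree."""
    return sum(
        1
        for x, y in zip(f_i, f_j)
        if x != '-' and y != '-' and x != y
    )

# Two fragments from the same haplotype agree wherever both are observed:
print(hamming_distance("01-10", "0--10"))  # 0
# A single conflicting observed site yields distance 1:
print(hamming_distance("01-10", "00-10"))  # 1
```

Note that a pair of fragments with no commonly observed site also gets distance zero, which is why gaps make the distance hard to interpret, as discussed above.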
In the error-free case, the fragments can be clustered into P clusters such that the members of each cluster are compatible with each other. Fig 1 represents an example of the SIH problem at ploidy level P. The rows of matrix X indicate the sequenced fragments, and the rows of matrix H contain the obtained haplotypes.
For the diploid case, several models have been proposed to solve the SIH problem based on the input fragments.
Extending these models to solve the SIH problem in polyploid form is a difficult task [38]. Recently, several MEC-based approaches have been developed for this problem. In this regard, the input fragments are organized into P clusters, and the haplotypes are considered as the centers of the constructed clusters; in fact, each cluster involves the fragments that have the same provenance. The optimized result of the clustering algorithm can be obtained by minimizing

MEC(H) = Σ_{i=1}^{P} Σ_{f ∈ C_i} HD(f, h_i).

In the optimal case, if the SNP matrix is error-free, the MEC measure equals zero and each fragment f belonging to C_i is compatible with h_i. However, with a noisy SNP matrix, some fragments are expected to conflict with their corresponding haplotypes. It should be noted that finding the optimal MEC measure is an NP-hard problem. Moreover, the huge number of gaps in the input fragments negatively affects the distance measurement between pairs of input fragments. Therefore, the current work aims to address these challenges with a better description of the similarity between the input fragments, using a heuristic method with a favorable runtime based on the hypergraph model.
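The MEC objective described above can be sketched as follows; this is a minimal illustration (function names are ours) in which each fragment is implicitly assigned to its closest haplotype, so the sum over clusters becomes a per-fragment minimum:

```python
# Gap-aware distance between one fragment and one haplotype.
def fragment_distance(frag: str, hap: str) -> int:
    return sum(1 for x, y in zip(frag, hap) if x != '-' and x != y)

# MEC: total number of corrections needed so every fragment matches its
# nearest haplotype (the cluster centers of the text).
def mec(fragments, haplotypes) -> int:
    return sum(min(fragment_distance(f, h) for h in haplotypes) for f in fragments)

fragments = ["0101", "1-10", "010-", "1010"]
haplotypes = ["0101", "1010"]
print(mec(fragments, haplotypes))  # 0 for an error-free SNP matrix
```

On a noisy matrix the same call returns the number of conflicting allele calls, which is exactly the quantity the iterative method tries to drive down.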

The proposed method
This section presents a Haplotype Reconstruction approach based on the Chaotic viewpoint and Hypergraph model (HRCH). The proposed method is briefly described below.
(i) A set of haplotype sequences is randomly generated; (ii) the input fragments are assigned to the haplotype sequences based on their similarities; (iii) a weighted SNP hypergraph is built using the similarity measure between the haplotype sequences and the assigned input fragments; (iv) the constructed hypergraph is used to find a set called CutSet, containing the SNPs that should be modified. This procedure is repeated for a predefined number of iterations to minimize the MEC score. Next, considering the chaotic properties of haplotype sequences, the results are further improved. A high-level overview of the method is demonstrated in Fig 2.
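The iterative skeleton of steps (i)-(iv) can be illustrated with a toy, runnable sketch for the diploid case. Here the hypergraph-derived CutSet is replaced by a random candidate set purely to show the accept-if-MEC-improves loop; the names `reconstruct` and `flip` are ours, not the paper's:

```python
import random

def fragment_distance(frag, hap):
    return sum(1 for x, y in zip(frag, hap) if x != '-' and x != y)

def mec(fragments, haps):
    return sum(min(fragment_distance(f, h) for h in haps) for f in fragments)

def flip(hap, cutset):
    # Flip the allele at every SNP index in the CutSet.
    return ''.join(('1' if c == '0' else '0') if i in cutset else c
                   for i, c in enumerate(hap))

def reconstruct(fragments, n, iters=200, seed=0):
    rng = random.Random(seed)
    h1 = ''.join(rng.choice('01') for _ in range(n))        # step (i)
    h2 = ''.join('1' if c == '0' else '0' for c in h1)      # heterozygous sites only
    best = (h1, h2)
    for _ in range(iters):
        # Stand-in for the hypergraph-based CutSet of step (iv).
        cutset = {i for i in range(n) if rng.random() < 0.3}
        cand = (flip(best[0], cutset), flip(best[1], cutset))
        if mec(fragments, cand) < mec(fragments, best):     # keep only improvements
            best = cand
    return best

fragments = ["0101", "1-10", "010-", "1010"]
print(mec(fragments, reconstruct(fragments, 4)))
```

The real method replaces the random CutSet with one derived from partitioning the weighted SNP hypergraph, which is what makes the search efficient.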

Data preprocessing
As described in the preliminaries section, X_{M×N} is a matrix containing M reads of length N. It is essential to note that homozygous columns can be ignored in the diploid case. The homozygous positions are removed as described in [33]: the most frequent measure of each column is found, and if its frequency is higher than 0.8, the column is identified as a homozygous site. Thus, the output of this step is a matrix with M fragments and N' columns, where N' ≤ N. Finally, an initial set of haplotypes H^0 = {h_1, h_2, . . ., h_P} is randomly generated.
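The homozygous-site filter can be sketched as follows; this is an illustrative implementation (the function name is ours) of the 0.8 frequency threshold described above:

```python
# A column is dropped when the frequency of its most frequent observed
# allele exceeds the threshold (0.8 in the paper).
def remove_homozygous(matrix, threshold=0.8):
    keep = []
    for j in range(len(matrix[0])):
        calls = [row[j] for row in matrix if row[j] != '-']
        if not calls:
            keep.append(j)  # a fully-gapped column carries no evidence either way
            continue
        top = max(calls.count('0'), calls.count('1')) / len(calls)
        if top <= threshold:
            keep.append(j)
    return [''.join(row[j] for j in keep) for row in matrix], keep

reads = ["0101", "0-10", "0110", "0011"]
filtered, kept = remove_homozygous(reads)
print(kept)  # column 0 is monomorphic ('0' in every read) and is dropped
```

Gapped calls ('-') are excluded from the frequency count, so coverage differences between columns do not bias the decision.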

Pair-SNP consistency
Let ⋈ be a binary operator which denotes the concatenation of two variables. For example, if a and b are two variables with measures '0' and '1', respectively, a ⋈ b equals '01'. Given two variables c, d ∈ {'00', '01', '10', '11'}, an operator ⊕ over such pairs is also defined. Definition 1 (Pair-SNP consistency). Given the matrix X_{M×N} involving the input fragments, the pair-SNP consistency ω_ij between two arbitrary SNPs s_i and s_j covered by fragments f_k (k = 1, 2, . . ., M) is defined by Eq 5.
Here T_ij is the number of fragments covering both SNPs s_i and s_j. With this normalization, the value of ω_ij ranges between −1 and +1 (i.e., −1 ≤ ω_ij ≤ +1). Moreover, cov(s_i, s_j) denotes the fragments that cover both SNPs s_i and s_j, and, as mentioned above, c(f_k) identifies the origin of f_k. The pair-SNP consistency metric is used to evaluate the compatibility of each pair of SNPs with the current haplotype set H^t. The intuition behind Eq 5 is as follows. For given SNPs s_i and s_j, ω_ij describes the similarity between the pair of SNPs and their corresponding measures in H^t. This measure equals −1 if the current haplotype is entirely identical to the covering fragments in columns i and j, and +1 if they are completely different in those columns. Otherwise, ω_ij equals 0 when the SNPs are not covered by any fragment. Notably, for high measures of ω_ij, the SNPs s_i and s_j are expected to be placed in different clusters upon partitioning. The complexity of this step is O(MN^2), where M and N are the number of fragments and SNPs, respectively.
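Since the exact form of Eq 5 is not reproduced here, the following is only a hedged reconstruction of the idea: each fragment covering both SNPs votes −1 when its allele pair agrees with the corresponding pair in its assigned haplotype and +1 when it disagrees, and the votes are averaged so the result lies in [−1, +1] and defaults to 0 when no fragment covers both sites. All names are illustrative:

```python
def pair_snp_consistency(i, j, fragments, assignment, haplotypes):
    votes = []
    for k, frag in enumerate(fragments):
        if frag[i] == '-' or frag[j] == '-':
            continue  # only fragments in cov(s_i, s_j) participate
        hap = haplotypes[assignment[k]]  # assignment[k] plays the role of c(f_k)
        agree = frag[i] == hap[i] and frag[j] == hap[j]
        votes.append(-1 if agree else +1)
    return sum(votes) / len(votes) if votes else 0.0

fragments = ["01", "01", "11"]
assignment = [0, 0, 1]          # fragment index -> haplotype index
haplotypes = ["01", "10"]
print(pair_snp_consistency(0, 1, fragments, assignment, haplotypes))
```

A value near −1 means the current haplotypes already explain the covering fragments at these two columns, while a value near +1 flags a pair of SNPs that the partitioning step should separate.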

Hypergraph construction
To construct the weighted hypergraph based on the obtained ω matrix, the K nearest neighbors of each SNP s_i are found using Eq 6, where l is the index of the K-th nearest SNP of s_i. KNN(s_i) is the set containing the indices of the K nearest neighbors of s_i; more specifically, each set represents the K SNPs that have the most consistent relationship with s_i. In this case, each hyperedge can connect more than two vertices. Applying K nearest neighbors is a common approach to determine the hyperedges. However, due to the noise and sparsity of the SNP matrix, it is necessary to specify the hyperedges more precisely. Therefore, the connectivity of vertices is defined by finding frequent itemsets; in other words, the hyperedges are determined by the shared K nearest neighbors, as defined in Eq 7. In Eq 7, E contains several SNPs, and SKNN(E) is the set of SNPs that are shared among the nearest-neighbor sets of all members of E. If the number of shared KNNs is more than a predefined threshold, called the minimum support count (sc), then E is defined as a frequent itemset. In the proposed model, each frequent itemset defines a hyperedge e_i, and the number of shared KNNs is assigned as its weight w_i. Among the existing methods, frequent pattern (FP)-growth [39] has gained much attention for its ability to find frequent itemsets. FP-growth is a tree-based method that uses a depth-first strategy to mine frequent itemsets: the database is modeled as a prefix tree, and a depth-first search is recursively applied to generate all maximal frequent itemsets. The runtime of this algorithm increases linearly with the number of SNPs [40].
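The shared-KNN hyperedge construction can be sketched as below. For brevity, FP-growth is replaced here by brute-force enumeration of two-SNP candidates, which is enough to illustrate the weighting scheme on small inputs; `omega` is the pair-SNP consistency matrix, where smaller (more negative) values mean higher consistency, and all names are ours:

```python
from itertools import combinations

def knn_sets(omega, k):
    """K most consistent (lowest-omega) neighbours of each SNP."""
    n = len(omega)
    return [set(sorted((j for j in range(n) if j != i),
                       key=lambda j: omega[i][j])[:k])
            for i in range(n)]

def hyperedges(omega, k=2, sc=1):
    knn = knn_sets(omega, k)
    edges = {}
    for e in combinations(range(len(omega)), 2):
        shared = knn[e[0]] & knn[e[1]]   # SKNN(E) for a two-SNP candidate E
        if len(shared) >= sc:            # minimum support count
            edges[e] = len(shared)       # weight = number of shared neighbours
    return edges

# Two consistent groups of SNPs ({0,1,2} and {3,4,5}) with no cross-group ties:
omega = [[0 if i == j else (-1 if (i < 3) == (j < 3) else 1)
          for j in range(6)] for i in range(6)]
print(hyperedges(omega))
```

Within-group pairs share neighbours and become weighted hyperedges, while cross-group pairs share none and are discarded, mirroring how the support count suppresses spurious edges caused by noise.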

Improving H^t by partitioning the hypergraph
As can be seen in Fig 3, in the constructed hypergraph the SNPs correspond to vertices, and each hyperedge corresponds to an obtained frequent itemset; in other words, it contains a set of SNPs with high consistency with the corresponding positions in H^t. It is noteworthy that hyperedges with higher weights indicate higher similarity between the constituent SNPs and their relevant positions in H^t. The vertices can be divided into two clusters by partitioning the hypergraph. The objective of the partitioning is to minimize the sum of the weights of the hyperedges located between the clusters. To this end, the popular hMETIS algorithm was used. The algorithm includes three steps: (i) a number of small hypergraphs are built in several layers; (ii) the hypergraph at the lowest level is partitioned; (iii) the resulting partitions are extended to the upper levels through successive mapping.
The computational complexity of the algorithm is O(|E|), where E is the set of hyperedges. Suppose that C_1 and C_2 are the two clusters obtained by the hMETIS algorithm. As can be seen in Fig 4, H^t_1 and H^t_2 are partial haplotypes originating from the resulting clusters. In the diploid case, as shown in Fig 5, H^t is improved as in the HapCUT method: first, C_1 or C_2 is selected as the CutSet; next, H^{t+1} is obtained from H^t by flipping the measures of the SNPs in the CutSet.
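The partitioning objective minimized by hMETIS can be made concrete with a short sketch; `cut_weight` is an illustrative name, not the paper's:

```python
# The cut weight is the total weight of hyperedges whose vertices are split
# across the two clusters; partitioning tries to make this as small as possible.
def cut_weight(hyperedges, part):
    """hyperedges: {vertex-tuple: weight}; part: the set of vertices in C1."""
    return sum(w for e, w in hyperedges.items()
               if any(v in part for v in e) and any(v not in part for v in e))

edges = {(0, 1, 2): 3, (2, 3): 1, (3, 4): 2}
print(cut_weight(edges, {0, 1, 2}))  # only hyperedge (2, 3) crosses the cut
```

Since high-weight hyperedges join SNPs that are already consistent with H^t, keeping them inside one cluster concentrates the likely errors in the other cluster, which then becomes the CutSet.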

For a polyploid, H^t is improved by the algorithm shown in Fig 6. In the first step, a partial haplotype (i.e., H^t_1 or H^t_2) is randomly assigned to the CutSet. This set involves the parts of H^t that should be corrected.
As shown in Fig 7, all combinations of the CutSet are evaluated to find a new set of haplotypes (H^t_new) with a lower MEC score. Moreover, in order to evaluate more allelic combinations of SNPs, for a predefined percentage of the SNPs belonging to the CutSet, two arbitrary SNPs are nominated at a time; one of their various genotype combinations is then randomly selected and substituted at the corresponding positions in H^t. This step is repeated for a predefined percentage of the SNPs.
Since H^t is randomly generated, its MEC score is poor in the early iterations; therefore, finding the hyperedges with lower weights is not difficult. However, as the quality of H^t improves and the consistencies between SNPs increase, the MEC measure decreases more slowly.

Refinement of H^t
Computing confidence score. Upon completing the iterative procedure of the proposed method, the haplotype set H = {h_1, h_2, . . ., h_P} is obtained. It is possible to define a confidence measure for each locus of the reconstructed haplotypes. For the diploid case, we used the emission probability P(X_j | h_j, R_j) defined in [41], which is used to identify errors in the reconstructed haplotype. This measure is calculated for each position j as in Eq 8, where h is a haplotype sequence belonging to H, h_j ∈ {0, 1} denotes the allele at position j, and PO(i) contains the columns covered by the i-th fragment f_i. Furthermore, R_j is the set of fragments such as f_i covering position j. Finally, p(x_ij | h_j, f_i) is calculated as in Eq 9, where Q is an M × N matrix such that, for each element x_ij ∈ X, Q_ij is the probability of a sequencing error, and h_j(f_i), the j-th locus of the reconstructed haplotype, is computed according to Eq 10, where C_1 and C_2 are the obtained clusters of similar fragments that indicate the provenance of f_i. Eq 10 provides more information at each locus and is based on the fact that, after removing homozygous sites, h_1 is the complement of h_2; therefore, the confidence score can be calculated more precisely. On the other hand, there is no such relationship between the haplotype sequences in the polyploid form; hence, Eq 8 is not applicable. In this case, we used genotype information. Suppose that g_i is the genotype information at position i and H_i is the reconstructed measure at this position. The sorted measures of g_i and H_i are compared, and position i is selected for refinement if the two sets are not equivalent.

Applying chaos game representation. Chaos game representation (CGR) is a graphical tool that maps an arbitrary sequence to a 2-dimensional form.
This map is reversible, and all the information of the sequence is preserved; moreover, it depicts hidden dependencies among the letters. CGR was initially introduced by Barnsley [42] to evaluate random sequences. Afterwards, Jeffrey [43] developed the method for visualizing genomic sequences. To this aim, a regular polygon is considered according to the number of distinct letters constructing the input sequence. For example, since DNA sequences are constructed from the four nucleotides 'a', 't', 'c', and 'g', a square with unit length is considered and each distinct letter is assigned to one vertex. Each letter of the given sequence is iteratively mapped to a point inside the square. The process starts by locating the first point half-way between the center of the square and the corner related to the first letter; the i-th point is then placed half-way between the previous point and the vertex related to the i-th letter. Using this procedure, many attempts have been made to extract novel features from biological sequences by exploiting CGR [44–48].
Recently, CGR was used to reveal the chaotic properties of haplotypes [37]. Since haplotypes are represented in binary form, the resulting map is a dotted line whose vertices are labeled 0 and 1. In this step, CGR is utilized as follows in order to improve the reconstructed haplotypes.
For loci whose quality is less than a predefined threshold θ, the measures may be disrupted by noise or missing information; therefore, they are refined based on the existing dependencies between SNPs. For this purpose, each h_i ∈ H is mapped to a line by applying CGR. The places assigned to the points construct a coordinate series, namely cs_i. The route of chaos helps to refine the ambiguous measures at the low-quality positions. To this end, the points in cs_i that correspond to alleles with low confidence are marked by '−'. Then, the measure of the ambiguous positions can be determined by applying a local projection (LP) method. After filling the removed measures based on the LP method, the refined coordinate series, called ĉs_i, is transformed into the final haplotype ĥ_i. Note that extracting cs and applying the LP method are accomplished in linear time. The conversion is calculated according to Eq 11.
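The binary CGR mapping and its reversibility can be sketched as follows; the two "vertices" sit at 0 and 1 on a line, each point lands half-way between the previous point and the vertex named by the current allele, and the function names are illustrative:

```python
# Forward map: binary haplotype -> coordinate series (the cs of the text).
def cgr_line(seq, start=0.5):
    points, x = [], start
    for c in seq:
        x = (x + int(c)) / 2.0  # half-way toward vertex 0 or vertex 1
        points.append(x)
    return points

# Inverse map: the mapping is reversible, so the series preserves the sequence.
def invert_cgr(points, start=0.5):
    seq, prev = [], start
    for x in points:
        seq.append('1' if 2 * x - prev == 1 else '0')
        prev = x
    return ''.join(seq)

cs = cgr_line("0110")
print(invert_cgr(cs))  # recovers "0110"
```

Because each coordinate depends on the whole prefix, neighbouring points encode the local allele context, which is what the LP refinement exploits when a low-confidence point is removed and re-estimated.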

Results
In the following section, the performance of the proposed method is compared with several state-of-the-art approaches in diploid and polyploid forms. The method was implemented in MATLAB, and all the results were obtained on a Windows 10 PC with a 3.6 GHz CPU and 16 GB RAM. The parameters of the algorithm were set to t = 100, k = 5, sc = 2, and nc = 20%. The reconstruction rate (RR) [4], a conventional metric, was used to evaluate the quality of the obtained haplotypes. In the diploid case, RR is defined as (Eq 12)

RR = 1 − min(HD(h_1, ĥ_1) + HD(h_2, ĥ_2), HD(h_1, ĥ_2) + HD(h_2, ĥ_1)) / (2N),

where HD denotes the Hamming distance between the target haplotype h_i and the reconstructed haplotype ĥ_j (i, j = 1, 2). For the polyploid case, this formula is generalized as (Eq 13)

RR = 1 − (1/(PN)) min_M Σ_{i=1}^{P} HD(h_i, ĥ_{M(i)}),

where M is a one-to-one mapping from the set of reconstructed haplotypes to the set of target haplotypes.
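The diploid reconstruction rate can be sketched as below; both ways of pairing the reconstructed haplotypes with the target ones are tried and the better pairing is kept, so label switching between the two haplotypes is not penalized (function names are ours):

```python
def hd(a, b):
    return sum(x != y for x, y in zip(a, b))

def reconstruction_rate(target, rec):
    n = len(target[0])
    d = min(hd(target[0], rec[0]) + hd(target[1], rec[1]),
            hd(target[0], rec[1]) + hd(target[1], rec[0]))
    return 1.0 - d / (2 * n)

# One mismatched allele out of 2 * 4 haplotype positions:
print(reconstruction_rate(("0101", "1010"), ("1010", "0111")))  # 0.875
```

For P > 2 the minimum runs over all one-to-one mappings between reconstructed and target haplotypes, which for small P can be enumerated with `itertools.permutations`.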

Diploid case
The experiments were carried out on two widely used datasets: Geraci's dataset [49] and a dataset from the 1000 Genomes Project, which are prime examples of simulated and experimental datasets, respectively. The proposed method was compared with SCGD [36], H-PoP [34], ARO [24], HG [33], FCM [25], FastHap [26], DGS [50], SHR [51], MLF [52], HapCUT [27], 2d [22], Fast [53], and SPH [54]. All of these methods were run with their default parameter settings. In accordance with the existing methods, the reconstruction rate (RR) was used to assess the results of the current method. Tables 1-3 compare the reconstruction rate of the proposed method with the others for haplotype blocks of lengths 100, 350, and 700. Note that in the last column of each table, the highest measures are boldfaced, and the second-highest measures are highlighted. The results demonstrate that the current method has an acceptable level of performance and outperforms the others in most cases.
The performance of the refinement phase is considered in Table 4. Since evaluating the chaotic feature requires long coordinate series, this phase can only be performed for sequences of length 700. For this purpose, the LP method was applied to each coordinate series with embedding dimension (em) equal to 1 and 2, individually. Note that the first column shows the quality of the obtained haplotypes after the first phase terminates, and the next two columns report the reconstruction rate for em equal to 1 and 2, respectively. The obtained results demonstrate that including the chaotic nature of haplotype sequences can significantly improve the reconstruction rate.

Experimental dataset. The second dataset used for evaluation of the proposed algorithm involves experimental data provided by the 1000 Genomes Project. The data belong to the individual NA12878, which is often used to analyze the performance of existing haplotype assembly methods. The sample was sequenced with the 454 sequencing method. According to the overlap of the obtained fragments, they are represented in multiple matrices. In this experiment, for each chromosome, the first 500 matrices were selected. In each matrix, the length of each row is approximately 90 on average, covering the genome at a depth of about 3×. Furthermore, the trio-phased variant calls from the GATK resource bundle [55] were used as the target haplotypes. The obtained reconstruction rates of the proposed method are compared to those of H-PoP [34], SCGD [36], HG [33], ARO [24], and FCM [25] in Table 5.

Polyploid case
Here, the proposed method is compared with three recent approaches developed to solve haplotype assembly in polyploid form: AltHap [23], H-PoP [34], and SCGD [36]. The source codes of all compared methods are available. To investigate the quality of the reconstructed haplotypes, the reconstruction rate (RR) and the MEC measure of the methods were compared. The benchmark dataset was generated by simulation using the source code available upon request from [23]. Its input parameters are the coverage (c), error rate (e), and length of the haplotypes (l). In this experiment, we set c ∈ {5, 10, 15, 20}, e ∈ {0.1, 0.2, 0.3}, and l ∈ {100, 350, 700}. For each combination of these parameters, 10 samples were generated; each sample contains an SNP matrix with a huge number of gaps. Tables 6-8 compare the proposed method with the other algorithms in terms of RR and MEC.
The results demonstrate that the proposed method outperforms the other approaches in most cases considering both the RR and MEC measures. Similar to the previous section, to emphasize the efficiency of the refinement phase, the RRs of haplotypes of length 700 are considered, as provided in Table 9. As in the diploid case, the improvement of the RRs reveals the role of the chaotic viewpoint in efficiently decreasing the amount of remaining noise in the constructed haplotypes. The proposed method is slower than its competitors, because it starts from a random measure and is iterative; however, it can solve the problem in a reasonable amount of time.
Since the sequencing coverage of the benchmark datasets used so far is relatively low, we further evaluated the performance of HRCH on high-coverage data. To this aim, using the source code provided in [23], several samples were generated for the diploid and polyploid cases individually: for each combination of l = 500, e = 0.3, and c ∈ {30, 40, 50}, 10 samples were generated. As shown in Fig 8, the reconstruction rates of the proposed method are compared to those of AltHap [23], SCGD [36], and H-PoP [34]. The obtained results demonstrate that HRCH provides encouraging accuracy compared to the competing schemes in both diploid and polyploid forms.

Conclusion
The high amount of noise, as well as the gaps in the input fragments, are the main challenges in solving the SIH problem. In this study, we established a sampling-based method that starts from an initial set of haplotypes and iteratively proceeds to improve them by correcting the SNPs with wrong measures. The proposed method involves two main steps. First, it utilizes the hypergraph model to overcome the sparsity and high amount of noise, and reconstructs the haplotypes iteratively. Positions with low confidence are then rectified by mapping the haplotype sequences to coordinate series and applying a chaotic viewpoint. The proposed method is capable of handling genomic data of both diploid and polyploid organisms. The promising results on diploid and polyploid data highlight that the method is comparable with the existing approaches and plays a complementary role to them. Finally, the source code of the proposed method is available at https://github.com/mholyaee/HRCH.