Inferring the Clonal Structure of Viral Populations from Time Series Sequencing

RNA virus populations undergo processes of mutation and selection, resulting in a mixed population of viral particles. High throughput sequencing of a viral population therefore contains a mixed signal of the underlying clones. We would like to identify the underlying evolutionary structures. We utilize two sources of information to attempt this: within-segment linkage information and mutation prevalence. We demonstrate that clone haplotypes, their prevalences, and maximum parsimony reticulate evolutionary structures can be identified, although the solutions may not be unique, even for complete sets of information. This is applied to a chain of influenza infections, where we infer evolutionary structures, including reassortment, and demonstrate some of the difficulties of interpretation that arise in deep sequencing from artifacts such as template switching during PCR amplification.

RNA viruses have evolutionary dynamics characterized by high turnover rates, large population sizes and very high mutation rates [1], [2], resulting in a genetically diverse mixed viral population [1], [3]. Subpopulations in these mixtures containing specific sets of mutations are referred to as clones, and their corresponding mutation sets as haplotypes. Unveiling the diversity, evolution and clonal composition of a viral population will be key to understanding factors such as infectiousness, virulence and drug resistance [4].
High throughput sequencing technologies have resulted in the generation of rapid, cost-effective, large sequencing datasets [5]. When applied to viruses, the set of reads obtained from a deep sequencing experiment represents a sample of the viral population which can be used to infer the underlying structure of that population at an unprecedented level of detail [1].
In this study, we aim to identify the haplotypes of clones and quantify their prevalence within a viral population. The method also constructs evolutionary histories of the process consistent with the data. Reconstructing the structure of a mixed viral population from sequencing data is a challenging problem [6]. Only a few works address mixed viral population haplotype reconstruction by inferring both the genomes of sub-populations and their prevalences. Reviews of the methods and approaches dealing with these issues can be found in [1], [7], [8], [9] and [10].
These works frequently make use of read graphs, which consist of a graph representation of pairs of mutations linked into haplotypes [11]. Haplotypes then correspond to paths through these graphs, although not every path will necessarily be realized as a genuine haplotype, which can lead to overcalling haplotypes. Different formalizations of this problem have led to different optimization problems in the literature [9], including minimum-cost flows [12], minimum sets of paths [11], [13], probabilistic and statistical methods [6], network flow problems [3], [14], minimum path cover problems [15], maximizing bandwidth [16], graph coloring problems [17] and K-means clustering approaches [11]. After the haplotypes are constructed, in many cases an expectation-maximization (EM) algorithm is used to estimate their prevalence in the sampled population. Some other works [4], [18] use a probabilistic approach instead of a graph-based method.
In this work we take an integrative approach to address both the genetic diversity and the evolutionary trajectory of the viral population. The method presented is not read graph based; it constructs evolutionary trees and recombination networks weighted by clone prevalences. This reduces the size of the solution set of haplotypes. The method does not rely exclusively on reads physically linking mutations, so it is applicable to longer segments. The method will also be shown to have particular utility with time series data and is highlighted on a chain of infections by influenza (H3N8).
The question of influenza genome diversity has been addressed in the literature largely between strains or samples from different hosts, considering one single dominant genome for each host [19]. Within-host evolution is a source of genetic diversity, the understanding of which may lead to the development of models that link different evolutionary scales [8]. Kuroda et al. [20] addressed the question of within-host evolution of influenza sampled from a patient who died of an A/H1N1/2009 infection, but with a focus on the HA segment using a de-novo approach. Our approach provides a method to further understand within-host evolution of such viruses.
The next section highlights the approach with an overview of the tree and network construction methods with simulated data, followed by an application of the method to a daily sequence of real influenza data. The Methods section describes the construction of the trees and networks in more detail.

Results
We next outline the tree and network construction methods.

An Evolutionary Tree
Consider the pedagogic simulation in Figure 1, where we have a region of interest (such as an influenza segment, for example) that has undergone mutational and selective processes encapsulated by the evolution tree in Figure 1A. This tree contains five mutations M1, M2, M3, M4, M5 that lie on various branches of the tree. These combine into the six clones that are the leaves of the tree. For example, the second leaf is labeled C11000, indicating a clone with a haplotype consisting of mutations M1 and M2 but not M3, M4, M5. Note that the path from the root of the tree to this leaf crosses the two branches corresponding to mutations M1 and M2. The number 20 at the leaf indicates that this clone makes up 20% of the viral population, and is termed the prevalence.
Note that these prevalences form a conserved flow network through the tree [21]. For example, the prevalence of mutation M1 is 55%, which accounts for the two haplotypes C11001 and C11000, with prevalences 35% and 20%, respectively. In general, the prevalence flowing into a node of the tree must equal the sum of the exiting prevalences. This represents conservation of the viral sub-populations. The total prevalence across all the leaves is therefore 100%.
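The conservation property can be sketched as a simple check over the tree of Figure 1A. The clone haplotypes and prevalences below are read from the figure; the encoding of clones as mutation sets is our own illustrative choice.

```python
# Sketch: conserved prevalence flow through the evolutionary tree of
# Figure 1A. Clones are encoded by their mutation sets; prevalences in %.

leaves = [
    ({"M1", "M2", "M5"}, 35),  # C11001
    ({"M1", "M2"}, 20),        # C11000
    ({"M2", "M3"}, 15),        # C01100
    ({"M2"}, 10),              # C01000
    ({"M4"}, 15),              # C00010
    (set(), 5),                # C00000 (wild type)
]

def mutation_prevalence(m):
    """Prevalence of a mutation = total prevalence of leaves carrying it."""
    return sum(p for muts, p in leaves if m in muts)

# Total prevalence across all leaves is 100%.
assert sum(p for _, p in leaves) == 100

# Flow into the node below M2 (80%) equals the flow exiting it: the M1
# branch (55%), the M3 branch (15%), plus the clone terminating there (10%).
assert mutation_prevalence("M2") == 80
assert mutation_prevalence("M2") == (mutation_prevalence("M1")
                                     + mutation_prevalence("M3") + 10)
```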
In reality we are not privy to this information and perform a sequencing experiment to investigate the structure. This takes the form of molecular sequencing, where we detect the five mutations, each with a different depth of sequencing, as portrayed in Figure 1B. We will later see with real influenza data that percentage depth can be reasonably interpreted as prevalence. Furthermore, we can look at the mutations arising on individual sequencing reads and group them into clusters. For our example, this groups the mutations into two clusters, giving the haplotype tables in Figure 1C.
We first construct an evolution tree for each of these tables. Our approach is based upon two sources of information: one utilizes mutation sequencing depth with a pigeonhole principle, the other utilizes linkage information from haplotype tables. Now we have mutation M2 present in 80% of viruses and mutation M1 present in 55% of viruses. If these mutations are never both present in the same sub-population of viruses, then the mutations are exclusive, and the two populations of size 80% and 55% do not overlap. However, the total population of viruses containing either mutation would then be 80% + 55% = 135%, greater than 100%. This is not possible, and the only explanation is that a subpopulation of viruses contains both mutations; the pigeonhole principle. The only tree-like evolutionary structure possible is that M1 is a descendant of M2, as indicated by the rooted, directed tree in Figure 1Di. Note that we have not utilized any haplotype information to infer this, just the prevalences of the two mutations and a pigeonhole principle.
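As a minimal sketch, the pigeonhole argument reduces to simple arithmetic on prevalences (the function name is our own):

```python
def min_overlap(p_a, p_b):
    """Pigeonhole bound: two subpopulations with prevalences p_a and p_b
    (in %) must share at least this much of the population."""
    return max(0, p_a + p_b - 100)

# M2 (80%) and M1 (55%) cannot be disjoint; some clone carries both.
assert min_overlap(80, 55) == 35
# Under a tree, overlapping mutation subpopulations must be nested, and
# since 55 < 80 the M1 subpopulation sits inside the M2 subpopulation.
```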
Mutation M3 has a prevalence that is too low to repeat a prevalence-based argument. However, we have a second source of information: the paired read data that can link mutations together into the haplotypes in Figure 1Ci. This table is based on three mutations, which group into 2^3 = 8 possible haplotypes. However, a tree structure with three mutations will only contain four leaves [22], and we see that four of the haplotypes (emboldened) have notably larger read counts and are likely to be genuine. The four haplotypes with notably lower read counts are likely to be the result of sequencing error at the mutant base positions, or template switching during a cycle of rtPCR, and are ignored. The presence of genuine haplotypes C011 and C010 leads us to conclude that M3 is descended from M2 but not M1, resulting in the tree of Figure 1Di.
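The filtering step can be sketched as follows: a tree on k mutations has at most k + 1 leaf clones, so at most k + 1 haplotypes are kept. The counts below are illustrative, not the table of Figure 1Ci.

```python
# Sketch: separate likely-genuine haplotypes from artifacts in a read-count
# table. Bits index mutations (M1, M2, M3); counts are illustrative.

counts = {
    "000": 2100, "100": 15, "010": 950, "110": 1800,
    "001": 12, "101": 9, "011": 640, "111": 22,
}

k = 3  # number of mutations in the linked cluster
# Keep the k + 1 best-supported haplotypes; the rest are artifacts.
genuine = dict(sorted(counts.items(), key=lambda kv: -kv[1])[: k + 1])
assert set(genuine) == {"000", "110", "010", "011"}
```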
From the mutation prevalences 55%, 80% and 15% of M1, M2 and M3, we can also use the conserved network flow to measure the haplotype prevalences. For example, the leaf descending from M2 (80%), but not M1 (55%) or M3 (15%) (clone C010 of Figure 1Di), must represent the remaining 10% = 80% − 55% − 15% of the population. This provides us with two sources of information (sequencing depth and linkage information) that we can utilize to reconstruct the clone haplotypes, prevalences, and evolution. However, not all mutations can be connected by sequencing reads. They may be either separated by a distance beyond the library insert size, or may lie on distinct (unlinked) segments. Our approach is then as follows. We first construct a tree for each cluster of linked mutations. This will be a subtree of the full evolutionary structure. We then construct a supertree from this set of subtrees. Both of the trees in Figure 1D must be subtrees of a full evolutionary tree for the collective mutation set, so we need to construct a supertree of these two trees. We can do this recursively as follows. We take the mutations and place them in decreasing order of prevalence, as given in Figure 1E. We then attach branches corresponding to these mutations to the supertree in turn, checking firstly network flow conservation, and secondly that the haplotype information in the subtrees is preserved. The steps for this example can be seen in Figure 1F. We start with a single incoming edge with prevalence 100%; the entire viral population. We next place an edge corresponding to M2, the mutation with maximum prevalence of 80%. The next mutation in the tree either descends from the root or this new node. Any descendants of M2 must have a prevalence less than this 80%. Any other branches must descend from the top node but can only account for up to 20% of the remaining population. These two values are the capacities indicated in square brackets. The next value we place is M1, with prevalence 55%. This is beyond the capacity 20% of the top node, so M1 is a descendant of M2, accounting for 55% of the 80% and leaving 25%. We thus have a three node tree with capacities 20%, 25% and 55%. The third ordered mutation M5 has prevalence 35%, which can only be placed at the bottom node with maximum capacity. Our next mutation M3 has a prevalence of 15%, which is less than any of the four capacities available, so no useful information on the supertree structure is obtained from prevalence alone. This branch is the first to use haplotype information. We know from the first subtree that the corresponding branch is a descendant of M2 and not M1. The only node we can use (in red) has capacity 25%, and we place the branch there. For the final branch, corresponding to mutation M4, the prevalence of 15% is less than the four available capacities. The second subtree tells us that M4 is not a descendant of M5. This only rules out one of the four choices, and any of the three (red) nodes will result in a tree consistent with the data. The top node selected results in a tree equivalent to that in Figure 1A. To see this tree equivalence, the internal nodes in the last tree of Figure 1F have additional leaves attached (dotted lines) to obtain Figure 1A.
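A minimal sketch of this recursive placement, using the prevalences of Figure 1: the capacity bookkeeping mirrors the square-bracket values of Figure 1F, the linkage-based decisions are hard-coded as comments, and the helper names are ours.

```python
# Sketch: greedy supertree construction. Mutations are placed in decreasing
# prevalence order; each node tracks the residual prevalence it can absorb.

prevalence = {"M2": 80, "M1": 55, "M5": 35, "M3": 15, "M4": 15}

parent = {}               # mutation -> node it descends from
capacity = {"root": 100}  # residual prevalence available at each node

def place(mut, at):
    """Attach branch `mut` below node `at`, updating residual capacities."""
    assert capacity[at] >= prevalence[mut], "violates flow conservation"
    capacity[at] -= prevalence[mut]
    capacity[mut] = prevalence[mut]
    parent[mut] = at

place("M2", "root")  # largest prevalence (80%) goes first
place("M1", "M2")    # 55% exceeds the root's residual 20%, so M1 sits below M2
place("M5", "M1")    # 35% only fits below M1 (root holds 20%, M2 holds 25%)
place("M3", "M2")    # prevalence uninformative; linkage: below M2 but not M1
place("M4", "root")  # linkage only rules out M5; the root is one valid choice

# The residual capacities are exactly the leaf-clone prevalences of Figure 1A.
assert capacity == {"root": 5, "M2": 10, "M1": 20,
                    "M5": 35, "M3": 15, "M4": 15}
```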
We thus find that a single dataset can result in several trees that are consistent with the data. However, having a time series of samples means a tree consistent with all days of data is required, which substantially reduces the solution space. Note that the prevalences of the clones at the leaves of the tree result from this recursive process. We thus find that supertree construction is relatively straightforward with the aid of prevalence.
However, trees do not always fit the data. This can be due to recombination occurring within segments, or re-assortment occurring between segments. In the next section we construct recombination networks to cater for this, although we will see that they cannot be constructed as efficiently as trees.

A Recombination Network
In Figure 2A we see another simulated evolution, based upon the two segments in Figure 2Ci, that accumulates four mutations, M1, M2, M3 and M4. First we have mutations M1 and M3. Then we have the first of two recombination events, r1, where we have recombination within the first segment as described in Figure 2Cii. We then have mutations M2 and M4, followed by the second recombination event r2 in Figure 2Ciii, a re-assortment between the two segments. This results in the seven clones given at the leaves of Figure 2A. The prevalences of the four mutations across five time points are given in Figure 2E. Note that we no longer have the conservation of prevalence observed in trees. For example, mutations M1 and M3 are on distinct branches extending from the root, yet their total prevalence is in excess of 100% (on Day 5, for example). This is due to recombination r1 resulting in the presence of a clone containing both mutations. The use of the prevalence to reconstruct this structure from observable data thus requires more care. Now we see in Figure 2Ci that the four mutations cluster into two groups, each bridged by a set of paired reads, resulting in the two tables of read counts in Figure 2Bi,ii. We would like to reconstruct the evolution in Figure 2A from these data.
Firstly, we need to decide which of the haplotypes in Figure 2B are real. The haplotypes with consistently low entries are classified as artifacts (shown opaque). We next use a standard approach (such as a canonical splits network [23]) to construct sub-networks from the real haplotypes in each of these tables, such as those given in Figure 2Dii. We then build super-networks, ensuring that all sub-networks are contained as sub-graphs. There does not appear to be an efficient way of doing this (such as the ordering by prevalence that works so well with trees), so a brute force approach is taken, where we construct all possible networks that contain the four mutations and the haplotypes observed in Figure 2Bi,ii.
This results in many candidate super-networks. We now find that the prevalence information can be used to reject many cases. For example, the super-network in Figure 2Hi contains both sub-networks of Figure 2Di,ii as subgraphs. Note that the root node, representing the entire 100% of the population, has daughter branches containing mutations M4, M1 and M3. However, from the prevalences on Day 5 we see that M4 has prevalence 66%, and M1 and M3 (which recombine) have a collective prevalence (from clones C001, C100, C101 and C111 in Figure 2Bi) of 93%. This is in excess of the 100% available and the network is rejected.
Application of filtering by prevalence (see the Methods section for full details) rejects all networks with one recombination event, so we try all networks with two recombination events, resulting in just seven possible recombination networks. These all contain the same set of clones, all of which correspond to the single phylogenetic network in Figure 2F. Although only one recombination event is present across the subnetworks, all super-networks with one recombination event were filtered out, so two recombination events were required. Lastly, we require estimates of the prevalences of each of the seven clones. We would like to match these to the prevalences in the tables of Figure 2B. This is a linear programming problem, the full details of which are given in the Methods section. The resulting estimates are given in Figure 2G, where we see that some clones have point estimates, whereas others have ranges. For example, we see that clone C0010 has a point estimate for each day. This is because it is the only clone of the super-network that corresponds to clone C001 of Figure 2Bi, so their prevalences can be matched. Conversely, we see ranges for the prevalences of clones C1110 and C1111. This is because both clones correspond to clone C111 of Figure 2Bi and a prevalence estimate for each clone cannot be uniquely specified.
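The range estimation can be sketched as a pair of linear programs per clone, minimising and maximising its prevalence subject to the matching constraints. The numbers below are illustrative, not those of Figure 2G, and we assume scipy is available.

```python
# Sketch: bounding clone prevalences by linear programming. Suppose two
# super-network clones, C1110 and C1111, both project onto observed
# haplotype C111 (prevalence 30%), while C0010 alone maps onto observed
# C001 (20%). Variables: x = [C1110, C1111, C0010, rest].
from scipy.optimize import linprog

A_eq = [
    [1, 1, 0, 0],  # C1110 + C1111 = 30 (observed C111)
    [0, 0, 1, 0],  # C0010         = 20 (observed C001)
    [1, 1, 1, 1],  # all prevalences sum to 100
]
b_eq = [30, 20, 100]
bounds = [(0, 100)] * 4

# Minimise and maximise the prevalence of C1110 (the first variable).
lo = linprog(c=[1, 0, 0, 0], A_eq=A_eq, b_eq=b_eq, bounds=bounds)
hi = linprog(c=[-1, 0, 0, 0], A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(lo.fun, -hi.fun)  # C1110 ranges over [0, 30]; C0010 is pinned at 20
```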
Full details of this approach can be found in the Methods section. In the next section we describe the results obtained when applying these methods to a time series of influenza samples.

Application to Influenza
The data used in this study were generated from a chain of horse infections with influenza A H3N8 virus (Murcia, unpublished).An inoculum was used to infect two horses labeled 2761 and 6652.These two animals then infected horses labeled 6292 and 9476.This latter pair then infected 1420 and 6273.The chain continued and daily samples were collected from the horses resulting in 50 samples in total.For the present study we used 16 samples; the inoculum and hosts 2761 (days 2 to 6), 6652 (days 2, 3 and 5), 6292 (days 3 to 6) and 1420 (days 3, 5 and 6).
Influenza A virus is a member of the family Orthomyxoviridae; its genome consists of eight negative-stranded RNAs, commonly referred to as segments and numbered by length from the longest (2341 bp) to the shortest (890 bp) [19], as summarized in Figure 3A. Daily samples were collected from each host and paired-end sequencing was performed on Hi-seq and Mi-seq machines. The sample sequences were aligned with Bowtie2 [24] with default parameters. We obtain for each sample a SAM file containing mapping information for all the different reads in the sample. Any mapped read whose average Phred quality per base was less than 30 was discarded.
In order to identify mutations from real data we need a reference sequence to compare the read sequences to. Consistent differences between the two can then be classified as mutations. We constructed a majority consensus sequence from the inoculum sample. This consensus sequence was then used as a starting reference for the chain of infected animals. An amplification procedure was used to produce viral DNA. This involves PCR, which will result in different levels of amplification and mutations. This in turn is likely to introduce significant differences between sequencing depth and prevalence. To combat this, all identical paired-end reads (with equal beginning and end points) were grouped, classified as a single PCR product deriving from a single molecule, and only counted once. The depth of sequencing with these adjusted counts then provided an improved measurement of the prevalence of viral subpopulations. We compared an identical sample that was sequenced separately, the results of which can be seen across two samples in Figures 3B,C i,ii. Both the position and prevalence of mutations were reproducible to good accuracy, suggesting proportional sequencing depth is a good surrogate for prevalence.
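The deduplication step can be sketched as follows, with read records reduced to illustrative (segment, start, end) tuples:

```python
# Sketch: collapse identical paired-end fragments (same start and end
# coordinates) to a single count, treating them as one PCR product derived
# from a single template molecule. Records here are simplified tuples.

reads = [
    ("seg4", 120, 395), ("seg4", 120, 395), ("seg4", 120, 395),  # 3 copies
    ("seg4", 118, 401),
    ("seg6", 50, 310), ("seg6", 50, 310),
]

deduplicated = set(reads)  # one count per unique fragment
depth = {}
for seg, start, end in deduplicated:
    depth[seg] = depth.get(seg, 0) + 1

assert len(deduplicated) == 3
assert depth == {"seg4": 2, "seg6": 1}
```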
We then applied the methods to sets of high prevalence mutations in each of the eight segments individually, and also to a set of three mutations from distinct segments.The main observations are below.

Within Segment Evolution
For segments 1, 3, 5, 6, 7 and 8 we obtained tree-like evolutions. In all cases the mutations involved lay on distinct branches and were indicative of mutations arising in independent clones. Segment 6 can be seen in Figure 4A, where we see five mutations on six branches. We also see from the stacked bar chart in Figure 4B that many of the mutations arose during different periods in the infection chain.
However, the evolutionary structure of mutations within segments did not always appear to be tree-like, with segment 2 containing one putative recombination event and seven mutations, and segment 4 containing three putative recombination events and six mutations. This latter case arose because we found three pairs of mutations in putative recombination. Using nucleotide positions as labels, these were (431, 674), (431, 709) and (709, 1401). That is, we found significant counts of all four combinations of mutations, labeled C00, C01, C10 and C11, lying on paired reads. Examples of typical counts for three (out of sixteen) samples are given in the top table in Figure 5B (see Supplementary Information for full details). If the evolution is tree-like, reads from one of the types C01, C10 or C11 should only arise as an artifact. Note that we have high read counts in all four categories, which is indicative of recombination.
However, various studies have shown that there is very little evidence of genuine recombination occurring within segments of influenza [26], [27], [28], and these kinds of observations can arise from template switching across different copies of segments during the rtPCR sequencing cycle [25]. We developed an analytic approach to consider this possibility in more detail. Now, if the true underlying structure is tree-like, it suggests that one of C11, C01 or C10 arises purely from template switching (the wild type C00 is assumed to always occur). This gives us three models (labeled i-iii in Figure 5A,B) to consider. We let a, b and c be the population proportions of the three real genotypes. We let n be the probability that a cycle of rtPCR causes template switching. We then treat template switching as a continuous time three-state random process. This allows us to derive the probabilities that genotypes C00, C01, C10 and C11 arise on paired end reads, as given in Figure 5A (see Methods for derivations). The counts of the four classes of read then follow a corresponding multinomial distribution. Maximum likelihood was used to estimate parameters and obtain log-likelihood scores, and a chi-squared measure of fit was obtained for each of the three models.
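As an illustration of the model-fitting idea, the sketch below fits a simplified chimera-mixing stand-in for Model 1, in which C11 reads arise only from template switching, by maximising a multinomial likelihood over a coarse grid. This is not the paper's three-state derivation; the counts and parameterisation are illustrative.

```python
import math
from itertools import product

# Illustrative read-class counts for a mutation pair (not the paper's data).
counts = {"00": 5000, "01": 1800, "10": 1200, "11": 400}

def class_probs(b, c, n):
    """Read-class probabilities under the toy model: genotypes C00, C01,
    C10 with proportions (1-b-c, b, c); with probability n a read is a
    chimera whose two ends come from independently drawn templates."""
    w = 1.0 - b - c
    p1, p2 = c, b  # marginal mutant frequencies at the first/second site
    return {
        "00": (1 - n) * w + n * (1 - p1) * (1 - p2),
        "01": (1 - n) * b + n * (1 - p1) * p2,
        "10": (1 - n) * c + n * p1 * (1 - p2),
        "11": n * p1 * p2,  # chimeras are the only source of C11 reads
    }

def loglik(b, c, n):
    """Multinomial log-likelihood of the observed counts."""
    probs = class_probs(b, c, n)
    return sum(counts[k] * math.log(max(probs[k], 1e-12)) for k in counts)

# Coarse grid-search maximum likelihood estimate of (b, c, n).
grid = [i / 50 for i in range(1, 50)]
best = max(((b, c, n) for b, c, n in product(grid, grid, grid) if b + c < 1),
           key=lambda t: loglik(*t))
print("MLE (b, c, n):", best)
```

Comparing the maximized likelihoods of the three candidate artifact classes then plays the role of the model selection described above.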
For the pair (431, 674) we found that the best log-likelihood, on all sixteen sampled days, was given by Model 1 (Figure 5i), where reads of type C11 are artifacts arising from template switching alone. The parameters obtained provided an almost perfect fit; the expected counts were almost equal to the observed counts and the goodness-of-fit significance values were close to 1. The other two models had substantially lower likelihoods and significantly poor fits. This tells us that if the underlying structure is a tree, it involves the three genotypes C00, C01 and C10, and mutations 431 and 674 lie on distinct branches.
For the pair (431, 709) we found that the best log-likelihood, over the sixteen sampled days, was given by Model 3 (Figure 5iii), where reads of type C10 are artifacts. The parameters obtained provided an almost perfect fit on most days, with goodness-of-fit significance values close to 1. A couple of days had relatively poor fits, but these were not significant when multiple testing across all sixteen days was considered. The model with C11 as an artifact had very similar likelihoods, but the data exhibited significantly poor fits on multiple days. The model with C01 as an artifact performed very badly. This tells us that if the underlying structure is a tree, it involves the three genotypes C00, C01 and C11, and mutation 431 is a descendant of 709.
For the pair (709, 1401) we found that the best log-likelihood, on all sixteen sampled days, was given by Model 2 (Figure 5ii), where reads of type C01 are artifacts. The parameters obtained provided an almost perfect fit on all days, with goodness-of-fit significance values close to 1. The other two models performed very badly. This tells us that if the underlying structure is a tree, it involves the three genotypes C00, C10 and C11, and mutation 1401 is a descendant of 709.
Thus the three cases where the data are indicative of recombination can be explained purely by template switching during rtPCR. This is reinforced somewhat by the fact that the same model emerged across all sampled days for each mutation pair. However, this does not definitively rule out recombination, which could also exhibit these consistent patterns across sampled days, and so care is needed when interpreting the data. Furthermore, the rates of template switching required to explain the data without recombination were not always consistent. For example, in the sample from host 2761 Day 4, the estimated template switching between mutations (431, 674) was 43.6% (95% c.i. 38.2%-49.2%). Between mutations (431, 709) it was 48.8% (95% c.i. 46.3%-51.4%), giving reasonable agreement. Between mutations (709, 1401) it was somewhat higher, at 58.6% (95% c.i. 68.3%-78.7%), although this may be expected due to the greater distance between the mutations. However, in sample 1420 Day 3, the template switching rate for the pair (431, 674), at 99.6% (95% c.i. 76.6%-99.3%), was notably higher than for both the mutation pair (431, 709), at 43.2% (95% c.i. 39.5%-47.1%), and the mutation pair (709, 1401), at 49.3% (95% c.i. 36.9%-64.1%). Although differences between samples (and so library preparations) may be expected, differences such as this within the same library are harder to explain without implicating genuine recombination.
We thus have two explanations of the data; genuine recombination or template switching artifacts.We consider both cases and then draw comparisons.
Firstly, we consider segment 4 assuming recombination has taken place. The results can be seen in Figure 6. The prevalences of the six mutations of interest are given in Figure 6A. Reasonable linkage information was available across the segment, including the two haplotype tables in Figure 6C. The first is linkage information between mutations 709 and 1401, where all four combinations of mutations occur to reasonable depth, implying recombination between the mutations. The second is between mutations 1387 and 1401, where we see only three haplotypes occurring to significant depth, suggesting a tree-like evolutionary structure between the two mutations. The full set of tables is in the Supplementary Information.
Although the sequencing depth in the first table is lower, due to the rarer occurrence of sufficiently large insert sizes, the information gleaned is just as crucial. The most parsimonious evolution found involved three recombination events, resulting in the single cloneset contained in the phylogenetic network given in Figure 6D. There were 22 possible recombination networks that fit this phylogenetic network, one example of which is given in Figure 6E. The relatively complete linkage information resulted in point estimates for the clone prevalences (rather than ranges), as given in Figure 6B.
If we now assume that the recombination-like events are template switching during rtPCR, then from the above we know that mutations 431 and 674 are on distinct branches, mutation 431 is a descendant of 709, and 1401 is also a descendant of 709. This resolves all three reticulation events in the network of Figure 6E, and we end up with the tree given in Figure 6F. However, this structure still has two minor conflicts. Firstly, the tree structure suggests that mutation 431 should have a lower prevalence than 1401, and on most days it does. However, the sample from host 2761 Day 4 has prevalences 54.3% and 51.2% for mutations 431 and 1401, respectively. Similarly, the prevalences from host 1420 Day 3 are 66.1% and 70.3%, respectively. Secondly, the four mutations 674, 709, 1013 and 1401 all descend from the root on separate branches and should have a total prevalence of less than 100%; on fifteen of the sixteen samples this is true. However, on sample 6292 Day 3 the prevalences are 9.9%, 52.8%, 18.0% and 26.0%, which combine to 106.7%. Although the conflicts are relatively small, these differences are larger than would be expected from Poisson sampling of such deep data. However, this is the most plausible tree structure we found.

Detecting Reassortment
Re-assortments occur when progeny segments from distinct viral parents are partnered into the same viral particle, resulting in a recombinant evolutionary network; re-assortment is thus a form of recombination. Recombination is usually possible to detect in diploid species such as humans because linkage information is available across a region of interest, such as a chromosome. Furthermore, human studies have a distinct sequencing sample for each individual. Inferring re-assortment across distinct viral samples is more difficult because, firstly, we do not have linkage information across distinct segments and, secondly, we have mixed populations within each sample. However, we show that re-assortment can still be detected within mixed population viral samples with the aid of the information provided by prevalence. Consider Figure 7. We have three mutations in segments 2, 3 and 4, at nucleotide positions 2037, 201 and 709, respectively. We refer to the mutations as S2 2037, S3 201 and S4 709 accordingly. We see in Figure 7D that S2 2037 and S3 201 have prevalences that alternate in magnitude across the 16 days sampled. If we assume a tree-like structure, these two mutations cannot lie on a single branch, because one prevalence would have to be consistently lower than the other; they must therefore lie on distinct branches. Now mutation S4 709 can: i) be on a distinct third branch, ii) be a descendant of S2 2037, iii) be a descendant of S3 201, iv) be an ancestor of S2 2037, v) be an ancestor of S3 201, or vi) be an ancestor of both. We can rule out all of these choices as follows.
Firstly, we note that S4 709 has a prevalence that is consistently larger than that of S2 2037 or S3 201, so it cannot be a descendant of either mutation, ruling out ii) and iii). We see from sample 6292 Day 3 that S2 2037 and S3 201 have a total prevalence greater than that of S4 709, meaning S4 709 cannot be an ancestor of both mutations, ruling out vi). In this sample, the total prevalence of all three mutations is in excess of 100%, ruling out i). Now, if S4 709 and S2 2037 lie on distinct branches, we see from 2761 Day 4 that their combined prevalence is in excess of 100%, ruling out v). Finally, if S4 709 and S3 201 lie on distinct branches, we see from 6292 Day 4 that their combined prevalence is in excess of 100%, ruling out iv). No tree structure is possible and we conclude that re-assortment is the most likely explanation.
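These exclusion arguments are simple prevalence arithmetic. The sketch below encodes them with illustrative placeholder values; the real prevalences are in Figure 7D.

```python
# Sketch: tree-topology exclusion checks for mutations S2 2037, S3 201 and
# S4 709. The prevalences (%) below are placeholders chosen to display the
# same inequalities as the samples cited in the text.

samples = {
    "6292_day3": {"S2": 40, "S3": 45, "S4": 70},
    "2761_day4": {"S2": 45, "S3": 20, "S4": 60},
    "6292_day4": {"S2": 20, "S3": 45, "S4": 60},
}

s = samples["6292_day3"]
# ii)/iii): S4 709 cannot descend from either mutation (its prevalence is larger).
assert s["S4"] > s["S2"] and s["S4"] > s["S3"]
# vi): S4 709 cannot be an ancestor of both, since S2 + S3 exceeds S4.
assert s["S2"] + s["S3"] > s["S4"]
# i): three distinct branches would exceed 100% in total.
assert s["S2"] + s["S3"] + s["S4"] > 100
# v): S4 709 and S2 2037 on distinct branches exceeds 100% on 2761 Day 4.
d = samples["2761_day4"]
assert d["S2"] + d["S4"] > 100
# iv): S4 709 and S3 201 on distinct branches exceeds 100% on 6292 Day 4.
e = samples["6292_day4"]
assert e["S3"] + e["S4"] > 100
```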
In fact, application of the full method reveals that two re-assortment events are required to explain the data. This results in 51 possible recombination networks, one example of which is given in Figure 7B. These correspond to the four clonesets given in Figure 7C, arising from two possible phylogenetic networks. The four clonesets have prevalences that could not be uniquely resolved; their possible ranges are shown in Figure 7D. Although we cannot uniquely identify the network or the prevalences, all solutions involved two re-assortments, one involving mutations S4 709 and S2 2037, the other involving S4 709 and S3 201. This observation was only possible because of the inferences made with prevalence.

Discussion
We have introduced a methodology to analyze time series viral sequencing data. This has three aims: to identify the presence of clones in mixed viral populations, to quantify the relative population sizes of the clones, and to describe the underlying evolutionary structures, including reticulated evolution.
We have demonstrated the applicability of these methods with paired-end sequencing from a chain of infections of the H3N8 influenza virus. Although we could identify underlying evolutionary structures, some properties of the viruses and the resulting data make interpretation difficult. In particular, template switching during the rtPCR cycle of sequencing an RNA virus is known to occur, and can result in paired reads that imply the presence of recombination. Although any underlying tree-like evolution can still be detected, these artifacts confound the signal of any genuine recombination that may be taking place, making it harder to identify. The prevalence of mutations, measured as sequencing depth proportion, offers an alternative source of information that can help resolve these conflicts in theory, although more work is needed to evaluate how robust this metric is in practice.
For example, although tree like evolutions were identified in six of the segments, in the two remaining segments the approach found reticulated networks, with three distinct reticulated nodes in the hemagglutinin segment's network. Although each of these nodes was consistent with template switching artifacts, the resultant tree structure could not quite be fitted to the mutation prevalences. Although this conflict implies the original network is correct and recombination has taken place, within segment recombination in influenza is rare [26], [27], [28] and other explanations may be required. In particular, we note in Figure 3B that there are slight differences between the prevalences obtained from independent Mi-Seq and Hi-Seq runs. Although some of this will be due to Poisson variation of depth, there could also be biases in PCR over certain mutations, for example. Prevalence information thus needs to be used with caution, and further studies are needed to fine tune this type of approach.
When the approach was applied to mutations in distinct segments, two re-assortment events were inferred. The differences in mutation prevalences were more marked in this case, suggesting the inference is more robust and re-assortment more likely to have taken place. This is also biologically more plausible, with events such as this accounting for the emergence of new strains. We note that although re-assortment may have genuinely taken place, only one of the original clones (containing just mutation 709 on segment 4) survived the infection chain, and a longitudinal study would not have picked up such transient clonal activity.
These methods utilized paired end sequencing data and showed that even when paired reads do not extend the full length of segments, or bridge distinct segments, we can still make useful inferences on the underlying evolutionary structures. The two main sources of information are the linkage offered by two or more mutations lying on the same paired reads, and the prevalence information. It is by utilizing the variability of the prevalence in a time series dataset that we can narrow down the predictions to a useful degree; application of this method to individual days would likely result in too many predictions to be useful. Furthermore, the approach has greatest application to mutations of higher prevalence, as these place more restrictions on the possible evolutions consistent with the data. Such variability is most likely to manifest itself under conditions of differing selective forces; a stable population is less likely to contain mutations moving to fixation under selection, resulting in lower prevalence mutations and less predictive power. Simulations also suggest that although clone-sets may be uniquely identified, prediction of the underlying reticulation network is difficult, with many networks explaining the same dataset.
As we lower the minimum prevalence of analyzed mutations, their number will increase. The number of networks will likely explode and raise significant challenges. Furthermore, single strand RNA viruses such as influenza mutate quickly, suggesting a preponderance of low prevalence mutations likely exist. This is further exacerbated by the fact that sequencing uses rt-PCR, introducing point mutations and template switching artifacts that create noise in the data. These processes are likely responsible for the grass-like distribution of low prevalence mutations visible in Figure 3B,C. Thus as we consider lower prevalence mutations we are likely to get a rapidly growing evolution structure of increasingly complex topology. The methods we have introduced, however, can provide useful information at the upper portions of these structures.
The software ViralNet is available at www.uea.ac.uk/computing/software. The raw data is available from the NCBI (project accession number SRP044631). More detailed outputs from the algorithm are available in Supplementary Information.

Methods
We now consider tree and network construction methods, a template switching model, and validation of the methods.

Tree Construction
The construction of phylogenetic trees is a well established area [22]. Trees are frequently constructed from tables of haplotypes of different species. However, we have two properties that change the situation. Firstly, if we have a set of n mutations linked by reads, we can have up to 2^n distinct haplotypes. However, a consistent set of splits from such a table should contain at most n + 1 distinct haplotypes, in a split-compatible configuration [22]. To construct a phylogenetic tree we thus need to classify the genotypes as real or artifact. Secondly, we have prevalence information, in the form of a conserved network flow through the tree. This can help us both to decide which haplotypes to believe and to construct a corresponding tree.
To describe the algorithm we first introduce some notation. The evolutionary structure is represented by two types of rooted directed tree: one where each edge represents a mutation, such as in Figure 1F, and one where all leaves represent clones in the population, such as in Figures 1A,D. The first is a subtree of the latter, and the latter carries a conserved flow network. These are termed the Compact Prevalence Tree and Complete Prevalence Tree, respectively. To each edge e in the compact prevalence tree we assign a prevalence ρ(e), representing the proportion of the population containing the mutation represented by edge e. The single directed edge e_in(v) pointing toward a vertex v (away from the root) represents a viral population of prevalence ρ(e_in(v)), all containing the mutation corresponding to edge e_in(v), along with its predecessor mutations. The set of daughter edges E_out(v) leading away from node v represents populations containing subsequent mutations, each with prevalence ρ(e), e ∈ E_out(v). The remaining population from ρ(e_in(v)) contains just the original mutation set, with a prevalence described by the capacity ζ(v). The conservation of prevalence satisfied by each vertex v ∈ T is then represented by the condition:

ρ(e_in(v)) = ζ(v) + Σ_{e ∈ E_out(v)} ρ(e)

The root node has total prevalence 1, representing the entire population of interest. This describes the mutation based trees such as that in Figure 1F. To obtain a complete tree containing all the clones, we extend an edge from each internal node to represent the associated clone (these are the dashed lines in Figure 1A). The prevalences of the additional edges are equal to the capacities of the parental nodes.
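The conservation condition can be checked mechanically. Below is a minimal sketch (not the published ViralNet code) that walks a toy compact prevalence tree and recovers the capacities ζ(v) from the edge prevalences; the tree and numbers are invented for illustration:

```python
def capacities(tree, rho, root="R"):
    """tree: node -> list of children; rho: edge (parent, child) -> prevalence.
    Returns zeta(v) for every node, raising if conservation is violated,
    i.e. if the daughter edges carry more prevalence than the incoming edge."""
    zeta = {}
    def walk(v, incoming):
        out = sum(rho[(v, c)] for c in tree.get(v, []))
        z = incoming - out  # population retaining only the mutations so far
        if z < -1e-9:
            raise ValueError(f"conservation violated at {v}")
        zeta[v] = z
        for c in tree.get(v, []):
            walk(c, rho[(v, c)])
    walk(root, 1.0)  # the root carries the whole population, prevalence 1
    return zeta

# Hypothetical three-mutation tree: R -> M1 -> {M2, M3}
tree = {"R": ["M1"], "M1": ["M2", "M3"]}
rho = {("R", "M1"): 0.8, ("M1", "M2"): 0.3, ("M1", "M3"): 0.4}
z = capacities(tree, rho)
print(round(z["R"], 3), round(z["M1"], 3))  # 0.2 0.1
```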
We saw in Figure 1B that mutations can be clustered together, and evolution trees constructed for each cluster. We refer to these as Subtrees. We then look for a tree that contains all such subtrees as a subset of edges. We refer to these as Supertrees.
The algorithm is broken into two steps. The first calculates subtrees; the second calculates supertrees.
Step 1 Subtree Construction. For n mutations we have 2^n possible haplotypes, with corresponding counts n_i, i = 1, 2, ..., 2^n, and a tree with n + 1 haplotypes to fit. This implies that 2^n − n − 1 of those counts are artifacts. For example, in Figure 8D we see the simulated counts for 2^3 haplotypes on 3 mutations. Now Cayley's formula states that there are n^(n−2) different labeled trees that can be constructed on n vertices [30]. These are easily constructed with the aid of Prüfer sequences [31], which are any integer sequences [p_1, ..., p_(n−2)] such that p_i ∈ {1, 2, ..., n}. The first few examples are given in Figure 8A.
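Prüfer decoding is compact to implement. The sketch below uses plain integer labels 1..n rather than the rooted variant described below, in which R is treated as the minimum label:

```python
def prufer_to_tree(p, n):
    """Decode a Pruefer sequence p (length n-2, entries in 1..n) into the
    edge list of the corresponding labelled tree on n vertices."""
    p = list(p)
    v = list(range(1, n + 1))
    edges = []
    while p:
        leaf = min(x for x in v if x not in p)  # lowest label absent from p
        edges.append((leaf, p[0]))              # join it to the first entry of p
        v.remove(leaf)
        p.pop(0)
    edges.append((v[0], v[1]))                  # join the final two labels
    return edges

print(prufer_to_tree([1, 1], 4))  # star centred on vertex 1
```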
To construct a tree we start with p = [p_1, ..., p_(n−2)] and the vector v = [1, 2, ..., n]. At each step we take the lowest entry of v not in p, and the lowest entry of p, and join the two corresponding nodes together with an edge. For example, in Figure 8B we start with v = [R, 1, 2, 3], where the root node R is treated as the minimum value, along with Prüfer sequence p = [R, 3]. The smallest element of v not in p is 1. The corresponding node is then joined to the node for the smallest element R of p, as exemplified in Figure 8Biii. These two elements are removed from p and v and the process repeated until we are left with two elements in v; our example leaves us with the two elements 2 and 3, whose corresponding nodes are joined by a final edge.

Network Construction
A standard approach to constructing recombination networks is to look for an optimal path of trees across the recombination sites [33]. These methods generally have the full mutation profile of a set of species of interest to compare. Our problem is exacerbated by missing data; the full haplotypes of distinct species (clones in our case) are not available. However, we have prevalence information which can help identify structures consistent with the data. We construct recombination networks in five steps: haplotype classification, super-network construction, super-network filtering, prevalence maximum likelihood estimation, and prevalence range estimation. We describe these steps in detail.
Step 1 Haplotype Classification. In order to distinguish the real and artifact haplotypes in any table such as Figure 6C we do the following. For any count n_h^(t) associated with haplotype h and time point t, we calculate the probability that it arises as an error using the Poisson distribution. This gives a term of the form Poiss_{n_h^(t)}(n^(t) ε^{x(h)}), where n^(t) is the total read depth from that time point, x(h) is the number of mutations distinct from the wild type, and ε is a user selected error rate per base per read. We then take the combined log-likelihood L across all time points. All log-likelihoods below a threshold L_0 are classified as artifact. The values ε = 0.005 and L_0 = −360 were used in implementation.
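As an illustration of this scoring, the sketch below evaluates haplotype counts under the Poisson error model and combines the log-likelihoods across time points; the counts, depths and ε here are placeholders, not values from the study:

```python
import math

def poisson_logpmf(k, mu):
    """Log of the Poisson probability mass function."""
    return k * math.log(mu) - mu - math.lgamma(k + 1)

def error_loglik(counts, depths, x_h, eps=0.005):
    """Combined log-likelihood that haplotype h's counts arise as errors.
    counts: n_h per time point; depths: total depth n per time point;
    x_h: number of mutations distinct from the wild type."""
    return sum(poisson_logpmf(n_h, n * eps ** x_h)
               for n_h, n in zip(counts, depths))

# An abundant haplotype is far less plausible as pure sequencing error
# than a rare one, so its combined log-likelihood is far more negative:
print(error_loglik([500, 480], [10000, 9500], x_h=1))  # very negative
print(error_loglik([3, 2], [10000, 9500], x_h=1))      # much closer to zero
```

The combined value is then compared against the threshold L_0 as described above.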
Step 2 Super-Network Construction. This is a brute force approach where we construct all possible recombination networks using r = 0, 1, 2, ... reticulated nodes in turn. Any networks that do not contain the real haplotypes of the individual haplotype tables of Step 1 are rejected. The value of r selected is the smallest value with any valid networks after Steps 3 and 4 are implemented.
Step 3 Filtering. We utilize the prevalence to identify and remove invalid networks. Each leaf c of the recombination graph represents a single clone of the mixed population. We let ρ_c denote the prevalence of that clone. We then have the conditions:

Σ_c ρ_c = 1, ρ_c ≥ 0 (3)

Now we have the estimated prevalence λ_m of each mutation m = 1, 2, ..., M from the proportional sequencing depth at the mutation's position. If we let C_m denote the set of clones from the super-network that contain mutation m, we have conditions of the form:

Σ_{c ∈ C_m} ρ_c = λ_m (4)

We solve the linear programming problem defined by Equations 3 and 4 with the simplex method. If no solution exists on any day t the network is rejected. If a solution is found, it is the input to the (more precise) calculation in Step 4. This step generally reduces the number of networks to manageable levels.
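The sketch below illustrates the constraints of Equations 3 and 4 by testing a candidate clone-prevalence vector for feasibility. The full method solves the linear program with the simplex method; here all clone names, mutation names and numbers are invented:

```python
def feasible(rho, clone_sets, lam, tol=1e-6):
    """rho: clone -> prevalence; clone_sets: mutation -> set of clones
    containing it; lam: mutation -> depth-based prevalence estimate."""
    if any(p < -tol for p in rho.values()):           # rho_c >= 0
        return False
    if abs(sum(rho.values()) - 1.0) > tol:            # Equation 3
        return False
    return all(abs(sum(rho[c] for c in clone_sets[m]) - lam[m]) <= tol
               for m in lam)                          # Equation 4

rho = {"C1": 0.5, "C2": 0.3, "C3": 0.2}
clone_sets = {"m1": {"C1", "C2"}, "m2": {"C2"}}
lam = {"m1": 0.8, "m2": 0.3}
print(feasible(rho, clone_sets, lam))  # True
```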
Step 4 Prevalence Point Estimation. In reality λ_m is an estimate and we have more information than just the depth of mutations. For each cluster of mutations we have the count n_h^(t) for each real genotype h (artifacts are ignored) and time point t in the corresponding table. Conditioning on the total count n^(t) of real genotypes results in a binomial log-likelihood of the following form:

L^(t) = Σ_h n_h^(t) log( Σ_{c ∈ C_h} ρ_c^(t) )

Here the inner sum is over the set C_h of clones that contain haplotype h. We then sum this over all tables and time points and maximize for estimates of the clone prevalences ρ_c^(t). We use gradient descent to maximize, projecting each step onto the simplex in Equation 3. Projecting onto the simplex is relatively straightforward: negative components are set to zero and the updated prevalence vector ρ becomes ρ/||ρ||_1.
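A toy version of this projected gradient ascent is sketched below, with made-up counts and a fixed step size. Note that this simple renormalising projection has a slightly biased fixed point relative to the exact maximum likelihood; it is shown only to illustrate the mechanics:

```python
def project(rho):
    """Zero negative components, then renormalise to the simplex."""
    rho = [max(r, 0.0) for r in rho]
    s = sum(rho)
    return [r / s for r in rho]

def fit_prevalences(counts, clone_sets, k, steps=2000, lr=1e-4):
    """counts: haplotype -> count; clone_sets: haplotype -> indices of the
    clones containing it; k: number of clones."""
    rho = [1.0 / k] * k
    for _ in range(steps):
        grad = [0.0] * k
        for h, n_h in counts.items():
            p = sum(rho[c] for c in clone_sets[h])  # sum over C_h
            for c in clone_sets[h]:
                grad[c] += n_h / p   # d/d rho_c of n_h * log(p)
        rho = project([r + lr * g for r, g in zip(rho, grad)])
    return rho

# Hypothetical: haplotype "10" only in clone 0, "11" only in clone 1,
# observed in a 700:300 ratio
rho = fit_prevalences({"10": 700, "11": 300}, {"10": [0], "11": [1]}, 2)
print(rho)  # larger prevalence for the more abundant haplotype
```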
Step 5 Prevalence Range Estimation. Step 4 does not always result in a unique estimate, because there may be ranges of values ρ_c^(t) on the simplex of Equation 3 that yield identical terms Σ_{c ∈ C_h} ρ_c^(t). If ρ̂_c^(t) are the estimates from the gradient descent, we use the simplex method to maximize ±ρ_c^(t) subject to Equation 3 and conditions of the form:

Σ_{c ∈ C_h} ρ_c^(t) = Σ_{c ∈ C_h} ρ̂_c^(t)

Valid clonesets with the maximum likelihood are then selected. This can be applied to any putative network either to conclude that the network is not feasible, or to produce a range of possible prevalences associated with the network.

Template Switching
We model template switching during rtPCR as follows. Suppose we have two mutations of interest and four possible genotypes, labeled C00, C01, C10 and C11, with corresponding read depth counts n00, n01, n10 and n11. Now, if tree like evolution exists, one of C01, C10 or C11 is an artifact arising from template switching during rtPCR (the wild type C00 is assumed to always occur). We demonstrate the case where C01 is an artifact (model 2 in Figure 5Aii); the derivation for the other two models is similar. We assume that the real clones C00, C10 and C11 have prevalences a, b and c, respectively, so that a + b + c = 1.
We model rtPCR as a time continuous three state process, where template switching occurs at a rate λ, jumping to any of the three templates C00, C10 or C11 with probabilities a, b and c, respectively. We also refer to the states as a, b and c. We let p_a(t), p_b(t) and p_c(t) be the probabilities of occupying a copy of the corresponding template at position t. Conditioning p_a(t) over a time interval (t, t + dt) results in the expression given in the derivation below (see [32] for typical derivations). The counts n00, n01, n10 and n11 then follow a multinomial distribution, from which log-likelihoods can be derived. A chi-squared goodness of fit can then be obtained. We note that in many cases, solutions for the four terms Pr(C00), Pr(C01), Pr(C10) and Pr(C11) in terms of a, b, c and n can be obtained, resulting in a perfect fit. When this is not possible, one or more of the three models can be rejected if the fit is sufficiently bad.
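The closed form Pr(C10) = b + acn derived for this model can be checked numerically against the one-cycle transition matrix; the values of a, b, c and λ below are arbitrary illustrative choices:

```python
import math

def transition_matrix(a, b, c, lam):
    """One-rtPCR-cycle transition probabilities T[start][end] between the
    three template states, from p_x(t) = x + (p_x(0) - x) * exp(-lam*t)."""
    e = math.exp(-lam)
    return {
        "a": {"a": a + (1 - a) * e, "b": b - b * e,       "c": c - c * e},
        "b": {"a": a - a * e,       "b": b + (1 - b) * e, "c": c - c * e},
        "c": {"a": a - a * e,       "b": b - b * e,       "c": c + (1 - c) * e},
    }

a, b, c, lam = 0.5, 0.3, 0.2, 0.8
T = transition_matrix(a, b, c, lam)
# A C10 read starts in state b or c and ends in state a or b:
p10 = b * (T["b"]["a"] + T["b"]["b"]) + c * (T["c"]["a"] + T["c"]["b"])
n = 1 - math.exp(-lam)  # probability a template switch occurs in the cycle
print(abs(p10 - (b + a * c * n)) < 1e-12)  # True: matches the closed form
```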
Note that none of these three models necessarily explain the data. In the last column of Figure 5D, for example, we have four artificial counts 50, 1000, 1000 and 1000 corresponding to genotypes C00, C01, C10 and C11. All three models are a bad fit, suggesting recombination is present. However, this relies on small counts for C00, which were not observed in the real data that was examined. Note that template switching has no effect on the prevalence of individual mutations. For example, considering Figure 4Ciii, if we add Pr(C01) and Pr(C11), we get b + c, which is precisely the prevalence of mutation M2.

Validation and Simulation
The validation of the method is based upon simulated data. This will give some idea of the reconstruction capabilities of the methods and allow benchmarking with other existing approaches. In particular, we compared our tree construction algorithm to the benchmark software Shorah using the same simulation approaches as Zagordi et al. [10], [13] and Astrovskaya et al. [14].
To measure the performance of the mixed population estimation, we computed the Precision, the Recall, and the Accuracy of prevalence estimation for the methods of interest. The recall (or sensitivity) gives the ratio TP/(TP + FN) of correctly reconstructed haplotypes to the total number of true haplotypes, where we have true positives (TP), false negatives (FN) and false positives (FP). The precision gives their ratio to the total number of generated haplotypes, TP/(TP + FP). The accuracy measures the ability of the method to recover the true mixture of haplotypes, and was defined as the mean absolute error of the prevalence estimates. Where a range estimate is obtained for the prevalence, we calculate the shortest distance from the true value to the range.
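These three indicators can be sketched as follows, with invented haplotype sets and prevalence ranges:

```python
def recall(true_haps, predicted_haps):
    tp = len(true_haps & predicted_haps)
    return tp / len(true_haps)                 # TP / (TP + FN)

def precision(true_haps, predicted_haps):
    tp = len(true_haps & predicted_haps)
    return tp / len(predicted_haps)            # TP / (TP + FP)

def prevalence_error(true_prev, est_range):
    """Mean absolute error of prevalence estimates; for a range estimate,
    the shortest distance from the true value to the interval."""
    errs = []
    for h, p in true_prev.items():
        lo, hi = est_range.get(h, (0.0, 0.0))
        errs.append(0.0 if lo <= p <= hi else min(abs(p - lo), abs(p - hi)))
    return sum(errs) / len(errs)

true_haps = {"0001", "0110", "1100"}
pred_haps = {"0001", "0110", "1111"}
print(recall(true_haps, pred_haps), precision(true_haps, pred_haps))  # 2/3 each
err = prevalence_error({"0001": 0.5}, {"0001": (0.4, 0.45)})
print(err)  # ~0.05, the distance from 0.5 to the top of the range
```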
Comparison with Shorah was done on simulated deep sequencing data from a 1.5 kb-long region of HIV-1. Simulated reads were generated by MetaSim [34], a metagenomic simulator which generates collections of reads reproducing the error models of given technologies such as Sanger and 454 Roche. It takes as input a set of genome sequences and an abundance profile and generates a collection of reads sampling the input genomic population.
For up to 12 haplotypes and 3 reticulations we performed 100 runs as follows. We randomly constructed a network by attaching each new branch to a randomly selected node. Reticulations were also randomized. The prevalences of the resulting clones (at the leaves) were randomly selected from a Dirichlet distribution. This was repeated for 10 time points of data. We used MetaSim to generate a collection of 5,000 reads with an average length of 500bp, replicating the error process of Roche 454 sequencing. The methods were then applied to the resulting data.
Shorah output can display mismatches or gaps in the outputted genomes, with increasing frequency at the segment edges. We therefore trimmed the edges of the Shorah output and corrected one or two mismatches or gaps in each genome before making the comparison. Figures 9A-C provide the comparison for the recall, precision and error indicators. We found slight improvements in recall, especially for tree like evolution. The precision and error indicators also showed improved results. We

Figure 1: (A) A contrived evolution of a mixed viral population involving five mutations and six clones. Dotted lines indicate internal nodes extended to a leaf. (B) A notional representation of sequencing across the region of interest, and the resulting Depth-Position graph. Paired reads bridge two clusters of mutations. (C) Read count data obtained for the two clusters, with total depth ×1000, along with artifacts †. (D) Evolutionary trees corresponding to (Ci,ii). (E) Ordered list of mutations and population prevalences (%). (F) Reconstruction of original tree in (A).

Figure 2: (A) A pedagogic evolution recombination network with four mutations and two recombination events across two segments. (B) Typical haplotype tables arising from (A). (Ci) Two clusters of mutations grouped by paired reads on two segments. (Cii) Clones C1000 and C0010 undergo within segment recombination into C1010, with a crossover site between M1 and M3. (Ciii) Clones C1001 and C1110 undergo between segment recombination (reassortment) into C1111. (D) Recombination networks arising from the haplotype tables in (B). (E) The prevalence of the four mutations across five days. (F) Phylogenetic network associated with (A). (G) Point and range prevalence estimates. (Hi) A network consistent with the two networks of (D). (Hii) Incompatible prevalence conditions associated with (Hi).

Figure 3: (A) Size and function of the eight flu segments. (B,C) Depth of mutations from segment 4 from host 2761 Day 4 and host 6292 Day 3, respectively. (i,ii) Results from Hi-Seq and Mi-Seq experiments on separate libraries from the same samples.

Figure 4: (A) Evolution tree arising from five mutations on segment 6. (B) Prevalences of the clones across the time series.

Figure 5: (A) Three template switching models. (B) Fitted models for three pairs of linked mutations on segment 4.

Figure 6: (A) Mutation prevalences of six mutations in segment 4. (B) Prevalences of the ten associated clones. (C) Two tables of linked mutations exhibiting a network like relationship of mutations 709 and 1401, and a tree like relationship of 1387 and 1401. (D) The phylogenetic network of the single fitted cloneset. (E) One of 22 possible recombination networks that arise from (D). (F) Probable tree structure from (E) after template switching is considered.

Figure 7: Three mutations between three segments that indicate two re-assortment events. (A) Mutation prevalences across the time series. (B) One of 51 recombination networks that fit the data. (C) Two phylogenetic networks that fit the data ((i) and (ii)-(iv)), corresponding to four clonesets. (D) Prevalence ranges for the four clonesets.
p_a(t + dt) = p_a(t)(1 − λdt) + p_a(t)aλdt + p_b(t)aλdt + p_c(t)aλdt

This gives us the following differential equation and solution:

dp_a/dt = λ(a − p_a) ⟺ p_a(t) = a + (p_a(0) − a)e^(−λt)

We rescale time so that t = 1 represents one rtPCR cycle. We then have the following transition matrix between the states a, b and c (rows index the starting state, columns the final state):

    a + (1 − a)e^(−λ)    b − be^(−λ)          c − ce^(−λ)
    a − ae^(−λ)          b + (1 − b)e^(−λ)    c − ce^(−λ)
    a − ae^(−λ)          b − be^(−λ)          c + (1 − c)e^(−λ)

Probabilities for all types C00, C01, C10 and C11 can now be defined, which we demonstrate for C10; derivations for the other terms can be obtained in a similar manner. From Figure 5Aii we see that to obtain a read of the form C10, we can start in either state b or c and end in either state a or b. This gives us four terms to add:

Pr(C10) = bT_ba + bT_bb + cT_ca + cT_cb
        = b(a − ae^(−λ) + b + (1 − b)e^(−λ)) + c(a − ae^(−λ) + b − be^(−λ))
        = b + acn

Here n = 1 − e^(−λ) is the probability that a template switch occurs. The formulas in Figure 5A are obtained similarly.

Figure 9: Validation profiles for a range of haplotype counts, including (A) Recall, (B) Precision, and (C) Error. In all cases the solid line denotes the performance of our algorithm and the dashed line that of Shorah.