Segmentation and genome annotation algorithms for identifying chromatin state and other genomic patterns

Segmentation and genome annotation (SAGA) algorithms are widely used to understand genome activity and gene regulation. These algorithms take as input epigenomic datasets, such as chromatin immunoprecipitation-sequencing (ChIP-seq) measurements of histone modifications or transcription factor binding. They partition the genome and assign a label to each segment such that positions with the same label exhibit similar patterns of input data. SAGA algorithms discover categories of activity such as promoters, enhancers, or parts of genes without prior knowledge of known genomic elements. In this sense, they generally act in an unsupervised fashion like clustering algorithms, but with the additional simultaneous function of segmenting the genome. Here, we review the common methodological framework that underlies these methods, describe variants of and improvements upon this basic framework, and discuss the outlook for future work. This review is intended for those interested in applying SAGA methods and for computational researchers interested in improving upon them.


Background and motivation
High-throughput sequencing technology has enabled numerous techniques for genome-scale measurement of chemical and physical properties of chromatin and associated molecules in individual cell types. Using sequencing assays, the Encyclopedia of DNA Elements (ENCODE) Project, the Roadmap Epigenomics Project, and myriad individual researchers have generated thousands of such datasets. These datasets quantify various facets of gene regulation such as genome-wide transcription-factor binding, histone modifications, open chromatin, and RNA transcription. Each dataset measures a particular activity at billions of positions, and the collection of datasets does so in hundreds of samples across a variety of species and tissues. Transforming these quantifications of diverse properties into a holistic understanding of each part of the genome requires effective means for summarization. Segmentation and genome annotation (SAGA) algorithms (Box 1) have emerged as the predominant way to summarize activity at each position of the genome, distilling complex data into an interpretable précis of genomic activity.
SAGA algorithms take as input a collection of genomic datasets, such as ChIP-seq measurements of histone modifications or of transcription factor binding (Figure 1).
The SAGA task is to use the input datasets to partition the genome into segments and assign a label to each segment. SAGA algorithms perform this task in a way that leads to positions with the same label having similar patterns in the input data.

The most common output is an annotation that captures the regulatory state of chromatin. Creating these chromatin activity annotations has served as the predominant use of SAGA methods thus far. Less frequently, researchers have gone beyond measurements of chromatin and DNA-binding proteins and have used SAGA methods for other kinds of data. The output annotation summarizes the input datasets, so the choice of input greatly influences the annotation's content and its subsequent interpretation. SAGA methods can work for any sort of dense linear signal along the genome. Individual studies have applied them to DNA replication timing data [3,24,27], interspecies comparative genomics data [25], and RNA-seq data [28]. Other studies have even found ways to incorporate non-linear 3D genome organization data into the SAGA framework [3,27].

Signal representation of genomic assays
Most genomic assay data so far has come from bulk samples of cells. These data represent a noisy mixture: the assayed property is sampled from the many cells within the population. These cells may represent subpopulations of slightly different types, or cells in different cell cycle stages. Thus, each subpopulation might have different characteristics in the assayed properties. In the mixture of cell subpopulations, only frequently sampled properties will rise above background noise. By comparison, less frequently sampled properties, seen in a minority of cells, may remain indistinguishable from background noise.
Often, the property examined by an epigenomic assay is exhibited or not exhibited by some position of a single chromosome in a single cell, with no gradations between the extremes. For example, at some nucleotide of one chromosome in a single cell, an interrogated histone modification is either present or it is not. A single diploid cell has two copies of the chromosome. Thus, at that position, each diploid cell can have only 0, 1, or 2 instances of the histone modification.
Summing or averaging discrete counts over a population of cells leads to a representation of the assay data called "signal". Signal appears as a continuous-scale measurement. Signal arises, however, only from the aggregation of position-specific properties, which in each cell may take only a small number of potential ordinal values at the moment of observation.
Unlike epigenomic assays, transcriptomic assays can measure any number of transcript copies of one position per cell. Despite similar data representations, one must avoid the temptation to interpret epigenomic signal intensity as one might interpret transcriptomic signal intensity. For a transcriptomic assay, greater signal intensity might reflect a greater "level" of some transcriptional property within each cell. For an epigenomic assay, greater signal intensity indicates primarily that a higher number of cells within a sample have the property of interest.
In both the epigenomic and transcriptomic cases, it remains difficult or impossible to untangle two contributions to higher signal intensity: the frequency of molecular activity within each cell of a subpopulation, and the composition of subpopulations within the whole bulk population. Improvements in single-cell assays, however, may enable SAGA algorithms on data from single cells in the near future (see "Outlook for future work").

Preprocessing of input data
SAGA methods generally use a signal representation of the input data. This signal representation originates from raw experimental data, such as sequencing reads, by way of a preprocessing procedure. For simplicity, we describe the steps of preprocessing as if a human analyst conducted them all individually, although some SAGA software packages perform some steps without manual intervention.

Required preprocessing for all SAGA methods:

1. The analyst transforms the experimental data into raw numeric signal data.
• For sequencing data, the analyst: (1) aligns each sequencing read to the reference genome, (2) may choose to extend each read to an estimated length of the DNA fragment it begins, and (3) computes the number of reads per base or extended reads per base for each genomic position [9,10].
• For microarray data, the analyst: (1) acquires microarray signal intensity for the experimental sample and for a control sample, and (2) computes the ratio of experimental intensity to control intensity.
2. The analyst chooses units to represent the strength of activity at each position and may perform further transformation of the raw numeric signal data into these units.
• For sequencing data, the analyst commonly uses one of: read count (no transformation); fold enrichment of observed data relative to a control [6]; or −log10 p-values under a Poisson model, indicating the statistical significance of enrichment relative to a control [19]. The latter two units can mitigate experimental artifacts because they compare against a control experiment such as a ChIP input control.
Optional preprocessing, or preprocessing required only for specific SAGA methods:

3. The analyst may normalize data to harmonize signal across cell types [40]. Normalization proves especially important when annotating multiple cell types (see "Annotating multiple cell types").
4. To prevent large outlier signal values from dominating the results, the analyst may transform signals using a variance-stabilizing transformation of each signal value x, such as log2(x + pseudocount) [19] or an empirical variance-stabilizing transformation [41].
5. The analyst may downsample 1 bp resolution signal into bins (see "Spatial resolution"). This involves computing one of: average read count; reads per million mapped reads fold enrichment [42]; total count of reads [16,43,44]; or maximum count of reads within each bin [6,18]. Binning greatly decreases the computational cost of the SAGA algorithm and can improve the data's statistical properties.
6. The analyst may binarize numeric signal data into presence/absence values [2,12,21,45,46]. Binarizing signal simplifies analysis by avoiding issues related to the choice of units, but eliminates all but 1 bit of information about signal intensity per bin.
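As a minimal sketch of steps 4 through 6, the following shows binning, log transformation, and binarization of a signal vector. The per-base counts, bin size, and threshold are invented for illustration; real analyses use the cited tools and much larger bins (e.g. 200 bp).

```python
import math

# Hypothetical per-base read counts for a short region (values invented).
per_base_counts = [0, 3, 5, 4, 0, 0, 1, 9, 12, 8, 0, 0]

BIN_SIZE = 4  # e.g. 200 bp in practice; 4 here keeps the example small

def bin_signal(counts, bin_size, how="mean"):
    """Downsample per-base counts into fixed-size bins (step 5)."""
    bins = []
    for start in range(0, len(counts), bin_size):
        window = counts[start:start + bin_size]
        if how == "mean":
            bins.append(sum(window) / len(window))
        elif how == "sum":
            bins.append(sum(window))
        elif how == "max":
            bins.append(max(window))
    return bins

def log_transform(values, pseudocount=1.0):
    """Variance-stabilizing log2(x + pseudocount) transform (step 4)."""
    return [math.log2(x + pseudocount) for x in values]

def binarize(values, threshold):
    """Presence/absence binarization (step 6)."""
    return [1 if x >= threshold else 0 for x in values]

binned = bin_signal(per_base_counts, BIN_SIZE, how="mean")
transformed = log_transform(binned)
binary = binarize(binned, threshold=3.0)
```

In practice, the threshold for binarization would come from a statistical test against a background model rather than a fixed cutoff.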

Missing data
Genomic assays almost always cannot produce signal for every region of the whole genome. Regions where an assay cannot provide reliable information about the interrogated property constitute "missing data" for that assay. Missing data in sequencing assays may arise from unmappable sequences, where reads from repetitive sequence cannot map uniquely to one region [47,48]. Missing data in microarray assays comes from regions covered by no microarray probes. There are three main ways to treat regions of missing data: (1) treating missing data as 0-valued data, (2) decreasing the model resolution, averaging over available data so that the missing data has limited impact, or (3) statistically marginalizing over the missing data [9,49].
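The first two of these three treatments can be sketched as follows, with `None` marking unmappable positions. The signal values are invented for illustration; marginalization (option 3) requires a probabilistic model and is not shown.

```python
# Hypothetical signal with unmappable positions marked None.
signal = [2.0, None, 4.0, None, None, 6.0, 2.0, 2.0]

# Option 1: treat missing data as 0-valued.
zero_filled = [0.0 if x is None else x for x in signal]

# Option 2: decrease resolution, averaging over the available data within
# each coarser bin so that missing positions have limited impact.
def bin_mean_available(values, bin_size):
    out = []
    for start in range(0, len(values), bin_size):
        window = [x for x in values[start:start + bin_size]
                  if x is not None]
        out.append(sum(window) / len(window) if window else None)
    return out

coarse = bin_mean_available(signal, bin_size=4)
```

Option 1 biases signal downward in poorly mappable regions, whereas option 2 trades spatial resolution for robustness.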
When analyzing coordinated assays across multiple cell types, researchers may have to contend with having no data on some properties within a subset of cell types. This represents another kind of missing data: one with an entire dataset missing rather than only data at specific positions. Researchers can impute [21] entire missing datasets through tools such as ChromImpute [50], PREDICTD [51], or Avocado [52]. Alternatively, they can use a SAGA model with built-in capability for handling the missing datasets [12].

Hidden Markov model (HMM) formulation
Many SAGA methods rely on an HMM, a probabilistic model of the relationships between sequences of observed events and the unobservable hidden states which generate the observed events. The structure of HMMs, and similar models such as dynamic Bayesian networks (DBNs) [53], naturally reflects the SAGA task of clustering observed data generated by processes that act on sequences of genomic positions.

Simple HMM example
As an illustration of a simple HMM, consider a dog, Rover, and his owner, Thomas. Thomas is 5 years old and too short to see out of the windows in his home. Rover can leave the house through his dog door and loves taking walks, playing indoors, and napping. Every morning, he will either wait by the door for Thomas, play with his squeaky toys, or sleep in. Whichever action he takes depends on the weather he sees outdoors. For example, on rainy days Rover will more likely nap or play with his toys indoors.
Thomas must infer the state of the weather outside, hidden to him, based on the behavior he observes from Rover. Thomas knows the general weather patterns near his home: for example, that rainy weather likely continues across multiple days.
This scenario fits well into an HMM framework. It has a sequence of observations (Rover's behavior) generated by hidden, non-independent unobservables (the weather outside). One would like to infer the sequence of hidden unobservables based on the sequence of observations.

Mathematical formulation
Formally, we can define an HMM over time t ∈ {1, . . . , T} as follows [54,55]. Let the sequence of observed events X = {X_t}_{t=1}^{T} consist of each observed event X_t at every time t. Let the sequence of hidden states Q = {Q_t}_{t=1}^{T} consist of each hidden state Q_t at every time t. Each Q_t takes on a value q_t from a set of m possible hidden state values (Figure 2a).
Under the Markov assumption, the probability of realizing state value q_{t+1} at the next time step t + 1 depends only on the current state value q_t:

P(Q_{t+1} = q_{t+1} | Q_1 = q_1, . . . , Q_t = q_t) = P(Q_{t+1} = q_{t+1} | Q_t = q_t).

We define the transition probability A(q_{t+1} | q_t) = P(Q_{t+1} = q_{t+1} | Q_t = q_t), which reflects the frequency of moving from state q_t to state q_{t+1}. We define the emission probability B(x_t | q_t) = P(X_t = x_t | Q_t = q_t) as the probability that the observable X_t is x_t if the present hidden state Q_t = q_t. Specifically, we assume that B(x_t | q_t) depends only on Q_t = q_t, such that

P(X_t = x_t | Q_1 = q_1, . . . , Q_t = q_t, X_1 = x_1, . . . , X_{t−1} = x_{t−1}) = P(X_t = x_t | Q_t = q_t).

Finally, we define the hidden state probability at the first time step as π_0(q_1) = P(Q_1 = q_1). We can fully define an HMM M = (A, B, π_0) by specifying all of A, B, and π_0.
In the case of Rover and Thomas, we have m = 2 possible hidden states (rainy, not-rainy) and 3 possible observations (Rover is napping, playing indoors, or waiting by the door). To Thomas, the hidden variable Q_t captures the weather outside, while the observed variable X_t captures Rover's behavior. The probability of the state of the weather outside changing on a day-to-day basis is defined by the transition probabilities A (Figure 2b). The probability of Rover's behavior, given the weather, is defined by the emission probabilities B.
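The parameters of this toy HMM can be written out concretely. All probability values below are invented for illustration; only the structure (m = 2 hidden states, 3 observations) comes from the example.

```python
# A toy HMM for the Rover/Thomas example. All probabilities are invented.
states = ["rainy", "not_rainy"]          # hidden state values (m = 2)
observables = ["nap", "play", "wait"]    # Rover's observable behaviors

# Transition probabilities A(q_{t+1} | q_t): rainy weather tends to persist.
A = {
    "rainy":     {"rainy": 0.7, "not_rainy": 0.3},
    "not_rainy": {"rainy": 0.2, "not_rainy": 0.8},
}

# Emission probabilities B(x_t | q_t): on rainy days Rover stays indoors.
B = {
    "rainy":     {"nap": 0.5, "play": 0.4, "wait": 0.1},
    "not_rainy": {"nap": 0.1, "play": 0.2, "wait": 0.7},
}

# Initial state probabilities pi_0(q_1).
pi = {"rainy": 0.4, "not_rainy": 0.6}

# Sanity check: each distribution sums to 1.
for dist in [pi, *A.values(), *B.values()]:
    assert abs(sum(dist.values()) - 1.0) < 1e-9
```

Specifying A, B, and pi fully defines the model M = (A, B, π_0), as in the formulation above.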

Inference
The main task for which one uses HMMs is to quantify how well some predicted sequence of hidden states fits the observed data. Other common tasks, such as decoding and training, serve as variations of, or build on, this inference task.
In HMM inference, we can compute the likelihood of any sequence of hidden states Q. We use the sequence of observed events X and the model probabilities M to compute the likelihood function P(X | Q, M). The likelihood function is the probability that our predicted sequence of hidden states produced our sequence of observed events. We often compute likelihoods using the forward-backward algorithm [56,57].
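The forward recursion, the first half of forward-backward, computes the total probability of the observations, P(X | M), by summing over all hidden-state paths. A minimal sketch using the invented Rover parameters (repeated here so the example is self-contained):

```python
# Forward algorithm on the toy Rover HMM. Probabilities are invented.
A = {"rainy": {"rainy": 0.7, "not_rainy": 0.3},
     "not_rainy": {"rainy": 0.2, "not_rainy": 0.8}}
B = {"rainy": {"nap": 0.5, "play": 0.4, "wait": 0.1},
     "not_rainy": {"nap": 0.1, "play": 0.2, "wait": 0.7}}
pi = {"rainy": 0.4, "not_rainy": 0.6}

def forward(obs):
    """Return P(obs | M) by dynamic programming in O(T * m^2) time."""
    # alpha[q] = P(X_1..X_t, Q_t = q), initialized at t = 1
    alpha = {q: pi[q] * B[q][obs[0]] for q in pi}
    for x in obs[1:]:
        # each new alpha sums over all possible previous states
        alpha = {q: B[q][x] * sum(alpha[p] * A[p][q] for p in alpha)
                 for q in pi}
    return sum(alpha.values())

likelihood = forward(["nap", "play", "nap"])
```

Real implementations work in log space to avoid numerical underflow over genome-length sequences; this sketch omits that for clarity.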

Viterbi decoding
Given a sequence of observed events X, we often wish to know the maximum likelihood sequence of corresponding hidden states Q. For example, if Thomas observes that in the past 3 mornings, Rover slept, played, and then slept again, what weather sequence outside is most likely?
To answer this question, we decode the optimal sequence of hidden states q* such that q* = arg max_Q P(Q | X, M). The Viterbi algorithm [58] provides an efficient solution for this problem, making it unnecessary to compare the likelihood of every possible sequence of hidden states.
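A sketch of Viterbi decoding on the same invented Rover parameters, answering Thomas's question for the observation sequence nap, play, nap:

```python
# Viterbi decoding for the toy Rover HMM. Probabilities are invented.
A = {"rainy": {"rainy": 0.7, "not_rainy": 0.3},
     "not_rainy": {"rainy": 0.2, "not_rainy": 0.8}}
B = {"rainy": {"nap": 0.5, "play": 0.4, "wait": 0.1},
     "not_rainy": {"nap": 0.1, "play": 0.2, "wait": 0.7}}
pi = {"rainy": 0.4, "not_rainy": 0.6}

def viterbi(obs):
    """Return the maximum-probability hidden state sequence for obs."""
    # delta[q]: probability of the best path ending in state q at time t
    delta = {q: pi[q] * B[q][obs[0]] for q in pi}
    backpointers = []
    for x in obs[1:]:
        new_delta, bp = {}, {}
        for q in pi:
            best_prev = max(pi, key=lambda p: delta[p] * A[p][q])
            new_delta[q] = delta[best_prev] * A[best_prev][q] * B[q][x]
            bp[q] = best_prev
        delta = new_delta
        backpointers.append(bp)
    # Trace the best path backward from the most probable final state.
    state = max(delta, key=delta.get)
    path = [state]
    for bp in reversed(backpointers):
        state = bp[state]
        path.append(state)
    return path[::-1]

best_path = viterbi(["nap", "play", "nap"])
```

With these invented parameters the decoded path is all-rainy: the strong self-transition for rain outweighs the single indoor-play observation.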

Training
Usually, we do not know the model parameters (A, B, π_0) and must learn them from data. We define training as the process of learning these parameters, and training data as the sequence of observations upon which we learn. No efficient algorithm exists that finds the globally optimal parameter values for some training data. Instead, researchers commonly train HMMs using expectation-maximization (EM) [59] algorithms such as the Baum-Welch algorithm [60], which find a local optimum. Other reviews [54] describe inference and training methods in more detail.

HMMs for SAGA
We can readily apply the HMM formalization to genomic data for use in SAGA methods. Instead of time, we define the dynamic axis t in terms of physical position along a chromosome. Each position t refers to a single base pair or, in the case of lower-resolution models, a fixed-size region (see "Spatial resolution"). The observation at each genomic position usually represents genomic signal (see "Input data"). Each position's hidden state represents its label (see "Understanding labels"). As a result, decoding the most probable sequence of hidden states reveals the most probable sequence of labels across the genome. We call this resulting sequence of labels an annotation.
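Converting a decoded per-bin label sequence into an annotation amounts to run-length encoding: merging runs of identical labels into segments with genomic coordinates. A minimal sketch, with an invented label sequence and bin size:

```python
BIN_SIZE = 200  # bp per bin (a common choice; value is illustrative)

def labels_to_segments(labels, bin_size):
    """Run-length encode per-bin labels into (start, end, label) segments
    with 0-based, half-open genomic coordinates."""
    segments = []
    seg_start = 0
    for i in range(1, len(labels) + 1):
        # close the current segment at a label change or at the end
        if i == len(labels) or labels[i] != labels[seg_start]:
            segments.append((seg_start * bin_size, i * bin_size,
                             labels[seg_start]))
            seg_start = i
    return segments

# A hypothetical decoded label sequence over six 200 bp bins.
annotation = labels_to_segments([3, 3, 1, 1, 1, 2], BIN_SIZE)
```

The resulting (start, end, label) triples map directly onto the BED records that most SAGA tools emit.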
Many SAGA methods use an HMM structure [2,9,12,16,21,24,39,42], or generalizations thereof. For example, DBNs are generalizations of HMMs that can model connections between variables over adjacent time steps. Methods such as Segway [9] use a DBN model in their approach to the SAGA problem. This can make it easier to extend the model to tasks such as semi-supervised, instead of unsupervised, annotation [61].

Understanding labels
SAGA methods are unsupervised. The labels they produce usually begin as integer designations without any inherent meaning. Ideally, each label corresponds to a particular category of genomic element. To make this correspondence explicit, we must assign a biological interpretation, such as "Enhancer" or "Transcribed gene", to each label.
Usually, one assigns labels to biological interpretations in a post-processing step. In post-processing, a researcher compares each label to known biological phenomena and assigns an interpretation that matches the researcher's understanding of molecular biology. For example, a label characterized by the histone modification H3K36me3 (associated with transcription) and enriched in annotated gene bodies might have the interpretation "Transcribed". A label characterized by H3K27ac and H3K4me1, both histone modifications canonically associated with active enhancers, might have the interpretation "Enhancer" [29].
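One quantity behind such comparisons is overlap fold enrichment: how much of a label's genomic coverage falls in a known feature class relative to that class's genome-wide fraction. A minimal sketch with invented segments and one invented gene body; real analyses use tools such as those cited later (e.g. Segtools):

```python
# Invented annotation segments (start, end, label) and gene coordinates.
segments = [(0, 400, "label_0"), (400, 1000, "label_1"),
            (1000, 1200, "label_0"), (1200, 2000, "label_1")]
genes = [(400, 1000)]   # one hypothetical annotated gene body
genome_size = 2000

def overlap(a_start, a_end, b_start, b_end):
    """Length of overlap between two half-open intervals."""
    return max(0, min(a_end, b_end) - max(a_start, b_start))

def gene_enrichment(label):
    """Fold enrichment of a label's coverage inside gene bodies."""
    in_genes = sum(overlap(s, e, gs, ge)
                   for s, e, lab in segments if lab == label
                   for gs, ge in genes)
    total = sum(e - s for s, e, lab in segments if lab == label)
    gene_frac = sum(ge - gs for gs, ge in genes) / genome_size
    return (in_genes / total) / gene_frac
```

Here label_1 covers the gene body, so it is enriched there; combined with, say, high H3K36me3 emission parameters, a researcher might interpret it as "Transcribed".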
The interpretation process provides an opportunity to discover new categories of genomic elements. For example, one SAGA study found that their model consistently produced a label corresponding to transcription termination sites. Previously, no one had described a distinctive epigenetic signature for transcription termination [6].
Manual interpretation proves time-consuming for human analysts. Applying SAGA to multiple cell types independently exacerbates this problem (see "Annotating multiple cell types").
Two existing approaches automate the label interpretation process: expert rules and machine learning. In both cases, an interpretation program considers the information that a researcher would use for interpretation. This includes examining the relationship between labels and individual input data properties. It also includes reviewing colocalization of labels with features in previously created annotations. These annotations may have come from SAGA approaches or other manual or automated methods.
In the expert rule approach, an analyst designs rules about what properties a given label must have to receive a particular interpretation.The analyst then applies these rules to assign interpretations to labels from all models [15].
In the machine learning approach, one trains a classifier on previous manual annotations. The classifier then learns a model that assigns interpretations to labels given their properties [11]. One analysis [11] found that automatic interpretation agreed with manual interpretation for 77% of labels, compared to 19% expected by chance.

Spatial resolution
Baroque music often employs a musical architecture known as "ternary form". Pieces of this structure follow a general "ABA" pattern, in which the second "A" section recapitulates the first with some variation. Each section contains multiple musical "sentences", which may repeat or vary. Just like linguistic sentences, each musical sentence contains clusters of notes, or motifs, between "breaths" in the musical articulation. Finer examination of the motifs shows they contain a few notes and chords each. Finer examination of the notes themselves shows they behave just like isolated phonemes in speech, with little meaning on their own.
The genome resembles a musical composition in that one observes different behaviors at different scales. The scale of genomic behavior one wishes to observe influences the choice of SAGA method and the parameters chosen for the method. To observe nucleosome-scale behavior such as genes, promoters, and enhancers, one desires ∼10^3 bp segments. To describe behavior on the scale of topological domains [62], one desires segments of length 10^5 bp to 10^6 bp [1,3,17].
The most important parameter influencing segment length is the underlying resolution of the SAGA method. As noted above (see "Input data"), most SAGA methods downsample data into bins. To observe nucleosome-scale segment lengths (∼10^3 bp), one should use 100 bp to 200 bp resolution [2,9,18]. To observe domain-scale segment lengths (∼10^5 bp), one should use ∼10^4 bp resolution [3,4,27]. Segway [9] and RoboCOP [63] are among the few SAGA methods optimized for single-base resolution inference, and can identify behavior on a 1 bp scale. While most existing SAGA methods handle data at just one genomic scale, there exist methods capable of learning from data at multiple genomic scales [21].
Limitations of the experimental data itself also influence the choice of SAGA model resolution. Spatial imprecision in ChIP-seq data gives it an inherent resolution of about 10 bp. More precise assays such as ChIP-exo [64] and ChIP-nexus [65] can approach 1 bp resolution. Conversely, assays like DNA adenine methyltransferase identification (DamID) and Repli-seq have a coarser resolution of ≥100 bp.
The desired scale may also influence the choice of input data. When aiming to annotate at the domain scale, one should include data with activity at this scale, such as replication timing data and Hi-C data [3,4,24,27]. The inclusion of long-range contact information from Hi-C data poses a challenge because standard algorithms for HMMs cannot be used for a probabilistic model that includes long-range dependencies. Therefore, one must instead use alternative approaches such as graph-based regularization [3] or approximate inference [27].
SAGA methods model segment length through their transition parameters. HMMs implicitly assume a geometric distribution over segment lengths [66]. Related DBN methods can include constraints to tune segment length further. These constraints include the enforcement of a minimum or maximum segment length [9]. Enforcing a minimum segment length ensures that one does not obtain segments shorter than the effective resolution of the underlying data or biological phenomena. Probabilistic models often additionally use a prior distribution on the transition parameters during training to encourage shorter or longer segment lengths.
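The geometric length distribution makes the connection between self-transition probability and segment length explicit: a state with self-transition probability p has P(length = k) = p^(k−1)(1 − p), so the expected segment length is 1/(1 − p) bins. A short sketch (the probabilities and bin sizes are illustrative, not from any particular trained model):

```python
def expected_segment_length(self_transition_prob, bin_size):
    """Expected segment length in bp implied by a state's self-transition
    probability, under the geometric length distribution of an HMM."""
    return bin_size / (1.0 - self_transition_prob)

# With 200 bp bins, a self-transition probability of 0.9 implies ~2 kbp
# segments on average; domain-scale segments need p much closer to 1.
nucleosome_scale = expected_segment_length(0.9, 200)     # ~2,000 bp
domain_scale = expected_segment_length(0.999, 200)       # ~200,000 bp
```

This is why tuning transition parameters (or placing priors on them) directly controls the scale of the resulting annotation.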

Choosing the number of labels
Most SAGA methods require the user to define the number of labels. Using more labels increases the granularity of the resulting annotation at the cost of added complexity. Typically, the number of labels ranges from 5 to 20, with more recent work favoring 10 to 15 labels.
One might think to choose the number of labels automatically with a statistical approach. The Akaike information criterion (AIC), Bayes information criterion (BIC), and factorized information criterion (FIC) [67] measure the statistical support for a particular number of labels. Instead of a fixed number of labels, one may give the model flexibility to choose the number of labels during training and include a hyperparameter that encourages it to choose a higher or lower number [14]. Or one might define labels according to local minima in an optimization based on a network model of assays [46]. One could even exhaustively assign a separate label to every observed presence/absence pattern in binary data [43].
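As a sketch of the information-criterion route, suppose one has trained a model per candidate label count and recorded its maximized log-likelihood. BIC = k·ln(n) − 2·ln(L) then trades fit against parameter count k. The log-likelihoods, parameter counts, and data size below are all invented for illustration:

```python
import math

n_positions = 1_000_000      # number of genomic bins (invented)
params_per_label = 10        # assumed emission parameters per label

# label count -> maximized log-likelihood (invented values)
candidates = {5: -2_500_000.0, 10: -2_400_000.0,
              15: -2_390_000.0, 20: -2_389_000.0}

def bic(log_likelihood, n_labels):
    """BIC = k * ln(n) - 2 * ln(L); lower is better. The transition
    matrix contributes n_labels**2 parameters in this sketch."""
    k = n_labels * params_per_label + n_labels ** 2
    return k * math.log(n_positions) - 2.0 * log_likelihood

best = min(candidates, key=lambda m: bic(candidates[m], m))
```

With these invented numbers, BIC favors 15 labels: the gain from 15 to 20 labels is too small to justify the extra parameters.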
In practice, however, researchers rarely use these statistical approaches for determining the number of labels. Optimizing an information criterion does not necessarily yield the most interpretable annotation, and interpretability reigns supreme in most SAGA applications. End users find annotations most useful when they have about 5-20 labels, for two reasons. First, most users can articulate only about that many known distinctions between classes of genomic elements. Second, even if one could find meaningful distinctions between a larger number of labels, few users of the resulting annotations could keep fine distinctions between such a large number of patterns in their working memory [68]. Even if a statistical approach supported the use of 50 labels, the complexity of such an annotation would make it impractical for most users.

Annotating multiple cell types
There now exist epigenomic datasets describing hundreds of biological samples (Figure 3a). Researchers have correspondingly adapted SAGA methods to work with many samples simultaneously.
We use the term "sample" to refer to some population of cells on which one can perform an epigenomic assay. A sample could correspond to a primary tissue sample, a cell line, cells affected by some perturbation such as drug or disease, or even cells from a different species.
The simplest approach for annotating multiple samples involves simply training a separate model on each sample [11] (Figure 3b). The large number of models produced by this approach necessitates an automated label interpretation process (see "Understanding labels").
Two categories of approaches aim to share information across samples. The first, "horizontal sharing" or "concatenated" approaches, share information between samples to inform the label-training process. The second, "vertical sharing" or "stacked" approaches, share position-specific information to inform the label assignment of each position. A hybrid approach uses a concatenated model that additionally learns a position-specific preference over the labels for each position. Through this preference, data from one sample can influence inference on another. Two implementations have applied variants of this hybrid horizontal-vertical sharing approach. First, TreeHMM [12] uses a cellular lineage tree as part of its input. For each genomic position, TreeHMM models statistical dependency between the label of a parent cell type and that of a child cell type. Second, IDEAS [18] uses a similar approach to TreeHMM, but dynamically identifies groups of related samples rather than using a fixed developmental structure. The IDEAS model allows these sample groups to vary across positions, which allows for different relationships between samples in different genomic regions.

A third approach for vertical sharing uses a pairwise prior to transfer position-specific information between cell types [3,17]. In other words, if position i and position j received the same label in many other samples, then they should be more likely to receive the same label in an additional sample. In contrast to the other methods in this section, the pairwise prior approach does not require the use of concatenated annotation. Therefore, the pairwise approach has the advantage of not requiring the same available data in all cell types.
A fourth approach imputes missing datasets in the target cell type, then applies any of the above annotation methods to the imputed data [50]. Imputation provides three advantages. First, it ensures that all target cell types have the same set of datasets. Second, one can conduct imputation entirely as a preprocessing step, allowing the use of any SAGA method afterward. Third, the imputation process can normalize some artifactual differences between datasets, making concatenated annotation more appropriate.
Vertical sharing approaches have the downside that one cannot understand the annotation of each sample in isolation. This arises because data from other samples influence label assignments in each sample. In particular, vertical sharing tends to mask differences between samples. For example, if some position has an enhancer label in many samples, vertical sharing approaches will annotate that position as an enhancer in a target cell type too, even with no enhancer-related data in the target cell type.

A number of resources can aid in the application of SAGA algorithms and annotations. Reference annotations exist for many cell types (Table 2). These obviate the need for a user of the annotation to actually run a SAGA method. Alternatively, if the user must run a SAGA algorithm on their own data, standardized protocols describe how to perform this process [8,70].

Using and visualizing SAGA annotations
Most users of SAGA annotations view them through one of three visualization strategies. The first, and most common, strategy displays individual annotations as individual rows or "tracks" on a genome browser (Figure 4a). In each row, the browser displays the segments of that annotation for a region of one chromosome, usually indicating the label by color. Popular genome browsers for displaying segmentations include the University of California, Santa Cruz (UCSC) Genome Browser [73], the Washington University in St. Louis (WashU) Epigenome Browser [74], and Ensembl [75].
A second visualization strategy integrates annotations of all samples (Figure 4b). This visualization stacks all labels for a given position on top of one another and scales the vertical axis by an estimate of the functional importance of that position. Two methods can estimate this importance: Epilogos (https://epilogos.altius.org/), which emphasizes rare activity, and the CAAS, which measures activity that is correlated with evolutionary conservation [11].
A third visualization strategy aggregates information about each label across the entire genome. This shows the enrichment of each label at positions of known significance, such as gene components (Figure 4c) or curated enhancers. Tools such as Segtools [76] and deepTools [77] can create these visualizations.
SAGA annotations can provide valuable reference datasets for other analyses and tools. The assignment of one and only one label from a small set to every mappable part of the genome greatly eases downstream analyses. SAGA annotations summarize genomic activity in a much simpler way than the individual input datasets, and even than processed versions of the input datasets such as peak calls.
Most SAGA annotations are in the tab-delimited browser extensible data (BED) format (https://genome.ucsc.edu/FAQ/FAQformat.html#format1). This makes it easy to remix SAGA annotations with other datasets using powerful software such as BEDTools [78]. SAGA annotations form building blocks for methods for integrative analysis of genomic data such as CADD [79].
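Because the format is so simple, even a few lines of code suffice to consume an annotation, for instance to tally genome coverage per label. The BED records below are invented for illustration (real annotations use the first four BED columns exactly as shown: chrom, start, end, name/label):

```python
# A minimal sketch of reading a SAGA annotation in BED format. The
# records are invented; labels mimic interpreted SAGA output.
bed_text = """\
chr1\t0\t200\tEnhancer
chr1\t200\t1400\tTranscribed
chr2\t0\t600\tQuiescent
"""

def parse_bed(text):
    """Yield (chrom, start, end, label) for each BED line."""
    for line in text.strip().splitlines():
        chrom, start, end, label = line.split("\t")[:4]
        yield chrom, int(start), int(end), label

def coverage_by_label(records):
    """Total bp covered by each label across all chromosomes."""
    totals = {}
    for _, start, end, label in records:
        totals[label] = totals.get(label, 0) + (end - start)
    return totals

coverage = coverage_by_label(parse_bed(bed_text))
```

For real-scale work one would use BEDTools or an interval library rather than hand-rolled parsing, but the half-open coordinate convention shown here carries over directly.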

Conclusions and outlook for future work
SAGA methods provide a powerful and flexible tool for analyzing genomic datasets. These methods promise to continue to play an important role as researchers generate more datasets. Despite the large existing literature, future work could still address many challenges.
Alternate scales and data types. Nucleosome-scale annotations (100 bp to 1000 bp segments) of histone modifications have wide usage. While annotations of different data types or at different length scales exist, they are less widely used. Currently, there exist reference domain annotations with segments of length 10^5 bp to 10^6 bp for only a small number of samples [3,4,42,80], and few or no annotations at other scales.
Data preprocessing. Genome annotations would improve with better processing and normalization of input datasets. Representations such as fold enrichment used by existing methods seem primitive compared to more rigorous quantification schemes used in RNA-seq analysis such as transcripts per million (TPM). One could also improve SAGA preprocessing by more frequently incorporating information from multi-mapping reads [81].
Confidence estimates. Most methods do not report any measure of confidence in their predictions. Two types of confidence would prove useful. First, one would often like to know the level of confidence that a position in some sample has label X and not label Y. Second, in many cases one would like a confidence estimate for a differential labeling between two samples: that cell type A and cell type B have different labels at a position.
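One natural way to derive both kinds of confidence from probabilistic SAGA models is to use posterior label probabilities at each position (e.g. from the forward-backward algorithm). A minimal sketch, using hypothetical posteriors and naively treating the two samples as independent:

```python
def label_confidence(posterior):
    """Confidence in the assigned (maximum a posteriori) label at one
    position, given a posterior distribution over labels."""
    return max(posterior.values())

def differential_confidence(posterior_a, posterior_b):
    """Probability that two samples have *different* labels at a position,
    naively assuming the two posteriors are independent."""
    p_same = sum(posterior_a[k] * posterior_b.get(k, 0.0) for k in posterior_a)
    return 1.0 - p_same

# Hypothetical per-position posteriors for two cell types.
a = {"Promoter": 0.7, "Enhancer": 0.2, "Quiescent": 0.1}
b = {"Promoter": 0.1, "Enhancer": 0.8, "Quiescent": 0.1}
print(label_confidence(a))            # confidence in calling sample A "Promoter"
print(differential_confidence(a, b))  # confidence the two labels differ
```

The independence assumption is the weak point of this sketch: posteriors for two samples inferred from shared models or correlated data are not independent, which is one reason principled differential-confidence estimates remain an open problem.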
Fig 1. Overview of segmentation and genome annotation (SAGA). First, preprocessing transforms genomic assay sequencing reads into signal datasets. Second, with signal datasets as input, a SAGA algorithm partitions the genome and assigns an integer label to each segment, yielding an annotation. Third, a researcher interprets the labels, assigning a biological interpretation to each.

Fig 2. Two representations of an HMM. (a) Conditional dependence diagram representation of an unrolled HMM with a sequence of hidden states {Q_t}_{t=1}^T and a sequence of observations {X_t}_{t=1}^T. In this representation, each node represents a hidden discrete (white rectangle) or observed continuous (grey circle) random variable. For every index t, each hidden random variable Q_t takes on some value q_t; similarly, each observed variable X_t takes on some value x_t. X_t may represent either scalar or vector observations. Solid arcs represent conditional dependence relationships between random variables. (b) State transition diagram representation of Cover and Thomas's weather example. In this representation, each node represents a potential value of the hidden variable Q_t. The hidden variable takes on the value r (rainy) or ¬r (not rainy) on any given day t. Solid arcs represent transitions between hidden states, which have transition probabilities A.
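A two-state weather HMM like the one in the caption can be sketched in a few lines. For simplicity this sketch uses discrete emissions rather than the continuous observations shown in panel (a), and all probabilities are made up; `forward` computes the likelihood of an observation sequence via the forward algorithm:

```python
# States: "r" (rainy) and "not_r" (not rainy). All probabilities are
# hypothetical, chosen only to illustrate the model structure.
states = ["r", "not_r"]
start = {"r": 0.5, "not_r": 0.5}                  # initial distribution
A = {"r":     {"r": 0.7, "not_r": 0.3},           # transition probabilities
     "not_r": {"r": 0.4, "not_r": 0.6}}
B = {"r":     {"umbrella": 0.9, "no_umbrella": 0.1},  # emission probabilities
     "not_r": {"umbrella": 0.2, "no_umbrella": 0.8}}

def forward(observations):
    """Forward algorithm: P(observation sequence) under the HMM."""
    # Initialize with the start distribution times the first emission.
    alpha = {s: start[s] * B[s][observations[0]] for s in states}
    # Propagate: sum over predecessors, then multiply by the emission.
    for obs in observations[1:]:
        alpha = {s: sum(alpha[p] * A[p][s] for p in states) * B[s][obs]
                 for s in states}
    return sum(alpha.values())

print(forward(["umbrella", "umbrella", "no_umbrella"]))
```

In a SAGA method the same machinery applies with genomic positions in place of days, chromatin-state labels in place of rainy/not-rainy, and (typically Gaussian) signal values in place of umbrella observations.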

Fig 3. Annotating multiple cell types. (a) Datasets generated by the ENCODE and Roadmap Epigenomics consortia as of 2019. The black cells represent the datasets actually generated out of a larger number of potential combinations of cell type and assay type. (b) Annotating 6 datasets from 3 different samples: 3 from cell type A, 2 from cell type B, and 1 from cell type C. Colored letters over signal data indicate the data associated with those samples. One can use three different SAGA strategies with this collection of datasets. Independent: performing training and inference completely independently on each sample. This yields a different annotation for each sample. Concatenated (horizontal sharing): training a single model across all cell types. This yields one annotation per sample with a shared label set. Each sample must have the same datasets, necessitating imputation of any missing datasets. Stacked (vertical sharing): performing training and inference on datasets from all samples together. This yields a single pan-cell-type annotation.
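The three strategies in the caption can be sketched as different arrangements of the input data matrices. In this hypothetical sketch each sample maps assay names to per-bin signal vectors, and the "imputation" for the concatenated strategy is a trivial zero fill standing in for real imputation methods:

```python
# Hypothetical signal data: 3 samples, up to 3 assays, 4 genomic bins each.
samples = {
    "A": {"H3K4me3": [1, 0, 2, 0], "H3K27ac": [0, 1, 1, 0], "H3K36me3": [0, 0, 1, 2]},
    "B": {"H3K4me3": [2, 0, 1, 0], "H3K27ac": [1, 1, 0, 0]},
    "C": {"H3K4me3": [0, 0, 0, 1]},
}

def independent(samples):
    """One (bins x assays) matrix per sample; models trained separately."""
    return {name: [list(row) for row in zip(*assays.values())]
            for name, assays in samples.items()}

def concatenated(samples, shared_assays):
    """Rows from all samples stacked end to end (horizontal sharing).
    Every sample needs the same assays, so missing ones must be imputed
    (here: crudely filled with zeros)."""
    rows = []
    for name, assays in samples.items():
        imputed = {a: assays.get(a, [0, 0, 0, 0]) for a in shared_assays}
        rows.extend(list(row) for row in zip(*imputed.values()))
    return rows

def stacked(samples):
    """One matrix whose columns are all (sample, assay) pairs
    (vertical sharing), yielding a single pan-sample annotation."""
    columns = [assays[a] for _, assays in samples.items() for a in assays]
    return [list(row) for row in zip(*columns)]

print(len(stacked(samples)), len(stacked(samples)[0]))  # 4 bins x 6 tracks
```

The shapes make the trade-offs visible: independent gives three small matrices with incomparable labels, concatenated gives one long matrix (12 rows here) requiring imputation, and stacked gives one wide matrix (6 columns here) that can produce only a single annotation.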

Fig 4. Visualizations of SAGA annotations. (a) Genome browser display showing 164 cell type annotations for a 20 kbp region on human chromosome 15 (GRCh37/hg19) [71]. Each annotation has 9 labels: Promoter (red), Enhancer (orange), Transcribed (green), Permissive regulatory (yellow), Bivalent (purple), Facultative heterochromatin (light blue), Constitutive heterochromatin (black), Quiescent (grey), and Low Confidence (light grey). (b) Importance score (conservation-associated activity score (CAAS)) for the same region. The total height at each position indicates the position's estimated importance. The height of a given color band denotes the contribution of the associated label towards importance. (c) Genome-wide visualization of the SAGA annotation for 164 samples aggregated over GENCODE [72] protein-coding gene components. Rows: the 9 labels of the annotation. Columns: gene components, including 10 kbp flanking regions upstream and downstream. Each cell shows the enrichment of the row's label at positions along the column's component. Figures derived from [11].

Abbreviations
BED: browser extensible data
BIC: Bayes information criterion
CAAS: conservation-associated activity score
ChIP-seq: chromatin immunoprecipitation-sequencing
CUT&RUN: cleavage under targets and release using nuclease
DamID: DNA adenine methyltransferase identification
DNase-seq: deoxyribonuclease-sequencing
DBN: dynamic Bayesian network
EM: expectation-maximization
ENCODE: Encyclopedia of DNA Elements
FIC: factorized information criterion
HMM: hidden Markov model
SAGA: segmentation and genome annotation
TPM: transcripts per million
UCSC: University of California, Santa Cruz
WashU: Washington University in St. Louis

Table 2. Existing large-scale human reference annotations.