Addressing erroneous scale assumptions in microbe and gene set enrichment analysis

By applying Differential Set Analysis (DSA) to sequence count data, researchers can determine whether groups of microbes or genes are differentially enriched. Yet sequence count data suffer from a scale limitation: these data lack information about the scale (i.e., size) of the biological system under study, leading some authors to call these data compositional (i.e., proportional). In this article, we show that commonly used DSA methods that rely on normalization make strong, implicit assumptions about the unmeasured system scale. We show that even small errors in these scale assumptions can lead to positive predictive values as low as 9%. To address this problem, we take three novel approaches. First, we introduce a sensitivity analysis framework to identify when modeling results are robust to such errors and when they are suspect. Unlike standard benchmarking studies, this framework does not require ground-truth knowledge and can therefore be applied to both simulated and real data. Second, we introduce a statistical test that provably controls Type-I error at a nominal rate despite errors in scale assumptions. Finally, we discuss how the impact of scale limitations depends on a researcher’s scientific goals and provide tools that researchers can use to evaluate whether their goals are at risk from erroneous scale assumptions. Overall, the goal of this article is to catalyze future research into the impact of scale limitations in analyses of sequence count data; to illustrate that scale limitations can lead to inferential errors in practice; yet to also show that rigorous and reproducible scale reliant inference is possible if done carefully.


Introduction
Sequence count data (e.g., data from 16S rRNA-Seq or RNA-Seq studies) have become ubiquitous in modern biomedical research. These data are often used to identify whether a pre-determined set of entities (i.e., a set of microbes or a set of genes) is differentially enriched between two biological conditions [1,2]. For example, in the analysis of RNA-Seq data, researchers often use tools such as Gene Set Enrichment Analysis (GSEA) [1] to identify pathways (sets of genes) that are up- or down-regulated between conditions [3-5]. We refer to this inferential task as Differential Set Analysis (DSA). We show that common approaches to DSA can display high rates of false positives due to limitations of the sequence counting measurement process. We introduce new tools and insights to help researchers mitigate these errors.
Sequencing depth (the number of measured DNA molecules in a sample) can vary substantially between samples due to non-biological (i.e., technical) factors [6,7]. This variation can confound analyses. To remove this unwanted variation and facilitate inference that requires between-sample comparisons, many authors turn to normalization [8,9]. Yet there are limitations to this approach. Various methods of normalization have been proposed, but current guidelines for choosing a normalization rely primarily on simulation-based benchmarking studies, which may not generalize to a particular analysis of real data (e.g., [10]). This problem is magnified by the fact that the choice of normalization can drive modeling results [11,12]. Other perspectives on the non-biological variation in sequencing depth bring the use of normalization into question. Rather than viewing this non-biological variation as extra noise that must be removed, some view it as a symptom of an imperfect measurement process that lacks information about the scale (i.e., total size) of the system being studied [7,13,14]. From this perspective, any normalization makes an implicit assumption about the system scale (a scale assumption) that cannot be validated from the observed data [15]. Following this view, some authors argue against the use of any normalization, holding instead that these data should only be used to answer questions that are invariant to the unmeasured system scale [7,13]. This view is predominant within the field of Compositional Data Analysis (CoDA), which is founded upon the axiom of scale invariance [16]. While intuitive, this scale-invariance perspective can be scientifically limiting, as many biologically relevant scientific questions require scale information [17,18].
A third perspective has recently emerged. Nixon et al. proposed a reformulation of CoDA called Scale Reliant Inference (SRI) for statistically rigorous non-scale-invariant (scale reliant) analyses of sequence count data [14]. Using SRI, those authors show that statistically rigorous analysis of these data requires explicitly considering uncertainty in the unmeasured system scale: in essence, building statistical models that consider uncertainty in the normalization method itself. By applying their framework to the study of Differential Analysis (DA, i.e., differential abundance and differential expression analyses), they show that even slight errors in scale assumptions can lead to false discovery rates as high as 80%. Viewing DSA as a generalization of DA, in which the goal is to study sets of entities rather than individual entities, we hypothesized that errors in scale assumptions may lead to inferential errors in DSA.
We review concepts and terminology from the field of SRI to study how errors in scale assumptions can impact DSA. As in SRI, our approach centers on a mathematical quantity called a target estimand: the quantity a researcher wants to estimate. We study the impact of erroneous scale assumptions and characterize how this impact depends on the target estimand. We show that many common estimands are scale reliant and that, in the context of those estimands, common methods for performing DSA result in elevated rates of false positives. To mitigate these problems, we develop a type of sensitivity analysis to identify possible false positives. In addition, we characterize a broad class of target estimands that are insensitive to erroneous scale assumptions and discuss how to determine when research goals fall within this class. Beyond providing new tools for performing DSA, the overall purpose of this article is to catalyze future research and discussion into the impact of scale limitations in sequence count data; to show that rigorous and reproducible scale reliant inference is possible with these data; and to highlight that care is required to perform such analyses.

Review of scale reliant inference
Scale Reliant Inference (SRI) is a conceptual framework used to study scale assumptions in sequence count data [14]. This section reviews core terms and concepts from SRI relevant to the study of DSA. For concreteness, this review is presented in terms of a hypothetical study performing differential abundance analysis on a cross-sectional microbiome dataset of N study participants, half with a disease of interest and half healthy controls. A summary of the mathematical notation used throughout this work is provided in Table 1.
The scaled system, observed data, and target estimand. For differential abundance, the observed data are represented as a D × N matrix of counts Y with elements Y_dn representing the number of DNA sequences that map to the d-th microbial taxon in the sample obtained from participant n. Central to SRI is the notion that the observed data (Y) are a measurement of an underlying biological system called a scaled system and denoted W. Like Y, W is a D × N matrix. Unlike Y, the elements W_dn represent the true (as opposed to measured) amount of the d-th taxon in the gut microbiota of the n-th participant. Note that the definition of true amount is not restricted to the total number of microbes in a subject's gut. For example, for a researcher interested in host-pathogen interactions, true amounts could be the ratio of bacterial to human cells at the epithelial surface of the distal colon.
The scaled system represents an unmeasured standard of truth: a reference against which the limitations of the observed data can be discussed. While the sequence counting process can provide limited information about the scaled system in many ways (e.g., PCR bias [19]), this article studies the lack of scale information in these data. Consider that the scaled system can be uniquely described in terms of its scale (summed, W⊥) and compositional (proportional, W∥) parts:

W_dn = W⊥_n · W∥_dn, where W⊥_n = Σ_d W_dn and W∥_dn = W_dn / W⊥_n.   (1)

Intuitively, saying sequence count data lack scale information means that, due to the non-biological variation in sequencing depth, sequence count data cannot be used to estimate the system scale (W⊥) to any reasonable degree of precision [6,18] (see Nixon et al. for a more formal definition involving model identifiability [14]). Ultimately, the extent to which scale limitations matter to an applied researcher depends on their research goal. In SRI, the research goal is represented by a mathematical quantity called a target estimand. Formally, a target estimand (ψ) is some function f applied to the scaled system: ψ = f(W). For example, a differential abundance analysis may estimate the Log-Fold-Change (LFC) of each taxon d,

θ_d = mean_{n:x_n=1}(log W_dn) − mean_{n:x_n=0}(log W_dn),   (2)

for a binary covariate x_n denoting two mutually exclusive conditions (e.g., case versus control).
The fundamental challenge of SRI occurs when the desired target estimand relies upon scale information not present in the observed data. For example, using the relationship W_dn = W⊥_n W∥_dn implied by Eq (1), the LFC target estimand in Eq (2) can be written as

θ_d = [mean_{n:x_n=1}(log W∥_dn) − mean_{n:x_n=0}(log W∥_dn)] + [mean_{n:x_n=1}(log W⊥_n) − mean_{n:x_n=0}(log W⊥_n)],   (3)

where the first bracketed term is the compositional component θ∥_d and the second is the scale component θ⊥. In vector notation, this can be written as θ = θ∥ + 1θ⊥ where θ = (θ_1, ..., θ_D), θ∥ = (θ∥_1, ..., θ∥_D), and where 1 denotes a vector of ones. This target estimand is scale reliant: it requires knowledge of θ⊥, and therefore of W⊥, to uniquely determine θ_d.
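The decomposition in Eqs (1)-(3) can be verified numerically. The following sketch (our own illustration on simulated data; the variable names and dimensions are hypothetical, not taken from the paper) constructs a small scaled system and confirms that the LFC separates into compositional and scale components:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scaled system W (D taxa x N samples) and binary condition x.
D, N = 5, 8
W = rng.lognormal(mean=2.0, sigma=1.0, size=(D, N))
x = np.array([0, 0, 0, 0, 1, 1, 1, 1])

# Scale and compositional parts: W_dn = W⊥_n * W∥_dn (Eq 1).
W_perp = W.sum(axis=0)          # scale of each sample, W⊥_n
W_par = W / W_perp              # composition, W∥_dn

# LFC target estimand (Eq 2) and its decomposition (Eq 3).
def lfc(M):
    return np.log(M[:, x == 1]).mean(axis=1) - np.log(M[:, x == 0]).mean(axis=1)

theta = lfc(W)                                   # θ_d
theta_par = lfc(W_par)                           # θ∥_d
theta_perp = (np.log(W_perp[x == 1]).mean()
              - np.log(W_perp[x == 0]).mean())   # θ⊥ (a scalar)

# Eq (3): θ = θ∥ + 1·θ⊥
assert np.allclose(theta, theta_par + theta_perp)
```

Because log W_dn = log W∥_dn + log W⊥_n, the scale component θ⊥ enters every taxon's LFC identically, which is why a single scalar error in the assumed scale shifts all LFCs at once.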
Scale assumptions and the applied estimator. The target estimand, the scaled system, and the observed data make up the three core constructs of SRI. From this core, the goal of an analysis can be defined, and the challenge of achieving that goal given the limited information content of the observed data can be investigated. Notably, this core does not include the specific analytical method applied. That method, referred to as an applied estimator g, is a function applied to the observed data to estimate the target estimand ψ: ψ̂ = g(Y). In SRI, the target estimand serves as a backdrop against which the impact of errors made by the applied estimator can be studied. For example, consider observed data (Y) that lack scale information and a target estimand (ψ) that is scale reliant (that requires scale information). Against this backdrop, it is clear that any applied estimator ψ̂ = g(Y) that produces a unique estimate of ψ must have made some assumption about the unmeasured scale of the system W⊥. In SRI, these are called scale assumptions.
Following Nixon et al. [14], we illustrate the concept of scale assumptions through a study of the Centered Log-Ratio (CLR) normalization in differential abundance analysis. Recognizing the scale limitations of sequence count data, a variety of tools (e.g., ALDEx2 [20] or Songbird [15]) estimate LFCs using CLR normalized abundances. In brief, these methods can be thought of as producing estimates of a target estimand

ϑ_d = mean_{n:x_n=1}(W^clr_dn) − mean_{n:x_n=0}(W^clr_dn),   (4)

where W^clr_dn = log[W_dn / G(W·n)] and where G(W·n) denotes the geometric mean of the vector W·n = (W_1n, ..., W_Dn). The system's scale (i.e., W⊥) cancels out of the fraction log[W_dn / G(W·n)] (see Section F in S1 Text), and thus W^clr_dn can be equivalently expressed entirely in terms of the system's composition (i.e., W∥): W^clr_dn = log[W∥_dn / G(W∥·n)]. The CLR target estimand may be further decomposed into its compositional and scale components, similar to Eq (3), as

ϑ_d = θ∥_d + ϑ⊥, where ϑ⊥ = −[mean_{n:x_n=1}(log G(W∥·n)) − mean_{n:x_n=0}(log G(W∥·n))].   (5)

The CLR target estimand in Eq (4) is not the same as the LFC target estimand in Eq (2). Yet many authors take the estimates produced by ALDEx2 and Songbird as estimates of the LFC as defined in Eq (2) (e.g., [21,22]). While θ∥ = ϑ∥, θ⊥ (Eq 3) differs from ϑ⊥ (Eq 5): the former is a function of the system's scale (i.e., W⊥) and the latter is a function of the system's composition (i.e., W∥). Assuming that the output of ALDEx2 or Songbird represents estimates of LFCs (as defined in Eq 2) is equivalent to an implicit assumption that θ_d = ϑ_d. This assumption can be further simplified to θ⊥ = ϑ⊥, which is an assumption that the true log-fold-change in scale can be imputed from the system composition:

θ⊥ = −[mean_{n:x_n=1}(log G(W∥·n)) − mean_{n:x_n=0}(log G(W∥·n))].   (6)

We refer to this as the CLR assumption.
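These relationships can be checked numerically. The sketch below (our own illustration on simulated data, not code from ALDEx2 or Songbird) confirms that the CLR transform is scale invariant and that the CLR target estimand decomposes as in Eq (5):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical scaled system and condition labels (illustrative only).
D, N = 5, 8
W = rng.lognormal(2.0, 1.0, size=(D, N))
x = np.array([0, 0, 0, 0, 1, 1, 1, 1])
W_par = W / W.sum(axis=0)       # composition W∥

def geo_mean(M):
    """Per-sample geometric mean across taxa: G(W·n)."""
    return np.exp(np.log(M).mean(axis=0))

# The CLR transform is scale invariant: W and W∥ give identical results.
clr_W = np.log(W / geo_mean(W))
clr_Wpar = np.log(W_par / geo_mean(W_par))
assert np.allclose(clr_W, clr_Wpar)

# CLR target estimand ϑ_d (Eq 4) ...
vartheta = clr_W[:, x == 1].mean(axis=1) - clr_W[:, x == 0].mean(axis=1)

# ... decomposes as ϑ_d = θ∥_d + ϑ⊥ (Eq 5), where ϑ⊥ depends only on W∥.
theta_par = (np.log(W_par[:, x == 1]).mean(axis=1)
             - np.log(W_par[:, x == 0]).mean(axis=1))
vartheta_perp = -(np.log(geo_mean(W_par[:, x == 1])).mean()
                  - np.log(geo_mean(W_par[:, x == 0])).mean())
assert np.allclose(vartheta, theta_par + vartheta_perp)
```

The first assertion demonstrates why the CLR carries no scale information: any per-sample scale factor cancels inside log[W_dn / G(W·n)], so ϑ⊥ can only be a function of the composition.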
To date, SRI has primarily been used to study target estimands associated with differential analysis. In what follows, we provide the first application of SRI to DSA.

Conceptual overview of methods
For a set of entities S, DSA estimates a target estimand ϕ_S which can take on three values: +1 if the set is differentially enriched, −1 if differentially depleted, and 0 if neither enriched nor depleted. DSA is often performed in two steps: first, LFCs are estimated using tools like ALDEx2 [20] or DESeq2 [23]; second, tools like Gene Set Enrichment Analysis (GSEA) [1] are applied to the estimated LFCs to produce an estimate ϕ̂_S. In the next section, we introduce these two-step estimators using the language of applied estimators and target estimands. After that, we provide a core methodological contribution of this work: we show how errors in the scale assumptions used to estimate LFCs propagate into errors in estimates of ϕ_S. This result forms the basis of our LFC Sensitivity Analyses, which allow researchers to quantify the sensitivity of estimates ϕ̂_S to errors in scale assumptions. This idea also forms the basis of the LFC Sensitivity Test, which identifies entity sets where conclusions about enrichment or depletion are completely insensitive (invariant) to errors in scale assumptions. Beyond the main text, Section A in S1 Text develops more general forms of sensitivity analysis for DSA that generalize beyond these two-step LFC-based estimators and can be applied to other methods like CAMERA [24].

Applied estimators and target estimands for DSA
There are many applied estimators for DSA. A particularly popular approach is to apply Gene Set Enrichment Analysis (GSEA) [1] to LFCs estimated using tools such as ALDEx2 [20], Songbird [15], or DESeq2 [23]. DSA estimators such as GSEA and others [25] can be thought of as two-stage applied estimators: the first-stage estimator h (e.g., ALDEx2) estimates LFCs from the observed data (θ̂ = h(Y)) and the second-stage estimator u (e.g., GSEA) then estimates DSA from θ̂ (ϕ̂_S = u(θ̂)). We use the two-stage form of these applied estimators to define target estimands for DSA.
It is challenging to identify target estimands. Just as researchers' goals can differ, so too can their definitions of enriched and depleted entity sets. Moreover, many studies do not explicitly state their estimation goals but instead simply use an applied estimator, leaving the estimation goal implicit. To address this challenge, we assume a correspondence between the applied estimator a researcher uses and their estimation goals. Consider a researcher who uses a DSA applied estimator with second stage ϕ̂_S = u(θ̂). We assume their target estimand is defined just as the applied estimator but with the estimated LFCs replaced by the true LFCs: ϕ_S = u(θ).

LFC sensitivity analysis and testing
Consider any DSA applied estimator that can be written as ϕ̂_S = u(θ̂) with corresponding target estimand ϕ_S = u(θ). We can relate the true value of the estimand ϕ_S and the estimate ϕ̂_S by considering error in the estimated value of θ. Let ϵ denote error in the estimate θ̂ such that θ = θ̂ + ϵ. Just as the LFC estimate can be decomposed as θ = θ∥ + 1θ⊥, the error can also be decomposed as ϵ = ϵ∥ + 1ϵ⊥, where ϵ∥ = θ∥ − θ̂∥ is a D-vector with elements ϵ∥_d denoting error in each compositional component of the estimate, and ϵ⊥ = θ⊥ − θ̂⊥ is a scalar denoting error in the estimated scale. While there can be error in compositional estimation such that, for some d, ϵ∥_d ≠ 0, the consideration of such error only serves to complicate our study of scale. Consider that unless ϵ⊥ and ϵ∥_d are strongly anti-correlated, if a conclusion is highly sensitive to error ϵ⊥ when ϵ∥_d = 0, it will still be highly sensitive when ϵ∥_d ≠ 0. Thus, we restrict our analysis to ϵ⊥ by assuming that ϵ∥_d ≪ ϵ⊥ such that ϵ ≈ 1ϵ⊥. Then the true value θ can be expressed in terms of the estimate θ̂ as

θ = θ̂ + 1ϵ⊥.   (7)

Returning to DSA, Eq (7) can be used to represent the truth (ϕ_S) in terms of error ϵ⊥ and LFC estimates θ̂:

ϕ_S = u(θ̂ + 1ϵ⊥).   (8)

By comparing ϕ_S versus ϕ̂_S as a function of ϵ⊥, we can study how our estimates differ from the truth as a function of errors in scale assumptions (ϵ⊥). We call this LFC Sensitivity Analysis.
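The mechanics can be sketched as follows. To keep the example self-contained, a simple one-sample t-test stands in for the second-stage estimator u; GSEA or any other second stage could be substituted. The data, set definition, and grid here are our own hypothetical illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical estimated LFCs for D entities; S indexes the entity set.
D = 60
theta_hat = rng.normal(0.0, 1.0, size=D)
theta_hat[:12] += 1.0           # make the first 12 entities look enriched
S = np.arange(12)

def u(theta, alpha=0.05):
    """Toy second-stage estimator: is the mean in-set LFC non-zero?
    (A stand-in for GSEA; any estimator of the form u(θ) works here.)"""
    t, p = stats.ttest_1samp(theta[S], popmean=0.0)
    return int(np.sign(t)) if p <= alpha else 0

# LFC Sensitivity Analysis: recompute ϕ_S = u(θ̂ + 1·ϵ⊥) over a grid of
# plausible scale-assumption errors ϵ⊥ (Eq 8).
eps_grid = np.linspace(-2.0, 2.0, 41)
phi = {eps: u(theta_hat + eps) for eps in eps_grid}

# Conclusions that flip across the grid are sensitive to scale assumptions.
print(sorted(set(phi.values())))
```

In this simulation the conclusion changes across the grid, so the estimate ϕ̂_S = u(θ̂) would be sensitive to errors in the scale assumption; an entity set whose conclusion is constant over the grid is robust to those errors.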
LFC Sensitivity Analysis has a number of appealing properties. To perform LFC Sensitivity Analysis, we only need to know the applied LFC estimates θ̂, not the true value of the DSA target estimand ϕ_S or the true value of the LFCs θ. Thus this method can be applied to both simulated and real data.
Another appealing quality of LFC Sensitivity Analysis is its interpretability. Consider that θ̂⊥ = mean_{n:x_n=1}(log Ŵ⊥_n) − mean_{n:x_n=0}(log Ŵ⊥_n) (Eq 3). This can be rewritten as

θ̂⊥ = log[ G({Ŵ⊥_n : x_n = 1}) / G({Ŵ⊥_n : x_n = 0}) ],   (9)

where G denotes the geometric mean. It follows that ϵ⊥ can be interpreted as the error in the assumed log-fold-change of scales. For example, an error of ϵ⊥ = 1 can be read as a statement: the ratio of the mean scale in the case condition compared to the control condition is e^1 ≈ 2.7 times higher than assumed. Moreover, this interpretation does not depend on the chosen notion of amount: any units attached to a researcher's chosen notion of amount (e.g., cells per mL) cancel in the ratio.

LFC Sensitivity Analysis can also be used to create a hypothesis test for DSA that is robust to errors in scale assumptions. Suppose there is an entity set S such that for all ϵ⊥ ∈ (−∞, ∞) the target estimand ϕ_S is always either 1 or −1. Intuitively, conclusions about this entity set are invariant to errors in scale assumptions. We can turn this intuition into a robust hypothesis test for DSA as follows. Let p_{ϵ⊥} denote a p-value corresponding to a test of the null hypothesis ϕ_S = 0, calculated from a chosen applied DSA estimator when applied to LFCs θ = θ̂ + 1ϵ⊥. We can calculate a new p-value summarizing over ϵ⊥ as

p = max_{ϵ⊥ ∈ (−∞, ∞)} p_{ϵ⊥}.   (10)

This new p-value implicitly defines a test which we call the LFC Sensitivity Test. Prior work on Type-I error control in the presence of nuisance parameters (e.g., [26]) establishes that the LFC Sensitivity Test can control Type-I error in spite of errors ϵ⊥ in scale assumptions. Of course, the cost of such rigorous Type-I error control is the potential for low statistical power (a low probability of detecting non-zero ϕ_S). Remarkably, in the section LFC Sensitivity Analysis and Testing of Real Data, we show that the LFC Sensitivity Test displays non-zero power in practice.
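A minimal sketch of this maximization follows, again with a toy t-test standing in for the DSA p-value and with the maximum in Eq (10) approximated over a finite grid of ϵ⊥ values. The grid, estimator, and data are our own hypothetical illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Hypothetical estimated LFCs for D entities with entity set S.
D = 60
theta_hat = rng.normal(size=D)
S = np.arange(12)

def p_eps(eps):
    """p-value of a toy test of H0: ϕ_S = 0, applied to θ = θ̂ + 1·ϵ⊥
    (a simple stand-in for a GSEA permutation p-value)."""
    return stats.ttest_1samp(theta_hat[S] + eps, 0.0).pvalue

# LFC Sensitivity Test (Eq 10): maximize the p-value over errors ϵ⊥,
# here approximated on a finite grid.
eps_grid = np.linspace(-2.0, 2.0, 81)
p_max = max(p_eps(e) for e in eps_grid)
print(p_max)
```

For this toy estimator there is always an ϵ⊥ that makes the shifted in-set LFCs look null, so the maximized p-value is large and the set is never called significant; this illustrates the conservativeness (low power) discussed above. Second stages such as GSEA, whose statistic is not a simple function of the shifted in-set mean, can retain significance across the entire range of ϵ⊥.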

The GSEA-LFC target estimand is typically scale reliant
We focus our study of DSA on GSEA applied to estimated LFCs due to the popularity of this approach. The GSEA algorithm is visualized in Fig 1. There are two common variations on GSEA that differ in the permutation scheme used to test the significance of the estimated ϕ̂_S statistic. The first permutes the entity labels. The second permutes the sample labels. Assuming adequate sample size, sample-label permutations are generally preferred, as they ensure that the null model accounts for inter-entity correlations [24,27,28]. Coupled with these two variants on GSEA, we can define two target estimands, each based on applying GSEA to the true LFCs. We denote the estimand formed from GSEA with entity-label permutations as GSEA-LFC and the estimand formed from GSEA with sample-label permutations as GSEA-LFC-S. In the main text we focus on GSEA-LFC, as sensitivity analyses with GSEA-LFC-S are more complex yet show similar results. Still, we present a form of sensitivity analysis for GSEA-LFC-S, as well as other DSA methods such as CAMERA [24], in Section A in S1 Text.

[Fig 1. (A) The LFCs of 60 entities were simulated from a standard normal distribution and rank ordered (black); 12 of these entities were randomly selected to be in the entity set. For this simulation, the CLR assumption equated to a value of θ̂⊥ = −0.1. Blue and orange points represent the true LFCs if the assumed value of θ̂⊥ had error ϵ⊥: θ⊥ = θ̂⊥ + ϵ⊥. (B) For each level of error ϵ⊥ depicted in Panel A, the GSEA running sum and the corresponding Enrichment Scores (ES) are shown. The dotted lines represent the GSEA enrichment score, the maximum distance from zero of the running sum. Also shown are p-values and ϕ_S values, although these depend on permutation tests which are not depicted visually. ϕ_S is the GSEA-LFC estimand: ±1 when the entity set is significantly enriched or depleted (blue line; here p ≤ 0.05 is used for significance) and 0 when the entity set is not enriched (black and orange lines). (C) 10,000 entity sets of size 12 were simulated using the procedure as in (A); remainder of caption truncated in source.]
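To make the procedure concrete, the following sketch implements a simplified preranked, weighted running-sum enrichment score with an entity-label permutation test, in the spirit of the GSEA-LFC applied estimator. This is a minimal illustration, not the reference GSEA implementation; the function names and defaults are our own:

```python
import numpy as np

rng = np.random.default_rng(4)

def enrichment_score(theta, in_set, weight_exp=1.0):
    """Weighted KS-style running-sum enrichment score (simplified sketch)."""
    order = np.argsort(theta)[::-1]              # rank entities by LFC, descending
    hits = np.isin(order, in_set)                # positions of set members
    w = np.abs(theta[order]) ** weight_exp
    step_hit = np.where(hits, w / w[hits].sum(), 0.0)
    step_miss = np.where(~hits, 1.0 / (~hits).sum(), 0.0)
    running = np.cumsum(step_hit - step_miss)
    return running[np.argmax(np.abs(running))]   # signed maximum deviation

def permutation_p(theta, in_set, n_perm=999):
    """Entity-label permutation test (the GSEA-LFC variant discussed above)."""
    obs = enrichment_score(theta, in_set)
    null = np.array([enrichment_score(theta,
                                      rng.choice(len(theta), len(in_set),
                                                 replace=False))
                     for _ in range(n_perm)])
    return (1 + np.sum(np.abs(null) >= abs(obs))) / (1 + n_perm)

# Simulated LFCs as in Fig 1: 60 entities with a randomly chosen set of 12.
theta_hat = rng.normal(size=60)
in_set = rng.choice(60, size=12, replace=False)
print(permutation_p(theta_hat, in_set))
```

Sensitivity to scale assumptions can then be probed by recomputing `permutation_p(theta_hat + eps, in_set)` over a grid of ϵ⊥ values: the shift leaves the ranking unchanged but alters the |θ| weights, and hence the enrichment score and p-value.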
We used LFC Sensitivity Analysis to demonstrate that, for many entity sets, the GSEA-LFC target estimand is scale reliant. An example is depicted visually in Fig 1, which shows that different errors ϵ⊥ in the assumed scale θ̂⊥ lead to different values of the target estimand ϕ_S. That is, for the depicted entity set, knowledge of the system composition alone is insufficient to uniquely determine the value of ϕ_S.
To confirm that these results were not unique to the simulated entity set shown in Fig 1A and 1B, we repeated this analysis with 10,000 simulated entity sets and summarized the results in terms of the Positive Predictive Value (PPV) of the applied GSEA-LFC estimator under various levels of error ϵ⊥. By design, the PPV equals 100% when the assumed value θ̂⊥ is equal to the true value θ⊥ (when ϵ⊥ = 0) but may decrease when |ϵ⊥| > 0. Fig 1C summarizes those results and shows that the PPV can decrease to below 50% with errors on the order of ±0.5. In words, when the average difference in scales between the two conditions is 1.65 times larger (e^0.5) or 0.6 times as large (e^−0.5) as assumed, more than half of the entity sets identified as significantly enriched or depleted are false positives. At ϵ⊥ = 1 the PPV drops to just 9%.
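The structure of such a PPV simulation can be sketched as follows, with a toy t-test again standing in for the GSEA-LFC second stage. The set sizes mirror Fig 1, but the estimator and all numbers are our own simplified illustration, so the resulting PPVs will not match the figure exactly:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

def toy_dsa(theta, S, alpha=0.05):
    """Toy stand-in for a DSA call: sign of a significant in-set mean LFC."""
    t, p = stats.ttest_1samp(theta[S], 0.0)
    return int(np.sign(t)) if p <= alpha else 0

def ppv(eps, n_sets=2000, D=60, k=12):
    """PPV of the toy DSA when the assumed scale is off by ϵ⊥: the analysis
    sees θ̂ = θ − 1·ϵ⊥, while truth is judged against the true θ."""
    tp = fp = 0
    for _ in range(n_sets):
        theta = rng.normal(size=D)            # true LFCs
        S = rng.choice(D, k, replace=False)   # random entity set
        call = toy_dsa(theta - eps, S)        # call under erroneous scale
        if call != 0:
            if call == toy_dsa(theta, S):     # matches the truth?
                tp += 1
            else:
                fp += 1
    return tp / (tp + fp) if tp + fp else float("nan")

print(ppv(0.0), ppv(1.0))
```

By construction the PPV is 100% at ϵ⊥ = 0 (call and truth are computed identically) and degrades as |ϵ⊥| grows, mirroring the design of the simulation summarized in Fig 1C.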
Finally, in Section C in S1 Text, we expand upon these simulation studies and show how the PPV of a GSEA-LFC applied estimator varies with different LFC distributions, entity set sizes, and total numbers of entities. Of these factors, asymmetric LFC distributions (having more entities increase than decrease, or vice versa) led to the most striking drops in PPV (to just 0.2%) with only slight error in the assumed scale (ϵ⊥ = ±0.6). This result reinforces recent work showing the dramatic impact of compositional asymmetry on the fidelity of differential analysis [29].

LFC sensitivity analysis and testing of real data
We analyzed two previously published studies that used GSEA-LFC applied estimators to perform DSA. The first compared gene pathway expression in healthy versus normal-adjacent-to-tumor thyroid tissue [4]. The second compared the abundance of different microbe sets in the oral microbiota of smokers versus non-smokers [30]. We leave discussion of the microbiome study to Section D in S1 Text, as it resembles the thyroid tissue analysis and simply demonstrates that our conclusions hold beyond gene expression analysis. For both analyses, prior literature was used to determine upper and lower bounds for biologically plausible levels of error (ϵ⊥) in scale assumptions (see Methods).
The sensitivity of GSEA-LFC to errors in scale assumptions varied substantially over the 50 hallmark gene sets analyzed in Aran et al. [4] (Fig 2). The KRAS Signaling Down pathway was largely insensitive to errors in scale assumptions (it was significant over all ϵ⊥ tested). Other sets, such as the MYC Targets V2 gene set, were highly sensitive (significant only over a narrow range of ϵ⊥). With an error as small as ϵ⊥ = −0.05, multiple gene sets identified as enriched would no longer be significant (e.g., the columns Inflammatory Response to DNA Repair). To interpret the magnitude of this error, consider that for this dataset the CLR assumption equates to θ̂⊥ = −0.3, implying that if the mean scale in the normal-adjacent-to-tumor tissue is less than or equal to exp(−0.3 − 0.05) ≈ 0.70 times the mean scale in the healthy tissue, then multiple significant gene sets are false positives.
We then applied the LFC Sensitivity Test to the same dataset. As expected, we found that the LFC Sensitivity Test has low yet non-zero power. Zero significant gene sets were identified when the LFC Sensitivity Test was applied to the 50 hallmark gene sets studied in Aran et al. [4]. However, expanding to a larger group of 4441 gene sets (see Methods) identified 96 gene sets as significantly enriched. In Section E in S1 Text we present a full power analysis which suggests that the power of the LFC Sensitivity Test ranges from 6% to 13.5% in the context of these data.

GSEA with compositional weighting
Under the SRI framework, the distinction between whether a problem is scale reliant or scale invariant depends on the scientific goal (the target estimand). Until now, we have only considered target estimands defined by replacing estimated LFCs with true LFCs. For example, we defined the GSEA-LFC target estimand as ϕ_S = u(θ) by replacing the estimates θ̂ with their true values θ in the GSEA-LFC applied estimator ϕ̂_S = u(θ̂). This approach allowed us to study how errors in scale assumptions, used in the estimation of θ, propagate into the estimation of ϕ_S. In this section, we take a different approach and assume that the applied estimator a researcher uses is tautologically consistent with their research goals. For a researcher using the GSEA-LFC applied estimator, we assume that the target estimand is ϕ̂_S = u(θ̂). In this case, there are no scale assumptions, as there is no discrepancy between the methods applied and the goals of the analysis. Moreover, without scale assumptions, there is no need for sensitivity analysis. Instead, this section studies the research goals implied by this target estimand. This leads us to a deeper understanding of when researchers should be concerned about potential errors in scale assumptions.

[Fig 2. In the context of real data, GSEA-LFC applied estimators can demonstrate substantial sensitivity to errors in scale assumptions. LFC Sensitivity Analysis was used to replicate the results of Aran et al. [4], which used GSEA-LFC to compare differential pathway enrichment between normal-adjacent-to-tumor and healthy thyroid tissue. We explored sensitivity to errors in the CLR assumption which, for this study, equates to θ̂⊥ = −0.3. For an error ϵ⊥ (Y-axis), the implied true log-fold-change of scales is given by θ⊥ = θ̂⊥ + ϵ⊥. Higher (or lower) values of ϵ⊥ correspond to a higher (or lower) scale in normal-adjacent-to-tumor tissue compared to healthy tissue than assumed. The range ϵ⊥ ∈ [−0.6, 1.2] was informed by prior research on how much scales can vary between conditions in similar experiments; it is asymmetric to account for the CLR assumption θ̂⊥ = −0.3 (see Methods for full details). Higher values (red) of the NES correspond to more enrichment in normal-adjacent-to-tumor tissue, and lower values (blue) to more enrichment in healthy tissue. https://doi.org/10.1371/journal.pcbi.1011659.g002]
To better express our meaning under this new approach, we modify the notation used in the prior sections. Rather than using θ̂ to denote estimated LFCs, where the hat emphasized that this was an estimate, we now use the notation ϑ = θ̂. The lack of a hat emphasizes that this is not simply an estimate or approximation but a quantity of direct scientific interest that is tautologically free of potential error. For the same reason, we replace ϕ̂_S with φ_S.
For a researcher using an applied estimator of the form ϕ̂_S = u(θ̂), we now have a target estimand of the form φ_S = u(θ̂) = u(ϑ). This notation also emphasizes that this target estimand is not the same as the GSEA-LFC target estimand, ϕ_S = u(θ), built from the LFCs in Eq (2). For concreteness, we again focus on the common use of the CLR assumption in LFC estimation: let ϑ denote the LFCs of CLR transformed abundances (Eq 4). We call φ_S the GSEA-CLR target estimand. How do we interpret the scientific goal represented by this estimand? In Section F in S1 Text, we demonstrate that the LFC of CLR transformed abundances (ϑ_d) is related to the LFC of the d-th entity (θ_d) by the equation

ϑ_d = θ_d − (1/D) Σ_{d'=1}^{D} θ_{d'}.   (11)

In words, researchers purposefully choosing to analyze LFCs of CLR transformed abundances (ϑ_d), as opposed to LFCs of actual abundances (θ_d), do not care whether an entity is truly increased or decreased between conditions, only whether the entity is increased or decreased relative to the average change of all the entities. This distinction is shown visually in Fig 3. Extending this intuition to GSEA applied to ϑ, researchers purposefully using this approach are only concerned with whether the entities of a set S are non-randomly increased or decreased between conditions relative to the average change of all the entities. For example, such researchers would still be interested in an entity set that is not actually enriched (or depleted) between conditions, but is only relatively enriched (or relatively depleted) when compared to the entities not in the set. Two examples will solidify this intuition and illustrate the practical implications of understanding this distinction. Consider two researchers, Researcher A and Researcher B.
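Eq (11) is straightforward to verify numerically. The sketch below (our own illustration on simulated data) confirms that CLR-based LFCs equal the true LFCs re-centered by their average:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical scaled system; check ϑ_d = θ_d − (1/D) Σ_d' θ_d' (Eq 11).
D, N = 6, 10
W = rng.lognormal(1.0, 0.5, size=(D, N))
x = np.repeat([0, 1], N // 2)

def geo_mean(M):
    """Per-sample geometric mean across entities: G(W·n)."""
    return np.exp(np.log(M).mean(axis=0))

def lfc(M):
    return np.log(M[:, x == 1]).mean(axis=1) - np.log(M[:, x == 0]).mean(axis=1)

theta = lfc(W)                      # true LFCs θ_d
vartheta = lfc(W / geo_mean(W))     # LFCs of CLR-transformed abundances ϑ_d

# CLR LFCs measure change relative to the average change of all entities.
assert np.allclose(vartheta, theta - theta.mean())
```

This makes the interpretive point concrete: an entity whose true LFC equals the average LFC of all entities has ϑ_d = 0, regardless of whether its abundance actually increased or decreased.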
Researcher A wants to identify whether a particular genomic trait confers a selective advantage in a mixed microbial community exposed to an antibiotic. Researcher A does not care whether the trait actually stimulates growth in the presence of the antibiotic, only whether microbes with that trait increase in abundance relative to microbes that lack it. This goal is more consistent with the GSEA-CLR target estimand than the GSEA-LFC target estimand. As a result, Researcher A can apply GSEA to LFCs estimated using the CLR assumption without needing to consider potential error in scale assumptions. In contrast, Researcher B wants to identify gene pathways that are differentially activated in diseased compared to healthy tissue. Researcher B wants to understand which pathways may play a causal role in the disease. Researcher B is not interested in a pathway that is unrelated to disease and is simply enriched relative to some other pathway that is repressed in disease. Researcher B's scientific goal is more consistent with the GSEA-LFC target estimand than the GSEA-CLR target estimand. As this is a scale reliant target estimand, we would recommend that Researcher B consider potential error in scale assumptions regardless of the applied estimator chosen.
Extending beyond the GSEA-CLR target estimand, Section F in S1 Text shows that it is just one of an infinite number of target estimands for DSA that are built around GSEA yet are scale invariant. We call this broader class GSEA with Compositional Weighting (GSEA-CW). Appealingly, each target estimand in this class can be naturally paired with a different applied estimator, which can in turn be computed from observed data. In Section F in S1 Text, we provide further characterization of this class to help researchers understand both the types of scientific goals that are invariant to erroneous scale assumptions and how to identify appropriate analytical tools given such goals. In Section G in S1 Text, we illustrate some of the advantages of this class over an alternative scale invariant approach to DSA recently proposed by Nguyen et al. [31].

Discussion
Differential Set Analysis (DSA) is a core analysis performed in modern biomedical research [32]; it is used to identify sets of entities that are differentially enriched or depleted between two experimental conditions. Although the majority of our presentation focused on the analysis of gene expression data using GSEA applied to estimated LFCs, we also showed that these same problems arise when studying 16S rRNA microbiome data or when investigating other popular methods such as CAMERA (Section A in S1 Text). In all these cases, we arrived at the same three conclusions: (1) for common scientific goals, even slight errors in scale assumptions can lead to false positives in DSA; (2) sensitivity analyses enable researchers to identify those gene and microbe sets whose significance is highly sensitive to errors in scale assumptions; and (3) the sensitivity of DSA to errors in scale assumptions depends on the goal of the study.
Our results suggest caution is warranted when performing DSA on sequence count data, as we find that results can be highly sensitive to even slight errors in scale assumptions. For example, in our reanalysis of Aran et al. [4], we found that the Apoptosis and Interferon Alpha Response gene sets, which those authors reported as enriched (in thyroid normal-adjacent-to-tumor tissue), were no longer significant with a scale assumption error as small as ε = −0.05. To be clear, we are not arguing that those conclusions are wrong: we do not know the ground-truth LFCs. We have only studied the sensitivity of those conclusions to errors in modeling assumptions. Still, we find such sensitivity concerning and suggest that such conclusions be viewed skeptically.
The concept of target estimands highlights how the scientific goals of a study dictate the impact of erroneous scale assumptions. In this article we have discussed a number of target estimands, such as GSEA-LFC and GSEA-CW, as well as a CAMERA-based estimand in Section A in S1 Text. Still, we expect that some researchers perform DSA for reasons not represented in the estimands we have studied. For example, researchers may perform GSEA on LFCs but be interested in population-level LFC estimates, where the mean in Eq (2) is replaced by an expectation with respect to some population-level model. We believe the study of DSA from the perspective of different target estimands represents a prime area for future research.
This article has focused on DSA performed using sequence count data. Yet there has been increasing interest in combating scale limitations by combining sequence count data with secondary measurements designed to directly measure the system scale. For example, in the study of human microbiota, some researchers use qPCR or flow cytometry to measure the total 16S rRNA concentration or the total cellular concentration in a sample [17]. While we believe the field's increasing interest in these types of measurements necessitates more careful study, we suspect these measurements may be more limited than often discussed. Putting aside issues of the accuracy and precision of such measurements [14], we expect that the scale measured by these technologies may not accord with a researcher's desired notion of amount. Many researchers use qPCR to measure the total concentration of 16S rRNA in fecal material; yet, in the context of DSA, how often are researchers interested in identifying whether the concentration of 16S rRNA from microbes in fecal material is enriched or depleted? Even if this were a concentration measurement from the colon (rather than stool), we expect at least some researchers would want to define enriched or depleted sets with regard to alternative notions of amount (different definitions of scale). As a result, we expect that these technologies will not solve the problem of scale limitations for all researchers. Nor do we claim that our methods are so general as to solve the problem of scale limitations. Still, we note that our methods are not tied to a single notion of scale and may therefore have some advantages over these external measurement-based approaches.

Preprocessing and differential analysis of thyroid tissue RNA-seq data
Following Aran et al. [4], we downloaded pre-processed read count data for healthy and normal-adjacent-to-tumor tissue from the Genotype-Tissue Expression project (GTEx) and The Cancer Genome Atlas (TCGA) via NCBI's Gene Expression Omnibus (GEO) under accession numbers GSE86354 and GSE62944, respectively [33,34]. The downloaded read count data were further processed to use the same 16,038 genes, 361 healthy thyroid samples, and 59 normal-adjacent-to-tumor thyroid samples used by Aran et al. [4]. LFCs were estimated using the Songbird multinomial logistic-normal regression model (version 1.0.3) [15] with an intercept term and a binary condition indicator (healthy versus normal-adjacent-to-tumor). The model was trained with default parameters and validated by ensuring that cross-validation error plateaued over training epochs.
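Songbird fits a full multinomial logistic-normal regression; as a much simpler, purely illustrative sketch of what LFC estimation under the CLR assumption looks like (this is not the Songbird model, and the Poisson counts below are hypothetical stand-ins for the real data), one could difference CLR-transformed group means:

```python
import numpy as np

def clr_lfcs(counts_a, counts_b, pseudocount=0.5):
    """Toy CLR-based LFC estimate between two conditions.

    counts_a, counts_b: (samples x genes) count matrices.
    A plug-in sketch illustrating the CLR assumption only; not the
    Songbird multinomial logistic-normal model used in the paper.
    """
    def clr(mat):
        logm = np.log(mat + pseudocount)
        return logm - logm.mean(axis=1, keepdims=True)  # center each sample
    return clr(counts_b).mean(axis=0) - clr(counts_a).mean(axis=0)

rng = np.random.default_rng(0)
healthy = rng.poisson(20, size=(361, 100))     # hypothetical counts
tumor_adjacent = rng.poisson(25, size=(59, 100))
lfcs = clr_lfcs(healthy, tumor_adjacent)       # one LFC per gene
```

By construction each CLR-transformed sample sums to zero across genes, so these toy LFCs are centered; this is exactly the kind of implicit scale assumption the article examines.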

LFC sensitivity analysis and testing of thyroid data
Following Aran et al. [4], GSEA p-values were calculated using the list of 50 hallmark gene sets from the Molecular Signature Database (MSigDB, version 7.4.0) [35] and FDR corrected as described in Subramanian et al. [1]. p-values were calculated using 25,000 entity set label permutations.
The LFC Sensitivity Test was performed first on the 50 hallmark gene sets described above, and then on the MSigDB C2 (version 7.4.0) curated list of gene sets [1,37]. Following the default parameters of the GSEA software package [1], only gene sets containing between 15 and 500 genes were retained, resulting in 4,441 candidate gene sets. p-values for the LFC Sensitivity Test were calculated over the range ε ∈ [−200, 200] with a grid size of 1, except in the range ε ∈ [−10, 10], where a grid size of 0.1 was used for better resolution. In all cases a significance threshold of p < 0.05 was used.
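As a rough sketch of how such an error grid might be constructed and swept (with a toy set statistic standing in for the full LFC Sensitivity Test; everything beyond the grid spacing described above is an assumption for illustration):

```python
import numpy as np

def epsilon_grid(lim=200.0, coarse=1.0, fine_lim=10.0, fine=0.1):
    """Grid of scale-assumption errors: step 1 over [-200, 200],
    refined to step 0.1 over [-10, 10], matching the text."""
    left = np.arange(-lim, -fine_lim, coarse)                 # [-200, -11]
    middle = np.arange(-fine_lim, fine_lim + fine / 2, fine)  # [-10, 10]
    right = np.arange(fine_lim + coarse, lim + coarse / 2, coarse)  # [11, 200]
    return np.concatenate([left, middle, right])

def sensitivity_curve(lfcs, in_set, stat):
    """Recompute a set-level statistic after shifting all LFCs by each
    grid value; `stat` is a stand-in for the real test statistic."""
    eps = epsilon_grid()
    return eps, np.array([stat(lfcs + e, in_set) for e in eps])

# toy statistic: mean LFC of set members (sensitive to a uniform shift)
rng = np.random.default_rng(1)
lfcs = rng.normal(size=200)
in_set = np.zeros(200, dtype=bool)
in_set[:20] = True
eps, curve = sensitivity_curve(lfcs, in_set, lambda l, s: l[s].mean())
```

A set-versus-background difference in means would be invariant to a uniform shift, which is why the toy statistic here is a one-sample mean; the real test operates on the GSEA machinery, where a shift changes signs and weights.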

Simulated LFC sensitivity analysis using the positive predictive value
The simulated LFC Sensitivity Analysis presented in this work was summarized using the Positive Predictive Value (PPV). For each value of ε ∈ [−2, 2] considered, the PPV was calculated as the percentage of entity sets for which ϕ̂_S = ϕ_S(ε) among those with ϕ̂_S ≠ 0, where ϕ̂_S is the DSA estimate under the CLR assumption (i.e., when ε = 0) and ϕ_S(ε) denotes the true value of ϕ_S when the error in the LFC estimates is equal to ε. It follows that, by definition, the PPV is equal to 100% when ε = 0.
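This PPV calculation can be written directly; a minimal sketch of the definition above:

```python
import numpy as np

def ppv(phi_hat, phi_true):
    """Positive Predictive Value as defined above: among entity sets
    called significant under the CLR assumption (phi_hat != 0), the
    fraction whose call matches the truth under error epsilon."""
    phi_hat = np.asarray(phi_hat)
    phi_true = np.asarray(phi_true)
    called = phi_hat != 0
    if not called.any():
        return float("nan")  # no positive calls to evaluate
    return float(np.mean(phi_hat[called] == phi_true[called]))

# toy example: 4 significant calls, 3 of which agree with the truth
print(ppv([1, -1, 0, 1, 1], [1, -1, 0, -1, 1]))  # prints 0.75
```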
An overview of GSEA is shown in Fig 1, and a formal definition is provided in Section B in S1 Text. In brief, GSEA is an algorithm that first orders entities by a ranking statistic (here LFCs; Fig 1A), then measures the degree of non-random clustering of set members in the ranked list through an enrichment score, calculated as the maximum deviation from zero of a weighted running sum (Fig 1B), and finally performs a permutation test to determine whether that enrichment score is unusually large or small. The output of GSEA can be summarized as the quantity ϕ_S introduced above.
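The weighted running sum can be sketched as follows; this is a simplified version of the Subramanian et al. enrichment score, and the permutation test that converts the score into ϕ_S is omitted:

```python
import numpy as np

def enrichment_score(lfcs, in_set, p=1.0):
    """Signed maximum deviation of the weighted GSEA running sum.

    lfcs: ranking statistic per entity; in_set: boolean set membership.
    Simplified sketch only; the permutation test is not implemented.
    """
    order = np.argsort(lfcs)[::-1]                 # rank entities, descending
    hits = np.asarray(in_set)[order]
    weights = np.abs(np.asarray(lfcs))[order] ** p
    w_hit = np.where(hits, weights, 0.0)
    p_hit = np.cumsum(w_hit) / w_hit.sum()         # weighted CDF of members
    p_miss = np.cumsum(~hits) / (~hits).sum()      # CDF of non-members
    running = p_hit - p_miss
    return running[np.argmax(np.abs(running))]

# set members carry the largest LFCs, so the score is strongly positive
lfcs = np.array([3.0, 2.0, 1.0, -1.0, -2.0, -3.0])
in_set = np.array([True, True, False, False, False, False])
print(enrichment_score(lfcs, in_set))  # prints 1.0
```

Because the weights depend on |LFC|, a uniform shift of all LFCs (an error ε in the scale assumption) changes the score even though it leaves the ranking unchanged.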

Fig 1.
Fig 1. The GSEA-LFC target estimand is often scale reliant. (A) The LFCs of 60 entities were simulated from a standard normal distribution and rank ordered (black); 12 of these entities were randomly selected to be in the entity set. For this simulation, the CLR assumption equated to a value of θ̂^⊥ = −0.1. Blue and orange points represent the true LFCs if the assumed value of θ̂^⊥ had error ε: θ^⊥ = θ̂^⊥ + ε. (B) For each level of error ε depicted in Panel A, we show the GSEA running sum and the corresponding Enrichment Scores (ES). The dotted lines represent the GSEA enrichment score, which is the maximum distance from zero of the running sum. Also shown are p-values and ϕ_S values, although these depend on permutation tests that are not depicted visually. ϕ_S is the GSEA-LFC estimand. ϕ_S is ±1 when the entity set is significantly enriched or depleted (blue line; here p ≤ 0.05 is used for significance) and 0 when the entity set is not enriched (black and orange lines). (C) 10,000 entity sets of size 12 were simulated using the same procedure as in Panel A to demonstrate how the Positive Predictive Value (PPV) of GSEA-LFC can vary with ε. The dashed lines indicate the same values of ε shown in Panels A and B. https://doi.org/10.1371/journal.pcbi.1011659.g001

Fig 2.
Fig 2. In the context of real data, GSEA-LFC applied estimators can demonstrate substantial sensitivity to errors in scale assumptions. LFC Sensitivity Analysis was used to replicate the results of Aran et al. [4], which used GSEA-LFC to compare differential pathway enrichment between normal-adjacent-to-tumor and healthy thyroid tissue. We explored sensitivity to errors in the CLR assumption, which, for this study, equates to θ̂^⊥ = −0.3. For an error ε (Y-axis), the implied true log-fold-change of scales is given by θ^⊥ = θ̂^⊥ + ε. Higher (or lower) values of ε correspond to a higher (or lower) scale in normal-adjacent-to-tumor compared to healthy tissue than assumed. The range ε ∈ [−0.6, 1.2] was informed by prior research on how much scales can vary between conditions in similar experiments; it is asymmetric to account for the CLR assumption θ̂^⊥ = −0.3 (see Methods for full details). Higher values (red) of the NES correspond to more enrichment in normal-adjacent-to-tumor tissue, and lower values (blue) to more enrichment in healthy tissue.

Fig 3.
Fig 3. A visual depiction of the difference between Log Fold Changes (LFCs), θ, defined by Eq (2), and LFCs of CLR-transformed amounts, ϑ, defined by Eq (4). The X's and O's are plotted on a number line and represent the LFCs of eight entities in and not in some set of interest. The index 1 refers to the leftmost X in the plot. mean(θ_1, ..., θ_D) is the mean distance of all X's and O's from 0. In this illustration, none of the entities in the set (the X's) are strongly increased or decreased in amount between conditions: for d ∈ S, θ_d ≈ 0. Still, each of these entities has a negative ϑ_d, as their LFCs are each less than the mean LFC of all the entities. According to the GSEA-CLR target estimand, this set would therefore be depleted, whereas it is neither depleted nor enriched according to the GSEA-LFC target estimand. The equality ϑ_1 = θ_1 − mean(θ_1, ..., θ_D) is derived in Section F in S1 Text. https://doi.org/10.1371/journal.pcbi.1011659.g003
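The relationship between θ and ϑ can be checked numerically; a small sketch using arbitrary simulated amounts (the gamma draws are purely illustrative):

```python
import numpy as np

def clr(x):
    """Centered log-ratio transform of a vector of positive amounts."""
    return np.log(x) - np.log(x).mean()

rng = np.random.default_rng(3)
w1 = rng.gamma(2.0, size=8)        # amounts of 8 entities, condition 1
w2 = rng.gamma(2.0, size=8)        # amounts of 8 entities, condition 2
theta = np.log(w2) - np.log(w1)    # LFCs of amounts (theta, as in Eq 2)
vartheta = clr(w2) - clr(w1)       # LFCs of CLR-transformed amounts (Eq 4)
# identity derived in Section F in S1 Text: vartheta_d = theta_d - mean(theta)
assert np.allclose(vartheta, theta - theta.mean())
```

The centering step is what makes the GSEA-CLR estimand relative: a set whose raw LFCs are all near zero can still be "depleted" in ϑ simply because other entities increased.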

Table 1. Summary of mathematical notation.

Symbol | Definition
d      | Entity (e.g., microbe or gene) index; d = 1, ..., D.
n      | Sample index; n = 1, ..., N.
W_dn   | An element of the D × N matrix W representing the true amount of entity d in the system from which sample n was obtained.
Y_dn   | An element of the D × N matrix Y representing the number of DNA sequences observed from entity d in sample n.