Efficient and accurate causal inference with hidden confounders from genome-transcriptome variation data

Mapping gene expression as a quantitative trait using whole-genome sequencing and transcriptome analysis allows discovery of the functional consequences of genetic variation. We developed a novel method, implemented in the ultra-fast software Findr, for highly accurate causal inference between gene expression traits using cis-regulatory DNA variations as causal anchors, which improves on current methods by taking into consideration hidden confounders and weak regulations. Findr outperformed existing methods on the DREAM5 Systems Genetics challenge and on the prediction of microRNA and transcription factor targets in human lymphoblastoid cells, while being nearly a million times faster. Findr is publicly available at https://github.com/lingfeiwang/findr.


Introduction
Genetic variation in non-coding genomic regions, including at loci associated with complex traits and diseases identified by genome-wide association studies, predominantly plays a gene-regulatory role 1 . Whole genome and transcriptome analysis of natural populations has therefore become a common practice to understand how genetic variation leads to variation in phenotypes 2 . The number and size of studies mapping genome and transcriptome variation has surged in recent years due to the advent of high-throughput sequencing technologies, and ever more expansive catalogues of expression-associated DNA variants, termed expression quantitative trait loci (eQTLs), are being mapped in humans, model organisms, crops and other species 1,3-5 . Unravelling the causal hierarchies between DNA variants and their associated genes and phenotypes is now the key challenge to enable the discovery of novel molecular mechanisms, disease biomarkers or candidate drug targets from this type of data 6,7 .
It is believed that genetic variation can be used to infer the causal directions of regulation between coexpressed genes, based on the principle that genetic variation causes variation in nearby gene expression and acts as a causal anchor for identifying downstream genes 8,9 .
Although numerous statistical models have been proposed for causal inference with genotype and gene expression data from matching samples 10,12,14,55,64,65 , no software implementation in the public domain is efficient enough to handle the volume of contemporary datasets, hindering any attempts to evaluate their performance. Moreover, existing statistical models rely on a conditional independence test which assumes that no hidden confounding factors affect the coexpression of causally related gene pairs. However, gene regulatory networks are known to exhibit redundancy 16 and are organized into higher order network motifs 17 , suggesting that confounding of causal relations by known or unknown common upstream regulators is the rule rather than the exception. Moreover, it is also known that the conditional independence test is susceptible to variations in relative measurement errors between genes 8,9 , an inherent feature of both microarray and RNA-seq based expression data 18 .
To investigate and address these issues, we developed Findr (Fast Inference of Networks from Directed Regulations), an ultra-fast software package that incorporates existing and novel statistical causal inference tests. The novel tests were designed to take into account the presence of unknown confounding effects, and were evaluated systematically against multiple existing methods using both simulated and real data. The null distributions of all six likelihood ratio tests implemented in Findr are derived analytically (Section 4.4, Figure S1). This, together with efficient programming, resulted in a dramatic speedup compared to the standard, computationally expensive approach of generating random permutations. The six posterior probabilities are then combined into the traditional causal inference test, our new causal inference test, and separately a correlation test that does not incorporate genotype information (Section 4.6). Each of these tests verifies whether the data arose from a specific subset of (E, A, B) relations (Table 1).

Traditional causal inference fails in the presence of hidden confounders and weak regulations
Findr's computational speed allowed us to systematically evaluate traditional causal inference methods for the first time. We obtained five datasets with 999 samples simulated from synthetic gene regulatory networks of 1,000 genes with known genetic architecture from the DREAM5 Systems Genetics Challenge (Section 4.1), and subsampled each dataset to observe how performance depends on sample size (Section 4.7). The correlation test (P 0 ) does not incorporate genotype information and was used as a benchmark for performance evaluations in terms of areas under the receiver operating characteristic (AUROC) and precision-recall (AUPR) curves (Section 4.7). The traditional method 55 combines the secondary (P 2 ) and independence (P 3 ) tests sequentially ( Table 1, Section 4.6), and was evaluated by comparing P 2 and P 2 P 3 separately against the correlation test. Both the secondary test alone and the traditional causal inference test combination were found to underperform the correlation test ( Figure 1A,B). Moreover, the inclusion of the conditional independence test worsened inference accuracy, more so with increasing sample size ( Figure 1A,B) and increasing number of regulations per gene (Supplementary Material S2.3). Similar performance drops were also observed for the Causal Inference Test (CIT) 64,65 software, which also is based on the conditional independence test ( Figure S3).
We believe that the failure of traditional causal inference is due to an elevated false negative rate (FNR) coming from two sources. First, the secondary test is less powerful in identifying weak interactions than the correlation test. In a true regulation E → A → B, the secondary linkage (E → B) is the result of two direct linkages chained together, and is harder to detect than either of them. The secondary test hence picks up fewer true regulations, and consequently has a higher FNR. Second, the conditional independence test is counter-productive in the presence of hidden confounders (i.e. common upstream regulators). In such cases, even if E → A → B is genuine, the conditional independence test will find E and B to be still correlated after conditioning on A due to a collider effect ( Figure S5) 19 . Hence the conditional independence test only reports positive on E → A → B relations without confounder, further raising the FNR. This is supported by the observation of worsening performance with increasing sample size (where confounding effects become more distinguishable) and increasing number of regulations per gene (which leads to more confounding).
To further support this claim, we examined the inference precision among the top predictions from the traditional test, separately for gene pairs directly unconfounded or confounded by at least one gene (Section 4.7). Compared to unconfounded gene pairs, confounded ones resulted in significantly more false positives among the top predictions ( Figure 1C). Furthermore, the vast majority of real interactions fell outside the top 1% of predictions (i.e. had small posterior probability) [92% (651/706) for confounded and 86% (609/709) for unconfounded interactions, Figure 1C]. Together, these results again showed the failure of the traditional test on confounded interactions and its high false negative rate overall.

Findr accounts for weak secondary linkage, allows for hidden confounders, and outperforms existing methods on simulated data
To overcome the limitations of traditional causal inference methods, Findr incorporates two additional tests (Table 1 and Section 4.3). The relevance test (P 4 ) verifies that B is not independent from A and E simultaneously and is more sensitive for picking up weak secondary linkages than the secondary linkage test. The controlled test (P 5 ) ensures that the correlation between A and B cannot be fully explained by E, i.e. excludes pleiotropy. The same subsampling analysis revealed that P 4 performed best in terms of AUROC, and AUPR with small sample sizes, whilst the combination P 2 P 5 achieved highest AUPR for larger sample sizes ( Figure 1A,B). Most importantly, both tests consistently outperformed the correlation test (P 0 ), particularly for AUPR. This demonstrates conclusively in a comparative setting that the inclusion of genotype data indeed can improve regulatory network inference. These observations are consistent across all five DREAM datasets ( Figure S2).
We combined the advantages of P 4 and P 2 P 5 by averaging them in a composite test (P ) (Section 4.6), which outperformed P 4 and P 2 P 5 at all sample sizes ( Figure 1 and Figure S2) and hence was adopted as Findr's new test for causal inference. Findr's new test (P ) obtained consistently higher levels of local precision (i.e. one minus local FDR) on confounded and unconfounded gene pairs compared to Findr's traditional causal inference test (P T ) ( Figure 1C,D), and outperformed the traditional test (P T ), correlation test (P 0 ), CIT, and every participating method of the DREAM5 Systems Genetics Challenge (Section 4.1) in terms of AUROC and AUPR on all 15 datasets ( Figure 1E,F, Table S1, Figure S4).
Specifically, Findr's new test was able to address the inflated FNR of the traditional method due to confounded interactions. It performed almost equally well on confounded and unconfounded gene pairs and, compared to the traditional test, significantly fewer real interactions fell outside the top 1% of predictions (55% vs. 92% for confounded and 45% vs. 86% for unconfounded interactions, Figure 1D, Figure S6).

The conditional independence test incurs false negatives for unconfounded regulations due to measurement error
The traditional causal inference method based on the conditional independence test produces false negatives for confounded interactions, an effect shown to be significant for the simulated DREAM datasets. Surprisingly, however, the traditional test reported more confounded gene pairs than the new test in its top predictions (albeit with lower precision), and correspondingly fewer unconfounded gene pairs ( Figure 1C,D, Figure S6).
We hypothesized that this inconsistency originated from yet another source of false negatives, where measurement error can confuse the conditional independence test. Measurement error in an upstream variable (called A in Table 1) does not affect the expression levels of its downstream targets, and hence a more realistic model for gene regulation is E → A (t) → B with A (t) → A, where the measured quantities are E, A, and B, but the true value for A, noted A (t) , remains unknown. When the measurement error (in A (t) → A) is significant, conditioning on A instead of A (t) cannot remove all the correlation between E and B and would therefore report false negatives for unconfounded interactions as well. This effect has been previously studied, for example in epidemiology as the "spurious appearance of odds-ratio heterogeneity" 20 .
We verified our hypothesis with a simple simulation (Section 4.8). In a typical scenario with 300 samples from a monoallelic species, minor allele frequency 0.1, and a third of the total variance of B coming from A (t) , the conditional independence test reported false negatives (small likelihood ratio p-values, i.e. rejecting the null hypothesis of conditional independence, cf. Table 1) as long as measurement error contributed more than half of A's total unexplained variance ( Figure 2B). False negatives occurred at even weaker measurement errors when the sample sizes were larger or when stronger A → B regulations were assumed ( Figure S7).
This observation goes beyond the well-known problems that arise from a large measurement error in all variables, which acts like a hidden confounder 9 , or from a much larger measurement error in A than in B, which can result in B becoming a better measurement of A (t) than A itself 8 . In this simulation, the false negatives persisted even when E → A was observationally much stronger than E → B, such as when A's measurement error was only 10% (σ²_A1 = 0.1) compared to up to 67% for B ( Figure 2B). This suggests a unique and mostly neglected source of false negatives that does not affect the other tests. Indeed, the secondary, relevance, and controlled tests were much less sensitive to measurement errors (Figure 2A,C,D).
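The effect can be reproduced with a minimal simulation in the spirit of Section 4.8. Below is a sketch: the partial correlation of E and B given the measured A stands in for the conditional independence test (values far from zero reject conditional independence), and all parameter choices are our assumptions, not the exact settings of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n, maf, reps = 300, 0.1, 20  # assumed: 300 samples, monoallelic, MAF 0.1

def partial_corr(x, y, z):
    """Partial correlation of x and y given z."""
    rxy, rxz, ryz = (np.corrcoef(a, b)[0, 1] for a, b in
                     ((x, y), (x, z), (y, z)))
    return (rxy - rxz * ryz) / np.sqrt((1 - rxz**2) * (1 - ryz**2))

def mean_pc(sigma_meas):
    """Average E-B partial correlation given measured A over replicates,
    for the unconfounded chain E -> A(t) -> B with measurement noise on A."""
    vals = []
    for _ in range(reps):
        E = (rng.random(n) < maf).astype(float)
        A_true = E + rng.standard_normal(n)                # E -> A(t)
        A = A_true + sigma_meas * rng.standard_normal(n)   # A(t) -> A (measured)
        # B draws roughly a third of its variance from A(t), rest is noise
        B = A_true + np.sqrt(2 * A_true.var()) * rng.standard_normal(n)
        vals.append(partial_corr(E, B, A))
    return float(np.mean(vals))

pc_small = mean_pc(0.1)  # near-noiseless measurement of A
pc_large = mean_pc(2.0)  # strong measurement error on A
```

With near-noiseless measurement, conditioning on A removes essentially all E-B correlation; with strong measurement error the partial correlation stays clearly positive, so the conditional independence test spuriously rejects the null for a genuinely unconfounded chain.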

Findr outperforms traditional causal inference and machine learning methods on microRNA target prediction
In order to evaluate Findr on a real dataset, we performed causal inference on miRNA and mRNA sequencing data in lymphoblastoid cell lines from 360 European individuals in the Geuvadis study 3 (Section 4.1). We first tested 55 miRNAs with reported significant cis-eQTLs against 23,722 genes. Since miRNA target predictions from sequence complementarity alone result in high numbers of false positives, prediction methods based on correlating miRNA and gene expression profiles are of great interest 21 . Although miRNA target prediction using causal inference from genotype and gene expression data has been considered 22 , it remains unknown whether the inclusion of genotype data improves existing expression-based methods.
To compare Findr against the state of the art for expression-based miRNA target prediction, we used miRLAB, an integrated database of experimentally confirmed human miRNA target genes with a uniform interface to predict targets using twelve methods, including linear and non-linear, pairwise correlation and multivariate regression methods 62 . We were able to infer miRNA targets with 11/12 miRLAB methods, and also applied the GENIE3 random forest regression method 63 . Findr's new test outperformed all other methods, including the genotype-assisted methods, while also being over 500,000 times faster than CIT ( Figure 3, Table S2, Figure S8). Findr's correlation test outperformed all other methods not using genotype information, including correlation, regression, and random forest methods, and was 500 to 100,000 times faster ( Figure 3, Table S2, Figure S8). This further illustrates the power of the Bayesian gene-specific background estimation method implemented in all of Findr's tests (Section 4.5).

Findr predicts transcription factor targets with more accurate FDR estimates
We considered 3,172 genes with significant cis-eQTLs in the Geuvadis data 3 (Section 4.1) and inferred regulatory interactions to the 23,722 target genes using Findr's traditional (P T ), new (P ) and correlation (P 0 ) tests, and CIT. Ground truths of experimentally confirmed causal gene interactions in human, and in mammalian systems more generally, are of limited availability and mainly concern transcription or transcription-associated DNA-binding factors (TFs).
Here we focused on a set of 25 TFs in the set of eQTL-genes for which either differential expression data following siRNA silencing (6 TFs) or TF-binding data inferred from ChIP-sequencing and/or DNase footprinting (20 TFs) in a lymphoblastoid cell line (GM12878) was available 66 (Section 4.1). AUPRs and AUROCs did not exhibit substantial differences, other than a modest improvement over random predictions ( Figure S9). To test for enrichment of true positives among the top-ranked predictions, which would be missed by global evaluation measures such as AUPR or AUROC, we took advantage of the fact that Findr's probabilities are empirical local precision estimates for each test (Section 4.5), and assessed how the estimated local precisions of the new, traditional, and correlation tests reflected the actual precision. Findr's new test correctly reflected the precision values at various threshold levels, and was able to identify true regulations at high precision control levels ( Figure 4). However, the traditional test significantly underestimated precision due to its elevated FNR. This led to a lack of predictions at high precision thresholds but an enrichment of true regulations at low thresholds, essentially nullifying the statistical meaning of its output probability P T . On the other hand, the correlation test significantly overestimated precision because it is unable to distinguish causal, reversed causal or confounded interactions, which raises its FDR. The same results were observed when alternative ground-truth ChIP-sequencing networks were considered ( Figure S9, Figure S10).

Discussion
We developed Findr (Fast Inference of Networks from Directed Regulations), a highly efficient and scalable software package implementing novel and existing causal inference tests. Application of Findr to real and simulated genome and transcriptome variation data showed that our novel tests, which account for weak secondary linkage and hidden confounders at the potential cost of an increased number of false positives, significantly improved the prediction of known gene regulatory interactions compared to existing methods, particularly traditional methods based on conditional independence tests, which had highly elevated false negative rates.
Causal inference using eQTLs as causal anchors relies on crucial assumptions which have been discussed in-depth elsewhere 8,9 . Firstly, it is assumed that genetic variation is always causal for variation in gene expression, or quantitative traits more generally, and is independent of any observed or hidden confounding factors. Although this assumption is valid for randomly sampled individuals, caution is required when this is not the case (e.g. case-control studies).
Secondly, measurement error is assumed to be independent and comparable across variables.
Correlated measurement error acts like a confounding variable, whereas a much larger measurement error in the source variable A than the target variable B may lead to an inversion of the inferred causal direction. The conditional independence test in particular relies on the unrealistic assumptions that hidden confounders and measurement errors are absent, the violation of which incurs false negatives and a failure to correctly predict causal relations, as shown throughout this paper.
Although the newly proposed test avoids the elevated FNR from the conditional independence test, it is not without its own limitations. Unlike the conditional independence test, the relevance and controlled tests (Table 1) are symmetric between the two genes considered.
Therefore the direction of causality in the new test arises predominantly from using a different eQTL when testing the reverse interaction, potentially leading to a higher FDR as a minor trade-off. About 10% of cis-regulatory eQTLs are linked (as cis-eQTLs) to the expression of more than one gene 26 . In these cases, it appears that the shared cis-eQTL regulates the genes independently 26 , which in Findr is accounted for by the 'controlled' test (Table 1). When causality between genes and phenotypes or among phenotypes is tested, sharing or linkage of (e)QTLs can be more common. Resolving causality in these cases may require the use of Findr's conservative, traditional causal inference test, in conjunction with the new test.
In this paper we have addressed the challenge of pairwise causal inference, but to reconstruct the actual pathways and networks that affect a phenotypic trait, two important limitations have to be considered. First, linear pathways propagate causality, and may thus appear as densely connected sets of triangles in pairwise causal networks. Secondly, most genes are regulated by multiple upstream factors, and hence some true edges may only have a small posterior probability unless they are considered in an appropriate multivariate context.
The most straightforward way to address these issues would be to model the real directed interaction network as a Bayesian network with sparsity constraints. A major advantage of Findr is that it outputs probability values which can be directly incorporated as prior edge probabilities in existing network inference software.
In conclusion, Findr is a highly efficient and accurate open source software tool for causal inference from large-scale genome-transcriptome variation data. Its nonparametric nature ensures robust performance across datasets without parameter tuning, with easily interpretable output in the form of accurate precision and FDR estimates. Findr is able to predict causal interactions in the context of complex regulatory networks where unknown upstream regulators confound traditional conditional independence tests, and more generically in any context with discrete or continuous causal anchors.

Datasets
We used the following datasets/databases for evaluating causal inference methods: 1. Simulated genotype and transcriptome data of synthetic gene regulatory networks from the DREAM5 Systems Genetics challenge A (DREAM for short) 57 , generated by the SysGenSIM software 28 . DREAM provides 15 sub-datasets, obtained by simulating 100, 300, and 999 samples of 5 different networks each, containing 1000 genes in every subdataset but more regulations for sub-datasets with higher numbering. In every subdataset, each gene has exactly one matching genotype variable. 25% of the genotype variables are cis-expression Quantitative Trait Loci (eQTL), defined in DREAM as: their variation changes the expression level of the corresponding gene directly. The other 75% are trans-eQTLs, defined as: their variation affects the expression levels of only the downstream targets of the corresponding gene, but not the gene itself. Because the identities of cis-eQTLs are unknown, we calculated the P-values of genotype-gene expression associations with kruX 69 , and kept all genes with a P-value less than 1/750 to filter out genes without cis-eQTL. For the subsampling analysis (detailed in Section 4.7), we restricted the evaluation to the prediction of target genes from these cis-genes only, in line with the assumption that Findr and other causal inference methods require as input a list of genes whose expression is significantly associated with at least one cis-eQTL. For the full comparison of Findr to the DREAM leaderboard results, we predicted target genes for all genes, regardless of whether they had a cis-eQTL.
2. Genotype and transcriptome sequencing data on 465 human lymphoblastoid cell line samples from the Geuvadis project 3 consisting of the following data products: • Genotype data (ArrayExpress accession E-GEUV-1) 30 .
• Gene quantification data for 23722 genes from nonredundant unique samples and after quality control and normalization (ArrayExpress accession E-GEUV-1) 31 .
• Quantification data of miRNA, processed to the same standard as the gene quantification data (ArrayExpress accession E-GEUV-2) 32 . 3. Ground-truth networks of TF targets in the lymphoblastoid cell line GM12878 66 : a gene was considered a target of a TF if it had q-value < 0.05 (from R package qvalue) for differential expression in siRNA silencing of the TF, or at least 2 TF-binding peaks within 10kb of its transcription start site. We also obtained the filtered proximal TF-target network from 67 , which had 14 TFs and 7,000 target genes in common with the Geuvadis data.

General inference algorithm
Consider a set of observations sampled from a mixture distribution of a null and an alternative hypothesis. For instance, in gene regulation every observation can correspond to the expression levels of a pair of genes, which are sampled from a bivariate normal distribution with zero (null hypothesis) or non-zero (alternative hypothesis) correlation coefficient. In Findr, we predict the probability that any sample follows the alternative hypothesis with the following algorithm (based on and modified from 55 ): 1. For robustness against outliers, we convert every continuous variable into standard normally distributed N (0, 1) values using a rank-based inverse normal transformation across all samples. We call this step supernormalization. Implementational details can be found in Findr's source code.
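A minimal sketch of the supernormalization step is given below. The (rank + 0.5)/n quantile convention and the tie-breaking by input order are our assumptions; Findr's source code may differ in these details.

```python
from statistics import NormalDist

def supernormalize(values):
    """Rank-based inverse normal transform: map each value to the standard
    normal quantile of its shifted rank, yielding N(0,1)-like data that is
    robust to outliers. Ties are broken by input order for simplicity."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])
    out = [0.0] * n
    nd = NormalDist()
    for rank, i in enumerate(order):
        out[i] = nd.inv_cdf((rank + 0.5) / n)  # offset keeps quantiles in (0, 1)
    return out

expr = [3.1, 250.0, 0.2, 5.7, 4.4]  # one gene across 5 samples
z = supernormalize(expr)            # the outlier 250.0 becomes a finite quantile
```

Note that only ranks enter the transform, so extreme outliers (such as 250.0 above) are mapped to bounded quantiles, and the output is symmetric around zero.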

Likelihood ratio tests
Consider correlated genes A and B, and a third variable E upstream of A and B, such as a significant eQTL of A. The eQTLs can be obtained either de novo using eQTL identification tools such as Matrix eQTL 68 or kruX 69 , or from published analyses. Throughout this article, we assume that E is a significant eQTL of A; the extension to other data types is straightforward. We write A_i and B_i for the expression levels of genes A and B respectively, which are assumed to have gone through the supernormalization in Section 4.2, and E_i for the genotype of the best eQTL of A, where i = 1, . . . , n indexes samples. Genotypes are assumed to have a total of n_a alleles, so E_i ∈ {0, . . . , n_a}. We define the null and alternative hypotheses for a total of six tests, as shown in Table 1. The LLR of every test is calculated separately as follows:

0. Correlation test: Define the null hypothesis as A and B being independent, and the alternative hypothesis as them being correlated:

H^(0)_null: A ⊥ B,   H^(0)_alt: A correlated with B.

The superscript (0) is the numbering of the test. Both hypotheses model the gene expression levels as following a bivariate normal distribution,

(A_i, B_i) ~ N(0, [[σ²_A0, ρ σ_A0 σ_B0], [ρ σ_A0 σ_B0, σ²_B0]]),  i = 1, . . . , n,  (1)

with the null hypothesis corresponding to ρ = 0. The maximum likelihood estimators (MLEs) for the model parameters ρ, σ_A0, and σ_B0 are

ρ̂ = (1/n) Σ_{i=1}^n A_i B_i,   σ̂_A0 = σ̂_B0 = 1,  (2)

the unit variances following from the supernormalization, and the LLR is simply

LLR^(0) = -(n/2) ln(1 - ρ̂²).  (3)

In the absence of genotype information, we use nonzero correlation between A and B as the indicator for A → B regulation, giving the posterior probability P_0 ≡ P(H^(0)_alt | LLR^(0)) (Section 4.5).

1. Primary (linkage) test: Verify that E regulates A. For H^(1)_alt, we model E → A as A following a normal distribution whose mean is determined by E categorically, i.e.

A_i ~ N(μ_{E_i}, σ²_A).  (4)

From the total likelihood p({A_i} | {E_i}, μ_0, . . . , μ_{n_a}, σ_A) we find MLEs for the model parameters μ_j, j = 0, 1, . . . , n_a, and σ_A, as

μ̂_j = (1/n_j) Σ_{i=1}^n δ_{E_i j} A_i,   σ̂²_A = (1/n) Σ_{i=1}^n (A_i - μ̂_{E_i})²,

where n_j ≡ Σ_{i=1}^n δ_{E_i j} is the sample count by genotype category. The Kronecker delta function is defined as δ_xy = 1 for x = y, and 0 otherwise. When summing over all genotype values (j = 0, . . . , n_a), we only pick those that exist (n_j > 0) throughout this article. Since the null hypothesis is simply that A_i is sampled from a genotype-independent normal distribution, with MLEs of mean zero and standard deviation one due to the supernormalization (Section 4.2), the LLR for test 1 becomes

LLR^(1) = -(n/2) ln σ̂²_A.  (5)

By favoring a large LLR^(1), we select H^(1)_alt and verify that E regulates A, with P_1 ≡ P(H^(1)_alt | LLR^(1)).
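As an illustration, the correlation-test and primary-test LLRs can be computed in a few lines. This is a sketch, not Findr's actual implementation; function names are ours and the inputs are assumed to be supernormalized.

```python
import math
import numpy as np

def llr_correlation(A, B):
    """Correlation-test LLR for supernormalized vectors A, B:
    LLR = -(n/2) * ln(1 - rho^2) with rho = mean(A * B)."""
    n = len(A)
    rho = float(np.mean(A * B))
    return -n / 2.0 * math.log(1.0 - rho * rho)

def llr_primary(E, A):
    """Primary-test LLR: genotype-category-dependent means for A.
    LLR = -(n/2) * ln(sigma_A^2), the residual variance of A after
    subtracting the per-genotype MLE means (A assumed supernormalized)."""
    n = len(A)
    resid = A.copy()
    for j in np.unique(E):
        mask = E == j
        resid[mask] -= A[mask].mean()  # subtract MLE mean of category j
    return -n / 2.0 * math.log(float(np.mean(resid ** 2)))
```

The secondary test is the same computation with B in place of A.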

2. Secondary (linkage) test: The secondary test is identical to the primary test, except that it verifies that E regulates B. We hence repeat the primary test on E and B to obtain the MLEs

ν̂_j = (1/n_j) Σ_{i=1}^n δ_{E_i j} B_i,   σ̂²_B = (1/n) Σ_{i=1}^n (B_i - ν̂_{E_i})²,

and the LLR as

LLR^(2) = -(n/2) ln σ̂²_B.

H^(2)_alt is chosen to verify that E regulates B.

3. (Conditional) independence test: Verify that E and B are independent when conditioning on A. This is achieved by comparing

H^(3)_null: E → A → B,   H^(3)_alt: A ← E → B with A and B correlated.

LLRs close to zero then prefer H^(3)_null, and ensure that E regulates B only through A. For H^(3)_alt, the bivariate normal distribution dependent on E can be represented as

(A_i, B_i) ~ N((μ_{E_i}, ν_{E_i}), Σ),   Σ = [[σ²_A, ρ_c σ_A σ_B], [ρ_c σ_A σ_B, σ²_B]],

where ρ_c is the correlation of A and B conditional on E. For H^(3)_null, A_i follows Eq 4, and B_i follows the correlation-test model conditional on A_i, i.e. B_i ~ N(ρ A_i, σ²_B0 (1 - ρ²)). Replacing all parameters in both hypotheses with their MLEs, we obtain the LLR:

LLR^(3) = (n/2) ln [ (1 - ρ̂²) σ̂²_A / (σ̂²_A σ̂²_B - σ̂_AB²) ],  (6)

where

σ̂_AB ≡ (1/n) Σ_{i=1}^n (A_i - μ̂_{E_i})(B_i - ν̂_{E_i}),

and ρ̂ is defined in Eq 2.
4. Relevance test: Since the indirect regulation E → B tends to be weaker than either of the direct regulations it is composed of, we test for the indirect regulation E → B together with the direct regulation A → B, for stronger distinguishing power on weak regulations. We define

H^(4)_null: B ⊥ A and B ⊥ E,   H^(4)_alt: B dependent on A and E.

This simply verifies that B is not independent from both A and E simultaneously. In the alternative hypothesis, B is regulated by E and A, which is modeled as a normal distribution whose mean is additively determined by E categorically and A linearly, i.e.

B_i ~ N(ν_{E_i} + β A_i, σ²).

We can hence solve its LLR as

LLR^(4) = -(n/2) ln ( σ̂²_B - σ̂_AB² / σ̂²_A ),

where σ̂_AB ≡ (1/n) Σ_{i=1}^n (A_i - μ̂_{E_i})(B_i - ν̂_{E_i}) is the covariance of A and B after removing the genotype-dependent means.

5. Controlled test: Based on the positives of the secondary test, we can further distinguish the alternative hypothesis H^(5)_alt ≡ E → A → B from the null hypothesis H^(5)_null ≡ B ← E → A, to verify that E does not regulate A and B independently. The alternative allows A and B to remain correlated after conditioning on E, whereas the null does not. Its LLR can be solved as

LLR^(5) = -(n/2) ln ( 1 - σ̂_AB² / (σ̂²_A σ̂²_B) ).

Null distributions for the log-likelihood ratios
The null distribution of the LLR, p(LLR | H_null), may be obtained either by simulation or analytically. Simulation, such as random permutations of real data or the generation of random data from statistics of real data, can deal with a much broader range of scenarios in which analytical expressions are unattainable. However, the drawback is obvious: simulation can take hundreds of times longer than analytical methods to reach a satisfactory precision. Here we obtained analytical expressions of p(LLR | H_null) for all the tests introduced above.
0. Correlation test: H^(0)_null indicates no correlation between A and B. Therefore, we can start from

B̃_i ~ i.i.d. N(0, 1),  (7)

and, in order to simulate the supernormalization step, normalize B̃_i into B_i with zero mean and unit variance as:

B_i = (B̃_i - (1/n) Σ_j B̃_j) / sd(B̃),  (8)

where sd(B̃) is the sample standard deviation. Under Eq 7 and Eq 8, the squared sample correlation ρ̂² of Eq 2 follows the Beta distribution Beta(1/2, (n-2)/2), independently of {A_i}.

We define the distribution D(k_1, k_2) as the distribution of the random variable

Z = -(n/2) ln(1 - Y),   Y ~ Beta(k_1/2, k_2/2).

The probability density function (PDF) for Z ~ D(k_1, k_2) can be derived as:

p(z | k_1, k_2) = (2/n) (1 - e^{-2z/n})^{k_1/2 - 1} e^{-k_2 z/n} / B(k_1/2, k_2/2)

for z > 0, and p(z | k_1, k_2) = 0 for z ≤ 0. Here B(a, b) is the Beta function. Expressing Eq 3 in terms of ρ̂², the null distribution for the correlation test is simply

LLR^(0) | H^(0)_null ~ D(1, n - 2).

1. Primary test: H^(1)_null = E ⊥ A indicates no regulation from E to A. Therefore, similarly to the correlation test, we start from Ã_i ~ i.i.d. N(0, 1) and normalize them to A_i with zero mean and unit variance according to Eq 8. To account for genotypes that do not show up in the samples, define n_v ≡ Σ_{j: n_j > 0} 1 as the number of distinct genotype values across all samples. Since 1 - σ̂²_A is the proportion of variance of A explained by the n_v genotype categories, it follows Beta((n_v - 1)/2, (n - n_v)/2) under the null, and Eq 5 then gives

LLR^(1) | H^(1)_null ~ D(n_v - 1, n - n_v).

2. Secondary test: Identically,

LLR^(2) | H^(2)_null ~ D(n_v - 1, n - n_v).

3. Independence test: Under H^(3)_null, we randomize B by sampling B̃_i ~ N(ρ̂ A_i, 1 - ρ̂²) and normalizing B̃_i to B_i according to Eq 8. The null distribution of LLR^(3) can then be obtained with similar but more complex computations from Eq 6, as

LLR^(3) | H^(3)_null ~ D(n_v - 1, n - n_v - 1).

4. Relevance test: The null distribution of LLR^(4) can be obtained similarly by randomizing B_i according to Eq 7 and Eq 8, as

LLR^(4) | H^(4)_null ~ D(n_v, n - n_v - 1).

5. Controlled test: To compute the null distribution for the controlled test, we start from

B̃_i ~ N(ν̂_{E_i}, σ̂²_B),

and then normalize B̃_i into B_i according to Eq 8. Some calculation reveals the null distribution as

LLR^(5) | H^(5)_null ~ D(1, n - n_v - 1).

We verified our analytical method of deriving null distributions by comparing the analytical null distribution against the null distribution from permutation for the relevance test in Section S2.2.
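The D(k_1, k_2) characterization can be spot-checked by simulation, e.g. for the correlation test: the LLR of independent supernormalized vectors should match -(n/2) ln(1 - Y) with Y ~ Beta(1/2, (n-2)/2). A sketch (sample sizes and comparison of means only are our arbitrary choices):

```python
import math
import random

random.seed(7)
n, trials = 50, 4000

def standardize(x):
    """Zero mean, unit variance, mimicking supernormalization's moments."""
    m = sum(x) / len(x)
    v = sum((xi - m) ** 2 for xi in x) / len(x)
    return [(xi - m) / math.sqrt(v) for xi in x]

def llr0(A, B):
    """Correlation-test LLR for zero-mean, unit-variance vectors."""
    rho = sum(a * b for a, b in zip(A, B)) / len(A)
    return -len(A) / 2 * math.log(1 - rho * rho)

# Empirical null: LLR^(0) on independent normal vectors.
emp = []
for _ in range(trials):
    A = standardize([random.gauss(0, 1) for _ in range(n)])
    B = standardize([random.gauss(0, 1) for _ in range(n)])
    emp.append(llr0(A, B))

# Analytical null: Z = -(n/2) ln(1 - Y), Y ~ Beta(1/2, (n-2)/2).
ana = [-n / 2 * math.log(1 - random.betavariate(0.5, (n - 2) / 2))
       for _ in range(trials)]

emp_mean = sum(emp) / trials
ana_mean = sum(ana) / trials
```

The two samples should agree in distribution; here we only compare their means, which suffices as a smoke test.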

Bayesian inference of posterior probabilities
After obtaining the PDFs for the LLRs from real data and the null hypotheses, we can convert LLR values into posterior probabilities P (H alt | LLR). We use a similar technique as in 55 , which itself was based on a more general framework to estimate local FDRs in genome-wide studies 42 . This framework assumes that the real distribution of a certain test statistic forms a mixture distribution of null and alternative hypotheses. After estimating the null distribution, either analytically or by simulation, it can be compared against the real distribution to determine the proportion of null hypotheses, and consequently the posterior probability that the alternative hypothesis is true at any value of the statistic.
To be precise, consider an arbitrary likelihood ratio test. The fundamental assumption is that in the limit LLR → 0+, all test cases come from the null hypothesis (H_null), whilst as the LLR increases, the proportion of alternative hypotheses (H_alt) also grows. The real LLR values are assumed to follow a mixture distribution with PDF

p(LLR) = P(H_null) p(LLR | H_null) + P(H_alt) p(LLR | H_alt).

The priors P(H_null) and P(H_alt) sum to unity and correspond to the proportions of null and alternative hypotheses in the mixture distribution. For any test i = 0, . . . , 5, Bayes' theorem then yields its posterior probability as

P(H_alt | LLR) = 1 - P(H_null) p(LLR | H_null) / p(LLR).

Based on this, we can define the posterior probabilities of the selected hypotheses according to Table 1, i.e. the alternative for tests 0, 1, 2, 4, 5 and the null for test 3, as

P_i ≡ P(H^(i)_alt | LLR^(i)) for i ∈ {0, 1, 2, 4, 5},   P_3 ≡ P(H^(3)_null | LLR^(3)) = 1 - P(H^(3)_alt | LLR^(3)).

Lastly, in a typical application of Findr, the input (E, A) pairs will have been pre-determined as the set of significant eQTL-gene pairs from a genome-wide eQTL association analysis. In such cases, we may naturally assume P_1 = 1 for all considered pairs, and skip the primary test.
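A minimal sketch of this empirical conversion follows. The histogram-based density estimate and the use of the lowest bin to estimate P(H_null) are our simplifications; Findr's actual estimator is more careful.

```python
import numpy as np

def posterior_from_llr(llrs, null_pdf, nbins=50):
    """Histogram sketch of the local-FDR conversion: estimate the mixture
    density p(LLR) from the observed LLRs, estimate P(H_null) from the bin
    nearest LLR = 0 (assumed almost purely null), and return
    P(H_alt | LLR) = 1 - P(H_null) * null_pdf(LLR) / p(LLR) per observation."""
    llrs = np.asarray(llrs, dtype=float)
    hist, edges = np.histogram(llrs, bins=nbins, density=True)
    centers = (edges[:-1] + edges[1:]) / 2
    null_dens = np.array([null_pdf(c) for c in centers])
    # Proportion of nulls: ratio of mixture to null density near LLR -> 0+.
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.where(null_dens > 0, hist / null_dens, np.inf)
    p_null = min(1.0, float(ratio[0]))
    idx = np.clip(np.digitize(llrs, edges) - 1, 0, nbins - 1)
    post = 1.0 - p_null * null_dens[idx] / np.maximum(hist[idx], 1e-300)
    return np.clip(post, 0.0, 1.0)
```

On a mixture where alternatives have clearly larger LLRs, observations in the null-dominated region receive posteriors near 0 and those in the alternative-dominated tail near 1.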

Tests to evaluate
Based on the six tests in Section 4.3, we use the following tests and test combinations for the inference of genetic regulations, and evaluate them in the results.
• The correlation test is introduced as a benchmark against which we can compare methods that use genotype information. Pairwise correlation is a simple measure of the probability that two genes are functionally related, either through direct or indirect regulation, or through coregulation by a third factor. Bayesian inference additionally considers different gene roles. Its predicted posterior probability for regulation is P_0.

• The traditional causal inference test: Trigger 43 is an R package implementation of this method. However, since Trigger integrates eQTL discovery with causal inference, it is not practical for use on modern datasets. For this reason, we reimplemented the method in Findr, and evaluated it with P_2 and P_2 P_3 separately, in order to assess the individual effects of the secondary and independence tests. As discussed above, we expect a set of significant eQTLs and their associated genes as input, and therefore P_1 = 1 is assured and not calculated in this paper or in the package Findr. Note that P_T ≡ P_2 P_3 is the estimated local precision, i.e. the probability that tests 2 and 3 are both true. Correspondingly, its local FDR (the probability that at least one of them is false) is 1 − P_T.
• The novel test, aimed specifically at addressing the failures of the traditional causal inference test, combines the tests differently:

P = (P_2 P_5 + P_4) / 2.   (Eq 25)

The first term in Eq 25 accounts for hidden confounders. The controlled test replaces the conditional independence test and constrains the hypothesis space more weakly, requiring only that the correlation between A and B is not entirely due to pleiotropy.
Therefore P_2 P_5 (with P_1 = 1) verifies a hypothesis set that is a superset of E → A → B. On the other hand, the relevance test in the second term of Eq 25 addresses weak interactions that are undetectable by the secondary test from existing data (P_2 close to 0). This term still grants higher-than-null significance to weak interactions, and verifies a hypothesis set that is also a superset of E → A → B. In the extreme undetectable limit where P_2 = 0 but P_4 ≠ 0, the novel test in Eq 25 automatically reduces to P = P_4 / 2, which assumes equal probability of either direction and assigns half of the relevance test probability to A → B.
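Once the per-test posteriors are available, the test combinations can be written down directly. Here we assume Eq 25 takes the form P = (P_2 P_5 + P_4)/2, consistent with the limits discussed above; this is a hedged sketch, not Findr's code.

```python
def traditional_test(p2, p3):
    # P_T = P_2 * P_3, the estimated local precision of the traditional
    # causal inference test; its local FDR is 1 - P_T.
    return p2 * p3

def novel_test(p2, p4, p5):
    # P = (P_2 * P_5 + P_4) / 2: the first term allows hidden confounders,
    # the second keeps weak interactions above null significance.
    return 0.5 * (p2 * p5 + p4)
```

In the undetectable limit, novel_test(0.0, p4, p5) returns p4/2, assigning half of the relevance-test probability to A → B.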
The composite design of the novel test aims not to miss any genuine regulation whilst distinguishing the full spectrum of possible interactions. When the signal level is too weak for tests 2 and 5, we expect P_4 to still provide distinguishing power better than random predictions. When the interaction is strong, P_2 P_5 is then able to pick up true targets regardless of the existence of hidden confounders.

In order to demonstrate the inferential precision among top predictions for any inference test (here the traditional and novel tests separately), we first ranked all (ordered) gene pairs (A, B) according to the inferred significance for A → B. All gene pairs were split into groups according to their relative significance ranking (9 groups in Figure 1C,D, as top 0% to 0.01%, 0.01% to 0.02%, etc.). Each group was divided into two subgroups, based on whether each gene pair shared at least one direct upstream regulator gene (confounded) or not (unconfounded), according to the gold standard. Within each subgroup, the local precision was computed as the number of true directed regulations divided by the total number of gene pairs in the subgroup.

Simulation studies on causal models with measurement error

For simplicity, we only considered monoallelic species. Therefore the genotype E in each sample followed the Bernoulli distribution, parameterized by the predetermined minor allele frequency. Each regulatory relation (of E → A^(t), A^(t) → A, and A^(t) → B) corresponded to a normal distribution whose mean was linearly dependent on the regulator variable. In particular, for sample i:

A_i^(t) ∼ N(Ẽ_i, σ_A1²),   A_i ∼ N(Ã_i^(t), σ_A2²),   B_i ∼ N(Ã_i^(t), σ_B²),

in which σ_A1, σ_A2, and σ_B are parameters of the model. Note that σ_B² is B's variance from all unknown sources, including expression level variations and measurement errors. The tilde normalizes a variable to zero mean and unit variance:

X̃_i ≡ (X_i − X̄) / √Var(X),

where X̄ and Var(X) are the mean and variance of X ≡ {X_i} respectively.
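For illustration, the simulation model can be sketched as below, reading each regulation as a normal variable centred on the standardized regulator. The parameter defaults are arbitrary, and the function names are ours, not Findr's.

```python
import numpy as np

rng = np.random.default_rng(0)

def standardize(x):
    # The tilde operation: zero mean and unit variance.
    return (x - x.mean()) / x.std()

def simulate(n=500, maf=0.3, s_a1=0.5, s_a2=0.5, s_b=1.0):
    # Monoallelic genotype: Bernoulli(minor allele frequency).
    e = rng.binomial(1, maf, n).astype(float)
    # E -> A(t): the true expression of A depends linearly on genotype.
    a_true = rng.normal(standardize(e), s_a1)
    # A(t) -> A: the observed A adds measurement noise to true expression.
    a_obs = rng.normal(standardize(a_true), s_a2)
    # A(t) -> B: B is regulated by the true (unobserved) expression of A.
    b = rng.normal(standardize(a_true), s_b)
    return e, a_obs, b
```

Varying s_a1 (A's variance from other sources) and s_a2 (measurement noise) reproduces the parameter grid described in the figure below.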
Given the five parameters of the model (the number of samples, the minor allele frequency, σ_A1, σ_A2, and σ_B), we could simulate the observed data for E, A, and B, which were then fed into the inference tests.

Figure caption (subsampling analysis): The mean AUROC (A) and AUPR (B) on subsampled data are shown for the traditional (P_2, P_2 P_3) and newly proposed (P_4, P_2 P_5, P) causal inference tests against the baseline correlation test (P_0). Every marker corresponds to the average AUROC or AUPR at a specific sample size. Random subsampling at every sample size was performed 100 times. Half widths of the lines and shades are the standard errors and standard deviations respectively. P_i corresponds to test i as numbered in Table 1 and Table S1.
Figure caption (simulation p-values): σ_A1² is A's variance from other sources and σ_A2² is the variance due to measurement noise. A total of 100 values from 10⁻² to 10² were taken for σ_A1² and σ_A2² to form the 100 × 100 tiles. Tiles that did not produce a significant eQTL relation E → A with p-value ≤ 10⁻⁶ were discarded. Contour lines are for the log-average of smoothed tile values. Note that for the conditional independence test (B), the true model corresponds to the null hypothesis, i.e. small (purple) p-values correspond to false negatives, whereas for the other tests the true model corresponds to the alternative hypothesis, i.e. small (purple) p-values correspond to true positives (cf. Table 1). For details of the simulation and results from other parameter settings, see Section 4.8 and Figure S7.

S1.1 Practical details for Bayesian inference
In practice, real PDFs are approximated with histograms, which requires a proper choice of histogram bin widths and counts. To enforce monotonicity of the posterior distribution, we calculated the two following functions.
First, starting from the raw posterior probability P(H_alt | LLR) at every bin center, set the posterior at every bin to be no smaller than every value on its negative side. Second, also starting from the raw posterior values P(H_alt | LLR) at every bin center, set the posterior at every bin to be no larger than every value on its positive side. The mean of the two functions is then taken, which ensures monotonicity whilst minimizing systematic bias.

† Corresponding author. Email: Tom.Michoel@roslin.ed.ac.uk

We then smoothed the monotonic posterior distribution by convolving its bin differences against a predefined normal filter, after which the cumulative sum was calculated to recover the posterior distribution. A normalization was then performed to maintain the span between the previous minimum and maximum. The major purpose of smoothing is to remove duplicate values, especially those introduced during monotonicity enforcement, rather than to obtain a visually smooth function. After smoothing, we performed linear interpolation to obtain the individual post-processed posterior probabilities for each LLR.
More details can be found in the source code.
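For illustration, the post-processing above can be sketched as follows. The filter width and the 7-bin kernel are arbitrary choices, and the final linear-interpolation step is omitted; this is a sketch, not the Findr source.

```python
import numpy as np

def postprocess_posterior(p_raw, sigma_bins=1.5):
    """Monotonicity enforcement and smoothing of a binned posterior.

    p_raw: raw P(H_alt | LLR) at bin centers, ordered by increasing LLR
    (needs more bins than the 7-tap filter below).
    """
    # Two monotone envelopes: each bin no smaller than anything on its
    # negative side, and no larger than anything on its positive side.
    lower = np.maximum.accumulate(p_raw)
    upper = np.minimum.accumulate(p_raw[::-1])[::-1]
    mono = 0.5 * (lower + upper)  # mean of the two, minimizing bias
    # Smooth the bin differences with a normal filter, cumulative-sum
    # back, then renormalize to the previous min-max span.
    diffs = np.diff(mono, prepend=mono[0])
    x = np.arange(-3, 4)
    kernel = np.exp(-x**2 / (2.0 * sigma_bins**2))
    kernel /= kernel.sum()
    smooth = np.cumsum(np.convolve(diffs, kernel, mode="same"))
    smooth += mono[0] - smooth[0]
    span = smooth.max() - smooth.min()
    if span > 0:
        smooth = mono.min() + (smooth - smooth.min()) * (mono.max() - mono.min()) / span
    return smooth
```

Because the envelope differences and the kernel are both non-negative, the smoothed cumulative sum stays monotone by construction.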

S2.1 Iterative conditioning conflicts with local FDR-based probability estimation
In 55, the authors suggested that the probability of each test should be conditioned on the survival of all preceding tests, i.e. that the null distribution of each test should be estimated only on the (A, B) pairs that survive all preceding tests, although this is not implemented in the Trigger package.
As an example, we applied the test combination P_2 P_3 on the Geuvadis dataset. After choosing an appropriate threshold probability for the secondary test, as suggested in 55, we retained only the gene pairs positive for the secondary test and calculated their real and null LLR distributions for the independence test. Random permutations were applied for the null distribution, with high-speed sampling from a Metropolis-Hastings algorithm whose sampling rate increases exponentially below the secondary test's positivity threshold, and is uniformly 1 above it. To balance efficient sampling against the risk of being trapped in local maxima, a proper exponential factor for the sampling rate (≈ 1 − n_v/n) can be obtained from the null LLR distribution in Eq 19.
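One reading of this sampling scheme is the acceptance rate below; the parameter k (the exponential factor) and the rejection-sampling framing are our assumptions, not a transcription of the implementation.

```python
import numpy as np

def acceptance_rate(llr, threshold, k):
    # Sampling rate for permuted LLR values: exponentially increasing
    # below the secondary test's positivity threshold, uniformly 1 above.
    llr = np.asarray(llr, dtype=float)
    return np.minimum(1.0, np.exp(k * (llr - threshold)))
```

A permuted LLR would then be kept with this probability, concentrating the null sample on pairs that would survive the secondary test.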
The calculation revealed that the null and real distributions form different shapes in the limit LLR^(3) → 0⁺, which contradicts the fundamental assumption of the local-FDR-based probability estimation method (Section 4.5). On the other hand, the histograms aligned flawlessly without conditioning. We conclude that although appropriate conditioning may enhance statistical power, we have yet to find a self-consistent approach. Since unconditioned tests have been shown to be self-consistent and reliable, we do not apply test conditioning in Findr.

S2.2 Analytical null distribution matches random permutations
An important feature of Findr is the novel derivation of analytical expressions for the distributions of likelihood ratio test statistics under the various null hypotheses (Section 4.4). We compared the analytical null distributions to empirical null distributions obtained from random simulations. Simulated data were obtained either by permuting sample labels of the independent gene (tests 0, 1, 2, 4), or by simulating expression levels of the gene whilst taking into account existing correlations (tests 3, 5). Sufficient simulated data were then fed into the original algorithm to obtain p(LLR | H_null). As demonstrated with an example in Figure S1, our analytical derivation was confirmed to be indistinguishable from the simulated distribution for the targets of miRNA hsa-miR-200b-3p.
More importantly, the analytical result holds for any sample size and does not assume an infinite sample size (n → ∞). Indeed, in this asymptotic limit, the null distributions of the LLRs reduce to χ² distributions, in agreement with Wilks's theorem 56. For example, this correspondence is easy to confirm (Figure S2).
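The χ² limit can be checked with a quick simulation for a generic likelihood-ratio statistic (a one-sample Gaussian mean test with known variance, chosen here for simplicity rather than taken from Findr): under the null, 2 log LR = n·x̄², which follows a χ² distribution with one degree of freedom.

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 50, 4000
# Data generated under H_null: x ~ N(0, 1), i.e. mu = 0.
xbar = rng.normal(0.0, 1.0, (reps, n)).mean(axis=1)
# 2 * log LR for H_alt (mu free) vs H_null (mu = 0) with unit variance.
stats = n * xbar**2
# The chi2(1) median is ~0.455; the empirical median should agree.
emp_median = np.median(stats)
```

Repeating with larger n leaves the agreement unchanged, since for this toy test the χ²₁ law holds exactly at every sample size.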

S2.4 Findr achieves the best performance on miRNA target predictions from the Geuvadis dataset
We compared the performance of Findr on miRNA target prediction from the Geuvadis dataset with a suite of network inference methods that are based on gene expression data and, for some, genotype information. They include:

• All methods in the miRLAB package 62:

- Correlation methods: Pearson correlation, Spearman correlation, Kendall correlation, distance correlation (dcov), Hoeffding's D measure (hoeffding), randomized dependence coefficient (rdc), and mutual information (mi).
- Failed method: the intervention calculus (ida) method failed due to excessive memory usage (greater than 16 GB) and hence is excluded from the comparison.
• CIT 64,65, which performs multiple causal inference tests with genotype data.
The Trigger package 55 was not attempted because its eQTL discovery routine exceeded both our memory and time limitations.
Their AUROCs, AUPRs, and running times are presented in Table S2. The ROC and PR curves are shown in Figure S8. We observed the following results:

• The correlation test performed best among methods without genotype information, and in particular performed much better than Pearson and Spearman correlations. The performance gain is due to Bayesian inference, which is able to account for different gene roles such as hubs. This suggests the possibility of replacing correlation-based methods with their FPR estimation counterparts in future inference of genetic regulations.
• The new test P performed better than the correlation test P_0, making this the first comparative study to demonstrate the effect of genotype information in the inference of gene regulatory relations.
• The traditional causal inference test performed worse than random predictions. This confirms with real data that the indirect secondary test fails to identify true but weak regulations. The independence test had negligible effect as the sample size is small (not shown in Figure S8).
• Findr achieved higher AUROC and AUPR than all other methods attempted. It was also much faster than all other methods, especially CIT which also includes genotype data.
• Findr obtained a lower precision than lasso and elastic-net at small recalls. This might be explained by the fact that lasso and elastic-net are multivariate methods which incorporate all other gene expression levels besides pairwise information, and therefore exclude indirect regulators better.

S2.5 Findr predicts transcription factor targets with accurate FDR estimates
Since CIT is much slower than Findr, we were only able to use CIT to infer genetic regulations for the intersection of the prediction and groundtruth datasets, as opposed to inferring all possible regulations with Findr.
As shown in Figure 4, AUPRs and AUROCs for TF target prediction with respect to known targets from siRNA silencing or TF-binding did not exhibit substantial differences, other than a modest improvement over random predictions. We believe this is due to the unavoidable noise and size limitations in groundtruth data, which lead to large fluctuations in evaluation metrics and therefore preclude a perfect comparison of methods. Furthermore, AUPR and AUROC test the entire ranked list of predictions for overlap with the groundtruth and will miss differences in enrichment for true positives between methods if they occur only among a small fraction of top-ranked predictions.
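The distinction can be made concrete with rank-based metrics computed directly: AUROC scores the entire ranked list, whereas precision-at-k looks only at the top-ranked predictions. Both functions below are generic sketches, not the evaluation code used in this study.

```python
import numpy as np

def auroc(scores, labels):
    # Mann-Whitney form of the AUROC over the entire ranked list
    # (assumes no tied scores, which suffices for this sketch).
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels.astype(bool)
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def precision_at_k(scores, labels, k):
    # Enrichment for true positives among the k top-ranked predictions.
    top = np.argsort(scores)[::-1][:k]
    return labels[top].mean()
```

Two methods with identical AUROC can still differ sharply in precision_at_k for small k, which is exactly the regime the global metrics may miss.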
The construction of gold standard regulatory networks from TF-binding data depends on how TF binding sites are mapped to target genes. Here a TF regulatory interaction was assumed if a gene had at least two binding sites for a particular TF within 10 kb of its transcription start site (TSS) 66. We repeated the analysis using the high-confidence (binding within 2.5 kb) TF-target network derived from ENCODE ChIP-sequencing data of 119 TFs in five cell types, including the lymphoblastoid cell line GM12878 67. Fourteen TFs had a significant eQTL in the Geuvadis data. The results of the two analyses are consistent (Figure 4 and Figure S10).
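The mapping rule (at least two binding sites within 10 kb of a gene's TSS) can be sketched as follows; the input structures (a list of binding sites and a TSS dictionary) are hypothetical stand-ins for the actual ChIP data.

```python
from collections import defaultdict

def binding_network(sites, tss, window=10_000, min_sites=2):
    """sites: iterable of (tf, chrom, position); tss: {gene: (chrom, tss)}.

    A TF regulatory interaction is assumed if a gene has at least
    `min_sites` binding sites for that TF within `window` bp of its TSS;
    window=2_500 gives the high-confidence variant described above.
    """
    counts = defaultdict(int)
    for tf, chrom, pos in sites:
        for gene, (g_chrom, g_tss) in tss.items():
            if chrom == g_chrom and abs(pos - g_tss) <= window:
                counts[(tf, gene)] += 1
    return {pair for pair, c in counts.items() if c >= min_sites}
```

Tightening `window` shrinks the network toward higher-confidence interactions, mirroring the 10 kb versus 2.5 kb comparison in the text.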

S3 Discussion
• Existing software/methods not considered in this study: no previous causal inference software has been able to handle the size of modern datasets.
In Trigger 55, eQTL discovery and causal inference are inextricably linked, which is impractical for any sizeable dataset, especially for mammalian species; much more efficient tools for eQTL discovery have meanwhile become available 68,69. NEO 70 is similar to CIT 65 in the tests employed, but even slower. NEO is able to account for multiple eQTLs, but avoids permutations using asymptotic χ² approximations, which result in deflated estimates of null distributions, as shown in Section S2.2. As stated in the discussion, multi-variate network inference can be a downstream analysis based on pairwise causal inference, but evaluating such methods 71-73 fell outside the scope of this paper.
Finally, note that CIT computes null distributions by permutation, separately for every gene pair, requiring months to tens of years of CPU time on a modern dataset. We were only able to include comparisons to CIT on the Geuvadis data by limiting it to the subset of gene pairs in the ground-truth tables, which correspond to 0.2% of all gene pairs for the TF target prediction case.
• Considerations on human datasets: Proving in an easily reproducible manner that the results on simulated DREAM data also hold on human data is challenging, due to the often restricted access to individual-level genotype data and the limited availability of reference databases of known interactions, especially when cell type specificity is taken into account. We performed our evaluation on the Geuvadis data, which provides transcriptome data in lymphoblastoid cells of nearly 400 individuals whose genotype data are publicly available through the 1000 Genomes project. We found that Findr predicted miRNA targets more accurately than other causal inference methods and a panel of machine learning methods that used expression data alone. Although the absolute power to predict miRNA targets may appear modest, we were primarily interested in the relative performance of various methods, and did not incorporate information about known miRNA biology, such as a preference for negative correlations or the presence of miRNA seed sequences. Moreover, since miRNAs are frequently studied in the context of diseases such as cancer, the ground-truth set of experimentally confirmed targets may represent a biased set of interactions that are not necessarily present in the lymphoblastoid cell type studied.
To address the issue of cell type specificity, we analysed the predicted targets of 25 transcription factors […].

Figure caption: Every marker corresponds to the AUROC or AUPR of one dataset. CIT is an R package that includes the conditional independence test, along with tests 2 and 5, while also comparing E → A → B against E → B → A. The subsampling analysis on CIT was not feasible due to its low speed.

Figure S4: Estimated and real precision-recall curves for dataset 4 of the DREAM challenge. The real precision was computed according to the groundtruth, whilst the estimated precision was obtained from the estimated FDR of the respective inference method (precision = 1 − FDR). Only genes with cis-eQTLs were considered as primary targets in prediction and validation. Both the novel (A, B) and the traditional (C, D) tests were evaluated. In A, C the original groundtruth table was used to validate predictions, whereas in B, D an extended groundtruth was used that also included indirect regulations at any level based on the original groundtruth.

Figure S5: The conditional independence test fails in the presence of hidden confounders. When A and B are both regulated by a hidden confounder C, which is independent of E (left), A becomes a collider, and conditioning on A introduces inter-dependency between E and C, which maintains the E → B regulation (right).

Figure S8: ROC (top) and PR (bottom) curves of miRNA target predictions were compared for Findr's traditional, new, and correlation tests, GENIE3, CIT, and 11 methods in miRLAB, based on Geuvadis data. The solid black lines correspond to expected performances from random predictions. A higher curve indicates better prediction performance.
Figure S9: Three methods of causal inference were evaluated and compared against the baseline correlation test method (P_0): Findr's new test (P), the traditional causal inference test in Findr (P_T), and CIT (C). AUROC and AUPR metrics are measured for three inference tasks (Section 4.1). miRNA compares miRNA target predictions based on Geuvadis miRNA and mRNA expression levels against groundtruths from miRLAB. siRNA and TF-binding compare gene-gene interaction predictions based on Geuvadis gene expression levels against groundtruths from siRNA silencing and TF-binding measurements 66 respectively. ENCODE compares the same gene-gene interaction predictions against TF-binding networks derived from ENCODE data 67. Dashed lines indicate expected performances from random predictions.