Presymptomatic Risk Assessment for Chronic Non-Communicable Diseases

The prevalence of common chronic non-communicable diseases (CNCDs) far overshadows the prevalence of both monogenic and infectious diseases combined. All CNCDs, also called complex genetic diseases, have a heritable genetic component that can be used for pre-symptomatic risk assessment. Common single nucleotide polymorphisms (SNPs) that tag risk haplotypes across the genome currently account for a non-trivial portion of the germ-line genetic risk and we will likely continue to identify the remaining missing heritability in the form of rare variants, copy number variants and epigenetic modifications. Here, we describe a novel measure for calculating the lifetime risk of a disease, called the genetic composite index (GCI), and demonstrate its predictive value as a clinical classifier. The GCI only considers summary statistics of the effects of genetic variation and hence does not require the results of large-scale studies simultaneously assessing multiple risk factors. Combining GCI scores with environmental risk information provides an additional tool for clinical decision-making. The GCI can be populated with heritable risk information of any type, and thus represents a framework for CNCD pre-symptomatic risk assessment that can be populated as additional risk information is identified through next-generation technologies.


Introduction
Common chronic non-communicable diseases (CNCDs) are caused by a combination of genetic and environmental risk factors. These diseases account for the majority of disease burden, and the majority of health care cost, globally. Pre-symptomatic risk assessment of an individual for CNCDs, and personalized management to extend the healthy lifespan and reduce costs, is increasingly a global priority [1]. CNCDs include diseases that are not monogenic in nature, not purely environmental (trauma), and not purely somatic. They do include the most common forms of disease such as heart disease, metabolic disorders, neurological and mental health disorders, heritable cancers, and many noncongenital/non-monogenic pediatric disorders. Examples include myocardial infarction, arrhythmia, diabetes, Alzheimer's disease, prostate cancer, and autism spectrum disorder.
Recent advances in genotyping technology have greatly improved our understanding of the genetic risk factors that contribute to such diseases. In particular, whole-genome association studies have uncovered many common variants that increase an individual's risk of developing a disease during his/her lifetime. Since disease prevention will be the most effective means to ensure a healthier population in the coming decades, it is necessary to understand how to integrate inherited genetic risk information into our clinical decision-making process early in life so that we can minimize the chance of developing disease in the future. Low effect size common SNP variants, rare and private variants, DNA copy number variants and epigenetic modifications are together believed to account for most of the inherited risk. When we can fully articulate the relative contribution of each of these elements to any specific disease, and the effects of their interactions with one another, our predictive accuracy will peak.
Accurately estimating an individual's risk to develop a CNCD is a challenging task. To begin, the risk is determined by many factors including the genetic risk factor load, environmental factors, gender, age, etc., and not all contributing factors are known. It is therefore clear that for most conditions the best risk assessments can only provide a probabilistic estimate. In order to accurately estimate the risk of an individual, one has to take into account the different associated variants, their effect sizes, and their frequency in the population; the environmental factors affecting the individual, such as diet; age, family history and ethnic background; as well as the interactions among all of these. Large-scale studies that investigate all of these factors at once are prohibitively expensive to conduct, and to our knowledge, none have been conducted.
Here, we study the performance of risk estimates based on the genetic composition of an individual alone, keeping all other factors fixed. Several approaches for risk estimation based on genetics alone have been proposed in the past [2][3][4]. These methods generally use the assumption that the disease-associated loci are independent of one another and that the relative risk of each locus is given. In practice, the relative risks are normally not known since in case-control studies, the odds-ratios and not the relative risks are given. In [2], the relative risks are inferred from the odds ratios by solving a set of equations that takes into account the prevalence of the disease, the frequencies of the genotypes and the odds ratios. Here, we suggest using a new method which aims at estimating the risk over the lifetime of an individual. The probability of disease as calculated using our method will be referred to as the Genetic Composite Index (GCI) or the GCI score (see Methods).
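The multiplicative structure behind such a score can be illustrated in a few lines of Python. This is a minimal sketch under the independence assumption described above; the function name and the numeric relative risks are hypothetical, and a full implementation would first derive the per-genotype relative risks from odds ratios as described in Methods.

```python
def gci_score(lifetime_risk, relative_risks):
    """Multiplicative lifetime-risk sketch: the average lifetime risk
    scaled by the product of per-genotype relative risks.
    `relative_risks` holds the relative risk of the genotype the
    individual carries at each associated, independent locus."""
    score = lifetime_risk
    for rr in relative_risks:
        score *= rr
    return score

# Hypothetical example: average lifetime risk 25%, three risk loci,
# one of which is mildly protective (relative risk below 1).
print(gci_score(0.25, [1.3, 1.1, 0.9]))  # 0.25 * 1.3 * 1.1 * 0.9
```

An individual carrying no risk genotypes (an empty product) is simply assigned the population-average lifetime risk.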
Similarly to previous approaches, we rely on several assumptions, main among them being the assumption of independence between the disease-associated loci. We use simulated data as well as real data to assess the performance of the risk estimates under different conditions. Importantly, we find that the assumption of independence does not greatly affect the generality of our method and modest SNP-SNP interactions in simulated data do not seem to significantly affect its predictability.
In order to measure the quality and effectiveness of GCI and similar methods, it is important to understand their limitations and merits. For example, [2] use Receiver Operating Characteristic (ROC) curves in order to measure the effectiveness of various risk measures. We adapt their use of ROC curves to evaluate our proposed score, and in particular, we consider the use of GCI in the context of three different diseases: Type 2 Diabetes, Crohn's disease and Rheumatoid Arthritis. We use simulations to calculate the predictive power of these different methods under an ideal ''best-case'' theoretical scenario, in which all the genetic factors are known. This ideal risk assessment depends on several factors including the heritability and the average lifetime risk of the disease. We find that the predictive power currently achieved for these diseases is substantially lower than the ideal predictive power, suggesting that major interactions and possible epigenetic factors are yet to be discovered. We emphasize that GCI is not a substitute for large-scale studies designed to simultaneously test multiple risk factors, but is rather an index that can be used when the result of such studies are simply not available, as is the case for virtually all common diseases.

Evaluation of the GCI risk score and its assumptions
We use the Wellcome Trust Case Control Consortium (WTCCC) data [5] to test our GCI methodology. This dataset contains the genotypes of approximately 14,000 individuals divided into seven subpopulations based on disease phenotypes and one unaffected control subpopulation of 1,500 samples from the UK Blood Service Control Group. We limited our attention to the Type 2 Diabetes, Crohn's Disease and Rheumatoid Arthritis subpopulations and the common control group, and did not consider any environmental variables in this analysis. We used SNPs that were reported in the literature to be significantly associated with each of these conditions (see Table 1) and that passed a set of quality criteria. The main criteria were that i) the SNP association was consistently replicated within a given ancestral group, and ii) the number of cases and controls was at least 250 when the effect size was less than 1.5. (Details about genotyping quality criteria in the WTCCC data are given in [5]; beyond this, the only genotyping criterion was that the SNPs chosen were reported in high-quality studies using sound genotyping methodology.) For each of the chosen SNPs, we computed the relative risk (see Methods) based on the empirical distribution of alleles found in the WTCCC dataset and used the GCI formula to calculate an estimated risk per individual. We note that some of the known risk variants are not present on the Affymetrix 500k GeneChip array that was used by the WTCCC, and therefore we expect the predictive power of the GCI to be better than what is presented in our analysis below.
As noted before, we use Receiver Operating Characteristic (ROC) curve analysis [12] in order to evaluate the ability of the GCI to serve as a predictive test for a condition. ROC curves have been previously used as a measure of the reliability of a genetics-based risk assessment test [2]. For a perfect test, a threshold t could be chosen such that all individuals with a score larger than t develop the condition, and all individuals with a score less than t do not. In practice, however, for any given threshold there is some fraction of false positive and false negative assignments. The ROC curve graphically depicts the relationship between false positive rates and true positive rates, and thus it can be used to guide the tradeoff between test sensitivity and specificity. We use the area under the ROC curve (AUC) as a quantitative measure to compare different risk scores. In general, the larger the AUC, the better the score is for classification. If classification were done randomly, the expected AUC is 0.5; for a perfect score, the AUC equals 1.
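The AUC can be computed directly from case and control scores via the Mann-Whitney identity: it equals the probability that a randomly chosen case outscores a randomly chosen control (ties counting one half). A minimal sketch with hypothetical scores:

```python
def auc(case_scores, control_scores):
    """Area under the ROC curve via the Mann-Whitney identity:
    the fraction of (case, control) pairs in which the case scores
    higher, with tied pairs counted as one half."""
    wins = 0.0
    for s in case_scores:
        for t in control_scores:
            if s > t:
                wins += 1.0
            elif s == t:
                wins += 0.5
    return wins / (len(case_scores) * len(control_scores))

# 8 of the 9 case-control pairs are ranked correctly here (8/9).
print(auc([0.9, 0.8, 0.55], [0.6, 0.4, 0.3]))
```

For large samples a rank-based computation is preferable to this quadratic loop, but the quantity estimated is the same.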

Comparisons with an interactions model
One of the assumptions made by the GCI framework is that the disease-associated SNPs are independent. This assumption is useful since the score can then be calculated just from summary data; furthermore, when interactions are modeled based on limited data, there is a risk of over-fitting. Nevertheless, in an attempt to quantify how much information might be lost by the independence assumption, we compared our method with a model that accounts for both SNP-SNP interactions and the marginal contribution of each SNP. In particular, we used logistic regression to account for the interactions. If the SNPs are s_1, s_2, …, s_n, then the model assumes that the logit transformation of the binary outcome reflecting disease or non-disease status is X = c + a_1 s_1 + a_2 s_2 + … + a_n s_n + a_12 s_12 + … + a_(n-1,n) s_(n-1,n), where s_ij is the interaction between s_i and s_j. We first trained the model using the WTCCC data and then generated a ROC curve based on its probability estimates. Since this model takes into account the pairwise interactions between SNPs, it should be at least as accurate as the GCI score, which does not consider them. Note that the logistic regression model is an optimistic upper bound on the GCI since it can easily over-fit the model to the data; therefore, we are being conservative in our estimation of the information lost under the independence assumption. Figure 1 shows the ROC curves for the three disease scenarios and Table 2 gives their AUCs. We observe that the AUCs for GCI and logistic regression are quite similar for these three diseases, leading us to the conclusion that SNP-SNP interactions do not add substantial information to the risk assessment for the diseases investigated here. We can therefore justify our assumption (at least in these cases) that SNP-SNP interactions can be ignored as long as there is no evidence for such an interaction from previous studies.
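A self-contained sketch of such an interactions model follows: it builds main-effect genotype features plus all pairwise products and fits them by plain gradient ascent on the logistic log-likelihood. The toy genotypes and labels are hypothetical, and a real analysis would use an established solver rather than this bare-bones fitter.

```python
import itertools
import math

def interaction_features(genotypes):
    """Main-effect codes (0/1/2 risk alleles per SNP) plus every
    pairwise product s_i * s_j as an interaction term."""
    feats = list(genotypes)
    for i, j in itertools.combinations(range(len(genotypes)), 2):
        feats.append(genotypes[i] * genotypes[j])
    return feats

def fit_logistic(X, y, steps=3000, lr=0.5):
    """Plain batch gradient ascent on the logistic log-likelihood."""
    w = [0.0] * (len(X[0]) + 1)              # intercept + feature weights
    for _ in range(steps):
        grad = [0.0] * len(w)
        for x, yi in zip(X, y):
            z = w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
            p = 1.0 / (1.0 + math.exp(-z))
            grad[0] += yi - p
            for k, xi in enumerate(x):
                grad[k + 1] += (yi - p) * xi
        w = [wi + lr * g / len(X) for wi, g in zip(w, grad)]
    return w

def predict(w, x):
    z = w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical toy data: disease tracks the first SNP; the model also
# receives the s1*s2 interaction column and may use it if it helps.
raw = [(0, 0), (0, 2), (1, 1), (2, 0), (2, 2), (1, 0), (2, 1), (0, 1)]
y = [0, 0, 0, 1, 1, 0, 1, 0]
X = [interaction_features(g) for g in raw]
w = fit_logistic(X, y)
print(predict(w, interaction_features((2, 2)))
      > predict(w, interaction_features((0, 0))))  # True
```

The GCI, by contrast, would use only the marginal (per-SNP) columns, which is what makes it computable from summary statistics alone.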

Theoretical upper bound for disease-risk predictability
The number of SNPs used in our analysis reflects the current knowledge about the effect of common SNPs on the risk of a disease. These, however, lack many other factors such as epigenetic factors, rare variants, copy number variants, interactions, etc. The question remains as to how much more accurate we could potentially be when considering genetic factors alone. We shed light on this by comparing our empirical results to theoretical disease models that assume that the disease is affected by both environmental and genetic factors, and that the two factors are independent (see Methods). Our model assumes that there are many small genetic effects that are cumulative, and therefore the genetic factors include a normally distributed random variable. It takes into account the heritability and lifetime risk of the condition, resulting in a realistic extrapolation of the unknown genetic risk factors based on the currently known ones.
Formally, the theoretical model uses a phenotype variable P, and it assumes that P = G + E, where G is the genetic risk and E is the environmental risk, and that an individual will develop the condition in his/her lifetime if P > b for a fixed threshold b (see Methods for more details). We generated 100,000 random samples from the distribution of P based on our theoretical models for G and E and determined their disease status. We then assumed that G is known for each individual (but E is unknown), and generated a ROC curve for the samples using this information alone. This curve represents an optimal scenario where the genetic risk is entirely understood and can be measured correctly for every individual but environmental risk factors are completely unknown. We will refer to the area under the ROC curve in this case as the theoretical genetic maximum. Figure 1 shows the ROC curves for such a scenario and Table 2 gives their areas. We observe that the GCI area under the curve with currently known variants is much less than that of the optimal theoretical genetic models, which suggests that many additional unknown genetic variants and/or interactions are expected to affect these diseases.
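This theoretical genetic maximum can be approximated by a short simulation. The sketch below follows the liability model P = G + E described above; the heritability and lifetime risk values are illustrative (roughly Type 2 Diabetes-like), not fitted to any dataset.

```python
import math
import random

random.seed(0)
h2, lifetime_risk, n = 0.64, 0.25, 50_000     # illustrative values

# Liability model P = G + E with Var(G) = h2 and Var(E) = 1 - h2.
samples = []
for _ in range(n):
    g = random.gauss(0.0, math.sqrt(h2))
    p = g + random.gauss(0.0, math.sqrt(1.0 - h2))
    samples.append((g, p))

# Threshold b: the (1 - lifetime_risk) quantile of the liability P,
# so that the affected fraction matches the average lifetime risk.
b = sorted(p for _, p in samples)[int(n * (1 - lifetime_risk))]
labels = [p > b for _, p in samples]
genetic = [g for g, _ in samples]

# Rank-based AUC of G alone: the "theoretical genetic maximum"
# when G is fully known but E is completely unknown.
pairs = sorted(zip(genetic, labels))
n1 = sum(labels)
n0 = n - n1
rank_sum = sum(r for r, (_, lab) in enumerate(pairs, 1) if lab)
auc = (rank_sum - n1 * (n1 + 1) / 2) / (n1 * n0)
print(round(auc, 2))  # well above 0.5, but below 1 because E is unknown
```

Varying h2 and lifetime_risk reproduces the qualitative dependence of the maximum on heritability and lifetime risk noted in the text.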
Based on Figure 1, we conclude that there is room for improvement in predictive modeling that will most likely come through the discovery of additional genetic variants and gene-environment interactions for the three conditions discussed in this text. It is useful to know what percentage of the genetic factors have been captured to date. Under the assumption that all the major genetic factors have already been discovered and that there are no gene-gene or gene-environment interactions, we can estimate the number of variants that would suffice to obtain a ROC curve with an AUC as large as the theoretical genetic maximum. If we assume that the GWAS studies performed to date have sufficient density to identify all large effect size common variants in the genome, that all the unknown variants are common (minor allele frequency = 10%) yet of weak effect size, and that such variants contribute relative risks of 1.1 for the homozygous risk genotype and 1.05 for the heterozygous genotype, then our results show that under these assumptions the number of undiscovered risk factors is quite large (in the thousands). Furthermore, we observe that only about 6% of the genetic variance is explained by the known variants for Type 2 Diabetes, about 9% for Crohn's disease, and about 14% for Rheumatoid Arthritis.

Figure 1 (caption; panels B. Type 2 Diabetes, C. Rheumatoid Arthritis): In each plot, the black line corresponds to random expectation, the blue lines correspond to theoretical expectations (under the two disease models described in Methods) when the genetic variable is known, the red line corresponds to GCI, and the green line corresponds to logistic regression. doi:10.1371/journal.pone.0014338.g001
It is also reasonable to assume that additional large-effect size variants will be discovered through the use of next-generation technologies and take the form of rare/de novo nucleotide variants, copy number variants and epigenetic modification of the primary nucleotide sequence -and that it is likely that a blend of a few of these larger effects will account for the missing heritability together with a larger number of common and weak effect size variants.
Attempts to estimate the number of causal variants in complex diseases have been made in the past [19][20][21]. These attempts reach somewhat different conclusions than ours; specifically, they estimate the number of genetic effects yet to be found to be quite modest, even under the assumption of independence between genes and environment. The main methodological difference is that these previous approaches were published prior to the results achieved by GWAS studies. Thus, they do not make the assumption that the major common effects have already been found, and they do not take into account the heritability and lifetime risk. We note that [22] used a model similar to ours to investigate the relationship between the number of disease loci and the relative risk of the loci, and their results are broadly similar to ours; they use the prevalence of the disease instead of lifetime risk. It must be mentioned that inaccuracies in the heritability estimates can affect these numbers, but as long as they are not off by an order of magnitude, we expect the results to be qualitatively similar.

Theoretical effect of unknown SNP-SNP interactions
Our GCI score is based on the assumptions that all SNPs are in linkage equilibrium and that they have independent effects on the risk of the disease. As discussed above, the three examples studied here show no significant difference between the GCI model and a model in which pair-wise dependencies among the SNPs are included through logistic regression. This assumption may not always hold, since we know of some rare examples for which there is evidence of epistasis [23]. If these interactions are known, they can easily be incorporated into the GCI model by considering the interacting SNPs together as a combination. However, it is important to understand the effect of unknown SNP-SNP interactions on the multiplicative risk estimates.
In order to further explore the issue of interactions, we simulated datasets under a model in which a single pair of SNPs is interacting. Formally, the model can be described as follows. Let λ_i denote the relative risk of the disease for a particular combination of genotypes (g_i) and p denote the average lifetime risk. If all SNPs are independent, the total risk is proportional to λ_i = ∏_{j=1..n} λ_ij, where λ_ij denotes the relative risk for the j-th locus. In the interactions model, we assume that for a particular pair, the relative risk for some combinations of genotypes is c times larger than the product of their relative risks. For all other SNPs and for all other genotype combinations, relative risks are assumed to be multiplicative. Thus, for example, if SNPs x and y interact, then the relative risk for the pair is K = c·λ_ix·λ_iy for certain configurations of (g_ix, g_iy), and K = λ_ix·λ_iy for other combinations. The total risk in this case would be K·∏_{j≠x, j≠y} λ_ij.
We set the values of λ_ix, λ_iy for the interacting SNPs x and y so that the relative risks for each of these SNPs under univariate models are equal to what is observed in real data (given in Table 1). We assign the probability that an individual is a case to be P(disease | g_i) = C·λ_i, where C is a normalizing factor and λ_i is the relative risk of individual i based on the interactions model. We choose C so that the fraction of cases is close to the average lifetime risk of the disease.
Let RR, RN and NN denote the observed relative risks at any SNP for the risk-allele homozygote (2), the heterozygote (1) and the non-risk-allele homozygote (0) respectively, and let rr, rn and nn denote the corresponding genotype frequencies. Since λ_ij for any locus j can take only 3 possible values corresponding to the 3 possible genotypes, we denote these by λ_ij0, λ_ij1 and λ_ij2 respectively, and set λ_ij0 = 1 for all SNPs. We obtain the values of λ_ix1, λ_iy1, λ_ix2, λ_iy2 for SNPs x and y by solving the following system of equations:

RR_x = (c·rr_y·λ_ix2·λ_iy2 + c·rn_y·λ_ix2·λ_iy1 + nn_y·λ_ix2) / (rr_y·λ_iy2 + rn_y·λ_iy1 + nn_y)
RN_x = (c·rr_y·λ_ix1·λ_iy2 + c·rn_y·λ_ix1·λ_iy1 + nn_y·λ_ix1) / (rr_y·λ_iy2 + rn_y·λ_iy1 + nn_y)
RR_y = (c·rr_x·λ_iy2·λ_ix2 + c·rn_x·λ_iy2·λ_ix1 + nn_x·λ_iy2) / (rr_x·λ_ix2 + rn_x·λ_ix1 + nn_x)
RN_y = (c·rr_x·λ_iy1·λ_ix2 + c·rn_x·λ_iy1·λ_ix1 + nn_x·λ_iy1) / (rr_x·λ_ix2 + rn_x·λ_ix1 + nn_x)

Based on the risks in the interactions model, we assigned disease status labels to 100,000 randomly drawn samples. We used this simulated case-control data to plot ROC curves based on two approaches for risk assessment. First, we calculated the relative risk of an individual according to the true interactions model. Then, we assigned relative risks assuming the independence model. As observed in Figure 2 and in Table 3, we find that the ROC curves can differ marginally when the interaction factor is high (i.e. c = 10). However, it can be argued that strong deviations from the independence model will also be more detectable in genome-wide association studies. In particular, whole-genome association studies often report that SNP-SNP interactions were tested but were not found to be significant (e.g. [24]). Therefore, when no interactions have been reported in the literature for a set of SNPs, it seems unlikely that the classification accuracy of the multiplicative test will differ dramatically from that of the true model that includes interactions.

Measuring the Absolute Error in the Risk Estimate
The ROC curve serves as one metric for evaluating a diagnostic in that it provides a quantitative measure of the ability of the test to distinguish between unaffected and affected individuals. However, when estimating the lifetime risk, the ROC curve alone may not be sufficient if a score does not directly estimate the correct probabilistic measure (i.e. the probability of developing disease in one's lifetime) but instead computes some function of this probability. In particular, for any given pair of score functions, f_1(G) and f_2(G), the ROC curves of the functions will be identical as long as f_1 is a monotonic increasing function of f_2. For instance, we could simply assign f_2(G) = log(f_1(G)), and in this case using the scores f_1 and f_2 to estimate risk will give exactly the same ROC curves. However, these two functions may give very different lifetime risk estimates to individuals. Therefore, ROC curves alone are not sufficient for tests that report probabilistic risk. For quality assessment, we also need a more informative quantity: the absolute value of the relative error between the true risk probability and the estimated risk probability. The relative error is defined as the difference between the estimated and true risk probability divided by the true risk probability. Thus, the absolute value of the relative error is given by:

|Estimated Risk Probability - True Risk Probability| / True Risk Probability
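The distinction between ranking and calibration can be made concrete: any strictly monotone transform of a score leaves the ROC curve unchanged while its relative error can be large. A small sketch with simulated risks (all values hypothetical):

```python
import math
import random

def auc(scores, labels):
    """Rank-based AUC (Mann-Whitney); assumes no tied scores."""
    pairs = sorted(zip(scores, labels))
    n1 = sum(labels)
    n0 = len(labels) - n1
    rank_sum = sum(r for r, (_, lab) in enumerate(pairs, 1) if lab)
    return (rank_sum - n1 * (n1 + 1) / 2) / (n1 * n0)

random.seed(2)
true_risk = [random.uniform(0.05, 0.6) for _ in range(2000)]
labels = [random.random() < p for p in true_risk]

f1 = true_risk                      # the correctly calibrated probability
f2 = [math.sqrt(p) for p in f1]     # a strictly monotone transform of f1

# Identical rankings, hence identical ROC curves and AUCs ...
print(auc(f1, labels) == auc(f2, labels))  # True

# ... but very different absolute relative errors in the risk itself.
err = lambda est: sum(abs(e - t) / t for e, t in zip(est, true_risk)) / len(est)
print(err(f1), round(err(f2), 2))   # 0.0 versus a large systematic error
```

This is exactly why the relative-error criterion above is needed in addition to the AUC.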
Since the true probability of developing a disease is unknown, we simulated a scenario in which case-control data is used to calculate the GCI parameters (i.e. the relative risks), and then applied the GCI risk estimates to another independently simulated population. The disease model we used for the simulation assumes that the genetic factors of the disease can be decomposed into a small number of large effects and a large number of small effects that can be approximated by a normal distribution (see Methods). Since most diseases are diagnosed later in life, we introduced the age of onset of the disease into the model. For each individual determined to develop the disease based on the model, we chose the age of onset from a distribution for the age of onset (Normal, with mean = 50 and SD = 13). Thus, in our simulation, some of the controls may in fact be cases that had not been diagnosed at the time of the study. To create a realistic simulation of an age-matched case-control study, we repeatedly simulated the genetic and environmental factors, as well as the age of onset, for individuals, picking the age of each individual from a uniform distribution between 0 and 100. We generated 10,000 cases using this process, and for each case we generated an age-matched control by sampling conditioned on age, yielding 10,000 controls. We estimated the odds ratios for each SNP based on this case-control data, and then used these odds ratios to calculate the relative risks for each SNP associated with the disease, using our GCI methodology.
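The censoring effect of age of onset can be sketched in a few lines. The onset distribution N(50, 13) and the uniform age range follow the description above; the heritability and liability threshold are hypothetical stand-ins for the disease model.

```python
import math
import random

random.seed(3)
h2, threshold = 0.64, 0.8        # hypothetical heritability / liability cutoff

people = []
for _ in range(50_000):
    g = random.gauss(0.0, math.sqrt(h2))
    e = random.gauss(0.0, math.sqrt(1.0 - h2))
    lifetime_case = (g + e) > threshold          # will develop the disease
    onset = random.gauss(50.0, 13.0)             # age of onset, N(50, 13)
    age = random.uniform(0.0, 100.0)             # age at the time of the study
    observed_case = lifetime_case and age >= onset
    people.append((lifetime_case, observed_case))

lifetime = sum(1 for life, _ in people if life)
latent = sum(1 for life, obs in people if life and not obs)
# Close to half of the eventual cases have not yet reached onset, so
# they would be sampled as "controls" in a cross-sectional study.
print(round(latent / lifetime, 2))
```

This latent-case contamination of the control group is what makes odds ratios estimated from such a study differ from lifetime relative risks.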
The above procedure was used to generate a simulated set of relative risk values. We then generated 500 individuals randomly according to the theoretical disease model. Since the variables are known for each of these individuals, we know the correct genetic risk to develop the condition. We use these 'true risks' as a baseline for the accuracy measure. We compare the GCI based risk estimates to this baseline, as well as a variant of the GCI in which the relative risks are replaced by the odds ratios. We note that methods that calculate disease risk based on prevalence (e.g. [2]) will usually get relative risks that are close to the odds ratios.
In Figure 3, we plot the distribution of the absolute value of relative errors for a simulated disease with average lifetime risk of 25% and heritability of 64% (Figure 3a), and for a disease with average lifetime risk of 42% and heritability of 57% (Figure 3b). These values roughly correspond to the lifetime risk and heritability of Type 2 Diabetes and Myocardial Infarction, respectively. It is clear from the figure that there is a dramatic difference between the lifetime risks calculated using the relative risks and those calculated using the odds ratios. This may not be noticeable using a ROC curve, which only measures classification accuracy. Thus, using odds ratios, or a prevalence-based calculation of relative risk, generally inflates lifetime risk estimates, and under some circumstances can generate lifetime risk estimates larger than 100%; such estimates are not adequate for lifetime risk calculations, which is why our methodology is necessary.
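The direction of this bias can be illustrated with the standard odds-ratio-to-relative-risk conversion RR = OR / (1 - P0 + P0·OR) (the Zhang-Yu approximation, where P0 is the risk in the reference group): for common outcomes the odds ratio lies farther from 1 than the relative risk, so a product of odds ratios inflates the lifetime-risk estimate. The per-locus odds ratios below are hypothetical, and this conversion is a simple stand-in for the paper's own relative-risk estimation procedure.

```python
def or_to_rr(odds_ratio, p0):
    """Zhang-Yu conversion from an odds ratio to a relative risk,
    where p0 is the outcome risk in the reference (unexposed) group."""
    return odds_ratio / (1.0 - p0 + p0 * odds_ratio)

p0 = 0.25                      # common outcome: baseline lifetime risk 25%
ors = [1.4, 1.3, 1.3, 1.2]     # hypothetical per-locus odds ratios

risk_from_or = p0              # naively multiplying odds ratios
risk_from_rr = p0              # multiplying converted relative risks
for o in ors:
    risk_from_or *= o
    risk_from_rr *= or_to_rr(o, p0)

# The odds-ratio product overstates the lifetime risk; with a few more
# loci it would exceed 100%.
print(round(risk_from_or, 2), round(risk_from_rr, 2))
```

For a rare disease (P0 near 0) the two products coincide, which is why the distinction matters most for common, high-lifetime-risk conditions.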

Can the addition of environmental risk factors improve our current predictions?
In the previous sections, we used only the genetic information to estimate the risk of disease. In order to estimate the potential contribution of known environmental factors to disease prediction, we now consider the case where both environmental and genotypic data are used to estimate risk. Such an example was studied for the case of Type 2 Diabetes in [2]. Here, we demonstrate the utility of environmental factors across Type 2 Diabetes, Crohn's Disease and Rheumatoid Arthritis, which have very different heritability and average lifetime risk values. As in [2], we also assume that the risks and frequencies across all SNPs as well as across all environmental factors are independent and multiplicative. Based on this assumption, we generalized the GCI score for the case where environmental factors are also taken into account. We call the resulting statistic for lifetime risk Environmental-Genetic Composite Index (EGCI). The EGCI score (like the GCI score) is defined as the product of relative risks across all the markers and all of the environmental factors normalized by a constant. Note that when calculating the EGCI, the different levels/classes of any particular environmental factor are treated in exactly the same way as the different alleles of a marker in GCI. Thus, environmental factors are mathematically no different from additional markers. Table 4 gives the frequencies and relative risks of the environmental variables for the 3 diseases.
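Because environmental factor levels enter the EGCI exactly like marker genotypes, the generalization is a one-line extension of the GCI product. A minimal sketch, with hypothetical relative risks (the function name is ours, not from any library):

```python
def egci_score(lifetime_risk, genetic_rrs, environment_rrs):
    """EGCI sketch: the level of each environmental factor contributes
    one more relative risk to the same multiplicative product as the
    marker genotypes."""
    score = lifetime_risk
    for rr in list(genetic_rrs) + list(environment_rrs):
        score *= rr
    return score

# Hypothetical: baseline lifetime risk 25%, two risk genotypes, plus an
# elevated-BMI class carrying a relative risk of 2.
print(egci_score(0.25, [1.2, 1.1], [2.0]))  # 0.25 * 1.2 * 1.1 * 2.0
```

Dropping the environmental list recovers the plain GCI, which is what makes the two scores directly comparable in the ROC analysis below.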
We simulated the genotype and environmental factor values for a set of 100,000 individuals based on their known frequencies in the population (see Tables 1 and 4). For every individual, we randomly and independently generated each genotype and environmental variable using these frequencies (in particular, we used a uniform random number between 0 and 1 to do this). We then randomly assigned a disease status to all individuals based on the lifetime risk probabilities calculated from the generalized multiplicative model (i.e. EGCI). Next, we compared the predictive power of the pure genetics-based GCI score to the new generalized EGCI score. The ROC curves for Type 2 Diabetes, Crohn's Disease and Rheumatoid Arthritis are shown in Figure 4. The added value of environmental factors is not dramatic for Crohn's Disease and Rheumatoid Arthritis; however, it is substantial for Type 2 Diabetes. This is driven by the fact that Body Mass Index crucially affects the risk for Type 2 Diabetes (with a relative risk of 42.1 when BMI > 35 [25]). Note that for a disease such as Crohn's disease we do not expect environmental factors to play a major role, since the heritability of this condition is roughly 80%.

GCI and EGCI for Type 2 Diabetes case-control data from the GENEVA study
The GENEVA study refers to the Gene Environment Association Studies initiative (www.genevastudy.org) funded by the trans-NIH Genes, Environment, and Health Initiative (GEI). The goal of this study is to identify novel genetic factors that contribute to Type 2 Diabetes Mellitus through a large-scale genome-wide association study of well-characterized cohorts of nurses and health professionals. In this study, around 1 million SNPs were genotyped in 2,712 cases with Type 2 Diabetes and 3,179 controls. A variety of environmental variables were also collected for these individuals. We illustrate the performance of the GCI and EGCI methodology using 15 disease SNPs present in the GENEVA dataset. We only used unrelated individuals of Caucasian ancestry for this analysis. For calculating the EGCI, we considered 2 environmental variables, namely Body Mass Index (BMI) and smoking status (Table 5 gives their relative risks). The results obtained are shown in Figure 5 and the SNPs used are listed in Table 6.

Discussion
The Human Genome Project [26], the HapMap project [27], and related initiatives have resulted in a reference human genome sequence, a catalog of common genetic variation and a haplotype map of several reference populations. Furthermore, this information combined with cost-effective technologies to test associations between variations throughout the genome and traits and diseases of all sorts, has resulted in dozens of common variants shown to be unequivocally statistically associated with the risk of common diseases. These common variants can be used much like population-derived environmental risk factor data in assessing probabilistic pre-symptomatic risk of disease.
We have presented a new method for the estimation of an individual's lifetime risk based on genetic data through a genetic score function (the GCI). The GCI, like all estimators of a particular quantity, requires a set of assumptions that may bias the risk estimates. In particular, the GCI score assumes that the allele frequencies of the causal SNPs and their effect sizes are known, and that all the SNPs are independent of each other. We show through simulation studies and by the analysis of the WTCCC data that moderate SNP-SNP interactions have almost no effect on the power of the multiplicative GCI score. However, in principle, strong non-additive effects between variants might affect the risk estimates, and thus care has to be taken when interpreting the results. In most scenarios, we expect that such effects will likely be discovered prior to the use of the GCI and can be incorporated in the risk calculation. We therefore view this as a minor problem, especially given that no significantly strong SNP-SNP interactions have been uncovered in the whole-genome association studies performed over the past several years.
We used the ROC curve analysis and the heritability of each of the conditions we considered to find the total genetic variation explained by known variants, compared to the expected genetic variation based on heritability. We find that current scientific knowledge can explain approximately 6%-14% of the total genetic variation for these conditions. This suggests that the risk estimates provided by the GCI may vary considerably in the future, as more genetic variants are found and used for risk estimation (e.g. see [24]). The fact that only a small fraction of the genetic variants have been found to date suggests that the variance of the risk calculated by the GCI is still large; however, the GCI score aims at estimating the expected frequency of individuals with a given genetic load that will develop the condition during their lifetime, and the accuracy of the estimate of expectation will not be affected by the number of unknown variants.
It is clear that next-generation technologies will be used in study designs similar to GWAS to identify additional heritable risk factors for CNCDs. As each new genetic association is validated to the appropriate industry thresholds, this new genetic risk information can be added to the GCI in a scalable fashion, on a disease-by-disease basis, improving the accuracy of the GCI in real time.
Given these interpretations of the GCI score, it is informative to use such a score to estimate the risk of an individual based on his or her genetic data. The medical benefits of such individualized knowledge are intuitive, but they have to be clinically proven through prospective studies. The main open question is whether individuals will benefit from changes in behavior, earlier diagnosis, or an individualized course of treatment based on their genetic information for actionable CNCDs. We believe that tools such as the GCI score will facilitate such studies and help transition us into the era of personalized preventive medicine.

Ethics Statement
The datasets used were approved by the relevant institutional review boards at Navigenics, Inc. and the University of California, Davis.

Methods
We consider a disease for which k risk loci have been identified. As in [2,3], we assume that the different loci act independently, so that Pr(g_1, ..., g_k | D) = ∏_i Pr(g_i | D), where g_i is the genotype of an individual at locus i, and D is the event that the individual will develop the disease during his or her lifetime. As noted by [2], it is straightforward to extend this model to cases where some interactions are known. Previous methods treat D as the event that the individual is currently diseased, so the risk they estimate is a snapshot in time. Such risk is related to the overall lifetime risk of the disease, but with obvious differences; the difference can be quite dramatic in some cases, as we show in the Results section. When calculating the risk across multiple SNPs for an individual with genotypes (g_1, ..., g_n), we are interested in the probability Pr(D | g_1, ..., g_n). Using Bayes' law and the independence assumption,

Pr(D | g_1, ..., g_n) = Pr(D) · ∏_{i=1}^{n} [ Pr(D | g_i) / Pr(D) ]

[Figure 5: ROC curves for combined genetic (Table 6) and environmental factors (BMI, smoking) versus genetic factors alone for predicting Type 2 Diabetes in 2,600 cases and 3,000 controls in the GENEVA data. The AUCs of the two curves are 0.727 and 0.565, respectively. The relative risks for BMI and smoking are given in Table 5. doi:10.1371/journal.pone.0014338.g005]

In order to estimate the lifetime risk of a specific individual, we therefore need an estimate of the average lifetime risk Pr(D) across the entire population, and of the risk Pr(D | g_i) of developing the disease across the lifetime of an individual with genotype g_i. The former has been estimated for a wide range of conditions using prospective studies [16-18]. The latter can be estimated from case-control studies using our method, as described below.

Odds ratios vs. relative risk
In the epidemiology literature, the relative risk is often considered an intuitive and informative measure of risk. The relative risk is defined as λ_i = Pr(D | a_i) / Pr(D | a_0), where a_0, a_1, and a_2 correspond to the genotypes with 0, 1, and 2 risk alleles. If the relative risks are known, we can estimate Pr(D | a_i) using:

Pr(D) = Pr(D | a_2) Pr(a_2) + Pr(D | a_1) Pr(a_1) + Pr(D | a_0) Pr(a_0)   (1)

Equation 1, together with the relative risks, provides three independent equations in three unknowns, since Pr(a_i) can be found from a reference population and Pr(D) is known. Unfortunately, the relative risk cannot be directly calculated from case-control studies and whole-genome association studies; it can usually be estimated only through prospective studies in which a set of healthy individuals is followed over a long period of time. In contrast, case-control studies normally report odds ratios. The odds ratio is the ratio between the odds of carrying the risk allele in cases versus controls. For rare diseases, the odds ratio is a good approximation of the relative risk; for common diseases, however, it can be a misleading estimate of risk, since the odds ratio may be quite high even when the increase in risk is minor.
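When the relative risks are known (e.g., from a prospective study), Equation 1 can be solved in closed form, because Pr(D | a_i) = λ_i · Pr(D | a_0). The following minimal Python sketch illustrates the calculation; the function name `risk_from_relative_risks` and its argument layout are our own illustration, not code from the study.

```python
def risk_from_relative_risks(lifetime_risk, freqs, rel_risks):
    """Solve Equation 1 for the per-genotype lifetime risks Pr(D|a_i).

    lifetime_risk: average lifetime risk Pr(D) in the population.
    freqs[i]:      frequency Pr(a_i) of genotype a_i in a reference population.
    rel_risks[i]:  relative risk lambda_i of genotype a_i (rel_risks[0] == 1).
    """
    # Pr(D) = Pr(D|a_0) * sum_i f_i * lambda_i, so Pr(D|a_0) follows directly.
    p0 = lifetime_risk / sum(f * rr for f, rr in zip(freqs, rel_risks))
    return [p0 * rr for rr in rel_risks]
```

For example, with Pr(D) = 0.1, Hardy-Weinberg genotype frequencies (0.49, 0.42, 0.09), and relative risks (1, 1.3, 1.69), the returned risks reproduce both the given relative risks and the population average Pr(D).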
As previously noted [2], one can estimate the relative risks from the odds ratios by solving a set of equations. However, the equations proposed in [2] assume that the control population will never develop the disease. In the context of lifetime risk estimation this assumption is no longer valid, since a subset of the control population might eventually develop the disease.

Calculating Risk in the Presence of Diseased Controls
We now turn to the calculation of Pr(D | g_i) given that an α fraction of the controls will eventually develop the disease during their lifetime. We consider a locus at which m+1 different alleles are present. This allows us to handle general scenarios in which g_i may represent any number of interacting SNPs, with m + 1 = 3^s, where s is the number of SNPs represented by g_i. We denote the m+1 possible alleles by a_0, a_1, ..., a_m, where a_0 is the non-risk allele, and their respective frequencies in the general population by f_0, f_1, ..., f_m. Given that an α fraction of the controls will eventually develop the condition, the allele distribution among controls is a mixture of the case and non-case distributions, Pr(a_i | control) = α Pr(a_i | D) + (1 − α) Pr(a_i | ¬D), and we can write the observed odds ratios as:

OR_i = [ Pr(a_i | D) / Pr(a_0 | D) ] / [ Pr(a_i | control) / Pr(a_0 | control) ]   (2)

Substituting Pr(D | a_i) = λ_i Pr(D | a_0) into Equation 2 and combining with Equation 1, we obtain a single equation in the unknown Pr(D | a_0). For a fixed α, we can solve this equation using a binary search on the variable Pr(D | a_0); there is exactly one solution between 0 and Pr(D), since the right-hand side of this equation is an increasing function of Pr(D | a_0), and binary search is guaranteed to find that solution.
Generally, the value of α is unknown and has to be determined from the age characteristics of the study population. For instance, if the control population is a sample from the general population, then α should be taken as the average lifetime risk of the disease; if the control population was chosen so that its age range is past the age of onset of the disease, α should be close to 0. When case-control genotype data are given, one can use maximum likelihood estimation to calculate α.
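The bisection procedure described above can be sketched in Python as follows. This is a minimal illustration under the assumptions stated in the text (each risk allele does not reduce risk, so OR_i ≥ 1, and the left-hand side of Equation 1 is increasing in Pr(D | a_0)); the names `solve_risk` and `mix` are ours, not the paper's.

```python
def solve_risk(odds_ratios, freqs, lifetime_risk, alpha):
    """Recover per-allele lifetime risks Pr(D|a_i) from case-control odds
    ratios when an alpha fraction of controls will eventually develop the
    disease. freqs[i] is the population frequency of allele a_i (a_0 is the
    non-risk allele); odds_ratios[i] is the OR of a_i versus a_0 (OR >= 1)."""
    PD = lifetime_risk

    def mix(p):
        # Allele weight in the control pool: an alpha share comes from future
        # cases, a (1 - alpha) share from lifelong non-cases.
        return alpha * p / PD + (1 - alpha) * (1 - p) / (1 - PD)

    def observed_or(p0, pi):
        return (pi / p0) * mix(p0) / mix(pi)

    def solve_pi(p0, target):
        # observed_or is increasing in pi, so bisect on pi in [p0, 1).
        lo, hi = p0, 1.0
        for _ in range(100):
            mid = (lo + hi) / 2
            if observed_or(p0, mid) < target:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    # Outer bisection on Pr(D|a_0): Equation 1 requires
    # sum_i f_i * Pr(D|a_i) = Pr(D), and that sum is increasing in Pr(D|a_0).
    lo, hi = 0.0, PD
    for _ in range(100):
        p0 = (lo + hi) / 2
        risks = [p0] + [solve_pi(p0, t) for t in odds_ratios]
        if sum(f * p for f, p in zip(freqs, risks)) < PD:
            lo = p0
        else:
            hi = p0
    return risks
```

Note that when the controls are a random population sample (α equal to the average lifetime risk), the odds ratio collapses to the relative risk, and the procedure recovers the same answer as the closed-form solution from Equation 1.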

Calculating the GCI score
The GCI method essentially provides a way to compute the relative risk of an individual compared with an individual carrying the non-risk alleles at each disease-associated marker. To calculate the lifetime risk, we take the product of the relative risks across all loci (the overall relative risk of the individual under the multiplicative model), multiply it by the average lifetime risk of the disease in the population, and divide by the average overall relative risk of the population. To approximate the average relative risk of the population, we assume that the SNPs at different loci are independent of one another (i.e., in linkage equilibrium). Under this assumption, the average overall relative risk of the population is equal to the product of the average relative risks at each disease-associated marker.
If all marker effects are independent, the relative risk of individual i is λ_i = ∏_{j=1}^{n} λ_ij, where λ_ij denotes the relative risk for the j-th locus. Let Pr(D) denote the average lifetime risk of the disease in the population. Then the GCI lifetime risk probability, or GCI score, of an individual i is:

GCI_i = Pr(D) · ∏_{j=1}^{n} λ_ij / ∏_{j=1}^{n} ( Σ_{k=0}^{m} f_jk λ_jk )

Here m+1 alleles are possible at each marker locus, λ_jk denotes the relative risk of the k-th allele of the j-th locus, and f_jk denotes its frequency in the sample.
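The GCI score formula above amounts to a few lines of code. The sketch below is our own illustration of the formula (the name `gci_score` and the argument layout are hypothetical), assuming per-locus allele frequencies and relative risks are given and loci are in linkage equilibrium.

```python
def gci_score(lifetime_risk, freqs, rel_risks, genotype):
    """GCI lifetime-risk score under the multiplicative model.

    lifetime_risk:   average lifetime risk Pr(D) in the population.
    freqs[j][k]:     frequency f_jk of allele k at locus j.
    rel_risks[j][k]: relative risk lambda_jk of allele k at locus j.
    genotype[j]:     index of the allele the individual carries at locus j.
    """
    score = lifetime_risk
    for j, k in enumerate(genotype):
        # Individual's relative risk at locus j over the population-average
        # relative risk at that locus.
        mean_rr = sum(f * rr for f, rr in zip(freqs[j], rel_risks[j]))
        score *= rel_risks[j][k] / mean_rr
    return score
```

A useful sanity check is that averaging the score over the genotype distribution at a locus recovers the population lifetime risk Pr(D), as the formula intends.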

Theoretical Disease Models
We compared the GCI score to the optimal risk scores calculated under two different theoretical disease models. These models assume that the disease is affected by both environmental and genetic factors, and that the two factors are independent of each other. We write the phenotype as P = G + E, where G is the genetic variable and E is the environmental variable. Our first model assumes that both G and E are normally distributed, with standard deviations σ_G and σ_E respectively, and that an individual will develop the condition in his or her lifetime if P > a for a fixed threshold a. Similar models have often been used in heritability calculations [28]. We fix σ_G, σ_E, and a using the constraints that h = σ_G² / (σ_G² + σ_E²) and that the average lifetime risk equals Pr(P > a). Since the heritabilities and average lifetime risks are known for each of the conditions we test, we can set the parameters of the models according to the disease. For this disease model, we can show analytically that the theoretical genetic maximum of the AUC (i.e., when G is known but E is unknown) depends only on the heritability and the average lifetime risk of the disease (see next section), and not on the choice of σ_G, σ_E, or a, which are difficult to estimate. The second model, a variant of the first, assumes that G = Σ_i l_i X_i + G₁, where G₁ is normally distributed with standard deviation σ_G₁ and X_i ~ B(2, p_i) is binomially distributed. Here the X_i correspond to SNPs with large effects and G₁ represents many other small genetic effects; if there are enough small genetic effects, we expect the asymptotic behavior of their sum to be approximately normal. By setting the parameters l, σ_G₁, and p appropriately, we can control the relative risks of the large-effect SNPs. We tune these parameters so that the relative risks are close to the values observed in Table 1 (see below).
As for the previous model, we can show that when G is known (but E is unknown) and the relative risks and risk-allele frequencies of the large-effect SNPs are fixed, the area under the ROC curve for the second model depends only on the heritability and the average lifetime risk of the disease (see below).
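The behavior claimed for model 1, a genetic ceiling on the AUC determined by heritability and average lifetime risk alone, can also be checked numerically. The following Monte-Carlo sketch is our own illustration (the name `max_auc` and the parameters `n` and `seed` are assumptions, and the total liability variance is normalized to 1): it scores individuals by G alone and computes the AUC via the Mann-Whitney rank statistic.

```python
import random
from statistics import NormalDist

def max_auc(h2, lifetime_risk, n=100_000, seed=0):
    """Monte-Carlo estimate of the theoretical genetic maximum of the AUC
    under liability-threshold model 1: P = G + E with Var(G) = h2 and
    Var(E) = 1 - h2 (total liability variance normalized to 1), and
    disease iff P > a, where a is set so that Pr(P > a) = lifetime_risk."""
    rng = random.Random(seed)
    a = NormalDist().inv_cdf(1 - lifetime_risk)  # threshold fixing Pr(P > a)
    g_sd, e_sd = h2 ** 0.5, (1 - h2) ** 0.5
    scored = []
    for _ in range(n):
        g = rng.gauss(0, g_sd)
        case = g + rng.gauss(0, e_sd) > a
        scored.append((g, case))
    # AUC = Pr(a random case's G exceeds a random control's G), computed
    # from the Wilcoxon/Mann-Whitney rank sum of the cases.
    scored.sort()
    rank_sum = sum(r for r, (_, case) in enumerate(scored, 1) if case)
    n1 = sum(case for _, case in scored)
    n0 = n - n1
    return (rank_sum - n1 * (n1 + 1) / 2) / (n1 * n0)
```

Re-running the simulation with different (σ_G, σ_E) pairs that share the same h and lifetime risk should, up to Monte-Carlo noise, return the same AUC, which is the point of the analytic claim.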
Proof for theoretical disease model 1
In this section we show that the theoretical genetic maximum of the area under the ROC curve for model 1 depends only on the average lifetime risk (ALTR) and the heritability of the disease. Let σ_e denote the standard deviation of the environmental variable and σ_g the standard deviation of the genetic variable. In model 1, both the genetic (G) and environmental (E) variables are normally distributed. The theoretical maximum of the ROC curve is obtained when the genetic variable is known exactly while the environmental variable is unknown. An individual is a true case if G + E > a, and a true control otherwise. For any cutoff chosen for the genetic variable, the individuals above that cutoff are called cases and the rest controls. The true positive fraction (TPF) is the fraction of true cases that are called as cases