
Characteristic Gene Selection via Weighting Principal Components by Singular Values

  • Jin-Xing Liu,

    Affiliations: Bio-Computing Research Center, Shenzhen Graduate School, Harbin Institute of Technology, Shenzhen, Guangdong, China, College of Information and Communication Technology, Qufu Normal University, Rizhao, Shandong, China

  • Yong Xu ,

    laterfall2@yahoo.com.cn

    Affiliations: Bio-Computing Research Center, Shenzhen Graduate School, Harbin Institute of Technology, Shenzhen, Guangdong, China, Key Laboratory of Network Oriented Intelligent Computation, Shenzhen Graduate School, Harbin Institute of Technology, Shenzhen, Guangdong, China

  • Chun-Hou Zheng,

    Affiliation: College of Electrical Engineering and Automation, Anhui University, Hefei, Anhui, China

  • Yi Wang,

    Affiliation: School of Mechanical Engineering and Automation, Shenzhen Graduate School, Harbin Institute of Technology, Shenzhen, Guangdong, China

  • Jing-Yu Yang

    Affiliation: School of Computer Science and Technology, Nanjing University of Science and Technology, Nanjing, Jiangsu, China


Abstract

Conventional gene selection methods based on principal component analysis (PCA) use only the first principal component (PC) of PCA or sparse PCA to select characteristic genes. These methods implicitly assume that the first PC plays a dominant role in gene selection. However, in a number of cases this assumption does not hold, so conventional PCA-based methods often give poor selection results. To improve the performance of PCA-based gene selection, we propose a gene selection method that weights the PCs by their singular values (WPCS). Because different PCs differ in importance, the singular values are exploited as weights to represent the influence of each PC on gene selection. ROC curves and AUC statistics on artificial data show that our method outperforms the state-of-the-art methods. Moreover, experimental results on real gene expression data sets show that our method can extract more characteristic genes responding to abiotic stresses than conventional gene selection methods.

Introduction

The growth of plants is greatly affected by a variety of abiotic stresses, such as cold, drought, salt, heat, UV-B light, and osmotic pressure. In response to these stresses, plants have evolved a number of defense mechanisms that increase tolerance to adverse conditions. The underlying concept is that a specific set of interacting genes responds to each abiotic stress. Understanding abiotic stress responses is therefore considered one of the most important topics in plant science [1].

To obtain the characteristic genes responding to these stresses, many conventional experimental methods have been used, such as RT-PCR [2], [3] and Northern blotting [4], [5]. RT-PCR can accurately localize gene expression in tissues or cells, and Northern blotting can reveal information about the detected genes. However, both methods share a serious limitation: only a limited number of genes can be studied simultaneously. To overcome this limitation, gene microarray technology was developed [6]–[8], which makes it possible to monitor gene expression levels on a genomic scale [9], [10].

With the rapid development of gene microarray technology, efficiently analyzing gene expression data has become a matter of great urgency. During the last decade, feature selection from gene expression data has been extensively studied. The most commonly used methods first calculate a score for each gene and then select the genes with high scores [11]. These methods are often denoted univariate feature selection (UFS). The main virtues of UFS are that it is (a) intuitive and easy to understand; (b) computationally simple; and (c) fast [12]. A common disadvantage of UFS-based methods is that each feature is considered separately, thereby ignoring feature dependencies. To address this problem, multivariate feature selection (MFS), also known as dimension reduction, was introduced [13]. MFS uses all the gene expression data simultaneously to select genes. Until now, many mathematical methods for MFS have been applied to gene expression data analysis. For example, Park et al. gave a theoretical analysis of the feature extraction capability of class-augmented PCA [14]. Ma et al. used PCA to identify differential gene pathways [15]. De Haan et al. used PCA to analyze microarray data [16]. Musumarra et al. used PLS to identify genes for new diagnostics [17]. Boulesteix et al. provided a systematic comparison of PLS methods for the analysis of gene data [18].

Figure 1. ROC curves for artificial data. (SNR denotes the signal-to-noise ratio).

http://dx.doi.org/10.1371/journal.pone.0038873.g001

However, classical methods such as PCA and PLS still have drawbacks: the principal components (PCs) of PCA and the latent components (LCs) of PLS are usually dense, which makes it difficult to interpret them without subjective judgment. To overcome these drawbacks, many mathematical tools have been devised to reduce the complexity of the data. Among them, sparse methods have a significant advantage while giving up little statistical efficiency. For example, Zou et al. proposed sparse PCA using the lasso [19]. In [20], Journée et al. described a generalized power method for sparse PCA. Lai et al. used sparse local discriminant projections for feature extraction [21]. Moreover, many sparse methods have been widely used for gene expression data analysis. Luss et al. used SPCA for clustering and feature selection [22]. In [23], Witten et al. proposed a penalized matrix decomposition, which Liu et al. used to analyze plant gene expression data [24]. Le Cao et al. used sparse PLS discriminant analysis for biologically relevant feature selection [25].

Though sparse methods are useful, they use only the first LC of PLS or the first PC of SPCA to select features. For example, Boulesteix et al. used the first LC of PLS to select important genes [18]. Liu et al. used the first PC of SPCA for characteristic gene selection [26]. These methods implicitly assume that the first component of PLS or SPCA plays a dominant role in gene selection. However, we find that in a number of cases this assumption is not satisfied, and conventional PCA-based gene selection methods then give poor results. Actually, not only the first LC or PC but also the remaining LCs or PCs contain important information for gene selection [27]. So if only the first LC or PC is used to select genes, poor results may be obtained because important information is lost.

In this paper, to select characteristic genes, a novel method is proposed that weights the PCs of SPCA by the singular values (WPCS). First, the PCs of SPCA and the singular values are calculated. Second, using the singular values as the weights of the PCs, the weighted PCs (WPC) are obtained. Then, as the absolute value of the i-th entry of each WPC indicates, to some extent, the importance of the i-th gene, the sum of the absolute values of the i-th entries across all WPCs is taken as the measure of importance of the i-th gene and used to select features. The genes corresponding to the largest sums are selected as characteristic genes. The experimental results show that our method is efficient and powerful for gene selection. Our work makes the following contributions: first, it proposes, for the first time, the method of weighting PCs by singular values for gene selection; second, it motivates the proposed method from the viewpoint of minimizing reconstitution error; third, it reports a large number of gene selection experiments.

Results and Discussion

In this section, our proposed WPCS method is compared with the following existing methods: (a) the SPCA-1 method, which uses only the first PC of SPCA (proposed by Journée et al. [20]) to identify the characteristic genes; (b) the SPCA-2 method, which uses the first two PCs of SPCA; and (c) the PLS method, which uses partial least squares regression (proposed by Boulesteix et al. [18]).

First, these methods are run on artificial data. Then, they are used to extract the characteristic genes responding to abiotic stresses from real gene expression data.

Simulation on Artificial Data

To investigate the performance of the methods, the average receiver operating characteristic (ROC) curves are shown in Figure 1 for six different SNRs.

Figures 1(A) and 1(B) show that our WPCS and the competing methods can identify the patterns even at very low SNR. As shown in Figure 1, across different SNRs, WPCS outperforms the other methods. For example, at SNR = 1.0, the PLS method is dominated by SPCA-2 and SPCA-1, and our WPCS achieves the best results.

The area under the curve (AUC) statistics are listed in Table 1, from which we can conclude that, at the same SNR, the methods in ascending order of accuracy are PLS, SPCA-1, SPCA-2, and WPCS.
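As an aside on how such AUC values can be computed, the short sketch below (not from the paper; the data are illustrative) scores a ranked gene list against a known ground truth using the rank-sum identity, which is equivalent to integrating the ROC curve:

```python
import numpy as np

def auc_score(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity.

    scores: per-gene relevance scores (higher = more likely characteristic).
    labels: 1 for true characteristic genes, 0 otherwise.
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # Count how often a positive gene outranks a negative one (ties count 0.5).
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# A perfect ranking gives AUC = 1.0; a fully reversed one gives 0.0.
print(auc_score([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0]))  # -> 1.0
```

An AUC of 0.5 corresponds to random ranking, which is why the table values above are all compared against that baseline.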

From the experiments on artificial data, a conclusion can be drawn that the WPCS method outperforms the other methods for feature selection.

Table 8. The numbers of genes responding to light stimulus (GO: 0009416) in root samples.

http://dx.doi.org/10.1371/journal.pone.0038873.t008

Gene Ontology (GO) Analysis

The Gene Ontology (GO) Term Enrichment tool can be used to help discover what a set of genes may have in common [28]. GOTermFinder is a web-based tool that finds the significant GO terms shared among a list of genes. The analysis provided by GOTermFinder gives significant information for the biological interpretation of high-throughput experiments. In this paper, our proposed method is evaluated by GOTermFinder [29], which is publicly available at http://go.princeton.edu/cgi-bin/GOTermFinder. Its threshold parameters are set as follows: maximum p-value = 0.01 and minimum number of gene products = 2.
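The significance test behind such GO term enrichment is based on the hypergeometric distribution. A minimal sketch of that computation follows (illustrative only; the tool itself also applies multiple-testing corrections not shown here, and the toy counts are hypothetical):

```python
from math import comb

def hypergeom_enrichment_p(N, K, n, k):
    """P-value that at least k of n selected genes carry a GO term,
    when K of the N background genes carry it (hypergeometric upper tail).
    """
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)

# Hypothetical numbers: 20 of 300 selected genes annotated to a term
# that covers 547 of 29887 background genes (expected ~5.5 by chance).
p = hypergeom_enrichment_p(29887, 547, 300, 20)
print(p < 0.01)  # -> True
```

A term passes the threshold above when this p-value is at most 0.01 and at least two selected gene products carry the annotation.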

Table 9. Genes responding to light stimulus (GO: 0009416) in root samples selected by WPCS but not by the other methods.

http://dx.doi.org/10.1371/journal.pone.0038873.t009

Here, only the main GO results are given. Figure 2 shows the sample frequency of response to stimulus (GO: 0050896) given by the four methods. From Figure 2(A), the WPCS method outperforms the others in all the shoot-sample data sets with six different stresses. Figure 2(B) shows that only in the drought-stress data set of root samples is our method dominated by the SPCA-1 and SPCA-2 methods. In the other data sets, our method is superior to the others.

Figure 3(A) shows the sample frequency of response to stress (GO: 0006950) in shoot samples. It can be seen that only in the drought-stress data set is the PLS method slightly superior to ours. In the other data sets, our method is superior to the other methods. Figure 3(B) shows that only in the drought-stress data set of root samples does our WPCS give a result similar to those of the SPCA-1 and SPCA-2 methods, while exceeding that of the PLS method. In the other data sets, our WPCS method surpasses the others.

Figure 4. Graphical depiction of SPCA of a matrix X with factor scores F and PCs V.

In this figure, the data matrix X is decomposed into the factor scores F and the PCs V, with X = F V^T. The j-th row of V corresponds to the j-th gene, and V transforms the original data matrix X into the factor scores F.

http://dx.doi.org/10.1371/journal.pone.0038873.g004

The remarkable results are listed in Tables 2–5. The numbers of genes responding to stimulus (GO: 0050896) selected by the four methods in shoot and root samples are listed in Table 2 and Table 3, respectively.

As Table 2 shows, in shoot samples the WPCS method outperforms the others in all the data sets with six different stresses. As Table 3 shows, in root samples the WPCS method is dominated by SPCA-2 only in the drought-stress data set. For the other stress data sets, WPCS outperforms the competing methods.

Table 4 and Table 5 give the numbers of genes and the P-values of response to stress (GO: 0006950) selected by the four methods in shoot and root samples, respectively.

To sum up, for all the data sets except the drought-stress data set, our method is superior to the other methods. For the drought-stress data set in shoot samples, only the PLS method slightly surpasses our WPCS method.

To further study the characteristic genes closely related to specific stresses, the cold stress in shoot samples and the UV-B stress in root samples are analyzed. Table 6 lists the numbers of genes responding to cold (GO: 0009410) in shoot samples selected by these methods. The background sample frequency of response to cold (GO: 0009410) is 0.9% (276/29887). As Table 6 shows, our method selects more such genes than the others.

In detail, we compare the genes selected by WPCS with those selected by the other methods. The genes selected by WPCS but missed by the other methods are listed in Table 7. As Table 7 shows, the functions of the genes selected by WPCS are closely related to cold stress.

Table 8 gives the numbers of genes responding to light stimulus (GO: 0009416) in root samples selected by these methods. The background sample frequency of response to light stimulus (GO: 0009416) is 1.8% (547/29887).

As Table 8 shows, WPCS selects more such genes than the others. Moreover, we compare the genes selected by WPCS with those selected by the other methods. The genes selected by WPCS but missed by the others are listed in Table 9. As Table 9 shows, the functions of the genes selected by WPCS are closely related to UV-B stress.

From the experiments and analyses on gene expression data, a conclusion can be drawn that the WPCS method is efficient and powerful for gene selection.

Conclusion

In this paper, a novel gene selection method, WPCS, is proposed, which uses the PCs weighted by the SVs as the basis of selection. WPCS works as follows. First, it obtains the PCs of SPCA and the SVs. Second, using the SVs as the weights of the PCs, it obtains the WPC. Then, it sums the absolute values of each row of the WPC and sorts the sums in descending order. Finally, it selects the genes corresponding to the largest sums as the characteristic genes. A large number of experiments on artificial data and gene expression data demonstrate that the proposed WPCS method outperforms state-of-the-art gene selection methods. For gene expression data, WPCS can extract more characteristic genes responding to abiotic stresses than the other methods.

Materials and Methods

Artificial Data

The artificial data lie in R^2000 and are generated as follows. Four 2000-dimensional vectors with disjoint supports are chosen as the first four eigenvectors of the covariance matrix. To make these four eigenvectors dominate, their eigenvalues are set much larger than the remaining ones. A 2000-dimensional noise matrix is then added with different signal-to-noise ratios (SNRs). The simulation scheme in [30] is used to generate the artificial data, with ten samples in each test.
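A generic version of this simulation scheme can be sketched as follows. The supports, eigenvalues, and SNR below are illustrative stand-ins, not the paper's exact settings:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n_samples = 2000, 10

# Four sparse, mutually orthogonal 2000-dimensional eigenvectors with
# disjoint supports (here: each constant on a block of 50 entries).
V = np.zeros((p, 4))
for k in range(4):
    V[k * 50:(k + 1) * 50, k] = 1.0
V /= np.linalg.norm(V, axis=0)

eigvals = np.array([400.0, 300.0, 200.0, 100.0])  # illustrative dominant eigenvalues
scores = rng.standard_normal((n_samples, 4)) * np.sqrt(eigvals)
signal = scores @ V.T                             # samples from the planted model

snr = 1.0                                         # one of the tested SNR levels
noise = rng.standard_normal((n_samples, p)) * (signal.std() / np.sqrt(snr))
X = signal + noise                                # final artificial data matrix
print(X.shape)  # -> (10, 2000)
```

Lowering `snr` buries the four planted eigenvectors deeper in noise, which is what the ROC comparison in Figure 1 varies.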

Gene Expression Data

The raw data comprise two classes, roots and shoots, for each stress, and were downloaded from NASCArrays [http://affy.arabidopsis.info/] [31]. The reference numbers are: control, NASCArrays-137; cold stress, NASCArrays-138; osmotic stress, NASCArrays-139; salt stress, NASCArrays-140; drought stress, NASCArrays-141; UV-B light stress, NASCArrays-144; and heat stress, NASCArrays-146. The sample numbers of each stress type are listed in Table 10. There are 22810 genes in each sample. The data are adjusted for background optical noise using the GC-RMA software by Wu et al. [32] and normalized using quantile normalization. The results of GC-RMA are gathered in a matrix for further processing.
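The quantile normalization step can be sketched in a few lines. This is a simplified stand-in for the R/Bioconductor pipeline actually used, with ties broken arbitrarily and toy data:

```python
import numpy as np

def quantile_normalize(X):
    """Quantile-normalize the columns (samples) of X so every sample
    shares the same empirical distribution: the mean of the sorted columns.
    Ties are broken arbitrarily by argsort."""
    ranks = np.argsort(np.argsort(X, axis=0), axis=0)  # rank of each value per column
    mean_sorted = np.sort(X, axis=0).mean(axis=1)      # reference distribution
    return mean_sorted[ranks]

X = np.array([[5.0, 4.0, 3.0],
              [2.0, 1.0, 4.0],
              [3.0, 4.0, 6.0],
              [4.0, 2.0, 8.0]])
Xn = quantile_normalize(X)
# After normalization, every column has identical sorted values.
print(np.sort(Xn, axis=0))
```

This removes sample-to-sample distributional differences while preserving each gene's rank within its sample.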

Selection of the Parameters

In SPCA, the ℓ0-norm penalty is adopted. In PLS, only the first component is used. For the sake of comparison, roughly 300 genes are selected by every method on the gene expression data.

Singular Value Decomposition (SVD)

In this subsection, the details of the WPCS method are presented. Let X denote an n × p matrix of real-valued gene expression data, which consists of p genes measured in n samples. In the case of gene expression data, x_{ij} is the expression level of the j-th gene in the i-th sample. The elements of the j-th column of X form an n-dimensional vector, referred to as the transcriptional response of the j-th gene. Correspondingly, the elements of the i-th row of X form a p-dimensional vector, referred to as the expression profile of the i-th sample. Usually n ≪ p, so this is a classical small-sample-size problem. To integrate SPCA and the SVs, the singular value decomposition (SVD) and SPCA via the cardinality penalty (ℓ0-penalty) are introduced as follows.

If the variables contained in the columns of X, with rank L, are centered, the SVD of X is

X = U Σ V^T,  (1)

where U is an n × L matrix of left singular vectors with U^T U = I_L, V is a p × L matrix of right singular vectors with V^T V = I_L, and Σ is an L × L diagonal matrix of singular values. Let u_k denote the k-th column of U, v_k the k-th column of V, and σ_k the k-th diagonal element of Σ. According to Eckart et al. [33],

X_l = Σ_{k=1}^{l} σ_k u_k v_k^T  (2)

is the closest rank-l matrix to X. The term "closest" means that X_l minimizes the sum of squared errors between the elements of X and X_l.
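Equations (1) and (2) can be checked numerically. The sketch below (illustrative random data) verifies that the rank-l SVD truncation leaves a squared residual equal to the sum of the discarded squared singular values:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((6, 20))
X = X - X.mean(axis=0)            # center the columns, as eq. (1) assumes

U, s, Vt = np.linalg.svd(X, full_matrices=False)
l = 2
# Eckart-Young: the rank-l truncation is the closest rank-l matrix to X,
# and its squared Frobenius error equals the sum of the discarded sigma_k^2.
X_l = U[:, :l] @ np.diag(s[:l]) @ Vt[:l, :]
err = np.linalg.norm(X - X_l) ** 2
print(np.isclose(err, np.sum(s[l:] ** 2)))  # -> True
```

Note that `numpy.linalg.svd` returns the singular values in descending order, which matches the convention used in the derivations below.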

Sparse Principal Component Analysis (SPCA)

The results given by the ℓ0- and ℓ1-norm penalties in SPCA are similar, as also shown in [20]. Since the ℓ0-norm formulation is faster, the ℓ0-norm penalty is adopted here. Let F = UΣ; since V^T V = I_L, eq. (1) can be written as

X = F V^T,  (3)

where F is the n × L matrix of factor scores and V holds the PCs. This is the classical PCA formulation.

Extracting one principal component (PC) amounts to computing the dominant eigenvector of X^T X (or, equivalently, the dominant right singular vector of X). That is, PCA seeks to project the data onto the linear combination of variables that maximizes the sample variance. It is well known that the solution to this problem is given by the dominant right singular vector of X. In general, a PC is not expected to have many zero coefficients. So, to make the PCs easy to interpret without subjective judgment, the sparse PCA proposed by Journée et al. in [20] is used to generate sparse PCs.

Let us consider the optimization problem

φ_{ℓ0}(γ) = max_{z^T z ≤ 1} z^T X^T X z − γ ‖z‖_0,  (4)

with sparsity-controlling parameter γ ≥ 0, where ‖z‖_0 denotes the ℓ0-norm of z, that is, the number of its non-zero components (cardinality).

According to [20], eq. (4) can be rewritten as

φ_{ℓ0}(γ) = max_{x ∈ S^n} max_{z^T z ≤ 1} (x^T X z)² − γ ‖z‖_0,  (5)

where S^n = {x ∈ R^n : x^T x = 1}, and the maximization with respect to z for a fixed x has the closed-form solution

z_i = [sign((a_i^T x)² − γ)]_+ a_i^T x / ( Σ_{k=1}^{p} [sign((a_k^T x)² − γ)]_+ (a_k^T x)² )^{1/2}, i = 1, …, p,  (6)

where a_i denotes the i-th column of X. According to [20], eq. (5) can then be cast in the following form:

φ_{ℓ0}(γ) = max_{x ∈ S^n} Σ_{i=1}^{p} [(a_i^T x)² − γ]_+.  (7)

Here sign(t) denotes the sign of the argument t and [t]_+ = max{0, t}.

For γ large enough, φ_{ℓ0}(γ) = 0. Since

(a_i^T x)² ≤ ‖a_i‖² ‖x‖²,  (8)

we get

(a_i^T x)² − γ < 0  (9)

for every unit vector x ∈ S^n whenever γ > max_i ‖a_i‖². So, if γ > max_i ‖a_i‖², then

z* = 0.  (10)

Otherwise, at least one term of the sum in eq. (7) can be made positive, so

φ_{ℓ0}(γ) > 0.  (11)
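A simplified implementation of this single-unit ℓ0 sparse PCA iteration might look as follows. It is a sketch in the spirit of the generalized power method of [20], not the authors' code, and the data and γ are illustrative:

```python
import numpy as np

def spca_l0_single(A, gamma, n_iter=200, seed=0):
    """Single-unit l0-penalized sparse PCA via a generalized power
    method in the spirit of Journee et al. [20] (sketch only).

    A     : n x p data matrix whose columns a_i are genes.
    gamma : sparsity penalty; genes with (a_i^T x)^2 <= gamma drop out.
    Returns a sparse, unit-norm p-dimensional loading vector z.
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(n_iter):
        t = A.T @ x                    # t_i = a_i^T x for every gene i
        x_new = A @ (t * (t ** 2 > gamma))
        nrm = np.linalg.norm(x_new)
        if nrm == 0:                   # gamma too large: nothing active
            break
        x = x_new / nrm
    t = A.T @ x
    z = np.where(t ** 2 > gamma, t, 0.0)  # closed-form z of eq. (6), unnormalized
    nrm = np.linalg.norm(z)
    return z / nrm if nrm > 0 else z

rng = np.random.default_rng(2)
A = rng.standard_normal((10, 50))
A[:, :5] += 4.0 * rng.standard_normal((10, 1))  # 5 strongly co-varying genes
z = spca_l0_single(A, gamma=5.0)
print(np.count_nonzero(z))  # far fewer than 50 loadings survive
```

The thresholding inside the loop is exactly the [·]_+ gate of eq. (7): only genes whose squared projection exceeds γ contribute to the update, which is what makes the resulting loading vector sparse.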

The Idea of the Proposed Method

The residual sum of squares (RESS) can be used to evaluate the quality of the reconstitution of X with l PCs of SPCA. According to [34], it can be computed as

RESS_l = ‖X − X_l‖² = ‖X‖² − Σ_{k=1}^{l} σ_k²,  (12)

where X_l is the rank-l reconstitution of X, ‖·‖² denotes the sum of all the squared elements, and σ_k is the singular value of the k-th component. The smaller the value of RESS is, the better the SPCA model is. From eq. (12), we can see that a larger l gives a better estimation of X. If l takes the value L (the rank of X),

RESS_L = ‖X‖² − Σ_{k=1}^{L} σ_k² = 0,  (13)

and the matrix X can be perfectly reconstituted. Let l = 1; then RESS_1 can be expressed as

RESS_1 = ‖X‖² − σ_1².  (14)

Substituting eq. (13) into eq. (14), RESS_1 can be obtained as

RESS_1 = Σ_{k=2}^{L} σ_k².  (15)

As eq. (15) shows, if only one PC is used to reconstitute the matrix X, RESS_1 may be large. So, if only the first PC is used for characteristic gene selection, some important information may be lost, especially when the second or third SV is approximately equal to the first one. In order to obtain a better reconstitution, all the PCs of SPCA need to be utilized.
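The point of eq. (15) can be illustrated numerically: when the second singular value is close to the first, the first PC alone leaves almost half of the energy unreconstituted (illustrative data):

```python
import numpy as np

# Plant two components of near-equal strength: sigma_1 = 10, sigma_2 = 9.5.
rng = np.random.default_rng(3)
U, _ = np.linalg.qr(rng.standard_normal((20, 2)))   # orthonormal left factors
V, _ = np.linalg.qr(rng.standard_normal((200, 2)))  # orthonormal right factors
X = 10.0 * np.outer(U[:, 0], V[:, 0]) + 9.5 * np.outer(U[:, 1], V[:, 1])

s = np.linalg.svd(X, compute_uv=False)
total = np.sum(s ** 2)                  # ||X||^2
ress1 = np.sum(s[1:] ** 2)              # eq. (15): RESS_1 = sum over k >= 2 of sigma_k^2
print(round(ress1 / total, 3))  # -> 0.474, nearly half the energy is lost
```

This is precisely the situation where selecting genes from the first PC alone discards information carried by the later components.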

In SPCA, F is the matrix of factor scores and V is the loading matrix of the principal components (PCs), which transforms the original data matrix into the factor scores. The data matrix X, the factor-score matrix F, and the PCs V are shown in Figure 4.

As Figure 4 shows, the PCs V give the coefficients of the linear combinations used to compute the factor scores F. The bigger the absolute value of an element of V is, the more it contributes to the factor-score matrix, and the more important the corresponding gene in X is. So the characteristic genes can be selected according to the PCs V.

Let

v_i = (v_{1i}, v_{2i}, …, v_{pi})^T  (16)

denote the i-th PC; then the PCs can be given as

V = [v_1, v_2, …, v_L].  (17)

Substituting F = UΣ into eq. (3), it can be reformed as

X = U Σ V^T,  (18)

where Σ is the diagonal matrix of singular values. Let

W = V Σ;  (19)

then eq. (18) can be reformed as

X = U W^T.  (20)

Substituting eq. (17) into eq. (19),

W = [σ_1 v_1, σ_2 v_2, …, σ_L v_L].  (21)

The matrix W is referred to as the weighted PCs (WPC), which are obtained by weighting the PCs by the diagonal matrix of SVs (WPCS).

Substituting eq. (16) into eq. (21), the entries of the WPC can be written as

w_{ji} = σ_i v_{ji}, j = 1, …, p; i = 1, …, L.  (22)

As the absolute values of the j-th row of W indicate, to some extent, the importance of the j-th gene, the sum of the absolute values of all the entries in the j-th row is taken as the evaluating vector EV:

EV_j = Σ_{i=1}^{L} σ_i |v_{ji}|, j = 1, …, p.  (23)

In particular, if the dimensionality of the gene data is p, the EV has p entries. After sorting the evaluating vector EV, the genes corresponding to the largest entries can be selected as characteristic genes.

In summary, the main steps of WPCS method are shown as follows.

  1. Given the observation matrix X ∈ R^{n×p} with n samples and p genes.
  2. Execute SPCA on X to obtain the PCs V.
  3. Execute the SVD to obtain the SVs σ_1, …, σ_L.
  4. Obtain the WPC W by multiplying the PCs by the diagonal matrix of SVs: W = VΣ.
  5. Obtain the evaluating vector EV by summing the absolute values of each row of W.
  6. Sort the EV in descending order.
  7. Select the genes corresponding to the largest entries as characteristic genes.

The workflow diagram of our method is shown in Figure 5.
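Putting the steps together, a compact sketch of the WPCS pipeline might read as follows. Plain SVD loadings stand in for the sparse PCs of SPCA, so this is an approximation of the method, not the authors' implementation, and the planted test data are illustrative:

```python
import numpy as np

def wpcs_select(X, n_genes):
    """Rank genes by the WPCS evaluating vector EV (eq. 23) and return
    the indices of the top n_genes.

    X: n x p expression matrix (rows = samples, columns = genes).
    Plain SVD loadings stand in for the sparse PCs of SPCA here.
    """
    Xc = X - X.mean(axis=0)                 # center the columns
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Vt.T * s                            # weight each PC by its SV: W = V diag(s)
    ev = np.abs(W).sum(axis=1)              # EV_j = sum over i of sigma_i |v_ji|
    return np.argsort(ev)[::-1][:n_genes]

rng = np.random.default_rng(4)
X = rng.standard_normal((10, 100))
X[:, :3] += 8.0 * rng.standard_normal((10, 1))  # plant 3 co-varying genes
print(sorted(int(j) for j in wpcs_select(X, 3)))
```

With this planted signal, the three strongly co-varying genes dominate the evaluating vector and are returned first; swapping in a sparse PCA routine for the SVD loadings would recover the full method.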

Acknowledgments

The WPCS source code and data sets are available at http://www.yongxu.org/code/LJX_PONE.rar.

Author Contributions

Conceived and designed the experiments: JXL. Performed the experiments: YX CHZ. Analyzed the data: YW JYY. Wrote the paper: JXL YX CHZ.

References

  1. Hirayama T, Shinozaki K (2010) Research on plant abiotic stress responses in the post-genome era: past, present and future. The Plant Journal 61: 1041–1052.
  2. Liu H, Rahman A, Semino-Mora C, Doi SQ, Dubois A (2008) Specific and sensitive detection of H. pylori in biological specimens by real-time RT-PCR and in situ hybridization. PLoS ONE 3: e2689.
  3. Maan NS, Maan S, Nomikou K, Johnson DJ, El Harrak M, et al. (2010) RT-PCR assays for seven serotypes of epizootic haemorrhagic disease virus & their use to type strains from the Mediterranean Region and North America. PLoS ONE 5: e12782.
  4. Blevins T (2010) Northern blotting techniques for small RNAs. Methods Mol Biol 631: 87–107.
  5. Josefsen K, Nielsen H (2011) Northern blotting analysis. Methods in Molecular Biology (Clifton, NJ) 703: 87–105.
  6. Schena M, Shalon D, Davis RW, Brown PO (1995) Quantitative monitoring of gene expression patterns with a complementary DNA microarray. Science 270: 467–470.
  7. Heller MJ (2002) DNA microarray technology: devices, systems, and applications. Annual Review of Biomedical Engineering 4: 129–153.
  8. Sato F, Tsuchiya S, Terasawa K, Tsujimoto G (2009) Intra-platform repeatability and inter-platform comparability of microRNA microarray technology. PLoS ONE 4: e5540.
  9. Seki M, Narusaka M, Ishida J, Nanjo T, Fujita M, et al. (2002) Monitoring the expression profiles of 7000 Arabidopsis genes under drought, cold and high-salinity stresses using a full-length cDNA microarray. The Plant Journal 31: 279–292.
  10. Kilian J, Whitehead D, Horak J, Wanke D, Weinl S, et al. (2007) The AtGenExpress global stress expression data set: protocols, evaluation and model data analysis of UV-B light, drought and cold stress responses. The Plant Journal 50: 347–363.
  11. Dudoit S, Shaffer JP, Boldrick JC (2003) Multiple hypothesis testing in microarray experiments. Statistical Science 18: 71–103.
  12. Saeys Y, Inza I, Larrañaga P (2007) A review of feature selection techniques in bioinformatics. Bioinformatics 23: 2507–2517.
  13. Sampson DL, Parker TJ, Upton Z, Hurst CP (2011) A comparison of methods for classifying clinical samples based on proteomics data: a case study for statistical and machine learning approaches. PLoS ONE 6: e24973.
  14. Park MS, Choi JY (2009) Theoretical analysis on feature extraction capability of class-augmented PCA. Pattern Recognition 42: 2353–2362.
  15. Ma S, Kosorok MR (2009) Identification of differential gene pathways with principal component analysis. Bioinformatics 25: 882–889.
  16. De Haan J, Piek E, van Schaik R, De Vlieg J, Bauerschmidt S, et al. (2010) Integrating gene expression and GO classification for PCA by preclustering. BMC Bioinformatics 11: 158.
  17. Musumarra G, Barresi V, Condorelli DF, Fortuna CG, Scire S (2004) Potentialities of multivariate approaches in genome-based cancer research: identification of candidate genes for new diagnostics by PLS discriminant analysis. Journal of Chemometrics 18: 125–132.
  18. Boulesteix AL, Strimmer K (2007) Partial least squares: a versatile tool for the analysis of high-dimensional genomic data. Briefings in Bioinformatics 8: 32–44.
  19. Zou H, Hastie T, Tibshirani R (2006) Sparse principal component analysis. Journal of Computational and Graphical Statistics 15: 265–286.
  20. Journée M, Nesterov Y, Richtárik P, Sepulchre R (2010) Generalized power method for sparse principal component analysis. The Journal of Machine Learning Research 11: 517–553.
  21. Lai ZH, Wan MH, Jin Z, Yang JA (2011) Sparse two-dimensional local discriminant projections for feature extraction. Neurocomputing 74: 629–637.
  22. Luss R, d'Aspremont A (2010) Clustering and feature selection using sparse principal component analysis. Optimization and Engineering 11: 145–157.
  23. Witten DM, Tibshirani R, Hastie T (2009) A penalized matrix decomposition, with applications to sparse principal components and canonical correlation analysis. Biostatistics 10: 515–534.
  24. Liu JX, Zheng CH, Xu Y (2012) Extracting plants core genes responding to abiotic stresses by penalized matrix decomposition. Computers in Biology and Medicine. doi: 10.1016/j.compbiomed.2012.1002.1002.
  25. Le Cao KA, Boitard S, Besse P (2011) Sparse PLS discriminant analysis: biologically relevant feature selection and graphical displays for multiclass problems. BMC Bioinformatics 12: 253.
  26. Liu JX, Zheng CH, Xu Y (2011) Lasso logistic regression based approach for extracting plants core genes responding to abiotic stresses. IWACI, Wuhan, China. 463–466.
  27. Yang H, Yang JY (2003) Why can LDA be performed in PCA transformed space? Pattern Recognition 36: 563–566.
  28. Ashburner M, Ball CA, Blake JA, Botstein D, Butler H, et al. (2000) Gene Ontology: tool for the unification of biology. Nature Genetics 25: 25–29.
  29. Boyle EI, Weng SA, Gollub J, Jin H, Botstein D, et al. (2004) GO::TermFinder - open source software for accessing Gene Ontology information and finding significantly enriched Gene Ontology terms associated with a list of genes. Bioinformatics 20: 3710–3715.
  30. Shen H, Huang JZ (2008) Sparse principal component analysis via regularized low rank matrix approximation. Journal of Multivariate Analysis 99: 1015–1034.
  31. Craigon DJ, James N, Okyere J, Higgins J, Jotham J, et al. (2004) NASCArrays: a repository for microarray data generated by NASC's transcriptomics service. Nucleic Acids Research 32: D575–D577.
  32. Wu Z, Irizarry RA, Gentleman R, Martinez-Murillo F, Spencer F (2004) A model-based background adjustment for oligonucleotide expression arrays. Journal of the American Statistical Association 99: 909–917.
  33. Eckart C, Young G (1936) The approximation of one matrix by another of lower rank. Psychometrika 1: 211–218.
  34. Abdi H, Williams LJ (2010) Principal component analysis. Wiley Interdisciplinary Reviews: Computational Statistics 2: 433–459.