
Multispectral co-occurrence of wavelet coefficients for malignancy assessment of brain tumors

  • Shaswati Roy ,

    Roles Conceptualization, Formal analysis, Methodology, Validation, Visualization, Writing – review & editing

    r.shaswati@gmail.com

    Affiliation Department of Information Technology, RCC Institute of Information Technology, Kolkata, West Bengal, India

  • Pradipta Maji

    Roles Conceptualization, Formal analysis, Methodology, Supervision, Validation, Visualization, Writing – review & editing

    Affiliation Biomedical Imaging and Bioinformatics Lab, Machine Intelligence Unit, Indian Statistical Institute, Kolkata, West Bengal, India

Abstract

Brain tumor is not the most common, but it is among the most aggressive types of cancer. Correct prediction of its aggressiveness at an early stage would therefore influence the treatment strategy. Although several diagnostic methods based on different modalities exist, a pre-operative method for determining the malignancy state of a tumor remains an active research area. In this regard, the paper presents a new method for the assessment of tumor grades using conventional MR sequences, namely T1, T1 with contrast enhancement, T2, and FLAIR. The proposed method for tumor gradation is mainly based on feature extraction using multiresolution image analysis and classification using a support vector machine. Since the wavelet features of different tumor subregions, obtained from a single MR sequence, do not carry equally important information, a wavelet fusion technique is proposed based on the texture information content of each voxel. The concept of texture gradient, used in the proposed algorithm, fuses the wavelet coefficients of the given MR sequences. The feature vector is then derived from the co-occurrence of fused wavelet coefficients. As each wavelet subband contains distinct detail information, a novel concept of multispectral co-occurrence of wavelet coefficients is introduced to capture the spatial correlation among different subbands. This yields more informative features for characterizing the tumor type. The effectiveness of the proposed method is analyzed, with respect to six classification performance indices, on the BRATS 2012 and BRATS 2014 data sets. The classification accuracy, sensitivity, specificity, positive predictive value, negative predictive value, and area under curve assessed by ten-fold cross-validation are 91.3%, 96.8%, 66.7%, 92.4%, 88.4%, and 92.0%, respectively, on real brain MR data.

1 Introduction

Brain tumor is not very common, but it is among the most fatal cancers [1]. It can be defined as an abnormal lump of tissue which infiltrates the surrounding brain tissue and interferes with normal brain activity. The treatment strategy for patients with brain tumor depends on the assessment of its malignancy state. According to its aggressiveness, the World Health Organization (WHO) classifies brain tumor into grades I to IV. Patients with a high grade tumor, classified as either WHO grade III or grade IV, have a worse prognosis, with a median survival of about two years or less. Glioblastoma multiforme (WHO grade IV) is the most common malignant brain tumor. It develops very rapidly, yielding a poor median survival of about 14 months. Therefore, patients with a high grade brain tumor require aggressive treatment, such as chemotherapy and radiation, as soon as possible. In contrast, patients with a low grade brain tumor, classified as either WHO grade I or grade II, have a better life expectancy of several years. In effect, aggressive treatment can be delayed in this case. Thus, the accurate classification of brain tumor into high and low grades is important to determine the treatment strategy.

Many techniques have been proposed so far to quantify tumor heterogeneity as an imaging biomarker for the prediction of tumor grades [2]. Magnetic resonance imaging (MRI) is an important diagnostic technique for non-invasively providing accurate information about the existence, extent, and aggressiveness of brain tumor. The MR images give anatomical and structural information, not only about the active brain tumor area, but also about the surrounding tissues, which is helpful in characterizing the tumor grade. Dean et al. [3] showed the importance of MR images in a group of patients with gliomas, and verified the analysis with the corresponding results of biopsy diagnosis. In fact, MRI provides different sequences, such as T1-weighted (T1), T1-weighted with contrast enhancement (T1C), T2-weighted (T2), and fluid-attenuated inversion recovery (FLAIR), to capture different types of tissue contrast. The T1 sequence allows structural analysis of healthy tissues, while tumor borders appear brighter in T1C. Moreover, the necrotic and active tumor areas can be easily discriminated in the T1C MR sequence. In the T2 sequence, the edema region appears brighter, whereas it can be separated properly from cerebrospinal fluid in FLAIR. Therefore, one MRI sequence is not sufficient to characterize all the subregions of brain tumors. In effect, a combination of these sequences provides a considerable amount of information for revealing the underlying tumor grade. In [4], a fusion strategy is proposed to combine the structural and textural information of these four MRI sequences for the detection of brain tumor. A fused single image is reconstructed using the inverse discrete wavelet transform from the coefficient matrices of the MRI sequences. Cheng et al. [5] proposed a fusion technique to automatically learn a mapping function for integrating multimodal features. A weight map is learned to gate the scale of information, and the gated content from the individual MRI modalities is then fused to form the integrated representation.

In a brain MR image, texture analysis provides information about the uniformity of image intensities. The textural properties of the brain tumor region give valuable information for the prediction of tumor type. Wavelet based multiresolution analysis [6] is one of the most effective techniques for extracting textural features from brain MR images [7–9]. In [7], wavelet analysis is performed on apparent diffusion coefficient images to predict the degree of malignancy of brain tumor. In [10], brain tumor texture is formulated using a multiresolution-fractal model for characterizing patient-independent brain tumor texture. Bauer et al. [11] presented an approach to establish the correspondence between a healthy atlas and MR images of tumor patients. A multiscale-multiphysics tumor growth model, in combination with registration algorithms, has been employed. Another approach for extracting textural information is the computation of the gray level co-occurrence matrix (GLCM). The GLCM considers the gray level transition between two voxels, and several features can be extracted from it. In order to increase the discriminability of the descriptor, several extensions of the GLCM have been proposed in the literature [12–16]. Siqueira et al. [12] extended the GLCM to multiple scales through two different approaches, namely a Gaussian scale-space representation and an image pyramid. Gelzinis et al. [13] computed the co-occurrence matrices simultaneously for different values of the distance parameter. In [14], multi-scale texture analysis using the GLCM is introduced by changing the window size. Pacifici et al. [15] performed GLCM based multi-scale texture analysis using multiple windows with different directions and different shift values of the displacement vector. Walker et al. [16] extracted features from the GLCM by a weighted sum of the elements of these matrices in localized neighborhoods. In [17], the distance parameter is varied from one to six to generate the multi-scale GLCM.

Law et al. [18] showed that relative cerebral blood volume measurements, derived from perfusion MR imaging, and metabolite ratios from proton MR spectroscopy are useful in predicting glioma grade. In this case, histopathologic grading has been used to verify the tumor grade determined from the method. Li et al. [19] used fifteen patient-specific features and a support vector machine (SVM) with a floating search method to predict the degree of malignancy of brain tumor. However, the selection of these patient-specific features requires the intervention of domain experts. In [7], a mixture of unsupervised artificial neural networks and hierarchical multiresolution wavelets has been used to evaluate the degree of aggressiveness of brain tumor. In this work, the wavelet filtered apparent diffusion coefficient images, along with T2 and FLAIR images, are used to generate the features. Zacharaki et al. [20] used a combination of conventional MRI and perfusion MRI to extract the features, followed by feature selection and classification. The features include tumor shape and intensity characteristics as well as rotation invariant texture features. The SVM, with recursive feature elimination, is used to obtain the feature subset. In [21], an unsupervised method is proposed to obtain clustered images from diffusion tensor images using multiple parameters. A two-level clustering approach, namely a self-organizing map followed by the k-means algorithm, has been developed to enable unsupervised clustering of images. These clustered images then allow visual grading of gliomas by applying the SVM. To determine the glioma grade, the 16-class diffusion tensor-based clustered images are used. In [22], tumor heterogeneity is evaluated by using texture analysis performed on apparent diffusion coefficient maps. Three features are extracted within the region of interest: entropy, obtained from gray level co-occurrence matrices, and the skewness and kurtosis of the image histogram. These texture and histogram features act as the parameters to discriminate between low and high grade gliomas using an unpaired Student's t-test.

Supervised pattern recognition methods are therefore useful for the prediction of tumor grades. However, the performance of the existing methods is mainly affected by two factors: 1) variability in the information of different MR sequences; and 2) identification of proper descriptors that can capture the intrinsic textural properties for proper tumor grading. Moreover, the brain tumor gradation methods reported recently in [7, 20–22] have used several high-cost advanced modalities, such as diffusion-weighted images and perfusion MRI. A brain tumor gradation method that relies on easily accessible, low-cost conventional MR images is desirable.

In this regard, a new method is proposed for the assessment of tumor grades. The proposed algorithm projects the tumor region onto an appropriate feature space, which is able to capture the essential attributes for differentiating different tumor types. Given several low-cost MR sequences, namely T1, T1C, T2, and FLAIR, the proposed algorithm combines the textural features from these images. Multiresolution wavelet analysis is performed to extract the image features within the region of interest. The proposed method introduces a fusion algorithm that combines the wavelet coefficients of the MR sequences, depending on the texture information content of different tumor subregions. After generating the fused wavelet subbands, the feature vector is obtained from the co-occurrence of wavelet coefficients. In order to capture the spatial correlation among different wavelet subbands, a novel concept called multispectral co-occurrence of wavelet coefficients is introduced in this paper. The proposed method, for computing the co-occurrence of a pair of wavelet coefficients, can capture the edge continuity present along different subbands. Subsequently, seven Haralick textural features are computed from the proposed multispectral co-occurrence matrix of wavelet coefficients, which are then fed into the SVM to classify the tumor types. Finally, the effectiveness of the proposed method is analyzed using leave-one-out cross-validation and ten-fold cross-validation strategies, along with a comparison with other methods.

2 Proposed methodology

This section presents a novel method for the evaluation of the malignancy stage of brain tumor using MR images. The overview of the processing pipeline used in this study is depicted in Fig 1. It consists mainly of three steps, as described below:

  1. Wavelet fusion of different MR sequences based on texture gradient;
  2. Generation of feature vector obtained from multispectral co-occurrence matrix of fused wavelet coefficients; and
  3. Gradation of tumor subtypes using the SVM.
Fig 1. Block diagram of the proposed brain tumor gradation method.

https://doi.org/10.1371/journal.pone.0250964.g001

The first step is shown in the left block of Fig 1, while the remaining two steps are presented in the right block of the figure. Each step of the proposed tumor gradation algorithm is elaborated next, one by one.

2.1 Texture-gradient based wavelet fusion

Image fusion is often desirable to merge images having complementary characteristics, so that the essential aspects of each image can be retained. The variability in information of different MR sequences makes fusion an important step for determining the tumor type. In [7], it has been shown that texture analysis can appropriately capture the characteristics of individual regions of an entire tumor. The multiresolution approach of the wavelet transform is suitable for extracting local textural features from images. Since every wavelet coefficient carries unique information, different textural information can be used to identify different homogeneous texture regions within a tumor, such as necrosis, enhancing, non-enhancing, and edema. The association among these regions can reveal the tumor characteristics. Therefore, the texture gradient [23], which can capture the differences in textures within a tumor region, is well suited to characterizing brain tumor.

In this background, a new wavelet based fusion method (left block of Fig 1) is proposed. It employs the texture content at each spatial position of each resolution level to fuse the images. The wavelet decomposition generates detail subbands containing important high frequency components at different orientations. In the proposed method, the detail information is extracted from each of the four MR sequences, namely T1, T1C, T2 and FLAIR, using 3-D dyadic wavelet decomposition. However, the features of different tumor subregions, obtained from one MR sequence, do not carry equally important information. In other words, one tumor subregion may contain more detail information in one MR sequence, whereas the same subregion may have less textural information in another sequence. Therefore, it is reasonable to characterize the texture content of each spatial position at each spectrum to obtain the most desirable information. In this regard, the proposed wavelet based fusion algorithm considers the texture gradient within the tumor region to fuse the given MR images.

Let the input brain MR volume be decomposed by the dyadic wavelet transform up to the $L$th level. The number of generated detail subbands is then $Q = dL$. Here, $d$ is the number of detail subbands generated at each decomposition level. In the case of the 3-D dyadic wavelet decomposition used in the current study, the value of $d$ is 7. Let $P$ be the number of MR sequences analyzed and $W_p^q$ denote each of the wavelet detail subbands, where $q = 1, \ldots, Q$ and $p = 1, \ldots, P$. Let $r = (x, y, z)$ be a coordinate vector in $W_p^q$ and $G_p(r)$ be the texture gradient at position $r$ for the $p$th sequence. The proposed fusion method integrates the detail images of each resolution level $q$ at each position $r$ of $q$ as follows:

$$F_q(r) = \sum_{p=1}^{P} G_p(r) \star W_p^q(r) \qquad (1)$$

where $\star$ represents point-wise multiplication and $F_q(r)$ denotes the $q$th detail subband with fused information at spatial position $r$. In the proposed fusion method, the magnitude of the texture gradient gives the weight value for the corresponding MR sequence in the fusion. In this way, the regions with more texture content contribute more to the fusion, thereby maximizing the information flow from the subbands of the individual sequences to the fused wavelet subbands. In order to compute the texture gradient for each MR sequence, median filtering, with a kernel of size 3 × 3 × 3, is performed on each $W_p^q$, followed by gradient extraction. The texture gradient $G_p(r)$ is then derived by adding the normalized gradient magnitudes of the subbands as follows:

$$G_p(r) = \sum_{q=1}^{Q} \frac{\left| \nabla \widetilde{W}_p^q(r) \right|}{\| \widetilde{W}_p^q \|_2} \qquad (2)$$

where $\widetilde{W}_p^q$ represents the median filtered subband of $W_p^q$ and $\nabla$ is approximated using a gradient extraction technique. The normalizing term $\| \widetilde{W}_p^q \|_2$ denotes the $L_2$-norm energy of the median filtered subband of $W_p^q$.
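To make the fusion rule concrete, the following NumPy sketch implements the weighting-and-summing scheme of Eqs (1) and (2) under simplifying assumptions: the detail subbands are assumed to be already computed (a wavelet library would supply them), the 3 × 3 × 3 median filtering step is omitted for brevity, and the array shapes and helper names (`texture_gradient`, `fuse`) are illustrative rather than the authors' implementation.

```python
import numpy as np

def texture_gradient(subbands):
    """Texture gradient G_p(r): sum over subbands of gradient
    magnitudes, normalized by each subband's L2-norm energy
    (median filtering omitted for brevity)."""
    G = np.zeros_like(subbands[0])
    for W in subbands:
        gx, gy, gz = np.gradient(W)
        mag = np.sqrt(gx**2 + gy**2 + gz**2)
        G += mag / (np.linalg.norm(W) + 1e-12)
    return G

def fuse(seq_subbands):
    """Fuse each detail subband across P sequences, Eq. (1):
    F_q(r) = sum_p G_p(r) * W_p^q(r).
    seq_subbands: list over p of lists over q of 3-D arrays."""
    P = len(seq_subbands)
    Q = len(seq_subbands[0])
    grads = [texture_gradient(seq_subbands[p]) for p in range(P)]
    return [sum(grads[p] * seq_subbands[p][q] for p in range(P))
            for q in range(Q)]

# toy example: P = 2 sequences, each with Q = 7 detail subbands of shape (8, 8, 8)
rng = np.random.default_rng(0)
subbands = [[rng.standard_normal((8, 8, 8)) for _ in range(7)] for _ in range(2)]
fused = fuse(subbands)
print(len(fused), fused[0].shape)  # 7 fused subbands of shape (8, 8, 8)
```

The weight G_p(r) is larger wherever a sequence carries more local texture, so that sequence dominates the fused coefficient at that voxel, which is exactly the behavior described above.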

2.2 Multispectral co-occurrence of wavelet coefficients

Texture analysis plays an important role in measuring the aggressiveness of brain tumor. The GLCM is one of the dominant texture descriptors used in image analysis. It has been extended in [12–16] to capture the texture information of a given image at multiple scales. In these approaches, features are extracted from each co-occurrence matrix at multiple scales and then concatenated to obtain the final feature vector. Although the concatenation of features obtained from co-occurrence matrices of different wavelet subbands is able to capture texture information at multiple scales [17], it has the drawback that each co-occurrence matrix, representing the relationships of wavelet coefficients, is confined to a single subband. Fig 2(a) presents two simulated images having different textures, while Fig 3(a) presents T1-weighted brain MR images with high grade and low grade tumors. The image in the top row of Fig 2(a) contains vertical and horizontal lines, while the image in the bottom row contains criss-cross lines and a dotted region. Fig 2(b)–2(d) present the detail subbands resulting from the wavelet decomposition of the two simulated images of Fig 2(a). For the brain tumor images, the horizontal and diagonal subbands are presented in Fig 3(b) and 3(c) for two different tumor regions, A and B, respectively. Analyzing the detail subbands in Fig 2(b)–2(d), it is observed that the vertical and diagonal subbands of the two simulated images of Fig 2(a) are similar. In the case of the brain tumor images presented in Fig 3, the wavelet coefficients of the horizontal subbands for region A, computed from the high grade and low grade brain tumor images, are almost the same. Similarly, the wavelet coefficients of the diagonal subbands for region B of these two brain tumor images are nearly equal to each other. Therefore, the co-occurrence of wavelet coefficients computed within a single subband, such as the vertical and diagonal subbands in Fig 2 and the horizontal and diagonal subbands in Fig 3, cannot adequately reflect the distinction present in the patterns of these images.

Fig 2. Detail wavelet subbands of two simulated images having different textures.

https://doi.org/10.1371/journal.pone.0250964.g002

Fig 3. Detail wavelet subbands of two T1-weighted brain MR images with high grade (top) and low grade (bottom) tumors.

https://doi.org/10.1371/journal.pone.0250964.g003

In this regard, the proposed algorithm introduces a novel concept, called the multispectral co-occurrence matrix of wavelet coefficients. It considers the joint distribution of wavelet coefficient pairs of neighboring points that are taken not only from a single spectrum, but also from multiple spectra of different resolution levels. In effect, the features derived from the proposed multispectral co-occurrence matrix of wavelet coefficients have a high descriptive ability, while retaining the advantage of the co-occurrence of wavelet coefficients. Each element of the proposed multispectral co-occurrence matrix represents the number of transitions between each pair of intra- and inter-subband wavelet coefficients involved in the spatial relationship denoted by the displacement vector $\delta$. So, each $(i, j)$-th element of the multispectral co-occurrence matrix for the $q$th subband is defined as follows:

$$C_q^{\delta}(i, j) = \# \left\{ \left( (r, q), (r + \delta, q') \right) \; : \; W(r, q) = i, \; W(r + \delta, q') = j \right\} \qquad (3)$$

where $q' \in \{1, \ldots, Q\}$ denotes any spectrum and $\#$ denotes the number of elements in the set.

The parameter $\delta$ is a displacement vector between the pair of wavelet coefficients. Here, the term $(r, q)$ represents the location of a wavelet coefficient which resides at spatial position $r$ of the $q$th detail wavelet subband. The computation of the multispectral co-occurrence matrix for an example 4 × 4 matrix is explained in Appendix 5.1.
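A minimal brute-force sketch of the multispectral co-occurrence count in (3) may help: for a fixed displacement, coefficient pairs are counted not only within subband q but against every subband q'. It assumes the wavelet coefficients have already been quantized to a small number of integer levels (the paper normalizes them to 0–255; 8 levels are used here to keep the toy example small), and the function name and shapes are illustrative, not the authors' code.

```python
import numpy as np

def multispectral_cooccurrence(subbands, q, delta, levels=8):
    """Count transitions between coefficient value i at (r, q) and
    coefficient value j at (r + delta, q'), for every subband q'."""
    C = np.zeros((levels, levels), dtype=np.int64)
    Wq = subbands[q]
    dx, dy, dz = delta
    X, Y, Z = Wq.shape
    for Wqp in subbands:                      # all spectra q', including q' = q
        for x in range(max(0, -dx), min(X, X - dx)):
            for y in range(max(0, -dy), min(Y, Y - dy)):
                for z in range(max(0, -dz), min(Z, Z - dz)):
                    i = Wq[x, y, z]
                    j = Wqp[x + dx, y + dy, z + dz]
                    C[i, j] += 1
    return C

# toy example: 3 subbands of quantized coefficients in [0, 7]
rng = np.random.default_rng(1)
subbands = [rng.integers(0, 8, size=(4, 4, 4)) for _ in range(3)]
C = multispectral_cooccurrence(subbands, q=0, delta=(1, 0, 0))
print(C.shape, C.sum())  # (8, 8); 3 subbands x 3*4*4 valid pairs = 144
```

For each subband q', every valid voxel pair (r, r + δ) contributes one count, so the matrix aggregates both intra-subband (q' = q) and inter-subband (q' ≠ q) transitions.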

The importance of the proposed multispectral co-occurrence matrices over the co-occurrence matrix computed from individual wavelet subbands is illustrated in Figs 4–6 and Tables 1 and 2. Figs 4–6 present the histograms of co-occurrence matrices obtained from different wavelet subbands of the top image of Fig 2(a) and the two brain tumor images of Fig 3(a), considering a single subband, termed "Individual Subband Co-Occurrence", and multiple subbands, termed "Multispectral Co-Occurrence". The co-occurrence of wavelet coefficients of the example images is measured at the 0°, 45°, 90°, and 135° directions for the computation of both individual subband and multispectral co-occurrence matrices, as shown in Figs 4–6. Analyzing the histograms presented in Figs 4–6, it is seen that the number of distinct pairs of wavelet coefficients obtained from the multispectral co-occurrence matrix is significantly higher than that obtained from the single subband co-occurrence matrix. The wavelet transform analyzes the image at a nested set of scales in three different orientations, namely horizontal, vertical, and diagonal. Each wavelet coefficient represents image content localized in spatial location and frequency, which enables the wavelet transform to efficiently represent the local edge content of the image for different orientations. The proposed multispectral co-occurrence matrix considers the wavelet coefficients of different wavelet spectra. Therefore, it is able to capture the correlation of important image content located at all higher frequency components of the image, which results in significantly more distinct pairs of wavelet coefficients than those obtained from the co-occurrence matrix computed within a single subband, as shown in Figs 4–6. Hence, the features derived from the multispectral co-occurrence matrix can adequately capture the edge continuity among different subbands, while the co-occurrence matrix of a single subband loses important spatial information.

Fig 4. Histograms of co-occurrence of wavelet coefficients for different subbands (top row: Horizontal, middle row: Vertical, and bottom row: Diagonal) of top image of Fig 2.

https://doi.org/10.1371/journal.pone.0250964.g004

Fig 5. Histograms of co-occurrence of wavelet coefficients for different subbands (top row: Horizontal, middle row: Vertical, and bottom row: Diagonal) of top image of Fig 3.

https://doi.org/10.1371/journal.pone.0250964.g005

Fig 6. Histograms of co-occurrence of wavelet coefficients for different subbands (top row: Horizontal, middle row: Vertical, and bottom row: Diagonal) of bottom image of Fig 3.

https://doi.org/10.1371/journal.pone.0250964.g006

Table 1. Textural features obtained from individual subband co-occurrence and multispectral co-occurrence for two example images.

https://doi.org/10.1371/journal.pone.0250964.t001

Table 2. Textural features obtained from individual subband co-occurrence and multispectral co-occurrence for brain images with high and low grade tumors.

https://doi.org/10.1371/journal.pone.0250964.t002

The advantage of the proposed multispectral co-occurrence matrix over the co-occurrence matrix computed from an individual wavelet subband is also illustrated quantitatively using Haralick features in Tables 1 and 2. Three Haralick features, namely inverse difference moment, sum average, and correlation, are examined for the two simulated images of Fig 2(a), while a different set of three Haralick features, namely inverse difference moment, entropy, and sum entropy, is considered for the two brain tumor images of Fig 3(a). These features are computed for each wavelet subband, obtained from the different co-occurrence matrices, for the two simulated images of Fig 2(a) and for the two brain tumor images of Fig 3(a), and the corresponding results are presented in Tables 1 and 2, respectively. Analyzing the feature values reported in these tables, it is seen that the values of the Haralick features are nearly the same for both simulated images, as well as for the high grade and low grade brain tumor images, at each wavelet subband when the co-occurrence matrix is computed within a single subband. On the other hand, the difference between the feature values for the two simulated images and the two brain tumor images is significantly higher when considering the multispectral co-occurrence matrices, as shown in Tables 1 and 2, respectively. Therefore, the features computed from the co-occurrence matrices of individual subbands cannot adequately discriminate between the two types of patterns present in these simulated and brain tumor images, and thus become insignificant for further analysis.

The proposed multispectral co-occurrence matrices at different orientations can be formed by using different displacement vectors, as mentioned in (3). In the volumetric analysis, 26 different combinations of the displacement vector, or spatial relationship, can be obtained, among which 13 are distinct (see Appendix 5.2). For computational simplicity, the displacement is usually set to one voxel in distance, which thus yields only 13 different configurations of wavelet coefficient pairs in 3-D space. In the current study, the coefficients of each wavelet subband are normalized to the range of 0 to 255 before computing the multispectral co-occurrence matrices.
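The count of 13 distinct displacement vectors follows because δ and −δ index the same unordered pair of coefficients; a short sketch makes this explicit:

```python
from itertools import product

# all 26 unit-offset neighbors of a voxel in 3-D (excluding the zero vector)
neighbors = [d for d in product((-1, 0, 1), repeat=3) if d != (0, 0, 0)]

# delta and -delta describe the same coefficient pair, so keep exactly one
# of each pair: the lexicographically larger representative
distinct = [d for d in neighbors if d > tuple(-c for c in d)]

print(len(neighbors), len(distinct))  # 26 and 13
```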

In order to extract the useful information present in the co-occurrence matrices, the proposed algorithm forms a feature vector for each subband $q$. The feature set includes seven Haralick measures, namely contrast, correlation, inverse difference moment, sum average, sum variance, sum entropy, and difference variance. The resultant feature vector for each subband is then obtained by averaging the feature vectors obtained from the co-occurrence matrices for the 13 different orientations. Next, the final feature vector is derived by concatenating the averaged feature vectors obtained from the proposed multispectral co-occurrence matrices of each subband. So, the dimension of the feature vector becomes $7Q$, where $Q$ is the total number of detail subbands, as seven Haralick features are used in the current study.
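For illustration, two of the seven Haralick measures (contrast and inverse difference moment) can be computed from a normalized co-occurrence matrix as follows; the remaining five follow the same pattern. This is a generic textbook formulation, not the authors' code.

```python
import numpy as np

def haralick_contrast_idm(C):
    """Contrast and inverse difference moment of a co-occurrence matrix C."""
    P = C / C.sum()                        # normalize counts to a joint probability
    i, j = np.indices(P.shape)
    contrast = np.sum((i - j) ** 2 * P)    # large when mass lies off-diagonal
    idm = np.sum(P / (1.0 + (i - j) ** 2)) # large when mass lies on the diagonal
    return contrast, idm

# a perfectly diagonal matrix has zero contrast and an IDM of 1
C = np.eye(4) * 10
contrast, idm = haralick_contrast_idm(C)
print(contrast, idm)  # 0.0 1.0
```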

2.3 Gradation of brain tumor

In the present work, the support vector machine (SVM) [24] is used to classify low grade and high grade brain tumors. The SVM is a margin classifier, defined by an optimal hyperplane in the feature vector space. For given labeled training data, the SVM outputs a decision hyperplane which categorizes new query instances. An important property of the SVM is that it is robust to outliers. It finds a decision boundary that maximizes the margin between the two classes while tolerating individual outliers at the same time. Moreover, the SVM is able to construct a nonlinear decision boundary using the kernel trick. In the current study, a linear kernel is used.

3 Experimental results and discussions

The performance of the proposed algorithm for evaluating brain tumor malignancy is extensively studied and compared with that of some existing algorithms. The classification performance is assessed using two strategies, namely ten-fold cross-validation and leave-one-out cross-validation. The performance of the proposed method is analyzed with respect to six quantitative indices, namely accuracy, sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and area under the receiver operating characteristic (ROC) curve (AUC). The definitions of these indices are presented in Appendix 5.3. For ten-fold cross-validation, the mean value of each index, computed over the ten folds, is reported. On the other hand, for leave-one-out cross-validation, the decision value is evaluated for each and every test subject, and after all repeats, these decision values are used to compute the classification performance indices.
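The five count-based indices reduce to simple ratios of the confusion-matrix entries (AUC is omitted here, since it requires the full ranking of decision values). A minimal sketch with hypothetical counts, not the paper's results:

```python
def classification_indices(tp, fn, tn, fp):
    """Accuracy, sensitivity, specificity, PPV, and NPV from
    true/false positive and negative counts."""
    accuracy    = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)          # true positive rate
    specificity = tn / (tn + fp)          # true negative rate
    ppv         = tp / (tp + fp)          # positive predictive value
    npv         = tn / (tn + fn)          # negative predictive value
    return accuracy, sensitivity, specificity, ppv, npv

# hypothetical confusion-matrix counts for illustration only
acc, sen, spe, ppv, npv = classification_indices(tp=90, fn=10, tn=40, fp=10)
print(acc, sen, spe, ppv, npv)  # 0.8666... 0.9 0.8 0.9 0.8
```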

The comparative performance analysis of the different algorithms is studied using Tables 3–6, the box-and-whisker plots of Figs 7, 9, 11 and 13, and the ROC curves of Figs 8, 10, 12 and 14. The ROC curves in Figs 8, 10, 12 and 14 are obtained from leave-one-out cross-validation, while Figs 7, 9, 11 and 13 show the analysis for ten-fold cross-validation. Similarly, Table 3 presents the classification performance of the proposed method for leave-one-out cross-validation, while Tables 4–6 report the means, standard deviations, and p-values computed through the Wilcoxon signed-rank test (one-tailed) and the paired-t test (one-tailed) with respect to the six classification indices, namely accuracy, sensitivity, specificity, PPV, NPV, and AUC, for ten-fold cross-validation. Table 3 highlights the best classification indices in bold font. Similarly, in Tables 4–6, the best mean indices and significant p-values, considering a 95% confidence level, are marked in bold, while lower, but not significant, p-values are italicized. In the box-and-whisker plots of Figs 7, 9, 11 and 13, the top and bottom boundaries of each box represent the upper and lower quartiles, respectively, the central line represents the median, the whiskers extend to three standard deviations from the mean, and outliers are represented by '+'. In the box-and-whisker plots, the red color corresponds to the proposed algorithm.

Fig 7. Comparative performance analysis of different decomposition levels of wavelet analysis using ten-fold cross-validation.

https://doi.org/10.1371/journal.pone.0250964.g007

Fig 8. ROC curve obtained using leave-one-out cross-validation for different decomposition levels of wavelet analysis.

https://doi.org/10.1371/journal.pone.0250964.g008

Fig 9. Comparative performance analysis of the proposed fusion method over individual sequences for ten-fold cross-validation.

https://doi.org/10.1371/journal.pone.0250964.g009

Fig 10. ROC curve obtained using leave-one-out cross-validation for the proposed fusion method and individual sequences.

https://doi.org/10.1371/journal.pone.0250964.g010

Fig 11. Comparative performance analysis of different fusion approaches using ten-fold cross-validation.

https://doi.org/10.1371/journal.pone.0250964.g011

Fig 12. ROC curve obtained using leave-one-out cross-validation for different fusion approaches.

https://doi.org/10.1371/journal.pone.0250964.g012

Fig 13. Comparative performance analysis of the proposed multispectral co-occurrence matrix and multi-scale GLCM based methods using ten-fold cross-validation.

https://doi.org/10.1371/journal.pone.0250964.g013

Fig 14. ROC curve obtained using leave-one-out cross-validation for the proposed multispectral co-occurrence matrix and multi-scale GLCM based methods.

https://doi.org/10.1371/journal.pone.0250964.g014

Table 3. Performance analysis of different algorithms using leave-one-out cross-validation.

https://doi.org/10.1371/journal.pone.0250964.t003

Table 4. Accuracy and AUC of different algorithms for ten-fold cross-validation.

https://doi.org/10.1371/journal.pone.0250964.t004

Table 5. Sensitivity and specificity of different algorithms for ten-fold cross-validation.

https://doi.org/10.1371/journal.pone.0250964.t005

Table 6. PPV and NPV of different algorithms for ten-fold cross-validation.

https://doi.org/10.1371/journal.pone.0250964.t006

The results reported in Figs 9 and 10 and Tables 3–6 establish the importance of the proposed fusion technique. From the results reported in Fig 10 and Table 3, it is seen that the proposed algorithm, which fuses all MR sequences, outperforms the methods using individual MR sequences with respect to all indices, except for specificity and PPV. For these two indices, T2 yields better results than the proposed fusion method. Analyzing the sensitivity and NPV in Table 3, it is seen that T2 obtains the lowest values for these indices among all the methods, including the individual MR sequences and the proposed algorithm. In the case of leave-one-out cross-validation, the proposed method obtains the best results in 4 out of the 6 classification indices with respect to all the individual MR sequences.

3.1 Data sets used

Two multi-sequence brain MR volume data sets, namely BRATS 2012 and BRATS 2014, are used in the current research work. Each of these data sets contains four MR sequences, namely T1, T1C, T2 and FLAIR. All volumes are skull-stripped and linearly co-registered to the corresponding T1C MR sequence. Both data sets provide expert manual segmentations and diagnostic labels (high or low grade) for the tumors. The BRATS 2012 brain tumor image data [25] is obtained from the MICCAI 2012 Challenge on Multimodal Brain Tumor Segmentation (www.imm.dtu.dk/projects/BRATS2012), organized by B. Menze, A. Jakab, S. Bauer, M. Reyes, M. Prastawa, and K. Van Leemput. The BRATS 2014 tumor image data [25] is obtained from the MICCAI 2014 Challenge on Multimodal Brain Tumor Segmentation (www.braintumorsegmentation.org/), organized by K. Farahani, M. Reyes, B. Menze, E. Gerstner, J. Kirby and J. Kalpathy-Cramer. The challenge database contains fully anonymized images from the following institutions: ETH Zurich, University of Bern, University of Debrecen, and University of Utah, along with publicly available images from the Cancer Imaging Archive. The BRATS 2012 data set contains 30 real brain MR volumes, of which 20 are high grade and the remaining 10 are low grade. The BRATS 2014 database has a significantly enlarged training set obtained from the NIH Cancer Imaging Archive, containing 200 real high grade and 44 real low grade brain MR volumes. In order to analyze the performance of the proposed algorithm, these two data sets are merged, resulting in a total of 220 high grade and 54 low grade brain MR volumes for each MR sequence.

3.2 Optimum value of wavelet decomposition level

The 3-D wavelet transform generates one approximation and seven detail subbands at each level. In case of the dyadic wavelet transform, the approximation part is iteratively decomposed as the decomposition level is increased. Hence, if an input volume is decomposed up to the Lth level, a total of 7L detail subbands are generated. In order to find the optimum value of the decomposition level L, experiments are carried out on several brain MR volumes by varying L from 1 to 3. To compare the performance of the proposed method at different wavelet decomposition levels, Figs 7 and 8 report box-and-whisker plots and ROC curves, respectively, while Tables 3–6 depict the classification results with respect to different indices. The methods with decomposition levels of one to three are named “Level 1”, “Level 2”, and “Level 3”, respectively, in Figs 7 and 8. As shown in Fig 7, the performance of the proposed algorithm at wavelet decomposition level L = 2 is better than that at the other levels using ten-fold cross-validation, irrespective of the classification indices used. The improvement in classification performance can also be seen in the ROC curve of Fig 8, obtained using the leave-one-out cross-validation strategy. Since the proposed tumor classification algorithm uses wavelet decomposition up to level two, the proposed algorithm in Tables 3–6 corresponds to the method “Level 2” of Figs 7 and 8.
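The subband count above (one approximation plus 7L detail subbands after L levels) can be verified with a short sketch using PyWavelets; the wavelet family (`db1`) and the toy volume size are assumptions for illustration, not necessarily those used in the paper.

```python
# Counting the detail subbands of a 3-D dyadic wavelet decomposition.
import numpy as np
import pywt

volume = np.random.rand(16, 16, 16)  # toy stand-in for an MR volume
L = 2
coeffs = pywt.wavedecn(volume, 'db1', level=L)

# coeffs[0] is the level-L approximation; each remaining entry is a dict
# holding the 7 detail subbands (2^3 - 1) produced at one level.
n_detail = sum(len(d) for d in coeffs[1:])
print(n_detail)  # 7 * L = 14
```

Iterating only the approximation part at each level is exactly why the detail subbands grow linearly (7L) rather than exponentially with the decomposition level.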

In leave-one-out cross-validation, it is seen in Table 3 that the proposed algorithm performs better than the methods of “Level 1” and “Level 3”, irrespective of the classification indices used. In case of ten-fold cross-validation, as shown in Tables 4–6, the proposed method also achieves the best mean values for all the classification indices at decomposition level 2. There are in total 24 comparisons with the methods “Level 1” and “Level 3”, in terms of p-values computed through the Wilcoxon signed-rank test and the paired-t test, for all six indices. Out of these 24 cases, the proposed method at L = 2 performs significantly better in 8 cases and attains lower, but not statistically significant, p-values in the remaining 16 cases, as shown in Tables 4–6. So, it can be concluded that the classification performance of the proposed method at wavelet decomposition level L = 2 is better than that at other levels, irrespective of the quantitative indices and experimental setup used. Hence, each volume is decomposed up to level L = 2 in the current study.
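The significance tests used above can be reproduced with SciPy; the per-fold accuracies below are illustrative numbers, not the paper's results.

```python
# Comparing per-fold accuracies of two configurations with the
# one-sided Wilcoxon signed-rank test.
from scipy.stats import wilcoxon

# Hypothetical ten-fold accuracies for two decomposition levels
acc_level2 = [0.93, 0.90, 0.92, 0.91, 0.94, 0.89, 0.92, 0.90, 0.93, 0.91]
acc_level1 = [0.88, 0.86, 0.90, 0.87, 0.89, 0.85, 0.88, 0.86, 0.90, 0.87]

# alternative='greater' tests whether level-2 scores exceed level-1 scores
stat, p = wilcoxon(acc_level2, acc_level1, alternative='greater')
print(p < 0.05)  # significant at the 5% level for these illustrative scores
```

A small p-value here corresponds to the "significantly better" cases reported in Tables 4–6; a p-value above the chosen threshold corresponds to the "lower, but not statistically significant" cases.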

3.3 Importance of fusion over individual sequences

This section establishes the importance of fusing information from multiple MR sequences over using individual ones. The classification performance is analyzed using ten-fold as well as leave-one-out cross-validation. In case of ten-fold cross-validation, it is seen in Fig 9 and Tables 4–6 that the method using the single T1C sequence performs better than the methods using the other MR sequences, namely T1, T2 and FLAIR, with respect to all the indices, except in three cases: the T2 sequence is better with respect to sensitivity, while the FLAIR sequence obtains the highest mean values for specificity and PPV. For leave-one-out cross-validation (Table 3), it is seen that different MR sequences produce the highest values for different classification indices. In case of accuracy, sensitivity, and AUC, the T1C sequence yields better results, while the T2 sequence gives the highest specificity and PPV values, and T1 and FLAIR achieve the best NPV. Therefore, it can be concluded that no individual MR sequence is sufficient to characterize all the subregions of a tumor and, in turn, each poorly determines the malignancy state of the entire tumor. A combination of these sequences may provide a considerable amount of information for revealing the underlying tumor grade.

Analyzing the box plots of Fig 9, it is observed that the proposed method generates the highest median values with respect to accuracy, specificity, PPV, and AUC. For the remaining two indices, namely sensitivity and NPV, the proposed method also achieves the highest median values, which are matched by some of the individual MR sequences such as T1C and T2. As depicted in Tables 4–6, the performance of the proposed method, in case of ten-fold cross-validation, is improved with respect to all individual sequences, irrespective of the quantitative indices used. However, the method using the individual T2 sequence produces the highest sensitivity value of 1.0 along with lower specificity values. This reveals that the false positive count is very high while the false negative count is zero, indicating the overall poor performance of the method using the T2 sequence. Overall, the proposed method performs significantly better than any individual MR sequence, namely T1, T1C, T2 and FLAIR, in 36 out of the total 48 comparisons, while it obtains lower, but not statistically significant, p-values in 8 cases. The AUC values of the proposed algorithm and the methods using individual MR sequences, as shown in Fig 10, indicate that the proposed method provides significantly better performance than the methods using the T1, T1C, T2, and FLAIR modalities. All these results indicate the importance of the proposed fusion method over the use of a single MR sequence with respect to the six classification indices.

3.4 Effectiveness of texture gradient based wavelet fusion

The proposed method employs a texture gradient based fusion method to integrate the multi-sequence MR volumes, namely T1, T1C, T2 and FLAIR. It uses the texture information content of the different sequences to fuse the wavelet coefficients for each voxel within the region of interest. Figs 11 and 12 and Tables 3–6 present the effectiveness of the proposed fusion algorithm over two other fusion approaches based on the maximum and average operators, named Fusion_MAX and Fusion_AVG in this section.

In the leave-one-out cross-validation strategy, the proposed tumor classification algorithm obtains the highest values for all the indices except PPV, with respect to both the Fusion_MAX and Fusion_AVG methods, as shown in Table 3; for PPV, the Fusion_AVG method performs better than the proposed method. For ten-fold cross-validation, the proposed method attains higher mean values than the methods using these two fusion operators for all classification metrics used, except sensitivity. The proposed method performs significantly better than the fusion with the average operator in 10 out of 12 cases, while it yields lower p-values than the method using fusion with the maximum operator in all cases, irrespective of the quantitative indices used. In addition, the median values of the proposed algorithm are higher than those of the Fusion_MAX and Fusion_AVG methods for all the classification indices, as shown in Fig 11, except specificity with respect to the Fusion_MAX method. In this case, the proposed algorithm and the Fusion_MAX method obtain the same median specificity value of 0.733. However, the mean specificity value of the proposed algorithm is better than that of the Fusion_MAX method, as shown in Table 5. Analyzing the AUC values computed through leave-one-out cross-validation, presented in Fig 12, it is seen that the proposed fusion method outperforms both the Fusion_MAX and Fusion_AVG methods. The better performance of the proposed tumor gradation method is achieved because the texture content, which is effective for characterizing the tumor type, is used to obtain the most desirable information at each spatial position of each wavelet subband for a specific MR sequence.
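The contrast between the three fusion rules can be sketched in a few lines. This is a simplified illustration only: local variance stands in for the paper's texture gradient, and the subband shapes are arbitrary.

```python
# Per-voxel wavelet-coefficient fusion guided by a texture measure,
# versus simple maximum/average fusion.
import numpy as np
from scipy.ndimage import generic_filter

def local_variance(subband, size=3):
    """Local texture proxy: variance in a size^3 neighborhood."""
    return generic_filter(subband, np.var, size=size)

def fuse_texture(subbands):
    """Keep, at each voxel, the coefficient of the sequence whose
    neighborhood shows the highest texture content."""
    stack = np.stack(subbands)                      # (n_seq, x, y, z)
    texture = np.stack([local_variance(s) for s in subbands])
    winner = np.argmax(texture, axis=0)             # per-voxel sequence index
    return np.take_along_axis(stack, winner[None], axis=0)[0]

rng = np.random.default_rng(0)
# Toy subbands standing in for the T1, T1C, T2 and FLAIR coefficients
subbands = [rng.standard_normal((8, 8, 8)) for _ in range(4)]

fused_texture = fuse_texture(subbands)
fused_max = np.max(np.stack(subbands), axis=0)      # Fusion_MAX analogue
fused_avg = np.mean(np.stack(subbands), axis=0)     # Fusion_AVG analogue
```

Unlike Fusion_MAX and Fusion_AVG, which apply the same rule everywhere, the texture-guided rule lets each spatial position be dominated by the sequence that is locally most informative.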

3.5 Importance of multispectral co-occurrence

The proposed algorithm computes the co-occurrence of detail wavelet coefficients from multiple subbands and extracts Haralick features from those multispectral co-occurrence matrices. In this section, the performance of the proposed algorithm is compared with that of two existing multi-scale GLCM based methods, namely MS-GLCM (2013) [12] and MGLCM (2016) [17]. In these methods, the features are extracted for each of the T1, T1C, T2, and FLAIR modalities and then concatenated to obtain the final feature vector. In order to keep the experimental setup identical, seven Haralick features are considered and the SVM with linear kernel is used as the classifier for MS-GLCM (2013) [12] and MGLCM (2016) [17], as done for the proposed tumor gradation algorithm. Figs 13 and 14 and Tables 3–6 present the improvement in classification performance using multispectral co-occurrence matrices over these two related methods. In Figs 13 and 14, the proposed algorithm using multispectral co-occurrence matrices is named “Multispectral Co-Occurrence”.
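Haralick features of the kind mentioned above are computed from a normalized co-occurrence matrix; the sketch below shows three representative descriptors (contrast, energy, homogeneity), which may or may not coincide with the seven used in the paper.

```python
# A subset of Haralick descriptors from a co-occurrence count matrix.
import numpy as np

def haralick_subset(counts):
    P = counts / counts.sum()            # normalize counts to probabilities
    i, j = np.indices(P.shape)
    contrast = np.sum((i - j) ** 2 * P)  # weight of off-diagonal mass
    energy = np.sum(P ** 2)              # uniformity of the distribution
    homogeneity = np.sum(P / (1.0 + np.abs(i - j)))
    return contrast, energy, homogeneity

# Toy co-occurrence counts for three coefficient levels
counts = np.array([[5, 1, 0],
                   [1, 4, 2],
                   [0, 2, 3]], dtype=float)
print(haralick_subset(counts))
```

Concatenating such descriptors over all matrices (one per subband and direction) yields the feature vector fed to the SVM classifier.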

From the results reported in Tables 4–6, it is observed that the proposed algorithm achieves better classification indices than both methods for the ten-fold cross-validation strategy. In addition, analyzing the classification results of Table 3 for leave-one-out cross-validation, it is seen that the proposed tumor gradation algorithm performs better than both MS-GLCM (2013) and MGLCM (2016) with respect to all indices, except PPV, for which both the proposed method and MGLCM (2016) achieve the same value of 0.968182. Out of the total 24 cases of ten-fold cross-validation, the proposed method obtains significant p-values in 18 cases and lower, but not statistically significant, p-values in the remaining 6 cases, irrespective of the classification indices used, as shown in Tables 4–6. Analyzing the box plots of Fig 13 for ten-fold cross-validation, it is observed that the proposed method yields higher median values than both MS-GLCM (2013) and MGLCM (2016) for all the classification indices. The better performance of the proposed algorithm can also be seen in the ROC curve of Fig 14, obtained using the leave-one-out cross-validation strategy. The AUC value of the proposed method is 0.903, which is significantly higher than that of the MS-GLCM (2013) and MGLCM (2016) methods, as shown in Fig 14. The significantly better performance of the proposed algorithm with respect to both MS-GLCM (2013) and MGLCM (2016) is achieved because the proposed algorithm benefits from multiresolution wavelet analysis. Moreover, the proposed method considers multispectral co-occurrence matrices, rather than restricting itself to a single wavelet subband, to capture the image content of all high-frequency components of the image. This helps to derive the co-occurrence of pairs of wavelet coefficients from multiple subbands.

4 Conclusion

This paper presents a new algorithm for the gradation of brain tumors using conventional MR sequences. The contribution of this work is mainly three-fold:

  1. development of a wavelet based fusion method to integrate complementary information of multiple MR sequences;
  2. defining a new spatial relationship among the pair of wavelet coefficients of different wavelet subbands, called multispectral co-occurrence of wavelet coefficients; and
  3. demonstrating the performance of the proposed tumor gradation algorithm, along with a comparison with other related methods.

The proposed fusion method consolidates the wavelet coefficients of each of the four MR sequences, namely T1, T1C, T2 and FLAIR, based on the texture information content. It is observed that the fusion of different MR sequences boosts the performance of the proposed method significantly over that with individual MR sequences. The feature descriptor obtained from the proposed multispectral co-occurrence of wavelet coefficients effectively captures the important detail information across the wavelet subbands. Finally, the effectiveness of the proposed algorithm is evaluated by both leave-one-out and ten-fold cross-validation strategies on a set of 274 real brain MR volumes obtained from the BRATS 2012 and BRATS 2014 data sets.

5 Appendix

5.1 Computation of multispectral co-occurrence matrices

In this section, the computation of the proposed multispectral co-occurrence matrix is elaborated with the help of an example image. Fig 15 shows a 4 × 4 fragment of the top gray-scale image of Fig 2 and the wavelet decomposition of the corresponding fragment. In this example, the wavelet coefficients of the detail subbands, namely horizontal (H), vertical (V) and diagonal (D), are normalized in the range between 0 and 4. The multispectral co-occurrence matrix is then computed for each wavelet detail subband at each direction, namely horizontal (0°), right-diagonal (45°), vertical (90°) and left-diagonal (135°). The first four matrices in Fig 15 portray the four multispectral co-occurrence matrices with respect to the horizontal subband, which represent the spatial dependency of inter- and intra-subband wavelet coefficients at the directions 0°, 45°, 90°, and 135°, respectively. Similarly, the multispectral co-occurrence matrices with respect to the vertical and diagonal subbands are also depicted in Fig 15. In this example, two neighboring points are separated by a distance of 1. To determine the element in the (i, j) position of the 0° matrix of the vertical subband, for example, the values of three terms are added, as the number of wavelet subbands is 3. Denoting by t_AB(i, j) the count of horizontally adjacent coefficient pairs between subbands A and B, these three terms are t_VH(i, j), t_VV(i, j), and t_VD(i, j), described as follows:

  • t_VH(i, j) denotes the number of times a wavelet coefficient of value i in the vertical subband and one of value j in the horizontal subband occur horizontally adjacent to each other with respect to the spatial coordinates within the respective subbands;
  • t_VV(i, j) denotes the total number of times two wavelet coefficients of values i and j occur horizontally adjacent to each other in the vertical subband; and
  • t_VD(i, j) denotes the number of times a wavelet coefficient of value i in the vertical subband and one of value j in the diagonal subband occur horizontally adjacent to each other with respect to the spatial coordinates within the respective subbands.
Fig 15. Calculation of multispectral co-occurrence matrix for all detail wavelet subbands at four directions.

https://doi.org/10.1371/journal.pone.0250964.g015

For example, the element in the (0, 0) position of the 0° matrix of the vertical subband is 5, as shown in Fig 15. To determine this number, the values of t_VH(0, 0), t_VV(0, 0), and t_VD(0, 0) are to be computed. Since no pair of wavelet coefficients of value 0 occurs horizontally in the vertical subband, t_VV(0, 0) = 0. In order to compute t_VD(0, 0), let us consider the coordinate position (2, 2) in the vertical subband. If the neighbors at position (2, 2) of the diagonal subband are considered, with the wavelet coefficient at position (2, 2) of the vertical subband as the center of the neighborhood, then two wavelet coefficients of value 0 occur horizontally adjacent to each other in the corresponding neighborhood region. Similarly, considering the coordinate positions (0, 0), (0, 2), and (2, 0) in the vertical subband with respect to the diagonal subband, a pair of wavelet coefficients of value 0, horizontally adjoined to each other, is found for each position. Therefore, t_VD(0, 0) = 4. Again, there exists only one such pair of wavelet coefficients of value 0 when computing the multispectral co-occurrence of the vertical subband with respect to the horizontal subband; hence, t_VH(0, 0) = 1. So, the entry at position (0, 0) of the matrix is 0 + 4 + 1 = 5. In a similar way, the other elements of this matrix, and the elements of the other multispectral co-occurrence matrices for different wavelet subbands and different angular relationships of adjacent neighbors, can be computed, as shown in Fig 15.
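The counting rule walked through above can be sketched in a few lines. This is a minimal 2-D illustration of the idea, not the paper's implementation; the toy arrays and helper names are assumptions.

```python
# Multispectral co-occurrence counting between wavelet subbands.
import numpy as np

def cooccurrence_term(A, B, i, j, d=(0, 1)):
    """Count spatial positions p with A[p] == i and B[p + d] == j,
    where d is the displacement (here: horizontally adjacent)."""
    count = 0
    rows, cols = A.shape
    for r in range(rows):
        for c in range(cols):
            rr, cc = r + d[0], c + d[1]
            if 0 <= rr < rows and 0 <= cc < cols:
                if A[r, c] == i and B[rr, cc] == j:
                    count += 1
    return count

def multispectral_entry(A, subbands, i, j, d=(0, 1)):
    """(i, j) entry of the matrix for reference subband A: the sum of
    pairwise terms against every detail subband (A itself included)."""
    return sum(cooccurrence_term(A, B, i, j, d) for B in subbands)
```

Accumulating one term per subband is what makes the matrix capture inter-subband as well as intra-subband spatial dependency.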

5.2 Co-occurrence matrices for volumetric data

Let θ be the angle between the displacement vector (blue line) and the z-axis, and φ be the angle between the projection of the vector on the xy plane and the x-axis, as depicted in Fig 16. Since the co-occurrence matrix is computed over volumetric data, there are 26 possible displacement vectors to the immediate neighbors of a voxel. Among them, only 13 are distinct, because each vector and its antipodal counterpart produce the same co-occurrence counts; these are shown in Table 7 along with the corresponding values of θ and φ. For simplicity of computation, the displacement is usually set to one voxel in distance, which thus yields 13 different configurations of wavelet coefficient pairs in 3-D space.
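The reduction from 26 neighbor offsets to the 13 distinct vectors of Table 7 can be sketched by keeping one representative of each antipodal pair:

```python
# Enumerate 3-D unit displacements and deduplicate antipodal pairs.
from itertools import product

# All 26 immediate-neighbor displacements in 3-D
offsets = [d for d in product((-1, 0, 1), repeat=3) if d != (0, 0, 0)]

def first_nonzero_positive(d):
    """True for exactly one of each antipodal pair (d, -d)."""
    for x in d:
        if x != 0:
            return x > 0
    return False

distinct = [d for d in offsets if first_nonzero_positive(d)]
print(len(offsets), len(distinct))  # 26 13
```

Any other canonicalization of the antipodal pairs would give the same 13-element set up to sign, matching the configurations listed in Table 7.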

Fig 16. Illustration of the displacement vector in 3-D space.

https://doi.org/10.1371/journal.pone.0250964.g016

Table 7. Displacement vector for multispectral co-occurrence matrices.

https://doi.org/10.1371/journal.pone.0250964.t007

5.3 Classification indices

The basic idea of performing a diagnostic test is to confirm the presence or absence of a particular disease so that treatment planning can be done properly. The validity of a diagnostic test can be assessed quantitatively by different classification indices, namely accuracy, sensitivity, specificity, positive predictive value and negative predictive value, with the help of ground truth information.

5.3.1 Accuracy.

The classification accuracy is defined as the number of correct predictions divided by the total number of predictions made, given by

Accuracy = (TP + TN) / (TP + TN + FP + FN),    (4)

where TP denotes true positive, TN true negative, FP false positive, and FN false negative.

5.3.2 Sensitivity.

Sensitivity is the ability of a test to correctly classify an individual as diseased. It is defined as the probability of a positive test when the disease is present, which is given by

Sensitivity = TP / (TP + FN).    (5)

5.3.3 Specificity.

Specificity is the ability of a test to correctly classify an individual as disease-free. It is defined as the probability of a negative test when the disease is absent, which is given as follows:

Specificity = TN / (TN + FP).    (6)

5.3.4 Positive predictive value.

The positive predictive value (PPV) is the percentage of patients with a positive test who actually have the disease. It indicates how many of the test positives are true positives; a high value suggests that a new test is performing as well as the gold standard. The PPV is defined as the probability that a patient has the disease when the test is positive, defined as follows:

PPV = TP / (TP + FP).    (7)

5.3.5 Negative predictive value.

The negative predictive value (NPV) is the percentage of patients with a negative test who do not have the disease. It indicates how many of the negative tests are true negatives. The NPV is defined as the probability that a patient does not have the disease when the test is negative, which is given by

NPV = TN / (TN + FN).    (8)

Higher values of the different classification indices, namely accuracy, sensitivity, specificity, PPV and NPV, indicate better classification performance.
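The five indices of Eqs (4)–(8) follow directly from the confusion-matrix counts; the counts below are illustrative only, not the paper's results.

```python
# Classification indices from confusion-matrix counts.
def indices(TP, TN, FP, FN):
    accuracy = (TP + TN) / (TP + TN + FP + FN)   # Eq (4)
    sensitivity = TP / (TP + FN)                 # Eq (5)
    specificity = TN / (TN + FP)                 # Eq (6)
    ppv = TP / (TP + FP)                         # Eq (7)
    npv = TN / (TN + FN)                         # Eq (8)
    return accuracy, sensitivity, specificity, ppv, npv

# Toy counts for a two-class (high/low grade) problem
print(indices(TP=90, TN=30, FP=10, FN=5))
```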

5.3.6 Area under curve.

The area under the receiver operating characteristic (ROC) curve, or simply the area under curve (AUC), is a popular performance measure for classification problems at various test cut-off points. It evaluates and compares classification rules when one cannot decide a priori what classification threshold will be used. The diagnostic performance of a test, that is, its accuracy in discriminating diseased cases from normal cases, is evaluated using ROC curve analysis. The ROC curve displays all possible cut-off points, and one can read off the optimal cut-off for correctly identifying diseased or non-diseased subjects. The ROC curve is a graphical display of sensitivity, also known as the true positive rate (TPR), on the y-axis against (1 − specificity), or the false positive rate (FPR), on the x-axis for varying cut-off points of the test values.

The AUC is an effective way to summarize the overall diagnostic accuracy of a test. The values of AUC lie between 0.5 and 1. A value of 0.5 indicates that the ROC curve falls on the line of equality and hence suggests that the diagnostic test has no discriminatory ability. An ROC curve above this diagonal line, that is, an AUC value close to 1, represents better performance of the test. Furthermore, an AUC value of 1 indicates that the diagnostic test perfectly differentiates the diseased from the non-diseased subjects. This implies that both sensitivity and specificity are one and both errors, namely false positive and false negative, are zero.
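The AUC can equivalently be computed as the normalized Mann-Whitney U statistic: the probability that a randomly chosen diseased case receives a higher score than a randomly chosen non-diseased case. A minimal sketch, with illustrative scores:

```python
# AUC as the probability that a positive case outscores a negative one.
import numpy as np

def auc(scores, labels):
    scores, labels = np.asarray(scores), np.asarray(labels)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # Count wins (and half-credit ties) over every positive/negative pair
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

print(auc([0.9, 0.8, 0.3, 0.1], [1, 1, 0, 0]))  # 1.0 for perfect separation
```

An AUC of 0.5 under this formulation corresponds to scores that carry no information about the class, matching the line-of-equality interpretation above.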

References

  1. Stewart BW, Wild CP. World Cancer Report 2014. International Agency for Research on Cancer; 2014.
  2. Alic L, Niessen WJ, Veenland JF. Quantification of Heterogeneity as a Biomarker in Tumor Imaging: A Systematic Review. PLoS ONE. 2014;9(10):e110300.
  3. Dean BL, Drayer BP, Bird CR, Flom RA, Hodak JA, Coons SW, et al. Gliomas: Classification with MR Imaging. Radiology. 1990;174(2):411–415. pmid:2153310
  4. Amin J, Sharif M, Gul N, Yasmin M, Shad SA. Brain Tumor Classification Based on DWT Fusion of MRI Sequences Using Convolutional Neural Network. Pattern Recognition Letters. 2020;129:115–122.
  5. Chen C, Dou Q, Jin Y, Chen H, Qin J, Heng PA. Robust Multimodal Brain Tumor Segmentation via Feature Disentanglement and Gated Fusion. In: Medical Image Computing and Computer Assisted Intervention—MICCAI 2019. Springer International Publishing; 2019. p. 447–456.
  6. Mallat S. A Theory for Multiresolution Signal Decomposition: The Wavelet Representation. IEEE Transactions on Pattern Analysis and Machine Intelligence. 1989;11(7):674–693.
  7. Vijayakumar C, Damayanti G, Pant R, Sreedhar CM. Segmentation and Grading of Brain Tumors on Apparent Diffusion Coefficient Images Using Self-Organizing Maps. Computerized Medical Imaging and Graphics. 2007;31:473–484.
  8. Maji P, Roy S. Rough-Fuzzy Clustering and Unsupervised Feature Selection for Wavelet Based MR Image Segmentation. PLoS ONE. 2015;10(4):e0123677.
  9. Maji P, Roy S. SoBT-RFW: Rough-Fuzzy Computing and Wavelet Analysis Based Automatic Brain Tumor Detection Method from MR Image. Fundamenta Informaticae. 2015;142:237–267.
  10. Islam A, Reza SMS, Iftekharuddin KM. Multifractal Texture Estimation for Detection and Segmentation of Brain Tumors. IEEE Transactions on Biomedical Engineering. 2013;60(11):3204–3215.
  11. Bauer S, May C, Dionysiou D, Stamatakos G, Buchler P, Reyes M. Multiscale Modeling for Image Analysis of Brain Tumor Studies. IEEE Transactions on Biomedical Engineering. 2012;59(1):25–29.
  12. de Siqueira FR, Schwartz WR, Pedrini H. Multi-Scale Gray Level Co-Occurrence Matrices for Texture Description. Neurocomputing. 2013;120:336–345.
  13. Gelzinis A, Verikas A, Bacauskiene M. Increasing the Discrimination Power of the Co-Occurrence Matrix-Based Features. Pattern Recognition. 2007;40(9):2367–2372.
  14. Hu Y, Zhao C. Unsupervised Texture Classification by Combining Multi-Scale Features and K-Means Classifier. In: 2009 Chinese Conference on Pattern Recognition; 2009. p. 1–5.
  15. Pacifici F, Chini M, Emery WJ. Urban Land-Use Multi-Scale Textural Analysis. In: IGARSS 2008—2008 IEEE International Geoscience and Remote Sensing Symposium. vol. 5; 2008. p. 342–345.
  16. Walker RF, Jackway PT, Longstaff D. Genetic Algorithm Optimization of Adaptive Multi-Scale GLCM Features. International Journal of Pattern Recognition and Artificial Intelligence. 2003;17(1):17–39.
  17. Dandan L, Ruixiang M, Yan W, Xianli Z, Qiao J. Multi-Scale GLCM Based Quantitative Assessment Method for Ultrasound Liver Fibrosis Images. In: Proceedings of 13th IEEE International Conference on Signal Processing (ICSP); 2016. p. 677–680.
  18. Law M, Yang S, Wang H, Babb JS, Johnson G, Cha S, et al. Glioma Grading: Sensitivity, Specificity, and Predictive Values of Perfusion MR Imaging and Proton MR Spectroscopic Imaging Compared with Conventional MR Imaging. American Journal of Neuroradiology. 2003;24:1989–1998. pmid:14625221
  19. Li GZ, Yang J, Ye CZ, Geng DY. Degree Prediction of Malignancy in Brain Glioma Using Support Vector Machines. Computers in Biology and Medicine. 2006;36:313–325.
  20. Zacharaki EI, Wang S, Chawla S, Yoo DS, Wolf R, Melhem ER, et al. Classification of Brain Tumor Type and Grade Using MRI Texture and Shape in a Machine Learning Scheme. Magnetic Resonance in Medicine. 2009;62(6):1609–1618. pmid:19859947
  21. Inano R, Oishi N, Kunieda T, Arakawa Y, Yamao Y, Shibata S, et al. Voxel-Based Clustered Imaging by Multiparameter Diffusion Tensor Images for Glioma Grading. NeuroImage: Clinical. 2014;5:396–407. pmid:25180159
  22. Ryu YJ, Choi SH, Park SJ, Yun TJ, Kim JH, Sohn CH. Glioma: Application of Whole-Tumor Texture Analysis of Diffusion-Weighted Imaging for the Evaluation of Tumor Heterogeneity. PLoS ONE. 2014;9(9):e108335.
  23. Hill PR, Canagarajah CN, Bull DR. Image Segmentation Using a Texture Gradient Based Watershed Transform. IEEE Transactions on Image Processing. 2003;12(12):1618–1633.
  24. Vapnik V. The Nature of Statistical Learning Theory. New York: Springer; 1995.
  25. Menze BH, et al. The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS). IEEE Transactions on Medical Imaging. 2015;34(10):1993–2024. pmid:25494501