Spatial Uncertainty Modeling of Fuzzy Information in Images for Pattern Classification

The modeling of the spatial distribution of image properties is important for many pattern recognition problems in science and engineering. Mathematical methods are needed to quantify the variability of this spatial distribution, on the basis of which an optimal classification decision can be made. However, image properties are often subject to uncertainty due to both incomplete and imprecise information. This paper presents an integrated approach for estimating the spatial uncertainty of vagueness in images using the theory of geostatistics and the calculus of probability measures of fuzzy events. Such a model for the quantification of spatial uncertainty is utilized as a new image feature extraction method, based on which classifiers can be trained to perform the task of pattern recognition. Applications of the proposed algorithm to the classification of various types of image data suggest the usefulness of the proposed uncertainty modeling technique for texture feature extraction.


Introduction
Different types of images are used in the diverse applications of image classification; to name a few, medical images, biological images, remote-sensing images, scene images, and so on. The information contents of different image types differ from each other, but they may share some common properties. To classify categorical images, a numerical description of the images, known as an image feature, is necessary to capture the distinctive characteristics that can be used for training classifiers or as labeled samples. A critical challenge in the discriminative quantification of image properties over various regions of interest is that these regions are usually subject to noise and to vague boundaries between objects and background, which are often found in medical, life-science, and natural data [1]–[7]. Consequently, these factors adversely affect the classification performance. To deal with the uncertainty of image information, statistical measures of sets of images are often utilized to construct probability models of images, in which the images are considered as random variables. An approach for handling imprecision rather than randomness in images is to consider them as fuzzy events, so that some non-probabilistic measure of uncertainty can be established.
Based on the example made to interpret the definition of the entropy of fuzzy sets [8], the above notions of uncertainty in images can be elucidated by considering examples of the outcomes of image segmentation and image enhancement. Figure 1 shows an original part of an MRI image of the brain, in which the bright areas are the white matter hyperintensities of the brain. The image intensity levels are in the range [0, 1]. Figures 2 and 3 are two binary segmentation results of the original image (Figure 1), obtained using the gray-level threshold of 0.1647 given by Otsu's image segmentation method [9] (there are well-known public-domain software packages for MRI brain segmentation that include prior anatomical information, such as SPM [10] and its latest version SPM8, and FSL [11]; the segmentation method used here serves only to provide a simple example of a segmented section of the brain on MRI for the conceptual discussion in this study), and by setting the threshold at 0.25, respectively. Figure 4 shows an enhancement result of the original image using an adjustment method that increases the contrast of the image by mapping the values of the original intensity image to new values such that 1% of the data is saturated at the low and high intensities of the input data. Figure 5 is another enhancement result of the original image using histogram equalization, which enhances the contrast of the image by transforming the values in the original image so that the histogram of the output image approximately matches the uniform distribution. The pixel intensities in Figures 2 and 3 are either black (background) or white (object). In other words, the uncertainty involved in the segmentation outcomes refers to the presence or absence of object pixels, which can be modeled as a random variable taking on the values of either 1 (white object) or 0 (black background).
The uncertainty associated with the outcomes shown in Figures 4 and 5 can be expressed in terms of the degree of grayness of the pixels, which is subject to imprecision or subjectiveness. Therefore, such a description of imprecise pixel values can be considered a fuzzy set. In fact, image enhancement is a process that attempts to improve the appearance of an image so that it looks subjectively better. There exist no standards that dictate how the image should look, but one can tell whether it has been improved by considering, for example, the detailed contents or contrast of the image [12].
The treatment of uncertainty in images has been widely discussed using the theory of fuzzy sets [13,14] and geostatistics [15,16]. Fuzzy logic addresses uncertainty that is caused by imprecision or vagueness in the provided information and models such uncertainty with the admission of degrees of possibility. By using the notion of fuzzy sets, an event, which is a set of outcomes of an experiment to which a probability is assigned, can be extended to define fuzzy events, significantly enhancing the applications of the theory of probability in fields in which uncertainty due to fuzziness is inherently pervasive [17]. Although the integration of the theories of probability and fuzzy sets is natural in image analysis and pattern recognition, little effort has been spent on exploring the potential application of this idea since a work on binary image thresholding carried out by extending the probability measure of fuzzy events to calculate the fuzzy measure of similarity between two sets [18].
Furthermore, the concept of modeling uncertainty associates different sources of uncertainty with various deterministic and non-deterministic laws of physical and dynamical processes, depending on the purpose for which the models are applied. Many pattern classification problems, such as image analysis, involve spatial modeling that naturally calls for techniques covered in geostatistics. In fact, variability in images with alternating high and low pixel values is evident in many practical domains of medicine, science, and engineering. Modeling spatial continuity in images is therefore critical to addressing uncertainty, since a spatial model of image properties will lead to a different assessment of uncertainty compared with the assumption that everything is random [19]. Research works on geostatistics have been reported in the literature, including ordinary kriging and indicator kriging, applied to image processing [16,20] and the classification of remote-sensing images [21]–[23].
Combinations of geostatistics, fuzzy sets, and fuzzy cluster analysis have been developed for image enhancement [15] and image segmentation [24,25]. However, it appears that, for the first time, this paper presents an integrated approach to measuring spatial uncertainty in images by combining the calculi of probability and fuzzy sets to incorporate their interdependencies. The mathematical development of the proposed method is based on the notion of the probability measure of fuzzy sets and the definition of the entropy of a fuzzy event with respect to a probability distribution derived from the theory of geostatistics. The novelty of this mathematical model is that the inherently fuzzy partitions of the image space can be used to model the spatial uncertainty of the image by coding the data as probability values at different degrees of membership of belonging to the fuzzy partitions. As a result, the derivations of the fuzzy sets and spatial probability measures of the image uncertainty allow the quantification of the entropy of fuzzy pixels with respect to their probability distributions. This type of measure of uncertainty or entropy can be readily utilized as a pattern feature for image classification.

Entropy of a fuzzy event
Consider a Euclidean n-space Rⁿ and a probability space represented by a triplet (Rⁿ, B, P), where B is the σ-field of Borel sets in Rⁿ (Borel sets in a topological space are the σ-algebra generated by the open sets; an algebra of sets that is closed under countable unions is known as a σ-algebra, σ-field, or Borel field [26]) and P is a probability measure over Rⁿ. Also, let x ∈ Rⁿ and B ∈ B. The probability of B, P(B), can be expressed as

P(B) = ∫_Rⁿ β_B(x) dP = E(β_B),  (1)

where β_B : Rⁿ → {0, 1} is the characteristic function of B, and E(β_B) is the expectation of β_B. Let A ⊂ Rⁿ be a fuzzy set whose membership function μ_A : Rⁿ → [0, 1] is Borel measurable. The probability of the fuzzy event A can be defined by the Lebesgue–Stieltjes integral [17] as

P(A) = ∫_Rⁿ μ_A(x) dP = E(μ_A).  (2)

As Eq. (1) defines the probability of a crisp event as the expectation of its characteristic function, so the probability of a fuzzy event defined in Eq. (2) is the expectation of its membership function. The presented definitions of a fuzzy event and its probability constitute a generalized framework for the theories of fuzzy sets, probability, and information [17]. Further study has also shown that the theory of probability has a structure rich enough to incorporate fuzzy events within its framework and logically generate probabilities of fuzzy events, so that the uncertainty of outcomes and the uncertainty of imprecision can be successfully unified [27].
To explore specifically how the concepts of fuzzy sets, probability, and information can be brought into a coherent framework, we turn the discussion to the notion of the entropy of a probability distribution, which is mathematically expressed as

H(x) = −∑_{i=1}^{n} p_i log(p_i),  (3)

where x is a random variable taking values x_1, ..., x_n with respective probabilities p_1, ..., p_n, and H(x) is the entropy of the distribution P = {p_1, ..., p_n}; for p_i = 0, the term p_i log(p_i) is taken as 0, indicating the non-feasibility of obtaining information about an impossible event.
The definition expressed in Eq. (3) suggests that the entropy of a fuzzy event A = {x_1, ..., x_n} with respect to a probability distribution P = {p_1, ..., p_n} can be defined as [17]

H_P(A) = −∑_{i=1}^{n} μ_A(x_i) p_i log(p_i),  (4)

where H_P(A) can be interpreted as the uncertainty of outcomes associated with a fuzzy event.
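As a concrete illustration, Eq. (4) can be sketched in a few lines of Python/NumPy (the function name and array arguments are ours, not part of the paper): the Shannon terms p_i log(p_i) are simply weighted by the membership grades μ_A(x_i).

```python
import numpy as np

def fuzzy_event_entropy(mu, p):
    """Entropy H_P(A) of a fuzzy event A (Eq. (4)): the Shannon terms
    p_i*log(p_i) weighted by the membership grades mu_A(x_i)."""
    mu = np.asarray(mu, dtype=float)
    p = np.asarray(p, dtype=float)
    terms = np.zeros_like(p)
    mask = p > 0
    # Convention from Eq. (3): p_i*log(p_i) = 0 when p_i = 0, since no
    # information can be obtained about an impossible event.
    terms[mask] = p[mask] * np.log(p[mask])
    return -np.sum(mu * terms)
```

With μ_A ≡ 1 the quantity reduces to the ordinary Shannon entropy of Eq. (3), and shrinking any membership grade can only decrease H_P(A).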
The above definition of the entropy of a fuzzy set forms the basis for our modeling of the spatial uncertainty of the intensity distribution in an image. It has been widely known that categorical information is inherently imprecise in images, particularly in the context of biology and medicine [14]–[31]. First, we apply the theory of fuzzy sets as a calculus for the treatment of uncertainty associated with the classification of image intensity distributions. Second, we use geostatistical tools to study spatial continuity as a transition probability in an image in order to evaluate its spatial uncertainty. These two types of uncertainty measures allow fuzzy sets, probability, and information to work in concert for the identification of image classes.

Modeling image-content imprecision with the FCM algorithm
Uncertainty in an image, which is inherently due to the imprecise description of the image content, can be mathematically modeled by partitioning the image space with the fuzzy c-means (FCM) algorithm [32]. Let an image I of size L × W be partitioned into a number of imprecise clusters. Mathematically, let M_fc be a fuzzy c-partition space and X be a subset of the real g-dimensional vector space R^g. The FCM clustering is based on the minimization of the fuzzy objective function J_F : M_fc × R^{cg} → R₊, which is defined as [32]

J_F(U, v) = ∑_{i=1}^{L×W} ∑_{j=1}^{c} (μ_ij)^q ‖x_i − v_j‖²,  (5)

where q ∈ [1, ∞) is the fuzzy weighting exponent, U = [μ_ij] is the fuzzy c-partition matrix, and v = (v_1, ..., v_c) is the vector of cluster centers. The fuzzy objective function expressed in Eq. (5) is subject to the following constraints:

∑_{j=1}^{c} μ_ij = 1, i = 1, ..., L×W;  0 < ∑_{i=1}^{L×W} μ_ij < L×W, j = 1, ..., c,  (6)

where μ_ij ∈ [0, 1], i = 1, ..., L×W, j = 1, ..., c. The objective function J_F(U, v) is a squared-error clustering criterion that is minimized to optimally determine U and v. A solution to the minimization of the objective function is obtained by iteratively updating U and v until some convergence criterion is reached [32]. Thus, given c fuzzy partitions, the FCM assigns each pixel to the c clusters with its respective membership grades. In other words, FCM-based cluster analysis can be utilized to model uncertainty in an image in the context of imprecise boundaries or ill-defined classes.
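The alternating-update scheme for Eqs. (5) and (6) can be sketched as follows; this is a minimal NumPy implementation of standard FCM (the function signature, random initialization, and stopping test are our assumptions, not the paper's implementation).

```python
import numpy as np

def fcm(X, c, q=2.0, n_iter=100, tol=1e-5, seed=0):
    """Alternating minimization of the fuzzy objective J_F(U, v) of
    Eq. (5), subject to the row-sum constraint of Eq. (6)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)   # Eq. (6): memberships of each pixel sum to 1
    v = None
    for _ in range(n_iter):
        Uq = U ** q
        v = (Uq.T @ X) / Uq.sum(axis=0)[:, None]        # membership-weighted centers
        d = np.linalg.norm(X[:, None, :] - v[None, :, :], axis=2)
        d = np.fmax(d, 1e-12)                           # guard against zero distances
        # Standard FCM update: u_ij = 1 / sum_k (d_ij / d_ik)^(2/(q-1))
        U_new = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2.0 / (q - 1.0))).sum(axis=2)
        if np.max(np.abs(U_new - U)) < tol:
            U = U_new
            break
        U = U_new
    return U, v
```

Each row of the returned U holds the membership grades of one pixel in the c imprecise clusters, which are the quantities coded by the α-level indicators in the next section.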

Modeling spatial uncertainty of imprecision with indicator kriging formalism
The uncertainty involved with imprecision in image intensity has been addressed with the notion of fuzziness, whereas the uncertainty regarding pixel locations refers to spatial randomness, which can be modeled with the indicator kriging formalism of geostatistics [33]. Indicator kriging has been applied as a natural tool for determining a non-parametric conditional probability distribution of categorical data [34]. An interest of this study is to utilize indicator kriging to construct local spatial distributions of uncertainty in an image, which can be incorporated within the framework of the probability measure of fuzzy information. Let f_i be the intensity value of a pixel located at i, i = 1, ..., L×W, where L×W is the size of the image. Here, the purpose of applying the indicator formalism is to estimate the probability distribution of uncertainty at an unsampled location i. The cumulative distribution function is usually estimated with a set of cutoff thresholds z_t, t = 1, ..., T, and the probabilities are then determined by coding the data as binary indicator values. The indicator coding at location i is defined as follows [34]:

I(i; z_t) = 1 if f_i ≤ z_t, and 0 otherwise.  (7)

Using thresholds z_t, t = 1, ..., T, as values in the range of the image intensity does not conveniently offer a procedure for modeling spatial uncertainty in an image (for example, the thresholds can be chosen from the histogram of the data to represent percentiles, which are the values below given percentages of the observations [35]). Therefore, instead of using z_t, we make use of the previously discussed fuzzy image partitions or clusters v_j, j = 1, ..., c, which allow every pixel to belong to every partition with different fuzzy membership grades, and apply a series of α-level cuts, α ∈ [0, 1], to code the categorical pixels as follows:

I_j(i; α) = 1 if μ_ij ≥ α, and 0 otherwise,  (8)

where I_j(i; α) is the indicator that codes the assignment of f_i to cluster v_j with a fuzzy membership grade equal to or greater than α.
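The α-level indicator coding of Eq. (8) is a simple thresholding of the FCM membership matrix; a vectorized sketch (array shapes and names are our assumptions):

```python
import numpy as np

def indicator_code(U, alphas):
    """Eq. (8): I_j(i; alpha) = 1 if the membership of pixel i in
    cluster v_j is >= alpha, and 0 otherwise.

    U      : (n_pixels, c) fuzzy membership matrix from the FCM.
    alphas : sequence of alpha-level cuts, e.g. [0.5, 0.6, 0.7, 0.8, 0.9].
    Returns an array of shape (n_alphas, n_pixels, c) of binary indicators.
    """
    U = np.asarray(U, dtype=float)
    alphas = np.asarray(alphas, dtype=float)
    return (U[None, :, :] >= alphas[:, None, None]).astype(np.uint8)
```

These binary indicators play the role of the categorical data that ordinary kriging interpolates in the next step.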
Here, the α values can be selected as a set of fuzzy membership grades representing degrees of imprecision whose possibility is higher than the most fuzzy value of 0.5. The next step of the indicator kriging formalism is the determination of the cumulative distribution function (CDF), which characterizes the probability of f_i belonging to v_j with a membership value greater than or equal to α, and can be expressed as

F(i; α) = Prob{I_j(i; α) = 1}.  (9)

Taking advantage of the available information of M neighboring data, the conditional CDF is

F(i; α | (M)) = Prob{I_j(i; α) = 1 | I_j(m; α), m = 1, ..., M},  (10)

where M is the number of neighboring pixels of i.
The CDF conditional on the indicators expressed in Eq. (8) can be estimated using ordinary kriging [35], and the result of indicator kriging is a model of spatial uncertainty at the pixel at location i:

F̂(i; α | (M)) = ∑_{m=1}^{M} w_m(i, α) I_j(m; α),  (11)

where w_m(i, α) is the ordinary kriging weight that indicates the influence of neighboring pixel m over pixel i with respect to level cut α. These weights can be optimally determined by the ordinary kriging system of equations [35]:

∑_{m'=1}^{M} w_{m'}(i, α) γ(m, m') + λ = γ(i, m), m = 1, ..., M;  ∑_{m=1}^{M} w_m(i, α) = 1,  (12)

where γ(m, m') denotes the semi-variogram value between I_j(m; α) and I_j(m'; α), and λ is a Lagrange multiplier. The indicator semi-variogram that is experimentally calculated for lag distance h is defined as the average squared difference of values separated by h:

γ(h) = [1 / (2N(h))] ∑_{n=1}^{N(h)} [I(u_n; α) − I(u_n + h; α)]²,  (13)

where N(h) is the number of pairs for lag h. Alternatively, the ordinary kriging system can be expressed in matrix form as

G w = g,  (14)

where G is the square, symmetrical matrix that represents the semi-variogram values between the known neighboring values I_j(m; α), m = 1, ..., M; w is the vector of kriging weights; and g is the vector representing the semi-variogram values between I_j(m; α) and I_j(i; α), m = 1, ..., M, m ≠ i. These terms are defined as [35]

G = [ γ_11 γ_12 ... γ_1M 1; γ_21 γ_22 ... γ_2M 1; ...; γ_M1 γ_M2 ... γ_MM 1; 1 1 ... 1 0 ],  (15)

where γ_12 is the semi-variance of I_j(m=1; α) and I_j(m=2; α);

w = [w_1, ..., w_M, λ]^T,  (16)

where w_1, ..., w_M are the kriging weights, λ is a Lagrange multiplier, and ∑_{m=1}^{M} w_m = 1; and

g = [γ_i1, ..., γ_iM, 1]^T,  (17)

where γ_iM is the semi-variance of I_j(i; α) and I_j(M; α).
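The bordered system of Eqs. (14)–(17) can be assembled and solved directly; the sketch below (function and argument names are our assumptions) takes the semi-variogram values as given and returns the weights and the Lagrange multiplier.

```python
import numpy as np

def ok_weights(G_semi, g_semi):
    """Assemble and solve the ordinary kriging system G w = g of
    Eqs. (14)-(17). G_semi is the M x M matrix of semi-variogram values
    between the M neighbors; g_semi holds the semi-variogram values
    between each neighbor and the estimation location i."""
    G_semi = np.asarray(G_semi, dtype=float)
    M = G_semi.shape[0]
    G = np.ones((M + 1, M + 1))
    G[:M, :M] = G_semi
    G[M, M] = 0.0                       # bordered row/column enforce sum(w) = 1
    g = np.append(np.asarray(g_semi, dtype=float), 1.0)
    sol = np.linalg.solve(G, g)         # w = G^{-1} g
    return sol[:M], sol[M]              # kriging weights, Lagrange multiplier
```

The estimated conditional CDF at pixel i then follows Eq. (11) as the dot product of the returned weights with the indicator values of the M neighbors.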
Given that G⁻¹ exists, the kriging weights can be obtained by solving

w = G⁻¹ g.  (18)

An implicit assumption of the ordinary kriging system presented in Eq. (12) is that the underlying statistics are invariant in space under translation. Such a property is known as statistical stationarity. However, statistical stationarity is a property of a random function, not an inherent property of real data [36]. This nonstationary behavior is also true of medical images, in which different internal organs can have different variations of the image intensity and the mean of the image changes locally. Here, kriging with a nonstationary mean is applied to enhance the reliability of the estimate of the kriging weights. This technique is called universal kriging (UK) [37,38].
In ordinary kriging, the estimation is carried out with the error variable relative to a stationary mean that must be known at all positions and can be set as the global mean or modeled with a drift or local trend. A local mean with a drift m(u) can be modeled as a linear combination of the geometric coordinates of the pixels within a local neighborhood as

m(u) = ∑_{k=0}^{K} a_k L_k(u),  (19)

where a_k are unknown drift coefficients, L_0(u) = 1 (a constant function for the constant-mean case), and L_k(u), k = 1, ..., K, are polynomials or basis functions, which can be modeled as first-degree or second-degree terms as follows, respectively:
L_1(f_k) = x_1k, L_2(f_k) = x_2k (first degree); L_1(f_k) = x_1k, L_2(f_k) = x_2k, L_3(f_k) = x_1k², L_4(f_k) = x_2k², L_5(f_k) = x_1k x_2k (second degree),

where x_1k and x_2k are the row-wise and column-wise pixel coordinates of f_k, respectively. The drift effect can be incorporated into the ordinary kriging system as additional constraints on the kriging weights. Solving this extended set of simultaneous equations yields a set of universal kriging weights that model the drift within the local neighborhood around the location of the unknown value. In general, the UK system can be expressed with the following matrix structure:

[ G*  L ] [ w* ]   [ g* ]
[ L^T 0 ] [ λ  ] = [ l_i ],  (20)

where G*, w*, and g* are, respectively, G without its last row and last column, w without its last row, and g without its last row, as defined for the ordinary kriging system; L is the matrix of basis-function values with entries L_km = L_k(f_m) (for example, L_11 denotes L_1(f_1)), m = 1, ..., M, k = 0, ..., K; λ is the vector of Lagrange multipliers; and l_i is the vector of basis functions evaluated at the estimation location i.
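Assembling the UK system of Eq. (20) with a first-degree drift can be sketched as follows (the function signature and the choice of a first-degree drift are our assumptions; the second-degree case only adds quadratic columns to L):

```python
import numpy as np

def uk_weights(G_star, g_star, coords, coord_i):
    """Universal kriging system (Eq. (20)) with a first-degree drift
    m(u) = a0 + a1*x1 + a2*x2. coords holds the (x1, x2) coordinates of
    the M neighbors; coord_i is the estimation location."""
    G_star = np.asarray(G_star, dtype=float)
    coords = np.asarray(coords, dtype=float)
    M = G_star.shape[0]
    # Basis-function matrix L: columns L_0 = 1, L_1 = x1, L_2 = x2 at the neighbors.
    L = np.column_stack([np.ones(M), coords[:, 0], coords[:, 1]])
    K = L.shape[1]
    A = np.zeros((M + K, M + K))
    A[:M, :M] = G_star
    A[:M, M:] = L
    A[M:, :M] = L.T                     # drift terms act as additional constraints
    b = np.concatenate([np.asarray(g_star, dtype=float),
                        [1.0, coord_i[0], coord_i[1]]])
    sol = np.linalg.solve(A, b)
    return sol[:M], sol[M:]             # UK weights, Lagrange multipliers
```

The unbiasedness rows L^T w = l_i force the weighted combination of neighbors to reproduce the local drift exactly at the estimation location.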

Image classification with integrated uncertainty modeling
The integrated framework for modeling uncertainty due to both imprecision and randomness has been formulated. In other words, a new type of image feature has been introduced in terms of the probability of a fuzzy event for pattern classification. In this context, a fuzzy event can be an imprecise object or sample to be categorically identified. In application, the next task is to decide which class best matches the feature extracted from the unknown sample. This is pattern classification, which associates the appropriate class label with the test sample by using the descriptive features. A general approach is to use a function or classifier, such as a distance measure, to find the class whose features differ least from those of the unknown sample. The discussion concludes with a decision or classification procedure for a computed set of entropy features for classes ω_s, s = 1, ..., K, as follows.
Let the fuzzy cluster centers be ordered v_1 > ... > v_c and the α-level cuts be ordered α_1 > ... > α_b (if such orders do not exist, the elements are rearranged accordingly). Also, let H_{P_s}(A, α_r, v_j), r = 1, ..., b, j = 1, ..., c, be the entropy of fuzzy event A with respect to a probability distribution P defined in Eq. (4), obtained by using α-level cut α_r and fuzzy partition v_j for class ω_s. Given w_s(L) = f(ω_s | L_j, j = 1, ..., c), where f(·) is a monotonically increasing discriminant function (the larger the value of the function, the better the match), the decision for classification is carried out as follows:

Decide ω_{s*} if w_{s*}(L) = max_{s=1,...,K} w_s(L).  (21)

It is noted that the above decision rule expressed in Eq. (21) is general and can be applied with any type of pattern classifier.
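A minimal sketch of this decision procedure, using the negated Euclidean distance to trained entropy-feature prototypes as the monotonically increasing discriminant (the prototype dictionary and function names are our assumptions, not the paper's implementation):

```python
import numpy as np

def classify(features, prototypes):
    """Decision rule of Eq. (21): the class whose trained entropy-feature
    prototype is nearest wins; -distance serves as the monotonically
    increasing discriminant w_s."""
    x = np.asarray(features, dtype=float)
    scores = {label: -float(np.linalg.norm(x - np.asarray(proto, dtype=float)))
              for label, proto in prototypes.items()}
    return max(scores, key=scores.get)  # arg max_s w_s(L)
```

The feature vector here would be the entropies H_{P_s}(A, α_r, v_j) stacked over all α-level cuts and fuzzy partitions; any other classifier (e.g., Mahalanobis distance or k-NN, as used in the experiments below) can replace the discriminant.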

Detection of Mitochondria in Microscope Images
The proposed method was applied to the detection of mitochondria in microscope images. The mitochondrion is a membrane-bound organelle found in most eukaryotic cells. Mitochondria are considered the powerhouse of the cell because they serve as the platform for generating chemical energy. The visual information on mitochondria revealed by recent advanced nanoimaging technology opens doors for life-science researchers to gain insights into their spatial structure and spatial distribution within the cell. In order to simulate and model mitochondria using a large number of images, the first task in image processing is the automated detection of this organelle. In fact, the classification of molecular images has long been pursued at the intersection of engineering, computer science, and the life sciences [39,40]. However, there is always a strong demand for appropriate feature extraction methods for the automated identification of particular types of objects or regions of interest in cell biology, with different levels of technical challenge ranging from the detection of cell nuclei [41] to subcellular patterns [2]. If different types of images can be automatically distinguished by computerized methods, such an ability can help researchers quickly and accurately study cell function, discover mechanisms underlying complex diseases, and carry out spatial modeling and simulation of biological signaling pathways, which may identify critical organelles contributing to the regulation of cellular processes within the intracellular space.
The cells were imaged using scanning electron microscopy (SEM) and focused ion beam (FIB) technology with the Helios NanoLab 650, an instrument representing recent advances in field-emission SEM and FIB technologies and their combined use, designed to deliver extremely high-resolution characterization and higher-quality sample preparation. Figure 6 is a typical FIB-SEM image showing half of the intracellular space of a cancer cell line derived from a human head and neck squamous cell carcinoma (SCC-61) parental line [42]. Figure 7 shows a typical FIB-SEM image of the same cancer cell in which the ground-truth mitochondria were manually identified and marked by a cell biologist. The interest here is to detect image regions that contain the mitochondria. Such detected regions will greatly alleviate the difficulty in the image segmentation of the mitochondria [43] and facilitate the spatial modeling and simulation of the role of this major organelle for studying complex human diseases such as cancer [44,45].
The detection of the mitochondria in the intracellular space was carried out with a window of 53 by 60 pixels, which is the average size of the mitochondria in the images. The number of clusters c and the exponent q expressed in Eq. (5) were selected to be three (to approximately represent the number of intensity groups in the images) and two (commonly specified in many applications), respectively. The values of the α-cut used for the indicator expressed in Eq. (8) are 0.5, 0.6, 0.7, 0.8, and 0.9. The numbers of neighboring pixels M used in Eq. (11) are 5 and 7. The detection of the mitochondria was performed by moving the 53-by-60 window along the horizontal and vertical directions of the image to extract different features for training. Twenty scans of the FIB-SEM images of the single cancer cell were available in this study. To show the effectiveness of the various feature extraction methods, only one image was used for training. The training was performed by extracting the proposed probabilistic entropy measure of the fuzzy information, expressed in Eq. (4), of the mitochondrial and non-mitochondrial regions using OK (Eq. (12)), denoted as PEFI1, and using UK (Eq. (20)), denoted as PEFI2, respectively. To compare with other feature extraction methods, the same images were used to obtain the gray-level co-occurrence matrix (GLCM), fractal dimension (FD), semi-variogram (SV), semi-variogram exponent (SVE), and indicator-kriging co-occurrence matrix (IKCM) features of the mitochondrial and non-mitochondrial objects, as described in [46]. Ten lags were used to extract the semi-variogram values of each image window. If an image window contained the whole or part of a mitochondrion, it was labelled as a mitochondrial object.
This is designed to capture all small regions of interest containing the mitochondria in order to maximize the sensitivity (true positive rate), while the specificity (true negative rate) can first be reasonably obtained and then maximized in the localized image segmentation task performed window by window. To validate the effectiveness of the extracted features, two simple measures, the Euclidean and Mahalanobis distances, were used to calculate the similarity between the unknown (test) samples and the trained prototypes of the mitochondrial and non-mitochondrial objects.
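The window-scanning scheme described above can be sketched as a simple generator (the non-overlapping stepping and function name are our assumptions; the 53-by-60 window size follows the average mitochondrion size stated in the text):

```python
import numpy as np

def sliding_windows(image, win_h=53, win_w=60):
    """Move a win_h x win_w window along the vertical and horizontal
    directions of the image, yielding (row, col, patch) triples; each
    patch is then passed to a feature extractor (PEFI1/PEFI2, GLCM, ...)."""
    H, W = image.shape
    for r in range(0, H - win_h + 1, win_h):
        for c in range(0, W - win_w + 1, win_w):
            yield r, c, image[r:r + win_h, c:c + win_w]
```

An overlapping stride would increase sensitivity further at the cost of more windows to classify.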
Sensitivity and specificity are statistical measures of the performance of a binary classification test. In this study, the sensitivity (true positive rate) is the percentage of the mitochondrial regions that are correctly identified, whereas the specificity (true negative rate) is the percentage of the non-mitochondrial regions that are correctly identified. Tables 1 and 2 show the sensitivity and specificity of the experiment obtained from the several feature extraction methods using the Euclidean and Mahalanobis distances, respectively. In general, the SV, SVE, IKCM, PEFI1, and PEFI2 performed consistently using either the Euclidean or the Mahalanobis distance. The PEFI2 yields the best results in both sensitivity and specificity for both distance measures: sensitivity = 100% and specificity = 93% using the Euclidean distance; sensitivity = 100% and specificity = 97% using the Mahalanobis distance. The PEFI1 yields the second best: sensitivity = 100% and specificity = 91% using the Euclidean distance; sensitivity = 100% and specificity = 94% using the Mahalanobis distance. Using the Euclidean distance, the GLCM achieved 100% sensitivity, but its specificity is the lowest (5%) in comparison with the other features. On the other hand, the FD performed well in specificity (90%) but poorly in sensitivity (35%) using the Euclidean distance. In general, the use of the Mahalanobis distance improved the detection results for all features, with the specificity obtained by the GLCM (65%) being significantly higher than with the Euclidean distance.

Identification of Abdominal Tissues on Computed Tomography
The proposed method was also tested for abdominal wall hernia mesh tissue classification on computed tomography (CT), a task recently studied in [47]. Classification was carried out by the k-nearest neighbor method [48], with k = 3, to decide which type of mesh was present. Ten Monte-Carlo iterations were used for the random selection of training and testing data to enhance the statistics of the experimental results.
The results obtained from the feature extracted by the proposed spatial uncertainty modeling (SUM) were compared with other features extracted by wavelets, the gray-level co-occurrence matrix entropy (GLCME), geostatistical entropy (GE), probabilistic fusion (PF), and entropy fusion (EF) models, which were carried out in [47]. Further details about the CT data and the implementations of the GLCME, GE, and EF methods were described in [47]. For the implementation of the FCM, the number of clusters c and the exponent q expressed in Eq. (5) were selected to be three (to approximately represent the organs of gray and white intensities and the background) and two (commonly specified in many applications), respectively. The values of the α-cut used for the indicator expressed in Eq. (8) are 0.5, 0.6, 0.7, 0.8, and 0.9. The numbers of neighboring pixels M used in Eq. (11) are 5 and 7. The total average results obtained from the current technique, using OK (PEFI1) and UK (PEFI2), and the other methods are shown in Table 3. The results show that the features extracted by PEFI1 (94.50%) and PEFI2 (94.92%) perform the best among all features for the classification task. The performance of PEFI2 is only slightly better than that of PEFI1. It should be noted that the proposed feature not only outperforms the other individual features (wavelets, GLCME, and GE), but also yields better classification rates than the combinations of these features (the PF and EF models).

Classification of Logos on Document Imaging
Furthermore, the proposed spatial uncertainty modeling approach was tested for the classification of logos on document images. Ten sets of logos were obtained from the public-domain logo database of the University of Maryland, which consist of 105 intensity logo images. Fifty other logo images were also included, embedded in several document formats including letters, faxed documents, and billing statements [49,50]. All logo images were also generated subject to translation, scaling, orientation, and degradation to create different sets of images [49,50]. Image rotations include 2-degree, 4-degree, 6-degree, 8-degree, and 10-degree orientations. The images were shrunk by a factor of two using bicubic interpolation and anti-aliasing. For the translation, all images were shifted left (x-shifted) by 50 pixels and up (y-shifted) by 30 pixels. All images were degraded with Gaussian noise of zero mean and 0.02 variance.
The features extracted from the logo images are: 1) semi-variograms, 2) Zernike moments, 3) wavelets, 4) Gabor features, and 5) the SUM-based feature. The first four features were studied in [50]–[52]; the SUM-based feature was computed using the FCM expressed in Eq. (5). The datasets were equally split into half for training and the other half for testing. Furthermore, the data were randomly selected 10 times to repeat the training and testing in order to establish statistically meaningful results of the experiment. The k-nearest neighbor (k-NN) method was applied for the task of classification, with k = 3, 5, and 7. The total average classification results shown in Table 4 suggest that the proposed SUM-based feature outperforms the other four features, with the order of performance from the lowest to highest classification rates as follows: wavelet feature, Gabor features, Zernike moments, semi-variograms, and the proposed feature (PEFI1 and PEFI2, where both algorithms achieved an equal classification rate).

Conclusion
A model of spatial uncertainty in images for pattern classification, using the theories of fuzzy sets and geostatistics, has been presented and discussed. The proposed model has been implemented as a new feature extraction method for the classification of image patterns.
The entropy of fuzzy image information with respect to a probability distribution is calculated as an integrated spatial uncertainty of the image, which can be used for characterizing categorical images. Simple classifiers were trained with this new feature for comparisons with other related existing features. The training of the proposed feature with advanced classifiers can be expected to enhance the results. In particular, applications of the proposed approach to the automated detection of mitochondria in real intracellular imaging of a cancer cell line, tissue identification, and logo classification have been carried out. The comparative results suggest the usefulness of the proposed mathematical framework for image feature extraction. Similarly to the use of the probabilities of the gray-level co-occurrence matrix, the indicator-kriging probabilities can be utilized to construct other statistical features of an image for pattern classification.
The model developed in this study can be further improved by developing effective strategies for selecting the number of fuzzy clusters and by adding spatial constraints to the fuzzy objective function. In particular, just as constrained independent component analysis has been developed to reduce ambiguity in studying fMRI data by imposing temporal and spatial constraints on the mathematical model [53], the fuzzy objective function defined in Eq. (5) can be modified by adding similar temporal and spatial constraints [54,55] to improve the modeling of uncertainty in the setting of geostatistics.