
Hyperspectral image spectral-spatial classification via weighted Laplacian smoothing constraint-based sparse representation

  • Eryang Chen,

    Roles Methodology, Software, Writing – original draft

    Affiliations College of Geophysics, Chengdu University of Technology, Chengdu, China, School of Electronic Information and Electrical Engineering, Chengdu University, Chengdu, China, Geomathematics Key Laboratory of Sichuan Province, Chengdu University of Technology, Chengdu, China, Key Laboratory of Pattern Recognition and Intelligent Information Processing of Sichuan, Chengdu University, Chengdu, China

  • Ruichun Chang ,

    Roles Methodology

    2293721870@qq.com (RC); skbs111@163.com (KS)

    Affiliations Geomathematics Key Laboratory of Sichuan Province, Chengdu University of Technology, Chengdu, China, Digital Hu Line Research Institute, Chengdu University of Technology, Chengdu, China

  • Ke Guo,

    Roles Project administration, Supervision

    Affiliations Geomathematics Key Laboratory of Sichuan Province, Chengdu University of Technology, Chengdu, China, Digital Hu Line Research Institute, Chengdu University of Technology, Chengdu, China

  • Fang Miao,

    Roles Project administration, Supervision

    Affiliation Key Laboratory of Pattern Recognition and Intelligent Information Processing of Sichuan, Chengdu University, Chengdu, China

  • Kaibo Shi ,

    Roles Software, Writing – review & editing

    2293721870@qq.com (RC); skbs111@163.com (KS)

    Affiliation School of Electronic Information and Electrical Engineering, Chengdu University, Chengdu, China

  • Ansheng Ye,

    Roles Resources, Software

    Affiliations College of Geophysics, Chengdu University of Technology, Chengdu, China, Key Laboratory of Pattern Recognition and Intelligent Information Processing of Sichuan, Chengdu University, Chengdu, China

  • Jianghong Yuan

    Roles Writing – review & editing

    Affiliation School of Intelligent Engineering, Sichuan Changjiang Vocational College, Chengdu, China

Abstract

As a powerful tool in hyperspectral image (HSI) classification, sparse representation has gained much attention in recent years owing to its detailed representation of features. In particular, the joint use of spatial and spectral information has been widely applied to HSI classification. However, dealing with the spatial relationship between pixels is a nontrivial task. This paper proposes a new spatial-spectral combined classification method that considers the boundaries of adjacent features in the HSI. In the proposed method, a smoothing-constraint Laplacian vector is constructed, which consists of the pixel of interest and its four nearest neighbors through their weighting factors. Then, a novel large-block sparse dictionary is developed for simultaneous orthogonal matching pursuit. The proposed method obtains better HSI classification accuracy on three real HSI datasets than existing spectral-spatial HSI classifiers. Finally, experimental results are presented to verify the effectiveness and superiority of the proposed method.

Introduction

Remote sensing is of paramount importance for several application fields, including environmental monitoring, urban planning, ecosystem-oriented natural resource management, urban change detection, and agricultural region monitoring [1, 2]. Hyperspectral images (HSIs), whose structure consists of two spatial dimensions and one spectral dimension [3, 4], are generally characterized by hundreds or thousands of continuous observation bands throughout the electromagnetic spectrum with high spectral resolution in the field of remote sensing. The abundance of spectral information in HSI provides an opportunity for the precise classification of ground objects [5, 6]. HSI classification, as one of the main challenges in remote sensing technology, has opened new avenues in remote sensing [7–10]. As powerful image-processing tools, the support vector machine (SVM) [11–14] and the sparse representation (SR) model [15, 16] and its derivative models have attracted much attention for HSI classification [17–20]. However, the noise and mixed spectral information in HSI cause several theoretical and practical challenges for pixel-wise classification [21–23].

A large number of spatial-spectral combined HSI classifiers have been developed in recent decades to incorporate spatial information in the classification. Reference [24] proposed an image patch distance (IPD) that uses the observed pixels and their spatial neighbors to measure pixel patch-wise similarity. Reference [17] presented a joint sparse representation (JSR) model, which first defines a local region of fixed size for each test pixel. Reference [25] reported that a multiscale adaptive sparse representation (MASR) model, which considers regions of different scales for classification, can further improve classification performance. Reference [26] showed a class-dependent SR classifier for HSI classification, which can effectively combine the SR and k-NN classifiers in a class-wise manner to exploit both the correlation and the Euclidean distance between training and test data. In the traditional joint k-NN algorithm, the weight of each test sample in a local region is identical, which is unreasonable because each test sample may have different importance and distribution. To solve this problem, Reference [27] recommended a weighted joint nearest neighbor and sparse representation method, named WJNN-JSR, which can achieve better performance than several traditional joint k-NN methods. More recent HSI classification techniques can be found in references [28–35]. From the above descriptions, it can be concluded that all these methods ignore the boundary information of different features in the HSI. This common shortcoming hinders the achievement of more satisfactory classification accuracy.

Motivated by the above discussions, by combining the spectral and spatial information, we propose a new classification algorithm for HSI, which is termed the weighted Laplacian smoothing constraint-based sparse representation (WLSC-SR) classifier. The primary contributions of this study are as follows.

  1. Inspired by the existing ordinary vector Laplacian, a smoothing-constraint weighted Laplacian vector is constructed, which consists of the pixel of interest and its four nearest neighbors through their weighting factors [36–38].
  2. By forcing the weighted Laplacian vector of the pixel of interest to be zero, a new large block sparse dictionary for sparse representation is developed.
  3. In contrast to several earlier studies, the boundary characteristics of the HSI are fully exploited.

Experiments on three real HSI datasets were conducted and compared with several state-of-the-art spectral-spatial HSI classification classifiers to evaluate the performance of the proposed WLSC-SR method. The results show that WLSC-SR can substantially improve the accuracy of the HSI classification.

Related works

In this section, we outline the basic theory for WLSC-SR.

Sparse representation

The mathematical essence of sparse representation (SR) is signal decomposition under sparse normalization constraints [39–41]. Using a super-complete dictionary of redundant functions as the basis, a few atoms whose linear combination best represents the signal are selected from the dictionary. It has been demonstrated that an unknown test pixel in an HSI can be represented as a linear combination of training pixels from all classes.

Let y ∈ R^h be a test sample in the HSI, where h denotes the spectral dimension of the HSI, and let the index set of the whole training set be T = {1, 2, 3, ⋯, s}, where s is the number of training samples. Therefore, a dictionary D ∈ R^(h×s) can be constructed from the spectra of the training samples. Each base of the redundant dictionary D is called an atom, and y can be represented by a linear combination of atoms of D. However, such a linear combination is unlikely to be unique; the sparsest coefficient vector helps us find a better linear combination. Assuming that there is no noise in the HSI, the SR model of the clean sample y is defined as:

(1) min_α ||α||_0  s.t.  y = Dα

However, there is inevitably noise in the HSI, and the SR model for noisy data can be defined as:

(2) min_α ||α||_0  s.t.  ||y − Dα||_2 ≤ σ

Using the Lagrangian multiplier method, the SR model can be regularized as:

(3) min_α ||y − Dα||_2^2 + γ||α||_0

where σ is the error tolerance and γ denotes the regularization parameter.

Generally, the orthogonal matching pursuit (OMP) or simultaneous orthogonal matching pursuit (SOMP) algorithm is used to solve formula (3). When a single signal is considered, SOMP reduces to OMP. SOMP is selected here according to the characteristics of the WLSC-SR algorithm.
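To make the greedy solver concrete, the following is a minimal numpy sketch of SOMP: at each step, the atom most correlated with the joint residual of all signals is added to a shared support, followed by a least-squares refit. The function name and interface are illustrative, not the authors' implementation.

```python
import numpy as np

def somp(D, Y, sparsity):
    """Simultaneous OMP: recover sparse coefficients for the columns of Y
    over the dictionary D (h x s), forcing all signals to share one support."""
    residual = Y.copy()
    support = []
    for _ in range(sparsity):
        # Pick the atom most correlated with the joint residual (row-wise l2 norm).
        correlations = np.linalg.norm(D.T @ residual, axis=1)
        correlations[support] = -np.inf          # do not reselect atoms
        support.append(int(np.argmax(correlations)))
        # Least-squares fit on the current support, then update the residual.
        A = D[:, support]
        coeffs, *_ = np.linalg.lstsq(A, Y, rcond=None)
        residual = Y - A @ coeffs
    X = np.zeros((D.shape[1], Y.shape[1]))
    X[support, :] = coeffs
    return X
```

Here `sparsity` plays the role of the sparsity level used later in the parameter analysis; atoms are assumed to be l2-normalized columns.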

Image patch distance

In hyperspectral imagery, the pixels within a small neighborhood usually consist of similar materials whose spectral characteristics are highly correlated. Based on this fact, the image patch distance (IPD) exploits the observed pixels and their corresponding spatial neighbors to measure pixel patch-wise similarity.

For the observed pixel x_ij, its ω² neighbors in the ω × ω spatial neighborhood can be defined as:

(4) Ω(x_ij) = { x_pq : i − r ≤ p ≤ i + r, j − r ≤ q ≤ j + r }

in which r = (ω − 1)/2.

Let a_l and b_l be the lth elements of the pixel sets Ω(x_ij) and Ω(x_pq), respectively. The distance between a_l and the spatial neighborhood Ω(x_pq) is defined as d(a_l, Ω(x_pq)) = min_{b_m ∈ Ω(x_pq)} d(a_l, b_m), and the undirected distance between the two pixels a_l and b_l can be defined as follows:

(5) d_U(a_l, b_l) = [ d(a_l, Ω(x_pq)) + d(b_l, Ω(x_ij)) ] / 2

where d(⋅) is a spectral similarity function, such as the Minkowski distance (MD) or the spectral cosine distance (SCD).

Then, the similarity between the observed pixels x_ij and x_pq can be defined as follows:

(6) IPD(x_ij, x_pq) = Σ_{l=1}^{ω²} d_U(a_l, b_l)

This spatial-spectral similarity measure combines the spatial and spectral features into a single distance, which improves the classification accuracy.
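One plausible reading of the IPD can be sketched in numpy, assuming Euclidean spectral distance and a symmetrized point-to-set matching (these specific choices are our assumption; reference [24] gives the exact form):

```python
import numpy as np

def ipd(patch_a, patch_b):
    """Image patch distance between two (n_pixels, n_bands) spectral patches.
    Each pixel of one patch is matched to its nearest pixel in the other
    patch, and the two directed distances are symmetrized (assumed form)."""
    # Pairwise Euclidean distances between every pixel pair of the two patches.
    diff = patch_a[:, None, :] - patch_b[None, :, :]
    pairwise = np.linalg.norm(diff, axis=2)
    # Directed point-to-set distances, then the symmetric (undirected) sum.
    a_to_b = pairwise.min(axis=1)
    b_to_a = pairwise.min(axis=0)
    return 0.5 * (a_to_b.sum() + b_to_a.sum())
```

By construction the measure is symmetric and vanishes for identical patches.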

Proposed classifier

The pixels in the HSI dataset are high-dimensional vectors that reflect the spectra of the ground objects. Spectrum vectors with the same class label are more likely to be similar to each other than to those with different class labels. Based on this assumption, the WLSC-SR method exploits the spatial neighborhood to extract spatial-spectral information.

Procedure of WLSC-SR

The WLSC-SR attempts to construct a smoothing-constraint Laplacian vector, which is forced to be zero. The vector consists of the sparse vector of the pixel of interest and its four nearest neighbors through their weighting factors, by which the aggregation of homogeneous data and the separability of heterogeneous data in HSI can be effectively enhanced. Based on the smoothing-constraint Laplacian vector, a new large block sparse dictionary for SOMP is constructed with six times as many rows and five times as many columns as the original dictionary. Furthermore, the WLSC-SR classifier distinguishes the boundaries of adjacent types of ground objects in the HSI, which is beneficial for HSI classification. The procedure of the proposed method is shown in Fig 1.

Weighted Laplacian smoothing constraint

We assume that the size of the HSI is k × l × h. The spectrum vector x_ij ∈ R^h represents the pixel in row i and column j of the HSI. First, we choose a spatial window parameter ω, an odd positive integer, and construct an ω × ω spatial window with central pixel x_ij. At the same time, the HSI boundaries are extended by (ω − 1)/2 pixels in a mirror manner, which is convenient for processing pixels at the edge or corner of the image.
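The mirror extension of the image borders can be expressed with numpy's symmetric padding; a small sketch assuming the cube is stored as (rows, cols, bands):

```python
import numpy as np

def mirror_extend(cube, window):
    """Pad the two spatial dimensions of a (k, l, h) HSI cube by
    (window - 1) // 2 pixels in a mirror manner, leaving the spectral
    axis untouched, so edge and corner pixels get full neighborhoods."""
    r = (window - 1) // 2
    return np.pad(cube, ((r, r), (r, r), (0, 0)), mode="symmetric")
```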

We can obtain the IPD between the pixel sets Ω(x_ij) and Ω(x_pq). However, the IPD method generally requires a large spatial window to exploit the spatial information in the HSI, and its time-consuming iteration process limits its real applications.

To simplify the IPD calculation, the IPD between the pixel sets Ω(x_ij) and Ω(x_pq) is replaced by the distance between the central pixels of the sets. Thus, the image patch distance-based center (IPDC) calculation method is defined as follows:

(7) IPDC(x_ij, x_pq) = d(x̄_ij, x̄_pq)

Thus, the weighting factor can be defined as:

(8) W_pq = exp( − IPDC(x_ij, x_pq) / t )

where the trade-off parameter t > 0 controls the proportion of spatial information, and x_ij and x_pq are normalized to x̄_ij and x̄_pq, respectively. These factors affect the value of the reconstruction weight W.
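A hedged sketch of the IPDC weighting follows, assuming the weight decays exponentially with the Euclidean distance between the l2-normalized central pixels (the exact distance function and decay form are our assumptions):

```python
import numpy as np

def ipdc_weight(x_center, x_neighbor, t=1.0):
    """Weighting factor from the distance between two normalized pixel
    spectra; the trade-off parameter t > 0 controls how quickly spatial
    influence decays with spectral dissimilarity (assumed form)."""
    a = x_center / np.linalg.norm(x_center)
    b = x_neighbor / np.linalg.norm(x_neighbor)
    return float(np.exp(-np.linalg.norm(a - b) / t))
```

Identical spectra give a weight of 1, and increasingly dissimilar neighbors are down-weighted toward 0.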

Using the weights obtained, we construct the weighted Laplacian vector. Let x_st denote the four nearest neighbors of x_ij, where s = i − 1, i + 1; t = j − 1, j + 1, as shown in Fig 2.

Fig 2. The four nearest neighbors of a pixel x_ij and their weights with respect to x_ij.

https://doi.org/10.1371/journal.pone.0254362.g002

Let α_ij be the sparse vector associated with x_ij (i.e., Dα_ij = x_ij). Then, we construct the weighted Laplacian vector at the pixel x_ij as:

(9) ∇²(x_ij) = w₁α_ij − Σ_(s,t) w_st α_st

where the sum runs over the four nearest neighbors x_st of x_ij and the weights w_st are derived from the weighting factors in (8).

To incorporate the smoothness across the neighboring spectral pixels, ∇²(x_ij) is set to zero, based on which a new large block sparse dictionary for SOMP is constructed with six times as many rows and five times as many columns as the original dictionary. Taking the Indian Pines dataset with 10% training samples as an example, the dimension of the original dictionary is 200 × 1027, and the dimension of the block sparse matrix we build is 1200 × 5135. Then, the optimization problem in (1) can be redefined as a new sparse recovery problem with the Laplacian smoothing constraint, formulated as:

(10) min_α̃ ||α̃||_0 subject to the equality constraint in (11)

where

(11) D̃ α̃ = x̃

(12) D̃ = [ D 0 0 0 0 ; 0 D 0 0 0 ; 0 0 D 0 0 ; 0 0 0 D 0 ; 0 0 0 0 D ; w₁D −w₂D −w₃D −w₄D −w₅D ]

(13) x̃ = [ x_ij ; x_{i−1,j} ; x_{i,j−1} ; x_{i+1,j} ; x_{i,j+1} ; 0 ]

(14) α̃ = [ α_ij ; α_{i−1,j} ; α_{i,j−1} ; α_{i+1,j} ; α_{i,j+1} ]

Normally, we can assign w₁ = 1, and w_i (i = 2, …, 5) are normalized as w_i = W_i / Σ_{j=2}^{5} W_j, where W_i is the weighting factor in (8), so that Σ_{i=2}^{5} w_i = 1.

For pixels at the center of the image, all weights are present. However, for pixels on the edge or corner, some weights are absent, which causes an imbalance. To avoid this imbalance, we assign w_i = 0.25 (i = 2, …, 5).
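The large block dictionary (six row-blocks by five column-blocks) can be assembled as follows; the block layout, with the last row-block encoding the weighted Laplacian constraint, is our reading of the construction described above:

```python
import numpy as np

def build_block_dictionary(D, w):
    """Stack five copies of the dictionary D (h x s) block-diagonally, one per
    pixel of the four-neighbor stencil, and append a sixth row-block
    [w1*D, -w2*D, ..., -w5*D] that drives the weighted Laplacian of the
    sparse codes toward zero. Result has shape (6h) x (5s)."""
    h, s = D.shape
    big = np.zeros((6 * h, 5 * s))
    for i in range(5):
        big[i * h:(i + 1) * h, i * s:(i + 1) * s] = D   # one block per pixel
    signs = [1.0, -1.0, -1.0, -1.0, -1.0]               # center minus neighbors
    for i in range(5):
        big[5 * h:, i * s:(i + 1) * s] = signs[i] * w[i] * D
    return big
```

For the Indian Pines example in the text (D of size 200 × 1027), this yields a 1200 × 5135 matrix, matching the stated dimensions.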

It is well known that minimization under the L0 norm is a non-deterministic polynomial-time hard (NP-hard) problem, whereas the L1 norm is the optimal convex approximation of the L0 norm and is easier to solve. Additionally, the equality constraint in (11) cannot be satisfied completely, so an approximation error is allowed, and the problem can be written as:

(15) min_α̃ ||x̃ − D̃α̃||₂² + γ||α̃||₁

where γ denotes the regularization parameter.

The problem in (15) is a standard sparse recovery problem, and SOMP can be implemented to solve it. Once the problem in (15) is solved, the total class-dependent reconstruction residual between the original test samples and the approximation obtained from each of the K class sub-dictionaries can be calculated as:

(16) r_k(x_ij) = ||x − D̃_k α̃_k||₂, k ∈ {1, 2, 3, …, K}

where s = i − 1, i + 1; t = j − 1, j + 1, x represents the concatenation of the five pixels x_ij, x_{i−1,j}, x_{i,j−1}, x_{i+1,j}, x_{i,j+1}, as shown in Fig 2, and α̃_k denotes the portion of the recovered sparse vector associated with the kth-class sub-dictionary D̃_k. The test sample x_ij is assigned to the class that minimizes the residual:

(17) class(x_ij) = argmin_{k ∈ {1, …, K}} r_k(x_ij)
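The class-wise residual test in (16)-(17) can be sketched as follows, assuming the recovered coefficient vector is partitioned by the class label of each atom (the partitioning interface is an assumption of this sketch):

```python
import numpy as np

def classify_by_residual(D, x, alpha, class_of_atom):
    """Assign x to the class whose sub-dictionary reconstruction leaves the
    smallest l2 residual. class_of_atom[i] is the class label of atom i."""
    labels = np.unique(class_of_atom)
    residuals = []
    for k in labels:
        mask = (class_of_atom == k)
        alpha_k = np.where(mask, alpha, 0.0)   # keep only class-k coefficients
        residuals.append(np.linalg.norm(x - D @ alpha_k))
    return labels[int(np.argmin(residuals))]
```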

Experiments and discussion

Datasets

Three well-known publicly available HSI datasets, namely Indian Pines, Pavia University, and Salinas, were used to evaluate the performance of WLSC-SR in this study. The numbers of samples in the Indian Pines, Pavia University, and Salinas scene images are shown in Table 1, in which background colors are used to distinguish the different classes.

Table 1. Number of samples in the Indian Pines, Pavia University, and Salinas scene image.

https://doi.org/10.1371/journal.pone.0254362.t001

Quantitative metrics

Normally, the overall accuracy (OA), average accuracy (AA), class accuracy (CA), and Kappa coefficient are adopted to evaluate the quality of HSI classification results. OA is the ratio of the number of correctly classified pixels to the total number of pixels; AA is the mean of the percentages of correctly classified pixels for each class; CA measures the classification accuracy of each ground-object class separately; and the Kappa coefficient estimates the percentage of correctly classified pixels corrected by the number of agreements that would be expected purely by chance. The classification performance of a classifier is considered good when the Kappa coefficient is greater than 0.75 and poor when it is less than 0.40 [10, 42].
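All four metrics follow from the confusion matrix; a self-contained sketch (helper name illustrative):

```python
import numpy as np

def classification_metrics(y_true, y_pred, n_classes):
    """Overall accuracy, average (per-class) accuracy, and Cohen's kappa
    from reference and predicted labels in {0, ..., n_classes-1}."""
    cm = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    n = cm.sum()
    oa = np.trace(cm) / n
    aa = np.mean(np.diag(cm) / cm.sum(axis=1))   # mean of per-class CAs
    # Chance agreement from the marginals, then the kappa correction.
    expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2
    kappa = (oa - expected) / (1.0 - expected)
    return oa, aa, kappa
```

The per-class ratios `np.diag(cm) / cm.sum(axis=1)` are exactly the CA values, so AA is their mean.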

Parameter analysis

In the proposed classification method, there are two primary impact parameters: the sparsity level Sl and the WLSC-SR test region scale Sc, which affect the classification performance from different aspects. Experiments on the Indian Pines, Pavia University, and Salinas datasets showed the OAs for different Sl and Sc, based on which the optimal parameters were determined. Fig 3A-3C show the effects of Sl and Sc on the three datasets, respectively. The optimal classification results are shown in the graphs.

Fig 3. Effect of Sl and Sc.

(a) Indian Pines, (b) Pavia University, and (c) Salinas Scene.

https://doi.org/10.1371/journal.pone.0254362.g003

As shown in Fig 3, when the value of the test region scale Sc is fixed, the OA for Indian Pines, Pavia University, and Salinas Scene consistently achieves the best performance when the sparsity level Sl is 1 or 2. As Sl increases, the solution of (15) converges to the pseudo-inverse solution, which is no longer sparse and thus deteriorates the classification performance. Additionally, when the sparsity level Sl is small, if Sc is too large, the neighboring pixels cannot be faithfully approximated by a few training samples; in other words, the OA is reduced. Moreover, the value of Sc for a large dataset is relatively large, and vice versa. For example, when Sl for Indian Pines, Pavia University, and Salinas Scene is 1, 2, and 3, respectively, the OA obtains its best value when Sc is 5, 7, and 5, respectively. Indeed, the experimental results show that when Sc is equal to 40, the classification performance is far worse than the best value.

Additionally, since the Pavia University image is larger than the Indian Pines and Salinas Scene image, OA obtains the best value when Sc = 7 in Pavia University image classification. For Indian Pines and Salinas Scene images, the corresponding Sc is equal to 5.

Comparison of different classifiers

In this section, the proposed method is compared with the SVM method [5], the JSR classification method [17], the SR classification method [16], and the sparse representation nearest neighbor (NN-SR) classification method [18]. Additionally, the multiscale adaptive sparse representation (MASR) method [19], the joint nearest neighbor joint sparse representation (JNN-JSR) method [27], and the weighted joint nearest neighbor joint sparse representation (WJNN-JSR) method [27] are also compared. All of these classification methods were implemented using their optimal parameters.

Three different experiments were conducted on three different datasets: Indian Pines, Pavia University, and Salinas. For each class of every dataset, 30% of the labeled pixels were randomly sampled for training, while the remaining 70% were used to test the classifiers. Figs 4–6 illustrate the classification maps obtained by the different methods on the different datasets.

Fig 4. The Indian Pines image.

(a) SVM [1] (OA = 72.26%); (b) JSR [17] (OA = 91.71%); (c) SR [16] (OA = 63.89%); (d) MASR [19] (OA = 96.91%); (e) NN-SR [18] (OA = 65.44%); (f) JNN-JSR [27] (OA = 93.12%); (g) WJNN-JSR [27] (OA = 93.65%); (h) WLSC-SR (OA = 98.21%).

https://doi.org/10.1371/journal.pone.0254362.g004

Fig 5. The Pavia University image.

(a) SVM [1] (OA = 85.42%); (b) JSR [17] (OA = 89.31%); (c) SR [16] (OA = 72.01); (d) MASR [19] (OA = 88.01%); (e) NN-SR [18] (OA = 73.27%); (f) JNN-JSR [27] (OA = 96.60%); (g) WJNN-JSR [27] (OA = 97.42%); (h) WLSC-SR (OA = 98.18%).

https://doi.org/10.1371/journal.pone.0254362.g005

Fig 6. The Salinas image.

(a) SVM [1] (OA = 88.67%); (b) JSR [17] (OA = 92.99%); (c) SR [16] (OA = 85.09); (d) MASR [19] (OA = 93.43%); (e) NN-SR [18] (OA = 85.37%); (f) JNN-JSR [27] (OA = 94.46%); (g) WJNN-JSR [27] (OA = 95.61%); (h) WLSC-SR (OA = 99.71%).

https://doi.org/10.1371/journal.pone.0254362.g006

The first experiment was performed using the Indian Pines dataset. Table 2 shows the classification performance with the corresponding OA, AA, and Kappa values; the bold values indicate the best classification accuracy. As can be observed, the classification maps of the SVM and SR methods have a very noisy appearance. By considering the spatial context, the JSR, MASR, NN-SR, and WJNN-JSR algorithms can deliver comparatively smooth results but fail to detect meaningful regions. Although the JNN-JSR algorithm shows improvements in detecting details, some noisy behavior still exists in the classification maps obtained by these approaches.

Table 2. Classification accuracy (in percent) of the Indian Pines in the SVM [1], JSR [17], SR [16], MASR [19], NN-SR [18], JNN-JSR [27], WJNN-JSR [27], and WLSC-SR.

https://doi.org/10.1371/journal.pone.0254362.t002

In contrast, the proposed WLSC-SR algorithm improves the average classification accuracy and denoising, and the misclassification at the edges of ground objects across the overall scene is significantly reduced. Therefore, according to the classification results, the proposed method has clear advantages in terms of OA and Kappa values. For example, compared with the other methods, the WLSC-SR algorithm achieves the highest classification accuracy in classes 2, 4, 8, 9, 11, and 12, and its OA and Kappa reach the highest values. The second and third experiments were conducted on the Pavia University and Salinas datasets, respectively, with the training sample selection the same as in the first experiment.

Table 3 presents the classification performance with the corresponding OA, AA, and Kappa values for Pavia University. As shown in the table, the proposed WLSC-SR algorithm obtains higher accuracy than the other compared methods in terms of OA, AA, and Kappa. The spectral-spatial joint algorithms, such as JSR, MASR, NN-SR, JNN-JSR, WJNN-JSR, and WLSC-SR, perform better than SVM and SR, which use only spectral information. For example, the OA of the SR algorithm is only 72.01%, and compared with the SR algorithm, the OAs of the WJNN-JSR and WLSC-SR algorithms are improved by 25.41% and 26.17%, respectively.

Table 3. Classification accuracy (in percent) of the Pavia University in the SVM [1], JSR [17], SR [16], MASR [19], NN-SR [18], JNN-JSR [27], WJNN-JSR [27], and WLSC-SR.

https://doi.org/10.1371/journal.pone.0254362.t003

Table 4 presents the classification performance with the corresponding OA, AA, and Kappa values for the Salinas image. It can be seen that because WJNN-JSR effectively utilizes multi-scale spatial information through an adaptive sparse strategy, its AA is significantly improved; however, some noise still exists around the boundaries of different ground objects, so the improvement in OA and Kappa is limited. By contrast, by considering the boundaries of adjacent ground objects in the image, the OA, AA, and Kappa of our proposed WLSC-SR method are improved by 4.1%, 2.89%, and 4.8%, respectively.

Table 4. Classification accuracy (in percent) of the Salinas scene in the SVM [1], JSR [17], SR [16], MASR [19], NN-SR [18], JNN-JSR [27], WJNN-JSR [27], and WLSC-SR.

https://doi.org/10.1371/journal.pone.0254362.t004

Computational complexity

Experiments were performed using MATLAB 2018b on a computer with a 2.60-GHz Intel CPU, 16 GB of memory, and a 64-bit Windows 7 system. On the three real HSI datasets, a complete execution of our algorithm may take from several minutes to several hours, longer than the other compared methods. Specifically, the main computational cost of the method lies in the computation of the weighting parameters and the large block sparse dictionary in SOMP. With the development of computing hardware and cloud computing technology, we believe that the time consumption will be significantly reduced. Additionally, the ideal parameters or hyperparameters used by the various algorithms are listed in Table 5.

Table 5. The ideal parameters or hyperparameters used by the various algorithms.

https://doi.org/10.1371/journal.pone.0254362.t005

Conclusions

In this paper, we proposed a new classification method for HSI. The proposed WLSC-SR strengthens the spatial relationship between the center pixel and its four nearest neighbor pixels by constructing a smoothing-constraint Laplacian vector, which captures the boundary characteristics of adjacent ground objects in the HSI. Experiments on three real HSI datasets revealed that the proposed WLSC-SR method outperforms several other well-known classifiers in terms of OA, AA, Kappa, and visual comparison of classification maps, verifying the effectiveness and superiority of WLSC-SR. In future work, to further improve the classification accuracy, we will explore discriminative learning algorithms and optimize the dictionary structure, focusing on more efficient solutions for this method.

References

  1. Wei L., Yu M., Zhong Y., Zhao J., Liang Y., Hu X. “Spatial–Spectral Fusion Based on Conditional Random Fields for the Fine Classification of Crops in UAV-Borne Hyperspectral Remote Sensing Imagery,” Remote Sens., vol. 11, pp. 780, Apr. 2019.
  2. Lira Melo de Oliveira Santos C., Augusto Camargo Lamparelli R., Kelly Dantas Araújo Figueiredo G., Dupuy S., Boury J., Luciano A.C.S., et al. “Classification of Crops, Pastures, and Tree Plantations along the Season with Multi-Sensor Image Time Series in a Subtropical Agricultural Region,” Remote Sens., vol. 11, no. 3, pp. 334–361, Feb. 2019.
  3. Li Y., Zhang H., Shen Q. “Spectral–Spatial Classification of Hyperspectral Imagery with 3D Convolutional Neural Network,” Remote Sens., vol. 9, no. 1, pp. 67, Jan. 2017.
  4. Lian-Zhi, et al. “Supervised spatial classification of multispectral LiDAR data in urban areas,” PLoS One, vol. 13, no. 10, e0206185, Oct. 2018.
  5. Melgani F., Bruzzone L. “Classification of hyperspectral remote sensing images with support vector machines,” IEEE Trans. Geosci. Remote Sens., vol. 42, no. 8, pp. 1778–1790, Sep. 2004.
  6. Sun L., Ma C., Chen Y., Hiuk Jae S., Wu Z., Byeungwoo J. “Adjacent Superpixel-Based Multiscale Spatial-Spectral Kernel for Hyperspectral Classification,” IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens., vol. 6, pp. 1905–1919, May 2019.
  7. Li Z., Huang L., He J. “A Multiscale Deep Middle-level Feature Fusion Network for Hyperspectral Classification,” Remote Sens., vol. 11, no. 6, pp. 695, Mar. 2019.
  8. Ghamisi P., Rasti B., Yokoya N. “Multisource and Multitemporal Data Fusion in Remote Sensing: A Comprehensive Review of the State of the Art,” IEEE Geosci. Remote Sens. Mag., vol. 7, no. 1, pp. 6–39, Mar. 2019.
  9. Chen C., Jiang F., Yang C., Rho S., Shen W., Liu S., et al. “Hyperspectral classification based on spectral–spatial convolutional neural networks,” Eng. Appl. Artif. Intell., vol. 68, pp. 165–171, Feb. 2018.
  10. Fauvel M., Tarabalka Y., Benediktsson J. “Advances in Spectral–Spatial Classification of Hyperspectral Images,” Proc. IEEE, vol. 101, no. 3, pp. 652–675, Mar. 2013.
  11. Ma J., Jiang J., Zhou H. “Guided locality preserving feature matching for remote sensing image registration,” IEEE Trans. Geosci. Remote Sens., vol. 99, pp. 1–13, Apr. 2018.
  12. Archibald R., George F. “Feature selection and classification of hyperspectral images with support vector machines,” IEEE Trans. Geosci. Remote Sens., vol. 4, no. 4, pp. 674–677, Nov. 2007.
  13. Liu L., Huang W., Liu B. “Semisupervised Hyperspectral Image Classification via Laplacian Least Squares Support Vector Machine in Sum Space and Random Sampling,” IEEE J-STARS, vol. 99, pp. 1–15, Oct. 2018.
  14. Marcelo M., Soares F., Ardila J. “Fast inline tobacco classification by near-infrared hyperspectral imaging and support vector machine-discriminant analysis,” Anal. Methods, vol. 11, no. 14, pp. 1966–1975, Mar. 2019.
  15. Sun H., Ren J., Zhao H. “Superpixel based feature specific sparse representation for spectral-spatial classification of hyperspectral images,” Remote Sens., vol. 11, no. 5, pp. 536, Mar. 2019.
  16. Hamdi M.A., Ben Salem R. “Sparse Representations for the Spectral–Spatial Classification of Hyperspectral Image,” J. Indian Soc. Remote Sens., 2018.
  17. Chen Y., Nasrabadi M., Tran D. “Hyperspectral image classification using dictionary-based sparse representation,” IEEE Trans. Geosci. Remote Sens., vol. 49, no. 10, pp. 3973–3985, Nov. 2011.
  18. Cui M., Prasad S. “Class-Dependent Sparse Representation Classifier for Robust Hyperspectral Image Classification,” IEEE Trans. Geosci. Remote Sens., vol. 53, no. 5, pp. 2683–2695, Apr. 2015.
  19. Fang L., Li S., Kang X. “Spectral–Spatial Hyperspectral Image Classification via Multiscale Adaptive Sparse Representation,” IEEE Trans. Geosci. Remote Sens., vol. 52, no. 12, pp. 7738–7749, Dec. 2014.
  20. Soltani-Farani A., Rabiee H.R., Hosseini S.A. “Spatial-Aware Dictionary Learning for Hyperspectral Image Classification,” IEEE Trans. Geosci. Remote Sens., vol. 53, no. 1, pp. 527–541, Aug. 2013.
  21. Pan B., Shi Z., Zhang Z. “Hyperspectral Image Classification Based on Nonlinear Spectral–Spatial Network,” IEEE Geosci. Remote Sens. Lett., vol. 99, pp. 1–5, Sep. 2016.
  22. Gao H., Yao D., Wang M. “A Hyperspectral Image Classification Method Based on Multi-Discriminator Generative Adversarial Networks,” Sensors, vol. 19, no. 15, pp. 3269, Jul. 2019. pmid:31349589
  23. Yan R., Peng J., Ma D. “Spectral Tensor Synthesis Analysis for Hyperspectral Image Spectral–Spatial Feature Extraction,” J. Indian Soc. Remote Sens., vol. 47, no. 6, pp. 91–100, Oct. 2018.
  24. Pu H., Chen Z., Wang B. “A Novel Spatial–Spectral Similarity Measure for Dimensionality Reduction and Classification of Hyperspectral Imagery,” IEEE Trans. Geosci. Remote Sens., vol. 52, no. 11, pp. 7008–7022, Nov. 2014.
  25. Liu J., Wu Z., Wei Z. “Spatial-spectral kernel sparse representation for hyperspectral image classification,” IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens., vol. 6, no. 6, pp. 2462–2471, Dec. 2013.
  26. Tu B., Huang S., Fang L. “Hyperspectral image classification via weighted joint nearest neighbor and sparse representation,” IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens., vol. 99, pp. 1–13, Oct. 2018.
  27. Zou J., Wei L., Qian D. “Sparse representation-based nearest neighbor classifiers for hyperspectral imagery,” IEEE Trans. Geosci. Remote Sens., vol. 12, pp. 2418–2422, Dec. 2015.
  28. Liu J., Wu Z., Xiao Z., Yang J. “Classification of hyperspectral images using kernel fully constrained least squares,” ISPRS Int. J. Geo-Inf., vol. 6, no. 11, pp. 344, Nov. 2017.
  29. Ding H., Xu L., Wu Y., Shi W. “Classification of hyperspectral images by deep learning of spectral-spatial features,” Arab. J. Geosci., vol. 13, no. 12, pp. 1–14, Jun. 2020.
  30. Zhan T., Sun L., Xu Y., Wan M., Wu Z., Lu Z., et al. “Hyperspectral classification using an adaptive spectral-spatial kernel-based low-rank approximation,” Remote Sens. Lett., vol. 10, no. 8, pp. 766–775, Aug. 2019.
  31. Sun L., Ma C., Chen Y., Shim H.J., Wu Z., Jeon B. “Adjacent superpixel-based multiscale spatial-spectral kernel for hyperspectral classification,” IEEE J-STARS, vol. 12, no. 6, pp. 1905–1919, Aug. 2019.
  32. Tu B., Zhou C., He D., Huang S., Plaza A. “Hyperspectral classification with noisy label detection via superpixel-to-pixel weighting distance,” IEEE Trans. Geosci. Remote Sens., vol. 58, no. 6, pp. 4116–4131, Jan. 2020.
  33. Xie Y., Miao F., Zhou K., Peng J. “HsgNet: A Road Extraction Network Based on Global Perception of High-Order Spatial Information,” ISPRS Int. J. Geo-Inf., vol. 8, no. 12, pp. 571, Dec. 2019.
  34. Xie F., Hu D., Li F., Yang J., Liu D. “Semi-supervised classification for hyperspectral images based on multiple classifiers and relaxation strategy,” ISPRS Int. J. Geo-Inf., vol. 7, no. 7, pp. 284, Jul. 2018.
  35. Liang H., Li Q. “Hyperspectral imagery classification using sparse representations of convolutional neural network features,” Remote Sens., vol. 8, no. 2, pp. 99, Jan. 2016.
  36. Liu X., Bao H., Shum H., Peng Q. “A novel volume constrained smoothing method for meshes,” Graph. Models, vol. 64, no. 3–4, pp. 169–182, May 2002.
  37. Behnood R., Ghamisi P., Ulfarsson M.O. “Hyperspectral Feature Extraction Using Sparse and Smooth Low-Rank Analysis,” Remote Sens., vol. 11, no. 2, pp. 121, Jan. 2019.
  38. Hamdi M.A., Salem R.B. “Sparse Representations for the Spectral–Spatial Classification of Hyperspectral Image,” J. Indian Soc. Remote Sens., vol. 47, no. 2, pp. 923–929, Dec. 2018.
  39. Zhang H., Li J., Huang Y. “A Nonlocal Weighted Joint Sparse Representation Classification Method for Hyperspectral Imagery,” IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens., vol. 7, no. 6, pp. 2056–2065, Jun. 2014.
  40. Wang Y., Tang Y., Zou C., Li L., Chen H. “Modal regression based greedy algorithm for robust sparse signal recovery, clustering and classification,” Neurocomputing, vol. 372, pp. 73–83, Sep. 2020.
  41. Wang G., Han H., Carranza M., Guo S., Guo K. “Tensor-Based Low-Rank and Sparse Prior Information Constraints for Hyperspectral Image Denoising,” IEEE Access, vol. 99, pp. 1, May 2020.
  42. Monserud R.A., Leemans R. “Comparing global vegetation maps with the Kappa statistic,” Ecol. Model., vol. 62, no. 4, pp. 275–293, Aug. 1992.