Article

A Comparison of Three Different Group Intelligence Algorithms for Hyperspectral Imagery Classification

Geographic Information and Tourism College, Chuzhou University, Chuzhou 239099, China
* Author to whom correspondence should be addressed.
Processes 2022, 10(9), 1672; https://doi.org/10.3390/pr10091672
Submission received: 7 July 2022 / Revised: 19 August 2022 / Accepted: 20 August 2022 / Published: 23 August 2022
(This article belongs to the Special Issue Evolutionary Process for Engineering Optimization (II))

Abstract

The classification of hyperspectral remote sensing images is strongly affected by the curse of dimensionality. Feature extraction, a common dimensionality reduction approach, can compensate for this problem and improve classification performance. However, different feature extraction methods and classifiers suit different conditions, and a comprehensive comparative analysis is lacking. Therefore, principal component analysis (PCA), linear discriminant analysis (LDA), and locality preserving projections (LPP) were selected to reduce the dimensionality of hyperspectral remote sensing images, and support vector machine (SVM), random forest (RF), and k-nearest neighbor (KNN) classifiers were then used to classify the reduced images. Two hyperspectral remote sensing datasets were used to evaluate the nine resulting method combinations. The experimental results show that the combination of principal component analysis and support vector machine outperforms the other eight combinations.

1. Introduction

Hyperspectral remote sensing has high spectral resolution and can capture the spectral characteristics and differences of ground objects comprehensively and in detail, thus greatly improving the accuracy of ground object classification [1]. Hyperspectral images (HSIs) are highly informative remote sensing imagery containing hundreds of contiguous narrow spectral bands [2]. However, classifying HSIs efficiently remains a major challenge for scientists and researchers [3]. Among these challenges are the large amount of redundant spectral information and the high dimensionality of the observed data [4]. Several machine learning classifiers have been used for classifying HSIs. In recent years, deep-learning-based classifiers have been extensively studied for hyperspectral image classification [5,6,7]; they can achieve better classification results, but the parameters involved are complex. Unlike deep-learning-based classifiers, traditional unsupervised machine learning classifiers include fuzzy C-means (FCM) and K-means (KM), while supervised classifiers, e.g., k-nearest neighbor (KNN), Gaussian mixture model (GMM), support vector machine (SVM), random forest (RF), and artificial neural network (ANN), have been widely used in HSI classification [8,9,10]. A single classifier is simple to implement and suitable for data with few samples and high-dimensional features. However, both theory and practice show that no classifier is inherently superior to the others, owing to the characteristics of hyperspectral remote sensing data, the training samples, and the classifier itself [11,12].
Because hyperspectral remote sensing involves many bands, strong correlation between adjacent bands, and a large amount of data, it easily suffers from problems such as the “curse of dimensionality” [13], which strongly affects ground object classification. Therefore, before classification, the dimensionality of hyperspectral remote sensing images is often reduced so as to retain the original image information to the greatest extent and facilitate the understanding, analysis, and processing of hyperspectral data [12]. One dimensionality reduction approach is band selection, which selects a subset of the original bands according to certain metric criteria or methods. Although this approach can select specific bands that play a key role, it easily ignores important information in the remaining bands. The other approach is feature extraction, which transforms the original image data into features that are optimal in some sense. Feature extraction methods can be divided into linear and nonlinear methods [14]. Common linear feature extraction methods include principal component analysis (PCA) [15], linear discriminant analysis (LDA) [16], and locality preserving projections (LPP) [17]. These methods preserve the spectral characteristics of local objects well, are simple to implement, and are fast to compute. Nonlinear feature extraction methods include kernel principal component analysis (KPCA) [18], kernel independent component analysis (KICA) [19], locally linear embedding (LLE) [20], and Laplacian eigenmaps (LE) [21]. However, it is uncertain which of these better represents the structure of hyperspectral data [22], and although the homotopy perturbation method (HPM) can be used to obtain approximate analytical solutions for nonlinear control problems [23], their implementation is relatively complex. Selecting an appropriate feature extraction method can improve the processing speed and reduce the time needed to extract valuable information. Therefore, many researchers have compared different feature extraction methods [24,25,26], providing references for the classification of different types of hyperspectral data.
In applications of hyperspectral remote sensing, the simplest classification method is usually adopted to ensure classification accuracy while improving operational efficiency. In addition, different combinations of feature extraction methods and classifiers affect hyperspectral image classification differently. However, current research mainly compares the classification effects of different feature extraction methods or of different classifiers alone, which is not conducive to the further application of classification methods in the hyperspectral field. Therefore, building on current research on feature extraction, this study adopted three of the most typical feature extraction methods and three different classifiers, selected hyperspectral image datasets from two study areas to design the classification experiments, and finally compared the classification effects of the different method combinations.
The research has two main advantages: (1) it discusses the advantages and disadvantages of the different method combinations and can thus provide a reference for the classification of hyperspectral remote sensing images; (2) on the premise that the methods are easy to obtain and simple to operate, it explores the applicability of the different combinations to different types of hyperspectral data, which provides a reference for the practical application of hyperspectral remote sensing image classification and saves time in method selection.

2. Theoretical Methods

In this paper, the unsupervised feature extraction method PCA, and the supervised feature extraction methods LDA and LPP are selected from among common feature extraction methods to reduce the dimensionality of original hyperspectral images, and the common single classifiers SVM, RF, and KNN are selected to classify the images after dimensionality reduction.

2.1. Feature Extraction Method

2.1.1. Principal Component Analysis (PCA)

PCA is an unsupervised feature extraction method. Its main function is to reduce dimensionality by mapping the sample data from a high-dimensional space to a low-dimensional space through an orthogonal matrix. The first axis is taken along the direction of largest variance in the original data; the second axis is taken along the direction of largest variance in the subspace orthogonal to the first axis; the third axis is taken along the direction of largest variance orthogonal to the first two axes; and so on. Most of the variance is therefore retained in the first K coordinate axes; that is, a K-dimensional space is reconstructed from the original feature space, and this space contains most of the important information of the original space.
Let the original sample matrix be $X = [x_1, x_2, x_3, \ldots, x_n] \in \mathbb{R}^{m \times n}$, where $m$ and $n$ denote the feature dimension and the number of samples, respectively, and assume the samples are centered, i.e., $\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i = 0$. A low-dimensional matrix is obtained after mapping. The PCA algorithm seeks a set of optimal orthogonal basis vectors that minimize the reconstruction error:
$$\delta = \sum_{i=1}^{n} \left\| x_i - \sum_{a=1}^{k} (\beta_a^{T} x_i)\,\beta_a \right\|^2$$
where $\{\beta_a \mid a = 1, \ldots, k\}$ is the set of orthogonal basis vectors.
If the mapping matrix is $U = [\beta_1, \ldots, \beta_k] \in \mathbb{R}^{m \times k}$, then $y_i = U^T x_i$ and thus $Y = U^T X$, subject to the constraint $U^T U = I$, where $I$ is the identity matrix. The objective function can then be expressed as:
$$\operatorname*{arg\,min}_{U} \sum_{i} \left\| x_i - U (U^T x_i) \right\|^2$$
Minimizing this objective function is equivalent to solving the eigenvalue problem:
$$X X^T \beta_i = \lambda_i \beta_i$$
where $\lambda_i$ is an eigenvalue. The eigenvectors corresponding to the $k$ largest eigenvalues of the above problem form the optimal orthogonal basis and together constitute the mapping matrix $U$.
PCA is the most commonly used dimensionality reduction method; it assumes a globally linear low-dimensional structure and works well on linear data.
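As a minimal sketch of this procedure (not the authors' code), the projection onto the first K principal components can be written with scikit-learn; the cube shape, the reshaping step, and the choice of 30 components follow the experimental setup described in Section 3, while the variable names and random data are hypothetical.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical hyperspectral cube: rows x cols x bands (shapes are placeholders)
cube = np.random.rand(610, 340, 103)

# Flatten the spatial dimensions so that each pixel is one sample with 'bands' features
X = cube.reshape(-1, cube.shape[-1])

# Project onto the first 30 principal components (the dimensionality used in the experiments)
pca = PCA(n_components=30)
X_reduced = pca.fit_transform(X)

print(X_reduced.shape, pca.explained_variance_ratio_.sum())
```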

2.1.2. Linear Discriminant Analysis (LDA)

LDA is a supervised feature extraction method. Its basic function is to project the sample data from a high-dimensional space into a low-dimensional space so as to satisfy the requirement of “minimum within-class variance and maximum between-class variance”; that is, the projections of samples from the same class are as close as possible, while those of different classes are as far apart as possible.
In the LDA algorithm, the mapping matrix is set as U and satisfies Fisher’s criterion function:
$$\operatorname*{arg\,max}_{U} \frac{\operatorname{Tr}(U^T S_b U)}{\operatorname{Tr}(U^T S_w U)}$$
where $S_b$ is the between-class scatter matrix and $S_w$ is the within-class scatter matrix of the samples. Let $C$ be the number of classes, $n_i$ the number of samples in class $i$, and $\bar{x}_i$ and $\bar{x}$ the mean of class $i$ and the overall sample mean, respectively. With $S$ denoting the total scatter matrix of the samples, $S_b$ and $S_w$ can be defined as:
$$S_b = \frac{1}{n} \sum_{i=1}^{C} n_i (\bar{x}_i - \bar{x})(\bar{x}_i - \bar{x})^T$$

$$S = \frac{1}{n} \sum_{j=1}^{n} (x_j - \bar{x})(x_j - \bar{x})^T$$

$$S_w = S - S_b = \frac{1}{n} \sum_{j=1}^{n} x_j x_j^T - \frac{1}{n} \sum_{i=1}^{C} n_i \bar{x}_i \bar{x}_i^T$$
Solving for the optimal mapping matrix $U$ is equivalent to solving the generalized eigenvalue problem $S_b u_i = \lambda_i S_w u_i$. Since the rank of $S_b$ is at most $C - 1$, the maximum dimension of the space after LDA mapping is also $C - 1$.
The LDA algorithm assumes that the data follow a Gaussian distribution, so it is not suitable for dimensionality reduction of non-Gaussian samples. Because LDA measures class information through the class means, its dimensionality reduction performs poorly when the discriminative information lies in the variance rather than the mean.
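A minimal sketch of supervised LDA reduction, assuming scikit-learn's LinearDiscriminantAnalysis and hypothetical labeled pixels; note that the number of components cannot exceed C − 1, in line with the rank argument above.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical labeled pixels: X (n_samples x n_bands), y with C = 9 classes
X = np.random.rand(1000, 103)
y = np.random.randint(0, 9, size=1000)

# LDA is supervised and can project to at most C - 1 = 8 dimensions
lda = LinearDiscriminantAnalysis(n_components=8)
X_reduced = lda.fit_transform(X, y)

print(X_reduced.shape)  # (1000, 8)
```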

2.1.3. Locality Preserving Projections (LPP)

LPP mainly constructs a graph containing neighborhood information of a dataset comprising high-dimensional data, and then calculates a transformation matrix using the concept of the Laplacian operator [15] to map data points onto a low-dimensional subspace. This linear transformation preserves local neighborhood information well.
The LPP algorithm assumes that if two sample points $x_i$ and $x_j$ are close to each other in the original space, the corresponding projected points $y_i$ and $y_j$ remain close in the low-dimensional space. Its objective function is:
$$\min \sum_{ij} (y_i - y_j)^2 W_{ij}$$
where W i j represents weights. There are two ways to construct these weights.
The first is the heat kernel method, which uses the Euclidean distance between samples to determine the corresponding weight, i.e., the smaller the distance, the larger the weight, and vice versa. Introducing the heat kernel parameter $t$, the weight can be expressed as:
$$W_{ij} = e^{-\frac{\|x_i - x_j\|^2}{t}}$$
The second method is simpler: whenever two points are adjacent, the weight between them is set to 1; however, this method cannot effectively distinguish the affinity between sample points. The objective function can be further derived as:
$$\min \sum_{ij} (U^T x_i - U^T x_j)^2 W_{ij} = \min \left( \sum_{i} U^T x_i D_{ii} x_i^T U - \sum_{ij} U^T x_i W_{ij} x_j^T U \right) = \min\, U^T X (D - W) X^T U = \min\, U^T X L X^T U$$
where $L$ is the Laplacian matrix and $D_{ii} = \sum_j W_{ij}$. Since a larger $D_{ii}$ indicates a more important $y_i$, the constraint $U^T X D X^T U = 1$ is introduced. Finally, the objective function can be transformed into an eigenvalue problem:
$$X L X^T U = \lambda X D X^T U$$
The LPP algorithm is suitable for processing nonlinear sample data because it preserves the local nonlinear structure of the data after dimensionality reduction.
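Because LPP is not part of scikit-learn, the sketch below implements the heat-kernel formulation above directly with NumPy/SciPy; the neighborhood size, the kernel parameter t, and the small regularization term are illustrative assumptions, and the code works on samples stored row-wise, so X.T L X here corresponds to X L X^T in the notation above.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.neighbors import kneighbors_graph

def lpp(X, n_components=8, n_neighbors=10, t=1.0):
    """Minimal LPP sketch: X has shape (n_samples, n_features)."""
    # k-nearest-neighbor graph with heat-kernel weights W_ij = exp(-||x_i - x_j||^2 / t)
    dist = kneighbors_graph(X, n_neighbors, mode='distance', include_self=False).toarray()
    W = np.zeros_like(dist)
    W[dist > 0] = np.exp(-dist[dist > 0] ** 2 / t)
    W = np.maximum(W, W.T)                        # symmetrize the adjacency matrix

    D = np.diag(W.sum(axis=1))                    # degree matrix, D_ii = sum_j W_ij
    L = D - W                                     # graph Laplacian

    A = X.T @ L @ X
    B = X.T @ D @ X + 1e-9 * np.eye(X.shape[1])   # small ridge for numerical stability

    # Generalized eigenproblem X L X^T u = lambda X D X^T u; keep the smallest eigenvalues
    _, vecs = eigh(A, B)
    U = vecs[:, :n_components]
    return X @ U                                  # projected samples

X = np.random.rand(500, 103)
print(lpp(X).shape)                               # (500, 8)
```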

2.2. Classification Methods

The most representative machine learning classification methods in hyperspectral remote sensing image classification mainly include SVM, RF, and KNN.

2.2.1. Support Vector Machine (SVM)

The SVM is a supervised learning algorithm widely used for both linear and nonlinear binary and multiclass classification. The goal of the SVM algorithm is to find an optimal hyperplane that maximizes the distance between the sample points closest to it and the plane.
For linear problems, any hyperplane can be expressed by the linear equation shown below:
$$w^T x + b = 0$$
where $w$ is the weight vector and $b$ is the bias. In a high-dimensional space, the distance from a sample point to the hyperplane is:
$$\frac{|w^T x + b|}{\|w\|}$$
Maximizing the distance between the hyperplane and the sample points closest to it can be transformed into minimizing a function of $w$ under additional constraints, which require the hyperplane to classify all training samples $x_i$ correctly:
$$\min \frac{1}{2}\|w\|^2, \quad \text{s.t.} \ \ y_i (w^T x_i + b) \geq 1$$
This is a constrained optimization problem; the weight vector $w$ and bias $b$ of the optimal hyperplane can be obtained by the method of Lagrange multipliers.
For nonlinear problems, the linearly separable support vector machine performs poorly, so a nonlinear transformation is used to convert the nonlinear problem into a linear one. Letting $\Phi(x)$ denote the feature vector after mapping the original data, the hyperplane can be expressed as:
$$f(x) = w^T \Phi(x) + b$$
therefore, we have the minimization function:
$$\min_{w,b} \frac{1}{2}\|w\|^2, \quad \text{s.t.} \ \ y_i (w^T \Phi(x_i) + b) \geq 1 \quad (i = 1, 2, \ldots, m)$$
The SVM can model linear and nonlinear problems based on the kernel, but it is not suitable for large and/or noisy datasets.
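The sketch below shows how such a classifier might be applied to dimension-reduced pixels, assuming scikit-learn's SVC; the RBF kernel plays the role of the nonlinear mapping Φ, and the parameter values and random data are placeholders rather than the tuned values from Section 3.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical training pixels after dimensionality reduction (shapes are placeholders)
X_train = np.random.rand(90, 30)
y_train = np.random.randint(0, 9, size=90)
X_test = np.random.rand(200, 30)

# kernel='rbf' corresponds to the nonlinear formulation above; kernel='linear'
# would correspond to the linearly separable case. C is the penalty parameter.
svm = SVC(kernel='rbf', C=100, gamma=0.1)
svm.fit(X_train, y_train)

print(svm.predict(X_test)[:10])
```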

2.2.2. Random Forest (RF)

RF is a classifier model composed of multiple decision trees, and the final output of the model is jointly determined by all the decision trees in the forest. RF can handle both regression and classification problems. For classification, each decision tree is trained on randomly selected training samples and predicts a class; the predictions of all decision trees are then combined by voting to determine the class of a test sample. The main steps to build an RF are listed below (see the sketch after this list):
(1) Extract k training subsets from the original training set, corresponding to k decision trees.
(2) Grow each decision tree through two processes. First, random feature variables are selected: n features (n ≤ N) are randomly chosen at each node of each tree. Second, nodes are split: the information contained in each feature is calculated, and the feature with the best classification ability among the n features is selected for node splitting.
(3) Generate the random forest: each tree is grown to its maximum depth without pruning, and all the decision trees together constitute the random forest.
(4) After the random forest is constructed, the samples are input into the classifier. Each decision tree predicts a class for each sample and records it as a vote; the class with the most votes becomes the final class of the sample.
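A minimal sketch of steps (1)–(4), assuming scikit-learn's RandomForestClassifier; the number of trees, the feature subsampling rule, and the random data are illustrative assumptions, not the tuned values used in the experiments.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training pixels after dimensionality reduction (shapes are placeholders)
X_train = np.random.rand(90, 30)
y_train = np.random.randint(0, 9, size=90)
X_test = np.random.rand(500, 30)

# Each of the n_estimators trees is grown on a bootstrap sample of the training set,
# considering max_features candidate features at every split, without pruning.
rf = RandomForestClassifier(n_estimators=100, max_features='sqrt', random_state=0)
rf.fit(X_train, y_train)

# Each tree votes; the majority class is returned for every test sample
print(rf.predict(X_test)[:10])
```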

2.2.3. K-Nearest Neighbor (KNN)

KNN is a classification algorithm. For each sample, it considers a fixed number of nearest-neighbor samples; if most of these nearest neighbors belong to a certain class, the sample is assigned to that class. The distance between feature vectors is generally computed with the Minkowski distance, of which the Euclidean distance is a special case:
$$L = \left( \sum_{l=1}^{N} \left| x_i^{(l)} - x_j^{(l)} \right|^p \right)^{\frac{1}{p}}$$
where $p$ is a variable parameter. When $p = 1$, $L$ is the Manhattan distance (corresponding to the L1 norm); when $p = 2$, $L$ is the Euclidean distance (corresponding to the L2 norm); and when $p$ tends to infinity, $L$ is the Chebyshev distance, namely the maximum distance along any coordinate axis. Here, $l$ indexes the feature dimensions of the sample, and $i$ and $j$ denote the ith and jth input training sample vectors, respectively.
As for selecting the number of nearest neighbors k, a small k allows the existing training set to be predicted well, but the overall model becomes complex and prone to overfitting. A very large k reduces the test error on the test set, but the overall model becomes simpler and the approximation error increases. Therefore, in practice, k is generally chosen to be a small value, and the optimal value is usually found by cross-validation, as sketched below.
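A minimal sketch of this selection, assuming scikit-learn's KNeighborsClassifier and GridSearchCV; the candidate k values, the five-fold split, and the random data are illustrative assumptions.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical training pixels after dimensionality reduction
X_train = np.random.rand(300, 30)
y_train = np.random.randint(0, 9, size=300)

# Minkowski distance with p = 2 is the Euclidean distance used above;
# the neighbor count k is chosen from a few small values by cross-validated grid search
grid = GridSearchCV(KNeighborsClassifier(metric='minkowski', p=2),
                    {'n_neighbors': [1, 3, 5, 7, 9]}, cv=5)
grid.fit(X_train, y_train)

print(grid.best_params_)
```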

3. Data and Implementation

In this paper, hyperspectral datasets from two regions were selected for the experiments. The first hyperspectral dataset covers the Yellow River Estuary Experimental Zone, Dongying City, Shandong Province, China. The hyperspectral remote sensing images were acquired by the AHSI sensor on China's Gaofen-5 satellite in 2018 and cover 330 bands from the visible to the shortwave infrared region (0.39–2.51 µm) with a spatial resolution of 30 m. After eliminating the substandard bands, image data of 285 bands were used for the experiment. The experimental area was 721 pixels × 676 pixels and included 17 types of ground objects, such as Suaeda salsa, pond, and floodplain. To obtain sufficient training samples, eight types of ground objects were removed, and the remaining nine types were used for the experimental analysis. During the experiment, 10 samples in each category were selected for training. The false-color image map and the distribution of the ground sample data for this area are shown in Figure 1, and the sample distribution is given in Table 1. The second hyperspectral dataset comprises the Pavia University data (PaviaU for short) over the University of Pavia, Italy, acquired in 2003 by the German airborne Reflective Optics System Imaging Spectrometer (ROSIS-03). The spectral imager acquired 115 continuous band images in the wavelength range of 0.43 to 0.86 μm with a spatial resolution of 1.3 m. The bands affected by noise were removed, and the remaining 103 spectral bands were retained. The area was 610 pixels × 340 pixels and included nine types of ground objects, such as trees, asphalt roads, bricks, and meadows. During the experiment, 5% of all samples were selected as training samples and the rest as test samples. The false-color image map and the distribution of the ground sample data for this area are shown in Figure 1, and the sample distribution is given in Table 1.
In this experiment, the principal component analysis (PCA), linear discriminant analysis (LDA), and locality preserving projections (LPP) algorithms were used to extract features from the hyperspectral remote sensing images, and then support vector machine (SVM), random forest (RF), and k-nearest neighbor (KNN) classifiers were used to classify the feature images after dimensionality reduction. Before the experiment, the research data were preprocessed by data normalization (min–max normalization). The technical route is shown in Figure 2. When the PCA method was used for feature extraction, the classification accuracy stabilized after the dimensionality was reduced to 30 [27]; thus, data reduced to 30 dimensions were selected for accuracy evaluation. When the LDA and LPP methods were used for feature extraction, the dimensionality was reduced to at most C − 1 (C is the number of categories). The relevant parameters of the classifiers, such as the penalty parameter c and kernel function parameter g of the SVM, the number of decision trees in the RF, and the number of nearest neighbors in the KNN, were determined by grid search and tenfold cross-validation. Because the training subsets drawn by the decision trees of the random forest are random, the mean classification accuracy over 10 runs was taken as the accuracy evaluation index. The classification performance evaluation indexes included overall accuracy (OA), average accuracy (AA), and the Kappa coefficient. In addition, the running time of the feature extraction and classification algorithms was averaged over five runs to compare the operational efficiency of the different algorithm combinations. A sketch of one such combination is given below.
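As a hedged end-to-end sketch of one combination (PCA+SVM) under the setup just described, the following code uses scikit-learn; the stand-in random data, the grid of candidate C and gamma values, and the per-class sampling are assumptions, while the min–max normalization, the 30 PCA dimensions, the ten-fold cross-validated grid search, and the OA/AA/Kappa metrics follow the description above.

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import accuracy_score, cohen_kappa_score, recall_score

# Hypothetical stand-in data: X (pixels x bands), y (labels 0..8); the real datasets are described above
rng = np.random.default_rng(0)
X = rng.random((2000, 285))
y = rng.integers(0, 9, size=2000)

# Select 10 training samples per class, as in the Yellow River Estuary experiment
train_idx = np.hstack([rng.choice(np.where(y == c)[0], 10, replace=False) for c in np.unique(y)])
train = np.zeros(len(y), dtype=bool)
train[train_idx] = True

# Preprocessing: min-max normalization
X = MinMaxScaler().fit_transform(X)

# Feature extraction: PCA to 30 dimensions (the dimensionality used in the experiments)
X_red = PCA(n_components=30).fit_transform(X)

# SVM classifier; penalty parameter C and kernel parameter gamma tuned by grid search
# with ten-fold cross-validation, as described above (the grid values are assumptions)
param_grid = {'C': [1, 10, 100, 1000], 'gamma': [0.01, 0.1, 1]}
clf = GridSearchCV(SVC(kernel='rbf'), param_grid, cv=10)
clf.fit(X_red[train], y[train])
pred = clf.predict(X_red[~train])

# Accuracy evaluation: overall accuracy (OA), average accuracy (AA), Kappa coefficient
oa = accuracy_score(y[~train], pred)
aa = recall_score(y[~train], pred, average='macro')   # mean of per-class accuracies
kappa = cohen_kappa_score(y[~train], pred)
print(f"OA={oa:.4f}, AA={aa:.4f}, Kappa={kappa:.4f}")
```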

4. Results and Discussion

Table 2 shows the experimental results of the combinations of the three feature extraction methods and three classification methods on the Yellow River Estuary dataset. Ranked from high to low classification accuracy, the combinations are PCA+SVM, PCA+RF, LDA+SVM/KNN, LPP+SVM/KNN, PCA+KNN, LDA+RF, and LPP+RF. The highest classification accuracy was obtained by PCA+SVM, with an OA of 94.68% and a Kappa coefficient of 0.9385, followed by PCA+RF. The classification accuracy of LDA+RF and LPP+RF was poor: compared with PCA+SVM, their OA differed by 4.5% and 4.56%, and their Kappa coefficients differed by 5.2% and 5.24%, respectively. The PCA feature extraction method not only retains the initial sample information to the greatest extent and keeps the most important information, it also removes image noise. Therefore, after dimensionality reduction with the PCA algorithm, all three classifiers achieved high classification accuracy. Compared with PCA, the dimensionality reduction effect of LPP and LDA on the Yellow River Estuary data was mediocre. A possible reason is that the Yellow River Estuary data satisfy the condition of global linearity, whereas the LPP algorithm is suited to nonlinear sample data, which affects its dimensionality reduction effect. LDA considers the influence of the categories during dimensionality reduction and ensures that the sample sets of different classes are well separated afterwards; however, if there is no significant difference between the mean values of two classes while their covariances differ greatly, the dimensionality reduction offers no obvious advantage. Because the Yellow River Estuary dataset has few training samples, SVM achieves a better classification effect than the other methods on this small training set. In addition, SVM solves a convex optimization problem, so it finds the global minimum of the objective function rather than a local optimum, and its classification accuracy is therefore high. KNN, by contrast, is suited to classification with large sample sizes and easily misclassifies the Yellow River Estuary dataset with its small sample size; its classification effect therefore showed no obvious advantage. In terms of the combination of feature extraction and classification methods, PCA and SVM were found to be superior in feature extraction and classification, respectively, and thus achieved the optimal classification accuracy. Furthermore, RF is prone to overfitting when classifying noisy data; it achieved a better classification effect after PCA processing, whereas its results after LDA and LPP processing were poor.
To evaluate the classification effect more intuitively, Figure 3 shows the land type classification maps produced by the different methods. The experimental results show that PCA+SVM, PCA+RF, and PCA+KNN achieved better classification results, and the classification of buildings, Suaeda salsa, and floodplain was clear and accurate. A possible reason is that PCA extracted the main information of these ground objects, so the classification effect after PCA processing was generally good. However, after feature extraction by LDA and LPP and classification by the three classifiers, the classified ground features were not smooth enough; in particular, rivers and buildings were partly covered by misclassified maize. A possible reason is that there is no significant difference between the mean values of rivers and maize, or of buildings and maize, which affects the dimensionality reduction and ultimately leads to maize being misclassified as buildings and rivers. By contrast, the mean values of ponds and locust differ markedly, so these classes were finally well separated. In addition, for the same feature extraction method, the overall classification effect of RF was poor, which may be because the RF classifier is strongly affected by noise.
The classification accuracy for the University of Pavia data is shown in Table 3. Ranked from high to low classification accuracy, the combinations are PCA+SVM, LPP+KNN, LPP+RF, LPP+SVM, LDA+KNN, LDA+RF, PCA+RF, LDA+SVM, and PCA+KNN. Among them, PCA+SVM had the highest classification accuracy, while PCA+KNN had the worst, with differences of 9.1% in OA, 7.9% in AA, and 12.52% in the Kappa coefficient. For the University of Pavia dataset, the classification accuracy of PCA+SVM was still the best, because PCA effectively reduces the image noise and SVM focuses on the key (support) samples in classification, making it insensitive to outliers, so the classification effect is superior. In addition, the classification results after processing with the LPP method were all good, which may be attributed to the large number of training samples in the University of Pavia dataset; LPP avoids the divergence of the sample set and effectively retains the local neighborhood structure of the data. Because the LDA method takes the sample classes into account, the dimensionality that can be projected into the low-dimensional space is limited, and overfitting may occur when LDA is used on the University of Pavia dataset, resulting in a generally poor final classification effect. The combination of PCA with KNN performed worst. A possible reason is that after PCA extracts the principal components, the reduced samples become unbalanced; meanwhile, KNN relies entirely on the nearest neighbors, which for a dataset of this size may lie either too far from or too close to the target sample, leading to poor classification results.
From the land type classification maps of the University of Pavia data in Figure 4, it can be seen that the combination of PCA and SVM had the best classification effect, followed by LPP+KNN and LPP+RF, which accurately classified the meadows, metal sheets, and shadows. This is because the University of Pavia scene is primarily urban terrain with distinct shapes (e.g., rectangles and arcs that are easy to identify) and obvious differences between plot properties. This is advantageous to LPP for dimensionality reduction, as it better preserves the local neighborhood information of the samples, and the classification results of the three classifiers were accordingly good. KNN classification relies mainly on the surrounding adjacent samples; therefore, compared with the other classifiers, KNN classifies data with local structure better. In addition, the classification effects of LDA+RF, PCA+RF, LDA+SVM, and PCA+KNN were not good, and the misclassification rates of gravel and bitumen (asphalt roofing) were high. A possible reason is the strong correlation between the decision trees of the RF, which leads to repeated information in the classification, especially for gravel and bitumen. Although LDA can retain local information of the samples, the extracted edge information in some categories is not well consistent with the boundaries of the ground object distribution, and the per-class accuracy of SVM varies greatly [28] (for example, the classification accuracy of bitumen is only 42.36%, whereas gravel is 65.4% and painted metal sheets are 99.77%), so the final classification effect is not good. PCA is unable to separate samples of different classes; when the KNN classifier was used after PCA dimensionality reduction, the selected number of nearest neighbors k was large, the model was underfitted, and its ability to distinguish different ground objects was reduced. On the whole, PCA+SVM had the best classification effect, while PCA+RF, LDA+SVM, and PCA+KNN had poor classification effects.
To further compare the time complexity of the different algorithms, Table 4 presents the running times of the experimental data under the different combination algorithms (averaged over five runs). When the amount of experimental data was small, SVM and KNN computed faster, and the computation time of RF was the longest. When the amount of experimental data was large, the computation time of KNN was the shortest and RF was the slowest; the computation time of RF classification after PCA processing was about 61 times that of KNN. This is because the larger the number of decision trees in the RF, the longer each decision tree takes to participate in the classification and the slower the computation.

5. Conclusions

In this paper, several feature extraction and classification methods were combined and applied to hyperspectral remote sensing image classification. Through comparative analysis of the experimental results, the following conclusions are drawn: (1) Among the feature extraction methods, PCA can extract most of the important information from the original data; the visual effect and classification accuracy after classification with the SVM classifier are good, and the calculation speed is fast. The combination of PCA and SVM is an effective method for hyperspectral remote sensing image classification. (2) For datasets with a large number of training samples, LPP achieves a better dimensionality reduction effect, and there is little difference in the classification effect among the different classifiers. (3) For datasets with a small amount of data, the classification effect of PCA+RF is better; for large datasets, LPP+KNN and LPP+RF can achieve better classification.
In this paper, several common hyperspectral remote sensing image feature extraction methods and classifiers are preliminarily compared. Future research work will focus on proposing the best method for processing the images. At the same time, we will compare more feature extraction and classification methods, and apply them to hyperspectral images with a large number of samples. In this process, the applicability of different combination methods, optimization of dimensionality reduction methods, and classification results will also be discussed.

Author Contributions

Funding acquisition, project administration, writing—original draft: Y.W.; project administration, data curation, writing—original draft, methodology, formal analysis: W.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This study was supported by a grant from Chuzhou University (2022/2024; grant No. 2022qd008).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The Pavia University data used in the experiment can be downloaded from https://www.ehu.eus/ccwintco/index.php/Hyperspectral_Remote_Sensing_Scenes. The Yellow River Estuary data used in the experiment cannot be disclosed due to confidentiality restrictions.

Acknowledgments

We acknowledge Shuying Zang’s supervision and discussion. We would like to thank Huiqiao Sui (Nanjing Normal University) for the English and grammar corrections. We would also like to thank Mengyu Gu (Hohai University) for help with the experiment.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Tong, Q.X.; Zhang, B.; Zhang, L.F. Advances in hyperspectral remote sensing in China. J. Remote Sens. 2016, 20, 19. [Google Scholar]
  2. Zhang, M.; Li, W.; Du, Q. Diverse region-based CNN for hyperspectral image classification. IEEE Trans. Image Process. 2018, 27, 2623–2634. [Google Scholar] [CrossRef] [PubMed]
  3. Li, P.; Wang, D.; Wang, L.; Lu, H. Deep visual tracking: Review and experimental comparison. Pattern Recognit. 2018, 76, 323–338. [Google Scholar] [CrossRef]
  4. Ghamisi, P.; Plaza, J.; Chen, Y.; Li, J.; Plaza, A. Advanced supervised spectral classifiers for hyperspectral images: A review. J. Latex Cl. Files 2007, 6, 1–23. [Google Scholar]
  5. Yang, X.; Ye, Y.; Li, X.; Lau, R.Y.K.; Zhang, X.; Huang, X. Hyperspectral image classification with deep learning models. IEEE Trans. Geosci. Remote Sens. 2018, 56, 5408–5423. [Google Scholar] [CrossRef]
  6. Yin, X.; Wang, R.; Liu, X.; Cai, Y. Deep forest-based classification of hyperspectral images. Proc. Chin. Control Conf. 2018, 2018, 10367–10372. [Google Scholar]
  7. Yu, D.; Ma, Z.; Wang, R. Efficient smart grid load balancing via fog and cloud computing. Math. Probl. Eng. 2022, 22, 3151249. [Google Scholar] [CrossRef]
  8. Wang, X.; Feng, Y. New Method Based on Support Vector Machine in Classification for Hyperspectral Data. In Proceedings of the International Symposium on Computational Intelligence and Design, Wuhan, China, 17–18 October 2008; pp. 76–80. [Google Scholar]
  9. Joelsson, S.R.; Benediktsson, J.A.; Sveinsson, J.R. Random Forest Classifiers for Hyperspectral Data. In Proceedings of the 2005 IEEE International Geoscience and Remote Sensing Symposium, 2005 IGARSS ’05, Seoul, Korea, 29–29 July 2005. [Google Scholar]
  10. Ma, L.; Crawford, M.M.; Tian, J. Local manifold learning-based K-nearest-neighbor for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2010, 48, 4099–4109. [Google Scholar] [CrossRef]
  11. Du, P.J.; Xia, J.S.; Xue, Z.H.; Tan, K.; Su, H.J.; Bao, R. Advances in classification of hyperspectral remote sensing images. J. Remote Sens. 2016, 20, 21. [Google Scholar]
  12. Du, P.J.; Xia, J.S.; Zhang, W.; Tan, K.; Liu, Y.; Liu, S.C. Multiple classifier system for remote sensing image classification: A review. Sensors 2012, 12, 4764–4792. [Google Scholar] [CrossRef]
  13. Hughes, G.F.; Hughes, G. On the mean accuracy of statistical pattern recognizers. IEEE Trans. Inf. Theory 1968, 14, 55–63. [Google Scholar] [CrossRef] [Green Version]
  14. Zhang, B. Frontier of hyperspectral image processing and information extraction. J. Remote Sens. 2016, 20, 1062–1089. [Google Scholar] [CrossRef]
  15. Farrell, M.D.; Mersereau, R.M. On the impact of PCA dimension reduction for hyperspectral detection of difficult targets. IEEE Geosci. Remote Sens. Lett. 2005, 2, 192–195. [Google Scholar] [CrossRef]
  16. Tharwat, A.; Gaber, T.; Ibrahim, A.; Hassanien, A.E. Linear discriminant analysis: A detailed tutorial. AI Commun. 2017, 30, 169–190. [Google Scholar] [CrossRef] [Green Version]
  17. He, X.; Niyogi, P. Locality Preserving Projections. Adv. Neural Inf. Process. Syst. 2004, 16, 153–160. [Google Scholar]
  18. Schölkopf, B.; Smola, A.; Müller, K.R. Kernel Principal Component Analysis. In Proceedings of the 7th International Conference on Artificial Neural Networks—ICANN 1997, Lausanne, Switzerland, 8–10 October 1997; Springer-Verlag GmbH: Cham, Switzerland; pp. 583–588. [Google Scholar]
  19. Bach, F.R.; Jordan, M.I. Kernel independent component analysis. J. Mach. Learn. Res. 2003, 3, 1–48. [Google Scholar]
  20. Roweis, S.T.; Saul, L.K. Nonlinear dimensionality reduction by locally linear embedding. Science 2000, 290, 2323–2326. [Google Scholar] [CrossRef] [Green Version]
  21. Belkin, M.; Niyogi, P. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Comput. 2003, 15, 1373–1396. [Google Scholar] [CrossRef] [Green Version]
  22. Bachmann, C.M.; Ainsworth, T.L.; Fusina, R.A. Exploiting manifold geometry in hyperspectral imagery. IEEE Trans. Geosci. Remote Sens. 2005, 43, 441–454. [Google Scholar] [CrossRef]
  23. Gepreel, K.A.; Higazy, M.; Mahdy, A.M.S. Optimal control, signal flow graph, and system electronic circuit realization for nonlinear Anopheles mosquito model. Int. J. Mod. Phys. C 2020, 31, 2050130. [Google Scholar] [CrossRef]
  24. Uddin, M.P.; Mamun, M.A.; Hossain, M.A. PCA-based feature reduction for hyperspectral remote sensing image classification. IETE Technol. Rev. 2021, 38, 377–396. [Google Scholar] [CrossRef]
  25. Fabiyi, S.D.; Murray, P.; Zabalza, J.; Ren, J. Folded LDA: Extending the linear discriminant analysis algorithm for feature extraction and data reduction in hyperspectral remote sensing. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 12312–12331. [Google Scholar] [CrossRef]
  26. Ayesha, S.; Hanif, M.K.; Talib, R. Overview and comparative study of dimensionality reduction techniques for high dimensional data. Inf. Fusion 2020, 59, 44–58. [Google Scholar] [CrossRef]
  27. Su, H.J.; Gu, M.Y. Extraction of local alignment feature from hyperspectral remote sensing image based on optimization and discriminant. J. Remote Sens. 2021, 25, 16. [Google Scholar]
  28. Shao, W.J.; Sun, W.W.; Yang, G. Comparative analysis of texture feature extraction from hyperspectral remote sensing images. Remote Sens. Technol. Appl. 2021, 36, 10. [Google Scholar]
Figure 1. The distribution of false-color image maps and ground sample data: (a) false-color image for Yellow River Estuary data; (b) ground truth distribution map for Yellow River Estuary data; (c) false-color image map for Pavia University data; (d) ground truth distribution map for Pavia University data.
Figure 2. Technical route.
Figure 3. Classification maps of land types for different combination algorithms of Yellow River Estuary: (a) PCA+SVM; (b) LDA+SVM; (c) LPP+SVM; (d) PCA+RF; (e) LDA+RF; (f) LPP+RF; (g) PCA+KNN; (h) LDA+KNN; (i) LPP+KNN.
Figure 4. Classification maps of land types for different combination algorithms of Pavia University data. (a) PCA+SVM; (b) LDA+SVM; (c) LPP+SVM; (d) PCA+RF; (e) LDA+RF; (f) LPP+RF; (g) PCA+KNN; (h) LDA+KNN; (i) LPP+KNN.
Table 1. Training sample of datasets.
| No. | Class (Yellow River Estuary) | Training Sample | Test Sample | Class (PaviaU) | Training Sample | Test Sample |
|-----|------------------------------|-----------------|-------------|----------------|-----------------|-------------|
| 1 | Pond | 10 | 300 | Asphalt | 332 | 6299 |
| 2 | Building | 10 | 406 | Meadows | 932 | 17,717 |
| 3 | Suaeda salsa | 10 | 255 | Gravel | 105 | 1994 |
| 4 | Flood plain | 10 | 95 | Trees | 153 | 2911 |
| 5 | River | 10 | 162 | Painted metal sheets | 67 | 1278 |
| 6 | Soybean | 10 | 538 | Bare Soil | 251 | 4778 |
| 7 | Broomcorn | 10 | 369 | Bitumen | 67 | 1263 |
| 8 | Maize | 10 | 123 | Self-Blocking Bricks | 184 | 3498 |
| 9 | Locust | 10 | 367 | Shadows | 47 | 900 |
Table 2. Classification accuracy of Yellow River Estuary dataset (%).
| Class Label | PCA+SVM | LDA+SVM | LPP+SVM | PCA+RF | LDA+RF | LPP+RF | PCA+KNN | LDA+KNN | LPP+KNN |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 81.67 | 99.67 | 100 | 95.67 | 100.00 | 98 | 74.33 | 99.67 | 100 |
| 2 | 100 | 100 | 100 | 100 | 97.04 | 93.6 | 99.75 | 100 | 100 |
| 3 | 100 | 100 | 100 | 100 | 98.82 | 99.61 | 100 | 100 | 100 |
| 4 | 100 | 94.74 | 93.68 | 100 | 85.26 | 85.26 | 100 | 94.74 | 93.68 |
| 5 | 95.68 | 83.33 | 81.48 | 89.51 | 66.05 | 77.16 | 92.59 | 83.33 | 81.48 |
| 6 | 100 | 98.7 | 97.4 | 99.44 | 93.68 | 88.66 | 100 | 98.7 | 97.4 |
| 7 | 99.73 | 99.19 | 98.64 | 99.46 | 96.48 | 98.37 | 100 | 99.19 | 98.64 |
| 8 | 83.74 | 95.12 | 94.31 | 91.87 | 80.49 | 95.93 | 90.24 | 95.12 | 94.31 |
| 9 | 84.74 | 74.11 | 75.2 | 76.02 | 74.66 | 71.93 | 81.2 | 74.11 | 75.2 |
| Overall accuracy (OA) | 94.68 | 94.49 | 94.15 | 94.66 | 90.18 | 90.12 | 93.46 | 94.49 | 94.15 |
| Average accuracy (AA) | 94.12 | 93.66 | 93.03 | 94.18 | 88.02 | 87.57 | 92.99 | 93.66 | 93.03 |
| Kappa coefficient | 93.85 | 93.63 | 93.24 | 93.83 | 88.65 | 88.61 | 92.43 | 93.63 | 93.24 |
Table 3. Classification accuracy of Pavia University data (%).
| Class Label | PCA+SVM | LDA+SVM | LPP+SVM | PCA+RF | LDA+RF | LPP+RF | PCA+KNN | LDA+KNN | LPP+KNN |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 92.32 | 89.03 | 91.09 | 92.4 | 91.43 | 93.27 | 84.89 | 90.68 | 89.76 |
| 2 | 97.79 | 94.07 | 96 | 99.25 | 94.56 | 95.61 | 96.07 | 96.43 | 98.99 |
| 3 | 84.95 | 65.4 | 69.76 | 54.71 | 65.25 | 64.24 | 64.54 | 66.75 | 70.81 |
| 4 | 91.38 | 85.06 | 89.63 | 85.54 | 86.81 | 91.21 | 75.27 | 82.79 | 86.98 |
| 5 | 99.3 | 99.77 | 99.77 | 99.53 | 99.77 | 100 | 98.83 | 99.61 | 99.61 |
| 6 | 88.45 | 73.44 | 76.33 | 45.25 | 76.39 | 80.98 | 53.39 | 71.45 | 72.67 |
| 7 | 86.46 | 42.36 | 63.34 | 53.13 | 42.28 | 63.90 | 80.52 | 62.87 | 81.08 |
| 8 | 89.57 | 76.96 | 83.73 | 92.11 | 77.04 | 82.59 | 81.16 | 78.24 | 84.16 |
| 9 | 99.89 | 99.44 | 99.67 | 99.89 | 98.56 | 99.11 | 99.89 | 99 | 98.89 |
| Overall accuracy (OA) | 93.79 | 86.03 | 89.31 | 86.62 | 87.13 | 89.82 | 84.69 | 87.72 | 90.41 |
| Average accuracy (AA) | 92.84 | 83.94 | 88.18 | 91.18 | 86.13 | 89.24 | 84.94 | 87.02 | 89.88 |
| Kappa coefficient | 91.74 | 81.34 | 85.72 | 81.64 | 82.82 | 86.46 | 79.22 | 83.49 | 87.1 |
Table 4. Computational time of different methods (s).
| Dataset | PCA+SVM | LDA+SVM | LPP+SVM | PCA+RF | LDA+RF | LPP+RF | PCA+KNN | LDA+KNN | LPP+KNN |
|---|---|---|---|---|---|---|---|---|---|
| Yellow River Estuary | 0.08 | 0.03 | 0.01 | 3.13 | 1.65 | 2.73 | 0.06 | 0.01 | 0.01 |
| University of Pavia | 2.15 | 0.85 | 1.26 | 29.23 | 11.36 | 18.41 | 0.48 | 0.03 | 0.3 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
