Article

Dual-Weighted Kernel Extreme Learning Machine for Hyperspectral Imagery Classification

1 School of Electronics and Information Engineering, Northwestern Polytechnical University, Xi’an 710072, China
2 School of Electronics and Information Engineering, Shaoguan University, Shaoguan 512023, China
* Author to whom correspondence should be addressed.
Remote Sens. 2021, 13(3), 508; https://doi.org/10.3390/rs13030508
Submission received: 9 November 2020 / Revised: 17 January 2021 / Accepted: 28 January 2021 / Published: 1 February 2021
(This article belongs to the Special Issue Classification and Feature Extraction Based on Remote Sensing Imagery)

Abstract

Due to its excellent performance in high-dimensional spaces, the kernel extreme learning machine has been widely used in pattern recognition and machine learning. In this paper, we propose a dual-weighted kernel extreme learning machine for hyperspectral imagery classification. First, diverse spatial features are extracted by guided filtering. Then, the spatial and spectral features are combined in a weighted kernel summation form. Finally, the weighted extreme learning machine is employed for the hyperspectral imagery classification task. This dual-weighted framework guarantees that subtle spatial features are extracted, while the importance of minority samples is emphasized. Experiments carried out on three public data sets demonstrate that the proposed dual-weighted kernel extreme learning machine (DW-KELM) performs better than other kernel methods, in terms of classification accuracy, and can achieve satisfactory results.


1. Introduction

Hyperspectral remote sensing images contain rich spatial and spectral information about ground objects, covering the ultraviolet, visible, and near- and mid-infrared regions of the electromagnetic spectrum. For this reason, the ability to recognize and classify ground objects has greatly improved, and the classification of hyperspectral images has become a hot research topic in recent years, with a considerable amount of research having been conducted. However, despite the rich information provided by hyperspectral images, their high dimensionality and non-linear characteristics make detailed classification difficult. Moreover, as the number of available training samples is typically small, the Hughes phenomenon [1] is often encountered during the supervised classification of hyperspectral images (HSI). To overcome the high-dimensionality problem, many methods have been introduced for HSI classification and have shown good performance, such as manifold learning, the support vector machine (SVM) [2], and composite kernel-based methods [3,4,5,6,7].
Recently, many deep learning methods have been employed for hyperspectral imagery classification tasks. H. Wu [8] proposed semi-supervised deep learning for hyperspectral image classification, in which limited labeled data and abundant unlabeled data are used to train a deep neural network. B. Pan [9] introduced a dilated semantic segmentation network, in order to avoid the spatial information loss caused by the pooling operation; the network has an end-to-end structure, thus reducing time consumption. In [10], a deep learning method combining spatial and spectral information for HSI classification was successfully designed. An unsupervised spatial–spectral feature learning strategy using a 3-dimensional convolutional auto-encoder (3D-CAE) has been proposed for hyperspectral data [11]; the proposed 3D-CAE maximally explores the spatial–spectral structure information for feature extraction.
Sparse representation is also a commonly used approach for hyperspectral image classification. J. Peng [12] designed a self-paced joint sparse representation model, which replaces the least-squares loss in the standard joint sparse representation with a weighted least-squares loss and adopts a self-paced learning (SPL) strategy to learn the weights of neighboring pixels. To improve the robustness of joint sparse representation (JSR) [13], J. Peng proposed a maximum likelihood estimation (MLE)-based JSR model, which replaces the traditional quadratic loss function with an MLE-like estimator that measures the joint approximation error by placing priors on the coding residuals; this model can easily be converted into an iteratively reweighted JSR problem. Y. Yuan [14] proposed a method that combines multitask joint sparse representation (MJSR) with a stepwise Markov random field framework to tackle such problems.
As an effective strategy, multiple features have been widely used to improve classification accuracy. Y. Gu [15] proposed non-linear multiple kernel learning, which learns an optimal combined kernel from pre-defined linear base kernels. J. Li [16] constructed a new family of generalized composite kernels, which showed great flexibility in combining the spectral information contained in hyperspectral data without weight parameters. W. Li [17] introduced a one-against-one strategy using discriminant analysis within kernel-induced feature spaces. L. Fang [18] presented a novel framework to effectively utilize the spectral–spatial information of superpixels through multiple kernels, which extracts extinction profiles from three independent components and then creates an adaptive composite kernel to explore the spatial information.
Compared to the above-mentioned methods, the extreme learning machine (ELM) [19] has received considerable attention, due to its advantages. ELM does not need to tune the hidden layer parameters once the network architecture is determined; the hidden layer parameters are randomly generated, independent of the training data and application environment. By minimizing the training error and the norm of the output weights simultaneously, ELM tends to have better generalization performance and provides a unified analytic solution to binary, multiclass, and regression problems. However, despite these advantages, when ELM is directly applied to an HSI data set, the accuracy is still not high, as only spectral information is used. Several methods combining spatial–spectral information based on ELM have therefore been proposed for HSI classification. To evaluate the effectiveness of a kernel-based extreme learning machine algorithm, M. Pal [20] applied the kernel ELM method to multispectral and hyperspectral remote-sensing data; the results suggested an accuracy similar to that of SVM at a lower computational cost. Using a weighted summation form of the kernel extreme learning model, Y. Zhou [21] proposed two spatial–spectral composite kernel ELM algorithms for HSI classification. C. Chen [22] exploited Gabor filtering and multi-hypothesis prediction to extract spatial information, then used the joint spectral information as ELM input. In [23], extended morphological profiles were employed to provide spatial information in ELM-based classification of HSI. M. Jiang [24] exploited a multiscale spatial weighted-mean filtering-based approach to extract multiple types of spatial information. F. Cao [25] proposed probabilistic modeling with a sparse representation and weighted composite features (WCFs) to derive the optimized output weights. Drawing on deep learning, J. Li [26] proposed a new classification framework derived from the deep KELM network, in which deep KELM is employed to generate deep spectral features. Ensemble learning-based methods have also been developed for hyperspectral imagery classification: U. Ergul [27] proposed a new boosting-based algorithm that constructs composite kernels (CKs) from spatial and spectral hybrid kernels. In [28], an improved hierarchical ELM was designed by adding an ELM to a hierarchical ELM; in this model, the average spectral–spatial features were extracted twice by the multiple-layer framework, and satisfactory results were achieved. As a kind of optimization, a multiple reduced kernel extreme learning machine was introduced [29], with which the combination of hybrid kernels and optimal weights can be achieved, allowing the features of the hyperspectral image to be fully represented and the classification error rates to be reduced. As extended attribute profiles usually require manual parameter settings, Marpu [30] presented a technique to automatically produce extended attribute profiles based on the standard deviation, where homogeneous regions are retained by the minimum and maximum values of the standard deviation. Recently, swarm intelligence algorithms have also been used: H. Su [31,32] proposed an extreme learning machine optimized by the firefly algorithm, where the ELM parameters are tuned by the proposed method. J. Li [33] presented an empirical linear relationship between the number of training samples and the number of hidden nodes.
To improve the individual performance of a basic classifier, F. Lv [34] proposed a stacked auto-encoder ELM (SAE-ELM) model, in which features are extracted by the model and the Q statistic is adopted to determine the final results. Spatial features provide subtle information that helps discriminate between different classes. As an excellent edge-preserving filter, guided image filtering [35], proposed by He, has been widely used in fields such as noise reduction and haze removal. B. Pan [36] proposed an ensemble framework in which better generalization is achieved by integrating many individual learners; hierarchical guidance filtering was employed to establish the ensemble model. Y. Guo [37] developed two fusion methods for spectral and spatial features and adopted guided image filtering to obtain better results. Z. Wang [38] proposed a discriminative guided filtering framework that integrates a classifier with guided filtering. Guided image filtering establishes a local linear model between the guidance image and the output image, and implicitly filters the input image by minimizing the difference between the input image and the output image [35,39]. Inspired by these studies, we use guided image filtering to extract spatial information, in order to further improve the accuracy of hyperspectral image classification (HSIC).
While these spatial–spectral ELM-based methods perform well, their performance can be further improved, as they ignore the imbalance of samples among classes in multiclass classification tasks: the majority classes weaken the minority classes' influence on classification performance, so small-sized classes should be taken into consideration. Motivated by these observations, we propose a dual-weighted kernel extreme learning machine for hyperspectral image classification. On the one hand, spatial features at different scales extend the feature space, and the combination of multiscale spatial features enriches sample diversity, which may bring more information to the classification task. On the other hand, in an imbalanced data environment, the separating boundary tends to be pushed toward the minority class, which in fact favors the performance of the majority class. To alleviate this suppression by the majority, we assign an extra weight to each sample, strengthening the impact of the minorities and weakening the impact of the majorities to some extent.
To tackle the above task, the main contributions of this paper are summarized below:
A spatial–spectral dual-weighted kernel extreme learning machine framework for hyperspectral image classification is proposed. As discriminative spatial features help to separate similar classes, the weighted summation of spatial and spectral kernels makes hyperspectral imagery classification feasible. In addition, the minority classes should not be ignored, as the majority classes may weaken the generalization performance on minorities; for this reason, the weighted extreme learning machine is employed to counteract this imbalance problem.
The rest of the paper is organized as follows: In Section 2, the related work on single-layer feed-forward networks, ELM and weighted kernel ELM, and guided filters is introduced, and the proposed dual-weighted kernel ELM is described in detail. The experimental results and analysis are provided in Section 3. The conclusions of this paper are given in Section 4.

2. Materials and Methods

2.1. Weighted Kernel Extreme Learning Machine

2.1.1. Single-Layer Feedforward Neural Networks (SLFN)

ELM is a fast-learning algorithm for single hidden layer neural networks, which randomly initializes the input weights and biases, saving a considerable amount of computation time. Meanwhile, the random input may bring diversity to the samples.
For a single hidden layer neural network, suppose that there are $N$ arbitrary samples $\{(x_i, y_i)\}_{i=1}^{N}$, where $x_i = [x_{i1}, \ldots, x_{id}]^T \in \mathbb{R}^d$ and $y_i = [y_{i1}, \ldots, y_{im}]^T \in \mathbb{R}^m$. A single hidden layer neural network with $L$ hidden nodes can then be expressed as
$$f_L(x) = \sum_{i=1}^{L} \beta_i G_i(x) = \sum_{i=1}^{L} \beta_i G(a_i, b_i, x), \qquad (1)$$
where $G_i(x) = G(a_i, b_i, x)$ is the activation function, $\beta_i$ is the output weight, $a_i = [a_{i1}, \ldots, a_{id}]^T$ is the weight vector, and $b_i$ is the bias of the $i$th hidden node.
G. Huang [19] proved that an SLFN with $L$ nodes can approximate an arbitrary function. Therefore, there exist $\beta_i$, $a_i$, and $b_i$ satisfying $\sum_{j=1}^{N} \left\| f_L(x_j) - y_j \right\| = 0$, such that
$$\sum_{i=1}^{L} \beta_i G(a_i, b_i, x_j) = y_j, \quad j = 1, \ldots, N. \qquad (2)$$
We used the matrix form to rewrite Equation (2):
$$H\beta = Y, \qquad (3)$$
where $\beta = [\beta_1, \ldots, \beta_L]^T \in \mathbb{R}^{L \times m}$ and $Y = [y_1, \ldots, y_N]^T \in \mathbb{R}^{N \times m}$.
The hidden layer output matrix, H , is expressed as
$$H = \begin{pmatrix} G(a_1, b_1, x_1) & \cdots & G(a_L, b_L, x_1) \\ \vdots & \ddots & \vdots \\ G(a_1, b_1, x_N) & \cdots & G(a_L, b_L, x_N) \end{pmatrix}. \qquad (4)$$
The matrix $H$ is the hidden layer output matrix of activation values. In Equation (5), the parameters $a_i$ and $b_i$ are both unknown:
$$\arg\min_{a_i, b_i, \beta} \left\| H(a_1, \ldots, a_L; b_1, \ldots, b_L)\,\beta - Y \right\|^2. \qquad (5)$$
In traditional neural networks, Equation (5) is usually solved using a gradient descent-based iterative algorithm. During the iteration process, all parameters need to be tuned at each step, which may cause gradient diffusion, local minima, and overfitting.

2.1.2. ELM and Weighted Kernel ELM

As for ELM, the solution of the parameters is completely different. The parameters $a_i$ and $b_i$ are randomly generated and do not change during the whole procedure; the hidden layer is determined once the input parameters are produced. Based on the input parameters and the hidden layer, the output can be derived by a linear analytic solution. The final goal of ELM is to obtain the smallest training error with the smallest norm of the output weights. This is expressed as
$$\min_{\beta}\ \|H\beta - Y\|^2 \quad \text{and} \quad \min_{\beta}\ \|\beta\|. \qquad (6)$$
Based on optimization theory, Equation (6) can be formulated as follows:
$$\min_{\beta}\ \frac{1}{2}\|\beta\|^2 + C\,\frac{1}{2}\sum_{i=1}^{N}\|\xi_i\|^2 \quad \text{s.t.} \quad h(x_i)\beta = y_i^T - \xi_i^T, \ i = 1, \ldots, N, \qquad (7)$$
where $h(x) = [G(a_1, b_1, x), \ldots, G(a_L, b_L, x)]$, $\xi_i$ is the training error, and $C$ is the regularization parameter.
According to Lagrange multiplier theory and Karush–Kuhn–Tucker (KKT) optimization conditions [40], training the ELM is equivalent to solving the following dual optimization problem:
$$\min_{\beta, \alpha, \xi_i} L_{ELM} = \frac{1}{2}\|\beta\|_F^2 + C\,\frac{1}{2}\sum_{i=1}^{N}\|\xi_i\|_F^2 - \sum_{i=1}^{N}\sum_{j=1}^{m}\alpha_{i,j}\left(h(x_i)\beta_j - y_{i,j} + \xi_{i,j}\right), \qquad (8)$$
where $\beta_j$ is the $j$th column of the matrix $\beta$ and $\alpha_{i,j}$ is the Lagrange multiplier. From the KKT conditions, we can further derive:
$$\frac{\partial L_{ELM}}{\partial \beta_j} = 0 \ \Rightarrow\ \beta = H^T \alpha, \qquad (9)$$

$$\frac{\partial L_{ELM}}{\partial \xi_i} = 0 \ \Rightarrow\ \alpha_i = C\,\xi_i, \quad i = 1, \ldots, N, \qquad (10)$$

$$\frac{\partial L_{ELM}}{\partial \alpha_i} = 0 \ \Rightarrow\ h(x_i)\beta - y_i^T + \xi_i^T = 0, \quad i = 1, \ldots, N. \qquad (11)$$
Based on Equations (9)–(11), the output weight, β , can be expressed as
$$\beta = H^T\left(\frac{I}{C} + HH^T\right)^{-1} Y. \qquad (12)$$
After obtaining the output weight β , the output of the ELM is obtained as:
$$f(x) = h(x)\beta. \qquad (13)$$
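For concreteness, Equations (12) and (13) admit a direct implementation. The following NumPy sketch is our own illustration (the original experiments used MATLAB, and all names here are hypothetical); it generates the random hidden layer once and solves for the output weights in closed form:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_elm(X, Y, L=500, C=1e3, seed=0):
    """Basic ELM: random hidden layer, then the closed-form output weights
    of Equation (12): beta = H^T (I/C + H H^T)^(-1) Y."""
    rng = np.random.default_rng(seed)
    N, d = X.shape
    A = rng.uniform(-1.0, 1.0, size=(d, L))   # input weight vectors a_i
    b = rng.uniform(-1.0, 1.0, size=(1, L))   # biases b_i
    H = sigmoid(X @ A + b)                    # N x L hidden layer output matrix
    beta = H.T @ np.linalg.solve(np.eye(N) / C + H @ H.T, Y)
    return A, b, beta

def predict_elm(X, A, b, beta):
    """Equation (13): f(x) = h(x) beta."""
    return sigmoid(X @ A + b) @ beta
```

Here, Y is a one-hot target matrix; because the hidden layer parameters are drawn once and never updated, the solution stays analytic.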
Traditional ELM does not take the imbalance problem into account, while the weighted ELM was designed to address it [41]. In [41], two weighting schemes were proposed:
Scheme 1:
$$W_{ii} = \frac{1}{t_k}, \qquad (14)$$
where $t_k$ is the total number of samples belonging to the $k$th class. After applying weighting scheme 1, a balanced ratio between the minority and majority classes is obtained.
Scheme 2:
$$W_{ii} = \begin{cases} \dfrac{0.618}{t_k}, & t_k > t_{avg} \\[4pt] \dfrac{1}{t_k}, & t_k \le t_{avg}, \end{cases} \qquad (15)$$
where $t_{avg}$ represents the average number of samples over all classes. If $t_k$ is below the average, the weight $1/t_k$ is retained; otherwise, the golden-ratio weight $0.618/t_k$ is applied. Similar to ELM, the optimization form of the weighted ELM can be expressed as:
$$\min_{\beta}\ \frac{1}{2}\|\beta\|^2 + C\,\frac{1}{2}\sum_{i=1}^{N} W_{ii}\,\|\xi_i\|^2 \quad \text{s.t.} \quad h(x_i)\beta = y_i^T - \xi_i^T, \ i = 1, \ldots, N. \qquad (16)$$
For the multiclass weighted kernel ELM [41,42], we define a diagonal matrix $W$ whose entry $W_{ii}$ is associated with the training sample $x_i$. The output weight $\beta$ can be expressed as
$$\beta = H^T\left(\frac{I}{C} + WHH^T\right)^{-1} WY. \qquad (17)$$
Given a new sample, x , the output function of the weighted ELM classifier is obtained from f ( x ) = h ( x ) β , that is:
$$f(x) = h(x)\beta = h(x)H^T\left(\frac{I}{C} + WHH^T\right)^{-1} WY. \qquad (18)$$
Similar to SVM kernel methods, the kernel trick can be used in Equation (18), where kernel functions replace the inner products $h(x)H^T$ and $HH^T$.
The kernel trick version of the weighted ELM is the weighted kernel ELM. Thus, the N × N version of the kernel ELM can be rewritten as:
$$f(x) = \begin{pmatrix} k(x, x_1) \\ \vdots \\ k(x, x_N) \end{pmatrix}^T \left(\frac{I}{C} + WK\right)^{-1} WY, \qquad (19)$$
where $h(x)H^T = [k(x, x_1), \ldots, k(x, x_N)]$ and $HH^T = K$, with $K_{ij} = k(x_i, x_j)$.
Therefore, the weighted kernel ELM provides a unified solution for networks with different feature mappings and, at the same time, strengthens the impact of minority class samples through the weight matrix.
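A minimal sketch of the weighted kernel ELM classifier of Equation (19) follows, again as our own NumPy illustration rather than the authors' code; the weighting follows schemes 1 and 2 from [41]:

```python
import numpy as np

def rbf_kernel(X1, X2, sigma):
    """Pairwise RBF kernel k(x_i, x_j) = exp(-||x_i - x_j||^2 / (2 sigma^2))."""
    d2 = (np.sum(X1**2, axis=1)[:, None] + np.sum(X2**2, axis=1)[None, :]
          - 2.0 * X1 @ X2.T)
    return np.exp(-np.maximum(d2, 0.0) / (2.0 * sigma**2))

def class_weights(y, golden=True):
    """Diagonal weight matrix W: scheme 1 uses 1/t_k for every class; scheme 2
    (golden=True) softens majority classes (t_k > t_avg) to 0.618/t_k."""
    classes, counts = np.unique(y, return_counts=True)
    t = dict(zip(classes.tolist(), counts.tolist()))
    t_avg = counts.mean()
    w = np.array([0.618 / t[c] if (golden and t[c] > t_avg) else 1.0 / t[c]
                  for c in y.tolist()])
    return np.diag(w)

def wkelm_fit_predict(Xtr, ytr, Xte, sigma=1.0, C=1e3):
    """Weighted kernel ELM, Equation (19): f(x) = k_x (I/C + W K)^(-1) W Y."""
    classes = np.unique(ytr)
    Y = (ytr[:, None] == classes[None, :]).astype(float)   # one-hot targets
    K = rbf_kernel(Xtr, Xtr, sigma)
    W = class_weights(ytr)
    alpha = np.linalg.solve(np.eye(len(ytr)) / C + W @ K, W @ Y)
    scores = rbf_kernel(Xte, Xtr, sigma) @ alpha
    return classes[np.argmax(scores, axis=1)]
```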

2.2. Spatial Feature Extraction

To improve the performance of ELM for HSI classification, guided image filtering is adopted to extract spatial information. The guided image filter proposed by He [35] is a novel type of explicit filter that acts as an edge-preserving smoothing operator, like the bilateral filter, but with better behavior near edges. Let $p$ be the input image, $g$ the guidance image, and $q$ the output image. The output is a linear transform of $g$ in a window $\omega_o$ around a pixel $o$ with size $(2r+1) \times (2r+1)$, where $r$ is the window radius and $u$ indexes the pixels of $\omega_o$:
$$q_u = a_o g_u + b_o, \quad \forall u \in \omega_o, \qquad (20)$$
where $a_o$ and $b_o$ are linear coefficients assumed to be constant in $\omega_o$. From Equation (20), we can see that $\nabla q = a_o \nabla g$, which means that the output $q$ has a gradient similar to that of the guidance image $g$. The coefficients are obtained by minimizing the following cost function:
$$E(a_o, b_o) = \sum_{u \in \omega_o}\left((a_o g_u + b_o - p_u)^2 + \varepsilon a_o^2\right), \qquad (21)$$
where $\varepsilon$ is a regularization parameter that prevents $a_o$ from becoming too large. The values of $a_o$ and $b_o$ are obtained by linear regression [40]:
$$a_o = \frac{\frac{1}{|\omega|}\sum_{u \in \omega_o} g_u p_u - \mu_o \bar{p}_o}{\sigma_o^2 + \varepsilon}, \qquad (22)$$

$$b_o = \bar{p}_o - a_o \mu_o, \qquad (23)$$
where $\mu_o$ and $\sigma_o^2$ are the mean and variance of $g$ in the window $\omega_o$, $|\omega|$ is the number of pixels in $\omega_o$, and $\bar{p}_o = \frac{1}{|\omega|}\sum_{u \in \omega_o} p_u$ is the mean of $p$ in $\omega_o$. After obtaining the coefficients $a_o$ and $b_o$, the filtered output $q_u$ can be computed, yielding the linearly transformed image $q$.
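Under He's formulation [35], the per-window means in Equations (22) and (23) reduce to box filters, and the coefficients are then averaged over the overlapping windows covering each pixel. A compact sketch (our illustration, assuming SciPy for the box filter):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(g, p, r=2, eps=1e-4):
    """Guided image filter (Equations (20)-(23)): fit the local linear model
    q = a*g + b in each (2r+1)x(2r+1) window, then average a and b per pixel."""
    mean = lambda x: uniform_filter(x, size=2 * r + 1, mode='reflect')
    mu_g, p_bar = mean(g), mean(p)
    var_g = mean(g * g) - mu_g**2              # sigma_o^2
    cov_gp = mean(g * p) - mu_g * p_bar        # (1/|w|) sum g_u p_u - mu_o p_bar_o
    a = cov_gp / (var_g + eps)                 # Equation (22)
    b = p_bar - a * mu_g                       # Equation (23)
    return mean(a) * g + mean(b)               # averaged coefficients give q
```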

2.3. Proposed Dual-Weighted Kernel ELM-Based Method

In this section, the proposed dual-weighted kernel extreme learning machine for hyperspectral image classification—termed DW-KELM—is described in detail. The joint spatial–spectral information is employed to investigate the performance of the dual-weighted kernel ELM for hyperspectral imagery classification. Figure 1 shows the procedure of the spatial–spectral dual-weighted kernel ELM-based HSI classification.
For the classification task, principal component analysis (PCA) is applied as a feature-extraction pre-processing step. The PCs that contain 99% of the information are preserved, and the guided filter is applied to these PCs to obtain a group of spatial features.
Given a pixel $x_i$, which is a sample consisting of the spectral characteristics across a continuous range of spectral bands, we denote its spectral and spatial features as $x_i^w$ and $x_i^s$, respectively. The spectral feature vector $x_i^w$ is the original $x_i$, consisting of the spectral reflectance values across all bands. The spatial feature vector $x_i^s$ is extracted by multiple guided image filtering operations. As the first PC contains most of the useful information and best preserves the edge information, it is used as the guidance image; the remaining PCs serve as input images for guided image filtering. In this way, groups of spatial features are obtained.
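A sketch of this pre-processing step (our illustration, assuming scikit-learn; passing a fraction as n_components keeps the PCs explaining 99% of the variance):

```python
import numpy as np
from sklearn.decomposition import PCA

def extract_pcs(cube, variance=0.99):
    """PCA pre-processing: `cube` is an H x W x B hyperspectral image; returns
    an H x W x n_pcs stack holding 99% of the variance."""
    H, W, B = cube.shape
    X = cube.reshape(-1, B).astype(float)
    pca = PCA(n_components=variance, svd_solver='full')
    return pca.fit_transform(X).reshape(H, W, -1)

# PC1 then serves as the guidance image, and each remaining PC is filtered with
# the guided_filter sketched in Section 2.2 to build the spatial feature stack.
```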
Exploiting information from both the spatial and spectral domains, the kernel method is commonly used to perform spatial–spectral classification: the spatial and spectral features are used to compute spatial and spectral kernels, which are then combined into a composite kernel.
Once the spatial and spectral features $x_i^s$ and $x_i^w$ are constructed, we can compute the spatial kernel $K_s$ and spectral kernel $K_w$, as follows:
$$k_s(x_i, x_j) = \exp\left(-\frac{\|x_i^s - x_j^s\|^2}{2\sigma_s^2}\right), \qquad (24)$$

$$k_w(x_i, x_j) = \exp\left(-\frac{\|x_i^w - x_j^w\|^2}{2\sigma_w^2}\right). \qquad (25)$$
Here, the radial basis function (RBF) kernel is used; $\sigma_s$ and $\sigma_w$ are the widths of the respective RBF kernels. The composite kernel is represented as a weighted kernel summation:
$$K = \mu K_s + (1 - \mu) K_w. \qquad (26)$$
The weighted-summation composite kernel is thus obtained from the spatial–spectral kernels in Equations (24)–(26). The samples are then reweighted using the weight matrix $W$, in order to strengthen the impact of the minority class samples. Following this, the dual-weighted kernel ELM model solves:
$$f(x) = h(x)\beta = \begin{pmatrix} k(x, x_1) \\ \vdots \\ k(x, x_N) \end{pmatrix}^T \left(\frac{I}{C} + WK\right)^{-1} WY, \qquad (27)$$
where the weight matrix $W$ is the diagonal matrix over the spatial–spectral training samples given by weighting scheme 2 [41]:
$$W = \mathrm{diag}\{W_{ii}\}, \quad i = 1, \ldots, N, \qquad (28)$$

$$W_{ii} = \begin{cases} \dfrac{0.618}{t_k}, & t_k > t_{avg} \\[4pt] \dfrac{1}{t_k}, & t_k \le t_{avg}, \end{cases} \qquad (29)$$
where $t_k$ is the total number of samples belonging to the $k$th class. For minority classes, $W_{ii}$ is set to $1/t_k$, the inverse of the class size; for majority classes, the golden ratio 0.618 is applied.
During the prediction phase, after the final outputs are obtained, each test sample is assigned to the class with the highest value in $f(x) = [f_1(x), \ldots, f_m(x)]$, according to the index:

$$\mathrm{label}(x) = \arg\max_{q \in \{1, \ldots, m\}} f_q(x). \qquad (30)$$
Algorithm (Spatial–spectral dual-weighted kernel ELM for HSI classification)
Input: HSI data set, $r$, $\varepsilon$, $\mu$, $L$
Output: $\mathrm{label}(x) = \arg\max_{q} f_q(x)$, $q = 1, \ldots, m$
1. Spectral information is directly extracted from the HSI data set.
2. PCA is performed and the PCs are chosen according to the amount of information they contain; afterwards, the spatial information is extracted by guided image filtering, according to Equations (20)–(23).
3. The weighted kernel summation is formed from the spectral and spatial information, according to Equations (24)–(26).
4. The weight matrix W is acquired, according to Equations (28) and (29).
5. Initialization of the weighted kernel extreme learning machine.
6. Calculation of β with Equations (16) and (17).
7. Calculation of the predicted output with Equation (27).
8. Sample $x_i$ is assigned to the class with the highest value, according to Equation (30).
End procedure
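Putting the steps together, a condensed end-to-end sketch of DW-KELM (our illustration, reusing the hypothetical rbf_kernel and class_weights helpers sketched earlier):

```python
import numpy as np

def dw_kelm(Xtr_w, Xtr_s, ytr, Xte_w, Xte_s, mu=0.95, sigma_w=1.0,
            sigma_s=1.0, C=1e3):
    """Steps 3-8: composite kernel K = mu*Ks + (1-mu)*Kw (Equation (26)),
    class-weight matrix W (Equations (28)-(29)), then Equation (27)."""
    classes = np.unique(ytr)
    Y = (ytr[:, None] == classes[None, :]).astype(float)   # one-hot targets
    K = (mu * rbf_kernel(Xtr_s, Xtr_s, sigma_s)
         + (1.0 - mu) * rbf_kernel(Xtr_w, Xtr_w, sigma_w))
    W = class_weights(ytr)                                 # scheme 2 weights
    alpha = np.linalg.solve(np.eye(len(ytr)) / C + W @ K, W @ Y)
    K_te = (mu * rbf_kernel(Xte_s, Xtr_s, sigma_s)
            + (1.0 - mu) * rbf_kernel(Xte_w, Xtr_w, sigma_w))
    return classes[np.argmax(K_te @ alpha, axis=1)]        # Equation (30)
```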

3. Experimental Results and Analysis

3.1. Hyperspectral Image Data Sets

The performance of the proposed approach was evaluated using three widely used data sets; namely, Indian Pines, the University of Pavia, and Salinas. The three data sets are publicly available hyperspectral data sets.

3.1.1. Indian Pines

The Indian Pines data set was acquired with the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor in 1992. The image scene contains 145 × 145 pixels and 220 spectral bands covering a spectral range from 0.4 to 2.5 μm, of which 20 channels were discarded due to atmospheric effects. The spatial resolution of the data is 20 m per pixel. The scene contains two-thirds agricultural land and one-third forest or other natural perennial vegetation. Some of the crops present are in early stages of growth, with less than 5% coverage. There are 16 classes and 10,249 labeled samples in the data set in total. The RGB composite image and ground-truth map from the data set are shown in Figure 2.

3.1.2. Pavia University

The Pavia University data set was acquired in 2001 using the Reflective Optics System Imaging Spectrometer (ROSIS) instrument over the urban area surrounding the University of Pavia, Italy. The image scene has a size of 610 × 610 pixels; as some of the samples contain no information, these parts were discarded, leaving 610 × 340 pixels in our experiment. The spatial resolution is 1.3 m per pixel. The ROSIS-03 sensor captures 115 spectral bands ranging from 0.43 to 0.86 μm; after removing 12 noisy and water-absorption bands, 103 bands were retained. The data contain nine ground-truth classes: asphalt, meadows, gravel, trees, metal sheets, bare soil, bitumen, bricks, and shadows, with a total of 42,776 labeled samples. The RGB composite image and ground-truth map from the data set are shown in Figure 3.

3.1.3. Salinas

The Salinas data set was acquired using the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor over Salinas Valley, California, USA. It contains 224 bands and 512 × 217 pixels with 3.7 m spatial resolution per pixel. The data contain 16 ground-truth classes, and 12 noisy and water-absorption bands were removed in the experiment. An image of Salinas is shown in Figure 4.

3.2. Parameter Settings

The classification performance of the different algorithms was assessed on the testing set using the overall accuracy (OA), which is the number of correctly classified testing samples divided by the number of total testing samples; as well as the average accuracy (AA), which represents the average of the classification accuracies for the individual classes; and the kappa (κ) coefficient, which measures the accuracy of classification agreement. The experiments were conducted using MATLAB R2016b on a computer with a 2.8 GHz dual core and 16 GB RAM.
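For reference, these three measures can be computed from the confusion matrix as follows (our sketch, not the evaluation code used in the experiments):

```python
import numpy as np

def oa_aa_kappa(y_true, y_pred):
    """Overall accuracy, average accuracy, and Cohen's kappa."""
    classes = np.unique(y_true)
    cm = np.array([[np.sum((y_true == a) & (y_pred == b)) for b in classes]
                   for a in classes], dtype=float)       # confusion matrix
    n = cm.sum()
    oa = np.trace(cm) / n                                # overall accuracy
    aa = np.mean(np.diag(cm) / cm.sum(axis=1))           # mean per-class recall
    pe = np.sum(cm.sum(axis=0) * cm.sum(axis=1)) / n**2  # chance agreement
    return oa, aa, (oa - pe) / (1.0 - pe)                # kappa
```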
In the pre-processing stage, the principal components (PCs) containing more than 99% of the information were chosen; PC1 was used as the guidance image, the other PCs were used as input images, and the window step was set to 2.
For the kernel methods, the combination coefficient $\mu$ of the composite kernel ELM was set to 0.95, based on our experience. For all kernel-based algorithms, the RBF kernel was used. The parameter $\sigma$ varied in the range $\{2^{-4}, 2^{-3}, \ldots, 2^{4}\}$ and $C$ ranged from $10^{0}$ to $10^{5}$. The number of hidden nodes for the Indian Pines data set was 500, while those for the University of Pavia and Salinas data sets were 1250 and 650, respectively.
In the general ELM method, the sigmoid function was used, and the hidden layer parameters $(a_i, b_i)_{i=1}^{L}$ were randomly generated from the uniform distribution on $[-1, 1]$.

3.3. Accuracy of Classification and Analysis

The total number of pixels of Indian Pines available in the reference data was 10,366; however, some classes had very few labeled samples. To evaluate the performance of the different algorithms in this challenging case, we randomly chose 10% of the labeled samples per class for training; the remaining labeled samples were used for testing. At the same time, for comparison with traditional methods, we also chose 5, 10, 15, 20, 25, and 30 samples per class as training samples, in order to evaluate the effects of the different methods.
The proposed method was then compared with seven benchmark algorithms; namely, ELM, kernel ELM (KELM), weighted KELM (WKELM), KELM combined with guided-filtering spatial features (SS-KELM), KELM-CK (extreme learning machine with composite kernels) [21], ASS-H-DELM (average spectral–spatial hierarchical extreme learning machine) [28], and HCKBoost (hybridized composite kernel boosting with extreme learning machines) [27].

3.3.1. Results on the Indian Pines Data Set

The accuracies of the ELM, KELM, WKELM, SS-KELM, KELM-CK, ASS-H-DELM, and HCKBoost methods, together with the proposed DW-KELM, are provided in Table 1.
From Table 1, it can be observed that the ELM method required only a few seconds for the hyperspectral classification task. At the same time, ELM provided the worst results, especially for the classes with limited training samples. The KELM method alleviated this to some extent, but not significantly; this demonstrates that the kernel used in kernel ELM is more powerful than randomly generated features. For the DW-KELM algorithm, when additional spatial information was available, the dual-weighted framework improved the performance of the classifier and the accuracy increased dramatically. This can be clearly seen for classes 1, 7, and 9: these three classes contain very similar spectral information, which leads to poor results for purely spectral classifiers. Classes 2, 3, and 4 are corn subclasses with very similar spectral curves; however, the spatial information helped to discriminate the subtle differences, so DW-KELM achieved good classification accuracies on corn (more than 95%) and on soybeans (more than 96%). Comparing the time cost among these methods, the ELM method consumed the least time, for three reasons: only spectral information was used, the initial parameters were random, and the network has an analytic solution. With the same parameter settings, the form of the solution determines the computation time. Using spatial features as an effective supplement is very common; from the classification results for classes 1, 7, and 9 with the SS-KELM, KELM-CK, ASS-H-DELM, and HCKBoost algorithms, we can see improvements from both spatial feature use and multiple kernels. However, despite the considerable improvement of these methods, the proposed dual-weighted kernel provided more satisfactory results, as the minority class samples received more consideration.
Further experiments on performance with different numbers of labeled training samples per class were conducted using the three previously introduced data sets. The training set was formed by randomly choosing from 5 to 30 samples per class, with a step of 5; the remaining samples were used for testing. As shown in Table 2, the OA, AA, and κ values improved greatly as the number of training samples increased. When only spectral information was used, KELM achieved better results than ELM, especially under extremely small-sample conditions. Among the spatial–spectral methods, the proposed DW-KELM showed a significant improvement over the SS-KELM, KELM-CK, ASS-H-DELM, and HCKBoost algorithms. This indicates that the proposed DW-KELM method is a powerful algorithm for this task, especially in enhancing the performance on minority class samples. With 5 training samples per class, DW-KELM improved the OA by 4.36%, AA by 4.09%, and κ by 3.50%, while with 30 samples per class it improved the OA by 3.29%, AA by 2.8%, and κ by 3.03%, compared with HCKBoost on the Indian Pines data set.
The classification maps for the Indian Pines data set are shown in Figure 5. It can be clearly seen that the classification maps of DW-KELM are more coherent in the homogeneous regions than those of the ELM, KELM, WKELM, SS-KELM, KELM-CK, ASS-H-DELM, and HCKBoost algorithms. In addition, the spatial–spectral methods provided more consistent classification results with less noise than the spectral-only methods. In particular, in the application of dual-weighted KELM, subtle features and minority samples are both considered; this improvement typically arises for classes with similar spectral signatures.

3.3.2. Results on the University of Pavia Image Data Set

The classification results for the University of Pavia image are shown in Figure 6, and the accuracy measures are given in Table 3, where the first column lists the samples chosen in the experiment. The total number of pixels available in the reference data was 414,815; accordingly, a training set of 10% of the samples per class was used. As Table 3 shows, the accuracy measures of the proposed ELM-based technique provided equally competitive and even better classification results, when compared to the traditional approaches. Figure 6a shows the map produced by ELM using only spectral information.
From Table 3, we can clearly see that, when spatial information and the dual-weighted KELM were used, the classification accuracy increased dramatically; for instance, from 84.10% to 97.25% for bare soil and from 78.93% to 99.90% for bitumen. There are two main reasons for this: first, the weight matrix strengthened the importance of minority class samples, which may otherwise be overwhelmed by the many majority class samples; second, the spatial information helped to discriminate samples with similar spectral curves.
As the number of training samples increased, the OA, AA, and κ values improved, as can be clearly seen from Table 4. When only spectral information was used, ELM provided worse results than KELM. Among the joint spatial–spectral classification methods, DW-KELM provided the best results. With 30 training samples per class, DW-KELM improved the OA by 3.95%, AA by 2.99%, and κ by 2.53% on the University of Pavia image, compared with the HCKBoost algorithm. This suggests that the proposed dual-weighted KELM is suitable not only for data with an imbalanced distribution, but also for balanced data.

3.3.3. Results on the Salinas Image

The classification results of the different methods for the Salinas image are shown in Figure 7. Similar settings to those for the aforementioned images were used. It can be clearly seen that the classification maps of DW-KELM are more spatially coherent in the large homogeneous regions than those of the other methods, and the results show little noise. The increasing trends of OA, AA, and κ were the same as for the Indian Pines and Pavia University images. Compared with the other ELM- and KELM-based approaches, DW-KELM improved the OA by 3.72%, AA by 2.32%, and κ by 2.64% relative to HCKBoost on the Salinas image.
From Table 5 and Table 6, we can observe the trend of accuracy: in all of the experiments, the proposed DW-KELM method provided more accurate results than the other (ELM- or KELM-based) methods. This indicates that the weight matrix and the weighted kernel summation are effective for identifying the subtle differences among similar objects.

3.4. Ablation Study

To evaluate the contribution of each component of our method, ablation experiments were also carried out, termed WKELM and SS-KELM, respectively. From Table 1 and Table 2, we can clearly see that, without the multiple spatial features, the accuracy is not as high as that of the methods using spatial features. At the same time, when the extra weight is not assigned to each sample, the accuracy also drops. In particular, for classes with extremely few training samples, such as classes 7 and 9 in the Indian Pines image, the weighting has a large effect. The same trend is observed for the Pavia University and Salinas images, as can be seen in Tables 3 and 4 and Tables 5 and 6, respectively.

3.5. G-Mean as a Supplementary Measure for Evaluation

Overall accuracy has been widely used to evaluate the performance of classifiers. However, when the samples are imbalanced, it may not provide adequate information about the generalization ability of a classifier. For instance, consider a data set with 10 samples belonging to a negative class and 90 samples belonging to a positive class: if all 10 negative samples and 10 of the positive samples are misclassified, the overall accuracy is still 80%, but the G-mean is zero. Thus, we used the G-mean [36] as a supplementary measure to evaluate the performance of the proposed dual-weighted method:
$$G\text{-}mean = \left(\prod_{r=1}^{s} \mathrm{Recall}_r\right)^{\frac{1}{s}}, \qquad (31)$$

$$\mathrm{Recall}_r = \frac{TP}{TP + FN}, \qquad (32)$$
where $TP$ represents the number of correctly classified positive samples and $FN$ is the number of incorrectly classified positive samples, computed for each of the $s$ classes.
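A small sketch of this measure (our illustration):

```python
import numpy as np

def g_mean(y_true, y_pred):
    """Equations (31)-(32): geometric mean of per-class recalls; a single
    class with zero recall drives the whole score to zero."""
    classes = np.unique(y_true)
    recalls = np.array([np.mean(y_pred[y_true == c] == c) for c in classes])
    return float(recalls.prod() ** (1.0 / len(classes)))
```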
From the box plots in Figure 8, the proposed DW-KELM obtained a more concentrated G-mean than the other methods, especially on the Indian Pines image, owing to its consideration of the importance of minority samples; its interquartile range (IQR) was also smaller than those of the other methods.

4. Conclusions

In this paper, a dual-weighted kernel extreme learning machine was proposed to tackle the hyperspectral imagery classification task. It is particularly effective with small-sized classes, whose cumulative errors are ignored in traditional ELM algorithms. In particular, the weight matrix W plays an important role in the proposed method: larger weights are assigned to samples from the minority classes, emphasizing their importance. In addition, as useful supplementary features, the spatial features are fully exploited through the weighted kernel summation. This spatial information contains rich structural features, which help in distinguishing subtle differences between similar classes. The experimental results demonstrated that the proposed DW-KELM method is more accurate than the considered benchmark methods for the classification of hyperspectral imagery.

Author Contributions

X.Y. and Y.F. conceived and designed the experiments; X.Y. implemented the proposed method and analyzed the results of the experiment; Y.G. and Y.J. validated the experiment; S.M. and Y.F. analyzed the results and significantly revised the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China under grant number 61420106007.

Data Availability Statement

The data presented in this study are openly available at http://www.ehu.eus/ccwintco/index.php/Hyperspectral_Remote_Sensing_Scenes.

Acknowledgments

The authors thank the Editor-in-Chief, the Associate Editor, and the anonymous reviewers for their suggestions and insightful comments on this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Camps-Valls, G.; Tuia, D.; Bruzzone, L.; Benediktsson, J.A. Advances in Hyperspectral Image Classification: Earth Monitoring with Statistical Learning Methods. IEEE Signal Process. Mag. 2014, 31, 45–54.
2. Gualtieri, J.A.; Cromp, R.F. Support vector machines for hyperspectral remote sensing classification. Proc. SPIE Workshop Adv. Comput. Assist. Recognit. 1998, 3584, 221–232.
3. Camps-Valls, G.; Gómez-Chova, L.; Muñoz-Marí, J.; Vila-Francés, J.; Calpe-Maravilla, J. Composite kernels for hyperspectral image classification. IEEE Geosci. Remote Sens. Lett. 2006, 3, 93–97.
4. Fauvel, M.; Chanussot, J.; Benediktsson, J.A. Evaluation of kernels for multiclass classification of hyperspectral remote sensing data. In Proceedings of the 2006 IEEE International Conference on Acoustics, Speech and Signal Processing, Toulouse, France, 14–19 May 2006; p. II.
5. Fauvel, M.; Arbelot, B.; Benediktsson, J.A.; Sheeren, D.; Chanussot, J. Detection of Hedges in a Rural Landscape Using a Local Orientation Feature: From Linear Opening to Path Opening. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2013, 6, 15–26.
6. Valero, S.; Chanussot, J.; Benediktsson, J.A.; Talbot, H.; Waske, B. Advanced directional mathematical morphology for the detection of the road network in very high resolution remote sensing images. Pattern Recognit. Lett. 2010, 31, 1120–1127.
7. Fang, L.; Li, S.; Duan, W.; Ren, J.; Benediktsson, J.A. Classification of Hyperspectral Images by Exploiting Spectral-Spatial Information of Superpixel via Multiple Kernels. IEEE Trans. Geosci. Remote Sens. 2015, 53, 6663–6674.
8. Wu, H.; Prasad, S. Semi-Supervised Deep Learning Using Pseudo Labels for Hyperspectral Image Classification. IEEE Trans. Image Process. 2018, 27, 1259–1270.
9. Pan, B.; Xu, X.; Shi, Z.; Zhang, N.; Luo, H.; Lan, X. DSSNet: A Simple Dilated Semantic Segmentation Network for Hyperspectral Imagery Classification. IEEE Geosci. Remote Sens. Lett. 2020, 17, 1968–1972.
10. Pan, B.; Shi, Z.; Xu, X. R-VCANet: A New Deep-Learning-Based Hyperspectral Image Classification Method. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 1975–1986.
11. Mei, S.; Ji, J.; Geng, Y.; Zhang, Z.; Li, X.; Du, Q. Unsupervised Spatial–Spectral Feature Learning by 3D Convolutional Autoencoder for Hyperspectral Classification. IEEE Trans. Geosci. Remote Sens. 2019, 57, 6808–6820.
12. Peng, J.; Sun, W.; Du, Q. Self-Paced Joint Sparse Representation for the Classification of Hyperspectral Images. IEEE Trans. Geosci. Remote Sens. 2019, 57, 1183–1194.
13. Peng, J.; Li, L.; Tang, Y.Y. Maximum Likelihood Estimation-Based Joint Sparse Representation for the Classification of Hyperspectral Remote Sensing Images. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 1790–1802.
14. Yuan, Y.; Lin, J.; Wang, Q. Hyperspectral Image Classification via Multitask Joint Sparse Representation and Stepwise MRF Optimization. IEEE Trans. Cybern. 2016, 46, 2966–2977.
15. Gu, Y.; Liu, T.; Jia, X.; Benediktsson, J.A.; Chanussot, J. Nonlinear Multiple Kernel Learning with Multiple-Structure-Element Extended Morphological Profiles for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2016, 54, 3235–3247.
16. Li, J.; Marpu, P.R.; Plaza, A.; Bioucas-Dias, J.M.; Benediktsson, J.A. Generalized Composite Kernel Framework for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2013, 51, 4816–4829.
17. Li, W.; Prasad, S.; Fowler, J.E. Decision Fusion in Kernel-Induced Spaces for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2014, 52, 3399–3411.
18. Fang, L.; He, N.; Li, S.; Ghamisi, P.; Benediktsson, J.A. Extinction Profiles Fusion for Hyperspectral Images Classification. IEEE Trans. Geosci. Remote Sens. 2017, 56, 1803–1815.
19. Huang, G.B.; Zhou, H.; Ding, X.; Zhang, R. Extreme Learning Machine for Regression and Multiclass Classification. IEEE Trans. Syst. Man Cybern. Part B 2012, 42, 513–529.
20. Pal, M.; Maxwell, A.E.; Warner, T.A. Kernel-based extreme learning machine for remote-sensing image classification. Remote Sens. Lett. 2013, 4, 853–862.
21. Zhou, Y.; Peng, J.; Chen, C.L.P. Extreme Learning Machine with Composite Kernels for Hyperspectral Image Classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 2351–2360.
22. Chen, C.; Li, W.; Su, H.; Liu, K. Spectral-Spatial Classification of Hyperspectral Image Based on Kernel Extreme Learning Machine. Remote Sens. 2014, 6, 5795–5814.
23. Argüello, F.; Heras, D.B. ELM-based spectral–spatial classification of hyperspectral images using extended morphological profiles and composite feature mappings. Int. J. Remote Sens. 2015, 36, 645–664.
24. Jiang, M.; Cao, F.; Lu, Y. Extreme Learning Machine with Enhanced Composite Feature for Spectral-Spatial Hyperspectral Image Classification. IEEE Access 2018, 6, 22645–22654.
25. Cao, F.; Yang, Z.; Ren, J.; Ling, B.W.-K.; Zhao, H.; Sun, M.; Benediktsson, J.A. Sparse Representation-Based Augmented Multinomial Logistic Extreme Learning Machine with Weighted Composite Features for Spectral–Spatial Classification of Hyperspectral Images. IEEE Trans. Geosci. Remote Sens. 2018, 56, 6263–6279.
26. Li, J.; Xi, B.; Du, Q.; Song, R.; Li, Y.; Ren, G. Deep Kernel Extreme-Learning Machine for the Spectral–Spatial Classification of Hyperspectral Imagery. Remote Sens. 2018, 10, 2036.
27. Ergul, U.; Bilgin, G. HCKBoost: Hybridized composite kernel boosting with extreme learning machines for hyperspectral image classification. Neurocomputing 2019, 334, 100–113.
28. Le, B.T.; Ha, T.T.L. Hyperspectral image classification based on average spectral-spatial features and improved hierarchical-ELM. Infrared Phys. Technol. 2019, 102, 103013.
29. Lv, F.; Han, M. Hyperspectral image classification based on multiple reduced kernel extreme learning machine. Int. J. Mach. Learn. Cybern. 2019, 10, 3397–3405.
30. Marpu, P.R.; Pedergnana, M.; Mura, M.D.; Benediktsson, J.A.; Bruzzone, L. Automatic Generation of Standard Deviation Attribute Profiles for Spectral–Spatial Classification of Remote Sensing Data. IEEE Geosci. Remote Sens. Lett. 2013, 10, 293–297.
31. Su, H.; Cai, Y.; Du, Q. Firefly-Algorithm-Inspired Framework with Band Selection and Extreme Learning Machine for Hyperspectral Image Classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 309–320.
32. Su, H.; Tian, S.; Cai, Y.; Sheng, Y.; Chen, C.; Najafian, M. Optimized extreme learning machine for urban land cover classification using hyperspectral imagery. Front. Earth Sci. 2017, 11, 765–773.
33. Li, J.; Du, Q.; Li, W.; Li, Y. Optimizing extreme learning machine for hyperspectral image classification. J. Appl. Remote Sens. 2015, 9, 97296.
34. Lv, F.; Han, M.; Qiu, T. Remote Sensing Image Classification Based on Ensemble Extreme Learning Machine with Stacked Autoencoder. IEEE Access 2017, 5, 9021–9031.
35. He, K.; Sun, J.; Tang, X. Guided Image Filtering. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1397–1409.
36. Pan, B.; Shi, Z.; Xu, X. Hierarchical Guidance Filtering-Based Ensemble Classification for Hyperspectral Images. IEEE Trans. Geosci. Remote Sens. 2017, 55, 4177–4189.
37. Guo, Y.; Yin, X.; Zhao, X.; Yang, D.; Bai, Y. Hyperspectral image classification with SVM and guided filter. EURASIP J. Wirel. Commun. Netw. 2019, 2019, 56.
38. Wang, Z.; Hu, H.; Zhang, L.; Xue, J.-H. Discriminatively guided filtering (DGF) for hyperspectral image classification. Neurocomputing 2018, 275, 1981–1987.
39. Li, Z.; Zheng, J.; Zhu, Z.; Yao, W.; Wu, S. Weighted Guided Image Filtering. IEEE Trans. Image Process. 2015, 24, 120–129.
40. Fletcher, R. Practical Methods of Optimization: Constrained Optimization; Wiley: New York, NY, USA, 1981; Volume 2, pp. 143–144.
41. Zong, W.; Huang, G.-B.; Chen, Y. Weighted extreme learning machine for imbalance learning. Neurocomputing 2013, 101, 229–242.
42. Raghuwanshi, B.S.; Shukla, S. Class imbalance learning using UnderBagging based kernelized extreme learning machine. Neurocomputing 2019, 329, 172–187.
Figure 1. Flowchart of the proposed dual-weighted kernel extreme learning method.
Figure 2. Ground truth of the Indian Pines data set.
Figure 3. Ground truth of the Pavia University data set.
Figure 4. Ground truth map of the Salinas data set.
Figure 5. Classification map of the Indian Pines data set with 30 samples: (a) ELM, (b) KELM, (c) WKELM, (d) SS-KELM, (e) KELM-CK, (f) ASS-H-DELM, (g) HCKBoost, and (h) DW-KELM.
Figure 6. Classification map of the University of Pavia: (a) ELM, (b) KELM, (c) WKELM, (d) SS-KELM, (e) KELM-CK, (f) ASS-H-DELM, (g) HCKBoost, and (h) DW-KELM.
Figure 7. Classification map of the Salinas data set with 30 samples: (a) ELM, (b) KELM, (c) WKELM, (d) SS-KELM, (e) KELM-CK, (f) ASS-H-DELM, (g) HCKBoost, and (h) DW-KELM.
Figure 8. G-mean (%) on the (a) Indian Pines, (b) Pavia University, and (c) Salinas images.
Table 1. Overall accuracy (OA), average accuracy (AA), and kappa (%) obtained by different approaches on the Indian Pines data set.

| Class No. | Train/Test | ELM | KELM | WKELM | SS-KELM | KELM-CK | ASS-H-DELM | HCKBoost | DW-KELM |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 5/41 | 43.14 | 81.65 | 86.75 | 84.55 | 87.92 | 86.50 | 87.23 | 91.42 |
| 2 | 143/1285 | 70.51 | 73.03 | 74.19 | 90.23 | 90.65 | 89.80 | 89.88 | 88.98 |
| 3 | 83/747 | 45.61 | 73.25 | 76.70 | 93.86 | 95.10 | 95.12 | 94.16 | 96.29 |
| 4 | 24/213 | 42.24 | 62.16 | 68.75 | 66.38 | 92.86 | 92.66 | 93.52 | 94.36 |
| 5 | 50/433 | 85.96 | 88.53 | 89.97 | 94.15 | 94.63 | 95.35 | 94.27 | 97.14 |
| 6 | 75/655 | 92.16 | 92.61 | 93.58 | 94.70 | 95.85 | 95.40 | 99.65 | 99.62 |
| 7 | 3/25 | 20.26 | 66.58 | 72.13 | 69.22 | 97.05 | 94.68 | 95.10 | 96.96 |
| 8 | 49/429 | 96.64 | 96.08 | 97.08 | 97.35 | 99.50 | 99.45 | 99.43 | 99.98 |
| 9 | 2/18 | 23.30 | 63.21 | 73.71 | 68.35 | 99.45 | 99.13 | 100 | 99.85 |
| 10 | 97/875 | 50.38 | 79.91 | 83.56 | 94.27 | 90.01 | 95.60 | 96.31 | 93.30 |
| 11 | 247/2208 | 79.82 | 86.35 | 88.71 | 95.12 | 95.25 | 94.48 | 96.55 | 98.34 |
| 12 | 62/531 | 45.18 | 82.24 | 84.62 | 85.35 | 87.62 | 88.26 | 89.34 | 95.61 |
| 13 | 22/183 | 97.26 | 99.01 | 99.45 | 99.13 | 99.03 | 99.10 | 99.25 | 99.38 |
| 14 | 130/1135 | 97.27 | 97.44 | 97.75 | 98.35 | 99.67 | 99.85 | 99.30 | 99.80 |
| 15 | 38/348 | 40.17 | 72.58 | 76.29 | 75.20 | 91.28 | 92.68 | 93.29 | 96.75 |
| 16 | 10/83 | 46.20 | 86.23 | 87.13 | 87.25 | 86.53 | 88.59 | 86.36 | 94.69 |
| OA | | 63.52 | 84.45 | 88.25 | 94.55 | 94.28 | 94.30 | 94.55 | 98.25 |
| std | | 1.05 | 1.29 | 0.95 | 0.95 | 0.69 | 0.88 | 0.95 | 0.95 |
| AA | | 56.06 | 81.57 | 84.25 | 90.28 | 93.18 | 93.16 | 93.27 | 98.27 |
| std | | 1.83 | 2.26 | 1.41 | 1.56 | 1.72 | 1.16 | 0.63 | 0.99 |
| κ | | 65.37 | 80.73 | 84.52 | 84.52 | 93.16 | 93.22 | 92.19 | 94.25 |
| std | | 1.25 | 2.44 | 2.30 | 2.30 | 0.72 | 0.75 | 1.16 | 0.86 |
| Time (s) | | 0.35 | 3.65 | 5.41 | 47.56 | 43.67 | 135.25 | 322.45 | 96.26 |
| std | | 0.03 | 0.38 | 0.85 | 1.08 | 0.68 | 0.65 | 1.05 | 0.58 |
Table 2. OA, AA, and kappa (%) obtained with different numbers of labeled training samples per class on the Indian Pines data set.

| Training Numbers | Assessment | ELM | KELM | WKELM | SS-KELM | KELM-CK | ASS-H-DELM | HCKBoost | DW-KELM |
|---|---|---|---|---|---|---|---|---|---|
| 5 | OA | 42.61 | 48.33 | 51.25 | 65.23 | 64.80 | 65.42 | 66.14 | 70.50 |
| | std | 1.23 | 2.88 | 2.37 | 2.55 | 0.85 | 1.28 | 0.55 | 3.22 |
| | AA | 53.01 | 61.25 | 64.75 | 74.58 | 76.25 | 77.80 | 77.45 | 81.91 |
| | std | 2.17 | 1.87 | 1.95 | 1.29 | 1.46 | 0.66 | 1.17 | 1.83 |
| | κ | 36.45 | 42.67 | 44.56 | 62.06 | 58.95 | 64.35 | 64.62 | 68.12 |
| | std | 2.93 | 3.12 | 2.23 | 2.12 | 0.67 | 1.25 | 1.25 | 3.35 |
| 10 | OA | 55.22 | 62.22 | 65.26 | 73.52 | 74.85 | 76.95 | 77.25 | 80.84 |
| | std | 2.15 | 2.17 | 1.89 | 2.27 | 1.45 | 1.23 | 0.87 | 4.01 |
| | AA | 68.58 | 73.50 | 75.53 | 80.35 | 79.12 | 90.23 | 91.13 | 92.26 |
| | std | 2.85 | 1.98 | 1.66 | 1.35 | 2.26 | 0.68 | 1.65 | 2.99 |
| | κ | 50.61 | 58.27 | 62.75 | 65.16 | 72.35 | 70.15 | 73.52 | 78.53 |
| | std | 2.21 | 2.31 | 1.99 | 2.54 | 0.75 | 0.95 | 0.95 | 4.59 |
| 15 | OA | 62.59 | 68.12 | 71.19 | 73.56 | 79.56 | 82.12 | 83.14 | 87.28 |
| | std | 1.55 | 2.21 | 2.09 | 2.11 | 1.46 | 1.55 | 1.54 | 2.67 |
| | AA | 74.62 | 78.29 | 78.29 | 83.59 | 88.23 | 89.52 | 90.25 | 94.26 |
| | std | 1.75 | 1.25 | 1.25 | 1.86 | 1.68 | 0.68 | 1.44 | 1.35 |
| | κ | 60.45 | 63.82 | 65.28 | 76.18 | 79.12 | 81.65 | 82.06 | 84.72 |
| | std | 1.22 | 2.23 | 1.85 | 2.53 | 1.88 | 1.56 | 0.92 | 2.66 |
| 20 | OA | 68.10 | 69.26 | 72.35 | 81.53 | 87.85 | 84.56 | 85.16 | 89.91 |
| | std | 1.05 | 1.31 | 1.56 | 1.68 | 1.72 | 0.68 | 1.18 | 1.23 |
| | AA | 78.29 | 80.20 | 83.69 | 88.53 | 94.56 | 93.13 | 94.25 | 95.59 |
| | std | 1.78 | 2.44 | 1.49 | 2.32 | 0.85 | 1.35 | 0.79 | 2.28 |
| | κ | 64.87 | 67.65 | 70.65 | 83.12 | 86.89 | 85.65 | 86.16 | 89.86 |
| | std | 0.98 | 3.10 | 2.69 | 2.29 | 1.27 | 0.58 | 1.59 | 2.55 |
| 25 | OA | 69.42 | 71.21 | 73.25 | 83.23 | 87.85 | 88.45 | 89.26 | 93.06 |
| | std | 3.31 | 2.21 | 2.47 | 1.35 | 1.72 | 0.65 | 1.75 | 2.28 |
| | AA | 79.35 | 82.20 | 84.55 | 89.55 | 94.56 | 94.18 | 95.25 | 97.23 |
| | std | 1.52 | 2.41 | 1.85 | 1.56 | 0.85 | 0.65 | 1.97 | 1.12 |
| | κ | 65.29 | 67.65 | 69.96 | 85.02 | 86.89 | 88.12 | 89.92 | 92.51 |
| | std | 1.25 | 3.10 | 2.83 | 2.14 | 1.27 | 0.86 | 0.76 | 2.90 |
| 30 | OA | 71.21 | 72.59 | 75.66 | 88.56 | 93.52 | 94.15 | 94.59 | 97.88 |
| | std | 1.72 | 2.62 | 1.82 | 0.56 | 0.55 | 0.73 | 1.14 | 1.21 |
| | AA | 80.55 | 82.90 | 84.59 | 90.42 | 96.51 | 96.12 | 96.35 | 99.15 |
| | std | 0.88 | 1.82 | 1.16 | 0.73 | 0.38 | 0.75 | 0.66 | 0.96 |
| | κ | 67.37 | 69.82 | 72.59 | 87.05 | 90.68 | 91.65 | 92.39 | 95.42 |
| | std | 2.21 | 1.75 | 1.69 | 1.35 | 0.75 | 0.64 | 1.36 | 0.87 |
Table 3. OA, AA, and kappa (%) obtained by different approaches on the Pavia University data set for different classes.

| Class | Train/Test | ELM | KELM | WKELM | SS-KELM | ASS-H-DELM | CK-KELM | HCKBoost | DW-KELM |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 64/6599 | 72.56 | 77.15 | 80.28 | 91.82 | 92.88 | 92.60 | 95.15 | 100 |
| 2 | 184/18465 | 74.35 | 75.28 | 78.46 | 93.59 | 92.19 | 95.55 | 93.29 | 98.88 |
| 3 | 20/2079 | 66.19 | 67.13 | 78.37 | 86.55 | 86.99 | 86.28 | 90.03 | 94.29 |
| 4 | 28/3026 | 67.45 | 68.27 | 72.18 | 93.65 | 92.59 | 92.09 | 93.50 | 95.47 |
| 5 | 11/1134 | 70.23 | 74.52 | 77.28 | 97.59 | 95.35 | 97.46 | 97.86 | 98.27 |
| 6 | 48/4981 | 79.53 | 82.33 | 84.19 | 94.90 | 94.53 | 95.58 | 94.91 | 98.36 |
| 7 | 11/1319 | 76.58 | 78.65 | 82.34 | 95.29 | 97.28 | 95.36 | 95.28 | 97.85 |
| 8 | 34/3648 | 75.86 | 79.21 | 80.25 | 86.90 | 89.33 | 87.89 | 88.45 | 97.41 |
| 9 | 8/939 | 70.35 | 77.35 | 81.55 | 95.38 | 93.25 | 94.58 | 98.06 | 98.17 |
| OA | | 71.26 | 78.53 | 82.35 | 93.15 | 94.16 | 93.37 | 94.59 | 98.36 |
| std | | 2.38 | 1.81 | 1.28 | 1.62 | 0.95 | 1.28 | 1.32 | 0.68 |
| AA | | 73.66 | 83.76 | 86.57 | 95.16 | 95.51 | 93.53 | 95.26 | 99.22 |
| std | | 1.25 | 1.29 | 1.17 | 1.33 | 0.98 | 1.29 | 1.35 | 0.65 |
| κ | | 70.15 | 74.38 | 75.83 | 90.66 | 93.15 | 92.66 | 94.27 | 96.42 |
| std | | 1.38 | 1.29 | 1.22 | 1.35 | 1.15 | 1.35 | 1.35 | 0.37 |
| Time (s) | | 0.76 | 5.26 | 6.53 | 40.15 | 135.17 | 46.55 | 157.69 | 98.63 |
| std | | 0.07 | 0.85 | 0.96 | 1.42 | 1.55 | 1.74 | 2.03 | 1.98 |
Table 4. Results obtained with different numbers of labeled training samples per class on the University of Pavia data set.

| Training | Evaluation | ELM | KELM | WKELM | SS-KELM | KELM-CK | ASS-H-DELM | HCKBoost | DW-KELM |
|---|---|---|---|---|---|---|---|---|---|
| 5 | OA | 60.06 | 59.22 | 62.36 | 64.5 | 65.42 | 66.55 | 67.39 | 71.85 |
| | std | 3.72 | 5.83 | 3.12 | 1.98 | 4.55 | 3.37 | 4.16 | 6.52 |
| | AA | 66.53 | 69.81 | 71.58 | 71.6 | 73.71 | 74.26 | 74.78 | 75.92 |
| | std | 3.11 | 3.02 | 1.78 | 2.25 | 3.58 | 3.55 | 3.85 | 5.41 |
| | κ | 49.92 | 51.52 | 53.34 | 54.96 | 56.56 | 60.29 | 61.95 | 62.65 |
| | std | 3.85 | 5.52 | 4.19 | 4.55 | 5.35 | 4.46 | 4.29 | 6.86 |
| 10 | OA | 47.25 | 64.17 | 66.38 | 71.25 | 78.46 | 79.62 | 80.89 | 83.29 |
| | std | 2.21 | 2.83 | 2.87 | 2.29 | 6.07 | 3.57 | 3.48 | 4.65 |
| | AA | 59.62 | 74.62 | 76.28 | 82.36 | 85.12 | 85.6 | 85.49 | 85.65 |
| | std | 2.29 | 3.56 | 2.66 | 2.05 | 2.85 | 3.31 | 2.96 | 4.65 |
| | κ | 41.35 | 53.61 | 59.63 | 68.09 | 69.58 | 70.25 | 70.48 | 77.23 |
| | std | 3.55 | 2.01 | 2.36 | 2.7 | 3.85 | 3.54 | 3.09 | 5.85 |
| 15 | OA | 53.71 | 68.74 | 72.56 | 84.55 | 83.5 | 83.89 | 83.92 | 87.81 |
| | std | 3.52 | 1.27 | 1.35 | 1.85 | 6.15 | 4.42 | 3.99 | 2.81 |
| | AA | 64.5 | 78.65 | 81.57 | 83.65 | 88.23 | 88.95 | 89.12 | 91.09 |
| | std | 2.66 | 1.85 | 2.37 | 2.16 | 4.79 | 3.86 | 3.67 | 4.15 |
| | κ | 49.18 | 63.59 | 65.56 | 80.03 | 81.51 | 83.87 | 84.02 | 87.75 |
| | std | 3.35 | 3.02 | 2.08 | 1.45 | 3.65 | 3.28 | 3.47 | 2.29 |
| 20 | OA | 58.02 | 71.56 | 73.71 | 86.95 | 88.66 | 89.93 | 91.05 | 93.31 |
| | std | 1.65 | 1.99 | 1.85 | 1.45 | 2.53 | 2.25 | 2.19 | 1.87 |
| | AA | 70.25 | 80.36 | 83.35 | 92.35 | 90.9 | 91.32 | 91.85 | 93.72 |
| | std | 1.18 | 2.68 | 2.43 | 2.18 | 1.51 | 1.27 | 1.07 | 1.88 |
| | κ | 52.55 | 67.15 | 72.64 | 80.26 | 79.35 | 88.68 | 89.53 | 91.85 |
| | std | 2.28 | 2.35 | 2.31 | 1.53 | 3.6 | 2.75 | 2.38 | 1.39 |
| 25 | OA | 63.16 | 71.51 | 73.68 | 88.92 | 91.12 | 91.56 | 91.89 | 94.67 |
| | std | 2.21 | 3.02 | 2.79 | 1.85 | 1.06 | 0.98 | 0.88 | 0.78 |
| | AA | 75.73 | 81.63 | 83.95 | 91.33 | 92.06 | 92.36 | 92.46 | 94.25 |
| | std | 1.86 | 1.08 | 1.14 | 0.9 | 0.65 | 0.85 | 0.57 | 0.82 |
| | κ | 60.52 | 67.14 | 71.43 | 86.59 | 87.13 | 89.82 | 90.23 | 91.86 |
| | std | 2.26 | 2.25 | 2.52 | 1.12 | 1.87 | 1.24 | 0.97 | 0.99 |
| 30 | OA | 70.06 | 79.16 | 82.36 | 93.22 | 92.82 | 93.19 | 94.2 | 98.15 |
| | std | 2.51 | 1.63 | 1.75 | 1.68 | 1.89 | 0.87 | 0.78 | 0.45 |
| | AA | 72.25 | 85.29 | 87.65 | 95.67 | 94.25 | 94.68 | 96.13 | 99.12 |
| | std | 0.89 | 0.89 | 1.29 | 1.55 | 2.21 | 2.89 | 1.17 | 0.8 |
| | κ | 69.66 | 73.52 | 75.88 | 92.16 | 91.75 | 92.35 | 93.34 | 95.87 |
| | std | 1.2 | 1.35 | 1.4 | 1.96 | 0.97 | 0.89 | 0.93 | 0.56 |
Table 5. OA, AA, and kappa (%) obtained by different approaches on the Salinas data set.

| Class No. | Train/Test | ELM | KELM | WKELM | SS-KELM | KELM-CK | ASS-H-DELM | HCKBoost | DW-KELM |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 20/1989 | 84.32 | 85.62 | 86.25 | 87.80 | 87.55 | 86.35 | 88.26 | 90.35 |
| 2 | 37/3689 | 97.50 | 97.92 | 98.26 | 98.56 | 98.02 | 98.80 | 99.15 | 100 |
| 3 | 20/1956 | 86.15 | 87.50 | 88.53 | 90.23 | 89.10 | 90.02 | 91.23 | 92.83 |
| 4 | 14/1380 | 88.26 | 89.71 | 90.05 | 91.85 | 92.15 | 90.14 | 92.64 | 95.28 |
| 5 | 27/2651 | 76.54 | 77.95 | 79.21 | 79.55 | 78.63 | 77.27 | 78.15 | 81.95 |
| 6 | 40/3919 | 99.62 | 99.70 | 99.80 | 99.91 | 99.63 | 98.38 | 99.26 | 99.80 |
| 7 | 36/3543 | 74.52 | 76.20 | 79.26 | 78.26 | 77.23 | 76.68 | 79.16 | 81.37 |
| 8 | 113/11158 | 98.65 | 99.08 | 99.35 | 99.14 | 99.50 | 99.76 | 99.82 | 99.90 |
| 9 | 62/6141 | 71.52 | 73.84 | 76.38 | 75.94 | 76.49 | 77.25 | 79.98 | 82.31 |
| 10 | 33/3245 | 69.20 | 71.25 | 74.46 | 74.56 | 75.01 | 73.64 | 76.21 | 78.14 |
| 11 | 11/1057 | 79.15 | 81.65 | 84.50 | 85.30 | 86.25 | 84.48 | 85.65 | 87.21 |
| 12 | 19/1908 | 96.51 | 97.18 | 97.45 | 97.24 | 98.42 | 98.15 | 98.64 | 99.01 |
| 13 | 9/907 | 99.16 | 99.43 | 99.45 | 99.31 | 99.01 | 99.17 | 99.33 | 99.52 |
| 14 | 11/1059 | 97.27 | 97.75 | 98.10 | 98.64 | 99.27 | 98.49 | 99.16 | 99.75 |
| 15 | 73/7195 | 98.57 | 98.96 | 99.11 | 99.20 | 99.28 | 99.37 | 100 | 100 |
| 16 | 18/1789 | 98.20 | 98.76 | 98.23 | 98.45 | 98.53 | 98.29 | 98.87 | 99.47 |
| OA | | 86.62 | 88.07 | 90.32 | 92.58 | 94.35 | 93.22 | 96.38 | 98.35 |
| std | | 2.35 | 1.68 | 1.65 | 0.99 | 0.76 | 0.96 | 1.16 | 0.95 |
| AA | | 88.27 | 89.35 | 91.68 | 93.16 | 95.13 | 94.52 | 97.57 | 99.17 |
| std | | 1.95 | 2.24 | 1.99 | 1.98 | 1.46 | 1.36 | 0.89 | 0.99 |
| κ | | 86.16 | 88.25 | 90.60 | 91.11 | 93.23 | 93.65 | 94.55 | 96.18 |
| std | | 1.66 | 2.13 | 2.09 | 1.68 | 1.12 | 1.75 | 1.30 | 1.16 |
| Time (s) | | 0.56 | 4.46 | 5.24 | 60.57 | 47.55 | 141.06 | 335.85 | 98.95 |
| std | | 0.02 | 0.09 | 0.06 | 1.44 | 0.88 | 0.71 | 1.05 | 0.87 |
Table 6. Results obtained with different numbers of labeled training samples per class on the Salinas data set.

| Training | Evaluation | ELM | KELM | WKELM | SS-KELM | KELM-CK | ASS-H-DELM | HCKBoost | DW-KELM |
|---|---|---|---|---|---|---|---|---|---|
| 5 | OA | 81.75 | 83.5 | 84.62 | 85.13 | 83.16 | 84.64 | 83.72 | 91.52 |
| | std | 2.54 | 2.26 | 1.99 | 3.11 | 3.32 | 3.05 | 3.39 | 0.62 |
| | AA | 86.88 | 89.89 | 89.6 | 86.88 | 89.45 | 89.92 | 90.02 | 95.28 |
| | std | 1.71 | 2.83 | 2.95 | 2.05 | 2.21 | 1.88 | 1.67 | 0.82 |
| | κ | 79.52 | 82.15 | 83.06 | 79.96 | 82.78 | 83.4 | 83.26 | 90.35 |
| | std | 2.52 | 1.17 | 1.28 | 3.24 | 4.24 | 3.65 | 3.16 | 0.37 |
| 10 | OA | 83.06 | 86.73 | 87.54 | 89.96 | 88.29 | 88.85 | 89.03 | 92.29 |
| | std | 2.64 | 1.56 | 1.06 | 1.52 | 1.25 | 1.07 | 1.3 | 0.81 |
| | AA | 89.05 | 92.86 | 93.16 | 93.95 | 93.06 | 93.49 | 93.61 | 96.06 |
| | std | 1.26 | 1.35 | 1.28 | 0.89 | 1.07 | 1.55 | 1.28 | 1.86 |
| | κ | 81.29 | 85.33 | 86.29 | 83.98 | 82.32 | 82.96 | 83.1 | 92.03 |
| | std | 2.65 | 1.85 | 2.01 | 1.75 | 2.62 | 3.18 | 2.28 | 0.75 |
| 15 | OA | 85.85 | 87.35 | 87.97 | 91.34 | 89.71 | 90.05 | 90.35 | 94.25 |
| | std | 1.92 | 1.94 | 1.88 | 1.86 | 1.1 | 1.55 | 1.07 | 1.29 |
| | AA | 91.23 | 92.38 | 92.69 | 93.28 | 92.26 | 92.75 | 92.64 | 96.89 |
| | std | 0.76 | 1.26 | 1.35 | 0.95 | 0.74 | 0.88 | 0.93 | 0.99 |
| | κ | 84.05 | 93.31 | 93.61 | 90.32 | 91.18 | 91.59 | 91.58 | 92.85 |
| | std | 1.79 | 0.93 | 0.59 | 1.65 | 1.04 | 0.92 | 1.15 | 0.78 |
| 20 | OA | 87.58 | 87.65 | 88.56 | 91.93 | 92.15 | 92.69 | 92.75 | 95.37 |
| | std | 1.92 | 1.02 | 1.65 | 1.37 | 1.12 | 1.4 | 1.27 | 1.06 |
| | AA | 92.19 | 94.54 | 94.84 | 94.85 | 95.32 | 95.87 | 95.67 | 97.52 |
| | std | 0.58 | 0.5 | 0.66 | 0.87 | 1.83 | 1.09 | 0.85 | 0.5 |
| | κ | 85.36 | 87.87 | 88.26 | 93.64 | 92.38 | 92.75 | 92.82 | 93.24 |
| | std | 0.88 | 0.49 | 0.63 | 1.58 | 0.88 | 0.63 | 0.85 | 0.62 |
| 25 | OA | 88.55 | 87.95 | 88.98 | 94.86 | 94.45 | 94.86 | 94.9 | 96.29 |
| | std | 0.82 | 1.23 | 1.45 | 1.43 | 1.55 | 1.25 | 1.43 | 1.35 |
| | AA | 93.22 | 95.13 | 96.23 | 98.15 | 96.3 | 96.84 | 97.01 | 98.75 |
| | std | 0.92 | 0.86 | 0.96 | 1.33 | 0.62 | 0.75 | 0.65 | 0.88 |
| | κ | 86.87 | 97.99 | 98.19 | 95.22 | 93.79 | 94.17 | 94.42 | 95.31 |
| | std | 0.58 | 1.12 | 1.35 | 1.61 | 1.29 | 1.35 | 1.12 | 1.01 |
| 30 | OA | 88.95 | 89.17 | 89.65 | 95.65 | 94.85 | 95.16 | 95.38 | 99.1 |
| | std | 0.76 | 1.06 | 1.24 | 1.87 | 0.93 | 0.85 | 0.79 | 0.73 |
| | AA | 94.2 | 95.72 | 96.32 | 97.89 | 96.94 | 97.26 | 97.46 | 99.78 |
| | std | 0.63 | 0.63 | 0.75 | 0.88 | 1.12 | 0.94 | 0.99 | 0.55 |
| | κ | 88.13 | 88.18 | 89.16 | 95.15 | 94.12 | 94.58 | 94.61 | 97.25 |
| | std | 0.62 | 0.52 | 0.77 | 1.85 | 0.58 | 0.64 | 0.73 | 0.92 |
