Article

Joint Classification of Hyperspectral and LiDAR Data via Multiprobability Decision Fusion Method

1 School of Computer Science, China West Normal University, Nanchong 637002, China
2 School of Electronic Information and Automation, Civil Aviation University of China, Tianjin 300300, China
3 Institute of Artificial Intelligence, China West Normal University, Nanchong 637002, China
4 Key Laboratory of Optimization Theory and Applications, China West Normal University, Nanchong 637002, China
5 State Key Laboratory of Rail Transit Vehicle System, Southwest Jiaotong University, Chengdu 610031, China
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(22), 4317; https://doi.org/10.3390/rs16224317
Submission received: 27 September 2024 / Revised: 12 November 2024 / Accepted: 15 November 2024 / Published: 19 November 2024

Abstract

With the development of sensor technology, the sources of remotely sensed image data for the same region are becoming increasingly diverse. Unlike single-source remote sensing image data, multisource remote sensing image data can provide complementary information for the same feature, promoting its recognition. The effective utilization of remote sensing image data from various sources can enhance the extraction of image features and improve the accuracy of feature recognition. Hyperspectral image (HSI) data and light detection and ranging (LiDAR) data can provide complementary information from different perspectives and are frequently combined in feature identification tasks. However, their joint use suffers from data redundancy, low classification accuracy, and high time complexity. To address these issues and improve feature recognition in classification tasks, this paper introduces a multiprobability decision fusion (PRDRMF) method for the combined classification of HSI and LiDAR data. First, the original HSI data and LiDAR data are downscaled via the principal component–relative total variation (PRTV) method to remove redundant information. In the multifeature extraction module, local texture features and spatial structure features are extracted from the two types of dimensionality-reduced data using the local binary pattern (LBP) and the extended multiattribute profile (EMAP), so that both the local texture and the spatial structure of the image data are considered. The four extracted features are subsequently input into the corresponding kernel extreme learning machine (KELM), which has a simple structure and good classification performance, to obtain four classification probability matrices (CPMs). Finally, the four CPMs are fused via a multiprobability decision fusion method to obtain the optimal classification results. Comparison experiments on four classical HSI and LiDAR datasets demonstrate that the proposed method achieves high classification performance while reducing the overall time complexity.

1. Introduction

With the development of sensor technology, observation data from different sensors have enriched the remote sensing image data of the same region [1,2,3,4]. Compared with single-source remote sensing image data, which are limited by imaging indices, multisource remote sensing image data can effectively leverage the complementary advantages of different perspectives from multiple sensors, provide multitemporal and multiangle data, and offer complementary spectral, temporal, and spatial information [5,6,7,8]. Therefore, the effective utilization of multisource remote sensing data can increase the accuracy of feature identification in remote sensing images. HSI data contain rich spectral information that can be utilized to differentiate features with distinct spectra and classify them with precision. However, a challenge arises when different features exhibit the same spectrum, and features of varying elevations that are obscured by clouds and fog are difficult to distinguish during classification. On the other hand, LiDAR data can provide shape and height information for features in cloud-obscured areas and can be used to better distinguish features with varying heights within the same spectrum that may be obscured by clouds and fog. Therefore, the combined utilization of HSI data and LiDAR data can leverage the complementary nature of spectral information [9,10,11,12,13] and shape–height information [14,15,16,17,18,19,20,21] to extract features from various viewpoints and increase the precision of feature identification.
Due to the large volume and high dimensionality of HSI data, data redundancy is a common issue when they are used in conjunction with LiDAR data for feature classification tasks. This redundancy can decrease the accuracy and efficiency of feature extraction, ultimately impacting the effectiveness of utilizing HSI and LiDAR data together for classification. To eliminate redundant data information, Dong et al. [22] proposed a new spatial environment de-redundancy network (SCDNet) and designed a fusion module based on a multi-layer gating mechanism to eliminate redundant information from HSI data and LiDAR data. The classification performance was significant on two datasets, but its effectiveness on additional datasets with different noise levels and sizes is unknown. Vasin et al. [23] utilized a locally chi-squared basis function (LHWABF) system algorithm to remove redundant information and compress HSI data adaptively through multiple iterations. This inevitably resulted in the loss of important information while removing redundant information. Wang et al. [24] proposed observation fusion networks optimized with multiple iterations and alternations to reduce information redundancy by utilizing a feature reconstruction module for HSI data. However, the two networks are structurally complex and have high computational overhead. Although the aforementioned methods partially address the issue of data redundancy when combining HSI and LiDAR data, they all eliminate redundant information through a multilevel mechanism or multiple iterations, involving complex network structures and high time complexity.
When feature extraction is performed on HSI and LiDAR data to better utilize the complementary information of both datasets and effectively obtain feature information of ground features, spectral features can be extracted using Principal Component Analysis (PCA) [25], Minimum/Maximum Autocorrelation Factor Analysis [26], and Kernel Principal Component Analysis [27] for HSI and LiDAR data, respectively. However, the extraction effectiveness of these basic methods on large and complex data still needs to be improved. In their study, Rasti et al. [28] combined joint feature extraction of extinction profiles with full variational analysis to extract spatial features from HSI data and LiDAR data. It is possible to ascertain the specific spatial characteristics of geometric angles. Liao et al. [29] enhanced the morphological profile method and employed a graph-based fusion strategy to extract spatial features from HSI data and LiDAR data. The graph fusion strategy was employed to preserve image details, thereby enhancing the morphological contours of the extracted spatial features. Li et al. [30] utilized LBPs to extract local texture features from remotely sensed image data. This approach proved effective in capturing local detail information. Dalla et al. [31] utilized EMAP to extract geometric structural features from HSI data. The EMAP technique is capable of extracting the significant features of an image with greater efficacy by fusing the information derived from multiple layers. Although the methods mentioned above for extracting spectral and spatial features from HSI and LiDAR data can capture information from various viewpoints, effectively utilizing the complementary information remains challenging. This is because spectral and spatial features are typically extracted independently and integrating them efficiently is hindered by the heterogeneous nature of HSI and LiDAR data collected from different sensors. In addition, some other methods have also been proposed in recent years [32,33,34,35,36,37,38,39].
Finally, to achieve effective classification of remote sensing images, machine learning-based classifiers such as Support Vector Machine (SVM) [40,41], Random Forest (RF) [42], and K Nearest Neighbors (KNN) [43] are widely used. SVM offers the advantages of rapid computation and robust generalization capabilities. It can transform linearly inseparable problems in low-dimensional space into high-dimensional space for classifying remote sensing image data. However, SVM primarily emphasizes shallow feature information while neglecting the exploration and utilization of deep feature information. This limitation hinders the enhancement of classification performance for remote sensing images. Although RF is suitable for processing high-dimensional nonlinear remote sensing image data, it has low accuracy for feature recognition on remote sensing image datasets with strong noise interference, such as the 2013 Houston dataset, which are often obscured by clouds. KNN effectively utilizes the spatial distance information of all neighboring sample points between test samples and training samples to classify remote sensing images. However, due to the high dimensionality of hyperspectral remote sensing image data and the limited number of training samples, the generalization ability of KNN weakens as the dimensions increase. The classification accuracy of remote sensing images tends to increase and then decrease with the increase in dimensions. The machine learning classifiers mentioned above may struggle to fully leverage deep feature information, identify features affected by high noise interference like cloud cover, or be susceptible to overfitting issues. To increase the classification accuracy of remote sensing images, convolutional neural networks (CNNs) are utilized. CNNs are capable of extracting deep feature information from image data, thus improving remote sensing image classification. Hang et al. [44] proposed the use of a coupled convolutional neural network (coupled CNN) for the feature classification of remote sensing image data. However, the processing was merely superficial and the resulting classification was inadequate. Lee et al. [45] classified the extracted local and global nonlinear and hidden features using a context-based deep convolutional neural network model (CCNN). While the image information was extracted in depth, the mining of different scale information was overlooked, necessitating an improvement in the classification effect. Xu et al. [46] utilized a two-branch CNN method (TBCNN) for the block-by-block classification and fusion of extracted multi-scale remote sensing image data. The extracted features were highly effective, yet the training time was prolonged, and the classification effect was unsatisfactory. Although the CNN-based classification methods mentioned above can increase the accuracy of feature recognition by extracting deep spectral spatial information from remote sensing image data, the model structure is complex. The training process requires setting a large number of parameters, resulting in poor model generalization, long training times, and high time complexity. To balance high classification accuracy and low time complexity, the KELM [47] method is employed for remote sensing image classification. KELM only needs to pre-select the kernel function and does not need to explicitly define the mapping function or set the number of hidden layer neurons. 
This saves the time otherwise spent optimizing the number of hidden layer neurons and effectively reduces the algorithmic time complexity of the training process. In addition, KELM integrates the kernel function with the extreme learning machine (ELM) and replaces random mapping with kernel mapping. This integration effectively addresses the issues of poor model generalization and unstable classification results caused by the random assignment of hidden layer neurons in the traditional ELM.
Both of the classification methods mentioned above, one based on machine learning and the other on deep learning, independently extract feature information from various remote sensing image datasets. They then employ feature fusion methods such as feature concatenation, feature-weighted averaging, or feature selection to merge the feature information from different datasets at the feature level. The fused features are subsequently utilized for remote sensing image classification. However, for heterogeneous datasets such as HSI and LiDAR, the respective feature information is highly diverse in terms of physical meaning and data form. This diversity leads to the limited effectiveness of traditional feature fusion methods and impacts the classification accuracy. Therefore, to further improve the classification accuracy of remote sensing images, this paper adopts a decision fusion method that is compatible with heterogeneous multiattribute feature information. The decision fusion method is based on feature-level information processing and adopts a decision fusion strategy to combine the classification probability matrices output by each classifier, thereby achieving the optimal classification results. This approach avoids the poor feature information fusion caused by the strong heterogeneity of heterogeneous data and helps to improve the classification accuracy. Prasad et al. [48] conducted subspace identification and band grouping of hyperspectral images. They combined multiple classifiers and decision fusion methods to increase classification accuracy and overall robustness of the classifiers. However, this fusion method requires high performance for each classifier. Li et al. [48] applied the One Against One (OAO) strategy and Kernel Discriminant Analysis (KDA) to classify hyperspectral images. They obtained the final classification results through the Majority Voting (MV) and Logarithmic Opinion Pool (LOGP) decision fusion strategies. This resulted in a reduction in the overall computational effort of the algorithm but had a negligible impact on the enhancement of classification accuracy. Jiang et al. [49] used Super PCA to extract features from regions with similar reflectance for hyperspectral image segmentation. They then employed an SVM classifier for each region’s extracted features to classify and fused the classification results using MV. However, this method caused a loss of information on some features and the classification accuracy improvement was not high. The decision fusion methods mentioned above utilize various feature extraction techniques and classifiers, in conjunction with traditional decision fusion strategies, to classify hyperspectral images. This approach increases the classification accuracy to some extent. However, these methods simply apply the classical decision fusion strategy in remote sensing image classification without further optimizing the decision fusion strategy. At the same time, the outcomes of decision fusion also rely on the performance of the classifiers [50,51,52,53]. Therefore, selecting high-performance classifiers and optimizing the decision fusion strategy to combine the classification probability matrices of the outputs of the classifiers are crucial tasks to increase the accuracy of image classification in remote sensing.
To address the aforementioned challenges in remote sensing image classification tasks, this paper introduces the PRDRMF method for the simultaneous classification of hyperspectral and LiDAR data. First, the HSI data and LiDAR data are downscaled by PRTV to remove redundant data, and then local texture features and spatial structure features are extracted from the two types of data using LBP and EMAP in the multifeature extraction module. Then, the four extracted features are input into the corresponding KELM with a simple structure and excellent classification performance to obtain four CPMs. Finally, the four CPMs are effectively combined using a multiprobability decision fusion method to achieve the optimal classification results.
The main contributions of this paper are described as follows:
  • The original HSI data and LiDAR data are, respectively, reduced and made de-redundant by PRTV. RTV [54] was first introduced to the field of joint classification of hyperspectral and LiDAR data. Through enhancements to this approach, a novel data de-redundancy method, PRTV, was proposed. PRTV is capable of eliminating redundant data while effectively reducing data dimensionality, thereby addressing the data redundancy issue inherent to the joint classification of HSI and LiDAR data.
  • A multifeature extraction module is proposed to extract feature information from HSI data and LiDAR data from various perspectives. By integrating the LBP method and EMAP method for feature extraction of HSI data and LiDAR data, respectively, not only is the information capture of the uniform and edge regions of the data taken into account, but also the spectral spatial information in the HSI data and the shape–height information in the LiDAR data are made to be complementary.
  • The four extracted features are input into KELM with a simple structure and superior classification performance for feature classification, respectively, and the classification probability matrix (CPM) is output. Subsequently, the CPM is probabilistically fused with a multiprobabilistic decision fusion method that is compatible with multiattribute feature information from heterogeneous sources. In this way, a lightweight and high-performance image classification model is formed, which effectively combines the two objectives of high classification accuracy and low time complexity in the process of joint classification of HSI data and LiDAR data.
The remaining part of this paper is summarized as follows. In Section 2, the proposed framework is introduced. In Section 3, the experimental dataset, experimental setup, and analysis of comparative experimental results are presented. In Section 4, the conclusion and the future work are given.

2. Framework of the PRDRMF Method

To address the issues of data redundancy, low classification accuracy, and high time complexity when combining HSI data and LiDAR data, this paper introduces the PRDRMF method for the joint classification of hyperspectral and LiDAR data.
The overall structure of PRDRMF is shown in Figure 1 and includes four parts: data de-redundancy method (PRTV), multifeature extraction module, classification module, and decision fusion module. First, the original HSI data and the original LiDAR data are downsized by PRTV to eliminate redundant information and extract meaningful data structures, respectively. Then, the processed HSI data and LiDAR data are input into the multifeature extraction module. In this module, local texture features and spatial structure features are extracted for the two types of data using LBP and EMAP, respectively. The four extracted features are then input into the classification module, where they are classified separately by the KELM classifier. This classifier is highly efficient in terms of its classification performance and has a simple structure, resulting in the four CPMs. Finally, the four CPMs are combined using the multiprobability decision fusion method to achieve the optimal classification results. This fusion process enables PRDRMF to achieve high classification accuracy while maintaining low time complexity.

2.1. Data De-Redundancy Method

Data de-redundancy is crucial for reducing the complexity of subsequent feature extraction and enhancing the final classification accuracy. This paper introduces RTV [54], a method with a simple structure, fast computation, and a good de-redundancy effect. However, it is only applicable to images containing a small number of bands. Therefore, to make it applicable to hyperspectral data, we introduce PCA to improve RTV and ultimately propose the PRTV method. RTV is designed for de-redundancy of low-dimensional data; by improving it with PCA, it can be applied for the first time to the joint classification of hyperspectral data and LiDAR data. In terms of effectiveness, PRTV has a simple structure. The HSI data and LiDAR data are downscaled, and redundant information is removed separately before feature extraction. This process aims to reduce the complexity of the subsequent feature extraction and alleviate the impact of data redundancy when HSI data and LiDAR data are used jointly for classification.
Assuming that the original HSI data are $H \in \mathbb{R}^{m \times n \times c}$, where m, n, and c denote the height, width, and number of bands, respectively, the output data after applying PRTV to remove the redundancy of $H$ are $H_{PRTV} \in \mathbb{R}^{m \times n \times 1}$. The PCA projection matrix is $P \in \mathbb{R}^{c \times d}$, and the HSI data after dimensionality reduction can be expressed as $P \cdot H \in \mathbb{R}^{m \times n \times d}$, where d denotes the reduced dimensionality. In this paper, the ideal result of removing redundancy from the original HSI data using PRTV is obtained as follows:
$\arg\min_{H_{PRTV}} \frac{1}{2} \left\| H_{PRTV} - P \cdot H \right\|^{2} + \lambda \sum_{i} \left( \frac{D_x(i)}{L_x(i) + \varepsilon} + \frac{D_y(i)}{L_y(i) + \varepsilon} \right)$
where $\lambda$ is the weight of the regularization term and $\sum_{i} \left( \frac{D_x(i)}{L_x(i) + \varepsilon} + \frac{D_y(i)}{L_y(i) + \varepsilon} \right)$ is the regularization term.
In this context, $D_x(i)$ and $D_y(i)$ represent the windowed total variation at point $i$ in the $x$ and $y$ directions, respectively. To facilitate the distinction between salient structural elements and texture elements, new windowed inherent variations, $L_x(i)$ and $L_y(i)$, are introduced in addition to $D_x(i)$ and $D_y(i)$.
$D_x(i) = \sum_{j \in R(i)} g_{i,j} \left| (\partial_x H_{PRTV})_j \right|$

$D_y(i) = \sum_{j \in R(i)} g_{i,j} \left| (\partial_y H_{PRTV})_j \right|$

$L_x(i) = \left| \sum_{j \in R(i)} g_{i,j} (\partial_x H_{PRTV})_j \right|$

$L_y(i) = \left| \sum_{j \in R(i)} g_{i,j} (\partial_y H_{PRTV})_j \right|$
where $R(i)$ is a rectangular region centered at point $i$, and the weight function $g_{i,j}$ regulates the size of the window. $x_i$ and $x_j$ denote the coordinates of pixels $i$ and $j$ in the x direction, while $y_i$ and $y_j$ denote their coordinates in the y direction:
$g_{i,j} \propto \exp\left( -\frac{(x_i - x_j)^2 + (y_i - y_j)^2}{2\sigma^2} \right)$
Assuming that the original LiDAR data are $L \in \mathbb{R}^{m \times n \times l}$ and applying all of the above formulas, we finally obtain the output of the de-redundancy process using the PRTV method for $L$, i.e., $L_{PRTV} \in \mathbb{R}^{m \times n \times t}$. Figure 2 illustrates the efficacy of the method in reducing data redundancy.
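For concreteness, the windowed variation terms above can be sketched in a few lines of Python. The snippet below is a minimal illustration, assuming NumPy, SciPy, and scikit-learn are available; the window radius, σ, and the number of principal components are illustrative defaults rather than the settings used in this paper, and the iterative minimization of the PRTV objective itself is not reproduced here.

```python
import numpy as np
from scipy.ndimage import correlate
from sklearn.decomposition import PCA

def window_variations(band, sigma=1.0, radius=2):
    """Sketch of the windowed total variation D_x, D_y and the windowed
    inherent variation L_x, L_y for a single band, using a Gaussian weight
    g_{i,j} over a rectangular window R(i)."""
    gy, gx = np.gradient(band.astype(float))        # partial derivatives along y and x
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    g = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    g /= g.sum()
    Dx = correlate(np.abs(gx), g, mode='nearest')   # window sum of |gradient|
    Dy = correlate(np.abs(gy), g, mode='nearest')
    Lx = np.abs(correlate(gx, g, mode='nearest'))   # |window sum of gradient|
    Ly = np.abs(correlate(gy, g, mode='nearest'))
    return Dx, Dy, Lx, Ly

def pca_reduce(cube, n_components=10):
    """PCA step of PRTV: project an (m, n, c) cube onto d principal components."""
    m, n, c = cube.shape
    reduced = PCA(n_components=n_components).fit_transform(cube.reshape(-1, c))
    return reduced.reshape(m, n, n_components)
```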

2.2. Multifeature Extraction Module

Both LBP [30] and EMAP [55] are commonly utilized as feature extraction modules for the joint utilization of HSI data and LiDAR data. To create an effective complement between the rich spectral and spatial information of HSI data and the high degree of shape information contained in LiDAR data, we adopt a strategy of combining the LBP method, which captures local spectral features, with the EMAP method, which captures spatial structure features. This method mines the respective features of HSI data and LiDAR data from different perspectives, thereby achieving effective feature extraction and improving the final image classification accuracy.
The data obtained by the data de-redundancy module are input into the multifeature extraction module to obtain the corresponding four features (i.e., $HSI_{LBP} \in \mathbb{R}^{m \times n \times h_1}$, $LiDAR_{LBP} \in \mathbb{R}^{m \times n \times h_1}$, $HSI_{EMAP} \in \mathbb{R}^{m \times n \times h_2}$, and $LiDAR_{EMAP} \in \mathbb{R}^{m \times n \times h_2}$).
Specifically, LBP encodes the image to capture detailed local texture features and hence can be used for feature extraction in localized regions of hyperspectral images. Within an r × r pixel neighborhood, the LBP operator compares the pixel value of each of the n neighboring points with the pixel value of the neighborhood center: a position is marked as 1 if the neighboring pixel value is higher than the center pixel value; otherwise, it is marked as 0.
For the input image $H_{PRTV}$, r is the radius of the circle from the center point to the neighboring points and n is the total number of sampled pixels in the neighborhood, which consists of the center pixel and its n neighboring points. Let $\{m_i\}_{i=0}^{n-1}$ represent the pixel values of the neighboring pixels and $m_{hc}$ be the pixel value of the center point of the $H_{PRTV}$ image. $HSI_{LBP}$ is computed as follows:
$HSI_{LBP} = LBP_{n,r}(m_{hc}) = \sum_{i=0}^{n-1} F_h(m_i - m_{hc}) \, 2^{i}$

$F_h(m_i - m_{hc}) = \begin{cases} 1, & m_i - m_{hc} > 0 \\ 0, & m_i - m_{hc} \le 0 \end{cases}$
$LiDAR_{LBP}$ can be obtained in a similar way.
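A minimal NumPy sketch of the LBP coding above is given below. Neighbors are approximated on integer offsets of a circle of radius r, and image borders are handled by wrapping, which is a simplification; r = 1 and n = 8 are used only as defaults here.

```python
import numpy as np

def lbp_codes(band, n=8, r=1):
    """Basic LBP: compare each of the n circular neighbours with the centre
    pixel m_hc and accumulate the sign bits F_h(m_i - m_hc) * 2^i."""
    band = np.asarray(band, dtype=float)
    codes = np.zeros(band.shape, dtype=np.int32)
    angles = 2.0 * np.pi * np.arange(n) / n
    for i, a in enumerate(angles):
        dy = int(round(r * np.sin(a)))
        dx = int(round(r * np.cos(a)))
        neighbour = np.roll(np.roll(band, dy, axis=0), dx, axis=1)  # wrap-around borders
        codes += ((neighbour - band) > 0).astype(np.int32) << i     # F_h(.) * 2^i
    return codes
```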
On the other hand, by combining the layers from AP to EAP and finally to EMAP, the spatial features of HSI data can be well extracted, thus improving the classification accuracy.
First, n principal components ($PC_i$, $i = 1, 2, \ldots, n$) are extracted from the input image $H_{PRTV}$ to compute the morphological attribute profiles (APs). Then, the extended attribute profile ($EAP_h$) is formed by combining the APs of the different principal components:

$EAP_h = \{ AP(PC_1), AP(PC_2), \ldots, AP(PC_n) \}$

$HSI_{EMAP}$ is then formed by combining multiple $EAP_{h_i}$ computed for different attributes:

$HSI_{EMAP} = \{ EAP_{h_1}, EAP_{h_2}, \ldots, EAP_{h_n} \}$

where $h_i$ ($i = 1, \ldots, n$) represents the common attributes.
$LiDAR_{EMAP}$ can also be obtained using a similar method.
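The EMAP construction can likewise be sketched with off-the-shelf morphological operators. The snippet below uses only the area attribute with a few illustrative thresholds; a full EMAP typically stacks several attributes (e.g., area, standard deviation, moment of inertia), so this is a simplified sketch rather than the exact profile used in this paper.

```python
import numpy as np
from sklearn.decomposition import PCA
from skimage.morphology import area_opening, area_closing

def emap_features(cube, n_pcs=4, area_thresholds=(100, 500, 1000)):
    """Stack area openings/closings of the leading principal components to
    approximate an extended multiattribute profile (EMAP)."""
    m, n, c = cube.shape
    pcs = PCA(n_components=n_pcs).fit_transform(cube.reshape(-1, c)).reshape(m, n, n_pcs)
    layers = []
    for k in range(n_pcs):
        band = pcs[..., k]
        # rescale each component to an integer range for the attribute filters
        band = np.round(255 * (band - band.min()) / (np.ptp(band) + 1e-12)).astype(np.uint16)
        layers.append(band)                                       # original component
        for t in area_thresholds:
            layers.append(area_opening(band, area_threshold=t))   # attribute thinning
            layers.append(area_closing(band, area_threshold=t))   # attribute thickening
    return np.stack(layers, axis=-1)
```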

2.3. Classification Module

Since Huang [41] introduced the kernel function into ELM and proposed KELM, it has been widely utilized in classifying multisource remote sensing image data. This is attributed to its simple structure, high computational efficiency, and superior classification performance [46]. KELM only needs to pre-select the kernel function and does not need to explicitly define the mapping function or set the number of hidden layer neurons. This saves the time otherwise spent optimizing the number of hidden layer neurons and effectively reduces the algorithmic time complexity of the training process. In addition, KELM integrates the kernel function with ELM and replaces random mapping with kernel mapping. This integration effectively addresses the issues of poor model generalization and unstable classification results caused by the random assignment of hidden layer neurons in the traditional ELM.
Consequently, the KELM classifier was selected as it enables the attainment of both high classification accuracy and low time complexity. Furthermore, the RBF kernel function was selected following an experimental comparison.
Specifically, we input the obtained features into the classifier and then solve the objective function to derive the corresponding four CPMs, namely HSI_LBP, HSI_EMAP, LiDAR_LBP, and LiDAR_EMAP.
$F(s) = \left[ k(s, s_1); \ldots; k(s, s_n) \right] \left( \frac{I}{C} + \Omega_{ELM} \right)^{-1} HSI\_LBP$

Here, $s_1, s_2, \ldots, s_n$ represent the training samples, and $\Omega_{ELM} \in \mathbb{R}^{h \times h}$ denotes a symmetric matrix constructed from the kernel function. $C$ is a constant, also known as the regularization factor, $I \in \mathbb{R}^{h \times h}$ is the identity matrix, $HSI\_LBP$ represents the desired output, and $k$ denotes the kernel function (i.e., the RBF). HSI_EMAP, LiDAR_LBP, and LiDAR_EMAP can also be obtained using a similar method.
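The KELM solution above can be written compactly in NumPy. The following is a minimal sketch assuming integer labels 0..C-1 and an RBF kernel; the regularization factor C and the kernel width gamma are placeholder defaults, not the tuned values of this paper, and the raw outputs are treated directly as the classification probability matrix (CPM).

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """RBF kernel matrix between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

class KELM:
    """Minimal kernel extreme learning machine: beta = (I/C + Omega)^(-1) T,
    prediction F(s) = [k(s, s_1), ..., k(s, s_n)] beta."""
    def __init__(self, C=100.0, gamma=1.0):
        self.C, self.gamma = C, gamma

    def fit(self, X, y):
        self.X = X
        T = np.eye(int(y.max()) + 1)[y]                  # one-hot target matrix
        omega = rbf_kernel(X, X, self.gamma)             # Omega_ELM
        n = X.shape[0]
        self.beta = np.linalg.solve(np.eye(n) / self.C + omega, T)
        return self

    def predict_proba(self, X):
        return rbf_kernel(X, self.X, self.gamma) @ self.beta   # CPM (unnormalized scores)

    def predict(self, X):
        return self.predict_proba(X).argmax(axis=1)
```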

2.4. Decision Fusion Module

The probability matrices obtained from a high-performance classifier are combined through a suitable decision fusion method, which increases the classification accuracy of remote sensing images. However, most existing decision fusion methods have rudimentary decision fusion strategies or rely on classifiers with poor performance, leading to unsatisfactory fusion results. Therefore, this paper selects KELM, with its high performance and simple structure, as the classifier and introduces a multiprobability decision fusion method to accommodate the characteristics of the multiple features discussed in this paper. The multiprobability decision fusion method does not require additional parameters or validation sets in the fusion process. It is compatible with the four heterogeneous multiattribute features, thus avoiding the poor fusion of feature information caused by the strong heterogeneity of the data and helping to improve the classification accuracy of remote sensing images. Nevertheless, this decision fusion method necessitates a high level of classification performance from the classifier, which introduces an additional degree of complexity.
Since $HSI_{LBP}$, $LiDAR_{LBP}$, $HSI_{EMAP}$, and $LiDAR_{EMAP}$ belong to different types of features or different data sources, the probabilities of their corresponding classification labels, i.e., $S_i \in \mathbb{R}^{1 \times h_l}$, $T_i \in \mathbb{R}^{1 \times l_t}$, $P_i \in \mathbb{R}^{1 \times h_e}$, and $Q_i \in \mathbb{R}^{1 \times l_e}$, for the same point $Y_i$ in the test sample $Y$ are independent. Therefore, the multiprobability decision fusion method is designed as follows:
$\max_{S_i, T_i, P_i, Q_i} \{ P(S_i, T_i, P_i, Q_i \mid Y) \} = \max_{S_i, T_i, P_i, Q_i} \{ P(S_i \mid Y) \cdot P(T_i \mid Y) \cdot P(P_i \mid Y) \cdot P(Q_i \mid Y) \}$

$P(Y_i) = P_{HSI_{LBP}}(Y_i) \cdot P_{LiDAR_{LBP}}(Y_i) \cdot P_{HSI_{EMAP}}(Y_i) \cdot P_{LiDAR_{EMAP}}(Y_i)$
where $P_{HSI_{LBP}}(Y_i)$, $P_{LiDAR_{LBP}}(Y_i)$, $P_{HSI_{EMAP}}(Y_i)$, and $P_{LiDAR_{EMAP}}(Y_i)$ are the probability vectors corresponding to the LBP features of the HSI, the LBP features of the LiDAR data, the EMAP features of the HSI, and the EMAP features of the LiDAR data, respectively.
The final category labeling is calculated as follows:
$class(Y_i) = \arg\max_{l = 1, \ldots, C} \left[ p_l(Y_i) \right]$
where $C$ is a constant representing the total number of sample categories and $p_l(Y_i)$ denotes the probability that sample $Y_i$ belongs to the $l$-th category.
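Given the four CPMs, the fusion rule amounts to an element-wise product followed by an argmax over categories. The sketch below is one possible reading of the equations above; the clipping and per-sample renormalization are practical safeguards added here, not steps prescribed by the paper, and the variable names in the usage comment are hypothetical.

```python
import numpy as np

def multiprobability_fusion(cpms):
    """Fuse CPMs (each of shape [n_samples, n_classes]) by the product rule
    and return the fused probabilities and the predicted labels."""
    fused = np.ones_like(cpms[0], dtype=float)
    for p in cpms:
        fused *= np.clip(p, 1e-12, None)           # avoid zeros annihilating the product
    fused /= fused.sum(axis=1, keepdims=True)      # renormalize per sample
    return fused, fused.argmax(axis=1)

# Usage with the four CPMs of this paper (hypothetical variable names):
# fused, labels = multiprobability_fusion([cpm_hsi_lbp, cpm_lidar_lbp,
#                                          cpm_hsi_emap, cpm_lidar_emap])
```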
The proposed method comprises four stages, each of which must be completed in sequence. Consequently, the processing effect, operating environment and hardware platform required to achieve each stage differ. This will undoubtedly result in a more intricate operational process in practical applications. Nevertheless, the subsequent experimental analysis demonstrates that the proposed method exhibits notable advantages in terms of high classification accuracy and rapid running time, thereby achieving the desired experimental outcome. These advantages are more pronounced than the associated disadvantages.

3. Experimental Results and Analysis

To address the issues of data redundancy, low classification accuracy, and high time complexity when using HSI data and LiDAR data together, we propose the PRDRMF method. To assess the efficacy of the proposed method, experiments were conducted on four publicly accessible HSI and LiDAR datasets. First, the experimental environment and evaluation metrics are described in detail. Next, detailed information about the four datasets used in the experiments is provided. Finally, experiments are conducted on the four datasets to compare the classification accuracy and time complexity with those of other existing methods. Additionally, ablation experiments are utilized to demonstrate the functionality of the various sources of data and the different modules in the proposed method.
The experiments utilize unified evaluation metrics to assess the classification outcomes, which include overall accuracy (OA), average accuracy (AA), and the Kappa coefficient. The experimental environment consists of MATLAB 2021a, Keras 2.3.1, an Nvidia RTX 3050 GPU, and 32 GB of memory.
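For reference, OA, AA, and the Kappa coefficient can be computed from a confusion matrix as in the short sketch below (a standard formulation assuming integer labels 0..C-1, not code from the paper).

```python
import numpy as np

def classification_metrics(y_true, y_pred, n_classes):
    """Overall accuracy (OA), average accuracy (AA), and Kappa from labels."""
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1                                         # confusion matrix
    oa = np.trace(cm) / cm.sum()
    per_class = np.diag(cm) / np.maximum(cm.sum(axis=1), 1)  # per-class recall
    aa = per_class.mean()
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / cm.sum() ** 2
    kappa = (oa - pe) / (1 - pe)
    return oa, aa, kappa
```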

3.1. Datasets

To evaluate the effectiveness of the proposed method, four HSI and LiDAR datasets were selected for experimentation. A detailed description of the four multi-sensor datasets can be found in Table 1.
The 2013 Houston dataset covers the area of the University of Houston and its surrounding cities. The dataset consists of 144 HSI spectral bands with dimensions of 349 × 1905 pixels and includes 15 categories. The details are shown in Table 2.
The MUUFL dataset covers the University of Southern Mississippi Gulf Park campus. The dataset consists of 64 HSI spectral bands with a size of 325 × 220 pixels. It includes 11 categories. The details are shown in Table 3.
The Trento dataset covers a rural area south of Trento, Italy. The dataset consists of 63 HSI spectral bands with dimensions of 600 × 166 pixels and includes six categories. The details are shown in Table 4.
The 2018 Houston dataset has the same wavelength range as the 2013 Houston dataset. The dataset contains 48 spectral bands, and the images have a spatial resolution of 1 m. There are seven consistent classes in the two scenes. We extracted 48 spectral bands (wavelength range 0.38–1.05 μm) from the Houston 2013 scene corresponding to the Houston 2018 scene and selected an overlapping area of 209 × 955 pixels. The test and training set split ratios were consistent with those of the 2013 Houston dataset. The classes and the number of samples are listed in Table 5.

3.2. Parameter Analysis

To determine the optimal parameters for the PRDRMF method, the influence of the parameters in the four modules was considered. Following experiments and analysis, the optimal parameters were selected. Among them, the parameters of the data de-redundancy method PRTV can be optimized with λ set to 0.015 and σ set to 1 after experiments. The parameters of the LBP in the multifeature extraction module can be optimized with r set to 1 and n set to 8 after experiments. The optimal kernel function for the classification module is the RBF kernel function after comparisons. The decision fusion module does not require any additional parameters.

3.2.1. Parameters of PRTV

To minimize the redundancy of spectral and spatial information, this paper employs PRTV for HSI data and LiDAR data initially. It conducts dimensionality reduction and data de-redundancy processing by adjusting the smoothing degree λ and texture element σ. Therefore, the parameters λ and σ significantly affect the performance of the model. The relationships between OA and parameters λ and σ are shown in Figure 3.
In Figure 3a, as the value of λ increases, the OA value of the 2013 Houston dataset tends to increase and reaches its peak at λ = 0.04. Conversely, the OA value of the MUUFL dataset decreases as λ increases, reaching its maximum at λ = 0.00. The OA value of the Trento dataset shows slight fluctuations slightly above and below 99.70% and reaches its peak at λ = 0.01. The OA value of the 2018 Houston dataset reaches its peak at λ = 0.03.
In Figure 3b, the OA values of the 2013 Houston dataset, the MUUFL dataset, and the 2018 Houston dataset exhibit a decreasing trend as σ increases, reaching the highest value at σ = 1. In contrast, the OA value of the Trento dataset fluctuates around 99.75% and reaches its peak at σ = 5, achieving the second-highest classification accuracy at σ = 1.
Therefore, after analyzing the parameters in the four datasets, to balance the model’s performance across all datasets and minimize operational complexity, it is recommended to set λ = 0.015 and σ = 1.

3.2.2. Parameters of LBP

The size of the range diameter (r) and the number of sample points (n) in LBP will directly affect the features obtained by the multifeature extraction module. This, in turn, impacts each CPM and can significantly influence the final model performance. Therefore, this paper analyzes the accuracy of PRDRMF with different diameter sizes (r) and varying numbers of sample points (n), as illustrated in Figure 4.
In Figure 4a, the OA values of both the 2013 Houston dataset and the Trento dataset exhibit a general decreasing trend as r increases, reaching the highest value at r = 1. In contrast, the OA values of the MUUFL dataset exhibit an increasing trend followed by a decreasing trend, reaching the peak value at r = 4. The OA value of the 2018 Houston dataset reaches its peak value at r = 1.
In Figure 4b, the OA values of both the 2013 Houston dataset and the Trento dataset exhibit a significant increasing trend as n increases, reaching the peak value at n = 8. The OA values of the MUUFL dataset exhibit a slight fluctuating trend and reach the second-highest classification accuracy at n = 8. The OA value of the 2018 Houston dataset reaches its peak value at n = 8.
A parameter analysis was conducted on the four datasets, and after careful consideration, the parameter settings of r = 1 and n = 8 were selected to achieve an optimal balance in model performance across the datasets.

3.2.3. Parameters of KELM

The KELM classifier is an ELM method that utilizes kernel functions. Therefore, the selection of appropriate kernel functions is of paramount importance. The efficacy of the KELM classifier, which incorporates distinct kernel functions for classification, was evaluated on four datasets. The outcomes of this experiment are presented in Figure 5.
In Figure 5, the KELM classifier with RBF kernel function achieved the highest OA value for classification on all four datasets. Therefore, the KELM with RBF kernel function is ultimately chosen as the classifier.

3.3. Comparison Experiment and Analysis

To validate the excellent performance of the PRDRMF method, we conducted experiments on the four datasets and compared the experimental results with other existing methods, including SVM [56], CCNN, EndNet [57], CRNN [58], TBCNN, coupled CNN [21], CNNMRF [59], FusAtNet [60], S2ENet [61], CALC [62], Fusion-HCT [63], SepG-ResNet50 [64], and DSMSC2N [65]. For these methods, the parameter settings are described in the corresponding references.
To ensure the fairness of the experimental results, the training and test samples for all experimental methods were kept consistent. Please refer to Table 2, Table 3, Table 4 and Table 5. Table 6, Table 7, Table 8, Table 9, Table 10, Table 11, Table 12 and Table 13 list the OA, AA, and Kappa values obtained using different methods on the 2013 Houston, MUUFL, Trento, and the 2018 Houston datasets. The bold values in the table represent the optimal values. To visualize the classification effects of the compared methods, Figure 6, Figure 7, Figure 8 and Figure 9 give the classification plots obtained by different classification methods on the 2013 Houston, MUUFL, Trento, and the 2018 Houston datasets. For purposes of comparison, the HSI-generated pseudo-color images, the original LiDAR maps, and the ground truth maps are also presented.
The 2013 Houston dataset demonstrates the broadest coverage of urban scenes with multiple feature classes and is primarily utilized to validate the performance of the PRDRMF method for detailed classification of urban scenes based on satellite imagery.
On the 2013 Houston dataset, the OA value of the PRDRMF method reached 99.79%. In particular, the OA value was 40.39%, 12.87%, 11.27%, 11.24%, 10.88%, 9.36%, 8.86%, 9.18%, 5.60%, 5.08%, 0.03%, 27.12%, and 8.30% higher than that of SVM, CCNN, EndNet, CRNN, TBCNN, coupled CNN, CNNMRF, FusAtNet, S2ENet, CALC, Fusion-HCT, SepG-ResNet50, and DSMSC2N, respectively. Furthermore, the classification accuracy was 100% for the categories of health grass, stressed grass, and artificial grass, as well as highway, railway, parking lot 1, and parking lot 2. This suggests that the multifeature extraction module provides texture information from diverse viewpoints, thereby enabling PRDRMF classification to perform exceptionally well on the aforementioned analogous categories.
As illustrated in Figure 6d, the conventional machine learning algorithm SVM exhibits the lowest classification accuracy and provides markedly inadequate classification results for four categories, including highway and car parking lot 1. Figure 6e shows that the highway and railroad categories are misclassified. This is due to the CCNN method’s inability to effectively deal with heterogeneous regions. The method’s limitations are evident in its exclusive focus on spatial neighborhood information, which leads to suboptimal classification performance on various road categories. EndNet achieves 100% classification accuracy on two categories: artificial grass and tennis court. In Figure 6g–j, it can be seen that these CNN-based methods achieve higher classification accuracy per category, reaching 100% accuracy on several categories, particularly on the category of tennis courts, which demonstrates a significant recognition effect. Notably, CRNN, TBCNN, and CNNMRF exhibited limited performance in recognizing the category of ordinary roads and highways, while coupled CNN achieved a classification accuracy of only 41.11% for the category of water. Compared to the aforementioned methods, the classification performance of FusAtNet shows a slight improvement. However, the results are still unsatisfactory for regular highways. In contrast, S2Enet demonstrates significant enhancement in classification accuracy, achieving a perfect score of 100% for stressed grass, artificial grass, water, tennis court, and runway categories. CALC achieved a classification accuracy of over 90% in 14 categories, with the exception of the category pertaining to road, where the classification accuracy requires improvement. The Transformer-based method Fusion-HCT demonstrates efficacy in classification, attaining 100% accuracy in multiple categories. However, the SepG-ResNet50 approach exhibits limitations in discerning between analogous categories, such as roads and grasses. DSMSC2N attains 100% accuracy in two categories, soil and tennis court, although there is scope for enhancement in the recognition of highway. In the upper right corner of Figure 6q, it can be observed that the internal noise of health grass is almost absent and less noisy compared to Figure 6d–p. This is attributed to the utilization of PRDRMF with PRTV, which eliminates redundant information.
The MUUFL dataset depicts small-scale neighborhood scenes within the city and is primarily utilized to validate the classification effectiveness of the PRDRMF method in localized daily scenes.
On the MUUFL dataset, the OA value of the PRDRMF method reached 92.21%. In particular, the OA value was 87.74%, 3.25%, 4.46%, 0.83%, 1.36%, 1.28%, 3.27%, 0.73%, 0.53%, 9.30%, 4.78%, 9.31%, and 1.04% higher than that of SVM, CCNN, EndNet, CRNN, TBCNN, coupled CNN, CNNMRF, FusAtNet, S2ENet, CALC, Fusion-HCT, SepG-ResNet50, and DSMSC2N, respectively. Furthermore, in the case of the sidewalk and grass categories, where all other methods demonstrated suboptimal performance, the PRDRMF method exhibited superior results, with OA values of 84.40% and 92.21%, respectively. The presence of significant misclassification is clearly evident in Figure 7d, and it is attributed to the absence of spatial information in SVM and its vulnerability to noise, leading to lower classification accuracy on the MUUFL dataset. In Figure 7e, it can be seen that the sidewalk and the yellow markers on the roadside are misclassified as mud and sandy ground. This misclassification occurs because the CCNN method is inadequate in handling heterogeneous regions; it is limited to considering spatial neighborhood information only. It is difficult to accurately discriminate between similar categories. From Figure 7f, it can be observed that there are numerous noise points in the EndNet graph. This is attributed to the limited learning capability of the encoder- and decoder-based feature representations in EndNet, which hinders their ability to effectively counteract noise interference. CRNN exhibits low classification accuracy in the category of mostly grass, but it achieves the highest classification accuracy of up to 96.97% in the category of yellow markings on the roadside. TBCNN, coupled CNN, CNNMRF, and other CNN-based methods consider spatial and spectral information, reduce noise, and achieve higher classification accuracy in pixel-level remote sensing image scenes. They have demonstrated the highest classification accuracy in several categories. However, the focus on spectral feature similarity results in the spatial elevation features of the features being ignored, with the consequence that mostly grass is misclassified as trees. The OA of CALC is lower than that of all the aforementioned CNN-based methods. In comparison to the above methods, FusAtNet and S2Enet have demonstrated increased overall classification accuracy. However, the classification accuracy is markedly deficient in the case of the roadside yellow curb category. The Transformer-based Fusion-HCT method exhibits suboptimal performance in classification, particularly in the identification of pavement. PRDRMF demonstrates the most accurate classification outcomes. A comparison of Figure 7e–p reveals that Figure 7q, which is most similar to the ground truth map, exhibits less classification noise and clearer boundaries. This is attributed to the utilization of PRDRMF with PRTV, which eliminates redundant information.
The Trento dataset showcases farm scenarios with fewer crop classes and is mainly utilized to validate the performance of the PRDRMF method for precise agricultural classification over extensive areas and in more standardized conditions.
On the Trento dataset, the OA value of the PRDRMF method reached 99.73%. In particular, the OA value was improved by 26.84%, 2.44%, 5.56%, 2.51%, 2.27%, 2.04%, 1.33%, 0.67%, 1.19%, 0.35%, 0.13%, 5.91%, and 0.80% compared to SVM, CCNN, EndNet, CRNN, TBCNN, coupled CNN, CNNMRF, FusAtNet, S2ENet, CALC, Fusion-HCT, SepG-ResNet50, and DSMSC2N, respectively. Furthermore, the classification accuracy was 100% for two similar categories, namely woods and vineyards. This suggests that the multifeature extraction module provides texture information from diverse viewpoints, thereby enabling PRDRMF classification to perform exceptionally well on the aforementioned analogous categories. The large apple tree orchard depicted in Figure 8d is erroneously classified as vineyard land due to the SVM’s lack of spatial information and susceptibility to noise interference, which can lead to misclassification of categories. From Figure 8f, it can be seen that the total classification accuracy of EndNet is only higher than that of SVM. There are many noise points in the graph due to the limited learning ability of the encoder-based and decoder-based feature representation in EndNet, which hinders its effectiveness in resisting noise interference. In Figure 8e, it can be seen that the yellow markers on the sidewalk and the roadside are misclassified as mud and sand. This is due to the limitations of the CCNN method in handling heterogeneous regions. The method only considers spatial neighborhood information, which makes it difficult to accurately discriminate between similar categories. In Figure 8e,g–j, it can be seen that the total classification accuracies of these CNN-based methods are significantly higher. Specifically, CRNN and TBCNN achieve 100% classification accuracies for both categories of ground and vineyard land, while CNNMRF demonstrates the highest classification accuracies for all three categories of apple trees, woods, and vineyard land. Compared to the aforementioned methods, FusAtNet and S2Enet show a slight improvement in classification, achieving 100% accuracy in classifying ground and woods categories, respectively. The classification maps produced by CALC and Fusion-HCT are of superior quality, exhibiting minimal noise points. In comparison, the DSMSN classification map displays a few noise points within the apple trees category, while the SepG-ResNet50 map is of inferior quality, displaying significant noise points. In addition, there is noticeable noise in the extensive apple tree orchard depicted in Figure 8e–j,o. However, in Figure 8q, the large, well-maintained farm scene displays distinct boundaries with minimal noise. This is attributed to the utilization of PRDRMF with PRTV, which eliminates redundant information. Only a slight cross-bar phenomenon of misclassification within the ground features persists.
The 2018 Houston dataset is a selection of the same seven categories as the 2013 Houston dataset, showing areas of the same location at different points in time, and is mainly used to compare the 2013 Houston dataset as a complement to the multi-temporal data. The study of this dataset shows the applicability of this paper’s method on multi-temporal data.
On the 2018 Houston dataset, the OA value of the PRDRMF method reached 96.93%. In particular, this OA value was 15.44%, 6.84%, 6.21%, 5.77%, 5.72%, 4.72%, 4.58%, 5.35%, 2.34%, 2.13%, 0.25%, 8.63%, and 3.38% higher than that of SVM, CCNN, EndNet, CRNN, TBCNN, coupled CNN, CNNMRF, FusAtNet, S2ENet, CALC, Fusion-HCT, SepG-ResNet50, and DSMSC2N, respectively. In addition, it achieved 100% classification accuracy on the water category. This is better than PRDRMF’s recognition of the water category on the 2013 Houston dataset, probably because the 2018 Houston dataset has only seven categories and the sample size of the water category accounts for too much of the total. The dataset was simplified to make it easier to distinguish different target categories.
As shown in Figure 9d, the traditional machine learning algorithm, SVM, has the lowest classification accuracy and provides extremely poor classification results on three categories, such as grass healthy and residential buildings. In Figure 9e, it can be seen that the road and non-residential buildings categories are misclassified, which is due to the shortcomings of the CCNN method in dealing with heterogeneous regions and its poor classification performance on spatially neighboring categories. EndNet performs better only on the two categories of grass stressed and non-residential buildings, and the rest needs to be improved. In Figure 9g–j, it can be seen that these CNN-based methods have a higher classification accuracy per category, with all of them achieving 100% accuracy on the category of water. The TBCNN achieves better recognition performance on the category of grass healthy, and the CNNMRF achieves better recognition performance on the category of trees, with a classification accuracy of 97.37%. FusAtNet, on the other hand, does not perform as well on the category of trees. The improvement in classification accuracy achieved by the above methods is small, whereas S2Enet shows a large improvement in classification accuracy, reaching 94.59%. CALC achieves more than 80% accuracy for eight categories. The Transformer-based Fusion-HCT method performs well in terms of classification and is the best among all methods for the residential buildings category. SepG-ResNet50 has a low overall classification accuracy and only achieves the highest accuracy in the grass stressed category, reaching 98.59%. DSMSC2N classified the categories of both residential buildings and non-residential buildings with superior accuracy. In particular, the identification of grass healthy was poor for all methods on the 2018 Houston dataset, unlike the 2013 Houston dataset. This may be due to the fact that the total data volume was smaller, which weakened PRDRMF's identification of specific categories.
Overall, it can be observed from the figure that the other methods exhibit significant noise in the four datasets and do not accurately identify the types of objects. In contrast, the PRDRMF method produces fewer mislabeled classification maps across the four datasets, with clearer boundaries that closely align with the corresponding ground truth. Among all the compared methods, PRDRMF demonstrates the best classification performance and is competitive in the task of feature recognition when utilizing both HSI data and LiDAR data simultaneously.

3.4. Computation Time Comparisons

To verify the significant advantage of the proposed PRDRMF method in reducing time complexity, we compared the running time on four datasets with the running time of the aforementioned comparison methods.
CNNs have a complex structure with multi-layer convolutions, typically resulting in high time complexity. For a fair comparison, we standardized the number of running rounds for CNN-based methods to 200 and provided their running times on all datasets to compare their time complexity in Table 14.
In Table 14, it can be seen that the running time of PRDRMF is second only to SVM among all the compared methods and is shorter than the time of the rest of the CNN-based methods in the same column. Specifically, SVM has the shortest running time, but it compromises classification accuracy, while the remaining CNN-based methods, such as CCNN and CRNN, exhibit good classification accuracy, but their running time exceeds that of PRDRMF. Therefore, PRDRMF represents the optimal method as it addresses the limitations of existing approaches by balancing high classification accuracy with low time complexity.

3.5. Comparison of Decision Fusion Methods

MV is a relatively simple and straightforward approach that does not significantly increase the complexity of the process. However, it fails to consider a substantial amount of crucial information and important details, which is not conducive to achieving the desired level of accuracy in the final classification. Naive Bayes requires the calculation of the a priori probability, which is susceptible to a considerable degree of error in the classification decision. LOGP belongs to the soft decision fusion strategies and employs uniform weight coefficients for decision fusion. However, the respective classification performance of the subclassifiers is not optimally weighted, which affects the final classification effect. The adaptive decision fusion method fuses the classification results of the subclassifiers based on a strategy of optimally assigning weight coefficients, achieving the highest classification accuracy. However, the method requires the constant identification of optimal weight coefficients, which increases the time complexity. The method presented in this paper improves the final classification accuracy, ranking second only to the adaptive decision fusion method. Furthermore, the overall running time of the algorithm presented in this paper is notably short, indicating that the time cost associated with this fusion method is not significant. In Table 15, we compare the above decision fusion methods with our proposed method on the 2018 Houston dataset; the detailed comparison results are as follows.

3.6. Ablation Experiments

To validate the superiority of combining joint HSI data and LiDAR data over using single data sources, to confirm the contribution of each CPM to the classification performance, and to verify the essential role of each module in the PRDRMF method, we designed three ablation experiments. These experiments were conducted on all datasets, and the detailed results are presented in Table 16, Table 17 and Table 18. To make the validation process more structured and organized, the entire argumentation process is divided into two parts.

3.6.1. Ablation Analysis of Different Source of Data Inputs

Considering the impact of different data sources on the model’s classification performance, three sets of experiments were conducted using single HSI data, single LiDAR data, and joint HSI and LiDAR data inputs. The experimental results are shown in Table 16.
In Table 16, the best classification accuracy is achieved by jointly using HSI data and light detection and ranging (LiDAR) data, as evidenced by the comparison of the overall accuracy (OA), average accuracy (AA), and Kappa values across the four datasets. This confirms that combining two types of data offers a significant advantage over using a single type of data. It can lead to effective information complementarity, thereby enhancing the accuracy of classification. It also confirms that the designed multiprobability decision module can effectively utilize information from various sources and ultimately achieve improved classification results.
In addition, there is a gap between the classification assessment metrics of single HSI data or single LiDAR data and the assessment metrics of the joint use of HSI data and LiDAR data. It is challenging to differentiate between spectrally coherent features and recognize occluded areas such as clouds using single HSI data. In contrast, LiDAR data can offer height and shape information on occluded features and identify features with varying heights within the same spectrum. The two types of data can obtain features from different perspectives and provide complementary information, ultimately enhancing the accuracy of feature recognition.

3.6.2. Ablation Analysis of Different CPM Inputs

Because the proposed method draws on four feature-specific CPMs, the contribution of each CPM to the classification performance is analyzed through ablation experiments. The four components are HSI_LBP, HSI_EMAP, LiDAR_LBP, and LiDAR_EMAP. Extensive ablation experiments were conducted on the 2013 Houston dataset, taken as a representative example, to validate the effectiveness of the CPMs corresponding to the four features in PRDRMF. PRDRMF is evaluated by removing each CPM in turn, as compared in Table 17, with the optimal results highlighted in bold.

3.6.3. Ablation Analysis of Different Module Inputs

The contributions of the PRDRMF method include the PRTV data de-redundancy method, the LBP and EMAP operators in the multifeature extraction module, and the multiprobability decision fusion method in the decision fusion module. Extensive ablation experiments were conducted on the four datasets to verify that these modules play an indispensable role in PRDRMF. PRDRMF is evaluated by systematically removing each module in Table 18, with the optimal results highlighted in bold.
Table 18 demonstrates that the PRTV module increases the classification accuracy on the MUUFL, Trento, and 2018 Houston datasets, particularly on MUUFL. The MUUFL dataset comprises small-scale urban neighborhood scenes with a spatial resolution of less than 1 m, which results in excessive noise and redundant information and in images that lack clarity, compromising the availability of detailed information. The PRTV module is therefore of considerable assistance in removing redundancy from the MUUFL dataset. For the 2013 Houston dataset, however, the PRTV module slightly reduces the OA value. This dataset contains more categories and a larger quantity of data, so PRTV causes a slight loss of information in this complex feature scenario; the loss is minimal and has a negligible effect on the final classification results.
LBP exerts a beneficial influence on all four datasets, yet the enhancement effect is less than 1%. In comparison to other modules, it exhibits the least pronounced impact.
EMAP plays a significant role in the multifeature extraction module, demonstrating robust performance across all four datasets, particularly in the case of the MUUFL dataset. This suggests that EMAP is capable of performing well on datasets characterized by low spatial resolution, data redundancy, and reduced clarity, with the ability to capture more detailed spatial information.
The multiprobability decision fusion method is effective on all four datasets and improves the final classification accuracy, although the improvement is modest.
In conclusion, the results demonstrate that each module in PRDRMF is indispensable for achieving optimal outcomes.
Each module in PRDRMF enhances the separability between different categories to a certain extent, which aids classification. Therefore, we added the modules to the original HSI data one by one and visualized the resulting feature distributions to confirm that PRDRMF helps separate the different categories, as illustrated in Figure 10, Figure 11, Figure 12 and Figure 13.
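The paper does not specify which embedding is used to draw these feature-distribution plots; the sketch below assumes a t-SNE projection from scikit-learn and uses hypothetical stand-in features, so it only illustrates how such a visualization could be produced.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_feature_distribution(features, labels, title):
    """Project per-pixel feature vectors to 2-D and color the points by class label."""
    embedding = TSNE(n_components=2, init="pca", random_state=0).fit_transform(features)
    plt.figure(figsize=(5, 5))
    plt.scatter(embedding[:, 0], embedding[:, 1], c=labels, s=3, cmap="tab20")
    plt.title(title)
    plt.axis("off")
    plt.show()

# Toy usage with random stand-in features; in practice these would be the raw HSI pixels,
# the PRTV outputs, or the fused multifeature representation of the labeled samples.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 30))
y = rng.integers(0, 5, size=500)
plot_feature_distribution(X, y, "Feature distribution (illustrative)")
```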

4. Conclusions

The combined use of hyperspectral and LiDAR data can yield advantageous outcomes in classification tasks, but it also suffers from data redundancy, low classification accuracy, and high time complexity. Our research aims to fully exploit the complementary advantages of hyperspectral data and LiDAR data via the PRDRMF method. These data sources offer information from different perspectives, enabling us to address the challenges outlined above and enhance the accuracy of feature recognition.
First, compared with existing methods, the PRDRMF method preprocesses the data before feature extraction: it reduces the data dimensionality, removes redundant information from the multisource remote sensing data, and lowers the complexity of the subsequent feature extraction. This provides a data preprocessing scheme for research that jointly uses HSI and LiDAR data for feature classification.
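As a rough sketch of this preprocessing idea, the example below applies PCA to a hyperspectral cube and then smooths each retained component with total-variation denoising. It is a simplified stand-in under our own assumptions: PRTV relies on relative total variation, whereas this example substitutes scikit-image's Chambolle TV denoiser, and the function name and parameter values are illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA
from skimage.restoration import denoise_tv_chambolle

def pca_tv_reduce(cube, n_components=20, tv_weight=0.1):
    """Reduce an (H, W, B) hyperspectral cube to n_components principal-component bands,
    then smooth each retained band with total-variation denoising to suppress fine texture."""
    h, w, b = cube.shape
    pcs = PCA(n_components=n_components).fit_transform(cube.reshape(-1, b))
    pcs = pcs.reshape(h, w, n_components)
    smoothed = np.stack(
        [denoise_tv_chambolle(pcs[:, :, i], weight=tv_weight) for i in range(n_components)],
        axis=-1,
    )
    return smoothed

# Toy usage on a random cube standing in for an HSI scene.
cube = np.random.default_rng(0).normal(size=(64, 64, 100))
print(pca_tv_reduce(cube, n_components=10).shape)  # (64, 64, 10)
```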
Second, the PRDRMF method performs multifeature extraction, using LBP and EMAP to extract local texture features and spatial structure features. This approach considers both the local neighborhood and the overall spatial structure of the HSI and LiDAR data, ultimately increasing the accuracy of image classification.
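As a minimal illustration of the texture-coding step, the sketch below computes a uniform LBP code map for each band with scikit-image. The full method would additionally build local LBP histograms and EMAP attribute profiles; the array shapes and parameter values here are our own assumptions.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_code_maps(bands, radius=2, n_points=16):
    """Compute a uniform LBP code map for each band of an (H, W, C) array and stack them."""
    maps = [
        local_binary_pattern(bands[:, :, i], P=n_points, R=radius, method="uniform")
        for i in range(bands.shape[2])
    ]
    return np.stack(maps, axis=-1)

# Toy usage: four quantized bands standing in for the PRTV-reduced components.
rng = np.random.default_rng(0)
bands = (rng.random(size=(64, 64, 4)) * 255).astype(np.uint8)
print(lbp_code_maps(bands).shape)  # (64, 64, 4)
```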
Third, the PRDRMF method is lightweight. It uses a kernel extreme learning machine (KELM), which has a simple structure and good classification performance, to produce a classification probability matrix for each extracted feature. These matrices are then combined by a multiprobability decision fusion step that requires no additional parameters, which reduces the time complexity of the model. Moreover, the PRDRMF method outperforms the compared methods in terms of running speed and model performance, achieving the highest classification accuracy on the four datasets, with OA values of 99.79%, 92.21%, 99.73%, and 96.93%, respectively. Notably, the PRDRMF method even achieved 100% classification accuracy for certain feature classes.
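A minimal KELM sketch is given below, assuming an RBF kernel and the standard closed-form ridge solution in kernel space. The softmax normalization that turns the raw KELM outputs into a CPM is our own assumption, since KELM outputs are not probabilities by construction, and the hyperparameter values are illustrative.

```python
import numpy as np

def rbf_kernel(A, B, gamma):
    """Gaussian (RBF) kernel matrix between the rows of A and B."""
    d2 = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * d2)

class KELM:
    """Minimal kernel extreme learning machine: closed-form ridge solution in kernel space."""
    def __init__(self, C=100.0, gamma=0.1):
        self.C, self.gamma = C, gamma

    def fit(self, X, y):
        self.X = X
        targets = np.eye(int(y.max()) + 1)[y]                   # one-hot class targets
        K = rbf_kernel(X, X, self.gamma)
        self.beta = np.linalg.solve(K + np.eye(len(X)) / self.C, targets)
        return self

    def predict_proba(self, X_test):
        scores = rbf_kernel(X_test, self.X, self.gamma) @ self.beta
        e = np.exp(scores - scores.max(axis=1, keepdims=True))  # softmax: rows sum to 1
        return e / e.sum(axis=1, keepdims=True)

# Toy usage: one CPM produced from one feature set.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 16)), rng.integers(0, 4, size=200)
cpm = KELM().fit(X, y).predict_proba(rng.normal(size=(50, 16)))
print(cpm.shape, cpm[0].sum())  # (50, 4), rows sum to 1
```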
However, our method has certain limitations. First, the data de-redundancy module causes some information loss, and further research is needed to explore ways to prevent the loss of valuable information. In addition, the applicability of the PRDRMF method to larger and more complex scenes with uneven sample distributions, or to tasks with many similar categories, remains to be verified experimentally.

Author Contributions

Conceptualization, S.C. and H.C.; methodology, S.C. and B.Z.; software, T.C.; validation, W.D.; resources, H.C.; data curation, S.C.; writing—original draft preparation, S.C. and H.C.; writing—review and editing, T.C. and W.D.; visualization, L.C.; supervision, B.Z.; project administration, T.C.; funding acquisition, H.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China (62176217), the Innovation Team Funds of China West Normal University (KCXTD2022-3), the Sichuan Science and Technology Program of China (2023YFG0028, 2023YFS0431), the A Ba Achievements Transformation Program (R23CGZH0001), the Sichuan Science and Technology Program of China (2023ZYD0148, 2023YFG0130), and the Sichuan Province Transfer Payment Application and Development Program (R22ZYZF0004).

Data Availability Statement

The 2013 Houston dataset used in this study is available at https://hyperspectral.ee.uh.edu/?page_id=1075; the MUUFL dataset is available at https://github.com/GatorSense/MUUFLGulfport/; the Trento dataset is available at https://github.com/AnkurDeria/MFT?tab=readme-ov-file; and the 2018 Houston dataset is available at https://github.com/YuxiangZhang-BIT/IEEE_TIP_SDEnet. All websites were accessed on 18 November 2024.

Conflicts of Interest

The authors declare no conflicts of interest.

Figure 1. Framework of PRDRMF.
Figure 2. The impact of PRTV on data de-redundancy. (a) Image after PRTV processing of the raw HSI data. (b) Image of the original HSI data. (c) Output of PRDRMF.
Figure 3. Effect of the PRTV parameters on classification accuracy for the four datasets: (a) smoothing degree λ; (b) texture element σ.
Figure 4. Effect of the LBP parameters on classification accuracy for the four datasets: (a) neighborhood radius r; (b) number of sampling points n.
Figure 5. Performance of KELM with different kernel functions.
Figure 6. Classification maps of the 2013 Houston dataset using different methods. (a) Pseudo-color image of HSI, (b) LiDAR, (c) ground truth map, (d) SVM (59.40%), (e) CCNN (86.92%), (f) EndNet (88.52%), (g) CRNN (88.55%), (h) TBCNN (88.91%), (i) coupled CNN (90.43%), (j) CNNMRF (90.61%), (k) FusAtNet (89.98%), (l) S2ENet (94.19%), (m) CALC (94.71%), (n) Fusion-HCT (99.76%), (o) SepG-ResNET50 (72.67%), (p) DSMSC2N (91.49%), (q) PRDRMF (99.79%).
Figure 7. Classification maps of the MUUFL dataset using different methods. (a) Pseudo-color image of HSI, (b) LiDAR, (c) ground truth map, (d) SVM (4.47%), (e) CCNN (88.96%), (f) EndNet (87.75%), (g) CRNN (91.38%), (h) TBCNN (90.85%), (i) coupled CNN (90.93%), (j) CNNMRF (88.94%), (k) FusAtNet (91.48%), (l) S2ENet (91.68%), (m) CALC (82.91%), (n) Fusion-HCT (87.43%), (o) SepG-ResNET50 (82.90%), (p) DSMSC2N (91.17%), (q) PRDRMF (92.21%).
Figure 8. Classification maps of the Trento dataset using different methods. (a) Pseudo-color image of HSI, (b) LiDAR, (c) ground truth map, (d) SVM (72.89%), (e) CCNN (97.29%), (f) EndNet (94.17%), (g) CRNN (97.22%), (h) TBCNN (97.46%), (i) coupled CNN (97.69%), (j) CNNMRF (98.40%), (k) FusAtNet (99.06%), (l) S2ENet (98.54%), (m) CALC (99.38%), (n) Fusion-HCT (99.60%), (o) SepG-ResNET50 (93.82%), (p) DSMSC2N (98.93%), (q) PRDRMF (99.73%).
Figure 9. Classification maps of the 2018 Houston dataset using different methods. (a) Pseudo-color image of HSI, (b) LiDAR, (c) ground truth map, (d) SVM (81.49%), (e) CCNN (90.09%), (f) EndNet (90.72%), (g) CRNN (91.16%), (h) TBCNN (91.21%), (i) coupled CNN (92.21%), (j) CNNMRF (92.35%), (k) FusAtNet (91.58%), (l) S2ENet (94.59%), (m) CALC (94.80%), (n) Fusion-HCT (96.68%), (o) SepG-ResNET50 (88.30%), (p) DSMSC2N (93.55%), (q) PRDRMF (96.93%).
Figure 10. Visualization of data feature distribution for the 2013 Houston dataset. (a) Raw HSI, (b) PRTV, (c) PRTV+ multifeature extraction module, (d) PRDRMF.
Figure 11. Visualization of data feature distribution for the MUUFL dataset. (a) Raw HSI, (b) PRTV, (c) PRTV+ multifeature extraction module, (d) PRDRMF.
Figure 12. Visualization of data feature distribution for the Trento dataset. (a) Raw HSI, (b) PRTV, (c) PRTV+ multifeature extraction module, (d) PRDRMF.
Figure 13. Visualization of data feature distribution for the 2018 Houston dataset. (a) Raw HSI, (b) PRTV, (c) PRTV+ multifeature extraction module, (d) PRDRMF.
Table 1. Detailed description of the four multi-sensor datasets.
Dataset | Location | Data source | Size | Spatial resolution | Bands | Wavelength range (μm) | Sensor type
2013 Houston | Houston, TX, USA | HSI | 345 × 1905 | 2.5 m | 144 | 0.38–1.05 | CASI-1500
2013 Houston | Houston, TX, USA | LiDAR | 345 × 1905 | 2.5 m | 1 | - | -
MUUFL | Long Beach, MS, USA | HSI | 325 × 220 | 0.54 m × 1.0 m | 64 | 0.375–1.05 | CASI-1500
MUUFL | Long Beach, MS, USA | LiDAR | 325 × 220 | 0.60 m × 0.78 m | 2 | 1.06 | Gemini ALTM LIDAR
Trento | Trento, Italy | HSI | 600 × 166 | 1 m | 63 | 0.42–0.99 | AISA Eagle
Trento | Trento, Italy | LiDAR | 600 × 166 | 1 m | 1 | - | Optech ALTM 3100EA
2018 Houston | Houston, TX, USA | HSI | 209 × 955 | 2.5 m | 48 | 0.38–1.05 | CASI-1500
2018 Houston | Houston, TX, USA | LiDAR | 209 × 955 | 2.5 m | 1 | - | -
Table 2. The 2013 Houston dataset.
Class | Class Name | Train | Test | Total
1 | Health grass | 198 | 1053 | 1251
2 | Stressed grass | 190 | 1064 | 1254
3 | Synthetic grass | 192 | 505 | 697
4 | Trees | 188 | 1056 | 1244
5 | Soil | 186 | 1056 | 1242
6 | Water | 182 | 143 | 325
7 | Residential | 196 | 1072 | 1268
8 | Commercial | 191 | 1053 | 1244
9 | Road | 193 | 1059 | 1252
10 | Highway | 191 | 1036 | 1227
11 | Railway | 181 | 1054 | 1235
12 | Parking lot 1 | 192 | 1041 | 1233
13 | Parking lot 2 | 184 | 285 | 469
14 | Tennis court | 181 | 247 | 428
15 | Running track | 187 | 473 | 660
Total | - | 2832 | 12,197 | 15,029
Table 3. The MUUFL dataset.
Class | Class Name | Train | Test | Total
1 | Trees | 150 | 23,096 | 23,246
2 | Mostly grass | 150 | 4120 | 4270
3 | Mixed ground surface | 150 | 6732 | 6882
4 | Dirt and sand | 150 | 1676 | 1826
5 | Road | 150 | 6537 | 6687
6 | Water | 150 | 316 | 466
7 | Building shadow | 150 | 2083 | 2233
8 | Building | 150 | 6090 | 6240
9 | Sidewalk | 150 | 1235 | 1385
10 | Yellow curb | 150 | 33 | 183
11 | Cloth panels | 150 | 119 | 269
Total | - | 1650 | 52,037 | 53,687
Table 4. The Trento dataset.
Class | Class Name | Train | Test | Total
1 | Apple trees | 129 | 3905 | 4034
2 | Buildings | 125 | 2778 | 2903
3 | Ground | 105 | 374 | 479
4 | Woods | 154 | 8969 | 9123
5 | Vineyard | 184 | 10,317 | 10,501
6 | Roads | 122 | 3052 | 3174
Total | - | 819 | 29,395 | 30,214
Table 5. The 2018 Houston dataset.
Class | Class Name | Train | Test | Total
1 | Grass healthy | 214 | 1139 | 1353
2 | Grass stressed | 740 | 4148 | 4888
3 | Trees | 418 | 2348 | 2766
4 | Water | 12 | 10 | 22
5 | Residential buildings | 826 | 4521 | 5347
6 | Non-residential buildings | 4983 | 27,476 | 32,459
7 | Roads | 981 | 5384 | 6365
Total | - | 8174 | 45,026 | 53,200
Table 6. Comparison of overall accuracy (OA) and kappa of different methods on the 2013 Houston dataset.
No.ClassClassification Algorithms
SVMCCNNEndNetCRNNTBCNNCoupled CNNCNNMRFPRDRMF
1Health grass86.3299.3281.5883.0083.1098.5185.77100.00
2Stressed grass61.6587.5683.6579.4181.2097.8386.28100.00
3Synthetic grass96.6398.80100.0099.80100.0070.6099.00100.00
4Trees94.5197.4893.0990.1592.9099.0692.8599.72
5Soil77.6599.8199.9199.7199.81100.00100.00100.00
6Water85.3199.3195.1083.21100.0041.1198.15100.00
7Residential47.3973.2382.6588.0692.5483.1491.6499.91
8Commercial34.6688.6581.2988.6194.8798.3980.7998.10
9Road81.1182.3488.2966.0183.8594.8191.3799.81
10Highway0.0075.8189.0052.2269.8992.9873.35100.00
11Railway78.6572.1083.7881.9786.1590.8898.87100.00
12Parking lot10.0085.3990.3969.8392.6091.0289.38100.00
13Parking lot20.3594.2982.4679.6479.3097.0992.75100.00
14Tennis court94.7483.62100.00100.00100.00100.00100.00100.00
15Running track96.4199.5598.10100.00100.0097.85100.00100.00
OA (%)59.4086.9288.5288.5588.9190.4390.6199.79
AA (%)62.3689.1589.9590.3090.4290.2292.0199.84
Kappa (×100)56.0285.8087.5987.5687.9689.6889.8799.77
Table 7. Comparison of overall accuracy (OA) and kappa of different methods on the 2013 Houston dataset.
No.ClassClassification Algorithms
FusAtNetS2ENetCALCFusion-HCTSepG-ResNet50DSMSC2NPRDRMF
1Health grass83.1082.9190.07100.0072.3690.12100.00
2Stressed grass96.05100.0096.1299.5377.3584.59100.00
3Synthetic grass100.00100.0099.23100.0034.8598.81100.00
4Trees93.0996.8895.8799.9086.8490.9199.72
5Soil99.4399.9199.98100.0091.38100.00100.00
6Water100.00100.0094.15100.0095.1095.80100.00
7Residential93.5395.1595.2199.9081.6291.1299.91
8Commercial92.1293.9292.5198.5761.7395.3898.10
9Road83.6391.3189.6799.4386.3195.0499.81
10Highway64.0992.9593.89100.0046.2667.66100.00
11Railway90.1394.6995.1799.9069.3597.22100.00
12Parking lot191.9389.4390.63100.0086.9493.66100.00
13Parking lot288.4283.1697.73100.0078.2592.63100.00
14Tennis court100.00100.0099.90100.0087.04100.00100.00
15Running track99.15100.0099.51100.0018.8297.46100.00
OA (%)89.9894.1994.7199.7672.6791.4999.79
AA (%)94.6594.6995.6399.8171.6392.6999.84
Kappa (×100)89.1393.6994.4399.7470.4090.7699.77
Table 8. Comparison of overall accuracy (OA) and kappa of different methods on the MUUFL dataset.
No.ClassClassification Algorithms
SVMCCNNEndNetCRNNTBCNNCoupled CNNCNNMRFPRDRMF
1Trees0.0098.3596.8691.4398.1198.9093.0492.87
2Mostly grass1.1974.9072.0963.1683.3878.6060.1792.21
3Mixed ground surface0.0082.4280.2490.2081.7790.6690.6088.59
4Dirt and sand0.0075.6673.6793.4484.6690.6097.2092.78
5Road9.7695.4796.5687.6296.3496.9092.0092.20
6Water95.2580.6264.8995.8983.3675.9899.6899.37
7Building shadow5.5263.9366.5290.1670.2973.5495.3994.72
8Building18.2195.8495.4189.2998.7796.6694.7193.99
9Sidewalk0.0062.8860.4582.9173.5264.9330.5384.40
10Yellow curb0.0052.0547.0396.9733.1519.4736.3687.88
11Cloth panels97.4871.7383.4996.6460.7767.7695.8098.32
OA (%)4.4788.9687.7591.3890.8590.9388.9492.21
AA (%)20.6777.6276.1188.8878.5677.1885.0292.39
Kappa (×100)2.61285.6784.0684.4188.0688.2285.5589.75
Table 9. Comparison of overall accuracy (OA) and kappa of different methods on the MUUFL dataset.
No.ClassClassification Algorithms
FusAtNetS2ENetCALCFusion-HCTSepG-ResNet50DSMSC2NPRDRMF
1Trees98.1098.1689.7989.2086.7894.2392.87
2Mostly grass71.6681.6473.4684.4178.4785.8192.21
3Mixed ground surface87.6590.5566.5380.6071.2081.9788.59
4Dirt and sand86.4283.0289.9192.6689.9886.6592.78
5Road95.0994.5075.8581.8776.3889.7292.20
6Water90.7372.1499.7499.3699.7399.6599.37
7Building shadow74.2779.4684.3691.3191.7094.1894.72
8Building97.5597.9394.5494.6487.3190.8793.99
9Sidewalk60.4465.4543.1877.4869.8879.7584.40
10Yellow curb9.3933.4062.3596.9690.3692.0087.88
11Cloth panels93.0280.1896.7399.1599.4198.8298.32
OA (%)91.4891.6882.9187.4382.9091.1992.21
AA (%)78.5879.6779.6789.7985.5690.8792.39
Kappa (×100)88.6589.1577.8283.6277.9488.3389.75
Table 10. Comparison of overall accuracy (OA) and kappa of different methods on the Trento dataset.
No.ClassClassification Algorithms
SVMCCNNEndNetCRNNTBCNNCoupled CNNCNNMRFPRDRMF
1Apple trees73.1299.7688.1997.7298.5199.8799.9596.26
2Buildings79.8896.4098.4995.6992.4983.8489.9799.64
3Ground93.3299.4495.19100.00100.0087.0998.3399.73
4Woods76.7397.7599.3096.8597.3299.98100.00100.00
5Vineyard77.1697.3291.96100.00100.0099.61100.00100.00
6Roads59.3593.2790.1477.7692.5698.7593.9897.79
OA (%)72.8997.2994.1797.2297.4697.6998.4099.73
AA (%)73.7497.3293.8894.6796.8094.8697.0499.55
Kappa (×100)63.6696.3992.2296.2996.6196.9197.8699.64
Table 11. Comparison of overall accuracy (OA) and kappa of different methods on the Trento dataset.
No.ClassClassification Algorithms
FusAtNetS2ENetCALCFusion-HCTSepG-ResNet50DSMSC2NPRDRMF
1Apple trees99.5499.8599.4798.9293.2899.3396.26
2Buildings98.4998.1798.6198.3499.3897.4299.64
3Ground99.73100.0096.76100.0074.3596.6699.73
4Woods100.0099.42100.00100.0099.8899.29100.00
5Vineyard99.9099.6599.97100.0095.9199.70100.00
6Roads93.3290.8396.4299.1168.0596.6097.79
OA (%)99.0698.5499.3899.6093.8298.9399.73
AA (%)98.5097.9998.5399.3988.4798.1699.55
Kappa (×100)98.7598.0699.1299.4791.7998.5799.64
Table 12. Comparison of overall accuracy (OA) and kappa of different methods on the 2018 Houston dataset.
No.ClassClassification Algorithms
SVMCCNNEndNetCRNNTBCNNCoupled CNNCNNMRFPRDRMF
1Grass healthy47.1569.8774.6366.5580.8266.0171.3775.61
2Grass stressed98.5798.1997.7285.5197.7389.1286.4693.07
3Trees72.7384.9286.4863.5488.1497.3768.7393.42
4Water50.0990.0170.01100.00100.00100.00100.00100.00
5Residential buildings44.5173.5675.0996.5376.4697.6497.2898.53
6Non-residential buildings96.9795.8295.9298.6496.7398.9198.7999.36
7Roads81.5375.0177.6370.2879.2873.1275.4692.57
OA (%)81.4990.0990.7291.1691.2192.2192.3596.93
AA (%)63.6783.8882.4582.9888.2584.4385.3793.19
Kappa (×100)65.9383.0484.1684.7585.0486.5986.8794.80
Table 13. Comparison of overall accuracy (OA) and kappa of different methods on the 2018 Houston dataset.
No.ClassClassification Algorithms
FusAtNetS2ENetCALCFusion-HCTSepG-ResNet50DSMSC2NPRDRMF
1Grass healthy64.1072.3174.8777.7361.8674.1275.61
2Grass stressed85.9590.7390.5792.0398.5988.1993.07
4Trees63.0981.9880.3792.4078.8274.7193.42
6Water100.00100.00100.00100.0040.03100.00100.00
7Residential buildings97.5397.8598.0198.7768.2298.3798.53
8Non-residential buildings98.9199.0299.7199.1795.4398.9899.36
9Roads74.4682.9184.4792.0970.9178.6492.57
OA (%)91.5894.5994.8096.6888.3093.5596.93
AA (%)83.0989.2189.5793.1273.3887.5493.19
Kappa (×100)85.4990.7991.1494.3979.8588.9194.80
Table 14. Comparison of running time of different methods on the experimental datasets (in seconds).
Classifier2013 HoustonMUUFLTrento2018 Houston
SVM8.477.182.385.71
CCNN167.41115.191.01103.46
EndNet173.92131.2692.29117.83
CRNN512.37476.21404.14456.12
TBCNN215.55115.287.77104.78
Coupled CNN185.80143.67118.94138.21
CNNMRF2104.211744.541635.901689.53
FusAtNet1218.41681.35352.20578.92
S2ENet231.57192.59114.75182.31
CALC921.01434.40287.76412.41
Fusion-HCT1727.72597.71322.79489.73
SepG-ResNET50879.32413.72233.65398.01
DSMSC2N1560.00521.17312.23479.73
PRDRMF84.3765.1444.4356.04
Table 15. Comparison of different decision fusion methods on the 2018 Houston dataset.
Methods | OA (%) | AA (%) | Kappa (×100)
MV | 95.98 | 92.89 | 93.94
Naive Bayes | 96.12 | 93.07 | 94.35
LOGP | 96.11 | 92.98 | 94.11
Adaptive decision fusion method | 97.13 | 94.11 | 95.72
Multiprobability decision fusion method | 96.93 | 93.19 | 94.80
Table 16. Ablation analysis of different sources of data inputs.
Cases | 2013 Houston OA (%) / AA (%) / K × 100 | Trento OA (%) / AA (%) / K × 100 | MUUFL OA (%) / AA (%) / K × 100 | 2018 Houston OA (%) / AA (%) / K × 100
Only HSI | 99.50 / 99.53 / 99.46 | 99.34 / 98.90 / 99.12 | 90.77 / 91.55 / 87.92 | 96.65 / 92.88 / 94.33
Only LiDAR | 98.58 / 98.90 / 98.46 | 97.66 / 96.76 / 96.86 | 75.39 / 83.47 / 69.13 | 95.76 / 96.16 / 95.79
HSI + LiDAR | 99.79 / 99.84 / 99.77 | 99.73 / 99.55 / 99.64 | 92.21 / 92.39 / 89.75 | 96.93 / 93.19 / 94.80
Table 17. Ablation analysis of different CPM inputs on the 2013 Houston dataset (- represents removal, √ represents inclusion).
Cases | HSI_LBP | HSI_EMAP | LiDAR_LBP | LiDAR_EMAP | OA (%) | AA (%) | K × 100
1-99.7699.7799.74
2-99.7799.8299.75
3-99.1299.2899.05
4-99.4499.5799.39
5--99.0099.0298.91
6--94.9795.5794.55
7--99.6599.6699.62
8--98.8099.0798.70
9--99.7399.7799.71
10--98.5298.8698.40
11---98.0798.4997.91
12---94.7495.9294.29
13---98.2398.5698.08
14---52.4857.7149.48
1599.7999.8499.77
Table 18. Ablation analysis of different module inputs.
Component | 2013 Houston OA (%) | MUUFL OA (%) | Trento OA (%) | 2018 Houston OA (%)
No PRTV | 99.80 | 90.65 | 99.31 | 96.21
No LBP | 99.63 | 89.53 | 99.70 | 96.70
No EMAP | 98.79 | 80.70 | 99.28 | 94.28
No multiprobability decision fusion method | 98.78 | 86.41 | 99.45 | 96.45
PRDRMF | 99.79 | 92.21 | 99.73 | 96.93
