Article

Cross Attention-Based Multi-Scale Convolutional Fusion Network for Hyperspectral and LiDAR Joint Classification

1 College of Computer and Control Engineering, Qiqihar University, Qiqihar 161000, China
2 Heilongjiang Key Laboratory of Big Data Network Security Detection and Analysis, Qiqihar University, Qiqihar 161000, China
3 College of Information and Communication Engineering, Dalian Minzu University, Dalian 116600, China
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(21), 4073; https://doi.org/10.3390/rs16214073
Submission received: 14 September 2024 / Revised: 21 October 2024 / Accepted: 29 October 2024 / Published: 31 October 2024

Abstract

In recent years, deep learning-based multi-source data fusion, e.g., hyperspectral image (HSI) and light detection and ranging (LiDAR) data fusion, has gained significant attention in the field of remote sensing. However, traditional convolutional neural network fusion techniques often extract discriminative spatial–spectral features poorly from diversified land covers and overlook the correlation and complementarity between different data sources. Furthermore, merely stacking multi-source feature embeddings fails to represent the deep semantic relationships among them. In this paper, we propose a cross attention-based multi-scale convolutional fusion network for HSI-LiDAR joint classification. It contains three major modules: a spatial–elevation–spectral convolutional feature extraction module (SESM), a cross attention fusion module (CAFM), and a classification module. In the SESM, improved multi-scale convolutional blocks are utilized to extract features from HSI and LiDAR to ensure discriminability and comprehensiveness in diversified land cover conditions. Spatial and spectral pseudo-3D convolutions, pointwise convolutions, residual aggregation, one-shot aggregation, and parameter-sharing techniques are implemented in the module. In the CAFM, a self-designed local–global cross attention block is utilized to collect and integrate relationships of the feature embeddings and generate joint semantic representations. In the classification module, average pooling, dropout, and linear layers are used to map the fused semantic representations to the final classification results. The experimental evaluations on three public HSI-LiDAR datasets demonstrate the competitiveness of the proposed network in comparison with state-of-the-art methods.

Graphical Abstract

1. Introduction

In recent years, remote sensing technology has played a crucial role in Earth observation tasks [1]. With the development of sensor technology, remote sensing imaging methods exhibit a diversified trend [2]. Although an abundance of multi-source data is now available, the data from each source capture only one or a few specific properties and cannot fully describe the observed scenes [3,4]. Naturally, multi-source remote sensing data fusion techniques offer a feasible solution to this problem. By integrating complementary information from multi-source data, tasks can be performed more reliably and accurately [5,6]. Specifically, light detection and ranging (LiDAR) data can provide additional elevation information to hyperspectral image (HSI) data. In this way, the joint land cover classification of HSI-LiDAR data becomes a promising approach that has achieved favorable results in practical tasks [7].
Depending on the sensor types of remote sensing data, multi-source remote sensing data can be categorized as homogeneous data or heterogeneous data [8]. HSI-LiDAR data are heterogeneous remote sensing data that contain two forms of characteristics, namely spatial–spectral features in the HSI and spatial–elevation features in the LiDAR data [9]. Depending on the hierarchical level of data fusion, multi-source remote sensing joint classification techniques can be further categorized into pixel-level, feature-level, and decision-level approaches [10]. Due to the vast variation in the target characteristics being observed, joint HSI-LiDAR data fusion is frequently processed by feature-level or decision-level techniques. In general, a typical HSI-LiDAR fusion classification network consists of four components, including the HSI feature extraction module, the LiDAR feature extraction module, the feature embedding fusion module, and the classification module, as shown in Figure 1.
For the HSI feature extraction module, the feature extraction approaches mainly include convolution-based techniques, recurrent-based techniques, transformer-based techniques, and attention-based techniques. Many convolutional neural network (CNN)-based approaches use 2D convolution to learn local contextual information from pixel-centric data cubes [11,12]. However, these methods devote insufficient attention to the spectral signatures and fail to consider the joint spatial–spectral information in HSI. Naturally, some scholars use 2D-3D convolutions to improve feature extraction modules to obtain joint spatial–spectral feature embeddings, which has achieved promising results in practical applications [13,14]. Nevertheless, 3D convolution requires more computation along the spectral dimension than 2D convolution, which can dramatically increase the number of parameters and the computational complexity. Moreover, 2D and 3D convolutions are restricted by the receptive field and omit the long-distance dependencies among features. To capture the long-distance dependencies of spectral signatures, some scholars regard the spectral signatures of HSI as time series signals and employ recurrent neural network (RNN) techniques to process them [15]. However, the sequential processing of RNNs is time-consuming, which limits their application in practical tasks. In recent years, transformers and attention mechanisms have been proposed, providing new ideas for HSI feature extraction [16,17]. These approaches can capture long-distance dependencies of feature embeddings in parallel [18,19]. However, attention mechanisms are less capable of extracting spatial information than CNN models. Furthermore, the training and inference speed of transformer models is strongly influenced by data size and model structure. To address these issues, Mamba models have emerged as a promising approach; they have strong long-distance modeling capabilities while maintaining linear computational complexity [20,21]. However, it is hard for Mamba models to provide an integrated spatial and spectral understanding of HSI features. For the LiDAR feature extraction module, 2D CNNs and attention mechanisms are widely used, and their high performance has been demonstrated in real tasks [22,23,24].
For the feature embedding fusion module, the fusion techniques mainly include concatenate fusion, hierarchical fusion, residual fusion, graph-based fusion, and transformer- and attention-based fusion. The concatenate fusion technique achieves data fusion by combining the feature embeddings of different input data into a joint feature [25,26]. However, this approach only performs simple stacking of multi-source features and only performs well on some small-scale datasets. In contrast, the hierarchical fusion technique is an effective improvement over the concatenate fusion approach. In hierarchical fusion, shallow and deep features, which are extracted in different hierarchies, interact with each other to perform fusion [23,27,28]. In the feature fusion process, different attention modules are used for hierarchical fusion to achieve complete complementarity of features, which is expressed as shallow fusion and deep fusion. However, the effectiveness of hierarchical fusion depends on the quality of the features extracted from the network at each level of the model, which limits the application of the method to complex tasks. Residual fusion uses CNNs and residual aggregation to realize data fusion [29,30]. However, the method works poorly at handling the heterogeneous data fusion problem. To represent the relationship between multi-source features, scholars have tried to use graph-based fusion techniques to achieve data fusion [9,31]. However, the computational complexity of the graph-based fusion technique is significantly influenced by the number of nodes and edges, which makes it unsuitable for large-scale data. The transformer- and attention-based fusion techniques utilize attention mechanisms to achieve data fusion [32,33]. This approach can capture long-distance dependencies of feature embeddings and provide a high-performance representation of the relationship between multi-source features [34,35]. However, the transformer and attention mechanisms are less sensitive to location information and lack the ability to collect local information. The classification module serves to map the fused features to the final classification results. Linear layers and adaptive weighting techniques are commonly used in this component to enhance the robustness and generalization of the model [36,37].
In this paper, a cross attention-based multi-scale convolutional fusion network (CMCN) is proposed for HSI-LiDAR land cover classification. The approach mainly consists of three modules: a spatial–elevation–spectral convolutional feature extraction module (SESM), a cross attention fusion module (CAFM), and a classification module. Two considerations are focused on to improve the performance of the network. First, discriminative spatial–spectral and spatial–elevation features are extracted in diversified land cover conditions, and the correlative and complementary information is preserved. Second, the long-distance dependencies of feature embeddings are captured, and the representation of deep semantic relations for multi-source data is achieved. To capture discriminative spatial–spectral and spatial–elevation features and preserve the correlation and complementarity, improved multi-scale convolutional blocks are utilized in the SESM to extract features from HSI and LiDAR data. Spatial and spectral pseudo-3D [38] convolutions and pointwise convolutions are jointly used to extract spatial–elevation–spectral features and simplify the computation. Residual and one-shot aggregations are employed to maintain shallow features in deep layers and make the network easier to train. The parameter-sharing technique is used to exploit the correlation and complementarity. To capture the long-distance dependencies and model the relationships among the feature embeddings, a local–global cross attention mechanism is applied in the CAFM to collect the local contextual features and integrate the globally significant relational semantic information. A classification module is implemented to collect the fused features and translate them into the final classification results. More details about the proposed method are presented in Section 3. The main contributions are summarized as follows:
1. A multi-scale convolutional feature extraction module is designed to extract spatial–elevation–spectral features from HSI and LiDAR data. In this module, spatial and spectral pseudo-3D multi-scale convolutions and pointwise convolutions are jointly utilized to extract discriminative features, which can enhance the ability to extract ground characteristics in diversified environments. Residual and one-shot aggregations are employed to maintain the shallow features and ensure convergence. To capture the correlation and complementarity of spatial and elevation information among HSI and LiDAR data, a parameter-sharing technique is applied to generate feature embeddings;
2. A local–global cross attention block is designed to collect and integrate effective information from multi-source feature embeddings. To collect the local information, local-based convolutional layers are implemented to perform the mapping transformation. After that, the global cross attention mechanism is applied to achieve long-distance dependencies and generate attention weights. Then, multiplication operation and residual aggregation are used to produce semantic representations and accomplish data fusion;
3. A novel cross attention-based multi-scale convolutional fusion network is proposed to achieve the joint classification of HSI and LiDAR data. A multi-scale CNN framework with parameter sharing and a local–global cross attention mechanism are combined to exploit joint deep semantic representations of HSI and LiDAR data and achieve data fusion. The classification module is implemented to produce the classification results. Experimental results on three publicly available datasets are reported.
The rest of the paper is organized as follows. Section 2 introduces the related work, including HSI and LiDAR data classification, residual and one-shot aggregations, and the cross attention mechanism. Section 3 presents the details of the proposed network. Section 4 gives the experimental results, and Section 5 provides a discussion. Section 6 concludes this article and outlines future work.

2. Related Work

2.1. HSI and LiDAR Data Classification

Different from traditional HSI classification models, joint HSI and LiDAR classification models are required to consider how to capture correlative and complementary information and represent the relationship between HSI and LiDAR data while extracting discriminative features. Shallow learning methods use a stacking approach to combine features to achieve joint classification results. However, these methods only utilize the respective features of HSI and LiDAR data and fail to achieve genuine feature-level fusion. Improved approaches utilize dimensionality reduction [39] and related subspace [40,41] technologies to align HSI and LiDAR data in a shared feature space, where the joint features of HSI and LiDAR can be well expressed. However, feature mapping is a challenging problem due to its ill-defined nature, making it difficult to find efficient ways to quantitatively map HSI and LiDAR data.
Unlike shallow learning methods, deep learning methods use deep neural networks to achieve feature extraction, feature fusion, and discriminative decisions. The original approaches use a combination of shallow feature extraction and deep learning to achieve HSI-LiDAR fusion classification. For example, Ghamisi et al. [42] propose an HSI-LiDAR data fusion framework using extinction profiles and deep learning. The fused features, which are extracted by extinction profiles, are fed to a deep learning-based classifier to ultimately produce the classification results. Later, the feedforward neural network (FNN), residual network (ResNet), and squeeze-and-excitation network (SENet) were employed to implement HSI-LiDAR fusion classification. For example, Chen et al. [43] propose an HSI-LiDAR data fusion network, which utilizes a two-branch network to separately extract spectral–spatial–elevation features and then utilizes an FNN to integrate these features for the final classification. Ge et al. [30] propose a deep ResNet-based fusion framework for HSI-LiDAR data. Three fusion methods are implemented to enhance the effectiveness of the method: residual network-based deep feature fusion, residual network-based probability reconstruction fusion, and residual network-based probability multiplication fusion. Feng et al. [44] incorporate squeeze-and-excitation networks into the fusion step to adaptively realize feature calibration. Although these deep learning-based fusion methods represent an improvement over traditional methods, their feature extraction, feature fusion, and discriminative decision processes are basic and rough, which constrains their practicality. To further improve the capability of fusion classification networks, many improved methods have been proposed based on CNN, transformer, attention, and Mamba techniques. For the CNN methods, Yu et al. [11] propose a simplified CNN architecture for HSI classification, in which 2D convolution blocks extract spatial features while the abundant spectral information is treated as training channels. Roy et al. [45] propose a hybrid 2D-3D spectral CNN for HSI classification. Li et al. [46] propose an HSI-LiDAR data fusion network based on a convolutional neural network and composite kernels. A three-stream CNN is designed to extract the spectral, spatial, and elevation features from HSI-LiDAR datasets, and a multi-sensor composite kernel scheme is designed to fuse the extracted features and produce the final classification results. In [47], a novel HSI-LiDAR classification method based on multi-view feature learning and multi-level information fusion is proposed. For the transformer methods, Zhang et al. [32] propose a transformer and multi-scale fusion network for HSI and LiDAR joint classification. Zhao et al. [33] propose a hierarchical CNN and transformer network for HSI and LiDAR joint classification. For the attention techniques, in [48], a dual-channel spatial, spectral, and multi-scale attention convolutional network is proposed for HSI-LiDAR data fusion. A novel composite attention learning mechanism is introduced to fully integrate the features in the data sources. Song et al. [49] propose a multi-scale pseudo-Siamese network with an attention mechanism to fuse HSI and LiDAR data. In [35], an attention-guided fusion and classification framework based on a convolutional neural network is proposed to classify the land cover of HSI and LiDAR data.
The spectral attention mechanism is adopted to assign weights to the spectral channels, and the cross attention mechanism is introduced to impart significant spatial weights from LiDAR to HSI. Li et al. [50] propose a morphological convolution and attention calibration network for HSI-LiDAR classification. A dual attention module is designed to extract features of the input data. For the mamba methods, Li et al. [20] propose a mamba-based model to exploit long-range interaction of the whole image and spatial–spectral information for the HSI dataset. These methods use deep learning techniques to enhance the data fusion capabilities to some extent.

2.2. Residual Aggregation and One-Shot Aggregation

The depth of the neural network is critical to the performance of the model. When increasing the depth of the neural network, the model can perform more complicated feature mappings, which can theoretically provide better performance. However, experiments find that deep neural networks suffer from a degradation problem, i.e., the network accuracy saturates and even decreases when the depth of the network increases. To address this problem, residual aggregation is proposed to incorporate residual pathways by explicitly modifying the network structure and retaining shallow features using identity mapping. Given H as a hidden layer, F as a feature map, and + as the summation operator, the output feature map of the l-th hidden layer can be expressed as
F_l = H_l(F_{l-1}) + F_{l-1}
However, experiments show that multiple summation operations wash away the information embedded in the previous features. To better maintain the previous feature maps, one-shot aggregation [51] uses a concatenation operation instead of the summation operation and aggregates the previous feature maps to the target l-th layer at one time. Given Cat(·) as the concatenation operator, the process can be expressed as
F_l = Cat(H_1(F_1), H_2(F_2), \ldots, H_{l-1}(F_{l-1}))
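As a concrete illustration of the two aggregation styles above, the following is a minimal PyTorch sketch. The layer choices (3 × 3 2D convolutions), channel width, and number of aggregated layers are illustrative assumptions, not taken from the paper; only the summation-versus-concatenation pattern reflects the formulas.

```python
import torch
import torch.nn as nn

class ResidualAggregation(nn.Module):
    """F_l = H_l(F_{l-1}) + F_{l-1}: the hidden-layer output is summed with its input."""
    def __init__(self, channels):
        super().__init__()
        self.hidden = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x):
        return self.hidden(x) + x

class OneShotAggregation(nn.Module):
    """F_l = Cat(H_1(F_1), ..., H_{l-1}(F_{l-1})): intermediate outputs are concatenated once."""
    def __init__(self, channels, num_layers=3):
        super().__init__()
        self.hidden = nn.ModuleList(
            [nn.Conv2d(channels, channels, kernel_size=3, padding=1) for _ in range(num_layers)]
        )

    def forward(self, x):
        outputs = []
        for layer in self.hidden:
            x = layer(x)          # chain the hidden layers
            outputs.append(x)     # keep every intermediate feature map
        return torch.cat(outputs, dim=1)  # single channel-wise concatenation

x = torch.randn(2, 24, 7, 7)
print(ResidualAggregation(24)(x).shape)   # torch.Size([2, 24, 7, 7])
print(OneShotAggregation(24)(x).shape)    # torch.Size([2, 72, 7, 7])
```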

2.3. Cross Attention Mechanism

The attention mechanism [52] is a feature transformation method that is based on the human perceptual process. It is designed to focus more on the informative areas while taking into account nonessential areas to a lesser extent [53]. Specifically, the queries, keys, and values are input vectors, and the attention mechanism maps the queries and a set of key-value pairs to weights, which are used to transform the values and obtain the output vector [54]. The attention mechanism can capture long-range interactions of input vectors and has shown great potential in fields such as natural language processing and computer vision [38]. The cross attention mechanism [55,56] uses multi-source input vectors for the queries, keys, and values. It generates contextual representations for the values by computing associations between queries and keys and helps the model understand the relationship between multi-source features. For instance, Liu et al. [57] propose a multi-scale cross-interaction attention network for HSI classification, and cross attention is implemented to detect spectral–spatial features at different scales. In [58], a spatial–spectral cross attention module is proposed to extract the interactive spatial–spectral fusion features within the transformer block. Yang et al. [59] introduce a cross attention spectral–spatial network for HSI classification. Cross-spectral and cross-spatial attention components are proposed to capture spectral and spatial features from HSI patches. In [60], the cross attention mechanism is used to select HSI bands, guided by LiDAR data. In [61], cross HSI and LiDAR attention is introduced. In the approach, LiDAR patch tokens serve as queries, while keys and values are derived from HSI patch tokens. The fused features are used for the joint classification of HSI and LiDAR data.

3. Methodology

The proposed method is introduced in this section. First, the preliminary is given. After that, the SESM, CAFM, and classification modules are described in detail. Finally, the overall framework of the proposed model is discussed.

3.1. Preliminary

Mathematically, given an HSI dataset X_{HSI} ∈ R^{H×W×c} and its corresponding LiDAR dataset X_{LiDAR} ∈ R^{H×W×1}, where H and W refer to the height and width of the two datasets and c is the spectral size of the HSI. The input HSI cube can be described as x_{HSI}^{i} ∈ R^{f×h×w×c}, where x_{HSI}^{i} represents the cube-based HSI data of the i-th pixel, f is the number of feature maps (f is set to 1 when initializing the input data), h×w is the patch size, and h = w. The input LiDAR cube can be presented as x_{LiDAR}^{i} ∈ R^{f×h×w×1}, which is the cube-based LiDAR data of the i-th pixel. The i-th input data of the proposed network can be denoted as x^{i} = {x_{HSI}^{i}, x_{LiDAR}^{i}}. The ground-truth label of the i-th input data is expressed as y_{true}^{i} ∈ {1, 2, ..., C}, where C is the number of classes. The output of the network is expressed as y_{pre}^{i} ∈ R^{1×C}. For quick reference, the notations used in this paper are summarized in Table 1.
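The following is a minimal sketch of how the cube pair (x_{HSI}^{i}, x_{LiDAR}^{i}) could be assembled for one pixel. The border-handling choice (reflection padding) and the array names are assumptions for illustration; the paper does not specify them.

```python
import numpy as np

def extract_cubes(hsi, lidar, row, col, patch=7):
    """Extract the cube pair centered at pixel i = (row, col).

    hsi:   (H, W, c) hyperspectral image
    lidar: (H, W)    LiDAR-derived DSM (single band)
    Returns arrays shaped (1, patch, patch, c) and (1, patch, patch, 1),
    matching the (f, h, w, c) layout with f = 1 at initialization.
    """
    pad = patch // 2
    hsi_p = np.pad(hsi, ((pad, pad), (pad, pad), (0, 0)), mode="reflect")
    lidar_p = np.pad(lidar, ((pad, pad), (pad, pad)), mode="reflect")
    x_hsi = hsi_p[row:row + patch, col:col + patch, :][np.newaxis, ...]
    x_lidar = lidar_p[row:row + patch, col:col + patch][np.newaxis, ..., np.newaxis]
    return x_hsi, x_lidar

hsi = np.random.rand(166, 600, 63)     # Trento-sized placeholder data
dsm = np.random.rand(166, 600)
x_h, x_l = extract_cubes(hsi, dsm, row=80, col=300)
print(x_h.shape, x_l.shape)            # (1, 7, 7, 63) (1, 7, 7, 1)
```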

3.2. The Overall Framework of the Proposed Method

The proposed framework mainly consists of three modules. Initially, the cube-based inputs are fed into the SESM. Pseudo-3D convolutions and pointwise convolutions are used to extract discriminative features. The parameter-sharing technique is employed to exploit correlation and complementarity. Then, the extracted features are passed through a local–global cross attention fusion module. Local-based convolutions and global-based cross attention mechanisms are combined to represent the relationships of multi-source data and achieve data fusion. Finally, the fused features are connected to the classification module to provide classification results. The overall structure of the proposed method is shown in Figure 2. Early stopping and dynamic learning rate technologies are implemented to reduce the training time and provide better network convergence. Cross-entropy loss is applied in the proposed network, which can be expressed as
L = -\sum_{i=1}^{N} \sum_{j=1}^{C} y_{true}^{ij} \log(y_{pre}^{ij})
where y_{true}^{ij} represents the j-th element of the label y_{true}^{i} and y_{pre}^{ij} represents the predicted probability that the pixel belongs to the j-th class.
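A minimal sketch of this training criterion in PyTorch follows; nn.CrossEntropyLoss combines log-softmax and negative log-likelihood, which matches the loss above when the labels are one-hot and the predictions are softmax-normalized. The batch size and class count shown are illustrative.

```python
import torch
import torch.nn as nn

# Cross-entropy over raw network outputs (logits) and integer class labels.
criterion = nn.CrossEntropyLoss()

logits = torch.randn(32, 6)            # batch of 32 samples, C = 6 classes (Trento-like)
labels = torch.randint(0, 6, (32,))    # ground-truth class indices y_true^i
loss = criterion(logits, labels)
print(loss.item())
```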

3.3. Spatial–Elevation–Spectral Convolutional Feature Extraction Module

A triple-branch CNN architecture (B_{H1}, B_{H2}, and B_{L}) is applied to extract spatial–spectral and spatial–elevation features from HSI and LiDAR data. Specifically, B_{H1} is used to extract the spectral features and B_{H2} is used to extract the spatial features from the HSI. B_{L} is applied to extract the spatial–elevation features from the LiDAR data. To fully explore the discriminative features, multi-scale spatial and spectral pseudo-3D convolutional blocks are designed. Residual and one-shot aggregations are implemented to enhance the module. Parameter sharing is introduced to capture the correlation and complementarity between B_{H2} and B_{L}, allowing the module to better exploit spatial and elevation features of HSI and LiDAR and reducing the number of free parameters. Figure 3 shows the detailed architecture of the SESM.
For B_{H1}, the input is the HSI cube x_{HSI}^{i}. A 1 × 1 × 1 convolutional layer is implemented to maintain the spectral size and increase the number of feature maps. The 3D convolution can be expressed as
F_l(x, y, z) = \sum_{s_1=0}^{k_1} \sum_{s_2=0}^{k_2} \sum_{s_3=0}^{k_3} s_l(s_1, s_2, s_3)\, F_{l-1}(x + s_1, y + s_2, z + s_3) + b_l
where F_l(x, y, z) represents the output of the l-th layer at position (x, y, z), F_{l-1}(x + s_1, y + s_2, z + s_3) represents the value of the feature map of the (l-1)-th layer at position (x + s_1, y + s_2, z + s_3), s_l(s_1, s_2, s_3) represents the value of the convolution kernel of the l-th layer at position (s_1, s_2, s_3), k_1, k_2, and k_3 represent the sizes of the convolution kernel, and b_l represents the bias of the l-th layer.
Then, three spectral convolutional blocks are used to extract features. One-shot aggregation is used to collect the shallow and deep features to improve the representation ability. The process can be expressed as
F_l = Cat(H_{spe}(F_{in1}), H_{spe}(F_{in2}), H_{spe}(F_{in3}))
where H_{spe} represents the spectral convolutional block, F_{in1}, F_{in2}, and F_{in3} are the corresponding input data, and Cat(·) represents the concatenation operator.
After that, a 1 × 1 × 1 convolutional layer is implemented to reduce the number of feature maps. The residual aggregation is used to enhance convergence. The residual aggregation can be expressed as
F_l = H(F_s) + F_s
where F_s represents the shallow feature map and H(·) represents the hidden intermediate layer. Finally, a 1 × 1 × c convolutional layer is used to reduce the spectral size. The dataflow of B_{H1} is shown in Table 2. All these layers are followed by batch normalization [62] (Batch Norm) and the Mish activation function [63] (Mish) to avoid overfitting and provide nonlinear capabilities for the module. The Batch Norm formula is
BN(x) = \frac{x - E(x)}{\sqrt{Var(x)}}
where BN(·) is the batch normalization function, and E(·) and Var(·) represent the mean and variance functions along the element dimension. The Mish activation function can be expressed as
M(x) = x \cdot \tanh(\ln(1 + e^{x}))
where M(·) represents the Mish function. Assume the following formula
f_{k_1×k_2×k_3}(x) = M(BN(Conv_{k_1×k_2×k_3}(x)))
where f_{k_1×k_2×k_3}(·) is a composite function, Conv_{k_1×k_2×k_3}(·) represents a 3D convolutional layer, and k_1×k_2×k_3 represents the kernel size. The whole process of B_{H1} can be expressed as
F_{1}^{H1} = f_{1×1×1}(x_{HSI}^{i})
F_{spe1}^{H1} = H_{spe}(F_{1}^{H1}),   F_{spe2}^{H1} = H_{spe}(F_{spe1}^{H1}),   F_{spe3}^{H1} = H_{spe}(F_{spe2}^{H1})
F^{H1} = f_{1×1×c}(f_{1×1×1}(Cat(F_{spe1}^{H1}, F_{spe2}^{H1}, F_{spe3}^{H1})) + F_{1}^{H1})
where F^{H1} represents the output of B_{H1}.
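A minimal PyTorch sketch of the composite unit f_{k_1×k_2×k_3}(·) = Mish(BatchNorm(Conv3d(·))) follows. The channel-first tensor layout, padding choices, and the Trento-like band count (c = 63) are assumptions for illustration; only f = 24 feature maps comes from the experimental setup later in the paper.

```python
import torch
import torch.nn as nn

class ConvBNMish(nn.Module):
    """f_{k1 x k2 x k3}(x) = Mish(BatchNorm(Conv3d(x))), the composite unit used in the SESM."""
    def __init__(self, in_maps, out_maps, kernel_size, padding=0):
        super().__init__()
        self.conv = nn.Conv3d(in_maps, out_maps, kernel_size, padding=padding)
        self.bn = nn.BatchNorm3d(out_maps)
        self.act = nn.Mish()

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

# x_HSI^i with f = 1 feature map, a 7 x 7 patch, and c = 63 bands (channel-first layout)
x = torch.randn(2, 1, 7, 7, 63)
f_1x1x1 = ConvBNMish(1, 24, (1, 1, 1))      # increases the number of feature maps, keeps sizes
f_1x1xc = ConvBNMish(24, 24, (1, 1, 63))    # collapses the spectral dimension to length 1
y = f_1x1xc(f_1x1x1(x))
print(y.shape)                               # torch.Size([2, 24, 7, 7, 1])
```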
For B_{H2} and B_{L}, the structures of the networks are almost identical. The only difference is that B_{H2} uses a 1 × 1 × c convolutional layer to reduce the spectral size and increase the number of feature maps, while B_{L} uses a 1 × 1 × 1 convolutional layer to achieve the same function. After that, three spatial convolutional blocks are used to extract spatial and elevation features from the HSI and LiDAR data. Two 1 × 1 × 1 convolutional layers are used to encode the features and exploit the correlation and complementarity through parameter sharing, which can enhance the generalizability and robustness of the neural network. Residual and one-shot aggregations are used to enhance convergence. Batch Norm and Mish are applied after these layers. The process of B_{H2} is expressed as follows.
F_{1}^{H2} = f_{1×1×c}(x_{HSI}^{i})
F_{spa1}^{H2} = H_{spa}(F_{1}^{H2}),   F_{spa2}^{H2} = H_{spa}(F_{spa1}^{H2}),   F_{spa3}^{H2} = H_{spa}(F_{spa2}^{H2})
F^{H2} = f_{1×1×1}(f_{1×1×1}(Cat(F_{spa1}^{H2}, F_{spa2}^{H2}, F_{spa3}^{H2})) + F_{1}^{H2})
where F^{H2} represents the output of B_{H2}. Similarly, B_{L} can be expressed as
F_{1}^{L} = f_{1×1×1}(x_{LiDAR}^{i})
F_{spa1}^{L} = H_{spa}(F_{1}^{L}),   F_{spa2}^{L} = H_{spa}(F_{spa1}^{L}),   F_{spa3}^{L} = H_{spa}(F_{spa2}^{L})
F^{L} = f_{1×1×1}(f_{1×1×1}(Cat(F_{spa1}^{L}, F_{spa2}^{L}, F_{spa3}^{L})) + F_{1}^{L})
where F^{L} is the output of B_{L}. The parameter sharing occurs in the last two f_{1×1×1}(·) layers. The dataflows of B_{H2} and B_{L} are shown in Table 3.
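To make the parameter-sharing idea concrete, the sketch below applies the same two 1 × 1 × 1 encoding layers to both the HSI spatial branch and the LiDAR branch, so spatial and elevation features are projected by identical weights. The channel widths are illustrative, and the residual addition of F_{1}^{H2} / F_{1}^{L} between the two shared layers is omitted for brevity.

```python
import torch
import torch.nn as nn

# Two shared 1x1x1 Conv3d + BatchNorm + Mish stacks reused by both branches.
shared_encoder = nn.Sequential(
    nn.Conv3d(72, 24, kernel_size=1), nn.BatchNorm3d(24), nn.Mish(),
    nn.Conv3d(24, 24, kernel_size=1), nn.BatchNorm3d(24), nn.Mish(),
)

cat_h2 = torch.randn(2, 72, 7, 7, 1)   # concatenated multi-scale HSI spatial features (illustrative)
cat_l = torch.randn(2, 72, 7, 7, 1)    # concatenated multi-scale LiDAR features (illustrative)
f_h2 = shared_encoder(cat_h2)          # the same weights ...
f_l = shared_encoder(cat_l)            # ... are reused for the LiDAR branch
print(f_h2.shape, f_l.shape)           # torch.Size([2, 24, 7, 7, 1]) twice
```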
To capture discriminative features, multi-scale spectral and spatial pseudo-3D convolutional blocks are designed. The pseudo-3D convolutions and pointwise convolutions are jointly utilized to extract multi-scale features, which can adequately exploit the ground characteristics in complex environments. Furthermore, residual aggregation is applied to make the block easier to train. The two blocks present a similar structure. The difference is that the spectral convolutional block uses multi-scale spectral pseudo-3D convolutions to extract spectral features, and the spatial convolutional block applies multi-scale spatial pseudo-3D convolutions to focus on spatial features. Specifically, for the spectral convolutional block, three convolutional layers with 1 × 1 × 3 , 1 × 1 × 5 and 1 × 1 × 7 kernels are used to generate multi-scale feature maps. The number of data cubes is decreased to half the original number to reduce complexity. After that, the concatenation operator is applied to converge the features. A 1 × 1 × 1 convolutional layer is implemented to reduce the number of data cubes, and the residual aggregation is conducted to maintain the original features and make the network easier to converge. Batch Norm and Mish are used after the convolutional layers to provide stability and nonlinearity for the block. The process can be expressed as
H_{spe}(x) = x + f_{1×1×1}(M(BN(Cat(Conv_{1×1×3}(x), Conv_{1×1×5}(x), Conv_{1×1×7}(x)))))
For the spatial convolutional block, the kernels of the pseudo-3D convolutions are 3 × 3 × 1 , 5 × 5 × 1 , and 7 × 7 × 1 . The following stages are the same as the spectral block. The spatial convolutional block can be expressed as
H_{spa}(x) = x + f_{1×1×1}(M(BN(Cat(Conv_{3×3×1}(x), Conv_{5×5×1}(x), Conv_{7×7×1}(x)))))
The dataflows of the spectral and spatial convolutional blocks are shown in Table 4 and Table 5.
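The sketch below is a minimal PyTorch rendering of the spectral convolutional block H_{spe}: three spectral pseudo-3D convolutions (1 × 1 × 3, 1 × 1 × 5, 1 × 1 × 7) with halved feature maps, concatenation, Batch Norm and Mish, a 1 × 1 × 1 fusion convolution, and a residual connection. The exact placement of normalization and the halving ratio follow the description above, but the channel width is an assumption; the spatial block would be identical with the kernels transposed to 3 × 3 × 1, 5 × 5 × 1, and 7 × 7 × 1.

```python
import torch
import torch.nn as nn

class SpectralBlock(nn.Module):
    """Sketch of H_spe: multi-scale spectral pseudo-3D convolutions with residual aggregation."""
    def __init__(self, maps):
        super().__init__()
        half = maps // 2  # each branch uses half the feature maps to limit complexity
        self.branches = nn.ModuleList([
            nn.Conv3d(maps, half, kernel_size=(1, 1, k), padding=(0, 0, k // 2)) for k in (3, 5, 7)
        ])
        self.bn_act = nn.Sequential(nn.BatchNorm3d(3 * half), nn.Mish())
        self.fuse = nn.Sequential(nn.Conv3d(3 * half, maps, kernel_size=1),
                                  nn.BatchNorm3d(maps), nn.Mish())

    def forward(self, x):
        multi = torch.cat([branch(x) for branch in self.branches], dim=1)  # multi-scale features
        return x + self.fuse(self.bn_act(multi))  # residual aggregation keeps the original features

x = torch.randn(2, 24, 7, 7, 63)    # (batch, feature maps, h, w, spectral size)
print(SpectralBlock(24)(x).shape)   # torch.Size([2, 24, 7, 7, 63])
```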

3.4. Cross Attention Fusion Module

To capture the long-distance dependencies and represent the relationships of the features, a local–global cross attention fusion module is designed to integrate the multi-source features and generate relational semantic representations to achieve data fusion. Different from the existing cross attention techniques, which commonly use max or average pooling to capture the global linear spectral and spatial dependencies between feature maps and ignore the local spatial cues in the feature maps, the proposed CAFM uses a combination of convolutional layers and attention operations to achieve feature representation. Local-based convolutions are used to collect contextual cues of feature maps. A global-based cross attention mechanism is applied to exploit long-distance dependencies. Residual aggregations are implemented to enhance convergence. The overall structure of the CAFM is shown in Figure 4.
In the attention mechanism, Q is the target of the query, K is the key feature representation, and V is the value of the key feature representation. First, the attention scores are obtained by the correlation between Q and K , which can be used to express the dependency of Q and K . Then the output is produced by weighted summation of V according to the attention scores. In the calculation process of the attention mechanism, Q is the subject of the query and is the core information that generates the attention scores. For the joint HSI and LiDAR datasets, we consider that the spectral signatures provided by HSI are the core information that provides more discriminative power than spatial and elevation features, which should be given a higher priority. Therefore, in the first stage of the CAFM module, spectral features are chosen as Q instead of spatial features or elevation information. With this design, the spectral signatures are fully utilized. And the correlation expressions of the spectral signatures with spatial features and elevation information are obtained, respectively. Based on the same consideration, in the second stage of the CAFM module, the correlation expression of the spectral signatures and elevation information is employed as Q to obtain a joint expression of the spectral signatures, spatial features, and elevation information, which is treated as the final fusion features.
Specifically, the spatial features (F^{H2}), spectral features (F^{H1}), and elevation features (F^{L}) of the input are extracted by the SESM. The shapes of the three features are (h, w, f), which are obtained by removing the spectral dimension through the reshape operator. The whole process consists of two stages. In stage 1, the spectral features are fused with the spatial features and the elevation features by the cross attention mechanism, respectively; 3 × 3 convolutions are used to capture local contextual information of the feature maps, and 1 × 1 convolutions are applied to replace the traditional FFN to achieve the linear transformation, which can effectively reduce the computational complexity. The initialization process is expressed as follows.
V_H = Re(Conv_{1×1}(F^{H2})),   K_H = Re(Conv_{3×3}(F^{H2})),   Q_H = Re(Conv_{3×3}(F^{H1}))
V_L = Re(Conv_{1×1}(F^{L})),   K_L = Re(Conv_{3×3}(F^{L}))
where Q_H, K_H, V_H, K_L, and V_L represent the queries, keys, and values, Re(·) represents the reshape operator, which is used to change the shape of the input data, Conv_{k_1×k_2}(·) represents a 2D convolutional layer, and k_1×k_2 represents the kernel size.
Layer normalization (Layer Norm) [64] is applied to stabilize the module, which is commonly observed in attention mechanisms. The Layer Norm formula is
LN(x) = \frac{x - E(x)}{\sqrt{Var(x)}}
where LN(·) is the layer normalization function, and E(·) and Var(·) represent the mean and variance functions along the feature dimension.
To enhance flexibility, the multi-head approach is applied to provide cross attention weights, which can be expressed as
Attention(Q, K, V) = softmax\left(\frac{QK^{T}}{\sqrt{d_k}}\right) V
head_i = Attention(QW_i^{Q}, KW_i^{K}, VW_i^{V})
MultiHead(Q, K, V) = Cat(head_1, \ldots, head_h) W^{O}
where Q, K, and V represent the queries, keys, and values, h represents the number of heads, head_i represents the output of the i-th head, W^{O} represents the output transform matrix, W_i^{Q}, W_i^{K}, and W_i^{V} represent the transform matrices of the queries, keys, and values in the i-th head, Attention(·) is the attention function, d_k is the feature size of the key vectors, and softmax(·) represents the similarity normalization function. The process of stage 1 can be expressed as
F^{HH} = LN(Conv_{1×1}(Re(MultiHead(Q_H, K_H, V_H))))
F^{HL} = LN(Conv_{1×1}(Re(MultiHead(Q_H, K_L, V_L))))
where F^{HH} and F^{HL} represent the outputs of stage 1.
Furthermore, feature compression is used for local feature extraction to condense valuable information and reduce the computation. The dataflow of stage 1 is shown in Table 6.
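The following is a minimal PyTorch sketch of the stage-1 idea: local 3 × 3 and 1 × 1 convolutions build the queries, keys, and values, the feature maps are flattened into token sequences, and multi-head attention captures the global cross dependencies. The channel width, head count, and the omission of the feature-compression ratio r are simplifying assumptions, not the exact CAFM configuration.

```python
import torch
import torch.nn as nn

class LocalGlobalCrossAttention(nn.Module):
    """Sketch of a local-global cross attention block (CAFM stage-1 style)."""
    def __init__(self, channels=24, heads=2):
        super().__init__()
        self.to_q = nn.Conv2d(channels, channels, 3, padding=1)  # queries from spectral features
        self.to_k = nn.Conv2d(channels, channels, 3, padding=1)  # keys from spatial/elevation features
        self.to_v = nn.Conv2d(channels, channels, 1)             # values via pointwise convolution
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.proj = nn.Conv2d(channels, channels, 1)             # 1x1 conv replaces the FFN
        self.norm = nn.LayerNorm(channels)

    def forward(self, f_query, f_kv):
        b, c, h, w = f_query.shape
        q = self.to_q(f_query).flatten(2).transpose(1, 2)        # (b, h*w, c) token sequence
        k = self.to_k(f_kv).flatten(2).transpose(1, 2)
        v = self.to_v(f_kv).flatten(2).transpose(1, 2)
        out, _ = self.attn(q, k, v)                              # global cross attention
        out = out.transpose(1, 2).reshape(b, c, h, w)
        out = self.proj(out).flatten(2).transpose(1, 2)
        return self.norm(out).transpose(1, 2).reshape(b, c, h, w)

f_h1 = torch.randn(2, 24, 7, 7)   # spectral features used as Q
f_h2 = torch.randn(2, 24, 7, 7)   # spatial features used as K and V
print(LocalGlobalCrossAttention()(f_h1, f_h2).shape)   # torch.Size([2, 24, 7, 7])
```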
Stage 2 is designed to further interact with the fused features and exploit the relationship between multi-source data to generate joint deep semantic representations. To achieve the objective, a further cross attention approach is performed. Different from the process of stage 1, residual aggregations are implemented in stage 2 to maintain shallow features and provide convergence. The initialization steps can be expressed as follows.
V_{HH} = Re(Conv_{1×1}(F^{HH})),   K_{HH} = Re(Conv_{3×3}(F^{HH})),   Q_{HL} = Re(Conv_{3×3}(F^{HL}))
where V_{HH}, K_{HH}, and Q_{HL} represent the values, keys, and queries. The process of stage 2 can be expressed as follows.
F^{HHL} = Cat(F^{H2}, LN(F^{HL} + Conv_{1×1}(Re(MultiHead(Q_{HL}, K_{HH}, V_{HH})))))
where F^{HHL} represents the output of stage 2. The dataflow of stage 2 is shown in Table 7.

3.5. The Classification Module

The classification module is used to process the extracted semantic features to generate the final classification result. The structure of the classification module is shown in Figure 2. The classification module consists of an average pooling layer, a Batch Norm layer, a Mish activation function, a reshape layer, a dropout layer, and a linear layer. Specifically, the average pooling layer is used to further integrate the deep semantic features into classification results. The Batch Norm layer and Mish layer are applied to normalize features and provide nonlinear mapping. The reshape layer is used to eliminate redundant dimensions. The dropout layer is implemented to enhance the generalization of the module. The linear layer is used to generate the predictive labels. The process of the classification module can be expressed as follows.
y_{pre}^{i} = Linear(Dropout(Re(M(BN(Avgpool(F^{HHL}))))))
where y_{pre}^{i} represents the predictive label of the i-th input data, Linear(·) represents the linear layer, Dropout(·) represents the dropout layer, and Avgpool(·) represents the average pooling layer. The dataflow of the classification module is shown in Table 8.
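A minimal sketch of this head in PyTorch follows. The fused-feature channel width (48) and the class count (6) are illustrative assumptions; only the layer ordering reflects the description above.

```python
import torch
import torch.nn as nn

# Average pooling -> BatchNorm + Mish -> flatten (reshape) -> dropout -> linear classifier.
classifier = nn.Sequential(
    nn.AdaptiveAvgPool2d(1),   # average pooling over the spatial dimensions
    nn.BatchNorm2d(48),
    nn.Mish(),
    nn.Flatten(),              # remove the redundant spatial dimensions
    nn.Dropout(p=0.5),
    nn.Linear(48, 6),          # class scores y_pre^i
)

fused = torch.randn(32, 48, 7, 7)   # placeholder for F^HHL over a batch of 32 cubes
print(classifier(fused).shape)      # torch.Size([32, 6])
```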

4. Experiment

4.1. Dataset Description

In the experiment, three HSI-LiDAR pair datasets with different land covers are used to evaluate the effectiveness of the proposed network: the Trento dataset, the MUUFL dataset, and the Houston2013 dataset. Brief overviews of the datasets are given as follows.
Trento Dataset [4]: The Trento dataset is an HSI-LiDAR pair dataset, where the HSI data were captured by an AISA Eagle sensor, and the LiDAR digital surface model (DSM) data were acquired by an Optech ALTM 3100EA sensor. The dataset is captured over a rural area south of the city of Trento, Italy. The spatial size of the Trento dataset is 166 × 600 and the spatial resolution is about 1 m . The HSI data contains 63 bands with a spectral wavelength ranging from 420 to 990 nm . The LiDAR DSM data can reflect the height of ground objects. The land covers are classified into six categories, including Apple trees, Buildings, Ground, Woods, Vineyard, and Roads. The pseudo-color image for the HSI data, the LiDAR DSM image, and the ground-truth map of the Trento dataset are shown in Figure 5. The classes, colors, and the number of samples for each class are exhaustively provided in Table 9.
MUUFL Dataset [65]: The MUUFL dataset was collected by the ITRES CASI-1500 sensor in November 2010 at the University of Southern Mississippi Gulf Park campus in Long Beach, Mississippi, and contains an HSI dataset and a LiDAR dataset. The spatial size is 325 × 220, and the spatial resolution is 0.54 × 1.0 m. The HSI contains 64 available bands in the range of 375 to 1050 nm. The LiDAR data can reflect the height of the ground objects. The land covers are classified into 11 categories, including Trees, Mostly grass, Mixed ground surface, Dirt and sand, Road, Water, Building Shadow, Building, Sidewalk, Yellow curb, and Cloth panels. The pseudo-color image for the HSI data, the LiDAR DSM image, and the ground-truth map are shown in Figure 6. The classes, colors, and the number of samples for each class are provided in Table 9.
Houston2013 Dataset [66,67]: The Houston2013 dataset was acquired by the ITRES CASI-1500 sensor over the University of Houston campus, Houston, Texas, USA, and the neighboring urban area in 2012. It is composed of HSI data and LiDAR DSM data. The spatial size is 349 × 1905, and the spatial resolution is about 2.5 m. The HSI data contain 144 spectral bands in the 380 to 1050 nm region. The LiDAR data can reflect the height of the ground objects. The land covers are classified into 15 categories, including Healthy grass, Stressed grass, Synthetic grass, Trees, Soil, Water, Residential, Commercial, Road, Highway, Railway, Parking Lot 1, Parking Lot 2, Tennis Court, and Running track. The pseudo-color image for the HSI data, the LiDAR DSM image, and the ground-truth map of the Houston2013 dataset are shown in Figure 7. The classes, colors, and the number of samples for each class are provided in Table 9.

4.2. Experimental Setup and Assessment Indices

To evaluate the performance of the proposed network on the multi-source remote sensing datasets, three HSI-LiDAR pair datasets with different land covers and spatial resolutions are introduced to our experiment. Ten representative methods are collected for comparison, including SVM [68], HYSN [45], DBDA [53], PMCN [69], FusAtNet [34], CCNN [22], AM3net [27], HCTnet [33], Sal2RN [65], and MS2CAN [70]. Among these methods, SVM is adopted to represent the classical machine learning HSI classification methods based on spectral signatures; HYSN, DBDA, and PMCN are used to represent the classical deep learning HSI classification methods based on spatial–spectral information; and FusAtNet, CCNN, AM3net, HCTnet, Sal2RN, MS2CAN are employed to represent the state-of-the-art deep learning HSI-LiDAR classification methods based on spatial–elevation–spectral information. The details of the comparisons are described as follows:
1. SVM: this method finds the optimal hyperplane, which is determined by the support vectors, to achieve classification;
2. HYSN: this approach proposes a hybrid spectral convolutional neural network for HSI classification and uses spectral–spatial 2D-3D convolutions to extract features;
3. DBDA: this approach proposes a double-branch dual-attention mechanism network for HSI classification. CNNs and self-attentions are used to extract spectral–spatial features;
4. PMCN: this method uses multi-scale spectral-spatial convolutions to extract features from HSI data, and an attention mechanism is applied to enhance the performance;
5. FusAtNet: this method uses residual aggregation and attention blocks to achieve data fusion of HSI and LiDAR data;
6. CCNN: this approach proposes a coupled convolutional neural network for HSI and LiDAR classification. Multi-scale CNNs are used to capture spectral–spatial features from HSI and LiDAR, and a hierarchical fusion method is implemented to fuse the extracted features;
7. AM3net: this approach uses CNNs to extract spectral–spatial–elevation features and the involution operator is specially designed for spectral features. A hierarchical mutual-guided module is proposed to fuse feature embeddings to achieve HSI-LiDAR data classification;
8. HCTnet: this method proposes a dual-branch approach to achieve HSI and LiDAR classification; 2D-3D convolutions are implemented to extract features, and a transformer network is used to fuse the features;
9. Sal2RN: this approach uses CNNs to extract spectral–spatial features from HSI and LiDAR data and applies a cross attention mechanism to achieve fusion;
10. MS2CAN: this approach is a multiscale pyramid fusion framework based on spatial–spectral cross-modal attention for HSI and LiDAR classification.
SVM uses optimal parameters determined by experimental analysis. The other compared methods use the default parameters specified in their original papers. The data augmentation of FusAtNet is removed in the actual experiment.
For the proposed CMCN, the patch size of the data cube is set to 7 × 7. The batch size is set to 32. The number of feature maps (f) is set to 24. The multiplicity of channel compression (r) and the number of heads (h) in the multi-head attention, which are used in the CAFM, are set to 2 and 2, respectively. The dropout rate is set to 0.5. Additional hyperparameters are listed as follows. The number of epochs is set to 200. The initial learning rate is set to 5.0 × 10^-4. The adaptive moment estimation (Adam) [71] optimizer is applied to train the network, where the attenuation rates are set to (0.9, 0.999) and the fuzzy factor is set to 10^-8. The cosine annealing technology [72] is adopted with 200 epochs. The early stopping technology is used in the training process with a patience of 50 epochs; 1.00% of the labeled samples are randomly selected as training samples and validation samples, respectively. The remaining labeled samples are collected as testing samples.
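The sketch below shows one way this training configuration could be wired up in PyTorch (Adam with betas (0.9, 0.999) and eps 1e-8, an initial learning rate of 5e-4, cosine annealing over 200 epochs, and early stopping with a patience of 50 epochs). The model stand-in, the placeholder validation accuracy, and the omitted data loaders are assumptions for illustration only.

```python
import torch

model = torch.nn.Linear(48, 6)   # stand-in for the CMCN network
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4, betas=(0.9, 0.999), eps=1e-8)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=200)

best_acc, patience, wait = 0.0, 50, 0
for epoch in range(200):
    # ... one training pass over the 1% training split would go here ...
    scheduler.step()                 # cosine annealing of the learning rate
    val_acc = 0.0                    # placeholder for validation accuracy
    if val_acc > best_acc:
        best_acc, wait = val_acc, 0  # improvement: reset the early-stopping counter
    else:
        wait += 1
        if wait >= patience:         # stop after 50 epochs without improvement
            break
```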
The overall accuracy (OA), average accuracy (AA), and Kappa coefficient (Kappa) [73] are introduced to quantitatively measure the performance of the competitors. All experiments are repeated 10 times independently, and the average values are reported as the final results. The experimental hardware environment is a workstation with Intel Xeon E5-2680v4 processor 2.4 GHz and NVIDIA GeForce RTX 2080Ti GPU. The software environment is CUDA v11.2, PyTorch 1.10, and Python 3.8.

4.3. Experimental Results

We first compare the performance of the various methods on the Trento dataset. The classification results are given in Table 10, and the full-factor classification maps are shown in Figure 8. The best classification accuracy for each category, as well as OA, AA, Kappa, and training time, are highlighted in bold in the tables. Observing the classification accuracies of each method, we can see that the Trento dataset is relatively easy to classify with sufficient training samples. SVM gives the lowest OA (85.39%), indicating that pixel classification methods using only spectral signatures are less effective than methods using spatial–spectral features. To be specific, C2-Buildings and C6-Roads are hard for SVM to classify (74.12% and 71.57%). This indicates that ground objects of these two categories are difficult to distinguish by spectral signatures alone. HYSN, DBDA, and PMCN provide higher OAs (95.50%, 96.55%, and 97.46%) than SVM. It indicates that the inclusion of spatial information is helpful for the improvement of classification accuracy. Viewing the classification accuracies of these spatial–spectral-based deep learning HSI classification methods in various categories, especially in C2-Buildings and C6-Roads, we can see that the accuracies increase gradually. It demonstrates that the usage of multi-scale convolution and attention mechanisms can effectively improve the generalization of the model. The HSI-LiDAR fusion classification methods (FusAtNet, CCNN, AM3net, HCTnet, Sal2RN, MS2CAN, and CMCN) yield higher OAs than those of the HSI classification methods (SVM, HYSN, DBDA, and PMCN). It indicates that the elevation information provided by LiDAR data can provide additional discriminative features to help improve the classification accuracy. Checking the average classification accuracies of each category and the OAs of these HSI-LiDAR fusion classification methods, we can see that CCNN, MS2CAN, and CMCN obtain relatively higher classification accuracies. It further demonstrates that multi-scale feature extraction and multi-level feature fusion can effectively exploit discriminative characteristics in complex ground cover environments. In particular, the proposed method (CMCN) obtained the highest OA, which shows that the proposed method can effectively capture correlative and complementary information and fuse it to generate discriminative deep semantic features. Observing the full-factor classification maps and the ground truth map, we can see that some scattered buildings, ground, and roads are difficult to distinguish. Region I is a parcel of buildings, ground, and roads. The comparison shows that CMCN can express the detailed information of ground cover more precisely. In Region II, there are two trees on the road. They are also clearly resolved on the full-factor classification map given by CMCN. Viewing the training times of the competitors, SVM gives the best training time (4.63 s). Comparing the deep neural networks, HCTnet provides the shortest training time (10.07 s). In contrast, FusAtNet gives the longest training time (68.03 s). The training time of CMCN is 30.98 s.
To further test the performance of the proposed method, experiments are implemented on the MUUFL dataset, which contains complex topographical landscapes and an unbalanced number of labeled samples. The classification results and full-factor classification maps are presented in Table 11 and Figure 9. Different from the experimental results on the Trento dataset, SVM obtains an OA of 83.87%, which is higher than those of HYSN and FusAtNet. For the spatial–spectral-based deep learning classification methods, DBDA provides the highest OA (88.09%). The spatial–elevation–spectral-based fusion classification methods provide relatively higher classification accuracies than those of SVM, HYSN, and PMCN. MS2CAN and CMCN obtain the second-highest and the highest OAs (88.65% and 88.99%) for HSI-LiDAR classification. Reviewing the classification accuracies for each category, it can be seen that C7-Building Shadow, C9-Sidewalk, and C10-Yellow curb are hard to classify. This may be due to the dispersed distribution of ground cover and the similarity in elevation, which leads to the LiDAR data failing to provide additional valid discriminatory information. In addition, inadequate training of the deep neural network caused by an insufficient number of labeled samples for the C10-Yellow curb may also be one of the reasons for the difficulty in recognizing this category. Checking the classification accuracies of C7-Building Shadow and C9-Sidewalk, we can see that CCNN and CMCN received relatively high experimental results (83.35% and 80.91% for C7-Building Shadow; 74.94% and 75.98% for C9-Sidewalk), indicating that multi-scale feature extraction has better adaptability for remote sensing images with vast structural variations. For the C10-Yellow curb, we can see that SVM provides the highest accuracy (68.40%); meanwhile, all other methods give low classification accuracy on the C10-Yellow curb. The accuracy of the proposed method on the C10-Yellow curb is also poor (8.66%), indicating that the approach performs poorly with insufficient labeled samples and requires continuous efforts to improve it. Observing the full-factor classification maps and the ground truth map, CMCN yields a relatively clearer map for land cover classification. In Regions I and II, the buildings, building shadows, and roads are smoothly recognized. However, we can see that the sidewalk is misclassified as mostly grass in Region I. In Region II, the yellow curb is misclassified as road. SVM provides the shortest training time (11.75 s). FusAtNet gives the longest training time (225.94 s). The training time of CMCN is 66.53 s.
To further test the performance of the proposed network, experiments are conducted on the Houston2013 dataset. The classification results and the full-factor classification maps are given in Table 12 and Figure 10. The spectral-based SVM obtains the lowest OA (75.70%). The spatial–spectral-based methods provide higher OAs (79.98%, 86.53%, and 89.09%) than SVM. PMCN, which is composed of multi-scale convolutions and attention mechanisms, obtains the highest OA among these spatial–spectral-based deep learning methods. The spatial–elevation–spectral-based methods obtain relatively higher OAs than those of SVM and the spatial–spectral-based methods. CMCN achieves the highest OA (89.47%) among all competitors. Checking the classification accuracies of each category, we can see that C9-Road and C12-Parking Lot 1 are relatively difficult to classify. For C9-Road, the classification accuracies of all methods are below 90%. FusAtNet gives the lowest accuracy (53.10%), while CMCN obtains the highest result (85.28%). For C12-Parking Lot 1, SVM provides the lowest accuracy (44.75%), and MS2CAN gives the highest accuracy (93.83%). The proposed CMCN obtains an accuracy of 76.83%, which is not competitive among the competitors. This may be caused by the scattered distribution of labeled samples in C9-Road and C12-Parking Lot 1, where spatial and elevation information cannot be fully utilized. Observing the full-factor classification maps and the ground-truth map, we can see that CMCN provides clearer and smoother classification maps in most categories. In Regions I and II, it can be seen that buildings, roads, and land covers can be clearly identified. However, some of the ground objects in Region II are still poorly recognized due to cloud obscuration. For the training time, SVM provides the shortest time (3.49 s), and FusAtNet obtains the longest time (91.21 s). The training time of CMCN is 24.2 s.

5. Discussion

5.1. Impact of the Hyper-Parameters

To detect the impact of the hyper-parameters on the proposed network, four hyper-parameters are investigated in the experiments, including the patch size of the data cubes ( h , w ), the number of feature maps ( f ), the multiplicity of channel compression ( r ), and the number of heads ( h ). The experimental results are demonstrated in Figure 11.
To be specific, Figure 11a illustrates the OAs of CMCN for Trento, MUUFL, and Houston2013 datasets under different patch sizes of the data cubes. We can see clearly that the OAs of CMCN on the three datasets increase and then decrease as the patch sizes increase. This phenomenon is easy to appreciate since small patches cannot provide enough spatial information, while too large patches present more redundant data. As a result, we need to choose an appropriate patch size to optimize the classification results. In the experiment, the default value of patch size is set to h = w = 7 .
Figure 11b illustrates the OAs of CMCN for three HSI-LiDAR pair datasets under different numbers of feature maps, which appear in the multi-scale convolutional blocks. A larger number of feature maps indicates more learnable parameters and a more complex network structure. From the experimental results, the best f is 24 for the Trento dataset (98.83%). For the MUUFL dataset, the best f is 32 (89.62%). For the Houston2013 dataset, the best f is 64 (90.73%). It shows that increasing the number of feature maps is helpful to boost the performance of the network to some extent. To strike a balance between performance and efficiency, we choose f = 24 as the default hyper-parameter setting for the actual experiments.
Figure 11c shows the OAs of CMCN under different multiplicities of channel compression in the CAFM. A larger multiplicity indicates a smaller channel width. From the experimental results, it is encouraging to find that a higher compression multiplicity (r = 4) provides a higher OA (98.92%) on the Trento dataset, and there is only a small decrease in OAs (0.07%, 0.32%, 0.85%; 0.83%, 1.26%, 1.40%) compared with r = 2 on the MUUFL and Houston2013 datasets. These results encourage us to increase the channel compression ratio in future research. In the experiment, the default value of the multiplicity of channel compression is set to r = 2.
Finally, we compare the performance of CMCN for different numbers of heads in the CAFM, and the experimental results are displayed in Figure 11d. We can see that the number of heads appears to have a small impact on the Trento (98.77%, 98.83%, 98.85%, 98.95%, and 98.87%) and MUUFL (89.15%, 88.99%, 88.91%, 88.76%, and 88.21%) datasets and a large impact on the Houston2013 dataset (88.74%, 89.47%, 90.41%, 90.62%, and 89.37%). Theoretically, a larger number of heads could provide more flexibility in the attention mechanism. Considering the robustness of the model, we set the default number of heads to h = 2 in the experiments.

5.2. Investigation of the Proportion of Training Samples

In this section, we examine the performance of the competitors in the context of different proportions of training samples on three HSI-LiDAR datasets. It is an important investigation since supervised learning methods are data-driven algorithms, and the number of training samples can directly determine the performance of the classification methods. To comprehensively analyze the performance of the competitors under different training sample conditions, we randomly selected 0.5%, 1%, 2%, 4%, and 5% labeled samples in each class to compose the training sample set. The classification results are reported in Figure 12. In general, a large number of training samples enables the supervised learning methods to be adequately trained and also provides sufficient discriminative information, thus helping the methods to improve the discriminative ability. It is confirmed in the experiment, where the OAs of the competitors are low when the proportion of training samples is small, and vice versa. When the proportion of training samples is small (0.5%), the OAs of MS2CAN and CMCN decrease less than those of other methods. It indicates that these methods can better extract effective information under small sample conditions. CMCN provides consistently competitive results with the increase of the training sample proportion. When the training sample proportion reaches 5%, the classification accuracies of all methods achieve the maximum values. Among them, CMCN obtained the highest classification accuracy in all three datasets, where the accuracies are saturated in the Trento (99.7%) and Houston2013 (98.83%) datasets. The accuracy is 94.01% in the MUUFL dataset. The experimental results demonstrate again the superiority of the multi-scale convolutional network coupled with the local–global cross attention mechanism in the task of pixel classification of remote sensing images and provide thoughts to design the fusion network for the multi-source dataset.

5.3. Computational Cost and Visualization of Data Features

In this section, we discuss the computational complexity of the competitors and visualize the feature extraction ability of the proposed method. The number of learnable parameters (Par) and floating-point operations (FLOPs) of the competitors on the three datasets are shown in Table 13. In general, more complex convolutional structures and larger input data increase the number of learnable parameters and the FLOPs of a network. This phenomenon is observed in Table 13. For example, both HYSN and FusAtNet contain a large number of learnable parameters, and their FLOPs are correspondingly extensive. When the input is the Houston2013 dataset (larger spectral size and more categories), the numbers of learnable parameters and FLOPs of HYSN and FusAtNet also increase. A similar trend is found for DBDA, CCNN, MS2CAN, and HCTnet, although their numbers of learnable parameters and FLOPs are relatively small. However, the variations of parameters and FLOPs of PMCN, AM3net, Sal2RN, and CMCN differ from those of the previous methods. AM3net and Sal2RN have a large number of parameters but relatively low FLOPs. In AM3net, a large number of convolutional layers and attention blocks compose the network, which increases the learnable parameters, while the input data are uniformly compressed during preprocessing, which reduces the FLOPs and keeps the parameters and FLOPs identical across all datasets. In addition, AM3net uses involution to extract spectral features for single pixels rather than for the entire data cube, which further reduces the FLOPs of the network. Sal2RN employs dense blocks, which leads to smaller FLOPs. In contrast, the numbers of learnable parameters of PMCN and CMCN are small, but their FLOPs are relatively extensive. Examining the computational process of PMCN and CMCN, we can see that the large number of convolutional operations in these networks accounts for the considerable FLOPs.
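The parameter counts in Table 13 can be reproduced for any PyTorch implementation with a few lines of code. The sketch below is a generic illustration; the toy model in the example is purely hypothetical and does not reproduce the CMCN architecture, and FLOPs counting is only indicated in a comment.

```python
# A minimal sketch of counting learnable parameters for a PyTorch model.
import torch
import torch.nn as nn

def count_parameters(model: nn.Module) -> float:
    """Number of trainable parameters, in millions."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad) / 1e6

if __name__ == "__main__":
    # Hypothetical toy model, used only to demonstrate the counting utility.
    toy = nn.Sequential(nn.Conv2d(64, 24, 3, padding=1),
                        nn.BatchNorm2d(24),
                        nn.Conv2d(24, 6, 1))
    print(f"{count_parameters(toy):.3f} M trainable parameters")
    # FLOPs are typically estimated separately, e.g., with per-layer hooks that
    # accumulate multiply-accumulate counts for a given input size; not shown here.
```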
To intuitively investigate the feature extraction capability of the proposed CMCN, t-Distributed Stochastic Neighbor Embedding (t-SNE) [74] is adopted to visualize the distributions of the original HSI data and the output of the CMCN in a low-dimensional feature space for the three datasets. The feature distributions are illustrated in Figure 13. The feature distributions of the original HSI data in the three datasets appear chaotic, and some categories overlap, such as C2-Buildings and C6-Roads as well as C1-Apple trees and C5-Vineyard in the Trento dataset, C1-Trees, C6-Water, and C7-Building Shadow in the MUUFL dataset, and C8-Commercial, C10-Highway, C11-Railway, and C13-Parking Lot 2 in the Houston2013 dataset. The CMCN can be regarded as a feature mapping method that maps the features into distributions that are easier to distinguish. As shown in Figure 13, the distributions of the HSI-LiDAR features are more dispersed in the feature space after being processed by the CMCN, which assists in distinguishing different categories of ground objects.
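A minimal sketch of this visualization step is given below, assuming features is an (N, D) matrix of pixel features (raw HSI spectra or CMCN embeddings) and labels holds the corresponding class ids. The t-SNE settings and plotting parameters are illustrative rather than the exact ones used for Figure 13.

```python
# A minimal t-SNE visualization sketch for pixel features and class labels.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_tsne(features: np.ndarray, labels: np.ndarray, title: str) -> None:
    # Embed the high-dimensional features into 2-D with t-SNE.
    emb = TSNE(n_components=2, init="pca", random_state=0).fit_transform(features)
    plt.figure(figsize=(5, 5))
    plt.scatter(emb[:, 0], emb[:, 1], c=labels, s=2, cmap="tab20")
    plt.title(title)
    plt.axis("off")
    plt.show()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(300, 64))      # stand-in for HSI spectra or CMCN embeddings
    labs = rng.integers(0, 6, size=300)     # stand-in class ids
    plot_tsne(feats, labs, "t-SNE of toy features")
```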

5.4. Model Analysis

In this section, model analysis is performed to investigate the effectiveness of the components of the proposed CMCN. The performance of the model is evaluated by iteratively removing each module from our model. For the triple-branch CNN architecture of the SESM, the multi-scale feature extraction blocks of each branch are removed, and the modified models are called CMCN-re- B H 1 , CMCN-re- B H 2 , and CMCN-re- B L . For the CAFM, the local–global cross attention (CAt) block is removed, yielding CMCN-re-CAt, in which the spatial, spectral, and elevation features are concatenated directly and fed into the classification module. In addition, the one-shot aggregation (OS) technique and the parameter-sharing (PS) technique are removed in turn to check their impact on the model; the modified models are called CMCN-re-OS and CMCN-re-PS. The experimental results of the models on the three datasets are shown in Table 14. The best OA, AA, and Kappa are highlighted in bold.
Comparing CMCN-re- B H 1 , CMCN-re- B H 2 , and CMCN-re- B L with our model, we can see that the classification accuracies decrease on all three datasets. This suggests that the multi-scale feature extraction blocks in the three branches exploit discriminative information and thus enhance the performance of the model to some extent. A further observation shows that the three branches perform differently on the three datasets. For Trento and MUUFL, removing B L has the largest impact on OA, with decreases of 0.71% and 2.98%, respectively. For Houston2013, removing B H 2 produces the largest drop, 2.42%. This is a consequence of the complex data characteristics of the remote sensing datasets, which encourages us to collect spatial, elevation, and spectral information in a comprehensive manner. Comparing the OAs of CMCN-re-CAt and our model, the OAs decrease on all three datasets (by 0.49%, 0.67%, and 1.08%) after removing the cross attention block, indicating that the local–global cross attention mechanism further boosts the generalization of the model by integrating relationships among the feature embeddings. For CMCN-re-OS, applying the one-shot aggregation technique brings small increases in OA (0.11%, 0.80%, and 0.54%), demonstrating that maintaining shallow features into deep layers improves the performance of the model. Finally, we test the impact of the parameter-sharing technique with CMCN-re-PS. Encouragingly, the results indicate that parameter sharing is beneficial, with OA increases of 0.55%, 0.94%, and 0.60%, respectively. These results encourage us to utilize flexible parameter-sharing techniques to collect correlative and complementary information in future studies.
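The ablation variants in Table 14 can be organized conveniently by toggling module flags in a configuration object. The sketch below shows one possible way to enumerate the variants; the flag names and the build/train step are assumptions for illustration, not the exact implementation used here.

```python
# A minimal sketch for enumerating the ablation variants listed in Table 14.
from dataclasses import dataclass, replace as cfg_replace

@dataclass(frozen=True)
class CMCNConfig:
    use_bh1: bool = True               # spectral branch of the SESM
    use_bh2: bool = True               # spatial (HSI) branch of the SESM
    use_bl: bool = True                # elevation (LiDAR) branch of the SESM
    use_cross_attention: bool = True   # local-global cross attention in the CAFM
    one_shot_aggregation: bool = True
    parameter_sharing: bool = True

ABLATIONS = {
    "CMCN-re-BH1": {"use_bh1": False},
    "CMCN-re-BH2": {"use_bh2": False},
    "CMCN-re-BL": {"use_bl": False},
    "CMCN-re-CAt": {"use_cross_attention": False},
    "CMCN-re-OS": {"one_shot_aggregation": False},
    "CMCN-re-PS": {"parameter_sharing": False},
    "CMCN": {},
}

if __name__ == "__main__":
    base = CMCNConfig()
    for name, overrides in ABLATIONS.items():
        cfg = cfg_replace(base, **overrides)
        print(name, cfg)   # in practice: build, train, and evaluate each variant
```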

5.5. Limitations of the Model

In this study, a deep learning-based model is proposed for HSI-LiDAR joint classification. However, several limitations remain. First, the predictions of the model are influenced by the quality of the labeled samples; errors or biases in the real data may affect the discriminatory power of the model. Second, predictions for categories with small sample sizes in unbalanced data are inaccurate. Third, the FLOPs of the model are large, resulting in a longer training time than the competitors. Fourth, the number of adjustable parameters is relatively large, which increases the difficulty of optimizing the model. Fifth, the limited interpretability of the model may hinder its application to real tasks. These limitations should be kept in mind when using the model to predict and analyze remote sensing data, and we will continue to improve and refine the model to enhance its predictive performance and interpretability.

6. Conclusions

In this paper, a cross attention-based multi-scale convolutional fusion network (CMCN) is proposed for pixel-wise HSI-LiDAR classification. The proposed model consists of three modules: the SESM, the CAFM, and the classification module. The SESM extracts spatial, elevation, and spectral features from the HSI and LiDAR data. The CAFM fuses the extracted HSI and LiDAR features in a cross-modal representation learning manner and generates joint semantic information. The classification module maps the semantic features to classification results. Techniques such as multi-scale convolution, the cross attention mechanism, one-shot aggregation, residual aggregation, parameter sharing, batch normalization, layer normalization, and the Mish activation function are implemented to improve the performance of the network. Three HSI-LiDAR datasets with different land covers and spectral–spatial resolutions are used to verify the effectiveness of the proposed method, and ten relevant methods are included for comparison. In addition, the impact of the hyper-parameters, the proportion of training samples, the computational cost, the visualization of data features, the model analysis, and the limitations are discussed.
In conclusion, our research contributes to the field of multi-source data fusion and classification by proposing an effective framework that combines multi-scale CNNs and cross attention techniques. Compared with state-of-the-art methods, the proposed CMCN provides competitive classification performance on widely used datasets, including Trento, MUUFL, and Houston2013. The experimental results demonstrate the potential of these techniques in extracting discriminative spatial–spectral features and capturing the correlation and complementarity between different data sources for HSI and LiDAR joint classification. Although the proposed method performs effectively in HSI-LiDAR classification, several issues remain, such as the quality of the labeled samples, unbalanced available data, FLOPs, parameter scale, and interpretability. In the future, we aim to overcome these issues and further enhance the robustness and overall performance of our approach in multi-source data fusion and classification.

Author Contributions

Conceptualization, H.G.; Data curation, H.G., H.P., Y.L., C.L., D.L. and H.M.; Formal analysis, H.G. and H.P.; Funding acquisition, H.G., H.P. and L.W.; Investigation, H.G., H.P., Y.L., C.L., D.L. and H.M.; Methodology, H.G.; Project administration, H.G., H.P. and L.W.; Resources, H.G., H.P. and L.W.; Software, H.G.; Supervision, H.G., H.P. and L.W.; Validation, H.G.; Visualization, H.G.; Writing—original draft, H.G.; Writing—review and editing, H.G., H.P., Y.L., C.L., D.L. and H.M. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China under Grant 62071084, in part by Heilongjiang Provincial Natural Science Foundation of China under Grant LH2023F050, and in part by Fundamental Research Funds in Heilongjiang Provincial Universities under Grant 145309208.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Acknowledgments

The authors would like to thank the handling editor and anonymous reviewers for their insights and comments. The authors also would like to thank the Hyperspectral Image Analysis group at the University of Houston and the IEEE GRSS DFC2013 for providing the CASI University of Houston datasets.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
HSI: Hyperspectral image
LiDAR: Light detection and ranging
CNN: Convolutional neural network
RNN: Recurrent neural network
CMCN: Cross attention-based multi-scale convolutional fusion network
SESM: Spatial–elevation–spectral convolutional feature extraction module
CAFM: Cross attention fusion module
Batch Norm: Batch normalization
Mish: Mish activation function
Layer Norm: Layer normalization
DSM: Digital surface model
SVM: Support vector machine
HYSN: Hybrid spectral convolutional neural network
DBDA: Double-branch dual-attention mechanism network
PMCN: Pyramidal multiscale convolutional network with polarized self-attention
FusAtNet: Dual attention-based spectro–spatial multimodal fusion network
CCNN: Coupled convolutional neural network
AM3net: Adaptive mutual-learning-based multimodal data fusion network
HCTnet: Hierarchical CNN and transformer
Sal2RN: Spatial–spectral salient reinforcement network
MS2CAN: Multi-scale spatial–spectral cross-modal attention network
Adam: Adaptive moment estimation
OA: Overall accuracy
AA: Average accuracy
Kappa: Kappa coefficient
Par: Parameters
FLOPs: Floating-point operations
t-SNE: t-Distributed stochastic neighbor embedding
CAt: Cross attention
OS: One-shot aggregation
PS: Parameter sharing

References

  1. Tong, X.D. Promote the implementation of high-score projects and help the construction of the “Belt and Road” initiative. Spacecr. Recovery Remote Sens. 2018, 39, 18–25. [Google Scholar]
  2. Sun, W.W.; Yang, G.; Chen, C.; Chang, M.H.; Huang, K.; Meng, M.Z.; Liu, L.Y. Development status and literature analysis of China’s earth observation remote sensing satellites. J. Remote Sens. 2020, 24, 479–510. [Google Scholar] [CrossRef]
  3. Hong, D.F.; Gao, L.R.; Yao, J.; Zhang, B.; Plaza, A.; Chanussot, J. Graph Convolutional Networks for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2021, 59, 5966–5978. [Google Scholar] [CrossRef]
  4. Rasti, B.; Ghamisi, P.; Gloaguen, R. Hyperspectral and LiDAR Fusion Using Extinction Profiles and Total Variation Component Analysis. IEEE Trans. Geosci. Remote Sens. 2017, 55, 3997–4007. [Google Scholar] [CrossRef]
  5. Chen, S.; Li, Q.; Zhong, W.S.; Wang, R.; Chen, D.; Pan, S.H. Improved Monitoring and Assessment of Meteorological Drought Based on Multi-Source Fused Precipitation Data. Int. J. Environ. Res. Public Health 2022, 19, 1542. [Google Scholar] [CrossRef]
  6. Judah, A.; Hu, B.X. An Advanced Data Fusion Method to Improve Wetland Classification Using Multi-Source Remotely Sensed Data. Sensors 2022, 22, 8942. [Google Scholar] [CrossRef]
  7. Li, S.T.; Li, C.Y.; Kang, X.D. Development Status and Future Prospects of Multi-source Remote Sensing Image Fusion. J. Remote Sens. 2021, 25, 148–166. [Google Scholar] [CrossRef]
  8. Li, J.X.; Hong, D.F.; Gao, L.R.; Yao, J.; Zheng, K.; Zhang, B.; Chanussot, J. Deep learning in multimodal remote sensing data fusion: A comprehensive review. Int. J. Appl. Earth Obs. Geoinf. 2022, 112, 102926. [Google Scholar] [CrossRef]
  9. Cai, J.H.; Zhang, M.; Yang, H.F.; He, Y.T.; Yang, Y.Q.; Shi, C.H.; Zhao, X.J.; Xun, Y.L. A novel graph-attention based multimodal fusion network for joint classification of hyperspectral image and LiDAR data. Expert Syst. Appl. 2024, 249, 123587. [Google Scholar] [CrossRef]
  10. Schmitt, M.; Zhu, X.X. Data Fusion and Remote Sensing: An ever-growing relationship. IEEE Geosci. Remote Sens. Mag. 2016, 4, 6–23. [Google Scholar] [CrossRef]
  11. Yu, C.Y.; Han, R.; Song, M.P.; Liu, C.Y.; Chang, C.I. A Simplified 2D-3D CNN Architecture for Hyperspectral Image Classification Based on Spatial-Spectral Fusion. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 2485–2501. [Google Scholar] [CrossRef]
  12. Hao, J.; Dong, F.J.; Wang, S.L.; Li, Y.L.; Cui, J.R.; Men, J.L.; Liu, S.J. Combined hyperspectral imaging technology with 2D convolutional neural network for near geographical origins identification of wolfberry. J. Food Meas. Charact. 2022, 16, 4923–4933. [Google Scholar] [CrossRef]
  13. Liu, D.X.; Han, G.L.; Liu, P.X.; Yang, H.; Sun, X.L.; Li, Q.Q.; Wu, J.J. A Novel 2D-3D CNN with Spectral-Spatial Multi-Scale Feature Fusion for Hyperspectral Image Classification. Remote Sens. 2021, 13, 4621. [Google Scholar] [CrossRef]
  14. Zhao, J.L.; Wang, G.L.; Zhou, B.; Ying, J.J.; Liu, J. Exploring an application-oriented land-based hyperspectral target detection framework based on 3D-2D CNN and transfer learning. Eurasip J. Adv. Signal Process. 2024, 2024, 37. [Google Scholar] [CrossRef]
  15. Hang, R.L.; Liu, Q.S.; Hong, D.F.; Ghamisi, P. Cascaded Recurrent Neural Networks for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2019, 57, 5384–5394. [Google Scholar] [CrossRef]
  16. Peng, Y.B.; Ren, J.S.; Wang, J.M.; Shi, M.L. Spectral-Swin Transformer with Spatial Feature Extraction Enhancement for Hyperspectral Image Classification. Remote Sens. 2023, 15, 2696. [Google Scholar] [CrossRef]
  17. Sun, J.; Zhang, J.B.; Gao, X.S.; Wang, M.T.; Ou, D.H.; Wu, X.B.; Zhang, D.J. Fusing Spatial Attention with Spectral-Channel Attention Mechanism for Hyperspectral Image Classification via Encoder-Decoder Networks. Remote Sens. 2022, 14, 1968. [Google Scholar] [CrossRef]
  18. Roy, S.K.; Deria, A.; Hong, D.F.; Rasti, B.; Plaza, A.; Chanussot, J. Multimodal Fusion Transformer for Remote Sensing Image Classification. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5515620. [Google Scholar] [CrossRef]
  19. Arshad, T.; Zhang, J.P.; Anyembe, S.C.; Mehmood, A. Spectral Spatial Neighborhood Attention Transformer for Hyperspectral Image Classification. Can. J. Remote Sens. 2024, 50, 2347631. [Google Scholar] [CrossRef]
  20. Li, Y.P.; Luo, Y.; Zhang, L.F.; Wang, Z.M.; Du, B. MambaHSI: Spatial-Spectral Mamba for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5524216. [Google Scholar] [CrossRef]
  21. Chen, H.Y.; Long, H.Y.; Chen, T.; Song, Y.J.; Chen, H.L.; Zhou, X.B.; Deng, W. M3FuNet: An Unsupervised Multivariate Feature Fusion Network for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5513015. [Google Scholar] [CrossRef]
  22. Hang, R.L.; Li, Z.; Ghamisi, P.; Hong, D.F.; Xia, G.Y.; Liu, Q.S. Classification of Hyperspectral and LiDAR Data Using Coupled CNNs. IEEE Trans. Geosci. Remote Sens. 2020, 58, 4939–4950. [Google Scholar] [CrossRef]
  23. Wang, X.H.; Feng, Y.N.; Song, R.X.; Mu, Z.H.; Song, C.M. Multi-attentive hierarchical dense fusion net for fusion classification of hyperspectral and LiDAR data. Inf. Fusion 2022, 82, 1–18. [Google Scholar] [CrossRef]
  24. Zhou, L.; Geng, J.; Jiang, W. Joint Classification of Hyperspectral and LiDAR Data Based on Position-Channel Cooperative Attention Network. Remote Sens. 2022, 14, 3247. [Google Scholar] [CrossRef]
  25. Zhao, X.D.; Tao, R.; Li, W.; Li, H.C.; Du, Q.; Liao, W.Z.; Philips, W. Joint Classification of Hyperspectral and LiDAR Data Using Hierarchical Random Walk and Deep CNN Architecture. IEEE Trans. Geosci. Remote Sens. 2020, 58, 7355–7370. [Google Scholar] [CrossRef]
  26. Zhang, H.T.; Yao, J.; Ni, L.; Gao, L.R.; Huang, M. Multimodal Attention-Aware Convolutional Neural Networks for Classification of Hyperspectral and LiDAR Data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 16, 3635–3644. [Google Scholar] [CrossRef]
  27. Wang, J.P.; Li, J.; Shi, Y.L.; Lai, J.H.; Tan, X.J. AM3Net: Adaptive Mutual-Learning-Based Multimodal Data Fusion Network. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 5411–5426. [Google Scholar] [CrossRef]
  28. Feng, Y.N.; Song, L.Y.; Wang, L.; Wang, X.H. DSHFNet: Dynamic Scale Hierarchical Fusion Network Based on Multiattention for Hyperspectral Image and LiDAR Data Classification. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5522514. [Google Scholar] [CrossRef]
  29. Li, H.; Ghamisi, P.; Rasti, B.; Wu, Z.Y.; Shapiro, A.; Schultz, M.; Zipf, A. A Multi-Sensor Fusion Framework Based on Coupled Residual Convolutional Neural Networks. Remote Sens. 2020, 12, 2067. [Google Scholar] [CrossRef]
  30. Ge, C.R.; Du, Q.; Sun, W.W.; Wang, K.Y.; Li, J.J.; Li, Y.S. Deep Residual Network-Based Fusion Framework for Hyperspectral and LiDAR Data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 2458–2472. [Google Scholar] [CrossRef]
  31. Arun, P.V.; Sadeh, R.; Avneri, A.; Tubul, Y.; Camino, C.; Buddhiraju, K.M.; Porwal, A.; Lati, R.N.; Zarco-Tejada, P.J.; Peleg, Z.; et al. Multimodal Earth observation data fusion: Graph-based approach in shared latent space. Inf. Fusion 2022, 78, 20–39. [Google Scholar] [CrossRef]
  32. Zhang, M.Q.; Gao, F.; Zhang, T.E.; Gan, Y.H.; Dong, J.Y.; Yu, H. Attention Fusion of Transformer-Based and Scale-Based Method for Hyperspectral and LiDAR Joint Classification. Remote Sens. 2023, 15, 650. [Google Scholar] [CrossRef]
  33. Zhao, G.R.; Ye, Q.L.; Sun, L.; Wu, Z.B.; Pan, C.S.; Jeon, B. Joint Classification of Hyperspectral and LiDAR Data Using a Hierarchical CNN and Transformer. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5500716. [Google Scholar] [CrossRef]
  34. Mohla, S.; Pande, S.; Banerjee, B.; Chaudhuri, S. Fusatnet: Dual attention based spectrospatial multimodal fusion network for hyperspectral and lidar classification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA, 14–19 June 2020; pp. 92–93. [Google Scholar]
  35. Huang, J.; Zhang, Y.H.; Yang, F.; Chai, L.; Tansey, K. Attention-Guided Fusion and Classification for Hyperspectral and LiDAR Data. Remote Sens. 2024, 16, 94. [Google Scholar] [CrossRef]
  36. Gao, H.M.; Feng, H.; Zhang, Y.Y.; Xu, S.F.; Zhang, B. AMSSE-Net: Adaptive Multiscale Spatial–Spectral Enhancement Network for Classification of Hyperspectral and LiDAR Data. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5531317. [Google Scholar] [CrossRef]
  37. Liu, Y.; Ye, Z.; Xi, Y.Q.; Liu, H.; Li, W.; Bai, L. Multiscale and Multidirection Feature Extraction Network for Hyperspectral and LiDAR Classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024, 17, 9961–9973. [Google Scholar] [CrossRef]
  38. Pan, H.Z.; Liu, M.Q.; Ge, H.M.; Wang, L.G. One-Shot Dense Network with Polarized Attention for Hyperspectral Image Classification. Remote Sens. 2022, 14, 2265. [Google Scholar] [CrossRef]
  39. Gu, Y.F.; Wang, Q.W. Discriminative Graph-Based Fusion of HSI and LiDAR Data for Urban Area Classification. IEEE Geosci. Remote Sens. Lett. 2017, 14, 906–910. [Google Scholar] [CrossRef]
  40. Xia, J.S.; Yokoya, N.; Iwasaki, A. Fusion of Hyperspectral and LiDAR Data with a Novel Ensemble Classifier. IEEE Geosci. Remote Sens. Lett. 2018, 15, 957–961. [Google Scholar] [CrossRef]
  41. Liu, H.; Jia, Y.H.; Hou, J.H.; Zhang, Q.F. Global-Local Balanced Low-Rank Approximation of Hyperspectral Images for Classification. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 2013–2024. [Google Scholar] [CrossRef]
  42. Ghamisi, P.; Höfle, B.; Zhu, X.X. Hyperspectral and LiDAR Data Fusion Using Extinction Profiles and Deep Convolutional Neural Network. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 3011–3024. [Google Scholar] [CrossRef]
  43. Chen, Y.S.; Li, C.Y.; Ghamisi, P.; Jia, X.P.; Gu, Y.F. Deep Fusion of Remote Sensing Data for Accurate Classification. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1253–1257. [Google Scholar] [CrossRef]
  44. Feng, Q.L.; Zhu, D.H.; Yang, J.Y.; Li, B.G. Multisource Hyperspectral and LiDAR Data Fusion for Urban Land-Use Mapping based on a Modified Two-Branch Convolutional Neural Network. ISPRS Int. J. Geo-Inf. 2019, 8, 28. [Google Scholar] [CrossRef]
  45. Roy, S.K.; Krishna, G.; Dubey, S.R.; Chaudhuri, B.B. HybridSN: Exploring 3-D-2-D CNN Feature Hierarchy for Hyperspectral Image Classification. IEEE Geosci. Remote Sens. Lett. 2020, 17, 277–281. [Google Scholar] [CrossRef]
  46. Li, H.; Ghamisi, P.; Soergel, U.; Zhu, X.X. Hyperspectral and LiDAR Fusion Using Deep Three-Stream Convolutional Neural Networks. Remote Sens. 2018, 10, 1649. [Google Scholar] [CrossRef]
  47. Feng, J.; Zhang, J.P.; Zhang, Y. Multiview Feature Learning and Multilevel Information Fusion for Joint Classification of Hyperspectral and LiDAR Data. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5528613. [Google Scholar] [CrossRef]
  48. Li, H.C.; Hu, W.S.; Li, W.; Li, J.; Du, Q.; Plaza, A. A3 CLNN: Spatial, Spectral and Multiscale Attention ConvLSTM Neural Network for Multisource Remote Sensing Data Classification. IEEE Trans. Neural Netw. Learn. Syst. 2022, 33, 747–761. [Google Scholar] [CrossRef]
  49. Song, D.M.; Gao, J.C.; Wang, B.; Wang, M.Y. A Multi-Scale Pseudo-Siamese Network with an Attention Mechanism for Classification of Hyperspectral and LiDAR Data. Remote Sens. 2023, 15, 1283. [Google Scholar] [CrossRef]
  50. Li, Z.W.; Sui, H.; Luo, C.; Guo, F.M. Morphological Convolution and Attention Calibration Network for Hyperspectral and LiDAR Data Classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 16, 5728–5740. [Google Scholar] [CrossRef]
  51. Lee, Y.; Hwang, J.W.; Lee, S.; Bae, Y.; Park, J. An Energy and GPU-Computation Efficient Backbone Network for Real-Time Object Detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 16–17 June 2019. [Google Scholar]
  52. Meng, Q.Y.; Zhao, M.F.; Zhang, L.L.; Shi, W.X.; Su, C.; Bruzzone, L. Multilayer Feature Fusion Network with Spatial Attention and Gated Mechanism for Remote Sensing Scene Classification. IEEE Geosci. Remote Sens. Lett. 2022, 19, 6510105. [Google Scholar] [CrossRef]
  53. Li, R.; Zheng, S.Y.; Duan, C.X.; Yang, Y.; Wang, X.Q. Classification of Hyperspectral Image Based on Double-Branch Dual-Attention Mechanism Network. Remote Sens. 2020, 12, 582. [Google Scholar] [CrossRef]
  54. Pan, H.Z.; Zhu, Y.X.; Ge, H.M.; Liu, M.Q.; Shi, C.P. Multiscale cross-fusion network for hyperspectral image classification. Egypt. J. Remote Sens. Space Sci. 2023, 26, 839–850. [Google Scholar] [CrossRef]
  55. Meng, X.C.; Zhu, L.Q.; Han, Y.L.; Zhang, H.C. We Need to Communicate: Communicating Attention Network for Semantic Segmentation of High-Resolution Remote Sensing Images. Remote Sens. 2023, 15, 3619. [Google Scholar] [CrossRef]
  56. Wang, C.; Ji, L.Q.; Shi, F.; Li, J.Y.; Wang, J.; Enan, I.H.; Wu, T.; Yang, J.J. Collapsed Building Detection in High-Resolution Remote Sensing Images Based on Mutual Attention and Cost Sensitive Loss. IEEE Geosci. Remote Sens. Lett. 2023, 20, 8000605. [Google Scholar] [CrossRef]
  57. Liu, D.X.; Wang, Y.R.; Liu, P.X.; Li, Q.Q.; Yang, H.; Chen, D.B.; Liu, Z.C.; Han, G.L. A Multiscale Cross Interaction Attention Network for Hyperspectral Image Classification. Remote Sens. 2023, 15, 428. [Google Scholar] [CrossRef]
  58. Peng, Y.S.; Zhang, Y.W.; Tu, B.; Li, Q.M.; Li, W.J. Spatial-Spectral Transformer with Cross-Attention for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5537415. [Google Scholar] [CrossRef]
  59. Yang, K.; Sun, H.; Zou, C.B.; Lu, X.Q. Cross-Attention Spectral-Spatial Network for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5518714. [Google Scholar] [CrossRef]
  60. Yang, J.X.; Zhou, J.; Wang, J.; Tian, H.; Liew, A.W.C. LiDAR-Guided Cross-Attention Fusion for Hyperspectral Band Selection and Image Classification. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5515815. [Google Scholar] [CrossRef]
  61. Roy, S.K.; Sukul, A.; Jamali, A.; Haut, J.M.; Ghamisi, P. Cross Hyperspectral and LiDAR Attention Transformer: An Extended Self-Attention for Land Use and Land Cover Classification. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5512815. [Google Scholar] [CrossRef]
  62. Wu, S.; Li, G.Q.; Deng, L.; Liu, L.; Wu, D.; Xie, Y.; Shi, L.P. L1-Norm Batch Normalization for Efficient Training of Deep Neural Networks. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 2043–2051. [Google Scholar] [CrossRef]
  63. Wang, X.L.; Ren, H.E.; Wang, A.C. Smish: A Novel Activation Function for Deep Learning Methods. Electronics 2022, 11, 540. [Google Scholar] [CrossRef]
  64. Cui, Y.Q.; Xu, Y.F.; Peng, R.M.; Wu, D.R. Layer Normalization for TSK Fuzzy System Optimization in Regression Problems. IEEE Trans. Fuzzy Syst. 2023, 31, 254–264. [Google Scholar] [CrossRef]
  65. Li, J.J.; Liu, Y.Z.; Song, R.; Li, Y.S.; Han, K.L.; Du, Q. Sal2RN: A Spatial-Spectral Salient Reinforcement Network for Hyperspectral and LiDAR Data Fusion Classification. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5500114. [Google Scholar] [CrossRef]
  66. Khodadadzadeh, M.; Li, J.; Prasad, S.; Plaza, A. Fusion of Hyperspectral and LiDAR Remote Sensing Data Using Multiple Feature Learning. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 2971–2983. [Google Scholar] [CrossRef]
  67. Liu, Y.; Bioucas-Dias, J.; Li, J.; Plaza, A. Hyperspectral cloud shadow removal based on linear unmixing. In Proceedings of the IGARSS 2017—2017 IEEE International Geoscience and Remote Sensing Symposium, Fort Worth, TX, USA, 23–28 July 2017. [Google Scholar]
  68. Melgani, F.; Bruzzone, L. Classification of hyperspectral remote sensing images with support vector machines. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1778–1790. [Google Scholar] [CrossRef]
  69. Ge, H.M.; Wang, L.G.; Liu, M.Q.; Zhao, X.Y.; Zhu, Y.X.; Pan, H.Z.; Liu, Y.Z. Pyramidal Multiscale Convolutional Network with Polarized Self-Attention for Pixel-Wise Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5504018. [Google Scholar] [CrossRef]
  70. Wang, X.H.; Zhu, J.H.; Feng, Y.N.; Wang, L. MS2CANet: Multiscale Spatial-Spectral Cross-Modal Attention Network for Hyperspectral Image and LiDAR Classification. IEEE Geosci. Remote Sens. Lett. 2024, 21, 5501505. [Google Scholar] [CrossRef]
  71. Ghorbanian, A.; Ahmadi, S.A.; Amani, M.; Mohammadzadeh, A.; Jamali, S. Application of Artificial Neural Networks for Mangrove Mapping Using Multi-Temporal and Multi-Source Remote Sensing Imagery. Water 2022, 14, 244. [Google Scholar] [CrossRef]
  72. Loshchilov, I.; Hutter, F. Sgdr: Stochastic gradient descent with warm restarts. arXiv 2016, arXiv:1608.03983. [Google Scholar]
  73. Zhang, S.Y.; Xu, M.; Zhou, J.; Jia, S. Unsupervised Spatial-Spectral CNN-Based Feature Learning for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5524617. [Google Scholar] [CrossRef]
  74. van der Maaten, L.; Hinton, G. Visualizing Data using t-SNE. J. Mach. Learn. Res. 2008, 9, 2579–2605. [Google Scholar]
Figure 1. A typical HSI-LiDAR fusion classification network diagram.
Figure 2. Structure of the proposed method.
Figure 3. The architecture of the SESM.
Figure 4. Structure of the CAFM.
Figure 5. Trento dataset: (a) Pseudo-color image (31,14,2 bands); (b) LiDAR-derived DSM image; (c) Ground-truth map.
Figure 6. MUUFL dataset: (a) Pseudo-color image (31,16,6 bands); (b) LiDAR-derived DSM image; (c) Ground-truth map.
Figure 7. Houston2013 dataset: (a) Pseudo-color image (69,36,12 bands); (b) LiDAR-derived DSM image; (c) Ground-truth map.
Figure 8. Full-factor classification maps for the Trento dataset: (a) Ground-truth map; (b) SVM; (c) HYSN; (d) DBDA; (e) PMCN; (f) FusAtNet; (g) CCNN; (h) AM3net; (i) HCTnet; (j) Sal2RN; (k) MS2CAN; (l) CMCN.
Figure 9. Full-factor classification maps for the MUUFL dataset: (a) Ground-truth map; (b) SVM; (c) HYSN; (d) DBDA; (e) PMCN; (f) FusAtNet; (g) CCNN; (h) AM3net; (i) HCTnet; (j) Sal2RN; (k) MS2CAN; (l) CMCN.
Figure 10. Full-factor classification maps for the Houston2013 dataset: (a) Ground-truth map; (b) SVM; (c) HYSN; (d) DBDA; (e) PMCN; (f) FusAtNet; (g) CCNN; (h) AM3net; (i) HCTnet; (j) Sal2RN; (k) MS2CAN; (l) CMCN.
Figure 11. Impact of hyper-parameters on the OA of the CMCN in the Trento, MUUFL, and Houston2013 datasets: (a) Impact of the patch size of the data cubes (patch); (b) impact of the number of feature maps (f); (c) impact of the multiplicity of channel compression (r); (d) impact of the number of heads (h).
Figure 12. Investigation of the proportion of training samples: (a) Trento dataset; (b) MUUFL dataset; (c) Houston2013 dataset.
Figure 13. Visualization of the distributions of the original HSI and the output of the CMCN for the three datasets. (a) Original Trento. (b) Processed Trento. (c) Original MUUFL. (d) Processed MUUFL. (e) Original Houston2013. (f) Processed Houston2013.
Table 1. A summary of notations used in this paper.
Symbol | Definition
X_HSI, X_LiDAR | The original input data of HSI and LiDAR
H, W | The height and width of the original HSI and LiDAR
c | The spectral size of HSI
C | The number of classes
x_HSI^i, x_LiDAR^i | The cube-based input HSI and LiDAR data
x^i | The merged cube-based input HSI and LiDAR data
h, w | The patch size of HSI and LiDAR data
f | The number of feature maps
y_true^i, y_pre^i | The ground-truth and predictive label of the ith input data
B_H1, B_H2, B_L | The triple branches of SESM
F_H1, F_H2, F_L | The corresponding outputs of the triple branches
F_HH, F_HL | The output of stage 1 in CAFM
F_HHL | The output of stage 2 in CAFM
r, h | The multiplicity of channel compression and number of heads in the attention mechanism
L | The cross-entropy loss
Table 2. The dataflow of the B_H1 branch.
Input Size | Layer Name | Kernel | Stride | Padding | Filters | Output Size
(1, h, w, c) | Conv/BN/Mish | (1, 1, 1) | (1, 1, 1) | (0, 0, 0) | f | (f, h, w, c)
(f, h, w, c) | Spectral conv block | - | - | - | - | (f, h, w, c)
(f, h, w, c) | Spectral conv block | - | - | - | - | (f, h, w, c)
(f, h, w, c) | Spectral conv block | - | - | - | - | (f, h, w, c)
- | Concatenate | - | - | - | - | (3f, h, w, c)
(3f, h, w, c) | Conv/BN/Mish | (1, 1, 1) | (1, 1, 1) | (0, 0, 0) | f | (f, h, w, c)
- | Summation | - | - | - | - | (f, h, w, c)
(f, h, w, c) | Conv/BN/Mish | (1, 1, c) | (1, 1, 1) | (0, 0, 0) | f | (f, h, w, 1)
Table 3. The dataflow of the B_H2 and B_L branches.
Branch | Input Size | Layer Name | Kernel | Stride | Padding | Filters | Output Size
B_H2 | (1, h, w, c) | Conv/BN/Mish | (1, 1, c) | (1, 1, 1) | (0, 0, 0) | f | (f, h, w, 1)
B_L | (1, h, w, 1) | Conv/BN/Mish | (1, 1, 1) | (1, 1, 1) | (0, 0, 0) | f | (f, h, w, 1)
B_H2, B_L | (f, h, w, 1) | Spatial conv block | - | - | - | - | (f, h, w, 1)
B_H2, B_L | (f, h, w, 1) | Spatial conv block | - | - | - | - | (f, h, w, 1)
B_H2, B_L | (f, h, w, 1) | Spatial conv block | - | - | - | - | (f, h, w, 1)
B_H2, B_L | - | Concatenate | - | - | - | - | (3f, h, w, 1)
B_H2, B_L | (3f, h, w, 1) | Conv/BN/Mish | (1, 1, 1) | (1, 1, 1) | (0, 0, 0) | f | (f, h, w, 1)
B_H2, B_L | - | Summation | - | - | - | - | (f, h, w, 1)
B_H2, B_L | (f, h, w, 1) | Conv/BN/Mish | (1, 1, 1) | (1, 1, 1) | (0, 0, 0) | f | (f, h, w, 1)
Table 4. The dataflow of the multi-scale spectral pseudo-3D convolutional block.
Input Size | Layer Name | Kernel | Stride | Padding | Filters | Output Size
(f, h, w, c) | Conv | (1, 1, 3) | (1, 1, 1) | (0, 0, 1) | f/2 | (f/2, h, w, c)
(f, h, w, c) | Conv | (1, 1, 5) | (1, 1, 1) | (0, 0, 2) | f/2 | (f/2, h, w, c)
(f, h, w, c) | Conv | (1, 1, 7) | (1, 1, 1) | (0, 0, 3) | f/2 | (f/2, h, w, c)
- | Concatenate | - | - | - | - | (3f/2, h, w, c)
(3f/2, h, w, c) | BN/Mish | - | - | - | - | (3f/2, h, w, c)
(3f/2, h, w, c) | Conv/BN/Mish | (1, 1, 1) | (1, 1, 1) | (0, 0, 0) | f | (f, h, w, c)
- | Summation | - | - | - | - | (f, h, w, c)
Table 5. The dataflow of the multi-scale spatial pseudo-3D convolutional block.
Input Size | Layer Name | Kernel | Stride | Padding | Filters | Output Size
(f, h, w, 1) | Conv | (3, 3, 1) | (1, 1, 1) | (1, 1, 0) | f/2 | (f/2, h, w, 1)
(f, h, w, 1) | Conv | (5, 5, 1) | (1, 1, 1) | (2, 2, 0) | f/2 | (f/2, h, w, 1)
(f, h, w, 1) | Conv | (7, 7, 1) | (1, 1, 1) | (3, 3, 0) | f/2 | (f/2, h, w, 1)
- | Concatenate | - | - | - | - | (3f/2, h, w, 1)
(3f/2, h, w, 1) | BN/Mish | - | - | - | - | (3f/2, h, w, 1)
(3f/2, h, w, 1) | Conv/BN/Mish | (1, 1, 1) | (1, 1, 1) | (0, 0, 0) | f | (f, h, w, 1)
- | Summation | - | - | - | - | (f, h, w, 1)
Table 6. The dataflow of stage 1 of CAFM.
Input Size | Layer Name | Kernel | Stride | Padding | Filters | Output Size
F_H2: (f, h, w) | Conv/Reshape | (1, 1) | (1, 1) | (0, 0) | f | (h×w, f)
F_H2: (f, h, w) | Conv/Reshape | (3, 3) | (1, 1) | (1, 1) | f/r | (h×w, f/r)
F_H1: (f, h, w) | Conv/Reshape | (3, 3) | (1, 1) | (1, 1) | f/r | (f/r, h×w)
F_L: (f, h, w) | Conv/Reshape | (3, 3) | (1, 1) | (1, 1) | f/r | (h×w, f/r)
F_L: (f, h, w) | Conv/Reshape | (1, 1) | (1, 1) | (0, 0) | f | (h×w, f)
- | Multiple/SoftMax | - | - | - | - | (h×w, h×w)
- | Multiple/SoftMax | - | - | - | - | (h×w, h×w)
- | Multiple | - | - | - | - | (h×w, f)
- | Multiple | - | - | - | - | (h×w, f)
(h×w, f) | Reshape/Conv/LN | (1, 1) | (1, 1) | (0, 0) | f | (f, h, w)
(h×w, f) | Reshape/Conv/LN | (1, 1) | (1, 1) | (0, 0) | f | (f, h, w)
Table 7. The dataflow of stage 2 of CAFM.
Input Size | Layer Name | Kernel | Stride | Padding | Filters | Output Size
(f, h, w) | Conv/Reshape | (1, 1) | (1, 1) | (0, 0) | f | (h×w, f)
(f, h, w) | Conv/Reshape | (3, 3) | (1, 1) | (1, 1) | f/r | (h×w, f/r)
(f, h, w) | Conv/Reshape | (3, 3) | (1, 1) | (1, 1) | f/r | (f/r, h×w)
- | Multiple/SoftMax | - | - | - | - | (h×w, h×w)
- | Multiple | - | - | - | - | (h×w, f)
(p×p, f) | Reshape/Conv | (1, 1) | (1, 1) | (0, 0) | f | (f, h, w)
- | Summation/LN | - | - | - | - | (f, h, w)
- | Concatenate | - | - | - | - | (2f, h, w)
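To make the dataflow in Tables 6 and 7 easier to follow, the sketch below implements a single-head cross-attention block in the same spirit: queries are computed from one modality's feature map, keys and values from the other, and the attention output is layer-normalized and reshaped back into a feature map. The channel size, compression ratio r, and exact layer arrangement are illustrative assumptions, not the CAFM implementation itself.

```python
# A minimal single-head cross-attention sketch between two feature maps.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossAttention2d(nn.Module):
    def __init__(self, channels: int = 24, r: int = 2):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // r, kernel_size=3, padding=1)
        self.key = nn.Conv2d(channels, channels // r, kernel_size=3, padding=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x_q: torch.Tensor, x_kv: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x_q.shape
        q = self.query(x_q).flatten(2).transpose(1, 2)    # (b, h*w, c/r)
        k = self.key(x_kv).flatten(2)                     # (b, c/r, h*w)
        v = self.value(x_kv).flatten(2).transpose(1, 2)   # (b, h*w, c)
        attn = F.softmax(q @ k, dim=-1)                   # (b, h*w, h*w) attention map
        out = self.norm(attn @ v)                         # (b, h*w, c), layer-normalized
        return out.transpose(1, 2).reshape(b, c, h, w)    # back to a feature map

if __name__ == "__main__":
    hsi_feat, lidar_feat = torch.randn(2, 24, 11, 11), torch.randn(2, 24, 11, 11)
    print(CrossAttention2d()(hsi_feat, lidar_feat).shape)  # torch.Size([2, 24, 11, 11])
```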
Table 8. The dataflow of the classification module.
Input Size | Layer Name | Kernel | Stride | Padding | Filters | Output Size
(2f, h, w) | Avg pooling | - | - | - | - | (2f, 1, 1)
(2f, 1, 1) | BN/Mish | - | - | - | - | (2f, 1, 1)
(2f, 1, 1) | Reshape | - | - | - | - | 2f
2f | Dropout | - | - | - | - | 2f
2f | Linear | - | - | - | c | c
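A minimal sketch of the classification module summarized in Table 8 is given below; the channel count (2f with f = 24), the number of classes, and the dropout rate are illustrative assumptions.

```python
# A minimal sketch of the classification head: average pooling, BN + Mish,
# dropout, and a linear classifier; channel and class counts are illustrative.
import torch
import torch.nn as nn

class ClassificationHead(nn.Module):
    def __init__(self, in_channels: int = 48, num_classes: int = 6, p_drop: float = 0.5):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)   # (b, 2f, h, w) -> (b, 2f, 1, 1)
        self.bn = nn.BatchNorm2d(in_channels)
        self.act = nn.Mish()
        self.drop = nn.Dropout(p_drop)
        self.fc = nn.Linear(in_channels, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.act(self.bn(self.pool(x)))
        x = self.drop(torch.flatten(x, 1))    # (b, 2f)
        return self.fc(x)                      # (b, num_classes) logits

if __name__ == "__main__":
    fused = torch.randn(4, 48, 11, 11)         # stand-in for the fused CAFM output
    print(ClassificationHead()(fused).shape)   # torch.Size([4, 6])
```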
Table 9. Classes, colors, and number of samples of the Trento, MUUFL, and Houston2013 datasets.
Trento dataset (Class | Total | Train | Val | Test):
C1-Apple trees | 4034 | 41 | 41 | 3952
C2-Buildings | 2903 | 30 | 30 | 2843
C3-Ground | 479 | 5 | 5 | 469
C4-Woods | 9123 | 92 | 92 | 8939
C5-Vineyard | 10,501 | 106 | 106 | 10,289
C6-Roads | 3174 | 32 | 32 | 3110
Total | 30,214 | 306 | 306 | 29,602
MUUFL dataset (Class | Total | Train | Val | Test):
C1-Trees | 23,246 | 233 | 233 | 22,780
C2-Mostly grass | 4270 | 43 | 43 | 4184
C3-Mixed ground surface | 6882 | 69 | 69 | 6744
C4-Dirt and sand | 1826 | 19 | 19 | 1788
C5-Road | 6687 | 67 | 67 | 6553
C6-Water | 466 | 5 | 5 | 456
C7-Building Shadow | 2233 | 23 | 23 | 2187
C8-Building | 6240 | 63 | 63 | 6114
C9-Sidewalk | 1385 | 14 | 14 | 1357
C10-Yellow curb | 183 | 2 | 2 | 179
C11-Cloth panels | 269 | 3 | 3 | 263
Total | 53,687 | 541 | 541 | 52,605
Houston2013 dataset (Class | Total | Train | Val | Test):
C1-Healthy grass | 1251 | 13 | 13 | 1225
C2-Stressed grass | 1254 | 13 | 13 | 1228
C3-Synthetic grass | 697 | 7 | 7 | 683
C4-Trees | 1244 | 13 | 13 | 1218
C5-Soil | 1242 | 13 | 13 | 1216
C6-Water | 325 | 4 | 4 | 317
C7-Residential | 1268 | 13 | 13 | 1242
C8-Commercial | 1244 | 13 | 13 | 1218
C9-Road | 1252 | 13 | 13 | 1226
C10-Highway | 1227 | 13 | 13 | 1201
C11-Railway | 1235 | 13 | 13 | 1209
C12-Parking Lot 1 | 1233 | 13 | 13 | 1207
C13-Parking Lot 2 | 469 | 5 | 5 | 459
C14-Tennis Court | 428 | 5 | 5 | 418
C15-Running track | 660 | 7 | 7 | 646
Total | 15,029 | 158 | 158 | 14,713
Table 10. Classification OA (%), AA (%), and Kappa (×100) with standard deviation and training time (s) of the Trento dataset.
ClassSVMHYSNDBDAPMCNFusAtNetCCNNAM3netHCTnetSal2RNMS2CANCMCN
C174.22 ± 3.2497.30 ± 0.2398.29 ± 0.4998.95 ± 0.3998.44 ± 0.9695.06 ± 1.7397.06 ± 0.4496.35 ± 2.1795.72 ± 5.3496.67 ± 0.3497.67 ± 1.03
C274.12 ± 5.6176.07 ± 0.2079.97 ± 1.7585.48 ± 3.6289.14 ± 8.1094.90 ± 0.5695.85 ± 0.6096.00 ± 0.6596.69 ± 0.6794.95 ± 0.9396.02 ± 0.33
C378.25 ± 28.5497.76 ± 0.1199.98 ± 0.0199.95 ± 0.1095.15 ± 3.6199.39 ± 0.7999.80 ± 0.2899.35 ± 0.3698.67 ± 0.4099.75 ± 0.2499.90 ± 0.09
C495.47 ± 0.7999.97 ± 0.0199.82 ± 0.0599.68 ± 0.0299.98 ± 0.0499.99 ± 0.0199.83 ± 0.0699.88 ± 0.0199.93 ± 0.0199.99 ± 0.0199.99 ± 0.01
C589.57 ± 7.2999.11 ± 0.0399.02 ± 0.1499.68 ± 0.1199.36 ± 0.3999.62 ± 0.1099.62 ± 0.0899.07 ± 0.1499.24 ± 0.8299.84 ± 0.0499.55 ± 0.28
C671.57 ± 0.8187.76 ± 0.3393.68 ± 0.8893.81 ± 3.7898.22 ± 1.6097.88 ± 0.6497.38 ± 0.6394.88 ± 0.9696.73 ± 0.7596.96 ± 0.5397.14 ± 0.63
OA85.39 ± 0.8195.50 ± 0.0296.55 ± 0.3297.46 ± 0.1398.06 ± 1.2298.45 ± 0.3198.74 ± 0.0798.20 ± 0.2898.39 ± 0.8298.67 ± 0.0698.83 ± 0.11
AA 80.53 ± 4.9893.00 ± 0.0395.13 ± 0.4196.26 ± 0.1296.72 ± 1.3297.81 ± 0.3298.26 ± 0.1297.59 ± 0.2897.83 ± 0.8798.03 ± 0.0798.38 ± 0.08
Kappa80.57 ± 1.0893.99 ± 0.0395.39 ± 0.4796.61 ± 0.1997.41 ± 1.6297.92 ± 0.4198.31 ± 0.1097.60 ± 0.3897.85 ± 1.0898.22 ± 0.0898.44 ± 0.14
Time (s)4.6314.3619.70113.3168.0313.2529.4410.0724.0921.4930.98
Table 11. Classification OA (%), AA (%), and Kappa (×100) with standard deviation and training time (s) of the MUUFL dataset.
ClassSVMHYSNDBDAPMCNFusAtNetCCNNAM3netHCTnetSal2RNMS2CANCMCN
C193.00 ± 0.9692.24 ± 0.7194.71 ± 0.4592.00 ± 2.8595.31 ± 1.2095.75 ± 0.7394.06 ± 0.6294.16 ± 1.0594.10 ± 2.2493.59 ± 1.3095.77 ± 1.32
C271.61 ± 3.6166.64 ± 2.9077.51 ± 1.1390.60 ± 2.9277.24 ± 4.7382.83 ± 3.8480.81 ± 9.0978.22 ± 2.7476.34 ± 6.3179.66 ± 4.0086.12 ± 5.67
C373.10 ± 2.4767.25 ± 2.1882.90 ± 0.8275.35 ± 5.2368.50 ± 4.6373.71 ± 1.1278.46 ± 3.8173.16 ± 4.0470.91 ± 8.5780.35 ± 2.2272.42 ± 3.34
C470.37 ± 4.5376.84 ± 7.8789.18 ± 0.8182.71 ± 9.7271.81 ± 3.4381.06 ± 4.0085.30 ± 2.4079.25 ± 3.1483.37 ± 6.8283.06 ± 5.8286.68 ± 5.31
C581.73 ± 2.1277.04 ± 3.7586.16 ± 0.2883.37 ± 2.2480.98 ± 3.3489.32 ± 1.6485.40 ± 1.1083.06 ± 1.8085.71 ± 2.6786.56 ± 1.6687.81 ± 2.21
C694.10 ± 7.2751.29 ± 20.8691.59 ± 2.9579.12 ± 39.5777.33 ± 11.2675.68 ± 6.3958.09 ± 7.6743.28 ± 6.0284.69 ± 8.7375.13 ± 37.6992.31 ± 5.08
C764.65 ± 5.3662.67 ± 3.1772.25 ± 0.9175.57 ± 3.4076.03 ± 6.0383.35 ± 3.1379.36 ± 3.2179.77 ± 5.2170.07 ± 9.7983.53 ± 5.8880.91 ± 6.91
C890.48 ± 2.6774.27 ± 0.6493.40 ± 0.5785.74 ± 6.2083.93 ± 5.4687.96 ± 1.1891.32 ± 1.9187.40 ± 1.9883.03 ± 5.5395.25 ± 1.5496.12 ± 1.62
C959.56 ± 10.5851.39 ± 7.2046.05 ± 0.6548.41 ± 6.0749.91 ± 9.6874.94 ± 4.0867.90 ± 3.8272.57 ± 5.9053.45 ± 4.9247.4 ± 24.7875.98 ± 10.5
C1068.40 ± 22.512.19 ± 1.7918.00 ± 1.970.00 ± 0.007.98 ± 3.5815.86 ± 8.2514.86 ± 7.565.98 ± 1.977.56 ± 5.630.00 ± 0.008.66 ± 7.83
C1194.82 ± 5.1091.85 ± 9.7495.74 ± 0.9684.23 ± 3.5853.96 ± 7.0682.04 ± 8.4192.90 ± 7.1788.15 ± 7.7293.96 ± 1.9817.31 ± 34.6298.51 ± 1.68
OA83.87 ± 0.5580.00 ± 0.9288.09 ± 0.0785.41 ± 1.9083.73 ± 1.3688.09 ± 0.4987.49 ± 0.7285.50 ± 0.5684.10 ± 2.1688.65 ± 0.9688.99 ± 0.79
AA 78.35 ± 2.2364.88 ± 4.2877.04 ± 0.4572.46 ± 4.2467.54 ± 2.1776.59 ± 1.0575.32 ± 1.9071.36 ± 0.9473.02 ± 2.4067.44 ± 7.4780.12 ± 1.32
Kappa78.62 ± 0.6973.35 ± 1.2784.21 ± 0.1180.41 ± 3.0178.46 ± 1.8484.20 ± 0.6783.34 ± 0.9980.71 ± 0.7778.88 ± 2.8684.84 ± 1.3485.39 ± 1.08
Time (s)11.7558.5747.70115.9225.9423.4654.1718.6332.9649.466.53
Table 12. Classification OA (%), AA (%), and Kappa (×100) with standard deviation and training time (s) of the Houston2013 dataset.
ClassSVMHYSNDBDAPMCNFusAtNetCCNNAM3netHCTnetSal2RNMS2CANCMCN
C189.59 ± 6.0682.83 ± 5.0583.48 ± 0.9987.24 ± 0.4885.22 ± 6.1985.06 ± 1.5078.66 ± 4.8587.75 ± 2.3574.92 ± 12.2185.38 ± 2.3491.69 ± 4.44
C290.72 ± 4.0691.82 ± 0.8688.42 ± 0.4498.59 ± 0.1592.59 ± 11.880.46 ± 4.6373.61 ± 3.1882.13 ± 8.3582.14 ± 7.4880.66 ± 4.7688.32 ± 1.79
C398.38 ± 2.8996.42 ± 2.6399.53 ± 0.11100.00 ± 0.0070.81 ± 13.699.94 ± 0.1299.67 ± 0.6599.97 ± 0.0698.01 ± 0.2694.20 ± 1.95100.00 ± 0.00
C494.79 ± 2.7681.61 ± 1.9479.15 ± 1.7095.57 ± 0.4894.02 ± 4.5592.92 ± 1.2597.24 ± 0.3396.40 ± 0.9193.53 ± 1.4292.61 ± 1.0295.64 ± 1.92
C593.58 ± 2.9384.68 ± 2.2292.26 ± 1.1799.09 ± 0.1990.68 ± 12.490.26 ± 2.5094.27 ± 1.3395.37 ± 1.9795.77 ± 1.6796.27 ± 1.3495.15 ± 0.37
C698.15 ± 5.3886.97 ± 5.77100.00 ± 0.0099.42 ± 0.1842.84 ± 9.9198.98 ± 0.4299.77 ± 0.1999.86 ± 0.2899.93 ± 0.1494.65 ± 1.3197.29 ± 0.86
C776.36 ± 5.9163.07 ± 4.6484.29 ± 0.5787.04 ± 0.6880.73 ± 4.7091.50 ± 3.783.60 ± 1.0885.52 ± 2.6483.33 ± 1.0783.20 ± 1.0087.94 ± 1.94
C861.29 ± 13.6275.66 ± 1.7287.32 ± 0.4284.46 ± 0.8789.37 ± 5.2083.68 ± 1.6667.96 ± 2.2288.65 ± 1.0591.27 ± 3.5288.94 ± 1.8086.05 ± 7.76
C958.49 ± 11.0472.15 ± 8.4474.23 ± 0.4075.45 ± 1.6053.10 ± 4.3379.28 ± 2.2775.55 ± 1.8375.42 ± 3.2779.96 ± 4.7680.96 ± 2.8885.28 ± 2.84
C1063.75 ± 8.6273.77 ± 4.4893.32 ± 1.2188.44 ± 0.7368.81 ± 12.182.83 ± 1.7279.71 ± 1.9785.19 ± 0.6779.90 ± 3.3987.15 ± 1.3488.73 ± 1.75
C1167.76 ± 10.3379.48 ± 5.8186.35 ± 0.4477.32 ± 2.2086.97 ± 5.9990.10 ± 2.9879.89 ± 1.3389.05 ± 2.8383.00 ± 4.1896.92 ± 1.3994.83 ± 1.63
C1256.76 ± 5.4475.51 ± 2.6281.96 ± 0.2087.90 ± 2.0272.10 ± 4.4291.47 ± 4.2675.09 ± 2.4783.15 ± 3.9482.34 ± 5.8493.83 ± 3.1476.83 ± 2.05
C1344.75 ± 23.6979.65 ± 8.2984.51 ± 1.1189.26 ± 0.9182.57 ± 8.4285.06 ± 1.2990.61 ± 3.4997.03 ± 4.6989.90 ± 7.5197.34 ± 0.7899.15 ± 0.19
C1489.09 ± 11.0390.73 ± 2.3696.40 ± 3.07100.00 ± 0.0069.07 ± 8.0899.43 ± 0.3596.25 ± 1.9199.29 ± 0.5898.05 ± 1.4391.53 ± 4.2491.95 ± 0.39
C1598.03 ± 3.2488.13 ± 3.9894.55 ± 0.2294.17 ± 0.1786.61 ± 3.3091.02 ± 1.0895.62 ± 0.8995.15 ± 1.0589.46 ± 6.3791.10 ± 0.9482.19 ± 0.44
OA75.70 ± 1.5179.98 ± 2.5586.53 ± 0.1089.09 ± 0.2278.66 ± 1.1187.90 ± 0.7383.01 ± 0.4088.50 ± 0.8785.37 ± 2.2589.18 ± 0.5889.47 ± 0.52
AA 78.77 ± 2.3681.50 ± 1.1188.39 ± 0.0990.93 ± 0.2177.70 ± 0.8389.46 ± 0.6085.83 ± 0.5190.66 ± 0.5888.10 ± 1.5390.32 ± 0.5190.47 ± 0.45
Kappa73.69 ± 1.6478.36 ± 3.0985.44 ± 0.1188.20 ± 0.2776.91 ± 1.2286.92 ± 0.7981.62 ± 0.4487.56 ± 0.9484.18 ± 2.4488.30 ± 0.6388.62 ± 0.56
Time (s)3.4913.7614.6456.6591.218.8229.289.5014.5915.2024.20
Table 13. Number of parameters (Par) (M) and FLOPs (G) of the competitors.
Dataset | Metric | HYSN | DBDA | PMCN | FusAtNet | CCNN | AM3net | HCTnet | Sal2RN | MS2CAN | CMCN
Trento | Par | 1.00 | 0.13 | 0.08 | 36.24 | 0.11 | 2.65 | 0.58 | 0.91 | 0.19 | 0.12
Trento | FLOPs | 1.36 | 0.64 | 4.78 | 108.28 | 0.12 | 0.23 | 0.51 | 0.21 | 0.17 | 2.13
MUUFL | Par | 1.02 | 0.13 | 0.08 | 36.26 | 0.12 | 2.65 | 0.59 | 0.91 | 0.19 | 0.12
MUUFL | FLOPs | 1.40 | 0.64 | 4.86 | 108.31 | 0.12 | 0.23 | 0.52 | 0.21 | 0.17 | 2.16
Houston2013 | Par | 1.76 | 0.28 | 0.13 | 36.90 | 0.14 | 2.65 | 0.96 | 0.97 | 0.22 | 0.17
Houston2013 | FLOPs | 3.25 | 1.49 | 10.70 | 110.81 | 0.21 | 0.23 | 1.15 | 0.27 | 0.29 | 4.52
Table 14. Model analysis. The CMCN-re-B_H1 is the model obtained by removing the B_H1 branch from the CMCN. The CMCN-re-B_H2 is the model obtained by removing the B_H2 branch from the CMCN. The CMCN-re-B_L is the model obtained by removing the B_L branch from the CMCN. The CMCN-re-CAt is the model obtained by removing the local–global cross attention block from the CMCN. The CMCN-re-OS is the model obtained by removing the one-shot aggregation technique from the CMCN. The CMCN-re-PS is the model obtained by removing the parameter-sharing technique from the CMCN. The CMCN is the proposed model.
Method | Trento OA (%) | Trento AA (%) | Trento Kappa ×100 | MUUFL OA (%) | MUUFL AA (%) | MUUFL Kappa ×100 | Houston2013 OA (%) | Houston2013 AA (%) | Houston2013 Kappa ×100
CMCN-re-B_H1 | 98.60 ± 0.40 | 98.10 ± 0.48 | 98.13 ± 0.53 | 87.43 ± 0.88 | 79.35 ± 3.12 | 83.20 ± 1.18 | 87.16 ± 0.75 | 90.02 ± 0.37 | 86.11 ± 0.82
CMCN-re-B_H2 | 98.61 ± 0.24 | 98.03 ± 0.35 | 98.15 ± 0.33 | 86.70 ± 1.64 | 78.36 ± 2.05 | 82.29 ± 2.14 | 87.05 ± 0.59 | 88.46 ± 0.45 | 86.00 ± 0.64
CMCN-re-B_L | 98.12 ± 0.70 | 97.40 ± 0.70 | 97.49 ± 0.93 | 86.01 ± 1.17 | 78.15 ± 1.84 | 81.16 ± 1.61 | 88.02 ± 0.46 | 88.77 ± 0.35 | 87.06 ± 0.49
CMCN-re-CAt | 98.34 ± 0.82 | 97.77 ± 0.99 | 97.79 ± 1.09 | 88.32 ± 0.60 | 79.01 ± 1.58 | 84.32 ± 0.85 | 88.39 ± 0.28 | 90.24 ± 0.30 | 87.45 ± 0.31
CMCN-re-OS | 98.72 ± 0.35 | 98.28 ± 0.48 | 98.21 ± 0.47 | 88.19 ± 0.84 | 77.97 ± 1.60 | 84.22 ± 1.13 | 88.93 ± 0.58 | 90.54 ± 0.30 | 88.03 ± 0.63
CMCN-re-PS | 98.28 ± 0.37 | 97.57 ± 0.46 | 97.70 ± 0.49 | 88.05 ± 0.93 | 78.62 ± 1.77 | 84.11 ± 1.23 | 88.87 ± 0.89 | 90.46 ± 0.73 | 87.96 ± 0.96
CMCN | 98.83 ± 0.11 | 98.38 ± 0.08 | 98.44 ± 0.14 | 88.99 ± 0.79 | 80.12 ± 1.32 | 85.39 ± 1.08 | 89.47 ± 0.52 | 90.47 ± 0.45 | 88.62 ± 0.56
