
Recent Advances in the Processing of Hyperspectral Images

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: 15 April 2025 | Viewed by 15069

Special Issue Editors


Guest Editor
College of Information and Communication Engineering, Dalian Minzu University, Dalian 116600, China
Interests: remote sensing image processing and machine learning

Guest Editor
School of Electronics and Information Engineering, Harbin Institute of Technology, Harbin 150001, China
Interests: space intelligent remote sensing; multi-mode hyperspectral remote sensing; intelligent application of remote sensing big data

Special Issue Information

Dear Colleagues,

Hyperspectral imagery (HSI) has become one of the most important forms of data for monitoring and evaluating resources and the ecological environment. However, owing to sensor limitations and the complexity of these environments, acquired HSI often contains many mixed pixels, which pose great challenges for resource and ecological-environment mapping. How to process mixed pixels in HSI to obtain more accurate mapping information is therefore one of the hot issues in remote sensing research. Many hyperspectral image processing techniques for handling mixed pixels are developing rapidly. In particular, advances in computing and in techniques such as artificial intelligence, deep learning, and weakly supervised learning have expanded the scope and applications of hyperspectral image processing in recent years. Nevertheless, several challenges and open problems still await efficient solutions and novel methodologies. The main goal of this Special Issue is to address advanced topics related to hyperspectral image processing, including, but not limited to, the following:

  • Fusion and resolution enhancement;
  • Denoising, restoration, and super resolution;
  • Endmember extraction and unmixing;
  • Dimensionality reduction and band selection;
  • Classification and segmentation;
  • Subpixel mapping;
  • Change detection and time-series HSI analysis;
  • Artificial intelligence for HSI;
  • Deep learning for HSI.
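Several of these topics (endmember extraction, unmixing, subpixel mapping) revolve around the linear mixing model behind the mixed-pixel problem described above. As a concrete illustration, here is a minimal numpy sketch of linear unmixing with made-up endmember spectra, approximating the abundance non-negativity and sum-to-one constraints by clipping and renormalizing rather than by a fully constrained solver:

```python
import numpy as np

def linear_unmix(pixel, endmembers):
    """Estimate abundance fractions for one pixel under the linear
    mixing model: pixel = endmembers @ abundances + noise.
    endmembers: (bands, n_endmembers) matrix of pure spectra."""
    a, *_ = np.linalg.lstsq(endmembers, pixel, rcond=None)
    a = np.clip(a, 0.0, None)   # non-negativity (approximate)
    return a / a.sum()          # sum-to-one (approximate)

# Toy example: two synthetic endmembers, a 60/40 mixed pixel.
E = np.array([[1.0, 0.1],
              [0.8, 0.3],
              [0.2, 0.9]])      # 3 bands, 2 endmembers
true_a = np.array([0.6, 0.4])
pixel = E @ true_a              # noise-free mixture
est = linear_unmix(pixel, E)
```

With noise-free data and full-column-rank endmembers, the least-squares solution recovers the true abundances; real pipelines would use a constrained solver (e.g. non-negative least squares) and estimated endmembers.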

Prof. Dr. Liguo Wang
Prof. Dr. Yanfeng Gu
Dr. Peng Wang
Prof. Dr. Henry Leung
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • remote sensing image processing
  • hyperspectral image
  • mixed pixels
  • machine learning
  • deep learning

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (9 papers)


Research

Jump to: Other

28 pages, 24617 KiB  
Article
Noise-Disruption-Inspired Neural Architecture Search with Spatial–Spectral Attention for Hyperspectral Image Classification
by Aili Wang, Kang Zhang, Haibin Wu, Shiyu Dai, Yuji Iwahori and Xiaoyu Yu
Remote Sens. 2024, 16(17), 3123; https://doi.org/10.3390/rs16173123 - 24 Aug 2024
Viewed by 1164
Abstract
In view of the complexity and diversity of hyperspectral images (HSIs), the classification task has been a major challenge in the field of remote sensing image processing. Hyperspectral image classification (HSIC) methods based on neural architecture search (NAS) are an attractive current frontier: they not only automatically search for the neural network architecture best suited to the characteristics of HSI data, but also avoid the possible limitations of manually designed networks when dealing with new classification tasks. However, existing NAS-based HSIC methods have the following limitations: (1) the search space lacks efficient convolution operators that can fully extract discriminative spatial–spectral features, and (2) NAS based on traditional differentiable architecture search (DARTS) suffers from performance collapse caused by unfair competition. To overcome these limitations, we propose a neural architecture search method with receptive field spatial–spectral attention (RFSS-NAS), which is specifically designed to automatically search for the optimal architecture for HSIC. Considering the core need of the model to extract more discriminative spatial–spectral features, we designed a novel and efficient attention search space. The core component of this space is the receptive field spatial–spectral attention convolution operator, which precisely focuses on the critical information in the image, thus greatly enhancing the quality of feature extraction. Meanwhile, to solve the unfair competition issue in the traditional DARTS strategy, we introduce the Noisy-DARTS strategy, which ensures the fairness and efficiency of the search process and effectively avoids the risk of performance collapse.
In addition, to further improve the robustness of the model and its ability to recognize difficult-to-classify samples, we propose a fusion loss function that combines the advantages of the label smoothing loss and the polynomial expansion perspective (poly) loss; it not only smooths the label distribution and reduces the risk of overfitting, but also effectively handles difficult-to-classify samples, thus improving the overall classification accuracy. Experiments on three public datasets fully validate the superior performance of RFSS-NAS. Full article
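The fusion loss is described only at a high level in the abstract; a minimal numpy sketch of one plausible combination (label-smoothed cross-entropy plus the first-order Poly-1 correction term, with hypothetical smoothing and epsilon values) might look like:

```python
import numpy as np

def softmax(z):
    z = z - z.max()            # numerical stability
    e = np.exp(z)
    return e / e.sum()

def fusion_loss(logits, label, n_classes, smoothing=0.1, eps=2.0):
    """Cross-entropy against a label-smoothed target distribution,
    plus the Poly-1 term eps * (1 - p_t) that penalizes low
    confidence on the true class (hard samples)."""
    p = softmax(logits)
    q = np.full(n_classes, smoothing / (n_classes - 1))
    q[label] = 1.0 - smoothing            # smoothed target
    ce = -np.sum(q * np.log(p + 1e-12))   # smoothed cross-entropy
    poly = eps * (1.0 - p[label])         # extra weight on hard samples
    return ce + poly

# A confidently correct sample vs. a barely correct ("hard") one.
loss_easy = fusion_loss(np.array([4.0, 0.0, 0.0]), 0, 3)
loss_hard = fusion_loss(np.array([0.5, 0.0, 0.0]), 0, 3)
```

The poly term grows as the true-class probability drops, which is the mechanism the abstract credits for handling difficult-to-classify samples.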
(This article belongs to the Special Issue Recent Advances in the Processing of Hyperspectral Images)

24 pages, 4634 KiB  
Article
Multimodal Semantic Collaborative Classification for Hyperspectral Images and LiDAR Data
by Aili Wang, Shiyu Dai, Haibin Wu and Yuji Iwahori
Remote Sens. 2024, 16(16), 3082; https://doi.org/10.3390/rs16163082 - 21 Aug 2024
Cited by 3 | Viewed by 1546
Abstract
Although the collaborative use of hyperspectral images (HSIs) and LiDAR data in land cover classification tasks has demonstrated significant importance and potential, several challenges remain. Notably, the heterogeneity in cross-modal information integration presents a major obstacle. Furthermore, most existing research relies heavily on category names, neglecting the rich contextual information from language descriptions. Visual-language pretraining (VLP) has achieved notable success in image recognition within natural domains by using multimodal information to enhance training efficiency and effectiveness. VLP has also shown great potential for land cover classification in remote sensing. This paper introduces a dual-sensor multimodal semantic collaborative classification network (DSMSC2N). It uses large language models (LLMs) in an instruction-driven manner to generate land cover category descriptions enriched with domain-specific knowledge in remote sensing. This approach aims to guide the model to accurately focus on and extract key features. Simultaneously, we integrate and optimize the complementary relationship between HSI and LiDAR data, enhancing the separability of land cover categories and improving classification accuracy. We conduct comprehensive experiments on benchmark datasets like Houston 2013, Trento, and MUUFL Gulfport, validating DSMSC2N’s effectiveness compared to various baseline methods. Full article
(This article belongs to the Special Issue Recent Advances in the Processing of Hyperspectral Images)

23 pages, 2501 KiB  
Article
MsFNet: Multi-Scale Fusion Network Based on Dynamic Spectral Features for Multi-Temporal Hyperspectral Image Change Detection
by Yining Feng, Weihan Ni, Liyang Song and Xianghai Wang
Remote Sens. 2024, 16(16), 3037; https://doi.org/10.3390/rs16163037 - 18 Aug 2024
Viewed by 1549
Abstract
With the development of satellite technology, the importance of multi-temporal remote sensing (RS) image change detection (CD) in urban planning, environmental monitoring, and other fields is increasingly prominent. Deep learning techniques enable a profound exploration of the intrinsic features within hyperspectral (HS) data, leading to substantial enhancements in CD accuracy while addressing several challenges posed by traditional methodologies. However, existing convolutional neural network (CNN)-based CD approaches frequently encounter issues during the feature extraction process, such as the loss of detailed information due to downsampling, which hampers a model’s ability to accurately capture complex spectral features. Additionally, these methods often neglect the integration of multi-scale information, resulting in suboptimal local feature extraction and, consequently, diminished model performance. To address these limitations, we propose a multi-scale fusion network (MsFNet) which leverages dynamic spectral features for effective multi-temporal HS-CD. Our approach incorporates a dynamic convolution module with spectral attention, which adaptively modulates the receptive field size according to the spectral characteristics of different bands. This flexibility enhances the model’s capacity to focus on critical bands, thereby improving its ability to identify and differentiate changes across spectral dimensions. Furthermore, we develop a multi-scale feature fusion module which extracts and integrates features from deep feature maps, enriching local information and augmenting the model’s sensitivity to local variations. Experimental evaluations conducted on three real-world HS-CD datasets demonstrate that the proposed MsFNet significantly outperforms contemporary advanced CD methods in terms of both efficacy and performance. Full article
(This article belongs to the Special Issue Recent Advances in the Processing of Hyperspectral Images)

21 pages, 7503 KiB  
Article
Hyperspectral Image Classification Based on Two-Branch Multiscale Spatial Spectral Feature Fusion with Self-Attention Mechanisms
by Boran Ma, Liguo Wang and Heng Wang
Remote Sens. 2024, 16(11), 1888; https://doi.org/10.3390/rs16111888 - 24 May 2024
Viewed by 1334
Abstract
In recent years, the use of deep neural networks for effective feature extraction and the design of efficient, high-accuracy hyperspectral image classification algorithms have gradually become a research hotspot. However, because hyperspectral images are difficult to obtain and costly to annotate, training samples are very limited. To cope with the small-sample problem, researchers often deepen the network model and use attention mechanisms to extract features; however, as the model deepens, gradients vanish, the feature extraction ability becomes insufficient, and the computational cost grows. How to make full use of the spectral and spatial information in limited samples has therefore become a difficult problem. To address these issues, this paper proposes FHDANet, a two-branch multiscale spatial–spectral feature aggregation model with a self-attention mechanism for hyperspectral image classification. The model constructs a dense two-branch pyramid structure that achieves highly efficient extraction of joint spatial–spectral and spectral feature information, reduces feature loss to a large extent, and strengthens the model's ability to extract contextual information. A channel–space attention module, ECBAM, is proposed, which greatly improves the model's ability to extract salient features, and a spatial information extraction module based on the deep feature fusion strategy HLDFF is proposed, which fully strengthens feature reusability and mitigates the feature loss caused by deepening the model. Compared with seven hyperspectral image classification algorithms (SVM, SSRN, A2S2K-ResNet, HyBridSN, SSDGL, RSSGL, and LANet), this method significantly improves classification performance on four representative datasets.
Experiments have demonstrated that FHDANet can better extract and utilise the spatial and spectral information in hyperspectral images, with excellent classification performance under small-sample conditions. Full article
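The abstract does not detail ECBAM; as a rough point of reference, a CBAM-style channel-then-spatial attention pass can be sketched in numpy. Random weights stand in for learned parameters, and a simple average of the pooled maps replaces the usual 7x7 convolution in the spatial branch:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x, w1, w2):
    """x: (C, H, W). Squeeze via global average and max pooling,
    run both through a shared 2-layer MLP, sum, gate with sigmoid."""
    avg = x.mean(axis=(1, 2))
    mx = x.max(axis=(1, 2))
    att = sigmoid(w2 @ np.maximum(w1 @ avg, 0)
                  + w2 @ np.maximum(w1 @ mx, 0))
    return x * att[:, None, None]

def spatial_attention(x):
    """Channel-wise mean and max maps; their average stands in for
    the learned convolution of real CBAM."""
    m = np.stack([x.mean(axis=0), x.max(axis=0)])   # (2, H, W)
    att = sigmoid(m.mean(axis=0))
    return x * att[None, :, :]

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 5, 5))
w1 = rng.standard_normal((4, 8)) * 0.1   # channel reduction C -> C//2
w2 = rng.standard_normal((8, 4)) * 0.1
y = spatial_attention(channel_attention(x, w1, w2))
```

Both gates lie in (0, 1), so the output is an elementwise down-weighting of the input that emphasises salient channels and locations.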
(This article belongs to the Special Issue Recent Advances in the Processing of Hyperspectral Images)

26 pages, 6148 KiB  
Article
A Multi-Hyperspectral Image Collaborative Mapping Model Based on Adaptive Learning for Fine Classification
by Xiangrong Zhang, Zitong Liu, Xianhao Zhang and Tianzhu Liu
Remote Sens. 2024, 16(8), 1384; https://doi.org/10.3390/rs16081384 - 14 Apr 2024
Viewed by 1174
Abstract
Hyperspectral (HS) data, encompassing hundreds of spectral channels for the same area, offer a wealth of spectral information and are increasingly utilized across various fields. However, their limitations in spatial resolution and imaging width pose challenges for precise recognition and fine classification in large scenes. Conversely, multispectral (MS) data excel in providing spatial details for vast landscapes but lack spectral precision. In this article, we propose an adaptive learning-based mapping model comprising an image fusion module, a spectral super-resolution network, and an adaptive learning network. The spectral super-resolution network learns the mapping between multispectral and hyperspectral images based on an attention mechanism. The image fusion module leverages spatial and spectral consistency in the training data, providing pseudo labels for spectral super-resolution training. The adaptive learning network incorporates spectral response priors via unsupervised learning, adjusting the output of the super-resolution network to preserve spectral information in the reconstructed data. Experiments show that the model eliminates the need for manually set image priors and complex parameter selection, and can adjust the network structure and parameters dynamically, ultimately enhancing reconstructed image quality and enabling the fine classification of large-scale scenes with high spatial resolution. Compared with recent dictionary learning and deep learning spectral super-resolution methods, our approach exhibits superior performance in terms of both image similarity and classification accuracy. Full article
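The spectral response prior mentioned above can be made concrete: if the response matrix R that degrades HS spectra to MS bands is known, a network's HS estimate can be projected onto the set of spectra consistent with the MS observation. The closed-form correction below sketches that underlying constraint; it is not the paper's learned adaptive network, and all the dimensions are made up:

```python
import numpy as np

def spectral_consistency_correction(hs_est, ms_obs, R):
    """Project an estimated HS spectrum onto the affine subspace
    { h : R @ h = ms_obs }, i.e. apply the minimum-norm correction
    that makes the estimate consistent with the observed MS bands.
    R: (ms_bands, hs_bands) spectral response matrix."""
    residual = ms_obs - R @ hs_est
    return hs_est + R.T @ np.linalg.solve(R @ R.T, residual)

rng = np.random.default_rng(1)
R = rng.random((4, 20))                 # 4 MS bands from 20 HS bands
hs_true = rng.random(20)
ms_obs = R @ hs_true                    # observed MS pixel
hs_est = hs_true + 0.05 * rng.standard_normal(20)  # imperfect net output
hs_fix = spectral_consistency_correction(hs_est, ms_obs, R)
```

Because the projection target is an affine subspace containing the true spectrum, the corrected estimate is never farther from the truth than the raw estimate.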
(This article belongs to the Special Issue Recent Advances in the Processing of Hyperspectral Images)

24 pages, 4519 KiB  
Article
Joint Classification of Hyperspectral and LiDAR Data Based on Adaptive Gating Mechanism and Learnable Transformer
by Minhui Wang, Yaxiu Sun, Jianhong Xiang, Rui Sun and Yu Zhong
Remote Sens. 2024, 16(6), 1080; https://doi.org/10.3390/rs16061080 - 19 Mar 2024
Cited by 3 | Viewed by 1931
Abstract
Utilizing multi-modal data, as opposed to only hyperspectral image (HSI) data, enhances target identification accuracy in remote sensing. Transformers are applied to multi-modal data classification for their long-range dependency modeling but often overlook intrinsic image structure by directly flattening image blocks into vectors. Moreover, as the encoder deepens, unprofitable information negatively impacts classification performance. Therefore, this paper proposes a learnable transformer with an adaptive gating mechanism (AGMLT). Firstly, a spectral–spatial adaptive gating mechanism (SSAGM) is designed to comprehensively extract the local information from images. It mainly contains point depthwise attention (PDWA) and asymmetric depthwise attention (ADWA). The former extracts spectral information from HSI, and the latter extracts spatial information from HSI and elevation information from LiDAR-derived rasterized digital surface models (LiDAR-DSM). By omitting linear layers, local continuity is maintained. Then, layer scale and a learnable transition matrix are introduced into the original transformer encoder and self-attention to form the learnable transformer (L-Former), which improves data dynamics and prevents performance degradation as the encoder deepens. Subsequently, learnable cross-attention (LC-Attention) with the learnable transfer matrix is designed to augment the fusion of multi-modal data by enriching feature information. Finally, poly loss, known for its adaptability to multi-modal data, is employed in training the model. Experiments are conducted on four well-known multi-modal datasets: Trento (TR), MUUFL (MU), Augsburg (AU), and Houston2013 (HU). The results show that AGMLT outperforms several existing models. Full article
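The asymmetric depthwise attention (ADWA) branch is not specified in detail here; its basic building block, an asymmetric (factorized) depthwise convolution that applies a k x 1 pass followed by a 1 x k pass independently per channel, can be sketched in numpy as follows (the kernel values are illustrative):

```python
import numpy as np

def depthwise_conv1d(x, k, axis):
    """'Same'-padded 1-D convolution applied independently per
    channel along one spatial axis of x with shape (C, H, W)."""
    pad = len(k) // 2
    xp = np.pad(x, [(0, 0)] + [(pad, pad) if a == axis else (0, 0)
                               for a in (1, 2)])
    out = np.zeros_like(x)
    for i, w in enumerate(k):          # sum of shifted, weighted copies
        sl = [slice(None)] * 3
        sl[axis] = slice(i, i + x.shape[axis])
        out += w * xp[tuple(sl)]
    return out

def asymmetric_depthwise(x, k):
    """Factorized k x k depthwise conv: (k x 1) then (1 x k)."""
    return depthwise_conv1d(depthwise_conv1d(x, k, axis=1), k, axis=2)

x = np.ones((2, 6, 6))
y = asymmetric_depthwise(x, np.array([0.25, 0.5, 0.25]))
```

For a kernel summing to 1 on an all-ones input, interior outputs stay at 1 while borders shrink due to zero padding; the factorization costs 2k weights per channel instead of k squared.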
(This article belongs to the Special Issue Recent Advances in the Processing of Hyperspectral Images)

20 pages, 4527 KiB  
Article
Hyperspectral Image Classification with the Orthogonal Self-Attention ResNet and Two-Step Support Vector Machine
by Heting Sun, Liguo Wang, Haitao Liu and Yinbang Sun
Remote Sens. 2024, 16(6), 1010; https://doi.org/10.3390/rs16061010 - 13 Mar 2024
Cited by 3 | Viewed by 1587
Abstract
Hyperspectral image classification plays a crucial role in remote sensing image analysis by classifying pixels. However, existing methods need stronger spatial–global information interaction and feature extraction capabilities. To overcome these challenges, this paper proposes a novel model for hyperspectral image classification using an orthogonal self-attention ResNet and a two-step support vector machine (OSANet-TSSVM). The OSANet-TSSVM model comprises two essential components: a deep feature extraction network and an improved support vector machine (SVM) classification module. The deep feature extraction network incorporates an orthogonal self-attention module (OSM) and a channel attention module (CAM) to enhance spatial–spectral feature extraction. The OSM computes 2D self-attention weights for the orthogonal dimensions of an image, reducing the number of parameters while capturing comprehensive global contextual information. In contrast, the CAM independently learns attention weights along the channel dimension, enabling the deep network to emphasise crucial channel information and enhance the spectral feature extraction capability. In addition to the feature extraction network, the OSANet-TSSVM model leverages an improved SVM classification module known as the two-step support vector machine (TSSVM). This module preserves the discriminative outputs of the first-level SVM subclassifier and remaps them as new features for TSSVM training. By integrating the results of the two classifiers, the deficiencies of the individual classifiers are effectively compensated, resulting in significantly enhanced classification accuracy. The performance of the proposed OSANet-TSSVM model was thoroughly evaluated on public datasets. The experimental results demonstrate that the model performs well in both subjective and objective evaluation metrics.
The superiority of this model highlights its potential for advancing hyperspectral image classification in remote sensing applications. Full article
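The two-step remapping idea can be sketched with scikit-learn on synthetic data. The paper's exact kernels, features, and multiclass setup are not given, so everything below (the toy blobs, the RBF kernel, appending the raw decision score as one extra feature) is illustrative:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Toy two-class "spectra": two Gaussian blobs in a 5-D feature space.
X = np.vstack([rng.normal(0, 1, (60, 5)), rng.normal(2, 1, (60, 5))])
y = np.array([0] * 60 + [1] * 60)

# Step 1: first-level SVM subclassifier.
svm1 = SVC(kernel="rbf").fit(X, y)
scores = svm1.decision_function(X).reshape(-1, 1)

# Step 2: remap the first-level discriminative outputs as extra
# features and train a second SVM on the augmented representation.
X2 = np.hstack([X, scores])
svm2 = SVC(kernel="rbf").fit(X2, y)

acc = svm2.score(X2, y)   # training accuracy on the augmented features
```

The second classifier sees both the original features and the first classifier's confidence, so it can correct samples the first stage got only marginally right.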
(This article belongs to the Special Issue Recent Advances in the Processing of Hyperspectral Images)

24 pages, 10424 KiB  
Article
Hyperspectral Images Weakly Supervised Classification with Noisy Labels
by Chengyang Liu, Lin Zhao and Haibin Wu
Remote Sens. 2023, 15(20), 4994; https://doi.org/10.3390/rs15204994 - 17 Oct 2023
Cited by 1 | Viewed by 1588
Abstract
The deep network model relies on sufficient training samples to achieve superior processing performance, which limits its application in hyperspectral image (HSI) classification. In order to perform HSI classification with noisy labels, a robust weakly supervised feature learning (WSFL) architecture combined with multi-model attention is proposed. Specifically, the input noisy labeled data are first processed by multiple groups of residual spectral attention models and multi-granularity residual spatial attention models, enabling WSFL to refine and optimize the extracted spectral and spatial features, with a focus on extracting clean sample information and reducing the model's dependence on labels. Finally, the fused and optimized spectral–spatial features are mapped to a multilayer perceptron (MLP) classifier to increase the model's constraint on the noisy samples. Experimental results on public datasets, including Pavia Center, WHU-Hi LongKou, and HangZhou, show that WSFL classifies noisy-labeled data better than strong models such as the spectral–spatial residual network (SSRN) and the dual-channel residual network (DCRN). On the HangZhou dataset, the classification accuracy of WSFL exceeds that of DCRN by 6.02% and that of SSRN by 7.85%. Full article
(This article belongs to the Special Issue Recent Advances in the Processing of Hyperspectral Images)

Other

Jump to: Research

17 pages, 9054 KiB  
Technical Note
MADANet: A Lightweight Hyperspectral Image Classification Network with Multiscale Feature Aggregation and a Dual Attention Mechanism
by Binge Cui, Jiaxiang Wen, Xiukai Song and Jianlong He
Remote Sens. 2023, 15(21), 5222; https://doi.org/10.3390/rs15215222 - 3 Nov 2023
Cited by 5 | Viewed by 1877
Abstract
Hyperspectral remote sensing images, with their continuous, narrow, and rich spectra, hold distinct significance in the precise classification of land cover. Deep convolutional neural networks (CNNs) and their variants are increasingly utilized for hyperspectral classification, but balancing the number of model parameters, performance, and accuracy has become a pressing challenge. To alleviate this problem, we propose MADANet, a lightweight hyperspectral image classification network that combines multiscale feature aggregation and a dual attention mechanism. By employing depthwise separable convolution, multiscale features can be extracted and aggregated to capture local contextual information effectively. Simultaneously, the dual attention mechanism harnesses both the channel and spatial dimensions to acquire comprehensive global semantic information. Finally, techniques such as global average pooling (GAP) and a fully connected (FC) layer are employed to integrate local contextual information with global semantic knowledge, enabling the accurate classification of hyperspectral pixels. Experiments on representative hyperspectral images demonstrate that MADANet significantly reduces the number of model parameters while attaining the highest classification accuracy among the compared methods. For example, the model has only 0.16 M parameters on the Indian Pines (IP) dataset, yet its overall accuracy reaches 98.34%. Similarly, the framework achieves an overall accuracy of 99.13%, 99.17%, and 99.08% on the University of Pavia (PU), Salinas (SA), and WHU-Hi LongKou (LongKou) datasets, respectively. These results exceed the classification accuracy of existing state-of-the-art frameworks under the same conditions. Full article
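The parameter savings from depthwise separable convolution, which underpin MADANet's lightweight design, are easy to quantify: a standard k x k convolution costs c_in * c_out * k^2 weights, while the depthwise-plus-pointwise factorization costs c_in * k^2 + c_in * c_out. A tiny arithmetic sketch (the 64-channel layer is an arbitrary example, not a layer from the paper):

```python
def depthwise_separable_params(c_in, c_out, k):
    """Parameter counts (no bias terms): standard convolution
    vs. depthwise separable (depthwise k x k + pointwise 1 x 1)."""
    standard = c_in * c_out * k * k
    separable = c_in * k * k + c_in * c_out
    return standard, separable

# Example: a 64 -> 64 channel layer with 3 x 3 kernels.
std, sep = depthwise_separable_params(64, 64, 3)
```

For this example the factorization needs 4672 weights instead of 36864, roughly a 7.9x reduction, which is how such networks keep total parameter counts in the 0.1-0.2 M range.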
(This article belongs to the Special Issue Recent Advances in the Processing of Hyperspectral Images)
