Recent Advances in Processing Mixed Pixels for Hyperspectral Image

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Engineering Remote Sensing".

Deadline for manuscript submissions: closed (30 June 2023) | Viewed by 42883

Special Issue Editors


Guest Editor
College of Information and Communication Engineering, Dalian Minzu University, Dalian 116600, China
Interests: remote sensing image processing and machine learning

Guest Editor
School of Electronics and Information Engineering, Harbin Institute of Technology, Harbin 150001, China
Interests: space intelligent remote sensing; multi-mode hyperspectral remote sensing; intelligent application of remote sensing big data

Special Issue Information

Dear Colleagues,

Hyperspectral imagery (HSI) has become one of the most important data sources for monitoring and evaluating resources and the ecological environment. However, due to sensor limitations and the complexity of these environments, the acquired HSI often contains many mixed pixels, which pose great challenges for resource and ecological environment mapping. How to process mixed pixels in HSI to obtain more accurate mapping information is therefore one of the hot issues in remote sensing research. Many hyperspectral image processing techniques for handling mixed pixels are developing rapidly. In particular, advances in computing and in techniques such as artificial intelligence, deep learning, and weakly supervised learning have expanded and enhanced the direction and scope of hyperspectral image processing in recent years. Nevertheless, several challenges and open problems still await efficient solutions and novel methodologies. The main goal of this Special Issue is to address advanced topics related to hyperspectral image processing.

This Special Issue is open to any researchers working on hyperspectral image applications and processing. Topics of interest include, but are not limited to, the following:

  • Fusion and resolution enhancement;
  • Denoising, restoration, and super resolution;
  • Endmember extraction and unmixing;
  • Dimensionality reduction and band selection;
  • Classification and segmentation;
  • Subpixel mapping;
  • Change detection and time-series HSI analysis;
  • Artificial intelligence for HSI;
  • Deep learning for HSI.

Prof. Dr. Liguo Wang
Prof. Dr. Yanfeng Gu
Dr. Peng Wang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • remote sensing image processing
  • hyperspectral image
  • mixed pixels
  • machine learning
  • deep learning

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (15 papers)


Research

22 pages, 12867 KiB  
Article
Incorporating Attention Mechanism, Dense Connection Blocks, and Multi-Scale Reconstruction Networks for Open-Set Hyperspectral Image Classification
by Huaming Zhou, Haibin Wu, Aili Wang, Yuji Iwahori and Xiaoyu Yu
Remote Sens. 2023, 15(18), 4535; https://doi.org/10.3390/rs15184535 - 15 Sep 2023
Viewed by 1347
Abstract
Hyperspectral image classification plays a crucial role in various remote sensing applications. However, existing methods often struggle with unknown classes, leading to decreased classification accuracy and limited generalization. In this paper, we propose a novel deep learning framework called IADMRN, which addresses the issue of unknown class handling in hyperspectral image classification. IADMRN combines the strengths of dense connection blocks and attention mechanisms to extract discriminative features from hyperspectral data. Furthermore, it employs a multi-scale deconvolution image reconstruction sub-network to enhance feature reconstruction and provide additional information for classification. To handle unknown classes, IADMRN utilizes an extreme value theory-based model to calculate the probability of unknown class membership. Experiments on three public datasets demonstrate that IADMRN outperforms state-of-the-art methods in terms of classification accuracy for both known and unknown classes, surpassing DCFSL by 8.47%, 6.57%, and 4.25%, and MDL4OW by 4.35%, 4.08%, and 2.47% on the Salinas, University of Pavia, and Indian Pines datasets, respectively. The proposed framework is computationally efficient and effectively handles unknown classes in hyperspectral image classification tasks. Overall, IADMRN offers a promising solution for accurate and robust hyperspectral image classification, making it a valuable tool for remote sensing applications. Full article
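The extreme-value-theory rejection step can be sketched roughly as follows: fit a Weibull distribution to the tail of intra-class distances and read its CDF as the probability that a test sample lies too far from a known class. This is a toy illustration of the general idea, not the paper's exact model; the function names, the gamma-distributed toy distances, and the tail size are all illustrative assumptions.

```python
import numpy as np
from scipy.stats import weibull_min

def fit_tail(distances, tail_size=20):
    """Fit a Weibull to the largest distances from a class's mean activation."""
    tail = np.sort(distances)[-tail_size:]
    shape, loc, scale = weibull_min.fit(tail, floc=0)  # anchor the distribution at zero distance
    return shape, loc, scale

def unknown_probability(d, params):
    """CDF of the fitted tail model: how extreme this distance is for the class."""
    shape, loc, scale = params
    return weibull_min.cdf(d, shape, loc=loc, scale=scale)

rng = np.random.default_rng(0)
train_d = rng.gamma(2.0, 1.0, size=500)  # toy intra-class distances to a class prototype
params = fit_tail(train_d)
p_typical = unknown_probability(train_d.mean(), params)
p_extreme = unknown_probability(train_d.max() * 2, params)
print(p_typical, p_extreme)
```

A sample at twice the largest training distance gets an unknown-probability near 1, while a typical sample stays near 0, which is the behavior an open-set classifier thresholds on.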
(This article belongs to the Special Issue Recent Advances in Processing Mixed Pixels for Hyperspectral Image)

17 pages, 6470 KiB  
Article
Blind Hyperspectral Unmixing with Enhanced 2DTV Regularization Term
by Peng Wang, Xun Shen, Yingying Kong, Xiwang Zhang and Liguo Wang
Remote Sens. 2023, 15(5), 1397; https://doi.org/10.3390/rs15051397 - 1 Mar 2023
Cited by 1 | Viewed by 1591
Abstract
Existing hyperspectral unmixing methods do not take full advantage of the correlations and differences between all bands, which affects the final unmixing results. We therefore design an enhanced 2DTV (E-2DTV) regularization term and propose a blind hyperspectral unmixing method with the E-2DTV regularization term (E-gTVMBO), which adds E-2DTV regularization to previous blind hyperspectral unmixing based on the g-TV model. The E-2DTV regularization term is based on the gradient maps of all bands of the HSI, and sparsity is calculated on a subspace basis rather than applied to the gradient map itself, which naturally exploits the correlations and differences between all bands. The experimental results prove the superiority of the E-gTVMBO method from both qualitative and quantitative perspectives. The research results can be applied to land cover classification, mineral analysis, and other fields. Full article
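The plain (non-enhanced) 2D total variation of each band, the quantity the E-2DTV term refines, can be sketched in a few lines. The cube shapes and function names here are illustrative, not from the paper:

```python
import numpy as np

def tv2d(band):
    """Anisotropic 2D total variation: summed absolute horizontal and vertical gradients."""
    return np.abs(np.diff(band, axis=1)).sum() + np.abs(np.diff(band, axis=0)).sum()

def hsi_tv(cube):
    """Total variation summed over all bands of an (H, W, B) hyperspectral cube."""
    return sum(tv2d(cube[:, :, b]) for b in range(cube.shape[2]))

flat = np.ones((4, 4, 3))  # piecewise-constant cube: zero total variation
noisy = flat + np.random.default_rng(1).normal(0, 0.1, flat.shape)
print(hsi_tv(flat), hsi_tv(noisy))
```

Penalizing this quantity favors piecewise-smooth abundance maps; computing the sparsity on a shared subspace of the gradient maps, as E-2DTV does, couples the bands instead of treating each one independently.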

21 pages, 8526 KiB  
Article
A Novel Dual-Encoder Model for Hyperspectral and LiDAR Joint Classification via Contrastive Learning
by Haibin Wu, Shiyu Dai, Chengyang Liu, Aili Wang and Yuji Iwahori
Remote Sens. 2023, 15(4), 924; https://doi.org/10.3390/rs15040924 - 7 Feb 2023
Cited by 7 | Viewed by 2625
Abstract
Deep-learning-based multi-sensor hyperspectral image classification algorithms can automatically acquire the advanced features of multiple sensor images, enabling the classification model to better characterize the data and improve classification accuracy. However, currently available methods represent the features of multi-sensor remote sensing data in their respective domains and do not address the bottleneck in fusing heterogeneous features from different sensors, which directly limits the final collaborative classification performance. In this paper, to address this bottleneck, we combine self-supervised contrastive learning with a robust and discriminative feature extraction network for multi-sensor data, using spectral–spatial information from hyperspectral images (HSIs) and elevation information from LiDAR, thereby realizing the advantages of multi-sensor data. Dual encoders are used: a hyperspectral encoder built on the ConvNeXt network (ConvNeXt-HSI) and a LiDAR encoder built on Octave Convolution (OctaveConv-LiDAR). Adequate feature representations of the spectral–spatial features and depth information obtained from the different sensors are extracted for the joint classification of hyperspectral images and LiDAR data, greatly improving multi-sensor joint classification performance. Finally, on the Houston2013 and Trento datasets, we demonstrate through a series of experiments that the dual-encoder model for hyperspectral and LiDAR joint classification via contrastive learning achieves state-of-the-art classification performance. Full article
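Cross-modal contrastive objectives of this kind are commonly instances of InfoNCE: embeddings of matching HSI/LiDAR pixel pairs are pulled together and all other pairs pushed apart. The sketch below is a generic numpy version of that loss, not the paper's exact formulation; the batch size, embedding dimension, and temperature are made up.

```python
import numpy as np

def info_nce(z_hsi, z_lidar, temperature=0.1):
    """Cross-modal InfoNCE: matching HSI/LiDAR pairs are positives, the rest negatives."""
    z_hsi = z_hsi / np.linalg.norm(z_hsi, axis=1, keepdims=True)
    z_lidar = z_lidar / np.linalg.norm(z_lidar, axis=1, keepdims=True)
    logits = z_hsi @ z_lidar.T / temperature          # (N, N) cosine-similarity matrix
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))                # positives sit on the diagonal

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
aligned = info_nce(z, z)        # identical embeddings from both encoders: low loss
shuffled = info_nce(z, z[::-1]) # mismatched pairs: higher loss
print(aligned, shuffled)
```

Minimizing this loss drives the two encoders toward a shared embedding space, which is one way to relieve the heterogeneous-feature fusion bottleneck the abstract describes.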

22 pages, 20918 KiB  
Article
Hybrid Attention-Based Encoder–Decoder Fully Convolutional Network for PolSAR Image Classification
by Zheng Fang, Gong Zhang, Qijun Dai, Biao Xue and Peng Wang
Remote Sens. 2023, 15(2), 526; https://doi.org/10.3390/rs15020526 - 16 Jan 2023
Cited by 11 | Viewed by 2539
Abstract
Recently, methods based on convolutional neural networks (CNNs) achieve superior performance in polarimetric synthetic aperture radar (PolSAR) image classification. However, the current CNN-based classifiers follow patch-based frameworks, which need input images to be divided into overlapping patches. Consequently, these classification approaches have the drawback of requiring repeated calculations and only relying on local information. In addition, the receptive field size in conventional CNN-based methods is fixed, which limits the potential to extract features. In this paper, a hybrid attention-based encoder–decoder fully convolutional network (HA-EDNet) is presented for PolSAR classification. Unlike traditional CNN-based approaches, the encoder–decoder fully convolutional network (EDNet) can use an arbitrary-size image as input without dividing. Then, the output is the whole image classification result. Meanwhile, the self-attention module is used to establish global spatial dependence and extract context characteristics, which can improve the performance of classification. Moreover, an attention-based selective kernel module (SK module) is included in the network. In the module, softmax attention is employed to fuse several branches with different receptive field sizes. Consequently, the module can capture features with different scales and further boost classification accuracy. The experiment results demonstrate that the HA-EDNet achieves superior performance compared to CNN-based and traditional fully convolutional network methods. Full article
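The selective-kernel idea described above, softmax attention choosing per channel among branches with different receptive fields, reduces to a weighted sum over branch features. This numpy sketch uses made-up shapes and gate values and is only a schematic of the SK module, not the HA-EDNet implementation:

```python
import numpy as np

def sk_fuse(branches, gate_logits):
    """Softmax-select among branch features per channel (selective-kernel style fusion).

    branches: (num_branches, C) features from kernels with different receptive fields.
    gate_logits: (num_branches, C) per-channel selection scores.
    """
    weights = np.exp(gate_logits) / np.exp(gate_logits).sum(axis=0, keepdims=True)
    return (weights * branches).sum(axis=0)

b = np.array([[1.0, 0.0],    # branch 0 (e.g. 3x3 kernel) features
              [0.0, 1.0]])   # branch 1 (e.g. 5x5 kernel) features
g = np.array([[10.0, -10.0],
              [-10.0, 10.0]])  # gate prefers branch 0 for channel 0, branch 1 for channel 1
print(sk_fuse(b, g))  # close to [1., 1.]
```

Because the weights are a softmax over branches, each channel can smoothly favor the receptive-field size that suits it, which is how the module captures features at different scales.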

25 pages, 5351 KiB  
Article
Hyperspectral Image Classification Based on a 3D Octave Convolution and 3D Multiscale Spatial Attention Network
by Cuiping Shi, Jingwei Sun, Tianyi Wang and Liguo Wang
Remote Sens. 2023, 15(1), 257; https://doi.org/10.3390/rs15010257 - 1 Jan 2023
Cited by 12 | Viewed by 3009
Abstract
Convolutional neural networks are widely used in the field of hyperspectral image classification. After continuous exploration and research in recent years, convolutional neural networks have achieved good classification performance in the field of hyperspectral image classification. However, we have to face two main challenges that restrict the improvement of hyperspectral classification accuracy, namely, the high dimension of hyperspectral images and the small number of training samples. In order to solve these problems, in this paper, a new hyperspectral classification method is proposed. First, a three-dimensional octave convolution (3D-OCONV) is proposed. Subsequently, a dense connection structure of three-dimensional asymmetric convolution (DC-TAC) is designed. In the spectral branch, the spectral features are extracted through a combination of the 3D-OCONV and spectral attention modules, followed by the DC-TAC. In the spatial branch, a three-dimensional, multiscale spatial attention module (3D-MSSAM) is presented. The spatial information is fully extracted using the 3D-OCONV, 3D-MSSAM, and DC-TAC. Finally, the spectral and spatial information extracted from the two branches is fully fused with an interactive information fusion module. Compared to some state-of-the-art classification methods, the proposed method shows superior classification performance with a small number of training samples on four public datasets. Full article

21 pages, 5092 KiB  
Article
FusionNet: A Convolution–Transformer Fusion Network for Hyperspectral Image Classification
by Liming Yang, Yihang Yang, Jinghui Yang, Ningyuan Zhao, Ling Wu, Liguo Wang and Tianrui Wang
Remote Sens. 2022, 14(16), 4066; https://doi.org/10.3390/rs14164066 - 19 Aug 2022
Cited by 43 | Viewed by 4750
Abstract
In recent years, deep-learning-based hyperspectral image (HSI) classification networks have become one of the most dominant implementations in HSI classification tasks. Among them, convolutional neural networks (CNNs) and attention-based networks have prevailed. While CNNs with local receptive fields can effectively extract local features in the spatial dimension of HSI, they are poor at capturing the global and sequential features of spectral–spatial information; networks based on attention mechanisms, for example the Transformer, usually capture global features better but are relatively weak at discriminating local features. This paper proposes a fusion network of convolution and Transformer for HSI classification, known as FusionNet, in which convolution and Transformer are fused in both serial and parallel mechanisms to achieve full utilization of HSI features. Experimental results demonstrate that the proposed network achieves superior classification results compared to previous similar networks, and performs relatively well even with a small amount of training data. Full article

20 pages, 12525 KiB  
Article
A Hyperspectral Image Classification Method Based on Adaptive Spectral Spatial Kernel Combined with Improved Vision Transformer
by Aili Wang, Shuang Xing, Yan Zhao, Haibin Wu and Yuji Iwahori
Remote Sens. 2022, 14(15), 3705; https://doi.org/10.3390/rs14153705 - 2 Aug 2022
Cited by 17 | Viewed by 3353
Abstract
In recent years, methods based on deep convolutional neural networks (CNNs) have dominated the classification task of hyperspectral images. Although CNN-based HSI classification methods have the advantage of spatial feature extraction, HSI is characterized by approximately continuous spectral information, usually containing hundreds of spectral bands. CNNs cannot mine and represent the sequential properties of spectral features well, whereas the attention-based Transformer model has proven its advantages in processing sequence data. This study proposes a new spectral–spatial kernel combined with an improved Vision Transformer (ViT) to jointly extract spatial–spectral features and complete the classification task. First, the hyperspectral data are dimensionally reduced by PCA; then, shallow features are extracted with the spectral–spatial kernel, and the extracted features are input into the improved ViT model. The improved ViT introduces a re-attention mechanism and a local mechanism based on the original ViT. The re-attention mechanism increases the diversity of attention maps at different levels, while the local mechanism makes full use of the local and global information of the data to improve classification accuracy. Finally, a multi-layer perceptron is used to obtain the classification result. The Focal Loss function is used to increase the loss weight of small-class and difficult-to-classify samples in the HSI data and to reduce the loss weight of easy-to-classify samples, so that the network learns more useful hyperspectral image information. In addition, the Apollo optimizer is used to train the HSI classification model, better updating the network parameters that affect model training and output and thereby minimizing the loss function. We evaluated the classification performance of the proposed method on four different datasets and achieved good results on urban land object classification, crop classification, and mineral classification. Compared with state-of-the-art backbone networks, the method achieves a significant improvement and very good classification accuracy. Full article
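The Focal Loss reweighting described above follows the standard formulation FL(p_t) = −α_t (1 − p_t)^γ log(p_t); a minimal binary sketch is below. The α and γ values are the usual defaults from the Focal Loss literature, not necessarily the values used in the paper:

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss: (1 - p_t)^gamma down-weights well-classified samples."""
    p_t = np.where(y == 1, p, 1 - p)          # probability assigned to the true class
    alpha_t = np.where(y == 1, alpha, 1 - alpha)
    return -alpha_t * (1 - p_t) ** gamma * np.log(p_t)

easy = float(focal_loss(np.array([0.95]), np.array([1])))  # confident, correct positive
hard = float(focal_loss(np.array([0.30]), np.array([1])))  # badly misclassified positive
print(easy, hard)
```

The modulating factor (1 − p_t)^γ makes the confident prediction contribute orders of magnitude less loss than the misclassified one, which is exactly the small-class and hard-sample emphasis the abstract describes.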

22 pages, 3135 KiB  
Article
A Pan-Sharpening Method with Beta-Divergence Non-Negative Matrix Factorization in Non-Subsampled Shear Transform Domain
by Yuetao Pan, Danfeng Liu, Liguo Wang, Jón Atli Benediktsson and Shishuai Xing
Remote Sens. 2022, 14(12), 2921; https://doi.org/10.3390/rs14122921 - 18 Jun 2022
Cited by 6 | Viewed by 2442
Abstract
In order to combine the spectral information of the multispectral (MS) image and the spatial information of the panchromatic (PAN) image, a pan-sharpening method based on β-divergence Non-negative Matrix Factorization (NMF) in the Non-Subsampled Shearlet Transform (NSST) domain is proposed. Firstly, we improve the traditional contrast calculation method to build the weighted local contrast measure (WLCM) method. Each band of the MS image is fused by a WLCM-based adaptive weighted averaging rule to obtain the intensity component I. Secondly, an image matting model is introduced to retain the spectral information of the MS image. I is used as the initial α channel to estimate the foreground color F and the background color B. Depending on the NSST, the PAN image and I are decomposed into one low-frequency component and several high-frequency components, respectively. Fusion rules are designed corresponding to the characteristics of the low-frequency and high-frequency components. A β-divergence NMF method based on the Alternating Direction Method of Multipliers (ADMM) is used to fuse the low frequency components. A WLCM-based rule is used to fuse the high-frequency components. The fused components are inverted by NSST inverse transformation, and the obtained image is used as the final α channel. Finally, the final fused image is reconstructed according to the foreground color F, background color B, and the final α channel. The experimental results demonstrate that the proposed method achieves superior performance in both subjective visual effects and objective evaluation, and effectively preserves spectral information while improving spatial resolution. Full article
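β-divergence NMF is classically solved with multiplicative updates; the sketch below shows that generic β-parameterized update rule on random data. It is not the ADMM-based solver the paper uses — just the textbook multiplicative scheme, with illustrative matrix sizes:

```python
import numpy as np

def nmf_beta(V, rank, beta=1.0, iters=200, seed=0):
    """Multiplicative-update NMF minimizing the beta-divergence D_beta(V || WH).

    beta=2 gives the Frobenius norm, beta=1 the KL divergence, beta=0 Itakura-Saito.
    """
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], rank)) + 0.1
    H = rng.random((rank, V.shape[1])) + 0.1
    for _ in range(iters):
        WH = W @ H
        H *= (W.T @ (WH ** (beta - 2) * V)) / (W.T @ WH ** (beta - 1))
        WH = W @ H
        W *= ((WH ** (beta - 2) * V) @ H.T) / (WH ** (beta - 1) @ H.T)
    return W, H

V = np.abs(np.random.default_rng(1).random((6, 8))) + 0.01  # small positive test matrix
W, H = nmf_beta(V, rank=4, beta=1.0)
err = np.abs(V - W @ H).mean()
baseline = np.abs(V - V.mean()).mean()
print(err, baseline)
```

The updates preserve non-negativity of W and H by construction, and choosing β tunes how the reconstruction error weights large versus small entries of the fused low-frequency components.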

25 pages, 16536 KiB  
Article
Precise Crop Classification of Hyperspectral Images Using Multi-Branch Feature Fusion and Dilation-Based MLP
by Haibin Wu, Huaming Zhou, Aili Wang and Yuji Iwahori
Remote Sens. 2022, 14(11), 2713; https://doi.org/10.3390/rs14112713 - 5 Jun 2022
Cited by 19 | Viewed by 3446
Abstract
The precise classification of crop types using hyperspectral remote sensing imaging is an essential application in the field of agriculture, and is of significance for crop yield estimation and growth monitoring. Among deep learning methods, Convolutional Neural Networks (CNNs) are the premier model for hyperspectral image (HSI) classification because of their outstanding locally contextual modeling capability, which facilitates spatial and spectral feature extraction. Nevertheless, existing CNNs have a fixed kernel shape and observe restricted receptive fields, which makes it difficult to model long-range dependencies. To tackle this challenge, this paper proposes two novel classification frameworks, both built from multilayer perceptrons (MLPs). First, we put forward a dilation-based MLP (DMLP) model, in which dilated convolutional layers replace the ordinary convolutions of the MLP, enlarging the receptive field without losing resolution and keeping the relative spatial positions of pixels unchanged. Second, we propose a feature fusion network, called DMLPFFN, that combines multi-branch residual blocks with the DMLP after principal component analysis (PCA), making full use of the multi-level feature information of the HSI. The proposed approaches are evaluated on two widely used hyperspectral datasets, Salinas and KSC, and two practical crop hyperspectral datasets, WHU-Hi-LongKou and WHU-Hi-HanChuan. Experimental results show that the proposed methods outshine several state-of-the-art methods, outperforming CNN by 6.81%, 12.45%, 4.38% and 8.84%, and outperforming ResNet by 4.48%, 7.74%, 3.53% and 6.39% on the Salinas, KSC, WHU-Hi-LongKou and WHU-Hi-HanChuan datasets, respectively. These results confirm that the proposed methods offer remarkable performance for precise hyperspectral crop classification. Full article
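The receptive-field gain from dilation that DMLP relies on follows rf = k + (k − 1)(d − 1) for a single layer with kernel size k and dilation d. A small 1D sketch (toy signal and weights, purely illustrative) makes the spacing of the taps concrete:

```python
import numpy as np

def receptive_field(kernel, dilation):
    """Effective receptive field of a single dilated convolution layer."""
    return kernel + (kernel - 1) * (dilation - 1)

def dilated_conv1d(x, w, dilation):
    """Valid-mode 1D dilated convolution: taps are spaced `dilation` apart."""
    taps = len(w)
    span = receptive_field(taps, dilation)
    return np.array([
        sum(w[j] * x[i + j * dilation] for j in range(taps))
        for i in range(len(x) - span + 1)
    ])

x = np.arange(10.0)
print(receptive_field(3, 2))  # a 3-tap kernel with dilation 2 covers 5 inputs
print(dilated_conv1d(x, np.array([1.0, 1.0, 1.0]), 2))
```

With the same three weights, raising the dilation widens the span of input pixels each output sees without adding parameters or downsampling, which is why the DMLP keeps resolution while modeling longer-range context.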

22 pages, 7183 KiB  
Article
Hyperspectral Image Classification Based on Non-Parallel Support Vector Machine
by Guangxin Liu, Liguo Wang, Danfeng Liu, Lei Fei and Jinghui Yang
Remote Sens. 2022, 14(10), 2447; https://doi.org/10.3390/rs14102447 - 19 May 2022
Cited by 21 | Viewed by 2436
Abstract
Support vector machine (SVM) performs well in the supervised classification of hyperspectral images. In view of the shortcomings of the existing parallel-structure SVM, this article proposes a non-parallel SVM model. Based on the traditional parallel-boundary-structure support vector machine, this model adds an additional empirical risk minimization term to the original optimization problem through a least-squares term on the samples, and obtains two non-parallel hyperplanes, forming a new non-parallel SVM algorithm: the Additional Empirical Risk Minimization Non-parallel Support Vector Machine (AERM-NPSVM). On the basis of AERM-NPSVM, a bias constraint is added to obtain bias-constrained AERM-NPSVM (BC-AERM-NPSVM). The experimental results show that, compared with the traditional parallel SVM model and the classical non-parallel SVM model, the Twin Support Vector Machine (TWSVM), the new models perform better in hyperspectral image classification and generalize better. Full article

27 pages, 4381 KiB  
Article
One-Shot Dense Network with Polarized Attention for Hyperspectral Image Classification
by Haizhu Pan, Moqi Liu, Haimiao Ge and Liguo Wang
Remote Sens. 2022, 14(9), 2265; https://doi.org/10.3390/rs14092265 - 8 May 2022
Cited by 14 | Viewed by 2350
Abstract
In recent years, hyperspectral image (HSI) classification has become a hot research direction in remote sensing image processing. Benefiting from the development of deep learning, convolutional neural networks (CNNs) have shown extraordinary achievements in HSI classification. Numerous methods combining CNNs and attention mechanisms (AMs) have been proposed for HSI classification. However, to fully mine the features of HSI, some of the previous methods apply dense connections to enhance the feature transfer between each convolution layer. Although dense connections allow these methods to fully extract features in a few training samples, it decreases the model efficiency and increases the computational cost. Furthermore, to balance model performance against complexity, the AMs in these methods compress a large number of channels or spatial resolutions during the training process, which results in a large amount of useful information being discarded. To tackle these issues, in this article, a novel one-shot dense network with polarized attention, namely, OSDN, was proposed for HSI classification. More precisely, since HSI contains rich spectral and spatial information, the OSDN has two independent branches to extract spectral and spatial features, respectively. Similarly, the polarized AMs contain two components: channel-only AMs and spatial-only AMs. Both polarized AMs can use a specially designed filtering method to reduce the complexity of the model while maintaining high internal resolution in both the channel and spatial dimensions. To verify the effectiveness and lightness of OSDN, extensive experiments were carried out on five benchmark HSI datasets, namely, Pavia University (PU), Kennedy Space Center (KSC), Botswana (BS), Houston 2013 (HS), and Salinas Valley (SV). Experimental results consistently showed that the OSDN can greatly reduce computational cost and parameters while maintaining high accuracy in a few training samples. Full article

26 pages, 10862 KiB  
Article
Hyperspectral Image Classification Based on Spectral Multiscale Convolutional Neural Network
by Cuiping Shi, Jingwei Sun and Liguo Wang
Remote Sens. 2022, 14(8), 1951; https://doi.org/10.3390/rs14081951 - 18 Apr 2022
Cited by 6 | Viewed by 4028
Abstract
In recent years, convolutional neural networks (CNNs) have been widely used for hyperspectral image classification and have shown good performance. However, compared with classification using sufficient training samples, the classification accuracy of hyperspectral images is easily degraded when only a small number of samples is available. Moreover, although CNNs can effectively classify hyperspectral images, the efficiency of feature extraction still needs to be improved, owing to the rich spatial and spectral information of hyperspectral images. To solve these problems, a spatial–spectral attention fusion network is proposed that uses a four-branch multiscale block (FBMB) to extract spectral features and 3D-Softpool to extract spatial features. The network consists of three main parts, connected in turn to fully extract the features of hyperspectral images. In the first part, four branches with different convolution kernel sizes fully extract spectral features, and a spectral attention block follows each branch. In the second part, spectral features are reused through dense connection blocks, and the spectral attention module then refines the extracted spectral features. The third part mainly extracts spatial features: a DenseNet module and a spatial attention block jointly extract spatial features, which are then fused with the previously extracted spectral features. Experiments on four commonly used hyperspectral data sets show that the proposed method achieves better classification performance than several existing methods when using a small number of training samples. Full article
(This article belongs to the Special Issue Recent Advances in Processing Mixed Pixels for Hyperspectral Image)
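The four-branch idea above can be sketched as four parallel 1D convolutions along the spectral axis, each with a different kernel size, whose outputs are stacked into a multiscale feature map. The kernel sizes (3, 5, 7, 9), random weights, and function names in this NumPy sketch are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def conv1d_same(x, k, rng):
    # single-filter 1D convolution along the spectral axis, 'same' padding
    w = rng.standard_normal(k) * 0.1
    pad = k // 2
    xp = np.pad(x, pad)
    return np.array([xp[i:i + k] @ w for i in range(len(x))])

def four_branch_multiscale(spectrum, rng, kernel_sizes=(3, 5, 7, 9)):
    # one branch per kernel size; small kernels catch narrow absorption
    # features, large kernels catch broad spectral trends
    branches = [conv1d_same(spectrum, k, rng) for k in kernel_sizes]
    return np.stack(branches)  # (4, bands) multiscale feature map
```

In the actual network each branch would be a learned convolution layer followed by a spectral attention block; here the point is only the parallel multiscale structure.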

21 pages, 7703 KiB  
Article
Multiscale Feature Aggregation Capsule Neural Network for Hyperspectral Remote Sensing Image Classification
by Runmin Lei, Chunju Zhang, Xueying Zhang, Jianwei Huang, Zhenxuan Li, Wencong Liu and Hao Cui
Remote Sens. 2022, 14(7), 1652; https://doi.org/10.3390/rs14071652 - 30 Mar 2022
Cited by 8 | Viewed by 2345
Abstract
Models based on capsule neural network (CapsNet), a novel deep learning method, have recently made great achievements in hyperspectral remote sensing image (HSI) classification due to their excellent ability to implicitly model the spatial relationship knowledge embedded in HSIs. However, the number of labeled samples is a common bottleneck in HSI classification, limiting the performance of these deep learning models. To alleviate the problem of limited labeled samples and further explore the potential of CapsNet in the HSI classification field, this study proposes a multiscale feature aggregation capsule neural network (MS-CapsNet) based on CapsNet via the implementation of two branches that simultaneously extract spectral, local spatial, and global spatial features to integrate multiscale features and improve model robustness. Furthermore, because deep features are generally more discriminative than shallow features, two kinds of capsule residual (CapsRES) blocks based on 3D convolutional capsule (3D-ConvCaps) layers and residual connections are proposed to increase the depth of the network and solve the limited labeled sample problem in HSI classification. Moreover, a squeeze-and-excitation (SE) block is introduced in the shallow layers of MS-CapsNet to enhance its feature extraction ability. In addition, a reasonable initialization strategy that transfers parameters from two well-designed, pretrained deep convolutional capsule networks is introduced to help the model find a good set of initializing weight parameters and further improve the HSI classification accuracy of MS-CapsNet. Experimental results on four widely used HSI datasets demonstrate that the proposed method can provide results comparable to those of state-of-the-art methods. Full article
(This article belongs to the Special Issue Recent Advances in Processing Mixed Pixels for Hyperspectral Image)
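The capsule layers mentioned above rely on the standard CapsNet "squash" nonlinearity, which rescales each capsule vector so that its length lies in (0, 1) and can act as an existence probability while its orientation is preserved. A minimal NumPy version of that standard function (not code from MS-CapsNet itself):

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    # CapsNet squashing: short vectors shrink toward zero length,
    # long vectors approach (but never reach) unit length
    sq = (s ** 2).sum(axis=axis, keepdims=True)   # squared norm per capsule
    return (sq / (1.0 + sq)) * s / np.sqrt(sq + eps)
```

A vector of norm 5 is squashed to norm 25/26 ≈ 0.96, while a vector of norm 0.1 is squashed to roughly 0.01, which is what lets capsule lengths encode confidence.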

19 pages, 7129 KiB  
Article
Affinity Propagation Based on Structural Similarity Index and Local Outlier Factor for Hyperspectral Image Clustering
by Haimiao Ge, Liguo Wang, Haizhu Pan, Yuexia Zhu, Xiaoyu Zhao and Moqi Liu
Remote Sens. 2022, 14(5), 1195; https://doi.org/10.3390/rs14051195 - 28 Feb 2022
Cited by 7 | Viewed by 2741
Abstract
In hyperspectral remote sensing, clustering is an important technique of concern. Affinity propagation is a widely used clustering algorithm; however, the complex structure of the hyperspectral image (HSI) dataset presents a challenge for its application. In this paper, an improved version of affinity propagation based on the complex wavelet structural similarity index and the local outlier factor is proposed specifically for HSI datasets. In the proposed algorithm, the complex wavelet structural similarity index is used to calculate the spatial similarity of HSI pixels, and the calculation strategy is simplified to reduce computational complexity. The spatial similarity and the traditional spectral similarity of the HSI pixels jointly constitute the similarity matrix of affinity propagation. Furthermore, local outlier factors are applied as weights to revise the original exemplar preferences of affinity propagation. Finally, the modified similarity matrix and exemplar preferences are fed to traditional affinity propagation to obtain the clustering result. Extensive experiments on three HSI datasets demonstrate that the proposed method improves on traditional affinity propagation and provides competitive clustering results compared with other methods. Full article
(This article belongs to the Special Issue Recent Advances in Processing Mixed Pixels for Hyperspectral Image)
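A rough sketch of how the revised affinity-propagation inputs might be assembled: spectral and spatial similarities are combined into one matrix, and the exemplar preferences on its diagonal are scaled by each pixel's local outlier factor. The equal weighting `alpha` and the multiplicative LOF revision are assumptions for illustration; the paper's exact formulas may differ.

```python
import numpy as np

def combined_similarity(spectral_sim, spatial_sim, lof, alpha=0.5):
    # joint similarity matrix: weighted sum of spectral and spatial terms
    # (the equal alpha weighting is an assumption, not the paper's formula)
    S = alpha * spectral_sim + (1.0 - alpha) * spatial_sim
    # exemplar preferences sit on the diagonal; affinity-propagation
    # similarities are negative, so multiplying the median preference by the
    # local outlier factor pushes likely outliers further from exemplar status
    pref = np.median(S) * lof
    np.fill_diagonal(S, pref)
    return S
```

The resulting matrix could then be passed to any standard affinity-propagation implementation that accepts a precomputed similarity matrix with per-point preferences on the diagonal.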

20 pages, 3290 KiB  
Article
Spatial Potential Energy Weighted Maximum Simplex Algorithm for Hyperspectral Endmember Extraction
by Meiping Song, Ying Li, Tingting Yang and Dayong Xu
Remote Sens. 2022, 14(5), 1192; https://doi.org/10.3390/rs14051192 - 28 Feb 2022
Cited by 5 | Viewed by 1954
Abstract
Most traditional endmember extraction algorithms focus only on spectral information, which limits the effectiveness of the extracted endmembers. This paper develops a spatial potential energy weighted maximum simplex algorithm (SPEW) for hyperspectral endmember extraction, combining hyperspectral spatial context with spectral information to extract endmembers effectively. Specifically, for pixels in a uniform spatial area, SPEW assigns high weights to pixels with higher spatial potential energy; for pixels scattered across a spatial area, high weights are assigned to representative pixels with a smaller spectral angle distance. The optimal endmember collection is then determined by the simplex with maximum volume in the space of representative pixels. SPEW not only reduces the complexity of searching for the maximum-volume simplex but also improves endmember extraction performance. In particular, compared with other recently proposed spatial–spectral endmember extraction methods, SPEW can effectively extract hidden endmembers in a spatial area without tuning any parameters. Experiments on synthetic and real data show that SPEW also provides better results than traditional algorithms. Full article
(This article belongs to the Special Issue Recent Advances in Processing Mixed Pixels for Hyperspectral Image)
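The maximum-volume simplex criterion at the core of such algorithms can be illustrated with an N-FINDR-style determinant formula. This brute-force NumPy sketch assumes pixels have already been reduced to p−1 dimensions and uses the spatial weights as simple multiplicative factors; SPEW's actual weighting scheme and search pruning are more sophisticated than this exhaustive enumeration, which is exponential in the number of pixels.

```python
import numpy as np
from itertools import combinations
from math import factorial

def simplex_volume(E):
    # E: (p, p-1) candidate endmember signatures in reduced dimensions;
    # N-FINDR-style volume via the determinant of an augmented matrix
    p = E.shape[0]
    M = np.vstack([np.ones(p), E.T])          # (p, p) augmented matrix
    return abs(np.linalg.det(M)) / factorial(p - 1)

def max_volume_endmembers(pixels, weights, p):
    # scale pixels by their spatial-potential-energy weights, then pick the
    # p pixels whose simplex encloses the maximum volume (brute force)
    cand = pixels * weights[:, None]
    idx = max(combinations(range(len(cand)), p),
              key=lambda s: simplex_volume(cand[list(s)]))
    return list(idx)
```

For three 2D candidates at (0, 0), (1, 0), and (0, 1), the simplex volume is the triangle area 0.5, and interior pixels are never selected because they cannot enlarge the simplex.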
