Intelligent Computing and Remote Sensing

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (30 November 2023) | Viewed by 60635

Special Issue Editors

School of Mechatronical Engineering, Beijing Institute of Technology, Beijing 100081, China
Interests: space perception and intelligent computing; data fusion
School of Computer Science and Engineering, Beihang University, Beijing 100191, China
Interests: remote sensing image processing; visual computing; machine learning

Special Issue Information

Dear Colleagues,

We are inviting submissions to the Special Issue on “Intelligent Computing and Remote Sensing”.

In recent years, there has been an urgent need for intelligent computing to solve challenging problems in the remote sensing field. These problems include, but are not limited to, computational intelligence for remote sensing image processing and understanding, on-board and on-orbit intelligent information processing, remote sensing big data analysis, and intelligent computing methods for UAV systems. This Special Issue therefore provides a venue for colleagues to communicate research on intelligent computing for remote sensing.

In this Special Issue, we invite submissions exploring cutting-edge research and recent advances in the fields of intelligent computing and remote sensing. Both theoretical and application studies are welcome, as well as comprehensive review and survey papers.

Prof. Dr. Qizhi Xu
Dr. Jin Zheng
Dr. Feng Gao
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website; once registered, authors can proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • deep learning
  • on-orbit/on-board real-time computing
  • advanced image processing
  • remote sensing big data
  • UAV information intelligent computing
  • radar information intelligent computing
  • information fusion
  • video imaging satellite intelligent computing

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies is available on the MDPI website.

Published Papers (28 papers)


Research

21 pages, 23090 KiB  
Article
DCTransformer: A Channel Attention Combined Discrete Cosine Transform to Extract Spatial–Spectral Feature for Hyperspectral Image Classification
by Yuanyuan Dang, Xianhe Zhang, Hongwei Zhao and Bing Liu
Appl. Sci. 2024, 14(5), 1701; https://doi.org/10.3390/app14051701 - 20 Feb 2024
Cited by 3 | Viewed by 1330
Abstract
Hyperspectral image (HSI) classification has recently been widely adopted in remote sensing applications. With the rise of deep learning, it has become crucial to investigate how to exploit spatial–spectral features. The traditional approach is to stack models that encode spatial–spectral features before the classification model, coupling as much information as possible; however, this sequential stacking tends to cause information redundancy. In this paper, a novel network utilizing channel attention combined with the discrete cosine transform (DCTransformer) is proposed to extract spatial–spectral features and address this issue. It consists of a detail spatial feature extractor (DFE) built from CNN blocks and a base spectral feature extractor (BFE) utilizing a channel attention mechanism (CAM) with a discrete cosine transform (DCT). First, the DFE extracts detailed context information using a series of CNN layers. The BFE then captures spectral features using channel attention and retains wider frequency information via the DCT. Finally, a dynamic fusion mechanism is adopted to fuse the detail and base features. Comprehensive experiments show that the DCTransformer achieves state-of-the-art (SOTA) performance in the HSI classification task compared with other methods on four datasets: the University of Houston (UH), Indian Pines (IP), MUUFL, and Trento datasets. On the UH dataset, the DCTransformer achieves an OA of 94.40%, an AA of 94.89%, and a kappa of 93.92.
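
As a rough illustration of the BFE idea (channel attention whose pooling statistic comes from a discrete cosine transform), the following PyTorch sketch re-weights channels using a single fixed 2-D DCT basis; the class name, frequency pair, and reduction ratio are illustrative assumptions rather than the authors' implementation.

```python
import math
import torch
import torch.nn as nn

class DCTChannelAttention(nn.Module):
    """Channel attention pooled with a fixed 2-D DCT basis instead of
    plain average pooling (a sketch; names and defaults are hypothetical)."""

    def __init__(self, channels, size=7, freq=(0, 1), reduction=16):
        super().__init__()
        u, v = freq
        n = torch.arange(size).float()
        # 1-D DCT-II basis vectors for the chosen frequency pair (u, v)
        bu = torch.cos(math.pi * u * (2 * n + 1) / (2 * size))
        bv = torch.cos(math.pi * v * (2 * n + 1) / (2 * size))
        self.register_buffer("basis", torch.outer(bu, bv))  # (size, size)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                                    # x: (B, C, H, W)
        b, c, _, _ = x.shape
        p = nn.functional.adaptive_avg_pool2d(x, self.basis.size(0))
        w = (p * self.basis).sum(dim=(-2, -1))               # one DCT coefficient per channel
        return x * self.fc(w).view(b, c, 1, 1)               # re-weight channels
```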

20 pages, 10819 KiB  
Article
Efficient Small-Object Detection in Underwater Images Using the Enhanced YOLOv8 Network
by Minghua Zhang, Zhihua Wang, Wei Song, Danfeng Zhao and Huijuan Zhao
Appl. Sci. 2024, 14(3), 1095; https://doi.org/10.3390/app14031095 - 27 Jan 2024
Cited by 11 | Viewed by 4128
Abstract
Underwater object detection plays a significant role in marine ecosystem research and marine species conservation, and improving the related technologies has practical significance. Although existing object-detection algorithms achieve an excellent performance on land, they are not satisfactory in underwater scenarios due to two limitations: underwater objects are often small, densely distributed, and prone to occlusion, and underwater embedded devices have limited storage and computational capabilities. In this paper, we propose a high-precision, lightweight underwater detector optimized for underwater scenarios based on the You Only Look Once Version 8 (YOLOv8) model. First, we replace the Darknet-53 backbone of YOLOv8s with FasterNet-T0, reducing model parameters by 22.52%, FLOPS by 23.59%, and model size by 22.73%, making the model lightweight. Second, we add a prediction head for small objects, increase the number of channels for the high-resolution feature map detection heads, and decrease the number of channels for the low-resolution feature map detection heads. This yields a 1.2% improvement in small-object detection accuracy, while model parameters and memory consumption remain nearly unchanged. Third, we use Deformable ConvNets and Coordinate Attention in the neck to improve the detection of irregularly shaped and densely occluded small targets, achieved by learning convolution offsets from feature maps and emphasizing the regions of interest (RoIs). Our method achieves 52.12% AP on the underwater dataset UTDAC2020, with only 8.5 M parameters, 25.5 B FLOPS, and a 17 MB model size. It surpasses the larger YOLOv8l model, at 51.69% AP, with 43.6 M parameters, 164.8 B FLOPS, and an 84 MB model size. Furthermore, by increasing the input image resolution to 1280 × 1280 pixels, our model achieves 53.18% AP, making it the state-of-the-art (SOTA) model for the UTDAC2020 underwater dataset. Additionally, we achieve 84.4% mAP on the Pascal VOC dataset, with a substantial reduction in model parameters compared to previous, well-established detectors. The experimental results demonstrate that the proposed lightweight method remains effective on underwater datasets and generalizes to common datasets.
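
The Coordinate Attention block used in the neck is a published module (Hou et al., 2021); below is a minimal PyTorch rendering of it. The reduction ratio and Hardswish activation are common defaults, not necessarily the paper's exact settings.

```python
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    """Minimal Coordinate Attention: pool along each spatial axis
    separately so the attention weights keep positional cues."""

    def __init__(self, channels, reduction=32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.conv1 = nn.Conv2d(channels, mid, 1)
        self.bn = nn.BatchNorm2d(mid)
        self.act = nn.Hardswish()
        self.conv_h = nn.Conv2d(mid, channels, 1)
        self.conv_w = nn.Conv2d(mid, channels, 1)

    def forward(self, x):                                  # x: (B, C, H, W)
        b, c, h, w = x.shape
        xh = x.mean(dim=3, keepdim=True)                   # (B, C, H, 1)
        xw = x.mean(dim=2, keepdim=True).transpose(2, 3)   # (B, C, W, 1)
        y = self.act(self.bn(self.conv1(torch.cat([xh, xw], dim=2))))
        yh, yw = torch.split(y, [h, w], dim=2)
        ah = torch.sigmoid(self.conv_h(yh))                    # (B, C, H, 1)
        aw = torch.sigmoid(self.conv_w(yw.transpose(2, 3)))    # (B, C, 1, W)
        return x * ah * aw
```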

17 pages, 8085 KiB  
Article
The Impact of Different Types of El Niño Events on the Ozone Valley of the Tibetan Plateau Based on the WACCM4 Model
by Yishun Wan, Feng Xu, Shujie Chang, Lingfeng Wan and Yongchi Li
Appl. Sci. 2024, 14(3), 1090; https://doi.org/10.3390/app14031090 - 27 Jan 2024
Viewed by 881
Abstract
This study integrates sea surface temperature, ozone, and meteorological data from ERA5 to catalog the El Niño events since 1979, classifying them spatially into eastern and central types and temporally into spring and summer types. The impacts of the different types of El Niño events on the ozone valley of the Tibetan Plateau are discussed. The eastern (and spring) type of El Niño events is generally more intense and longer in duration than the central (and summer) type. Overall, in the summer of the year following an El Niño event, the total column ozone (TCO) anomalies near the Tibetan Plateau have a regular zonal distribution. At low latitudes, TCO exhibits negative anomalies, which become more negative approaching the equator. The TCO in the region north of 30° N mainly shows positive anomalies, with the high-value region around 40° N. The responses of ozone over the Tibetan Plateau differ across El Niño event types, which is further validated by the WACCM4 simulation results. The greater intensity of the eastern (and spring) type of El Niño events caused stronger upward motion of the middle and upper atmosphere in the 20° N region in the subsequent summer, as well as a stronger South Asian High. These resulted in a wider range of negative TCO anomalies in the southern low-latitude region of the South Asian High. In addition, the growing intensity of extreme El Niño events over more than half a century warrants significant concern.
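
For readers unfamiliar with the eastern/central distinction, the toy NumPy sketch below flags warm months and labels each by comparing SST anomalies in the eastern (Niño-3) and central (Niño-4) Pacific boxes. The 0.5 °C threshold and the simple comparison rule are one common criterion, not the paper's classification procedure.

```python
import numpy as np

def classify_enso(nino3, nino4, threshold=0.5):
    """Toy EP/CP El Nino labelling from monthly SST anomaly series.

    nino3, nino4 : 1-D arrays of SST anomalies (deg C) over the
                   Nino-3 (eastern) and Nino-4 (central) boxes.
    Returns (month_index, type) for every El Nino-like month.
    """
    nino3, nino4 = np.asarray(nino3), np.asarray(nino4)
    warm = np.maximum(nino3, nino4) >= threshold       # El Nino-like months
    label = np.where(nino3 >= nino4, "eastern", "central")
    return [(i, l) for i, (w, l) in enumerate(zip(warm, label)) if w]

# Example: three months of anomalies
print(classify_enso([1.2, 0.3, 0.4], [0.6, 0.2, 0.9]))
# -> [(0, 'eastern'), (2, 'central')]
```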

14 pages, 2715 KiB  
Article
Automatic GNSS Ionospheric Scintillation Detection with Radio Occultation Data Using Machine Learning Algorithm
by Guangwang Ji, Ruimin Jin, Weimin Zhen and Huiyun Yang
Appl. Sci. 2024, 14(1), 97; https://doi.org/10.3390/app14010097 - 21 Dec 2023
Cited by 1 | Viewed by 1345
Abstract
Ionospheric scintillation often occurs in the polar and equatorial regions, and it can affect the signals of the Global Navigation Satellite System (GNSS). Therefore, ionospheric scintillation detection in these regions is of vital importance for improving the performance of satellite navigation. GNSS radio occultation (RO) is a remote sensing technique that primarily utilizes GNSS signals to study the Earth's atmosphere, but its measurements are susceptible to the effects of ionospheric scintillation. In this study, we propose an ionospheric scintillation detection algorithm based on an Extreme Gradient Boosting model optimized by the Sparrow Search Algorithm (SSA-XGBoost), which uses power spectral densities of the raw signal intensities from GNSS occultation data as input features to train the model. To assess its performance, we compare the proposed algorithm with other machine learning algorithms, such as XGBoost and a Support Vector Machine (SVM), using historical ionospheric scintillation data. The results show that the SSA-XGBoost method performs much better than the SVM and XGBoost models, with an overall accuracy of 97.8% in classifying scintillation events and a missed-detection rate of only 12.9% on an unbalanced GNSS RO dataset. This paper can provide valuable insights for designing more robust GNSS receivers.
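
A minimal sketch of the feature pipeline described above: log power spectral densities of raw signal-intensity traces feeding a gradient-boosted classifier. The sampling rate, segment length, toy data, and fixed hyper-parameters are placeholders; the paper instead tunes the XGBoost hyper-parameters with the Sparrow Search Algorithm, which is omitted here.

```python
import numpy as np
from scipy.signal import welch
from xgboost import XGBClassifier

def psd_features(intensity, fs=50.0, nperseg=256):
    """Log-PSD feature vector from one raw RO signal-intensity trace
    (sampling rate and segment length are assumed values)."""
    _, pxx = welch(intensity, fs=fs, nperseg=nperseg)
    return np.log10(pxx + 1e-12)          # log scale is a common normalization

# Toy stand-in data: one log-PSD vector per event; y = 1 means scintillation
rng = np.random.default_rng(0)
X = np.vstack([psd_features(rng.standard_normal(2048)) for _ in range(64)])
y = rng.integers(0, 2, size=64)

# Fixed hyper-parameters stand in for the SSA-searched ones
clf = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
clf.fit(X, y)
print(clf.predict(X[:5]))
```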

18 pages, 9840 KiB  
Article
Research on Panorama Generation from a Multi-Camera System by Object-Distance Estimation
by Hongxia Cui, Ziwei Zhao and Fangfei Zhang
Appl. Sci. 2023, 13(22), 12309; https://doi.org/10.3390/app132212309 - 14 Nov 2023
Viewed by 1410
Abstract
Panoramic imagery from multi-camera systems often suffers from geometric mosaicking errors due to eccentric errors between the optical centers of the cameras and variations in object distance within the panoramic environment. In this paper, an inverse rigorous panoramic imaging model is derived for a panoramic multi-camera system. Additionally, we present an estimation scheme that extracts object-distance information to improve the seamlessness of panoramic image stitching. At the core of the scheme is our proposed object-space-based image matching algorithm, the Panoramic Vertical Line Locus (PVLL). Panoramas are then generated using the proposed inverse multi-cylinder projection method with the estimated object-distance information. The experiments conducted on our multi-camera system demonstrate that the root mean square errors (RMSEs) in the overlapping areas of panoramic images are no more than 1.0 pixel, whereas the RMSEs of conventional methods are typically more than 6 pixels and in some cases exceed 30 pixels. Moreover, the inverse imaging model successfully addresses the issue of empty pixels. The proposed method can effectively meet the accurate panoramic imaging requirements of complex surroundings with varied object distances.
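
To make the projection geometry concrete, the NumPy sketch below generates the 3-D cylinder point corresponding to every panorama pixel for a single viewing cylinder; each point would then be reprojected into the camera whose field of view covers it, and the color sampled there. This single-radius sketch is an assumption for illustration: the paper's inverse multi-cylinder projection instead chooses the radius per region from the estimated object distances.

```python
import numpy as np

def cylinder_points(pano_w, pano_h, radius, z_min, z_max):
    """3-D point on a viewing cylinder for every panorama pixel
    (single cylinder only; heights and radius are placeholders)."""
    theta = np.linspace(0, 2 * np.pi, pano_w, endpoint=False)  # azimuth per column
    z = np.linspace(z_min, z_max, pano_h)                      # height per row
    T, Z = np.meshgrid(theta, z)                               # both (H, W)
    return np.stack([radius * np.cos(T), radius * np.sin(T), Z], axis=-1)

pts = cylinder_points(2048, 512, radius=10.0, z_min=-2.0, z_max=2.0)
print(pts.shape)   # (512, 2048, 3)
```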

17 pages, 7066 KiB  
Article
Dual Parallel Branch Fusion Network for Road Segmentation in High-Resolution Optical Remote Sensing Imagery
by Lin Gao and Chen Chen
Appl. Sci. 2023, 13(19), 10726; https://doi.org/10.3390/app131910726 - 27 Sep 2023
Viewed by 950
Abstract
Road segmentation from high-resolution (HR) remote sensing images plays a core role in a wide range of applications. Due to the complex backgrounds of HR images, most current methods struggle to extract a road network correctly and completely; furthermore, they suffer from either the loss of context information or high redundancy of detail information. To alleviate these problems, we employ a dual parallel branch fusion network (DPBFN), which enables dual-branch feature passing between two parallel paths merged into a typical road extraction structure. The DPBFN consists of three parts: a residual multi-scale dilated convolutional network branch, a Transformer branch, and a fusion module. Constructing pyramid features through parallel multi-scale dilated convolution operations with a multi-head attention block can enhance road features while suppressing redundant information. Fusing the two branches resolves shadows or vision occlusions and maintains the continuity of the road network, especially against a complex background. Experiments were carried out on three HR image datasets to showcase the stable performance of the proposed method, and the results are compared with those of other methods. The OA on the Massachusetts, DeepGlobe, and GF-2 datasets reaches more than 98.26%, 95.25%, and 95.66%, respectively, a significant improvement over traditional CNN networks. The results and the explanation analysis via Grad-CAMs showcase the method's effective performance in accurately extracting road segments from complex scenes.
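
The residual multi-scale dilated branch follows a standard pattern: parallel dilated convolutions fused by a 1 × 1 convolution with a residual connection. A minimal PyTorch sketch with illustrative dilation rates (the paper's exact configuration may differ):

```python
import torch
import torch.nn as nn

class DilatedPyramidBlock(nn.Module):
    """Parallel dilated convolutions fused by concatenation plus a
    residual connection; dilation rates are assumed values."""

    def __init__(self, channels, rates=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=r, dilation=r)
            for r in rates)                     # padding = rate keeps H x W
        self.fuse = nn.Conv2d(channels * len(rates), channels, 1)

    def forward(self, x):
        feats = [torch.relu(b(x)) for b in self.branches]
        return torch.relu(self.fuse(torch.cat(feats, dim=1))) + x

x = torch.randn(1, 64, 128, 128)
print(DilatedPyramidBlock(64)(x).shape)   # torch.Size([1, 64, 128, 128])
```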

24 pages, 7394 KiB  
Article
Hardware Acceleration of Satellite Remote Sensing Image Object Detection Based on Channel Pruning
by Yonghui Zhao, Yong Lv and Chao Li
Appl. Sci. 2023, 13(18), 10111; https://doi.org/10.3390/app131810111 - 8 Sep 2023
Viewed by 1602
Abstract
Real-time detection in satellite remote sensing images is one of the key technologies in the field of remote sensing, requiring not only high-efficiency algorithms but also low-power, high-performance hardware deployment platforms. At present, image processing is mainly hardware-accelerated on graphics processing units (GPUs), but GPUs consume considerable power and are difficult to apply to micro-nano satellites and other devices with limited volume, weight, computing power, and power budget. At the same time, deep learning models have too many parameters to deploy directly on embedded devices. To solve these problems, we propose a YOLOv4-MobileNetv3 field programmable gate array (FPGA) deployment scheme based on channel-level pruning. Experiments show that the proposed acceleration strategy reduces the number of model parameters by 91.11%; on the aerial remote sensing dataset DIOR, the design achieves an average accuracy of 82.61% at 48.14 FPS with an average power consumption of 7.2 W, improving FPS by 317.88% over a CPU while reducing power consumption by 81.91%. Compared with a GPU, it reduces power consumption by 91.85% and improves FPS by 8.50%. Compared with CPUs and GPUs, our lightweight model is more energy-efficient and more real-time, making it suitable for spaceborne remote sensing image processing systems.
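
Channel pruning of this kind typically ranks filters by an importance score and drops the weakest. The sketch below applies the common L1-norm criterion to a single Conv2d layer; the paper prunes across the whole YOLOv4-MobileNetv3 network and then deploys on an FPGA, neither of which is shown here.

```python
import torch
import torch.nn as nn

def prune_conv_channels(conv, keep_ratio=0.5):
    """Keep the top fraction of a Conv2d's output channels ranked by
    L1 norm (a standard criterion; the ratio is a placeholder)."""
    w = conv.weight.data                               # (out, in, kH, kW)
    scores = w.abs().sum(dim=(1, 2, 3))                # L1 norm per filter
    n_keep = max(1, int(w.size(0) * keep_ratio))
    keep = torch.argsort(scores, descending=True)[:n_keep]
    pruned = nn.Conv2d(conv.in_channels, n_keep, conv.kernel_size,
                       conv.stride, conv.padding, bias=conv.bias is not None)
    pruned.weight.data = w[keep].clone()
    if conv.bias is not None:
        pruned.bias.data = conv.bias.data[keep].clone()
    return pruned, keep   # `keep` tells the next layer which inputs remain

conv = nn.Conv2d(3, 32, 3, padding=1)
small, kept = prune_conv_channels(conv, keep_ratio=0.25)
print(small.weight.shape)   # torch.Size([8, 3, 3, 3])
```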

16 pages, 6866 KiB  
Article
Real-Time Simulation and Sensor Performance Evaluation of Space-Based Infrared Point Target Group
by Chao Gong, Peng Rao and Yejin Li
Appl. Sci. 2023, 13(17), 9794; https://doi.org/10.3390/app13179794 - 30 Aug 2023
Viewed by 1387
Abstract
Small space targets usually appear as point sources when observed by space-based sensors. To ease the difficulty of obtaining real observation images and to overcome the limitations of the existing Systems Tool Kit electro-optical and infrared sensors (STK/EOIR) module in displaying and outputting point target observations from multiple constellation platforms, we provide a method for the fast simulation of point target groups using EOIR combined with external computation. A star lookup table based on the Midcourse Space Experiment (MSX) infrared astrometry catalog is built on a divided grid to generate the background. A Component Object Model (COM) interface connects to STK to enable the rapid deployment and visualization of complex simulation scenarios. Finally, the simulated images and infrared information are output automatically. Simulation experiments on point targets show that the method can support 20 sensors imaging groups of targets at 128 × 128 resolution and achieve 32 frames per second of real-time output at 1 K × 1 K resolution, providing an effective approach to spatial situational awareness and the construction of infrared target datasets.
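
A space-based point target occupies roughly one pixel, so simulators render it as a point spread function (PSF) splat. The NumPy sketch below renders sub-pixel targets as 2-D Gaussian PSFs on an empty frame; the Gaussian width is a placeholder for the sensor's optical blur, and the star background from the MSX lookup table is omitted.

```python
import numpy as np

def render_point_targets(size, targets, sigma=1.2):
    """Render sub-pixel point targets as 2-D Gaussian PSFs on a zero
    background. `targets` is a list of (row, col, intensity) with
    float positions; `sigma` is an assumed PSF width in pixels."""
    img = np.zeros((size, size), dtype=np.float32)
    yy, xx = np.mgrid[0:size, 0:size]
    for r, c, amp in targets:
        img += amp * np.exp(-((yy - r) ** 2 + (xx - c) ** 2) / (2 * sigma ** 2))
    return img

frame = render_point_targets(128, [(40.3, 60.7, 500.0), (90.0, 30.5, 200.0)])
print(round(float(frame.max())))   # brightest target peak, ~500
```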

21 pages, 3454 KiB  
Article
Swin–MRDB: Pan-Sharpening Model Based on the Swin Transformer and Multi-Scale CNN
by Zifan Rong, Xuesong Jiang, Linfeng Huang and Hongping Zhou
Appl. Sci. 2023, 13(15), 9022; https://doi.org/10.3390/app13159022 - 7 Aug 2023
Viewed by 1504
Abstract
Pan-sharpening aims to create high-resolution spectral images by fusing low-resolution hyperspectral (HS) images with high-resolution panchromatic (PAN) images. Inspired by the Swin Transformer used in image classification tasks, this research constructs a three-stream pan-sharpening network based on the Swin Transformer and a multi-scale feature extraction module. Unlike traditional convolutional neural network (CNN) pan-sharpening models, we use the Swin Transformer to establish global connections within the image and combine it with a multi-scale feature extraction module that extracts local features of different sizes. The model combines the advantages of the Swin Transformer and the CNN, enabling fused images to maintain good local detail and global linkage while mitigating distortion in hyperspectral images. To verify the effectiveness of the method, this paper evaluates the fused images with subjective visual and quantitative indicators. Experimental results show that the proposed method preserves the spatial and spectral information of images better than classical and recent models.

12 pages, 1000 KiB  
Article
Conformal Test Martingale-Based Change-Point Detection for Geospatial Object Detectors
by Gang Wang, Zhiying Lu, Ping Wang, Shuo Zhuang and Di Wang
Appl. Sci. 2023, 13(15), 8647; https://doi.org/10.3390/app13158647 - 27 Jul 2023
Viewed by 1197
Abstract
Unsupervised domain adaptation for object detectors addresses the problem of improving the cross-domain robustness of object detection from label-rich to label-poor domains, and it has been explored in many studies. However, one important issue, namely when to apply the domain adaptation algorithm for geospatial object detectors, has not been fully considered in the literature. In this paper, we tackle the problem of detecting the moment, or change-point, at which the domain of geospatial images changes, based on a conformal test martingale. Beyond the simple introduction of this martingale-based process, we also propose a novel transformation of the original conformal test martingale to make change-point detection more efficient. The experiments are conducted with two partitions of our released large-scale remote sensing dataset, and the results empirically demonstrate the effectiveness and efficiency of our proposed algorithms for change-point detection.
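
The core of a conformal test martingale is easy to state: convert each incoming score to a conformal p-value against the past, then multiply bets placed on those p-values; the product stays near 1 while the p-values are uniform and grows once they stop being uniform, i.e., once the domain changes. A NumPy sketch with the classic power betting function follows; the score here is a stand-in statistic (an assumption), and the paper's additional transformation of the martingale is not reproduced.

```python
import numpy as np

def conformal_martingale(scores, epsilon=0.5):
    """Conformal test martingale with the power betting function
    f(p) = eps * p**(eps - 1), a textbook variant.

    scores : one nonconformity score per incoming image, e.g. a
             detector-confidence statistic (an assumed choice).
    Returns the running martingale; large values signal a domain change.
    """
    rng = np.random.default_rng(0)
    past, mart, log_m = [], [], 0.0
    for s in scores:
        past.append(s)
        n = len(past)
        greater = sum(x > s for x in past)
        equal = sum(x == s for x in past)
        p = (greater + rng.uniform() * equal) / n   # smoothed conformal p-value
        log_m += np.log(epsilon * max(p, 1e-12) ** (epsilon - 1))
        mart.append(np.exp(log_m))
    return np.array(mart)

# In-distribution scores keep the martingale flat; a shift at t=100 makes it grow.
s = np.r_[np.random.default_rng(1).normal(0, 1, 100),
          np.random.default_rng(2).normal(3, 1, 50)]
print(conformal_martingale(s)[[50, 149]])
```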

19 pages, 1925 KiB  
Article
HRU-Net: High-Resolution Remote Sensing Image Road Extraction Based on Multi-Scale Fusion
by Anchao Yin, Chao Ren, Zhiheng Yan, Xiaoqin Xue, Weiting Yue, Zhenkui Wei, Jieyu Liang, Xudong Zhang and Xiaoqi Lin
Appl. Sci. 2023, 13(14), 8237; https://doi.org/10.3390/app13148237 - 15 Jul 2023
Cited by 2 | Viewed by 1900
Abstract
Road extraction from high-resolution satellite images has become a significant focus in the field of remote sensing image analysis. However, factors such as shadow occlusion and spectral confusion hinder the accuracy and consistency of road extraction from satellite images. To overcome these challenges, this paper presents a multi-scale fusion-based road extraction framework, HRU-Net, which exploits the various scales and resolutions of image features generated during encoding and decoding. First, during the encoding phase, we develop a multi-scale feature fusion module with upsampling capabilities (UMR module) to capture fine details, enhancing shadowed areas and road boundaries. Next, in the decoding phase, we design a multi-feature fusion module (MPF module) to obtain multi-scale spatial information, enabling better differentiation between roads and objects with similar spectral characteristics. The network simultaneously integrates multi-scale feature information during downsampling, producing high-resolution feature maps through progressive cross-layer connections and thereby enabling more effective high-resolution prediction. We conduct comparative experiments and quantitative evaluations of the proposed HRU-Net against existing algorithms (U-Net, ResNet, DeepLabV3, ResUnet, HRNet) on the Massachusetts Road Dataset, and we further compare three models (U-Net, HRNet, and HRU-Net) on the DeepGlobe Road Dataset. The experimental results demonstrate that HRU-Net outperforms its counterparts in terms of accuracy and mean intersection over union. In summary, the proposed HRU-Net skillfully exploits information from feature maps of different resolutions, effectively addressing the discontinuous road extraction and reduced accuracy caused by shadow occlusion and spectral confusion; in complex satellite image scenarios, the model accurately extracts complete road regions.

15 pages, 2531 KiB  
Article
Semantic Segmentation with High-Resolution Sentinel-1 SAR Data
by Hakan Erten, Erkan Bostanci, Koray Acici, Mehmet Serdar Guzel, Tunc Asuroglu and Ayhan Aydin
Appl. Sci. 2023, 13(10), 6025; https://doi.org/10.3390/app13106025 - 14 May 2023
Viewed by 3159
Abstract
High-resolution radar images of the Earth's surface are supplied by Synthetic Aperture Radar (SAR) systems, and semantic SAR image segmentation offers a computer-based solution that makes segmentation tasks easier. When conducting scientific research, free access to datasets and images with low noise levels is rare; SAR images, however, can be accessed for free. We propose a novel process for labeling Sentinel-1 SAR radar images, which the European Space Agency (ESA) provides free of charge. This process involves denoising the images and using the automatically created dataset with state-of-the-art deep neural networks to improve the results of the semantic segmentation task. To exhibit the power of our denoising process, we compare the results on our newly created dataset between its speckled and noise-free versions. We attained a mean intersection over union (mIoU) of 70.60% and an overall pixel accuracy (PA) of 92.23% with the HRNet model. The deep learning segmentation methods were also assessed with the McNemar test. Our experiments on the newly created Sentinel-1 dataset establish that combining our pipeline with deep neural networks yields recognizable improvements in challenging semantic segmentation accuracy and mIoU values.
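
Speckle removal is the step that makes SAR segmentation labels usable. The classic Lee filter below is one standard choice and merely stands in for the paper's denoising pipeline, whose exact design may differ; the equivalent-number-of-looks value is a placeholder.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(img, size=7, enl=4.8):
    """Classic Lee despeckling filter for SAR intensity images.
    `enl` (equivalent number of looks) and `size` are assumed values."""
    mean = uniform_filter(img, size)
    sq_mean = uniform_filter(img ** 2, size)
    var = np.maximum(sq_mean - mean ** 2, 0)
    noise_var = mean ** 2 / enl                   # multiplicative speckle model
    gain = var / np.maximum(var + noise_var, 1e-12)
    return mean + gain * (img - mean)             # adaptive smoothing

# Unit-mean gamma speckle with ~4.8 looks as a toy test image
speckled = np.random.default_rng(0).gamma(4.8, 1 / 4.8, (256, 256))
print(lee_filter(speckled).std() < speckled.std())   # True: smoother output
```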

20 pages, 4320 KiB  
Article
Research and Implementation of High Computational Power for Training and Inference of Convolutional Neural Networks
by Tianling Li, Bin He and Yangyang Zheng
Appl. Sci. 2023, 13(2), 1003; https://doi.org/10.3390/app13021003 - 11 Jan 2023
Cited by 6 | Viewed by 2522
Abstract
Algorithms and computing power have consistently been the two driving forces behind the development of artificial intelligence. The computational power of a platform has a significant impact on the implementation cost, performance, power consumption, and flexibility of an algorithm. Currently, AI models are mainly trained on high-performance GPU platforms, and their inference can be implemented on GPUs, CPUs, or FPGAs. On the one hand, due to their high power consumption and cost, GPUs are not suitable for power- and cost-sensitive application scenarios. On the other hand, because training and inference run on different computing platforms, the neural network model's data must be transferred between platforms of varying computing power, which limits the data processing capability of the network and affects its real-time performance and flexibility. This paper focuses on a high-computing-power implementation method that integrates convolutional neural network (CNN) training and inference, proposing to implement both on a high-performance heterogeneous architecture (HA) device with a field programmable gate array (FPGA) as its core. The numerous repeated multiply-accumulate operations in CNN training and inference are implemented in programmable logic (PL), which significantly improves the speed of training and inference and reduces overall power consumption, providing a modern implementation method for neural networks in application fields that are sensitive to power, cost, and footprint. First, based on the data stream containing the CNN training and inference process, this study investigates methods to merge the training and inference data streams. Second, a high-level language was used to describe the merged data stream structure, the high-level description was converted to a hardware register transfer level (RTL) description by a high-level synthesis (HLS) tool, and an intellectual property (IP) core was generated. The processing system (PS) was used for overall control, data preprocessing, and result analysis, and it was connected to the IP core via an on-chip AXI bus interface in the HA device. Finally, the integrated implementation method was tested and validated on a Xilinx HA device using the MNIST handwritten digit validation set. According to the test results, compared with a GPU, the model trained in the HA device's PL achieves the same convergence rate in only 78.04% of the training time. With a processing time of only 3.31 ms and 0.65 ms per single-frame image, an average recognition accuracy of 95.697%, and an overall power consumption of only 3.22 W @ 100 MHz, the two convolutional neural networks described in this paper are suitable for deployment in lightweight domains with limited power budgets.

21 pages, 4430 KiB  
Article
A Multispectral and Panchromatic Images Fusion Method Based on Weighted Mean Curvature Filter Decomposition
by Yuetao Pan, Danfeng Liu, Liguo Wang, Shishuai Xing and Jón Atli Benediktsson
Appl. Sci. 2022, 12(17), 8767; https://doi.org/10.3390/app12178767 - 31 Aug 2022
Cited by 3 | Viewed by 1869
Abstract
Due to the hardware limitations of satellite sensors, the spatial resolution of multispectral (MS) images still does not match that of panchromatic (PAN) images, so obtaining MS images with high spatial resolution is especially important in the field of remote sensing image fusion. To obtain MS images with high spatial and spectral resolutions, a novel MS and PAN image fusion method based on weighted mean curvature filter (WMCF) decomposition is proposed in this paper. First, a weighted local spatial frequency-based (WLSF) fusion method fuses all the bands of an MS image to generate an intensity component IC. In accordance with an image matting model, IC is taken as the initial α channel for spectral estimation to obtain foreground and background images. Second, a PAN image is decomposed into small-scale (SS), large-scale (LS), and basic images by a weighted mean curvature filter (WMCF) and a Gaussian filter (GF). The multi-scale morphological detail measure (MSMDM) value is used as the input of the Parameters Automatic Calculation Pulse Coupled Neural Network (PAC-PCNN) model. With the MSMDM-guided PAC-PCNN model, the basic image and IC are effectively fused. The fused image, together with the LS and SS images, is linearly combined to construct the final α channel. Finally, in accordance with the image matting model, the foreground image, the background image, and the final α channel are used to reconstruct the final fused image. The experimental results on four image pairs show that the proposed method achieves superior results in terms of both subjective and objective evaluations. In particular, the proposed method fuses MS and PAN images with different spatial and spectral resolutions with higher operational efficiency, making it an effective means of obtaining images with higher spatial and spectral resolution.

16 pages, 2652 KiB  
Article
Research on Seismic Signal Analysis Based on Machine Learning
by Xinxin Yin, Feng Liu, Run Cai, Xiulong Yang, Xiaoyue Zhang, Meiling Ning and Siyuan Shen
Appl. Sci. 2022, 12(16), 8389; https://doi.org/10.3390/app12168389 - 22 Aug 2022
Cited by 11 | Viewed by 3917
Abstract
In this paper, the state-of-the-art time series classification method MiniRocket was used to classify earthquakes, blasts, and background noise. A comprehensive analysis was carried out from supervised to unsupervised classification, and the supervised method ultimately achieved excellent results. The relatively simple MiniRocket model is only a one-dimensional convolutional structure, yet it achieved the best overall results, and its computational efficiency far exceeds that of other supervised classification methods. Our experimental results show that the MiniRocket model can effectively extract the decisive features of seismic sensing signals. To eliminate the tedious work of labeling data, we propose novel lightweight collaborative learning for seismic sensing signals (LCL-SSS), which combines the feature extraction of MiniRocket with unsupervised classification. The new method gives new vitality to unsupervised classification methods that could not be used otherwise and opens a new path for the unsupervised classification of seismic sensing signals.
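
MiniRocket is available off the shelf, and the canonical recipe pairs its random-convolution features with a linear ridge classifier. Below is a sketch using sktime's implementation (import path as in recent sktime versions, an assumption) on stand-in waveforms; the data and labels are toy placeholders, not the paper's seismic catalog.

```python
import numpy as np
from sktime.transformations.panel.rocket import MiniRocket
from sklearn.linear_model import RidgeClassifierCV

# Toy stand-in for labelled traces: (n_instances, n_channels, length)
rng = np.random.default_rng(0)
X = rng.standard_normal((60, 1, 3000))      # placeholder waveforms
y = rng.integers(0, 3, 60)                  # 0=earthquake, 1=blast, 2=noise

# MiniRocket turns each trace into a fixed-length feature vector;
# a linear ridge classifier on top is the method authors' standard recipe.
trans = MiniRocket(random_state=0)
features = trans.fit_transform(X)
clf = RidgeClassifierCV(alphas=np.logspace(-3, 3, 10)).fit(features, y)
print(clf.score(features, y))
```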

14 pages, 2745 KiB  
Article
Fusion Information Multi-View Classification Method for Remote Sensing Cloud Detection
by Qi Hao, Wenguang Zheng and Yingyuan Xiao
Appl. Sci. 2022, 12(14), 7295; https://doi.org/10.3390/app12147295 - 20 Jul 2022
Cited by 3 | Viewed by 1496
Abstract
In recent years, many studies have been carried out to detect clouds in remote sensing images. Because of complex terrain and the wide variety of cloud types, cloud density and content vary greatly, and current models have difficulty detecting clouds in such images accurately. In our strategy, a multi-view training set based on superpixels is constructed. View A uses a multi-level network to extract the boundary, texture, and deep abstract features of superpixels. View B contains the statistical features of the image's three channels. The privileged information, View P, contains the cloud content of each superpixel and the label status of adjacent superpixels. Finally, we propose a cloud detection method for remote sensing image classification based on a multi-view support vector machine (SVM). The proposed method is tested on images of different terrain and cloud distributions in the GF-1_WHU and Cloud-38 remote sensing datasets. Visual inspection and quantitative analysis show that the method delivers excellent cloud detection performance.
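
To give a feel for the View B style of statistical features, the sketch below segments an image into superpixels with SLIC and trains an SVM on per-superpixel channel statistics. View A's deep features and the privileged View P are omitted, and the image and labels are toy placeholders.

```python
import numpy as np
from skimage.segmentation import slic
from sklearn.svm import SVC

def superpixel_stats(img, n_segments=300):
    """Per-superpixel mean/std of each channel, a simple stand-in for
    statistical per-superpixel features."""
    seg = slic(img, n_segments=n_segments, start_label=0)
    feats = []
    for s in range(seg.max() + 1):
        m = img[seg == s]                       # pixels of one superpixel
        feats.append(np.r_[m.mean(axis=0), m.std(axis=0)])
    return seg, np.array(feats)

img = np.random.default_rng(0).random((128, 128, 3))  # placeholder scene
seg, X = superpixel_stats(img)
y = np.random.default_rng(1).integers(0, 2, len(X))   # 1 = cloud (toy labels)
clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict(X[:5]))
```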

17 pages, 7098 KiB  
Article
SBNN: A Searched Binary Neural Network for SAR Ship Classification
by Hairui Zhu, Shanhong Guo, Weixing Sheng and Lei Xiao
Appl. Sci. 2022, 12(14), 6866; https://doi.org/10.3390/app12146866 - 7 Jul 2022
Cited by 5 | Viewed by 2163
Abstract
Synthetic aperture radar (SAR) ocean surveillance missions require low-latency, lightweight inference. This paper proposes a novel small Searched Binary Neural Network (SBNN), obtained with network architecture search (NAS), for ship classification with SAR. In the SBNN, convolution operations are modified by binarization technologies: both input feature maps and weights are quantized to 1 bit in most of the convolution computations, which significantly decreases the overall computational complexity. In addition, we propose a patch-shift process that adjusts feature maps with learnable parameters at the spatial level, enhancing performance by suppressing information irrelevant to the targets. Experimental results on the OpenSARShip dataset show that the proposed SBNN outperforms both binary neural networks from computer vision and CNN-based SAR ship classification methods. In particular, the SBNN shows a great advantage in computational complexity.
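
The 1-bit convolution at the heart of such networks can be sketched with a sign quantizer and a straight-through gradient estimator, as below; the learned scaling factors and the proposed patch-shift processing are omitted, so this is a generic binarized convolution rather than the SBNN block itself.

```python
import torch
import torch.nn as nn

class BinarizeSTE(torch.autograd.Function):
    """Sign binarization with a straight-through gradient estimator."""
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, g):
        (x,) = ctx.saved_tensors
        return g * (x.abs() <= 1).float()   # clip gradient outside [-1, 1]

class BinaryConv2d(nn.Conv2d):
    """Conv2d whose weights and inputs are quantized to 1 bit in the
    forward pass; scaling factors are omitted in this sketch."""
    def forward(self, x):
        wb = BinarizeSTE.apply(self.weight)
        xb = BinarizeSTE.apply(x)
        return nn.functional.conv2d(xb, wb, self.bias, self.stride,
                                    self.padding, self.dilation, self.groups)

x = torch.randn(1, 16, 32, 32)
print(BinaryConv2d(16, 32, 3, padding=1)(x).shape)  # (1, 32, 32, 32)
```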

17 pages, 3870 KiB  
Article
Research on the Lightweight Deployment Method of Integration of Training and Inference in Artificial Intelligence
by Yangyang Zheng, Bin He and Tianling Li
Appl. Sci. 2022, 12(13), 6616; https://doi.org/10.3390/app12136616 - 29 Jun 2022
Cited by 1 | Viewed by 2188
Abstract
In recent years, the continuous development of artificial intelligence has largely been driven by algorithms and computing power. This paper discusses the training and inference methods of artificial intelligence mainly from the perspective of computing power, which must balance performance, cost, power consumption, flexibility, and robustness. At present, artificial intelligence models are mostly trained on GPU platforms. Although GPUs offer high computing performance, their power consumption and cost are relatively high, so they are unsuitable as the implementation platform in application scenarios with demanding power and cost constraints. The emergence of high-performance heterogeneous architecture devices provides a new path for integrating artificial intelligence training and inference. Typically, in Xilinx and Intel's multi-core heterogeneous architectures, multiple high-performance processors and FPGAs are integrated into a single chip. Compared with the current separate training and inference method, heterogeneous architectures use a single chip to integrate AI training and inference, balancing the differing goals of training and inference, further reducing the cost and power consumption of AI training and inference, achieving the goal of lightweight computation, and improving the flexibility and robustness of the system. In this paper, based on the LeNet-5 network structure, we first introduce the process of training the network with the multi-core CPU in Xilinx's latest multi-core heterogeneous architecture device, the MPSoC. We then study how to convert the network model into a hardware logic implementation, transferring the model parameters from the device's processing system to a hardware accelerator composed of programmable logic via the on-chip AXI bus interface. Finally, the integrated implementation method was tested and verified on a Xilinx MPSoC. According to the test results, the recognition accuracy of this lightweight deployment scheme reaches 99.5% on the MNIST dataset and 75.4% on the CIFAR-10 dataset, while the average processing time per frame is only 2.2 ms. In addition, the power consumption of the network within the SoC hardware accelerator is only 1.363 W at 100 MHz.

17 pages, 3830 KiB  
Article
Object Detection in Remote Sensing Images by Combining Feature Enhancement and Hybrid Attention
by Jin Zheng, Tong Wang, Zhi Zhang and Hongwei Wang
Appl. Sci. 2022, 12(12), 6237; https://doi.org/10.3390/app12126237 - 19 Jun 2022
Cited by 3 | Viewed by 1917
Abstract
Objects in remote sensing images exhibit large-scale variations and arbitrary directions and are usually densely arranged, while small objects are easily submerged by background noise; all of this hinders accurate object detection. To address these problems, this paper proposes an object detection method combining feature enhancement and hybrid attention. First, a feature enhancement fusion network (FEFN) is designed, which carries out dilated convolutions with different dilation rates on multi-layer features and thus fuses multi-scale, multi-receptive-field feature maps to enhance the original features, yielding more robust and discriminative features that adapt to objects of various scales. Then, a hybrid attention mechanism (HAM) module composed of pixel attention and channel attention is proposed. Through the context dependence and channel correlation introduced by pixel attention and channel attention, respectively, HAM enables the network to focus on object features and suppress background noise. Finally, box boundary-aware vectors are used to determine the locations of objects and to detect arbitrarily oriented objects accurately, even when they are densely arranged. Experiments on the public DOTA dataset show that the proposed method achieves 75.02% mAP, an improvement of 2.7% mAP over BBAVectors.

11 pages, 2632 KiB  
Article
GPU-Accelerated Target Strength Prediction Based on Multiresolution Shooting and Bouncing Ray Method
by Gang Zhao, Naiwei Sun, Shen Shen, Xianyun Wu and Li Wang
Appl. Sci. 2022, 12(12), 6119; https://doi.org/10.3390/app12126119 - 16 Jun 2022
Cited by 6 | Viewed by 1470
Abstract
The application of the traditional planar acoustics method is limited by its low accuracy when computing the echo characteristics of underwater targets. Building on the shooting and bouncing ray concept, which accounts for multiple reflections on the basis of the geometric optics principle, this paper presents a more efficient GPU-accelerated multiresolution grid algorithm within the shooting and bouncing ray (SBR) method to quickly predict the target strength of complex underwater targets. Virtual aperture plane generation, ray tracing, the scattered sound field integral, and the subdivision of divergent ray tubes are all implemented on the GPU. In particular, stackless KD-tree traversal is adopted to effectively improve ray-tracing efficiency. Experiments on rigid sphere, cylinder, and corner reflector models verify the accuracy of the GPU-based multiresolution SBR. Moreover, the GPU-based SBR is more than 750 times faster than the CPU version owing to the GPU's tremendous computing capability, and the proposed GPU-based multiresolution SBR improves runtime performance by at least 2.4 times over the single-resolution GPU-based SBR.

11 pages, 2412 KiB  
Article
Sparse Data-Extended Fusion Method for Sea Surface Temperature Prediction on the East China Sea
by Xiaoliang Wang, Lei Wang, Zhiwei Zhang, Kuo Chen, Yingying Jin, Yijun Yan and Jingjing Liu
Appl. Sci. 2022, 12(12), 5905; https://doi.org/10.3390/app12125905 - 10 Jun 2022
Cited by 3 | Viewed by 1705
Abstract
An accurate temperature background field plays a vital role in the numerical prediction of sea surface temperature (SST). At present, the SST background field is mainly derived from multi-source data fusion, combining satellite SST data with in situ data from marine stations, buoys, and voluntary observing ships. Satellite SST data offer wide coverage but low accuracy, whereas in situ data have high accuracy but sparse distribution. To obtain a more accurate temperature background field and to fuse as much measured data with satellite data as possible, we propose a sparse data-extended fusion method for SST prediction. Using this method, observed station and buoy data in the East China Sea are fused with Advanced Very High Resolution Radiometer (AVHRR) Pathfinder Version 5.0 SST data. The temperature field in the study area is then predicted using Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) deep learning methods, and the results are verified against traditional prediction methods. The experimental results show that the proposed method obtains more accurate predictions and effectively compensates for the uncertainty caused by the parameterization of ocean dynamic processes, the discretization method, and errors in the initial conditions.
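
A minimal PyTorch sketch of the forecasting component: a recurrent network reading a window of past SST values and predicting the next step. Layer sizes and window length are illustrative assumptions; swapping nn.LSTM for nn.GRU gives the paper's second model.

```python
import torch
import torch.nn as nn

class SSTForecaster(nn.Module):
    """One-step-ahead SST prediction from a window of past values
    (hidden size is a placeholder, not the paper's configuration)."""
    def __init__(self, hidden=64):
        super().__init__()
        self.rnn = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):             # x: (batch, window, 1)
        out, _ = self.rnn(x)
        return self.head(out[:, -1])  # predict the next time step's SST

model = SSTForecaster()
window = torch.randn(8, 30, 1)        # 8 series, 30 past days each
print(model(window).shape)            # torch.Size([8, 1])
```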

17 pages, 18417 KiB  
Article
A Dense Feature Pyramid Network for Remote Sensing Object Detection
by Yu Sun, Wenkai Liu, Yangte Gao, Xinghai Hou and Fukun Bi
Appl. Sci. 2022, 12(10), 4997; https://doi.org/10.3390/app12104997 - 15 May 2022
Cited by 8 | Viewed by 2683
Abstract
In recent years, object detection in remote sensing images has become a popular topic in computer vision research. However, remote sensing object detection faces various problems, such as complex scenes, small objects in large fields of view, and multi-scale objects across different categories. To address these issues, we propose DFPN-YOLO, a dense feature pyramid network for remote sensing object detection. To tackle the difficulty of detecting small objects in large scenes, we add a larger detection layer on top of the three detection layers of YOLOv3, and we propose Dense-FPN, a dense feature pyramid network structure that enables all four detection layers to combine semantic information before and after sampling, improving detection performance at different scales. In addition, we add an attention module to the residual blocks of the backbone so that the network can quickly extract key feature information in complex scenes. The results show that the mean average precision (mAP) of our method on the RSOD dataset reaches 92%, 8% higher than that of YOLOv3, while on the DIOR dataset the mAP increases from 62.41% with YOLOv3 to 69.33% with our method, outperforming even YOLOv4.

16 pages, 3654 KiB  
Article
Generative Adversarial Networks for Zero-Shot Remote Sensing Scene Classification
by Zihao Li, Daobing Zhang, Yang Wang, Daoyu Lin and Jinghua Zhang
Appl. Sci. 2022, 12(8), 3760; https://doi.org/10.3390/app12083760 - 8 Apr 2022
Cited by 11 | Viewed by 2429
Abstract
Deep learning-based methods succeed in remote sensing scene classification (RSSC). However, current methods require training on a large dataset and do not work well for classes that do not appear in the training set. Zero-shot classification methods are designed to address the classification of such unseen categories, and the generative adversarial network (GAN) is a popular tool for this. Our approach therefore aims to achieve zero-shot RSSC based on GANs. We employ a conditional Wasserstein generative adversarial network (WGAN) to generate image features. Since remote sensing images exhibit inter-class similarity and intra-class diversity, we introduce a classification loss, a semantic regression module, and a class-prototype loss to constrain the generator. The classification loss preserves inter-class discrimination; the semantic regression module ensures that the generated image features can represent the semantic features; and the class-prototype loss ensures the intra-class diversity of the synthesized image features and avoids generating overly homogeneous features. We also study the effect of different semantic embeddings on zero-shot RSSC. Experiments on three datasets show that our method outperforms the state-of-the-art zero-shot RSSC methods in most cases.
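
The generator side of such an approach can be sketched as a conditional feature generator: noise concatenated with a class semantic embedding is mapped to an image-feature vector. Dimensions below are placeholders, and the paper's three extra losses (classification, semantic regression, class-prototype) and the WGAN critic are not shown.

```python
import torch
import torch.nn as nn

class FeatureGenerator(nn.Module):
    """Conditional generator G(z, s) -> image feature, conditioned on a
    class semantic embedding s (all dimensions are assumed values)."""
    def __init__(self, z_dim=100, sem_dim=300, feat_dim=2048):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim + sem_dim, 1024), nn.LeakyReLU(0.2),
            nn.Linear(1024, feat_dim), nn.ReLU())

    def forward(self, z, sem):
        return self.net(torch.cat([z, sem], dim=1))

gen = FeatureGenerator()
z = torch.randn(4, 100)
sem = torch.randn(4, 300)        # e.g. word-vector class embeddings
fake_feats = gen(z, sem)
# Features synthesized for unseen classes can then train an ordinary
# classifier, turning zero-shot RSSC into a supervised problem.
print(fake_feats.shape)          # torch.Size([4, 2048])
```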

15 pages, 29734 KiB  
Article
DCS-TransUperNet: Road Segmentation Network Based on CSwin Transformer with Dual Resolution
by Zheng Zhang, Chunle Miao, Chang’an Liu and Qing Tian
Appl. Sci. 2022, 12(7), 3511; https://doi.org/10.3390/app12073511 - 30 Mar 2022
Cited by 31 | Viewed by 3322
Abstract
Recent advances in deep learning have shown remarkable performance in road segmentation from remotely sensed images. However, methods based on convolutional neural networks (CNNs) cannot capture long-range dependencies and global contextual information because of their intrinsic inductive biases. Motivated by the success of the Transformer in computer vision (CV), many strong Transformer-based models have emerged. However, fixed-scale patches limit further improvement of model performance. To address this problem, a dual-resolution road segmentation network (DCS-TransUperNet) with a feature fusion module (FFM) was proposed for road segmentation. First, the encoder of DCS-TransUperNet was designed based on the CSwin Transformer, using dual subnetwork encoders of different scales to obtain coarse- and fine-grained feature representations. Second, a new FFM was constructed to build enhanced feature representations with global dependencies from the different-scale features of the subnetwork encoders. Third, a mixed loss function was designed to avoid the local optima caused by the imbalance between road and background pixels. Experiments on the Massachusetts and DeepGlobe datasets showed that the proposed DCS-TransUperNet effectively solves the discontinuity problem and preserves the integrity of the road segmentation results, achieving a higher IoU (65.36% on the Massachusetts dataset and 56.74% on DeepGlobe) than other state-of-the-art methods. This considerable performance also demonstrates the strong generalization ability of our method. Full article
(This article belongs to the Special Issue Intelligent Computing and Remote Sensing)
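
The abstract names a mixed loss for the road/background imbalance without specifying it; a BCE-plus-Dice combination is a common choice for exactly this situation and is used below purely as an assumed example.

```python
import torch
import torch.nn.functional as F

def mixed_loss(logits, target, dice_weight=0.5, eps=1e-6):
    """logits, target: (N, 1, H, W); target is a binary road mask."""
    bce = F.binary_cross_entropy_with_logits(logits, target)
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum(dim=(1, 2, 3))
    union = prob.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    dice = 1 - (2 * inter + eps) / (union + eps)    # insensitive to class imbalance
    return (1 - dice_weight) * bce + dice_weight * dice.mean()

logits = torch.randn(2, 1, 256, 256)
mask = (torch.rand(2, 1, 256, 256) > 0.95).float()  # roads are a tiny fraction of pixels
print(mixed_loss(logits, mask).item())
```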

16 pages, 21960 KiB  
Article
Cloudformer: Supplementary Aggregation Feature and Mask-Classification Network for Cloud Detection
by Zheng Zhang, Zhiwei Xu, Chang’an Liu, Qing Tian and Yanping Wang
Appl. Sci. 2022, 12(7), 3221; https://doi.org/10.3390/app12073221 - 22 Mar 2022
Cited by 16 | Viewed by 2374
Abstract
Cloud detection is an important step in the processing of optical satellite remote-sensing data. In recent years, deep learning methods have achieved excellent results in cloud detection tasks. However, most current models have difficulty accurately classifying similar objects (e.g., clouds and snow) and accurately detecting clouds that occupy only a few pixels in an image. To solve these problems, a cloud-detection framework (Cloudformer) combining a CNN and a Transformer is proposed to achieve high-precision cloud detection in optical remote-sensing images. The framework achieves accurate detection of thin and small clouds using a pyramidal-structure encoder. It also achieves accurate classification of similar objects using a dual-path CNN-and-Transformer decoder, reducing the rates of missed detections and false alarms. In addition, since the Transformer model lacks the perception of location information, an asynchronous position-encoding method is proposed to enhance the position information of the data entering the Transformer module and to optimize the detection results. Cloudformer is evaluated on two datasets, AIR-CD and 38-Cloud, and the results show that it achieves state-of-the-art performance. Full article
(This article belongs to the Special Issue Intelligent Computing and Remote Sensing)
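
A minimal sketch of the dual-path decoder idea: a convolutional branch for local detail and a Transformer branch for global context, fused by addition before the classification head. The layer sizes, depth, and fusion rule are assumptions, not the paper's design, and the asynchronous position encoding is omitted.

```python
import torch
import torch.nn as nn

class DualPathDecoder(nn.Module):
    """CNN branch for local detail, Transformer branch for global context,
    fused by addition before the classification head."""
    def __init__(self, channels=256, n_classes=2):
        super().__init__()
        self.cnn_path = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True))
        layer = nn.TransformerEncoderLayer(d_model=channels, nhead=8, batch_first=True)
        self.tr_path = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Conv2d(channels, n_classes, 1)

    def forward(self, x):                        # x: (N, C, H, W) encoder feature
        local = self.cnn_path(x)
        n, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)    # (N, H*W, C) token sequence
        glob = self.tr_path(tokens).transpose(1, 2).reshape(n, c, h, w)
        return self.head(local + glob)           # fuse the two paths

dec = DualPathDecoder()
print(dec(torch.randn(1, 256, 16, 16)).shape)    # torch.Size([1, 2, 16, 16])
```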

20 pages, 8652 KiB  
Article
Saliency Guided DNL-Yolo for Optical Remote Sensing Images for Off-Shore Ship Detection
by Jian Guo, Shuchen Wang and Qizhi Xu
Appl. Sci. 2022, 12(5), 2629; https://doi.org/10.3390/app12052629 - 3 Mar 2022
Cited by 5 | Viewed by 2199
Abstract
The complexity of changeable marine backgrounds makes ship detection from satellite remote sensing images a challenging task. The ubiquitous interference of cloud and fog leads to missed detections and false alarms in optical satellite remote sensing imagery. An off-shore ship detection method with scene classification and a saliency-tuned YOLONet is proposed to solve this problem. First, image blocks are classified into four categories by a density peak clustering (DPC) algorithm according to their grayscale histograms: cloudless areas, thin cloud areas, scattered cloud areas, and thick cloud areas. Second, since ships can be regarded as salient objects against a marine background, the spectral residual saliency detection method is used to extract prominent targets from the different image blocks. Finally, a saliency-tuned YOLOv4 network is designed to quickly and accurately detect ships against different marine backgrounds. We validated the proposed method using more than 2000 optical remote sensing images from the GF-1 satellite. The experimental results demonstrate that the proposed method obtains better detection performance than other state-of-the-art methods. Full article
(This article belongs to the Special Issue Intelligent Computing and Remote Sensing)
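
The spectral residual saliency method named here is the classic algorithm of Hou and Zhang (CVPR 2007); the sketch below follows the standard formulation with assumed default filter sizes, which are not taken from the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def spectral_residual_saliency(gray, blur_sigma=2.5):
    """gray: 2-D float array; returns a saliency map of the same shape."""
    f = np.fft.fft2(gray)
    log_amp = np.log1p(np.abs(f))            # log1p rather than log, to avoid log(0)
    phase = np.angle(f)
    residual = log_amp - uniform_filter(log_amp, size=3)   # the spectral residual
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return gaussian_filter(sal, sigma=blur_sigma)          # smooth the saliency map

img = np.random.rand(128, 128)               # stand-in for a sea-surface image block
print(spectral_residual_saliency(img).shape) # (128, 128)
```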

19 pages, 1244 KiB  
Article
SAR Target Incremental Recognition Based on Hybrid Loss Function and Class-Bias Correction
by Yongsheng Zhou, Shuo Zhang, Xiaokun Sun, Fei Ma and Fan Zhang
Appl. Sci. 2022, 12(3), 1279; https://doi.org/10.3390/app12031279 - 25 Jan 2022
Cited by 10 | Viewed by 2873
Abstract
A Synthetic Aperture Radar (SAR) target recognition model usually needs to be retrained on all samples when new samples of new targets arrive. Incremental learning has emerged to continuously acquire new knowledge from new data while preserving most previously learned knowledge, saving both time and storage. Three problems remain in existing incremental learning methods: (1) the recognition performance on old target classes degrades significantly during the incremental process; (2) target classes are easily confused as the number of similar target classes increases; (3) the model is biased toward new target classes due to class imbalance. To address these problems, first, old-sample preservation and knowledge distillation were introduced to preserve both representative old knowledge and the knowledge structure. Second, a class separation loss function was designed to reduce the intra-class distance and increase the inter-class distance, effectively avoiding confusion between old and new classes. Third, a bias correction layer and a linear model were designed, enabling the model to treat old and new target classes more fairly and eliminating the bias. Experimental results on the MSTAR dataset verified the superior performance of this approach compared with other incremental learning methods. Full article
(This article belongs to the Special Issue Intelligent Computing and Remote Sensing)
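
As a hedged illustration of the old-knowledge preservation and bias correction steps, the sketch below combines a standard knowledge-distillation term with a BiC-style scale-and-shift on the new-class logits. The temperature, the loss weight, and the restriction of distillation to old-class logits are assumptions, and the paper's class separation loss is omitted.

```python
import torch
import torch.nn.functional as F

def distillation_loss(new_logits, old_logits, T=2.0):
    """KL divergence between the new and old models' softened old-class outputs."""
    log_p_new = F.log_softmax(new_logits / T, dim=1)
    p_old = F.softmax(old_logits / T, dim=1)
    return F.kl_div(log_p_new, p_old, reduction="batchmean") * (T * T)

def total_loss(logits, labels, old_logits, n_old, lam=1.0):
    ce = F.cross_entropy(logits, labels)                    # learn the new classes
    kd = distillation_loss(logits[:, :n_old], old_logits)   # preserve old knowledge
    return ce + lam * kd

# BiC-style bias correction: a learned scale and shift applied only to the
# new-class logits to offset the class imbalance.
alpha = torch.ones(1, requires_grad=True)
beta = torch.zeros(1, requires_grad=True)
def bias_correct(logits, n_old):
    corrected_new = alpha * logits[:, n_old:] + beta
    return torch.cat([logits[:, :n_old], corrected_new], dim=1)

logits = torch.randn(4, 10, requires_grad=True)   # 7 old classes + 3 new classes
old_logits = torch.randn(4, 7)                    # frozen old model's outputs
labels = torch.randint(0, 10, (4,))
total_loss(bias_correct(logits, 7), labels, old_logits, n_old=7).backward()
```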

16 pages, 4011 KiB  
Article
On-Board Flickering Pixel Dynamic Suppression Method Based on Multi-Feature Fusion
by Liangjie Jia, Peng Rao, Xin Chen and Shanchang Qiu
Appl. Sci. 2022, 12(1), 198; https://doi.org/10.3390/app12010198 - 25 Dec 2021
Cited by 1 | Viewed by 2557
Abstract
Blind pixel suppression is a key preprocessing step for guaranteeing real-time space-based infrared point target (IRPT) detection and tracking. Flickering pixels, one type of blind pixel, are hard to suppress because of their randomness. At present, common methods that rely on a single feature generally need to accumulate dozens or hundreds of frames to ensure detection accuracy, so they cannot update flickering pixels frequently; with such a low detection frequency, flickering pixels are easily missed. In this paper, we propose an on-board flickering pixel dynamic suppression method based on multi-feature fusion. The visual and motion features of flickering pixels are extracted from the results of IRPT detection and tracking. Then, a flickering pixel confidence evaluation strategy and a flickering pixel selection mechanism are introduced to fuse these features, achieving accurate flickering pixel suppression using only about a dozen frames. Experimental results on real images of four scenarios show that the blind pixel false detection rate of the proposed method is no more than 1.02%. Meanwhile, on simulated images, the flickering pixel missed suppression rate is no more than 2.38%, and the flickering pixel false suppression rate is 0. The proposed method can complement most other IRPT detection methods, guaranteeing the near-real-time performance and reliability of on-board IRPT detection applications. Full article
(This article belongs to the Special Issue Intelligent Computing and Remote Sensing)
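
A toy sketch of the fusion-and-selection idea: per-pixel visual and motion scores are combined into a confidence value and thresholded. The weights, threshold, and score definitions are placeholders, not values from the paper.

```python
def flicker_confidence(visual_score, motion_score, w_visual=0.6, w_motion=0.4):
    """Both scores in [0, 1]; higher means more flicker-like behavior."""
    return w_visual * visual_score + w_motion * motion_score

def select_flickering(candidates, threshold=0.7):
    """candidates: iterable of ((row, col), visual_score, motion_score)."""
    return [xy for xy, v, m in candidates
            if flicker_confidence(v, m) >= threshold]

cands = [((10, 42), 0.9, 0.8),   # bright, erratic pixel -> flagged as flickering
         ((55, 7), 0.3, 0.2)]    # stable pixel -> kept
print(select_flickering(cands))  # [(10, 42)]
```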
