Image Processing and Analysis: Trends in Registration, Data Fusion, 3D Reconstruction, and Change Detection II

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (30 June 2023) | Viewed by 54935

Special Issue Editors


Dr. Riccardo Roncella, Guest Editor
Dr. Mattia Previtali, Guest Editor

Special Issue Information

Dear Colleagues,

Satellite, aerial, UAV, and terrestrial imaging techniques are constantly evolving in terms of data volume, quality, and variety. Earth observation programs, both public and private, are making available a growing amount of multitemporal data, often publicly accessible, at increased spatial resolution and revisit frequency. At the opposite end of the platform scale, UAVs, thanks to their greater flexibility, represent a new paradigm for acquiring high-resolution information at high frequency. Similarly, consumer-grade 360° cameras and hyperspectral sensors are increasingly widespread across terrestrial platforms and applications.

Remotely sensed data can provide the basis for timely and efficient analysis in several fields, such as land use and environmental monitoring, cultural heritage, archaeology, precision farming, and human activity monitoring, among other engaging research and practical fields of interest. Data availability, the increasing need for fast and reliable responses, and the growing number of active (but often unskilled) users all pose relevant new challenges in research fields connected to data registration, data fusion, 3D reconstruction, and change detection. In this context, automated and reliable techniques are needed to process and extract information from such large amounts of data.

This Special Issue is the second edition on these subjects (the first edition is available at https://www.mdpi.com/journal/remotesensing/special_issues/rs_image_trends) and aims to present the latest advances in innovative image analysis and image processing techniques and their contributions to a wide range of application fields, in an attempt to foresee where they will lead the discipline and practice in the coming years. As far as process automation is concerned, it is of utmost importance to invest in an appropriate understanding of the algorithmic implementation of the different techniques and to identify their maturity, as well as the applications where they can realize their full potential. For this reason, aspects of interest include (i) accuracy: the agreement between the reference (check) and measured data (e.g., the accuracy of check points in image orientation or of the testing set in data classification); (ii) completeness: the amount of information obtained from different methodologies and its space/time distribution; (iii) reliability: algorithm consistency, intended as stability to noise, and algorithm robustness, intended as the estimation of the measurements' reliability level and the capability to identify gross errors; and (iv) processing speed: the algorithm's computational load. A minimal example of evaluating aspects (i) and (iii) on check points is sketched below.
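
As a minimal illustration (not tied to any specific paper), the following Python sketch computes check-point RMSE and flags gross errors with a simple 3-sigma rule; all names and data are hypothetical, and a production pipeline would use robust tests such as data snooping or RANSAC instead.

    import numpy as np

    def checkpoint_report(reference_xy, measured_xy, sigma_factor=3.0):
        """Accuracy and reliability summary over check points: RMSE of the
        residuals (aspect i) and a simple 3-sigma flag for gross errors
        (aspect iii)."""
        residuals = np.linalg.norm(measured_xy - reference_xy, axis=1)
        rmse = float(np.sqrt(np.mean(residuals ** 2)))
        gross = residuals > sigma_factor * residuals.std()
        return {"rmse": rmse, "mean_error": float(residuals.mean()),
                "n_gross_errors": int(gross.sum())}

    # Hypothetical data: 100 check points with noise and two blunders.
    rng = np.random.default_rng(0)
    ref = rng.uniform(0, 1000, (100, 2))
    meas = ref + rng.normal(0, 0.5, (100, 2))
    meas[:2] += 25.0
    print(checkpoint_report(ref, meas))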

The scope includes but is not limited to the following:

  • Image registration and multisource data integration or fusion methods;
  • Deep learning methods for data classification and pattern recognition;
  • Automation in thematic map production (e.g., spatial and temporal pattern analysis, change detection, and definition of specific change metrics);
  • Cross-calibration of sensors and cross-validation of data/models;
  • Seamless orientation of images acquired with different platforms;
  • Object extraction and accuracy evaluation in 3D reconstruction;
  • Low-cost 360° and fisheye consumer-grade camera calibration, orientation, and 3D reconstruction;
  • Direct georeferencing of images acquired by different platforms.

Dr. Riccardo Roncella
Dr. Mattia Previtali
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Image registration
  • Change detection
  • 3D reconstruction
  • Deep learning
  • Hyperspectral
  • Image matching
  • Data/sensor fusion
  • Object-based image analysis
  • Pattern recognition

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (18 papers)


Research


22 pages, 6326 KiB  
Article
PointCNT: A One-Stage Point Cloud Registration Approach Based on Complex Network Theory
by Xin Wu, Xiaolong Wei, Haojun Xu, Caizhi Li, Yuanhan Hou, Yizhen Yin and Weifeng He
Remote Sens. 2023, 15(14), 3545; https://doi.org/10.3390/rs15143545 - 14 Jul 2023
Cited by 1 | Viewed by 1658
Abstract
Inspired by the parallel visual pathway model of the human neural system, we propose an efficient and high-precision point cloud registration method based on complex network theory (PointCNT). A design method for deep neural networks (DNNs) based on complex network theory is proposed, and a multipath feature extraction network for point clouds, namely, the Complex Kernel Point Convolution Neural Network (ComKP-CNN), is designed with it. Self-supervision is introduced to improve the feature extraction ability of the model. A feature embedding module is proposed to explicitly embed transformation-variant coordinate information and transformation-invariant distance information into the features. A feature fusion module is proposed to enable the source and template point clouds to perceive each other's nonlocal features. Finally, a Multilayer Perceptron (MLP) with prominent fitting characteristics is utilized to estimate the transformation matrix. The experimental results show that the Registration Recall (RR) of PointCNT on the ModelNet40 dataset reached 96.4%, significantly surpassing one-stage methods such as Feature-Metric Registration (FMR) and approaching two-stage methods such as Geometric Transformer (GeoTransformer). The computation speed is faster than that of two-stage methods, with a registration run time of 0.15 s. In addition, ComKP-CNN is universal and can improve the registration accuracy of other point cloud registration methods.
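
The final step, estimating a rigid transform from corresponded points (which PointCNT delegates to an MLP head), has a classical closed-form counterpart. Below is a minimal sketch of that reference solution, the Kabsch/Umeyama SVD solver; the function name and toy data are illustrative, not the paper's code.

    import numpy as np

    def rigid_transform_svd(src, dst):
        """Closed-form least-squares rigid alignment: returns R (3x3) and
        t (3,) such that dst ~ src @ R.T + t, for (N, 3) correspondences."""
        c_src, c_dst = src.mean(0), dst.mean(0)
        H = (src - c_src).T @ (dst - c_dst)      # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                 # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = c_dst - R @ c_src
        return R, t

    # Toy check: recover a known 30-degree rotation and a translation.
    rng = np.random.default_rng(1)
    pts = rng.normal(size=(100, 3))
    a = np.deg2rad(30)
    R_true = np.array([[np.cos(a), -np.sin(a), 0],
                       [np.sin(a),  np.cos(a), 0],
                       [0, 0, 1]])
    R_est, t_est = rigid_transform_svd(pts, pts @ R_true.T + [0.1, -0.2, 0.3])
    print(np.allclose(R_est, R_true))            # True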

21 pages, 11195 KiB  
Article
SRTPN: Scale and Rotation Transform Prediction Net for Multimodal Remote Sensing Image Registration
by Xiangzeng Liu, Xueling Xu, Xiaodong Zhang, Qiguang Miao, Lei Wang, Liang Chang and Ruyi Liu
Remote Sens. 2023, 15(14), 3469; https://doi.org/10.3390/rs15143469 - 9 Jul 2023
Cited by 1 | Viewed by 1787
Abstract
How to recover geometric transformations is one of the most challenging issues in image registration. To alleviate the effect of large geometric distortion in multimodal remote sensing image registration, a scale and rotation transform prediction net is proposed in this paper. First, to reduce the scale difference between the reference and sensed images, an image scale regression module is constructed via CNN feature extraction and FFT correlation, so that the scale of the sensed image can be recovered roughly. Second, a rotation estimation module is developed to predict the rotation angle between the reference and the scale-recovered images. Finally, to obtain accurate registration results, LoFTR is employed to match the geometry-recovered images. The proposed registration network was evaluated on GoogleEarth, HRMS, VIS-NIR and UAV datasets with contrast differences and geometric distortions. The experimental results show that the proportion of correct matches of our model reached 74.6%, and the RMSE of the registration results achieved 1.236, which is superior to the related methods.
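
The scale-then-rotation recovery that SRTPN learns has a classical analogue: in a log-polar resampling of the Fourier spectrum magnitude, scale and rotation become translations measurable by phase correlation (the Fourier-Mellin principle). A minimal sketch, assuming same-size grayscale inputs; it illustrates the principle, not the paper's network, and sign conventions may need flipping depending on the OpenCV version.

    import cv2
    import numpy as np

    def estimate_scale_rotation(ref, sen, n_bins=512):
        h, w = ref.shape
        max_radius = min(h, w) / 2.0

        def logpolar_mag(img):
            mag = np.abs(np.fft.fftshift(np.fft.fft2(img.astype(np.float32))))
            return cv2.warpPolar(np.log1p(mag).astype(np.float32),
                                 (n_bins, n_bins),
                                 (img.shape[1] / 2.0, img.shape[0] / 2.0),
                                 max_radius,
                                 cv2.WARP_POLAR_LOG + cv2.INTER_LINEAR)

        (dx, dy), _ = cv2.phaseCorrelate(logpolar_mag(ref), logpolar_mag(sen))
        scale = np.exp(dx * np.log(max_radius) / n_bins)  # x axis: log radius
        angle = dy * 360.0 / n_bins                       # y axis: angle (deg)
        return scale, angle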

18 pages, 3630 KiB  
Article
CBFM: Contrast Balance Infrared and Visible Image Fusion Based on Contrast-Preserving Guided Filter
by Xilai Li, Xiaosong Li and Wuyang Liu
Remote Sens. 2023, 15(12), 2969; https://doi.org/10.3390/rs15122969 - 7 Jun 2023
Cited by 7 | Viewed by 1722
Abstract
Infrared (IR) and visible image fusion is an important data fusion and image processing technique that can accurately and comprehensively integrate the thermal radiation and texture details of the source images. However, existing methods neglect the high-contrast fusion problem, leading to suboptimal fusion performance when thermal radiation target information in IR images is replaced by high-contrast information in visible images. To address this limitation, we propose a contrast-balanced framework for IR and visible image fusion. Specifically, a novel contrast balance strategy is proposed to process visible images, reducing energy while compensating for detail in overexposed areas. Moreover, a contrast-preserving guided filter is proposed to decompose the image into energy and detail layers, reducing high contrast and filtering information. To effectively extract the active information in the detail layer and the brightness information in the energy layer, we propose a weighted energy-of-Laplacian operator and a Gaussian distribution of the image entropy scheme to fuse the detail and energy layers, respectively. The fused result is obtained by adding the fused detail and energy layers. Extensive experimental results demonstrate that the proposed method can effectively reduce high-contrast and highlighted target information in an image while preserving details. In addition, the proposed method exhibits superior performance compared to state-of-the-art methods in both qualitative and quantitative assessments.
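
A minimal sketch of the energy/detail idea: a plain guided filter (He et al.) splits each source into a base ("energy") layer and a residual detail layer, which are then fused with simple illustrative rules (max-energy, max-absolute-detail) rather than the paper's contrast-preserving filter and weighted energy-of-Laplacian/entropy schemes.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def guided_filter(I, p, r=8, eps=1e-3):
        """Basic guided filter on [0, 1] float images; the paper's
        contrast-preserving variant modifies this formulation."""
        mean = lambda x: uniform_filter(x, size=2 * r + 1)
        mI, mp = mean(I), mean(p)
        a = (mean(I * p) - mI * mp) / (mean(I * I) - mI * mI + eps)
        b = mp - a * mI
        return mean(a) * I + mean(b)

    def energy_detail_split(img):
        """Edge-preserving base ('energy') layer plus residual detail layer."""
        base = guided_filter(img, img)
        return base, img - base

    def fuse(ir, vis):
        """Illustrative fusion rules only: max-energy base, max-abs detail."""
        bi, di = energy_detail_split(ir)
        bv, dv = energy_detail_split(vis)
        return np.maximum(bi, bv) + np.where(np.abs(di) > np.abs(dv), di, dv)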

20 pages, 2992 KiB  
Article
Low-Cost Object Detection Models for Traffic Control Devices through Domain Adaption of Geographical Regions
by Dahyun Oh, Kyubyung Kang, Sungchul Seo, Jinwu Xiao, Kyochul Jang, Kibum Kim, Hyungkeun Park and Jeonghun Won
Remote Sens. 2023, 15(10), 2584; https://doi.org/10.3390/rs15102584 - 15 May 2023
Viewed by 1535
Abstract
Automated inspection systems utilizing computer vision technology are effective in managing traffic control devices (TCDs); however, they face challenges due to the limited availability of training datasets and the difficulty of generating new datasets. To address this, our study establishes a benchmark for cost-effective model training methods that achieve the desired accuracy using data from related domains and YOLOv5, a one-stage object detector known for its high accuracy and speed. In this study, three model cases were developed using distinct training approaches: (1) training with COCO-based pre-trained weights, (2) training with pre-trained weights from the source domain, and (3) training with a synthesized dataset mixing the source and target domains. Upon comparing these model cases, this study found that directly applying source domain data to the target domain is unfeasible and that a small amount of target domain data is necessary for optimal performance. A model trained with fine-tuning-based domain adaptation, using pre-trained weights from the source domain and minimal target data, proved to be the most resource-efficient approach. These results provide valuable guidance for practitioners aiming to develop TCD models with limited data, enabling them to build optimal models while conserving resources.
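
In code, fine-tuning-based domain adaptation of the kind compared above typically looks like the following PyTorch sketch: load source-domain weights, freeze the early feature extractor, and adapt the rest on a small target-domain set. The `backbone` attribute, the loss-returning forward pass, and the file name are hypothetical stand-ins, not YOLOv5's actual training interface.

    import torch
    import torch.nn as nn

    def finetune_on_target(model: nn.Module, target_loader, epochs=10):
        """Fine-tune on a small target-domain dataset, keeping
        source-domain features frozen (mirrors model case (2))."""
        for p in model.backbone.parameters():    # keep source-domain features
            p.requires_grad = False
        opt = torch.optim.SGD(
            (p for p in model.parameters() if p.requires_grad),
            lr=1e-3, momentum=0.9)
        model.train()
        for _ in range(epochs):
            for images, targets in target_loader:
                loss = model(images, targets)    # assumed to return the loss
                opt.zero_grad()
                loss.backward()
                opt.step()
        return model

    # model.load_state_dict(torch.load("source_domain.pt"))  # hypothetical
    # model = finetune_on_target(model, small_target_loader)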

17 pages, 4938 KiB  
Article
IFormerFusion: Cross-Domain Frequency Information Learning for Infrared and Visible Image Fusion Based on the Inception Transformer
by Zhang Xiong, Xiaohui Zhang, Qingping Hu and Hongwei Han
Remote Sens. 2023, 15(5), 1352; https://doi.org/10.3390/rs15051352 - 28 Feb 2023
Cited by 4 | Viewed by 2110
Abstract
Current deep learning-based image fusion methods cannot sufficiently learn the features of images over a wide frequency range. Therefore, we propose IFormerFusion, which is based on the Inception Transformer and cross-domain frequency fusion. To learn features from high- and low-frequency information, we designed the IFormer mixer, which splits the input features along the channel dimension and feeds them into parallel high- and low-frequency mixers to achieve linear computational complexity. The high-frequency mixer adopts convolution and max-pooling paths, while the low-frequency mixer adopts a criss-cross attention path. Considering that high-frequency information relates to texture detail, we designed a cross-domain frequency fusion strategy that trades high-frequency information between the source images. This structure can sufficiently integrate complementary features and strengthen texture retention. Experiments on the TNO, OSU, and Road Scene datasets demonstrate that IFormerFusion outperforms other methods in both objective and subjective evaluations.
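
A skeletal PyTorch version of such a channel-split mixer is sketched below: part of the channels go through max-pooling and convolution paths (high frequency), the rest through attention over flattened tokens (low frequency). Standard multi-head attention stands in for the paper's criss-cross attention, and all layer sizes are illustrative.

    import torch
    import torch.nn as nn

    class FrequencyMixer(nn.Module):
        """Inception-Transformer-style mixer sketch (not the paper's code)."""
        def __init__(self, dim=64, high_ratio=0.5, heads=4):
            super().__init__()
            c_high = int(dim * high_ratio)
            self.c1, self.c2 = c_high // 2, c_high - c_high // 2
            self.c_low = dim - c_high
            self.pool_path = nn.Sequential(nn.MaxPool2d(3, 1, 1),
                                           nn.Conv2d(self.c1, self.c1, 1))
            self.conv_path = nn.Conv2d(self.c2, self.c2, 3, padding=1)
            self.attn = nn.MultiheadAttention(self.c_low, heads,
                                              batch_first=True)

        def forward(self, x):                        # x: (B, C, H, W)
            h1, h2, lo = torch.split(x, [self.c1, self.c2, self.c_low], dim=1)
            hi_out = torch.cat([self.pool_path(h1), self.conv_path(h2)], dim=1)
            b, c, h, w = lo.shape
            tokens = lo.flatten(2).transpose(1, 2)   # (B, H*W, C)
            lo_out, _ = self.attn(tokens, tokens, tokens)
            lo_out = lo_out.transpose(1, 2).reshape(b, c, h, w)
            return torch.cat([hi_out, lo_out], dim=1)

    # y = FrequencyMixer(64)(torch.randn(2, 64, 32, 32))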

22 pages, 20086 KiB  
Article
Infrared and Visible Image Fusion Method Based on a Principal Component Analysis Network and Image Pyramid
by Shengshi Li, Yonghua Zou, Guanjun Wang and Cong Lin
Remote Sens. 2023, 15(3), 685; https://doi.org/10.3390/rs15030685 - 24 Jan 2023
Cited by 7 | Viewed by 2387
Abstract
The aim of infrared (IR) and visible image fusion is to generate a more informative image for human observation or other computer vision tasks. Activity-level measurement and weight assignment are two key parts of image fusion. In this paper, we propose a novel IR and visible fusion method based on the principal component analysis network (PCANet) and an image pyramid. Firstly, we use a lightweight deep learning network, a PCANet, to obtain the activity-level measurement and weight assignment of the IR and visible images. The activity-level measurement obtained by the PCANet has a stronger representation ability for focusing on IR target perception and visible detail description. Secondly, the weights and the source images are decomposed into multiple scales by the image pyramid, and the weighted-average fusion rule is applied at each scale. Finally, the fused image is obtained by reconstruction. The effectiveness of the proposed algorithm was verified on two datasets with more than eighty pairs of test images in total. Compared with nineteen representative methods, the experimental results demonstrate that the proposed method achieves state-of-the-art results in both visual quality and objective evaluation metrics.
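
The second stage, pyramid decomposition of weights and sources with per-scale weighted averaging, can be sketched with classical Laplacian pyramid blending (cv2.pyrDown/pyrUp). The weight map `w_ir` here is a placeholder for the PCANet activity-level output, which is not reproduced.

    import cv2
    import numpy as np

    def gaussian_pyramid(img, levels=4):
        g = [img.astype(np.float32)]
        for _ in range(levels - 1):
            g.append(cv2.pyrDown(g[-1]))
        return g

    def laplacian_pyramid(img, levels=4):
        g = gaussian_pyramid(img, levels)
        lap = [g[i] - cv2.pyrUp(g[i + 1], dstsize=(g[i].shape[1], g[i].shape[0]))
               for i in range(levels - 1)]
        lap.append(g[-1])
        return lap

    def fuse_pyramids(ir, vis, w_ir, levels=4):
        """Per-scale weighted-average fusion; w_ir in [0, 1] stands in for
        the PCANet weights."""
        l_ir = laplacian_pyramid(ir, levels)
        l_vis = laplacian_pyramid(vis, levels)
        g_w = gaussian_pyramid(w_ir, levels)
        fused = [w * a + (1 - w) * b for a, b, w in zip(l_ir, l_vis, g_w)]
        out = fused[-1]                          # collapse the pyramid
        for lvl in reversed(fused[:-1]):
            out = cv2.pyrUp(out, dstsize=(lvl.shape[1], lvl.shape[0])) + lvl
        return out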

25 pages, 15772 KiB  
Article
Image Registration Algorithm for Remote Sensing Images Based on Pixel Location Information
by Xuming Zhang, Yao Zhou, Peng Qiao, Xiaoning Lv, Jimin Li, Tianyu Du and Yiming Cai
Remote Sens. 2023, 15(2), 436; https://doi.org/10.3390/rs15020436 - 11 Jan 2023
Cited by 5 | Viewed by 3023
Abstract
Registration between remote sensing images has been a research focus in the field of remote sensing image processing. Most existing image registration algorithms applied to feature point matching are derived from image feature extraction methods, such as the scale-invariant feature transform (SIFT), speeded-up robust features (SURF) and Siamese neural networks. Such methods encounter difficulties in achieving accurate image registration where there is a large bias in the image features or no significant feature points. Aiming to solve this problem, this paper proposes an algorithm for multi-source image registration based on geographical location information (GLI). By calculating the geographic location information that corresponds to each pixel in the image, the ideal projected pixel position in the corresponding image is obtained using spatial coordinate transformation. Additionally, the correspondence between the two images is calculated by combining multiple sets of registration points. The simulation experiment illustrates that, under selected common simulation parameters, the average relative registration-point error between the two images is 12.64 pixels, and the registration accuracy of the corresponding ground registration points is better than 6.5 m. In the registration experiment involving remote sensing images from different sources, the average registration pixel error of this algorithm is 20.92 pixels, and the registration error of the image center is 21.24 pixels. In comparison, the image center registration error given by the convolutional neural network (CNN) is 142.35 pixels after the registration error is manually eliminated. For the registration of homologous and featureless remote sensing images, the SIFT algorithm can only offer one set of registration points for the correct region, and the neural network cannot achieve accurate registration results. The registration accuracy of the presented algorithm is 7.2 pixels, corresponding to a ground registration accuracy of 4.32 m, achieving more accurate registration between featureless images.
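
The core mapping, pixel to geographic coordinates and back via each image's geotransform, can be sketched in a few lines. This assumes GDAL-style affine geotransforms and a shared CRS; the paper's full spatial coordinate transformation and multi-point refinement are omitted, and the example geotransforms are illustrative.

    import numpy as np

    def pixel_to_geo(gt, col, row):
        """gt = (x0, dx, rx, y0, ry, dy): pixel (col, row) -> map (x, y)."""
        return (gt[0] + col * gt[1] + row * gt[2],
                gt[3] + col * gt[4] + row * gt[5])

    def geo_to_pixel(gt, x, y):
        """Invert the 2x2 affine part: map (x, y) -> pixel (col, row)."""
        A = np.array([[gt[1], gt[2]], [gt[4], gt[5]]])
        col, row = np.linalg.solve(A, [x - gt[0], y - gt[3]])
        return col, row

    def project_pixel(gt_src, gt_dst, col, row):
        """Location-based correspondence: source pixel -> map coordinates
        -> ideal projected pixel in the destination image."""
        return geo_to_pixel(gt_dst, *pixel_to_geo(gt_src, col, row))

    # Illustrative geotransforms (10 m and 30 m pixels):
    gt_a = (500000.0, 10.0, 0.0, 4650000.0, 0.0, -10.0)
    gt_b = (499500.0, 30.0, 0.0, 4650900.0, 0.0, -30.0)
    print(project_pixel(gt_a, gt_b, 120.0, 80.0))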

25 pages, 11768 KiB  
Article
SAR2HEIGHT: Height Estimation from a Single SAR Image in Mountain Areas via Sparse Height and Proxyless Depth-Aware Penalty Neural Architecture Search for Unet
by Minglong Xue, Jian Li, Zheng Zhao and Qingli Luo
Remote Sens. 2022, 14(21), 5392; https://doi.org/10.3390/rs14215392 - 27 Oct 2022
Cited by 7 | Viewed by 3298
Abstract
Height estimation from a single Synthetic Aperture Radar (SAR) image has demonstrated great potential in real-time environmental monitoring and scene understanding. Mathematically, the problem is ill-posed: many different 3D height maps can project to the same single 2D SAR image. Although Unet has been widely used for height estimation from a single image, the ill-posed problem cannot be completely resolved, and this leads to deteriorated performance with limited training data. This paper tackles the problem with a Unet aided by supplementary sparse height information and a proxyless depth-aware penalty neural architecture search (PDPNAS) for Unet. The sparse height, which can be obtained from low-resolution SRTM or LiDAR products, is included as supplementary information and helps to improve the accuracy of the estimated height map, especially in mountain areas with a wide range of elevations. In order to explore the effect of the sparsity of the sparse height on the estimated height map, a parameterized method is proposed to generate sparse height with different sparse ratios. In order to further improve the accuracy of the estimated height map from a single SAR image, PDPNAS for Unet is proposed. The optimal architecture for Unet can be searched by PDPNAS automatically with the help of a depth-aware penalty term p. The effectiveness of our approach is evaluated by visual and quantitative analysis on three datasets from mountain areas. The root mean squared error (RMSE) is reduced by 90.30% by observing only 0.0109% of height values from a low-resolution SRTM product. Furthermore, the RMSE is reduced by 3.79% via PDPNAS for Unet. This research provides a reliable method for estimating height and an alternative method for wide-area DEM mapping from a single SAR image, especially for the implementation of real-time DEM estimation in mountain areas.
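
A minimal sketch of parameterized sparse-height generation as described above: keep a random fraction of DEM cells at a given sparse ratio and mark the rest invalid. The function name and NaN convention are illustrative, not the paper's implementation; the default ratio matches the 0.0109% quoted above.

    import numpy as np

    def sample_sparse_height(dem, sparse_ratio=1.09e-4, rng=None):
        """Keep a random fraction of DEM cells; NaN marks unobserved cells."""
        rng = rng or np.random.default_rng(0)
        mask = rng.random(dem.shape) < sparse_ratio
        return np.where(mask, dem, np.nan), mask

    # Example: a 1024x1024 DEM keeps roughly 114 observed height values.
    dem = np.random.default_rng(1).uniform(200, 3000, (1024, 1024))
    sparse, mask = sample_sparse_height(dem)
    print(int(mask.sum()), "observed cells")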

26 pages, 10790 KiB  
Article
Image-to-Image Subpixel Registration Based on Template Matching of Road Network Extracted by Deep Learning
by Shuhei Hikosaka and Hideyuki Tonooka
Remote Sens. 2022, 14(21), 5360; https://doi.org/10.3390/rs14215360 - 26 Oct 2022
Cited by 7 | Viewed by 2771
Abstract
The vast digital archives collected by optical remote sensing observations over a long period of time can be used to determine changes in the land surface, and this information can be very useful in a variety of applications. However, accurate change extraction requires highly accurate image-to-image registration, especially when the target is urban areas in high-resolution remote sensing images. In this paper, we propose a new method for automatic registration between images that can be applied to noisy images such as old aerial photographs taken with analog film, for the case where changes in man-made objects such as buildings in urban areas are extracted from multitemporal high-resolution remote sensing images. The proposed method performs image-to-image registration by applying template matching to road masks extracted from the images by a two-step deep learning model. We applied the proposed method to multitemporal images, including images taken more than 36 years before the reference image. As a result, the proposed method achieved registration accuracy at the subpixel level, more accurate than conventional area-based and feature-based methods, even for the image pairs with the most distant acquisition times. The proposed method is expected to be more robust to differences in sensor characteristics, acquisition time, resolution and color tone between two remote sensing images, as well as to temporal variations in vegetation and the effects of building shadows. These results were obtained with a road extraction model trained on images from a single area, single time period and single platform, demonstrating the high versatility of the model. Furthermore, the performance is expected to improve and stabilize if images from different areas, time periods and platforms are used for training.
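
The matching step can be sketched with OpenCV: normalized cross-correlation between road masks, followed by quadratic (parabola) fitting of the correlation peak for subpixel precision. The deep-learning road extraction that produces the masks is assumed to have run already; names are illustrative.

    import cv2
    import numpy as np

    def match_subpixel(road_mask_ref, road_mask_new):
        """Template matching with subpixel peak refinement; masks are
        float32 images, with the reference larger than the template."""
        scores = cv2.matchTemplate(road_mask_ref, road_mask_new,
                                   cv2.TM_CCOEFF_NORMED)
        _, _, _, (px, py) = cv2.minMaxLoc(scores)

        def parabola_offset(s_m, s_0, s_p):
            denom = s_m - 2 * s_0 + s_p
            return 0.0 if denom == 0 else 0.5 * (s_m - s_p) / denom

        dx = (parabola_offset(scores[py, px - 1], scores[py, px],
                              scores[py, px + 1])
              if 0 < px < scores.shape[1] - 1 else 0.0)
        dy = (parabola_offset(scores[py - 1, px], scores[py, px],
                              scores[py + 1, px])
              if 0 < py < scores.shape[0] - 1 else 0.0)
        return px + dx, py + dy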

19 pages, 10966 KiB  
Article
Automatic Matching of Multimodal Remote Sensing Images via Learned Unstructured Road Feature
by Kun Yu, Chengcheng Xu, Jie Ma, Bin Fang, Junfeng Ding, Xinghua Xu, Xianqiang Bao and Shaohua Qiu
Remote Sens. 2022, 14(18), 4595; https://doi.org/10.3390/rs14184595 - 14 Sep 2022
Cited by 7 | Viewed by 2335
Abstract
Automatic matching of multimodal remote sensing images remains a vital yet challenging task, particularly for remote sensing and computer vision applications. Most traditional methods mainly focus on key point detection and description of the original image, ignoring deep semantic feature information such as semantic road features; as a result, they cannot effectively resist nonlinear grayscale distortion and suffer from low matching efficiency and poor accuracy. Motivated by this, this paper proposes a novel automatic matching method named LURF via learned unstructured road features for multimodal images. There are four main contributions in LURF. To begin with, semantic road features are extracted from multimodal images with the segmentation model CRESIv2. Next, based on the semantic road features, a stable and reliable intersection point detector is proposed to detect unstructured key points. Moreover, a local entropy descriptor is designed to describe the key points with the local skeleton feature. Finally, a global optimization strategy is adopted to achieve correct matching. Extensive experimental results demonstrate that the proposed LURF outperforms other state-of-the-art methods in terms of both accuracy and efficiency on different multimodal image datasets.
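
To convey the flavor of describing road key points with local entropy statistics, here is a toy descriptor: ring-wise Shannon entropy of a binary road-skeleton patch around a key point. It is a deliberately simplified stand-in, not LURF's actual descriptor design.

    import numpy as np

    def local_entropy_descriptor(mask, kp, radius=16, n_rings=4):
        """Toy descriptor: binary entropy of skeleton occupancy per ring.
        `kp` = (row, col), assumed at least `radius` pixels from borders."""
        y, x = kp
        patch = mask[y - radius:y + radius, x - radius:x + radius].astype(bool)
        yy, xx = np.mgrid[-radius:radius, -radius:radius]
        r = np.hypot(yy, xx)
        desc = []
        for i in range(n_rings):
            ring = patch[(r >= i * radius / n_rings) &
                         (r < (i + 1) * radius / n_rings)]
            p = ring.mean() if ring.size else 0.0
            ent = (0.0 if p in (0.0, 1.0)
                   else -(p * np.log2(p) + (1 - p) * np.log2(1 - p)))
            desc.append(ent)
        return np.array(desc)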

22 pages, 5273 KiB  
Article
Prediction Algorithm for Satellite Instantaneous Attitude and Image Pixel Offset Based on Synchronous Clocks
by Lingfeng Huang, Feng Dong and Yutian Fu
Remote Sens. 2022, 14(16), 3941; https://doi.org/10.3390/rs14163941 - 13 Aug 2022
Cited by 1 | Viewed by 1729
Abstract
To ensure a high signal-to-noise ratio and a high image volume, a geostationary orbiting ocean remote-sensing system needs to maintain high platform stability over a long integration time, because it is affected by satellite attitude changes. When the observation target is the ocean, it is difficult to extract image features because of the lack of characteristic objects in the target area. In this paper, we attempt to avoid using image data for satellite attitude and image pixel offset estimation. We obtain the satellite attitude by using equipment such as gyroscopes and perform time registration between the satellite attitude and the image data to achieve pixel offset matching between images. According to the law of satellite attitude change, we designed a Kalman-like filter fitting (KLFF) algorithm based on the satellite attitude change model and the Nelder–Mead search principle. The discrete attitude data were time-marked by a synchronization system, and high-precision estimation of the satellite attitude was achieved after fitting with the KLFF algorithm. When the measurement accuracy of the equipment was 1.0 × 10⁻³°, the average prediction error of the algorithm was 1.09 × 10⁻³°, 21.58% better than the traditional interpolation prediction result of 1.39 × 10⁻³°. The peak value of the fitting angle error reached 2.5 × 10⁻³°. Compared with the interpolation prediction result of 6.2 × 10⁻³°, the estimated stability of the satellite attitude improved by about 59.68%. After using the linear interpolation method to compensate for the estimated pixel offset, its discrete range was 0.697 pixels. Compared with the 1.476 pixels of the interpolation algorithm, it was 52.8% lower, which improved the noise immunity of the algorithm. In summary, a KLFF algorithm was designed based on the satellite attitude change model, using external measurement data and a synchronous clock as a benchmark. The instantaneous attitude of the satellite was accurately estimated in real time, and offset matching between the images was realized, which lays the foundation for in-orbit satellite data processing.
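
As a simplified stand-in for KLFF, the sketch below runs a constant-rate Kalman filter over time-stamped attitude angles; the paper's actual algorithm couples a satellite attitude-change model with Nelder–Mead search, which is not reproduced, and the noise parameters here are illustrative.

    import numpy as np

    def kalman_smooth_attitude(times, measurements, q=1e-8, r=1e-6):
        """State: [angle, angular_rate]; measurements: angle only."""
        x = np.array([measurements[0], 0.0])
        P = np.eye(2)
        H = np.array([[1.0, 0.0]])
        out = []
        for k, z in enumerate(measurements):
            if k > 0:                             # predict with elapsed time
                dt = times[k] - times[k - 1]
                F = np.array([[1.0, dt], [0.0, 1.0]])
                x = F @ x
                P = F @ P @ F.T + q * np.eye(2)
            S = H @ P @ H.T + r                   # update with measurement
            K = P @ H.T / S
            x = x + (K * (z - H @ x)).ravel()
            P = (np.eye(2) - K @ H) @ P
            out.append(x[0])
        return np.array(out)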

22 pages, 6089 KiB  
Article
MFST: Multi-Modal Feature Self-Adaptive Transformer for Infrared and Visible Image Fusion
by Xiangzeng Liu, Haojie Gao, Qiguang Miao, Yue Xi, Yunfeng Ai and Dingguo Gao
Remote Sens. 2022, 14(13), 3233; https://doi.org/10.3390/rs14133233 - 5 Jul 2022
Cited by 19 | Viewed by 3681
Abstract
Infrared and visible image fusion combines the thermal radiation and detailed texture information of the two images into one informative fused image. Recently, deep learning methods have been widely applied to this task; however, these methods usually fuse multiple extracted features with the same fusion strategy, which ignores the differences in the representation of these features and results in a loss of information during the fusion process. To address this issue, we propose a novel method named multi-modal feature self-adaptive transformer (MFST) to preserve more significant information about the source images. Firstly, multi-modal features are extracted from the input images by a convolutional neural network (CNN). Then, these features are fused by focal transformer blocks that can be trained through an adaptive fusion strategy according to the characteristics of the different features. Finally, the fused features and the saliency information of the infrared image are combined to obtain the fused image. The proposed fusion framework is evaluated on the TNO, LLVIP, and FLIR datasets with various scenes. Experimental results demonstrate that our method outperforms several state-of-the-art methods in terms of subjective and objective evaluation.
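
To illustrate attention-based fusion of two modality feature maps in code, here is a minimal cross-attention block: IR tokens query the visible tokens, and the result is projected back to a fused map. It is a generic sketch; MFST's focal transformer blocks, adaptive fusion strategy, and saliency weighting are not reproduced, and all names are illustrative.

    import torch
    import torch.nn as nn

    class CrossModalFusion(nn.Module):
        """Minimal cross-attention fusion of IR and visible features."""
        def __init__(self, dim=64, heads=4):
            super().__init__()
            self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.proj = nn.Conv2d(2 * dim, dim, 1)

        def forward(self, f_ir, f_vis):              # (B, C, H, W) each
            b, c, h, w = f_ir.shape
            t_ir = f_ir.flatten(2).transpose(1, 2)   # (B, H*W, C)
            t_vis = f_vis.flatten(2).transpose(1, 2)
            ir2vis, _ = self.attn(t_ir, t_vis, t_vis)  # IR queries visible
            fused = ir2vis.transpose(1, 2).reshape(b, c, h, w)
            return self.proj(torch.cat([fused, f_vis], dim=1))

    # y = CrossModalFusion(64)(torch.randn(1, 64, 32, 32),
    #                          torch.randn(1, 64, 32, 32))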

19 pages, 5119 KiB  
Article
Image Enhancement-Based Detection with Small Infrared Targets
by Shuai Liu, Pengfei Chen and Marcin Woźniak
Remote Sens. 2022, 14(13), 3232; https://doi.org/10.3390/rs14133232 - 5 Jul 2022
Cited by 37 | Viewed by 4180
Abstract
Today, target detection has indispensable applications in various fields. Infrared small-target detection, as a branch of target detection, can improve the perception capability of autonomous systems, and it has good application prospects in infrared alarm, automatic driving and other fields. There are many well-established algorithms that perform well in infrared small-target detection. Nevertheless, current algorithms cannot achieve the expected detection effect in complex environments, such as background clutter, noise inundation or very small targets. We have designed an image enhancement-based detection algorithm to address these problems through detail enhancement and target expansion. This method first enhances the mutation information, detail and edge information of the image and then improves the contrast between the target edge and the adjacent pixels to make the target more prominent. The enhancement improves the robustness of detection in scenes with background clutter or flooded with noise. Moreover, bicubic interpolation is used on the input image, and the target pixels are expanded by upsampling, which enhances the detection effectiveness for tiny targets. The results of qualitative and quantitative experiments show that the algorithm proposed in this paper outperforms existing work on various evaluation indicators.
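
The two enhancement ideas, detail/edge boosting and target expansion by bicubic upsampling, can be sketched with OpenCV as follows; unsharp masking stands in for the paper's own enhancement operators, so treat the parameters as illustrative.

    import cv2
    import numpy as np

    def enhance_for_small_targets(img, scale=2, amount=1.5, sigma=3):
        """Bicubic upsampling spreads tiny targets over more pixels;
        unsharp masking boosts edges and local intensity mutations.
        `img` is a uint8 grayscale infrared frame."""
        up = cv2.resize(img.astype(np.float32), None, fx=scale, fy=scale,
                        interpolation=cv2.INTER_CUBIC)
        blur = cv2.GaussianBlur(up, (0, 0), sigma)
        sharp = cv2.addWeighted(up, 1 + amount, blur, -amount, 0)
        return np.clip(sharp, 0, 255).astype(np.uint8)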

23 pages, 32087 KiB  
Article
Three-Dimensional Geometry Reconstruction Method for Slowly Rotating Space Targets Utilizing ISAR Image Sequence
by Zuobang Zhou, Lei Liu, Rongzhen Du and Feng Zhou
Remote Sens. 2022, 14(5), 1144; https://doi.org/10.3390/rs14051144 - 25 Feb 2022
Cited by 6 | Viewed by 2307
Abstract
For a slowly rotating space target (SRST) with a fixed axis, traditional 3D geometry reconstruction methods become invalid, as the projection vectors cannot be formed without accurate target rotational parameters. To tackle this problem, we present a new technique for 3D geometry reconstruction using inverse synthetic aperture radar (ISAR) image sequence energy accumulation (ISEA). Firstly, by constituting the motion model of the SRST, an explicit expression is derived to describe the relative geometric relationship between the 3D geometry and the ISAR image sequence. Then, accurate rotational parameters and the 3D geometry of the SRST are estimated by combining the idea of the ISEA method with quantum-behaved particle swarm optimization (QPSO). Compared with the ISEA method, which can only be applied to triaxially stabilized space targets, the proposed method can achieve 3D geometry reconstruction of an SRST. Experimental results based on a simulated point model and a simulated electromagnetic computer-aided design (CAD) model validate the effectiveness and robustness of the proposed method.
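
For the optimization half, a minimal quantum-behaved PSO (QPSO) sketch is given below: instead of velocities, each particle is resampled around a local attractor with a shrinking contraction-expansion coefficient. The objective in the usage comment is a toy stand-in for the ISEA image-sequence energy over rotational parameters, which is not reproduced.

    import numpy as np

    def qpso(objective, bounds, n_particles=30, iters=200, rng=None):
        """Generic QPSO minimizer over box bounds (illustrative only)."""
        rng = rng or np.random.default_rng(0)
        lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
        x = rng.uniform(lo, hi, (n_particles, lo.size))
        pbest, pcost = x.copy(), np.array([objective(p) for p in x])
        for t in range(iters):
            g = pbest[pcost.argmin()]             # global best
            mbest = pbest.mean(axis=0)            # mean best position
            beta = 1.0 - 0.5 * t / iters          # contraction-expansion
            phi = rng.random(x.shape)
            attractor = phi * pbest + (1 - phi) * g
            u = rng.random(x.shape) + 1e-12
            sign = np.where(rng.random(x.shape) < 0.5, 1.0, -1.0)
            x = np.clip(attractor + sign * beta * np.abs(mbest - x)
                        * np.log(1 / u), lo, hi)
            cost = np.array([objective(p) for p in x])
            better = cost < pcost
            pbest[better], pcost[better] = x[better], cost[better]
        return pbest[pcost.argmin()], pcost.min()

    # Toy usage: recover two angles minimizing a quadratic stand-in energy.
    # best, _ = qpso(lambda p: np.sum((p - [0.3, 1.2]) ** 2), ([0, 0], [2, 2]))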

24 pages, 24663 KiB  
Article
Robust Multimodal Remote Sensing Image Registration Based on Local Statistical Frequency Information
by Xiangzeng Liu, Jiepeng Xue, Xueling Xu, Zixiang Lu, Ruyi Liu, Bocheng Zhao, Yunan Li and Qiguang Miao
Remote Sens. 2022, 14(4), 1051; https://doi.org/10.3390/rs14041051 - 21 Feb 2022
Cited by 6 | Viewed by 3756
Abstract
Multimodal remote sensing image registration is a prerequisite for the comprehensive application of remote sensing image data. However, inconsistent imaging environments and conditions often lead to obvious geometric deformations and significant contrast differences between multimodal remote sensing images, which make common feature extraction extremely difficult; as a result, their registration remains a challenging task. To address this issue, a robust local statistics-based registration framework is proposed, in which the constructed descriptors are invariant to contrast changes and geometric transformations induced by imaging conditions. Firstly, the phase congruency of the local frequency information is maximized by optimizing the control parameters. Then, salient feature points are located according to the phase congruency response map. Subsequently, geometric and contrast-invariant descriptors are constructed based on a joint local frequency information map that combines Log-Gabor filter responses over multiple scales and orientations. Finally, image matching is achieved by finding the corresponding descriptors; image registration is further completed by calculating the transformation between the corresponding feature points. The proposed registration framework was evaluated on four different multimodal image datasets with varying degrees of contrast difference and geometric deformation. Experimental results demonstrate that our method outperforms several state-of-the-art methods in terms of robustness and precision, confirming its effectiveness.
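
The multi-scale, multi-orientation Log-Gabor responses underlying such descriptors can be sketched in the frequency domain as below; phase congruency proper adds a noise-compensated energy measure not shown here, and all parameter values are illustrative.

    import numpy as np

    def log_gabor_bank(shape, n_scales=4, n_orients=6, f0=0.1, sigma_f=0.55):
        """Frequency-domain Log-Gabor filters over scales and orientations."""
        rows, cols = shape
        v, u = np.meshgrid(np.fft.fftfreq(rows), np.fft.fftfreq(cols),
                           indexing="ij")
        radius = np.hypot(u, v)
        radius[0, 0] = 1.0                        # avoid log(0) at DC
        theta = np.arctan2(v, u)
        bank = []
        for s in range(n_scales):
            f_s = f0 * 2.0 ** (-s)                # center frequency per scale
            radial = np.exp(-(np.log(radius / f_s) ** 2)
                            / (2 * np.log(sigma_f) ** 2))
            radial[0, 0] = 0.0                    # zero DC response
            for o in range(n_orients):
                ang = o * np.pi / n_orients
                d = np.arctan2(np.sin(theta - ang), np.cos(theta - ang))
                angular = np.exp(-(d ** 2) / (2 * (np.pi / n_orients) ** 2))
                bank.append(radial * angular)
        return bank

    # Responses: np.fft.ifft2(np.fft.fft2(img) * g) for each filter g; their
    # magnitudes/phases feed a joint local frequency information map.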

25 pages, 41586 KiB  
Article
A General Framework of Remote Sensing Epipolar Image Generation
by Xuanqi Wang, Feng Wang, Yuming Xiang and Hongjian You
Remote Sens. 2021, 13(22), 4539; https://doi.org/10.3390/rs13224539 - 11 Nov 2021
Cited by 2 | Viewed by 2594
Abstract
Epipolar images can improve the efficiency and accuracy of dense matching by restricting the search range of correspondences from 2-D to 1-D, and they play an important role in 3-D reconstruction. As most of the satellite images in archives are incidental collections without rigorous stereo properties, in this paper, we propose a general framework to generate epipolar images for both in-track and cross-track stereo images. We first investigate the theoretical epipolar constraints of single-sensor and multi-sensor images and then introduce the proposed framework in detail. Considering large elevation changes in mountain areas, a publicly available digital elevation model (DEM) is applied to reduce the initial offsets of the two stereo images. The left image is projected into the image coordinate system of the right image using the rational polynomial coefficients (RPCs). By dividing the raw images into several blocks, the epipolar images of each block are generated in parallel through a robust feature matching method and fundamental matrix estimation; in this way, the horizontal disparity can be drastically reduced while maintaining negligible vertical disparity for the epipolar blocks. Then, stereo matching using the epipolar blocks can be easily implemented, and the forward intersection method is used to generate the digital surface model (DSM). Experimental results on several in-track and cross-track pairs, including optical-optical, SAR-SAR, and SAR-optical pairs, demonstrate the effectiveness of the proposed framework, which not only has obvious advantages in mountain areas with large elevation changes but can also generate high-quality epipolar images for flat areas. The generated epipolar images of a ZiYuan-3 pair over Songshan are further utilized to produce a high-precision DSM.
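
For a single block, the matching-plus-fundamental-matrix stage maps onto standard OpenCV calls: RANSAC estimation of F from matched points, then uncalibrated rectification homographies. RPC-based pre-projection and DEM offset reduction happen upstream and are omitted; this is a sketch under those assumptions, not the paper's implementation.

    import cv2
    import numpy as np

    def epipolar_rectify(img_l, img_r, pts_l, pts_r):
        """pts_l/pts_r: (N, 2) float32 matches; grayscale same-size images.
        Returns the two images warped so correspondences share rows."""
        F, inliers = cv2.findFundamentalMat(pts_l, pts_r, cv2.FM_RANSAC,
                                            ransacReprojThreshold=1.0)
        keep = inliers.ravel() == 1
        ok, H_l, H_r = cv2.stereoRectifyUncalibrated(
            pts_l[keep], pts_r[keep], F, img_l.shape[::-1])
        if not ok:
            raise RuntimeError("rectification failed")
        size = img_l.shape[::-1]                  # (width, height)
        return (cv2.warpPerspective(img_l, H_l, size),
                cv2.warpPerspective(img_r, H_r, size))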

Other


29 pages, 16316 KiB  
Technical Note
The Potential of Visual ChatGPT for Remote Sensing
by Lucas Prado Osco, Eduardo Lopes de Lemos, Wesley Nunes Gonçalves, Ana Paula Marques Ramos and José Marcato Junior
Remote Sens. 2023, 15(13), 3232; https://doi.org/10.3390/rs15133232 - 22 Jun 2023
Cited by 11 | Viewed by 7474
Abstract
Recent advancements in Natural Language Processing (NLP), particularly in Large Language Models (LLMs), associated with deep learning-based computer vision techniques, have shown substantial potential for automating a variety of tasks. These are known as Visual LLMs, and one notable model is Visual ChatGPT, which combines ChatGPT's LLM capabilities with visual computation to enable effective image analysis. The ability of these models to process images based on textual inputs can revolutionize diverse fields, and while their application in the remote sensing domain remains largely unexplored, novel implementations are to be expected. Thus, this is the first paper to examine the potential of Visual ChatGPT, a cutting-edge LLM founded on the GPT architecture, to tackle the aspects of image processing related to the remote sensing domain. Among its current capabilities, Visual ChatGPT can generate textual descriptions of images, perform Canny edge and straight-line detection, and conduct image segmentation. These offer valuable insights into image content and facilitate the interpretation and extraction of information. By exploring the applicability of these techniques within publicly available datasets of satellite images, we demonstrate the current model's limitations in dealing with remote sensing images, highlighting its challenges and future prospects. Although still in early development, we believe that the combination of LLMs and visual models holds significant potential to transform remote sensing image processing, creating accessible and practical application opportunities in the field.
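
The image-processing capabilities listed (edge and straight-line detection) are classical operators; invoked directly, without the LLM wrapper, they look like this, with the file name and thresholds as hypothetical placeholders.

    import cv2
    import numpy as np

    img = cv2.imread("satellite_scene.png", cv2.IMREAD_GRAYSCALE)  # hypothetical
    edges = cv2.Canny(img, threshold1=100, threshold2=200)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                            threshold=80, minLineLength=40, maxLineGap=5)
    print(0 if lines is None else len(lines), "line segments detected")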

17 pages, 66941 KiB  
Technical Note
A Practical 3D Reconstruction Method for Weak Texture Scenes
by Xuyuan Yang and Guang Jiang
Remote Sens. 2021, 13(16), 3103; https://doi.org/10.3390/rs13163103 - 6 Aug 2021
Cited by 16 | Viewed by 4181
Abstract
In recent years, there has been a growing demand for 3D reconstructions of tunnel pits, underground pipe networks, and building interiors. For such scenarios, weak textures, repeated textures, or even no textures are common. To reconstruct these scenes, we propose covering the lighting sources with films of spark patterns to “add” textures to the scenes. We use a calibrated camera to take pictures from multiple views and then utilize structure from motion (SFM) and multi-view stereo (MVS) algorithms to carry out a high-precision 3D reconstruction. To improve the effectiveness of our reconstruction, we combine deep learning algorithms with traditional methods to extract and match feature points. Our experiments have verified the feasibility and efficiency of the proposed method.
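
The traditional half of such a hybrid matching pipeline can be sketched as follows: with the projected spark pattern "adding" texture, a classical detector finds enough key points for SFM/MVS. The file names are hypothetical, and the learned-feature half the paper combines with this is not reproduced.

    import cv2

    img1 = cv2.imread("view_a.png", cv2.IMREAD_GRAYSCALE)   # hypothetical files
    img2 = cv2.imread("view_b.png", cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(nfeatures=4000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    print(len(matches), "tentative correspondences for SFM/MVS")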
