
Image Processing and Analysis: Trends in Registration, Data Fusion, 3D Reconstruction, and Change Detection (Third Edition)

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: 30 December 2024

Special Issue Editors

Guest Editor
Department of Civil, Environmental, Architectural Engineering and Mathematics, Università degli Studi di Brescia, Via Branze, 43, 25123 Brescia, BR, Italy
Interests: image orientation; 3D reconstruction; image-based modeling; terrestrial/UAV/fisheye photogrammetry; digital photography

Special Issue Information

Dear Colleagues,

Satellite, aerial, UAV, and terrestrial imaging techniques are constantly evolving in terms of data volume, quality, and variety. Earth observation programs, both public and private, are making available a growing amount of multi-temporal data, often publicly accessible, at ever higher spatial resolution and with short revisit times. At the opposite end of the platform scale, UAVs, thanks to their greater flexibility, represent a new paradigm for acquiring high-resolution information at high frequency. Similarly, consumer-grade 360° cameras and hyperspectral sensors are spreading across different terrestrial platforms and applications.

Remotely sensed data can provide the basis for timely and efficient analysis in many fields, such as land-use and environmental monitoring, cultural heritage, archaeology, precision farming, and human activity monitoring, among other areas of research and practice. The availability of these data, the growing need for fast and reliable responses, and the increasing number of active (but often unskilled) users all pose new challenges in research fields connected to data registration, data fusion, 3D reconstruction, and change detection. In this context, automated and reliable techniques are needed to process and extract information from such large amounts of data.

This Special Issue is the third edition on these subjects (the first edition is available at https://www.mdpi.com/journal/remotesensing/special_issues/rs_image_trends, and the second at https://www.mdpi.com/journal/remotesensing/special_issues/rs_image_trends_II). It aims to present the latest advances in innovative image analysis and processing techniques and their contribution to a wide range of application fields, in an attempt to foresee where they will lead the discipline and practice in the next few years. As far as process automation is concerned, it is of the utmost importance to properly understand the algorithmic implementation of the different techniques and to identify their maturity, as well as the applications in which their use might unlock their full potential. For this reason, particular attention will be paid to the following features:

  • accuracy: the agreement between the reference (check) and measured data (e.g., the accuracy of check points in image orientation, or of the testing set in data classification; see the sketch after this list);
  • completeness: the amount of information obtained from the different methodologies and its distribution in space and time;
  • reliability: the algorithm's consistency, intended as stability to noise, and the algorithm's robustness, intended as the estimation of the measurements' reliability level and the capability to identify gross errors;
  • processing speed: the algorithm's computational load.
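
As a minimal, hypothetical illustration of the accuracy criterion above, the sketch below computes per-axis and overall 3D root-mean-square errors of check points after image orientation. The coordinates are invented; the snippet only shows the kind of reference-vs-measured comparison intended.

```python
import numpy as np

# Hypothetical 3D coordinates of check points (metres): surveyed reference
# values vs. values measured by the orientation/reconstruction pipeline.
reference = np.array([[10.00, 20.00, 5.00],
                      [15.50, 22.10, 5.40],
                      [ 9.80, 18.70, 4.90]])
measured  = np.array([[10.02, 19.97, 5.05],
                      [15.46, 22.15, 5.36],
                      [ 9.84, 18.66, 4.93]])

residuals = measured - reference                          # per-point, per-axis errors
rmse_xyz  = np.sqrt((residuals ** 2).mean(axis=0))        # RMSE per coordinate axis
rmse_3d   = np.sqrt((residuals ** 2).sum(axis=1).mean())  # overall 3D RMSE

print(f"RMSE (X, Y, Z): {rmse_xyz}")
print(f"3D RMSE: {rmse_3d:.3f} m")
```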

Scope includes, but is not limited to, the following:

  • image registration and multi-source data integration or fusion methods;
  • deep learning methods for data classification and pattern recognition;
  • automation in thematic map production (e.g., spatial and temporal pattern analysis, change detection, and definition of specific change metrics);
  • cross-calibration of sensors and cross-validation of data/models;
  • seamless orientation of images acquired from different platforms;
  • object extraction and accuracy evaluation in 3D reconstruction, including volume rendering methods (e.g., NeRF and Gaussian Splatting);
  • low-cost 360° and fisheye consumer-grade camera calibration, orientation, and 3D reconstruction;
  • direct georeferencing of images acquired by different platforms.

Prof. Dr. Riccardo Roncella
Dr. Mattia Previtali
Dr. Luca Perfetti
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • image registration
  • change detection
  • 3D reconstruction
  • deep learning
  • hyperspectral
  • image matching
  • data/sensor fusion
  • object-based image analysis
  • pattern recognition
  • volume rendering

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (4 papers)

Research

33 pages, 15070 KiB  
Article
Cross Attention-Based Multi-Scale Convolutional Fusion Network for Hyperspectral and LiDAR Joint Classification
by Haimiao Ge, Liguo Wang, Haizhu Pan, Yanzhong Liu, Cheng Li, Dan Lv and Huiyu Ma
Remote Sens. 2024, 16(21), 4073; https://doi.org/10.3390/rs16214073 - 31 Oct 2024
Abstract
In recent years, deep learning-based multi-source data fusion, e.g., the fusion of hyperspectral image (HSI) and light detection and ranging (LiDAR) data, has gained significant attention in the field of remote sensing. However, traditional convolutional neural network fusion techniques often extract discriminative spatial–spectral features poorly from diversified land covers and overlook the correlation and complementarity between different data sources. Furthermore, merely stacking multi-source feature embeddings fails to represent the deep semantic relationships among them. In this paper, we propose a cross attention-based multi-scale convolutional fusion network for HSI–LiDAR joint classification. It contains three major modules: a spatial–elevation–spectral convolutional feature extraction module (SESM), a cross attention fusion module (CAFM), and a classification module. In the SESM, improved multi-scale convolutional blocks are utilized to extract features from HSI and LiDAR to ensure discriminability and comprehensiveness under diversified land cover conditions. Spatial and spectral pseudo-3D convolutions, pointwise convolutions, residual aggregation, one-shot aggregation, and parameter-sharing techniques are implemented in the module. In the CAFM, a self-designed local–global cross attention block collects and integrates relationships among the feature embeddings and generates joint semantic representations. In the classification module, average pooling, dropout, and linear layers map the fused semantic representations to the final classification results. Experimental evaluations on three public HSI–LiDAR datasets demonstrate the competitiveness of the proposed network in comparison with state-of-the-art methods.
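
For readers unfamiliar with the general technique, the following is a minimal, self-contained sketch of bidirectional cross attention between two modality-specific feature embeddings: queries from one modality attend to keys/values from the other, and the two directions are fused. It illustrates only the generic idea, not the authors' CAFM; all layer sizes and the fusion-by-concatenation choice are assumptions.

```python
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    """Fuse HSI and LiDAR token embeddings with bidirectional cross attention."""
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.hsi_to_lidar = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.lidar_to_hsi = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.proj = nn.Linear(2 * dim, dim)  # fuse the two attention outputs

    def forward(self, hsi_feat, lidar_feat):
        # hsi_feat, lidar_feat: (batch, tokens, dim) feature embeddings
        h, _ = self.hsi_to_lidar(query=hsi_feat, key=lidar_feat, value=lidar_feat)
        l, _ = self.lidar_to_hsi(query=lidar_feat, key=hsi_feat, value=hsi_feat)
        return self.proj(torch.cat([h, l], dim=-1))  # joint semantic representation

hsi   = torch.randn(2, 49, 64)  # e.g., 7x7 patch tokens from an HSI branch
lidar = torch.randn(2, 49, 64)  # matching tokens from a LiDAR branch
fused = CrossAttentionFusion()(hsi, lidar)
print(fused.shape)  # torch.Size([2, 49, 64])
```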

23 pages, 62103 KiB  
Article
Iterative Optimization-Enhanced Contrastive Learning for Multimodal Change Detection
by Yuqi Tang, Xin Yang, Te Han, Kai Sun, Yuqiang Guo and Jun Hu
Remote Sens. 2024, 16(19), 3624; https://doi.org/10.3390/rs16193624 - 28 Sep 2024
Abstract
Multimodal change detection (MCD) harnesses multi-source remote sensing data to identify surface changes, offering prospects for applications in disaster management and environmental surveillance. Nonetheless, disparities in imaging mechanisms across modalities impede the direct comparison of multimodal images. In response, numerous methodologies employing deep learning features have emerged to derive comparable features from such images. Nevertheless, several of these approaches depend on manually labeled samples, which are resource-intensive, and their accuracy in distinguishing changed from unchanged regions is unsatisfactory. To address these challenges, this paper proposes a new MCD method based on iterative optimization-enhanced contrastive learning. Using positive and negative samples in contrastive learning, a deep feature extraction network extracts the initial deep features of the multimodal images, and a common projection layer unifies the deep features of the two images into the same feature space. An iterative optimization module then widens the differences between changed and unchanged areas, enhancing the quality of the deep features. The final change map is derived from similarity measurements of these optimized features. Experiments conducted on four real-world multimodal datasets, benchmarked against eight well-established methodologies, illustrate the superiority of the proposed approach.
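
The final step described above, deriving a change map from feature similarity, can be sketched as follows. This is a generic illustration under the assumption that the two feature maps already live in a common projection space; the cosine similarity measure and the fixed threshold are assumptions, not the paper's exact procedure.

```python
import torch
import torch.nn.functional as F

def change_map_from_features(feat_t1, feat_t2, threshold=0.5):
    """Binary change map from co-registered deep feature maps of shape (C, H, W)."""
    sim = F.cosine_similarity(feat_t1, feat_t2, dim=0)  # (H, W), values in [-1, 1]
    dissimilarity = (1.0 - sim) / 2.0                   # rescale to [0, 1]
    return (dissimilarity > threshold).float()          # 1 = changed, 0 = unchanged

# Hypothetical 32-channel features on a 64x64 grid for the two acquisitions
f1, f2 = torch.randn(32, 64, 64), torch.randn(32, 64, 64)
cm = change_map_from_features(f1, f2)
print(cm.mean())  # fraction of pixels flagged as changed
```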

20 pages, 8709 KiB  
Article
Automatic Fine Co-Registration of Datasets from Extremely High Resolution Satellite Multispectral Scanners by Means of Injection of Residues of Multivariate Regression
by Luciano Alparone, Alberto Arienzo and Andrea Garzelli
Remote Sens. 2024, 16(19), 3576; https://doi.org/10.3390/rs16193576 - 25 Sep 2024
Abstract
This work presents two pre-processing patches that automatically correct the residual local misalignment of datasets acquired by very/extremely high resolution (VHR/EHR) satellite multispectral (MS) scanners: one for, e.g., GeoEye-1 and Pléiades, which feature two separate instruments for MS and panchromatic (Pan) data, and the other for WorldView-2/3, which feature three instruments, two of which are visible and near-infrared (VNIR) MS scanners. The misalignment arises because the two/three instruments onboard GeoEye-1/WorldView-2 (four onboard WorldView-3) share the same optics and thus cannot have parallel optical axes. Consequently, they image the same swath area from different positions along the orbit. Local height changes (hills, buildings, trees, etc.) give rise to local shifts among corresponding points in the datasets. The latter would be accurately aligned only if the digital elevation surface model were known at sufficient spatial resolution, which is hardly feasible everywhere given the extremely high resolution, with Pan pixels of less than 0.5 m. The refined co-registration is achieved by injecting the residue of the multivariate linear regression of each scanner towards lowpass-filtered Pan. Experiments with two and three instruments show that an almost perfect alignment is achieved; MS pansharpening is also shown to benefit greatly from the improved alignment. The proposed alignment procedures are real-time and fully automated; they do not require any additional or ancillary information, but rely uniquely on the unimodality of the MS and Pan sensors.
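
A rough, hypothetical sketch of the regression-residue idea is given below: the lowpass-filtered Pan is fitted as a linear combination of the Pan-grid-resampled MS bands via least squares, and the fit residue is injected back into the bands. The Gaussian lowpass filter, the plain least-squares fit, and the uniform injection into every band are simplifying assumptions; this is not the paper's exact algorithm.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def inject_regression_residue(ms, pan, sigma=2.0):
    """Illustrative regression-residue injection.

    ms:  (B, H, W) MS bands already interpolated to the Pan grid
    pan: (H, W) panchromatic image
    """
    pan_low = gaussian_filter(pan, sigma)                         # lowpass-filtered Pan
    B, H, W = ms.shape
    X = np.concatenate([ms.reshape(B, -1), np.ones((1, H * W))])  # bands + bias term
    w, *_ = np.linalg.lstsq(X.T, pan_low.ravel(), rcond=None)     # multivariate fit
    residue = pan_low.ravel() - X.T @ w                           # regression residue
    return ms + residue.reshape(H, W)                             # inject into each band

ms  = np.random.rand(4, 128, 128)  # hypothetical 4-band MS, resampled to the Pan grid
pan = np.random.rand(128, 128)
ms_aligned = inject_regression_residue(ms, pan)
print(ms_aligned.shape)  # (4, 128, 128)
```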

29 pages, 10253 KiB  
Article
Hyperspectral Image Denoising and Compression Using Optimized Bidirectional Gated Recurrent Unit
by Divya Mohan, Aravinth J and Sankaran Rajendran
Remote Sens. 2024, 16(17), 3258; https://doi.org/10.3390/rs16173258 - 2 Sep 2024
Abstract
The fine spectral bandwidth available at higher resolution in hyperspectral images (HSI) makes it easier to identify objects of interest in them. The inclusion of noise in the resulting collection of images is a limitation of HSI and adversely affects post-processing and data interpretation. Denoising HSI data is thus necessary for the effective execution of post-processing activities such as image categorization and spectral unmixing. Most existing models cannot handle many forms of noise simultaneously, and when it comes to compression, available models suffer from increased processing time and lower accuracy. To overcome these limitations, an image denoising model using an adaptive fusion network is proposed. The denoised output is then processed by a compression model that uses an optimized deep learning technique called the "chaotic Chebyshev artificial hummingbird optimization algorithm-based bidirectional gated recurrent unit" (CCAO-BiGRU). All the proposed models were implemented in Python and evaluated on the Indian Pines, Washington DC Mall, and CAVE datasets. The proposed model underwent qualitative and quantitative analysis and achieved a PSNR of 82 on Indian Pines and 78.4 on the Washington DC Mall dataset at a compression rate of 10. The study proved that the proposed model captures the complex nonlinear mapping between noise-free and noisy HSI, yielding denoised images and high-quality compressed output.
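
As a minimal sketch of the BiGRU component named above, the snippet below encodes each pixel's spectrum (a sequence of band values) into a short latent code with a bidirectional GRU. All sizes are assumptions, and the chaotic Chebyshev artificial hummingbird optimization that tunes the network in the paper is omitted.

```python
import torch
import torch.nn as nn

class SpectralBiGRUCompressor(nn.Module):
    """Illustrative bidirectional-GRU encoder for per-pixel HSI spectra."""
    def __init__(self, hidden=32, latent=16):
        super().__init__()
        self.gru = nn.GRU(input_size=1, hidden_size=hidden,
                          batch_first=True, bidirectional=True)
        self.to_latent = nn.Linear(2 * hidden, latent)

    def forward(self, spectra):
        # spectra: (num_pixels, num_bands) -> sequence of scalar band values
        x = spectra.unsqueeze(-1)            # (pixels, bands, 1)
        _, h = self.gru(x)                   # h: (2, pixels, hidden)
        h = torch.cat([h[0], h[1]], dim=-1)  # concat forward/backward final states
        return self.to_latent(h)             # (pixels, latent) compressed code

spectra = torch.rand(100, 200)  # hypothetical 100 pixels x 200 spectral bands
codes = SpectralBiGRUCompressor()(spectra)
print(codes.shape)  # torch.Size([100, 16])
```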
