
3D Information Recovery and 2D Image Processing for Remotely Sensed Optical Images

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (31 May 2023) | Viewed by 39472

Special Issue Editors


Guest Editor
School of Remote Sensing and Information Engineering, Wuhan University, Wuhan 430079, China
Interests: computer vision; SLAM; artificial intelligence; LiDAR point cloud processing

Guest Editor
School of Control Science and Engineering, Shandong University, Jinan 250061, China
Interests: computer vision; machine learning; robotics
Guest Editor
School of Remote Sensing and Information Engineering, Wuhan University, Wuhan 430079, China
Interests: image processing; texture mapping; photogrammetry

Special Issue Information

Dear Colleagues,

In the photogrammetry and remote sensing fields, an important and longstanding task is the recovery of the 3D information of scenes, followed by the generation of visually appealing digital orthophoto maps (DOMs) with rich semantic information. Remotely sensed optical images are among the most widely used data sources. The key technologies of this task include 3D information recovery and 2D image processing. Recently, with the development of deep-learning techniques, many deep-learning-based methods have been proposed in the computer vision field to recover the 3D information of scenes, to enhance image quality, and to acquire semantic information. However, almost all of these methods focus on photos taken by smartphones or SLR cameras; few works have explored these recent advances in remote sensing. Thus, we aim to collect recent research related to “3D Information Recovery and 2D Image Processing for Remotely Sensed Optical Images”. We invite you to participate in this Special Issue by submitting articles. Topics of particular interest include, but are not limited to, the following:

  • Feature matching and outlier detection for remote sensing image matching;
  • Pose estimation from 2D remote sensing images;
  • Dense matching of images acquired by remote sensing for 3D reconstruction;
  • Depth estimation of images acquired by remote sensing;
  • Texture mapping for 3D models;
  • Digital elevation model generation from remotely sensed images;
  • Digital orthophoto map generation;
  • Image stitching and color correction for remotely sensed images;
  • Enhancement, denoising, and super-resolution of images acquired by remote sensing;
  • Semantic segmentation and object detection for images obtained by remote sensing.

Prof. Dr. Jian Yao
Prof. Dr. Wei Zhang
Dr. Li Li
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • deep learning
  • remote sensing image processing
  • feature matching
  • dense matching
  • pose estimation
  • 3D reconstruction
  • semantic segmentation
  • object detection
  • image stitching
  • image enhancement
  • image denoising
  • image super-resolution
  • digital elevation model (DEM)
  • digital orthophoto map (DOM)

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.


Published Papers (18 papers)


Research


19 pages, 9866 KiB  
Article
Hierarchical Edge-Preserving Dense Matching by Exploiting Reliably Matched Line Segments
by Yi Yue, Tong Fang, Wen Li, Min Chen, Bo Xu, Xuming Ge, Han Hu and Zhanhao Zhang
Remote Sens. 2023, 15(17), 4311; https://doi.org/10.3390/rs15174311 - 1 Sep 2023
Viewed by 1049
Abstract
Image dense matching plays a crucial role in the reconstruction of three-dimensional models of buildings. However, large variations in target heights and serious occlusion lead to obvious mismatches in areas with discontinuous depths, such as building edges. To solve this problem, the present study mines the geometric and semantic information of line segments to produce a constraint for the dense matching process. First, a disparity consistency-based line segment matching method is proposed. This method correctly matches line segments on building structures in discontinuous areas based on the assumption that, within the corresponding local areas formed by two corresponding line pairs, the disparity obtained by the coarse-level matching of the hierarchical dense matching is similar to that derived from the local homography estimated from the corresponding line pairs. Second, an adaptive guide parameter is designed to constrain the cost propagation between pixels in the neighborhood of line segments. This improves the rationality of cost aggregation paths in discontinuous areas, thereby enhancing the matching accuracy near building edges. Experimental results using satellite and aerial images show that the proposed method efficiently obtains reliable line segment matches at building edges with a matching precision exceeding 97%. Under the constraint of the matched line segments, the proposed dense matching method generates building edges that are visually clearer, and achieves higher accuracy around edges, than without the line segment constraint. Full article
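The coarse-level dense matching this abstract builds on can be illustrated with a minimal winner-take-all block matcher (a generic sketch, not the authors' hierarchical, line-segment-constrained method; the window size and disparity range are arbitrary choices):

```python
import numpy as np

def sad_disparity(left, right, max_disp=16, win=3):
    """Winner-take-all disparity via sum of absolute differences (SAD).

    A toy baseline for coarse dense matching; real pipelines add
    hierarchical refinement and cost aggregation near depth edges,
    which is where the paper's line-segment constraint operates.
    """
    h, w = left.shape
    r = win // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(r, h - r):
        for x in range(r + max_disp, w - r):
            patch = left[y - r:y + r + 1, x - r:x + r + 1]
            # cost of each candidate disparity d: left pixel x vs right pixel x-d
            costs = [
                np.abs(patch - right[y - r:y + r + 1,
                                     x - d - r:x - d + r + 1]).sum()
                for d in range(max_disp)
            ]
            disp[y, x] = int(np.argmin(costs))
    return disp
```

Such a matcher fails exactly where the paper focuses: at depth discontinuities, where windows straddle building edges.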

20 pages, 11594 KiB  
Article
Sat-Mesh: Learning Neural Implicit Surfaces for Multi-View Satellite Reconstruction
by Yingjie Qu and Fei Deng
Remote Sens. 2023, 15(17), 4297; https://doi.org/10.3390/rs15174297 - 31 Aug 2023
Cited by 8 | Viewed by 2552
Abstract
Automatic reconstruction of surfaces from satellite imagery is a hot topic in computer vision and photogrammetry. State-of-the-art reconstruction methods typically produce 2.5D elevation data. In contrast, we propose a one-stage method directly generating a 3D mesh model from multi-view satellite imagery. We introduce a novel Sat-Mesh approach for satellite implicit surface reconstruction: We represent the scene as a continuous signed distance function (SDF) and leverage a volume rendering framework to learn the SDF values. To address the challenges posed by lighting variations and inconsistent appearances in satellite imagery, we incorporate a latent vector in the network architecture to encode image appearances. Furthermore, we introduce a multi-view stereo constraint to enhance surface quality. This constraint minimizes the similarity between image patches to optimize the position and orientation of the SDF surface. Experimental results demonstrate that our method achieves superior visual quality and quantitative accuracy in generating mesh models. Moreover, our approach can learn seasonal variations in satellite imagery, resulting in texture mesh models with different and consistent seasonal appearances. Full article
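The signed-distance representation Sat-Mesh learns can be illustrated with an analytic SDF and a basic sphere-tracing loop (a conceptual sketch only: the paper learns the SDF with a neural network trained by volume rendering, rather than marching an analytic function):

```python
import numpy as np

def sdf_sphere(p, center=np.array([0.0, 0.0, 5.0]), radius=1.0):
    """Signed distance to a sphere: negative inside, positive outside."""
    return np.linalg.norm(p - center) - radius

def sphere_trace(origin, direction, sdf, max_steps=128, eps=1e-5):
    """March along a ray by the SDF value until the surface is reached.

    The SDF guarantees each step is collision-free, so the march
    converges to the zero level set (the surface) from outside.
    """
    t = 0.0
    for _ in range(max_steps):
        d = sdf(origin + t * direction)
        if d < eps:
            return t          # hit: distance along the ray
        t += d
    return None               # miss within the step budget
```

A mesh is then extracted from the zero level set (e.g. by marching cubes) rather than by ray marching per pixel.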

20 pages, 11400 KiB  
Article
Vehicle Localization in a Completed City-Scale 3D Scene Using Aerial Images and an On-Board Stereo Camera
by Haihan Zhang, Chun Xie, Hisatoshi Toriya, Hidehiko Shishido and Itaru Kitahara
Remote Sens. 2023, 15(15), 3871; https://doi.org/10.3390/rs15153871 - 4 Aug 2023
Cited by 2 | Viewed by 1580
Abstract
Simultaneous Localization and Mapping (SLAM) forms the foundation of vehicle localization in autonomous driving. Utilizing high-precision 3D scene maps as prior information in vehicle localization greatly assists in the navigation of autonomous vehicles within large-scale 3D scene models. However, generating high-precision maps is complex and costly, posing challenges to commercialization. As a result, a global localization system that employs low-precision, city-scale 3D scene maps reconstructed by unmanned aerial vehicles (UAVs) is proposed to optimize visual positioning for vehicles. To address the discrepancies in image information caused by differing aerial and ground perspectives, this paper introduces a wall complementarity algorithm based on the geometric structure of buildings to refine the city-scale 3D scene. A 3D-to-3D feature registration algorithm is developed to determine vehicle location by integrating the optimized city-scale 3D scene with the local scene generated by an onboard stereo camera. Through simulation experiments conducted in a computer graphics (CG) simulator, the results indicate that utilizing a completed low-precision scene model enables achieving a vehicle localization accuracy with an average error of 3.91 m, which is close to the 3.27 m error obtained using the high-precision map. This validates the effectiveness of the proposed algorithm. The system demonstrates the feasibility of utilizing low-precision city-scale 3D scene maps generated by unmanned aerial vehicles (UAVs) for vehicle localization in large-scale scenes. Full article

18 pages, 4981 KiB  
Article
Point Cloud Registration Based on Fast Point Feature Histogram Descriptors for 3D Reconstruction of Trees
by Yeping Peng, Shengdong Lin, Hongkun Wu and Guangzhong Cao
Remote Sens. 2023, 15(15), 3775; https://doi.org/10.3390/rs15153775 - 29 Jul 2023
Cited by 10 | Viewed by 2338
Abstract
Three-dimensional (3D) reconstruction is an essential technique to visualize and monitor the growth of agricultural and forestry plants. However, inspecting tall plants (trees) remains a challenging task for single-camera systems. A combination of low-altitude remote sensing (an unmanned aerial vehicle) and a terrestrial capture platform (a mobile robot) is suggested to obtain the overall structural features of trees including the trunk and crown. To address the registration problem of the point clouds from different sensors, a registration method based on a fast point feature histogram (FPFH) is proposed to align the tree point clouds captured by terrestrial and airborne sensors. Normal vectors are extracted to define a Darboux coordinate frame whereby FPFH is calculated. The initial correspondences of point cloud pairs are calculated according to the Bhattacharyya distance. Reliable matching point pairs are then selected via random sample consensus. Finally, the 3D transformation is solved by singular value decomposition. For verification, experiments are conducted with real-world data. In the registration experiment on noisy and partial data, the root-mean-square error of the proposed method is 0.35% and 1.18% of SAC-IA and SAC-IA + ICP, respectively. The proposed method is useful for the extraction, monitoring, and analysis of plant phenotypes. Full article
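The final step of the registration pipeline described above, solving the 3D transformation by singular value decomposition, has a standard closed form (Kabsch). A minimal sketch of that step, assuming matched point pairs are already available (the FPFH matching, Bhattacharyya-distance correspondence, and RANSAC filtering are omitted):

```python
import numpy as np

def rigid_transform_svd(src, dst):
    """Closed-form rigid alignment (Kabsch): find R, t with dst ≈ R @ src + t.

    src, dst: (N, 3) arrays of corresponding points.
    """
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)            # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # correct an improper rotation (reflection) if det < 0
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    t = c_dst - R @ c_src
    return R, t
```

In the full method this solve runs on the inlier set that random sample consensus returns.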

31 pages, 18975 KiB  
Article
Generalized Stereo Matching Method Based on Iterative Optimization of Hierarchical Graph Structure Consistency Cost for Urban 3D Reconstruction
by Shuting Yang, Hao Chen and Wen Chen
Remote Sens. 2023, 15(9), 2369; https://doi.org/10.3390/rs15092369 - 30 Apr 2023
Cited by 2 | Viewed by 2032
Abstract
Generalized stereo matching faces the radiation difference and small ground feature difference brought by different satellites and different time phases, while the texture-less and disparity discontinuity phenomenon seriously affects the correspondence between matching points. To address the above problems, a novel generalized stereo matching method based on the iterative optimization of hierarchical graph structure consistency cost is proposed for urban 3D scene reconstruction. First, the self-similarity of images is used to construct k-nearest neighbor graphs. The left-view and right-view graph structures are mapped to the same neighborhood, and the graph structure consistency (GSC) cost is proposed to evaluate the similarity of the graph structures. Then, cross-scale cost aggregation is used to adaptively weight and combine multi-scale GSC costs. Next, object-based iterative optimization is proposed to optimize outliers in pixel-wise matching and mismatches in disparity discontinuity regions. The visibility term and the disparity discontinuity term are iterated to continuously detect occlusions and optimize the boundary disparity. Finally, fractal net evolution is used to optimize the disparity map. This paper verifies the effectiveness of the proposed method on a public US3D dataset and a self-made dataset, and compares it with state-of-the-art stereo matching methods. Full article
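The k-nearest-neighbor graph construction that the graph structure consistency (GSC) cost starts from can be sketched with a brute-force version (an illustration only; the paper builds these graphs from image self-similarity and then compares left-view and right-view graph structures, which is not shown here):

```python
import numpy as np

def knn_graph(feats, k=4):
    """Brute-force k-nearest-neighbor graph over feature vectors.

    Returns an (N, k) index array: row i lists the k nearest neighbors
    of point i (self excluded). Real systems use a KD-tree for speed.
    """
    d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)          # exclude self-matches
    return np.argsort(d2, axis=1)[:, :k]
```

A consistency cost then measures how similar the neighbor structure of a left-view pixel is to that of its candidate right-view match.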

23 pages, 7770 KiB  
Article
Reconstructing Digital Terrain Models from ArcticDEM and WorldView-2 Imagery in Livengood, Alaska
by Tianqi Zhang and Desheng Liu
Remote Sens. 2023, 15(8), 2061; https://doi.org/10.3390/rs15082061 - 13 Apr 2023
Cited by 4 | Viewed by 2273
Abstract
ArcticDEM provides the public with an unprecedented opportunity to access very high-spatial resolution digital elevation models (DEMs) covering the pan-Arctic surfaces. As it is generated from stereo-pairs of optical satellite imagery, ArcticDEM represents a mixture of a digital surface model (DSM) over non-ground areas and a digital terrain model (DTM) at bare ground. Reconstructing a DTM from ArcticDEM is thus needed in studies requiring bare-ground elevation, such as modeling hydrological processes, tracking surface change dynamics, and estimating vegetation canopy height and associated forest attributes. Here we propose an automated approach for estimating a DTM from ArcticDEM in two steps: (1) identifying ground pixels from WorldView-2 imagery using a Gaussian mixture model (GMM) with local refinement by morphological operation, and (2) generating a continuous DTM surface using ArcticDEMs at ground locations and spatial interpolation methods (ordinary kriging (OK) and natural neighbor (NN)). We evaluated our method at three forested study sites characterized by different canopy cover and topographic conditions in Livengood, Alaska, where airborne lidar data are available for validation. Our results demonstrate that (1) the proposed ground identification method can effectively identify ground pixels with much lower root mean square errors (RMSEs) (<0.35 m) to the reference data than the comparative state-of-the-art approaches; (2) NN performs more robustly in DTM interpolation than OK; (3) the DTMs generated from NN interpolation with GMM-based ground masks decrease the RMSEs of ArcticDEM to 0.648 m, 1.677 m, and 0.521 m for Site-1, Site-2, and Site-3, respectively. This study provides a viable means of deriving high-resolution DTMs from ArcticDEM that will be of great value to studies focusing on the Arctic ecosystems, forest change dynamics, and earth surface processes. Full article
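The GMM step of the pipeline can be sketched as a two-component 1-D expectation-maximization fit, separating a "ground" mode from a "non-ground" mode in some scalar feature (purely illustrative: the paper classifies WorldView-2 image features and refines the mask morphologically, neither of which is shown here):

```python
import numpy as np

def gmm2_em(x, iters=50):
    """Fit a two-component 1-D Gaussian mixture with EM.

    Returns means, std-devs, weights, and per-sample responsibilities
    for component 1 -- a sketch of GMM-based two-class labeling.
    """
    mu = np.percentile(x, [25, 75]).astype(float)   # crude initialization
    sd = np.array([x.std(), x.std()]) + 1e-6
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibilities under each Gaussian
        pdf = w / (sd * np.sqrt(2 * np.pi)) * \
              np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2)
        r = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from weighted samples
        n = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / n
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n) + 1e-6
        w = n / len(x)
    return mu, sd, w, r[:, 1]
```

Thresholding the responsibilities then yields a ground mask, after which the sparse ground elevations are interpolated into a continuous DTM surface.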

21 pages, 24584 KiB  
Article
Lightweight Semantic Architecture Modeling by 3D Feature Line Detection
by Shibiao Xu, Jiaxi Sun, Jiguang Zhang, Weiliang Meng and Xiaopeng Zhang
Remote Sens. 2023, 15(8), 1957; https://doi.org/10.3390/rs15081957 - 7 Apr 2023
Cited by 1 | Viewed by 1998
Abstract
Existing architecture semantic modeling methods in 3D complex urban scenes continue to face difficulties, such as limited training data, lack of semantic information, and inflexible model processing. Focusing on extracting and adopting accurate semantic information into a modeling process, this work presents a framework for lightweight modeling of buildings that joins point cloud semantic segmentation and 3D feature line detection constrained by geometric and photometric consistency. The main steps are: (1) extraction of single buildings from point clouds using 2D-3D semi-supervised semantic segmentation under photometric and geometric constraints; (2) generation of lightweight building models by using 3D plane-constrained multi-view feature line extraction and optimization; (3) introduction of detailed semantics of building elements into independent 3D building models by using fine-grained segmentation of multi-view images to achieve high-accuracy architecture lightweight modeling with fine-grained semantic information. Experimental results demonstrate that the framework can perform independent lightweight modeling of each building on point clouds at various scales and scenes, with accurate geometric appearance details and realistic textures. It also enables independent processing and analysis of each building in the scenario, making the models more useful in practical applications. Full article

21 pages, 11683 KiB  
Article
Voronoi Centerline-Based Seamline Network Generation Method
by Xiuxiao Yuan, Yang Cai and Wei Yuan
Remote Sens. 2023, 15(4), 917; https://doi.org/10.3390/rs15040917 - 7 Feb 2023
Cited by 2 | Viewed by 2236
Abstract
Seamline network generation is a crucial step in mosaicking multiple orthoimages. It determines the topological and mosaic contribution area for each orthoimage. Previous methods, such as Voronoi-based and AVOD (area Voronoi)-based, may generate mosaic holes in low-overlap and irregular orthoimage cases. This paper proposes a Voronoi centerline-based seamline network generation method to address this problem. The first step is to detect the edge vector of the valid orthoimage region; the second step is to construct a Voronoi triangle network using the edge vector points and extract the centerline of the network; the third step is to segment each orthoimage by the generated centerlines to construct the image effective mosaic polygon (EMP). The final segmented EMP is the mosaic contribution region. All EMPs are interconnected to form a seamline network. The main contribution of the proposed method is that it solves the mosaic holes in the Voronoi-based method when processing with low overlap, and it solves the limitation of the AVOD-based method polygon shape requirement, which can generate a complete mosaic in any overlap and any shape of the orthoimage. Five sets of experiments were conducted, and the results show that the proposed method surpasses the well-known state-of-the-art method and commercial software in terms of adaptability and effectiveness. Full article
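The classical Voronoi-based baseline this paper improves on assigns each mosaic pixel to the nearest orthoimage footprint center; a discrete sketch of that partition follows (an illustration of the baseline only, not the proposed centerline method, which instead builds the Voronoi structure on footprint edge vectors to survive low overlap and irregular shapes):

```python
import numpy as np

def voronoi_partition(height, width, centers):
    """Assign each mosaic pixel to its nearest orthoimage center.

    A discrete version of the Voronoi seamline baseline; with
    low-overlap or irregular footprints this partition can place
    pixels inside images that do not actually cover them (holes).
    """
    ys, xs = np.mgrid[0:height, 0:width]
    pix = np.stack([ys, xs], axis=-1).reshape(-1, 2).astype(float)
    d2 = ((pix[:, None, :] - np.asarray(centers, float)[None]) ** 2).sum(-1)
    return d2.argmin(axis=1).reshape(height, width)
```

Each connected label region is one image's mosaic contribution area; the shared boundaries form the seamline network.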

23 pages, 7702 KiB  
Article
Two-View Structure-from-Motion with Multiple Feature Detector Operators
by Elisabeth Johanna Dippold and Fuan Tsai
Remote Sens. 2023, 15(3), 605; https://doi.org/10.3390/rs15030605 - 19 Jan 2023
Cited by 2 | Viewed by 2879
Abstract
This paper presents a novel two-view Structure-from-Motion (SfM) algorithm with the application of multiple Feature Detector Operators (FDO). The key of this study is the implementation of multiple FDOs into a two-view SfM algorithm. The two-view SfM algorithm workflow can be divided into three general steps: feature detection and matching, pose estimation and point cloud (PCL) generation. The experimental results, the quantitative analyses and a comparison with existing algorithms demonstrate that the implementation of multiple FDOs can effectively improve the performance of a two-view SfM algorithm. Firstly, in the Oxford test dataset, the RMSE reaches on average 0.11 m (UBC), 0.36 m (bikes), 0.52 m (trees) and 0.37 m (Leuven). This proves that illumination changes, blurring and JPEG compression can be handled satisfactorily. Secondly, in the EPFL dataset, the number of features lost in the processes is 21% with a total PCL of 27,673 pt, and this is only minimally higher than ORB (20.91%) with a PCL of 10,266 pt. Finally, the verification process with a real-world unmanned aerial vehicle (UAV) shows that the point cloud is denser around the edges, the corners and the target, and the process speed is much faster than existing algorithms. Overall, the framework proposed in this study has been proven a viable alternative to a classical procedure, in terms of performance, efficiency and simplicity. Full article
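The point cloud generation step of a generic two-view SfM workflow reduces to triangulating each matched feature from the two camera projection matrices. A minimal linear (DLT) triangulation sketch, assuming poses are already estimated (feature detection with multiple FDOs and pose estimation, the paper's focus, are omitted):

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views.

    P1, P2: 3x4 projection matrices; x1, x2: (u, v) pixel coordinates.
    Builds the homogeneous system u*(P row 3) - (P row 1) = 0 etc. and
    takes the SVD null vector as the 3D point.
    """
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]          # de-homogenize
```

Running this over all inlier matches yields the sparse point cloud whose density the paper evaluates.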

16 pages, 5801 KiB  
Article
Self-Supervised Depth Completion Based on Multi-Modal Spatio-Temporal Consistency
by Quan Zhang, Xiaoyu Chen, Xingguo Wang, Jing Han, Yi Zhang and Jiang Yue
Remote Sens. 2023, 15(1), 135; https://doi.org/10.3390/rs15010135 - 26 Dec 2022
Cited by 4 | Viewed by 1874
Abstract
Due to their low cost and easy deployment, self-supervised depth completion methods have been widely studied in recent years. In this work, a self-supervised depth completion method is designed based on multi-modal spatio-temporal consistency (MSC). Self-supervised depth completion still faces several problems: moving objects, occluded/dark/low-texture parts, long-distance completion, and cross-modal fusion. In the face of these problems, the most critical novelty of this work lies in the self-supervised mechanism designed to train the depth completion network under the MSC constraint. It not only makes better use of depth-temporal data, but also exploits the photometric-temporal constraint. With the self-supervised mechanism of the MSC constraint, the overall system outperforms many other self-supervised networks, even exceeding some partially supervised networks. Full article

18 pages, 12814 KiB  
Article
Efficient and Robust Feature Matching for High-Resolution Satellite Stereos
by Danchao Gong, Xu Huang, Jidan Zhang, Yongxiang Yao and Yilong Han
Remote Sens. 2022, 14(21), 5617; https://doi.org/10.3390/rs14215617 - 7 Nov 2022
Cited by 4 | Viewed by 2108
Abstract
Feature matching between high-resolution satellite stereos plays an important role in satellite image orientation. However, images of changed regions, weak-textured regions and occluded regions may generate low-quality matches or even mismatches. Furthermore, matching throughout the entire satellite images often has extremely high time cost. To compute good matching results at low time cost, this paper proposes an image block selection method for high-resolution satellite stereos, which processes feature matching in several optimal blocks instead of the entire images. The core of the method is to formulate the block selection into the optimization of an energy function, and a greedy strategy is designed to compute an approximate solution. The experimental comparisons on various satellite stereos show that the proposed method could achieve similar matching accuracy and much lower time cost when compared with some state-of-the-art satellite image matching methods. Thus, the proposed method is a good compromise between matching accuracy and matching time, which has great potential in large-scale satellite applications. Full article
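The greedy strategy the abstract mentions can be sketched generically: repeatedly pick the block with the largest marginal gain until a budget is exhausted (a sketch under stated assumptions: `score` here is a hypothetical stand-in for the paper's energy function, whose actual terms are not given in the abstract):

```python
def greedy_select(blocks, score, budget):
    """Greedily pick blocks maximizing a marginal score until the budget.

    `score(chosen, candidate)` returns the gain of adding `candidate`
    given the already-chosen set -- the classic greedy approximation
    for optimizing a set-function energy.
    """
    chosen = []
    remaining = list(blocks)
    while remaining and len(chosen) < budget:
        best = max(remaining, key=lambda b: score(chosen, b))
        if score(chosen, best) <= 0:
            break                    # no remaining block improves the energy
        chosen.append(best)
        remaining.remove(best)
    return chosen
```

Feature matching then runs only inside the selected blocks instead of over the full satellite scene.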

13 pages, 6768 KiB  
Article
Structure Tensor-Based Infrared Small Target Detection Method for a Double Linear Array Detector
by Jinyan Gao, Luyuan Wang, Jiyang Yu and Zhongshi Pan
Remote Sens. 2022, 14(19), 4785; https://doi.org/10.3390/rs14194785 - 25 Sep 2022
Cited by 2 | Viewed by 1696
Abstract
The paper focuses on the mathematical modeling of a new double linear array detector. The special feature of the detector is that image pairs can be generated at short intervals in one scan. After registration and removal of dynamic cloud edges in each image, the image differentiation-based change detection method in the temporal domain is proposed to combine with the structure tensor edge suppression method in the spatial domain. Finally, experiments are conducted, and our results are compared with theoretic analyses. It is found that a high signal-to-clutter ratio (SCR) of camera input is required to obtain an acceptable detection rate and false alarm rate in real scenes. Experimental results also show that the proposed cloud edge removal solution can be used to successfully detect targets with a very low false alarm rate and an acceptable detection rate. Full article
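The structure-tensor edge response used for spatial-domain edge suppression can be sketched as follows (an illustration of the standard structure tensor, not the paper's full detector; the box-filter window is a simplification):

```python
import numpy as np

def structure_tensor_edges(img, win=3):
    """Larger eigenvalue of the local structure tensor (edge strength).

    Strong anisotropic gradients (e.g. cloud edges) yield a large
    eigenvalue; a detector can suppress these regions before searching
    for small point targets.
    """
    gy, gx = np.gradient(img.astype(float))
    r = win // 2
    def smooth(a):                       # box filter via shifted sums
        out = np.zeros_like(a)
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                out += np.roll(np.roll(a, dy, axis=0), dx, axis=1)
        return out / (win * win)
    Jxx, Jxy, Jyy = smooth(gx * gx), smooth(gx * gy), smooth(gy * gy)
    tr, det = Jxx + Jyy, Jxx * Jyy - Jxy ** 2
    # eigenvalues of [[Jxx, Jxy], [Jxy, Jyy]]; return the larger one
    return 0.5 * (tr + np.sqrt(np.maximum(tr ** 2 - 4 * det, 0.0)))
```

Point targets, unlike edges, produce a locally isotropic response (both eigenvalues comparable), which is what lets edge suppression keep them.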

13 pages, 6823 KiB  
Article
Remote Sensing Image Information Quality Evaluation via Node Entropy for Efficient Classification
by Jiachen Yang, Yue Yang, Jiabao Wen, Yang Li and Sezai Ercisli
Remote Sens. 2022, 14(17), 4400; https://doi.org/10.3390/rs14174400 - 4 Sep 2022
Cited by 4 | Viewed by 1931
Abstract
Combining remote sensing images with deep learning algorithms plays an important role in a wide range of applications. However, it is difficult to build large-scale labeled datasets for remote sensing images because of acquisition conditions and costs. How to use a limited acquisition budget to obtain a better remote sensing image dataset is a problem worth studying. In response to this problem, this paper proposes a remote sensing image quality evaluation method based on node entropy, which can be combined with active learning to provide low-cost guidance for remote sensing image collection and labeling. The method includes a node selection module and a remote sensing image quality evaluation module. The function of the node selection module is to select representative images, and the remote sensing image quality evaluation module evaluates the remote sensing image information quality by calculating the node entropy of the images. An image at the decision boundary of the existing images has a higher information quality. To validate the method proposed in this paper, experiments are performed on two public datasets. The experimental results confirm the superiority of this method compared with other methods. Full article
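The core active-learning idea, rank candidate images by the entropy of their predicted class distribution and label the most ambiguous first, can be sketched directly (a generic sketch: the paper computes entropy over graph nodes after a node selection step, which is not reproduced here):

```python
import numpy as np

def entropy(p, eps=1e-12):
    """Shannon entropy (in nats) of a probability vector."""
    p = np.asarray(p, dtype=float)
    return float(-(p * np.log(p + eps)).sum())

def rank_by_entropy(prob_rows):
    """Rank samples by descending entropy.

    High-entropy (ambiguous, near-decision-boundary) images come first,
    which is where a limited labeling budget is best spent.
    """
    ents = [entropy(p) for p in prob_rows]
    return sorted(range(len(ents)), key=lambda i: -ents[i])
```

A uniform class distribution maximizes the entropy, so images the classifier is least sure about rise to the top of the ranking.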

25 pages, 12175 KiB  
Article
Multi-Task Learning of Relative Height Estimation and Semantic Segmentation from Single Airborne RGB Images
by Min Lu, Jiayin Liu, Feng Wang and Yuming Xiang
Remote Sens. 2022, 14(14), 3450; https://doi.org/10.3390/rs14143450 - 18 Jul 2022
Cited by 8 | Viewed by 2585
Abstract
The generation of topographic classification maps and relative heights from aerial or remote sensing images is a crucial research topic in remote sensing. On the one hand, tasks ranging from autonomous driving, three-dimensional city modeling, road design, and resource statistics to smart cities require both relative height data and object classification data. On the other hand, most current methods for acquiring relative heights rely on multiple images. We observe that relative height data and geographic classification data can assist each other through their data distributions. In recent years, the rapid development of artificial intelligence has made it possible to estimate relative height from a single image: a network learns, in a data-driven manner, implicit mapping relationships that may not be available through explicit mathematical modeling. On this basis, we propose a unified deep learning architecture that generates both an estimated relative height map and a semantically segmented map and can be trained end to end. In contrast to existing methods, ours performs relative height estimation and semantic segmentation simultaneously: a single image yields both the semantic segmentation and the relative heights, and the model performs much better than models of comparable computational cost. We also designed dynamic weights that enable the model to learn both tasks at once, and we conducted extensive experiments on existing datasets. The results show that the proposed Transformer-based network architecture is well suited to relative height estimation and vastly outperforms other state-of-the-art deep learning (DL) methods.
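The abstract does not specify how the "dynamic weights" balance the regression and segmentation losses. One common mechanism is homoscedastic-uncertainty weighting (Kendall et al.), sketched below with plain floats; this is a plausible stand-in for the paper's scheme, not its actual formula.

```python
import math

def multitask_loss(l_height: float, l_seg: float,
                   log_var_h: float, log_var_s: float) -> float:
    """Uncertainty-weighted sum of a height-regression loss and a
    segmentation loss. Each learnable log-variance s scales its task
    down by exp(-s), while the +s term penalizes ignoring the task
    entirely. At the optimum s = ln(loss), so the effective weights
    adapt automatically as the two losses change scale during
    training.
    """
    return (math.exp(-log_var_h) * l_height + log_var_h +
            math.exp(-log_var_s) * l_seg + log_var_s)
```

In a real network `log_var_h` and `log_var_s` would be trainable parameters updated by the same optimizer as the weights.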
32 pages, 10132 KiB  
Article
Robust Extraction of 3D Line Segment Features from Unorganized Building Point Clouds
by Pengju Tian, Xianghong Hua, Wuyong Tao and Miao Zhang
Remote Sens. 2022, 14(14), 3279; https://doi.org/10.3390/rs14143279 - 7 Jul 2022
Cited by 11 | Viewed by 3313
Abstract
As one of the most common features, 3D line segments provide visual information about scene surfaces and play an important role in many applications. However, due to the huge, unstructured, and non-uniform nature of building point clouds, 3D line segment extraction is a complicated task. This paper presents a novel method for extracting 3D line segment features from an unorganized building point cloud. Given the input point cloud, three steps are performed. First, we carry out data pre-processing, including subsampling, filtering, and projection. Second, a projection-based method divides the input point cloud into vertical and horizontal planes. Finally, for each 3D plane, all points belonging to it are projected onto the fitted plane, and the α-shape algorithm is exploited to extract the boundary points of each plane. The 3D line segment structures are extracted from the boundary points, followed by a 3D line segment merging procedure. Experiments demonstrate that the proposed method works well on both high-quality TLS and low-quality RGB-D point clouds, and its robustness in the presence of a high degree of noise is also demonstrated. A comparison with state-of-the-art techniques shows that our method is considerably faster and scales significantly better than previous ones. To further verify the effectiveness of the extracted line segments, we also present a line-based registration framework that employs the extracted 2D-projected line segments for coarse registration of building point clouds.
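The second step, splitting the cloud into vertical and horizontal planes, hinges on the orientation of each fitted plane's normal. A minimal sketch of that idea follows (an SVD plane fit plus a normal-angle test); the tolerance and function names are illustrative, not taken from the paper.

```python
import numpy as np

def fit_plane(points: np.ndarray):
    """Least-squares plane through an (n, 3) point set. The normal is
    the right singular vector of the centered points with the smallest
    singular value."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]

def plane_orientation(normal: np.ndarray, tol_deg: float = 10.0) -> str:
    """Classify a fitted plane by its normal: a horizontal plane
    (roof/floor) has a near-vertical normal, a vertical plane (wall)
    a near-horizontal one."""
    n = normal / np.linalg.norm(normal)
    tilt = np.degrees(np.arccos(abs(n[2])))  # angle between normal and +z
    if tilt < tol_deg:
        return "horizontal"
    if tilt > 90.0 - tol_deg:
        return "vertical"
    return "oblique"
```

Once each plane is labeled, its points can be projected onto the fitted plane and handed to an α-shape routine (e.g. from CGAL or a 2D alpha-shape library) to trace the boundary.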
18 pages, 16895 KiB  
Article
Optimizing Local Alignment along the Seamline for Parallax-Tolerant Orthoimage Mosaicking
by Hongche Yin, Yunmeng Li, Junfeng Shi, Jiaqin Jiang, Li Li and Jian Yao
Remote Sens. 2022, 14(14), 3271; https://doi.org/10.3390/rs14143271 - 7 Jul 2022
Cited by 6 | Viewed by 2085
Abstract
Orthoimage mosaicking with obvious parallax caused by geometric misalignment is a challenging problem in remote sensing. Because prominent objects are not included in the digital terrain model (DTM), large parallax exists at these objects. A common strategy is to search for an optimal seamline between orthoimages that avoids the majority of prominent objects. However, stitching artifacts may remain because (1) the seamline may still cross several such objects and (2) the orthoimages may not be precisely aligned when the accuracy of the DTM is low. While applying general image-warping methods to orthoimages can improve the local geometric consistency of adjacent images, these methods usually significantly alter the geometric properties of orthophoto maps. To the best of our knowledge, no approach in remote sensing has addressed local geometric misalignment after orthoimage mosaicking with obvious parallax. In this paper, we propose a method that optimizes local alignment along the seamline after seamline detection. It consists of the following main processes. First, we locate regions with geometric misalignment along the seamline based on a similarity measure. Second, for each such region, we find one-dimensional (1D) feature matches along the seamline using a semi-global matching approach and compute a deformation vector for each match. Third, these deformation vectors are robustly and smoothly propagated into a buffer region centered on the seamline by minimizing an associated energy function. Finally, we warp the orthoimages under the guidance of the dense deformation vectors to eliminate the local parallax. Experimental results on several groups of orthoimages show that the proposed approach eliminates local parallax along the seamline while preserving most geometric properties of digital orthophoto maps, and that it outperforms state-of-the-art approaches in both visual quality and quantitative metrics.
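The third step turns sparse deformation vectors at seamline matches into a dense, smooth field over the buffer. The paper does this by minimizing an energy function; the sketch below substitutes simple normalized Gaussian distance weighting to convey the idea, so the kernel and all names are assumptions.

```python
import numpy as np

def propagate_deformation(match_pos, match_vec, query_pos, sigma=20.0):
    """Spread sparse deformation vectors (one 2-D pixel shift per
    seamline match, located at arclength match_pos[i] along the
    seamline) to arbitrary seamline positions using normalized
    Gaussian weights: nearby matches dominate, and the resulting
    field varies smoothly between them."""
    match_pos = np.asarray(match_pos, dtype=float)
    match_vec = np.asarray(match_vec, dtype=float)
    query_pos = np.asarray(query_pos, dtype=float)
    w = np.exp(-(query_pos[:, None] - match_pos[None, :]) ** 2
               / (2.0 * sigma ** 2))
    w /= w.sum(axis=1, keepdims=True)   # weights per query sum to 1
    return w @ match_vec                # (n_queries, 2) dense field
```

The dense field would then guide a local warp of each orthoimage near the seamline, leaving the rest of the orthophoto untouched.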
Other


16 pages, 3714 KiB  
Technical Note
A Convolution and Attention Neural Network with MDTW Loss for Cross-Variable Reconstruction of Remote Sensing Image Series
by Chao Li, Haoran Wang, Qinglei Su, Chunlin Ning and Teng Li
Remote Sens. 2023, 15(14), 3552; https://doi.org/10.3390/rs15143552 - 14 Jul 2023
Viewed by 1270
Abstract
Environmental images captured by satellites can provide significant information for weather forecasting, climate early warning, and other applications. This article introduces a novel deep neural network that integrates a convolutional attention feature extractor (CAFE) into a recurrent neural network frame together with a multivariate dynamic time warping (MDTW) loss. The CAFE module is designed to capture the complicated, hidden dependencies within image series between the source variable and the target variable, allowing the proposed method to reconstruct image series across environmental variables. Its performance is validated by experiments on a real-world remote sensing dataset and compared with several representative methods. The experimental results demonstrate the strong performance of the proposed method for cross-variable image series reconstruction.
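A small reference implementation of multivariate DTW clarifies what an MDTW loss measures: the minimum cumulative frame-to-frame distance over all monotone alignments of two series. The trainable loss in the paper presumably uses a smoothed, differentiable variant; this classic dynamic-programming version is for intuition only.

```python
import numpy as np

def mdtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Multivariate DTW cost between series a (n, d) and b (m, d),
    using Euclidean distance between frames. D[i, j] holds the best
    cumulative cost of aligning a[:i] with b[:j]; each cell extends
    the cheapest of the three admissible predecessor alignments."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return float(D[n, m])
```

Unlike a per-frame mean-squared error, this cost is zero for two series that differ only by local time warping, which is why it suits series whose events are shifted between variables.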
11 pages, 2709 KiB  
Technical Note
Dithered Depth Imaging for Single-Photon Lidar at Kilometer Distances
by Jiying Chang, Jining Li, Kai Chen, Shuai Liu, Yuye Wang, Kai Zhong, Degang Xu and Jianquan Yao
Remote Sens. 2022, 14(21), 5304; https://doi.org/10.3390/rs14215304 - 23 Oct 2022
Cited by 5 | Viewed by 1912
Abstract
Depth imaging using single-photon lidar (SPL) is crucial for long-range imaging and target recognition. Subtractive-dithered SPL breaks through the range-resolution limit imposed by the coarse timing resolution of the detector. Considering the weak signals at kilometer distances, we present a novel imaging method that blends subtractive dither with a total-variation image restoration algorithm, exploiting spatial correlation to obtain more accurate depth profile images from fewer signal photons. We then demonstrate subtractive-dither measurements at ranges up to 1.8 km using an array of avalanche photodiodes (APDs) operating in Geiger mode. Compared with pixel-wise maximum-likelihood estimation, the proposed method reduces the depth error, showing great promise for high-resolution depth imaging at long range.
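Subtractive dithering beats a coarse timing quantizer by adding a known random offset before quantization, subtracting it afterwards, and averaging over many pulses so that sub-bin delays survive. The toy model below (a rounding quantizer with uniform dither over one bin) illustrates the principle only; it is not a model of the actual detector electronics in the paper.

```python
import numpy as np

def dithered_depth_estimate(true_delay: float, bin_width: float,
                            n_pulses: int, rng) -> float:
    """Estimate a photon time-of-flight finer than the quantizer bin.

    Per pulse: add known dither d ~ U[-bin/2, bin/2), quantize the
    dithered delay to the nearest bin, then subtract d again. Each
    sample is an unbiased estimate of the true delay, so the mean
    over many pulses converges well below the bin width.
    """
    dither = rng.uniform(-bin_width / 2.0, bin_width / 2.0, n_pulses)
    quantized = np.round((true_delay + dither) / bin_width) * bin_width
    return float(np.mean(quantized - dither))
```

Without dither the same quantizer would always report the nearest bin center, so the residual error could be as large as half a bin no matter how many pulses are averaged.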