3D Reconstruction and Visualization of Dynamic Object/Scenes Using Data Fusion

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (31 December 2021) | Viewed by 33723

Special Issue Editors


Dr. Kyungeun Cho
Guest Editor
Department of Multimedia Engineering, Dongguk University, Seoul, Korea
Interests: 3D reconstruction; artificial intelligence for games and robots; virtual reality; NUI/NUX; human–robot interaction

Dr. Pradip Kumar Sharma
Guest Editor
Department of Computing Science, University of Aberdeen, Aberdeen, UK
Interests: edge computing; IoT security; blockchain; software-defined networking; social networking

Dr. Wei Song
Guest Editor
College of Computer Science and Technology, North China University of Technology, Beijing 100144, China
Interests: environment perception; unmanned ground vehicles; 3D reconstruction; object recognition

Special Issue Information

Dear Colleagues,

For an in-depth analysis and understanding of the contextual environment, knowledge of the 3D structure of a scene provides valuable information. 3D virtual reconstruction recovers the geometric structure of a scene from a collection of images, given the positions of the camera and its internal parameters. Data fusion-based 3D reconstruction using 3D sensors such as RGB-D cameras, LiDAR, and radar has been used in various applications, such as autonomous things, robotics, remote sensing, and VR/AR. In particular, deep learning methods for multi-modal 3D data fusion, using either images alone or heterogeneous sensor data such as images and point clouds, are actively used for 3D reconstruction in research and industry. Complexity, occlusions, a variety of structures, and inaccessible locations are serious issues that affect the capture of all the geometric details of 3D structures. It is therefore necessary to collect a large amount of data from different stations, which must be accurately registered and integrated together.

This Special Issue on “3D Reconstruction and Visualization of Dynamic Object/Scenes Using Data Fusion” will focus on robust methods for uncontrolled environments, including 3D scene modeling, autonomous exploration of unknown scenes, autonomous obstacle avoidance systems, etc. We welcome novel research, reviews, and opinion articles covering all related topics.

Dr. Kyungeun Cho
Dr. Pradip Kumar Sharma
Dr. Wei Song
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Multi-view 3D reconstruction
  • 3D remote sensing
  • Multi-modal data fusion of 3D sensors
  • Depth map fusion
  • Point cloud analysis
  • Deep learning and statistical computing
  • Procedural modeling

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (8 papers)

Research

20 pages, 6799 KiB  
Article
Reflective Noise Filtering of Large-Scale Point Cloud Using Transformer
by Rui Gao, Mengyu Li, Seung-Jun Yang and Kyungeun Cho
Remote Sens. 2022, 14(3), 577; https://doi.org/10.3390/rs14030577 - 26 Jan 2022
Cited by 24 | Viewed by 6078
Abstract
Point clouds acquired with LiDAR are widely adopted in various fields, such as three-dimensional (3D) reconstruction, autonomous driving, and robotics. However, the high-density point cloud of a large scene captured with LiDAR usually contains a large number of virtual points generated by the specular reflections of reflective materials, such as glass. When applying such large-scale high-density point clouds, reflection noise may have a significant impact on 3D reconstruction and other related techniques. In this study, we propose a method that uses deep learning and a multi-position sensor comparison method to remove reflection noise from high-density point clouds of large scenes. The proposed method converts large-scale high-density point clouds into a range image and subsequently applies a deep learning method and a multi-position sensor comparison method for noise detection. This alleviates the limitation of deep learning networks, specifically their inability to handle large-scale high-density point clouds. The experimental results show that the proposed algorithm can effectively detect and remove reflection noise.
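
The conversion of a massive point cloud into a range image is what makes learning-based noise detection tractable here. Below is a minimal NumPy sketch of such a spherical projection; the image size and vertical field-of-view bounds are illustrative assumptions, not the sensor settings used in the paper.

```python
import numpy as np

def point_cloud_to_range_image(points, h=64, w=1024,
                               fov_up_deg=3.0, fov_down_deg=-25.0):
    """Project an (N, 3) LiDAR point cloud onto an (h, w) range image.

    fov_up_deg/fov_down_deg are illustrative vertical field-of-view bounds;
    the real values depend on the sensor.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)            # range per point

    yaw = np.arctan2(y, x)                        # azimuth in [-pi, pi]
    pitch = np.arcsin(z / np.maximum(r, 1e-8))

    fov_up = np.radians(fov_up_deg)
    fov = fov_up - np.radians(fov_down_deg)

    # Normalize angles to pixel coordinates.
    u = ((yaw + np.pi) / (2 * np.pi)) * w         # column
    v = ((fov_up - pitch) / fov) * h              # row

    u = np.clip(u.astype(np.int32), 0, w - 1)
    v = np.clip(v.astype(np.int32), 0, h - 1)

    image = np.zeros((h, w), dtype=np.float32)
    # Write far points first so the nearest return wins per pixel.
    order = np.argsort(-r)
    image[v[order], u[order]] = r[order]
    return image
```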

31 pages, 11596 KiB  
Article
A Precisely One-Step Registration Methodology for Optical Imagery and LiDAR Data Using Virtual Point Primitives
by Chunjing Yao, Hongchao Ma, Wenjun Luo and Haichi Ma
Remote Sens. 2021, 13(23), 4836; https://doi.org/10.3390/rs13234836 - 28 Nov 2021
Cited by 1 | Viewed by 2077
Abstract
The registration of optical imagery and 3D Light Detection and Ranging (LiDAR) point data continues to be a challenge for various applications in photogrammetry and remote sensing. In this paper, the framework employs a new registration primitive called the virtual point (VP), which can be generated from the linear features within a LiDAR dataset, including straight lines (SLs) and curved lines (CLs). By using an auxiliary parameter (λ), it is easy to take advantage of the accurate and fast calculation of the one-step registration transformation model. The transformation model parameters and λs can be calculated simultaneously by applying the least squares method recursively. In urban areas, there are many buildings with different shapes; their boundaries therefore provide a large number of SL and CL features, and properly selecting linear features and transforming them into VPs can reduce the errors caused by the semi-discrete random characteristics of LiDAR points. According to the results presented in the paper, the registration precision can reach a level of 1–2 pixels of the optical images.
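
To illustrate the virtual point idea, the sketch below parameterizes a VP along a LiDAR line feature as a + λ(b − a) and alternates between solving for a transformation and updating the λs by least squares. For brevity it uses a 2D affine transform rather than the paper's full photogrammetric one-step model; the function names and the alternating scheme are illustrative assumptions.

```python
import numpy as np

def virtual_point(a, b, lam):
    """A virtual point on the LiDAR line segment a->b, parameterized by lam."""
    return a + lam * (b - a)

def register_affine(img_pts, line_starts, line_ends, iters=20):
    """Alternately solve for a 2D affine transform and the per-line lambdas
    so that transformed virtual points match their image correspondences.
    Requires at least three correspondences for the six affine parameters."""
    n = len(img_pts)
    lams = np.full(n, 0.5)          # start each VP at the segment midpoint
    A, t = np.eye(2), np.zeros(2)
    for _ in range(iters):
        # Step 1: fix the lambdas, solve the affine parameters linearly.
        vps = np.array([virtual_point(line_starts[i], line_ends[i], lams[i])
                        for i in range(n)])
        M = np.zeros((2 * n, 6))
        b = img_pts.reshape(-1)
        M[0::2, 0:2] = vps; M[0::2, 4] = 1.0   # x' = p0*vx + p1*vy + p4
        M[1::2, 2:4] = vps; M[1::2, 5] = 1.0   # y' = p2*vx + p3*vy + p5
        p, *_ = np.linalg.lstsq(M, b, rcond=None)
        A = np.array([[p[0], p[1]], [p[2], p[3]]])
        t = p[4:6]
        # Step 2: fix the transform, update each lambda in closed form.
        for i in range(n):
            d = A @ (line_ends[i] - line_starts[i])    # direction in image
            r = img_pts[i] - (A @ line_starts[i] + t)  # residual
            lams[i] = float(d @ r) / max(float(d @ d), 1e-12)
    return A, t, lams
```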

18 pages, 4283 KiB  
Article
Textured Mesh Generation Using Multi-View and Multi-Source Supervision and Generative Adversarial Networks
by Mingyun Wen, Jisun Park and Kyungeun Cho
Remote Sens. 2021, 13(21), 4254; https://doi.org/10.3390/rs13214254 - 22 Oct 2021
Cited by 1 | Viewed by 2648
Abstract
This study focuses on reconstructing accurate meshes with high-resolution textures from single images. The reconstruction process involves two networks: a mesh-reconstruction network and a texture-reconstruction network. The mesh-reconstruction network estimates a deformation map, which is used to deform a template mesh to the shape of the target object in the input image, and a low-resolution texture. We propose reconstructing a mesh with a high-resolution texture by enhancing the low-resolution texture through a super-resolution method. The architecture of the texture-reconstruction network is that of a generative adversarial network comprising a generator and a discriminator. During the training of the texture-reconstruction network, the discriminator must focus on learning high-quality texture predictions and ignore the difference between the generated mesh and the actual mesh. To achieve this objective, we used meshes reconstructed by the mesh-reconstruction network and textures generated through inverse rendering to produce pseudo-ground-truth images. We conducted experiments using the 3D-FUTURE dataset, and the results show that our proposed approach generates improved three-dimensional (3D) textured meshes compared to existing methods, both quantitatively and qualitatively. Additionally, through our proposed approach, the texture of the output image is significantly improved.
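
The key training trick is that the discriminator receives renders of the reconstructed mesh rather than the ground-truth mesh, so it scores texture quality instead of penalizing geometric error. A minimal PyTorch sketch of one such discriminator update follows; the network widths and the BCE loss are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class TextureDiscriminator(nn.Module):
    """Small patch-level discriminator for rendered texture images;
    channel widths here are illustrative, not the paper's design."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, stride=1, padding=1),  # patch scores
        )

    def forward(self, x):
        return self.net(x)

def discriminator_step(disc, opt, pseudo_gt_render, generated_render):
    """One adversarial update. Both inputs render the *reconstructed* mesh,
    so the discriminator compares textures rather than geometry."""
    bce = nn.BCEWithLogitsLoss()
    real_score = disc(pseudo_gt_render)
    fake_score = disc(generated_render.detach())
    loss = (bce(real_score, torch.ones_like(real_score)) +
            bce(fake_score, torch.zeros_like(fake_score)))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```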

22 pages, 28554 KiB  
Article
Reflective Noise Filtering of Large-Scale Point Cloud Using Multi-Position LiDAR Sensing Data
by Rui Gao, Jisun Park, Xiaohang Hu, Seungjun Yang and Kyungeun Cho
Remote Sens. 2021, 13(16), 3058; https://doi.org/10.3390/rs13163058 - 4 Aug 2021
Cited by 20 | Viewed by 5423
Abstract
Signals, such as point clouds captured by light detection and ranging sensors, are often affected by highly reflective objects, including specular opaque and transparent materials, such as glass, mirrors, and polished metal, which produce reflection artifacts, thereby degrading the performance of associated computer vision techniques. In traditional noise filtering methods for point clouds, noise is detected by considering the distribution of the neighboring points. However, the noise generated by reflected areas is quite dense and cannot be removed by considering the point distribution. Therefore, this paper proposes a noise removal method that detects dense noise points caused by reflective objects by comparing multi-position sensing data. The proposed method is divided into three steps. First, the point cloud data are converted to range images of depth and reflective intensity. Second, the reflected area is detected using a sliding window on the two converted range images. Finally, noise is filtered by comparing the detected reflected areas with data from neighboring sensor positions. Experimental results demonstrate that, unlike conventional methods, the proposed method can better filter dense and large-scale noise caused by reflective objects. In future work, we will attempt to add RGB images to improve the accuracy of noise detection.
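
As a rough illustration of the second step, the sketch below slides a window over the depth and intensity range images and flags windows that look reflective. The specific criterion (high mean intensity combined with high depth variance) and the thresholds are illustrative assumptions, not the paper's exact rule.

```python
import numpy as np

def detect_reflective_windows(depth_img, intensity_img, win=8,
                              intensity_thresh=0.8, depth_var_thresh=4.0):
    """Slide a win x win window over the two range images and flag
    windows that look reflective. The high-intensity/high-depth-variance
    criterion here is an illustrative assumption."""
    h, w = depth_img.shape
    mask = np.zeros((h, w), dtype=bool)
    for i in range(0, h - win + 1, win):
        for j in range(0, w - win + 1, win):
            d = depth_img[i:i + win, j:j + win]
            s = intensity_img[i:i + win, j:j + win]
            if s.mean() > intensity_thresh and d.var() > depth_var_thresh:
                mask[i:i + win, j:j + win] = True
    return mask
```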

26 pages, 22290 KiB  
Article
Deep Learning-Based Point Upsampling for Edge Enhancement of 3D-Scanned Data and Its Application to Transparent Visualization
by Weite Li, Kyoko Hasegawa, Liang Li, Akihiro Tsukamoto and Satoshi Tanaka
Remote Sens. 2021, 13(13), 2526; https://doi.org/10.3390/rs13132526 - 28 Jun 2021
Cited by 3 | Viewed by 3291
Abstract
Large-scale 3D-scanned point clouds enable the accurate and easy recording of complex 3D objects in the real world. The acquired point clouds often describe both the surface and internal 3D structure of the scanned objects. The recently proposed edge-highlighted transparent visualization method is effective for recognizing the whole 3D structure of such point clouds. This visualization adjusts the degree of opacity to highlight the edges of the 3D-scanned objects, realizing clear transparent viewing of entire 3D structures. However, for 3D-scanned point clouds, the quality of any edge-highlighting visualization depends on the distribution of the extracted edge points. Insufficient density, sparseness, or partial defects in the edge points can lead to unclear edge visualization. Therefore, in this paper, we propose a deep learning-based upsampling method that focuses on the edge regions of 3D-scanned point clouds to generate more edge points. The proposed upsampling network dramatically improves the point-distributional density, uniformity, and connectivity in the edge regions. Results on synthetic and scanned edge data show that our method can improve the percentage of edge points by more than 15% compared to the existing point cloud upsampling network. Our upsampling network works well for both sharp and soft edges, and also in combination with a noise-eliminating filter. We demonstrate the effectiveness of our upsampling network by applying it to various real 3D-scanned point clouds and show that the improved edge point distribution improves the visibility of the edge-highlighted transparent visualization of complex 3D-scanned objects.
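
Edge-focused upsampling presupposes a way to identify edge points in the first place. The sketch below uses the standard covariance-eigenvalue linearity test as a stand-in for the paper's own edge extraction; the neighborhood size and threshold are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def extract_edge_points(points, k=16, linearity_thresh=0.7):
    """Flag points whose local neighborhood is line-like, a common proxy
    for 3D edges. points: (N, 3) array."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)       # k nearest neighbors (incl. self)
    edge_mask = np.zeros(len(points), dtype=bool)
    for i, nbrs in enumerate(idx):
        nb = points[nbrs] - points[nbrs].mean(axis=0)
        cov = nb.T @ nb / k                # local 3x3 covariance
        evals = np.sort(np.linalg.eigvalsh(cov))[::-1]  # l1 >= l2 >= l3
        linearity = (evals[0] - evals[1]) / max(evals[0], 1e-12)
        edge_mask[i] = linearity > linearity_thresh
    return edge_mask
```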

19 pages, 29829 KiB  
Article
DeepLabV3-Refiner-Based Semantic Segmentation Model for Dense 3D Point Clouds
by Jeonghoon Kwak and Yunsick Sung
Remote Sens. 2021, 13(8), 1565; https://doi.org/10.3390/rs13081565 - 17 Apr 2021
Cited by 11 | Viewed by 5306
Abstract
Three-dimensional virtual environments can be configured as test environments for autonomous things, and remote sensing with 3D point clouds collected by light detection and ranging (LiDAR) can be used to detect virtual human objects by segmenting the collected 3D point clouds in a virtual environment. A traditional encoder-decoder model, such as DeepLabV3, improves the quality of the low-density 3D point clouds of human objects, where the quality is determined by the measurement gap of the LiDAR lasers. However, when a human object appears together with its surrounding environment in a 3D point cloud, it is difficult for a traditional encoder-decoder model to increase the density so that it fits the human object. This paper proposes the DeepLabV3-Refiner model, which refines the fit to human objects whose density has been increased through DeepLabV3. An RGB image containing a segmented human object is defined as a dense segmented image. DeepLabV3 is used to predict dense segmented images and 3D point clouds for human objects in 3D point clouds. In the Refiner model, the results of DeepLabV3 are refined to fit the human objects, and a dense segmented image fitted to the human objects is predicted. The dense 3D point cloud is calculated using the dense segmented image provided by the DeepLabV3-Refiner model. It was verified experimentally that the 3D point clouds analyzed by the DeepLabV3-Refiner model had a 4-fold increase in density. The proposed method achieved a 0.6% increase in density accuracy compared to DeepLabV3 and a 2.8-fold increase in the density corresponding to the human object. The proposed method can thus provide 3D point clouds whose density is increased to fit the human object, enabling an accurate 3D virtual environment based on the improved 3D point clouds.
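
A minimal PyTorch sketch of the coarse-then-refine idea follows, wrapping torchvision's DeepLabV3 with a small residual refinement head; the refiner architecture shown here is an illustrative assumption, not the paper's exact design.

```python
import torch
import torch.nn as nn
from torchvision.models.segmentation import deeplabv3_resnet50

class DeepLabV3Refiner(nn.Module):
    """DeepLabV3 backbone followed by a small refinement head, sketching
    the two-stage coarse-then-refine idea."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.coarse = deeplabv3_resnet50(weights=None, num_classes=num_classes)
        # Refiner sees the input image together with the coarse logits.
        self.refiner = nn.Sequential(
            nn.Conv2d(3 + num_classes, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, num_classes, 1),
        )

    def forward(self, image):
        coarse_logits = self.coarse(image)["out"]    # initial segmentation
        refined = self.refiner(torch.cat([image, coarse_logits], dim=1))
        return coarse_logits + refined               # residual refinement

# Usage sketch:
# model = DeepLabV3Refiner()
# out = model(torch.randn(2, 3, 256, 256))          # (2, num_classes, 256, 256)
```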

19 pages, 4350 KiB  
Article
DGCB-Net: Dynamic Graph Convolutional Broad Network for 3D Object Recognition in Point Cloud
by Yifei Tian, Long Chen, Wei Song, Yunsick Sung and Sangchul Woo
Remote Sens. 2021, 13(1), 66; https://doi.org/10.3390/rs13010066 - 26 Dec 2020
Cited by 10 | Viewed by 3895
Abstract
3D (three-dimensional) object recognition is a hot research topic that benefits environment perception, disease diagnosis, and the mobile robot industry. Point clouds collected by range sensors are a popular data structure for representing 3D object models. This paper proposes a 3D object recognition method named the Dynamic Graph Convolutional Broad Network (DGCB-Net) to realize feature extraction and 3D object recognition from point clouds. DGCB-Net adopts edge convolutional layers constructed from weight-shared multilayer perceptrons (MLPs) to automatically extract local features from the point cloud graph structure. The features obtained from all edge convolutional layers are concatenated together to form a feature aggregation. Instead of stacking many layers in depth, our DGCB-Net employs a broad architecture that extends the point cloud feature aggregation flatly. The broad architecture is structured using a flat combining architecture with multiple feature layers and enhancement layers. Both the feature layers and enhancement layers are concatenated together to further enrich the feature information of the point cloud. All features contribute to the object recognition results, such that our DGCB-Net shows better recognition performance than other 3D object recognition algorithms on ModelNet10/40 and our scanned point cloud dataset.
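
The sketch below shows one edge convolutional layer of the kind DGCB-Net builds on: a k-nearest-neighbor graph, a weight-shared MLP applied to (x_i, x_j − x_i) edge features, and max-pooling over neighbors. The broad combination of feature and enhancement layers is omitted, and the layer widths are illustrative assumptions.

```python
import torch
import torch.nn as nn

def knn_graph(x, k):
    """Indices of the k nearest neighbors of each point. x: (B, N, C)."""
    dist = torch.cdist(x, x)                  # (B, N, N) pairwise distances
    return dist.topk(k + 1, largest=False).indices[:, :, 1:]  # drop self

class EdgeConv(nn.Module):
    """One edge convolutional layer: a weight-shared MLP applied to
    (x_i, x_j - x_i) edge features, max-pooled over the neighbors."""
    def __init__(self, in_ch, out_ch, k=20):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(nn.Linear(2 * in_ch, out_ch), nn.ReLU())

    def forward(self, x):                     # x: (B, N, C)
        idx = knn_graph(x, self.k)            # (B, N, k)
        nbrs = torch.gather(
            x.unsqueeze(1).expand(-1, x.size(1), -1, -1), 2,
            idx.unsqueeze(-1).expand(-1, -1, -1, x.size(-1)))  # (B, N, k, C)
        center = x.unsqueeze(2).expand_as(nbrs)
        edge = torch.cat([center, nbrs - center], dim=-1)      # (B, N, k, 2C)
        return self.mlp(edge).max(dim=2).values  # max over neighbors

# Usage sketch: features = EdgeConv(3, 64)(torch.randn(2, 1024, 3))
```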

Other

15 pages, 5139 KiB  
Technical Note
LiDAR Data Enrichment by Fusing Spatial and Temporal Adjacent Frames
by Hao Fu, Hanzhang Xue, Xiaochang Hu and Bokai Liu
Remote Sens. 2021, 13(18), 3640; https://doi.org/10.3390/rs13183640 - 12 Sep 2021
Cited by 2 | Viewed by 2751
Abstract
In autonomous driving scenarios, the point cloud generated by LiDAR is usually considered an accurate but sparse representation. To enrich the LiDAR point cloud, this paper proposes a new technique that combines spatially adjacent frames and temporally adjacent frames. To eliminate the “ghost” artifacts caused by moving objects, a moving point identification algorithm is introduced that compares range images. Experiments are performed on the publicly available SemanticKITTI dataset. The experimental results show that the proposed method outperforms most previous approaches and is the only one among them that can run in real time for online use.
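
As a rough sketch of the moving point identification step, the function below flags adjacent-frame points whose projected range disagrees with the current frame's range image at the same pixel. The relative-difference criterion and its threshold are illustrative assumptions, not the paper's exact rule.

```python
import numpy as np

def moving_point_mask(proj_range, current_range_img, rows, cols,
                      rel_thresh=0.05):
    """Flag points from an adjacent frame as 'moving' when their projected
    range disagrees with the current frame's range image.

    proj_range:        (N,) range of each transformed adjacent-frame point
    current_range_img: (H, W) range image of the current frame
    rows, cols:        (N,) pixel coordinates of the projected points
    """
    observed = current_range_img[rows, cols]
    valid = observed > 0  # only compare pixels actually hit this frame
    rel_diff = np.abs(proj_range - observed) / np.maximum(observed, 1e-6)
    return valid & (rel_diff > rel_thresh)
```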
