
Sensing and Computer Vision Technologies in 3D Reconstruction and Understanding

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Remote Sensors".

Deadline for manuscript submissions: closed (12 March 2023) | Viewed by 10159

Special Issue Editors

National Engineering Laboratory for Integrated Aero-Space-Ground-Ocean Big Data Application Technology, School of Computer Science, Northwestern Polytechnical University, Xi’an 710129, China
Interests: 3D vision; 3D registration; 3D computer vision; deep learning; pattern recognition; point cloud/image feature description and matching; pose estimation
National Key Laboratory of Science and Technology on Multispectral Information Processing, School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan 430074, China
Interests: 3D action recognition/detection; 3D human/hand pose estimation; 3D object detection/tracking
Hubei Key Laboratory of Intelligent Geo-Information Processing, School of Computer Science, China University of Geosciences, Wuhan 430078, China
Interests: multi-view image matching; large-scale structure from motion (SfM); 3D point cloud processing

Special Issue Information

Dear Colleagues,

With the rapid development of 3D acquisition systems, 3D data have become easily accessible, which has greatly expanded research in the field of 3D vision. Two critical problems in 3D vision are 3D reconstruction and 3D understanding. The former aims to reconstruct a 3D point cloud or surface from various kinds of data, such as a single image, stereo images, image sequences, or 2.5D point clouds. The latter aims to analyze the semantic information of a 3D object or scene as a human observer would. In particular, sensing 3D data is a prerequisite for 3D understanding, and computer vision methods are the prevailing solutions for both 3D reconstruction and understanding.

In this Special Issue, we focus on these 3D vision tasks and present work on new sensing and computer vision technologies.

This Special Issue invites contributions on the following topics (the list is by no means exhaustive):

  • Stereo vision;
  • Structure from motion;
  • Depth estimation;
  • Point cloud registration;
  • Surface reconstruction;
  • Deep learning for 3D data;
  • 3D object tracking;
  • 3D semantic/instance segmentation;
  • 3D object part segmentation;
  • RGB-D vision.

Dr. Jiaqi Yang
Dr. Yang Xiao
Dr. Kun Sun
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • stereo vision
  • structure from motion
  • depth estimation
  • point cloud registration
  • surface reconstruction
  • deep learning for 3D data
  • 3D object tracking
  • 3D semantic/instance segmentation
  • 3D object part segmentation
  • RGB-D vision

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (4 papers)

Research

17 pages, 22226 KiB  
Article
Pose and Focal Length Estimation Using Two Vanishing Points with Known Camera Position
by Kai Guo, Rui Cao, Ye Tian, Binyuan Ji, Xuefeng Dong and Xuyang Li
Sensors 2023, 23(7), 3694; https://doi.org/10.3390/s23073694 - 3 Apr 2023
Cited by 3 | Viewed by 3214
Abstract
This paper proposes a new pose and focal length estimation method using two vanishing points and a known camera position. A vanishing point determines the unit direction vector of the corresponding parallel lines in the camera frame, and the unit direction vector of those lines in the world frame is given as input. Hence, the two unit direction vectors, in the camera and world frames, respectively, are related purely by the rotation matrix, which contains all the information about the camera pose. Because there are two vanishing points, two such correspondences are obtained, and each pair of unit direction vectors can be regarded as a pair of 3D points whose coordinates are the vectors' components. The key idea of this paper is that the vanishing-point problem is thereby converted into a rigid-body transformation problem with 3D–3D point correspondences, the usual form of the PnP (perspective-n-point) problem, which greatly simplifies pose estimation. In addition, in the camera frame, the camera position and the two vanishing points define two lines, and the angle between these lines equals the angle between the corresponding two sets of parallel lines in the world frame. Using this geometric constraint, the focal length can be estimated quickly. The solutions for pose and focal length are both unique. Experiments on synthetic data and real scenarios show that the proposed method performs well in terms of numerical stability, noise sensitivity, and computational speed, and is strongly robust to camera position noise.
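
The reduction described in the abstract is straightforward to prototype. Below is a minimal NumPy sketch, not the authors' implementation (the function names and the principal-point argument `pp` are ours), of the two steps: solving the quadratic that the angle constraint yields for the focal length, and recovering the rotation from the two direction-vector correspondences with a standard Kabsch/SVD alignment, i.e., the 3D–3D registration form mentioned in the abstract.

```python
import numpy as np

def focal_from_two_vps(vp1, vp2, pp, cos_world):
    """Focal length from two vanishing points (sketch, assumptions ours).

    The angle between the rays from the optical centre through the two
    vanishing points equals the known angle between the two sets of
    parallel lines in the world frame (cosine: cos_world). Writing the
    ray directions as (vp - pp, f) and squaring the cosine equation
    gives a quadratic in x = f**2.
    """
    v1 = np.asarray(vp1, float) - np.asarray(pp, float)
    v2 = np.asarray(vp2, float) - np.asarray(pp, float)
    a, b1, b2, c2 = v1 @ v2, v1 @ v1, v2 @ v2, cos_world**2
    # (a + x)^2 = c^2 (b1 + x)(b2 + x),  with x = f^2
    coeffs = [1.0 - c2, 2.0 * a - c2 * (b1 + b2), a * a - c2 * b1 * b2]
    for r in np.roots(coeffs):
        if abs(r.imag) < 1e-9 and r.real > 0 and (a + r.real) * cos_world >= 0:
            return float(np.sqrt(r.real))  # discard the spurious squared root
    raise ValueError("no consistent focal length")

def rotation_from_directions(d_world, d_cam):
    """Kabsch/SVD fit of R with d_cam ~ R @ d_world (rows: unit vectors).

    Two non-parallel correspondences determine R uniquely; the unit
    vectors are treated as 3D points, exactly the 3D-3D registration
    form the abstract reduces to.
    """
    H = np.asarray(d_world).T @ np.asarray(d_cam)          # 3x3 covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    return Vt.T @ D @ U.T
```

Given f, the camera-frame unit directions are the normalized rays (vp − pp, f) through the two vanishing points, after which `rotation_from_directions` returns the pose rotation.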

19 pages, 4529 KiB  
Article
A Dense Mapping Algorithm Based on Spatiotemporal Consistency
by Ning Liu, Chuangding Li, Gao Wang, Zibin Wu and Deping Li
Sensors 2023, 23(4), 1876; https://doi.org/10.3390/s23041876 - 7 Feb 2023
Cited by 1 | Viewed by 1783
Abstract
Dense mapping is an important part of mobile robot navigation and environmental understanding. To address Dense Surfel Mapping's reliance on a co-visibility (common-view) relationship as input, we propose a local map extraction strategy based on spatiotemporal consistency: the local map is extracted using inter-frame pose observability and temporal continuity. To reduce the blurring in map fusion caused by differing viewing angles, a normal constraint is added to the map fusion and weight initialization. To achieve continuous and stable time efficiency, we dynamically adjust the parameters of superpixel extraction. Experimental results on the ICL-NUIM and KITTI datasets show that local reconstruction accuracy is improved by approximately 27–43%. In addition, the system runs in real time at more than 15 Hz using only CPU computation, an improvement of approximately 13%.
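
The abstract does not give the fusion rule itself, so the sketch below is only one plausible form of a normal-constrained surfel update (the data layout, thresholds, and function names are our assumptions): the confidence weight is initialized from how frontally a surfel is observed, and fusion is both gated and weighted by normal agreement, which is what suppresses blurring from oblique views.

```python
import numpy as np

def init_weight(normal, view_dir, w_max=1.0):
    """Hypothetical initial confidence: full for head-on observations,
    near zero for grazing ones. view_dir points camera -> surfel."""
    return w_max * max(-float(np.dot(normal, view_dir)), 0.0)

def fuse_surfel(pos, nrm, w, pos_new, nrm_new, w_new, max_angle_deg=30.0):
    """Weighted running average of a surfel, gated by a normal-agreement
    test so that inconsistent viewing angles do not blur the map."""
    if np.dot(nrm, nrm_new) < np.cos(np.radians(max_angle_deg)):
        return pos, nrm, w                     # reject inconsistent update
    w_sum = w + w_new
    pos = (w * pos + w_new * pos_new) / w_sum  # position: weighted mean
    nrm = w * nrm + w_new * nrm_new            # normal: weighted mean,
    return pos, nrm / np.linalg.norm(nrm), w_sum  # renormalized
```

The exact thresholds and weighting in the paper may well differ; the point is that the normal test rejects, and the cosine weight down-weights, the oblique observations that would otherwise be averaged in.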

15 pages, 1283 KiB  
Article
WPL-Based Constraint for 3D Human Pose Estimation from a Single Depth Image
by Huiqin Xing and Jianyu Yang
Sensors 2022, 22(23), 9040; https://doi.org/10.3390/s22239040 - 22 Nov 2022
Viewed by 1717
Abstract
Three-dimensional human pose estimation from depth maps is a fast-growing research area in computer vision. The distal joints of the human body are more flexible than the proximal joints, which makes them harder to estimate, yet most existing methods ignore the difference between distal and proximal joints. Moreover, a distal joint can be constrained by the proximal joints on the same kinematic chain. In our work, we model the human skeleton as a tree structure called the human-tree. Then, motivated by the weighted path length (WPL) of a tree in data structures, we propose a WPL-based loss function that constrains the distal joints with the proximal joints in a global-to-local manner. Extensive experiments on benchmarks demonstrate that our method effectively improves the performance of the distal joints.
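
In data structures, the weighted path length of a tree is the sum over nodes of node weight times path depth. The abstract does not state the exact loss, so the PyTorch sketch below is one plausible reading (the kinematic tree and the weighting scheme are illustrative, not the paper's): each joint's error is weighted by its path length from the root of the human-tree, so distal joints, which inherit the errors of their proximal ancestors, are penalized more.

```python
import torch

# Illustrative kinematic tree: joint index -> parent index (root = -1).
PARENTS = [-1, 0, 1, 2, 0, 4, 5, 0, 7, 8]

def path_depths(parents):
    """Path length of each joint from the root (root joint has depth 1)."""
    depths = []
    for j in range(len(parents)):
        d, k = 1, j
        while parents[k] != -1:
            d, k = d + 1, parents[k]
        depths.append(d)
    return torch.tensor(depths, dtype=torch.float32)

def wpl_loss(pred, gt, parents=PARENTS):
    """WPL-style pose loss: depth-weighted per-joint L2 error.

    pred, gt: (B, J, 3) joint coordinates. A fingertip error costs more
    than an equal shoulder error, reflecting the global-to-local
    constraint the abstract describes.
    """
    w = path_depths(parents).to(pred.device)        # (J,)
    err = torch.linalg.norm(pred - gt, dim=-1)      # (B, J)
    return (err * w).sum(dim=-1).mean() / w.sum()
```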

17 pages, 9936 KiB  
Article
Fast and Accurate Pose Estimation with Unknown Focal Length Using Line Correspondences
by Kai Guo, Zhixiang Zhang, Zhongsen Zhang, Ye Tian and Honglin Chen
Sensors 2022, 22(21), 8253; https://doi.org/10.3390/s22218253 - 28 Oct 2022
Cited by 4 | Viewed by 2600
Abstract
Estimating camera pose is one of the key steps in computer vision, photogrammetry, and SLAM (Simultaneous Localization and Mapping). It is mainly computed from 2D–3D feature correspondences, including 2D–3D point and line correspondences. If a zoom lens is used, the focal length must be estimated simultaneously. In this paper, a new method for fast and accurate pose estimation with unknown focal length using two 2D–3D line correspondences and the camera position is proposed. Our core contribution is to convert the PnL (perspective-n-line) problem with 2D–3D line correspondences into an estimation problem with 3D–3D point correspondences. A 3D line and the camera position define a plane in the world frame, while the 2D projection of that line and the camera position define a plane in the camera frame; the two are in fact the same plane, which is the key geometric fact underlying our estimation of focal length and pose. We establish the transform between the normal vectors of the two planes using this fact, and this transform can be regarded as the camera projection of a 3D point. The pose estimation problem with 2D–3D line correspondences is thus converted into one with 3D–3D point correspondences in intermediate frames, so the pose can be computed quickly. In addition, using the fact that the angle between two planes is the same in the camera frame and the world frame, the camera focal length can be estimated quickly and accurately. Experimental results show that our proposed method performs well in terms of numerical stability, noise sensitivity, and computational speed on synthetic data and in real scenarios, and is strongly robust to camera position noise.
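
The plane-normal construction is easy to sketch. Assuming pixel endpoints for each image line and a bracketing interval for the focal length (the solver, names, and bracket below are our assumptions, not the authors' code), the normal of each interpretation plane in the camera frame is the cross product of the two back-projected endpoint rays, and the angle-invariance constraint becomes a one-dimensional root-finding problem in f:

```python
import numpy as np

def plane_normal_cam(p1, p2, pp, f):
    """Unit normal of the interpretation plane spanned by the optical
    centre and an image line with pixel endpoints p1, p2, for focal f."""
    pp = np.asarray(pp, float)
    r1 = np.append(np.asarray(p1, float) - pp, f)   # back-projected rays
    r2 = np.append(np.asarray(p2, float) - pp, f)
    n = np.cross(r1, r2)
    return n / np.linalg.norm(n)

def solve_focal(line1, line2, pp, cos_world, f_lo=100.0, f_hi=10000.0):
    """Focal length from the angle-invariance constraint: the angle
    between the two interpretation planes in the camera frame equals
    the (known) angle between the corresponding world-frame planes.
    Simple bisection; assumes one sign change in [f_lo, f_hi]."""
    def g(f):
        n1 = plane_normal_cam(*line1, pp, f)
        n2 = plane_normal_cam(*line2, pp, f)
        return abs(n1 @ n2) - abs(cos_world)        # abs: normal sign is free
    lo, hi = f_lo, f_hi
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

Once f is known, each pair of corresponding plane normals plays the role of a 3D–3D point pair, and the rotation follows from a standard SVD alignment, matching the reduction the abstract describes.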
