
Imaging Depth Sensors—Sensors, Algorithms and Applications

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Physical Sensors".

Deadline for manuscript submissions: closed (27 October 2017) | Viewed by 182438

Special Issue Editors


Guest Editor
University of Siegen, Institute for Vision and Graphics, Computer Graphics Group, 57076 Siegen, Germany
Interests: computer graphics; computer vision based on ToF sensors; graphics hardware based visualization

Guest Editor
University of Bonn, Institute of Computer Science III, Computer Vision Group, Office A.114, Römerstraße 164, 53117 Bonn, Germany
Interests: action recognition; human pose estimation; object detection

Guest Editor
Chronoptics Limited, PO Box 19502, Hamilton 3244, New Zealand
Interests: ToF imaging systems; hardware; instrumentation; application

Guest Editor
School of Engineering, The University of Waikato, Private Bag 3105, Hamilton, New Zealand
Interests: time-of-flight range image metrology; image processing; statistical modelling; numerical methods

Special Issue Information

Dear Colleagues,

Range sensing has witnessed tremendous improvements over the last decade. Three-dimensional vision systems with radically improved characteristics have emerged, based on classical approaches such as stereo vision and structured light on the one hand, and on novel range imaging techniques such as Time-of-Flight (ToF) on the other. These sensors make full-range 3D data available at interactive frame rates and have thus opened the door to highly improved 3D vision systems and novel applications.

This Special Issue focuses on three main categories of topics related to range sensors:

  1. Range sensor development and improvement: Novel active, passive, hybrid and/or multi-modal 3D imaging methods. Subtopics of interest include, but are not limited to:
    • ToF sensor improvement.
    • Novel ToF range imaging optical and operational techniques related to, e.g.,
      • AMCW, FMCW, and homodyne/heterodyne modulation.
      • Multi-path suppression, super-resolution, and motion blur mitigation.
    • 3D single photon imaging.
    • Improvements to structured-light range sensors.
    • Characterisation of range image sensors, including sensor performance and data quality metrics.
  2. Range data processing: (Generic) pre-processing of range data, including calibration, data fusion, and accumulation of range sensor data, potentially combined with other modalities. Subtopics of interest include, but are not limited to:
    • Calibration.
    • Multi-path interference correction.
    • Motion blur correction.
    • Improved range extraction from structured-light and stereo cameras.
    • 3D sensor data fusion.
    • Models for representing object shape, appearance and dynamics.
    • Filtering and super-resolution.
  3. Novel applications and systems, which leverage 3D imaging sensors. Subtopics of interest include, but are not limited to:
    • Range cameras for transient imaging or other inverse imaging problems.
    • 3D object and shape detection.
    • High quality online scene reconstruction.
    • Scene understanding.
    • Alternate ToF sensor applications, e.g., diffuse optical tomography and fluorescent lifetime imaging.

Across these categories, novel results of theoretical and practical significance for range sensors and 3D range imaging will be considered.

Prof. Dr. Andreas Kolb
Prof. Dr. Jürgen Gall
Dr. Adrian A. Dorrington
Dr. Lee Streeter
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • RGB-D camera
  • LIDAR
  • ToF range imaging
  • Stereo vision
  • Structured light
  • 3D optical flow
  • 2D optical flow on range data
  • Doppler LiDAR imaging
  • Noise modelling
  • Error correction, calibration, and cancellation
  • Range imaging filtering
  • Multi-path
  • 3D data-fusion
  • Spatial and/or temporal modulation for 3D imaging

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (25 papers)


Research

23 pages, 3219 KiB  
Article
A Fast and Robust Extrinsic Calibration for RGB-D Camera Networks
by Po-Chang Su, Ju Shen, Wanxin Xu, Sen-Ching S. Cheung and Ying Luo
Sensors 2018, 18(1), 235; https://doi.org/10.3390/s18010235 - 15 Jan 2018
Cited by 47 | Viewed by 8370
Abstract
From object tracking to 3D reconstruction, RGB-Depth (RGB-D) camera networks play an increasingly important role in many vision and graphics applications. Practical applications often use sparsely-placed cameras to maximize visibility, while using as few cameras as possible to minimize cost. In general, it is challenging to calibrate sparse camera networks due to the lack of shared scene features across different camera views. In this paper, we propose a novel algorithm that can accurately and rapidly calibrate the geometric relationships across an arbitrary number of RGB-D cameras on a network. Our work has a number of novel features. First, to cope with the wide separation between different cameras, we establish view correspondences by using a spherical calibration object. We show that this approach outperforms other techniques based on planar calibration objects. Second, instead of modeling camera extrinsic calibration using rigid transformation, which is optimal only for pinhole cameras, we systematically test different view transformation functions including rigid transformation, polynomial transformation and manifold regression to determine the most robust mapping that generalizes well to unseen data. Third, we reformulate the celebrated bundle adjustment procedure to minimize the global 3D reprojection error so as to fine-tune the initial estimates. Finally, our scalable client-server architecture is computationally efficient: the calibration of a five-camera system, including data capture, can be done in minutes using only commodity PCs. Our proposed framework is compared with other state-of-the-arts systems using both quantitative measurements and visual alignment results of the merged point clouds. Full article
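As a point of reference for the pairwise step of such a calibration, the rigid transform between two cameras can be estimated from corresponding 3D points (e.g., sphere centres seen by both depth cameras) with the standard SVD-based Kabsch solution. The sketch below is a minimal illustration with synthetic data; it is not the authors' pipeline, which also evaluates polynomial and manifold mappings and refines the result with bundle adjustment.

```python
import numpy as np

def rigid_transform_svd(src, dst):
    """Least-squares rigid transform (R, t) such that dst ≈ R @ src + t.

    src, dst: (N, 3) arrays of corresponding 3D points, e.g. sphere
    centres observed by two RGB-D cameras. Standard Kabsch/SVD solution.
    """
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                                   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                                    # guard against reflections
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Synthetic check: recover a known transform from noisy correspondences.
rng = np.random.default_rng(0)
R_true, _ = np.linalg.qr(rng.normal(size=(3, 3)))
R_true *= np.sign(np.linalg.det(R_true))                  # ensure det(R) = +1
t_true = np.array([0.5, -1.2, 2.0])
pts = rng.uniform(-1.0, 1.0, size=(30, 3))
obs = pts @ R_true.T + t_true + 0.001 * rng.normal(size=pts.shape)
R_est, t_est = rigid_transform_svd(pts, obs)
print(np.allclose(R_est, R_true, atol=1e-2), np.round(t_est, 3))
```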

14 pages, 3082 KiB  
Article
Small Imaging Depth LIDAR and DCNN-Based Localization for Automated Guided Vehicle
by Seigo Ito, Shigeyoshi Hiratsuka, Mitsuhiko Ohta, Hiroyuki Matsubara and Masaru Ogawa
Sensors 2018, 18(1), 177; https://doi.org/10.3390/s18010177 - 10 Jan 2018
Cited by 30 | Viewed by 10375
Abstract
We present our third prototype sensor and a localization method for Automated Guided Vehicles (AGVs), for which small imaging LIght Detection and Ranging (LIDAR) and fusion-based localization are fundamentally important. Our small imaging LIDAR, named the Single-Photon Avalanche Diode (SPAD) LIDAR, uses a time-of-flight method and SPAD arrays. A SPAD is a highly sensitive photodetector capable of detecting at the single-photon level, and the SPAD LIDAR has two SPAD arrays on the same chip for detection of laser light and environmental light. Therefore, the SPAD LIDAR simultaneously outputs range image data and monocular image data with the same coordinate system and does not require external calibration among outputs. As AGVs travel both indoors and outdoors with vibration, this calibration-less structure is particularly useful for AGV applications. We also introduce a fusion-based localization method, named SPAD DCNN, which uses the SPAD LIDAR and employs a Deep Convolutional Neural Network (DCNN). SPAD DCNN can fuse the outputs of the SPAD LIDAR: range image data, monocular image data and peak intensity image data. The SPAD DCNN has two outputs: the regression result of the position of the SPAD LIDAR and the classification result of the existence of a target to be approached. Our third prototype sensor and the localization method are evaluated in an indoor environment by assuming various AGV trajectories. The results show that the sensor and localization method improve the localization accuracy. Full article

19 pages, 18387 KiB  
Article
A Weld Position Recognition Method Based on Directional and Structured Light Information Fusion in Multi-Layer/Multi-Pass Welding
by Jinle Zeng, Baohua Chang, Dong Du, Li Wang, Shuhe Chang, Guodong Peng and Wenzhu Wang
Sensors 2018, 18(1), 129; https://doi.org/10.3390/s18010129 - 5 Jan 2018
Cited by 38 | Viewed by 7069
Abstract
Multi-layer/multi-pass welding (MLMPW) technology is widely used in the energy industry to join thick components. During automatic welding using robots or other actuators, it is very important to recognize the actual weld pass position using visual methods, which can then be used not only to perform reasonable path planning for actuators, but also to correct any deviations between the welding torch and the weld pass position in real time. However, due to the small geometrical differences between adjacent weld passes, existing weld position recognition technologies such as structured light methods are not suitable for weld position detection in MLMPW. This paper proposes a novel method for weld position detection, which fuses various kinds of information in MLMPW. First, a synchronous acquisition method is developed to obtain various kinds of visual information when directional light and structured light sources are on, respectively. Then, interferences are eliminated by fusing adjacent images. Finally, the information from directional and structured light images is fused to obtain the 3D positions of the weld passes. Experiment results show that each process can be done in 30 ms and the deviation is less than 0.6 mm. The proposed method can be used for automatic path planning and seam tracking in the robotic MLMPW process as well as electron beam freeform fabrication process. Full article

23861 KiB  
Article
Quantified, Interactive Simulation of AMCW ToF Camera Including Multipath Effects
by David Bulczak, Martin Lambers and Andreas Kolb
Sensors 2018, 18(1), 13; https://doi.org/10.3390/s18010013 - 22 Dec 2017
Cited by 19 | Viewed by 7687
Abstract
In the last decade, Time-of-Flight (ToF) range cameras have gained increasing popularity in robotics, automotive industry, and home entertainment. Despite technological developments, ToF cameras still suffer from error sources such as multipath interference or motion artifacts. Thus, simulation of ToF cameras, including these artifacts, is important to improve camera and algorithm development. This paper presents a physically-based, interactive simulation technique for amplitude modulated continuous wave (AMCW) ToF cameras, which, among other error sources, includes single bounce indirect multipath interference based on an enhanced image-space approach. The simulation accounts for physical units down to the charge level accumulated in sensor pixels. Furthermore, we present the first quantified comparison for ToF camera simulators. We present bidirectional reference distribution function (BRDF) measurements for selected, purchasable materials in the near-infrared (NIR) range, craft real and synthetic scenes out of these materials and quantitatively compare the range sensor data. Full article
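For context, the quantity such a simulator must ultimately reproduce is the standard four-tap AMCW demodulation, which turns the four phase-stepped correlation samples of a pixel into phase, amplitude and range. A minimal sketch under the common sample model A_k = B + A·cos(φ + k·π/2); the modulation frequency and tap values below are made up for illustration.

```python
import numpy as np

C = 299_792_458.0   # speed of light [m/s]

def amcw_four_tap(a0, a1, a2, a3, f_mod):
    """Standard 4-tap AMCW demodulation, samples A_k = B + A*cos(phi + k*pi/2).

    Returns (range_m, amplitude, offset); inputs may be scalars or arrays.
    """
    phase = np.arctan2(a3 - a1, a0 - a2) % (2.0 * np.pi)  # wrapped phase
    amplitude = 0.5 * np.hypot(a3 - a1, a0 - a2)
    offset = 0.25 * (a0 + a1 + a2 + a3)
    range_m = C * phase / (4.0 * np.pi * f_mod)           # ambiguity at c / (2 f)
    return range_m, amplitude, offset

# Synthetic pixel: 20 MHz modulation, target at 3.0 m, made-up tap values.
f = 20e6
true_phase = 4.0 * np.pi * f * 3.0 / C
taps = [100.0 + 40.0 * np.cos(true_phase + k * np.pi / 2) for k in range(4)]
print(amcw_four_tap(*taps, f)[0])   # ~3.0 m
```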

9671 KiB  
Article
Depth-Based Detection of Standing-Pigs in Moving Noise Environments
by Jinseong Kim, Yeonwoo Chung, Younchang Choi, Jaewon Sa, Heegon Kim, Yongwha Chung, Daihee Park and Hakjae Kim
Sensors 2017, 17(12), 2757; https://doi.org/10.3390/s17122757 - 29 Nov 2017
Cited by 58 | Viewed by 6376
Abstract
In a surveillance camera environment, the detection of standing-pigs in real-time is an important issue towards the final goal of 24-h tracking of individual pigs. In this study, we focus on depth-based detection of standing-pigs with “moving noises”, which appear every night in a commercial pig farm, but have not been reported yet. We first apply a spatiotemporal interpolation technique to remove the moving noises occurring in the depth images. Then, we detect the standing-pigs by utilizing the undefined depth values around them. Our experimental results show that this method is effective for detecting standing-pigs at night, in terms of both cost-effectiveness (using a low-cost Kinect depth sensor) and accuracy (i.e., 94.47%), even with severe moving noises occluding up to half of an input depth image. Furthermore, without any time-consuming technique, the proposed method can be executed in real-time. Full article
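A hedged sketch of the kind of spatiotemporal filling such preprocessing performs: undefined (zero) depth pixels are first replaced by the temporal median over neighbouring frames, and any remaining holes by a local spatial median. This is illustrative only; the paper's actual interpolation and moving-noise criteria differ in detail.

```python
import warnings
import numpy as np
from scipy.ndimage import median_filter

def fill_depth_spatiotemporal(frames, t, win=2, ksize=5):
    """Fill undefined (zero) depth values in frames[t].

    frames: (T, H, W) depth sequence with 0 meaning "no measurement".
    Holes are filled first with the temporal median over frames[t-win:t+win+1],
    then with a spatial median for pixels never observed in that window.
    """
    stack = frames[max(0, t - win): t + win + 1].astype(float)
    stack[stack == 0] = np.nan
    with warnings.catch_warnings():                     # all-NaN columns are fine
        warnings.simplefilter("ignore", RuntimeWarning)
        temporal = np.nanmedian(stack, axis=0)

    out = frames[t].astype(float)
    holes = out == 0
    out[holes] = np.nan_to_num(temporal[holes])         # temporal fill
    still = out == 0                                    # pixels never observed
    spatial = median_filter(out, size=ksize)
    out[still] = spatial[still]                         # spatial fallback
    return out

# Toy sequence: constant depth with random dropouts in every frame.
rng = np.random.default_rng(1)
seq = 1500.0 + rng.normal(0.0, 5.0, (5, 64, 64))
seq[rng.random(seq.shape) < 0.2] = 0
filled = fill_depth_spatiotemporal(seq, t=2)
print(int((seq[2] == 0).sum()), "holes before,", int((filled == 0).sum()), "after")
```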

5221 KiB  
Article
Monocular Visual-Inertial SLAM: Continuous Preintegration and Reliable Initialization
by Yi Liu, Zhong Chen, Wenjuan Zheng, Hao Wang and Jianguo Liu
Sensors 2017, 17(11), 2613; https://doi.org/10.3390/s17112613 - 14 Nov 2017
Cited by 19 | Viewed by 6557
Abstract
In this paper, we propose a new visual-inertial Simultaneous Localization and Mapping (SLAM) algorithm. With the tightly coupled sensor fusion of a global shutter monocular camera and a low-cost Inertial Measurement Unit (IMU), this algorithm is able to achieve robust and real-time estimates of the sensor poses in unknown environment. To address the real-time visual-inertial fusion problem, we present a parallel framework with a novel IMU initialization method. Our algorithm also benefits from the novel IMU factor, the continuous preintegration method, the vision factor of directional error, the separability trick and the robust initialization criterion which can efficiently output reliable estimates in real-time on modern Central Processing Unit (CPU). Tremendous experiments also validate the proposed algorithm and prove it is comparable to the state-of-art method. Full article
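For readers unfamiliar with preintegration, the quantity computed between two keyframes is the relative rotation/velocity/position delta integrated directly from gyroscope and accelerometer samples, independent of the absolute state. A minimal discrete sketch; biases, noise propagation and the paper's continuous formulation are omitted.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def preintegrate(gyro, acc, dt):
    """Discrete IMU preintegration over one keyframe interval.

    gyro, acc: (N, 3) body-frame angular rate [rad/s] and specific force
    [m/s^2]; dt: sample period [s]. Returns the deltas (dR, dv, dp) expressed
    in the body frame at the start of the interval; biases and noise ignored.
    """
    dR = np.eye(3)
    dv = np.zeros(3)
    dp = np.zeros(3)
    for w, a in zip(gyro, acc):
        a_start = dR @ a                      # acceleration in the start frame
        dp += dv * dt + 0.5 * a_start * dt**2
        dv += a_start * dt
        dR = dR @ R.from_rotvec(w * dt).as_matrix()
    return dR, dv, dp

# Toy interval: 1 s at 200 Hz, constant yaw rate and forward acceleration.
n, dt = 200, 1.0 / 200.0
gyro = np.tile([0.0, 0.0, 0.3], (n, 1))       # 0.3 rad/s about body z
acc = np.tile([0.2, 0.0, 0.0], (n, 1))        # 0.2 m/s^2 along body x
dR, dv, dp = preintegrate(gyro, acc, dt)
print(np.round(dv, 3), np.round(dp, 3))
```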

3935 KiB  
Article
Single-Shot Dense Depth Sensing with Color Sequence Coded Fringe Pattern
by Fu Li, Baoyu Zhang, Guangming Shi, Yi Niu, Ruodai Li, Lili Yang and Xuemei Xie
Sensors 2017, 17(11), 2558; https://doi.org/10.3390/s17112558 - 6 Nov 2017
Cited by 8 | Viewed by 4426
Abstract
A single-shot structured light method is widely used to acquire dense and accurate depth maps for dynamic scenes. In this paper, we propose a color sequence coded fringe depth sensing method. To overcome the phase unwrapping problem encountered in phase-based methods, the color-coded sequence information is embedded into the phase information. We adopt the color-encoded De Bruijn sequence to denote the period of the phase information and assign the sequence into two channels of the pattern, while the third channel is used to code the phase information. Benefiting from this coding strategy, the phase information distributed in multiple channels can improve the quality of the phase intensity by channel overlay, which results in precise phase estimation. Meanwhile, the wrapped phase period assists the sequence decoding to obtain a precise period order. To evaluate the performance of the proposed method, an experimental platform is established. Quantitative and qualitative experiments demonstrate that the proposed method generates a higher precision depth, as compared to a Kinect and larger resolution ToF (Time of Flight) camera. Full article
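For reference, the wrapped phase carried by the fringe channel is conventionally recovered from three patterns shifted by 120°; a minimal sketch under that standard assumption (the paper's exact pattern design differs).

```python
import numpy as np

def wrapped_phase_3step(i1, i2, i3):
    """Wrapped phase from three fringe images shifted by -120°, 0°, +120°:
    I_k = A + B*cos(phi + k*2*pi/3) for k = -1, 0, +1. Returns values in
    (-pi, pi]; a period (fringe-order) code is still needed to unwrap."""
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

# Synthetic fringes: check that the true phase is recovered exactly.
phi = np.linspace(-3.0, 3.0, 9)
a, b = 120.0, 80.0
i1 = a + b * np.cos(phi - 2.0 * np.pi / 3.0)
i2 = a + b * np.cos(phi)
i3 = a + b * np.cos(phi + 2.0 * np.pi / 3.0)
print(np.allclose(wrapped_phase_3step(i1, i2, i3), phi))
```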

5088 KiB  
Article
Global Calibration of Multi-Cameras Based on Refractive Projection and Ray Tracing
by Mingchi Feng, Xiang Jia, Jingshu Wang, Song Feng and Taixiong Zheng
Sensors 2017, 17(11), 2494; https://doi.org/10.3390/s17112494 - 31 Oct 2017
Cited by 13 | Viewed by 7160
Abstract
Multi-camera systems are widely applied in the three dimensional (3D) computer vision, especially when multiple cameras are distributed on both sides of the measured object. The calibration methods of multi-camera systems are critical to the accuracy of vision measurement and the key is to find an appropriate calibration target. In this paper, a high-precision camera calibration method for multi-camera systems based on transparent glass checkerboards and ray tracing is described, and is used to calibrate multiple cameras distributed on both sides of the glass checkerboard. Firstly, the intrinsic parameters of each camera are obtained by Zhang’s calibration method. Then, multiple cameras capture several images from the front and back of the glass checkerboard with different orientations, and all images contain distinct grid corners. As the cameras on one side are not affected by the refraction of glass checkerboard, extrinsic parameters can be directly calculated. However, the cameras on the other side are influenced by the refraction of glass checkerboard, and the direct use of projection model will produce a calibration error. A multi-camera calibration method using refractive projection model and ray tracing is developed to eliminate this error. Furthermore, both synthetic and real data are employed to validate the proposed approach. The experimental results of refractive calibration show that the error of the 3D reconstruction is smaller than 0.2 mm, the relative errors of both rotation and translation are less than 0.014%, and the mean and standard deviation of reprojection error of the four-camera system are 0.00007 and 0.4543 pixels, respectively. The proposed method is flexible, highly accurate, and simple to carry out. Full article
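The refractive projection model rests on tracing rays through the glass plate with Snell's law in vector form; a minimal sketch with illustrative refractive indices, showing why a ray that crosses a parallel-faced plate keeps its direction but is laterally offset.

```python
import numpy as np

def refract(d, n, n1, n2):
    """Snell's law in vector form: refract unit direction d at a surface with
    unit normal n pointing toward the incident medium (dot(n, d) < 0).
    n1, n2 are the refractive indices of the incident and transmitted media.
    Returns the transmitted direction, or None for total internal reflection."""
    d = d / np.linalg.norm(d)
    n = n / np.linalg.norm(n)
    eta = n1 / n2
    cos_i = -np.dot(n, d)
    sin_t2 = eta**2 * (1.0 - cos_i**2)
    if sin_t2 > 1.0:
        return None
    return eta * d + (eta * cos_i - np.sqrt(1.0 - sin_t2)) * n

# A ray entering (air -> glass) and leaving (glass -> air) a parallel-faced
# plate exits parallel to its entry direction, only laterally offset; this
# offset is what a refractive projection model corrects for.
d0 = np.array([0.3, 0.0, 1.0]) / np.linalg.norm([0.3, 0.0, 1.0])
n_front = np.array([0.0, 0.0, -1.0])          # both faces seen from the camera side
d_in_glass = refract(d0, n_front, 1.0, 1.5)   # illustrative indices
d_out = refract(d_in_glass, n_front, 1.5, 1.0)
print(np.allclose(d_out, d0))
```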

12931 KiB  
Article
Modified Gray-Level Coding Method for Absolute Phase Retrieval
by Xiangcheng Chen, Shunping Chen, Jie Luo, Mengchao Ma, Yuwei Wang, Yajun Wang and Lei Chen
Sensors 2017, 17(10), 2383; https://doi.org/10.3390/s17102383 - 19 Oct 2017
Cited by 19 | Viewed by 4577
Abstract
Fringe projection systems have been widely applied in three-dimensional (3D) shape measurements. One of the important issues is how to retrieve the absolute phase. This paper presents a modified gray-level coding method for absolute phase retrieval. Specifically, two groups of fringe patterns are projected onto the measured objects, including three phase-shift patterns for the wrapped phase, and three n-ary gray-level (nGL) patterns for the fringe order. Compared with the binary gray-level (bGL) method which just uses two intensity values, the nGL method can generate many more unique codewords with multiple intensity values. With assistance from the average intensity and modulation of phase-shift patterns, the intensities of nGL patterns are normalized to deal with ambient light and surface contrast. To reduce the codeword detection errors caused by camera/projector defocus, nGL patterns are designed as n-ary gray-code (nGC) patterns to ensure that at most, one code changes at each point. Experiments verify the robustness and effectiveness of the proposed method to measure isolated objects with complex surfaces. Full article
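A compact sketch of n-ary Gray coding of the fringe order: consecutive codewords differ in exactly one digit, the property the nGC patterns exploit to limit defocus-induced decoding errors. The encode/decode pair below uses the standard modular construction and is not taken from the paper.

```python
import numpy as np

def to_gray(digits, n):
    """Modular n-ary Gray code, most significant digit first: consecutive
    integers map to codewords that differ in exactly one digit."""
    g = list(digits)
    for i in range(len(g) - 1, 0, -1):
        g[i] = (g[i] - g[i - 1]) % n
    return g

def from_gray(gray, n):
    d = list(gray)
    for i in range(1, len(d)):
        d[i] = (d[i] + d[i - 1]) % n
    return d

def int_to_digits(x, n, width):
    return [(x // n ** (width - 1 - i)) % n for i in range(width)]

# 4 grey levels, 3 pattern images -> 64 fringe orders; verify the
# single-digit-change property and the encode/decode round trip.
n, width = 4, 3
codes = [to_gray(int_to_digits(x, n, width), n) for x in range(n ** width)]
diffs = [sum(a != b for a, b in zip(codes[i], codes[i + 1]))
         for i in range(len(codes) - 1)]
assert set(diffs) == {1}
assert all(from_gray(codes[x], n) == int_to_digits(x, n, width)
           for x in range(n ** width))
print("first codewords:", codes[:5])
```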

6412 KiB  
Article
Nighttime Foreground Pedestrian Detection Based on Three-Dimensional Voxel Surface Model
by Jing Li, Fangbing Zhang, Lisong Wei, Tao Yang and Zhaoyang Lu
Sensors 2017, 17(10), 2354; https://doi.org/10.3390/s17102354 - 16 Oct 2017
Cited by 14 | Viewed by 6624
Abstract
Pedestrian detection is among the most frequently-used preprocessing tasks in many surveillance application fields, from low-level people counting to high-level scene understanding. Even though many approaches perform well in the daytime with sufficient illumination, pedestrian detection at night is still a critical and challenging problem for video surveillance systems. To respond to this need, in this paper, we provide an affordable solution with a near-infrared stereo network camera, as well as a novel three-dimensional foreground pedestrian detection model. Specifically, instead of using an expensive thermal camera, we build a near-infrared stereo vision system with two calibrated network cameras and near-infrared lamps. The core of the system is a novel voxel surface model, which is able to estimate the dynamic changes of three-dimensional geometric information of the surveillance scene and to segment and locate foreground pedestrians in real time. A free update policy for unknown points is designed for model updating, and the extracted shadow of the pedestrian is adopted to remove foreground false alarms. To evaluate the performance of the proposed model, the system is deployed in several nighttime surveillance scenes. Experimental results demonstrate that our method is capable of nighttime pedestrian segmentation and detection in real time under heavy occlusion. In addition, the qualitative and quantitative comparison results show that our work outperforms classical background subtraction approaches and a recent RGB-D method, as well as achieving comparable performance with the state-of-the-art deep learning pedestrian detection method even with a much lower hardware cost. Full article

6470 KiB  
Article
Extrinsic Calibration of Camera and 2D Laser Sensors without Overlap
by Khalil M. Ahmad Yousef, Bassam J. Mohd, Khalid Al-Widyan and Thaier Hayajneh
Sensors 2017, 17(10), 2346; https://doi.org/10.3390/s17102346 - 14 Oct 2017
Cited by 24 | Viewed by 10556
Abstract
Extrinsic calibration of a camera and a 2D laser range finder (lidar) is crucial in sensor data fusion applications, for example in SLAM algorithms used on mobile robot platforms. The fundamental challenge of extrinsic calibration arises when the camera and lidar sensors do not overlap or share the same field of view. In this paper we propose a novel and flexible approach for the extrinsic calibration of a camera-lidar system without overlap, which can be used for robotic platform self-calibration. The approach is based on the robot–world hand–eye calibration (RWHE) problem, which is proven to have efficient and accurate solutions. First, the system was mapped to the RWHE calibration problem modeled as the linear relationship AX = ZB, where X and Z are unknown calibration matrices. Then, we computed the transformation matrix B, which was the main challenge in the above mapping. The computation is based on reasonable assumptions about the geometric structure in the calibration environment. The reliability and accuracy of the proposed approach are compared to a state-of-the-art method in extrinsic 2D lidar to camera calibration. Experimental results from real datasets indicate that the proposed approach provides better results, with L2-norm translational and rotational deviations of 314 mm and 0.12°, respectively. Full article

9364 KiB  
Article
CuFusion: Accurate Real-Time Camera Tracking and Volumetric Scene Reconstruction with a Cuboid
by Chen Zhang and Yu Hu
Sensors 2017, 17(10), 2260; https://doi.org/10.3390/s17102260 - 1 Oct 2017
Cited by 8 | Viewed by 5800
Abstract
Given a stream of depth images with a known cuboid reference object present in the scene, we propose a novel approach for accurate camera tracking and volumetric surface reconstruction in real-time. Our contribution in this paper is threefold: (a) utilizing a priori knowledge of the precisely manufactured cuboid reference object, we keep drift-free camera tracking without explicit global optimization; (b) we improve the fineness of the volumetric surface representation by proposing a prediction-corrected data fusion strategy rather than a simple moving average, which enables accurate reconstruction of high-frequency details such as the sharp edges of objects and geometries of high curvature; (c) we introduce a benchmark dataset CU3D that contains both synthetic and real-world scanning sequences with ground-truth camera trajectories and surface models for the quantitative evaluation of 3D reconstruction algorithms. We test our algorithm on our dataset and demonstrate its accuracy compared with other state-of-the-art algorithms. We release both our dataset and code as open-source (https://github.com/zhangxaochen/CuFusion) for other researchers to reproduce and verify our results. Full article
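For contrast, the baseline that the prediction-corrected strategy improves on is the simple running weighted average used in KinectFusion-style TSDF integration; a minimal per-voxel sketch with a synthetic 1D grid.

```python
import numpy as np

def tsdf_update(tsdf, weight, sdf_obs, w_obs=1.0, w_max=64.0):
    """Per-voxel running weighted average of truncated signed distances.

    tsdf, weight: current voxel grids; sdf_obs: truncated signed distance of
    each voxel to the surface seen in the new frame (NaN = not observed).
    """
    valid = ~np.isnan(sdf_obs)
    w_new = np.where(valid, weight + w_obs, weight)
    fused = np.where(valid,
                     (weight * tsdf + w_obs * sdf_obs) / np.maximum(w_new, 1e-9),
                     tsdf)
    return fused, np.minimum(w_new, w_max)

# Toy 1D grid: 50 noisy observations of a surface crossing x = 0.
rng = np.random.default_rng(0)
x = np.linspace(-0.1, 0.1, 5)                     # voxel centres [m]
tsdf, weight = np.zeros(5), np.zeros(5)
for _ in range(50):
    obs = np.clip(x + rng.normal(0.0, 0.01), -0.05, 0.05)   # truncation band
    tsdf, weight = tsdf_update(tsdf, weight, obs)
print(np.round(tsdf, 3))                          # fused SDF ramps through zero
```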

14504 KiB  
Article
Vision-Based Real-Time Traversable Region Detection for Mobile Robot in the Outdoors
by Fucheng Deng, Xiaorui Zhu and Chao He
Sensors 2017, 17(9), 2101; https://doi.org/10.3390/s17092101 - 13 Sep 2017
Cited by 18 | Viewed by 5603
Abstract
Environment perception is essential for autonomous mobile robots in human-robot coexisting outdoor environments. One of the important tasks for such intelligent robots is to autonomously detect the traversable region in an unstructured 3D real world. The main drawback of most existing methods is that of high computational complexity. Hence, this paper proposes a binocular vision-based, real-time solution for detecting traversable region in the outdoors. In the proposed method, an appearance model based on multivariate Gaussian is quickly constructed from a sample region in the left image adaptively determined by the vanishing point and dominant borders. Then, a fast, self-supervised segmentation scheme is proposed to classify the traversable and non-traversable regions. The proposed method is evaluated on public datasets as well as a real mobile robot. Implementation on the mobile robot has shown its ability in the real-time navigation applications. Full article
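A hedged sketch of the appearance-model step alone: fit a multivariate Gaussian to pixels from a sample region assumed traversable and classify the rest of the image by Mahalanobis distance. The vanishing-point sampling, dominant borders and self-supervised segmentation of the paper are not reproduced here.

```python
import numpy as np

def fit_gaussian(pixels):
    """Multivariate Gaussian of (N, 3) colour samples from the sample region."""
    mean = pixels.mean(axis=0)
    cov = np.cov(pixels, rowvar=False) + 1e-6 * np.eye(3)
    return mean, np.linalg.inv(cov)

def traversable_mask(image, mean, cov_inv, thresh=3.0):
    """Pixels whose Mahalanobis distance to the road model is below thresh."""
    diff = image.reshape(-1, 3) - mean
    d2 = np.einsum('ni,ij,nj->n', diff, cov_inv, diff)
    return (np.sqrt(d2) < thresh).reshape(image.shape[:2])

# Toy image: grey "non-road" upper half, greenish "road" lower half.
rng = np.random.default_rng(0)
img = np.empty((60, 80, 3))
img[:30] = 128.0 + rng.normal(0.0, 5.0, (30, 80, 3))
img[30:] = np.array([60.0, 140.0, 70.0]) + rng.normal(0.0, 5.0, (30, 80, 3))
mean, cov_inv = fit_gaussian(img[50:, 20:60].reshape(-1, 3))   # sample region
mask = traversable_mask(img, mean, cov_inv)
print(round(mask[:30].mean(), 2), round(mask[30:].mean(), 2))  # near 0 vs. near 1
```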

5650 KiB  
Article
A Robotic Platform for Corn Seedling Morphological Traits Characterization
by Hang Lu, Lie Tang, Steven A. Whitham and Yu Mei
Sensors 2017, 17(9), 2082; https://doi.org/10.3390/s17092082 - 12 Sep 2017
Cited by 24 | Viewed by 5905
Abstract
Crop breeding plays an important role in modern agriculture, improving plant performance, and increasing yield. Identifying the genes that are responsible for beneficial traits greatly facilitates plant breeding efforts for increasing crop production. However, associating genes and their functions with agronomic traits requires researchers to observe, measure, record, and analyze phenotypes of large numbers of plants, a repetitive and error-prone job if performed manually. An automated seedling phenotyping system aimed at replacing manual measurement, reducing sampling time, and increasing the allowable work time is thus highly valuable. Toward this goal, we developed an automated corn seedling phenotyping platform based on a time-of-flight of light (ToF) camera and an industrial robot arm. A ToF camera is mounted on the end effector of the robot arm. The arm positions the ToF camera at different viewpoints for acquiring 3D point cloud data. A camera-to-arm transformation matrix was calculated using a hand-eye calibration procedure and applied to transfer different viewpoints into an arm-based coordinate frame. Point cloud data filters were developed to remove the noise in the background and in the merged seedling point clouds. A 3D-to-2D projection and an x-axis pixel density distribution method were used to segment the stem and leaves. Finally, separated leaves were fitted with 3D curves for morphological traits characterization. This platform was tested on a sample of 60 corn plants at their early growth stages with between two to five leaves. The error ratios of the stem height and leave length measurements are 13.7% and 13.1%, respectively, demonstrating the feasibility of this robotic system for automated corn seedling phenotyping. Full article

1066 KiB  
Article
Improved Range Estimation Model for Three-Dimensional (3D) Range Gated Reconstruction
by Sing Yee Chua, Ningqun Guo, Ching Seong Tan and Xin Wang
Sensors 2017, 17(9), 2031; https://doi.org/10.3390/s17092031 - 5 Sep 2017
Cited by 16 | Viewed by 5019
Abstract
Accuracy is an important measure of system performance and remains a challenge in 3D range gated reconstruction despite the advancement in laser and sensor technology. The weighted average model that is commonly used for range estimation is heavily influenced by the intensity variation due to various factors. Accuracy improvement in term of range estimation is therefore important to fully optimise the system performance. In this paper, a 3D range gated reconstruction model is derived based on the operating principles of range gated imaging and time slicing reconstruction, fundamental of radiant energy, Laser Detection And Ranging (LADAR), and Bidirectional Reflection Distribution Function (BRDF). Accordingly, a new range estimation model is proposed to alleviate the effects induced by distance, target reflection, and range distortion. From the experimental results, the proposed model outperforms the conventional weighted average model to improve the range estimation for better 3D reconstruction. The outcome demonstrated is of interest to various laser ranging applications and can be a reference for future works. Full article
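The conventional estimator the paper improves upon is the intensity-weighted average of the ranges assigned to the gated time slices; a minimal sketch of that baseline with synthetic gate ranges and responses.

```python
import numpy as np

def weighted_average_range(slice_ranges, intensities):
    """Conventional range-gated estimate: intensity-weighted mean of the
    ranges assigned to the time slices (per pixel if arrays are stacked)."""
    slice_ranges = np.asarray(slice_ranges, dtype=float)
    intensities = np.asarray(intensities, dtype=float)
    return (slice_ranges * intensities).sum(axis=0) / intensities.sum(axis=0)

# One pixel, 8 gate delays spaced 0.75 m apart, target near 12.4 m; the
# gate response is an idealised Gaussian here, so intensity variation from
# reflectance or distance is exactly what biases this estimator in practice.
gates = 10.0 + 0.75 * np.arange(8)
signal = np.exp(-0.5 * ((gates - 12.4) / 0.6) ** 2)
print(round(float(weighted_average_range(gates, signal)), 2))
```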

7197 KiB  
Article
Dual Quaternions as Constraints in 4D-DPM Models for Pose Estimation
by Enrique Martinez-Berti, Antonio-José Sánchez-Salmerón and Carlos Ricolfe-Viala
Sensors 2017, 17(8), 1913; https://doi.org/10.3390/s17081913 - 19 Aug 2017
Cited by 2 | Viewed by 5112
Abstract
The goal of this research work is to improve the accuracy of human pose estimation using the Deformation Part Model (DPM) without increasing computational complexity. First, the proposed method seeks to improve pose estimation accuracy by adding the depth channel to DPM, which was formerly defined based only on red–green–blue (RGB) channels, in order to obtain a four-dimensional DPM (4D-DPM). In addition, computational complexity can be controlled by reducing the number of joints by taking it into account in a reduced 4D-DPM. Finally, complete solutions are obtained by solving the omitted joints by using inverse kinematics models. In this context, the main goal of this paper is to analyze the effect on pose estimation timing cost when using dual quaternions to solve the inverse kinematics. Full article
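For readers unfamiliar with the representation, a rigid transform can be packed into a unit dual quaternion q_r + ε·q_d with q_d = ½·(0, t)⊗q_r; a minimal construction and round-trip sketch (scipy handles the rotation part), independent of the paper's 4D-DPM formulation.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def qmul(a, b):
    """Hamilton product of quaternions in (w, x, y, z) order."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def to_dual_quat(rot, t):
    """Unit dual quaternion (q_r, q_d) of rotation `rot` (scipy Rotation)
    followed by translation t, with q_d = 0.5 * (0, t) ⊗ q_r."""
    x, y, z, w = rot.as_quat()                 # scipy stores (x, y, z, w)
    q_r = np.array([w, x, y, z])
    q_d = 0.5 * qmul(np.array([0.0, *t]), q_r)
    return q_r, q_d

def from_dual_quat(q_r, q_d):
    w, x, y, z = q_r
    q_conj = np.array([w, -x, -y, -z])
    t = 2.0 * qmul(q_d, q_conj)[1:]
    return R.from_quat([x, y, z, w]), t

# Round-trip check with an arbitrary joint transform.
rot = R.from_euler('xyz', [0.3, -0.5, 1.0])
t = np.array([0.1, 0.2, -0.4])
rot2, t2 = from_dual_quat(*to_dual_quat(rot, t))
print(np.allclose(rot.as_matrix(), rot2.as_matrix()), np.allclose(t, t2))
```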

8274 KiB  
Article
Detecting Traversable Area and Water Hazards for the Visually Impaired with a pRGB-D Sensor
by Kailun Yang, Kaiwei Wang, Ruiqi Cheng, Weijian Hu, Xiao Huang and Jian Bai
Sensors 2017, 17(8), 1890; https://doi.org/10.3390/s17081890 - 17 Aug 2017
Cited by 38 | Viewed by 8628
Abstract
The use of RGB-Depth (RGB-D) sensors for assisting visually impaired people (VIP) has been widely reported as they offer portability, function-diversity and cost-effectiveness. However, polarization cues to assist traversability awareness without precautions against stepping into water areas are weak. In this paper, a polarized RGB-Depth (pRGB-D) framework is proposed to detect traversable area and water hazards simultaneously with polarization-color-depth-attitude information to enhance safety during navigation. The approach has been tested on a pRGB-D dataset, which is built for tuning parameters and evaluating the performance. Moreover, the approach has been integrated into a wearable prototype which generates a stereo sound feedback to guide visually impaired people (VIP) follow the prioritized direction to avoid obstacles and water hazards. Furthermore, a preliminary study with ten blindfolded participants suggests its effectivity and reliability. Full article
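A minimal sketch of the polarization cue itself: Stokes parameters and degree of linear polarization (DoLP) from intensities behind polarizers at 0°, 45°, 90° and 135°, with water-like versus ground-like synthetic pixels. The paper's framework fuses this cue with colour, depth and attitude, which is not reproduced here.

```python
import numpy as np

def dolp_aolp(i0, i45, i90, i135):
    """Degree and angle of linear polarization from intensities behind ideal
    linear polarizers at 0°, 45°, 90° and 135° (linear Stokes parameters)."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)
    s1 = i0 - i90
    s2 = i45 - i135
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-9)
    aolp = 0.5 * np.arctan2(s2, s1)
    return dolp, aolp

def polarizer_image(i_tot, dolp, aolp, theta):
    """Malus-law intensity of partially polarized light behind a polarizer."""
    return 0.5 * i_tot * (1.0 + dolp * np.cos(2.0 * (theta - aolp)))

# Synthetic pixels: a water-like pixel (strongly polarized specular reflection)
# versus a ground-like pixel (nearly unpolarized).
angles = np.deg2rad([0.0, 45.0, 90.0, 135.0])
water = [polarizer_image(200.0, 0.40, np.deg2rad(30.0), a) for a in angles]
ground = [polarizer_image(200.0, 0.02, 0.0, a) for a in angles]
print(np.round(dolp_aolp(*water)[0], 2), np.round(dolp_aolp(*ground)[0], 2))
```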

11955 KiB  
Article
Monocular Stereo Measurement Using High-Speed Catadioptric Tracking
by Shaopeng Hu, Yuji Matsumoto, Takeshi Takaki and Idaku Ishii
Sensors 2017, 17(8), 1839; https://doi.org/10.3390/s17081839 - 9 Aug 2017
Cited by 27 | Viewed by 7958
Abstract
This paper presents a novel concept of real-time catadioptric stereo tracking using a single ultrafast mirror-drive pan-tilt active vision system that can simultaneously switch between hundreds of different views in a second. By accelerating video-shooting, computation, and actuation at the millisecond-granularity level for time-division multithreaded processing in ultrafast gaze control, the active vision system can function virtually as two or more tracking cameras with different views. It enables a single active vision system to act as virtual left and right pan-tilt cameras that can simultaneously shoot a pair of stereo images for the same object to be observed at arbitrary viewpoints by switching the direction of the mirrors of the active vision system frame by frame. We developed a monocular galvano-mirror-based stereo tracking system that can switch between 500 different views in a second, and it functions as a catadioptric active stereo with left and right pan-tilt tracking cameras that can virtually capture 8-bit color 512 × 512 images each operating at 250 fps to mechanically track a fast-moving object with a sufficient parallax for accurate 3D measurement. Several tracking experiments for moving objects in 3D space are described to demonstrate the performance of our monocular stereo tracking system. Full article
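The depth computation behind the virtual left/right views is ordinary rectified-stereo triangulation; a minimal sketch in which the focal length, baseline and disparities are illustrative values, not the system's calibrated parameters.

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Rectified-stereo depth Z = f * B / d; non-positive disparities map to
    infinity (no valid match)."""
    d = np.asarray(disparity_px, dtype=float)
    return np.where(d > 0, focal_px * baseline_m / np.maximum(d, 1e-9), np.inf)

def point_from_pixel(u, v, disparity_px, focal_px, cx, cy, baseline_m):
    """Back-project a matched pixel (u, v) with a given disparity into 3D [m]."""
    z = focal_px * baseline_m / disparity_px
    return np.array([(u - cx) * z / focal_px, (v - cy) * z / focal_px, z])

# Illustrative numbers only: 512 x 512 virtual views, ~800 px focal length,
# 0.15 m effective baseline between the two mirror-selected viewpoints.
print(depth_from_disparity([40.0, 20.0, 8.0], focal_px=800.0, baseline_m=0.15))
print(point_from_pixel(300, 260, 20.0, focal_px=800.0, cx=256.0, cy=256.0,
                       baseline_m=0.15))
```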

5640 KiB  
Article
A Foot-Arch Parameter Measurement System Using a RGB-D Camera
by Sungkuk Chun, Sejin Kong, Kyung-Ryoul Mun and Jinwook Kim
Sensors 2017, 17(8), 1796; https://doi.org/10.3390/s17081796 - 4 Aug 2017
Cited by 18 | Viewed by 11317
Abstract
The conventional method of measuring foot-arch parameters is highly dependent on the measurer’s skill level, so accurate measurements are difficult to obtain. To solve this problem, we propose an autonomous geometric foot-arch analysis platform that is capable of capturing the sole of the foot and yields three foot-arch parameters: arch index (AI), arch width (AW) and arch height (AH). The proposed system captures 3D geometric and color data on the plantar surface of the foot in a static standing pose using a commercial RGB-D camera. It detects the region of the foot surface in contact with the footplate by applying the clustering and Markov random field (MRF)-based image segmentation methods. The system computes the foot-arch parameters by analyzing the 2/3D shape of the contact region. Validation experiments were carried out to assess the accuracy and repeatability of the system. The average errors for AI, AW, and AH estimation on 99 data collected from 11 subjects during 3 days were −0.17%, 0.95 mm, and 0.52 mm, respectively. Reliability and statistical analysis on the estimated foot-arch parameters, the robustness to the change of weights used in the MRF, the processing time were also performed to show the feasibility of the system. Full article

21411 KiB  
Article
3D Reconstruction of Space Objects from Multi-Views by a Visible Sensor
by Haopeng Zhang, Quanmao Wei and Zhiguo Jiang
Sensors 2017, 17(7), 1689; https://doi.org/10.3390/s17071689 - 22 Jul 2017
Cited by 23 | Viewed by 6601
Abstract
In this paper, a novel 3D reconstruction framework is proposed to recover the 3D structural model of a space object from its multi-view images captured by a visible sensor. Given an image sequence, this framework first estimates the relative camera poses and recovers the depths of the surface points by the structure from motion (SFM) method, then the patch-based multi-view stereo (PMVS) algorithm is utilized to generate a dense 3D point cloud. To resolve the wrong matches arising from the symmetric structure and repeated textures of space objects, a new strategy is introduced, in which images are added to SFM in imaging order. Meanwhile, a refining process exploiting the structural prior knowledge that most sub-components of artificial space objects are composed of basic geometric shapes is proposed and applied to the recovered point cloud. The proposed reconstruction framework is tested on both simulated image datasets and real image datasets. Experimental results illustrate that the recovered point cloud models of space objects are accurate and have a complete coverage of the surface. Moreover, outliers and points with severe noise are effectively filtered out by the refinement, resulting in an distinct improvement of the structure and visualization of the recovered points. Full article

2654 KiB  
Article
Development of Three-Dimensional Dental Scanning Apparatus Using Structured Illumination
by Jae Sung Ahn, Anjin Park, Ju Wan Kim, Byeong Ha Lee and Joo Beom Eom
Sensors 2017, 17(7), 1634; https://doi.org/10.3390/s17071634 - 15 Jul 2017
Cited by 23 | Viewed by 6151
Abstract
We demonstrated a three-dimensional (3D) dental scanning apparatus based on structured illumination. A liquid lens was used for tuning focus and a piezomotor stage was used for the shift of structured light. A simple algorithm, which detects intensity modulation, was used to perform optical sectioning with structured illumination. We reconstructed a 3D point cloud, which represents the 3D coordinates of the digitized surface of a dental gypsum cast by piling up sectioned images. We performed 3D registration of an individual 3D point cloud, which includes alignment and merging the 3D point clouds to exhibit a 3D model of the dental cast. Full article
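The "simple algorithm, which detects intensity modulation" is commonly implemented as the three-phase square-law sectioning formula of structured-illumination imaging; a hedged sketch assuming three grid positions shifted by one third of a period (the paper's apparatus details are not modelled).

```python
import numpy as np

def optical_section(i1, i2, i3):
    """Square-law sectioning for structured illumination: only in-focus
    structure retains the projected grid modulation, so the root of the
    pairwise squared differences suppresses out-of-focus light."""
    return np.sqrt((i1 - i2)**2 + (i2 - i3)**2 + (i3 - i1)**2)

# Synthetic 1D profile: an in-focus region (full modulation) next to an
# out-of-focus region (modulation nearly washed out), grid shifted by 1/3 period.
x = np.linspace(0.0, 4.0 * np.pi, 200)
m = np.where(x < 2.0 * np.pi, 1.0, 0.05)              # modulation depth
frames = [100.0 + 50.0 * m * np.cos(x + k * 2.0 * np.pi / 3.0) for k in range(3)]
section = optical_section(*frames)
print(section[:100].mean() > 10.0 * section[100:].mean())   # in-focus dominates
```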

10594 KiB  
Article
Efficient Depth Enhancement Using a Combination of Color and Depth Information
by Kyungjae Lee, Yuseok Ban and Sangyoun Lee
Sensors 2017, 17(7), 1544; https://doi.org/10.3390/s17071544 - 1 Jul 2017
Cited by 7 | Viewed by 5915
Abstract
Studies on depth images containing three-dimensional information have been performed for many practical applications. However, the depth images acquired from depth sensors have inherent problems, such as missing values and noisy boundaries. These problems significantly affect the performance of applications that use a depth image as their input. This paper describes a depth enhancement algorithm based on a combination of color and depth information. To fill depth holes and recover object shapes, asynchronous cellular automata with neighborhood distance maps are used. Image segmentation and a weighted linear combination of spatial filtering algorithms are applied to extract object regions and fill disocclusion in the object regions. Experimental results on both real-world and public datasets show that the proposed method enhances the quality of the depth image with low computational complexity, outperforming conventional methods on a number of metrics. Furthermore, to verify the performance of the proposed method, we present stereoscopic images generated by the enhanced depth image to illustrate the improvement in quality. Full article

4201 KiB  
Article
Simultaneous Calibration: A Joint Optimization Approach for Multiple Kinect and External Cameras
by Yajie Liao, Ying Sun, Gongfa Li, Jianyi Kong, Guozhang Jiang, Du Jiang, Haibin Cai, Zhaojie Ju, Hui Yu and Honghai Liu
Sensors 2017, 17(7), 1491; https://doi.org/10.3390/s17071491 - 24 Jun 2017
Cited by 51 | Viewed by 6647
Abstract
Camera calibration is a crucial problem in many applications, such as 3D reconstruction, structure from motion, object tracking and face alignment. Numerous methods have been proposed to solve the above problem with good performance in the last few decades. However, few methods are targeted at joint calibration of multi-sensors (more than four devices), which normally is a practical issue in the real-time systems. In this paper, we propose a novel method and a corresponding workflow framework to simultaneously calibrate relative poses of a Kinect and three external cameras. By optimizing the final cost function and adding corresponding weights to the external cameras in different locations, an effective joint calibration of multiple devices is constructed. Furthermore, the method is tested in a practical platform, and experiment results show that the proposed joint calibration method can achieve a satisfactory performance in a project real-time system and its accuracy is higher than the manufacturer’s calibration. Full article

1444 KiB  
Article
Breathing Analysis Using Thermal and Depth Imaging Camera Video Records
by Aleš Procházka, Hana Charvátová, Oldřich Vyšata, Jakub Kopal and Jonathon Chambers
Sensors 2017, 17(6), 1408; https://doi.org/10.3390/s17061408 - 16 Jun 2017
Cited by 58 | Viewed by 10498
Abstract
The paper is devoted to the study of facial region temperature changes using a simple thermal imaging camera and to the comparison of their time evolution with the pectoral area motion recorded by the MS Kinect depth sensor. The goal of this research is to propose the use of video records as alternative diagnostics of breathing disorders allowing their analysis in the home environment as well. The methods proposed include (i) specific image processing algorithms for detecting facial parts with periodic temperature changes; (ii) computational intelligence tools for analysing the associated videosequences; and (iii) digital filters and spectral estimation tools for processing the depth matrices. Machine learning applied to thermal imaging camera calibration allowed the recognition of its digital information with an accuracy close to 100% for the classification of individual temperature values. The proposed detection of breathing features was used for monitoring of physical activities by the home exercise bike. The results include a decrease of breathing temperature and its frequency after a load, with mean values −0.16 °C/min and −0.72 bpm respectively, for the given set of experiments. The proposed methods verify that thermal and depth cameras can be used as additional tools for multimodal detection of breathing patterns. Full article
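A hedged sketch of the depth half of such a pipeline: average the pectoral-region depth per frame and take the breathing frequency as the dominant spectral peak of that signal. The ROI, frame rate and motion amplitude below are synthetic, and the paper's thermal channel and filtering are not reproduced.

```python
import numpy as np

def breathing_rate_bpm(depth_frames, roi, fps):
    """Breaths per minute from a depth sequence: mean depth of the chest ROI
    per frame, then the dominant FFT peak inside a plausible breathing band.

    depth_frames: (T, H, W) array; roi: (row_slice, col_slice); fps: frame rate.
    """
    signal = depth_frames[:, roi[0], roi[1]].mean(axis=(1, 2))
    signal = signal - signal.mean()                        # remove DC offset
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs > 0.1) & (freqs < 1.0)                   # 6 to 60 breaths/min
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

# Synthetic chest motion: 0.3 Hz (18 bpm) breathing, 30 fps, 20 s of data.
fps, T = 30, 600
t = np.arange(T) / fps
frames = np.full((T, 48, 64), 1200.0)
frames += 4.0 * np.sin(2.0 * np.pi * 0.3 * t)[:, None, None]
frames += np.random.default_rng(0).normal(0.0, 1.0, frames.shape)
print(round(breathing_rate_bpm(frames, (slice(10, 40), slice(20, 50)), fps), 1))
```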

5125 KiB  
Article
Easy and Fast Reconstruction of a 3D Avatar with an RGB-D Sensor
by Aihua Mao, Hong Zhang, Yuxin Liu, Yinglong Zheng, Guiqing Li and Guoqiang Han
Sensors 2017, 17(5), 1113; https://doi.org/10.3390/s17051113 - 12 May 2017
Cited by 17 | Viewed by 8753
Abstract
This paper proposes a new easy and fast 3D avatar reconstruction method using an RGB-D sensor. Users can easily implement human body scanning and modeling just with a personal computer and a single RGB-D sensor such as a Microsoft Kinect within a small workspace in their home or office. To make the reconstruction of 3D avatars easy and fast, a new data capture strategy is proposed for efficient human body scanning, which captures only 18 frames from six views with a close scanning distance to fully cover the body; meanwhile, efficient alignment algorithms are presented to locally align the data frames in the single view and then globally align them in multi-views based on pairwise correspondence. In this method, we do not adopt shape priors or subdivision tools to synthesize the model, which helps to reduce modeling complexity. Experimental results indicate that this method can obtain accurate reconstructed 3D avatar models, and the running performance is faster than that of similar work. This research offers a useful tool for the manufacturers to quickly and economically create 3D avatars for products design, entertainment and online shopping. Full article
