Article

A Fast Calibration Method for Photonic Mixer Device Solid-State Array Lidars

Key Laboratory of Biomimetic Robots and Systems (Ministry of Education), Beijing Institute of Technology, Beijing 100081, China
* Author to whom correspondence should be addressed.
Sensors 2019, 19(4), 822; https://doi.org/10.3390/s19040822
Submission received: 25 December 2018 / Revised: 5 February 2019 / Accepted: 14 February 2019 / Published: 17 February 2019
(This article belongs to the Special Issue Depth Sensors and 3D Vision)

Abstract
The photonic mixer device (PMD) solid-state array lidar, as a three-dimensional imaging technology, has attracted research attention in recent years because of its low cost, high frame rate, and high reliability. To address the disadvantages of traditional PMD solid-state array lidar calibration methods, including low calibration efficiency, low accuracy, and susceptibility to human error, this paper first proposes a calibration method for an array complementary metal–oxide–semiconductor (CMOS) photodetector that uses a black-box calibration device and an electrical analog delay method; it then proposes a modular lens distortion correction method based on checkerboard calibration and pixel-point adaptive interpolation optimization. Specifically, the ranging error sources are analyzed on the basis of the PMD solid-state array lidar imaging mechanism; a black-box calibration device is designed to meet the calibration requirements of ambient light suppression and a controlled echo reflection route; a dynamic distance simulation system integrating the laser emission unit, laser receiving unit, and delay control unit is designed to calibrate the photodetector echo demodulation; the checkerboard calibration method is used to correct external lens distortion in grayscale mode; and a pixel adaptive interpolation strategy is used to reduce distortion of the distance images. Analysis of the calibration process and results shows that the proposed method effectively reduces the calibration scene requirements and human interference, meets the needs of different lens users, and improves both calibration efficiency and measurement accuracy.

1. Introduction

Three-dimensional (3D) measurement technologies have been widely used in surveying, structural measurements, virtual reality, and unmanned driving over the past few decades [1,2,3,4,5]. Among these technologies, and unlike two-dimensional (2D)/3D mechanical scanning lidar [6,7], which requires fast rotation scanning to obtain depth data, solid-state array lidar based on time-of-flight measurements provides the metric distance to the scene from the sensor [8,9,10]; in particular, photonic mixer device (PMD) solid-state array lidar can reduce the reliability problems associated with mechanical rotating equipment and increase the frame rate while reducing the complexity of 3D reconstruction from the data.
PMD solid-state array lidar is widely used in the computer vision community. For example, it is applied to human–computer interactions that require gestures and real-time motion recognition [11,12]; simultaneous localization and mapping in robotics, unmanned vehicles, and drones [13]; and virtual reality in game-type entertainment equipment [14]. The basic composition of a PMD solid-state array lidar system is shown in Figure 1. The system emits laser light through an array laser; the emitted laser light is then reflected by obstacles and the reflected light is received by the array complementary metal–oxide–semiconductor (CMOS) photodetector. The processor then calculates the distance between the lidar system and the target [9,10]. In this process, the pulse echo signals are integrated multiple times using two different capacitors (C1, C2) and two different phase windows (Phase A, Phase B) under the same pixel; the time of the pulse flight is then calculated based on the integration results for the different capacitors, and the target distance is then finally calculated from the flight time.
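For illustration, a minimal sketch of the two-window charge-ratio principle underlying this kind of pulsed ToF measurement is given below. This is not the authors' on-chip implementation; it only shows, under the assumption of an ideal rectangular pulse whose echo energy splits between the Phase A and Phase B windows, how the two capacitor charges map to a flight time and a distance.

```python
C = 299_792_458.0  # speed of light (m/s)

def pulsed_tof_distance(q_a: float, q_b: float, pulse_width_s: float) -> float:
    """Two-window charge-ratio sketch: Phase A opens with the laser pulse and
    Phase B immediately after it, so the echo delay determines how the returned
    energy splits between the capacitor charges Q_A and Q_B. Then
    t_tof = pulse_width * Q_B / (Q_A + Q_B), and the one-way distance is half
    the round-trip path."""
    if q_a + q_b <= 0.0:
        raise ValueError("no echo energy received")
    t_tof = pulse_width_s * q_b / (q_a + q_b)
    return C * t_tof / 2.0

# Example: a 30 ns pulse whose echo energy splits 3:1 between the windows.
print(pulsed_tof_distance(q_a=3.0, q_b=1.0, pulse_width_s=30e-9))  # ~1.12 m
```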
However, PMD solid-state array lidar is affected by the chip temperature, chip design, lens characteristics, and distance calibration [15,16,17,18,19] and thus has low calibration efficiency, poor measurement accuracy, and other operational issues.
Several studies have been conducted on PMD solid-state array lidar calibration. Some of these works focused on calibration of complete 3D time-of-flight (ToF) sensor systems. Fuchs and Hirzinger [20] presented a calibration procedure that allowed the user to calibrate the distance-related and amplitude-related errors of a ToF camera for a desired operating range and also determined the extrinsic parameters of the ToF camera. Kuhnert and Stommel proposed a distance calibration approach [21] that centered on the corrected pixel and corrected the phase difference by linearly fitting the 5 × 5 surrounding pixels to obtain the true distance. After correction of a given input image, a min/max depth map was calculated by examining the neighbors of each pixel, which led to development of a confidence interval for the true distance information. Kahlmann et al. [22] presented a systematic calibration method that considered different influencing parameters, including the reflectivity, the integration time, the temperature, and the distance itself; these parameters were analyzed with respect to their effects on the distance-measuring pixels and on the output data. Lindner and Kolb [23] combined a per-pixel linear calibration with a global B-spline fit that provided improved local control and was especially useful for online calibration tasks. Schiller et al. [24] developed a calibration method that estimated the full intrinsic calibration of a PMD camera and included consideration of both optical lens distortions and systematic range errors; their calibration approach used a planar checkerboard pattern as a calibration reference for multiple cameras and distances. Schmidt [25] proposed a calibration approach that implicitly calibrated the sensor for inhomogeneities using arbitrary raw data acquired from a scene. Jung et al. [26] presented a calibration method for a ToF sensor and color camera pair to align 3D measurements with a color image correctly; they designed a 2.5-dimensional pattern board with irregularly placed holes to be detected accurately from both the low-resolution depth images of a ToF camera and high-resolution color images. In general, these calibration systems set a reflective baffle at a specified initial distance from the PMD solid-state array lidar and then vary this distance repeatedly, thus obtaining ranging results at different distances. These ranging results are then compared with the actual distances to acquire the calibration data. However, there are several problems involved in using these systems, including the heavy workload and the inevitability of human interference because the scene needs to be set manually.
Several works have focused on the calibration algorithm used for PMD solid-state array lidar. Some studies [27,28] used a single integration time adjustment, while others [29,30] selected an appropriate integration time based on the average amplitude data of a scene; the results of these approaches are not ideal for scenarios with both a foreground and a background. Steiger et al. [31] discussed a method for setting the optimal global integration time; however, all their distance errors were above the uncertainties specified by the manufacturer even after calibration. Swadzba et al. [32] used a stepwise optimization and particle filtering algorithm based on a 3D ToF camera. Reynolds et al. [33] proposed an improved per-pixel confidence measure using a random forest regressor that was trained using real-world data, but the confidence assignment speed was slow. Pathak et al. [34] analyzed a scenario in which a pixel beam is intersected by more than one object and where the assumption of a unimodal probabilistic distribution causes spurious objects to appear in the detected scene. The drawback of their method was the requirement for integration of over 100 images for each frame, which reduced the frame rate significantly. Lindner et al. [35] described a fast algorithmic approach to combine high-resolution RGB images with PMD distance data that were acquired using a binocular camera, but their simple threshold-based segmentation using the PMD autocorrelation amplitude does not always lead to optimal results close to object boundaries. Kim et al. [36] proposed a novel denoising algorithm for ToF depth images based on estimation of the space-varying ToF depth noise and introduced a parametric model for ToF depth noise that used the infrared intensity while assuming additive white Gaussian noise. Kern [37] proposed a calibration technique for a laser scanner using a plane with holes, but the specific objective in that case was to calibrate a laser scanner that provides much more accurate depth measurements than an array lidar. This method is thus unsuitable for array lidar systems.
Overall, most current calibration systems calibrate distance errors by placing calibration plates at different distances, and most current calibration methods calibrate distance errors using complex settings or algorithms. However, these systems and methods suffer from several problems, including heavy workloads, complex computational requirements, and serious human interference because the scene must be set manually. Therefore, based on an analysis of the ranging error of PMD solid-state array lidar systems, a calibration method for the CMOS photodetector array is proposed that uses a black-box calibration device and an electrical analog delay method. To meet the demand for interchangeable lens fields of view, a modular lens distortion correction method that uses a checkerboard and pixel-point adaptive interpolation optimization is proposed. The main contributions are as follows:
(1)
From an analysis of the actual laser emission modulation signal, the echo demodulation error of the PMD solid-state array lidar system is obtained through a detailed study of the echo demodulation processes of a sinusoidal modulation wave and a rectangular modulation wave. This provides a theoretical basis for rapid calibration of PMD solid-state array lidar at long distances.
(2)
A black-box calibration device is designed to meet the calibration requirements of ambient light suppression and a controlled echo reflection route; it reduces external interference on the one hand and improves the uniformity of the received light on the other.
(3)
A dynamic distance simulation system that integrates a laser emission unit, a laser receiving unit, a delay control unit, and other units is designed. The echo-demodulation of the CMOS photodetector array is calibrated using the electrical analog delay. The delay phase-locked loop is used to set different laser emission times to simulate different calibration distances. We correct the calibration curve linearly to improve the delay time accuracy. This method realizes rapid and long-distance range calibration for the CMOS photodetector array without changing the calibration distance.
(4)
A checkerboard image is captured in grayscale, and the internal parameters and distortion coefficients of the lens, obtained using the calibration method of Zhang [38], are used to correct the distortion of the external lens. To address the problems that the distortion-corrected pixels do not correspond exactly to raw pixels and that the distance image collected using the array lidar has no depth values for some pixels, a pixel adaptive interpolation strategy is used to reduce the distortion. This method can meet the needs of different lens users and achieve modular calibration.

2. System Introduction

The calibration approach proposed in this paper comprises two parts, CMOS photodetector array calibration and modular lens distortion correction, as shown in Figure 2. Calibration of the CMOS photodetector array involves analysis of the sinusoidal-wave/rectangular-wave modulation and demodulation error and use of an automatic calibration method that combines the black-box calibration device with the electrical analog delay. Lens distortion correction involves lens installation, internal parameter and distortion coefficient calibration, coordinate conversion and distortion correction, and pixel-point adaptive interpolation optimization.

3. Materials and Methods

3.1. Analysis of Lidar Modulation and Demodulation Errors

The PMD solid-state array lidar system mainly uses the ToF to calculate the distance to the target. The specific working process is illustrated in Figure 3. In this process, the pulse echo signals are integrated multiple times using two different capacitors and different phase windows under the same pixel; the time of the pulse flight is then calculated based on the integration results for the different capacitors, and, finally, the target distance is calculated from the flight time.

3.1.1. Sinusoidal-Wave Modulation and Demodulation

At present, the most commonly used ToF measurement method is sinusoidal-wave laser modulation and demodulation. In this method, a laser emits a sinusoidally modulated wave; this wave encounters an object that reflects or scatters the light during flight, and the light then returns to the CMOS photodetector array along the opposite path. The CMOS photodetector array demodulates the echo signal to obtain the laser flight time, and the relative distance to the target is then obtained from the flight time and the speed of light (the round-trip time corresponds to twice the target distance). The specific sinusoidal wave modulation and demodulation process is illustrated in Figure 4. Four different phase windows (0°, 180°, 90°, and 270°) for the two capacitors of a single pixel are used to demodulate the echo signal and obtain the phase change of the echo signal.
We define the transmitted modulated wave here as $a + b\sin(\omega t)$. The reflected wave is weaker than the transmitted wave because of the effects of propagation and reflection, but the reflected and transmitted waves have the same frequency. The reflected wave is therefore defined as $A + B\sin\omega(t - t_{TOF})$. According to the processes of modulation and demodulation shown in Figure 4, the integral control signal controls capacitances C1 and C2 to integrate them separately. The integration results for the four phase windows in a single cycle are taken here as an example:
$$
\begin{aligned}
{}_{s}Q_{1}^{DC0} &= \int_{0}^{\pi/\omega} \left[ A + B\sin\omega(t - t_{TOF}) \right] dt, \\
{}_{s}Q_{2}^{DC0} &= \int_{\pi/\omega}^{2\pi/\omega} \left[ A + B\sin\omega(t - t_{TOF}) \right] dt, \\
{}_{s}Q_{1}^{DC1} &= \int_{-\pi/2\omega}^{\pi/2\omega} \left[ A + B\sin\omega(t - t_{TOF}) \right] dt, \\
{}_{s}Q_{2}^{DC1} &= \int_{\pi/2\omega}^{3\pi/2\omega} \left[ A + B\sin\omega(t - t_{TOF}) \right] dt,
\end{aligned} \tag{1}
$$
where ${}_{s}Q_{1}^{DC0}$, ${}_{s}Q_{2}^{DC0}$, ${}_{s}Q_{1}^{DC1}$, and ${}_{s}Q_{2}^{DC1}$ are the integration results in the sinusoidal modulated-wave case, i.e., the charge obtained by integrating C1 in the 0° phase window, the charge obtained by integrating C2 in the 180° phase window, the charge obtained by integrating C1 in the 90° phase window, and the charge obtained by integrating C2 in the 270° phase window, respectively; $\omega$ is the angular frequency of the sinusoidal wave and $t_{TOF}$ is the flight duration. The capacitance integral charge differences ${}_{s}DC_{0}$ and ${}_{s}DC_{1}$ based on sinusoidal wave demodulation can be obtained from Equation (1) as follows:
$$
{}_{s}DC_{0} = \frac{4B}{\omega}\cos(\omega t_{TOF}), \qquad {}_{s}DC_{1} = \frac{4B}{\omega}\sin(\omega t_{TOF}). \tag{2}
$$
In addition, the flight duration is:
$$
{}_{s}t_{TOF} = \frac{1}{\omega}\left[ \pi + \operatorname{atan2}\left( {}_{s}DC_{0},\ {}_{s}DC_{1} \right) \right], \tag{3}
$$
where $\operatorname{atan2}(x, y)$ is calculated as:
$$
\operatorname{atan2}(x, y) =
\begin{cases}
\arctan(y/x) & x > 0 \\
\arctan(y/x) + \pi & x < 0,\ y \ge 0 \\
\arctan(y/x) - \pi & x < 0,\ y < 0 \\
\pi/2 & x = 0,\ y > 0 \\
-\pi/2 & x = 0,\ y < 0 \\
0 & x = 0,\ y = 0.
\end{cases} \tag{4}
$$
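The demodulation chain of Equations (1)–(3) can be checked numerically. The sketch below is a simplified model assuming an ideal sinusoidal echo; the signs of the charge differences are chosen here so that the recovered phase equals $\omega t_{TOF}$ directly (any constant phase offset would in practice be absorbed by the calibration curve described in Section 3.2).

```python
import numpy as np

def window_charge(t0, t1, a, b, omega, t_tof, n=200_001):
    """Charge collected by integrating the echo a + b*sin(omega*(t - t_tof))
    over one phase window [t0, t1] (cf. Equation (1))."""
    t = np.linspace(t0, t1, n)
    f = a + b * np.sin(omega * (t - t_tof))
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t)))  # trapezoid rule

def demodulate_sine(a, b, omega, t_tof):
    """Recover the flight time from the four phase-window charges."""
    q1_dc0 = window_charge(0.0, np.pi / omega, a, b, omega, t_tof)                # C1, 0° window
    q2_dc0 = window_charge(np.pi / omega, 2 * np.pi / omega, a, b, omega, t_tof)  # C2, 180° window
    q1_dc1 = window_charge(-np.pi / (2 * omega), np.pi / (2 * omega), a, b, omega, t_tof)     # C1, 90° window
    q2_dc1 = window_charge(np.pi / (2 * omega), 3 * np.pi / (2 * omega), a, b, omega, t_tof)  # C2, 270° window

    # Charge differences in the spirit of Equation (2); the ambient term a
    # cancels because every window has the same length.
    dc0 = q1_dc0 - q2_dc0   # proportional to cos(omega * t_tof)
    dc1 = q2_dc1 - q1_dc1   # proportional to sin(omega * t_tof)
    phase = np.arctan2(dc1, dc0) % (2 * np.pi)
    return phase / omega

omega = 2 * np.pi * 10e6    # 10 MHz modulation frequency
t_true = 20e-9              # 20 ns flight time
print(demodulate_sine(a=1.0, b=0.5, omega=omega, t_tof=t_true))  # ≈ 2.0e-08 s
```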
The CMOS photodetector array controls the time-sharing integration of capacitors C1 and C2 through the integral switch control signal to realize demodulation of the reflected wave, and the distance to the target can then be resolved from the demodulation result. However, the sinusoidal modulation signal emitted by the PMD solid-state array lidar often fails to meet the ideal requirements. The actual curve is shown in Figure 5: the actual signal resembles a rectangular wave that has been low-pass filtered because of the limited generator bandwidth. This paper therefore uses rectangular waves for further modulation and demodulation analyses.

3.1.2. Rectangular Wave Modulation and Demodulation

The rectangular modulated wave laser beam encounters objects and is reflected or scattered during flight and then returns to the CMOS photodetector array along the opposite path. The CMOS photodetector array then demodulates the echo signal to obtain the laser flight duration and the relative distance to the target by considering the speed of light. The specific rectangular wave modulation and demodulation process is shown in Figure 6. Four different phase windows (0°, 180°, 90°, and 270°) of the two capacitors from a single pixel are used to demodulate the echo signal and thus obtain the phase change of the echo signal. The demodulation of the rectangular reflected wave is similar to the demodulation of the sinusoidal reflected wave. According to the modulation and demodulation process shown in Figure 6, the integral control signal controls capacitances C1 and C2 to integrate them separately.
The integration results ${}_{r}Q_{1}^{DC0}$, ${}_{r}Q_{2}^{DC0}$, ${}_{r}Q_{1}^{DC1}$, and ${}_{r}Q_{2}^{DC1}$ for the four phase windows in a single cycle are used as an example here:
$$
\begin{aligned}
{}_{r}Q_{1}^{DC0} &= \int_{0}^{\pi/\omega} f(t)\,dt, &
{}_{r}Q_{2}^{DC0} &= \int_{\pi/\omega}^{2\pi/\omega} f(t)\,dt, \\
{}_{r}Q_{1}^{DC1} &= \int_{-\pi/2\omega}^{\pi/2\omega} f(t)\,dt, &
{}_{r}Q_{2}^{DC1} &= \int_{\pi/2\omega}^{3\pi/2\omega} f(t)\,dt,
\end{aligned} \tag{5}
$$
where f(t) is a reflected wave and is defined as:
$$
f(t) =
\begin{cases}
A & t_{TOF} < t \le t_{TOF} + \pi/\omega \\
0 & t_{TOF} + \pi/\omega < t \le t_{TOF} + 2\pi/\omega.
\end{cases} \tag{6}
$$
From this, ${}_{r}Q_{1}^{DC0}$, ${}_{r}Q_{2}^{DC0}$, ${}_{r}Q_{1}^{DC1}$, ${}_{r}Q_{2}^{DC1}$, and $t_{TOF}$ have the following relationships:
$$
\begin{aligned}
{}_{r}Q_{1}^{DC0} &= A\left(\frac{\pi}{\omega} - t_{TOF}\right), \quad 0 < t_{TOF} \le \frac{\pi}{\omega} \\
{}_{r}Q_{2}^{DC0} &= A\,t_{TOF}, \quad 0 < t_{TOF} \le \frac{\pi}{\omega} \\
{}_{r}Q_{1}^{DC1} &=
\begin{cases}
A\left(\frac{\pi}{2\omega} - t_{TOF}\right) & 0 < t_{TOF} \le \frac{\pi}{2\omega} \\
A\left(t_{TOF} - \frac{\pi}{2\omega}\right) & \frac{\pi}{2\omega} < t_{TOF} \le \frac{\pi}{\omega}
\end{cases} \\
{}_{r}Q_{2}^{DC1} &=
\begin{cases}
A\left(t_{TOF} + \frac{\pi}{2\omega}\right) & 0 < t_{TOF} \le \frac{\pi}{2\omega} \\
A\left(\frac{3\pi}{2\omega} - t_{TOF}\right) & \frac{\pi}{2\omega} < t_{TOF} \le \frac{\pi}{\omega}
\end{cases}
\end{aligned} \tag{7}
$$
Furthermore, the capacitance integral charge differences ${}_{r}DC_{0}$ and ${}_{r}DC_{1}$ based on rectangular wave demodulation are:
$$
{}_{r}DC_{0} = {}_{r}Q_{2}^{DC0} - {}_{r}Q_{1}^{DC0}, \qquad {}_{r}DC_{1} = {}_{r}Q_{2}^{DC1} - {}_{r}Q_{1}^{DC1}. \tag{8}
$$
Because the actual modulated wave is neither exactly sinusoidal nor exactly rectangular, and because sinusoidal-wave demodulation is simpler and better suited to real-time computation than rectangular-wave demodulation, the actual flight time is usually calculated using Equation (3). Substituting ${}_{r}DC_{0}$ and ${}_{r}DC_{1}$, as obtained by rectangular wave demodulation, into Equation (3) yields ${}_{r}t_{TOF}$. However, ${}_{r}DC_{0}$ and ${}_{r}DC_{1}$ differ from ${}_{s}DC_{0}$ and ${}_{s}DC_{1}$, so there is a difference between the calculated and real values of ${}_{r}t_{TOF}$.
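A short numerical experiment makes this demodulation error visible. The sketch below integrates the rectangular echo of Equation (6) over the same four phase windows, forms the charge differences of Equation (8), and applies a sinusoidal-style phase recovery; the difference between the recovered and true flight times is the error analyzed theoretically in Section 4.1.1. The sign conventions match the sinusoidal sketch above and are an assumption of this illustration.

```python
import numpy as np

def rect_echo(t, amp, omega, t_tof):
    """Periodic rectangular echo of Equation (6): amplitude A for half a
    period starting at t_tof, zero for the other half."""
    phase = (omega * (t - t_tof)) % (2 * np.pi)
    return np.where(phase < np.pi, amp, 0.0)

def window_charge(t0, t1, amp, omega, t_tof, n=200_001):
    t = np.linspace(t0, t1, n)
    f = rect_echo(t, amp, omega, t_tof)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t)))  # trapezoid rule

def demodulated_tof(amp, omega, t_tof):
    """Feed the rectangular-wave charges of Equations (5)-(8) through a
    sinusoidal-style phase recovery (cf. Equation (3); the constant offset is
    dropped here since it does not affect the error shape)."""
    q1_dc0 = window_charge(0.0, np.pi / omega, amp, omega, t_tof)
    q2_dc0 = window_charge(np.pi / omega, 2 * np.pi / omega, amp, omega, t_tof)
    q1_dc1 = window_charge(-np.pi / (2 * omega), np.pi / (2 * omega), amp, omega, t_tof)
    q2_dc1 = window_charge(np.pi / (2 * omega), 3 * np.pi / (2 * omega), amp, omega, t_tof)
    dc0 = q1_dc0 - q2_dc0
    dc1 = q2_dc1 - q1_dc1
    return (np.arctan2(dc1, dc0) % (2 * np.pi)) / omega

omega = 2 * np.pi * 10e6                     # 10 MHz modulation
for t_true in (5e-9, 20e-9, 40e-9):
    err = demodulated_tof(1.0, omega, t_true) - t_true
    print(f"t_tof = {t_true * 1e9:4.0f} ns -> demodulation error {err * 1e9:+.2f} ns")
```

In this simplified model the error vanishes at flight times that are multiples of one eighth of the modulation period and peaks in between, which is the kind of periodic structure analyzed in Section 4.1.1.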

3.2. Calibration Method

By analyzing the ranging error of PMD solid-state array lidar, this paper proposes a calibration method for the CMOS photodetector array based on a black-box calibration device and an electrical analog delay method. First, according to the calibration requirements of ambient light suppression and a controlled echo reflection route, we construct a black-box calibration device that reduces external interference on the one hand and improves the uniformity of the received light on the other. Second, a dynamic distance simulation system that integrates a laser emission unit, a laser receiving unit, a delay control unit, and various other units is designed. This system calibrates the echo demodulation of the photodetector using the electrical analog delay method and uses a delay phase-locked loop to set various laser emission times, simulating different calibration distances without actually changing the physical distance. Simultaneously, the calibration curve is corrected linearly to improve the delay time accuracy; a minimal sketch of this linear correction follows.
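The sketch below fits a least-squares line between virtual distances (set by the delay phase-locked loop) and the distances reported by the sensor, then inverts the fit to correct subsequent measurements. All numbers are invented for illustration and are not the calibration data of the actual system.

```python
import numpy as np

# Hypothetical calibration samples: virtual distances set by the delay PLL
# and the distances actually reported by the sensor (both in metres).
virtual_d = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
measured_d = np.array([1.07, 2.11, 3.18, 4.22, 5.29, 6.33])

# Least-squares line: measured = k * virtual + b. Inverting it maps raw
# sensor readings back onto the calibrated distance scale.
k, b = np.polyfit(virtual_d, measured_d, deg=1)

def correct(measured: float) -> float:
    """Apply the linear calibration correction to a raw sensor distance."""
    return (measured - b) / k

print(correct(3.18))  # ≈ 3.0
```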

3.2.1. Black-box Calibration Device

A schematic diagram of the black-box calibration device is shown in Figure 7. The device mainly comprises six panels, a cylindrical tube, and a white calibration plate. The six panels and the cylindrical tube are made from black nonreflective materials, so the device is entirely black except for the white calibration plate. The plate is the same size as the rear panel and is fixed on the inner side of the rear panel. The front panel has two holes: one through which the modulated laser signal is emitted into the box, and one through which the echo signal returns to the CMOS photodetector array.
In this work, the infrared light is reflected by the white calibration plate during the transmission process and only the directly reflected light can enter the CMOS photodetector array. All other reflected light is absorbed by the black surfaces, and this again helps to avoid the effects of light entering the CMOS photodetector array after repeated reflections from other surfaces during calibration.

3.2.2. Calibration of CMOS Photodetector Array Based on the Electrical Delay Method

This paper presents a dynamic distance simulation system design that integrates a laser emission unit, a laser receiving unit, a delay control unit, and other units into a single system. A schematic diagram of the design is shown in Figure 8. An arbitrary delay length can be set in real time using synchronous clock counting and a delay phase-locked loop. The time interval between laser emission and laser echo can thus be simulated by the delay time, which allows simulation of high-precision dynamic distance information and calibration of the photodetector echo demodulation.
Figure 9 shows a schematic diagram of the delay phase-locked loop when generating different delay times. The delay time for each step is $\tau$, so the total delay is $n \times \tau$ and the virtual distance range that can be simulated is $[\tau c/2 + a,\ n\tau c/2 + a]$, where $c$ is the speed of light, $n$ is an integer, and $a$ is the distance from the lidar to the reflective panel of the black-box calibration device. For each value of the virtual distance, the corresponding test (measured) distance is recorded, yielding a calibration curve relating the virtual distance to the test distance. The distance is corrected using this calibration curve during real-time measurements.
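The mapping from delay steps to simulated distances is a one-line computation. In the sketch below, the step length $\tau$ and the black-box path length $a$ are hypothetical values, not the parameters of the actual system.

```python
C = 299_792_458.0  # speed of light (m/s)

def virtual_distance(step: int, tau_s: float, a_m: float) -> float:
    """Distance simulated by delaying the laser emission by `step` PLL steps
    of length tau on top of the fixed black-box path length a (Section 3.2.2):
    d = step * tau * c / 2 + a."""
    return step * tau_s * C / 2.0 + a_m

tau = 0.5e-9   # hypothetical 0.5 ns delay step
a = 0.4        # hypothetical lidar-to-reflector distance inside the box (m)
for n in (1, 10, 30):
    print(n, round(virtual_distance(n, tau, a), 3), "m")
```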

3.3. Lens Distortion Correction

The depth image is distorted, so correction of the depth image is an essential step. We use a checkerboard to calibrate the lidar system. After the lidar system's internal parameters and distortion coefficients are obtained, we can correct the image distortion. The traditional correction algorithm must be modified because the distortion-corrected pixels do not correspond exactly to raw pixels and because the depth image collected by the ToF lidar has no depth values for some pixels. The calibration process is presented in Figure 10.
The specific steps are as follows:
(1)
Install the lens for different angles of view at the front end of the PMD solid-state array lidar CMOS photodetector array.
(2)
Print a checkerboard grid to act as a target and fix it on a hard sheet.
(3)
Change the relative positions of the PMD solid-state array lidar and the target and acquire multiple images of the target from different angles in the grayscale mode of the lidar system.
(4)
Extract the feature points from each image and select the corner points of the checkerboard to act as calibration points.
(5)
Find the plane projection matrix H for each image.
(6)
Solve for the internal parameters of the PMD solid-state array lidar system using the matrix H.
(7)
Optimize the calibration results by back-projection transformation to obtain more accurate calibration results and calculate the distortion coefficients of the PMD solid-state array lidar system.
(8)
Use the internal parameters of the PMD solid-state array lidar system to convert the normalized plane points and the pixel plane points.
(9)
Correct the distortion of the PMD solid-state array lidar system using the distortion coefficients.
(10)
Process the depth image pixel points using a pixel adaptive interpolation strategy.
Step 1 is installation of the lens, which allows different users with different needs to select the lens and overcomes the limitation of using a single-field-angle lens. Steps 2–7 use the checkerboard and the calibration method of Zhang mentioned earlier to acquire the internal parameters and the distortion coefficients of the PMD solid-state array lidar system; these steps are not detailed in this paper. Steps 8–10 are the processes of coordinate conversion, distortion correction, and pixel adaptive interpolation, and are described as follows.
Steps 8–9: Depth information of the pixel in the corrected image is obtained in the following steps, as illustrated in Figure 11.
First, we project the pixel in the corrected image onto the normalized plane using the internal parameters of the PMD solid-state array lidar system:
$$
\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} =
\begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \tag{9}
$$
Here, $(u, v, 1)$ denotes the pixel's coordinates in the pixel coordinate system, while $(x, y, 1)$ denotes the coordinates in the normalized coordinate system. $f_x$, $f_y$, $c_x$, and $c_y$ are the internal lidar parameters, where $f_x$ and $f_y$ denote the focal lengths of the lidar system in the x and y directions, respectively, and $(c_x, c_y)$ denotes the coordinates of the principal point in the image coordinate system.
Second, we correct the distortion of the depth image:
$$
\begin{aligned}
x_{correct} &= x\left(1 + k_1 r^2 + k_2 r^4\right) + 2p_1 xy + p_2\left(r^2 + 2x^2\right) \\
y_{correct} &= y\left(1 + k_1 r^2 + k_2 r^4\right) + p_1\left(r^2 + 2y^2\right) + 2p_2 xy
\end{aligned} \tag{10}
$$
where $(x_{correct}, y_{correct}, 1)$ denotes the coordinates of the corrected point in the pixel coordinate system, $r = \sqrt{x^2 + y^2}$, and $k_1$, $k_2$, $p_1$, and $p_2$ are the distortion coefficients.
Third, we project the corrected pixel in the normalized plane to the original image using the internal lidar parameters.
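Steps 8–9 amount to building, for every pixel of the corrected image, the floating-point location in the raw image from which its depth should be sampled. The sketch below follows the projection of Equation (9) and the distortion model of Equation (10); the 320 × 240 resolution in the example is an assumption, while the intrinsics and distortion coefficients are taken from Table 2.

```python
import numpy as np

def undistortion_lookup(shape, fx, fy, cx, cy, k1, k2, p1, p2):
    """For every pixel of the corrected image, compute where it falls in the
    raw (distorted) image: project to the normalized plane using the
    intrinsics (Equation (9)), apply the radial/tangential distortion model
    (Equation (10)), then project back to raw pixel coordinates. The returned
    floating-point source coordinates are what the adaptive interpolation of
    Step 10 samples."""
    h, w = shape
    u, v = np.meshgrid(np.arange(w, dtype=float), np.arange(h, dtype=float))
    x = (u - cx) / fx                       # normalized plane coordinates
    y = (v - cy) / fy
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return fx * x_d + cx, fy * y_d + cy     # raw-image pixel coordinates

# Parameters of Table 2; the 320 x 240 resolution is only an assumption here.
u_src, v_src = undistortion_lookup((240, 320),
                                   fx=207.767, fy=209.308,
                                   cx=174.585, cy=129.201,
                                   k1=-0.37568, k2=0.15729,
                                   p1=0.00304, p2=0.00046)
print(u_src[0, 0], v_src[0, 0])  # where the corrected corner pixel samples from
```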
Step 10: Because the distortion correction pixels do not correspond exactly to raw pixels and the depth image acquired using the ToF lidar has no depth values for some pixels, we must modify the traditional correction algorithm by interpolating the depth values of the pixels. The pixel adaptive interpolation strategies for the different cases are presented in Table 1.
In the table, a green point represents the projection of a corrected pixel onto the uncorrected image. Purple pixels represent the points closest to the green point; solid purple pixels have depth information, while hollow ones do not. $D_0$, $D_1$, $D_2$, $D_3$, and $D_4$ are the distance (depth) values at the center point, the lower-left pixel, the top-left pixel, the lower-right pixel, and the top-right pixel, respectively, and $(x_{p0}, y_{p0})$, $(x_{p1}, y_{p1})$, $(x_{p2}, y_{p2})$, $(x_{p3}, y_{p3})$, and $(x_{p4}, y_{p4})$ are their coordinates. In addition, $\alpha_x = x_{p0} - x_{p1}$, $\alpha_y = y_{p0} - y_{p1}$, and $(m, n)$ denotes the coordinates of the center point in the barycentric coordinate system.
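A compact sketch of the Step 10 interpolation is given below. The four-, one-, and zero-neighbor cases follow Table 1 directly; for brevity, the two- and three-neighbor cases here fall back to renormalized bilinear weights rather than the paper's dedicated diagonal and barycentric formulas, so this is an approximation of the full strategy.

```python
import numpy as np

def interpolate_depth(depth, u, v):
    """Sample the raw depth image at the non-integer location (u, v) produced
    by distortion correction, adapting to how many of the four surrounding
    pixels carry a valid depth value (NaN marks pixels without depth)."""
    h, w = depth.shape
    u0 = int(np.clip(np.floor(u), 0, w - 2))
    v0 = int(np.clip(np.floor(v), 0, h - 2))
    ax, ay = u - u0, v - v0                  # fractional offsets alpha_x, alpha_y
    # Neighbors named as in the text: D1 lower-left, D2 top-left,
    # D3 lower-right, D4 top-right (rows index v, columns index u).
    d = np.array([depth[v0, u0], depth[v0 + 1, u0],
                  depth[v0, u0 + 1], depth[v0 + 1, u0 + 1]])
    w4 = np.array([(1 - ax) * (1 - ay), (1 - ax) * ay,
                   ax * (1 - ay), ax * ay])  # bilinear weights
    valid = ~np.isnan(d)
    if valid.all():                # four valid neighbors: plain bilinear
        return float(w4 @ d)
    if not valid.any():            # no valid neighbor: leave the pixel empty
        return float("nan")
    if valid.sum() == 1:           # one valid neighbor: nearest assignment
        return float(d[valid][0])
    # Two or three valid neighbors: renormalize the remaining weights.
    wv = w4[valid]
    if wv.sum() == 0.0:            # degenerate geometry: fall back to nearest
        return float(d[valid][0])
    return float(wv @ d[valid] / wv.sum())

# Example: top-right neighbor missing, sampling at the cell center.
patch = np.array([[1.00, 1.10],
                  [1.05, np.nan]])
print(interpolate_depth(patch, u=0.5, v=0.5))  # 1.05
```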

4. Experiments and Results

4.1. Results of Array CMOS Photoelectric Sensor Calibration

4.1.1. Results of Theoretical Error Analysis

According to the theoretical analysis, the relationship between the flight time obtained when a rectangular modulated wave is demodulated using the sinusoidal-wave formula and the real flight time is given by:
$$
{}_{r}t_{TOF} =
\begin{cases}
\dfrac{1}{\omega}\left[\pi + \operatorname{atan2}\left(2At_{TOF} - \dfrac{A\pi}{\omega},\ 2At_{TOF}\right)\right] & 0 < t_{TOF} \le \dfrac{\pi}{2\omega} \\
\dfrac{1}{\omega}\left[\pi + \operatorname{atan2}\left(2At_{TOF} - \dfrac{A\pi}{\omega},\ \dfrac{2A\pi}{\omega} - 2At_{TOF}\right)\right] & \dfrac{\pi}{2\omega} < t_{TOF} \le \dfrac{\pi}{\omega}.
\end{cases} \tag{11}
$$
In this paper, the flight time error is simulated using MATLAB. Figure 12 shows the simulated flight time error obtained under the assumption that f = 10 MHz.
In the PMD solid-state array lidar calibration process, the value of tTOF is (n × 2 + t0) ns, where n is an integer. For comparison with the actual calibration process, the theoretical correction results for the PMD solid-state array lidar at modulation frequencies of 12 MHz and 24 MHz are shown in Figure 13.
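The theoretical behavior can be reproduced by evaluating Equation (11) over a half-period. In the sketch below, the linear trend of the recovered time (which the linear calibration correction absorbs) is removed by a least-squares fit, leaving the nonlinear demodulation error; the 12 MHz frequency matches Figure 13a, and the 1 ns sampling grid is an assumption of this illustration.

```python
import numpy as np

def rt_tof(t_tof, omega, amp=1.0):
    """Equation (11): flight time recovered when rectangular-wave charge
    differences are fed through the sinusoidal demodulation formula
    (valid for 0 < t_tof <= pi/omega). Note that the paper's atan2(x, y)
    corresponds to numpy's arctan2(y, x)."""
    dc0 = 2 * amp * t_tof - amp * np.pi / omega
    dc1 = (2 * amp * t_tof if t_tof <= np.pi / (2 * omega)
           else 2 * amp * np.pi / omega - 2 * amp * t_tof)
    return (np.pi + np.arctan2(dc1, dc0)) / omega

omega = 2 * np.pi * 12e6                          # 12 MHz modulation
t = np.arange(1e-9, np.pi / omega, 1e-9)          # sweep one half-period
rt = np.array([rt_tof(ti, omega) for ti in t])

# Remove the linear part (absorbed by the linear calibration correction);
# the residual is the nonlinear error of the kind plotted in Figures 12 and 13.
k, b = np.polyfit(t, rt, 1)
residual_ns = (rt - (k * t + b)) * 1e9
print(f"peak nonlinear demodulation error: {np.abs(residual_ns).max():.2f} ns")
```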

4.1.2. Actual Experimental Results

This paper verifies the calibration method for the CMOS photodetector array on the PMD solid-state array lidar platform. The experimental setup is shown in Figure 14.
The equipment in the figure is partially blocked because the PMD solid-state array lidar system is still at the research and development stage. Among the system parameters, the laser emission intensity is set before calibration, and calibration begins when the echo intensity observed on the display interface is moderate and the CMOS photodetector array receives uniform light. The reflected signal intensity is moderate and consistent during the calibration process. In addition, to avoid noise, the distance is only calculated when the reflected signal intensity reaches a specific value in the actual measurement process, so the signal intensity is not taken into account during the error simulation. The actual calibration curves are shown in Figure 15a and Figure 16a, and the comparisons between the theoretical analysis results and the actual calibration results are shown in Figure 15b and Figure 16b. Figure 15 shows the distance calibration results obtained when the modulation frequency is 12 MHz; the phase offset is related to the distance to the black-box calibration device itself. Figure 16 shows the distance calibration results obtained when the modulation frequency is 24 MHz; the offset is again related to the distance to the black-box calibration device. The curve is truncated where the corresponding virtual distance exceeds the unambiguous range of the 24 MHz modulation frequency when the number of delay phase-locked loop steps is 30.

4.2. Modular Lens Distortion Calibration Results

This work verifies the modular lens distortion calibration method on the PMD solid-state array lidar platform. The experimental device used is shown in Figure 17.
The checkerboards used for the PMD solid-state array lidar system in grayscale mode are shown in Figure 18.
The lens internal parameters and the distortion coefficients are listed in Table 2.
Figure 19 presents three experimental results that were obtained after distortion correction and pixel adaptive interpolation. The scene is an office environment with dimensions of 10 m × 10 m × 3 m, and the illumination is approximately 500 lux. The CMOS photodetector array described in this paper responds to the optical band of 600–900 nm, so during the actual measurement process it collects ambient light (i.e., sunlight and other light sources) together with the emitted optical signal. However, the modulation and demodulation method proposed in this paper cancels out the common ambient light component via multi-capacitance synchronous charge accumulation, which reduces the measurement error. In addition, the PMD solid-state array lidar takes approximately 40 μs to output a single frame of data.

4.3. Distance Test Results After Calibration

The accuracy of the PMD solid-state array lidar measurements is evaluated following the calibration process. The experimental platform is shown in Figure 20, where the actual distance is obtained using a high-precision single-point laser range finder (LEICA DISTO A6, Heerbrugg, St. Gallen, Switzerland). The measurement accuracy is evaluated by comparing the distances to the center point of the white plate obtained when measured using the LEICA DISTO A6 and the PMD solid-state array lidar system.
After the LEICA DISTO A6 and the PMD solid-state array lidar were moved continuously, the distances to the center point of the white plate measured using the LEICA DISTO A6 were compared with those measured using the PMD solid-state array lidar over the 0.5–5 m range; the results are presented in Figure 21. During the actual measurement process, the LEICA DISTO A6 was used as the reference standard. The LEICA DISTO A6 curve does not appear straight because the measurement positions were not spaced at equal proportional intervals.

4.4. Performance Comparison

We have compared the performance of our calibration method with that of Steiger et al. [31], Lindner et al. [28], Schiller et al. [24] and Jung et al. [26]. The results of this comparison in terms of distance error, calibration time and scene setup are given in Table 3, where the term “NaN” is used to indicate instances where no metrical data were available at some of the distances.
The results in Table 3 indicate that the accuracy of the method proposed in this paper is superior to that of the methods described in [28,31] but slightly poorer than that of the methods described in [24,26]. However, the proposed method is superior to all the other methods in terms of both calibration time and scene setup. Our method requires only 10 min for the complete calibration process, including setup of the calibration scene, initialization, and calculation, whereas the other methods all require roughly dozens of minutes. At the same time, the required scene scope of our method is much smaller than that of the other methods, and, more importantly, it does not change with the calibration distance.

5. Discussion

We have presented a fast calibration method for PMD solid-state array lidar. First, based on an analysis of the PMD solid-state array lidar ranging error, we proposed a calibration method for the CMOS photodetector array based on a black-box calibration device and an electrical analog delay method. Second, we presented a lens distortion correction method based on a checkerboard and pixel adaptive interpolation to resolve the limitations of using a single-field-angle lens. We highlight the following findings of the study:
(1)
By analyzing the actual laser emission modulation signals, the echo demodulation error of the PMD solid-state array lidar system was obtained based on a detailed study of the echo demodulation process of a sinusoidal modulation wave and a rectangular modulation wave. This provided a theoretical basis for rapid calibration of PMD solid-state array lidar over a large distance range. Comparison of Figure 13, Figure 14, Figure 15 and Figure 16 indicates that the theoretical error analysis results are basically consistent with the actual error analysis results, which further verifies the correctness of the theoretical analysis.
(2)
As shown in Figure 14, this paper presents the design of a dynamic distance simulation system that integrates a laser emission unit, a laser receiving unit, a delay control unit, and various other units. Echo demodulation of the CMOS photodetector array was performed using the black-box calibration device and the electrical analog delay method. Table 3 shows that, in terms of accuracy, the method provided by our group is superior to those proposed in [28,31] but is slightly poorer than those presented in [24,26]. However, the proposed method is superior to all other methods in terms of both calibration time and scene setup. The apparatus and the method can calibrate a CMOS photodetector array over a large distance range without changing the calibration distance.
(3)
Different users have different requirements for the angle of view of the lens. We have proposed a modular lens distortion correction method based on a checkerboard. Figure 18 and Table 2 show that the internal parameters and the distortion coefficients of the lens can be obtained using the calibration method of Zhang, and these parameters can then be used to correct the lens distortion. To address the problems that the distortion-corrected pixels do not correspond exactly to raw pixels and that the distance images collected via array lidar have no depth values for some pixels, this paper proposed a pixel adaptive interpolation strategy to achieve distortion optimization. Figure 19 shows that the method can correct distortion for the different angles of view.

6. Conclusions

To address the disadvantages of the traditional PMD solid-state array lidar calibration method, which include low calibration efficiency, low calibration accuracy, and significant human error factors, this paper proposed a calibration method for a CMOS photodetector array based on a black-box calibration device and an electrical analog delay method. By analyzing the error performance in demodulation of a sinusoidal modulation wave and a rectangular modulation wave, a dynamic distance simulation system was designed that integrates a laser emission unit, a laser receiving unit, a delay control unit, and other units into a single system. The photodetector echo demodulation was calibrated using the electrical analog delay method, which verified the correctness of the theoretical results. This method effectively reduces the calibration scene requirements, human factors, and material requirements while improving the calibration efficiency. In addition, a modular lens distortion correction scheme based on a checkerboard and pixel adaptive interpolation was proposed. In the grayscale mode of the PMD solid-state array lidar system, the calibration method of Zhang was used to analyze checkerboard images, and the internal parameters and distortion coefficients of the external lens were obtained to correct the lens distortion. Simultaneously, a pixel-point adaptive interpolation strategy was adopted to reduce the distortion. The proposed method meets the needs of different lens users and improves the universality of PMD solid-state array lidar technology.

Author Contributions

Y.Z. and P.S. designed the research and developed the fast calibration method for PMD solid-state array lidar and the simulation. Y.Z. and X.C. built the verification test platform, designed the software, and analyzed the results.

Funding

This work was supported by the National Defense Basic Scientific Research Program of China under Grant JCKY2017602B012.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Boonkwang, S.; Saiyod, S. Distance measurement using 3D stereoscopic technique for robot eyes. In Proceedings of the 7th International Conference on Information Technology and Electrical Engineering (ICITEE), Chiang Mai, Thailand, 29–30 October 2015; pp. 232–236. [Google Scholar]
  2. Xu, H.; Ma, Z.; Chen, Y. A modified calibration technique of camera for 3D laser measurement system. In Proceedings of the IEEE International Conference on Automation and Logistics (ICAL), Shenyang, China, 5–7 August 2009; pp. 794–798. [Google Scholar]
  3. Yi, H.; Yan, L.; Tsujino, K.; Lu, C. A long-distance sea wave height measurement based on 3D image measurement technique. In Proceedings of the Progress in Electromagnetic Research Symposium (PIERS), Shanghai, China, 7–10 August 2016; pp. 4774–4779. [Google Scholar]
  4. Zhao, H.; Diao, X.; Jiang, H.; Zhao, Z. The 3D measurement techniques for ancient architecture and historical relics. In Proceedings of the Second International Conference on Photonics and Optical Engineering (OPAL), Suzhou, China, 26–28 September 2017; p. 102562L. [Google Scholar]
  5. Zeng, X.; Ma, S. Flying attitude measurement of projectile using high speed photography and 3D digital image correlation technique. In Proceedings of the 2011 International Conference on Optical Instruments and Technology, Beijing, China, 6–9 November 2011; p. 819706. [Google Scholar]
  6. Park, Y.; Yun, S.; Won, C.S.; Cho, K.; Um, K.; Sim, S. Calibration between color camera and 3D LIDAR instruments with a polygonal planar board. Sensors 2014, 14, 5333–5353. [Google Scholar] [CrossRef] [PubMed]
  7. Zeng, Y.; Yu, H.; Dai, H.; Song, S.; Lin, M.; Sun, B.; Jiang, W.; Meng, M.Q.-H. An improved calibration method for a rotating 2D LiDAR system. Sensors 2018, 18, 497. [Google Scholar] [CrossRef] [PubMed]
  8. Sun, M.J.; Edgar, M.P.; Gibson, G.M.; Sun, B.; Radwell, N.; Lamb, R.; Padgett, M.J. Single-pixel three-dimensional imaging with time-based depth resolution. Nat. Commun. 2016, 7, 12010. [Google Scholar] [CrossRef] [PubMed]
  9. Kurtti, S.; Nissinen, J.; Kostamovaara, J. A wide dynamic range CMOS laser radar receiver with a time-domain walk error compensation scheme. IEEE Trans. Circuits Syst. I 2017, 64, 550–561. [Google Scholar] [CrossRef]
  10. Bjorndal, O.; Hamran, S.E.; Lande, T.S. UWB waveform generator for digital CMOS radar. In Proceedings of the IEEE International Symposium on Circuits and Systems (ISCAS), Oregon, Portland, 24–27 May 2015; pp. 1510–1513. [Google Scholar]
  11. Shotton, J.; Fitzgibbon, A.; Cook, M.; Sharp, T.; Finocchio, M.; Moore, R.; Kipman, A.; Blake, A. Real-time human pose recognition in parts from single depth images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Colorado Springs, CO, USA, 20–25 June 2011; pp. 1297–1304. [Google Scholar]
  12. Izadi, S.; Kim, D.; Hilliges, O.; Molyneaux, D.; Newcombe, R.; Kohli, P.; Shotton, J.; Hodges, S.; Freeman, D.; Davison, A. KinectFusion: real-time 3D reconstruction and interaction using a moving depth camera. In Proceedings of the 24th annual ACM symposium on User interface software and technology (UIST), Santa Barbara, CA, USA, 16–19 October 2011; pp. 559–568. [Google Scholar]
  13. Sturm, J.; Engelhard, N.; Endres, F.; Burgard, W.; Cremers, D. A benchmark for the evaluation of RGB-D SLAM systems. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vilamoura, Algarve, Portugal, 7–12 October 2012; pp. 573–580. [Google Scholar]
  14. Schiller, I.; Bartczak, B.; Kellner, F.; Koch, R. Increasing Realism and Supporting Content Planning for Dynamic Scenes in a Mixed Reality System incorporating a Time-of-Flight Camera. J. Virtual Real. Broadcast. 2010, 7, 1–10. [Google Scholar]
  15. Fürsattel, P.; Placht, S.; Balda, M.; Schaller, C.; Hofmann, H.; Maier, A.; Riess, C. A comparative error analysis of current time-of-flight sensors. IEEE Trans. Comput. Imaging 2016, 2, 27–41. [Google Scholar] [CrossRef]
  16. Chiabrando, F.; Chiabrando, R.; Piatti, D.; Rinaudo, F. Sensors for 3D imaging: Metric evaluation and calibration of a CCD/CMOS time-of-flight camera. Sensors 2009, 9, 10080–10096. [Google Scholar] [CrossRef] [PubMed]
  17. Kim, Y.M.; Chan, D.; Theobalt, C.; Thrun, S. Design and calibration of a multi-view TOF sensor fusion system. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), Anchorage, AK, USA, 24–26 June 2008; pp. 1–7. [Google Scholar]
  18. Rapp, H. Experimental and Theoretical Investigation of Correlating TOF-Camera Systems. Master’s Thesis, University of Heidelberg, Heidelberg, Germany, September 2007; pp. 1–85. [Google Scholar]
  19. Huang, T.; Qian, K.; Li, Y. All Pixels Calibration for ToF Camera. In Proceedings of the IOP Conference Series: Earth and Environmental Science, Paris, France, 7–9 February 2018; p. 022164. [Google Scholar]
  20. Fuchs, S.; Hirzinger, G. Extrinsic and depth calibration of ToF-cameras. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), Anchorage, AK, USA, 24–26 June 2008; pp. 1–6. [Google Scholar]
  21. Kuhnert, K.-D.; Stommel, M. Fusion of Stereo-Camera and PMD-Camera Data for Real-Time Suited Precise 3D Environment Reconstruction. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Beijing, China, 9–15 October 2006; pp. 4780–4785. [Google Scholar]
  22. Kahlmann, T.; Remondino, F.; Ingensand, H. Calibration for increased accuracy of the range imaging camera swissrangertm. Image Eng. Vis. Metrol. 2006, 36, 136–141. [Google Scholar]
  23. Lindner, M.; Kolb, A. Lateral and depth calibration of PMD-distance sensors. In Proceedings of the International Symposium on Visual Computing, Lake Tahoe, NV, USA, 6–8 November 2006; pp. 524–533. [Google Scholar]
  24. Schiller, I.; Beder, C.; Koch, R. Calibration of a PMD-camera using a planar calibration pattern together with a multi-camera setup. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2008, 21, 297–302. [Google Scholar]
  25. Schmidt, M. Analysis, Modeling and Dynamic Optimization of 3d Time-of-Flight Imaging Systems. Ph.D. Thesis, University of Heidelberg, Heidelberg, Germany, 20 July 2011; pp. 1–158. [Google Scholar]
  26. Jung, J.; Lee, J.-Y.; Jeong, Y.; Kweon, I.S. Time-of-flight sensor calibration for a color and depth camera pair. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 37, 1501–1513. [Google Scholar] [CrossRef] [PubMed]
  27. Frank, M.; Plaue, M.; Rapp, H.; Köthe, U.; Jähne, B.; Hamprecht, F.A. Theoretical and experimental error analysis of continuous-wave time-of-flight range cameras. Opt. Eng. 2009, 48, 013602. [Google Scholar]
  28. Lindner, M.; Kolb, A. Calibration of the intensity-related distance error of the PMD ToF-camera. In Proceedings of the Intelligent Robots and Computer Vision XXV: Algorithms, Techniques, and Active Vision, Boston, MA, USA, 9–11 September 2007; p. 67640W. [Google Scholar]
  29. May, S.; Werner, B.; Surmann, H.; Pervolz, K. 3D time-of-flight cameras for mobile robotics. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Beijing, China, 9–15 October 2006; pp. 790–795. [Google Scholar]
  30. Gil, P.; Pomares, J.; Torres, F. Analysis and adaptation of integration time in PMD camera for visual servoing. In Proceedings of the 20th International Conference on Pattern Recognition (ICPR), Istanbul, Turkey, 23–26 August 2010; pp. 311–315. [Google Scholar]
  31. Steiger, O.; Felder, J.; Weiss, S. Calibration of time-of-flight range imaging cameras. In Proceedings of the 15th IEEE International Conference on Image Processing (ICIP), San Diego, CA, USA, 12–15 October 2008; pp. 1968–1971. [Google Scholar]
  32. Swadzba, A.; Beuter, N.; Schmidt, J.; Sagerer, G. Tracking objects in 6D for reconstructing static scenes. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), Anchorage, AK, USA, 24–26 June 2008; pp. 1–7. [Google Scholar]
  33. Reynolds, M.; Doboš, J.; Peel, L.; Weyrich, T.; Brostow, G.J. Capturing time-of-flight data with confidence. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Colorado Springs, CO, USA, 20–25 June 2011; pp. 945–952. [Google Scholar]
  34. Pathak, K.; Birk, A.; Poppinga, J. Sub-pixel depth accuracy with a time of flight sensor using multimodal gaussian analysis. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Nice, France, 22–26 September 2008; pp. 3519–3524. [Google Scholar]
  35. Lindner, M.; Lambers, M.; Kolb, A. Sub-pixel data fusion and edge-enhanced distance refinement for 2d/3d images. Int. J. Intell. Syst. Technol. Appl. 2008, 5, 344–354. [Google Scholar] [CrossRef]
  36. Kim, Y.S.; Kang, B.; Lim, H.; Choi, O.; Lee, K.; Kim, J.D.; Kim, C. Parametric model-based noise reduction for ToF depth sensors. In Proceedings of the Three-Dimensional Image Processing (3DIP) and Applications II, Burlingame, CA, USA, 24–26 January 2012; p. 82900A. [Google Scholar]
  37. Kern, F. Supplementing laserscanner geometric data with photogrammetric images for modeling. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2002, 34, 454–461. [Google Scholar]
  38. Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334. [Google Scholar] [CrossRef]
Figure 1. Composition of PMD solid-state array lidar system.
Figure 2. System overview.
Figure 3. Specific working processes of the PMD solid-state array lidar system. (a) The different capacitor charging processes of the same pixel. (b) Charge integration processes for different phase windows.
Figure 4. Sinusoidal-wave modulation and demodulation processes.
Figure 5. Actual sinusoidal modulation signal and its Fourier transform. (a) Actual sinusoidal modulation signal. (b) Fast Fourier transform of the sinusoidal modulation signal.
Figure 6. Rectangular wave modulation and demodulation process.
Figure 7. Schematic of the black-box calibration device.
Figure 8. Calibration diagram of CMOS photodetector array based on the electrical delay method.
Figure 9. Delay phase-locked loop when generating different delays.
Figure 10. Flowchart for the lens distortion correction process for PMD solid-state array lidar.
Figure 11. Schematic diagram of correction of the depth image.
Figure 12. Error in the flight time calculation.
Figure 13. Theoretical error analysis of calibration results at modulation frequencies of 12 MHz and 24 MHz. (a) Modulation frequency of 12 MHz. (b) Modulation frequency of 24 MHz.
Figure 14. Calibration system for the CMOS photoelectric sensor array of the PMD solid-state array lidar system.
Figure 15. Distance calibration results when the modulation frequency was 12 MHz. (a) The calibration curve. (b) Comparison between theoretical analysis results and actual calibration results.
Figure 16. Distance calibration results when the modulation frequency was 24 MHz. (a) The calibration curve. (b) Comparison between theoretical analysis results and actual calibration results.
Figure 17. Experimental device used for modular lens distortion calibration.
Figure 18. Grayscale images of the partial checkerboard patterns.
Figure 19. Three experimental results obtained after distortion correction and pixel adaptive interpolation. (a) Distance images without distortion correction. (b) Distance images after distortion correction.
Figure 20. Distance test platform.
Figure 21. Distance measurement results. (a) Comparison of distance measurements made using the LEICA DISTO A6 and the PMD solid-state array lidar system. (b) Errors in the distance measurement results.
Table 1. Pixel adaptive interpolation strategies for different cases (the original case illustrations are images; each case is identified here by the number of neighboring pixels that carry valid depth values).

Four valid neighbors: $D_0 = (1 - \alpha_x)(1 - \alpha_y)D_1 + (1 - \alpha_x)\alpha_y D_2 + \alpha_x(1 - \alpha_y)D_3 + \alpha_x\alpha_y D_4$
Three valid neighbors: $D_0 = mD_1 + nD_2 + (1 - m - n)D_4$
Two valid neighbors (diagonal): $D_0 = \left(1 - \frac{x_{p0} + y_{p0}}{2}\right)D_1 + \frac{x_{p0} + y_{p0}}{2}D_4$
Two valid neighbors (adjacent): $D_0 = \alpha_y D_4 + (1 - \alpha_y)D_3$
One valid neighbor: $D_0 = D_x$
No valid neighbors: $D_0 = \mathrm{NaN}$
Table 2. Lens internal parameters and distortion coefficients.

Internal parameters: fx = 207.767, fy = 209.308, cx = 174.585, cy = 129.201
Distortion coefficients: k1 = −0.37568, k2 = 0.15729, p1 = 0.00304, p2 = 0.00046
Table 3. Performance comparison. Distance errors (mm) are listed at the test distances 900, 1100, 1300, 1700, 2100, 2500, 3000, 3500, and 4000 mm; "NaN" indicates that no metrical data were available at that distance.

Steiger et al. [31]: 3 mm (at 1207 mm), 25 mm (at 1608 mm), 57 mm (at 2250 mm); NaN elsewhere. Calibration time: about dozens of minutes. Scene scope: not mentioned.
Lindner et al. [28]: 19.4, 28.2, 21.0, 28.9, 13.5, 17.3, 15.9, 21.8, 26.7. Calibration time: about dozens of minutes. Scene scope: about 4 m × 0.6 m × 0.4 m.
Schiller et al. [24] (automatic feature detection): 7.45 (mean). Calibration time: about dozens of minutes. Scene scope: about 3 m × 0.6 m × 0.4 m.
Schiller et al. [24] (some manual feature selection): 7.51 (mean). Calibration time: about dozens of minutes. Scene scope: about 3 m × 0.6 m × 0.4 m.
Jung et al. [26]: 7.18 (mean). Calibration time: 137 s (calculation) plus about dozens of minutes (scene setup). Scene scope: about 3 m × 0.6 m × 1 m.
Our method: 3.37, 4.82, 6.17, 8.30, 9.57, 8.88, 12.79, 10.58, 14.52. Calibration time: 90 s (calculation); 10 min in total (calculation, scene setup, and initialization). Scene scope: about 0.8 m × 0.4 m × 0.3 m.
