Article

Improved Range Estimation Model for Three-Dimensional (3D) Range Gated Reconstruction

1 School of Engineering, Monash University Malaysia, Jalan Lagoon Selatan, 47500 Bandar Sunway, Selangor, Malaysia
2 Faculty of Engineering, Multimedia University, Jalan Multimedia, 63000 Cyberjaya, Selangor, Malaysia
* Author to whom correspondence should be addressed.
Sensors 2017, 17(9), 2031; https://doi.org/10.3390/s17092031
Submission received: 7 June 2017 / Revised: 25 August 2017 / Accepted: 28 August 2017 / Published: 5 September 2017
(This article belongs to the Special Issue Imaging Depth Sensors—Sensors, Algorithms and Applications)

Abstract

Accuracy is an important measure of system performance and remains a challenge in 3D range gated reconstruction despite advances in laser and sensor technology. The weighted average model commonly used for range estimation is heavily influenced by intensity variation arising from various factors. Improving the accuracy of range estimation is therefore important to fully optimise system performance. In this paper, a 3D range gated reconstruction model is derived from the operating principles of range gated imaging and time slicing reconstruction, the fundamentals of radiant energy, Laser Detection And Ranging (LADAR), and the Bidirectional Reflection Distribution Function (BRDF). Accordingly, a new range estimation model is proposed to alleviate the effects induced by distance, target reflection, and range distortion. Experimental results show that the proposed model outperforms the conventional weighted average model and improves range estimation for better 3D reconstruction. The outcome demonstrated is of interest to various laser ranging applications and can serve as a reference for future work.

1. Introduction

Because of its non-contact and non-destructive nature, the laser has been a favoured solution, especially in remote sensing and machine vision [1]. While a variety of techniques suit different applications, the range gated technique provides efficient laser ranging based on the Time-of-Flight (TOF) principle, where distance is determined from the travel time measured between the emitted and reflected laser pulses [2]. In recent years, this technique has become even more cost effective with the continuous development of equipment and processing methods. Owing to its good application prospects, the range gated technique has been widely applied to target detection and recovery [3], night vision [4], underwater imaging [5], and 3D imaging [6].
In spite of advances in equipment, i.e., laser and sensor technology, the accuracy of 3D range gated reconstruction remains a challenge due to various factors in the system. Range estimation must therefore be improved to accomplish accurate reconstruction. A laser signal reflected from the target surface can be exploited using different range estimation algorithms such as the peak estimator/discriminator, leading edge detection, and averaging (centre of mass/gravity determination) [7]. In some applications, a matched filter (cross-correlation) or a maximum likelihood estimator can provide better range estimation based on a reference signal or model [8,9]. Peak detection determines the range from the maximum of the returned signal, where the highest power theoretically coincides with the reflected target. Leading edge detection works with a predefined threshold, the range being determined by the threshold crossing in the returned signal [10]. A matched filter compares the received signal to a reference using a cross-correlation technique. A maximum likelihood estimator performs similarly to a matched filter with respect to the mean squared error (MSE) but exhibits a bias in range estimation when the signal deviates from the assumed model [9].
The range gated technique utilises reflected time slices that simultaneously contain the reflectivity and range information for 3D reconstruction. There are some practical complications to consider. Although the reflected laser pulse is generally a Gaussian function, the actual returned signal is not precisely known since the target scene or object of interest is unknown. In this context, the peak estimator and leading edge detection are clearly unsuitable because of the noisy nature of the reflected laser signal recorded in the image pixels [11]. As for the matched filter and maximum likelihood method, a reference signal or model must be assumed, which is not feasible in some cases; moreover, an inaccurate reference directly affects range accuracy. Because of these considerations and limitations, the weighted average method is commonly applied for range estimation in 3D range gated reconstruction [12,13]. However, the range accuracy of this model is influenced by intensity variation [14].
Essentially, the received intensity is affected by a few main components: the laser source, the sensor, the target, and atmospheric effects [15,16]. In addition, intensity data are also influenced by other factors such as the near-distance effect [17], multiple returns [18], system settings [19], and automatic gain control [20]. Various parameters and processing techniques have been reviewed in the literature [14]. A laser-based solution is complicated and requires considerable post-processing effort because of the characteristics of the laser and their changes along the transmission path. The propagation of the laser beam towards the target and back to the receiver can be described by its spatial-temporal distribution. Since the signal to noise ratio (SNR) is generally proportional to the laser intensity, the ranging performance varies with the laser optical power distribution, which has a non-uniform profile in practice [21]. System performance is often limited by the reduced SNR caused by the decrease in reflected intensity over distance, which restricts the operating range of many applications [22]. Target reflection affects the detection and ranging behaviours that are crucial to applications such as target recognition and object modelling [23,24]. These target reflection characteristics are significant for analysing and solving the complex ranging problem [25]. Laser illumination and optical components introduce additional effects that can affect image formation [26]. As the accuracy of depth and 3D reconstruction relies strongly on the range information, it is necessary to understand the range distortion caused by the inhomogeneity of illumination and to propose an appropriate correction. Recovery of the actual target echo pulse is a well-known problem in range gated systems. The influence of the laser profile [27], distance interference [28], the sensor [29,30], and scattering effects [31] has been discussed in various works.
This paper provides an in-depth analysis of 3D range gated reconstruction and proposes a new range estimation model to alleviate the effects induced by the influence factors. In Section 2, the system set-up, operating principle, and technique used in 3D range gated reconstruction are described. In Section 3, we discuss the range accuracy of a range gated system with time slicing reconstruction and the associated influence factors. To improve the range accuracy, we derive and propose a new range estimation model in Section 4 by considering multiple influence factors in the system. In Section 5, the proposed range estimation model is validated and discussed. Lastly, conclusions are given in Section 6.

2. 3D Range Gated Reconstruction

Figure 1 shows the schematic diagram of a range gated imaging system for 3D reconstruction. The system set-up includes a pulsed laser module, an Intensified Charge-Coupled Device (ICCD) gated camera with its control unit, a delay generator for system triggering and synchronisation, lens assemblies, and power supplies. Additionally, interfaces such as a frame grabber or digital video converter and measurement devices such as a photodetector and oscilloscope are used. The laser and camera are controlled simultaneously by the delay generator during the acquisition of range gated images. A Q-switched Nd:YAG pulsed laser with a wavelength of 532 nm illuminates the target, and the ICCD camera is configured to capture the reflected intensity images that are the input for range and 3D reconstruction.
The delay generator is configured to trigger the camera gate to open for a very short duration, normally nanoseconds or picoseconds, at the designated delay time to capture the reflected laser pulse in the form of a two-dimensional (2D) intensity image. Synchronisation between the laser and the gated camera is particularly important during image acquisition. The camera gate remains closed when the laser pulse is emitted towards the target, and opens at the designated delay time to capture the visible time slice reflected in the form of an intensity image. This operating principle of the image acquisition is illustrated in Figure 2.
Based on the time slicing technique, a sequence of 2D images $i = 1, 2, \ldots, n$ is acquired by sliding the camera gate $t_{gate}$ to delay time $t_i$ with a time step $t_{step}$ to sample range slices from the target scene, as illustrated in Figure 3. The acquired images simultaneously contain reflectivity and range information, which are important for 3D reconstruction.
For each image pixel $(x, y)$, the corresponding range $\langle r \rangle(x, y)$ can be obtained from the average TOF or two-way travel time $\langle t \rangle(x, y)$ to construct a 3D depth map for the entire image field-of-view (FOV):

$$\langle r \rangle(x, y) = \frac{c\, \langle t \rangle(x, y)}{2}. \qquad (1)$$
The average TOF $\langle t \rangle(x, y)$ can be determined from the pixel intensities $I_i(x, y)$ captured over the acquired image sequence using the weighted average method:

$$\langle t \rangle(x, y) = \frac{\sum_{i=1}^{n} I_i(x, y)\, t_i}{\sum_{i=1}^{n} I_i(x, y)}. \qquad (2)$$
Eventually, the calculated 3D depth map and 2D image textures are used for 3D reconstruction of the target scene.
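As an illustration, the following minimal Python sketch (not part of the original work; array shapes and names are assumptions) applies Equations (1) and (2) to a stack of gated intensity slices to form a depth map:

```python
import numpy as np

C = 299_792_458.0  # speed of light (m/s)

def depth_map_weighted_average(stack, t_i):
    """Reconstruct a depth map from a range gated image sequence.

    stack : (n, H, W) array of intensity slices I_i(x, y)
    t_i   : (n,) array of gate delay times (s)

    Implements Equations (1)-(2): the per-pixel average TOF is the
    intensity-weighted mean of the slice delays, and range is c<t>/2.
    """
    stack = stack.astype(np.float64)
    weights = stack.sum(axis=0)  # sum_i I_i(x, y), per pixel
    t_mean = np.einsum('i,ihw->hw', t_i, stack) / np.maximum(weights, 1e-12)
    return C * t_mean / 2.0      # <r>(x, y)
```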

3. Accuracy Analysis

3.1. Range Accuracy

In a range gated system with time slicing reconstruction, the SNR is expressed in terms of the reflected laser intensities $I_i$ and the associated noise $\delta I_i$ from a sequence of image slices $I_i(x, y)$:

$$SNR = \frac{\sum_i I_i}{\sqrt{\sum_i (\delta I_i)^2}}. \qquad (3)$$
Considering random noise, where $(\delta I_i)^2 \propto I_i$, the SNR can be simplified as:

$$SNR \approx \frac{\sum_i I_i}{\sqrt{\sum_i I_i}} = \sqrt{\sum_i I_i}. \qquad (4)$$
Theoretically, the SNR can be estimated from the system parameters as follows:

$$SNR \approx \sqrt{\sum_i I_i} = \sqrt{\frac{\sigma}{t_{step}}\, \max(I_i)}, \qquad (5)$$

where the total intensity in an image pixel $I = \sum_i I_i$ is contributed by the number of time slices, given by the factor $\sigma / t_{step}$. The variance of the measured travel time $\sigma^2$ depends on the laser pulse width and camera gate time, while $t_{step}$ is the delay time step used for image acquisition. Accordingly, the range accuracy is estimated as [12]:

$$\delta r \approx \frac{1}{2} \frac{c\, \sigma}{SNR}. \qquad (6)$$
As can be seen, range accuracy is governed by two parameters: $\sigma$ and $SNR$. In general, $\sigma$ is determined by the system specification, i.e., the laser and camera, so the range accuracy can be improved by hardware advancement. On the other hand, the SNR is proportional to the reflected laser intensity, which can be influenced by other conditions such as distance and target reflection.
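For instance, a short Python check (illustrative only; parameter values are those of the experimental set-up in Table 1) reproduces the accuracy estimate from Equations (5) and (6):

```python
import numpy as np

C = 299_792_458.0  # speed of light (m/s)

sigma  = 9e-9     # std. dev. of measured travel time (s), set by pulse/gate widths
t_step = 100e-12  # gate delay time step (s)
i_max  = 2**8     # maximum pixel intensity (8-bit sensor)

snr = np.sqrt(sigma / t_step * i_max)  # Equation (5)
delta_r = 0.5 * C * sigma / snr        # Equation (6)

print(f"SNR ~ {snr:.2f}")                        # ~151.79
print(f"range accuracy ~ {delta_r*1e3:.2f} mm")  # ~8.9 mm
```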

3.2. Influence Factors Affecting Range Accuracy

The reflected laser intensity captured in an image pixel $I_i$ is the incident energy of the laser pulse $P_r$ integrated while the camera gate $G(t)$ is open, which is expressed as:

$$I_i = \int P_r\!\left(t - \frac{2r}{c}\right) G(t - t_i)\, dt. \qquad (7)$$
The reflected laser pulse $P_r(t)$ is delayed by the round-trip travel time $2r/c$ and the camera gate $G(t)$ is delayed by time $t_i$. The reflected laser energy and its dependence on multiple influence factors can be defined by the Laser Detection And Ranging (LADAR) equation [32]:

$$P_r = \frac{\eta_{sys}\, \eta_{atm}\, D^2\, \rho\, A\, P_t}{r^2\, \theta_R\, (\theta_t\, r)^2}, \qquad (8)$$

where $P_t$ represents the laser energy transmitted across range $r$. $\eta_{sys}$ and $\eta_{atm}$ are the system efficiency and the atmospheric transmission loss factor, respectively. $D$ is the diameter of the receiver aperture and $\rho$ is the target surface reflectivity. $\theta_t$ represents the angular divergence of the transmitted laser beam, and $\theta_R$ is the solid angle over which radiation is dispersed upon reflection.
To simplify the LADAR equation, we assume a well-resolved target where the target area $A$ equals the projected area of the laser beam:

$$A = \frac{\pi\, \theta_t^2\, r^2}{4}. \qquad (9)$$
Accordingly, Equation (8) becomes:

$$P_r = \frac{\pi\, \eta_{sys}\, \eta_{atm}\, D^2}{4 r^2}\, \frac{\rho}{\theta_R}\, P_t. \qquad (10)$$
The system efficiency $\eta_{sys}$, the atmospheric transmission loss factor $\eta_{atm}$, and the receiver parameter $D$ can be regarded as constants under the same set-up condition. On the other hand, the distance $r$ changes along the laser propagation and across the acquisition of different time slices. The reflected laser energy $P_r$ has an inverse range-squared dependency that decreases the intensity captured in the range gated imaging system. The relationship between reflected laser intensity and the distance factor was studied in previous work [28]. As a result, the SNR decreases with distance and causes a higher range error, as deduced from Equation (6). As for $\rho / \theta_R$, these parameters vary across image pixels owing to the target reflection characteristics. The effect of target reflection on ranging performance was investigated in previous work [33]. The reflected laser intensity is proportional to the target surface reflectivity; its amplitude is maximal at an angle of incidence $\theta = 0$ and decreases as the angle increases. Range accuracy depends on the SNR, which is proportional to the reflected laser intensity.
Laser illumination with a diverging lens assembly introduces additional effects on the illuminated scene, the reflection from the target, and the image formation. As illustrated in Figure 4, the laser beam is diverged to cover a target area within $(x_{max}, y_{max})$ with a half diverging cone propagation angle $\phi$. $x$ and $y$ represent the horizontal and vertical position of an image pixel with diverging angle $\theta$.
The deviation from straight-line transmission has a direct effect on the reflection and image geometry. The difference between the orthogonal distance $r$ and the radial distance $r'$ leads to range distortion. The radial distance $r'$ can be written as:

$$r' = \sqrt{x^2 + y^2 + r^2}. \qquad (11)$$
The orthogonal distance $r$ is normally used, assuming that the illumination is at the centre of, or perpendicular to, the image plane. This is the ideal condition in which the laser approximates directional lighting. In reality, pixels within the angular space close to the illumination centre are most likely to receive maximum reflection, while other pixels exhibit variation due to the illuminant direction [26,34]. This results in inhomogeneous illumination, where intensity decreases as pixels move away from the centre of illumination, and causes range distortion.

4. Proposed Range Estimation Model

Theoretically, the system performance can be optimised if the effect of the influence factors can be fully compensated in the range algorithm. Therefore, we propose a new range estimation model in this section. Specular and diffuse reflection are the fundamental reflection mechanisms involved in practically any target surface. We apply the Bidirectional Reflection Distribution Function (BRDF) model to describe the reflection behaviour arising from the interference of multiple parameters [35] as follows:

$$BRDF = BRDF_S + BRDF_D = \frac{K_S}{\cos^6\theta}\, \exp\!\left(-\frac{\tan^2\theta}{s^2}\right) + K_D \cos^m\theta, \qquad (12)$$
where $BRDF_S$ and $BRDF_D$ denote the specular and diffuse components, respectively. $K_S$ and $K_D$ are the specular and diffuse reflection constants, $\theta$ is the angle of incidence and reflection, $s$ is the surface slope, and $m$ is the diffusivity coefficient. These parameters can be estimated from the reflected laser intensity with respect to the angle of incidence. The BRDF parameters vary depending on the target surface properties and reflection characteristics. Comprehensive analysis and discussion of target reflection and the BRDF model are included in previous works [33,35].
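For illustration, a small Python helper (hypothetical; the sign and normalisation of the specular term follow the standard form of this model in [35], reconstructed here as an assumption) evaluates Equation (12):

```python
import numpy as np

def brdf(theta, k_s, k_d, s, m):
    """Evaluate the combined specular + diffuse BRDF of Equation (12).

    theta : angle of incidence/reflection (rad)
    k_s, k_d : specular and diffuse reflection constants
    s : surface slope, m : diffusivity coefficient
    (parameter values would be fitted per target surface)
    """
    specular = k_s / np.cos(theta)**6 * np.exp(-np.tan(theta)**2 / s**2)
    diffuse = k_d * np.cos(theta)**m
    return specular + diffuse
```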
3D range gated reconstruction can be treated as a one-pixel problem where each pixel exhibits the same characteristics with respect to the reflected laser pulse [36]. Thus, the same basic principles of the LADAR range equation apply. The target reflectivity $\rho$ and angular dispersion $\theta_R$ correspond to the target reflection characteristics. Considering the effect of target reflection, we substitute the BRDF model, Equation (12), into the LADAR Equation (10):

$$P_r = \frac{\pi\, \eta_{sys}\, \eta_{atm}\, D^2}{4 r^2} \left[\frac{K_S}{\cos^6\theta}\, \exp\!\left(-\frac{\tan^2\theta}{s^2}\right) + K_D \cos^m\theta\right] P_t. \qquad (13)$$
The temporal function of the transmitted laser pulse $P_t(t)$ is commonly assumed to be Gaussian [12]:

$$P_t(t) = \frac{P_o}{\sqrt{2\pi}\, \sigma_p}\, \exp\!\left(-\frac{t^2}{2\sigma_p^2}\right), \qquad (14)$$
where $P_o$ is the transmitted power and $\sigma_p$ is the standard deviation of the echo pulse. From Equations (13) and (14), the received reflected laser energy $P_r(t)$ is written as:

$$P_r(t) = \frac{\pi\, \eta_{sys}\, \eta_{atm}\, D^2}{4 r^2} \left[\frac{K_S}{\cos^6\theta}\, \exp\!\left(-\frac{\tan^2\theta}{s^2}\right) + K_D \cos^m\theta\right] \frac{P_o}{\sqrt{2\pi}\, \sigma_p}\, \exp\!\left(-\frac{t^2}{2\sigma_p^2}\right). \qquad (15)$$
Based on the time slicing technique used for 3D reconstruction, the summation of radiant energy in an image pixel $I(x, y) = \sum_i I_i(x, y)$ can be equated to the integration over the time slices $dt_i / t_{step}$, since the time step $t_{step}$ is much smaller than the widths of the laser pulse and camera gate [12]:

$$I(x, y) = \sum_i I_i(x, y) = \int I_i(x, y)\, \frac{dt_i}{t_{step}}. \qquad (16)$$
From Equation (7), we further obtain $I(x, y)$ as:

$$I(x, y) = \frac{\int P_r(x, y, t)\, dt\, \int G(\tau)\, d\tau}{t_{step}}. \qquad (17)$$
By substituting $P_r(x, y, t)$ from Equation (15) and assuming $G(\tau) = 1$ for $0 \le \tau \le t_{gate}$, $I(x, y)$ is expressed as:

$$I(x, y) = \frac{\pi\, \eta_{sys}\, \eta_{atm}\, D^2}{4 r^2\, t_{step}} \left[\frac{K_S}{\cos^6\theta}\, \exp\!\left(-\frac{\tan^2\theta}{s^2}\right) + K_D \cos^m\theta\right] \int \frac{P_o}{\sqrt{2\pi}\, \sigma_p}\, \exp\!\left(-\frac{t^2}{2\sigma_p^2}\right) dt \int_0^{t_{gate}} d\tau. \qquad (18)$$
We further simplify $I(x, y)$ to:

$$I(x, y) = \frac{\pi\, \eta_{sys}\, \eta_{atm}\, D^2}{4 r^2} \left[\frac{K_S}{\cos^6\theta}\, \exp\!\left(-\frac{\tan^2\theta}{s^2}\right) + K_D \cos^m\theta\right] \frac{P_o\, t_{gate}}{t_{step}}. \qquad (19)$$
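As a quick numerical check (synthetic values; the constant prefactor common to Equations (7) and (19) is omitted), summing the gated integrals of Equation (7) over the slice delays recovers the closed-form factor $P_o\, t_{gate} / t_{step}$ of Equation (19):

```python
import numpy as np

Po, sigma_p = 1.0, 2e-9         # pulse energy and width (illustrative)
t_gate, t_step = 5e-9, 100e-12  # gate width and delay step (s)

t = np.linspace(-40e-9, 40e-9, 20001)  # time axis (s)
dt = t[1] - t[0]
pulse = Po / (np.sqrt(2*np.pi) * sigma_p) * np.exp(-t**2 / (2*sigma_p**2))

total = 0.0
for ti in np.arange(-30e-9, 30e-9, t_step):   # slide the gate delay
    gate = ((t >= ti) & (t <= ti + t_gate)).astype(float)
    total += np.sum(pulse * gate) * dt        # one slice of Equation (7)

print(total, Po * t_gate / t_step)            # both ~50, as Equation (19) predicts
```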
In Equation (19), the camera gate $t_{gate}$ and time step $t_{step}$ are fixed during image acquisition. The compensation factor $\alpha$ for the intensity received in an image pixel can then be written as:

$$\alpha = \left(\frac{\pi\, \eta_{sys}\, \eta_{atm}\, D^2}{4 r^2} \left[\frac{K_S}{\cos^6\theta}\, \exp\!\left(-\frac{\tan^2\theta}{s^2}\right) + K_D \cos^m\theta\right]\right)^{-1}. \qquad (20)$$
The average two-way travel time for an image pixel $\langle t \rangle(x, y)$, based on the intensity captured over the time slices, can be obtained by applying the compensation factor $\alpha$:

$$\langle t \rangle(x, y) = \frac{\sum_{i=1}^{n} \left(\frac{\pi\, \eta_{sys}\, \eta_{atm}\, D^2}{4 r^2} \left[\frac{K_S}{\cos^6\theta}\, \exp\!\left(-\frac{\tan^2\theta}{s^2}\right) + K_D \cos^m\theta\right]\right)^{-1} I_i(x, y)\, t_i}{\sum_{i=1}^{n} \left(\frac{\pi\, \eta_{sys}\, \eta_{atm}\, D^2}{4 r^2} \left[\frac{K_S}{\cos^6\theta}\, \exp\!\left(-\frac{\tan^2\theta}{s^2}\right) + K_D \cos^m\theta\right]\right)^{-1} I_i(x, y)}. \qquad (21)$$
Under the same set-up condition, the system efficiency $\eta_{sys}$, the atmospheric transmission loss factor $\eta_{atm}$ caused by absorption and scattering, and the receiver parameter $D$ can be regarded as constants and cancelled to simplify the equation. On the other hand, the range-squared factor $r^2$ varies across the time slices, and the target reflection can vary with the characteristics of the target scene according to the BRDF model. Therefore, Equation (21) can be written as:

$$\langle t \rangle(x, y) = \frac{\sum_{i=1}^{n} \left[\frac{K_S}{\cos^6\theta}\, \exp\!\left(-\frac{\tan^2\theta}{s^2}\right) + K_D \cos^m\theta\right]^{-1} r_i^2\, I_i(x, y)\, t_i}{\sum_{i=1}^{n} \left[\frac{K_S}{\cos^6\theta}\, \exp\!\left(-\frac{\tan^2\theta}{s^2}\right) + K_D \cos^m\theta\right]^{-1} r_i^2\, I_i(x, y)}. \qquad (22)$$
Equation (22) can be further simplified, where the variable $r_i$ corresponds to the range value of the particular $i$-th time slice at delay time $t_i$:

$$\langle t \rangle(x, y) = \frac{\sum_{i=1}^{n} \left[\frac{K_S}{\cos^6\theta}\, \exp\!\left(-\frac{\tan^2\theta}{s^2}\right) + K_D \cos^m\theta\right]^{-1} \left(\frac{c\, t_i}{2}\right)^2 I_i(x, y)\, t_i}{\sum_{i=1}^{n} \left[\frac{K_S}{\cos^6\theta}\, \exp\!\left(-\frac{\tan^2\theta}{s^2}\right) + K_D \cos^m\theta\right]^{-1} \left(\frac{c\, t_i}{2}\right)^2 I_i(x, y)}. \qquad (23)$$
Accordingly, we obtain the average time $\langle t \rangle(x, y)$ and range $\langle r \rangle(x, y)$ as:

$$\langle t \rangle(x, y) = \frac{\sum_{i=1}^{n} \left[\frac{K_S}{\cos^6\theta}\, \exp\!\left(-\frac{\tan^2\theta}{s^2}\right) + K_D \cos^m\theta\right]^{-1} I_i(x, y)\, t_i^3}{\sum_{i=1}^{n} \left[\frac{K_S}{\cos^6\theta}\, \exp\!\left(-\frac{\tan^2\theta}{s^2}\right) + K_D \cos^m\theta\right]^{-1} I_i(x, y)\, t_i^2}, \qquad (24)$$

$$\langle r \rangle(x, y) = \frac{c\, \langle t \rangle(x, y)}{2} = \frac{\sum_{i=1}^{n} \left[\frac{K_S}{\cos^6\theta}\, \exp\!\left(-\frac{\tan^2\theta}{s^2}\right) + K_D \cos^m\theta\right]^{-1} I_i(x, y)\, r_i^3}{\sum_{i=1}^{n} \left[\frac{K_S}{\cos^6\theta}\, \exp\!\left(-\frac{\tan^2\theta}{s^2}\right) + K_D \cos^m\theta\right]^{-1} I_i(x, y)\, r_i^2}. \qquad (25)$$
For the distortion due to inhomogeneous illumination, two assumptions are made. Firstly, the distortion ratio is unity at the centre of illumination, i.e., $r' = r$, and the distortion effect increases as the image pixel $(x, y)$ moves radially from the centre. This radial distortion can be expressed as [37]:

$$r_d = e + \lambda\, (r_u - e), \qquad (26)$$
where $r_d$ and $r_u$ are the distorted and undistorted points, $e$ indicates the centre of distortion, and $\lambda$ denotes the distortion ratio. In our case, the centre of distortion is assumed to coincide with the centre of illumination, which is the image centre, i.e., $e = 0$. The orthogonal distance $r$ is considered the distorted point $r_d$ and the radial distance $r'$ the undistorted point $r_u$. The distortion ratio equals unity near the image centre, giving $r' = r$. The distortion between the radial distance $r'$ and the orthogonal distance $r$ can thus be modelled as:

$$r = \lambda\, r'. \qquad (27)$$
We propose the distortion ratio $\lambda$ as a function of the angular difference between $r'$ and $r$, which is bounded by the maximum diverging angle $\phi$. $\lambda$ can be modelled to decrease with the angular difference $\theta(x, y)$, where $0 < \lambda \le 1$:

$$\lambda = \frac{r}{r'} = \cos\theta(x, y). \qquad (28)$$
The distortion effect due to inhomogeneous illumination can be regarded as radially symmetric because lenses are typically ground to be circularly symmetric. Thus we make the second assumption that the distortion is radially symmetric within the illuminated area $(x_{max}, y_{max})$. $\theta(x, y)$ can be determined from the position of the image pixel $(x, y)$ relative to the centre of illumination $(0, 0)$ and the maximum diverging angle $\phi$:

$$\theta(x, y) = \left(\frac{\sqrt{x^2 + y^2}}{\sqrt{x_{max}^2 + y_{max}^2}}\right) \phi. \qquad (29)$$
Accordingly, we formulate the distortion ratio $\lambda$ as:

$$\lambda = \cos\theta(x, y) = \cos\!\left[\left(\frac{\sqrt{x^2 + y^2}}{\sqrt{x_{max}^2 + y_{max}^2}}\right) \phi\right]. \qquad (30)$$
Applying the distortion correction, the radial distance $r'$ is obtained from Equation (27) as:

$$r' = \frac{r}{\lambda}. \qquad (31)$$
From Equations (25) and (30), the average range is then corrected as $\langle r \rangle'(x, y)$. Eventually, we propose the range estimation model:

$$\langle r \rangle'(x, y) = \frac{1}{\cos\!\left[\left(\frac{\sqrt{x^2 + y^2}}{\sqrt{x_{max}^2 + y_{max}^2}}\right) \phi\right]}\; \frac{\sum_{i=1}^{n} \left[\frac{K_S}{\cos^6\theta}\, \exp\!\left(-\frac{\tan^2\theta}{s^2}\right) + K_D \cos^m\theta\right]^{-1} I_i(x, y)\, r_i^3}{\sum_{i=1}^{n} \left[\frac{K_S}{\cos^6\theta}\, \exp\!\left(-\frac{\tan^2\theta}{s^2}\right) + K_D \cos^m\theta\right]^{-1} I_i(x, y)\, r_i^2}. \qquad (32)$$
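Putting the pieces together, the following Python sketch (illustrative; variable names and the per-pixel interface are assumptions, and a uniform-surface BRDF weight of 1 mirrors the validation in Section 5) implements Equation (32) for a single pixel:

```python
import numpy as np

def proposed_range(I_i, r_i, x, y, x_max, y_max, phi, brdf_weight=1.0):
    """Proposed range estimate for one pixel, per Equation (32).

    I_i   : (n,) reflected intensities at this pixel over the time slices
    r_i   : (n,) slice ranges r_i = c * t_i / 2
    (x, y): pixel position relative to the illumination centre
    phi   : half diverging angle of the laser cone
    brdf_weight : BRDF term of Equation (12); 1.0 assumes a uniform surface

    Weights intensities by r_i^2 / BRDF to undo range and reflection
    attenuation, then divides by the distortion ratio lambda = cos(theta).
    """
    w = I_i * r_i**2 / brdf_weight
    r_compensated = np.sum(w * r_i) / np.sum(w)   # Equation (25)
    theta_xy = np.sqrt(x**2 + y**2) / np.sqrt(x_max**2 + y_max**2) * phi
    return r_compensated / np.cos(theta_xy)       # correction via Equation (30)
```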

5. Model Validation and Results Analysis

Based on the range gated imaging system specification, we can estimate the range accuracy using Equation (6), as explained in Section 3. Table 1 shows the experimental set-up specification used and the resulting range accuracy $\delta r \approx 8.894$ mm in the ideal scenario. The corresponding depth error can be calculated as:

$$\%\ \text{depth error} = \frac{\delta r}{\text{actual object depth}} \times 100\%, \qquad (33)$$
where $\delta r$ is the range accuracy, i.e., the deviation of the reconstructed depth from the actual object depth reference, which is measured manually with a ruler. In our experimental study, Object 1 and Object 2 are tested, where Object 1 has higher reflectivity than Object 2. Figure 5 shows the raw grayscale images of the test objects acquired using the range gated imaging system. The actual object depths of the test objects are 0.48 m and 0.4 m, respectively, measured from the background. With the system range accuracy $\delta r \approx 8.894$ mm, the depth error calculated from Equation (33) is estimated to be approximately 1.85% for Object 1 and 2.22% for Object 2. This unavoidable error, calculated from the ideal experimental set-up specification, is a rough estimate, and deviation from it is expected.
The range value for each pixel is calculated from the sequence of 2D images of the target scene using the conventional weighted average [12], range compensation [28], and the proposed range estimation model. From Equation (2), the weighted average range is determined as:

$$\langle r \rangle(x, y) = \frac{c\, \langle t \rangle(x, y)}{2} = \frac{\sum_{i=1}^{n} I_i(x, y)\, r_i}{\sum_{i=1}^{n} I_i(x, y)}. \qquad (34)$$
Meanwhile, the range compensation model is expressed as:

$$\langle r \rangle(x, y) = \frac{\sum_{i=1}^{n} I_i(x, y)\, r_i^3}{\sum_{i=1}^{n} I_i(x, y)\, r_i^2}. \qquad (35)$$
The proposed range estimate is calculated from Equation (32). The calculated ranges for all image pixels eventually reconstruct the 3D surface of the test object. The 3D depth map shows the distance of the target scene from the camera. Accordingly, the reconstructed object depth can be determined from the difference between the maximum and minimum depth values. The absolute depth error between the reconstructed object depth and the actual object depth is calculated as follows:

$$\%\ \text{absolute depth error} = \frac{\left|\text{reconstructed object depth} - \text{actual object depth}\right|}{\text{actual object depth}} \times 100\%. \qquad (36)$$
The evaluation results are summarised in Table 2 and compared to the error estimated from the range gated imaging set-up specification in the ideal scenario. As the test objects have a homogeneous surface material, target reflection BRDF compensation is not considered, since the reflectivity is assumed to be uniform throughout the object surfaces. With the conventional weighted average method, depth errors of 12.65% and 14.11% are observed for Object 1 and Object 2. With the range compensation model proposed in previous work, the depth errors of the objects are reduced to 5.42% and 8.11%. The proposed range estimation model further improves the reconstruction, reducing the depth errors to 2.26% and 2.93% for Object 1 and Object 2, respectively. The proposed model thus yields a smaller depth error than the commonly used weighted average model and further refines the range estimation to give better accuracy than the range compensation model. Figure 6 shows a graphical comparison of the 3D surface reconstruction based on the weighted average, range compensation, and proposed range estimation models for Object 1 and Object 2. As can be seen for Object 2, which is less reflective, the proposed model succeeds in reconstructing the eyebrows and the right eye. It also gives a more uniform reconstruction in the neck area compared to the weighted average method, and the reconstructed background of the overall scene is more uniform as well. From the results, the proposed range estimation model outperforms the conventional weighted average and range compensation models, giving a better range estimation for 3D range gated reconstruction.
In addition, various target surface materials are tested. 3D range gated reconstruction can be treated as a one-pixel problem where each pixel exhibits the same characteristics with respect to the reflected laser pulse [38]. Therefore, we perform an experimental study based on the reflected laser pulse using the set-up shown in Figure 7. The laser is emitted towards a flat target surface and returns a reflected signal. Photodetectors detect the emitted and reflected laser pulses to determine the range, which is the distance between the target surface and the photodetector. This experiment evaluates the ranging performance for different target surface materials based on the single-pixel principle, where some effects, such as camera efficiency and noise, are excluded. Results based on 30 measurements are summarised in Figure 8. The range error shown refers to the deviation between the calculated range and the actual range measured manually with a ruler. The resulting range error of the proposed range estimation model is compared to those of the conventional weighted average [12] and range compensation models [28]. In general, the range error observed with the weighted average method is higher for weakly reflective target surface materials such as wood. The range compensation model is able to improve the range determination, but not in all cases. From the results, it can be seen that the proposed model consistently reduces the range error, with a lower relative standard error, for the tested target surfaces.
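As a sanity check of the estimator behaviour described above, the following illustrative Python snippet (synthetic values, not the paper's data) simulates a single-pixel Gaussian echo with $1/r^2$ intensity falloff and compares Equations (34), (35), and (32):

```python
import numpy as np

# Synthetic single-pixel test: a Gaussian echo centred at r_true is
# sampled over range slices, attenuated by the 1/r^2 falloff of Eq. (10).
r_true, sigma_r = 5.0, 0.15                  # metres (illustrative)
r_i = np.linspace(4.0, 6.0, 200)             # slice ranges
I_i = np.exp(-(r_i - r_true)**2 / (2 * sigma_r**2)) / r_i**2

weighted_avg = np.sum(I_i * r_i) / np.sum(I_i)              # Eq. (34)
range_comp = np.sum(I_i * r_i**3) / np.sum(I_i * r_i**2)    # Eq. (35)
# On-axis pixel (theta = 0), uniform surface: the proposed model of
# Eq. (32) reduces to the range-compensated form with lambda = 1.
proposed = range_comp / np.cos(0.0)

for name, r in [("weighted average", weighted_avg),
                ("range compensation", range_comp),
                ("proposed", proposed)]:
    print(f"{name}: error = {abs(r - r_true) * 1000:.2f} mm")
```

The weighted average is biased towards nearer slices, which the $1/r^2$ falloff makes brighter, while the compensated estimators recover the true centroid; off-axis pixels would additionally receive the distortion correction.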

6. Conclusions

The performance of 3D range gated reconstruction is strongly influenced by various factors in the system, despite advances in lasers, sensors, signal processing, and computer technology. In this paper, detailed modelling of 3D range gated reconstruction and an analysis of range accuracy are presented to provide a comprehensive understanding of the influence factors and their impact on system performance. Correspondingly, we propose a new range estimation model that addresses these influence factors to accomplish accurate reconstruction.
Based on the operating principle of the time slicing technique, the fundamentals of radiant energy, LADAR, and the BRDF, a theoretical derivation of the range gated reconstruction model is presented. The derived model shows the relationships and dependencies of various parameters with respect to the reflected laser intensity, SNR, and range accuracy. Accordingly, the range estimation algorithm is improved by considering the energy attenuation and intensity variation due to distance and target reflection, as well as the range distortion caused by inhomogeneous illumination. The experimental results show that the proposed range estimation model gives a noticeable improvement over the conventional weighted average model, which supports the validity of the presented formulation. Compared to the accuracy estimated from the set-up specification, the proposed model achieves comparable performance.
In the future, the proposed model can serve as a reference for ranging improvement, contributing to miscellaneous applications. The range gated reconstruction presented in this study relies strongly on the reflectivity and range/time information associated with the captured images. It is therefore not suitable for non-reflective targets and dark surfaces, where weak or absent laser intensity returns are encountered. For example, a black object absorbs the incoming laser and reflects little or none of it.

Acknowledgments

This material is based upon work supported by the Air Force Office of Scientific Research under award number FA2386-16-1-4115.

Author Contributions

This research work was mainly conducted by Sing Yee Chua, supervised by Xin Wang, Ningqun Guo, and Ching Seong Tan. They provided guidance, input, and support over the course of the research activity. Xin Wang is the principal investigator of the project.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Feng, D.; Feng, M.Q.; Ozer, E.; Fukuda, Y. A vision-based sensor for noncontact structural displacement measurement. Sensors 2015, 15, 16557–16575.
2. Sansoni, G.; Trebeschi, M.; Docchio, F. State-of-the-art and applications of 3D imaging sensors in industry, cultural heritage, medicine, and criminal investigation. Sensors 2009, 9, 568–601.
3. Kahlmann, T.; Remondino, F.; Guillaume, S. Range imaging technology: New developments and applications for people identification and tracking. In Proceedings of the 2007 Electronic Imaging, San Jose, CA, USA, 28 January–1 February 2007.
4. Wang, X.W.; Zhou, Y.; Fan, S.T.; He, J.; Liu, Y.L. Range-gated laser stroboscopic imaging for night remote surveillance. Chin. Phys. Lett. 2010, 27, 094203.
5. Massot-Campos, M.; Oliver-Codina, G. Optical sensors and methods for underwater 3D reconstruction. Sensors 2015, 15, 31525–31557.
6. Matwyschuk, A. Direct method of three-dimensional imaging using the multiple-wavelength range-gated active imaging principle. Appl. Opt. 2016, 55, 3782–3786.
7. Jutzi, B.; Stilla, U. Simulation and analysis of full-waveform laser data of urban objects. In Proceedings of the 2007 Urban Remote Sensing Joint Event, Paris, France, 11–13 April 2007; pp. 1–5.
8. Allen, B.; Anderson, W.G.; Brady, P.R.; Brown, D.A.; Creighton, J.D. FINDCHIRP: An algorithm for detection of gravitational waves from inspiraling compact binaries. Phys. Rev. D 2012, 85, 122006.
9. Jordan, S. Range estimation algorithms comparison in simulated 3-D flash LADAR data. In Proceedings of the 2009 IEEE Aerospace Conference, Big Sky, MT, USA, 7–14 March 2009; pp. 1–7.
10. Jutzi, B.; Stilla, U. Laser pulse analysis for reconstruction and classification of urban objects. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2003, 34, 151–156.
11. Cao, J.; Hao, Q.; Cheng, Y.; Peng, Y.; Zhang, K.; Mu, J.; Wang, P. Differential time domain method improves performance of pulsed laser ranging and three-dimensional imaging. Appl. Opt. 2016, 55, 360–367.
12. Busck, J.; Heiselberg, H. Gated viewing and high-accuracy three-dimensional laser radar. Appl. Opt. 2004, 43, 4705–4710.
13. Li, L.; Wu, L.; Wang, X.; Dang, E. Gated viewing laser imaging with compressive sensing. Appl. Opt. 2012, 51, 2706–2712.
14. Kashani, A.G.; Olsen, M.J.; Parrish, C.E.; Wilson, N. A review of LIDAR radiometric processing: From ad hoc intensity correction to rigorous radiometric calibration. Sensors 2015, 15, 28099–28128.
15. Höfle, B.; Pfeifer, N. Correction of laser scanning intensity data: Data and model-driven approaches. ISPRS J. Photogramm. Remote Sens. 2007, 62, 415–433.
16. Chua, S.Y.; Wang, X.; Guo, N.; Tan, C.S. Theoretical and experimental investigation into the influence factors for range gated reconstruction. Photonic Sens. 2016, 6, 359–365.
17. Fang, W.; Huang, X.; Zhang, F.; Li, D. Intensity correction of terrestrial laser scanning data by estimating laser transmission function. IEEE Trans. Geosci. Remote Sens. 2015, 53, 942–951.
18. Richter, K.; Stelling, N.; Maas, H. Correcting attenuation effects caused by interactions in the forest canopy in full-waveform airborne laser scanner data. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2014, 40, 273–280.
19. Yan, W.Y.; Shaker, A. Radiometric normalization of overlapping LiDAR intensity data for reduction of striping noise. Int. J. Digit. Earth 2016, 9, 649–661.
20. Vain, A.; Yu, X.; Kaasalainen, S.; Hyyppa, J. Correcting airborne laser scanning intensity data for automatic gain control effect. IEEE Geosci. Remote Sens. Lett. 2010, 7, 511–514.
21. Hafiane, M.L.; Wagner, W.; Dibi, Z.; Manck, O. Depth resolution enhancement technique for CMOS time-of-flight 3-D image sensors. IEEE Sens. J. 2012, 12, 2320–2327.
22. Regtien, P. Sensors for Mechatronics; Elsevier: Amsterdam, The Netherlands, 2012.
23. Johnson, S.E. Target detection with randomized thresholds for lidar applications. Appl. Opt. 2012, 51, 4139–4150.
24. Liu, G.H.; Liu, X.Y.; Feng, Q.Y. 3D shape measurement of objects with high dynamic range of surface reflectivity. Appl. Opt. 2011, 50, 4557–4565.
25. Chevalier, T.R.; Steinvall, O.K. Laser radar modeling for simulation and performance evaluation. In Proceedings of the Electro-Optical Remote Sensing, Photonic Technologies, and Applications III, Berlin, Germany, 1–3 September 2009.
26. Pentland, A.P. Finding the illuminant direction. J. Opt. Soc. Am. 1982, 72, 448–455.
27. Wang, X.; Li, Y.; Zhou, Y. Triangular-range-intensity profile spatial-correlation method for 3D super-resolution range-gated imaging. Appl. Opt. 2013, 52, 7399–7406.
28. Chua, S.Y.; Wang, X.; Guo, N.; Tan, C.S. Range compensation for accurate 3D imaging system. Appl. Opt. 2016, 55, 153–158.
29. Fu, B.; Yang, K.; Rao, J.; Xia, M. Analysis of MCP gain selection for underwater range-gated imaging applications based on ICCD. J. Mod. Opt. 2010, 57, 408–417.
30. Chua, S.Y.; Wang, X.; Guo, N.; Tan, C.S.; Chai, T.Y.; Seet, G. Improving three-dimensional (3D) range gated reconstruction through time-of-flight (TOF) imaging analysis. J. Eur. Opt. Soc. Rapid Publ. 2016, 11, 16015.
31. Laurenzis, M.; Christnacher, F.; Monnin, D.; Scholz, T. Investigation of range-gated imaging in scattering environments. Opt. Eng. 2012, 51, 061303.
32. Richmond, R.D.; Cain, S.C. Direct-Detection LADAR Systems; SPIE Press: Bellingham, WA, USA, 2010.
33. Chua, S.Y.; Wang, X.; Guo, N.; Tan, C.S. Influence of target reflection on three-dimensional range gated reconstruction. Appl. Opt. 2016, 55, 6588–6595.
34. Hara, K.; Nishino, K. Variational estimation of inhomogeneous specular reflectance and illumination from a single view. J. Opt. Soc. Am. A 2011, 28, 136–146.
35. Steinvall, O. Effects of target shape and reflection on laser radar cross sections. Appl. Opt. 2000, 39, 4381–4391.
36. Kim, S.; Lee, I.; Kwon, Y.J. Simulation of a Geiger-mode imaging LADAR system for performance assessment. Sensors 2013, 13, 8461–8489.
37. Hartley, R.; Kang, S.B. Parameter-free radial distortion correction with center of distortion estimation. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 29, 1309–1321.
38. Andersson, P. Long-range three-dimensional imaging using range-gated laser radar images. Opt. Eng. 2006, 45, 034301.
Figure 1. Schematic diagram of range gated imaging system setup.
Figure 2. Synchronisation of the laser and camera during range gated image acquisition.
Figure 3. Time slicing technique captures a sequence of intensity images for 3D reconstruction.
Figure 4. Illustration of range distortion.
Figure 5. Raw image of the test objects captured by the range gated imaging system. (a) Raw image of Object 1. (b) Raw image of Object 2.
Figure 6. 3D surface reconstruction based on the conventional weighted average, range compensation, and the proposed range estimation model for test objects. (a) Weighted average model for Object 1. (b) Weighted average model for Object 2. (c) Range compensation model for Object 1. (d) Range compensation model for Object 2. (e) Proposed range estimation model for Object 1. (f) Proposed range estimation model for Object 2.
Figure 7. Schematic diagram of the experimental set-up for reflected laser investigation.
Figure 8. Range error comparison for various target surfaces calculated based on the conventional weighted average, range compensation, and the proposed range estimation model.
Table 1. Range accuracy estimation based on the experimental set-up specification.

Specification/Parameter | Value
Laser pulse width | 4 ns
Camera gate time | 5 ns
$\sigma$ | ≈9 ns
$t_{step}$ | 100 ps
$\max(I_i)$ | $2^8$
$SNR$ | ≈151.79
$\delta r$ | ≈8.894 mm
Table 2. Absolute depth error (%) calculated based on the conventional weighted average, range compensation, and the proposed range estimation model as compared to the estimated depth error per set-up specification in the ideal scenario.

Test Object | Depth Error per Set-up Specification | Weighted Average Model | Range Compensation Model | Proposed Range Estimation Model
Object 1 | 1.85% | 12.65% | 5.42% | 2.26%
Object 2 | 2.22% | 14.11% | 8.11% | 2.93%
