A 3D Compensation Method for the Systematic Errors of Kinect V2
Abstract
1. Introduction
- (1) Multiple types of error control: many studies address only one or two types of error in depth camera data and ignore the others. For example, several of the aforementioned compensation models do not treat potential model errors, which leads to inaccurate systematic error compensation.
- (2) Model error control: research based on 3D calibration fields remains limited, and a higher-accuracy 3D error compensation model for RGB-D cameras still needs to be explored.
- (1) A new method that simultaneously handles three types of errors in RGB-D cameras is proposed for the first time. First, Bayes SAmple Consensus (BaySAC) is used to eliminate the gross errors generated in RGB-D camera processing. Then, the remaining, slightly larger accidental errors are controlled by the iteration method with variable weights. Finally, a new optimal model is established to compensate for the systematic error. The proposed method therefore controls the gross error, accidental error, and systematic error simultaneously in RGB-D camera (i.e., Kinect V2) calibration.
- (2) 3D systematic error compensation models of a Kinect V2 depth camera, based on a strict 3D calibration field, are established and optimized for the first time. Stepwise regression is used to optimize the parameters of these models, and the optimal model is then selected to compensate the depth camera, which reduces the error caused by improper model selection, i.e., the model error or overparameterization problem.
2. Materials and Methods
2.1. Algorithm Processing Flow
- (1) Acquire the raw image data. Color images and depth images are taken of an indoor 3D calibration field, and corresponding points are then extracted from each.
- (2) Preprocess the RGB-D data. The subpixel-accurate 2D coordinates of the marker points [31] are extracted from the color image and the corresponding depth image and transformed to obtain 3D points in the RGB-D camera coordinate system.
- (3) Match the 3D point pairs. The marker points from the 3D calibration field and the RGB-D camera are matched to obtain 3D corresponding point pairs, which are used to calculate the depth error.
- (4) Obtain inliers. The inliers are determined by BaySAC, and gross errors are thereby eliminated.
- (5) Compute the parameters of the 3D transformation model from the inliers. The iteration method with variable weights is used to compute the model parameters, and residuals with slightly larger accidental errors are controlled (a minimal sketch of this reweighting scheme follows this list).
- (6) Establish 3D compensation models of the systematic error. Three types of error compensation model are established, and the parameters of these models are calculated by stepwise regression to avoid overparameterization.
- (7) Select the optimal model to compensate for the systematic error. The accuracy of the models is evaluated, and the optimal model is then selected to calibrate the RGB-D camera.
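To make step (5) concrete, the following is a minimal sketch of how the iteration method with variable weights could be applied to the 3D registration step. It assumes a rigid (rotation plus translation) transform and an illustrative residual-based weight function; the paper's actual transformation model (Formula (1)) and weighting scheme may differ, and the function names are illustrative only.

```python
# Sketch only: variable-weight (iteratively reweighted) estimation of a rigid
# 3D transform between matched point pairs. The weight function and the
# factor k are illustrative assumptions, not the authors' exact choices.
import numpy as np

def weighted_rigid_transform(src, dst, w):
    """Weighted Kabsch solution for R, t minimizing sum_i w_i * ||R p_i + t - q_i||^2."""
    w = w / w.sum()
    mu_s = (w[:, None] * src).sum(axis=0)
    mu_d = (w[:, None] * dst).sum(axis=0)
    H = (w[:, None] * (src - mu_s)).T @ (dst - mu_d)   # weighted cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t

def register_with_variable_weights(src, dst, n_iter=10, k=2.0):
    """Down-weight point pairs with large residuals on each iteration."""
    w = np.ones(len(src))
    for _ in range(n_iter):
        R, t = weighted_rigid_transform(src, dst, w)
        res = np.linalg.norm(src @ R.T + t - dst, axis=1)
        sigma = np.sqrt((w * res**2).sum() / max(w.sum(), 1e-12))
        # Keep small residuals at full weight; shrink weights of residuals above k*sigma.
        w = np.where(res <= k * sigma, 1.0, (k * sigma / np.maximum(res, 1e-12)) ** 2)
    return R, t, w
```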
2.2. Data Acquisition and Preprocessing of an RGB-D Camera
2.3. 3D Coordinate Transformation Relationship
2.4. Inlier Determination Based on BaySAC
- (1) Calculate the maximum number of iterations K using Formula (5). In general, the value of w is uncertain, so it is estimated as a relatively small number to obtain the expected set of inliers within a moderate number of iterations; in this experiment, w is initially estimated to be approximately 80%. The significance level p should be less than 0.05 to ensure that the selected points are inliers during the iteration.
- (2) Determine the Bayesian prior probability of each 3D control point using Formula (6).
- (3) Compute the parameters of the hypothesis model. From the measurement point set, the n points with the highest inlier probabilities are selected as the hypothesis set and used to calculate the parameters of the hypothesis model.
- (4) Eliminate outliers. The remaining points in the dataset are substituted into Formula (1), and the residual is obtained as the difference between the two sides of the equation. Inliers and outliers are distinguished by this value, and outliers whose distance exceeds the threshold are removed.
- (5) Update the inlier probability of each point. The inlier probability is updated according to the Bayesian update rule of [35], and the next iteration begins.
- (6) Repeat the iterations. Steps 3–5 are repeated until the maximum number of iterations K is reached.
- (7) Acquire the optimized inliers. The model parameters with the largest number of inliers are used to classify the points once more, yielding the optimized set of inliers (a minimal sketch of the whole procedure follows this list).
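The sketch below illustrates one possible realization of the BaySAC procedure described above, under stated assumptions: the iteration bound uses the standard sample-consensus formula (assumed for Formula (5)), the prior is taken as a uniform w (assumed for Formula (6)), the probability update follows the general BaySAC rule of Botterill et al. [35], and `fit_transform` and `point_residuals` are hypothetical stand-ins for the 3D transformation model of Formula (1).

```python
# Sketch only: BaySAC inlier determination with hypothetical model callbacks.
import numpy as np

def max_iterations(w=0.8, n=3, p=0.05):
    # Assumed form of Formula (5): after K draws, the probability of never
    # having drawn an all-inlier hypothesis set falls below p.
    return int(np.ceil(np.log(p) / np.log(1.0 - w**n)))

def baysac(src, dst, fit_transform, point_residuals,
           n=3, w=0.8, p=0.05, threshold=0.05):
    N = len(src)
    prob = np.full(N, w)                     # assumed uniform Bayesian prior (Formula (6))
    best_count, best_model = -1, None
    for _ in range(max_iterations(w, n, p)):
        hyp = np.argsort(prob)[-n:]          # n points with the highest inlier probabilities
        model = fit_transform(src[hyp], dst[hyp])
        res = point_residuals(model, src, dst)
        inliers = res < threshold
        if inliers.sum() > best_count:
            best_count, best_model = inliers.sum(), model
        if (res[hyp] >= threshold).any():    # hypothesis set contained an outlier
            # BaySAC-style Bayes update: down-weight the points of the failed set.
            prod_all = np.prod(prob[hyp])
            for i in hyp:
                prod_others = prod_all / prob[i]
                prob[i] *= (1.0 - prod_others) / max(1.0 - prod_all, 1e-12)
    # Re-classify with the best model to obtain the optimized inliers (step (7)).
    return best_model, point_residuals(best_model, src, dst) < threshold
```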
2.5. Parameter Calculation of 3D Registration Model
2.6. 3D Compensation Method for Systematic Errors
2.6.1. Error Compensation Model in the XOY Plane
2.6.2. Error Compensation Model in Depth
2.7. Accuracy Evaluation
- (1) Three-dimensional calibration field experiment: 80% of the inliers are used as control points to train the error compensation models, and the remaining 20% are used as check points to evaluate the accuracy via cross validation and the root mean square error (RMSE). The dataset is randomly split 20 times to reduce the impact of the randomness of dataset splitting on model accuracy; the average accuracy over the 20 splits is then calculated for each model and compared to identify the optimal model. The smaller the RMSE, the better the model.
- (2) Checkerboard verification experiment: Checkerboard images are taken by the Kinect V2 camera every 500 mm from 500 to 5000 mm. The coordinates of the checkerboard corner points are then extracted and compensated with the optimal compensation model. Finally, the RMSE of the checkerboard before and after RGB-D calibration is calculated and compared to verify the effect of the 3D compensation model.
- (3) Sphere fitting experiment: The optimal model is applied to compensate the spherical point cloud captured by the Kinect V2 depth camera. The original and compensated point clouds are then separately substituted into the sphere fitting equation in Formula (14) to obtain the sphere parameters and residuals. Finally, the residual standard deviation is used to verify the effect of the 3D compensation and reconstruction (a minimal sphere-fitting sketch follows this list).
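For the sphere fitting experiment, the following is a minimal sketch assuming the standard linearized least-squares form of the sphere equation (x − a)² + (y − b)² + (z − c)² = r²; Formula (14) is assumed to be equivalent. It returns the fitted center, radius, and the residual standard deviation used in the evaluation.

```python
# Sketch only: linearized least-squares sphere fit and residual standard deviation.
import numpy as np

def fit_sphere(points):
    """points: (N, 3) array of x, y, z coordinates of the spherical point cloud."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    # Linearization: x^2 + y^2 + z^2 = 2ax + 2by + 2cz + (r^2 - a^2 - b^2 - c^2)
    A = np.column_stack([2 * x, 2 * y, 2 * z, np.ones_like(x)])
    f = x**2 + y**2 + z**2
    (a, b, c, d), *_ = np.linalg.lstsq(A, f, rcond=None)
    r = np.sqrt(d + a**2 + b**2 + c**2)
    residuals = np.linalg.norm(points - np.array([a, b, c]), axis=1) - r
    return np.array([a, b, c]), r, residuals.std(ddof=1)

# Usage: compare the residual spread before and after compensation, e.g.
# _, _, sigma_raw = fit_sphere(raw_cloud)
# _, _, sigma_cmp = fit_sphere(compensated_cloud)
```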
3. Results
3.1. Pre-Processing of 3D Data
3.2. Establishment of Error Compensation Model in 3D Calibration Field
3.3. Accuracy Evaluation Based on Checkerboard
3.4. Accuracy Evaluation of the Model for Real Sphere Fitting
4. Discussion
- (1) The optimized BaySAC algorithm can effectively eliminate gross errors. The algorithm incorporates Bayesian prior probabilities into the RANSAC framework, which can roughly halve the number of iterations. In addition, a robust and efficient estimation method is applied, so outliers are eliminated more accurately and a relatively accurate coordinate transformation relationship can be determined.
- (2) The accidental error can be further reduced by the iteration method with variable weights. During the parameter calculation of the 3D registration model, the residuals between the matching points are used to update the weights until the iteration termination condition is reached. While more refined parameters of the 3D registration model are obtained, the influence of larger accidental errors is also reduced.
- (3) The systematic error of the depth camera can be controlled better by the error compensation model. Because errors exist in all three directions of the depth data, a 3D calibration field with depth variation among the control points satisfies the requirements of RGB-D camera calibration in the depth direction better than a 2D calibration field, and it also helps to obtain more accurate depth data.
- (4) Model optimization can reduce part of the model error. Three systematic error compensation models are established in this paper; their compensation effects are compared, and the optimal model is selected to calibrate the RGB-D camera. The results show that the linear model obtains the best compensation result. This may be because the essential errors of the Kinect V2 are relatively simple, so the linear model fits them well without overparameterization. Hence, the linear model has good generalization ability with few parameters, whereas the quadratic and cubic models are comparatively complicated (a minimal model-comparison sketch follows this list).
- (5) The proposed compensation models are effective and have good generalization ability, which allows them to be applied to 3D modeling. This is demonstrated by the checkerboard and sphere fitting experiments, and the generalization differences are discussed below.
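To illustrate the model selection discussed in point (4), the sketch below fits linear, quadratic, and cubic depth-error models and compares their average check-point RMSE over 20 random 80/20 splits. The single-variable polynomial form is an illustrative simplification of the paper's 3D compensation models, whose actual terms are chosen by stepwise regression.

```python
# Sketch only: compare polynomial depth-error compensation models by mean
# check-point RMSE over repeated random 80/20 splits.
import numpy as np

def split_rmse(depth, error, degree, rng, train_frac=0.8):
    idx = rng.permutation(len(depth))
    cut = int(train_frac * len(depth))
    train, check = idx[:cut], idx[cut:]
    coeffs = np.polyfit(depth[train], error[train], degree)      # fit on control points
    residual = error[check] - np.polyval(coeffs, depth[check])   # error left after compensation
    return np.sqrt(np.mean(residual**2))

def compare_models(depth, error, degrees=(1, 2, 3), n_splits=20, seed=0):
    rng = np.random.default_rng(seed)
    # The degree with the smallest mean RMSE is the preferred compensation model.
    return {deg: float(np.mean([split_rmse(depth, error, deg, rng)
                                for _ in range(n_splits)]))
            for deg in degrees}
```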
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
1. Lachat, E.; Macher, H.; Mittet, M.-A.; Landes, T.; Grussenmeyer, P. First Experiences with Kinect V2 Sensor for Close Range 3D Modelling. ISPRS-Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, XL-5/W4, 93–100.
2. Naeemabadi, M.; Dinesen, B.; Andersen, O.K.; Hansen, J. Investigating the impact of a motion capture system on Microsoft Kinect v2 recordings: A caution for using the technologies together. PLoS ONE 2018, 13, e0204052.
3. Chi, C.; Bisheng, Y.; Shuang, S.; Mao, T.; Jianping, L.; Wenxia, D.; Lina, F. Calibrate Multiple Consumer RGB-D Cameras for Low-Cost and Efficient 3D Indoor Mapping. Remote Sens. 2018, 10, 328.
4. Choe, G.; Park, J.; Tai, Y.-W.; Kweon, I.S. Refining Geometry from Depth Sensors using IR Shading Images. Int. J. Comput. Vis. 2017, 122, 1–16.
5. Weber, T.; Hänsch, R.; Hellwich, O. Automatic registration of unordered point clouds acquired by Kinect sensors using an overlap heuristic. ISPRS J. Photogramm. Remote Sens. 2015, 102, 96–109.
6. Nir, O.; Parmet, Y.; Werner, D.; Adin, G.; Halachmi, I. 3D Computer-vision system for automatically estimating heifer height and body mass. Biosyst. Eng. 2018, 173, 4–10.
7. Silverstein, E.; Snyder, M. Implementation of facial recognition with Microsoft Kinect v2 sensor for patient verification. Med. Phys. 2017, 44, 2391–2399.
8. Guffanti, D.; Brunete, A.; Hernando, M.; Rueda, J.; Cabello, E.N. The Accuracy of the Microsoft Kinect V2 Sensor for Human Gait Analysis. A Different Approach for Comparison with the Ground Truth. Sensors 2020, 20, 4405.
9. Cui, J.; Zhang, J.; Sun, G.; Zheng, B. Extraction and Research of Crop Feature Points Based on Computer Vision. Sensors 2019, 19, 2553.
10. Wang, L.; Huynh, D.Q.; Koniusz, P. A Comparative Review of Recent Kinect-Based Action Recognition Algorithms. IEEE Trans. Image Process. 2020, 29, 15–28.
11. Fankhauser, P.; Bloesch, M.; Rodriguez, D.; Kaestner, R.; Siegwart, R. Kinect v2 for Mobile Robot Navigation: Evaluation and Modeling. In Proceedings of the International Conference on Advanced Robotics (ICAR), Istanbul, Turkey, 27–31 July 2015.
12. Nascimento, H.; Mujica, M.; Benoussaad, M. Collision Avoidance Interaction Between Human and a Hidden Robot Based on Kinect and Robot Data Fusion. IEEE Robot. Autom. Lett. 2020, 6, 88–94.
13. Wang, K.Z.; Lu, T.K.; Yang, Q.H.; Fu, X.H.; Lu, Z.H.; Wang, B.L.; Jiang, X. Three-Dimensional Reconstruction Method with Parameter Optimization for Point Cloud Based on Kinect v2. In Proceedings of the 2019 International Conference on Computer Science, Communications and Big Data, Beijing, China, 24 March 2019.
14. Lachat, E.; Landes, T.; Grussenmeyer, P. Combination of TLS point clouds and 3D data from Kinect V2 sensor to complete indoor models. ISPRS-Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, XLI-B5, 659–666.
15. Camplani, M.; Mantecón, T.; Salgado, L. Depth-Color Fusion Strategy for 3-D Scene Modeling with Kinect. IEEE Trans. Cybern. 2013, 43, 1560–1571.
16. Maybank, S.J.; Faugeras, O.D. A theory of self-calibration of a moving camera. Int. J. Comput. Vis. 1992, 8, 123–151.
17. Lachat, E.; Macher, H.; Landes, T.; Grussenmeyer, P. Assessment and Calibration of a RGB-D Camera (Kinect v2 Sensor) Towards a Potential Use for Close-Range 3D Modeling. Remote Sens. 2015, 7, 13070–13097.
18. He, X.; Zhang, H.; Hur, N.; Kim, J.; Wu, Q.; Kim, T. Estimation of Internal and External Parameters for Camera Calibration Using 1D Pattern. In Proceedings of the IEEE International Conference on Video & Signal Based Surveillance, Sydney, Australia, 22–24 November 2006.
19. Koshak, W.J.; Stewart, M.F.; Christian, H.J.; Bergstrom, J.W.; Solakiewicz, R.J. Laboratory Calibration of the Optical Transient Detector and the Lightning Imaging Sensor. J. Atmos. Ocean. Technol. 2000, 17, 905–915.
20. Sampsell, J.B.; Florence, J.M. Spatial Light Modulator Based Optical Calibration System. U.S. Patent 5,323,002A, 21 June 1994.
21. Hamid, N.F.A.; Ahmad, A.; Samad, A.M.; Ma’Arof, I.; Hashim, K.A. Accuracy assessment of calibrating high resolution digital camera. In Proceedings of the IEEE 9th International Colloquium on Signal Processing and its Applications, Kuala Lumpur, Malaysia, 8–10 March 2013.
22. Tsai, C.-Y.; Huang, C.-H. Indoor Scene Point Cloud Registration Algorithm Based on RGB-D Camera Calibration. Sensors 2017, 17, 1874.
23. Wang, X.; Habert, S.; Ma, M.; Huang, C.H.; Fallavollita, P.; Navab, N. Precise 3D/2D calibration between a RGB-D sensor and a C-arm fluoroscope. Int. J. Comput. Assist. Radiol. Surg. 2016, 11, 1385–1395.
24. Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334.
25. Sturm, P.F.; Maybank, S.J. On plane-based camera calibration: A general algorithm, singularities, applications. In Proceedings of the IEEE Computer Society Conference on Computer Vision & Pattern Recognition, Fort Collins, CO, USA, 23–25 June 1999.
26. Boehm, J.; Pattinson, T. Accuracy of exterior orientation for a range camera. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2010, 38, 103–108.
27. Liu, W.; Fan, Y.; Zhong, Z.; Lei, T. A new method for calibrating depth and color camera pair based on Kinect. In Proceedings of the 2012 International Conference on Audio, Language and Image Processing (ICALIP), Shanghai, China, 16–18 July 2012.
28. Song, X.; Zheng, J.; Zhong, F.; Qin, X. Modeling deviations of RGB-D cameras for accurate depth map and color image registration. Multimed. Tools Appl. 2018, 77, 14951–14977.
29. Gui, P.; Ye, Q.; Chen, H.; Zhang, T.; Yang, C. Accurately calibrate Kinect sensor using indoor control field. In Proceedings of the 2014 3rd International Workshop on Earth Observation and Remote Sensing Applications (EORSA), Changsha, China, 11–14 June 2014.
30. Zhang, C.; Huang, T.; Zhao, Q. A New Model of RGB-D Camera Calibration Based on 3D Control Field. Sensors 2019, 19, 5082.
31. Geiger, A.; Moosmann, F.; Car, O.; Schuster, B. Automatic camera and range sensor calibration using a single shot. In Proceedings of the 2012 IEEE International Conference on Robotics and Automation, Saint Paul, MN, USA, 14–18 May 2012; pp. 3936–3943.
32. Heikkila, J. Geometric camera calibration using circular control points. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1066–1077.
33. Kang, Z.; Jia, F.; Zhang, L. A Robust Image Matching Method based on Optimized BaySAC. Photogramm. Eng. Remote Sens. 2014, 80, 1041–1052.
34. Fischler, M.A.; Bolles, R.C. Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography. In Readings in Computer Vision; Fischler, M.A., Firschein, O., Eds.; Morgan Kaufmann: San Francisco, CA, USA, 1987; pp. 726–740.
35. Botterill, T.; Mills, S.; Green, R. New Conditional Sampling Strategies for Speeded-Up RANSAC. In Proceedings of the British Machine Vision Conference (BMVC), London, UK, 7–10 September 2009.
36. Li, D. Gross Error Location by Means of the Iteration Method with Variable Weights. J. Wuhan Tech. Univ. Surv. Mapp. 1984, 9, 46–68.
37. Brown, D.C. Close-range camera calibration. Photogramm. Eng. 1971, 37, 855–866.
| Dimension | Compensation Model | RMSE Before Correction | RMSE After Correction | Percentage Reduction of Error |
|---|---|---|---|---|
| 2D | 2D correction model | 0.0963 | 0.0794 | 17.55% |
| Depth | Linear model | 0.1828 | 0.0161 | 91.19% |
| Depth | Quadratic model | 0.1828 | 0.0165 | 90.97% |
| Depth | Cubic model | 0.1828 | 0.0174 | 90.48% |
| 3D | Linear model | 0.2066 | 0.0794 | 61.58% |
| 3D | Quadratic model | 0.2066 | 0.0795 | 61.53% |
| 3D | Cubic model | 0.2066 | 0.0797 | 61.45% |
| Distance (mm) | RMSE Before Correction | RMSE After Correction | Percentage Reduction of Error |
|---|---|---|---|
| 500 | 0.0463 | 0.0444 | 4.20% |
| 1000 | 0.0694 | 0.0407 | 41.31% |
| 1500 | 0.0846 | 0.0283 | 66.50% |
| 2000 | 0.1268 | 0.0411 | 67.53% |
| 2500 | 0.1374 | 0.0242 | 82.37% |
| 3000 | 0.1611 | 0.0197 | 87.79% |
| 3500 | 0.1958 | 0.0248 | 87.35% |
| 4000 | 0.2200 | 0.0216 | 90.19% |
| 4500 | 0.2889 | 0.0599 | 79.25% |
| 5000 | 0.2092 | 0.0469 | 77.59% |