Triple-Camera Rectification for Depth Estimation Sensor
Abstract
1. Introduction
2. Related Work
3. Proposed Method
3.1. Checkerboard Design and Lab Setting
3.2. Checkerboard Corner Detection
3.3. Calibration and Rectification
3.3.1. Single-Camera Calibration
3.3.2. Triple-Camera Calibration
3.3.3. Triple-Camera Rectification
Algorithm 1. Require: three checkerboard images captured by cameras arranged in a row [30,31].
4. Experimental Results
5. Discussion
5.1. Rectification with Multiple Images
5.2. The Effect of … in Optimization
6. Conclusions and Future Work
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
1. Hirschmuller, H. Stereo processing by semiglobal matching and mutual information. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 30, 328–341.
2. Facciolo, G.; De Franchis, C.; Meinhardt, E. MGM: A significantly more global matching for stereovision. In Proceedings of the British Machine Vision Conference (BMVC), Swansea, UK, 7–10 September 2015.
3. Bethmann, F.; Luhmann, T. Semi-global matching in object space. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, 40, 23–30.
4. Schauwecker, K. Real-time stereo vision on FPGAs with SceneScan. In Proceedings of the Forum Bildverarbeitung, Karlsruhe, Germany, 29–30 November 2018; Volume 339.
5. Chang, J.R.; Chen, Y.S. Pyramid stereo matching network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 5410–5418.
6. Chuah, W.; Tennakoon, R.; Hoseinnezhad, R.; Bab-Hadiashar, A. Deep learning-based incorporation of planar constraints for robust stereo depth estimation in autonomous vehicle applications. IEEE Trans. Intell. Transp. Syst. 2021, 23, 6654–6665.
7. Xu, G.; Wang, X.; Ding, X.; Yang, X. Iterative geometry encoding volume for stereo matching. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 14–24 June 2023; pp. 21919–21928.
8. Kendall, A.; Martirosyan, H.; Dasgupta, S.; Henry, P.; Kennedy, R.; Bachrach, A.; Bry, A. End-to-end learning of geometry and context for deep stereo regression. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 66–75.
9. Tankovich, V.; Hane, C.; Zhang, Y.; Kowdle, A.; Fanello, S.; Bouaziz, S. HITNet: Hierarchical iterative tile refinement network for real-time stereo matching. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 14362–14372.
10. Guo, X.; Yang, K.; Yang, W.; Wang, X.; Li, H. Group-wise correlation stereo network. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 3273–3282.
11. Huang, B.; Zheng, J.Q.; Giannarou, S.; Elson, D.S. H-Net: Unsupervised attention-based stereo depth estimation leveraging epipolar geometry. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 4460–4467.
12. Bradski, G.; Kaehler, A. Learning OpenCV: Computer Vision with the OpenCV Library; O’Reilly Media, Inc.: Sebastopol, CA, USA, 2008.
13. Alshawabkeh, Y. Linear feature extraction from point cloud using color information. Herit. Sci. 2020, 8, 28.
14. Hartley, R.I. Theory and practice of projective rectification. Int. J. Comput. Vis. 1999, 35, 115–127.
15. Monasse, P.; Morel, J.M.; Tang, Z. Three-step image rectification. In Proceedings of the BMVC 2010 British Machine Vision Conference, Aberystwyth, UK, 31 August–3 September 2010; BMVA Press: Durham, UK, 2010; pp. 89–91.
16. Kang, Y.S.; Ho, Y.S. Efficient stereo image rectification method using horizontal baseline. In Advances in Image and Video Technology: 5th Pacific Rim Symposium, PSIVT 2011, Gwangju, Republic of Korea, 20–23 November 2011, Proceedings, Part I; Springer: Berlin/Heidelberg, Germany, 2012; pp. 301–310.
17. Isgro, F.; Trucco, E. Projective rectification without epipolar geometry. In Proceedings of the 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No. PR00149), Fort Collins, CO, USA, 23–25 June 1999; IEEE: Piscataway, NJ, USA, 1999; Volume 1, pp. 94–99.
18. Pollefeys, M.; Koch, R.; Van Gool, L. A simple and efficient rectification method for general motion. In Proceedings of the Seventh IEEE International Conference on Computer Vision, Kerkyra, Greece, 20–27 September 1999; IEEE: Piscataway, NJ, USA, 1999; Volume 1, pp. 496–501.
19. Fusiello, A.; Trucco, E.; Verri, A. A compact algorithm for rectification of stereo pairs. Mach. Vis. Appl. 2000, 12, 16–22.
20. Lafiosca, P.; Ceccaroni, M. Rectifying homographies for stereo vision: Analytical solution for minimal distortion. In Proceedings of the Science and Information Conference, London, UK, 14–15 July 2022; Springer: Cham, Switzerland, 2022; pp. 484–503.
21. Loop, C.; Zhang, Z. Computing rectifying homographies for stereo vision. In Proceedings of the 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No. PR00149), Fort Collins, CO, USA, 23–25 June 1999; IEEE: Piscataway, NJ, USA, 1999; Volume 1, pp. 125–131.
22. Harris, C.; Stephens, M. A combined corner and edge detector. In Proceedings of the Alvey Vision Conference, Manchester, UK, 31 August–2 September 1988; Volume 15, pp. 10–5244.
23. Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110.
24. Bay, H.; Tuytelaars, T.; Van Gool, L. SURF: Speeded up robust features. In Computer Vision–ECCV 2006: 9th European Conference on Computer Vision, Graz, Austria, 7–13 May 2006, Proceedings, Part I; Springer: Berlin/Heidelberg, Germany, 2006; pp. 404–417.
25. Rublee, E.; Rabaud, V.; Konolige, K.; Bradski, G. ORB: An efficient alternative to SIFT or SURF. In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; IEEE: Piscataway, NJ, USA, 2011; pp. 2564–2571.
26. Rosten, E.; Drummond, T. Machine learning for high-speed corner detection. In Computer Vision–ECCV 2006: 9th European Conference on Computer Vision, Graz, Austria, 7–13 May 2006, Proceedings, Part I; Springer: Berlin/Heidelberg, Germany, 2006; pp. 430–443.
27. Calonder, M.; Lepetit, V.; Strecha, C.; Fua, P. BRIEF: Binary robust independent elementary features. In Computer Vision–ECCV 2010: 11th European Conference on Computer Vision, Heraklion, Crete, Greece, 5–11 September 2010, Proceedings, Part IV; Springer: Berlin/Heidelberg, Germany, 2010; pp. 778–792.
28. Dalal, N.; Triggs, B. Histograms of oriented gradients for human detection. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), San Diego, CA, USA, 20–25 June 2005; IEEE: Piscataway, NJ, USA, 2005; Volume 1, pp. 886–893.
29. Matas, J.; Chum, O.; Urban, M.; Pajdla, T. Robust wide-baseline stereo from maximally stable extremal regions. Image Vis. Comput. 2004, 22, 761–767.
30. Geiger, A.; Moosmann, F.; Car, Ö.; Schuster, B. Automatic camera and range sensor calibration using a single shot. In Proceedings of the 2012 IEEE International Conference on Robotics and Automation, Saint Paul, MN, USA, 14–18 May 2012; IEEE: Piscataway, NJ, USA, 2012; pp. 3936–3943.
31. Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334.
32. Tsai, R. A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE J. Robot. Autom. 1987, 3, 323–344.
33. Galántai, A. The theory of Newton’s method. J. Comput. Appl. Math. 2000, 124, 25–44.
34. Nocedal, J.; Wright, S.J. Numerical Optimization; Springer: New York, NY, USA, 2006.
35. Duc-Hung, L.; Cong-Kha, P.; Trang, N.T.T.; Tu, B.T. Parameter extraction and optimization using Levenberg-Marquardt algorithm. In Proceedings of the 2012 Fourth International Conference on Communications and Electronics (ICCE), Hue, Vietnam, 1–3 August 2012; IEEE: Piscataway, NJ, USA, 2012; pp. 434–437.
36. Coleman, T.F.; Li, Y. An interior trust region approach for nonlinear minimization subject to bounds. SIAM J. Optim. 1996, 6, 418–445.
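Sections 3.3.1–3.3.3 and Algorithm 1 outline a pipeline of single-camera calibration, triple-camera calibration, and triple-camera rectification from checkerboard images [30,31]. As a point of reference only, the sketch below shows the standard two-camera counterpart of that pipeline in OpenCV (Zhang-style calibration [31] followed by Bouguet rectification as implemented in OpenCV [12]). It is not the authors' triple-camera method; the checkerboard pattern size, square size, and file paths are placeholder assumptions.

```python
# Minimal sketch (assumption): standard two-camera Zhang/Bouguet pipeline in OpenCV,
# shown only as a reference point for Sections 3.3.1-3.3.3. The paper's method
# additionally brings a third (RGB) camera onto the same rectified plane, which
# plain cv2.stereoRectify does not do.
import glob
import cv2
import numpy as np

PATTERN = (9, 6)   # inner-corner grid of the checkerboard (assumed)
SQUARE = 0.025     # checkerboard square size in metres (assumed)

# 3-D corner coordinates in the checkerboard frame
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE

obj_pts, left_pts, right_pts = [], [], []
for lf, rf in zip(sorted(glob.glob("left/*.png")), sorted(glob.glob("right/*.png"))):
    gl = cv2.imread(lf, cv2.IMREAD_GRAYSCALE)
    gr = cv2.imread(rf, cv2.IMREAD_GRAYSCALE)
    okl, cl = cv2.findChessboardCorners(gl, PATTERN)
    okr, cr = cv2.findChessboardCorners(gr, PATTERN)
    if okl and okr:
        crit = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)
        cl = cv2.cornerSubPix(gl, cl, (11, 11), (-1, -1), crit)
        cr = cv2.cornerSubPix(gr, cr, (11, 11), (-1, -1), crit)
        obj_pts.append(objp); left_pts.append(cl); right_pts.append(cr)

size = gl.shape[::-1]  # (width, height) of the last image pair
# Single-camera calibration (analogue of Section 3.3.1)
_, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, left_pts, size, None, None)
_, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, right_pts, size, None, None)
# Pairwise extrinsic calibration: rotation R and translation T (analogue of Section 3.3.2)
_, K1, d1, K2, d2, R, T, _, _ = cv2.stereoCalibrate(
    obj_pts, left_pts, right_pts, K1, d1, K2, d2, size,
    flags=cv2.CALIB_FIX_INTRINSIC)
# Bouguet rectification: rectifying rotations R1/R2, projections P1/P2 (analogue of Section 3.3.3)
R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, d1, K2, d2, size, R, T)
map1x, map1y = cv2.initUndistortRectifyMap(K1, d1, R1, P1, size, cv2.CV_32FC1)
rectified_left = cv2.remap(cv2.imread("left/0.png"), map1x, map1y, cv2.INTER_LINEAR)
```

Extending this to three cameras is exactly where the pairwise tooling falls short, since cv2.stereoRectify aligns one pair at a time; the triple-camera rectification in Section 3.3.3 is what replaces that last step.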
Num. | Original (L/R) | Original (L/RGB) | Hartley [14] (L/R) | Hartley [14] (L/RGB) | Fusiello [19] (L/R) | Fusiello [19] (L/RGB) | Lafiosca [20] (L/R) | Lafiosca [20] (L/RGB) | Bouguet [12] (L/R) | Bouguet [12] (L/RGB) | Proposed (L/R) | Proposed (L/RGB) |
---|---|---|---|---|---|---|---|---|---|---|---|---|
1 | 3.502 | 5.139 | 0.147 | 0.112 | 0.143 | 0.141 | 0.107 | 0.070 | 0.107 | 0.077 | 0.127 | 0.090 |
2 | 3.693 | 2.650 | 0.142 | 0.101 | 0.216 | 0.159 | 0.097 | 0.045 | 0.088 | 0.046 | 0.115 | 0.043 |
3 | 1.314 | 6.671 | 0.105 | 0.097 | 0.193 | 0.161 | 0.051 | 0.080 | 0.059 | 0.069 | 0.077 | 0.090 |
4 | 5.161 | 6.009 | 0.132 | 0.124 | 0.181 | 0.169 | 0.067 | 0.078 | 0.067 | 0.071 | 0.135 | 0.069 |
5 | 2.368 | 1.880 | 0.122 | 0.100 | 0.192 | 0.147 | 0.044 | 0.050 | 0.056 | 0.053 | 0.074 | 0.067 |
6 | 4.007 | 1.020 | 0.092 | 0.101 | 0.144 | 0.130 | 0.053 | 0.084 | 0.060 | 0.068 | 0.094 | 0.092 |
7 | 0.445 | 8.102 | 0.142 | 0.077 | 0.198 | 0.133 | 0.070 | 0.034 | 0.071 | 0.039 | 0.139 | 0.059 |
8 | 4.052 | 6.075 | 0.195 | 0.122 | 0.266 | 0.194 | 0.105 | 0.054 | 0.101 | 0.060 | 0.138 | 0.062 |
9 | 1.572 | 1.959 | 0.110 | 0.080 | 0.204 | 0.155 | 0.063 | 0.042 | 0.056 | 0.052 | 0.109 | 0.096 |
10 | 10.372 | 4.878 | 0.136 | 0.096 | 0.195 | 0.164 | 0.093 | 0.095 | 0.084 | 0.070 | 0.101 | 0.071 |
11 | 4.902 | 13.379 | 0.121 | 0.083 | 0.171 | 0.190 | 0.044 | 0.027 | 0.062 | 0.038 | 0.081 | 0.100 |
12 | 0.446 | 2.356 | 0.100 | 0.085 | 0.216 | 0.146 | 0.086 | 0.044 | 0.064 | 0.043 | 0.063 | 0.087 |
13 | 6.517 | 4.865 | 0.162 | 0.100 | 0.239 | 0.176 | 0.181 | 0.062 | 0.140 | 0.054 | 0.161 | 0.065 |
14 | 1.622 | 1.115 | 0.103 | 0.077 | 0.161 | 0.113 | 0.037 | 0.027 | 0.045 | 0.041 | 0.056 | 0.038 |
15 | 5.822 | 1.365 | 0.126 | 0.107 | 0.167 | 0.147 | 0.072 | 0.071 | 0.073 | 0.064 | 0.174 | 0.102 |
16 | 1.111 | 0.752 | 0.132 | 0.112 | 0.205 | 0.145 | 0.111 | 0.067 | 0.088 | 0.081 | 0.138 | 0.208 |
17 | 6.145 | 3.775 | 0.170 | 0.130 | 0.289 | 0.157 | 0.056 | 0.060 | 0.076 | 0.074 | 0.118 | 0.084 |
18 | 5.750 | 3.420 | 0.120 | 0.104 | 0.161 | 0.149 | 0.040 | 0.050 | 0.051 | 0.061 | 0.135 | 0.112 |
19 | 0.898 | 0.911 | 0.116 | 0.103 | 0.194 | 0.133 | 0.056 | 0.064 | 0.050 | 0.060 | 0.091 | 0.074 |
min | 0.445 | 0.752 | 0.092 | 0.077 | 0.143 | 0.113 | 0.037 | 0.027 | 0.045 | 0.038 | 0.056 | 0.038 |
max | 10.372 | 13.379 | 0.195 | 0.130 | 0.289 | 0.194 | 0.181 | 0.095 | 0.140 | 0.081 | 0.174 | 0.208 |
avg | 3.668 | 4.017 | 0.130 | 0.101 | 0.197 | 0.153 | 0.075 | 0.058 | 0.074 | 0.059 | 0.112 | 0.085 |
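The table above compares the left/right (L/R) and left/RGB pairs before rectification (Original) and after each rectification method. A common metric for such comparisons is the mean vertical disparity of corresponding points after rectification, since correctly rectified correspondences share the same image row; whether this is exactly the metric used here is an assumption, and the helper name below is hypothetical.

```python
# Hedged sketch: mean vertical epipolar error between matched points after
# rectification, a typical way to obtain values like those in the table above.
import numpy as np

def vertical_epipolar_error(pts_a, pts_b):
    """pts_a, pts_b: (N, 2) arrays of corresponding (x, y) pixel coordinates
    detected in a rectified image pair. Returns the mean |y_a - y_b| in pixels."""
    pts_a = np.asarray(pts_a, dtype=np.float64)
    pts_b = np.asarray(pts_b, dtype=np.float64)
    return float(np.mean(np.abs(pts_a[:, 1] - pts_b[:, 1])))
```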
Num. | L/R Relational Angle x (deg) | L/R Relational Angle y (deg) | L/R Relational Angle z (deg) | L/R Relational Location x (mm) | L/R Relational Location y (mm) | L/R Relational Location z (mm) | L/RGB Relational Angle x (deg) | L/RGB Relational Angle y (deg) | L/RGB Relational Angle z (deg) | L/RGB Relational Location x (mm) | L/RGB Relational Location y (mm) | L/RGB Relational Location z (mm) |
---|---|---|---|---|---|---|---|---|---|---|---|---|
1 | 0.24 | 0.42 | −1.16 | −49.10 | −0.92 | 0.06 | 0.111 | 0.084 | −0.490 | −36.322 | −0.966 | −0.095 |
2 | −0.19 | 0.02 | −0.22 | −50.61 | 1.07 | −0.31 | −0.006 | −0.203 | −0.217 | −38.061 | −0.187 | −0.104 |
3 | −0.04 | −0.62 | −0.36 | −50.82 | −0.65 | −0.44 | −0.169 | −0.440 | −0.426 | −37.803 | 0.572 | 0.081 |
4 | −1.81 | 0.38 | 0.04 | −49.35 | −0.85 | −0.38 | −1.407 | 0.367 | −0.103 | −36.140 | −0.617 | 0.042 |
5 | −0.14 | −0.20 | −0.60 | −50.68 | 0.18 | 0.11 | −0.160 | −0.256 | −0.465 | −37.325 | 0.732 | 0.082 |
6 | −0.01 | 0.28 | −0.42 | −49.95 | −0.39 | −0.21 | 0.000 | 0.171 | −0.262 | −36.943 | 1.160 | −0.176 |
7 | −0.16 | 0.32 | −0.03 | −50.45 | 0.66 | −0.38 | −0.068 | 0.560 | −0.502 | −37.787 | −0.303 | 0.602 |
8 | −0.09 | −0.38 | 0.10 | −49.78 | 1.55 | −0.66 | 0.020 | −0.628 | 0.246 | −37.083 | 0.152 | −0.308 |
9 | 0.03 | −0.14 | −0.20 | −50.34 | −0.47 | −0.64 | 0.411 | −0.422 | −0.402 | −37.850 | −1.388 | −0.312 |
10 | −0.08 | −0.12 | −0.14 | −50.03 | 1.02 | −0.41 | −0.064 | −0.296 | −0.417 | −37.759 | 0.737 | −0.338 |
11 | −0.04 | 0.17 | 0.36 | −50.30 | −1.36 | 0.34 | 0.037 | 0.013 | −0.076 | −38.085 | −0.517 | 0.492 |
12 | 0.07 | −0.05 | −0.02 | −51.69 | −0.32 | 0.10 | −0.107 | 0.094 | 0.125 | −37.796 | −0.111 | −0.350 |
13 | −0.22 | −0.19 | 0.26 | −50.94 | 2.04 | −0.17 | −0.156 | −0.078 | 0.299 | −37.358 | 0.419 | −0.226 |
14 | −0.03 | −0.19 | 0.01 | −50.69 | −0.43 | −0.35 | 0.059 | −0.208 | −0.393 | −37.260 | 0.105 | −0.241 |
15 | −0.13 | 0.20 | −0.72 | −49.18 | 1.71 | −0.69 | −0.078 | 0.094 | −0.211 | −36.592 | 1.017 | −0.378 |
16 | −0.07 | −0.16 | −0.34 | −49.76 | 1.75 | −0.62 | −0.151 | −0.026 | −0.071 | −35.715 | 1.606 | −1.003 |
17 | 0.11 | −0.44 | −0.71 | −50.66 | −1.42 | −0.35 | −0.007 | −0.396 | −0.854 | −37.227 | −0.766 | −0.397 |
18 | −0.11 | −0.06 | −0.08 | −50.33 | −0.91 | −0.73 | −0.017 | −0.148 | −0.120 | −37.484 | −1.993 | −0.031 |
19 | −0.06 | −0.02 | −0.37 | −50.57 | −0.02 | −0.41 | −0.143 | −0.001 | −0.343 | −36.673 | 0.060 | −0.234 |
min | −0.01 | −0.02 | 0.01 | −49.10 | −0.02 | 0.06 | 0.000 | −0.001 | −0.071 | −35.715 | 0.060 | −0.031 |
max | −1.81 | −0.62 | −1.16 | −51.69 | 2.04 | −0.73 | −1.407 | −0.628 | −0.854 | −38.085 | −1.993 | −1.003 |
avg | −0.14 | −0.04 | −0.24 | −50.28 | 0.12 | −0.32 | −0.100 | −0.090 | −0.246 | −37.224 | −0.015 | −0.152 |
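The table above reports the relational angle (degrees) and relational location (mm) for the L/R and L/RGB camera pairs; the roughly −50 mm and −37 mm x components are consistent with horizontal baselines between adjacent cameras. Assuming these quantities are the relative rotation expressed as x/y/z Euler angles and the relative translation of one camera with respect to the other (the exact convention is not stated here), a minimal sketch of how such values could be derived from calibration output is shown below; the function name is hypothetical.

```python
# Hedged sketch: convert a relative camera pose into "relational angle" (deg) and
# "relational location" (mm) values comparable to the table above, under the
# assumed x/y/z Euler convention.
import numpy as np
from scipy.spatial.transform import Rotation

def relative_pose_report(R_ab, t_ab_m):
    """R_ab: 3x3 rotation of camera B w.r.t. camera A; t_ab_m: translation in metres.
    Returns (euler_angles_deg, location_mm)."""
    angles_deg = Rotation.from_matrix(R_ab).as_euler("xyz", degrees=True)
    location_mm = np.asarray(t_ab_m, dtype=np.float64).ravel() * 1000.0
    return angles_deg, location_mm
```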
Multi-I (L/R) | Multi-I (L/RGB) | Multi-II (L/R) | Multi-II (L/RGB) | Multi-III (L/R) | Multi-III (L/RGB) | Single Capture (L/R) | Single Capture (L/RGB) |
---|---|---|---|---|---|---|---|
0.047 | 0.037 | 0.070 | 0.102 | 0.195 | 0.201 | 0.106 | 0.094 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Jeon, M.; Park, J.; Kim, J.-W.; Woo, S. Triple-Camera Rectification for Depth Estimation Sensor. Sensors 2024, 24, 6100. https://doi.org/10.3390/s24186100