An Automatic Marker–Object Offset Calibration Method for Precise 3D Augmented Reality Registration in Industrial Applications
Abstract
1. Introduction
- Generally, the marker’s coordinate system is assumed to coincide with the local coordinate system of the CAD model, but in practice the marker may be placed on another planar surface of the object, which introduces an undetermined transformation between the planned marker layout position and its real layout position.
- Although this transformation could be set manually, it would have to be computed by multiplying several manually measured transformation matrices, which introduces systematic measurement error (see the sketch after this list).
- Even if both the manual marker layout and the transformation measurement were accurate, the AR-registered CAD model would still not align perfectly with the real object, because machining and assembly errors cause slight changes in the object’s structure, shape, or appearance.
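To make the offset in question explicit, the following is a minimal sketch, assuming 4×4 homogeneous transforms and NumPy; the names pose, compose, T_cam_marker, and T_marker_obj are illustrative and not the paper’s notation. It shows how the tracked marker pose chains with the marker–object offset to place the CAD model, so any error in a hand-measured offset is reproduced in every rendered frame.

```python
import numpy as np

def pose(rotation=None, translation=(0.0, 0.0, 0.0)):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = np.eye(3) if rotation is None else rotation
    T[:3, 3] = translation
    return T

def compose(*transforms):
    """Chain 4x4 homogeneous transforms left to right."""
    result = np.eye(4)
    for t in transforms:
        result = result @ t
    return result

# Illustrative values only: the marker pose tracked in the camera frame and the
# fixed marker-object offset that would otherwise be assembled from several
# hand-measured matrices.
T_cam_marker = pose(translation=(0.10, 0.02, 0.75))   # from marker tracking, per frame
T_marker_obj = pose(translation=(-0.05, 0.00, 0.01))  # the offset to be calibrated

# Pose used to render the CAD model; any bias in T_marker_obj is reproduced in
# every frame, which is why an automatic offset calibration is needed.
T_cam_obj = compose(T_cam_marker, T_marker_obj)
```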
2. Materials and Methods
2.1. Overview of the Proposed Method
2.2. Normal Estimation
2.3. Translation Estimation
2.4. Global Optimization
Algorithm 1 Calculation of the gradient-based dense image descriptor
Input: Image I, ROI rectangle R.
Output: Feature value arrays F.
1: Extract the ROI area R from I, denoted P.
2: Uniformly sample points {p_i} on P.
3: Convert P to a double-float grayscale image.
4: Calculate the Scharr gradients of P as G_x and G_y.
5: Calculate the strongest gradients of G_x and G_y as S_x and S_y.
6: Composite S_x and S_y into one array G.
7: For each pyramid level l = 1, …, L do
8:   Generate the pyramid image G_l of G by downsampling.
9: End for
10: For each pyramid level l = 1, …, L do
11:   For each sampled point p_i do
12:     Get the value at p_i in G_l and store it in the double-float arrays F.
13:   End for
14: End for
15: Return F.
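As a companion to the listing, the following is a minimal sketch in Python with OpenCV and NumPy of one way to realize Algorithm 1; the names dense_gradient_descriptor, grid, and n_levels are assumptions rather than the paper’s symbols, and the step comments map each block back to the listing.

```python
import cv2
import numpy as np

def dense_gradient_descriptor(img, roi, grid=(8, 8), n_levels=3):
    """Illustrative reading of Algorithm 1: dominant Scharr gradient sampled
    over an image pyramid inside a ROI.

    img      : input image I (uint8, BGR or grayscale)
    roi      : ROI rectangle R as (x, y, w, h) in pixels
    grid     : number of uniformly sampled points per axis (assumed)
    n_levels : number of pyramid levels L (assumed)
    """
    x, y, w, h = roi
    patch = img[y:y + h, x:x + w]                          # step 1: extract ROI P
    if patch.ndim == 3:
        patch = cv2.cvtColor(patch, cv2.COLOR_BGR2GRAY)
    patch = patch.astype(np.float64)                       # step 3: double-float grayscale

    gx = cv2.Scharr(patch, cv2.CV_64F, 1, 0)               # step 4: Scharr gradients Gx, Gy
    gy = cv2.Scharr(patch, cv2.CV_64F, 0, 1)
    composite = np.where(np.abs(gx) >= np.abs(gy), gx, gy)  # steps 5-6: strongest response, array G

    pyramid = [composite]                                  # steps 7-9: pyramid of G
    for _ in range(1, n_levels):
        pyramid.append(cv2.pyrDown(pyramid[-1]))

    features = []                                          # steps 10-14: sample each level
    for level in pyramid:
        ys = np.linspace(0, level.shape[0] - 1, grid[0]).astype(int)  # step 2: uniform grid
        xs = np.linspace(0, level.shape[1] - 1, grid[1]).astype(int)
        features.append(level[np.ix_(ys, xs)].ravel())

    return features                                        # step 15: feature value arrays F
```

Sampling the composite gradient array at the same grid of points on every pyramid level yields one feature array per level, mirroring the nested loops in steps 10–14.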
3. Results
3.1. Quantitative Validation
3.1.1. Experiment Configuration
3.1.2. Results
3.2. Qualitative Validation
3.2.1. Experiment Configuration
3.2.2. Results
4. Discussion and Limitations
5. Conclusions and Future Perspectives
Author Contributions
Funding
Conflicts of Interest
References
| Experiment Video | Mean Absolute Error | | | | | | | Time (s) |
|---|---|---|---|---|---|---|---|---|
| Video 1 | 0.43 | 0.69 | 0.93 | 2.25 | 1.39 | 0.49 | 0.18 | 6.24 |
| Video 2 | 2.24 | 1.53 | 5.32 | 1.87 | 1.56 | 0.76 | 0.23 | 15.62 |
| Video 3 | 1.17 | 1.39 | 5.19 | 2.94 | 2.25 | 0.57 | 0.21 | 14.83 |
| Average | 1.280 | 1.203 | 3.813 | 2.353 | 1.733 | 0.607 | 0.207 | 12.23 |