Automated Calibration Method for Eye-Tracked Autostereoscopic Display
Abstract
1. Introduction
- We establish the parameters of the ETAD that require calibration. To do so, we first describe the entire ETAD pipeline, from eye tracking to 3D image rendering. To the best of our knowledge, this complete process has not been examined before.
- We propose a novel methodology that calibrates all devices of the ETAD. Our approach estimates all parameters automatically and simultaneously, rather than one at a time, and requires no movement of the ETAD or the calibration device: calibration is performed in a fixed setup. To this end, we designed and implemented a calibration module consisting of a 3D pattern and an external camera, which allows an ETAD to be calibrated simply by placing it in front of the module.
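The simultaneous refinement described above can be sketched as a small nonlinear least-squares problem. The observation model, parameter names, and all numeric values below are illustrative assumptions, not the paper's equations; only the core idea, updating the slant angle and gap jointly instead of sequentially (here with a tiny Gauss-Newton loop), mirrors the text.

```python
import math

def predict(theta, gap, samples):
    # Toy observation model: each sample (x, y) yields a measurement that
    # depends nonlinearly on the slant angle and linearly on the gap.
    return [x * math.tan(theta) + gap * y for x, y in samples]

def residuals(params, samples, observed):
    theta, gap = params
    return [p - o for p, o in zip(predict(theta, gap, samples), observed)]

def calibrate_jointly(samples, observed, init, iters=20, eps=1e-7):
    # Tiny Gauss-Newton loop refining both parameters at the same time.
    params = list(init)
    for _ in range(iters):
        r = residuals(params, samples, observed)
        # Numerical (forward-difference) Jacobian, one column per parameter.
        cols = []
        for k in range(2):
            bumped = list(params)
            bumped[k] += eps
            rb = residuals(bumped, samples, observed)
            cols.append([(b - a) / eps for a, b in zip(r, rb)])
        # Normal equations J^T J d = -J^T r, solved by Cramer's rule (2x2).
        jtj00 = sum(v * v for v in cols[0])
        jtj01 = sum(u * v for u, v in zip(cols[0], cols[1]))
        jtj11 = sum(v * v for v in cols[1])
        jtr0 = -sum(u * ri for u, ri in zip(cols[0], r))
        jtr1 = -sum(v * ri for v, ri in zip(cols[1], r))
        det = jtj00 * jtj11 - jtj01 * jtj01
        if abs(det) < 1e-15:
            break
        params[0] += (jtj11 * jtr0 - jtj01 * jtr1) / det
        params[1] += (jtj00 * jtr1 - jtj01 * jtr0) / det
    return params

# Synthetic check: generate noiseless observations from known parameters
# and recover both from a perturbed initial guess.
true_theta, true_gap = math.radians(12.5), 0.5
samples = [(x, y) for x in range(1, 6) for y in range(1, 4)]
observed = predict(true_theta, true_gap, samples)
theta_hat, gap_hat = calibrate_jointly(samples, observed, [0.1, 0.3])
```

Because every residual depends on both parameters, each Gauss-Newton step updates the slant angle and gap together, which is the sense in which the calibration is simultaneous rather than sequential.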
2. Parameters Establishment
2.1. Physical Model of the ETAD
2.2. Rendering Process
2.3. Parameter Classification
3. Calibration
3.1. Calibration Process
3.2. Calibration Method
4. Experiments and Discussion
4.1. Simulation Environment
4.2. Real System
5. Conclusions
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
Appendix A.
Appendix A.1. Eye Position Estimation
References
Device | Parameters | Symbols | Category
---|---|---|---
onboard camera | focal length | f_x, f_y | variable
| principal point | c_x, c_y | variable
| distortion coefficients | k_1, k_2, p_1, p_2 | variable
display panel | pixel width | w | constant
| pixel height | h | constant
optical element | pitch | p | constant
| slanted angle | θ | variable
| gap (thickness) | g | variable
| offset | o | variable
onboard camera pose | rotation | r_x, r_y, r_z | variable
| translation | t_x, t_y, t_z | variable
Device | Parameter (Unit) | Designed Value | Variation | Mean of Absolute Error | Standard Deviation
---|---|---|---|---|---
onboard camera | f_x (pixel/mm) | 1506.9 | ±40.0 | 0.9167 | 0.7122
| f_y (pixel/mm) | 1506.9 | ±40.0 | 0.6045 | 0.4178
| c_x (pixel) | 960 | - | - | -
| c_y (pixel) | 480 | - | - | -
optical layer | p (mm) | 0.01237 | - | - | -
| θ (°) | 12.5288 | ±1.0 | 0.0041 | 0.0031
| g (mm) | 0.5 | ±0.1 | 0.0252 | 0.0175
| o (mm) | 0.2 | ±0.1 | 0.0135 | 0.0087
onboard camera pose | r_x (°) | 0 | ±1.0 | 0.9362 | 0.5986
| r_y (°) | 0 | ±1.0 | 0.8596 | 0.6038
| r_z (°) | 0 | ±1.0 | 0.0160 | 0.0124
| t_x (mm) | 204.48 | ±5.0 | 0.4171 | 0.3226
| t_y (mm) | −15.0 | ±5.0 | 0.1804 | 0.1364
| t_z (mm) | 2.0 | ±5.0 | 0.6030 | 0.3814
Distance | No Calibration | Proposed Method
---|---|---
test position #1 | 77.5889 | 7.9029
test position #2 | 85.7855 | 8.0671
test position #3 | 91.0957 | 8.5187
test position #4 | 88.1954 | 8.7885
mean | 85.6664 | 8.3193
Name | PSNR (SD): No Calibration | PSNR (SD): Proposed | SSIM (SD): No Calibration | SSIM (SD): Proposed
---|---|---|---|---
Lego Knights | 18.51 (2.46) | 26.25 (0.46) | 0.44 (0.02) | 0.65 (0.03)
Fish | 17.83 (0.64) | 22.75 (0.10) | 0.56 (0.02) | 0.80 (0.03)
Flower | 23.62 (0.36) | 30.18 (0.30) | 0.68 (0.01) | 0.92 (0.04)
Aloe | 20.55 (1.05) | 29.39 (0.59) | 0.48 (0.04) | 0.80 (0.03)
Motorcycle | 19.67 (0.19) | 26.05 (0.25) | 0.72 (0.01) | 0.82 (0.06)
Baby3 | 24.06 (0.08) | 27.61 (0.49) | 0.47 (0.03) | 0.73 (0.12)
Couch | 23.34 (0.15) | 29.45 (1.02) | 0.67 (0.03) | 0.91 (0.17)
Bicycle1 | 20.07 (0.25) | 28.76 (0.89) | 0.34 (0.12) | 0.88 (0.10)
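The table above reports PSNR, which can be computed as follows for two images given as flat lists of 8-bit pixel values. This is a minimal reference implementation of the standard metric; the paper's exact evaluation pipeline (and its SSIM settings) are not reproduced here.

```python
import math

def psnr(img_a, img_b, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equal-size images."""
    # Mean squared error between corresponding pixels.
    mse = sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(peak * peak / mse)
```

For example, a uniform error of 16 gray levels gives `psnr([0] * 4, [16] * 4)` ≈ 24.05 dB, on the same scale as the no-calibration column above.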
© 2018 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Hwang, H. Automated Calibration Method for Eye-Tracked Autostereoscopic Display. Sensors 2018, 18, 2614. https://doi.org/10.3390/s18082614