Article

Miniaturized 3D Depth Sensing-Based Smartphone Light Field Camera

School of Electrical Engineering and Computer Science, Gwangju Institute of Science and Technology, 123 Cheomdangwagi-ro, Buk-gu, Gwangju 61005, Korea
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Sensors 2020, 20(7), 2129; https://doi.org/10.3390/s20072129
Submission received: 25 February 2020 / Revised: 1 April 2020 / Accepted: 7 April 2020 / Published: 9 April 2020
(This article belongs to the Section Intelligent Sensors)

Abstract

The miniaturization of 3D depth camera systems to reduce cost and power consumption is essential for their application in electrical devices that are trending toward smaller sizes (such as smartphones and unmanned aerial systems) and in other applications that cannot be realized via conventional approaches. A wide range of depth-sensing devices currently exists, including stereo vision, structured light, and time-of-flight systems. This paper reports on a miniaturized 3D depth camera based on a light field camera (LFC) configured with a single aperture and a micro-lens array (MLA). The single aperture and each micro-lens of the MLA together act as a multi-camera system for 3D surface imaging. To overcome the optical alignment challenge in the miniaturized LFC system, the MLA was designed to come into focus when attached directly to the image sensor. A theoretical analysis of the optical parameters was performed via optical simulation based on Monte Carlo ray tracing to find valid optical parameters for miniaturized 3D camera systems. Moreover, we demonstrate multi-viewpoint image acquisition via a miniaturized 3D camera module integrated into a smartphone.

1. Introduction

Cameras have become indispensable devices for recording human history over the last couple of centuries. In recent years, research on 3D cameras has been conducted actively in response to the increasing demand for measuring information about the real world beyond simply capturing 2D images. The trend toward minimizing the size and cost of 3D cameras in many applications, such as smartphones, entertainment, remote sensing for facial recognition, motion detectors, and 3D surface imaging, has motivated research into the miniaturization of 3D cameras [1,2,3,4,5,6]. Moreover, small unmanned aerial systems (sUAS) that include 3D cameras can offer anti-collision functions and environmental remote sensing; for example, civil and military sUAS with 3D cameras are used in unexpected scenarios during emergencies [7,8,9,10]. However, sUAS pose challenges resulting from their small payload weight and battery capacity, which significantly limit their flight time [9]. These challenges could be overcome by using miniaturized 3D cameras that require neither an additional light source nor a multi-camera system. Many 3D camera techniques have been developed, such as stereoscopic vision [11,12,13], structured light [14,15,16], and time-of-flight (TOF) [17,18,19]. Stereoscopic vision systems acquire depth information from two or more cameras that capture the same scene from different viewpoint angles. Structured light camera systems contain a camera and a projector that generates geometric light patterns, such as dot arrays, arbitrary fringes, and stripes, to perform 3D reconstruction. TOF cameras are depth-sensing devices that measure the round-trip time of an infrared light signal. However, the aforementioned techniques require two or more cameras or external light sources, meaning that they are not suitable for miniaturization. Recent studies on 3D imaging systems have therefore focused on using a single image sensor rather than two identical cameras to reduce the overall size, cost, weight, and battery requirements of the optical system [20]. As an alternative to the above-mentioned 3D cameras, light field camera (LFC) technology offers high potential for miniaturization: LFCs are passive 3D cameras built around a single image sensor without a light-emitting device [13]. Two drawbacks, however, typically hinder the miniaturization of LFCs: (1) the camera systems are usually large because of the size of the main lens; and (2) focus alignment of the micro-lens array (MLA) is challenging during integration at the image-sensor stage.
Here, we report a novel compact 3D camera based on an LFC consisting of an image sensor, an MLA, and a single aperture that captures images from different viewpoint angles. The size of the LFC optical system was significantly reduced by using only one aperture without a main lens. The aperture and each micro-lens of the MLA act as a camera at a different position, capturing multi-viewpoint images in a manner similar to a focused plenoptic camera, one of the conventional realizations of the LFC. Optical simulation based on Monte Carlo ray tracing was performed to determine a valid aperture size and distance from the MLA. Furthermore, we propose a simplified integration method that solves the complex optical alignment issue and thus achieves the targeted miniaturization: the image-sensor stage was constructed by placing the engineered MLA directly on the sensor, thereby precluding difficulties in micro-scale focus alignment. The proposed compact LFC system therefore addresses the miniaturization issues faced by both multi-camera and active 3D camera systems.

2. Results

The structure of a conventional LFC comprises a main lens, a micro-lens array, and a single image sensor, as shown in Figure 1a. Generally, LFCs employ the micro-lens array to capture the intensity and direction of all the light rays arriving from a scene through the main lens [21,22,23,24,25]. Moreover, an alignment structure is used at the image-sensor stage to set the optical focus at the focal length of the MLA. Such a structure is an obstacle both to the miniaturization of LFC systems and to the integration of image sensors with an MLA. We propose a significantly simplified optical system, shown in Figure 1b, to remove these impediments. The scene viewed through the single aperture is formed on the image sensor as multiple images captured at various angles through each micro-lens of the array, analogous to a multi-camera system. Furthermore, each micro-lens was fabricated as a plano-convex lens, and the image-sensor stage was configured without an alignment structure: the MLA, designed via ray-tracing-based optical simulation, itself provides the required optical focal length. Figure 1c shows a miniaturized LFC system implemented in a smartphone camera using a simple assembly. Note that focus alignment was achieved by placing the engineered MLA on the image sensor of the smartphone, as shown in Figure 1d. Figure 1e demonstrates image acquisition through the miniaturized LFC configured in the smartphone camera.
Figure 2a shows a schematic of the relevant parameters in miniaturized LFC systems. The angle formed between the aperture and the micro-lens array produces images of a scene taken from various angles. The angle difference between the captured images is related to the position of each micro-lens and the aperture distance from the MLA, denoted by d. The shorter the distance d, or the larger the pitch p of the micro-lens, the greater the angle difference between the images. Note that overlap can occur between micro-images; it is therefore necessary to determine appropriate values of d and p. We designed a practical micro-lens array pitch p, aperture size s, and distance d from the MLA to achieve this multi-viewpoint image formation. Figure 2b shows the difference in focal lengths with and without a space when the micro-lens is composed of poly-dimethylsiloxane (PDMS) with a radius of 200 µm; these values were found from ray-tracing-based simulations. By adjusting the PDMS thickness for the case of a focal length without a space, the focal length was relatively increased, thereby reducing the difficulty of controlling the thickness during the manufacture of the MLA [26,27]. Furthermore, by optimizing the thickness of the PDMS, the increased focal length in the absence of a space achieved better performance in terms of the root-mean-square (RMS) spot radius. Figure 2c presents simulated values of the optimal MLA thickness, which requires no spacer alignment structure, as a function of the micro-lens radius. The optimum thickness increases roughly in proportion to the radius, so it can be determined for any target radius. Our target radius was 200 µm; hence, approximately 0.1 million pixels comprised one micro-image, and the optimal thickness of the MLA was 665 µm. The remaining parameters to consider were the entrance pupil and d. The design value for the entrance pupil of the aperture was 4 mm, slightly smaller than the entrance size of the cover glass of the smartphone, to allow integration of the LFC system. Figure 2d shows the results of ray-tracing-based image simulation as a function of the aperture distance from the MLA. When d was set to 7 mm, the field of view of the image entering through the aperture was small, and the image fill factor decreased significantly for the designed MLA. However, when d was set to approximately 3 mm, the viewing angle became large and the images overlapped. Therefore, we used 5 mm as a valid value for d. As depicted in Figure 2e, the captured multi-viewpoint images were well matched and showed the same tendency as in Figure 2d.
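As a numerical aside, the "no-space" design can be sanity-checked with paraxial optics: when the image plane lies inside the lens material rather than in air, the back focal distance follows single-surface refraction instead of the thin-lens formula. The Python sketch below reproduces this reasoning under the assumption that n_PDMS ≈ 1.41 (the refractive index is not stated in the paper); it is a first-order check, not the authors' ray-tracing model.

```python
# Paraxial estimate of the "no-space" MLA thickness: with no air gap, the image
# plane sits inside the PDMS, so the back focal distance follows single-surface
# refraction (air -> PDMS): n/f = (n - 1)/R, i.e., f = n * R / (n - 1).
# Assumption: n_PDMS ~ 1.41 at visible wavelengths (not given in the paper).

def back_focal_distance_in_medium(radius_um: float, n: float = 1.41) -> float:
    """Back focal distance (um), measured inside the lens medium, for a
    plano-convex surface refracting from air into a medium of index n."""
    return n * radius_um / (n - 1.0)

def thin_lens_focal_in_air(radius_um: float, n: float = 1.41) -> float:
    """Thin-lens focal length (um) in air, for comparison: f = R / (n - 1)."""
    return radius_um / (n - 1.0)

if __name__ == "__main__":
    R = 200.0  # micro-lens radius of curvature (um), from the paper
    print(f"focal length in air:        {thin_lens_focal_in_air(R):.0f} um")         # ~488 um
    print(f"focal distance inside PDMS: {back_focal_distance_in_medium(R):.0f} um")  # ~688 um
    # The paper's optimized MLA thickness is 665 um; the paraxial estimate lands
    # in the same range, with the residual gap closed by the authors'
    # ray-tracing optimization of the RMS spot radius.
```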
Figure 3a shows the overall fabrication procedure of the quartz MLA master mold. The fabrication steps were as follows:
(i) As a hard mask for hydrofluoric acid (HF) wet etching, poly-Si was deposited on both sides of a quartz substrate. The thickness of the deposited poly-Si was set to 700 nm to prevent the penetration of HF. Photoresist (PR) hole patterning was then performed on one side of the poly-Si using photolithography.
(ii) The patterned sample was dry-etched using SF6 gas in an inductively coupled plasma reactive ion etcher (ICP-RIE) to transfer the hole pattern to the poly-Si. The ICP-RIE etching recipe for poly-Si was as follows: SF6 flow/working pressure/RF power/ICP power/etching time = 50 sccm/4 mTorr/50 W/100 W/2 min.
(iii) The patterned sample was immersed in an HF bath for 400 min, and the HF solution isotropically etched the quartz through the hole pattern of the poly-Si. Because the via holes were small (diameter: ~2 μm), the etch proceeded with hemispherical isotropy centered on each via hole; therefore, micro-lenses with the same radius and sag height were created across the quartz substrate (a quick geometric check follows the list below).
(iv) Subsequently, the poly-Si was removed using potassium hydroxide (KOH) at a temperature of 150 °C for 30 min.
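As referenced in step (iii), the etch geometry admits a quick consistency check, assuming a perfectly isotropic etch; the etch rate below is inferred from the reported time and target radius, not reported by the authors.

```python
import math

# Consistency check of the isotropic HF etch (step iii): etching through a
# ~2 um via hole produces a nearly hemispherical cavity, so the lens radius of
# curvature roughly equals the etch depth.
etch_time_min = 400.0     # immersion time, from the paper
target_radius_um = 200.0  # target micro-lens radius, from the paper

implied_rate = target_radius_um / etch_time_min  # inferred, not reported
print(f"implied quartz etch rate: {implied_rate:.2f} um/min")  # 0.50 um/min

# Sag height for a spherical cap of radius R and aperture diameter D:
# sag = R - sqrt(R^2 - (D/2)^2). With D = 400 um (Table 1), D/2 = R, so the
# lens is a full hemisphere and sag = R = 200 um.
D = 400.0
sag = target_radius_um - math.sqrt(max(target_radius_um**2 - (D / 2)**2, 0.0))
print(f"sag height: {sag:.0f} um")
```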
Figure 3b displays the PDMS replica molding process conducted using the fabricated quartz master mold with a concave micro-lens array. Before PDMS replica molding, an anti-adhesive was sprayed onto the mold; a fluorocarbon mold release agent was used (DAIFREE GA-7550, DAIKIN, Japan), with a spraying distance of 30 cm and a spraying time of 4 s. PDMS with a density of 0.97 g/cm³ was poured into a Petri dish [28]. The optimized MLA thickness was produced by controlling the weight of the PDMS using a container and a precision balance [29]. Figure 3c shows the correlation between MLA thickness and PDMS weight (3–5 g); the correlation is almost linear when PDMS is poured into a flat Petri dish with a diameter of 100 mm. The estimated MLA thickness fabricated using 4 g of PDMS is 663 µm, matching the designed focal length of 665 µm with a negligible error of ~0.3%. PDMS curing was performed for 6 h at 70 °C in a convection oven. Note that an unleveled top surface of the PDMS MLA would prevent proper image formation owing to focal-length misalignment; thus, curing of the PDMS MLA must be performed on a leveled optical stage. After complete curing of the PDMS, the MLA was carefully detached from the quartz master mold. Figure 3d shows an exploded view of the simple configuration of the miniaturized LFC system. The engineered MLA with the optimum thickness was placed on the sensor at the sensor-module manufacturing stage. Moreover, an aperture with a 4 mm entrance pupil was fabricated using a 3D printer (Ultimaker 3, Ultimaker, Netherlands) to implement the small-form-factor LFC system in a smartphone camera with a simple assembly. Through this manufacturing method, the camera could be miniaturized in a simple way without fabricating and aligning an elaborate spacer, which is one of the most difficult processes in conventional compact cameras [30,31].
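For intuition about the weight-thickness calibration, an ideal flat-layer model gives thickness = (mass/density)/area. The sketch below implements this model; it is ours, not the authors', and it underestimates the reported calibration point (~525 µm ideal vs. ~663 µm measured for 4 g), plausibly because the quartz master mold displaces part of the dish volume, which is presumably why the authors rely on the empirical linear fit of Figure 3c.

```python
import math

# First-order estimate of poured-PDMS layer thickness in a flat dish:
# thickness = volume / area = (mass / density) / (pi * r^2).
# Assumptions: uniform flat layer, no mold in the dish, no meniscus.

def pdms_layer_thickness_um(mass_g: float, dish_diameter_mm: float = 100.0,
                            density_g_cm3: float = 0.97) -> float:
    """Ideal uniform-layer thickness (um) of PDMS poured into a flat Petri dish."""
    area_cm2 = math.pi * (dish_diameter_mm / 20.0) ** 2  # mm diameter -> cm radius
    thickness_cm = (mass_g / density_g_cm3) / area_cm2
    return thickness_cm * 1e4  # cm -> um

print(f"{pdms_layer_thickness_um(4.0):.0f} um")
# ~525 um for the ideal model; the paper's empirical calibration gives ~663 um
# for 4 g, so the ideal model is only a first-order guide.
```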

3. Discussion

Figure 4 shows how a stitched image is constructed from the multi-viewpoint images, which is one of the essential features of the developed LFC. Figure 4a shows a checkerboard image captured with the smartphone light field camera. The captured images show that each micro-image has a distinctly different viewing angle. Typically, camera calibration is performed using checkerboard pattern images. To calibrate the smartphone LFC, reference points were marked on a checkerboard, as shown in Figure 4b; in particular, the reference points were marked on the vertices of each black square in the image. With the camera calibrated in this way, the captured Lena image shown in Figure 4c was processed into the image shown in Figure 4d using a stitching algorithm in MATLAB (MathWorks, USA). Moreover, images with different view directions can be acquired, resulting in wider view angles than those obtained with the center micro-lens alone. As shown in Table 1, an individual micro-lens has a view angle of approximately 25°, but image stitching can be used to obtain images with a view angle of 52°, more than twice as wide. In our system, the main lens of a conventional LFC is replaced with a single aperture, and images are acquired only through the micro-lenses without a relay optical device; consequently, some blur is visible in the images shown in Figure 4. However, the image quality can be improved by minimizing spherical aberration through an MLA with a low sag height [32]. Moreover, many studies have been conducted on thin cameras using MLAs with low effective resolution, in which image post-processing methods, such as super-resolution processing, achieve a level similar to that of a main-lens camera [29,30].
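The stitching itself was performed in MATLAB; as an illustration of the general approach, the following Python/OpenCV sketch stitches two adjacent micro-images via feature matching and a RANSAC homography. It is a minimal stand-in, not the authors' pipeline, and the specific choices (ORB features, 100 matches, a 5-pixel RANSAC threshold, maximum-value blending) are our assumptions.

```python
import cv2
import numpy as np

# Sketch of homography-based stitching for two overlapping micro-images,
# in the spirit of the paper's MATLAB pipeline (not the authors' code).
# Inputs: two overlapping 8-bit grayscale crops, e.g., the center view and an
# adjacent view extracted from the raw LFC frame.

def stitch_pair(img_ref: np.ndarray, img_adj: np.ndarray) -> np.ndarray:
    orb = cv2.ORB_create(1000)                       # detect keypoints + descriptors
    k1, d1 = orb.detectAndCompute(img_ref, None)
    k2, d2 = orb.detectAndCompute(img_adj, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d2, d1), key=lambda m: m.distance)[:100]
    src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)   # adj -> ref mapping
    h, w = img_ref.shape
    canvas = cv2.warpPerspective(img_adj, H, (2 * w, h))   # warp onto wider canvas
    canvas[:, :w] = np.maximum(canvas[:, :w], img_ref)     # crude overlap blend
    return canvas
```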
Figure 5 shows a comparison of simulation and measurement results for the quantitative analysis of pixel disparity as a function of distance, arising from the different view angles in the image. Figure 5a shows a schematic diagram of a ray-tracing-based simulation of the designed optical system, which can measure point sources located on the same optical axis; for such on-axis sources, a conventional camera registers no pixel shift as the distance changes. The distance Sref. of the reference point source was set to 1 m, and the simulation result was obtained by moving the distance Scont. of the control point source from 1 to 5 cm. Figure 5b shows that the pixel shift for a given distance change tends to increase as the object moves closer to the aperture than the reference point and as the micro-image moves farther from the center micro-lens. Figure 5c shows the quantitative pixel shift as a function of the control point source distance for the images obtained from the center to view #3 at each micro-lens position. The pixel shift exhibits a linear change because the viewing angle changes linearly with the position of the micro-lens.
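The simulated pixel-shift trend can be rationalized with a simple paraxial model (ours, not the paper's ray-tracing simulation): treating each micro-lens as a pinhole camera of focal length f whose neighbors are offset by the pitch p, the disparity of a point at distance Scont., referenced to a point at Sref., is Δx = f·p·(1/Scont. − 1/Sref.). The sketch below uses only values from Table 1; the pinhole approximation is an assumption.

```python
# Paraxial model of the inter-view pixel shift (cf. Figure 5). Values below
# come from Table 1 of the paper; the pinhole-camera model is ours.

f_um = 665.0               # micro-lens focal length (um)
p_um = 400.0               # micro-lens pitch = diameter (um)
pixel_um = 400.0 / 260.0   # ~1.54 um/pixel: 260 pixels span one 400 um micro-image

S_ref_um = 1.0e6           # reference point source at 1 m
S_cont_um = 3.0e4          # control point source at 3 cm

shift_um = f_um * p_um * (1.0 / S_cont_um - 1.0 / S_ref_um)
print(f"shift between adjacent views: {shift_um / pixel_um:.1f} px")
# ~5.6 px, consistent with the 6-pixel shift of the near (red) object measured
# between adjacent views (see Table 2 below).
```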
To carry out multi-viewpoint image acquisition, raw data were obtained by placing a near object (3 cm) and a distant object, as shown in Figure 5d. The raw data were captured as shown in Figure 5e, and the centers of the images in the black dotted boxed area were set as the reference line to analyze the difference in view direction between the micro-images arranged on the left and on the right. The viewpoint angles vary depending on the images formed by the respective micro-lenses. The lateral pixel position difference between the distant- and near-object image pixels was compared to analyze the disparity of these images. Table 2 shows the pixel point position for each image relative to the remote reference line; the data in parentheses represent the shift in lateral pixel position between adjacent viewpoints. Between adjacent views, the lateral shift of the red object is 6 pixels, the same as in the simulations with Scont. located at 3 cm. In contrast, the pixel shift of the blue object is zero because it was located on the remote reference line. Figure 5f presents the post-processed results for the obtained multi-viewpoint image. To demonstrate 3D imaging, we extracted the disparity map using a local stereo matching method [33]. Each viewpoint image in the obtained raw data was cropped and aligned with the distant blue object, which serves as the reference point source in Figure 5a. Disparity maps were then extracted from the aligned, equal-sized images through local stereo matching (Figure 5f, middle layer), and reconstructed images were extracted using the disparity and cropped images (Figure 5f, top layer). Consequently, we successfully demonstrated multi-viewpoint image acquisition with a single aperture, which is the most important feature for validating the 3D depth-sensing function of our miniaturized LFC system.
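Reference [33] describes classical local (block-based) stereo matching. The minimal sketch below uses OpenCV's StereoBM as a stand-in for that step; the authors' exact implementation and parameters are not given, so the search range and block size here are assumptions.

```python
import cv2
import numpy as np

# Minimal sketch of the disparity-map step via local block matching, using
# OpenCV's StereoBM as a stand-in for the local stereo matching method [33].
# Inputs: two aligned 8-bit grayscale viewpoint crops of equal size.

def disparity_map(view_left: np.ndarray, view_right: np.ndarray) -> np.ndarray:
    bm = cv2.StereoBM_create(numDisparities=16, blockSize=15)  # 16-px search range
    disp = bm.compute(view_left, view_right).astype(np.float32) / 16.0
    disp[disp < 0] = 0.0   # invalidate unmatched pixels
    return disp            # disparity in pixels; larger values = closer objects
```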

4. Conclusions

In summary, the results reported in this paper demonstrate that miniaturized 3D camera systems based on light field cameras offer several attractive features for smartphones. The compact configuration can be implemented in conventional smartphones; depth sensing is passive, requiring neither significant energy consumption nor the large volume of active lighting; and the design is resilient, consisting of a single image sensor and therefore requiring no camera synchronization or compensation of inter-sensor deviations. The miniaturized 3D camera design captures different viewing directions and comprises a single aperture and a micro-lens array. This compact system design was theoretically validated using ray-tracing-based simulations. Moreover, a sequential fabrication process (i.e., photolithography, isotropic wet etching, polymer replica molding, and 3D printing) was used to implement the engineered micro-lens arrays and apertures. Multi-view image acquisition and 3D depth map extraction, which are key capabilities of light field cameras, were also achieved with images captured using the compact 3D camera system integrated into a smartphone. These results indicate that the proposed 3D camera system, with its simplified optical configuration, light weight, and low power consumption, is a promising path toward advanced compact depth-sensing systems for electronic applications that continue to be downsized.

Author Contributions

Conceptualization, H.M.K.; methodology, H.M.K.; software, M.S.K. and H.J.J.; validation, H.M.K., M.S.K., G.J.L. and Y.M.S.; formal analysis, M.S.K. and H.J.J.; investigation, H.M.K.; resources, M.S.K. and H.J.J.; data curation, M.S.K.; writing—original draft preparation, H.M.K. and M.S.K.; writing—review and editing, H.M.K., M.S.K., G.J.L. and Y.M.S.; visualization, H.M.K. and M.S.K.; project administration, Y.M.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIP) (NRF2017M3D1A1039288 and 2018R1A4A1025623), by the Institute for Information & Communications Technology Promotion (IITP) grant funded by the Korean government (MSIP) (No. 2017000709), and by Samsung Electronics.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Grüger, H.; Knobbe, J.; Pügner, T.; Reinig, P.; Meyer, S. New way to realize miniaturized complex optical systems in high volume. Proc. SPIE 2018, 10545, 1054505. [Google Scholar]
  2. Yang, S.P.; Seo, Y.H.; Kim, J.B.; Kim, H.W.; Jeong, K.H. Optical MEMS devices for compact 3D surface imaging cameras. Micro Nano Syst. Lett. 2019, 7, 8. [Google Scholar] [CrossRef]
  3. Wilk, M.P.; O’Flynn, B. Miniaturized Low-Power Wearable System for Human Motion Tracking Incorporating Monocular Camera and Inertial Sensor Data Fusion for Health Applications. In Proceedings of the Smart Systems Integration: 13th International Conference and Exhibition on Integration Issues of Miniaturized Systems, Barcelona, Spain, 10–11 April 2019; pp. 1–4. [Google Scholar]
  4. Son, Y.; Yoon, S.; Oh, S.; Han, S. A Lightweight and Cost-Effective 3D Omnidirectional Depth Sensor Based on Laser Triangulation. IEEE Access 2019, 7, 58740–58750. [Google Scholar] [CrossRef]
  5. Mattoccia, S.; Marchio, I.; Casadio, M. A Compact 3D Camera Suited for Mobile and Embedded Vision Applications. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, Columbus, OH, USA, 23–28 June 2014; pp. 195–196. [Google Scholar]
  6. Wippermann, F.C.; Brückner, A.; Oberdörster, A.; Reimann, A. Novel multi-aperture approach for miniaturized imaging systems. In Proceedings of the MOEMS and Miniaturized Systems XV, SPIE OPTO, San Francisco, CA, USA, 16–18 February 2016; p. 9760. [Google Scholar]
  7. Tijmons, S.; Croon, G.H.; Remes, B.D.; Wagter, C.D.; Mulder, M. Obstacle Avoidance Strategy using Onboard Stereo Vision on a Flapping Wing MAV. IEEE Trans. Robot. 2017, 33, 858–874. [Google Scholar] [CrossRef] [Green Version]
  8. Hardin, P.J.; Lulla, V.; Jensen, R.R.; Jensen, J.R. Small Unmanned Aerial Systems (sUAS) for environmental remote sensing: Challenges and opportunities revisited. GISci. Remote Sens. 2018, 56, 309–322. [Google Scholar] [CrossRef]
  9. Bharadwaj, A.; Schultz, A.; Gilabert, R.; Huff, J.; Haag, M.U. Small-UAS navigation using 3D imager and infrared camera in structured environments. In Proceedings of the IEEE/ION Position, Location and Navigation Symposium (PLANS), Savannah, GA, USA, 11–14 April 2016; p. 16036806. [Google Scholar]
  10. Kanellakis, C.; Nikolakopoulos, G. Survey on Computer Vision for UAVs: Current Developments and Trends. J. Intell. Robot. Syst. 2017, 87, 141–168. [Google Scholar] [CrossRef] [Green Version]
  11. Murray, D.; Little, J.J. Using Real-Time Stereo Vision for Mobile Robot Navigation. Auton. Robots 2000, 8, 161–171. [Google Scholar] [CrossRef]
  12. Fleischmann, P.; Berns, K. A Stereo Vision Based Obstacle Detection System for Agricultural Applications. In Field and Service Robotics; Springer Tracts in Advanced Robotics; Wettergreen, D., Barfoot, T., Eds.; Springer: Cham, Switzerland, 2016; Volume 113, pp. 217–231. [Google Scholar]
  13. Uchida, N.; Shibahara, T.; Aoki, T.; Nakajima, H.; Kobayashi, K. 3D face recognition using passive stereo vision. In Proceedings of the IEEE International Conference on Image Processing, Genova, Italy, 11–14 September 2005; p. 8845764. [Google Scholar]
  14. Li, B.; An, T.; Cappelleri, D.; Xu, J.; Zhang, S. High-accuracy, high-speed 3D structured light imaging techniques and potential applications to intelligent robotics. Int. J. Intell. Robot. Appl. 2017, 1, 86–103. [Google Scholar] [CrossRef]
  15. Geng, J. Structured-light 3D surface imaging: A tutorial. Adv. Opt. Photonics 2011, 3, 128–160. [Google Scholar] [CrossRef]
  16. Zhang, S. High-speed 3D shape measurement with structured light methods: A review. Opt. Laser Eng. 2018, 106, 119–131. [Google Scholar] [CrossRef]
  17. Lange, R.; Seitz, P. Solid-state time-of-flight range camera. IEEE J. Quantum Electron. 2001, 37, 390–397. [Google Scholar] [CrossRef] [Green Version]
  18. Sarbolandi, H.; Lefloch, D.; Kolb, A. Kinect range sensing: Structured-light versus Time-of-Flight Kinect. Comput. Vis. Image Underst. 2015, 139, 1–20. [Google Scholar] [CrossRef] [Green Version]
  19. Haase, J.F.; Beer, M.; Ruskowski, J.; Vogt, H. Multi object detection in direct Time-of-Flight measurements with SPADs. In Proceedings of the Conference on Ph.D. Research in Microelectronics and Electronics, Prague, Czech Republic, 2–5 July 2018; p. 18001185. [Google Scholar]
  20. Antipa, N.; Kuo, G.; Heckel, R.; Mildenhall, B.; Bostan, E.; Ng, R.; Waller, L. DiffuserCam: Lensless single-exposure 3D imaging. Optica 2018, 5, 1–9. [Google Scholar] [CrossRef]
  21. Ng, R.; Levoy, M.; Bredif, M.; Duval, G.; Horowitz, M.; Hanrahan, P. Light Field Photography with a Hand-held Plenoptic Camera. Comput. Sci. Tech. Rep. CTSR. 2005, 2, 1–11. [Google Scholar]
  22. Lumsdaine, A.; Georgiev, T. The focused plenoptic camera. In Proceedings of the IEEE International Conference on Computational Photography, San Francisco, CA, USA, 16–17 April 2009; p. 11499059. [Google Scholar]
  23. Wang, J.Y.A.; Adelson, E.H. Single Lens Stereo with a Plenoptic Camera. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 99–106. [Google Scholar]
  24. Dansereau, D.G.; Pizarro, O.; Williams, S.B. Decoding, Calibration and Rectification for Lenselet-Based Plenoptic Cameras. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Portland, OR, USA, 23–28 June 2013; pp. 1027–1034. [Google Scholar]
  25. Fahringer, T.W.; Lynch, K.P.; Thurow, B.S. Volumetric particle image velocimetry with a single plenoptic camera. Meas. Sci. Technol. 2015, 26, 11. [Google Scholar] [CrossRef]
  26. Jo, B.H.; Lerberghe, L.M.; Motsegood, K.M.; Beebe, D.J. Three-Dimensional Micro-Channel Fabrication in Polydimethylsiloxane (PDMS) Elastomer. J. Microelectromech. Syst. 2000, 9, 1. [Google Scholar] [CrossRef]
  27. Jeong, O.C.; Park, S.W.; Yang, S.S.; Pack, J.J. Fabrication of a peristaltic PDMS micropump. Sens. Actuator A Phys. 2005, 23, 453–458. [Google Scholar] [CrossRef]
  28. Dinh, T.H.N.; Martincic, E.; Dufour-Gergam, E.; Joubert, P.Y. Mechanical Characterization of PDMS Films for the Optimization of Polymer Based Flexible Capacitive Pressure Microsensors. J. Sens. 2017, 2017, 8235729. [Google Scholar] [CrossRef]
  29. Jang, N.S.; Ha, S.H.; Kim, K.H.; Cho, M.H.; Kim, S.H.; Kim, J.M. Low-power focused-laser-assisted remote ignition of nanoenergetic materials and application to a disposable membrane actuator. Combust. Flame 2017, 182, 58–63. [Google Scholar] [CrossRef]
  30. Bruckner, A.; Oberdorster, A.; Dunel, J.; Reimann, A.; Muller, M.; Wippermann, F. Ultra-thin wafer-level camera with 720p resolution using micro-optics. Proc. SPIE 2014, 9193, 91930W. [Google Scholar]
  31. Kim, K.; Jang, K.; Ryu, J.; Jeong, K. Biologically inspired ultrathin arrayed camera for high-contrast and high-resolution imaging. Light Sci. Appl. 2020, 9, 28. [Google Scholar] [CrossRef] [Green Version]
  32. Kim, H.M.; Kim, M.S.; Lee, G.J.; Yoo, Y.J.; Song, Y.M. Large area fabrication of engineered microlens array with low sag height for light-field imaging. Opt. Express 2019, 27, 4435–4444. [Google Scholar] [CrossRef] [PubMed]
  33. Barnard, S.T.; Martin, A.F. Computational stereo. ACM Comput. Surv. (CSUR) 1982, 14, 553–572. [Google Scholar] [CrossRef]
Figure 1. (a) Schematic of a conventional light field camera (LFC). (b) Schematic of a minimized LFC structure with only one aperture. (c) Photograph of a minimized LFC integrated into a smartphone. (d) Magnified photograph of a micro-lens array (MLA) stacked on an image sensor. The inset displays a scanning electron microscope (SEM) image of the fabricated MLA. The scale bar is 500 µm. (e) Demonstration of the multi-viewpoint image acquisition of the proposed module.
Figure 2. (a) Schematic of the analysis parameters for the miniaturized LFC system. (b) Schematic of the ray-tracing simulation for a micro-lens with space and without space with the same radius, and comparison of root-mean-square (RMS) spot radii. (c) Optimum MLA thickness according to the micro-lens radius. (d) Image acquisition simulation results according to the aperture distance from the MLA. (e) Multi-viewpoint image acquired using the minimized LFC smartphone camera.
Figure 3. (a) Procedure schemes for quartz MLA mold fabrication. (b) Schematic of the replica molding process of the poly-dimethylsiloxane (PDMS) MLA. The inset shows an SEM image of the fabricated quartz master mold (left) and replicated PDMS MLA (right). The scale bar is 200 µm. (c) The graph shows that the MLA thickness is linearly proportional to the poured PDMS weight in a Petri dish with a diameter of 100 mm. (d) A magnified schematic of the configuration of the minimized LFC system.
Figure 3. (a) Procedure schemes for quartz MLA mold fabrication. (b) Schematic of the replica molding process of the poly-dimethylsiloxane (PDMS) MLA. The inset shows an SEM image of the fabricated quartz master mold (left) and replicated PDMS MLA (right). The scale bar is 200 µm. (c) The graph shows that the MLA thickness is linearly proportional to the poured PDMS weight in a Petri dish with a diameter of 100 mm. (d) A magnified schematic of the configuration of the minimized LFC system.
Sensors 20 02129 g003
Figure 4. (a) Photograph of the checkerboard image for calibration with the miniaturized LFC system. (b) Reference point image on the checkerboard used to perform camera calibration. (c) Photograph of the Lena picture captured by the miniaturized LFC. (d) Images with wider view angles obtained using image stitching techniques from the captured Lena image.
Figure 5. (a) Schematic of ray tracing for the calculation of the pixel shift. Parameters Sref., Scont., and d represent the reference point source, the control point source, and the distance between the MLA and the aperture, respectively. (b) Contour image of the simulation results according to the distance of the point source; each image shows seven viewpoint differences. The red spot depicts the reference point source, whereas the blue spot indicates the control point source. When the control point source is close (Scont. = 1 cm), the pixel shift increases (top); when it is farther away (Scont. = 5 cm), the pixel disparity decreases (bottom). (c) Graph of the pixel shift according to the point source location. (d) Photograph of the measurement conditions. (e) Image captured with the mobile light field camera. (f) Post-processed results for the obtained light field image; the layers represent the original image (bottom), the disparity map (middle), and the reconstructed map (top).
Table 1. Specifications of the smartphone light field camera.

Pixels of each sub-image: 260 × 260
Radius of the micro-lens: 200 µm
Diameter of each micro-lens: 400 µm
Focal length of each micro-lens: 665 µm
Acceptance angle of each micro-lens: 25°
Total field of view: 52°
Table 2. Pixel location of the red and blue objects from the reference point in Figure 5e.

Object | View-1 | View-2 | View-3
Red    | 48     | 42 (6) | 36 (6)
Blue   | 15     | 15     | 15
