Article

A Single Image 3D Reconstruction Method Based on a Novel Monocular Vision System

1 Department of Mechanical Engineering, Shantou University, Shantou 515063, China
2 Key Laboratory of Intelligent Manufacturing Technology, Ministry of Education, Shantou University, Shantou 515063, China
* Author to whom correspondence should be addressed.
Sensors 2020, 20(24), 7045; https://doi.org/10.3390/s20247045
Submission received: 4 November 2020 / Revised: 6 December 2020 / Accepted: 7 December 2020 / Published: 9 December 2020
(This article belongs to the Special Issue Sensors and Computer Vision Techniques for 3D Object Modeling)

Abstract

Three-dimensional (3D) reconstruction and measurement are popular techniques in precision manufacturing processes. In this manuscript, a single image 3D reconstruction method is proposed based on a novel monocular vision system, which includes a three-chip charge coupled device (3-CCD) camera and ring-structured multi-color light emitting diode (LED) illumination. Firstly, a procedure for calibrating the illumination parameters, including the LEDs' mounting angles, distribution density and incident angles, is proposed. Secondly, the incident light information, the color distribution information and the gray level information are extracted from the acquired image, and the 3D reconstruction model is built based on the camera imaging model. Thirdly, the surface height information of the detected object within the field of view is computed based on the built model. The proposed method aims at solving the uncertainty and slow convergence issues arising in 3D surface topography reconstruction with current shape-from-shading (SFS) methods. Three-dimensional reconstruction experiments were carried out on convex, concave and angular surfaces and on a mobile subscriber identification module (SIM) card slot, showing relative errors below 3.6% in all cases. The proposed method also requires less time for 3D surface reconstruction than comparable methods, demonstrating its good suitability for reconstructing surface 3D morphology.

1. Introduction

Reconstructing three-dimensional (3D) surface morphology helps to quantify product surface quality, characterize product defects in the manufacturing process and analyze the defect distribution. Therefore, 3D reconstruction technology has been widely used in industrial manufacturing processes [1,2,3,4]. With the rapid advancement of automation technology, it is necessary to develop fast and reliable methods for 3D reconstruction and measurement of surface morphology to meet stricter process quality requirements [5,6,7].
In terms of 3D reconstruction and measurement, previous works have mainly focused on contact and non-contact approaches. Traditional contact approaches, such as coordinate measuring machines, atomic force microscopy (AFM) and other microscopies [8,9,10], are highly precise but slow and costly, and are therefore unsuitable for the fast measurement requirements of online fabrication. On the other hand, acoustic detection methods [11,12,13] and magnetic nanoparticle-based detection methods [14] belong to non-contact approaches, which have been extensively used in manufacturing processes [13,14]. Such methods offer high detection speed, but their measurement accuracy is easily affected by the response time and by the active signal source.
Compared with the above-mentioned contact and non-contact methods, optical measurement methods are widely used because they are non-contact, non-destructive, high-resolution and high-speed [15,16,17,18]. Multi-vision systems are typical optical measurement methods, able to reconstruct the 3D surface through calibrated cameras and space intersection calculations [19,20,21,22,23]. However, image matching remains a challenging task in the 3D surface reconstruction process because it requires massive matching computation and is prone to environmental lighting issues [24,25,26], making it costly and computationally time consuming [27].
Single image methods have also recently been proposed for 3D surface reconstruction, such as the monocular vision method and the structured light method. The monocular vision method originally derives from shape-from-shading (SFS) methods [28,29], which estimate the 3D shape from shading, defocus and correspondence using just a single-capture passive light-field image. Because SFS methods are mainly developed from the Lambertian model of illumination reflection [30], they are ill-posed and have no unique solution. To address this problem, SFS approaches usually assume that the studied object is smooth and then establish a regularized model of the SFS problem according to the relevant conditions, mainly through minimization, propagation, local and linearization methods [31,32]. However, the experimental results of 3D surface reconstruction are prone to uncertainties [33]. Recently, optical 3D measurement methods based on structured light have been developed [34,35]. In such methods, color evaluation is a crucial aspect: to relate color to the 3D reconstruction, color calibration is usually formulated as a color purity problem [36,37,38]; nevertheless, ensuring color purity easily demands a more accurate calibration.
From the available literature, 3D measurement methods have developed quickly in recent years, showing an increasing trend in terms of computational speed and reliability [39]. However, existing reconstruction methods still cannot meet fast and reliable measurement requirements. This paper aims at developing a quick 3D reconstruction method based on a monocular vision system, in which ring-structured light emitting diode (LED) illumination is used to project the object's 3D information into the image, so that only one image is required to realize the 3D measurement process.
This paper is organized as follows. The monocular vision system and its image information are presented in Section 2. Then, the imaging principle is analyzed, the 3D reconstruction model is built and the proposed method is developed in Section 3. Following that, a simplified calculation method for the proposed algorithm is designed in Section 4, experimental results are presented and discussed in Section 5, and conclusions are reported in Section 6.

2. Monocular Vision System and Its Image Information

The proposed monocular vision system is composed of a three-chip charge coupled device (3-CCD) camera and a red-green-blue (RGB) ring illumination source. As shown in Figure 1, the color light source includes three circular LED rings, respectively red, green and blue. The light source parameters, including the sizes, positions, incident angles and distribution density of the LEDs, are designed and calibrated accurately [40]. The detected objects are placed in the center of the worktable. In the acquired color image, each pixel has three intensity values of red, green and blue. Furthermore, these gray values vary with the inclination of the detected surface because of the light incident angles. Thus, the intensity values of red, green and blue reflect the variation trend of the detected surface; in other words, the red, green and blue light irradiates the flat, gently slanted and steeply slanted surfaces, respectively [41]. These characteristics will be used for image region segmentation and to design the proposed algorithm.
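As a purely illustrative aid to this channel-to-slope relationship, the hypothetical snippet below classifies each pixel of an acquired RGB image by its dominant channel into flat, gently slanted and steeply slanted regions; the function name, the array layout and the margin threshold are assumptions of this sketch and are not part of the system described in the paper.

```python
import numpy as np

def classify_slope_regions(rgb_image, margin=10):
    """Label each pixel as flat (0), gentle slant (1) or steep slant (2)
    according to which color channel dominates, following the idea that the
    red, green and blue LED rings mainly illuminate flat, gently slanted and
    steeply slanted surface patches, respectively.

    rgb_image : H x W x 3 uint8 array in (R, G, B) order.
    margin    : illustrative threshold guarding against near-equal channels.
    Pixels without a clearly dominant channel are labeled -1 (ambiguous).
    """
    r = rgb_image[..., 0].astype(np.int32)
    g = rgb_image[..., 1].astype(np.int32)
    b = rgb_image[..., 2].astype(np.int32)

    labels = np.full(rgb_image.shape[:2], -1, dtype=np.int8)
    labels[(r > g + margin) & (r > b + margin)] = 0   # red dominant   -> flat
    labels[(g > r + margin) & (g > b + margin)] = 1   # green dominant -> gentle slant
    labels[(b > r + margin) & (b > g + margin)] = 2   # blue dominant  -> steep slant
    return labels
```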
Unlike the SFS methods, the proposed method uses a multi-color ring-structured light whose parameters, namely the height, incident angle and distribution density of each LED, are calibrated in order to obtain the incident light position information in the acquired color image. Following the calibration, the same incident light intensities produce the same gray values in the same detected regions [40].

3. Analysis of Imaging Principle and Model of 3D Reconstruction

3.1. Principle of Imaging Based on the Monocular Vision System and the Ring-Structured Light Source

In this section, a 3D reconstruction model is presented based on the monocular vision system. A simplified monocular vision system is shown in Figure 2a, which includes a 3-CCD camera, a lens and a three-color light source (red, green and blue). The three lighting points S_r, S_g and S_b represent the red, green and blue luminous points of the ring light source, respectively, all located in the XOZ plane. The detected object cross section AGB is also in the XOZ plane. Because of the symmetry of the monocular vision system, every cross section containing the Z-axis and perpendicular to the XOY plane undergoes the same imaging process. The imaging in the XOZ plane is shown in Figure 2b, where the segment O_M M_H represents the image plane of the 3-CCD camera, L_1 L_2 is the lens width, the curve AGB stands for the detected surface, and the point G is irradiated by the light sources S_r, S_g and S_b. The point M_H(G_r, G_g, G_b) is the image of the detected point G, where G_r, G_g and G_b are its gray values of red, green and blue, respectively. The coordinate of G is defined as (x_G, h); in other words, the height of the detected point G is defined as h, which is also written as z_G.
In particular, h is unknown before reconstruction and measurement. For focused optical imaging with object distance H − h, image distance d and a thin lens of focal length f, the following relationship holds:
$\dfrac{1}{f} = \dfrac{1}{H - h} + \dfrac{1}{d}$ (1)
From Equation (1), the detected height can be expressed as:
$h = H - \dfrac{fd}{d - f}$ (2)
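As a quick numerical sanity check of Equations (1) and (2), the short snippet below plugs in the lens parameters used later in the experiments (Section 5: H = 285.4 mm, d = 31.6 mm, f = 28.4 mm); this is only an illustration of the thin-lens relation, not part of the reconstruction algorithm itself.

```python
# Numerical check of Equation (2), h = H - f*d/(d - f), using the lens
# parameters reported in Section 5 (all values in millimeters).
H = 285.4   # object distance to the reference plane
d = 31.6    # image distance
f = 28.4    # focal length

h = H - f * d / (d - f)
print(f"in-focus detected height h = {h:.2f} mm")   # approximately 4.95 mm
```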
However, as it is affected by the depth of field, h computed in this way may not be the exact height of the detected point. In other words, the image plane is usually fixed in an optical system, while the object plane varies with the surface of the detected object within a tolerated range. If the object plane does not match the image plane, objects located at different distances from the camera will appear blurred [42]. As shown in Figure 3, A, B and C are the detected points, L_1L_2 is the lens width and O_M A_1 is the image sensor. The detected point A is captured sharply because A_1 lies on the image plane O_M A_1, while the detected points B and C are imaged to B_1 and C_1, respectively. Since A_1 is fixed in the optical system, both B and C will therefore be captured as blurred points. Consequently, in this configuration it is not suitable to calculate the detected height by Equation (2) alone; to obtain the exact detected height, additional constraints should be considered.

3.2. The 3D Reconstruction Model

In an optical imaging system, every pixel of the image sensor represents a specific region of the detected object surface, denoted by σ_G. Its gray value indicates the energy intensity of the incoming light, emitted by the light source and reflected by the region σ_G of the detected object.
As shown in Figure 2, the physical point G, which is imaged as the pixel M_H, is lighted by the light sources S_r, S_g and S_b, and its gray values are G_r, G_g and G_b, respectively. The angle between the tangent plane at point G and the plane XOY is denoted by θ. The angle of incidence of S_g with respect to the plane XOY is denoted by β_g; its mirror-reflected ray is denoted by L_GM_H, with reflection angle φ, and represents the theoretical reflected ray toward the pixel M_H. The actual reflected ray is denoted by L_GR, and its angle to the line L_GM_H is denoted by γ_g. The light reflected from G along L_GM_H is focused on the point M_H. As shown in Figure 2, the incident angles of the red, green and blue light are denoted by β_r, β_g and β_b, respectively, so the following equations hold:
$\tan \beta_r = \dfrac{z_r - z_G}{x_r - x_G}$ (3)
$\tan \beta_g = \dfrac{z_g - z_G}{x_g - x_G}$ (4)
$\tan \beta_b = \dfrac{z_b - z_G}{x_b - x_G}$ (5)
$\tan(\varphi + \theta) = \dfrac{z_o - z_G}{x_o - x_G}$ (6)
in which x_r, z_r, x_g, z_g, x_b, z_b, x_o and z_o can be measured on the vision system, and x_G can be retrieved from its nominal physical size based on its pixel coordinate. Therefore, only the three variables z_G, φ and θ are unknown in Equations (3)–(6). Based on the triangle transformation, the reflection angles between the theoretical reflected ray and the actual reflected rays of red, green and blue, denoted by γ_r, γ_g and γ_b, respectively, can be expressed as:
$\gamma_r = \theta + \arctan\dfrac{z_r - z_G}{x_r - x_G}$ (7)
$\gamma_g = \theta + \arctan\dfrac{z_g - z_G}{x_g - x_G}$ (8)
$\gamma_b = \theta + \arctan\dfrac{z_b - z_G}{x_b - x_G}$ (9)
in which $0 < \gamma_r, \gamma_g, \gamma_b < \frac{\pi}{2}$.
The incident red, green and blue light intensities are denoted by I_r, I_g and I_b, respectively. These intensities are reflected by the detected surface element σ_G, and part of their energy is transmitted to the pixel element M_H. Furthermore, I_r, I_g and I_b are affected by the detected material and by the inclination angle of the detected surface. Based on Equations (7)–(9) and the reference model proposed in [40], if the influence factors of the material on the red, green and blue light are denoted by f_r, f_g and f_b, respectively (obtainable by experimental tests), then the relationships between the incident lights and their gray values can be written as follows:
$G_r = I_r \cdot \sigma_G \cdot f_r \cdot \cos\theta \cdot \cos\gamma_r$ (10)
$G_g = I_g \cdot \sigma_G \cdot f_g \cdot \cos\theta \cdot \cos\gamma_g$ (11)
$G_b = I_b \cdot \sigma_G \cdot f_b \cdot \cos\theta \cdot \cos\gamma_b$ (12)
Inserting Equations (7)–(9) into Equations (10)–(12) results in
$G_r = I_r \cdot \sigma_G \cdot f_r \cdot \cos\theta \cdot \cos\left(\theta + \arctan\dfrac{z_r - z_G}{x_r - x_G}\right)$ (13)
Similarly,
$G_g = I_g \cdot \sigma_G \cdot f_g \cdot \cos\theta \cdot \cos\left(\theta + \arctan\dfrac{z_g - z_G}{x_g - x_G}\right)$ (14)
$G_b = I_b \cdot \sigma_G \cdot f_b \cdot \cos\theta \cdot \cos\left(\theta + \arctan\dfrac{z_b - z_G}{x_b - x_G}\right)$ (15)
in which G_r, G_g and G_b are the gray values of red, green and blue, obtainable from the detected color image, and I_r, I_g, I_b, σ_G, f_r, f_g and f_b can be measured by calibrated experimental tests. In other words, only the variables z_G and θ remain unknown in Equations (13)–(15). Therefore, z_G, which gives the height of the detected area σ_G, can be calculated by solving Equations (13)–(15). By repeating this procedure over the whole detected area, the whole detected surface can be reconstructed and the measurement of the surface size can be carried out.
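To make the model concrete, the sketch below solves Equations (13)–(15) for the two unknowns z_G and θ of a single pixel with a nonlinear least-squares fit. All input values (gray levels, light positions, intensities, influence factors, σ_G) are placeholders that would come from the calibration described in [40], and the use of SciPy's least_squares is an implementation choice of this sketch rather than the authors' procedure.

```python
import numpy as np
from scipy.optimize import least_squares

def solve_pixel_height(gray_rgb, x_G, light_xz, I, f_mat, sigma_G,
                       z0=1.0, theta0=0.1):
    """Solve Equations (13)-(15) for (z_G, theta) at a single pixel.

    gray_rgb : (G_r, G_g, G_b) gray values of the pixel
    x_G      : known lateral coordinate of the detected point
    light_xz : {'r': (x_r, z_r), 'g': (x_g, z_g), 'b': (x_b, z_b)} calibrated
               positions of the red, green and blue luminous points
    I        : (I_r, I_g, I_b) calibrated incident intensities
    f_mat    : (f_r, f_g, f_b) material influence factors
    sigma_G  : area of the detected surface element
    """
    def residuals(p):
        z_G, theta = p
        res = []
        for k, ch in enumerate('rgb'):
            x_l, z_l = light_xz[ch]
            gamma = theta + np.arctan2(z_l - z_G, x_l - x_G)    # Equations (7)-(9)
            model = I[k] * sigma_G * f_mat[k] * np.cos(theta) * np.cos(gamma)
            res.append(model - gray_rgb[k])                     # Equations (13)-(15)
        return res

    sol = least_squares(residuals, x0=[z0, theta0])
    return sol.x  # estimated (z_G, theta)
```

With three equations and two unknowns the system is overdetermined, so a least-squares fit is a natural way to absorb measurement noise in the three gray values.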

4. Simplified Calculation Method for the Proposed Algorithm

As mentioned in Section 3, in order to reconstruct the whole detected surface, the proposed method needs to evaluate Equations (13)–(15) for every pixel of the detected image, which is inefficient and time consuming. In this section, the proposed algorithm is optimized to improve its reconstruction speed while keeping its inspection accuracy controllable.
Two aspects are considered in the following to simplify the proposed algorithm. Firstly, the detected color image is divided into different regions based on the color gradients. Secondly, polar coordinates are used to simplify the computation through cyclical calculation.

4.1. Algorithm Simplified Based on Regions

For a uniform material, the surface of the detected object has the same specular and diffuse reflection properties, which abide by the law of light reflection. In this manuscript, the detected image is obtained by the monocular vision system, which includes a 3-CCD color digital camera and a three-color (red, green and blue) ring-structured light source. As shown in Figure 1 and analyzed in Section 2, the red, green and blue distribution in the detected image indicates the surface slant gradients of the detected object [42]. Within a single-color layer image, adjacent pixels with the same gray values indicate the same height on the detected surface, while increasing or decreasing gray values indicate variations of the detected surface height. Therefore, the detected image can be divided into several regions based on the gray levels, and the height information of every region can be calculated by Equations (13)–(15) using the information of only one pixel within each region, as sketched below. In this way, a large number of redundant calculations is avoided, speeding up the reconstruction process.
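A minimal sketch of this region-based simplification is given below (assuming a Python/NumPy implementation): pixels of one gray-level channel are grouped into connected regions of quantized equal gray value, and the height model is evaluated only once per region. The quantization step and the use of scipy.ndimage.label are assumptions of the sketch, not details taken from the paper.

```python
import numpy as np
from scipy import ndimage

def segment_gray_regions(gray, step=8):
    """Group adjacent pixels with (nearly) the same gray value into regions.

    gray : 2-D uint8 gray-level image of one color channel.
    step : quantization width defining 'the same' gray value (assumed).
    Returns an integer label image (labels start at 1) and the region count.
    """
    quantized = (gray // step).astype(np.int32)
    labels = np.zeros_like(quantized)
    n_total = 0
    for level in np.unique(quantized):      # one quantized gray level at a time
        mask = quantized == level
        lab, n = ndimage.label(mask)        # connected components of this level
        labels[mask] = lab[mask] + n_total
        n_total += n
    return labels, n_total

def heights_per_region(labels, n_regions, solve_one_pixel):
    """Evaluate the height model only once per region (at one representative
    pixel) and broadcast the result to every pixel of that region."""
    height_map = np.zeros(labels.shape, dtype=np.float64)
    for region_id in range(1, n_regions + 1):
        members = np.argwhere(labels == region_id)
        if members.size == 0:
            continue
        i, j = members[0]                   # any pixel of the region will do
        height_map[labels == region_id] = solve_one_pixel(int(i), int(j))
    return height_map
```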

4.2. Algorithm Simplified Based on Coordinates Transformation

As shown in Figure 4, suppose that the detected point G corresponds to the pixel coordinates (i, j) in the detected image. δ_x and δ_y denote the ratio factors between the physical region and its pixel region in the X and Y directions, respectively. In the plane XOY, the region spanning [i·δ_x, i·δ_x + δ_x] in width and [j·δ_y, j·δ_y + δ_y] in height, which corresponds to the pixel coordinates (i, j), is the projection of the detected surface region G. z_G is the height of the detected surface region G. Then
$r = \sqrt{(i \cdot \delta_x + 0.5\,\delta_x)^2 + (j \cdot \delta_y + 0.5\,\delta_y)^2}$ (16)
$x = r \cdot \cos\omega$ (17)
$y = r \cdot \sin\omega$ (18)
$\omega = \arcsin\dfrac{j \cdot \delta_y}{r}$ (19)
Inserting Equations (16)–(18) into Equations (13)–(15) and solving the resulting equation for z yields:
$z = f(x, y) = f(r \cdot \cos\omega,\ r \cdot \sin\omega)$ (20)
Then, the detected surface can be expressed as
$z = \sum_{r=0}^{r_{max}} \sum_{\omega=0}^{2\pi} f(r \cdot \cos\omega,\ r \cdot \sin\omega)$ (21)
in which $r_{max} = \sqrt{(i_{max} \cdot \delta_x + 0.5\,\delta_x)^2 + (j_{max} \cdot \delta_y + 0.5\,\delta_y)^2}$, and i_max and j_max are limited by the field of view. z is calculated pixel by pixel over the field of view, so Equation (20) can be expressed as:
$z = \sum_{r=0}^{r_{max}} \sum_{\omega=0}^{2\pi} f(r \cdot \cos\omega,\ r \cdot \sin\omega)$ (22)
Equation (21) indicates that the detected surface can be reconstructed completely by a series of cyclical calculations, which reduces the computational complexity and improves the efficiency of the 3D reconstruction process; the loop structure is sketched below.
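The double-loop structure implied by Equations (20)–(22) can be sketched as follows: the field of view is swept in polar coordinates (r, ω), each sample is mapped back to Cartesian coordinates with Equations (17) and (18), and the height is obtained from the per-point model. The sampling steps and the solve_height_at callback are assumptions of this illustrative sketch.

```python
import numpy as np

def reconstruct_surface_polar(r_max, solve_height_at,
                              dr=0.05, domega=np.deg2rad(1.0)):
    """Sweep the field of view in polar coordinates (r, omega), mirroring the
    cyclical double loop of Equation (22), and collect (x, y, z) points.

    r_max           : radius bounding the field of view
    solve_height_at : callable (x, y) -> z implementing the per-point model
    dr, domega      : sampling steps along r and omega (assumed values)
    """
    points = []
    for r in np.arange(0.0, r_max + dr, dr):               # outer loop over r
        for omega in np.arange(0.0, 2.0 * np.pi, domega):  # inner loop over omega
            x = r * np.cos(omega)                          # Equation (17)
            y = r * np.sin(omega)                          # Equation (18)
            points.append((x, y, solve_height_at(x, y)))
    return np.array(points)
```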
Therefore, the flow chart of the proposed algorithm is presented in Figure 5. In particular, the boundary condition is limited by the field of view, and the calculation sequence is derived from the divided regions and their order.

5. Experimental Results and Analysis

The three-dimensional measurement process is carried out by first reconstructing the 3D surface of the detected object and then measuring the 3D size information of any point with the optical system. In this section, the 3D measurement process is illustrated according to the proposed method, including its performance evaluation via benchmarking. As shown in Figure 6, the image acquisition system is composed of a JAI CV-M9CL (JAI Corp., Copenhagen, Denmark) 3-CCD camera and the designed light source (reported in Figure 1). In this case study, the object distance H is 285.4 mm, the image distance d is 31.6 mm, the focal length f is 28.4 mm and the field of view is 24 mm × 18 mm. During the experiments, an acoustic noise level below 90 dB and a stable lighting environment are required; the vibration frequency of the worktable should be below 100 Hz with an amplitude below 0.5 µm; moreover, the light intensity deviation of the light source should be below 5% within the field of view. In addition, a VR-5200 3D measurement system (KEYENCE Corp., Osaka, Japan) is used in the experiments (Figure 7) to measure the true height of the samples. The VR-5200 has a display resolution of 0.1 µm, a field of 2.5 mm × 1.9 mm at 120× magnification on a 15-inch display, and a measurement repeatability better than 0.4 µm without z-connection. Therefore, the object height can be measured accurately by the VR-5200 to benchmark the performance of the proposed method against similar methods.
The process of measuring the surface height with the proposed method is shown in Figure 8. Firstly, the working environment is carefully checked and the light source is calibrated; then the sample color images are acquired by the proposed system, for example the color image shown in Figure 8a. Secondly, the acquired color image is converted into gray levels for the red, green and blue channels, and the gray images are segmented into several regions of similar gray levels, as shown in Figure 8b,c. Thirdly, each region is located to define the calculation region, and the arrangement relations are identified to form the calculation sequence; the surface height information is then cyclically calculated using Equation (22). Lastly, the calculation results are arranged according to the location information, and the 3D surface is retrieved as shown in Figure 8d.
To evaluate the performance of the proposed method, four samples, namely a convex surface, a concave surface, an angular surface and a combined convex and concave surface, were produced by 3D printing from a single material and tested with the proposed algorithm, as shown in Table 1. In addition, a mobile subscriber identification module (SIM) card slot was also tested. Furthermore, benchmarking was carried out to compare the proposed method with alternative methods in terms of reconstruction accuracy and speed.
As shown in Table 1, the four sample images in the first row were obtained with a digital camera under natural light. The second row shows the images acquired with the proposed vision system. The bottom row shows the reconstruction results obtained with the proposed algorithm. In order to compare the proposed method with other methods (proposed in [9,29,39]), four section views were taken from the reconstruction results, as shown in Figure 9, Figure 10, Figure 11 and Figure 12, respectively. In these charts, the black line refers to the object height measured by the VR-5200, the blue line to the object height measured by the light microscopy axial-view imaging method [9], the yellow line to the reconstructed height obtained with the proposed method, the purple line to the reconstructed height obtained with the method of [29] and the green line to the reconstructed height obtained with the method of [39].
As regards the convex surface, Figure 9 shows that the proposed reconstruction height curve fluctuates along the actual object height curve with an error of less than 2.3%, while the other two methods, proposed in [29] and [39], both show errors above 6%. This result demonstrates that the proposed algorithm can reconstruct and measure the convex surface effectively. Within the error range, the proposed reconstruction height curve is also characterized by a smaller deviation compared with the other methods, which can be explained by the gray region segmentation. In addition, the wave-like pattern of the reconstruction height curve suggests that the uniformity of the light source irradiation is very important for the proposed algorithm.
As regards the concave surface, Figure 10 shows that the proposed reconstruction height curve fluctuates along the actual object height curve with an error of less than 3.2%, showing higher accuracy than the other two methods [29,39]. This result indicates that the proposed algorithm can also reconstruct and measure the concave surface effectively. Figure 10 also shows that the reconstructed height curve overestimates the actual object height curve. This error is mainly caused by intensity calibration issues of the incoming light and can be reduced by properly calibrating the light source parameters.
As regards the angular surface, Figure 11 shows that the proposed reconstruction height curve fluctuates along the actual object height curve with an error of less than 3.2%, showing the good suitability of the proposed algorithm for effectively reconstructing the angular surface. The error shown in Figure 11 indicates that the gray region segmentation and the calibration accuracy of the light source parameters are crucial for improving the reconstruction precision of the proposed algorithm.
As regards the combined convex and concave surface, Figure 12 shows that the proposed reconstruction height curve fluctuates along the actual object height curve with an error of less than 2.7%. These results suggest that the proposed algorithm can be used to reconstruct and measure different types of surfaces, which represents a common scenario in industrial applications.
Furthermore, to evaluate the performance of the proposed algorithm, a mobile SIM card slot was also measured experimentally. The SIM card slot image is shown in Figure 13a, and its concave-convex components are highlighted in Figure 13b. The reconstruction results obtained with the proposed method are shown in Figure 14. A comparison with the light microscopy axial-view imaging method [9] and with the true height of the SIM card slot measured by the VR-5200 is presented in Figure 15. The chart shows that the proposed method yields results very similar to those of the method developed by Guo et al. [9] in terms of measurement accuracy. It is also found that the maximum deviation (within 3.6%) corresponds to the bottom of the concave region, which is due to the lower image definition in that region; adjusting the focal length and acquiring sharper images can improve the measurement accuracy.
To evaluate the computational load of the proposed method, the SIM card slot was detected with different reconstruction methods. As shown in Figure 16, as the number of SIM card slots increases, the proposed method before the simplification procedure has the slowest detection speed, because it needs to compute the height information for every pixel. The three methods proposed in [9,29,39] show intermediate speeds in computing the height of the SIM card slots. In contrast, the proposed method after simplification has the fastest reconstruction speed, because it only needs to calculate a few pixels among adjacent pixels with the same gray values and, thanks to the polar coordinate symmetry of the monocular vision system, it reconstructs the surface height through a series of cyclical calculations with a double-loop program. These results indicate that the simplified method reduces the computational complexity and improves the efficiency of the 3D reconstruction process.

6. Conclusions

In this paper, a single image 3D reconstruction method based on a novel monocular vision system is proposed. In the proposed method, a model relating the detected height information to the image gray values is built, and the 3D reconstruction method is presented.
A simplified calculation method, based on gray region segmentation and coordinate transformation, is described to speed up the 3D reconstruction process.
Experimental results show that the proposed algorithm can reconstruct the 3D size of convex, concave and angular surfaces with errors of less than 3.2%. A mobile SIM card slot was also investigated, and the resulting measurement error was less than 3.6%, which illustrates the validity of the proposed algorithm. In terms of applicability, the proposed method can be effectively used to reconstruct diffuse surfaces, while low accuracy has been obtained for specular surfaces. In addition, a light source calibration procedure should be carried out prior to the detection operations in order to improve the accuracy of the results.

Author Contributions

Conceptualization and methodology, F.W.; formal analysis, software and validation, F.W. and S.Z.; project administration, F.W. and W.Y.; investigation and data curation, F.W., S.Z. and W.Y.; resources, F.W. and S.Z.; writing—original draft preparation, F.W. and S.Z.; visualization, F.W.; writing—review and editing, F.W. and W.Y.; supervision, F.W. and W.Y.; project administration, F.W. and W.Y.; funding acquisition, F.W. and W.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China under grant 61573233, the Natural Science Foundation of Guangdong, China under 2018A0303130188, the Guangdong Science and Technology Special Funds Project under 190805145540361, and in part by the Science and Technology Planning Project of Guangdong Province under 180914204960289, the Open Fund of the State Key Laboratory of Integrated Optoelectronics under IOSKL2018KF09, Special projects in key fields of colleges and universities in Guangdong Province under 2020ZDZX2005. The authors would like to thank OPT MachineVision Corporation, China, for providing experimental facilities and testing assistance. Lastly, the authors are very grateful to Alessandro Simeone for his help in revising and editing the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Sun, X.; Jing, X.; Cheng, L.; Xu, J. 3-D quasi-zero-stiffness-based sensor system for absolute motion measurement and application in active vibration control. IEEE/ASME Trans. Mechatron. 2015, 20, 254–262. [Google Scholar] [CrossRef]
  2. Wilde, T.; Rossi, C.; Theisel, H. Recirculation surfaces for flow visualization. IEEE Trans. Vis. Comput. Graph. 2019, 25, 946–955. [Google Scholar] [CrossRef] [PubMed]
  3. Nasrulloh, A.; Willcocks, C.; Jackson, P.; Geenen, C.; Habib, M.; Steel, D.; Obara, B. Multi-scale segmentation and surface fitting for measuring 3-D macular holes. IEEE Trans. Med. Imaging 2018, 37, 580–589. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  4. Banerjee, D.; Yu, K.; Aggarwal, G. Robotic arm based 3D reconstruction test automation. IEEE Access 2018, 6, 7206–7213. [Google Scholar] [CrossRef]
  5. Tomic, S.; Beko, M.; Dinis, R. 3-D Target localization in wireless sensor networks using RSS and AoA measurements. IEEE Trans. Veh. Technol. 2016, 66, 3197–3210. [Google Scholar] [CrossRef]
  6. Weissenbock, J.; Fröhler, B.; Groller, E.; Kastner, J.; Heinzl, C. Dynamic volume lines: Visual comparison of 3D volumes through space-filling curves. IEEE Trans. Vis. Comput. Graph. 2018, 25, 1040–1049. [Google Scholar] [CrossRef]
  7. Pichat, J.; Iglesias, J.E.; Yousry, T.; Ourselin, S.; Modat, M. A survey of methods for 3D histology reconstruction. Med. Image Anal. 2018, 46, 73–105. [Google Scholar] [CrossRef] [Green Version]
  8. Huang, J.; Sun, M.; Ma, J.; Chi, Y. Super-resolution image reconstruction for high-density three-dimensional single-molecule microscopy. IEEE Trans. Comput. Imaging 2017, 3, 763–773. [Google Scholar] [CrossRef]
  9. Guo, Y.; Zhang, Y.; Verbeek, F.J. A two-phase 3-D reconstruction approach for light microscopy axial-view imaging. IEEE J. Sel. Top. Signal Process. 2017, 11, 1034–1046. [Google Scholar] [CrossRef]
  10. Bang, T.K.; Shin, K.-H.; Koo, M.-M.; Han, C.; Cho, H.W.; Choi, J.-Y. Measurement and torque calculation of magnetic spur gear based on quasi 3-D analytical method. IEEE Trans. Appl. Supercond. 2018, 28, 1–5. [Google Scholar] [CrossRef]
  11. Shi, C.; Luo, X.; Guo, J.; Najdovski, Z.; Fukuda, T.; Ren, H. Three-dimensional intravascular reconstruction techniques based on intravascular ultrasound: A technical review. IEEE J. Biomed. Health Inform. 2018, 22, 806–817. [Google Scholar] [CrossRef] [PubMed]
  12. Song, Y.; Wong, K. Acoustic direction finding using a spatially spread tri-axial velocity sensor. IEEE Trans. Aerosp. Electron. Syst. 2015, 51, 834–842. [Google Scholar] [CrossRef]
  13. Zhang, X.; Wu, X.; Adelegan, O.J.; Yamaner, F.Y.; Oralkan, O. Backward-mode photoacoustic imaging using illumination through a CMUT with improved transparency. IEEE Trans. Ultrason. Ferroelectr. Freq. Control 2018, 65, 85–94. [Google Scholar] [CrossRef] [PubMed]
  14. Bevacqua, M.T.; Scapaticci, R. A compressive sensing approach for 3D breast cancer microwave imaging with magnetic nanoparticles as contrast agent. IEEE Trans. Med. Imaging 2015, 35, 665–673. [Google Scholar] [CrossRef] [PubMed]
  15. Fourgeaud, L.; Ercolani, E.; Duplat, J.; Gully, P.; Nikolayev, V.S. 3D reconstruction of dynamic liquid film shape by optical grid deflection method. Eur. Phys. J. 2018, 41, 1–7. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  16. Li, C.; Lu, B.; Zhang, Y.; Liu, H.; Qu, Y. 3D reconstruction of indoor scenes via image registration. Neural Process. Lett. 2018, 48, 1281–1304. [Google Scholar] [CrossRef]
  17. Ma, Z.; Liu, S. A review of 3D reconstruction techniques in civil engineering and their applications. Adv. Eng. Inform. 2018, 37, 163–174. [Google Scholar] [CrossRef]
  18. Zhang, H.; Zhang, Y.; Wang, H.; Ho, Y.-S.; Feng, S. WLDISR: Weighted local sparse representation-based depth image super-resolution for 3D video system. IEEE Trans. Image Process. 2018, 28, 561–576. [Google Scholar] [CrossRef]
  19. Gai, S.; Da, F.; Dai, X. A novel dual-camera calibration method for 3D optical measurement. Opt. Lasers Eng. 2018, 104, 126–134. [Google Scholar] [CrossRef]
  20. Dehais, J.B.; Anthimopoulos, M.; Shevchik, S.; Mougiakakou, S. Two-view 3D reconstruction for food volume estimation. IEEE Trans. Multimed. 2016, 19, 1090–1099. [Google Scholar] [CrossRef] [Green Version]
  21. Betta, G.; Capriglione, D.; Gasparetto, M.; Zappa, E.; Liguori, C.; Paolillo, A. Face recognition based on 3D features: Management of the measurement uncertainty for improving the classification. Measurement 2014, 70, 169–178. [Google Scholar] [CrossRef]
  22. Wang, L.; Xiong, Z.; Shi, G.; Wu, F.; Zeng, W. Adaptive nonlocal sparse representation for dual-camera compressive hyperspectral imaging. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2104–2111. [Google Scholar] [CrossRef] [PubMed]
  23. Kim, H.; Hilton, A. 3D scene reconstruction from multiple spherical stereo pairs. Int. J. Comput. Vis. 2013, 104, 94–116. [Google Scholar] [CrossRef] [Green Version]
  24. Guillemaut, J.-Y.; Hilton, A. Joint multi-layer segmentation and reconstruction for free-viewpoint video applications. Int. J. Comput. Vis. 2010, 93, 73–100. [Google Scholar] [CrossRef] [Green Version]
  25. Merras, M.; Saaidi, A.; El Akkad, N.; Satori, K. Multi-view 3D reconstruction and modeling of the unknown 3D scenes using genetic algorithms. Soft Comput. 2017, 22, 6271–6289. [Google Scholar] [CrossRef]
  26. Chen, X.; Graham, J.; Hutchinson, C.; Muir, L. Automatic inference and measurement of 3D carpal bone kinematics from single view fluoroscopic sequences. IEEE Trans. Med. Imaging 2013, 32, 317–328. [Google Scholar] [CrossRef] [Green Version]
  27. Gallo, A.; Muzzupappa, M.; Bruno, F. 3D reconstruction of small sized objects from a sequence of multi-focused images. J. Cult. Herit. 2014, 15, 173–182. [Google Scholar] [CrossRef]
  28. Tao, M.W.; Srinivasan, P.P.; Hadap, S.; Rusinkiewicz, S.; Malik, J.; Ramamoorthi, R. Shape estimation from shading, defocus, and correspondence using light-field angular coherence. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 546–560. [Google Scholar] [CrossRef]
  29. Liu, X.; Zhao, Y.; Zhu, S.-C. Single-view 3D scene reconstruction and parsing by attribute grammar. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 40, 710–725. [Google Scholar] [CrossRef]
  30. Zhang, R.; Tsai, P.-S.; Cryer, J.; Shah, M. Shape-from-shading: A survey. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 21, 690–706. [Google Scholar] [CrossRef] [Green Version]
  31. Durou, J.-D.; Falcone, M.; Sagona, M. Numerical methods for shape-from-shading: A new survey with benchmarks. Comput. Vis. Image Underst. 2008, 109, 22–43. [Google Scholar] [CrossRef]
  32. Yang, L.; Li, E.; Long, T.; Fan, J.; Mao, Y.; Fang, Z.; Liang, Z. A welding quality detection method for arc welding robot based on 3D reconstruction with SFS algorithm. Int. J. Adv. Manuf. Technol. 2017, 94, 1209–1220. [Google Scholar] [CrossRef]
  33. Xie, J.; Dai, G.X.; Fang, Y. Deep multi-metric learning for shape-based 3D model retrieval. IEEE Trans. Multimed. 2017, 19, 2463–2474. [Google Scholar] [CrossRef]
  34. Peng, K.; Cao, Y.; Wu, Y.; Lu, M. A new method using orthogonal two-frequency grating in online 3D measurement. Opt. Laser Technol. 2016, 83, 81–88. [Google Scholar] [CrossRef]
  35. Mei, Q.; Gao, J.; Lin, H.; Chen, Y.; Yunbo, H.; Wang, W.; Zhang, G.; Chen, X. Structure light telecentric stereoscopic vision 3D measurement system based on Scheimpflug condition. Opt. Lasers Eng. 2016, 86, 83–91. [Google Scholar] [CrossRef]
  36. Vagharshakyan, S.; Bregovic, R.; Gotchev, A. Light field reconstruction using shearlet transform. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 133–147. [Google Scholar] [CrossRef] [Green Version]
  37. Zhou, P.; Xu, K.; Wang, D. Rail profile measurement based on line-structured light vision. IEEE Access 2018, 6, 16423–16431. [Google Scholar] [CrossRef]
  38. Finlayson, G.D.; Gong, H.; Fisher, R.B. Color homography: Theory and applications. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 41, 20–33. [Google Scholar] [CrossRef] [Green Version]
  39. Wang, K.; Zhang, G.; Xia, S. Templateless non-rigid reconstruction and motion tracking with a single RGB-D camera. IEEE Trans. Image Process. 2017, 26, 5966–5979. [Google Scholar] [CrossRef]
  40. Wu, F.; Li, S.; Zhang, X.; Ye, W. A design method for LEDs arrays structure illumination. IEEE/OSA J. Disp. Technol. 2016, 12, 1177–1184. [Google Scholar] [CrossRef]
  41. Loh, H.; Lu, M. Printed circuit board inspection using image analysis. IEEE Trans. Ind. Appl. 1999, 35, 426–432. [Google Scholar]
  42. Wu, F.; Zhang, X. An inspection and classification method for chip solder joints using color grads and Boolean rules. Robot. Comput. Integr. Manuf. 2014, 30, 517–526. [Google Scholar] [CrossRef]
Figure 1. The monocular vision system.
Figure 2. (a) The monocular vision system; (b) the imaging plane of XOZ.
Figure 3. The imaging process of the measured object.
Figure 4. Algorithm simplification based on coordinates transformation.
Figure 5. The flow chart of the proposed algorithm.
Figure 6. (a) The acquisition image; (b) the experimental platform.
Figure 7. 3D measurement system VR-5200.
Figure 8. Surface height measurement process scheme using the proposed method. (a) Color image acquisition; (b) grayscale image; (c) image regions segmentation and (d) detected surface reconstruction.
Figure 9. Convex surface reconstruction results.
Figure 10. Concave surface reconstruction results.
Figure 11. Angular surface reconstruction results.
Figure 12. Convex and concave surface reconstruction results.
Figure 13. (a) SIM card slot acquired image; (b) an example of concave-convex components.
Figure 14. Mobile SIM card slot reconstruction result via the proposed method.
Figure 15. Measurement results comparison.
Figure 16. The inspection speed with different methods.
Table 1. Experimental results (columns: convex surface sample, concave surface sample, angular surface sample, convex and concave surface sample; rows: sample images, images acquired via the proposed system, reconstructed images via the proposed method; all table entries are images).
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
