Article

Object Shape Measurement Based on Brox Optical Flow Estimation and Its Correction Method

Shandong Provincial Key Laboratory of Optics and Photonic Device, School of Physics and Electronics, Shandong Normal University, Jinan 250358, China
*
Author to whom correspondence should be addressed.
Photonics 2020, 7(4), 109; https://doi.org/10.3390/photonics7040109
Submission received: 20 September 2020 / Revised: 8 November 2020 / Accepted: 10 November 2020 / Published: 11 November 2020
(This article belongs to the Special Issue Recent Advances in the Study of Light Propagation in Optical Fibers)

Abstract

In this work, a new method of measuring surface shape based on Brox optical flow estimation is presented. The measuring system consists of a projector, the measured object and a charge-coupled device (CCD) camera. Grating fringes are projected onto the reference plane at a small angle, and two fringe images, captured before and after the measured object is placed on the reference plane, are recorded. The optical flow field between the two images is then evaluated using the Brox optical flow algorithm. A theoretical relationship between the optical flow field and the height of the measured surface is established, from which the height distribution of the measured object can be retrieved quickly without a phase-to-height transformation. However, the calculated height distribution is found to deviate from its true value. To solve this problem, a correction scheme suited to the optical flow method is proposed, which greatly improves the accuracy of the calculated result. Simulations and experiments verify the feasibility of the proposed method and the accuracy of the correction method. The results show that the proposed method is more accurate than the Fourier transform method. Compared with traditional surface shape measurement, the optical flow method has some obvious advantages: (1) only two frames are required to recover the height distribution; (2) the measurement process and calculation are relatively simple, so it is less time-consuming; (3) because the optical flow method inherently contains a time factor, it is more suitable for dynamic measurement; (4) there are no restrictions on the projection pattern.

1. Introduction

Three-dimensional (3-D) surface shape profilometry is an important means of obtaining object shape features and has many applications in cultural relic protection, computer vision, surface detection, quality inspection and so on [1,2,3,4]. In surface shape profilometry, a grating fringe image with a cosine-distributed gray level is projected onto the measured object surface by a projector. The fringes on the measured surface are deformed by the modulation of the object surface. The fringe images before and after the deformation are then recorded by a camera, from which the shape information of the measured surface can be recovered. Commonly used information extraction techniques include Fourier transform profilometry (FTP), phase measurement profilometry and the projection moiré method [5,6,7,8]. Each of these techniques takes the phase of the measured surface as the measured physical quantity. Therefore, to obtain the height distribution of the measured surface, a phase-to-height operation is required.
Different from traditional phase measurement techniques, the optical flow surface shape measurement technique does not measure the phase distribution but measures the height distribution directly, requiring only two frames, one before and one after the deformation. Its measurement accuracy is equivalent to that of the FTP method [9]. Because the height distribution is calculated point by point, the optical flow method is robust to noise [10].
Similar to traditional phase measurement methods, the optical flow method also suffers from tilt distortion: the calculated height distribution deviates from its true value. In order to overcome the tilt distortion, two approaches have been reported. The first is the projection fringe correction method [11], which uses the four-step phase shift method to calibrate the phase periods of the projected sinusoidal fringes, forming a fringe pattern with a uniform spatial period. The second is the generalized fringe projection method [12], which uses a reference object of known height to calibrate the parameters in the phase-to-height equation; these parameters are related to the uniform phase distribution on the vertical projection plane. In practice, both approaches have limitations. The projection fringe correction method, which is more suitable for measuring large objects, requires the optical centers of the projector and of the CCD camera to be at the same distance from the reference surface, and the distribution of the projected fringes needs to be estimated in advance. In particular, the method must be modified for each different experiment. These requirements limit its application. The generalized fringe projection method, in contrast, has high accuracy, but it needs a series of captured images and requires additional operations such as phase extraction, phase unwrapping, fringe-order offsetting and parameter detection. Therefore, this method is time-consuming and its experimental process is complex.
In our work, a surface shape measurement system based on Brox optical flow estimation is presented. The setup of the measuring system is simple, consisting of a projector, the measured object and a CCD camera, and is described in a world coordinate system established on the reference plane. The projector projects a pattern onto the reference plane at a small projection angle so that the deformation of the pattern is small enough for the optical flow method to be applied. The small deformation of the pattern is evaluated using the Brox optical flow algorithm [13]. Compared with other optical flow estimation algorithms reported for motion analysis [13,14,15], the Brox algorithm is more robust to noise and covers a wider range of displacements. Using the Brox optical flow algorithm to calculate the deformation of the projected fringes, we complete the following two tasks. The first is that the relationship between the optical flow and the measured height is established from the geometry of the measurement system; based on this relationship, the height distribution of the measured surface is retrieved. The second is that the tilt error in the measurement results is identified and a correction method for it is presented. We find that the calculated height distribution deviates from its true value, which is caused not only by the oblique projection but also by the mismatch between the camera and the projector in their distances to the reference surface. A correction method is therefore proposed, which greatly improves the accuracy of the calculated result. Simulations and actual measurements are carried out to prove the feasibility of the proposed optical flow method and the effectiveness of the proposed tilt correction method. A comparison between the proposed method and the FTP method in experimental measurement shows that the proposed method is effective and accurate. Many advantages of the newly proposed method are obvious but deserve to be emphasized:
  • Compared with existing phase measurement methods, it is simple in setup and operation. Only two image frames are captured and used to retrieve the height distribution.
  • Compared with existing methods, it is less time-consuming because it retrieves the height distribution directly. The measurement of the surface shape can be completed in less than 8 s.
  • Because the optical flow method inherently contains a time factor, it is more suitable for dynamic measurement.
  • Compared with the existing tilt correction methods, the newly proposed method is easier to implement and requires no additional operations.
  • There are no strict limits on the projection pattern. Any image with varying gray values can be used for projection.

2. Principles

2.1. Introduction of the Brox Optical Flow Algorithm

The Brox algorithm, a global method, is a powerful tool for motion analysis. The constraints used in the Brox algorithm include gray value constancy and gradient constancy, the latter being particularly helpful for the analysis of translatory motion. A piecewise smooth flow field and a multiscale approach are utilized to handle discontinuities at the boundaries of objects in the scene and to find the global minimum of the cost functional in a variational formulation in the case of larger displacements.
The brightness of an image point A_i(x_i, y_i) captured at time t is described by the intensity I_i(x_i, y_i, t), where the subscript i denotes variables on the image plane of the camera. After deformation, the point A_i(x_i, y_i) moves to a new position B_i(x_i + Δx_i, y_i + Δy_i) at time t + Δt with intensity I_i(x_i + Δx_i, y_i + Δy_i, t + Δt). According to the brightness constancy, I_i(x_i, y_i, t) = I_i(x_i + Δx_i, y_i + Δy_i, t + Δt), which can be expanded with a first-order Taylor approximation, the optical flow constraint equation is obtained, as shown in Equation (1):
$$\frac{\partial I_i}{\partial x_i}\,\frac{\Delta x_i}{\Delta t} + \frac{\partial I_i}{\partial y_i}\,\frac{\Delta y_i}{\Delta t} + \frac{\partial I_i}{\partial t} = 0 \qquad (1)$$
where u = Δx_i/Δt and v = Δy_i/Δt are the components of the optical flow field between the two frames, i.e., the velocity components of the observation point A_i(x_i, y_i) in the x and y directions. The global deviations from the assumptions of brightness constancy and gradient constancy and from the assumption of a piecewise smooth flow field are then measured by an energy functional, shown in Equation (2), after applying the robust penaliser Ψ to the quadratic terms:
$$E(u,v) = \int_{\Omega} \Psi\!\left(|I(\mathbf{p}+\mathbf{w}) - I(\mathbf{p})|^2\right) + \gamma\,\Psi\!\left(|\nabla I(\mathbf{p}+\mathbf{w}) - \nabla I(\mathbf{p})|^2\right) d\mathbf{p} + \alpha \int_{\Omega} \Psi\!\left(|\nabla u|^2 + |\nabla v|^2\right) d\mathbf{p} \qquad (2)$$
where (x, y) ∈ Ω, Ω denotes the image domain, p = (x, y)^T, w = (u, v)^T, γ is the weight between the two constancy assumptions and α > 0 is the regularization parameter. Ψ(x²) = √(x² + ε²) is an increasing concave function and ε is a small positive constant, which can be chosen as 0.001 [13]. Since the Brox optical flow algorithm requires the optical flow itself to be as smooth as possible, the values of u and v that minimize the energy in Equation (2) must be found. The minimizer of Equation (2) must fulfill the Euler–Lagrange equations:
$$\begin{aligned}
\Psi'(I_z^2)\,I_x I_z + \gamma\,\Psi'(I_{xz}^2 + I_{yz}^2)\,(I_{xx} I_{xz} + I_{xy} I_{yz}) - \alpha\,\operatorname{div}\!\left[\Psi'(u_x^2 + u_y^2 + v_x^2 + v_y^2)\,(u_x \mathbf{e}_x + u_y \mathbf{e}_y)\right] &= 0 \\
\Psi'(I_z^2)\,I_y I_z + \gamma\,\Psi'(I_{xz}^2 + I_{yz}^2)\,(I_{xy} I_{xz} + I_{yy} I_{yz}) - \alpha\,\operatorname{div}\!\left[\Psi'(u_x^2 + u_y^2 + v_x^2 + v_y^2)\,(v_x \mathbf{e}_x + v_y \mathbf{e}_y)\right] &= 0
\end{aligned} \qquad (3)$$
where Ψ′(x²) = 1/(2√(x² + ε²)), u_x = ∂u/∂x, u_y = ∂u/∂y, v_x = ∂v/∂x, v_y = ∂v/∂y, and e_x and e_y are the unit vectors in the x- and y-axis directions. Furthermore, I_x = ∂I(p + w)/∂x, I_y = ∂I(p + w)/∂y, I_z = I(p + w) − I(p), I_xx = ∂²I(p + w)/∂x², I_xy = ∂²I(p + w)/∂x∂y, I_yy = ∂²I(p + w)/∂y², I_xz = ∂I(p + w)/∂x − ∂I(p)/∂x, and I_yz = ∂I(p + w)/∂y − ∂I(p)/∂y. Let w^k = (u^k, v^k)^T, k = 0, 1, …, where k is the iteration number; then w^{k+1} can be obtained from Equation (4) (only the u-component equation is written out, the v-component being analogous):
$$\Psi'\!\left[(I_z^{k+1})^2\right] I_x^k I_z^{k+1} + \gamma\,\Psi'\!\left[(I_{xz}^{k+1})^2 + (I_{yz}^{k+1})^2\right]\!\left(I_{xx}^k I_{xz}^{k+1} + I_{xy}^k I_{yz}^{k+1}\right) - \alpha\,\operatorname{div}\!\left\{\Psi'\!\left[(u_x^{k+1})^2 + (u_y^{k+1})^2 + (v_x^{k+1})^2 + (v_y^{k+1})^2\right]\!\left(u_x^{k+1}\mathbf{e}_x + u_y^{k+1}\mathbf{e}_y\right)\right\} = 0 \qquad (4)$$
A first-order Taylor expansion is used to remove the nonlinearity in I^{k+1}; then,
$$\begin{aligned}
I_z^{k+1} &\approx I_z^k + I_x^k\,du^k + I_y^k\,dv^k \\
I_{xz}^{k+1} &\approx I_{xz}^k + I_{xx}^k\,du^k + I_{xy}^k\,dv^k \\
I_{yz}^{k+1} &\approx I_{yz}^k + I_{xy}^k\,du^k + I_{yy}^k\,dv^k
\end{aligned} \qquad (5)$$
where u^{k+1} and v^{k+1} are the velocity components obtained in iteration k + 1 and are also denoted by û and v̂, respectively. As expressed in Equation (6), they are the sums of the results u^k and v^k obtained in iteration k and the increments du^k and dv^k of iteration k + 1:
$$u^{k+1} = u^k + du^k, \qquad v^{k+1} = v^k + dv^k \qquad (6)$$
Substituting Equation (5) into Equation (4), we have
$$\frac{I_x^k\left(I_z^k + I_x^k du^k + I_y^k dv^k\right)}{2\sqrt{\left(I_z^k + I_x^k du^k + I_y^k dv^k\right)^2 + \varepsilon^2}} + \frac{\gamma}{2}\,\frac{I_{xx}^k\left(I_{xz}^k + I_{xx}^k du^k + I_{xy}^k dv^k\right) + I_{xy}^k\left(I_{yz}^k + I_{xy}^k du^k + I_{yy}^k dv^k\right)}{\sqrt{\left(I_{xz}^k + I_{xx}^k du^k + I_{xy}^k dv^k\right)^2 + \left(I_{yz}^k + I_{xy}^k du^k + I_{yy}^k dv^k\right)^2 + \varepsilon^2}} - \frac{\alpha}{2}\operatorname{div}\!\left(\frac{\hat{u}_x \mathbf{e}_x + \hat{u}_y \mathbf{e}_y}{\sqrt{\hat{u}_x^2 + \hat{u}_y^2 + \hat{v}_x^2 + \hat{v}_y^2}}\right) = 0$$
$$\frac{I_y^k\left(I_z^k + I_x^k du^k + I_y^k dv^k\right)}{2\sqrt{\left(I_z^k + I_x^k du^k + I_y^k dv^k\right)^2 + \varepsilon^2}} + \frac{\gamma}{2}\,\frac{I_{xy}^k\left(I_{xz}^k + I_{xx}^k du^k + I_{xy}^k dv^k\right) + I_{yy}^k\left(I_{yz}^k + I_{xy}^k du^k + I_{yy}^k dv^k\right)}{\sqrt{\left(I_{xz}^k + I_{xx}^k du^k + I_{xy}^k dv^k\right)^2 + \left(I_{yz}^k + I_{xy}^k du^k + I_{yy}^k dv^k\right)^2 + \varepsilon^2}} - \frac{\alpha}{2}\operatorname{div}\!\left(\frac{\hat{v}_x \mathbf{e}_x + \hat{v}_y \mathbf{e}_y}{\sqrt{\hat{u}_x^2 + \hat{u}_y^2 + \hat{v}_x^2 + \hat{v}_y^2}}\right) = 0 \qquad (7)$$
The first two terms in Equation (7) are data terms and the third is the smoothness term. Expanding the data terms in Equation (7), let
$$\begin{aligned}
a_{11} &= \frac{I_x^k I_x^k}{2\sqrt{\left(I_z^k + I_x^k du^k + I_y^k dv^k\right)^2 + \varepsilon^2}} + \frac{\gamma}{2}\,\frac{I_{xx}^k I_{xx}^k + I_{xy}^k I_{xy}^k}{\sqrt{\left(I_{xz}^k + I_{xx}^k du^k + I_{xy}^k dv^k\right)^2 + \left(I_{yz}^k + I_{xy}^k du^k + I_{yy}^k dv^k\right)^2 + \varepsilon^2}} \\
a_{12} &= a_{21} = \frac{I_x^k I_y^k}{2\sqrt{\left(I_z^k + I_x^k du^k + I_y^k dv^k\right)^2 + \varepsilon^2}} + \frac{\gamma}{2}\,\frac{I_{xx}^k I_{xy}^k + I_{xy}^k I_{yy}^k}{\sqrt{\left(I_{xz}^k + I_{xx}^k du^k + I_{xy}^k dv^k\right)^2 + \left(I_{yz}^k + I_{xy}^k du^k + I_{yy}^k dv^k\right)^2 + \varepsilon^2}} \\
a_{22} &= \frac{I_y^k I_y^k}{2\sqrt{\left(I_z^k + I_x^k du^k + I_y^k dv^k\right)^2 + \varepsilon^2}} + \frac{\gamma}{2}\,\frac{I_{xy}^k I_{xy}^k + I_{yy}^k I_{yy}^k}{\sqrt{\left(I_{xz}^k + I_{xx}^k du^k + I_{xy}^k dv^k\right)^2 + \left(I_{yz}^k + I_{xy}^k du^k + I_{yy}^k dv^k\right)^2 + \varepsilon^2}} \\
b_1 &= \frac{I_x^k I_z^k}{2\sqrt{\left(I_z^k + I_x^k du^k + I_y^k dv^k\right)^2 + \varepsilon^2}} + \frac{\gamma}{2}\,\frac{I_{xx}^k I_{xz}^k + I_{xy}^k I_{yz}^k}{\sqrt{\left(I_{xz}^k + I_{xx}^k du^k + I_{xy}^k dv^k\right)^2 + \left(I_{yz}^k + I_{xy}^k du^k + I_{yy}^k dv^k\right)^2 + \varepsilon^2}} \\
b_2 &= \frac{I_y^k I_z^k}{2\sqrt{\left(I_z^k + I_x^k du^k + I_y^k dv^k\right)^2 + \varepsilon^2}} + \frac{\gamma}{2}\,\frac{I_{xy}^k I_{xz}^k + I_{yy}^k I_{yz}^k}{\sqrt{\left(I_{xz}^k + I_{xx}^k du^k + I_{xy}^k dv^k\right)^2 + \left(I_{yz}^k + I_{xy}^k du^k + I_{yy}^k dv^k\right)^2 + \varepsilon^2}}
\end{aligned} \qquad (8)$$
Then, the data terms in Equation (7) can be simplified as:
$$a_{11}\,du^k + a_{12}\,dv^k = b_1, \qquad a_{21}\,du^k + a_{22}\,dv^k = b_2 \qquad (9)$$
Let φ(x, y) = (α/2)/√(û_x² + û_y² + v̂_x² + v̂_y²); then the modified coefficients b_1′, b_2′, a_11′ and a_22′ can be expressed as:
$$\begin{aligned}
b_1' &= b_1 + \big\{[u^k(x,y) - u^k(x-1,y)]\,\phi(x-1,y) - [u^k(x+1,y) - u^k(x,y)]\,\phi(x,y) \\
&\qquad + [u^k(x,y) - u^k(x,y-1)]\,\phi(x,y-1) - [u^k(x,y+1) - u^k(x,y)]\,\phi(x,y)\big\} \\
b_2' &= b_2 + \big\{[v^k(x,y) - v^k(x-1,y)]\,\phi(x-1,y) - [v^k(x+1,y) - v^k(x,y)]\,\phi(x,y) \\
&\qquad + [v^k(x,y) - v^k(x,y-1)]\,\phi(x,y-1) - [v^k(x,y+1) - v^k(x,y)]\,\phi(x,y)\big\} \\
a_{11}' &= a_{11} + \phi(x-1,y) + \phi(x,y) + \phi(x,y-1) + \phi(x,y) \\
a_{22}' &= a_{22} + \phi(x-1,y) + \phi(x,y) + \phi(x,y-1) + \phi(x,y)
\end{aligned} \qquad (10)$$
Finally, the increments du^{k+1} and dv^{k+1} can be obtained by using the SOR iteration method [16]:
$$\begin{aligned}
du^{k+1} &= (1-\omega)\,du^k + \frac{\omega}{a_{11}'}\Big[b_1' - a_{12}\,dv^k + \big(du^k(x-1,y)\,\phi(x-1,y) + du^k(x+1,y)\,\phi(x,y) \\
&\qquad\qquad + du^k(x,y-1)\,\phi(x,y-1) + du^k(x,y+1)\,\phi(x,y+1)\big)\Big] \\
dv^{k+1} &= (1-\omega)\,dv^k + \frac{\omega}{a_{22}'}\Big[b_2' - a_{21}\,du^k + \big(dv^k(x-1,y)\,\phi(x-1,y) + dv^k(x+1,y)\,\phi(x,y) \\
&\qquad\qquad + dv^k(x,y-1)\,\phi(x,y-1) + dv^k(x,y+1)\,\phi(x,y+1)\big)\Big]
\end{aligned} \qquad (11)$$
where ω is the relaxation parameter of the SOR iteration. The coefficients a_11′, a_22′, b_1′ and b_2′ can be evaluated once initial values are assigned to u, v, du and dv. The iterative calculation of Equation (11) is terminated when the difference between two adjacent iterations approaches 0. Then, the optical flow fields u and v are obtained using Equation (6). Experimental results show that the Brox optical flow algorithm runs fast, is suitable for larger displacements and is robust to noise.
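To make the iteration above concrete, the following Python sketch performs a single SOR sweep of Equation (11) over the increment fields du and dv. It assumes that the primed coefficients of Equation (10) and the smoothness weight φ have already been evaluated on the pixel grid; the function name, the border handling and the value of the relaxation parameter are illustrative choices, not taken from the paper.

```python
import numpy as np

def sor_sweep(du, dv, a11p, a22p, a12, a21, b1p, b2p, phi, omega=1.85):
    """One in-place SOR sweep for the flow increments, a sketch of Equation (11).

    All arguments are 2-D NumPy arrays of the same shape (rows = y, cols = x).
    a11p, a22p, b1p, b2p are the primed coefficients of Equation (10); phi is
    the smoothness weight of the same equation; omega in (0, 2) is assumed.
    Border pixels are left untouched for simplicity.
    """
    rows, cols = du.shape
    for y in range(1, rows - 1):
        for x in range(1, cols - 1):
            # weighted neighbour sums of the current increments (smoothness part)
            nb_u = (du[y, x - 1] * phi[y, x - 1] + du[y, x + 1] * phi[y, x]
                    + du[y - 1, x] * phi[y - 1, x] + du[y + 1, x] * phi[y + 1, x])
            nb_v = (dv[y, x - 1] * phi[y, x - 1] + dv[y, x + 1] * phi[y, x]
                    + dv[y - 1, x] * phi[y - 1, x] + dv[y + 1, x] * phi[y + 1, x])
            # relaxed Gauss-Seidel update following Equation (11)
            du[y, x] = ((1.0 - omega) * du[y, x]
                        + omega / a11p[y, x] * (b1p[y, x] - a12[y, x] * dv[y, x] + nb_u))
            dv[y, x] = ((1.0 - omega) * dv[y, x]
                        + omega / a22p[y, x] * (b2p[y, x] - a21[y, x] * du[y, x] + nb_v))
    return du, dv
```

In the full Brox scheme this inner sweep is nested inside the incremental fixed-point loop of Equations (4)–(6) and a coarse-to-fine pyramid; a vectorized red–black ordering is normally used in practice instead of the plain double loop shown here.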

2.2. The Principle of the Surface Shape Measurement

Figure 1 shows the diagram of surface shape measurement using the optical flow method. A projector at point P(x_p, y_p, z_p) projects a pattern onto the reference plane N, on which a measured surface S can be placed. A camera located at point C(x_c, y_c, z_c) captures the projected pattern on the measured surface. The projection optical axis PO runs from the projector to the reference plane at an angle θ with respect to the normal of the reference plane and crosses the plane at O, which is set as the origin of the coordinate system established on the reference plane. The line P′O is taken as the x-axis direction, where P′ is the foot of the perpendicular PP′ on the reference plane, and the normal of the reference plane at point O is taken as the z-axis direction. The optical axis CC′ of the CCD camera is perpendicular to the reference plane, with foot point C′. The distance CC′ is the observation distance, denoted z_c. These coordinate parameters of the projector and the camera can be obtained by a calibration method.
Consider a point A(x, y) observed on the reference plane; after the measured object is placed, the projection ray PA meets the point D on the surface. The height of the point D is h. In other words, the observation point A(x, y) moves to a new position B(x′, y′) after the time Δt owing to the presence of the object. The displacements between the two points are Δx = x′ − x and Δy = y′ − y. Correspondingly, the point A_i(x_i, y_i) moves to B_i(x_i + Δx_i, y_i + Δy_i) on the image plane, corresponding to the points A(x, y) and B(x′, y′), where Δx_i and Δy_i are the displacements in the x and y directions. They are related by Δx_i = M_c Δx and Δy_i = M_c Δy, where M_c is the image magnification. The optical flow produced by the deformation, i.e., the velocity components at the point A_i(x_i, y_i), can be expressed as:
$$u = \frac{\Delta x_i}{\Delta t} = \frac{M_c\,\Delta x}{\Delta t}, \qquad v = \frac{\Delta y_i}{\Delta t} = \frac{M_c\,\Delta y}{\Delta t} \qquad (12)$$
where Δt, the time interval between the two captured frames, is usually set to 1 for convenience in the calculation, because there is no limitation on it.
In order to find the relationship between the deformation of the fringes and the height of the object surface, a geometric analysis is performed as follows. First, the coordinates of the projector and the camera are used to express some important line-segment lengths in the measurement system:
$$\begin{aligned}
d_{AP} &= \sqrt{(x - x_p)^2 + (y - y_p)^2 + z_p^2} \\
d_{BP} &= \sqrt{(x' - x_p)^2 + (y' - y_p)^2 + z_p^2} \\
d_{AB} &= \sqrt{(x' - x)^2 + (y' - y)^2} \\
d_{AC} &= \sqrt{(x - x_c)^2 + (y - y_c)^2 + z_c^2} \\
d_{CP} &= \sqrt{(x_p - x_c)^2 + (y_p - y_c)^2 + (z_p - z_c)^2}
\end{aligned} \qquad (13)$$
where x′ = x + Δx = x + uΔt/M_c and y′ = y + Δy = y + vΔt/M_c. Then, passing through point C, draw an auxiliary plane M parallel to the reference plane N, and extend the line AP to intersect the plane M at a point P″. Geometrically, P″C ∥ AB because the points P″, C, A and B are coplanar. In the plane P″CAB, the cosines of ∠P″AB and ∠CPA are given by Equation (14) using the law of cosines:
$$\cos\angle P''AB = \frac{d_{AP}^2 + d_{AB}^2 - d_{BP}^2}{2\,d_{AP}\,d_{AB}}, \qquad \cos\angle CPA = \frac{d_{CP}^2 + d_{AP}^2 - d_{AC}^2}{2\,d_{CP}\,d_{AP}} \qquad (14)$$
and
$$\angle CPP'' = \pi - \angle CPA, \qquad \angle CP''P = \angle P''AB \qquad (15)$$
Then, the length of the line segment CP″ can be expressed through the cosines of ∠P″AB and ∠CPA using the law of sines:
$$d_{CP''} = d_{CP}\,\frac{\sin\angle CPP''}{\sin\angle CP''P} = d_{CP}\,\frac{\sqrt{1 - \cos^2\angle CPP''}}{\sqrt{1 - \cos^2\angle CP''P}} \qquad (16)$$
The height at point D can be expressed as:
$$h = \frac{d_{AB}\,z_c}{d_{CP''} + d_{AB}} \qquad (17)$$
Substituting Equations (13)–(16) into Equation (17), the expression for the height h of point D is obtained:
$$h = \frac{z_c\sqrt{d_{AB}^2\left[2\left(d_{AP}^2 + d_{BP}^2\right) - d_{AB}^2\right] - \left(d_{AP}^2 - d_{BP}^2\right)^2}}{\sqrt{d_{AB}^2\left[2\left(d_{AP}^2 + d_{BP}^2\right) - d_{AB}^2\right] - \left(d_{AP}^2 - d_{BP}^2\right)^2} + \sqrt{d_{AC}^2\left[2\left(d_{AP}^2 + d_{CP}^2\right) - d_{AC}^2\right] - \left(d_{AP}^2 - d_{CP}^2\right)^2}} \qquad (18)$$
Usually, the foot of the CCD camera on the reference plane, C′, is used as the origin of the coordinate system, and the projection center is made to coincide with the observation center in the experiment.
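As a sketch of how Equations (13) and (18) translate into a point-wise height computation, the following Python function takes a reference-plane point A, the optical flow at the corresponding image point and the calibrated projector and camera centers, and returns the height h. The function and variable names are illustrative; only the formulas of Equations (12), (13) and (18) are taken from the text.

```python
import numpy as np

def height_from_flow(x, y, u, v, P, C, Mc, dt=1.0):
    """Height h at the point observed at A(x, y), following Equations (13) and (18).

    P = (xp, yp, zp) and C = (xc, yc, zc) are the calibrated projector and
    camera centers, Mc is the image magnification and (u, v) is the optical
    flow at the corresponding image point.  All lengths in millimetres.
    """
    xp, yp, zp = P
    xc, yc, zc = C
    # displaced reference-plane point B (Equation (12))
    xb = x + u * dt / Mc
    yb = y + v * dt / Mc
    # segment lengths of Equation (13)
    d_ap = np.sqrt((x - xp) ** 2 + (y - yp) ** 2 + zp ** 2)
    d_bp = np.sqrt((xb - xp) ** 2 + (yb - yp) ** 2 + zp ** 2)
    d_ab = np.sqrt((xb - x) ** 2 + (yb - y) ** 2)
    d_ac = np.sqrt((x - xc) ** 2 + (y - yc) ** 2 + zc ** 2)
    d_cp = np.sqrt((xp - xc) ** 2 + (yp - yc) ** 2 + (zp - zc) ** 2)
    # the two square-root terms of Equation (18)
    s_ab = np.sqrt(d_ab ** 2 * (2 * (d_ap ** 2 + d_bp ** 2) - d_ab ** 2)
                   - (d_ap ** 2 - d_bp ** 2) ** 2)
    s_cp = np.sqrt(d_ac ** 2 * (2 * (d_ap ** 2 + d_cp ** 2) - d_ac ** 2)
                   - (d_ap ** 2 - d_cp ** 2) ** 2)
    return zc * s_ab / (s_ab + s_cp)
```

Because every pixel is processed independently, the same expression can be evaluated on whole arrays of flow values at once.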

2.3. Correction Principle

In an actual measurement, the optical center of the projector is usually not at the same height as that of the camera, which makes the measured result inconsistent with the actual object. In this section, we propose a correction method to solve this problem for the optical flow method.
For the sake of simplicity, Figure 1 is reduced to a 2-D coordinate system. As shown in Figure 2, the optical center of the projector is at exactly the same height as that of the camera. Consider two points, D and D′, which are at the same height h on the surface of the measured object. When the two points are observed by the camera, we examine whether the magnitudes of the displacements caused by this common height are different. Let Δx_1 and Δx_2 represent the displacements caused by h at points D and D′, respectively. Then, we have
$$\frac{h}{z_c - h} = \frac{\Delta x_1}{d} = \frac{\Delta x_2}{d} \qquad (19)$$
where d = |x_p − x_c| is the distance between the projector and the camera. It immediately follows that
$$\Delta x_1 = \Delta x_2 \qquad (20)$$
It can be seen from Equation (20) that the horizontal component of the optical flow, u, at point D is equal to that at point D′. In practice, however, it is difficult to place the optical center of the projector at exactly the same height as that of the camera. Once the optical centers P and C are not on the same horizontal line, Δx_1 is no longer equal to Δx_2, resulting in a height deviation in the calculated value.
As shown in Figure 3, the projector can be regarded as having moved from P′ to P, i.e., it is displaced by Δp_z in the vertical direction and by Δp_x in the horizontal direction relative to the position P′. The projection ray that hits D meets the point A(x, y) on the reference plane at an incidence angle α with respect to the normal of the reference plane. A displacement, denoted Δx, is introduced by the measured surface when the point D is observed by the CCD sensor. From the analysis above, we know that there is no deviation error when the projector is located at the point P′, horizontal to the CCD optical center. Therefore, the corrected displacement Δx′ can be obtained after finding the small offset δ caused by the movement of the projector.
As can be seen from Figure 3,
$$\Delta p_x = \Delta p_z \tan\alpha = \Delta p_z\,\frac{x + d}{z_c} \qquad (21)$$
and
$$\delta = \frac{h\,\Delta p_z}{z_c - h}\cdot\frac{x + d}{z_c} \qquad (22)$$
Then, the corrected displacement can be expressed as:
$$\Delta x' = \Delta x + \delta = \Delta x + \frac{h\,\Delta p_z}{z_c - h}\cdot\frac{x + d}{z_c} \qquad (23)$$
where Δx = uΔt/M_c is calculated from the optical flow method. The height of the point D after the modification, denoted H, is expressed as:
$$H = \frac{\Delta x'\,z_c}{d + \Delta x'} = \frac{z_c\left[z_c\left(z_c - h\right)\Delta x + \left(x + d\right) h\,\Delta p_z\right]}{z_c\left(z_c - h\right)\left(\Delta x + d\right) + \left(x + d\right) h\,\Delta p_z} \qquad (24)$$
By correcting the measurement result with Equation (24), the surface shape information of the object can be retrieved with high precision by the proposed method, avoiding the tilt error.
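The correction itself reduces to two lines. The Python sketch below, under the same 2-D simplification as Figures 2 and 3, applies Equation (23) to the displacement and then recomputes the height as in Equation (24); the uncorrected height h obtained from Equation (18) is an input, and the variable names are illustrative.

```python
def corrected_height(h, dx, x, zc, d, dpz):
    """Tilt-corrected height H of Equation (24).

    h   : uncorrected height at the point (from Equation (18))
    dx  : displacement on the reference plane, dx = u * dt / Mc
    x   : x coordinate of the observed point A on the reference plane
    zc  : observation distance of the camera
    d   : horizontal projector-camera distance
    dpz : vertical offset of the projector with respect to the camera
    """
    # corrected displacement of Equation (23)
    dx_corr = dx + h * dpz / (zc - h) * (x + d) / zc
    # height recomputed from the corrected displacement, first form of Equation (24)
    return dx_corr * zc / (d + dx_corr)
```

The closed form on the right-hand side of Equation (24) is what these two lines evaluate after substituting Equation (23).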

3. Theoretical Simulation

3.1. Numerical Simulation and Analysis of Surface Shape Measurement by Optical Flow Method

The height distribution of the measured surface can be calculated using Equation (18), and its tilt error can be corrected using Equation (24). There are no restrictions on the projection pattern. First, the usual parallel fringe pattern is used for shape measurement in the simulations and experiments; speckle patterns are then shown to be feasible in the experiments.
A spherical crown of size 40 × 40 mm, simulated in MATLAB, is taken as the object to be measured, as shown in Figure 4. The radius of the sphere R is 20 mm and the maximum height h_max is 10 mm. The height of the spherical crown is expressed as
$$h(x, y) = \sqrt{R^2 - x^2 - y^2} - \left(R - h_{\max}\right) \qquad (25)$$
The system shown in Figure 1 is employed to measure the shape of the spherical crown. The imaging distance of the camera z_c and the projection distance z_p are both 2000 mm. The distance between the projector and the camera, d, is 60 mm, and the image magnification M_c is −12.8 pixel/mm. The optical center of the projector can be considered to be approximately at the same height as that of the camera owing to the small projection angle. Before the object is placed, the intensity of the projected fringes, which are parallel to the y-axis direction, on the reference plane can be expressed as
$$I(x, y, t) = a + b\cos\!\left[2\pi\left(f_x x + f_y y\right)\right] \qquad (26)$$
where a is the background intensity, b is the fringe contrast, and f_x and f_y are the fringe frequencies at (x, y). The image captured by the CCD camera at time t is expressed as
$$I_i(x_i, y_i, t) = a + b\cos\!\left[2\pi\left(f_{ix} x_i + f_{iy} y_i\right)\right] \qquad (27)$$
where f_ix = f_x/M_c and f_iy = f_y/M_c are the fringe frequencies of the recorded image, and the subscript i denotes physical parameters on the image plane. After the object is placed, the intensity of the modulated fringe image captured at time t + Δt is
$$I_i(x_i + \Delta x_i,\, y_i + \Delta y_i,\, t + \Delta t) = a + b\cos\!\left\{2\pi\left[f_{ix}\left(x_i + \Delta x_i\right) + f_{iy}\left(y_i + \Delta y_i\right)\right]\right\} \qquad (28)$$
According to Equations (27) and (28), fringe patterns of size 512 × 512 pixels before and after deformation are generated, as shown in Figure 5a,b, respectively, where a = 128, b = 60, f_ix = 0.4 mm⁻¹ and f_iy = 0. The fringe deformation is small because of the small distance d between the projector and the CCD, and the maximum deformation in the deformed pattern is 4 pixels.
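The simulation just described can be reproduced with a short script. The Python sketch below builds the spherical crown of Equation (25) on a 512 × 512 grid, converts each height value to an image-plane displacement using the simplified relation Δx_i = M_c d h/(z_c − h) of Figure 2, and phase-shifts the fringes of Equation (27) accordingly; the pixel pitch and the way the fringe frequency is mapped onto the image are assumptions of this sketch.

```python
import numpy as np

# parameters of Section 3.1
R, hmax = 20.0, 10.0               # sphere radius and crown height, mm
zc, d, Mc = 2000.0, 60.0, -12.8    # observation distance, baseline, magnification (pixel/mm)
a, b, fx = 128.0, 60.0, 0.4        # background, contrast, fringe frequency (assumed 1/mm on the object)

N = 512                            # pattern size in pixels
pitch = 40.0 / N                   # assumed object-plane pixel pitch, mm/pixel
coords = (np.arange(N) - N / 2) * pitch
X, Y = np.meshgrid(coords, coords)

# spherical crown of Equation (25), set to zero outside its base
h = np.sqrt(np.clip(R ** 2 - X ** 2 - Y ** 2, 0.0, None)) - (R - hmax)
h = np.clip(h, 0.0, None)

# image-plane displacement caused by the height (simplified geometry of Figure 2)
dxi = Mc * d * h / (zc - h)        # pixels

xi = np.broadcast_to(np.arange(N, dtype=float), (N, N))   # image-plane x coordinate, pixels
f_img = fx / abs(Mc)                                       # fringe frequency on the image, 1/pixel
reference = a + b * np.cos(2 * np.pi * f_img * xi)                 # Equation (27)
deformed = a + b * np.cos(2 * np.pi * f_img * (xi - dxi))          # Equation (28): fringes shifted by dxi
```

With these numbers the largest displacement is about 12.8 × 60 × 10/1990 ≈ 3.9 pixels, consistent with the 4-pixel maximum deformation quoted above.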
Using the two images shown in Figure 5, the Brox optical flow algorithm is employed to estimate the optical flow field. In the calculation of the optical flow, the weight factor γ is 10 and the regularization parameter α is 100, which makes the algorithm robust to noise, as suggested by Brox [13]. The height distribution of the spherical crown is then obtained using Equation (18); the result is shown in Figure 6a. Since the optical flow method uses two fringe patterns to obtain the height distribution, it is similar in this respect to the FTP method. Therefore, the height distribution is recalculated using the FTP method for comparison. The cross-section data at y = 0 mm in the x direction obtained by the optical flow method and by the FTP method are compared with the true values, as shown in Figure 6b. Both results are close to the real values, indicating that the optical flow method achieves a precision in surface shape measurement as good as that of the FTP method under the same conditions.
Random noise has a great influence on the measurement results. The fringe patterns before and after deformation are therefore polluted by Gaussian noise with SNRs of 10 and 20 dB, respectively, to simulate the influence of environmental noise and the thermal noise of the camera. Using the noisy fringe patterns, the height distributions of the spherical crown are obtained. The distributions of the absolute error are then obtained by subtracting the cross-section data of the calculated results from the true values at y = 0 mm in the x direction and taking the absolute value, as shown in Figure 7. The absolute errors are less than 0.2 mm, compared with the noiseless pattern, when the SNR of the noisy pattern is 10 or 20 dB. The root mean square (RMS) error increases with the noise, and its maximum value is less than 0.15 mm. This shows that the Brox optical flow algorithm has good robustness to noise.
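The noise test can be reproduced with a small helper that corrupts a pattern with zero-mean Gaussian noise at a prescribed SNR in dB; the definition of SNR as the ratio of mean signal power to noise variance is an assumption of this sketch, since the paper does not state it explicitly.

```python
import numpy as np

def add_gaussian_noise(img, snr_db, rng=None):
    """Return img corrupted by zero-mean Gaussian noise at the given SNR (dB).

    The noise variance is chosen so that mean(img**2) / var(noise) = 10**(snr_db / 10).
    """
    rng = np.random.default_rng() if rng is None else rng
    signal_power = np.mean(img.astype(float) ** 2)
    noise_power = signal_power / 10 ** (snr_db / 10.0)
    noise = rng.normal(0.0, np.sqrt(noise_power), img.shape)
    return img + noise

# e.g. noisy pairs used for the robustness test of Figure 7
# ref10, def10 = add_gaussian_noise(reference, 10), add_gaussian_noise(deformed, 10)
# ref20, def20 = add_gaussian_noise(reference, 20), add_gaussian_noise(deformed, 20)
```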
In the simulations above, the parameters of the measurement system z_p, z_c and d are constants, set as z_p = 2000 mm, z_c = 2000 mm and d = 60 mm, respectively. In an actual experiment, however, these parameters need to be calibrated by optical methods, which may introduce errors. In order to illustrate the impact of calibration error on the calculated results, an increment of 5 mm is first added to the observation distance z_c and to the projection distance z_p, respectively, as a calibration error with d = 60 mm. Then, d is increased by 5 mm while z_p and z_c are both 2000 mm. The absolute errors for the 5 mm increments are compared with those for the original distances, as shown in Figure 8. The 5 mm increases in the observation distance and in the projection distance have little effect on the measurement results. Although the change in d has a more obvious effect on the error distribution than the first two parameters, the absolute error is still within a small range, so the error caused by the system calibration can be ignored in the experiment. Furthermore, the parameter d is closely related to the amount of deformation of the projected pattern, which affects the accuracy of the optical flow algorithm. Therefore, d should be made small enough in experiments.

3.2. Error Correction Simulation

In Section 3.1, the simulations were carried out with the projector at the same height as the camera. In this section, we consider the case in which the observation distance is not equal to the projection distance. When the projection angle is θ = π/100 and the observation distance z_c and the projection distance z_p are set to 2000 and 1800 mm, respectively, the vertical displacement of the projector Δp_z is about −200 mm. A tilt error then occurs because of the deviation of the projector. As shown in Figure 9a, the cross-section data of the calculated height distribution deviate from the true values at y = 0 mm in the x direction. The tilt error can be corrected using Equation (24), as shown in Figure 9b. After correction, the absolute error between the calculated and true values is greatly reduced, to 0.3 mm, as shown in Figure 9c, indicating that the proposed correction method is feasible.

3.3. Sensitivity Analysis

The sensitivity of a system can be intuitively understood as the change in output caused by a certain change in input. Based on the triangulation method, the optical setup of the proposed method is similar to that of the phase measurement profilometry (PMP) method. The sensitivity of the method can then be defined as
$$S = \frac{d(\Delta x_i)}{dh} \qquad (29)$$
Figure 2 can be used to analyze the sensitivity of this method. According to Equation (17), the deformation of the fringe on the image plane caused by h is
$$\Delta x_i = \frac{M_c\,h\,d}{z_c - h} \qquad (30)$$
Equation (29) can then be approximated as S = M_c d/z_c when the measured height is far less than z_c. The sensitivity of the method is about 0.4 pixel/mm at the maximum height of the spherical crown. The proposed method requires the incidence angle to be small enough to ensure that the deformation of the fringes can be measured by the optical flow method. Small-angle projection avoids shadows of the measured object, but it reduces the measurement sensitivity. According to the reported resolution of 0.01 pixel in the horizontal direction for optical flow methods [10], the resolution in the height direction is about 0.02 mm. This resolution is sufficient for most measurements.
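As a quick check with the simulation parameters of Section 3.1 (taking the magnitude of M_c), the approximate sensitivity works out as

$$S \approx \frac{|M_c|\,d}{z_c} = \frac{12.8 \times 60}{2000} \approx 0.38\ \text{pixel/mm},$$

which matches the value of about 0.4 pixel/mm quoted above; dividing the 0.01-pixel flow resolution by this sensitivity gives a height resolution of a few hundredths of a millimetre.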

4. Experiment

4.1. An Experiment on a Smooth Measuring Object

The measurement system shown in Figure 1 is used to measure the height distribution of the specimen shown in Figure 10. The maximum height of the mask is 63.20 mm. The pattern projected in the proposed optical flow method is not limited to a fringe pattern; however, a parallel fringe pattern is usually used to measure the surface shape in current techniques, including the FTP method. Therefore, a fringe pattern is first used for projection in this section to facilitate comparison with the FTP method. According to the discussion above, the deformation of the deformed fringes should be small to satisfy the brightness constancy of the optical flow, so the projection angle in the setup should be small. The measurement then follows these steps:
  • Calibrating the measurement system. In order to make the effect of the correction method obvious, large projection and observation distances are set. Different from the optical-setup parameter estimation method [17], the optical center positions of the projector and the camera are calibrated using Zhang's calibration method [18,19]. After calibration, the distance from the projector to the reference plane is 1975.40 mm and that of the CCD sensor is 1922.80 mm. The horizontal distance between the CCD sensor and the projector is 84.50 mm. The magnification of the image is 512/220 pixel/mm. The calibration process takes about 10 min, but the following steps can be performed multiple times as long as the calibration has been completed once.
  • Capturing two images. The two images, before and after placing the object, are recorded by an ordinary CCD sensor with a sensitive area of 768 × 576 pixels at 8-bit resolution. The captured images are shown in Figure 11. The noise level of the image is estimated using the method proposed by Chen et al. [20]; the mean noise level of Figure 11b is 1.41 dB.
  • Calculating the optical flow and the height distribution. After the optical flow between the two images is calculated with the Brox method, the height distribution of the specimen is obtained according to Equation (18), as shown in Figure 12. This step takes 6 to 10 s, depending on the image size and the parameters of the Brox algorithm. (A minimal end-to-end sketch of these three steps is given after this list.)
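The three steps above can be strung together in a few lines. The sketch below is illustrative only: it uses OpenCV's Farneback dense flow as a readily available stand-in for the Brox solver (a Brox implementation is shipped only with OpenCV's CUDA optical-flow module), converts the horizontal flow to a height with the simplified on-axis relation h = z_c Δx/(d + Δx) rather than the full Equation (18), and glosses over the sign conventions of M_c and u by taking absolute values. The file names and Farneback parameters are placeholders.

```python
import cv2
import numpy as np

# step 1: calibration values reported above (millimetres, pixel/mm)
zc, d, dpz = 1922.80, 84.50, 52.60
Mc = 512.0 / 220.0

# step 2: the two captured frames (placeholder file names)
ref = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)
obj = cv2.imread("object.png", cv2.IMREAD_GRAYSCALE)

# step 3: dense optical flow between the two frames (Farneback as a stand-in for Brox)
flow = cv2.calcOpticalFlowFarneback(ref, obj, None, 0.5, 5, 21, 10, 7, 1.5, 0)
u = flow[..., 0]                       # horizontal flow component, pixels

# displacement on the reference plane and uncorrected height
dx = np.abs(u) / Mc                    # millimetres
height = zc * dx / (d + dx)            # simplified on-axis version of Equation (18)
```

The tilt caused by the 52.60 mm offset between the projector and the camera would then be removed point by point with Equation (24), as in the correction sketch given after that equation.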
It should be noted that the projector is located 52.60 mm farther from the reference plane than the CCD sensor. This introduces a tilt error in the measurement result, which can be corrected according to Equation (24). The corrected result is shown in Figure 13a.
In order to show the effect of the correction, the cross-section data of the corrected result at y = −20 mm in the x direction (passing through the hollow area of the object) are compared with those of the uncorrected result, as shown in Figure 13b. Compared with the result before correction, the correction effect is obvious and satisfactory. First, the maximum height is 62.19 mm after correction; at the maximum height, the absolute error of the corrected measurement result is 1.01 mm and the relative error is less than 2%. Second, the height distribution becomes left-right symmetric after correction. The tilt error that occurs in the calculated height distribution before correction is greatly reduced.
As described in Section 3.1, the proposed method places no restrictions on the projection pattern, which is illustrated by an experiment with projected speckle patterns. Using the measurement system shown in Figure 1 and the same parameters as in the experiment above, the height distribution of the specimen is obtained using Equation (18), following the experimental steps described above. Figure 14 shows the height distribution of the specimen obtained with the projected speckle pattern.
The Brox method is somewhat similar to the FTP method in that both use the fringe patterns before and after deformation. Therefore, the height distribution of the specimen is recalculated by the FTP method. After the two fringe patterns are transformed into the frequency domain using the fast Fourier transform algorithm, the carrier frequency is calculated, one of the spectra on the frequency axis is selected manually with a window and translated, and an inverse Fourier transform and phase unwrapping are applied to obtain the phase distribution of the measured surface. A phase-to-height transformation is then required to obtain the height distribution. After these operations, the height distribution of the specimen is obtained, as shown in Figure 15a, from which the cross-section data at y = −20 mm in the x direction are selected for comparison with the corrected Brox method. The comparison is shown in Figure 15b. The result of the corrected Brox method is more accurate than that of the FTP method.
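For reference, the row-wise Fourier demodulation just described can be sketched as follows. The carrier is assumed to run along the x direction, the rectangular band-pass window and its parameters are illustrative, and the final phase-to-height conversion of the FTP method is omitted.

```python
import numpy as np

def ftp_analytic(pattern, f0, half_width):
    """Row-wise Fourier demodulation of a fringe pattern.

    f0         : carrier frequency in cycles per pixel
    half_width : half-width of the band-pass window around the carrier, in bins
    Returns the complex analytic signal whose angle is the wrapped phase.
    """
    rows, cols = pattern.shape
    freqs = np.fft.fftfreq(cols)                      # frequency axis of each row
    carrier = np.argmin(np.abs(freqs - f0))           # bin of the fundamental lobe
    spectrum = np.fft.fft(pattern, axis=1)
    window = np.zeros(cols)
    window[carrier - half_width: carrier + half_width + 1] = 1.0   # keep one side lobe
    return np.fft.ifft(spectrum * window, axis=1)

# wrapped phase change introduced by the object, unwrapped row by row
# (f0 and half_width are placeholders; ref and obj are the two captured frames)
# dphi = np.unwrap(np.angle(ftp_analytic(obj, 0.05, 10) * np.conj(ftp_analytic(ref, 0.05, 10))), axis=1)
```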

4.2. An Experiment on a Structured Object

Section 4.1 dealt with an object having a smooth surface. To further illustrate the feasibility of the proposed approach, the packaging box of an instrument is measured. The packaging box has structured surfaces, as shown in Figure 16, and its maximum height is 18.20 mm. The mean noise level of the captured image is 2.86 dB, also estimated by Chen's method. The corrected measurement result is shown in Figure 17, where the maximum measured height is 18.25 mm. The absolute error and the relative error at this point are 0.05 mm and 0.3%, respectively. The experimental results show that the proposed method is suitable for 3-D reconstruction of objects with structured surfaces.
Although the experimental result is satisfactory, its accuracy may not be as high as that of the PMP method. However, the advantages of the proposed method are obvious. Firstly, the measurement time is much shorter than that of the PMP method because only two images are required. Secondly, the height distribution is obtained directly, without the phase calculation of the PMP method. Finally, more diverse patterns can be used for projection in the proposed method, such as speckle patterns, which are difficult to process in the PMP method. Moreover, being based on optical flow, the proposed method is more suitable for dynamic measurement.

5. Conclusions

In this paper, a new technique for measuring the surface shape of an object based on the Brox optical flow method is presented. The measurement system is simple, with one projector and one CCD sensor. Because of the small projection angle, the shadow problem is avoided, and the proposed method places no restrictions on the projection pattern. Since a tilt error exists in the measurement results, an error correction method for the optical flow method is proposed, by which the height distribution can be accurately measured. Simulations and experiments demonstrate the characteristics of the proposed method. The results show that the Brox optical flow algorithm has good robustness to noise and that the measurement results are not sensitive to deviations of the projection distance and observation distance: a deviation of 5 mm in either has little influence on the measurement results. The results are, however, sensitive to the distance between the projector and the CCD sensor. Compared with the Fourier transform method, the proposed method is simple in the calculation of the height distribution, requiring no phase-to-height transformation, no manual intervention in the height calculation and only two images to obtain the height distribution. Therefore, the proposed method is more suitable for dynamic measurement. Nevertheless, some problems still need further study, such as the influence of the background and of color inconsistency between the measured object and the reference plane. These problems will be studied in the future.

Author Contributions

Conceptualization, Y.T. and P.S.; methodology, Y.T. and P.S.; software, Y.T., Q.D. and P.S.; validation, Y.T., Q.D. and P.S.; formal analysis, Y.T.; investigation, Y.T.; resources, P.S.; data curation, Y.T.; writing—original draft preparation, Y.T. and P.S.; writing—review and editing, Y.T. and P.S.; visualization, Y.T. and P.S.; supervision, C.F., Z.L. and P.S.; project administration, P.S.; funding acquisition, P.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Natural Science Foundation of Shandong Province, China (ZR201702090137); the National Science Foundation of China (61975099 and 11902317).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Godin, G.; Beraldin, J.A.; Taylor, J.; Cournoyer, L.; Rioux, M.; El-Hakim, S.; Baribeau, R.; Blais, F.; Boulanger, P.; Domey, J.; et al. Active optical 3D imaging for heritage applications. IEEE Comput. Graph. Appl. 2002, 22, 24–36.
  2. Yue, L.; Liu, X. Application of 3D optical measurement system on quality inspection of turbine blade. In Proceedings of the 2009 16th International Conference on Industrial Engineering and Engineering Management, Beijing, China, 21–23 October 2009; pp. 1089–1092.
  3. Hung, Y.; Lin, L.; Shang, H.; Park, B.G. Practical three-dimensional computer vision techniques for full-field surface measurement. Opt. Eng. 2000, 39, 143.
  4. Fitts, J.M. High-Speed 3-D Surface Measurement Surface Inspection and Reverse-CAD System. U.S. Patent 5,175,601, 29 December 1992.
  5. Gorthi, S.S.; Rastogi, P. Fringe projection techniques: Whither we are? Opt. Lasers Eng. 2010, 48, 133–140.
  6. Takeda, M.; Ina, H.; Kobayashi, S. Fourier-transform method of fringe-pattern analysis for computer-based topography and interferometry. J. Opt. Soc. Am. 1982, 72, 156–160.
  7. Yang, F.; He, X. Two-step phase-shifting fringe projection profilometry: Intensity derivative approach. Appl. Opt. 2007, 46, 7172.
  8. Yao, J.; Tang, Y.; Chen, J. Three-dimensional shape measurement with an arbitrarily arranged projection moiré system. Opt. Lett. 2016, 41, 717–720.
  9. Sun, P.; Dai, Q.; Tang, Y.; Lei, Z. Coordinate calculation for direct shape measurement based on optical flow. Appl. Opt. 2020, 59, 92–96.
  10. Hartmann, C.; Wang, J.; Opristescu, D.; Volk, W. Implementation and evaluation of optical flow methods for two-dimensional deformation measurement in comparison to digital image correlation. Opt. Lasers Eng. 2018, 107, 127–141.
  11. Li, B.; Fu, Y.; Zhang, J.; Wu, H.; Zeng, Z. Period correction method of phase coding fringe. Opt. Rev. 2015, 22, 717–723.
  12. Wang, Z.; Du, H.; Bi, H. Out-of-plane shape determination in generalized fringe projection profilometry. Opt. Express 2006, 14, 12122–12133.
  13. Brox, T.; Bruhn, A.; Papenberg, N.; Weickert, J. High accuracy optical flow estimation based on a theory for warping. In European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2004; pp. 25–36.
  14. Horn, B.K.P.; Schunck, B.G. Determining optical flow. Artif. Intell. 1981, 17, 185–203.
  15. Lucas, B.; Kanade, T. An iterative image registration technique with an application to stereo vision. In Proceedings of the 7th International Joint Conference on Artificial Intelligence (IJCAI), San Francisco, CA, USA, 24–28 August 1981.
  16. Xing-Jiang, L.; Lai-I, L. Geometric interpretation of several classical iterative methods for linear system of equations and diverse relaxation parameter of the SOR method. Appl. Math. J. Chin. Univ. 2013, 28, 269–278.
  17. Kofman, J. Comparison of linear and nonlinear calibration methods for phase-measuring profilometry. Opt. Eng. 2007, 46, 043601.
  18. Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334.
  19. An, Y.; Bell, T.; Li, B.; Xu, J.; Zhang, S. Method for large-range structured light system calibration. Appl. Opt. 2016, 55, 9563–9572.
  20. Chen, G.; Zhu, F.; Heng, P.A. An efficient statistical method for image noise level estimation. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015.
Figure 1. The diagram of surface shape measurement using the optical flow method.
Figure 2. The simplified optical setup when the projector is at the same height as that of the camera.
Figure 3. The diagram of the error correction when the optical center of the camera is not at the same height as that of the projector.
Figure 4. The spherical crown to be measured.
Figure 5. The captured fringe patterns: (a) before deformation and (b) after deformation.
Figure 6. (a) The calculated height distribution of the spherical crown. (b) The comparison of the cross-section data between the calculated value and the true value at y = 0 mm in the x direction.
Figure 7. The error analysis of the proposed method. (a) The absolute error and (b) RMS error of the calculated cross-section data at y = 0 mm in the x direction when the fringe patterns are polluted by Gaussian noise with SNRs of 10 and 20 dB, respectively.
Figure 8. The influence of various parameters in the simulation. The comparison of the absolute error between the cross-section data at y = 0 mm in the x direction in the calculated height distribution before and after (a) z_c increases by 5 mm, (b) z_p increases by 5 mm and (c) d increases by 5 mm.
Figure 9. The comparison of the cross-section data between the calculated and true values at y = 0 mm in the x direction (a) before correction and (b) after correction. (c) The comparison of the absolute error between the calculated cross-section data and the true values at y = 0 mm in the x direction before and after correction.
Figure 10. The test specimen.
Figure 11. The captured fringe patterns in the experiment (a) before modulation and (b) after modulation.
Figure 12. The height distribution of the object obtained by the Brox method.
Figure 13. (a) The corrected height distribution of the result; (b) the comparison of the cross-section data between the calculated and corrected values at y = −20 mm in the x direction.
Figure 14. The speckle pattern used as the projected pattern: (a) before deformation, (b) after deformation and (c) the result.
Figure 15. The 3-D surface height distribution of the object calculated by the FTP method. (a) The 3-D height distribution calculated by the FTP method; (b) comparison of the results of the FTP method and the optical flow method on the cross section at y = −20 mm.
Figure 16. Images of the experiment on a structured object (a) before deformation, (b) after deformation.
Figure 17. The 3-D height distribution of the object obtained by the optical flow method after correction.