Article

Fast and Flexible Movable Vision Measurement for the Surface of a Large-Sized Object

Ministry of Education Key Laboratory of Precision Opto-Mechatronics Technology, Beihang University, No.37 Xueyuan Rd., Haidian District, Beijing 100191, China
*
Author to whom correspondence should be addressed.
Sensors 2015, 15(3), 4643-4657; https://doi.org/10.3390/s150304643
Submission received: 30 November 2014 / Accepted: 3 February 2015 / Published: 25 February 2015
(This article belongs to the Section Physical Sensors)

Abstract

The movable vision measurement method presented here for the three-dimensional (3D) surface of a large-sized object has the advantages of system simplicity, low cost, and high accuracy. To address the problems of existing movable vision measurement methods, a method better suited to large-sized products on industrial sites is introduced in this paper. A raster binocular vision sensor and a wide-field camera are combined to form a 3D scanning sensor. During measurement, several planar targets are placed around the object to be measured. With the planar targets as intermediaries, the local 3D data measured by the scanning sensor are integrated into the global coordinate system. The effectiveness of the proposed method is verified through physical experiments.

1. Introduction

In the manufacture and assembly of large-sized objects, using sensor feedback to guide processing and assembly through rapid online measurement of large-sized surface morphologies can significantly improve machining efficiency and the quality of the resulting parts. With advances in computer technology, image processing, and pattern recognition, vision measurement technology has developed rapidly, and vision measurement systems have gradually become the most important means of three-dimensional (3D) surface topography measurement for large-sized objects [1,2,3,4,5]. Currently, the main characteristic of 3D surface topography measurement for large-sized objects is that the measuring position is essentially fixed, because in the industrial field each batch of large-sized products does not change significantly in shape. At the same time, fast-paced production and limited measuring space require that the measurement system be highly accurate, fast, and structurally simple and flexible. Most existing vision measurement systems cannot meet the needs of fast-paced on-site production. Thus, research on fast and high-precision 3D shape measurement of large-sized objects is important in the industrial field.
At present, 3D shape measurement of objects basically uses three methods: structured light vision measurement, Fourier transform profilometry, and phase measurement profilometry. Structured light vision measurement includes the multi-line structured light method and the coded structured light method. Because light strip matching is relatively difficult, the multi-line structured light method [6,7] is usually applied to geometric measurement of objects. The coded structured light method [8,9,10], by contrast, is an effective means of obtaining dense 3D point clouds of an object's surface morphology; it operates on a simple principle and has a high degree of automation, so it is the most commonly used of the 3D shape measurement methods. The biggest advantage of Fourier transform profilometry [11,12,13] is that it can measure 3D surface topography from a single image, which makes it suitable for dynamic 3D measurement. Its disadvantages are its long operation time and low degree of automation, so it is not suitable for industrial measurement. Phase measurement profilometry [14,15,16] has high accuracy and is currently the most frequently used 3D shape measurement method; however, the algorithm is more complex and the phase unwrapping problem remains.
A single vision sensor cannot measure the overall 3D surface topography of a large-sized object because of occlusions. The usual approach is to divide the area to be measured into a number of sub-regions and to integrate the 3D data of all sub-regions into a global coordinate system to obtain the 3D morphology of the entire object surface. Depending on how the data are unified, surface 3D morphology vision measurement of large-sized objects can be divided into two categories: movable single-vision sensor measurement and fixed multiple-vision sensor measurement.
The movable single-vision sensor method measures the 3D morphology of the entire surface of a large-sized object with a single movable vision sensor. It uses simple equipment and is low in cost. This method either affixes markers to the object or uses a planar target to integrate the sub-region 3D data into the global coordinate system. A typical system using adhesive markers is the ATOS movable 3D optical measurement system developed by GOM. However, many objects to be measured (e.g., soft objects, liquids, or high-precision mechanical components) cannot be labeled; moreover, affixing the markers takes a long time and the markers are easily deformed. With the planar target method [17], by contrast, errors made in a single movable measurement accumulate easily. In addition, the planar target must be placed in front of the measured object, the measuring time is long, and the operation is complex.
Fixed multiple-vision sensor measurement [18,19,20] requires more on-site vision sensors and a global calibration of the multiple vision sensors. Based on the global calibration results, the data obtained by each vision sensor are unified into the global coordinate system. Typical measurement systems using fixed multiple vision sensors include the auto-body geometry detection system of the American company Perceptron and the online train full-profile measurement system of the Italian company MERMEC. The principle of the method is simple, but the systems are complex and on-site calibration is difficult. After the measurement system is moved or the measured objects are changed, the system needs to be reconfigured and recalibrated. At present, the method is often used for geometry measurement of mass-produced large-sized products in the industrial field, but it is not suitable for large-sized and complex 3D surface reconstruction.
Based on the above analysis, compared with fixed multiple-vision sensor measurement, the movable single-vision sensor method is more suitable for 3D surface topography measurement of large-sized objects. To achieve rapid measurement of the 3D surface topography of large-sized objects, particularly mass-produced large-sized objects in the industrial field, the method proposed herein combines a raster binocular vision sensor with a wide-field camera to form a 3D scanning sensor. Multiple planar targets arranged around the measured object serve as intermediaries, and the local 3D data obtained from the 3D scanning sensor are integrated into the global coordinate system. The remainder of the paper is organized as follows: Section 2 describes the measurement principle, including the structure and mathematical model of the 3D scanning sensor and the algorithms involved; Section 3 verifies the effectiveness of the proposed method through physical experiments; and Section 4 concludes the paper.

2. System Measurement Principle

The structural schematic of the measurement system is shown in Figure 1. The coordinate systems of planar targets 1 and 2 are $O_{t1}x_{t1}y_{t1}z_{t1}$ and $O_{t2}x_{t2}y_{t2}z_{t2}$, respectively, and $O_o x_o y_o z_o$ is the 3D scanning sensor coordinate system. The coordinate system of planar target 1, $O_{t1}x_{t1}y_{t1}z_{t1}$, is selected as the global coordinate system $O_G x_G y_G z_G$. The 3D scanning sensor is placed in front of the measured object to ensure that the wide-field camera can “see” the planar target. $T_{C,t1}$ and $T_{C,t2}$ are the transformation matrices from the 3D scanning sensor coordinate system $O_o x_o y_o z_o$ to the coordinate systems of planar targets 1 and 2, respectively, and $T_{t2,t1}$ is the transformation matrix from planar target 2 to planar target 1.
Figure 1. The structural schematic of the measurement system.
The proposed system includes a 3D scanning sensor, multiple planar targets, a high-speed image acquisition system, a computer, measurement software, and the corresponding mechanical structure. The basic principle of the measurement system is as follows: first, multiple planar targets are arranged around the measured object. Second, the raster binocular stereo vision sensor of the 3D scanning sensor measures the local 3D surface of the object, and the wide-field camera of the 3D scanning sensor measures the planar target. Finally, these planar targets function as the mediators to integrate all local 3D data measured by the 3D scanning sensor into the global coordinate system $O_G x_G y_G z_G$.
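For clarity, the overall data flow can be sketched in a few lines of Python. This is an illustrative sketch only; the data structures (the per-position scan tuples and the dictionary of target-to-global transforms) are hypothetical stand-ins for the quantities defined in Sections 2.1 to 2.4.

```python
import numpy as np

def unify_scans(local_scans, T_target_to_global):
    """Integrate local 3D scans into the global coordinate system.

    local_scans        : list of (P_o, target_id, T_C_t) tuples, where P_o is an
                         (N, 4) array of homogeneous points in the 3D scanning
                         sensor frame, target_id names the planar target seen by
                         the wide-field camera at that position, and T_C_t is the
                         4x4 sensor-to-target transform.
    T_target_to_global : dict mapping target_id to the 4x4 transform from that
                         target frame to the global frame (the identity for the
                         target chosen as the global coordinate system).
    """
    unified = []
    for P_o, target_id, T_C_t in local_scans:
        T_C_G = T_target_to_global[target_id] @ T_C_t   # sensor -> global
        unified.append((T_C_G @ np.asarray(P_o).T).T)    # transform all points
    return np.vstack(unified)
```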

2.1. 3D Scanning Sensor

As shown in Figure 2, the 3D scanning sensor includes a raster binocular stereo vision sensor and a wide-field camera. The raster binocular stereo vision sensor consists of two cameras and a projector. The wide-field camera is a combination of a high-resolution camera and a four-sided mirror; it can also be considered as a four-mirror camera that achieves multi-angle measurement. The 3D scanning sensor coordinate system $O_o x_o y_o z_o$ is established on the wide-field camera coordinate system $O_l x_l y_l z_l$. The raster binocular stereo vision sensor coordinate system $O_s x_s y_s z_s$ is established on the left camera coordinate system $O_{c1} x_{c1} y_{c1} z_{c1}$ of the raster binocular vision sensor, and $T_{os}$ is the transformation matrix from $O_o x_o y_o z_o$ to $O_s x_s y_s z_s$. Schematic images of the wide-field camera and the image captured by the wide-field camera are shown in Figure 3a,b, respectively.
Figure 2. Structure of the 3D scanning sensor.
Compared with the curved mirrors used in current panoramic cameras, the model of the wide-field camera with a four-surface planar mirror is simpler and its measurement accuracy is higher. However, the field of view of the wide-field camera has blind areas. Because the size and measuring location of mass-produced large-sized products are basically fixed on industrial production sites, each movement position of the 3D scanning sensor can be determined in advance, and the positions of the planar targets used in the proposed method can be optimized for each movement position of the 3D scanning sensor. Therefore, compared with a panoramic camera with curved mirrors, the wide-field camera with a four-surface mirror is more suitable for industrial production. This is also the main reason why flat mirrors are used in the proposed method instead of curved mirrors.
Figure 3. (a) Schematic image of the wide-field camera; (b) Image captured by the wide-field camera.
As shown in Figure 3a, $O_{mi} x_{mi} y_{mi} z_{mi}$ ($i = 1, 2, 3, 4$) are the coordinate systems of the four mirror cameras of the wide-field camera. $T_{m21}$, $T_{m31}$, and $T_{m41}$ are the transformation matrices from the coordinate systems of mirror cameras 2, 3, and 4 to that of mirror camera 1, respectively. The coordinate system $O_l x_l y_l z_l$ of the wide-field camera is established on the coordinate system $O_{m1} x_{m1} y_{m1} z_{m1}$ of mirror camera 1. According to Equation (1), the 3D coordinates $P_o = [x_o, y_o, z_o, 1]^T$ of a point $P$ in the 3D scanning sensor coordinate system $O_o x_o y_o z_o$ can be obtained as:
$P_o = T_{os}^{-1} P_s \qquad (1)$
where $P_s$ denotes the 3D coordinates of $P$ in the coordinate system $O_s x_s y_s z_s$. $T_{os}$ can be obtained by calibration before measuring [21].
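A minimal sketch of Equation (1) is given below, assuming the calibration result $r_{os}$ reported in Section 3.1 is a Rodrigues rotation vector (in radians) and $t_{os}$ is in millimetres; SciPy is used only for the rotation-vector conversion, and the example point is hypothetical.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def to_homogeneous(rvec, tvec):
    """Build a 4x4 homogeneous transform from a Rodrigues rotation vector
    and a translation vector, as assumed for T_os in Section 3.1."""
    T = np.eye(4)
    T[:3, :3] = Rotation.from_rotvec(rvec).as_matrix()
    T[:3, 3] = tvec
    return T

# Calibrated transform from the scanning sensor frame O_o to the raster
# binocular sensor frame O_s (values from Section 3.1).
T_os = to_homogeneous([1.413, 1.002, -1.257], [97.625, -103.644, 261.717])

def to_sensor_frame(P_s):
    """Equation (1): map homogeneous points from O_s to O_o."""
    return (np.linalg.inv(T_os) @ np.asarray(P_s).T).T

# Example: a hypothetical light strip point measured by the binocular sensor (mm).
print(to_sensor_frame([[100.0, -50.0, 800.0, 1.0]]))
```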

2.2. Light Strip Image Center Extraction and Coding

Light Strip Coding

The proposed method uses an existing binary coding method [22] to match the light strips between the left and right cameras. The projector first casts six black-and-white images arranged as in Figure 4a–f. Black is defined as 0 and white as 1. In Figure 4f, the coding index of the leftmost black light bar region among the 64 black-and-white light bar regions is 000000; from left to right, the successive light bar codes are 000001, 000010, 000011, and so on.
As shown in Figure 4a–f, the projection of the six black-and-white images forms 64 encoded black-and-white light bar regions. Four light strip images are then constructed, each containing 64 vertical light strips, as shown in Figure 4g–j. In each image of Figure 4g–j, the 64 light strips fall within the 64 coded light bar regions formed in Figure 4a–f. From the four light strip images, 64 × 4 = 256 light strips can be obtained. The distribution of all projected images is shown schematically in Figure 5. The number of light strip images can be increased or decreased as needed: 64 × 2 = 128 light strips can be obtained by projecting two images, 64 × 6 = 384 light strips by projecting six images, and so on.
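The decoding of the six binary patterns into a region index can be sketched as follows. This is an illustrative sketch, not the authors' implementation: a single fixed threshold is assumed (in practice a per-pixel threshold, e.g., from all-white and all-black reference images, is more robust), and the bit ordering from coarsest to finest pattern is an assumption.

```python
import numpy as np

def decode_region_index(pattern_images, threshold=128):
    """Decode the 6-bit region index from six black-and-white pattern images.

    pattern_images : list of six grayscale images (H x W arrays), assumed to be
                     ordered from the coarsest pattern (most significant bit) to
                     the finest (least significant bit), as in Figure 4a-f.
    Returns an H x W integer array with values 0..63; region 000000 is the
    leftmost light bar region, 000001 the next, and so on.
    """
    index = np.zeros(np.asarray(pattern_images[0]).shape, dtype=np.int32)
    for img in pattern_images:
        bit = (np.asarray(img) >= threshold).astype(np.int32)  # white = 1, black = 0
        index = (index << 1) | bit                              # append next bit
    return index
```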
Figure 4. All projected images.
Figure 5. Line distribution diagram of all projected images.
In Figure 4a–f, the black-and-white light bar regions are used only to construct the coding regions that identify the 64 light strips in each light strip image of Figure 4g–j. For the four light strip images shown in Figure 4g–j, the Steger algorithm [23] is used in this paper to extract the center points of the light strips. First, the Hessian matrix is used to determine the pixel-level coordinate and the normal direction of the light strip center. Then, the sub-pixel coordinate of the light strip center is obtained by solving for the extreme point along the normal direction, as shown in Figure 6a. Finally, a link-constraint method is used to remove wrong light bar centers and to link the correct light strip centers into a number of segments, as shown in Figure 6b.
Figure 6. Light stripe extraction results. (a) Extraction result of the sub-pixel coordinates of the light stripe center; (b) Extraction result of the light stripe center after linking.

2.3. Partial 3D Reconstruction

Section 2.2 shows that each light strip in the projected image corresponds to a unique code index. The light strips captured by the left and right cameras of the raster binocular stereo vision sensor can be matched according to the code index. The corresponding points of the light strip captured by two cameras can be obtained according to the epipolar constraints. Finally, the corresponding points are substituted into the raster binocular stereo vision model to calculate the 3D coordinates of the corresponding points.
The schematic of the grating binocular vision sensor model is shown in Figure 7. The left camera's coordinate system is $O_{c1} x_{c1} y_{c1} z_{c1}$ and the right camera's coordinate system is $O_{c2} x_{c2} y_{c2} z_{c2}$. The transformation matrix from $O_{c1} x_{c1} y_{c1} z_{c1}$ to $O_{c2} x_{c2} y_{c2} z_{c2}$ is $T_{21} = \begin{bmatrix} R_{21} & t_{21} \\ 0 & 1 \end{bmatrix}$, where $R_{21}$ and $t_{21}$ are the rotation matrix and translation vector, respectively.
Without loss of generality, the raster binocular stereo vision sensor coordinate system $O_s x_s y_s z_s$ is built on $O_{c1} x_{c1} y_{c1} z_{c1}$. $p_1 = [u_1, v_1, 1]^T$ and $p_2 = [u_2, v_2, 1]^T$ are the undistorted homogeneous image coordinates of a light strip point $P$ in the left and right cameras (obtained from the distorted homogeneous image coordinates through lens distortion correction [24]). $l_1$ is the epipolar line of $p_2$ in the left camera and $l_2$ is the epipolar line of $p_1$ in the right camera. The 3D coordinates $P_s = [x_s, y_s, z_s, 1]^T$ of $P$ in $O_s x_s y_s z_s$ can be solved from the binocular stereo vision model, as shown in Equation (2):
$\begin{cases} \rho_1 p_1 = K_1 \left[\, I \;\; 0 \,\right] P_s \\ \rho_2 p_2 = K_2 \left[\, R_{21} \;\; t_{21} \,\right] P_s \end{cases} \qquad (2)$
where $K_1$ and $K_2$ are the intrinsic parameter matrices of the left and right cameras, respectively, and $\rho_1$, $\rho_2$ are scale factors.
The light strip matching between the left and right cameras is achieved by light strip coding, and the epipolar constraint is then applied to obtain the corresponding light strip center points of the left and right cameras. The corresponding points are substituted into Equation (2) to calculate their 3D coordinates.
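The two steps above can be sketched as follows: the epipolar line of a left-image point in the right image is obtained from the fundamental matrix built from the calibrated stereo geometry, and Equation (2) is then solved linearly (DLT) for the homogeneous point. This is a minimal sketch, not the authors' implementation; it assumes $R_{21}$ is given as a 3 × 3 matrix (the calibration results in Section 3.1 list a rotation vector $r_{21}$, which would first be converted, e.g., by the Rodrigues formula), and $p_1$, $p_2$ are undistorted homogeneous pixel coordinates.

```python
import numpy as np

def epipolar_line_in_right(p1, K1, K2, R21, t21):
    """Epipolar line l2 = F p1 of a left-image point in the right image."""
    tx = np.array([[0, -t21[2], t21[1]],
                   [t21[2], 0, -t21[0]],
                   [-t21[1], t21[0], 0]])          # skew-symmetric [t21]x
    F = np.linalg.inv(K2).T @ tx @ R21 @ np.linalg.inv(K1)
    return F @ p1

def triangulate(p1, p2, K1, K2, R21, t21):
    """Linear (DLT) solution of Equation (2) for the homogeneous point P_s."""
    P1 = K1 @ np.hstack([np.eye(3), np.zeros((3, 1))])   # left projection matrix
    P2 = K2 @ np.hstack([R21, np.asarray(t21).reshape(3, 1)])  # right projection
    A = np.vstack([p1[0] * P1[2] - P1[0],
                   p1[1] * P1[2] - P1[1],
                   p2[0] * P2[2] - P2[0],
                   p2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    P_s = Vt[-1]
    return P_s / P_s[3]          # [x_s, y_s, z_s, 1]
```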
Figure 7. Schematic of the grating binocular vision sensor model.

2.4. Global Unity of Partial 3D Data

The partial 3D reconstruction process of the 3D scanning sensor is introduced in Section 2.3. However, limited by its field of view, the 3D scanning sensor can only measure local 3D data of a large-sized object at each position. To achieve the overall 3D reconstruction of large-sized objects, all local 3D data need to be integrated into the global coordinate system.
The coordinate system $O_{t1} x_{t1} y_{t1} z_{t1}$ of planar target 1 is selected as the global coordinate system $O_G x_G y_G z_G$. $P_o$ denotes the 3D coordinates of the light strip center point $P$ measured by the raster binocular vision sensor, expressed in the 3D scanning sensor coordinate system $O_o x_o y_o z_o$, and $P_G$ denotes the 3D coordinates of $P$ in the global coordinate system $O_G x_G y_G z_G$. As shown in Figure 1, when the wide-field camera of the 3D scanning sensor can “see” the two planar targets placed around the large-sized object, $T_{C,t1}$ and $T_{C,t2}$ can be calculated, and then $T_{t2,t1}$. The local 3D coordinates measured by the 3D scanning sensor can be integrated into the global coordinate system using Equation (3):
$P_G = T_{C,t1} P_o \qquad (3)$
When the wide-field camera cannot “see” planar target 1 but can “see” planar target 2, the local 3D data can be integrated into the global coordinate system using Equation (4):
$P_G = T_{t2,t1} T_{C,t2} P_o \qquad (4)$
To improve system efficiency and flexibility, multiple plane targets can be arranged around the large-sized object on the measurement site.
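As a concrete illustration of Equations (3) and (4), the two unification cases can be written as a single helper. This is an illustrative sketch only, with the transformation matrices represented as 4 × 4 homogeneous arrays.

```python
import numpy as np

def to_global(P_o, T_C_t, T_t_t1=np.eye(4)):
    """Equations (3) and (4): map homogeneous points from the 3D scanning
    sensor frame into the global frame (planar target 1).

    P_o    : (N, 4) homogeneous points in the scanning sensor frame.
    T_C_t  : 4x4 transform from the sensor frame to the visible target frame.
    T_t_t1 : 4x4 transform from that target frame to planar target 1;
             the identity when target 1 itself is visible (Equation (3)).
    """
    return (T_t_t1 @ T_C_t @ np.asarray(P_o).T).T

# Target 1 visible (Equation (3)):   P_G = to_global(P_o, T_C_t1)
# Target 2 visible (Equation (4)):   P_G = to_global(P_o, T_C_t2, T_t2_t1)
```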

3. Physical Experiments

The setup of the physical experiment is shown in Figure 8. The raster binocular stereo vision sensor of the 3D scanning sensor consists of two cameras (AVT GC1380H, Allied Vision Technologies, Stadtroda, Germany; 17 mm lenses, resolution of 1360 × 1024, field of view of 500 mm × 380 mm × 400 mm) and one projector (Dell M110, Dell, Round Rock, TX, USA; resolution of 1360 × 768). The wide-field camera of the 3D scanning sensor consists of one camera (Point Grey Research, Richmond, BC, Canada; 12 mm lens, resolution of 2448 × 2048) and one four-surface mirror. The planar target has a 10 × 10 array of feature points with a machining accuracy of 5 μm.
Figure 8. Layout of the physical experiments.

3.1. System Calibration Results

First, the raster binocular stereo vision sensor and the intrinsic parameters of the wide-field camera are calibrated based on [24,25]. Then, the transformation matrices $T_{m21}$, $T_{m31}$, $T_{m41}$, and $T_{os}$ are calibrated by the method of [21]. The planar target used for calibration is the same as the planar target used in the measurement system.
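The intrinsic calibration follows Zhang's planar-target method [24]; the authors cite a Matlab toolbox [25], but the same procedure can be sketched with OpenCV as below. This is an illustrative sketch, not the authors' toolchain: the 20 mm point spacing, the default image size, and the helper name are assumptions (the paper specifies a 10 × 10 feature-point target but not its spacing).

```python
import numpy as np
import cv2

def calibrate_intrinsics(image_points, pattern_size=(10, 10), spacing_mm=20.0,
                         image_size=(1360, 1024)):
    """Zhang-style intrinsic calibration [24] from several views of a planar target.

    image_points : list of (100, 2) float32 arrays of detected feature points,
                   one array per view of the target.
    """
    # Planar target points in the target coordinate system (z = 0).
    grid = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
    grid[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)
    grid *= spacing_mm
    object_points = [grid] * len(image_points)

    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        object_points, image_points, image_size, None, None)
    return rms, K, dist   # reprojection error, intrinsic matrix, distortion coeffs
```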
All calibration results of 3D scanning sensor are shown as follows:
(1) Binocular stereo vision sensor
Intrinsic parameters of left camera: fx = 2744.3; fy = 2745.9; γ = 0; u0 = 750.2; v0 = 480.6; k1 = −0.14; k2 = −0.51.
Intrinsic parameters of right camera: fx = 2754.6; fy = 2753.5; γ = 0; u0 = 709.7; v0 = 538.4; k1 = −0.15; k2 = 0.02.
T21: r21 = [0.0387, 0.416, 0.00024]; t21 = [−489.501, 13.515, 90.88667].
The uncertainties of the intrinsic parameters of the left camera: $u_{f_x} = 1.75$; $u_{f_y} = 1.89$; $u_{u_0} = 3.67$; $u_{v_0} = 2.18$; $u_{k_1} = 5.0 \times 10^{-3}$; $u_{k_2} = 6.29 \times 10^{-2}$.
The uncertainties of the intrinsic parameters of the right camera: $u_{f_x} = 1.70$; $u_{f_y} = 1.74$; $u_{u_0} = 3.43$; $u_{v_0} = 2.52$; $u_{k_1} = 5.91 \times 10^{-3}$; $u_{k_2} = 7.51 \times 10^{-2}$.
A planar target is placed in front of the binocular stereo vision sensor at two positions, and the sensor measures the distances between feature points of the target. Comparing the real distances with the measured distances, the RMS error is 0.09 mm. The binocular stereo vision sensor also measured the feature points of the planar target 100 times; the deviation error is 0.03 mm.
(2) Wide-field camera
Intrinsic parameters: fx = 3705.8; fy = 3706.5; γ = 0; u0 = 1222.5; v0 = 997.4; k1 = −0.14; k2 = 0.26.
Tm21: rm21 = [−0.036, −1.528, −0.798]; tm21 = [85.327, −55.294, 94.977].
Tos: ros = [1.413, 1.002, −1.257]; tos = [97.625, −103.644, 261.717].
To verify the effectiveness of the proposed method, the following experiments were conducted to evaluate the measurement accuracy. A detailed description of the procedures and results follows.
A self-designed method is used to evaluate the global measurement precision. The specific experimental procedure is as follows: a one-dimensional (1D) target with two characteristic points (the distance between the two points is 1234.15 mm, with a precision of 0.01 mm) is placed in front of the 3D scanning sensor, which measures the characteristic points of the 1D target, as shown in Figure 9. First, the 3D scanning sensor measures the left characteristic point of the 1D target at the first position; it then measures the right characteristic point of the 1D target at the second position. Finally, all characteristic points of the 1D target measured by the scanning sensor at the two positions are integrated into the global coordinate system via the planar targets. The above process is repeated eight times. The distance between the two points is calculated as the measured distance (dm), and the real distance between the two points of the 1D target is the ideal distance (dt = 1234.15 mm). The deviation (Δd) between dt and dm and the RMS error are calculated to evaluate the global accuracy of the proposed method.
Figure 9. Schematic of global measurement precision evaluation.
Figure 10. Images captured by two sensors for global precision calibration.
The images captured by the wide-field camera and the two cameras of the raster binocular stereo vision sensor at the first position are shown in Figure 10a, and those captured at the second position are shown in Figure 10b. The distances between the two points of the 1D target measured by the 3D scanning sensor in the eight trials and the RMS error are listed in Table 1. The results show that the global measurement accuracy of the proposed method can reach 0.14 mm.
Table 1. Evaluation of the global measurement accuracy (mm).

| No. | Left point x | y | z | Right point x | y | z | dt | dm | Δd |
|---|---|---|---|---|---|---|---|---|---|
| 1 | −38.93 | 69.70 | 944.18 | 1136.60 | 10.56 | 1315.73 | 1234.15 | 1234.27 | −0.12 |
| 2 | −58.28 | −0.67 | 942.06 | 1145.09 | 34.81 | 1213.41 | 1234.15 | 1234.06 | 0.09 |
| 3 | −207.67 | 163.54 | 1177.13 | 1002.70 | 13.97 | 988.70 | 1234.15 | 1234.05 | 0.10 |
| 4 | −253.61 | 44.32 | 1147.33 | 915.61 | 294.11 | 1452.90 | 1234.15 | 1234.04 | 0.11 |
| 5 | 109.48 | 102.07 | 852.49 | 1265.76 | 31.61 | 1278.49 | 1234.15 | 1234.27 | −0.12 |
| 6 | −214.85 | 36.15 | 1012.34 | 1011.10 | 162.10 | 942.91 | 1234.15 | 1234.36 | −0.21 |
| 7 | 55.09 | 16.89 | 1160.02 | 1160.73 | 67.45 | 613.74 | 1234.15 | 1234.27 | −0.12 |
| 8 | 58.98 | 1.35 | 838.59 | 1191.43 | −109.66 | 1316.00 | 1234.15 | 1233.97 | 0.18 |
| RMS error | | | | | | | | | 0.14 |
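The RMS figure in Table 1 can be reproduced directly from the tabulated deviations, and each dm can be cross-checked from the point coordinates; the short check below uses only the values printed in Table 1.

```python
import numpy as np

# Deviations Δd = d_t − d_m from Table 1 (mm).
delta = np.array([-0.12, 0.09, 0.10, 0.11, -0.12, -0.21, -0.12, 0.18])
rms = np.sqrt(np.mean(delta ** 2))
print(round(rms, 2))        # 0.14, the value reported in Table 1

# Cross-check one trial: the distance between the two measured points of
# trial 1 should reproduce d_m (small differences reflect the two-decimal
# rounding of the printed coordinates).
left1  = np.array([-38.93, 69.70, 944.18])
right1 = np.array([1136.60, 10.56, 1315.73])
print(round(float(np.linalg.norm(right1 - left1)), 2))   # ≈ 1234.27
```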

3.2. Real Data Measurement Experiment

To verify the effectiveness of the proposed method, the following real data measurement experiment was designed. In the experiment, the 3D scanning sensor is placed at two positions to measure the 3D morphology of two parts of a train wheel.
Figure 11. Coded image and light strip image captured by the 3D scanning sensor at the first position.
Figure 12. Coded image and light strip image captured by the 3D scanning sensor at the second position.
The wide-field camera of the 3D scanning sensor measures the arranged planar targets. With the planar targets as intermediaries, the local 3D data from the two measurements are unified into the global coordinate system. The coded images and the light strip images are shown in Figure 11 and Figure 12, respectively.
The 3D morphology of the object measured by the 3D scanning sensor at the first position is shown in Figure 13a. The 3D morphology of the object measured by the 3D scanning sensor at the second position is shown in Figure 13b. The united 3D morphology of the object measured by the 3D scanning sensor at two positions is shown in Figure 13c.
Figure 13. (a) 3D morphology of the object measured by the 3D scanning sensor at the first position; (b) 3D morphology of the object measured by the 3D scanning sensor at the second position; (c) united 3D morphology of the object measured by the 3D scanning sensor at two positions.
To further validate the effectiveness of the proposed algorithm, we also measured a missile model; its 3D morphology is shown in Figure 14.
Figure 14. (a) 3D morphology of the missile model measured by the 3D scanning sensor at the first position; (b) 3D morphology of the missile model measured by the 3D scanning sensor at the second position; (c) united 3D morphology of the missile model measured by the 3D scanning sensor at two positions.

4. Conclusions

Existing movable vision measurement methods for the 3D surfaces of large-sized objects have problems such as long operation times, low efficiency, and unsuitability for soft surfaces. To address these problems, a fast and high-precision movable vision measurement method for the 3D surfaces of large-sized objects is introduced in this paper.
Compared with existing measurement methods, the proposed method combines a raster binocular vision sensor with a wide-field camera to form a 3D scanning sensor, so it is not necessary to paste markers on the object's surface or to place them in front of the object. Meanwhile, the proposed method performs the local 3D measurement and the integration of the local 3D data simultaneously, without repeatedly moving a target in front of the object and the 3D scanning sensor, which greatly improves the measurement efficiency. Physical experiments confirm that, for a 1D target approximately 1.2 m in length, the accuracy of the proposed method can reach 0.14 mm. The proposed method also offers high flexibility and efficiency.
Since the size and measuring location of mass-produced large-sized products are basically fixed, each movement position of the 3D scanning sensor can be determined in advance, and the positions of the planar targets used in the proposed method can be optimized based on each movement position of the 3D scanning sensor. Thus, the proposed method is especially suitable for measuring mass-produced large-sized products on industrial sites.

Acknowledgments

This work was supported by the National Natural Science Foundation of China under Grants No. 51175027 and 61127009, and by the Beijing Natural Science Foundation under Grant No. 3132029.

Author Contributions

Zhen Liu and Xinguo Wei conceived and designed the experiments; Zhen Liu, Xiaojing Li and Fengjiao Li performed the experiments and analyzed the data; Guangjun Zhang contributed analysis tools; Zhen Liu wrote the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Chen, F.; Brown, G.M.; Song, M. Overview of three-dimensional shape measurement using optical methods. Opt. Eng. 2000, 39, 10–22. [Google Scholar] [CrossRef]
  2. Malamas, E.N.; Petrakis, E.G.M.; Zervakis, M.; Petit, L.; Legat, J.D. A survey on industrial vision systems, applications and tools. Image Vis. Comput. 2003, 21, 171–188. [Google Scholar] [CrossRef]
  3. Kovac, I. Flexible inspection system in the body-in-white manufacturing. In Proceedings of the International Workshop on Robot Sensing, 2004 (ROSE 2004), Graz, Austria, 24–25 May 2004; pp. 41–48.
  4. Okamoto, A.; Wasa, Y.; Kagawa, Y. Development of shape measurement system for hot large forgings. Kobe Steel Eng. Rep. 2007, 57, 29–33. [Google Scholar]
  5. Furferi, R.; Governi, L.; Volpe, Y.; Carfagni, M. Design and assessment of a machine vision system for automatic vehicle wheel alignment. Int. J. Adv. Robot. Syst. 2013, 10. [Google Scholar] [CrossRef]
  6. Zeng, L.; Hao, Q.; Kawachi, K. A scanning projected line method for measuring a beating bumblebee wing. Opt. Commun. 2000, 183, 37–43. [Google Scholar] [CrossRef]
  7. Jang, W.; Je, C.; Seo, Y.; Lee, S.W. Structured-light stereo: Comparative analysis and integration of structured-light and active stereo for measuring dynamic shape. Opt. Lasers Eng. 2013, 51, 1255–1264. [Google Scholar] [CrossRef]
  8. Morano, R.A.; Ozturk, C.; Conn, R.; Dubin, S.; Zietz, S.; Nissano, J. Structured light using pseudorandom codes. IEEE Trans. Pattern Anal. Mach. Intell. 1998, 20, 322–327. [Google Scholar] [CrossRef]
  9. Salvi, J.; Pagès, J.; Batlle, J. Pattern codification strategies in structured light systems. Pattern Recognit. 2004, 37, 827–849. [Google Scholar] [CrossRef]
  10. Koninckx, T.; Griesser, A.; van Gool, L. Real-time range scanning of deformable surfaces by adaptively coded structured light. In Proceedings of the International Conference on 3-D Digital Imaging and Modelling, Banff, AB, Canada, 6–10 October 2003; pp. 293–300.
  11. Su, X.Y.; Chen, W.J.; Zhang, Q.C.; Chao, Y.P. Dynamic 3-D shape measurement method based on FTP. Opt. Lasers Eng. 2001, 36, 49–64. [Google Scholar] [CrossRef]
  12. Zappa, E.; Busca, G. Static and dynamic features of Fourier transform profilometry: A review. Opt. Lasers Eng. 2012, 50, 1140–1151. [Google Scholar] [CrossRef]
  13. Su, X.Y.; Zhou, W.S.; Bally, G.; Vukicevic, D. Automated phased-measuring profilometry using defocused projection of a Ronchi grating. Opt. Commun. 1992, 94, 561–573. [Google Scholar] [CrossRef]
  14. Quan, C.G.; Chen, W.; Tay, C.J. Phase-retrieval techniques in fringe-projection profilometry. Opt. Lasers Eng. 2010, 48, 235–243. [Google Scholar] [CrossRef]
  15. Huang, P.S.; Zhang, C.P.; Chiang, F.P. High-speed 3-D shape measurement based on digital fringe projection. Opt. Eng. 2003, 42, 163–168. [Google Scholar] [CrossRef]
  16. Zhang, S. Recent progresses on real-time 3D shape measurement using digital fringe projection techniques. Opt. Lasers Eng. 2010, 48, 149–158. [Google Scholar] [CrossRef]
  17. Sun, J.H.; Zhang, G.J.; Wei, Z.Z.; Zhou, F.Q. Large 3D free surface measurement using a movable coded light-based stereo vision system. Sens. Actuators A Phys. 2006, 132, 460–471. [Google Scholar] [CrossRef]
  18. Lu, R.S.; Li, Y.F.; Yu, Q. On-line measurement of straightness of seamless steel pipe using machine vision technique. Sens. Actuators A Phys. 2001, 94, 95–101. [Google Scholar] [CrossRef]
  19. Li, Q.; Ren, S. A Real-Time Visual Inspection System for Discrete Surface Defects of Rail Heads. IEEE Trans. Instrum. Meas. 2012, 61, 2189–2199. [Google Scholar] [CrossRef]
  20. Li, Y.; Li, Y.F.; Wang, Q.L.; Xu, D.; Tan, M. Measurement and defect detection of the weld bead based on online vision inspection. IEEE Trans. Instrum. Meas. 2010, 59, 1841–1849. [Google Scholar] [CrossRef]
  21. Liu, Z.; Zhang, G.J.; Wei, Z.Z.; Sun, J.H. A global calibration method for multiple vision sensors based on multiple targets. Meas. Sci. Technol. 2011, 22. [Google Scholar] [CrossRef]
  22. Posdamer, J.L.; Altschuler, M.D. Surface measurement by space-encoded projected beam systems. Comput. Graph. Image Process. 1982, 18, 1–17. [Google Scholar] [CrossRef]
  23. Steger, C. An unbiased detector of curvilinear structures. IEEE Trans. Pattern Anal. Mach. Intell. 1998, 20, 113–125. [Google Scholar] [CrossRef]
  24. Zhang, Z.Y. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334. [Google Scholar] [CrossRef]
  25. Bouguet, J.Y. Camera Calibration Toolbox for Matlab. Available online: http://www.vision.caltech.edu/bouguetj/calib_doc/ (accessed on 17 February 2015).
