Article

Effect of Catadioptric Component Postposition on Lens Focal Length and Imaging Surface in a Mirror Binocular System

Key Laboratory of Precision Opto-mechatronics Technology, Ministry of Education, Beihang University, Beijing 100083, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Sensors 2019, 19(23), 5309; https://doi.org/10.3390/s19235309
Submission received: 30 October 2019 / Revised: 27 November 2019 / Accepted: 29 November 2019 / Published: 2 December 2019
(This article belongs to the Section Optical Sensors)

Abstract

The binocular vision system is widely used in three-dimensional measurement, drone navigation, and many other fields. However, due to the high cost, large volume, and inconvenient operation of the two-camera system, it is difficult to meet the weight and load requirements of the UAV system. Therefore, mirror binocular systems with a single camera have been studied. Existing mirror binocular systems place the catadioptric components in front of the lens, which keeps the volume of the measurement system large. In this paper, a catadioptric postposition system is designed, which places the prism behind the lens to achieve mirror binocular imaging. The influence of the post prism on the focal length and imaging surface of the optical system is analyzed. The feasibility of post-mirror binocular imaging is verified by experiments, and it is shown to be reasonable to compensate the focal length change by changing the back focal plane position. This research lays the foundation for subsequent research on 3D reconstruction with the novel mirror binocular system.

1. Introduction

With the rapid development of computer vision and image processing technology, binocular vision systems have been widely applied in fields such as industrial manufacturing, parts and workpiece inspection [1,2,3], drone navigation [4,5], and intelligent security. Additionally, the demand for three-dimensional measuring technology has been continuously increasing in recent years, so research on stereo camera systems that can obtain the three-dimensional surface of objects has attracted great attention [6]. For example, Rodríguez and Mejía Alanís [7] discussed the situation in a binocular system where each camera captures the regions occluded from the other camera. A traditional binocular stereo vision system is generally composed of two cameras that capture scenes of the object from different positions [8], which is costly, time consuming, and poorly synchronized [9]. The delay in capturing images makes it impossible for both cameras to take pictures at exactly the same time. Moreover, the weight and data load of the two-camera measurement system are large, making it difficult to meet the volume and weight requirements of flight measurement in UAV navigation. To solve the problems above, many researchers have studied the mirror binocular system, which consists of a single camera and catadioptric components.
Compared with the traditional stereo vision system composed of multiple cameras, the mirror binocular system has great advantages. This method uses a single camera and catadioptric components to acquire stereo vision images. Such vision sensors are low-cost, simple to operate, perfectly synchronized, and fast for measurement. Gluckman and Nayar [10] first proposed a catadioptric stereo system using a plane mirror. By using specular reflection to capture a stereo image of a measurement scene, the data acquisition and calibration process of the vision sensor is simplified. Feng X F and Pan D F et al. [11] proposed a stereo vision sensor model based on plane mirror imaging and discussed the measurement theory of 3D points, which is suitable for high-precision measurement at short distances. Yu L and Pan B et al. [12] designed a stereo camera system with a single camera combined with a four-plane mirror to capture a pair of object images with fast speed and high robustness. Zhou F et al. [13] designed a mirror binocular system consisting of a single camera and a quadrangular pyramid-shaped plane mirror group to establish a single-camera multi-mirror epipolar constraint model. DooHyun Lee and InSo Kweon et al. [14] proposed a new stereo camera system using double prisms, in which a triangular prism is placed in front of the lens to construct a simple stereo system; it greatly simplifies the calibration process of the system. Xiao Y and Lim K B et al. [15] designed a single-lens stereo vision system based on multiface prisms, extended it from trinocular to multiocular, and established two models to describe the vision system. Chen C Y and Deng Q L et al. [16] proposed single-lens panoramic stereo photography using a double-symmetric prism placed in front of the lens to obtain images of four different viewing angles, which enhances the image resolution several times and eliminates the moiré effect. Teresa Wu et al. [17] explored the use of liquid-crystal shutter-based technology for stereoscopic display in prism-based single-camera systems, quantitatively assessing the performance of the proposed system against a benchmark dual-camera system. Chen C Y and Yang T T et al. [18] designed a microprism array optical system to realize the acquisition of single-camera stereo image pairs and explained the principle by which the optical system captures stereo images; the field of view and the lens were also optimized. Zhou F and Chen X et al. [19] proposed an asymmetric binocular catadioptric system, analyzed the influence of the optical-path difference in the optical system, and compensated for it.
Ray tracing is an analytical method for accurately calculating the exact path of light through an optical system by applying the law of refraction at each refracting surface, used when treating optical system imaging problems or in optical design. Yu L and Wei Q et al. [20] presented a compact structure composed of two transmissive aspheric surfaces and two reflective conicoid surfaces based on optical design, which enhanced the stability of the second mirror and has good imaging ability. Hagimori H et al. [21] explained the changes in the optical design concept of the focusing function in the zoom lens, and introduced the development of the zoom lens and the different types of typical lenses. Shen T C and Drost R J et al. [22] proposed a catadioptric beacon position system that can provide mobile network nodes with omnidirectional situational awareness of neighboring nodes. Gagnon Y L and Speiser D I et al. [23] proposed an optical system characterization method that simplifies numerical ray tracing, greatly simplifying the calculation of ray positions and directions by using the Chebyshev approximation. However, all of the existing mirror binocular studies above place the catadioptric structure in front of the lens, which makes the measuring sensor system larger and does not significantly improve the convenience of measurement.
This paper proposes a novel single-camera stereo vision system with a catadioptric component placed behind the lens, building on existing mirror binocular designs, and we concentrate on the influence of the catadioptric component postposition on the focal length and imaging surface of the whole system, thereby making the single-camera stereo vision system capable of imaging through focal length compensation. In this paper, we use a small-wedge-angle prism as the catadioptric component and place it between the lens and the camera, which greatly reduces the size and cost of the measurement system while retaining perfect synchronization. At the same time, however, it changes the focal length of the optical system and the position of the imaging surface, so that the imaging surface no longer falls on the original CCD plane, resulting in defocusing and poor image quality. Therefore, the changes in the focal length and in the position of the imaging surface are studied and compensated, so that the image surface, which has been separated into left and right images, falls on the CCD surface and a clear image is acquired. Thus, we can verify the feasibility of the single-camera binocular vision system. To verify the feasibility of the sensor, we carried out a calibration experiment. This article concentrates on the effect on the focal length and imaging surface, and we aim to enable the sensor to image. Three-dimensional reconstruction is planned to be discussed in future work.
The rest of the paper is organized as follows. Section 2 provides the detailed descriptions of the proposed system and the derivation of ray trace. Section 3 is followed by the experiments and analysis with setup parameters. Finally, Section 4 concludes the paper and proposes further research areas.

2. Principle

2.1. Influence of Catadioptric System Postposition on the Incident Light Path of Parallel Light

The camera used in machine vision is composed of a lens and a CCD camera. The camera imaging principle is similar to the pinhole imaging model, in which the lens is equivalent to the pinhole [24]. The lens can receive enough light for the camera to obtain proper exposure in a short time, and it is capable of concentrating the beam, imaging a space object onto the camera CCD plane to obtain a clear image, as described by the Gaussian imaging formula. We can simplify the complex structure of the lens into a single convex lens for analysis. In the past, the design concept of the mirror binocular system was to place the catadioptric structure in front of the lens, as shown in Figure 1.
In Figure 1a, a triangular prism is placed in front of the camera lens. Refraction at the prism surfaces changes the optical path to form two beams and thereby split the image. The image acquisition result is equivalent to shooting with the left and right virtual cameras in the figure. Figure 1b–d show a plane mirror placed in front of the lens, where the light path is changed by reflection to produce a beam-splitting effect. Figure 1b shows a tilted single plane mirror placed in front of the lens, which is equivalent to a binocular system consisting of a real camera and a virtual camera. Figure 1c uses a four-plane mirror to form a catadioptric structure, in which the light beam is split by two reflections. Figure 1d uses a quadrangular pyramid plane mirror to form a catadioptric structure. Both Figure 1c,d are equivalent to a binocular system consisting of two virtual cameras.
In the cases above, the optical path is changed outside the camera system. Therefore, it does not affect the focal length, and the position of the imaging surface in the camera does not change either. We usually use virtual cameras to simulate the system for analysis.
In this paper, the catadioptric system uses the prism shown in Figure 1a and places it between the lens and the CCD plane of the camera to form a catadioptric postposition optical system, as shown in Figure 2. Light splitting phenomenon occurs inside the camera. As a result, the optical path in the original camera pinhole imaging model is changed, and the camera focal length is also changed. We have carried out simulation analysis and formula derivation.
Figure 3a shows the imaging optical path simulated in Zemax, in which the lens is replaced by a single convex lens. We simulate parallel light incident on the single lens and converging at the focal point, where the rays hit the imaging plane. To observe how the prism postposition changes the optical path of the camera optical system, we create a prism in the Zemax simulation. The positions of the prism and lens and the change of the optical path are shown in Figure 3b. The parameters of each surface of the new lens-prism group are set as shown in Table 1.
To quantitatively analyze the change of focus, we calculate the optical path in Figure 3b and derive the corresponding formulas. The schematic diagram of the optical path derivation is shown in Figure 4.
The single lens aperture is $D_1$, the prism aperture is $D_2$, and the distance between the lens and the prism is $d$. Two parallel rays on the same side of the axis are traced. The distance between ray 1 and the optical axis is $l_1$, and the distance between ray 2 and the optical axis is $l_2$. The focal length is $f$, and the wedge angle of the triangular prism is $\theta$. $J$ and $K$ are the exit points of the two rays passing through the single lens. $B$ and $D$ are the incident points of the two rays at the prism. $C$ and $G$ are the exit points of the rays leaving the prism. $\alpha_1$ and $\alpha_2$ are the incident angles of the first refraction when the two rays enter the prism, and $\beta_1$ and $\beta_2$ are the corresponding exit angles. $i_1$ and $i_2$ are the incident angles of the second refraction, and $r_1$ and $r_2$ are the corresponding exit angles. $F'$ is the intersection of the two rays refracted by the prism. The refractive index of the prism material is $n$. A coordinate system is established at the lens center, as in Figure 4.
Analyzing ray 1, the geometric relationship shown in the figure gives

$$\alpha_1 = \arctan\frac{l_1}{f}$$
The first refraction occurs when the ray enters the prism; by Snell's law of refraction,

$$\sin\alpha_1 = n\sin\beta_1$$
The coordinates of point $B$ are $(d,\; l_1 - d\tan\alpha_1)$, and the slope of line $l_{BC}$ is $-\tan\beta_1$, so the equation of line $l_{BC}$ is $y - (l_1 - d\tan\alpha_1) = -\tan\beta_1(x - d)$. The coordinates of point $A$ are $(d,\; D_2/2)$, and the slope of line $l_{AP}$ is $-\cot\theta$, so the equation of line $l_{AP}$ is $y - D_2/2 = -\cot\theta(x - d)$. Solving the simultaneous linear equations gives the coordinates of point $C$:

$$x_C = \frac{D_2/2 - l_1 + d\tan\alpha_1}{\cot\theta - \tan\beta_1} + d, \qquad y_C = \frac{D_2}{2} - \cot\theta\,\frac{D_2/2 - l_1 + d\tan\alpha_1}{\cot\theta - \tan\beta_1}$$
From the geometric relationship,

$$i_1 = \frac{\pi}{2} - \left(\pi - \beta_1 - \left(\frac{\pi}{2} + \theta\right)\right) = \beta_1 + \theta$$
The second refraction occurs when the rays exit the prism; by the law of refraction, $n\sin i_1 = \sin r_1$. With point $C$ at $(x_C, y_C)$, the equation of line $l_{CF'}$ is $y - y_C = -\tan(r_1 - \theta)(x - x_C)$; with point $G$ at $(x_G, y_G)$, the equation of line $l_{GF'}$ is $y - y_G = -\tan(r_2 - \theta)(x - x_G)$. Combining the equations of lines $l_{CF'}$ and $l_{GF'}$, the coordinates of the intersection of the two rays are

$$x_{F'} = \frac{y_G - y_C + x_G\tan(r_2 - \theta) - x_C\tan(r_1 - \theta)}{\tan(r_2 - \theta) - \tan(r_1 - \theta)}, \qquad y_{F'} = y_C - \tan(r_1 - \theta)\,(x_{F'} - x_C)$$
In the simulation, to facilitate observation and analysis, the prism and lens parameters can be assumed. If $D_1 = 25$ mm, $D_2 = 25$ mm, $f = 100$ mm, $n = 1.575$, $d = 10$ mm, $l_1 = 10$ mm, $l_2 = 6$ mm, and $\theta = 10°$, the calculation results are as follows. The coordinates of point $C$ are (10.6244, 8.9589), and the coordinates of point $G$ are (11.2607, 5.3502). The coordinates of the new focus $F'$ are (93.2534, −7.3390). The ray path diagram is shown in Figure 5. The dotted line is the original light path, and the solid line indicates the light path after the prism is added.
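The derivation above can be checked numerically. The following Python sketch is our own re-implementation, not the authors' code; it assumes the coordinate origin at the lens center with the vertical prism front face at x = d. It traces the two parallel rays through the lens-prism model and intersects the exit rays, reproducing the reported forward shift of the focus; small differences from the quoted coordinates may come from rounding or sign conventions.

```python
import math

def trace_focus(D2=25.0, f=100.0, n=1.575, d=10.0, l1=10.0, l2=6.0, theta_deg=10.0):
    """Trace two parallel rays (heights l1, l2) through a thin lens followed by a
    wedge prism (vertical front face at x = d, hypotenuse tilted by the wedge
    angle theta) and return the intersection F' of the two exit rays."""
    theta = math.radians(theta_deg)

    def through_prism(h):
        alpha = math.atan(h / f)                 # angle after the lens, Eq. (1)
        beta = math.asin(math.sin(alpha) / n)    # Snell's law at the front face
        # Intersection with the hypotenuse y - D2/2 = -cot(theta) (x - d)
        t = (D2 / 2 - h + d * math.tan(alpha)) / (1 / math.tan(theta) - math.tan(beta))
        x_exit, y_exit = d + t, D2 / 2 - t / math.tan(theta)
        r = math.asin(n * math.sin(beta + theta))  # refraction at the exit face
        return x_exit, y_exit, -math.tan(r - theta)  # exit point and ray slope

    xC, yC, s1 = through_prism(l1)
    xG, yG, s2 = through_prism(l2)
    # Intersect the two exit rays: yC + s1 (x - xC) = yG + s2 (x - xG)
    xF = (yG - yC + s1 * xC - s2 * xG) / (s1 - s2)
    yF = yC + s1 * (xF - xC)
    return xF, yF

xF, yF = trace_focus()
print(f"new focus F' = ({xF:.4f}, {yF:.4f})")
```

With the parameters above, the traced focus lands at roughly x = 93 mm below the axis, i.e., in front of the original 100 mm focus, consistent with the focal shortening discussed next.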
To study the effects of changes in the prism and optical system parameters on the optical path and system focal length, we vary the parameters and observe the optical path and focus changes; the focus coordinate changes are shown in Table 2.
Rows 1, 2, and 3 of the table show the change in the wedge angle of the prism, with the resulting optical path changes shown in Figure 6. The larger the wedge angle of the prism, the greater the deflection of the optical path, the larger the coordinate change of the new focus $F'$, the smaller the focal length, and the more obvious the imaging effect. Rows 4 and 5 of the table show the change in the distance between the prism and the lens, with the resulting optical path changes shown in Figure 7. The smaller the distance, the larger the coordinate change of the new focus $F'$, the smaller the focal length, and the more obvious the imaging effect. The first and fourth rows show the change in lens aperture; the lens aperture does not affect the new focus $F'$ when the focal length of the single lens is unchanged.
From the above analysis, the prism postposition system causes the focus to change and produces an image separation effect: two symmetrical focal points are formed and the focal length becomes smaller, so the imaging surface moves closer to the lens and defocusing occurs because the image forms before falling on the CCD plane.

2.2. Analysis of Aliasing of Two Images after Imaging Separation

Based on the above formulas, we continue to analyze the imaging on the CCD image surface when light from objects at a finite distance passes through the lens and the catadioptric postposition system. The formulas derived above calculate the positional change of the focus for incident parallel light. The object is now placed at a finite distance, and the position of the image point is calculated.
In Zemax's optical path design, we change the ∞ in the object plane (OBJ) thickness column to a finite value and change the field-of-view height in Field Data, thus changing the angle of the incident rays, so that we can simulate the optical path of an object at a finite distance through the optical system. The parameters of each surface are shown in Table 3, and the 3D layout diagram is shown in Figure 8.
As can be seen from Figure 8, the red and purple lines represent the light emitted by the upper and lower ends of the object, respectively; after passing through the system, there are two inverted images on the CCD image surface. To obtain a complete, non-aliased image, rays 1 and 4 must not exceed the CCD image plane, and the position of ray 2 must be higher than that of ray 3 on the image plane. As this optical path is symmetrical, only one side of the optical path needs to be analyzed.
In Figure 9, ray 1 emitted from the top of the object passes through the lens and the prism and is incident on the CCD plane at point $D$, whose $y$-coordinate is $y_1$; ray 2 is incident at point $M$, whose $y$-coordinate is $y_2$. If aliasing is not to occur, $y_1 > 0$ must be satisfied. To keep the image within the CCD plane, $y_2 > -D_{CCD}/2$ is required, where $D_{CCD}$ is the $y$-direction length of the CCD plane.
In Figure 9, by the Gaussian formula,

$$\frac{1}{l'} - \frac{1}{l} = \frac{1}{f}$$

The lateral (vertical axis) magnification is

$$\beta = \frac{l'}{l} = \frac{h'}{h}$$
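As a quick numerical check of the Gaussian formula and magnification (a sketch with illustrative distances; the 8.5 mm focal length matches the lens used later in the paper):

```python
# Thin-lens Gaussian formula 1/l' - 1/l = 1/f and magnification beta = l'/l.
def image_distance(l, f):
    """Signed image distance l' for object distance l (object side negative)."""
    return 1.0 / (1.0 / f + 1.0 / l)

f = 8.5            # lens focal length (mm), as used in Section 2.3
l = -20.0          # object placed 20 mm in front of the lens
l_img = image_distance(l, f)
beta = l_img / l   # lateral magnification (negative: inverted image)
print(l_img, beta)
```

For this object distance the image forms about 14.78 mm behind the lens with an inverted image, which is the baseline against which the prism-induced shift is measured.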
For ray 1, the angle of incidence at point $B$ is $\alpha_1 = \arctan\frac{D_1/2 + h}{l}$. By the law of refraction, $\sin\alpha_1 = n\sin\beta_1$. The linear equation of $BC$ is $y - (-D_1/2 - d\tan\alpha_1) = \tan\beta_1(x - d)$; combining it with the linear equation of the hypotenuse of the lower half of the prism, $y - (-D_2/2) = \cot\theta(x - d)$, we obtain the coordinates of point $C$:

$$x_C = \frac{D_1/2 - D_2/2 - d\tan\alpha_1}{\tan\beta_1 - \cot\theta} + d, \qquad y_C = -\frac{D_2}{2} - \cot\theta\,\frac{D_1/2 - D_2/2 - d\tan\alpha_1}{\tan\beta_1 - \cot\theta}$$
The angle of incidence at point $C$ is $i_1 = \beta_1 + \theta$. By the law of refraction, $n\sin i_1 = \sin r_1$, and the linear equation of line $CD$ is $y - y_C = \tan(r_1 - \theta)(x - x_C)$; this line intersects the CCD surface at point $D$. Substituting the abscissa of the CCD surface, $x_D = D_{CCD}$, the position of ray 1 on the CCD surface is obtained as $y_1 = y_C + \tan(r_1 - \theta)(D_{CCD} - x_C)$. Requiring $y_1 > 0$ ensures that the two images do not overlap.
For ray 2, the angle of incidence at point $F$ is $\alpha_2 = \arctan\frac{D_1/2 - h}{l}$. Combining the law of refraction, $\sin\alpha_2 = n\sin\beta_2$, the coordinates of point $G$ are obtained similarly:

$$x_G = \frac{D_2/2 - D_1/2 + d\tan\alpha_2}{\cot\theta - \tan\beta_2} + d, \qquad y_G = \frac{D_2}{2} - \cot\theta\,\frac{D_2/2 - D_1/2 + d\tan\alpha_2}{\cot\theta - \tan\beta_2}$$
The angle of incidence at $G$ is $i_2 = \beta_2 + \theta$. By the law of refraction, $n\sin i_2 = \sin r_2$, and the linear equation of line $GM$ is $y - y_G = \tan(r_2 - \theta)(x - x_G)$; this line intersects the CCD surface at point $M$. Substituting the abscissa of the CCD surface, $x_M = D_{CCD}$, the position of ray 2 on the CCD surface is obtained as $y_2 = y_G + \tan(r_2 - \theta)(D_{CCD} - x_G)$. Requiring $y_2 > -D_{CCD}/2$ keeps the image within the CCD image plane.
Based on the above analysis, we design and select the prism parameters. The wedge angle of the prism should not be too large; if it is, total internal reflection will occur at the second refractive surface. According to the geometric analysis, the wedge angle should be less than 17.9°. Letting the wedge angle vary from 0° to 17.9°, we observe the change of $y_1$ and $y_2$. As shown in Figure 10, the blue line is the $y_1$ curve and the red line is the $y_2$ curve. According to the actual size of the CCD, the prism wedge angle should be chosen in the range of approximately 6–13.5°.

2.3. Compensation for Focal Length Changes Based on Finite Point Imaging Changes on the Axis

We now investigate how the image point of a finite-distance on-axis point changes through the optical system. According to the position change of the image point of the on-axis object point after image separation, the distance between the CCD plane and the lens is compensated.
Similarly, since the optical path is symmetrical about the axis, only the optical path change on the upper side is analyzed. The intersection of the two rays passing through the lens and prism system is the new image point position after the image is separated.
As shown in Figure 11, point $P$ is on the axis, and a two-dimensional plane coordinate system is established with the lens center as the origin. The lens aperture is $D_1$, the prism aperture is $D_2$, the wedge angle is $\theta$, and the prism-to-lens spacing is $d$. $A_1$, $B_1$, $C_1$ are the points at which ray 1 turns along the optical path of the system, and $A_2$, $B_2$, $C_2$ are the corresponding points for ray 2. $P'$ is the image point of $P$ when there is no prism, and $P''$ is the image point after the image is separated by the added prism. The derivation is similar to Equation (1): the incident parallel rays of Equation (1) are replaced by a finite-distance point source, which can be combined with the single-lens Gaussian imaging formula to obtain $l'$, and the incident angle becomes $\alpha_1 = \arctan\frac{y_{A_1}}{l'}$; the equations of the two outgoing rays (lines $C_1P''$ and $C_2P''$) then give
$$x_{P''} = \frac{y_{C_2} - y_{C_1} + \tan(r_2 - \theta)\,x_{C_2} - \tan(r_1 - \theta)\,x_{C_1}}{\tan(r_2 - \theta) - \tan(r_1 - \theta)}, \qquad y_{P''} = y_{C_1} - \tan(r_1 - \theta)\,(x_{P''} - x_{C_1})$$
We use MATLAB to simulate the optical path. The simulation parameters are set according to the actual situation: a lens with focal length $f = 8.5$ mm is used, the single lens and prism apertures are both 12.76 mm, and the prism wedge angle is 10°. Setting the on-axis object distance to $l = -20$ mm, we observe the change of the image point, as shown in Figure 12. It can be seen from the figure that after the on-axis point's image is separated by the prism system, the image point is closer to the back surface of the lens than the original image point, that is, the image distance is shortened. We then change the object distance and observe the change of the image point. Considering the actual application process, the distance between the measured object and the camera lens ranges from 10 cm to 3 m. Substituting $l = -100$ to $-3000$ mm, we observe the change of the image point position. Figure 13 shows the change of the image point for a lens focal length of 8.5 mm with a 10° prism. The blue star dots represent the position coordinates of the on-axis point imaged by the single lens, and the red star dots indicate the position of the on-axis point after adding the prism. The distance between the image plane and the lens varies from 8.5 mm to 9 mm without the prism, whereas with the prism it varies from 5.3 mm to 6.3 mm. Therefore, there are two ways to compensate the focal length inside the optical system: moving the CCD plane forward, reducing the distance between the CCD surface and the rear surface of the lens so that it lies exactly at the ideal image point position; or increasing the focal length of the lens so that the image point moves back to the CCD plane. We use a zoom lens to reduce the distance between the lens and the CCD plane by moving the position of the back focal plane while keeping the lens focal length constant, compensating for the focal length change caused by the rear prism.
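The on-axis simulation can be reproduced with a short ray-trace sketch (again our own re-implementation, not the authors' MATLAB code; the lens-to-prism spacing d is not stated for this simulation, so the value below, along with the two ray heights, is an assumption for illustration):

```python
import math

def on_axis_image(f=8.5, n=1.575, theta_deg=10.0, D2=12.76, d=2.0, l=-20.0,
                  heights=(2.0, 4.0)):
    """Trace two rays from an on-axis point at distance |l| through a thin lens
    and a rear wedge prism (front face at x = d); return the Gaussian image
    distance without the prism and the separated image point P'' with it."""
    theta = math.radians(theta_deg)
    l_img = 1.0 / (1.0 / f + 1.0 / l)          # Gaussian image distance, no prism

    def exit_ray(h):
        alpha = math.atan(h / l_img)           # ray converging toward P'
        beta = math.asin(math.sin(alpha) / n)  # refraction at the vertical face
        t = (D2 / 2 - h + d * math.tan(alpha)) / (1 / math.tan(theta) - math.tan(beta))
        x0, y0 = d + t, D2 / 2 - t / math.tan(theta)   # exit point on hypotenuse
        r = math.asin(n * math.sin(beta + theta))      # refraction on exit
        return x0, y0, -math.tan(r - theta)            # exit point and slope

    x1, y1, s1 = exit_ray(heights[0])
    x2, y2, s2 = exit_ray(heights[1])
    xP = (y2 - y1 + s1 * x1 - s2 * x2) / (s1 - s2)     # intersect the exit rays
    yP = y1 + s1 * (xP - x1)
    return l_img, xP, yP

l_img, xP, yP = on_axis_image()
print(l_img, xP, yP)  # image point with prism sits closer to the lens
```

The traced P'' lands noticeably in front of the prism-free image point, illustrating the shortened image distance that the back-focal-plane adjustment must compensate.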

2.4. Imaging Clarity Evaluation

To more accurately describe and evaluate the imaging quality of the new optical system, we need to perform Image Quality Assessment (IQA) [25]. The image quality is quantitatively evaluated using no-reference evaluation algorithms [26,27]. In this paper, the Laplacian gradient function [28], the Tenengrad gradient function, and the gray-scale variance function are used to quantitatively compare the imaging quality of the prism postposition mirror binocular system before and after focal length compensation.
The Laplacian gradient function uses the Laplacian operator to extract the horizontal and vertical gradient values [29]. The Laplacian operator is defined as follows.
$$L = \frac{1}{6}\begin{bmatrix} 1 & 4 & 1 \\ 4 & -20 & 4 \\ 1 & 4 & 1 \end{bmatrix}$$
The definition of image sharpness based on Laplacian gradient function is as follows.
$$D(f) = \sum_y \sum_x \left|G(x,y)\right|, \qquad G(x,y) > T$$
where G(x, y) is the convolution of the Laplacian operator at the pixel point (x, y).
The Tenengrad gradient function [30] is basically the same as the Laplacian gradient function, just replacing the Laplacian operator with the Sobel operator:
$$g_x = \frac{1}{4}\begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix}, \qquad g_y = \frac{1}{4}\begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix}$$
The gray-scale variance function [31] uses the gray-scale variation as the basis for the focus evaluation. The formula is as follows.
$$D(f) = \sum_y \sum_x \Big(\left|f(x,y) - f(x,y-1)\right| + \left|f(x,y) - f(x+1,y)\right|\Big)$$

3. Experiment and Analysis

3.1. Construction of the Catadioptric Component Postposition System

According to the above theoretical analysis, the catadioptric structure postposition optical system is constructed. We use a composite target as the object to be tested, an optical support to fix the object (target) on the optical platform, and a computer to store image data. The processor is an Intel Core i7-7700HQ with a 64-bit operating system. The single-camera mirror binocular vision system consists of a single camera, a prism, and a zoom lens. The experiment uses a zoom lens with a focal length of 6–12 mm for the focal length compensation imaging experiment. In the experiment, after the single lens shoots normally, the prism is added directly to take an image without focal length compensation, and then the back focal plane is adjusted to capture the image after compensating the focal length. The camera, prism, and zoom lens parameters are shown in Table 4, and the experimental setup is shown in Figure 14.
The zoom lens uses ZLKC’s megapixel HD lens, model VM06012MP, and the focal length can be changed from 6 to 12 mm by manual adjustment. The lens has three adjustment rings that adjust the focus, aperture, and back focal plane.

3.2. Comparative Analysis of Image Clarity before and after Prism Postposition Focal Length Compensation

In the experiment, corresponding to the parameters of the simulation, we fixed the focal length at 8.5 mm without the prism, and the aperture and back focal plane were adjusted to achieve the best image clarity. Keeping the three rings fixed, we mounted the prism on the back surface of the zoom lens, assembling it with the camera into a catadioptric component postposition mirror binocular stereo vision system. Then, we used the single camera to capture a set of target images. This group of images was captured with the prism added directly and the focal length uncompensated. Next, the back focal plane was adjusted separately, changing the distance between the CCD plane and the lens, and a set of target images was captured after compensating for the focal length change. This group comprises the images after focal length compensation; the two groups of images are shown in Figure 15. Then, we adjusted the focal length ring, observed the imaging effect after the prism was placed behind lenses of different focal lengths, repeated the above operations, and captured the target images before and after focal length compensation for the different focal lengths. Figure 16 shows the mirror binocular imaging pairs before and after focal length compensation at focal lengths of 6.34 mm, 10.21 mm, and 11.12 mm, with a total of 9 pairs in each group. The above images were evaluated with the three image clarity quantification methods, and the image clarity before and after focal length compensation was compared. The quantitative clarity comparison values are shown in Table 5, Table 6 and Table 7. In these tables, "U" represents the clarity values of the uncompensated images and "C" those of the compensated images. Figure 17 is a chart of the comparison values, and Figure 18 shows the mean clarity values.
From the comparison of image clarity before and after the above four sets of focal length compensation, it can be seen from Figure 18 that after the prism is placed behind lenses of different focal lengths, the image clarity after focal length compensation is still not ideal, but the image quality is significantly improved compared with no compensation. Thus, by moving the back focal plane and reducing the distance between the CCD plane and the lens, the focal length change caused by the postposition of the prism can be compensated and the imaging quality improved.

3.3. Calibration Experiment of the Catadioptric Component Postposition in Mirror Binocular

To prove the feasibility of the mirror binocular sensor with the post-catadioptric component, we collected a series of dot target images and completed the calibration experiment. The calibration accuracy determines the measurement accuracy, and the calibration accuracy is determined by the optimization method [32], whose accuracy is given as a pixel fraction. The original image of the target is shown in Figure 19. The extracted feature point images of the two virtual cameras are shown in Figure 20, and the calibration results are listed in Table 8 and Table 9. In Table 9, $(\alpha_x, \alpha_y)$ are the normalized focal lengths of the two virtual cameras, $(u_0, v_0)$ are the principal point coordinates, $(k_1, k_2, k_3)$ represent the radial distortion coefficients, and $(p_1, p_2)$ represent the tangential distortion coefficients; the relationship between the two virtual cameras is also listed. As shown in Figure 21, the back-reprojection error and forward-reprojection error of each image of the two virtual cameras are both small, which meets general visual measurement requirements. The mean back-reprojection errors of the left and right virtual cameras are both 0.012 pixel, and the mean forward-reprojection errors are 0.022 mm and 0.033 mm, respectively.
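For context, the back-reprojection error reported here is, in essence, the pixel distance between detected feature points and the points predicted by the calibrated camera model. A minimal sketch of that computation with a hypothetical pinhole camera (the intrinsics, pose, grid, and noise level below are made-up illustration values, not the calibrated parameters from Table 8 or Table 9):

```python
import numpy as np

def project(K, R, t, pts3d):
    """Pinhole projection x = K [R | t] X, returned in pixel coordinates."""
    cam = pts3d @ R.T + t            # world -> camera frame
    uv = cam @ K.T                   # apply intrinsics
    return uv[:, :2] / uv[:, 2:3]    # perspective divide

def rms_reprojection_error(detected, reprojected):
    """RMS pixel distance between detected and model-predicted points."""
    return np.sqrt(np.mean(np.sum((detected - reprojected) ** 2, axis=1)))

# Hypothetical intrinsics/pose and a planar dot grid (z = 0), as in calibration.
K = np.array([[1200.0, 0, 640], [0, 1200.0, 480], [0, 0, 1]])
R, t = np.eye(3), np.array([0.0, 0.0, 500.0])
grid = np.array([[x, y, 0.0] for x in range(5) for y in range(5)], float) * 10
ideal = project(K, R, t, grid)
detected = ideal + np.random.default_rng(0).normal(0, 0.01, ideal.shape)  # noise
print("RMS back-reprojection error:", rms_reprojection_error(detected, ideal))
```

With feature detection noise on the order of 0.01 pixel, the resulting RMS error is of the same order as the 0.012 pixel figure quoted above, which is why sub-pixel dot-center extraction is critical for the claimed accuracy.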

3.4. Discussion and Future Work

This article focuses on imaging analysis. We concentrate on the effect of the catadioptric postposition on the focal length and imaging surface, using a zoom lens to reduce the distance between the CCD and the lens. In future research, we will further study the focal length compensation of the imaging system, such as the calibration of aberrations. We also intend to use a special optical component whose refractive index differs across its parts to build compensation strategies in subsequent studies, which should make focal length compensation achieve better results.
Imaging is the foundation of vision sensors. We have proposed a novel mirror binocular system, which is significant for the development of vision sensors, and the imaging issues of this system need to be discussed in depth. The detailed three-dimensional reconstruction will therefore be addressed in future work.

4. Conclusions

In this paper, a novel mirror binocular system with a postpositioned catadioptric component is proposed. With the prism placed behind the lens, a single camera captures two images of the measured object in one picture. A simulation of the prism-postposition system was carried out, the influence of the prism parameters on the focal length of the optical system was analyzed, and a focal length compensation method was proposed. After the prism is inserted, the focus moves forward and the focal length decreases; this can be compensated by reducing the distance between the CCD plane and the lens. The Laplacian gradient function, Tenengrad gradient function, and gray-scale variance algorithms were used to quantitatively evaluate image clarity. Experiments verified that the compensated images have better quality than the uncompensated ones, so focal length compensation can be realized. In addition, the imaging and calibration experiments demonstrated the feasibility of the designed sensor. The catadioptric component postposition system can therefore obtain a mirror binocular image that is perfectly simultaneous, and it greatly reduces cost and volume and is more convenient to operate than the conventional mirrored binocular system.

Author Contributions

F.Z. proposed the idea and guided the research direction; Y.C. was involved in the theoretical performance analysis, designed and optimized the experiments, and wrote the paper; M.Z. and X.L. revised the manuscript.

Funding

This research was funded by the Ministry of Science and Technology of the People’s Republic of China (No. 2018YFB2003802).

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
CCD	Charge-coupled Device

References

  1. Luo, Z.; Zhang, K.; Wang, Z.; Zheng, J.; Chen, Y. 3D pose estimation of large and complicated workpieces based on binocular stereo vision. Appl. Opt. 2017, 56, 6822–6836. [Google Scholar] [CrossRef] [PubMed]
  2. Ali, M.A.; Lun, A.K. A cascading fuzzy logic with image processing algorithm–based defect detection for automatic visual inspection of industrial cylindrical object’s surface. Int. J. Adv. Manuf. Technol. 2019, 102, 81–94. [Google Scholar] [CrossRef]
  3. Jin, T.; Jia, H.; Hou, W.; Yamamoto, R.; Nagai, N.; Fujii, Y.; Maru, K.; Ohta, N.; Shimada, K. Evaluating 3D position and velocity of subject in parabolic flight experiment by use of the binocular stereo vision measurement. Chin. Opt. Lett. 2010, 8, 601–605. [Google Scholar] [CrossRef]
  4. Sanchez-Rodriguez, J.P.; Aceves-Lopez, A. A survey on stereo vision-based autonomous navigation for multi-rotor MUAVs. Robotica 2018, 36, 1225–1243. [Google Scholar] [CrossRef]
  5. Zhang, J.; Liu, W.; Wu, Y. Novel Technique for Vision-Based UAV Navigation. IEEE Trans. Aerosp. Electron. Syst. 2011, 47, 2731–2741. [Google Scholar] [CrossRef]
  6. Li, H.; Zhang, X.; Zhu, B.; Fatikow, S. Online Precise Motion Measurement of 3-DOF Nanopositioners Based on Image Correlation. IEEE Trans. Instrum. Meas. 2019, 68, 782–790. [Google Scholar] [CrossRef]
  7. Rodríguez, J.A.M.; Mejía Alanís, F.C. Binocular self-calibration performed via adaptive genetic algorithm based on laser line imaging. J. Mod. Opt. 2016, 63, 1–14. [Google Scholar] [CrossRef]
  8. Cui, Y.; Zhou, F.; Wang, Y.; Liu, L.; Gao, H. Precise calibration of binocular vision system used for vision measurement. Opt. Express 2014, 22, 9134–9149. [Google Scholar] [CrossRef]
  9. Shao, F.; Lin, W.; Jiang, G.; Dai, Q. Models of Monocular and Binocular Visual Perception in Quality Assessment of Stereoscopic Images. IEEE Trans. Comput. Imaging 2016, 2, 123–135. [Google Scholar] [CrossRef]
  10. Gluckman, J.; Nayar, S.K. Catadioptric Stereo Using Planar Mirrors. Int. J. Comput. Vision 2001, 44, 65–79. [Google Scholar] [CrossRef]
  11. Feng, X.F.; Pan, D.F. Research on the application of single camera stereo vision sensor in three-dimensional point measurement. J. Mod. Opt. 2015, 62, 1204–1210. [Google Scholar] [CrossRef]
  12. Yu, L.; Pan, B. Single-camera stereo-digital image correlation with a four-mirror adapter: Optimized design and validation. Opt. Lasers Eng. 2016, 87, 120–128. [Google Scholar] [CrossRef]
  13. Chai, X.; Zhou, F.; Chen, X. Epipolar constraint of single-camera mirror binocular stereo vision systems. Opt. Eng. 2017, 56, 084103. [Google Scholar] [CrossRef]
  14. Lee, D.; Kweon, I. A novel stereo camera system by a biprism. IEEE Trans. Rob. Autom. 2000, 16, 528–541. [Google Scholar]
  15. Xiao, Y.; Lim, K.B. A prism-based single-lens stereovision system: From trinocular to multi-ocular. Image Vision Comput. 2007, 25, 1725–1736. [Google Scholar] [CrossRef]
  16. Chen, C.Y.; Deng, Q.L.; Sun, W.S.; Cheng, Q.Y.; Lin, B.S.; Su, C.L. Panoramic stereo photography based on single-lens with a double-symmetric prism. Opt. Express 2013, 21, 8474–8482. [Google Scholar] [CrossRef]
  17. Zhao, Y.; Cui, X.; Wang, Z.; Chen, H.; Fan, H.; Wu, T. Prism-based single-camera system for stereo display. Opt. Eng. 2016, 55, 063106. [Google Scholar] [CrossRef]
  18. Chen, C.Y.; Yang, T.T.; Sun, W.S. Optics system design applying a micro-prism array of a single lens stereo image pair. Opt. Express 2008, 16, 15495–15505. [Google Scholar] [CrossRef]
  19. Zhou, F.; Chen, X.; Tan, H.; Chai, X. Optical-path-difference analysis and compensation for asymmetric binocular catadioptric vision measurement. Measurement 2017, 109, 233–241. [Google Scholar] [CrossRef]
  20. Yu, L.; Wei, Q.; Jiang, H.; Zhang, T.; Wang, C.; Zhu, R. Design of mid-wave infrared solid catadioptric optical system. In Proceedings of the 2013 IEEE Third International Conference on Information Science and Technology (ICIST), Yangzhou, China, 23–25 March 2013; pp. 1434–1436. [Google Scholar]
  21. Hagimori, H. Change of optical design thought about focusing of zoom lens. SPIE Opt. Eng. Appl. 2015, 9580, 95800J. [Google Scholar]
  22. Shen, T.C.; Drost, R.J.; Sadler, B.M.; Rzasa, J.R.; Davis, C.C. Omnidirectional beacon-localization using a catadioptric system. Opt. Express 2016, 24, 6931–6944. [Google Scholar] [CrossRef] [PubMed]
  23. Gagnon, Y.L.; Speiser, D.I.; Johnsen, S. Simplifying numerical ray tracing for characterization of optical systems. Appl. Opt. 2014, 53, 4784–4790. [Google Scholar] [CrossRef] [PubMed]
  24. Zhang, Z. A Flexible New Technique for Camera Calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334. [Google Scholar] [CrossRef]
  25. Liu, Z.; Laganière, R. Phase congruence measurement for image similarity assessment. Pattern Recognit. Lett. 2007, 28, 166–172. [Google Scholar] [CrossRef]
  26. Horita, Y.; Arata, S.; Murai, T. No-reference image quality assessment for JPEG/JPEG2000 coding. In Proceedings of the 12th European Signal Processing Conference, Vienna, Austria, 6–10 September 2004; pp. 1301–1304. [Google Scholar]
  27. Bosse, S.; Maniry, D.; Müller, K.R.; Wiegand, T.; Samek, W. Deep Neural Networks for No-Reference and Full-Reference Image Quality Assessment. IEEE Trans. Image Process. 2018, 27, 206–219. [Google Scholar] [CrossRef] [PubMed]
  28. Lv, X.; Wang, Z.J. Reduced-reference image quality assessment based on perceptual image hashing. In Proceedings of the 16th IEEE International Conference on Image Processing (ICIP), Cairo, Egypt, 7–10 November 2009; pp. 4361–4364. [Google Scholar]
  29. Bovik, A.C. Automatic Prediction of Perceptual Image and Video Quality. Proc. IEEE 2013, 101, 2008–2024. [Google Scholar]
  30. Tian, Y.; Hu, H.; Cui, H.; Yang, S.; Qi, J.; Xu, Z.; Li, L. Three-dimensional surface microtopography recovery from a multifocus image sequence using an omnidirectional modified Laplacian operator with adaptive window size. Appl. Opt. 2017, 56, 6300–6310. [Google Scholar] [CrossRef]
  31. Martinez, J.; Pistonesi, S.; Maciel, M.C.; Flesia, A.G. Multi-scale fidelity measure for image fusion quality assessment. Inf. Fusion 2019, 50, 197–211. [Google Scholar] [CrossRef]
  32. Rodríguez, J.A.M. Microscope self-calibration based on micro laser line imaging and soft computing algorithms. Opt. Lasers Eng. 2018, 105, 75–85. [Google Scholar] [CrossRef]
Figure 1. Traditional mirror binocular diagrams. (a) Prism in front of the camera lens. (b) Single plane mirror in front of the lens. (c) Four plane mirrors in front of the lens. (d) Quadrangular pyramid mirror in front of the lens.
Figure 2. Catadioptric postposition schematic diagram.
Figure 3. Comparison of a single lens and a lens with a prism.
Figure 4. Parallel-light incidence derivation diagram.
Figure 5. Schematic diagram of the optical path change.
Figure 6. Optical path simulation as the prism wedge angle changes.
Figure 7. Optical path simulation as the distance between the prism and the lens changes.
Figure 8. 3D layout of the light path of a point at finite distance.
Figure 9. Light path diagram of a point at finite distance.
Figure 10. Values of y1 and y2 as the prism wedge angle changes.
Figure 11. Schematic diagram of ray tracing for an on-axis point.
Figure 12. Optical path simulation for l = −20, θ = 10°.
Figure 13. Image point position (l = −100 to −3000, θ = 10°, f = 8.5 mm).
Figure 14. The experimental set-up.
Figure 15. Comparison between uncompensated and compensated images with the 8.5 mm focal length lens.
Figure 16. Comparison of images before and after compensation. (a) Uncompensated image with focal length f1. (b) Uncompensated image with focal length f2. (c) Uncompensated image with focal length f3. (d) Compensated image with focal length f1. (e) Compensated image with focal length f2. (f) Compensated image with focal length f3.
Figure 17. Comparison of uncompensated and compensated articulation values at different focal lengths.
Figure 18. Comparison of uncompensated and compensated mean articulation values at different focal lengths.
Figure 19. Dot target image.
Figure 20. Feature points in the left and right images.
Figure 21. Reprojection errors.
Table 1. New lens–prism group parameter settings.

| Number | Type | Radius | Thickness | Material | Aperture |
|---|---|---|---|---|---|
| 0 | | −∞ | | | |
| 1 | Standard | | 10.000 | | 12.500 |
| 2 | Standard | 60.171 | 4.000 | Glass (BK7) | 12.500 |
| 3 | Standard | 368.450 | 120.000 | | 12.321 |
| 4 | Non-sequence model | | | Glass (BK7) | 12.321 |
| 5 | Standard | | 0 | | 12.500 |
Table 2. Changes of optical system parameters.

| D1/mm | D2/mm | f/mm | d/mm | θ/° | F′/mm |
|---|---|---|---|---|---|
| 25 | 25 | 100 | 10 | 20 | (77.3904, 12.1569) |
| 25 | 25 | 100 | 10 | 10 | (93.2534, 7.3390) |
| 25 | 25 | 100 | 10 | 6.4 | (96.5875, 4.8750) |
| 20 | 25 | 100 | 10 | 10 | (93.2534, 7.3390) |
| 25 | 25 | 100 | 20 | 10 | (94.1007, 6.5103) |
Table 3. New lens–prism group parameter settings with finite-point incidence.

| Number | Type | Radius | Thickness | Material | Aperture |
|---|---|---|---|---|---|
| 0 | | −∞ | 300 | | |
| 1 | Standard | 60.171 | 4.000 | Glass (BK7) | 12.500 |
| 2 | Standard | 368.450 | 120.000 | | 12.321 |
| 3 | Non-sequence model | | | Glass (BK7) | 12.321 |
| 4 | Standard | | 0 | | 12.500 |
Table 4. Single-camera mirror binocular vision system set-up.

| Key Component | Configuration |
|---|---|
| Camera | Type: IMPERX IGV-B1610M-SC000; Sensor: CCD, 1/1.8″; Resolution: 1628 × 1236; Pixel size: 4.4 μm × 4.4 μm |
| Zoom lens | Type: ZLKC VM06012MP; Operation: manual; Aperture range: F1.6–C; Focal length: 6 mm–12 mm |
| Prism | Wedge angle: 10°; Size: 12.76 mm × 12.76 mm × 1.12 mm |
Table 5. Image clarity, Laplacian evaluation method (U = uncompensated, C = compensated).

| Number | 8.5 mm U | 8.5 mm C | f1 U | f1 C | f2 U | f2 C | f3 U | f3 C |
|---|---|---|---|---|---|---|---|---|
| 1 | 0.404916 | 0.597787 | 0.329958 | 0.356612 | 0.396266 | 0.466913 | 0.427831 | 0.503828 |
| 2 | 0.402704 | 0.571568 | 0.33086 | 0.371547 | 0.395573 | 0.446222 | 0.422204 | 0.495848 |
| 3 | 0.402364 | 0.582054 | 0.33305 | 0.397381 | 0.38742 | 0.434543 | 0.413441 | 0.478734 |
| 4 | 0.390978 | 0.600274 | 0.321311 | 0.373777 | 0.381786 | 0.416509 | 0.405406 | 0.463751 |
| 5 | 0.405271 | 0.60677 | 0.312168 | 0.368947 | 0.367252 | 0.456152 | 0.424629 | 0.442641 |
| 6 | | | 0.337233 | 0.358563 | 0.393361 | 0.430181 | 0.418071 | 0.490165 |
| 7 | | | 0.327276 | 0.346849 | 0.388683 | 0.408106 | 0.403412 | 0.497333 |
| 8 | | | 0.305368 | 0.384678 | 0.37485 | 0.432449 | 0.387374 | 0.481165 |
| 9 | | | 0.321917 | 0.384528 | 0.341102 | 0.456143 | 0.422035 | 0.464623 |
| mean | 0.401249 | 0.591691 | 0.324349 | 0.371431 | 0.380699 | 0.438580 | 0.413822 | 0.479788 |
Table 6. Image clarity, Tenengrad evaluation method (U = uncompensated, C = compensated).

| Number | 8.5 mm U | 8.5 mm C | f1 U | f1 C | f2 U | f2 C | f3 U | f3 C |
|---|---|---|---|---|---|---|---|---|
| 1 | 0.237058 | 0.322626 | 0.206773 | 0.222388 | 0.246308 | 0.272646 | 0.253961 | 0.276075 |
| 2 | 0.234988 | 0.315484 | 0.209144 | 0.228171 | 0.245331 | 0.267377 | 0.252828 | 0.271486 |
| 3 | 0.234645 | 0.315858 | 0.21057 | 0.237282 | 0.241437 | 0.264822 | 0.250102 | 0.266422 |
| 4 | 0.229827 | 0.30731 | 0.204355 | 0.233996 | 0.239944 | 0.259673 | 0.245449 | 0.260839 |
| 5 | 0.236956 | 0.313009 | 0.200203 | 0.231025 | 0.23165 | 0.270032 | 0.256534 | 0.255067 |
| 6 | | | 0.212365 | 0.228074 | 0.242763 | 0.261756 | 0.252835 | 0.270176 |
| 7 | | | 0.206249 | 0.224787 | 0.24121 | 0.253707 | 0.247988 | 0.273451 |
| 8 | | | 0.196007 | 0.234024 | 0.23366 | 0.262363 | 0.240395 | 0.27147 |
| 9 | | | 0.203061 | 0.234544 | 0.213349 | 0.270594 | 0.255026 | 0.266252 |
| mean | 0.234695 | 0.314857 | 0.205414 | 0.230477 | 0.237295 | 0.264774 | 0.250569 | 0.267972 |
Table 7. Image clarity, variance evaluation method (U = uncompensated, C = compensated).

| Number | 8.5 mm U | 8.5 mm C | f1 U | f1 C | f2 U | f2 C | f3 U | f3 C |
|---|---|---|---|---|---|---|---|---|
| 1 | 39.5996 | 44.2464 | 38.4736 | 36.6813 | 47.6726 | 48.4819 | 47.6474 | 47.8818 |
| 2 | 39.3334 | 44.3136 | 38.5472 | 38.0927 | 46.9064 | 47.0809 | 47.2903 | 47.6474 |
| 3 | 39.1678 | 42.8679 | 38.2744 | 39.7999 | 46.1471 | 46.4467 | 45.4121 | 45.3387 |
| 4 | 38.2404 | 43.027 | 37.2483 | 38.6948 | 45.3321 | 48.0055 | 47.0678 | 43.8563 |
| 5 | 39.6101 | 44.094 | 35.9539 | 38.368947 | 43.4656 | 48.0055 | 47.0678 | 43.8563 |
| 6 | | | 38.5745 | 37.6588 | 47.1563 | 45.9814 | 46.3777 | 47.3133 |
| 7 | | | 37.2112 | 36.7383 | 46.4811 | 44.0478 | 45.0714 | 47.3574 |
| 8 | | | 35.415 | 39.5831 | 45.017 | 46.1331 | 46.7654 | 45.0493 |
| 9 | | | 36.5621 | 39.0101 | 40.6953 | 47.6721 | 46.7654 | 45.0493 |
| mean | 39.1903 | 43.7092 | 37.3622 | 38.2982 | 45.4304 | 46.5377 | 46.1617 | 46.3686 |
Table 8. Calibration results of the single virtual cameras.

| Parameters | (αx, αy)/(pixel) | (u0, v0)/(pixel) | (k1, k2, k3)/(1/pixel²) | (p1, p2)/(1/pixel²) |
|---|---|---|---|---|
| Left | (1938.31, 1938.67) | (898.21, 587.39) | (0.083, 3.57, 2.48) | (0.0033, 0.0021) |
| Right | (1945.71, 1935.00) | (1083.07, 611.65) | (0.84, 15.03, 1.81) | (0.028, 0.0065) |
Table 9. Calibration results of the structure parameters of the two virtual cameras.

R = [ 0.0034, 0.4626, 0.8865; 0.2169, 0.8658, 0.4510; 0.9762, 0.1908, 0.1033 ]
T = [ 228.40, 178.52, 331.88 ]ᵀ
E = [ 246.27, 253.27, 131.23; 224.08, 197.12, 270.63; 48.94, 280.33, 55.26 ]
F = [ 5.246×10⁻⁷, 5.3947×10⁻⁷, 2.462×10⁻⁴; 4.7998×10⁻⁷, 4.2216×10⁻⁷, 9.4046×10⁻⁷; 1.0646×10⁻³, 1.4877×10⁻³, 1.0000 ]
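The structure parameters in Table 9 are linked by the standard two-view relations E = [t]×R and F = Kr⁻ᵀ E Kl⁻¹, where Kl and Kr are the intrinsic matrices of the two virtual cameras. A minimal NumPy sketch, checked on a synthetic two-camera configuration (the matrices below are illustrative, not the calibrated values):

```python
# Essential and fundamental matrices from the stereo structure parameters:
# E = [t]x R, F = Kr^-T E Kl^-1, satisfying the epipolar constraint
# m_r^T F m_l = 0 for corresponding pixels m_l, m_r.
import numpy as np

def skew(t):
    """Cross-product (skew-symmetric) matrix [t]x of a 3-vector t."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def fundamental(K_l, K_r, R, t):
    """Fundamental matrix from intrinsics and the relative pose (R, t)."""
    E = skew(t) @ R
    return np.linalg.inv(K_r).T @ E @ np.linalg.inv(K_l)
```

Corresponding image points of any scene point must satisfy the epipolar constraint with this F, which is the consistency check behind the epipolar geometry of the two virtual cameras.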

Share and Cite

MDPI and ACS Style

Zhou, F.; Chen, Y.; Zhou, M.; Li, X. Effect of Catadioptric Component Postposition on Lens Focal Length and Imaging Surface in a Mirror Binocular System. Sensors 2019, 19, 5309. https://doi.org/10.3390/s19235309


