Article

Self-Calibrated In-Process Photogrammetry for Large Raw Part Measurement and Alignment before Machining

1 IK4-Ideko, 20870 Basque Country, Spain
2 I3A, Universidad de Zaragoza, 50018 Zaragoza, Spain
* Author to whom correspondence should be addressed.
Sensors 2017, 17(9), 2066; https://doi.org/10.3390/s17092066
Submission received: 8 August 2017 / Revised: 4 September 2017 / Accepted: 5 September 2017 / Published: 9 September 2017
(This article belongs to the Section Physical Sensors)

Abstract:
Photogrammetry methods are increasingly used as a 3D technique for large-scale metrology applications in industry. Optical targets are placed on an object and images are taken around it, where measuring traceability is provided by precise off-process pre-calibrated digital cameras and scale bars. According to the 2D target image coordinates, target 3D coordinates and camera views are jointly computed. One application of photogrammetry is the measurement of raw part surfaces prior to machining. For this application, post-process bundle adjustment has usually been adopted for computing the 3D scene. That approach leads to long computation times and, in practice, to time-consuming and user-dependent iterative review and re-processing procedures until an adequate set of images is obtained, limiting its potential for fast, easy-to-use, and precise measurements. In this paper, a new efficient procedure is presented for solving the bundle adjustment problem in portable photogrammetry. In-process bundle computing capability is demonstrated on a consumer-grade desktop PC, enabling quasi real-time 2D image and 3D scene computing. Additionally, a method for the self-calibration of camera and lens distortion has been integrated into the in-process approach due to its potential for highest precision when using low-cost, non-specialized digital cameras. Measurement traceability is set only by scale bars available in the measuring scene, avoiding the uncertainty contribution of off-process camera calibration procedures or the use of special-purpose calibration artifacts. The developed self-calibrated in-process photogrammetry has been evaluated both in a pilot case scenario and in industrial scenarios for raw part measurement, showing a total in-process computing time typically below 1 s per image, up to a maximum of 2 s during the last stages of the computed industrial scenes, along with a relative precision of 1/10,000 (e.g., 0.1 mm error in 1 m) and an error RMS below 0.2 pixels at the image plane, matching the performance reported for portable photogrammetry with precise off-process pre-calibrated cameras.

1. Introduction

Large parts machining is performed on a near-to-shape raw part, obtained by processes like casting or welding. These raw parts very often do not have any reliable surface or feature reference that can be used for in-machine alignment. However, initial alignment of the part at the machine is a critical process, since an incorrect alignment will give rise to material shortage, which is associated with spoiling the part or with a costly recovery process. Due to the high cost associated with the rejection of a part, the initial alignment is usually done by way of long, time consuming manual processes.
Photogrammetry methods are increasingly used as a 3D technique for large-scale applications [1,2,3,4]. To overcome the aforementioned limitations of manual large raw part alignment, a photogrammetry-based process was developed in previous work [5]. The process consists of two steps (Figure 1): (a) out-of-machine measurement by photogrammetry of optical targets on the raw part surfaces, and its mathematical orientation (fitting) to the ideal designed part frame, where the optimal location of the optical targets is obtained so that they can be used as feature references in the machine, and (b) in-machine measurement of such references by a machine-integrated measurement system, either a special-purpose machine vision system or a spindle-integrated contact probe, where the in-machine location and orientation of the raw part is measured, assisting the machine operator through an efficient alignment and part fixturing process prior to part machining.
Two main limitations were observed for the out-of-machine portable photogrammetry presented in [5]. First, since a post-process ray-net bundle adjustment [6] was adopted, long computing times, in the range of several minutes, were needed to solve the 3D target coordinates after taking an image set. Due to the post-process computing approach, the lack of adequate images for a solvable and consistent multiple-view geometry, or the diagnosis of optical targets still pending measurement, could only be detected after all images were taken [7,8]. This resulted in a time-consuming and user-dependent iterative review and re-process measuring procedure until an adequate set of images was completed. As a result, the measuring process was clearly far from the potential that photogrammetry may bring to easy, low-cost and fast industrial metrology for large components [9]. Better in-process computational efficiency was required for solving the bundle adjustment, so that the images taken thus far, together with the solved 3D target coordinates, could be checked in quasi real time for a reliable measuring process, making the most of the potential of portable photogrammetry in large-scale applications.
Second, the system was thoroughly tested in a pilot case scenario, showing measuring errors in the range of 1 mm for 1 m long parts (1/1000 relative precision), mainly due to the lack of a precise calibration of the camera. Better accuracy was necessary for gaining performance when measuring raw parts larger than 1 m, where the errors would increase proportionally with size, as well as for applications where tighter raw material clearances may be expected. Indeed, camera calibration is a relevant uncertainty contributor in photogrammetry [2,10,11], and precise measurements require a precisely calibrated camera model (the so-called intrinsic parameters). Continuous and recent improvements regarding the mathematical analysis of the data taken for calibration [11,12,13] and image distortion correction [14,15] can be found in the literature.
In the present work, a new efficient procedure is presented for solving the bundle adjustment problem in portable photogrammetry. The presented development demonstrates efficient in-process bundle computing capability on a consumer grade desktop PC, enabling quasi real time 2D image and 3D scene diagnosis so that a reliable measuring procedure can be conducted by portable photogrammetry, avoiding user dependent and inefficient measuring procedures. Additionally, based on the efficient computational approach, an in-process self-calibration method is implemented using the solved 3D point cloud scene itself as calibration geometry, avoiding the need for specialized off-process pre-calibrated cameras or the use of special purpose calibration artifacts in the scene for precise measurements in an industrial metrology application, where traceability is set only by the scale bars available in the measuring scene.
The developed self-calibrated in-process photogrammetry has been evaluated both in the pilot case scenario of [5] (1.5 m long reference part) and at two large scale industrial scenarios (up to 15 m long raw parts), showing successful results with an in-process computing time ranging around 1 s per image and a relative precision of 1/10,000, which is at the precision level reported for portable photogrammetry with precise pre-calibrated cameras.
Section 2 below describes the raw part measuring process by photogrammetry, along with the adopted method for evaluating measurement accuracy both in the pilot case and industrial scenarios. Section 3 introduces the multiple view geometry in portable photogrammetry, along with the non-linear multivariable optimization problem to compute the bundle adjustment. Section 4 presents the implemented in-process image and target computing procedure, and Section 5 describes the self-calibration functionality development, where computing efficiency and precision performance are demonstrated for the pilot case under study, respectively. Section 6 shows the evaluation results for large scale industrial scenarios. Finally, main conclusions and further steps are described in Section 7.

2. Materials and Methods

Regarding precision in portable photogrammetry, the uncertainty determination of optical measurement methods is still an open issue [16]. Analytically calculating the uncertainty budget of photogrammetric systems according to the GUM [17], using error propagation theory, is a challenge, and consistent approaches have not been found in the literature. The main reason is the large number of undefined error sources affecting the process, especially when not working in laboratory conditions [1,2]. Variability simulation is an alternative to uncertainty budgeting for complex measurement tasks. However, it also requires deep knowledge of the measurement chain and the statistical distribution of each influence quantity [18], which is also a challenging issue in photogrammetry [19]. Therefore, despite the existence of well-accepted guidelines and standards, the definition of accuracy performance is still very heterogeneous [20]. Nevertheless, industrially accepted assessment guidelines such as VDI 2634 [21] offer an alternative for evaluating photogrammetric measurement performance, where the recommended procedure is to arrange a set of calibrated scale bars (control bars) around the object scene and along the principal coordinate axes. Lengths measured by photogrammetry are compared to their calibrated values and the corresponding length measuring errors (LME) are then evaluated.
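As a reading aid, a minimal sketch of this LME evaluation is given below, written in Python with NumPy; the function name and the example coordinates are illustrative only and are not taken from the evaluation data of this work.

```python
import numpy as np

def length_measuring_errors(bar_endpoints, calibrated_length):
    """Length measuring error (LME) for each control scale bar location.

    bar_endpoints: list of (Xa, Xb) pairs, each a 3-vector of target
    coordinates measured by photogrammetry for one scale bar location.
    calibrated_length: calibrated distance between the two bar targets (mm).
    """
    errors = []
    for xa, xb in bar_endpoints:
        measured = np.linalg.norm(np.asarray(xa) - np.asarray(xb))
        errors.append(measured - calibrated_length)
    return np.array(errors)

# Illustrative example: one bar of calibrated length 1340.099 mm at two locations
bars = [(np.array([0.0, 0.0, 0.0]),   np.array([1340.05, 0.0, 0.0])),
        (np.array([10.0, 20.0, 5.0]), np.array([10.0, 1360.14, 5.0]))]
lme = length_measuring_errors(bars, 1340.099)
print("LME per bar [mm]:", lme, " max |LME| =", np.abs(lme).max())
```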
Photogrammetry is the core technique used in the procedure presented in [5] for aligning a raw part in a machine tool prior to its machining (Figure 1). The raw part is measured by photogrammetry, using retroreflective coded and non-coded optical targets. Non-coded targets (Figure 2) are used to measure the raw part surfaces to be machined. Auxiliary coded targets are used to assist the measuring process by portable photogrammetry so that consistent image-taking is performed during measurement, enabling correspondence matching between the information given in consecutive images. Images are taken around the part by a digital camera and transferred in-process to a desktop PC via wireless communication. Images are computed and the optical target 3D coordinates are obtained. After measuring, the non-coded target coordinates are fitted to the desired nominal part geometry (defined in a CAD file in this work), using a specific-purpose fitting criterion [5] that enables an even and positive overstock distribution over the surfaces to be machined. As a result, the available overstock over all raw part surfaces can be controlled prior to part set-up in the machine. After the out-of-machine measurement and fitting finishes, a subset of the optical target 3D coordinates is selected and used as a reference for raw part in-machine alignment. An in-machine measuring system is then used for measuring the selected target coordinates in the machine frame, and the raw part fixturing is adjusted so that its alignment to the machine axes corresponds to the optimal alignment computed by fitting.
Figure 3 shows the verification scenario adopted in this work in order to evaluate the precision of the developed self-calibrated portable photogrammetry. A reference part is adopted as a pilot case (Figure 3a), resembling the measuring scenario in [5] where limited relative precisions of 1/1000 were reported for portable photogrammetry, with four prismatic steel subelements screwed to a mechano-welded structure and milled to a nominal geometry (Figure 3b). A length measuring error (LME) approach similar to VDI 2634 has been adopted for evaluating the measuring procedure uncertainty of the developed self-calibrated portable photogrammetry, verifying the performance of the corresponding measurands (i.e., optical target 3D coordinates). Six scale bars (L1 to L6) were set around the scene (1.5 m × 1 m × 0.5 m) in two groups of three (L1 to L3, and L4 to L6), along with a scale bar in the center of the scene (L0), up to a total of seven bars, in order to analyze the uncertainty in non-coded target measuring (Figure 3c). The two groups of three were located in opposite corners of the scene, and the scale bars in each group were aligned so that length components were set in all three spatial directions. To avoid the influence of the length calibration uncertainty between different scale bars in the LME analysis, only one and the same scale bar, a calibrated carbon fiber bar with two non-coded targets separated by a length Lscale = 1340.099 mm, was used and moved around the scene to each location. Additionally, as an alternative way of evaluating measuring performance, the prismatic subelements (Figure 3d) were measured by placing non-coded targets on each milled surface. The 3D coordinates of the computed non-coded targets were fitted to their nominal geometry, and the errors corresponding to each nominal reference surface were evaluated.
Eventually, the self-calibrated in-process photogrammetry was evaluated in two industrial scenarios with up to 15 m long raw parts, having as a reference a spindle-integrated contact probe measuring process (repeatability in the range of 1 μm) executed on relatively accurate milling machines (assumed uncertainty at a typical value of 0.01 mm). The raw part surfaces to be machined were measured and the non-coded target coordinates were fitted [5] to their nominal geometry. A subset of the fitted target coordinates (control points) was then used as a reference for raw part in-machine alignment, measured by a machine-spindle-integrated contact probe during the alignment process. Once the part was properly aligned, the machine coordinates of the optical targets not used as control points for alignment (i.e., check points) were measured by the contact probe. In order to evaluate the measuring performance of the developed self-calibrated photogrammetry, errors were evaluated between the values gauged by the contact probe and the corresponding 3D coordinates obtained after measurement and fitting.

3. Measurement by Portable Photogrammetry

Photogrammetry has been defined by the American Society for Photogrammetry and Remote Sensing (ASPRS) [23] as the art, science, and technology of obtaining reliable information about physical objects and the environment through processes of recording, measuring and interpreting photographic images and patterns of recorded radiant electromagnetic energy and other phenomena. In precise industrial metrology applications, it calculates the 3D coordinates of a discrete number of optical targets on an object [20,24] based on their 2D image coordinates. Several photographs of an object are obtained from different positions of the camera. By detecting the optical target 2D coordinates in the different images it is possible to calculate the positions and orientations of the camera for each of the photographs (the so-called extrinsic parameters), and also to calculate the 3D coordinates of the target points by multiple view triangulation. The measuring frame is given by a predefined set of coded optical targets with initially known 3D coordinates (Figure 2), and process traceability is set by appropriate scale bars (Figure 2) with calibrated lengths between corresponding pairs of optical targets.

3.1. Multiple-View Geometry

The geometry of a general photogrammetric problem is illustrated in Figure 4. A set of i = 1 … N targets with 3D coordinates $X_i$ is projected onto the image plane as $p_{ij}$ from j = 1 … M different points of view. Each camera position has associated extrinsic parameters, given by the rotation matrix $R_j$, defined by the rotations around the reference frame axes x, y, z (based on the Euler angles $\alpha_j$, $\beta_j$ and $\gamma_j$), and the translation vector $d_j$. The main goal of portable photogrammetry is the joint determination of the 3D coordinates of the N targets ($X_i$) along with the location and orientation (camera extrinsic parameters, $R_j$ and $d_j$) of the M image camera frames.
Each target's 3D coordinates $X_i = [x_i \; y_i \; z_i]^T$ can be expressed as $U_{ij} = [u_{ij} \; v_{ij} \; w_{ij}]^T$ in each camera frame, depending on its extrinsic parameters $R_j$ and $d_j$, as:
$$U_{ij} = R_j X_i + d_j \qquad (1)$$
Each target's coordinates $U_{ij}$ can be projected into the corresponding camera 2D image plane as $p_{ij}$ and $q_{ij}$ coordinates (Figure 5), following the widely assumed pin-hole conic projection model in machine vision [24]:
$$\begin{bmatrix} p_{ij} \\ q_{ij} \end{bmatrix} = f \begin{bmatrix} u_{ij}/w_{ij} \\ v_{ij}/w_{ij} \end{bmatrix} \qquad (2)$$
being f the focal distance of the camera lens.
The widely assumed Brown’s model [1,16,25] is adopted for modelling camera and lens distortions, which can be expressed as follows:
$$\begin{bmatrix} \tilde{p}_{ij} \\ \tilde{q}_{ij} \end{bmatrix} = \begin{bmatrix} h_{ij}\,(1 + k_1 r_{ij}^2 + k_2 r_{ij}^4) + \pi_1 (r_{ij}^2 + 2 h_{ij}^2) + 2 \pi_2\, h_{ij} v_{ij} \\ v_{ij}\,(1 + k_1 r_{ij}^2 + k_2 r_{ij}^4) + \pi_2 (r_{ij}^2 + 2 v_{ij}^2) + 2 \pi_1\, h_{ij} v_{ij} \end{bmatrix}, \qquad r_{ij}^2 = (h_{ij} - h_0)^2 + (v_{ij} - v_0)^2 \qquad (3)$$
where $(h_{ij}, v_{ij})$ are the 2D detected target coordinates in the image plane and $(\tilde{p}_{ij}, \tilde{q}_{ij})$ are the undistorted or corrected image coordinates, given $k_1$ and $k_2$ as the first- and second-order coefficients modelling the radial distortion, $(h_0, v_0)$ as the center of distortion, and $\pi_1$ and $\pi_2$ as the first-order tangential distortion coefficients.
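For illustration, a short sketch of Equations (1)–(3) follows. The NumPy function names are placeholders; the example target and detection coordinates are made up, while the nominal f and k1 values are those quoted later in Section 4.1.

```python
import numpy as np

def project_point(X, R, d, f):
    """Pin-hole projection of a 3D target X into a camera view (Eqs. (1)-(2))."""
    u, v, w = R @ X + d                   # target in the camera principal frame
    return f * np.array([u / w, v / w])   # projected image coordinates (p, q)

def correct_detection(h, v, intrinsics):
    """Brown distortion correction of a detected 2D target (Eq. (3))."""
    f, h0, v0, k1, k2, pi1, pi2 = intrinsics
    r2 = (h - h0) ** 2 + (v - v0) ** 2
    radial = 1.0 + k1 * r2 + k2 * r2 ** 2
    p_corr = h * radial + pi1 * (r2 + 2 * h ** 2) + 2 * pi2 * h * v
    q_corr = v * radial + pi2 * (r2 + 2 * v ** 2) + 2 * pi1 * h * v
    return np.array([p_corr, q_corr])

# Illustrative values only: nominal f = 24.0 mm, k1 = 5.0e-9 pixel^-2
intr = (24.0, 0.0, 0.0, 5.0e-9, 0.0, 0.0, 0.0)
p, q = project_point(np.array([100.0, 50.0, 0.0]), np.eye(3),
                     np.array([0.0, 0.0, 2000.0]), intr[0])
p_t, q_t = correct_detection(1520.0, 980.0, intr)
```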
The camera and lens distortion model, along with the focal distance, form the so called camera intrinsic parameters characterizing the camera condition for multiple view geometry. The camera intrinsic parameters can be obtained by previous off-line calibration processes [26,27].
Reprojection residual errors $r_{p_{ij}} = (p_{ij} - \tilde{p}_{ij})$ and $r_{q_{ij}} = (q_{ij} - \tilde{q}_{ij})$ can then be defined for every target observed in every image (Figure 5) as the difference between the projected target coordinates $p_{ij}$ and $q_{ij}$, which directly depend on the 3D coordinates of the targets ($X_i$) and the camera extrinsic parameters ($R_j$ and $d_j$) to be solved, and the corrected target coordinates $\tilde{p}_{ij}$ and $\tilde{q}_{ij}$. As a result, given the camera intrinsic parameters and the multiple view geometry, target 3D coordinates and camera extrinsic parameters can be jointly computed so that residual errors are minimized for every target in every image, leading to the so-called bundle adjustment [24] for solving the reprojection optimization problem in portable photogrammetry.

3.2. Optimization Problem

The problem is thus defined as a nonlinear multivariate overdetermined system, which can be solved as a least-squares optimization problem or bundle adjustment by numerical methods such as Levenberg-Marquardt or Gauss-Newton [6,20,24,25]. An initial approximation is defined for the set of variables to solve ($\alpha_j$, $\beta_j$, $\gamma_j$, and $d_j$ for the j = 1 … M camera views, and $X_i$ for the i = 1 … N optical targets) and a numerical iteration procedure is applied to compute them so that the residual errors ($r_{p_{ij}}$ and $r_{q_{ij}}$) are minimized.
Residual errors can be grouped into a single residual vector r according to the following structure:
$$r_j = \begin{bmatrix} r_{p_{1j}} \\ r_{q_{1j}} \\ \vdots \\ r_{p_{N_j j}} \\ r_{q_{N_j j}} \end{bmatrix} = \begin{bmatrix} p_{1j} - \tilde{p}_{1j} \\ q_{1j} - \tilde{q}_{1j} \\ \vdots \\ p_{N_j j} - \tilde{p}_{N_j j} \\ q_{N_j j} - \tilde{q}_{N_j j} \end{bmatrix}, \qquad r = \begin{bmatrix} r_1 \\ \vdots \\ r_M \end{bmatrix} \qquad (4)$$
where each $r_j$ vector corresponds to the reprojection error distribution of the j-th image, with $N_j$ being the number of targets detected in that image.
There are two main sets of parameters to be solved, extrinsic parameters and target coordinates, represented by $\theta_e$ and $\theta_x$, respectively, which can be grouped in a common vector $\theta$.
The extrinsic parameters vector θe is represented as follows,
$$\theta_e = [\,\alpha_1 \;\; \beta_1 \;\; \gamma_1 \;\; d_1^T \;\; \cdots \;\; \alpha_M \;\; \beta_M \;\; \gamma_M \;\; d_M^T\,]^T \qquad (5)$$
and the target coordinates vector $\theta_x$ is given by
$$\theta_x = [\,x_1 \;\; y_1 \;\; z_1 \;\; \cdots \;\; x_N \;\; y_N \;\; z_N\,]^T \qquad (6)$$
Based on these definitions, the complete vector θ is defined as
$$\theta = \begin{bmatrix} \theta_e \\ \theta_x \end{bmatrix} \qquad (7)$$
Following the above definitions, the process consists in iterating the vector $\theta$ ($\theta_k = \theta_{k-1} + \Delta\theta$) towards an optimal value $\hat{\theta}$ which minimizes the norm $\|r\|_2$ of the residual vector. According to the Gauss-Newton method [28], each iteration step $\Delta\theta$ can be computed from a linear system as
$$J^T J \, \Delta\theta = -J^T r \qquad (8)$$
where J is the Jacobian matrix containing the partial derivatives of each component of the residual vector with respect to the parameters to optimize in θ .
$$J(\theta) = \begin{bmatrix} \dfrac{\partial r_1}{\partial \theta_1} & \cdots & \dfrac{\partial r_1}{\partial \theta_{6M+3N}} \\ \vdots & \ddots & \vdots \\ \dfrac{\partial r_{2m}}{\partial \theta_1} & \cdots & \dfrac{\partial r_{2m}}{\partial \theta_{6M+3N}} \end{bmatrix} \qquad (9)$$
with $m = \sum_{j=1}^{M} N_j$ being the total number of targets detected in all images.
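As a compact illustration of the iteration just described, the following NumPy sketch implements a generic Gauss-Newton loop on the normal equations of Equation (8); the callable names are placeholders and the dense solve is for illustration only (Section 4 describes the decomposition actually used in this work).

```python
import numpy as np

def gauss_newton(residual, jacobian, theta0, tol=1e-6, max_iter=50):
    """Generic Gauss-Newton iteration for min ||r(theta)||^2 (Eq. (8)).

    residual(theta) -> r, shape (2m,); jacobian(theta) -> J, shape (2m, 6M+3N).
    """
    theta = np.asarray(theta0, dtype=float).copy()
    for _ in range(max_iter):
        r = residual(theta)
        J = jacobian(theta)
        delta = np.linalg.solve(J.T @ J, -J.T @ r)   # normal equations, Eq. (8)
        theta += delta
        if np.max(np.abs(delta)) < tol:              # convergence criterion
            break
    return theta
```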

4. In-Process Computing Procedure for Time Efficiency

The computing performance of the bundle adjustment method plays a relevant role in the development of a time-efficient in-process strategy for portable photogrammetry. The convergence time of the numerical method depends on the size of the Jacobian matrix $J$, due to matrix element allocation, assignment and computation times for managing relatively large and sparse $J^T J$ and $J^T$ matrices, and due to the conditioning and size of the $(J^T J)^{-1}$ inverse matrix calculation needed to determine $\Delta\theta$ in each iteration. As an example, given a measuring scenario with 100 images and 100 targets, assuming all targets are detected in every image, the $J^T J$ matrix size is 900 × 900, with $J$ being 40,000 × 900. Indeed, this problem grows with the number of images and targets included in the joint bundle during the measuring process. An alternative for reducing this computational work is the decomposition of the linear system solved for $\Delta\theta$ (Equation (8)) into a set of lower-order, individually solved linear subsystems. In this work, the reprojection error partial derivatives forming the Jacobian matrix $J$ (Equation (9)) are analytically expressed and a system decomposition is adopted so that $\Delta\theta_e$ and $\Delta\theta_x$ are individually solved for interdependent extrinsic and target coordinate iteration, avoiding direct computation of $\Delta\theta$ in order to increase computational efficiency.
On the other hand, the total number of iterations ($k$) and the corresponding convergence time depend on the initial approximation adopted for every variable in $\theta$ in the joint bundle minimization problem. Furthermore, the numerical method may diverge if a too-inaccurate initial approximation is adopted. So, along with the former system decomposition approach, adopting adequate and robust initial approximations for the new camera extrinsic parameters and target coordinates every time a new image is taken avoids unnecessary joint computation effort.
Figure 6 shows a schematic view of the in-process procedure developed. The following steps are conducted each time a new image is taken during the measuring process by portable photogrammetry:
  1. Image processing proceeds for optical target detection ($h_{ij}$, $v_{ij}$) and correction ($\tilde{p}_{ij}$, $\tilde{q}_{ij}$), along with decoding of coded targets [24,29].
  2. Computation of an initial approximation is performed for the camera extrinsic parameters of the new image ($\alpha_j$, $\beta_j$, $\gamma_j$, and $d_j$), according to the already solved coded optical target 3D coordinates detected in the image (Figure 6a).
  3. In case a minimum set of three coded optical targets with known coordinates is not available in the new image, the camera extrinsic parameters are not computed and the procedure stops, asking for a new image containing such a minimum set of targets before going back to step 1.
  4. Given the new camera extrinsic parameters, code assignment is performed for non-coded targets with still unsolved correspondences in all images captured so far, following the so-called Hungarian method [30,31].
  5. Given the new camera extrinsic parameters, computation of an initial approximation is performed for the 3D coordinates $X_i$ of targets that are not yet solved but already have a code assigned, provided that each one is jointly observed by a minimum set of two camera views with known extrinsic parameters (Figure 6b).
  6. An intermediate joint bundle adjustment (Figure 6c) is conducted for the camera extrinsic parameters and target coordinates solved from all images so far. Given the initial approximations from steps 2 and 5, only one bundle iteration is performed, so that a sufficiently accurate and consistent epipolar net construction is obtained every time a new image is included in the minimization problem, ensuring reliable correspondence solving for non-coded targets in step 4 and avoiding unnecessary computational work towards joint bundle convergence at this step. The measuring process can now continue with the acquisition of new images, computed in-process from steps 1 to 6 every time a new image is taken.
  7. Finally, once the measuring process finishes, the post-process joint bundle of camera extrinsic parameters and target coordinates is computed until convergence, and measuring process traceability is set by the calibrated scale bar distances available in the scene, where the measuring frame target coordinates are also included in the bundle adjustment.
Regarding process reliability, step 3 enables in-process control of the information provided by each image, so that the measuring process by portable photogrammetry can be guided, ensuring a set of images that will form a consistent and geometrically determined ray-net and avoiding inefficient post-process evaluation and iterative processes [7,8].
The main steps of the presented in-process strategy are described below. Results are shown for the pilot case under study (Figure 3). Section 4.1 and Section 4.2 show the initial computation approach for the camera extrinsic parameters (step 2, Figure 6a) and target coordinates (step 5, Figure 6b) as independent computing problems, respectively. Section 4.3 and Section 4.4 describe the joint bundle adjustment problem both before (step 6, Figure 6c) and after including scale bars for traceability (step 7), respectively. Special attention is paid to the particular analytic expression of Equation (9) corresponding to each step of the in-process procedure. Although linear closed-form expressions can be given in Section 4.1 and Section 4.2 for the independent camera extrinsic parameter and target coordinate computations [24], nonlinear approaches are shown as intermediate steps towards presenting the submatrix decomposition of Equation (8) for the nonlinear joint bundle computation described in Section 4.3 and the following. Optical target image coordinate detection ($h_{ij}$, $v_{ij}$) and decoding of coded targets is performed following [29].

4.1. Camera Extrinsic Parameters Initial Approach Computation

Computing the extrinsic parameters of a j-th camera view (Figure 6a) means determining the vector $\theta_{e_j}$, where the camera principal frame location ($d_x$, $d_y$, $d_z$) and orientation (the Euler angles $\alpha$, $\beta$ and $\gamma$, defining a corresponding rotation matrix $R_j = R_\gamma R_\beta R_\alpha$) are defined with respect to a measuring frame (Figure 4) determined by a pre-set of optical coded targets. The detection of a minimum set of 3 such targets is required in order to make the image extrinsic computation geometrically determined.
$$\theta_{e_j} = [\,\alpha \;\; \beta \;\; \gamma \;\; d_x \;\; d_y \;\; d_z\,]^T \qquad (10)$$
As inputs for the method, Xi coordinates of the set of coded reference targets are known, as well as camera intrinsic parameters (focal distance, and distortion model parameters for Equation (3)). As a result, partial derivatives (Equation (9)) in a Jacobian matrix E j can be defined as follows (Equation (11)) for solving camera extrinsic parameters for the j-th image as a nonlinear reprojection minimization problem.
$$(E_j)_{2N_j \times 6} = \begin{bmatrix} J_{e_{1j}} \\ \vdots \\ J_{e_{N_j j}} \end{bmatrix} \qquad (11)$$
where each submatrix $J_{e_{ij}}$ contains the partial derivatives of the projection errors $r_{p_{ij}} = (p_{ij} - \tilde{p}_{ij})$ and $r_{q_{ij}} = (q_{ij} - \tilde{q}_{ij})$ of the i-th target with respect to the j-th image camera extrinsic parameters, up to a total of $N_j$ optical targets detected in that j-th image. Each $J_{e_{ij}}$ submatrix can be expressed as
$$(J_{e_{ij}})_{2 \times 6} = (E_{ij})_{2 \times 6} = D_P\, D_{UE} \qquad (12)$$
where $D_P$ contains the partial derivatives of the i-th projected target coordinates $p_{ij}$ and $q_{ij}$ in the j-th image with respect to the $U_{ij}$ target coordinates (Equation (1)) in the corresponding j-th camera principal frame, given as follows according to Equation (2)
$$D_P = f \begin{bmatrix} \dfrac{1}{w_{ij}} & 0 & -\dfrac{u_{ij}}{w_{ij}^2} \\ 0 & \dfrac{1}{w_{ij}} & -\dfrac{v_{ij}}{w_{ij}^2} \end{bmatrix} \qquad (13)$$
and where $D_{UE}$ expresses the partial derivatives of the $U_{ij}$ target coordinates with respect to the j-th camera extrinsic parameters in $\theta_{e_j}$ as
$$D_{UE} = [\, D_A X_i \;\;\; D_B X_i \;\;\; D_C X_i \;\;\; I_{3\times3} \,], \qquad D_A = R_\gamma R_\beta D_\alpha, \quad D_B = R_\gamma D_\beta R_\alpha, \quad D_C = D_\gamma R_\beta R_\alpha \qquad (14)$$
given the partial derivatives $D_A$, $D_B$ and $D_C$ of the camera principal frame rotation matrix $R$ with respect to the Euler angles $\alpha$, $\beta$ and $\gamma$, respectively, where $D_\alpha$, $D_\beta$ and $D_\gamma$ denote the derivatives of the elementary rotation matrices $R_\alpha$, $R_\beta$ and $R_\gamma$ with respect to their corresponding angles.
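The following sketch builds the rotation derivatives and the blocks of Equation (14) in NumPy; the x-y-z axis convention and sign choices of the elementary rotations are assumptions for illustration, since only the composition $R_j = R_\gamma R_\beta R_\alpha$ is fixed by the text.

```python
import numpy as np

def rot_x(a):   # R_alpha: rotation about x
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def drot_x(a):  # D_alpha = d R_alpha / d alpha
    c, s = np.cos(a), np.sin(a)
    return np.array([[0, 0, 0], [0, -s, -c], [0, c, -s]])

def rot_y(b):
    c, s = np.cos(b), np.sin(b)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def drot_y(b):
    c, s = np.cos(b), np.sin(b)
    return np.array([[-s, 0, c], [0, 0, 0], [-c, 0, -s]])

def rot_z(g):
    c, s = np.cos(g), np.sin(g)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def drot_z(g):
    c, s = np.cos(g), np.sin(g)
    return np.array([[-s, -c, 0], [c, -s, 0], [0, 0, 0]])

def extrinsic_jacobian_blocks(alpha, beta, gamma, X):
    """D_A, D_B, D_C of Eq. (14) and the 3x6 block D_UE = dU/d(theta_e_j)."""
    Ra, Rb, Rg = rot_x(alpha), rot_y(beta), rot_z(gamma)
    DA = Rg @ Rb @ drot_x(alpha)        # dR/d(alpha)
    DB = Rg @ drot_y(beta) @ Ra         # dR/d(beta)
    DC = drot_z(gamma) @ Rb @ Ra        # dR/d(gamma)
    D_UE = np.hstack([(DA @ X)[:, None], (DB @ X)[:, None],
                      (DC @ X)[:, None], np.eye(3)])
    return DA, DB, DC, D_UE
```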
At the beginning of the measuring process by portable photogrammetry, the first images are taken of the set of coded targets determining the measuring frame. Provided that a minimum number of three targets are detected in an image, the corresponding individual iteration of $\theta_{e_j}$ and independent extrinsic computation can proceed for each image. Figure 7 shows an example of the computation of the extrinsic initial approximation for the first three images of a measuring process conducted on the pilot case evaluation scenario (Figure 3). Nominal values are adopted as an initial approximation for the intrinsic parameters, with focal distance $f$ being 24.0 mm and first-order radial distortion coefficient $k_1$ being 5.0 × 10⁻⁹ pixel⁻², where higher-order radial distortion coefficients are neglected, along with the decentering and tangential distortion coefficients in Equation (3). At this stage, the initial approximation adopted for the intrinsic parameters is not intended to be accurate, but only a nominal value close enough to the expected precise one, to be computed by the in-process self-calibration (see Section 5) so that the final bundle converges in a stable and efficient way.
Table 1 presents the known 3D coordinates ($X_i$, i = 1…6) of the coded targets detected in each image, along with the detected image coordinates for each target in each image ($h_{ij}$, $v_{ij}$, j = 1…3).
Table 2 shows the computed extrinsic parameters for each camera view, according to the Gauss-Newton method following Equation (8) for the $\theta_{e_j}$ iteration. The extrinsic computation performance can be observed in the resulting RMS value of the reprojection error vector after convergence, of around 0.5 pixels. Regarding computing efficiency, a mean time lower than 0.1 ms is observed for each iteration.

4.2. Target 3D Coordinate Initial Approach Computation

Once the camera extrinsic parameters are available (Table 2), the 3D coordinates of new targets other than those defining the measuring frame can be computed. Analogous to the extrinsic computation in Section 4.1, the computation of the 3D coordinates of an i-th new target (Figure 6b) means determining the vector $\theta_{x_i}$, where the $X_i$ coordinates are defined in the same measuring frame in which the camera extrinsic parameters are known. Detection of the target in a minimum set of 2 images is required in order to make the target coordinate computation by triangulation geometrically determined.
$$\theta_{x_i} = X_i = [\,x \;\; y \;\; z\,]^T \qquad (15)$$
As inputs for the method, the camera intrinsic and extrinsic parameters ($\alpha_j$, $\beta_j$, $\gamma_j$, and $d_j$) of a minimum set of images are known. As a result, partial derivatives (Equation (9)) in a Jacobian matrix $D_{X_i}$ can be defined as follows (Equation (16)) for solving the 3D coordinates of the i-th target.
$$(D_{X_i})_{2M_i \times 3} = \begin{bmatrix} J_{x_{i1}} \\ \vdots \\ J_{x_{iM_i}} \end{bmatrix} \qquad (16)$$
where each submatrix $J_{x_{ij}}$ contains the partial derivatives of the projection errors $r_{p_{ij}}$ and $r_{q_{ij}}$ of the i-th target with respect to its $X_i$ coordinates, up to a total of $M_i$ images in which the i-th target is detected. Each $J_{x_{ij}}$ submatrix can be expressed as
$$(J_{x_{ij}})_{2 \times 3} = (X_{ij})_{2 \times 3} = D_P\, D_{UX} \qquad (17)$$
where $D_P$ was defined in Equation (13), and $D_{UX}$ expresses the partial derivatives of the $U_{ij}$ target coordinates in the j-th camera frame with respect to its $X_i$ coordinates in the common measuring frame as
$$D_{UX} = R_j \qquad (18)$$
with $R_j$ being the rotation matrix corresponding to the j-th camera frame.
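A minimal sketch of the target coordinate iteration of Equations (15)–(18) is given below, assuming corrected detections and known extrinsic parameters; the function and variable names are illustrative and not part of the presented implementation.

```python
import numpy as np

def triangulate_target(obs, cams, f, X0, n_iter=10):
    """Gauss-Newton triangulation of one target (Section 4.2).

    obs:  list of corrected 2D detections (p~, q~), one per observing image.
    cams: list of (R_j, d_j) extrinsic pairs for those images.
    f:    focal distance; X0: initial 3D approximation of the target.
    """
    X = np.asarray(X0, dtype=float).copy()
    for _ in range(n_iter):
        J, r = [], []
        for (pt, qt), (R, d) in zip(obs, cams):
            u, v, w = R @ X + d                          # Eq. (1)
            r.extend([f * u / w - pt, f * v / w - qt])   # reprojection residuals
            DP = f * np.array([[1 / w, 0, -u / w**2],
                               [0, 1 / w, -v / w**2]])   # Eq. (13)
            J.append(DP @ R)                             # Eqs. (17)-(18)
        J, r = np.vstack(J), np.array(r)
        X += np.linalg.solve(J.T @ J, -J.T @ r)          # Gauss-Newton step
    return X
```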
Following the example in Section 4.1, given the computed extrinsic parameters of the set of three camera views (Table 2), the iteration of $\theta_{x_i}$ and the computation of the $X_i$ coordinates proceed for coded targets detected in at least two camera views. Figure 8 shows an example of the computation of a coded target detected in all three images (Figure 8a–c). Again, nominal values are adopted for the camera intrinsic parameters. Figure 8d depicts the 3D location computed for that new target in the same measuring frame in which the camera views are given.
Along with computing the new target Xi coordinates, new camera extrinsic parameters also enable code assignation to non-coded targets with pending correspondence to solve between detected coordinates in different images. The set of 3D epipolar lines given by the non-coded targets detected in all images can be expressed as 2D epipolar lines in each image. The distances from each 2D epipolar line to each non-coded target detected in the image can be calculated, so that the subset of 2D epipolar lines closest to a specific non-coded target in that image are likely to correspond to the epipolars of that 3D target detected in the other images. All possible 2D epipolar-to-target distances can be computed according to all possible correspondences between non-coded targets detected in three consecutive images, and the Hungarian method [24,29] can be adopted for solving the correspondences as the optimal combination which minimizes epipolar-to-target distance distribution among all 2D epipolars and non-coded target coordinates. As a result, codes can be assigned to non-coded targets in each image, and their 3D coordinate can be correspondingly computed.
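As an illustration of the correspondence solving just described, the sketch below applies the Hungarian method to an epipolar-to-target distance matrix using SciPy's linear_sum_assignment; the cost values are made up, and the gating of implausible assignments is omitted for brevity.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_noncoded_targets(epipolar_dist):
    """Solve non-coded target correspondences with the Hungarian method.

    epipolar_dist[i, j]: distance in the image plane from the 2D epipolar line
    of candidate 3D target i to detected non-coded target j in the new image.
    Returns (target_idx, detection_idx) pairs minimizing the total distance.
    """
    rows, cols = linear_sum_assignment(epipolar_dist)
    return list(zip(rows, cols))

# Illustrative 3x3 cost matrix (pixels); values are made up
cost = np.array([[0.4, 8.1, 12.3],
                 [7.9, 0.6, 9.4],
                 [11.0, 10.2, 0.3]])
print(assign_noncoded_targets(cost))   # -> [(0, 0), (1, 1), (2, 2)]
```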
Figure 9a shows all the detected targets in the image presented in Figure 8c, including both coded targets and non-coded targets with solved correspondences between the three images. Figure 9b shows the resulting scene including the 3D coordinates of the computed new targets, along with the 3D epipolar net used to solve them. The computed target coordinates are listed in Table 3, along with the corresponding code identification and assignment (id) for coded and non-coded targets, respectively. The reprojection error minimization performance is also shown for each individually computed target, with an RMS value ranging from 0.085 pixels to 1.617 pixels. Again, the mean computing time ranges below 0.1 ms per iteration.

4.3. Joint Bundle Adjustment

Once the new target coordinates are solved, other than those defining the measuring frame, the bundle adjustment for the joint computation ($\theta$ in Equation (7)) of the i = 1…N target 3D coordinates ($\theta_{x_i}$) and the j = 1…M camera extrinsic parameters ($\theta_{e_j}$) proceeds, given their interdependency through the epipolar net multiple view geometry (Figure 6c). Now, partial derivatives (Equation (9)) in a Jacobian matrix $J$ can be defined (Equation (19)) for minimizing a joint reprojection error vector of 2m elements, with m as defined at the end of Section 3.
$$J(\theta) = [\, E \;\;\; X \,] \qquad (19)$$
with $(E)_{2m \times 6M}$ and $(X)_{2m \times 3N}$ containing the partial derivatives of each reprojection error with respect to all camera extrinsic parameters ($\alpha_j$, $\beta_j$, $\gamma_j$, and $d_j$) and target coordinates ($X_i$), respectively. The reprojection error vector elements can be arranged so that $E$ is a non-square block-diagonal matrix with the $(E_j)_{2N_j \times 6}$ submatrices in its diagonal, containing the partial derivatives with respect to each image's extrinsic parameters $\theta_{e_j}$ (Equation (11)) for the set of $N_j$ reprojection errors of all targets detected in each j-th image; correspondingly, $X$ is a sparse matrix with $(X_{ij})_{2 \times 3}$ submatrices containing the partial derivatives with respect to each target's coordinates $\theta_{x_i}$ (Equation (17)) for the set of $M_i$ reprojection errors of each i-th target detected in its subset of images, per $X$ column, and zeros in the rest of the elements. Given this arrangement, $J^T J$ in Equation (8) can be expressed as follows
$$[\, E \;\; X \,]^T [\, E \;\; X \,] = \begin{bmatrix} A & B \\ B^T & C \end{bmatrix} \qquad (20)$$
where $(A)_{6M \times 6M}$ is a square block-diagonal matrix whose diagonal is composed of the $(E_j^T E_j)_{6 \times 6}$ square submatrices accounting for each j-th camera extrinsic contribution to the minimization problem in the least-squares sense, with $E_j$ as defined in Equation (11), and $(C)_{3N \times 3N}$ is also a square block-diagonal matrix whose diagonal is composed of the $(\sum_{j=1}^{M_i} X_{ij}^T X_{ij})_{3 \times 3}$ square submatrices accounting for the contribution of the 3D coordinates of each corresponding i-th target detected in its subset of $M_i$ images, with $X_{ij}$ as defined in Equation (17).
$(B)_{6M \times 3N}$ is a non-square sparse matrix in which the interdependency between extrinsic parameter and target coordinate computation through the joint epipolar net is taken into account, with elements $(E_{ij}^T X_{ij})_{6 \times 3}$ given by the joint product of the partial derivatives with respect to image extrinsic parameters and target coordinates for an i-th target detected in a j-th image, and zeros when an i-th target is not seen in a j-th image, with $E_{ij}$ as defined in Equation (12).
Thus, according to Equation (20), Equation (8) can be decomposed as
$$\begin{bmatrix} A & B \\ B^T & C \end{bmatrix} \begin{bmatrix} \Delta\theta_e \\ \Delta\theta_x \end{bmatrix} = -J^T r \qquad (21)$$
where the residual term $-J^T r$ can be redefined according to Equation (19) as
$$\begin{bmatrix} \epsilon_e \\ \epsilon_x \end{bmatrix} = -J^T r = -\begin{bmatrix} E^T \\ X^T \end{bmatrix} r \qquad (22)$$
Given Equations (21) and (22), the linear system in Equation (8) can now be decomposed into a set of two lower-order subsystems where $\Delta\theta_e$ and $\Delta\theta_x$ are individually solved for interdependent extrinsic and target coordinate iteration, where the extrinsic iteration step $\Delta\theta_e$ is obtained as
$$\Delta\theta_e = (A - B C^{-1} B^T)^{-1} (\epsilon_e - B C^{-1} \epsilon_x) \qquad (23)$$
and the target coordinate iteration step $\Delta\theta_x$ is correspondingly given as
$$\Delta\theta_x = C^{-1} (\epsilon_x - B^T \Delta\theta_e) \qquad (24)$$
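A dense NumPy sketch of Equations (23) and (24) follows; in the actual implementation $C$ is block diagonal (3 × 3 blocks) and $B$ is sparse, so the inverses would be computed block-wise rather than with general dense routines.

```python
import numpy as np

def solve_decomposed(A, B, C, eps_e, eps_x):
    """Interdependent extrinsic/target iteration steps via Eqs. (23)-(24)."""
    C_inv = np.linalg.inv(C)                       # block diagonal in practice
    S = A - B @ C_inv @ B.T                        # Schur complement of C
    d_theta_e = np.linalg.solve(S, eps_e - B @ C_inv @ eps_x)
    d_theta_x = C_inv @ (eps_x - B.T @ d_theta_e)
    return d_theta_e, d_theta_x
```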
Following the examples in Section 4.1 and Section 4.2, given the computed initial approximations for the first three image extrinsic parameters (Table 2) and for the corresponding target coordinates (Table 3) in the evaluation scene shown in Figure 9b, their joint computation can be conducted according to Equations (23) and (24). As inputs for the method, the 3D coordinates of the set of coded targets on the measuring frame are known (Table 1), as well as nominal camera intrinsic parameters, along with the image coordinates of coded and non-coded targets with solved correspondences between images (Table 1 and Table 3). Table 4 and Table 5 show the results of the intermediate joint bundle computation for extrinsic parameters and targets (step 6 of the in-process procedure), respectively. The RMS of the joint reprojection error is optimized to 0.802 pixels, with an iteration time of 0.17 ms.
The measuring process may now continue with the acquisition and in-process computing of new images, following the in-process procedure described in the introduction of Section 4 (steps 1 to 6). An example of a complete measuring set is shown in Figure 10a for the pilot case under study (Figure 3), where non-coded targets were placed on the milled surfaces and coded targets were placed around the scene, enabling a consistent epipolar net construction during measurement, along with the reference frame with its corresponding coded targets. A total number of 68 images were taken, solving the 3D coordinates of 80 non-coded (20 at each milled prism) and 340 coded targets. A maximum joint bundle computation time of around 150 ms was observed for the last images of the measuring process (see Section 4.4).
So far, the measuring scale and traceability depend on the coded target 3D coordinates defining the reference frame (Table 1), along with the adopted nominal camera intrinsic parameters. An alternative for increasing accuracy in portable photogrammetry is the adoption of appropriate scale bars with calibrated lengths between corresponding pairs of optical targets (highlighted in orange in Figure 10a), so that measuring process traceability is given by precise scale bar distances. Then, once image-taking finishes (steps 1 to 6), the final post-process joint bundle of camera extrinsic parameters and target coordinates can proceed until convergence (step 7), imposing the calibrated relative distances between the corresponding pairs of target coordinates of each scale bar available in the scene. A free network adjustment is then carried out. For this, the 3D coordinates of the coded targets on the reference frame (Table 1) also have to be computed, so that their coordinates assumed so far (in steps 1 to 6) do not influence measuring process traceability, and the measuring frame is correspondingly redefined according to their computed coordinates, following the 3-2-1 rule widely adopted in metrology (3 constraints to define a reference plane, 2 to define a reference axis, and 1 to define the origin point), setting back a determined origin and orientation for the measuring coordinate system.
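For clarity, the sketch below shows the conventional 3-2-1 frame construction from three reference targets, applied here as a rigid re-expression of already computed coordinates; in the presented method the same rule is instead imposed as the six constraints of Equation (27) inside the bundle. Names and inputs are illustrative.

```python
import numpy as np

def frame_321(X0, X1, X2, points):
    """Re-express points in a 3-2-1 frame: X0 is the origin, X0->X1 defines
    the X axis, and X2 fixes the XY plane (Z = 0 for X0, X1 and X2)."""
    ex = (X1 - X0) / np.linalg.norm(X1 - X0)   # X axis
    ez = np.cross(ex, X2 - X0)                 # normal to the XY plane
    ez /= np.linalg.norm(ez)
    ey = np.cross(ez, ex)                      # completes a right-handed frame
    R = np.vstack([ex, ey, ez])                # rows are the new axes
    return (points - X0) @ R.T                 # coordinates in the new frame
```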
To do so, a corresponding error vector $r_t$ can be expressed (Equation (25)) and its joint minimization can be accomplished along with the reprojection error vector $r$ in Equation (4).
$$(r_t)_{(B+6) \times 1} = \begin{bmatrix} r_b \\ r_x \end{bmatrix} \qquad (25)$$
where $r_b$ expresses (Equation (26)) the squared distance between each pair of target coordinates minus the corresponding squared scale length, $|x_k^b - x_k^e|^2 - b_k^2$, for the k = 1…B bars with calibrated lengths $b_k$ in the scene, with $x_k^b$ and $x_k^e$ being the 3D computed coordinates of each pair of targets of the k-th scale bar, which can be expressed as
$$(r_b)_{B \times 1} = \begin{bmatrix} |x_1^b - x_1^e|^2 - b_1^2 \\ \vdots \\ |x_k^b - x_k^e|^2 - b_k^2 \\ \vdots \\ |x_B^b - x_B^e|^2 - b_B^2 \end{bmatrix} \qquad (26)$$
and $r_x$ contains (Equation (27)) the 3D coordinate errors of a set of reference targets selected for determining the measuring coordinate system according to the 3-2-1 rule, which can be expressed as
$$(r_x)_{6 \times 1} = [\,x_0 \;\; y_0 \;\; z_0 \;\; y_1 \;\; z_1 \;\; z_2\,]^T \qquad (27)$$
where a first reference target $X_0$ is set and constrained to be the coordinate system origin (X1 in Table 1), so that its coordinates $X_0 = [x_0 \; y_0 \; z_0]^T$ have to be minimized to zero as the first elements in $r_x$, setting 3 restrictions. A second reference target $X_1$ (X3 in Table 1) is constrained to have its Y and Z coordinates equal to zero, so that the $y_1$ and $z_1$ coordinates of $X_1 = [x_1 \; y_1 \; z_1]^T$ are included as the next elements in $r_x$, setting 2 restrictions determining the X coordinate axis. A last reference target $X_2$ (X4 in Table 1) is constrained to have its Z coordinate equal to zero, so that the $z_2$ coordinate of $X_2 = [x_2 \; y_2 \; z_2]^T$ is included as the last element in $r_x$, setting 1 restriction determining the XY plane of the measuring frame. As a result, 6 constraints are included and a coordinate system is determined, the rest of the initially assumed reference target coordinates (Table 1) being unconstrained and correspondingly computed, so that measuring traceability is now set by the scales imposed in Equation (26).
A Jacobian matrix $G$ corresponding to the $r_t$ minimization problem (as in Equation (8)) can be defined as follows
$$(G)_{(B+6) \times 3N} = \begin{bmatrix} G_b \\ G_x \end{bmatrix} \qquad (28)$$
where $(G_b)_{B \times 3N}$ contains (Equation (29)) the partial derivatives of the k = 1…B squared distance errors with respect to each pair of corresponding target coordinates $x_k^b$ and $x_k^e$, given as
$$(G_b)_{B \times 3N} = \begin{bmatrix} G_1 \\ \vdots \\ G_k \\ \vdots \\ G_B \end{bmatrix} \qquad (29)$$
with each $G_k$ given by
$$(G_k)_{1 \times 3N} = [\, \cdots \;\; D_{x_k^b} \;\; \cdots \;\; D_{x_k^e} \;\; \cdots \,] \qquad (30)$$
with the derivatives $D_{x_k^b}$ and $D_{x_k^e}$ following Equations (31) and (32) and zeros in the rest of the elements of $G_k$, where N is the new total number of computed targets,
$$D_{x_k^b} = 2\,[\,(x_k^b - x_k^e) \;\; (y_k^b - y_k^e) \;\; (z_k^b - z_k^e)\,] \qquad (31)$$
$$D_{x_k^e} = -D_{x_k^b} \qquad (32)$$
and $(G_x)_{6 \times 3N}$ expresses the partial derivatives of $r_x$ with respect to the constrained reference target coordinates, equal to 1 at the columns corresponding to each constrained coordinate, and zeros in the rest of the elements.
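The following sketch assembles the scale-bar residuals and Jacobian rows of Equations (26) and (29)–(32); the array layout (three consecutive columns per target) and the function name are assumptions for illustration.

```python
import numpy as np

def scale_bar_terms(coords, bars):
    """Residuals r_b (Eq. (26)) and Jacobian rows G_k (Eqs. (30)-(32)).

    coords: (N, 3) array of current target coordinate estimates.
    bars:   list of (idx_b, idx_e, calibrated_length) tuples per scale bar.
    """
    N = coords.shape[0]
    r_b = np.zeros(len(bars))
    G_b = np.zeros((len(bars), 3 * N))
    for k, (ib, ie, bk) in enumerate(bars):
        diff = coords[ib] - coords[ie]
        r_b[k] = diff @ diff - bk ** 2            # |x_b - x_e|^2 - b_k^2
        G_b[k, 3 * ib:3 * ib + 3] = 2 * diff      # D_{x_k^b}
        G_b[k, 3 * ie:3 * ie + 3] = -2 * diff     # D_{x_k^e} = -D_{x_k^b}
    return r_b, G_b
```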
Thus, a joint error vector $r_s$ can be defined as follows (Equation (33)), integrating the reprojection errors in $r$ along with the error vector $r_t$
$$r_s = \begin{bmatrix} r \\ r_t \end{bmatrix} \qquad (33)$$
and joint bundle can be conducted (as in Equation (8)) with the corresponding redefined joint Jacobian for numerical iteration as
$$(J)_{(2m+B+6) \times (6M+3N)} = \begin{bmatrix} E & X \\ 0 & G \end{bmatrix} \qquad (34)$$
Following the same system decomposition approach as in Equations (21)–(24), it can be demonstrated that the main iteration equations given by Equation (21) can be redefined to a similar form as
$$\begin{bmatrix} A & B \\ B^T & C_t \end{bmatrix} \begin{bmatrix} \Delta\theta_e \\ \Delta\theta_x \end{bmatrix} = \begin{bmatrix} \epsilon_e \\ \epsilon_{x_t} \end{bmatrix} \qquad (35)$$
where the terms $C$ and $\epsilon_x$ are redefined as $C_t$ and $\epsilon_{x_t}$ according to $G$ and $r_t$ as
$$C_t = C + G^T G, \qquad \epsilon_{x_t} = \epsilon_x - G^T r_t \qquad (36)$$
so that the expressions given in Equations (23) and (24) for $\Delta\theta_e$ and $\Delta\theta_x$, respectively, can be used for computing to convergence the post-process joint bundle (step 7) of the scene extrinsic parameters and target coordinates, integrating the scale and 3-2-1 rule restrictions. As input for the post-process joint bundle, the results after processing the last image in step 6 are known. Figure 10b shows the corresponding histogram of the minimized reprojection error vector $r$ after the joint bundle with $r_t$, resulting in an RMS value of 2.378 pixels.

4.4. Computing Performance of the In-Process Approach

The procedure described above was developed in C++, using the OpenCV library for image processing and the Eigen library for matrix management and processing, on a desktop PC with an Intel Core i7-5600U at 2.6 GHz and 16 GB RAM, running 64-bit Windows 7. Wireless image transmission was used, with an observed transmission time of 0.5–1 s per image for 12.2 Mpixel raw images. The image processing time (step 1) was observed to reach 0.5 s per image, depending on the number of segmented and decoded targets in the image. The computing time for the initial extrinsic and target approximations (steps 2 and 5) for new images and targets prior to their first bundle could be neglected in comparison, ranging below 1 ms. Code assignment to non-coded targets (step 4) was observed to range up to 0.1–0.3 s per image, depending on the number of non-coded targets and correspondences to solve between images.
Figure 11 shows the dependence of the computation time of the intermediate bundle (step 6) on the number of cumulated images and solved targets so far during the measuring process, with a time ranging up to 0.15 s per image for the last images of the measuring scenario shown in Figure 10. As a result, total in-process computation time (steps 1 to 6) ranged at a maximum of 1 s per image, in the same order of magnitude of the wireless image transmission time itself.
However, an average increase of 3 ms per additional image can be estimated from Figure 11 for step 6, which would lead to a relevant in-process computation time contribution if larger measuring scenarios with more images and targets were adopted. If better in-process computing performance were required for step 6, further optimization could be conducted following the system decomposition approach presented in this paper, taking advantage of the symmetry and characteristics of the submatrices involved in Equation (21) (block-diagonal A and C submatrices, dominating presence of zeros in the sparse B submatrix, etc.), along with the development of analytic expressions for the inverse matrices involved in Equations (23) and (24).
Finally, the post-process joint computation time (step 7) reached 3 s in the same scenario, with 5 iterations to convergence, assuming as convergence criterion that all $\Delta\theta_e$ and $\Delta\theta_x$ iteration values fall below 10⁻⁶ mm for locations and target coordinates and 10⁻⁶ radians for orientations.

5. Camera Model Self-Calibration for Precision

Along with the scale bars in the scene, the camera intrinsic parameters determine measuring process traceability. Nominal camera intrinsic parameters have been adopted for the results shown so far. As introduced in Section 2, a length measuring error (LME) approach similar to VDI 2634 can be adopted for evaluating measuring procedure uncertainty in portable photogrammetry. Figure 12 shows the same measuring scenario as in Figure 10a, but six additional scale bars are included covering the scene, so that just one bar is used for imposing the scale of the measurement and LME errors can be evaluated on the rest of the measured scales.
A maximum LME error of 1.2 mm was observed in the photogrammetric measurement for calibrated scale lengths of around 1340 mm, corresponding to a limited relative precision of approximately 1/1000, similar to that reported in [5] and one order of magnitude below the expected performance of portable photogrammetry using precise pre-calibrated cameras, where a relative precision better than 1/10,000 is typically reported [20] for 1 m long scenes.
Rather than assuming nominal values for the intrinsic parameters, precise camera calibration is required for precise measuring. Accurate camera calibration is necessary in photogrammetry systems, and that is why continuous and recent improvements regarding the mathematical analysis of the data taken for self-calibration [11,12,13] and image distortion correction [14,15] can be found in the literature. Nowadays there are several software applications for close-range photogrammetry in industry using optical targets (i.e., Aicon 3D, Geodetic V-Stars, Creaform MaxShot3D, GOM Tritop, etc.) that can automatically perform camera self-calibration. These photogrammetry-specific commercial systems present a main limitation regarding their self-calibration, since they only provide this option as a specific procedure that must be performed outside the measuring process itself, because traditional self-calibration methods need a precise calibration pattern and extensive post-processing work. This does not enable the option of in-process self-calibration. Such self-calibration capabilities integrated into the measuring process would be essential to compensate for the main uncertainty contributors, especially by determining the intrinsic parameters of the camera model on the fly. In addition, such a self-calibrated machine vision would enable the use of low-cost cameras, and not only specific photogrammetry cameras, for metrology applications.
Some approaches for the calibration of commercial digital cameras in different applications can be found in the literature. In [32] an accuracy comparison is carried out between three low-cost consumer grade digital cameras and a specific photogrammetry proven camera. In [33] a study of different commercial software options for the calibration and self-calibration of cameras mounted on unmanned aerial vehicles (UAVs) is presented. A similar approach for the self-calibration of commercial small action cameras applied to photogrammetric purpose is proposed in [34] by using a 2D external calibrated reference and a specifically developed software. However, the options presented do not allow in-process self-calibration since they need extensive post processing.
To overcome the time limitations of off-process calibrations, some approaches propose diagnosis methods for camera internal parameters in order to prevent the need to stop the measuring process so often to check them: in [35] a diagnostic method for internal parameters based on multivariate control charts is proposed in order to provide a comprehensive stability control over all the performed calibrations for systems used for regular monitoring of production lines. However, this cannot be considered a self-calibration, but a stability control of the intrinsic parameters.
Special attention should be paid to texture-based software for 3D reconstruction and modelling (i.e., Agisoft PhotoScan, Photomodeler Scanner, Bundler, Pix4Dmapper, VisualSFM, iWitness, MicMac, 3DF Zephir, etc.), with a broad range of applications (e.g., mapping, architecture, mining, construction, agriculture, heritage recording, archeology, forensic 3D modelling, etc.). Rather than using optical retroreflective targets, measuring results are obtained on the object surface textures available in the images. Texture-based techniques (SIFT [36] and SURF [37] image descriptors, SfM [38,39,40], etc.) are used, enabling dense 3D surface meshing and reconstruction. In-process camera model self-calibration techniques are integrated, demonstrating the capability of 3D surface reconstruction and modelling using consumer-grade low-cost cameras [41,42,43]. However, strong limitations are found when taking these texture-based approaches to precise industrial metrology. The textures available in the images depend on object characteristics and lighting conditions, limiting measurement precision and reliability compared to the use of physical optical targets. Additionally, computationally demanding texture-based approaches are required, again with extensive post-processing times, limiting their application to efficient measuring processes in industry by photogrammetry.
The in-process self-calibration method proposed in this work overcomes these limitations by allowing its computationally efficient application to precise industrial metrology using optical targets. The method for the self-calibration of camera and lens distortion has been integrated due to its potential for the highest precision and accuracy level [20,44] when using low-cost, non-specialized digital cameras, where the solved 3D point cloud scene itself is used as calibration geometry. Measuring traceability is set only by the scale bars in the measuring scene, avoiding uncertainty contributors from off-process camera calibration processes. The camera can be continuously controlled and compensated on the fly, bringing the potential of obtaining measurement results that are independent of changes in the camera condition over time, such as with thermo-mechanically unstable low-cost cameras. As a step beyond off-process camera calibration and in-process stability control, the approach adopted in this work consists of taking advantage of the redundant information available in the portable photogrammetry scene, so that, along with extrinsic and target coordinates, the camera intrinsic parameters are also included in a final bundle adjustment computation, having as the only inputs the measuring images themselves and the scale bar distances for precise traceability.

5.1. Including Camera Model into Bundle Adjustment

Computation of the camera intrinsic parameters means determining the vector $\theta_i$ (Equation (37)) along with the camera extrinsic parameters $\theta_e$ (Equation (5)) and the target coordinates $\theta_x$ (Equation (6)) in a joint bundle minimizing the error vector $r_s$ (Equation (33)), which includes the scale bar restrictions.
$$(\theta_i)_{7 \times 1} = [\, f \;\; h_0 \;\; v_0 \;\; k_1 \;\; k_2 \;\; \pi_1 \;\; \pi_2 \,]^T \qquad (37)$$
where the focal distance $f$, the coordinates of the distortion center ($h_0$, $v_0$), the radial distortion coefficients ($k_1$ and $k_2$) and the tangential distortion coefficients ($\pi_1$ and $\pi_2$) are included.
Hence, the vector $\theta$ to compute in the joint bundle is redefined as
$$\theta = \begin{bmatrix} \theta_e \\ \theta_x \\ \theta_i \end{bmatrix} \qquad (38)$$
and following Equation (34), a corresponding complete joint Jacobian matrix can be expressed as
$$(J)_{(2m+B+6) \times (6M+3N+7)} = \begin{bmatrix} E & X & F \\ 0 & G & 0 \end{bmatrix} \qquad (39)$$
where $F$ expresses the partial derivatives of the reprojection error vector $r$ with respect to the camera intrinsic parameters in $\theta_i$, so that
$$(F)_{2m \times 7} = \begin{bmatrix} F_{11} \\ \vdots \\ F_{ij} \\ \vdots \end{bmatrix} \qquad (40)$$
with $F_{ij}$ being the partial derivatives of the reprojection error $r_{ij}$ of the i-th target in the j-th camera view, according to the projection model in Equation (2) and the distortion model in Equation (3), given as
$$F_{ij} = \begin{bmatrix} \dfrac{\partial r_{ij}}{\partial f} & \dfrac{\partial r_{ij}}{\partial h_0} & \dfrac{\partial r_{ij}}{\partial v_0} & \dfrac{\partial r_{ij}}{\partial k_1} & \dfrac{\partial r_{ij}}{\partial k_2} & \dfrac{\partial r_{ij}}{\partial \pi_1} & \dfrac{\partial r_{ij}}{\partial \pi_2} \end{bmatrix} \qquad (41)$$
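The presented method uses analytic derivatives for Equation (41). Purely as an illustration, the sketch below approximates $F_{ij}$ by finite differences, assuming a hypothetical callable that returns the two reprojection error components for a given intrinsic parameter vector.

```python
import numpy as np

def intrinsic_jacobian_fd(reproj_error, theta_i, eps=1e-6):
    """Finite-difference approximation of F_ij (Eq. (41)).

    reproj_error(theta_i) -> 2-vector (r_p, r_q) of the i-th target in the
    j-th view for intrinsics theta_i = [f, h0, v0, k1, k2, pi1, pi2].
    """
    theta_i = np.asarray(theta_i, dtype=float)
    r0 = reproj_error(theta_i)
    F_ij = np.zeros((r0.size, theta_i.size))
    for c in range(theta_i.size):
        step = np.zeros_like(theta_i)
        step[c] = eps * max(1.0, abs(theta_i[c]))   # step scaled to the parameter
        F_ij[:, c] = (reproj_error(theta_i + step) - r0) / step[c]
    return F_ij
```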
Given the partial derivatives in $F$ and following again the same system decomposition approach as in Equation (35) for the joint bundle including scale bars, the main iteration equations including camera self-calibration can be redefined as
$$\begin{bmatrix} A & B & E_f \\ B^T & C_t & D_f \\ E_f^T & D_f^T & F_f \end{bmatrix} \begin{bmatrix} \Delta\theta_e \\ \Delta\theta_x \\ \Delta\theta_f \end{bmatrix} = \begin{bmatrix} \epsilon_e \\ \epsilon_{x_t} \\ \epsilon_f \end{bmatrix} \qquad (42)$$
where
$$F_f = F^T F \qquad (43)$$
$$E_f = E^T F \qquad (44)$$
$$D_f = X^T F \qquad (45)$$
with $F$, $E$ and $X$ defined as in Equations (40) and (19), respectively, where $(F_f)_{7 \times 7}$ is a square symmetric matrix accounting for the direct contribution of each camera intrinsic parameter to the minimization problem, and $(E_f)_{6M \times 7}$ and $(D_f)_{3N \times 7}$ are non-square matrices accounting for the corresponding joint products of partial derivatives, integrating the interdependency of the extrinsic and target coordinate computation with the camera intrinsic parameters through the joint epipolar net, respectively, and $\epsilon_f$ is expressed as
$$\epsilon_f = -F^T r \qquad (46)$$
analogous to $\epsilon_e$ and $\epsilon_x$ in Equation (22).
From the linear system in Equation (42), the intrinsic parameter iteration step can be obtained as
$$\Delta\theta_f = F_f^{-1} (\epsilon_f - E_f^T \Delta\theta_e - D_f^T \Delta\theta_x) \qquad (47)$$
and then Equation (42) can be reduced to a system similar to Equation (35)
$$\begin{bmatrix} A_f & B_{f1} \\ B_{f2} & C_f \end{bmatrix} \begin{bmatrix} \Delta\theta_e \\ \Delta\theta_x \end{bmatrix} = \begin{bmatrix} \epsilon_{e_f} \\ \epsilon_{x_f} \end{bmatrix} \qquad (48)$$
where $\Delta\theta_e$ and $\Delta\theta_x$ can be obtained following equations analogous to Equations (23) and (24), substituting $A$, $B$, $B^T$, $C$, $\epsilon_e$ and $\epsilon_x$ by $A_f$, $B_{f1}$, $B_{f2}$, $C_f$, $\epsilon_{e_f}$ and $\epsilon_{x_f}$, given as
$$\begin{aligned} A_f &= A - E_f F_f^{-1} E_f^T \\ B_{f1} &= B - E_f F_f^{-1} D_f^T \\ B_{f2} &= B^T - D_f F_f^{-1} E_f^T \\ C_f &= C_t - D_f F_f^{-1} D_f^T \\ \epsilon_{ef} &= \epsilon_e - E_f F_f^{-1} \epsilon_f \\ \epsilon_{xf} &= \epsilon_{xt} - D_f F_f^{-1} \epsilon_f \end{aligned}$$
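In other words, the 7 × 7 intrinsic block Ff is eliminated by a Schur complement, exactly as the target block was eliminated in the earlier decomposition. The reported implementation relies on the Eigen C++ library (see Figure 11); the following NumPy sketch, with dense matrices and without the block sparsity exploited in the paper, only illustrates the reduction and the back-substitution for θf.

```python
import numpy as np

def eliminate_intrinsic_block(A, B, C_t, E_f, D_f, F_f, eps_e, eps_xt, eps_f):
    """Reduce the 3x3 block system to the 2x2 system in (theta_e, theta_x) by
    eliminating the intrinsic block F_f (Schur complement), then recover theta_f
    by back-substitution. Dense-matrix sketch; block sparsity is not exploited."""
    F_inv = np.linalg.inv(F_f)                         # 7x7, cheap to invert
    A_f  = A   - E_f @ F_inv @ E_f.T
    B_f1 = B   - E_f @ F_inv @ D_f.T
    B_f2 = B.T - D_f @ F_inv @ E_f.T
    C_f  = C_t - D_f @ F_inv @ D_f.T
    eps_ef = eps_e  - E_f @ F_inv @ eps_f
    eps_xf = eps_xt - D_f @ F_inv @ eps_f
    # solve the reduced system (the paper decomposes it further, as in Equations (23) and (24))
    K = np.block([[A_f, B_f1], [B_f2, C_f]])
    sol = np.linalg.solve(K, np.concatenate([eps_ef, eps_xf]))
    theta_e, theta_x = sol[:A.shape[0]], sol[A.shape[0]:]
    theta_f = F_inv @ (eps_f - E_f.T @ theta_e - D_f.T @ theta_x)   # back-substitution
    return theta_e, theta_x, theta_f
```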

5.2. Computing Efficiency and Precision Performance Evaluation for Self-Calibrated Photogrammetry

Bundle computation including self-calibration can be conducted following Equations (47) and (48), jointly iterating θe, θx, and θi to convergence, taking as the initial estimate the previous joint bundle solution computed with constant camera intrinsic parameters. Following the examples in Figure 10 and Figure 12, the post-process joint bundle including self-calibration was conducted with the same scale bar (L3) imposed for traceability.
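A minimal sketch of that outer iteration is given below, assuming a hypothetical build_system callback that assembles the block matrices and right-hand sides of the self-calibrated system at the current estimate, and reusing eliminate_intrinsic_block from the previous sketch; damping (Levenberg–Marquardt) and the exact sign convention of the update depend on how the residuals are defined and are not reproduced here.

```python
import numpy as np

def self_calibrating_bundle(theta_e, theta_x, theta_i, build_system, max_iter=20, tol=1e-8):
    """Gauss-Newton-style outer loop for the joint bundle with self-calibration.
    build_system is a hypothetical callback returning (A, B, C_t, E_f, D_f, F_f,
    eps_e, eps_xt, eps_f) evaluated at the current parameter estimate."""
    for _ in range(max_iter):
        blocks = build_system(theta_e, theta_x, theta_i)
        d_e, d_x, d_i = eliminate_intrinsic_block(*blocks)
        theta_e, theta_x, theta_i = theta_e + d_e, theta_x + d_x, theta_i + d_i
        # stop when the parameter update becomes negligible
        if np.linalg.norm(np.concatenate([d_e, d_x, d_i])) < tol:
            break
    return theta_e, theta_x, theta_i
```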
Figure 13 shows the histogram of the optimized reprojection error distribution, where an RMS value one order of magnitude lower is obtained compared to that obtained assuming nominal camera intrinsics (histogram in Figure 10b). Table 6 shows the camera intrinsic parameters computed for this example, refined with respect to the nominal values assumed in Section 4. The post-process joint computation including self-calibration took up to 8 s, with 11 iterations to convergence. As a result, the total post-processing after measuring was limited to 8 + 3 = 11 s, enabling on-the-fly camera self-calibration and measurement self-compensation, assuming a constant camera condition during measurement.
Again, the length measuring error (LME) approach was adopted for evaluating the measuring uncertainty of the self-calibrated portable photogrammetry. A set of 10 consecutive photogrammetric measurements was taken on the scene. Each measurement was solved imposing the L3 scale bar (Figure 12), and the remaining bars were used as control bars for LME evaluation. The maximum LME obtained between targets on the control scale bars was 121.4 μm, on bar L0, with a standard deviation of 45.1 μm over the whole set of measurements. However, except for L0, the lengths measured in the XY plane showed relatively lower LME values (below 40 μm). Nevertheless, the maximum observed LME was assumed as a conservative estimate to characterize the optical target measurement process uncertainty in all spatial directions, corresponding to a relative precision of about 1/10,000.
According to the maximum observed LME value of 121.4 μm, the spatial uncertainty of the target coordinates can be estimated at 70.1 μm (σ), given that σ = LME/√3, assuming a rectangular distribution. As an alternative way of evaluating measuring performance, the 3D coordinates of the computed non-coded targets in each measurement, placed on the relatively precise milled prismatic elements of the pilot case (see Figure 3), can be fitted and compared with the nominal geometry [5], and the corresponding errors with respect to each nominal surface can be evaluated for all targets in all 10 measurements. Figure 14 shows an example of fitting a measurement result to the nominal reference geometry for error evaluation. A maximum standard deviation (σ) of 14 μm was observed for the measuring errors of the non-coded targets with respect to all surfaces in the XY plane in all measurements, but up to 60 μm for the targets lying on lateral surfaces, close to the σ of 70.1 μm estimated by the LME evaluation. This result again points out the conservative character of the spatial uncertainty (σ) value previously estimated by the LME approach, due to the observed anisotropy in the non-coded target measurement accuracy. In any case, a relative precision of 1/10,000 by LME analysis is confirmed in the worst-case scenario for all spatial directions, demonstrating the adequate accuracy of the developed in-process self-calibrated photogrammetry in the pilot case scenario, compared with the previously reported limited performance of 1/1000 relative precision [5], and now matching the precision level expected when using precise off-process camera calibration.

6. Evaluation at Industrial Scenarios

Finally, the self-calibrated in-process photogrammetry system has been evaluated in two industrial scenarios with two different parts (Figure 15a,c), demonstrating a fast, reliable, and precise raw part measuring process. The system was applied to the measurement and alignment of raw parts up to 15 m long prior to their machining, at least one order of magnitude larger than the pilot case (Figure 3), with scene sizes now reaching up to 200 m³.
The measurement process took an overall time of 1 h per part in both industrial scenarios, determining the 3D coordinates of the set of non-coded targets placed on the surfaces to be machined. Approximately 800 targets and 200 images were computed in each scenario (Figure 16a,c). The in-process computing procedure reached a total maximum computing time of almost 2 s per image for the last images taken during the measuring processes, higher than the 1 s per image observed in the pilot case scenario, but still enabling practical quasi real time diagnosis and control of the images taken in an industrial scenario. Every time an incorrect image was acquired (e.g., an image not contributing to a consistent epipolar net, or a target pending to be solved in a particular zone of the scene), the measurement could be properly guided to locally take new adequate images, if necessary, before continuing with the process.
The higher intermediate bundle computing time was the main contributor to the increase of the in-process computation time in the last stages of the measuring process, as expected from the dependence of the bundle computational work on the number of images and targets to solve (as shown in Figure 11). Along with the alternatives pointed out in Section 4.4 for increasing computing performance, a strategy could also be adopted for limiting the bundle adjustment to a minimum subset of the scene measured so far, so that continuous computation of the joint bundle of the complete scene could be avoided or only periodically executed (one possible windowing strategy is sketched below).
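The following sketch illustrates one hypothetical windowing strategy, not the one implemented in this work: only the most recent images and the targets they observe are optimized, while older images observing those targets are kept fixed as anchors.

```python
def local_bundle_subset(observations, image_order, n_recent=5):
    """Select a local subset of the scene for the intermediate in-process bundle.
    observations: list of (image_id, target_id) detections accumulated so far.
    image_order: image ids in acquisition order. Returns the images and targets to
    optimize, plus older images kept fixed as anchors. Illustrative sketch only."""
    active_images = set(image_order[-n_recent:])
    active_targets = {t for (img, t) in observations if img in active_images}
    anchor_images = {img for (img, t) in observations
                     if t in active_targets and img not in active_images}
    return active_images, active_targets, anchor_images
```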
Accordingly, the post-process computing time including self-calibration of the camera intrinsic parameters reached a maximum of 15 s, slightly higher than that observed in the pilot case scenario as a result of the higher number of images and targets, but still enabling efficient operation in an industrial scenario.
Regarding precision, according to the LME evaluation results reported in the literature [20], typical scale-dependent LME errors can be estimated as 50 μm + 20 μm/m for portable photogrammetry with precise pre-calibrated cameras. For the considered industrial cases, with parts up to 15 m long, an LME performance of 0.35 mm could be expected as a reference for precise operation, corresponding to a spatial uncertainty of 0.20 mm (σ) for the measured targets, given that σ = LME/√3, assuming again a rectangular distribution.
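As a simple check, the estimate follows directly from the scale-dependent model and the rectangular-distribution assumption:

$$\mathrm{LME} \approx 50~\mu\mathrm{m} + 20~\tfrac{\mu\mathrm{m}}{\mathrm{m}} \times 15~\mathrm{m} = 350~\mu\mathrm{m} = 0.35~\mathrm{mm}, \qquad \sigma = \frac{\mathrm{LME}}{\sqrt{3}} \approx \frac{0.35~\mathrm{mm}}{1.732} \approx 0.20~\mathrm{mm}$$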
After raw part measurement by photogrammetry, the complete process (Figure 1) was conducted towards in-machine raw part alignment in both industrial scenarios, so that gauging of the optical targets by a touch probe integrated in the machine (with machine axes typically ranging at 0.01 mm accuracy) was adopted as a reference for evaluating measuring precision. Optimal target coordinates were computed by fitting (Figure 17), applying positive and even overstock as the best-fitting criterion, as in [5]. A subset of targets was used as control points for alignment with the machine-integrated contact probe. The in-machine fixturing of the part was adjusted to properly align it to the machine axes, so that the probed relative coordinates between reference targets matched those optimally computed by fitting to within 0.1 mm; the alignment process uncertainty could therefore be considered negligible compared with the measuring uncertainty of 0.2 mm (σ) estimated for the portable photogrammetry.
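For illustration, a simplified unconstrained rigid-body fit of the measured targets to their nominal positions can be written with the Kabsch (SVD) method; the best-fitting criterion actually used here (positive and even overstock on all machined surfaces, as in [5]) adds constraints that this sketch does not reproduce.

```python
import numpy as np

def rigid_best_fit(measured, nominal):
    """Least-squares rigid-body fit (rotation R, translation t) mapping measured
    target coordinates onto their nominal CAD positions via SVD (Kabsch method).
    measured and nominal are Nx3 arrays of corresponding points. Unconstrained
    sketch; overstock constraints are not included."""
    mc, nc = measured.mean(axis=0), nominal.mean(axis=0)
    H = (measured - mc).T @ (nominal - nc)             # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # guard against reflection
    R = Vt.T @ D @ U.T
    t = nc - R @ mc
    return R, t   # nominal-frame coordinates: R @ p + t for each measured point p
```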
Once the part was precisely aligned, the performance of the measuring process was evaluated by contact probe gauging of a minimum set of 10 check points (i.e., non-coded targets not used as a reference for alignment), distributed along all three machine coordinate axis directions and on extreme and opposite surfaces of the part. A maximum probing error interval of ±0.6 mm (3σ) could be expected according to the target spatial uncertainty estimated above by LME (σ = 0.20 mm). Probing errors were evaluated between the overstock values resulting from the fitting process and the actual ones obtained by in-machine gauging against the nominal surface coordinates. In both industrial scenarios (Figure 15a,c), all the probing errors ranged below ±0.5 mm, demonstrating the adequate accuracy of the developed self-calibrated in-process photogrammetry for large raw part measuring and overstock control.

7. Conclusions

A new efficient procedure has been presented in this work for solving the bundle adjustment problem in portable photogrammetry. In-process bundle computing capability is demonstrated on a consumer grade desktop PC, enabling quasi real time 2D image and 3D scene computing and diagnosis, so that a reliable measuring procedure can be conducted. This avoids the inefficient, user-dependent, post-process iterative procedures that limit the potential of portable photogrammetry as an easy, low-cost, and fast solution for the industrial metrology of large components.
A method for the self-calibration of camera and lens distortion has been integrated into the in-process approach because of its potential for the highest precision and accuracy levels when using a non-specialized consumer grade digital camera (i.e., a Nikon D300S), where the solved 3D point cloud scene itself is used as calibration geometry and measurement traceability is set only by the scale bars in the measuring scene, avoiding the need for off-process pre-calibrated cameras or special purpose calibration artifacts in the scene for precise measurement in an industrial metrology application.
The developed self-calibrated in-process photogrammetry has been evaluated in a pilot case scenario (a 1.5 m long reference part) and in two large scale industrial scenarios (parts up to 15 m long) for raw part measurement and alignment before machining, showing an in-process computing time typically below 1 s per image, up to a maximum of 2 s at the last stages of the computed industrial scenes, and a relative precision of 1/10,000 with an error RMS below 0.2 pixels at the image plane, matching the precision performance reported for portable photogrammetry with precise off-process pre-calibrated cameras. Efficient in-process self-calibration of the camera model is also demonstrated, with post-processing times of around 11 s.
Alternatives for increasing in-process computational efficiency have been pointed out, especially for large scale industrial scenarios where a high number of images and optical targets can be expected. Regarding precision, anisotropy has been observed in the spatial uncertainty distribution of the optical target coordinates, pointing to a better potential for the developed system than the figures reported for the worst-case scenario. Further steps towards the prediction of ray-net-conditioning-induced uncertainty contributions, such as analytical approaches based on error propagation theory, may make it possible to exploit that potential, which remains a relevant and challenging issue in the state of the art. Additionally, further work could address precise measurement with low-cost but thermo-mechanically unstable consumer grade digital cameras, where the influence of an image dependent focal length due to integrated autofocus optics might have to be considered for precise self-calibration.
Finally, in the current context, the need for “smart” systems integrating software procedures and hardware for data acquisition, self-diagnostics, set-up, control of tolerances, etc., is becoming more and more pressing. In addition, rapid advances in computing, image measurement, and characterization algorithms have allowed photogrammetric systems to be integrated with CAD/CAM systems offering high accuracy capabilities, with increasingly “real-time” (1/25 s and faster) image sequence acquisition. The photogrammetric system presented here is a step towards meeting these requirements, accomplishing accurate measurements with inexpensive, easy-to-use measuring equipment and with optimized procedures that make the whole measuring process much less time consuming.

Acknowledgments

The authors wish to thank Soraluce S. Coop machine tool builder, Goimek S. Coop and Liebherr machining shops for their technical support during the development and industrial evaluation of this research. Moreover, the authors wish to thank the anonymous reviewers for their valuable advice, which has led to an improvement of the article.

Author Contributions

Alberto Mendikute, Ibai Leizea, and José Antonio Yagüe wrote the paper. Alberto Mendikute and Mikel Zatarain conceived and designed the in-process computing approach integrating camera model self-calibration. Ibai Leizea, Alberto Mendikute and Álvaro Bertelsen accomplished the software development and implementation. José Antonio Yagüe and Alberto Mendikute set the uncertainty evaluation methodology. Ibai Leizea performed the experiments, both at the pilot case and industrial scenarios. Alberto Mendikute, Ibai Leizea and José Antonio Yagüe analyzed the data.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Estler, W.; Edmundson, K.; Peggs, G.; Parker, D. Large-Scale Metrology—An Update. CIRP Ann. Manuf. Technol. 2002, 51, 587–609. [Google Scholar] [CrossRef]
  2. Schmitt, R.H.; Peterek, M.; Morse, E.; Knapp, W.; Galetto, M.; Härtig, F.; Goch, G.; Hughes, B.; Forbes, A.; Estler, W.T. Advances in Large-Scale Metrology—Review and future trends. CIRP Ann. Manuf. Technol. 2016, 65, 643–665. [Google Scholar] [CrossRef]
  3. Cuypers, W.; Van Gestel, N.; Voet, A.; Kruth, J.-P.; Mingneau, J.; Bleys, P. Optical measurement techniques for mobile and large-scale dimensional metrology. Opt. Lasers Eng. 2009, 79, 292–300. [Google Scholar] [CrossRef] [Green Version]
  4. Uriarte, L.; Zatarain, M.; Axinte, D.; Yagüe-Fabra, J.; Ihlenfeldt, S.; Eguia, J.; Olarra, A. Machine tools for large parts. CIRP Ann. Manuf. Technol. 2013, 62, 731–750. [Google Scholar] [CrossRef]
  5. Zatarain, M.; Mendikute, A.; Inziarte, I. Raw part characterisation and automated alignment by means of a photogrammetric approach. CIRP Ann. Manuf. Technol. 2012, 61, 383–386. [Google Scholar] [CrossRef]
  6. Triggs, B.; McLauchlan, P.F.; Hartley, R.I.; Fitzgibbon, A.W. Bundle Adjustment—A Modern Synthesis, Vision Algorithms: Theory and Practice; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2000; Volume 1883, pp. 298–372. [Google Scholar]
  7. Bösemann, W. Advances in photogrammetric measurement solutions. Comput. Ind. 2005, 56, 886–893. [Google Scholar] [CrossRef]
  8. Franceschini, F.; Galetto, M.; Maisano, D.; Mastrogiacomo, L. Large-Scale Dimensional Metrology (LSDM): From Tapes and Theodolites to Multi-Sensor Systems. Int. J. Precis. Eng. Manuf. 2014, 15, 1739–1758. [Google Scholar] [CrossRef]
  9. Peggs, G.N.; Maropoulos, P.G.; Hughes, E.B.; Forbes, A.B.; Robson, S.; Ziebart, M.; Muralikrishnan, B. Recent Developments in Large-Scale Dimensional Metrology. Proc. Inst. Mech. Eng. Part B J. Eng. Manuf. 2009, 223, 571–595. [Google Scholar] [CrossRef]
  10. Remondino, F.; El-Hakim, S. Image-Based 3D Modelling: A Review. Photogramm. Rec. 2006, 21, 269–291. [Google Scholar] [CrossRef]
  11. Long, C.; Zhu, J.; Yi, W. Portable visual metrology without traditional self-calibration measurement model. Measurement 2016, 90, 424–437. [Google Scholar] [CrossRef]
  12. Babapour, H.; Mokhtarzade, M.; Zoej, M.J.V. Self-calibration of digital aerial camera using combined orthogonal models. ISPRS J. Photogramm. Remote Sens. 2016, 117, 29–39. [Google Scholar] [CrossRef]
  13. Luhmann, T.; Fraser, C.; Maas, H.G. Sensor modelling and camera calibration for close-range photogrammetry. ISPRS J. Photogramm. Remote Sens. 2016, 115, 37–46. [Google Scholar] [CrossRef]
  14. Huang, S.; Zhang, Z.; Ke, T.; Tang, M.; Xu, X. Scanning Photogrammetry for Measuring Large Targets in Close Range. Remote Sens. 2015, 7, 10042–10077. [Google Scholar] [CrossRef]
  15. Herráez, J.; Denia, J.L.; Navarro, P.; Rodríguez, J.; Martín, M.T. Determining image distortion and PBS (point of best symmetry) in digital images using straight line matrices. Measurement 2016, 91, 641–650. [Google Scholar] [CrossRef]
  16. Wang, Q.; Zissler, N.; Holden, R. Evaluate error sources and uncertainty in large scale measurement systems. Robot. Comput. Integr. Manuf. 2013, 29, 1–11. [Google Scholar] [CrossRef] [Green Version]
  17. ISO; GUM; JCGM 100. Evaluation of Measurement Data–Guide to the Expression of Uncertainty in Measurement, Bureau International des Poids et Mesures, 2008. Available online: http://www.bipm.org/utils/common/documents/jcgm/JCGM_100_2008_E.pdf (accessed on 8 September 2017).
  18. Wilhelm, R.G.; Hocken, R.; Schwenke, H. Task Specific Uncertainty in Coordinate Measurement. CIRP Ann. Manuf. Technol. 2001, 50, 553–563. [Google Scholar] [CrossRef]
  19. Galantucci, L.M.; Pesce, M.; Lavecchia, F. A powerful scanning methodology for 3D measurements of small parts with complex surfaces and sub-millimeter-sized features, based on close range photogrammetry. Precis. Eng. 2016, 43, 211–219. [Google Scholar] [CrossRef]
  20. Luhmann, T. Close range photogrammetry for industrial applications. ISPRS J. Photogramm. Remote Sens. 2010, 65, 558–569. [Google Scholar] [CrossRef]
  21. VDI. VDI/VDE 2634: Optical 3D Measuring Systems—Part 1; VDI/VDE Guide Line; Beuth: Berlin, Germany, 2000. [Google Scholar]
  22. Geodesie Maintenance Services. Optical 3D Measurement Tools Catalog. Available online: http://www.geodesie-maintenance.com/us/produits-mesure-3d.php (accessed on 24 August 2017).
  23. McGlone, J.C. Manual of Photogrammetry, 6th ed.; American Society for Photogrammetry and Remote Sensing: Falls Church, VA, USA, 2013. [Google Scholar]
  24. Hartley, R.; Zisserman, A. Multiple View Geometry in Computer Vision; Cambridge University Press: Cambridge, UK, 2003. [Google Scholar]
  25. Brown, D.C. Decentering distortion of lenses. Photogramm. Eng. Remote Sens. 1996, 32, 444–462. [Google Scholar]
  26. Zhang, Z. A Flexible New Technique for Camera Calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334. [Google Scholar] [CrossRef]
  27. Faugeras, O.D.; Luong, Q.-T.; Maybank, S.J. Camera self-calibration: Theory and experiments. In Computer Vision—ECCV’92; Springer: Santa Margherita Ligure, Italy, 1992; pp. 321–334. [Google Scholar]
  28. Madsen, K.; Nielsen, H.B.; Tingleff, O. Methods for Non-Linear Least Squares Problems, 2nd ed.; Informatics and Mathematical Modelling, Technical University of Denmark: Copenhagen, Denmark, 2004. [Google Scholar]
  29. Fernandez-Fernandez, M.; Alonso-Montes, C.; Bertelsen, A.; Mendikute, A. Industrial Non-intrusive Coded-Target Identification and Decoding Application. In Pattern Recognition and Image Analysis; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2013; Volume 7887, pp. 790–797. [Google Scholar]
  30. Kuhn, H.W. The Hungarian method for the assignment problem. Nav. Res. Logist. Q. 1955, 2, 83–97. [Google Scholar] [CrossRef]
  31. Jain, A.K.; Zhou, Y.; Mustufa, T.; Burdette, E.C.; Chirikjian, G.S.; Fichtinger, G. Matching and reconstruction of brachytherapy seeds using the Hungarian algorithm (MARSHAL). Med. Phys. 2005, 32, 3475–3492. [Google Scholar] [CrossRef] [PubMed]
  32. Chandler, J.H.; Fryer, J.G.; Jack, A. Metric capabilities of low-cost digital cameras for close range surface measurement. Photogramm. Rec. 2005, 20, 12–27. [Google Scholar] [CrossRef]
  33. Rosnell, T.; Honkavaara, E. Point Cloud Generation from Aerial Image Data Acquired by a Quadrocopter Type Micro Unmanned Aerial Vehicle and a Digital Still Camera. Sensors 2012, 12, 453–480. [Google Scholar] [CrossRef] [PubMed]
  34. Balletti, C.; Guerra, F.; Tsioukas, V.; Vernier, P. Calibration of Action Cameras for Photogrammetric Purposes. Sensors 2014, 14, 17471–17490. [Google Scholar] [CrossRef] [PubMed]
  35. Franceschini, F.; Galetto, M.; Genta, G. Multivariate control charts for monitoring internal camera parameters in digital photogrammetry for LSDM (Large-Scale Dimensional Metrology) applications. Precis. Eng. 2015, 42, 133–142. [Google Scholar] [CrossRef]
  36. Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
  37. Bay, H.; Ess, A.; Tuytelaars, T.; Van Gool, L. Speeded-up robust features (SURF). Comput. Vis. Image Underst. 2008, 110, 346–359. [Google Scholar] [CrossRef]
  38. Longuet-Higgins, H.C. A computer algorithm for reconstructing a scene from two projections. Nature 1981, 293, 133–135. [Google Scholar] [CrossRef]
  39. Ullman, S. The interpretation of structure from motion. Proc. R. Soc. Lond. 1979, 203, 405–426. [Google Scholar] [CrossRef] [PubMed]
  40. Özyeşil, O.; Voroninski, V.; Basri, R.; Singer, A. A survey of structure from motion. Acta Numer. 2017, 26, 305–364. [Google Scholar] [CrossRef]
  41. Terpstra, T.; Voitel, T.; Hashemian, A. A Survey of Multi-View Photogrammetry Software for Documenting Vehicle Crush; SAE Technical Paper; SAE World Congress and Exhibition: Detroit, MI, USA, 2016. [Google Scholar]
  42. Remondino, F.; Del Pizzo, S.; Kersten, T.P.; Troisi, S. Low-cost and open-source solutions for automated image orientation—A critical overview. In Proceedings of the Euro-Mediterranean Conference, Limassol, Cyprus, 29 October–3 November 2012; Progress in Cultural Heritage Preservation. Springer: Berlin/Heidelberg, Germany, 2012. [Google Scholar]
  43. Markiewicz, J.; Podlasiak, P.; Kowalczyk, M.; Zawieska, D. The New Approach to Camera Calibration-GCPs or TLS Data? In Proceedings of the International Archives of the Photogrammetry, Remote Sensing & Spatial Information Sciences, Prague, Czech Republic, 12–19 July 2016; p. 41. [Google Scholar]
  44. Luhmann, T.; Robson, S.; Kyle, S.; Harley, I. Close Range Photogrammetry; Whittles Publishing: Dunbeath, Scotland, 2011. [Google Scholar]
Figure 1. Portable photogrammetry for out-of-machine raw part measurement (a) for efficient in-machine alignment process (b) of large components prior to machining.
Figure 2. Measurement equipment for portable photogrammetry (a), including a fixed focal length consumer grade digital camera (Nikon D300S, 4288 × 2848 pixels, 24 mm), measuring frame, and multicoded target artifacts for efficient image matching assistance; (b) non-coded targets (Geodesie, diameter 8 mm) [22] for raw part measurement; and (c) carbon-fiber scale bar (Geodesie) [22] for precise traceability and precision evaluation, with precalibrated distances between non-coded targets (blue).
Figure 3. Steel raw part pilot case (a) under analysis (1.5 m × 1 m × 0.5 m) with prismatic subelements (in red, 400 mm × 200 mm × 100 mm) milled to a 0.01 mm precision, with multicoded targets around the scene for assisting image matching; (b) CAD describing nominal reference surfaces (in red), and (c) uncertainty evaluation scene and scale bar lay-out for LME evaluation, with non-coded targets on surfaces to measure (d).
Figure 4. Epipolar ray-net geometry in portable photogrammetry. Camera frame coordinates (Rj and dj) and optical target 3D coordinates (Xi) can be expressed with respect to a predefined measuring frame, and computed according to the ray-net corresponding to a set of projected 2D target coordinates (pij) at each image plane.
Figure 5. Target 2D detection and conic projection through epipolar line. (a) Example of optical target coordinate detection (p̃ij and q̃ij) at image plane; (b) Conic projection (pij and qij) into image plane of the target 3D coordinate Xi according to epipolar line from Oj camera frame (Rj and dj), with the corresponding projection error contribution (rpij and rqij) to the joint residual error.
Figure 6. In-process computing procedure. Once a new image is processed, the extrinsic parameters are solved according to the X1, X2, and X3 target coordinates (a). Then, the 3D coordinates of a new target (X4) are solved given different points of view with known camera extrinsic parameters (b). Finally, camera extrinsic parameters and target coordinates are jointly optimized in an in-process bundle adjustment (c).
Figure 7. Computation of the initial estimate of new camera extrinsic parameters. The first set of three images is shown (a–c), where coded targets on the measuring reference frame are detected (highlighted in purple on each image). Camera frames according to the computed extrinsic parameters are depicted in (d–f), corresponding to images (a–c), respectively, along with the epipolar net from each image to the set of detected targets in the measuring reference frame (purple).
Figure 8. Computation of the initial estimate of new target Xi coordinates. A coded target is detected (highlighted in blue) in each image (a–c), where the images correspond to the three in Figure 7. The computed target coordinates are depicted in the 3D scene (d) (in blue), along with the three epipolar lines from the images to the detected target.
Figure 9. Example of target detection (a) in image 3 (Figure 8c), including coded targets (blue) and non-coded targets with solved correspondences (red), along with targets on the measuring reference frame (purple). 3D view of the scene (b) including the epipolar net (red) for solving the detected 3D target coordinates (blue and red) given by the three images in Figure 8.
Figure 10. Measurement results for the pilot case (Figure 3) after bundle adjustment (a), where camera frames and the set of measured targets (coded in blue, non-coded in green) are shown given their computed coordinates in the reference frame (blue, green, and red axes) along with a scale bar for precise scale definition (orange), and histogram showing the reprojection error vector distribution in pixels (b) after including the scale bar distance (orange) into the joint bundle.
Figure 11. In-process intermediate bundle (step 6) computing time (ms) and number of solved targets each time a new image is taken during the measuring process of the scenario depicted in Figure 10. A discontinuity is observed at image 20 bundle computing time, which corresponded to a steep computational change in the management and allocation time (Eigen library) of submatrices in Equation (23).
Figure 12. LME approach for precision evaluation, with one calibrated bar as the scale for measurement (red, same scale as in Figure 10a and L3 in Figure 3) and six bars for length error evaluation (orange), all covering the measuring scene of the pilot case under study, with multicoded artifacts (in blue, as shown previously in Figure 2 and Figure 3) and non-coded targets on prismatic subelements (green).
Figure 13. Histogram of reprojection error vector after self-calibration in pixels, including one scale bar distance for traceability (Figure 10, in orange), with RMS value being 0.18 pixel.
Figure 14. Fitting of non-coded target coordinates (green) to nominal reference surfaces on CAD file (red) for evaluating measuring error, along with auxiliary multicoded targets in the scene (blue).
Figure 15. Evaluation tests at industrial scenarios for two end-users (Goimek and Liebherr). Large raw parts prior to their machining are shown, (a) Soraluce milling machine travelling column (6 m long) manufactured at Goimek machining shop, and (c) Liebherr drilling rig lead center (15 m long), along with corresponding CAD views (b,d) showing measured surfaces (in red) for overstock control.
Figure 16. Measurement results (a,c) in large raw parts for industrial examples shown in Figure 15a,c respectively, where solved target 3D coordinates and camera views are jointly depicted, and corresponding in-process intermediate bundle computing times (b,d).
Figure 17. Fitting of measured non-coded targets (yellow) to the nominal part geometry, corresponding to the industrial examples ((a) Soraluce milling machine travelling column, and (b) Liebherr drilling rig lead center), where optimal target coordinates are given in the ideal part frame for an even overstock distribution in all surfaces to be machined (red).
Table 1. Input data for extrinsic computing: 3D coordinates (x, y, and z in mm, for X1 to X6) for coded targets on reference frame, and image coordinates (hij and vij in pixels, for image j = 1 to 3).

Target | x | y | z | hi1 | vi1 | hi2 | vi2 | hi3 | vi3
X1 | 0 | 0 | 0 | −51.652 | 21.593 | −21.713 | −24.392 | 24.592 | 37.326
X2 | −169.963 | 2.650 | −0.356 | −696.361 | 27.686 | −10.666 | −574.886 | 18.178 | 637.234
X3 | 170.036 | 0 | 0 | 594.253 | 7.982 | −26.418 | 528.265 | 23.846 | −561.086
X4 | −1.742 | −169.186 | 0 | −52.039 | 541.249 | −448.374 | −41.592 | 504.455 | 37.142
X5 | −0.162 | 26.998 | 145.558 | −64.312 | −473.212 | 403.334 | −25.024 | −431.115 | 32.518
X6 | 0.109 | 26.590 | 28.314 | −54.508 | −125.784 | 104.393 | −22.956 | −112.206 | 36.936
Table 2. Computed results for independent camera extrinsic approaches (dX, dY, and dZ in mm, and α, β, and γ in radians), given by the input data in Table 1, along with the optimization quality index for each minimized reprojection error vector (RMS in pixels).

        | dX | dY | dZ | α | β | γ | RMS
Image 1 | −13.552 | 5.620 | 1145.020 | −2.375 | 0.005 | 0.020 | 0.482
Image 2 | −6.593 | −7.545 | 1340.136 | −2.348 | −0.009 | −1.580 | 0.454
Image 3 | 6.894 | 10.494 | 1233.812 | −2.378 | 0.023 | −4.712 | 0.471
Table 3. Computed results for independent target coordinate initial approaches, given by the input data in Table 2, along with the optimization quality index for each minimized reprojection error vector (RMS in pixels).

id | x | y | z | hi1 | vi1 | hi2 | vi2 | hi3 | vi3 | RMS
11 | −171.283 | −118.827 | −4.814 | −745.019 | 400.151 | −316.781 | −623.685 | 362.860 | 679.588 | 0.334
81 | 123.935 | −80.850 | −0.931 | 446.849 | 247.991 | −221.605 | 392.106 | 243.357 | −423.092 | 0.085
150 | −389.999 | 271.547 | 62.835 | −1370.495 | −757.428 | 673.595 | −1141.890 | −734.434 | 1274.263 | 0.691
194 | 254.725 | −340.006 | −97.490 | 1086.097 | 1369.686 | −1175.381 | 899.456 | 1284.152 | −994.748 | 1.617
942 | 48.080 | 393.260 | 65.336 | 84.859 | −1027.708 | 869.982 | 127.453 | −958.663 | −95.253 | 1.144
943 | 37.041 | 366.774 | 50.754 | 52.101 | −940.542 | 795.293 | 96.428 | −877.274 | −63.611 | 0.970
944 | 73.083 | 376.546 | 50.491 | 165.633 | −959.464 | 808.956 | 195.930 | −892.928 | −170.040 | 1.008
947 | 17.904 | 395.752 | 53.643 | −10.528 | −998.670 | 847.403 | 43.948 | −934.252 | −4.878 | 1.039
948 | 2.228 | 387.889 | 20.125 | −57.610 | −895.363 | 759.110 | 0.405 | −839.240 | 42.036 | 0.878
Table 4. Camera extrinsic parameters (dX, dY, and dZ in mm, and α, β, and γ in radians) after joint bundle of the epipolar net scene in Figure 9b.

        | dX | dY | dZ | α | β | γ
Image 1 | −13.561 | 5.492 | 1145.880 | −2.378 | 0.005 | 0.020
Image 2 | −6.635 | −7.598 | 1339.838 | −2.347 | −0.008 | −1.580
Image 3 | 7.005 | 10.447 | 1232.964 | −2.389 | 0.023 | −4.712
Table 5. Coded (id) target 3D coordinates (mm) after joint bundle of the scene in Figure 9b.

id | x | y | z
11 | −170.704 | −121.223 | −1.921
81 | 123.995 | −80.446 | −1.397
150 | −390.866 | 274.147 | 60.943
194 | 254.332 | −340.750 | −96.037
942 | 49.131 | 422.060 | 46.501
943 | 37.779 | 392.934 | 32.957
944 | 74.603 | 403.023 | 32.620
947 | 18.272 | 425.879 | 33.690
948 | 2.222 | 417.722 | −0.569
Table 6. Camera intrinsic parameters after self-calibration.

f (mm) | h0 (pixel) | v0 (pixel) | k1 (pixel−2) | k2 (pixel−4) | π1 (pixel−1) | π2 (pixel−1)
24.557 | −7.652 | 31.396 | 4.866 × 10−9 | −2.150 × 10−16 | −8.636 × 10−8 | −1.236 × 10−8
