Article

Interpretation and Transformation of Intrinsic Camera Parameters Used in Photogrammetry and Computer Vision

Department of Geomatics, National Cheng Kung University, No. 1, Daxue Road, East District, Tainan City 701, Taiwan
* Author to whom correspondence should be addressed.
Sensors 2022, 22(24), 9602; https://doi.org/10.3390/s22249602
Submission received: 9 November 2022 / Revised: 4 December 2022 / Accepted: 5 December 2022 / Published: 7 December 2022
(This article belongs to the Collection Visual Sensors)

Abstract
The precise modelling of intrinsic camera geometry is a common issue in the fields of photogrammetry (PH) and computer vision (CV). However, the two fields model intrinsic camera geometry differently, and researchers have therefore adopted different definitions of the intrinsic camera parameters (ICPs), including focal length, principal point, radial distortion, decentring distortion, affinity and shear. These ICPs are indispensable for vision-based measurements, and the differences in definition can confuse researchers from one field when they use ICPs obtained from a camera calibration software package developed in the other field. This paper clarifies the ICP definitions used in each field and proposes an ICP transformation algorithm. The originality of this study lies in its use of least-squares adjustment, applying image points that involve ICPs as defined in the PH and CV image frames, to convert a complete set of ICPs. This ICP transformation method is more rigorous than the simplified formulas used in conventional methods. Selecting suitable image points can increase the accuracy of the generated adjustment model. In addition, the proposed ICP transformation method enables users to apply mixed software from the fields of PH and CV. To validate the transformation algorithm, two cameras with different view angles were calibrated using typical camera calibration software packages from each field to obtain ICPs. Experimental results demonstrate that the proposed algorithm can convert ICPs derived from different software packages. Both the PH-to-CV and CV-to-PH transformations were executed using complete mathematical camera models. We also compared the rectified images and distortion plots generated using different ICPs. Furthermore, a comparison with the state-of-the-art method confirms that our approach improves ICP conversion between the PH and CV models.

1. Introduction

The precise modelling of the intrinsic geometry of a camera is essential to the various applications of vision-based measurement acquired with visual sensors. Digital cameras are the most widespread visual sensors applied to vision-based measurements because of their generality and affordability. The mathematical camera model is built on the intrinsic camera parameters (ICPs). ICPs obtained from the process of camera calibration are a key factor in accurate vision-based measurements in the field of instrumentation and measurement [1]. This is especially true in the fields of photogrammetry (PH) and computer vision (CV). Although both fields treat the intrinsic camera geometry as a perspective projection (pinhole camera model) combined with a lens distortion model, the ICPs adopted in the associated mathematical models differ. Consequently, researchers from one field are frequently confused when using camera parameters obtained from a camera calibration software package developed in the other field. To ameliorate this problem, this paper clarifies the ICP definitions used in each field and proposes an ICP transformation algorithm for greater interoperability between the two fields.
In the field of PH, the calibration of metric cameras has been studied since the 1960s [2]. The parameters derived to describe the intrinsic camera geometry are known as interior orientation parameters and are applied to correct image measurements to satisfy the geometry of a perspective projection. During the era of film cameras, the major items of calibration included lens aberrations, film deformation, the focal length and the principal point [3]. In film cameras, the principal point must be determined using fiducial marks fixed inside the camera; such cameras are called metric cameras. In digital cameras, the visual sensor is usually fixed inside the camera, so the principal point can be determined without the need for fiducial marks. Furthermore, the deformation of visual sensors is negligible. Therefore, the focal length, the principal point and lens aberration of digital cameras have become the standard parameters in modelling [4]. Lens aberration in Brown’s [3] model incorporates the radial distortion and decentring distortion. Additional parameters, such as affine distortion [5] and chromatic aberration [6], can also be adopted. Such a calibration model can also be extended to other kinds of cameras, such as a hemispheric camera [7], RGB-D camera [8], thermal camera [9] and line-scan camera [10].
In the field of PH, lens aberrations are modelled as correction equations in which lens distortion terms are a function of distorted image coordinates [3]. The advantage of this method is that image measurements can be directly corrected against lens aberrations using the calibrated lens distortion parameters. This allows corrected image points to be applied in the calculation of bundle adjustment. Through bundle adjustment, 3D coordinates of object points corresponding to the measurements of image tie points can be computed with reasonable precision. In the past, this approach was beneficial to aerial photogrammetry for mapping and surveying [11].
Camera mathematical models have also been developed and utilized in the field of CV. These CV models originated from the field of PH [12,13]. Similarly, ICPs related to lens distortion have been widely adopted and have become the classic approach [14]. The basic mathematical model for a camera in CV is a pinhole camera model compensating for lens distortion [13]. Although many variations have been proposed in the literature, almost all of them are based on this model. This compensation model can be considered the inverse of the correction model applied in the field of PH. It has been adopted as the standard in the field of CV and has not undergone any major modifications since its introduction [15]. By applying this compensation model, image lens distortion can be efficiently rectified. For the calculation of image rectification, the plane on which the coordinates of image points are measured is named the frame buffer, and the plane on which normalized image points are located is named the CCD plane [16]. In the textbook [17], the CCD plane is instead defined as the normalized image plane. At present, normalized image points are often used in computation; for example, they may be used to find the essential matrix [18]. To improve the convenience of generating rectified images, the lens distortion terms are chosen to be functions of the undistorted image coordinates. The advantage of this mathematical model is that the rectified image can be resampled from the original image efficiently, without additional iterative calculations.
Wang et al. [19] compared the coordinate systems, parameter definitions and camera models commonly applied in the fields of PH and CV. Based on this comparison, they provided a linear transformation from the camera model defined in PH to that defined in CV. This linear transformation is only applicable to radial distortion parameters lower than the third order; it is therefore unsuitable for highly distorted camera lenses. Drap and Lefèvre [20] proposed an exact formula for inverse radial lens distortions. However, the inverse radial lens distortion parameters defined in their study cannot be applied directly in common CV software because of the different definition of image coordinates. Furthermore, their transformation does not include the decentring distortion parameters. Hastedt et al. [21,22] presented a table for the transformation of ICPs commonly used in the fields of PH and CV without laying out the derivation process for the conversion. For validation, they only demonstrated the transformation of ICPs from the CV model to the PH model; the reverse direction was not validated.
In summary, previous studies have noted the differences between the camera models defined in the two fields, and some simplified transformation methods have been provided. For the convenience of applying software packages developed in both fields interactively, further studies are needed to provide a comprehensive interpretation and a complete transformation of the ICPs commonly applied in both fields.
Hence, the differing models used in each field give rise to confusion among researchers. Because the mathematical camera models in the fields of PH and CV are defined differently, the ICPs applied in these two fields cannot be transformed directly. Some proposed simplified methods use closed-form formulas to transform only the dominant ICPs. Although such methods are convenient, the resulting transformations are incomplete, and the accuracy of the converted parameters may be insufficient. This paper proposes a method for comprehensive transformation between the PH and CV camera models. The proposed method can transform a complete set of ICPs, including focal length, principal point, radial distortion, decentring distortion, affinity and shear. This method is important for applications in which camera calibration is a critical issue.
Therefore, this paper has three objectives: (1) explicate the camera mathematical models used in both fields and list the equations for image correction according to their respective ICPs; (2) develop a general method for transforming ICPs between the PH and CV models; and (3) discuss and analyze the transformed ICP values, rectified images and distortion plots. The three main contributions of this paper are as follows. First, the mathematical camera models in the fields of PH and CV are both analyzed in detail, and the related ICP equations are listed explicitly. Second, an ICP transformation algorithm is proposed. The algorithm uses least-squares adjustment to convert the ICPs by using image points that involve ICPs defined in the PH and CV image frames. This solves the main problems associated with conventional methods based on simplified formulas. The ICPs can be converted completely between the fields of PH and CV, so the number of converted ICPs obtained using the proposed method is identical to the original number. Selecting suitable image points can improve the accuracy of the generated adjustment model. Finally, this paper summarizes the results of testing the proposed transformation algorithm bidirectionally between the PH and CV models to evaluate the algorithm's performance.

2. Camera Mathematical Model

2.1. Geometry of Perspective Projection Used in the Field of PH

The geometry of the perspective projection, as defined in the field of PH, is depicted in Figure 1a. The three coordinate systems (frames) involved in the definition are the object frame, the image frame and the camera frame. The coordinates of object points are defined in the object frame. The corresponding coordinates of image points, defined in the image frame, are the row (r) and column (c); the unit is the pixel. Figure 1a displays the coordinate axes of the camera frame, whose origin is the perspective center. The z-axis represents the optical axis of the camera lens, and the principal point is the intersection of the optical axis and the image frame. The distance from the perspective center to the principal point is usually named the principal distance (f), which equals the focal length (f) for a camera lens focused at infinity. The principal point is usually close to the center of the image. In this paper, the image coordinates of the principal point are denoted (c_p, r_p), as indicated in Figure 1b.
Based on the preceding definition, the coordinates of an image point in the image frame are (c, r), and the corresponding coordinates in the camera frame are (x, y, f). According to the frame relationship, the transformation from the image frame to the camera frame can be presented in the form of homogeneous coordinates. Equation (1) describes the transformation process. Metric units are conventionally adopted for the PH camera frame; therefore, the formula incorporates the image pixel size (ds).
$$\begin{bmatrix} x \\ y \\ f \end{bmatrix} = \begin{bmatrix} ds & 0 & -ds \cdot c_p \\ 0 & -ds & ds \cdot r_p \\ 0 & 0 & f \end{bmatrix} \begin{bmatrix} c \\ r \\ 1 \end{bmatrix} \tag{1}$$
There is another definition of the image frame in the field of PH. The two definitions are depicted in Figure 2. In the second definition, the origin is set at the image center, which is denoted (c_c, r_c), as indicated in Figure 2b, and metric units are adopted. The coordinates of an image point in this image frame are denoted (x_i, y_i), and the coordinates of the principal point are denoted (x_p, y_p). The conversion of the principal point between the two image frames is given in Equation (2). The corresponding coordinates in the camera frame are still (x, y, f), as calculated from Equation (3).
$$\begin{bmatrix} c_p \\ r_p \end{bmatrix} = \begin{bmatrix} c_c + x_p/ds \\ r_c - y_p/ds \end{bmatrix} \tag{2}$$
$$\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} x_i - x_p \\ y_i - y_p \end{bmatrix} \tag{3}$$
For digital cameras, the first definition is more convenient than the second. In this paper, we adopt the first definition of the image frame in the field of PH. Therefore, even if the image frame in the calibration software used is defined in metric units, the coordinates of image points can be converted into the image frame defined in pixel units.
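To make these conventions concrete, the following minimal Python sketch (the function names are our own, for illustration) implements the pixel-to-camera-frame mapping of Equation (1) and the principal-point conversion of Equation (2):

```python
def ph_image_to_camera(c, r, c_p, r_p, ds, f):
    """Equation (1): map pixel image coordinates (c, r) to the PH camera
    frame (x, y, f). The row axis points down while the camera y-axis
    points up, hence the sign flip on y."""
    x = ds * (c - c_p)
    y = -ds * (r - r_p)
    return x, y, f

def metric_to_pixel_pp(c_c, r_c, x_p, y_p, ds):
    """Equation (2): convert a principal point given in the metric image
    frame (x_p, y_p) to the pixel image frame (c_p, r_p)."""
    return c_c + x_p / ds, r_c - y_p / ds
```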
The preceding transformation is incomplete if lens distortion is uncorrected. In the camera frame, the coordinates of an image point with lens distortion should be denoted (x_d, y_d, f) instead of (x, y, f). Camera lens distortion is commonly treated as a combination of radial and decentring (tangential) distortion, which are modelled as functions of the distorted coordinates in the field of PH [3]. Therefore, the undistorted coordinates (x_u, y_u, f) can be obtained by adding the radial and decentring distortion terms to the distorted coordinates. Equation (4) displays the lens distortion model, in which (Δx_rad, Δy_rad) denotes radial distortion and (Δx_dec, Δy_dec) denotes decentring distortion.
$$\begin{bmatrix} x_u \\ y_u \end{bmatrix} = \begin{bmatrix} x_d + \Delta x_{rad}(x_d, y_d) + \Delta x_{dec}(x_d, y_d) \\ y_d + \Delta y_{rad}(x_d, y_d) + \Delta y_{dec}(x_d, y_d) \end{bmatrix} \tag{4}$$
In general, a camera lens has a much larger amount of radial distortion than decentring distortion. Radial distortion can be modelled using a polynomial function with multiple parameters, such as k_1, k_2, k_3, etc. [3]. Equation (5) lists the formulas for radial distortion. Three parameters are sufficient in most cases, one exception being fisheye lens cameras. Brown [3] also proposed a mathematical function for decentring distortion with two parameters, p_1 and p_2. Equation (6) lists the formula for decentring distortion.
$$\begin{bmatrix} \Delta x_{rad}(x_d, y_d) \\ \Delta y_{rad}(x_d, y_d) \end{bmatrix} = \begin{bmatrix} x_d (k_1 r^2 + k_2 r^4 + k_3 r^6) \\ y_d (k_1 r^2 + k_2 r^4 + k_3 r^6) \end{bmatrix} \tag{5}$$
$$\begin{bmatrix} \Delta x_{dec}(x_d, y_d) \\ \Delta y_{dec}(x_d, y_d) \end{bmatrix} = \begin{bmatrix} p_1 (r^2 + 2 x_d^2) + 2 p_2 x_d y_d \\ 2 p_1 x_d y_d + p_2 (r^2 + 2 y_d^2) \end{bmatrix} \tag{6}$$
where $r = \sqrt{x_d^2 + y_d^2}$.
There are additional parameters, affinity and shear, to take into account; however, such terms are rarely significant for common digital cameras [23]. Affinity and shear represent image-invariant affine distortion [4]. Equation (7) lists the formulas for affine distortion, in which the affinity parameter is denoted b_1 and the shear parameter is denoted b_2. Affine distortion can be added to Equation (4) alongside the radial and decentring distortion terms.
$$\begin{bmatrix} \Delta x_{aff}(x_d, y_d) \\ \Delta y_{aff}(x_d, y_d) \end{bmatrix} = \begin{bmatrix} b_1 x_d + b_2 y_d \\ 0 \end{bmatrix} \tag{7}$$
These additional parameters are mainly relevant to older frame grabber-based cameras, in which timing problems often led to relative errors in the horizontal pixel spacing and to image shearing [5]. They are therefore often ignored for digital cameras because their effect is insignificant. In this paper, we also ignore the affinity and shear parameters in the field of PH, so b_1 and b_2 are set to zero.
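The PH correction model of Equations (4)–(7) can be transcribed directly; the sketch below (a plain transcription, with a hypothetical function name) also shows why the PH form is convenient: the correction is evaluated at the measured, distorted coordinates, so no iteration is needed.

```python
def ph_correct_distortion(x_d, y_d, k1, k2, k3, p1, p2, b1=0.0, b2=0.0):
    """Equations (4)-(7): correct a distorted PH camera-frame point
    (x_d, y_d). All terms are evaluated at the distorted coordinates,
    so the undistorted point follows in a single pass."""
    r2 = x_d**2 + y_d**2
    radial = k1 * r2 + k2 * r2**2 + k3 * r2**3
    dx = (x_d * radial + p1 * (r2 + 2 * x_d**2) + 2 * p2 * x_d * y_d
          + b1 * x_d + b2 * y_d)
    dy = y_d * radial + 2 * p1 * x_d * y_d + p2 * (r2 + 2 * y_d**2)
    return x_d + dx, y_d + dy
```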

2.2. Geometry of Perspective Projection Used in the Field of CV

The geometry of perspective projection, as defined in the field of CV, is depicted in Figure 3a. The coordinate systems involved include the object, image and camera frames, but the axes of the camera frame are defined differently from those in the field of PH. Furthermore, pixel units rather than metric units are adopted. This causes the transformation matrix from the image frame to the camera frame to differ from that used in the field of PH. The corresponding coordinates of image points defined in the image frame are also the row and column, and the unit is the pixel. The image coordinates of the principal point are again denoted (c_p, r_p), as indicated in Figure 3b.
In addition, the image pixel shape is not always modelled as a perfect square in the field of CV. The lengths of the image pixel in the x and y directions are represented by ds_x and ds_y, respectively. Therefore, the focal length has separate x and y components, termed f_x and f_y. CV researchers often prefer homogeneous coordinate transformations, in which case image points defined in the camera frame are represented with normalized homogeneous coordinates. In this paper, this is termed the normalized image frame, as indicated in Figure 4.
In the normalized image frame, the coordinate unit is normalized by the focal length. Accordingly, the homogeneous coordinates of each image point in the camera frame become (x̂, ŷ, 1). Therefore, the transformation from the image frame to the camera frame can be presented in the form of homogeneous coordinates in Equation (8). The affinity and shear parameters appear in this equation: the shear parameter is denoted s, and the affinity parameter is implicit in f_x and f_y, as presented in Equation (9).
$$\begin{bmatrix} \hat{x} \\ \hat{y} \\ 1 \end{bmatrix} = \begin{bmatrix} \dfrac{1}{f_x} & -\dfrac{s}{f_x f_y} & \dfrac{s \cdot r_p - c_p \cdot f_y}{f_x f_y} \\ 0 & \dfrac{1}{f_y} & -\dfrac{r_p}{f_y} \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} c \\ r \\ 1 \end{bmatrix} \tag{8}$$
$$f_x = \frac{f}{ds_x}, \qquad f_y = \frac{f}{ds_y} \tag{9}$$
In the field of CV, the term skew is used more often than shear. For consistency, this paper adopts the term shear as used in the field of PH. Since shear is usually very small, the assumption s = 0, commonly made by other authors [15,24], is quite reasonable [25]. In reality, the shear might not be zero, for example when taking an image of an image [18], in which case the x- and y-axes are not perpendicular. Nevertheless, the shear parameter is often ignored for digital cameras because its effect is insignificant. In this paper, we also ignore the shear parameter in the field of CV, so s is set to zero and Equation (8) simplifies to Equation (10).
$$\begin{bmatrix} \hat{x} \\ \hat{y} \\ 1 \end{bmatrix} = \begin{bmatrix} \dfrac{1}{f_x} & 0 & -\dfrac{c_p}{f_x} \\ 0 & \dfrac{1}{f_y} & -\dfrac{r_p}{f_y} \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} c \\ r \\ 1 \end{bmatrix} \tag{10}$$
Although researchers in both fields apply Brown's formulas and use similar parameters, those parameters are defined differently. Therefore, software packages yield very different lens distortion parameter values for camera calibration, depending on whether the package was developed for PH or CV. In the field of CV, lens distortion is modelled in terms of the undistorted image points rather than the distorted points. The transformed coordinates generated from Equation (10) still contain distortion and can be denoted (x̂_d, ŷ_d, 1). The undistorted coordinates (x̂_u, ŷ_u, 1) are obtained by subtracting the radial and decentring distortion terms from the distorted coordinates. Therefore, lens distortion is modelled as Equation (11) in the field of CV.
$$\begin{bmatrix} \hat{x}_u \\ \hat{y}_u \end{bmatrix} = \begin{bmatrix} \hat{x}_d - \Delta\hat{x}_{rad}(\hat{x}_u, \hat{y}_u) - \Delta\hat{x}_{dec}(\hat{x}_u, \hat{y}_u) \\ \hat{y}_d - \Delta\hat{y}_{rad}(\hat{x}_u, \hat{y}_u) - \Delta\hat{y}_{dec}(\hat{x}_u, \hat{y}_u) \end{bmatrix} \tag{11}$$
Equations (12) and (13) are the mathematical functions applied for radial distortion and decentring distortion, respectively, in the field of CV. The parameters for radial distortion, k̂_1, k̂_2 and k̂_3, and for decentring distortion, p̂_1 and p̂_2, appear similar to those applied in the field of PH, but they are defined quite differently. First, they are defined in the normalized image frame and are functions of the undistorted coordinates; this makes the parameter transformation between the two definitions non-transparent. Second, p̂_1 and p̂_2 are also defined differently.
$$\begin{bmatrix} \Delta\hat{x}_{rad}(\hat{x}_u, \hat{y}_u) \\ \Delta\hat{y}_{rad}(\hat{x}_u, \hat{y}_u) \end{bmatrix} = \begin{bmatrix} \hat{x}_u (\hat{k}_1 \hat{r}^2 + \hat{k}_2 \hat{r}^4 + \hat{k}_3 \hat{r}^6) \\ \hat{y}_u (\hat{k}_1 \hat{r}^2 + \hat{k}_2 \hat{r}^4 + \hat{k}_3 \hat{r}^6) \end{bmatrix} \tag{12}$$
$$\begin{bmatrix} \Delta\hat{x}_{dec}(\hat{x}_u, \hat{y}_u) \\ \Delta\hat{y}_{dec}(\hat{x}_u, \hat{y}_u) \end{bmatrix} = \begin{bmatrix} 2 \hat{p}_1 \hat{x}_u \hat{y}_u + \hat{p}_2 (\hat{r}^2 + 2 \hat{x}_u^2) \\ \hat{p}_1 (\hat{r}^2 + 2 \hat{y}_u^2) + 2 \hat{p}_2 \hat{x}_u \hat{y}_u \end{bmatrix} \tag{13}$$
where $\hat{r} = \sqrt{\hat{x}_u^2 + \hat{y}_u^2}$.
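The CV counterpart of Equations (10)–(13) reads as follows (again a sketch with our own function names). Note the direction of the computation: because the distortion terms are functions of the undistorted coordinates, mapping an undistorted point back to its distorted location is direct, which is exactly the lookup needed when resampling a rectified image.

```python
def cv_normalize(c, r, fx, fy, c_p, r_p):
    """Equation (10): pixel coordinates to the normalized image frame.
    Points taken straight from an image still carry lens distortion."""
    return (c - c_p) / fx, (r - r_p) / fy

def cv_distort(x_u, y_u, k1, k2, k3, p1, p2):
    """Equations (11)-(13) rearranged: given an undistorted normalized
    point, add the distortion terms back to find where it appears in
    the original image."""
    r2 = x_u**2 + y_u**2
    radial = k1 * r2 + k2 * r2**2 + k3 * r2**3
    x_d = x_u + x_u * radial + 2 * p1 * x_u * y_u + p2 * (r2 + 2 * x_u**2)
    y_d = y_u + y_u * radial + p1 * (r2 + 2 * y_u**2) + 2 * p2 * x_u * y_u
    return x_d, y_d
```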
There are a number of parameters, variables and constants in the mathematical expressions. Table 1 lists their symbols and corresponding meanings, as defined in the PH and CV standards.

3. Methodology

As mentioned, the transformation of ICPs is conventionally nonlinear. If a transformation from one camera coordinate system to another is required, the conventional method is to linearise the related equations and compute iteratively. This process is complicated and computationally expensive, and an initial value for linearisation must also be determined. However, image point coordinates can be transformed linearly, and the coordinates of an image point, once the ICPs have been applied, are identical in the PH and CV image frames. Based on this principle, we propose a linear transformation theory for converting ICPs between the PH and CV camera mathematical models. Least-squares adjustment is used in our transformation algorithm, with each observation assigned the same weight. The main challenges that must be addressed in ICP transformation are as follows: (1) selecting suitable image points in the PH or CV camera frame, accounting for both the number and the distribution of image points; (2) transforming image points between the PH camera frame and the CV camera frame; and (3) using least-squares adjustment to transform a complete set of ICPs. The means by which the proposed method addresses these challenges are detailed in the following sections.

3.1. Transformation of ICPs from PH to CV Standard

The five-step workflow for transforming PH ICPs to CV ICPs is depicted in Figure 5. Each step is described in Figure 5a, and the entire framework is visualized in Figure 5b. The first step involves the selection of an adequate number of distorted image points as observation points. From the selected image points, the corresponding undistorted image points can be calculated based on the ICPs by using Equations (4)–(6); this calculation constitutes the second step, following the procedure described in Section 2.1. The coordinate systems of the distorted and undistorted image points are defined in the PH camera frame. The proposed ICP transformation method requires at least three observed image points to generate a solution. However, the solution generated when the minimum number of points is used may not be reliable. A reliable solution requires not only an adequate number of observation points but also an even distribution of them. Selecting observation points at regular intervals across the whole image is a suitable strategy for achieving an even distribution; this ensures that the most reliable solution can be obtained even when the number of observation points is limited.
The third step involves the transformation of image points from the PH camera frame to the CV camera frame. Both the undistorted and distorted image points must be converted in this step. The two parts of the step are detailed as follows. The first part is a coordinate transformation from the camera frame to the image frame. These two coordinate systems are defined in the field of PH. The equation obtained from Equation (1) is listed in Equation (14).
$$\begin{bmatrix} c \\ r \\ 1 \end{bmatrix} = \begin{bmatrix} \dfrac{1}{ds} & 0 & \dfrac{c_p}{f} \\ 0 & -\dfrac{1}{ds} & \dfrac{r_p}{f} \\ 0 & 0 & \dfrac{1}{f} \end{bmatrix} \begin{bmatrix} x \\ y \\ f \end{bmatrix} \tag{14}$$
The second part is the corresponding coordinate transformation from the camera frame to the image frame, where these two coordinate systems are defined in the field of CV. The equation obtained from Equation (10) is listed in Equation (15).
$$\begin{bmatrix} c \\ r \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & c_p \\ 0 & f_y & r_p \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \hat{x} \\ \hat{y} \\ 1 \end{bmatrix} \tag{15}$$
The coordinates of an image point in the image frame are the same in the PH and CV models; Equations (16) and (17) are derived from Equations (14) and (15). Based on these two equations, the coordinates of the undistorted and distorted image points in the CV camera frame can be obtained. Therefore, the coordinate transformation from the PH camera frame to the CV camera frame can be achieved. The conversion of the focal length from the field of PH to the field of CV is defined in Equation (18).
$$\hat{x} = \frac{x}{ds \cdot f_x} \tag{16}$$
$$\hat{y} = -\frac{y}{ds \cdot f_y} \tag{17}$$
$$f_x = f_y = \frac{f}{ds} \tag{18}$$
The fourth step involves listing the observation equations, which are reorganized from the correction equations in the field of CV, i.e., Equations (11)–(13). The unknown parameters are the ICPs in the field of CV. Equation (19) presents the entire least-squares form for solving the CV ICPs. The undistorted image points are listed sequentially from (x̂_u1, ŷ_u1) to (x̂_un, ŷ_un), and the distorted image points from (x̂_d1, ŷ_d1) to (x̂_dn, ŷ_dn). All coordinates of the image points are known, so every value in the design matrix can be computed directly as a coefficient. Consequently, this adjustment model is linear.
$$\begin{bmatrix} \hat{x}_{d1} - \hat{x}_{u1} \\ \hat{y}_{d1} - \hat{y}_{u1} \\ \vdots \\ \hat{x}_{dn} - \hat{x}_{un} \\ \hat{y}_{dn} - \hat{y}_{un} \end{bmatrix} + V = \begin{bmatrix} \hat{x}_{u1}\hat{r}_1^2 & \hat{x}_{u1}\hat{r}_1^4 & \hat{x}_{u1}\hat{r}_1^6 & 2\hat{x}_{u1}\hat{y}_{u1} & \hat{r}_1^2 + 2\hat{x}_{u1}^2 \\ \hat{y}_{u1}\hat{r}_1^2 & \hat{y}_{u1}\hat{r}_1^4 & \hat{y}_{u1}\hat{r}_1^6 & \hat{r}_1^2 + 2\hat{y}_{u1}^2 & 2\hat{x}_{u1}\hat{y}_{u1} \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ \hat{x}_{un}\hat{r}_n^2 & \hat{x}_{un}\hat{r}_n^4 & \hat{x}_{un}\hat{r}_n^6 & 2\hat{x}_{un}\hat{y}_{un} & \hat{r}_n^2 + 2\hat{x}_{un}^2 \\ \hat{y}_{un}\hat{r}_n^2 & \hat{y}_{un}\hat{r}_n^4 & \hat{y}_{un}\hat{r}_n^6 & \hat{r}_n^2 + 2\hat{y}_{un}^2 & 2\hat{x}_{un}\hat{y}_{un} \end{bmatrix} \begin{bmatrix} \hat{k}_1 \\ \hat{k}_2 \\ \hat{k}_3 \\ \hat{p}_1 \\ \hat{p}_2 \end{bmatrix} \tag{19}$$
where V denotes the residual vector.
The entire least-squares adjustment can be expressed as $L + V = A \cdot X$, where $L$ represents the vector of observations, $A$ the design matrix and $X$ the vector of unknown parameters. Once the observation equations have been listed sequentially, the related values in the observation vector and design matrix can be calculated accordingly. $X$ can then be solved directly as $X = (A^T A)^{-1} A^T L$.
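As a compact illustration of steps 3–5, the following NumPy sketch (the function name and array interface are our own, not from the paper) transfers the point pairs to the CV camera frame with Equations (16)–(18), assuming square pixels, and then solves Equation (19):

```python
import numpy as np

def transform_ph_to_cv(xd, yd, xu, yu, ds, f):
    """Solve Equation (19) for the CV distortion parameters
    (k1_hat, k2_hat, k3_hat, p1_hat, p2_hat).

    xd, yd, xu, yu: 1-D arrays of distorted/undistorted points in the
    PH camera frame (metric units); ds, f: pixel size and focal length
    from the PH calibration."""
    fx = fy = f / ds                              # Equation (18)
    xdh, ydh = xd / (ds * fx), -yd / (ds * fy)    # Equations (16)-(17)
    xuh, yuh = xu / (ds * fx), -yu / (ds * fy)
    r2 = xuh**2 + yuh**2
    n = len(xuh)
    A = np.zeros((2 * n, 5))                      # design matrix
    L = np.zeros(2 * n)                           # observation vector
    A[0::2] = np.column_stack([xuh * r2, xuh * r2**2, xuh * r2**3,
                               2 * xuh * yuh, r2 + 2 * xuh**2])
    A[1::2] = np.column_stack([yuh * r2, yuh * r2**2, yuh * r2**3,
                               r2 + 2 * yuh**2, 2 * xuh * yuh])
    L[0::2] = xdh - xuh
    L[1::2] = ydh - yuh
    X, *_ = np.linalg.lstsq(A, L, rcond=None)     # solves L + V = A @ X
    return X
```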

3.2. Transformation of ICPs from CV to PH Standard

The five-step workflow is depicted in Figure 6. Each step is described in Figure 6a, and the entire framework is visualized in Figure 6b.
The process of transforming CV ICPs to PH ICPs is highly similar to the aforementioned process of transforming PH ICPs to CV ICPs. The procedure also comprises five steps. The major difference is that undistorted rather than distorted image points must be selected in the first step. Subsequently, in the second step, the corresponding distorted image points can be calculated based on ICPs. The equations are listed in Equations (11)–(13). The coordinate system of distorted image points and undistorted image points is defined in the CV camera frame.
The third step involves the transformation of image points from the CV camera frame to the PH camera frame. Both the undistorted and distorted image points must be converted in this step. Equations (20) and (21), which are derived from Equations (16) and (17), detail the process. The conversion of the focal length from the CV to the PH standard is defined in Equation (22). The effect of the other focal length in the CV standard, f_x, can be absorbed into the affinity parameter, b_1, in the PH standard. Therefore, after the transformation of CV ICPs to PH ICPs, the number of parameters remains the same.
$$x = \hat{x} \cdot ds \cdot f_x \tag{20}$$
$$y = -\hat{y} \cdot ds \cdot f_y \tag{21}$$
$$f = f_y \cdot ds_y \tag{22}$$
The fourth and fifth steps involve listing the observation equations, which are reorganized on the basis of Equations (4)–(7), and solving them. The unknown parameters are the ICPs defined according to the PH standard. Equation (23) displays the entire least-squares form for solving the PH ICPs. The distorted image points are listed sequentially from (x_d1, y_d1) to (x_dn, y_dn), and the undistorted image points from (x_u1, y_u1) to (x_un, y_un). All coordinates of the image points are known, and every value in the design matrix can be computed directly as a coefficient. Consequently, this adjustment model is also linear. If the affinity factor in Equation (7) is ignored, the adjustment model can be simplified by excluding the b_1 parameter from Equation (23). The least-squares adjustment can again be expressed as $L + V = A \cdot X$, from which the converted ICPs can be solved.
$$\begin{bmatrix} x_{u1} - x_{d1} \\ y_{u1} - y_{d1} \\ \vdots \\ x_{un} - x_{dn} \\ y_{un} - y_{dn} \end{bmatrix} + V = \begin{bmatrix} x_{d1} r_1^2 & x_{d1} r_1^4 & x_{d1} r_1^6 & r_1^2 + 2 x_{d1}^2 & 2 x_{d1} y_{d1} & x_{d1} \\ y_{d1} r_1^2 & y_{d1} r_1^4 & y_{d1} r_1^6 & 2 x_{d1} y_{d1} & r_1^2 + 2 y_{d1}^2 & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ x_{dn} r_n^2 & x_{dn} r_n^4 & x_{dn} r_n^6 & r_n^2 + 2 x_{dn}^2 & 2 x_{dn} y_{dn} & x_{dn} \\ y_{dn} r_n^2 & y_{dn} r_n^4 & y_{dn} r_n^6 & 2 x_{dn} y_{dn} & r_n^2 + 2 y_{dn}^2 & 0 \end{bmatrix} \begin{bmatrix} k_1 \\ k_2 \\ k_3 \\ p_1 \\ p_2 \\ b_1 \end{bmatrix} \tag{23}$$
where V denotes the residual vector.
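A corresponding sketch for the CV-to-PH direction solves Equation (23); it assumes the point pairs have already been transferred to the PH camera frame via Equations (20)–(22). Dropping the last column of the design matrix gives the simplified model without b_1.

```python
import numpy as np

def transform_cv_to_ph(xd, yd, xu, yu):
    """Solve Equation (23) for the PH parameters
    (k1, k2, k3, p1, p2, b1).

    xd, yd, xu, yu: 1-D arrays of distorted/undistorted image points
    already transferred to the PH camera frame (metric units)."""
    r2 = xd**2 + yd**2
    n = len(xd)
    A = np.zeros((2 * n, 6))
    L = np.zeros(2 * n)
    A[0::2] = np.column_stack([xd * r2, xd * r2**2, xd * r2**3,
                               r2 + 2 * xd**2, 2 * xd * yd, xd])
    A[1::2] = np.column_stack([yd * r2, yd * r2**2, yd * r2**3,
                               2 * xd * yd, r2 + 2 * yd**2,
                               np.zeros(n)])
    L[0::2] = xu - xd
    L[1::2] = yu - yd
    X, *_ = np.linalg.lstsq(A, L, rcond=None)
    return X
```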

4. Experimental Results

The test cameras, a Sony A6000 (Sony Group Corporation, Tokyo, Japan) and a GoPro Hero 4 (GoPro, Inc., San Mateo, CA, USA), are characterized by different degrees of lens distortion. The specifications of the two cameras are listed in Table 2. First, the cameras were calibrated using typical camera calibration software applied in the fields of PH and CV, respectively, to demonstrate the differences between the ICPs produced during the calibration processes in the different fields. Second, to verify the proposed method, both the PH-to-CV and CV-to-PH transformations were executed. In addition, different image point selection strategies were applied to test their effects on the performance of the proposed transformation algorithm.

4.1. Camera Calibration Method

The camera calibration methods used in the fields of PH and CV differ. The technique in the field of PH is more rigorous, and includes a greater number of camera parameters [26]; whereas, the technique in the field of CV emphasises automation, efficiency and versatility [27]. Remondino and Fraser [23] performed a detailed review of the calibration approaches used in the fields of PH and CV. In this paper, only one method was selected for each camera mathematical model, to reduce complexity. A rotatable table with coded targets [28] and Australis photogrammetric software (version 8.0, Photometrix, Melbourne, Australia) [29] were used for PH camera calibration. A checkerboard [24] and the Camera Calibrator app in MATLAB (version R2021b, The MathWorks, Inc., Natick, MA, USA) [30] were used for CV camera calibration. Figure 7 displays the representative camera calibration methods from the fields of PH and CV used in this paper.
The definition of the image frame in the Australis photogrammetric software is not the same as the one adopted in this paper. Based on Equations (2) and (3), the related coordinates of the image points and the principal point were converted to the image frame defined in pixel units. Therefore, the proposed method for transforming ICPs can still be implemented.
Table 3 lists the calibration results for the Sony camera, and Table 4 those for the GoPro camera. The focal length in the PH calibration method was measured in millimeters, whereas in the CV calibration method it was measured in pixels. The units of the lens distortion parameters also differed between the two camera mathematical models: as shown in Table 3 and Table 4, the PH and CV models had millimeter-derived units and a generic 'unit', respectively. As mentioned in the previous section, this unit is related to the focal length. In addition, according to Equation (7), b_1 and b_2 are pure coefficients and therefore have no unit.
Consequently, the two types of ICPs and their standards of precision are not the same. Values of ICPs in the same mathematical camera model can be compared, but the precision of ICPs from different models cannot be compared. Nevertheless, the overall precision in both the PH and CV calibration results was less than 1.5 pixels, according to the calibration reports in our experiments. For the Sony camera, the overall precision achieved when the PH and CV calibration methods were applied was 0.27 and 0.71 pixels, respectively. For the GoPro camera, the overall precision achieved when the PH and CV calibration methods were applied was 0.30 and 1.48 pixels, respectively.

4.2. ICP Transformation Results: From the PH to CV Standard

The proposed algorithm uses several image point selection strategies to account for the effects of the number and distribution of selected observation points. The first strategy involves three selected observation points with an intensive distribution (Case 1). The second involves three selected observation points with an extensive distribution (Case 2). The third involves 12 selected observation points evenly distributed across the whole image (Case 3). The last involves 4800 selected observation points placed at regular intervals across the whole image, ensuring an even distribution (Case 4). The selected observation points of Cases 1 to 4, when the Sony camera was used, are displayed in Figure 8. The experimental design adopted when the GoPro camera was used was identical. We implemented these strategies and compared four sets of results with the original calibration results obtained using the CV method.

4.2.1. Sony A6000 Results

Table 5 displays the Case 1 to 4 results obtained when the Sony camera was used. The focal length and principal point values obtained from the calibration and conversion methods differed by a maximum of 12 pixels. Overall, each ICP value was similar between the different methods, indicating that the proposed transformation algorithm is feasible.
Notably, k̂_1 was negative in the CV method results, whereas k_1 was positive in the PH method results. Normally, k_1 is positive in PH results for most cameras because of barrel distortion. The two camera mathematical models are inversely related; therefore, the sign of k̂_1 in the converted results is the opposite of the original. However, the other distortion parameters, including k_2, k_3, p_1 and p_2, may not accord with this condition because they are much less influential than k_1.
The numerical results in Table 5 do not clearly indicate which set of ICPs is optimal. Therefore, we generated rectified images using the ICPs obtained from each method. The effects of rectification (with the relevant image resolutions listed) are illustrated in Figure 9. The differences were most visible in the four corners of each image. The distortion correction in Case 1 was more obvious than that in the other cases; however, the five rectified images look similar to the original images because of the small amount of lens distortion. Moreover, the differences in the resolution of the images rectified using the different methods were minimal: the difference in resolution between the original image and each rectified image was less than 60 pixels.
The radial distortion and decentring distortion plots generated with the CV model are displayed in Figure 10. In each radial distortion plot, the red, blue and pink solid lines indicate k̂_1, k̂_2 and k̂_3, respectively, and the black dotted line indicates the overall radial distortion curve. In each decentring distortion plot, the red and blue solid lines indicate p̂_1 and p̂_2, respectively, and the black dotted line indicates the overall decentring distortion curve. The values on the x- and y-axes in each figure are in pixels, and in both plots the x-axis extends to 3500 pixels. The curves obtained using the CV method can be used as a reference to evaluate the fit of the curves in Cases 1 to 4.
For the radial distortion plots, the range of the y-axis is from −100 to 100 pixels. Although only three observation points were selected in Cases 1 and 2, the curves in Case 1 were steeper, indicating greater distortion for image points closer to the edge. The curves in Case 2 were consistent with the curves obtained using the CV method; therefore, using three observation points with an extensive distribution is the more feasible transformation strategy. The curves in Cases 3 and 4 were nearly identical, indicating that 12 evenly distributed observation points can achieve transformation results equivalent to those obtained using 4800 observation points. The curves in Cases 3 and 4 were also similar to those obtained using the CV method. For all methods, the k̂_1 curve decreased; as mentioned, the PH and CV models are inversely related, and this is evident in the radial distortion plot. For the decentring distortion plots, the range of the y-axis is from −0.001 to 0.001 pixels. Although the difference between the curves was small, the curve for the CV method generally had the steepest gradient.
In summary, the number and the distribution of the selected observation points both affect the transformation of ICPs. After these two factors were accounted for, the curves in Case 4 had the smallest gradients, indicating that the corresponding strategy achieved the most favorable results. This indicates that ICPs converted from image points selected at a set interval across the whole image are more suitable for the Sony camera. These ICPs also yielded better results than the original ICPs calibrated using the checkerboard.

4.2.2. GoPro Hero 4 Results

The results of Cases 1 to 4 for the GoPro camera are listed in Table 6. The focal length values obtained from the calibration and conversion methods differed greatly, by approximately 67 pixels. By contrast, the principal point values differed only slightly. Furthermore, the radial distortion of the GoPro camera is barrel distortion, so k̂_1 was also negative in the CV method. The results of the four cases were not completely consistent. Nevertheless, the corresponding ICP values acquired using the different methods were still similar, demonstrating the feasibility of our transformation algorithm.
Again, we generated rectified images using ICPs to evaluate which set of ICPs was superior. The rectified images and their corresponding resolutions are presented in Figure 11.
Because the lens distortion of the GoPro camera was much larger, the effects of rectification were much more noticeable. The rectified image in Case 1 was considerably deformed, but the rectified image in Case 2 was similar to the image rectified using the CV method, indicating that when only three observation points are selected, the distribution of the points is crucial. However, in the Case 2 image and the image obtained using the CV method, distortion was still visible in the corners; for example, the images depicted three traffic cones instead of two, and a twisted building. Regarding the rectified image in Case 3, although some slight distortion remained in the parts of the image depicting the road and tree, the rectified image was of higher quality than the original image and similar to the rectified image in Case 4. The Case 4 image still exhibited the least distortion, indicating that the number of observation points is also a key factor in the transformation. In addition, the resolution of the images rectified by these methods differed from that of their corresponding original images by at least 1300 pixels.
The radial distortion and decentring distortion plots defined in the field of CV are depicted in Figure 12. In each radial distortion plot, the red, blue and pink solid lines indicate k̂_1, k̂_2 and k̂_3, respectively, and the black dotted line indicates the overall radial distortion curve. In each decentring distortion plot, the red and blue solid lines indicate p̂_1 and p̂_2, respectively, and the black dotted line indicates the overall decentring distortion curve. The values on the x- and y-axes in each figure are in pixels, and the x-axis extends to 2500 pixels.
For the radial distortion plots, the range of the y-axis is −500 to 500 pixels. Because the distortion of the GoPro camera was larger, the overall curve decreased earlier than the corresponding curve for the Sony camera. The curves in Case 1 were steeper than those in Case 2 because of the larger k̂_1 and k̂_2 values. The curves in Case 2 closely resemble those obtained using the CV method; therefore, an extensive distribution of observation points is more effective for the transformation. The curves in Case 4 resemble those in Case 3, indicating that 12 evenly distributed observation points can achieve transformation results similar to those obtained using 4800 observation points. Overall, the k̂_1, k̂_2 and k̂_3 curves in Case 4 were smoother than those in the other cases and those obtained using the CV method. As indicated in Figure 11, the ICPs in Case 4 were sufficient for the accurate rectification of the images. For the decentring distortion plots, the range of the y-axis is from −0.001 to 0.001 pixels. The overall curve in Case 4 was still significantly smoother than the others.
In summary, the number and distribution of the observation points must both be considered in the transformation of ICPs. The results obtained in Case 4 were superior to those obtained in the other cases in terms of radial distortion and decentring distortion. This indicates that ICPs converted based on image points selected at a set interval across the whole image are more suitable for the GoPro camera. These ICPs were also better than the original ICPs calibrated by the checkerboard.

4.2.3. Comparison with the State of the Art

To validate the performance, the proposed method was compared with the state-of-the-art method presented by Hastedt et al. [21,22]. The Case 4 results of the proposed method were adopted for the comparison. Table 7 and Table 8 list the transformed ICPs resulting from both methods for the Sony and GoPro cameras, respectively. The rectified images generated by applying both methods are shown in Figure 13 and Figure 14, respectively.
Table 7 and Table 8 show that the values of k̂_1 resulting from the two methods differ markedly. Compared with the CV calibration results used for validation, the values of k̂_1 for both cameras should be negative rather than positive. Taking the difference between the converted k̂_1 value and the validated one, our method yields differences of 0.0130 and 0.0479 for the Sony and GoPro cameras, respectively, whereas the state-of-the-art method yields differences of 0.1478 and 0.5534. The improvement percentages of the transformation are therefore 91.2% and 91.3% for the Sony and GoPro cameras, respectively. The values of k̂_2 and k̂_3 are also quite different from the validated values. Notably, this improvement affects image rectification significantly: the rectified images resulting from both methods, shown in Figure 13 and Figure 14, reveal an apparent improvement in image rectification when our method is used, especially for the GoPro camera. In conclusion, our method outperforms the state-of-the-art method when a camera with severe lens distortion is used, although it requires more computational effort.
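For reference, the quoted improvement percentages are consistent with computing the relative reduction of the k̂_1 error; for the Sony camera, for instance:

$$\frac{0.1478 - 0.0130}{0.1478} \times 100\% \approx 91.2\%$$

and likewise $(0.5534 - 0.0479)/0.5534 \approx 91.3\%$ for the GoPro camera.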

4.3. ICP Transformation Results: From the CV to PH Standard

Our results indicate that the interval-based selection of image points is preferable. Therefore, only the interval selection strategy was included for comparison in this part of the experiment. The ICPs of the Sony and GoPro cameras generated with the CV model were transformed into PH-type ICPs. Table 9 lists the results for the Sony camera, and Table 10 those for the GoPro camera.
In these two tables, the converted ICPs are comparable to the original ICPs. We also tested the simplified adjustment model in which the affinity factor is ignored; the converted ICPs remain the same even if b_1 is not included in the conversion. This demonstrates the feasibility of our proposed algorithm. For the Sony camera, the focal length, principal point and distortion parameters were all similar. For the GoPro camera, the difference in the principal points was smaller, but the differences in the focal length and lens distortion parameters were clearly larger, especially for k_1 and k_2. These results were very similar to those for the PH-to-CV transformation. The affinity parameter b_1 is very small; based on Equation (7), its influence is insignificant compared with the other distortion parameters, which is why it is often ignored in the field of PH.
Using Equations (4)–(6), the correction amounts for lens distortion can be calculated. We checked the correction amounts at four selected points on the original image and compared the cases using the original ICPs (PH) and the converted ICPs (CV to PH). Table 11 and Table 12 display the numerical correction results for the Sony and GoPro cameras, respectively. The correction vectors are shown in Figure 15 and Figure 16, respectively, in which blue dots indicate the selected points and red vectors indicate the corresponding corrections. For both cameras, the numerical differences between the compared cases are not significant. This comparison indicates that the proposed method for converting ICPs from the CV standard to the PH standard is applicable.

5. Discussion

Nowadays, users of images for measurement, mapping or navigation frequently apply mixed software packages developed in the fields of PH and CV. For example, someone may apply a camera calibration software package developed in the field of PH and then apply CV software to recover the image position and orientation. Under these circumstances, parameter transformation is needed to bridge the software; this paper aims to clarify the camera model definitions and to propose a complete ICP transformation. With this approach, image users will be able to apply mixed software; an imagined example of this workflow is depicted in Figure 17. Consequently, further studies and applications could be generated in both fields. In addition, rectified images can be generated efficiently regardless of whether the camera calibration is performed using a PH or a CV model.

6. Conclusions

This paper expounds the mathematical camera models commonly applied in the fields of PH and CV. The discussion focuses on the different definitions of the ICPs, including the focal length, principal point, lens distortion, affinity and shear. This discussion should help researchers interpret ICPs obtained from software packages developed in either field.
Based on this discussion, we developed a least-squares adjustment method to transform ICPs between the parameter conventions of the two fields. This method converts all ICPs usually applied to modern digital cameras, including the radial distortion, decentring distortion, affinity and shear parameters. Because both transformation models are linear, the calculation process is relatively efficient. The proposed method was verified with a Sony single-lens camera and a GoPro camera, both calibrated with camera calibration software typically used in the fields of PH and CV. Successful transformations of the ICPs used in both fields were demonstrated. The accuracy of the transformation was slightly affected by the selection of image points. In general, selecting image points at regular intervals across the entire image achieved the most favorable results, because this selection strategy accounts for both the number and the distribution of the observation points.
In addition, the comparison with the state of the art demonstrates that the proposed method has superior performance. For the conversion of the major radial distortion parameter (k̂_1), the improvement percentages are 91.2% and 91.3% for the Sony and GoPro cameras, respectively. We consequently confirm that our method improves the conversion of ICPs between the models.
Camera calibration software applied in the field of CV commonly performs a self-calibrating bundle adjustment of overlapping images captured of a specially designed checkerboard. Consequently, the detected feature points are distributed approximately on a plane in the object space. By contrast, coded targets distributed in three-dimensional space are typically applied in the field of PH. The distribution of the detected object points can be considered the strength of the control space, which affects the accuracy of the derived ICPs. This effect is evident in the test that transformed ICPs obtained using CV software into ICPs in the field of PH: the rectified images are noticeably distorted in their border areas. Therefore, where accurate ICPs are required, the photogrammetric camera calibration method is preferable for such applications.

Author Contributions

Conceptualization, K.-Y.L. and Y.-H.T.; methodology, K.-Y.L. and Y.-H.T.; software, K.-Y.L.; validation, K.-Y.L.; formal analysis, K.-Y.L.; investigation, K.-Y.L.; resources, Y.-H.T. and K.-W.C.; data curation, K.-Y.L.; writing—original draft preparation, K.-Y.L.; writing—review and editing, K.-Y.L. and Y.-H.T.; visualization, K.-Y.L.; supervision, Y.-H.T. and K.-W.C.; project administration, Y.-H.T. and K.-W.C.; funding acquisition, Y.-H.T. and K.-W.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank the National Science and Technology Council of Taiwan (109-2121-M-006-012-MY3) for the sponsorship.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Shirmohammadi, S.; Ferrero, A. Camera as the instrument: The rising trend of vision based measurement. IEEE Instrum. Meas. Mag. 2014, 17, 41–47.
2. Brown, D.C. Advanced Methods for the Calibration of Metric Cameras; DBA Systems Inc.: Melbourne, FL, USA, 1968.
3. Brown, D.C. Close-range camera calibration. Photogramm. Eng. 1971, 37, 855–866.
4. Luhmann, T.; Fraser, C.; Maas, H.-G. Sensor modelling and camera calibration for close-range photogrammetry. ISPRS J. Photogramm. Remote Sens. 2016, 115, 37–46.
5. Beyer, H.A. Some aspects of the geometric calibration of CCD-cameras. In Proceedings of the ISPRS Intercommission Conference on Fast Processing of Photogrammetric Data, Interlaken, Switzerland, 2–4 June 1987.
6. Luhmann, T.; Hastedt, H.; Tecklenburg, W. Modelling of chromatic aberration for high precision photogrammetry. In Proceedings of the ISPRS Commission V Symposium 'Image Engineering and Vision Metrology', Dresden, Germany, 25–27 September 2006; pp. 173–178.
7. Schneider, D.; Schwalbe, E.; Maas, H.-G. Validation of geometric models for fisheye lenses. ISPRS J. Photogramm. Remote Sens. 2009, 64, 259–266.
8. Lachat, E.; Macher, H.; Landes, T.; Grussenmeyer, P. Assessment and calibration of a RGB-D camera (Kinect v2 Sensor) towards a potential use for close-range 3D modeling. Remote Sens. 2015, 7, 13070–13097.
9. Daakir, M.; Zhou, Y.; Deseilligny, M.P.; Thom, C.; Martin, O.; Rupnik, E. Improvement of photogrammetric accuracy by modeling and correcting the thermal effect on camera calibration. ISPRS J. Photogramm. Remote Sens. 2019, 148, 142–155.
10. Luna, C.A.; Mazo, M.; Lázaro, J.L.; Vazquez, J.F. Calibration of line-scan cameras. IEEE Trans. Instrum. Meas. 2009, 59, 2185–2190.
11. Clarke, T.A.; Fryer, J.G. The development of camera calibration methods and models. Photogramm. Rec. 1998, 16, 51–66.
12. Tsai, R. A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE J. Robot. Autom. 1987, 3, 323–344.
13. Heikkila, J.; Silven, O. Calibration procedure for short focal length off-the-shelf CCD cameras. In Proceedings of the 13th International Conference on Pattern Recognition, Washington, DC, USA, 25–29 August 1996; pp. 166–170.
14. Slama, C.C. Manual of Photogrammetry; American Society of Photogrammetry: Falls Church, VA, USA, 1980.
15. Heikkila, J.; Silvén, O. A four-step camera calibration procedure with implicit image correction. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Juan, PR, USA, 17–19 June 1997; pp. 1106–1112.
16. Wei, G.-Q.; De Ma, S. Implicit and explicit camera calibration: Theory and experiments. IEEE Trans. Pattern Anal. Mach. Intell. 1994, 16, 469–480.
17. Forsyth, D.A.; Ponce, J. Computer Vision: A Modern Approach; Prentice Hall: Hoboken, NJ, USA, 2002.
18. Hartley, R.; Zisserman, A. Multiple View Geometry in Computer Vision; Cambridge University Press: Cambridge, UK, 2003.
19. Wang, H.; Shen, S.; Lu, X. Comparison of the camera calibration between photogrammetry and computer vision. In Proceedings of the 2012 International Conference on System Science and Engineering (ICSSE), Dalian, China, 30 June–2 July 2012; pp. 358–362.
20. Drap, P.; Lefèvre, J. An exact formula for calculating inverse radial lens distortions. Sensors 2016, 16, 807.
21. Hastedt, H.; Luhmann, T. Investigations on the quality of the interior orientation and its impact in object space for UAV photogrammetry. In Proceedings of the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 2015 International Conference on Unmanned Aerial Vehicles in Geomatics, Toronto, ON, Canada, 30 August–2 September 2015; Volume XL-1/W4.
22. Hastedt, H.; Ekkel, T.; Luhmann, T. Evaluation of the quality of action cameras with wide-angle lenses in UAV photogrammetry. In Proceedings of the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, XXIII ISPRS Congress, Prague, Czech Republic, 12–19 July 2016; Volume XLI-B1.
23. Remondino, F.; Fraser, C. Digital camera calibration methods: Considerations and comparisons. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2006, 36, 266–272.
24. Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334.
25. Hartley, R.I. Self-calibration from multiple views with a rotating camera. In Proceedings of the Third European Conference on Computer Vision, Stockholm, Sweden, 2–6 May 1994; pp. 471–478.
26. Fraser, C.S. Automatic camera calibration in close range photogrammetry. Photogramm. Eng. Remote Sens. 2013, 79, 381–388.
27. Fraser, C.S. Digital camera self-calibration. ISPRS J. Photogramm. Remote Sens. 1997, 52, 149–159.
28. Rau, J.-Y.; Yeh, P.-C. A semi-automatic image-based close range 3D modeling pipeline using a multi-camera configuration. Sensors 2012, 12, 11271–11293.
29. Fraser, C.S.; Edmundson, K.L. Design and implementation of a computational processing system for off-line digital close-range photogrammetry. ISPRS J. Photogramm. Remote Sens. 2000, 55, 94–104.
30. Knight, J. Image Processing and Computer Vision with MATLAB and SIMULINK. 2014. Available online: https://ww2.mathworks.cn/content/dam/mathworks/mathworks-dot-com/solutions/automotive/files/uk-expo-2014/image-processing-computer-vision-in-matlab-simulink.pdf (accessed on 9 November 2022).
Figure 1. Geometry of perspective projection as defined in the field of PH: (a) the three coordinate frames involved and (b) the image coordinates of the principal point.
Figure 2. Two definitions of the image frame in the field of PH: (a) the first, in pixel units; (b) the second, in metric units.
Figure 3. Geometry of perspective projection as defined in the field of CV: (a) the three coordinate frames involved and (b) the image coordinates of the principal point.
Figure 4. Geometry of the normalized image frame in the field of CV.
Figure 5. Workflow of the transformation of PH ICPs to CV ICPs: (a) description of each step and (b) visualized framework.
Figure 6. Workflow of the transformation of CV ICPs to PH ICPs: (a) description of each step and (b) visualized framework.
Figure 7. Camera calibration tools: (a) rotating table with coded targets for the PH method; (b) checkerboard for the CV method.
Figure 8. Selected observation points in Cases 1 to 4 (Sony).
Figure 9. Rectified images generated using different methods (Sony).
Figure 10. Radial distortion and decentring distortion plots (Sony).
Figure 11. Rectified images generated using different methods (GoPro).
Figure 12. Radial distortion and decentring distortion plots (GoPro).
Figure 13. Comparison of rectified images (Sony).
Figure 14. Comparison of rectified images (GoPro).
Figure 15. Visual results of corrections (Sony): (a) using the original ICPs (PH); (b) using the converted ICPs (CV to PH).
Figure 16. Visual results of corrections (GoPro): (a) using the original ICPs (PH); (b) using the converted ICPs (CV to PH).
Figure 17. Envisioned workflow for using mixed software from the fields of PH and CV.
Table 1. Symbols and their meanings in the mathematical expressions.

| Photogrammetry: Symbol | Meaning | Computer Vision: Symbol | Meaning |
|---|---|---|---|
| $f$ | Focal length. | $f_x$, $f_y$ | Focal lengths related to the x and y directions. |
| $(c, r)$ | Coordinates of an image point in the image frame (pixel units). | $(c, r)$ | Coordinates of an image point in the image frame (pixel units). |
| $(c_p, r_p)$ | Coordinates of the principal point in the image frame (pixel units). | $(c_p, r_p)$ | Coordinates of the principal point in the image frame (pixel units). |
| $(c_c, r_c)$ | Coordinates of the image center in the image frame (pixel units). | $(c_c, r_c)$ | Coordinates of the image center in the image frame (pixel units). |
| $(x_i, y_i)$ | Coordinates of an image point in the second image frame (metric units). | - | - |
| $(x_p, y_p)$ | Coordinates of the principal point in the second image frame (metric units). | - | - |
| $k_1, k_2, k_3$ | Parameters of radial distortion. | $\hat{k}_1, \hat{k}_2, \hat{k}_3$ | Parameters of radial distortion. |
| $p_1, p_2$ | Parameters of decentring distortion. | $\hat{p}_1, \hat{p}_2$ | Parameters of decentring distortion. |
| $b_1$ | Parameter of affinity. | - | - |
| $b_2$ | Parameter of shear. | $s$ | Parameter of shear (skew). |
| $ds$ | Image pixel size. | $ds_x$, $ds_y$ | Image pixel sizes in the x and y directions. |
| $(x, y)$ | Coordinates of an image point in the camera frame, lens distortion not considered. | $(\hat{x}, \hat{y})$ | Coordinates of an image point in the camera frame, lens distortion not considered. |
| $(x_d, y_d)$ | Coordinates of distorted image points in the camera frame. | $(\hat{x}_d, \hat{y}_d)$ | Coordinates of distorted image points in the camera frame. |
| $(x_u, y_u)$ | Coordinates of undistorted image points in the camera frame. | $(\hat{x}_u, \hat{y}_u)$ | Coordinates of undistorted image points in the camera frame. |
| $\Delta x_{rad}(x_d, y_d)$, $\Delta y_{rad}(x_d, y_d)$ | Radial distortion model formed by distorted image points. | $\Delta \hat{x}_{rad}(\hat{x}_u, \hat{y}_u)$, $\Delta \hat{y}_{rad}(\hat{x}_u, \hat{y}_u)$ | Radial distortion model formed by undistorted image points. |
| $\Delta x_{dec}(x_d, y_d)$, $\Delta y_{dec}(x_d, y_d)$ | Decentring distortion model formed by distorted image points. | $\Delta \hat{x}_{dec}(\hat{x}_u, \hat{y}_u)$, $\Delta \hat{y}_{dec}(\hat{x}_u, \hat{y}_u)$ | Decentring distortion model formed by undistorted image points. |
| $\Delta x_{aff}(x_d, y_d)$, $\Delta y_{aff}(x_d, y_d)$ | Affine distortion model formed by distorted image points. | - | - |
| $r$ | Distance from a distorted image point to the principal point in the camera frame. | $\hat{r}$ | Distance from an undistorted image point to the principal point in the camera frame. |
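To make the conventions in Table 1 concrete, the sketch below contrasts the two distortion models in Python. It is illustrative only, not any particular package's implementation: exact sign conventions and the affinity/shear terms vary between software packages. The essential difference is that the PH model evaluates its polynomials at the distorted metric coordinates and adds a correction, whereas the CV model evaluates them at the undistorted normalized coordinates and maps them forward to their distorted positions.

```python
def ph_distortion(xd, yd, k1, k2, k3, p1, p2):
    """PH convention (Table 1, left): the radial and decentring polynomials
    are evaluated at the DISTORTED metric coordinates (xd, yd), already
    reduced to the principal point, and added as corrections."""
    r2 = xd * xd + yd * yd
    radial = k1 * r2 + k2 * r2**2 + k3 * r2**3
    dx = xd * radial + p1 * (r2 + 2 * xd * xd) + 2 * p2 * xd * yd
    dy = yd * radial + 2 * p1 * xd * yd + p2 * (r2 + 2 * yd * yd)
    return xd + dx, yd + dy  # corrected (undistorted) coordinates

def cv_distortion(xu, yu, k1_hat, k2_hat, k3_hat, p1_hat, p2_hat):
    """CV convention (Table 1, right): the polynomials are evaluated at the
    UNDISTORTED normalized coordinates (xu, yu) and map them forward to
    their distorted positions."""
    r2 = xu * xu + yu * yu
    radial = 1 + k1_hat * r2 + k2_hat * r2**2 + k3_hat * r2**3
    xd = xu * radial + 2 * p1_hat * xu * yu + p2_hat * (r2 + 2 * xu * xu)
    yd = yu * radial + p1_hat * (r2 + 2 * yu * yu) + 2 * p2_hat * xu * yu
    return xd, yd
```

Because the two models are evaluated at different coordinates (distorted versus undistorted), no simple coefficient relabelling maps one onto the other, which is what motivates the least-squares transformation proposed in this paper.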
Table 2. Camera specifications.

| Specification | Sony A6000 | GoPro Hero 4 |
|---|---|---|
| Focal length (mm) | 16 | 2.8 |
| Field of view (degrees) | 72 | 94 |
| Image resolution (pixels) | 6000 × 4000 | 4000 × 3000 |
| Pixel size (mm) | 0.0039 | 0.0015 |
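The entries of Table 2 are mutually consistent, which can be verified with a minimal sketch (assuming the listed field of view is the horizontal one; the helper name is ours):

```python
import math

def horizontal_fov_deg(f_mm: float, width_px: int, pixel_mm: float) -> float:
    """Horizontal field of view of an ideal pinhole camera (distortion ignored)."""
    half_width_mm = 0.5 * width_px * pixel_mm
    return math.degrees(2.0 * math.atan(half_width_mm / f_mm))

print(horizontal_fov_deg(16.0, 6000, 0.0039))  # ~72.3 deg, matching the Sony A6000 row
print(horizontal_fov_deg(2.8, 4000, 0.0015))   # ~93.9 deg, matching the GoPro Hero 4 row
```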
Table 3. Sony A6000 camera calibration.

| Photogrammetry: ICP | Value | Std. | Computer Vision: ICP | Value | Std. |
|---|---|---|---|---|---|
| $f$ (mm) | 15.8657 | ±0.002 | $f_x$ (pixel) | 4076.82 | ±1.43 |
| $c_0$ (pixel) | 2962.49 | ±0.26 | $f_y$ (pixel) | 4079.62 | ±1.43 |
| $r_0$ (pixel) | 1961.21 | ±0.26 | $c_0$ (pixel) | 2957.94 | ±1.41 |
| $k_1$ (mm⁻²) | 2.77 × 10⁻⁴ | ±8.63 × 10⁻⁷ | $r_0$ (pixel) | 1966.85 | ±1.46 |
| $k_2$ (mm⁻⁴) | 1.51 × 10⁻⁶ | ±1.18 × 10⁻⁸ | $\hat{k}_1$ (unitless) | −0.0782 | ±8.51 × 10⁻⁴ |
| $k_3$ (mm⁻⁶) | 3.15 × 10⁻¹⁰ | ±4.82 × 10⁻¹¹ | $\hat{k}_2$ (unitless) | 0.1190 | ±3.46 × 10⁻³ |
| $p_1$ (mm⁻¹) | 2.62 × 10⁻⁵ | ±9.22 × 10⁻⁷ | $\hat{k}_3$ (unitless) | −0.0185 | ±3.75 × 10⁻³ |
| $p_2$ (mm⁻¹) | 1.17 × 10⁻⁵ | ±6.86 × 10⁻⁷ | $\hat{p}_1$ (unitless) | 1.13 × 10⁻⁴ | ±1.02 × 10⁻⁴ |
| $b_1$ | 0 | - | $\hat{p}_2$ (unitless) | 6.87 × 10⁻⁴ | ±9.93 × 10⁻⁵ |
| $b_2$ | 0 | - | $s$ (unitless) | 0 | - |
Table 4. GoPro Hero 4 camera calibration.

| Photogrammetry: ICP | Value | Std. | Computer Vision: ICP | Value | Std. |
|---|---|---|---|---|---|
| $f$ (mm) | 2.7321 | ±0.001 | $f_x$ (pixel) | 1753.97 | ±1.37 |
| $c_0$ (pixel) | 1930.20 | ±0.67 | $f_y$ (pixel) | 1757.67 | ±1.33 |
| $r_0$ (pixel) | 1534.07 | ±0.67 | $c_0$ (pixel) | 1925.04 | ±0.92 |
| $k_1$ (mm⁻²) | 0.0412 | ±1.71 × 10⁻⁵ | $r_0$ (pixel) | 1533.72 | ±0.96 |
| $k_2$ (mm⁻⁴) | 3.95 × 10⁻⁴ | ±3.69 × 10⁻⁶ | $\hat{k}_1$ (unitless) | −0.2460 | ±4.99 × 10⁻⁴ |
| $k_3$ (mm⁻⁶) | 1.84 × 10⁻⁴ | ±2.48 × 10⁻⁷ | $\hat{k}_2$ (unitless) | 0.0711 | ±4.02 × 10⁻⁴ |
| $p_1$ (mm⁻¹) | 1.16 × 10⁻⁴ | ±2.16 × 10⁻⁶ | $\hat{k}_3$ (unitless) | −0.0095 | ±9.51 × 10⁻⁵ |
| $p_2$ (mm⁻¹) | 8.66 × 10⁻⁵ | ±1.88 × 10⁻⁶ | $\hat{p}_1$ (unitless) | 2.24 × 10⁻⁴ | ±5.27 × 10⁻⁵ |
| $b_1$ | 0 | - | $\hat{p}_2$ (unitless) | 2.23 × 10⁻⁴ | ±4.82 × 10⁻⁵ |
| $b_2$ | 0 | - | $s$ (unitless) | 0 | - |
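With square pixels, the metric PH focal length in Tables 3 and 4 and the pixel-denominated CV focal lengths differ only by the pixel size from Table 2. A minimal sketch (the helper name is ours) reproduces the converted focal lengths that appear in Tables 5 to 8:

```python
def focal_mm_to_px(f_mm: float, pixel_mm: float) -> float:
    """Convert a metric focal length to pixels; with square pixels, fx = fy = f / ds."""
    return f_mm / pixel_mm

print(f"{focal_mm_to_px(15.8657, 0.0039):.2f}")  # 4068.13 (Sony; cf. Tables 5 and 7)
print(f"{focal_mm_to_px(2.7321, 0.0015):.2f}")   # 1821.40 (GoPro; cf. Tables 6 and 8)
```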
Table 5. Cases 1 to 4 transformation results (Sony).

| ICP | CV | Case 1 | Case 2 | Case 3 | Case 4 |
|---|---|---|---|---|---|
| $f_x$ (pixel) | 4076.82 | 4068.13 | 4068.13 | 4068.13 | 4068.13 |
| $f_y$ (pixel) | 4079.62 | 4068.13 | 4068.13 | 4068.13 | 4068.13 |
| $c_0$ (pixel) | 2957.94 | 2962.49 | 2962.49 | 2962.49 | 2962.49 |
| $r_0$ (pixel) | 1966.85 | 1961.21 | 1961.21 | 1961.21 | 1961.21 |
| $\hat{k}_1$ (unitless) | −0.0782 | −0.0692 | −0.0691 | −0.0658 | −0.0652 |
| $\hat{k}_2$ (unitless) | 0.1190 | 0.0989 | 0.1013 | 0.0790 | 0.0779 |
| $\hat{k}_3$ (unitless) | −0.0185 | 0.0555 | −0.0107 | 0.0213 | 0.0217 |
| $\hat{p}_1$ (unitless) | 1.13 × 10⁻⁴ | 1.88 × 10⁻⁴ | 1.78 × 10⁻⁴ | 2.02 × 10⁻⁴ | 1.87 × 10⁻⁴ |
| $\hat{p}_2$ (unitless) | 6.87 × 10⁻⁴ | 4.08 × 10⁻⁴ | 4.03 × 10⁻⁴ | 4.50 × 10⁻⁴ | 4.20 × 10⁻⁴ |
| $s$ (unitless) | 0 | 0 | 0 | 0 | 0 |
Table 6. Cases 1 to 4 transformation results (GoPro).

| ICP | CV | Case 1 | Case 2 | Case 3 | Case 4 |
|---|---|---|---|---|---|
| $f_x$ (pixel) | 1753.97 | 1821.40 | 1821.40 | 1821.40 | 1821.40 |
| $f_y$ (pixel) | 1757.67 | 1821.40 | 1821.40 | 1821.40 | 1821.40 |
| $c_0$ (pixel) | 1925.04 | 1930.20 | 1930.20 | 1930.20 | 1930.20 |
| $r_0$ (pixel) | 1533.72 | 1534.07 | 1534.07 | 1534.07 | 1534.07 |
| $\hat{k}_1$ (unitless) | −0.2460 | −0.3075 | −0.2736 | −0.2027 | −0.1981 |
| $\hat{k}_2$ (unitless) | 0.0711 | 0.2555 | 0.1042 | 0.0326 | 0.0296 |
| $\hat{k}_3$ (unitless) | −0.0095 | −0.2415 | −0.0188 | −0.0020 | −0.0016 |
| $\hat{p}_1$ (unitless) | 2.24 × 10⁻⁴ | 2.13 × 10⁻⁴ | 1.30 × 10⁻⁴ | 5.02 × 10⁻⁴ | 1.49 × 10⁻⁵ |
| $\hat{p}_2$ (unitless) | 2.23 × 10⁻⁴ | 2.82 × 10⁻⁴ | 1.93 × 10⁻⁴ | 5.65 × 10⁻⁴ | 4.75 × 10⁻⁵ |
| $s$ (unitless) | 0 | 0 | 0 | 0 | 0 |
Table 7. Comparison of transformation results (Sony).

| ICP | CV | PH to CV (State of the Art) | PH to CV (Proposed) |
|---|---|---|---|
| $f_x$ (pixel) | 4076.82 | 4068.13 | 4068.13 |
| $f_y$ (pixel) | 4079.62 | 4068.13 | 4068.13 |
| $c_0$ (pixel) | 2957.94 | 2962.49 | 2962.49 |
| $r_0$ (pixel) | 1966.85 | 1961.21 | 1961.21 |
| $\hat{k}_1$ (unitless) | −0.0782 | 0.0696 | −0.0652 |
| $\hat{k}_2$ (unitless) | 0.1190 | −0.0959 | 0.0779 |
| $\hat{k}_3$ (unitless) | −0.0185 | −0.0050 | 0.0217 |
| $\hat{p}_1$ (unitless) | 1.13 × 10⁻⁴ | 1.85 × 10⁻⁴ | 1.87 × 10⁻⁴ |
| $\hat{p}_2$ (unitless) | 6.87 × 10⁻⁴ | 4.15 × 10⁻⁴ | 4.20 × 10⁻⁴ |
Table 8. Comparison of transformation results (GoPro).

| ICP | CV | PH to CV (State of the Art) | PH to CV (Proposed) |
|---|---|---|---|
| $f_x$ (pixel) | 1753.97 | 1821.40 | 1821.40 |
| $f_y$ (pixel) | 1757.67 | 1821.40 | 1821.40 |
| $c_0$ (pixel) | 1925.04 | 1930.20 | 1930.20 |
| $r_0$ (pixel) | 1533.72 | 1534.07 | 1534.07 |
| $\hat{k}_1$ (unitless) | −0.2460 | 0.3074 | −0.1981 |
| $\hat{k}_2$ (unitless) | 0.0711 | 0.0220 | 0.0296 |
| $\hat{k}_3$ (unitless) | −0.0095 | 0.0766 | −0.0016 |
| $\hat{p}_1$ (unitless) | 2.24 × 10⁻⁴ | 2.37 × 10⁻⁴ | 1.49 × 10⁻⁵ |
| $\hat{p}_2$ (unitless) | 2.23 × 10⁻⁴ | 3.17 × 10⁻⁴ | 4.75 × 10⁻⁵ |
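The magnitudes in the "State of the Art" columns of Tables 7 and 8 are consistent with the common closed-form rescaling between the metric and normalized conventions, in which each radial coefficient is scaled by the matching power of the focal length and each decentring coefficient by the focal length itself (with the decentring indices exchanged, following the axis-ordering difference between the two models). The sketch below is our reconstruction of that simplified conversion, not the authors' code; signs can additionally flip between model conventions, so only magnitudes should be compared.

```python
def simplified_ph_to_cv(f_mm, k1, k2, k3, p1, p2):
    """Closed-form rescaling of metric PH distortion coefficients to the
    dimensionless CV convention (magnitudes only; signs are convention-dependent)."""
    return {
        "k1_hat": k1 * f_mm**2,  # mm^-2 -> unitless
        "k2_hat": k2 * f_mm**4,  # mm^-4 -> unitless
        "k3_hat": k3 * f_mm**6,  # mm^-6 -> unitless
        "p1_hat": p2 * f_mm,     # decentring pair exchanged between conventions
        "p2_hat": p1 * f_mm,
    }

# GoPro Hero 4 magnitudes from Table 4:
print(simplified_ph_to_cv(2.7321, 0.0412, 3.95e-4, 1.84e-4, 1.16e-4, 8.66e-5))
# -> k1_hat ~0.3075, k2_hat ~0.0220, k3_hat ~0.0765, p1_hat ~2.37e-4, p2_hat ~3.17e-4,
#    matching the magnitudes in the "State of the Art" column of Table 8.
```

Because this rescaling ignores that the two models are evaluated at distorted versus undistorted coordinates, the proposed least-squares transformation (the "Proposed" columns) departs from it most strongly for the higher-order terms.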
Table 9. Overall transformation results (Sony).

| ICP | PH | CV to PH | Difference |
|---|---|---|---|
| $f$ (mm) | 15.8657 | 15.9105 | 0.0448 |
| $c_0$ (pixel) | 2962.49 | 2957.94 | −4.55 |
| $r_0$ (pixel) | 1961.21 | 1966.85 | 5.64 |
| $k_1$ (mm⁻²) | 2.77 × 10⁻⁴ | 3.25 × 10⁻⁴ | 4.89 × 10⁻⁵ |
| $k_2$ (mm⁻⁴) | 1.51 × 10⁻⁶ | 2.09 × 10⁻⁶ | 5.81 × 10⁻⁷ |
| $k_3$ (mm⁻⁶) | 3.15 × 10⁻¹⁰ | 1.91 × 10⁻⁹ | 2.22 × 10⁻⁹ |
| $p_1$ (mm⁻¹) | 2.62 × 10⁻⁵ | 4.35 × 10⁻⁵ | 1.74 × 10⁻⁵ |
| $p_2$ (mm⁻¹) | 1.17 × 10⁻⁵ | 7.31 × 10⁻⁶ | 1.90 × 10⁻⁵ |
| $b_1$ | 0 | 7.25 × 10⁻⁶ | 7.25 × 10⁻⁶ |
| $b_2$ | 0 | 0 | 0 |
Table 10. Overall transformation results (GoPro).

| ICP | PH | CV to PH | Difference |
|---|---|---|---|
| $f$ (mm) | 2.7321 | 2.6365 | −0.0956 |
| $c_0$ (pixel) | 1930.20 | 1925.04 | −5.16 |
| $r_0$ (pixel) | 1534.07 | 1533.72 | −0.35 |
| $k_1$ (mm⁻²) | 0.0412 | 0.0323 | −0.0089 |
| $k_2$ (mm⁻⁴) | 3.95 × 10⁻⁴ | 0.0042 | 0.0038 |
| $k_3$ (mm⁻⁶) | 1.84 × 10⁻⁴ | 1.31 × 10⁻⁴ | 3.16 × 10⁻⁴ |
| $p_1$ (mm⁻¹) | 1.16 × 10⁻⁴ | 2.34 × 10⁻⁴ | 3.51 × 10⁻⁴ |
| $p_2$ (mm⁻¹) | 8.66 × 10⁻⁵ | 2.23 × 10⁻⁴ | 3.09 × 10⁻⁴ |
| $b_1$ | 0 | 4.03 × 10⁻⁴ | 4.03 × 10⁻⁴ |
| $b_2$ | 0 | 0 | 0 |
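As a quick consistency check on the focal-length rows of Tables 9 and 10, the CV pixel focal lengths can be expressed in millimetres through the pixel size (a sketch; the helper name is ours). The two values bracket the converted PH focal length, and in both tables the converted f coincides numerically with fy · ds:

```python
def cv_focals_to_mm(fx_px, fy_px, pixel_mm):
    """Express the CV pixel-unit focal lengths in millimetres (f = f_px * ds)."""
    return fx_px * pixel_mm, fy_px * pixel_mm

print(cv_focals_to_mm(4076.82, 4079.62, 0.0039))  # ~(15.8996, 15.9105); Table 9 lists f = 15.9105
print(cv_focals_to_mm(1753.97, 1757.67, 0.0015))  # ~(2.6310, 2.6365); Table 10 lists f = 2.6365
```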
Table 11. Numerical results of corrections (Sony).

| Image Point | Original ICPs (PH): Correction in x (mm) | Correction in y (mm) | Total Correction on the Image (pixel) | Converted ICPs (CV to PH): Correction in x (mm) | Correction in y (mm) | Total Correction on the Image (pixel) |
|---|---|---|---|---|---|---|
| P1 | 0.0751 | 0.0868 | 29.44 | 0.0790 | 0.0911 | 30.92 |
| P2 | −0.0298 | 0.0247 | 9.93 | −0.0343 | 0.0277 | 11.31 |
| P3 | 0.0068 | 0.0026 | 1.88 | 0.0077 | 0.0028 | 2.09 |
| P4 | 0.0539 | −0.0653 | 21.71 | 0.0600 | −0.0745 | 24.53 |
Table 12. Numerical results of corrections (GoPro).

| Image Point | Original ICPs (PH): Correction in x (mm) | Correction in y (mm) | Total Correction on the Image (pixel) | Converted ICPs (CV to PH): Correction in x (mm) | Correction in y (mm) | Total Correction on the Image (pixel) |
|---|---|---|---|---|---|---|
| P1 | 0.1733 | 0.0891 | 129.91 | 0.1697 | 0.0864 | 126.93 |
| P2 | −0.0513 | −0.0008 | 34.21 | −0.0455 | −0.0010 | 30.36 |
| P3 | 0.0350 | −0.0324 | 31.79 | 0.0310 | −0.0292 | 28.40 |
| P4 | 0.0480 | −0.2633 | 178.42 | 0.0489 | −0.2691 | 182.33 |
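The "total correction" columns of Tables 11 and 12 follow directly from the per-axis metric corrections: the Euclidean norm of the correction vector divided by the pixel size from Table 2. A minimal sketch (the helper name is ours):

```python
import math

def total_correction_px(dx_mm, dy_mm, pixel_mm):
    """Magnitude of a distortion-correction vector, converted from mm to pixels."""
    return math.hypot(dx_mm, dy_mm) / pixel_mm

print(f"{total_correction_px(0.0751, 0.0868, 0.0039):.2f}")  # ~29.43 (Sony P1; Table 11 lists 29.44)
print(f"{total_correction_px(0.1733, 0.0891, 0.0015):.2f}")  # 129.91 (GoPro P1; Table 12)
```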