Article

3DupIC: An Underwater Scan Matching Method for Three-Dimensional Sonar Registration

1 INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, Rua Dr. Roberto Frias, 4200-465 Porto, Portugal
2 ISEP—School of Engineering, Polytechnic Institute of Porto, Rua Dr. António Bernardino de Almeida, 431, 4249-015 Porto, Portugal
3 FEUP—Faculty of Engineering, University of Porto, Rua Dr. Roberto Frias, 4200-465 Porto, Portugal
* Author to whom correspondence should be addressed.
Sensors 2022, 22(10), 3631; https://doi.org/10.3390/s22103631
Submission received: 24 March 2022 / Revised: 29 April 2022 / Accepted: 2 May 2022 / Published: 10 May 2022
(This article belongs to the Special Issue Autonomous Underwater Vehicle Navigation Ⅱ)

Abstract

This work presents a six degrees of freedom probabilistic scan matching method for registration of 3D underwater sonar scans. Unlike previous works, where local submaps are built to overcome measurement sparsity, our solution develops scan matching directly from the raw sonar data. Our method, based on the probabilistic Iterative Correspondence (pIC), takes measurement uncertainty into consideration while developing the registration procedure. A new probabilistic sensor model was developed to compute the uncertainty of each scan measurement individually. Initial displacement guesses are obtained from a probabilistic dead reckoning approach, also detailed in this document. Experiments, based on real data, demonstrate superior robustness and accuracy of our method with respect to the popular ICP algorithm. An improved trajectory is obtained by integration of scan matching updates in the localization data fusion algorithm, resulting in a substantial reduction of the original dead reckoning drift.

1. Introduction

The ability to accurately self-localize is essential to achieve fully autonomous robotic agents. At the base level, most localization solutions incorporate a dead reckoning mechanism, where relative displacement measurements serve to incrementally update the localization estimate. This recursive strategy is characterized by unbounded uncertainty growth, resulting from the successive integration of measurement errors. Hence, a periodic error reset is necessary to keep the localization uncertainty within acceptable limits. This is usually accomplished by accessing some absolute positioning infrastructure, such as the Global Navigation Satellite System (GNSS) outdoors and acoustic positioning networks underwater, or by sensing the surrounding environment through exteroceptive sensors. This last option is desirable, as it renders the robot self-sufficient and discards the need for external support infrastructures. In this article, we investigate the possibility of retrieving localization references from a 3D acoustic camera, by means of a scan matching procedure.
Given two range scans of the same scene, taken from different perspectives, scan matching is the process of computing the rigid body transformation that brings the scans together in the same reference frame. Assuming the sensor is fixed with respect to the robot frame, the transformation that best overlaps two scans corresponds to the relative robot displacement between scan acquisition poses. Hence, by registering consecutive scans, relative displacement measurements can be retrieved and used for dead reckoning, usually with better accuracy than odometry or inertial-based solutions. Moreover, whenever the robot revisits previously explored areas, scan matching can be attempted with respect to an old recorded scan, to perform loop closure and correct the accumulated localization error.
Despite the large body of work on scan matching, underwater applications have been barely covered, simply because underwater range sensors perform much worse than terrestrial ranging devices. Underwater mapping references come mainly from sonar, whose measurements lack resolution and are characterized by poor accuracy, low sampling rate, insufficient area coverage, substantial noise and significant outlier percentage.
Some of the limitations are surpassed in this work thanks to an exceptional 3D acoustic camera—the Echoscope 3D from Coda Octopus [1]—rarely available on board surveying AUVs, due to its high price and heavy weight. The Echoscope 3D is a high resolution profiling sonar capable of acquiring, from a single ping, a square 128 × 128 matrix of range measurements. The ability to observe a broad seafloor area ensures the possibility of capturing consecutive overlapping scans, making this device ideal for 3D scan matching.
Despite the remarkable measurement density, the Echoscope 3D is still a sonar-based sensor, whose measurements show substantial noise and angular uncertainty. Therefore, a new sensor model is applied to represent the uncertainty of each individual range measurement. Registration is accomplished with a new probabilistic scan matching variant, called 3D underwater probabilistic Iterative Correspondence (3DupIC), which takes both measurements and their uncertainty into consideration, to develop consistent data association and error minimization. Robot displacement estimates, between scan acquisition poses, are computed through a custom dead reckoning algorithm, and supplied as initial guesses to the 3DupIC. While other underwater implementations resort to submap building, the 3DupIC directly develops scan matching based on the raw sonar data. Moreover, our implementation develops scan matching in 6 degrees-of-freedom, allowing the correction of position and attitude localization states.

Related Work

From the extensive literature on scan matching, the Iterative Closest Point (ICP) [2] stands out as the most popular algorithm. It operates directly on the raw scan points, recursively refining the relative transformation, by minimizing the Euclidean distance between point-to-point matches. Repeating the expensive pairwise matching step in every iteration constitutes a major speed bottleneck [3]. Moreover, convergence to the global minimum is not guaranteed and the method is susceptible to outliers. Convergence becomes harder as sensor rotation increases, since the Euclidean distance metric does not capture the rotational component conveniently. Therefore, countless variants of the ICP algorithm have been proposed [4], either to increase speed [5] or improve robustness, convergence and accuracy [4,6,7,8].
Nevertheless, point-to-point correspondences give ICP the ability to generalize well over different environments, thus early attempts were made to use ICP for pairwise registration of 3D underwater scans [9,10], captured with an Echoscope acoustic camera [1], similar to the sensor used in the present work. Despite the highly structured testing environment and the substantial overlapping between consecutive scans, an outlier rejection stage was found to be essential to achieve convergence.
To the authors’ knowledge, there were no further developments on underwater scan matching using the Echoscope 3D until now. Nonetheless, several studies pursued a similar strategy of registering sonar scans using ICP, this time focusing on sparse ultrasonic devices, both indoors [11] and underwater [12]. The sparse nature of single-beam sonar data denies the immediate application of scan matching, as the data captured at a given instant provide insufficient information about the structure of the surrounding environment. As an alternative, [11,12,13] all resort to a submap-forming strategy, where sensor data, gathered over some period of time, are used to form a denser representation by grouping consecutive measurements based on dead reckoning.
However, sparsity is not the only weakness associated with sonar, and while modelling individual range measurements as point-like particles provides a plausible approximation for high-resolution terrestrial laser scanners, applying the same principle to noisier sonar references constitutes a simplistic and overconfident interpretation of the data. Even when dealing with indoor laser scans, the work of Pfister et al. [14] has shown that careful error modelling greatly improves convergence and accuracy. Following the same premise, Montesano et al. [15] presented a new scan matching method, which takes noise into consideration in every phase of the algorithm—the probabilistic Iterative Correspondence (pIC). Here, scan points are modelled as Gaussian random variables and the Mahalanobis distance establishes the matching criterion used to find statistically compatible point associations between scans. From the set of compatible points, a virtual association point is computed, in an effort to encapsulate all the information provided by that set. The squared Mahalanobis distance serves as the cost function to minimize the point matching errors through non-linear least squares. As opposed to the Euclidean distance, adopted by the traditional ICP, the Mahalanobis distance conveniently captures rotation information, making pIC more robust to sensor rotation. Experiments based on real laser scan data demonstrated the pIC’s superiority over standard scan matching techniques, converging more often, with higher accuracy, and requiring fewer iterations [15].
Soon after, variants of the pIC were developed for 2D scan matching with ultrasonic sensors, namely the sonar probabilistic Iterative Correspondence (spIC) [16], for indoor scenarios, and underwater implementations such as the MSISpIC [17] and the uspIC [18], both using mechanical scanning imaging sonars (MSIS). Given the sparse nature of sonar measurements, all these techniques incorporate a submap building strategy, whose formulation constitutes the main aspect of differentiation between them. In a direct comparison [18], using the same data set, the uspIC outperforms the MSISpIC due to several improvements, such as choosing a central submap origin to achieve a more balanced uncertainty distribution; adopting the Individual Compatibility Nearest Neighbor (ICNN) criterion for data association, which is preferable when dealing with noisy sonar readings [19]; and using the scan matching result to correct the dead reckoning drift accumulated during the submap forming process. The localization accuracy can be further enhanced by integrating probabilistic scan matching into Simultaneous Localization and Mapping (SLAM) solutions [20,21,22,23].
Subsequent improvements on data association were presented in [24], introducing a coarse registration, based on Monte Carlo pose sampling, for quick computation of better initial guesses for the robot displacement, in an effort to speed up and facilitate convergence. In a second stage, a fine alignment is accomplished based on a point-to-plane data association, known for performing better for small misalignments. Further, a grid-based search, which takes point uncertainty into consideration, is employed to reduce the matching effort from quadratic to linear complexity.
Unlike MSIS devices, which scan along the horizontal plane, multibeam profiling sonars are usually pointed down, to acquire swaths of the seabed, whose morphology is mostly flat and featureless. For such cases, Palomer et al. [25,26] propose an adaptive point sampling, based on an octree, with the objective of removing less informative points lying on flat areas, while preserving points in zones with a more salient structure. Although involving fewer points in scan matching helps to reduce the computational time, no substantial advantages in terms of accuracy and convergence were verified [25].
The literature shows that probabilistic scan matching techniques can effectively perform well in underwater environments based on noisy sonar data [17,18,20,21,22,23]. In addition to improving the mapping consistency, updates retrieved from probabilistic scan matching can also be exploited to reduce localization drift. However, this does not mean that a probabilistic scan matching method will always improve dead reckoning, especially when the dead reckoning solution is of high quality, as reported in [25]. The full potential of this technique is reached when integrated in a SLAM solution, where probabilistic scan matching can be used to close loops and bound the overall localization uncertainty.
This paper is organized as follows. Section 2 presents the 3DupIC scan matching method, starting by detailing the probabilistic model for a single ultrasonic beam. Individual beams are later combined to form the probabilistic scan model. The dead reckoning approach, to retrieve initial registration guesses, is also presented. Finally, the complete 3DupIC algorithm is described. Results are presented in Section 3, where the 3DupIC is evaluated and compared with the ICP algorithm. Finally, Section 4 recapitulates the main contributions and most important findings and establishes future developments.

2. The 3D Underwater Probabilistic Iterative Correspondence

The 3DupIC is a probabilistic scan matching method, which adapts the original pIC [15], to perform registration, in six degrees of freedom, of 3D scans acquired from an Echoscope 3D underwater sonar. With this purpose in mind, a dedicated probabilistic sensor model was developed, to conveniently represent the main sources of measurement uncertainty. In order to track the robot’s movement in between scan acquisitions, a dead reckoning solution, based on the Extended Kalman Filter (EKF), was also developed.

2.1. Probabilistic Sonar Modelling

An Echoscope 3D scan is composed of readings arranged in a 128 by 128 matrix, as illustrated in Figure 1. First, we derive the model for each scan point individually, adopting a conic beam-shaped ping, commonly associated with profiling sonars. Then, all independent beam models are placed in the sensor reference frame, to produce the probabilistic model for the entire scan.

2.1.1. Individual Beam Model

Considering the conic beam model in Figure 2, two main sources of uncertainty arise: the angular beam aperture $\alpha$ originates a target area that grows with the slant range $r$, introducing ambiguity in the plane normal to the beam direction; on the other hand, the intrinsic sensor range resolution $\eta$ comprises the main source of measurement uncertainty along the beam direction. The 3DupIC takes these parameters into account to model each individual sonar beam $i$, in the beam reference frame ($t$), as a Gaussian random vector $b_i^t = \mathcal{N}(\hat{b}_i^t, \Sigma_i^t)$. The mean $\hat{b}_i^t = [0, 0, r_i]^T$ is directly defined by the measured range $r_i$, along the Z-axis. In the beam reference frame, with the Z-axis passing through the center of the cone, as illustrated in Figure 2, the $i$th beam uncertainty is characterized by the following covariance matrix:
$$\Sigma_i^t = \begin{bmatrix} \left( r_i \tan(\alpha/2) \right)^2 & 0 & 0 \\ 0 & \left( r_i \tan(\alpha/2) \right)^2 & 0 \\ 0 & 0 & \left( \eta/2 \right)^2 \end{bmatrix} \qquad (1)$$
The first two diagonal elements of the covariance matrix characterize the uncertainty along the horizontal plane, associated with the size of the insonified area. This area takes a circular shape, with standard deviation given by the radius of the beam’s cone base, $r_i \tan(\alpha/2)$. The third diagonal element characterizes the uncertainty along the beam direction, with a standard deviation equal to half the range resolution.
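As a concrete illustration, the beam covariance of Equation (1) can be assembled directly from the three sensor parameters. The following Python sketch (function and variable names are ours, not from the paper) builds the matrix in the beam frame:

```python
import numpy as np

def beam_covariance(r, alpha, eta):
    """Covariance of one sonar beam in its own frame (Z-axis along the beam).

    r     -- measured slant range (m)
    alpha -- angular beam aperture (rad)
    eta   -- intrinsic sensor range resolution (m)
    """
    s_xy = r * np.tan(alpha / 2.0)  # std. dev. across the beam: insonified radius
    s_z = eta / 2.0                 # std. dev. along the beam direction
    return np.diag([s_xy**2, s_xy**2, s_z**2])
```

Note how the cross-beam uncertainty grows linearly with the range, while the along-beam uncertainty stays constant.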

2.1.2. Complete Scan Modeling

Now, let us consider a complete scan, provided by the sensor as a collection of $n$ three-dimensional points $\{ p_1^s, \ldots, p_n^s \}$, defined in the sensor reference frame ($s$). The probabilistic scan model consists of a collection of individual beams $S_{new}^s = \{ b_1^s, \ldots, b_n^s \}$, each one represented by a multivariate Gaussian distribution $b_i^s = \mathcal{N}(\hat{b}_i^s, \Sigma_i^s)$, centered on the sonar measurement ($\hat{b}_i^s = p_i^s$), with covariance matrix given by:
$$\Sigma_i^s = R_{t,i}^s \, \Sigma_i^t \, (R_{t,i}^s)^T \qquad (2)$$
where $R_{t,i}^s$ is a rotation matrix, from the beam reference frame ($t$) to the sensor reference frame ($s$), used to transform the $i$th beam covariance matrix $\Sigma_i^t$. This rotation is first formulated using a quaternion and later converted into the rotation matrix $R_{t,i}^s$. The method starts by computing the vector $v_q$, normal to the plane defined by the vectors $p_i^s$ and $v_z$, using the cross product:
$$v_q = p_i^s \times v_z \qquad (3)$$
where $\times$ denotes the cross product and $v_z$ is a direction vector aligned with the Z-axis of the sensor reference frame. The beam covariance matrix must be rotated about the vector $v_q$ by an angle $\alpha_q$, given by:
$$\alpha_q = \arccos\left( \frac{p_i^s \cdot v_z}{\| p_i^s \| \, \| v_z \|} \right) \qquad (4)$$
where $p_i^s \cdot v_z$ denotes the dot product between the vectors $p_i^s$ and $v_z$. From the direction vector $v_q$ and the angle $\alpha_q$, the quaternion $q_i^s$ is constructed:
$$q_i^s = \begin{bmatrix} q_0 \\ q_1 \\ q_2 \\ q_3 \end{bmatrix} = \begin{bmatrix} \cos(\alpha_q/2) \\ v_q \sin(\alpha_q/2) \end{bmatrix} \qquad (5)$$
The rotation matrix is finally obtained from the quaternion:
$$R_{t,i}^s = \begin{bmatrix} q_0^2 + q_1^2 - q_2^2 - q_3^2 & 2 q_1 q_2 - 2 q_0 q_3 & 2 q_1 q_3 + 2 q_0 q_2 \\ 2 q_1 q_2 + 2 q_0 q_3 & q_0^2 - q_1^2 + q_2^2 - q_3^2 & 2 q_2 q_3 - 2 q_0 q_1 \\ 2 q_1 q_3 - 2 q_0 q_2 & 2 q_2 q_3 + 2 q_0 q_1 & q_0^2 - q_1^2 - q_2^2 + q_3^2 \end{bmatrix} \qquad (6)$$
Since all beam directions in the sensor array remain constant between scans, the calculation of this rotation matrix is only performed once, as soon as the first valid measurement for the ith beam arrives.
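The axis-angle construction above can be sketched as follows. One convention detail to note: for the resulting matrix to actively map the beam-frame Z-axis onto the measured direction (as required by Equation (2)), the rotation axis is taken here as $v_z \times p_i^s$; the sign of the axis simply flips under the opposite convention. All names are ours:

```python
import numpy as np

def beam_rotation(p_s):
    """Rotation aligning the beam-frame Z-axis with the measured point p_s
    (sensor frame), built through an axis-angle quaternion."""
    p = np.asarray(p_s, dtype=float)
    v_z = np.array([0.0, 0.0, 1.0])
    axis = np.cross(v_z, p)                       # rotation axis (convention: v_z x p)
    a_q = np.arccos(np.clip(p @ v_z / np.linalg.norm(p), -1.0, 1.0))
    n = np.linalg.norm(axis)
    if n < 1e-12:                                 # beam already along +/- Z
        return np.eye(3) if p[2] >= 0 else np.diag([1.0, -1.0, -1.0])
    q0 = np.cos(a_q / 2.0)
    q1, q2, q3 = (axis / n) * np.sin(a_q / 2.0)   # unit axis scaled by sin(angle/2)
    return np.array([
        [q0**2 + q1**2 - q2**2 - q3**2, 2*q1*q2 - 2*q0*q3, 2*q1*q3 + 2*q0*q2],
        [2*q1*q2 + 2*q0*q3, q0**2 - q1**2 + q2**2 - q3**2, 2*q2*q3 - 2*q0*q1],
        [2*q1*q3 - 2*q0*q2, 2*q2*q3 + 2*q0*q1, q0**2 - q1**2 - q2**2 + q3**2],
    ])
```

Since the beam directions in the sensor array are fixed, this matrix can be computed once per beam and cached.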

2.1.3. Reference Frame Transformations

During the registration process, the scan needs to be converted to other reference frames, particularly to the navigation ($n$) frame, which implies passing through the intermediate body reference frame ($b$). The multivariate Gaussian distributions, with mean $\hat{b}_i^n$ and covariance $\Sigma_i^n$, for the $i$th beam in the navigation reference frame, are obtained as follows:
$$\hat{b}_i^n = R_b^n \underbrace{\left( R_s^b \hat{b}_i^s + t_s^b \right)}_{\hat{b}_i^b} + t_b^n \qquad (7)$$
$$\Sigma_i^n = R_b^n \underbrace{\left( R_s^b \, \Sigma_i^s \, (R_s^b)^T \right)}_{\Sigma_i^b} (R_b^n)^T \qquad (8)$$
where $R_s^b$ and $t_s^b$ define, respectively, the rotation matrix and translation vector from the sensor to the body reference frame. These parameters, obtained through a previous calibration procedure, allow the beam to be transformed to the body frame, $b_i^b = \mathcal{N}(\hat{b}_i^b, \Sigma_i^b)$. The transformation from the body to the navigation frame, encoded by the rotation matrix $R_b^n$ and the translation vector $t_b^n$, represents the robot localization estimate, determined by the dead reckoning method presented next.
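The two-stage compounding of Equations (7) and (8) reduces to a pair of similarity transforms; a minimal sketch (names ours):

```python
import numpy as np

def beam_to_nav(b_s, Sigma_s, R_sb, t_sb, R_bn, t_bn):
    """Compound a beam (mean, covariance) from the sensor frame to the
    navigation frame, passing through the body frame (Equations (7)-(8))."""
    b_b = R_sb @ b_s + t_sb                 # sensor -> body (calibration)
    Sigma_b = R_sb @ Sigma_s @ R_sb.T
    b_n = R_bn @ b_b + t_bn                 # body -> navigation (localization)
    Sigma_n = R_bn @ Sigma_b @ R_bn.T
    return b_n, Sigma_n
```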

2.2. Dead Reckoning Localization

To facilitate convergence, an initial guess of the robot displacement should be supplied to the scan matching method. Moreover, the 3DupIC takes the uncertainty about the robot localization into consideration to develop the probabilistic registration. Therefore, a probabilistic localization strategy was implemented, to compute the robot pose and the underlying uncertainty, using the Extended Kalman Filter (EKF). The system receives angular velocity measurements as input, from a triad of fiber optic gyroscopes (FOG), plus linear velocity observations from a Doppler Velocity Log (DVL). The state vector, with 9 states, $\hat{x}_k = [\hat{p}_k^n, \hat{\alpha}_k^n, \hat{\nu}_k^n]^T$, comprises the position $\hat{p}_k^n = [\hat{x}_k^n, \hat{y}_k^n, \hat{z}_k^n]$; the orientation, expressed in Euler angles, $\hat{\alpha}_k^n = [\hat{\phi}_k^n, \hat{\theta}_k^n, \hat{\psi}_k^n]$; and the velocity in the navigation reference frame, $\hat{\nu}_k^n = [\hat{u}_k^n, \hat{v}_k^n, \hat{w}_k^n]$. Inertial measurements are integrated in the prediction step, to compute the robot’s orientation, while velocity observations are fused in the update step.

2.2.1. State Prediction

The state prediction involves a constant velocity model to compute velocity and position, while orientation states are predicted using a simple inertial mechanization model:
$$\hat{x}_k = f(\hat{x}_{k-1}) = \begin{bmatrix} \hat{p}_k^n \\ \hat{\alpha}_k^n \\ \hat{\nu}_k^n \end{bmatrix} = \begin{bmatrix} \hat{p}_{k-1}^n + \hat{\nu}_{k-1}^n \Delta t \\ \hat{\alpha}_{k-1}^n + E_b^n \, \omega^b \, \Delta t \\ \hat{\nu}_{k-1}^n \end{bmatrix} \qquad (9)$$
where the matrix $E_b^n$ converts the angular velocities $\omega^b$, measured by the gyroscopes in the body frame, into the Euler angle rates of change:
$$E_b^n = \begin{bmatrix} 1 & \sin\hat{\phi}_{k-1}^n \tan\hat{\theta}_{k-1}^n & \cos\hat{\phi}_{k-1}^n \tan\hat{\theta}_{k-1}^n \\ 0 & \cos\hat{\phi}_{k-1}^n & -\sin\hat{\phi}_{k-1}^n \\ 0 & \sin\hat{\phi}_{k-1}^n \sec\hat{\theta}_{k-1}^n & \cos\hat{\phi}_{k-1}^n \sec\hat{\theta}_{k-1}^n \end{bmatrix} \qquad (10)$$
The covariance matrix is computed through:
$$P_k = F_k \, P_{k-1} \, F_k^T + Q_k \qquad (11)$$
where the diagonal covariance matrix $Q_k$ characterizes the noise associated with the propagation of each state individually, and the matrix $F_k$ is the Jacobian of the motion model $f(\hat{x}_{k-1})$ with respect to the localization states:
$$F_k = \begin{bmatrix} \frac{\partial \hat{p}_k^n}{\partial \hat{p}^n} & \frac{\partial \hat{p}_k^n}{\partial \hat{\alpha}^n} & \frac{\partial \hat{p}_k^n}{\partial \hat{\nu}^n} \\ \frac{\partial \hat{\alpha}_k^n}{\partial \hat{p}^n} & \frac{\partial \hat{\alpha}_k^n}{\partial \hat{\alpha}^n} & \frac{\partial \hat{\alpha}_k^n}{\partial \hat{\nu}^n} \\ \frac{\partial \hat{\nu}_k^n}{\partial \hat{p}^n} & \frac{\partial \hat{\nu}_k^n}{\partial \hat{\alpha}^n} & \frac{\partial \hat{\nu}_k^n}{\partial \hat{\nu}^n} \end{bmatrix} = \begin{bmatrix} I_{3\times3} & 0_{3\times3} & 0_{3\times3} \\ 0_{3\times3} & \frac{\partial \hat{\alpha}_k^n}{\partial \hat{\alpha}^n} & 0_{3\times3} \\ 0_{3\times3} & 0_{3\times3} & I_{3\times3} \end{bmatrix} \qquad (12)$$
where $I_{3\times3}$ represents a 3 by 3 identity matrix and $0_{3\times3}$ a 3 by 3 null matrix. To prevent correlation between the velocity and position states, a slight modification is performed in the top right corner, where the Jacobian expression (which would otherwise be $I_{3\times3} \Delta t$, from the constant velocity model) is discarded and replaced by a null matrix.
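Putting the prediction equations together, one EKF prediction step could look as follows (a sketch under our naming; using the identity for the orientation block of $F_k$ is our simplification of the symbolic Jacobian, not something the paper states):

```python
import numpy as np

def ekf_predict(x, P, omega_b, Q, dt):
    """Dead reckoning prediction step (Equations (9)-(12)).
    State x = [position (3), Euler angles phi,theta,psi (3), velocity (3)],
    all in the navigation frame except the gyro rates omega_b (body frame)."""
    phi, theta = x[3], x[4]
    # Euler-rate transformation matrix E_b^n (Equation (10))
    E = np.array([
        [1.0, np.sin(phi) * np.tan(theta), np.cos(phi) * np.tan(theta)],
        [0.0, np.cos(phi), -np.sin(phi)],
        [0.0, np.sin(phi) / np.cos(theta), np.cos(phi) / np.cos(theta)],
    ])
    x_pred = np.concatenate([
        x[0:3] + x[6:9] * dt,          # constant velocity position model
        x[3:6] + E @ omega_b * dt,     # inertial mechanization of orientation
        x[6:9],                        # velocity carried over unchanged
    ])
    # Jacobian F_k: identity blocks, with the position/velocity cross term
    # zeroed on purpose so those states do not become correlated.
    F = np.eye(9)
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred
```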

2.2.2. State Update

Before fusing the DVL measurements into the EKF, the measured velocities $\nu_{DVL}^b = [u_{DVL}^b, v_{DVL}^b, w_{DVL}^b]^T$, in the body frame, are transformed into the robot velocities in the navigation reference frame, $\nu_{DVL}^n = [u_{DVL}^n, v_{DVL}^n, w_{DVL}^n]^T$, using the current robot orientation:
$$\nu^n = R_b^n \, \nu^b \qquad (13)$$
In this way, a direct velocity observation is produced: $z_{DVL} = [u_{DVL}^n, v_{DVL}^n, w_{DVL}^n]^T$. Hence, the observation matrix $H$ becomes:
$$H = \begin{bmatrix} 0_{3\times3} & 0_{3\times3} & I_{3\times3} \end{bmatrix} \qquad (14)$$
The computation of the update state and covariance is performed according to the standard EKF expressions.
By performing the velocity reference frame conversion outside the EKF framework, the presented formulation prevents correlations from forming between the velocity and orientation states. Furthermore, in the prediction step, correlation between velocity and position is avoided. If this care were disregarded, the position and orientation would become cross-correlated with the velocity states and, through the update step, the robot position and orientation would be indirectly corrected. This is not desirable, as the nature of dead reckoning presupposes that the localization uncertainty increases monotonically.
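A sketch of the corresponding DVL update, with the velocity rotated to the navigation frame before entering the filter so that only the three velocity states are observed (names ours; `R_dvl` stands for the assumed measurement noise covariance):

```python
import numpy as np

def ekf_update_dvl(x, P, v_dvl_b, R_bn, R_dvl):
    """EKF update with a DVL velocity observation (Equations (13)-(14))."""
    z = R_bn @ v_dvl_b                             # nav-frame velocity observation
    H = np.hstack([np.zeros((3, 6)), np.eye(3)])   # observes states 6..8 only
    y = z - H @ x                                  # innovation
    S = H @ P @ H.T + R_dvl                        # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(9) - K @ H) @ P
    return x_new, P_new
```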

2.3. Probabilistic Scan Matching

Let us consider a reference scan $S_{ref}^n = \{ r_1^n, \ldots, r_m^n \,|\, r_j^n = \mathcal{N}(\hat{r}_j^n, \Sigma_j^n) \}$, composed of $m$ measurements, modeled according to our probabilistic sensor model and defined in the navigation reference frame. Eventually, the robot moves and a new scan is acquired. Let us refer to this new scan as the target scan $S_{target}^s = \{ t_1^s, \ldots, t_o^s \,|\, t_i^s = \mathcal{N}(\hat{t}_i^s, \Sigma_i^s) \}$, with $o$ measurements, defined in the sensor reference frame.
Ideally, compounding each point of the new scan with the localization estimate, following Equation (7), produces a perfect registration. Unfortunately, in reality, several sources of uncertainty such as localization error, scan noise and wrong reference frame calibrations, lead to an inaccurate result.
The 3DupIC aims to enhance the registration consistency and, at the same time, retrieve localization corrections to improve the dead reckoning solution. The method proceeds in two steps. First, for each point in the target scan, the statistically closest point in the reference scan is identified. By repeating the same process for every point, a set of scan matches is constructed. In a second stage, the optimization procedure refines the transformation, by minimizing the distances between the matched point pairs. Both steps repeat until convergence.

2.3.1. Finding Compatible Points

For each point $t_i^s$ in the target scan, all points in $S_{ref}^n$ are tested for similarity using the squared Mahalanobis distance:
$$d_{ij}^2 = \varrho_{ij}^T \, \Sigma_{ij}^{-1} \, \varrho_{ij} \qquad (15)$$
where:
$$\varrho_{ij} = f(\hat{x}_k, \hat{t}_i^s) - \hat{r}_j^n \qquad (16)$$
represents the error between the $j$th reference point and the $i$th target point, the latter projected into the navigation frame through the function $f(\hat{x}_k, \hat{t}_i^s)$, defined in Equation (7). The covariance matrix $\Sigma_{ij}$, characterizing the matching uncertainty, is calculated using Equation (17):
$$\Sigma_{ij} = \Sigma_j^n + \Sigma_i^n + J_x(\hat{x}_k, \hat{t}_i^b) \, P_k \, J_x(\hat{x}_k, \hat{t}_i^b)^T \qquad (17)$$
The covariance matrices $\Sigma_j^n$ and $\Sigma_i^n$ encode the uncertainties of the scan point pair, computed using Equation (8). Since the localization uncertainty affects the matching result, its contribution to the matching uncertainty is given by the term $J_x(\hat{x}_k, \hat{t}_i^b) \, P_k \, J_x(\hat{x}_k, \hat{t}_i^b)^T$, where $J_x(\hat{x}_k, \hat{t}_i^b)$ is the Jacobian of the error function (Equation (16)) with respect to the localization states, evaluated at the current robot location $\hat{x}_k$ and at the scan point in the body frame $\hat{t}_i^b$. The matrix $P_k$ refers to the localization covariance matrix.
The error $\varrho_{ij}$ is assumed to be Gaussian; therefore, the squared Mahalanobis distance follows a chi-squared distribution $\chi_q^2$, where $q$ is the dimensionality of the vector $\varrho_{ij}$, in this case 3. A point $r_j^n$ from the reference scan is statistically compatible with a point $t_i^s$ if it passes the chi-squared test, i.e., if the squared Mahalanobis distance is less than the inverse chi-squared cumulative function $\chi_{q,\alpha}^2$, evaluated for a given confidence level $\alpha$. Additionally, from the set of compatible points, only the one with the smallest Mahalanobis distance is selected to form the match pair with $t_i^s$. A point pair is thus defined as $\langle r_j^n, t_i^s \rangle : r_j^n = \arg\min_j d_{ij}^2 \,\wedge\, d_{ij}^2 < \chi_{q,\alpha}^2$.
By repeating this search for all points in the target scan, and provided there is sufficient scan overlap, a set of point matches $M = \{ \langle r_{\kappa_1}^n, t_{\kappa_1}^s \rangle, \ldots, \langle r_{\kappa_n}^n, t_{\kappa_n}^s \rangle \}$ is computed, where, for simplicity, $\kappa_i$ represents the index pairing for the $i$th correspondence.
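The individual compatibility search can be sketched as a nested loop over both scans (a quadratic-complexity illustration with our naming; for brevity it folds only the two beam covariances into the match covariance, whereas Equation (17) also adds the localization term):

```python
import numpy as np

CHI2_95_3DOF = 7.8147  # 95% quantile of the chi-squared distribution, 3 DOF

def find_matches(target_beams, ref_beams, gate=CHI2_95_3DOF):
    """Match each target beam to the statistically closest reference beam,
    gated by a chi-squared compatibility test. Each beam is a
    (mean, covariance) pair already expressed in the navigation frame."""
    matches = []
    for i, (t_mean, t_cov) in enumerate(target_beams):
        best_j, best_d2 = None, gate
        for j, (r_mean, r_cov) in enumerate(ref_beams):
            e = t_mean - r_mean
            d2 = e @ np.linalg.solve(t_cov + r_cov, e)  # squared Mahalanobis distance
            if d2 < best_d2:
                best_j, best_d2 = j, d2
        if best_j is not None:
            matches.append((best_j, i))  # pair <reference index, target index>
    return matches
```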

2.3.2. Optimization

The optimization step aims to refine the robot displacement by minimizing the sum of squared Mahalanobis distances between corresponding points:
$$\hat{x}_{3DupIC} = \arg\min_{\hat{x}_k} \sum_i \varrho_{\kappa_i}^T \, \Sigma_{\kappa_i}^{-1} \, \varrho_{\kappa_i} \qquad (18)$$
Fortunately, Equation (18) has a closed-form solution given by:
$$\hat{x}_{3DupIC} = (J^T Q J)^{-1} J^T Q A \qquad (19)$$
where:
$$J = \begin{bmatrix} J_x(\hat{x}_k, \hat{t}_{\kappa_1}^s) \\ J_x(\hat{x}_k, \hat{t}_{\kappa_2}^s) \\ \vdots \\ J_x(\hat{x}_k, \hat{t}_{\kappa_n}^s) \end{bmatrix}, \qquad A = \begin{bmatrix} J_x(\hat{x}_k, \hat{t}_{\kappa_1}^s) \, \hat{x}_k - \varrho_{\kappa_1} \\ J_x(\hat{x}_k, \hat{t}_{\kappa_2}^s) \, \hat{x}_k - \varrho_{\kappa_2} \\ \vdots \\ J_x(\hat{x}_k, \hat{t}_{\kappa_n}^s) \, \hat{x}_k - \varrho_{\kappa_n} \end{bmatrix} \qquad (20)$$
and the matrix $Q$ is a block diagonal matrix containing the inverses of the covariance matrices (Equation (17)) that characterize the uncertainty of each pair:
$$Q = \begin{bmatrix} \Sigma_{\kappa_1}^{-1} & & & \\ & \Sigma_{\kappa_2}^{-1} & & \\ & & \ddots & \\ & & & \Sigma_{\kappa_n}^{-1} \end{bmatrix} \qquad (21)$$
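Since $Q$ is block diagonal, the normal equations behind Equation (19) can be accumulated block by block, without ever materializing the full matrix. A small sketch (names ours):

```python
import numpy as np

def solve_displacement(J_blocks, A_blocks, Sigma_blocks):
    """Closed-form weighted least-squares solution of Equation (19),
    x = (J^T Q J)^{-1} J^T Q A, accumulated per match pair."""
    JtQJ = 0.0
    JtQA = 0.0
    for Jb, Ab, Sb in zip(J_blocks, A_blocks, Sigma_blocks):
        W = np.linalg.inv(Sb)          # one block of Q: the inverse match covariance
        JtQJ = JtQJ + Jb.T @ W @ Jb
        JtQA = JtQA + Jb.T @ W @ Ab
    return np.linalg.solve(JtQJ, JtQA)
```

With identical unit weights, the solution reduces to the ordinary least-squares average of the per-pair terms, which makes the weighting role of $Q$ explicit.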

3. Experimental Results

Due to the high computational complexity, the 3DupIC was coded in C++ and OpenCL, with most of the scan matching algorithm, including point matching and optimization, running on a Graphics Processing Unit (GPU). The achieved processing performance is satisfactory for offline testing, but still too slow for real time execution—a limitation to be addressed in the future. The tests presented in this article focus on assessing the accuracy and convergence of our probabilistic scan matching method.
To evaluate the 3DupIC’s performance, several tests were developed, based on real data collected during the field trials of the ¡VAMOS! project, at the Silvermines flooded quarry, in the Republic of Ireland. The dataset was acquired by the EVA AUV, equipped with an Echoscope 3D, plus a set of localization sensors, including a DVL, an IMU, a pressure sensor, a dual-antenna GNSS system [27] and acoustic positioning. In real time, a data fusion algorithm, detailed in [28], combines all sensor information to obtain an accurate localization estimate, used here as the ground truth reference. In order to verify the advantage of our probabilistic method, the results are also compared with the standard ICP method. For this purpose, the ICP implementation from the Point Cloud Library (PCL) [29] was employed. Additionally, an outlier rejection stage, based on Random Sample Consensus (RANSAC), was applied to filter wrong nearest neighbor correspondences before each ICP optimization.

3.1. Initial Displacement Perturbation Experiment

The first test validates the convergence of both methods in the presence of initial localization errors. Here, two identical scans are registered together, repeating the experiment for different initial displacement errors. The initial errors result from uniform sampling, with increments of ± 0.4 m, around the true pose, up to a maximum Euclidean distance of 2.25 m. Orientation errors were computed in a similar way, with increments of ± 0.5 degrees, until the Euclidean distance of the angular errors reaches 2.5 degrees. Figure 3 shows the set of initial errors induced in this experiment.
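One plausible reading of this sampling scheme, sketched in Python (a hypothetical helper; the paper does not specify the exact grid construction):

```python
import numpy as np
from itertools import product

def perturbation_grid(step, max_norm):
    """Grid of initial-error offsets: increments of +/- step along each axis,
    keeping only offsets whose Euclidean norm is within max_norm."""
    half = np.arange(0.0, max_norm + step / 2.0, step)
    ticks = np.concatenate([-half[:0:-1], half])   # symmetric around zero
    return [np.array(p) for p in product(ticks, repeat=3)
            if np.linalg.norm(p) <= max_norm]
```

With `step = 0.4` and `max_norm = 2.25` this would reproduce the position perturbations; `step = 0.5` and `max_norm = 2.5` the orientation ones.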
By registering one scan with itself, the impact of the localization error is eliminated, as both scans are acquired from the same robot pose. Moreover, the reference and target scans are equally affected by noise, making the supplied displacement perturbation the dominant error source. Therefore, the registration algorithm should converge to the global minimum, which in this case is the transformation corresponding to the initial induced error. Both registration methods run until the Euclidean distance between the position solutions of two consecutive iterations falls below $2 \times 10^{-4}$ m.
The initial perturbations introduced in the position states cause the final registration errors depicted in Figure 4, expressed as the Euclidean distance between the position and orientation states. It can be observed that the 3DupIC produces the most consistent results, always converging to a similar solution, while the ICP starts by providing more accurate results, but soon degrades in performance as the displacement error increases. For small initial disturbances, the ICP method outperforms the 3DupIC. This behavior may result from the localization covariance matrix supplied to the 3DupIC, which sets a high uncertainty margin to encompass the maximum initialization error. Since the covariance matrix is kept constant during the test, it provides an underconfident uncertainty representation for smaller initial perturbations, which could explain the 3DupIC’s inability to respond as accurately as the ICP in those situations. This demonstrates that a balanced uncertainty representation is essential for the 3DupIC to develop consistent data association and error minimization.
A clear difference is noticed when initial orientation errors are introduced (Figure 5). Once again, the 3DupIC consistently provides an accurate solution, whereas the ICP’s performance rapidly deteriorates as a result of poor orientation initialization. This observation confirms the difficulty of the ICP algorithm in responding appropriately to relative scan misalignment. In this particular case, we can argue that the ICP method diverges, since the registration error in Figure 5b is higher than the induced orientation error. Moreover, it can be concluded that, by relying on the Mahalanobis distance, the 3DupIC effectively captures the orientation component.

3.2. Short Trajectory Test

In the previous experiment, the same scan was registered with itself in an effort to eliminate the impact of scan and localization inaccuracies. However, in regular applications, scans do not overlap completely, due to robot motion, and noise affects each scan differently. In order to compare the ICP and 3DupIC methods in a more realistic scenario, different scan pairs are registered along a short robot trajectory. This time, the first acquired scan serves as reference, and all subsequent scans, obtained as the robot moves away, are registered with respect to this first reference scan. In this way, as the robot moves forward along its mission, the overlap between scans decreases. The uncertainty about the robot displacement, computed from the dead reckoning method, is supplied to the 3DupIC in order to establish consistent data association and error minimization.
Once again, the importance of accurate robot displacement initialization is evaluated by running the test twice for each method: once using the dead reckoning solution to initialize the scan matching procedure, and once running the registration without supplying an initial displacement estimate.
Results are compared with the ground truth to compute the registration errors shown in Figure 6. Ideally, the registration algorithms should recover the robot displacement between the scan acquisition poses, but in practice, Figure 6 shows that as the robot displacement increases and the scan overlap diminishes, the registration problem becomes harder and divergence occurs at some point. In this particular aspect, the 3DupIC demonstrates the capability to converge for longer relative scan displacements. It can also be verified that proper initialization delays the divergence of both methods. Finally, the 3DupIC provides the most accurate registration results, with satisfactory accuracy up to 5 m of displacement, which corresponds to a scan overlap of around 40%.
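For reference, position and orientation registration errors of this kind can be computed from 4 × 4 homogeneous transforms as in the sketch below. This is a generic formulation, written under the assumption that the errors plotted in Figure 6 are the translation distance and the residual rotation angle between the estimated and ground-truth transforms.

```python
import numpy as np

def registration_errors(T_est, T_gt):
    """Position error (Euclidean distance between translations) and
    orientation error (angle of the residual rotation) between an
    estimated and a ground-truth 4x4 homogeneous transform."""
    pos_err = np.linalg.norm(T_est[:3, 3] - T_gt[:3, 3])
    R_rel = T_est[:3, :3].T @ T_gt[:3, :3]        # residual rotation
    # rotation angle from the trace, clipped for numerical safety
    c = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    rot_err = np.arccos(c)
    return pos_err, rot_err
```

For example, an estimate offset by 1 m and rotated 90° about the vertical axis with respect to the ground truth yields a position error of 1 m and an orientation error of π/2 rad.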

3.3. Long-Term Performance

Finally, the transformation sequence obtained by registering consecutive scans with the 3DupIC is employed to correct the long-term robot trajectory. 3DupIC corrections are introduced in the dead reckoning’s EKF via a new update that directly observes the position and orientation states. Figure 7 provides a 2D representation of the trajectories obtained by each method, along a robot trajectory of 127 m with a total traveling time of 250 s. It can easily be observed that the integration of 3DupIC solutions helps reduce the dead reckoning drift, as the 3DupIC aided trajectory closely follows the ground truth. At a certain point, around −60 m north and −5 m east, several successive divergences of the 3DupIC are noticeable. They are mainly caused by the flat underwater terrain in that area, which provides insufficient information to support scan matching.
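A direct pose observation of this kind reduces, in the linear case, to a standard EKF update with an identity measurement Jacobian. The sketch below is a hypothetical simplification, not the authors’ filter: real orientation states would require angle wrapping or a manifold formulation, and the state vector would also carry velocity terms.

```python
import numpy as np

def ekf_pose_update(x, P, z, R):
    """EKF measurement update where the scan matching result z directly
    observes the position/orientation states x, i.e., H = I (a linear
    simplification; wrapped angles are not handled here)."""
    n = len(x)
    H = np.eye(n)
    y = z - x                        # innovation
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(n) - K @ H) @ P
    return x_new, P_new
```

With equal prior and measurement covariances, the update moves the state halfway toward the observation and halves the covariance, which matches the intuition of fusing two equally trusted pose estimates.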
A more detailed analysis is presented in Figure 8, where the position and orientation errors of the dead reckoning and the 3DupIC aided solutions are taken with respect to the ground truth. Once again, it can be verified that the 3DupIC aided localization is characterized by a much lower position error when compared with the dead reckoning. The performance could be improved further by simply rejecting wrong 3DupIC solutions, such as the one that occurs around timestamp 190 s, which has a large impact on the position error. However, we decided to include all 3DupIC updates, without performing any outlier rejection, to provide a more faithful characterization of the algorithm’s performance.
In terms of orientation, at first glance, the impact of the 3DupIC contribution is less striking, since the error follows the dead reckoning tendency, as demonstrated in Figure 8b. Nonetheless, until the robot changes direction, around timestamp 160 s, the 3DupIC presents a slightly lower error signature than the dead reckoning. After the robot inverts its direction of motion, despite the higher error of the 3DupIC solution, we can argue that it is more stable than the dead reckoning, and hence more consistent, indicating the ability of the registration method to track the robot orientation with higher precision.
Additionally, we believe the orientation estimation is perturbed by a combination of hidden factors, namely inaccurate reference frame calibrations and time synchronization problems between the Echoscope 3D and the remaining sensors. This belief is supported by the presence of an artifact around timestamp 160 s, when the vehicle changes direction rapidly. At this moment, the orientation error shows two spikes, which indicates poor synchronization of the Echoscope 3D data with respect to the ground truth. On the other hand, the transformation from the Echoscope’s sensor frame to the body frame was obtained directly from the robot’s CAD model. Considering the physical mounting of the Echoscope 3D on the robot’s body, a misalignment of a few degrees may occur, leading to poor orientation determination through scan matching.
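The sensitivity to such an extrinsic misalignment can be gauged with a quick back-of-the-envelope computation: rotating a measurement by the calibration error displaces it by roughly the range times the misalignment angle. The 2° and 20 m figures below are hypothetical, chosen only to illustrate the order of magnitude.

```python
import numpy as np

def yaw_rotation(angle_rad):
    """Rotation matrix about the z (yaw) axis."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

# Hypothetical numbers: a 2-degree yaw error in the sensor-to-body
# extrinsic rotation, applied to a point measured at 20 m range.
misalignment = yaw_rotation(np.deg2rad(2.0))
point = np.array([20.0, 0.0, 0.0])
error = np.linalg.norm(misalignment @ point - point)
```

Here the displacement is about 0.7 m at 20 m range, i.e., a calibration error of only a few degrees already shifts distant points by several decimeters, which supports the need for a dedicated extrinsic calibration procedure.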

4. Conclusions

This article presents the 3DupIC method for the registration of sonar scans acquired by an Echoscope 3D profiling sonar. The 3DupIC develops a probabilistic scan representation, which enables consistent point matching and error minimization using the Mahalanobis distance. Results demonstrate the superiority of our method when compared with the ICP algorithm. Even when inaccurate initial guesses are supplied, the 3DupIC repeatedly converges to a similar result, close to the real solution. Moreover, our method successfully retrieves a consistent solution in situations of substantial sensor rotation and reduced scan overlap. Additionally, scan matching builds directly on the raw data, without the need for prior scan filtering or subsequent rejection of wrong point matches.
An initial robot displacement, with corresponding uncertainty, obtained from dead reckoning, is supplied to the 3DupIC to facilitate convergence. Results show that our scan matching method consistently refines the initial transformation, so that scan matching updates inserted in the dead reckoning filter produce an improved robot trajectory, with reduced drift.
The experiments reported in this article validate the 3DupIC concept, confirming the feasibility of underwater scan matching in six degrees of freedom and its benefit for robot localization. However, before integrating the 3DupIC in a real robot, some additional improvements are necessary. First, the high computational complexity precludes real-time execution, so a simplified implementation is necessary. Furthermore, as the results indicate, an extrinsic calibration procedure must be developed to refine the reference frame transformations, especially to improve the consistency of the 3DupIC derived orientation references. Finally, the calculation of the registration uncertainty is fundamental to achieve consistent data fusion within the localization estimation framework; for this reason, the determination of the registration covariance matrix should also be addressed.

Author Contributions

Conceptualization, A.F.; methodology, A.F.; software, A.F.; validation, A.F.; formal analysis, A.F., J.A., A.M. (Alfredo Martins), A.M. (Aníbal Matos) and E.S.; investigation, A.F., J.A., A.M. (Alfredo Martins), A.M. (Aníbal Matos) and E.S.; writing—original draft preparation, J.A.; writing—review and editing, A.F., J.A., A.M. (Alfredo Martins), A.M. (Aníbal Matos) and E.S.; visualization, A.F., J.A., A.M. (Alfredo Martins), A.M. (Aníbal Matos) and E.S.; supervision, A.M. (Aníbal Matos) and E.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work is financed by National Funds through the Portuguese funding agency, FCT—Fundação para a Ciência e a Tecnologia, within project LA/P/0063/2020.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
2D: Two-dimensional
3D: Three-dimensional
3DupIC: 3D underwater probabilistic Iterative Correspondence
pIC: probabilistic Iterative Correspondence
ICP: Iterative Closest Point
AUV: Autonomous Underwater Vehicle
GNSS: Global Navigation Satellite System
MSIS: Mechanical scanning imaging sonar
ICNN: Individual Compatibility Nearest Neighbor
EKF: Extended Kalman Filter
FOG: Fiber Optic Gyroscope
DVL: Doppler Velocity Log
IMU: Inertial Measurement Unit
USBL: Ultra-short baseline
SBL: Short baseline
PCL: Point Cloud Library
CAD: Computer-aided design
GPU: Graphics Processing Unit
OpenCL: Open Computing Language
RANSAC: Random Sample Consensus

Figure 1. Illustration of a point cloud obtained from a single ping of the Echoscope 3D, where (a) is a top view and (b) a side view of the same scan. The scan is composed of a matrix of 128 × 128 points, represented in red. The robot model, in grey, represents the scan acquisition pose.
Figure 2. Illustration of the probabilistic beam model adopted by the 3DupIC to represent each single scan point.
Figure 3. Initial perturbations introduced to (a) position states and (b) orientation states. Points are colored according to their Euclidean distance with respect to the true displacement, ranging from short distances, marked with blue, to higher distances in yellow.
Figure 4. Relation between the final registration errors, expressed through the Euclidean distance of (a) position and (b) orientation registration errors, with respect to the norm of the position disturbance introduced initially to each registration method. The vertical axis corresponds to the average registration error, computed from the experiments executed for each sampled disturbance distance.
Figure 5. Induced errors in the initial orientation cause (a) position and (b) orientation registration errors, with respect to angular disturbance, represented in the horizontal axis. The vertical axis corresponds to the average registration error, computed from the experiments executed for each corresponding sampled disturbance distance.
Figure 6. Results of the short trajectory experiment, to evaluate the impact of scan distance on the registration accuracy. The horizontal axis represents the true distance between scan acquisition poses. In the vertical axis, (a) displays the position error and (b) shows the orientation error.
Figure 7. 2D representation of the trajectories obtained from dead reckoning (dashed line), from aiding dead reckoning with the 3DupIC corrections (solid line) and the ground truth trajectory (dotted line).
Figure 8. Error comparison between the dead reckoning and the 3DupIC aided dead reckoning solutions. In (a), the position error is represented as the Euclidean distance with respect to the ground truth. Similarly, plot (b) contains the orientation errors.