1. Introduction
Image registration is a fundamental problem encountered in image processing, e.g., image fusion [1] and image change detection [2]. It refers to the alignment of two or more images of the same scene taken at different times, from different sensors, or from different viewpoints. Image registration plays an increasingly important role in applications of surveillance [3], remote sensing [4] and medical imaging [5].
For a collection of images to be registered, one is chosen as the reference image and the others are selected as sensed images. Image registration aligns each sensed image to the reference image by finding the correspondence between all pixels in the image pair and estimating the spatial transformation from the sensed image to the reference image. In this paper, we only consider the registration of two images, i.e., there is a single sensed image together with a given reference image.
Current image registration techniques based on the image domain can generally be divided into two categories [6]: the sparse methods and the dense methods. There are also methods based on the transform domain, such as the Fourier–Mellin transform method [7]. Transform-domain methods are often used for image registration under the similarity transformation model. In this paper, we focus on the image-domain methods.
The sparse methods [8] extract and match salient features from the reference image and the sensed image and then estimate the spatial transformation between the two images based on these matched features. Line features (e.g., edges) and point features (corners, line intersections and centers of gravity of regions) can all be used for image registration. Corner features are the most commonly used and can be manually selected or automatically detected by Harris [9], FAST (Features from Accelerated Segment Test) [10], SIFT (Scale-Invariant Feature Transform) [11], SURF (Speeded-Up Robust Features) [12], DAISY [13], ORB (Oriented FAST and Rotated BRIEF) [14], KAZE [15], etc.
In contrast to the sparse methods, the dense methods [16] do not detect features from the image pair but directly search for the optimal spatial transformation that best matches all the pixels in the image pair. Similarity (resp. dissimilarity) measures are defined to quantify the dependency (resp. independency) between the pair of images. Various similarity and dissimilarity measures have been proposed [17], such as RMSE (Root-Mean-Squared Error), PSNR (Peak Signal-to-Noise Ratio), Spearman's Rho [18], NCC (Normalized Cross-Correlation Coefficient) and MI (Mutual Information). It should be noted that dense methods based on RMSE or PSNR cannot handle cases with illumination variation, since these two similarity/dissimilarity measures are very sensitive to illumination changes.
Both the sparse and the dense methods involve uncertainty problems. For the sparse methods, the keypoints obtained from different keypoint detectors describe different corner features of the image; therefore, image registrations based on different keypoint detectors obtain different spatial transformations. For the dense methods, different similarity (dissimilarity) measures quantify the difference between the pair of images from different aspects, so image registrations based on different measures also obtain different spatial transformations. These spatial transformations have their own pros and cons, and the selection of the spatial transformation (in effect, the selection of the feature detector or similarity measure) introduces uncertainty.
To deal with the uncertainty caused by the particular selection of feature detector or similarity (dissimilarity) measure, one feasible way is to combine the registration transformations obtained from different feature detection methods or similarity measures to obtain a better registration result. The belief functions introduced in the Dempster–Shafer Theory (DST) of evidence [19] offer a powerful theoretical tool for uncertainty modeling and reasoning; therefore, we propose a fusion-based image registration method using belief functions. In this paper, the spatial transformations obtained from different feature detection algorithms or similarity measures compose the frame of discernment (FOD), and their uncertainties are modeled using belief functions. In uncertainty modeling, image information at different levels, i.e., the image's intensities, edges and phase angles, is jointly used to evaluate the beliefs about the image transformations. These uncertainties are then handled through the evidence combination of the above multiple sources of information, and the final registration result is obtained from the combined evidence.
This paper is an extension of our previous work in [20], where the basic idea was briefly presented. The main added values with respect to [20] are as follows. First, the transformation model between the reference image and the sensed image is more comprehensive: we used the similarity transformation model in [20] but use the projective transformation model in this paper, which is more general since all similarity transformations are special cases of projective transformations. Second, the keypoints used in the sparse approach in [20] were manually selected; to reduce the subjective influence on the registration result, in this paper the keypoints are generated by detection algorithms, and feature matching and mismatch removal are accordingly added after keypoint detection. Third, when modeling uncertainties, one more information source, i.e., the image's phase angle information, is considered in this work. Fourth, more experiments and analyses are provided for performance evaluation.
The rest of this paper is organized as follows. The basics of image registration are introduced in Section 2. The basics of evidence theory are introduced in Section 3. The proposed image registration method is introduced in Section 4, with emphasis on uncertainty modeling and handling. The evaluation method is introduced in Section 5. Experimental results of the proposed method and other registration methods are presented and compared in Section 6. Concluding remarks are given in Section 7.
2. Basics of Image Registration
For two (or more) images of the same scene taken at different times, from different sensors, or from different viewpoints, one is chosen as the reference image (R) and the other is chosen as the sensed image (S). In this paper, we focus on the projective transformation model between the reference image and the sensed image, which is a commonly used model in image registration [16].
Denote pixel coordinates in the reference image R as $(x, y)$ and their mapping counterparts in the sensed image S as $(x', y')$. The projective transformation from R to S can be expressed in homogeneous coordinates (homogeneous coordinates can express the translation transformation as a matrix multiplication, while Cartesian coordinates cannot) as
$$
\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} \sim T \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} t_{11} & t_{12} & t_{13} \\ t_{21} & t_{22} & t_{23} \\ t_{31} & t_{32} & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}. \tag{1}
$$
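To make the homogeneous-coordinate mechanics concrete, here is a minimal sketch in Python/NumPy (the matrix entries are illustrative values, not taken from the paper): a pixel is lifted to homogeneous coordinates, multiplied by T, and divided by the third component to return to Cartesian coordinates.

```python
import numpy as np

# Illustrative projective matrix T (8 degrees of freedom, t33 fixed to 1);
# the entries below are made-up values for demonstration only.
T = np.array([[1.02,  0.05,  3.0],
              [-0.04, 0.98, -2.0],
              [1e-4,  2e-4,  1.0]])

def project(T, x, y):
    """Map pixel (x, y) through T using homogeneous coordinates."""
    u, v, w = T @ np.array([x, y, 1.0])
    return u / w, v / w  # divide by w to return to Cartesian coordinates

print(project(T, 100.0, 50.0))
```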
The similarity transformation and the affine transformation are important special cases of the projective transformation, as illustrated in Table 1.
The purpose of image registration is to estimate the transformation T to align the sensed image S with the reference image R by
$$
\begin{bmatrix} \hat{x} \\ \hat{y} \\ 1 \end{bmatrix} \sim T^{-1} \begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix}, \tag{2}
$$
where $(\hat{x}, \hat{y})$ and $(x', y')$ denote pixel coordinates in the registered sensed image $\hat{S}$ and the sensed image S, respectively. Current image registration techniques can in general be divided into two categories [6]: the sparse methods and the dense methods. The basics of these two methods are introduced below.
2.1. Sparse Image Registration and Its Uncertainty
Feature detection and feature matching are two critical steps in the sparse methods. The flow chart of the sparse approach is illustrated in Figure 1, where each functional block is detailed in the sequel.
2.1.1. Feature Detection
Corner features are the most commonly used features in image registration due to their invariance to imaging geometry [6]. Some early keypoint detectors, like Harris and FAST, are very sensitive to image scale changes and thus perform poorly when the sensed image has a different scale from the reference image. The well-known SIFT detector shows good robustness to illumination, orientation and scale changes. Most scale-invariant detectors, like SIFT, SURF, ORB and BRISK, detect and describe features at different scale levels by building or approximating the Gaussian scale space of the image. In a different way, KAZE detects features in a nonlinear scale space built using efficient additive operator splitting techniques and variable conductance diffusion.
2.1.2. Feature Matching
To align the sensed image and the reference image, the detected keypoints in the two images are first matched by comparing their local features, characterized by descriptors. Generally, if two keypoints' descriptors are similar, the two keypoints are likely to be a matched pair. Given a keypoint t in the reference image, there may be a set of candidates in the sensed image whose descriptors are similar to t's. Among these candidates, t's true counterpart should have the closest descriptor distance to t, and this distance should be much smaller than the other candidates' distances.
The accuracy of keypoint matching affects the accuracy of the transformation estimation, so mismatched keypoint pairs should be removed before estimating the transformation. RANSAC (RANdom SAmple Consensus) [21] and MSAC (M-estimator SAmple and Consensus) [22] are often used for this problem. The recent RANRESAC (RANdom RESAmple Consensus) [23] algorithm has been proposed to remove mismatched keypoint pairs for noisy image registration. Besides the accuracy of keypoint matching, the distribution of the matched pairs over the image space is another key factor in obtaining a high-quality transformation estimate.
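As an illustration of the matching pipeline described above, the following sketch uses OpenCV (assuming the opencv-python package; the image paths are placeholders): SIFT keypoints are matched with the nearest/second-nearest distance ratio test, and RANSAC rejects mismatched pairs while fitting the projective model.

```python
import cv2
import numpy as np

ref = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)  # placeholder path
sen = cv2.imread("sensed.png", cv2.IMREAD_GRAYSCALE)     # placeholder path

sift = cv2.SIFT_create()
kp_r, des_r = sift.detectAndCompute(ref, None)
kp_s, des_s = sift.detectAndCompute(sen, None)

# Ratio test: keep a match only if its distance is much smaller than
# that of the second-closest candidate.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des_r, des_s, k=2)
        if m.distance < 0.75 * n.distance]

pts_r = np.float32([kp_r[m.queryIdx].pt for m in good])
pts_s = np.float32([kp_s[m.trainIdx].pt for m in good])

# RANSAC removes mismatched pairs while estimating the homography.
T, inlier_mask = cv2.findHomography(pts_r, pts_s, cv2.RANSAC, 3.0)
print("estimated T (reference -> sensed):\n", T)
```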
2.1.3. Transformation Estimation
With all the matched keypoint pairs, the transformation matrix T can be estimated using Equation (1). Since T has eight degrees of freedom, four point correspondences (with no three collinear) are needed to obtain the unique solution of T according to Cramer's rule.
Normally, the number of matched keypoint pairs is larger than four, and T can then be estimated with the least squares (LS) fitting technique [6] by minimizing the sum of squared Euclidean distances between all the matched keypoints:
$$
\hat{T} = \arg\min_{T} \sum_{i} \left\| (x_i, y_i) - (\hat{x}_i, \hat{y}_i) \right\|^2, \tag{3}
$$
where $(x_i, y_i)$ represents the coordinates of the i-th matched keypoint in the reference image and $(\hat{x}_i, \hat{y}_i)$ represents the coordinates of the i-th matched keypoint in the registered sensed image transformed from the sensed image using Equation (2).
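For more than four correspondences, a standard way to realize such a least-squares fit is the direct linear transform (DLT); the sketch below (NumPy; a textbook technique that minimizes an algebraic error approximating the geometric criterion above, not necessarily the exact solver used in the paper) stacks two linear equations per correspondence and solves via SVD.

```python
import numpy as np

def estimate_homography_dlt(src, dst):
    """Least-squares projective fit from >= 4 point pairs via the DLT:
    each pair contributes two rows to a homogeneous system A h = 0."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    A = np.asarray(rows, dtype=float)
    _, _, Vt = np.linalg.svd(A)   # minimizer is the last right singular vector
    T = Vt[-1].reshape(3, 3)
    return T / T[2, 2]            # rescale so that t33 = 1

# Toy usage with made-up correspondences:
src = [(0, 0), (1, 0), (1, 1), (0, 1), (0.5, 0.5)]
dst = [(2, 1), (3.1, 1.0), (3.0, 2.1), (2.0, 2.0), (2.55, 1.5)]
print(estimate_homography_dlt(src, dst))
```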
2.1.4. Uncertainty Encountered in Sparse Approach
Since different keypoint detection algorithms detect different kinds of corner features, the detected keypoints are usually different, as shown in Figure 2.
Image registrations based on different sets of matched keypoint pairs would in general yield different spatial transformations to align the two images. The different transformations obtained have their own pros and cons. Therefore, the selection of the keypoint detection algorithm introduces uncertainty into the registration results.
2.2. Dense Image Registration and Its Uncertainty
The dense image registration estimates the optimal transformation T by searching for the largest similarity (or the smallest dissimilarity) between the reference image R and the registered sensed image $\hat{S}$:
$$
\hat{T} = \arg\max_{T} \, \mathrm{Sim}\big(R, \hat{S}\big), \tag{4}
$$
where $\mathrm{Sim}(\cdot, \cdot)$ is a chosen similarity measure. The flow chart of the dense approach is illustrated in Figure 3, where each functional block is detailed in the sequel.
2.2.1. Similarity Measure
Various similarity (or dissimilarity) measures have been proposed. Here we briefly introduce the commonly used MI, NCC and PSNR measures.
(1) MI
The MI measure between images A and B is
$$
\mathrm{MI}(A, B) = \sum_{a} \sum_{b} p_{AB}(a, b) \log \frac{p_{AB}(a, b)}{p_A(a)\, p_B(b)},
$$
where $p_{AB}(a, b)$ is the joint probability distribution function (PDF) of images A and B, and $p_A(a)$ and $p_B(b)$ are the marginal PDFs of A and B, respectively. $\mathrm{MI}(A, B)$ is larger when A and B are more similar.
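On discrete gray images, the MI measure is typically evaluated through a joint histogram; the following sketch (NumPy, assuming 8-bit images of equal size) is one common discretization.

```python
import numpy as np

def mutual_information(A, B, bins=64):
    """MI(A, B) estimated from the joint histogram of two 8-bit images."""
    joint, _, _ = np.histogram2d(A.ravel(), B.ravel(),
                                 bins=bins, range=[[0, 256], [0, 256]])
    p_ab = joint / joint.sum()              # joint PDF
    p_a = p_ab.sum(axis=1, keepdims=True)   # marginal PDF of A
    p_b = p_ab.sum(axis=0, keepdims=True)   # marginal PDF of B
    nz = p_ab > 0                           # skip zero cells to avoid log(0)
    return float(np.sum(p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])))
```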
(2) NCC
For given images A and B of size $M \times N$, the NCC measure between them is
$$
\mathrm{NCC}(A, B) = \frac{1}{MN} \sum_{i=1}^{M} \sum_{j=1}^{N} \frac{\big(A(i, j) - \mu_A\big)\big(B(i, j) - \mu_B\big)}{\sigma_A\, \sigma_B},
$$
where $A(i, j)$ and $B(i, j)$ are the pixel intensities of images A and B at $(i, j)$, respectively; $\mu_A$ and $\mu_B$ are the mean intensities of A and B, respectively; and $\sigma_A$ and $\sigma_B$ are the standard deviations of the intensities of A and B, respectively. $\mathrm{NCC}(A, B)$ is larger when A and B are more similar.
(3) PSNR
The PSNR measure between images A and B is
$$
\mathrm{PSNR}(A, B) = 10 \log_{10} \frac{255^2}{\mathrm{MSE}(A, B)},
$$
where $\mathrm{MSE}(A, B) = \frac{1}{MN} \sum_{i=1}^{M} \sum_{j=1}^{N} \big(A(i, j) - B(i, j)\big)^2$ for 8-bit images. $\mathrm{PSNR}(A, B)$ is larger when A and B are more similar. Since the PSNR measure is very sensitive to illumination changes, it cannot be used for image registration when there are illumination variations between image pairs.
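The NCC and PSNR measures translate directly into NumPy; a short sketch (assuming 8-bit grayscale images of identical size):

```python
import numpy as np

def ncc(A, B):
    """Normalized cross-correlation coefficient; larger means more similar."""
    a = (A - A.mean()) / A.std()
    b = (B - B.mean()) / B.std()
    return float((a * b).mean())

def psnr(A, B, peak=255.0):
    """Peak signal-to-noise ratio in dB; larger means more similar."""
    mse = np.mean((A.astype(float) - B.astype(float)) ** 2)
    return float(10.0 * np.log10(peak ** 2 / mse))
```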
2.2.2. Transformation Estimation
The estimation of the transformation T, i.e., Equation (4), is generally a non-convex problem, and the global maximum is not easy to obtain [24]. Therefore, advanced optimization methods [25] or intelligent optimization approaches (like genetic or particle swarm algorithms) are often used to estimate the optimal transformation T.
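As a sketch of this step (the paper does not commit to a particular solver; SciPy's Nelder–Mead is used here purely for illustration and can get stuck in local optima, which is exactly why more advanced optimizers are cited), one can parameterize T by its eight free entries and maximize the NCC between R and the warped sensed image:

```python
import cv2
import numpy as np
from scipy.optimize import minimize

def neg_ncc(params, R, S):
    """Objective: negative NCC between R and S warped into R's frame."""
    T = np.append(params, 1.0).reshape(3, 3)   # 8 free parameters, t33 = 1
    S_hat = cv2.warpPerspective(S, np.linalg.inv(T), R.shape[::-1])
    a = (R - R.mean()) / (R.std() + 1e-12)
    b = (S_hat - S_hat.mean()) / (S_hat.std() + 1e-12)
    return -float((a * b).mean())

def dense_register(R, S):
    x0 = np.array([1, 0, 0, 0, 1, 0, 0, 0], dtype=float)  # identity start
    res = minimize(neg_ncc, x0,
                   args=(R.astype(np.float32), S.astype(np.float32)),
                   method="Nelder-Mead", options={"maxiter": 2000})
    return np.append(res.x, 1.0).reshape(3, 3)
```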
2.2.3. Uncertainty Encountered in Dense Approach
Since different similarity (dissimilarity) measures compare two images from different aspects, the similarities (dissimilarities) they compute between the reference image and the registered sensed image differ. Image registrations based on different measures would obtain different spatial transformations to align the two images, each with its own pros and cons. Therefore, the selection of the similarity (dissimilarity) measure introduces uncertainty into the registration results.
To deal with the uncertainty caused by the selection of feature detection algorithms or similarity measures, one feasible way is to combine the registration transformations ($T_1$, $T_2$, …, $T_Q$) obtained from different feature detection algorithms (or different similarity measures) in the expectation of a better registration result. We propose an evidential reasoning [19] based image registration algorithm that generates a combined transformation from $T_1$, $T_2$, …, $T_Q$, thanks to the ability of belief functions to model and reason about uncertainty. The basics of the theory of belief functions are first recalled below.
3. Basics of Evidence Theory
Dempster–Shafer evidence theory (DST) [19] is a theoretical framework for uncertainty modeling and reasoning. In DST, the elements of the frame of discernment (FOD) $\Theta$ are mutually exclusive and exhaustive. The power set of $\Theta$, i.e., $2^{\Theta}$, is the set of all subsets of $\Theta$. For example, if $\Theta = \{\theta_1, \theta_2\}$, then $2^{\Theta} = \{\emptyset, \{\theta_1\}, \{\theta_2\}, \{\theta_1, \theta_2\}\}$. The basic belief assignment (BBA, also called a mass function) is defined by a function $m: 2^{\Theta} \to [0, 1]$ satisfying
$$
m(\emptyset) = 0, \qquad \sum_{A \subseteq \Theta} m(A) = 1,
$$
where $m(A)$ depicts the evidence support to the proposition A.
A is called a focal element when $m(A) > 0$. If there is only one element in A, like $\{\theta_1\}$ and $\{\theta_2\}$, A is called a singleton element; if there is more than one element in A, e.g., $\{\theta_1, \theta_2\}$, A is called a compound element. The belief assigned to a compound element represents the degree of ambiguity among its multiple elements.
The plausibility function ($\mathrm{Pl}$) and belief function ($\mathrm{Bel}$) are defined as follows:
$$
\mathrm{Pl}(A) = \sum_{B \cap A \neq \emptyset} m(B), \qquad \mathrm{Bel}(A) = \sum_{B \subseteq A} m(B).
$$
Dempster's combination rule [19] for combining two distinct pieces of evidence is defined, for all non-empty $A \subseteq \Theta$, as
$$
m(A) = \frac{1}{1 - K} \sum_{B \cap C = A} m_1(B)\, m_2(C),
$$
where $K = \sum_{B \cap C = \emptyset} m_1(B)\, m_2(C)$ denotes the total conflicting or contradictory mass assignment.
An alternative fusion rule, PCR6 [26], for the combination of two sources is defined as
$$
m_{\mathrm{PCR6}}(A) = m_{12}(A) + \sum_{\substack{B \subseteq \Theta \\ B \cap A = \emptyset}} \left[ \frac{m_1(A)^2\, m_2(B)}{m_1(A) + m_2(B)} + \frac{m_2(A)^2\, m_1(B)}{m_2(A) + m_1(B)} \right],
$$
where $m_{12}(\cdot)$ is the conjunctive rule defined as
$$
m_{12}(A) = \sum_{B \cap C = A} m_1(B)\, m_2(C).
$$
The general PCR6 formula for the combination of more than two sources is given in [26].
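To make these rules concrete, here is a small self-contained sketch (Python, with BBAs represented as dicts keyed by frozensets of hypothesis labels) implementing the conjunctive rule, Dempster's normalization and the two-source PCR6 redistribution:

```python
from itertools import product

def conjunctive(m1, m2):
    """Conjunctive rule: m12(A) = sum of m1(B) * m2(C) over B ∩ C = A."""
    m12 = {}
    for (B, b), (C, c) in product(m1.items(), m2.items()):
        A = B & C
        m12[A] = m12.get(A, 0.0) + b * c
    return m12

def dempster(m1, m2):
    """Dempster's rule: drop the conflicting mass K and renormalize."""
    m12 = conjunctive(m1, m2)
    K = m12.pop(frozenset(), 0.0)   # total conflicting mass
    return {A: v / (1.0 - K) for A, v in m12.items()}

def pcr6(m1, m2):
    """Two-source PCR6: each partial conflict is redistributed back to the
    two focal elements that produced it, proportionally to their masses."""
    m = {A: v for A, v in conjunctive(m1, m2).items() if A}
    for (B, b), (C, c) in product(m1.items(), m2.items()):
        if not (B & C) and (b + c) > 0:
            m[B] = m.get(B, 0.0) + b * b * c / (b + c)
            m[C] = m.get(C, 0.0) + c * c * b / (b + c)
    return m

# Toy example on the FOD {T1, T2}:
m1 = {frozenset({"T1"}): 0.6, frozenset({"T1", "T2"}): 0.4}
m2 = {frozenset({"T2"}): 0.5, frozenset({"T1", "T2"}): 0.5}
print(dempster(m1, m2))  # normalized combination
print(pcr6(m1, m2))      # conflict redistributed, already sums to 1
```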
For probabilistic decision-making, Smets defined the pignistic probability transformation [27] to obtain a probability measure $\mathrm{BetP}$ from a BBA:
$$
\mathrm{BetP}(\theta) = \sum_{A \subseteq \Theta,\ \theta \in A} \frac{m(A)}{|A|},
$$
where $|A|$ is the cardinality of A. The decision can be made by choosing the element of the FOD whose $\mathrm{BetP}$ value is the highest and exceeds a preset threshold. Other probability transformation methods can be found in [26,28].
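Continuing the dict-based representation above, the pignistic transformation simply splits each focal element's mass equally among its members (a sketch):

```python
def pignistic(m):
    """BetP(theta) = sum over focal elements A containing theta of m(A)/|A|."""
    betp = {}
    for A, v in m.items():
        for theta in A:
            betp[theta] = betp.get(theta, 0.0) + v / len(A)
    return betp

# The mass 0.2 on {T1, T2} is split equally between T1 and T2:
m = {frozenset({"T1"}): 0.5, frozenset({"T2"}): 0.3,
     frozenset({"T1", "T2"}): 0.2}
print(pignistic(m))  # {'T1': 0.6, 'T2': 0.4}
```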
4. Image Registration Based on Evidential Reasoning
To deal with the uncertainty caused by the choice of keypoint detector in the sparse approach or the choice of similarity measure in the dense approach, we propose an image registration method based on evidential reasoning. Suppose that the spatial transformation between the reference image and the sensed image is projective. Our purpose is to estimate the transformation matrix that aligns the two images. Unlike the prevailing methods, which estimate the transformation matrix from a single keypoint detection method or a single similarity (dissimilarity) measure, we estimate the transformation matrix by jointly utilizing different keypoint detection methods or similarity measures.
To use belief functions for image registration, one should first define the frame of discernment (FOD). The FOD is $\Theta = \{T_1, T_2, \ldots, T_Q\}$, where Q is the number of transformations obtained from different single feature detection algorithms or different single similarity measures. We first model the beliefs for every proposition $A \subseteq \Theta$ using BBAs. A can be a single transformation in the FOD or a set of transformations in the FOD. One BBA depicts the support to each proposition A from one evidence source, and the BBA allocations from different evidence sources describe the uncertainty of the transformations in the FOD. Next, the BBAs are combined to generate the combined BBA $m(\cdot)$, depicting the fused support to each proposition A. Then, the combined transformation $\bar{T}$ is generated from the combined BBA $m(\cdot)$. Finally, the registered sensed image $\hat{S}$ is transformed from the sensed image using Equation (2). During this process, resampling [29] is needed to determine the intensity of each pixel in $\hat{S}$.
Figure 4 illustrates the flow chart of the proposed method. It should be noted that the classical interpretation of belief function theory assumes that the final estimate belongs to the FOD. In this work, we relax this assumption, and the final transformation is a combination of the transformations in the FOD.
4.1. Uncertainty Modeling
If the similarity between the reference image R and the registered sensed image $\hat{S}_q$ is large, the corresponding transformation $T_q$ is quite accurate and should be allocated a large support ($\hat{S}_q$ is transformed from the sensed image S by $T_q$). Here, we use NCC (other similarity or dissimilarity measures, e.g., MI, are also appropriate to quantify the similarity here) to measure the similarity between R and $\hat{S}_q$:
$$
\mathrm{NCC}(R, \hat{S}_q) = \frac{1}{MN} \sum_{i=1}^{M} \sum_{j=1}^{N} \frac{\big(R(i, j) - \mu_R\big)\big(\hat{S}_q(i, j) - \mu_{\hat{S}_q}\big)}{\sigma_R\, \sigma_{\hat{S}_q}},
$$
where $\mu_R$ and $\mu_{\hat{S}_q}$ are the mean intensities of R and $\hat{S}_q$, respectively, and $\sigma_R$ and $\sigma_{\hat{S}_q}$ are the standard deviations of the intensities of R and $\hat{S}_q$, respectively.
Since multi-source information can help to reduce the uncertainty through evidence combination, we use different levels of image information to quantify the similarity between R and $\hat{S}_q$. The similarity can be calculated from the gray images, the edge feature images or the images reconstructed from the phase angle, as shown in Figure 5. Their corresponding NCC values are denoted as $\mathrm{NCC}_G$, $\mathrm{NCC}_E$ and $\mathrm{NCC}_P$, respectively. The edge detection method used in Figure 5b is the Canny detector [30]. More details on image reconstruction from phase angle information can be found in [29].
The value range of $\mathrm{NCC}$ is $[-1, 1]$. According to our experiments, most values of $\mathrm{NCC}$ are larger than 0. Before allocating BBAs, we first enlarge the differences among the $\mathrm{NCC}$ values within $[0, 1]$ using an enlarging function, as illustrated in Figure 6.
Each level of image information (gray images (G), edge feature images (E) and images reconstructed from the phase angle (P)) can be viewed as one evidence source, and the corresponding $\mathrm{NCC}_G$, $\mathrm{NCC}_E$ and $\mathrm{NCC}_P$ values are used to assign beliefs to each transformation $T_q$, yielding the BBAs $m_G(\cdot)$, $m_E(\cdot)$ and $m_P(\cdot)$.
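The sketch below illustrates one plausible reading of this step (the paper's exact BBA allocation and its enlarging function are not reproduced; clipping negative NCC values and normalizing each source's scores into singleton masses is our assumption for illustration): NCC is computed on the gray images, on Canny edge maps and on phase-only reconstructions for each candidate transformation.

```python
import cv2
import numpy as np

def ncc(A, B):
    a = (A - A.mean()) / (A.std() + 1e-12)
    b = (B - B.mean()) / (B.std() + 1e-12)
    return float((a * b).mean())

def phase_image(img):
    """Reconstruct an image from its Fourier phase angle only."""
    spec = np.fft.fft2(img.astype(float))
    return np.real(np.fft.ifft2(np.exp(1j * np.angle(spec))))

def source_bba(scores):
    """Hypothetical allocation: clip negative NCC values and normalize
    the scores into masses on the singleton transformations."""
    s = np.clip(np.asarray(scores, dtype=float), 0.0, None)
    return s / s.sum()

def level_bbas(R, warped_list):
    """One BBA per information level: gray (G), edge (E), phase (P).
    R and each warped image are assumed to be 8-bit grayscale arrays."""
    gray = [ncc(R, W) for W in warped_list]
    edge = [ncc(cv2.Canny(R, 100, 200), cv2.Canny(W, 100, 200))
            for W in warped_list]
    phase = [ncc(phase_image(R), phase_image(W)) for W in warped_list]
    return source_bba(gray), source_bba(edge), source_bba(phase)
```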
4.2. Fusion-Based Registration
After obtaining the BBAs $m_G(\cdot)$, $m_E(\cdot)$ and $m_P(\cdot)$, we generate the combined BBA $m(\cdot)$ using a combination rule denoted symbolically by ⊕:
$$
m = m_G \oplus m_E \oplus m_P.
$$
$m(\cdot)$ describes the combined evidence support to each transformation $T_q$ (a $3 \times 3$ matrix with 8 unknown parameters). The combined transformation $\bar{T}$ is computed as the BetP-weighted combination of the transformations in the FOD:
$$
\bar{T} = \sum_{q=1}^{Q} \mathrm{BetP}(T_q)\, T_q.
$$
Finally, the registered sensed image $\hat{S}$ can be obtained using Equation (2), followed by resampling.
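Putting the pieces together, here is a hedged end-to-end sketch that reuses the hypothetical helpers from the earlier sketches (`dempster`, `pignistic`, `level_bbas`); Dempster's rule stands in for ⊕ (PCR6 could be substituted), and the candidate matrices are averaged with BetP weights, which is our reading of the combination step.

```python
import cv2
import numpy as np

def fuse_and_register(R, S, candidate_Ts):
    """Fuse per-source BBAs over candidate transformations and warp S."""
    # Warp S into R's frame under each candidate transformation (Equation (2)).
    warped = [cv2.warpPerspective(S, np.linalg.inv(T), R.shape[::-1])
              for T in candidate_Ts]
    names = [f"T{q + 1}" for q in range(len(candidate_Ts))]
    bbas = [{frozenset({n}): s for n, s in zip(names, scores)}
            for scores in level_bbas(R, warped)]        # m_G, m_E, m_P
    m = dempster(dempster(bbas[0], bbas[1]), bbas[2])   # stand-in for ⊕
    betp = pignistic(m)
    # BetP-weighted combination of the candidate matrices.
    T_bar = sum(betp.get(n, 0.0) * T for n, T in zip(names, candidate_Ts))
    T_bar = T_bar / T_bar[2, 2]                         # renormalize t33 = 1
    S_hat = cv2.warpPerspective(S, np.linalg.inv(T_bar), R.shape[::-1])
    return T_bar, S_hat
```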
5. Evaluation of Image Registration
Since the purpose of image registration is to align the reference image R and the sensed image S in a single coordinate frame, one popular evaluation method is to quantify the difference (usually by the Root-Mean-Squared Error (RMSE)) between R and the registered sensed image $\hat{S}$ [31,32]. However, $\hat{S}$ is transformed from the sensed image S, which may carry less information than R (S may be part of R or have a lower resolution than R, since R and S can be taken from different views or by different cameras). Hence, the difference between R and $\hat{S}$ can be large even when the estimated transformation $\hat{T}$ equals the true transformation $T^{gt}$ from the reference image R to the sensed image S, as shown in Figure 7. Therefore, this kind of evaluation method is not accurate enough.
Another popular evaluation method is to quantify the difference between the reference image R and the image $R''$, which is transformed from R by the transformation matrix $(T^{gt})^{-1}\hat{T}$ [16,33], as shown in Figure 7. The mapping relationship between the pixel at $(x, y)$ in image R and the pixel at $(x'', y'')$ in image $R''$ satisfies
$$
\begin{bmatrix} x'' \\ y'' \\ 1 \end{bmatrix} \sim (T^{gt})^{-1}\, \hat{T} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}.
$$
When the registration is absolutely accurate, $\hat{T} = T^{gt}$ and $(x'', y'') = (x, y)$.
In this paper, we evaluate the registration performance by quantifying the difference between R and $R''$ using the AAID (average absolute intensity difference) [16]:
$$
\mathrm{AAID}(R, R'') = \frac{1}{MN} \sum_{i=1}^{M} \sum_{j=1}^{N} \left| R(i, j) - R''(i, j) \right|.
$$
$\mathrm{AAID}(R, R'')$ is smaller when the registration result is better.
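The criterion is straightforward to compute once $R''$ has been formed (a sketch; `T_hat` and `T_gt` denote the estimated and ground-truth matrices, following the reconstruction above):

```python
import cv2
import numpy as np

def aaid(R, T_hat, T_gt):
    """Average absolute intensity difference between R and R'', where R''
    is R mapped by the estimated transform and back by the true inverse."""
    M = np.linalg.inv(T_gt) @ T_hat
    R2 = cv2.warpPerspective(R, M, R.shape[::-1])
    return float(np.mean(np.abs(R.astype(float) - R2.astype(float))))
```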
7. Conclusions
In this paper, we proposed a new image registration algorithm based on evidential reasoning. The uncertainty encountered in image registration is taken into account and modeled by belief functions. Image information at different levels is jointly used to achieve a more effective registration. Experimental results show that the proposed algorithm can improve the precision of image registration.
The generation of BBAs is crucial in evidential reasoning, and most generation methods are application-specific. In this paper, we generate BBAs from three levels of image information, i.e., intensity, edges and phase angle. In future work, other image information, such as texture and gradient features, will also be considered and jointly used in image registration. Furthermore, we will attempt to apply the proposed method to color image registration: different color channels of a color image provide different image information and can be jointly used in image registration. We will also compare the proposed method with state-of-the-art approaches based on convolutional neural networks (CNNs).