Article

NCC Based Correspondence Problem for First- and Second-Order Graph Matching †

by Beibei Cui 1,2,* and Jean-Charles Créput 2

1 College of Electrical Engineering, Henan University of Technology, Zhengzhou 450001, Henan, China
2 CIAD, University Bourgogne Franche-Comté, UTBM, 90010 Belfort, France
* Author to whom correspondence should be addressed.
This paper is an extended version of our paper published in Cui, B.; Créput, J.-C. Using Entropy and Marr Wavelets to Automatic Feature Detection for Image Matching. In Proceedings of the 2019 15th International Conference on Signal-Image Technology & Internet-Based Systems, Sorrento, Italy, 26–29 November 2019.
Sensors 2020, 20(18), 5117; https://doi.org/10.3390/s20185117
Submission received: 20 July 2020 / Revised: 20 August 2020 / Accepted: 1 September 2020 / Published: 8 September 2020

Abstract:
Automatically finding correspondences between object features in images is of central interest for several applications, such as object detection and tracking, identification, registration, and many derived tasks. In this paper, we address feature correspondence within the general framework of graph matching optimization. We propose two optimized algorithms, one for first-order and one for second-order graph matching. On the one hand, a first-order normalized cross-correlation (NCC) based graph matching algorithm using entropy and response through Marr wavelets within the scale-interaction method is proposed. First, a new automatic feature detection process based on Marr wavelets within the scale-interaction method is introduced. Second, feature extraction is executed under a mesh division strategy and an entropy algorithm, accompanied by the assessment of a distribution criterion. Image matching is then achieved by nearest neighbor search with normalized cross-correlation similarity measurement to perform coarse matching on the feature point sets. In the matching point filtering part, the Random Sample Consensus Algorithm (RANSAC) removes outlier correspondences. On the other hand, a second-order NCC based graph matching algorithm is presented. This algorithm treats graph matching as an integer quadratic programming (IQP) problem and is implemented in Matlab. It allows developing and comparing many algorithms on a common evaluation platform, sharing input data, with a customizable affinity matrix and a matching list of candidate solution pairs as input. Experimental results demonstrate the improvements of these algorithms in matching recall and accuracy compared with other algorithms.

1. Introduction

Computer vision is an important research direction in computer science, since it occupies a pivotal position in the simulation of human perception. Automatically finding correspondences between object features in images is of interest for several applications, such as target tracking [1], 3D object retrieval [2], pattern recognition [3], image stitching [4], and many other fields.
Image matching is used to determine the geometric alignment of two or more images of the same scene taken by the same or different sensors, from different viewpoints, at the same or different times. We can distinguish dense correspondence, which determines correspondences at the pixel level, from sparse correspondence, which determines correspondences between a sparse set of higher-level features first extracted from the images. Most of the time, such features represent invariant information at some location in the image, such as corners, edges, or gradients. Since we are interested in sparse correspondence, we study standard methods for the automatic extraction of feature point sets from images. This is a critical step, since it should avoid the presence of outliers and should allow objects to be discriminated easily. We propose a feature extraction method suitable for first-order matching.
Local feature descriptors, that is, descriptors providing detailed feature detection and feature description information, play a fundamental role in feature correspondence and directly affect the accuracy and objective score of graph matching (GM). High-quality local feature descriptors describe key points with uniqueness, repeatability, accuracy, compactness, and effective representation. Such key points remain robust and stable under scaling, rotation, affine transformation, illumination change, and occlusion [5]. Here we focus on the theoretical and mathematical descriptions of various local feature descriptors. In feature point detection algorithms, local descriptors are typically used to describe image regions near feature points. Current methods for extracting feature points include the Harris descriptor [6], Gilles descriptor [7], LoG descriptor [8], corner detector (CD) [9], Harris-Laplace descriptor [10], SIFT descriptor [11], and so on. Among the feature extraction algorithms proposed in the literature, Marr wavelets, originally used in [12], are favored for several properties: robustness against distortion, rotational invariance, and insensitivity to noise [13]. We choose to focus on the latter, whereas the other methods serve as a basis for comparative evaluation.
Given a pair of images, detecting and extracting feature points is the first step in image matching. Automatic feature extraction is the key point, and further image matching can then be performed based on the feature-to-feature correspondence. Given standard nearest neighbor matching strategies, the problem to be solved here is how to improve the reliability of the feature set. To this end, we try to combine and enhance standard, easy-to-implement feature detection methods so that the final overall method (including feature detection and matching) is competitive in terms of computation time and matching quality.
In order to obtain better matching results, the proposed method inserts a Laplace filter-based image pre-processing step before detecting feature points, to increase the size of the candidate feature set. Since the Marr wavelet within the scale-interaction method tends to extract the edge information of objects, the Laplacian method can be used to enhance the edge details of the image. Then, a sparse feature point method based on entropy selection is proposed as a new filtering step. Filtering (implemented as convolution) is a very popular operation in image processing, which can also be applied to image encryption by changing pixel values [14]. This step combines a local entropy evaluation with a brightness deviation response as a new process for feature selection.
Entropy is a key concept in thermodynamics and statistical mechanics. It not only plays a special role among physical quantities, but also relates to the macro and micro aspects of nature and determines the behavior of macroscopic systems. Entropy is a well-defined quantity regardless of the type or size of the system under consideration. Entropy has many general properties, for instance, invariance, continuity, additivity, and concavity [15]. The probability distribution underlying entropy can be interpreted not only as a measure of uncertainty, but also as a measure of information [16]. Local entropy represents structured information: it counts the probability of occurrence of gray levels in a sub-image, independently of any single pixel. We claim that it is not sensitive to noise and can improve the accuracy of the image description. Based on the mesh division, the feature points in the sub-regions can be sorted according to their local entropy and selected through deviation values. Entropy selection can then not only effectively reduce useless feature points to save time, but also ensure the uniformity of the feature point distribution. The main advantage of the proposed entropy and response algorithm is to realize a good compromise (trade-off) between accuracy and computation time when compared to other standard approaches from the literature. It is worth noting that both accuracy and computation time are essential criteria to compare and evaluate heuristic methods [17,18].
A graph matching procedure based on the normalized cross-correlation similarity measure is applied to measure the effectiveness of the method in image matching. Quality assessment is entrusted to the Random Sample Consensus Algorithm (RANSAC), which eliminates mismatched pairs and allows true match recall to be computed. The experiments were performed on standard image processing benchmarks. They show how to increase the size of the feature set and the matching accuracy while saving computation time.
The contributions of this paper are two optimized graph matching algorithms: a first-order and a second-order method. Both are based on the normalized cross-correlation (NCC) algorithm, which allows us to address graph matching and derived sub-problems in close relationship with experiments on integer quadratic programming (IQP) models on the Matlab platform. For first-order graph matching in particular, we propose a new combination of feature point detection algorithms built from the Laplace filter, Marr wavelets, and the entropy-response based selection method.
This paper is organized as follows. Section 2 introduces the motivations and taxonomy of graph matching. The formulation of standard graph matching is explained in Section 3. In Section 4, the different steps of the proposed feature extraction and first-order graph matching procedures are presented. A preliminary version of this section appeared as part of our previous conference paper [19]. We extend this to NCC based second-order graph matching in Section 5. The evaluation of the two proposed algorithms and their comparison with other algorithms are presented in Section 6. Section 7 concludes the paper.

2. Background of Graph Matching

2.1. Motivations of Graph Matching

Graph matching plays an important role in many practical applications in computer vision, such as feature correspondence [20], action recognition [21], image classification [22,23], shape matching [24], image retrieval [25], and pattern recognition [26,27]. The goal of graph matching is to find the optimal mapping between two sets of nodes in two corresponding images that preserves the relationships between the graphs as much as possible, so that when vertices are labeled based on the correspondence, they look 'the most similar'.
In the more general case, this problem is expressed mathematically as a quadratic assignment problem (QAP), which consists in finding an assignment that maximizes an objective function. Over the past decades, considerable effort has been invested in developing approximate methods to address the general QAP. Gold and Rangarajan [28] proposed a graduated assignment algorithm that combines graduated nonconvexity, two-way (assignment) constraints, and sparsity to solve a series of linear approximations to the cost function using Taylor expansion iterations. Leordeanu [29] presented an efficient approximation using the spectral matching method (SM), which approximates the IQP problem by spectral relaxation. Cour et al. [17] introduced spectral matching with affine constraints (SMAC), which provides a tighter relaxation than SM while keeping its speed and scalability benefits. Torresani et al. [20] designed a complex objective function that can be efficiently optimized by dual decomposition (DD). Cho and Lee [18] presented the reweighted random walk matching (RRWM) algorithm for graph matching. Later, Cho et al. [30] provided a max-pooling graph matching method (MPM), which not only resists deformation but also significantly tolerates outliers. The central idea of this algorithm is that the pairs with maximum scores are the correct matches.
Recently, some authors have proposed the use of higher-dimensional relationships between hyper-edges for high-order graph matching. The most commonly used are based on third-order relations. The computation of the high-order affinity matrix is generally based on tuples of feature points and is achieved by comparing the corresponding edges and angle information of two sets of corresponding triangles. Another vital characteristic of high-order matching is that it is invariant to changes in scale and affine transformations. Zass and Shashua [31] reformulated high-order graph matching (HGM) from a probabilistic viewpoint with a convex optimization representation. Chertok and Keller [32] proposed a general framework for solving higher-order assignment problems based on the core assumption that high-order affinities are encoded in an affinity tensor; from it, they derive a marginalization scheme that maps triples to matrices or vectors. Duchenne et al. [33] derived a tensor-based high-order graph matching method (TM) that is invariant to affine or rigid transformations. This algorithm defines a tensor to represent the affinity between tuples of features and applies a multidimensional power iteration during which the solution is projected onto the closest assignment matrix. Lee et al. [34] formulated hypergraph matching via reweighted random walks (RRWHM) in a probabilistic manner. The algorithm uses a personalized jump and re-weighting scheme, which effectively enforces the one-to-one matching constraints during the random walk, and achieves strong robustness to deformation and noise compared with other state-of-the-art methods. Ngoc [35] presented a general framework with a flexible tensor block coordinate ascent scheme for hypergraph matching. Its crucial idea is a multilinear reformulation of the original objective function, which guarantees that the third-order matching score increases monotonically. Another hypergraph matching algorithm based on tensor refining was proposed in [36], accompanied by an alternative adjustment approach to accelerate convergence.
Despite decades of extensive research, graph matching remains a challenging problem for two main reasons: (1) the objective function is generally non-convex and prone to local minima; (2) space and time complexity constraints. We set out to address these challenges.

2.2. Taxonomy of Graph Matching

The goal of the graph matching process is mainly to find correspondences between two sets of characteristics, edges and points, under certain constraints. Graph-based matching methods treat sets of points as graphs. In the most common cases, the algorithms used to solve inexact matching problems can be classified into three categories: first-order, second-order, and high-order graph matching methods.
  • First-order graph matching. First-order graph matching mainly involves vertex-to-vertex properties based on local feature descriptors; it focuses primarily on unary information, regardless of edge-related information. This matching approach was initially proposed to find the correspondence between two sets of points and the transformation parameters at the same time. It uses coordinate positions and grayscale information to calculate the transformation parameters, and uses a soft assignment algorithm to estimate the correspondence between the two sets of points. Finally, it converts the alignment of the two point sets into the optimal match between the two graphs. Although the result of first-order matching is stable, it fails when there is ambiguity, such as similar local appearance or repeating texture. In the case of high noise and many outliers it performs poorly, which dramatically limits its range of applications. Therefore, first-order matching is more suitable for non-rigid motion with small local affine transformations.
  • Second-order graph matching. Second-order graph matching enriches the vertex and edge descriptions; its objective function is established by a matrix representing the affinity between candidate pairs, that is to say, each node represents a correspondence between points, and the weights represent pairwise agreements between potential corresponding pairs. Strong connections in this adjacency matrix indicate whether an assignment is likely to be correct. Therefore, it better accounts for the relations between candidate pairs and overcomes the disadvantages of first-order algorithms. Since the graph matching algorithm is usually based on an integer quadratic programming (IQP) formulation with approximate solutions, the second-order graph matching problem is also NP-hard. It can be formulated as an optimization problem whose purpose is to find the best match, the one receiving the highest score under the objective scoring function.
  • High-order graph matching. A hypergraph is a natural generalization of a traditional graph. Since pairwise assignments are sensitive to scale changes between two corresponding graphs, pairwise relationships are not enough to capture the entire geometric structure. Unlike pairwise matching, in which each link has two vertices at its ends, each link in a hypergraph can have three or more vertices, which provides a more powerful tool to model complex structures and higher-level information. Therefore, the crucial idea to solve the hypergraph problem is to exploit higher-order constraints rather than only unary or pairwise constraints. Essentially, hypergraph matching is a combinatorial optimization problem, and finding its global optimum within a reasonable time is not straightforward. In recent years, more tractable methods use probabilistic frameworks to interpret hypergraph matching, among which tensor-based models are often used. Considering the complex structure of the data, we believe that the learning and construction of hypergraphs will become an increasingly promising research direction in the future.

3. General Formulation of Graph Matching Problem

This section introduces the general representation of traditional graph matching and the definition of the high-order graph matching problem. The main purpose is to find a one-to-one correspondence mapping between two feature sets coming from two image sources. The goal is to maximize a score function over the set of correspondence pairs. In first-order matching, only local attribute descriptors are considered and evaluated, whereas, in the general case of graph matching, second-order potentials between pairs (edges) of features must also be maximized to establish the similarity between edges of features. On the other hand, high-order GM methods consider invariant geometric information by considering the relationship between tuples of feature points. The input feature graph becomes a hyper-graph, where hyper-edges, that is, subsets of $k$ points with order $k > 2$, replace edges, rather than only pairs of points.
Suppose we are given a pair of graphs, $G^P = (P, E^P)$ with $N_P$ feature points for the reference graph, and $G^Q = (Q, E^Q)$ with $N_Q$ feature points for the query graph. $P$ and $Q$ are the two sets of feature points, and $E^P$ and $E^Q$ denote the edge sets. We note $i, j \in P$ and $a, b \in Q$ as feature points. The main problem is then to find a suitable one-to-one mapping from one feature set to the other, as illustrated in Figure 1. The pictures in Figure 1 are from the PF-WILLOW dataset (https://www.di.ens.fr/willow/research/proposalflow/).
Finding a mapping from $P$ to $Q$ is equivalent to finding an $N_P \times N_Q$ assignment matrix $X$, such that $X_{ia} = 1$ when point $i$ is assigned to point $a$, and $X_{ia} = 0$ otherwise. Therefore, a one-to-one admissible solution must verify the following constraints: (1), which requires a binary solution, and (2) and (3), which express the two-way constraints of a one-to-one mapping.
$X \in \{0,1\}^{N_P \times N_Q}$  (1)
$\forall i \in P, \quad \sum_{a \in Q} X_{ia} \le 1$  (2)
$\forall a \in Q, \quad \sum_{i \in P} X_{ia} \le 1$  (3)
Then, the problem of graph matching can be formulated as the maximization of the following general objective score function (4):
$\mathrm{score}(X) = \sum_{ia,jb} H_{ia,jb}\, X_{ia} X_{jb},$  (4)
where $H_{ia,jb}$ is the similarity or affinity measurement corresponding to the tuples of feature points $(i, j)$ and $(a, b)$. The higher the score $H_{ia,jb}$, the higher the similarity between the two corresponding edges $(i, j)$ and $(a, b)$. The product $X_{ia} X_{jb}$ is equal to 1 if and only if points $i, j$ are respectively mapped to points $a, b$.
We then need to know how to compute such a positive and symmetric similarity matrix $H$. Many cost functions may be used to compute affinity matrices for first-order and second-order GM. Note that $H_{ia,ia}$ represents the first-order similarity term between the local attributes of points $i \in P$ and $a \in Q$. For example, the authors in [37,38] use the normalized cross-correlation (NCC) cost function, as we do to validate our feature point extraction method in this paper. Nevertheless, any point-to-point distance function can be used, such as the Euclidean distance between SIFT descriptors or a sum of squared error data term.
Duchenne et al. [33] propose a general formula to compute the second-order affinity term $H_{ia,jb}$, as shown in (5), where $f$ is a feature vector associated with each edge.
$\forall\, ia, jb \quad H_{ia,jb} = \exp\!\big(-\gamma\, \| f_{i,j} - f_{a,b} \|^2\big)$  (5)
Leordeanu et al. [29], as most often encountered, compute the Euclidean distances between the corresponding candidate point pairs $(i, j)$ and $(a, b)$ to build the affinity term $H_{ia,jb}$, as shown in Equation (6). Here, $\sigma_d$ controls the sensitivity to deformation.
$H(ia, jb) = \begin{cases} 4.5 - \dfrac{(d_{ij} - d_{ab})^2}{2\sigma_d^2} & \text{if } |d_{ij} - d_{ab}| < 3\sigma_d \\ 0 & \text{otherwise} \end{cases}$  (6)
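For concreteness, the following Matlab sketch shows how the pairwise affinity of Equation (6) could be evaluated for one pair of candidate assignments; the coordinate vectors and the value of sigma_d are illustrative inputs, not values taken from the paper.

```matlab
% Sketch (not the authors' code): pairwise affinity of Eq. (6) for two
% candidate assignments (i,a) and (j,b). Pi, Pj, Qa, Qb are assumed to be
% 1x2 coordinate vectors; sigma_d is the deformation sensitivity parameter.
function H = pairwiseAffinity(Pi, Pj, Qa, Qb, sigma_d)
    dij = norm(Pi - Pj);            % edge length in the first graph
    dab = norm(Qa - Qb);            % edge length in the second graph
    if abs(dij - dab) < 3 * sigma_d
        H = 4.5 - (dij - dab)^2 / (2 * sigma_d^2);
    else
        H = 0;                      % dissimilar edges get zero affinity
    end
end
```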

4. First-Order NCC Based GM

In this section, we introduce the implementation of the image matching process, including feature point detection and extraction, feature point matching, and removal of mismatched pairs. The schematic diagram of the whole process is shown in Figure 2. Regarding the interest point matching strategy, the proposed pipeline illustrated in Figure 2 can be summarized as: (1) a Laplacian filter is used for edge detection; (2) Marr wavelets are used to identify salient points in the image; (3) an entropy based metric selects the most distinctive points of the image, denoted as feature points; (4) feature points are matched; (5) outliers are removed through RANSAC.

4.1. Image Pre-Processing

The Laplacian is a second-order derivative operator that detects zero-crossings in image intensity and usually produces accurate edge detection results [39]. The Laplace filter is a discrete approximation of the mathematical Laplace operator. The second-order partial derivatives in the orthogonal directions of continuous space and their discrete approximation are defined below [40]:
$\nabla^2 f(x,y) = \dfrac{\partial^2 f(x,y)}{\partial x^2} + \dfrac{\partial^2 f(x,y)}{\partial y^2}$  (7)
$\nabla^2 f(x,y) \approx f(x+1,y) + f(x-1,y) + f(x,y+1) + f(x,y-1) - 4 f(x,y)$  (8)
$I(X) = I(x,y) = f(x,y) - c \cdot \nabla^2 f(x,y)$  (9)
where $f(x,y)$ is the original image, $I(x,y)$ is the processed image, and $c$ is a constant.
From formula (8), the digital mask filter $w$ can be viewed as the $3 \times 3$ set of filter coefficients shown in Figure 3a. Laplacian sharpening is essentially a convolution process. Suppose the origin of $f$ is located in the upper left corner of the image, and let the middle value of the mask $w$ be the center of the kernel. The mask $w$ is moved over all possible positions so that its center coincides with each pixel of $f$. The convolution operation is the sum of the products of the corresponding positions of the two functions. The convolution between $f$ and its corresponding mask filter $w$ is shown in Figure 3b.
The two-dimensional convolution is defined as:
$I(x,y) = (f * w)(x,y) = \sum_{k,l} f(x+k, y+l)\, w(k,l)$  (10)
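As an illustration of Equations (8)-(10), a minimal Matlab sketch of the Laplacian sharpening step could look as follows; the input file name and the choice c = 1 are assumptions made for the example.

```matlab
% Minimal sketch of the Laplacian pre-processing, assuming f is a
% grayscale image in double format and c = 1 (illustrative choice).
f = im2double(imread('query.png'));          % hypothetical input image
if size(f, 3) == 3, f = mean(f, 3); end      % crude grayscale conversion
w = [0 1 0; 1 -4 1; 0 1 0];                  % 3x3 discrete Laplacian mask, Eq. (8)
lap = conv2(f, w, 'same');                   % Eq. (10): convolution with w
c = 1;
I = f - c * lap;                             % Eq. (9): edge-enhanced image
```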

4.2. Marr Wavelets within Scale-Interaction

The receptive field [41] describes the stimulation pattern of the retina. The receptive field of high-level neuron cells in the visual pathway is synthesized from the receptive fields of low-level neuron cells; therefore, as the level increases, the range of the receptive field becomes larger. Ultra-complex neuron cell models [42] can respond to complex object features through powerful non-linear processing, and most ultra-complex neuron cells have sensitive termination characteristics at the ends of line segments, at corner points, and on line segments with high curvature. In other words, the response of ultra-complex neuron cells to light can be simulated by the difference between the responses of spatial filters with different bandwidths. The response of the receptive field to light can be represented by a spatial filter function, such as a difference of Gaussians or a Gabor wavelet function.
The scale-interaction model for feature detection is based on filtering with a class of self-similar Gabor functions, or Gabor wavelets [43,44], which achieve the minimum joint resolution in the spatial and frequency domains. This choice is made because Gabor functions are unique in reaching the smallest possible value of the joint uncertainty [45]. The feature detection function is defined as follows:
$Q_{ij}(x,y,\theta) = f\big(W_i(x,y,\theta) - \gamma\, W_j(x,y,\theta)\big)$  (11)
where $\gamma$ is a normalizing factor, and $W_i(x,y,\theta)$ and $W_j(x,y,\theta)$ are spatial filters. They are combined through a nonlinear function $f$ at location $(x,y)$ with preferred orientation $\theta$ at two scales $i$ and $j$, respectively. If the feature detection function $Q_{ij}$ reaches a local maximum at location $(x,y)$, this location is considered a potential feature point position.
For further optimization, Marr wavelets [46] are used instead of Gabor wavelets within the scale-interaction model because of their isotropy [47,48]. The two-dimensional Marr wavelet and the corresponding feature detection function are defined as:
$M_i(X) = \lambda_i \big(2 - \lambda_i^2 \|X\|^2\big) \exp\!\big(-\tfrac{\lambda_i^2 \|X\|^2}{2}\big)$  (12)
$Q_{ij}(X) = |M_i(X) - \gamma\, M_j(X)|$  (13)
$R_{ij}(X) = I(X) * Q_{ij}(X)$  (14)
where $X = (x, y)$ and $\|X\|^2 = x^2 + y^2$. $\lambda_i = 2^{-i}$, where $i$ is the scale index of the Marr wavelet. $Q_{ij}(X)$ is convolved with the grayscale image $I(X)$; if the response value $R_{ij}(X)$ reaches a local maximum at $X$, then $X$ is considered a potential feature point. Algorithm 1 gives the pseudo-code of Marr wavelets within scale-interaction. Figure 4 illustrates the process of extracting the response value using Marr wavelets within scale-interaction: (b) and (c) are the convolution results between the original picture and the mask filters, from which the local maximum points are then extracted.
Algorithm 1 Marr wavelets within scale-interaction algorithm
Input: $I(X)$, $i$, $j$
Output: points
  1: $I_i = \mathrm{MarrFilter}(I(X), i)$
  2: $I_j = \mathrm{MarrFilter}(I(X), j)$
  3: $I_{sub} \leftarrow |I_i - I_j|$
  4: $local_{thr} \leftarrow \max(\max(I_{sub})) \cdot r$
  5: if $I_{sub}(i,j) < local_{thr}$ then
  6:   $I_{localthr}(i,j) \leftarrow I_{sub}(i,j) \leftarrow 0$
  7: end if
  8: $points \leftarrow \mathrm{cornerpeaks}(I_{localthr})$
  9: function $\mathrm{MarrFilter}(I(X), scale)$
 10:   $\delta = 2^{scale}$
 11:   $x = -2\,\mathrm{fix}(\delta) : 1 : 2\,\mathrm{fix}(\delta)$
 12:   $y = -2\,\mathrm{fix}(2\delta) : 1 : 2\,\mathrm{fix}(2\delta)$
 13:   $M_i(X) \leftarrow X \leftarrow \mathrm{meshgrid}(x, y)$
 14:   $I_{fil} \leftarrow I(X) * M_i(X)$
 15: end function
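A possible Matlab rendering of Algorithm 1 is sketched below; the kernel support, the assumed scale parameter λ_i = 2^{-i}, and the threshold ratio r are our own illustrative choices and may differ from the authors' implementation.

```matlab
% Sketch of the scale-interaction response of Eqs. (12)-(14).
% Border effects and parameter tuning are ignored for brevity.
function points = marrScaleInteraction(I, i, j, r)
    Ri = conv2(I, marrKernel(i), 'same');     % response at scale i
    Rj = conv2(I, marrKernel(j), 'same');     % response at scale j
    Q  = abs(Ri - Rj);                        % Eq. (13) with gamma = 1
    thr = r * max(Q(:));                      % keep only strong responses
    Q(Q < thr) = 0;
    % local maxima of the thresholded response are candidate feature points
    mask = imregionalmax(Q) & (Q > 0);        % imregionalmax: Image Processing Toolbox
    [y, x] = find(mask);
    points = [x, y];
end

function K = marrKernel(scale)
    lambda = 2^(-scale);                      % assumed scale parameter
    half = 2 * 2^scale;                       % support similar to Algorithm 1
    [X, Y] = meshgrid(-half:half, -half:half);
    R2 = X.^2 + Y.^2;
    K = lambda * (2 - lambda^2 * R2) .* exp(-lambda^2 * R2 / 2);  % Eq. (12)
end
```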

4.3. Entropy and Response

To address the problems of uneven feature distribution, too many feature points, and long matching times, a feature point extraction method based on local entropy and feature point response is proposed, called the entropy and response (ER) algorithm. Its three main parts are explained in this section.

4.3.1. Entropy Algorithm

In real images, feature points often appear as sharp changes of gray values or inhomogeneities in the grayscale distribution; that is, the local region of a feature point carries a large amount of information. Entropy is a measure of the information in an image, and local entropy is a measure of the information in a local area of an image. The local entropy value of a feature-rich region is much higher than that of a feature-poor region. Therefore, it is possible to determine which regions contain more features by calculating the local information entropy of the image, and then to extract feature points in these regions.
Information entropy [49,50] represents the overall characteristics of a source in a common sense. It is considered from the statistical properties of the entire source and measures the expected value of a random variable. An image is essentially a source of information that can be described by information entropy. Let the gray image $G$ have $m$ gray levels, and let mesh division produce $n \times n$ sub-regions. The information entropy $H_j$ of a sub-region and the average entropy $\bar{H}$ of the image are calculated as follows:
$H_j = -\sum_{i=1}^{m} p_i \log_2 p_i$  (15)
$\bar{H} = \dfrac{1}{n^2} \sum_{j=1}^{n^2} H_j$  (16)
where $p_i$ is the probability that the $i$-th gray level appears, that is, the ratio of the number of pixels whose gray value is $i$ to the total number of pixels considered. The local entropy is thus computed from the probability of occurrence of the gray levels in the sub-image. Since the value of the information entropy is only related to the distribution of the local gray-scale pixels and is independent of any single pixel, it is not sensitive to noise and can improve the accuracy of the image description. Here, we use the local information entropy of the image to extract feature points. Under the meshing strategy, the image is divided into $n \times n$ sub-regions, and the entropy of each sub-grid is calculated. Figure 5 is a schematic diagram of the mesh division, with $n$ set to 40.
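The following Matlab sketch illustrates Equations (15) and (16) on an n × n mesh; the assumption of 256 gray levels (m = 256) and the exact block partitioning are illustrative choices.

```matlab
% Sketch of the local entropy of Eqs. (15)-(16): the image is split into
% n x n blocks and the gray-level entropy of each block is computed.
% Assumes I is a grayscale image with values in [0,255].
function [H, Hbar] = blockEntropy(I, n)
    I = double(I);
    [rows, cols] = size(I);
    rEdges = round(linspace(1, rows + 1, n + 1));
    cEdges = round(linspace(1, cols + 1, n + 1));
    H = zeros(n, n);
    for u = 1:n
        for v = 1:n
            block = I(rEdges(u):rEdges(u+1)-1, cEdges(v):cEdges(v+1)-1);
            counts = histcounts(block(:), 0:256);    % gray-level histogram
            p = counts / numel(block);
            p = p(p > 0);                            % skip empty bins
            H(u, v) = -sum(p .* log2(p));            % Eq. (15)
        end
    end
    Hbar = mean(H(:));                               % Eq. (16)
end
```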

4.3.2. Response Algorithm

After mesh division and the computation of each local entropy, we obtain $n \times n$ sub-regions for the whole image, and the detected feature points are mapped into their respective sub-areas. If we simply computed and sorted the entropy values of all extracted feature points, the first $N$ feature points with the largest entropy values could be selected to describe the whole image. However, this would only use the entropy values of the feature points, without considering their distribution in the image. Feature points with high entropy values may then mostly appear in the same local area, which causes aggregation. A block division and response algorithm is therefore proposed to deal with this problem.
As mentioned before, if a pixel presents a sharp change in its neighborhood, this pixel has a strong deviation from the mean value. Based on the Bresenham discrete circle [51] with the pixel point $p_i$ as the center and three pixels as the radius, the 16 pixels on the discrete circumference are considered together with the central pixel point $p_i$, as shown in Figure 6. These 16 pixels are assigned to dark and bright areas. The dividing criteria and the deviation [52] are respectively defined as follows:
$S_{bright} = \{ x \mid I_{p_i,x} > I_{p_i} + t \}$  (17)
$S_{dark} = \{ x \mid I_{p_i,x} \le I_{p_i} - t \}$  (18)
$Dev = \max\Big( \sum_{x \in S_{bright}} |I_{p_i,x} - I_{p_i}| - t,\ \sum_{x \in S_{dark}} |I_{p_i} - I_{p_i,x}| - t \Big)$  (19)
where $S_{bright}$ indicates the bright area and $S_{dark}$ the dark area, $I_{p_i}$ is the gray value of the center point $p_i$, $I_{p_i,x}$ is the gray value of a pixel labeled $x$ on the discrete circumference centered at pixel $p_i$, and $t$ is a set threshold. $Dev$ is the larger of the summed deviations between the gray value of pixel $p_i$ and its neighboring pixels located in the bright or dark area.
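A minimal Matlab sketch of the deviation score of Equations (17)-(19) is given below; the 16 circle offsets follow the usual FAST-style Bresenham circle of radius 3, and border handling is omitted (the point is assumed to lie at least three pixels from the image border).

```matlab
% Sketch of the deviation response of Eqs. (17)-(19) for one candidate
% point at (row, col); t is the brightness threshold.
function dev = deviationScore(I, row, col, t)
    offs = [ 0  3;  1  3;  2  2;  3  1;  3  0;  3 -1;  2 -2;  1 -3; ...
             0 -3; -1 -3; -2 -2; -3 -1; -3  0; -3  1; -2  2; -1  3];
    Ip = double(I(row, col));
    ring = double(I(sub2ind(size(I), row + offs(:,1), col + offs(:,2))));
    bright = ring > Ip + t;                          % S_bright, Eq. (17)
    dark   = ring <= Ip - t;                         % S_dark,  Eq. (18)
    dev = max(sum(abs(ring(bright) - Ip)) - t, ...
              sum(abs(Ip - ring(dark))) - t);        % Eq. (19)
end
```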

4.3.3. Distribution Criterion

After the calculation of the entropy and response, a distribution criterion is proposed to extract the corresponding points that meet the requirements. In the regions where the local entropy is larger than the average entropy $\bar{H}$, the feature points are extracted with a ratio $r$. In each remaining region, where the local entropy is smaller than $\bar{H}$, only the single strongest response point is extracted. Here, we choose a unified ratio for the selection of $r$: assuming there are $m$ regions whose local entropy is greater than $\bar{H}$, $r_i$ ($i = 1, 2, \ldots, m$) is the ratio of extracted feature points in the $i$-th region. For example, when $r = 10\%$, the feature points with the top 10% responses given by $Dev$ are extracted in each region with large local entropy. The appropriate value of $r$ is set empirically.
The entire ER method is outlined in Algorithm 2 (a Matlab sketch follows the algorithm). Based on the mesh division strategy, we first compute the local average entropy value of each sub-region. Then the feature points are sorted according to their deviation values within the sub-region to which they belong. Finally, the distribution criterion not only effectively reduces the number of useless feature points, but also ensures the uniformity of the feature point distribution. Both the entropy and the response step can be used to identify feature points. The entropy strategy only measures the gray-level distribution of a region, whereas the response is based on the deviation between a center point and its adjacent points on the Bresenham discrete circle, so the response better reflects points with reliable contrast. Therefore, the deviation is more capable of extracting qualified feature points than entropy alone.
Algorithm 2 Entropy and response algorithms
Input: $I(X)$, points, $m$, $n$, $t$, $r$
Output: $points_{select}$
  1: for $j = 1 \ldots n^2$ do
  2:   $points_j \leftarrow points$
  3:   $H_j, \bar{H} \leftarrow \mathrm{Entropy}(I(X), m, n)$
  4:   $p_{sub}(j) \leftarrow \mathrm{Response}(points_j, t)$
  5:   if $H_j \ge \bar{H}$ then
  6:     $points_{select} \leftarrow p(p_{sub}(j) \cdot r)$
  7:   else
  8:     $points_{select} \leftarrow p_{max}$
  9:   end if
 10:   $j = j + 1$
 11: end for
 12: return $points_{select}$
 13: function $\mathrm{Entropy}(I(X), m, n)$
 14:   $p_i\,(m \times 1) \leftarrow I_{sub}(i)\,(n \times n) \leftarrow I(X)$
 15:   $H = -\sum_{i=1}^{m} p_i \log_2 p_i$
 16:   $\bar{H} = \frac{1}{n^2} \sum_{j=1}^{n^2} H_j$
 17: end function
 18: function $\mathrm{Response}(points, t)$
 19:   $I_{p,x} \leftarrow circle_p \leftarrow points$
 20:   $Dev = \max\big( \sum_{x \in S_{bright}} |I_{p,x} - I_p| - t,\ \sum_{x \in S_{dark}} |I_p - I_{p,x}| - t \big)$
 21:   $p_{sub} \leftarrow p(Dev_p)$
 22: end function
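Combining the two previous sketches, the distribution criterion of Algorithm 2 could be applied as follows; the data structures and the handling of block boundaries are our own illustrative choices, and the helpers blockEntropy and deviationScore refer to the sketches given earlier in this section.

```matlab
% Sketch of the ER selection: keep the top-r fraction of points (ranked by
% Dev) in blocks whose entropy exceeds the average, and only the single
% strongest point elsewhere. pts is an N x 2 list of [x y] coordinates,
% all assumed to lie at least 3 pixels from the image border.
function sel = erSelect(I, pts, n, t, r)
    [H, Hbar] = blockEntropy(I, n);              % entropy sketch above
    [rows, cols] = size(I);
    bu = min(ceil(pts(:,2) / (rows / n)), n);    % block row index of each point
    bv = min(ceil(pts(:,1) / (cols / n)), n);    % block column index
    dev = arrayfun(@(k) deviationScore(I, pts(k,2), pts(k,1), t), (1:size(pts,1))');
    sel = zeros(0, 2);
    for u = 1:n
        for v = 1:n
            idx = find(bu == u & bv == v);
            if isempty(idx), continue; end
            [~, order] = sort(dev(idx), 'descend');
            if H(u, v) >= Hbar
                keep = idx(order(1:max(1, round(r * numel(idx)))));
            else
                keep = idx(order(1));            % strongest response only
            end
            sel = [sel; pts(keep, :)];
        end
    end
end
```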

4.4. Definition of First-Order GM Problem Based on NCC

The purpose of graph matching [53] is to determine the correct attribute correspondences between two graphs $P = (V^P, E^P)$ and $Q = (V^Q, E^Q)$, where $V$ denotes the vertices and $E$ the edges. We note the corresponding mapping edges $e_1 = ij \in E^P$ and $e_2 = ab \in E^Q$.
The objective of graph matching is to find the correct corresponding point pairs between two graphs P and Q among the feature points extracted. A unidirectional ’one-to-one’ constraint is assumed, which requires one node in P to match at most one node in Q.
Cross-correlation is a standard method for estimating the similarity between two sets of data [54]. Normalized cross-correlation (NCC) is widely used in many signal processing applications due to its effective and direct representation in the frequency domain, and it is less sensitive to linear variations in the amplitude of the two compared signals [55]. We use the NCC algorithm to measure the similarity between a feature point $p$ in graph $P$ and a feature point $q$ in graph $Q$. The calculated correlation ratios $R_a(p,q)$ represent the degree of matching between the two sets of corresponding images. The NCC measure between a window near feature point $p$ and a window around feature point $q$ is defined as:
$R_a(p,q) = \dfrac{\sum_i (W_{p_i} - \bar{W_p})(W_{q_i} - \bar{W_q})}{\sqrt{\sum_i (W_{p_i} - \bar{W_p})^2}\ \sqrt{\sum_i (W_{q_i} - \bar{W_q})^2}}$  (20)
where the summations run over all window coordinates, $W_{p_i}$ and $W_{q_i}$ are the pixel intensities in the windows around $p$ and $q$ respectively, each window being of size $5 \times 5$, and $\bar{W_p}$ and $\bar{W_q}$ are the corresponding means of the window pixels. The coordinate of the maximum value of this normalized cross-correlation gives the position of the best match with respect to the reference image.
Based on the NCC similarity measure, we use the nearest neighbor ratio (NNR) method to perform a coarse match on the feature point sets. The selection and matching process of the NNR algorithm with the one-way 'one-to-one' constraint is as follows. Based on the candidate feature point pairs in the two images, the NCC algorithm first extracts, for each point, the most significant corresponding pair. These ratios are then compared with a fixed threshold: if the NCC ratio is higher than this fixed threshold, the corresponding point pair is considered a match; otherwise, the pair of points is discarded. The fixed threshold is usually a constant not greater than 0.9. Since correct matches have stronger similarity than incorrect ones, this is a functional judging criterion for graph matching according to the NNR concept. Figure 7 shows a flowchart of the data processing. The detected feature points of the two images are respectively put into two corresponding buffers. Each point $p$ in graph $P$ is used to calculate the NCC ratio to all points $q$ in graph $Q$, and all ratios are then sorted in descending order. Under the NNR principle, some pairs are extracted as the best matching points; the others are discarded.
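The following Matlab sketch illustrates the NCC measure of Equation (20) combined with the NNR-style acceptance test described above; the window size and the threshold value are illustrative parameters, not the exact settings of the paper.

```matlab
% Sketch of NCC + NNR matching: for every point p in P, its best NCC match
% q in Q is kept if the score exceeds a fixed threshold thr (e.g., 0.9).
% P and Q are N x 2 lists of [x y] coordinates; win is the window size.
function matches = nccMatch(I1, I2, P, Q, win, thr)
    half = floor(win / 2);
    matches = zeros(0, 2);
    for p = 1:size(P, 1)
        Wp = getWindow(I1, P(p, :), half);
        if isempty(Wp), continue; end
        best = -Inf; bestq = 0;
        for q = 1:size(Q, 1)
            Wq = getWindow(I2, Q(q, :), half);
            if isempty(Wq), continue; end
            a = Wp(:) - mean(Wp(:));  b = Wq(:) - mean(Wq(:));
            r = (a' * b) / (norm(a) * norm(b) + eps);   % Eq. (20)
            if r > best, best = r; bestq = q; end
        end
        if best > thr                                   % NNR acceptance test
            matches(end + 1, :) = [p, bestq];
        end
    end
end

function W = getWindow(I, pt, half)
    % extract a (2*half+1)^2 window around pt = [x, y]; empty near borders
    x = pt(1); y = pt(2);
    if x <= half || y <= half || x > size(I,2)-half || y > size(I,1)-half
        W = []; return;
    end
    W = double(I(y-half:y+half, x-half:x+half));
end
```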

4.5. Outlier Elimination

In this application, before convolving with the Marr filter, we use the Laplacian for convolution as a pre-processing step. Due to the linearity of these two operations, this corresponds to a single convolution, which acts as a smoothed fourth-order derivative filter. Since each derivative amplifies the noise, the number of feature points increases. On the other hand, although the NNR method is easy to implement and often well matched, some points in the extracted feature point sets do not match, so a mismatch cleanup operation is required. It is therefore particularly important to reduce the mismatches caused by interference.
The RANSAC (Random Sample Consensus) algorithm [56,57] can effectively eliminate mismatches. The algorithm is robust and able to correct data sets. Its basic idea is to use an iterative method to extract sample sets from the model: find an optimized parametric model that includes as many interior points of the data set as possible, and then test the extracted samples using the residual set. Points that fit the data set model are called inliers; otherwise, they are called outliers. The RANSAC algorithm can therefore be used to find the best parametric model in a data set containing outliers through an iterative procedure. The detailed implementation of RANSAC is as follows:
(a) Randomly extract $a$ non-collinear pairs of feature points from the data set ($a = 4$ in the experiments), calculate their transformation homography matrix $H$, and record it as model $M$.
(b) Calculate the projection error of each point in the data set with model $M_k$. If the error is less than a predefined threshold $\tau$, add the point to the inlier set $I_k$.
(c) If the current number of elements in the inlier set $I_k$ is greater than the number in the optimal inlier set $I_{best}$, update $I_{best}$ and re-estimate the model $M_{best}$.
(d) If the number of iterations exceeds $k$, exit; otherwise, increase the number of iterations by 1 and repeat the above steps.
The threshold $\tau$ is selected according to the $n$-dimensional chi-square distribution, where $\chi$ denotes the cumulative chi-square distribution. Assume that the outliers are white Gaussian noise with zero mean and variance $\eta$. The number of iterations $k$ is updated, rather than fixed, until it exceeds the maximum number of iterations.
$\tau^2 = \chi_n^{-1}(\mu)\, \eta^2$  (21)
$k = \dfrac{\log(1 - p_c)}{\log(1 - \mu^a)}$  (22)
where $p_c$ is the confidence level, generally taken between 0.95 and 0.99, $\mu$ is the inlier ratio, and $a$ is the minimum number of samples required to estimate the model. The pseudo-code of RANSAC is outlined in Algorithm 3.
Algorithm 3 Random Sample Consensus (RANSAC) algorithm
Input: $p_c$, $k_{max}$, $\tau$, $a$, $m$
Output: $M_{best}$, $I_{best}$
  1: $k = 0$, $I_{max} = 0$
  2: for $k < k_{max}$ do
  3:   $\tau^2 = \chi_n^{-1}(\mu)\, \eta^2$
  4:   Use a randomly sampled subset of size $a$ to estimate $M_k$ and $I_k$
  5:   if $|I_k| > I_{max}$ then
  6:     $M_{best} = M_k$, $I_{best} = I_k$
  7:     $\mu = |I_{best}| / m$, $k_{max} = \log(1 - p_c) / \log(1 - \mu^a)$
  8:   end if
  9:   $k = k + 1$
 10: end for
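A compact Matlab sketch of the adaptive RANSAC loop of Algorithm 3 is given below; estimateModel and projError are hypothetical placeholders standing for the homography estimation and re-projection error used by the authors, and the loop only illustrates the adaptive update of Equation (22).

```matlab
% Sketch of Algorithm 3. pts1, pts2 are m x 2 matched coordinates,
% tau the inlier threshold, pc the confidence, a the minimal sample size.
function [Mbest, Ibest] = ransacSketch(pts1, pts2, tau, pc, a, kmax)
    m = size(pts1, 1);
    Ibest = []; Mbest = []; k = 0;
    while k < kmax
        sample = randperm(m, a);                            % random minimal subset
        Mk = estimateModel(pts1(sample,:), pts2(sample,:)); % placeholder (e.g., homography)
        err = projError(Mk, pts1, pts2);                    % placeholder re-projection error
        Ik = find(err < tau);                               % inlier set
        if numel(Ik) > numel(Ibest)
            Ibest = Ik; Mbest = Mk;
            mu = numel(Ibest) / m;                          % inlier ratio
            kmax = log(1 - pc) / log(1 - mu^a);             % Eq. (22)
        end
        k = k + 1;
    end
end
```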

4.6. Parameters of the Proposed Algorithm

This section details the specific parameters used in the experiments. In the feature point detection part, the Marr wavelet algorithm defines feature points as local maxima of the scale-interaction image (with $\gamma = 1$); the two scales chosen are $i = 1$ and $j = 2$. Then mesh division and feature point extraction are performed: the image is meshed into $n \times n$ sub-regions, with $n = 40$. The detected feature points are mapped into the sub-regions and sorted by their deviation value $Dev$ within the sub-region to which they belong. At the same time, each local information entropy $H_j$ and the average information entropy $\bar{H}$ are calculated. Assuming there are $k$ sub-regions with local information entropy greater than the average, the feature points with the top 30% responses are extracted from these $k$ regions. In the feature point matching part, the NCC similarity measure and the NNR method are used to coarsely match the feature point sets. For the matching point filtering part, RANSAC removes the unmatched points to obtain more accurate matching results. The computational complexity is $O(\max(N_P, N_Q))$, where $N_P$ is the number of features in the reference image and $N_Q$ the number of features in the query image. The algorithm is suitable for real-time global methods and can also be implemented efficiently in parallel on GPU systems. Algorithm 4 outlines the entire process of the proposed algorithm.
Algorithm 4 The proposed algorithm
Input: images $I_1$ and $I_2$
  1: $I_1$ and $I_2$ are processed with the Laplace filter
  2: $(I_1, I_2) = \mathrm{MarrWaveletsFunction}(I_1, I_2)$
  3: $(points_1, points_2) \leftarrow (cornerpeaks_1, cornerpeaks_2) \leftarrow (I_1, I_2)$
  4: $(points_{select1}, points_{select2}) = \mathrm{EntropyResponseFunction}(points_1, points_2)$
  5: $(points_{select1}, points_{select2}) = \mathrm{RANSACFunction}(points_{select1}, points_{select2})$
  6: $MatchingPairs = \mathrm{NCCandNNR}(points_{select1}, points_{select2})$

5. Second-Order NCC Based GM

First-order GM relies on convolution-based algorithms, whereas second-order GM emphasizes geometric inter-feature relationships, transforming the correspondence problem into a purely geometric problem stated in a high-dimensional space, generally modeled as an integer quadratic program. This section presents our second application: a contribution to second-order graph matching in the Matlab framework. The framework is based on the original Matlab application provided by Cho et al. [18]. This application constitutes a useful framework for graph matching as an IQP problem. It offers useful mathematical abstractions, and it allows us to develop and compare many algorithms on a common evaluation platform, sharing input data, while also customizing the affinity matrices and the matching list of candidate solution pairs given as input. This allows us to reuse these common data and this context to build NCC algorithms for second-order graph matching (as discussed in detail in Section 4.4). The approach uses the NCC algorithm to search for the indicator vector, and the matching score is then computed under the IQP formulation. By considering the second-order term, the algorithm determines a mapping between the two graphs that should reflect the geometric similarity relationships between the pairwise matching features. All the algorithms are executed and compared within the same experimental framework, with common data from standard benchmarks in the domain.
We denote by $P$ and $Q$ the two feature sets of the query graph $G^P = (P, E^P)$ and the reference graph $G^Q = (Q, E^Q)$, respectively. We note $i, j \in P$ and $a, b \in Q$ as feature points, and $ij \in E^P$ and $ab \in E^Q$ as edges. Also, $e_1 = (i, a)$ and $e_2 = (j, b)$ represent, when needed, candidate assignments. The main task is to find a suitable one-to-one mapping between $P$ and $Q$. The feature point correspondence mapping is shown in Figure 1, where the yellow lines are correct matches.
The affinity matrix $M$, also known as the affinity tensor, organizes the mutual similarities between sets of feature points. The affinity measurement can be interpreted as a product of the matrix with a solution vector $x$ that represents the set of candidate correspondences. The solution variable $x \in \{0,1\}^{N_P N_Q}$ is an indicator vector such that $x_{ia} = 1$ means feature $i \in P$ matches feature $a \in Q$, and $x_{ia} = 0$ means no correspondence, where $N_P$ and $N_Q$ are the respective sizes of $P$ and $Q$.
A graph matching score S between edges can be defined by the following equation:
$S = \sum_{ij \leftrightarrow ab} f(ij, ab) = \sum_{ia,jb} M(ia, jb)\, x_{ia} x_{jb} = x^T M x,$  (23)
where $ij \leftrightarrow ab$ means that $(i, a)$ and $(j, b)$ are correspondence pairs and $x$ is the indicator vector. The purpose of the graph matching IQP problem is then to compute the solution $x^*$ that maximizes the matching score as follows:
$x^* = \arg\max_x (x^T M x),$  (24)
$\text{s.t. } x \in \{0,1\}^{N_P N_Q},$  (25)
$\forall i, \ \sum_{a=1}^{N_Q} x_{ia} \le 1,$  (26)
$\forall a, \ \sum_{i=1}^{N_P} x_{ia} \le 1.$  (27)
The binary constraint is expressed by Equation (25), while (26) and (27) express the two-way constraints that force the solution to be a one-to-one mapping from $P$ to $Q$. Note that by removing constraint (27), we obtain a many-to-one mapping, that is, a (partial) function from $P$ to $Q$. In this section, both constraints must be verified.
The affinity matrix $M$, which consists of the relational similarity values between edges and nodes, is considered an input of the problem. Its size is defined by the total number of candidate assignment pairs considered. The affinity matrix size may thus vary from $O((N_P N_Q)^2)$, in the case of all possible pairs, to $O((K \times N_P)^2)$, where $K$ is some constant, in the case of a restricted list of candidate pairs. Note that this list of candidate pairs must be added as part of the input to relate the entries of the affinity matrix to the feature points. The size of the indicator variable $x$ varies accordingly with the symmetric affinity matrix size: its length corresponds to the column (or line) size of the matrix and may vary from $N_P \times N_Q$ to $K \times N_P$ depending on the application.
Here, the matching score is retained as pairwise geometric only. The individual affinity $M(e_1, e_1)$, which represents the first-order affinity, is set to zero since there is no information about individual affinities; that is to say, all the diagonal values of the affinity matrix are zero. The pairwise affinity $M(e_1, e_2) = M(ia, jb)$ between edges is given by:
$M(ia, jb) = \max(50 - d_{ia;jb},\ 0),$  (28)
where $d_{ia;jb}$ is the mutual projection error function used in [58] between the two candidate assignments $(i, a)$ and $(j, b)$, which includes the Euclidean distance $d_{ij}$ between the locations of features $i$ and $j$. Table 1 summarizes the notations and definitions used in this paper.
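To make the construction concrete, the following Matlab sketch builds the pairwise affinity matrix of Equation (28) for a restricted candidate list and evaluates the score of Equation (23); the distance function d is a hypothetical placeholder (a function handle) standing for the mutual projection error of [58].

```matlab
% Sketch of the second-order affinity and score. L is an nc x 2 list of
% candidate assignments [i a]; x is a binary nc x 1 indicator vector;
% d is a placeholder function handle: d([i a], [j b]) -> scalar error.
function [M, score] = pairwiseScore(L, x, d)
    nc = size(L, 1);                 % number of candidate assignments
    M = zeros(nc, nc);
    for e1 = 1:nc
        for e2 = 1:nc
            if e1 ~= e2
                M(e1, e2) = max(50 - d(L(e1,:), L(e2,:)), 0);  % Eq. (28)
            end                       % diagonal (first-order term) stays 0
        end
    end
    score = x' * M * x;              % Eq. (23)
end
```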

6. Experimental Evaluation

6.1. First-Order GM Experiment

The experiments were performed on an Intel(R) Core(TM) i5-4590 3.3 GHz CPU. In this section, we evaluate the proposed feature point matching method using the Visual Geometry Group dataset (https://www.robots.ox.ac.uk/~vgg/data/). The description is divided into two parts: feature point extraction and feature matching. For quantitative evaluation, a set of experiments was performed. The proposed algorithm, based on the Laplace filter and Marr wavelets under the entropy method (L_Marr_E), was compared with other classic methods: the corner detector [9], Gilles [7], Harris [6], LoG [8], and SIFT [11] algorithms.

6.1.1. Feature Points Extraction

Based on the above theory, the following experimental verification was performed. First, we verified the importance of the Laplacian filtering algorithm at the feature point extraction stage. Figure 8b,c show comparison graphs before and after adding a Laplacian filter. We can clearly see that using the Laplace filter greatly increases the number of detected feature points. Figure 9 shows the feature point detection results of the different algorithms: CD, Gilles, Harris, LoG, SIFT, and L_Marr_E. Correspondingly, Table 2 details the number of feature points extracted in Figure 9. Compared with the conventional methods, our method extracts more feature points.

6.1.2. Feature Points Matching

The following experiments verify the contribution of entropy-based refinement to feature matching. Combined with the fragile feature selection stage, an ablation analysis provides valuable intuition about the pipeline at the feature matching stage. Figure 10 shows the comparison results; we found that the RANSAC algorithm thoroughly eliminates the mismatched points. Besides, the NCC algorithm based on the Laplacian filter and Marr wavelet (L_Marr) not only has higher matching accuracy but also yields more correct feature matches than the method without the Laplace filter (Marr).
We then conducted another ablation study. We evaluated the five groups of images used in Figure 9 and report their averages in Table 3, which summarizes the number of image matching pairs, the recall, and the computation time. The graph matching recall [59] is defined as follows:
$Recall = N_{rm} / N_{tm}$  (29)
where $N_{rm}$ is the number of detected true matches after RANSAC removes the mismatched points, and $N_{tm}$ is the total number of correspondences. From Table 3, we find that the use of the Laplace filter increases the recall rate but also increases the computing time, whereas the use of entropy greatly improves efficiency while essentially preserving the recall value. The two therefore complement each other, and the combination of Laplace and entropy best realizes the advantages of the proposed algorithm. As shown in Table 3, the accuracy of the global L_Marr_E method remains very high, only slightly lower than that of the L_Marr method without entropy, whereas a substantial computation time acceleration by a factor of 5 is a benefit of the entropy-based method.
We then selected the L_Marr_E method, which provided the best compromise between accuracy and computation time in the previous tests, as a competitor against other feature point selection methods from the literature. We conducted another experiment based on test data from the University of Oxford's Object Category dataset, which contains 13 different image sets; seven of them were selected for comparative analysis in this section. Table 4 shows the average over all image pairs selected from the dataset. The best recall results were obtained with L_Marr_E, which produced the most significant feature points in the two query images. It should be noted that, compared to standard feature point extraction methods, our method provided the best accuracy despite a slight increase in computation time. While entropy accelerated feature point selection within our Laplacian and Marr wavelet framework, as shown in Table 3, it conversely improved accuracy against the other independent standard feature point selection methods, as shown in Table 4. Even when compared to the well-known SIFT method, our method remained competitive according to the trade-off between time and accuracy, accuracy being improved at a slight expense of computation time. Note that L_Marr_E also provided the highest number of feature points. Figure 11 shows the corresponding links between images for the different algorithms: CD, Gilles, Harris, LoG, SIFT, and L_Marr_E. The proposed algorithm therefore obtains a better matching effect.

6.2. Second-Order GM Experiment

In this subsection, experiments were conducted on an Intel(R) Core(TM) i5-4590 3.3 GHz CPU. We performed experiments on the CMU house image database and a real image database. Accuracy, objective score, and time are the most important parameters in the field of graph matching, and we use them as criteria when comparing with other algorithms. The datasets used in this paper, such as the CALTECH and CMU house datasets, are the most popular databases for state-of-the-art algorithms. Although many other databases exist, this article uses these popular databases to facilitate comparative studies by researchers working on graph matching. The proposed NCC algorithm was compared with two state-of-the-art methods, RRWM [18] and SM [29]. All of these algorithms share the same images, feature points, and affinity matrices as input data. Each test set has a ground-truth solution for accuracy evaluation.

6.2.1. Presentation

Performance evaluation can be done by computing the affinity score, but more importantly by evaluating accuracy according to a ground-truth set of true assignment pairs, which reflects the application requirement of graph matching. We also evaluate the computation time. Based on the above IQP, the objective score is obtained by formula (4). Accuracy is obtained by dividing the actual number of correct matches detected by the maximum number of ground-truth pairs that could be returned, with the following formula:
$Accuracy = (x^{*\top} V_{GT}^{bool}) / max_{GT},$  (30)
where $x^*$ is the binary solution vector returned by the algorithm, $V_{GT}^{bool} \in \{0,1\}^{N_P N_Q}$ is a binary vector representing the ground-truth pairs, and $max_{GT}$ is the maximum number of true assignments that could be returned when considering a many-to-one mapping, i.e., relaxing constraint (2). However, in this section, the solutions returned by the algorithms must verify a one-to-one mapping; they satisfy both constraints (2) and (3).
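As a minimal Matlab sketch, the accuracy of Equation (30) can be computed as follows, assuming x and gt are the binary solution and ground-truth vectors and maxGT the number of reachable ground-truth assignments.

```matlab
% Sketch of Eq. (30): x and gt are binary vectors of length NP*NQ.
accuracy = dot(double(x), double(gt)) / maxGT;   % fraction of true matches found
```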

6.2.2. CMU House Image Matching

Experiments using the CMU house sequence dataset (http://vasc.ri.cmu.edu/idb/html/motion/) evaluate sequence matching of the same object. The 110 pictures in this dataset are divided into different sequence gaps (from 10 to 100 with an interval of 10), so we end up with ten sets of image pairs. Each pair consists of an initial fixed-position picture (sequence 1) and a transformed version of it. To evaluate matching accuracy, 30 landmark feature points were manually tracked and labeled as ground truth on all frames. In this classic CMU test, the proposed NCC method was competitive with the RRWM algorithm. The experimental results are shown in Figure 12, and the detailed quantitative evaluation is recorded in Table 5. From Table 5, we find that in this typical single-object test with only viewing angle changes, the proposed NCC algorithm was comparable to RRWM in terms of accuracy and score, while costing less computing time than RRWM.

6.2.3. Real Image Matching

For the real image matching experiments, we used the CALTECH database (https://cv.snu.ac.kr/research/~RRWM/) customized by Cho et al. [18]. This database contains 30 different image pairs, and all the ground-truth correspondences are manually pre-labeled. Accuracy and objective score are the main matching criteria. Table 6 reports the average results over the 30 image pairs obtained by the RRWM, SM, and NCC algorithms. As can be seen from the table, the NCC algorithm did not perform better than the RRWM and SM algorithms. Figure 13 shows the visualization of the feature point connections; correct and incorrect matches are marked with yellow and black lines, respectively.
Since second-order graph matching based on IQP is an NP-hard problem, various approximate solutions have been proposed for the pairwise-similarity correspondence problem. Leordeanu and Hebert proposed the spectral matching (SM) algorithm [29], which recovers the main cluster of the affinity matrix by computing its principal eigenvector. The reweighted random walks for graph matching (RRWM) algorithm was introduced by Cho et al. [18]; it combines the mapping constraints with a re-weighted jump scheme. Both methods implement an iteration loop to compute a principal eigenvector, a technique that belongs to the power iteration family. Although such an iteration cannot guarantee reaching the global optimum, it converges to a fixed point. The NCC algorithm, in contrast, is a single-pass rather than an iterative method: once the normalized cross-correlation computation is finished, the whole matching process is completed, and no iterations are run until a solution x converges. This explains why the NCC algorithm does not reach the expected optimization quality in this second-order matching setting.
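To make this contrast concrete, the core of the spectral family of methods is a power iteration on the affinity matrix followed by a discretization step. The sketch below is a simplified rendition of that idea (fixed iteration count, greedy one-to-one discretization, and a candidate ordering assumed to be idx = i * N_Q + a); it is not the reference SM or RRWM code.

```python
import numpy as np

def spectral_match(M, n_p, n_q, iters=100):
    """Power iteration on the affinity matrix M (size n_p*n_q x n_p*n_q) to
    obtain its principal eigenvector, then greedy one-to-one discretization."""
    v = np.ones(n_p * n_q)
    for _ in range(iters):          # iterative part: repeat until (near) convergence
        v = M @ v
        v /= np.linalg.norm(v)
    # Greedy discretization enforcing the one-to-one mapping constraints.
    x = np.zeros_like(v)
    used_p, used_q = set(), set()
    for idx in np.argsort(-v):
        i, a = divmod(idx, n_q)     # candidate assignment (i in G^P, a in G^Q), assumed ordering
        if i not in used_p and a not in used_q:
            x[idx] = 1
            used_p.add(i)
            used_q.add(a)
    return x
```

The NCC-based matcher, by contrast, assigns features in a single pass from the correlation scores, which is consistent with its lower running times in Tables 5 and 6 and with its weaker objective scores on the harder CALTECH pairs.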

7. Conclusions

We studied different variants of the feature correspondence problem on the Matlab platform, in order to reuse and provide state-of-the-art solution methods, together with the experimental protocols, input data, and evaluation and comparison tools needed to compare against existing sequential algorithms, which are most often developed in the Matlab framework. While feature extraction methods are numerous, it is not straightforward that each of them can adequately represent the objects to be matched from a query to a reference image. In order to vary the size of the feature set while preserving a reasonable recall rate in graph matching, we proposed a new combination of filters with an entropy-response based selection method. First, a Laplace filter enhances the edges and details of the image. Second, Marr wavelets embedded in scale-interaction are used to detect feature points. Then, entropy and brightness response are used to extract the typical feature points; most importantly, this entropy-based selection greatly reduces the calculation time. Image matching is achieved by nearest-neighbor search with a normalized cross-correlation similarity measure. Finally, the RANSAC process removes outlier correspondences to refine the matching. The first-order comparison results show that our algorithm achieves a higher matching rate within a reasonable computation time, despite the increase in the number of feature points induced by the Laplace filter. Second-order graph matching is also realized with the NCC algorithm, which allows us to address graph matching and derived sub-problems in close relationship with the IQP models experimented with on the Matlab platform. We found that the second-order NCC approach performs competitively with the IQP-based methods on the CMU images in terms of accuracy and score, whereas its performance is less satisfactory on the real images of the CALTECH database. In future work, we will exploit the natural parallelism of the method on a GPU platform.

Author Contributions

Conceptualization, B.C. and J.-C.C.; methodology, B.C.; software, B.C.; validation, B.C. and J.-C.C.; formal analysis, J.-C.C.; investigation, B.C. and J.-C.C.; resources, B.C.; data curation, B.C.; writing—Original draft preparation, B.C.; writing—Review and editing, J.-C.C.; visualization, B.C.; supervision, J.-C.C.; project administration, J.-C.C.; funding acquisition, B.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the China Scholarship Council (201604490109).

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Tang, S.; Andres, B.; Andriluka, M.; Schiele, B. Multi-person tracking by multicut and deep matching. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 8–16 October 2016; pp. 100–111.
2. Mohamed, W.; Hamza, A.B. Reeb graph path dissimilarity for 3D object matching and retrieval. Vis. Comput. 2012, 28, 305–318.
3. Vu, N.S.; Caplier, A. Enhanced patterns of oriented edge magnitudes for face recognition and image matching. IEEE Trans. Image Process. 2011, 21, 1352–1365.
4. Juan, L.; Oubong, G. SURF applied in panorama image stitching. In Proceedings of the 2010 2nd International Conference on Image Processing Theory, Tools and Applications, Paris, France, 7–10 July 2010; pp. 495–499.
5. Leng, C.; Zhang, H.; Li, B.; Cai, G.; Pei, Z.; He, L. Local feature descriptor for image matching: A Survey. IEEE Access 2018, 7, 6424–6434.
6. Harris, C.; Stephens, M. A combined corner and edge detector. In Proceedings of the Alvey Vision Conference, Manchester, UK, 1988; pp. 147–151.
7. Gilles, S. Robust Description and Matching of Images. Ph.D. Thesis, Department of Engineering Science, University of Oxford, Oxford, UK, 1998.
8. Lindeberg, T. Feature detection with automatic scale selection. Int. J. Comput. Vis. 1998, 30, 79–116.
9. Derpanis, K.G. The Harris Corner Detector; York University: Toronto, ON, Canada, 2004.
10. Mikolajczyk, K.; Schmid, C. Scale & affine invariant interest point detectors. Int. J. Comput. Vis. 2004, 60, 63–86.
11. Lowe, D.G. Object recognition from local scale-invariant features. In Proceedings of the Seventh IEEE International Conference on Computer Vision, Kerkyra, Greece, 20–27 September 1999; Volume 2, pp. 1150–1157.
12. Kutter, M.; Bhattacharjee, S.K.; Ebrahimi, T. Towards second generation watermarking schemes. In Proceedings of the 1999 International Conference on Image Processing, Kobe, Japan, 24–28 October 1999; Volume 1, pp. 320–323.
13. Tang, C.W.; Hang, H.M. A feature-based robust digital image watermarking scheme. IEEE Trans. Signal Process. 2003, 51, 950–959.
14. Li, T.; Shi, J.; Li, X.; Wu, J.; Pan, F. Image Encryption Based on Pixel-Level Diffusion with Dynamic Filtering and DNA-Level Permutation with 3D Latin Cubes. Entropy 2019, 21, 319.
15. Wehrl, A. General properties of entropy. Rev. Mod. Phys. 1978, 50, 221–260.
16. Rényi, A. On measures of entropy and information. In Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Volume 1: Contributions to the Theory of Statistics; The Regents of the University of California: Oakland, CA, USA, 1961.
17. Cour, T.; Srinivasan, P.; Shi, J. Balanced graph matching. In Advances in Neural Information Processing Systems; MIT Press: Philadelphia, PA, USA, 2007; pp. 313–320.
18. Cho, M.; Lee, J.; Lee, K.M. Reweighted random walks for graph matching. In Proceedings of the European Conference on Computer Vision, Heraklion, Greece, 5–11 September 2010; pp. 492–505.
19. Cui, B.; Creput, J.C. Using Entropy and Marr Wavelets to Automatic Feature Detection for Image Matching. In Proceedings of the 2019 15th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS), Sorrento, Italy, 26–29 November 2019.
20. Torresani, L.; Kolmogorov, V.; Rother, C. Feature correspondence via graph matching: Models and global optimization. In Proceedings of the European Conference on Computer Vision, Marseille, France, 12–18 October 2008; pp. 596–609.
21. Yao, B.; Fei-Fei, L. Action recognition with exemplar based 2.5D graph matching. In Proceedings of the European Conference on Computer Vision, Florence, Italy, 7–13 October 2012; pp. 173–186.
22. Neuhaus, M.; Bunke, H. A graph matching based approach to fingerprint classification using directional variance. In Proceedings of the International Conference on Audio- and Video-Based Biometric Person Authentication, Hilton Rye Town, NY, USA, 20–22 July 2005; pp. 191–200.
23. Banerjee, B.; Bovolo, F.; Bhattacharya, A.; Bruzzone, L.; Chaudhuri, S.; Buddhiraju, K.M. A novel graph-matching-based approach for domain adaptation in classification of remote sensing image pair. IEEE Trans. Geosci. Remote. Sens. 2015, 53, 4045–4062.
24. Berg, A.C.; Berg, T.L.; Malik, J. Shape matching and object recognition using low distortion correspondences. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), San Diego, CA, USA, 20–25 June 2005; pp. 26–33.
25. Zhu, G.; Doermann, D. Logo matching for document image retrieval. In Proceedings of the 2009 10th International Conference on Document Analysis and Recognition, Barcelona, Spain, 26–29 July 2009; pp. 606–610.
26. Conte, D.; Foggia, P.; Sansone, C.; Vento, M. Graph matching applications in pattern recognition and image processing. In Proceedings of the 2003 International Conference on Image Processing (Cat. No. 03CH37429), Barcelona, Spain, 14–17 September 2003.
27. Foggia, P.; Percannella, G.; Vento, M. Graph matching and learning in pattern recognition in the last 10 years. Int. J. Pattern Recognit. Artif. Intell. 2014, 28, 1450001.
28. Gold, S.; Rangarajan, A. A graduated assignment algorithm for graph matching. IEEE Trans. Pattern Anal. Mach. Intell. 1996, 18, 377–388.
29. Leordeanu, M.; Hebert, M. A spectral technique for correspondence problems using pairwise constraints. In Proceedings of the Tenth IEEE International Conference on Computer Vision (ICCV’05), Beijing, China, 17–21 October 2005; pp. 1482–1489.
30. Cho, M.; Sun, J.; Duchenne, O.; Ponce, J. Finding matches in a haystack: A max-pooling strategy for graph matching in the presence of outliers. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 2083–2090.
31. Zass, R.; Shashua, A. Probabilistic graph and hypergraph matching. In Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA, 23–28 June 2008; pp. 1–8.
32. Chertok, M.; Keller, Y. Efficient high order matching. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 2205–2215.
33. Duchenne, O.; Bach, F.; Kweon, I.S.; Ponce, J. A tensor-based algorithm for high-order graph matching. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 2383–2395.
34. Lee, J.; Cho, M.; Lee, K.M. Hyper-graph matching via reweighted random walks. In Proceedings of the CVPR 2011, Providence, RI, USA, 20–25 June 2011; pp. 1633–1640.
35. Nguyen, Q.; Gautier, A.; Hein, M. A flexible tensor block coordinate ascent scheme for hypergraph matching. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 5270–5278.
36. Zhou, J.; Wang, T.; Lang, C.; Feng, S.; Jin, Y. A novel hypergraph matching algorithm based on tensor refining. J. Vis. Commun. Image Represent. 2018, 57, 69–75.
37. Yang, Z. Fast template matching based on normalized cross correlation with centroid bounding. In Proceedings of the 2010 International Conference on Measuring Technology and Mechatronics Automation, Changsha, China, 13–14 March 2010; Volume 2, pp. 224–227.
38. Saravanan, C.; Surender, M. Algorithm for face matching using normalized cross-correlation. Int. J. Eng. Adv. Technol. (IJEAT) 2013, 2, ISSN 2249–8958.
39. Tang, C.; Gao, T.; Yan, S.; Wang, L.; Wu, J. The oriented spatial filter masks for electronic speckle pattern interferometry phase patterns. Opt. Express 2010, 18, 8942–8947.
40. Van Vliet, L.J.; Young, I.T.; Beckers, G.L. A nonlinear Laplace operator as edge detector in noisy images. Comput. Vis. Graph. Image Process. 1989, 45, 167–195.
41. Stewart, C.V. Robust parameter estimation in computer vision. SIAM Rev. 1999, 41, 513–537.
42. Dobbins, A.; Zucker, S.W.; Cynader, M.S. Endstopping and curvature. Vis. Res. 1989, 29, 1371–1387.
43. Manjunath, B.; Chellappa, R.; von der Malsburg, C. A feature based approach to face recognition. In Proceedings of the 1992 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Champaign, IL, USA, 15–18 June 1992; pp. 373–378.
44. Manjunath, B.; Shekhar, C.; Chellappa, R. A new approach to image feature detection with applications. Pattern Recognit. 1996, 29, 627–640.
45. Daugman, J.G. Uncertainty relation for resolution in space, spatial frequency, and orientation optimized by two-dimensional visual cortical filters. JOSA A 1985, 2, 1160–1169.
46. Bhattacharjee, S.K.; Kutter, M. Compression Tolerant Image Authentication. In Proceedings of the 1998 International Conference on Image Processing, Chicago, IL, USA, 7 October 1998; pp. 435–439.
47. Antoine, J.P.; Murenzi, R. Two-dimensional directional wavelets and the scale-angle representation. Signal Process. 1996, 52, 259–281.
48. Ding, N.; Liu, Y.; Jin, Y.; Zhu, M. Image registration based on log-polar transform and SIFT features. In Proceedings of the 2010 International Conference on Computational and Information Sciences, Chengdu, China, 17–19 December 2010; pp. 749–752.
49. Kapur, J.N.; Sahoo, P.K.; Wong, A.K. A new method for gray-level picture thresholding using the entropy of the histogram. Comput. Vis. Graph. Image Process. 1985, 29, 273–285.
50. Liu, M.Y.; Tuzel, O.; Ramalingam, S.; Chellappa, R. Entropy rate superpixel segmentation. In Proceedings of the Computer Vision and Pattern Recognition (CVPR), Providence, RI, USA, 20–25 June 2011; pp. 2097–2104.
51. Rosten, E.; Porter, R.; Drummond, T. Faster and better: A machine learning approach to corner detection. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 105–119.
52. Kraft, M.; Schmidt, A.; Kasinski, A.J. High-Speed Image Feature Detection Using FPGA Implementation of Fast Algorithm. VISAPP 2008, 8, 174–179.
53. Lee, J.; Cho, M.; Lee, K.M. A graph matching algorithm using data-driven Markov chain Monte Carlo sampling. In Proceedings of the 2010 20th International Conference on Pattern Recognition, Istanbul, Turkey, 23–26 August 2010; pp. 2816–2819.
54. Bourke, P. Cross Correlation: Auto Correlation, 2D Pattern Identification. 1996.
55. Yoo, J.C.; Han, T.H. Fast normalized cross-correlation. Circuits Syst. Signal Process. 2009, 28, 819.
56. Raguram, R.; Chum, O.; Pollefeys, M.; Matas, J.; Frahm, J.M. USAC: A universal framework for random sample consensus. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 2022–2038.
57. Derpanis, K.G. Overview of the RANSAC Algorithm. Image Rochester 2010, 4, 2–3.
58. Cho, M.; Lee, J.; Lee, K.M. Feature correspondence and deformable object matching via agglomerative correspondence clustering. In Proceedings of the 2009 IEEE 12th International Conference on Computer Vision, Kyoto, Japan, 29 September–2 October 2009; pp. 1280–1287.
59. Miksik, O.; Mikolajczyk, K. Evaluation of local detectors and descriptors for fast feature matching. In Proceedings of the 21st International Conference on Pattern Recognition (ICPR2012), Tsukuba, Japan, 11–15 November 2012; pp. 2681–2684.
Figure 1. Feature point correspondence mapping.
Figure 2. The basic flowchart of graph matching.
Figure 3. The process of the Laplacian filter sharpening: (a) the digital mask filter w, (b) the convolution operation.
Figure 4. From the left to the right: (a) original image, (b) filtered result when i = 1, (c) filtered result when i = 2 and (d) response image.
Figure 5. Schematic diagram of meshing: (a) image divided by 4 × 4 sub-regions, (b) image divided by 8 × 8 sub-regions.
Figure 6. Bresenham discrete circle centered on pixel p_i.
Figure 7. The basic flowchart of data processing.
Figure 8. Quantitative experimental analysis of feature point extraction: (a) original image, (b) feature point extraction without Laplace filter, (c) feature point extraction with Laplace filter.
Figure 9. Feature points extraction under different algorithms: (a) corner detector (CD), (b) Gilles, (c) Harris, (d) LoG, (e) SIFT, (f) L_Marr_E.
Figure 10. (a,b) show graph matching without Laplace filter before and after RANSAC optimization; (c,d) show graph matching under Laplace filter before and after RANSAC optimization; (e,f) show graph matching using entropy algorithm without Laplace filter before and after RANSAC optimization; (g,h) show graph matching using entropy algorithm under Laplace filter before and after RANSAC optimization.
Figure 11. Feature points matching under different algorithms: (a) CD, (b) Gilles, (c) Harris, (d) LoG, (e) SIFT and (f) L_Marr_E.
Figure 12. CMU house dataset matching results among (a) RRWM, (b) SM, and (c) NCC algorithms.
Figure 13. From the left to the right: (a) RRWM algorithm, (b) SM algorithm, and (c) NCC algorithm for graph matching. (The yellow lines represent the correct matching pairs, and the black lines represent the wrong matches.)
Table 1. Summarization of notations.

Notation | Purpose
G^P | Reference graph
G^Q | Query graph
P | Set of features in G^P
Q | Set of features in G^Q
N_P | Total number of data features of G^P
N_Q | Total number of data features of G^Q
C | Mapping constraints
L | A set of candidate assignments
i, j | Feature points in G^P
a, b | Feature points in G^Q
e_1 = (i, a), e_2 = (j, b) | Candidate assignments
M | Affinity matrix
M(e_1, e_2) | Pairwise affinity
M(e_1, e_1) | Individual affinity
S | Graph matching score
x | Indicator vector
x^* | Optimal solution
d_ij | Euclidean distance between the points i and j
Table 2. Feature point extraction results under different kinds of algorithms.
MethodCDGillesHarrisLOGSIFTL_Marr_E
img.17273104174226264
img.270246457165980654
img.31442743411107631576
img.4203844113315335338402
img.532062769614033667806
Table 3. Experimental comparison results for quantitative analysis of feature point matching.

Methods | N_rm | N_tm | Recall (%) | Time (s)
Marr | 3323 | 4711 | 70.58 | 18.76
L_Marr | 10407 | 10954 | 95.01 | 98.02
Marr_E | 998 | 1455 | 68.59 | 10.33
L_Marr_E | 3523 | 3740 | 94.20 | 21.45
Table 4. Feature point matching results under different kinds of algorithms.

Detector | 1st Image | 2nd Image | Time (s) | Recall (%)
CD | 51 | 53 | 1.76 | 0.36
Gilles | 124 | 134 | 2.24 | 0.16
Harris | 208 | 226 | 2.8 | 0.35
LoG | 300 | 300 | 4.27 | 0.27
SIFT | 1275 | 1258 | 5.49 | 0.48
L_Marr_E | 1998 | 1929 | 7.22 | 0.52
Table 5. Comparative evaluation on CMU database for reweighted random walk method (RRWM), spectral matching method (SM), and normalized cross-correlation (NCC).

Methods | Accuracy | Score | Time (s)
RRWM | 92.61 | 99.93 | 0.52
SM | 89.21 | 94.44 | 0.07
NCC | 94.88 | 95.31 | 0.23
Table 6. Comparative evaluation on CALTECH database for RRWM, SM, and NCC.

Methods | Accuracy | Score | Time (s)
RRWM | 61.13 | 99.92 | 0.16
SM | 50.22 | 79.93 | 0.02
NCC | 52.17 | 74.39 | 0.06
