Article

The Ground-Penetrating Radar Image Matching Method Based on Central Dense Structure Context Features

1 Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China
2 School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, Beijing 100049, China
3 School of Electronics and Information, Northwestern Polytechnical University, Xi’an 710072, China
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(22), 4291; https://doi.org/10.3390/rs16224291
Submission received: 26 September 2024 / Revised: 6 November 2024 / Accepted: 12 November 2024 / Published: 18 November 2024
(This article belongs to the Special Issue Advanced Ground-Penetrating Radar (GPR) Technologies and Applications)

Abstract:
Subsurface structural distribution can be detected using Ground-Penetrating Radar (GPR), and this distribution can be treated as a road fingerprint for vehicle positioning. Similar to the principle of visual image matching for localization, the position coordinates of the vehicle can be calculated by matching real-time GPR images with pre-constructed reference GPR images. However, because of their low resolution, GPR images lack well-defined geometric features such as corners and lines, so traditional visual image processing algorithms perform inadequately when applied to GPR image matching. To address this issue, this paper proposes a GPR image matching and localization method based on a novel feature descriptor, termed the central dense structure context (CDSC) feature. The algorithm exploits the strip-like elements in GPR images to improve the accuracy of GPR image matching. First, the CDSC feature descriptor is designed. By applying threshold segmentation and extremum point extraction to the GPR image, stratified strip-like elements and pseudo-corner points are obtained. Each pseudo-corner point is treated as a center, and the surrounding strip-like elements are described in context to form the GPR feature descriptor. Then, based on this feature description method, descriptors are calculated separately for the real-time image and the reference image. By searching for the nearest matching point pairs and removing erroneous pairs, GPR image matching and localization are achieved. The proposed algorithm was evaluated on datasets collected from urban roads and railway tracks, achieving localization errors of 0.06 m (RMSE) and 1.22 m (RMSE), respectively. Compared to the traditional Speeded Up Robust Features (SURF) visual image matching algorithm, localization errors were reduced by 86.6% and 95.7% in the urban road and railway track scenarios, respectively.

Graphical Abstract

1. Introduction

In the fields of intelligent transportation and automotive driving, particularly in complex urban environments, there is a pressing demand for high-precision and highly reliable navigation and localization systems. Currently, various vehicle localization methods are available, such as the Global Navigation Satellite System (GNSS), Inertial Navigation System (INS), Light Detection and Ranging (LiDAR)-based algorithms, and vision-based algorithms. However, relying solely on single localization algorithms cannot ensure continuous and reliable positioning.
For instance, GNSS fails to provide accurate and reliable location information in GNSS-denied environments like urban canyons and tunnels [1,2]. INS can achieve autonomous positioning through gyroscopes and accelerometers over short periods but suffers from position drift over extended durations [3]. Vision-based localization methods are unable to achieve positioning under challenging lighting conditions. LiDAR-based algorithms work by emitting multiple laser beams to measure distances and create environmental point cloud maps for positioning [4,5]. While LiDAR is unaffected by lighting conditions, it encounters difficulties in degraded environments, such as tunnels, where it also fails to achieve reliable localization [6].
The analysis above leads to the following conclusion: no single localization algorithm can provide continuous, reliable, and accurate positioning. Ground-Penetrating Radar (GPR), which senses subsurface environmental information, offers a natural advantage over traditional algorithms in adverse weather and low-visibility scenarios. When applied to localization, GPR therefore serves as a valuable complement to existing localization algorithms.
GPR is a non-invasive technology that utilizes electromagnetic waves to detect underground targets. By analyzing variations in reflection time and intensity, the structure of underground targets and the properties of the medium can be determined [7]. The detection process is illustrated in Figure 1. As an efficient subsurface exploration technique, GPR can rapidly and accurately detect underground targets without the need for surface contact. This technology has found widespread application in areas such as road maintenance, geological surveys, and battlefield demining [8,9,10]. Recently, researchers have initiated studies on using GPR to assist with vehicle localization [11].
Research on GPR-based localization algorithms predominantly utilizes a matching mechanism, requiring the construction of a reference database in which GPR images are compared to determine vehicle location. Selecting stable and unique features is essential for the performance of such systems. Cornick et al. from MIT pioneered the study of vehicle localization using GPR image matching [11]. They used the normalized cross-correlation (NCC) as the similarity evaluation strategy for GPR images. By fusing a Localizing Ground-Penetrating Radar (LGPR) system with GPS/INS systems, they achieved decimeter-level or even centimeter-level localization accuracy. Subsequently, MIT’s Ort et al. expanded upon the LGPR system to achieve the localization of autonomous vehicles under various conditions, such as clear weather, rain, snow, and nighttime, demonstrating the GPR system’s strong environmental adaptability [12]. However, full image matching methods require substantial storage and computational resources, limiting their practical applicability. Consequently, researchers have shifted focus to extracting features from GPR images and using these features to perform similarity matching instead of utilizing the entire image. Baikovitz et al. employed a ResNet-18 autoencoder to capture high-dimensional deep learning features from GPR images [13]. Zhang et al. from the National University of Defense Technology (NUDT) used the real and imaginary parts of the Log-Gabor filter to convolve with GPR images and extract phase symmetry features, which demonstrated localization accuracy within 5 pixels across asphalt, cement, and brick roads; however, the processing speed was slow [14]. Ni et al. from the Aerospace Information Research Institute, Chinese Academy of Sciences, applied Faster-RCNN to extract hyperbolic features from GPR images, creating a feature map for matching-based localization.
However, due to the sparse distribution and indistinct nature of underground hyperbolic features, there was insufficient localization continuity [15]. Zhang et al. from NUDT further developed TSVR-Net based on SVR-Net, using both B-Scan and D-Scan as network inputs to enhance localization stability. The results showed localization accuracy within 0.1 m. However, compared to the original data, the map storage and computational requirements for network training increased [16].
In summary, most existing GPR image matching algorithms are direct adaptations of visual image matching algorithms. Owing to the low resolution of GPR images and the absence of well-defined geometric features, conventional image feature extraction methods cannot effectively describe GPR image characteristics, leading to unsatisfactory matching results. GPR images commonly exhibit strip-like structures, which are abstract representations of actual underground formations. These features are not only largely invariant over time but also distinct across different locations, making them ideal for matching. Shape context, a descriptor used to capture the outlines of shapes, is not constrained by the specific form of the target and can accurately reflect the distribution of sampled points along a contour. This method has been widely applied in object recognition and image matching [17,18,19].
This paper draws on the concepts of shape context and graphical statistics to propose a novel method for extracting Central Dense Structure Context (CDSC) features. By applying threshold segmentation and extremum point extraction to GPR images, stripe structures and pseudo-corner points are obtained. These pseudo-corner points serve as centers for contextual descriptions of the surrounding stripe structures, forming GPR feature descriptors. Based on these descriptors, the feature sets of the target image and the reference image are calculated. Furthermore, a GPR image matching method based on these features is designed for vehicle positioning. Experimental results demonstrate that the proposed algorithm performs well in both positioning accuracy and computational efficiency.
The remainder of this paper is organized as follows: Section 2 analyzes the characteristics of underground structural features. Section 3 presents the algorithm for CDSC feature extraction and the feature matching positioning method. Section 4 provides experimental analysis and accuracy assessment. Section 5 offers conclusions and a discussion.

2. Analysis of the Characteristics of Underground Structures

The construction materials of roads primarily consist of concrete, asphalt, gravel, and steel reinforcement, providing a rich variety of information about the subsurface. Additionally, cables and pipelines are often buried beneath roads, which can serve as distinctive underground features. Moreover, road defects such as voids and insufficient compaction are pervasive and can also be regarded as unique characteristics of the roadway. A single radar pulse from ground-penetrating radar (GPR) generates an A-scan, as shown in Figure 2a. A continuous sequence of A-scans along the driving direction forms a two-dimensional underground profile, referred to as a B-scan, as shown in Figure 2b. Combining multiple B-scans perpendicular to the driving direction results in a three-dimensional C-scan, as shown in Figure 2c. Compared to B-scans, C-scans not only provide underground information along the driving direction but also capture underground information perpendicular to it. C-scans therefore have an advantage when the vehicle’s trajectory shifts laterally. The richness of information increases with the scanning dimension, but so does the data-processing burden. To balance information content against processing time, B-scans are sufficient to meet the localization requirements when vehicles travel within the same lane; B-scan data were therefore selected as the source for matching and positioning in this paper.
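The relationship between A-scans, B-scans, and C-scans described above can be sketched as array stacking. This is an illustrative sketch only; the array sizes are assumptions, not values from the paper.

```python
import numpy as np

# Illustrative only: how GPR scan dimensions relate (sizes are assumed).
n_samples = 512   # time samples per trace
n_traces = 200    # traces along the driving direction
n_lines = 8       # parallel survey lines across the driving direction

# An A-scan is a single echo trace from one radar pulse.
a_scan = np.random.randn(n_samples)
# Stacking A-scans along the driving direction yields a B-scan profile.
b_scan = np.stack([np.random.randn(n_samples) for _ in range(n_traces)], axis=1)
# Stacking parallel B-scans yields a three-dimensional C-scan volume.
c_scan = np.stack([b_scan] * n_lines, axis=0)

print(a_scan.shape, b_scan.shape, c_scan.shape)
# -> (512,) (512, 200) (8, 512, 200)
```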
The underground structural features of the road are represented in GPR B-scan images as alternating bright and dark stripe textures, as shown in the red box in Figure 3. These stripes are horizontally distributed across the image. Additionally, higher-intensity spots appear on each stripe, as indicated by the yellow box in Figure 3. The effective extraction of these features for localization is the primary focus of this paper.
These spots are typically circular or elliptical in shape, with pixel intensities that are similar within the spot but distinct from the surrounding neighborhood. The spots generally correspond one-to-one with underground structures and exhibit a degree of stability over time and space. To verify the stability of GPR image features under varying environmental conditions, six datasets were collected along the same road on three different days: 25 June 2023, 14 July 2023, and 17 July 2023. The dates 25 June 2023 and 17 July 2023 were sunny days, while 14 July 2023 was rainy. As shown in Figure 4, two datasets were collected on each day along the same trajectory, labeled Track 1 and Track 2. The stability of the features is evident in the yellow boxes in Figure 4, where similar underground features are observed across all datasets. A comparison of datasets a, b, e, and f reveals that even after a 20-day interval, the underground structural images remain largely unchanged and stable, thus meeting the requirements for positioning. Similar underground features can be observed in the two GPR images from datasets c and d, although the similarity is reduced. This is because rainfall altered the dielectric properties of the soil, which weakened the signal’s penetration ability. However, the underground features are still measurable.
In summary, using GPR images for positioning offers the following advantages:
(1)
The underground features are rich and highly correlated with spatial locations, providing coverage across most conventional roads, which can serve as natural geographic information for vehicle positioning.
(2)
These underground features are long-lasting and stable, with minimal influence from external factors such as weather, resulting in strong adaptability to various positioning environments.

3. Matching Method Based on Center Dense Structure Context Features

This section first introduces the basic workflow of the proposed method: a feature reference database is built in advance, and the collected GPR images are then matched against this pre-built database to output the current geographic location, as shown in Figure 5.
GPR detects underground structures by transmitting and receiving electromagnetic waves. However, during vehicle movement, maintaining a constant speed is challenging, which inevitably leads to distance scaling issues in GPR images. To address this, spatial resampling and noise reduction filtering must be applied as part of the preprocessing operations. In the proposed algorithm, when constructing a reference database, the preprocessed images are first transformed from image coordinates to spatial position coordinates before extracting CDSC features. The extracted image features are then stored in a specific format to form the feature reference database.
The feature reference database contains the location information $pos$, direction $dir$, and descriptor $dsc$:
$$Database = \{pos, dir, dsc\}$$
During the positioning stage, the vehicle-mounted GPR captures the GPR images to be matched. These images undergo the same processing steps, including spatial resampling, noise reduction filtering, and CDSC feature extraction, to obtain the feature sequence for matching.
$$Data_{current} = \{dir, dsc\}$$
The similarity measurement method and mismatch elimination technique are used to determine the optimal matching position of the feature sequence within the feature reference database, ultimately yielding the positioning results.
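A minimal sketch of the database record $\{pos, dir, dsc\}$ and the lookup step might look as follows. The class and function names are hypothetical, and a plain Euclidean descriptor distance stands in for the full similarity measurement and mismatch elimination described later.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class DatabaseEntry:
    pos: tuple        # geographic location of the reference frame
    dir: float        # travel direction (heading) of the reference frame
    dsc: np.ndarray   # CDSC descriptor vector for the frame

def locate(query_dsc, database, dist_fn):
    """Return the pos of the database entry whose descriptor best matches."""
    best = min(database, key=lambda e: dist_fn(query_dsc, e.dsc))
    return best.pos

# Toy usage with plain Euclidean descriptor distance.
db = [DatabaseEntry((0.0, 0.0), 0.0, np.array([1.0, 2.0])),
      DatabaseEntry((5.0, 0.0), 0.0, np.array([4.0, 6.0]))]
query = np.array([3.9, 6.1])
print(locate(query, db, lambda a, b: float(np.linalg.norm(a - b))))
# -> (5.0, 0.0)
```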

3.1. SURF-Based GPR Image Corner Detection

GPR images are affected by factors such as jitter and noise. However, since spots represent strong underground reflective targets, they can be reliably detected by GPR, making them ideal image features. Feature point extraction has been extensively studied in visual navigation, with classic methods including ORB [20], SIFT [21], and SURF [22]. ORB combines the efficiency of FAST and the reliability of the Harris algorithm to detect feature points utilizing the intensity differences between the center and surrounding pixels. However, its performance in detecting spot features in GPR images is suboptimal. SIFT detects local extrema in a DoG pyramid and uses the Hessian matrix of local intensity values as a filter condition for extracting feature points. While SIFT performs well in detecting spots, the construction of the DoG pyramid and the calculation of the Hessian matrix are computationally intensive, resulting in longer detection times. SURF, an improvement on SIFT, is a fast and robust feature point matching algorithm that excels in detecting GPR spot features compared to the other two methods. The algorithm leverages 2D Haar wavelet responses, integral images, and scale-space techniques, providing strong robustness to image rotation, translation, scaling, and noise, while also offering improved speed over SIFT.
The performance of the aforementioned feature point detection algorithms on GPR images is illustrated in Figure 6, with results from SIFT, ORB, and SURF shown from left to right. Among the three algorithms, ORB performs the worst, detecting only a few spots and generating a significant number of false feature points. SIFT and SURF detect a larger number of spot features. However, SURF identifies more feature points with a more uniform distribution, resulting in a more complete description of the image. Therefore, SURF was selected as the feature point detection algorithm in this study.
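SURF itself ships with opencv-contrib (`cv2.xfeatures2d.SURF_create`). As a dependency-free illustration of the spot (blob) detection idea, the sketch below finds local maxima of a determinant-of-Hessian response, which is the response SURF approximates with box filters over an integral image. The threshold and the synthetic test image are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def det_hessian_spots(img, thresh=0.01):
    """Minimal determinant-of-Hessian spot detector, a dependency-free
    stand-in for SURF's interest-point response."""
    gy, gx = np.gradient(img)          # first derivatives
    gyy, gyx = np.gradient(gy)         # second derivatives
    gxy, gxx = np.gradient(gx)
    doh = gxx * gyy - gxy * gyx        # determinant of the Hessian
    peaks = []
    h, w = doh.shape
    for y in range(1, h - 1):          # keep local maxima above threshold
        for x in range(1, w - 1):
            patch = doh[y - 1:y + 2, x - 1:x + 2]
            if doh[y, x] >= thresh and doh[y, x] == patch.max():
                peaks.append((x, y))
    return peaks

# Synthetic bright spot on a dark background, like a GPR reflection spot.
yy, xx = np.mgrid[0:21, 0:21]
img = np.exp(-((xx - 10.0) ** 2 + (yy - 10.0) ** 2) / 4.0)
print(det_hessian_spots(img))   # -> [(10, 10)]
```

The determinant of the Hessian is large and positive only where the image has a blob-like extremum in both directions, which is why it responds to the GPR spots but not to the edges of the stripes.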

3.2. Central Dense Structure Context Feature Extraction

The inconsistencies between roadbed layers and material differences result in alternating bright and dark stripes in GPR images. These structural variations within the road naturally provide descriptive information for pseudo-corner points. Therefore, this paper proposes the CDSC algorithm, based on the shape context algorithm, specifically targeting the stripe structures in GPR images.

3.2.1. Shape Context Algorithm

Shape context can describe arbitrary shapes and is widely used in contour matching, as shown in Figure 7. The contour line is uniformly sampled to obtain a set of contour points. For any given point in the contour point set, a polar coordinate system is established centered on that point. The number of contour points falling into small regions around that point is then counted. This process is repeated until each point in the contour point set has been accounted for, resulting in the SC feature.
The main steps are as follows. First, the image contour is uniformly sampled; suppose there are $N$ sample points. A point $p_i(x_i, y_i)$ is selected, and the relative spatial position, comprising direction $\theta$ and distance $\rho$, between each of the remaining $N-1$ sample points $p_j(x_j, y_j)$ and $p_i(x_i, y_i)$ is calculated:
$$\rho = \sqrt{(x_i - x_j)^2 + (y_i - y_j)^2}, \qquad \theta = \arctan\left(\frac{y_i - y_j}{x_i - x_j}\right)$$
A polar coordinate system is constructed around each sampling point, and its neighborhood is divided into $M \times N$ grids according to the distance interval $\Delta\rho$ and the angular interval $\Delta\theta$:
$$M = R / \Delta\rho, \qquad N = 360^{\circ} / \Delta\theta$$
In this formula, R represents the local neighborhood range of the sampling point. The number of sampling points falling into each grid is counted to obtain the shape context feature histogram. The statistical rules for constructing the histogram are as follows:
$$h_i(l) = \#\{\, p_j \neq p_i : (p_j - p_i) \in \mathrm{bin}(l) \,\}$$
In this equation, $h_i(l)$ represents the $l$-th component of the histogram for the $i$-th contour point.
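The sampling-and-counting construction above can be sketched directly. The bin counts and neighborhood radius are illustrative choices, and linear distance bins are used as in the formulas above.

```python
import numpy as np

def shape_context(points, i, n_rho=5, n_theta=12, R=1.0):
    """Histogram h_i(l): count the other sample points in each polar bin
    around points[i], with linear distance bins of width R/n_rho and
    angular bins of width 360/n_theta degrees."""
    xi, yi = points[i]
    hist = np.zeros((n_rho, n_theta), dtype=int)
    for j, (xj, yj) in enumerate(points):
        if j == i:
            continue
        rho = np.hypot(xj - xi, yj - yi)
        if rho >= R:
            continue  # outside the local neighborhood range R
        theta = np.degrees(np.arctan2(yj - yi, xj - xi)) % 360.0
        hist[int(rho / (R / n_rho)), int(theta / (360.0 / n_theta))] += 1
    return hist

pts = [(0.0, 0.0), (0.5, 0.0), (0.0, 0.5)]
h = shape_context(pts, 0)
print(h.sum())   # -> 2 (both neighbors fall inside the neighborhood)
```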
The shape context algorithm requires uniform sampling of the contours in the original image. However, the stripe structures in GPR images are neither independent nor closed shapes. Therefore, building on the concept of shape context, this paper proposes the CDSC feature extraction algorithm to extract features from these stripe structures.

3.2.2. Center Dense Structure Context Feature Extraction

The CDSC is an improved version of the shape context algorithm, specifically designed for feature extraction from threshold-segmented GPR images centered around pseudo-corner points. The CDSC extraction process is illustrated in Figure 8. Compared to traditional shape context, this approach addresses the challenge of describing non-independent and non-closed images. Additionally, the dense stripe structures provide more neighborhood information for the descriptor.
The GPR image is segmented using the OTSU thresholding algorithm [23], which separates the stripe structures from the background, emphasizing large-scale structural features while ignoring local gradient details. The specific steps are as follows:
Assuming the GPR image $img(x, y)$ has $L$ grayscale levels, with $n_i$ pixels at the $i$-th grayscale level, the total number of pixels $N$ in the image $img(x, y)$ is given by the following:
$$N = \sum_{i=1}^{L} n_i$$
The probability $P_i$ of a pixel having the $i$-th grayscale level and the mean grayscale level $\mu$ of the entire image are given by the following:
$$P_i = \frac{n_i}{N}, \qquad \mu = \sum_{i=1}^{L} i P_i$$
Using a grayscale value $k$ as the threshold, the image is segmented into classes $A_0$ and $A_1$, where $A_0$ consists of pixels with grayscale levels in the range $[1, k]$ and $A_1$ of pixels with grayscale levels in the range $[k+1, L]$. The probabilities $P_{A_0}$ and $P_{A_1}$ of $A_0$ and $A_1$ occurring, respectively, are given by the following:
$$P_{A_0} = \sum_{i=1}^{k} P_i, \qquad P_{A_1} = \sum_{i=k+1}^{L} P_i$$
The mean grayscale values $E_{A_0}$ and $E_{A_1}$ for $A_0$ and $A_1$, respectively, are given by the following:
$$E_{A_0} = \frac{1}{P_{A_0}} \sum_{i=1}^{k} i P_i, \qquad E_{A_1} = \frac{1}{P_{A_1}} \sum_{i=k+1}^{L} i P_i$$
The mean grayscale value $E$ of the image $img(x, y)$ is given by the following:
$$E = \sum_{i=1}^{L} i P_i$$
The between-class variance of $A_0$ and $A_1$ is given by the following:
$$\sigma^2(k) = P_{A_0}(E - E_{A_0})^2 + P_{A_1}(E - E_{A_1})^2$$
The optimal threshold is the value of $k$ that maximizes $\sigma^2(k)$.
When the grayscale value of the original image i m g ( x , y ) exceeds a predetermined threshold k , the pixel value is set to 1; otherwise, it is set to 0. This binarization process effectively reduces the complexity of the image and enhances the efficiency of subsequent operations.
$$img_{bw}(x, y) = \begin{cases} 1, & img(x, y) \geq k \\ 0, & img(x, y) < k \end{cases}$$
The GPR image is divided into two classes: the portion greater than the threshold, representing the stripe structures (non-zero points), is denoted as P S T R U C T , while the portion less than the threshold is considered the image background. The binarized GPR image emphasizes the existing stripe structures while ignoring minor local variations. This is because large-scale structural information is more stable compared to local gradient information.
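The OTSU threshold search and the binarization step can be sketched as follows. This is a minimal implementation of the standard algorithm, not the authors' code, and the bimodal toy image is illustrative.

```python
import numpy as np

def otsu_threshold(img, levels=256):
    """Exhaustive OTSU search: pick the k that maximizes the between-class
    variance; ties are broken toward the larger k."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    P = hist / hist.sum()                      # grey-level probabilities P_i
    grey = np.arange(levels)
    E = float(np.sum(grey * P))                # global mean grey level
    best_k, best_var = 0, -1.0
    for k in range(levels - 1):
        P0, P1 = P[:k + 1].sum(), P[k + 1:].sum()
        if P0 == 0 or P1 == 0:
            continue
        E0 = float(np.sum(grey[:k + 1] * P[:k + 1])) / P0
        E1 = float(np.sum(grey[k + 1:] * P[k + 1:])) / P1
        var = P0 * (E - E0) ** 2 + P1 * (E - E1) ** 2
        if var >= best_var:
            best_k, best_var = k, var
    return best_k

# Bimodal toy image: background near grey level 40, stripes near 200.
img = np.concatenate([np.full(500, 40), np.full(500, 200)]).reshape(25, 40)
k = otsu_threshold(img)
img_bw = (img >= k).astype(np.uint8)           # binarization step
print(k, img_bw.sum())   # -> 199 500
```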
Using spots as centers, the stripe structures are employed to describe them. Traditional shape context uses an “$N-1$” feature construction method, in which each of the $N$ contour points is described by the remaining $N-1$ points. In contrast, this paper adopts a “$1+N$” feature construction method, in which a single center point is described by $N$ stripe-structure points. The specific steps are as follows:
Let $p_{s,i}(x_{s,i}, y_{s,i}) \in P_{STRUCT}$ denote the $i$-th non-zero point in the image $img_{bw}$. The set of pseudo-corner points extracted from the GPR image in Section 3.1 is denoted $P_{SURF}$; each $p_{c,j}(x_{c,j}, y_{c,j}) \in P_{SURF}$ serves as a center point for the feature descriptors.
Replacing the contour points in Equation (5) with the SURF pseudo-corner points $P_{SURF}$ and the stripe-structure points $P_{STRUCT}$ gives the following:
$$\rho = \sqrt{(x_{c,j} - x_{s,i})^2 + (y_{c,j} - y_{s,i})^2}, \qquad \theta = \arctan\left(\frac{y_{c,j} - y_{s,i}}{x_{c,j} - x_{s,i}}\right)$$
Next, the number of $P_{STRUCT}$ points falling into each grid within the neighborhood of each $P_{SURF}$ point is counted to obtain the feature vector $D(l)$:
$$D(l) = \#\{\, p_{s,i} \in \mathrm{bin}(l) \,\}$$
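The “1 + N” construction can be sketched as a polar histogram accumulated around one pseudo-corner center. The bin counts and the neighborhood radius $R$ are illustrative assumptions.

```python
import numpy as np

def cdsc_descriptor(center, structure_pts, n_rho=5, n_theta=12, R=50.0):
    """'1 + N' construction: one pseudo-corner center described by the
    surrounding stripe-structure points, accumulated into polar bins."""
    xc, yc = center
    hist = np.zeros((n_rho, n_theta), dtype=int)
    for xs, ys in structure_pts:
        rho = np.hypot(xs - xc, ys - yc)
        if rho == 0 or rho >= R:
            continue  # skip the center itself and points outside range R
        theta = np.degrees(np.arctan2(ys - yc, xs - xc)) % 360.0
        hist[int(rho / (R / n_rho)), int(theta / (360.0 / n_theta))] += 1
    return hist.ravel()    # the feature vector D(l)

# Structure points would come from the binarized image, e.g.:
#   structure_pts = [(x, y) for y, x in np.argwhere(img_bw)]
d = cdsc_descriptor((0.0, 0.0), [(10.0, 0.0), (0.0, 10.0), (60.0, 0.0)])
print(d.sum())   # -> 2 (the point at distance 60 lies outside R)
```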

3.3. Localization Algorithm Based on CDSC Features

As shown in Figure 9, nine pseudo-corner points are selected from each image to extract CDSC features. The extracted features from both images are observed to be very similar.
Therefore, CDSC descriptors are used as the feature representation for the images, and the similarity of CDSC features is used to describe the similarity between two images. During the matching and positioning process, features are first extracted from the image to be matched; these features are then compared with features over a larger search range in the feature reference database. This approach helps minimize both false matches and missed matches. The specific process is illustrated in Figure 10.
(1)
CDSC Feature Matching
Nearest Neighbor Distance Ratio (NNDR) is a metric used to evaluate whether two feature descriptors are similar. When a match is correct, the distance between the two feature descriptors will be smaller than the distance between mismatched descriptors. Therefore, the ratio of the distance to the nearest neighbor to the distance to the second-nearest neighbor is calculated; if this ratio is below a certain threshold, the match is considered correct. The specific implementation of this method is as follows:
Let $D_{base} = [d_{b,1}, d_{b,2}, \ldots, d_{b,q}]$ be a feature vector from the reference database and $D_{current} = [d_{c,1}, d_{c,2}, \ldots, d_{c,q}]$ be a feature vector extracted from the image to be matched. The distance between the two descriptors is measured using the Euclidean distance:
$$dist(D_{base}, D_{current}) = \sqrt{\sum_{j=1}^{q} (d_{b,j} - d_{c,j})^2}$$
The ratio is the following:
$$d_{NNDR} = \frac{dist_1}{dist_2}$$
where $dist_1$ and $dist_2$ are the distances to the nearest and second-nearest neighbors, respectively.
If d N N D R is less than the specified threshold, it indicates a successful match between the two feature vectors.
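The NNDR test can be sketched as follows. The descriptor values and the 0.8 ratio threshold are illustrative; the paper's actual threshold is not specified here.

```python
import numpy as np

def nndr_match(query_dscs, db_dscs, ratio=0.8):
    """NNDR test: accept a match only when the nearest-neighbor distance is
    clearly smaller than the second-nearest one. Returns (query, db) index
    pairs."""
    matches = []
    for qi, q in enumerate(query_dscs):
        d = np.linalg.norm(db_dscs - q, axis=1)   # Euclidean distances
        order = np.argsort(d)
        nearest, second = d[order[0]], d[order[1]]
        if second > 0 and nearest / second < ratio:
            matches.append((qi, int(order[0])))
    return matches

db = np.array([[0.0, 0.0], [10.0, 10.0], [10.0, 11.0]])
q = np.array([[0.1, 0.0],     # unambiguous: nearest neighbor is far better
              [10.0, 10.5]])  # ambiguous: two equally near neighbors
print(nndr_match(q, db))   # -> [(0, 0)]  (the ambiguous query is rejected)
```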
(2)
Elimination of False Matches
Each matching process yields a set of data points, but not all points in this set are correctly matched. Therefore, it is necessary to remove incorrect matches from the data set to improve matching accuracy. The Random Sample Consensus (RANSAC) algorithm can identify correct data from a set containing outliers by iteratively refining the dataset to exclude anomalous data [24]. Let the point set obtained from a single matching be denoted as follows:
$$P_{match} = \{ [p_{match,b}^{1}, p_{match,c}^{1}], [p_{match,b}^{2}, p_{match,c}^{2}], \ldots, [p_{match,b}^{n}, p_{match,c}^{n}] \}$$
In this context, $p_{match,b}^{i} = (px_{match,b}^{i}, py_{match,b}^{i})$ represents a point in the feature database, while $p_{match,c}^{i}$ denotes the corresponding point in the image to be matched. In the RANSAC algorithm, correctly matched data points are classified as inliers ($P_{in}$), and mismatched points are classified as outliers ($P_{out}$). The first step is to find the optimal homography matrix $H$:
$$s \begin{bmatrix} px_{match,b}^{i} \\ py_{match,b}^{i} \\ 1 \end{bmatrix} = H \begin{bmatrix} px_{match,c}^{i} \\ py_{match,c}^{i} \\ 1 \end{bmatrix}, \qquad H = \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{bmatrix}$$
Here, $s$ is the scale parameter. Typically, $h_{33}$ is set to 1 to normalize the matrix $H$. The matrix then has eight unknowns, and solving for it requires four pairs of matching points. The algorithm selects four pairs of non-collinear sample points from the data set, computes the homography matrix, and evaluates the cost function $J$:
$$J = \sum_{i=1}^{n} \left( px_{match,b}^{i} - \frac{h_{11}\, px_{match,c}^{i} + h_{12}\, py_{match,c}^{i} + h_{13}}{h_{31}\, px_{match,c}^{i} + h_{32}\, py_{match,c}^{i} + h_{33}} \right)^{2} + \left( py_{match,b}^{i} - \frac{h_{21}\, px_{match,c}^{i} + h_{22}\, py_{match,c}^{i} + h_{23}}{h_{31}\, px_{match,c}^{i} + h_{32}\, py_{match,c}^{i} + h_{33}} \right)^{2}$$
When the cost function is minimized, the final output is the set of matched points:
$$P_{RANSAC} = \{ [p_{r,b}^{1}, p_{r,c}^{1}], [p_{r,b}^{2}, p_{r,c}^{2}], \ldots, [p_{r,b}^{n}, p_{r,c}^{n}] \}$$
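As a sketch of the RANSAC outlier-rejection idea, the example below uses a pure 2-D translation model in place of the full 8-DOF homography, so a single point pair fixes a hypothesis. This simplification is for illustration only and is not the paper's estimator; the hypothesize-score-keep-best loop is the same idea.

```python
import numpy as np

def ransac_translation(pts_base, pts_cur, iters=200, tol=1.0, seed=0):
    """Simplified RANSAC: hypothesize a translation from one point pair,
    count inliers within tol, keep the best consensus set, then refit."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(pts_base), dtype=bool)
    for _ in range(iters):
        i = int(rng.integers(len(pts_base)))
        t = pts_base[i] - pts_cur[i]                 # hypothesis
        resid = np.linalg.norm(pts_base - (pts_cur + t), axis=1)
        inliers = resid < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # refit the translation on the consensus set
    t = (pts_base[best_inliers] - pts_cur[best_inliers]).mean(axis=0)
    return t, best_inliers

cur = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 1.0], [5.0, 5.0]])
base = cur + np.array([3.0, 0.0])
base[3] = [100.0, 100.0]                             # one gross mismatch
t, inliers = ransac_translation(base, cur)
print(t, inliers)   # -> [3. 0.] [ True  True  True False]
```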

4. Experiments and Results

4.1. Data and Experimental Scenarios

To evaluate the performance of the proposed algorithm in different scenarios, localization verification experiments were designed for two typical vehicle positioning environments: urban roads and railway tracks. During the experiments, the vehicle repeatedly collected two sets of data along a predetermined trajectory, which were used as the image set to be matched and the reference image set. An odometer provided mileage marker information for the images. After manually calibrating the mileage markers corresponding to the GPR data, the mileage marker error was controlled to within 0.1 m. The key parameters of the ground-penetrating radar (GPR) used in this study are shown in Table 1. The detection depth of the GPR is closely related to the antenna’s center frequency. According to highway and railway subgrade design specifications [25,26], the depth of artificial structures beneath the pavement typically does not exceed 2 m, providing ample feature information for matching and localization. According to the working principle of GPR, a 300 MHz GPR achieves a detection depth of 3–4 m [27]. The chosen GPR has a scanning rate of 500 Hz, so with the vehicle traveling at 120 km/h, the scanning interval is better than 0.08 m. This resolution meets the requirements for sub-meter-level localization, ensuring effective data collection and maintaining continuity and resolution even at high speeds.
The original output images from the GPR record both effective echoes and various interference signals, making them unsuitable for direct image matching and localization. Therefore, preprocessing techniques such as Singular Value Decomposition (SVD) and filtering are applied to the GPR images [7] to highlight the structural features within. A comparison of the GPR images before and after preprocessing is shown in Figure 11. Panel (a) displays the original output image, while panel (b) shows the image after filtering. The results indicate that the subsurface structural distribution in the filtered image is more pronounced, which aids in subsequent feature extraction. By using interpolation functions to resample the continuously distributed original GPR data, images with uniform sampling intervals were obtained.
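The interpolation-based spatial resampling can be sketched with `np.interp`, applied once per time sample. The trace positions and spacing below are illustrative, not the survey's actual values.

```python
import numpy as np

def resample_bscan(bscan, trace_pos, spacing=0.05):
    """Resample traces recorded at uneven along-track positions onto a
    uniform grid, one linear interpolation per time/depth sample."""
    uniform = np.arange(trace_pos[0], trace_pos[-1] + 1e-9, spacing)
    out = np.empty((bscan.shape[0], len(uniform)))
    for s in range(bscan.shape[0]):
        out[s] = np.interp(uniform, trace_pos, bscan[s])
    return out, uniform

np.random.seed(0)
bscan = np.random.randn(64, 10)               # 64 samples x 10 uneven traces
pos = np.sort(np.random.uniform(0.0, 1.0, 10))
pos[0], pos[-1] = 0.0, 1.0                    # anchor the track ends (m)
resampled, grid = resample_bscan(bscan, pos, spacing=0.1)
print(resampled.shape)   # -> (64, 11)
```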
(1)
Experimental Site 1: Urban Road
The first experimental site is located around the Beijing New Technology Base of the Chinese Academy of Sciences, as shown in Figure 12a. The total length of the urban test trajectory is approximately 7 km. The equipment setup for the road experiment is illustrated in Figure 12b. The GPR antenna is mounted at the rear of the vehicle to ensure that the scanning line is aligned with the vehicle’s travel trajectory.
(2)
Experimental Site 2: Railway Test Track
The second experimental site is a section of the Shenhua Railway, as shown in Figure 13a. The train test trajectory primarily consists of railway subgrade, including track ballast, subgrade fill, and foundation soil, with a total length of approximately 6.7 km. The equipment setup for the railway experiment is depicted in Figure 13b. The ground-penetrating radar (GPR) antenna is mounted on the side of the train to precisely measure the railway subgrade while minimizing the impact of the rails and sleepers on the imaging.

4.2. Experimental Evaluation Metrics

In this paper, the Root Mean Square Error (RMSE) is employed as the evaluation metric for assessing both the pseudo-corner point matching accuracy and the GPR image matching localization accuracy. RMSE is defined as follows:
$$RMSE_{point} = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \left[ (x_{i,img} - x'_{i,img})^2 + (y_{i,img} - y'_{i,img})^2 \right]}$$
$$RMSE_{urban} = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \left[ (x_{i,urban} - x'_{i,urban})^2 + (y_{i,urban} - y'_{i,urban})^2 \right]}$$
$$RMSE_{railway} = \sqrt{\frac{1}{N} \sum_{i=1}^{N} (mile_i - mile'_i)^2}$$
where $(x_{i,img}, y_{i,img})$ and $(x'_{i,img}, y'_{i,img})$ represent the coordinates of points in the reference image and in the image to be matched, respectively, and $N$ denotes the number of matched point pairs; $(x_{i,urban}, y_{i,urban})$ and $(x'_{i,urban}, y'_{i,urban})$ represent the true and calculated positions, respectively; and $mile_i$ and $mile'_i$ represent the mileage of the true and calculated positions, respectively.
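The RMSE metrics above can be computed directly; the inputs below are illustrative.

```python
import numpy as np

def rmse_xy(true_xy, est_xy):
    """Planar RMSE (used for the point-matching and urban-road metrics)."""
    d2 = np.sum((np.asarray(true_xy) - np.asarray(est_xy)) ** 2, axis=1)
    return float(np.sqrt(d2.mean()))

def rmse_mileage(true_mile, est_mile):
    """One-dimensional mileage RMSE (used for the railway metric)."""
    err = np.asarray(true_mile) - np.asarray(est_mile)
    return float(np.sqrt(np.mean(err ** 2)))

print(rmse_xy([[0.0, 0.0], [3.0, 4.0]], [[0.0, 0.0], [0.0, 0.0]]))
# -> 3.5355... (square root of the mean of 0 and 5^2)
print(rmse_mileage([1.0, 2.0], [1.0, 4.0]))
# -> 1.4142...
```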
In this study, recall and precision are utilized to evaluate the performance of the image registration algorithm. Precision describes the usefulness of the search results and serves as a measure of accuracy. Recall, on the other hand, describes the completeness of the matching results, acting as a measure of completeness. The F-measure is a metric that incorporates both recall and precision. The specific definition is as follows:
$$R_{\mathrm{precision}} = \frac{N_{\mathrm{correct}}}{N_{\mathrm{correct}} + N_{\mathrm{false}}}, \qquad R_{\mathrm{recall}} = \frac{N_{\mathrm{correct}}}{N_{\mathrm{correspondences}}}, \qquad F = \frac{2\,R_{\mathrm{precision}}\,R_{\mathrm{recall}}}{R_{\mathrm{precision}} + R_{\mathrm{recall}}}$$
where $N_{\mathrm{correct}}$ represents the number of actual correct matching point pairs; $N_{\mathrm{false}}$ denotes the number of incorrect matching point pairs; and $N_{\mathrm{correspondences}}$ is the theoretical number of correct matching point pairs. The F-measure $F$ is the harmonic mean of $R_{\mathrm{precision}}$ and $R_{\mathrm{recall}}$. The higher the values of $R_{\mathrm{precision}}$ and $R_{\mathrm{recall}}$, the stronger the algorithm's capability to correctly match points.
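These definitions translate directly into code. The following sketch (with our own naming) computes precision, recall, and the F-measure from match counts; as a sanity check, the CDSC urban rates in Table 5 (recall 6.4%, precision 33.7%) reproduce an F-measure of about 0.1076.

```python
def precision_recall(n_correct, n_false, n_correspondences):
    """R_precision and R_recall from match counts."""
    r_precision = n_correct / (n_correct + n_false)
    r_recall = n_correct / n_correspondences
    return r_precision, r_recall

def f_measure(r_precision, r_recall):
    """F-measure: the harmonic mean of precision and recall."""
    return 2 * r_precision * r_recall / (r_precision + r_recall)
```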

4.3. Experiment and Parameter Design

To comprehensively evaluate the performance of the proposed algorithm, this study is structured into three experimental analyses. First, in Section 4.3.1, the impact of key parameters of the algorithm on localization performance is assessed. Second, Section 4.3.2 compares the proposed CDSC feature descriptor extraction algorithm with existing typical visual image feature extraction algorithms to validate the designed descriptor’s registration performance. Finally, in Section 4.3.3, the proposed matching and localization algorithm is compared with conventional visual processing algorithms in terms of accuracy and computational efficiency, providing a comprehensive evaluation of the algorithm’s performance in vehicle localization tasks. This demonstrates its superiority and potential for practical application.

4.3.1. Feature Descriptor Parameter Analysis

The proposed descriptor in this paper aims to quantify the strip-like elements in GPR images into feature vectors. Consequently, the parameter settings for generating the descriptor directly impact the results of matching and localization. Therefore, experiments were conducted on both the railway and urban test trajectories to evaluate the effects of different parameters on the matching performance.
As shown in Equation (4), the parameters that influence the performance of the descriptor primarily include the feature statistics range and the number of grid divisions (both distance and angular). From the principles of image matching, it is known that the window size also affects localization performance—larger windows contain more information that can be utilized for localization. Hence, this paper selected three key parameters: matching window size, feature statistics range, and the number of grid divisions. A series of experiments was performed using the controlled variable method, where the parameter values were gradually adjusted to explore their effects on descriptor performance.
It was necessary to select an appropriate matching window and search range. The size of the matching window is related to the distribution of underground features and includes two dimensions: depth and length. Different depths and lengths affect localization performance, as shown in Table 2. Theoretically, the larger the matching window involved in localization, the higher the accuracy. This is because a larger window extracts more pseudo-corner points for matching and localization. However, in practice, once the image depth exceeds a certain range, localization performance declines. This can be attributed to two factors: firstly, the shallow subsurface structure features are richer and more suitable for matching and localization. Deeper features exhibit less variation, contributing less to localization performance. Secondly, the GPR system used in this study operates at 300 MHz and primarily detects depths of 0–3 m. Beyond this depth, the imaging resolution decreases, making image matching prone to loss of correlation. For the urban road environment, a detection depth of 2 m and a length of 10 m were chosen, while for the railway environment, a depth of 2.5 m and a length of 10 m were selected, to strike a balance between localization accuracy and operational efficiency.
First, the impact of the feature statistics range on localization performance was analyzed. In GPR images, the resolution in the vertical direction is higher than that in the horizontal direction, while the stratified stripe structures in GPR images are distributed horizontally. To ensure the extraction of more information along the horizontal direction, the feature statistics range was set to an elliptical shape, with the minor axis ranging from 0.2 to 0.4 m and the major axis ranging from 0.5 to 0.95 m. The major axis was increased from 0.5 to 0.95 m, with the minor axis increased proportionally, to assess the localization performance of the algorithm.
The experimental results are shown in Table 3. As the feature statistics range increases, the localization error initially decreases and then increases. Ideally, a larger radius should provide more information for describing pseudo-corner points, leading to a corresponding reduction in matching localization error. However, the mutual influence between pseudo-corner points in the GPR image must also be considered. When the feature statistics range exceeds a certain threshold, the statistical regions of adjacent pseudo-corner points start to overlap, leading to redundancy in the feature descriptors, which can, in turn, increase localization errors.
The test results indicate that, in both urban and railway environments, a major axis range of around 0.8 m strikes an optimal balance between the quantity of neighborhood information and the distance between adjacent pseudo-corner points, ensuring the best localization accuracy.
Next, the influence of the density of grid divisions on localization performance is analyzed, as shown in Table 4. Increasing the number of angle or distance intervals effectively reduces localization errors. For the same feature statistics range, sparser grid divisions result in more pixels being contained within each grid, which weakens the grid’s ability to describe the shape. This is because, in larger grids, the variations in the stripe features become more smoothed, leading to more indistinct feature representations. Based on overall performance, dividing the distance into five grids and the angles into eight grids yields the best localization results.
In summary, moderately increasing the matching window size can improve localization accuracy. However, in urban environments, increasing depth may have certain negative effects. An appropriate feature statistics range can effectively balance the amount of information and the relationship between neighboring pseudo-corner points. Reducing angle and distance intervals moderately can improve localization accuracy. The parameters for the algorithm are set as follows:
Railway Test Environment: The feature statistics range is 0.8 m, with angle divisions set to 8 and distance divisions set to 5. The matching search radius is 20 m, and the matching window size is 2.5 m × 10 m.
Urban Test Environment: The feature statistics range is 0.8 m, with angle divisions set to 10 and distance divisions set to 5. The matching search radius is 20 m, and the matching window size is 2 m × 10 m.
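To make the descriptor construction concrete, the sketch below builds a shape-context-style distance/angle histogram of stripe pixels around one pseudo-corner point, using an elliptical statistics range (major axis 0.8 m) and the 5 × 8 distance/angle grid selected above. This is an illustrative reconstruction under stated assumptions—the pixel spacings `dx`/`dz`, the default minor axis, and all names are hypothetical—not the authors' implementation.

```python
import math

def cdsc_descriptor(binary_img, cx, cy, major=0.8, minor=0.3,
                    n_dist=5, n_angle=8, dx=0.05, dz=0.05):
    """Sketch of a CDSC-like descriptor around one pseudo-corner point.

    binary_img: 2-D list of 0/1 values (thresholded GPR image, rows = depth).
    (cx, cy): pseudo-corner pixel coordinates (column, row).
    major/minor: elliptical statistics range in metres; dx/dz: assumed
    horizontal/vertical pixel spacing in metres.
    Returns an n_dist x n_angle histogram flattened to a unit-sum vector.
    """
    hist = [0] * (n_dist * n_angle)
    rows, cols = len(binary_img), len(binary_img[0])
    for r in range(rows):
        for c in range(cols):
            if not binary_img[r][c] or (r == cy and c == cx):
                continue
            # metric offsets from the centre point
            ex, ez = (c - cx) * dx, (r - cy) * dz
            # normalised elliptical radius; keep only pixels inside the range
            rho = math.sqrt((ex / major) ** 2 + (ez / minor) ** 2)
            if rho > 1.0:
                continue
            # bin by normalised radius (0..1) and angle (0..2*pi)
            theta = math.atan2(ez, ex) % (2 * math.pi)
            d_bin = min(int(rho * n_dist), n_dist - 1)
            a_bin = min(int(theta / (2 * math.pi) * n_angle), n_angle - 1)
            hist[d_bin * n_angle + a_bin] += 1
    total = sum(hist)
    return [h / total for h in hist] if total else hist
```

Normalizing the histogram to unit sum makes descriptors from images with different stripe densities comparable under a Euclidean distance.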

4.3.2. Feature Descriptor Matching Analysis

A comparative analysis of the matching performance between the proposed method and three feature descriptors—ORB, SIFT, and SURF—was conducted. The statistical results for the urban test trajectory are shown in Table 5. The CDSC descriptor achieves the highest recall rate and, under the condition of high recall, also maintains a high precision rate. Among the four algorithms, the F-measures from lowest to highest are ORB, SURF, SIFT, and CDSC, with CDSC showing the best overall performance.
The statistical results for the railway test trajectory are shown in Table 6. Compared to the urban test results, all four descriptors showed improvements in both recall and precision rates. Notably, the CDSC algorithm exhibited the highest recall and precision rates. These findings demonstrate that the proposed algorithm outperforms conventional descriptors in both urban road and railway test environments, providing more accurate feature descriptions and higher matching precision.

4.3.3. Localization Analysis

The proposed method was compared with SIFT, ORB, SURF, and NCC image matching methods in terms of localization accuracy and computational efficiency. The localization scatter plots are shown in Figure 14, where red, green, blue, black, and yellow represent the localization results of SIFT, ORB, SURF, NCC, and CDSC, respectively. The ORB, SIFT, and SURF descriptors showed a large number of mismatches, with maximum errors exceeding 20 m. In contrast, the localization results of CDSC and NCC were largely consistent, with localization errors of less than 1 m. The NCC algorithm, which uses a full matching approach by leveraging all intensity information in the GPR images, ensured high localization accuracy. However, the proposed CDSC algorithm, which derives its features from stable structures in GPR images, also guaranteed high localization accuracy.
A section of GPR imaging with strong interference is shown in Figure 15a. Common pseudo-corner point descriptors like SIFT and ORB failed to identify correct matching point pairs on highly disturbed images, while the SURF algorithm managed to retrieve a small number of matching pairs. However, the proposed method identified a significant number of correct matching point pairs, showing markedly better performance than the SIFT, ORB, and SURF descriptors when faced with strongly interfered GPR images, demonstrating superior feature description capabilities. As shown in Figure 15b, when the interference in the GPR image was weak, the number of correct matching pseudo-corner points identified by the CDSC algorithm was far higher than that of the other descriptors.
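The matching stage pairs descriptors by nearest-neighbour search and then discards erroneous pairs. One common realization, sketched below with our own naming, is Lowe-style ratio-test matching on Euclidean descriptor distance; the 0.8 ratio threshold is an assumed value, and the paper's actual outlier-removal rule may differ.

```python
def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbour matching with a ratio test.

    desc_a, desc_b: lists of equal-length feature vectors from the
    real-time image and the reference image. A pair is kept only when
    the best match is clearly closer than the second-best one.
    """
    def dist(u, v):
        return sum((x - y) ** 2 for x, y in zip(u, v)) ** 0.5

    matches = []
    for i, da in enumerate(desc_a):
        order = sorted(range(len(desc_b)), key=lambda j: dist(da, desc_b[j]))
        if len(order) >= 2:
            best, second = order[0], order[1]
            # ambiguous matches (best not much closer than second) are dropped
            if dist(da, desc_b[best]) < ratio * dist(da, desc_b[second]):
                matches.append((i, best))
        elif len(order) == 1:
            matches.append((i, order[0]))
    return matches
```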
A statistical analysis of the localization errors yielded the cumulative distribution function (CDF) shown in Figure 16. The SIFT, ORB, and SURF algorithms achieved 55.5%, 74.6%, and 80.4% of their results within 1 m accuracy, respectively, while NCC and CDSC achieved 86.8% and 85.6% within 0.1 m, respectively; CDSC and NCC thus offer comparable performance.
As shown in Table 7, the RMSE (Root Mean Square Error) of the proposed algorithm is 0.055 m, representing a significant improvement in accuracy compared to other pseudo-corner point-based descriptors and is comparable to the localization accuracy of the NCC algorithm.
Due to the more complex road conditions in urban sections compared to railway sections, the probability of image decorrelation caused by GPR image interference is higher. As a result, the localization errors of all algorithms increased. However, the localization errors of CDSC and NCC were similar and significantly better than those of SURF, ORB, and SIFT. In regions with strong interference, the proposed algorithm demonstrated strong robustness. As shown in the subplots of Figure 17, at the mileage between 480 and 530 m, all algorithms except the CDSC algorithm exhibited substantial localization errors. Figure 18 compares the GPR images in this section, showing that clutter interference caused changes in image features. Conventional matching methods resulted in numerous mismatches or even failure to match, while CDSC could still ensure the correct matching of a large number of pseudo-corner points.
Statistical analysis of the localization results produced the cumulative distribution functions (CDFs) shown in Figure 19. The SURF, ORB, and SIFT algorithms had 22.4%, 7.1%, and 12.2% of their localization errors within 1 m, respectively. NCC had 89.6% of localization errors within 1 m, while the proposed CDSC algorithm had 86.6% of its localization errors within 1 m, outperforming the other algorithms overall.
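The "within 1 m" percentages reported here are simply the empirical CDF of the error samples evaluated at a threshold. A minimal sketch (our own naming):

```python
def fraction_within(errors, threshold):
    """Empirical CDF at a threshold: the share of localization errors
    not exceeding it, as used for the 'within 1 m' statistics."""
    return sum(1 for e in errors if e <= threshold) / len(errors)
```

For example, errors of [0.5, 0.9, 2.0, 3.0] m give a within-1 m fraction of 0.5.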
The matching and localization results for the urban test trajectory are shown in Table 8. Compared to the railway test results, the localization errors in the urban tests increased. The localization errors for SIFT, ORB, and SURF exceeded 5 m, while NCC had an error of 0.92 m and CDSC had an error of 1.22 m.
NCC utilizes all pixels in the matching process, effectively making use of all the image information, which results in the best positioning accuracy. However, the drawback of NCC is the large computational load and high storage space usage caused by involving all pixels in the computation. The CDSC algorithm extracts effective local features from the GPR images and uses feature vectors to replace the entire image in the computation. Although its positioning accuracy is slightly lower than that of NCC, it shows significant improvements in terms of positioning computation time and storage space usage. The average time and storage space required for localizing 1 km of data were recorded, as shown in Table 9.
As indicated in Table 9, the proposed algorithm has a time advantage in feature description. This is because the CDSC descriptor reduces a large number of partial derivative computations compared to traditional methods, simplifying the calculations. The SIFT algorithm has certain advantages over the CDSC algorithm in terms of runtime and storage space. As shown in Figure 15 and Figure 18, the number of correctly matched point pairs in SIFT is significantly fewer than in CDSC. This is because the SIFT algorithm is affected by the loss of correlation in local regions of the GPR images, leading to fewer feature points being extracted with lower accuracy, which in turn reduces the localization computation time. This limitation results in significantly inferior localization performance compared to the CDSC algorithm proposed in this manuscript, making it unsuitable for practical localization scenarios. Additionally, the proposed algorithm reduced matching time by 74.8% compared to NCC due to the lower dimensionality of the feature descriptor, which reduces the time required for matching. In summary, the proposed algorithm has a time advantage over other algorithms in both feature description and feature matching, which is beneficial for real-time matching and localization. In terms of storage space, the proposed algorithm reduced the required storage by 80.2% compared to NCC, demonstrating a significant improvement.

5. Conclusions

This paper presented a novel method for extracting features from GPR images, namely the central dense structure context (CDSC) features, which takes full advantage of the pseudo-corner and strip-like elements of GPR images. At the same time, a complete and reliable matching and positioning method was designed based on CDSC, which reduces the storage footprint of the reference image data and improves the matching calculation efficiency. The proposed vehicle matching method primarily includes steps such as extracting pseudo-corner points using SURF, CDSC descriptor extraction, feature matching, and outlier removal.
The proposed algorithm was evaluated on datasets collected from urban roads and railway tracks. The results show that, compared to traditional methods, the proposed method demonstrates advantages in terms of localization accuracy, the number of matching point pairs, and precision, significantly improving overall performance. This paper provides new ideas for research on ground-penetrating radar navigation and positioning.
Achieving vehicle localization using GPR is a challenging task. Although the proposed algorithm improves localization accuracy and operational efficiency compared to traditional methods, it is still some way from achieving real-time localization output. Future research will focus on optimizing threshold segmentation and graphical statistical strategies to further enhance the algorithm's real-time performance. Additionally, the suppression of image interference and the assessment of matching availability are also areas of interest for future research.

Author Contributions

Conceptualization, J.X. and Q.L.; formal analysis, X.J. and G.S.; investigation, D.W.; methodology, J.X.; project administration, Q.L. and D.W.; resources, D.W.; supervision, H.Y.; validation, J.X. and Q.L.; writing—original draft, J.X.; writing—review and editing, Q.L. and D.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Restrictions apply to the datasets. The datasets presented in this article are not readily available because the data are part of an ongoing study.

Conflicts of Interest

The authors declare no conflicts of interest.

Figure 1. Vehicle-borne GPR detects underground information.
Figure 2. Three data types of ground-penetrating radar. (a) A-scan: the single-channel data signal acquired by GPR. (b) B-scan: data acquired by the GPR antenna through continuous scanning in the direction of movement. (c) C-scan: data composed of multiple B-scans.
Figure 3. Stripe and spot features in GPR images.
Figure 4. Data were collected from the same road segment at different times and under different weather conditions using GPR. Subfigures (a,b,e,f) were collected on sunny days, and subfigures (c,d) on rainy days.
Figure 5. Algorithm flow.
Figure 6. Comparison of feature point extraction results. (a) The SIFT algorithm was used to extract feature points in GPR images. (b) The ORB algorithm was used to extract feature points in GPR images. (c) The SURF algorithm was used to extract feature points in GPR images.
Figure 7. Shape context algorithm feature extraction.
Figure 8. The steps of CDSC feature extraction. Descriptors are obtained by leveraging the stripes in the binarized image surrounding the pseudo-corner points. Subgraph (a) is the filtered image, subgraph (b) the binarized image, and subgraph (c) the central dense structure context feature.
Figure 9. CDSC features were extracted from GPR images collected twice at the same location and compared. The nine feature points on each image on the left correspond to the nine features on the right.
Figure 10. Image matching schematic diagram.
Figure 11. Comparison of GPR images before and after preprocessing; (a) displays the original output image, (b) shows the filtered image.
Figure 12. Urban test trajectory and equipment setup. (a) Urban test trajectory; the yellow line represents the test trajectory. (b) The equipment setup for the road experiment.
Figure 13. Train test trajectory and equipment setup. (a) Train test trajectory, with a total length of approximately 6.7 km; the yellow line represents the test trajectory. (b) The equipment setup for the railway experiment.
Figure 14. Railway test trajectory positioning error.
Figure 15. Comparison of matching results of different methods in the railway test trajectory. (a) Matching GPR image with strong interference; (b) matching GPR image without interference.
Figure 16. Railway test trajectory positioning error CDF.
Figure 17. Urban test trajectory positioning error.
Figure 18. Comparison of matching results of different methods in the urban test trajectory. (a) Matching GPR image with strong interference; (b) matching GPR image without interference.
Figure 19. Urban test trajectory positioning error CDF.
Table 1. GPR parameters.

| Parameters | Parameter Values |
| --- | --- |
| Frequency | 300 MHz |
| Number of sample points | 512 |
| Scanning rate | 500 Hz |
| Detection depth | 3~4 m |
Table 2. The impact of the size of the matching window.

| Nums | Matching Window Depth (m) | Matching Window Length (m) | Urban RMSE (m) | Train RMSE (m) |
| --- | --- | --- | --- | --- |
| 1 | 1.5 | 6 | 1.76 | 0.11 |
| 2 | 1.5 | 8 | 1.51 | 0.07 |
| 3 | 1.5 | 10 | 1.35 | 0.06 |
| 4 | 2.0 | 6 | 1.79 | 0.09 |
| 5 | 2.0 | 8 | 1.52 | 0.06 |
| 6 | 2.0 | 10 | 1.22 | 0.06 |
| 7 | 2.5 | 6 | 1.86 | 0.09 |
| 8 | 2.5 | 8 | 1.58 | 0.07 |
| 9 | 2.5 | 10 | 1.33 | 0.06 |
Table 3. The impact of the feature statistical range.

| Nums | Feature Statistical Range (m) | Urban RMSE (m) | Train RMSE (m) |
| --- | --- | --- | --- |
| 1 | 0.50 | 3.55 | 0.43 |
| 2 | 0.65 | 2.70 | 0.21 |
| 3 | 0.80 | 2.64 | 0.27 |
| 4 | 0.95 | 3.04 | 0.28 |
Table 4. The impact of the density of grid divisions.

| Nums | Distance Intervals | Angle Intervals | Urban RMSE (m) | Train RMSE (m) |
| --- | --- | --- | --- | --- |
| 1 | 4 | 4 | 5.44 | 1.17 |
| 2 | 4 | 6 | 3.37 | 0.46 |
| 3 | 4 | 8 | 2.64 | 0.27 |
| 4 | 5 | 4 | 4.38 | 0.55 |
| 5 | 5 | 6 | 3.15 | 0.38 |
| 6 | 5 | 8 | 2.32 | 0.18 |
Table 5. The statistical results for the urban test trajectory.

| Method | Recall Rate | Precision Rate | F-Measure |
| --- | --- | --- | --- |
| SIFT | 5.3% | 30.1% | 0.09013 |
| ORB | 0.6% | 37.7% | 0.011812 |
| SURF | 2.6% | 11.9% | 0.042676 |
| CDSC | 6.4% | 33.7% | 0.107571 |
Table 6. The statistical results for the railway test trajectory.

| Method | Recall Rate | Precision Rate | F-Measure |
| --- | --- | --- | --- |
| SIFT | 11.9% | 36.9% | 0.179963 |
| ORB | 0.7% | 47.5% | 0.013797 |
| SURF | 5.1% | 22.0% | 0.082804 |
| CDSC | 21.3% | 60.9% | 0.315613 |
Table 7. Railway test trajectory positioning RMSE.

| Methods | SIFT | ORB | SURF | NCC | CDSC |
| --- | --- | --- | --- | --- | --- |
| RMSE (m) | 4.11 | 1.84 | 1.28 | 0.05 | 0.06 |
Table 8. Urban test trajectory positioning RMSE.

| Methods | SIFT | ORB | SURF | NCC | CDSC |
| --- | --- | --- | --- | --- | --- |
| RMSE (m) | 8.34 | 8.92 | 6.90 | 0.92 | 1.22 |
Table 9. Comparison of storage space and runtime.

| Methods | SIFT | ORB | SURF | NCC | CDSC |
| --- | --- | --- | --- | --- | --- |
| Runtime (s) | 292.1 | 733.0 | 304.3 | 2113.9 | 532.3 |
| Storage (MB) | 19 | 49 | 164 | 233 | 46 |
