Article

Frequency–Wavenumber Analysis of Deep Learning-based Super Resolution 3D GPR Images

Man-Sung Kang and Yun-Kyu An *
Department of Architectural Engineering, Sejong University, Seoul 05006, Korea
* Author to whom correspondence should be addressed.
Remote Sens. 2020, 12(18), 3056; https://doi.org/10.3390/rs12183056
Submission received: 19 August 2020 / Revised: 15 September 2020 / Accepted: 17 September 2020 / Published: 18 September 2020
(This article belongs to the Special Issue Trends in GPR and Other NDTs for Transport Infrastructure Assessment)

Abstract

This paper proposes a frequency–wavenumber (f–k) analysis technique based on deep learning-based super resolution (SR) ground penetrating radar (GPR) image enhancement. GPR is one of the most popular underground investigation tools owing to its nondestructive and high-speed survey capabilities. However, arbitrary underground medium inhomogeneity and undesired measurement noises often disturb GPR data interpretation. Although the f–k analysis can be a promising technique for GPR data interpretation, the lack of GPR image resolution caused by fast or coarse spatial scanning in practice often leads to analysis distortion. To address this technical issue, we propose an f–k analysis technique that incorporates SR GPR images generated by a deep learning network. The proposed technique makes it possible to significantly reduce the effects of arbitrary underground medium inhomogeneity and undesired measurement noises. Moreover, the GPR-induced electromagnetic wavefields can be decomposed for directivity analysis of the wave propagation reflected from a certain underground object. The effectiveness of the proposed technique is numerically validated through 3D GPR simulation and experimentally demonstrated using in-situ 3D GPR data collected from urban roads in Seoul, Korea.


1. Introduction

In the past few decades, sinkhole accidents on urban roads have posed a serious hazard to buildings, infrastructure and, especially, the inhabitants of the affected areas [1,2]. Although vision-based road surface inspection techniques have been widely proposed for road degradation evaluation [3,4], early detection of sinkholes, which are typically invisible from the road surface, is still challenging. To effectively detect underground cavities, which are likely to develop into sinkholes, various nondestructive testing (NDT) techniques have attracted attention. Ground penetrating radar (GPR) is one of the widely accepted NDT tools thanks to its high sensitivity to underground media changes and its rapid inspection capability over broad target areas [5,6,7]. However, the physical interpretation of field GPR data for underground object detection and classification is still challenging in some cases, because the electromagnetic waves reflected from a target underground object are often weaker than the responses caused by underground medium inhomogeneity and undesired measurement noises [8,9]. In addition, under air-coupled GPR data acquisition conditions, the dominant signals reflected from the road surface often hinder the precise interpretation of the relatively weak signals coming from underground media [8,10,11].
To enhance GPR data interpretability, a number of signal and image processing techniques, such as time-varying gain [10,11], subtraction [8], migration [12], deconvolution [13], basis pursuit [9], compressive sensing [14], velocity analysis [15], Radon transform [16], discrete wavelet transform [17] and empirical mode decomposition [18], have been proposed. Although these techniques make GPR data interpretation easier, their results still highly depend on experts' experience and are often susceptible to undesired noises. Thus, a number of researchers have proposed artificial neural networks to automate GPR data interpretation [19,20,21]. Recently, deep learning networks have been actively applied to GPR data interpretation to minimize user intervention. For instance, Kim et al. [22] proposed a convolutional neural network (CNN) combined with a statistical thresholding technique to classify underground objects using GPR B-scan images. More advanced deep learning networks based on the combination of B- and C-scan GPR images [23,24], as well as triplanar GPR images [25], were subsequently developed to improve the classification performance.
However, undesired noise problems are inevitable in reality and still disturb proper data interpretation. In particular, incoherent GPR data caused by inhomogeneity of the arbitrary underground medium, measurement noises and systematic errors are often misinterpreted [26]. A frequency–wavenumber (f–k) analysis has been developed to address a similar noise issue in ultrasonic NDT fields [27,28,29]. The f–k analysis, which transforms time–space (t–s) signals to the corresponding f–k space, is able to effectively filter out the noise components by removing undesired wave patterns in the f–k domain. The filtered f–k domain signals are then restored as t–s domain signals without unwanted noise patterns, making it possible to highlight the real wave components reflected from a target underground object. In addition, the wave propagation direction in the t–s domain can be precisely decomposed, which is useful for recognizing the size and location of a wave scatterer as well as for classifying the object type. In spite of these benefits, 3D GPR data are often not suitable for the f–k analysis. High resolution GPR images, which are composed of spatially dense GPR data considering the minimum target underground object size, are necessary for a proper f–k analysis. However, GPR image resolution, which is determined by the number of GPR channels and the spatial scanning speed, is often insufficient in practice because of fast or coarse spatial scanning, leading to analysis distortion [30,31,32].
To tackle the image resolution issue, a number of image resolution enhancement techniques have been proposed in the computer vision field. For example, super resolution (SR) images have been artificially generated by various image processing methods such as an image prediction model [33], an image statistical method [34] and a patch-based method [35]. However, high frequency regions, such as textures and edge components in the target image, are not properly reconstructed by these conventional image processing methods. Recently, numerous deep learning-based SR image generation techniques have been proposed. Dong et al. proposed the first SR network, an SR image enhancement network using a CNN [36]. More advanced SR networks, such as very deep SR [37], a generative adversarial network-based SR network [38] and residual channel attention networks [39], have since been developed. More recently, residual learning-based deep CNNs have been intensively studied to improve training efficiency [40,41]. As for GPR applications, Kang et al. proposed a deep learning-based SR GPR image generation network for enhancing underground cavity detectability [42].
In this paper, an f–k analysis incorporated with a deep learning-based SR network is proposed for unwanted noise reduction and electromagnetic wavefield decomposition. First, the deep learning-based SR GPR image enhancement network is described in Section 2. The f–k analysis of the SR images is then proposed in Section 3. In Section 4, the effectiveness of the proposed technique is numerically validated using 3D GPR simulation data and experimentally demonstrated using in-situ 3D GPR data obtained from complex urban roads in Seoul, Korea.

2. Deep Learning-based SR GPR Image Enhancement

When a multi-channel GPR scans along the region of interest, B- and C-scan images can typically be constructed by collecting multiple A-scan data along the scanning direction. The B-scan image at each GPR channel includes the parabola features reflected from underground objects, and the C-scan image at a certain depth displays the circular features corresponding to the parabola features. These parabola and circular features in the B- and C-scan images have been widely used as some of the main distinctive features for underground object identification and classification. However, lack of image resolution often hinders feature recognition. To obtain high resolution B- and C-scan images, a slow scanning speed and a dense GPR antenna arrangement are necessary. Unfortunately, such resolution improvements trade off against time and cost in reality.
To effectively tackle the lack of resolution in existing 3D GPR data without changing the data acquisition conditions, a deep learning-based SR GPR image enhancement network based on a deep residual channel attention network [39] is developed, as shown in Figure 1. The deep residual channel attention network is one of the CNN-based SR image enhancement networks, consisting of 500 layers and 1.6 M parameters. The network increases the LR image resolution four times and comprises four main steps: (1) shallow feature extraction, (2) deep feature extraction, (3) upscaling and (4) reconstruction. First, the shallow feature extraction step, which consists of a single convolution layer with 64 kernels of 3 × 3 size and a stride of 1, extracts shallow features from the input low resolution (LR) image.
Subsequently, deep features are extracted through the residual-in-residual structure in the second step of Figure 1. The residual-in-residual structure allows a very deep network to be trained easily while achieving powerful performance. Here, deep features refer to high frequency information such as lines and edges, which constitute the main difference between the SR and LR images. The residual-in-residual structure is a very deep network consisting of 10 residual groups, and each residual group includes 20 residual blocks and 1 convolution layer, as shown in Figure 1. To effectively train the deep network, short and long skip connections are embedded in the residual-in-residual structure. Each residual group is connected by a long skip connection, and the residual blocks inside each residual group are connected by short skip connections, as shown in Figure 1. The multiple skip connections allow abundant shallow features to be bypassed, enabling the main network to focus on learning deep features; they also stabilize the training of the very deep network with residual learning, which is the main advantage of the residual-in-residual structure. Each convolution layer inside the residual-in-residual structure works similarly to the convolution layer of the first step. To train the high frequency regions of the image more efficiently, a channel attention mechanism is performed inside each residual block through global average pooling, convolution, a rectified linear unit (ReLU) layer and a sigmoid function, as sketched below. The channel attention mechanism improves discriminative learning capability by focusing on more informative channels based on the average value extracted from each channel via the global average pooling layer. The convolution, ReLU layer and sigmoid function then provide non-linearity between the channels and allow multiple channel-wise features to be emphasized in a non-mutually exclusive manner. Next, the upscaling step, as the third step, extends the feature maps to the SR resolution size. It consists of a deconvolution layer with 256 kernels of 3 × 3 size and a stride of 1, which increases the size of each pixel by four times in this network. Finally, the SR image is generated through a single convolution layer comprised of three kernels of 3 × 3 size and a stride of 1 in the reconstruction step, the fourth step of Figure 1.
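To make the channel attention mechanism concrete, a minimal sketch is given below, assuming a PyTorch-style implementation. The class names, the reduction ratio and the 1 × 1 convolutions used for the channel-wise weighting are illustrative assumptions rather than the authors' code.

```python
# Hypothetical sketch of the channel attention (CA) mechanism and a residual
# block described above, written in PyTorch. Kernel sizes, the reduction
# ratio and layer names are assumptions, not the authors' implementation.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int = 64, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # global average pooling per channel
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),                            # channel-wise weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.body(self.pool(x))                  # per-channel attention weights
        return x * w                                 # emphasize informative channels

class ResidualChannelAttentionBlock(nn.Module):
    """Conv-ReLU-Conv followed by channel attention, with a short skip connection."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.ca = ChannelAttention(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        res = self.conv2(torch.relu(self.conv1(x)))
        return x + self.ca(res)                      # short skip connection
```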

3. SR GPR Image-Based f–k Analysis

Once the SR GPR images are generated, the SR C-scan images can be obtained in the t–s domain for the subsequent f–k analysis, as shown in Figure 2. Note that the proposed f–k analysis can easily be extended to B- or D-scan images, although sequential C-scan images along the depth direction are used in this study. Although the SR C-scan images contain meaningful wave signals, they are mixed with noise components, and it is difficult to remove the undesired noises caused by arbitrary underground medium inhomogeneity and the data measurement procedure in the t–s domain. On the other hand, the noise components can be effectively eliminated in the f–k domain. Moreover, wavefield decomposition in the f–k domain enables precise analysis of wave propagation directivity by converting the decomposed wavefield data back to the t–s domain, as shown in Figure 2. First, the SR C-scan data in the t–s domain are transformed into the f–k domain through the 3D Fourier transform given by:
$U(k_x,\, k_y,\, \omega) = \iiint E(x, y, t)\, e^{-i(k_x x + k_y y + \omega t)}\, \mathrm{d}x\, \mathrm{d}y\, \mathrm{d}t$   (1)
where $E$ and $U$ denote the electromagnetic wavefields of the SR C-scan data in the t–s and f–k domains, respectively; $k$, $\omega$ and $t$ are the wavenumber, angular frequency and time, respectively; and $x$ and $y$ are the spatial Cartesian coordinates.
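As an illustration of Equation (1), the sketch below transforms an SR C-scan volume into the f–k domain using numpy's FFT. The array shape and sampling intervals are assumptions for illustration and are not taken from the paper.

```python
# Minimal sketch of Equation (1): transforming an SR C-scan volume E(x, y, t)
# into the f-k domain with a 3D FFT. dx, dy, dt are assumed sampling intervals.
import numpy as np

def to_fk_domain(E: np.ndarray, dx: float, dy: float, dt: float):
    """E has shape (nx, ny, nt); returns U(kx, ky, w) and the shifted axis vectors."""
    U = np.fft.fftshift(np.fft.fftn(E))                            # 3D Fourier transform
    nx, ny, nt = E.shape
    kx = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(nx, d=dx))     # wavenumber axis (rad/m)
    ky = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(ny, d=dy))
    w = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(nt, d=dt))      # angular frequency (rad/s)
    return U, kx, ky, w
```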
Subsequently, the tailored f–k filter is designed. First, a lowpass filter is applied in the f domain so that measurement noises outside the excitation frequency range can be eliminated, as shown in Figure 3. The filtering frequency bandwidth can be determined by considering the excitation frequency range. The k domain filter is then developed. Since the electromagnetic waves propagating through arbitrary underground media are partially and randomly reflected from the media's inhomogeneity and numerous small porosities, these reflection signals appear randomly and look like non-propagating wave components in the C-scan images of the t–s domain. Physically, this means that no spatially propagating wave can be observed if there is no object inside the underground media. This phenomenon can be clearly observed even when meaningful signals reflected from a certain object exist, as shown in Figure 3a. The wave energy is highly concentrated near zero $k_x$, and these are undesired noise components to be eliminated. Notably, these randomly reflected signals may have dominant energy in the f–k domain owing to their high repetition rate, although each individual reflection intrinsically has a small amplitude. Based on this physical observation, the k domain filter is established using a Laplacian of Gaussian window ($\Phi_k$):
$\Phi_k = \left[ \dfrac{k_x^2 + k_y^2 - 2\sigma^2}{\sigma^4} \right] e^{-\frac{k_x^2 + k_y^2}{2\sigma^2}}, \quad \forall\, \omega$   (2)
where $\sigma$ is the standard deviation.
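For concreteness, one possible construction of the $\Phi_k$ window of Equation (2) on the $(k_x, k_y)$ grid is sketched below. The value $\sigma = 1$ follows Section 4, while the grid handling and any normalization are assumptions; the lowpass filter applied along the f axis is omitted here.

```python
# Possible construction of the Laplacian-of-Gaussian window in Equation (2).
# sigma = 1 follows Section 4; kx/ky would come from the FFT axes sketched above.
import numpy as np

def log_window(kx: np.ndarray, ky: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    """Return Phi_k evaluated on the (kx, ky) grid; it is applied at every frequency."""
    KX, KY = np.meshgrid(kx, ky, indexing="ij")
    r2 = KX**2 + KY**2
    return ((r2 - 2.0 * sigma**2) / sigma**4) * np.exp(-r2 / (2.0 * sigma**2))
```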
Once the f–k filter is designed, the filtered SR C-scan data ($U_f$) can be obtained in the f–k domain as:

$U_f(k_x,\, k_y,\, \omega) = U(k_x,\, k_y,\, \omega) \cdot \Phi_k$   (3)

Figure 3b shows that the measurement noises, as well as the non-propagating components, are clearly filtered out, while the meaningful wave components remain.
Furthermore, the electromagnetic wavefields can be decomposed in the f–k domain so that the wave propagation directivity can be precisely analyzed, which is useful for recognizing underground objects' size and location as well as for classifying the object type. For instance, a $\pm x$ directional window filter ($\Phi_{\pm k_x}$) can be designed for decomposing $U_f$ into the $+x$ or $-x$ directional wavefield ($U_{\pm k_x}$) in the f–k domain, which is given by:
$U_{\pm k_x}(k_x,\, k_y,\, \omega) = U_f(k_x,\, k_y,\, \omega) \cdot \Phi_{\pm k_x}, \qquad \Phi_{+k_x} = \begin{cases} 0 & k_x \le 0 \\ 1 & k_x > 0 \end{cases}, \qquad \Phi_{-k_x} = \begin{cases} 1 & k_x < 0 \\ 0 & k_x \ge 0 \end{cases}$   (4)
In a similar fashion, it can readily be extended to a $\pm y$ directional filter.
Next, the resultant C-scan data ($E_{\pm k_x}$) in the t–s domain can be reconstructed using the following inverse 3D Fourier transform:
$E_{\pm k_x}(x, y, t) = \dfrac{1}{(2\pi)^3} \iiint U_{\pm k_x}(k_x,\, k_y,\, \omega)\, e^{\,i(k_x x + k_y y + \omega t)}\, \mathrm{d}k_x\, \mathrm{d}k_y\, \mathrm{d}\omega$   (5)
As a representative example, only the $-x$ directional wavefield ($E_{-k_x}$) remains, and it reveals a much higher signal-to-noise ratio (SNR) than $E$ without pixel information loss and distortion, as shown in Figure 2. Note that the wavefield decomposition process is optional in the algorithm; thus, $U_{\pm k_x}$ can be replaced by $U_f$ in Equation (5), resulting in the filtered SR C-scan data ($E_f$) in the t–s domain.
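A compact sketch of Equations (3)–(5) is given below, assuming the fftshifted spectrum and axis vectors produced by the earlier sketch. numpy's inverse FFT handles the normalization instead of the explicit $1/(2\pi)^3$ factor, and the frequency-domain lowpass is omitted for brevity.

```python
# Sketch of Equations (3)-(5): apply the k-domain window, split the filtered
# spectrum into +x / -x propagating parts and return to the t-s domain.
import numpy as np

def fk_filter_and_decompose(U: np.ndarray, kx: np.ndarray, phi_k: np.ndarray):
    """U: fftshifted spectrum of shape (nx, ny, nt); phi_k: window of shape (nx, ny)."""
    Uf = U * phi_k[:, :, None]                                     # Equation (3)

    pos = (kx > 0).astype(float)[:, None, None]                    # Phi_{+kx}
    neg = (kx < 0).astype(float)[:, None, None]                    # Phi_{-kx}
    U_pos, U_neg = Uf * pos, Uf * neg                              # Equation (4)

    back = lambda X: np.real(np.fft.ifftn(np.fft.ifftshift(X)))    # Equation (5)
    return back(Uf), back(U_pos), back(U_neg)                      # E_f, E_{+kx}, E_{-kx}
```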

4. Numerical and Experimental Validations

The proposed f–k analysis technique is numerically and experimentally validated through 3D GPR simulation using gprMax [43] and in-situ 3D GPR data obtained from urban roads in Seoul, Korea.

4.1. Numerical Validation

The target 3D model comprises an 8 × 2.975 × 2.75 m³ soil layer, an 8 × 0.525 × 2.75 m³ air layer and a steel pipe with a diameter of 500 mm, as depicted in Figure 4. The pipe was modelled perpendicular to the GPR scanning direction inside the soil layer. Note that the pipe was intentionally selected in this study because it is one of the representative wave scatterers that can be clearly reflected in all GPR channels constituting the C-scan images. Here, the relative permittivity values of air, soil and pipe were set to 1, 5 and infinity, respectively. The transmitter (Tx) was placed 50 mm from the receiver (Rx), and the GPR data reflected from the pipe were acquired by moving the Tx and Rx antennas along the soil layer surface, as shown in Figure 4. A finite difference time domain method [44] was used to simulate electromagnetic wave propagation. To simulate conditions similar to real-world GPR scanning, the spatial discretization was set to 20 mm, which is equivalent to a 20 km/h scanning speed with 20 GPR channels in the real-world application. Here, the 20 GPR channels are able to cover a road width of 1.5 m. The excitation waveform was a normalized second derivative of a Gaussian with a center frequency of 1.8 GHz. In addition, Gaussian random noise equivalent to 25% of the maximum magnitude of the GPR signal was artificially added to simulate the arbitrary underground medium inhomogeneity and undesired measurement noises.
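For reference, the sketch below generates such an excitation pulse and injects the stated noise level. The time axis, the pulse delay and the interpretation of the 25% level as the noise standard deviation relative to the peak amplitude are assumptions, not details given in the paper.

```python
# Sketch of the simulation signal conditions: a second-derivative-of-Gaussian
# (Ricker-type) excitation at 1.8 GHz plus Gaussian noise at 25% of the peak
# amplitude. Time axis, delay and noise interpretation are assumptions.
import numpy as np

fc = 1.8e9                                   # center frequency (Hz)
t = np.arange(0.0, 8e-9, 1e-11)              # assumed time axis: 8 ns at 10 ps steps
tau = t - 2.0 / fc                           # assumed delay so the pulse starts near zero
arg = (np.pi * fc * tau) ** 2
ricker = (1.0 - 2.0 * arg) * np.exp(-arg)    # normalized 2nd derivative of a Gaussian

def add_noise(a_scan: np.ndarray, level: float = 0.25,
              rng=np.random.default_rng(0)) -> np.ndarray:
    """Add zero-mean Gaussian noise with std equal to `level` of the peak magnitude."""
    sigma = level * np.abs(a_scan).max()
    return a_scan + rng.normal(0.0, sigma, size=a_scan.shape)

noisy = add_noise(ricker)
```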
Figure 5 shows the representative GPR B- and C-scan images obtained from the simulation model. The original B- and C-scan images in Figure 5a clearly show limited resolution, even though relatively slow scanning of 20 km/h with a dense GPR channel arrangement was modelled in this simulation. On the other hand, the SR B- and C-scan images show that the edges of the informative parabola and line features are well enhanced without pixel information loss or distortion, as displayed in Figure 5b. Although the image resolution is successfully enhanced, the arbitrary underground medium inhomogeneity and undesired measurement noises still remain in Figure 5b.
Figure 6a shows the representative $k_x$–$k_y$ plots at 300 MHz. As expected, the non-propagating components caused by incoherent noise are highly concentrated at the origin of the $k_x$–$k_y$ plane. To remove the non-propagating components, the f–k filter of Equations (2) and (3) is applied to $U$ shown in Figure 6a. The lowpass filter was designed by fitting an exponential function with a rate parameter of 0.05. As for the k domain filter, $\sigma$ was set to 1 considering the wavenumber range over all excitation frequencies. After applying the f–k filter, the non-propagating components are remarkably reduced in $U_f$, while the meaningful wave components reflected from the pipe remain, as shown in Figure 6b. Subsequently, $U_{-k_x}$ and $U_{+k_x}$ are obtained by applying $\Phi_{\pm k_x}$ using Equation (4), as shown in Figure 6c,d, respectively.
Figure 7 shows the resultant t–s domain images corresponding to Figure 6, which are reconstructed using Equation (5). Compared to Figure 7a, the incoherent and random noises are clearly and significantly eliminated in Figure 7b. To quantitatively evaluate the results, the SNR values of the representative A-scan signals along the vertical white dash-dotted lines in Figure 7a,b were compared. Figure 8 shows the A-scan signals with the reference signals obtained by smoothing spline curve fitting. Figure 8a reveals that the A-scan signal of $E$ is quite different from the reference signal, resulting in an SNR of 19.2 dB, as summarized in Table 1. Once the f–k filter is applied, Figure 8b shows that the A-scan signal of $E_f$ matches the reference signal well, with an SNR of 54.1 dB, as shown in Table 1. It can be confirmed that the proposed f–k filter is very effective in removing incoherent and random noise components. In addition, Figure 7c,d show that $E_{-k_x}$ and $E_{+k_x}$ are successfully decomposed along the $-x$ and $+x$ directions, respectively. Again, the wavefield decomposition is very powerful in identifying the underground object boundary and classifying the object type.
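A possible form of this SNR estimate is sketched below, assuming the SNR is computed as the power ratio between a smoothing-spline reference and the residual. The smoothing factor and the exact SNR definition are assumptions, not the authors' implementation.

```python
# Hypothetical SNR estimate against a smoothing-spline reference A-scan,
# in the spirit of the Table 1 comparison. Smoothing factor is an assumption.
import numpy as np
from scipy.interpolate import UnivariateSpline

def snr_db(a_scan: np.ndarray, t: np.ndarray, smoothing: float = 1e-3) -> float:
    ref = UnivariateSpline(t, a_scan, s=smoothing)(t)   # smoothing-spline reference signal
    noise = a_scan - ref                                # residual treated as noise
    return 10.0 * np.log10(np.sum(ref**2) / np.sum(noise**2))
```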

4.2. Experimental Validation Using In-Situ 3D GPR Data

The proposed f–k analysis technique was also experimentally validated using 3D GPR data collected from complex urban roads in Seoul, Korea. Figure 9a shows the 3D GPR-mounted van used for the field tests. The 3D GPR consists of 20 bow-tie monopole Tx and Rx antenna pairs generating a step-frequency signal with a wide frequency bandwidth ranging from 100 MHz to 3 GHz. The 20-channel GPR device has a 1.5 m scanning width, which can typically cover a single road lane, as shown in Figure 9a,b. The average scanning speed was approximately 20 km/h to avoid traffic congestion on urban roads. The GEOSCOPE MK IV data acquisition system shown in Figure 9c, which has a 3 GHz sampling rate and a 250 ns time range, was used in the field tests.
Figure 10a and Figure 11a show the representative experimental results, including B- and C-scan images obtained from two different underground pipes, which are defined as pipe cases 1 and 2. Compared to the simulation results of Figure 5a, the experimental LR images show lower resolution and more noise, which is most likely caused by arbitrary underground medium inhomogeneity and undesired measurement noises. In particular, the B- and C-scan images of Figure 10a and Figure 11a do not have sufficient pixel resolution for the f–k analysis. On the other hand, the SR B- and C-scan images reveal that the informative edges of the parabola and line features are well reconstructed without pixel information loss or distortion, as displayed in Figure 10b and Figure 11b.
Figure 12 and Figure 13 show the representative $k_x$–$k_y$ plots at 500 MHz of pipe cases 1 and 2, respectively. Similarly, the non-propagating components caused by the incoherent noises are concentrated at the origin of the $k_x$–$k_y$ plane in $U$, as shown in Figure 12a and Figure 13a. Then, $U_f$ in Figure 12b and Figure 13b shows that the undesired non-propagating components are remarkably reduced by applying the same filtering parameters as in the simulation, i.e., a rate parameter of 0.05 and $\sigma$ of 1 in Equation (2). To decompose $U_f$ into $U_{-k_x}$ and $U_{+k_x}$, the $\Phi_{\pm k_x}$ window filter is similarly applied using Equation (4), as shown in Figure 12c,d and Figure 13c,d.
Figure 14 and Figure 15 show the resultant t–s domain images of pipe cases 1 and 2 corresponding to Figure 12 and Figure 13. The incoherent and random noises are remarkably removed in Figure 14b and Figure 15b compared to Figure 14a and Figure 15a. $E_{-k_x}$ and $E_{+k_x}$ are also successfully decomposed using Equation (5), as shown in Figure 14c,d and Figure 15c,d, respectively.
Similar to the simulation case, the quantitative comparison results using SNR are summarized in Table 2. Both pipe cases 1 and 2 show an SNR improvement of about 75% after applying the proposed technique, which is consistent with the simulation results.

5. Discussion

The proposed f–k analysis technique based on SR GPR images was validated via the numerical simulation and field tests. Note that the SNR improvement rate of the simulation is higher than that of the experiments, because the incoherent noises were simply modelled as ideal Gaussian random noises in the simulation. Another interesting observation is that pipe cases 1 and 2 show similar SNR improvement rates, which indicates that the proposed f–k analysis technique is robust against test environmental variations. In other words, the performance of the proposed technique would be consistent regardless of underground site conditions. Although only pipe cases, which provide one of the most dominant features for clear validation, are shown in this paper, the proposed technique can easily be extended to other types of underground objects. In addition, cylindrical coordinates can be employed for the directivity analysis instead of Cartesian coordinates, depending on the target object's shape.

6. Conclusions

This paper proposes a frequency–wavenumber (f–k) analysis technique for 3D ground penetrating radar (GPR) data, which enables one to effectively eliminate incoherent noises and precisely analyze the electromagnetic wave propagation directivity. The technique is newly proposed and validated using super resolution (SR) GPR images artificially generated by a deep learning network. Three-dimensional GPR data collected using existing GPR devices typically suffer from a lack of resolution, making them difficult to analyze in the f–k domain. To avoid f–k analysis distortion, a deep learning-based SR GPR image enhancement network incorporated with the f–k analysis is developed. The proposed technique is able to effectively eliminate incoherent noises caused by arbitrary underground medium inhomogeneity and undesired measurement noises, which is one of the biggest technical challenges in real-world GPR data interpretation. In addition, the electromagnetic wave propagation directivity can be precisely analyzed through wavefield decomposition, which is another strong benefit of the f–k analysis. The proposed f–k analysis technique is successfully validated via 3D GPR simulation as well as field tests, revealing consistent and outstanding performance. The proposed f–k analysis would be a promising tool for 3D GPR data interpretation, especially for data obtained from complex urban roads.
As follow-up work, wavefield decomposition-based underground object characterization should be thoroughly studied using more GPR data obtained from various in-situ roads. Moreover, the proposed f–k analysis can be combined with deep learning-based automated data classification, making it possible to outperform the existing deep learning networks. It is envisioned that this novel f–k analysis can be helpful not only for underground object identification but also for concrete structure inspection using GPR.

Author Contributions

Y.-K.A. designed the study. M.-S.K. acquired simulation and experimental data. Y.-K.A. and M.-S.K. analyzed the simulation and experimental data. M.-S.K. and Y.-K.A. wrote the manuscript. Y.-K.A. made critical revisions to the manuscript. Y.-K.A. and M.-S.K. approved the submission of this manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the faculty research fund of Sejong University in 2020.

Acknowledgments

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea Government (MSIT) (2018R1A1A1A05078493).

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Brinkmann, R.; Parise, M.; Dye, D. Sinkhole distribution in a rapidly developing urban environment: Hillsborough County, Tampa Bay area, Florida. Eng. Geol. 2008, 99, 169–184.
2. Strzalkowski, P. Sinkhole formation hazard assessment. Earth Sci. 2018, 78, 9.
3. Huang, J.; Liu, W.; Sun, X. A pavement crack detection method combining 2D with 3D information based on Dempster–Shafer theory. Comput. Aided Civ. Infrastruct. Eng. 2014, 29, 299–313.
4. Guan, H.; Li, J.; Yu, Y.; Chapman, M.; Wang, H.; Wang, C.; Zhai, R. Iterative tensor voting for pavement crack extraction using mobile laser scanning data. IEEE Trans. Geosci. Remote Sens. 2014, 53, 1527–1537.
5. Toksoz, D.; Yilmaz, I.; Seren, A.; Mataraci, I. A study on the performance of GPR for detection of different types of buried objects. Procedia Eng. 2016, 161, 399–406.
6. Sun, H.; Pashoutani, S.; Zhu, J. Nondestructive evaluation of concrete bridge decks with automated acoustic scanning system and ground penetrating radar. Sensors 2018, 18, 1955.
7. Ukaegbu, I.K.; Gamage, K.A.A.; Aspinall, M.D. Integration of ground-penetrating radar and gamma-ray detectors for nonintrusive characterization of buried radioactive objects. Sensors 2019, 19, 2743.
8. Sharma, P.; Kumar, B.; Singh, D.; Gaba, S.P. Critical analysis of background subtraction techniques on real GPR data. Def. Sci. J. 2017, 67, 559–571.
9. Park, B.J.; Kim, J.G.; Lee, J.S.; Kang, M.-S.; An, Y.-K. Underground object classification for urban roads using instantaneous phase analysis of GPR data. Remote Sens. 2018, 10, 1417.
10. Daniels, D.J. Ground Penetrating Radar, 2nd ed.; The Institution of Electrical Engineers: London, UK, 2004.
11. Ciampoli, L.B.; Tosti, F.; Economou, N.; Benedetto, F. Signal processing of GPR data for road surveys. Geosciences 2019, 9, 96.
12. Feng, X.; Yu, Y.; Liu, C.; Fehler, M. Combination of H-alpha decomposition and migration for enhancing subsurface target classification of GPR. IEEE Trans. Geosci. Remote Sens. 2015, 53, 4852–4863.
13. Economou, N.; Vafidis, A. GPR data time varying deconvolution by kurtosis maximization. J. Appl. Geophys. 2012, 81, 117–121.
14. Gurbuz, A.C.; McClellan, J.H.; Scott, W.R. Compressive sensing for subsurface imaging using ground penetrating radar. Signal Process. 2009, 89, 1959–1972.
15. Pue, J.D.; Meirvenne, M.V.; Cornelis, W.M. Accounting for surface refraction in velocity semblance analysis with air-coupled GPR. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 9, 60–73.
16. Nuzzo, L. Coherent noise attenuation in GPR data by linear and parabolic Radon transform techniques. Ann. Geophys. 2003, 46, 533–547.
17. Baili, J.; Lahouar, S.; Hergli, M.; Al-Qadi, I.L.; Besbes, K. GPR signal de-noising by discrete wavelet transform. NDT E Int. 2009, 42, 696–703.
18. Ostoori, R.; Goudarzi, A.; Oskooi, B. GPR random noise reduction using BPD and EMD. J. Geophys. Eng. 2018, 15, 347–353.
19. Nunez-Nieto, X.; Solla, M.; Gomez-Perez, P.; Lorenzo, H. GPR signal characterization for automated landmine and UXO detection based on machine learning techniques. Remote Sens. 2014, 6, 9729–9748.
20. Klesk, P.; Godziuk, A.; Kapruziak, M.; Olech, B. Fast analysis of C-scans from ground penetrating radar via 3-D Haar-like features with application to landmine detection. IEEE Trans. Geosci. Remote Sens. 2015, 53, 3996–4009.
21. Mazurkiewicz, E.; Tadeusiewicz, R.; Tomecka-Suchon, S. Application of neural network enhanced ground penetrating radar to localization of burial sites. Appl. Artif. Intell. 2016, 30, 844–860.
22. Kim, N.G.; Kim, K.D.; An, Y.-K.; Lee, H.J.; Lee, J.J. Deep learning-based underground object detection for urban road pavement. Int. J. Pavement Eng. 2018, 1–13.
23. Kang, M.-S.; Kim, N.G.; Lee, J.J.; An, Y.-K. Deep learning-based automated underground cavity detection using three-dimensional ground penetrating radar. Struct. Health Monit. 2019, 19, 173–185.
24. Kim, N.G.; Kim, S.H.; An, Y.-K.; Lee, J.J. A novel 3D GPR image arrangement for deep learning-based underground object classification. Int. J. Pavement Eng. 2019, 1–12.
25. Kim, N.G.; Kim, S.H.; An, Y.-K.; Lee, J.J. Triplanar imaging of 3-D GPR data for deep-learning-based underground object detection. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 4446–4456.
26. Benedetto, F.; Tosti, F. A signal processing methodology for assessing the performance of ASTM standard test methods for GPR systems. Signal Process. 2017, 132, 327–337.
27. Ruzzene, M. Frequency–wavenumber domain filtering for improved damage visualization. Smart Mater. Struct. 2007, 16, 2116–2129.
28. An, Y.-K.; Park, B.J.; Sohn, H. Complete noncontact laser ultrasonic imaging for automated crack visualization in a plate. Smart Mater. Struct. 2013, 22, 1–10.
29. An, Y.-K.; Kwon, Y.S.; Sohn, H. Noncontact laser ultrasonic crack detection for plates with additional structural complexities. Struct. Health Monit. 2013, 12, 522–538.
30. Miwa, T.; Arai, I. Super-resolution imaging for point reflectors near transmitting and receiving array. IEEE Trans. Antennas Propag. 2004, 52, 220–229.
31. Yamaguchi, T.; Mizutani, T.; Tarumi, M.; Su, D. Sensitive damage detection of reinforced concrete bridge slab by "time-variant deconvolution" of SHF-band radar signal. IEEE Trans. Geosci. Remote Sens. 2019, 57, 1478–1488.
32. Chang, P.; Flatau, A.; Liu, S. Review paper: Health monitoring of civil infrastructure. Struct. Health Monit. 2003, 2, 257–267.
33. Irani, M.; Peleg, S. Improving resolution by image registration. CVGIP Graph. Models Image Process. 1991, 53, 231–239.
34. Kim, K.I.; Kwon, Y. Single-image super-resolution using sparse regression and natural image prior. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 1127–1133.
35. Yang, J.; Wright, J.; Huang, T.S.; Ma, Y. Image super-resolution via sparse representation. IEEE Trans. Image Process. 2010, 19, 2861–2873.
36. Dong, C.; Loy, C.C.; He, K.; Tang, X. Image super-resolution using deep convolutional networks. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 38, 295–307.
37. Kim, J.; Lee, J.K.; Lee, K.M. Accurate Image Super-Resolution Using Very Deep Convolutional Networks; IEEE Computer Vision and Pattern Recognition: Las Vegas, NV, USA, 2016.
38. Ledig, C.; Theis, L.; Huszar, F.; Caballero, J.; Cunningham, A.; Acosta, A.; Aitken, A.; Tejani, A.; Totz, J.; Wang, Z.; et al. Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network; IEEE Computer Vision and Pattern Recognition: Honolulu, HI, USA, 2017.
39. Zhang, Y.; Li, K.; Li, K.; Wang, L.; Zhong, B.; Fu, Y. Image Super-Resolution Using Very Deep Residual Channel Attention Networks. In Proceedings of the 15th European Conference on Computer Vision, Munich, Germany, 8–14 September 2018.
40. Tai, Y.; Yang, J.; Liu, X. Image Super-Resolution via Deep Recursive Residual Network; IEEE Computer Vision and Pattern Recognition: Honolulu, HI, USA, 2017.
41. Bae, H.; Jang, K.; An, Y.-K. Deep super resolution crack network (SrcNet) for improving computer vision-based automated crack detectability in in situ bridges. Struct. Health Monit. 2020.
42. Kang, M.-S.; Kim, N.G.; Im, S.B.; Lee, J.J.; An, Y.-K. 3D GPR image-based UcNet for enhancing underground cavity detectability. Remote Sens. 2019, 11, 2545.
43. Warren, C.; Giannopoulos, A.; Giannakis, I. gprMax: Open source software to simulate electromagnetic wave propagation for ground penetrating radar. Comput. Phys. Commun. 2016, 209, 163–170.
44. Yee, K. Numerical solution of initial boundary value problems involving Maxwell's equations in isotropic media. IEEE Trans. Antennas Propag. 1966, 14, 302–307.
Figure 1. Super resolution (SR) ground penetrating radar (GPR) image enhancement network. The 1st step: shallow feature extraction; 2nd step: deep feature extraction; 3rd step: upscaling; 4th step: reconstruction (LR: low resolution, SR: super resolution).
Figure 2. Frequency–wavenumber (f–k) analysis using the SR C-scan images: SR C-scan images in the t–s domain are transformed into the f–k domain through the 3D Fourier transform. The undesired incoherent noises and the electromagnetic wave propagation directivity are then filtered out and analyzed, respectively, in the f–k domain. Finally, the filtered data in the f–k domain are reconstructed to the t–s domain through the inverse 3D Fourier transform.
Figure 3. The representative f–$k_x$ plots at the center of $k_y$ (a) before and (b) after filtering.
Figure 4. Three-dimensional GPR simulation setup: Tx and Rx are the transmitter and receiver, respectively. $\varepsilon$ means the permittivity.
Figure 5. Representative image enhancement results of the simulation data: (a) original LR B- and C-scan images, (b) enhanced SR B- and C-scan images.
Figure 6. Representative simulation $k_x$–$k_y$ plots at 300 MHz in the f–k domain: (a) $U$, (b) $U_f$, (c) $U_{-k_x}$ and (d) $U_{+k_x}$.
Figure 7. Representative simulation resultant images in the t–s domain: (a) $E$, (b) $E_f$, (c) $E_{-k_x}$ and (d) $E_{+k_x}$.
Figure 8. Representative GPR A-scan signals with the reference signals obtained from (a) $E$ and (b) $E_f$.
Figure 9. Experimental setup: (a) 3D GPR-mounted van with (b) 3D GPR device and (c) data acquisition system.
Figure 10. Representative image enhancement results of the experimental pipe case 1: (a) original LR B- and C-scan images, (b) enhanced SR B- and C-scan images.
Figure 11. Representative image enhancement results of the experimental pipe case 2: (a) original LR B- and C-scan images, (b) enhanced SR B- and C-scan images.
Figure 12. Representative $k_x$–$k_y$ plots at 500 MHz of the experimental pipe case 1: (a) $U$, (b) $U_f$, (c) $U_{-k_x}$ and (d) $U_{+k_x}$.
Figure 13. Representative $k_x$–$k_y$ plots at 500 MHz of the experimental pipe case 2: (a) $U$, (b) $U_f$, (c) $U_{-k_x}$ and (d) $U_{+k_x}$.
Figure 14. Representative resultant images in the t–s domain of the experimental pipe case 1: (a) $E$, (b) $E_f$, (c) $E_{-k_x}$ and (d) $E_{+k_x}$.
Figure 15. Representative resultant images in the t–s domain of the experimental pipe case 2: (a) $E$, (b) $E_f$, (c) $E_{-k_x}$ and (d) $E_{+k_x}$.
Table 1. SNR comparison of the simulation A-scan signals between $E$ and $E_f$.

              A-scan of $E$    A-scan of $E_f$
SNR (dB)      19.2             54.1
Table 2. SNR comparison of the experimental A-scan signals between $E$ and $E_f$.

                      A-scan of $E$    A-scan of $E_f$
SNR (dB)   Pipe 1     29               50
           Pipe 2     28               50.3
