Article

Seismic Random Noise Attenuation Using DARE U-Net

by Tara P. Banjade 1,2, Cong Zhou 1,2,3, Hui Chen 1,2, Hongxing Li 1,3, Juzhi Deng 2,4,*, Feng Zhou 2,5 and Rajan Adhikari 6

1 Engineering Research Center for Seismic Disaster Prevention and Engineering Geological Disaster Detection of Jiangxi Province, East China University of Technology, Nanchang 330013, China
2 School of Geophysics and Measurement-Control Technology, East China University of Technology, Nanchang 330013, China
3 Key Laboratory of Metallogenic Prediction of Nonferrous Metals and Geological Environment Monitoring, Central South University, Ministry of Education, Changsha 410083, China
4 State Key Laboratory of Nuclear Resources and Environment, East China University of Technology, Nanchang 330013, China
5 Jiangxi Hydraulic Safety Engineering Technology Research Center, Nanchang 330029, China
6 School of Mathematical Sciences, Tribhuvan University, Kirtipur 44618, Nepal
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(21), 4051; https://doi.org/10.3390/rs16214051
Submission received: 10 September 2024 / Revised: 25 October 2024 / Accepted: 27 October 2024 / Published: 30 October 2024

Abstract

Seismic data processing plays a pivotal role in extracting valuable subsurface information for various geophysical applications. However, seismic records often suffer from inherent random noise, which obscures meaningful geological features and reduces the reliability of interpretations. In recent years, deep learning methodologies have shown promising results in noise attenuation tasks on seismic data. In this research, we propose modifications to the standard U-Net structure by integrating dense and residual connections, which serve as the foundation of our approach, the dense and residual (DARE) U-Net. Dense connections enhance the receptive field and ensure that information from different scales is considered during the denoising process. Our model implements local residual connections between layers within the encoder, which allow earlier layers to connect directly with deeper layers. This promotes the flow of information, allowing the network to utilize both filtered and unfiltered input. The combined network mechanisms mitigate the spatial information loss incurred during the contraction process, so that the decoder can locate features more accurately by retaining high-resolution features, enabling precise localization in seismic image denoising. We evaluate the adapted architecture on synthetic and real data sets, calculating the peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM). The results demonstrate the effectiveness of the method.


1. Introduction

During geophysical exploration, recorded data are contaminated with random noise, which obscures the data and can lead to misinterpretation. In the scientific community, a number of mathematical models are used to attenuate both high-frequency and low-frequency noise, each with its own benefits and shortcomings. The main sources of random noise during seismic data recording are instruments, wind motion, environmental waves, etc. Similarly, surface waves, direct waves, and ghost waves are considered coherent noise [1]. Based on the nature of the contaminating noise, numerous attenuation approaches have been proposed and deployed.
Some traditional noise reduction methods are based on filtering: f–x deconvolution [2], f–x domain predictive filtering [3], Wiener filtering [4], and Kalman filtering were designed to smooth signals in the frequency domain. Other approaches use transform domains, such as the Fourier transform and wavelet-transform-based applications for seismic denoising [5,6,7,8,9], the curvelet transform [10], the contourlet transform [11], and the shearlet transform [12]. Spectral-decomposition-based methods such as empirical mode decomposition (EMD) [13], variational mode decomposition (VMD) [14,15,16], and geometric mode decomposition (GMD) [17,18] contribute to removing noise from one- and two-dimensional seismic data.
These days, many seismic exploration and analysis techniques are designed based on artificial intelligence [19,20]. Convolutional neural networks (CNNs), a class of deep learning models, show strong ability in the fields of computer vision and image/signal processing, and the feature learning efficiency achieved by a CNN on images is remarkable [21]. A successful mapping between noise-free and contaminated data, learned during training, helps to restore the original signal. Therefore, CNN-based seismic noise attenuation [22] is becoming increasingly significant. Image denoising and inpainting with deep neural networks [23], hyperspectral denoising via adversarial learning [24,25,26,27], Gaussian noise removal [28], and especially CNN-based seismic data denoising approaches [29] play a crucial role. CNN-based image-denoising methods have been quite successful; however, they have limitations. These models typically have a fixed architecture and hence cannot adapt well to various noise levels or types; they may struggle with noise patterns they were not explicitly trained on. Training and running deep CNN models [30] can be computationally intensive, making them less suitable for real-time applications or resource-constrained devices. Over-smoothing can also result in a loss of fine details and textures while reducing noise [31,32,33]; this trade-off between noise reduction and detail preservation can be challenging to balance. Existing methods can introduce new artifacts or errors into the denoised image, especially when dealing with highly noisy data. Since the convolution kernel is content independent, it cannot represent and restore different data regions equally well [34,35,36,37]. Additionally, the kernel is a small patch, which extracts local features but neglects global information.
Another architecture, the U-Net [38,39], is utilized for biomedical image processing and other image-processing tasks. A residual network [40] is used to address the degradation problem, and a dense network [41,42] reuses the feature map from each layer as input within the network, which enables more precise extraction. Such networks are well suited to seismic data [43,44], as they are computationally efficient, trainable with small data sets, and can be trained end to end.
In this paper, we propose the dense and residual (DARE) U-Net, a variation of the traditional U-Net architecture designed to improve performance in seismic data denoising. In the dense U-Net, each layer is connected to every other layer in a feed-forward manner: the output of each layer is fed as input to all subsequent layers. This dense connectivity helps information flow across different levels of abstraction, allowing for better feature reuse and gradient propagation. Additionally, we implement local residual connections between layers within the encoder, as in a residual network, which allows earlier layers to connect directly with deeper layers. This makes the flow of information from preceding to succeeding layers more efficient by allowing the network to utilize both filtered and unfiltered input, enables the reuse of features learned in earlier layers, which is beneficial for tasks where low-level features are relevant throughout the network, and allows the network to bypass certain layers if needed. These combined connections let the model learn dense–residual functions, capturing the difference between the input and the desired output. This simplifies the learning process by improving gradient flow, enabling better feature reuse, and easing the optimization of deep architectures, especially in deeper networks, by focusing on learning the residual details.

2. Methodology

Seismic noise attenuation aims to reconstruct a clean image x from noisy image y. The equation is formulated as
y(i, j) = x(i, j) + n(i, j)
where y(i, j) is the observed noisy data, x(i, j) is the noise-free original data, and n(i, j) represents the added random noise.
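The additive model above is straightforward to simulate; a minimal NumPy sketch (the Gaussian noise and the `sigma` level are illustrative choices, not specified by the paper):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def add_random_noise(x, sigma=0.1):
    """Corrupt a clean section x with zero-mean random noise: y = x + n."""
    n = sigma * rng.standard_normal(x.shape)
    return x + n, n

clean = np.zeros((256, 256))        # placeholder clean section
noisy, noise = add_random_noise(clean)
```

The denoiser's task is then to recover `clean` from `noisy` without access to `noise`.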
The parametric function M(·; Θ) can be used to restore x:
x_t = M(y; Θ)
where x_t represents the estimated signal, M is the mapping relation, and Θ = (ω, k) denotes the network parameters, the weight ω and bias k being the trainable quantities. Since the noisy data y contain important features and information about the noise-free data x, assume a parametric mapping N(·; Θ) such that N(y; Θ) ≈ n. The noise attenuation parametric model based on residual learning is then
M(y; Θ) = N(y; Θ) + y
We can solve the following optimization problem to estimate the parameters:
Θ* = argmin_Θ (1/N) ∑_{i=1}^{N} L(M(y_i; Θ), x_i) + (λ/2) ‖Θ‖²
where {(y_i, x_i)}_{i=1}^{N} is a set of training data and L(·, ·) denotes the loss function. Equation (4) is the combination of a fidelity term and a regularization term, where λ > 0 controls the trade-off between them.
The loss function is defined by
L_Θ(y, x) = (1/N) ∑_{i=1}^{N} ‖M(y_i; Θ) − x_i‖₂²
and the ADAM algorithm is used to minimize the objective function. Unlike plain stochastic gradient descent, ADAM adapts the update for each network weight individually using estimates of the first and second moments of the gradient.
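The ADAM update rule can be sketched in NumPy for a scalar parameter; the hyperparameter values below are the commonly used defaults, an assumption on our part rather than the paper's settings:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    """One ADAM update: exponential moving averages of the gradient (m)
    and squared gradient (v), bias-corrected, drive the parameter step."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)          # bias-corrected first moment
    v_hat = v / (1 - b2 ** t)          # bias-corrected second moment
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Toy stand-in for the network loss: minimize L(theta) = theta**2,
# whose gradient is 2 * theta.
theta, m, v = 5.0, 0.0, 0.0
for t in range(1, 201):
    theta, m, v = adam_step(theta, 2.0 * theta, m, v, t)
```

After a few hundred steps, `theta` settles close to the minimizer at zero.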
The architecture of the proposed DARE U-Net is shown in Figure 1. The network consists of an encoder, a bridge, and a decoder. The encoder, also called the contractive path, contracts the input, reducing the spatial resolution while capturing contextual information. It follows the typical convolutional neural network architecture: repeated convolutions with ReLU activation and a (3, 3) kernel, followed by a max-pooling operation in each layer. In the first layer of the encoder, the input image passes through a convolutional layer with 64 filters of size (3, 3) followed by a ReLU activation.
For any signal y, the ReLU is
f(y) = max(0, y) = { y, if y ≥ 0; 0, if y < 0 }
It introduces non-linearity into the network and helps mitigate the vanishing gradient problem. The ReLU output equals the input when the input is positive and is zero when the input is negative.
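The piecewise definition above amounts to a single element-wise maximum:

```python
import numpy as np

def relu(y):
    """f(y) = max(0, y): positive inputs pass through, negatives become zero."""
    return np.maximum(0.0, y)

out = relu(np.array([-2.0, 0.0, 3.0]))  # → array([0., 0., 3.])
```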
The output from the first convolution is then passed through another convolutional layer with 64 filters of size (3, 3) followed by a ReLU activation. The output from the second convolution is used as a residual connection and is concatenated as input to the corresponding layer of the decoder. The result is then passed through a max-pooling layer with a kernel size of (2, 2) and a stride of (2, 2), which down-samples the feature maps. This process is repeated for three consecutive encoder layers, with the number of filters doubling at each layer (64, 128, 256, 512). Thus, in the subsequent layers of the encoder, the following operations occur, as in the first layer:
  • Conv2D (128 filters) → Conv2D (128 filters) → Addition → Max Pooling;
  • Conv2D (256 filters) → Conv2D (256 filters) → Addition → Max Pooling;
  • Conv2D (512 filters) → Conv2D (512 filters) → Dropout (0.5).
Figure 2, Figure 3 and Figure 4 are representations of skip connections, the local residual connections within each layer of the encoder, and the residual–dense block architecture.
In this way, the input image passes through these layers, and the spatial dimensions are reduced while the feature channels are increased. Through this process, the network captures the context and semantic information at various scales, producing a feature map of size (32, 32, 512).
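This contraction bookkeeping can be traced with a small sketch. It assumes a 256 × 256 single-channel input and 'same'-padded convolutions (so only pooling changes the spatial size), which is consistent with the (32, 32, 512) feature map stated above:

```python
def encoder_shapes(h=256, w=256, filters=(64, 128, 256)):
    """Trace feature-map sizes through the contractive path: each level
    applies two 'same' convolutions (size unchanged), then pools by 2;
    the channel count doubles at every level."""
    shapes = []
    for f in filters:
        shapes.append((h, w, f))   # after the two (3, 3) convolutions
        h, w = h // 2, w // 2      # (2, 2) max pooling with stride (2, 2)
    shapes.append((h, w, 512))     # bridge: Conv2D(512) block, no pooling
    return shapes

trace = encoder_shapes()
# → [(256, 256, 64), (128, 128, 128), (64, 64, 256), (32, 32, 512)]
```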
Similarly, the decoder, also called the expansive path, up-samples the feature maps to restore the spatial dimensions of the input, using the contextual representation learned by the encoder to generate the output. In the first layer of the decoder, the output from the bridge is up-sampled using a transposed convolutional layer (UpConv2D) with 256 filters and a kernel size of (2, 2). The output is then concatenated (depth-wise) with the corresponding feature maps from the encoder via the residual connection. The bridge is the latent representation of the learned input and is the basis on which the decoder decodes the output: it transfers the learned representation of the input from the encoder to the decoder, which is why it is called the bridge.
The concatenated output is then passed through two convolutional layers with 256 filters of size (3, 3), followed by ReLU activations. This process is repeated three times consecutively for three layers of the decoder with the number of filters halving at each layer (256, 128, 64).
  • UpConv2D (128 filters) → Concatenate → Conv2D (128 filters) → Conv2D (128 filters);
  • UpConv2D (64 filters) → Concatenate → Conv2D (64 filters) → Conv2D (64 filters).
Each layer in the expansive path consists of convolution with up-sampling, which increases the spatial dimensions of the feature maps. The up-sampled feature maps are concatenated with the corresponding feature maps from the contracting path via residual connections. The final output is produced by a convolutional layer with a single filter of size (1, 1), followed by a linear activation. The residual connections from the encoder help to mitigate the spatial information loss incurred during the contraction process, so that the decoder can locate features more accurately by retaining the high-resolution features from the encoder, enabling precise localization.
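The expansive path's shape bookkeeping can be traced the same way as the encoder's, assuming a (32, 32, 512) bridge and 'same'-padded convolutions:

```python
def decoder_shapes(h=32, w=32, filters=(256, 128, 64)):
    """Trace feature-map sizes through the expansive path: each (2, 2)
    transposed convolution doubles the spatial size; the channel count
    halves at every level."""
    shapes = []
    for f in filters:
        h, w = h * 2, w * 2        # UpConv2D, kernel (2, 2), stride 2
        shapes.append((h, w, f))   # after concatenation and the two convs
    shapes.append((h, w, 1))       # final (1, 1) convolution, linear activation
    return shapes

trace = decoder_shapes()
# → [(64, 64, 256), (128, 128, 128), (256, 256, 64), (256, 256, 1)]
```

The final (256, 256, 1) map matches the spatial size of the input image, as required for denoising.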
In the dense U-Net, each layer is connected to every other layer in a feed-forward manner. This implies that the result of each layer is fed as input to all subsequent layers. This dense connectivity helps information flow across different levels of abstraction, allowing for better feature reuse and gradient propagation. These connections also help to alleviate the vanishing gradient problem, enabling deeper networks to be trained effectively and leading to an increase in the number of parameters compared to the standard U-Net. Residual U-Nets incorporate residual connections, inspired by ResNet architectures. These connections allow the model to learn residual functions, capturing the difference between the input and the desired output. This simplifies the learning process, especially in deeper networks, by focusing on learning the residual details. These connections also help in preventing the vanishing gradient problem and enable the training of very deep networks more effectively. By focusing on learning residuals, they often require fewer parameters compared to traditional U-Nets while achieving similar or better performance.

2.1. Structural Similarity Index Measure

The performance of the proposed architecture and existing models can be measured using the structural similarity index measure (SSIM). We calculate three components related to structure, luminance, and contrast changes in noisy and denoised images. This helps us to analyze and compare the performance of different models, including estimating the amount of signal loss.
Let A and B be images, and let μ_A and μ_B be their mean intensities. The luminance distortion component is given by
l(A, B) = (2 μ_A μ_B + R₁) / (μ_A² + μ_B² + R₁)
where R₁ is a regularization constant.
Similarly, the contrast distortion component is given by
c(A, B) = (2 σ_A σ_B + R₂) / (σ_A² + σ_B² + R₂)
where σ_A and σ_B are the standard deviations and R₂ is a regularization constant.
Finally, the structural distortion component is given by
s(A, B) = (σ_AB + R₃) / (σ_A σ_B + R₃)
where σ_AB is the covariance and R₃ is a regularization constant.
Now, the structural similarity index measure is given by
SSIM(A, B) = [l(A, B)]^α · [c(A, B)]^β · [s(A, B)]^γ
where α > 0, β > 0, and γ > 0 are weight coefficients. In practice, we assume α = β = γ = 1 and R₃ = R₂/2 for simplicity.
Therefore,
SSIM(A, B) = [(2 μ_A μ_B + R₁)(2 σ_AB + R₂)] / [(μ_A² + μ_B² + R₁)(σ_A² + σ_B² + R₂)]
SSIM values range between 0 and 1; values closer to 1 indicate better image restoration quality and greater similarity between the images.
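The closed form above can be computed globally over two images; a minimal NumPy sketch (the regularization constants `r1` and `r2` are illustrative values of our choosing, and practical SSIM implementations usually work over local windows rather than whole images):

```python
import numpy as np

def ssim(a, b, r1=1e-4, r2=9e-4):
    """Global SSIM with alpha = beta = gamma = 1 and R3 = R2/2, which
    collapses the three components into the single closed form above."""
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()          # sigma_A^2, sigma_B^2
    cov = ((a - mu_a) * (b - mu_b)).mean()   # sigma_AB
    num = (2 * mu_a * mu_b + r1) * (2 * cov + r2)
    den = (mu_a ** 2 + mu_b ** 2 + r1) * (var_a + var_b + r2)
    return num / den

x = np.random.default_rng(1).random((64, 64))
```

For identical images the numerator and denominator coincide, so `ssim(x, x)` is exactly 1.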

2.2. Peak Signal-to-Noise Ratio

Peak signal-to-noise ratio (PSNR) is the ratio between the maximum possible power of an image and the power of the noise that degrades the quality of that image. Mathematically, it can be expressed as
PSNR = 10 · log₁₀(MAX² / MSE)
The term ‘MAX’ is the maximum possible pixel value of the image, and ‘MSE’ is the mean squared error between the two images. PSNR is measured in decibels, and higher values indicate better quality. PSNR directly compares the denoised image to the original image, which makes it a suitable measure for denoising tasks. It is a normalized measure that accounts for the dynamic range of pixel values, providing a more consistent evaluation across different images and image formats; in contrast, the SNR does not account for this range and can give misleading results when comparing different types of images. PSNR is also often more sensitive to small changes in image quality, such as those introduced by compression artifacts or noise reduction techniques.
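The definition translates directly into code; a small NumPy sketch assuming a unit dynamic range (MAX = 1):

```python
import numpy as np

def psnr(reference, estimate, max_val=1.0):
    """PSNR in decibels between a reference image and a denoised estimate."""
    mse = np.mean((reference - estimate) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

# A uniform error of 0.01 on a unit-range image gives MSE = 1e-4, i.e. about 40 dB.
value = psnr(np.zeros((8, 8)), np.full((8, 8), 0.01))
```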

3. Experiments

We carry out a series of experiments on different seismic data sets and calculate the peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM). Figure 5 shows an example of a training data set. The noise-free training sample (Figure 5a) and the noisy training sample (Figure 5b) can be resized to the required dimensions within the network. For this experiment, we use generated seismic images. To denoise massive seismic data in practice, a patch-based method can be used: dealing with a whole seismic volume is not practical, so we split it into many patches and process each patch.
The four sets of noise-free seismic data are shown in the first column of Figure 6, and their noisy versions are shown in the second column. The PSNR value for the noisy data is 23 dB. The experimental PSNR and SSIM results obtained from all four data sets are recorded. First, seismic data of size 256 × 256 with three linear events are created. Most algorithms show better results for data with linear events, but real recorded data contain more complex geometric patterns with a curve-like nature. Hence, in our second experiment, we test synthetic data with two linear events and one curved event, as shown in the second row of Figure 6. The third data set is a magnified version of seismic data with a single linear event, which helps to test artifact visibility clearly. The fourth is real recorded seismic data with weak events and irregular seismic features. The first column is the noise-free set; the second column is the noisy version. The third, fourth, and fifth columns are the results obtained by the wavelet transform, the U-Net, and the proposed model, respectively. We use 700 images for training, 100 images for validation, and 100 images for testing. During the training stage, N = 60, and 780 patches of size 256 × 256 are created with a stride of 128. Optimization strategies, such as using the ADAM optimizer, are tested and confirmed to give better results. The total number of epochs is 80, and all experiments are carried out on an NVIDIA GTX1080Ti, with a training time of around 240 min.
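The patch extraction described above (overlapping 256 × 256 windows with a stride of 128) can be sketched as follows; the input size is an illustrative example:

```python
import numpy as np

def extract_patches(section, size=256, stride=128):
    """Split a large 2-D seismic section into overlapping size x size
    patches, as in the patch-based training scheme described above."""
    h, w = section.shape
    patches = [section[i:i + size, j:j + size]
               for i in range(0, h - size + 1, stride)
               for j in range(0, w - size + 1, stride)]
    return np.stack(patches)

batch = extract_patches(np.zeros((512, 512)))  # → shape (9, 256, 256)
```

A 512 × 512 section yields three window positions per axis (0, 128, 256), hence nine patches; denoised patches can be recombined by averaging the overlaps.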
To verify the effectiveness of the proposed method, we perform further experiments on seismic shot gather data, as shown in Figure 7. The events in the data set are visible as continuous, coherent patterns across multiple traces, and continuous horizontal or near-horizontal bands indicate reflection events from subsurface layers. PSNR and SSIM values given by the wavelet transform, the U-Net, and the proposed model are calculated.
Additionally, to verify the performance of the different algorithms, the FK spectrum is plotted and analyzed. It helps to capture the frequency information and spatial directionality at different noise levels by converting the data from the time–space domain to the frequency–wavenumber domain. Figure 8 shows the FK spectra of the results obtained by the three methods. The horizontal axis indicates the normalized wavenumber, and the vertical axis indicates the frequency. Noise-free data and noisy data are represented in Figure 8a and Figure 8b, respectively. A noise mask can be clearly observed at low frequencies of 0–20 Hz, and the region around 30–50 Hz is masked completely in Figure 8b. Figure 8c is the spectrum obtained after wavelet denoising, in which considerable noise remains at low frequencies (0–20 Hz) and some information is lost in the 30–50 Hz range, which indicates that this method cannot prevent the loss of curved-event signals during noise removal. Compared to these methods (Figure 8c,d), the proposed model removes most of the noise and preserves the signals significantly better, as shown in Figure 8e. Remaining low-frequency noise in an FK spectrum indicates that noise is still present in the corresponding data set, while loss of the high-frequency part means that some useful information has been lost.

4. Discussion

The main purpose of attenuating noise in seismic data is to enhance the useful information by removing unwanted frequencies. Balancing noise removal against the preservation of weak seismic features is very important during the process. Hence, we focus on the resolution of the image, the peak signal-to-noise ratio, and the structural similarity index measure. Preserving edges is equally important. Four sets of seismic data are shown in the first column of Figure 6, and their noisy versions are shown in the second column. The PSNR value for the noisy data is 23 dB. The experimental PSNR results obtained from the first data set by the wavelet, U-Net, and DARE U-Net methods are 28 dB, 33 dB, and 36 dB, respectively. The corresponding values for the second data set are 27.4 dB, 31 dB, and 35.5 dB. Similarly, we obtain 30.7 dB, 35.3 dB, and 37.9 dB from the third data set, and 29.5 dB, 33.5 dB, and 37.5 dB from the fourth, by applying the wavelet, U-Net, and DARE U-Net methods, respectively.
The fourth column of Figure 6 shows the denoised results given by the U-Net. The quality of the data is significantly improved, as the PSNR values are considerably higher. The fifth column represents the denoised results obtained by the DARE U-Net for the different seismic data sets. The resolution and most of the original features are restored when applying the proposed architecture. Comparatively, the proposed method has a higher PSNR, indicating better denoising and restoration. The detailed PSNR results are shown in Table 1.
Similarly, we compare the structural similarity index measure (SSIM) between the denoised results and the original image. This measurement assists in estimating the amount of signal loss and restoration of original features. The SSIM values lie between 0 and 1. The SSIM values corresponding to Figure 6 show the strength of the proposed model.
The SSIM results given by the wavelet, U-Net, and DARE U-Net methods on the first data set are 0.841, 0.891, and 0.951, respectively. The second seismic data set has SSIM values of 0.831, 0.861, and 0.901 for the three different methods. The results from the third data set are 0.863, 0.898, and 0.925. The SSIM values from the fourth data set obtained by wavelet, U-Net, and DARE U-Net methods are 0.835, 0.872, and 0.907, respectively. Details of the numerical results are shown in Table 2. Since an SSIM value closer to 1 means the two images are more similar and have better results, these values show that the proposed model restores the image with minimal loss of information, and features are well preserved.
Similarly, we attenuate noise from the seismic shot gather data, as shown in Figure 7. The noise-free data (Figure 7a) and their noisy version with a 22 dB PSNR are represented in Figure 7b. The wavelet result (Figure 7c), U-Net result (Figure 7d), and DARE U-Net result (Figure 7e) have PSNR values of 27.5 dB, 31 dB, and 33.5 dB, respectively. The image resolution given by the proposed model is high compared to the wavelet and U-Net methods, suppressing the noise significantly. Structural similarity index measures of 0.841, 0.885, and 0.912 are achieved by the wavelet, U-Net, and DARE U-Net methods, respectively. The details are given in Table 3. Finally, we apply the proposed model to a post-stack real seismic data set, which consists of 250 seismic traces and has a size of 1000 × 250, as shown in Figure 9a. The real data contain some weak features and have a complex nature. Usually, traditional methods are used to obtain denoised labels; however, this is not ideal. In a special case, the data may contain noise only in part of an area, and then the data in another area can be used as labels. Since we aim to validate the effectiveness of the proposed model, the blurriness is increased in Figure 9b by adding some arbitrary noise. The seismic features around the 400–600 ms section are less visible, and a few weak features around 800 ms disappear due to noise. Figure 9c represents the result obtained by the wavelet method, in which some residual noise still appears and some weak horizontal events are lost or broken. Figure 9d,e show the denoised outcomes achieved using the U-Net and DARE U-Net methods. It can be clearly seen that the result obtained by the proposed model has a high resolution, and masked seismic features are preserved and recovered successfully. The residual parts (Figure 10) are also collected to verify the results.
In the noise section removed by the wavelet method, shown in Figure 10a, some seismic events are visible, which means that information is not well preserved and has been lost. Figure 10b,c show the residuals of the U-Net and DARE U-Net outcomes; they contain no or very few horizontal lines, which indicates that the useful information is well preserved.

5. Conclusions

In this paper, we proposed a modification to the standard U-Net integrating dense and residual connections called the DARE U-Net for seismic data denoising. In dense U-Nets, each layer is connected to every other layer in a feed-forward manner, which means that the output of each layer is fed as input to all subsequent layers. This dense connectivity helps information flow across different levels of abstraction, allowing for better feature reuse and gradient propagation. Additionally, this model implements local residual connections between layers within the encoder layer of the network, like in a residual network. This allows earlier layers to directly connect with deeper layers and promotes the flow of information from preceding to succeeding layers more efficiently by allowing the network to utilize filtered and unfiltered input. It helps encoders preserve the important features of the input data, even in the presence of noise, and promotes the flow of information from input to output more efficiently by allowing the network to bypass certain layers if needed. Also, this approach allows the construction of much deeper networks, if required, without suffering any performance degradation. By allowing the network to bypass a layer, skip connections make our network more robust to variations in the input. Overall, DARE U-Nets combine the advantages of dense and residual connections to enhance feature learning, gradient propagation, and parameter efficiency, leading to improved performance in image-denoising tasks compared to other methods.

Author Contributions

Conceptualization, T.P.B.; methodology, T.P.B.; software, T.P.B. and R.A.; validation, J.D. and H.L.; formal analysis, T.P.B.; investigation, T.P.B.; resources, J.D., H.C. and C.Z.; data curation, T.P.B.; writing—original draft preparation, T.P.B.; writing—review and editing, C.Z., R.A., F.Z., J.D., H.L. and H.C.; visualization, T.P.B.; supervision, J.D.; project administration, J.D.; funding acquisition, J.D., C.Z., H.C. and F.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work was jointly funded by the Key Research and Development Plan project of Yunnan Province (202303AA080006), the National Natural Science Foundation of China (grant number 42264006), the Science and Technology Project of Jiangxi Province (grant numbers 2022KSG01003, 2023KSG01008, and 20204BCJL23058), Jiangxi Hydraulic Safety Engineering Technology Research Center (grant number 2022SKSG01), and the Open Project of the Jiangxi Academy of Water Sciences and Engineering (grant number 2022SKLS04).

Data Availability Statement

The original contributions presented in this study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Liu, Z.P.; Chen, X.H.; Li, J.Y.; Huang, R. Study on using radial trace transform to depress coherent noise in high-density acquired data. Oil Geophys. Prospect. 2008, 43, 321–326. [Google Scholar]
  2. Canales, L. Random noise reduction. In Proceedings of the 54th Annual International Meeting, Society of Exploration Geophysicists, Expanded Abstract, Atlanta, GA, USA, 2–6 December 1984; pp. 525–527. [Google Scholar]
  3. Harris, P.E.; White, R.E. Improving the performance of f-x prediction filtering at low signal-to-noise ratios. Geophys. Prospect. 2010, 45, 269–302. [Google Scholar] [CrossRef]
  4. Shui, P.L. Image denoising algorithm via doubly local Wiener filtering with directional windows in wavelet domain. IEEE Signal Process. Lett. 2005, 12, 681–684. [Google Scholar] [CrossRef]
  5. Donoho, D.L. Denoising by soft-thresholding. IEEE Trans. Inf. Theory 1995, 41, 613–627. [Google Scholar] [CrossRef]
  6. Daubechies, I. The wavelet transform, time-frequency localization, and signal analysis. IEEE Trans. Inf. Theory 1990, 36, 961–1005. [Google Scholar] [CrossRef]
  7. You, N.; Han, L.; Zhu, D.; Song, W. Research on image denoising in edge detection based on wavelet transform. Appl. Sci. 2023, 13, 1837. [Google Scholar] [CrossRef]
  8. Chen, G.; Li, Q.Y.; Li, D.Q.; Wu, Z.Y.; Liu, Y. Main frequency band of blast vibration signal based on wavelet packet transform. Appl. Math. Model. 2019, 74, 569–585. [Google Scholar] [CrossRef]
  9. Chen, G.; Li, K.; Liu, Y. Applicability of continuous, stationary, and discrete wavelet transforms in engineering signal processing. J. Perform. Constr. Facil. 2021, 35, 04021060. [Google Scholar] [CrossRef]
  10. Ma, J.; Plonka, G. The curvelet transform. IEEE Signal Process. Mag. 2010, 27, 118–133. [Google Scholar] [CrossRef]
  11. Do, M.N.; Vetterli, M. The contourlet transform: An efficient directional multiresolution image representation. IEEE Trans. Image Process. 2005, 14, 2091–2106. [Google Scholar]
  12. Easley, G.; Labate, D.; Lim, W.Q. Sparse directional image representations using the discrete shearlet transform. Appl. Comput. Harmon. Anal. 2008, 25, 25–46. [Google Scholar] [CrossRef]
  13. Huang, N.; Shen, Z.; Long, S.; Wu, M.; Shih, H.; Zheng, Q.; Yen, N.; Tung, C.; Liu, H. The empirical mode decomposition and the Hilbert spectrum for nonlinear and non-stationary time series analysis. Proc. R. Soc. 1998, 454, 903–995. [Google Scholar] [CrossRef]
  14. Dragomiretskiy, K.; Zosso, D. Variational mode decomposition. IEEE Trans. Signal Process. 2014, 62, 531–544. [Google Scholar] [CrossRef]
  15. Yu, S.; Ma, J. Complex variational mode decomposition for slop preserving denoising. IEEE Trans. Geosci. Remote Sens. 2017, 56, 586–597. [Google Scholar] [CrossRef]
  16. Banjade, T.P.; Yu, S.; Ma, J. Earthquake accelerogram denoising by wavelet-based variational mode decomposition. J. Seismol. 2019, 175, 649–663. [Google Scholar] [CrossRef]
  17. Yu, S.; Ma, J.; Osher, S. Geometric mode decomposition. Inverse Probl. Imaging 2018, 12, 831–852. [Google Scholar] [CrossRef]
  18. Banjade, T.P.; Zhou, C.; Chen, H.; Li, H.; Deng, J. Enhancing seismic data by edge-preserving geometrical mode decomposition. Digit. Signal Process. 2024, 148, 104442. [Google Scholar] [CrossRef]
  19. Zabihi, R.; Schaffie, M.; Nezamabadi-Pour, H.; Ranjbar, M. Artificial neural network for permeability damage prediction due to sulfate scaling. J. Petrol. Sci. Eng. 2011, 78, 575–581. [Google Scholar] [CrossRef]
  20. Zabihi, R.; Mowla, D.; Karami, H.R. Artificial intelligence approach to predict drag reduction in crude oil pipelines. J. Petrol. Sci. Eng. 2019, 178, 586–593. [Google Scholar] [CrossRef]
  21. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
  22. Li, Y.; Wang, H.; Dong, X.T. The denoising of desert seismic data based on cycle-GAN with unpaired data training. IEEE Geosci. Remote Sens. Lett. 2021, 18, 2016–2020. [Google Scholar] [CrossRef]
  23. Xie, J.; Xu, L.; Chen, E. Image denoising and inpainting with deep neural network. Adv. Neural Inf. Process. Syst. 2012, 25, 350–358. [Google Scholar]
  24. Zhang, J.; Cai, Z.; Chen, F.; Zeng, D. Hyperspectral image denoising via adversarial learning. Remote Sens. 2022, 14, 1790. [Google Scholar] [CrossRef]
  25. Chang, Y.L.; Tan, T.H.; Lee, W.H.; Chang, L.; Chen, Y.N.; Fan, K.C.; Alkhaleefah, M. Consolidated convolutional neural network for hyperspectral image classification. Remote Sens. 2022, 14, 1571. [Google Scholar] [CrossRef]
  26. Qin, J.; Zhao, H.; Liu, B. Self-supervised denoising for real satellite hyperspectral imagery. Remote Sens. 2022, 14, 3083. [Google Scholar] [CrossRef]
  27. Guo, M.; Xiong, F.; Zhao, B.; Huang, Y.; Xie, Z.; Wu, L.; Chen, X.; Zhang, J. TDEGAN: A texture-detail-enhanced dense generative adversarial network for remote sensing image super-resolution. Remote Sens. 2024, 16, 2312. [Google Scholar] [CrossRef]
  28. Zhang, K.; Zuo, W.; Chen, Y.; Meng, D.; Zhang, L. Beyond the Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 2017, 26, 3142–3155. [Google Scholar] [CrossRef]
  29. Yu, S.; Ma, J.; Wang, W. Deep learning tutorial for denoising. arXiv 2018, arXiv:1810.11614. [Google Scholar]
  30. Zhao, H.; Bai, T.; Wang, Z. A natural images pre-trained deep learning method for seismic random noise attenuation. Remote Sens. 2022, 14, 263. [Google Scholar] [CrossRef]
  31. Zhu, W.; Mousavi, S.M.; Beroza, G.C. Seismic Signal Denoising and Decomposition Using Deep Neural Networks. IEEE Trans. Geosci. Remote Sens. 2019, 57, 9476–9488. [Google Scholar] [CrossRef]
  32. Dong, X.; Wang, H.; Zhong, T.; Li, Y. An effective denoising network for land prestack seismic data. J. Appl. Geophys. 2022, 199, 104558. [Google Scholar] [CrossRef]
  33. Kaur, H.; Fomel, S.; Pham, N. Seismic ground-roll noise attenuation using deep learning. Geophys. Prospect. 2020, 68, 2064–2077. [Google Scholar] [CrossRef]
  34. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An image is worth 16 × 16 words: Transformers for image recognition at scale. arXiv 2020, arXiv:2010.11929. [Google Scholar]
  35. Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; Guo, B. Swin Transformer: Hierarchical Vision Transformer using Shifted Windows. arXiv 2021, arXiv:2103.14030v2. [Google Scholar]
  36. Liang, J.; Cao, J.; Sun, G.; Zhang, K.; Van Gool, L.; Timofte, R. SwinIR: Image Restoration Using Swin Transformer. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), Montreal, BC, Canada, 11–17 October 2021; pp. 1833–1844. [Google Scholar]
  37. Jain, V.; Seung, S. Natural image denoising with convolutional networks. Adv. Neural Inf. Process. Syst. 2009, 21, 769–776. [Google Scholar]
  38. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention, MICCAI; Springer: Cham, Switzerland, 2015; pp. 234–241. [Google Scholar]
  39. Guo, M.; Liu, H.; Xu, Y.; Huang, Y. Building extraction-based U-net with an attention block and multiple losses. Remote Sens. 2020, 12, 1400. [Google Scholar] [CrossRef]
  40. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  41. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 2261–2269. [Google Scholar]
  42. Guo, L.; Luo, R.; Li, X.; Zhou, Y.; Juanjuan, T.; Lei, C. Seismic random noise removal based on multiscale convolution and densely connected network for noise level evaluation. IEEE Access 2022, 10, 13911–13925. [Google Scholar] [CrossRef]
  43. Liu, D.; Deng, Z.; Wang, C.; Wang, X.; Chen, W. An unsupervised deep learning method for denoising prestack random noise. IEEE Geosci. Remote Sens. Lett. 2022, 19, 7500205. [Google Scholar] [CrossRef]
  44. Li, J.; Wu, X.; Hu, Z. Deep learning for simultaneous seismic image super-resolution and denoising. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5901611. [Google Scholar] [CrossRef]
Figure 1. DARE U-Net architecture.
Figure 2. Residual connection.
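The residual connection in Figure 2 follows the standard identity-shortcut form y = F(x) + x. A minimal sketch (the `layer_fn` argument is a hypothetical stand-in for the block's convolutional layers, which the figure specifies graphically):

```python
import numpy as np

def residual_connection(x, layer_fn):
    """Identity shortcut: add the block input back onto the block output."""
    return layer_fn(x) + x
```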
Figure 3. Local residual connection within each layer of an encoder.
Figure 4. Structure of residual dense block.
Figure 5. A sample of the training data. (a) Noise-free data. (b) Noisy data.
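Noisy/clean training pairs like the one in Figure 5 are typically built by adding scaled Gaussian random noise to a clean section. The sketch below illustrates this in a generic way; the noise standard deviation and seed are assumptions, not the paper's actual training settings:

```python
import numpy as np

def make_noisy_pair(clean, noise_std=0.1, seed=0):
    """Return a (clean, noisy) pair with additive Gaussian noise
    scaled relative to the clean section's peak amplitude."""
    rng = np.random.default_rng(seed)
    noise = noise_std * np.max(np.abs(clean)) * rng.standard_normal(clean.shape)
    return clean, clean + noise
```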
Figure 6. Test on four sets of seismic data. First to fifth column: noise-free data, noisy data, denoised by wavelet, U-Net, and DARE U-Net.
Figure 7. (a) Noise-free data. (b) Noisy data. (c) Denoised by wavelet. (d) Denoised by U-Net. (e) Denoised by DARE U-Net.
Figure 8. FK spectrum comparisons. (a) Noise-free data. (b) Noisy data. (c) Denoised by wavelet. (d) Denoised by U-Net. (e) Denoised by DARE U-Net.
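The f-k comparison in Figure 8 can be reproduced with a 2D FFT of the seismic section. This is a generic sketch, not the paper's implementation; `dt` and `dx` are assumed time- and trace-sampling intervals:

```python
import numpy as np

def fk_spectrum(section, dt, dx):
    """Amplitude f-k spectrum of a 2D seismic section (time samples x traces)."""
    nt, nx = section.shape
    # 2D FFT over time and space, shifted so zero frequency/wavenumber is centered
    spec = np.fft.fftshift(np.abs(np.fft.fft2(section)))
    f = np.fft.fftshift(np.fft.fftfreq(nt, d=dt))  # temporal frequency axis (Hz)
    k = np.fft.fftshift(np.fft.fftfreq(nx, d=dx))  # spatial wavenumber axis
    return f, k, spec
```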
Figure 9. Real data test. (a) Noise-free data. (b) Noisy data. (c) Denoised by wavelet. (d) Denoised by U-Net. (e) Denoised by DARE U-Net.
Figure 10. Residual section of denoised real data. (a) Wavelet. (b) U-Net. (c) DARE U-Net.
Table 1. Peak signal-to-noise ratio (PSNR) obtained by different methods on four sets of seismic data, corresponding to Figure 6.
Data Number    Wavelet    U-Net    DARE U-Net
1              28         33       36
2              27.4       31       35.5
3              30.7       35.2     37.9
4              29.5       33.5     37.5
Table 2. Structural similarity index measure (SSIM) of different sets of seismic data, corresponding to Figure 6.
Data Number    Wavelet    U-Net    DARE U-Net
1              0.841      0.891    0.915
2              0.831      0.861    0.901
3              0.863      0.898    0.925
4              0.835      0.872    0.907
Table 3. Peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) of data, corresponding to Figure 7.
Test/Method    Wavelet    U-Net    DARE U-Net
PSNR           27.5       31       33.5
SSIM           0.841      0.885    0.912
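The PSNR and SSIM scores reported in Tables 1-3 follow standard definitions. The sketch below is an assumed implementation (the paper does not state its peak value or SSIM window): PSNR uses the clean section's peak amplitude, and SSIM is computed in its global, single-window form with the usual constants C1 = (0.01L)² and C2 = (0.03L)², whereas library implementations average SSIM over local windows:

```python
import numpy as np

def psnr(clean, denoised):
    """Peak signal-to-noise ratio in dB, peak taken from the clean section."""
    mse = np.mean((clean - denoised) ** 2)
    peak = np.max(np.abs(clean))
    return 10.0 * np.log10(peak ** 2 / mse)

def ssim_global(x, y, data_range=None):
    """Single-window (global) SSIM between two sections."""
    if data_range is None:
        data_range = x.max() - x.min()
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = np.mean((x - mx) * (y - my))
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```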
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
