Article

Single-Image Simultaneous Destriping and Denoising: Double Low-Rank Property

1 Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
2 University of Chinese Academy of Sciences, Beijing 100049, China
3 Key Laboratory of Space-Based Dynamic & Rapid Optical Imaging Technology, Chinese Academy of Sciences, Changchun 130033, China
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(24), 5710; https://doi.org/10.3390/rs15245710
Submission received: 29 September 2023 / Revised: 7 December 2023 / Accepted: 7 December 2023 / Published: 13 December 2023
(This article belongs to the Special Issue Computer Vision and Image Processing in Remote Sensing)

Abstract:
When a remote sensing camera works in push-broom mode, the obtained image usually contains significant stripe noise and random noise due to differences in detector response and environmental factors. Traditional approaches typically treat these as two independent problems and process the image sequentially, which not only increases the risk of information loss and structural damage but also allows the two noise types to influence each other during processing. To overcome these drawbacks, this paper leverages the double low-rank characteristics in the underlying priors of degraded images and presents a novel approach that addresses the destriping and denoising tasks simultaneously. We exploit the fact that both can be treated as inverse problems and place them in the same optimization framework, designing an alternating direction method of multipliers (ADMM) strategy to solve them and thereby achieving the synchronous removal of stripe noise and random noise. Compared with traditional approaches, synchronous denoising can more accurately estimate the distribution characteristics of the noise, better utilize the original information of the image, and achieve better destriping and denoising results. To assess the efficacy of the proposed algorithm, extensive simulations and experiments were conducted. The results show that, compared with state-of-the-art algorithms, the proposed method suppresses random noise more effectively, achieves better synchronous denoising results, and exhibits stronger robustness.

1. Introduction

Due to factors such as imaging circuits and the environment, random noise widely exists in remote sensing images. Additionally, owing to limitations in manufacturing technology, existing photoelectric detectors often exhibit variations in response within the same light field, which results in the inevitable occurrence of stripe noise during the imaging process. Taking the entire process into consideration, we can draw the following conclusion: remote sensing image degradation is simultaneously influenced by both stripe noise and random noise (see Figure 1). Unlike random noise, stripe noise is a typical structured noise that exhibits obvious structural and directional characteristics. Therefore, the combination of these two types of noise affects both the grayscale information and the structure of the image.
To tackle the previously mentioned concerns, researchers have proposed numerous algorithms for image denoising [1,2,3,4,5,6,7,8,9,10,11,12,13,14] and destriping [15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32] over the past decade, achieving satisfactory results. However, most of these algorithms are only applicable to scenarios where random noise or stripe noise exists separately, requiring multiple processing steps to obtain the final image. This strategy overlooks the mutual influence among noise components while increasing the risks of information loss and structural damage to the image. In contrast, synchronous denoising schemes can effectively avoid these issues, providing a more accurate estimation of the distribution and structural characteristics of different types of noise. This holds significant value for the subsequent applications of image data. For instance, in the context of monitoring natural disasters as discussed in [33], incorporating more information about the image structure enables a more precise assessment of changes in mountainous areas and on water surfaces. This contributes to furnishing more reliable raw information for monitoring natural disasters, greatly enhancing the accuracy of early warnings. Simultaneously, it allows for a more accurate evaluation of destructiveness. Furthermore, in tasks related to image processing in [34], a more precise estimation of different types of noise can significantly reduce their impact, leading to superior processing results.

Related Work

Researchers have proposed a series of approaches to address the challenge of simultaneous destriping and denoising. In 2013, after noticing the mixture of stripe noise and random noise in images, Chang et al. proposed a joint model that combined variation and image sparse representation to simultaneously remove both stripe noise and random noise [35]. This approach cleverly exploited the property that random noise cannot be sparsely encoded and successfully achieved the purpose of synchronous denoising. After that, Liu et al. proposed an approach in which the unidirectional gradient matrix was considered as a sparse prior; by applying the $l_0$ norm as a constraint, they effectively removed stripe noise while transforming the image into the wavelet domain for denoising purposes [36]. In addition, the low-rank-based single-image decomposition model (LRSID) proposed by Chang et al. can suppress random noise while removing stripe noise, and to a certain extent, it also achieves the purpose of simultaneous denoising [25]. In 2017, Kuang et al. used the SNRCNN network to remove optical noise in infrared images [28]. To achieve better results, Huang et al. proposed two different approaches in 2019 and 2020, respectively. In their 2019 work, they transferred relevant studies on CNN denoising to the field of synchronous denoising, combining the unidirectional variation model with CNN denoising networks to construct the UV-DCNN network [37]. In 2020, they exploited the advantages of analysis sparse representation and synthesis sparse representation and proposed a joint analysis and weighted synthesis sparsity (JAWS) model [38]. Also in 2020, Chang et al. designed a dual-stream CNN structure called TSWEU after comprehensively considering various stripe noise scenarios [39]. The TSWEU network not only modeled the stripe and image components but also incorporated a wavelet transform denoising module. In 2023, Song et al. employed the maximum a posteriori (MAP) estimation theory to model the synchronous denoising problem and proposed the stripe estimation and image denoising (SEID) algorithm [40], which approximates the conditional expectation of the image using a modified NLM algorithm. The SEID algorithm achieved significant improvements in synchronous denoising, demonstrating better denoising performance.
The aforementioned research can be broadly categorized into two approaches. The first incorporates relevant theories from traditional image denoising techniques and utilizes MAP estimation to reconstruct the underlying image from the degraded input. The other relies on deep learning, using an end-to-end network to simultaneously perform the denoising and destriping tasks. These methods can achieve the goal of synchronous denoising, but some issues remain in practical applications. For example, in the model proposed by Chang et al. in 2013 [35], it can be challenging to achieve stable processing results due to the sensitivity of sparse dictionaries to the specific characteristics of the image dataset. The models based on CNN networks (such as SNRCNN, UV-DCNN, and TSWEU) face significant domain adaptation issues and exhibit noticeable performance loss when dealing with different images. In the SEID model proposed by Song et al. [40], the modified NLM algorithm discards the neighborhood windows in the same column as the target pixels, resulting in blurred edges in the image (see Figure 2), especially for structures oriented in the same direction as the stripe noise (see Figure 2b).
Considering the challenges of the previous approaches, we propose a synchronized denoising model based on double low-rank matrix recovery. The low-rank characteristics of the image prior and the stripe noise are used to recover two low-rank matrices simultaneously, realizing the synchronous decoupling of the underlying priors. Additionally, we design an ADMM strategy to approximate the optimal solution of the model. The experimental results, on mixed-noise removal in both simulated and real images, demonstrate that the proposed model outperforms state-of-the-art models in terms of processing effectiveness, robustness, and applicability. The main ideas and contributions of this paper can be summarized as follows:
(1)
We propose a synchronous denoising model based on double low-rank matrix recovery by capitalizing on the full potential of the low-rank characteristics exhibited by both image prior and stripe noise.
(2)
By employing the research approach of image decomposition, this paper simultaneously optimizes the solutions for all underlying priors within a unified framework, achieving the goal of synchronous denoising.
(3)
To solve the proposed model, we devise an effective ADMM strategy and achieve excellent processing results.
The subsequent content is organized as follows: Section 2 provides a comprehensive introduction to the proposed synchronous denoising model, outlining its key components and methodology in detail. In Section 3, we meticulously design simulations and experiments to assess and validate the effectiveness and robustness of the proposed model. Section 4 discusses the experimental results and addresses the determination of important parameters. Finally, we conclude the paper and discuss future research in Section 5.

2. Simultaneous Destriping and Denoising

In this section, we provide a detailed introduction to the proposed double low-rank simultaneous destriping and denoising model (DLRSDD). Firstly, the degradation model of the image is explained. Then, based on the underlying prior information of the image, we construct the synchronous denoising model. Finally, we design a reasonable ADMM strategy to solve the model.

2.1. Degradation Model

Remote sensing images usually contain two types of noise: additive noise and multiplicative noise [41]. However, by applying logarithmic operations, multiplicative noise can be converted into additive noise [42]. As a result, the noise in this paper is considered as additive components. Therefore, we represent the degradation model of the image as follows:
F = U + S + N \quad (1)
where F, U, S, and N represent the degraded image, clean image, stripe noise, and random noise, respectively.
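The degradation model of Equation (1) is easy to simulate. Below is a minimal NumPy sketch; the function and parameter names are ours, loosely following the simulation settings of Section 3.1, not the authors' code:

```python
import numpy as np

def degrade(U, r_str=0.5, m_str=10.0, n_sig=5.0, seed=0):
    """Simulate the degradation model F = U + S + N of Eq. (1).

    A fraction r_str of the columns receives a constant offset drawn from
    [-m_str, m_str] (column-wise stripe noise S); N is zero-mean Gaussian
    noise with standard deviation n_sig.
    """
    rng = np.random.default_rng(seed)
    h, w = U.shape
    S = np.zeros((h, w))
    cols = rng.choice(w, size=int(r_str * w), replace=False)
    S[:, cols] = rng.uniform(-m_str, m_str, size=len(cols))  # one offset per striped column
    N = rng.normal(0.0, n_sig, size=(h, w))
    return U + S + N, S, N

U = np.full((64, 64), 128.0)  # toy flat image
F, S, N = degrade(U)
```

Because every striped column carries a constant offset, the simulated S is exactly rank one, which is the low-rank property the model exploits.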

2.2. The DLRSDD Model

Estimating any of the underlying priors from the degraded image F is a typical ill-posed inverse problem. Previous research has mainly focused on separately recovering the clean image U, completely ignoring the other underlying priors, which limits the performance of the model. To overcome this limitation, we propose an approach to simultaneously solve three underlying priors, aiming to achieve better denoising results.
In order to approximate the optimal solution to the ill-posed inverse problem, we need to add regularization terms to the underlying priors. According to [25], we represent the image synchronous denoising model as follows:
\arg\min_{U,S,N}\ \frac{1}{2}\|F - U - S - N\|_F^2 + \lambda R(U) + \gamma R(S) + \tau R(N) \quad (2)
The terms in Equation (2) are defined as follows:
  • $\frac{1}{2}\|F - U - S - N\|_F^2$: data fidelity term.
  • $R(U)$: regularization term for the clean image.
  • $R(S)$: regularization term for the stripe noise.
  • $R(N)$: regularization term for the random noise.
  • $\lambda$, $\gamma$, $\tau$: regularization parameters.
To preserve the texture information and structural features of the image, we employ the unidirectional total variation model [43]. Therefore, the regularization term $\lambda R(U)$ can be expressed as:
\lambda R(U) = \lambda_1 \|\nabla_x U\|_1 + \lambda_2 \|\nabla_y U\|_1 \quad (3)
where $\nabla_x$ and $\nabla_y$ represent the first-order derivative operators in the x and y directions, respectively.
To address the stripe noise, we leverage the overall low-rank attributes of the stripe noise and the sparse properties of its gradient matrix in the stripe direction (in this paper, the y-direction) to impose constraints [44]. Therefore, $\gamma R(S)$ can be expressed as:
\gamma R(S) = \gamma_1 \|S\|_* + \gamma_2 \|\nabla_y S\|_0 \quad (4)
In addition, we indirectly construct the regularization term for random noise. The information in a clean image usually has high correlation, so we can use low-rank clustering to restore the low-rank components and thereby denoise the image. Similarly, the stripe noise is usually a low-rank matrix, which also means that its information is highly correlated. Therefore, for the mixed component of the clean image and stripe noise, we can likewise apply low-rank clustering to impose a low-rank constraint and remove the random noise from the mixed component. Here, weighted nuclear norm minimization is used to impose the low-rank constraint [8]. According to Equation (1), we consider this as the regularization term for random noise, and $\tau R(N)$ can be expressed as:
\tau R(N) = \tau \|F - N\|_{w,*} \quad (5)
The final representation of the synchronous denoising model we built can be expressed as:
\arg\min_{U,S,N}\ \frac{1}{2}\|F - U - S - N\|_F^2 + \lambda_1\|\nabla_x U\|_1 + \lambda_2\|\nabla_y U\|_1 + \gamma_1\|S\|_* + \gamma_2\|\nabla_y S\|_0 + \tau\|F - N\|_{w,*} \quad (6)

2.3. ADMM Optimization

In order to obtain the underlying priors in the model, we employ the ADMM to approximate the optimal solution of Equation (6). We decompose it into the following three independent subproblems:
\arg\min_{U}\ \frac{1}{2}\|F - U - S - N\|_F^2 + \lambda_1\|\nabla_x U\|_1 + \lambda_2\|\nabla_y U\|_1 \quad (7)
\arg\min_{S}\ \frac{1}{2}\|F - U - S - N\|_F^2 + \gamma_1\|S\|_* + \gamma_2\|\nabla_y S\|_0 \quad (8)
\arg\min_{N}\ \frac{1}{2}\|F - U - S - N\|_F^2 + \tau\|F - N\|_{w,*} \quad (9)
Equations (7)–(9) represent the subproblems regarding image U, stripe noise S, and random noise N, respectively.

2.3.1. Solution of Image U

To obtain the solution of Equation (7), we let $X = \nabla_x U$ and $Y = \nabla_y U$. It is then transformed into the following constrained optimization problem:
\arg\min_{U,X,Y}\ \frac{1}{2}\|F - U - S - N\|_F^2 + \lambda_1\|X\|_1 + \lambda_2\|Y\|_1 \quad (10)
\text{subject to}\ X = \nabla_x U,\ Y = \nabla_y U
Based on [45,46], the representation of the augmented Lagrangian equation corresponding to Equation (10) is as follows:
\arg\min_{U,X,Y}\ \frac{1}{2}\|F - U - S - N\|_F^2 + \lambda_1\|X\|_1 + \lambda_2\|Y\|_1 + U_L + U_\alpha \quad (11)
where
U_L = \langle L_1,\ X - \nabla_x U\rangle + \langle L_2,\ Y - \nabla_y U\rangle
U_\alpha = \frac{\alpha_1}{2}\|X - \nabla_x U\|_F^2 + \frac{\alpha_2}{2}\|Y - \nabla_y U\|_F^2
This leads to three subproblems:
\arg\min_{X}\ \lambda_1\|X\|_1 + \langle L_1,\ X - \nabla_x U\rangle + \frac{\alpha_1}{2}\|X - \nabla_x U\|_F^2 \quad (12)
\arg\min_{Y}\ \lambda_2\|Y\|_1 + \langle L_2,\ Y - \nabla_y U\rangle + \frac{\alpha_2}{2}\|Y - \nabla_y U\|_F^2 \quad (13)
\arg\min_{U}\ \frac{1}{2}\|F - U - S - N\|_F^2 + U_L + U_\alpha \quad (14)
The solutions of Equations (12) and (13) can be obtained by soft-threshold shrinkage [47] as follows:
X^{k+1} = \mathrm{soft\_shrink}\!\left(\nabla_x U^k - \frac{L_1^k}{\alpha_1},\ \frac{\lambda_1}{\alpha_1}\right) \quad (15)
Y^{k+1} = \mathrm{soft\_shrink}\!\left(\nabla_y U^k - \frac{L_2^k}{\alpha_2},\ \frac{\lambda_2}{\alpha_2}\right) \quad (16)
where
\mathrm{soft\_shrink}(L,\ \xi) = \frac{L}{|L|}\cdot\max(|L| - \xi,\ 0) \quad (17)
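The soft-threshold operator used in the X and Y updates has a one-line NumPy form; this is a minimal sketch with our own naming:

```python
import numpy as np

def soft_shrink(L, xi):
    """Element-wise soft-threshold shrinkage: sign(L) * max(|L| - xi, 0)."""
    return np.sign(L) * np.maximum(np.abs(L) - xi, 0.0)

x = np.array([-3.0, -0.5, 0.0, 0.5, 3.0])
y = soft_shrink(x, 1.0)  # entries inside [-1, 1] collapse to 0
```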
Equation (14) is a typical quadratic optimization problem. By setting its first derivative to zero and applying a two-dimensional Fourier transform, the solution can be obtained [48].
U^{k+1} = \mathcal{F}^{-1}\!\left[\frac{\mathcal{F}(F - S^k - N^k) + \mathcal{F}\!\left(\alpha_1 \nabla_x^T\!\left(X^{k+1} + \frac{L_1^k}{\alpha_1}\right)\right) + \mathcal{F}\!\left(\alpha_2 \nabla_y^T\!\left(Y^{k+1} + \frac{L_2^k}{\alpha_2}\right)\right)}{\mathcal{F}(1 + \alpha_1 \nabla_x^T \nabla_x + \alpha_2 \nabla_y^T \nabla_y)}\right] \quad (18)
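Equation (18) evaluates the U update entirely in the Fourier domain. One common way to realize the derivative terms is to take the FFT of small difference kernels under periodic boundary conditions; the sketch below follows that convention (the `psf2otf`-style helper, the variable names, and the boundary handling are our assumptions, not the authors' implementation):

```python
import numpy as np

def psf2otf(kernel, shape):
    """Pad a small kernel to `shape` and FFT it (periodic boundaries assumed)."""
    pad = np.zeros(shape)
    pad[:kernel.shape[0], :kernel.shape[1]] = kernel
    for axis, k in enumerate(kernel.shape):
        pad = np.roll(pad, -(k // 2), axis=axis)  # center kernel at the origin
    return np.fft.fft2(pad)

def update_U(F_img, S, N, X, Y, L1, L2, a1, a2):
    """Closed-form U update in the spirit of Eq. (18).

    conj(D) plays the role of the transposed derivative operator, and
    |D|^2 the role of the Laplacian-like denominator terms.
    """
    dx = np.array([[1.0, -1.0]])    # x-direction forward difference
    dy = np.array([[1.0], [-1.0]])  # y-direction forward difference
    Dx = psf2otf(dx, F_img.shape)
    Dy = psf2otf(dy, F_img.shape)
    num = (np.fft.fft2(F_img - S - N)
           + a1 * np.conj(Dx) * np.fft.fft2(X + L1 / a1)
           + a2 * np.conj(Dy) * np.fft.fft2(Y + L2 / a2))
    den = 1.0 + a1 * np.abs(Dx) ** 2 + a2 * np.abs(Dy) ** 2
    return np.real(np.fft.ifft2(num / den))

F_img = np.full((8, 8), 5.0)
Z = np.zeros((8, 8))
U_new = update_U(F_img, Z, Z, Z, Z, Z, Z, 1.0, 1.0)  # a flat image stays flat
```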

2.3.2. Solution of Stripe Noise S

For the solution of Equation (8), we make the variable substitutions $P = S$ and $Q = \nabla_y S$, turning it into the following constrained optimization problem:
\arg\min_{S,P,Q}\ \frac{1}{2}\|F - U - S - N\|_F^2 + \gamma_1\|P\|_* + \gamma_2\|Q\|_0 \quad (19)
\text{subject to}\ P = S,\ Q = \nabla_y S
The representation of the augmented Lagrangian equation corresponding to Equation (19) is as follows:
\arg\min_{S,P,Q}\ \frac{1}{2}\|F - U - S - N\|_F^2 + \gamma_1\|P\|_* + \gamma_2\|Q\|_0 + S_L + S_\beta \quad (20)
where
S_L = \langle L_3,\ P - S\rangle + \langle L_4,\ Q - \nabla_y S\rangle
S_\beta = \frac{\beta_1}{2}\|P - S\|_F^2 + \frac{\beta_2}{2}\|Q - \nabla_y S\|_F^2
This leads to three subproblems:
\arg\min_{P}\ \gamma_1\|P\|_* + \langle L_3,\ P - S\rangle + \frac{\beta_1}{2}\|P - S\|_F^2 \quad (21)
\arg\min_{Q}\ \gamma_2\|Q\|_0 + \langle L_4,\ Q - \nabla_y S\rangle + \frac{\beta_2}{2}\|Q - \nabla_y S\|_F^2 \quad (22)
\arg\min_{S}\ \frac{1}{2}\|F - U - S - N\|_F^2 + S_L + S_\beta \quad (23)
To solve Equation (21) using singular-value soft-threshold shrinkage, one can follow the procedure outlined in reference [49].
P^{k+1} = K\,\mathrm{soft\_shrink}(\Sigma,\ \gamma_1)\,V^T \quad (24)
where $F - U^{k+1} - N^k = K\Sigma V^T$ is the singular value decomposition of $F - U^{k+1} - N^k$, and $\Sigma_{ii}$ is the $i$th diagonal element of the singular value matrix $\Sigma$.
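The singular-value soft-threshold shrinkage used for Equation (21) applies the scalar soft threshold to each singular value. A minimal NumPy sketch (our naming; the closed form it solves is the standard nuclear-norm proximal step):

```python
import numpy as np

def svt(A, tau):
    """Singular-value soft-thresholding: with A = K @ diag(sig) @ Vt,
    replace each singular value by max(sig - tau, 0).
    Solves min_P tau*||P||_* + 0.5*||P - A||_F^2."""
    K, sig, Vt = np.linalg.svd(A, full_matrices=False)
    return (K * np.maximum(sig - tau, 0.0)) @ Vt

A = np.outer(np.ones(4), [4.0, 0.0, 0.0, 0.0])  # rank-one, stripe-like matrix
P = svt(A, 1.0)  # its single singular value 8 shrinks to 7
```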
Equation (22) can be solved by a hard-threshold shrinkage [50,51] as follows.
Q^{k+1} = \mathrm{hard\_shrink}\!\left(\nabla_y S^k - \frac{L_4^k}{\beta_2},\ \sqrt{\frac{2\gamma_2}{\beta_2}}\right) \quad (25)
where
\mathrm{hard\_shrink}(\theta,\ T) = \begin{cases} \theta, & |\theta| \ge T \\ 0, & |\theta| < T \end{cases} \quad (26)
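Unlike the soft threshold, the hard threshold keeps surviving entries unchanged, which is what the l0 constraint on the stripe gradient requires. A minimal sketch (our naming):

```python
import numpy as np

def hard_shrink(theta, T):
    """Hard-threshold shrinkage: keep entries whose magnitude reaches T, zero the rest."""
    return np.where(np.abs(theta) >= T, theta, 0.0)

x = np.array([-3.0, -0.5, 0.0, 0.5, 3.0])
y = hard_shrink(x, 1.0)  # survivors keep their original value
```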
Stripe noise S can be solved from Equation (23):
S^{k+1} = \mathcal{F}^{-1}\!\left[\frac{\mathcal{F}(F - U^{k+1} - N^k + L_3 + \beta_1 P^{k+1}) + \mathcal{F}\!\left(\beta_2 \nabla_y^T\!\left(Q^{k+1} + \frac{L_4}{\beta_2}\right)\right)}{\mathcal{F}(1 + \beta_1 + \beta_2 \nabla_y^T \nabla_y)}\right] \quad (27)

2.3.3. Solution of Random Noise N

For Equation (9), we set $M = F - N$; the augmented Lagrangian can then be expressed as:
\arg\min_{N,M}\ \frac{1}{2}\|F - U - S - N\|_F^2 + \tau\|M\|_{w,*} + N_L + N_\mu \quad (28)
where
N_L = \langle L_5,\ M - (F - N)\rangle
N_\mu = \frac{\mu}{2}\|M - (F - N)\|_F^2
This leads to two subproblems:
\arg\min_{M}\ \tau\|M\|_{w,*} + N_L + N_\mu \quad (29)
\arg\min_{N}\ \frac{1}{2}\|F - U - S - N\|_F^2 + N_L + N_\mu \quad (30)
When we treat $F - N$ as a whole, Equation (29) represents a typical image denoising process using weighted nuclear norm minimization (WNNM) for low-rank matrix recovery. Therefore, according to [8], we have:
M^{k+1} = K\,S_w(\Sigma)\,V^T \quad (31)
where
S_w(\Sigma)_{ii} = \max(\Sigma_{ii} - w_{ii},\ 0) \quad (32)
where $F - N^k = K\Sigma V^T$ is the singular value decomposition of the matrix $F - N^k$, and $w$ is the weight matrix.
For the estimation of the weight matrix w, we followed the strategy in [8]:
w_i = \frac{c\sqrt{n}}{\sigma_i(M_j) + \varepsilon} \quad (33)
\sigma_i(M_j) = \sqrt{\max\!\left(\sigma_i^2\big((F - N)_j\big) - n\sigma_n^2,\ 0\right)} \quad (34)
where $c$ is a constant, $n$ represents the number of clusters of similar image patches, $\sigma_n^2$ represents the noise variance, and $\varepsilon = 10^{-16}$ is used to avoid division by zero. $\sigma_i(M_j)$ and $\sigma_i\big((F - N)_j\big)$ denote the $i$th singular value of the $j$th low-rank clustered matrix for $M$ and $F - N$, respectively.
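The weighted shrinkage and weight estimation above can be sketched as follows. Where the extracted text is ambiguous, we follow the standard WNNM choices from [8]: the constant c is taken as 2√2 and the clean singular values are estimated with a square root; both are assumptions, not necessarily the authors' exact settings.

```python
import numpy as np

def wnnm_shrink(A, w):
    """Weighted singular-value shrinkage S_w: sigma_i -> max(sigma_i - w_i, 0)."""
    K, sig, Vt = np.linalg.svd(A, full_matrices=False)
    return (K * np.maximum(sig - w, 0.0)) @ Vt

def wnnm_weights(sig_noisy, n, sigma_n, c=2.0 * np.sqrt(2.0), eps=1e-16):
    """Weights w_i = c*sqrt(n) / (sigma_i + eps), with the clean singular
    values estimated as sqrt(max(sigma_noisy^2 - n*sigma_n^2, 0)).
    c = 2*sqrt(2) is the common WNNM setting (an assumption here)."""
    sig_clean = np.sqrt(np.maximum(sig_noisy ** 2 - n * sigma_n ** 2, 0.0))
    return c * np.sqrt(n) / (sig_clean + eps)

sig = np.array([10.0, 1.0])            # noisy singular values, descending
w = wnnm_weights(sig, n=4, sigma_n=0.1)
M_hat = wnnm_shrink(np.diag([5.0, 1.0]), np.array([0.5, 0.5]))
```

Large singular values (dominant structure) receive small weights and are barely shrunk, while small, noise-dominated ones receive large weights and are suppressed.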
Then, we obtain random noise N from Equation (30):
N^{k+1} = \mathcal{F}^{-1}\!\left[\frac{\mathcal{F}\big(F - U^{k+1} - S^{k+1} - L_5 + \mu(F - M^{k+1})\big)}{\mathcal{F}(1 + \mu)}\right] \quad (35)
Afterwards, we update the Lagrange multipliers as follows:
L_1^{k+1} = L_1^k + \alpha_1(X^{k+1} - \nabla_x U^{k+1}) \quad (36)
L_2^{k+1} = L_2^k + \alpha_2(Y^{k+1} - \nabla_y U^{k+1}) \quad (37)
L_3^{k+1} = L_3^k + \beta_1(P^{k+1} - S^{k+1}) \quad (38)
L_4^{k+1} = L_4^k + \beta_2(Q^{k+1} - \nabla_y S^{k+1}) \quad (39)
L_5^{k+1} = L_5^k + \mu\big(M^{k+1} - (F - N^{k+1})\big) \quad (40)
The solving process is summarized as Algorithm 1:
Algorithm 1 Double low-rank simultaneous destriping and denoising algorithm
Input: Degraded image F, parameters.
1: Initialize.
2: For k = 1: K do
3: Solve $X^{k+1}$, $Y^{k+1}$, and $U^{k+1}$ via (15), (16), and (18).
4: Solve $P^{k+1}$, $Q^{k+1}$, and $S^{k+1}$ via (24), (25), and (27).
5: Solve $M^{k+1}$ and $N^{k+1}$ via (31) and (35).
6: Update the Lagrange multipliers $L_1^{k+1}$, $L_2^{k+1}$, $L_3^{k+1}$, $L_4^{k+1}$, and $L_5^{k+1}$.
7: End for
Output: Clean image U, stripe noise S, and random noise N.

3. Simulation and Experiments

To evaluate the model’s real processing capabilities, we conducted simulations and validation experiments. We conducted a comparative analysis of the experimental results with four state-of-the-art methods: LRSID, SNRCNN, TSWEU, and SEID. Among them, LRSID and SEID represent approaches using the MAP estimation theory, while SNRCNN and TSWEU represent approaches using deep neural networks.

3.1. Experimental Settings

Throughout the experimental process, we conducted simulations on the Set12 dataset (see Figure 3) and on remote sensing images with different spatial resolutions shown in Figure 4 (all from the DIOR dataset [52]). We also performed validation experiments on Compact High Resolution Imaging Spectrometer (CHRIS) images as well as actual remote sensing images acquired in the laboratory.
During the simulation, we controlled the degradation of the images using three parameters: $r_{Str}$, $m_{Str}$, and $n_{Sig}$. $r_{Str}$ reflects the proportion of image columns affected by stripe noise, $m_{Str}$ represents the maximum stripe intensity, and $n_{Sig}$ represents the standard deviation of the random noise. For example, $r_{Str} = 0.5$, $m_{Str} = 10$, and $n_{Sig} = 5$ indicate that 50% of the columns in the image are affected by stripe noise with a maximum intensity of 10, and that there is also random noise with a standard deviation of 5. We conducted the simulation on degraded images with $r_{Str} = 0.5$, $m_{Str} = 5, 10, 15$, and $n_{Sig} = 5, 10$. The peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) [53] are commonly used objective metrics for evaluating the quality of processed images; they provide quantitative measures of the fidelity and similarity between the processed image and the original image.
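Of these two metrics, PSNR has a simple closed form (SSIM involves a windowed computation and is usually taken from a library). A minimal sketch, assuming 8-bit images with a peak value of 255:

```python
import numpy as np

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10*log10(peak^2 / MSE)."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(img, float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

ref = np.full((32, 32), 100.0)
noisy = ref + 10.0  # uniform error of 10 gray levels -> MSE = 100
```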
In the actual validation experiment of remote sensing images, we performed synchronized denoising on the four test images shown in Figure 5. Figure 5a,b are CHRIS images that can be obtained from the website (http://www.brockmann-consult.de/beam/data/products/ (accessed on 15 July 2023)), while Figure 5c,d are test images from our laboratory. They represent four different noise-mixing scenarios: (a) and (b) represent the mixture of conventional stripe noise and random noise of different intensities, respectively, while (c) and (d) represent the mixture of two types of unconventional stripe noise (bidirectional stripe and incomplete stripe) and random noise. The synchronous denoising results under these four scenarios can comprehensively evaluate the denoising performance of the algorithms. To facilitate a direct comparison of the processing results, we introduced two objective evaluation metrics: the photoresponse nonuniformity (PRNU) and standard deviation (STD) of the image [44]. We evaluated the algorithm’s performance on stripe noise and random noise by computing the PRNU and STD of the uniform regions (green rectangular boxed region) in Figure 5. Smaller values of PRNU and STD typically indicate a lower intensity of stripe noise and random noise, resulting in a more uniform image. They also indicate a better performance of the algorithm in handling the uniform regions.

3.2. Results on Synthesized Images

Table 1 and Table 2 display the PSNR and SSIM values for the Set12 dataset at varying levels of mixed noise, while Table 3 presents the results for the images in Figure 4. To visually represent the processing results, we have marked the better metrics in the tables: the best-performing metric is indicated in red, and the second-best in green.
By analyzing the data in Table 1 and Table 2 comprehensively, we can observe that SEID and the proposed DLRSDD exhibit excellent overall processing performance. Regardless of variations in the random noise or stripe noise, they achieve good synchronized denoising effects. However, when the intensity of the random noise is low, SEID suffers significant performance degradation when processing Pep. and Cou., which may be related to its use of the median filter to obtain the initial value; this also affects the algorithm's SSIM performance. In scenarios where the random noise has the same intensity but the stripe noise varies, LRSID, SNRCNN, and TSWEU all demonstrate highly stable processing performance. However, when the stripe noise has the same intensity but the random noise varies, they all exhibit significant performance degradation, indicating that these methods are highly effective at suppressing stripe noise but have limited capability to suppress random noise. The simulation results for remote sensing images are given in Table 3. Unlike on the Set12 dataset, the overall performance of LRSID is superior to that of SEID; the latter shows a significant performance decline when dealing with remote sensing images with richer information. In addition, the processing results of SEID vary significantly with the spatial resolution of the image. When the spatial resolution is low, the edge blurring caused by window-weighted averaging severely reduces image quality, leading to poor performance; when the spatial resolution is high, the impact of edge blurring is relatively reduced, and SEID can still maintain relatively excellent processing results. Overall, the robustness of SEID is weaker than that of the other algorithms compared in the experiment.
In contrast, the proposed DLRSDD model achieves excellent processing results in various complex mixed-noise scenarios, and it does not show significant performance differences when processing different images. Although DLRSDD experiences a certain degree of performance degradation as the intensity of the random noise increases, it still outperforms the other compared algorithms comprehensively.
To obtain a more precise assessment of the performance of different algorithms, we present two sets of simulation results of Set12 in Figure 6 and Figure 7. The former shows the results of Cam. at n S i g = 5 and m S t r = 15 , while the latter represents the results of Man. at n S i g = 10 and m S t r = 15 . Additionally, Figure 8 displays the simulation results of remote sensing images with different spatial resolutions at n S i g = 5 and m S t r = 10 . In all sets of figures, (a,g) represent the original degraded simulated images, while (b–f) and (h–l) represent the results of LRSID, SNRCNN, TSWEU, SEID, and DLRSDD, respectively. In Figure 6, we can observe that the image processed by SNRCNN (Figure 6c) contains both residual stripes and random noise, resulting in poor performance. Although LRSID and TSWEU can effectively remove the stripe noise, their ability to suppress random noise is very limited (Figure 6b,d). Furthermore, LRSID may also damage the original structural information of the image (Figure 6b). In contrast, SEID achieves excellent results in smooth areas with less information, but noticeable artifacts appear when processing the edge information of the image (Figure 6e). In addition, the processing results of SEID also exhibit a pronounced oversmoothing (see Figure 7e), which easily leads to the loss of image details, posing a significant disadvantage for the subsequent applications of the images. When handling remote sensing images with different spatial resolutions, all methods can effectively remove the stripe noise from the images, but only SEID and DLRSDD can effectively suppress the random noise. By comparing the results in Figure 8e,k, we can observe that SEID shows a more obvious oversmoothing when dealing with information-rich remote sensing images. In contrast, the proposed DLRSDD not only thoroughly removes noise but also well preserves the contour and edge information of the images, achieving a more desirable processing outcome.

3.3. Results on Real Noisy Images

Table 4 and Table 5 present the comparative results of PRNU and STD, respectively, and the indicators with better performance are also identified. The results indicate that both SEID and DLRSDD outperform other algorithms in terms of PRNU and STD. This suggests that both of them can achieve good processing results for uniform regions. Upon examining the processing results of SEID and DLRSDD, it can be observed that SEID performs better overall in handling stripe noise (Table 4), while DLRSDD exhibits a superior suppression of random noise (Table 5). Additionally, considering the presence of edge artifacts in the SEID simulation, a subjective evaluation of the results is still necessary.
The experimental results of various algorithms on CHRIS images are illustrated in Figure 9. It is evident that the compared methods exhibit an effective suppression of conventional stripe noise. However, their performance significantly differs when it comes to addressing random noise. These methods can obtain clean images with clear details at low levels of random noise intensity (Figure 9a). However, as the intensity of the random noise increases (Figure 9g), the results show significant differences. SNRCNN has the weakest ability to suppress random noise, as evident from the presence of noticeable random noise in the image (Figure 9i). LRSID and TSWEU have similar abilities to suppress random noise, but neither of them can completely remove random noise in the image (Figure 9h,j). SEID and DLRSDD exhibit an excellent suppression of random noise (Figure 9k,l), but the proposed DLRSDD performs better in handling edge details.
In addition, we also conducted synchronous denoising experiments on test images collected in the laboratory (Figure 5c,d). The push-broom direction of Figure 5c was horizontal, and we rotated the image during processing. Different from the mixed noise in Figure 9, some unconventional stripe noises (bidirectional stripe and incomplete stripe) appear in Figure 5c,d, which greatly increases the difficulty of synchronous denoising. Figure 10 displays the synchronous denoising results of different algorithms on Figure 5c,d. By examining Figure 10a–f, it is apparent that the proposed DLRSDD demonstrates favorable denoising outcomes, even when faced with challenging bidirectional stripe noise. The stripe noise is entirely eliminated, resulting in a clear image structure. However, LRSID, SNRCNN, TSWEU, and SEID exhibit noticeable residual stripes in their results. When dealing with incomplete stripe noise (Figure 10g), the performance of TSWEU and SEID is unsatisfactory, showing obvious residual stripes. The image processed by SNRCNN has blurred edges, and the details are to some extent damaged. SNRCNN performs slightly better than TSWEU and SEID in handling stripe noise but still has a small number of residual stripes. In comparison, LRSID and DLRSDD perform well and obtain an ideal clean image. This is possibly because although the incomplete stripes violate the rank-one assumption, it still satisfies the low-rank characteristics, allowing LRSID and DLRSDD to effectively constrain the stripe noise.
Taking into account both subjective processing results and objective evaluation metrics, the proposed DLRSDD demonstrates excellent performance in handling actual degraded remote sensing images at various levels of degradation. It maintains robustness across different types of images and overall outperforms other compared algorithms.

4. Discussion

4.1. Parameter Determination

In the proposed algorithm, the values of λ1, λ2, γ1, γ2, and τ, along with their corresponding parameters α1, α2, β1, β2, and μ, affect the model's performance. Among these, λ1, λ2, α1, and α2 adjust the image prior, while γ1, γ2, β1, and β2 control the stripe noise; τ and μ constrain the random noise prior. To achieve a favorable processing outcome, the following strategies were employed for parameter determination.
Following common practice, the initial value of every parameter was set to 0.1, and the condition α1 = α2 = β1 = β2 = μ was maintained throughout the entire tuning process. We first explored the relationship between the image prior regularization parameters λ1 and λ2. When dealing with vertical stripes in the y-direction, a stronger constraint is needed on the x-direction gradient of the image prior, which implies λ1 > λ2. Keeping the other parameters fixed, we tested the processing results for λ1/λ2 ∈ (1, 10). The model performed best at λ1 = 5λ2, so we kept that setting. We then ran the same tests for λ2 and α2 (with λ1 = 5λ2 held fixed) and ultimately derived the relationship λ1 = 5λ2 = 5α1 = 5α2.
After establishing the relationship among the image prior regularization parameters, we investigated the selection of the noise prior regularization parameters. Unlike the image prior, the noise priors were constrained at the same level, i.e., γ1 = γ2 = τ. We then evaluated γ1 and β1 at different ratios and found that the model reached a well-stabilized outcome when γ1 = 5β1. The final step was to determine the values of α1, α2, β1, β2, and μ. Testing them over the range (0.01, 1), we found the results closest to the optimal solution when they lay in (0.2, 0.4), so we set them to 0.3. In summary, the preliminary parameters were: λ1 = 1.5, λ2 = 0.3, γ1 = γ2 = τ = 1.5, α1 = α2 = β1 = β2 = μ = 0.3.
To ensure the robustness of the model, we made slight adjustments to these values during the simulations and experiments based on the actual processing results, and finally fixed the model parameters as: λ1 = 1.5, λ2 = 0.3, γ1 = γ2 = 1.5, τ = 5, α1 = α2 = β1 = β2 = μ = 0.3.
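The coordinate-wise search described above can be sketched as follows. This is a minimal illustration, not the authors' tuning code: `denoise` is a hypothetical stand-in for the DLRSDD solver, and only the idea of sweeping one parameter ratio while holding the others fixed and keeping the PSNR-optimal setting is taken from the text.

```python
import numpy as np

def psnr(ref, img):
    """Peak signal-to-noise ratio for 8-bit-range images."""
    mse = np.mean((ref.astype(float) - img.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)

def tune_ratio(denoise, noisy, ref, ratios, base=0.3):
    """Sweep a single ratio (here lambda1/lambda2) while every other
    parameter stays fixed, keeping the PSNR-optimal setting."""
    best_ratio, best_score = None, -np.inf
    for r in ratios:
        restored = denoise(noisy, lam1=r * base, lam2=base)
        score = psnr(ref, restored)
        if score > best_score:
            best_ratio, best_score = r, score
    return best_ratio, best_score
```

The same loop can be reused for each remaining group of parameters (γ1/β1, then the shared penalty value), which is exactly the one-group-at-a-time strategy used in the text.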

4.2. Nuclear Norm Minimization (NNM) and Weighted Nuclear Norm Minimization (WNNM)

In the proposed DLRSDD model, we applied NNM and WNNM to impose low-rank constraints on different priors. Compared with NNM, WNNM assigns a distinct weight to each singular value and theoretically offers better performance. However, as observed in our earlier analysis, the similarity among stripes often yields a stripe-noise matrix of extremely low rank, even as low as one. Such a matrix has only a few, or even just one, nonzero singular value, so applying different weights to them would not significantly enhance performance but would substantially increase the computational load. Therefore, for the low-rank constraint on the stripe noise, we adopted NNM, while for the mixture of the image prior and stripe noise, we employed WNNM.
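The difference between the two shrinkage rules can be illustrated in a few lines of NumPy. This is a generic sketch of singular value thresholding (the NNM proximal step) versus a reweighted variant in the spirit of WNNM, not the exact operators used in DLRSDD; in particular, the weight rule w_i = C/(s_i + eps) is an illustrative assumption following the reweighting idea of Gu et al.

```python
import numpy as np

def svt(M, tau):
    """Nuclear-norm proximal step (NNM): soft-threshold every
    singular value by the same amount tau."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def weighted_svt(M, C, eps=1e-6):
    """WNNM-style step: larger singular values (major structure)
    receive smaller weights and are therefore shrunk less."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    w = C / (s + eps)  # assumed reweighting rule, for illustration
    return U @ np.diag(np.maximum(s - w, 0.0)) @ Vt
```

In the near-rank-one stripe case both rules effectively act on a single singular value, which is why the cheaper uniform threshold of NNM suffices for the stripe component, as argued above.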

4.3. Results Discussion

Based on the overall simulation and experimental process, the compared methods show consistent issues.
The LRSID model demonstrates excellent performance in handling stripe noise and maintains satisfactory processing capability even when facing bidirectional or incomplete stripes. Nonetheless, because the algorithm imposes its low-rank constraint on the stripe noise, random noise that does not adhere to the low-rank property tends to persist in the image prior.
Both SNRCNN and TSWEU achieve simultaneous denoising using trained models, so the characteristics of the training dataset greatly influence the processing outcomes. For instance, the SNRCNN network was trained on infrared images, which leads to a noticeable decrease in performance when handling remote sensing images that contain more information; TSWEU encounters similar challenges. Additionally, SNRCNN and TSWEU struggle to attain the desired denoising outcomes on actual remote sensing images, primarily because of the discrepancy between the degradation simulated during training and the actual degradation in the images.
SEID is based on MAP estimation theory and uses a modified NLM algorithm to approximate the conditional expectation of an image. It achieves good results in flat image regions but introduces noticeable artifacts around contours and edges. Furthermore, when performing the weighted averaging, the modified NLM algorithm excludes the neighboring windows of the target pixel in the same column. This exclusion notably affects the columnwise structural information in the image, leading to the subpar performance of SEID when confronted with incomplete stripes.
DLRSDD, in contrast, utilizes low-rank constraints for both the stripe noise and the image prior, guaranteeing that random noise violating the low-rank property is eliminated from both. As a result, the stripe noise and the image prior can be separated while the random noise within the image is extracted simultaneously, leading to an improved synchronous denoising effect.
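The low-rank intuition behind this decomposition is easy to verify numerically. In the hypothetical example below (sizes and bias range chosen arbitrarily for illustration), a vertical-stripe component built from a per-column detector bias is exactly rank one, while i.i.d. Gaussian noise is almost surely full rank, which is why a low-rank penalty can pull the two apart.

```python
import numpy as np

rng = np.random.default_rng(0)
bias = rng.uniform(-5, 5, size=64)            # one gain offset per detector column
stripes = np.ones((64, 1)) * bias[None, :]    # every row identical -> rank 1
noise = rng.normal(0.0, 1.0, size=(64, 64))   # i.i.d. Gaussian random noise

print(np.linalg.matrix_rank(stripes))  # 1
print(np.linalg.matrix_rank(noise))    # 64
```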

5. Conclusions

The article introduced a novel synchronized denoising algorithm that employed a double low-rank matrix recovery approach for effectively handling mixed noise in remote sensing images. The algorithm explored the underlying low-rank characteristics in degraded images and applied constraints to the image prior, stripe noise, and random noise. The ADMM strategy was used to approximate their optimal solutions at the same time to achieve synchronous denoising.
Firstly, addressing the issue of mixed noise removal, this study adopted a novel research approach. In the same framework, it simultaneously optimized the solution for all underlying priors, achieving synchronous denoising through image decomposition. This approach avoids the mutual interference between different noises, allowing the full utilization of the characteristics of various priors. It accurately assesses the noise distribution in degraded images, achieving the synchronous decoupling of underlying priors.
Furthermore, we conducted a detailed analysis of the relevant characteristics of the underlying priors in degraded images, fully leveraging the individual and common aspects of image priors, random noise priors, and stripe noise priors. We employed WNNM and NNM to apply low-rank constraints to the corresponding regularization terms. Simultaneously, by combining the image detail-preserving capability of the unidirectional total variation model, we achieved outstanding noise removal results. The ADMM solving strategy designed for the model allowed for a rapid determination of the model’s solution, leading to the acquisition of high-quality remote sensing image data.
Comprehensive simulations and experiments provided strong evidence that the proposed DLRSDD algorithm surpassed other commonly used algorithms in effectively addressing mixed noise, as demonstrated by superior objective metrics and subjective perception. It also exhibited good robustness, enabling the synchronized removal of stripe noise and random noise under various conditions and yielding high-quality remote sensing images.
Although DLRSDD demonstrated excellent performance, it still faces some challenges in practical applications. For instance, this paper assumed that random noise follows a Gaussian distribution, which covers most noise conditions but may not hold in certain extreme scenes where the noise characteristics deviate significantly from a Gaussian distribution. Additionally, mixed noise may contain signal-dependent components, which would restrict the model's processing performance. Our future research will focus on more complex mixed-noise cases, analyze the potential distribution characteristics of different noise types, and further optimize the proposed model so that it performs stably across different scenarios. We will also actively expand the model's scope of application and work to improve its efficiency, enabling it to play a more pivotal role across diverse domains.

Author Contributions

Conceptualization, X.W., L.Z. and C.L.; methodology, X.W. and L.Z.; writing—original draft preparation, X.W. and L.Z.; writing—review and editing, X.W., L.Z., C.L., T.G., Z.Z. and B.Y.; funding acquisition, L.Z. and C.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China under grant 51827806.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The CHRIS data used in this paper are available at the following link: http://www.brockmann-consult.de/beam/data/products/ (accessed on 15 July 2023).

Acknowledgments

The authors would like to thank the anonymous reviewers for their valuable comments.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Dabov, K.; Foi, A.; Katkovnik, V.; Egiazarian, K. Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 2007, 16, 2080–2095. [Google Scholar] [CrossRef]
  2. Buades, A.; Coll, B.; Morel, J.M. A non-local algorithm for image denoising. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), San Diego, CA, USA, 20–25 June 2005; Volume 2, pp. 60–65. [Google Scholar]
  3. Elad, M.; Aharon, M. Image denoising via sparse and redundant representations over learned dictionaries. IEEE Trans. Image Process. 2006, 15, 3736–3745. [Google Scholar] [CrossRef] [PubMed]
  4. Mairal, J.; Bach, F.; Ponce, J.; Sapiro, G.; Zisserman, A. Non-local sparse models for image restoration. In Proceedings of the 2009 IEEE 12th International Conference on Computer Vision, Kyoto, Japan, 29 September–2 October 2009; pp. 2272–2279. [Google Scholar]
  5. Zoran, D.; Weiss, Y. From learning models of natural image patches to whole image restoration. In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 479–486. [Google Scholar]
  6. Zuo, W.; Zhang, L.; Song, C.; Zhang, D. Texture enhanced image denoising via gradient histogram preservation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Portland, OR, USA, 23–28 June 2013; pp. 1203–1210. [Google Scholar]
  7. Dong, W.; Shi, G.; Li, X. Nonlocal image restoration with bilateral variance estimation: A low-rank approach. IEEE Trans. Image Process. 2012, 22, 700–711. [Google Scholar] [CrossRef] [PubMed]
  8. Gu, S.; Zhang, L.; Zuo, W.; Feng, X. Weighted nuclear norm minimization with application to image denoising. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 2862–2869. [Google Scholar]
  9. Zhang, K.; Zuo, W.; Chen, Y.; Meng, D.; Zhang, L. Beyond a gaussian denoiser: Residual learning of deep cnn for image denoising. IEEE Trans. Image Process. 2017, 26, 3142–3155. [Google Scholar] [CrossRef]
  10. Zhang, K.; Zuo, W.; Zhang, L. FFDNet: Toward a fast and flexible solution for CNN-based image denoising. IEEE Trans. Image Process. 2018, 27, 4608–4622. [Google Scholar] [CrossRef]
  11. Xu, X.; Li, M.; Sun, W.; Yang, M.H. Learning spatial and spatio-temporal pixel aggregations for image and video denoising. IEEE Trans. Image Process. 2020, 29, 7153–7165. [Google Scholar] [CrossRef]
  12. Song, Y.; Zhu, Y.; Du, X. Grouped multi-scale network for real-world image denoising. IEEE Signal Process. Lett. 2020, 27, 2124–2128. [Google Scholar] [CrossRef]
  13. Xu, X.; Li, M.; Sun, W. Learning deformable kernels for image and video denoising. arXiv 2019, arXiv:1904.06903. [Google Scholar]
  14. Anwar, S.; Barnes, N. Real image denoising with feature attention. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 3155–3164. [Google Scholar]
  15. Wegener, M. Destriping multiple sensor imagery by improved histogram matching. Int. J. Remote Sens. 1990, 11, 859–875. [Google Scholar] [CrossRef]
  16. Gadallah, F.; Csillag, F.; Smith, E. Destriping multisensor imagery with moment matching. Int. J. Remote Sens. 2000, 21, 2505–2511. [Google Scholar] [CrossRef]
  17. Chen, J.; Shao, Y.; Guo, H.; Wang, W.; Zhu, B. Destriping CMODIS data by power filtering. IEEE Trans. Geosci. Remote Sens. 2003, 41, 2119–2124. [Google Scholar] [CrossRef]
  18. Cao, Y.; Yang, M.Y.; Tisse, C.L. Effective strip noise removal for low-textured infrared images based on 1-D guided filtering. IEEE Trans. Circuits Syst. Video Technol. 2015, 26, 2176–2188. [Google Scholar] [CrossRef]
  19. Münch, B.; Trtik, P.; Marone, F.; Stampanoni, M. Stripe and ring artifact removal with combined wavelet—Fourier filtering. Opt. Express 2009, 17, 8567–8591. [Google Scholar] [CrossRef] [PubMed]
  20. Pande-Chhetri, R.; Abd-Elrahman, A. De-striping hyperspectral imagery using wavelet transform and adaptive frequency domain filtering. ISPRS J. Photogramm. Remote Sens. 2011, 66, 620–636. [Google Scholar] [CrossRef]
  21. Chang, Y.; Fang, H.; Yan, L.; Liu, H. Robust destriping method with unidirectional total variation and framelet regularization. Opt. Express 2013, 21, 23307–23323. [Google Scholar] [CrossRef] [PubMed]
  22. Dou, H.X.; Huang, T.Z.; Deng, L.J.; Zhao, X.L.; Huang, J. Directional l0 Sparse Modeling for Image Stripe Noise Removal. Remote Sens. 2018, 10, 361. [Google Scholar] [CrossRef]
  23. Yang, J.H.; Zhao, X.L.; Ma, T.H.; Chen, Y.; Huang, T.Z.; Ding, M. Remote sensing images destriping using unidirectional hybrid total variation and nonconvex low-rank regularization. J. Comput. Appl. Math. 2020, 363, 124–144. [Google Scholar] [CrossRef]
  24. Liu, X.; Lu, X.; Shen, H.; Yuan, Q.; Jiao, Y.; Zhang, L. Stripe noise separation and removal in remote sensing images by consideration of the global sparsity and local variational properties. IEEE Trans. Geosci. Remote Sens. 2016, 54, 3049–3060. [Google Scholar] [CrossRef]
  25. Chang, Y.; Yan, L.; Wu, T.; Zhong, S. Remote sensing image stripe noise removal: From image decomposition perspective. IEEE Trans. Geosci. Remote Sens. 2016, 54, 7018–7031. [Google Scholar] [CrossRef]
  26. Chen, Y.; Huang, T.Z.; Zhao, X.L.; Deng, L.J.; Huang, J. Stripe noise removal of remote sensing images by total variation regularization and group sparsity constraint. Remote Sens. 2017, 9, 559. [Google Scholar] [CrossRef]
  27. Kuang, X.; Sui, X.; Chen, Q.; Gu, G. Single Infrared Image Stripe Noise Removal Using Deep Convolutional Networks. IEEE Photonics J. 2017, 9, 3900913. [Google Scholar] [CrossRef]
  28. Kuang, X.; Sui, X.; Liu, Y.; Chen, Q.; Guohua, G. Single infrared image optical noise removal using a deep convolutional neural network. IEEE Photonics J. 2017, 10, 7800615. [Google Scholar] [CrossRef]
  29. He, Z.; Cao, Y.; Dong, Y.; Yang, J.; Cao, Y.; Tisse, C.L. Single-image-based nonuniformity correction of uncooled long-wave infrared detectors: A deep-learning approach. Appl. Opt. 2018, 57, D155–D164. [Google Scholar] [CrossRef] [PubMed]
  30. Shao, Y.; Sun, Y.; Zhao, M.; Chang, Y.; Zheng, Z.; Tian, C.; Zhang, Y. Infrared image stripe noise removing using least squares and gradient domain guided filtering. Infrared Phys. Technol. 2021, 119, 103968. [Google Scholar] [CrossRef]
  31. Chang, Y.; Yan, L.; Liu, L.; Fang, H.; Zhong, S. Infrared aerothermal nonuniform correction via deep multiscale residual network. IEEE Geosci. Remote Sens. Lett. 2019, 16, 1120–1124. [Google Scholar] [CrossRef]
  32. Wang, Y.T.; Zhao, X.L.; Jiang, T.X.; Deng, L.J.; Chang, Y.; Huang, T.Z. Rain streaks removal for single image via kernel-guided convolutional neural network. IEEE Trans. Neural Netw. Learn. Syst. 2020, 32, 3664–3676. [Google Scholar] [CrossRef] [PubMed]
  33. Shugar, D.; Jacquemart, M.; Shean, D.; Bhushan, S.; Upadhyay, K.; Sattar, A.; Schwanghart, W.; McBride, S.; Vries, M.; Mergili, M.; et al. A massive rock and ice avalanche caused the 2021 disaster at Chamoli, Indian Himalaya. Science 2021, 373, eabh4455. [Google Scholar] [CrossRef]
  34. Touzi, R. Target Scattering Decomposition in Terms of Roll-Invariant Target Parameters. IEEE Trans. Geosci. Remote Sens. 2007, 45, 73–84. [Google Scholar] [CrossRef]
  35. Chang, Y.; Yan, L.; Fang, H.; Liu, H. Simultaneous destriping and denoising for remote sensing images with unidirectional total variation and sparse representation. IEEE Geosci. Remote Sens. Lett. 2013, 11, 1051–1055. [Google Scholar] [CrossRef]
  36. Liu, H.; Zhang, Z.; Liu, S.; Liu, T.; Chang, Y. Destriping algorithm with l0 sparsity prior for remote sensing images. In Proceedings of the 2015 IEEE International Conference on Image Processing (ICIP), Quebec City, QC, Canada, 27–30 September 2015; pp. 2295–2299. [Google Scholar]
  37. Huang, Z.; Zhang, Y.; Li, Q.; Li, Z.; Zhang, T.; Sang, N.; Xiong, S. Unidirectional variation and deep CNN denoiser priors for simultaneously destriping and denoising optical remote sensing images. Int. J. Remote Sens. 2019, 40, 5737–5748. [Google Scholar] [CrossRef]
  38. Huang, Z.; Zhang, Y.; Li, Q.; Li, X.; Zhang, T.; Sang, N.; Hong, H. Joint analysis and weighted synthesis sparsity priors for simultaneous denoising and destriping optical remote sensing images. IEEE Trans. Geosci. Remote Sens. 2020, 58, 6958–6982. [Google Scholar] [CrossRef]
  39. Chang, Y.; Chen, M.; Yan, L.; Zhao, X.L.; Li, Y.; Zhong, S. Toward universal stripe removal via wavelet-based deep convolutional neural network. IEEE Trans. Geosci. Remote Sens. 2019, 58, 2880–2897. [Google Scholar] [CrossRef]
  40. Song, L.; Huang, H. Simultaneous Destriping and Image Denoising Using a Nonparametric Model with the EM Algorithm. IEEE Trans. Image Process. 2023, 32, 1065–1077. [Google Scholar] [CrossRef] [PubMed]
  41. Shen, H.; Zhang, L. A MAP-based algorithm for destriping and inpainting of remotely sensed images. IEEE Trans. Geosci. Remote Sens. 2008, 47, 1492–1502. [Google Scholar] [CrossRef]
  42. Carfantan, H.; Idier, J. Statistical linear destriping of satellite-based pushbroom-type images. IEEE Trans. Geosci. Remote Sens. 2009, 48, 1860–1871. [Google Scholar] [CrossRef]
  43. Bouali, M.; Ladjal, S. Toward optimal destriping of MODIS data using a unidirectional variational model. IEEE Trans. Geosci. Remote Sens. 2011, 49, 2924–2935. [Google Scholar] [CrossRef]
  44. Wu, X.; Qu, H.; Zheng, L.; Gao, T.; Zhang, Z. A remote sensing image destriping model based on low-rank and directional sparse constraint. Remote Sens. 2021, 13, 5126. [Google Scholar] [CrossRef]
  45. Eckstein, J.; Bertsekas, D.P. On the Douglas—Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 1992, 55, 293–318. [Google Scholar] [CrossRef]
  46. Boyd, S.; Parikh, N.; Chu, E.; Peleato, B.; Eckstein, J. Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 2011, 3, 1–122. [Google Scholar] [CrossRef]
  47. Donoho, D.L. De-noising by soft-thresholding. IEEE Trans. Inf. Theory 1995, 41, 613–627. [Google Scholar] [CrossRef]
  48. Ng, M.K.; Chan, R.H.; Tang, W.C. A fast algorithm for deblurring models with Neumann boundary conditions. SIAM J. Sci. Comput. 1999, 21, 851–866. [Google Scholar] [CrossRef]
  49. Cai, J.F.; Candès, E.J.; Shen, Z. A singular value thresholding algorithm for matrix completion. SIAM J. Optim. 2010, 20, 1956–1982. [Google Scholar] [CrossRef]
  50. Blumensath, T.; Davies, M.E. Iterative thresholding for sparse approximations. J. Fourier Anal. Appl. 2008, 14, 629–654. [Google Scholar] [CrossRef]
  51. Jiao, Y.; Jin, B.; Lu, X. A primal dual active set with continuation algorithm for the l0-regularized optimization problem. Appl. Comput. Harmon. Anal. 2015, 39, 400–426. [Google Scholar] [CrossRef]
  52. Li, K.; Wan, G.; Cheng, G.; Meng, L.; Han, J. Object detection in optical remote sensing images: A survey and a new benchmark. ISPRS J. Photogramm. Remote Sens. 2020, 159, 296–307. [Google Scholar] [CrossRef]
  53. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef]
Figure 1. Noise type and image degradation simulation.
Figure 2. The processing result of SEID.
Figure 3. Images in the Set12 dataset for the simulation.
Figure 4. Remote sensing images with different spatial resolutions for the simulation.
Figure 5. Test images on validation experiments.
Figure 6. The processing results of Cam. using different methods when nSig = 5 and mStr = 15.
Figure 7. The processing results of Man. using different methods when nSig = 10 and mStr = 15.
Figure 8. The processing results of remote sensing images with different spatial resolutions using different methods when nSig = 5 and mStr = 10.
Figure 9. The results of CHRIS images processed by different methods.
Figure 10. The results of laboratory images processed by different methods.
Table 1. PSNR (dB) of Set12 processed by different methods under various conditions.

Noise Level           Method   Cam.   Hou.   Pep.   Sta.   But.   Jet.   Par.   Riv.   Bar.   Shi.   Man.   Cou.   Ave.
mStr = 5,  nSig = 5   LRSID    31.54  29.96  27.82  32.47  32.52  34.18  34.16  32.52  30.60  33.15  34.31  31.29  32.04
                      SNRCNN   34.00  34.19  31.85  33.79  34.06  33.84  34.09  33.64  33.78  33.82  33.81  33.57  33.70
                      TSWEU    33.71  33.58  32.96  32.74  32.83  33.54  33.33  32.55  32.64  33.50  32.93  31.85  33.01
                      SEID     34.79  35.96  32.09  34.23  35.43  35.17  35.38  34.14  35.73  35.74  34.11  32.87  34.64
                      DLRSDD   36.44  37.31  35.98  36.08  37.23  35.70  36.43  35.84  35.64  36.85  36.36  35.09  36.25
mStr = 10, nSig = 5   LRSID    31.02  29.94  27.66  32.23  32.36  33.97  34.29  32.37  31.29  33.15  34.08  31.20  31.96
                      SNRCNN   33.63  33.71  31.52  33.23  33.42  33.37  33.62  33.13  33.17  33.34  33.19  33.05  33.20
                      TSWEU    33.77  33.65  32.97  32.93  32.74  33.55  33.43  32.68  32.75  33.56  33.07  32.02  33.09
                      SEID     34.95  35.91  32.00  34.08  35.28  35.08  35.40  34.13  35.51  35.79  34.02  32.82  34.58
                      DLRSDD   36.19  36.97  35.49  35.75  36.58  35.48  36.18  35.38  35.20  36.50  35.84  34.86  35.87
mStr = 15, nSig = 5   LRSID    31.31  29.61  27.64  32.01  32.41  34.00  34.06  32.11  31.09  33.15  33.85  31.26  31.87
                      SNRCNN   32.61  32.82  31.00  32.10  32.73  32.33  32.41  32.17  32.00  32.31  32.17  32.18  32.24
                      TSWEU    33.72  33.63  33.11  33.17  33.08  33.68  33.44  32.85  32.94  33.68  33.07  32.47  33.24
                      SEID     34.31  36.05  31.79  34.00  35.47  35.03  35.32  33.99  35.29  35.62  34.16  32.92  34.50
                      DLRSDD   35.67  36.12  34.96  34.97  36.34  34.82  35.60  34.67  34.52  35.66  35.17  34.44  35.24
mStr = 5,  nSig = 10  LRSID    28.12  27.64  25.91  28.20  28.22  28.65  28.73  28.18  27.38  28.51  28.67  27.75  28.00
                      SNRCNN   28.55  28.56  27.73  28.39  28.47  28.43  28.55  28.36  28.38  28.43  28.41  28.35  28.38
                      TSWEU    27.78  27.93  27.66  27.61  27.64  27.93  27.50  27.58  27.60  27.95  27.78  27.22  27.68
                      SEID     29.37  32.66  29.29  29.63  30.48  30.00  30.37  28.85  31.47  30.88  29.19  27.91  30.01
                      DLRSDD   33.33  34.69  33.37  32.68  33.93  32.31  32.97  32.74  33.39  34.06  33.13  32.62  33.27
mStr = 10, nSig = 10  LRSID    27.96  27.38  25.84  28.06  28.15  28.57  28.65  28.04  27.72  28.39  28.62  27.80  27.93
                      SNRCNN   28.38  28.42  27.57  28.08  28.23  28.17  28.36  28.17  28.09  28.24  28.23  28.15  28.17
                      TSWEU    27.81  27.93  27.71  27.68  27.72  27.92  27.48  27.68  27.68  27.97  27.83  27.30  27.73
                      SEID     29.26  32.59  29.28  29.42  30.59  30.00  30.26  28.91  31.38  30.88  29.19  27.99  29.98
                      DLRSDD   33.18  34.67  33.12  32.28  33.69  32.09  32.79  32.54  33.12  33.81  32.97  32.53  33.07
mStr = 15, nSig = 10  LRSID    28.05  27.41  25.75  27.98  28.05  28.52  28.59  27.96  27.44  28.23  28.50  27.67  27.85
                      SNRCNN   28.03  28.04  27.24  27.72  27.74  27.91  27.98  27.73  27.63  27.83  27.79  27.67  27.78
                      TSWEU    27.77  27.93  27.72  27.79  27.91  28.00  27.66  27.71  27.78  27.96  27.87  27.41  27.79
                      SEID     29.31  32.64  28.97  29.58  30.63  30.04  30.58  28.84  31.45  30.88  29.22  27.79  29.99
                      DLRSDD   32.67  34.05  32.20  31.81  32.83  31.74  32.44  31.83  32.55  33.23  32.28  31.75  32.45
Table 2. SSIM of Set12 processed by different methods under various conditions.

Noise Level           Method   Cam.   Hou.   Pep.   Sta.   But.   Jet.   Par.   Riv.   Bar.   Shi.   Man.   Cou.   Avg.
mStr = 5,  nSig = 5   LRSID    0.869  0.856  0.896  0.929  0.916  0.892  0.895  0.965  0.959  0.961  0.974  0.937  0.921
                      SNRCNN   0.852  0.844  0.876  0.915  0.901  0.872  0.874  0.970  0.974  0.965  0.968  0.970  0.915
                      TSWEU    0.849  0.842  0.875  0.914  0.897  0.869  0.872  0.961  0.968  0.961  0.967  0.949  0.910
                      SEID     0.942  0.907  0.929  0.938  0.957  0.932  0.944  0.959  0.977  0.971  0.956  0.928  0.945
                      DLRSDD   0.946  0.936  0.942  0.958  0.969  0.939  0.953  0.981  0.982  0.983  0.978  0.971  0.961
mStr = 10, nSig = 5   LRSID    0.866  0.854  0.895  0.928  0.916  0.891  0.895  0.963  0.961  0.961  0.972  0.936  0.920
                      SNRCNN   0.849  0.841  0.873  0.910  0.897  0.868  0.870  0.965  0.970  0.960  0.962  0.965  0.911
                      TSWEU    0.849  0.842  0.875  0.914  0.897  0.869  0.872  0.961  0.969  0.961  0.967  0.950  0.911
                      SEID     0.942  0.907  0.928  0.937  0.957  0.932  0.944  0.960  0.977  0.971  0.956  0.930  0.945
                      DLRSDD   0.946  0.936  0.941  0.957  0.968  0.939  0.952  0.977  0.980  0.981  0.976  0.971  0.960
mStr = 15, nSig = 5   LRSID    0.868  0.853  0.895  0.928  0.916  0.892  0.895  0.960  0.959  0.961  0.972  0.937  0.920
                      SNRCNN   0.836  0.828  0.857  0.897  0.888  0.858  0.852  0.952  0.956  0.945  0.947  0.955  0.898
                      TSWEU    0.849  0.842  0.875  0.915  0.897  0.869  0.872  0.961  0.969  0.962  0.968  0.958  0.911
                      SEID     0.942  0.907  0.929  0.939  0.958  0.931  0.944  0.959  0.977  0.971  0.956  0.931  0.945
                      DLRSDD   0.945  0.933  0.939  0.954  0.967  0.935  0.950  0.970  0.975  0.977  0.972  0.967  0.957
mStr = 5,  nSig = 10  LRSID    0.658  0.643  0.706  0.791  0.759  0.704  0.708  0.902  0.902  0.883  0.911  0.889  0.788
                      SNRCNN   0.651  0.629  0.689  0.776  0.745  0.689  0.693  0.910  0.921  0.892  0.911  0.919  0.785
                      TSWEU    0.633  0.616  0.678  0.768  0.734  0.678  0.677  0.888  0.905  0.877  0.900  0.886  0.770
                      SEID     0.861  0.865  0.878  0.866  0.915  0.884  0.868  0.909  0.946  0.916  0.870  0.848  0.886
                      DLRSDD   0.913  0.883  0.913  0.918  0.951  0.904  0.912  0.959  0.967  0.965  0.950  0.949  0.932
mStr = 10, nSig = 10  LRSID    0.656  0.639  0.705  0.789  0.758  0.703  0.707  0.897  0.903  0.881  0.910  0.891  0.787
                      SNRCNN   0.647  0.626  0.685  0.772  0.740  0.685  0.690  0.904  0.912  0.886  0.905  0.914  0.780
                      TSWEU    0.632  0.616  0.679  0.768  0.735  0.678  0.677  0.889  0.906  0.878  0.900  0.888  0.770
                      SEID     0.862  0.865  0.878  0.866  0.915  0.885  0.868  0.909  0.945  0.916  0.870  0.849  0.886
                      DLRSDD   0.912  0.886  0.910  0.917  0.949  0.903  0.911  0.955  0.963  0.963  0.950  0.949  0.931
mStr = 15, nSig = 10  LRSID    0.657  0.640  0.703  0.789  0.757  0.703  0.707  0.896  0.898  0.879  0.908  0.888  0.785
                      SNRCNN   0.639  0.617  0.673  0.762  0.730  0.679  0.681  0.891  0.902  0.872  0.888  0.901  0.770
                      TSWEU    0.632  0.616  0.679  0.769  0.736  0.678  0.677  0.890  0.907  0.878  0.901  0.890  0.771
                      SEID     0.864  0.865  0.873  0.866  0.915  0.885  0.875  0.909  0.945  0.916  0.870  0.848  0.886
                      DLRSDD   0.909  0.888  0.907  0.911  0.945  0.901  0.909  0.949  0.959  0.959  0.942  0.942  0.927
Table 3. PSNR and SSIM of images in Figure 4 processed by different methods under various conditions.

                               PSNR (dB)                                                SSIM
Noise Level           Method   Dior-1  Dior-2  Dior-3  Dior-4  Dior-5  Dior-6  Avg.     Dior-1  Dior-2  Dior-3  Dior-4  Dior-5  Dior-6  Avg.
mStr = 5,  nSig = 5   LRSID    34.32   32.95   33.31   33.68   33.71   34.04   33.67    0.975   0.965   0.990   0.992   0.975   0.970   0.978
                      SNRCNN   33.85   33.66   33.22   33.48   33.71   34.06   33.66    0.973   0.967   0.989   0.991   0.970   0.969   0.977
                      TSWEU    33.13   32.74   32.17   32.56   33.01   33.22   32.80    0.973   0.961   0.987   0.989   0.970   0.966   0.974
                      SEID     33.35   33.48   29.94   31.81   34.33   35.12   33.00    0.945   0.933   0.937   0.972   0.952   0.945   0.947
                      DLRSDD   35.52   36.19   34.59   35.02   36.02   38.00   35.89    0.977   0.976   0.992   0.993   0.979   0.978   0.983
mStr = 10, nSig = 5   LRSID    34.15   32.91   33.07   33.56   33.67   33.94   33.55    0.975   0.965   0.990   0.991   0.975   0.969   0.977
                      SNRCNN   33.22   33.15   32.53   32.84   33.27   33.53   33.09    0.967   0.962   0.987   0.989   0.967   0.964   0.973
                      TSWEU    33.19   32.78   32.30   32.59   33.12   33.26   32.87    0.973   0.961   0.987   0.989   0.970   0.967   0.975
                      SEID     33.23   33.39   29.94   31.72   34.33   35.02   32.94    0.945   0.932   0.937   0.972   0.952   0.945   0.947
                      DLRSDD   34.93   35.74   34.05   34.55   35.61   37.27   35.36    0.973   0.974   0.991   0.992   0.977   0.976   0.980
mStr = 15, nSig = 5   LRSID    33.96   32.61   32.83   33.33   33.43   33.69   33.31    0.974   0.960   0.989   0.991   0.974   0.968   0.976
                      SNRCNN   32.13   32.28   31.19   31.73   32.27   32.62   32.04    0.952   0.950   0.980   0.984   0.956   0.955   0.963
                      TSWEU    33.39   32.95   32.30   32.89   33.38   33.45   33.06    0.973   0.962   0.987   0.990   0.972   0.967   0.975
                      SEID     33.30   33.41   29.82   31.56   34.20   35.16   32.91    0.945   0.933   0.936   0.972   0.952   0.945   0.947
                      DLRSDD   34.35   35.04   33.16   33.95   34.76   36.38   34.61    0.967   0.969   0.987   0.990   0.970   0.972   0.976
mStr = 5,  nSig = 10  LRSID    28.61   28.36   28.29   28.45   28.50   28.68   28.48    0.918   0.904   0.971   0.975   0.919   0.906   0.932
                      SNRCNN   28.41   28.37   28.11   28.25   28.38   28.52   28.34    0.921   0.909   0.971   0.975   0.920   0.911   0.935
                      TSWEU    27.86   27.69   27.53   27.65   27.79   27.83   27.72    0.913   0.892   0.965   0.970   0.910   0.896   0.924
                      SEID     28.39   28.75   27.45   26.21   29.40   30.77   28.49    0.875   0.847   0.916   0.909   0.875   0.887   0.885
                      DLRSDD   32.05   32.92   30.48   31.04   32.63   34.76   32.31    0.949   0.946   0.978   0.982   0.952   0.953   0.960
mStr = 10, nSig = 10  LRSID    28.58   28.30   28.21   28.36   28.46   28.58   28.41    0.917   0.901   0.970   0.974   0.918   0.904   0.931
                      SNRCNN   28.20   28.17   27.88   27.99   28.14   28.30   28.11    0.916   0.902   0.968   0.972   0.913   0.906   0.929
                      TSWEU    27.88   27.70   27.58   27.71   27.83   27.88   27.76    0.913   0.892   0.965   0.971   0.910   0.897   0.925
                      SEID     28.40   28.75   27.43   26.22   29.36   30.77   28.49    0.875   0.847   0.916   0.909   0.874   0.887   0.885
                      DLRSDD   31.81   32.67   30.30   30.76   32.35   34.39   32.05    0.946   0.942   0.977   0.981   0.949   0.951   0.958
mStr = 15, nSig = 10  LRSID    28.46   28.18   28.12   28.30   28.37   28.49   28.32    0.916   0.900   0.970   0.974   0.917   0.902   0.930
                      SNRCNN   27.80   27.72   27.36   27.56   27.79   27.92   27.69    0.905   0.887   0.962   0.967   0.904   0.895   0.920
                      TSWEU    27.92   27.76   27.62   27.73   27.87   27.92   27.80    0.913   0.895   0.966   0.971   0.911   0.898   0.926
                      SEID     28.31   28.84   27.45   26.25   29.32   30.79   28.49    0.875   0.847   0.916   0.909   0.874   0.887   0.885
                      DLRSDD   31.23   32.04   29.74   30.26   31.76   33.68   31.45    0.939   0.935   0.973   0.977   0.943   0.945   0.952
Table 4. PRNU of uniform regions of images in Figure 5.

Test Image   Original   LRSID    SNRCNN   TSWEU    SEID     DLRSDD
a            0.0155     0.0101   0.0114   0.0173   0.0087   0.0091
b            0.0355     0.0304   0.0322   0.0301   0.0299   0.0292
c            0.0162     0.0127   0.0128   0.0191   0.0107   0.0087
d            0.0448     0.0443   0.0374   0.0986   0.0198   0.0283
Avg.         0.0280     0.0244   0.0235   0.0413   0.0173   0.0188
Table 5. STD of uniform regions of images in Figure 5.

Test Image   Original   LRSID   SNRCNN   TSWEU   SEID    DLRSDD
a            0.895      0.580   0.654    0.996   0.500   0.525
b            3.915      3.298   3.492    3.457   3.255   3.180
c            0.798      0.616   0.659    1.113   0.527   0.404
d            1.552      0.710   0.889    1.828   0.457   0.575
Avg.         1.790      1.301   1.424    1.848   1.185   1.171

