Article

Data-Driven Redundant Transform Based on Parseval Frames

1 Beijing Key Laboratory of Multimedia and Intelligent Software Technology, Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
2 Computer Science and Technology, Dalian University of Technology, Dalian 116023, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2020, 10(8), 2891; https://doi.org/10.3390/app10082891
Submission received: 18 January 2020 / Revised: 16 April 2020 / Accepted: 17 April 2020 / Published: 22 April 2020
(This article belongs to the Special Issue Advances in Image Processing, Analysis and Recognition Technology)

Abstract:
The sparsity of images in a certain transform domain or dictionary has been exploited in many image processing applications. Both classic transforms and sparsifying transforms reconstruct images by a linear combination of a small basis of the transform; both kinds of transform are non-redundant. However, natural images contain complicated textures and structures, which can hardly be sparsely represented by square transforms. To solve this issue, we propose a data-driven redundant transform based on Parseval frames (DRTPF) by applying a frame and its dual frame as the backward and forward transform operators, respectively. Benefitting from this pairwise use of frames, the proposed model combines a synthesis sparse system and an analysis sparse system. By enforcing the frame pair to be Parseval frames, the singular values and condition number of the learnt redundant frames, which are effective measures of the quality of learnt sparsifying transforms, are forced to achieve an optimal state. We formulate a transform pair (i.e., frame pair) learning model and a two-phase iterative algorithm, analyze the robustness of the proposed DRTPF and the convergence of the corresponding algorithm, and demonstrate the effectiveness of DRTPF by analyzing its robustness against noise and its sparsification errors. Extensive experimental results on image denoising show that our proposed model achieves superior denoising performance, in terms of subjective and objective quality, compared to traditional sparse models.

1. Introduction

Transforms are classical techniques in signal processing tasks such as compression, classification, and recognition [1,2,3,4,5]. Traditional transforms, based on analytic orthogonal bases such as the DCT, DFT, and wavelets [1,6], suffer from two shortcomings: they do not depend on the data, and they reconstruct each image by approximation in the same subspace spanned by a non-redundant basis of the transform, which limits the compact representation of natural signals.
Various models for sparse approximation have appeared in recent decades and play a fundamental role in modeling natural signals, with applications of denoising [7,8,9,10], super-resolution [11,12,13], and compression [1]. Such techniques exploit the sparsity of natural signals in analytic transform domains such as DCT, DFT, and various learning-based dictionaries [14,15,16].
There are two typical models for sparse representation: synthesis [10,14,15] and analysis [16,17,18,19] models. So far, most sparse models rely on the concept of synthesis, which represents the underlying signal as a sparse combination of atoms from a given dictionary. Specifically, $x = D\alpha$, where $x \in \mathbb{R}^N$ is the original signal, $D \in \mathbb{R}^{N \times M}$ is the given dictionary whose columns are the atoms, and $\alpha \in \mathbb{R}^M$ is the sparse coefficient vector, whose sparsity is usually measured by the $\ell_0$-norm $\|\cdot\|_0$. A learning analysis sparse model was proposed by Elad [14,19], formulated as $\|\Omega x\|_0 = r$ with notation similar to that of the synthesis one. Instead of reconstructing the signal using a few atoms in a dictionary (as in the synthesis model), an analysis model decomposes a signal in a sparse fashion, based on the assumption that the signal lies in a sparse subset of the dictionary.
An analysis model can be straightforwardly regarded as a forward transform if its corresponding backward transform is available. Recent research on transforms [2,4,5,20,21] has demonstrated the advantages of applying sparse constraints in transform learning. Motivated by this idea, many studies have been devoted to image denoising [5,20], classification [3,4], and other signal processing methods [21]. Learning-based transforms with sparse constraints measure the transform error, called the sparsification error, in the analysis or frequency domain, rather than in the temporal domain. Given training data $X \in \mathbb{R}^{N \times L}$ with signal vectors $x_i \in \mathbb{R}^N, i = 1, \ldots, L$ as its columns, the problem of training a square sparsifying transform $W \in \mathbb{R}^{N \times N}$ [21] is formulated as
$$\min_{W, Y} \|WX - Y\|_F^2 + \mu\|W\|_F^2 - \lambda\log\det(W) \quad \mathrm{s.t.} \quad \|y_i\|_0 \le s,$$
where $y_i, i = 1, 2, \ldots, L$ are the columns of $Y$ satisfying the sparse constraint, and $\mu\|W\|_F^2 - \lambda\log\det(W)$ is a regularizer which keeps $W$ non-singular.
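As a concrete reading of this objective, here is a minimal numpy sketch that evaluates it directly (the $\ell_0$ constraint on the columns of $Y$ is assumed to be enforced separately by the sparse coding step; all names are illustrative):

```python
import numpy as np

def square_transform_objective(W, X, Y, mu, lam):
    """||WX - Y||_F^2 + mu ||W||_F^2 - lam log det(W)."""
    _, logdet = np.linalg.slogdet(W)   # stable log|det W| for the regularizer
    return (np.linalg.norm(W @ X - Y) ** 2
            + mu * np.linalg.norm(W) ** 2
            - lam * logdet)

# Sanity check: for W = I and Y = WX, only the regularizer remains,
# and log det(I) = 0, so the objective equals mu * N.
rng = np.random.default_rng(6)
N, L = 3, 7
X = rng.standard_normal((N, L))
val = square_transform_objective(np.eye(N), X, X, mu=2.0, lam=0.5)
```

The $\log\det$ term penalizes near-singular $W$, which is exactly the role the text assigns to the regularizer.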
As we can see, learning-based models effectively reveal the relationship between the transform and the data. However, a square transform, which consists of a non-redundant basis, cannot express complicated images. In 2014, an overcomplete transform learning model called OCTOBOS [20] was proposed, which consists of a series of square transforms representing different features of natural images. However, the number of transforms must be pre-defined, which limits flexibility in applications.
In recent years, frames, as overcomplete systems, have been applied in image processing tasks such as denoising [22,23], image compression [24], and high-resolution image reconstruction [25]. A frame can be regarded as an extension of an orthogonal basis, as a frame $\Phi \in \mathbb{R}^{N \times M}$ ($N < M$) also spans an N-dimensional space. Compared to a general frame, a tight frame (e.g., wavelet tight frames [26], ridgelets [27], curvelets [28], shearlets [29], and others) achieves wider use, as its lower and upper frame bounds are equal. A tight frame inherits the good characteristics of an orthogonal basis in signal processing, as its rows are orthogonal [30]. In a sparse representation, a redundant frame serves as an overcomplete dictionary to represent the signal [23]. With the development of data-driven approaches, learning-based tight frames have recently been researched [31,32,33]. In [31], redundant tight frames were used in compressed sensing. In [32], tight frames were applied to few-view image reconstruction. In [33], a data-driven method was presented, in which the dictionary atoms associated with a tight frame are generated by filters. In general, these studies model the frame learning problem in the dictionary learning form with tight frame constraints. These methods focus on tight frames, as the singular values of a tight frame are equal, which leads to simple optimization. A tight frame is a Parseval frame if the frame bounds are equal to 1. In fact, a Parseval frame is a redundant extension of the concept of a standard orthogonal basis. Due to its strong performance in linear signal representation, it is well-suited to sparse signal representation and optimization.
In this paper, we propose a data-driven redundant transform model based on Parseval frames (DRTPF for short), and present a model for learning DRTPF as well as a corresponding algorithm for solving the model. The algorithm consists of a sparse coding phase and a transform learning phase. The sparse coding phase updates the sparse coefficients and a threshold value using a conventional Batch Orthogonal Matching Pursuit (BtOMP) and pointwise thresholding. The transform learning phase performs the update of the frame using Gradient Descent and a relaxation or contraction singular values mapping, as well as updating the dual frame, in an atom-wise manner, using Least Squares. The advantages of the proposed DRTPF model (as well as the algorithm) are demonstrated with natural image denoising. To summarize, this paper makes the following contributions:
  • We propose the DRTPF method by integrating redundant Parseval frames with sparse constraints. The DRTPF method consists of a forward transform and a backward transform, which correspond to a frame and its dual frame, respectively. In other words, DRTPF bridges synthesis and analysis models by assuming that two models share almost the same sparse coefficients.
  • DRTPF outperforms traditional transforms and frames by learning from data, which exploits the features of natural images; traditional transforms and frames apply a uniform representation to various images, which tends to fail to characterize intrinsic, image-specific features.
  • Traditional transforms are usually orthogonal transforms, under which signals remain isometric, yet they suffer from weak robustness due to their strict properties. In contrast, DRTPF preserves the signals in a bounded fashion, which admits higher robustness and flexibility.
  • We propose a model for learning DRTPF and compare DRTPF with traditional transforms and sparse models in robustness analysis and image denoising experiments. Both qualitative and quantitative results demonstrate that DRTPF outperforms traditional transforms and sparse models.
The rest of this paper is organized as follows. Section 2 reviews the related work on frames. Section 3 proposes the framework of DRTPF, including the form of DRTPF (Section 3.1) and the learning model and corresponding algorithm for DRTPF (Section 3.2). In Section 4, we demonstrate the effectiveness of our DRTPF model by analyzing the convergence of the corresponding algorithm and give experimental results on robustness analysis and image denoising, as well as evaluating the effectiveness of DRTPF compared with traditional transforms and sparse models.

2. Related Work

Let $H^N$ be an N-dimensional discrete Hilbert space. A sequence $\{\phi_i\}_{i=1}^M \subset H^N$ is a frame if and only if there exist two positive numbers A and B such that [30]
$$A\|x\|_2^2 \le \sum_{i=1}^M |\langle x, \phi_i\rangle|^2 \le B\|x\|_2^2, \quad \forall x \in H^N.$$
A and B are called the bounds of the frame, and we call Formula (2) the frame condition, as it characterizes a frame. Furthermore, $\{\phi_i\}_{i=1}^M$ is tight if $A = B$ [30]. In particular, $\{\phi_i\}_{i=1}^M$ is a Parseval frame if $A = B = 1$ is satisfied. Once a frame is defined, two associated operators can be defined between the Hilbert space $H^N$ and the square-summable sequence space $\ell_2^M$: one is the analysis operator $T$, defined by
$$(Tx)_i = \langle x, \phi_i\rangle, \quad x \in H^N,$$
and the other is its adjoint operator, $T^*$, which is called the synthesis operator:
$$T^*c = \sum_{i=1}^M c_i\phi_i, \quad c = (c_i)_{i=1}^M \in \ell_2^M.$$
Then, the frame operator can be defined by the following canonical expansion:
$$Fx = T^*Tx = \sum_{i=1}^M \langle x, \phi_i\rangle\,\phi_i.$$
Let $x \in H^N$ be an arbitrary vector. A reconstruction function is an expression of the following form:
$$x = \sum_{i=1}^M \langle x, \psi_i\rangle\,\phi_i, \quad \forall x \in H^N,$$
where the sequence $\{\psi_i\}_{i=1}^M \subset H^N$ is called a dual frame of $\{\phi_i\}_{i=1}^M$. Obviously, $\{\psi_i\}_{i=1}^M$ is not unique unless $\{\phi_i\}_{i=1}^M$ is an orthogonal basis. In fact, for an arbitrary given frame $\{\phi_i\}_{i=1}^M$, there is a family of dual frames corresponding to it. The non-uniqueness of the dual frame allows us to achieve a better expression of the signal by optimizing the dual frame.
The frame $\Phi$ and its dual frame $\Psi$ can be stacked as the matrices $\Phi = [\phi_1, \phi_2, \ldots, \phi_M]$ and $\Psi = [\psi_1, \psi_2, \ldots, \psi_M]$, respectively. These matrices can be regarded as sparse representation dictionaries, transform operators, and so on. For a frame $\Phi$ with bounds A and B, the maximum and minimum singular values of $\Phi$ equal $\sqrt{B}$ and $\sqrt{A}$, respectively. What is more, the singular values of a tight frame are all equal; in particular, the singular values of a Parseval frame are all equal to 1. Thus, when the frame $\Phi$ is applied as a sparse representation dictionary or transform operator, its condition number is determined by $\sqrt{B/A}$. In this way, the model will never produce a degenerate dictionary or transform. In fact, frames are matrices with a special structure.

3. Data-Driven Redundant Transform Model Based on Tight Frame

In this section, we present our data-driven redundant transform based on Parseval frames (DRTPF, Section 3.1) model along with an efficient redundant transform learning algorithm (Section 3.2) which contains the sparse coding algorithm (Section 3.2.1) and the transform pair update algorithm (Section 3.2.2).

3.1. Data-Driven Redundant Transform

In this subsection, we first propose a threshold-based reconstruction function, with the assumption that the signal is sparse in the dual frame domain. Then, we present the data-driven redundant transform based on Parseval frames model.
Let $\{\phi_i\}_{i=1}^M$ be a frame and $\{\psi_i\}_{i=1}^M$ be its dual frame. For convenience, we stack them as the matrices $\Phi = [\phi_1, \phi_2, \ldots, \phi_M]$ and $\Psi = [\psi_1, \psi_2, \ldots, \psi_M]$, respectively. Let $x = \hat{x} + e$ be a signal vector, where $\hat{x}$ is the original noiseless signal and $e$ is zero-mean white Gaussian noise. The frame reconstruction function (6) can be formulated as $x = \Phi\Psi^T x = \Phi\Psi^T(\hat{x} + e)$. By assuming a sparse prior of signals over the $\Psi$ domain, we apply a columnwise hard thresholding operator $S_\lambda(\cdot)$ (which shall be defined in the next subsection) to $\Psi^T(\hat{x} + e)$, such that
$$\hat{x} = \Phi S_\lambda(\Psi^T x),$$
where $\lambda$ is a vector with elements $\lambda_i$ corresponding to $\psi_i, i = 1, 2, \ldots, M$. Apparently, $S_\lambda(\Psi^T x)$ gives the sparse coefficients of $x$ under $\Psi$ in the sense of an analysis model, while it also serves as the sparse coefficients under $\Phi$ in the sense of a synthesis model. In other words, Equation (7) assumes that the synthesis and analysis models share almost the same sparse coefficients.
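To make Equation (7) concrete, here is a minimal numpy sketch. It is illustrative only: the frame is randomly generated rather than learnt, and a Parseval frame is used as its own canonical dual.

```python
import numpy as np

def hard_threshold(c, lam):
    """Columnwise hard thresholding S_lambda: zero the entries with |c_i| < lam_i."""
    c = c.copy()
    c[np.abs(c) < lam[:, None]] = 0.0
    return c

rng = np.random.default_rng(0)
N, M = 4, 8

# A synthetic Parseval frame: drop the singular values of a random N x M
# matrix (keep its polar factor) so that Phi @ Phi.T = I.
U, _, Vt = np.linalg.svd(rng.standard_normal((N, M)), full_matrices=False)
Phi = U @ Vt
Psi = Phi                      # the canonical dual of a Parseval frame is itself

x = rng.standard_normal((N, 1))
lam = np.zeros(M)              # with zero thresholds, (7) reconstructs x exactly
x_hat = Phi @ hard_threshold(Psi.T @ x, lam)
```

With $\lambda = 0$ the reconstruction is exact ($\hat{x} = \Phi\Phi^T x = x$), illustrating the perfect reconstruction property; positive thresholds trade exactness for sparsity.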
As is well known, the standard orthogonal basis, a significant tool in signal representation and transformation, is a special kind of frame with frame bounds $A = B = 1$; in fact, it is a special case of a Parseval frame. In order to go beyond the so-called perfect reconstruction property of the standard orthogonal basis in signal representation and transformation, we turn to Parseval frames. Therefore, we propose the data-driven redundant transform based on Parseval frames (DRTPF), as follows:
$$y = S_\lambda(\Psi^T x),$$
$$\hat{x} = \Phi y,$$
$$\mathrm{s.t.} \quad \Phi\Psi^T = I, \quad \|y\|_0 \le s,$$
$$\sum_{i=1}^M |\langle x, \phi_i\rangle|^2 = \|x\|_2^2,$$
where (8) is the forward transform and (9) is the backward transform. The relationship between $\Phi$ and $\Psi$ is formulated as (10), which expresses the relationship between the frame and its dual frame. The forward transform operator $\Psi$ is also a Parseval frame, as it is a dual frame of $\Phi$. Thus, the projection of the signal $x$ over the $\Psi$ domain can be formulated as
$$\sum_{i=1}^M |\langle x, \psi_i\rangle|^2 = \|x\|_2^2.$$
Equation (12) indicates that the transform coefficients of the proposed DRTPF are bounded by the norm of the original signal $x$. This constraint leads to more robust results than those of traditional sparse models.
To convert DRTPF into an optimization problem, (11) can be written as the more compact expression $\Phi\Phi^T = I$, which characterizes $\Phi$ in a way that is unrelated to the data. This property indicates that the rows of the frame $\Phi$ are orthogonal, thus satisfying the so-called perfect reconstruction property, which ensures that a given signal can be perfectly represented by its canonical expansion (in a manner similar to orthogonal bases).
Assuming $X \in \mathbb{R}^{N \times L}$ is the training data with signal vectors $x_i \in \mathbb{R}^N, i = 1, 2, \ldots, L$ as its columns, an optimization model for training DRTPF can be written as
$$\min_{\Phi, \Psi, \lambda, Y} \|X - \Phi Y\|_F^2 + \eta_1\|Y - S_\lambda(\Psi^T X)\|_F^2 + \eta_2\|Y\|_0 + \eta_3\|\Phi\Psi^T - I\|_F^2 \quad \mathrm{s.t.} \quad \Phi\Phi^T = I.$$
The dual frame condition $\Phi\Psi^T = I$ and the Parseval frame condition $\Phi\Phi^T = I$ imply that the difference between $\Psi$ and $\Phi$ lies in the null space of $\Phi$. Denote $\Psi - \Phi = [a_1^T, a_2^T, \ldots, a_N^T]^T$. The rows $a_i, i = 1, 2, \ldots, N$ are orthogonal to the rows of $\Phi$. Thus, it is clear that the dual frame $\Psi$ combines two subspaces: one spanned by the rows of $\Phi$ and one spanned by the $a_i, i = 1, 2, \ldots, N$.
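This relation can be checked numerically. A small sketch, with a randomly generated Parseval frame for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 4, 8

# Synthetic Parseval frame: Phi @ Phi.T = I (polar factor of a random matrix).
U, _, Vt = np.linalg.svd(rng.standard_normal((N, M)), full_matrices=False)
Phi = U @ Vt

# Build a dual frame by adding to Phi a component whose rows lie in the
# null space of Phi; the projector onto that null space is I - Phi.T @ Phi.
P_null = np.eye(M) - Phi.T @ Phi
Psi = Phi + rng.standard_normal((N, M)) @ P_null

assert np.allclose(Phi @ Psi.T, np.eye(N))    # dual frame condition
assert np.allclose(Phi @ (Psi - Phi).T, 0.0)  # Psi - Phi is annihilated by Phi
```

Any matrix of this form is a valid dual, which is exactly the non-uniqueness that the learning model exploits when optimizing $\Psi$.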

3.2. Transform Learning for the DRTPF Model

As there is no existing algorithm for solving problem (13), we apply the alternating direction method (ADM) and divide (13) into two sub-problems: a sparse coding phase, which updates the sparse coefficients $Y$ and the threshold values $\lambda$ (Section 3.2.1); and a transform operator pair update phase, which computes $\Phi$ and $\Psi$ (Section 3.2.2).

3.2.1. Sparse Coding Phase

This subsection presents the sparse coding method for the proposed DRTPF model, in which the sparse coefficients $Y$ are obtained by OMP and the threshold values $\lambda$ are obtained by a designed elementwise method.

The Y Subproblem

The pursuit of Y is equivalent to solving the following problem with fixed Φ , Ψ , and  λ :
$$\hat{Y} = \arg\min_Y \|X - \Phi Y\|_F^2 + \eta_1\|Y - S_\lambda(\Psi^T X)\|_F^2 + \eta_2\|Y\|_0,$$
which can be easily solved by OMP [14,34], as (14) can be converted to the classical synthesis sparse expression $\min_Y \|Z - DY\|_F^2$ such that $\|y_i\|_0 \le s$, where $Z = \begin{bmatrix} X \\ \sqrt{\eta_1}\, S_\lambda(\Psi^T X) \end{bmatrix}$ and $D = \begin{bmatrix} \Phi \\ \sqrt{\eta_1}\, I \end{bmatrix}$.
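The variable stacking can be verified numerically. In this sketch, random matrices stand in for the learnt quantities, and $\sqrt{\eta_1}$ is used as the scaling that makes the stacked objective match the first two terms of (14):

```python
import numpy as np

rng = np.random.default_rng(2)
N, M, L = 4, 8, 10
eta1 = 1.1

Phi = rng.standard_normal((N, M))
X = rng.standard_normal((N, L))
S = rng.standard_normal((M, L))   # stands in for S_lambda(Psi.T @ X)
Y = rng.standard_normal((M, L))   # any candidate sparse code

# Stacked data and dictionary: an OMP solver applied to min ||Z - D Y||_F^2
# now minimizes the first two terms of (14) jointly.
Z = np.vstack([X, np.sqrt(eta1) * S])
D = np.vstack([Phi, np.sqrt(eta1) * np.eye(M)])

direct  = np.linalg.norm(X - Phi @ Y) ** 2 + eta1 * np.linalg.norm(Y - S) ** 2
stacked = np.linalg.norm(Z - D @ Y) ** 2
assert np.isclose(direct, stacked)
```

Any standard OMP implementation can then be run on $(Z, D)$ without modification.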

The λ Subproblem

With fixed Φ , Ψ , and  Y , finding λ is equivalent to solving the following problem
$$\hat{\lambda} = \arg\min_\lambda \|Y - S_\lambda(\Psi^T X)\|_F^2,$$
which can be decomposed into M individual optimization problems $\arg\min_{\lambda_i} \|\bar{y}_i - S_{\lambda_i}(\psi_i^T X)\|_2^2, i = 1, \ldots, M$, where $\bar{y}_i$ denotes the $i$th row of $Y$. By denoting $J_i := \mathrm{supp}(S_{\lambda_i}(\psi_i^T X))$, the set of indices of the non-zero elements of $S_{\lambda_i}(\psi_i^T X)$, we have
$$S_{\lambda_i}(\psi_i^T x_j) = \psi_i^T x_j, \ j \in J_i; \qquad S_{\lambda_i}(\psi_i^T x_j) = 0, \ j \in \{1, \ldots, L\} \setminus J_i.$$
As the cardinality of J i depends on λ i , we transform (15) to another optimization problem:
$$\hat{\lambda}_i = \arg\min_{\lambda_i} \underbrace{\sum_{j \in \{1, \ldots, L\} \setminus J_i} y_{ij}^2}_{f(\lambda_i)} + \underbrace{\sum_{j \in J_i} (y_{ij} - \psi_i^T x_j)^2}_{g(\lambda_i)},$$
where $y_{ij}$ denotes the $(i, j)$th entry of $Y$ and $x_j$ denotes the $j$th column of $X$. Denote $l(\lambda_i)$ as
$$l(\lambda_i) = \sum_{j \in \{1, \ldots, L\} \setminus J_i} (y_{ij} - \psi_i^T x_j)^2.$$
We observe that the function $f(\lambda_i)$ is monotonically increasing, while $g(\lambda_i)$ is monotonically decreasing. We take $\psi_i^T x_j, j = 1, 2, \ldots, L$ as candidates and compute all the values of $f(\lambda_i) + g(\lambda_i)$. Then, the optimal $\lambda_i$ lies in an interval determined by $\psi_i^T x_k$ and $\psi_i^T x_l$, which correspond to the smallest and the second smallest values of $f(\lambda_i) + g(\lambda_i)$, respectively. Any suitable value in this interval can then be selected for $\lambda_i$. The threshold algorithm is summarized as Algorithm 1.
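A brute-force version of this candidate search, evaluating $f + g$ directly at every candidate instead of using the incremental updates of Algorithm 1 (names are illustrative):

```python
import numpy as np

def best_threshold(y_row, proj):
    """Pick the candidate lam = |psi_i^T x_j| minimizing f(lam) + g(lam)."""
    def cost(lam):
        keep = np.abs(proj) >= lam                # surviving index set J_i
        f = np.sum(y_row[~keep] ** 2)             # energy of zeroed coefficients
        g = np.sum((y_row[keep] - proj[keep]) ** 2)
        return f + g
    return min(np.abs(proj), key=cost)

rng = np.random.default_rng(3)
proj = rng.standard_normal(50)                    # psi_i^T X for one atom
y_row = np.where(np.abs(proj) > 1.0, proj, 0.0)   # a row of Y, sparse by design
lam = best_threshold(y_row, proj)

# The recovered threshold reproduces the support of y_row exactly.
assert np.allclose(np.where(np.abs(proj) >= lam, proj, 0.0), y_row)
```

Algorithm 1 obtains the same candidates in $O(L)$ per atom by sorting and accumulating $f$ and $l$ incrementally, rather than recomputing each cost from scratch.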

3.2.2. Transform Pair Update Phase

The Ψ Subproblem

With fixed Y and λ , the optimization problem to obtain Ψ is given by
$$\hat{\Psi} = \arg\min_\Psi \|Y - S_\lambda(\Psi^T X)\|_F^2 + \frac{\eta_3}{\eta_1}\|\Phi\Psi^T - I\|_F^2.$$
Algorithm 1: Sparse coding algorithm.
Input and Initialization:
  Training data $X \in \mathbb{R}^{N \times L}$, iteration number r, initial value $\lambda = 0$.
Output:
  Sparse coefficients $Y$ and threshold values $\lambda$.
Process:
1: Compute the sparse coefficients $Y$ via (14), according to the OMP algorithm [14,34].
2: Sort the columns of $X$ and the entries of $Y$ in increasing order of $|\psi_i^T X|$.
3: For i = 1 : r
     For j = 1 : L
       Compute all the possible values of $f(\lambda_i) + g(\lambda_i)$ incrementally: $f(\psi_i^T x_j) = f(\psi_i^T x_{j-1}) + y_{ij}^2$; $l(\psi_i^T x_j) = l(\psi_i^T x_{j-1}) + (y_{ij} - \psi_i^T x_j)^2$; $g(\psi_i^T x_j) = \|\psi_i^T X\|_2^2 - l(\psi_i^T x_j)$. Collect them as a vector $\nu$.
     End for
4:   Sort the elements of $|\psi_i^T X|$ and the columns of $X$ in descending order of $\nu$. Denote the first and second samples as $x_{i_1}$ and $x_{i_2}$. Set $\lambda_i = \frac{|\psi_i^T x_{i_1}| + |\psi_i^T x_{i_2}|}{2}$.
   End for
Such a problem is a highly nonlinear optimization problem, due to the definition of $S_\lambda$. We solve for $\Psi$ columnwise, updating each column of $\Psi$ while fixing the others. As $\|\Phi\Psi^T - I\|_F = \|\Psi\Phi^T - I\|_F$, the product $\Psi\Phi^T$ can be written as
$$\Psi\Phi^T = \sum_{p=1}^M \psi_p\phi_p^T = \psi_i\phi_i^T + \sum_{p \ne i} \psi_p\phi_p^T, \qquad \text{so that} \quad \Psi\Phi^T - I = \psi_i\phi_i^T - \Big(I - \sum_{p \ne i} \psi_p\phi_p^T\Big).$$
For each ψ i , we solve the following subproblem:
$$\min_{\psi_i} \|\bar{y}_i - S_{\lambda_i}(\psi_i^T X)\|_2^2 + \frac{\eta_3}{\eta_1}\|\psi_i\phi_i^T - Z\|_F^2,$$
where $Z = I - \sum_{p \ne i} \psi_p\phi_p^T$. We denote $J_i$ to be the index set of non-zero elements (as before), and then separate the problem into the two following sub-problems:
$$\hat{\psi}_i^1 = \arg\min_{\psi_i} \sum_{j \in J_i} (y_{ij} - \psi_i^T x_j)^2 + \frac{\eta_3}{\eta_1}\|\psi_i\phi_i^T - Z\|_F^2,$$
$$\hat{\psi}_i^2 = \arg\min_{\|\psi_i\|_2 = 1} \sum_{j \in \{1, \ldots, L\} \setminus J_i} (\psi_i^T x_j)^2,$$
where $y_{ij}$ denotes the $(i, j)$th entry of $Y$ and $x_j$ denotes the $j$th column of $X$. Equation (21) is a quadratic optimization, while Equation (22) has a closed-form solution given by the normalized singular vector corresponding to the smallest singular value of $X_{\hat{J}_i}$, the submatrix of $X$ whose columns are indexed by $\{1, \ldots, L\} \setminus J_i$. Based on the solutions of the two sub-problems, we give the solution of (20) as $\hat{\psi}_i = \frac{1}{2}(\hat{\psi}_i^1 + \|\hat{\psi}_i^1\|_2\, \hat{\psi}_i^2)$. Please note that the second solution is scaled by the norm of the first solution, as (21) serves as the dominant term for the $\Psi$ subproblem, while the solution of (22) carries no energy, only direction.

The Φ Subproblem

With fixed Y , λ , and  Ψ , the model to obtain Φ is given by
$$\min_\Phi \|X - \Phi Y\|_F^2 + \eta_3\|\Phi\Psi^T - I\|_F^2 \quad \mathrm{s.t.} \quad \Phi\Phi^T = I.$$
Using the constraint $\Phi\Phi^T = I$, we convert (23) to an optimization problem formulated as
$$\min_\Phi \|X - \Phi Y\|_F^2 + \eta_3\|\Phi(\Phi - \Psi)^T\|_F^2.$$
We denote the target function of (24) by $h(\Phi)$, apply the gradient descent method to the unconstrained problem (24), and project the solution onto the feasible set. The gradient is given by
$$\nabla h(\Phi) = (\Phi Y - X)Y^T + \eta_3\left[\Phi(\Phi - \Psi)^T(\Phi - \Psi) + \Phi(\Phi - \Psi)^T\Phi\right] = (\Phi Y - X)Y^T + \eta_3\,\Phi(\Phi - \Psi)^T(2\Phi - \Psi).$$
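A sketch of this update step. The projection onto $\{\Phi : \Phi\Phi^T = I\}$ via the polar factor is our reading of "project the solution to the feasible space"; the projection operator is not spelled out in the text, and all data here are random stand-ins:

```python
import numpy as np

def grad_h(Phi, Psi, X, Y, eta3):
    """Gradient (25) of the unconstrained objective (24)."""
    D = Phi - Psi
    return (Phi @ Y - X) @ Y.T + eta3 * Phi @ D.T @ (2 * Phi - Psi)

def project_parseval(Phi):
    """Nearest matrix with orthonormal rows (polar factor via SVD)."""
    U, _, Vt = np.linalg.svd(Phi, full_matrices=False)
    return U @ Vt

rng = np.random.default_rng(4)
N, M, L = 4, 8, 20
X = rng.standard_normal((N, L))
Y = rng.standard_normal((M, L))
Psi = rng.standard_normal((N, M))
Phi = project_parseval(rng.standard_normal((N, M)))

step = 0.01                        # step length used in Algorithm 2
for _ in range(10):
    Phi = project_parseval(Phi - step * grad_h(Phi, Psi, X, Y, eta3=1.0))

assert np.allclose(Phi @ Phi.T, np.eye(N))   # feasibility preserved
```

Each projected step keeps all singular values of $\Phi$ equal to 1, so the Parseval constraint holds exactly throughout the iterations.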
We summarize our overall algorithm in Algorithm 2.    
Algorithm 2: Transform pair learning algorithm.
Input and Initialization:
  Training data $X$, frame bounds $(A, B)$, iteration number $num$.
  Build frames $\Phi \in \mathbb{R}^{N \times M}$ and $\Psi \in \mathbb{R}^{N \times M}$, either with random entries or from randomly chosen data samples.
Output:
  Frames $\Phi$, $\Psi$, sparse coefficients $Y$, and thresholding values $\lambda$.
Process: For l = 1 : $num$
  Sparse Coding Step:
1: Compute the sparse coefficients $Y$ and the thresholding values $\lambda$ via Algorithm 1.
  Frame Update Step:
2: Update $\Psi$ columnwise. Compute $W = S_\lambda(\Psi^T X)$.
   For i = 1 : M
     Denote $\hat{J}_i$ as the indices of the zeros in the $i$th row of $W$. Set $\psi_i^T X_{\hat{J}_i} = 0$. Compute $\psi_i$ via (21) and (22).
   End for
3: Update $\Phi$ via gradient descent, with the gradient given by (25); the step length is usually set to 0.01.
End for

4. Image Denoising

We introduce a novel problem formulation for signal denoising by applying the data-driven redundant transform DRTPF. Image denoising aims to reconstruct a high-quality image $I$ from its noise-corrupted version $L$, formulated as $L = I + n$, where $n$ is the noise. For signals satisfying the DRTPF model, the denoising model based on DRTPF is formulated as
$$\{\hat{I}, \hat{Y}, \hat{\lambda}\} = \arg\min_{I, \{y_i\}, \lambda} \|L - I\|_F^2 + \gamma\sum_i \|R_i I - \Phi y_i\|_F^2 + \gamma_1\sum_i \|y_i - S_\lambda(\Psi^T R_i I)\|_F^2 + \gamma_2\sum_i \|y_i\|_0,$$
where $R_i$ is an operator that extracts the $i$th patch of the image $I$, $y_i$ is the $i$th column of $Y$, and $\lambda$ denotes a vector $[\lambda_1, \lambda_2, \ldots, \lambda_M]$ with $\lambda_j$ operating on the $j$th element of $\Psi^T R_i I$. On the right side of Equation (26), the first term is the global fidelity term, which demands proximity between the degraded image $L$ and its high-quality version $I$. The other terms are local constraints, which ensure that every patch at location $i$ satisfies the DRTPF model. This formulation assumes that the noisy image $L$ can be approximated by a noiseless image $\hat{I}$ whose patches, extracted by $R_i$, can be sparsely represented by the given transforms $\Phi$ and $\Psi$.
To solve Problem (26), we apply Algorithm 1 to obtain the sparse coefficients Y and the threshold values λ . We mainly state the iterative method to obtain I .
Denote $d^k = \Psi^T R_i I^{k-1}$. We set $O^k$ as the index set satisfying $|d_l^k| \ge \lambda_l, l \in O^k$. Set $u^k \in \mathbb{R}^M$ as a vector with elements $u_l^k = 1$ if $l \in O^k$, and $u_l^k = 0$ otherwise. Then, the non-convex and non-smooth thresholding can be removed with the substitution $y_i - S_\lambda(\Psi^T R_i I^k) \to y_i - (\Psi^T R_i I^k) \odot u^k$. Thus, in the $k$th step, the problem that needs to be solved can be expressed as
$$\hat{I}^k = \arg\min_{I^{k-1}} \|L - I^{k-1}\|_F^2 + \gamma\sum_i \|R_i I^{k-1} - \Phi y_i\|_F^2 + \gamma_1\sum_i \|y_i - (\Psi^T R_i I^{k-1}) \odot u^k\|_F^2,$$
where ⊙ is pointwise multiplication. This convex problem can be easily solved by the gradient descent algorithm.
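A toy sketch of this inner step, with a single "patch" covering the whole signal so that $R_i$ reduces to the identity (the gradient expression below follows from differentiating (27) in this simplified setting; all data are synthetic):

```python
import numpy as np

def grad_27(I_k, L_img, Phi, Psi, y, u, gamma, gamma1):
    """Gradient of the convex surrogate (27) w.r.t. the image, for one
    patch with R_i taken as the identity (toy setting)."""
    resid = y - (Psi.T @ I_k) * u
    return (2 * (I_k - L_img)
            + 2 * gamma * (I_k - Phi @ y)
            - 2 * gamma1 * (Psi @ (u * resid)))

rng = np.random.default_rng(7)
N, M = 4, 8
U, _, Vt = np.linalg.svd(rng.standard_normal((N, M)), full_matrices=False)
Phi = U @ Vt                                  # synthetic Parseval frame
Psi = Phi                                     # its canonical dual
L_img = rng.standard_normal((N, 1))           # degraded "image" (one patch)
y = rng.standard_normal((M, 1))               # fixed sparse code
u = (np.abs(Psi.T @ L_img) >= 0.5).astype(float)   # mask u^k from the thresholds

I_k, step = L_img.copy(), 0.05
for _ in range(200):
    I_k = I_k - step * grad_27(I_k, L_img, Phi, Psi, y, u, gamma=1.0, gamma1=1.0)
```

Because the surrogate is a convex quadratic, plain gradient descent with a small fixed step drives the gradient to zero, matching the statement above.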
We summarize the restoration algorithm in Algorithm 3.    
Algorithm 3: Denoising algorithm.
Input:
  Trained dictionaries $\Phi$, $\Psi$, iteration number r, a degraded image $L$; set $I^0 = L$.
Output:
  The high-quality image $\hat{I}$.
1: Compute $Y$ and $\lambda$ via the method in Algorithm 1.
For k = 1 : r
2: Compute $d^k = \Psi^T R_i I^{k-1}$. Set $O^k$ as the index set satisfying $|d_l^k| \ge \lambda_l, l \in O^k$. Set $u_l^k = 1$ if $l \in O^k$, and 0 otherwise.
3: Solve Problem (27) via the gradient descent algorithm.
End for

5. Experimental Results

We demonstrate the effectiveness of our proposed data-driven redundant transform based on Parseval frames (DRTPF) by first analyzing the robustness of the model against Gaussian white noise. Then, we discuss the convergence of the proposed transform learning algorithm and the ability of the proposed DRTPF to provide low sparsification errors. Finally, we evaluate the effectiveness of the proposed DRTPF by applying it to natural image denoising. We use a fixed step size in the transform update and denoising steps of our algorithms.

5.1. Robustness Analysis

In this subsection, we illustrate the robustness of DRTPF by training it on the image ‘Barbara’ and testing it by denoising the same image with added Gaussian white noise. The noise level (standard deviation) $\sigma$ ranged from 20 to 60 with a step size of 2. In the experiment, the frames $\Phi$ and $\Psi$ of size $100 \times 200$ were initialized as 1D overcomplete DCT (ODCT), and $10 \times 10$ overlapping mean-subtracted patches were used. The patch size was set as $8 \times 8$ with stride 1. We set the parameters $\eta_1 = 1.1$ and $\eta_3 = 10^7$, and the $\ell_0$ penalty $\eta_2\|Y\|_0$ was replaced by hard thresholding at $0.6\sigma$. Our proposed algorithm was compared with K-SVD [14]. The size of the dictionary learnt by K-SVD is $8 \times 256$ at its optimal state, according to previous work.
We show the denoising results in Figure 1, from which it is apparent that the higher the noise level, the more our DRTPF method outperformed K-SVD. In other words, our proposed model has good robustness. In fact, in our model, the sparse coefficients are calculated exactly as the inner products of the signals with the frame $\Psi$, and are limited to a certain range; theoretically, this should make the model more robust. The learnt transforms $\Phi$ and $\Psi$ are illustrated in Figure 2. These figures show that our frame learning method can capture features in both analysis and synthesis fashions. Figure 3 shows two visual results on the image ‘Barbara’ at noise levels $\sigma = 30$ and $\sigma = 50$. From Figure 3, we can see that our proposed DRTPF obtains clearer features than K-SVD [14].

5.2. Sparsification of Natural Images

A classic sparsifying transform learning model [21] is formulated as
$$\min_{W, Y} \|WX - Y\|_F^2 - \lambda\log\det W + \mu\|W\|_F^2 \quad \mathrm{s.t.} \quad \|y_i\|_0 \le s \ \forall i,$$
where $X$ is the training data, $Y$ are the sparse coefficients, and $W$ is the learnt transform. The quality of the learnt transforms in the experiments of [21] was judged based on their condition number and sparsification error. Similar to the experimental setting in [21], we also evaluate the effectiveness of the transforms learnt by our DRTPF via their condition number and sparsification error. The $\ell_2$-norm condition number of the transform operator $\Phi$ is defined as the ratio of the maximum singular value to the minimum singular value of $\Phi$; that is,
$$K_\Phi = \frac{\delta_{\max}(\Phi)}{\delta_{\min}(\Phi)}.$$
In our case, the condition number $K_\Phi = 1$, as the maximum and minimum singular values (which are determined by the optimal frame bounds) are both equal to 1. Similarly, $K_\Psi = 1$. A condition number equal to 1 is the best case for a transform operator. The sparsification error of model (28) is defined as
$$SE = \|WX - Y\|_F^2.$$
Similarly, we define the ‘sparsification error’ of the proposed DRTPF, to measure the energy loss due to sparse representation, which is formulated as
$$\widetilde{SE} = \|Y - S_\lambda(\Psi^T X)\|_F^2.$$
The ‘sparsification error’ indicates the compaction ability of the transform $\Psi$, with reasonable ignorance of the thresholding operator $S_\lambda(\cdot)$.
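Both quality measures are simple to compute. A sketch, with a synthetic Parseval frame for illustration:

```python
import numpy as np

def cond_number(Phi):
    """l2 condition number: ratio of extreme singular values."""
    s = np.linalg.svd(Phi, compute_uv=False)
    return s.max() / s.min()

def sparsification_error(Y, Psi, X, lam):
    """'Sparsification error' of DRTPF: ||Y - S_lambda(Psi^T X)||_F^2."""
    C = Psi.T @ X
    C[np.abs(C) < lam[:, None]] = 0.0        # columnwise hard thresholding
    return np.linalg.norm(Y - C) ** 2

rng = np.random.default_rng(5)
U, _, Vt = np.linalg.svd(rng.standard_normal((4, 8)), full_matrices=False)
Phi = U @ Vt                                 # Parseval frame: all singular values 1

# If Y equals the thresholded analysis coefficients, the error is zero.
X = rng.standard_normal((4, 6))
lam = np.full(8, 0.5)
Y = Phi.T @ X
Y[np.abs(Y) < lam[:, None]] = 0.0
```

For any Parseval frame, all singular values are 1, so the condition number is exactly 1, the optimal value discussed above.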
To demonstrate that our model and algorithms are insensitive to the initialized transforms, we applied the proposed sparse coding and transform operator pair learning algorithms to train a pair of transforms. The training data are patches of size $10 \times 10$ extracted from the image ‘Barbara’, which is shown in Figure 4. The trained transform pair is of size $100 \times 200$. We extracted the patches without overlap and removed the DC value of every sample. We set the parameters $\eta_1 = 1.1$ and $\eta_3 = 10^7$, and the $\ell_0$ penalty was replaced by hard thresholding at $0.6\sigma$, as before. The matrices used for initialization were the 1D DCT matrix, a matrix with random columns sampled from the training data, and the redundant identity matrix. As the transform for DRTPF is redundant, the redundant identity matrix here is formed as $[I\ I]$, where $I$ is the identity matrix of size $100 \times 100$.
The convergence curves of the objective function and the ‘sparsification error’ are shown in Figure 5. From the left sub-figure of Figure 5, we can see that our proposed algorithm for DRTPF converges, and all the initializations converge to the same result after about 20 iterations, which demonstrates that the proposed DRTPF and the corresponding algorithm are insensitive to different initializations. The right sub-figure of Figure 5 shows the ‘sparsification error’ of the three initialization methods, the 2D DCT transform, and the KLT transform. The 2D DCT is formed by the Kronecker product of two 1D DCT transforms, i.e., $D = D_0 \otimes D_0$, where $D_0$ is the 1D DCT transform of size $8 \times 8$ and $\otimes$ denotes the Kronecker product. The KLT transform $K$ of size $64 \times 64$ is obtained by the principal component analysis (PCA) method. The ‘sparsification errors’ of 2D DCT and KLT are calculated via the model in [21] at iteration zero. This figure shows that the ‘sparsification error’ of the proposed DRTPF model also converges and is insensitive to the initialization matrices. In fact, the loss function of the proposed DRTPF mainly contains two parts: $\|X - \Phi Y\|_F^2$ and $\|Y - S_\lambda(\Psi^T X)\|_F^2$. The first part is the recovery loss (i.e., the loss in the temporal domain) and the second part is the ‘sparsification error’ (i.e., the loss in the frequency domain). Our proposed model aims to achieve low error in both the temporal and frequency domains.
To illustrate the behavior of the proposed DRTPF in image representation, we chose the six images shown in Figure 4 to train transforms and recover images. Figure 6 shows the average sparsity curve and the recovery PSNR values as the sparsity increases. From the left sub-figure, we can see that the images are well sparsified along the iterative process. This figure was generated by setting $\|y_i\|_0 < 5$; the recovery PSNR is 32.27 dB. For each sample $x_i$, vectorized from a $10 \times 10$ patch, its corresponding sparse coefficient vector $y_i$ is of length 200, so the sparsity rate is lower than 2.5%. Furthermore, less than 5% of the data needs to be stored to recover an image with PSNR larger than 32.27 dB. The right sub-figure of Figure 6 shows the average recovery PSNR values as the sparsity increases, which is a main measurement of the quality of the learnt transform. From the figure, we can see that, in most cases, our proposed DRTPF obtains better image quality in terms of PSNR at lower sparsity than the compared LST [21] method and the classic DCT transform. The transforms for the LST [21] method and the classic DCT are of size $64 \times 64$. The transform of LST [21] was trained on 4096 samples of size $8 \times 8$ extracted from every image shown in Figure 6, with the means of the patches removed. The experiments were set up as illustrated in [21]. When the total sparsity of a $512 \times 512$ image is more than 47,000, the recovery results of the proposed DRTPF and LST [21] are nearly the same; the recovery PSNR at sparsity 47,000 is 37.3 dB.

5.3. Image Denoising

In this subsection, we evaluate the performance of our DRTPF model using six natural images of size 512 × 512, which are shown in Figure 4. We added Gaussian white noise to these images at different noise levels ( σ = 20 , 30 , 40 , 50 , 60 ). We set the parameters η 1 = 1.1 and η 3 = 1 e + 7 , and η 2 was replaced by the thresholding value 0.6 σ , as before. We compared DRTPF with three related sparse representation methods, K-SVD [14], the overcomplete transform (T.KSVD) [3], and the learning-based frame (DTF) [33], as well as two nonlocal methods, BM3D [35] and WNNM [36], whose parameters were set as in the corresponding papers. We note that DTF works on filters, instead of image patches. In this experiment, the settings of our DRTPF method and K-SVD were the same as in Section 5.1. All methods were trained iteratively (25 iterations). The DTF method was initialized with 64 3-level Haar wavelet filters of size 16 × 16. The operator of the T.KSVD method was of size 128 × 64, it worked on 8 × 8 overlapping mean-subtracted patches, and its hard threshold was s = 30.
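The noise model and evaluation metric here are standard; the snippet below, using a random stand-in image rather than the test set, shows how Gaussian white noise at a given σ is added and how PSNR is computed for 8-bit data.

```python
import numpy as np

def psnr(clean, distorted, peak=255.0):
    """Peak signal-to-noise ratio in dB for images with the given peak value."""
    mse = np.mean((np.asarray(clean, float) - np.asarray(distorted, float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(512, 512)).astype(float)  # stand-in for a 512x512 test image
for sigma in (20, 30, 40, 50, 60):
    noisy = img + sigma * rng.standard_normal(img.shape)
    # Empirically close to 10*log10(255^2 / sigma^2) since the empirical MSE is near sigma^2.
    print(sigma, round(psnr(img, noisy), 2))
```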
Table 1 shows the comparison results in terms of average PSNR. As shown in Table 1, our DRTPF method and the DTF method outperformed K-SVD and T.KSVD on most images; for example, at noise level σ = 60 , DRTPF outperforms K-SVD by 0.47 dB and T.KSVD by 0.76 dB. This result implies that frame-based methods are more robust against noise. Furthermore, the higher the noise level, the larger the gain of DRTPF and DTF over K-SVD and T.KSVD. We can also see that our DRTPF method outperformed DTF on most of the images, especially when the noise level was very high. In fact, in our model, the sparse coefficients are calculated exactly as the inner products of the signals with the frame Ψ , and are limited to a certain range; theoretically, the model should therefore perform better than the compared methods. Figure 7 shows two visual examples on the images ‘Boat’ and ‘Man’ at noise level σ = 40 . The PSNR values of K-SVD, T.KSVD, DTF, and the proposed DRTPF are 27.17 dB, 26.14 dB, 26.99 dB, and 27.34 dB for ‘Man’, and 27.23 dB, 26.45 dB, 27.20 dB, and 27.39 dB for ‘Boat’. Our proposed DRTPF and the DTF method preserve more features and yield higher PSNR values on the two images than K-SVD and T.KSVD. Although DTF provides higher PSNR values and better visual quality than K-SVD and T.KSVD, its results suffer from deformation and margin smoothing, since it is based on filters. The proposed DRTPF shows much clearer visual results than the other competing methods, without any deformation.
All six methods can be classified into two categories: (1) methods without any extra constraint, such as nonlocal similarity; and (2) methods with an additional prior, such as nonlocal similarity. Our proposed DRTPF belongs to category (1). We would like to point out that our goal was to establish a redundant transform learning method, not to focus on image denoising: our model is plain, applying no extra prior beyond the basic sparsity of the signals. The experimental results demonstrate that our proposed model achieves better performance than traditional sparse models in image denoising. The methods BM3D and WNNM, however, are based on image nonlocal self-similarity (NSS). The NSS prior refers to the fact that, for a given local patch in a natural image, one can find many patches similar to it across the image. Intuitively, a matrix formed by stacking nonlocal similar patch vectors should be low-rank and have sparse singular values. Exploiting NSS has been shown to significantly boost image denoising performance; we have not incorporated this prior into our model.
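A toy illustration of the NSS intuition, using synthetic “similar patches” rather than the block matching of BM3D/WNNM: stacking near-duplicate patch vectors yields a matrix whose energy concentrates in a few singular values.

```python
import numpy as np

rng = np.random.default_rng(0)
base = rng.standard_normal(64)  # one vectorized 8x8 patch
# 30 "nonlocal similar" patches: small perturbations of the same base patch.
group = np.stack([base + 0.05 * rng.standard_normal(64) for _ in range(30)], axis=1)

s = np.linalg.svd(group, compute_uv=False)
energy_top1 = s[0] ** 2 / np.sum(s ** 2)
print(round(float(energy_top1), 3))  # the leading singular value carries nearly all the energy
```

Low-rank methods such as WNNM exploit exactly this spectral concentration by shrinking the small singular values of each patch group.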

6. Conclusions

In this paper, we propose a Parseval frame-based data-driven redundant transform (DRTPF) to capture the features of images. We also propose the corresponding formulations, as well as algorithms for calculating the sparse coefficients and learning the DRTPF model. We have proposed a general frame learning method that imposes no structure on the frame. By applying frames to redundant transforms, we combine the ideas of the analysis and synthesis sparse models and let them share almost identical sparse coefficients. We conducted robustness analysis, natural image sparsification, and image denoising experiments, which demonstrated that DRTPF outperforms state-of-the-art models, as it exploits the underlying sparsity of natural signals through the integration of frames and sparse models.
In future work, we shall consider more efficient optimization algorithms for DRTPF, which facilitate the representation ability and application of the proposed method.

Author Contributions

M.Z. derived the theory, analyzed the data, performed the performance experiments, and wrote the original draft; Y.S. and N.Q. researched the relevant theory, participated in discussions of the work, and revised the manuscript; B.Y. supervised the project. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by NSFC (Nos. 61672066, 61976011, U1811463, U19B2039, 61632006, 61906008, 61906009), the Beijing Municipal Science and Technology Commission (No. Z171100004417023), and the Common Program of the Beijing Municipal Commission of Education (KM202010005018).

Acknowledgments

This work was supported by Beijing Advanced Innovation Center for Future Internet Technology, Beijing Key Laboratory of Multimedia, and Intelligent Software Technology. We deeply appreciate the organizations mentioned above.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Marcellin, M.W.; Gormish, M.J.; Bilgin, A.B.; Boliek, M.P. An overview of jpeg-2000. In Proceedings of the DCC 2000. Data Compression Conference, Snowbird, UT, USA, 28–30 March 2000; pp. 523–541. [Google Scholar]
  2. Patel, K.; Kurian, N.; George, V. Time Frequency Analysis: A Sparse S Transform Approach. In Proceedings of the 2016 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS), Phuket, Thailand, 24–27 October 2016. [Google Scholar]
  3. Eksioglu, M.; Bayir, O. K-SVD Meets Transform Learning: Transform K-SVD. IEEE Signal Process. Lett. 2014, 21, 347–351. [Google Scholar] [CrossRef]
  4. Ravishankar, S.; Bresler, Y. Learning Sparsifying Transforms For Image Processing. In Proceedings of the 19th IEEE International Conference on Image Processing, Orlando, FL, USA, 30 September–3 October 2012. [Google Scholar]
  5. Ravishankar, S.; Bresler, Y. Learning Doubly Sparse Transforms for Image Representation. In Proceedings of the 19th IEEE International Conference on Image Processing, Orlando, FL, USA, 30 September–3 October 2012. [Google Scholar]
  6. Mallat, S.; Lepennec, E. Sparse geometric image representation with bandelets. IEEE Trans. Image Process 2005, 14, 423–438. [Google Scholar]
  7. Elad, M.; Aharon, M. Image denoising via sparse and redundant representations over learned dictionaries. IEEE Trans. Image Process 2006, 15, 3736–3745. [Google Scholar] [CrossRef]
  8. Dong, W.; Zhang, L.; Shi, G.; Li, X. Nonlocally Centralized Sparse Representation for Image Restoration. IEEE Trans. Image Process 2013, 22, 1620–1630. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  9. Qi, N.; Shi, Y.; Sun, X.; Yin, B. TenSR: Multi-dimensional Tensor Sparse Representation. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 26 June–1 July 2016; pp. 5916–5925. [Google Scholar]
  10. Mairal, J.; Bach, F.; Ponce, J.; Sapiro, G.; Zisserman, A. Non-local sparse models for image restoration. In Proceedings of the 2009 IEEE 12th International Conference on Computer Vision, Kyoto, Japan, 29 September–2 October 2009; pp. 2272–2279. [Google Scholar]
  11. Wang, S.; Zhang, D.; Liang, Y.; Pan, Q. Semi-Coupled Dictionary Learning with Applications to Image Super-Resolution and Photo-Sketch Synthesis. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; pp. 2216–2223. [Google Scholar]
  12. Yang, J.; Wang, Z.; Lin, Z.; Cohen, S.; Huang, T. Coupled dictionary training for image super-resolution. IEEE Trans. Image Process 2012, 21, 3467–3478. [Google Scholar] [CrossRef] [PubMed]
  13. Yang, J.; Wright, J.; Huang, T.; Ma, Y. Image superresolution via sparse representation. IEEE Trans. Image Process 2010, 19, 2861–2873. [Google Scholar] [CrossRef] [PubMed]
  14. Aharon, M.; Elad, M.; Bruckstein, A. K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation. IEEE Trans. Signal Process 2006, 54, 4311–4322. [Google Scholar] [CrossRef]
  15. Skretting, K.; Engan, K. Recursive least squares dictionary learning algorithm. IEEE Trans. Signal Process 2010, 58, 2121–2130. [Google Scholar] [CrossRef]
  16. Yaghoobi, M.; Nam, S.; Gribonval, R.; Davies, M.E. Noise aware analysis operator learning for approximately cosparse signals. In Proceedings of the 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Kyoto, Japan, 25–30 March 2012; pp. 5409–5412. [Google Scholar]
  17. Ophir, B.; Elad, M.; Bertin, N.; Plumbley, M. Sequential minimal eigenvalues—An approach to analysis dictionary learning. In Proceedings of the 2011 19th European Signal Processing Conference, Barcelona, Spain, 29 August–2 September 2011; pp. 1465–1469. [Google Scholar]
  18. Rubinstein, R.; Peleg, T.; Elad, M. Analysis K-SVD: A Dictionary-Learning Algorithm for the Analysis Sparse Model. IEEE Trans. Signal Process. 2012, 61, 661–677. [Google Scholar] [CrossRef] [Green Version]
  19. Rubinstein, R.; Faktor, T.; Elad, M. K-SVD dictionary-learning for the analysis sparse model. In Proceedings of the 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Kyoto, Japan, 25–30 March 2012; pp. 5405–5408. [Google Scholar]
  20. Wen, B.; Ravishankar, S.; Bresler, Y. Learning Overcomplete Sparsifying Transforms With Block Cosparsity. In Proceedings of the ICIP, Paris, France, 27–30 October 2014. [Google Scholar]
  21. Ravishankar, S.; Bresler, Y. Learning Sparsifying Transforms. IEEE Trans. Signal Process. 2013, 61, 1072–1086. [Google Scholar] [CrossRef]
  22. Parekh, A.; Selesnick, W. Convex Denoising using Non-Convex Tight Frame Regularization. IEEE Signal Process. Lett. 2015, 22, 1786–1790. [Google Scholar] [CrossRef] [Green Version]
  23. Lin, J.; Li, S. Sparse recovery with coherent tight frames via analysis Dantzig selector and analysis LASSO. Appl. Comput. Harmon. Anal. 2014, 37, 126–139. [Google Scholar] [CrossRef]
  24. Liu, Y.; Zhan, Z.; Cai, J.; Guo, D.; Chen, Z.; Qu, X. Projected Iterative Soft-Thresholding Algorithm for Tight Frames in Compressed Sensing Magnetic Resonance Imaging. IEEE Trans. Med. Image 2016, 35, 2130–2140. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  25. Chan, H.; Riemenschneider, D.; Shen, L.; Shen, A.Z. Tight frame: An efficient way for high-resolution image reconstruction. Appl. Comput. Anal. 2004, 17, 91–115. [Google Scholar] [CrossRef] [Green Version]
  26. Ron, A.; Shen, Z. Affine system in L2(Rd): The analysis of the analysis operator. J. Funct. Anal. 2013, 148, 408–447. [Google Scholar] [CrossRef] [Green Version]
  27. Candes, E. Ridgelets: Estimating with ridge functions. Ann. Statist. 1999, 31, 1561–1599. [Google Scholar] [CrossRef]
  28. Candes, E.J.; Donoho, D.L. Recovering edges in ill-posed inverse problems: Optimality of curvelet frames. Ann. Statist. 2002, 30, 784–842. [Google Scholar] [CrossRef]
  29. Kutyniok, G.; Labate, D. Construction of regular and irregular shearlet frames. J. Wavelet Theory Appl. 2007, 1, 1–10. [Google Scholar]
  30. Casazza, P.G.; Kutyniok, G.; Philipp, A.F. Introduction to Finite Frame Theory. In Finite Frames; Birkhäuser: Boston, MA, USA, 2012. [Google Scholar]
  31. Cao, C.; Gao, X. Compressed sensing image restoration based on data-driven multi-scale tight frame. J. Comput. Appl. Math. 2017, 309, 622–629. [Google Scholar] [CrossRef]
  32. Li, J.; Zhang, W.; Zhang, H.; Li, L.; Yan, A. Data driven tight frames regularization for few-view image reconstruction. In Proceedings of the 13th International Conference on Natural Computation, Fuzzy Systems and Knowledge Discovery, ICNC-FSKD 2017, Guilin, China, 29–31 July 2017; pp. 815–820. [Google Scholar]
  33. Cai, J.; Ji, H.; Shen, Z.; Ye, G. Data-driven tight frame construction and image denoising. Appl. Comput. Harmon. Anal. 2014, 37, 89–105. [Google Scholar] [CrossRef]
  34. Rubinstein, R.; Zibulevsky, M.; Elad, M. Efficient Implementation of the K-SVD Algorithm Using Batch Orthogonal Matching Pursuit; Technical Report CS-2008-08; Technion-Israel Institute of Technology: Haifa, Israel, 2008. [Google Scholar]
  35. Katkovnik, V.; Foi, A.; Egiazarian, K.; Astola, J. From local kernel to nonlocal multiple-model image denoising. Int. J. Comput. Vis. 2010, 86, 1–32. [Google Scholar] [CrossRef] [Green Version]
  36. Gu, S.; Zhang, L.; Zuo, W.; Feng, X. Weighted Nuclear Norm Minimization with Application to Image Denoising. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 2862–2869. [Google Scholar]
Figure 1. Robustness Analysis. DRTPF is trained and tested using the image ‘Barbara’. The X-label is the noise level δ and the Y-label is the PSNR. It can be seen that DRTPF performs more robustly than K-SVD.
Figure 2. The learnt operators Φ (left) and Ψ (right) for barbara.
Figure 3. Reconstruction of barbara using DRTPF (left) and K-SVD [14] (right). Top: σ = 30 ; bottom: σ = 50 .
Figure 4. The test images ‘Barbara’, ‘Lena’, ‘Hill’, ‘Couple’, ‘Boat’, and ‘Man’.
Figure 5. Convergence Curve and Sparsification Error. In both sub-figures the X-label is the iteration number; the Y-labels are the objective function (left) and the sparsification error (right), respectively. It can be seen that our DRTPF learning algorithm is convergent and insensitive to initialization.
Figure 6. The average Sparsity and Recovery PSNR. Left: The X-label is the iteration number and the Y-label is the average sparsity. Right: The X-label is the average sparsity and the Y-label is the average Recovery PSNR.
Figure 7. Visual comparison of reconstruction results by different methods on ‘Man’ and ‘Boat’. From left to right: original, T.KSVD [3], K-SVD [14], DTF [33], and DRTPF.
Table 1. Average PSNR results of different noise levels on six images.
| σ  | Method      | Barbara | Boat  | Couple | Hill  | Lena  | Man   | Average |
|----|-------------|---------|-------|--------|-------|-------|-------|---------|
| 20 | K-SVD [14]  | 31.01   | 30.50 | 30.15  | 30.27 | 32.51 | 30.26 | 30.78   |
|    | T.KSVD [3]  | 30.02   | 29.30 | 29.25  | 29.21 | 31.45 | 29.01 | 29.71   |
|    | DTF [33]    | 31.07   | 30.35 | 30.20  | 30.31 | 32.56 | 30.07 | 30.76   |
|    | DRTPF       | 31.01   | 30.47 | 30.24  | 30.34 | 32.50 | 30.23 | 30.80   |
|    | BM3D [35]   | 32.01   | 31.02 | 30.88  | 30.85 | 33.19 | 30.83 | 31.47   |
|    | WNNM [36]   | 32.31   | 31.09 | 30.92  | 30.94 | 33.18 | 30.84 | 31.55   |
| 30 | K-SVD [14]  | 28.75   | 28.60 | 28.07  | 28.51 | 30.59 | 28.43 | 28.83   |
|    | T.KSVD [3]  | 27.78   | 27.86 | 27.46  | 27.23 | 29.25 | 27.13 | 27.79   |
|    | DTF [33]    | 29.07   | 28.48 | 28.22  | 28.64 | 30.60 | 28.26 | 28.88   |
|    | DRTPF       | 29.00   | 28.63 | 28.24  | 28.66 | 30.73 | 28.49 | 28.96   |
|    | BM3D [35]   | 30.12   | 29.22 | 28.95  | 29.23 | 31.40 | 29.04 | 29.66   |
|    | WNNM [36]   | 30.32   | 29.30 | 29.02  | 29.33 | 31.50 | 29.10 | 29.76   |
| 40 | K-SVD [14]  | 27.03   | 27.23 | 26.54  | 27.23 | 29.13 | 27.17 | 27.39   |
|    | T.KSVD [3]  | 26.35   | 26.45 | 25.98  | 26.45 | 28.20 | 26.14 | 26.60   |
|    | DTF [33]    | 27.58   | 27.20 | 26.87  | 27.49 | 29.25 | 26.99 | 27.56   |
|    | DRTPF       | 27.50   | 27.39 | 27.00  | 27.56 | 29.35 | 27.34 | 27.69   |
|    | BM3D [35]   | 28.68   | 27.92 | 27.58  | 28.08 | 30.11 | 27.83 | 28.37   |
|    | WNNM [36]   | 28.85   | 27.99 | 27.64  | 28.18 | 30.25 | 27.90 | 28.47   |
| 50 | K-SVD [14]  | 25.71   | 26.05 | 25.42  | 26.29 | 27.92 | 26.18 | 26.26   |
|    | T.KSVD [3]  | 25.10   | 25.56 | 25.03  | 25.89 | 27.01 | 25.40 | 25.67   |
|    | DTF [33]    | 26.45   | 26.15 | 25.84  | 26.63 | 28.15 | 26.09 | 26.55   |
|    | DRTPF       | 26.43   | 26.32 | 25.99  | 26.79 | 28.40 | 26.40 | 26.72   |
|    | BM3D [35]   | 27.48   | 26.89 | 26.49  | 27.20 | 29.06 | 26.94 | 27.34   |
|    | WNNM [36]   | 27.70   | 26.97 | 26.60  | 27.35 | 29.23 | 27.01 | 27.48   |
| 60 | K-SVD [14]  | 24.45   | 25.18 | 24.57  | 25.69 | 27.01 | 25.40 | 25.38   |
|    | T.KSVD [3]  | 24.50   | 24.88 | 24.36  | 25.40 | 26.60 | 24.78 | 25.09   |
|    | DTF [33]    | 25.64   | 25.33 | 25.04  | 25.91 | 27.22 | 25.38 | 25.75   |
|    | DRTPF       | 25.50   | 25.45 | 25.14  | 26.03 | 27.33 | 25.67 | 25.85   |
|    | BM3D [35]   | 26.36   | 26.02 | 25.61  | 26.44 | 28.14 | 26.18 | 26.46   |
|    | WNNM [36]   | 26.59   | 26.12 | 25.74  | 26.60 | 28.33 | 26.26 | 26.61   |
