Article

Group-Based Sparse Representation for Compressed Sensing Image Reconstruction with Joint Regularization

Institute of Fiber-Optic Communication and Information Engineering, College of Information Engineering, Zhejiang University of Technology, Hangzhou 310023, China
*
Author to whom correspondence should be addressed.
Electronics 2022, 11(2), 182; https://doi.org/10.3390/electronics11020182
Submission received: 7 December 2021 / Revised: 4 January 2022 / Accepted: 5 January 2022 / Published: 7 January 2022
(This article belongs to the Collection Graph Machine Learning)

Abstract

Achieving high-quality reconstructions of images is the focus of research in image compressed sensing. Group sparse representation improves the quality of reconstructed images by exploiting the non-local similarity of images; however, block-matching and dictionary learning in the image group construction process lead to long reconstruction times and artifacts in the reconstructed images. To solve the above problems, a joint regularized image reconstruction model based on group sparse representation (GSR-JR) is proposed. A group sparse coefficient regularization term ensures the sparsity of the group coefficients and reduces the complexity of the model. The group sparse residual regularization term introduces the prior information of the image to improve the quality of the reconstructed image. The alternating direction method of multipliers and an iterative thresholding algorithm are applied to solve the optimization problem. Simulation experiments confirm that the optimized GSR-JR model is superior to other advanced image reconstruction models in reconstructed image quality and visual effects. When the sensing rate is 0.1, compared to the group sparse residual constraint with a nonlocal prior (GSRC-NLR) model, the gain of the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) is up to 4.86 dB and 0.1189, respectively.

1. Introduction

Compressed sensing (CS) [1,2,3] is a signal processing technique that allows for successful signal reconstruction with fewer measurements than Nyquist sampling [4]. The technique not only overcomes the constraints of Nyquist sampling but also allows for simultaneous signal sampling and compression, lowering the cost of signal storage, transmission, and processing. Applications that have aroused the interest of researchers include single-pixel imaging [5], magnetic resonance imaging [6], radar imaging [7], wireless sensor networks [8], limited-data computed tomography [9], optical diffusion tomography [10], ultrasound tomography [11], and electron tomography [12].
Since the number of measurements is much lower than the number of elements in the image, the reconstruction problem is ill-posed [13], i.e., the solution to the optimization problem is not unique. To address this issue, image prior information is widely applied as a regularization constraint term in the reconstruction model to attain the optimal solution. In 2006, Candes et al. [14] proposed a minimum total variation (TV) model based on image gradient information [15]. It recovers the smooth areas of the image but destroys fine image structure. In 2013, Zhang et al. [16] introduced non-local similarity [17] as a regularization constraint into the TV model and proposed a non-local [18] regularization total variation model (TV-NLR). This model not only preserves the edges and details of the image but also promotes the development of TV-based image CS reconstruction. In 2007, Gan [19] presented a block-based compressed sensing (BCS) model for natural images, which separates the image into blocks, encodes them, and reconstructs each image block individually. However, the reconstructed images contain block artifacts. In 2011, the multi-hypothesis BCS with smoothed projected Landweber reconstruction (MH-BCS-SPL) [20] model was proposed to eliminate the block artifacts and improve reconstruction performance. It adopts a multi-hypothesis prediction strategy in which each image block is predicted from spatially surrounding blocks of an initial non-predicted reconstruction. Owing to its short reconstruction time, the model is often used to construct a preliminary reconstructed image. Meanwhile, sparse representation models based on image blocks developed rapidly. However, operating on individual image blocks ignores the connections between similar image blocks, and dictionaries learned from natural images carry a high computational complexity. In 2018, Zha et al. [21] introduced an adaptive sparse non-local regularization CS reconstruction model (ASNR).
The model employs the principal component analysis (PCA) [22] algorithm to learn dictionaries from the preliminary reconstruction of the image rather than genuine images, which reduces computational complexity and adds non-local similarity to preserve the image’s edges and details. Meanwhile, it promotes the further development of the patch-based sparse representation image CS reconstruction model.
The local and spatial connections of images play an important role in the field of image classification [23]. The goal is to investigate the structural correlation information between similar image blocks. In 2014, Zhang et al. [24] proposed a group-based sparse representation image restoration model (GSR). It uses image blocks with similar structures to build image groups as the units of image processing and uses the singular value decomposition (SVD) algorithm to obtain an adaptive group dictionary, which improves the quality of reconstructed images. In 2018, Zha et al. [25,26] successively proposed a group-based sparse representation image CS reconstruction model with non-convex regularization and an image reconstruction model with a non-convex weighted $\ell_p$ nuclear norm. These models promote sparser group coefficients by using the $\ell_p$ norm or the weighted $\ell_p$ nuclear norm to constrain the group sparse coefficients, reducing the computational complexity of the model while improving the quality of the reconstructed image. In 2020, Keshavarzian et al. [27] proposed an image reconstruction model based on non-convex LLp regularization of the group sparse representation, using an LLp norm closer to the $\ell_0$ norm to promote the sparsity of the group sparse coefficients and thus improve the quality of the reconstructed images. Zhao et al. [28] proposed an image reconstruction model based on group sparse representation and total variation that improves the quality of the reconstructed image by adding weights to the high-frequency components of the image. Zha et al. [29] proposed an image reconstruction model with a group sparsity residual constraint and non-local prior (GSRC-NLR), which uses the non-local similarity of the image to construct the group sparse residual [30] and converts the convex optimization problem into a problem of minimizing the group sparse residual, enhancing the reconstructed image quality.
However, the constraint on the group sparse coefficient is disregarded, resulting in longer reconstruction times.
Motivated by the group sparse representation and group sparse residual, in this paper, an optimization model is proposed for the group-based sparse representation of image CS reconstruction with joint regularization (GSR-JR). The model uses image groups as the unit of image processing. In order to reduce the complexity of the model and improve the quality of the model-reconstructed images, a group coefficient regularization constraint term and a group sparse residual regularization constraint term are added, respectively. The alternating direction method of multipliers (ADMM) [31] framework and an iterative thresholding algorithm [32] are also used to solve the model. Extensive simulation experiments verify the effectiveness and efficiency of the proposed model. The contents of this article are organized as follows:
Section 2 focuses on the theory of compressed sensing, the construction of image groups, group sparse representation, and the construction of group sparse residuals. In Section 3, the construction of the GSR-JR model and the solution scheme are described specifically. In Section 4, extensive simulation experiments are conducted to verify the performance of the GSR-JR model. In Section 5, we present the conclusion.

2. Related Work

2.1. Compressive Sensing

CS theory shows that an image $x \in \mathbb{R}^N$ can be sparsely represented in a transform domain. This can be expressed as $x = \Psi\alpha$, where $\Psi$ is the sparse basis matrix and $\alpha$ is the sparse coding coefficient vector. The sparse image can be projected into a low-dimensional space through a random measurement matrix $\Phi \in \mathbb{R}^{M \times N}$, which needs to meet the restricted isometry property conditions. The measurements of the image can be expressed as:
$$y = \Phi x = \Phi \Psi \alpha \tag{1}$$
where $y$ denotes the measurements. The sensing rate is defined as $R = M/N$. The purpose of CS recovery is to recover $x$ from $y$ as accurately as possible, which is usually expressed as the following $\ell_p$ optimization problem
$$\hat{\alpha} = \arg\min_{\alpha}\left(\frac{1}{2}\left\|y - \Phi\Psi\alpha\right\|_2^2 + \lambda\left\|\alpha\right\|_p\right) \tag{2}$$
where $\lambda$ is a regularization parameter, and $p$ specifies the $\ell_p$ norm used to constrain the sparse coding coefficients. When $p = 0$, the problem is a non-convex optimization problem, which is NP-hard and cannot be solved in polynomial time. When $p = 1$, the problem is a convex optimization problem, which can be solved by various convex optimization algorithms. Generally, the optimization problem is solved as the $\ell_1$ minimization problem [30]. Once the sparsest coding coefficient is attained, the reconstructed image $\hat{x}$ can be obtained from the sparse matrix and sparse coding coefficients
x ^ = Ψ α ^
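To make the $\ell_1$ recovery problem above concrete, the following sketch solves a toy version with the generic iterative shrinkage-thresholding algorithm (ISTA). This is an illustrative solver under assumed problem sizes, not the reconstruction method proposed in this paper; all names are hypothetical.

```python
import numpy as np

def soft(x, tau):
    # Element-wise soft thresholding: sign(x) * max(|x| - tau, 0).
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista(y, A, lam, n_iter=2000):
    # Iterative shrinkage-thresholding for
    #   min_alpha 0.5 * ||y - A @ alpha||_2^2 + lam * ||alpha||_1,
    # where A plays the role of the product of the measurement matrix
    # and the sparse basis.
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant
    alpha = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ alpha - y)
        alpha = soft(alpha - step * grad, lam * step)
    return alpha

# Toy recovery: a 3-sparse coefficient vector from 50 random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100)) / np.sqrt(50)
alpha_true = np.zeros(100)
alpha_true[[3, 40, 77]] = [1.5, -2.0, 1.0]
y = A @ alpha_true
alpha_hat = ista(y, A, lam=0.01)
```

In this noiseless toy setting, with a small $\lambda$ and a sufficiently sparse signal, the recovered coefficients closely approximate the ground truth.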

2.2. Image Group Construction and Group Sparse Representation

In this paper, image groups are the sparse representation units for image processing. The following illustrates the image group construction and group sparse representation. Figure 1 depicts the image group construction. The initial image $x \in \mathbb{R}^N$ is divided into $n$ overlapped image blocks $x_i \in \mathbb{R}^b$, $i = 1, 2, \ldots, n$ (red marked area). For each sample image block $x_i$, the $m$ most similar image blocks are searched for in the $L \times L$ search window (black marked area) to form the set $S_{x_i}$. All elements in the set $S_{x_i}$ are stacked column-wise into an image group $x_{G_i}$. The construction of the image group can be simply expressed as $x_{G_i} = R_{G_i}(x)$, where $R_{G_i}(\cdot)$ denotes the operation extracting the image group from the image.
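The block-matching step described above can be sketched as follows, assuming Euclidean distance between vectorized patches and the parameter choices used later in the paper ($7 \times 7$ blocks, a $35 \times 35$ window, $m = 60$ similar blocks); function and variable names are hypothetical.

```python
import numpy as np

def build_group(img, top, left, b=7, search=35, m=60):
    # Extract the reference patch at (top, left) and gather the m most
    # similar patches inside a (search x search) window around it.
    # Each patch is vectorized and becomes one column of the group matrix.
    ref = img[top:top + b, left:left + b].reshape(-1)
    H, W = img.shape
    r0, r1 = max(0, top - search // 2), min(H - b, top + search // 2)
    c0, c1 = max(0, left - search // 2), min(W - b, left + search // 2)
    cand, dist = [], []
    for r in range(r0, r1 + 1):
        for c in range(c0, c1 + 1):
            p = img[r:r + b, c:c + b].reshape(-1)
            cand.append(p)
            dist.append(np.sum((p - ref) ** 2))   # squared Euclidean distance
    order = np.argsort(dist)[:m]                  # keep the m best matches
    return np.stack([cand[k] for k in order], axis=1)  # group: (b*b) x m

rng = np.random.default_rng(1)
img = rng.random((64, 64))
G = build_group(img, 20, 20)
```

The reference patch has distance zero to itself, so it always appears as the first column of the group.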
For each image group $x_{G_i}$, the PCA dictionary learning algorithm is used to attain the group dictionary $D_{G_i}$ and group coefficients $\alpha_{G_i}$. The dictionary is learned directly from each image group, and each image group only needs to perform singular value decomposition once. Each image group can then be sparsely represented by solving the following $\ell_1$-norm minimization problem,
$$\hat{\alpha}_{G_i} = \arg\min_{\alpha_{G_i}}\left(\left\|x_{G_i} - D_{G_i}\alpha_{G_i}\right\|_2^2 + \lambda\left\|\alpha_{G_i}\right\|_1\right) \tag{4}$$
Once all the image groups $x_{G_i} = D_{G_i}\hat{\alpha}_{G_i}$ are obtained, each image group is rearranged and restored to its corresponding position in the original image to obtain the reconstructed image. Because the image groups contain repeated pixels, pixel averaging is also required. The image restoration operation can be expressed as
$$x = D_G\hat{\alpha}_G = \sum_{i=1}^{n} R_{G_i}^{T}\left(D_{G_i}\hat{\alpha}_{G_i}\right) \;./\; \sum_{i=1}^{n} R_{G_i}^{T}\left(\mathbf{1}_{b \times m}\right) \tag{5}$$
where $R_{G_i}^{T}(\cdot)$ denotes the inverse of the extraction operation, i.e., putting an image group back to its corresponding position in the image. $\mathbf{1}_{b \times m}$ is a matrix of image-group size with all elements equal to 1, which is used to count the weight of each pixel for pixel averaging. $D_G$ denotes the concatenation of all $D_{G_i}$, and $\hat{\alpha}_G$ denotes the concatenation of all $\hat{\alpha}_{G_i}$. The symbol $./$ denotes element-wise division of two vectors.
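The put-back-and-average operation described above can be sketched by accumulating patch values and per-pixel counts separately and dividing at the end; the function name and data layout are assumptions.

```python
import numpy as np

def aggregate(groups, positions, shape, b=7):
    # groups: list of (b*b x m) group matrices of restored patches.
    # positions: list of per-group lists of (row, col) patch positions.
    # Accumulate pixel values and per-pixel counts, then divide:
    # overlapping pixels are averaged over all patches covering them.
    num = np.zeros(shape)
    den = np.zeros(shape)
    for G, pos in zip(groups, positions):
        for j, (r, c) in enumerate(pos):
            num[r:r + b, c:c + b] += G[:, j].reshape(b, b)
            den[r:r + b, c:c + b] += 1.0
    den[den == 0] = 1.0          # avoid division by zero on untouched pixels
    return num / den

# Two overlapping constant patches: the overlap averages to (1 + 3) / 2 = 2.
groups = [np.stack([np.full(9, 1.0), np.full(9, 3.0)], axis=1)]
positions = [[(0, 0), (1, 1)]]
out = aggregate(groups, positions, (8, 8), b=3)
```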
According to the CS reconstruction model of Equation (1), the group sparse representation minimization problem for image reconstruction is obtained:
$$\hat{\alpha}_G = \arg\min_{\alpha_G}\left(\frac{1}{2}\left\|y - \Phi D_G\alpha_G\right\|_2^2 + \lambda\left\|\alpha_G\right\|_1\right) \tag{6}$$
By solving the minimization problem with different algorithms, the reconstructed image can be obtained as $\hat{x} = D_G\hat{\alpha}_G$.

2.3. Image Group Sparse Residual Construction

To improve the quality of the reconstructed image, a group sparse residual regularization constraint is introduced into the reconstruction model. The group sparsity residual is the difference between the group sparse coefficients of the initial reconstructed image and the corresponding group sparse coefficients of the original image, which can be defined as
$$R_{G_i} = \alpha_{G_i} - b_{G_i} \tag{7}$$
where the initial reconstructed image is produced by the MH-BCS-SPL model, chosen because its reconstruction time is short; this model is also compared and analyzed in the subsequent sections. Here, $\alpha_{G_i}$ represents the group sparse coefficient of the initial reconstructed image, which can be obtained by
$$\alpha_{G_i} = D_{G_i}^{-1}\, y_{G_i} \tag{8}$$
$b_{G_i}$ represents the group sparsity coefficient corresponding to the original image, which cannot be obtained directly. Inspired by the non-local means filter algorithm, it can be estimated from $\alpha_{G_i}$. Let $\alpha_{G_i,j}$ denote the $j$-th column of $\alpha_{G_i}$; the estimated reference column of $b_{G_i}$, denoted $b_{G_i,1}$, is
$$b_{G_i,1} = \sum_{j=1}^{m} w_{G_i,j}\,\alpha_{G_i,j} \tag{9}$$
where w G i , j expresses the weight
$$w_{G_i,j} = \frac{1}{W}\exp\left(-\left\|y_{G_i,1} - y_{G_i,j}\right\|_2^2 / h\right) \tag{10}$$
where $y_{G_i,1}$ denotes the first image block of the $i$-th image group in the initial reconstructed image, $y_{G_i,j}$ represents the $j$-th similar image block of the image group $y_{G_i}$, $h$ is a constant, and $W$ is a normalization factor. The approximation of the group sparsity coefficient of the original image is obtained by replicating $b_{G_i,1}$ $m$ times,
$$b_{G_i} = \left\{\, b_{G_i,1},\, b_{G_i,1},\, \ldots,\, b_{G_i,1} \,\right\} \tag{11}$$
Therefore, the expression for the group sparse residual is obtained.
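The estimation of $b_{G_i}$ described above can be sketched as a non-local weighted average of the group's sparse codes, replicated $m$ times; the function name and column-wise data layout are assumptions.

```python
import numpy as np

def estimate_b(alpha_group, y_group, h=10.0):
    # alpha_group: sparse codes of one image group (columns = similar patches).
    # y_group: matching patch group from the initial reconstruction.
    # Weight each code by the similarity of its patch to the reference patch
    # (first column), as in the non-local means filter, then replicate the
    # weighted average across all m columns.
    d = np.sum((y_group - y_group[:, [0]]) ** 2, axis=0)   # distances to reference
    w = np.exp(-d / h)
    w /= w.sum()                                           # normalization factor W
    b1 = alpha_group @ w                                   # weighted average code
    return np.tile(b1[:, None], (1, alpha_group.shape[1])) # replicate m times

# When all patches are identical the weights are uniform, so every column
# of the estimate is simply the mean code of the group.
rng = np.random.default_rng(1)
alpha = rng.standard_normal((49, 5))
patch = rng.standard_normal((49, 1))
y_group = np.tile(patch, (1, 5))
b = estimate_b(alpha, y_group)
```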

3. The Scheme of GSR-JR Model

3.1. The Construction of the GSR-JR Model

The group sparse representation, Equation (6), and group sparsity residuals, Equation (7), are used to construct the optimization (GSR-JR) model, which can be expressed as
$$\hat{\alpha}_G = \arg\min_{\alpha_G}\left(\frac{1}{2}\left\|y - \Phi D_G\alpha_G\right\|_2^2 + \lambda_1\left\|\alpha_G\right\|_1 + \lambda_2\left\|\alpha_G - b_G\right\|_p\right) \tag{12}$$
where the $\ell_p$ norm constraining the group sparse residuals has not yet been determined: when $p = 1$, the group coefficient residual distribution is modeled as a Laplace distribution, and when $p = 2$, as a Gaussian distribution. The image "House" is used to analyze the group sparse residual distribution in Figure 2.
Figure 2 shows the distribution curves of the group sparse residuals of "House" at different sensing rates. It demonstrates that the Laplace distribution fits the group sparse residual distribution better than the Gaussian distribution. As a result, the Laplace distribution is used to approximate the statistical distribution of the group sparse residual; in other words, the $\ell_1$ norm is used to constrain the group sparsity residual.
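This modeling choice can be illustrated numerically: for a heavy-tailed sample (synthetic Laplace data here, standing in for actual group sparse residuals, which are not available in this sketch), a maximum-likelihood Laplace fit attains a higher log-likelihood than a Gaussian fit.

```python
import numpy as np

# Illustrative check on synthetic data, not the paper's residuals.
rng = np.random.default_rng(3)
r = rng.laplace(0.0, 1.0, 100_000)   # stand-in for group sparse residuals

# Gaussian fit: sample mean and standard deviation are the MLE parameters.
mu_g, sd_g = r.mean(), r.std()
ll_gauss = np.sum(-0.5 * np.log(2 * np.pi * sd_g**2)
                  - (r - mu_g)**2 / (2 * sd_g**2))

# Laplace fit: median location and mean absolute deviation are the MLEs.
loc = np.median(r)
scale = np.mean(np.abs(r - loc))
ll_laplace = np.sum(-np.log(2 * scale) - np.abs(r - loc) / scale)
```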
The final representation of the proposed optimization model is
$$\hat{\alpha}_G = \arg\min_{\alpha_G}\left(\frac{1}{2}\left\|y - \Phi D_G\alpha_G\right\|_2^2 + \lambda_1\left\|\alpha_G\right\|_1 + \lambda_2\left\|\alpha_G - b_G\right\|_1\right) \tag{13}$$
where the first term is a fidelity term, and the second term is a group sparse coefficient regularization term, which ensures the sparsity of the group sparse coefficients and reduces the complexity of the model with a regularization parameter λ 1 . The third term is the group sparse residual regularization term, which improves the quality of the reconstructed image by increasing the prior information of the image, and the regularization parameter is λ 2 . Figure 3 depicts the complete flowchart of the optimized GSR-JR model.

3.2. The Solution of the GSR-JR Model

The alternating direction method of multipliers (ADMM) is an effective algorithmic framework for solving convex optimization problems. Its core idea is to split the variables of the optimization problem, transforming it into a series of constrained sub-problems that are then solved separately by individual algorithms. In this paper, the ADMM algorithm is used to solve the model and find the optimal solution. The complete solving process of the optimization model is shown in Algorithm 1.
First, the auxiliary variable $z$ and the constraint $z = D_G\alpha_G$ are introduced into the optimization problem, which can then be written in constrained form:
$$\min_{\alpha_G,\, z}\left(\frac{1}{2}\left\|y - \Phi z\right\|_2^2 + \lambda_1\left\|\alpha_G\right\|_1 + \lambda_2\left\|\alpha_G - b_G\right\|_1\right) \quad \text{s.t.}\ z = D_G\alpha_G \tag{14}$$
The augmented Lagrangian form
$$\min_{\alpha_G,\, z}\left(\frac{1}{2}\left\|y - \Phi z\right\|_2^2 + \lambda_1\left\|\alpha_G\right\|_1 + \lambda_2\left\|\alpha_G - b_G\right\|_1 + \frac{\mu}{2}\left\|z - D_G\alpha_G - g\right\|_2^2 - \frac{\mu}{2}\left\|g\right\|_2^2\right) \tag{15}$$
decomposes into three sub-problems
$$z^{t+1} = \arg\min_{z}\left(\frac{1}{2}\left\|y - \Phi z\right\|_2^2 + \frac{\mu}{2}\left\|z - D_G\alpha_G^{t} - g^{t}\right\|_2^2\right) \tag{16}$$
$$\alpha_G^{t+1} = \arg\min_{\alpha_G}\left(\lambda_1\left\|\alpha_G\right\|_1 + \lambda_2\left\|\alpha_G - b_G\right\|_1 + \frac{\mu}{2}\left\|z^{t+1} - D_G\alpha_G - g^{t}\right\|_2^2\right) \tag{17}$$
$$g^{t+1} = g^{t} - \left(z^{t+1} - D_G\alpha_G^{t+1}\right) \tag{18}$$
where $\mu$ is the regularization parameter and $g$ is the Lagrange multiplier. The $z$ and $\alpha_G$ sub-problems are solved separately below; the iteration index $t$ is omitted for clarity.
A.
Solve the z sub-problem
Given $\alpha_G$, the $z$ sub-problem is transformed into
$$\hat{z} = \arg\min_{z}\left(\frac{1}{2}\left\|y - \Phi z\right\|_2^2 + \frac{\mu}{2}\left\|z - D_G\alpha_G - g\right\|_2^2\right) \tag{19}$$
where $\Phi$ is a Gaussian random projection matrix, and computing the required matrix inverse at each iteration is expensive. To facilitate the solution, the gradient descent algorithm is used
$$\hat{z} = z - \eta d \tag{20}$$
where $\eta$ represents the step size and $d$ represents the gradient direction of the objective function.
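The gradient descent $z$-update can be sketched as follows, with the gradient $d = \Phi^{T}(\Phi z - y) + \mu(z - D_G\alpha_G - g)$ and the step size taken from the Lipschitz constant; the fixed inner iteration count and all names are assumptions, since the paper leaves these details unspecified.

```python
import numpy as np

def z_step(z, y, Phi, x_G, g, mu, n_iter=2000):
    # Gradient descent on the z sub-problem
    #   min_z 0.5 * ||y - Phi z||^2 + (mu/2) * ||z - x_G - g||^2,
    # where x_G stands for the current group-sparse estimate D_G alpha_G.
    eta = 1.0 / (np.linalg.norm(Phi, 2) ** 2 + mu)   # 1 / Lipschitz constant
    for _ in range(n_iter):
        d = Phi.T @ (Phi @ z - y) + mu * (z - x_G - g)   # objective gradient
        z = z - eta * d
    return z

# Toy example: iterate from zero toward the quadratic minimizer.
rng = np.random.default_rng(2)
Phi = rng.standard_normal((10, 20)) / np.sqrt(10)
y = rng.standard_normal(10)
x_G = rng.standard_normal(20)
g = rng.standard_normal(20)
z_hat = z_step(np.zeros(20), y, Phi, x_G, g, mu=0.1)
```

Because the objective is strongly convex (quadratic with $\mu > 0$), the iterates converge to the closed-form minimizer of the normal equations.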
B.
Solve the α G sub-problem
Given $z$, the $\alpha_G$ sub-problem is transformed as
$$\hat{\alpha}_G = \arg\min_{\alpha_G}\left(\lambda_1\left\|\alpha_G\right\|_1 + \lambda_2\left\|\alpha_G - b_G\right\|_1 + \frac{\mu}{2}\left\|z - D_G\alpha_G - g\right\|_2^2\right) \tag{21}$$
$$\hat{\alpha}_G = \arg\min_{\alpha_G}\left(\lambda_1\left\|\alpha_G\right\|_1 + \lambda_2\left\|\alpha_G - b_G\right\|_1 + \frac{\mu}{2}\left\|x - l\right\|_2^2\right) \tag{22}$$
where $l = z - g$ and $x = D_G\alpha_G$. $\lambda_1$ and $\lambda_2$ are two regularization parameters, which will be set in subsequent sections.
Theorem 1.
Let $x, l \in \mathbb{R}^N$ and $X_{G_i}, L_{G_i} \in \mathbb{R}^{b \times m}$, and let $e(j)$ denote the $j$-th element of the error vector $e = x - l \in \mathbb{R}^N$. Assuming each $e(j)$ is independently distributed with mean zero and variance $\sigma_n^2$, then for any $\varepsilon > 0$, the relationship between $\|x - l\|_2^2$ and $\sum_{i=1}^{n}\|X_{G_i} - L_{G_i}\|_2^2$ satisfies the following property
$$\lim_{N,K \to \infty} P\left(\left|\frac{1}{N}\left\|x - l\right\|_2^2 - \frac{1}{K}\sum_{i=1}^{n}\left\|X_{G_i} - L_{G_i}\right\|_2^2\right| < \varepsilon\right) = 1 \tag{23}$$
where $P$ represents probability and $K = b \times m \times n$. The proof of the theorem is in Appendix A.
Formula (22) can be simplified as
$$\hat{\alpha}_G = \arg\min_{\alpha_G}\left(\frac{1}{2}\sum_{i=1}^{n}\left\|X_{G_i} - L_{G_i}\right\|_2^2 + \frac{\lambda_1 K}{\mu N}\left\|\alpha_G\right\|_1 + \frac{\lambda_2 K}{\mu N}\left\|\alpha_G - b_G\right\|_1\right) = \arg\min_{\alpha_G}\sum_{i=1}^{n}\left(\frac{1}{2}\left\|X_{G_i} - L_{G_i}\right\|_2^2 + \eta_1\left\|\alpha_{G_i}\right\|_1 + \eta_2\left\|\alpha_{G_i} - b_{G_i}\right\|_1\right) \tag{24}$$
where $\eta_1 = \frac{\lambda_1 K}{\mu N}$ and $\eta_2 = \frac{\lambda_2 K}{\mu N}$.
The dictionary $D_{G_i}$ learned by PCA is orthogonal, so the problem can be reduced to
$$\hat{\alpha}_G = \arg\min_{\alpha_G}\sum_{i=1}^{n}\left(\frac{1}{2}\left\|\alpha_{G_i} - u_{G_i}\right\|_2^2 + \eta_1\left\|\alpha_{G_i}\right\|_1 + \eta_2\left\|\alpha_{G_i} - b_{G_i}\right\|_1\right) \tag{25}$$
where $L_{G_i} = D_{G_i} u_{G_i}$. According to the solution of the $\ell_1$ convex optimization problem
$$\hat{x} = \arg\min_{x}\left(\frac{1}{2}\left\|x - a\right\|_2^2 + \tau\left\|x\right\|_1\right) \tag{26}$$
the solution is
$$\hat{x} = \mathrm{soft}(a, \tau) = \mathrm{sign}(a) \odot \max\left(|a| - \tau,\, 0\right) \tag{27}$$
where $\mathrm{soft}(\cdot)$ denotes the soft-thresholding operator, $\odot$ denotes the element-wise product of two vectors, $\mathrm{sign}(\cdot)$ represents the sign function, and $\max(\cdot)$ takes the larger of two elements.
Then $\hat{\alpha}_{G_i}$ can be attained as
$$\hat{\alpha}_{G_i} = \mathrm{soft}\left(\mathrm{soft}\left(u_{G_i},\, \eta_1\right) - b_{G_i},\, \eta_2\right) + b_{G_i} \tag{28}$$
Therefore, the group sparsity coefficients $\hat{\alpha}_G$ of all groups can be obtained, and combined with the image group dictionary $D_G$ obtained by PCA dictionary learning, the reconstructed high-quality image is given by $\hat{x} = D_G\hat{\alpha}_G$.
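The closed-form per-group coefficient update above, a pair of nested soft-thresholding operations, can be sketched as follows (function names are hypothetical):

```python
import numpy as np

def soft(x, tau):
    # Element-wise soft thresholding: sign(x) * max(|x| - tau, 0).
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def update_alpha(u, b, eta1, eta2):
    # Per-group update: first shrink the code u toward zero (sparsity
    # term), then shrink the result toward the estimate b (group sparse
    # residual term).
    return soft(soft(u, eta1) - b, eta2) + b
```

Note that when $b_{G_i} = 0$ the two shrinkages compose into a single soft threshold with parameter $\eta_1 + \eta_2$, and when $\eta_2 = 0$ the update reduces to ordinary soft thresholding with $\eta_1$.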
Algorithm 1: The optimized GSR-JR for CS image reconstruction
Require: The measurements $y$ and the random matrix $\Phi$
Initial reconstruction:
Obtain the initial reconstructed image from the measurements $y$ (using MH-BCS-SPL)
Final reconstruction:
Initialize $t$, $\sigma_n$, $b$, $L$, $m$, $h$, $W$, $c$, $\mu$
for $t = 0$ to Max-Iteration do
Update $z^{t+1}$ by Equation (16);
$l = z^{t+1} - g$;
for each group $L_{G_i}$ in $l$ do
Construct dictionary $D_{G_i}$ from $y_{G_i}$ using PCA.
Update $\alpha_{G_i}$ by computing Equation (8).
Estimate $b_{G_i}$ by computing Equations (9) and (10).
Update $\lambda_1$, $\lambda_2$ by computing $\lambda_1 = c \times 2\sqrt{2}\,\sigma_n / \sigma_{A_i}$, $\lambda_2 = c \times 2\sqrt{2}\,\sigma_n / \sigma_{R_i}$.
Update $\eta_1$, $\eta_2$ by computing $\eta_1 = \lambda_1 K / (\mu N)$, $\eta_2 = \lambda_2 K / (\mu N)$.
Update $\alpha_{G_i}^{t+1}$ by computing Equation (17).
end for
Update $D_G^{t+1}$ by concatenating all $D_{G_i}$.
Update $\alpha_G^{t+1}$ by concatenating all $\alpha_{G_i}$.
Update $g^{t+1}$ by computing Equation (18).
end for
Output: The final reconstructed image $\hat{x} = D_G\hat{\alpha}_G$

4. Experiment and Discussion

In this paper, extensive simulation experiments are conducted to validate the performance of the optimized GSR-JR model. Peak signal-to-noise ratio (PSNR) [33] and structural similarity (SSIM) [34] are used to evaluate the quality of the reconstructed images. The experiments use ten standard 256 × 256 images from the University of Southern California image library as test images, as shown in Figure 4. All experimental simulation data are obtained with MATLAB R2020a on a Core i7-8565U 1.80 GHz computer with 4 GB RAM.
PSNR and SSIM are defined as
$$\mathrm{MSE} = \frac{1}{mn}\sum_{i=0}^{m-1}\sum_{j=0}^{n-1}\left[I(i,j) - J(i,j)\right]^2 \tag{29}$$
$$\mathrm{PSNR} = 10\log_{10}\left(\frac{255^2}{\mathrm{MSE}}\right) \tag{30}$$
$$\mathrm{SSIM} = l(X,Y)\,c(X,Y)\,s(X,Y) \tag{31}$$
$$l(X,Y) = \frac{2\mu_X\mu_Y + C_1}{\mu_X^2 + \mu_Y^2 + C_1}, \quad c(X,Y) = \frac{2\sigma_X\sigma_Y + C_2}{\sigma_X^2 + \sigma_Y^2 + C_2}, \quad s(X,Y) = \frac{\sigma_{XY} + C_3}{\sigma_X\sigma_Y + C_3} \tag{32}$$
where MSE represents the mean square error between the original image $I(i,j)$ and the reconstructed image $J(i,j)$, and $m$ and $n$ represent the height and width of the image, respectively. $l(X,Y)$, $c(X,Y)$, and $s(X,Y)$ measure luminance, contrast, and structure; $\mu_X$ and $\mu_Y$ represent the means of images $X$ and $Y$; $\sigma_X$ and $\sigma_Y$ represent their standard deviations; $\sigma_{XY}$ represents their covariance; and $C_1$, $C_2$, $C_3$ are constants.
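The two metrics can be sketched for 8-bit images as follows. Note that practical SSIM implementations average a local version over sliding windows, while this simplified version uses a single global window and the common choice $C_3 = C_2/2$, which collapses the product $l \cdot c \cdot s$ into one expression.

```python
import numpy as np

def psnr(I, J):
    # Peak signal-to-noise ratio for 8-bit images (peak value 255).
    mse = np.mean((I.astype(np.float64) - J.astype(np.float64)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)

def ssim_global(X, Y, C1=(0.01 * 255) ** 2, C2=(0.03 * 255) ** 2):
    # Single-window SSIM with C3 = C2 / 2, so that
    # l * c * s = [(2 mu_x mu_y + C1)(2 cov + C2)] /
    #             [(mu_x^2 + mu_y^2 + C1)(var_x + var_y + C2)].
    X = X.astype(np.float64); Y = Y.astype(np.float64)
    mx, my = X.mean(), Y.mean()
    vx, vy = X.var(), Y.var()
    cov = ((X - mx) * (Y - my)).mean()
    return ((2 * mx * my + C1) * (2 * cov + C2)) / \
           ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2))
```

For a uniform error of 10 gray levels, MSE is exactly 100, giving a PSNR of about 28.13 dB, and SSIM of an image with itself is 1.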

4.1. The Model Parameters Setting

In the simulation experiments, a random Gaussian matrix is employed to obtain measurements based on an image block of size 32 × 32. The noise parameter $\sigma_n$ is set to $\sqrt{2}$, and the small constant $c$ is set to 0.4. Because the choice of regularization parameters directly affects the performance of the model, two adaptive regularization parameters based on the maximum a posteriori estimation relationship [35] are used. The adaptive regularization parameters take the forms $\lambda_1 = (c \times 2\sqrt{2}\,\sigma_n)/\sigma_{A_i}$ and $\lambda_2 = (c \times 2\sqrt{2}\,\sigma_n)/\sigma_{R_i}$, where $\sigma_{A_i}$ and $\sigma_{R_i}$ represent estimates of the variances of the group sparse coefficients and group sparse residuals, respectively. The parameter $\mu$ is set to 0.01, 0.015, 0.025, 0.07, and 0.042 at the different sensing rates. The size of the image block, the size of the search window, and the number of similar image blocks are all determined during the construction of an image group, as shown in Figure 5 and Figure 6.
Based on previous studies [24,25,26,27,28,29], the PSNR and reconstruction time are discussed for image block sizes from 5 × 5 to 9 × 9 and search window sizes from 20 × 20 to 50 × 50. From Figure 5, it can be observed that the PSNR varies less under different search windows for the same image block size, which indicates that the search window has less influence on the reconstruction model quality. The large variation of PSNR for different image block sizes under the same search window indicates that the image block size has a large impact on the model. It can be found that the PSNR is higher when the image block size is 7 × 7 and 8 × 8, and the PSNR is highest when the search window is 35 × 35. It can be noticed that the image reconstruction time increases with the increase of image blocks. However, it is worth noting that there is a minimum value of reconstruction time for each image block at different search windows. This indicates that the image blocks and the search window are matched. This is the reason why we discuss image blocks and search windows together. Considering the model performance and time together, the image block size is set to 7 × 7, and the search window size is set to 35 × 35.
In Figure 6, the effect of the number of similar blocks in an image group on the reconstruction model is discussed. It can be observed that as the sensing rate increases, the SSIM and PSNR of the images first increase and then decrease. The reason may be that when the image group contains a few image blocks, similar image blocks are processed in the same image group, increasing the connection between image blocks and thus improving the image quality. However, when the image group contains a large number of image blocks, i.e., the blocks with low similarity are processed in multiple image groups, errors occur in recovering the average pixels of the image group. Considering the SSIM and PSNR of the three images, the quality of the reconstructed image is relatively better when the number of image blocks is 60. Therefore, the number of similar blocks is set to 60. Based on the above discussion, the size of the image block is set to 7 × 7, the search window is 35 × 35, and the number of similar image blocks is 60.

4.2. The Effect of Group Sparse Coefficient Regularization Constraint

The goal of the group sparse coefficient regularization constraint is to reduce the complexity of the model by constraining the group sparse coefficients. Mallat demonstrated that when a signal is represented sparsely, the sparser the representation, the higher the signal reconstruction accuracy [36]. This section focuses on the role of the group sparse coefficient regularization constraint in the model: Table 1 compares the reconstruction performance of the proposed model with and without this constraint at a sensing rate of 0.1.
From Table 1, it can be observed that the PSNR and SSIM of the reconstructed images are relatively high when the model contains group sparse coefficient regularization constraints. In terms of reconstruction time, the reconstruction time of the model with the regularization constraint is significantly reduced by about a factor of two. This indicates that the group sparse coefficient regularization constraint term can drive the group coefficients to be more sparse and reduce the complexity of the model. The discussion demonstrates that adding group sparse coefficient regularization constraints to the model improves the efficiency of the model.

4.3. Data Results

To validate the performance of the proposed GSR-JR model, it is compared with five existing image reconstruction models: TV-NLR, MH-BCS-SPL, ASNR, GSR, and GSRC-NLR. The code for all comparison models was obtained from the respective authors' websites, with parameters set to the authors' default values. Table 2 shows the PSNR and SSIM for 10 test images with different reconstruction models at sensing rates ranging from 0.1 to 0.3. The best values are shown in bold for ease of observation. From Table 2, it can be observed that the proposed model significantly outperforms the other models at low sensing rates. When the sensing rate is 0.1, the average PSNR (SSIM) of the GSR-JR model is improved by 3.78 dB (0.0929), 3.72 dB (0.1291), 1.40 dB (0.0227), 1.80 dB (0.0298), and 1.16 dB (0.0209), respectively, compared with the other models. It is also observed that the PSNR and SSIM of each image reconstruction model increase significantly with the sensing rate. To visualize these trends, Figure 7 shows the PSNR and SSIM of "Peppers" and "Monarch" at different sensing rates.
Figure 7 shows the PSNR and SSIM of the six reconstruction models on "Peppers" and "Monarch" at different sensing rates. It can be observed that the PSNR and SSIM increase gradually as the sensing rate increases, and the PSNR and SSIM of the proposed model are significantly higher than those of the other models. From the PSNR plots in Figure 7a,c, it can be seen that the PSNR growth rate of the different models changes when the sensing rate reaches 0.2: the PSNR of the ASNR model increases significantly, that of the GSRC-NLR model increases slowly, and that of the proposed GSR-JR model increases steadily.

4.4. Visual Effects

Visual perception is the subjective evaluation of the quality of reconstructed images. To illustrate the visual differences of the reconstructed images by six image reconstruction models, the reconstructed images of “Cameraman”, “Peppers”, “Monarch”, and “Resolution chart” at sensing rates of 0.1 and 0.2 are plotted, as shown in Figure 8 and Figure 9. Specific areas of the images are also enlarged to show the differences in the details of the images reconstructed by the reconstructed model.
Figure 8 shows the visual effect of the reconstructed images when the sensing rate is 0.1. It can be observed that the reconstructed images of the TV-NLR model are severely blurred, and the contour boundaries and texture details of the images can hardly be identified. Among the four images, the reconstructions of the MH-BCS-SPL model only allow the contours of "Peppers" and "Monarch" to be identified. The ASNR model also produces a significant blurring effect in the reconstructed images, mainly around the image details. In the GSR model, although "Monarch" still has unidentifiable artifacts, the stem of "Peppers" and the horizontal lines in the "Resolution chart" are identifiable. The GSRC-NLR model reconstructs the images visually well, although some artifacts remain. Compared with the other models, the proposed GSR-JR model reconstructs the images with the best visual effect, even though artifacts are still present, and the details of the images are easier to identify when comparing the magnified areas of the four images.
Figure 9 shows the visual effect of the reconstructed images when the sensing rate is 0.2. The TV-NLR model reconstructs the images with more detail, although blurring remains in "Peppers". The MH-BCS-SPL model reconstructs the images with more details and textures, while the GSRC-NLR model's reconstructions still contain relatively serious artifacts; the GSR model's reconstructions contain some artifacts but are visually clear. Only at a sensing rate of 0.2 does the visual quality of the ASNR reconstructions reach that of the proposed model.

4.5. Reconstruction Time

Reconstruction time is also an important metric for evaluating image reconstruction models. The reconstruction times of the six image reconstruction models are analyzed in Figure 10. From the figure, it can be observed that the TV-NLR and MH-BCS-SPL models take relatively little time, but the quality of their reconstructed images is also worse. The optimized GSR-JR model takes less time than the ASNR model, while the reconstructed image quality is comparable. Because the proposed GSR-JR model adds the group sparse residual regularization constraint, its reconstruction time is slightly higher than that of the GSR model; owing to the group sparse coefficient regularization constraint, it is lower than that of the GSRC-NLR model. Considering reconstruction quality and reconstruction time together, the proposed GSR-JR model is more practical. Meanwhile, although the reconstruction time of the GSR-JR model is better than that of the other group-based reconstruction models, it cannot achieve real-time image reconstruction; this is a limitation of the model, and further reducing the reconstruction time to achieve real-time reconstruction is a direction of continuing work.

5. Conclusions

In this paper, image groups are used as the sparse representation units, and the parameters of image group construction are discussed and determined: the image block size is set to 7 × 7, the search window to 35 × 35, and the number of similar blocks to 60. A group sparse coefficient regularization term is introduced to reduce the complexity of the model, and a group sparse residual regularization term adds prior information about the image to improve the quality of the reconstructed images. The model is solved within the ADMM framework together with an iterative thresholding algorithm. The experimental simulation results verify the effectiveness and efficiency of the GSR-JR model; however, the model cannot achieve real-time image reconstruction. Given the rapid development of convolutional neural networks and the successful combination of traditional algorithms with neural networks for image reconstruction, future research will focus on solving the proposed model with neural networks so as to achieve real-time, high-quality image reconstruction and promote further development in the field of image CS reconstruction.
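The image-group construction summarized above (7 × 7 blocks, a 35 × 35 search window, 60 similar blocks) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `build_group` and the Euclidean block-matching criterion are assumptions.

```python
import numpy as np

def build_group(img, top_left, block=7, window=35, n_similar=60):
    """Sketch of image-group construction by block matching.

    For the sample block at `top_left`, search a window x window
    neighbourhood for the n_similar most similar blocks (Euclidean
    distance) and stack them column-wise into a group matrix.
    """
    H, W = img.shape
    y0, x0 = top_left
    ref = img[y0:y0 + block, x0:x0 + block]

    # Restrict the search window to the image boundaries.
    y_lo, y_hi = max(0, y0 - window // 2), min(H - block, y0 + window // 2)
    x_lo, x_hi = max(0, x0 - window // 2), min(W - block, x0 + window // 2)

    candidates = []
    for y in range(y_lo, y_hi + 1):
        for x in range(x_lo, x_hi + 1):
            patch = img[y:y + block, x:x + block]
            dist = np.sum((patch - ref) ** 2)  # block-matching criterion
            candidates.append((dist, patch.reshape(-1)))

    # Keep the n_similar closest blocks and stack them as columns.
    candidates.sort(key=lambda c: c[0])
    group = np.stack([p for _, p in candidates[:n_similar]], axis=1)
    return group  # shape: (block*block, n_similar)

rng = np.random.default_rng(0)
img = rng.random((64, 64))
G = build_group(img, (20, 20))
print(G.shape)  # (49, 60)
```

The sample block itself has distance zero, so it appears as the first column of the group; sparse coding is then applied to these group matrices rather than to individual blocks.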

Author Contributions

Concept and structure of this paper, R.W.; resources, Y.Q. and H.Z.; writing—original draft preparation, R.W.; writing—review and editing, R.W., Y.Q. and Z.W. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (NSFC) (61675184).

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Proof of Theorem 1.
Assume the entries \(e(j) = x(j) - l(j)\) are independent with mean \(E[e(j)] = 0\) and variance \(\mathrm{Var}[e(j)] = \sigma^2\). Then each \(e(j)^2\) is also independent, with mean
\[
E\big[e(j)^2\big] = \mathrm{Var}[e(j)] + \big(E[e(j)]\big)^2 = \sigma^2, \quad j = 1, \ldots, N.
\]
By invoking the Law of Large Numbers in probability theory, for any \(\varepsilon > 0\),
\[
\lim_{N \to \infty} P\left\{ \left| \frac{1}{N} \sum_{j=1}^{N} e(j)^2 - \sigma^2 \right| < \varepsilon \right\} = 1,
\]
that is,
\[
\lim_{N \to \infty} P\left\{ \left| \frac{1}{N} \| x - l \|_2^2 - \sigma^2 \right| < \varepsilon \right\} = 1.
\]
Further, let \(X_G, L_G\) denote the concatenation of all the groups \(X_{G_i}, L_{G_i}\), \(i = 1, \ldots, n\), and denote each element of \(X_G - L_G\) by \(e_G(i)\), \(i = 1, \ldots, K\). By the same assumption, the \(e_G(i)\) are independent with zero mean and variance \(\sigma^2\). Therefore, it is possible to obtain
\[
\lim_{K \to \infty} P\left\{ \left| \frac{1}{K} \sum_{i=1}^{K} e_G(i)^2 - \sigma^2 \right| < \varepsilon \right\} = 1,
\]
\[
\lim_{K \to \infty} P\left\{ \left| \frac{1}{K} \sum_{i=1}^{n} \| X_{G_i} - L_{G_i} \|_2^2 - \sigma^2 \right| < \varepsilon \right\} = 1.
\]
Therefore, the relationship between \(\| x - l \|_2^2\) and \(\sum_{i=1}^{n} \| X_{G_i} - L_{G_i} \|_2^2\) is proved. □
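The Law-of-Large-Numbers step in the proof can be checked numerically. This is a sketch under the stated assumptions, using Gaussian residuals; the value σ = 0.05 is purely illustrative.

```python
import numpy as np

# Numerical check of the Law-of-Large-Numbers step: for i.i.d. zero-mean
# errors e(j) with variance sigma^2, (1/N) * ||e||_2^2 converges to sigma^2.
rng = np.random.default_rng(1)
sigma = 0.05
N = 1_000_000
e = rng.normal(0.0, sigma, size=N)   # e = x - l, elementwise residual

estimate = np.sum(e ** 2) / N        # (1/N) * ||x - l||_2^2
print(abs(estimate - sigma ** 2))    # small for large N
```

Because the estimator's standard deviation shrinks as \(\sigma^2\sqrt{2/N}\), the deviation from \(\sigma^2\) is tiny at this sample size, matching the limit statements above.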

Figure 1. The flow chart of image group construction, where Extracting means extracting sample image blocks, Matching means matching similar image blocks, and Stacking means transforming similar image blocks to get image groups.
Figure 2. The group sparse residual distribution of the original and initial reconstructed images at different sensing rates: (a) R = 0.1 , (b) R = 0.2 .
Figure 3. The complete flowchart of the optimized GSR-JR model.
Figure 4. Ten experimental test images: (a) Cameraman, (b) House, (c) Peppers, (d) Starfish, (e) Monarch, (f) Airplane, (g) Parrot, (h) Man, (i) Resolution chart, and (j) Camera test.
Figure 5. The PSNR and Time for "House" with different block sizes and different search window sizes.
Figure 6. The SSIM and PSNR with the number of similar blocks for three images.
Figure 7. The PSNR and SSIM of different images at all sensing rates. (a) The PSNR of “Peppers”, (b) the SSIM of “Peppers”, (c) the PSNR of “Monarch”, and (d) the SSIM of “Monarch”.
Figure 8. Visual effects of images with various reconstruction models when R = 0.1 . (a) Original image, (b) TV-NLR, (c) MH-BCS-SPL, (d) ASNR, (e) GSR, (f) GSRC-NLR, and (g) GSR-JR.
Figure 9. Visual effects of reconstructed images with various reconstruction models when R = 0.2 . (a) Original image, (b) TV-NLR, (c) MH-BCS-SPL, (d) ASNR, (e) GSR, (f) GSRC-NLR, and (g) GSR-JR.
Figure 10. The reconstruction time for the various models at R = 0.1 .
Table 1. The results of the model with and without group sparse coefficient regularization.

Image    | With Group Sparse Coefficient Regularization | Without Group Sparse Coefficient Regularization
         | PSNR  | SSIM   | Time    | PSNR  | SSIM   | Time
House    | 34.30 | 0.8806 | 92.50   | 34.14 | 0.8770 | 1746.53
Peppers  | 28.26 | 0.8196 | 796.02  | 28.17 | 0.8189 | 2098.52
Monarch  | 27.60 | 0.8910 | 1291.21 | 26.94 | 0.8887 | 2102.25
Airplane | 25.91 | 0.8405 | 1227.76 | 25.62 | 0.8393 | 2093.47
Table 2. The PSNR (dB) and SSIM of six image reconstruction models at various sensing rates (each entry: PSNR|SSIM).

Images / Methods   R = 0.1        R = 0.15       R = 0.2        R = 0.25       R = 0.3
Cameraman
  TV-NLR           22.98|0.7498   24.51|0.7973   25.76|0.8295   27.38|0.8603   27.99|0.8746
  MH-BCS-SPL       22.13|0.6791   24.37|0.7664   25.88|0.8111   27.16|0.8408   28.08|0.8607
  ASNR             23.76|0.7802   26.17|0.8377   27.75|0.8668   28.87|0.8889   29.96|0.9037
  GSR              22.90|0.7680   25.50|0.8309   27.17|0.8627   28.40|0.8847   29.62|0.9041
  GSRC-NLR         23.79|0.7778   26.38|0.8387   27.19|0.8575   28.62|0.8833   29.66|0.9021
  GSR-JR           24.57|0.7938   26.62|0.8408   28.16|0.8702   29.15|0.8914   30.19|0.9073
House
  TV-NLR           29.55|0.8326   31.55|0.8604   33.18|0.8819   34.39|0.8969   35.48|0.9104
  MH-BCS-SPL       30.28|0.8357   32.49|0.8736   33.84|0.8934   34.95|0.9029   35.69|0.9186
  ASNR             33.60|0.8836   35.79|0.9079   36.97|0.9250   38.12|0.9407   39.06|0.9503
  GSR              33.75|0.8807   35.88|0.9121   37.31|0.9322   38.36|0.9451   39.29|0.9539
  GSRC-NLR         34.07|0.8794   35.77|0.9065   37.11|0.9276   38.36|0.9443   39.30|0.9539
  GSR-JR           34.30|0.8805   35.71|0.9031   37.04|0.9239   38.29|0.9415   39.13|0.9506
Peppers
  TV-NLR           25.72|0.7624   27.72|0.8105   29.04|0.8418   30.13|0.8630   30.70|0.8752
  MH-BCS-SPL       25.16|0.7187   27.44|0.7874   28.61|0.8157   29.63|0.8405   30.20|0.8513
  ASNR             27.50|0.8030   29.74|0.8490   30.91|0.8700   32.33|0.8919   33.28|0.9050
  GSR              26.93|0.7944   29.30|0.8411   30.83|0.8693   32.11|0.8890   33.02|0.9028
  GSRC-NLR         27.88|0.8134   29.92|0.8514   30.97|0.8696   32.17|0.8883   32.94|0.9003
  GSR-JR           28.26|0.8196   30.13|0.8557   31.51|0.8793   32.61|0.8953   33.48|0.9075
Starfish
  TV-NLR           22.84|0.6709   24.40|0.7476   25.73|0.7991   26.52|0.8314   28.33|0.8739
  MH-BCS-SPL       22.54|0.6843   24.78|0.7617   25.93|0.7972   26.96|0.8289   27.90|0.8506
  ASNR             24.33|0.7554   27.22|0.8392   29.66|0.8909   31.77|0.9229   33.15|0.9387
  GSR              23.60|0.7344   26.99|0.8400   29.41|0.8901   31.38|0.9186   33.00|0.9375
  GSRC-NLR         24.41|0.7614   27.20|0.8408   28.24|0.8703   29.86|0.9028   31.37|0.9256
  GSR-JR           25.65|0.7952   28.20|0.8588   30.26|0.8997   32.02|0.9268   33.58|0.9415
Monarch
  TV-NLR           23.01|0.7726   25.66|0.8484   27.16|0.8832   29.24|0.9128   29.73|0.9239
  MH-BCS-SPL       23.19|0.7575   25.64|0.8383   27.10|0.8660   28.25|0.8856   29.20|0.9005
  ASNR             25.86|0.8703   28.97|0.9194   31.88|0.9475   33.46|0.9595   34.78|0.9670
  GSR              25.29|0.8640   28.22|0.9179   30.77|0.9433   32.79|0.9578   34.25|0.9659
  GSRC-NLR         26.33|0.8795   29.01|0.9227   30.32|0.9388   32.15|0.9540   33.43|0.9626
  GSR-JR           27.60|0.8910   29.95|0.9270   32.05|0.9483   33.75|0.9613   35.06|0.9680
Airplane
  TV-NLR           23.44|0.7568   25.32|0.8197   26.81|0.8646   28.33|0.8917   28.79|0.9018
  MH-BCS-SPL       23.67|0.7638   25.44|0.8199   27.19|0.8525   28.59|0.8870   29.67|0.8945
  ASNR             25.13|0.8248   27.35|0.8764   29.10|0.9065   30.50|0.9253   32.15|0.9432
  GSR              24.57|0.8219   26.56|0.8703   28.96|0.9086   30.48|0.9283   32.03|0.9440
  GSRC-NLR         25.36|0.8335   27.54|0.8819   28.93|0.9070   30.49|0.9280   31.91|0.9432
  GSR-JR           25.91|0.8405   28.19|0.8897   29.98|0.9173   31.35|0.9361   32.70|0.9475
Parrot
  TV-NLR           24.60|0.8273   25.93|0.8599   27.29|0.8852   28.21|0.9005   29.16|0.9139
  MH-BCS-SPL       25.34|0.8219   27.36|0.8749   29.23|0.8975   30.08|0.9133   31.01|0.9254
  ASNR             26.73|0.8707   28.44|0.8977   30.38|0.9189   31.46|0.9314   33.12|0.9420
  GSR              26.34|0.8747   28.97|0.9075   31.16|0.9247   32.36|0.9331   33.82|0.9472
  GSRC-NLR         27.35|0.8815   29.52|0.9079   30.74|0.9221   31.49|0.9331   32.41|0.9427
  GSR-JR           27.66|0.8805   29.84|0.9073   31.54|0.9237   32.16|0.9355   33.73|0.9450
Man
  TV-NLR           23.49|0.6363   24.77|0.7067   26.05|0.7584   27.32|0.8077   27.85|0.8274
  MH-BCS-SPL       23.00|0.5746   24.44|0.6534   25.36|0.6959   26.44|0.7451   27.36|0.7811
  ASNR             24.16|0.6800   26.08|0.7636   27.55|0.8195   28.55|0.8499   29.81|0.8786
  GSR              23.80|0.6658   25.80|0.7602   27.44|0.8182   28.71|0.8551   29.81|0.8836
  GSRC-NLR         24.42|0.6900   26.29|0.7686   27.61|0.8180   28.86|0.8538   29.80|0.8804
  GSR-JR           24.91|0.6957   26.64|0.7695   28.06|0.8206   29.22|0.8582   30.45|0.8846
Resolution chart
  TV-NLR           20.66|0.8802   25.05|0.9470   28.72|0.9710   32.29|0.9811   36.05|0.9880
  MH-BCS-SPL       17.69|0.6482   20.32|0.7626   22.75|0.8600   25.12|0.9051   27.10|0.9330
  ASNR             20.68|0.8636   25.95|0.9560   30.07|0.9785   32.70|0.9861   36.88|0.9911
  GSR              20.20|0.8489   26.03|0.9562   30.16|0.9764   36.28|0.9890   38.07|0.9912
  GSRC-NLR         20.26|0.8268   25.37|0.9427   25.55|0.9390   29.49|0.9733   31.83|0.9826
  GSR-JR           25.12|0.9466   31.99|0.9818   35.07|0.9889   34.65|0.9903   41.63|0.9953
Camera test
  TV-NLR           15.72|0.6992   19.10|0.8330   21.74|0.9007   23.93|0.9434   22.32|0.8614
  MH-BCS-SPL       19.63|0.7416   22.25|0.8074   24.04|0.8428   26.09|0.8864   27.68|0.9025
  ASNR             24.14|0.9578   28.23|0.9859   32.03|0.9919   34.62|0.9944   36.78|0.9954
  GSR              24.47|0.9658   29.00|0.9856   32.52|0.9913   35.05|0.9937   36.42|0.9937
  GSRC-NLR         24.34|0.9579   28.51|0.9852   29.43|0.9861   32.45|0.9935   34.84|0.9956
  GSR-JR           25.87|0.9734   29.98|0.9888   33.30|0.9936   35.40|0.9956   37.83|0.9966
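For reference, the PSNR values reported in the tables follow the standard definition for 8-bit images, PSNR = 10 log10(255^2 / MSE). The following minimal sketch computes it; the SSIM index additionally involves local luminance, contrast, and structure statistics and is omitted here.

```python
import numpy as np

def psnr(ref, rec, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a
    reconstructed image, assuming an 8-bit intensity range."""
    mse = np.mean((ref.astype(np.float64) - rec.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.full((8, 8), 100.0)
rec = ref + 5.0  # uniform error of 5 grey levels -> MSE = 25
print(round(psnr(ref, rec), 2))  # 34.15
```

A 1 dB PSNR gain roughly corresponds to a 20% reduction in mean squared error, which is why the up-to-4.86 dB improvements in the tables are visually significant.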
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

