Article

Hybrid High-Order and Fractional-Order Total Variation with Nonlocal Regularization for Compressive Sensing Image Reconstruction

College of Information Engineering, Zhejiang University of Technology, Hangzhou 310023, China
*
Author to whom correspondence should be addressed.
Electronics 2021, 10(2), 150; https://doi.org/10.3390/electronics10020150
Submission received: 11 December 2020 / Revised: 7 January 2021 / Accepted: 7 January 2021 / Published: 12 January 2021
(This article belongs to the Section Circuit and Signal Processing)

Abstract

Total variation regularization often yields staircase artifacts in the smooth regions of reconstructed images. This paper proposes a hybrid high-order and fractional-order total variation algorithm with nonlocal regularization. Nonlocal means regularization is introduced to describe the structural prior information of the image. By selecting appropriate weights for the fractional-order and high-order total variation terms, the proposed algorithm makes the two complement each other in image reconstruction: fractional-order total variation enhances image edges and textures but leaves smooth areas non-smooth, while high-order total variation alleviates the staircase artifact produced by traditional total variation but tends to over-smooth image details, and the hybrid model addresses both shortcomings. Meanwhile, the proposed algorithm suppresses the painting-like effects caused by nonlocal means regularization. The augmented Lagrangian method and the alternating direction method of multipliers are used to solve the regularization problem. Compared with several state-of-the-art reconstruction algorithms, the proposed algorithm is more efficient: it not only yields higher peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) but also retains abundant details and textures. When the measurement rate is 0.1, the gains in PSNR and SSIM are up to 1.896 dB and 0.048, respectively, compared with total variation with nonlocal regularization (TV-NLR).

1. Introduction

Compressive sensing (CS) [1,2] has been successfully applied in signal acquisition, processing, and compression. By exploiting the redundancy inherent in a signal, CS conducts sampling and compression at the same time. CS theory demonstrates that a signal can be reconstructed with high probability when it exhibits a sparse representation in some transform domain, and it has been widely used in many fields, such as single-pixel imaging [3], remote sensing imaging [4], and medical imaging [5].
Based on CS theory, suppose an original signal x ∈ ℝ^N is sparse, either directly or after a sparse transformation Ψ. The measurement y = Φx, y ∈ ℝ^M (M ≪ N), is obtained through the measurement matrix Φ. CS theory shows that when the sparse transformation matrix Ψ and the measurement matrix Φ satisfy the restricted isometry property (RIP) [6], the original signal x can be reconstructed by solving the following optimization problem
$$ \min_x \ \|\Psi^T x\|_0 \quad \text{s.t.} \quad y = \Phi x, \tag{1} $$
where α = Ψ^T x is the sparse coefficient vector after the sparse transformation and ‖·‖₀ denotes the l0 norm. Since solving Equation (1) is an NP-hard problem, it can be approximated by the l1-norm form
$$ \min_x \ \|\Psi^T x\|_1 \quad \text{s.t.} \quad y = \Phi x. \tag{2} $$
The above constrained problem can be converted into an unconstrained one by the Lagrangian multiplier method
$$ \tilde{x} = \arg\min_x \left\{ \|y - \Phi x\|_2^2 + \lambda \|\Psi^T x\|_1 \right\}, \tag{3} $$
where ‖y − Φx‖₂² is the data-fidelity (cost) term and λ is the Lagrangian parameter. Since natural images generated by natural-light imaging exhibit prior characteristics, the optimization problem of image compressive sensing can be expressed as
$$ \tilde{u} = \arg\min_u \left\{ \|y - \Phi u\|_2^2 + \lambda R(u) \right\}, \tag{4} $$
where u is a 2D image and R(u) is a regularization term encoding the prior information of the image.
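The unconstrained l1 problem above can be solved, in the simplest case where Ψ = I (the signal itself is sparse), with the classical iterative shrinkage-thresholding algorithm (ISTA). The following is an illustrative sketch only, not the solver used in this paper:

```python
import numpy as np

def ista(y, Phi, lam=0.05, step=None, iters=300):
    """Minimal ISTA sketch for min_x ||y - Phi x||_2^2 + lam * ||x||_1
    (Equation (3) with Psi = I)."""
    if step is None:
        # step <= 1 / (2 * sigma_max(Phi)^2) ensures convergence
        step = 1.0 / (2 * np.linalg.norm(Phi, 2) ** 2)
    x = np.zeros(Phi.shape[1])
    for _ in range(iters):
        grad = 2 * Phi.T @ (Phi @ x - y)                         # gradient of data term
        x = x - step * grad                                      # gradient step
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0)   # soft threshold
    return x

# toy example: recover a 2-sparse signal from M < N random measurements
rng = np.random.default_rng(0)
N, M = 64, 32
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
x_true = np.zeros(N)
x_true[[5, 40]] = [1.0, -2.0]
x_hat = ista(Phi @ x_true, Phi)
```

All names here (`ista`, `lam`, `step`) are ours; state-of-the-art CS solvers discussed below use more elaborate regularizers than plain l1.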
Current CS recovery algorithms exploit the prior knowledge that a natural image is sparse in some domain, such as the discrete cosine transform (DCT) [7], wavelets [8], and the gradient domain used by the total variation (TV) model [9,10]. Image sparse representation is a key factor affecting reconstruction quality. Despite their high effectiveness in image CS recovery, TV-based [11] algorithms cannot recover fine details and textures, and the reconstructed images suffer from undesirable staircase artifacts. This problem has sparked numerous studies on regularizers that suppress staircase artifacts while retaining sharp edges. To overcome this drawback, a weighted total variation [12] was proposed to improve the sparsity of the TV norm, and an improved total variation with intra-prediction was proposed in [13]. Total generalized variation (TGV) [14,15] is also widely discussed; it describes pixel variations in smooth regions more precisely and thus reduces oil-painting artifacts while still preserving sharp edges. Beyond TV and its variants, there are also combinations of TV (or TGV) with wavelet-type transforms, such as TV + wavelet [16] and TGV + shearlet [17]. However, although these methods improve on TV, staircase artifacts still remain in their results. Further, nonlocal means (NLM) [18] performs weighted filtering using the similarity between surrounding pixels, effectively exploiting nonlocal self-similarity. NLM has been successfully applied in CS image reconstruction, but it often produces a painting-like effect.
Recently, a class of fractional-order TV regularization models has received considerable interest and is widely used in image denoising [19,20]. An adaptive weighted high-frequency iterative fractional-order TV has been proposed, in which the high-frequency gradient of the image is adaptively reweighted across iterations [21]. Another conventional way to suppress the staircase artifact is high-order TV regularization [22,23]; for example, a high-order total variation minimization model removes undesired artifacts when restoring blurry and noisy images [24]. High-order TV regularization can reconstruct piecewise-linear regions, but it may also smooth out image details and reduce edge preservation [23]. To exploit image prior information as fully as possible in a regularized model, a generalized hybrid non-convex variational regularization model [25] and a unidirectional hybrid total variation with nonconvex low-rank regularization [26] have been proposed.
Motivated by the aforementioned studies, we observe that both fractional-order TV and high-order TV can reduce staircase artifacts, but fractional-order TV causes noise-like non-smoothness in smooth areas of the image, while high-order TV is less effective at reducing the staircase effect because it may smooth out image details. In this paper, a two-dimensional compressive sensing image reconstruction model based on hybrid high-order and fractional-order TV (HoFrTV) with nonlocal regularization is proposed. The proposed algorithm makes the fractional-order and high-order TV complement each other in image reconstruction, which effectively resolves the problems caused by each term alone. It effectively reduces staircase artifacts while preserving edges, and suppresses the painting-like effects produced by nonlocal means regularization.
To solve the proposed model effectively, we introduce auxiliary variables to construct a constrained optimization problem in Section 2.2. We divide the proposed model into four subproblems: the fractional-order TV model, the high-order TV model, the nonlocal means regularization model, and the iterative update of the reconstructed image. The augmented Lagrangian method (ALM) and the alternating direction method of multipliers (ADMM) are incorporated to solve these subproblems. Each parameter of the Lagrangian function is tuned empirically. The experimental results show that the proposed algorithm outperforms current state-of-the-art algorithms: the edges and details of the reconstructed image are more abundant, and the visual quality is better.
The remainder of this paper is organized as follows. Section 2 introduces the related regularization model and the proposed HoFrTV model. In Section 3, parameter selection and experimental results are presented. In Section 4, the conclusions are drawn.

2. The Proposed Algorithm Model

2.1. Regularization Model

2.1.1. Fractional-Order Total Variation Model

Fractional-order total variation can be regarded as the generalization of total variation to fractional orders. There are three widely used definitions, including the Riemann-Liouville (R-L), Grünwald-Letnikov (G-L), and Caputo models [27]. Here, we choose the G-L model because it is easier to implement for image reconstruction. The G-L derivative can be defined as
$$ {}_a D_t^\alpha f(t) = \lim_{h \to 0} \frac{1}{h^\alpha} \sum_{k=0}^{\left[\frac{t-a}{h}\right]} (-1)^k \binom{\alpha}{k} f(t - kh), \tag{5} $$
where α is the fractional order, t and a are the upper and lower bounds of the independent variable, respectively, h is the differential step size, and $\binom{\alpha}{k} = \frac{\Gamma(\alpha+1)}{\Gamma(k+1)\Gamma(\alpha-k+1)}$ is the generalized binomial coefficient, with Γ(·) the Gamma function. Without loss of generality, let u denote the image. The fractional-order differences in the horizontal and vertical directions can be expressed as
$$ D^\alpha u = \left( D_h^\alpha u, \ D_v^\alpha u \right), \tag{6} $$
and
$$ \begin{cases} D_h^\alpha u(i,j) = \sum_{k=0}^{K-1} (-1)^k \binom{\alpha}{k} u(i-k, j) \\[4pt] D_v^\alpha u(i,j) = \sum_{k=0}^{K-1} (-1)^k \binom{\alpha}{k} u(i, j-k), \end{cases} \tag{7} $$
where i and j index the pixel in the i-th row and j-th column of an N × N image. K ≥ 3 is the number of terms involved in the computation of the fractional-order derivative, usually set to K = N. Based on this definition, the fractional-order semi-norm is defined as
$$ \|D^\alpha u\|_{FrTV} = \sum_{i=1}^{N} \sum_{j=1}^{N} \sqrt{ \left( D_h^\alpha u \right)^2 + \left( D_v^\alpha u \right)^2 }. \tag{8} $$
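As an illustration, the G-L coefficients (−1)^k C(α, k) and the horizontal fractional difference of Equation (7) can be computed as follows. This is a minimal NumPy sketch with zero-padded borders; the function names are ours, and the stable recurrence c_k = c_{k−1}·(k − 1 − α)/k is used instead of the Gamma formula to avoid its poles at non-positive integers:

```python
import numpy as np

def gl_coeffs(alpha, K):
    """Coefficients (-1)^k * binom(alpha, k) for k = 0..K-1 (Eq. (5))."""
    c = np.empty(K)
    c[0] = 1.0
    for k in range(1, K):
        c[k] = c[k - 1] * (k - 1 - alpha) / k
    return c

def frac_diff_h(u, alpha, K=3):
    """Horizontal fractional-order difference of Eq. (7), zero-padded at the border."""
    out = np.zeros_like(u, dtype=float)
    c = gl_coeffs(alpha, K)
    for k in range(K):
        shifted = np.zeros_like(u, dtype=float)
        shifted[k:, :] = u[:u.shape[0] - k, :]   # u(i - k, j)
        out += c[k] * shifted
    return out
```

For α = 1 the coefficients reduce to (1, −1, 0, …), recovering the ordinary backward difference of traditional TV.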

2.1.2. High-Order Total Variation Model

On the basis of the first-order total variation operators of the image, we can obtain the high-order total variation, defined as
$$ D^2 u = \left( D_{vv}u, \ D_{vh}u, \ D_{hv}u, \ D_{hh}u \right), \qquad \left| D^2 u \right| = \sqrt{ |D_{vv}u|^2 + |D_{vh}u|^2 + |D_{hv}u|^2 + |D_{hh}u|^2 }, \tag{9} $$
where D_{vv}u, D_{vh}u, D_{hv}u, and D_{hh}u are second-order difference operators along the vertical and horizontal directions. The high-order total variation can be viewed as applying the difference operation once more to the first-order gradient.
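For illustration, |D²u| of Equation (9) can be evaluated with simple finite differences. This is a sketch assuming replicated-border first differences (via `np.diff` with `prepend`), not necessarily the exact discretization used in the paper:

```python
import numpy as np

def second_order_tv(u):
    """|D^2 u| of Eq. (9): magnitude of the four second-order differences."""
    dv = np.diff(u, axis=0, prepend=u[:1, :])    # vertical first difference
    dh = np.diff(u, axis=1, prepend=u[:, :1])    # horizontal first difference
    dvv = np.diff(dv, axis=0, prepend=dv[:1, :])
    dvh = np.diff(dv, axis=1, prepend=dv[:, :1])
    dhv = np.diff(dh, axis=0, prepend=dh[:1, :])
    dhh = np.diff(dh, axis=1, prepend=dh[:, :1])
    return np.sqrt(dvv**2 + dvh**2 + dhv**2 + dhh**2)
```

On a linear ramp image the interior second differences vanish, which is exactly why high-order TV does not penalize piecewise-linear (gradual) regions the way first-order TV does.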

2.1.3. Nonlocal Mean Regularization Model

A pixel can be estimated as the weighted average of pixels whose neighborhoods are similar to its own within a search window, which can be expressed as follows
$$ \hat{u}_i = \sum_{j \in \Omega} w_{i,j} u_j, \tag{10} $$
where Ω is a Ds × Ds search window. Let u_i and u_j denote the central pixels of ds × ds neighborhood windows. The weight of u_j with respect to u_i, denoted w_{i,j}, is determined by the similarity of the pixels in the two neighborhood windows, measured by the Gaussian-weighted l2 distance between them. It can be written in the following form
$$ w_{i,j} = \frac{1}{c} \exp\left( -\frac{\|u_j - u_i\|_2^2}{h^2} \right). \tag{11} $$
Specifically, assume that u_j lies in the search window of u_i. The neighborhood window centered on u_j slides within the search window to compute the similarity between the two neighborhoods; h is the factor controlling the Gaussian kernel and c is the normalization constant. To apply nonlocal regularization in the CS reconstruction process, the estimate can be rewritten in matrix form as
$$ R(u) = \|u - Wu\|_2^2, \tag{12} $$
where W is the matrix composed of the weights w_{i,j} of Equation (11) and R(u) denotes the nonlocal regularization term. When computing the norm, the elements are vectorized and the l2 norm of the vectorized matrix is calculated.
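The weight computation of Equations (10) and (11) can be sketched for a single pixel as follows. This is a naive, unvectorized illustration in which `ds` and `Ds` are interpreted as window radii and border handling is omitted, so (i, j) must lie at least Ds + ds pixels from the image border; practical NLM implementations are vectorized:

```python
import numpy as np

def nlm_weights(u, i, j, ds=1, Ds=3, h=0.5):
    """Normalized NLM weights w_{i,j} of Eq. (11) for pixel (i, j):
    Gaussian-weighted l2 distance between (2*ds+1)^2 patches inside a
    (2*Ds+1)^2 search window."""
    patch = u[i - ds:i + ds + 1, j - ds:j + ds + 1]
    weights = {}
    for p in range(i - Ds, i + Ds + 1):
        for q in range(j - Ds, j + Ds + 1):
            cand = u[p - ds:p + ds + 1, q - ds:q + ds + 1]
            d2 = np.sum((cand - patch) ** 2)        # patch l2 distance
            weights[(p, q)] = np.exp(-d2 / h**2)
    c = sum(weights.values())                        # normalization constant c
    return {k: v / c for k, v in weights.items()}
```

On a constant image every patch is identical, so all 49 weights in a 7 × 7 search window equal 1/49, which is the behavior the normalization constant c guarantees.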

2.2. The Proposed Algorithm Model

Incorporating Equations (8), (9), and (12) into the CS optimization problem jointly, the proposed hybrid high-order and fractional-order total variation with nonlocal regularization model for image CS recovery is formulated as
$$ \arg\min_u \ \tau \|D^\alpha u\|_1 + \varepsilon \|D^2 u\|_1 + \beta \|u - Wu\|_2^2 \quad \text{s.t.} \quad y = \Phi u. \tag{13} $$
Using variable splitting, we introduce auxiliary variables and convert Equation (13) into the constrained optimization problem
$$ \arg\min_{u,w,v,z} \ \tau \|w\|_1 + \varepsilon \|v\|_1 + \beta \|z - Wz\|_2^2 \quad \text{s.t.} \quad D^\alpha u = w, \ D^2 u = v, \ u = z, \ y = \Phi u, \tag{14} $$
where τ and ε control the weights of fractional-order and high-order total variation. Note that the problem of Equation (14) is quite difficult to solve directly due to the non-differentiability and non-linearity of the combined regularization terms. An augmented Lagrangian based approach is developed to solve the problem
$$ \begin{aligned} L_A(w, v, z, u) = {} & \tau \left( \|w\|_1 - \gamma_1^T (D^\alpha u - w) + \frac{\mu_1}{2} \|D^\alpha u - w\|_2^2 \right) \\ & + \varepsilon \left( \|v\|_1 - \gamma_2^T (D^2 u - v) + \frac{\mu_2}{2} \|D^2 u - v\|_2^2 \right) \\ & + \frac{\mu_3}{2} \|\Phi u - y\|_2^2 - \gamma_3^T (\Phi u - y) + \beta \|z - Wz\|_2^2 \\ & - \gamma_4^T (u - z) + \frac{\mu_4}{2} \|u - z\|_2^2, \end{aligned} \tag{15} $$
where μ1, μ2, μ3, μ4, and β are regularization parameters associated with the quadratic penalty terms, and γ1, γ2, γ3, and γ4 are the Lagrange multipliers associated with the constraints of Equation (14). The idea of ADMM is to find a saddle point of L_A(w, v, z, u), which is the solution to the original problem in Equation (13). Minimizing the augmented Lagrangian L_A(w, v, z, u) alternately, the problem in Equation (15) is decomposed into four subproblems, which we investigate one by one.

2.2.1. w and v Subproblems

Given v, z, and u, the optimization problem associated with w can be expressed as
$$ \tilde{w} = \arg\min_w \ \tau \left( \|w\|_1 - \gamma_1^T (D^\alpha u - w) + \frac{\mu_1}{2} \|D^\alpha u - w\|_2^2 \right). \tag{16} $$
According to Lemma 2 of [28], the closed-form solution of Equation (16) is
$$ \tilde{w} = \max\left\{ \left| D^\alpha u - \frac{\gamma_1}{\mu_1} \right| - \frac{1}{\mu_1}, \ 0 \right\} \cdot \operatorname{sgn}\left( D^\alpha u - \frac{\gamma_1}{\mu_1} \right). \tag{17} $$
Similarly, with w, z, and u given, the v subproblem becomes
$$ \tilde{v} = \arg\min_v \ \varepsilon \left( \|v\|_1 - \gamma_2^T (D^2 u - v) + \frac{\mu_2}{2} \|D^2 u - v\|_2^2 \right). \tag{18} $$
As with the w subproblem, the closed-form solution of Equation (18) can be written as
$$ \tilde{v} = \max\left\{ \left| D^2 u - \frac{\gamma_2}{\mu_2} \right| - \frac{1}{\mu_2}, \ 0 \right\} \cdot \operatorname{sgn}\left( D^2 u - \frac{\gamma_2}{\mu_2} \right). \tag{19} $$
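Both closed-form solutions are instances of element-wise soft-thresholding, which can be written compactly as:

```python
import numpy as np

def shrink(x, t):
    """Soft-thresholding operator of Equations (17) and (19):
    max{|x| - t, 0} * sgn(x), applied element-wise."""
    return np.maximum(np.abs(x) - t, 0.0) * np.sign(x)

# Sketch of the w update of Eq. (17); D_alpha_u, gamma1, mu1 stand for the
# quantities in the text:
#   w = shrink(D_alpha_u - gamma1 / mu1, 1.0 / mu1)
```

Entries whose magnitude falls below the threshold are set exactly to zero, which is what makes the w and v updates promote sparse gradients.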

2.2.2. u Subproblems

With w, v, and z fixed, the u subproblem is equivalently expressed as
$$ \begin{aligned} \tilde{u} = \arg\min_u \ & \tau \left( -\gamma_1^T (D^\alpha u - \tilde{w}) + \frac{\mu_1}{2} \|D^\alpha u - \tilde{w}\|_2^2 \right) + \varepsilon \left( -\gamma_2^T (D^2 u - \tilde{v}) + \frac{\mu_2}{2} \|D^2 u - \tilde{v}\|_2^2 \right) \\ & + \frac{\mu_3}{2} \|\Phi u - y\|_2^2 - \gamma_3^T (\Phi u - y) - \gamma_4^T (u - z) + \frac{\mu_4}{2} \|u - z\|_2^2. \end{aligned} \tag{20} $$
The problem in Equation (20) is a quadratic optimization problem. To reduce computation, the gradient descent method is used
$$ \tilde{u} = u - \eta d, \tag{21} $$
where η is the step size, u is updated at each iteration, and d denotes the gradient:
$$ \begin{aligned} d = {} & \tau (D^\alpha)^T \left( \mu_1 D^\alpha u - \gamma_1 - \mu_1 \tilde{w} \right) + \varepsilon (D^2)^T \left( \mu_2 D^2 u - \gamma_2 - \mu_2 \tilde{v} \right) \\ & - \gamma_4 + \mu_4 (u - z) + \Phi^T \left( \mu_3 (\Phi u - y) - \gamma_3 \right), \end{aligned} \tag{22} $$
where η = |dᵀd / (dᵀGd)| is the optimal step size and G = μ₁(Dᵅ)ᵀDᵅ + μ₄I + μ₃ΦᵀΦ.
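The exact line-search step η = |dᵀd / (dᵀGd)| can be illustrated on a generic quadratic objective. The sketch below is a toy stand-in for Equation (20): G is an arbitrary symmetric positive-definite matrix and f(u) = ½uᵀGu − bᵀu, not the paper's actual operators:

```python
import numpy as np

def exact_step(G, d):
    """Optimal step length along direction d for a quadratic objective:
    eta = |d^T d / d^T G d|."""
    return abs(d @ d / (d @ (G @ d)))

# one gradient-descent step on f(u) = 0.5 * u^T G u - b^T u
rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
G = A @ A.T + 5 * np.eye(5)      # symmetric positive definite
b = rng.standard_normal(5)
u = np.zeros(5)
d = G @ u - b                     # gradient of f at u
u_new = u - exact_step(G, d) * d
```

With exact line search a single step strictly decreases the objective whenever the gradient is nonzero, which is why a full inner solve of Equation (20) is unnecessary.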

2.2.3. z Subproblems

Similar to the other subproblems, the z subproblem becomes
$$ \tilde{z} = \arg\min_z \ \beta \|z - Wz\|_2^2 - \gamma_4^T (\tilde{u} - z) + \frac{\mu_4}{2} \|\tilde{u} - z\|_2^2. \tag{23} $$
According to [18], Equation (23) can be further transformed into
$$ \min_z \ \frac{1}{2} \|z - r\|_2^2 + \frac{\beta}{\mu_4} \|z - Wz\|_2^2, \tag{24} $$
where r = ũ − γ₄/μ₄ can be regarded as an approximation of z. Since the weight matrix W represents the nonlocal means operator, Equation (24) can be rewritten as
$$ \min_z \ \frac{1}{2} \|z - r\|_2^2 + \frac{\beta}{\mu_4} \|z - Wr\|_2^2. \tag{25} $$
Setting the gradient of Equation (25) to zero, we obtain the closed-form solution
$$ \tilde{z} = \frac{\mu_4 r + 2\beta W r}{\mu_4 + 2\beta}. \tag{26} $$
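This closed-form update, i.e. the minimizer of ½‖z − r‖² + (β/μ₄)‖z − Wr‖², can be sketched as:

```python
import numpy as np

def update_z(r, W, beta, mu4):
    """Closed-form z update: minimizer of
    0.5 * ||z - r||^2 + (beta / mu4) * ||z - W r||^2."""
    return (mu4 * r + 2 * beta * (W @ r)) / (mu4 + 2 * beta)
```

The result is a convex combination of the approximation r and its nonlocal-means filtered version Wr, with β controlling how strongly the nonlocal prior is enforced.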
Finally, the Lagrange multipliers are updated by the following
$$ \begin{cases} \gamma_1^{k+1} = \gamma_1^k - \mu_1 \left( D^\alpha u^{k+1} - w^{k+1} \right) \\ \gamma_2^{k+1} = \gamma_2^k - \mu_2 \left( D^2 u^{k+1} - v^{k+1} \right) \\ \gamma_3^{k+1} = \gamma_3^k - \mu_3 \left( \Phi u^{k+1} - y \right) \\ \gamma_4^{k+1} = \gamma_4^k - \mu_4 \left( u^{k+1} - z^{k+1} \right). \end{cases} \tag{27} $$
With an efficient solution for each separated subproblem, the overall algorithm achieves better reconstructions more efficiently. Given all the derivations above, the implementation of the proposed algorithm is summarized in Algorithm 1.
Algorithm 1 HoFrTV algorithm.
Input: The observed measurement y , the measurement matrix Φ and μ 1 , μ 2 , μ 3 , μ 4 , β
Initialization: u 0 = Φ T y , γ 1 = γ 2 = γ 3 = γ 4 = 0 , w 0 = v 0 = z 0 = 0
While Outer iteration unsatisfied do
  While Inner iteration unsatisfied do
   Solve w subproblem via Equation (17)
   Solve v subproblem via Equation (19)
   Solve u subproblem via Equation (21)
   Compute the weight wij via Equation (11)
   Solve z subproblem via Equation (26)
  end while
 Update multipliers via Equation (27)
end while
Output: the reconstructed image

3. Experimental Results and Discussion

In this section, experimental results are presented to demonstrate the performance of the proposed HoFrTV algorithm. In our implementation, we chose the USC-SIPI image database, a collection of digitized images maintained primarily to support research in image processing, image analysis, and machine vision. Ten images were selected for verification; the original images are shown in Figure 1.
Reconstructed image quality was measured using the peak signal-to-noise ratio (PSNR) and the structural similarity (SSIM) [29], calculated as follows
$$ \mathrm{PSNR} = 20 \log_{10} \frac{R}{\mathrm{RMSE}}, \qquad \mathrm{RMSE} = \sqrt{ \frac{1}{MN} \sum_{i=1}^{M} \sum_{j=1}^{N} \left( u_{ij} - \tilde{u}_{ij} \right)^2 }. \tag{28} $$
The root mean square error (RMSE) is the arithmetic square root of the mean square error and measures the deviation between the reconstructed and original images, where ũ_ij and u_ij denote the pixel values of the reconstructed and original images, respectively, and R is the maximum value of the image gray-level range. SSIM is defined as
$$ \mathrm{SSIM} = \frac{ (2\mu_x \mu_y + C_1)(2\sigma_{xy} + C_2) }{ (\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2) }, \tag{29} $$
where μ_x and μ_y are the mean values of the reconstructed and original images, σ_x and σ_y are their standard deviations, and C₁ and C₂ are small positive constants that prevent μ_x² + μ_y² and σ_x² + σ_y² from being zero, with
$$ \sigma_{xy} = \frac{1}{(N-1)^2} \sum_{i=1}^{N} \sum_{j=1}^{N} \left( u_{ij} - \mu_x \right) \left( \tilde{u}_{ij} - \mu_y \right). \tag{30} $$
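For reference, the two metrics can be implemented directly as below. This is a sketch: the SSIM follows the single-window form of Equation (29) over the whole image (libraries usually average SSIM over local windows), and the constants C₁ = (0.01R)² and C₂ = (0.03R)² are assumed, since the text does not specify them:

```python
import numpy as np

def psnr(u, u_rec, R=255.0):
    """PSNR of Eq. (28): 20 * log10(R / RMSE)."""
    rmse = np.sqrt(np.mean((np.asarray(u, float) - np.asarray(u_rec, float)) ** 2))
    return 20 * np.log10(R / rmse)

def ssim_global(x, y, R=255.0):
    """Single-window SSIM of Eq. (29) computed over the whole image.
    C1 and C2 are assumed values; the paper does not state them."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    C1, C2 = (0.01 * R) ** 2, (0.03 * R) ** 2
    mx, my = x.mean(), y.mean()
    cov = np.mean((x - mx) * (y - my))
    return ((2 * mx * my + C1) * (2 * cov + C2)
            / ((mx**2 + my**2 + C1) * (x.var() + y.var() + C2)))
```

Identical images give an SSIM of exactly 1, and PSNR grows as the RMSE between the two images shrinks.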
The stopping criterion for all of the algorithms tested was set to
$$ \frac{ \left\| \tilde{u}^{t+1} - \tilde{u}^{t} \right\|_2 }{ \left\| \tilde{u}^{t+1} \right\|_2 } \le 10^{-3}, \tag{31} $$
where u ˜ t + 1 and u ˜ t are the restored images at the current iterate and previous iterate respectively.
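The stopping rule of Equation (31) in code (a one-line sketch):

```python
import numpy as np

def converged(u_prev, u_curr, tol=1e-3):
    """Stopping criterion of Eq. (31): relative change between successive iterates."""
    return np.linalg.norm(u_curr - u_prev) / np.linalg.norm(u_curr) <= tol
```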

3.1. Parameter Selection

Since τ and ε are the weights of the fractional-order and high-order total variation terms, their respective ranges are greater than 0 and less than 1, and we constrain their sum to 1 in the proposed model. In the following, we analyze the influence of the high-order and fractional-order weight parameters on the reconstruction results.

3.1.1. The Influence of High-Order

This subsection discusses the influence of high-order TV when no fractional-order term is present in Equation (14), i.e., when the fractional-order term reduces to traditional total variation. To investigate its effect, experiments were implemented with different weight parameters ε for high-order TV, with values greater than 0 and less than 1. We selected the Lena image; the results are shown in Table 1.
As Table 1 shows, the reconstruction results improve as ε increases, because high-order TV effectively reduces the staircase artifacts in the reconstructed image. According to the experimental results, the reconstruction is stable when ε lies between 0.6 and 0.9, so ε can be selected from this range, with appropriate values chosen for different images according to experiments on the database. In our experiments, we set ε = 0.7 and τ = 0.3 to balance the results.

3.1.2. The Influence of Fractional-Order

In this experiment, α is the fractional order. Its effect on image reconstruction performance was tested without the high-order TV term in Equation (14), with α ranging from 0.5 to 2. The Lena image was tested; the results are shown in Table 2.
From Table 2, we can see that the reconstruction results were poor when α < 1, because fractional-order TV then loses more details and textures. When α = 1, fractional-order TV reduces to traditional TV. When α > 1, the larger α is, the better the textures and image details; the PSNR at α = 1 is lower than at α > 1. The fractional order can thus be selected between α = 1.3 and α = 2. However, when α is too close to 2, textures are enhanced excessively and become noise-like. To achieve a good trade-off, we set α = 1.7 in our experiments.

3.1.3. The Influence of Non-Local Mean Regularization Kernel Window and Search Window

Nonlocal means regularization estimates the current pixel by a weighted average of pixels in the image that have a similar neighborhood structure. The radii of the neighborhood kernel window and of the search window are therefore vital to the experiment. If they are too small, the self-similarity of the image cannot be fully exploited and the characteristics of nonlocal means cannot be used fully. If they are too large, the search area grows and the running time increases, lowering the algorithm's efficiency. Table 3 reports results for different kernel-window to search-window ratios (k:s) at different measurement rates, and Figure 2 shows the PSNR and time curves at measurement rates 0.1 and 0.15 for different k:s. From Table 3 and Figure 2, the performance was best at k:s = 3:7 considering both time and PSNR. Therefore, the kernel window and search window were set to 3 and 7 in the following experiments.

3.2. Parameter Verification

3.2.1. Verify Fractional Order Existence Performance

According to Section 3.1, we set the fractional order to α = 1.7 in this experiment in order to verify the validity of including the fractional-order term. On the basis of Section 3.1.1, the fractional-order term with α = 1.7 was added. The test image was Lena, as before. The results are shown in Table 4.
Comparing Table 1 with Table 4, the reconstruction results in Table 4 are higher overall: the performance with the fractional-order term is better than without it.

3.2.2. Verify High-Order Existence Performance

Similarly, in this experiment the weight of high-order TV was set to 0.7 as in Section 3.1.2. To verify the validity of including the high-order term, it was added on the basis of Section 3.1.2. The test image was Lena. The results are shown in Table 5.
Comparing Table 2 with Table 5 entry by entry, the performance with the high-order term is better than without it.
To verify the effect of the proposed algorithm, we give a visual comparison of different TV models on the test images Lena and Peppers. As shown in Figure 3 and Figure 4, traditional TV with nonlocal means regularization (Figure 3a and Figure 4a) usually produced undesirable staircase artifacts and painting-like effects. Even though fractional-order TV (Figure 3b and Figure 4b) could enhance image edges and textures, it caused non-smoothness in smooth areas. The high-order TV (Figure 3c and Figure 4c) was capable of alleviating the problem caused by traditional TV but still smoothed out image details, so its effect was not ideal. In Figure 3d and Figure 4d, it is obvious that our proposed algorithm makes the fractional-order and high-order TV complement each other in image reconstruction. From the figures, our proposed algorithm preserves image edges and textures more effectively and alleviates both the staircase artifact and the painting-like effects produced by nonlocal means regularization.

3.3. Comparison to Other Reconstruction Algorithms

In this experiment, we compared the proposed method with several state-of-the-art algorithms: TV-NLR [18], BCS-TV [21], TVAL3 [28], and two non-TV-based algorithms, BCS-SPL [30] and MH-BCS [31]. To reduce computational complexity and memory requirements, images were divided into non-overlapping blocks of size 32 × 32 for all algorithms. Table 6 displays the PSNR and SSIM values as the measurement rate grows for each image; additional test images and reconstruction results are given in Appendix A. Figure 5 and Figure 6 display the PSNR and SSIM values, respectively, as the measurement rate grows for two images. From Table 6 and Figures 5 and 6, it can be seen that the HoFrTV algorithm performed best for the images at different measurement rates. The visual quality of the reconstructed images further verifies the effectiveness of the proposed algorithm: Figure 7 and Figure 8 display some reconstructed images and local enlargements obtained with the different algorithms at a 0.2 measurement rate. The visual quality of the images recovered by HoFrTV was clearly better than that of the others; HoFrTV efficiently reconstructed fine details and textures while preserving sharp edges and avoiding painting-like effects.

3.4. Computational Complexity

The experiments were performed in the MATLAB R2018a environment with an Intel Core i5-4200 CPU at 3.4 GHz and 4.0 GB RAM. Table 7 and Figure 9 give the reconstruction times at different measurement rates. From these results, BCS-SPL, TVAL3, and MH-BCS were faster than the others. Among TV-NLR, HoFrTV, and BCS-TV, HoFrTV was faster than BCS-TV and slower than TV-NLR, because each iteration needs to compute both the high-order and the fractional-order total variation in the reconstruction process. However, the HoFrTV algorithm had higher reconstruction quality.

4. Conclusions

In this paper, a hybrid high-order and fractional-order total variation model with nonlocal means regularization was proposed for compressive sensing image reconstruction, and the ALM and ADMM methods were used to solve it. The proposed algorithm makes the fractional-order and high-order total variation complement each other in image reconstruction: fractional-order total variation enhances image edges and textures but leaves smooth areas non-smooth, while high-order total variation alleviates the staircase artifact of traditional total variation but over-smooths image details; the hybrid model addresses both shortcomings. Meanwhile, the proposed algorithm suppresses the painting-like effects produced by nonlocal means regularization. Experimental results show that, compared with several state-of-the-art algorithms, the images reconstructed by the proposed approach not only achieve higher PSNR and SSIM but also have better visual quality.

Author Contributions

Conceptualization, L.H.; resources, Y.Q. and H.Z.; validation, Z.P. and J.M.; supervision, Y.H.; writing—original draft, L.H.; writing—review and editing, Y.Q. and H.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (NSFC) (61675184 and 61405178).

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Figure A1. Five additional test images.
Table A1. Experimental results (PSNR (dB)/SSIM) of five additional test images at measurement rates 0.1–0.3.

| Image | Algorithm | 0.1 | 0.15 | 0.2 | 0.25 | 0.3 |
|---|---|---|---|---|---|---|
| Woman | TV-NLR | 28.406/0.804 | 29.886/0.847 | 31.261/0.877 | 32.540/0.901 | 33.325/0.917 |
| | MH-BCS | 28.610/0.825 | 30.779/0.813 | 31.624/0.890 | 33.175/0.911 | 34.141/0.921 |
| | BCS-TV | 27.982/0.781 | 29.581/0.829 | 30.766/0.858 | 31.914/0.882 | 32.74/0.901 |
| | BCS-SPL | 27.560/0.774 | 28.869/0.814 | 30.117/0.843 | 31.202/0.864 | 32.201/0.880 |
| | TVAL3 | 27.259/0.781 | 29.539/0.835 | 30.786/0.863 | 31.880/0.887 | 32.941/0.904 |
| | HoFrTV | 29.099/0.832 | 31.130/0.878 | 32.241/0.901 | 33.342/0.916 | 34.555/0.931 |
| Candy | TV-NLR | 28.645/0.905 | 30.659/0.933 | 32.859/0.954 | 34.560/0.966 | 36.488/0.976 |
| | MH-BCS | 28.656/0.891 | 31.161/0.925 | 32.643/0.940 | 34.158/0.954 | 35.229/0.962 |
| | BCS-TV | 27.458/0.881 | 29.729/0.919 | 31.659/0.943 | 33.465/0.959 | 35.209/0.969 |
| | BCS-SPL | 26.467/0.842 | 28.283/0.877 | 30.261/0.901 | 31.242/0.919 | 32.352/0.932 |
| | TVAL3 | 26.238/0.854 | 29.598/0.917 | 31.184/0.938 | 32.907/0.955 | 34.333/0.966 |
| | HoFrTV | 30.266/0.929 | 32.671/0.953 | 34.671/0.968 | 36.288/0.975 | 37.970/0.982 |
| Bell | TV-NLR | 25.070/0.846 | 26.858/0.883 | 28.615/0.912 | 29.682/0.927 | 31.401/0.943 |
| | MH-BCS | 25.262/0.826 | 26.275/0.864 | 28.666/0.898 | 29.861/0.916 | 30.838/0.928 |
| | BCS-TV | 23.937/0.795 | 25.619/0.847 | 27.044/0.883 | 28.276/0.905 | 29.621/0.923 |
| | BCS-SPL | 23.755/0.774 | 24.928/0.805 | 26.493/0.841 | 27.402/0.851 | 27.750/0.879 |
| | TVAL3 | 23.469/0.775 | 25.899/0.861 | 27.538/0.895 | 28.915/0.914 | 30.194/0.929 |
| | HoFrTV | 25.758/0.860 | 27.827/0.897 | 29.381/0.920 | 30.862/0.935 | 32.229/0.948 |
| Couple | TV-NLR | 25.424/0.676 | 27.417/0.759 | 28.298/0.798 | 28.994/0.827 | 30.216/0.861 |
| | MH-BCS | 24.961/0.667 | 26.805/0.749 | 28.089/0.795 | 28.890/0.823 | 30.146/0.855 |
| | BCS-TV | 24.376/0.625 | 26.153/0.702 | 27.394/0.756 | 28.537/0.798 | 29.592/0.834 |
| | BCS-SPL | 23.773/0.580 | 24.933/0.632 | 25.827/0.673 | 26.57/0.7063 | 27.286/0.736 |
| | TVAL3 | 23.426/0.615 | 26.081/0.719 | 27.745/0.774 | 28.911/0.816 | 30.117/0.848 |
| | HoFrTV | 25.765/0.695 | 27.589/0.774 | 29.323/0.828 | 30.553/0.864 | 31.728/0.891 |
| Man | TV-NLR | 23.469/0.632 | 24.775/0.707 | 25.972/0.754 | 26.810/0.790 | 27.875/0.828 |
| | MH-BCS | 23.328/0.624 | 24.856/0.701 | 25.920/0.748 | 26.860/0.784 | 27.564/0.808 |
| | BCS-TV | 22.902/0.596 | 24.483/0.681 | 25.593/0.737 | 26.628/0.779 | 27.543/0.816 |
| | BCS-SPL | 21.889/0.505 | 22.955/0.562 | 23.815/0.611 | 24.587/0.651 | 25.260/0.687 |
| | TVAL3 | 22.327/0.567 | 23.211/0.619 | 24.35/0.678 | 25.206/0.720 | 27.504/0.811 |
| | HoFrTV | 23.754/0.660 | 25.4822/0.741 | 26.719/0.793 | 27.761/0.826 | 28.748/0.855 |

References

  1. Donoho, D.L. Compressed sensing. IEEE Trans. Inf. Theory 2006, 52, 1289–1306.
  2. Candès, E.J.; Wakin, M.B. An introduction to compressive sampling. IEEE Signal Process. Mag. 2008, 25, 21–30.
  3. Duarte, M.F.; Davenport, M.A.; Takhar, D.; Laska, J.N.; Sun, T.; Kelly, K.F.; Baraniuk, R.G. Single-Pixel Imaging via Compressive Sampling. IEEE Signal Process. Mag. 2008, 25, 83–91.
  4. Alonso, M.T.; Dekker, P.L.; Mallorqui, J.J. A Novel Strategy for Radar Imaging Based on Compressive Sensing. IEEE Trans. Geosci. Remote Sens. 2011, 48, 4285–4295.
  5. Lustig, M.; Donoho, D.L.; Santos, J.M.; Pauly, J.M. Compressed sensing MRI. IEEE Signal Process. Mag. 2008, 25, 72–82.
  6. Candès, E.J. The restricted isometry property and its implications for compressed sensing. C. R. Math. 2008, 346, 589–592.
  7. Mutgekar, M.B.; Bhaskar, P.C. Analysis of DCT and FAST DCT using soft core processor. In Proceedings of the 2019 3rd International Conference on Trends in Electronics and Informatics (ICOEI), Tirunelveli, India, 23–25 April 2019; pp. 1128–1132.
  8. Rousset, F.; Ducros, N.; Farina, A.; Valentini, G.; D'Andrea, C.; Peyrin, F. Adaptive Basis Scan by Wavelet Prediction for Single-Pixel Imaging. IEEE Trans. Comput. Imaging 2017, 3, 36–46.
  9. Beck, A.; Teboulle, M. Fast Gradient-Based Algorithms for Constrained Total Variation Image Denoising and Deblurring Problems. IEEE Trans. Image Process. 2009, 18, 2419–2434.
  10. Iordache, M.-D.; Bioucas-Dias, J.M.; Plaza, A. Total Variation Spatial Regularization for Sparse Hyperspectral Unmixing. IEEE Trans. Geosci. Remote Sens. 2012, 50, 4484–4502.
  11. Li, C.; Yin, W.; Jiang, H.; Zhang, Y. An efficient augmented Lagrangian method with applications to total variation minimization. Comput. Optim. Appl. 2013, 56, 507–530.
  12. Candès, E.J.; Wakin, M.B.; Boyd, S.P. Enhancing Sparsity by Reweighted L1 Minimization. J. Fourier Anal. Appl. 2008, 14, 877–905.
  13. Xu, J.; Ma, J.; Zhang, D.; Zhang, Y.; Lin, S. Improved total variation minimization method for compressive sensing by intra-prediction. Signal Process. 2012, 92, 2614–2623.
  14. Bredies, K.; Kunisch, K.; Pock, T. Total Generalized Variation. SIAM J. Imaging Sci. 2010, 3, 492–526.
  15. Knoll, F.; Bredies, K.; Pock, T.; Stollberger, R. Second order total generalized variation (TGV) for MRI. Magn. Reson. Med. 2010, 65, 480–491.
  16. Yang, J.; Zhang, Y.; Yin, W. A Fast Alternating Direction Method for TVL1-L2 Signal Reconstruction From Partial Fourier Data. IEEE J. Sel. Top. Signal Process. 2010, 4, 288–297.
  17. Guo, W.; Qin, J.; Yin, W. A New Detail-Preserving Regularization Scheme. SIAM J. Imaging Sci. 2014, 7, 1309–1334.
  18. Zhang, J.; Liu, S.; Xiong, R.; Ma, S.; Zhao, D. Improved total variation based image compressive sensing recovery by nonlocal regularization. In Proceedings of the 2013 IEEE International Symposium on Circuits and Systems (ISCAS), Beijing, China, 19–23 May 2013; pp. 2836–2839.
  18. Zhang, J.; Liu, S.; Xiong, R.; Ma, S.; Zhao, D. Improved total variation based image compressive sensing recovery by nonlocal regularization. In Proceedings of the 2013 IEEE International Symposium on Circuits and Systems (ISCAS), Beijing, China, 19–23 May 2013; pp. 2836–2839. [Google Scholar] [CrossRef]
  19. Jun, Z.; Zhihui, W. A class of fractional-order multi-scale variational models and alternating projection algorithm for image denoising. Appl. Math. Model. 2011, 35, 2516–2528. [Google Scholar] [CrossRef]
  20. Tian, D.; Xue, D.Y.; Wang, D.H. A fractional-order adaptive regularization primal-dual algorithm for image denoising. Inf. Sci. 2015, 296, 147–159. [Google Scholar] [CrossRef]
  21. Chen, H.; Qin, Y.; Ren, H.; Chang, L.; Hu, Y.; Zheng, H. Adaptive Weighted High Frequency Iterative Algorithm for Fractional-Order Total Variation with Nonlocal Regularization for Image Reconstruction. Electronics 2020, 9, 1103. [Google Scholar] [CrossRef]
  22. Adam, T.; Paramesran, R. Image denoising using combined higher order non-convex total variation with overlapping group sparsity. Multidimens. Syst. Signal Process. 2019, 30, 503–527. [Google Scholar] [CrossRef]
  23. Liu, P. Hybrid higher-order total variation model for multiplicative noise removal. IET Image Process. 2020, 14, 862–873. [Google Scholar] [CrossRef]
  24. Mei, J.J.; Huang, T.Z. Primal-dual splitting method for high-order model with application to image restoration. Appl. Math. Model. 2015, S0307904X15006022. [Google Scholar] [CrossRef]
  25. Tang, L.; Ren, Y.; Fang, Z.; He, C. A generalized hybrid nonconvex variational regularization model for staircase reduction in image restoration. Neurocomputing 2019, 359, 15–31. [Google Scholar] [CrossRef]
  26. Yang, J.-H.; Zhao, X.-L.; Ma, T.-H.; Chen, Y.; Huang, T.-Z.; Ding, M. Remote sensing images destriping using unidirectional hybrid total variation and nonconvex low-rank regularization. J. Comput. Appl. Math. 2020, 363, 124–144. [Google Scholar] [CrossRef]
  27. Zhang, J.; Chen, K. Variational image registration by a total fractional-order variation model. J. Comput. Phys. 2015, 293, 442–461. [Google Scholar] [CrossRef] [Green Version]
  28. Li, C. An Efficient Algorithm for Total Variation Regularization with Applications to the Single Pixel Camera and Compressive Sensing. Master’s Thesis, Rice University, Houston, TX, USA, 2010. Available online: https://hdl.handle.net/1911/62229 (accessed on 1 September 2009).
  29. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [Green Version]
  30. Mun, S.; Fowler, J.E. Block Compressed Sensing of Images Using Directional Transforms. In Proceedings of the 2010 Data Compression Conference, Snowbird, UT, USA, 24–28 March 2010; p. 547. [Google Scholar] [CrossRef]
  31. Chen, C.; Tramel, E.W.; Fowler, J.E. Compressed-sensing recovery of images and video using multihypothesis predictions. In Proceedings of the 2011 Conference Record of the Forty Fifth Asilomar Conference on Signals, Systems and Computers (ASILOMAR), Pacific Grove, CA, USA, 6–9 November 2011; pp. 1193–1198. [Google Scholar] [CrossRef]
Figure 1. Test images.
Figure 2. The PSNR and time curves for different k:s ratios: (a) measurement rates = 0.1 and (b) measurement rates = 0.15.
Figure 3. Lena image, measurement rate = 0.1. (a) Reconstructed image by traditional TV, PSNR = 26.272 dB and SSIM = 0.789; (b) reconstructed image by fractional-order TV only (order = 1.7), PSNR = 26.956 dB and SSIM = 0.809; (c) reconstructed image by high-order TV only (weight = 0.7), PSNR = 26.670 dB and SSIM = 0.803; (d) reconstructed image by hybrid high-order (weight = 0.7) and fractional-order (order = 1.7) TV, PSNR = 27.493 dB and SSIM = 0.829.
Figure 4. Peppers image measurement rate = 0.1. (a) Reconstructed image by traditional TV, PSNR = 26.601 dB, and SSIM = 0.829; (b) reconstructed image by only fractional-order (order = 1.7) TV, PSNR = 27.748 dB, and SSIM = 0.847; (c) reconstructed image by only higher-order (weight = 0.7) TV, PSNR = 26.787 dB, and SSIM = 0.828; (d) reconstructed image by high-order (weight = 0.7) and fractional-order (order = 1.7) TV, PSNR = 28.497 dB, and SSIM = 0.868.
Figure 5. The PSNR curves of two images: (a) Barbara and (b) Boats.
Figure 6. The SSIM curves of two images: (a) Barbara and (b) Boats.
Figure 7. Reconstructed images (barbara) for measurement rate = 0.2 (a) PSNR = 28.103 dB and SSIM = 0.815; (b) PSNR = 27.748 dB and SSIM = 0.847; (c) PSNR = 26.585 dB and SSIM = 0.778; (d) PSNR = 26.812 dB and SSIM = 0.788; (e) PSNR = 27.237 dB and SSIM = 0.808; and (f) PSNR = 28.304 dB and SSIM = 0.834.
Figure 8. Reconstructed images (boat) for measurement rate = 0.2 (a) PSNR = 27.170 dB and SSIM = 0.796; (b) PSNR = 25.266 dB and SSIM = 0.697; (c) PSNR = 26.013 dB and SSIM = 0.758; (d) PSNR = 26.421 dB and SSIM = 0.774; (e) PSNR = 26.476 dB and SSIM = 0.783; and (f) PSNR = 27.523 dB, and SSIM = 0.816.
Figure 9. Average reconstruction time (s).
Table 1. Peak signal-to-noise ratio (PSNR) (dB)/structural similarity (SSIM) results for different ε values for the Lena image.
ε      Rate = 0.1    Rate = 0.15   Rate = 0.2    Rate = 0.25   Rate = 0.3
0.1    26.539/0.801  28.398/0.845  29.526/0.875  30.699/0.895  31.733/0.916
0.2    26.721/0.807  28.411/0.846  29.545/0.877  30.799/0.897  31.797/0.916
0.3    26.790/0.809  28.485/0.853  29.696/0.879  30.905/0.900  31.896/0.918
0.4    26.949/0.814  28.537/0.853  29.706/0.880  30.957/0.903  32.023/0.920
0.5    26.984/0.815  28.681/0.854  29.805/0.882  31.059/0.905  32.140/0.922
0.6    27.320/0.824  28.763/0.860  29.947/0.886  31.158/0.907  32.235/0.924
0.7    27.235/0.823  28.891/0.856  30.030/0.888  31.277/0.910  32.401/0.927
0.8    26.670/0.803  28.911/0.861  30.130/0.895  31.312/0.912  32.503/0.929
0.9    26.671/0.813  28.677/0.860  30.059/0.893  30.953/0.901  32.583/0.930
Table 2. PSNR (dB)/SSIM results for different fractional orders α for the Lena image.
α      Rate = 0.1    Rate = 0.15   Rate = 0.2    Rate = 0.25   Rate = 0.3
0.5    20.427/0.580  22.578/0.651  26.361/0.718  25.553/0.758  26.919/0.801
0.7    24.061/0.720  25.876/0.773  27.534/0.822  28.800/0.854  29.841/0.879
0.9    26.065/0.782  27.729/0.828  28.988/0.860  30.252/0.888  31.362/0.908
1.0    26.483/0.794  28.187/0.839  29.379/0.869  30.567/0.894  31.595/0.912
1.1    26.823/0.803  28.305/0.842  29.631/0.874  30.727/0.897  31.775/0.914
1.3    27.021/0.810  28.506/0.847  29.833/0.879  30.848/0.900  31.865/0.917
1.5    26.996/0.810  28.604/0.852  30.092/0.887  31.024/0.903  32.071/0.922
1.7    26.956/0.809  28.591/0.853  30.125/0.889  31.093/0.906  32.074/0.924
1.9    26.858/0.806  28.561/0.854  30.119/0.889  31.191/0.908  32.252/0.925
2.0    26.700/0.801  28.526/0.852  30.108/0.889  31.250/0.908  32.282/0.926
Table 3. PSNR (dB)/SSIM and time (s) results for different k:s ratios.
Rate   k:s = 1:3           k:s = 3:7           k:s = 5:11           k:s = 7:15
0.1    26.448/0.793 (198)  27.460/0.828 (289)  27.463/0.828 (901)   27.510/0.832 (3715)
0.15   28.553/0.854 (116)  29.108/0.870 (440)  29.214/0.870 (831)   29.161/0.872 (3996)
0.2    30.097/0.888 (93)   30.379/0.896 (366)  30.525/0.896 (1327)  30.837/0.902 (2439)
0.25   31.183/0.908 (120)  31.612/0.916 (311)  32.158/0.916 (1039)  32.034/0.921 (3639)
0.3    32.208/0.925 (99)   32.821/0.933 (208)  32.958/0.933 (1181)  32.869/0.933 (3610)
Each cell: PSNR (dB)/SSIM (time in s).
Table 4. PSNR (dB)/SSIM results for different ε values for the Lena image.
ε      Rate = 0.1    Rate = 0.15   Rate = 0.2    Rate = 0.25   Rate = 0.3
0.1    27.016/0.811  28.731/0.856  30.155/0.890  31.209/0.910  32.204/0.925
0.2    27.097/0.814  28.795/0.858  30.190/0.891  31.286/0.911  32.241/0.925
0.3    27.186/0.817  28.852/0.859  30.238/0.892  31.348/0.912  32.292/0.926
0.4    27.283/0.820  28.939/0.862  30.291/0.893  31.422/0.913  32.365/0.927
0.5    27.407/0.824  29.056/0.864  30.358/0.895  31.469/0.914  32.450/0.928
0.6    27.497/0.827  29.159/0.867  30.396/0.896  31.581/0.916  32.525/0.929
0.7    27.600/0.831  29.222/0.869  30.400/0.897  31.610/0.917  32.636/0.931
0.8    27.405/0.828  29.139/0.867  30.347/0.894  31.731/0.919  32.714/0.932
0.9    26.883/0.810  28.967/0.864  30.156/0.892  31.499/0.913  32.713/0.932
Table 5. PSNR (dB)/SSIM results for different fractional orders α for the Lena image.
α      Rate = 0.1    Rate = 0.15   Rate = 0.2    Rate = 0.25   Rate = 0.3
0.5    21.541/0.626  26.831/0.810  28.642/0.856  29.952/0.885  31.306/0.909
0.7    25.537/0.762  28.106/0.841  29.492/0.872  30.613/0.895  31.920/0.917
0.9    26.482/0.791  28.523/0.854  30.015/0.885  31.189/0.908  32.437/0.926
1.0    26.745/0.800  28.611/0.853  30.113/0.886  31.337/0.909  32.524/0.927
1.1    27.086/0.814  28.725/0.857  30.195/0.887  31.367/0.910  32.553/0.928
1.3    27.495/0.826  28.996/0.866  30.328/0.892  31.450/0.912  32.556/0.928
1.5    27.552/0.829  28.946/0.865  30.319/0.894  31.445/0.912  32.611/0.929
1.7    27.493/0.829  28.977/0.866  30.347/0.894  31.489/0.916  32.714/0.932
1.9    27.190/0.822  29.053/0.868  30.357/0.895  31.617/0.918  32.769/0.932
2.0    26.975/0.820  28.947/0.867  30.500/0.899  31.787/0.920  32.852/0.933
Table 6. PSNR (dB)/SSIM results of various algorithms at different measurement rates.
Image / Algorithm   Rate = 0.1    Rate = 0.15   Rate = 0.2    Rate = 0.25   Rate = 0.3
barbara
  TV-NLR    24.926/0.723  26.458/0.781  27.236/0.807  27.970/0.832  29.113/0.859
  MH-BCS    25.605/0.740  27.162/0.798  28.103/0.815  29.126/0.853  30.015/0.853
  BCS-TV    24.215/0.681  25.560/0.738  26.585/0.778  27.421/0.807  28.167/0.830
  BCS-SPL   23.619/0.640  25.067/0.697  26.209/0.739  27.105/0.770  27.807/0.793
  TVAL3     23.563/0.664  25.814/0.751  26.812/0.788  27.480/0.810  28.335/0.834
  HoFrTV    25.706/0.758  27.253/0.807  28.304/0.834  29.317/0.861  30.211/0.879
boat
  TV-NLR    24.136/0.674  25.553/0.746  26.475/0.782  27.509/0.819  28.350/0.847
  MH-BCS    24.322/0.682  25.879/0.745  27.170/0.796  28.198/0.831  29.011/0.851
  BCS-TV    23.190/0.623  24.691/0.702  26.013/0.758  27.156/0.800  28.152/0.834
  BCS-SPL   22.974/0.596  24.079/0.647  25.090/0.690  26.088/0.733  26.929/0.766
  TVAL3     23.393/0.631  25.035/0.716  26.421/0.773  27.403/0.810  28.445/0.840
  HoFrTV    24.465/0.699  26.258/0.774  27.523/0.816  28.615/0.851  29.953/0.881
cameraman
  TV-NLR    24.561/0.794  26.245/0.838  27.356/0.864  28.904/0.890  29.968/0.909
  MH-BCS    24.366/0.751  26.672/0.813  27.939/0.853  29.229/0.875  30.892/0.912
  BCS-TV    23.613/0.761  25.196/0.807  26.652/0.844  27.864/0.871  28.910/0.892
  BCS-SPL   22.785/0.691  24.284/0.740  25.626/0.783  26.686/0.813  27.744/0.838
  TVAL3     23.169/0.727  25.442/0.817  26.741/0.850  28.018/0.877  29.284/0.900
  HoFrTV    25.102/0.813  26.992/0.858  28.791/0.887  30.244/0.907  31.600/0.926
house
  TV-NLR    29.519/0.830  31.809/0.862  33.018/0.879  34.153/0.894  35.306/0.909
  MH-BCS    30.006/0.828  32.348/0.866  33.235/0.878  34.825/0.899  35.609/0.911
  BCS-TV    27.763/0.790  29.784/0.829  31.111/0.852  32.282/0.870  33.413/0.888
  BCS-SPL   26.627/0.742  28.104/0.770  29.815/0.814  31.059/0.838  31.464/0.840
  TVAL3     26.047/0.750  29.885/0.837  31.385/0.860  32.653/0.878  33.584/0.892
  HoFrTV    30.122/0.838  32.446/0.870  33.881/0.887  35.143/0.903  35.953/0.915
lena
  TV-NLR    26.272/0.789  28.030/0.840  29.334/0.869  30.373/0.893  31.249/0.909
  MH-BCS    26.773/0.797  28.520/0.853  29.757/0.872  30.815/0.897  31.840/0.918
  BCS-TV    25.486/0.752  27.043/0.805  28.478/0.846  29.569/0.872  30.623/0.894
  BCS-SPL   24.56/0.689   26.142/0.742  27.423/0.784  28.473/0.816  29.443/0.841
  TVAL3     24.510/0.724  27.469/0.818  28.633/0.850  29.644/0.876  30.570/0.895
  HoFrTV    27.493/0.829  29.071/0.869  30.403/0.896  31.745/0.916  32.655/0.931
mandrill
  TV-NLR    22.014/0.469  22.483/0.527  23.495/0.602  24.018/0.656  23.835/0.671
  MH-BCS    22.003/0.439  23.087/0.545  24.036/0.602  24.472/0.677  25.172/0.713
  BCS-TV    21.895/0.453  22.695/0.525  23.380/0.587  23.956/0.638  24.499/0.682
  BCS-SPL   22.069/0.449  22.701/0.504  23.264/0.559  23.695/0.599  24.186/0.639
  TVAL3     22.351/0.471  22.875/0.537  23.437/0.592  23.954/0.643  24.590/0.686
  HoFrTV    22.202/0.486  23.189/0.560  23.942/0.621  24.613/0.680  25.231/0.720
peppers
  TV-NLR    26.601/0.829  29.119/0.879  30.762/0.906  32.295/0.925  33.845/0.940
  MH-BCS    26.929/0.805  29.035/0.854  30.614/0.884  31.739/0.902  32.938/0.918
  BCS-TV    25.802/0.790  28.270/0.850  30.001/0.882  31.432/0.905  32.799/0.924
  BCS-SPL   24.241/0.695  25.974/0.747  27.280/0.783  28.544/0.812  29.604/0.836
  TVAL3     24.515/0.767  27.949/0.853  29.709/0.885  31.198/0.908  32.467/0.924
  HoFrTV    28.497/0.868  30.991/0.906  32.755/0.929  34.309/0.944  35.732/0.954
ruler
  TV-NLR    14.799/0.298  15.496/0.441  16.445/0.546  16.827/0.591  19.348/0.783
  MH-BCS    14.895/0.238  19.312/0.638  20.312/0.638  21.965/0.792  22.895/0.856
  BCS-TV    15.172/0.309  15.811/0.451  16.707/0.539  17.457/0.605  18.059/0.656
  BCS-SPL   15.870/0.401  16.550/0.497  17.493/0.595  18.365/0.661  19.141/0.712
  TVAL3     15.300/0.274  15.392/0.363  16.269/0.483  17.158/0.585  18.019/0.661
  HoFrTV    15.117/0.346  16.188/0.512  17.589/0.618  18.783/0.710  20.373/0.814
testpat
  TV-NLR    16.397/0.723  19.372/0.831  19.440/0.792  24.283/0.929  26.333/0.959
  MH-BCS    16.819/0.779  19.135/0.828  20.142/0.856  22.379/0.879  24.798/0.956
  BCS-TV    15.981/0.709  18.798/0.820  21.241/0.888  23.224/0.932  24.934/0.960
  BCS-SPL   14.734/0.497  16.637/0.571  18.178/0.626  19.342/0.666  20.452/0.696
  TVAL3     14.834/0.613  17.719/0.752  19.338/0.799  22.402/0.900  24.212/0.936
  HoFrTV    17.799/0.803  22.762/0.910  26.047/0.944  28.512/0.965  30.191/0.970
Resolutionchart
  TV-NLR    20.726/0.880  25.863/0.950  27.620/0.966  32.921/0.982  35.516/0.988
  MH-BCS    18.511/0.707  20.473/0.783  22.234/0.819  24.008/0.862  25.640/0.886
  BCS-TV    9.173/0.578   9.649/0.6718  10.276/0.721  9.714/0.741   10.012/0.76
  BCS-SPL   16.213/0.550  18.068/0.607  19.215/0.627  20.384/0.654  21.410/0.676
  TVAL3     16.411/0.675  21.741/0.888  24.951/0.942  27.899/0.966  30.475/0.980
  HoFrTV    20.311/0.861  24.204/0.923  26.925/0.956  30.100/0.971  33.498/0.983
Table 7. Average reconstruction time (s).
Algorithm     BCS-SPL  TVAL3  MH-BCS  TV-NLR  HoFrTV  BCS-TV
Rate = 0.1    4.8      3.5    15.6    150.6   322.3   392.3
Rate = 0.15   3.1      3.1    18.3    150.2   327.5   465.4
Rate = 0.2    2.5      2.4    16.6    148.9   316.4   530.3
Rate = 0.25   2.4      2.2    18.4    149.4   326.5   627.4
Rate = 0.3    2.5      2.0    14.2    151.3   324.4   775.6
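All tables above report reconstruction quality as PSNR (dB) alongside SSIM. For reference, the following minimal NumPy sketch shows the standard PSNR definition used for 8-bit images; the helper name `psnr` and the toy data are illustrative, not from the paper, and SSIM (computed as in Wang et al. [29]) is omitted for brevity.

```python
import numpy as np

def psnr(reference, reconstruction, peak=255.0):
    """Peak signal-to-noise ratio (dB) between two images of equal shape."""
    diff = reference.astype(np.float64) - reconstruction.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy example: an 8-bit image and a copy degraded by Gaussian noise (sigma = 5)
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
noisy = np.clip(img + rng.normal(0.0, 5.0, size=img.shape), 0, 255)
print(round(psnr(img, noisy), 2))  # roughly 34 dB for this noise level
```

A PSNR gain of ~1.9 dB at a 0.1 measurement rate, as reported in the abstract, thus corresponds to a noticeable reduction in mean squared reconstruction error relative to TV-NLR.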

Hou, L.; Qin, Y.; Zheng, H.; Pan, Z.; Mei, J.; Hu, Y. Hybrid High-Order and Fractional-Order Total Variation with Nonlocal Regularization for Compressive Sensing Image Reconstruction. Electronics 2021, 10, 150. https://doi.org/10.3390/electronics10020150

