Image Restoration Based on End-to-End Unrolled Network
Abstract
1. Introduction
2. Related Work
2.1. Deep Learning
2.2. IR Methods under Unrolled Optimization
3. Proposed Algorithm for Image Restoration
3.1. Our End-to-End Unrolled Network
Algorithm 1. DCNN-based end-to-end unrolled network for IR
Input: …
Initialization: …
For … do
  (1) Analytic updates: …
end
Output: …
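Since the mathematical expressions of Algorithm 1 are not reproduced in this outline, the snippet below is only a minimal sketch of the unrolled pattern the algorithm describes: a fixed number of stages, each pairing an analytic (data-fidelity) update with a learned denoiser. The names and update rule (`unrolled_restore`, `forward_op`, `adjoint_op`, `denoiser`, the gradient step, and the initialization `x = adjoint_op(y)`) are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def unrolled_restore(y, forward_op, adjoint_op, denoiser, num_iters=6, step=1.0):
    """Hypothetical unrolled loop: alternate a data-fidelity step with a denoising step.

    y           degraded observation
    forward_op  assumed degradation model H(x)
    adjoint_op  assumed adjoint H^T(v)
    denoiser    learned (or hand-crafted) denoising function
    """
    x = adjoint_op(y)  # illustrative initialization, e.g. x_0 = H^T y
    for k in range(num_iters):
        # (1) Analytic update: pull the estimate toward consistency with y.
        z = x - step * adjoint_op(forward_op(x) - y)
        # (2) Denoiser update: regularize the intermediate estimate.
        x = denoiser(z, stage=k)
    return x

# Toy usage with an identity degradation and a box-filter "denoiser".
if __name__ == "__main__":
    y = np.random.rand(64, 64)
    identity = lambda v: v
    box = lambda v, stage=0: (v + np.roll(v, 1, axis=0) + np.roll(v, -1, axis=0)) / 3.0
    print(unrolled_restore(y, identity, identity, box, num_iters=4).shape)
```

In an end-to-end unrolled setting the `denoiser` would be a CNN whose weights (possibly distinct per stage, hence the `stage` argument) are trained jointly through the whole loop; this is stated as a general design remark, not as the paper's specific training procedure.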
3.2. Structure of the Deep Denoiser Network
3.3. Variation in Three Applications
4. Experiments
4.1. Ablation Study
4.2. Image Denoising
4.3. Image Deblurring
4.4. Lensless Imaging
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Boyat, A.K.; Joshi, B.K. A review paper: Noise models in digital image processing. arXiv 2015, arXiv:1505.03489. [Google Scholar] [CrossRef]
- Yang, C.; Feng, H.; Xu, Z.; Chen, Y.; Li, Q. Image Deblurring Utilizing Inertial Sensors and a Short-Long-Short Exposure Strategy. IEEE Trans. Image Process. 2020, 29, 4614–4626. [Google Scholar] [CrossRef]
- Zhang, L.; Zuo, W. Image Restoration: From Sparse and Low-rank Priors to Deep Priors. IEEE Signal Process. Mag. 2017, 34, 172–179. [Google Scholar] [CrossRef]
- Dabov, K.; Foi, A.; Katkovnik, V.; Egiazarian, K. Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 2007, 16, 2080–2095. [Google Scholar] [CrossRef] [PubMed]
- Zoran, D.; Weiss, Y. From learning models of natural image patches to whole image restoration. In Proceedings of the IEEE International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 468–479. [Google Scholar]
- Mairal, J.; Bach, F.; Ponce, J.; Sapiro, G.; Zisserman, A. Non-local sparse models for image restoration. In Proceedings of the IEEE International Conference on Computer Vision, Kyoto, Japan, 29 September–2 October 2009; pp. 2272–2279. [Google Scholar]
- Elad, M.; Aharon, M. Image denoising via Sparse and Redundant Representations over Learned Dictionaries. IEEE Trans. Image Process. 2006, 15, 3736–3745. [Google Scholar] [CrossRef] [PubMed]
- Guo, S.; Yan, Z.; Zhang, K.; Zuo, W.; Zhang, L. Toward convolutional blind denoising of real photographs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 1712–1722. [Google Scholar]
- Anwar, S.; Barnes, N. Real image denoising with feature attention. In Proceedings of the IEEE International Conference on Computer Vision, Seoul, Korea, 27 October–2 November 2019; pp. 3155–3164. [Google Scholar]
- Chang, M.; Li, Q.; Feng, H.; Xu, Z. Spatial-Adaptive Network for Single Image Denoising. arXiv 2020, arXiv:2001.10291. [Google Scholar]
- Danielyan, A.; Katkovnik, V.; Egiazarian, K. BM3D Frames and Variational Image Deblurring. IEEE Trans. Image Process. 2012, 21, 1715–1728. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Ji, H.; Wang, K. Robust Image Deblurring With an Inaccurate Blur Kernel. IEEE Trans. Image Process. 2012, 21, 1624–1634. [Google Scholar] [CrossRef] [PubMed]
- Schmidt, U.; Rother, C.; Nowozin, S.; Jancsary, J.; Roth, S. Discriminative Non-blind Deblurring. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Portland, OR, USA, 23–28 June 2013; pp. 604–611. [Google Scholar]
- Pan, J.; Sun, D.; Pfister, H.; Yang, M. Blind image deblurring using dark channel prior. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 1628–1636. [Google Scholar]
- Kupyn, O.; Budzan, V.; Mykhailych, M.; Mishkin, D.; Matas, J. DeblurGAN: Blind Motion Deblurring Using Conditional Adversarial Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 8183–8192. [Google Scholar]
- Yang, J.; Wright, J.; Huang, T.S.; Ma, Y. Image Super-resolution via Sparse Representation. IEEE Trans. Image Process. 2010, 19, 2861–2873. [Google Scholar] [CrossRef] [PubMed]
- Egiazarian, K.; Katkovnik, V. Single image super-resolution via BM3D sparse coding. In Proceedings of the European Signal Processing Conference, Nice, France, 31 August–4 September 2015; pp. 2849–2853. [Google Scholar]
- Dong, C.; Loy, C.C.; He, K.; Tang, X. Image Super-Resolution Using Deep Convolutional Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 38, 295–307. [Google Scholar] [CrossRef] [Green Version]
- Shi, W.; Caballero, J.; Huszar, F.; Totz, J.; Aitken, A.P.; Bishop, R.; Rueckert, D.; Wang, Z. Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 1874–1883. [Google Scholar]
- Tai, Y.; Yang, J.; Liu, X. Image Super-resolution via Deep Recursive Residual Network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 3147–3155. [Google Scholar]
- Asif, M.S.; Ayremlou, A.; Sankaranarayanan, A.; Veeraraghavan, A.; Baraniuk, R.G. FlatCam: Thin, Lensless Cameras Using Coded Aperture and Computation. IEEE Trans. Comput. Imaging 2017, 3, 384–397. [Google Scholar] [CrossRef]
- Canh, T.N.; Nagahara, H. Deep Compressive Sensing for Visual Privacy Protection in FlatCam Imaging. In Proceedings of the IEEE International Conference on Computer Vision Workshops, Seoul, Korea, 27–28 October 2019; pp. 3978–3986. [Google Scholar]
- Khan, S.S.; Adarsh, V.R.; Boominathan, V.; Tan, J.; Veeraraghavan, A.; Mitra, K. Towards photorealistic reconstruction of highly multiplexed lensless images. In Proceedings of the IEEE International Conference on Computer Vision, Seoul, Korea, 27 October–2 November 2019; pp. 7860–7869. [Google Scholar]
- Monakhova, K.; Yurtsever, J.; Kuo, G.; Antipa, N.; Yanny, K.; Waller, L. Learned reconstructions for practical mask-based lensless imaging. Opt. Express 2019, 27, 28075–28090. [Google Scholar] [CrossRef] [PubMed]
- Ledig, C.; Theis, L.; Huszár, F.; Caballero, J.; Cunningham, A.; Acosta, A.; Aitken, A.; Tejani, A.; Totz, J.; Wang, Z.; et al. Photo-realistic single image super-resolution using a generative adversarial network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4681–4690. [Google Scholar]
- Dong, W.; Zhang, L.; Shi, G.; Li, X. Nonlocally centralized sparse representation for image restoration. IEEE Trans. Image Process. 2013, 22, 1620–1630. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Krishnan, D.; Fergus, R. Fast image deconvolution using hyper-Laplacian priors. In Advances In Neural Information Processing Systems; Curran Associates Inc.: Red Hook, NY, USA, 2009; pp. 1033–1041. [Google Scholar]
- Bioucas-Dias, J.M.; Figueiredo, M.A.T. A new TwIST: Two-step iterative shrinkage/thresholding algorithms for image restoration. IEEE Trans. Image Process. 2007, 16, 2992–3004. [Google Scholar] [CrossRef] [Green Version]
- Burger, H.C.; Schuler, C.J.; Harmeling, S. Image denoising: Can plain neural networks compete with BM3D? In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; pp. 2392–2399. [Google Scholar]
- Zhang, K.; Zuo, W.; Chen, Y.; Meng, D.; Zhang, L. Beyond a Gaussian denoiser: Residual learning of deep cnn for image denoising. IEEE Trans. Image Process. 2017, 26, 3142–3155. [Google Scholar] [CrossRef] [Green Version]
- Xu, L.; Ren, J.S.; Liu, C.; Jia, J. Deep convolutional neural network for image deconvolution. In Advances in Neural Information Processing Systems; MIT Press: Cambridge, MA, USA, 2014; pp. 1790–1798. [Google Scholar]
- Kim, J.; Lee, J.K.; Lee, K.M. Accurate image super-resolution using very deep convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 1646–1654. [Google Scholar]
- Osher, S.; Burger, M.; Goldfarb, D.; Xu, J.; Yin, W. An iterative regularization method for total variation-based image restoration. Multiscale Model. Simul. 2005, 4, 460–489. [Google Scholar] [CrossRef]
- Mairal, J.; Elad, M.; Sapiro, G. Sparse representation for color image restoration. IEEE Trans. Image Process. 2008, 17, 53–69. [Google Scholar] [CrossRef] [Green Version]
- Aharon, M.; Elad, M.; Bruckstein, A. K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation. IEEE Trans. Signal Process. 2006, 54, 4311–4322. [Google Scholar] [CrossRef]
- Dong, W.; Li, X.; Zhang, L.; Shi, G. Sparsity-based image denoising via dictionary learning and structural clustering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Colorado Springs, CO, USA, 20–25 June 2011; pp. 457–464. [Google Scholar]
- Buades, A.; Coll, B.; Morel, J.M. A non-local algorithm for image denoising. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Diego, CA, USA, 20–25 June 2005; Volume 2, pp. 60–65. [Google Scholar]
- Xu, J.; Zhang, L.; Zuo, W.; Zhang, D.; Feng, X. Patch group based nonlocal self-similarity prior learning for image denoising. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 244–252. [Google Scholar]
- Dong, W.; Shi, G.; Li, X. Nonlocal image restoration with bilateral variance estimation: A low-rank approach. IEEE Trans. Image Process. 2013, 22, 700–711. [Google Scholar] [CrossRef]
- Gu, S.; Zhang, L.; Zuo, W.; Feng, X. Weighted nuclear norm minimization with application to image denoising. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 2862–2869. [Google Scholar]
- Barbu, A. Training an active random field for real-time image denoising. IEEE Trans. Image Process. 2009, 18, 2451–2462. [Google Scholar] [CrossRef]
- Roth, S.; Black, M.J. Fields of Experts. Int. J. Comput. Vis. 2009, 82, 205–229. [Google Scholar] [CrossRef]
- Donoho, D.L. De-noising by soft-thresholding. IEEE Trans. Inf. Theory 1995, 41, 613–627. [Google Scholar] [CrossRef] [Green Version]
- Buades, A.; Coll, B.; Morel, J.M. Image denoising methods. A new nonlocal principle. SIAM Rev. 2010, 52, 113–147. [Google Scholar] [CrossRef]
- Cai, J.; Candes, E.J.; Shen, Z. A singular value thresholding algorithm for matrix completion. SIAM J. Optim. 2010, 20, 1956–1982. [Google Scholar] [CrossRef]
- Sun, J.; Tappen, M.F. Separable Markov random field model and its application in low level vision. IEEE Trans. Image Process. 2013, 22, 402–407. [Google Scholar] [CrossRef] [PubMed]
- Schmidt, U.; Roth, S. Shrinkage fields for effective image restoration. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 2774–2781. [Google Scholar]
- Zhang, K.; Zuo, W.; Zhang, L. FFDNet: Toward a fast and flexible solution for CNN-based image denoising. IEEE Trans. Image Process. 2018, 27, 4608–4622. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Zhang, Y.; Tian, Y.; Kong, Y.; Zhong, B.; Fu, Y. Residual Dense Network for Image Restoration. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 43, 2480–2495. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Venkatakrishnan, S.V.; Bouman, C.A.; Wohlberg, B. Plug-and-play priors for model based reconstruction. In Proceedings of the IEEE Global Conference on Signal and Information Processing, Austin, TX, USA, 3–5 December 2013; pp. 945–948. [Google Scholar]
- Dong, W.; Wang, P.; Yin, W.; Shi, G.; Wu, F.; Lu, X. Denoising prior driven deep neural network for image restoration. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 41, 2305–2318. [Google Scholar] [CrossRef] [Green Version]
- Tai, Y.; Yang, J.; Liu, X.; Xu, C. MemNet: A persistent memory network for image restoration. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 4539–4547. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
- Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; pp. 234–241. [Google Scholar]
- Goodfellow, I.J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial Networks. arXiv 2014, arXiv:1406.2661. [Google Scholar]
- Johnson, J.; Alahi, A.; Li, F. Perceptual losses for real-time style transfer and super-resolution. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 11–14 October 2016; pp. 694–711. [Google Scholar]
- Ioffe, S.; Szegedy, C. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. arXiv 2015, arXiv:1502.03167. [Google Scholar]
- Zhang, J.; He, T.; Sra, S.; Jadbabaie, A. Why gradient clipping accelerates training: Theoretical justification for adaptivity. arXiv 2019, arXiv:1905.11881. [Google Scholar]
- Glorot, X.; Bengio, Y. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, Sardinia, Italy, 13–15 May 2010; pp. 249–256. [Google Scholar]
- Chen, Y.; Pock, T. Trainable nonlinear reaction diffusion: A flexible framework for fast and effective image restoration. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1256–1272. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Zhang, K.; Zuo, W.; Gu, S.; Zhang, L. Learning deep CNN denoiser prior for image restoration. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 3929–3938. [Google Scholar]
- Bertocchi, C.; Chouzenoux, E.; Corbineau, M.C.; Pesquet, J.C.; Prato, M. Deep unfolding of a proximal interior point method for image restoration. Inverse Probl. 2020, 36, 34005. [Google Scholar] [CrossRef] [Green Version]
- Teodoro, A.M.; Bioucas-Dias, J.M.; Figueiredo, M.A.T. Image restoration and reconstruction using variable splitting and class-adapted image priors. In Proceedings of the IEEE International Conference on Image Processing, Phoenix, AZ, USA, 25–28 September 2016; pp. 3518–3522. [Google Scholar]
- Kamilov, U.S.; Mansour, H.; Wohlberg, B. A Plug-and-play priors approach for solving nonlinear imaging inverse problems. IEEE Signal Process. Lett. 2017, 24, 1872–1876. [Google Scholar] [CrossRef]
- Tirer, T.; Giryes, R. Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 2019, 28, 1220–1234. [Google Scholar] [CrossRef]
- Brifman, A.; Romano, Y.; Elad, M. Turning a denoiser into a super-resolver using plug and play priors. In Proceedings of the IEEE International Conference on Image Processing, Phoenix, AZ, USA, 25–28 September 2016; pp. 1404–1408. [Google Scholar]
- Sun, Y.; Xu, S.; Li, Y.; Tian, L.; Wohlberg, B.; Kamilov, U.S. Regularized fourier ptychography using an online plug-and-play algorithm. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, Brighton, UK, 12–17 May 2019; pp. 7665–7669. [Google Scholar]
- Sreehari, S.; Venkatakrishnan, S.V.; Wohlberg, B.; Buzzard, G.T.; Drummy, L.F.; Simmons, J.P. Plug-and-play priors for bright field electron tomography and sparse interpolation. IEEE Trans. Comput. Imaging 2016, 2, 408–423. [Google Scholar] [CrossRef] [Green Version]
- Bigdeli, S.; Honzatko, D.; Susstrunk, S.; Dunbar, L.A. Image restoration using plug-and-play cnn map denoisers. arXiv 2019, arXiv:1912.09299. [Google Scholar]
- Chan, S.H.; Wang, X.; Elgendy, O.A. Plug-and-play ADMM for image restoration: Fixed-point convergence and applications. IEEE Trans. Comput. Imaging 2017, 3, 84–98. [Google Scholar] [CrossRef] [Green Version]
- Ryu, E.K.; Liu, J.; Wang, S.; Chen, X.; Wang, Z.; Yin, W. Plug-and-play methods provably converge with properly trained denoisers. arXiv 2019, arXiv:1905.05406. [Google Scholar]
- Zhang, J.; Ghanem, B. ISTA-Net: Interpretable Optimization-inspired deep network for image compressive sensing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 1828–1837. [Google Scholar]
- Jeon, D.S.; Baek, S.H.; Yi, S.; Fu, Q.; Dun, X.; Heidrich, W.; Kim, M.H. Compact snapshot hyperspectral imaging with diffracted rotation. ACM Trans. Graph. 2019, 38, 1–13. [Google Scholar] [CrossRef] [Green Version]
- Zhou, H.; Feng, H.; Xu, W.; Xu, Z.; Li, Q.; Chen, Y. Deep denoiser prior based deep analytic network for lensless image restoration. Opt. Express 2021, 29, 27237–27253. [Google Scholar] [CrossRef]
- Maas, A.L.; Hannun, A.Y.; Ng, A.Y. Rectifier nonlinearities improve neural network acoustic models. In Proceedings of the 30th International Conference on Machine Learning, Atlanta, GA, USA, 16–21 June 2013; Volume 28. [Google Scholar]
- Zhou, H.; Feng, H.; Hu, Z.; Xu, Z.; Li, Q.; Chen, Y. Lensless cameras using a mask based on almost perfect sequence through deep learning. Opt. Express 2020, 28, 30248–30262. [Google Scholar] [CrossRef]
- Abadi, M.; Agarwal, A.; Barham, P.; Brevdo, E.; Chen, Z.; Citro, C.; Corrado, G.S.; Davis, A.; Dean, J.; Devin, M.; et al. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. arXiv 2016, arXiv:1603.04467. [Google Scholar]
- Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
- Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Levin, A.; Weiss, Y.; Durand, F.; Freeman, W.T. Understanding and evaluating blind deconvolution algorithms. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 1964–1971. [Google Scholar]
- Agustsson, E.; Timofte, R. Ntire 2017 challenge on single image super-resolution: Dataset and study. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Honolulu, HI, USA, 21–26 July 2017; pp. 126–135. [Google Scholar]
- Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; et al. Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 2015, 115, 211–252. [Google Scholar] [CrossRef] [Green Version]
- Beck, A.; Teboulle, M. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2009, 2, 183–202. [Google Scholar] [CrossRef] [Green Version]
Method | 17 × 17 kernel in [80], noise σ = 2.55 | 17 × 17 kernel in [80], noise σ = 7.65 | 19 × 19 kernel in [80], noise σ = 2.55 | 19 × 19 kernel in [80], noise σ = 7.65 | 25 × 25 Gaussian, noise σ = 2
---|---|---|---|---|---
DPDNN [51] | 31.97 | 28.65 | 32.53 | 29.01 | 31.01 |
DPDNN-AS | 32.30 | 28.89 | 32.89 | 29.20 | 31.24 |
Ours | 32.42 | 29.01 | 32.91 | 29.25 | 31.38 |
Number of Iterations | PSNR | SSIM |
---|---|---|
17 × 17 motion blur kernel of [80], σn = 2.55 | ||
Ours-1 | 31.02 | 0.857 |
Ours-2 | 31.48 | 0.865 |
Ours-4 | 31.80 | 0.872 |
Ours-6 (Ours) | 31.94 | 0.874 |
Ours-8 | 32.00 | 0.875 |
19 × 19 motion blur kernel of [80], σn = 2.55 | ||
Ours-1 | 31.32 | 0.865 |
Ours-2 | 31.81 | 0.873 |
Ours-4 | 32.18 | 0.880 |
Ours-6 (Ours) | 32.33 | 0.882 |
Ours-8 | 32.41 | 0.884 |
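For reference, the PSNR values in these tables follow the standard full-reference definition, and SSIM follows [79]. Below is a minimal sketch of the PSNR computation, assuming 8-bit images on a 0–255 scale; the `psnr` helper is illustrative, not code from the paper.

```python
import numpy as np

def psnr(reference, estimate, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images on a 0..peak scale."""
    ref = np.asarray(reference, dtype=np.float64)
    est = np.asarray(estimate, dtype=np.float64)
    mse = np.mean((ref - est) ** 2)  # mean squared error over all pixels
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```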
Image size: 256 × 256 × 1

Method | DPDNN | Ours-1 | Ours-2 | Ours-4 | Ours | Ours-8
---|---|---|---|---|---|---
Parameters | 1249K | 393K | 787K | 1573K | 2359K | 3146K
FLOPs | 794G | 29G | 59G | 118G | 177G | 236G
Run time (s) | 0.0712 | 0.0211 | 0.0256 | 0.0453 | 0.0575 | 0.0615
BSD68 (values are PSNR / SSIM):

Noise σ | BM3D [4] | WNNM [40] | FFDNet-cl [48] | TNRD [60] | IRCNN [61] | Ours
---|---|---|---|---|---|---
15 | 31.02 / 0.873 | 31.23 / 0.876 | 31.65 / 0.890 | 31.42 / - | 31.63 / - | 31.70 / 0.891
25 | 28.34 / 0.797 | 28.52 / 0.803 | 29.21 / 0.829 | 28.92 / - | 29.15 / - | 29.25 / 0.831
50 | 24.86 / 0.669 | 24.81 / 0.664 | 26.28 / 0.725 | 25.97 / - | 26.19 / - | 26.28 / 0.726

Kodak24 (values are PSNR / SSIM):

Noise σ | BM3D [4] | WNNM [40] | FFDNet-cl [48] | EPLL [5] | DnCNN-S [30] | Ours
---|---|---|---|---|---|---
15 | 32.23 / 0.877 | 32.46 / 0.880 | 32.81 / 0.892 | 32.10 / 0.881 | 32.72 / 0.890 | 32.85 / 0.893
25 | 29.68 / 0.814 | 29.89 / 0.818 | 30.47 / 0.838 | 29.54 / 0.815 | 30.13 / 0.832 | 30.51 / 0.840
50 | 26.22 / 0.707 | 26.23 / 0.705 | 27.61 / 0.748 | 25.94 / 0.696 | 26.40 / 0.717 | 27.53 / 0.750
Method | Boat | C. Man | Flower | House | Lena256 | Man | Monar. | Parrots | Peppers | Plant | Ave.
---|---|---|---|---|---|---|---|---|---|---|---
Gaussian blur with standard deviation 1.6, | ||||||||||||
IDD-BM3D [11] | 29.97 | 26.65 | 28.40 | 32.49 | 29.58 | 30.43 | 28.37 | 29.62 | 29.43 | 32.25 | 29.72 | |
EPLL [5] | 30.55 | 26.66 | 28.81 | 32.83 | 30.03 | 30.63 | 29.37 | 29.80 | 30.02 | 32.88 | 30.16 | |
NCSR [26] | 31.19 | 27.62 | 29.28 | 33.33 | 30.30 | 30.93 | 29.86 | 30.52 | 30.24 | 33.56 | 30.68 | |
IRCNN [61] | 31.20 | 27.94 | 29.63 | 33.53 | 30.44 | 30.99 | 30.58 | 30.24 | 30.76 | 33.86 | 30.92 | |
DPDNN [51] | 31.10 | 28.08 | 29.66 | 33.27 | 30.71 | 31.13 | 30.76 | 30.81 | 30.66 | 33.89 | 31.01 | |
Ours | 31.44 | 28.53 | 30.01 | 33.73 | 30.95 | 31.28 | 31.06 | 31.13 | 30.96 | 34.69 | 31.38 | |
motion blur kernel of [80], | | | | | | | | | | |
IDD-BM3D [11] | 30.24 | 29.36 | 28.70 | 32.71 | 30.30 | 30.11 | 27.39 | 31.70 | 28.93 | 32.34 | 30.18 | |
EPLL [5] | 31.85 | 29.98 | 30.03 | 33.90 | 31.70 | 31.20 | 30.02 | 32.29 | 31.03 | 33.21 | 31.52 | |
IRCNN [61] | 31.95 | 30.84 | 30.51 | 33.49 | 31.90 | 31.31 | 29.20 | 33.15 | 29.80 | 34.09 | 31.62 | |
DPDNN [51] | 32.02 | 30.45 | 30.39 | 33.90 | 32.35 | 31.65 | 31.15 | 32.86 | 31.13 | 33.82 | 31.97 | |
Ours | 32.50 | 30.90 | 30.77 | 34.66 | 32.72 | 31.88 | 31.67 | 33.29 | 31.37 | 34.45 | 32.42 | |
motion blur kernel of [80], | | | | | | | | | | |
IDD-BM3D [11] | 27.22 | 25.78 | 25.61 | 30.20 | 27.59 | 27.20 | 25.25 | 27.85 | 26.86 | 29.20 | 27.28 | |
EPLL [5] | 26.96 | 24.87 | 25.07 | 28.93 | 27.33 | 27.24 | 23.73 | 26.14 | 27.04 | 28.65 | 26.60 | |
IRCNN [61] | 28.56 | 27.69 | 26.92 | 31.40 | 28.81 | 28.41 | 27.25 | 29.55 | 27.75 | 30.52 | 28.69 | |
DPDNN [51] | 28.60 | 27.28 | 26.82 | 31.08 | 28.85 | 28.51 | 27.47 | 29.45 | 28.18 | 30.30 | 28.65 | |
Ours | 29.03 | 27.63 | 27.20 | 31.75 | 29.17 | 28.70 | 27.84 | 29.70 | 28.38 | 30.70 | 29.01 | |
motion blur kernel of [80], | | | | | | | | | | |
IDD-BM3D [11] | 30.29 | 29.42 | 29.38 | 31.82 | 30.49 | 30.52 | 28.93 | 31.21 | 28.97 | 32.72 | 30.38 | |
EPLL [5] | 32.13 | 30.57 | 30.47 | 33.19 | 32.31 | 31.58 | 30.91 | 32.62 | 31.41 | 33.74 | 31.89 | |
IRCNN [61] | 31.59 | 30.57 | 30.93 | 32.00 | 31.74 | 31.30 | 30.52 | 32.48 | 29.88 | 34.19 | 31.52 | |
DPDNN [51] | 32.59 | 31.06 | 31.36 | 33.63 | 32.94 | 32.05 | 32.02 | 33.31 | 31.66 | 34.67 | 32.53 | |
Ours | 33.13 | 31.42 | 31.81 | 34.27 | 33.21 | 32.25 | 32.43 | 33.61 | 31.85 | 35.15 | 32.91 | |
motion blur kernel of [80], | | | | | | | | | | |
IDD-BM3D [11] | 27.54 | 26.32 | 25.62 | 30.17 | 27.89 | 27.38 | 25.58 | 28.00 | 27.29 | 29.42 | 27.52 | |
EPLL [5] | 27.01 | 25.61 | 24.85 | 29.04 | 27.51 | 27.54 | 24.48 | 26.63 | 27.43 | 28.99 | 26.91 | |
IRCNN [61] | 28.80 | 27.77 | 27.43 | 30.92 | 29.25 | 28.52 | 28.04 | 29.68 | 28.47 | 31.12 | 29.00 | |
DPDNN [51] | 28.84 | 27.61 | 27.23 | 30.78 | 29.27 | 28.72 | 28.20 | 29.76 | 28.64 | 31.02 | 29.01 | |
Ours | 29.17 | 27.77 | 27.53 | 31.24 | 29.39 | 28.86 | 28.43 | 29.91 | 28.77 | 31.44 | 29.25 |
Values are PSNR / SSIM.

Method | Gaussian blur, std. dev. 1.6 | Motion blur kernel of [80] | Motion blur kernel of [80] | Motion blur kernel of [80] | Motion blur kernel of [80]
---|---|---|---|---|---
IDD-BM3D [11] | 29.54 / 0.828 | 30.36 / 0.821 | 26.74 / 0.706 | 30.30 / 0.827 | 26.88 / 0.713
EPLL [5] | 29.48 / 0.823 | 31.13 / 0.842 | 26.82 / 0.706 | 31.49 / 0.852 | 27.09 / 0.715
NCSR [26] | 29.96 / 0.833 | - / - | - / - | - / - | - / -
IRCNN [61] | 29.99 / 0.831 | 31.33 / 0.849 | 28.28 / 0.765 | 30.88 / 0.841 | 28.52 / 0.773
DPDNN [51] | 30.15 / 0.842 | 31.57 / 0.860 | 28.38 / 0.765 | 32.00 / 0.871 | 28.72 / 0.780
DPDNN-AS | 30.29 / 0.847 | 31.84 / 0.870 | 28.58 / 0.775 | 32.28 / 0.879 | 28.89 / 0.786
Ours | 30.43 / 0.850 | 31.94 / 0.874 | 28.68 / 0.779 | 32.33 / 0.882 | 28.99 / 0.790