Deep Image Prior for Super Resolution of Noisy Image
Abstract
1. Introduction
- We present a GAN-based framework [8] that estimates the noise in a target image. Given only a noisy LR image, without any ground truth, our generator reconstructs a clean HR image while the noise is estimated by learning the noise distribution of the LR image.
- We introduce the self-supervision loss (SSL), a novel approach that resolves the dependency on early stopping and the instability of the DIP [1] optimization process.
2. Related Works
3. Proposed Method
3.1. Deep Image Prior (DIP)
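DIP [1] parameterizes the restored image as the output of a randomly initialized CNN $f_{\theta}$ fed with a fixed random code $z$, and optimizes only the network weights to fit the degraded observation; no external training data are used. For super resolution of a noisy LR image $x_{0}$ with a downsampling operator $d(\cdot)$, the vanilla DIP objective takes the form

$$\theta^{*} = \arg\min_{\theta} \big\| d\big(f_{\theta}(z)\big) - x_{0} \big\|_{2}^{2}, \qquad \hat{x}_{\mathrm{HR}} = f_{\theta^{*}}(z).$$

Run to convergence, $f_{\theta}$ eventually fits the noise in $x_{0}$ as well; this is the early-stopping dependency that the components in Sections 3.2 and 3.3 are designed to remove.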
3.2. Noise Estimation Using GAN
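Equations (6) and (7) are not reproduced here, so the following PyTorch fragment is only a rough illustration of how a discriminator can be trained to match the distribution of residual noise, assuming a standard non-saturating GAN loss [8]; the network `D_net` and the tensors `real_noise` and `fake_noise` are hypothetical stand-ins, not the paper's definitions:

```python
import torch
import torch.nn.functional as F

def discriminator_loss(D_net, real_noise, fake_noise):
    """Stand-in for Equation (6): train D_net to score reference noise
    patches as real and generator-induced residuals as fake."""
    return (F.softplus(-D_net(real_noise)).mean()
            + F.softplus(D_net(fake_noise.detach())).mean())

def adversarial_loss(D_net, fake_noise):
    """Stand-in for Equation (7): non-saturating generator loss that pushes
    the generator's residual noise toward the reference noise distribution."""
    return F.softplus(-D_net(fake_noise)).mean()
```

The `softplus` form is the logistic GAN loss, i.e., $-\log \sigma(D(\text{real})) - \log(1 - \sigma(D(\text{fake})))$ for the discriminator.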
3.3. Self-Supervision Loss (SSL)
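Equation (8) is likewise not reproduced above, so the following form is purely an illustrative assumption: one way to remove the dependency on early stopping is to regularize the current output toward a slowly updated exponential moving average $\bar{x}$ of past outputs, which is far more stable than any single iterate:

$$\mathcal{L}_{\mathrm{SSL}} = \big\| f_{\theta}(z) - \bar{x} \big\|_{2}^{2}, \qquad \bar{x} \leftarrow (1-\beta)\,\bar{x} + \beta\, f_{\theta}(z),$$

where $\beta$ is a small, hypothetical update rate. Whatever its exact form, Algorithm 1 shows that the SSL term is computed only in the else-branch, i.e., only once a condition on the iteration count is met.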
3.4. Total Loss Functions
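Per Algorithm 1, the total generator loss of Equation (9) aggregates the reconstruction loss (Equation (2)), the adversarial loss (Equation (7)), and, once active, the self-supervision loss (Equation (8)). A sketch of such a weighted combination, with hypothetical weights $\lambda_{\mathrm{adv}}$ and $\lambda_{\mathrm{SSL}}$:

$$\mathcal{L}_{G} = \mathcal{L}_{\mathrm{rec}} + \lambda_{\mathrm{adv}}\,\mathcal{L}_{\mathrm{adv}} + \lambda_{\mathrm{SSL}}\,\mathcal{L}_{\mathrm{SSL}}.$$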
Algorithm 1: Training scheme of the proposed method.
Require: maximum iteration number T, the noise level, a noisy LR image, a randomly initialized generator, a randomly initialized downsampler, a randomly initialized discriminator
1: …
2: …
3: for t = 1 to T do
4:   Perturb z
5:   …
6:   Calculate the discriminator loss using Equation (6)
7:   Compute the gradient w.r.t. the discriminator parameters
8:   Update the parameters of the discriminator
9:   Perturb z
10:  …
11:  Calculate the reconstruction loss using Equation (2)
12:  …
13:  Calculate the adversarial loss using Equation (7)
14:  if … then
15:    …
16:    …
17:    …
18:  else
19:    …
20:    …
21:    Calculate the self-supervision loss using Equation (8)
22:  end if
23:  Calculate the total loss for the generator using Equation (9)
24:  Compute the gradient w.r.t. the generator parameters
25:  Update the parameters of the generator
26: end for
27: …
28: return clean HR image
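Since several symbolic steps of Algorithm 1 are given only by reference to Equations (2) and (6)–(9), the following self-contained PyTorch sketch shows one way the loop could be wired together. The tiny networks, the loss stand-ins, the SSL warm-up iteration `t_ssl`, the perturbation magnitude, and all hyperparameter values are assumptions for illustration, not the paper's implementation (the reference list suggests a U-Net-style generator [29], a Markovian discriminator [30], Adam [31], and PyTorch [28]):

```python
# Hedged sketch of Algorithm 1; every concrete choice below (network sizes,
# loss forms, weights, the SSL schedule) is an assumption for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

T = 3000                                    # maximum iteration number
sigma = 25.0 / 255.0                        # noise level (hypothetical value)
scale = 4                                   # SR factor (hypothetical value)

# Tiny stand-ins for the paper's networks.
G = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(64, 3, 3, padding=1))          # generator
D_net = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(64, 1, 3, padding=1))      # discriminator

def downsample(x):
    # Fixed bicubic stand-in for the paper's (randomly initialized) downsampler.
    return F.interpolate(x, scale_factor=1.0 / scale, mode='bicubic',
                         align_corners=False)

x0 = torch.rand(1, 3, 64, 64)                        # noisy LR observation
z_base = torch.randn(1, 32, 64 * scale, 64 * scale)  # fixed random code

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D_net.parameters(), lr=1e-3)

x_bar, t_ssl, beta = None, T // 2, 0.01     # SSL pseudo-target and schedule

for t in range(1, T + 1):
    # Steps 4-8: discriminator update.
    z = z_base + 0.03 * torch.randn_like(z_base)     # perturb z (0.03 assumed)
    with torch.no_grad():
        fake_noise = downsample(G(z)) - x0           # residual as "noise" sample
    real_noise = sigma * torch.randn_like(x0)        # reference noise sample
    d_loss = (F.softplus(-D_net(real_noise)).mean()
              + F.softplus(D_net(fake_noise)).mean())  # stand-in for Eq. (6)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Steps 9-25: generator update.
    z = z_base + 0.03 * torch.randn_like(z_base)     # perturb z again
    x_hat = G(z)
    rec_loss = F.mse_loss(downsample(x_hat), x0)     # stand-in for Eq. (2)
    adv_loss = F.softplus(-D_net(downsample(x_hat) - x0)).mean()  # Eq. (7)

    if t < t_ssl:
        # Warm-up branch: accumulate the pseudo-target, no SSL yet.
        out = x_hat.detach()
        x_bar = out if x_bar is None else (1 - beta) * x_bar + beta * out
        ssl_loss = torch.zeros(())
    else:
        x_bar = (1 - beta) * x_bar + beta * x_hat.detach()
        ssl_loss = F.mse_loss(x_hat, x_bar)          # stand-in for Eq. (8)

    total = rec_loss + 0.01 * adv_loss + ssl_loss    # stand-in for Eq. (9)
    opt_g.zero_grad(); total.backward(); opt_g.step()

with torch.no_grad():
    x_sr = G(z_base).clamp(0, 1)                     # step 28: clean HR image
```

Only the control flow above mirrors Algorithm 1; the stand-in losses and the warm-up rule for the SSL branch should be read as placeholders for Equations (2) and (6)–(9) and for the paper's actual activation condition.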
4. Experimental Results
4.1. Dataset
4.2. Implementation Details
4.3. Comparison with Existing Methods
4.3.1. Quantitative Comparison
4.3.2. Qualitative Comparison
4.3.3. Runtime Comparison
4.4. Ablation Study
5. Conclusions
Author Contributions
Funding
Conflicts of Interest
References
1. Ulyanov, D.; Vedaldi, A.; Lempitsky, V. Deep image prior. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 9446–9454.
2. Ma, X.; Hong, Y.; Song, Y. Super resolution land cover mapping of hyperspectral images using the deep image prior-based approach. Int. J. Remote Sens. 2020, 41, 2818–2834.
3. Sidorov, O.; Yngve Hardeberg, J. Deep hyperspectral prior: Single-image denoising, inpainting, super-resolution. In Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops, Seoul, Korea, 27–28 October 2019.
4. Sagel, A.; Roumy, A.; Guillemot, C. Sub-Dip: Optimization on a Subspace with Deep Image Prior Regularization and Application to Superresolution. In Proceedings of the ICASSP 2020—2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain, 4–8 May 2020; pp. 2513–2517.
5. Mataev, G.; Milanfar, P.; Elad, M. DeepRED: Deep image prior powered by RED. In Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops, Seoul, Korea, 27–28 October 2019.
6. Abdelhamed, A.; Lin, S.; Brown, M.S. A high-quality denoising dataset for smartphone cameras. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 1692–1700.
7. Chen, J.; Chen, J.; Chao, H.; Yang, M. Image blind denoising with generative adversarial network based noise modeling. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 3155–3164.
8. Goodfellow, I.J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial networks. arXiv 2014, arXiv:1406.2661.
9. Cattin, D.P. Image restoration: Introduction to signal and image processing. MIAC, University of Basel, 2013.
10. Gandelsman, Y.; Shocher, A.; Irani, M. "Double-DIP": Unsupervised Image Decomposition via Coupled Deep-Image-Priors. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2019; pp. 11026–11035.
11. Bevilacqua, M.; Roumy, A.; Guillemot, C.; Alberi Morel, M.L. Low-Complexity Single-Image Super-Resolution based on Nonnegative Neighbor Embedding. In Proceedings of the British Machine Vision Conference, Surrey, UK, 3–7 September 2012; pp. 135.1–135.10.
12. Zeyde, R.; Elad, M.; Protter, M. On Single Image Scale-Up Using Sparse-Representations. In Proceedings of the International Conference on Curves and Surfaces, Avignon, France, 24–30 June 2010; pp. 711–730.
13. Dong, C.; Loy, C.C.; He, K.; Tang, X. Learning a deep convolutional network for image super-resolution. In Proceedings of the European Conference on Computer Vision, Zurich, Switzerland, 6–12 September 2014; pp. 184–199.
14. Kim, J.; Lee, J.K.; Lee, K.M. Accurate image super-resolution using very deep convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 1646–1654.
15. Wang, Z.; Chen, J.; Hoi, S.C. Deep learning for image super-resolution: A survey. IEEE Trans. Pattern Anal. Mach. Intell. 2020.
16. Ledig, C.; Theis, L.; Huszár, F.; Caballero, J.; Cunningham, A.; Acosta, A.; Aitken, A.; Tejani, A.; Totz, J.; Wang, Z.; et al. Photo-realistic single image super-resolution using a generative adversarial network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4681–4690.
17. Lim, B.; Son, S.; Kim, H.; Nah, S.; Mu Lee, K. Enhanced deep residual networks for single image super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Honolulu, HI, USA, 21–26 July 2017; pp. 136–144.
18. Guo, T.; Seyed Mousavi, H.; Huu Vu, T.; Monga, V. Deep wavelet prediction for image super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Honolulu, HI, USA, 21–26 July 2017; pp. 104–113.
19. Wang, X.; Yu, K.; Wu, S.; Gu, J.; Liu, Y.; Dong, C.; Qiao, Y.; Change Loy, C. ESRGAN: Enhanced super-resolution generative adversarial networks. In Proceedings of the European Conference on Computer Vision Workshops, Munich, Germany, 8–14 September 2018.
20. Zhang, Y.; Tian, Y.; Kong, Y.; Zhong, B.; Fu, Y. Residual dense network for image super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 2472–2481.
21. Zhang, Y.; Li, K.; Li, K.; Wang, L.; Zhong, B.; Fu, Y. Image super-resolution using very deep residual channel attention networks. In Proceedings of the European Conference on Computer Vision, Munich, Germany, 8–14 September 2018; pp. 286–301.
22. Anwar, S.; Barnes, N. Densely residual Laplacian super-resolution. IEEE Trans. Pattern Anal. Mach. Intell. 2020.
23. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
24. Tong, T.; Li, G.; Liu, X.; Gao, Q. Image super-resolution using dense skip connections. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 4799–4807.
25. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708.
26. Arjovsky, M.; Chintala, S.; Bottou, L. Wasserstein generative adversarial networks. In Proceedings of the International Conference on Machine Learning, Sydney, Australia, 6–11 August 2017; pp. 214–223.
27. Fan, W.; Yu, H.; Chen, T.; Ji, S. OCT Image Restoration Using Non-Local Deep Image Prior. Electronics 2020, 9, 784.
28. Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L.; et al. PyTorch: An Imperative Style, High-Performance Deep Learning Library. In Proceedings of the Advances in Neural Information Processing Systems, Vancouver, BC, Canada, 8–12 December 2019; pp. 8026–8037.
29. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; pp. 234–241.
30. Li, C.; Wand, M. Precomputed real-time texture synthesis with Markovian generative adversarial networks. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 11–14 October 2016; pp. 702–716.
31. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Representations (Poster), San Diego, CA, USA, 7–9 May 2015.
32. Niu, B.; Wen, W.; Ren, W.; Zhang, X.; Yang, L.; Wang, S.; Zhang, K.; Cao, X.; Shen, H. Single image super-resolution via a holistic attention network. In Proceedings of the European Conference on Computer Vision, Glasgow, UK, 23–28 August 2020; pp. 191–207.
33. Dai, T.; Cai, J.; Zhang, Y.; Xia, S.T.; Zhang, L. Second-order attention network for single image super-resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 11065–11074.
34. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
35. Zhang, L.; Zhang, L.; Mou, X.; Zhang, D. FSIM: A feature similarity index for image quality assessment. IEEE Trans. Image Process. 2011, 20, 2378–2386.
Method | Scale | Noise | Set5 PSNR | Set5 SSIM | Set5 FSIM | Set14 PSNR | Set14 SSIM | Set14 FSIM
---|---|---|---|---|---|---|---|---
Bicubic | | | 25.74 | 0.8447 | 0.8620 | 24.44 | 0.7723 | 0.8831
DRLN [22] | | | 22.03 | 0.7136 | 0.7545 | 21.40 | 0.6592 | 0.8241
HAN [32] | | | 21.81 | 0.7055 | 0.7519 | 21.19 | 0.6488 | 0.8206
SAN [33] | | | 22.06 | 0.7162 | 0.7573 | 21.36 | 0.6575 | 0.8237
DIP-SR [1] | | | 23.07 | 0.7680 | 0.7881 | 22.59 | 0.7125 | 0.8561
DIP-Seq [1] | | | 26.97 | 0.9050 | 0.8926 | 25.64 | 0.8253 | 0.9100
Ours | | | 27.81 | 0.9127 | 0.8886 | 24.96 | 0.7871 | 0.8658
Bicubic | | | 22.91 | 0.7473 | 0.7882 | 22.06 | 0.6703 | 0.8212
DRLN [22] | | | 17.71 | 0.5438 | 0.6181 | 17.38 | 0.4925 | 0.7187
HAN [32] | | | 17.73 | 0.5413 | 0.6273 | 17.29 | 0.4850 | 0.7214
SAN [33] | | | 17.73 | 0.5444 | 0.6284 | 17.24 | 0.4858 | 0.7214
DIP-SR [1] | | | 18.35 | 0.5676 | 0.6478 | 18.44 | 0.5330 | 0.7469
DIP-Seq [1] | | | 22.36 | 0.7695 | 0.7872 | 23.08 | 0.7367 | 0.8643
Ours | | | 26.72 | 0.8906 | 0.8806 | 24.15 | 0.7631 | 0.8495
Bicubic | | | 22.81 | 0.7862 | 0.7945 | 21.81 | 0.6553 | 0.7954
DRLN [22] | | | 20.77 | 0.6913 | 0.7425 | 19.85 | 0.5931 | 0.7513
HAN [32] | | | 20.92 | 0.6909 | 0.7453 | 19.88 | 0.5900 | 0.7538
SAN [33] | | | 20.58 | 0.6804 | 0.7430 | 19.75 | 0.5745 | 0.7533
DIP-SR [1] | | | 21.43 | 0.7153 | 0.7627 | 20.69 | 0.6241 | 0.7874
DIP-Seq [1] | | | 22.86 | 0.7960 | 0.8084 | 22.23 | 0.6988 | 0.8372
Ours | | | 25.13 | 0.8710 | 0.8457 | 23.26 | 0.7742 | 0.8414
Bicubic | | | 21.04 | 0.7025 | 0.7563 | 20.31 | 0.5933 | 0.7549
DRLN [22] | | | 16.91 | 0.5312 | 0.6234 | 16.15 | 0.4359 | 0.6373
HAN [32] | | | 17.31 | 0.5371 | 0.6360 | 16.66 | 0.4466 | 0.6529
SAN [33] | | | 16.95 | 0.5242 | 0.6330 | 16.29 | 0.4343 | 0.6463
DIP-SR [1] | | | 17.58 | 0.5421 | 0.6479 | 17.16 | 0.4610 | 0.6753
DIP-Seq [1] | | | 18.83 | 0.6150 | 0.6976 | 18.76 | 0.5481 | 0.7428
Ours | | | 22.03 | 0.7696 | 0.7909 | 21.10 | 0.6589 | 0.7931
Method | DRLN [22] | HAN [32] | SAN [33] | DIP-SR [1] | DIP-Seq [1] | Ours |
---|---|---|---|---|---|---|
Runtime (s) | 0.663 | 1.258 | 0.946 | 149.815 | 225.087 | 146.334 |
Method / Loss | Baby PSNR | Baby SSIM | Baby FSIM | Bird PSNR | Bird SSIM | Bird FSIM | Butterfly PSNR | Butterfly SSIM | Butterfly FSIM | Head PSNR | Head SSIM | Head FSIM | Woman PSNR | Woman SSIM | Woman FSIM | Avg. PSNR | Avg. SSIM | Avg. FSIM
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
Baseline | 19.32 | 0.5766 | 0.7806 | 18.43 | 0.6129 | 0.6041 | 17.36 | 0.7302 | 0.6175 | 18.54 | 0.3808 | 0.6051 | 18.12 | 0.5375 | 0.6319 | 18.35 | 0.5676 | 0.6478
+ noise estimation | 27.94 | 0.9036 | 0.9335 | 25.49 | 0.8921 | 0.8382 | 23.92 | 0.9253 | 0.8580 | 26.50 | 0.7661 | 0.8449 | 25.38 | 0.8980 | 0.8729 | 25.85 | 0.8770 | 0.8695
+ noise estimation + SSL | 28.09 | 0.8983 | 0.9226 | 26.67 | 0.9129 | 0.8824 | 24.91 | 0.9401 | 0.8854 | 27.23 | 0.7840 | 0.8233 | 26.68 | 0.9175 | 0.8893 | 26.72 | 0.8906 | 0.8806
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).