Image Super-Resolution via Dual-Level Recurrent Residual Networks
Abstract
1. Introduction
- This paper proposes a single-image super-resolution network based on dual-level recurrent residuals (DLRRN), which uses both feedforward and feedback connections to generate HR images with rich details. The recursive structure with feedback connections keeps the parameter count small while providing a powerful early-reconstruction capability.
- Inspired by [14], a cross-level feature fusion block (CLFFB) is designed for the SR task as the core part of the DLRRB; it enhances features by effectively processing the cross-level information flow.
- Since the self-attention module [15] can describe the spatial correlation between any two positions in an image, we use it to build the self-attention feature extraction block (SAFEB). SAFEB models local features through contextual relevance and, together with the MS-SSIM loss [16], improves reconstruction performance and produces better visual results.
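As a rough illustration of the mechanism SAFEB builds on, the sketch below implements plain spatial self-attention over a feature map, where every output position is a weighted sum over all positions, so any two pixels can interact regardless of distance. The function name, shapes, and projection matrices are illustrative, not the authors' implementation.

```python
import numpy as np

def spatial_self_attention(feat, wq, wk, wv):
    """Toy spatial self-attention over a (C, H, W) feature map.

    wq, wk, wv: (C, C) query/key/value projection matrices
    (stand-ins for learned 1x1 convolutions).
    """
    c, h, w = feat.shape
    x = feat.reshape(c, h * w)                  # flatten spatial dims -> (C, N)
    q, k, v = wq @ x, wk @ x, wv @ x            # project to query/key/value
    logits = q.T @ k / np.sqrt(c)               # (N, N) pairwise affinities
    logits -= logits.max(axis=1, keepdims=True) # numerically stable softmax
    attn = np.exp(logits)
    attn /= attn.sum(axis=1, keepdims=True)     # each row sums to 1
    out = v @ attn.T                            # aggregate values -> (C, N)
    return out.reshape(c, h, w) + feat          # residual connection
```

With the value projection zeroed, the residual connection passes the input through unchanged, which is one reason such blocks train stably inside deep residual networks.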
2. Related Work
2.1. Deep-Learning-Based Image Super-Resolution
2.2. Feedback Mechanism
2.3. Attention Mechanism
3. Methods
3.1. Network Structure
3.2. Dual-Level Recurrent Residual Block
3.3. Cross-Level Feature Fusion Block
3.4. Self-Attention Feature Extraction Block
3.5. Loss Function
3.6. Network Details
4. Experimental Section
4.1. Implementation Details
4.2. Experimental Analysis
4.2.1. Study of T and G
4.2.2. Analysis of Loss Function
4.2.3. Ablation Analysis of SAFEB
4.3. Comparison with Previous Work
4.3.1. Network Parameters and Complexity
4.3.2. Results of Evaluation on BI Model
4.3.3. Results of Evaluation on BD and DN Models
5. Conclusions and Discussion
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
Abbreviation | Meaning |
---|---|
LR (HR) | Low- (high-) resolution |
DLRRB | Dual-level recurrent residual block |
HRL (LRL) | HR level (LR level) |
CLFFB_S (CLFFB_L) | Cross-level feature fusion block of the HRL (LRL) |
SAFEB | Self-attention feature extraction block |
CLFFB | Collective term for CLFFB_S and CLFFB_L |
DRB | Dimension reduction block |
BI | Obtaining the LR image by bicubic downsampling of the HR image |
BD | Blurring the HR image with a Gaussian kernel of standard deviation 1.6, then downsampling |
DN | Adding Gaussian noise with noise level 30 to the HR image, then applying standard bicubic downsampling |
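The three degradation models (BI, BD, DN) can be sketched as follows. This is a minimal illustration, not the paper's preprocessing code: SciPy's cubic-spline `zoom` stands in for true bicubic resampling, and all function names are our own.

```python
import numpy as np
from scipy import ndimage

def degrade_bi(hr, scale):
    """BI: bicubic-style downsampling (cubic spline as a stand-in)."""
    return ndimage.zoom(hr, 1.0 / scale, order=3)

def degrade_bd(hr, scale, sigma=1.6):
    """BD: Gaussian blur with std 1.6, then downsampling."""
    blurred = ndimage.gaussian_filter(hr, sigma=sigma)
    return ndimage.zoom(blurred, 1.0 / scale, order=3)

def degrade_dn(hr, scale, noise_level=30, seed=0):
    """DN: add Gaussian noise (level 30 on a 0-255 scale), then downsample."""
    rng = np.random.default_rng(seed)
    noisy = hr + rng.normal(0.0, noise_level, hr.shape)
    return ndimage.zoom(np.clip(noisy, 0, 255), 1.0 / scale, order=3)
```

Note that BD blurs before downsampling while DN adds noise before downsampling, so the LR inputs carry qualitatively different artifacts, which is why the paper evaluates them separately.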
References
- Shi, W.Z.; Ledig, J.C.; Zhuang, X.H.; Bai, W.J.; Bhatia, K.; Marvao, A.; Dawes, T.; Rueckert, D. Cardiac Image Super-Resolution with Global Correspondence Using Multi-Atlas PatchMatch. In Proceedings of the 16th International Conference on Medical Image Computing and Computer Assisted Intervention, Nagoya, Japan, 22–26 September 2013; pp. 9–16. [Google Scholar]
- Zou, W.W.; Yuen, P.C. Very low resolution face recognition problem. IEEE Trans. Image Process. 2011, 21, 327–340. [Google Scholar] [CrossRef] [PubMed]
- Thornton, M.W.; Atkinson, P.M.; Holland, D.A. Sub-pixel mapping of rural land cover objects from fine spatial resolution satellite sensor imagery using super-resolution pixel-swapping. Int. J. Remote Sens. 2006, 27, 473–491. [Google Scholar] [CrossRef]
- Zhang, L.; Wu, X. An edge-guided image interpolation algorithm via directional filtering and data fusion. IEEE Trans. Image Process. 2006, 15, 2226–2238. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Zhang, K.; Gao, X.; Tao, D.; Li, X. Single image super-resolution with non-local means and steering kernel regression. IEEE Trans. Image Process. 2012, 21, 4544–4556. [Google Scholar] [CrossRef] [PubMed]
- Kim, J.; Lee, J.K.; Lee, K.M. Accurate image super-resolution using very deep convolutional networks. In Proceedings of the 2016 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26–30 June 2016; pp. 1646–1654. [Google Scholar]
- Kim, J.; Lee, J.K.; Lee, K.M. Deeply-recursive convolutional network for image super-resolution. In Proceedings of the 2016 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26–30 June 2016; pp. 1637–1645. [Google Scholar]
- Tai, Y.; Yang, J.; Liu, X. Image super-resolution via deep recursive residual network. In Proceedings of the 2017 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 3147–3155. [Google Scholar]
- Kravitz, D.J.; Saleem, K.S.; Baker, C.I.; Ungerleider, L.G.; Mishkin, M. The ventral visual pathway: An expanded neural framework for the processing of object quality. Trends Cogn. Sci. 2013, 17, 26–49. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Dong, C.; Loy, C.C.; He, K.; Tang, X. Image super-resolution using deep convolutional networks. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 38, 295–307. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Shen, T.; Zhou, T.; Long, G.; Jiang, J.; Pan, S.; Zhang, C. Disan: Directional self-attention network for rnn/cnn-free language understanding. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7 February 2018; pp. 5446–5455. [Google Scholar]
- Li, Z.; Yang, J.; Liu, Z.; Yang, X.; Jeon, G.; Wu, W. Feedback network for image super-resolution. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2019; pp. 3867–3876. [Google Scholar]
- Han, W.; Chang, S.; Liu, D. Image super-resolution via dual-state recurrent networks. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 1654–1663. [Google Scholar]
- Woo, S.; Park, J.; Lee, J.Y.; Kweon, I.S. Cbam: Convolutional block attention module. In Proceedings of the 15th European Conference on Computer Vision, Munich, Germany, 8–14 September 2018; pp. 3–19. [Google Scholar]
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Polosukhin, I. Attention is all you need. In Proceedings of the Advances in Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; pp. 5998–6008. [Google Scholar]
- Zhao, H.; Gallo, O.; Frosio, I.; Kautz, J. Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 2016, 3, 47–57. [Google Scholar] [CrossRef]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the 2016 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26–30 June 2016; pp. 770–778. [Google Scholar]
- Ledig, C.; Theis, L.; Huszár, F.; Caballero, J.; Cunningham, A.; Acosta, A.; Shi, W. Photo-realistic single image super-resolution using a generative adversarial network. In Proceedings of the 2017 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4681–4690. [Google Scholar]
- Lim, B.; Son, S.; Kim, H.; Nah, S.; Mu Lee, K. Enhanced deep residual networks for single image super-resolution. In Proceedings of the 2017 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Honolulu, HI, USA, 21–26 July 2017; pp. 136–144. [Google Scholar]
- Tong, T.; Li, G.; Liu, X.; Gao, Q. Image super-resolution using dense skip connections. In Proceedings of the 2017 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4799–4807. [Google Scholar]
- Huang, G.; Liu, Z.; van der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the 2017 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708. [Google Scholar]
- Zhang, Y.; Tian, Y.; Kong, Y.; Zhong, B.; Fu, Y. Residual dense network for image super-resolution. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 2472–2481. [Google Scholar]
- Haris, M.; Shakhnarovich, G.; Ukita, N. Deep back-projection networks for super-resolution. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 1664–1673. [Google Scholar]
- Cao, C.; Liu, X.; Yang, Y.; Yu, Y.; Wang, J.; Wang, Z.; Huang, T.S. Look and think twice: Capturing top-down visual attention with feedback convolutional neural networks. In Proceedings of the 2015 IEEE/CVF International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 2956–2964. [Google Scholar]
- Bevilacqua, M.; Roumy, A.; Guillemot, C.; Alberi-Morel, M.L. Low-Complexity Single-Image Super-Resolution based on Nonnegative Neighbor Embedding. In Proceedings of the Electronic Proceedings of the British Machine Vision Conference 2012 (BMVC), Guildford, UK, 3–7 September 2012; pp. 1–10. [Google Scholar]
- Lin, Z.; Feng, M.; Santos, C.N.D.; Yu, M.; Xiang, B.; Zhou, B.; Bengio, Y. A structured self-attentive sentence embedding. arXiv 2017, arXiv:1703.03130. [Google Scholar]
- Zhang, H.; Goodfellow, I.; Metaxas, D.; Odena, A. Self-attention generative adversarial networks. In Proceedings of the 36th International Conference on Machine Learning, Long Beach, CA, USA, 9–15 June 2019; pp. 7354–7363. [Google Scholar]
- Hu, J.; Shen, L.; Sun, G. Squeeze-and-excitation networks. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 7132–7141. [Google Scholar]
- Wang, X.; Yu, K.; Dong, C. Recovering realistic texture in image super-resolution by deep spatial feature transform. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 606–615. [Google Scholar]
- Dong, R.; Zhang, L.; Fu, H. Rrsgan: Reference-based super-resolution for remote sensing image. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–17. [Google Scholar] [CrossRef]
- Wang, L.; Wang, Y.; Dong, X.; Xu, Q.; Yang, J.; An, W.; Guo, Y. Unsupervised degradation representation learning for blind super-resolution. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 19–25 June 2021; pp. 10581–10590. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the 2015 IEEE/CVF International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 1026–1034. [Google Scholar]
- Agustsson, E.; Timofte, R. NTIRE 2017 Challenge on Single Image Super-Resolution: Dataset and Study. In Proceedings of the 2017 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Honolulu, HI, USA, 21–26 July 2017; pp. 1122–1131. [Google Scholar]
- Yang, J.C.; Wright, J.; Huang, T.S.; Ma, Y. Image Super-Resolution Via Sparse Representation. IEEE Trans. Image Process. 2010, 19, 2861–2873. [Google Scholar] [CrossRef] [PubMed]
- Martin, D.R.; Fowlkes, C.C.; Tal, D.; Malik, J. A Database of Human Segmented Natural Images and its Application to Evaluating Segmentation Algorithms and Measuring Ecological Statistics. In Proceedings of the 2001 IEEE/CVF International Conference on Computer Vision, Vancouver, BC, Canada, 7–14 July 2001; pp. 416–425. [Google Scholar]
- Huang, J.B.; Singh, A.; Ahuja, N. Single image super-resolution from transformed self-exemplars. In Proceedings of the 2015 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 5197–5206. [Google Scholar]
- Matsui, Y.; Ito, K.; Aramaki, Y.; Fujimoto, A.; Ogawa, T.; Yamasaki, T.; Aizawa, K. Sketch-based manga retrieval using manga109 dataset. Multimed. Tools Appl. 2017, 76, 21811–21838. [Google Scholar] [CrossRef] [Green Version]
- Ma, C.; Yang, C.Y.; Yang, X.; Yang, M.H. Learning a no-reference quality metric for single-image super-resolution. Comput. Vis. Image Underst. 2017, 158, 1–16. [Google Scholar] [CrossRef] [Green Version]
- Zhang, R.; Isola, P.; Efros, A.A.; Shechtman, E.; Wang, O. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 586–595. [Google Scholar]
- Tai, Y.; Yang, J.; Liu, X.M.; Xu, C.Y. MemNet: A Persistent Memory Network for Image Restoration. In Proceedings of the 2017 IEEE/CVF International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 4549–4557. [Google Scholar]
- Zhang, K.; Gool, L.V.; Timofte, R. Deep unfolding network for image super-resolution. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2020; pp. 3217–3226. [Google Scholar]
- Liu, J.; Zhang, W.; Tang, Y.; Tang, J.; Wu, G. Residual feature aggregation network for image super-resolution. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2020; pp. 2359–2368. [Google Scholar]
- Zhang, K.; Zuo, W.; Gu, S.; Zhang, L. Learning deep CNN denoiser prior for image restoration. In Proceedings of the 2017 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 3929–3938. [Google Scholar]
- Zhang, K.; Zuo, W.; Zhang, L. Learning a single convolutional super-resolution network for multiple degradations. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 3262–3271. [Google Scholar]
Scale | Kernel Size | Padding | Stride |
---|---|---|---|
2 | 6 | 2 | 2 |
3 | 7 | 2 | 3 |
4 | 8 | 2 | 4 |
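The deconvolution settings in the table above can be sanity-checked with the standard transposed-convolution output-size formula, out = (in − 1) · stride − 2 · padding + kernel: each row maps an input of size n to exactly scale · n. A short check (our own helper, not the paper's code):

```python
def deconv_out(in_size, kernel, padding, stride):
    """Output length of a transposed convolution (no output padding)."""
    return (in_size - 1) * stride - 2 * padding + kernel

# Each (scale, kernel, padding, stride) row from the table upsamples
# an input of any size n to exactly scale * n.
for scale, k, p, s in [(2, 6, 2, 2), (3, 7, 2, 3), (4, 8, 2, 4)]:
    for n in (10, 48, 60):
        assert deconv_out(n, k, p, s) == scale * n
```

This explains the pattern in the table: for scale s, choosing kernel = 2s + 2, padding = 2, and stride = s makes the (n − 1)·s term expand the input while the kernel/padding terms cancel the off-by-(s − 2) remainder.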
Degradation | Definition |
---|---|
2 | Under BI degradation, the scaling factor is 2. |
3 | Under BI degradation, the scaling factor is 3. |
4 | Under BI degradation, the scaling factor is 4. |
3 | Under DN degradation, the scaling factor is 3. |
3 | Under BD degradation, the scaling factor is 3. |
Scale | ×2 | ×3 | ×4 |
---|---|---|---|
Input patch size | 60 | 50 | 40 |
DLRRN | DLRRN-SAFEB | ||
---|---|---|---|
Set5 (PI/LPIPS) | 5.944/0.1730 | 6.054/0.1745 | 6.123/0.1745 |
 | a | b |
---|---|---|
Base | √ | |
Base + SAFEB | | √ |
PSNR (×4) | 32.26 | 32.28 |
Scale | Method | Set5 PSNR/SSIM | Set14 PSNR/SSIM | BSD100 PSNR/SSIM | Urban100 PSNR/SSIM | Manga109 PSNR/SSIM |
---|---|---|---|---|---|---|
2 | Bicubic | 33.66/0.9299 | 30.24/0.8688 | 29.56/0.8431 | 26.88/0.8403 | 30.30/0.9339 |
 | SRCNN [10] | 36.66/0.9542 | 32.45/0.9067 | 31.36/0.8879 | 29.50/0.8946 | 35.60/0.9663 |
 | VDSR [6] | 37.53/0.9590 | 33.05/0.9130 | 31.90/0.8960 | 30.77/0.9140 | 37.22/0.9750 |
 | DRRN [8] | 37.74/0.9591 | 33.23/0.9136 | 32.05/0.8973 | 31.23/0.9188 | 37.60/0.9736 |
 | MemNet [40] | 37.78/0.9597 | 33.28/0.9142 | 32.08/0.8978 | 31.31/0.9195 | 37.72/0.9740 |
 | EDSR [19] | 38.11/0.9602 | 33.92/0.9195 | 32.32/0.9013 | 32.93/0.9351 | 39.10/0.9773 |
 | D-DBPN [23] | 38.09/0.9600 | 33.85/0.9190 | 32.27/0.9000 | 32.55/0.9324 | 38.89/0.9775 |
 | SRFBN [12] | 38.02/0.9601 | 33.74/0.9190 | 32.21/0.9004 | 32.53/0.9320 | 38.99/0.9771 |
 | USRNet [41] | 37.71/0.9592 | 33.49/0.9156 | 32.10/0.8981 | 31.79/0.9255 | 38.37/0.9760 |
 | RFANet [42] | 38.26/0.9615 | 34.16/0.9220 | 32.41/0.9026 | 33.33/0.9389 | 39.44/0.9783 |
 | DLRRN (ours) | 38.19/0.9612 | 34.05/0.9219 | 32.33/0.9012 | 33.02/0.9357 | 39.24/0.9783 |
3 | Bicubic | 30.39/0.8682 | 27.55/0.7742 | 27.21/0.7385 | 24.46/0.7349 | 26.95/0.8556 |
 | SRCNN [10] | 32.75/0.9090 | 29.30/0.8215 | 28.41/0.7863 | 26.24/0.7989 | 30.48/0.9117 |
 | VDSR [6] | 33.67/0.9210 | 29.78/0.8320 | 28.83/0.7990 | 27.14/0.8290 | 32.01/0.9340 |
 | DRRN [8] | 34.03/0.9244 | 29.96/0.8349 | 28.95/0.8004 | 27.53/0.8378 | 32.42/0.9359 |
 | MemNet [40] | 34.09/0.9248 | 30.00/0.8350 | 28.96/0.8001 | 27.56/0.8376 | 32.51/0.9369 |
 | EDSR [19] | 34.65/0.9280 | 30.52/0.8462 | 29.25/0.8092 | 28.80/0.8653 | 34.17/0.9476 |
 | D-DBPN [23] | -/- | -/- | -/- | -/- | -/- |
 | SRFBN [12] | 34.59/0.9283 | 30.45/0.8450 | 29.16/0.8071 | 28.58/0.8628 | 34.03/0.9462 |
 | USRNet [41] | 34.43/0.9279 | 30.51/0.8446 | 29.18/0.8076 | 28.38/0.8575 | 34.05/0.9466 |
 | RFANet [42] | 34.79/0.9300 | 30.67/0.8487 | 29.34/0.8115 | 29.15/0.8720 | 34.59/0.9506 |
 | DLRRN (ours) | 34.74/0.9297 | 30.61/0.8473 | 29.27/0.8088 | 29.06/0.8684 | 34.32/0.9489 |
4 | Bicubic | 28.42/0.8104 | 26.00/0.7027 | 25.96/0.6675 | 23.14/0.6577 | 24.89/0.7866 |
 | SRCNN [10] | 30.48/0.8628 | 27.50/0.7513 | 26.90/0.7101 | 24.52/0.7221 | 27.58/0.8555 |
 | VDSR [6] | 31.35/0.8830 | 28.02/0.7680 | 27.29/0.7260 | 25.18/0.7540 | 28.83/0.8870 |
 | DRRN [8] | 31.68/0.8888 | 28.21/0.7721 | 27.38/0.7284 | 25.44/0.7638 | 29.18/0.8914 |
 | MemNet [40] | 31.74/0.8893 | 28.26/0.7723 | 27.40/0.7281 | 25.50/0.7630 | 29.42/0.8942 |
 | EDSR [19] | 32.46/0.8968 | 28.80/0.7876 | 27.71/0.7420 | 26.64/0.8033 | 31.02/0.9148 |
 | D-DBPN [23] | 32.47/0.8980 | 28.82/0.7860 | 27.72/0.7400 | 26.38/0.7946 | 30.91/0.9137 |
 | SRFBN [12] | 32.36/0.8970 | 28.77/0.7863 | 27.67/0.7392 | 26.49/0.7979 | 30.99/0.9142 |
 | USRNet [41] | 32.42/0.8978 | 28.83/0.7871 | 27.69/0.7404 | 26.44/0.7976 | 31.11/0.9154 |
 | RFANet [42] | 32.66/0.9004 | 28.88/0.7894 | 27.79/0.7442 | 26.92/0.8112 | 31.41/0.9187 |
 | DLRRN (ours) | 32.55/0.8994 | 28.90/0.7887 | 27.74/0.7408 | 26.82/0.8057 | 31.38/0.9176 |
Method | Model | Set5 PSNR/SSIM | Set14 PSNR/SSIM | BSD100 PSNR/SSIM | Urban100 PSNR/SSIM | Manga109 PSNR/SSIM |
---|---|---|---|---|---|---|
Bicubic | BD | 28.34/0.8161 | 26.12/0.7106 | 26.02/0.6733 | 23.20/0.6661 | 25.03/0.7987 |
 | DN | 24.14/0.5445 | 23.14/0.4828 | 22.94/0.4461 | 21.63/0.4701 | 23.08/0.5448 |
SRCNN [10] | BD | 31.63/0.8888 | 28.52/0.7924 | 27.76/0.7526 | 25.31/0.7612 | 28.79/0.8851 |
 | DN | 27.16/0.7672 | 25.49/0.6580 | 25.11/0.6151 | 23.32/0.6500 | 25.78/0.7889 |
VDSR [6] | BD | 33.30/0.9159 | 29.67/0.8269 | 28.63/0.7903 | 26.75/0.8145 | 31.66/0.9260 |
 | DN | 27.72/0.7872 | 25.92/0.6786 | 25.52/0.6345 | 23.83/0.6797 | 26.41/0.8130 |
IRCNN_G [43] | BD | 33.38/0.9182 | 29.73/0.8292 | 28.65/0.7922 | 26.77/0.8154 | 31.15/0.9245 |
 | DN | 24.85/0.7205 | 23.84/0.6091 | 23.89/0.5688 | 21.96/0.6018 | 23.18/0.7466 |
IRCNN_C [43] | BD | 29.55/0.8246 | 27.33/0.7135 | 26.46/0.6572 | 24.89/0.7172 | 28.68/0.8574 |
 | DN | 26.18/0.7430 | 24.68/0.6300 | 24.52/0.5850 | 22.63/0.6205 | 24.74/0.7701 |
SRMD(NF) [44] | BD | 34.09/0.9242 | 30.11/0.8364 | 28.98/0.8009 | 27.50/0.8370 | 32.97/0.9391 |
 | DN | 27.74/0.8026 | 26.13/0.6974 | 25.64/0.6495 | 24.28/0.7092 | 26.72/0.8424 |
RDN [22] | BD | 34.57/0.9280 | 30.53/0.8447 | 29.23/0.8079 | 28.46/0.8581 | 33.97/0.9465 |
 | DN | 28.46/0.8151 | 26.60/0.7101 | 25.93/0.6573 | 24.92/0.7362 | 28.00/0.8590 |
SRFBN [12] | BD | 34.65/0.9283 | 30.64/0.8435 | 29.18/0.8066 | 28.43/0.8578 | 34.02/0.9462 |
 | DN | 28.52/0.8180 | 26.58/0.7140 | 25.94/0.6615 | 24.96/0.7120 | 27.98/0.8612 |
RFANet [42] | BD | 34.77/0.9292 | 30.68/0.8473 | 29.34/0.8104 | 28.89/0.8661 | 34.49/0.9492 |
 | DN | -/- | -/- | -/- | -/- | -/- |
DLRRN (ours) | BD | 34.80/0.9295 | 30.68/0.8469 | 29.32/0.8094 | 28.95/0.8658 | 34.57/0.9490 |
 | DN | 28.64/0.8210 | 26.70/0.7147 | 26.00/0.6630 | 25.24/0.7485 | 28.24/0.8650 |
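The PSNR values reported in the tables above follow the standard definition over 8-bit images; a minimal version is shown below (the paper's exact channel and border-cropping conventions are not reproduced here).

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")          # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

SSIM, the companion metric in the tables, additionally compares local luminance, contrast, and structure statistics rather than raw pixel error, which is why the two metrics can rank methods differently.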
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Tan, C.; Wang, L.; Cheng, S. Image Super-Resolution via Dual-Level Recurrent Residual Networks. Sensors 2022, 22, 3058. https://doi.org/10.3390/s22083058