Online Learning for Reference-Based Super-Resolution
Abstract
1. Introduction
- We propose an online learning method for reference-based super-resolution that exploits various data pairs for supervision. To this end, we present three online learning variants for SISR models and four for RefSR models;
- Our method is simple yet effective, and can be seamlessly combined with both SISR and RefSR models;
- Our method yields consistent performance improvements and is not significantly affected by the degree of similarity between the reference and input images.
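The core idea above — adapting a pre-trained SR model online, at test time, using supervision pairs constructed from the test images themselves — can be illustrated with a toy sketch. This is not the authors' implementation: the `OnlineSR` class, the `downscale` helper, and all hyperparameters are invented for illustration. A deliberately mis-calibrated one-parameter 2× upsampler is fine-tuned on a self-supervised pair built by downscaling the LR input, the same pair-construction principle the paper applies to full SISR and RefSR networks.

```python
import numpy as np

def downscale(img, s=2):
    """Box-filter downscaling by factor s (stand-in for bicubic)."""
    h, w = img.shape
    return img[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s).mean(axis=(1, 3))

class OnlineSR:
    """A toy 2x 'SR model' with a single learnable gain parameter."""
    def __init__(self, gain=0.5):
        self.gain = gain  # deliberately mis-calibrated "pre-trained" weight

    def forward(self, lr):
        # Nearest-neighbor 2x upsampling scaled by the learnable gain.
        return np.kron(lr, np.ones((2, 2))) * self.gain

    def adapt(self, lr_input, steps=50, step_size=0.1):
        # Online learning: build the self-supervised pair (downscaled input,
        # input itself) and run a few gradient-descent steps on it.
        x_down = downscale(lr_input)
        up = np.kron(x_down, np.ones((2, 2)))
        for _ in range(steps):
            pred = self.forward(x_down)
            # Analytic d/d(gain) of the mean squared error.
            grad = 2.0 * np.mean((pred - lr_input) * up)
            self.gain -= step_size * grad

rng = np.random.default_rng(0)
lr_img = rng.random((8, 8))
model = OnlineSR(gain=0.5)
before = np.mean((model.forward(downscale(lr_img)) - lr_img) ** 2)
model.adapt(lr_img)
after = np.mean((model.forward(downscale(lr_img)) - lr_img) ** 2)
print(after < before)  # True: the online steps reduce the self-supervised loss
```

In the paper this principle is applied to full networks (and, for RefSR, extended with pairs built from the reference image), but the mechanics — construct pairs at test time, fine-tune briefly, then infer — are the same.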
2. Related Works
3. Methods
3.1. Online Learning
3.1.1. SISR Model
3.1.2. RefSR Model
3.2. Inference
4. Experiments
4.1. Implementation Details
4.2. Experimental Results
4.3. Empirical Analyses
- Reference Similarity
- Pseudo HR vs. LR for Supervision
- Non-Bicubic Degradation
5. Conclusions
Author Contributions
Funding
Conflicts of Interest
References
- [1] Dong, C.; Loy, C.C.; Tang, X. Accelerating the super-resolution convolutional neural network. In Proceedings of the 14th European Conference on Computer Vision (ECCV), Amsterdam, The Netherlands, 8–16 October 2016; pp. 391–407.
- [2] Shi, W.; Caballero, J.; Huszár, F.; Totz, J.; Aitken, A.P.; Bishop, R.; Rueckert, D.; Wang, Z. Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 1874–1883.
- [3] Kim, J.; Lee, J.K.; Lee, K.M. Accurate Image Super-Resolution Using Very Deep Convolutional Networks. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 1646–1654.
- [4] Kim, J.; Lee, J.K.; Lee, K.M. Deeply-Recursive Convolutional Network for Image Super-Resolution. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 1637–1645.
- [5] Lim, B.; Son, S.; Kim, H.; Nah, S.; Lee, K.M. Enhanced Deep Residual Networks for Single Image Super-Resolution. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Honolulu, HI, USA, 21–26 July 2017; pp. 1132–1140.
- [6] Dai, T.; Cai, J.; Zhang, Y.; Xia, S.T.; Zhang, L. Second-Order Attention Network for Single Image Super-Resolution. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 11065–11074.
- [7] Zhang, Y.; Li, K.; Li, K.; Wang, L.; Zhong, B.; Fu, Y. Image Super-Resolution Using Very Deep Residual Channel Attention Networks. In Proceedings of the 2018 European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 286–301.
- [8] Zhang, Y.; Tian, Y.; Kong, Y.; Zhong, B.; Fu, Y. Residual Dense Network for Image Super-Resolution. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 2472–2481.
- [9] Dong, C.; Loy, C.C.; He, K.; Tang, X. Image Super-Resolution Using Deep Convolutional Networks. IEEE Trans. Pattern Anal. Mach. Intell. (TPAMI) 2016, 38, 295–307.
- [10] Sajjadi, M.S.M.; Scholkopf, B.; Hirsch, M. EnhanceNet: Single Image Super-Resolution through Automated Texture Synthesis. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 4501–4510.
- [11] Ledig, C.; Theis, L.; Huszár, F.; Caballero, J.; Cunningham, A.; Acosta, A.; Aitken, A.; Tejani, A.; Totz, J.; Wang, Z.; et al. Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 4681–4690.
- [12] Johnson, J.; Alahi, A.; Fei-Fei, L. Perceptual losses for real-time style transfer and super-resolution. In Proceedings of the 14th European Conference on Computer Vision (ECCV), Amsterdam, The Netherlands, 8–16 October 2016; pp. 694–711.
- [13] Liu, C.; Sun, D. A Bayesian approach to adaptive video super resolution. In Proceedings of the 2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Colorado Springs, CO, USA, 20–25 June 2011; pp. 209–216.
- [14] Caballero, J.; Ledig, C.; Aitken, A.P.; Acosta, A.; Totz, J.; Wang, Z.; Shi, W. Real-Time Video Super-Resolution with Spatio-Temporal Networks and Motion Compensation. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 2848–2857.
- [15] Yue, H.; Sun, X.; Yang, J.; Wu, F. Landmark Image Super-Resolution by Retrieving Web Images. IEEE Trans. Image Process. (TIP) 2013, 22, 4865–4878.
- [16] Lowe, D.G. Object recognition from local scale-invariant features. In Proceedings of the Seventh IEEE International Conference on Computer Vision, Kerkyra, Greece, 20–27 September 1999; pp. 1150–1157.
- [17] Wu, Y.; Ma, W.; Gong, M.; Su, L.; Jiao, L. A Novel Point-Matching Algorithm Based on Fast Sample Consensus for Image Registration. IEEE Geosci. Remote Sens. Lett. 2014, 12, 43–47.
- [18] Sarlin, P.E.; DeTone, D.; Malisiewicz, T.; Rabinovich, A. SuperGlue: Learning Feature Matching With Graph Neural Networks. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 4938–4947.
- [19] Zheng, H.; Ji, M.; Han, L.; Xu, Z.; Wang, H.; Liu, Y.; Fang, L. Learning Cross-scale Correspondence and Patch-based Synthesis for Reference-based Super-Resolution. In Proceedings of the 28th British Machine Vision Conference (BMVC), London, UK, 4–7 September 2017; pp. 138.1–138.13.
- [20] Shim, G.; Park, J.; Kweon, I.S. Robust Reference-Based Super-Resolution With Similarity-Aware Deformable Convolution. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 8425–8434.
- [21] Yang, F.; Yang, H.; Fu, J.; Lu, H.; Guo, B. Learning Texture Transformer Network for Image Super-Resolution. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 5790–5799.
- [22] Shocher, A.; Cohen, N.; Irani, M. “Zero-Shot” Super-Resolution using Deep Internal Learning. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 3118–3126.
- [23] Glasner, D.; Bagon, S.; Irani, M. Super-resolution from a single image. In Proceedings of the 2009 IEEE 12th International Conference on Computer Vision, Kyoto, Japan, 29 September–2 October 2009; pp. 349–356.
- [24] Zontak, M.; Irani, M. Internal statistics of a single natural image. In Proceedings of the 2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Colorado Springs, CO, USA, 20–25 June 2011; pp. 977–984.
- [25] Dong, C.; Loy, C.C.; He, K.; Tang, X. Learning a Deep Convolutional Network for Image Super-Resolution. In Proceedings of the 13th European Conference on Computer Vision (ECCV), Zurich, Switzerland, 6–12 September 2014; pp. 184–199.
- [26] Lai, W.; Huang, J.; Ahuja, N.; Yang, M. Deep Laplacian Pyramid Networks for Fast and Accurate Super-Resolution. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 5835–5843.
- [27] Wang, X.; Yu, K.; Wu, S.; Gu, J.; Liu, Y.; Dong, C.; Qiao, Y.; Loy, C.C. ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks. In Proceedings of the 2018 European Conference on Computer Vision Workshops (ECCVW), Munich, Germany, 8–14 September 2018; pp. 63–79.
- [28] Dosovitskiy, A.; Fischer, P.; Ilg, E.; Häusser, P.; Hazirbas, C.; Golkov, V.; van der Smagt, P.; Cremers, D.; Brox, T. FlowNet: Learning Optical Flow with Convolutional Networks. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015; pp. 2758–2766.
- [29] Ilg, E.; Mayer, N.; Saikia, T.; Keuper, M.; Dosovitskiy, A.; Brox, T. FlowNet 2.0: Evolution of Optical Flow Estimation with Deep Networks. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 1647–1655.
- [30] Zheng, H.; Ji, M.; Wang, H.; Liu, Y.; Fang, L. CrossNet: An End-to-end Reference-based Super Resolution Network using Cross-scale Warping. In Proceedings of the 2018 European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 88–104.
- [31] Gatys, L.; Ecker, A.S.; Bethge, M. Texture Synthesis Using Convolutional Neural Networks. In Proceedings of the Twenty-Ninth Conference on Neural Information Processing Systems (NeurIPS), Montreal, QC, Canada, 7–12 December 2015; pp. 262–270.
- [32] Gatys, L.A.; Ecker, A.S.; Bethge, M. Image Style Transfer Using Convolutional Neural Networks. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 2414–2423.
- [33] Zhang, Z.; Wang, Z.; Lin, Z.; Qi, H. Image Super-Resolution by Neural Texture Transfer. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 7974–7983.
- [34] Jiang, Y.; Chan, K.C.; Wang, X.; Loy, C.C.; Liu, Z. Robust Reference-based Super-Resolution via C2-Matching. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 2103–2112.
- [35] Lu, L.; Li, W.; Tao, X.; Lu, J.; Jia, J. MASA-SR: Matching Acceleration and Spatial Adaptation for Reference-Based Image Super-Resolution. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 6368–6377.
- [36] Soh, J.W.; Cho, S.; Cho, N.I. Meta-Transfer Learning for Zero-Shot Super-Resolution. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 3516–3525.
- [37] Park, S.; Yoo, J.; Kim, J.; Cho, D.; Kim, T.H. Fast Adaptation to Super-Resolution Networks via Meta-learning. In Proceedings of the 16th European Conference on Computer Vision (ECCV), Glasgow, UK, 23–28 August 2020; pp. 754–769.
- [38] Yoo, J.; Kim, T.H. Self-Supervised Adaptation for Video Super-Resolution. arXiv 2021, arXiv:2103.10081.
Table: data-pair configurations. The columns list the three SISR and four RefSR online learning variants; the rows indicate which of the LR input X, the reference R, and the pseudo HR result Y each variant uses for supervision. The reference R is not used by any SISR variant.
Model | Method | PSNR | SSIM | LPIPS
---|---|---|---|---
SRCNN [9] | Pre-trained | 25.475 | 0.737 | 0.3369
SRCNN [9] | | 25.379 | 0.732 | 0.3388
SRCNN [9] | | 25.563 | 0.741 | 0.3273
SRCNN [9] | + | 25.559 | 0.741 | 0.3275
VDSR [3] | Pre-trained | 25.660 | 0.746 | 0.3332
VDSR [3] | | 25.500 | 0.740 | 0.3229
VDSR [3] | | 25.709 | 0.748 | 0.3256
VDSR [3] | + | 25.734 | 0.749 | 0.3245
SimpleNet [22] | Pre-trained | 25.800 | 0.753 | 0.3267
SimpleNet [22] | | 25.727 | 0.750 | 0.3128
SimpleNet [22] | | 25.941 | 0.757 | 0.3152
SimpleNet [22] | + | 25.958 | 0.757 | 0.3136
EDSR [5] | Pre-trained | 26.198 | 0.771 | 0.2955
EDSR [5] | | 26.132 | 0.765 | 0.2897
EDSR [5] | | 26.422 | 0.774 | 0.2956
EDSR [5] | + | 26.440 | 0.775 | 0.2932
RCAN [7] | Pre-trained | 26.243 | 0.774 | 0.2906
RCAN [7] | | 26.147 | 0.767 | 0.2883
RCAN [7] | | 26.500 | 0.777 | 0.2912
RCAN [7] | + | 26.512 | 0.778 | 0.2892
Metrics are reported per reference-similarity level (XL, L, M, H, XH).

Model | Method | XL-PSNR | XL-SSIM | XL-LPIPS | L-PSNR | L-SSIM | L-LPIPS | M-PSNR | M-SSIM | M-LPIPS | H-PSNR | H-SSIM | H-LPIPS | XH-PSNR | XH-SSIM | XH-LPIPS
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
Ours + SimpleNet [22] | | 25.888 | 0.755 | 0.3159 | 25.932 | 0.756 | 0.3153 | 25.925 | 0.757 | 0.3148 | 25.990 | 0.758 | 0.3140 | 26.046 | 0.760 | 0.3127
Ours + SimpleNet [22] | + | 25.894 | 0.755 | 0.3134 | 25.960 | 0.757 | 0.3119 | 25.950 | 0.758 | 0.3115 | 26.003 | 0.758 | 0.3107 | 26.058 | 0.761 | 0.3093
Ours + SimpleNet [22] | + | 25.936 | 0.757 | 0.3147 | 25.963 | 0.758 | 0.3145 | 25.979 | 0.758 | 0.3143 | 25.985 | 0.758 | 0.3140 | 26.018 | 0.759 | 0.3134
Ours + SimpleNet [22] | + + | 25.973 | 0.758 | 0.3131 | 25.991 | 0.758 | 0.3130 | 25.997 | 0.759 | 0.3130 | 26.010 | 0.759 | 0.3129 | 26.049 | 0.760 | 0.3122
Ours + EDSR [5] | | 26.354 | 0.772 | 0.2959 | 26.418 | 0.773 | 0.2937 | 26.417 | 0.774 | 0.2932 | 26.512 | 0.776 | 0.2922 | 26.645 | 0.780 | 0.2888
Ours + EDSR [5] | + | 26.385 | 0.773 | 0.2889 | 26.438 | 0.775 | 0.2875 | 26.444 | 0.775 | 0.2875 | 26.553 | 0.777 | 0.2861 | 26.699 | 0.782 | 0.2833
Ours + EDSR [5] | + | 26.452 | 0.775 | 0.2949 | 26.467 | 0.775 | 0.2944 | 26.500 | 0.776 | 0.2935 | 26.497 | 0.776 | 0.2938 | 26.559 | 0.778 | 0.2926
Ours + EDSR [5] | + + | 26.462 | 0.775 | 0.2925 | 26.484 | 0.776 | 0.2922 | 26.508 | 0.776 | 0.2916 | 26.522 | 0.776 | 0.2917 | 26.577 | 0.778 | 0.2902
Ours + RCAN [7] | | 26.402 | 0.773 | 0.2919 | 26.465 | 0.775 | 0.2906 | 26.465 | 0.775 | 0.2900 | 26.581 | 0.778 | 0.2886 | 26.703 | 0.782 | 0.2856
Ours + RCAN [7] | + | 26.418 | 0.774 | 0.2862 | 26.499 | 0.777 | 0.2853 | 26.505 | 0.777 | 0.2845 | 26.635 | 0.780 | 0.2828 | 26.810 | 0.785 | 0.2796
Ours + RCAN [7] | + | 26.511 | 0.777 | 0.2908 | 26.547 | 0.778 | 0.2901 | 26.567 | 0.778 | 0.2901 | 26.589 | 0.779 | 0.2895 | 26.634 | 0.781 | 0.2887
Ours + RCAN [7] | + + | 26.543 | 0.778 | 0.2890 | 26.562 | 0.779 | 0.2885 | 26.574 | 0.779 | 0.2884 | 26.607 | 0.780 | 0.2877 | 26.681 | 0.782 | 0.2862
Metrics are reported per reference-similarity level (XL, L, M, H, XH).

Model | Method | XL-PSNR | XL-SSIM | XL-LPIPS | L-PSNR | L-SSIM | L-LPIPS | M-PSNR | M-SSIM | M-LPIPS | H-PSNR | H-SSIM | H-LPIPS | XH-PSNR | XH-SSIM | XH-LPIPS
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
SRNTT [33] | Pre-trained | 25.14 | 0.729 | 0.2476 | 25.07 | 0.720 | 0.2410 | 25.06 | 0.728 | 0.2354 | 25.13 | 0.734 | 0.2294 | 25.17 | 0.734 | 0.2099
SRNTT- [33] | Pre-trained | 25.87 | 0.757 | 0.2949 | 25.88 | 0.758 | 0.2916 | 25.90 | 0.758 | 0.2893 | 25.97 | 0.760 | 0.2856 | 26.06 | 0.765 | 0.2758
SSEN [20] | Pre-trained | 26.156 | 0.768 | 0.2979 | 26.151 | 0.768 | 0.2980 | 26.149 | 0.768 | 0.2979 | 26.154 | 0.768 | 0.2977 | 26.152 | 0.769 | 0.2976
Ours + SSEN [20] | | 26.109 | 0.764 | 0.2879 | 26.107 | 0.764 | 0.2881 | 26.116 | 0.764 | 0.2889 | 26.108 | 0.764 | 0.2883 | 26.112 | 0.764 | 0.2884
Ours + SSEN [20] | | 26.434 | 0.774 | 0.2951 | 26.459 | 0.775 | 0.2946 | 26.480 | 0.775 | 0.2944 | 26.480 | 0.775 | 0.2940 | 26.527 | 0.777 | 0.2930
Ours + SSEN [20] | | 26.226 | 0.767 | 0.2931 | 26.206 | 0.768 | 0.2925 | 26.241 | 0.768 | 0.2921 | 26.284 | 0.769 | 0.2903 | 26.276 | 0.770 | 0.2895
Ours + SSEN [20] | | 26.343 | 0.771 | 0.2946 | 26.383 | 0.772 | 0.2936 | 26.475 | 0.774 | 0.2920 | 26.509 | 0.775 | 0.2911 | 26.675 | 0.780 | 0.2874
Ours + SSEN [20] | + | 26.205 | 0.767 | 0.2852 | 26.206 | 0.767 | 0.2856 | 26.221 | 0.767 | 0.2854 | 26.261 | 0.768 | 0.2843 | 26.257 | 0.769 | 0.2946
Ours + SSEN [20] | + | 26.392 | 0.773 | 0.2955 | 26.460 | 0.774 | 0.2942 | 26.475 | 0.774 | 0.2946 | 26.505 | 0.775 | 0.2935 | 26.568 | 0.777 | 0.2924
TTSR-rec [21] | Pre-trained | 26.586 | 0.783 | 0.2825 | 26.623 | 0.785 | 0.2800 | 26.685 | 0.787 | 0.2782 | 26.787 | 0.789 | 0.2759 | 27.039 | 0.799 | 0.2653
Ours + TTSR-rec [21] | | 26.407 | 0.775 | 0.2711 | 26.455 | 0.776 | 0.2689 | 26.502 | 0.778 | 0.2675 | 26.579 | 0.780 | 0.2643 | 26.812 | 0.788 | 0.2545
Ours + TTSR-rec [21] | | 26.822 | 0.786 | 0.2815 | 26.866 | 0.788 | 0.2792 | 26.937 | 0.790 | 0.2782 | 27.027 | 0.791 | 0.2760 | 27.337 | 0.801 | 0.2663
Ours + TTSR-rec [21] | | 26.540 | 0.778 | 0.2791 | 26.563 | 0.781 | 0.2757 | 26.622 | 0.782 | 0.2750 | 26.769 | 0.785 | 0.2712 | 26.986 | 0.794 | 0.2614
Ours + TTSR-rec [21] | | 26.658 | 0.782 | 0.2818 | 26.717 | 0.785 | 0.2788 | 26.836 | 0.787 | 0.2757 | 26.959 | 0.790 | 0.2730 | 27.383 | 0.802 | 0.2578
Ours + TTSR-rec [21] | + | 26.497 | 0.777 | 0.2696 | 26.522 | 0.779 | 0.2668 | 26.592 | 0.780 | 0.2660 | 26.698 | 0.782 | 0.2635 | 26.900 | 0.790 | 0.2529
Ours + TTSR-rec [21] | + | 26.845 | 0.786 | 0.2816 | 26.877 | 0.788 | 0.2796 | 26.980 | 0.790 | 0.2780 | 27.056 | 0.792 | 0.2760 | 27.400 | 0.801 | 0.2663
Results under non-bicubic degradation. Each non-blind block corresponds to a different blur kernel; the kernel thumbnails from the original table are not reproduced here.

Model | Setting | Method | PSNR | SSIM | LPIPS
---|---|---|---|---|---
Ours + EDSR [5] | - | Pre-trained | 18.754 | 0.534 | 0.4068
Ours + EDSR [5] | Non-blind | + | 24.335 | 0.726 | 0.3035
Ours + EDSR [5] | Non-blind | | 24.155 | 0.720 | 0.3239
Ours + EDSR [5] | - | Pre-trained | 21.387 | 0.606 | 0.3771
Ours + EDSR [5] | Non-blind | + | 26.263 | 0.772 | 0.2665
Ours + EDSR [5] | Non-blind | | 26.105 | 0.765 | 0.2838
Ours + EDSR [5] | - | Pre-trained | 21.364 | 0.593 | 0.3639
Ours + EDSR [5] | Non-blind | + | 26.164 | 0.765 | 0.2764
Ours + EDSR [5] | Non-blind | | 25.882 | 0.753 | 0.2953
Ours + EDSR [5] | - | Pre-trained | 25.595 | 0.741 | 0.3288
Ours + EDSR [5] | Non-blind | + | 26.655 | 0.780 | 0.2663
Ours + EDSR [5] | Non-blind | | 26.569 | 0.778 | 0.2789
Ours + EDSR [5] | - | Pre-trained | 21.896 | 0.608 | 0.3892
Ours + EDSR [5] | Blind | + | 24.354 | 0.697 | 0.3459
Ours + EDSR [5] | Blind | | 24.201 | 0.685 | 0.3608
Ours + RCAN [7] | - | Pre-trained | 17.938 | 0.497 | 0.4111
Ours + RCAN [7] | Non-blind | + | 24.532 | 0.737 | 0.2910
Ours + RCAN [7] | Non-blind | | 24.168 | 0.717 | 0.3232
Ours + RCAN [7] | - | Pre-trained | 21.131 | 0.597 | 0.3335
Ours + RCAN [7] | Non-blind | + | 26.545 | 0.783 | 0.2586
Ours + RCAN [7] | Non-blind | | 26.561 | 0.784 | 0.2624
Ours + RCAN [7] | - | Pre-trained | 21.198 | 0.587 | 0.3609
Ours + RCAN [7] | Non-blind | + | 26.414 | 0.775 | 0.2679
Ours + RCAN [7] | Non-blind | | 26.199 | 0.768 | 0.2754
Ours + RCAN [7] | - | Pre-trained | 25.484 | 0.738 | 0.3314
Ours + RCAN [7] | Non-blind | + | 26.909 | 0.790 | 0.2597
Ours + RCAN [7] | Non-blind | | 26.989 | 0.796 | 0.2471
Ours + RCAN [7] | - | Pre-trained | 21.798 | 0.606 | 0.3914
Ours + RCAN [7] | Blind | + | 24.277 | 0.692 | 0.3480
Ours + RCAN [7] | Blind | | 24.010 | 0.684 | 0.3461
Ours + SSEN [20] | - | Pre-trained | 18.538 | 0.521 | 0.4142
Ours + SSEN [20] | Non-blind | | 23.565 | 0.694 | 0.3431
Ours + SSEN [20] | - | Pre-trained | 20.706 | 0.586 | 0.3541
Ours + SSEN [20] | Non-blind | | 25.436 | 0.741 | 0.2891
Ours + SSEN [20] | - | Pre-trained | 21.269 | 0.590 | 0.3633
Ours + SSEN [20] | Non-blind | | 25.213 | 0.728 | 0.3062
Ours + SSEN [20] | - | Pre-trained | 25.522 | 0.740 | 0.3273
Ours + SSEN [20] | Non-blind | | 26.010 | 0.758 | 0.2773
Ours + SSEN [20] | - | Pre-trained | 21.836 | 0.606 | 0.3881
Ours + SSEN [20] | Blind | | 23.953 | 0.676 | 0.3654
Ours + TTSR-rec [21] | - | Pre-trained | 18.415 | 0.524 | 0.4039
Ours + TTSR-rec [21] | Non-blind | | 23.489 | 0.688 | 0.3423
Ours + TTSR-rec [21] | - | Pre-trained | 21.211 | 0.609 | 0.3127
Ours + TTSR-rec [21] | Non-blind | | 25.911 | 0.760 | 0.2647
Ours + TTSR-rec [21] | - | Pre-trained | 21.199 | 0.596 | 0.3367
Ours + TTSR-rec [21] | Non-blind | | 25.512 | 0.741 | 0.2841
Ours + TTSR-rec [21] | - | Pre-trained | 26.147 | 0.767 | 0.2912
Ours + TTSR-rec [21] | Non-blind | | 26.599 | 0.781 | 0.2471
Ours + TTSR-rec [21] | - | Pre-trained | 21.820 | 0.615 | 0.3603
Ours + TTSR-rec [21] | Blind | | 23.928 | 0.672 | 0.3535
Chae, B.; Park, J.; Kim, T.-H.; Cho, D. Online Learning for Reference-Based Super-Resolution. Electronics 2022, 11, 1064. https://doi.org/10.3390/electronics11071064