A Blind Image Quality Index for Synthetic and Authentic Distortions with Hierarchical Feature Fusion
Abstract
1. Introduction
- Unlike previous methods that simply concatenate multi-level features, the proposed HFNet integrates features of different levels progressively, so that features ranging from low level to high level can represent different types and degrees of distortion more effectively (an illustrative fusion sketch follows this list).
- The literature has shown that people view images through a combination of local and global perception in order to better understand image content. To simulate this property of the human visual system, we extract both local and global features so that distortions can be characterized more comprehensively (an aggregation sketch also follows this list).
- We conduct extensive experiments on both synthetic and authentic distortion databases and compare the proposed HFNet with state-of-the-art traditional and deep learning quality metrics; the results verify the superiority of the proposed HFNet.
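To make the first point concrete, the following is a minimal sketch of what a gradual, pairwise fusion of multi-scale backbone features could look like, as opposed to stitching all stages together at once. The channel widths, module names, and fusion-block design are illustrative assumptions for this example only, not the authors' released implementation.

```python
# A rough sketch of gradual hierarchical fusion: each deeper stage is fused
# into its shallower neighbour step by step instead of one big concatenation.
# Channel sizes and module names are assumptions, not the paper's exact design.
import torch
import torch.nn as nn


class GradualFusion(nn.Module):
    def __init__(self, channels=(64, 128, 320, 512), out_dim=256):
        super().__init__()
        # 1x1 convolutions project every backbone stage to a common width.
        self.proj = nn.ModuleList(nn.Conv2d(c, out_dim, 1) for c in channels)
        # One fusion block per adjacent pair of stages.
        self.fuse = nn.ModuleList(
            nn.Sequential(nn.Conv2d(2 * out_dim, out_dim, 3, padding=1),
                          nn.ReLU(inplace=True))
            for _ in range(len(channels) - 1)
        )

    def forward(self, feats):
        # feats: list of stage outputs, ordered shallow (high-res) to deep.
        feats = [p(f) for p, f in zip(self.proj, feats)]
        x = feats[-1]                              # start from the deepest stage
        for i in range(len(feats) - 2, -1, -1):
            # Upsample the running fused feature to the shallower stage size,
            # then fuse the pair; repeating this walks from high to low level.
            x = nn.functional.interpolate(x, size=feats[i].shape[-2:],
                                          mode="bilinear", align_corners=False)
            x = self.fuse[i](torch.cat([feats[i], x], dim=1))
        return x                                   # hierarchically fused feature map


if __name__ == "__main__":
    # Fake multi-scale features for a 224x224 input (strides 4/8/16/32).
    feats = [torch.randn(1, c, s, s)
             for c, s in zip((64, 128, 320, 512), (56, 28, 14, 7))]
    print(GradualFusion()(feats).shape)            # torch.Size([1, 256, 56, 56])
```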
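Likewise, the second point can be illustrated with a small sketch that pairs a convolutional local branch with a self-attention global branch over the same fused feature map. The branch structure, dimensions, and names are again assumptions made only for this illustration.

```python
# A minimal sketch of local/global aggregation: a convolutional branch keeps
# locally sensitive statistics, while multi-head self-attention relates all
# spatial positions; the two views are concatenated for quality regression.
import torch
import torch.nn as nn


class LocalGlobalAggregation(nn.Module):
    def __init__(self, dim=256, heads=4):
        super().__init__()
        # Local branch: small receptive field, sensitive to local artifacts.
        self.local = nn.Sequential(
            nn.Conv2d(dim, dim, 3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        # Global branch: self-attention over all positions of the feature map.
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):
        b, c, h, w = x.shape
        local_feat = self.local(x)                           # (b, c)
        tokens = self.norm(x.flatten(2).transpose(1, 2))     # (b, h*w, c)
        global_feat = self.attn(tokens, tokens, tokens)[0].mean(dim=1)  # (b, c)
        # Aggregate both views for the downstream quality regressor.
        return torch.cat([local_feat, global_feat], dim=1)   # (b, 2c)


if __name__ == "__main__":
    fused = torch.randn(2, 256, 56, 56)
    print(LocalGlobalAggregation()(fused).shape)             # torch.Size([2, 512])
```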
2. Related Work
2.1. Blind Image Quality Assessment
2.2. Vision Transformer Network
3. Proposed Method
3.1. Multiscale Feature Extraction
3.2. Hierarchical Feature Fusion
3.3. Local and Global Feature Aggregation
3.3.1. Local Feature Extraction
3.3.2. Global Feature Extraction
3.4. Deep Quality Regression
4. Experiments
4.1. Databases
- The CSIQ consists of 866 synthetically distorted images covering six distortion types. The database was rated by 25 observers, and each image has a DMOS value ranging from 0 to 1; the higher the DMOS, the lower the image quality.
- The TID2013 contains 3000 synthetically distorted images with 24 distortion types, four times as many as the CSIQ. The database was rated by 971 observers from five countries, and each image has an MOS ranging from 0 to 9; a higher MOS indicates better image quality.
- The BID is a realistic blur dataset scored by about 180 observers. It contains 586 images with authentic blur distortions, such as out-of-focus and motion blur.
- The LIVEC consists of 1162 authentically distorted images rated by more than 8100 observers; the MOS of each image is the average of about 175 individual ratings.
- The KonIQ-10k consists of 10,073 authentically distorted images rated by 1459 observers. The MOS values of both the LIVEC and KonIQ-10k databases range from 0 to 100, and a higher MOS indicates better image quality (a sketch of the correlation criteria computed against these subjective scores follows this list).
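Because the subjective scales above differ (DMOS in [0, 1] where lower is better, MOS scales where higher is better), performance is reported with scale-free correlation criteria. The following is a minimal sketch, on synthetic stand-in data rather than any of the databases above, of how the SRCC and PLCC values appearing in the result tables are typically computed.

```python
# A minimal sketch of the two evaluation criteria reported in the tables:
# SRCC (Spearman rank-order correlation) measures prediction monotonicity,
# PLCC (Pearson linear correlation) measures prediction accuracy.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mos = rng.uniform(0, 100, size=200)        # subjective scores (e.g., MOS)
pred = mos + rng.normal(0, 8, size=200)    # noisy model predictions

srcc, _ = stats.spearmanr(pred, mos)       # rank-order agreement
plcc, _ = stats.pearsonr(pred, mos)        # linear agreement
print(f"SRCC = {srcc:.3f}, PLCC = {plcc:.3f}")

# Notes: for DMOS-based databases such as CSIQ, lower scores mean better
# quality, so predictions are typically negated (or the absolute correlation
# is reported) to stay comparable with MOS databases. Many IQA studies also
# fit a monotonic logistic mapping before computing PLCC; omitted here.
```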
4.2. Implementation Details and Evaluation Criteria
4.3. Performance Evaluation on Synthetic Distortion
4.4. Performance Evaluation on Authentic Distortion
4.5. Ablation Studies
4.6. Generalization Ability Study
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Mohan, C.; Kiran, S.; Vasudeva, V. Improved procedure for multi-focus image quality enhancement using image fusion with rules of texture energy measures in the hybrid wavelet domain. Appl. Sci. 2023, 13, 2138. [Google Scholar] [CrossRef]
- You, N.; Han, L.; Zhu, D.; Song, W. Research on image denoising in edge detection based on wavelet transform. Appl. Sci. 2023, 13, 1837. [Google Scholar] [CrossRef]
- Hu, B.; Li, L.; Liu, H.; Lin, W.; Qian, J. Pairwise-comparison-based rank learning for benchmarking image restoration algorithms. IEEE Trans. Multimed. 2019, 21, 2042–2056. [Google Scholar] [CrossRef]
- Hu, B.; Li, L.; Wu, J.; Qian, J. Subjective and objective quality assessment for image restoration: A critical survey. Signal Process. Image Commun. 2020, 85, 1–20. [Google Scholar] [CrossRef]
- Ribeiro, R.; Trifan, A.; Neves, A. Blind image quality assessment with deep learning: A replicability study and its reproducibility in lifelogging. Appl. Sci. 2023, 13, 1837. [Google Scholar] [CrossRef]
- Athar, S.; Wang, Z. Degraded reference image quality assessment. IEEE Trans. Image Process. 2023, 32, 822–837. [Google Scholar] [CrossRef]
- Ryu, J. A Visual saliency-based neural network architecture for no-reference image quality assessment. Appl. Sci. 2022, 12, 9567. [Google Scholar] [CrossRef]
- Zhu, H.; Li, L.; Wu, J.; Dong, W.; Shi, G. Generalizable no-reference image quality assessment via deep meta-learning. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 1048–1060. [Google Scholar] [CrossRef]
- Hu, B.; Li, L.; Wu, J.; Wang, S.; Tang, L.; Qian, J. No-reference quality assessment of compressive sensing image recovery. Signal Process. Image Commun. 2017, 58, 165–174. [Google Scholar] [CrossRef]
- Hu, B.; Li, L.; Qian, J. Internal generative mechanism driven blind quality index for deblocked images. Multimed. Tools Appl. 2019, 78, 12583–12605. [Google Scholar] [CrossRef]
- Wu, Q.; Li, H.; Ngan, K.; Ma, K. Blind image quality assessment using local consistency aware retriever and uncertainty aware evaluator. IEEE Trans. Circuits Syst. Video Technol. 2018, 28, 2078–2089. [Google Scholar] [CrossRef]
- Xue, W.; Zhang, L.; Mou, X.; Bovik, A. Gradient magnitude similarity deviation: A highly efficient perceptual image quality index. IEEE Trans. Image Process. 2014, 23, 684–695. [Google Scholar] [CrossRef] [Green Version]
- Zhang, L.; Shen, Y.; Li, H. VSI: A visual saliency-induced index for perceptual image quality assessment. IEEE Trans. Image Process. 2014, 23, 4270–4281. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Chang, H.; Yang, H.; Gan, Y.; Wang, M. Sparse feature fidelity for perceptual image quality assessment. IEEE Trans. Image Process. 2013, 22, 4007–4018. [Google Scholar] [CrossRef]
- Ma, L.; Li, S.; Zhang, F.; Ngan, K. Reduced-reference image quality assessment using reorganized DCT-based image representation. IEEE Trans. Multimed. 2011, 13, 824–829. [Google Scholar] [CrossRef]
- Liu, Y.; Zhai, G.; Gu, K.; Liu, X.; Zhao, D.; Gao, W. Reduced reference image quality assessment in free-energy principle and sparse representation. IEEE Trans. Multimed. 2018, 20, 379–391. [Google Scholar] [CrossRef]
- Wu, J.; Liu, Y.; Li, L.; Shi, G. Attended visual content degradation based reduced reference image quality assessment. IEEE Access 2018, 6, 12493–12504. [Google Scholar] [CrossRef]
- Zhu, W.; Zhai, G.; Min, X.; Hu, M.; Liu, J.; Guo, G.; Yang, X. Multi-channel decomposition in tandem with free-energy principle for reduced reference image quality assessment. IEEE Trans. Multimed. 2019, 21, 2334–2346. [Google Scholar] [CrossRef]
- Gu, K.; Wang, S.; Zhai, G.; Ma, S.; Yang, X.; Lin, W.; Zhang, W.; Gao, W. Blind quality assessment of tone-mapped images via analysis of information, naturalness, and structure. IEEE Trans. Multimed. 2016, 18, 432–443. [Google Scholar] [CrossRef]
- Zhu, H.; Li, L.; Wu, J.; Dong, W.; Shi, G. MetaIQA: Deep meta-learning for no-reference image quality assessment. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 14131–14140. [Google Scholar]
- Zhang, L.; Zhang, L.; Bovik, A. A feature-enriched completely blind image quality evaluator. IEEE Trans. Image Process. 2015, 24, 2579–2591. [Google Scholar] [CrossRef] [Green Version]
- Mittal, A.; Moorthy, A.; Bovik, A. No-reference image quality assessment in the spatial domain. IEEE Trans. Image Process. 2012, 21, 4695–4708. [Google Scholar] [CrossRef] [PubMed]
- Ma, K.; Liu, W.; Zhang, K.; Duanmu, Z.; Wang, Z.; Zuo, W. End-to-end blind image quality assessment using deep neural networks. IEEE Trans. Image Process. 2018, 27, 1202–1213. [Google Scholar] [CrossRef] [PubMed]
- Bosse, S.; Maniry, D.; Müller, K.; Wiegand, T.; Samek, W. Deep neural networks for no-reference and full-reference image quality assessment. IEEE Trans. Image Process. 2018, 27, 206–219. [Google Scholar] [CrossRef] [Green Version]
- Su, S.; Yan, Q.; Zhu, Y.; Zhang, C.; Ge, X.; Sun, J.; Zhang, Y. Blindly assess image quality in the wild guided by a self-adaptive hyper network. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 3664–3673. [Google Scholar]
- Ye, P.; Kumar, J.; Kang, L.; Doermann, D. Unsupervised feature learning framework for no-reference image quality assessment. In Proceedings of the 2012 IEEE conference on computer vision and pattern recognition, Providence, RI, USA, 16–21 June 2012; pp. 1098–1105. [Google Scholar]
- Mittal, A.; Soundararajan, R.; Bovik, A. Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 2013, 20, 209–212. [Google Scholar] [CrossRef]
- Xu, J.; Ye, P.; Li, Q.; Du, H.; Liu, Y.; Doermann, D. Blind image quality assessment based on high order statistics aggregation. IEEE Trans. Image Process. 2016, 25, 4444–4457. [Google Scholar] [CrossRef]
- Saad, M.; Bovik, A.; Charrier, C. Blind image quality assessment: A natural scene statistics approach in the DCT domain. IEEE Trans. Image Process. 2012, 21, 3339–3352. [Google Scholar] [CrossRef]
- Bianco, S.; Celona, L.; Napoletano, P.; Schettini, R. On the use of deep learning for blind image quality assessment. Signal Image Video Process. 2018, 12, 355–362. [Google Scholar] [CrossRef] [Green Version]
- Kim, J.; Nguyen, A.; Ahn, S.; Luo, C.; Lee, S. Multiple level feature-based universal blind image quality assessment model. In Proceedings of the 2018 IEEE International Conference on Image Processing (ICIP), Athens, Greece, 7–10 October 2018; pp. 291–295. [Google Scholar]
- Gao, F.; Yu, J.; Zhu, S.; Huang, Q.; Tian, Q. Blind image quality prediction by exploiting multi-level deep representations. Pattern Recognit. 2018, 81, 432–442. [Google Scholar] [CrossRef]
- Sang, Q.; Wu, L.; Li, C.; Wu, X. No-reference quality assessment for multiply distorted images based on deep learning. In Proceedings of the 2017 International Smart Cities Conference (ISC2), Wuxi, China, 14–17 September 2017; pp. 6373–6382. [Google Scholar]
- Kim, J.; Lee, S. Fully deep blind image quality predictor. IEEE J. Sel. Top. Signal Process. 2017, 11, 206–220. [Google Scholar] [CrossRef]
- Zeng, H.; Zhang, L.; Bovik, A. A probabilistic quality representation approach to deep blind image quality prediction. In Proceedings of the 2018 25th IEEE International Conference on Image Processing (ICIP), Athens, Greece, 7–10 October 2018; pp. 609–613. [Google Scholar]
- Ren, H.; Chen, D.; Wang, Y. RAN4IQA: Restorative adversarial nets for no-reference image quality assessment. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), San Francisco, CA, USA, 4–9 February 2018; pp. 7308–7314. [Google Scholar]
- Li, D.; Jiang, T.; Lin, W.; Jiang, M. Which has better visual quality: The clear blue sky or a blurry animal? IEEE Trans. Multimed. 2019, 21, 1221–1234. [Google Scholar] [CrossRef]
- Yan, B.; Bare, B.; Tan, W. Naturalness-aware deep no-reference image quality assessment. IEEE Trans. Multimed. 2019, 21, 2603–2615. [Google Scholar] [CrossRef]
- Zhang, W.; Ma, K.; Yan, J.; Deng, D.; Wang, Z. Blind image quality assessment using a deep bilinear convolutional neural network. IEEE Trans. Circuits Syst. Video Technol. 2020, 30, 36–47. [Google Scholar] [CrossRef] [Green Version]
- Wu, J.; Ma, J.; Liang, F.; Dong, W.; Shi, G.; Lin, W. End-to-end blind image quality prediction with cascaded deep neural network. IEEE Trans. Image Process. 2020, 29, 7414–7426. [Google Scholar] [CrossRef]
- Song, T.; Li, L.; Chen, P.; Liu, H.; Qian, J. Blind image quality assessment for authentic distortions by intermediary enhancement and iterative training. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 7592–7604. [Google Scholar] [CrossRef]
- Carion, N.; Massa, F.; Synnaeve, G.; Usunier, N.; Kirillov, A.; Zagoruyko, S. End-to-end object detection with transformers. arXiv 2020, arXiv:2005.12872. [Google Scholar]
- Xie, E.; Wang, W.; Yu, Z.; Anandkumar, A.; Alvarez, J.; Luo, P. SegFormer: Simple and efficient design for semantic segmentation with transformers. In Proceedings of the 2021 Neural Information Processing Systems (NeurIPS), Montreal, QC, Canada, 6–14 December 2021; pp. 1–18. [Google Scholar]
- Meinhardt, T.; Kirillov, A.; Leal-Taixe, L.; Feichtenhofer, C. TrackFormer: Multi-object tracking with transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 21–24 June 2022; pp. 8844–8854. [Google Scholar]
- Ramesh, A.; Pavlov, M.; Goh, G.; Gray, S.; Voss, C.; Radford, A.; Chen, M.; Sutskever, I. Zero-shot text-to-image generation. arXiv 2021. [Google Scholar] [CrossRef]
- Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An image is worth 16 × 16 words: Transformers for image recognition at scale. In Proceedings of the International Conference on Learning Representations (ICLR), Vienna, Austria, 4–8 May 2021; pp. 1–21. [Google Scholar]
- Touvron, H.; Cord, M.; Douze, M.; Massa, F.; Sablayrolles, A.; Jégou, H. Training data-efficient image transformers and distillation through attention. arXiv 2021. [Google Scholar] [CrossRef]
- Yuan, L.; Chen, Y.; Wang, T.; Yu, W.; Shi, Y.; Jiang, Z.; Tay, F.; Feng, J.; Yan, S. Tokens-to-Token ViT: Training vision transformers from scratch on ImageNet. arXiv 2021, arXiv:2101.11986. [Google Scholar]
- Wang, W.; Xie, E.; Li, X.; Fan, D.; Song, K.; Liang, D.; Lu, T.; Luo, P.; Shao, L. Pyramid vision transformer: A versatile backbone for dense prediction without convolutions. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 10–17 October 2021; pp. 548–558. [Google Scholar]
- Wang, W.; Xie, E.; Li, X.; Fan, D.; Song, K.; Liang, D.; Lu, T.; Luo, P.; Shao, L. PVTv2: Improved baselines with pyramid vision transformer. Comput. Vis. Media 2022, 8, 415–424. [Google Scholar] [CrossRef]
- Han, K.; Xiao, A.; Wu, E.; Guo, J.; Xu, C.; Wang, Y. Transformer in transformer. arXiv 2021, arXiv:2103.00112. [Google Scholar]
- Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; Guo, B. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 10–17 October 2021; pp. 9992–10002. [Google Scholar]
- Rasheed, M.; Ali, A.; Alabdali, O.; Shihab, S.; Rashid, A.; Rashid, T.; Hamad, S. The effectiveness of the finite differences method on physical and medical images based on a heat diffusion equation. J. Phys. Conf. Ser. 2021, 1999, 012080. [Google Scholar] [CrossRef]
- Abdulrahman, A.; Rasheed, M.; Shihab, S. The analytic of image processing smoothing spaces using wavelet. J. Phys. Conf. Ser. 2021, 1879, 022118. [Google Scholar] [CrossRef]
- Larson, E.; Chandler, D. Most apparent distortion: Full-reference image quality assessment and the role of strategy. J. Electron. Imaging 2010, 19, 1–21. [Google Scholar]
- Ponomarenko, N.; Jin, L.; Ieremeiev, O.; Lukin, V.; Egiazarian, K.; Astola, J.; Vozel, B.; Chehdi, K.; Carli, M.; Battisti, F.; et al. Image database TID2013: Peculiarities, results and perspectives. Signal Process. Image Commun. 2015, 30, 57–77. [Google Scholar] [CrossRef] [Green Version]
- Ciancio, A.; Costa, A.; Silva, E.; Said, A.; Samadani, R.; Obrador, P. No-reference blur assessment of digital pictures based on multifeature classifiers. IEEE Trans. Image Process. 2011, 20, 64–75. [Google Scholar] [CrossRef] [PubMed]
- Ghadiyaram, D.; Bovik, A. Massive online crowdsourced study of subjective and objective picture quality. IEEE Trans. Image Process. 2016, 25, 372–387. [Google Scholar] [CrossRef] [Green Version]
- Hosu, V.; Lin, H.; Sziranyi, T.; Saupe, D. Koniq-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Trans. Image Process. 2020, 29, 4041–4056. [Google Scholar] [CrossRef] [Green Version]
- Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; et al. Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 2015, 115, 211–252. [Google Scholar] [CrossRef] [Green Version]
- Moorthy, A.; Bovik, A. Blind image quality assessment: From natural scene statistics to perceptual quality. IEEE Trans. Image Process. 2011, 20, 3350–3364. [Google Scholar] [CrossRef]
- Zhang, W.; Ma, K.; Zhai, G.; Yang, X. Uncertainty-aware blind image quality assessment in the laboratory and wild. IEEE Trans. Image Process. 2021, 30, 3474–3486. [Google Scholar] [CrossRef]
- Qu, F.; Wang, Y.; Li, J.; Zhu, G.; Kwong, S. A novel rank learning based no-reference image quality assessment method. IEEE Trans. Multimed. 2022, 24, 4197–4211. [Google Scholar]
- Pan, Z.; Yuan, F.; Lei, J.; Fang, Y.; Shao, X.; Kwong, S. VCRNet: Visual compensation restoration network for no-reference image quality assessment. IEEE Trans. Image Process. 2022, 31, 1613–1627. [Google Scholar] [CrossRef] [PubMed]
- Chan, T.; Jia, K.; Gao, S.; Lu, J.; Zeng, Z.; Ma, Y. PCANet: A simple deep learning baseline for image classification. IEEE Trans. Image Process. 2015, 24, 5017–5032. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Aiadi, O.; Khaldi, B.; Saadeddine, C. MDFNet: An unsupervised lightweight network for ear print recognition. J. Ambient. Intell. Humaniz Comput. 2022, 18, 1–14. [Google Scholar] [CrossRef] [PubMed]
Database | Reference Images | Distorted Images | Distortion Types | Score Type | Score Range | Year
---|---|---|---|---|---|---
CSIQ [55] | 30 | 866 | 6 | DMOS | [0, 1] | 2010
TID2013 [56] | 25 | 3000 | 24 | MOS | [0, 9] | 2013
BID [57] | N/A | 586 | authentic | MOS | [0, 5] | 2011
LIVEC [58] | N/A | 1162 | authentic | MOS | [0, 100] | 2016
KonIQ-10k [59] | N/A | 10,073 | authentic | MOS | [0, 100] | 2020
Method | Published | CSIQ SRCC | CSIQ PLCC | TID2013 SRCC | TID2013 PLCC
---|---|---|---|---|---
DIIVINE [61] | TIP11 | 0.777 | 0.743 | 0.535 | 0.664
BRISQUE [22] | TIP12 | 0.746 | 0.829 | 0.604 | 0.694
CORNIA [26] | CVPR12 | 0.678 | 0.776 | 0.678 | 0.768
NIQE [27] | SPL13 | 0.821 | 0.865 | 0.521 | 0.648
ILNIQE [21] | TIP15 | 0.806 | 0.808 | - | -
HOSA [28] | TIP16 | 0.741 | 0.823 | 0.735 | 0.815
BIECON [34] | JSTSP17 | 0.815 | 0.823 | 0.717 | 0.762
PQR [35] | ICIP18 | 0.873 | 0.901 | - | -
WaDIQaM-NR [24] | TIP18 | 0.955 | 0.973 | 0.761 | 0.787
RAN4IQA [36] | AAAI18 | 0.914 | 0.931 | 0.820 | 0.859
SFA [37] | TMM19 | 0.796 | 0.818 | - | -
NSSADNN [38] | TMM19 | 0.893 | 0.927 | 0.844 | 0.910
HyperNet [25] | CVPR20 | 0.923 | 0.942 | - | -
DBCNN [39] | TCSVT20 | 0.946 | 0.959 | 0.816 | 0.865
CaHDC [40] | TIP20 | 0.903 | 0.914 | 0.862 | 0.878
UNIQUE [62] | TIP21 | 0.902 | 0.927 | - | -
CLRIQA [63] | TMM22 | 0.915 | 0.938 | 0.837 | 0.863
VCRNet [64] | TIP22 | 0.943 | 0.955 | 0.846 | 0.875
HFNet | - | 0.956 | 0.964 | 0.893 | 0.911
Method | Published | BID SRCC | BID PLCC | LIVEC SRCC | LIVEC PLCC | KonIQ-10k SRCC | KonIQ-10k PLCC
---|---|---|---|---|---|---|---
DIIVINE [61] | TIP11 | - | - | 0.523 | 0.551 | 0.579 | 0.632
BRISQUE [22] | TIP12 | 0.562 | 0.593 | 0.608 | 0.629 | 0.665 | 0.681
CORNIA [26] | CVPR12 | - | - | 0.618 | 0.662 | 0.738 | 0.773
NIQE [27] | SPL13 | - | - | 0.594 | 0.589 | - | -
ILNIQE [21] | TIP15 | 0.516 | 0.554 | 0.432 | 0.508 | 0.507 | 0.523
HOSA [28] | TIP16 | 0.721 | 0.736 | 0.640 | 0.678 | 0.761 | 0.791
BIECON [34] | JSTSP17 | - | - | 0.595 | 0.613 | - | -
PQR [35] | ICIP18 | 0.775 | 0.794 | 0.857 | 0.882 | 0.880 | 0.884
WaDIQaM-NR [24] | TIP18 | 0.725 | 0.742 | 0.671 | 0.680 | 0.797 | 0.805
RAN4IQA [36] | AAAI18 | - | - | 0.586 | 0.612 | 0.752 | 0.763
SFA [37] | TMM19 | 0.826 | 0.840 | 0.812 | 0.833 | 0.856 | 0.872
NSSADNN [38] | TMM19 | - | - | 0.745 | 0.813 | 0.912 | 0.887
HyperNet [25] | CVPR20 | 0.869 | 0.878 | 0.859 | 0.882 | 0.906 | 0.917
DBCNN [39] | TCSVT20 | 0.845 | 0.859 | 0.851 | 0.869 | 0.868 | 0.892
CaHDC [40] | TIP20 | - | - | 0.738 | 0.744 | - | -
UNIQUE [62] | TIP21 | 0.858 | 0.873 | 0.854 | 0.890 | 0.896 | 0.901
CLRIQA [63] | TMM22 | - | - | 0.832 | 0.866 | 0.831 | 0.846
MetaIQA+ [8] | TCSVT22 | - | - | 0.852 | 0.872 | 0.909 | 0.921
VCRNet [64] | TIP22 | - | - | 0.856 | 0.865 | 0.894 | 0.909
IEIT [41] | TCSVT22 | - | - | 0.833 | 0.865 | 0.892 | 0.916
HFNet | - | 0.883 | 0.897 | 0.901 | 0.908 | 0.910 | 0.928
Method | CSIQ SRCC | CSIQ PLCC | TID2013 SRCC | TID2013 PLCC
---|---|---|---|---
Local | 0.958 | 0.962 | 0.866 | 0.862
Global | 0.945 | 0.952 | 0.845 | 0.861
HFNet | 0.956 | 0.964 | 0.893 | 0.911
Method | BID SRCC | BID PLCC | LIVEC SRCC | LIVEC PLCC | KonIQ-10k SRCC | KonIQ-10k PLCC
---|---|---|---|---|---|---
Local | 0.863 | 0.880 | 0.825 | 0.837 | 0.868 | 0.872
Global | 0.877 | 0.883 | 0.891 | 0.899 | 0.892 | 0.909
HFNet | 0.883 | 0.897 | 0.901 | 0.908 | 0.910 | 0.928