A Combined Full-Reference Image Quality Assessment Method Based on Convolutional Activation Maps
Abstract
1. Introduction
1.1. Related Work
1.2. Contributions
1.3. Structure
2. Proposed Method
Architecture
3. Experimental Results
3.1. Evaluation Metrics
3.2. Experimental Setup
3.3. Parameter Study
3.4. Performance over Different Distortion Types and Levels
3.5. Effect of the Training Set Size
3.6. Comparison to the State-of-the-Art
3.7. Cross Database Test
4. Conclusions
Funding
Acknowledgments
Conflicts of Interest
References
- LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324.
- Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1–9.
- Oord, A.v.d.; Dieleman, S.; Zen, H.; Simonyan, K.; Vinyals, O.; Graves, A.; Kalchbrenner, N.; Senior, A.; Kavukcuoglu, K. WaveNet: A generative model for raw audio. arXiv 2016, arXiv:1609.03499.
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. In Proceedings of the Advances in Neural Information Processing Systems, Lake Tahoe, NV, USA, 3–6 December 2012; pp. 1097–1105.
- Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; et al. ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. 2015, 115, 211–252.
- Sharif Razavian, A.; Azizpour, H.; Sullivan, J.; Carlsson, S. CNN features off-the-shelf: An astounding baseline for recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops; IEEE Computer Society: Washington, DC, USA, 2014; pp. 806–813.
- Penatti, O.A.; Nogueira, K.; Dos Santos, J.A. Do deep features generalize from everyday objects to remote sensing and aerial scenes domains? In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Boston, MA, USA, 7–12 June 2015; pp. 44–51.
- Bousetouane, F.; Morris, B. Off-the-shelf CNN features for fine-grained classification of vessels in a maritime environment. In International Symposium on Visual Computing; Springer: Berlin/Heidelberg, Germany, 2015; pp. 379–388.
- Chou, C.H.; Li, Y.C. A perceptually tuned subband image coder based on the measure of just-noticeable-distortion profile. IEEE Trans. Circuits Syst. Video Technol. 1995, 5, 467–476.
- Daly, S.J. Visible differences predictor: An algorithm for the assessment of image fidelity. In Human Vision, Visual Processing, and Digital Display III; International Society for Optics and Photonics; SPIE: Bellingham, WA, USA, 1992; Volume 1666, pp. 2–15.
- Watson, A.B.; Borthwick, R.; Taylor, M. Image quality and entropy masking. In Human Vision and Electronic Imaging II; International Society for Optics and Photonics; SPIE: Bellingham, WA, USA, 1997; Volume 3016, pp. 2–12.
- Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
- Chen, G.H.; Yang, C.L.; Po, L.M.; Xie, S.L. Edge-based structural similarity for image quality assessment. In Proceedings of the 2006 IEEE International Conference on Acoustics Speech and Signal Processing Proceedings, Toulouse, France, 14–19 May 2006; Volume 2, p. II.
- Wang, Z.; Simoncelli, E.P.; Bovik, A.C. Multiscale structural similarity for image quality assessment. In Proceedings of the Thirty-Seventh Asilomar Conference on Signals, Systems & Computers, Pacific Grove, CA, USA, 9–12 November 2003; Volume 2, pp. 1398–1402.
- Li, C.; Bovik, A.C. Three-component weighted structural similarity index. In Proceedings of the Image Quality and System Performance VI, International Society for Optics and Photonics, San Jose, CA, USA, 19–21 January 2009; Volume 7242, p. 72420Q.
- Liu, H.; Heynderickx, I. Visual attention in objective image quality assessment: Based on eye-tracking data. IEEE Trans. Circuits Syst. Video Technol. 2011, 21, 971–982.
- Wang, Z.; Li, Q. Information content weighting for perceptual image quality assessment. IEEE Trans. Image Process. 2010, 20, 1185–1198.
- Zhang, L.; Zhang, L.; Mou, X.; Zhang, D. FSIM: A feature similarity index for image quality assessment. IEEE Trans. Image Process. 2011, 20, 2378–2386.
- Xue, W.; Zhang, L.; Mou, X.; Bovik, A.C. Gradient magnitude similarity deviation: A highly efficient perceptual image quality index. IEEE Trans. Image Process. 2013, 23, 684–695.
- Reisenhofer, R.; Bosse, S.; Kutyniok, G.; Wiegand, T. A Haar wavelet-based perceptual similarity index for image quality assessment. Signal Process. Image Commun. 2018, 61, 33–43.
- Wang, Y.; Liu, W.; Wang, Y. Color image quality assessment based on quaternion singular value decomposition. In Proceedings of the 2008 Congress on Image and Signal Processing, Sanya, China, 27–30 May 2008; Volume 3, pp. 433–439.
- Kolaman, A.; Yadid-Pecht, O. Quaternion structural similarity: A new quality index for color images. IEEE Trans. Image Process. 2011, 21, 1526–1536.
- Głowacz, A.; Grega, M.; Gwiazda, P.; Janowski, L.; Leszczuk, M.; Romaniak, P.; Romano, S.P. Automated qualitative assessment of multi-modal distortions in digital images based on GLZ. Ann. Telecommun. 2010, 65, 3–17.
- Liang, Y.; Wang, J.; Wan, X.; Gong, Y.; Zheng, N. Image quality assessment using similar scene as reference. In European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2016; pp. 3–18.
- Kim, J.; Lee, S. Deep learning of human visual sensitivity in image quality assessment framework. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1676–1684.
- Sermanet, P.; Eigen, D.; Zhang, X.; Mathieu, M.; Fergus, R.; LeCun, Y. OverFeat: Integrated recognition, localization and detection using convolutional networks. arXiv 2013, arXiv:1312.6229.
- Zhang, R.; Isola, P.; Efros, A.A.; Shechtman, E.; Wang, O. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 586–595.
- Ali Amirshahi, S.; Pedersen, M.; Yu, S.X. Image quality assessment by comparing CNN features between images. Electron. Imaging 2017, 2017, 42–51.
- Bosse, S.; Maniry, D.; Müller, K.R.; Wiegand, T.; Samek, W. Deep neural networks for no-reference and full-reference image quality assessment. IEEE Trans. Image Process. 2017, 27, 206–219.
- Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556.
- Varga, D. Composition-preserving deep approach to full-reference image quality assessment. Signal Image Video Process. 2020, 14, 1265–1272.
- Okarma, K. Combined full-reference image quality metric linearly correlated with subjective assessment. In International Conference on Artificial Intelligence and Soft Computing; Springer: Berlin/Heidelberg, Germany, 2010; pp. 539–546.
- Sheikh, H.R.; Bovik, A.C. Image information and visual quality. IEEE Trans. Image Process. 2006, 15, 430–444.
- Mansouri, A.; Aznaveh, A.M.; Torkamani-Azar, F.; Jahanshahi, J.A. Image quality assessment using the singular value decomposition theorem. Opt. Rev. 2009, 16, 49–53.
- Okarma, K. Combined image similarity index. Opt. Rev. 2012, 19, 349–354.
- Okarma, K. Extended hybrid image similarity–combined full-reference image quality metric linearly correlated with subjective scores. Elektron. Elektrotechnika 2013, 19, 129–132.
- Oszust, M. Image quality assessment with lasso regression and pairwise score differences. Multimed. Tools Appl. 2017, 76, 13255–13270.
- Yuan, Y.; Guo, Q.; Lu, X. Image quality assessment: A sparse learning way. Neurocomputing 2015, 159, 227–241.
- Lukin, V.V.; Ponomarenko, N.N.; Ieremeiev, O.I.; Egiazarian, K.O.; Astola, J. Combining full-reference image visual quality metrics by neural network. In Human Vision and Electronic Imaging XX; International Society for Optics and Photonics: San Diego, CA, USA, 2015; Volume 9394, p. 93940K.
- Oszust, M. Full-reference image quality assessment with linear combination of genetically selected quality measures. PLoS ONE 2016, 11, e0158333.
- Amirshahi, S.A.; Pedersen, M.; Beghdadi, A. Reviving traditional image quality metrics using CNNs. In Proceedings of the Color and Imaging Conference, Society for Imaging Science and Technology, Vancouver, BC, Canada, 12–16 November 2018; Volume 2018, pp. 241–246.
- Lin, H.; Hosu, V.; Saupe, D. DeepFL-IQA: Weak supervision for deep IQA feature learning. arXiv 2020, arXiv:2001.08113.
- Gao, F.; Yu, J.; Zhu, S.; Huang, Q.; Tian, Q. Blind image quality prediction by exploiting multi-level deep representations. Pattern Recognit. 2018, 81, 432–442.
- Lin, H.; Hosu, V.; Saupe, D. KADID-10k: A large-scale artificially distorted IQA database. In Proceedings of the 2019 Eleventh International Conference on Quality of Multimedia Experience (QoMEX), Berlin, Germany, 5–7 June 2019; pp. 1–3.
- Lin, H.; Hosu, V.; Saupe, D. KonIQ-10K: Towards an ecologically valid and large-scale IQA database. arXiv 2018, arXiv:1803.08489.
- Ponomarenko, N.; Lukin, V.; Zelensky, A.; Egiazarian, K.; Carli, M.; Battisti, F. TID2008—A database for evaluation of full-reference visual quality assessment metrics. Adv. Mod. Radioelectron. 2009, 10, 30–45.
- Zarić, A.; Tatalović, N.; Brajković, N.; Hlevnjak, H.; Lončarić, M.; Dumić, E.; Grgić, S. VCL@FER image quality assessment database. AUTOMATIKA Časopis Autom. Mjer. Elektron. Računarstvo Komun. 2012, 53, 344–354.
- Sun, W.; Zhou, F.; Liao, Q. MDID: A multiply distorted image database for image quality assessment. Pattern Recognit. 2017, 61, 153–168.
- Larson, E.C.; Chandler, D.M. Most apparent distortion: Full-reference image quality assessment and the role of strategy. J. Electron. Imaging 2010, 19, 011006.
- Hii, Y.L.; See, J.; Kairanbay, M.; Wong, L.K. Multigap: Multi-pooled inception network with text augmentation for aesthetic prediction of photographs. In Proceedings of the 2017 IEEE International Conference on Image Processing (ICIP), Beijing, China, 17–20 September 2017; pp. 1722–1726.
- Drucker, H.; Burges, C.J.; Kaufman, L.; Smola, A.J.; Vapnik, V. Support vector regression machines. In Advances in Neural Information Processing Systems; MIT Press: Cambridge, MA, USA, 1997; pp. 155–161.
- Williams, C.K.; Rasmussen, C.E. Gaussian Processes for Machine Learning; MIT Press: Cambridge, MA, USA, 2006; Volume 2.
- Ponomarenko, N.; Jin, L.; Ieremeiev, O.; Lukin, V.; Egiazarian, K.; Astola, J.; Vozel, B.; Chehdi, K.; Carli, M.; Battisti, F.; et al. Image database TID2013: Peculiarities, results and perspectives. Signal Process. Image Commun. 2015, 30, 57–77.
- Sheikh, H.R.; Sabir, M.F.; Bovik, A.C. A statistical evaluation of recent full reference image quality assessment algorithms. IEEE Trans. Image Process. 2006, 15, 3440–3451.
- Cho, J.; Lee, K.; Shin, E.; Choy, G.; Do, S. How much data is needed to train a medical image deep learning system to achieve necessary high accuracy? arXiv 2015, arXiv:1511.06348.
- Liu, A.; Lin, W.; Narwaria, M. Image quality assessment based on gradient similarity. IEEE Trans. Image Process. 2011, 21, 1500–1512.
- Nafchi, H.Z.; Shahkolaei, A.; Hedjam, R.; Cheriet, M. Mean deviation similarity index: Efficient and reliable full-reference image quality evaluator. IEEE Access 2016, 4, 5579–5590.
- Temel, D.; AlRegib, G. CSV: Image quality assessment based on color, structure, and visual system. Signal Process. Image Commun. 2016, 48, 92–103.
- Balanov, A.; Schwartz, A.; Moshe, Y.; Peleg, N. Image quality assessment based on DCT subband similarity. In Proceedings of the 2015 IEEE International Conference on Image Processing (ICIP), Quebec City, QC, Canada, 27–30 September 2015; pp. 2105–2109.
- Zhang, L.; Shen, Y.; Li, H. VSI: A visual saliency-induced index for perceptual image quality assessment. IEEE Trans. Image Process. 2014, 23, 4270–4281.
- Temel, D.; AlRegib, G. PerSIM: Multi-resolution image quality assessment in the perceptually uniform color domain. In Proceedings of the 2015 IEEE International Conference on Image Processing (ICIP), Quebec City, QC, Canada, 27–30 September 2015; pp. 1682–1686.
- Temel, D.; AlRegib, G. BLeSS: Bio-inspired low-level spatiochromatic similarity assisted image quality assessment. In Proceedings of the 2016 IEEE International Conference on Multimedia and Expo (ICME), Seattle, WA, USA, 11–15 July 2016; pp. 1–6.
- Temel, D.; AlRegib, G. ReSIFT: Reliability-weighted SIFT-based image quality assessment. In Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016; pp. 2047–2051.
- Prabhushankar, M.; Temel, D.; AlRegib, G. MS-UNIQUE: Multi-model and sharpness-weighted unsupervised image quality estimation. Electron. Imaging 2017, 2017, 30–35.
- Yang, G.; Li, D.; Lu, F.; Liao, Y.; Yang, W. RVSIM: A feature similarity method for full-reference image quality assessment. EURASIP J. Image Video Process. 2018, 2018, 6.
- Yu, X.; Bampis, C.G.; Gupta, P.; Bovik, A.C. Predicting the quality of images compressed after distortion in two steps. IEEE Trans. Image Process. 2019, 28, 5757–5770.
- Temel, D.; AlRegib, G. Perceptual image quality assessment through spectral analysis of error representations. Signal Process. Image Commun. 2019, 70, 37–46.
- Layek, M.; Uddin, A.; Le, T.P.; Chung, T.; Huh, E.-N. Center-emphasized visual saliency and a contrast-based full reference image quality index. Symmetry 2019, 11, 296.
- Shi, C.; Lin, Y. Full reference image quality assessment based on visual salience with color appearance and gradient similarity. IEEE Access 2020.
- Ding, K.; Ma, K.; Wang, S.; Simoncelli, E.P. Image quality assessment: Unifying structure and texture similarity. arXiv 2020, arXiv:2004.07728.
- Liu, X.; van de Weijer, J.; Bagdanov, A.D. RankIQA: Learning from rankings for no-reference image quality assessment. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 1040–1049.
- Pan, D.; Shi, P.; Hou, M.; Ying, Z.; Fu, S.; Zhang, Y. Blind predicting similar quality map for image quality assessment. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 6373–6382.
- Yan, B.; Bare, B.; Tan, W. Naturalness-aware deep no-reference image quality assessment. IEEE Trans. Multimed. 2019, 21, 2603–2615.
- ITU-T. P.1401: Methods, Metrics and Procedures for Statistical Evaluation, Qualification and Comparison of Objective Quality Prediction Models. 2012. Available online: https://www.itu.int/rec/T-REC-P.1401-202001-I/en (accessed on 13 January 2020).
Database | Ref. Images | Test Images | Resolution | Distortion Levels | Number of Distortions |
---|---|---|---|---|---|
TID2008 [46] | 25 | 1700 | – | 4 | 17 |
CSIQ [49] | 30 | 866 | – | 4–5 | 6 |
VCL-FER [47] | 23 | 552 | – | 6 | 4 |
TID2013 [53] | 25 | 3000 | – | 5 | 24 |
MDID [48] | 20 | 1600 | – | 4 | 5 |
KADID-10k [44] | 81 | 10,125 | – | 5 | 25 |
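The experiments outlined in Sections 3.2, 3.5, and 3.7 train a regressor on one part of a database and evaluate on the rest. In full-reference IQA such splits are customarily made at the level of reference images, so that distorted versions of the same content never appear in both the training and test sets. A minimal sketch of such a split (not the paper's code, and using a hypothetical `(reference_id, image, score)` record layout):

```python
import random

def split_by_reference(samples, test_fraction=0.2, seed=0):
    """Split (reference_id, image, score) records so that every distorted
    version of a given reference image lands on the same side of the split."""
    refs = sorted({ref for ref, _, _ in samples})
    rng = random.Random(seed)       # fixed seed for a reproducible split
    rng.shuffle(refs)
    n_test = max(1, round(len(refs) * test_fraction))
    test_refs = set(refs[:n_test])
    train = [s for s in samples if s[0] not in test_refs]
    test = [s for s in samples if s[0] in test_refs]
    return train, test
```

Splitting by distorted image instead would leak content between the sets and inflate the measured correlations.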
Distortion | PLCC | SROCC | KROCC |
---|---|---|---|
Gaussian blur | 0.987 | 0.956 | 0.828 |
Lens blur | 0.971 | 0.923 | 0.780 |
Motion blur | 0.976 | 0.960 | 0.827 |
Color diffusion | 0.971 | 0.906 | 0.744 |
Color shift | 0.942 | 0.866 | 0.698 |
Color quantization | 0.902 | 0.868 | 0.692 |
Color saturation 1 | 0.712 | 0.654 | 0.484 |
Color saturation 2 | 0.973 | 0.945 | 0.798 |
JPEG2000 | 0.977 | 0.941 | 0.800 |
JPEG | 0.983 | 0.897 | 0.741 |
White noise | 0.921 | 0.919 | 0.758 |
White noise in color component | 0.958 | 0.946 | 0.802 |
Impulse noise | 0.875 | 0.872 | 0.694 |
Multiplicative noise | 0.958 | 0.952 | 0.813 |
Denoise | 0.955 | 0.941 | 0.799 |
Brighten | 0.969 | 0.951 | 0.815 |
Darken | 0.973 | 0.919 | 0.769 |
Mean shift | 0.778 | 0.777 | 0.586 |
Jitter | 0.981 | 0.962 | 0.834 |
Non-eccentricity patch | 0.693 | 0.667 | 0.489 |
Pixelate | 0.909 | 0.854 | 0.681 |
Quantization | 0.893 | 0.881 | 0.705 |
Color block | 0.647 | 0.539 | 0.386 |
High sharpen | 0.948 | 0.938 | 0.786 |
Contrast change | 0.802 | 0.805 | 0.607 |
All | 0.959 | 0.957 | 0.819 |
Level of Distortion | PLCC | SROCC | KROCC |
---|---|---|---|
Level 1 | 0.889 | 0.843 | 0.659 |
Level 2 | 0.924 | 0.918 | 0.748 |
Level 3 | 0.935 | 0.933 | 0.777 |
Level 4 | 0.937 | 0.922 | 0.765 |
Level 5 | 0.931 | 0.897 | 0.725 |
All | 0.959 | 0.957 | 0.819 |
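The tables above report Pearson's linear correlation coefficient (PLCC), Spearman's rank-order correlation (SROCC), and Kendall's rank-order correlation (KROCC) between predicted and subjective quality scores. As a minimal self-contained sketch (not the paper's code; KROCC is shown in its tie-free tau-a form), the three coefficients can be computed as follows:

```python
import math

def plcc(x, y):
    # Pearson linear correlation coefficient
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def _ranks(x):
    # 1-based ranks; tied values receive the average of their positions
    order = sorted(range(len(x)), key=lambda i: x[i])
    r = [0.0] * len(x)
    i = 0
    while i < len(x):
        j = i
        while j + 1 < len(x) and x[order[j + 1]] == x[order[i]]:
            j += 1
        for k in range(i, j + 1):
            r[order[k]] = (i + j) / 2 + 1
        i = j + 1
    return r

def srocc(x, y):
    # Spearman rank-order correlation: PLCC of the rank-transformed scores
    return plcc(_ranks(x), _ranks(y))

def krocc(x, y):
    # Kendall tau-a: (concordant - discordant pairs) / total pairs
    n = len(x)
    s = 0
    for i in range(n):
        for j in range(i + 1, n):
            p = (x[i] - x[j]) * (y[i] - y[j])
            s += (p > 0) - (p < 0)
    return s / (n * (n - 1) / 2)
```

SROCC and KROCC depend only on rank order, so any monotone prediction scores perfectly; PLCC additionally rewards a linear relationship with the subjective scores.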
Method | KADID-10k [44] PLCC | KADID-10k SROCC | KADID-10k KROCC | TID2013 [53] PLCC | TID2013 SROCC | TID2013 KROCC |
---|---|---|---|---|---|---|
SSIM [12] | 0.670 | 0.671 | 0.489 | 0.618 | 0.616 | 0.437 |
MS-SSIM [14] | 0.819 | 0.821 | 0.630 | 0.794 | 0.785 | 0.604 |
MAD [49] | 0.716 | 0.724 | 0.535 | 0.827 | 0.778 | 0.600 |
GSM [56] | 0.780 | 0.780 | 0.588 | 0.789 | 0.787 | 0.593 |
HaarPSI [20] | 0.871 | 0.885 | 0.699 | 0.886 | 0.859 | 0.678 |
MDSI [57] | 0.887 | 0.885 | 0.702 | 0.867 | 0.859 | 0.677 |
CSV [58] | 0.671 | 0.669 | 0.531 | 0.852 | 0.848 | 0.657 |
GMSD [19] | 0.847 | 0.847 | 0.664 | 0.846 | 0.844 | 0.663 |
DSS [59] | 0.855 | 0.860 | 0.674 | 0.793 | 0.781 | 0.604 |
VSI [60] | 0.874 | 0.861 | 0.678 | 0.900 | 0.894 | 0.677 |
PerSIM [61] | 0.819 | 0.824 | 0.634 | 0.825 | 0.826 | 0.655 |
BLeSS-SR-SIM [62] | 0.820 | 0.824 | 0.633 | 0.814 | 0.828 | 0.648 |
BLeSS-FSIM [62] | 0.814 | 0.816 | 0.624 | 0.824 | 0.830 | 0.649 |
BLeSS-FSIMc [62] | 0.845 | 0.848 | 0.658 | 0.846 | 0.849 | 0.667 |
LCSIM1 [40] | - | - | - | 0.914 | 0.904 | 0.733 |
ReSIFT [63] | 0.648 | 0.628 | 0.468 | 0.630 | 0.623 | 0.471 |
IQ() [28] | 0.853 | 0.852 | 0.641 | 0.844 | 0.842 | 0.631 |
MS-UNIQUE [64] | 0.845 | 0.840 | 0.648 | 0.865 | 0.871 | 0.687 |
SSIM CNN [41] | 0.811 | 0.814 | 0.630 | 0.759 | 0.752 | 0.566 |
RVSIM [65] | 0.728 | 0.719 | 0.540 | 0.763 | 0.683 | 0.520 |
2stepQA [66] | 0.768 | 0.771 | 0.571 | 0.736 | 0.733 | 0.550 |
SUMMER [67] | 0.719 | 0.723 | 0.540 | 0.623 | 0.622 | 0.472 |
CEQI [68] | 0.862 | 0.863 | 0.681 | 0.855 | 0.802 | 0.635 |
CEQIc [68] | 0.867 | 0.864 | 0.682 | 0.858 | 0.851 | 0.638 |
VCGS [69] | 0.873 | 0.871 | 0.683 | 0.900 | 0.893 | 0.712 |
DISTS [70] | 0.809 | 0.814 | 0.626 | 0.759 | 0.711 | 0.524 |
DeepFL-IQA [42] | 0.938 | 0.936 | - | 0.876 | 0.858 | - |
BLINDER [43] | - | - | - | 0.819 | 0.838 | - |
RankIQA [71] | - | - | - | 0.799 | 0.780 | - |
BPSOM-MD [72] | - | - | - | 0.879 | 0.863 | - |
NSSADNN [73] | - | - | - | 0.910 | 0.844 | - |
ActMapFeat (ours) | 0.959 | 0.957 | 0.819 | 0.943 | 0.936 | 0.780 |
Method | VCL-FER [47] PLCC | VCL-FER SROCC | VCL-FER KROCC | TID2008 [46] PLCC | TID2008 SROCC | TID2008 KROCC |
---|---|---|---|---|---|---|
SSIM [12] | 0.751 | 0.859 | 0.666 | 0.669 | 0.675 | 0.485 |
MS-SSIM [14] | 0.917 | 0.925 | 0.753 | 0.838 | 0.846 | 0.648 |
MAD [49] | 0.904 | 0.906 | 0.721 | 0.831 | 0.829 | 0.639 |
GSM [56] | 0.904 | 0.905 | 0.721 | 0.782 | 0.781 | 0.578 |
HaarPSI [20] | 0.938 | 0.946 | 0.789 | 0.916 | 0.897 | 0.723 |
MDSI [57] | 0.935 | 0.939 | 0.774 | 0.877 | 0.892 | 0.724 |
CSV [58] | 0.951 | 0.952 | 0.798 | 0.852 | 0.851 | 0.659 |
GMSD [19] | 0.918 | 0.918 | 0.741 | 0.879 | 0.879 | 0.696 |
DSS [59] | 0.925 | 0.927 | 0.757 | 0.860 | 0.860 | 0.672 |
VSI [60] | 0.929 | 0.932 | 0.763 | 0.898 | 0.896 | 0.709 |
PerSIM [61] | 0.926 | 0.928 | 0.761 | 0.826 | 0.830 | 0.655 |
BLeSS-SR-SIM [62] | 0.899 | 0.909 | 0.727 | 0.846 | 0.850 | 0.672 |
BLeSS-FSIM [62] | 0.927 | 0.924 | 0.751 | 0.853 | 0.851 | 0.669 |
BLeSS-FSIMc [62] | 0.932 | 0.935 | 0.768 | 0.871 | 0.871 | 0.687 |
LCSIM1 [40] | - | - | - | 0.896 | 0.906 | 0.727 |
ReSIFT [63] | 0.914 | 0.917 | 0.733 | 0.627 | 0.632 | 0.484 |
IQ() [28] | 0.910 | 0.912 | 0.718 | 0.841 | 0.840 | 0.629 |
MS-UNIQUE [64] | 0.954 | 0.956 | 0.840 | 0.846 | 0.869 | 0.681 |
SSIM CNN [41] | 0.917 | 0.921 | 0.743 | 0.770 | 0.737 | 0.551 |
RVSIM [65] | 0.894 | 0.901 | 0.719 | 0.789 | 0.743 | 0.566 |
2stepQA [66] | 0.883 | 0.887 | 0.698 | 0.757 | 0.769 | 0.574 |
SUMMER [67] | 0.750 | 0.754 | 0.596 | 0.817 | 0.823 | 0.637 |
CEQI [68] | 0.894 | 0.920 | 0.747 | 0.887 | 0.891 | 0.714 |
CEQIc [68] | 0.906 | 0.918 | 0.744 | 0.892 | 0.895 | 0.719 |
VCGS [69] | 0.940 | 0.937 | 0.773 | 0.878 | 0.887 | 0.705 |
DISTS [70] | 0.923 | 0.922 | 0.746 | 0.705 | 0.668 | 0.488 |
DeepFL-IQA [42] | - | - | - | - | - | - |
BLINDER [43] | - | - | - | - | - | - |
RankIQA [71] | - | - | - | - | - | - |
BPSOM-MD [72] | - | - | - | - | - | - |
NSSADNN [73] | - | - | - | - | - | - |
ActMapFeat (ours) | 0.960 | 0.961 | 0.826 | 0.941 | 0.937 | 0.790 |
Method | MDID [48] PLCC | MDID SROCC | MDID KROCC | CSIQ [49] PLCC | CSIQ SROCC | CSIQ KROCC |
---|---|---|---|---|---|---|
SSIM [12] | 0.581 | 0.576 | 0.411 | 0.812 | 0.812 | 0.606 |
MS-SSIM [14] | 0.836 | 0.841 | 0.654 | 0.913 | 0.917 | 0.743 |
MAD [49] | 0.742 | 0.725 | 0.533 | 0.950 | 0.947 | 0.796 |
GSM [56] | 0.825 | 0.827 | 0.636 | 0.906 | 0.910 | 0.729 |
HaarPSI [20] | 0.904 | 0.903 | 0.734 | 0.946 | 0.960 | 0.823 |
MDSI [57] | 0.829 | 0.836 | 0.653 | 0.953 | 0.957 | 0.812 |
CSV [58] | 0.879 | 0.881 | 0.700 | 0.933 | 0.933 | 0.766 |
GMSD [19] | 0.864 | 0.862 | 0.680 | 0.954 | 0.957 | 0.812 |
DSS [59] | 0.870 | 0.866 | 0.679 | 0.953 | 0.955 | 0.811 |
VSI [60] | 0.855 | 0.857 | 0.671 | 0.928 | 0.942 | 0.785 |
PerSIM [61] | 0.823 | 0.820 | 0.630 | 0.924 | 0.929 | 0.768 |
BLeSS-SR-SIM [62] | 0.805 | 0.815 | 0.626 | 0.892 | 0.893 | 0.718 |
BLeSS-FSIM [62] | 0.848 | 0.847 | 0.658 | 0.882 | 0.885 | 0.701 |
BLeSS-FSIMc [62] | 0.878 | 0.883 | 0.702 | 0.913 | 0.917 | 0.743 |
LCSIM1 [40] | - | - | - | 0.897 | 0.949 | 0.799 |
ReSIFT [63] | 0.905 | 0.895 | 0.716 | 0.884 | 0.868 | 0.695 |
IQ() [28] | 0.867 | 0.865 | 0.708 | 0.915 | 0.912 | 0.720 |
MS-UNIQUE [64] | 0.863 | 0.871 | 0.689 | 0.918 | 0.929 | 0.759 |
SSIM CNN [41] | 0.904 | 0.907 | 0.732 | 0.952 | 0.946 | 0.794 |
RVSIM [65] | 0.884 | 0.884 | 0.709 | 0.923 | 0.903 | 0.728 |
2stepQA [66] | 0.753 | 0.759 | 0.562 | 0.841 | 0.849 | 0.655 |
SUMMER [67] | 0.742 | 0.734 | 0.543 | 0.826 | 0.830 | 0.658 |
CEQI [68] | 0.863 | 0.864 | 0.685 | 0.956 | 0.956 | 0.814 |
CEQIc [68] | 0.864 | 0.863 | 0.684 | 0.956 | 0.955 | 0.810 |
VCGS [69] | 0.867 | 0.869 | 0.687 | 0.931 | 0.944 | 0.790 |
DISTS [70] | 0.862 | 0.860 | 0.669 | 0.930 | 0.930 | 0.764 |
DeepFL-IQA [42] | - | - | - | 0.946 | 0.930 | - |
BLINDER [43] | - | - | - | 0.968 | 0.961 | - |
RankIQA [71] | - | - | - | 0.960 | 0.947 | - |
BPSOM-MD [72] | - | - | - | 0.860 | 0.904 | - |
NSSADNN [73] | - | - | - | 0.927 | 0.893 | - |
ActMapFeat (ours) | 0.930 | 0.927 | 0.769 | 0.971 | 0.970 | 0.850 |
© 2020 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Varga, D. A Combined Full-Reference Image Quality Assessment Method Based on Convolutional Activation Maps. Algorithms 2020, 13, 313. https://doi.org/10.3390/a13120313