Progress in Blind Image Quality Assessment: A Brief Review
Abstract
1. Introduction
2. Overview of Blind Image Quality Assessment
3. Conventional Two-Stage BIQA Methods
3.1. Hand-Crafted Features
3.1.1. Statistical Features
3.1.2. Texture Features
3.1.3. Key Point Descriptors
3.2. Learning-Based Features
3.3. Quality Regression Models
4. DNN-Based One-Stage BIQA Methods
4.1. Simple Convolutional Neural Network Models
4.2. Multi-Task Architectures
4.3. Dual-Branch Architectures
4.4. Transformer-Based Models
4.5. Other Representative Models
5. Performance Comparison of BIQA Methods
5.1. Evaluation Metrics
5.2. Typical Datasets
5.3. Performance Comparison on Typical IQA Datasets
6. Discussion and Future Perspectives
- Improve the adaptability of DNN models to both synthetic and authentic distortions. Handling the two at the same time is challenging because their statistics and visual appearance differ substantially. Although earlier efforts address this problem (e.g., the dual-branch design of DB-CNN, see Figure 10; a minimal sketch of the idea follows this list), it remains an area for further exploration.
- Build effective BIQA learning models from limited training samples. The size of the training set significantly affects the performance of BIQA methods, and collecting large numbers of subjectively rated images is difficult, so methods that assess quality effectively from limited samples are needed.
- Balance model performance and complexity. Although DNN-based BIQA models have achieved remarkable performance, their complexity limits deployability. In practice, simple quality evaluators such as the structural similarity index measure (SSIM; see the second sketch after this list) are still widely used precisely because of their simplicity. The trade-off between the performance and the complexity of DNN-based BIQA models should therefore be taken into account.
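For concreteness, the sketch below illustrates the dual-branch idea behind DB-CNN in PyTorch: one branch targets synthetic distortions, the other reuses ImageNet-pretrained features for authentic distortions, and the two are fused by bilinear pooling. The layer sizes, the small stand-in for the synthetic branch, and the plain linear regressor are illustrative assumptions, not the authors' exact configuration.

```python
# A minimal PyTorch sketch of a DB-CNN-style dual-branch quality regressor.
# The synthetic branch below is a small stand-in (DB-CNN pretrains an S-CNN
# to classify distortions); sizes are illustrative assumptions.
import torch
import torch.nn as nn
import torchvision.models as models

class DualBranchIQA(nn.Module):
    def __init__(self):
        super().__init__()
        # Branch 1: features tailored to synthetic distortions.
        self.synthetic = nn.Sequential(
            nn.Conv2d(3, 48, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(48, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Branch 2: ImageNet-pretrained features for authentic distortions
        # (downloads VGG-16 weights on first use).
        vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
        self.authentic = vgg.features
        self.regressor = nn.Linear(128 * 512, 1)

    def forward(self, x):
        f1 = self.synthetic(x)                     # (B, 128, h1, w1)
        f2 = self.authentic(x)                     # (B, 512, h2, w2)
        f2 = nn.functional.interpolate(f2, size=f1.shape[-2:])
        b, c1, h, w = f1.shape
        c2 = f2.shape[1]
        # Bilinear pooling: outer product of the two feature sets,
        # averaged over spatial locations.
        f1 = f1.view(b, c1, h * w)
        f2 = f2.view(b, c2, h * w)
        bilinear = torch.bmm(f1, f2.transpose(1, 2)) / (h * w)  # (B, c1, c2)
        return self.regressor(bilinear.view(b, -1)).squeeze(-1)

model = DualBranchIQA().eval()
score = model(torch.rand(1, 3, 224, 224))  # one predicted quality score
```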
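By contrast, the "simple evaluator" mentioned in the last point fits in a few lines. This is a minimal single-scale SSIM sketch after Wang et al.; it uses a uniform 8 × 8 window in place of the 11 × 11 Gaussian window of the reference implementation, so treat it as an approximation for illustration only.

```python
# Minimal single-scale SSIM for grayscale images in [0, 255];
# window size and the C1/C2 constants follow common defaults.
import numpy as np
from scipy.ndimage import uniform_filter

def ssim(x, y, data_range=255.0, win=8):
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_x, mu_y = uniform_filter(x, win), uniform_filter(y, win)
    var_x = uniform_filter(x * x, win) - mu_x ** 2
    var_y = uniform_filter(y * y, win) - mu_y ** 2
    cov = uniform_filter(x * y, win) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * cov + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    return float(np.mean(num / den))

ref = np.random.rand(128, 128) * 255.0
dist = np.clip(ref + np.random.normal(0, 10, ref.shape), 0, 255)
print(ssim(ref, dist))   # < 1.0; equals 1.0 when dist == ref
```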
7. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Han, J.; Ji, X.; Hu, X.; Zhu, D.; Li, K.; Jiang, X.; Cui, G.; Guo, L.; Liu, T. Representing and retrieving video shots in human-centric brain imaging space. IEEE Trans. Image Process. 2013, 22, 2723–2736.
- Tao, D.; Tang, X.; Li, X.; Wu, X. Asymmetric bagging and random subspace for support vector machines-based relevance feedback in image retrieval. IEEE Trans. Pattern Anal. Mach. Intell. 2006, 28, 1088–1099.
- Tao, D.; Li, X.; Wu, X.; Maybank, S.J. General tensor discriminant analysis and Gabor features for gait recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 29, 1700–1715.
- Zhu, F.; Shao, L. Weakly-supervised cross-domain dictionary learning for visual recognition. Int. J. Comput. Vis. 2014, 109, 42–59.
- Li, F.; Shuang, F.; Liu, Z.; Qian, X. A cost-constrained video quality satisfaction study on mobile devices. IEEE Trans. Multimed. 2017, 20, 1154–1168.
- Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
- Zhang, L.; Zhang, L.; Mou, X.Q.; Zhang, D. FSIM: A feature similarity index for image quality assessment. IEEE Trans. Image Process. 2011, 20, 2378–2386.
- Mittal, A.; Moorthy, A.K.; Bovik, A.C. No-reference image quality assessment in the spatial domain. IEEE Trans. Image Process. 2012, 21, 4695–4708.
- Cheon, M.; Yoon, S.J.; Kang, B.; Lee, J. Perceptual image quality assessment with transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 19–25 June 2021; pp. 433–442.
- Narvekar, N.D.; Karam, L.J. A no-reference image blur metric based on the cumulative probability of blur detection (CPBD). IEEE Trans. Image Process. 2011, 20, 2678–2683.
- Liu, Y.; Zhai, G.; Gu, K.; Liu, X.; Zhao, D.; Gao, W. Reduced-reference image quality assessment in free-energy principle and sparse representation. IEEE Trans. Multimed. 2017, 20, 379–391.
- Li, L.; Yan, Y.; Lu, Z.; Wu, J.; Gu, K.; Wang, S. No-reference quality assessment of deblurred images based on natural scene statistics. IEEE Access 2017, 5, 2163–2171.
- Manap, R.A.; Shao, L. Non-distortion-specific no-reference image quality assessment: A survey. Inf. Sci. 2015, 301, 141–160.
- Xu, S.; Jiang, S.; Min, W. No-reference/blind image quality assessment: A survey. IETE Tech. Rev. 2017, 34, 2163–2171.
- Yang, X.H.; Li, F.; Liu, H.T. A survey of DNN methods for blind image quality assessment. IEEE Access 2019, 7, 123788–123806.
- Gu, K.; Xu, X.; Qiao, J.F.; Jiang, Q.P.; Lin, W.S.; Thalmann, D. Learning a unified blind image quality metric via on-line and off-line big training instances. IEEE Trans. Big Data 2019, 6, 780–791.
- Yue, G.H.; Hou, C.P.; Gu, K.; Zhou, T.W.; Zhai, G.T. Combining local and global measures for DIBR-synthesized image quality evaluation. IEEE Trans. Image Process. 2018, 28, 2075–2088.
- Gu, K.; Qiao, J.F.; Callet, P.L.; Xia, Z.F.; Lin, W.S. Using multiscale analysis for blind quality assessment of DIBR-synthesized images. In Proceedings of the 2017 IEEE International Conference on Image Processing (ICIP), Beijing, China, 17–20 September 2017; pp. 745–749.
- Sun, W.; Min, X.K.; Zhai, G.T.; Gu, K.; Duan, H.Y.; Ma, S.W. MC360IQA: A multi-channel CNN for blind 360-degree image quality assessment. IEEE J. Sel. Top. Signal Process. 2019, 14, 64–77.
- Su, S.L.; Yan, Q.S.; Zhu, Y.; Zhang, C.; Ge, X.; Sun, J.; Zhang, Y. Blindly assess image quality in the wild guided by a self-adaptive hyper network. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 3667–3676.
- Sun, S.; Yu, T.; Xu, J.; Zhou, W.; Chen, Z. GraphIQA: Learning distortion graph representations for blind image quality assessment. IEEE Trans. Multimed. 2022.
- Golestaneh, S.A.; Dadsetan, S.; Kitani, K.M. No-reference image quality assessment via transformers, relative ranking, and self-consistency. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 4–8 January 2022; pp. 1220–1230.
- Moorthy, A.K.; Bovik, A.C. A two-step framework for constructing blind image quality indices. IEEE Signal Process. Lett. 2010, 17, 513–516.
- Moorthy, A.K.; Bovik, A.C. Blind image quality assessment: From natural scene statistics to perceptual quality. IEEE Trans. Image Process. 2011, 20, 3350–3364.
- Liu, L.; Dong, H.; Huang, H.; Bovik, A.C. No-reference image quality assessment in curvelet domain. Signal Process. Image Commun. 2014, 29, 494–505.
- Saad, M.A.; Bovik, A.C.; Charrier, C. Blind image quality assessment: A natural scene statistics approach in the DCT domain. IEEE Trans. Image Process. 2012, 21, 3339–3352.
- Tang, H.X.; Joshi, N.; Kapoor, A. Blind image quality assessment using semi-supervised rectifier networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 2877–2884.
- Ghadiyaram, D.; Bovik, A.C. Blind image quality assessment on real distorted images using deep belief nets. In Proceedings of the IEEE Global Conference on Signal and Information Processing, Atlanta, GA, USA, 3–5 December 2014; pp. 946–950.
- Li, D.Q.; Jiang, T.T.; Lin, W.S.; Jiang, M. Which has better visual quality: The clear blue sky or a blurry animal? IEEE Trans. Multimed. 2018, 21, 1221–1234.
- Sun, C.R.; Li, H.Q.; Li, W.P. No-reference image quality assessment based on global and local content perception. In Proceedings of the Visual Communications and Image Processing, Chengdu, China, 27–30 November 2016; pp. 1–4.
- Wang, X.H.; Pang, Y.J.; Ma, X.C. Real distorted images quality assessment based on multi-layer visual perception mechanism and high-level semantics. Multimed. Tools Appl. 2020, 79, 25905–25920.
- Kang, L.; Ye, P.; Li, Y.; Doermann, D. Convolutional neural networks for no-reference image quality assessment. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 1733–1740.
- Kim, J.; Lee, S. Fully deep blind image quality predictor. IEEE J. Sel. Top. Signal Process. 2016, 11, 206–220.
- Yan, Q.S.; Gong, D.; Zhang, Y.N. Two-stream convolutional networks for blind image quality assessment. IEEE Trans. Image Process. 2018, 28, 2200–2211.
- Kang, L.; Ye, P.; Li, Y.; Doermann, D. Simultaneous estimation of image quality and distortion via multi-task convolutional neural networks. In Proceedings of the IEEE International Conference on Image Processing, Quebec City, QC, Canada, 27–30 September 2015; pp. 2791–2795.
- Ma, K.; Liu, W.; Zhang, K.; Duanmu, Z.; Wang, Z.; Zuo, W. End-to-end blind image quality assessment using deep neural networks. IEEE Trans. Image Process. 2018, 27, 1202–1213.
- Zhang, W.; Ma, K.; Yan, J.; Deng, D.; Wang, Z. Blind image quality assessment using a deep bilinear convolutional neural network. IEEE Trans. Circuits Syst. Video Technol. 2018, 30, 36–47.
- Ren, H.Y.; Chen, D.Q.; Wang, Y.Z. RAN4IQA: Restorative adversarial nets for no-reference image quality assessment. In Proceedings of the AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7 February 2018; pp. 7308–7314.
- Lin, K.Y.; Wang, G.X. Hallucinated-IQA: No-reference image quality assessment via adversarial learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 732–741.
- Zhang, P.Y.; Shao, X.; Li, Z.H. CycleIQA: Blind image quality assessment via cycle-consistent adversarial networks. In Proceedings of the IEEE International Conference on Multimedia and Expo, Taipei, Taiwan, 18–22 June 2022; pp. 1–6.
- Sheikh, H.R.; Bovik, A.C.; De Veciana, G. An information fidelity criterion for image quality assessment using natural scene statistics. IEEE Trans. Image Process. 2005, 14, 2117–2128.
- Saad, M.A.; Bovik, A.C.; Charrier, C. A DCT statistics-based blind image quality index. IEEE Signal Process. Lett. 2010, 17, 494–505.
- Xue, W.; Mou, X.; Zhang, L.; Bovik, A.C.; Feng, X. Blind image quality assessment using joint statistics of gradient magnitude and Laplacian features. IEEE Trans. Image Process. 2014, 23, 4850–4862.
- Zhang, M.; Xie, J.; Zhou, X.; Fujita, H. No reference image quality assessment based on local binary pattern statistics. In Proceedings of the Visual Communications and Image Processing (VCIP), Kuching, Malaysia, 17–20 November 2013; pp. 1–6.
- Li, Q.H.; Lin, W.S.; Xu, J.T.; Fang, Y. Blind image quality assessment using statistical structural and luminance features. IEEE Trans. Multimed. 2016, 18, 2457–2469.
- Zhang, M.; Muramatsu, C.; Zhou, X.; Hara, T.; Fujita, H. Blind image quality assessment using the joint statistics of generalized local binary pattern. IEEE Signal Process. Lett. 2014, 22, 207–210.
- Freitas, P.G.; Akamine, W.Y.L.; Farias, M.C.Q. No-reference image quality assessment based on statistics of local ternary pattern. In Proceedings of the 2016 Eighth International Conference on Quality of Multimedia Experience (QoMEX), Lisbon, Portugal, 6–8 June 2016; pp. 1–6.
- Freitas, P.G.; Akamine, W.Y.L.; Farias, M.C.Q. Blind image quality assessment using multiscale local binary patterns. J. Imaging Sci. Technol. 2017, 29, 7–14.
- Freitas, P.G.; Alamgeer, S.; Akamine, W.Y.L.; Farias, M.C.Q. Blind image quality assessment based on multiscale salient local binary patterns. In Proceedings of the 9th ACM Multimedia Systems Conference, Amsterdam, The Netherlands, 12–15 June 2018; pp. 52–63.
- Freitas, P.G.; Akamine, W.Y.L.; Farias, M.C.Q. No-reference image quality assessment using orthogonal color planes patterns. IEEE Trans. Multimed. 2018, 20, 3353–3360.
- Ojala, T.; Pietikainen, M.; Maenpaa, T. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 971–987.
- Itti, L.; Koch, C.; Niebur, E. A model of saliency-based visual attention for rapid scene analysis. IEEE Trans. Pattern Anal. Mach. Intell. 1998, 20, 1254–1259.
- Torralba, A.; Oliva, A.; Castelhano, M.S.; Henderson, J.M. Contextual guidance of eye movements and attention in real-world scenes: The role of global features in object search. Psychol. Rev. 2006, 113, 766–786.
- Freitas, P.G.; Da Eira, L.P.; Santos, S.S.; De Farias, M.C.Q. On the application of LBP texture descriptors and its variants for no-reference image quality assessment. J. Imaging 2018, 4, 114.
- Guo, Y.; Zhao, G.; Pietikainen, M. Texture classification using a linear configuration model based descriptor. In Proceedings of the British Machine Vision Conference, Dundee, UK, 29 August–2 September 2011; pp. 119.1–119.10.
- Ojansivu, V.; Heikkila, J. Blur insensitive texture classification using local phase quantization. Lect. Notes Comput. Sci. 2008, 5099, 236–243.
- Freitas, P.G.; Da Eira, L.P.; Santos, S.S.; Farias, M.C.Q. Image quality assessment using BSIF, CLBP, LCP, and LPQ operators. Theor. Comput. Sci. 2020, 805, 37–61.
- Sun, T.F.; Ding, S.F.; Xu, X.Z. No-reference image quality assessment through SIFT intensity. Appl. Math. Inf. Sci. 2014, 8, 1925–1934.
- Nizami, I.F.; Majid, M.; Rehman, M.U.; Anwar, S.M.; Nasim, A.; Khurshid, K. No-reference image quality assessment using bag-of-features with feature selection. Multimed. Tools Appl. 2020, 79, 7811–7836.
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90.
- Szegedy, C.; Liu, W.; Jia, Y.Q.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1–9.
- He, K.M.; Zhang, X.Y.; Ren, S.P.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
- Madhusudana, P.C.; Birkbeck, N.; Wang, Y.; Adsumilli, B.; Bovik, A.C. Image quality assessment using contrastive learning. IEEE Trans. Image Process. 2022, 31, 4149–4161.
- Oord, A.V.D.; Li, Y.Z.; Vinyals, O. Representation learning with contrastive predictive coding. arXiv 2018, arXiv:1807.03748.
- Scholkopf, B.; Smola, A.J.; Bach, F. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond; MIT Press: Cambridge, MA, USA, 2002.
- Gu, K.; Zhai, G.T.; Yang, X.K.; Zhang, W.J. Using free energy principle for blind image quality assessment. IEEE Trans. Multimed. 2014, 17, 50–63.
- Li, C.F.; Bovik, A.C.; Wu, X.J. Blind image quality assessment using a general regression neural network. IEEE Trans. Neural Netw. 2011, 22, 793–799.
- Nair, V.; Hinton, G.E. Rectified linear units improve restricted Boltzmann machines. In Proceedings of the International Conference on Machine Learning, Haifa, Israel, 21–24 June 2010; pp. 807–814.
- Gu, K.; Zhai, G.T.; Yang, X.K.; Zhang, W.J. Deep learning network for blind image quality assessment. In Proceedings of the 2014 IEEE International Conference on Image Processing (ICIP), Paris, France, 27–30 October 2014; pp. 511–515.
- Clevert, D.A.; Unterthiner, T.; Hochreiter, S. Fast and accurate deep network learning by exponential linear units (ELUs). arXiv 2015, arXiv:1511.07289.
- Balle, J.; Laparra, V.; Simoncelli, E.P. End-to-end optimized image compression. arXiv 2016, arXiv:1611.01704.
- Ma, K.; Duanmu, Z.; Wu, Q.; Wang, Z.; Yong, H.; Li, H.; Zhang, L. Waterloo exploration database: New challenges for image quality assessment models. IEEE Trans. Image Process. 2016, 26, 1004–1016.
- Everingham, M.; Van Gool, L.; Williams, C.K.I.; Winn, J.; Zisserman, A. The PASCAL visual object classes (VOC) challenge. Int. J. Comput. Vis. 2010, 88, 303–338.
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.; Polosukhin, I. Attention is all you need. In Advances in Neural Information Processing Systems; Curran Associates: San Francisco, CA, USA, 2017; pp. 5998–6008.
- Carion, N.; Massa, F.; Synnaeve, G.; Usunier, N.; Kirillov, A.; Zagoruyko, S. End-to-end object detection with transformers. In Proceedings of the European Conference on Computer Vision, Glasgow, UK, 23–28 August 2020; pp. 213–229.
- Zhu, Y.C.; Li, Y.H.; Sun, W.; Min, X.K.; Zhai, G.T.; Yang, X.K. Blind image quality assessment via cross-view consistency. IEEE Trans. Multimed. 2022, 1–14.
- Ha, D.; Dai, A.; Le, Q.V. Hypernetworks. arXiv 2016, arXiv:1609.09106.
- Sun, W.; Duan, H.Y.; Min, X.K.; Chen, L.; Zhai, G.T. Blind quality assessment for in-the-wild images via hierarchical feature fusion strategy. In Proceedings of the 2022 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB), Bilbao, Spain, 15–17 June 2022; pp. 1–6.
- Gao, Y.X.; Min, X.K.; Zhu, Y.C.; Li, J.; Zhang, X.P.; Zhai, G.T. Image quality assessment: From mean opinion score to opinion score distribution. In Proceedings of the 30th ACM International Conference on Multimedia, Lisboa, Portugal, 10–14 October 2022; pp. 997–1005.
- Sheikh, H.R.; Sabir, M.F.; Bovik, A.C. A statistical evaluation of recent full reference image quality assessment algorithms. IEEE Trans. Image Process. 2006, 15, 3440–3451.
- Larson, E.C.; Chandler, D.M. Most apparent distortion: Full-reference image quality assessment and the role of strategy. J. Electron. Imaging 2010, 19, 011006.
- Ponomarenko, N.; Jin, L.; Ieremeiev, O.; Lukin, V.; Egiazarian, K.; Astola, J.; Vozel, B.; Chehdi, K.; Carli, M.; Battisti, F.; et al. Image database TID2013: Peculiarities, results and perspectives. Signal Process. Image Commun. 2015, 30, 57–77.
- Ghadiyaram, D.; Bovik, A.C. Massive online crowdsourced study of subjective and objective picture quality. IEEE Trans. Image Process. 2016, 25, 372–387.
- Ciancio, A.; da Costa, A.L.N.T.; da Silva, E.A.B.; Said, A.; Samadani, R.; Obrador, P. No-reference blur assessment of digital pictures based on multifeature classifiers. IEEE Trans. Image Process. 2010, 20, 64–75.
- Hosu, V.; Lin, H.; Sziranyi, T.; Saupe, D. KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Trans. Image Process. 2020, 29, 4041–4056.
Method | Transform | NSS Model | Total Features | Description |
---|---|---|---|---|
BIQI [23] | Wavelet transform | GGD | 18 | Shape parameter and variance of GGD from three orientations over three scales |
DIIVINE [24] | Wavelet transform | GGD | 88 | Improved BIQI by considering the relationship between sub-band coefficients |
BRISQUE [8] | None | GGD and AGGD | 36 | Model parameters from normalized luminance and pairwise products of neighbouring normalized luminance |
CurveletQA [25] | Discrete curvelet transform | AGGD | 12 | Parameters of AGGD model that fits the logarithm of the magnitude of the curvelet coefficients |
BLIINDS-II [26] | DCT | GGD | 24 | Parameters of GGD model fit for each DCT block and partitions within the block, coefficient of frequency variation and energy sub-band ratio measure, etc. |
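As a concrete example of the statistical features in the table above, the sketch below computes BRISQUE-style MSCN (mean-subtracted, contrast-normalized) coefficients and a moment-matching estimate of the GGD shape parameter fitted to them. The Gaussian window width and the shape-parameter search grid are common implementation choices, i.e., assumptions rather than values specified in this review.

```python
# MSCN coefficients plus a moment-matching GGD shape estimate, as used in
# BRISQUE-style feature extraction (a sketch, not the reference code).
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.special import gamma as G

def mscn(image, sigma=7.0 / 6.0, c=1.0):
    img = image.astype(np.float64)
    mu = gaussian_filter(img, sigma)                   # local mean
    var = gaussian_filter(img * img, sigma) - mu * mu  # local variance
    return (img - mu) / (np.sqrt(np.maximum(var, 0.0)) + c)

def ggd_shape(coeffs):
    # Solve Gamma(1/a)*Gamma(3/a)/Gamma(2/a)^2 = E[x^2]/E[|x|]^2 on a grid.
    rho = np.mean(coeffs ** 2) / (np.mean(np.abs(coeffs)) ** 2)
    grid = np.arange(0.2, 10.0, 0.001)
    ratios = G(1.0 / grid) * G(3.0 / grid) / (G(2.0 / grid) ** 2)
    return grid[np.argmin(np.abs(ratios - rho))]

patch = np.random.rand(64, 64) * 255.0    # stand-in for a luminance image
coeffs = mscn(patch)
alpha = ggd_shape(coeffs.ravel())         # one BRISQUE-style feature
```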
Layer Name | Activation Function | Layer Information |
---|---|---|
Convolutional layer | / | Fifty kernels with a size of 7 × 7 and a stride of one pixel |
Pooling layer | / | One max pooling and one min pooling |
Fully connected layer | ReLU | One fully connected layer with eight hundred neurons |
Fully connected layer | ReLU | One fully connected layer with eight hundred neurons |
Linear regression layer | / | One fully connected layer with one neuron |
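A minimal PyTorch rendering of this architecture (assuming the 32 × 32 grayscale input patches and 7 × 7 kernels reported in the original paper [32]) might look as follows; training losses and patch sampling are omitted, so treat it as an illustration rather than the authors' released code.

```python
# Sketch of the patch-based CNN in the table above: one 7x7 conv layer with
# 50 kernels, global max and min pooling, two 800-neuron FC layers, and a
# linear output producing one quality score per patch.
import torch
import torch.nn as nn

class PatchIQACNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(1, 50, kernel_size=7, stride=1)
        self.fc = nn.Sequential(
            nn.Linear(100, 800), nn.ReLU(),
            nn.Linear(800, 800), nn.ReLU(),
            nn.Linear(800, 1),            # linear regression layer
        )

    def forward(self, x):                 # x: (B, 1, 32, 32) patches
        f = self.conv(x).flatten(2)       # (B, 50, 26*26)
        pooled = torch.cat([f.max(dim=2).values,
                            f.min(dim=2).values], dim=1)  # (B, 100)
        return self.fc(pooled).squeeze(-1)

scores = PatchIQACNN()(torch.rand(4, 1, 32, 32))  # per-patch quality scores
```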
Layer Name | Activation Function | Layer Information |
---|---|---|
Conv1 | ELU | Forty-eight kernels with a size of |
Max pooling layer | ELU | max pooling |
Conv2 | ELU | Sixty-four kernels with a size of |
Max pooling layer | ELU | max pooling |
FC1 | ELU | One fully connected layer with one thousand six hundred neurons |
FC2 | ELU | One fully connected layer with four hundred neurons |
FC3 | ELU | One fully connected layer with two hundred neurons |
FC4 | ELU | One fully connected layer with one hundred neurons |
Layer Name | Activation Function | Layer Information |
---|---|---|
Conv1 | ReLU | Forty-eight kernels with a size of |
Conv2 | ReLU | Forty-eight kernels with a size of |
Conv3–Conv6 | ReLU | Sixty-four kernels with a size of |
Conv7–Conv9 | ReLU | One hundred and twenty-eight kernels with a size of |
Average pooling | / | |
FC1 | ReLU | One fully connected layer with one hundred and twenty-eight neurons |
FC2 | ReLU | One fully connected layer with two hundred and fifty-six neurons |
FC3 | ReLU | One fully connected layer with thirty-nine neurons |
Softmax | / | Probabilities for thirty-nine classes |
Dataset | Distortion Type | No. of Reference Images | No. of Distorted Images | No. of Distortion Types | Subjective Score Type | Score Range |
---|---|---|---|---|---|---|
LIVE | Synthetic | 29 | 779 | 5 | DMOS | [0, 100] |
CSIQ | Synthetic | 30 | 866 | 6 | DMOS | [0, 1] |
TID2013 | Synthetic | 25 | 3000 | 24 | MOS | [0, 9] |
LIVEC | Authentic | N/A | 1162 | - | MOS | [0, 100] |
BID | Authentic | N/A | 586 | - | MOS | [0, 5] |
KonIQ-10k | Authentic | N/A | 10,073 | - | MOS | [0, 100] |
Types | Method | Publication Year | LIVE SRCC | LIVE PLCC | CSIQ SRCC | CSIQ PLCC | TID2013 SRCC | TID2013 PLCC |
---|---|---|---|---|---|---|---|---|
Two-stage_H | BIQI [23] | 2010 | 0.820 | 0.821 | 0.760 | 0.835 | 0.349 | 0.366 |
Two-stage_H | DIIVINE [24] | 2011 | 0.916 | 0.917 | 0.835 | 0.855 | 0.795 | 0.794 |
Two-stage_H | BRISQUE [8] | 2012 | 0.940 | 0.942 | 0.909 | 0.937 | 0.883 | 0.900 |
Two-stage_H | CurveletQA [25] | 2014 | 0.930 | 0.933 | - | - | - | - |
Two-stage_H | BLIINDS-II [26] | 2012 | 0.931 | 0.930 | 0.900 | 0.928 | 0.536 | 0.538 |
Two-stage_H | Xue's [43] | 2014 | 0.951 | 0.955 | 0.924 | 0.945 | - | - |
Two-stage_H | NR-LBPSriu2 [44] | 2013 | 0.932 | 0.937 | - | - | - | - |
Two-stage_H | NRSL [45] | 2016 | 0.952 | 0.956 | 0.930 | 0.954 | 0.945 | 0.959 |
Two-stage_H | NR-GLBP [46] | 2014 | 0.951 | 0.954 | 0.916 | 0.948 | 0.920 | 0.939 |
Two-stage_H | BOF-GS [59] | 2020 | 0.973 | 0.978 | 0.971 | 0.976 | 0.716 | 0.718 |
Two-stage_H | LTP [47] | 2016 | 0.942 | 0.949 | 0.864 | 0.880 | 0.841 | - |
Two-stage_H | MLBP [48] | 2016 | 0.954 | - | 0.816 | - | 0.816 | - |
Two-stage_H | MSLBP [49] | 2018 | 0.945 | - | 0.831 | - | 0.711 | - |
Two-stage_H | OCPP [50] | 2018 | 0.956 | - | 0.925 | - | 0.762 | - |
Two-stage_L | SFA [29] | 2018 | 0.963 | 0.972 | - | - | 0.948 | 0.954 |
Two-stage_L | GLCP [30] | 2016 | 0.958 | 0.959 | - | - | - | - |
Two-stage_L | CONTRIQUE [63] | 2022 | 0.969 | 0.968 | 0.902 | 0.927 | 0.843 | 0.857 |
One-stage | CNN [32] | 2014 | 0.956 | 0.953 | - | - | - | - |
One-stage | BIECON [33] | 2016 | 0.961 | 0.962 | 0.815 | 0.823 | 0.717 | 0.762 |
One-stage | Two-stream CNN [34] | 2018 | 0.969 | 0.978 | - | - | - | - |
One-stage | IQA-CNN++ [35] | 2015 | 0.950 | 0.950 | - | - | - | - |
One-stage | MEON [36] | 2018 | 0.951 | 0.955 | 0.852 | 0.864 | 0.808 | 0.824 |
One-stage | DB-CNN [37] | 2018 | 0.968 | 0.971 | 0.946 | 0.959 | 0.816 | 0.865 |
One-stage | HyperIQA [20] | 2020 | 0.962 | 0.966 | 0.923 | 0.942 | 0.840 | 0.858 |
One-stage | TReS [22] | 2022 | 0.969 | 0.968 | 0.922 | 0.942 | 0.863 | 0.883 |
One-stage | RAN4IQA [38] | 2018 | 0.962 | 0.967 | 0.911 | 0.926 | 0.816 | 0.825 |
One-stage | Hall-IQA [39] | 2018 | 0.982 | 0.982 | 0.884 | 0.901 | 0.879 | 0.880 |
One-stage | CYCLEIQA [40] | 2022 | 0.970 | 0.971 | 0.926 | 0.928 | 0.832 | 0.838 |
Types | Method | Publication Year | LIVEC SRCC | LIVEC PLCC | BID SRCC | BID PLCC | KonIQ-10k SRCC | KonIQ-10k PLCC |
---|---|---|---|---|---|---|---|---|
Two-stage_H | BIQI [23] | 2010 | 0.532 | 0.557 | 0.573 | 0.598 | - | - |
Two-stage_H | DIIVINE [24] | 2011 | 0.597 | 0.627 | 0.610 | 0.646 | - | - |
Two-stage_H | BRISQUE [8] | 2012 | 0.607 | 0.645 | 0.581 | 0.605 | 0.700 | 0.704 |
Two-stage_H | BLIINDS-II [26] | 2012 | 0.463 | 0.507 | 0.532 | 0.560 | 0.575 | 0.584 |
Two-stage_H | NRSL [45] | 2016 | 0.631 | 0.654 | 0.638 | 0.663 | - | - |
Two-stage_H | NR-GLBP [46] | 2014 | 0.612 | 0.634 | 0.628 | 0.654 | - | - |
Two-stage_L | FRIQUEE + DBN [28] | 2014 | 0.672 | 0.705 | - | - | - | - |
Two-stage_L | SFA [29] | 2018 | 0.812 | 0.833 | 0.826 | 0.840 | 0.685 | 0.764 |
Two-stage_L | CONTRIQUE [63] | 2022 | 0.845 | 0.857 | - | - | 0.894 | 0.906 |
One-stage | CNN [32] | 2014 | 0.634 | 0.671 | - | - | - | - |
One-stage | BIECON [33] | 2016 | 0.595 | 0.613 | 0.539 | 0.576 | 0.618 | 0.651 |
One-stage | MEON [36] | 2018 | 0.697 | 0.710 | - | - | 0.611 | 0.628 |
One-stage | DB-CNN [37] | 2018 | 0.851 | 0.869 | 0.845 | 0.859 | 0.875 | 0.884 |
One-stage | HyperIQA [20] | 2020 | 0.859 | 0.882 | 0.869 | 0.878 | 0.906 | 0.917 |
One-stage | TReS [22] | 2022 | 0.846 | 0.877 | - | - | 0.915 | 0.928 |
One-stage | RAN4IQA [38] | 2018 | 0.591 | 0.603 | - | - | - | - |
One-stage | CYCLEIQA [40] | 2022 | 0.786 | 0.794 | - | - | - | - |
One-stage | HFF [78] | 2022 | 0.862 | 0.882 | 0.872 | 0.883 | 0.919 | 0.935 |
One-stage | FOSD-IQA [79] | 2022 | - | - | - | - | 0.905 | 0.919 |
One-stage | CVC-T [76] | 2022 | 0.872 | 0.891 | - | - | 0.915 | 0.941 |
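As a reference for how the SRCC and PLCC columns above are typically produced, the sketch below computes both with SciPy. Mapping predictions through a monotonic logistic function before computing PLCC is standard practice; the particular four-parameter form and the initial-guess heuristics here are common choices assumed for illustration, not details taken from the compared papers.

```python
# SRCC and PLCC between predicted scores and subjective (MOS) scores,
# with a four-parameter logistic mapping fitted before PLCC (a sketch).
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import pearsonr, spearmanr

def logistic4(x, b1, b2, b3, b4):
    return (b1 - b2) / (1.0 + np.exp(-(x - b3) / (np.abs(b4) + 1e-8))) + b2

def srcc_plcc(pred, mos):
    srcc = spearmanr(pred, mos)[0]
    p0 = [np.max(mos), np.min(mos), np.mean(pred), np.std(pred) + 1e-6]
    params, _ = curve_fit(logistic4, pred, mos, p0=p0, maxfev=10000)
    plcc = pearsonr(logistic4(pred, *params), mos)[0]
    return srcc, plcc

rng = np.random.default_rng(0)
mos = rng.uniform(0.0, 100.0, 200)              # toy subjective scores
pred = 0.8 * mos + rng.normal(0.0, 5.0, 200)    # toy model predictions
print(srcc_plcc(pred, mos))                     # both close to 1.0
```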