Quality Assessment of View Synthesis Based on Visual Saliency and Texture Naturalness
Abstract
1. Introduction
2. Proposed Method
2.1. Visual Saliency Detection
2.2. Texture Naturalness Detection
2.3. Quality Assessment Model
3. Experimental Results
3.1. Experimental Databases
3.2. Performance Evaluation Criteria
3.3. LBP Parameter Settings and Computational Complexity Analysis
3.4. Compared with Existing DIBR View Synthesis Metrics
3.5. Generalization Ability Study
4. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Tian, S.; Zhang, L.; Zou, W.; Li, X.; Su, T.; Morin, L.; Deforges, O. Quality assessment of DIBR-synthesized views: An overview. Neurocomputing 2021, 423, 158–178.
- Wang, G.; Wang, Z.; Gu, K.; Li, L.; Xia, Z.; Wu, L. Blind Quality Metric of DIBR-Synthesized Images in the Discrete Wavelet Transform Domain. IEEE Trans. Image Process. 2020, 29, 1802–1814.
- PhiCong, H.; Perry, S.; Cheng, E.; HoangVan, X. Objective Quality Assessment Metrics for Light Field Image Based on Textural Features. Electronics 2022, 11, 759.
- Huang, H.Y.; Huang, S.Y. Fast Hole Filling for View Synthesis in Free Viewpoint Video. Electronics 2020, 9, 906.
- Zhou, Y.; Li, L.; Wang, S.; Wu, J.; Fang, Y.; Gao, X. No-Reference Quality Assessment for View Synthesis Using DoG-Based Edge Statistics and Texture Naturalness. IEEE Trans. Image Process. 2019, 28, 4566–4579.
- Li, L.; Zhou, Y.; Gu, K.; Lin, W.; Wang, S. Quality Assessment of DIBR-Synthesized Images by Measuring Local Geometric Distortions and Global Sharpness. IEEE Trans. Multimed. 2018, 20, 914–926.
- Gellert, A.; Brad, R. Image inpainting with Markov chains. Signal Image Video Process. 2020, 14, 1335–1343.
- Cai, L.; Kim, T. Context-driven hybrid image inpainting. IET Image Process. 2015, 9, 866–873.
- Sun, K.; Tang, L.; Qian, J.; Wang, G.; Lou, C. A deep learning-based PM2.5 concentration estimator. Displays 2021, 69, 102072.
- Wang, G.; Shi, Q.; Wang, H.; Sun, K.; Lu, Y.; Di, K. Multi-modal image feature fusion-based PM2.5 concentration estimation. Atmos. Pollut. Res. 2022, 13, 101345.
- Sun, K.; Tang, L.; Huang, S.; Qian, J. A photo-based quality assessment model for the estimation of PM2.5 concentrations. IET Image Process. 2022, 16, 1008–1016.
- Gu, K.; Lin, W.; Zhai, G.; Yang, X.; Zhang, W.; Chen, C.W. No-Reference Quality Metric of Contrast-Distorted Images Based on Information Maximization. IEEE Trans. Cybern. 2017, 47, 4559–4565.
- Gu, K.; Tao, D.; Qiao, J.F.; Lin, W. Learning a No-Reference Quality Assessment Model of Enhanced Images with Big Data. IEEE Trans. Neural Netw. Learn. Syst. 2018, 29, 1301–1313.
- Gu, K.; Zhai, G.; Lin, W.; Yang, X.; Zhang, W. No-Reference Image Sharpness Assessment in Autoregressive Parameter Space. IEEE Trans. Image Process. 2015, 24, 3218–3231.
- Li, L.; Lin, W.; Wang, X.; Yang, G.; Bahrami, K.; Kot, A.C. No-Reference Image Blur Assessment Based on Discrete Orthogonal Moments. IEEE Trans. Cybern. 2016, 46, 39–50.
- Okarma, K.; Lech, P.; Lukin, V.V. Combined Full-Reference Image Quality Metrics for Objective Assessment of Multiply Distorted Images. Electronics 2021, 10, 2256.
- Wang, G.; Wang, Z.; Gu, K.; Jiang, K.; He, Z. Reference-Free DIBR-Synthesized Video Quality Metric in Spatial and Temporal Domains. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 1119–1132.
- Gu, K.; Qiao, J.; Lee, S.; Liu, H.; Lin, W.; Le Callet, P. Multiscale Natural Scene Statistical Analysis for No-Reference Quality Evaluation of DIBR-Synthesized Views. IEEE Trans. Broadcast. 2020, 66, 127–139.
- Gu, K.; Jakhetiya, V.; Qiao, J.F.; Li, X.; Lin, W.; Thalmann, D. Model-Based Referenceless Quality Metric of 3D Synthesized Images Using Local Image Description. IEEE Trans. Image Process. 2018, 27, 394–405.
- Sandic-Stankovic, D.; Kukolj, D.; Le Callet, P. DIBR synthesized image quality assessment based on morphological wavelets. In Proceedings of the 2015 Seventh International Workshop on Quality of Multimedia Experience (QoMEX), Messinia, Greece, 26–29 May 2015.
- Sandic-Stankovic, D.; Kukolj, D.; Le Callet, P. DIBR-synthesized image quality assessment based on morphological multi-scale approach. EURASIP J. Image Video Process. 2016, 4.
- Sandic-Stankovic, D.; Kukolj, D.; Le Callet, P. Multi-scale Synthesized View Assessment based on Morphological Pyramids. J. Electr. Eng. 2016, 67, 3–11.
- Jakhetiya, V.; Gu, K.; Singhal, T.; Guntuku, S.C.; Xia, Z.; Lin, W. A Highly Efficient Blind Image Quality Assessment Metric of 3-D Synthesized Images Using Outlier Detection. IEEE Trans. Ind. Inform. 2019, 15, 4120–4128.
- Tian, S.; Zhang, L.; Morin, L.; Deforges, O. NIQSV: A No Reference Image Quality Assessment Metric for 3D Synthesized Views. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, New Orleans, LA, USA, 5–9 March 2017; pp. 1248–1252.
- Yue, G.; Hou, C.; Gu, K.; Zhou, T.; Zhai, G. Combining Local and Global Measures for DIBR-Synthesized Image Quality Evaluation. IEEE Trans. Image Process. 2019, 28, 2075–2088.
- Zheng, H.; Zhong, X.; Huang, W.; Jiang, K.; Liu, W.; Wang, Z. Visible-Infrared Person Re-Identification: A Comprehensive Survey and a New Setting. Electronics 2022, 11, 454.
- Jiang, K.; Wang, Z.; Yi, P.; Chen, C.; Wang, G.; Han, Z.; Jiang, J.; Xiong, Z. Multi-Scale Hybrid Fusion Network for Single Image Deraining. IEEE Trans. Neural Netw. Learn. Syst. 2021.
- Jiang, K.; Wang, Z.; Yi, P.; Chen, C.; Wang, Z.; Wang, X.; Jiang, J.; Lin, C.W. Rain-Free and Residue Hand-in-Hand: A Progressive Coupled Network for Real-Time Image Deraining. IEEE Trans. Image Process. 2021, 30, 7404–7418.
- Wang, Z.; Jiang, J.; Wu, Y.; Ye, M.; Bai, X.; Satoh, S. Learning Sparse and Identity-Preserved Hidden Attributes for Person Re-Identification. IEEE Trans. Image Process. 2020, 29, 2013–2025.
- Wang, Z.; Jiang, J.; Yu, Y.; Satoh, S. Incremental Re-Identification by Cross-Direction and Cross-Ranking Adaption. IEEE Trans. Multimed. 2019, 21, 2376–2386.
- Varga, D. Full-Reference Image Quality Assessment Based on Grünwald–Letnikov Derivative, Image Gradients, and Visual Saliency. Electronics 2022, 11, 559.
- Jiang, K.; Wang, Z.; Yi, P.; Wang, G.; Gu, K.; Jiang, J. ATMFN: Adaptive-Threshold-Based Multi-Model Fusion Network for Compressed Face Hallucination. IEEE Trans. Multimed. 2020, 22, 2734–2747.
- Bosc, E.; Pepion, R.; Le Callet, P.; Koeppel, M.; Ndjiki-Nya, P.; Pressigout, M.; Morin, L. Towards a New Quality Metric for 3-D Synthesized View Assessment. IEEE J. Sel. Top. Signal Process. 2011, 5, 1332–1343.
- Tian, S.; Zhang, L.; Morin, L.; Deforges, O. A Benchmark of DIBR Synthesized View Quality Assessment Metrics on a New Database for Immersive Media Applications. IEEE Trans. Multimed. 2019, 21, 1235–1247.
- Gu, K.; Wang, S.; Yang, H.; Lin, W.; Zhai, G.; Yang, X.; Zhang, W. Saliency-Guided Quality Assessment of Screen Content Images. IEEE Trans. Multimed. 2016, 18, 1098–1110.
- Li, Q.; Lin, W.; Fang, Y. No-Reference Quality Assessment for Multiply-Distorted Images in Gradient Domain. IEEE Signal Process. Lett. 2016, 23, 541–545.
- Fang, Y.; Lin, W.; Lee, B.S.; Lau, C.T.; Chen, Z.; Lin, C.W. Bottom-Up Saliency Detection Model Based on Human Visual Sensitivity and Amplitude Spectrum. IEEE Trans. Multimed. 2012, 14, 187–198.
- Scholkopf, B.; Smola, A.; Williamson, R.; Bartlett, P. New support vector algorithms. Neural Comput. 2000, 12, 1207–1245.
- Gu, K.; Xia, Z.; Qiao, J.; Lin, W. Deep Dual-Channel Neural Network for Image-Based Smoke Detection. IEEE Trans. Multimed. 2020, 22, 311–323.
Database | Radius | Neighbors | PLCC | SRCC | RMSE | Time (s) |
---|---|---|---|---|---|---|
IRCCyN/IVC | 1 | 8 | 0.8877 | 0.8511 | 0.2264 | 53 |
IRCCyN/IVC | 1.5 | 12 | 0.9593 | 0.9301 | 0.1703 | 87 |
IRCCyN/IVC | 2 | 16 | 0.9780 | 0.9510 | 0.1184 | 125 |
IRCCyN/IVC | 3 | 24 | 0.9760 | 0.9441 | 0.1235 | 6087 |
IETR | 1 | 8 | 0.8349 | 0.8029 | 0.0925 | 132 |
IETR | 1.5 | 12 | 0.7642 | 0.7477 | 0.0880 | 195 |
IETR | 2 | 16 | 0.7770 | 0.7277 | 0.0873 | 338 |
IETR | 3 | 24 | 0.8586 | 0.8158 | 0.0824 | 8392 |
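The radius/neighbors pairs in the table above parameterize the LBP (local binary pattern) operator used for the texture-naturalness features. As an illustrative sketch only (not the authors' implementation), a basic radius-1, 8-neighbor LBP can be computed in NumPy by thresholding each pixel's circular neighbors against the center and packing the results into a binary code:

```python
import numpy as np

def lbp_8(img):
    """Basic LBP: radius 1, 8 neighbors, computed over interior pixels."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    c = img[1:-1, 1:-1]  # center pixels (border excluded)
    # 8 neighbor offsets, counter-clockwise starting from the east pixel
    offsets = [(0, 1), (-1, 1), (-1, 0), (-1, -1),
               (0, -1), (1, -1), (1, 0), (1, 1)]
    code = np.zeros_like(c, dtype=np.int64)
    for bit, (dy, dx) in enumerate(offsets):
        # Shifted view of the image aligned with the center block
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        # Set the bit where the neighbor is >= the center pixel
        code += (nb >= c).astype(np.int64) << bit
    return code
```

Larger radii (1.5, 2, 3 in the table) sample more neighbors on a wider circle with interpolation, which is why the running time grows with the neighbor count.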
Model | Type | PLCC | SRCC | RMSE |
---|---|---|---|---|
MP-PSNR | FR | 0.6174 | 0.6227 | 0.5238 |
MW-PSNR | FR | 0.5622 | 0.5757 | 0.5506 |
LOGS | FR | 0.8256 | 0.7812 | 0.3601 |
APT | NR | 0.7307 | 0.7157 | 0.4546 |
OUT | NR | 0.7678 | 0.7036 | 0.4266 |
NIQSV | NR | 0.7114 | 0.6668 | 0.4679 |
GDSIC | NR | 0.7867 | 0.7995 | 0.4000 |
MNSS | NR | 0.7704 | 0.7854 | 0.4122 |
CLGM | NR | 0.6750 | 0.6528 | 0.4620 |
Proposed | NR | 0.8877 | 0.8511 | 0.2264 |
Model | Type | PLCC | SRCC | RMSE |
---|---|---|---|---|
MP-PSNR | FR | 0.6190 | 0.5809 | 0.1947 |
MW-PSNR | FR | 0.5389 | 0.4875 | 0.2088 |
LOGS | FR | 0.6638 | 0.6679 | 0.1854 |
APT | NR | 0.4225 | 0.4141 | 0.2252 |
OUT | NR | 0.2409 | 0.2378 | 0.2406 |
NIQSV | NR | 0.2095 | 0.2190 | 0.2429 |
GDSIC | NR | 0.4338 | 0.4254 | 0.2244 |
MNSS | NR | 0.2285 | 0.3387 | 0.2333 |
CLGM | NR | 0.1146 | 0.0860 | 0.2463 |
Proposed | NR | 0.8349 | 0.8029 | 0.0925 |
Model | PLCC | SRCC | RMSE |
---|---|---|---|
Proposed | 0.4901 | 0.4408 | 0.2031 |
Model | PLCC | SRCC | RMSE |
---|---|---|---|
Proposed | 0.7489 | 0.7390 | 0.4139 |
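The PLCC, SRCC, and RMSE figures reported in the tables above are standard IQA evaluation criteria. A minimal sketch of how they can be computed with SciPy follows; note that in practice objective scores are usually first mapped onto the subjective scale with a fitted logistic regression, which is omitted here for brevity:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def evaluate(objective, subjective):
    """Return (PLCC, SRCC, RMSE) between objective scores and subjective ratings."""
    x = np.asarray(objective, dtype=float)
    y = np.asarray(subjective, dtype=float)
    plcc, _ = pearsonr(x, y)    # linear correlation: prediction accuracy
    srcc, _ = spearmanr(x, y)   # rank correlation: prediction monotonicity
    rmse = float(np.sqrt(np.mean((x - y) ** 2)))  # prediction error
    return plcc, srcc, rmse
```

Higher PLCC/SRCC and lower RMSE indicate better agreement with human judgments, which is the convention used throughout the comparison tables.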
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Tang, L.; Sun, K.; Huang, S.; Wang, G.; Jiang, K. Quality Assessment of View Synthesis Based on Visual Saliency and Texture Naturalness. Electronics 2022, 11, 1384. https://doi.org/10.3390/electronics11091384