No-Reference Quality Assessment Based on Dual-Channel Convolutional Neural Network for Underwater Image Enhancement
Abstract
1. Introduction
2. Proposed Method
2.1. Image Preprocessing
2.1.1. Retinex Decomposition
2.1.2. Patch Partition and Local Normalization
2.2. Feature Extraction Module
2.3. Attention Module
2.3.1. Channel Attention
2.3.2. Spatial Attention
2.4. Quality Regression Module
3. Experimental Results
3.1. Experimental Setups
3.1.1. Benchmark Datasets
3.1.2. Evaluation Protocols and Performance Criteria
3.2. Performance Evaluation
3.2.1. Performance Comparisons on SAUD
3.2.2. Performance Comparisons on UIQE
3.2.3. Model Efficiency
3.3. Ablation Experiment
4. Discussion
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
1. Xue, C.; Liu, Q.; Huang, Y.; Cheng, E.; Yuan, F. A Dual-Branch Autoencoder Network for Underwater Low-Light Polarized Image Enhancement. Remote Sens. 2024, 16, 1134.
2. Zhang, W.; Li, X.; Xu, S.; Li, X.; Yang, Y.; Xu, D.; Liu, T.; Hu, H. Underwater Image Restoration via Adaptive Color Correction and Contrast Enhancement Fusion. Remote Sens. 2023, 15, 4699.
3. Kang, Y.; Jiang, Q.; Li, C.; Ren, W.; Liu, H.; Wang, P. A perception-aware decomposition and fusion framework for underwater image enhancement. IEEE Trans. Circuits Syst. Video Technol. 2022, 33, 988–1002.
4. Zhou, J.; Pang, L.; Zhang, D.; Zhang, W. Underwater image enhancement method via multi-interval subhistogram perspective equalization. IEEE J. Ocean. Eng. 2023, 48, 474–488.
5. Lu, Y.; Yang, D.; Gao, Y.; Liu, R.W.; Liu, J.; Guo, Y. AoSRNet: All-in-One Scene Recovery Networks via Multi-knowledge Integration. Knowl. Based Syst. 2024, 294, 111786.
6. Song, W.; Wang, Y.; Huang, D.; Liotta, A.; Perra, C. Enhancement of underwater images with statistical model of background light and optimization of transmission map. IEEE Trans. Broadcast. 2020, 66, 153–169.
7. Li, C.-Y.; Guo, J.-C.; Cong, R.-M.; Pang, Y.-W.; Wang, B. Underwater image enhancement by dehazing with minimum information loss and histogram distribution prior. IEEE Trans. Image Process. 2016, 25, 5664–5677.
8. Wang, Z.; Li, C.; Mo, Y.; Shang, S. RCA-CycleGAN: Unsupervised underwater image enhancement using Red Channel attention optimized CycleGAN. Displays 2023, 76, 102359.
9. Abdul Ghani, A.S.; Mat Isa, N.A. Underwater image quality enhancement through composition of dual-intensity images and Rayleigh-stretching. SpringerPlus 2014, 3, 757.
10. Fu, X.; Zhuang, P.; Huang, Y.; Liao, Y.; Zhang, X.-P.; Ding, X. A Retinex-based enhancing approach for single underwater image. In Proceedings of the 2014 IEEE International Conference on Image Processing (ICIP), Paris, France, 27–30 October 2014; pp. 4572–4576.
11. Huang, D.; Wang, Y.; Song, W.; Sequeira, J.; Mavromatis, S. Shallow-water image enhancement using relative global histogram stretching based on adaptive parameter acquisition. In Proceedings of the MultiMedia Modeling: 24th International Conference, MMM 2018, Bangkok, Thailand, 5–7 February 2018; Proceedings, Part I; pp. 453–465.
12. Fu, X.; Fan, Z.; Ling, M.; Huang, Y.; Ding, X. Two-step approach for single underwater image enhancement. In Proceedings of the 2017 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS), Xiamen, China, 6–9 November 2017; pp. 789–794.
13. Drews, P.L.; Nascimento, E.R.; Botelho, S.S.; Campos, M.F.M. Underwater depth estimation and image restoration based on single images. IEEE Comput. Graph. Appl. 2016, 36, 24–35.
14. Li, C.; Anwar, S.; Porikli, F. Underwater scene prior inspired deep underwater image and video enhancement. Pattern Recognit. 2020, 98, 107038.
15. Fu, X.; Cao, X. Underwater image enhancement with global–local networks and compressed-histogram equalization. Signal Process. Image Commun. 2020, 86, 115892.
16. Ding, D.; Gan, S.; Chen, L.; Wang, B. Learning-based underwater image enhancement: An efficient two-stream approach. Displays 2023, 76, 102337.
17. Li, C.; Guo, C.; Ren, W.; Cong, R.; Hou, J.; Kwong, S.; Tao, D. An underwater image enhancement benchmark dataset and beyond. IEEE Trans. Image Process. 2019, 29, 4376–4389.
18. Galdran, A.; Pardo, D.; Picón, A.; Alvarez-Gila, A. Automatic red-channel underwater image restoration. J. Vis. Commun. Image Represent. 2015, 26, 132–145.
19. Peng, Y.-T.; Cosman, P.C. Underwater image restoration based on image blurriness and light absorption. IEEE Trans. Image Process. 2017, 26, 1579–1594.
20. Zhu, J.-Y.; Park, T.; Isola, P.; Efros, A.A. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2223–2232.
21. Fabbri, C.; Islam, M.J.; Sattar, J. Enhancing underwater imagery using generative adversarial networks. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, QLD, Australia, 21–25 May 2018; pp. 7159–7165.
22. Li, C.; Guo, J.; Guo, C. Emerging from water: Underwater image color correction based on weakly supervised color transfer. IEEE Signal Process. Lett. 2018, 25, 323–327.
23. Li, H.; Li, J.; Wang, W. A fusion adversarial underwater image enhancement network with a public test dataset. arXiv 2019, arXiv:1906.06819.
24. Wu, S.; Luo, T.; Jiang, G.; Yu, M.; Xu, H.; Zhu, Z.; Song, Y. A two-stage underwater enhancement network based on structure decomposition and characteristics of underwater imaging. IEEE J. Ocean. Eng. 2021, 46, 1213–1227.
25. Wang, B.; Xu, H.; Jiang, G.; Yu, M.; Chen, Y.; Ding, L.; Zhang, X.; Luo, T. Underwater image co-enhancement based on physical-guided transformer interaction. Displays 2023, 79, 102505.
26. Zhai, G.; Min, X. Perceptual image quality assessment: A survey. Sci. China Inf. Sci. 2020, 63, 211301.
27. Zhang, C.; Huang, Z.; Liu, S.; Xiao, J. Dual-channel multi-task CNN for no-reference screen content image quality assessment. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 5011–5025.
28. Moorthy, A.K.; Bovik, A.C. A two-step framework for constructing blind image quality indices. IEEE Signal Process. Lett. 2010, 17, 513–516.
29. Moorthy, A.K.; Bovik, A.C. Blind image quality assessment: From natural scene statistics to perceptual quality. IEEE Trans. Image Process. 2011, 20, 3350–3364.
30. Mittal, A.; Moorthy, A.K.; Bovik, A.C. No-reference image quality assessment in the spatial domain. IEEE Trans. Image Process. 2012, 21, 4695–4708.
31. Li, Q.; Lin, W.; Fang, Y. No-reference quality assessment for multiply-distorted images in gradient domain. IEEE Signal Process. Lett. 2016, 23, 541–545.
32. Liu, L.; Liu, B.; Huang, H.; Bovik, A.C. No-reference image quality assessment based on spatial and spectral entropies. Signal Process. Image Commun. 2014, 29, 856–863.
33. Min, X.; Zhai, G.; Gu, K.; Liu, Y.; Yang, X. Blind image quality estimation via distortion aggravation. IEEE Trans. Broadcast. 2018, 64, 508–517.
34. Kang, L.; Ye, P.; Li, Y.; Doermann, D. Convolutional neural networks for no-reference image quality assessment. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 1733–1740.
35. Yang, S.; Jiang, Q.; Lin, W.; Wang, Y. SGDNet: An end-to-end saliency-guided deep neural network for no-reference image quality assessment. In Proceedings of the 27th ACM International Conference on Multimedia, New York, NY, USA, 21–25 October 2019; pp. 1383–1391.
36. Liu, X.; Van De Weijer, J.; Bagdanov, A.D. RankIQA: Learning from rankings for no-reference image quality assessment. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 1040–1049.
37. Ke, J.; Wang, Q.; Wang, Y.; Milanfar, P.; Yang, F. MUSIQ: Multi-scale image quality transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 5148–5157.
38. Pan, Z.; Yuan, F.; Lei, J.; Fang, Y.; Shao, X.; Kwong, S. VCRNet: Visual compensation restoration network for no-reference image quality assessment. IEEE Trans. Image Process. 2022, 31, 1613–1627.
39. Panetta, K.; Gao, C.; Agaian, S. Human-visual-system-inspired underwater image quality measures. IEEE J. Ocean. Eng. 2015, 41, 541–551.
40. Yang, M.; Sowmya, A. An underwater color image quality evaluation metric. IEEE Trans. Image Process. 2015, 24, 6062–6071.
41. Yang, N.; Zhong, Q.; Li, K.; Cong, R.; Zhao, Y.; Kwong, S. A reference-free underwater image quality assessment metric in frequency domain. Signal Process. Image Commun. 2021, 94, 116218.
42. Li, W.; Lin, C.; Luo, T.; Li, H.; Xu, H.; Wang, L. Subjective and objective quality evaluation for underwater image enhancement and restoration. Symmetry 2022, 14, 558.
43. Jiang, Q.; Gu, Y.; Li, C.; Cong, R.; Shao, F. Underwater image enhancement quality evaluation: Benchmark dataset and objective metric. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 5959–5974.
44. Liu, Y.; Gu, K.; Cao, J.; Wang, S.; Zhai, G.; Dong, J.; Kwong, S. UIQI: A comprehensive quality evaluation index for underwater images. IEEE Trans. Multimed. 2023, 13, 600–612.
45. Yi, X.; Jiang, Q.; Zhou, W. No-reference quality assessment of underwater image enhancement. Displays 2024, 81, 102586.
46. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial networks. Commun. ACM 2020, 63, 139–144.
47. Wang, X.; Jiang, H.; Mu, M.; Dong, Y. A trackable multi-domain collaborative generative adversarial network for rotating machinery fault diagnosis. Mech. Syst. Signal Process. 2025, 224, 111950.
48. Wang, X.; Jiang, H.; Wu, Z.; Yang, Q. Adaptive variational autoencoding generative adversarial networks for rolling bearing fault diagnosis. Adv. Eng. Inform. 2023, 56, 102027.
49. Fu, Z.; Fu, X.; Huang, Y.; Ding, X. Twice mixing: A rank learning based quality assessment approach for underwater image enhancement. Signal Process. Image Commun. 2022, 102, 116622.
50. Guo, C.; Wu, R.; Jin, X.; Han, L.; Zhang, W.; Chai, Z.; Li, C. Underwater ranker: Learn which is better and how to be better. In Proceedings of the AAAI Conference on Artificial Intelligence, Washington, DC, USA, 7–14 February 2023; pp. 702–709.
51. Dong, Y.; Jiang, H.; Wu, Z.; Yang, Q.; Liu, Y. Digital twin-assisted multiscale residual-self-attention feature fusion network for hypersonic flight vehicle fault diagnosis. Reliab. Eng. Syst. Saf. 2023, 235, 109253.
52. Dong, Y.; Jiang, H.; Liu, Y.; Yi, Z. Global wavelet-integrated residual frequency attention regularized network for hypersonic flight vehicle fault diagnosis with imbalanced data. Eng. Appl. Artif. Intell. 2024, 132, 107968.
53. Wei, C.; Wang, W.; Yang, W.; Liu, J. Deep Retinex decomposition for low-light enhancement. arXiv 2018, arXiv:1808.04560.
54. Woo, S.; Park, J.; Lee, J.-Y.; Kweon, I.S. CBAM: Convolutional block attention module. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 3–19.
55. Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L. PyTorch: An imperative style, high-performance deep learning library. Adv. Neural Inf. Process. Syst. 2019, 32, 8024–8035.
56. Wang, Y.; Li, N.; Li, Z.; Gu, Z.; Zheng, H.; Zheng, B.; Sun, M. An imaging-inspired no-reference underwater color image quality assessment metric. Comput. Electr. Eng. 2018, 70, 904–913.
Performance comparison on the SAUD dataset in terms of SROCC, KROCC, and PLCC.

| Category | Method | SROCC | KROCC | PLCC |
|---|---|---|---|---|
| In-air IQA | DIIVINE [29] | 0.6198 | 0.4533 | 0.6487 |
| | BRISQUE [30] | 0.5866 | 0.4244 | 0.6066 |
| | GLBP [31] | 0.4583 | 0.3218 | 0.5011 |
| | SSEQ [32] | 0.5475 | 0.4249 | 0.7417 |
| | BMPRI [33] | 0.5209 | 0.3939 | 0.7326 |
| | CNN-IQA [34] | 0.7722 | 0.5799 | 0.7594 |
| | MUSIQ [37] | 0.6361 | 0.4539 | 0.6165 |
| | VCRNet [38] | 0.6829 | 0.5054 | 0.6454 |
| UIQA | UIQM [39] | 0.4085 | 0.3098 | 0.6957 |
| | UCIQE [40] | 0.5184 | 0.3863 | 0.7219 |
| | CCF [56] | 0.4640 | 0.3080 | 0.4791 |
| | FDUM [41] | 0.1947 | 0.1321 | 0.1520 |
| | NUIQ [43] | 0.7900 | 0.6555 | 0.8679 |
| | Twice-Mixing [49] | 0.4729 | 0.3295 | 0.4428 |
| | Uranker [50] | 0.7366 | 0.5636 | 0.7252 |
| | UIQI [44] | 0.4901 | 0.3447 | 0.5695 |
| | Proposed | 0.8741 | 0.6971 | 0.8744 |
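For readers who want to reproduce this style of evaluation, the three criteria reported in these tables (SROCC, KROCC, and PLCC) can be computed as in the following sketch. It assumes two arrays, objective scores predicted by a quality model and subjective MOS values, and optionally applies the five-parameter logistic mapping commonly used in IQA protocols before computing PLCC; the function and variable names are illustrative and not taken from the paper's code.

```python
import numpy as np
from scipy import stats
from scipy.optimize import curve_fit

def logistic5(x, b1, b2, b3, b4, b5):
    """Five-parameter logistic mapping commonly used in IQA evaluation."""
    return b1 * (0.5 - 1.0 / (1.0 + np.exp(b2 * (x - b3)))) + b4 * x + b5

def correlation_criteria(pred, mos, fit_logistic=True):
    """Return (SROCC, KROCC, PLCC) between predicted scores and subjective MOS."""
    pred = np.asarray(pred, dtype=float)
    mos = np.asarray(mos, dtype=float)
    srocc = stats.spearmanr(pred, mos)[0]   # rank-order correlation (monotonicity)
    krocc = stats.kendalltau(pred, mos)[0]  # rank-order correlation (pairwise agreement)
    mapped = pred
    if fit_logistic:
        p0 = [mos.max() - mos.min(), 1.0, pred.mean(), 0.0, mos.mean()]
        try:
            params, _ = curve_fit(logistic5, pred, mos, p0=p0, maxfev=10000)
            mapped = logistic5(pred, *params)
        except RuntimeError:
            pass  # fall back to raw predictions if the fit does not converge
    plcc = stats.pearsonr(mapped, mos)[0]   # linear accuracy after the mapping
    return srocc, krocc, plcc
```

For instance, correlation_criteria(model_scores, mos) returns the three values reported per row in the tables above; higher values indicate closer agreement with human judgments.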
Performance comparison on the UIQE dataset in terms of SROCC, KROCC, and PLCC.

| Category | Method | SROCC | KROCC | PLCC |
|---|---|---|---|---|
| In-air IQA | DIIVINE [29] | 0.1278 | 0.1084 | 0.0997 |
| | BRISQUE [30] | 0.7278 | 0.5000 | 0.7507 |
| | GLBP [31] | 0.5870 | 0.4150 | 0.6236 |
| | SSEQ [32] | 0.6580 | 0.4695 | 0.6772 |
| | BMPRI [33] | 0.7200 | 0.5399 | 0.7380 |
| | CNN-IQA [34] | 0.7840 | 0.5849 | 0.7765 |
| | MUSIQ [37] | 0.7497 | 0.5654 | 0.7383 |
| | VCRNet [38] | 0.8727 | 0.6870 | 0.8712 |
| UIQA | UIQM [39] | 0.1556 | 0.1984 | 0.3112 |
| | UCIQE [40] | 0.2838 | 0.1381 | 0.3851 |
| | CCF [56] | 0.2556 | 0.1517 | 0.3010 |
| | FDUM [41] | 0.2733 | 0.2474 | 0.1759 |
| | NUIQ [43] | 0.4433 | 0.3067 | 0.4023 |
| | UIQEI [40] | 0.8568 | 0.6456 | 0.8705 |
| | Twice-Mixing [49] | 0.5690 | 0.4142 | 0.5506 |
| | Uranker [50] | 0.8188 | 0.6504 | 0.8172 |
| | UIQI [44] | 0.7131 | 0.5157 | 0.7270 |
| | Proposed | 0.8850 | 0.7095 | 0.8765 |
| Method | Time/s |
|---|---|
| DIIVINE | 2.0374 |
| BRISQUE | 0.0134 |
| BMPRI | 0.2068 |
| CNN-IQA | 0.0572 |
| UCIQE | 0.0186 |
| UIQM | 0.0500 |
| CCF | 0.1060 |
| FDUM | 0.2550 |
| UIQI | 0.2358 |
| Twice-Mixing | 0.1065 |
| Uranker | 0.0786 |
| Proposed | 0.0719 |
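The running times above compare how quickly each metric scores an image. As a rough illustration (not the paper's benchmarking code), average per-image runtime can be measured as follows, assuming a generic score_fn callable that wraps the metric being timed.

```python
import time
import numpy as np

def average_runtime(score_fn, images, warmup=2):
    """Average wall-clock time (seconds) to score one image with score_fn."""
    for img in images[:warmup]:      # warm-up runs (caches, library/GPU initialization)
        score_fn(img)
    start = time.perf_counter()
    for img in images:
        score_fn(img)
    return (time.perf_counter() - start) / len(images)

# Illustrative use with a dummy scorer and random "images"
if __name__ == "__main__":
    dummy_images = [np.random.rand(256, 256, 3) for _ in range(20)]
    dummy_score = lambda img: float(img.mean())   # stand-in for an IQA model
    print(f"{average_runtime(dummy_score, dummy_images):.4f} s per image")
```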
| Model | SROCC | PLCC |
|---|---|---|
| Model-1 | 0.8256 | 0.8177 |
| Model-2 | 0.8329 | 0.8242 |
| Model-3 | 0.8119 | 0.8003 |
| Proposed | 0.8850 | 0.8765 |
| Model | SROCC | PLCC |
|---|---|---|
| No_FFM | 0.8768 | 0.8714 |
| No_Attention | 0.8743 | 0.8697 |
| Proposed | 0.8850 | 0.8765 |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Hu, R.; Luo, T.; Jiang, G.; Lin, Z.; He, Z. No-Reference Quality Assessment Based on Dual-Channel Convolutional Neural Network for Underwater Image Enhancement. Electronics 2024, 13, 4451. https://doi.org/10.3390/electronics13224451