TCRN: A Two-Step Underwater Image Enhancement Network Based on Triple-Color Space Feature Reconstruction
Abstract
1. Introduction
- We designed an end-to-end triple-color space feature extraction network coupled with a dense pixel attention module (DPM) to obtain cleaner visual characteristic information.
- We reinforced the long-range dependencies across the different color spaces from a fully connected layer perspective and utilized a group structure for spatial reconstruction.
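The triple-color space input preparation described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name and structure are hypothetical, the pixels are handled as plain tuples rather than tensors, and the LAB branch (which requires an sRGB-to-XYZ-to-LAB conversion) is omitted for brevity.

```python
import colorsys

def to_triple_color_space(rgb_pixels):
    """Given (r, g, b) tuples in [0, 1], return parallel RGB and HSV
    representations -- the kind of multi-branch input a triple-color
    space network consumes. The LAB branch is analogous but needs an
    sRGB -> XYZ -> LAB conversion, omitted here for brevity."""
    hsv_pixels = [colorsys.rgb_to_hsv(r, g, b) for (r, g, b) in rgb_pixels]
    return rgb_pixels, hsv_pixels

rgb = [(1.0, 0.0, 0.0), (0.5, 0.5, 0.5)]
_, hsv = to_triple_color_space(rgb)
print(hsv[0])  # pure red -> hue 0.0, full saturation, full value
```

Feeding each branch a different color space lets the network see complementary statistics: HSV separates chroma from intensity, while LAB decorrelates luminance from color opponency.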
2. Related Works
3. Proposed Method
3.1. Feature Extraction Module
3.2. Dense Pixel Attention Module
3.3. Pre-Fusion Module
3.4. Group Structure
3.5. Loss Function
4. Experiments
4.1. Benchmarks
4.2. Evaluation Metrics
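The full-reference metrics reported below (PSNR and SSIM) can be sketched in pure Python. This is a simplified illustration: real SSIM evaluation averages the index over local 11x11 Gaussian windows (Wang et al., 2004), whereas the `global_ssim` below computes a single global window; treat it as a sketch of the formula, not the benchmark implementation.

```python
import math

def psnr(ref, dist, max_val=255.0):
    """Peak signal-to-noise ratio between two equally sized pixel lists."""
    mse = sum((a - b) ** 2 for a, b in zip(ref, dist)) / len(ref)
    return float("inf") if mse == 0 else 10.0 * math.log10(max_val ** 2 / mse)

def global_ssim(ref, dist, max_val=255.0):
    """Single-window (global) SSIM; the standard metric averages this
    expression over local Gaussian-weighted windows."""
    n = len(ref)
    mu_x = sum(ref) / n
    mu_y = sum(dist) / n
    var_x = sum((a - mu_x) ** 2 for a in ref) / n
    var_y = sum((b - mu_y) ** 2 for b in dist) / n
    cov = sum((a - mu_x) * (b - mu_y) for a, b in zip(ref, dist)) / n
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

print(psnr([0.0, 0.0], [10.0, 0.0]))  # finite; identical inputs give inf
```

Higher is better for both: PSNR is unbounded above (infinite for identical images), while SSIM is bounded by 1.0.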
4.3. Implementation Details
4.4. Quantitative Comparisons
4.5. Qualitative Comparisons
4.6. Ablation
- 1. The designations w/o LAB, w/o HSV, and w/o LAB + HSV represent TCRN without the LAB, the HSV, and both the LAB and HSV color space features, respectively.
- 2. The designations w/o DPM, w/o PFM, and w/o GS indicate TCRN without the dense pixel attention module, the pre-fusion module, and the group structure, respectively.
- 3. w/o per loss indicates that only the l1 loss function is used.
- (1) The quantitative PSNR/SSIM scores on Test-90 and Test-120 are reported in Table 5. The full TCRN model achieved the best performance, which confirms the effectiveness of the core components.
- (2)
- (3) For module ablation, the hazy result produced by w/o DPM can be seen in Figure 8b, which verifies the contribution of the DPM to feature filtering in STEP I. In Figure 8c, w/o PFM preserved a larger area of blue artifact than the full model; we attribute this to the lack of feature interaction between the different color spaces, which prevents the model from accurately identifying information. The PFM successfully guided the interaction of the different visual characteristics. As seen in Figure 8d, the GS had a positive effect on color reconstruction.
- (4)
4.7. Complexity Analysis
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Hu, K.; Weng, C.; Zhang, Y.; Jin, J.; Xia, Q. An overview of underwater vision enhancement: From traditional methods to recent deep learning. J. Mar. Sci. Eng. 2022, 10, 241.
- Yanwen, Z.; Kai, H.; Pengsheng, W. Review of 3D reconstruction algorithms. Nanjing Xinxi Gongcheng Daxue Xuebao 2020, 12, 591–602. (In Chinese)
- Akkaynak, D.; Treibitz, T.; Shlesinger, T.; Loya, Y.; Tamir, R.; Iluz, D. What is the space of attenuation coefficients in underwater computer vision? In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4931–4940.
- Ghani, A.S.A.; Isa, N.A.M. Automatic system for improving underwater image contrast and color through recursive adaptive histogram modification. Comput. Electron. Agric. 2017, 141, 181–195.
- Buchsbaum, G. A spatial processor model for object colour perception. J. Frankl. Inst. 1980, 310, 1–26.
- Li, X.; Hou, G.; Tan, L.; Liu, W. A hybrid framework for underwater image enhancement. IEEE Access 2020, 8, 197448–197462.
- Chen, D.; Zhang, Y.; Shen, M.; Zhao, W. A Two-Stage Network Based on Transformer and Physical Model for Single Underwater Image Enhancement. J. Mar. Sci. Eng. 2023, 11, 787.
- Drews, P.; Nascimento, E.; Moraes, F.; Botelho, S.; Campos, M. Transmission estimation in underwater single images. In Proceedings of the IEEE International Conference on Computer Vision Workshops, Sydney, Australia, 1–8 December 2013; pp. 825–830.
- Drews, P.; Nascimento, E.; Botelho, S.; Campos, M. Underwater depth estimation and image restoration based on single images. IEEE Comput. Graph. Appl. 2016, 36, 24–35.
- Li, J.; Skinner, K.A.; Eustice, R.M.; Johnson-Roberson, M. WaterGAN: Unsupervised generative network to enable real-time color correction of monocular underwater images. IEEE Robot. Autom. Lett. 2017, 3, 387–394.
- Guo, Y.; Li, H.; Zhuang, P. Underwater image enhancement using a multiscale dense generative adversarial network. IEEE J. Ocean. Eng. 2019, 45, 862–870.
- Yang, M.; Hu, K.; Du, Y.; Wei, Z.; Sheng, Z.; Hu, J. Underwater image enhancement based on conditional generative adversarial network. Signal Process. Image Commun. 2020, 81, 115723.
- Li, H.; Zhuang, P. DewaterNet: A fusion adversarial real underwater image enhancement network. Signal Process. Image Commun. 2021, 95, 116248.
- Liu, X.; Gao, Z.; Chen, B.M. IPMGAN: Integrating physical model and generative adversarial network for underwater image enhancement. Neurocomputing 2021, 453, 538–551.
- Hong, L.; Wang, X.; Xiao, Z.; Zhang, G.; Liu, J. WSUIE: Weakly supervised underwater image enhancement for improved visual perception. IEEE Robot. Autom. Lett. 2021, 6, 8237–8244.
- Lyu, Z.; Peng, A.; Wang, Q.; Ding, D. An efficient learning-based method for underwater image enhancement. Displays 2022, 74, 102174.
- Qi, Q.; Zhang, Y.; Tian, F.; Wu, Q.J.; Li, K.; Luan, X.; Song, D. Underwater image co-enhancement with correlation feature matching and joint learning. IEEE Trans. Circuits Syst. Video Technol. 2021, 32, 1133–1147.
- Li, C.; Anwar, S.; Porikli, F. Underwater scene prior inspired deep underwater image and video enhancement. Pattern Recognit. 2020, 98, 107038.
- Xiao, Z.; Han, Y.; Rahardja, S.; Ma, Y. USLN: A statistically guided lightweight network for underwater image enhancement via dual-statistic white balance and multi-color space stretch. arXiv 2022, arXiv:02221.
- Li, C.; Anwar, S.; Hou, J.; Cong, R.; Guo, C.; Ren, W. Underwater image enhancement via medium transmission-guided multi-color space embedding. IEEE Trans. Image Process. 2021, 30, 4985–5000.
- Peng, L.; Zhu, C.; Bian, L. U-shape transformer for underwater image enhancement. arXiv 2022, arXiv:2111.11843.
- Xue, X.; Hao, Z.; Ma, L.; Wang, Y.; Liu, R. Joint luminance and chrominance learning for underwater image enhancement. IEEE Signal Process. Lett. 2021, 28, 818–822.
- Huang, S.; Wang, K.; Liu, H.; Chen, J.; Li, Y. Contrastive Semi-supervised Learning for Underwater Image Restoration via Reliable Bank. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 18–22 June 2023; pp. 18145–18155.
- Li, F.; Zheng, J.; Zhang, Y.-f.; Jia, W.; Wei, Q.; He, X. Cross-domain learning for underwater image enhancement. Signal Process. Image Commun. 2023, 110, 116890.
- Wang, Q.; Wu, B.; Zhu, P.; Li, P.; Zuo, W.; Hu, Q. ECA-Net: Efficient channel attention for deep convolutional neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 11534–11542.
- Pleiss, G.; Chen, D.; Huang, G.; Li, T.; Van Der Maaten, L.; Weinberger, K.Q. Memory-efficient implementation of densenets. arXiv 2017, arXiv:06990.
- Ramachandran, P.; Zoph, B.; Le, Q.V. Searching for activation functions. arXiv 2017, arXiv:05941.
- Peng, Y.-T.; Cao, K.; Cosman, P.C. Generalization of the dark channel prior for single image restoration. IEEE Trans. Image Process. 2018, 27, 2856–2868.
- Ancuti, C.O.; Ancuti, C.; De Vleeschouwer, C.; Bekaert, P. Color balance and fusion for underwater image enhancement. IEEE Trans. Image Process. 2017, 27, 379–393.
- Li, N.; Hou, G.; Liu, Y.; Pan, Z.; Tan, L. Single underwater image enhancement using integrated variational model. Digit. Signal Process. 2022, 129, 103660.
- Chen, Y.-W.; Pei, S.-C. Domain Adaptation for Underwater Image Enhancement via Content and Style Separation. IEEE Access 2022, 10, 90523–90534.
- Li, C.; Guo, C.; Ren, W.; Cong, R.; Hou, J.; Kwong, S.; Tao, D. An underwater image enhancement benchmark dataset and beyond. IEEE Trans. Image Process. 2019, 29, 4376–4389.
- Naik, A.; Swarnakar, A.; Mittal, K. Shallow-uwnet: Compressed model for underwater image enhancement (student abstract). In Proceedings of the AAAI Conference on Artificial Intelligence, Vancouver, BC, Canada, 2–9 February 2021; pp. 15853–15854.
- Islam, M.J.; Xia, Y.; Sattar, J. Fast underwater image enhancement for improved visual perception. IEEE Robot. Autom. Lett. 2020, 5, 3227–3234.
- Islam, M.J.; Luo, P.; Sattar, J. Simultaneous enhancement and super-resolution of underwater imagery for improved visual perception. arXiv 2020, arXiv:01155.
- Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
- Wang, S.; Ma, K.; Yeganeh, H.; Wang, Z.; Lin, W. A patch-structure representation method for quality assessment of contrast changed images. IEEE Signal Process. Lett. 2015, 22, 2387–2390.
- Bakurov, I.; Buzzelli, M.; Schettini, R.; Castelli, M.; Vanneschi, L. Structural similarity index (SSIM) revisited: A data-driven approach. Expert Syst. Appl. 2022, 189, 116087.
- Panetta, K.; Gao, C.; Agaian, S. Human-visual-system-inspired underwater image quality measures. IEEE J. Ocean. Eng. 2015, 41, 541–551.
- Yang, M.; Sowmya, A. An underwater color image quality evaluation metric. IEEE Trans. Image Process. 2015, 24, 6062–6071.
- Guan, J.; Zhang, W.; Gu, J.; Ren, H. No-reference blur assessment based on edge modeling. J. Vis. Commun. Image Represent. 2015, 29, 1–7.
- Qin, X.; Wang, Z.; Bai, Y.; Xie, X.; Jia, H. FFA-Net: Feature fusion attention network for single image dehazing. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; pp. 11908–11915.
- Zhou, J.; Wei, X.; Shi, J.; Chu, W.; Zhang, W. Underwater image enhancement method with light scattering characteristics. Comput. Electr. Eng. 2022, 100, 107898.
- Wang, Y.; Li, N.; Li, Z.; Gu, Z.; Zheng, H.; Zheng, B.; Sun, M. An imaging-inspired no-reference underwater color image quality assessment metric. Comput. Electr. Eng. 2018, 70, 904–913.
- Gu, K.; Tao, D.; Qiao, J.-F.; Lin, W. Learning a no-reference quality assessment model of enhanced images with big data. IEEE Trans. Neural Netw. Learn. Syst. 2017, 29, 1301–1313.
| Dataset | Train Set | Test Set (Paired) | Test Set (Unpaired) |
|---|---|---|---|
| UIEB | Train-800 | Test-90 | Test-60 |
| Uimagine | Train-3700 | Test-515 | None |
| UFO | Train-1500 | Test-120 | None |
| Dark | Train-5500 | None | Test-570 |
| Models | PCQI (Test-90, UIEB) | PSNR (Test-90) | SSIM (Test-90) | PCQI (Test-120, UFO) | PSNR (Test-120) | SSIM (Test-120) | PCQI (Test-515, Uimagine) | PSNR (Test-515) | SSIM (Test-515) |
|---|---|---|---|---|---|---|---|---|---|
| raw | 1.126 | 16.40 | 0.749 | 0.819 | 20.32 | 0.760 | 0.894 | 20.03 | 0.791 |
| UDCP | 0.942 | 12.99 | 0.608 | 0.678 | 16.82 | 0.635 | 0.732 | 16.36 | 0.634 |
| Fusion | 0.979 | 20.59 | 0.873 | 0.648 | 18.26 | 0.733 | 0.691 | 18.76 | 0.775 |
| GDCP | 0.821 | 14.18 | 0.732 | 0.496 | 14.02 | 0.634 | 0.517 | 13.33 | 0.647 |
| UIEIVM | 0.741 | 20.38 | 0.839 | 0.430 | 17.06 | 0.663 | 0.456 | 17.15 | 0.683 |
| Water-net | 0.982 | 21.13 | 0.858 | 0.661 | 19.40 | 0.749 | 0.704 | 19.70 | 0.790 |
| Shallow-uwnet | 1.060 | 16.11 | 0.727 | 0.755 | 22.16 | 0.760 | 0.800 | 22.58 | 0.793 |
| FUnIE-GAN | 0.885 | 16.72 | 0.726 | 0.662 | 21.39 | 0.741 | 0.702 | 23.41 | 0.801 |
| Uiess | 0.926 | 18.48 | 0.781 | 0.705 | 20.46 | 0.758 | 0.741 | 20.74 | 0.793 |
| TCRN | 0.941 | 22.13 | 0.905 | 0.776 | 26.75 | 0.835 | 0.778 | 22.59 | 0.794 |
| Models | UIQM (Test-60, UIEB) | UCIQE (Test-60) | EMBM (Test-60) | UIQM (Test-570, Dark) | UCIQE (Test-570) | EMBM (Test-570) |
|---|---|---|---|---|---|---|
| raw | 0.383 | 0.366 | 0.563 | 1.116 | 0.419 | 0.486 |
| UDCP | 0.313 | 0.504 | 0.531 | 1.501 | 0.486 | 0.482 |
| Fusion | 0.622 | 0.427 | 0.523 | 1.024 | 0.404 | 0.482 |
| GDCP | 0.616 | 0.445 | 0.498 | 1.610 | 0.465 | 0.485 |
| UIEIVM | 1.176 | 0.479 | 0.461 | 2.178 | 0.472 | 0.468 |
| Water-net | 0.605 | 0.432 | 0.498 | 1.067 | 0.423 | 0.476 |
| Shallow-uwnet | 0.366 | 0.341 | 0.481 | 0.919 | 0.414 | 0.442 |
| FUnIE-GAN | 0.521 | 0.415 | 0.556 | 1.017 | 0.402 | 0.471 |
| Uiess | 0.513 | 0.418 | 0.460 | 0.995 | 0.414 | 0.442 |
| TCRN | 0.655 | 0.481 | 0.476 | 0.845 | 0.361 | 0.487 |
| Models | D10 | Z23 | T6000 | T8000 | W60 | W80 | TS1 | Avg |
|---|---|---|---|---|---|---|---|---|
| raw | 22.19 | 23.26 | 23.28 | 29.08 | 22.07 | 26.17 | 24.34 | 24.34 |
| UDCP | 24.41 | 25.08 | 22.40 | 30.89 | 21.17 | 27.93 | 24.64 | 25.22 |
| Fusion | 19.63 | 18.95 | 16.46 | 19.52 | 20.19 | 24.54 | 19.34 | 19.80 |
| GDCP | 25.47 | 24.12 | 22.78 | 22.90 | 23.04 | 30.85 | 21.14 | 24.33 |
| UIEIVM | 22.64 | 21.93 | 19.75 | 22.83 | 21.86 | 23.40 | 19.35 | 21.68 |
| Water-net | 19.41 | 19.80 | 17.74 | 24.60 | 19.35 | 21.00 | 19.24 | 20.16 |
| Shallow-uwnet | 20.25 | 18.86 | 17.15 | 27.45 | 19.78 | 25.06 | 27.62 | 22.31 |
| FUnIE-GAN | 23.99 | 21.21 | 18.42 | 27.02 | 21.20 | 24.53 | 28.29 | 23.52 |
| Uiess | 19.42 | 21.31 | 17.69 | 23.54 | 17.67 | 19.63 | 21.46 | 21.10 |
| TCRN | 19.12 | 19.00 | 18.90 | 21.50 | 17.45 | 18.04 | 16.13 | 18.59 |
| Modules | Baselines | PSNR (Test-90) | SSIM (Test-90) | PSNR (Test-120) | SSIM (Test-120) |
|---|---|---|---|---|---|
| — | Full model | 22.13 | 0.91 | 26.75 | 0.84 |
| TTCN | w/o LAB | 21.75 | 0.89 | 17.80 | 0.74 |
| TTCN | w/o HSV | 21.66 | 0.89 | 26.31 | 0.83 |
| TTCN | w/o LAB + HSV | 21.29 | 0.88 | 26.66 | 0.83 |
| MA | w/o DPM | 21.07 | 0.89 | 26.65 | 0.83 |
| MA | w/o PFM | 21.92 | 0.89 | 26.63 | 0.83 |
| MA | w/o GS | 22.01 | 0.90 | 26.63 | 0.83 |
| LOSS | w/o per loss | 21.76 | 0.90 | 26.53 | 0.83 |
| | Water-Net | Shallow-uwnet | FUnIE-GAN | Uiess | Ours |
|---|---|---|---|---|---|
| Parameters (M) | 1.23 | 0.24 | 7.73 | 2.23 | 22.37 |
| Runtime per image (s) | 0.52 | 0.11 | 0.48 | 0.62 | 0.51 |
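Parameter counts like those reported above (in millions, M) can be reproduced layer-by-layer from the standard convolution formula. The layer shapes below are hypothetical, chosen only to illustrate the arithmetic; they are not the actual TCRN architecture.

```python
def conv2d_params(c_in, c_out, k, bias=True):
    """Parameters of a standard 2-D convolution:
    k * k * c_in * c_out weights, plus c_out bias terms."""
    return k * k * c_in * c_out + (c_out if bias else 0)

# Illustrative (hypothetical) three-layer stack, NOT the real TCRN:
layers = [(3, 64, 3), (64, 64, 3), (64, 3, 3)]  # (c_in, c_out, kernel)
total = sum(conv2d_params(ci, co, k) for ci, co, k in layers)
print(f"{total / 1e6:.4f} M parameters")  # prints "0.0405 M parameters"
```

Summing this expression over every convolution (plus any fully connected layers) yields the totals in the table; runtime, by contrast, must be measured empirically on fixed hardware.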
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Lin, S.; Zhang, R.; Ning, Z.; Luo, J. TCRN: A Two-Step Underwater Image Enhancement Network Based on Triple-Color Space Feature Reconstruction. J. Mar. Sci. Eng. 2023, 11, 1221. https://doi.org/10.3390/jmse11061221