A Dual-Branch Autoencoder Network for Underwater Low-Light Polarized Image Enhancement
Abstract
1. Introduction
- We propose a Stokes-domain underwater low-light polarized image enhancement paradigm inspired by the antagonistic relationship between orthogonal polarization measurements. It avoids the artifacts introduced by directly amplifying brightness and restores detail by exploiting this physical prior (see the Stokes-parameter sketch after this list). To the best of our knowledge, this is the first paradigm designed around the mutual constraints of the Stokes parameters, and it demonstrates superior performance compared with existing methods.
- Based on the proposed paradigm, we design a dual-branch network built on an improved autoencoder. Its GRD feature-extraction module, designed specifically for edge extraction, effectively captures details and structural information at multiple scales. We also incorporate a polarization loss function to preserve the polarization constraint relationships and prevent their disruption (a sketch of one possible formulation follows this list).
- We construct a simulation dataset based on the underwater polarization imaging model and a camera response function (CRF). To verify the generalization of the algorithm, we build an underwater polarimetric imaging system and collect a real-world dataset. Extensive experiments on the real-world dataset demonstrate the effectiveness of our approach and its superiority over other state-of-the-art methods.
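To make the antagonistic relationship concrete: from four polarizer-angle images I0, I45, I90, and I135, the linear Stokes parameters are formed from sums and differences of orthogonal (antagonistic) pairs, and the bound sqrt(S1² + S2²) ≤ S0 couples the channels. The sketch below uses only these standard textbook relations; it is illustrative and not the paper's network code.

```python
import numpy as np

def stokes_from_polarized(i0, i45, i90, i135):
    """Linear Stokes parameters from four polarizer-angle images.

    S1 and S2 are differences of antagonistic (orthogonal) pairs,
    which is the mutual-constraint structure the paradigm exploits.
    """
    s0 = (i0 + i45 + i90 + i135) / 2.0  # total intensity
    s1 = i0 - i90                       # 0°/90° antagonistic pair
    s2 = i45 - i135                     # 45°/135° antagonistic pair
    return s0, s1, s2

def dolp_aolp(s0, s1, s2, eps=1e-8):
    """Degree and angle of linear polarization; DoLP is bounded by 1."""
    dolp = np.sqrt(s1**2 + s2**2) / (s0 + eps)
    aolp = 0.5 * np.arctan2(s2, s1)
    return dolp, aolp
```

Because S1 and S2 are differences of paired measurements, a global gain on brightness largely cancels in them, which is one way to read the claim that enhancing in the Stokes domain avoids the damage caused by direct amplification.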
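The polarization loss is described only at a high level here. One plausible formulation, offered as an assumption rather than the paper's actual definition, penalizes deviation of the predicted degree of linear polarization (DoLP) from the ground truth together with violations of the physical bound DoLP ≤ 1:

```python
import torch

def polarization_loss(s_pred, s_gt, eps=1e-8):
    """Hypothetical polarization-consistency loss (sketch, not the paper's).

    s_pred, s_gt: tensors of shape (B, 3, H, W) holding (S0, S1, S2).
    """
    def dolp(s):
        return torch.sqrt(s[:, 1] ** 2 + s[:, 2] ** 2 + eps) / (s[:, 0] + eps)

    # Match the polarization structure of the ground truth.
    fidelity = torch.mean(torch.abs(dolp(s_pred) - dolp(s_gt)))
    # Hinge penalty on physically impossible DoLP > 1.
    feasibility = torch.mean(torch.clamp(dolp(s_pred) - 1.0, min=0.0))
    return fidelity + feasibility
```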
2. Related Work
2.1. Low-Light Intensity Image Enhancement
2.2. Low-Light Polarized Image Enhancement
3. Methods
3.1. Underwater Polarization Imaging Model
3.1.1. Traditional Underwater Image Model
3.1.2. Underwater Polarization Imaging Model
3.2. Polarization-Based Low-Light Image Enhancement Pipeline
3.3. Polarization-Based Low-Light Image Enhancement Network
3.3.1. Brightness Adjustment Network
3.3.2. Detail Enhancement Network
4. Experiments
4.1. Data Preparation
4.1.1. Simulation Dataset
- (1) A clear image with its paired depth map. Using Equation (1), we generate the simulated underwater image by sampling the imaging-model parameters; the parameter settings follow [31], with values drawn from [0.85, 0.95].
- (2) Paired semantic segmentation labels S. Guided by the semantic information, we generate reasonable background values from S, with parameters drawn from [0.025, 0.2] and [0.05, 0.4], respectively (a hedged simulation sketch follows this list).
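The section quotes parameter ranges but the symbols were lost, so assigning [0.025, 0.2] to the attenuation coefficient and [0.85, 0.95] to the background light below is an assumption. A minimal sketch of the widely used underwater formation model I = J·t + A∞·(1 − t), with transmission t = exp(−β·d), of the kind Section 3.1 builds on:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_underwater(clear, depth, beta=None, a_inf=None):
    """Sketch of the classic underwater degradation model
    I = J * t + A_inf * (1 - t), t = exp(-beta * depth).

    clear: (H, W, 3) float image in [0, 1]; depth: (H, W) depth map.
    The range-to-parameter mapping is assumed, not taken from the paper.
    """
    if beta is None:
        beta = rng.uniform(0.025, 0.2)   # attenuation coefficient (assumed)
    if a_inf is None:
        a_inf = rng.uniform(0.85, 0.95)  # background light (assumed)
    t = np.exp(-beta * depth)[..., None] # per-pixel transmission
    return clear * t + a_inf * (1.0 - t) # direct signal + backscatter
```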
4.1.2. Real-World Datasets
4.2. Training Details
4.3. Qualitative and Quantitative Analysis
4.3.1. Qualitative Analysis of Final Results
4.3.2. Qualitative Analysis of Intermediate Results
4.3.3. Quantitative Analysis
4.4. Ablation Study
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Schechner, Y.Y.; Karpel, N. Recovery of Underwater Visibility and Structure by Polarization Analysis. IEEE J. Ocean. Eng. 2005, 30, 570–587. [Google Scholar] [CrossRef]
- Ibrahim, H.; Kong, N.S.P. Brightness Preserving Dynamic Histogram Equalization for Image Contrast Enhancement. IEEE Trans. Consum. Electron. 2007, 53, 1752–1758. [Google Scholar] [CrossRef]
- Park, S.; Yu, S.; Moon, B.; Ko, S.; Paik, J. Low-light image enhancement using variational optimization-based retinex model. IEEE Trans. Consum. Electron. 2017, 63, 178–184. [Google Scholar] [CrossRef]
- Lore, K.G.; Akintayo, A.; Sarkar, S. LLNet: A Deep Autoencoder Approach to Natural Low-light Image Enhancement. arXiv 2016, arXiv:1511.03995. [Google Scholar] [CrossRef]
- Chen, C.; Chen, Q.; Xu, J.; Koltun, V. Learning to See in the Dark. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018. [Google Scholar]
- Zhang, Y.; Zhang, J.; Guo, X. Kindling the Darkness: A Practical Low-light Image Enhancer. arXiv 2019, arXiv:1905.04161. [Google Scholar]
- Jiang, Y.; Gong, X.; Liu, D.; Cheng, Y.; Fang, C.; Shen, X.; Yang, J.; Zhou, P.; Wang, Z. EnlightenGAN: Deep Light Enhancement without Paired Supervision. IEEE Trans. Image Process. 2021, 30, 2340–2349. [Google Scholar] [CrossRef] [PubMed]
- Guo, C.; Li, C.; Guo, J.; Loy, C.C.; Hou, J.; Kwong, S.; Cong, R. Zero-Reference Deep Curve Estimation for Low-Light Image Enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020. [Google Scholar]
- Ma, L.; Ma, T.; Liu, R.; Fan, X.; Luo, Z. Toward Fast, Flexible, and Robust Low-Light Image Enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022. [Google Scholar]
- Hu, H.; Lin, Y.; Li, X.; Qi, P.; Liu, T. IPLNet: A neural network for intensity-polarization imaging in low light. Opt. Lett. 2020, 45, 6162. [Google Scholar] [CrossRef]
- Zhou, C.; Teng, M.; Lyu, Y.; Li, S.; Xu, C.; Shi, B. Polarization-Aware Low-Light Image Enhancement. In Proceedings of the AAAI Conference on Artificial Intelligence, Washington, DC, USA, 7–14 February 2023. [Google Scholar]
- Hitam, M.S.; Awalludin, E.A.; Wan Yussof, W.N.J.H.; Bachok, Z. Mixture contrast limited adaptive histogram equalization for underwater image enhancement. In Proceedings of the International Conference on Computer Applications Technology (ICCAT), Sousse, Tunisia, 20–22 January 2013. [Google Scholar]
- Jobson, D.J.; Rahman, Z.; Woodell, G.A. A multiscale retinex for bridging the gap between color images and the human observation of scenes. IEEE Trans. Image Process. 1997, 6, 965–976. [Google Scholar] [CrossRef]
- Lv, F.; Lu, F.; Wu, J.; Lim, C. MBLLEN: Low-light Image/Video Enhancement Using CNNs. In Proceedings of the British Machine Vision Conference, Newcastle, UK, 3–6 September 2018. [Google Scholar]
- Li, C.; Guo, J.; Porikli, F.; Pang, Y. LightenNet: A Convolutional Neural Network for weakly illuminated image enhancement. Pattern Recognit. Lett. 2018, 104, 15–22. [Google Scholar] [CrossRef]
- Lim, S.; Kim, W. DSLR: Deep Stacked Laplacian Restorer for Low-Light Image Enhancement. IEEE Trans. Multimed. 2021, 23, 4272–4284. [Google Scholar] [CrossRef]
- Li, C.; Guo, C.; Loy, C.C. Learning to Enhance Low-Light Image via Zero-Reference Deep Curve Estimation. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 44, 4225–4238. [Google Scholar] [CrossRef]
- Wang, W.; Yan, D.; Wu, X.; He, W.; Chen, Z.; Yuan, X.; Li, L. Low-light image enhancement based on virtual exposure. Signal Process. Image Commun. 2023, 118, 117016. [Google Scholar] [CrossRef]
- Wei, C.; Wang, W.; Yang, W.; Liu, J. Deep Retinex Decomposition for Low-Light Enhancement. arXiv 2018, arXiv:1808.04560. [Google Scholar]
- Zhang, Y.; Guo, X.; Ma, J.; Liu, W.; Zhang, J. Beyond Brightening Low-light Images. Int. J. Comput. Vis. 2021, 129, 1013–1037. [Google Scholar] [CrossRef]
- Yang, W.; Cao, Y.; Zha, Z.J.; Zhang, J.; Xiong, Z.; Zhang, W.; Wu, F. Progressive Retinex: Mutually Reinforced Illumination-Noise Perception Network for Low-Light Image Enhancement. In Proceedings of the 27th ACM International Conference on Multimedia, Nice, France, 21–25 October 2019. [Google Scholar]
- Liu, R.; Ma, L.; Zhang, J.; Fan, X.; Luo, Z. Retinex-inspired Unrolling with Cooperative Prior Architecture Search for Low-light Image Enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021. [Google Scholar]
- Wang, M.; Li, J.; Zhang, C. Low-light image enhancement by deep learning network for improved illumination map. Comput. Vis. Image Underst. 2023, 232, 103681. [Google Scholar] [CrossRef]
- Cai, Y.; Bian, H.; Lin, J.; Wang, H.; Timofte, R.; Zhang, Y. Retinexformer: One-stage Retinex-based Transformer for Low-light Image Enhancement. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Paris, France, 1–6 October 2023. [Google Scholar]
- Zhang, J.; Luo, H.; Liang, R.; Zhou, W.; Hui, B.; Chang, Z. PCA-based denoising method for division of focal plane polarimeters. Opt. Express 2017, 25, 2391. [Google Scholar] [CrossRef]
- Ye, W.; Li, S.; Zhao, X.; Abubakar, A.; Bermak, A. A K Times Singular Value Decomposition Based Image Denoising Algorithm for DoFP Polarization Image Sensors with Gaussian Noise. IEEE Sens. J. 2018, 18, 6138–6144. [Google Scholar] [CrossRef]
- Tibbs, A.B.; Daly, I.M.; Roberts, N.W.; Bull, D.R. Denoising imaging polarimetry by adapted BM3D method. J. Opt. Soc. Am. A 2018, 35, 690. [Google Scholar] [CrossRef] [PubMed]
- Li, X.; Li, H.; Lin, Y.; Guo, J.; Yang, J.; Yue, H.; Li, K.; Li, C.; Cheng, Z.; Hu, H.; et al. Learning-based denoising for polarimetric images. Opt. Express 2020, 28, 16309. [Google Scholar] [CrossRef]
- Liu, H.; Zhang, Y.; Cheng, Z.; Zhai, J.; Hu, H. Attention-based neural network for polarimetric image denoising. Opt. Lett. 2022, 47, 2726. [Google Scholar] [CrossRef]
- He, K.; Sun, J.; Tang, X. Single Image Haze Removal Using Dark Channel Prior. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 2341–2353. [Google Scholar]
- Lu, J.; Yuan, F.; Yang, W.; Cheng, E. An Imaging Information Estimation Network for Underwater Image Color Restoration. IEEE J. Ocean. Eng. 2021, 46, 1228–1239. [Google Scholar] [CrossRef]
- Ba, Y.; Gilbert, A.; Wang, F.; Yang, J.; Chen, R.; Wang, Y.; Yan, L.; Shi, B.; Kadambi, A. Deep Shape from Polarization. In Proceedings of the Computer Vision—ECCV 2020, Glasgow, UK, 23–28 August 2020. [Google Scholar]
- Kupinski, M.K.; Bradley, C.L.; Diner, D.J.; Xu, F.; Chipman, R.A. Angle of linear polarization images of outdoor scenes. Opt. Eng. 2019, 58, 082419. [Google Scholar] [CrossRef]
- Huang, G.; Liu, Z.; Maaten, L.V.; Weinberger, K.Q. Densely Connected Convolutional Networks. arXiv 2018, arXiv:1608.06993. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. arXiv 2015, arXiv:1512.03385. [Google Scholar]
- Zhang, X.; Zhou, X.; Lin, M.; Sun, J. ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices. arXiv 2017, arXiv:1707.01083. [Google Scholar]
- Oktay, O.; Schlemper, J.; Folgoc, L.L.; Lee, M.; Heinrich, M.; Misawa, K.; Mori, K.; McDonagh, S.; Hammerla, N.Y.; Kainz, B.; et al. Attention U-Net: Learning Where to Look for the Pancreas. arXiv 2018, arXiv:1804.03999. [Google Scholar]
- Solonenko, M.G.; Mobley, C.D. Inherent optical properties of Jerlov water types. Appl. Opt. 2015, 54, 5392. [Google Scholar]
- Jiang, J.; Liu, D.; Gu, J.; Susstrunk, S. What is the space of spectral sensitivity functions for digital color cameras? In Proceedings of the IEEE Workshop on Applications of Computer Vision (WACV), Clearwater Beach, FL, USA, 15–17 January 2013. [Google Scholar]
- Sakaridis, C.; Dai, D.; Hecker, S.; Van Gool, L. Model Adaptation with Synthetic and Real Data for Semantic Dense Foggy Scene Understanding. In Proceedings of the Computer Vision—ECCV 2018, Munich, Germany, 8–14 September 2018. [Google Scholar]
- Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. arXiv 2017, arXiv:1412.6980. [Google Scholar]
- Lehtinen, J.; Munkberg, J.; Hasselgren, J.; Laine, S.; Karras, T.; Aittala, M.; Aila, T. Noise2Noise: Learning Image Restoration without Clean Data. arXiv 2018, arXiv:1803.04189. [Google Scholar]
- Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image Quality Assessment: From Error Visibility to Structural Similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [PubMed]
- Hore, A.; Ziou, D. Image Quality Metrics: PSNR vs. SSIM. In Proceedings of the 20th International Conference on Pattern Recognition (ICPR), Istanbul, Turkey, 23–26 August 2010. [Google Scholar]
| Metrics | Retinex-Net | MBLLEN | KinD | EnlightenGAN | DSLR | ZeroDCE | RUAS | SCI | Polar | Ours |
|---|---|---|---|---|---|---|---|---|---|---|
| PSNR | 9.8347 | 5.7199 | 10.2895 | 14.4324 | 6.7146 | 17.2694 | 15.8941 | 17.0511 | 20.9472 | 24.9282 |
| SSIM | 0.2041 | 0.1516 | 0.2718 | 0.2410 | 0.2446 | 0.1992 | 0.3799 | 0.2938 | 0.3241 | 0.4674 |
| Metrics | Single Branch | w/o GRD | — | Full Method |
|---|---|---|---|---|
| PSNR | 22.8461 | 21.8611 | 22.0593 | 24.9282 |
| SSIM | 0.3392 | 0.3146 | 0.3166 | 0.4674 |
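For reference, the PSNR and SSIM values reported above follow their standard definitions; a typical way to compute them (exact settings such as the data range or SSIM window may differ from those used in the paper) is:

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(pred, gt):
    """Standard PSNR/SSIM for float RGB images scaled to [0, 1]."""
    psnr = peak_signal_noise_ratio(gt, pred, data_range=1.0)
    ssim = structural_similarity(gt, pred, channel_axis=-1, data_range=1.0)
    return psnr, ssim
```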