Self-Supervised Non-Uniform Low-Light Image Enhancement Combining Image Inversion and Exposure Fusion
Abstract
1. Introduction
- (1) We propose an image exposure enhancement network that achieves impressive image quality by designing loss functions for self-supervised illumination enhancement and noise smoothing.
- (2) We propose a three-branch asymmetric fusion network as the MEF model, which reconstructs details and fuses images more efficiently. Experiments show that it handles image overexposure more effectively than networks guided by attention mechanisms.
- (3) We build a self-supervised image enhancement framework that combines image inversion and exposure fusion. The framework is trained on non-uniform low-light images alone, which eliminates the need for carefully designed data sets and makes it better suited to practical applications.
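The third contribution, combining image inversion with exposure fusion, can be illustrated with a minimal per-pixel sketch. This is not the authors' implementation: the gamma brightening and the mid-grey fusion weights below are illustrative stand-ins for the enhancement network and the MEF model, showing only why inverting a low-light image yields a useful second "exposure" to fuse.

```python
# Sketch of the inversion + exposure-fusion idea on [0, 1] intensities.
# Inverting a low-light image turns under-exposed regions into over-exposed
# ones, so enhancing the original and the inverted image produces a pseudo
# multi-exposure pair that a fusion stage can merge.

def invert(p: float) -> float:
    """Invert an intensity in [0, 1]; dark regions become bright."""
    return 1.0 - p

def enhance(p: float, gamma: float = 0.5) -> float:
    """Placeholder brightening (gamma < 1 lifts dark values)."""
    return p ** gamma

def fuse(a: float, b: float) -> float:
    """Toy well-exposedness fusion: weights peak at mid-grey (0.5),
    a common multi-exposure-fusion heuristic."""
    wa = 1.0 - abs(a - 0.5) * 2.0
    wb = 1.0 - abs(b - 0.5) * 2.0
    total = (wa + wb) or 1e-6  # avoid division by zero
    return (wa * a + wb * b) / total

def enhance_nonuniform(img):
    """Enhance a non-uniform low-light image given as a flat list."""
    bright = [enhance(p) for p in img]                  # lifts dark regions
    dimmed = [invert(enhance(invert(p))) for p in img]  # tames bright regions
    return [fuse(a, b) for a, b in zip(bright, dimmed)]

dark_and_bright = [0.05, 0.2, 0.5, 0.9]
print([round(p, 3) for p in enhance_nonuniform(dark_and_bright)])
```

Note how the dark pixel (0.05) is lifted while the bright pixel (0.9) is pulled down, which is the behavior needed for non-uniform illumination.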
2. Related Work
2.1. Low-Light Image Enhancement Methods
2.2. Multi-Exposure Image Fusion Methods
3. Proposed Method
3.1. Enhancement and Denoising Network
3.2. Three-Branch Asymmetric Exposure Fusion Network
4. Experiments and Analysis
4.1. Experimental Settings
4.2. Main Experiments
4.2.1. Qualitative Comparison
4.2.2. Quantitative Comparison
4.3. Ablation Experiment
4.3.1. Contribution of EENet
4.3.2. Contribution of TAFNet
4.4. Effect of Parameters of the Loss Functions
4.4.1. Parametric Analysis of the Loss Function of EENet
4.4.2. Parametric Analysis of the Loss Function of TAFNet
4.5. Effect of Data Set
4.5.1. Training on Different Data Sets
4.5.2. Testing in Various Challenging Environments
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Zhou, B.; Krähenbühl, P. Cross-view transformers for real-time map-view semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 13760–13769.
- Wang, C.-Y.; Bochkovskiy, A.; Liao, H.-Y.M. YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 18–22 June 2023; pp. 7464–7475.
- Yu, F.; Chen, H.; Wang, X.; Xian, W.; Chen, Y.; Liu, F.; Madhavan, V.; Darrell, T. BDD100K: A diverse driving dataset for heterogeneous multitask learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 16–20 June 2020; pp. 2636–2645.
- Pizer, S.M.; Amburn, E.P.; Austin, J.D.; Cromartie, R.; Geselowitz, A.; Greer, T.; ter Haar Romeny, B.; Zimmerman, J.B.; Zuiderveld, K. Adaptive histogram equalization and its variations. Comput. Vis. Graph. Image Process. 1987, 39, 355–368.
- Abdullah-Al-Wadud, M.; Kabir, M.H.; Dewan, M.A.A.; Chae, O. A dynamic histogram equalization for image contrast enhancement. IEEE Trans. Consum. Electron. 2007, 53, 593–600.
- Reza, A.M. Realization of the contrast limited adaptive histogram equalization (CLAHE) for real-time image enhancement. J. VLSI Signal Process. Syst. Signal Image Video Technol. 2004, 38, 35–44.
- Rahman, Z.-U.; Jobson, D.J.; Woodell, G.A. Retinex processing for automatic image enhancement. J. Electron. Imaging 2004, 13, 100–110.
- Brainard, D.H.; Wandell, B.A. Analysis of the Retinex theory of color vision. JOSA A 1986, 3, 1651–1661.
- Jobson, D.J.; Rahman, Z.-U.; Woodell, G.A. A multiscale Retinex for bridging the gap between color images and the human observation of scenes. IEEE Trans. Image Process. 1997, 6, 965–976.
- Zhang, Y.; Guo, X.; Ma, J.; Liu, W.; Zhang, J. Beyond Brightening Low-Light Images. Int. J. Comput. Vis. 2021, 129, 1013–1037.
- Zhang, Y.; Zhang, J.; Guo, X. Kindling the darkness: A practical low-light image enhancer. In Proceedings of the 27th ACM International Conference on Multimedia, Nice, France, 21–25 October 2019; pp. 1632–1640.
- Wei, C.; Wang, W.; Yang, W.; Liu, J. Deep Retinex decomposition for low-light enhancement. arXiv 2018, arXiv:1808.04560.
- Zhang, Y.; Di, X.; Zhang, B.; Wang, C. Self-supervised image enhancement network: Training with low light images only. arXiv 2020, arXiv:2002.11300.
- Zhang, Y.; Di, X.; Zhang, B.; Li, Q.; Yan, S.; Wang, C. Self-supervised low light image enhancement and denoising. arXiv 2021, arXiv:2103.00832.
- Zhang, F.; Shao, Y.; Sun, Y.; Zhu, K.; Gao, C.; Sang, N. Unsupervised low-light image enhancement via histogram equalization prior. arXiv 2021, arXiv:2112.01766.
- Jiang, Y.; Gong, X.; Liu, D.; Cheng, Y.; Fang, C.; Shen, X.; Yang, J.; Zhou, P.; Wang, Z. EnlightenGAN: Deep light enhancement without paired supervision. IEEE Trans. Image Process. 2021, 30, 2340–2349.
- Ni, Z.; Yang, W.; Wang, S.; Ma, L.; Kwong, S. Towards unsupervised deep image enhancement with generative adversarial network. IEEE Trans. Image Process. 2020, 29, 9140–9151.
- Wang, R.; Jiang, B.; Yang, C.; Li, Q.; Zhang, B. MAGAN: Unsupervised low-light image enhancement guided by mixed-attention. Big Data Min. Anal. 2022, 5, 110–119.
- Fu, Y.; Hong, Y.; Chen, L.; You, S. LE-GAN: Unsupervised low-light image enhancement network using attention module and identity invariant loss. Knowl. Based Syst. 2022, 240, 108010.
- Xu, H.; Ma, J.; Zhang, X.-P. MEF-GAN: Multi-exposure image fusion via generative adversarial networks. IEEE Trans. Image Process. 2020, 29, 7203–7216.
- Han, D.; Li, L.; Guo, X.; Ma, J. Multi-exposure image fusion via deep perceptual enhancement. Inf. Fusion 2022, 79, 248–262.
- Liu, J.; Wu, G.; Luan, J.; Jiang, Z.; Liu, R.; Fan, X. HoLoCo: Holistic and local contrastive learning network for multi-exposure image fusion. Inf. Fusion 2023, 95, 237–249.
- Li, M.; Liu, J.; Yang, W.; Sun, X.; Guo, Z. Structure-revealing low-light image enhancement via robust Retinex model. IEEE Trans. Image Process. 2018, 27, 2828–2841.
- Wang, S.; Zheng, J.; Hu, H.-M.; Li, B. Naturalness preserved enhancement algorithm for non-uniform illumination images. IEEE Trans. Image Process. 2013, 22, 3538–3548.
- Fu, X.; Zeng, D.; Huang, Y.; Liao, Y.; Ding, X.; Paisley, J. A fusion-based enhancing method for weakly illuminated images. Signal Process. 2016, 129, 82–96.
- Guo, X.; Li, Y.; Ling, H. LIME: Low-light image enhancement via illumination map estimation. IEEE Trans. Image Process. 2016, 26, 982–993.
- Wu, W.; Weng, J.; Zhang, P.; Wang, X.; Yang, W.; Jiang, J. URetinex-Net: Retinex-based deep unfolding network for low-light image enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 5901–5910.
- Guo, C.; Li, C.; Guo, J.; Loy, C.C.; Hou, J.; Kwong, S.; Cong, R. Zero-reference deep curve estimation for low-light image enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 16–20 June 2020; pp. 1780–1789.
- Ma, L.; Ma, T.; Liu, R.; Fan, X.; Luo, Z. Toward fast, flexible, and robust low-light image enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 5637–5646.
- Mertens, T.; Kautz, J.; Van Reeth, F. Exposure fusion: A simple and practical alternative to high dynamic range photography. Comput. Graph. Forum 2009, 28, 161–171.
- Ram Prabhakar, K.; Sai Srikar, V.; Venkatesh Babu, R. DeepFuse: A deep unsupervised approach for exposure fusion with extreme exposure image pairs. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 4714–4722.
- Xu, H.; Ma, J.; Le, Z.; Jiang, J.; Guo, X. FusionDN: A unified densely connected network for image fusion. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; pp. 12484–12491.
- Xu, H.; Ma, J.; Jiang, J.; Guo, X.; Ling, H. U2Fusion: A unified unsupervised image fusion network. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 44, 502–518.
- Zhang, L.; Liu, X.; Learned-Miller, E.; Guan, H. SID-NISM: A self-supervised low-light image enhancement framework. arXiv 2020, arXiv:2012.08707.
- Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556.
- Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, 5–9 October 2015; pp. 234–241.
- Zhu, M.; Pan, P.; Chen, W.; Yang, Y. EEMEFN: Low-light image enhancement via edge-enhanced multi-exposure fusion network. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; pp. 13106–13113.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
- Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
- Johnson, J.; Alahi, A.; Fei-Fei, L. Perceptual losses for real-time style transfer and super-resolution. In Proceedings of the Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, 11–14 October 2016; pp. 694–711.
- Loh, Y.P.; Chan, C.S. Getting to know low-light images with the exclusively dark dataset. Comput. Vis. Image Underst. 2019, 178, 30–42.
- Lee, C.; Lee, C.; Kim, C.-S. Contrast enhancement based on layered difference representation. In Proceedings of the 2012 19th IEEE International Conference on Image Processing, Orlando, FL, USA, 30 September–3 October 2012; pp. 965–968.
- Ma, K.; Zeng, K.; Wang, Z. Perceptual quality assessment for multi-exposure image fusion. IEEE Trans. Image Process. 2015, 24, 3345–3356.
- Vasileios Vonikakis Dataset. Available online: https://sites.google.com/site/vonikakis/datasets (accessed on 16 June 2021).
- Mittal, A.; Soundararajan, R.; Bovik, A.C. Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 2012, 20, 209–212.
- Mittal, A.; Moorthy, A.K.; Bovik, A.C. No-reference image quality assessment in the spatial domain. IEEE Trans. Image Process. 2012, 21, 4695–4708.
- Hore, A.; Ziou, D. Image quality metrics: PSNR vs. SSIM. In Proceedings of the 2010 20th International Conference on Pattern Recognition, Istanbul, Turkey, 23–26 August 2010; pp. 2366–2369.
- Deng, J.; Dong, W.; Socher, R.; Li, L.-J.; Li, K.; Fei-Fei, L. ImageNet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255.
- Yuan, Y.; Yang, W.; Ren, W.; Liu, J.; Scheirer, W.J.; Wang, Z. UG2+ Track 2: A collective benchmark effort for evaluating and advancing image understanding in poor visibility environments. arXiv 2019, arXiv:1904.04474.
| Module | Encoder m | Encoder n | Decoder m | Decoder n |
|---|---|---|---|---|
| 1 | 32 | 64 | 64 | 32 |
| 2 | 64 | 128 | 128 | 64 |
| 3 | 128 | 256 | 256 | 128 |
| 4 | 256 | 512 | 512 | 256 |
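The channel configuration in the table above (input channels m and output channels n per module) follows a standard doubling/mirroring pattern. A small sketch that regenerates it; the base width of 32 and the count of four modules are read off the table, not from any released code:

```python
# Regenerate the (module, encoder m, encoder n, decoder m, decoder n) rows:
# the encoder doubles the channel count at each module, and the decoder of
# module i mirrors the encoder of module i (inputs and outputs swapped).
def channel_config(base: int = 32, modules: int = 4):
    rows = []
    m = base
    for i in range(1, modules + 1):
        # encoder: m -> 2m, decoder (mirrored): 2m -> m
        rows.append((i, m, 2 * m, 2 * m, m))
        m *= 2
    return rows

for row in channel_config():
    print(row)  # (1, 32, 64, 64, 32) ... (4, 256, 512, 512, 256)
```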
| Method | DICM | LIME | MEF | NPE | VV | Mean |
|---|---|---|---|---|---|---|
| RetinexNet | 4.51 | 5.39 | 4.35 | 4.40 | 3.48 | 4.43 |
| KinD | 3.74 | 4.31 | 4.31 | 4.47 | 3.83 | 4.13 |
| HEP | 3.65 | 4.28 | 4.08 | 5.27 | 4.78 | 4.41 |
| EnlightenGAN | 2.93 | 3.88 | 3.31 | 3.77 | 3.19 | 3.41 |
| Zero-DCE | 2.86 | 4.45 | 3.43 | 3.90 | 3.70 | 3.66 |
| SCI-e | 3.59 | 4.62 | 3.88 | 3.96 | 3.69 | 3.95 |
| SCI-m | 3.71 | 4.95 | 3.70 | 4.51 | 4.18 | 4.21 |
| MEFGAN | 5.26 | 5.15 | 6.17 | 5.66 | 6.41 | 5.73 |
| DPE-MEF | 4.38 | 5.49 | 4.62 | 4.62 | 3.00 | 4.42 |
| EENet | 3.04 | 4.10 | 3.67 | 3.77 | 3.01 | 3.52 |
| Ours | 2.84 | 4.03 | 3.29 | 3.81 | 2.91 | 3.37 |
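As a quick sanity check on the table above, the Mean column agrees with the plain arithmetic average of the five per-dataset scores to within rounding (a reasonable assumption about how it was computed; method names and values below are taken directly from the table):

```python
# (per-dataset scores over DICM, LIME, MEF, NPE, VV, reported mean)
scores = {
    "RetinexNet":   ([4.51, 5.39, 4.35, 4.40, 3.48], 4.43),
    "KinD":         ([3.74, 4.31, 4.31, 4.47, 3.83], 4.13),
    "EnlightenGAN": ([2.93, 3.88, 3.31, 3.77, 3.19], 3.41),
    "Ours":         ([2.84, 4.03, 3.29, 3.81, 2.91], 3.37),
}
for name, (vals, reported) in scores.items():
    mean = sum(vals) / len(vals)
    # reported means match the computed average to within 0.01 (rounding)
    assert abs(mean - reported) < 0.011, name
    print(f"{name}: computed {mean:.3f}, reported {reported}")
```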
| Method | DICM | LIME | MEF | NPE | VV | Mean |
|---|---|---|---|---|---|---|
| RetinexNet | 31.62 | 33.94 | 20.57 | 21.97 | 32.42 | 28.10 |
| KinD | 25.09 | 21.98 | 35.66 | 23.78 | 31.31 | 27.56 |
| HEP | 23.01 | 24.16 | 30.60 | 29.44 | 40.66 | 29.57 |
| EnlightenGAN | 10.56 | 30.31 | 22.83 | 22.39 | 28.31 | 22.88 |
| Zero-DCE | 19.99 | 24.17 | 22.28 | 16.56 | 29.21 | 22.44 |
| SCI-e | 12.31 | 19.33 | 19.95 | 13.51 | 28.20 | 18.66 |
| SCI-m | 24.89 | 24.49 | 18.67 | 27.24 | 34.80 | 26.02 |
| MEFGAN | 47.96 | 36.17 | 53.11 | 37.70 | 56.92 | 46.37 |
| DPE-MEF | 14.11 | 16.47 | 17.26 | 20.02 | 16.98 | 16.97 |
| EENet | 15.54 | 19.33 | 23.63 | 15.27 | 14.03 | 17.56 |
| Ours | 11.03 | 15.87 | 16.97 | 17.40 | 15.25 | 15.30 |
| Loss Function | PSNR | SSIM | NIQE |
|---|---|---|---|
| w/o | 20.8190 | 0.9522 | 4.7776 |
| w/o | 10.8013 | 0.3289 | 4.9191 |
| w/o | 21.5714 | 0.9541 | 3.8347 |
| w/o | 20.0018 | 0.9586 | 4.5173 |
| EENet | 22.3414 | 0.9605 | 3.5511 |
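PSNR in the ablation above follows the standard definition 10·log10(MAX²/MSE) (see the Hore and Ziou reference in the bibliography); a minimal sketch for intensities in [0, 1]:

```python
import math

# Standard PSNR between a reference image and a test image, both given as
# flat lists of intensities in [0, 1]. Higher is better; identical images
# give infinite PSNR because the mean squared error is zero.
def psnr(reference, test, max_val: float = 1.0) -> float:
    mse = sum((r - t) ** 2 for r, t in zip(reference, test)) / len(reference)
    if mse == 0:
        return float("inf")
    return 10.0 * math.log10(max_val ** 2 / mse)

ref = [0.2, 0.4, 0.6, 0.8]
out = [0.21, 0.41, 0.59, 0.82]
print(round(psnr(ref, out), 2))  # small errors -> PSNR in the high 30s dB
```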
| Network Structure | NIQE | BRISQUE |
|---|---|---|
| prior | 2.9547 | 15.7336 |
| w/o fusion block | 2.9496 | 15.2582 |
| w/o ResUNet | 2.9857 | 15.3832 |
| TAFNet | 2.9165 | 15.2548 |
| Parameters | PSNR | SSIM | NIQE |
|---|---|---|---|
| , , | 22.3946 | 0.9556 | 3.5592 |
| , , | 11.1583 | 0.3502 | 4.5630 |
| , , | 21.7534 | 0.9757 | 3.7421 |
| , , | 21.7233 | 0.9749 | 3.5479 |
| , , | 18.4398 | 0.9295 | 4.3532 |
| , , | 19.7033 | 0.9512 | 3.4036 |
| , , (EENet) | 22.3414 | 0.9605 | 3.5511 |
| Parameters | NIQE | BRISQUE |
|---|---|---|
| , | 3.2320 | 15.2759 |
| , | 3.2035 | 15.9877 |
| , | 3.1615 | 18.7176 |
| , | 3.1635 | 18.4551 |
| , | 3.1753 | 17.7044 |
| , (TAFNet) | 2.9165 | 15.2548 |
| Data Sets | ExDark | ImageNet | DICM | LIME | MEF | NPE | VV | DarkFace |
|---|---|---|---|---|---|---|---|---|
| ImageNet | 3.82 | 4.87 | 3.15 | 4.06 | 3.38 | 3.75 | 3.03 | 2.60 |
| ExDark | 3.64 | 4.82 | 2.84 | 4.03 | 3.29 | 3.81 | 2.91 | 2.59 |
| Methods | HEP | Zero-DCE | EnlightenGAN | SCI-e | Ours |
|---|---|---|---|---|---|
| Complex scenes | 3.70 | 3.72 | 3.69 | 3.41 | 2.49 |
| Images with different noise levels | 4.51 | 5.16 | 4.11 | 5.47 | 3.64 |
| Extreme low-light environments | 3.36 | 3.19 | 3.85 | 3.92 | 3.64 |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Huang, W.; Li, K.; Xu, M.; Huang, R. Self-Supervised Non-Uniform Low-Light Image Enhancement Combining Image Inversion and Exposure Fusion. Electronics 2023, 12, 4445. https://doi.org/10.3390/electronics12214445