DCTE-LLIE: A Dual Color-and-Texture-Enhancement-Based Method for Low-Light Image Enhancement
Abstract
1. Introduction
- A novel method, DCTE-LLIE, is proposed in this paper. DCTE-LLIE extracts more realistic color and texture feature representations of low-light images during the enhancement process, which helps eliminate color distortion and local blurring in the final enhanced image.
- A novel color enhancement block (CEB) is proposed to extract more realistic color representations. By preserving the color distribution of the low-light input during enhancement, the CEB yields more reasonable color representations and thereby reduces color distortion in the final enhanced image.
- A multiscale attention-based texture enhancement block (ATEB) is proposed to guide the network toward local regions and extract more effective and reliable texture features during training. In addition, a multiscale feature fusion strategy automatically fuses features across scales to produce more reliable texture representations (a hedged sketch of such a block is given after this list).
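To make the texture-enhancement idea concrete, the following is a minimal, hypothetical PyTorch sketch of a multiscale attention-based block with learned feature fusion. The layer widths, pooling scales, sigmoid spatial attention, and softmax fusion weights are illustrative assumptions, not the authors' exact ATEB architecture.

```python
# Hypothetical sketch of a multiscale attention-based texture enhancement block;
# channel counts, scales, and the fusion rule are assumptions, not the paper's design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ATEBSketch(nn.Module):
    def __init__(self, channels: int = 64, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        # One 3x3 conv branch per scale to extract texture features.
        self.branches = nn.ModuleList(
            [nn.Conv2d(channels, channels, 3, padding=1) for _ in scales]
        )
        # Spatial attention: a single-channel mask in [0, 1] per scale.
        self.attn = nn.ModuleList(
            [nn.Conv2d(channels, 1, 1) for _ in scales]
        )
        # Learned weights for automatic multiscale fusion (softmax-normalized).
        self.fusion_logits = nn.Parameter(torch.zeros(len(scales)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h, w = x.shape[-2:]
        feats = []
        for scale, conv, attn in zip(self.scales, self.branches, self.attn):
            # Downsample, extract features, and weight them by a spatial attention mask.
            xs = F.avg_pool2d(x, scale) if scale > 1 else x
            f = F.relu(conv(xs))
            f = f * torch.sigmoid(attn(f))
            # Upsample back to the input resolution before fusion.
            feats.append(F.interpolate(f, size=(h, w), mode="bilinear", align_corners=False))
        # Fuse the multiscale features with learned, normalized weights.
        weights = torch.softmax(self.fusion_logits, dim=0)
        fused = sum(w * f for w, f in zip(weights, feats))
        return x + fused  # residual connection keeps the original content


if __name__ == "__main__":
    block = ATEBSketch(channels=64)
    out = block(torch.randn(1, 64, 128, 128))
    print(out.shape)  # torch.Size([1, 64, 128, 128])
```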
2. Related Work
3. DCTE-LLIE Method
3.1. Decomposition Subnetwork
3.2. Color Enhancement Block
3.3. Attention-Based Texture Enhancement Block
3.4. Training Strategy
4. Experiment
4.1. Experiment Details
4.2. Evaluation Index
4.3. Comparison with State-of-the-Art Methods
4.4. The Effectiveness of Color Enhancement Block
4.5. The Effectiveness of Attention-Based Texture Enhancement Block
4.6. Further Experimental Results on MIT-Adobe FiveK Dataset
4.7. Further Experimental Results on the SID Dataset
4.8. Visualization Performance on Real-World Scenarios
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Liu, J.; Xu, D.; Yang, W.; Fan, M.; Huang, H. Benchmarking low-light image enhancement and beyond. Int. J. Comput. Vis. 2021, 129, 1153–1184. [Google Scholar] [CrossRef]
- Ibrahim, H.; Kong, N.S.P. Brightness preserving dynamic histogram equalization for image contrast enhancement. IEEE Trans. Consum. Electron. 2007, 53, 1752–1758. [Google Scholar] [CrossRef]
- Nakai, K.; Hoshi, Y.; Taguchi, A. Color image contrast enhacement method based on differential intensity/saturation gray-levels histograms. In Proceedings of the 2013 International Symposium on Intelligent Signal Processing and Communication Systems, Naha, Japan, 12–15 November 2013; pp. 445–449. [Google Scholar]
- Celik, T.; Tjahjadi, T. Contextual and variational contrast enhancement. IEEE Trans. Image Process. 2011, 20, 3431–3441. [Google Scholar] [CrossRef] [PubMed]
- Lee, C.; Lee, C.; Kim, C.S. Contrast enhancement based on layered difference representation of 2D histograms. IEEE Trans. Image Process. 2013, 22, 5372–5384. [Google Scholar] [CrossRef]
- Land, E.H. The retinex theory of color vision. Sci. Am. 1977, 237, 108–128. [Google Scholar] [CrossRef] [PubMed]
- LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef] [PubMed]
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90. [Google Scholar] [CrossRef]
- Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
- Zhang, Y.; Li, W.; Sun, W.; Tao, R.; Du, Q. Single-source domain expansion network for cross-scene hyperspectral image classification. IEEE Trans. Image Process. 2023, 32, 1498–1512. [Google Scholar] [CrossRef]
- Bhatti, U.A.; Huang, M.; Neira-Molina, H.; Marjan, S.; Baryalai, M.; Tang, H.; Wu, G.; Bazai, S.U. MFFCG–Multi feature fusion for hyperspectral image classification using graph attention network. Expert Syst. Appl. 2023, 229, 120496. [Google Scholar] [CrossRef]
- Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.Y.; Berg, A.C. Ssd: Single shot multibox detector. In Proceedings of the Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, 11–14 October 2016; Springer: Cham, Switzerland, 2016; pp. 21–37. [Google Scholar]
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster r-cnn: Towards real-time object detection with region proposal networks. In Proceedings of the Advances in Neural Information Processing Systems 28, Montreal, QC, Canada, 7–12 December 2015. [Google Scholar]
- Zou, Z.; Chen, K.; Shi, Z.; Guo, Y.; Ye, J. Object detection in 20 years: A survey. Proc. IEEE 2023, 111, 257–276. [Google Scholar] [CrossRef]
- Chen, S.; Sun, P.; Song, Y.; Luo, P. Diffusiondet: Diffusion model for object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 2–6 October 2023; pp. 19830–19843. [Google Scholar]
- Wang, Z.; Li, Y.; Chen, X.; Lim, S.N.; Torralba, A.; Zhao, H.; Wang, S. Detecting everything in the open world: Towards universal object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 11433–11443. [Google Scholar]
- Chen, L.C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; Yuille, A.L. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 40, 834–848. [Google Scholar] [CrossRef] [PubMed]
- He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask r-cnn. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2961–2969. [Google Scholar]
- Jain, J.; Li, J.; Chiu, M.T.; Hassani, A.; Orlov, N.; Shi, H. Oneformer: One transformer to rule universal image segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 2989–2998. [Google Scholar]
- Wu, J.; Fu, R.; Fang, H.; Zhang, Y.; Yang, Y.; Xiong, H.; Liu, H.; Xu, Y. Medsegdiff: Medical image segmentation with diffusion probabilistic model. In Proceedings of the Medical Imaging with Deep Learning, Nashville, TN, USA, 10–12 July 2023; pp. 1623–1639. [Google Scholar]
- Wei, C.; Wang, W.; Yang, W.; Liu, J. Deep retinex decomposition for low-light enhancement. arXiv 2018, arXiv:1808.04560. [Google Scholar]
- Wang, L.W.; Liu, Z.S.; Siu, W.C.; Lun, D.P. Lightening network for low-light image enhancement. IEEE Trans. Image Process. 2020, 29, 7984–7996. [Google Scholar] [CrossRef]
- Zhao, Z.; Xiong, B.; Wang, L.; Ou, Q.; Yu, L.; Kuang, F. RetinexDIP: A unified deep framework for low-light image enhancement. IEEE Trans. Circuits Syst. Video Technol. 2021, 32, 1076–1088. [Google Scholar] [CrossRef]
- Wang, Y.; Wan, R.; Yang, W.; Li, H.; Chau, L.P.; Kot, A. Low-light image enhancement with normalizing flow. In Proceedings of the AAAI Conference on Artificial Intelligence, Vancouver, BC, Canada, 22 February–1 March 2022; Volume 36, pp. 2604–2612. [Google Scholar]
- Hai, J.; Xuan, Z.; Yang, R.; Hao, Y.; Zou, F.; Lin, F.; Han, S. R2rnet: Low-light image enhancement via real-low to real-normal network. J. Vis. Commun. Image Represent. 2023, 90, 103712. [Google Scholar] [CrossRef]
- Yang, S.; Zhou, D.; Cao, J.; Guo, Y. LightingNet: An integrated learning method for low-light image enhancement. IEEE Trans. Comput. Imaging 2023, 9, 29–42. [Google Scholar] [CrossRef]
- Guo, X.; Hu, Q. Low-light image enhancement via breaking down the darkness. Int. J. Comput. Vis. 2023, 131, 48–66. [Google Scholar] [CrossRef]
- Cai, Y.; Bian, H.; Lin, J.; Wang, H.; Timofte, R.; Zhang, Y. Retinexformer: One-stage retinex-based transformer for low-light image enhancement. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 2–6 October 2023; pp. 12504–12513. [Google Scholar]
- Pizer, S.M.; Amburn, E.P.; Austin, J.D.; Cromartie, R.; Geselowitz, A.; Greer, T.; ter Haar Romeny, B.; Zimmerman, J.B.; Zuiderveld, K. Adaptive histogram equalization and its variations. Comput. Vis. Graph. Image Process. 1987, 39, 355–368. [Google Scholar] [CrossRef]
- Hussain, K.; Rahman, S.; Khaled, S.M.; Abdullah-Al-Wadud, M.; Shoyaib, M. Dark image enhancement by locally transformed histogram. In Proceedings of the 8th International Conference on Software, Knowledge, Information Management and Applications (SKIMA 2014), Dhaka, Bangladesh, 18–20 December 2014; pp. 1–7. [Google Scholar]
- Liu, B.; Jin, W.; Chen, Y.; Liu, C.; Li, L. Contrast enhancement using non-overlapped sub-blocks and local histogram projection. IEEE Trans. Consum. Electron. 2011, 57, 583–588. [Google Scholar] [CrossRef]
- Kim, J.Y.; Kim, L.S.; Hwang, S.H. An advanced contrast enhancement using partially overlapped sub-block histogram equalization. IEEE Trans. Circuits Syst. Video Technol. 2001, 11, 475–484. [Google Scholar]
- Dong, X.; Pang, Y.; Wen, J. Fast efficient algorithm for enhancement of low lighting video. In Proceedings of the ACM SIGGRAPH 2010 Posters, Los Angeles, CA, USA, 26–30 July 2010; p. 1. [Google Scholar]
- Jobson, D.J.; Rahman, Z.-U.; Woodell, G.A. Properties and performance of a center/surround retinex. IEEE Trans. Image Process. 1997, 6, 451–462. [Google Scholar] [CrossRef] [PubMed]
- Jobson, D.J.; Rahman, Z.-U.; Woodell, G.A. A multiscale retinex for bridging the gap between color images and the human observation of scenes. IEEE Trans. Image Process. 1997, 6, 965–976. [Google Scholar] [CrossRef] [PubMed]
- Wang, S.; Zheng, J.; Hu, H.M.; Li, B. Naturalness preserved enhancement algorithm for non-uniform illumination images. IEEE Trans. Image Process. 2013, 22, 3538–3548. [Google Scholar] [CrossRef] [PubMed]
- Guo, X.; Li, Y.; Ling, H. LIME: Low-light image enhancement via illumination map estimation. IEEE Trans. Image Process. 2016, 26, 982–993. [Google Scholar] [CrossRef] [PubMed]
- Li, M.; Liu, J.; Yang, W.; Guo, Z. Joint denoising and enhancement for low-light images via retinex model. In Proceedings of the International Forum on Digital TV and Wireless Multimedia Communications, Shanghai, China, 8–9 November 2017; Springer: Singapore, 2017; pp. 91–99. [Google Scholar]
- Li, M.; Liu, J.; Yang, W.; Sun, X.; Guo, Z. Structure-revealing low-light image enhancement via robust retinex model. IEEE Trans. Image Process. 2018, 27, 2828–2841. [Google Scholar] [CrossRef] [PubMed]
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. In Proceedings of the Advances in Neural Information Processing Systems 30, Long Beach, CA, USA, 4–9 December 2017. [Google Scholar]
- Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef]
- Zhang, R.; Isola, P.; Efros, A.A.; Shechtman, E.; Wang, O. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 586–595. [Google Scholar]
- Lore, K.G.; Akintayo, A.; Sarkar, S. LLNet: A deep autoencoder approach to natural low-light image enhancement. Pattern Recognit. 2017, 61, 650–662. [Google Scholar] [CrossRef]
- Li, C.; Guo, J.; Porikli, F.; Pang, Y. LightenNet: A convolutional neural network for weakly illuminated image enhancement. Pattern Recognit. Lett. 2018, 104, 15–22. [Google Scholar] [CrossRef]
- Lv, F.; Lu, F.; Wu, J.; Lim, C. MBLLEN: Low-Light Image/Video Enhancement Using CNNs. In Proceedings of the BMVC Newcastle, Newcastle, UK, 3–6 September 2018; Volume 220, p. 4. [Google Scholar]
- Zhang, Y.; Zhang, J.; Guo, X. Kindling the darkness: A practical low-light image enhancer. In Proceedings of the 27th ACM International Conference on Multimedia, Nice, France, 21–25 October 2019; pp. 1632–1640. [Google Scholar]
- Zhang, Y.; Guo, X.; Ma, J.; Liu, W.; Zhang, J. Beyond brightening low-light images. Int. J. Comput. Vis. 2021, 129, 1013–1037. [Google Scholar] [CrossRef]
- Lu, K.; Zhang, L. TBEFN: A two-branch exposure-fusion network for low-light image enhancement. IEEE Trans. Multimed. 2020, 23, 4093–4105. [Google Scholar] [CrossRef]
- Lim, S.; Kim, W. DSLR: Deep stacked Laplacian restorer for low-light image enhancement. IEEE Trans. Multimed. 2020, 23, 4272–4284. [Google Scholar] [CrossRef]
- Jiang, Y.; Gong, X.; Liu, D.; Cheng, Y.; Fang, C.; Shen, X.; Yang, J.; Zhou, P.; Wang, Z. Enlightengan: Deep light enhancement without paired supervision. IEEE Trans. Image Process. 2021, 30, 2340–2349. [Google Scholar] [CrossRef] [PubMed]
- Yang, W.; Wang, S.; Fang, Y.; Wang, Y.; Liu, J. From fidelity to perceptual quality: A semi-supervised approach for low-light image enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2020; pp. 3063–3072. [Google Scholar]
- Zhang, L.; Zhang, L.; Liu, X.; Shen, Y.; Zhang, S.; Zhao, S. Zero-shot restoration of back-lit images using deep internal learning. In Proceedings of the 27th ACM International Conference on Multimedia, Nice, France, 21–25 October 2019; pp. 1623–1631. [Google Scholar]
- Guo, C.; Li, C.; Guo, J.; Loy, C.C.; Hou, J.; Kwong, S.; Cong, R. Zero-reference deep curve estimation for low-light image enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2020; pp. 1780–1789. [Google Scholar]
- Zhu, A.; Zhang, L.; Shen, Y.; Ma, Y.; Zhao, S.; Zhou, Y. Zero-shot restoration of underexposed images via robust retinex decomposition. In Proceedings of the 2020 IEEE International Conference on Multimedia and Expo (ICME), London, UK, 6–10 July 2020; pp. 1–6. [Google Scholar]
Method | PSNR | SSIM | LPIPS |
---|---|---|---|
LLNet [43] | 15.96 | 0.611 | 0.529 |
LightenNet [44] | 15.35 | 0.604 | 0.541 |
RetinexNet [21] | 16.77 | 0.635 | 0.436 |
MBLLEN [45] | 17.91 | 0.729 | 0.356 |
KinD [46] | 20.87 | 0.804 | 0.207 |
KinD++ [47] | 21.30 | 0.822 | 0.175 |
TBEFN [48] | 19.35 | 0.671 | 0.237 |
DSLR [49] | 19.05 | 0.723 | 0.244 |
EnlightenGAN [50] | 17.48 | 0.683 | 0.314 |
DRBN [51] | 19.86 | 0.748 | 0.261 |
ExCNet [52] | 18.78 | 0.716 | 0.282 |
Zero-DCE [53] | 18.86 | 0.734 | 0.311 |
RRDNet [54] | 21.39 | 0.791 | 0.157 |
LLFlow [24] | 24.13 | 0.872 | 0.117 |
DCTE-LLIE | 24.21 | 0.826 | 0.131 |
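The scores above are full-reference metrics: PSNR and SSIM are higher-is-better, while LPIPS is lower-is-better. Below is a minimal sketch of how such metrics are commonly computed with scikit-image and the lpips package; the file names are placeholders, and this is not the authors' evaluation script.

```python
# Minimal sketch of computing PSNR, SSIM, and LPIPS between an enhanced image
# and its ground-truth reference; file names are placeholders.
import torch
import lpips                      # pip install lpips
import numpy as np
from skimage import io
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

enhanced = io.imread("enhanced.png").astype(np.float32) / 255.0    # H x W x 3 in [0, 1]
reference = io.imread("reference.png").astype(np.float32) / 255.0

psnr = peak_signal_noise_ratio(reference, enhanced, data_range=1.0)
ssim = structural_similarity(reference, enhanced, channel_axis=-1, data_range=1.0)

# LPIPS expects NCHW tensors scaled to [-1, 1]; lower values mean perceptually closer.
loss_fn = lpips.LPIPS(net="alex")
to_tensor = lambda img: torch.from_numpy(img).permute(2, 0, 1).unsqueeze(0) * 2.0 - 1.0
lpips_score = loss_fn(to_tensor(enhanced), to_tensor(reference)).item()

print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.3f}, LPIPS: {lpips_score:.3f}")
```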
Method | PSNR | SSIM | LPIPS |
---|---|---|---|
DCTE-LLIE without CEB | 23.05 | 0.814 | 0.174 |
DCTE-LLIE without ATEB | 23.68 | 0.808 | 0.157 |
DCTE-LLIE without multiscale feature fusion strategy | 23.14 | 0.796 | 0.163 |
DCTE-LLIE | 24.21 | 0.826 | 0.131 |
Method | PSNR | SSIM | LPIPS |
---|---|---|---|
LLNet [43] | 15.84 | 0.624 | 0.537 |
LightenNet [44] | 15.57 | 0.618 | 0.528 |
RetinexNet [21] | 16.83 | 0.674 | 0.462 |
MBLLEN [45] | 18.15 | 0.740 | 0.374 |
KinD [46] | 19.22 | 0.783 | 0.241 |
KinD++ [47] | 20.41 | 0.801 | 0.188 |
TBEFN [48] | 19.06 | 0.781 | 0.237 |
DSLR [49] | 18.71 | 0.739 | 0.244 |
EnlightenGAN [50] | 17.07 | 0.655 | 0.339 |
DRBN [51] | 18.53 | 0.718 | 0.320 |
ExCNet [52] | 18.04 | 0.724 | 0.401 |
Zero-DCE [53] | 17.33 | 0.736 | 0.397 |
RRDNet [54] | 20.14 | 0.785 | 0.196 |
LLFlow [24] | 22.64 | 0.824 | 0.149 |
DCTE-LLIE | 22.39 | 0.813 | 0.137 |
Method | PSNR | SSIM | LPIPS | PSNR | SSIM | LPIPS |
---|---|---|---|---|---|---|
LLNet [43] | 15.21 | 0.602 | 0.527 | 15.49 | 0.611 | 0.547 |
LightenNet [44] | 15.77 | 0.611 | 0.533 | 16.04 | 0.602 | 0.516 |
RetinexNet [21] | 16.81 | 0.647 | 0.419 | 16.75 | 0.688 | 0.497 |
MBLLEN [45] | 17.69 | 0.686 | 0.385 | 17.53 | 0.670 | 0.402 |
KinD [46] | 20.03 | 0.788 | 0.181 | 20.36 | 0.796 | 0.193 |
KinD++ [47] | 20.97 | 0.801 | 0.149 | 21.13 | 0.814 | 0.161 |
TBEFN [48] | 19.78 | 0.783 | 0.274 | 19.62 | 19.83 | 0.261 |
DSLR [49] | 19.41 | 0.774 | 0.258 | 19.73 | 19.54 | 0.245 |
EnlightenGAN [50] | 18.50 | 0.758 | 0.284 | 18.37 | 0.703 | 0.278 |
DRBN [51] | 19.11 | 0.773 | 0.262 | 19.02 | 0.779 | 0.283 |
ExCNet [52] | 18.47 | 0.713 | 0.302 | 18.61 | 0.714 | 0.297 |
Zero-DCE [53] | 18.59 | 0.748 | 0.281 | 18.28 | 0.743 | 0.272 |
RRDNet [54] | 21.33 | 0.792 | 0.154 | 20.69 | 0.782 | 0.188 |
LLFlow [24] | 23.15 | 0.824 | 0.129 | 22.83 | 0.820 | 0.120 |
DCTE-LLIE | 23.04 | 0.829 | 0.125 | 23.32 | 0.818 | 0.127 |