Two Residual Attention Convolution Models to Recover Underexposed and Overexposed Images
Abstract
1. Introduction
- We propose a novel illumination and color correction method that employs dual convolutional networks built on dissimilar residual attention blocks to refine underexposed and overexposed images.
- Our model optimizes image restoration by separating the illumination and color correction processes into two convolutional networks operating in the CIELab color space.
- We add a self-attention layer to every residual block in our system to enhance its performance.
- We create a synthetic image dataset covering both underexposure and overexposure cases, along with corresponding ground-truth images, based on two public datasets for the training process.
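The split described in these highlights — luminance correction on the CIELab L channel and color correction on the a/b channels, then recombination — can be sketched as below. The `icanet` and `ccanet` stubs are hypothetical identity placeholders standing in for the paper's two trained networks; the color-space conversion follows the standard sRGB (D65) ↔ CIELab formulas, not the authors' code.

```python
import numpy as np

# Standard sRGB (D65) <-> CIELab conversion.
_M = np.array([[0.4124564, 0.3575761, 0.1804375],
               [0.2126729, 0.7151522, 0.0721750],
               [0.0193339, 0.1191920, 0.9503041]])
_WHITE = np.array([0.95047, 1.0, 1.08883])  # D65 reference white

def srgb_to_lab(rgb):
    """rgb: float array in [0, 1], shape (..., 3) -> Lab array."""
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    xyz = (lin @ _M.T) / _WHITE                      # normalized XYZ
    f = np.where(xyz > (6 / 29) ** 3, np.cbrt(xyz),
                 xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

def lab_to_srgb(lab):
    fy = (lab[..., 0] + 16) / 116
    fx = fy + lab[..., 1] / 500
    fz = fy - lab[..., 2] / 200
    f = np.stack([fx, fy, fz], axis=-1)
    xyz = np.where(f > 6 / 29, f ** 3, 3 * (6 / 29) ** 2 * (f - 4 / 29)) * _WHITE
    lin = np.clip(xyz @ np.linalg.inv(_M).T, 0, 1)   # back to linear RGB
    return np.where(lin <= 0.0031308, 12.92 * lin,
                    1.055 * lin ** (1 / 2.4) - 0.055)

# Hypothetical stand-ins for the paper's two networks.
def icanet(L):    # illumination correction on the luminance channel
    return L      # identity placeholder
def ccanet(ab):   # color correction on the chrominance channels
    return ab     # identity placeholder

def enhance(rgb):
    lab = srgb_to_lab(rgb)
    lab[..., 0] = icanet(lab[..., 0])     # L channel -> illumination network
    lab[..., 1:] = ccanet(lab[..., 1:])   # a, b channels -> color network
    return lab_to_srgb(lab)
```

With identity placeholders the pipeline round-trips an in-gamut image, which is a useful sanity check that the channel split and recombination lose nothing by themselves.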
2. Related Works
3. Proposed Method
3.1. System Overview
3.2. ICANet Architecture
3.3. Self-Attention Mechanism
3.4. CCANet Architecture
3.5. Loss Function
4. Experiments
4.1. Datasets and Metrics
4.2. Performance Evaluation
4.3. Ablation Study
5. Conclusions
Author Contributions
Funding
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Payne, T. Another Photography Book; Adobe Education Exchange: San Jose, CA, USA, 2018; Volume 2020-7, pp. 121–122. Available online: https://edex.adobe.com/teaching-resources/another-photography-book (accessed on 12 January 2023).
- Wang, S.; Zheng, J.; Hu, H.M.; Li, B. Naturalness preserved enhancement algorithm for non-uniform illumination images. IEEE Trans. Image Process. 2013, 22, 3538–3548. [Google Scholar] [CrossRef]
- Guo, X.; Li, Y.; Ling, H. LIME: Low-light image enhancement via illumination map estimation. IEEE Trans. Image Process. 2016, 26, 982–993. [Google Scholar] [CrossRef] [PubMed]
- Song, Q.; Cosman, P.C. Luminance enhancement and detail preservation of images and videos adapted to ambient illumination. IEEE Trans. Image Process. 2018, 27, 4901–4915. [Google Scholar] [CrossRef] [PubMed]
- Reinhard, E.; Stark, M.; Shirley, P.; Ferwerda, J. Photographic tone reproduction for digital images. In Seminal Graphics Papers: Pushing the Boundaries; Association for Computing Machinery: New York, NY, USA, 2023; Volume 2, pp. 661–670. [Google Scholar]
- Fu, X.; Liao, Y.; Zeng, D.; Huang, Y.; Zhang, X.P.; Ding, X. A probabilistic method for image enhancement with simultaneous illumination and reflectance estimation. IEEE Trans. Image Process. 2015, 24, 4965–4977. [Google Scholar] [CrossRef] [PubMed]
- Abdullah-Al-Wadud, M.; Kabir, M.H.; Dewan, M.A.A.; Chae, O. A dynamic histogram equalization for image contrast enhancement. IEEE Trans. Consum. Electron. 2007, 53, 593–600. [Google Scholar] [CrossRef]
- Veluchamy, M.; Subramani, B. Image contrast and color enhancement using adaptive gamma correction and histogram equalization. Optik 2019, 183, 329–337. [Google Scholar] [CrossRef]
- Li, C.; Tang, S.; Yan, J.; Zhou, T. Low-light image enhancement based on quasi-symmetric correction functions by fusion. Symmetry 2020, 12, 1561. [Google Scholar] [CrossRef]
- Zhang, W.; Liu, X.; Wang, W.; Zeng, Y. Multi-exposure image fusion based on wavelet transform. Int. J. Adv. Robot. Syst. 2018, 15, 1729881418768939. [Google Scholar] [CrossRef]
- Jung, C.; Yang, Q.; Sun, T.; Fu, Q.; Song, H. Low light image enhancement with dual-tree complex wavelet transform. J. Vis. Commun. Image Represent. 2017, 42, 28–36. [Google Scholar] [CrossRef]
- Demirel, H.; Ozcinar, C.; Anbarjafari, G. Satellite image contrast enhancement using discrete wavelet transform and singular value decomposition. IEEE Geosci. Remote Sens. Lett. 2009, 7, 333–337. [Google Scholar] [CrossRef]
- Jobson, D.J.; Rahman, Z.u.; Woodell, G.A. Properties and performance of a center/surround retinex. IEEE Trans. Image Process. 1997, 6, 451–462. [Google Scholar] [CrossRef]
- Jobson, D.J.; Rahman, Z.u.; Woodell, G.A. A multiscale retinex for bridging the gap between color images and the human observation of scenes. IEEE Trans. Image Process. 1997, 6, 965–976. [Google Scholar] [CrossRef] [PubMed]
- Li, J. Application of image enhancement method for digital images based on Retinex theory. Optik 2013, 124, 5986–5988. [Google Scholar] [CrossRef]
- Li, M.; Liu, J.; Yang, W.; Sun, X.; Guo, Z. Structure-revealing low-light image enhancement via robust retinex model. IEEE Trans. Image Process. 2018, 27, 2828–2841. [Google Scholar] [CrossRef]
- Ren, X.; Yang, W.; Cheng, W.H.; Liu, J. LR3M: Robust low-light enhancement via low-rank regularized retinex model. IEEE Trans. Image Process. 2020, 29, 5862–5876. [Google Scholar] [CrossRef] [PubMed]
- Lore, K.G.; Akintayo, A.; Sarkar, S. LLNet: A deep autoencoder approach to natural low-light image enhancement. Pattern Recognit. 2017, 61, 650–662. [Google Scholar] [CrossRef]
- Jiang, L.; Jing, Y.; Hu, S.; Ge, B.; Xiao, W. Deep refinement network for natural low-light image enhancement in symmetric pathways. Symmetry 2018, 10, 491. [Google Scholar] [CrossRef]
- Li, Q.; Wu, H.; Xu, L.; Wang, L.; Lv, Y.; Kang, X. Low-light image enhancement based on deep symmetric encoder-decoder convolutional networks. Symmetry 2020, 12, 446. [Google Scholar] [CrossRef]
- Guo, Y.; Ke, X.; Ma, J.; Zhang, J. A pipeline neural network for low-light image enhancement. IEEE Access 2019, 7, 13737–13744. [Google Scholar] [CrossRef]
- Li, C.; Guo, J.; Porikli, F.; Pang, Y. LightenNet: A convolutional neural network for weakly illuminated image enhancement. Pattern Recognit. Lett. 2018, 104, 15–22. [Google Scholar] [CrossRef]
- Wang, R.; Zhang, Q.; Fu, C.W.; Shen, X.; Zheng, W.S.; Jia, J. Underexposed photo enhancement using deep illumination estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 6849–6857. [Google Scholar]
- Li, C.; Guo, C.; Loy, C.C. Learning to enhance low-light image via zero-reference deep curve estimation. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 4225–4238. [Google Scholar] [CrossRef]
- Gao, Z.; Edirisinghe, E.; Chesnokov, S. OEC-CNN: A simple method for over-exposure correction in photographs. Electron. Imaging 2020, 32, 1–8. [Google Scholar] [CrossRef]
- Wang, J.; Tan, W.; Niu, X.; Yan, B. RDGAN: Retinex decomposition based adversarial learning for low-light enhancement. In Proceedings of the 2019 IEEE International Conference on Multimedia and Expo (ICME), Shanghai, China, 8–12 July 2019; pp. 1186–1191. [Google Scholar]
- Jiang, Y.; Gong, X.; Liu, D.; Cheng, Y.; Fang, C.; Shen, X.; Yang, J.; Zhou, P.; Wang, Z. EnlightenGAN: Deep light enhancement without paired supervision. IEEE Trans. Image Process. 2021, 30, 2340–2349. [Google Scholar] [CrossRef] [PubMed]
- Ma, T.; Guo, M.; Yu, Z.; Chen, Y.; Ren, X.; Xi, R.; Li, Y.; Zhou, X. RetinexGAN: Unsupervised low-light enhancement with two-layer convolutional decomposition networks. IEEE Access 2021, 9, 56539–56550. [Google Scholar] [CrossRef]
- Cao, Y.; Ren, Y.; Li, T.H.; Li, G. Over-exposure correction via exposure and scene information disentanglement. In Proceedings of the Asian Conference on Computer Vision, Kyoto, Japan, 30 November–4 December 2020. [Google Scholar]
- Zhang, Q.; Nie, Y.; Zheng, W.S. Dual illumination estimation for robust exposure correction. Comput. Graph. Forum 2019, 38, 243–252. [Google Scholar] [CrossRef]
- Fu, X.; Zeng, D.; Huang, Y.; Liao, Y.; Ding, X.; Paisley, J. A fusion-based enhancing method for weakly illuminated images. Signal Process. 2016, 129, 82–96. [Google Scholar] [CrossRef]
- Steffens, C.R.; Messias, L.R.V.; Drews, P., Jr.; Botelho, S.S.d.C. Contrast enhancement and image completion: A CNN-based model to restore ill-exposed images. In Proceedings of the 2019 IEEE 17th International Conference on Industrial Informatics (INDIN), Helsinki, Finland, 22–25 July 2019; Volume 1, pp. 226–232. [Google Scholar]
- Goswami, S.; Singh, S.K. A simple deep learning based image illumination correction method for paintings. Pattern Recognit. Lett. 2020, 138, 392–396. [Google Scholar] [CrossRef]
- Li, X.; Zhang, B.; Liao, J.; Sander, P.V. Document rectification and illumination correction using a patch-based CNN. ACM Trans. Graph. (TOG) 2019, 38, 1–11. [Google Scholar] [CrossRef]
- Ma, L.; Jin, D.; Liu, R.; Fan, X.; Luo, Z. Joint over and under exposures correction by aggregated retinex propagation for image enhancement. IEEE Signal Process. Lett. 2020, 27, 1210–1214. [Google Scholar] [CrossRef]
- Afifi, M.; Derpanis, K.G.; Ommer, B.; Brown, M.S. Learning multi-scale photo exposure correction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 19–25 June 2021; pp. 9157–9167. [Google Scholar]
- Shen, Y.; Sheng, V.S.; Wang, L.; Duan, J.; Xi, X.; Zhang, D.; Cui, Z. Empirical comparisons of deep learning networks on liver segmentation. Comput. Mater. Contin. 2020, 62, 1233–1247. [Google Scholar] [CrossRef]
- Cao, Y.; Liu, S.; Peng, Y.; Li, J. DenseUNet: Densely connected UNet for electron microscopy image segmentation. IET Image Process. 2020, 14, 2682–2689. [Google Scholar] [CrossRef]
- Tai, Y.; Yang, J.; Liu, X.; Xu, C. MemNet: A persistent memory network for image restoration. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 4539–4547. [Google Scholar]
- Atoum, Y.; Ye, M.; Ren, L.; Tai, Y.; Liu, X. Color-wise attention network for low-light image enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA, 13–19 June 2020; pp. 506–507. [Google Scholar]
- Zhang, Y.; Tian, Y.; Kong, Y.; Zhong, B.; Fu, Y. Residual dense network for image restoration. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 43, 2480–2495. [Google Scholar] [CrossRef] [PubMed]
- Bychkovsky, V.; Paris, S.; Chan, E.; Durand, F. Learning photographic global tonal adjustment with a database of input/output image pairs. In Proceedings of the CVPR 2011, Colorado Springs, CO, USA, 20–25 June 2011; pp. 97–104. [Google Scholar]
- Everingham, M.; Winn, J. The PASCAL visual object classes challenge 2012 (VOC2012) development kit. Pattern Anal. Stat. Model. Comput. Learn. Tech. Rep 2012, 2007, 5. [Google Scholar]
- Zhang, H.; Goodfellow, I.; Metaxas, D.; Odena, A. Self-attention generative adversarial networks. In Proceedings of the International Conference on Machine Learning, PMLR, Long Beach, CA, USA, 10–15 June 2019; pp. 7354–7363. [Google Scholar]
- Huang, Z.; Chen, Z.; Zhang, Q.; Quan, G.; Ji, M.; Zhang, C.; Yang, Y.; Liu, X.; Liang, D.; Zheng, H.; et al. CaGAN: A cycle-consistent generative adversarial network with attention for low-dose CT imaging. IEEE Trans. Comput. Imaging 2020, 6, 1203–1218. [Google Scholar] [CrossRef]
- Guo, M.; Lan, H.; Yang, C.; Liu, J.; Gao, F. AS-Net: Fast photoacoustic reconstruction with multi-feature fusion from sparse data. IEEE Trans. Comput. Imaging 2022, 8, 215–223. [Google Scholar] [CrossRef]
- Jin, Z.; Iqbal, M.Z.; Bobkov, D.; Zou, W.; Li, X.; Steinbach, E. A flexible deep CNN framework for image restoration. IEEE Trans. Multimed. 2019, 22, 1055–1068. [Google Scholar] [CrossRef]
- Wang, J.; Wang, X.; Zhang, P.; Xie, S.; Fu, S.; Li, Y.; Han, H. Correction of uneven illumination in color microscopic image based on fully convolutional network. Opt. Express 2021, 29, 28503–28520. [Google Scholar] [CrossRef]
- Kim, B.; Jung, H.; Sohn, K. Multi-Exposure Image Fusion Using Cross-Attention Mechanism. In Proceedings of the 2022 IEEE International Conference on Consumer Electronics (ICCE), Las Vegas, NV, USA, 7–9 January 2022; pp. 1–6. [Google Scholar]
- Yoo, S.; Bahng, H.; Chung, S.; Lee, J.; Chang, J.; Choo, J. Coloring with limited data: Few-shot colorization via memory augmented networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 11283–11292. [Google Scholar]
- Zhang, R.; Zhu, J.Y.; Isola, P.; Geng, X.; Lin, A.S.; Yu, T.; Efros, A.A. Real-time user-guided image colorization with learned deep priors. arXiv 2017, arXiv:1705.02999. [Google Scholar] [CrossRef]
- Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef]
- Shi, C.; Lin, Y. Full reference image quality assessment based on visual salience with color appearance and gradient similarity. IEEE Access 2020, 8, 97310–97320. [Google Scholar] [CrossRef]
- Hammell, R. Ships in Satellite Imagery. 2018. Available online: https://www.kaggle.com/datasets/rhammell/ships-in-satellite-imagery/ (accessed on 20 May 2023).
- Kuo, C.W.; Ashmore, J.; Huggins, D.; Kira, Z. Data-Efficient Graph Embedding Learning for PCB Component Detection. In Proceedings of the 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), Waikoloa Village, HI, USA, 7–11 January 2019. [Google Scholar]
- Candemir, S.; Jaeger, S.; Palaniappan, K.; Musco, J.P.; Singh, R.K.; Xue, Z.; Karargyris, A.; Antani, S.; Thoma, G.; McDonald, C.J. Lung segmentation in chest radiographs using anatomical atlases with nonrigid registration. IEEE Trans. Med. Imaging 2013, 33, 577–590. [Google Scholar] [CrossRef] [PubMed]
| Method | Number of Networks | Architecture | Number of Residual Blocks | Type of Connection | Color Space |
|---|---|---|---|---|---|
| DualIE [30] | - | Dual Illumination * | - | - | RGB |
| FBEI [31] | - | Reflectance and Illumination * | - | - | RGB |
| ReExposeNet [32] | 1 | UNet | - | - | RGB |
| FCN20 [33] | 1 | Fully Convolutional Network | - | - | RGB |
| IllNet [34] | 1 | Residual Network | 5 | Regular Skip | RGB |
| ARPNet [35] | 2 | Residual Network | 16 | Regular Skip | RGB |
| MSPEC [36] | 1 | UNet | - | - | RGB |
| Ours | 2 | Residual Attention Network | 3 and 4 | Recursive and Dense | CIELab |
| Method | PSNR (FiveK) | SSIM (FiveK) | VCGS (FiveK) | PSNR (VOC2012) | SSIM (VOC2012) | VCGS (VOC2012) | PSNR (Afifi [36]) | SSIM (Afifi [36]) | VCGS (Afifi [36]) |
|---|---|---|---|---|---|---|---|---|---|
| DualIE [30] | 17.83 | 0.686 | 0.913 | 17.81 | 0.687 | 0.912 | 19.16 | 0.855 | 0.967 |
| FBEI [31] | 16.84 | 0.681 | 0.913 | 16.34 | 0.671 | 0.911 | 15.82 | 0.800 | 0.959 |
| ReExposeNet [32] | 13.44 | 0.544 | 0.892 | 13.05 | 0.537 | 0.896 | 15.11 | 0.596 | 0.909 |
| FCN20 [33] | 18.64 | 0.655 | 0.916 | 18.08 | 0.647 | 0.914 | 16.81 | 0.755 | 0.946 |
| IllNet [34] | 18.77 | 0.680 | 0.931 | 18.56 | 0.690 | 0.931 | 17.45 | 0.790 | 0.954 |
| ARPNet [35] | 18.67 | 0.673 | 0.926 | 18.34 | 0.675 | 0.925 | 17.35 | 0.785 | 0.954 |
| MSPEC [36] | 19.43 | 0.730 | 0.935 | 19.33 | 0.727 | 0.936 | 21.23 | 0.874 | 0.971 |
| Ours | 22.38 | 0.828 | 0.963 | 22.23 | 0.836 | 0.961 | 22.52 | 0.888 | 0.974 |
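As a reference point for the PSNR figures reported above, the standard computation on 8-bit images is shown below. This is the textbook definition, not the authors' evaluation code; SSIM and VCGS require windowed local statistics and are omitted here.

```python
import numpy as np

def psnr(reference, distorted, peak=255.0):
    """Peak signal-to-noise ratio in dB between two same-shaped images."""
    reference = np.asarray(reference, dtype=np.float64)
    distorted = np.asarray(distorted, dtype=np.float64)
    mse = np.mean((reference - distorted) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

For intuition: a uniform error of 10 gray levels on a 0–255 scale gives 10·log10(255²/100) ≈ 28.13 dB, so the roughly 3 dB gap between the last two rows corresponds to about halving the mean squared error.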
| Color Space | PSNR (FiveK) | SSIM (FiveK) | VCGS (FiveK) | PSNR (VOC2012) | SSIM (VOC2012) | VCGS (VOC2012) |
|---|---|---|---|---|---|---|
| HSV | 21.58 | 0.724 | 0.923 | 21.12 | 0.738 | 0.927 |
| YCbCr | 21.98 | 0.734 | 0.930 | 21.76 | 0.752 | 0.934 |
| Luv | 21.41 | 0.735 | 0.934 | 20.75 | 0.744 | 0.935 |
| CIELab | 22.38 | 0.828 | 0.963 | 22.23 | 0.836 | 0.961 |
| ICANet | CCANet | PSNR (FiveK) | SSIM (FiveK) | VCGS (FiveK) | PSNR (VOC2012) | SSIM (VOC2012) | VCGS (VOC2012) |
|---|---|---|---|---|---|---|---|
| −SA | −SA | 19.35 | 0.712 | 0.923 | 19.53 | 0.720 | 0.922 |
| +SA | −SA | 20.19 | 0.732 | 0.926 | 20.77 | 0.738 | 0.926 |
| −SA | +SA | 19.48 | 0.770 | 0.951 | 19.19 | 0.773 | 0.950 |
| +SA | +SA | 22.38 | 0.828 | 0.963 | 22.23 | 0.836 | 0.961 |
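The ±SA rows in the ablation above toggle the self-attention layer inside the residual blocks. A minimal SAGAN-style self-attention layer over a feature map can be sketched in plain NumPy as follows; the 1×1 convolutions reduce to channel-mixing matrices, and `Wq`, `Wk`, `Wv`, `gamma` are illustrative parameters, not the paper's trained weights.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(feat, Wq, Wk, Wv, gamma):
    """SAGAN-style self-attention over an (H, W, C) feature map.

    Wq, Wk: (C, C') and Wv: (C, C) channel-mixing matrices (the 1x1 convs);
    gamma: scalar residual gate, learned starting from 0 in SAGAN.
    """
    H, W, C = feat.shape
    x = feat.reshape(H * W, C)            # flatten spatial positions
    q, k, v = x @ Wq, x @ Wk, x @ Wv      # queries, keys, values
    attn = softmax(q @ k.T, axis=-1)      # (HW, HW) attention map
    out = attn @ v                        # aggregate values across positions
    return feat + gamma * out.reshape(H, W, C)  # residual connection
```

Because `gamma` starts at zero, the layer is initially an identity and the network can learn how much non-local context to mix in, which is one reason it drops cleanly into existing residual blocks.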
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Rinanto, N.; Su, S.-F. Two Residual Attention Convolution Models to Recover Underexposed and Overexposed Images. Symmetry 2023, 15, 1850. https://doi.org/10.3390/sym15101850