Semi-RainGAN: A Semisupervised Coarse-to-Fine Guided Generative Adversarial Network for Mixture of Rain Removal
Abstract
1. Introduction
- We propose a novel semisupervised coarse-to-fine guided generative adversarial network, dubbed Semi-RainGAN, to remove the mixture of rain. Semi-RainGAN leverages both synthetic (paired) and real-world (unpaired) rainy images for training, boosting the generalization ability on real-world rainy images.
- We propose two parallel subnetworks, i.e., a multiscale attention prediction network (MAPN) to fully exploit complementary multiscale information for attention map prediction and a global depth prediction network (GDPN) for accurate depth map prediction. These predicted attention and depth maps guide Semi-RainGAN to remove entangled rain streaks and rainy haze.
- We propose a coarse-to-fine guided rain removal network (CFRN) that integrates the predicted image features with the estimated depth and attention features. This subnetwork is cascaded after the first two subnetworks and provides sufficient, robust feature fusion to generate high-quality derained images.
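The three-subnetwork generator described above can be sketched as a single forward pass. The stand-in functions below only illustrate the data flow (MAPN produces an attention map, GDPN a depth map, and CFRN consumes both); their internals, shapes, and constants are assumptions for illustration, not the paper's actual deep architectures.

```python
import numpy as np

def mapn(rainy):
    """Stand-in MAPN: a per-pixel rain-attention map in [0, 1]."""
    a = rainy.mean(axis=-1, keepdims=True)              # (H, W, 1) brightness proxy
    return (a - a.min()) / (np.ptp(a) + 1e-8)

def gdpn(rainy):
    """Stand-in GDPN: a per-pixel scene-depth map in [0, 1]."""
    h, w = rainy.shape[:2]
    return np.tile(np.linspace(0.0, 1.0, h)[:, None, None], (1, w, 1))

def cfrn(rainy, attention, depth):
    """Stand-in CFRN: coarse streak suppression, then depth-aware haze compensation."""
    coarse = rainy * (1.0 - 0.5 * attention)            # suppress streak regions
    fine = coarse * (1.0 + 0.3 * depth)                 # rainy haze accumulates with depth
    return np.clip(fine, 0.0, 1.0)

def generator(rainy):
    attention = mapn(rainy)    # where rain streaks are
    depth = gdpn(rainy)        # how far away each pixel is
    return cfrn(rainy, attention, depth)

rainy = np.random.default_rng(0).random((8, 8, 3))      # toy HxWx3 "rainy image"
derained = generator(rainy)
assert derained.shape == rainy.shape
```

The cascade mirrors the paper's design choice: attention and depth are estimated in parallel, and only the removal stage fuses them with the image features.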
2. Related Work
2.1. Rain Streaks Removal
2.2. Rain Streaks and Rainy Haze Removal
3. Proposed Method
3.1. Generator
3.1.1. Multiscale Attention Prediction Network
3.1.2. Global Depth Prediction Network
3.1.3. Coarse-to-Fine Guided Rain Removal Network
3.2. Discriminators
3.3. Comprehensive Loss Function
4. Experimental Results and Analysis
4.1. Experimental Settings
4.1.1. Datasets
4.1.2. Training Details
4.1.3. Evaluation Metrics
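The tables in this section report PSNR and SSIM. As a reference point, a minimal PSNR implementation is sketched below (SSIM is more involved and is typically computed with an off-the-shelf routine such as `skimage.metrics.structural_similarity`); the toy images and `max_val` convention are illustrative assumptions.

```python
import numpy as np

def psnr(reference, restored, max_val=1.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    mse = np.mean((reference.astype(np.float64) - restored.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

clean = np.zeros((4, 4))
noisy = clean + 0.1                      # uniform error of 0.1 -> MSE = 0.01
print(round(psnr(clean, noisy), 1))      # 20.0 dB
```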
4.2. Comparison with State-of-the-Art
4.2.1. Baselines
4.2.2. Results on the RainCityscapes Dataset
4.2.3. Results on the Rain200L and Rain200H Datasets
4.2.4. Results on Real-World Rainy Images
4.3. Ablation Study
4.3.1. Component Analysis
- M1: A single rain removal network (baseline) is used for rain removal. It regresses the final rain-free images directly, without guidance from the depth map and attention map.
- M2: The attention prediction network is used to predict an attention map, but without the multiscale attention module (MSAM). The attention-guided channel fusion module (AGCM) is replaced with a simple fusion operation: multiplying the attention map with the feature map, then adding the original feature map back.
- M3: Only one SAM is added, and the output of the SAM is concatenated directly with another branch.
- M4: SAM is replaced with MSAM in the attention prediction network to construct the complete multiscale attention prediction network (MAPN).
- M5: The simple fusion operation is replaced with the AGCM.
- M6: The depth prediction network is added to predict a depth map as guidance, but without the position attention module (PAM). The two depth-guided crisscross fusion modules (DGCMs) are replaced with a dot-product operation.
- M7: The PAM is included in the depth prediction network to build the complete global depth prediction network (GDPN).
- M8: The dot product is replaced with two DGCMs.
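The "simple fusion operation" that M2 uses in place of the AGCM can be sketched in a few lines. Here the multiplication is read as element-wise (broadcast over channels), which is the common form of such residual attention gates; the tensor shapes are illustrative assumptions.

```python
import numpy as np

def simple_fusion(features, attention):
    """M2's baseline fusion: attention-weighted features plus a residual
    connection back to the original features (element-wise, shapes assumed)."""
    return features * attention + features   # equivalent to features * (1 + attention)

rng = np.random.default_rng(1)
feat = rng.random((1, 16, 8, 8))   # (batch, channels, height, width)
attn = rng.random((1, 1, 8, 8))    # single-channel attention map, broadcast over channels
fused = simple_fusion(feat, attn)
assert fused.shape == feat.shape
```

The ablation's point is that this gate, unlike the AGCM, applies one scalar weight per spatial location uniformly across channels, with no learned channel-wise interaction.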
4.3.2. Loss Function Analysis
4.3.3. Semisupervised Paradigm Analysis
4.4. Application
5. Discussion
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Janai, J.; Güney, F.; Behl, A.; Geiger, A. Computer vision for autonomous vehicles: Problems, datasets and state of the art. In Foundations and Trends® in Computer Graphics and Vision; Now Publishers: Boston, MA, USA, 2020; Volume 12, pp. 1–308.
- Buch, N.; Velastin, S.A.; Orwell, J. A review of computer vision techniques for the analysis of urban traffic. IEEE Trans. Intell. Transp. Syst. 2011, 12, 920–939.
- Zhu, Z.; Liang, D.; Zhang, S.; Huang, X.; Li, B.; Hu, S. Traffic-sign detection and classification in the wild. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2110–2118.
- Luo, Y.; Xu, Y.; Ji, H. Removing rain from a single image via discriminative sparse coding. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 13–16 December 2015; pp. 3397–3405.
- Li, Y.; Tan, R.T.; Guo, X.; Lu, J.; Brown, M.S. Rain streak removal using layer priors. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2736–2744.
- Chen, Y.L.; Hsu, C.T. A generalized low-rank appearance model for spatio-temporally correlated rain streaks. In Proceedings of the IEEE International Conference on Computer Vision, Sydney, NSW, Australia, 1–8 December 2013; pp. 1968–1975.
- Zhang, H.; Patel, V.M. Convolutional sparse and low-rank coding-based rain streak removal. In Proceedings of the 2017 IEEE Winter Conference on Applications of Computer Vision (WACV), Santa Rosa, CA, USA, 24–31 March 2017; IEEE: New York, NY, USA, 2017; pp. 1259–1267.
- Wang, T.; Yang, X.; Xu, K.; Chen, S.; Zhang, Q.; Lau, R.W. Spatial attentive single-image deraining with a high quality real rain dataset. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 12270–12279.
- Fu, X.; Huang, J.; Zeng, D.; Huang, Y.; Ding, X.; Paisley, J. Removing rain from single images via a deep detail network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 3855–3863.
- Deng, S.; Wei, M.; Wang, J.; Feng, Y.; Liang, L.; Xie, H.; Wang, F.L.; Wang, M. Detail-recovery image deraining via context aggregation networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 14560–14569.
- Jiang, K.; Wang, Z.; Yi, P.; Chen, C.; Huang, B.; Luo, Y.; Ma, J.; Jiang, J. Multi-scale progressive fusion network for single image deraining. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 8346–8355.
- Zamir, S.W.; Arora, A.; Khan, S.; Hayat, M.; Khan, F.S.; Yang, M.H.; Shao, L. Multi-stage progressive image restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 19–25 June 2021; pp. 14821–14831.
- Yasarla, R.; Sindagi, V.A.; Patel, V.M. Syn2Real transfer learning for image deraining using Gaussian processes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 2726–2736.
- Ye, Y.; Chang, Y.; Zhou, H.; Yan, L. Closing the loop: Joint rain generation and removal via disentangled image translation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 19–25 June 2021; pp. 2053–2062.
- Hu, X.; Zhu, L.; Wang, T.; Fu, C.W.; Heng, P.A. Single-image real-time rain removal based on depth-guided non-local features. IEEE Trans. Image Process. 2021, 30, 1759–1770.
- Hu, X.; Fu, C.W.; Zhu, L.; Heng, P.A. Depth-attentional features for single-image rain removal. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 8022–8031.
- Bhutto, J.A.; Zhang, R.; Rahman, Z. Symmetric enhancement of visual clarity through a multi-scale dilated residual recurrent network approach for image deraining. Symmetry 2023, 15, 1571.
- Santhaseelan, V.; Asari, V.K. Utilizing local phase information to remove rain from video. Int. J. Comput. Vis. 2015, 112, 71–89.
- Liu, J.; Yang, W.; Yang, S.; Guo, Z. Erase or fill? Deep joint recurrent rain removal and reconstruction in videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 3233–3242.
- Zhu, L.; Fu, C.W.; Lischinski, D.; Heng, P.A. Joint bi-layer optimization for single-image rain streak removal. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2526–2534.
- Huang, H.; Yu, A.; He, R. Memory oriented transfer learning for semi-supervised image deraining. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 19–25 June 2021; pp. 7732–7741.
- Kang, L.W.; Lin, C.W.; Fu, Y.H. Automatic single-image-based rain streaks removal via image decomposition. IEEE Trans. Image Process. 2011, 21, 1742–1755.
- Gu, S.; Meng, D.; Zuo, W.; Zhang, L. Joint convolutional analysis and synthesis sparse representation for single image layer separation. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 1708–1716.
- Wei, M.; Shen, Y.; Wang, Y.; Xie, H.; Wang, F.L. RainDiffusion: When unsupervised learning meets diffusion models for real-world image deraining. arXiv 2023, arXiv:2301.09430.
- Fu, X.; Huang, J.; Ding, X.; Liao, Y.; Paisley, J. Clearing the skies: A deep network architecture for single-image rain removal. IEEE Trans. Image Process. 2017, 26, 2944–2956.
- Li, G.; He, X.; Zhang, W.; Chang, H.; Dong, L.; Lin, L. Non-locally enhanced encoder-decoder network for single image de-raining. In Proceedings of the 26th ACM International Conference on Multimedia, Seoul, Republic of Korea, 22–26 October 2018; pp. 1056–1064.
- Wang, G.; Sun, C.; Sowmya, A. ERL-Net: Entangled representation learning for single image de-raining. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 29 October–1 November 2019; pp. 5644–5652.
- Ren, D.; Zuo, W.; Hu, Q.; Zhu, P.; Meng, D. Progressive image deraining networks: A better and simpler baseline. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 3937–3946.
- Zhang, H.; Sindagi, V.; Patel, V.M. Image de-raining using a conditional generative adversarial network. IEEE Trans. Circuits Syst. Video Technol. 2019, 30, 3943–3956.
- Wei, W.; Meng, D.; Zhao, Q.; Xu, Z.; Wu, Y. Semi-supervised transfer learning for image rain removal. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 3877–3886.
- Wei, Y.; Zhang, Z.; Wang, Y.; Zhang, H.; Zhao, M.; Xu, M.; Wang, M. Semi-DerainGAN: A new semi-supervised single image deraining. In Proceedings of the 2021 IEEE International Conference on Multimedia and Expo (ICME), Shenzhen, China, 5–9 July 2021; IEEE: New York, NY, USA, 2021; pp. 1–6.
- Zhu, J.Y.; Park, T.; Isola, P.; Efros, A.A. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2223–2232.
- Qu, Y.; Chen, Y.; Huang, J.; Xie, Y. Enhanced pix2pix dehazing network. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 8160–8168.
- Wang, Y.; Yan, X.; Guan, D.; Wei, M.; Chen, Y.; Zhang, X.P.; Li, J. Cycle-SNSPGAN: Towards real-world image dehazing via cycle spectral normalized soft likelihood estimation patch GAN. IEEE Trans. Intell. Transp. Syst. 2022, 23, 20368–20382.
- Li, R.; Cheong, L.F.; Tan, R.T. Heavy rain image restoration: Integrating physics model and conditional adversarial learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 1633–1642.
- Wang, Y.; Song, Y.; Ma, C.; Zeng, B. Rethinking image deraining via rain streaks and vapors. In Proceedings of the European Conference on Computer Vision, Glasgow, UK, 23–28 August 2020; Springer: Cham, Switzerland, 2020; pp. 367–382.
- Shen, Y.; Feng, Y.; Wang, W.; Liang, D.; Qin, J.; Xie, H.; Wei, M. MBA-RainGAN: A multi-branch attention generative adversarial network for mixture of rain removal. In Proceedings of the ICASSP 2022–2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Singapore, 22–27 May 2022; IEEE: New York, NY, USA, 2022; pp. 3418–3422.
- Li, L.; Dong, Y.; Ren, W.; Pan, J.; Gao, C.; Sang, N.; Yang, M.H. Semi-supervised image dehazing. IEEE Trans. Image Process. 2019, 29, 2766–2779.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
- Zhao, H.; Shi, J.; Qi, X.; Wang, X.; Jia, J. Pyramid scene parsing network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2881–2890.
- Fu, J.; Liu, J.; Tian, H.; Li, Y.; Bao, Y.; Fang, Z.; Lu, H. Dual attention network for scene segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 3146–3154.
- Huang, Z.; Wang, X.; Huang, L.; Huang, C.; Wei, Y.; Liu, W. CCNet: Criss-cross attention for semantic segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 29 October–1 November 2019; pp. 603–612.
- Isola, P.; Zhu, J.Y.; Zhou, T.; Efros, A.A. Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1125–1134.
- Mao, X.; Li, Q.; Xie, H.; Lau, R.Y.; Wang, Z.; Paul Smolley, S. Least squares generative adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2794–2802.
- Aly, H.A.; Dubois, E. Image up-sampling using total-variation regularization with a new observation model. IEEE Trans. Image Process. 2005, 14, 1647–1659.
- Yang, W.; Tan, R.T.; Feng, J.; Liu, J.; Guo, Z.; Yan, S. Deep joint rain detection and removal from a single image. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1357–1366.
- Cordts, M.; Omran, M.; Ramos, S.; Rehfeld, T.; Enzweiler, M.; Benenson, R.; Franke, U.; Roth, S.; Schiele, B. The Cityscapes dataset for semantic urban scene understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 3213–3223.
- Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980.
- Yasarla, R.; Patel, V.M. Uncertainty guided multi-scale residual learning-using a cycle spinning CNN for single image de-raining. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 8405–8414.
- Wang, C.; Xing, X.; Wu, Y.; Su, Z.; Chen, J. DCSFN: Deep cross-scale fusion network for single image rain removal. In Proceedings of the 28th ACM International Conference on Multimedia, Seattle, WA, USA, 12–16 October 2020; pp. 1643–1651.
- Chen, C.; Li, H. Robust representation learning with feedback for single image deraining. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 19–25 June 2021; pp. 7742–7751.
- Quan, R.; Yu, X.; Liang, Y.; Yang, Y. Removing raindrops and rain streaks in one go. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 19–25 June 2021; pp. 9147–9156.
- Qin, X.; Wang, Z.; Bai, Y.; Xie, X.; Jia, H. FFA-Net: Feature fusion attention network for single image dehazing. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 11908–11915.
- Ge, Z.; Liu, S.; Wang, F.; Li, Z.; Sun, J. YOLOX: Exceeding YOLO series in 2021. arXiv 2021, arXiv:2107.08430.
| Method | RainCityscapes PSNR | RainCityscapes SSIM | Rain200L PSNR | Rain200L SSIM | Rain200H PSNR | Rain200H SSIM | Time (s) |
|---|---|---|---|---|---|---|---|
| **Rain Streak Removal** | | | | | | | |
| DSC [4] | 16.41 | 0.771 | 25.68 | 0.875 | 15.29 | 0.423 | 199.5 |
| GMM [5] | 18.39 | 0.819 | 27.16 | 0.898 | 14.54 | 0.548 | 600.4 |
| UMRL [49] | 27.97 | 0.912 | 31.24 | 0.954 | 27.27 | 0.898 | 1.349 |
| SPA-Net [8] | 20.90 | 0.862 | 31.59 | 0.965 | 23.85 | 0.852 | 0.154 |
| PReNet [28] | 26.83 | 0.910 | 36.76 | 0.980 | 28.08 | 0.887 | 0.262 |
| DCSFN [50] | 26.37 | 0.872 | 38.21 | 0.982 | 28.26 | 0.899 | 1.524 |
| MSPFN [11] | 25.51 | 0.903 | 32.98 | 0.969 | 27.38 | 0.869 | 3.863 |
| MPRNet [12] | 29.06 | 0.918 | 37.32 | 0.981 | 28.32 | 0.916 | 3.668 |
| DerainRLNet [51] | 27.39 | 0.881 | 37.38 | 0.980 | 28.87 | 0.895 | 0.925 |
| CCN [52] | 29.34 | 0.950 | 37.01 | 0.982 | 29.12 | 0.921 | 0.518 |
| SIRR [30] | 28.74 | 0.920 | 35.32 | 0.968 | 26.21 | 0.813 | 0.281 |
| Syn2Real [13] | 28.66 | 0.919 | 34.26 | 0.946 | 25.19 | 0.806 | 1.241 |
| JRGR [14] | 23.85 | 0.877 | 30.15 | 0.934 | 22.19 | 0.801 | 0.401 |
| **Rain Streak Removal + Rainy Haze Removal** | | | | | | | |
| MSPFN [11] + FFA [53] | 25.56 | 0.906 | 32.98 | 0.969 | 27.40 | 0.869 | 3.927 |
| MPRNet [12] + FFA [53] | 29.10 | 0.920 | 37.33 | 0.981 | 28.33 | 0.917 | 3.733 |
| Syn2Real [13] + FFA [53] | 28.72 | 0.922 | 34.27 | 0.947 | 25.21 | 0.807 | 1.304 |
| JRGR [14] + FFA [53] | 23.89 | 0.879 | 30.14 | 0.934 | 22.21 | 0.802 | 0.465 |
| **Rain Streak Removal + Rainy Haze Removal** | | | | | | | |
| DAF-Net [16] | 30.66 | 0.924 | 34.07 | 0.964 | 24.65 | 0.860 | 0.209 |
| DGNL-Net [15] | 32.21 | 0.936 | 36.42 | 0.979 | 27.79 | 0.886 | 0.332 |
| MBA-RainGAN [37] | 29.51 | 0.917 | 33.51 | 0.948 | 23.73 | 0.854 | 0.377 |
| Ours | 33.82 | 0.956 | 38.41 | 0.985 | 29.17 | 0.917 | 0.346 |
Model | M1 | M2 | M3 | M4 | M5 | M6 | M7 | M8 |
---|---|---|---|---|---|---|---|---|
BL | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
ATT | w/o | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
SAM | w/o | w/o | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
MSAM | w/o | w/o | w/o | ✓ | ✓ | ✓ | ✓ | ✓ |
AGCM | w/o | w/o | w/o | w/o | ✓ | ✓ | ✓ | ✓ |
DEPTH | w/o | w/o | w/o | w/o | w/o | ✓ | ✓ | ✓ |
PAM | w/o | w/o | w/o | w/o | w/o | w/o | ✓ | ✓ |
DGCM | w/o | w/o | w/o | w/o | w/o | w/o | w/o | ✓ |
PSNR | 30.65 | 31.51 | 31.54 | 31.77 | 31.94 | 32.87 | 33.38 | 33.82 |
SSIM | 0.910 | 0.921 | 0.922 | 0.926 | 0.930 | 0.945 | 0.950 | 0.956 |
| Loss | L1 | L2 | L3 | L4 | L5 |
|---|---|---|---|---|---|
| | ✓ | ✓ | ✓ | ✓ | ✓ |
| | w/o | ✓ | ✓ | ✓ | ✓ |
| | w/o | w/o | ✓ | ✓ | ✓ |
| | w/o | w/o | w/o | ✓ | ✓ |
| | w/o | w/o | w/o | w/o | ✓ |
| PSNR | 31.91 | 32.27 | 32.82 | 33.51 | 33.82 |
| SSIM | 0.939 | 0.942 | 0.947 | 0.951 | 0.956 |
| Labeled data | Syn2Real PSNR (labeled only) | PSNR (+unlabeled) | Gain | SSIM (labeled only) | SSIM (+unlabeled) | Gain | Ours PSNR (labeled only) | PSNR (+unlabeled) | Gain | SSIM (labeled only) | SSIM (+unlabeled) | Gain |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 10% | 22.89 | 23.51 | 0.62 | 0.740 | 0.759 | 0.019 | 26.28 | 27.01 | 0.73 | 0.851 | 0.874 | 0.023 |
| 20% | 23.15 | 23.87 | 0.72 | 0.752 | 0.774 | 0.022 | 27.11 | 27.76 | 0.65 | 0.869 | 0.886 | 0.017 |
| 40% | 23.80 | 24.59 | 0.79 | 0.770 | 0.791 | 0.021 | 27.73 | 28.36 | 0.63 | 0.885 | 0.903 | 0.018 |
| 60% | 24.51 | 25.16 | 0.65 | 0.785 | 0.804 | 0.019 | 28.35 | 28.97 | 0.62 | 0.901 | 0.916 | 0.015 |
| 100% | 25.19 | - | - | 0.806 | - | - | 29.17 | - | - | 0.917 | - | - |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Yu, R.; Shu, N.; Zhang, P.; Li, Y. Semi-RainGAN: A Semisupervised Coarse-to-Fine Guided Generative Adversarial Network for Mixture of Rain Removal. Symmetry 2023, 15, 1832. https://doi.org/10.3390/sym15101832