AutoGAN: An Automated Human-Out-of-the-Loop Approach for Training Generative Adversarial Networks
Abstract
1. Introduction
- Present a comprehensive review of the literature on: (i) multiple distances and performance measures that can be used to assess the performance of a GAN; and (ii) existing algorithms involving automation in GANs;
- Introduce the AutoGAN Algorithm, a new automatic human-out-of-the-loop approach for determining when to stop training a GAN that is applicable to a variety of data modalities, including imagery and tabular datasets;
- Provide an extensive experimental comparison using multiple imagery and tabular datasets that include multiple GAN evaluation metrics;
- Provide all of our code so that AutoGAN can be easily used and to allow the reproducibility of our research.
2. Background and Related Work
2.1. Generative Adversarial Networks
2.2. Relevant Distances and Performance Measures
2.3. From Distances and Measures to Experimental Configurations for Evaluating the Performance of GANs
2.3.1. Classification Accuracy Score: Train on Synthetic Data, and Test on Real Data
2.3.2. Classification Accuracy Score: Train on Real Data, and Test on Synthetic Data
2.3.3. Inception Score
2.3.4. Confidence and Diversity Score
2.3.5. The Fréchet Inception Distance
2.3.6. Fréchet Confidence and Diversity Score
2.4. Algorithms Involving Automation in GANs
3. Proposed AutoGAN Method
3.1. Algorithm Goals and Requisites
3.2. The AutoGAN Algorithm
3.3. Potential Oracle Instances
Algorithm 1. The AutoGAN algorithm
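The body of Algorithm 1 is not reproduced in this extract. As a rough sketch of the loop its caption and the parameters of Appendix A.2 imply (train in fixed units of iterations, query an oracle after each unit, stop after a set number of failed attempts to improve), the following is a minimal reconstruction, not the authors' implementation; `train_gan_iterations` and `oracle_score` are hypothetical stand-ins for the GAN training step and one of the oracle instances of Section 3.3.

```python
def autogan_train(train_gan_iterations, oracle_score,
                  iterations_unit=100, max_failed_attempts=15):
    """Train a GAN in units of iterations, stopping once the oracle
    score has failed to improve for `max_failed_attempts` consecutive
    units. Returns the unit index and score of the best model seen.
    (Assumes higher score = better; distance-based oracles such as
    FID/FCD would negate the score first.)"""
    best_score = float("-inf")
    best_unit = 0
    failed = 0
    unit = 0
    while failed < max_failed_attempts:
        unit += 1
        train_gan_iterations(iterations_unit)  # advance GAN training
        score = oracle_score()                 # query the chosen oracle
        if score > best_score:
            best_score, best_unit, failed = score, unit, 0
        else:
            failed += 1
    return best_unit, best_score

# toy oracle: the score improves for five units, then plateaus
scores = iter([0.1, 0.3, 0.5, 0.7, 0.9] + [0.9] * 100)
best_unit, best = autogan_train(lambda n: None, lambda: next(scores),
                                max_failed_attempts=3)
# training stops after three non-improving units; the unit-5 model is kept
```

The defaults mirror Appendix A.2 (iterations unit 100, 15 accepted failed attempts); in practice the loop would also checkpoint the generator at each improvement so the best model can be restored.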
3.3.1. Oracle Instance Based on CAS-Real
3.3.2. Oracle Instance Based on CAS-syn
3.3.3. Oracle Instance Based on Inception Score
3.3.4. Oracle Instance Based on Fréchet Inception Distance
3.3.5. Oracle Instance Based on Fréchet Confidence and Diversity Score
3.3.6. Oracle Instance Based on Confidence and Diversity Score
3.3.7. Overview of the Oracle Instances and Their Characteristics
4. Experimental Evaluation
4.1. Experiments’ Overview
4.2. Datasets
4.2.1. Tabular Datasets
4.2.2. Imagery Datasets
4.3. Experimental Setting
5. Results and Discussion
5.1. Experiment #1: Tabular Datasets
- Tor-based datasets
- iscx_defacement-based datasets
- cira-based datasets
5.2. Experiment #2: Imagery Datasets
5.2.1. Black-and-White Imagery Datasets
- kmnist-based datasets
5.2.2. Color Imagery Datasets
- cifar10-based datasets
5.3. Experiment #3: Imagery Datasets with Human Inspection
- Fashion-MNIST-based datasets
5.4. Correlation Analysis of the Methods Used for Stopping the GAN Training
6. Conclusions
Supplementary Materials
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Appendix A. Implementation Details
Appendix A.1. Details of the CGAN
Generator, black-and-white imagery (the 784-neuron output is a flattened 28 × 28 image):

Explanation | Layer | Output
---|---|---
Input1: noise | 1 | 100 Neurons |
Input2: class label | 1 | 1 Neuron |
Transforming class label into 100 | 2 | 100 Neurons |
Multiply transformed class label with input noise | 3 | 100 Neurons |
Dense | 4 | 100 Neurons |
Dense | 5 | 1024 Neurons |
Dense | 6 | 784 Neurons |
Discriminator, black-and-white imagery (784-neuron image input, single real/fake output):

Explanation | Layer | Output
---|---|---
Input | 1 | 784 Neurons |
Dense | 2 | 512 Neurons |
Dense | 3 | 512 Neurons |
Dense | 4 | 512 Neurons |
Dense | 5 | 1 Neuron |
Generator, color imagery (the 3072-neuron output is a flattened 32 × 32 × 3 image):

Explanation | Layer | Output
---|---|---
Input1: noise | 1 | 100 Neurons |
Input2: class label | 1 | 1 Neuron |
Transforming class label into 100 | 2 | 100 Neurons |
Multiply transformed class label with input noise | 3 | 100 Neurons |
Dense | 4 | 4096 Neurons |
Reshape to (4, 4, 256) | 5 | (4, 4, 256) |
Conv2DTranspose: filters: 128; kernel shape: (4, 4) | 6 | (8, 8, 128) |
Conv2DTranspose: filters: 128; kernel shape: (4, 4) | 7 | (16, 16, 128) |
Conv2DTranspose: filters: 128; kernel shape: (4, 4) | 8 | (32, 32, 128) |
Conv2D: filters: 3; kernel shape: (3, 3) | 9 | (32, 32, 3) |
Reshape to 3072 | 10 | 3072 Neurons |
Discriminator, color imagery (3072-neuron image input, single real/fake output):

Explanation | Layer | Output
---|---|---
Input1: the image | 1 | 3072 Neurons |
Input2: class label | 1 | 1 Neuron |
Transforming class label into 100 | 2 | 100 Neurons |
Multiply transformed class label with the input image | 3 | 100 Neurons |
Reshape to (32, 32, 3) | 4 | (32, 32, 3) |
Conv2D: filters: 64; kernel shape: (3, 3) | 5 | (32, 32, 64) |
Conv2D: filters: 128; kernel shape: (3, 3) | 6 | (16, 16, 128) |
Conv2D: filters: 128; kernel shape: (3, 3) | 7 | (8, 8, 128) |
Conv2D: filters: 256; kernel shape: (3, 3) | 8 | (4, 4, 256) |
Flatten | 9 | 4096 |
Dense | 10 | 1 Neuron |
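The shape progressions in the two color-imagery tables above can be sanity-checked with a few lines. The tables list only filters and kernel shapes, so the stride-2 and 'same'-padding settings below are assumptions chosen to reproduce the listed outputs:

```python
import math

def conv2d_same(size, stride):
    """Spatial output size of a Conv2D with 'same' padding."""
    return math.ceil(size / stride)

def conv2d_transpose_same(size, stride):
    """Spatial output size of a Conv2DTranspose with 'same' padding."""
    return size * stride

# Generator: (4, 4, 256) upsampled by three stride-2 Conv2DTranspose layers
s = 4
for _ in range(3):
    s = conv2d_transpose_same(s, 2)  # 4 -> 8 -> 16 -> 32
assert s == 32
assert 32 * 32 * 3 == 3072           # final reshape to 3072 neurons

# Discriminator: stride-1 conv keeps 32x32, then three stride-2 convs
d = conv2d_same(32, 1)               # (32, 32, 64)
for _ in range(3):
    d = conv2d_same(d, 2)            # 32 -> 16 -> 8 -> 4
assert d == 4
assert 4 * 4 * 256 == 4096           # flattened size feeding the final Dense
```

The same arithmetic explains the generator's Dense layer of 4096 neurons: it is exactly the flattened (4, 4, 256) starting volume.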
Appendix A.2. Parameters Used for AutoGAN Algorithm
- The number of accepted failed attempts: 15;
- Iterations unit: 100;
- The number of generated samples per class for calculating the scores: 500;
- Oracle CAS-syn:
  - The number of hidden layers for the classifier: 2;
  - The number of perceptrons for the classifier: 100;
  - The number of training epochs for the classifier: 100;
  - Optimizer: adam;
  - Batch size: 32.
- Oracle CAS-real:
  - The number of hidden layers for the classifier: 2;
  - The number of perceptrons for the classifier: 100;
  - The number of training epochs for the classifier: 100;
  - Optimizer: adam;
  - Batch size: 32.
- Oracle CDS:
  - The number of hidden layers for the classifier of CDS: 2;
  - The number of perceptrons for the classifier of CDS: 100;
  - The number of training epochs for the classifier of CDS: 100;
  - Optimizer: adam;
  - Batch size: 32.
- Oracle FCD:
  - The autoencoder consists of six layers of sizes 784, 784 × 2, 784, 784/2 (the bottleneck), 784 × 2, 784;
  - Optimizer: adam;
  - Loss: mse;
  - The number of training epochs for the autoencoder: 200.
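The FCD oracle above scores the GAN by a Fréchet distance between real and synthetic samples in the autoencoder's bottleneck space. As a simplified illustration only (diagonal covariances; the full Fréchet/FID formula uses the complete covariance matrices and a matrix square root), the distance between two feature sets can be computed as:

```python
from statistics import fmean, pstdev

def frechet_diagonal(real_feats, syn_feats):
    """Fréchet distance between two sets of feature vectors, treating
    each dimension as an independent Gaussian (a diagonal-covariance
    simplification of the full Fréchet distance):
        d^2 = ||mu_r - mu_s||^2 + sum_d (sd_r[d] - sd_s[d])^2
    """
    dims = range(len(real_feats[0]))
    mu_r = [fmean(v[d] for v in real_feats) for d in dims]
    mu_s = [fmean(v[d] for v in syn_feats) for d in dims]
    sd_r = [pstdev([v[d] for v in real_feats]) for d in dims]
    sd_s = [pstdev([v[d] for v in syn_feats]) for d in dims]
    return (sum((a - b) ** 2 for a, b in zip(mu_r, mu_s))
            + sum((a - b) ** 2 for a, b in zip(sd_r, sd_s)))
```

Identical feature sets yield a distance of zero, so lower is better; an AutoGAN run with this oracle would stop once the distance has failed to shrink for the accepted number of failed attempts.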
Appendix A.3. Classifier Details
- Tor-based datasets: one hidden layer of 10 perceptrons;
- cic_syscallsbinders_adware-based datasets: two hidden layers of 20 perceptrons;
- cic_syscallsbinders_smsmalware-based datasets: two hidden layers of 20 perceptrons;
- cic_syscalls_adware-based datasets: two hidden layers of 100 perceptrons;
- iscx_spam-based datasets: two hidden layers of 20 perceptrons;
- iscx_defacement-based datasets: two hidden layers of 100 perceptrons;
- cira-based datasets: one hidden layer of 10 perceptrons;
- mnist-based, fashion-mnist-based, Kuzushiji-mnist-based, and cifar10-based datasets: one hidden layer of five perceptrons.
References
1. Borji, A. Pros and Cons of GAN Evaluation Measures. arXiv 2018, arXiv:1802.03446.
2. Goodfellow, I.J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial Networks. arXiv 2014, arXiv:1406.2661.
3. Brock, A.; Donahue, J.; Simonyan, K. Large Scale GAN Training for High Fidelity Natural Image Synthesis. arXiv 2018, arXiv:1809.11096.
4. Karras, T.; Aila, T.; Laine, S.; Lehtinen, J. Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv 2017, arXiv:1710.10196.
5. Karras, T.; Laine, S.; Aila, T. A Style-Based Generator Architecture for Generative Adversarial Networks. arXiv 2018, arXiv:1812.04948.
6. Karras, T.; Aittala, M.; Laine, S.; Härkönen, E.; Hellsten, J.; Lehtinen, J.; Aila, T. Alias-Free Generative Adversarial Networks. arXiv 2021, arXiv:2106.12423.
7. Karras, T.; Laine, S.; Aittala, M.; Hellsten, J.; Lehtinen, J.; Aila, T. Analyzing and Improving the Image Quality of StyleGAN. arXiv 2019, arXiv:1912.04958.
8. Isola, P.; Zhu, J.Y.; Zhou, T.; Efros, A.A. Image-to-Image Translation with Conditional Adversarial Networks. arXiv 2016, arXiv:1611.07004.
9. Zhu, J.Y.; Park, T.; Isola, P.; Efros, A.A. Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks. arXiv 2017, arXiv:1703.10593.
10. Emami, H.; Aliabadi, M.M.; Dong, M.; Chinnam, R.B. SPA-GAN: Spatial Attention GAN for Image-to-Image Translation. IEEE Trans. Multimed. 2021, 23, 391–401.
11. Ledig, C.; Theis, L.; Huszar, F.; Caballero, J.; Cunningham, A.; Acosta, A.; Aitken, A.; Tejani, A.; Totz, J.; Wang, Z.; et al. Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network. arXiv 2016, arXiv:1609.04802.
12. Tulyakov, S.; Liu, M.Y.; Yang, X.; Kautz, J. MoCoGAN: Decomposing Motion and Content for Video Generation. arXiv 2017, arXiv:1707.04993.
13. Munoz, A.; Zolfaghari, M.; Argus, M.; Brox, T. Temporal Shift GAN for Large Scale Video Generation. arXiv 2020, arXiv:2004.01823.
14. Dong, H.W.; Hsiao, W.Y.; Yang, L.C.; Yang, Y.H. MuseGAN: Multi-track Sequential Generative Adversarial Networks for Symbolic Music Generation and Accompaniment. arXiv 2017, arXiv:1709.06298.
15. Bojchevski, A.; Shchur, O.; Zügner, D.; Günnemann, S. NetGAN: Generating Graphs via Random Walks. arXiv 2018, arXiv:1803.00816.
16. Guo, J.; Lu, S.; Cai, H.; Zhang, W.; Yu, Y.; Wang, J. Long Text Generation via Adversarial Training with Leaked Information. arXiv 2017, arXiv:1709.08624.
17. Park, N.; Mohammadi, M.; Gorde, K.; Jajodia, S.; Park, H.; Kim, Y. Data Synthesis Based on Generative Adversarial Networks. Proc. VLDB Endow. 2018, 11, 1071–1083.
18. Nazari, E.; Branco, P. On Oversampling via Generative Adversarial Networks under Different Data Difficult Factors. In Proceedings of the International Workshop on Learning with Imbalanced Domains: Theory and Applications, Online, 17 September 2021; PMLR, 2021; pp. 76–89.
19. Nazari, E.; Branco, P.; Jourdan, G.V. Using CGAN to Deal with Class Imbalance and Small Sample Size in Cybersecurity Problems. In Proceedings of the 2021 18th International Conference on Privacy, Security and Trust (PST), Auckland, New Zealand, 13–15 December 2021; IEEE: New York, NY, USA, 2021; pp. 1–10.
20. Pennisi, M.; Palazzo, S.; Spampinato, C. Self-improving Classification Performance through GAN Distillation. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), Montreal, BC, Canada, 11–17 October 2021; pp. 1640–1648.
21. Chaudhari, P.; Agrawal, H.; Kotecha, K. Data Augmentation Using MG-GAN for Improved Cancer Classification on Gene Expression Data. Soft Comput. 2020, 24, 11381–11391.
22. Luo, Y.; Cai, X.; Zhang, Y.; Xu, J.; Yuan, X. Multivariate Time Series Imputation with Generative Adversarial Networks. In Advances in Neural Information Processing Systems; Bengio, S., Wallach, H., Larochelle, H., Grauman, K., Cesa-Bianchi, N., Garnett, R., Eds.; Curran Associates, Inc.: Montreal, QC, Canada, 2018; Volume 31.
23. Gupta, A.; Johnson, J.; Fei-Fei, L.; Savarese, S.; Alahi, A. Social GAN: Socially Acceptable Trajectories with Generative Adversarial Networks. arXiv 2018, arXiv:1803.10892.
24. Saxena, D.; Cao, J. D-GAN: Deep Generative Adversarial Nets for Spatio-Temporal Prediction. arXiv 2019, arXiv:1907.08556.
25. Mescheder, L.; Geiger, A.; Nowozin, S. Which Training Methods for GANs do actually Converge? arXiv 2018, arXiv:1801.04406.
26. Daskalakis, C.; Ilyas, A.; Syrgkanis, V.; Zeng, H. Training GANs with Optimism. arXiv 2017, arXiv:1711.00141.
27. Goodfellow, I. NIPS 2016 Tutorial: Generative Adversarial Networks. arXiv 2017, arXiv:1701.00160.
28. Zhou, S.; Gordon, M.L.; Krishna, R.; Narcomey, A.; Fei-Fei, L.; Bernstein, M.S. HYPE: A Benchmark for Human eYe Perceptual Evaluation of Generative Models. arXiv 2019, arXiv:1904.01121.
29. Salimans, T.; Goodfellow, I.; Zaremba, W.; Cheung, V.; Radford, A.; Chen, X.; Chen, X. Improved Techniques for Training GANs. In Advances in Neural Information Processing Systems; Lee, D., Sugiyama, M., Luxburg, U., Guyon, I., Garnett, R., Eds.; Curran Associates, Inc.: Barcelona, Spain, 2016; Volume 29.
30. Borji, A. Pros and Cons of GAN Evaluation Measures: New Developments. arXiv 2021, arXiv:2103.09396.
31. Salimans, T.; Goodfellow, I.; Zaremba, W.; Cheung, V.; Radford, A.; Chen, X. Improved Techniques for Training GANs. arXiv 2016, arXiv:1606.03498.
32. Heusel, M.; Ramsauer, H.; Unterthiner, T.; Nessler, B.; Hochreiter, S. GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium. arXiv 2017, arXiv:1706.08500.
33. Ravuri, S.; Vinyals, O. Classification Accuracy Score for Conditional Generative Models. arXiv 2019, arXiv:1905.10887.
34. Shmelkov, K.; Schmid, C.; Alahari, K. How good is my GAN? In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018.
35. Arjovsky, M.; Chintala, S.; Bottou, L. Wasserstein GAN. arXiv 2017, arXiv:1701.07875.
36. Papadimitriou, C.H. Computational Complexity; Addison-Wesley: Reading, MA, USA, 1994.
37. Mirza, M.; Osindero, S. Conditional Generative Adversarial Nets. arXiv 2014, arXiv:1411.1784.
38. Kullback, S. Information Theory and Statistics; Courier Corporation: New York, NY, USA, 1997.
39. Ramdas, A.; Garcia, N.; Cuturi, M. On Wasserstein Two Sample Testing and Related Families of Nonparametric Tests. arXiv 2015, arXiv:1509.02237.
40. Barratt, S.; Sharma, R. A Note on the Inception Score. arXiv 2018, arXiv:1801.01973.
41. Obukhov, A.; Krasnyanskiy, M. Quality Assessment Method for GAN Based on Modified Metrics Inception Score and Fréchet Inception Distance. In Software Engineering Perspectives in Intelligent Systems; Silhavy, R., Silhavy, P., Prokopova, Z., Eds.; Springer International Publishing: Cham, Switzerland, 2020; pp. 102–114.
42. Tan, M.; Chen, B.; Pang, R.; Vasudevan, V.; Sandler, M.; Howard, A.; Le, Q.V. MnasNet: Platform-Aware Neural Architecture Search for Mobile. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–17 June 2019; pp. 2820–2828.
43. Fu, Y.; Chen, W.; Wang, H.; Li, H.; Lin, Y.; Wang, Z. AutoGAN-Distiller: Searching to Compress Generative Adversarial Networks. arXiv 2020, arXiv:2006.08198.
44. Wang, H.; Huan, J. AGAN: Towards Automated Design of Generative Adversarial Networks. arXiv 2019, arXiv:1906.11080.
45. Gong, X.; Chang, S.; Jiang, Y.; Wang, Z. AutoGAN: Neural Architecture Search for Generative Adversarial Networks. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27–28 October 2019; pp. 3224–3234.
46. Morozov, S.; Voynov, A.; Babenko, A. On Self-Supervised Image Representations for GAN Evaluation. In Proceedings of the International Conference on Learning Representations, Virtual Event, 3–7 May 2021.
47. Isola, P.; Zhu, J.Y.; Zhou, T.; Efros, A.A. Image-To-Image Translation with Conditional Adversarial Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017.
48. Gulrajani, I.; Ahmed, F.; Arjovsky, M.; Dumoulin, V.; Courville, A.C. Improved Training of Wasserstein GANs. In Advances in Neural Information Processing Systems; Guyon, I., Luxburg, U.V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., Garnett, R., Eds.; Curran Associates, Inc.: Long Beach, CA, USA, 2017; Volume 30.
49. Habibi Lashkari, A.; Draper Gil, G.; Mamun, M.S.I.; Ghorbani, A.A. Characterization of Tor Traffic using Time based Features. In Proceedings of the 3rd International Conference on Information Systems Security and Privacy—ICISSP, Porto, Portugal, 19–21 February 2017; INSTICC; SciTePress: Setubal, Portugal, 2017; pp. 253–262.
50. Mahdavifar, S.; Abdul Kadir, A.F.; Fatemi, R.; Alhadidi, D.; Ghorbani, A.A. Dynamic Android Malware Category Classification using Semi-Supervised Deep Learning. In Proceedings of the 2020 IEEE Intl Conf on Dependable, Autonomic and Secure Computing, Intl Conf on Pervasive Intelligence and Computing, Intl Conf on Cloud and Big Data Computing, Intl Conf on Cyber Science and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech), Online Event, 17–22 August 2020; pp. 515–522.
51. Mamun, M.S.I.; Rathore, M.A.; Lashkari, A.H.; Stakhanova, N.; Ghorbani, A.A. Detecting Malicious URLs Using Lexical Analysis. In Network and System Security; Chen, J., Piuri, V., Su, C., Yung, M., Eds.; Springer International Publishing: Cham, Switzerland, 2016; pp. 467–482.
52. MontazeriShatoori, M.; Davidson, L.; Kaur, G.; Habibi Lashkari, A. Detection of DoH Tunnels using Time-series Classification of Encrypted Traffic. In Proceedings of the 2020 IEEE DASC/PiCom/CBDCom/CyberSciTech, Online Event, 17–22 August 2020; pp. 63–70.
53. Deng, L. The MNIST Database of Handwritten Digit Images for Machine Learning Research. IEEE Signal Process. Mag. 2012, 29, 141–142.
54. Xiao, H.; Rasul, K.; Vollgraf, R. Fashion-MNIST: A Novel Image Dataset for Benchmarking Machine Learning Algorithms. arXiv 2017, arXiv:1708.07747.
55. Clanuwat, T.; Bober-Irizar, M.; Kitamoto, A.; Lamb, A.; Yamamoto, K.; Ha, D. Deep Learning for Classical Japanese Literature. arXiv 2018, arXiv:1812.01718.
56. Krizhevsky, A. Learning Multiple Layers of Features from Tiny Images; Technical Report; Computer Science, University of Toronto: Toronto, ON, Canada, 2009.
Oracle Instance | Labeled Data during Training | Labeled Data to Generate Score | Type of GAN | Imagery (Color) | Imagery (B&W) | Tabular | Train Times | Metric | Source
---|---|---|---|---|---|---|---|---|---
CAS-real | Required | Required | Requires a CGAN | √ | √ | √ | Multiple | F1-score | [33,34]
CAS-syn | Required | Not required | Requires a CGAN | √ | √ | √ | One time | F1-score | [8,34]
IS | Required | Not required | Any GAN | √ | | | One time | KL-divergence | [31]
FID | Required | Not required | Any GAN | √ | √ | | One time | Fréchet distance | [32]
CDS | Required | Not required | Any GAN | √ | √ | √ | One time | KL-divergence | [41]
FCD | Not required | Not required | Any GAN | √ | √ | √ | One time | Fréchet distance | [41]
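Several of the oracles in the table above reduce to a KL-divergence between classifier outputs. For example, the Inception Score is the exponential of the average KL divergence between each sample's predicted class distribution p(y|x) and the batch marginal p(y); the CDS variant follows the same pattern with a domain-specific classifier in place of Inception. A minimal, library-free sketch of the score itself:

```python
import math

def inception_score(pred_probs):
    """IS = exp( mean_x KL( p(y|x) || p(y) ) ), where p(y) is the
    marginal over the batch. `pred_probs` is a list of per-sample
    class-probability vectors (each summing to 1)."""
    n = len(pred_probs)
    k = len(pred_probs[0])
    # marginal class distribution p(y) over the whole batch
    marginal = [sum(p[c] for p in pred_probs) / n for c in range(k)]
    kl_sum = 0.0
    for p in pred_probs:
        kl_sum += sum(p[c] * math.log(p[c] / marginal[c])
                      for c in range(k) if p[c] > 0)
    return math.exp(kl_sum / n)
```

Confident, diverse predictions push the score toward the number of classes (each p(y|x) far from a uniform-ish marginal), while uniform or mode-collapsed predictions push it toward 1; this is what makes it usable as a stopping signal.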
 | Experiment #1 | Experiment #2 | Experiment #3
---|---|---|---
Data Used | Tabular Data | Imagery Data 1 | Imagery Data 2
 | Initial | Initial | Initial
 | Fixed | Fixed | Fixed
Alternative Methods | AutoGAN-CAS-real | AutoGAN-CAS-real | Manual
 | AutoGAN-CAS-syn | AutoGAN-CAS-syn | AutoGAN-CAS-real
 | AutoGAN-CDS | AutoGAN-CDS | AutoGAN-CAS-syn
 | AutoGAN-FCD | AutoGAN-FCD | AutoGAN-CDS
 | | AutoGAN-FID | AutoGAN-FCD
 | | AutoGAN-IS * | AutoGAN-FID
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Nazari, E.; Branco, P.; Jourdan, G.-V. AutoGAN: An Automated Human-Out-of-the-Loop Approach for Training Generative Adversarial Networks. Mathematics 2023, 11, 977. https://doi.org/10.3390/math11040977