Training Autoencoders Using Relative Entropy Constraints
Abstract
1. Introduction
2. Background
Algorithm 1 Autoencoder forward propagation algorithm.
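The body of Algorithm 1 is not reproduced in this excerpt. As a rough illustration of what a single-hidden-layer autoencoder forward pass computes, here is a minimal NumPy sketch; the sigmoid activation, layer sizes, and weight initialization are assumptions for illustration, not details taken from the paper:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def autoencoder_forward(x, W1, b1, W2, b2):
    """One forward pass: encode input x into hidden code h, then decode."""
    h = sigmoid(W1 @ x + b1)      # hidden-layer activation (the feature code)
    x_hat = sigmoid(W2 @ h + b2)  # reconstruction of the input
    return h, x_hat

# toy usage: 4-dimensional input, 2 hidden nodes
rng = np.random.default_rng(0)
x = rng.random(4)
W1, b1 = 0.1 * rng.standard_normal((2, 4)), np.zeros(2)
W2, b2 = 0.1 * rng.standard_normal((4, 2)), np.zeros(4)
h, x_hat = autoencoder_forward(x, W1, b1, W2, b2)
```

Training then adjusts the weights so that `x_hat` reproduces `x`, forcing `h` to carry the useful structure of the input.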
3. Relative Entropy Autoencoder
3.1. Solving for Feature Mapping Parameters
3.2. Solving for Decoder Parameters
Algorithm 2 Forward training algorithm for autoencoders based on relative entropy constraints.
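The body of Algorithm 2 is likewise not reproduced here. In the sparse-autoencoder literature, a "relative entropy constraint" usually denotes a KL-divergence penalty that pulls each hidden node's average activation toward a small target value, which is consistent with Section 4.2.1 studying the average activation value of the hidden layer outputs. A hedged sketch of such a penalty, assuming sigmoid hidden activations in (0, 1) and illustrative values for the target `rho` and weight `beta`:

```python
import numpy as np

def kl_sparsity_penalty(H, rho=0.05, beta=3.0):
    """KL-divergence (relative entropy) sparsity penalty.

    H:    hidden activations, shape (n_samples, n_hidden), values in (0, 1)
    rho:  target average activation for each hidden node
    beta: penalty weight
    Returns the penalty value and its gradient with respect to each
    node's average activation rho_hat.
    """
    rho_hat = H.mean(axis=0)  # average activation per hidden node
    kl = (rho * np.log(rho / rho_hat)
          + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
    grad = beta * (-rho / rho_hat + (1 - rho) / (1 - rho_hat))
    return beta * kl.sum(), grad
```

When the average activation equals the target (`rho_hat == rho`), both the penalty and its gradient vanish; otherwise the β-weighted gradient is added to each hidden unit's backpropagated error, driving the code toward the desired sparsity.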
4. Experiments
4.1. Experimental Setup
4.2. Analysis of Factors Affecting Classification Performance
4.2.1. Average Activation Value of the Hidden Layer Outputs
4.2.2. Number of Hidden Nodes
4.3. Comparison of Algorithm Performance
5. Discussion and Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Zhou, Y. Rethinking Reconstruction Autoencoder-Based Out-of-Distribution Detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 19–24 June 2022; pp. 7379–7387. [Google Scholar]
- Liu, X.; Ma, Z.; Ma, J.; Zhang, J.; Schaefer, G.; Fang, H. Image Disentanglement Autoencoder for Steganography Without Embedding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 19–24 June 2022; pp. 2303–2312. [Google Scholar]
- Kim, M. Gaussian Process Modeling of Approximate Inference Errors for Variational Autoencoders. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 19–24 June 2022; pp. 244–253. [Google Scholar]
- Yu, M.; Quan, T.; Peng, Q.; Yu, X.; Liu, L. A model-based collaborate filtering algorithm based on stacked autoencoder. Neural Comput. Appl. 2022, 34, 2503–2511. [Google Scholar] [CrossRef]
- Yang, J.; Ahn, P.; Kim, D.; Lee, H.; Kim, J. Progressive Seed Generation Auto-Encoder for Unsupervised Point Cloud Learning. In Proceedings of the IEEE International Conference on Computer Vision, Online, 11–17 October 2021; pp. 6413–6422. [Google Scholar]
- Wang, C.; Lucey, S. PAUL: Procrustean Autoencoder for Unsupervised Lifting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Online, 19–25 June 2021; pp. 434–443. [Google Scholar]
- Parmar, G.; Li, D.; Lee, K.; Tu, Z. Dual Contradistinctive Generative Autoencoder. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Online, 19–25 June 2021; pp. 823–832. [Google Scholar]
- Preechakul, K.; Chatthee, N.; Wizadwongsa, S.; Suwajanakorn, S. Diffusion Autoencoders: Toward a Meaningful and Decodable Representation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 19–24 June 2022; pp. 10619–10629. [Google Scholar]
- Meng, Q.; Catchpoole, D.; Skillicorn, D.; Kennedy, P.J. Relational Autoencoder for Feature Extraction. In Proceedings of the 2017 International Joint Conference on Neural Networks, Anchorage, AK, USA, 14–19 May 2017; pp. 364–371. [Google Scholar]
- Vincent, P.; Larochelle, H.; Bengio, Y.; Manzagol, P.A. Extracting and Composing Robust Features with Denoising Autoencoders. In Proceedings of the 25th International Conference on Machine Learning, Helsinki, Finland, 5–9 July 2008; pp. 1096–1103. [Google Scholar]
- Wu, B.; Nair, S.; Martin-Martin, R.; Fei-Fei, L.; Finn, C. Greedy Hierarchical Variational Autoencoders for Large-Scale Video Prediction. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Online, 19–25 June 2021; pp. 2318–2328. [Google Scholar]
- Ashfahani, A.; Pratama, M.; Lughofer, E.; Ong, Y.S. DEVDAN: Deep evolving denoising autoencoder. Neurocomputing 2020, 390, 297–314. [Google Scholar] [CrossRef] [Green Version]
- Zhou, C.; Paffenroth, R.C. Anomaly Detection with Robust Deep Autoencoders. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Halifax, NS, Canada, 13–17 August 2017; pp. 665–674. [Google Scholar]
- Qiao, C.; Hu, X.Y.; Xiao, L.; Calhoun, V.D.; Wang, Y.P. A deep autoencoder with sparse and graph Laplacian regularization for characterizing dynamic functional connectivity during brain development. Neurocomputing 2021, 456, 97–108. [Google Scholar] [CrossRef]
- Jian, L.; Rayhana, R.; Ma, L.; Wu, S.; Liu, Z.; Jiang, H. Infrared and visible image fusion based on deep decomposition network and saliency analysis. IEEE Trans. Multimed. 2022, 24, 3314–3326. [Google Scholar] [CrossRef]
- Shi, C.; Pun, C.M. Multiscale superpixel-based hyperspectral image classification using recurrent neural networks with stacked autoencoders. IEEE Trans. Multimed. 2020, 22, 487–501. [Google Scholar] [CrossRef]
- Hinton, G.E.; Salakhutdinov, R.R. Reducing the dimensionality of data with neural networks. Science 2006, 313, 504–507. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Rumelhart, D.E.; Hinton, G.E.; Williams, R.J. Learning representations by back-propagating errors. Nature 1986, 323, 533–536. [Google Scholar] [CrossRef]
- Wang, K.; Guo, P.; Xin, X. Autoencoder, Low Rank Approximation and Pseudoinverse Learning Algorithm. In Proceedings of the 2017 IEEE International Conference on Systems, Man, and Cybernetics, Banff, AB, Canada, 1–4 October 2017; pp. 948–953. [Google Scholar]
- Kasun, L.L.C.; Zhou, H.; Huang, G.B.; Vong, C.M. Representational learning with ELMs for big data. IEEE Intell. Syst. 2013, 28, 31–34. [Google Scholar]
- The MNIST Database of Handwritten Digits. Available online: http://yann.lecun.com/exdb/mnist/ (accessed on 1 November 2022).
- The CIFAR-10 Database. Available online: http://www.cs.toronto.edu/~kriz/cifar.html (accessed on 1 November 2022).
- The Street View House Numbers Dataset. Available online: http://ufldl.stanford.edu/housenumbers (accessed on 1 November 2022).
Name | MNIST Accuracy (%) | MNIST Parameter | CIFAR-10 Accuracy (%) | CIFAR-10 Parameter | SVHN Accuracy (%) | SVHN Parameter
---|---|---|---|---|---|---
REAN | 97.3 | | 40.0 | | 62.4 |
PIAN | 97.0 | | 37.0 | | 56.5 |
RAN | 96.9 | | 39.3 | | 60.2 |
ROAN | 96.8 | | 39.5 | | 60.7 |
Dataset | REAN vs. PIAN (H) | REAN vs. PIAN (P) | REAN vs. RAN (H) | REAN vs. RAN (P) | REAN vs. ROAN (H) | REAN vs. ROAN (P)
---|---|---|---|---|---|---
MNIST | 1 | 1.34e-05 | 1 | 4.12e-07 | 1 | 5.03e-06
CIFAR-10 | 1 | 3.34e-09 | 1 | 2.13e-04 | 1 | 2.69e-02
SVHN | 1 | 3.54e-10 | 1 | 5.55e-06 | 1 | 4.42e-05
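The H and P columns follow the MATLAB hypothesis-test convention: H = 1 means the null hypothesis of equal performance is rejected, and P is the associated p-value. The exact test the authors ran is not shown in this excerpt; as a minimal sketch, a paired t-test over matched per-run accuracies can be computed with SciPy (all numbers below are synthetic, for illustration only):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# hypothetical per-run test accuracies for two methods on one dataset
rean = 97.3 + rng.normal(0.0, 0.1, size=20)
pian = 97.0 + rng.normal(0.0, 0.1, size=20)

t, p = stats.ttest_rel(rean, pian)  # paired t-test over matched runs
h = int(p < 0.05)                   # H = 1: reject the equal-means null at the 5% level
```

With 20 matched runs and a mean gap of 0.3 points, the test rejects the null, mirroring the H = 1 entries in the table above.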
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Li, Y.; Yan, Y. Training Autoencoders Using Relative Entropy Constraints. Appl. Sci. 2023, 13, 287. https://doi.org/10.3390/app13010287