Multi-Layered Projected Entangled Pair States for Image Classification
Abstract
1. Introduction
2. Literature Review
3. Tensor Network for Image Classification
4. MLPEPS Classifier
4.1. PEPS Classifier
4.2. MLPEPS Classifier
4.3. Model Optimisation
5. Experiments
5.1. Fashion-MNIST Dataset
5.2. COVID-19 Radiography Dataset
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Hariri, R.H.; Fredericks, E.M.; Bowers, K.M. Uncertainty in big data analytics: Survey, opportunities, and challenges. J. Big Data 2019, 6, 44.
- Manyika, J.; Chui, M.; Brown, B.; Bughin, J.; Dobbs, R.; Roxburgh, C.; Hung Byers, A. Big Data: The Next Frontier for Innovation, Competition, and Productivity; McKinsey Global Institute: Washington, DC, USA, 2011.
- Rosenfeld, A. Computer vision: Basic principles. Proc. IEEE 1988, 76, 863–868.
- Rawat, W.; Wang, Z. Deep convolutional neural networks for image classification: A comprehensive review. Neural Comput. 2017, 29, 2352–2449.
- Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1–9.
- Ioffe, S.; Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the International Conference on Machine Learning, PMLR, Lille, France, 7–9 July 2015; pp. 448–456.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 770–778.
- Dong, S.; Wang, P.; Abbas, K. A survey on deep learning and its applications. Comput. Sci. Rev. 2021, 40, 100379.
- Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556.
- Stoudenmire, E.; Schwab, D.J. Supervised learning with tensor networks. Adv. Neural Inf. Process. Syst. 2016, 29, 4806–4814.
- Selvan, R.; Ørting, S.; Dam, E.B. Multi-layered tensor networks for image classification. arXiv 2020, arXiv:2011.06982.
- Kong, F.; Liu, X.Y.; Henao, R. Quantum tensor network in machine learning: An application to tiny object classification. arXiv 2021, arXiv:2101.03154.
- Liu, D.; Ran, S.J.; Wittek, P.; Peng, C.; García, R.B.; Su, G.; Lewenstein, M. Machine learning by unitary tensor network of hierarchical tree structure. New J. Phys. 2019, 21, 073059.
- Orús, R. Advances on tensor network theory: Symmetries, fermions, entanglement, and holography. Eur. Phys. J. B 2014, 87, 280.
- Verstraete, F.; Murg, V.; Cirac, J.I. Matrix product states, projected entangled pair states, and variational renormalization group methods for quantum spin systems. Adv. Phys. 2008, 57, 143–224.
- Evenbly, G.; Vidal, G. Tensor network states and geometry. J. Stat. Phys. 2011, 145, 891–918.
- Cheng, S.; Chen, J.; Wang, L. Information perspective to probabilistic modeling: Boltzmann machines versus born machines. Entropy 2018, 20, 583.
- Stoudenmire, E.; Schwab, D.J. Supervised learning with quantum-inspired tensor networks. arXiv 2016, arXiv:1605.05775.
- Stoudenmire, E.M. Learning relevant features of data with multi-scale tensor networks. Quantum Sci. Technol. 2018, 3, 034003.
- Cheng, S.; Wang, L.; Zhang, P. Supervised learning with projected entangled pair states. Phys. Rev. B 2021, 103, 125117.
- Sun, Z.Z.; Peng, C.; Liu, D.; Ran, S.J.; Su, G. Generative tensor network classification model for supervised machine learning. Phys. Rev. B 2020, 101, 075135.
- Albawi, S.; Mohammed, T.A.; Al-Zawi, S. Understanding of a convolutional neural network. In Proceedings of the 2017 International Conference on Engineering and Technology (ICET), Antalya, Turkey, 21–23 August 2017; pp. 1–6.
- Sciuto, G.L.; Napoli, C.; Capizzi, G.; Shikler, R. Organic solar cells defects detection by means of an elliptical basis neural network and a new feature extraction technique. Optik 2019, 194, 163038.
- Capizzi, G.; Sciuto, G.L.; Napoli, C.; Połap, D.; Woźniak, M. Small lung nodules detection based on fuzzy-logic and probabilistic neural network with bioinspired reinforcement learning. IEEE Trans. Fuzzy Syst. 2019, 28, 1178–1189.
- Selvan, R.; Dam, E.B. Tensor networks for medical image classification. In Proceedings of the Medical Imaging with Deep Learning, PMLR, Montreal, QC, Canada, 6–8 July 2020; pp. 721–732.
- Jordan, J.; Orús, R.; Vidal, G.; Verstraete, F.; Cirac, J.I. Classical simulation of infinite-size quantum lattice systems in two spatial dimensions. Phys. Rev. Lett. 2008, 101, 250602.
- Efthymiou, S.; Hidary, J.; Leichenauer, S. Tensornetwork for machine learning. arXiv 2019, arXiv:1906.06329.
- Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Fei-Fei, L. Imagenet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255.
- Krizhevsky, A. One weird trick for parallelizing convolutional neural networks. arXiv 2014, arXiv:1404.5997.
- Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2818–2826.
- Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708.
- Glorot, X.; Bengio, Y. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, JMLR Workshop and Conference Proceedings, Sardinia, Italy, 13–15 May 2010; pp. 249–256.
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. Adv. Neural Inf. Process. Syst. 2017, 30.
- Blagoveschensky, P.; Phan, A.H. Deep convolutional tensor network. arXiv 2020, arXiv:2005.14506.
- Vidal, G. Classical simulation of infinite-size quantum lattice systems in one spatial dimension. Phys. Rev. Lett. 2007, 98, 070201.
- Liao, H.J.; Liu, J.G.; Wang, L.; Xiang, T. Differentiable programming tensor networks. Phys. Rev. X 2019, 9, 031041.
- Xiao, H.; Rasul, K.; Vollgraf, R. Fashion-mnist: A novel image dataset for benchmarking machine learning algorithms. arXiv 2017, arXiv:1708.07747.
- Chowdhury, M.E.; Rahman, T.; Khandakar, A.; Mazhar, R.; Kadir, M.A.; Mahbub, Z.B.; Islam, K.R.; Khan, M.S.; Iqbal, A.; Al Emadi, N.; et al. Can AI help in screening viral and COVID-19 pneumonia? IEEE Access 2020, 8, 132665–132676.
- Rahman, T.; Khandakar, A.; Qiblawey, Y.; Tahir, A.; Kiranyaz, S.; Kashem, S.B.A.; Islam, M.T.; Al Maadeed, S.; Zughaier, S.M.; Khan, M.S.; et al. Exploring the effect of image enhancement techniques on COVID-19 detection using chest X-ray images. Comput. Biol. Med. 2021, 132, 104319.
- Glasser, I.; Pancotti, N.; Cirac, J.I. Supervised learning with generalized tensor networks. arXiv 2018, arXiv:1806.05964.
- Nie, C.; Wang, H.; Lai, Z. Multi-Tensor Network Representation for High-Order Tensor Completion. arXiv 2021, arXiv:2109.04022.
- Kolda, T.G.; Bader, B.W. Tensor decompositions and applications. SIAM Rev. 2009, 51, 455–500.
- Zhou, Y.; Chen, S.; Wang, Y.; Huan, W. Review of research on lightweight convolutional neural networks. In Proceedings of the 2020 IEEE 5th Information Technology and Mechatronics Engineering Conference (ITOEC), Chongqing, China, 12–14 June 2020; pp. 1713–1720.

| Model | Test Accuracy |
|---|---|
| MLPEPS | 90.44% |
| PEPS [20] | 88.30% |
| MPS [27] | 88.00% |
| Multi-scale TNs [19] | 88.97% |
| DCTN [34] | 89.38% |
| XGBoost [19] | 89.80% |
| AlexNet [40] | 89.90% |
| GoogLeNet [40] | 93.70% |

| Model | Train Accuracy | Test Accuracy |
|---|---|---|
| Single-layer PEPS () | 100% | 87.08% |
| 2-layer PEPS () | 100% | 91.63% |
| GoogLeNet | 99.95% | 92.75% |
| AlexNet | 99.72% | 93.95% |
| VGG-16 | 99.07% | 94.50% |

| Model | Parameters | Ratio |
|---|---|---|
| Single-layer PEPS (D = 4) | 1,064,964 | 0.76 |
| 2-layer PEPS () | 1,394,102 | 1.00 |
| 2-layer PEPS () | 4,404,102 | 3.16 |
| 2-layer PEPS () | 10,750,902 | 7.71 |
| GoogLeNet | 5,604,004 | 4.02 |
| AlexNet | 57,020,228 | 40.90 |
| VGG-16 | 134,276,932 | 96.32 |
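
The Ratio column divides each model's parameter count by the 1,394,102 parameters of the 2-layer PEPS baseline. The following minimal sketch (not the authors' own tooling) shows how such counts and ratios could be reproduced for the CNN baselines with torchvision; a four-class output head is assumed here to match the four-class radiography task, and exact counts may shift slightly with configuration options such as GoogLeNet's auxiliary classifiers.

```python
# Minimal sketch (assumes PyTorch + torchvision are installed; not the paper's code):
# count trainable parameters of the CNN baselines and express them relative to the
# 2-layer PEPS parameter count listed in the table above.
import torch.nn as nn
from torchvision import models

def count_parameters(model: nn.Module) -> int:
    """Number of trainable parameters in a model."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

PEPS_2_LAYER = 1_394_102  # 2-layer PEPS count, taken directly from the table

# Assumption: four output classes; GoogLeNet's auxiliary heads are disabled.
baselines = {
    "GoogLeNet": models.googlenet(num_classes=4, aux_logits=False, init_weights=True),
    "AlexNet": models.alexnet(num_classes=4),
    "VGG-16": models.vgg16(num_classes=4),
}

for name, net in baselines.items():
    n = count_parameters(net)
    print(f"{name}: {n:,} parameters, ratio {n / PEPS_2_LAYER:.2f}")
```

With four output classes, the AlexNet and VGG-16 counts computed this way match the table; the PEPS baseline value is simply taken from the table rather than rebuilt here.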

| Model | m | o | D | Test Accuracy |
|---|---|---|---|---|
| 3-layer PEPS | | | 4 | 90.33% |
| 4-layer PEPS | | | 4 | 89.12% |
| 5-layer PEPS | | | 4 | 88.84% |