Food Recognition and Food Waste Estimation Using Convolutional Neural Network
Abstract
1. Introduction
2. Materials and Methods
2.1. Data Acquisition for Food Recognition
2.2. Food Waste Evaluation
2.3. Convolutional Neural Networks
2.3.1. Convolution
2.3.2. Pooling
2.3.3. Fully Connected Layer
2.3.4. Output Layer
2.3.5. Proposed CNN Model
2.4. Food Waste Calculation
3. Results and Discussion
3.1. CNN Model Results
3.2. Food Waste Evaluation Results
4. Conclusions
Author Contributions
Funding
Conflicts of Interest
References
| Variable | Description | Assigned Value |
|---|---|---|
| Blur | Controls the smoothness of the dividing line between the background and the foreground | 21 |
| Low canny | Lower intensity threshold for Canny edge detection | 10 |
| High canny | Upper intensity threshold for Canny edge detection | 200 |
| Dilation iterations | Number of dilation iterations applied when building the mask | 10 |
| Erosion iterations | Number of erosion iterations applied when building the mask | 10 |
| Mask color | Color used to fill the masked background | (0, 0, 0) |
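Below is a minimal sketch of how the parameters listed in the table above could drive an edge-based background-removal step with OpenCV. The function name, the use of a Gaussian blur on the mask, and the file names are illustrative assumptions, not the authors' original code.

```python
import cv2
import numpy as np

# Illustrative values taken from the parameter table above.
BLUR = 21               # kernel size smoothing the background/foreground boundary
CANNY_LOW = 10          # lower Canny threshold
CANNY_HIGH = 200        # upper Canny threshold
DILATE_ITER = 10        # dilation iterations applied to the edge mask
ERODE_ITER = 10         # erosion iterations applied to the edge mask
MASK_COLOR = (0, 0, 0)  # color used to fill the masked background

def remove_background(image_bgr):
    """Return a copy of the image with the background filled by MASK_COLOR.

    Hypothetical reimplementation of an edge-based masking step; not the
    authors' original code.
    """
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, CANNY_LOW, CANNY_HIGH)
    # Close gaps in the detected edges, then shrink the mask back.
    mask = cv2.dilate(edges, None, iterations=DILATE_ITER)
    mask = cv2.erode(mask, None, iterations=ERODE_ITER)
    # Smooth the mask boundary and binarize it.
    mask = cv2.GaussianBlur(mask, (BLUR, BLUR), 0)
    mask = (mask > 0).astype(np.uint8)
    result = image_bgr.copy()
    result[mask == 0] = MASK_COLOR
    return result

if __name__ == "__main__":
    img = cv2.imread("plate.jpg")  # hypothetical input image
    cv2.imwrite("plate_masked.jpg", remove_background(img))
```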
| | Fruit | Vegetable | Processed Fruits and Vegetables | Potatoes | Pasta, Rice, Cereal | Meat and Meat Products | Fish | Milk and Dairy Products | Bread | Cookies | Prepared Meals | Other |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Fruit | 99.9 | 2.8 × 10−14 | 2.2 × 10−17 | 1.2 × 10−20 | 6.1 × 10−9 | 1.0 × 10−12 | 1.4 × 10−6 | 1.4 × 10−5 | 3.5 × 10−22 | 1.2 × 10−3 | 2.9 × 10−15 | 9.8 × 10−2 |
| Vegetable | 1.4 × 10−6 | 99.9 | 3.5 × 10−22 | 1.2 × 10−3 | 2.9 × 10−15 | 7.6 × 10−16 | 6.8 × 10−7 | 6.1 × 10−8 | 1.2 × 10−3 | 2.9 × 10−6 | 1.4 × 10−9 | 9.7 × 10−2 |
| Processed fruits and vegetables | 6.8 × 10−7 | 6.1 × 10−8 | 99.9 | 2.9 × 10−6 | 1.4 × 10−9 | 3.8 × 10−19 | 3.0 × 10−2 | 1.2 × 10−3 | 3.2 × 10−2 | 3.5 × 10−22 | 7.7 × 10−6 | 3.6 × 10−2 |
| Potatoes | 3.3 × 10−1 | 1.2 × 10−3 | 7.2 × 10−2 | 99.2 | 7.7 × 10−6 | 5.9 × 10−6 | 5.5 × 10−7 | 1.5 × 10−5 | 4.2 × 10−9 | 9.2 × 10−10 | 1.5 × 10−6 | 3.9 × 10−1 |
| Pasta, rice, cereal | 5.5 × 10−7 | 1.5 × 10−5 | 4.2 × 10−9 | 9.2 × 10−10 | 99.9 | 1.5 × 10−8 | 3.5 × 10−2 | 6.0 × 10−9 | 5.5 × 10−9 | 2.1 × 10−15 | 2.8 × 10−2 | 3.7 × 10−2 |
| Meat and meat products | 5.0 × 10−4 | 6.0 × 10−9 | 5.5 × 10−9 | 2.1 × 10−15 | 8.0 × 10−5 | 99.8 | 5.5 × 10−7 | 1.5 × 10−5 | 4.2 × 10−9 | 9.2 × 10−10 | 3.3 × 10−2 | 1.6 × 10−1 |
| Fish | 1.4 × 10−6 | 1.5 × 10−5 | 3.5 × 10−22 | 1.2 × 10−3 | 2.9 × 10−15 | 1.4 × 10−6 | 99.9 | 2.8 × 10−14 | 2.2 × 10−17 | 1.2 × 10−20 | 6.1 × 10−9 | 9.8 × 10−2 |
| Milk and dairy products | 6.8 × 10−7 | 6.1 × 10−8 | 5.5 × 10−9 | 2.9 × 10−6 | 1.4 × 10−9 | 6.8 × 10−7 | 1.4 × 10−6 | 99.9 | 3.5 × 10−22 | 1.2 × 10−3 | 2.9 × 10−15 | 9.8 × 10−2 |
| Bread | 3.3 × 10−2 | 1.2 × 10−3 | 2.0 × 10−3 | 1.4 × 10−9 | 7.7 × 10−6 | 3.3 × 10−2 | 6.8 × 10−7 | 6.1 × 10−8 | 99.9 | 2.9 × 10−6 | 1.4 × 10−9 | 3.0 × 10−2 |
| Cookies | 5.5 × 10−7 | 1.5 × 10−5 | 4.2 × 10−9 | 9.2 × 10−10 | 2.9 × 10−15 | 5.5 × 10−7 | 3.3 × 10−1 | 1.2 × 10−3 | 7.2 × 10−2 | 99.2 | 7.7 × 10−6 | 3.9 × 10−1 |
| Prepared meals | 3.5 × 10−3 | 6.0 × 10−9 | 5.5 × 10−9 | 2.1 × 10−15 | 8.8 × 10−3 | 3.5 × 10−2 | 5.5 × 10−7 | 1.5 × 10−5 | 4.2 × 10−9 | 9.2 × 10−10 | 99.9 | 5.2 × 10−2 |
| Other | 5.5 × 10−7 | 1.5 × 10−5 | 4.2 × 10−9 | 9.2 × 10−10 | 6.3 × 10−2 | 7.7 × 10−6 | 3.5 × 10−2 | 6.0 × 10−9 | 5.5 × 10−9 | 2.1 × 10−15 | 1.2 × 10−3 | 99.9 |
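The values above read as per-category prediction probabilities in percent, with the dominant diagonal marking the correctly recognized class. As a hedged illustration only, the sketch below shows one way such a table could be assembled from a trained softmax classifier; `model`, `images`, and `labels` are placeholders, not the authors' pipeline.

```python
import numpy as np

CLASSES = ["Fruit", "Vegetable", "Processed fruits and vegetables", "Potatoes",
           "Pasta, rice, cereal", "Meat and meat products", "Fish",
           "Milk and dairy products", "Bread", "Cookies", "Prepared meals", "Other"]

def mean_probability_table(model, images, labels, n_classes=len(CLASSES)):
    """Average predicted class probabilities (in %) per true class.

    Assumes a Keras-style classifier whose predict() returns one softmax
    vector per image; `labels` is a NumPy array of integer class indices.
    """
    probs = model.predict(images)                 # shape: (N, n_classes)
    table = np.zeros((n_classes, n_classes))
    for c in range(n_classes):
        # Average the predicted distributions over all images of true class c.
        table[c] = probs[labels == c].mean(axis=0) * 100.0
    return table
```

Each row of the returned array corresponds to one true category and sums to roughly 100.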
| Food Waste, % | No. of Students | No. of Images |
|---|---|---|
| 21.3 | 30 | 1354 |
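As an illustration of how a plate-level waste fraction might be derived from masked before- and after-meal photographs, the sketch below compares food-pixel areas; it assumes both images are taken at the same scale after background removal and is not the authors' exact calculation.

```python
import numpy as np

def food_pixel_count(masked_bgr, mask_color=(0, 0, 0)):
    """Count pixels that were not filled with the background mask color."""
    return int(np.count_nonzero(np.any(masked_bgr != mask_color, axis=2)))

def waste_fraction(before_bgr, after_bgr):
    """Leftover food area divided by served food area, in percent.

    Illustrative only: assumes the same plate, camera distance, and
    background-removal step for both images.
    """
    served = food_pixel_count(before_bgr)
    leftover = food_pixel_count(after_bgr)
    return 100.0 * leftover / served if served else 0.0
```

Averaging such per-meal fractions over all 1354 images would give an aggregate estimate comparable to the 21.3% reported above.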
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).