Emotional Intelligence for the Decision-Making Process of Trajectories in Collaborative Robotics
Abstract
1. Introduction
- A novel dataset for a typical HRI scenario in the industrial environment was created by adapting and relabeling existing datasets;
- A ViT architecture was trained and validated for the specific application, in which hand gestures and facial expressions are monitored. It was demonstrated that this kind of architecture, which normally requires millions of training images, can be adopted in scenarios like the one reported in the manuscript (a minimal fine-tuning sketch follows this list);
- The outcome of the proposed system is a trajectory generated by emotional intelligence. Its value is predictable in the described application, but more complex situations could generate more complex trajectories, so it was necessary to verify that the trajectory was not harmful to the cobot. For the adopted cobot, neither a simulator nor software integrating a simulator with an external ViT architecture existed. For this reason, a DT of the adopted cobot was created for communication with the ViT architecture, simulation of the cobot behavior, and communication with the control unit of the real cobot;
- The aim of the work was not to create a comfortable working environment or to monitor the operator’s emotions during the HRI; on the contrary, it was to make the HRI more collaborative, safe, efficient, and comfortable by having the cobot prompt the operator to pay due attention, as occurs among human colleagues.
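The paper's training code is not reproduced here; the following is a minimal, hypothetical sketch of how a pretrained ViT-B could be fine-tuned on the three LoA classes (NLoA, MLoA, LLoA) using the HuggingFace Transformers library cited in the references and the AdamW optimizer (Loshchilov and Hutter). The checkpoint name, dataset layout, and hyperparameters are assumptions, not values from the paper.

```python
# Hedged sketch: fine-tuning a pretrained ViT-B on the three LoA classes.
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
from transformers import ViTForImageClassification

tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5] * 3, std=[0.5] * 3),  # ViT default normalization
])
# ImageFolder assigns class indices alphabetically: LLoA=0, MLoA=1, NLoA=2
train_ds = datasets.ImageFolder("dataset/train", transform=tf)  # hypothetical path
train_dl = DataLoader(train_ds, batch_size=32, shuffle=True)

model = ViTForImageClassification.from_pretrained(
    "google/vit-base-patch16-224-in21k",  # assumed checkpoint, not from the paper
    num_labels=3,
)
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
opt = torch.optim.AdamW(model.parameters(), lr=2e-5, weight_decay=0.01)

model.train()
for epoch in range(5):  # epoch count is an assumption
    for pixels, labels in train_dl:
        out = model(pixel_values=pixels.to(device), labels=labels.to(device))
        out.loss.backward()
        opt.step()
        opt.zero_grad()
```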
2. The LoA Estimation
2.1. The Dataset
2.2. The ViT Architectures
2.3. The Recognition of Facial Expressions and Hand Gestures
3. The Emotional Intelligence Implementation: An Application
4. The Digital Twin of Cobot Omron TM5-700
4.1. The Aim
4.2. The Structure
4.3. The Validation
5. The Experimental Activity
6. Conclusions
Supplementary Materials
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Schwab, K. The Fourth Industrial Revolution; World Economic Forum: Geneva, Switzerland, 2016.
- Dzedzickis, A.; Subačiūtė-Žemaitienė, J.; Šutinys, E.; Samukaitė-Bubnienė, U.; Bučinskas, V. Advanced Applications of Industrial Robotics: New Trends and Possibilities. Appl. Sci. 2022, 12, 135.
- Kyrarini, M.; Lygerakis, F.; Rajavenkatanarayanan, A.; Sevastopoulos, C.; Nambiappan, H.R.; Chaitanya, K.K.; Babu, A.R.; Mathew, J.; Makedon, F. A Survey of Robots in Healthcare. Technologies 2021, 9, 8.
- Yin, S.; Yuschenko, A. Application of Convolutional Neural Network to Organize the Work of Collaborative Robot as a Surgeon Assistant. In Proceedings of the International Conference on Interactive Collaborative Robotics (ICR 2019), Istanbul, Turkey, 20–25 August 2019; pp. 287–297.
- Antonelli, M.G.; Beomonte Zobel, P. Automated screwing of fittings in pneumatic manifolds. Int. J. Autom. Technol. 2021, 15, 140–148.
- Kim, W.; Lorenzini, M.; Balatti, P.; Nguyen, P.D.H.; Pattacini, U.; Tikhanoff, V.; Peternel, L.; Fantacci, C.; Natale, L.; Metta, G.; et al. Adaptable Workstations for Human-Robot Collaboration: A Reconfigurable Framework for Improving Worker Ergonomics and Productivity. IEEE Robot. Autom. Mag. 2019, 26, 14–26.
- Lanzoni, D.; Negrello, F.; Fornaciari, A.; Lentini, G.; Ierace, S.; Vitali, A.; Regazzoni, D.; Ajoudani, A.; Rizzi, C.; Bicchi, A.; et al. Collaborative Workcell in Industrial Assembly Process with Online Ergonomics Monitoring. In Proceedings of the I-RIM Conference, Milan, Italy, 13–14 October 2022; pp. 201–203.
- ISO/TS 15066:2016; Robots and Robotic Devices-Collaborative Robots. ISO: Geneva, Switzerland, 2016.
- Grella, F.; Baldini, G.; Canale, R.; Sagar, K.; Wang, S.A.; Albini, A.; Jilich, M.; Cannata, G.; Zoppi, M. A Tactile Sensor-Based Architecture for Collaborative Assembly Tasks with Heavy-Duty Robots. In Proceedings of the 20th ICAR, Ljubljana, Slovenia, 6–10 December 2021; pp. 1030–1035.
- AIRSKIN®. Available online: https://www.airskin.io/ (accessed on 4 November 2023).
- Malekzadeh, M.S.; Queißer, J.F.; Steil, J.J. Multi-Level Control Architecture for Bionic Handling Assistant Robot Augmented by Learning from Demonstration for Apple-Picking. Adv. Robot. 2019, 33, 469–485.
- Manti, M.; Hassan, T.; Passetti, G.; D’Elia, N.; Laschi, C.; Cianchetti, M. A Bioinspired Soft Robotic Gripper for Adaptable and Effective Grasping. Soft Robot. 2015, 2, 107–116.
- Antonelli, M.G.; Zobel, P.B.; D’Ambrogio, W.; Durante, F. Design Methodology for a Novel Bending Pneumatic Soft Actuator for Kinematically Mirroring the Shape of Objects. Actuators 2020, 9, 113.
- Antonelli, M.G.; D’Ambrogio, W. Soft Pneumatic Helical Actuator for Collaborative Robotics. In Proceedings of the 4th International Conference of IFToMM, Naples, Italy, 7–9 September 2022; pp. 702–709.
- Neri, F.; Forlini, M.; Scoccia, C.; Palmieri, G.; Callegari, M. Experimental Evaluation of Collision Avoidance Techniques for Collaborative Robots. Appl. Sci. 2023, 13, 2944.
- Scoccia, C.; Palmieri, G.; Palpacelli, M.C.; Callegari, M. A Collision Avoidance Strategy for Redundant Manipulators in Dynamically Variable Environments: On-Line Perturbations of Off-Line Generated Trajectories. Machines 2021, 9, 30.
- Scalera, L.; Giusti, A.; Vidoni, R.; Gasparetto, A. Enhancing fluency and productivity in human-robot collaboration through online scaling of dynamic safety zones. Int. J. Adv. Manuf. Technol. 2022, 121, 6783–6798.
- Liu, Q.; Liu, Z.; Xu, W.; Tang, Q.; Zhou, Z.; Pham, D.T. Human-robot collaboration in disassembly for sustainable manufacturing. Int. J. Prod. Res. 2019, 57, 4027–4044.
- Sajedi, S.; Liu, W.; Eltouny, K.; Behdad, S.; Zheng, M.; Xiao, L. Uncertainty-Assisted Image-Processing for Human-Robot Close Collaboration. IEEE Robot. Autom. Lett. 2022, 7, 4236–4243.
- Dimitropoulos, N.; Togias, T.; Zacharaki, N.; Michalos, G.; Makris, S. Seamless Human–Robot Collaborative Assembly Using Artificial Intelligence and Wearable Devices. Appl. Sci. 2021, 11, 5699.
- Neto, P.; Simão, M.; Mendes, N.; Safeea, M. Gesture-based human-robot interaction for human assistance in manufacturing. Int. J. Adv. Manuf. Technol. 2019, 101, 119–135.
- Mohammed, A.; Wang, L. Advanced Human-Robot Collaborative Assembly Using Electroencephalogram Signals of Human Brains. In Proceedings of the 53rd CIRP Conference on Manufacturing Systems (CIRP-CMS 2020), Chicago, IL, USA, 1–3 July 2020; pp. 1200–1205.
- Adel, A. Future of industry 5.0 in society: Human-centric solutions, challenges and prospective research areas. J. Cloud Comp. 2022, 11, 40.
- Kahneman, D. Thinking, Fast and Slow; Macmillan: New York, NY, USA, 2011.
- James, W. The Principles of Psychology; Henry Holt and Company: New York, NY, USA, 1890; Volume 1.
- Norman, G.J.; Necka, E.; Berntson, G.G. The Psychophysiology of Emotions. In Emotion Measurement; Meiselman, H.L., Ed.; Woodhead Publishing: Sawston, UK, 2016; pp. 83–98.
- Buodo, G.; Sarlo, M.; Palomba, D. Attentional Resources Measured by Reaction Times Highlight Differences Within Pleasant and Unpleasant, High Arousing Stimuli. Motiv. Emot. 2002, 26, 123–138.
- Palomba, D.; Sarlo, M.; Angrilli, A.; Mini, A.; Stegagno, L. Cardiac Responses Associated with Affective Processing of Unpleasant Film Stimuli. Int. J. Psychophysiol. 2000, 36, 45–47.
- Mayer, J.D.; Salovey, P.; Caruso, D.R. Emotional Intelligence: New Ability or Eclectic Traits? Am. Psychol. 2008, 63, 503–517.
- Marcos-Pablos, S.; García-Peñalvo, F.J. Emotional Intelligence in Robotics: A Scoping Review. In New Trends in Disruptive Technologies, Tech Ethics and Artificial Intelligence: The DITTET Collection, 1st ed.; Springer International Publishing: Cham, Switzerland, 2022; pp. 66–75.
- Spezialetti, M.; Placidi, G.; Rossi, S. Emotion Recognition for Human-Robot Interaction: Recent Advances and Future Perspectives. Front. Robot. AI 2020, 7, 145.
- Kollias, K.-F.; Syriopoulou-Delli, C.K.; Sarigiannidis, P.; Fragulis, G.F. The Contribution of Machine Learning and Eye-Tracking Technology in Autism Spectrum Disorder Research: A Systematic Review. Electronics 2021, 10, 2982.
- Banire, B.; Al Thani, D.; Qaraqe, M.; Mansoor, B. Face-Based Attention Recognition Model for Children with Autism Spectrum Disorder. J. Healthc. Inform. Res. 2021, 5, 420–445.
- Geetha, M.; Latha, R.S.; Nivetha, S.K.; Hariprasath, S.; Gowtham, S.; Deepak, C.S. Design of face detection and recognition system to monitor students during online examinations using Machine Learning algorithms. In Proceedings of the 2021 International Conference on Computer Communication and Informatics (ICCCI), Coimbatore, India, 27–29 January 2021.
- Tawari, A.; Trivedi, M.M. Robust and Continuous Estimation of Driver Gaze Zone by Dynamic Analysis of Multiple Face Videos. In Proceedings of the 2014 IEEE Intelligent Vehicles Symposium (IV), Dearborn, MI, USA, 8–11 June 2014.
- Hu, Z.; Lv, C.; Hang, P.; Huang, C.; Xing, Y. Data-Driven Estimation of Driver Attention Using Calibration-Free Eye Gaze and Scene Features. IEEE Trans. Ind. Electron. 2022, 69, 2.
- Chu, H.C.; Tsai, W.W.J.; Liao, M.J.; Chen, Y.M. Facial emotion recognition with transition detection for students with high-functioning autism in adaptive e-learning. Soft Comput. 2018, 22, 2973–2999.
- Kotsia, I.; Pitas, I. Facial expression recognition in image sequences using geometric deformation features and support vector machines. IEEE Trans. Image Process. 2007, 16, 172–187.
- Hua, W.; Dai, F.; Huang, L.; Xiong, J.; Gui, G. Hero: Human emotions recognition for realizing intelligent internet of things. IEEE Access 2019, 7, 24321–24332.
- Lu, C.T.; Su, C.W.; Jiang, H.L.; Lu, Y.Y. An interactive greeting system using convolutional neural networks for emotion recognition. Entertain. Comput. 2022, 40, 100452.
- Heredia, J.; Lopes-Silva, E.; Cardinale, Y.; Diaz-Amado, J.; Dongo, I.; Graterol, W.; Aguilera, A. Adaptive Multimodal Emotion Detection Architecture for Social Robots. IEEE Access 2022, 10, 20727–20744.
- Vazquez-Rodriguez, J.; Lefebvre, G.; Cumin, J.; Crowley, J.L. Transformer-Based Self-Supervised Learning for Emotion Recognition. In Proceedings of the 26th International Conference on Pattern Recognition (ICPR 2022), Montreal, QC, Canada, 21–25 August 2022.
- Chaudhari, A.; Bhatt, C.; Krishna, A.; Mazzeo, P.L. ViTFER: Facial Emotion Recognition with Vision Transformers. Appl. Syst. Innov. 2022, 5, 80.
- Siriwardhana, S.; Kaluarachchi, T.; Billinghurst, M.; Nanayakkara, S. Multimodal Emotion Recognition with Transformer-Based Self Supervised Feature Fusion. IEEE Access 2020, 8, 176274–176285.
- Karatay, B.; Bestepe, D.; Sailunaz, K.; Ozyer, T.; Alhajj, R. A Multi-Modal Emotion Recognition System Based on CNN-Transformer Deep Learning Technique. In Proceedings of the 7th International Conference on Data Science and Machine Learning Applications (CDMA 2022), Riyadh, Saudi Arabia, 1–3 March 2022; pp. 145–150.
- Toichoa Eyam, A.; Mohammed, W.M.; Martinez Lastra, J.L. Emotion-Driven Analysis and Control of Human-Robot Interactions in Collaborative Applications. Sensors 2021, 21, 4626.
- Lagomarsino, M.; Lorenzini, M.; Balatti, P.; De Momi, E.; Ajoudani, A. Pick the Right Co-Worker: Online Assessment of Cognitive Ergonomics in Human-Robot Collaborative Assembly. IEEE Trans. Cogn. Dev. Syst. 2022, 15, 1928–1937.
- Brandizzi, N.; Bianco, V.; Castro, G.; Russo, S.; Wajda, A. Automatic RGB inference based on facial emotion recognition. In Proceedings of the 2021 Scholar’s Yearly Symposium of Technology, Engineering and Mathematics, Catania, Italy, 27–29 July 2021.
- Goodfellow, I.J.; Erhan, D.; Carrier, P.L.; Courville, A.; Mirza, M.; Hamner, B.; Cukierski, W.; Tang, Y.; Thaler, D.; Lee, D.-H.; et al. Challenges in representation learning: A report on three machine learning contests. In Proceedings of the International Conference on Neural Information Processing, Daegu, Republic of Korea, 3–7 November 2013; pp. 117–124.
- Nuzzi, C.; Pasinetti, S.; Pagani, R.; Coffetti, G.; Sansoni, G. MEGURU: A gesture-based robot program builder for Meta-Collaborative workstations. Robot. Comput.-Integr. Manuf. 2021, 68, 102085.
- Kapitanov, A.; Makhlyarchuk, A.; Kvanchiani, K. HaGRID - HAnd Gesture Recognition Image Dataset. arXiv 2022, arXiv:2206.08219.
- Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An Image is Worth 16 × 16 Words: Transformers for Image Recognition at Scale. arXiv 2020, arXiv:2010.11929.
- Touvron, H.; Cord, M.; Douze, M.; Massa, F.; Sablayrolles, A.; Jégou, H. Training Data-Efficient Image Transformers & Distillation through Attention. arXiv 2020, arXiv:2012.12877.
- Wolf, T.; Debut, L.; Sanh, V.; Chaumond, J.; Delangue, C.; Moi, A.; Cistac, P.; Rault, T.; Louf, R.; Funtowicz, M.; et al. Transformers: State-of-the-Art Natural Language Processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (System Demonstrations), Virtual Conference, 16–20 November 2020; pp. 38–45.
- Loshchilov, I.; Hutter, F. Decoupled weight decay regularization. In Proceedings of the International Conference on Learning Representations, New Orleans, LA, USA, 6–9 May 2019; pp. 94–156.
- Chen, X.; Zheng, X.; Sun, K.; Liu, W.; Zhang, Y. Self-supervised vision transformer-based few-shot learning for facial expression recognition. Inf. Sci. 2023, 634, 206–226.
- Minaee, S.; Minaei, M.; Abdolrashidi, A. Deep-Emotion: Facial Expression Recognition Using Attentional Convolutional Network. Sensors 2021, 21, 3046.
- Padhi, P.; Das, M. Hand Gesture Recognition using DenseNet201-Mediapipe Hybrid Modelling. In Proceedings of the International Conference on Automation, Computing and Renewable Systems (ICACRS 2022), Pudukkottai, India, 13–15 December 2022.
- Mathworks. Available online: https://it.mathworks.com/products/robotics.html (accessed on 5 November 2023).
- Omron Industrial Automation. Available online: https://industrial.omron.eu/en/products/collaborative-robots (accessed on 6 November 2023).
| Section | Class | Number of Images |
|---|---|---|
| Facial expressions | NLoA | 6000 |
| Facial expressions | MLoA | 6000 |
| Facial expressions | LLoA | 4440 |
| Hand gestures | NLoA | 2640 |
| Hand gestures | MLoA | 3564 |
| Hand gestures | LLoA | 1800 |
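For illustration, the mapping from source-dataset classes to LoA classes that is visible in the comparison table further below (for FER-2013: neutral → NLoA, happy → MLoA, angry → LLoA) can be realized with a simple relabeling script. This is a hedged sketch; the folder layout and paths are hypothetical, not the paper's actual pipeline.

```python
# Hedged sketch: building the facial-expression part of the LoA dataset by
# copying images from per-emotion folders into per-LoA-class folders.
import os
import shutil

LOA_MAP = {"neutral": "NLoA", "happy": "MLoA", "angry": "LLoA"}

def relabel(src_root: str, dst_root: str) -> None:
    """Copy each source-emotion folder into the corresponding LoA folder."""
    for emotion, loa in LOA_MAP.items():
        src = os.path.join(src_root, emotion)
        dst = os.path.join(dst_root, loa)
        os.makedirs(dst, exist_ok=True)
        for name in os.listdir(src):
            shutil.copy(os.path.join(src, name), dst)

relabel("fer2013/train", "dataset/train")  # hypothetical paths
```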
| Model (Face) | Training Loss | Validation Loss | Validation Accuracy |
|---|---|---|---|
| ViT-B | 0.40 | 0.46 | 0.83 |
| DeiT-B | 0.37 | 0.50 | 0.81 |
| DeiT-Tiny | 0.54 | 0.57 | 0.76 |

| Model (Hand) | Training Loss | Validation Loss | Validation Accuracy |
|---|---|---|---|
| ViT-B | 0.07 | 0.09 | 0.99 |
| DeiT-B | 0.05 | 0.05 | 0.97 |
| DeiT-Tiny | 0.02 | 0.01 | 0.98 |
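Validation loss and accuracy of the kind reported above can be computed with a standard evaluation loop. The sketch below assumes `model` and `val_dl` are a fine-tuned classifier and a validation DataLoader as in the earlier fine-tuning sketch; it is illustrative, not the paper's code.

```python
# Hedged sketch: compute mean validation loss and accuracy for a classifier.
import torch

@torch.no_grad()
def evaluate(model, val_dl, device="cpu"):
    model.eval()
    total_loss, correct, seen = 0.0, 0, 0
    for pixels, labels in val_dl:
        out = model(pixel_values=pixels.to(device), labels=labels.to(device))
        total_loss += out.loss.item() * labels.size(0)  # undo batch averaging
        correct += (out.logits.argmax(-1).cpu() == labels).sum().item()
        seen += labels.size(0)
    return total_loss / seen, correct / seen  # (validation loss, accuracy)
```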
Model Face

| Dataset | Classes | ViT-B [43] | SSF-ViT-B [56] | A-CNN [57] | ViT-B (our) |
|---|---|---|---|---|---|
| FER-2013 | neutral (NLoA) | 0.61 | 0.73 | 0.80 | 0.80 |
| FER-2013 | happy (MLoA) | 0.77 | 0.89 | 0.69 | 0.91 |
| FER-2013 | angry (LLoA) | 0.63 | 0.69 | 0.53 | 0.78 |

Model Hand

| Dataset | Classes | R-FCN [50] | ViT-B [51] | DenseNet201 [58] | ViT-B (our) |
|---|---|---|---|---|---|
| HaGRID | no gesture (NLoA) | - | 0.98 | 0.92 | 1 |
| HANDS | Horiz (MLoA) | 0.85 | - | - | 0.99 |
| HANDS | Punch (LLoA) | 1 | - | - | 1 |
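At run time, the predicted LoA class must reach the cobot side; in the paper this role is played by the DT of the Omron TM5-700, whose exact interface is not reproduced here. The sketch below is therefore a generic, assumed pipeline: classify a frame with the fine-tuned ViT and push the label over a plain TCP socket. The checkpoint name, class order, endpoint address, and wire format are all hypothetical.

```python
# Hedged sketch: frame -> LoA class -> label sent to a listener (e.g., the DT).
import socket
import torch
from PIL import Image
from transformers import ViTForImageClassification, ViTImageProcessor

CLASSES = ["LLoA", "MLoA", "NLoA"]  # alphabetical, matching ImageFolder indices

processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")
model = ViTForImageClassification.from_pretrained("loa-vit-checkpoint")  # hypothetical
model.eval()

def classify(frame: Image.Image) -> str:
    """Return the LoA class predicted for one camera frame."""
    inputs = processor(images=frame, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return CLASSES[int(logits.argmax(dim=-1))]

loa = classify(Image.open("frame.png"))
with socket.create_connection(("192.168.0.10", 5000)) as s:  # assumed DT endpoint
    s.sendall(loa.encode())  # the receiver maps the LoA class to a trajectory
```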
Phase 1: facial expressions; Phase 2: hand gestures. o = class correctly inferred; X = class not inferred.

| No. | Age | Gender | NLoA (Phase 1) | MLoA (Phase 1) | LLoA (Phase 1) | Inference Phase 1 [%] | NLoA (Phase 2) | MLoA (Phase 2) | LLoA (Phase 2) | Inference Phase 2 [%] |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 26 | M | o | o | o | 100 | o | o | o | 100 |
| 2 | 31 | M | o | o | o | 100 | X | o | o | 66.6 |
| 3 | 27 | M | o | o | o | 100 | X | X | X | 0 |
| 4 | 22 | M | o | o | o | 100 | o | o | o | 100 |
| 5 | 28 | M | o | o | X | 66.6 | o | o | o | 100 |
| 6 | 27 | M | o | o | X | 66.6 | o | o | o | 100 |
| 7 | 25 | F | o | o | o | 100 | o | o | X | 66.6 |
| 8 | 26 | F | o | o | X | 66.6 | o | o | o | 100 |
| 9 | 26 | M | o | o | o | 100 | o | o | o | 100 |
| 10 | 28 | F | o | o | X | 66.6 | o | o | o | 100 |
| Average inference | | | | | | 86.6 | | | | 73.3 |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).