Hey Max, Can You Help Me? An Intuitive Virtual Assistant for Industrial Robots
Abstract
1. Introduction
Contribution
- a novel VA built on the state-of-the-art Bidirectional Encoder Representations from Transformers (BERT) [7] model, offering high human intent-recognition capability focused on industrial robots, which satisfies the downstream assistance needs of factories so that employees can successfully acquire the knowledge and skills required for manufacturing tasks;
- three common manufacturing scenarios selected and analyzed in a genuine production setting, with an emphasis on each level of LTA-FIT. To our knowledge, this is the first time an industry-oriented VA has been integrated into an industrial setting with the goal of providing LTA-FIT for shop-floor workers; and
- a user study of human–machine interaction with both the VA and the robot. This analysis evaluates the system’s usability and the participants’ mental load while working on the task, as well as their comfortability, perceived competence, performance expectancy, and effort expectancy.
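To make the intent-recognition pipeline concrete, the following sketch shows the kind of intent-plus-slot structure such a VA’s language services can produce for an utterance. The keyword matcher and the slot pattern here are hypothetical stand-ins for the trained BERT model, and the intent names and the “WS3” location schema are illustrative only.

```python
import json
import re

# Hypothetical intent vocabularies; the deployed system uses a fine-tuned
# BERT classifier rather than keyword matching for assistance tasks.
INTENT_KEYWORDS = {
    "explain_terminology": {"explain", "what", "mean", "terminology"},
    "deliver_package": {"deliver", "bring", "package", "fetch"},
    "check_status": {"status", "check", "state"},
}

def recognize(utterance: str) -> dict:
    """Return an intent + slots structure serializable as JSON."""
    tokens = set(re.findall(r"[a-z]+", utterance.lower()))
    # Score each intent by keyword overlap and pick the best match.
    scores = {intent: len(tokens & kws) for intent, kws in INTENT_KEYWORDS.items()}
    intent = max(scores, key=scores.get)
    # Toy slot extraction: a workstation id such as "WS3" (hypothetical schema).
    m = re.search(r"\bWS\d+\b", utterance, re.IGNORECASE)
    slots = {"location": m.group(0).upper()} if m else {}
    return {"intent": intent, "slots": slots}

print(json.dumps(recognize("Max, can you deliver this package to WS3?")))
```

The JSON output mirrors the common NLU convention of pairing a single recognized intent with a dictionary of extracted slots, which downstream robot services can then consume.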
2. Related Work
3. System Overview
3.1. Language Services
3.2. Robot Service
The knowledge graph is modeled as a labeled directed graph G = (N, E, λ_N, λ_E, ρ), where:
- N is a finite set of nodes (i.e., entities);
- E is a finite set of edges (i.e., relationships);
- λ_N is a labeling function for node attributes;
- λ_E is a labeling function for relationships; and
- ρ attaches each edge to its source and target node.

Algorithm 1. Robot skill and response computation.
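The graph definition above can be sketched as a small data structure; the entities and the HAS_SKILL relationship below are illustrative examples rather than labels taken from the deployed graph.

```python
from dataclasses import dataclass, field

@dataclass
class KnowledgeGraph:
    """Labeled directed graph G = (N, E, lambda_N, lambda_E, rho)."""
    nodes: set = field(default_factory=set)          # N: entities
    edges: set = field(default_factory=set)          # E: relationships
    node_labels: dict = field(default_factory=dict)  # lambda_N: node attributes
    edge_labels: dict = field(default_factory=dict)  # lambda_E: relationship types
    incidence: dict = field(default_factory=dict)    # rho: edge -> (source, target)

    def add_edge(self, eid, src, dst, rel):
        """Insert an edge, registering its endpoints and relationship label."""
        self.nodes.update({src, dst})
        self.edges.add(eid)
        self.edge_labels[eid] = rel
        self.incidence[eid] = (src, dst)

# Illustrative entities and relationship (not from the deployed KG):
kg = KnowledgeGraph()
kg.node_labels["robot"] = {"name": "LH8"}
kg.node_labels["skill"] = {"name": "pick_and_place"}
kg.add_edge("e1", "robot", "skill", "HAS_SKILL")
print(kg.incidence["e1"])  # ('robot', 'skill')
```

Keeping ρ as an explicit edge-to-endpoints map, rather than storing endpoints inside the edge objects, matches the definition term by term and keeps the labeling functions independent of graph topology.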
3.3. Voice Interface
3.4. Web Interface
3.5. Robot Control Module
4. Selected LTA-FIT Scenarios
4.1. Learning Scenario
4.2. Training Scenario
4.3. Assistance Scenario
5. Experimental Setup
5.1. Evaluation Metrics
5.1.1. Metrics Used in the Learning and Training Scenarios
- The comfortability scale measures the participant’s level of comfort with the system at the beginning and conclusion of the interaction.
- Perceived competence refers to the participant’s conviction that they can complete the task with the help of the system.
- Performance expectancy is the degree to which participants feel that utilizing the system would help them perform better in a work environment.
- Effort expectancy defines the degree of ease associated with using the system.
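As a minimal illustration, each of the four subjective metrics above can be aggregated as the mean of its questionnaire items; the item counts and the 1–5 Likert scale below are assumptions, not the study’s exact instrument.

```python
from statistics import mean

def score_metric(item_ratings):
    """Mean of Likert item ratings (assumed 1-5 scale) for one metric."""
    if not all(1 <= r <= 5 for r in item_ratings):
        raise ValueError("ratings must lie on the 1-5 Likert scale")
    return mean(item_ratings)

# Hypothetical responses from one participant:
responses = {
    "comfortability": [4, 5, 4],
    "perceived_competence": [5, 4],
    "performance_expectancy": [4, 4, 5],
    "effort_expectancy": [3, 4],
}
scores = {metric: ratings for metric, ratings in
          ((m, score_metric(r)) for m, r in responses.items())}
print(scores["effort_expectancy"])  # 3.5
```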
5.1.2. Metrics Used in the Assistance Scenario
5.2. Physical Setup
5.3. Evaluation Procedure
6. Experimental Results
6.1. System Usability Scale
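The SUS score is computed from ten 1–5 Likert items with the standard scoring of Brooke [45]: each odd (positively worded) item contributes its rating minus 1, each even (negatively worded) item contributes 5 minus its rating, and the sum is scaled by 2.5 onto the 0–100 range. A minimal sketch:

```python
def sus_score(ratings):
    """Compute the System Usability Scale score from ten 1-5 ratings."""
    if len(ratings) != 10:
        raise ValueError("SUS uses exactly ten items")
    total = 0
    for i, r in enumerate(ratings, start=1):
        # Odd-numbered (positively worded) items: rating - 1.
        # Even-numbered (negatively worded) items: 5 - rating.
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
```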
6.2. NASA-TLX
6.3. Subjective Metrics
6.4. Quantitative Performance Results
7. Discussion & Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
ACT | Average Communication Time |
AI | Artificial Intelligence |
BERT | Bidirectional Encoder Representations from Transformers |
CS | Client-Server |
CS | Comfortability Scale |
DSR | Dialogue State Rule |
EE | Effort Expectancy |
FFNN | Feed-Forward Neural Network |
HRC | Human-Robot Collaboration |
HRI | Human-Robot Interaction |
IER | Intent Error Rate |
JSON | JavaScript Object Notation |
KG | Knowledge Graph |
LH8 | Little Helper 8 |
LTA-FIT | Learning, Training, Assistance - Formats, Issues and Tools |
MD | Mental Demand |
PC | Perceived Competence |
PCB | Printed Circuit Board |
PD | Physical Demand |
PE | Performance Expectancy |
Q&A | Question Answering |
RSE | Robot Service Execution |
RSM | Robot Service Management |
SER | Slot Error Rate |
SUS | System Usability Scale |
TLX | Task Load Index |
TD | Temporal Demand |
TSR | Task Success Rate |
VA | Virtual Assistant |
References
- Xu, X.; Lu, Y.; Vogel-Heuser, B.; Wang, L. Industry 4.0 and Industry 5.0—Inception, conception and perception. J. Manuf. Syst. 2021, 61, 530–535. [Google Scholar] [CrossRef]
- Wang, L. A futuristic perspective on human-centric assembly. J. Manuf. Syst. 2022, 62, 199–201. [Google Scholar] [CrossRef]
- Adam, C.; Aringer-Walch, C.; Bengler, K. Digitalization in Manufacturing–Employees, Do You Want to Work There? In Proceedings of the Congress of the International Ergonomics Association, Florence, Italy, 26–30 August 2018; Springer: Berlin/Heidelberg, Germany, 2018; pp. 267–275. [Google Scholar] [CrossRef]
- Rehe, G.; Schneider, R.; Thimm, D.D. Transformation towards Human Workforce 4.0 through the Qualification Model LTA-FIT. In Proceedings of the 53rd Hawaii International Conference on System Sciences, Maui, HI, USA, 7–10 January 2020. [Google Scholar] [CrossRef] [Green Version]
- Li, C.; Hansen, A.K.; Chrysostomou, D.; Bøgh, S.; Madsen, O. Bringing a Natural Language-enabled Virtual Assistant to Industrial Mobile Robots for Learning, Training and Assistance of Manufacturing Tasks. In Proceedings of the 2022 IEEE/SICE International Symposium on System Integration (SII), Narvik, Norway, 9–12 January 2022; pp. 238–243. [Google Scholar] [CrossRef]
- Schou, C.; Andersen, R.S.; Chrysostomou, D.; Bøgh, S.; Madsen, O. Skill-based instruction of collaborative robots in industrial settings. Robot. Comput.-Integr. Manuf. 2018, 53, 72–80. [Google Scholar] [CrossRef]
- Devlin, J.; Chang, M.W.; Lee, K.; Toutanova, K. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv 2018, arXiv:1810.04805. [Google Scholar]
- Lu, Y.; Zheng, H.; Chand, S.; Xia, W.; Liu, Z.; Xu, X.; Wang, L.; Qin, Z.; Bao, J. Outlook on human-centric manufacturing towards Industry 5.0. J. Manuf. Syst. 2022, 62, 612–627. [Google Scholar] [CrossRef]
- Lester, S.; Costley, C. Work-based learning at higher education level: Value, practice and critique. Stud. High. Educ. 2010, 35, 561–575. [Google Scholar] [CrossRef] [Green Version]
- Ansari, F.; Erol, S.; Sihn, W. Rethinking human-machine learning in industry 4.0: How does the paradigm shift treat the role of human learning? Procedia Manuf. 2018, 23, 117–122. [Google Scholar] [CrossRef]
- Smith, C.; Worsfold, K. Unpacking the learning–work nexus: ‘Priming’ as lever for high-quality learning outcomes in work-integrated learning curricula. Stud. High. Educ. 2015, 40, 22–42. [Google Scholar] [CrossRef]
- Lamancusa, J.S.; Jorgensen, J.E.; Zayas-Castro, J.L. The learning factory—A new approach to integrating design and manufacturing into the engineering curriculum. J. Eng. Educ. 1997, 86, 103–112. [Google Scholar] [CrossRef]
- Enke, J.; Tisch, M.; Metternich, J. Learning factory requirements analysis–Requirements of learning factory stakeholders on learning factories. Procedia Cirp 2016, 55, 224–229. [Google Scholar] [CrossRef] [Green Version]
- Zhang, W.; Cai, W.; Min, J.; Fleischer, J.; Ehrmann, C.; Prinz, C.; Kreimeier, D. 5G and AI Technology Application in the AMTC Learning Factory. Procedia Manuf. 2020, 45, 66–71. [Google Scholar] [CrossRef]
- Schou, C.; Avhad, A.; Bøgh, S.; Madsen, O. Towards the Swarm Production Paradigm. In Towards Sustainable Customization: Bridging Smart Products and Manufacturing Systems; Springer: Berlin/Heidelberg, Germany, 2021; pp. 105–112. [Google Scholar] [CrossRef]
- Mortensen, S.T.; Chrysostomou, D.; Madsen, O. A novel framework for virtual recommissioning in reconfigurable manufacturing systems. In Proceedings of the 2017 22nd IEEE International Conference on Emerging Technologies and Factory Automation (ETFA), Limassol, Cyprus, 12–15 September 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1–4. [Google Scholar] [CrossRef]
- Wurhofer, D.; Meneweger, T.; Fuchsberger, V.; Tscheligi, M. Reflections on operators’ and maintenance engineers’ experiences of smart factories. In Proceedings of the 2018 ACM Conference on Supporting Groupwork, Sanibel Island, FL, USA, 7–10 January 2018; pp. 284–296. [Google Scholar] [CrossRef]
- Kohl, M.; Heimeldinger, C.; Brieke, M.; Fottner, J. Competency model for logistics employees in smart factories. In Proceedings of the International Conference on Applied Human Factors and Ergonomics, Washington, DC, USA, 24–28 July 2019; Springer: Berlin/Heidelberg, Germany, 2019; pp. 133–145. [Google Scholar] [CrossRef]
- Simic, M.; Nedelko, Z. Development of competence model for Industry 4.0: A theoretical approach. Economic and Social Development: Book of Proceedings. 2019, pp. 1288–1298. Available online: https://www.esd-conference.com/upload/book_of_proceedings/Book_of_Proceedings_esdPrague2019_Online.pdf (accessed on 1 October 2022).
- Daling, L.M.; Schröder, S.; Haberstroh, M.; Hees, F. Challenges and requirements for employee qualification in the context of human-robot-collaboration. In Proceedings of the 2018 IEEE Workshop on Advanced Robotics and its Social Impacts (ARSO), Genova, Italy, 27–29 September 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 85–90. [Google Scholar] [CrossRef]
- Li, C.; Park, J.; Kim, H.; Chrysostomou, D. How Can I Help You? An Intelligent Virtual Assistant for Industrial Robots; HRI’21 Companion; Association for Computing Machinery: New York, NY, USA, 2021; pp. 220–224. [Google Scholar] [CrossRef]
- Nardello, M.; Madsen, O.; Møller, C. The smart production laboratory: A learning factory for industry 4.0 concepts. In Proceedings of the CEUR Workshop Proceedings. CEUR Workshop Proceedings, Copenhagen, Denmark, 28–30 August 2017; Volume 1898. [Google Scholar]
- Rawassizadeh, R.; Sen, T.; Kim, S.J.; Meurisch, C.; Keshavarz, H.; Mühlhäuser, M.; Pazzani, M. Manifestation of virtual assistants and robots into daily life: Vision and challenges. CCF Trans. Pervasive Comput. Interact. 2019, 1, 163–174. [Google Scholar] [CrossRef]
- Pogorskiy, E.; Beckmann, J.F.; Joksimović, S.; Kovanović, V.; West, R. Utilising a Virtual Learning Assistant as a Measurement and Intervention Tool for Self-Regulation in Learning. In Proceedings of the 2018 IEEE International Conference on Teaching, Assessment, and Learning for Engineering (TALE), Wollongong, NSW, Australia, 4–7 December 2018; pp. 846–849. [Google Scholar] [CrossRef]
- Preece, A.; Webberley, W.; Braines, D.; Zaroukian, E.G.; Bakdash, J.Z. Sherlock: Experimental Evaluation of a Conversational Agent for Mobile Information Tasks. IEEE Trans. Hum.-Mach. Syst. 2017, 47, 1017–1028. [Google Scholar] [CrossRef] [Green Version]
- da Silva Barbosa, A.; Silva, F.P.; dos Santos Crestani, L.R.; Otto, R.B. Virtual assistant to real time training on industrial environment. In Proceedings of the Transdisciplinary Engineering Methods for Social Innovation of Industry 4.0, Proceedings of the 25th ISPE Inc. International Conference on Transdisciplinary Engineering, Modena, Italy, 3–6 July 2018; IOS Press: Amsterdam, The Netherlands, 2018; Volume 7, p. 33. [Google Scholar] [CrossRef]
- Casillo, M.; Colace, F.; Fabbri, L.; Lombardi, M.; Romano, A.; Santaniello, D. Chatbot in Industry 4.0: An Approach for Training New Employees. In Proceedings of the 2020 IEEE International Conference on Teaching, Assessment, and Learning for Engineering (TALE), Takamatsu, Japan, 8–11 December 2020; pp. 371–376. [Google Scholar] [CrossRef]
- Zimmer, M.; Al-Yacoub, A.; Ferreira, P.; Lohse, N. Towards Human-Chatbot Interaction: A Virtual Assistant for the Ramp-Up Process. 2020. Available online: https://repository.lboro.ac.uk/articles/conference_contribution/Towards_human-chatbot_interaction_a_virtual_assistant_for_the_ramp-up_process/12091137/1/files/22231974.pdf (accessed on 1 October 2022).
- Li, C.; Yang, H.J. Bot-X: An AI-based virtual assistant for intelligent manufacturing. Multiagent Grid Syst. 2021, 17, 1–14. [Google Scholar] [CrossRef]
- Maksymova, S.; Matarneh, R.; Lyashenko, V.; Belova, N. Voice Control for an Industrial Robot as a Combination of Various Robotic Assembly Process Models. J. Comput. Commun. 2017, 5, 1–15. [Google Scholar] [CrossRef] [Green Version]
- Bingol, M.C.; Aydogmus, O. Performing predefined tasks using the human–robot interaction on speech recognition for an industrial robot. Eng. Appl. Artif. Intell. 2020, 95, 103903. [Google Scholar] [CrossRef]
- Russo, A.; D’Onofrio, G.; Gangemi, A.; Giuliani, F.; Mongiovi, M.; Ricciardi, F.; Greco, F.; Cavallo, F.; Dario, P.; Sancarlo, D.; et al. Dialogue systems and conversational agents for patients with dementia: The human–robot interaction. Rejuvenation Res. 2019, 22, 109–120. [Google Scholar] [CrossRef]
- Bernard, D. Cognitive interaction: Towards “cognitivity” requirements for the design of virtual assistants. In Proceedings of the 2017 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Banff, AB, Canada, 5–8 October 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 210–215. [Google Scholar] [CrossRef]
- Skantze, G. Turn-taking in conversational systems and human-robot interaction: A review. Comput. Speech Lang. 2021, 67, 101178. [Google Scholar] [CrossRef]
- Weiss, A.; Huber, A.; Minichberger, J.; Ikeda, M. First application of robot teaching in an existing industry 4.0 environment: Does it really work? Societies 2016, 6, 20. [Google Scholar] [CrossRef]
- Meissner, A.; Trübswetter, A.; Conti-Kufner, A.S.; Schmidtler, J. Friend or foe? understanding assembly workers’ acceptance of human-robot collaboration. ACM Trans. Hum-Robot. Interact. THRI 2020, 10, 1–30. [Google Scholar] [CrossRef]
- Chacón, A.; Ponsa, P.; Angulo, C. Usability Study through a Human-Robot Collaborative Workspace Experience. Designs 2021, 5, 35. [Google Scholar] [CrossRef]
- Jost, C.; Le Pévédic, B.; Belpaeme, T.; Bethel, C.; Chrysostomou, D.; Crook, N.; Grandgeorge, M.; Mirnig, N. (Eds.) Human-Robot Interaction: Evaluation Methods and Their Standardization; Springer: Berlin/Heidelberg, Germany, 2020; Volume 12, p. 385. [Google Scholar] [CrossRef]
- Rueben, M.; Elprama, S.A.; Chrysostomou, D.; Jacobs, A. Introduction to (Re) Using Questionnaires in Human-Robot Interaction Research. In Human-Robot Interaction: Evaluation Methods and Their Standardization; Springer: Berlin/Heidelberg, Germany, 2020; pp. 125–144. [Google Scholar] [CrossRef]
- Bangor, A.; Kortum, P.T.; Miller, J.T. An empirical evaluation of the system usability scale. Int. J. Hum. Comput. Interact. 2008, 24, 574–594. [Google Scholar] [CrossRef]
- Hart, S.G.; Staveland, L.E. Development of NASA-TLX (Task Load Index): Results of empirical and theoretical research. Adv. Psychol. 1988, 52, 139–183. [Google Scholar] [CrossRef]
- Deemter, K.v.; Theune, M.; Krahmer, E. Real versus template-based natural language generation: A false opposition? Comput. Linguist. 2005, 31, 15–24. [Google Scholar] [CrossRef]
- Yu, Z.; Xu, Z.; Black, A.W.; Rudnicky, A. Strategy and policy learning for non-task-oriented conversational systems. In Proceedings of the 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue, Los Angeles, CA, USA, 13–15 September 2016; pp. 404–412. [Google Scholar] [CrossRef]
- Fensel, D.; Şimşek, U.; Angele, K.; Huaman, E.; Kärle, E.; Panasiuk, O.; Toma, I.; Umbrich, J.; Wahler, A. Introduction: What is a knowledge graph? In Knowledge Graphs; Springer International Publishing: Berlin/Heidelberg, Germany, 2020; pp. 1–10. [Google Scholar] [CrossRef]
- Brooke, J. SUS: A ‘quick and dirty’ usability scale. Usability Eval. Ind. 1996, 189, 4–7. [Google Scholar]
- National Aeronautics and Space Administration. NASA TLX: Task Load Index. 2020. Available online: https://humansystems.arc.nasa.gov/groups/TLX/ (accessed on 1 October 2022).
- Babakus, E.; Mangold, W.G. Adapting the SERVQUAL scale to hospital services: An empirical investigation. Health Serv. Res. 1992, 26, 767–786. [Google Scholar]
- Chyung, S.Y.; Roberts, K.; Swanson, I.; Hankinson, A. Evidence-Based Survey Design: The Use of a Midpoint on the Likert Scale. Perform. Improv. 2017, 56, 15–27. [Google Scholar] [CrossRef]
Category | Task Description | Intent Recognition
--- | --- | ---
Learning | 1: Explain terminology | Keywords |
Learning | 2: Reason operation relationships | Keywords |
Training | 3: Demonstrate auto phone assembly | Keywords |
Training | 4: Guide phone assembly | Keywords |
Assistance | 5: Assist package delivery | BERT Model |
Assistance | 6: Check system status | BERT Model |
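The table above maps each task category to its intent-recognition method: keyword matching for the learning and training tasks and the BERT model for the assistance tasks. A dispatch along those lines might look like the following sketch, in which both recognizers are hypothetical stubs:

```python
# Hypothetical recognizer stubs; the deployed system uses keyword matching
# for learning/training tasks and a fine-tuned BERT model for assistance.
def keyword_recognizer(utterance):
    """Stand-in keyword matcher for learning/training intents."""
    return "demo_assembly" if "demonstrate" in utterance.lower() else "unknown"

def bert_recognizer(utterance):
    """Stand-in for the fine-tuned BERT intent classifier."""
    return "assist_delivery" if "package" in utterance.lower() else "check_status"

# Route each task category to its intent-recognition method.
ROUTES = {
    "learning": keyword_recognizer,
    "training": keyword_recognizer,
    "assistance": bert_recognizer,
}

def recognize_intent(category, utterance):
    return ROUTES[category](utterance)

print(recognize_intent("assistance", "Please deliver the package"))  # assist_delivery
```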
Task ID | IER | SER | TSR | ACT (s)
--- | --- | --- | --- | ---
5 | 0.1 | 0.43 | 0.50 | 31.08 |
6 | 0.2 | 0.06 | 0.76 | 15.28 |
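The four quantitative metrics above (intent error rate, slot error rate, task success rate, and average communication time) can be derived from interaction logs as simple ratios and an average; the log schema and the sample dialogues below are hypothetical:

```python
from statistics import mean

def evaluate(logs):
    """Compute IER, SER, TSR and ACT from a list of interaction records.

    Each record is a dict with boolean flags for intent/slot errors and
    task success, plus the communication time in seconds (assumed schema).
    """
    n = len(logs)
    return {
        "IER": sum(r["intent_error"] for r in logs) / n,  # intent errors / dialogues
        "SER": sum(r["slot_error"] for r in logs) / n,    # slot errors / dialogues
        "TSR": sum(r["task_success"] for r in logs) / n,  # successes / dialogues
        "ACT": mean(r["time_s"] for r in logs),           # mean communication time
    }

# Hypothetical log of four dialogues:
logs = [
    {"intent_error": False, "slot_error": True,  "task_success": True,  "time_s": 20.0},
    {"intent_error": True,  "slot_error": False, "task_success": False, "time_s": 35.0},
    {"intent_error": False, "slot_error": False, "task_success": True,  "time_s": 25.0},
    {"intent_error": False, "slot_error": True,  "task_success": False, "time_s": 40.0},
]
m = evaluate(logs)
print(m["IER"], m["TSR"], m["ACT"])  # 0.25 0.5 30.0
```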
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Li, C.; Chrysostomou, D.; Pinto, D.; Hansen, A.K.; Bøgh, S.; Madsen, O. Hey Max, Can You Help Me? An Intuitive Virtual Assistant for Industrial Robots. Appl. Sci. 2023, 13, 205. https://doi.org/10.3390/app13010205