Augmented Reality Interface for Adverse-Visibility Conditions Validated by First Responders in Rescue Training Scenarios
Abstract
1. Introduction
2. Materials and Methods
2.1. Roles and Contextualization of the Architecture
- Command Center (C2): The personnel at the control station in charge of the C2 must receive all information from the mission members in order to monitor progress and make appropriate decisions for mission success. They also need to be able to send messages to the FRs, since mission objectives must be updated as the situation evolves. Although communication with the command center is crucial in rescue scenarios, this study focuses on the AR interface deployed on the FRs and their limited equipment. C2 information is displayed on regular screens and managed with fixed computer equipment, which allows greater processing power at the expense of added weight;
- First responder (FR): The professionals addressing the requirements of the emergency field, who need access to accurate information in every situation. They face several equipment limitations, as extra weight can hinder movement or endanger the FR during operations. Therefore, the presented architecture favors hardware that is as integrated as possible with the equipment the FR already carries, while still being capable of running the software required for the new functionalities. In addition, just as PPE is evolving towards lightweight, heat-efficient materials [28], the lighter the hardware the FR has to carry, the easier it is for the FR to move during the mission. A minimal sketch of the FR-to-C2 data exchange is given below.
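To make the C2-FR data flow concrete, the following is a minimal sketch of how an FR gateway might publish sensor readings and receive mission updates over MQTT, the lightweight publish/subscribe protocol cited in the references. The broker address, topic names, and payload fields are illustrative assumptions, not the project's actual schema.

```python
# Minimal sketch of the FR <-> C2 data exchange over MQTT.
# Assumptions (not from the paper): broker address, topic names,
# payload fields, and the paho-mqtt 1.x client constructor.
import json
import time

import paho.mqtt.client as mqtt

BROKER = "c2.local"                        # hypothetical C2 broker
TELEMETRY_TOPIC = "mission/fr1/telemetry"  # FR -> C2 sensor data
COMMAND_TOPIC = "mission/fr1/commands"     # C2 -> FR mission updates


def on_command(client, userdata, msg):
    """Display mission updates pushed by the C2 on the AR interface."""
    command = json.loads(msg.payload)
    print(f"C2 update: {command.get('objective', '<none>')}")


client = mqtt.Client()            # paho-mqtt 1.x style constructor
client.on_message = on_command
client.connect(BROKER, 1883)
client.subscribe(COMMAND_TOPIC)
client.loop_start()

# Publish lightweight readings so the C2 can monitor mission progress.
for _ in range(3):
    reading = {"heart_rate": 92, "breath_rate": 17, "ts": time.time()}
    client.publish(TELEMETRY_TOPIC, json.dumps(reading), qos=1)
    time.sleep(5)

client.loop_stop()
client.disconnect()
```

QoS 1 is used in the sketch so telemetry survives brief link drops, which matches the adverse, infrastructure-less conditions the architecture targets.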
2.2. AR Display Technology
2.3. Thermal Camera Device and Streaming Configuration
2.4. Sensor Kit and Data-Flow Communication System
2.5. Validation in Rescue Training Scenarios
3. Results
3.1. Basis of the Design of a Modular Interface and Development Frameworks
3.2. AR Sensor Data View
3.3. Robust Vision Module Integrated View
- The label of each item indicates the type of object detected; in this case, only two classes are considered: person and car;
- The confidence of each detection ranges from 0 to 1: a value such as 0.1 means the algorithm has low certainty that the detection is correct, whereas a value such as 0.9 means the detection is highly reliable;
- The coordinates of a bounding box that contains the detection. These coordinates are expressed relative to the analyzed image and given by four numbers: the normalized x and y coordinates of the center of the bounding box, with the origin at the top-left corner, and the normalized width and height of the box. A minimal conversion sketch is given after this list.
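To illustrate how the interface can consume these three fields, the sketch below converts the normalized center/size representation into pixel corner coordinates for overlay rendering. The `Detection` field names and the 0.5 confidence cut-off are illustrative assumptions, not the module's actual schema.

```python
# Sketch: turning normalized detections (label, confidence, center/size
# box) into pixel-space boxes for drawing on the AR view.
from dataclasses import dataclass


@dataclass
class Detection:
    label: str         # "person" or "car"
    confidence: float  # 0.0-1.0 certainty of the detection
    cx: float          # normalized center x (origin at top-left corner)
    cy: float          # normalized center y
    w: float           # normalized box width
    h: float           # normalized box height


def to_pixel_box(det: Detection, img_w: int, img_h: int):
    """Convert the normalized center/size box to pixel corner coordinates."""
    x_min = (det.cx - det.w / 2) * img_w
    y_min = (det.cy - det.h / 2) * img_h
    x_max = (det.cx + det.w / 2) * img_w
    y_max = (det.cy + det.h / 2) * img_h
    return (round(x_min), round(y_min), round(x_max), round(y_max))


# Draw only reasonably certain detections, e.g. confidence >= 0.5.
detections = [Detection("person", 0.91, 0.48, 0.55, 0.10, 0.30)]
boxes = [to_pixel_box(d, 640, 512) for d in detections if d.confidence >= 0.5]
print(boxes)  # -> [(275, 205, 339, 358)] on a 640x512 thermal frame
```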
3.4. Live Situation Map View
4. Discussion
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Arregui, H.; Irigoyen, E.; Cejudo, I.; Simonsen, S.; Ribar, D.; Kourtis, M.A.; Spyridis, Y.; Stathakarou, N.; Batistatos, M.C. An Augmented Reality Framework for First Responders: The RESPOND-A project approach. In Proceedings of the 2022 Panhellenic Conference on Electronics & Telecommunications (PACET), Tripolis, Greece, 2–3 December 2022; pp. 1–6. [Google Scholar]
- Kapalo, K.A.; Bockelman, P.; LaViola, J.J., Jr. “Sizing Up” Emerging Technology for Firefighting: Augmented Reality for Incident Assessment. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Perth, Australia, 26–28 November 2018; SAGE Publications: Los Angeles, CA, USA, 2018; Volume 62, pp. 1464–1468. [Google Scholar]
- NG911. Next Generation 911. Available online: https://www.911.gov/issues/ng911/ (accessed on 17 September 2024).
- Camp, P.J.; Hudson, J.M.; Keldorph, R.B.; Lewis, S.; Mynatt, E.D. Supporting communication and collaboration practices in safety-critical situations. In Proceedings of the CHI’00 Extended Abstracts on Human Factors in Computing Systems, The Hague, The Netherlands, 1–6 April 2000; pp. 249–250. [Google Scholar]
- Neustaedter, C.; McGee, J.; Dash, P. Sharing 9-1-1 video call information between dispatchers and firefighters during everyday emergencies. In Proceedings of the 2019 on Designing Interactive Systems Conference, San Diego, CA, USA, 23–28 June 2019; pp. 567–580. [Google Scholar]
- Ludwig, T.; Reuter, C.; Pipek, V. What you see is what I need: Mobile reporting practices in emergencies. In ECSCW 2013: Proceedings of the 13th European Conference on Computer Supported Cooperative Work, Paphos, Cyprus, 21–25 September 2013; Springer: London, UK, 2013; pp. 181–206. [Google Scholar]
- Fernández García, A.; Oregui, X.; Lingos, K.; Konstantoudakis, K.; Belmonte Hernández, A.; Iragorri, I.; Zarpalas, D. Smart Helmet: Combining Sensors, AI, Augmented Reality, and Personal Protection to Enhance First Responders’ Situational Awareness. IT Prof. 2023, 25, 45–53. [Google Scholar] [CrossRef]
- Oregui, X.; Azpiroz, I.; Ruiz, V.; Larraga, B.; Gutiérrez, Á.; Olaizola, I.G. Modular Multi-Platform Interface to Enhance the Situational Awareness of the First Responders. In Proceedings of the ISCRAM 2024 Conference, Münster, Germany, 25–29 May 2024; Volume 21. [Google Scholar]
- Amon, F.; Hamins, A.; Rowe, J. First responder thermal imaging cameras: Establishment of representative performance testing conditions. In Proceedings of the Thermosense XXVIII SPIE, Kissimmee, FL, USA, 17–20 April 2006; Volume 6205, pp. 293–304. [Google Scholar]
- Konsin, L.S.; Nixdorff, S. Fire service and first responder thermal imaging camera (TIC) advances and standards. In Proceedings of the Infrared Technology and Applications XXXIII, SPIE, Orlando, FL, USA, 9–13 April 2007; Volume 6542, pp. 1096–1098. [Google Scholar]
- Park, H.; Park, J.; Lin, S.H.; Boorady, L.M. Assessment of Firefighters’ needs for personal protective equipment. Fash. Text. 2014, 1, 1–13. [Google Scholar] [CrossRef]
- Chalimas, T.; Mania, K. Cross-Device Augmented Reality Systems for Fire and Rescue based on Thermal Imaging and Live Tracking. In Proceedings of the 2023 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), Sydney, Australia, 16–20 October 2023; pp. 50–54. [Google Scholar] [CrossRef]
- Bhattarai, M.; Jensen-Curtis, A.R.; Martínez-Ramón, M. An embedded deep learning system for augmented reality in firefighting applications. In Proceedings of the 2020 19th IEEE International Conference on Machine Learning and Applications (ICMLA), Miami, FL, USA, 14–17 December 2020; pp. 1224–1230. [Google Scholar] [CrossRef]
- Yan, X.; Tian, D.; Zhou, D.; Wang, C.; Zhang, W. IV-YOLO: A Lightweight Dual-Branch Object Detection Network. Preprints 2024, 2024082054. [Google Scholar] [CrossRef]
- Meneguzzi, F.; Oh, J.; Chakraborty, N.; Sycara, K.; Mehrotra, S.; Tittle, J.; Lewis, M. A Cognitive Architecture for Emergency Response. In Proceedings of the 11th ACM International Conference on Autonomous Agents and Multiagent Systems, Valencia, Spain, 4–8 June 2012. [Google Scholar]
- Cooper, G. Cognitive load theory as an aid for instructional design. Aust. J. Educ. Technol. 1990, 6, 1–6. [Google Scholar] [CrossRef]
- Sweller, J.; van Merriënboer, J.; Paas, F. Cognitive Architecture and Instructional Design: 20 years later. Educ. Psychol. Rev. 2019, 31, 261–292. [Google Scholar] [CrossRef]
- deBettencourt, M.; Keene, P.; Awh, E.; Vogel, E. Real-time triggering reveals concurrent lapses of attention and working memory. Nat. Hum. Behav. 2019, 3, 808–816. [Google Scholar] [CrossRef] [PubMed]
- Zavitsanou, A.; Drigas, A. Attention and working memory. Int. J. Recent Contrib. Eng. Sci. IT 2021, 9, 81–92. [Google Scholar] [CrossRef]
- Haapalainen, E.; Kim, S.; Forlizzi, J.; Dey, A. Psycho-physiological measures for assessing cognitive load. In Proceedings of the 12th ACM International Conference on Ubiquitous Computing, Copenhagen, Denmark, 26–29 September 2010. [Google Scholar]
- Nourbakhsh, N.; Wang, Y.; Chen, F.; Calvo, R. Using galvanic skin response for cognitive load measurement in arithmetic and reading tasks. In Proceedings of the 24th Australian Computer-Human Interaction Conference, Melbourne, Australia, 26–30 November 2012. [Google Scholar]
- Hughes, A.; Hancock, M.; Marlow, S.; Stowers, K.; Salas, E. Cardiac measures of cognitive workload: A meta-analysis. Hum. Factors 2019, 61, 393–414. [Google Scholar] [CrossRef] [PubMed]
- Grassmann, M.; Vlemincx, E.; von Leupoldt, A.; Mittelstädt, J.; Van den Bergh, O. Respiratory Changes in Response to Cognitive Load: A Systematic Review. Neural Plast. 2016, 2016, 1–16. [Google Scholar] [CrossRef] [PubMed]
- Ikehara, C.; Crosby, M. Assessing cognitive load with physiological sensors. In Proceedings of the 38th Annual Hawaii International Conference On System Sciences, Big Island, HI, USA, 6 January 2005. [Google Scholar]
- Bräker, J.; Osterbrink, A.; Semmann, M.; Wiesche, M. User-centered requirements for augmented reality as a cognitive assistant for safety-critical services. Bus. Inf. Syst. Eng. 2023, 65, 161–178. [Google Scholar] [CrossRef]
- Siltanen, S.; Oksman, V.; Ainasoja, M. User-centered design of augmented reality interior design service. Int. J. Arts Sci. 2013, 6, 547. [Google Scholar]
- RESCUER. First RESponder-Centered Support Toolkit for Operating in Adverse and InfrastrUcture-Less EnviRonments. Available online: https://cordis.europa.eu/project/id/101021836 (accessed on 17 September 2024).
- Santos, G.; Marques, R.; Ribeiro, J.; Moreira, A.; Fernandes, P.; Silva, M.; Fonseca, A.; Miranda, J.M.; Campos, J.B.; Neves, S.F. Firefighting: Challenges of smart PPE. Forests 2022, 13, 1319. [Google Scholar] [CrossRef]
- v4l2loopback. 2024. Available online: https://github.com/umlaeute/v4l2loopback (accessed on 17 September 2024).
- Zeng, H.; Zhang, Z.; Shi, L. Research and implementation of video codec based on FFmpeg. In Proceedings of the 2016 International Conference on Network and Information Systems for Computers (ICNISC), Wuhan, China, 15–17 April 2016; pp. 184–188. [Google Scholar]
- Soni, D.; Makwana, A. A survey on mqtt: A protocol of internet of things (iot). In Proceedings of the INTERNATIONAL Conference on Telecommunication, Power Analysis and Computing Techniques (ICTPACT-2017), Chennai, India, 6–8 April 2017; Volume 20, pp. 173–177. [Google Scholar]
- Wright, P.; Asani, I.; Pimenta, N.; Chaves, P.; Oliff, W.; Sakellari, G. Infrastructure-Less Prioritized Communication Platform for First Responders. IT Prof. 2023, 25, 29–37. [Google Scholar] [CrossRef]
- Terven, J.; Córdova-Esparza, D.M.; Romero-González, J.A. A comprehensive review of YOLO architectures in computer vision: From YOLOv1 to YOLOv8 and YOLO-NAS. Mach. Learn. Knowl. Extr. 2023, 5, 1680–1716. [Google Scholar] [CrossRef]
- Teledyne FLIR Company. Teledyne FLIR Thermal Dataset. Available online: https://www.flir.eu/oem/adas/adas-dataset-form/ (accessed on 17 September 2024).
Tool | Description of Corresponding Sensors
---|---
Biosignals | Biological signals of the FR, including heart rate and breath rate. The icon changes from green to red depending on whether the received values are within normal ranges.
Black box | A device carrying environmental sensors: temperature, humidity, and the number of people around it.
Ad hoc network | Information on the wireless network generated by the gateways that each FR carries during an operation: the battery level of the gateway and the status of communications with the C2 and the other FRs.
Augmented olfaction | Concentration values of up to five types of gases. The icon turns from green to yellow to red depending on whether the gas levels become dangerous (see the sketch after this table).
Signs of life | Indicates whether life is detected (through walls, for example) and provides an estimated distance to the found individual. The icon turns from yellow to green when life is detected by the device.
Radar | Shows the number of objects approaching the FR. The icon blinks red when an object is approaching.
Wireless finder | Measures the distance to devices that emit wireless signals such as Bluetooth or Wi-Fi, which usually come from the mobile phones of victims under the rubble. The interface also guides the FR through the initial calibration phase. The icon turns from green to yellow to red as the FR gets closer to the wireless device.
Robust Vision | Displays the type and number of objects detected by the object-detection module. It can be switched to a full-camera view that shows the IR video feed with red bounding boxes instead of the transparent view.
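As an illustration of the green/yellow/red icon logic described in the table, the following sketch maps a reading from the augmented-olfaction tool to an icon color. The gas name and threshold values are illustrative assumptions; the actual device thresholds are not specified in the paper.

```python
# Sketch of the green -> yellow -> red icon-state logic, using the
# augmented-olfaction tool as an example. Thresholds are assumptions.
DANGER_THRESHOLDS_PPM = {"CO": (35, 100)}  # (warning, danger) levels


def icon_color(gas: str, concentration_ppm: float) -> str:
    """Map a gas reading to the icon color shown in the AR interface."""
    warning, danger = DANGER_THRESHOLDS_PPM[gas]
    if concentration_ppm >= danger:
        return "red"
    if concentration_ppm >= warning:
        return "yellow"
    return "green"


print(icon_color("CO", 12))   # green  (safe)
print(icon_color("CO", 60))   # yellow (warning)
print(icon_color("CO", 150))  # red    (dangerous)
```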
Tool | Navacerrada | Thessaloniki | Modane
---|---|---|---
Biosignals | 4.14 | 4.00 | 4.38
Robust Vision | 3.43 | 3.33 | 4.38
Black Box | not tested | 3.33 | 4.25
Localisation | 3.57 | 2.33 | 4.25
Wireless Finder | 3.43 | 3.00 | 4.00
Signs of Life | 3.00 | 3.00 | 3.75
Augmented Olfaction | not tested | not tested | 3.67
Ad-hoc Network | 3.86 | 2.67 | 3.63
Radar | not tested | 3.00 | 3.13
Overall satisfaction ¹ | 3.57 | 3.00 | 3.89
Question | Navacerrada | Thessaloniki | Modane | Overall ¹
---|---|---|---|---
How would you rate the capacity of the user interface to improve your Situational Awareness? | 4.29 | 3.67 | 4.75 | 4.40
How would you rate the user-friendliness/ease of controlling and operating this subsystem? | 3.86 | 3.33 | 4.25 | 3.95
Were you satisfied with the way the information is provided and displayed for the functionalities? | 3.57 | 3.00 | 3.89 | 3.62
Overall satisfaction with the interface ² | 3.91 | 3.33 | 4.30 | 3.99
How would you rate the relevance of the information provided by the RV Module? | 4.14 | 3.67 | 4.33 | 4.15
How would you rate the visualisation of the RV Module output? | 3.93 | 3.33 | 4.33 | 4.01
How would you rate the capacity of the RV Module to enhance your SA compared to the existing situation? | 3.93 | 3.67 | 4.33 | 4.11
The functionality could improve my efficiency in operations | 4.07 | 3.67 | 4.67 | 4.27
The functionality could improve my safety in operations | 4.14 | 4.00 | 4.44 | 4.25
The information provided disturbed me during the tests | 2.93 | 4.00 | 2.44 | 2.89
I would use this functionality during a real mission | 3.93 | 3.67 | 4.44 | 4.11
Overall satisfaction with the RV module ² | 3.74 | 3.29 | 4.13 | 3.84