An Indoor Navigation Methodology for Mobile Devices by Integrating Augmented Reality and Semantic Web
Abstract
1. Introduction
- The design of a methodology for indoor navigation systems incorporating Augmented Reality and Semantic Web technologies.
- The implementation and availability of a functional indoor navigation prototype application and ontology for academic environments.
2. Related Work
2.1. Indoor Navigation Using AR
2.2. Common Component Strategies and Tools
3. Methodology Design
3.1. Spatial Modeling
3.2. Data Management and Structuring
3.3. Positioning and Navigation
- Establishing the initial location. This phase establishes the user’s initial position within the spatial model, presenting the whole navigation environment and virtual objects relative to that position. The initial position can be established in multiple ways, as previously mentioned, for example, through marker detection [15], Lbeacons and sensors [21], or WiFi fingerprinting [27].
- Tracking user’s movements. Once the initial position is established, the user’s position is tracked and updated to correctly place all virtual objects involved during navigation (displayed in positions relative to the tracked movements). Thus, as the user moves throughout the building, their position is commonly updated from one point to an adjacent point through the mobile device’s sensors (e.g., WiFi, Bluetooth, compass) [21,22,23] or computer vision [24,28], among other approaches.
3.4. Content Visualization
- Data presentation. This component presents the information of the points of interest through non-augmented interfaces. The goal is to present such information concisely for reading and browsing. Moreover, it allows the user to configure initial settings and select a destination for navigation.
- Navigation guidance. In contrast, navigation guidance is provided by augmented objects visible through the mobile device screen, ensuring that the user can follow the guidance while still paying attention to their steps. According to the user position tracking, such augmented objects stay fixed at their corresponding points so that the user can follow the navigation route. Moreover, additional augmented objects can be displayed during navigation, such as when discovering points of interest within a radius of the user’s position (contextual information) or when the desired position is reached.
4. Methodology Implementation
4.1. Spatial Model
- Digitalization. Scale plans of the building are used to create the graph, generating digital images that are imported into Unity3D and used as a reference for placing the graph’s nodes and visualizing routes. Thus, the plans should be digitized, and their scale (pixels per unit) must be defined to import them. In this sense, 1 unit in Unity3D represents approximately 1 m in a real environment (digitized plans and scale examples are later presented in the experiments section). Moreover, to keep the images homogeneous (no annotations of measurements, indications, etc.), it is recommended to prepare the plans using a design software tool such as SketchUp [36].
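The pixels-per-unit scale above maps plan coordinates to engine units. A minimal sketch of that conversion (the function name and the 50 px-per-meter figure are illustrative assumptions, not values from the paper):

```python
def pixels_to_units(px, py, pixels_per_unit):
    """Convert a point on a digitized plan (pixels) into engine units.

    Under the convention in the text, 1 Unity3D unit ~ 1 m, so
    pixels_per_unit is effectively the plan's pixels-per-meter scale.
    """
    return px / pixels_per_unit, py / pixels_per_unit

# E.g., a plan scanned at (an assumed) 50 px per meter:
x, y = pixels_to_units(880, 1090, 50)  # -> (17.6, 21.8)
```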
- Graph creation. Within the system, nodes refer to points of interest or landmarks within buildings, storing information that aids in the extraction of information from the KB (ready for Semantic Web usage). On the other hand, edges represent corridors and help in the creation and visualization of navigation paths. In this regard, native Unity3D objects were used to represent the graph nodes and edges. Thus, the nodes have their corresponding location within the native coordinate system of the engine, using three axes to enable multi-floor positioning. In turn, each of the nodes contains an array of adjacent nodes (a GameObject array in C#), which serves to represent the edges of the graph. Although the methodology does not limit the number of floors that can be represented, the implemented prototype uses a spatial model of two floors for the two tested scenarios (later presented in the experiments). It is worth mentioning that the performance of the Unity3D platform depends on the number of objects visible at any given time [37]; thus, it is feasible to use numerous objects for modeling any building.
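The node-with-adjacency-array structure described above can be sketched outside of Unity3D as follows (a hedged Python stand-in for the C# GameObject array; the node names and coordinates are made up for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """Counterpart of the Unity3D node objects described above: a position
    in a three-axis coordinate system (the vertical axis enables multi-floor
    positioning) plus an adjacency list that plays the role of the
    C# GameObject array representing the graph's edges."""
    name: str
    position: tuple              # (x, y, z) in engine units (~meters)
    adjacent: list = field(default_factory=list)

def connect(a, b):
    """Add an undirected edge (a corridor segment) between two nodes."""
    a.adjacent.append(b)
    b.adjacent.append(a)

lobby = Node("Lobby", (0.0, 0.0, 0.0))
stairs = Node("Stairs", (5.0, 0.0, 2.0))
lab = Node("Laboratory 1", (5.0, 3.0, 8.0))  # upper floor: y > 0
connect(lobby, stairs)
connect(stairs, lab)
```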
4.2. Data Structuring
- The Protégé tool [38] was used to build the ontology, maintaining a hierarchy of classes to reduce redundant properties and to comply with the Linked Data principles [39]. The scheme of the proposed and implemented ontology can be seen in Figure 2. In this sense, arNav refers to the prefix of the application, Proposal refers to the minimal part of the ontology, designed to work in any environment, and Implementation provides particular definitions according to the domain at hand, in this case the academic domain. Please note that some labels and properties (alignments, equivalences) are not shown to keep the figure legible.
- Once the ontology was defined, the information for creating the KB can be entered manually (RDF files) or through a system assistant. For the implemented system, the applied strategy was to load information from a spreadsheet file into the ontology using the Protégé plugin Cellfie (https://github.com/protegeproject/cellfie-plugin, accessed on 7 June 2021), which allows the creation of classes, entities, attributes, and relationships through the specification of rules. Please note that information from multiple navigation scenarios can be stored in the same KB. In this sense, the KB for the two cases in the experiments contains 1347 RDF triples (81 points of interest from a total of 149 entities).
- The client-server model was used to keep the KB updated and to reduce the mobile device’s processing load regarding information consumption. For this purpose, the Virtuoso Open Server tool [40] was used since it supports RDF/OWL storage and offers a SPARQL endpoint for its consumption. For testing purposes, the developed prototype was configured under a local network using a 2.4 GHz WiFi connection, where the SPARQL queries were submitted from the mobile device to the server through HTTP requests using Unity3D’s native networking library, and the JSON response was then parsed for correct visualization in the prototype interface. Five different SPARQL queries are performed through the menus and interfaces offered by the prototype, extracting the following information: (1) information linking the nodes with the points of interest of the navigation environment, (2) information of all points of interest by category, (3) information of all academic staff, (4) detailed information of a selected point of interest, and (5) detailed information of a person. Listing 1 presents a SPARQL query example to retrieve all the information of a point of interest of a certain institution and the staff members related to that location. The information retrieved by such a query is later used by the next module for visualizing content.
```sparql
PREFIX arNav: <http://www.indoornav.com/arNav#>
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT * WHERE {
  arNav:Research_Lab arNav:nodeID ?id ;
                     foaf:name ?name ;
                     sch:floorLevel ?floor ;
                     arNav:institution "UAT" .
  OPTIONAL {
    arNav:Research_Lab arNav:inChargeBy ?person .
    ?person foaf:firstName ?firstName ;
            arNav:fatherName ?lastName ;
            arNav:extension ?extension .
    OPTIONAL { ?person foaf:title ?nameTitle . }
  }
  OPTIONAL { arNav:Research_Lab sch:description ?desc . }
  OPTIONAL { arNav:Research_Lab arNav:areas ?areas . }
}
```
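The JSON-parsing step mentioned above (server response to UI data) can be sketched as follows. The response below follows the standard SPARQL 1.1 JSON results format; the variable names echo Listing 1, but the values are invented for illustration:

```python
import json

# A trimmed response in the SPARQL 1.1 JSON results format; the variable
# names follow Listing 1, but the binding values here are made up.
response_text = json.dumps({
    "head": {"vars": ["id", "name", "floor", "firstName", "lastName"]},
    "results": {"bindings": [{
        "id": {"type": "literal", "value": "n42"},
        "name": {"type": "literal", "value": "Research Lab"},
        "floor": {"type": "literal", "value": "1"},
        "firstName": {"type": "literal", "value": "Ana"},
        "lastName": {"type": "literal", "value": "Rios"},
    }]},
})

def parse_bindings(text):
    """Flatten SPARQL JSON bindings into plain dicts for the UI layer."""
    data = json.loads(text)
    return [{var: b[var]["value"] for var in b}
            for b in data["results"]["bindings"]]

rows = parse_bindings(response_text)
```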
4.3. Positioning and Navigation
- Initial position. In order to establish the user’s initial position within the spatial model, QR codes were used to store information about the building and the location under the Unity3D coordinate system. The ZXing library [44] was used for decoding the QR codes. It is worth mentioning that a QR code must be rescanned if the user’s position is lost (recalibration/repositioning), after which navigation resumes automatically.
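The paper states that each QR code stores the building and a location in the Unity3D coordinate system, but not the payload layout. A minimal sketch under an assumed semicolon-separated format (the field order, the heading field, and the building name are all hypothetical):

```python
def parse_qr_payload(payload):
    """Parse a hypothetical QR payload of the form 'building;x;y;z;heading'
    into an initial pose. The exact field layout is an assumption; the
    paper only states that building and Unity3D coordinates are stored."""
    building, x, y, z, heading = payload.split(";")
    return {
        "building": building,
        "position": (float(x), float(y), float(z)),
        "heading_deg": float(heading),
    }

pose = parse_qr_payload("EngineeringBuilding;12.5;0.0;4.2;90")
```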
- Tracking. Once the initial position is established, the user’s location is tracked using the pose tracking component (TrackedPoseDriver) from the Unity Engine Spatial Tracking library, attached to the AR Camera. This component captures visually distinctive features in the environment to serve as anchors, tracking the device’s movements and rotations relative to the detected features’ positions. It is also supported by the mobile phone’s sensors (gyroscope and accelerometer). Afterward, the detected movement changes are processed by the ARSessionOrigin component, provided by the AR Foundation framework, to translate the movements into the Unity Engine coordinate system.
- Navigation. Regarding navigation, the graph denoting the spatial model is represented as an adjacency matrix (using each node’s array of adjacent nodes), and Dijkstra’s pathfinding algorithm [35] is then applied to find the shortest route to the destination.
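The pathfinding step above can be sketched as follows. This is a generic Dijkstra over a weighted adjacency mapping, not the prototype’s C# code, and the node names and corridor distances are invented:

```python
import heapq

def dijkstra(adjacency, source, target):
    """Shortest path over {node: [(neighbor, distance_m), ...]}.
    A sketch of the step described above; the prototype operates on
    Unity3D node objects rather than Python dicts."""
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in adjacency[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    # Reconstruct the route by walking predecessors back to the source.
    path = [target]
    while path[-1] != source:
        path.append(prev[path[-1]])
    return list(reversed(path)), dist[target]

graph = {
    "Entrance": [("Hall", 5.0)],
    "Hall": [("Entrance", 5.0), ("Stairs", 8.0), ("Lab", 20.0)],
    "Stairs": [("Hall", 8.0), ("Lab", 6.0)],
    "Lab": [("Hall", 20.0), ("Stairs", 6.0)],
}
route, meters = dijkstra(graph, "Entrance", "Lab")
# -> (['Entrance', 'Hall', 'Stairs', 'Lab'], 19.0)
```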
4.4. Visualizing Content
5. Experiments and Results
5.1. Evaluation Criteria
- Position setting. This aspect measures the effectiveness of QR code scanning as the initial positioning method, considering how often this function was performed correctly, positioning the user in the right place and orientation.
- Distance offset. This aspect measures the efficiency of continuous tracking of the user’s location using the AR functionality, as the distance offset between the user’s real location and the one registered by the system.
- Navigation time. This aspect considers the navigation time in seconds to the different points of interest established as objectives for the tests. Additionally, the recorded times are compared between the scenarios where the mobile device with the proposed system is used and those where the user navigates without it.
- AR display issues. This aspect counts the number of errors during the content display of virtual objects, particularly the incorrect display of navigation guides and menus, resulting in partial or incomplete navigation.
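The distance-offset criterion above can be made concrete with a small sketch. The Euclidean formulation and the sample positions are assumptions for illustration (the paper does not specify how the offset is computed):

```python
import math

def distance_offset(real, tracked):
    """Euclidean offset in meters between the user's real position and the
    position registered by the system (assumed metric for the criterion)."""
    return math.dist(real, tracked)

def mean_offset(pairs):
    """Average offset over a list of (real, tracked) position pairs."""
    return sum(distance_offset(r, t) for r, t in pairs) / len(pairs)

samples = [((0.0, 0.0), (0.3, 0.4)),    # 0.5 m off
           ((10.0, 5.0), (10.0, 5.0))]  # exact match
avg = mean_offset(samples)  # -> 0.25
```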
5.2. Case 1: University Building
5.2.1. Participants
- All users are familiar with the mobile platform and understand the usefulness and scope of augmented reality applications.
- 9 of the 24 participants had no prior knowledge of the environment (being unfamiliar with the location of the targets), and the remaining ones had knowledge of only some areas. However, all the participants were instructed and supervised to follow only the directions provided by the implemented system.
- Each of the participants performed a set of 6 navigation tasks, half of them using the implemented system and the rest without it. Moreover, each user alternated the tasks in which the implemented system was used, generating a data set of 144 entries in total (72 entries using the implemented system and 72 without it).
5.2.2. Scenario Description and Navigation Tasks
5.2.3. Results
Position Setting
Distance Offset
AR Display Issues
User Feedback
5.3. Case 2: Cinvestav Building
5.3.1. Participants
- The users became familiar with the mobile platform and understood the usefulness and scope of augmented reality applications.
- All participants had prior knowledge of the environment but were instructed and supervised to follow only the directions provided by the device.
- The participants performed the navigation using the same mobile device (alternating its use), generating a dataset of 25 results.
5.3.2. Scenario Description and Navigation Tasks
5.3.3. Results
Position Setting
Distance Offset
AR Display Issues
User Feedback
6. Discussion
6.1. Implementation
6.2. Position Setting
6.3. Distance Offset
6.4. Navigation Time
6.5. AR Display Issues
6.6. User Feedback
6.7. Data Model
6.8. Data Storage and Consumption
6.9. Limitations
6.10. Application Usage
7. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
AR | Augmented Reality |
KB | Knowledge base |
References
- Giudice, N.A.; Guenther, B.A.; Kaplan, T.M.; Anderson, S.M.; Knuesel, R.J.; Cioffi, J.F. Use of an Indoor Navigation System by Sighted and Blind Travelers: Performance Similarities across Visual Status and Age. ACM Trans. Access. Comput. 2020, 13, 1–27.
- Ma, C.; Yang, J.; Chen, J.; Tang, Y. Indoor and outdoor positioning system based on navigation signal simulator and pseudolites. Adv. Space Res. 2018, 62, 2509–2517.
- Gang, H.S.; Pyun, J.Y. A Smartphone Indoor Positioning System Using Hybrid Localization Technology. Energies 2019, 12, 3702.
- Liu, L.; Li, B.; Zlatanova, S.; van Oosterom, P. Indoor navigation supported by the Industry Foundation Classes (IFC): A survey. Autom. Constr. 2021, 121, 103436.
- Yang, J.; Kang, Z.; Zeng, L.; Hope Akwensi, P.; Sester, M. Semantics-guided reconstruction of indoor navigation elements from 3D colorized points. ISPRS J. Photogramm. Remote Sens. 2021, 173, 238–261.
- Alnabhan, A.; Tomaszewski, B.M. INSAR: Indoor navigation system using augmented reality. In Proceedings of the Sixth ACM SIGSPATIAL International Workshop on Indoor Spatial Awareness, ISA ’14, Dallas/Fort Worth, TX, USA, 4 November 2014; pp. 36–43.
- Hoang, M.K.; Schmitz, S.; Drueke, C.; Vu, D.H.T.; Schmalenstroeer, J.; Haeb-Umbach, R. Server based indoor navigation using RSSI and inertial sensor information. In Proceedings of the 10th Workshop on Positioning, Navigation and Communication, WPNC 2013, Dresden, Germany, 20–21 March 2013; pp. 1–6.
- Ozdenizci, B.; Coskun, V.; Ok, K. NFC Internal: An Indoor Navigation System. Sensors 2015, 15, 7571–7595.
- Rao, J.; Qiao, Y.; Ren, F.; Wang, J.; Du, Q. A Mobile Outdoor Augmented Reality Method Combining Deep Learning Object Detection and Spatial Relationships for Geovisualization. Sensors 2017, 17, 1951.
- Neges, M.; Koch, C.; König, M.; Abramovici, M. Combining visual natural markers and IMU for improved AR based indoor navigation. Adv. Eng. Inform. 2017, 31, 18–31.
- Lin, C.H.; Chung, Y.; Chou, B.Y.; Chen, H.Y.; Tsai, C.Y. A novel campus navigation APP with augmented reality and deep learning. In Proceedings of the 2018 IEEE International Conference on Applied System Invention (ICASI), Tokyo, Japan, 13–17 April 2018; pp. 1075–1077.
- Kokorogianni, V.; Papaioannou, E.; Kaklamanis, C. MyARCampus: An augmented reality application for facilitating navigation in a university campus. In Proceedings of the SOCIOINT 4th International Conference on Education, Social Sciences and Humanities, Dubai, United Arab Emirates, 10–12 July 2017; pp. 49–58.
- Katsaros, A.; Keramopoulos, E. FarmAR, a farmer’s augmented reality application based on semantic web. In Proceedings of the 2017 South Eastern European Design Automation, Computer Engineering, Computer Networks and Social Media Conference (SEEDA-CECNSM), Kastoria, Greece, 23–25 September 2017; pp. 1–6.
- Lee, K.; Lee, J.; Kwan, M.P. Location-based service using ontology-based semantic queries: A study with a focus on indoor activities in a university context. Comput. Environ. Urban Syst. 2017, 62, 41–52.
- Kumar, P.; Akhila, N.; Aravind, R.; Mohith, P. Indoor navigation using AR technology. IJEAST Int. J. Eng. Appl. Sci. Technol. 2020, 356–359.
- Matuszka, T.; Gombos, G.; Kiss, A. A New Approach for Indoor Navigation Using Semantic Web Technologies and Augmented Reality. In Virtual Augmented and Mixed Reality. Designing and Developing Augmented and Virtual Environments; Shumaker, R., Ed.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2013; Volume 8021, pp. 202–210.
- Huang, H.; Gartner, G. A survey of mobile indoor navigation systems. In Cartography in Central and Eastern Europe; Springer: Berlin/Heidelberg, Germany, 2009; pp. 305–319.
- Fallah, N.; Apostolopoulos, I.; Bekris, K.E.; Folmer, E. Indoor Human Navigation Systems: A Survey. Interact. Comput. 2013, 25, 21–33.
- Sato, D.; Oh, U.; Guerreiro, J.; Ahmetovic, D.; Naito, K.; Takagi, H.; Kitani, K.M.; Asakawa, C. NavCog3 in the Wild: Large-scale Blind Indoor Navigation Assistant with Semantic Features. ACM Trans. Access. Comput. 2019, 12, 1–30.
- Cankaya, I.A.; Koyun, A.; Yigit, T.; Yuksel, A.S. Mobile indoor navigation system in iOS platform using augmented reality. In Proceedings of the 2015 9th International Conference on Application of Information and Communication Technologies (AICT), Rostov-on-Don, Russia, 14–16 October 2015; pp. 281–284.
- Huang, B.; Hsu, J.; Chu, E.T.; Wu, H. ARBIN: Augmented Reality Based Indoor Navigation System. Sensors 2020, 20, 5890.
- Contreras, P.; Chimbo, D.; Tello, A.; Espinoza, M. Semantic web and augmented reality for searching people, events and points of interest within of a university campus. In Proceedings of the 2017 XLIII Latin American Computer Conference, CLEI 2017, Córdoba, Argentina, 4–8 September 2017; pp. 1–10.
- Delail, B.A.; Weruaga, L.; Zemerly, M.J. CAViAR: Context Aware Visual Indoor Augmented Reality for a University Campus. In Proceedings of the 2012 IEEE/WIC/ACM International Conferences on Web Intelligence and Intelligent Agent Technology, Macau, China, 4–7 December 2012; pp. 286–290.
- Chidsin, W.; Gu, Y.; Goncharenko, I. AR-Based Navigation Using RGB-D Camera and Hybrid Map. Sustainability 2021, 13, 5585.
- Schmalstieg, D.; Reitmayr, G. The World as a User Interface: Augmented Reality for Ubiquitous Computing. In Location Based Services and TeleCartography; Gartner, G., Cartwright, W.E., Peterson, M.P., Eds.; Lecture Notes in Geoinformation and Cartography; Springer: Berlin/Heidelberg, Germany, 2007; pp. 369–391.
- Szasz, B.; Fleiner, R.; Micsik, A. iLOC–Building indoor navigation services using Linked Data. In Joint Proceedings of the Posters and Demos Track of the 12th International Conference on Semantic Systems (SEMANTiCS2016) and the 1st International Workshop on Semantic Change & Evolving Semantics (SuCCESS’16), Leipzig, Germany, 12–15 September 2016; Volume 1695, p. 4.
- Ng, X.H.; Lim, W.N. Design of a Mobile Augmented Reality-based Indoor Navigation System. In Proceedings of the 2020 4th International Symposium on Multidisciplinary Studies and Innovative Technologies (ISMSIT), Istanbul, Turkey, 22–24 October 2020; pp. 1–6.
- Noreikis, M.; Xiao, Y.; Ylä-Jääski, A. SeeNav: Seamless and Energy-Efficient Indoor Navigation using Augmented Reality. In Proceedings of the Thematic Workshops of ACM Multimedia 2017, Mountain View, CA, USA, 23–27 October 2017; pp. 186–193.
- Beetz, J.; van Leeuwen, J.P.; de Vries, B. IfcOWL: A case of transforming EXPRESS schemas into ontologies. Artif. Intell. Eng. Des. Anal. Manuf. 2009, 23, 89–101.
- Graves, M.; Constabaris, A.; Brickley, D. FOAF: Connecting people on the semantic web. Cat. Classif. Q. 2007, 43, 191–202.
- Lehmann, J.; Isele, R.; Jakob, M.; Jentzsch, A.; Kontokostas, D.; Mendes, P.N.; Hellmann, S.; Morsey, M.; van Kleef, P.; Auer, S.; et al. DBpedia—A large-scale, multilingual knowledge base extracted from Wikipedia. Semant. Web 2015, 6, 167–195.
- Cyganiak, R.; Wood, D.; Lanthaler, M.; Klyne, G.; Carroll, J.J.; McBride, B. RDF 1.1 concepts and abstract syntax. W3C Recomm. 2014, 25, 1–22.
- Hogan, A. Knowledge Graphs: Research Directions. In Reasoning Web. Declarative Artificial Intelligence, Proceedings of the 16th International Summer School 2020, Oslo, Norway, 24–26 June 2020; Manna, M., Pieris, A., Eds.; Tutorial Lectures; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2020; Volume 12258, pp. 223–253.
- Dürst, M.J.; Suignard, M. Internationalized Resource Identifiers (IRIs). RFC 2005, 3987, 1–46.
- Delling, D.; Sanders, P.; Schultes, D.; Wagner, D. Engineering Route Planning Algorithms. In Algorithmics of Large and Complex Networks—Design, Analysis, and Simulation [DFG Priority Program 1126]; Lerner, J., Wagner, D., Zweig, K.A., Eds.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2009; Volume 5515, pp. 117–139.
- SketchUp. 2021. Available online: https://www.sketchup.com/ (accessed on 25 June 2021).
- Unity—Manual: Optimizing Graphics Performance. Available online: https://docs.unity3d.com/Manual/OptimizingGraphicsPerformance.html (accessed on 29 July 2021).
- Musen, M.A. The Protégé project: A look back and a look forward. AI Matters 2015, 1, 4–12.
- Bizer, C.; Heath, T.; Berners-Lee, T. Linked Data—The Story So Far. Int. J. Semantic Web Inf. Syst. 2009, 5, 1–22.
- OpenLink Virtuoso—Product Value Proposition Overview. 2010. Available online: http://www.openlinksw.com (accessed on 7 June 2021).
- About AR Foundation. 2020. Available online: https://docs.unity3d.com/Packages/[email protected]/manual/index.html (accessed on 14 May 2021).
- ARCore Overview. 2021. Available online: https://developers.google.com/ar/discover/ (accessed on 7 June 2021).
- ARKit Overview. 2021. Available online: https://developer.apple.com/documentation/arkit/ (accessed on 7 June 2021).
- ZXing Project. 2020. Available online: https://github.com/zxing/zxing (accessed on 7 June 2021).
- Joshi, A.; Kale, S.; Chandel, S.; Pal, D.K. Likert scale: Explored and explained. Br. J. Appl. Sci. Technol. 2015, 7, 396.
- Randolph, J.J. Free-Marginal Multirater Kappa (Multirater K [free]): An Alternative to Fleiss’ Fixed-Marginal Multirater Kappa. In Proceedings of the Joensuu Learning and Instruction Symposium, Joensuu, Finland, 14–15 October 2005.
- Yue, S. Human motion tracking and positioning for augmented reality. J. Real Time Image Process. 2021, 18, 357–368.
- Ali, W.; Georgsson, F.; Hellstrom, T. Visual tree detection for autonomous navigation in forest environment. In Proceedings of the 2008 IEEE Intelligent Vehicles Symposium, Eindhoven, The Netherlands, 4–6 June 2008; pp. 560–565.
- Kumar, A.P.; Kumar, A.; Kumar, V.N. A comprehensive comparative study of SPARQL and SQL. Int. J. Comput. Sci. Inf. Technol. 2011, 2, 1706–1710.
- Pacaci, A.; Zhou, A.; Lin, J.; Özsu, M.T. Do We Need Specialized Graph Databases? Benchmarking Real-Time Social Networking Applications. In Proceedings of the Fifth International Workshop on Graph Data-management Experiences & Systems, GRADES@SIGMOD/PODS, Chicago, IL, USA, 14–19 May 2017; pp. 1–7.
- Bellini, P.; Nesi, P. Performance assessment of RDF graph databases for smart city services. J. Vis. Lang. Comput. 2018, 45, 24–38.
- Huang, W.; Raza, S.A.; Mirzov, O.; Harrie, L. Assessment and Benchmarking of Spatially Enabled RDF Stores for the Next Generation of Spatial Data Infrastructure. ISPRS Int. J. Geo Inf. 2019, 8, 310.
- ARCore Supported Devices—Google Developers. Available online: https://developers.google.com/ar/devices (accessed on 2 August 2021).
- Javed, A.; Shahid, M.A.; Sharif, M.; Yasmin, M. Energy Consumption in Mobile Phones. Int. J. Comput. Netw. Inf. Secur. 2017, 9, 18–28.
| Work | Spatial Model | Position Tracking | Data Storing | Pathfinding Algorithm | Navigation Content Display | POI Information Content |
|---|---|---|---|---|---|---|
| Al Delail et al. [23] | Third-party map | Marker detection and mobile’s sensors | MySQL | A* | Lines on a digital plane | Location name |
| INSAR [6] | GM | WiFi fingerprinting, compass | SQLite | Record matching | AR arrows | – |
| Matuszka et al. [16] | Third-party map | Marker detection and mobile’s sensors | Semantic Web | No info | AR arrows and lines on a digital plane | – |
| Cankaya et al. [20] | GM | Mobile’s sensors | Database (not specified) | Dijkstra | AR arrows | – |
| Noreikis et al. [28] | GM | Features matching, camera | Not specified | A* | AR arrows | – |
| Gang and Pyun [3] | 3D model | Bluetooth, geomagnetic field, marker detection and mobile’s sensors | Database (not specified) | Dijkstra | AR lines | – |
| Huang et al. [21] | 3D model | Lbeacons | No info | Dijkstra | AR arrows | – |
| Kumar et al. [15] | 3D model | Marker detection | Firebase | A* | AR arrows | – |
| Ng and Lim [27] | GM | WiFi fingerprinting, sensors | SQLite | No info | Robot guide | Info at search |
| Chidsin et al. [24] | Point cloud | RGB-D camera | No info | SLAM | AR arrows | Location name |
| Start Point | Start Floor | Destination | Destination Floor | Distance (Meters) |
|---|---|---|---|---|
| 1 | BF | Laboratory 1 | BF | 17.60 |
| 2 | BF | Men’s bathroom | BF | 21.80 |
| 3 | BF | Coordination Office | UF | 28.80 |
| 4 | UF | Classroom 5 | UF | 45.00 |
| 5 | UF | Reception | BF | 16.50 |
| 6 | UF | Researcher Office | UF | 22.60 |
| Question | Average |
|---|---|
| 1. Navigation with the application was easier than navigation without its use. | 4 |
| 2. Navigation time with the application was faster than without its use. | 4 |
| 3. The application gives you more confidence in finding locations within a building. | 5 |
| 4. The points of interest information provided by the application is useful in a school setting. | 4 |
| 5. The teacher information provided by the application is useful in a school setting. | 5 |
| 6. The included mini-map is a navigational aid. | 4 |
| 7. The assistance of the application to find places is preferred rather than asking for directions. | 4 |
| Start Point | Start Floor | Destination | Destination Floor | Distance (Meters) |
|---|---|---|---|---|
| 1 | BF | Office of Prof. 1 | BF | 30.10 |
| 2 | BF | Laboratory 8 | UF | 41.42 |
| 3 | UF | Office of Prof. 2 | BF | 31.20 |
| 4 | BF | Office of Prof. 3 | BF | 22.74 |
| 5 | BF | Office of Prof. 2 | UF | 43.93 |
| Question | Average |
|---|---|
| 1. The application gives you more confidence in finding locations within a building. | 5 |
| 2. The points of interest information provided by the application is useful in a school setting. | 5 |
| 3. The teacher information provided by the application is useful in a school setting. | 5 |
| 4. The included mini-map is a navigational aid. | 4 |
| 5. The assistance of the application to find places is preferred rather than asking for directions. | 5 |
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Rubio-Sandoval, J.I.; Martinez-Rodriguez, J.L.; Lopez-Arevalo, I.; Rios-Alvarado, A.B.; Rodriguez-Rodriguez, A.J.; Vargas-Requena, D.T. An Indoor Navigation Methodology for Mobile Devices by Integrating Augmented Reality and Semantic Web. Sensors 2021, 21, 5435. https://doi.org/10.3390/s21165435