SARA: A Microservice-Based Architecture for Cross-Platform Collaborative Augmented Reality
Abstract
1. Introduction
2. State of the Art
3. Analysis of the Collaboration in AR Context
3.1. Definition of Collaboration in AR Context and Its Main Characteristics
- Real-time vs. shift-based collaboration: time-management restrictions are important when defining collaborative AR applications. All users may work with the AR content at the same time, seeing each other's changes in real time. However, they may also work in shifts and not even coincide in time. In this case, the state in which one user left the digital information must be retrievable by a second user, who can then continue the work.
- Shared physical space vs. remote: if all participants in the AR collaboration experience are located in the same physical space, they are usually said to be co-located and working “locally”. On the contrary, if the participants do not share the same space, they are usually said to work “remotely”. A mixed situation is also possible, in which some users are co-located while collaborating remotely with others.
- Interaction methods: depending on the target AR platform, interaction with the virtual content is implemented differently. For example, on hand-held devices such as smartphones, the main interaction method is touch-based, whereas on AR Head-Mounted Displays (HMDs) interaction is usually carried out through hand gestures and/or voice commands. Thus, the type of interaction conditions how each course of action has to be implemented.
- Information exchange channel: collaboration requires the exchange of information between participants. This can be done by visual means (including annotations or checklists), by speech or through combined strategies.
- Content alignment and profiled visualization: digital elements may present different views tailored to the specifications of different users, so the visual aspect of an AR asset may not be exactly the same for everyone. The affected characteristics include position, scale and rotation, as well as more advanced ones such as color or shape. For example, consider a 3D AR cube positioned on a real, physical table. If the AR content is aligned, all participants see the cube in the same exact position (setting aside possible drift errors). On the contrary, if the content is not aligned, the same cube is rendered in a different position for each user: one may see it on the table while another sees it on the floor, yet both representations refer to the same cube. Being aligned refers not only to position but also to characteristics such as rotation and scale (e.g., each user may see the cube in the same position but with a different size). On the other hand, the information displayed to one user may not be visible to another, or may show added or removed elements. In this case we say that the content has been adapted to the user’s profile; in other words, it has been profiled (a data sketch after this list illustrates the distinction).
- Collaboration model: the collaboration model defines how users organize themselves to visualize and interact with AR content. This organization may be agreed among the users themselves (e.g., verbally), imposed by implemented control mechanisms (e.g., an interaction token that has to be transferred) or derived from high-level rules (e.g., a requirement to work in shifts).
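To make the distinction between aligned state and profiled visualization concrete, the following minimal TypeScript sketch models a shared AR node whose transform is common to all participants, while visibility restrictions and per-user overrides implement profiling. All names and fields here are illustrative assumptions, not part of SARA’s published data model.

```typescript
// A minimal sketch, assuming a node with one aligned transform plus optional
// per-user presentation overrides. Field names are hypothetical.

interface Transform {
  position: [number, number, number];
  rotation: [number, number, number, number]; // quaternion
  scale: [number, number, number];
}

interface SharedNode {
  id: string;
  transform: Transform;                            // aligned state, identical for everyone
  visibleTo?: string[];                            // profiling: omit to show to all users
  overrides?: Record<string, Partial<Transform>>;  // per-user deviations (userId -> override)
}

// Resolve the view a given user gets of a node, or null if it is hidden.
function resolveView(node: SharedNode, userId: string): Transform | null {
  if (node.visibleTo && !node.visibleTo.includes(userId)) {
    return null; // the node is not part of this user's profile
  }
  return { ...node.transform, ...(node.overrides?.[userId] ?? {}) };
}
```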
3.2. Conceptual Framework to Define Collaboration
- Turn-based model: the interaction with the AR elements is managed by means of a token. Only the user who holds the token (i.e., whose turn it is) can interact with the virtual objects; the interactions of other users with the AR content are in principle discarded.
- Ownership-based model: in this collaboration model, each digital element has an associated owner. Thus, the visibility of and interaction with each element may be restricted to its owner alone. It is also possible that the elements are visible to all users or to a group of them, with only the interaction being limited.
- Layer-based model: this collaboration model is characterized by having one or many layers, to which the AR objects are linked. One digital element may be present in several of those layers. Users, on their side, have access to certain layers and are able to visualize and interact with their associated digital elements.
- Hierarchy-based model: in this case users are organized in a tree-based hierarchical structure. On one hand, the user at the top of the hierarchy has access to all AR elements, both for interaction and for visualization; changes performed by that user always prevail over those initiated at a lower level of the hierarchy. On the other hand, users located in the lower branches of the tree have both their visibility and their ability to interact limited. It is reasonable to assume that the participants themselves will define these restrictions for their subordinates.
- Unconstrained model: in this collaboration model there are no restrictions: all user interactions are processed and no control over the visibility of AR elements is established, with a FIFO (First In, First Out) policy applied to incoming interactions. A sketch of how these models can gate interaction events follows this list.
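The following TypeScript sketch illustrates how a service could decide whether to process or discard an interaction event under three of the models above (turn-based, ownership-based and unconstrained). It is an illustrative reading of the definitions in this section, not SARA’s actual implementation.

```typescript
// Illustrative sketch (not SARA's real code): gating interaction events
// according to the collaboration model in force.

interface InteractionEvent {
  userId: string;
  nodeId: string;
  action: string;
}

type CollaborationModel =
  | { kind: "turn-based"; tokenHolder: string }
  | { kind: "ownership"; owners: Map<string, string> } // nodeId -> owning userId
  | { kind: "unconstrained" };

function accepts(model: CollaborationModel, ev: InteractionEvent): boolean {
  switch (model.kind) {
    case "turn-based":
      // Only the token holder may interact; other events are discarded.
      return ev.userId === model.tokenHolder;
    case "ownership":
      // Interaction is restricted to the owner of the target node.
      return model.owners.get(ev.nodeId) === ev.userId;
    case "unconstrained":
      // Every event is processed, in arrival (FIFO) order.
      return true;
  }
}
```

The layer-based and hierarchy-based models would follow the same pattern, checking the user’s layer memberships or position in the role tree instead of a token or an owner.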
4. SARA: Main Characteristics and Architecture Components
- Multi-user management: since SARA’s main objective is to facilitate the development of collaborative AR applications, multi-user management is a vital concept of the architecture. This characteristic covers both user access control (e.g., logins, logouts) and the exchange of information between participants in the collaboration.
- Session management: in order to orchestrate the collaboration, SARA relies on the concept of Session. Several users may connect to the same Session, and each Session has its own functional concept and objective. The AR content associated with a Session may be shared among the participants in the collaboration. Each Session is treated independently within the system, so the same instance can handle totally different applications at the same time.
- AR content visualization management: SARA is able to manage the visualization of AR assets in a flexible way, i.e., it can control features of the shared AR elements (e.g., visualization can be conditioned on permissions) as well as who visualizes the common features. This management can be carried out from the functional logic of a Session or externally, through the use of collaboration models.
- Device-optimized interaction management: in addition to displaying AR content, the system handles interaction with that content, adapting it to the standard interaction means of the target device. Each device has an inherent form of interaction associated with it, and SARA integrates all these interaction types. As an example, when a user works with a smartphone the main interaction is performed through its touch screen, whereas with an HMD such as the HoloLens the main interaction is performed through hand gestures. Nevertheless, the user expects a touch on a digital object and a tap gesture to have a similar functional meaning. Interactions from different device types are therefore translated into a common representation (see the sketch after this list).
- AR platform and development framework independence: the whole architecture has been designed as a platform-independent environment, meaning that different devices can cooperate over the platform. From smartphones to wearable AR devices, including standard platforms such as computers, SARA provides a cross-platform information exchange system. Besides that, the creation of new end-point applications for the architecture is platform- and framework-independent, which allows developers to work with the tools of their preference.
- Location independence: the collaboration between participants can be performed both remotely and locally. This process is transparent to the user, since all she has to do is connect to a session; then, according to the session details, the alignment process will begin if required.
- AR Content Alignment: when AR content alignment is required, the system establishes it according to the requirements of the session. To do so, SARA offers different strategies. The basic mode of operation relies on markers, i.e., 2D physical images located in the real world. Although it would be ideal, aligning point clouds (also known as SLAM maps [32]) extracted from the real world is difficult enough that its integration is proposed for a future version of the architecture.
- Time decoupling: collaboration may take place both in real time (or near-real time), with all the users collaborating at the same time, and in shifts, with the participants not coinciding in time. To support this, SARA maintains a central state of the collaboration session, and changes to this state can be made at any time. That state may later be recovered on demand (e.g., if a participant connects to the collaboration at a later point).
- Extensible: It is possible to add new features to the system easily. This affects both the addition of new services with new functionality to the system, and the adaptation and extension of the associated data model.
- Independent of the communication network protocol: in order to cover as many devices as possible, the system offers a transparent way of establishing the communication. In practice, certain frameworks and platforms have difficulties working with certain protocols, hence the choice is left to the developer. Support for TCP, UDP, WebSockets and MQTT is currently included.
- Decoupled and reusable logic: thanks to SARA, all the logic of the application can be implemented for a single device/platform, which we call the provider. The provider injects AR content into a session, while the other participants, labeled consumers, send interaction events back to that content. The core of the consumer clients is common to all applications and hence can be reused as a starting point for development. On top of that core, logic such as interface management may be implemented, but it is not strictly necessary.
- Adaptation of non-AR, non-collaborative applications: by using SARA it is easy to inject the content of applications that were not designed with Augmented Reality in mind and expose it to users. It is likewise possible to upgrade applications that were not initially developed as collaborative ones so that they support that functionality.
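As an illustration of the device-optimized interaction management described in this list, the sketch below maps device-native gestures onto one common interaction event. The gesture and event names are assumptions made for the example; SARA’s real mapping may differ.

```typescript
// Hypothetical sketch of device-to-common interaction translation. The
// union members and the CommonInteraction shape are illustrative only.

type NativeGesture =
  | { device: "smartphone"; type: "touch"; nodeId: string }
  | { device: "hololens"; type: "air-tap"; nodeId: string }
  | { device: "desktop"; type: "click"; nodeId: string };

interface CommonInteraction {
  nodeId: string;
  action: "select"; // one shared functional meaning for touch, tap and click
  sourceDevice: string;
}

function normalize(g: NativeGesture): CommonInteraction {
  // Whatever the input modality, the client emits the same event type,
  // so the application logic never needs device-specific branches.
  return { nodeId: g.nodeId, action: "select", sourceDevice: g.device };
}
```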
4.1. The Communication Service (CS)
4.2. The SARA Client (SC)
4.3. The Collaboration Services
5. SARA Data Model
5.1. Sessions
5.2. Events
- When a client application wants to connect to the system, a NewUserConnection event must be generated. This event must specify the user that will be connected, as well as what we call a Connection Method. As will be explained later, the system permits the network connection to be performed using different protocols such as TCP, UDP, WebSockets and MQTT. Once the client has connected to the system, it can easily be known which of these protocols was used to establish the connection. However, in order to keep the data model clear, it was decided to indicate this information within the event itself. At this point the user’s connection information has been stored, but she has not been connected to any Session yet. To do so, a ConnectToSession event is required.
- A ConnectToSession event must contain the identifier of the session to which the user is going to connect, the user itself and a status reception format. The reason for this last field is closely related to another of the information event types, SetSessionState.
- The SetSessionState event is used to completely replace the state of a session or to initialize it for the first time. It carries a format field and a new session state field. In order to maximize compatibility with as many platforms as possible, the system allows the state of a session to be encoded using different formats. Some of them, such as OBJ or COLLADA, are popular across all platforms; a custom JSON format is also introduced here with the aim of bringing greater clarity to the exchange of events as well as greater flexibility. The idea is to export the status of the session (i.e., the node structure) to one of these formats and to encode the result as a base64 string. Thanks to this differentiation, different users may receive the same SessionStatus in different formats. It can be argued that this type of communication is not the most efficient: in future SARA implementations, the entire data model could be encoded as flat bytes, which would reduce both the amount of information moved within the system (since the JSON format introduces redundant characters) and the communication times.
- Finally, as opposed to SetSessionState, which replaces the old state of a session with a new one, the IncrementalUpdate event is used to update specific properties of the nodes, such as the position or the rotation. To do so, the id of the target node is required, as well as the path to the property. Let us take as an example an update event for the position of a node (which would be thrown, for instance, when the node gets moved by some kind of logic). The property_path would hold a value such as “position” and the new_value would store the new state of that property (e.g., an array like [1.0, 0.0, 0.0]). By using this event, selective changes can be made without overriding the whole state of the Session (an illustrative encoding of these events is sketched below).
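The sketch below shows one plausible wire encoding of the four events described above. Only property_path and new_value appear literally in the text; the remaining field names are inferred and should be read as illustrative rather than as SARA’s normative schema.

```typescript
// Minimal sketch of the four information events, under assumed field names.

type ConnectionMethod = "TCP" | "UDP" | "WebSockets" | "MQTT";
type StateFormat = "OBJ" | "COLLADA" | "JSON";

interface NewUserConnection {
  type: "NewUserConnection";
  user: string;
  connection_method: ConnectionMethod; // stored in the event to keep the data model clear
}

interface ConnectToSession {
  type: "ConnectToSession";
  session_id: string;
  user: string;
  reception_format: StateFormat; // how this user wants to receive session states
}

interface SetSessionState {
  type: "SetSessionState";
  session_id: string;
  format: StateFormat;
  state: string; // the exported node structure, encoded as a base64 string
}

interface IncrementalUpdate {
  type: "IncrementalUpdate";
  node_id: string;
  property_path: string;
  new_value: unknown;
}

// Example: the position update discussed above, which touches a single
// property instead of replacing the whole session state.
const update: IncrementalUpdate = {
  type: "IncrementalUpdate",
  node_id: "node-42", // hypothetical node identifier
  property_path: "position",
  new_value: [1.0, 0.0, 0.0],
};
```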
5.3. Managing Collaboration Models
6. Creating and Deploying Applications Over SARA: The Case of a Collaborative Voxel-Based Game
- Communication service configuration: the first step is to launch the artifact that represents the Communication Service. In this case it consists of a Node.js application that has to run on a computer. This is one of the artifacts that were already generated when designing SARA and therefore does not require implementation.
- Implementing the main logic of the application: the second step is to create the application logic which, as said earlier, will be a voxel-based creative game. It is also possible to reuse an already implemented application and adapt it to collaborative AR. All this logic is grouped within a SARA Client, which also incorporates the Session Manager component used for establishing the communication. The first step of the implementation is to import the SARA library, which has already been generated and does not require intervention. Once that is done, some basic information must be set in the SARA adaptor instance, such as the IP of the Communication Service, the Session to connect to or the format to be used for the session updates. When the application runs, the SARA instance generates the required events to connect this client to the selected session. At this point, this client (the provider) is able to send events to the CS. It is labeled as provider because it injects the AR content into the session. Once launched, it performs a general scan of all the elements within the Unity Scene, in this case the voxel world. From there, updates of the session nodes are generated whenever some element in the scene changes. Additions and removals of cubes in the world also generate update events, so all participants receive the new state of the world (a condensed pseudocode walkthrough of the client-side steps follows this list). Figure 13 shows a screenshot of the Unity application with a small terrain already generated.
- Creating the SARA Clients and enabling user interaction: the third step is to create the clients that will be used by the other participants, the consumers. To simplify, in this case we stick to only two devices: a HoloLens device and an iOS one. Through these clients, users can see the session representation by means of AR. The implementation of this point is similar to that of the previous one. On either platform (Unity for the HoloLens and ARKit for the iOS device), the first step for generating the application is to import the SARA artifact. After setting the required parameters, the application is ready to receive the update events and to generate the proper AR representation. In both cases it is necessary to establish some way of detecting user interaction with AR elements. One possibility is to detect user gestures (hand gestures on the HoloLens and touches on the screen on the iOS client) at any time and, if the user is pointing to an AR element, generate an Interaction Event. Another possibility is to add listeners to the AR elements themselves (GameObjects in Unity and SCNNodes in ARKit), with only the elements carrying these listeners generating those events. The functionality described up to this point is common to all SARA clients, i.e., it is independent of the application and could be used in any context.
- Adding specific logic to SARA Clients: the fourth step is to implement the specific logic of the application for the users’ clients. This point is completely dependent on the application. In the example of the voxel-based game, it is necessary to offer users an interface from which they can select tools. For example, in the case of the iOS client, a toolbar can be exposed to the user to select the action to perform; after that, a cube is added at the point of the AR world that the user has touched. In the HoloLens case, the choice of the tool must be adapted to the interaction style of that platform (e.g., voice commands, 3D tool palettes or 2D overlays may be used). Figure 14 shows a screenshot of the iOS client: from the tools in the bottom-left corner the user may select the tool to be used. Figure 15, on the other hand, shows a capture taken from the HoloLens, which in this case only receives the status of the session and has no interaction capabilities; thus, the HoloLens participant can only inspect the AR content. We can imagine a scenario in which the playing field of some e-sport (e.g., League of Legends [36]) is injected into SARA. Spectators could then see an AR copy of that terrain wherever they wanted, but without the ability to interact, while specialists could have access to a set of tools allowing them to add annotations on the map.
- Selecting and applying the Collaboration Model: the fifth and final step is to establish the Collaboration Model that the session will present. In the voxel example, several possibilities are viable. For instance, a Turn-based model would only allow the user who holds the turn to make changes to the world. Another possibility is to limit the interaction capabilities of the users, allowing one user to only add cubes and another to only delete them. However, it is also possible to set aside any rule by choosing the Unconstrained model. In that case, all participants may add and remove cubes from the world at will, mimicking the original version of the game, where no control was enforced.
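The following condensed TypeScript pseudocode retraces the client-side portion of these steps. SARA’s actual clients are Unity and ARKit artifacts, so every identifier here (SaraClient, onIncrementalUpdate, sendInteraction) is a hypothetical stand-in used only to show the sequence of calls.

```typescript
// A sketch, not SARA's real API: the lifecycle of a consumer client.

class SaraClient {
  constructor(
    private csHost: string,                      // IP of the Communication Service
    private sessionId: string,                   // Session to connect to
    private format: "OBJ" | "COLLADA" | "JSON",  // session update format
  ) {}

  connect(user: string): void {
    // Steps 2/3: emit NewUserConnection, then ConnectToSession (Section 5.2).
    console.log(`connecting ${user} to ${this.sessionId} via ${this.csHost}`);
  }

  onIncrementalUpdate(
    handler: (nodeId: string, path: string, value: unknown) => void,
  ): void {
    // Step 3: render incoming updates as AR content on the target platform.
  }

  sendInteraction(nodeId: string, action: string): void {
    // Steps 3/5: forward normalized user gestures; the active collaboration
    // model decides whether the event is processed (e.g., turn-based token).
  }
}

// Step 4: application-specific logic layered on the common core.
const client = new SaraClient("192.168.1.10", "voxel-world", "JSON");
client.connect("alice");
client.onIncrementalUpdate((nodeId, path, value) => {
  /* update the local AR scene, e.g., move or add a cube */
});
client.sendInteraction("terrain", "add-cube");
```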
7. Conclusions and Future Work
Author Contributions
Funding
Conflicts of Interest
References
- Gartner. 100 Million Consumers Will Shop in AR by 2020. 2019. Available online: https://www.gartner.com/en/newsroom/press-releases/2019-04-01-gartner-says-100-million-consumers-will-shop-in-augme (accessed on 17 March 2020).
- Azuma, R.T. A survey of augmented reality. Presence Teleoper. Virtual Environ. 1997, 6, 355–385.
- Milgram, P.; Kishino, F. A taxonomy of mixed reality visual displays. IEICE Trans. Inf. Syst. 1994, 77, 1321–1329.
- Caudell, T.P.; Mizell, D.W. Augmented reality: An application of heads-up display technology to manual manufacturing processes. In Proceedings of the IEEE 25th Hawaii International Conference on System Sciences, Kauai, HI, USA, 7–10 January 1992; Volume 2, pp. 659–669.
- Rekimoto, J. Transvision: A hand-held augmented reality system for collaborative design. In Proceedings of Virtual Systems and Multimedia, Gifu, Japan, 18–20 September 1996; Volume 96, pp. 18–20.
- Schmalstieg, D.; Fuhrmann, A.; Szalavari, Z.; Gervautz, M. Studierstube: An environment for collaboration in augmented reality. In Proceedings of the CVE’96 Workshop, Nottingham, UK, 26–28 May 1996; Volume 19.
- Szalavári, Z.; Schmalstieg, D.; Fuhrmann, A.; Gervautz, M. “Studierstube”: An environment for collaboration in augmented reality. Virtual Real. 1998, 3, 37–48.
- Schmalstieg, D.; Fuhrmann, A.; Hesina, G.; Szalavári, Z.; Encarnaçao, L.M.; Gervautz, M.; Purgathofer, W. The Studierstube augmented reality project. Presence Teleoper. Virtual Environ. 2002, 11, 33–54.
- Kaufmann, H. Collaborative Augmented Reality in Education; Institute of Software Technology and Interactive Systems, Vienna University of Technology: Vienna, Austria, 2003.
- Fuhrmann, A.L.; Purgathofer, W. Studierstube: An application environment for multi-user games in virtual reality. GI Jahrestag. 2001, 2, 1185–1190.
- Höllerer, T.; Feiner, S.; Terauchi, T.; Rashid, G.; Hallaway, D. Exploring MARS: Developing indoor and outdoor user interfaces to a mobile augmented reality system. Comput. Graph. 1999, 23, 779–785.
- Benko, H.; Ishak, E.W.; Feiner, S. Collaborative mixed reality visualization of an archaeological excavation. In Proceedings of the Third IEEE and ACM International Symposium on Mixed and Augmented Reality, Arlington, VA, USA, 2–5 November 2004; pp. 132–140.
- Nilsson, S.; Johansson, B.; Jonsson, A. Using AR to support cross-organisational collaboration in dynamic tasks. In Proceedings of the 8th IEEE International Symposium on Mixed and Augmented Reality, Orlando, FL, USA, 19–22 October 2009; pp. 3–12.
- Hammad, A.; Wang, H.; Mudur, S.P. Distributed augmented reality for visualizing collaborative construction tasks. J. Comput. Civil Eng. 2009, 23, 418–427.
- Datcu, D.; Cidota, M.; Lukosch, S.; Oliveira, D.M.; Wolff, M. Virtual co-location to support remote assistance for inflight maintenance in ground training for space missions. In Proceedings of the 15th International Conference on Computer Systems and Technologies, Ruse, Bulgaria, 19–20 June 2014; ACM: New York, NY, USA, 2014; pp. 134–141.
- Aschenbrenner, D.; Li, M.; Dukalski, R.; Verlinden, J.; Lukosch, S. Collaborative production line planning with augmented fabrication. In Proceedings of the IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Reutlingen, Germany, 18–22 March 2018; pp. 509–510.
- Coppens, A.; Mens, T. Towards collaborative immersive environments for parametric modelling. In Proceedings of the International Conference on Cooperative Design, Visualization and Engineering, Hangzhou, China, 21–24 October 2018; Springer: Berlin, Germany, 2018; pp. 304–307.
- Blanco-Fernández, Y.; López-Nores, M.; Pazos-Arias, J.J.; Gil-Solla, A.; Ramos-Cabrer, M.; García-Duque, J. REENACT: A step forward in immersive learning about Human History by augmented reality, role playing and social networking. Exp. Syst. Appl. 2014, 41, 4811–4828.
- Sanabria, J.C.; Arámburo-Lizárraga, J. Enhancing 21st century skills with AR: Using the gradual immersion method to develop collaborative creativity. Eurasia J. Math. Sci. Technol. Educ. 2017, 13, 487–501.
- Datcu, D.; Lukosch, S.G.; Lukosch, H.K. A collaborative game to study the perception of presence during virtual co-location. In Proceedings of the Companion Publication of the 17th ACM Conference on Computer Supported Cooperative Work & Social Computing, Baltimore, MD, USA, 15 February 2014; ACM: New York, NY, USA, 2014; pp. 5–8.
- Huo, K.; Wang, T.; Paredes, L.; Villanueva, A.M.; Cao, Y.; Ramani, K. SynchronizAR: Instant synchronization for spontaneous and spatial collaborations in augmented reality. In Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology, Berlin, Germany, 14–17 October 2018; ACM: New York, NY, USA, 2018; pp. 19–30.
- De Belen, R.A.J.; Nguyen, H.; Filonik, D.; Del Favero, D.; Bednarz, T. A systematic review of the current state of collaborative mixed reality technologies: 2013–2018. AIMS Electron. Electr. Eng. 2019.
- Chi, P.Y.P.; Li, Y. Weave: Scripting cross-device wearable interaction. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems; ACM: New York, NY, USA, 2015; pp. 3923–3932.
- Houben, S.; Marquardt, N. WatchConnect: A toolkit for prototyping smartwatch-centric cross-device applications. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems; ACM: New York, NY, USA, 2015; pp. 1247–1256.
- Nebeling, M.; Teunissen, E.; Husmann, M.; Norrie, M.C. XDKinect: Development framework for cross-device interaction using Kinect. In Proceedings of the 2014 ACM SIGCHI Symposium on Engineering Interactive Computing Systems, Rome, Italy, 17–20 June 2014; ACM: New York, NY, USA, 2014; pp. 65–74.
- Yang, J.; Wigdor, D. Panelrama: Enabling easy specification of cross-device web applications. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Rome, Italy, 17–20 June 2014; ACM: New York, NY, USA, 2014; pp. 2783–2792.
- Nebeling, M.; Mintsi, T.; Husmann, M.; Norrie, M. Interactive development of cross-device user interfaces. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Rome, Italy, 17–20 June 2014; ACM: New York, NY, USA, 2014; pp. 2793–2802.
- Speicher, M.; Hall, B.D.; Yu, A.; Zhang, B.; Zhang, H.; Nebeling, J.; Nebeling, M. XD-AR: Challenges and opportunities in cross-device augmented reality application development. In Proceedings of the ACM on Human-Computer Interaction, Montreal, QC, Canada, 21–26 April 2018; Volume 2, p. 7.
- Azure Spatial Anchors. Microsoft. 2019. Available online: https://azure.microsoft.com/id-id/services/spatial-anchors/ (accessed on 17 March 2020).
- Spectator View. Microsoft. 2019. Available online: https://docs.microsoft.com/en-us/windows/mixed-reality/spectator-view/ (accessed on 17 March 2020).
- Merriam-Webster’s Definition of Collaboration. 2019. Available online: https://www.merriam-webster.com/dictionary/collaboration (accessed on 17 March 2020).
- Dissanayake, M.G.; Newman, P.; Clark, S.; Durrant-Whyte, H.F.; Csorba, M. A solution to the simultaneous localization and map building (SLAM) problem. IEEE Trans. Robot. Autom. 2001, 17, 229–241.
- Soldani, J.; Tamburri, D.A.; Van Den Heuvel, W.J. The pains and gains of microservices: A systematic grey literature review. J. Syst. Softw. 2018, 146, 215–232.
- Minecraft. 2019. Available online: https://www.minecraft.net/en-us/ (accessed on 17 March 2020).
- Minecraft Earth. 2019. Available online: https://www.minecraft.net/es-es/about-earth (accessed on 17 March 2020).
- League of Legends Game. 2019. Available online: https://www.leagueoflegends.com (accessed on 17 March 2020).
- SARA in Action, Video on YouTube. Available online: https://youtu.be/EBMPeK7gJ0Q (accessed on 17 March 2020).
| Collaboration Model | Use Case | Voxel-Based Prototype | SmARt City |
|---|---|---|---|
| Unconstrained | When coordination between parties is not needed or relies on human means | A choice if every player is empowered to add and remove cubes simultaneously and coordination relies on peer-to-peer communication | Not adequate, as it is assumed that the number of service providers and resources in the city will be large enough to need at least information-filtering services for visualization |
| Turn-based | When resources need to be univocally controlled by a single user at a given moment in time | A choice if every player is empowered to add and remove cubes and coordination needs to be orchestrated to avoid conflicts and organize interaction | A choice in case two or more service providers share a common resource and may have control over it: e.g., a signage panel presenting alerts may admit new information whenever the control token is free |
| Layer-based | When resources need to be grouped and access to those groups has to be managed | Does not apply, as this application only contains a single node: the terrain mesh that players will modify | In this application, layers would be associated with a given service (cleaning, garbage collection, air-quality monitoring, traffic control…). Each layer would contain a set of nodes and would be secured by applying role-based access; users authorized for a layer would be able to visualize the resources in it |
| Ownership-based | When a user owns one or more given nodes with full control over them; these resources may belong to the same layer or not | Not applicable, as this application only requires a single resource for common interaction: the terrain mesh that players will modify | Applicable to determine, e.g., which operators can access and visualize the state of specific resources |
| Hierarchy-based | When a hierarchy of roles is needed to control the different resources | The basic application is built on players with the same role, so the hierarchy-based model does not fit. If hierarchies were implemented, a user with an upper role (game master) would at least have to approve the initiatives of the others | A hierarchical model does fit the collaboration control in this application, as users with different roles will be authorized to access functionalities depending on their role, e.g., municipality (overall control), service providers (overall monitoring of the service, interaction with shared resources), service managers (tactical decisions in a service layer and over owned resources) and operators (visualization and information generation over authorized resources) |
| Characteristic | Collaborative Voxel-Based Game |
|---|---|
| Common Objective | To have fun while creating structures and landscapes together |
| Individual Objectives | Not specified; users may distribute the work while playing |
| Number of Users | Unlimited |
| Location of the Collaboration | Mainly local |
| Involved Devices | Computer, HoloLens and iOS devices |
| Collaboration Model | Unconstrained model; users will have to agree on the actions. This mimics the operation model of the original Minecraft. A Turn-based model also makes sense in this context |
| Temporality | Real time |
| Interaction Capabilities | All users have the same capabilities: selection of the tool to be used and selection of a point in the voxel world where the tool action is applied |
| AR Content Alignment | Content alignment is done through 2D physical markers |