Article

Are the Instructions Clear? Evaluating the Visual Characteristics of Augmented Reality Content for Remote Guidance

Bernardo Marques, Carlos Ferreira, Samuel Silva, Andreia Santos, Paulo Dias and Beatriz Sousa Santos
1 DigiMedia, DeCA, University of Aveiro, 3810-193 Aveiro, Portugal
2 IEETA, DETI, University of Aveiro, 3810-193 Aveiro, Portugal
3 IEETA, DEGEIT, University of Aveiro, 3810-193 Aveiro, Portugal
4 GeoBioTec, DGeo, University of Aveiro, 3810-193 Aveiro, Portugal
* Author to whom correspondence should be addressed.
Multimodal Technol. Interact. 2022, 6(10), 92; https://doi.org/10.3390/mti6100092
Submission received: 6 September 2022 / Revised: 30 September 2022 / Accepted: 2 October 2022 / Published: 14 October 2022
(This article belongs to the Special Issue Virtual Reality and Augmented Reality)

Abstract: Augmented Reality (AR) solutions are emerging in multiple application scenarios as Industry 4.0 takes shape. In particular, for remote collaboration, flexible mechanisms such as authoring tools can be used to generate instructions and assist human operators as they face increased complexity in their daily tasks. Besides the traditional challenge of ensuring that instructions can be created intuitively, without requiring knowledge of complicated AR concepts, another relevant issue is that the quality of said instructions is rarely analyzed before the tools themselves are evaluated; that is, the characteristics of the visual content are not adequately assessed beforehand. Hence, it is essential to be aware of the cognitive workload associated with AR instructions in order to ascertain whether they can be easily understood and accepted before being deployed in real-world scenarios. To address this, we focused on AR during sessions of remote guidance. Based on a participatory process with domain experts from the industry sector, a prototype for creating AR-based instructions was developed, and a user study with two parts was conducted: (1) first, a set of step-by-step instructions was produced, and their visual characteristics were evaluated by 129 participants according to a set of relevant dimensions; (2) afterward, these instructions were used by nine participants to understand whether they could assist on-site collaborators during real-life remote maintenance tasks. The results suggest that the AR instructions offer low visual complexity and considerable visual impact, clarity, and directed focus, thus improving situational understanding and promoting task resolution.

1. Introduction

Human operators face increased complexity in everyday practices as Industry 4.0 emerges. This forces them to be highly flexible in dynamic working environments and to resort to remote experts when additional know-how, not available on-site, is required [1,2,3]. Therefore, it is important to guarantee that the conditions to support such processes are properly handled [4,5,6,7].
One of the most promising innovation accelerators to address these needs is Augmented Reality (AR). This technology is considered a key pillar of Industry 4.0, facilitating the digitization of the manufacturing sector and contributing to a higher level of efficiency by speeding up the entire production chain [8,9,10,11]. Although the potential benefits are widely recognized, there are still numerous barriers to the adoption of AR in industrial scenarios, including hardware constraints, limitations of the interaction methods used to explore virtual content, and technical issues related to authoring tools [12,13,14]. Focusing on the last topic, authoring tools can be defined as content creation platforms, originally designed for educational content. They can be used, for example, in assembly procedures that guide an operator through a given task, or in collaborative environments that inform team members of each other's intent [15].
A possible analogy can be made with video editors and player platforms, which allow cutting video as well as adding sound and visual effects, among other possibilities, rendered into a final composite that can then be watched through a video player. Similarly, an AR authoring tool combines textual, graphical, and sensory inputs to design customized AR experiences, which can afterwards be presented by an AR viewer on different hardware platforms. Among the various possibilities, the most popular approach to authoring AR content consists in combining a game engine (for example, Unity3D), which handles the display, graphics, and interaction, with a third-party library (for instance, Vuforia [15,16,17] or ARToolKit), which addresses the tracking and registration of AR-based content. Although some tools are targeted at users with no programming knowledge [18], most AR authoring tools still require a considerable amount of scripting [19]; i.e., they are designed with the skills of domain experts in mind and not for the general public [20].
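As an illustration of this split between engine and tracking library, the hypothetical Unity C# sketch below parents a 2D annotation to a target whose pose is supplied by whichever tracking library is used (e.g., Vuforia); all class and member names here are illustrative and not taken from any particular tool:

```csharp
using UnityEngine;

// Minimal sketch: the game engine (Unity) handles rendering and interaction,
// while a tracking library (e.g., Vuforia) supplies the pose of a tracked
// real-world target. Names (annotationPrefab, trackedAnchor) are illustrative.
public class AnnotationOverlay : MonoBehaviour
{
    [SerializeField] private GameObject annotationPrefab; // 2D note/arrow/drawing
    [SerializeField] private Transform trackedAnchor;     // driven by the tracking library

    // Called once the tracking library reports that the target was found.
    public void OnTargetFound()
    {
        // Parent the annotation to the anchor so it stays registered
        // to the physical object as the camera moves.
        GameObject note = Instantiate(annotationPrefab, trackedAnchor);
        note.transform.localPosition = Vector3.zero;
        note.transform.localRotation = Quaternion.identity;
    }
}
```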
To elaborate, such approaches require considerable specialized know-how in several concepts and domains, such as tracking and rendering, computer vision, and computer graphics. In addition, considerable time must be invested to master all relevant features, which is one of the reasons that have prevented their widespread use [21]. Another aggravating factor is the lack of analysis regarding the quality of the content generated through these tools. More specifically, the characteristics of the visual content remain unexplored, yet they are essential for gauging users' cognitive workload and ascertaining whether instructions can be easily understood and accepted before deployment in real-world scenarios. Furthermore, most tools are rarely designed based on the experience and needs of human operators, which limits the way knowledge is shared, often leading to the abandonment of such solutions [22,23,24]. Therefore, the design and development of novel methods that make authoring AR easier is of paramount importance for Industry 4.0.
Based on the research gaps described, this paper contributes a first approach toward better understanding how the visual characteristics of AR content influence the collaborative process, a field that warrants further attention from the scientific community. The main goal is to establish groundwork over which novel questions can be answered and the field can mature. Hence, we shift the paradigm from looking exclusively at the technology, as the core variable, to how technology serves the collaborative effort. In our work, this translates into the adoption of different assessment dimensions, laying the ground to emphasize how relevant these topics are for collaborative scenarios and aiming to increase the awareness of the academic community. To achieve this, a 2D AR-based authoring tool for remote scenarios was created. This tool is the result of a Human-Centered Design (HCD) approach with partners from the industry sector. It can be used to create augmented instructions while a remote expert provides guidance to on-site operators. Apart from describing the main features, a user study with two parts is reported based on a real-life use-case, selected due to its potential to evolve and provide increasingly complex tasks resulting from the industry's needs. First, we recruited 129 participants to evaluate the visual characteristics of a set of instructions. The goal was to understand whether the instructions (created by said tool) have low visual complexity, are clear, draw attention to relevant items, and can be easily understood with reduced cognitive effort, ensuring their use contributes to scenarios of remote collaboration. Second, nine independent participants (i.e., who had not participated in the previous task) evaluated whether the instructions could be used during maintenance tasks requiring additional information from a remote expert. Finally, concluding remarks and ideas for future work are drawn.

2. AR-Based Authoring Tool for Remote Collaboration

Collaborative AR is a promising technology for the fourth industrial revolution [25,26,27], in particular for training, assembly, quality control, repair, and maintenance [28,29,30]. Using AR allows enhancing the perception of shared understanding among team members [16,31,32], as well as improving collaboration times, knowledge retention, and awareness [33,34,35].
Solutions exploring AR can provide distributed team members with a common-ground environment [36], i.e., serve as a basis for situation mapping. Thus, it may be possible to inform where to act and what to do, making assumptions and beliefs visible. This can be accomplished by overlaying responsive computer-generated information on top of the real-world environment, combining the advantages of virtual environments with the possibility of seamless interaction with real-world objects and other collaborators [37,38,39]. The use of such technologies is expected to improve efficiency and data transfer, disseminating knowledge across physical boundaries. Moreover, it can help mitigate the impacts of physical distancing, i.e., minimize the need for experts to travel. In addition, a reduction in errors, downtime, and training time is expected (vsight.io/ar-remote-support-keeping-people-connected-during-covid-19/, accessed on 1 September 2022).
The work described benefited from an HCD approach conducted under the scope of a research project with partners from the industry sector. Based on the analysis of industrial needs through a focus group with eight domain experts in remote collaboration, several requirements were identified, such as the need for video streaming, the possibility of freezing a live stream, creating annotations using different types of visual cues (e.g., drawings, gestures, pointing), sorting annotations, and enabling augmentation of content, among others. These led to the design and development of an AR tool capable of supporting scenarios in which operators may require know-how and additional information from professionals unavailable on-site, as is the case in maintenance scenarios [40,41,42].
During this process, one important topic that stood out was the lack of authoring tools allowing remote experts to create step-by-step instructions to guide on-site technicians during maintenance procedures (Figure 1). The possibility of creating 2D step-by-step instructions helps assist distributed collaborators with content authoring, which remains a significant barrier to the widespread use of AR tools in industrial scenarios [15,24]. In addition, such features allow generating documentation captured during maintenance procedures, which can help substitute traditional manuals without the need for content libraries or programming expertise. This can potentially improve the understanding of procedures and enable content creation by non-experts in what are considered more technological tasks. Such content can even be re-used later in other collaborative sessions with other team members, if an identical intervention demands it, reducing response time. Thus, instructions can be leveraged to minimize the need for expert assistance if similar situations happen in the future. Specific sets of sequences, created to address a maintenance task, can be stored on a server; if the same malfunction appears, a possible solution can be recalled instantly from the existing AR sequences.
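As a rough sketch of what such re-usable sequences might look like, the following serializable C# model could be stored on and recalled from a server using Unity's JsonUtility; the field names are our own assumptions, since the paper does not disclose its actual schema:

```csharp
using System;
using System.Collections.Generic;
using UnityEngine;

// Hypothetical data model for a re-usable instruction sequence.
[Serializable]
public class Annotation
{
    public string type;      // "note", "drawing", "arrow", "gesture"
    public Vector2 position; // 2D position on the frozen frame
    public string payload;   // text content or encoded stroke data
}

[Serializable]
public class InstructionStep
{
    public int order;                 // steps can be sorted by the expert
    public string frozenFrameId;      // reference to the captured image
    public List<Annotation> annotations = new List<Annotation>();
}

[Serializable]
public class InstructionSequence
{
    public string taskId;             // e.g., the malfunction being addressed
    public List<InstructionStep> steps = new List<InstructionStep>();
}
```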
Figure 2 provides an overview of the AR-based authoring tool. When facing unfamiliar tasks, on-site technicians can point a Handheld Device (HHD) at the situation where assistance is required in order to capture its context through video sharing with the remote expert. Then, using voice communication, discussion and situation understanding can take place among the distributed team members, allowing the remote expert to manually select the best moment for freezing the on-site live stream.
Next, using different collaboration mechanisms on a computer, the expert can enhance the captured picture by creating 2D step-by-step instructions. This means generating layers of additional information in the form of virtual annotations, such as notes, drawings, pointing through arrows or gestures, or sorted content, allowing the expert to identify specific areas of interest or indicate actions to be performed. Afterwards, the on-site technician receives the instructions showing the annotations from the remote expert. Technicians can place the HHD nearby and follow the instructions in a hands-free setting. At any time, the technician can pick up the device and perform an augmentation of the instructions on top of the real world (Figure 3). This process can be repeated iteratively until the tasks are successfully accomplished.
The tool was developed using the Unity game engine with C# scripts. To place the augmented content in the real-world environment, we used the Vuforia library. Communication between the different collaborators was performed over Wi-Fi through specific calls to a PHP server, which was responsible for storing and sharing the enhanced content among them. Furthermore, on-site technicians may also use see-through Head-Mounted Devices (HMDs) to capture the task context while performing the procedures in a hands-free setting. The remote expert may likewise use additional devices, such as interactive projectors or HHDs, to create content.
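To illustrate, a minimal sketch of how a sequence built with the model above could be pushed to the PHP server is shown below; the endpoint URL and form fields are hypothetical, since the paper only states that a PHP server stores and shares the content over Wi-Fi:

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

// Sketch of pushing an instruction sequence to the server.
public class SequenceUploader : MonoBehaviour
{
    private const string ServerUrl = "http://example-server/store_sequence.php"; // hypothetical

    public IEnumerator Upload(InstructionSequence sequence)
    {
        WWWForm form = new WWWForm();
        form.AddField("task_id", sequence.taskId);
        form.AddField("payload", JsonUtility.ToJson(sequence)); // serialize the whole sequence

        using (UnityWebRequest request = UnityWebRequest.Post(ServerUrl, form))
        {
            yield return request.SendWebRequest();
            if (request.result != UnityWebRequest.Result.Success)
                Debug.LogError("Upload failed: " + request.error);
        }
    }
}
```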

3. User Study on a Remote Maintenance Scenario

This section describes a user study to evaluate the visual characteristics of a set of AR-based instructions created for a scenario of remote maintenance.

3.1. Part I—Evaluating the Visual Characteristics of a Set of AR Instructions

We strongly believe that the analysis of the visual characteristics is essential to be aware of the cognitive workload associated with the AR-based content used for this purpose. In particular, since the step-by-step instructions are meant to be re-used in real-world scenarios, it is important to ascertain whether the content created with the proposed tool can be easily understood and accepted.
Participants
We recruited 129 participants (25 female—19.4%), aged from 20 to 45 years old (M = 21.3, SD = 3.4), who performed the tasks and completed the post-experience questionnaire. Participants had various occupations and professions, e.g., Masters and PhD students, researchers and faculty members, as well as software engineers and front-end developers. All participants had experience in Human–Computer Interaction (HCI), and 58 participants (45%) had previous experience in the field of AR. In addition, 126 participants (97.7%) had previous experience using tools for remote collaboration such as Skype, Zoom, TeamViewer, etc.
Tasks
Participants needed to visualize a set of pre-existing AR-based step-by-step instructions regarding a maintenance procedure on how to replace a specific component of a piece of equipment (Figure 4). To elaborate, the instructions explain how to remove a boiler fan and install a new one, including the following steps: (1) push the cables to the side to make some space; (2) remove the screws that hold the fan; (3) unplug the power cables from the energy module; (4) reach in and remove the fan; (5) install the new fan by repeating the previous steps in reverse order. Then, participants were required to evaluate the visual characteristics associated with each step of the maintenance procedure.
Procedure
Participants were introduced to the AR-based authoring tool through a video demonstrating its capabilities, allowing them to understand how AR-based instructions could be created and shared to assist in remote scenarios. Afterwards, they were instructed about the tasks and gave their informed consent. Due to the pandemic, participants performed the tasks and answered the survey in a remote setting of their choosing. During this period, a researcher was always available to clarify any doubts that might appear.
Data Collection
The evaluation process was based on pre-defined visual dimensions [44] that have been used in several research works over the years [45,46,47,48,49,50]. These were deemed relevant to understand participants’ acceptance toward the augmented content created using the authoring possibilities of the proposed tool:
  • Visual Complexity (VC), i.e., property that refers to the amount of detail present within the image;
  • Visual Impact (VI), i.e., extent to which the image is attractive and facilitates attention and recall;
  • Clarity (CLA), i.e., property of the image to be self-explanatory and easily understandable with reduced cognitive effort;
  • Directed Focus (DF), i.e., extent to which the image draws attention to one or more items;
  • Inference Support (IS), i.e., extent to which new insights may emerge as a result of the visualization used.
Participants’ opinions were obtained through a survey, including: (1) demographic information (age, gender, occupation, experience with AR and with remote tools); (2) additional questions concerning the visual characteristics evaluated using a five-level Likert-type scale (from 1—very low to 5—very high).
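For illustration only, one participant's answers for a single step could be captured by a simple record such as the following (a hypothetical structure mirroring the survey described above, not the authors' actual data format):

```csharp
using System;

// One participant's ratings for a single step, each dimension on a
// five-level Likert-type scale (1 = very low, 5 = very high).
[Serializable]
public class StepRating
{
    public int step;             // 1..5
    public int visualComplexity; // VC
    public int visualImpact;     // VI
    public int clarity;          // CLA
    public int directedFocus;    // DF
    public int inferenceSupport; // IS
}
```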
Results and Discussion
Next, the results obtained are presented using exploratory data analysis, non-parametric tests and multivariate analysis considering the visual dimensions: Visual Complexity (VC), Visual Impact (VI), Clarity (CLA), Directed Focus (DF) and Inference Support (IS).
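For reference, the two statistics mentioned take the following standard forms (stated here for the reader; they are not reproduced from the paper). For n = 129 participants ranking k = 5 dimensions, with R_j the rank sum of dimension j, the Friedman statistic (ignoring tie corrections) is

\[ \chi^2_F = \frac{12}{n\,k(k+1)} \sum_{j=1}^{k} R_j^2 \; - \; 3n(k+1), \]

and the Spearman correlation between two dimensions, with d_i the difference between the ranks assigned by participant i (form valid in the absence of ties), is

\[ r_s = 1 - \frac{6 \sum_{i=1}^{n} d_i^2}{n(n^2-1)}. \]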
To elaborate, Figure 5 presents the boxplots for Step 1—push the cables to the side to make space (Figure 4—1). The median of 4 for DF stands out, while the medians of the other dimensions are 3. The equality of medians was tested with the Friedman test (a non-parametric counterpart of the within-subjects ANOVA), showing a significant difference among the five medians; when DF is removed, the hypothesis of equal medians is not rejected. In addition, the Spearman correlation values were significant for almost all pairs and higher between VI/CLA and VI/IS. Furthermore, a cluster analysis shows that VC is isolated, and there is a greater similarity in the response profile between VI and IS, which is joined later by CLA and finally by DF. Moreover, no differences were detected in these five dimensions for Step 1 when categorized by experience with AR. Likewise, no differences were detected when categorized by gender. No categorization by experience with remote collaboration tools was performed, since only three participants lacked such experience.
Figure 6 illustrates the boxplots for Step 2—remove the screws that hold the fan (Figure 4—2). Compared to the previous step, the median of 4 now occurs for VI, CLA, and DF, while the other medians are 3. The Friedman test was used once again, showing a significant difference among the five medians. DF has the highest sum of ranks, while the lowest values are associated with VC and IS. Furthermore, the Spearman correlations were significant for almost all pairs and higher between VI/CLA, VI/DF, and DF/IS. Additionally, a cluster analysis shows that VC now joins IS, and there is a greater similarity in the response profile between VI and CLA, which is joined later by DF. Again, no differences were detected in these dimensions for Step 2 when categorized by experience with AR. Likewise, no differences were detected when categorized by gender.
Figure 7 shows the boxplots for Step 3—unplug the power cables from the energy module (Figure 4—3). The medians are the same as in the previous step, with the median of 4 occurring for VI, CLA, and DF, while the other dimensions are 3. Despite this, the values for VI, CLA, and DF are higher when compared to the previous step. The Friedman test again showed a significant difference among the five medians. CLA has the highest sum of ranks, while the lowest values are associated with VC and IS. Moreover, the Spearman correlations were significant for almost all pairs and higher between VI/CLA, VI/DF, DF/IS, and CLA/DF. In addition, a cluster analysis shows that VC is isolated, and there is a greater similarity in the response profile between VI and CLA, which is joined later by DF and IS. No differences were detected in these dimensions for Step 3 when categorized by experience with AR at a significance level of 5%; nevertheless, a difference was detected for DF at α = 10% (p-value = 0.049, Mann–Whitney test). No differences were detected when categorized by gender.
Figure 8 displays the boxplots for Step 4—reach in and remove the fan (Figure 4—4). Once again, the medians are the same as in the previous step, with the median of 4 occurring for VI, CLA, and DF, while the other dimensions are 3. In addition, the value of DF is higher when compared to the previous step. The Friedman test once more showed a significant difference among the five medians. DF has the highest sum of ranks, while the lowest values are associated with VC and IS. For this step, all Spearman correlations are significant, with higher values between VI/CLA, VI/DF, VI/IS, and DF/IS. Furthermore, a cluster analysis shows that VC is again isolated, and there is a greater similarity in the response profile between VI and CLA, which is joined later by IS and DF. When categorized by AR experience, a difference was detected for DF (p-value = 0.035, Mann–Whitney test). No differences were detected when categorized by gender.
Figure 9 illustrates the boxplots for Step 5—install the new fan by repeating the opposite procedure (Figure 4—5). Here, the median of 4 occurred for VC, VI, and DF, while the other dimensions are 3. One last time, the Friedman test showed a significant difference among the five medians. DF has the highest sum of ranks, while the lowest values are associated with CLA and IS. For this step, almost all Spearman correlations are significant, with higher values between VI/CLA, VI/IS, VI/DF, and CLA/DF. Furthermore, a cluster analysis shows once again that VC is isolated, and there is a greater similarity in the response profile between VI and CLA, which is joined later by IS and DF. No differences were detected in these dimensions for Step 5 when categorized by experience with AR at a significance level of 5%; however, a difference was detected for DF at α = 10% (p-value = 0.07, Mann–Whitney test). No differences were detected when categorized by gender.
In summary, Visual Complexity remains the same for all steps (median = 3), with the exception of Step 5, where it was higher (median = 4), as expected, since the instructions in this step encompass (in reverse) all the other steps. Regarding Visual Impact, the majority of participants rated the instructions with a higher value (median = 4), which means the steps were attractive and facilitated attention and recall. Clarity follows the same trend, which suggests a relation between these two dimensions (CLA/VI), with Step 3 clearly having the best results, although the median remained 4. Concerning Directed Focus, the median was 4 for all steps, with Step 2 having the best results overall. In addition, for this variable and for Steps 3 to 5, participants with and without previous AR experience answered differently, with higher values for the first group. This suggests that participants with previous AR experience were more alert and could better understand the information available in the instructions, in particular the selected areas of interest. Finally, with respect to Inference Support, similarly to Visual Complexity, this dimension had lower values (median = 3 for all steps), as discussed above.
In addition, Figure 10 presents the sum of participants' ratings for the visual characteristics of the instructions provided. Clearly, the level of Visual Complexity was deemed low, showing that the steps generally presented a low amount of detail, with Step 3 having the lowest sum of all. In fact, Step 3 can be considered the image with the best overall classification by the participants across all dimensions. Likewise, Inference Support was also considered low, meaning participants were not able to infer new information as a result of the visualization. This may be explained by two reasons: (1) the meaning of this dimension may not have been properly understood by the participants; (2) there was no actual need to use the instructions. By not having to apply the instructions being analyzed, participants may have inadvertently neglected the need to comprehend how to properly apply said information, which in a real-life context could have a different result, e.g., the generation of new insights. By comparison, Visual Impact, Clarity, and Directed Focus rated higher, which means the instructions provided were attractive, easy to recall, and understood with reduced cognitive effort, while drawing attention to the necessary areas of interest.
Another comparison among the steps was performed through a cluster analysis based on the similarity of the answers given by the participants concerning each dimension. A hierarchical clustering method (Ward linkage rule) was used, as illustrated by the dendrogram in Figure 11. Observing the figure, it is possible to notice distinct groups. On the right (red arrow—A) is the Visual Complexity for all steps, which is tightly grouped, in line with what was described above for each step individually. Thus, this is the most distinctive dimension, as expected, given that lower values are a better indicator for it, in contrast with the other dimensions. Likewise, Inference Support tends to form a group by itself (yellow arrow—B), as does Directed Focus (orange arrows—C). Last, Visual Impact and Clarity present similar values, i.e., they are closely related (green arrows—D).
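For context, Ward's rule merges, at each step of the clustering, the pair of clusters A and B that minimizes the increase in total within-cluster variance (a standard formulation, not reproduced from the paper):

\[ \Delta(A,B) = \frac{|A|\,|B|}{|A|+|B|}\, \lVert \mu_A - \mu_B \rVert^2, \]

where \mu_A and \mu_B are the cluster centroids and |A|, |B| the cluster sizes.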
Finally, given the relations obtained among the visual characteristics used, we argue that the selection of such dimensions from the literature helped characterize the instructions with an interesting level of detail. We also obtained an important level of insight into the authoring tool's capabilities, reinforcing that said dimensions are relevant and should be used in these types of studies. Hence, the data analysis allows inferring that the tool is capable of creating step-by-step instructions for remote scenarios, which were well classified and understood by the 129 participants. These instructions were characterized by rather low Visual Complexity as well as high levels of Visual Impact, Clarity, and Directed Focus.

3.2. Part II—Using Step-By-Step AR Instructions during Remote Collaboration

Building on the analysis of the visual characteristics of the instructions, in the second part of the study an independent group of participants used the proposed tool to fulfill maintenance procedures. The goal was to understand whether said instructions would be usable in a real remote setting, as well as to identify usability constraints and participants' satisfaction with the proposed tool.
Tasks
Participants acted as on-site technicians, while a researcher acted as the remote counterpart. The goal was to conduct remote maintenance procedures (Figure 12). We defined the task with the assistance of our partners from the industry sector. Maintenance is a core activity of the production life-cycle, accounting for as much as 60 to 70% of its total costs [51]. Therefore, providing the right information to the right professional, with the right quality and in time, is critical to increase efficiency [24,52,53]. The task included capturing the equipment context and requesting what to do and how, e.g., (1) replace interconnected components, (2) plug and unplug energy modules, (3) remove a specific sensor, and (4) integrate new components into the equipment. Team members then followed the instructions provided by the expert, using the augmented annotations displayed on top of the equipment. During this process, the expert would force multiple iterations, resulting in the need for collaboration to fulfill the task.
Procedure
Participants were instructed on the experimental setup and tasks, and they gave their informed consent. Afterwards, they were introduced to the prototype, and a time for adaptation was provided, i.e., a training period to freely interact with its functions. Then, the tasks were performed, and after being finished, participants answered a post-task questionnaire.
Participants
An independent group of nine participants (3 female—33.3%), aged from 20 to 63 years old, who had not participated in the first part of the study, performed the tasks and completed the post-experience questionnaire (a sample of just five users is anticipated to find approximately 80% of usability issues [54,55]). Participants had various occupations and professions, e.g., Masters and PhD students, researchers, and faculty members from different areas of application. They had no prior experience with the defined case study, but had experience with collaborative tools (e.g., Skype, TeamViewer, etc.) in their daily activities, as well as in evaluating AR solutions.
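The cited expectation follows from the Nielsen–Landauer model [54], in which the proportion of usability problems found by n evaluators is

\[ \mathrm{Found}(n) = 1 - (1-\lambda)^n, \]

where \lambda is the probability that a single evaluator detects a given problem; with the commonly cited \lambda \approx 0.31, five users yield 1 - 0.69^5 \approx 0.84, i.e., in the vicinity of the 80% figure above.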
Data Collection
Two types of data were collected. First, task performance, i.e., the time needed to complete all procedures, logged in seconds by the device. Although we were not focused on comparing the design against any other experimental condition, we wanted to understand the time required to perform such tasks. Second, participants' opinions, gathered through a post-task questionnaire, including demographic information and questions concerning collaborative aspects, and through notes from a post-task interview aimed at understanding participants' views on the collaborative process, the ease of use of the prototype features, and their preferences. We decided to prioritize participant opinions at this stage instead of using validated methods, such as the System Usability Scale (SUS) or NASA-TLX, to assess the collaborative process. This choice takes into account recent work reporting that these single-user methods are not the most adequate for collaborative scenarios: applying them can lead to an incomplete vision and, in turn, to dubious results, thus falling short of retrieving the necessary amount of data for improving distributed solutions and the collaborative effort [39,56]. The data collection was conducted under the guidelines of the 1964 Helsinki Declaration. All measures were followed to ensure a COVID-19-safe environment during each session of the user study.
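As a purely illustrative sketch (the prototype's actual logging code is not described in the paper), per-task timing in seconds on a Unity-based device could be captured as follows:

```csharp
using UnityEngine;

// Illustrative per-task timer; logs elapsed wall-clock seconds.
public class TaskTimer : MonoBehaviour
{
    private float startTime;

    public void BeginTask()
    {
        startTime = Time.realtimeSinceStartup; // wall-clock time, independent of Time.timeScale
    }

    public void EndTask(string taskName)
    {
        float elapsedSeconds = Time.realtimeSinceStartup - startTime;
        Debug.Log($"{taskName} completed in {elapsedSeconds:F1} s");
    }
}
```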
Results and Discussion
All participants were able to collaborate using the proposed AR-based tool. On average, each evaluation session lasted 70 min (the tasks took 40 min to complete). After understanding the tool mechanisms during the adaptation period, participants were able to quickly capture the context of the equipment and, using the different annotation features, question the remote counterpart on how to perform a given activity. After receiving support in the form of AR-based annotations, they were able to remove, replace, and insert new components into the equipment (a boiler) where the tasks took place.
Participants found seeing the AR-based annotations relevant and recognized that they contributed to a better understanding of where to perform a given action, which facilitated communication and discussion. This is an important result: participants had no experience using AR remote tools, showing they were able to adopt a new technology while also collaborating with a remote team member and performing the proposed procedures.
Regarding the use of step-by-step instructions, participants stated that they were clear and useful for accomplishing the established goals, i.e., they were able to draw their attention to the area of interest with low visual complexity. In fact, they recognized that the tool's ability to offer a few simple annotations at a time, instead of presenting larger ones with more visual content, could substantially improve how on-site team members understand what they need to do, while reducing mental effort. Participants also emphasized that this feature allows each team member to perform the procedures at their own pace.
In addition, revisiting instructions created for a specific problem at a later time was considered very important to reduce response time and, in some cases, minimize the need for remote assistance. A good example is a situation where an on-site collaborator has limited network access and is not able to properly communicate and interact with the remote expert. Another is asynchronous remote collaboration, e.g., when team members act in different time zones, or when a single remote expert must provide support to several on-site workers at roughly the same time.
Another important topic worth mentioning is the lack of 3D virtual content. Some participants noticed this fact and asked about it during the post-task interview, since they believed such models could be useful, not always, but in specific use-cases. To elaborate, during the participatory process with domain experts and target users from the industry sector, it was emphasized that their line of equipment features more than 150 models, with thousands of individual components, which may hamper the process of making the models available to designers, developers, and technicians. Hence, it was hypothesized that the large variety of 3D models could affect the performance of collaborative solutions and, as a consequence, the collaborative process itself, since an approach based on 3D content would require sharing large amounts of data when compared to smaller 2D AR-based annotations. Given these limitations, the industrial partners gave priority to a simpler, more generic solution for a wider range of scenarios, which was the focus of our research. Notwithstanding, now that this first milestone has been attained, we intend to start exploring and integrating 3D models into the proposed tool.
In addition, the participatory process made clear that currently available collaborative tools are very limited for the industry scenarios implied by the use-case. It was reported that the 'best' solutions offer rather simple features (e.g., mostly voice communication and, in some cases, image sharing, without any annotation possibility for either collaborator), which fall far behind what the proposed AR-based tool can provide to team members during collaborative scenarios. Furthermore, the results also reinforce the validity of the tool, since the evaluation moved beyond the typical toy problems with Lego blocks or Tangram puzzles, which have been used in the literature, present rather low complexity, and are familiar to most participants. This shows a real need for more research associated with real-life scenarios.
Therefore, the research community must move forward and offer its know-how to use-cases that urgently require more specialized expertise and experience, in order to improve the workforce's capacity to overcome existing remote-collaboration problems. This is also an opportunity to learn from these scenarios and create more sophisticated AR-based collaborative tools.
Last, we strongly believe that the positive results presented in this subsection are a direct consequence of having conducted a first assessment of the quality of the AR instructions. By evaluating whether the visual instructions that can be created have a positive effect, we were able to understand whether changes were required and move one step closer to ensuring adoption in real-life scenarios. In this way, the authoring tool is more likely to be accepted by target users and not fall into disuse over time, as happens with solutions whose visual instructions were never properly analyzed.

4. Conclusions and Future Work

In this paper, we explored the quality of AR-based instructions for scenarios of remote collaboration in which on-site collaborators require guidance from remote experts. A participatory process was conducted to obtain insights into the real needs of human operators, based on the expertise of target users and domain experts from the industry sector. In turn, these insights motivated the development of an authoring tool for the creation of situated instructions and a two-part user study to assess the quality of a set of step-by-step instructions, as well as to evaluate their use in a real-life maintenance task.
The results demonstrate the tool's potential for creating AR-based instructions characterized by low visual complexity and high visual impact, clarity, and directed focus. In addition, participants with different backgrounds were able to collaborate and fulfill the established real-life tasks shortly after being introduced to the proposed prototype, showing its potential to intuitively generate instructions without having to master complex AR concepts. Nevertheless, successfully conducting pre-defined tasks cannot be interpreted as synonymous with user acceptance and adoption of AR remote scenarios. We argue that it is paramount to first understand users' response to the quality of the instructions, as a precursor to verifying whether they are attractive, maintain user attention and interest, and impose an acceptable cognitive workload. In fact, understanding all these dimensions prior to using instructions in real-life tasks may contribute to growing the adoption of authoring tools for remote scenarios by a larger audience later on, which is essential to increase the adoption of AR in the industry sector. Furthermore, having data regarding these dimensions can also serve as a benchmark to compare different approaches, adding to the body of knowledge and providing enough context to enable a transparent account and transferability, toward a better understanding of the AR landscape in remote scenarios concerning Industry 4.0.
This study is being expanded through the integration of new features into the AR-based tool, namely the inclusion of videos as well as 3D virtual models, which may support more complex scenarios. Moving forward, we also intend to conduct an additional study with real-life technicians. In particular, we will explore additional tasks with different levels of complexity, as well as have different experts creating instructions to be consumed.

Author Contributions

Conceptualization, B.M., C.F., S.S., A.S., P.D. and B.S.S.; methodology, B.M., C.F., S.S., A.S., P.D. and B.S.S.; software, B.M.; validation, B.M.; formal analysis, C.F.; investigation, B.M., C.F., S.S., A.S., P.D. and B.S.S.; resources, B.M., P.D. and B.S.S.; data curation, B.M.; writing—original draft preparation, B.M., C.F., S.S., A.S., P.D. and B.S.S.; writing—review and editing, B.M., S.S., A.S., P.D. and B.S.S.; visualization, B.M., C.F., A.S. and B.S.S.; supervision, P.D. and B.S.S.; project administration, B.M.; funding acquisition, B.M., P.D. and B.S.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by IEETA—Institute of Electronics and Informatics Engineering of Aveiro, funded by National Funds through FCT, in the context of the project [UIDB/00127/2020].

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all participants.

Data Availability Statement

The datasets generated and/or analysed during the current study are available from the corresponding author on reasonable request.

Acknowledgments

We would like to thank the reviewers for their thoughtful comments and suggestions toward improving an earlier version of this manuscript. To everyone involved in the user studies and discussion groups, thank you for your time and expertise. The maintenance context was based on an industrial collaboration under the Smart Green Homes Project [POCI-01-0247-FEDER-007678], a co-promotion between Bosch Termotecnologia S.A. and the University of Aveiro. It is financed by Portugal 2020 under the Competitiveness and Internationalization Operational Program, and by the European Regional Development Fund.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Geng, J.; Song, X.; Pan, Y.; Tang, J.; Liu, Y.; Zhao, D.; Ma, Y. A systematic design method of adaptive augmented reality work instruction for complex industrial operations. Comput. Ind. 2020, 119, 103229. [Google Scholar] [CrossRef]
  2. Longo, F.; Nicoletti, L.; Padovano, A. Smart operators in industry 4.0: A human-centered approach to enhance operators’ capabilities and competencies within the new smart factory context. Comput. Ind. Eng. 2017, 113, 144–159. [Google Scholar] [CrossRef]
  3. Bottani, E.; Vignali, G. Augmented reality technology in the manufacturing industry: A review of the last decade. IISE Trans. 2019, 51, 284–310. [Google Scholar] [CrossRef] [Green Version]
  4. Ens, B.; Lanir, J.; Tang, A.; Bateman, S.; Lee, G.; Piumsomboon, T.; Billinghurst, M. Revisiting Collaboration through Mixed Reality: The Evolution of Groupware. Int. J. Hum.-Comput. Stud. 2019, 131, 81–98. [Google Scholar] [CrossRef]
  5. Kim, S.; Lee, G.; Billinghurst, M.; Huang, W. The combination of visual communication cues in mixed reality remote collaboration. J. Multimodal User Interfaces 2020, 14, 321–335. [Google Scholar] [CrossRef]
  6. Alves, J.B.; Marques, B.; Dias, P.; Santos, B.S. Using augmented reality for industrial quality assurance: A shop floor user study. Int. J. Adv. Manuf. Technol. 2021, 115, 105–116. [Google Scholar] [CrossRef]
  7. Calandra, D.; Cannavò, A.; Lamberti, F. Improving AR-powered remote assistance: A new approach aimed to foster operator’s autonomy and optimize the use of skilled resources. Int. J. Adv. Manuf. Technol. 2021, 114, 3147–3164. [Google Scholar] [CrossRef]
  8. Uva, A.E.; Gattullo, M.; Manghisi, V.M.; Spagnulo, D.; Cascella, G.L.; Fiorentino, M. Evaluating the effectiveness of spatial augmented reality in smart manufacturing: A solution for manual working stations. Int. J. Adv. Manuf. Technol. 2018, 94, 509–521. [Google Scholar] [CrossRef]
  9. Zubizarreta, J.; Iker, A.; Aiert, A. A framework for augmented reality guidance in industry. Int. J. Adv. Manuf. Technol. 2019, 102, 4095–4108. [Google Scholar] [CrossRef]
  10. Masood, T.; Egger, J. Augmented reality in support of Industry 4.0—Implementation challenges and success factors. Robot. Comput.-Integr. Manuf. 2019, 58, 181–195. [Google Scholar] [CrossRef]
  11. Hernandez-de Menendez, M.; Morales-Menendez, R.; Escobar, C.A.; McGovern, M. Competencies for Industry 4.0. Int. J. Interact. Des. Manuf. 2020, 14, 1511–1524. [Google Scholar] [CrossRef]
  12. Bruno, F.; Barbieri, L.; Marino, E.; Muzzupappa, M.; D’Oriano, L.; Colacino, B. An augmented reality tool to detect and annotate design variations in an Industry 4.0 approach. Int. J. Adv. Manuf. Technol. 2019, 105, 875–887. [Google Scholar] [CrossRef]
  13. Boboc, R.G.; Gîrbacia, F.; Butilă, E.V. The Application of Augmented Reality in the Automotive Industry: A Systematic Literature Review. Appl. Sci. 2020, 10, 4259. [Google Scholar] [CrossRef]
  14. Laviola, E.; Gattullo, M.; Manghisi, V.M.; Fiorentino, M.; Uva, A.E. Minimal AR: Visual asset optimization for the authoring of augmented reality work instructions in manufacturing. Int. J. Adv. Manuf. Technol. 2021, 119, 1769–1784. [Google Scholar] [CrossRef]
  15. Bhattacharya, B.; Winer, E.H. Augmented reality via expert demonstration authoring (AREDA). Comput. Ind. 2019, 105, 61–79. [Google Scholar] [CrossRef]
  16. van Lopik, K.; Sinclair, M.; Sharpe, R.; Conway, P.; West, A. Developing augmented reality capabilities for industry 4.0 small enterprises: Lessons learnt from a content authoring case study. Comput. Ind. 2020, 117, 103208. [Google Scholar] [CrossRef]
  17. Winer, E.H. Authoring Augmented Reality Work Instructions by Expert Demonstration; Technical Report; Iowa State University: Ames, IA, USA, 2018. [Google Scholar]
  18. Gimeno, J.; Morillo, P.; Orduña, J.M.; Fernández, M. An easy-to-use AR authoring tool for industrial applications. In Computer Vision, Imaging and Computer Graphics. Theory and Application; Springer: Berlin/Heidelberg, Germany, 2013; pp. 17–32. [Google Scholar]
  19. Dengel, A.; Iqbal, M.; Grafe, S.; Mangina, E. A Review on Augmented Reality Authoring Toolkits for Education. Front. Virtual Real. 2022, 3, 798032. [Google Scholar] [CrossRef]
  20. Bégout, P.; Duval, T.; Kubicki, S.; Charbonnier, B.; Bricard, E. WAAT: A Workstation AR Authoring Tool for Industry 4.0. In Proceedings of the International Conference on Augmented Reality, Virtual Reality and Computer Graphics, Lecce, Italy, 7–10 September 2020; pp. 304–320. [Google Scholar]
  21. Roberto, R.A.; Lima, J.P.; Mota, R.C.; Teichrieb, V. Authoring tools for augmented reality: An analysis and classification of content design tools. In Proceedings of the International Conference of Design, User Experience, and Usability, Toronto, ON, Canada, 17–22 July 2016; pp. 237–248. [Google Scholar]
  22. Ramirez, H.; Mendivil, E.G.; Flores, P.R.; Gonzalez, M.C. Authoring Software for Augmented Reality Applications for the Use of Maintenance and Training Process. Procedia Comput. Sci. 2013, 25, 189–193. [Google Scholar] [CrossRef] [Green Version]
  23. Quint, F.; Loch, F.; Bertram, P. The Challenge of Introducing AR in Industry—Results of a Participative Process Involving Maintenance Engineers. Procedia Manuf. 2017, 11, 1319–1323. [Google Scholar] [CrossRef]
  24. del Amo, I.F.; Erkoyuncu, J.A.; Roy, R.; Palmarini, R.; Onoufriou, D. A systematic review of Augmented Reality content-related techniques for knowledge transfer in maintenance applications. Comput. Ind. 2018, 103, 47–71. [Google Scholar] [CrossRef]
  25. Quandt, M.; Knoke, B.; Gorldt, C.; Freitag, M.; Thoben, K.D. General Requirements for Industrial Augmented Reality Applications. Procedia CIRP 2018, 72, 1130–1135. [Google Scholar] [CrossRef]
  26. Masood, T.; Egger, J. Adopting augmented reality in the age of industrial digitalisation. Comput. Ind. 2020, 115, 103112. [Google Scholar] [CrossRef]
  27. Kim, S.; Billinghurst, M.; Lee, C.; Lee, G. Using Freeze Frame and Visual Notifications in an Annotation Drawing Interface for Remote Collaboration. In Proceedings of the Transactions on Internet & Information Systems, San Francisco, CA, USA, 13–16 December 2018; Volume 12, pp. 6034–6056. [Google Scholar]
  28. Johnson, S.; Gibson, M.; Mutlu, B. Handheld or Handsfree? Remote Collaboration via Lightweight Head-Mounted Displays and Handheld Devices. In Proceedings of the ACM Conference on Computer Supported Cooperative Work & Social Computing, Vancouver, BC, Canada, 14–18 March 2015; pp. 1825–1836. [Google Scholar]
  29. Piumsomboon, T.; Dey, A.; Ens, B.; Lee, G.; Billinghurst, M. The Effects of Sharing Awareness Cues in Collaborative Mixed Reality. Front. Robot. AI 2019, 6, 5. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  30. Kim, K.; Billinghurst, M.; Bruder, G.; Duh, H.B.; Welch, G.F. Revisiting Trends in Augmented Reality Research: A Review of the 2nd Decade of ISMAR (2008–2017). IEEE Trans. Vis. Comput. Graph. 2018, 24, 2947–2962. [Google Scholar] [CrossRef] [PubMed]
  31. de Souza Cardoso, L.F.; Mariano, F.C.M.Q.; Zorzal, E.R. A survey of industrial augmented reality. Comput. Ind. Eng. 2020, 139, 106159. [Google Scholar] [CrossRef]
  32. Marques, B.; Silva, S.; Alves, J.; Araujo, T.; Dias, P.; Santos, B.S. A Conceptual Model and Taxonomy for Collaborative Augmented Reality. IEEE Trans. Vis. Comput. Graph. 2021; early access. [Google Scholar]
  33. Röltgen, D.; Dumitrescu, R. Classification of industrial Augmented Reality use cases. Procedia CIRP 2020, 91, 93–100. [Google Scholar] [CrossRef]
  34. Fernández del Amo, I.; Erkoyuncu, J.A.; Roy, R.; Wilding, S. Augmented Reality in Maintenance: An information-centred design framework. Procedia Manuf. 2018, 19, 148–155. [Google Scholar] [CrossRef]
  35. Jetter, J.; Eimecke, J.; Rese, A. Augmented reality tools for industrial applications: What are potential key performance indicators and who benefits? Comput. Hum. Behav. 2018, 87, 18–33. [Google Scholar] [CrossRef]
  36. Marques, B.; Silva, S.; Dias, P.; Santos, B.S. Remote Collaboration using Augmented Reality: Development and Evaluation. In Proceedings of the IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), Christchurch, New Zealand, 12–16 March 2022; pp. 1–2. [Google Scholar]
  37. Palmarini, R.; Erkoyuncu, J.A.; Roy, R.; Torabmostaedi, H. A systematic review of augmented reality applications in maintenance. Robot. Comput.-Integr. Manuf. 2018, 49, 215–228. [Google Scholar] [CrossRef] [Green Version]
  38. Egger, J.; Masood, T. Augmented reality in support of intelligent manufacturing—A systematic literature review. Comput. Ind. Eng. 2020, 140, 106195. [Google Scholar] [CrossRef]
  39. Marques, B.; Silva, S.; Teixeira, A.; Dias, P.; Santos, B.S. A vision for contextualized evaluation of remote collaboration supported by AR. Comput. Graph. 2021, 102, 413–425. [Google Scholar] [CrossRef]
  40. Marques, B.; Silva, S.; Alves, J.; Rocha, A.; Dias, P.; Santos, B.S. Remote Collaboration in Maintenance Contexts using Augmented Reality: Insights from a Participatory Process. Int. J. Interact. Des. Manuf. 2022, 16, 419–438. [Google Scholar] [CrossRef]
  41. Marques, B.; Silva, S.; Rocha, A.; Dias, P.; Santos, B.S. Remote Asynchronous Collaboration in Maintenance scenarios using Augmented Reality and Annotations. In Proceedings of the IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), Lisbon, Portugal, 27 March–3 April 2021; pp. 567–568. [Google Scholar]
  42. Madeira, T.; Marques, B.; Alves, J.; Dias, P.; Santos, B.S. Exploring Annotations and Hand Tracking in Augmented Reality for Remote Collaboration. In Proceedings of the Human Systems Engineering and Design III, Pula, Croatia, 23–25 September 2021; pp. 83–89. [Google Scholar]
  43. Marques, B.; Ferreira, C.; Silva, S.; Santos, A.; Dias, P.; Sousa Santos, B. Exploring Remote Augmented Reality as a 2D Authoring Tool for Creation of Guiding Instructions. In Proceedings of the International Conference on Graphics and Interaction, ICGI 2021, Porto, Portugal, 4–5 November 2021; pp. 1–4. [Google Scholar]
  44. Bresciani, S.; Blackwell, A.F.; Eppler, M. A Collaborative Dimensions Framework: Understanding the Mediating Role of Conceptual Visualizations in Collaborative Knowledge Work. In Proceedings of the Hawaii International Conference on System Sciences, Waikoloa, HI, USA, 7–10 January 2008; p. 364. [Google Scholar]
  45. Bresciani, S.; Eppler, M.J. Beyond Knowledge Visualization Usability: Toward a Better Understanding of Business Diagram Adoption. In Proceedings of the International Conference Information Visualisation, Barcelona, Spain, 15–17 July 2009; pp. 474–479. [Google Scholar]
  46. Bresciani, S.; Eppler, M.J. The Benefits of Synchronous Collaborative Information Visualization: Evidence from an Experimental Evaluation. IEEE Trans. Vis. Comput. Graph. 2009, 15, 1073–1080. [Google Scholar] [CrossRef] [PubMed]
  47. Eppler, M.J. What is an Effective Knowledge Visualization? Insights from a Review of Seminal Concepts. In Proceedings of the 2011 15th International Conference on Information Visualisation, London, UK, 13–15 July 2011; pp. 349–354. [Google Scholar]
  48. Barthel, R.; Ainsworth, S.; Sharples, M. Collaborative knowledge building with shared video representations. Int. J. Hum.-Comput. Stud. 2013, 71, 59–75. [Google Scholar] [CrossRef]
  49. Ribeiro, F.C.; de Souza, J.M.; de Paula, M.M.V. Use of information visualization techniques in a collaborative context. In Proceedings of the IEEE International Conference on Computer Supported Cooperative Work in Design (CSCWD), Calabria, Italy, 6–8 May 2015; pp. 79–84. [Google Scholar]
  50. Kernbach, S.; Svetina Nabergoj, A. Visual Design Thinking: Understanding the Role of Knowledge Visualization in the Design Thinking Process. In Proceedings of the International Conference Information Visualisation (IV), Fisciano, Italy, 10–13 July 2018; pp. 362–367. [Google Scholar]
  51. Mourtzis, D.; Zogopoulos, V.; Vlachou, E. Augmented Reality Application to Support Remote Maintenance as a Service in the Robotics Industry. Procedia CIRP 2017, 63, 46–51. [Google Scholar] [CrossRef]
  52. Zhu, J.; Ong, S.K.; Nee, A.Y. A context-aware augmented reality system to assist the maintenance operators. Int. J. Interact. Des. Manuf. 2014, 8, 293–304. [Google Scholar]
  53. Fiorentino, M.; Uva, A.E.; Gattullo, M.; Debernardis, S.; Monno, G. Augmented reality on large screen for interactive maintenance instructions. Comput. Ind. 2014, 65, 270–278. [Google Scholar] [CrossRef]
  54. Nielsen, J.; Landauer, T.K. A Mathematical Model of the Finding of Usability Problems. In Proceedings of the INTERACT’93 and CHI’93 Conference on Human Factors in Computing Systems, Amsterdam, The Netherlands, 24–29 April 1993; pp. 206–213. [Google Scholar]
  55. Tullis, T.; Albert, W. Measuring the User Experience, Second Edition: Collecting, Analyzing, and Presenting Usability Metrics; Morgan Kaufmann Publishers Inc.: Burlington, MA, USA, 2013. [Google Scholar]
  56. Marques, B.; Teixeira, A.; Silva, S.; Alves, J.; Dias, P.; Santos, B.S. A critical analysis on remote collaboration mediated by augmented reality: Making a case for improved characterization and evaluation of the collaborative process. Comput. Graph. 2022, 102, 619–633. [Google Scholar] [CrossRef]
Figure 1. On-site technician performing an intervention based on a set of instructions provided by a remote expert using the proposed AR-based authoring tool. Adapted from [43].
Figure 2. Information flow overview. Goal: Allow an on-site technician to share the context of the task (through video stream) for discussion and situation understanding with a remote expert. The expert can freeze the on-site worker video stream and create AR-based instructions using mechanisms to annotate it. Finally, the technician can view the real world augmented with the instructions and perform an intervention.
Figure 3. On-site collaborator performing an augmentation of the instructions suggesting where a new component should be installed (provided by the remote expert) on top of the real world.
Figure 4. Step-by-step instructions analysed: 1—push the cables to the side to make some space; 2—remove the screws that hold the fan; 3—unplug the power cables from the energy module; 4—reach in and remove the fan; 5—install the new fan by just repeating the opposite procedures.
Figure 5. Boxplot chart for the results associated with the visual characteristics of Step 1 instructions—push the cables to the side to make space.
Figure 6. Boxplot chart for the results associated with the visual characteristics of Step 2 instructions—remove the screws that hold the fan.
Figure 7. Boxplot chart for the results associated with the visual characteristics of Step 3 instructions—unplug the power cables from the energy module.
Figure 8. Boxplot chart for the results associated with the visual characteristics of Step 4 instructions—reach in and remove the fan.
Figure 9. Boxplot chart for the results associated with the visual characteristics of Step 5 instructions—repeat the opposite procedure to install the new fan.
Figure 10. Sum of participants ratings for the visual characteristics of the step-by-step instructions, according to: Visual Complexity (VC), Visual Impact (VI), Clarity (CLA), Directed Focus (DF) and Inference Support (IS).
Figure 11. Dendrogram of the visual characteristics for five dimensions: Visual Complexity for selected step (VC_STP_x); Visual Impact for selected step (VI_STP_x); Clarity for selected step (CLA_STP_x); Directed Focus for selected step (DF_STP_x); Inference Support for selected step (IS_STP_x).
Figure 12. Remote team members collaborating through the AR-based tool: on-site participant being assisted by a remote expert.
