Article

Movement Time for Pointing Tasks in Real and Augmented Reality Environments

1 School of Safety and Management Engineering, Hunan Institute of Technology, Hengyang 421002, China
2 Department of Industrial Management, Chung Hua University, Hsin-Chu 30012, Taiwan
3 College of Information Management, Nanjing Agricultural University, Nanjing 210095, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(2), 788; https://doi.org/10.3390/app13020788
Submission received: 5 December 2022 / Revised: 29 December 2022 / Accepted: 2 January 2023 / Published: 5 January 2023

Abstract

Human–virtual target interactions are becoming more and more common due to the emergence and application of augmented reality (AR) devices. They differ from interactions with real objects. Quantifying the movement time (MT) of human–virtual target interactions is essential for AR-based interface/environment design. This study investigates the MT of interactions with virtual targets and compares it with the MT measured in a real environment. An experiment was conducted to measure the MT of pointing tasks on both a physical and a virtual calculator panel. A total of 30 healthy adults, 15 male and 15 female, participated. Each participant performed pointing tasks on both the physical and virtual panels under varied panel inclination, hand movement direction, target key, and handedness conditions. The participants wore an AR headset (Microsoft Hololens 2) when pointing on the virtual panel; when pointing on the physical panel, they pointed on a panel drawn on a board. The results showed that the type of panel, inclined angle, gender, and handedness had significant (p < 0.0001) effects on the MT. A new finding of this study was that the MT of the pointing task on the virtual panel was significantly (p < 0.0001) higher than that on the physical one: users of the Hololens 2 AR device performed pointing tasks worse than they did on the physical panel. A novel, revised Fitts’s model was proposed that incorporates both a physical–virtual component and the inclined angle of the panel in estimating the MT. The index of difficulty and throughput of the pointing tasks on the physical and virtual panels were compared and discussed. The information in this paper helps AR designers promote the usability of their designs and so improve the user experience of their products.

1. Introduction

One of the most commonly adopted laws for describing how a human interacts with an object is Fitts’s law. This law estimates the time required to touch a target of a certain size over a certain distance. It states that the movement time (MT) of a body part from one location to a target depends on the movement distance and the size of the target. The index of difficulty (ID) of the movement was defined to incorporate the movement distance of the body segment and the target size (W), and MT can be estimated using the following equations [1,2]:
MT = a + b ID, (1)
ID = log2(2 × distance/W), (2)
where a and b are constants to be determined.
ID itself is dimensionless, but it is conventionally assigned the unit of bits, which may be regarded as the amount of information transferred via the movement. This implies that the difficulty of a target pointing or tapping task may be quantified using the information metric of bits.
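As a minimal illustration of Equations (1) and (2), the Python sketch below computes the ID of one movement condition from Table 2 of this study; the regression constants a and b in it are hypothetical placeholders, not values fitted here.

```python
import math

def fitts_id(distance, width):
    """Index of difficulty (bits) per the original formulation, Equation (2)."""
    return math.log2(2 * distance / width)

def fitts_mt(distance, width, a, b):
    """Movement time per Equation (1); a (ms) and b (ms/bit) are regression constants."""
    return a + b * fitts_id(distance, width)

# A 156 mm movement to a 3.5 mm target (one condition from Table 2) carries
# log2(2 * 156 / 3.5) ~= 6.5 bits of difficulty. The constants a = 100 ms and
# b = 150 ms/bit are hypothetical placeholders, not values fitted in this study.
print(round(fitts_id(156, 3.5), 2))          # 6.48 bits
print(round(fitts_mt(156, 3.5, 100, 150)))   # ~1072 ms
```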
Fitts’s law was originally applied to study the MT of hand and finger movements. These movements may include tapping, pointing, tracking, positioning, and other hand–object contact tasks. They are required in human–machine and human–object interactions. In addition to hand and finger movements, scientists have applied this law to study the MT of other body segments such as the head [3,4,5,6], leg [7,8], foot [9,10,11,12], and trunk [13,14]. Fitts’s law has been applied not only in ordinary environments but also in special environments such as underwater [15,16,17]. In addition to assessing the MT of body parts for healthy people, Fitts’s law has been adopted to assess the body functions of patients for medical and healthcare purposes [18,19].
Since Fitts’s law was first proposed, many variants have been proposed to quantify ID, and thus MT, more accurately. Both Welford [20] and MacKenzie [21] proposed revised equations to predict the MT of moving the right index finger from a starting point to a target. These authors suggested using the terms log2[(A/W) + 0.5] and log2[(A/W) + 1], respectively, to replace the original ID definition. They mentioned that such revision was necessary considering the deviation of the original definition of ID from Theorem 17 of Shannon’s information theory [22]. Gan and Hoffman [23] found that the MT is linearly related to the square root of the movement amplitude for ballistic movements when the ID is small, implying that the target size no longer needs to be considered when estimating the MT under such circumstances. Hoffmann [24] investigated the use of an effective target tolerance in an inverted Fitts task. Considering the finite width of the finger probe, he argued that the original Fitts’s law could overestimate the ID of the movement, and he introduced an index-finger pad size term into the original ID equation.
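The practical difference between these ID formulations is easy to see numerically. The following sketch compares the original Fitts term with the Welford [20] and MacKenzie [21] variants for one amplitude/width pair taken from Table 2; the comparison itself is illustrative and not part of the original analysis.

```python
import math

# One amplitude/width pair from Table 2 of this study (in mm).
A, W = 156.0, 3.5

id_fitts = math.log2(2 * A / W)        # original Fitts term
id_welford = math.log2(A / W + 0.5)    # Welford [20]
id_mackenzie = math.log2(A / W + 1)    # MacKenzie, Shannon formulation [21]

print(f"Fitts: {id_fitts:.2f}, Welford: {id_welford:.2f}, "
      f"MacKenzie: {id_mackenzie:.2f} bits")
# -> Fitts: 6.48, Welford: 5.49, MacKenzie: 5.51 bits
```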
Murata and Iwase [25] found that the MT of the 3D pointing task was not satisfactorily explained by the original Fitts’s law because the duration of the 3D movements was affected by the direction of the target. They introduced a directional parameter (θ) of the pointing movements into the original Fitts’s law. Similar results were reported by Cha and Myung [26]. They added a second directional parameter (θ2), in addition to the θ in Murata and Iwase [25], to delineate the 3D movement of finger touch. Baird et al. [27] conducted a study to test the effects of probe length on the MT of pointing tasks. They suggested introducing a function of probe length plus a logarithm of the amplitude into the original Fitts’s law. They claimed that such additions allow for better prediction of the MT for a worker using a long tool such as a screwdriver or a crowbar.
Virtual reality (VR) devices have been widely used for both entertainment and commercial purposes. VR strives to immerse users completely within a designed scenario so as to provide an intuitive link between a computer and the user [28]. VR users may see the displayed virtual image and interact with virtual objects using one or two handheld remote controllers, which may include push buttons, touch pads, joysticks, and so on. They may also manipulate one or two virtual hands and/or use raycasting to “contact” virtual targets. Augmented reality (AR), on the other hand, combines virtual objects with real environments [29,30,31,32,33,34]. AR users see the real world with some virtual, unreal objects in their vision. They may interact with virtual objects in ways similar to those used in a VR system. Alternatively, they may “contact” virtual objects directly with their hands, such as touching a virtual button with a finger to activate an input command [35,36].
Human interaction with virtual objects is, apparently, quite different from interaction with real objects [37]. When studying the MT for VR and AR users, the original Fitts’s law and its variants may no longer be valid because human–object interactions are replaced by the more complicated human–virtual object interactions. One of the major issues in interacting with a virtual object is the depth perception deficiency of stereoscopic displays [38,39,40]. The literature [41] has shown that the accuracy rate for localizing targets in virtual environments using stereoscopic displays is lower than that in the real world. It has also been found that participants tend to overestimate the depth of targets in virtual environments [42,43]. If users cannot locate an object accurately, they need more time to perform a task requiring contact between their hands and objects. This leads to inferior performance [40,44,45].
Investigations of the MT in virtual environments and human–virtual object interaction tasks have been reported. Ha and Woo [46] performed an MT study of 3D object manipulation techniques based on a virtual hand metaphor in a tangible augmented reality environment. Lubos et al. [38] reported that, when handling a virtual object in an immersive virtual environment, both the ID and the interaction of target position and movement distance had significant effects on the MT. They found high selection error along the view axis compared with the other axes. Machuca and Stuerzlinger [45] found that their participants needed significantly more time to select virtual 3D targets than to select identical physical targets. Deng et al. [47] indicated that the MT in operating a handheld controller to perform a target positioning task in a virtual environment may be split into the acceleration, deceleration, and correction phases of hand movement. They included amplitude, object size, and target tolerance in their MT equations and suggested calculating the MT in different phases using different equations. Clark et al. [48] proposed an extended Fitts’s law model to consider the effects of the inclined angle of virtual targets.
Even though studies have investigated the MT of hand–virtual object interactions, many issues remain untouched. For example, users receive no haptic feedback when “contacting” a virtual object. Lacking haptic feedback could make fine adjustment of the hand/finger before touching the virtual target difficult and thus lead to a longer MT than that of contacting a real target. This has not been discussed in the literature and is the main issue considered in the current study. We believe that this difference can be attributed to a tactile (T) factor, determined by the presence or absence of tactile feedback. This factor should be incorporated into Fitts’s law (Equation (2)) so as to provide a good fit of the MT when pointing tasks are performed on virtual targets. Equations (3) and (4) show the inclusion of T in the function estimating the MT:
MT = f(distance, W, T), (3)
MT = a + b log2(2 × distance/W) + f(T), (4)
where f(T) is a function of the tactile factor T.
The hypothesis of this study was that the MT of pointing tasks on virtual targets in an AR environment should be longer than that on a physical target in a real environment because of the T factor.
The traditional Fitts’s law (Equation (1)) used to predict the MT of pointing tasks on virtual targets could underestimate the MT. The literature has recommended incorporating a trigonometric function of the inclined angle when pointing at 3D targets into the original Fitts’s equation [25]. We verified whether such an addition was also valid in both physical and virtual pointing tasks. In addition, it was hypothesized that both gender and handedness are significant factors affecting the MT of physical and virtual pointing tasks. The objective of this study was to test these hypotheses. Predictive equations for the MT of pointing tasks on both virtual and physical targets were established and compared, considering the presence of the T factor, so as to enhance our understanding of the efficiency of giving input commands via a virtual keypad using hand gestures.
This study provides valuable information for AR designers to realize the inefficiency of using a virtual keypad in the AR device tested in this study. Such information is important and may generate new ideas concerning input designs in virtual environments for AR devices.

2. Methods

An experiment was performed in the laboratory. The illuminance on the workbench, measured using a light meter (Lux Meter, Trans Instrument, Singapore), for the physical touch panel was between 600 and 700 lx.

2.1. Participants

A total of 30 adults, 15 men and 15 women, were recruited as participants. All were healthy and had normal or corrected-to-normal vision without color deficiency. Four of them, two male and two female, were left-handed; the rest were right-handed. The average age and stature of the participants were 21.0 (±2.5) years and 165.9 (±7.8) cm, respectively. All the participants signed informed consent forms before joining the experiment.

2.2. AR Device and Pointing Tasks

A Microsoft® Hololens 2 headpiece (Microsoft Inc., Redmond, WA, USA) was used. This device has been reported to be the one most commonly adopted in the medical community to display virtual objects [49]. It weighs 566 g and runs the Windows Holographic Operating System. The glasses use see-through holographic lenses. The resolution of the display is 2k with 3:2 light engines. The holographic density is 2.5k radiants (light points per radian) or higher with eye-based rendering. The device tracks hand location and gestures using built-in cameras and a two-handed, fully articulated hand model with directional manipulation. World-scale positional tracking and a real-time environment mesh are employed in the system for six-degrees-of-freedom tracking and spatial mapping, respectively. The app Graph Calculator (Microsoft Inc., Redmond, WA, USA) was adopted for the pointing tasks.
The pointing tasks were performed in both real (or physical) and AR (or virtual) environments. In the AR environment, the participant wore the Hololens headpiece. The virtual calculator panel appeared in the vision of the participant (see Figure 1). The size, location, ratio of the horizontal to the vertical dimension, and tilt angle of the keyboard were adjustable using hand gestures. The inclined angle of the panel to a flat surface was adjusted to either 90° or 30°. These angles were confirmed using a goniometer. Hand movements of the pointing task included horizontal and vertical movements. There were three sizes (large, medium, and small) of the keys on the panel (see Table 1). Each pointing task was performed by moving the tip of the index finger from the original key to touch a target key. There were three target keys in each of the horizontal and vertical movement trials.
For horizontal pointing tasks, the origin key was “C” in the upper left corner of the keypad (see Figure 1). For each of the three key sizes, the third (“ln”), fifth (“(”), and seventh (a symbol key) keys in the same row served as target keys. The sizes (W) of the origin and the three target keys were the widths of the marks on these keys. The averaged W (Wave) was the mean width of the marks on the origin and each target key. For vertical pointing tasks, the origin key was “ln” in the first row of the keypad (see Figure 1). For each of the three key sizes, the second (“sinh”), fourth (“tanh”), and sixth (“0 × 16”) keys in the same column served as target keys. The sizes (W) of the origin and each of the three target keys were the heights of the marks on these keys. Here, Wave was the average height of the marks on the origin and each target key and was always 4 mm, as the marks on all the origin and target keys had this height. The distance between the origin and a target key was measured between the middles of the marks on the two keys. The distances, Wave, and the IDs calculated using Equation (2) for the pointing tasks are summarized in Table 2. The sizes, locations, distances between the origin and end point of the pointing tasks, and the distance between each key and the participant were measured using a measuring tape. The angles of the virtual panel were measured using a goniometer.
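As a quick check of the geometry above, the ID values reported in Table 2 can be reproduced directly from Equation (2); the sketch below does so for the large-key horizontal conditions (the seventh key is labeled generically here because it is a symbol key).

```python
import math

# (origin-target pair, Wave in mm, distance in mm) for the large-key
# horizontal conditions of Table 2; the 7th key is a symbol key.
tasks = [("C-ln", 3.5, 156), ("C-(", 2.5, 312), ("C-7th key", 4.5, 468)]

for name, w_ave, dist in tasks:
    print(name, round(math.log2(2 * dist / w_ave), 1))
# -> 6.5, 8.0, and 7.7 bits, matching the ID column of Table 2
```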
For the pointing tasks in the real environment, the calculator panels mimicked the virtual ones and were drawn on plastic boards. These boards were attached to a wood panel positioned on a workbench (see Figure 2). The inclined angle of the wood panel was likewise 30° or 90°. The sizes and distances of the keys and the angles of the panels were the same in the virtual and physical settings and were confirmed using a measuring tape.

2.3. Procedure

In the pointing task, the participant was requested to stand approximately 25–35 cm away from the panel. The bottom of the panel was approximately 120 cm above the floor. The heights of the keys were somewhere between the abdomen and chest of the participants.
Before each trial, the participant placed his/her index fingertip on the origin key. Upon hearing a verbal start cue, the participant moved the fingertip to touch the target key and then returned to touch the origin key. The participant was instructed to touch the letter, symbol, or mark on the key so as to ensure that the input of that key was accurate. In addition, the participants were requested to complete the task as quickly and accurately as possible. Each pointing task was repeated 10 times, giving 20 fingertip movements between the origin and the target keys. The moment the participant touched the origin key upon completing the 20 movements, he or she gave a verbal end cue. These 20 movements were video-recorded on a laptop computer, and the average time per movement retrieved from the videos was recorded as the movement time (MT). Both the dominant and nondominant hands of each participant were tested.
Each participant had an opportunity to practice pointing on the virtual keys before the first session (see Figure 3). Pointing accurately on a virtual key was relatively difficult compared with pointing on a physical key. When the fingertip approached a key, the participant saw a virtual ring indicating the location of his or her fingertip while the hand became invisible. The ring shrank as the fingertip drew close to the key (see Figure 4) and became a solid circle when the pointing was completed. At that moment, the participant heard auditory feedback of the “touch” similar to that of a traditional keyboard. The virtual image displayed in the AR device was also mirrored on a computer monitor so that the research personnel could confirm the success of each key touch.
Because none of the participants had prior experience using the Hololens device, learning effects would affect the MT, especially for the virtual tasks. The order of the trials was therefore randomized so that learning effects were distributed evenly across the treatments. Each participant completed 144 trials, split randomly into two sessions. Each session included 72 randomly arranged trials and lasted approximately 2 h. There was a 2-min break upon completing each trial within a session. Each participant completed one session and returned for the second session at least one day later.

2.4. Design of Experiment and Data Analyses

A randomized complete block design experiment was performed. Each participant was considered a block. Randomization of the factors was arranged for each participant. These factors included the type of panel (physical or virtual), inclined angle (30° or 90°), hand used (dominant or nondominant), hand movement direction (horizontal or vertical), key size (large, medium, or small), and target key (3rd, 5th, or 7th for horizontal movement and 2nd, 4th, or 6th for vertical movement). A total of 4320 trials were performed, as verified below. Descriptive statistics and analysis of variance (ANOVA) were performed to determine the effects of the type of panel, inclined angle, gender, and handedness on the MT. The least significant difference (LSD) test was performed for posterior comparisons among the levels of each factor when the factor had a significant effect on the MT at the α = 0.05 level.
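The trial count follows directly from the factor levels listed above; the short sketch below enumerates the factorial combinations to confirm the 144 trials per participant and 4320 trials in total.

```python
from itertools import product

panels = ["physical", "virtual"]
angles = [30, 90]                      # degrees
hands = ["dominant", "nondominant"]
directions = ["horizontal", "vertical"]
key_sizes = ["large", "medium", "small"]
targets = [1, 2, 3]                    # 3rd/5th/7th key (horizontal) or 2nd/4th/6th key (vertical)

conditions = list(product(panels, angles, hands, directions, key_sizes, targets))
print(len(conditions))         # 144 trials per participant
print(len(conditions) * 30)    # 4320 trials in total, as reported
```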
The experiment included hand movement direction, key size, and target key conditions, but we did not examine the effects of these factors on the MT because the distance between the origin and target key was confounded with these factors. Discussing whether these factors affected the MT would not be meaningful, because distance is known to be a significant factor affecting the MT.
Regression analyses were performed to fit Fitts’s model of the MT for both hands. In addition to the ID, an inclined-angle term was introduced to incorporate the effects of this parameter. The f(T) in Equation (4) was replaced by the parameter T multiplied by a constant d. We then have Equation (5):
MT = a + b ID + c sinθ + d T, (5)
where a, b, c, and d are constants to be determined, ID is the index of difficulty in Table 2, and θ is the inclined angle of the panel. T is a dummy variable indicating the type of panel:
T = 1 if the panel was physical,
T = 0 otherwise (virtual panel).
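A minimal sketch of Equation (5) as a predictor is given below. It uses the female/dominant-hand horizontal-movement coefficients from Table 3 and the T coding defined above; the predicted values are only in the same range as the pooled means reported in Section 3.1, since those means average over genders, angles, and directions.

```python
import math

def mt_equation5(ID, theta_deg, panel, a, b, c, d):
    """MT (ms) per Equation (5); T = 1 for the physical panel, 0 for the virtual one."""
    T = 1 if panel == "physical" else 0
    return a + b * ID + c * math.sin(math.radians(theta_deg)) + d * T

# Female/dominant-hand horizontal-movement coefficients from Table 3.
coef = dict(a=633.8, b=134.4, c=118.7, d=-970.1)
print(round(mt_equation5(6.5, 90, "virtual", **coef)))    # ~1626 ms
print(round(mt_equation5(6.5, 90, "physical", **coef)))   # ~656 ms
```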
The MT equations between the virtual and physical tasks under hand movement direction, gender, and handedness conditions were compared and discussed. Statistical analyses were performed using SPSS 20.0 software (IBM®, Armonk, NY, USA).

3. Results

3.1. ANOVA Results on MT

The ANOVA results showed that the MT of virtual pointing tasks was significantly (p < 0.0001) higher than that of physical pointing tasks. The effects of inclined angle were also significant (p < 0.0001) on the MT. Both the gender and handedness significantly (p < 0.0001) affected the MT of the physical and virtual pointing tasks.
The LSD test results indicated that the MT of the virtual pointing task (1533.5 ms) was significantly (p < 0.05) higher than that (628.6 ms) of the physical one. The MT of the 90° inclined angle condition (1118.1 ms) was significantly (p < 0.05) higher than that (1049.0 ms) of the 30° condition. For physical pointing tasks, the MT of female participants (639.1 ms) was significantly (p < 0.05) higher than that (618.1 ms) of the male participants. The MT of the nondominant hand (642.8 ms) was significantly (p < 0.05) higher than that (614.4 ms) of the dominant hand. For virtual pointing tasks, the MT of female participants (1630.2 ms) was significantly (p < 0.05) higher than that (1446.7 ms) of the male participants. The MT of the nondominant hand (1581.1 ms) was significantly (p < 0.05) higher than that (1495.9 ms) of the dominant hand.

3.2. MT Versus ID

The ID was calculated using Equation (2), with W replaced by Wave, the average size of the marks on the origin and target keys. The distance between the origin and the target keys varied depending on both the direction of the hand movement and the specific target key selected. Figure 5 and Figure 6 show the means and standard deviations of the MT versus the ID for the horizontal and vertical hand movement pointing tasks, respectively.

3.3. Regression Models

Regression analyses were performed for each gender and handedness condition for both the horizontal and vertical movement tasks using Equation (5). The regression modeling was based on all the MT data, using the Wave, distance, and ID values in Table 2. The results are shown in Table 3. The R2 values of the models were between 0.73 and 0.79. All the variance inflation factor (VIF) values for the terms associated with b, c, and d were 1, indicating that there was no correlation between any two of the independent variables.
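For readers who wish to reproduce this kind of model, the sketch below fits Equation (5) by ordinary least squares on synthetic data generated with coefficients near the Table 3 values; it illustrates the regression setup only and does not use the actual experimental data.

```python
import numpy as np

# Synthetic per-trial data generated with coefficients near the Table 3 values;
# mt, id_, theta, and t stand in for the actual experimental observations.
rng = np.random.default_rng(0)
n = 200
id_ = rng.uniform(3, 8, n)                       # index of difficulty (bits)
theta = np.where(rng.random(n) < 0.5, 30, 90)    # inclined angle (degrees)
t = (rng.random(n) < 0.5).astype(float)          # T = 1 physical, 0 virtual
mt = 630 + 134 * id_ + 119 * np.sin(np.radians(theta)) - 970 * t \
     + rng.normal(0, 80, n)                      # noisy Equation (5)

# Ordinary least squares for MT = a + b*ID + c*sin(theta) + d*T.
X = np.column_stack([np.ones(n), id_, np.sin(np.radians(theta)), t])
coefs, *_ = np.linalg.lstsq(X, mt, rcond=None)
print(coefs)   # estimates of a, b, c, d
```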
An IDadj was defined using the following equation to incorporate the components of both the inclined angle and T:
IDadj = ID + c′ sinθ + d′ T, (6)
where c′ and d′ are constants to be determined.
Both c′ and d′ may be calculated by dividing c and d, respectively, by the b in Table 3. Table 4 shows the c′, d′, and 1/b. Equation (5) may then be rewritten as follows:
MT = a + b IDadj, (7)
where a and b are constants to be determined.
Equation (6) shows that the ID may be adjusted for the inclined angle and T. By substituting the c′ and d′ in Table 4 into Equation (6), we obtained the corresponding IDadj. The increase in ID (IDincrease) may then be defined using the following equation:
IDincrease (%) = [(IDadj − ID)/ID] × 100%. (8)
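The mechanics of Equations (6) and (8) are illustrated below with deliberately hypothetical inputs (the ID, c′, d′, and T values are placeholders, not values from Table 4), since the reported IDincrease percentages pool over all experimental conditions.

```python
def id_adj(ID, sin_theta, T, c_prime, d_prime):
    """Adjusted index of difficulty, Equation (6)."""
    return ID + c_prime * sin_theta + d_prime * T

def id_increase(ID, adjusted):
    """Equation (8): percentage change of ID after adjustment."""
    return (adjusted - ID) / ID * 100.0

# Hypothetical inputs chosen only to show the mechanics: ID = 5 bits, a
# 90-degree panel (sin(theta) = 1), c' = 1.0, d' = 2.0, and T = 1.
adjusted = id_adj(5.0, 1.0, 1, c_prime=1.0, d_prime=2.0)
print(adjusted, id_increase(5.0, adjusted))   # 8.0 bits, +60%
```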
ANOVA results indicated that both T and movement direction significantly affected the IDincrease (p < 0.0001). The IDincrease of the virtual condition (197.5%) was significantly higher than that (25.3%) of the physical condition. The IDincrease of the vertical movement (159.4%) was significantly higher than that (63.4%) of the horizontal movement. These facts imply that T is the dominant factor affecting the increase of ID, while the inclined angle also needs to be considered. The results indicate that incorporating the T factor increased the difficulty of the pointing tasks more for the vertical movements than for the horizontal movements.

4. Discussion

4.1. Movement Time of Pointing Tasks on Physical and Virtual Targets

Pointing tasks are required when users use their fingertips to touch a target. Pointing on a physical key and pointing on a virtual one were quite different. For physical pointing, the control of hand movement was intuitive. When the fingertip touched the target, it was stopped by the board, providing strong tactile feedback. The literature [50] has shown that mechanical interactions between the hand and an object could interfere with the trajectory of the hand and thus affect the MT. For virtual pointing, the participant needed to move his/her fingertip to the exact location of the target, both in the 2D plane of the virtual panel and along the axis perpendicular to the panel. There was no tactile feedback. Both a visual signal (the ring became a solid circle) and an auditory signal were presented when a “touch” was successful. However, this feedback, especially the visual one, was not as prominent as tactile feedback. The fingertip might move past and “penetrate” the virtual panel, as there was no solid surface to stop it. It was also possible that there were errors between the visual location of the virtual panel and the actual location recognized by the app in the AR glasses [51]. The participant, therefore, needed to make fine adjustments following the visual feedback on the fingertip location along the axis perpendicular to the panel until he or she received visual confirmation of a successful input. Such fine adjustment significantly increased the time for pointing tasks on the virtual panel.
The regression coefficient d in Table 3 indicates the average extra amount of MT (755–1026 ms) required because the panel was a virtual one. The ratios of the MT on the virtual panel to that on the physical panel may be calculated from our MT data. The participants needed, on average, 2.23–2.79 times as long to complete the pointing task on the virtual panel as on the physical panel under our experimental conditions. These numbers indicate the effects of the factor T and the inefficiency of using a virtual panel for data input. These results are consistent with findings in the literature that pointing at virtual objects requires more time than pointing at physical ones [45].
The results of this study support the hypothesis that the MT of pointing tasks on the virtual targets in an AR environment is longer than that on physical targets. This was consistent with the findings in the literature [48]. In addition, the hypotheses that the MT of the pointing tasks was affected by the gender and handedness of the participants were also supported.

4.2. Information Processing Performance

The inverse of the regression coefficient b in Fitts’s law equation was adopted to indicate the information processing rate of the hand movement performing the pointing task [52]. The literature has shown that the information processing rate for arm movements was approximately 10 bit/s [25,53]. The inverse of b (or 1/b) in Table 4 for horizontal and vertical pointing tasks was between 6.9 bit/s and 7.4 bit/s and between 13.4 bit/s and 15.9 bit/s, respectively. The former was approximately half of the latter, indicating that horizontal pointing was less efficient than vertical pointing. It was anticipated that the information processing rate for the virtual pointing tasks should be different from that of the physical ones. However, the 1/b values of these two types of tasks in Table 4 were confounded because T was incorporated in IDadj. Therefore, the inverse of b could not accurately represent the information processing rate when the ID was adjusted with factors in addition to movement distance and target size. Mackenzie [54] also indicated that 1/b cannot be used to quantify the information transfer because of the wavering influence of the intercept a and the inconsistency between 1/b and Fitts’s original definition of index of performance, or throughput. We then calculated the throughput (TP) using the following equation:
TP = IDadj/MT. (9)
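A minimal worked example of this throughput definition is given below; the IDadj and MT inputs are hypothetical but chosen to land near the physical-pointing TP values in Table 5.

```python
def throughput(id_adj_bits, mt_ms):
    """Throughput (bit/s) per Equation (9): TP = IDadj / MT."""
    return id_adj_bits / (mt_ms / 1000.0)

# Hypothetical inputs: an adjusted difficulty of 7.4 bits completed in 630 ms
# gives ~11.7 bit/s, on the order of the physical-pointing TP values in Table 5.
print(round(throughput(7.4, 630), 1))   # 11.7
```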
Table 5 shows the TP for the pointing tasks. The TP values for the physical pointing tasks were between 11.6 and 12.6 bit/s for both the horizontal and vertical movements and were relatively stable. The TP values of the virtual–horizontal pointing tasks were between 8.8 and 9.8 bit/s and were significantly (p < 0.001) lower than those of the physical–horizontal tasks. On the contrary, the TP values of the virtual–vertical pointing tasks were between 12.2 and 15.0 bit/s and were significantly (p < 0.001) higher than those of the physical tasks in the same direction. This implies that the information processing performance of pointing at physical versus virtual panels depended on the hand movement direction. The participants had inferior information processing performance when pointing at virtual targets compared with physical targets when pointing laterally. On the contrary, they performed better on virtual targets than on physical targets when pointing in the top-down direction. This might be attributed to the fact that the top-down pointing movements involved less movement of the shoulder joint than the lateral pointing movements. It is also likely that the visual depth deficiency problems were less prominent in the top-down movements than in the lateral direction.

4.3. Limitations of the Study

There were limitations to this study. The first was that the participants were requested to touch the letter or symbol on both the origin and target keys to ensure successful input. This resulted in a change in the effective target size. In fact, even if the participants did not touch the mark on a key, they could still succeed in the pointing task as long as they touched the key without touching its border. Our Wave was, therefore, not the target size in the typical Fitts’s law; instead, it represented the effective target size due to our mark-touching request. This should be considered when interpreting the results of this study. The second limitation was that the virtual panel in this study was generated by the Microsoft Hololens 2 device. The results may, therefore, not generalize to virtual targets generated by other AR/VR devices.

5. Conclusions

This study confirmed that the MT of pointing tasks on virtual targets was significantly (p < 0.0001) longer than that on physical ones in real environments. Both gender and handedness significantly affected the MT of the pointing tasks. We also verified that adding a parameter for the inclined angle of the control panel, for both physical and virtual targets, could increase the fitness of the regression equation in predicting the MT of pointing tasks. A revised equation of Fitts’s law, using an adjusted index of difficulty (IDadj), was proposed. The IDadj incorporated both the inclined angle of the panel and the factor T of the tasks. The predictive MT equation proposed in this study may be adopted to estimate the MT of pointing tasks in similar AR environments. AR engineers and designers may consider the pointing behaviors of our participants in touching the keys on the virtual panel in the AR environment so as to improve their designs for a better user experience.

Our society is advancing into the world of the metaverse. However, there are many unknowns concerning human interaction with virtual objects. A 2D calculator panel, both physical and virtual, was adopted in the pointing tasks performed in this study. However, there are various 3D objects in metaverse environments. Future research may be designed to study hand gestures and movement time when in contact with 3D virtual objects for pointing and other tasks. Such research will help fill the gaps in our knowledge concerning the world of the metaverse.

Author Contributions

Conceptualization, C.Z. and K.W.L.; methodology, C.Z. and K.W.L.; validation L.P.; investigation, C.Z.; data curation, C.Z. and K.W.L.; writing—original draft preparation, K.W.L.; writing—review and editing, L.P.; visualization, L.P.; supervision, K.W.L.; funding acquisition, K.W.L. and L.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially supported by research funding from the Ministry of Science and Technology of the ROC (MOST 110-2221-E-216-004) and funding from the 2nd Batch of 2022 MOE of PRC Industry-University Collaborative Education Program (Program No. 202209SZ08, Kingfar-CES “Human Factors and Ergonomics” program).

Institutional Review Board Statement

This study was conducted according to the guidelines of the Declaration of Helsinki and was approved by an external ethics committee (Jen-Ai Hospital, 110-55).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy reasons.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

AR: augmented reality
ID: index of difficulty
IDadj: adjusted index of difficulty
IDincrease: increase in index of difficulty due to pointing on virtual targets
MT: movement time
T: tactile factor
TP: throughput
VR: virtual reality
Wave: average width of the targets

References

1. Fitts, P.M. The information capacity of the human motor system in controlling the amplitude of movement. J. Exp. Psychol. 1954, 47, 381–391.
2. Fitts, P.M.; Peterson, J.R. Information capacity of discrete motor responses. J. Exp. Psychol. 1964, 67, 103–112.
3. Jagacinski, R.J.; Monk, D.L. Fitts’ law in two dimensions with hand and head movements. J. Motor Behav. 1985, 17, 77–95.
4. Andres, R.O.; Hartung, K.J. Prediction of head movement time using Fitts’ law. Hum. Factors 1989, 31, 703–713.
5. Radwin, R.; Vanderheiden, G.C.; Lin, M.L. A method for evaluating head-controlled computer input devices using Fitts’ law. Hum. Factors 1990, 32, 423–438.
6. Hoffmann, E.R.; Chan, A.H.S.; Heung, P.T. Head rotation movement times. Hum. Factors 2017, 59, 986–994.
7. Chan, A.H.S.; Hoffmann, E.R. Effect of movement direction and sitting/standing on leg movement time. Int. J. Ind. Ergon. 2015, 47, 30–36.
8. Chan, A.H.S.; Hoffmann, E.R.; Ip, K.M.; Siu, S.C.H. Leg/foot movement times with lateral constraints. Int. J. Ind. Ergon. 2018, 67, 6–12.
9. Drury, C.G. Application of Fitts’ law to foot-pedal design. Hum. Factors 1975, 17, 368–373.
10. Springer, J.; Siebes, C. Position controlled input device for handicapped: Experimental studies with a footmouse. Int. J. Ind. Ergon. 1996, 17, 135–152.
11. Chan, A.H.S.; Ng, A.W.Y. Lateral foot-movement times in sitting and standing postures. Percept. Mot. Skills 2008, 106, 215–224.
12. Park, J.; Myung, R. Fitts’ law for angular foot movement in the foot tapping task. J. Ergon. Soc. Korea 2012, 31, 647–655.
13. Danion, F.; Duarte, M.; Grosjean, M. Fitts’ law in human standing: The effect of scaling. Neurosci. Lett. 1999, 277, 131–133.
14. Hoffmann, E.R.; Chan, A.H.S. Movement of loads with trunk rotation. Ergonomics 2014, 58, 1547–1556.
15. Kerr, R. Diving, adaptation, and Fitts’ law. J. Motor Behav. 1978, 10, 255–260.
16. Hancock, P.A.; Milner, E.K. Task performance under water: An evaluation of manual dexterity efficiency in the open ocean underwater environment. Appl. Ergon. 1986, 17, 143–147.
17. Hoffmann, E.R.; Chan, A.H.S. Underwater movement times with ongoing visual control. Ergonomics 2012, 55, 1513–1523.
18. Drews, F.A.; Zadra, J.R.; Gleed, J. Electronic health record on the go: Device form factor and Fitts’ law. Int. J. Med. Inform. 2018, 111, 37–44.
19. Melo, F.; Conde, M.; Godinho, C.; Domingos, J.; Sanchez, C. Hand motor slowness in Parkinson disease patients performing Fitts’ task. Ann. Med. 2019, 51, 49.
20. Welford, A.T. The measurement of sensory-motor performance: Survey and reappraisal of twelve years’ progress. Ergonomics 1960, 3, 189–230.
21. Mackenzie, I.S. A note on the information-theoretic basis for Fitts’ law. J. Motor Behav. 1989, 21, 323–330.
22. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423.
23. Gan, K.C.; Hoffmann, E.R. Geometrical conditions for ballistic and visually controlled movements. Ergonomics 1988, 31, 829–839.
24. Hoffmann, E.R. Effective target tolerance in an inverted Fitts’ task. Ergonomics 1995, 38, 828–836.
25. Murata, A.; Iwase, H. Extending Fitts’ law to a three-dimensional pointing task. Hum. Mov. Sci. 2001, 20, 791–805.
26. Cha, Y.; Myung, R. Extended Fitts’ law for 3D pointing tasks using 3D target arrangements. Int. J. Ind. Ergon. 2013, 43, 350–355.
27. Baird, K.M.; Hoffmann, E.; Drury, C.G. The effects of probe length on Fitts’ law. Appl. Ergon. 2002, 33, 9–14.
28. Guttentag, D.A. Virtual reality: Applications and implications for tourism. Tour. Manag. 2010, 31, 637–651.
29. Carmigniani, J.; Furht, B.; Anisetti, M.; Ceravolo, P.; Damiani, E.; Ivkovic, M. Augmented reality technologies, systems, and applications. Multimed. Tools Appl. 2011, 51, 341–377.
30. Drascic, D.; Milgram, P. Perceptual issues in augmented reality. In Proceedings of the Stereoscopic Displays and Virtual Reality Systems III, San Jose, CA, USA, 10 April 1996; Volume 2653, pp. 123–134.
31. Wu, H.K.; Lee, S.; Chang, H.Y.; Liang, J.C. Current status, opportunities and challenges of augmented reality in education. Comput. Educ. 2013, 62, 41–49.
32. Ibáñez, M.B.; Kloos, C.D. Augmented reality for STEM learning: A systematic review. Comput. Educ. 2018, 123, 109–123.
33. Terhoeven, J.; Schiefelbein, F.P.; Wischniewski, S. User expectations on smart glasses as work assistance in electronics manufacturing. Procedia CIRP 2018, 72, 1028–1032.
34. Cardoso, L.F.S.; Mariano, F.C.M.Q.; Zorzal, E.R. A survey of industrial augmented reality. Comput. Ind. Eng. 2020, 139, 106159.
35. Milgram, P.; Kishino, F. A taxonomy of mixed reality visual displays. IEICE Trans. Inf. Syst. 1994, 77, 1321–1329.
36. Wang, X.; Ong, S.K.; Nee, A.Y.C. Multi-modal augmented-reality assembly guidance based on bare-hand interface. Adv. Eng. Inform. 2016, 30, 406–421.
37. Gattullo, M.; Laviola, E.; Evangelista, A.; Fiorentino, M.; Uva, A.E. Towards the evaluation of augmented reality in the metaverse: Information presentation modes. Appl. Sci. 2022, 12, 12600.
38. Lubos, P.; Bruder, G.; Steinicke, F. Analysis of direct selection in head-mounted display environments. In Proceedings of the IEEE Symposium on 3D User Interfaces, Minneapolis, MN, USA, 29–30 March 2014; pp. 11–18.
39. Schwind, V.; Leusmann, J.; Henze, N. Understanding visual-haptic integration of avatar hands using a Fitts’ law task in virtual reality. In Proceedings of Mensch und Computer 2019, Hamburg, Germany, 8–11 September 2019; pp. 211–222.
40. Triantafyllidis, E.; Li, Z. The challenges in modeling human performance in 3D space with Fitts’ law. In Proceedings of the Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, Yokohama, Japan, 8–13 May 2021; pp. 1–9.
41. Lin, C.J.; Woldegiorgis, B.H. Interaction and visual performance in stereoscopic displays: A review. J. Soc. Inf. Disp. 2015, 23, 319–332.
42. Swan, J.E.; Singh, G.; Ellis, S.R. Matching and reaching depth judgments with real and augmented reality targets. IEEE Trans. Vis. Comput. Graph. 2015, 21, 1289–1298.
43. Lin, C.J.; Woldegiorgis, B.H. Egocentric distance perception and performance of direct pointing in stereoscopic displays. Appl. Ergon. 2017, 64, 66–74.
44. Batmaz, A.U.; Machuca, M.D.B.; Pham, D.M.; Stuerzlinger, W. Do head-mounted display stereo deficiencies affect 3D pointing tasks in AR and VR? In Proceedings of the IEEE Conference on Virtual Reality and 3D User Interfaces, Osaka, Japan, 23–27 March 2019; pp. 585–592.
45. Machuca, M.D.B.; Stuerzlinger, W. The effect of stereo display deficiencies on virtual hand pointing. In Proceedings of the CHI Conference on Human Factors in Computing Systems, Glasgow, UK, 4–9 May 2019; pp. 1–14.
46. Ha, T.; Woo, W. An empirical evaluation of virtual hand techniques for 3D object manipulation in a tangible augmented reality environment. In Proceedings of the 2010 IEEE Symposium on 3D User Interfaces, Waltham, MA, USA, 20–21 March 2010; pp. 91–98.
47. Deng, C.L.; Geng, P.; Hu, Y.F.; Kuai, S.G. Beyond Fitts’ law: A three-phase model predicts movement time to position an object in an immersive 3D virtual environment. Hum. Factors 2019, 61, 879–894.
48. Clark, L.D.; Bhagat, A.B.; Riggs, S.L. Extending Fitts’ law in three-dimensional virtual environments with current low-cost virtual reality technology. Int. J. Hum.-Comput. Stud. 2020, 139, 102413.
49. Barcali, E.; Iadanza, E.; Manetti, L.; Francia, P.; Nardi, C.; Bocchi, L. Augmented reality in surgery: A scoping review. Appl. Sci. 2022, 12, 6890.
50. Crossman, E.R.F.W.; Goodeve, P.J. Feedback control of hand movement and Fitts’ law. Q. J. Exp. Psychol. 1983, 35A, 251–278.
51. El Barhoumi, N.; Hajji, R.; Bouali, Z.; Ben Brahim, Y.; Kharroubi, A. Assessment of 3D models placement methods in augmented reality. Appl. Sci. 2022, 12, 10620.
52. Card, S.K.; English, W.K.; Burr, B.J. Evaluation of mouse, rate-controlled isometric joystick, step keys, and text keys for text selection on a CRT. Ergonomics 1978, 21, 601–613.
53. Langolf, G.D.; Chaffin, D.B.; Foulke, J.A. An investigation of Fitts’ law using a wide range of movement amplitudes. J. Motor Behav. 1976, 8, 113–128.
54. Mackenzie, I.S. Fitts’ throughput and the remarkable case of touch-based target selection. In Human-Computer Interaction: Interaction Technologies, Lecture Notes in Computer Science, Proceedings of HCI 2015, Los Angeles, CA, USA, 2–7 August 2015; Kurosu, M., Ed.; Springer: Berlin/Heidelberg, Germany, 2015; Volume 9170.
Figure 1. Virtual calculator panel.
Figure 2. Calculator panel drawn on a plastic board for the physical pointing tasks: (a) 90°, (b) 30°.
Figure 3. Pointing task on virtual keys wearing the Hololens device.
Figure 4. A virtual ring indicating the location of the fingertip in front of a virtual panel: (a) approaching the key “ln”; (b) almost touching.
Figure 5. MT of female (a) and male (b) participants performing the horizontal pointing task.
Figure 6. MT of female (a) and male (b) participants performing the vertical pointing task.
Table 1. Size of the keys on the keypad for both real and virtual conditions.

Key size    Horizontal movement      Vertical movement
            Width     Height         Width     Height
Large       76        14             37        37
Medium      56        14             37        25
Small       37        14             37        14

Note: Size for all the keys except the “0” key; unit: mm.
Table 2. Key size, distance, and index of difficulty (ID) of the pointing tasks.

Key size    Origin-target key    Wave (mm)    Distance (mm)    ID
Horizontal
Large       C-ln                 3.5          156              6.5
Large       C-(                  2.5          312              8
Large       C-7th key            4.5          468              7.7
Medium      C-ln                 3.5          116              5.1
Medium      C-(                  2.5          232              7.5
Medium      C-7th key            4.5          348              7.3
Small       C-ln                 3.5          78               5.5
Small       C-(                  2.5          156              7
Small       C-7th key            4.5          234              6.7
Vertical
Large       ln-sinh              4            39               4.3
Large       ln-tanh              4            117              5.9
Large       ln-0 × 16            4            195              6.6
Medium      ln-sinh              4            27               3.8
Medium      ln-tanh              4            81               5.3
Medium      ln-0 × 16            4            135              6.1
Small       ln-sinh              4            16               3
Small       ln-tanh              4            48               4.6
Small       ln-0 × 16            4            80               5.3

Note: “ln”, “(”, and the 7th key (a symbol key) were the 3rd, 5th, and 7th keys for horizontal pointing; “sinh”, “tanh”, and “0 × 16” were the 2nd, 4th, and 6th keys for vertical pointing.
Table 3. Results of regression modeling of the MT.

Gender    Hand    a         b        c         d          R2
Horizontal movement
Female    D       633.8     134.4    118.7 *   −970.1     0.76
Female    ND      660.6     144.1    109.8 *   −1025.9    0.73
Male      D       365.7     141.6    227.3     −846.8     0.75
Male      ND      454.8     143.4    217.7     −910.1     0.77
Vertical movement
Female    D       1028.6    66.8     68.0 †    −855.6     0.74
Female    ND      1051.4    74.5     85.7 †    −908.5     0.73
Male      D       878       62.8     133.5     −755.2     0.79
Male      ND      924.3     67.6     143.8     −803.4     0.78

Note: D: dominant; ND: nondominant. All the regression coefficients, except those marked with * and †, were significant at p < 0.001 for a two-tailed t test of μ = 0. * p < 0.05; † p > 0.05.
Table 4. Coefficients of IDadj and 1/b.

Gender    Hand    c′      d′        1/b (bit/s)
Horizontal movement
Female    D       0.88    −7.22     7.4
Female    ND      0.76    −7.12     6.9
Male      D       1.61    −5.98     7.1
Male      ND      1.52    −6.35     7
Vertical movement
Female    D       1.02    −12.81    15
Female    ND      1.15    −12.19    13.4
Male      D       2.13    −12.03    15.9
Male      ND      2.13    −11.88    14.8
Table 5. Throughput (bit/s) for the pointing tasks.

Movement direction    Gender    Hand    Physical pointing    Virtual pointing
Horizontal            Female    D       11.75                8.94
Horizontal            Female    ND      11.15                8.84
Horizontal            Male      D       12.54                9.76
Horizontal            Male      ND      11.9                 9.4
Vertical              Female    D       11.6                 13.17
Vertical              Female    ND      11.29                12.22
Vertical              Male      D       12.55                14.98
Vertical              Male      ND      11.9                 14.09

D: dominant hand; ND: nondominant hand.
