1. Introduction
Virtual reality (VR) technology provides an experience similar to reality to a user existing in a virtual environment created by a computer [1,2]. Through the advancement of head-mounted displays (HMDs) such as the Oculus Rift CV1/Go and HTC Vive, VR technology provides a more deeply immersive experience environment by combining an HMD with hardware systems such as treadmills and VR gloves. Furthermore, application studies have been conducted from various perspectives on user interfaces and haptic feedback in immersive VR to interact with the virtual environment directly and control objects realistically [3,4,5,6].
An objective of immersive VR is to provide a user with a realistic experience based on the five senses regarding which actions are performed, with whom, and where. Accordingly, studies were conducted on the following aspects: Displays that deliver stereoscopic information; the auditory sense, using volumetric audio sources; and the tactile sense, felt through a haptic system that feeds a physical reaction directly back to the human body (hand, foot, etc.) [7,8,9]. To increase the presence in a VR by inducing high user immersion, a realistic interaction between reality and the VR is required. To design this, the following processes should be performed: Detecting the movements of the joints of a human body existing in the real space; analyzing the intention of the movement; and reflecting it in the virtual environment [5,10]. Consequently, studies [11,12,13] have been conducted to accurately map actual hand movements to the movements of a virtual hand model through markers (surface or optical). Accordingly, studies were conducted to develop a method of calculating a hand model precisely through a spherical-mesh tracking model [14], and a method of realistically expressing the actions of a person, including hand gestures, facial expressions, and whole-body movements, in VR by using motion capture data [5]. Moreover, the following studies were conducted to facilitate infinite walking in a limited space concerning the movements of the user: A study on smooth assembled mapping that calculates natural-feeling walking with an isometric distortion [15], and a study that proposed a representation method of natural walking in the process of multiple VR users sharing spatial information [6]. Lee et al. [16] represented movements in VR with a straightforward method by designing a portable walking simulator based on the user's walking-in-place, a more universal approach than those of conventional studies.
For most immersive VRs, algorithms, systems, user interfaces, and interactions are designed and studied by focusing on the presence felt by VR users. Typical examples include: Precisely controlling the force applied to the fingertip by using wearable haptic devices such as 3-RSR (revolute–spherical–revolute) [17] or 3-DoF [18] devices; and a user-friendly haptic system [8] that can be easily carried around while accurately mapping the user's walking or actions in a virtual environment through a motion capture device. However, the roles desired by the users existing in the VR and the levels of their participation may vary (some may want only to observe, while others may want limited or close participation as part of the experience). Therefore, rather than dwelling on immersion in the hardware aspect, studies on immersive VR should consider various aspects such as roles and communication through social interaction.
With respect to virtual collaboration environments, interaction methods and technologies have been proposed whereby many users existing in a virtual environment can collaborate through communication in a given condition and environment [19,20,21,22]. For the virtual collaboration environments of immersive VRs, systems are designed to create immersive environments where multiple users wearing HMDs can experience immersion together. Related studies proposed interactions that multiple VR users can use to work on a group task together or communicate in an immersive environment; furthermore, they proposed, as a more advanced type, an asymmetric VR that allows non-immersive users (usually PC users) to participate together. ShareVR and RoleVR, proposed by Gugenheimer et al. [23] and Lee et al. [24], respectively, are representative studies on interactions in an asymmetric virtual environment. They proposed asymmetric interactions to provide experiences that satisfy all the HMD and non-HMD users located in the same space, in addition to an improved presence in an immersive environment. However, conventional studies have focused on VR users wearing immersion devices such as HMDs and are limited in that non-HMD users can participate only in a restricted manner, as assistants or spectators. Moreover, such users cannot act as independent entities and must share an avatar. Consequently, non-HMD users remain dependent on HMD users for receiving improved presence and various experiences in a virtual environment.
This study proposes an asymmetric interface that provides non-HMD users with convenient, efficient, and immersive interactions with HMD users in an asymmetric VR. The key aspect is to provide high immersion while allowing any person to interact with others conveniently, and to provide non-HMD users with a user-optimized experience and improved presence as independent participants rather than as assistants to HMD users. Accordingly, a decision-making structure is designed to remind users continuously that they are sharing their experiences and thoughts with one another. The main contributions of the proposed asymmetric interface are:
A controller-based hand interface is designed to maximize the immersion of every user in VR while minimizing user burden. A real-hand-to-controller mapping method is developed in such a way that a user can experience interactions with other users or with the virtual environment by directly using a hand.
A three-step decision-making structure (object, status, and message) is designed to allow users to share their thoughts, behavior, and emotions in an asymmetric environment through an efficient and intuitive structure.
It is systematically verified that providing a user-optimized interface in an asymmetric VR can provide even non-HMD users with improved presence similar to that provided to HMD users, as well as various experiences in different environments (immersive and non-immersive).
3. Asymmetric Interface for Immersive Interaction
This study proposes an asymmetric interaction to provide an experience environment where HMD users and non-HMD users can have diverse experiences with high presence in asymmetric VRs. The proposed asymmetric VR is based on an experiential environment comprising asymmetric interfaces for both HMD and non-HMD users; however, the interaction and interfaces are designed with extensibility to a large number of users in mind. The experience space between users considers both co-located and remote users. Nevertheless, we aim to design interfaces that can effectively share statuses, thoughts, and behavior, focusing on remote users who cannot communicate directly.
The proposed asymmetric interface includes the following three parts: A controller-based hand interface, which can be used conveniently at a low cost for a highly immersive interaction; a three-step decision-making structure whereby users can share experiences by efficiently and accurately exchanging information and thoughts between themselves; and an interface consistent with the experience environment of the user. In this study, an Oculus Rift CV1 HMD and its dedicated controller are used for the basic experience environment, and an integrated development environment is built on the Unity 3D engine. The Oculus integration package for the Unity 3D engine is imported, and the HMD camera is controlled using the prefabs (OVRCameraRig, OVRPlayerController, etc.) provided in the package. Furthermore, the Touch controller used by HMD and non-HMD users for interaction is controlled through OVRInput. In addition, the UI functions of the Unity 3D engine are used for the texts, icons, etc., in the communication process.
3.1. Controller-Based Hand Interface
Users can directly use their hands to interact with a virtual environment or objects in an immersive VR. The proposed asymmetric interface also provides an interaction environment where all users (HMD and non-HMD) can use their hands. Previous studies by Han et al. [3] and Jeong et al. [50] also confirmed that interactions using hands in immersive VRs provide higher presence compared with a gaze interface as well as a keyboard and gamepad. Kim et al. [8] added a portable hand haptic system to maximize the presence in interactions using hands. Notably, existing studies emphasize that hands should be expressed more accurately and directly in a virtual environment and that an easier and more convenient interface is required. However, most studies recommend the use of a Leap Motion device in addition to an HMD, and some even require a haptic device. This study proposes a VR controller-based hand interface to provide a more accessible and easier-to-use interaction environment while maintaining the highly immersive input method of using hands.
Three basic motions are defined to map the VR controller naturally with hand gestures according to the input settings of the controller. The key aspect is that the hand motions required to press a button or a trigger while holding the controller must correspond closely with the gestures of the virtual hand to maintain a high degree of immersion.
Figure 1 shows the process of mapping the Oculus Touch, a dedicated controller, to a three-dimensional virtual hand model that contains joint information in the Unity 3D development environment. The key inputs are set, and the left and right hands are operated separately to map the defined basic motions (grab, point, and open) naturally to the controller. Algorithm 1 summarizes the proposed controller-based hand interface.
Algorithm 1 Design of controller-based hand interface.
1: Btn_Grab ← click state of the controller's index trigger button
2: Btn_Point ← click state of the controller's middle trigger button
3: procedure CONTROLLER TO HAND INPUT PROCESS(Btn_Grab, Btn_Point)
4:   grabbing ← check the state of the grabbing hand
5:   pointing ← check the state of the pointing gesture
6:   opening ← check the state of the open hand
7:   if Btn_Grab == true then
8:     if grabbing == false then
9:       search for an object to grab near the hand
10:      after a collision test, grab it with the hand
11:      store it in the grab-object list
12:      grabbing = true
13:    end if
14:  else if Btn_Point == true then
15:    set the hand as a pointing gesture
16:    pointing = true
17:  else
18:    if grabbing == true then
19:      calculate the angular velocity of the object to drop
20:      drop the object grabbed by the hand
21:    end if
22:  end if
23: end procedure
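To make the control flow of Algorithm 1 concrete, the following is a minimal, runnable Python sketch of the same per-frame input process. The HandInterface class, its method names, and the simplified object search are illustrative assumptions, not the actual Unity/C# implementation.

```python
# A runnable sketch of Algorithm 1 (controller-based hand interface).
class HandInterface:
    def __init__(self):
        self.grabbing = False   # state of the grabbing hand
        self.pointing = False   # state of the pointing gesture
        self.grab_list = []     # objects currently held by the hand

    def process(self, btn_grab, btn_point, nearby_objects):
        """Map the controller trigger states to hand gestures each frame."""
        if btn_grab:
            if not self.grabbing:
                # Search for an object near the hand and, after a
                # (here trivialized) collision test, attach it.
                if nearby_objects:
                    self.grab_list.append(nearby_objects[0])
                    self.grabbing = True
        elif btn_point:
            self.pointing = True   # set the hand pose to a pointing gesture
        else:
            self.pointing = False
            if self.grabbing:
                # In the engine, the angular velocity of the dropped
                # object would be computed here before release.
                self.grab_list.pop()
                self.grabbing = False

hand = HandInterface()
hand.process(btn_grab=True, btn_point=False, nearby_objects=["cube"])
print(hand.grabbing, hand.grab_list)  # True ['cube']
```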
3.2. Three-Step Communication Process for User-to-User Interaction
For effective communication between users, we consider the following six factors: Listening, verbal communication, non-verbal communication, emotional awareness, written communication, and communicating in difficult situations. This study designs a communication process to present an environment where users can share their statuses, emotions, etc., through faster and more intuitive communication (both verbal and non-verbal) in a remote experience space, excluding the listening factor. To optimize communication, a three-step structure is designed that considers both a non-verbal method using objects and icons and a verbal method based on text, as ways to express the current situation and status as directly as possible. By extending the current three-step process, we aim to design a process with an open structure that converts voices into icons and messages.
In an asymmetric VR, both HMD and non-HMD users interact with the virtual environment or objects using their hands. We propose a communication process that can increase immersion in VR by allowing users to share the differences in experience caused by the system, role, and behavior. A three-step process structure is designed for intuitive and accurate communication between users. This communication process can quickly and accurately deliver objects, statuses, and messages to other users in an intuitive structure under the assumption that the sense of space in VR is used to the maximum. Kim et al. [4] demonstrated that setting different dimensions (2D or 3D) in the process of selecting objects or exchanging information affected the understanding of information and immersion. Therefore, this study also defines an interface for communication by setting a dimension optimized to the type of information to be delivered and the method of communication.
Figure 2 shows the proposed flow of interaction between users. Presence is improved through collaboration between asymmetric users sharing experiences (thoughts, behavior, states, etc.) based on the differences in their independent VR environments.
The communication process is composed of three steps: Object, status, and message. The first step involves gripping various objects with the hand and delivering them to another user in the VR space. The second step involves communicating one's status, emotion, etc.; similar to social networking services (SNS), this step uses emoticons, which are often used to communicate messages directly on the Web. Finally, the message function involves accurately communicating requirements or core contents. Messages are not typed directly; instead, they are auto-completed based on the two preceding functions, object and status.
Figure 3 outlines the communication process of the three-step structure. HMD and non-HMD user interfaces transfer information (environment and space, object, etc.) through objects and express their current status and emotions. In addition, they have a communication structure to send and receive simple messages by combining them.
3.3. Development of User-Optimized Interface
In an asymmetric VR, there are differences in the methods and process by which HMD and non-HMD users can participate. In other words, a user-optimized interface should be provided by analyzing the system and environment factors under the assumption of basic interactions using the hand.
Figure 4 outlines the optimized roles and interaction methods based on an analysis of the differences in the experience environments between the users in an asymmetric VR. For HMD users to experience presence through high immersion as participants in VR, an environment for direct interaction must be provided in the virtual environment through an intuitive structure. Non-HMD users are limited in that they start from low presence owing to the non-immersive environment. However, they can overcome this limitation if various roles (manager, assistant, and participant) and various viewpoints (first-person and third-person) are provided, as they can judge the situation as a whole by viewing the scene more broadly than HMD users do (limited view).
3.3.1. Direct Interaction for HMD User Interface
HMD users are provided with a high sense of space through stereoscopic visual information. Therefore, we design an immersive interaction that can communicate more directly with a virtual environment, objects, and other users in it. Thus, a graphical environment is implemented so that HMD users can interact directly with objects or environments using two hands and draw statuses and messages based on the local coordinates in the 3D space.
In an immersive VR, the HMD user interface provides a strong sense of space through first-person stereoscopic video that is rendered by a binocular camera and transmitted to the HMD. Therefore, an experience environment that allows users to perceive the virtual environment directly and control various objects in it intuitively is required.
Figure 5 shows the proposed experience environment for HMD users, which enables users to perform various behaviors and control objects directly through hand motions in a virtual environment with the controller-based hand interface.
In the case of a communication process for HMD users, the user-to-user interactions are also designed around the hand because it is possible to interact with the environment and objects directly using the hand in VR.
Figure 6 shows the result. First, users can deliver objects to other users by directly holding and throwing them with their hand. When users click a three-dimensional icon, which is activated around the hand as they hit a defined key with their index finger, an icon is generated above their head, which communicates the status. Finally, when they select the object and status, the message is auto-completed and delivered. HMD users enter the values in a 3D space in all three steps. However, it is important that the non-HMD user interface receiving the information recognizes the delivered information quickly and accurately with a high degree of immersion. Therefore, it is implemented in such a manner that objects are delivered in 3D with their 3D shapes, statuses are delivered via 2D emoticons, which can be understood quickly and accurately [4], and messages are converted to 1D text, which is highly legible.
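As a compact illustration of this dimension policy, the following hypothetical Python sketch maps each communication step to the form in which it is presented to the receiving user; the names and labels are ours, not part of the implementation.

```python
# Hypothetical sketch of the dimension policy for delivered information.
RENDER_POLICY = {
    "object": "3D model",     # delivered with its 3D shape
    "status": "2D emoticon",  # read quickly and accurately [4]
    "message": "1D text",     # highest legibility
}

def render_for_receiver(kind, payload):
    """Describe how a received communication item is presented."""
    return f"{payload} presented as a {RENDER_POLICY[kind]}"

print(render_for_receiver("status", "happy"))  # happy presented as a 2D emoticon
```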
The auto-completion of messages predefines sentence templates and completes a sentence by finding and substituting a template that matches the object and status selected by the user. As shown in Figure 7, the dialogue sentences often used in conversations or required in the application to be produced are listed abstractly in advance. The sentences are expressed by a combination of object (O) and status (S), and template sentences are predefined by considering various combinations such as object and object, object and status, and status and object. Subsequently, when the user selects (inputs) an object and status, a sentence is completed based on the selected information, and candidates are listed, among which the user can select the most suitable sentence (auto-completion). The message auto-completion feature is provided to non-HMD users as well.
Equation (1) represents the process of finding the appropriate message according to the input, based on the predefined sentence templates. First, an integer value is mapped according to the information selected by the user (object (O): 1, status (S): −1). Then, the function finds only the sentences whose templates (T) match the sum of all the input parameters (I); that is, only the messages whose template exactly matches the combination of the input values are found, and the selected information is substituted and displayed to the user. Here, n is the number of user-selected combinations. The order in which the messages are arranged is determined by raising the weights of the sentences that are frequently selected by the user.
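Since Equation (1) itself is not reproduced here, the following is a minimal Python sketch of the matching rule under our reading of it: each selection is encoded as an integer (object: 1, status: −1), and a candidate message is offered only when its template matches the user's selections. The sketch matches the ordered pattern of encoded values rather than only their sum, which realizes the "exact combination" requirement; all names and example templates are hypothetical.

```python
# Minimal sketch of template-based message auto-completion.
OBJECT, STATUS = 1, -1  # integer values mapped to user selections

# Each template stores the slot pattern it expects and a format string.
TEMPLATES = [
    ((OBJECT, STATUS), "Please bring the {0}; I feel {1}."),
    ((OBJECT, OBJECT), "Put the {0} next to the {1}."),
    ((STATUS, OBJECT), "I feel {0} about the {1}."),
]

def complete_messages(inputs):
    """Return candidate sentences whose template pattern exactly
    matches the user's selected (kind, label) inputs."""
    pattern = tuple(kind for kind, _ in inputs)
    labels = [label for _, label in inputs]
    return [text.format(*labels)
            for slots, text in TEMPLATES if slots == pattern]

# Example: the user selects an object, then a status.
print(complete_messages([(OBJECT, "key"), (STATUS, "anxious")]))
# ['Please bring the key; I feel anxious.']
```

In a full implementation, the returned candidates would additionally be sorted by a per-sentence weight that grows with how often the user selects each sentence, as described above.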
3.3.2. Multi-Viewpoint Interaction for non-HMD User Interface
Unlike HMD user interface, non-HMD user interface is present in a non-immersive experiential environment that receives visual information through a flat display such as a monitor. Therefore, to utilize this limitation as a potential advantage, we employ non-HMD users in various roles such as participant, manager, and creator, and design the supporting viewpoint, interaction, and communication. The aim of this study is to extend beyond the general presence of “being there” to the presence of “being there together” by presenting an experiential environment where non-HMD users can experience the thoughts and experiences of HMD users. This enhanced presence of non-HMD users overcomes the limitation of a non-immersive experiential environment for non-HMD users by allowing them to participate in an experiential environment as subjects just like HMD users, rather than simply acting as assistants to HMD users. Thus, we design an interface structure for non-HMD users that provides multiple viewpoints freely switching between the first- and third-person and enables interaction and communication by directly utilizing the hands just like that for HMD users.
For users to experience the presence of VR in a non-immersive VR, 3D visual information is provided through displays such as a 3D monitor or a cave automatic virtual environment [51]. However, these displays are different types of devices that replace the HMD and impose a considerable burden (economic and spatial) when used as displays for non-HMD users in an asymmetric VR. Therefore, the key objective of this study is to propose presence for non-HMD users as high as or higher than that for HMD users, together with an interaction that can provide new experiences for HMD users, under the assumption that non-HMD users participate in a general non-immersive environment using a PC.
From the viewpoint of the non-HMD user interface, there is the advantage of observing and exploring scenes from more diverse viewpoints than the HMD user interface does. By reflecting this advantage in the interface, not only diverse roles (manager, assistant, and participant), but also diverse viewpoints can be provided. When the same first-person viewpoint as that of an HMD user interface is provided to a non-HMD user, they can participate and behave together in the virtual space as participants. Sometimes, they can play the role of an assistant to HMD users by operating a camera and judging the situation quickly. If a function for selecting a scene from a third-person viewpoint is provided, they can additionally design their roles as assistants. Considering these aspects,
Figure 8 shows a structure where non-HMD users can experience a non-immersive environment through a controller-based hand interface.
For the first-person non-HMD user interface, an interface is designed whereby they can interact as participants or as assistants to HMD users in VR. However, unlike for the HMD user interface, there is no sensor to track the 3D pose of the VR controller; hence, the camera is fixed at an appropriate position where the hand can plausibly be located, with the user judging the hand position by eye. Considering the limited environment in which the hands of the non-HMD user are not free, a function for selecting and controlling objects through the index finger is additionally provided.
Figure 9 shows the process of expanding the controller-based hand interface and selecting objects from the first-person viewpoint of a non-HMD user interface.
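The index-finger selection mentioned above can be realized with a simple ray cast from the fingertip. The following Python sketch illustrates one plausible implementation against sphere-shaped pickable objects; the scene representation and function names are our assumptions, not the paper's code.

```python
# Sketch of selecting an object with a ray cast from the index fingertip.
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def pick(fingertip, direction, spheres):
    """Return the name of the nearest sphere (center, radius, name) hit by the ray."""
    d = normalize(direction)
    best, best_t = None, float("inf")
    for center, radius, name in spheres:
        oc = tuple(c - f for c, f in zip(center, fingertip))
        t = sum(a * b for a, b in zip(oc, d))          # distance to closest approach
        closest = tuple(f + t * a for f, a in zip(fingertip, d))
        miss2 = sum((c - p) ** 2 for c, p in zip(center, closest))
        if t > 0 and miss2 <= radius ** 2 and t < best_t:
            best, best_t = name, t
    return best

scene = [((0, 0, 5), 0.5, "button"), ((2, 0, 5), 0.5, "lever")]
print(pick((0, 0, 0), (0, 0, 1), scene))  # 'button'
```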
In addition, a multi-viewpoint function that allows free conversion between the first-person and third-person observer viewpoints is designed for non-HMD users, and a manager role or different types of assistant roles are provided together. Thus, it is possible to judge every situation by perceiving the VR scenes, including the behavior and motions of HMD users, as a whole, and to communicate the situation to HMD users through direct behavior; it is also possible to affect the flow of the VR scenes. Kim et al. [52] proposed a third-person immersive interaction based on the god-like interaction suggested by Stafford et al. [53]. Furthermore, they verified that an interface optimized for the third-person viewpoint can provide a degree of presence as high as that of the first-person viewpoint, together with experiences optimized for the third-person viewpoint. This study designs multi-viewpoint interaction under the assumption that a third-person interface provides non-HMD users with new experiences in asymmetric VR. Furthermore, the effects of the multi-viewpoint interface on presence and experience are systematically analyzed through a survey.
When a controller-based hand interface is applied to the third-person viewpoint, it can include more functions than the three motions defined in Algorithm 1. Therefore, functions related to camera control (camera movement, view volume setting, viewpoint conversion, etc.) need to be applied in addition to the basic functions. Algorithm 2 outlines the functions added for third-person non-HMD user interface based on Algorithm 1.
Figure 10 shows the results of free camera control from the third-person viewpoint through the algorithm.
The communication process is designed separately for non-HMD users because first- and third-person multi-viewpoints are provided. For the first-person viewpoint, the basic process is the same as that of the HMD user interface. Figure 11 shows the communication process for the first-person non-HMD user interface. While the functions for delivering objects, statuses, and messages are the same as those of the HMD user interface, a method of selecting through a ray cast from the index finger is added.
Algorithm 2 Input processing for third-person viewpoint camera control of the non-HMD user interface.
1: procedure 3RD-PERSON VIEWPOINT CAMERA CONTROL PROCESS
2:   if grabbing == false then
3:     if pointing == true then
4:       if the current hand is the left hand then
5:         calculate the moving direction vector by subtracting the current position from the index fingertip position
6:         move the camera in the calculated vector direction
7:       else
8:         create a ray in the forward direction of the fingertip
9:         calculate the collision of the ray with the predefined layers
10:        if the B button on the controller is clicked then
11:          activate the first-person non-HMD viewpoint at the collision coordinates
12:        end if
13:      end if
14:    else if opening == true then
15:      if the current hand is the left hand then
16:        if the Y button on the controller is clicked then
17:          zoom in the camera
18:        else if the X button on the controller is clicked then
19:          zoom out the camera
20:        end if
21:      end if
22:    end if
23:  end if
24: end procedure
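For concreteness, a runnable Python sketch of Algorithm 2's dispatch logic is given below. The camera and hand dictionaries, the button names, and the zoom factors are illustrative stand-ins for the engine objects, not the actual implementation.

```python
# Sketch of third-person camera control for the non-HMD user (Algorithm 2).
def control_camera(cam, hand, buttons, fingertip, palm):
    """Dispatch camera actions from the current hand state each frame."""
    if hand["grabbing"]:
        return
    if hand["pointing"]:
        if hand["side"] == "left":
            # Move along the vector from the palm to the index fingertip.
            move = tuple(f - p for f, p in zip(fingertip, palm))
            cam["pos"] = tuple(c + m for c, m in zip(cam["pos"], move))
        elif buttons.get("B"):
            # A ray from the fingertip would pick the point where the
            # first-person view is activated (collision test omitted).
            cam["first_person_at"] = fingertip
    elif hand["opening"] and hand["side"] == "left":
        if buttons.get("Y"):
            cam["zoom"] *= 1.1   # zoom in
        elif buttons.get("X"):
            cam["zoom"] /= 1.1   # zoom out

cam = {"pos": (0.0, 10.0, 0.0), "zoom": 1.0}
hand = {"side": "left", "grabbing": False, "pointing": True, "opening": False}
control_camera(cam, hand, {}, fingertip=(0.2, 10.0, 0.3), palm=(0.0, 10.0, 0.0))
print(cam["pos"])  # (0.2, 10.0, 0.3)
```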
In the case of third-person non-HMD users, the scene is observed from the top view, and a flatter image is provided compared with the first-person viewpoint; thus, every communication process is expressed in two dimensions. However, the interface for using the hand remains the same.
Figure 12 shows the communication process from the third-person viewpoint of the non-HMD user interface. The menu (object, status, and message) is enabled and selected based on key input through the controller, and the figure also shows the step-by-step communication process. In the first menu, Objects, the selectable objects are displayed as 2D icons placed at coordinates on a circle around the right hand, and information is delivered by selecting with the tip of the right index finger. The HMD user receiving the information converts the 2D icons to matching 3D objects so that the 3D object information can be perceived realistically. The status is processed in the same manner as objects, but the status is checked with 2D emoticons for HMD users. Finally, all messages are transmitted as 1D text.
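The circular icon layout around the right hand can be computed as follows. This short Python sketch, with an assumed radius and icon count, illustrates the placement only and is not the authors' code.

```python
# Sketch of laying out selectable 2D icons on a circle around the hand.
import math

def circular_layout(hand_pos, radius, count):
    """Return 2D icon positions evenly spaced on a circle around the hand."""
    cx, cy = hand_pos
    return [(cx + radius * math.cos(2 * math.pi * i / count),
             cy + radius * math.sin(2 * math.pi * i / count))
            for i in range(count)]

for pos in circular_layout((0.0, 0.0), 1.0, 4):
    print(tuple(round(c, 2) for c in pos))
# (1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (-0.0, -1.0)
```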
5. Experimental Results and Analysis
The application developed for the experience environment and experiments of HMD and non-HMD users (both co-located and remote) based on the asymmetric interface in VR was implemented by using Unity 2017.3.1f1 (64-bit) and the Oculus SDK (ovr_unity_utilities 1.22.0). Furthermore, every user interacted with the VR environment or objects by mapping the Oculus Touch controller to his/her hands, whereas the HMD user received the video information through the Oculus Rift CV1 HMD. The PC used for the integrated development environment construction and experiments had the following specifications: Intel Core i7-6700, 16 GB RAM, and a GeForce 1080 GPU.
Figure 14 shows images, from the developed application, of users performing the given roles and actions to accomplish the objective of the application based on the interactions classified according to the characteristics of the users. In the images, the HMD user, as a participant, performed active actions based on the 3D visual information, and the non-HMD user performed various roles and actions by alternating between the third-person viewpoint (observer or assistant) and the first-person viewpoint (participant or assistant). Subsequently, in the configured experience environments, the users performed independent actions in a space (approximately 1.5 m × 1.5 m) where they could act freely by using their hands while staying at the same spot. It is possible to perform actions while standing up or sitting down. Users can participate in the VR remotely, and co-located participation is also possible. In the case of co-located HMD and non-HMD users, the non-HMD user can watch the display screen of the HMD user while a sufficiently large participation space is provided (Figure 15). In this case, as the experience of the HMD user is indirectly experienced by the non-HMD user, the presence of the non-HMD user is improved [24].
The essence of the proposed asymmetric interface is that all the users experiencing the asymmetric VR application can have new or satisfying experiences classified according to the experience environment while feeling similar presence. For a systematic analysis regarding this objective, a survey was conducted with the participants. First, the survey participants consisted of a total of 20 persons (males: 15, females: 5) between the ages of 22 and 37. Furthermore, the asymmetric interfaces, classified into HMD and non-HMD users, and their subsequent experience environments were set as the independent variables, while presence, experience, and social interaction were the dependent variables. As comparatively analyzing the presence and experience between the HMD and non-HMD user interfaces is an important issue in this study, two persons (an HMD user and a non-HMD user) were paired up, and the experiment was conducted with ten teams, i.e., 20 persons in total. Here, notably, the participants experienced both the situations of wearing and not wearing the HMD and then answered the survey questions. To design the survey experiment process specifically, this study first analyzed the experience differences of the HMD and non-HMD user interfaces in the system and environment aspects. Based on this, the interactions were defined, suitable roles and actions were provided, and each user experienced the application accordingly.
For non-HMD users, an interface to expand the breadth of roles and behavior through the multi-viewpoint was designed as a way to overcome the differences in the non-immersive experience environment. Therefore, the experience environment was designed such that the interface of a non-HMD user could be compared between a single viewpoint (third- or first-person) and the proposed multi-viewpoint. In RoleVR, proposed by Lee et al. [24], when the user participating in the asymmetric VR was a co-located user, the first-person viewpoint was replaced by sharing the monitor screen of the HMD user with the non-HMD user; the experiment showed that such an experience environment improved the presence of the non-HMD user. Considering this, the first-person viewpoint was additionally provided in this study because the experience environment of remote users was considered. Accordingly, experiments were performed to comparatively analyze the relationship between presence and role at the various viewpoints.
The first experiment involved the analysis of the presence comparison questionnaire. The interactions were designed with the objective that all the users (HMD and non-HMD) existing in the VR can feel high presence. Therefore, a survey was conducted to investigate this aspect. This study analyzed the user reactions regarding presence from various angles by using the presence questionnaire proposed by Witmer et al. [54].
Table 1 shows the results of the questionnaire values recorded by the users. The participants who experienced the application developed in-house experienced both the role of an HMD user and that of a non-HMD user; the mean values were 6.224 (SD: 0.184) and 6.191 (SD: 0.229), respectively, showing that similarly high values were recorded in the survey. Furthermore, when the normality of each experience environment was calculated through D'Agostino's K-squared test [55], the significance probability (p-value) for the HMD and non-HMD (third- and first-person) users was 0.071 and 0.408, respectively, thereby confirming that the null hypothesis could not be rejected and that normal distributions were followed. Through a comparison with the case where the non-HMD user's viewpoint was fixed, it was noted that the inability to utilize the viewpoint had a direct impact on the presence. It is evident that when a non-HMD user interface has a first-person viewpoint, the presence is high. This suggests that visual information is the highest-priority factor for increasing the presence of a user. Moreover, by providing various roles to a non-HMD user through a new viewpoint, the range of participation can be expanded, which is an important factor for increasing the presence. A study by Denisova and Cairns [56] confirmed that the first-person viewpoint, where the world is viewed through the eyes of a character, increased immersion compared with the third-person viewpoint, regardless of the preferred viewpoint of the users. However, when an interaction and experiential environment optimized for the third-person viewpoint were presented in immersive VR, comparative experiments with the first-person viewpoint confirmed that this also showed relatively satisfactory presence [52]. Therefore, this study attempted to present an experiential environment that could utilize both the first- and third-person viewpoints to provide satisfactory presence by inducing immersion in the non-immersive experiential environment of the non-HMD user interface. In addition, for the RoleVR of Lee et al. [24], sharing the screen of an HMD user with a non-HMD user was as important a factor as providing distinguishable roles to every user. Unlike RoleVR, when the asymmetric VR of remote users was considered, it was confirmed that the multi-viewpoint interaction becomes an important factor for the presence of a non-HMD user. Based on the pairwise comparison analysis by Kim and Kim [57], which presented a comparative experimental environment between presence and immersion depending on different interaction environments of VR users and demonstrated a statistically significant difference through one-way analysis of variance (ANOVA), this study also performed a statistical analysis on the presence and experience of the asymmetric interfaces for HMD and non-HMD users. When the statistical significance of presence was calculated through one-way ANOVA, it was observed that similar values were recorded and there was no significant difference between the HMD user interface and the non-HMD user interface with multi-viewpoint interaction applied. However, a statistically significant difference was observed between the fixed-viewpoint non-HMD user interface and the HMD user interface. Finally, there was a statistically significant difference between the viewpoints provided to the non-HMD user interface as well. This proves that providing an interface that considers the third-person viewpoint significantly improves the presence for non-HMD users.
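As a hedged sketch of this analysis pipeline, the following Python snippet runs the two tests named here: D'Agostino's K-squared normality test (scipy.stats.normaltest implements the D'Agostino and Pearson test) and one-way ANOVA (scipy.stats.f_oneway). The score arrays are randomly generated placeholders seeded with the reported means and SDs where available; the fixed-viewpoint parameters are purely assumed, and none of these values are the study's data.

```python
# Sketch of the normality test and one-way ANOVA used in the analysis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
hmd = rng.normal(6.224, 0.184, 20)            # placeholder HMD presence scores
non_hmd_multi = rng.normal(6.191, 0.229, 20)  # placeholder multi-viewpoint scores
non_hmd_fixed = rng.normal(5.4, 0.3, 20)      # assumed fixed-viewpoint scores

# p > 0.05 means normality cannot be rejected.
for name, scores in [("HMD", hmd), ("non-HMD", non_hmd_multi)]:
    _, p = stats.normaltest(scores)
    print(f"{name} normality p = {p:.3f}")

# One-way ANOVA across the three interface conditions.
f, p = stats.f_oneway(hmd, non_hmd_multi, non_hmd_fixed)
print(f"ANOVA: F = {f:.2f}, p = {p:.4f}")
```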
In the subsequent experiment, the user experience provided by the interaction of the asymmetric interface was systematically analyzed. To this end, this study utilized the core module, positive experience module, and social interaction module of the game experience questionnaire (GEQ) [58]. The survey participants recorded a value between 0 (not at all) and 4 (extremely) for the questions presented in the GEQ. Here, the core module consists of 33 items, and it is possible to infer various factors such as competence and tension through a combination of items. In addition to the core module, there are questionnaire items that deal with social interaction. Therefore, this study analyzed the experiences provided by the proposed asymmetric interface to users through the GEQ. First, the comprehensive impacts of the difference of roles felt by the users in the asymmetric VR, in terms of immersion and interest, were analyzed through the core module, and it was confirmed that a difference existed between the experiences felt by the non-HMD users and the HMD users (Table 2). Based on the maximum satisfaction level of the experience suggested by the GEQ, i.e., 4.0, overall satisfaction levels of 3.408 and 3.600 were recorded for the HMD and non-HMD users, respectively. Furthermore, when the normality of the experience environments was calculated, the significance probability (p-value) for the HMD and non-HMD users was 0.914 and 0.515, respectively, thereby confirming that the null hypothesis could not be rejected and that normal distributions were followed. This study attempted to supplement the interfaces, focusing on the experience of the non-HMD users in an asymmetric environment; these points are considered to have increased the overall satisfaction. The detailed reasons are analyzed as follows. It was confirmed that the roles provided to the users and their interactions yielded satisfying experiences in the asymmetric VR. In the results of dividing and comparing the specific components, as the non-HMD users performed various roles as an observer, assistant, and sometimes participant, and understood and managed the overall application, the flow and positive affect were high. However, although the concentration and immersion of the HMD users were high as they performed actions directly in the immersive environment, annoyance and negative factors were also high owing to system factors such as VR motion sickness and relatively limited roles. Through a comparison of the questionnaire values via one-way ANOVA, it was confirmed that, regarding the factors for satisfying experiences, everyone was satisfied, with no significant difference between the HMD and non-HMD user interfaces. For the detailed experience, however, different trends were exhibited depending on the user. First, with respect to the immersion, challenge, and positive affect factors, the proposed asymmetric interface was determined to provide an identical experience of participation as well as immersion. This was achieved by providing a role along with interaction optimized for the experience environment of the user. Moreover, because the proposed application contained entertainment elements, it did not trigger any special unpleasantness in the users. However, the users were satisfied with the non-HMD user interface because it provided diverse roles based on the multi-viewpoint, and the users felt competence and understood the flow of the application more intuitively. By contrast, in the survey results of the HMD user interface, the negative affect was high, with a significant difference, because of the relatively limited roles and the inconvenience arising from the blocked view. Therefore, the survey results confirmed that an interface that can overcome the experience limitations of non-HMD users and induce immersion could be designed.
Finally, through the comparison survey for social interaction, it was confirmed that, based on the three-step communication process, a high mean was recorded, showing that all the users were satisfied. When the normality of the experience environments was calculated, the significance probability (p-value) for the HMD and the non-HMD (third- and first-person) users was 0.563 and 0.776, respectively, thereby confirming that the null hypothesis could not be rejected and that normal distributions were followed. In the comparison with the case where the roles were limited through a fixed viewpoint, a large difference was observed with respect to social interaction. However, when fixed at the first-person or third-person viewpoint, as there is a difference of interaction depending on the roles, it cannot be said with certainty that one has more influence than the other. Nevertheless, it is confirmed that, by providing various roles to the non-HMD users and accordingly expanding their interactions, the social relationship with the HMD users increased; this had a large effect on sharing experiences with the HMD users, showing a positive effect on the presence.
Table 3 shows the results of the comprehensive analysis, for which statistical significance was also confirmed through one-way ANOVA. An experience environment was provided in which a non-HMD user could freely switch between the viewpoints and interact with an HMD user while performing various roles in the application; based on this, it was proven that there was no significant difference in the social relationships. However, it was confirmed that if the role in a single viewpoint is limited, similar to that of HMD users, the limitation of social interaction in the non-immersive environment of the conventional non-HMD user interface cannot be overcome.
6. Limitation and Discussion
This study designed an optimized user interface whereby enhanced presence can also be provided, through high participation and immersion, to non-HMD users who participate in a non-immersive experience environment in an asymmetric VR, although they differ from HMD users. However, this process was based on the premise that the experience environment minimizes the economic burden and the additional burden of equipment so that every user can easily access and use it. Setting this aspect aside, using a motion recognition device such as a Leap Motion or an additional haptic device can help increase the presence of a non-HMD user; this is another approach that can be considered. Therefore, in the future, it will be necessary to conduct a comparative experiment on overcoming the non-immersive environment by providing a non-HMD user with another inexpensive device in addition to the dedicated HMD controller. Furthermore, it will be necessary to provide various asymmetric VR applications and, through this, confirm that the interactions of the proposed interface provide an enhanced presence and new positive experiences to all users in the asymmetric VR through the process of analyzing the roles of a user from multiple angles. In particular, an analysis is required for the negative factors, such as VR sickness, as much as for the presence and positive experience felt by the users in the immersive VR. This study satisfies the technical requirements (number of frames per second and polygons of the rendered scene) considered in a VR application. However, because a survey experiment centering on the users was not conducted, a specialized questionnaire such as the simulator sickness questionnaire (SSQ) should be used in the future to conduct an HMD-user-oriented analysis of the VR sickness that might occur in the process of interacting with a non-HMD user, while considering the technical requirements of the asymmetric interface. Furthermore, the effect of the proposed interface should be examined in various age groups by expanding the present age groups of the participants (20s and 30s) in the experiment.
This study focuses on the interaction in an asymmetric VR composed of HMD and non-HMD user interfaces and aims to confirm whether non-HMD users can actively participate in the interaction, rather than simply being assistants, through a new role and a differentiated interface. Therefore, the current study does not deal with comparative analysis experiments designing various interfaces and various systematic and environmental factors. However, we confirmed that the interface that interacted with a virtual environment or objects by directly using the hands, based on the existing studies [3,52], provided enhanced presence through high immersion compared with the input methods of typical interactive systems utilizing gamepads and keyboards. Existing studies on asymmetric VR, including this study, mainly focused on the objective performance analysis of the proposed interfaces by using verified questionnaires (e.g., PQ, GEQ) rather than comparing with existing studies, as the experience environments and conditions presented were distinct. Therefore, in the future, we plan to conduct experiments by designing asymmetric interfaces for HMD and non-HMD users from various perspectives (hand, gaze, immersive device, etc.) and supplementing comparative analysis studies.