Article

Virtual Reality-Wireless Local Area Network: Wireless Connection-Oriented Virtual Reality Architecture for Next-Generation Virtual Reality Devices

1 School of Electrical & Electronics Engineering, Yonsei University, Seoul 03722, Korea
2 Department of Railroad Electrical and Electronic Engineering, Korea National University of Transportation, Gyeonggi 16106, Korea
* Author to whom correspondence should be addressed.
Appl. Sci. 2018, 8(1), 43; https://doi.org/10.3390/app8010043
Submission received: 15 November 2017 / Revised: 23 December 2017 / Accepted: 27 December 2017 / Published: 3 January 2018
(This article belongs to the Special Issue Wearable Wireless Devices)

Featured Application

Virtual reality headsets and controllers with wireless connection.

Abstract

In order to enhance the user experience of virtual reality (VR) devices, multi-user VR environments and wireless connections should be considered for next-generation VR devices. Wireless local area network (WLAN)-based wireless communication devices are popular consumer devices, offering high throughput at low cost using unlicensed bands. However, the use of WLANs may cause delays in packet transmission, owing to the distributed nature of their channel access. In this paper, we carefully examine the feasibility of wireless VR over WLANs, and we propose an efficient wireless multi-user VR communication architecture, as well as a communication scheme for VR. Because the proposed architecture utilizes multiple WLAN standards based on the characteristics of each set of VR traffic, the proposed scheme enables the efficient delivery of the massive uplink data generated by multiple VR devices, and provides an adequate video frame rate and control frame rate for high-quality VR services. We perform extensive simulations to corroborate the outstanding performance of the proposed scheme.

1. Introduction

Network operators and system administrators are interested in the mixture of traffic carried in their networks for several reasons. Knowledge about traffic composition is valuable for network planning, accounting, security, and traffic control. Traffic control includes packet scheduling and intelligent buffer management, which provide the quality of service (QoS) needed by applications. This requires determining to which application each packet belongs, but traditional protocol layering principles restrict the network to processing only the IP packet header.
Virtual reality (VR) devices are novel, and attractive consumer electronics that can provide an immersive VR user experience (UX) [1,2]. In order to enhance the UX of VR services, there have been significant efforts to enhance not only its video and audio quality and interaction delay, but also the convenience of VR device connections to VR consoles, or VR-capable personal computers (PCs). Therefore, consumers in the VR market require high-resolution and comfortable VR devices. In order to provide a comfortable VR service environment without the need for wires, a wireless communication scheme with low latency needs to be employed for VR devices. However, high-resolution features are not only a challenge with respect to imaging equipment, but also for wireless interfaces. Therefore, we need to find the trade-off between these two features, i.e., reliable wireless connection and high-resolution video.
Wireless local area network (WLAN) is the most popular unlicensed-band wireless communication interface, which has a low cost while achieving high data throughput. Because some categories of IEEE 802.11 are designed to replace wired video interfaces, including high-definition multimedia interface (HDMI), IEEE 802.11-based WLAN can provide very high data rates.
In order to meet the high data-rate requirement of high-resolution video transmission, the IEEE 802.11 working group extended IEEE 802.11 standards to support the 60-GHz frequency band with a wide bandwidth. IEEE 802.11ad is the amendment standard, to operate IEEE 802.11 in the 60-GHz frequency band. In particular, IEEE 802.11ad utilizes the 2.16-GHz bandwidth to achieve a high data rate. However, IEEE 802.11ad has a relatively short communication range, owing to high-frequency band operation under indoor environments [3,4]. IEEE 802.11ay is the enhanced version of IEEE 802.11ad, with the support of channel bonding and multiple spatial streams [5].
Although these 60-GHz WLAN standards can be considered as a wireless VR interface for high-resolution video transmission, future VR systems will require real-time interactive control between multiple VR users. Some applications related to gaming industries have been adopting multiuser augmented reality (AR) systems to provide enhanced gaming experiences in the living room [6]. Because VR consumers have already experienced these multi-user AR systems, multi-user VR also needs to be provided to satisfy the needs of VR consumers. The sharing of VR experiences with nearby users is expected to provide much more immersive VR UX to VR consumers [6,7,8].
Wireless multi-user VR systems based on IEEE 802.11 standards can be described as in Figure 1, which shows the elements of wireless multi-user VR systems, the VR data flows, and the delay components of VR services. In order to provide an immersive VR interaction experience, each component of the VR system shall provide proper feedback based on its sensing data. From the delay components in Figure 1, the VR interaction delay of a wireless VR system, Tvr, can be described as follows.
Tvr = Tsensing + Tproc1_in_device + Ttransfer_UL + Tproc_in_PC + Ttransfer_DL + Tproc2_in_device (1)
VR devices shall track the motion and commands of users by using sensing components. The sensing components may cause a delay, depending on their sensing performance; this delay is denoted Tsensing. Tproc1_in_device is the processing delay of the processor unit in VR devices, which handles sensing data and generates data packets. The generated data packets are transmitted to the associated VR computing device over a WLAN link in the proposed system. One-hop wireless packet delivery over a WLAN link causes a delay, Ttransfer_UL, which could be relatively large depending on the wireless channel condition, including channel congestion caused by channel contention. Ttransfer_UL is the most dominant delay component in the proposed multi-user VR system. The VR computing device, which is generally a PC, requires a processing delay, Tproc_in_PC. In order to provide seamless VR UX by minimizing this processing delay, high processing performance is preferred. The VR computing device generates VR feedback packets, including VR video data, and transmits them to its associated VR devices over a WLAN link. The delay caused by this WLAN transmission is denoted Ttransfer_DL. Since the transmission causing Ttransfer_DL is from one node (the VR computing device) to multiple nodes (wearable VR devices), whereas the transmission causing Ttransfer_UL is from multiple nodes to one node, Ttransfer_DL can be more easily controlled than Ttransfer_UL. The downlink delay, Ttransfer_DL, is the second most important delay component in the proposed system. The VR devices decode the packets and operate to generate sensory feedback, which causes a processing delay, Tproc2_in_device. For instance, a VR headset decodes the VR video image and performs video enhancement procedures for a seamless and immersive UX. Tproc2_in_device is the delay component for this kind of hardware processing to provide sensory feedback.
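The delay budget of Equation (1) can be illustrated with a minimal sketch; all component values below are hypothetical placeholders for illustration, not measured figures from this work.

```python
# Hypothetical sketch of the end-to-end VR interaction delay budget of
# Equation (1); the millisecond values below are illustrative assumptions.

def vr_interaction_delay(t_sensing, t_proc1, t_ul, t_pc, t_dl, t_proc2):
    """Sum the six delay components (in milliseconds) into Tvr."""
    return t_sensing + t_proc1 + t_ul + t_pc + t_dl + t_proc2

# Example budget: the uplink transfer dominates, as noted above.
t_vr = vr_interaction_delay(
    t_sensing=1.0,   # Tsensing: motion sensor latency
    t_proc1=0.5,     # Tproc1_in_device: packetization in the VR device
    t_ul=5.0,        # Ttransfer_UL: contended uplink over the WLAN
    t_pc=3.0,        # Tproc_in_PC: rendering on the VR computing device
    t_dl=2.0,        # Ttransfer_DL: downlink video frame delivery
    t_proc2=1.5,     # Tproc2_in_device: decode and display
)
```

Under these illustrative numbers, Ttransfer_UL alone accounts for more than a third of the total interaction delay, which motivates the focus on uplink channel access in the rest of the paper.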
IEEE 802.11 systems are designed as contention-based channel access wireless communication systems [9]. Because of the properties of contention-based channel access, the performance of IEEE 802.11ad/ay systems degrades dramatically as the number of wireless stations (STAs) increases [10]. In other words, even when the most advanced WLAN protocols and video codecs are utilized, supporting multi-user VR with low latency is almost impossible, owing to the multiple-access inefficiency of WLAN.
VR devices need to upload their sensing information to trace user position and pose frequently, and each VR device usually keeps track of its position with a 1000-Hz sensing rate. This means that in multi-user VR, small uplink frames are generated very frequently, by multiple VR devices. Severe WLAN channel contention is caused by an overwhelming number of small VR control frames, leading to a very long channel access delay, which makes the operation of multi-user VR over WLAN impossible. Such problems cannot be solved by conventional IEEE 802.11 distributed coordination function (DCF) and enhanced distributed channel access (EDCA), which do not guarantee frame delivery delay [9].
In order to provide multi-user VR services over WLANs, the channel access delay needs to be minimized, and the frame rate of VR video and the arrival rate of uplink (UL) frames should be adaptively adjusted, depending on the wireless environment. Given the technical progress of frame-interpolation schemes [11,12,13,14,15,16], and since frame-interpolation modules are expected to be commonly employed in next-generation VR headsets [17], users could have stable high-refresh-rate vision with a lower downlink (DL) VR video frame rate. Sensing data that is not fed back to computing machines, including VR-ready PCs, can be utilized by the frame-interpolation module to generate interpolation frames with accurate movement data from the user. These interpolation frames are not based on future frames [11], because they are real-time video frames; they are generated from past frames and motion track data. Shifting the latest frame by the opposite of the sensed motion vector is the simplest method of generating interpolation frames.
In order to assist the interpolation frame generation, some network characteristics, e.g., the video frame arrival rate and control frame delivery rate, need to be provided to VR devices. When there is a mismatch between the visual and vestibular systems, VR sickness can result. This means that if the VR vision in VR displays cannot reflect the real movement of users, the user experience may be degraded. In order to prevent VR sickness, accurate VR frame interpolation operations are required. Therefore, in the network-condition-based VR frame delivery proposed in this paper, the video frame interpolation procedures are very important for reducing VR sickness. The relationship between the received video frames and interpolated video frames is shown in Figure 2. In this paper, interpolated video frames do not refer to the video frames generated by the graphics processor of a VR PC or a VR console. Here, only frames that are generated by VR headsets after receiving video frames from a VR access point (AP) are referred to as interpolated video frames, which can be generated using received frames and motion track information. As shown in Figure 2, the processing unit in a VR headset shall shift the last video frame in the direction opposite to the user's motion vector to generate interpolated video frames. The interpolated video frames provide immediate visual responses that resolve the mismatch between the visual and vestibular systems. In wireless multi-user VR systems, since wireless links with inefficient multi-user channel access performance are bottlenecks that cannot provide a sufficiently high data rate for a high video frame rate, many interpolated frames are generated. Because of such problems, next-generation wireless multi-user VR systems should be designed to be tightly coupled with wireless systems, and should optimize their VR video image and motion tracking rate considering the wireless link status.
Based on the above observation, in order to design a high-quality multi-user VR system over WLANs, both wireless link optimization, which enhances the wireless channel access efficiency, and tight-coupled VR optimization with a wireless system, which prevents unnecessary resource wastage, should be considered. In this study, to provide high-quality VR UX in a multi-user WLAN VR service, we consider both the multi-user wireless link efficiency enhancement and VR optimization tightly coupled with a wireless system.
This paper is an extension of our earlier work, which proposed a delay-oriented VR mode that could be utilized by a VR AP [18]. The delay-oriented VR mode is included in this paper as a trigger-based channel access method. This paper proposes a novel wireless multi-user VR protocol structure, as well as specific channel access and system control schemes, to support multi-user VR systems over WLAN, including the delay-oriented VR mode. In addition to the novel structure and the enhanced channel access and control schemes, we propose connection-recovery algorithms for a seamless VR UX.
The rest of this paper is organized as follows. In Section 2, we explain the proposed system architecture and protocol design, including the connection-recovery algorithm. In Section 3, we investigate the system performance of the proposed VR architecture and multi-user VR schemes by performing extensive simulations; we also examine the delay and packet loss rate (PLR) performances in various simulation scenarios. Finally, Section 4 explains why conventional EDCA, which is utilized in WLAN systems, cannot handle wireless multi-user VR applications, whereas the proposed system can.

2. Architecture and Protocol

2.1. IEEE 802.11-Based Wireless VR System Architecture

The proposed multi-user VR systems with wireless interfaces consist of multiple IEEE 802.11 medium access control (MAC) layers and physical (PHY) layers. In order to accommodate multiple IEEE 802.11 protocols, we now propose a novel VR convergence layer (VRCL) and its interworking scheme with a station management entity (SME).
The network architecture for multi-user VR systems needs to meet the requirements of very high throughput and low latency. In order to satisfy the high data throughput requirement of high-resolution video images, we can utilize the 60-GHz standards, i.e., IEEE 802.11ad/ay. These amendment standards are designed for wireless high-resolution image devices. Because immersive VR UX can be achieved with these high-resolution video images, the use of 60-GHz standards is inevitable in multi-user VR scenarios. However, even when these high-throughput wireless standards are utilized, the combination of UL and DL transmissions in multi-user scenarios degrades the effective throughput and delay performance. This means that intra-basic service set (BSS) channel contention should be controlled by wireless VR protocols. Without such control algorithms and additional channel resources, the VR experience cannot be guaranteed in wireless network environments.
If only DL video frames were transmitted by a VR AP, network degradation might not occur. However, multiple VR devices connected to an AP must transmit their motion tracking data and control data very frequently, i.e., at 1000 Hz per device. These uplink motion tracking and control data are problematic, because such frequent transmissions can cause large channel access delays and throughput degradation. In order to resolve this uplink data contention problem, multi-user control channels conforming to the wireless standard should be utilized for multi-user VR networks. IEEE 802.11ax is the most representative standard that supports multi-user networking in the 5-GHz frequency band [19]. Because the 5-GHz channels of the IEEE 802.11ax standard do not interfere with 60-GHz frequency channels, a VR AP can accommodate multi-user uplink traffic very efficiently.
As a result, VR devices, including VR APs, need to have the special multi-standard protocol architecture described in Figure 3 to support VR connections based on WLANs. IEEE 802.11ax is the amendment standard for highly efficient WLAN in multiple-device scenarios, and it defines a trigger frame to accommodate multiple uplink frames from multiple devices simultaneously [19]. In many scenarios, it may be utilized in parallel with conventional single-frame transmission. When the trigger frame requests stations to transmit their UL data, each station transmits its data without additional channel access delay. The trigger frame is therefore able to substantially reduce the contention delay of WLAN systems.
The IEEE 802.11ad amendment standard is designed to utilize the 60-GHz frequency band, which provides wide bandwidth and high throughput. IEEE 802.11ay is an enhanced version of IEEE 802.11ad, and provides four times the bandwidth using channel bonding and additional spatial streams [5]. Because of the wide bandwidth, the 60-GHz standard is able to provide very high data rates over short distances. Therefore, in order to adequately utilize the high data rate to support multi-user VR, the channel inefficiency caused by channel contention should be minimized by separating the UL transmission and DL transmission.
The VR application layer described in Figure 3 is a protocol layer that provides VR images and control information on VR devices. VR video frames are generated using the frame rate that is reported by the VRCL; a detailed explanation of the VRCL is provided in Section 2.2. The generation rate of VR video frames is restricted by the VRCL, which prevents the VR application layer from generating meaningless video frames. For VR controllers and VR headsets, motion-tracking information measured by sensors in VR devices is accommodated and utilized in this VR application layer. VR devices, especially VR headsets, generate interpolation frames in this layer. An interpolation frame is a frame that needs to be displayed between the received real video frames delivered by a VR AP. These interpolated frames could be generated using various effective interpolation algorithms [11,12,13,14,15,16]. In this paper, VR video interpolation frames need to be based on motion-tracking information that is measured by VR sensors in VR devices [20,21,22]. Because this situation was not previously defined, further optimized interpolation methods should be studied.
The convergence layer described in Figure 3 is a protocol layer that enables the convergence of multiple network standards and provides some special information to the VR application layer. VR videos that are generated by a VR PC are delivered to a VR AP, and the convergence layer in the VR AP determines the transmission interval of video frames based on network conditions and reports the rate to the VR application layer of the VR PC. If VR video frames were transmitted without these considerations for network conditions, users would suffer poor UX, owing to the large delay time. Similar to the DL VR video frame transmission, the convergence layer also controls the UL frame delivery rate, based on the network condition. This reduces network congestion and the required network performance of VR devices. As a result, the convergence layer prevents these catastrophic situations by controlling the frame delivery rate.
Although some motion-tracking information cannot be delivered, depending on the decision of the convergence layer, the convergence layer still provides motion-tracking information to the VR application layer in the VR headset, to generate interpolation frames. These interpolation frames should be generated considering motion-tracking information, in order to prevent VR sickness. The number of interpolation frames that need to be generated before the next frame may be predicted by the frame arrival rate information obtained from the convergence layer.
The station management entity (SME) is used for the accommodation and delivery of parameters for each network layer [9]. In some cases, frame off-loading could be performed by controlling the frame-delivery interval based on PHY and MAC layer parameters. The packet loss rate information and frame interval information are key sets of information delivered by the SME.
Each MAC and PHY layer follows its own standard. The convergence layer controls and schedules all frames into those multiple MAC layers properly; for example, DL data frames can be scheduled in the IEEE 802.11ad/ay MAC, and UL data frames can be scheduled in the IEEE 802.11ax MAC. Because IEEE 802.11ax is a 5-GHz standard with multi-user support, it is a suitable protocol for UL transmission in multi-user VR systems. Figure 4 shows how IEEE 802.11ax could accommodate multiple frames using orthogonal frequency-division multiple access (OFDMA). The trigger frame is transmitted by an AP to instruct stations that have UL frames when and where to transmit them. In usual IEEE 802.11ax scenarios, the trigger frame would contend with other UL frames to guarantee the opportunity for all stations to access the channel. However, a VR AP requires a very tight delay property, and its traffic pattern is very regular, because it is a special-purpose AP for VR devices. This means that, for associated devices, there is no need for channel contention to guarantee opportunities for channel access. In other words, in order to fully utilize the UL OFDMA of IEEE 802.11ax for multi-user VR services, single-user UL transmission should be regulated. Single-user UL transmission can be regulated by the multi-user EDCA procedure in IEEE 802.11ax [19]. By using a new set of EDCA parameters for STAs in a multi-user BSS, the AP can assign STAs very low-priority EDCA parameters. Such low-priority parameters lengthen the STAs' channel access time for single-user UL transmission, so the AP can transmit a trigger frame for the UL OFDMA procedure while the STAs are still waiting.
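The trigger-based UL scheduling described above can be sketched as follows. This is an illustrative model, not the IEEE 802.11ax trigger frame format: the `RuAllocation` type and `build_trigger` helper are hypothetical names introduced here to show how a VR AP might map each associated device onto one OFDMA resource unit so that all uplink sensing frames are sent in parallel.

```python
# Illustrative sketch (not the IEEE 802.11ax frame format) of a VR AP
# building a trigger allocation that assigns one OFDMA resource unit (RU)
# per associated VR device, so uplink sensing frames are sent in parallel.

from dataclasses import dataclass

@dataclass
class RuAllocation:
    sta_id: int      # association ID of the VR device
    ru_index: int    # which RU the device transmits on

def build_trigger(sta_ids, num_rus):
    """Assign each station an RU; fail if more stations than RUs."""
    if len(sta_ids) > num_rus:
        raise ValueError("not enough RUs for all VR devices")
    return [RuAllocation(sta, ru) for ru, sta in enumerate(sta_ids)]

# Nine VR devices (as assumed in the simulations of Section 3) share
# one trigger, removing per-device channel contention.
trigger = build_trigger(sta_ids=list(range(1, 10)), num_rus=9)
```

The key design point mirrored here is that the AP, not the stations, decides when and where uplink data is sent, which removes Tcontention for associated devices.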

2.2. Protocol Design

The VRCL should encapsulate interpolation and frame rate information with a VR video frame in an aggregated MAC protocol data unit (A-MPDU). The VRCL controls the UL and DL frame arrival rates for WLAN systems, based on the frame rate of the original VR video and the wireless environment. Owing to the VRCL, a VR AP and VR devices can utilize IEEE 802.11 family standards, which provide the MAC and PHY layers, without standard modification; the VRCL is the only additional layer for a multi-user VR network interface. VR applications do not require any additional network control features, and are required only to generate proper VR video frames based on information delivered by the VRCL. The VRCL provides the UL control information rate and the required DL video frame rate to the VR application layer. Because the VRCL discards VR frames that could cause large network delays on VR devices, the VR application does not need to waste its resources on unnecessary video frames. For this reason, the VRCL and VR AP need to perform the function of a VR system controller.
Not only does the VRCL provide information that can be used to control VR video frames to the VR application layer, but it also controls its network operations based on information provided by the SME. The packet loss rate, received signal strength indication (RSSI), and modulation and coding scheme (MCS) index are the representative information observed by the VRCL. Based on this information, the VRCL can control the MCS level, channel bandwidth, frame arrival rate, number of users supported by the VR AP, and so on. Both the 60-GHz and 5-GHz standards are utilized by the VRCL for efficient multi-user VR operation. The main purpose of the 60-GHz wireless link, in this paper, is the delivery of high-resolution VR video frames; because that delivery requires a large bandwidth, the use of the 60-GHz standard is inevitable. The 5-GHz standard is employed to accommodate multi-user UL frames, by utilizing the multi-user UL OFDMA procedure.
Although the required data rate of multi-user uplink sensing data is not very high, the frame arrival rate of the sensing data frames is relatively high. Usually, the video frame rate ranges from 90 Hz to 120 Hz, but the motion sensing rate is 1000 Hz in current-generation wired VR devices. Because of this, small UL data frames could spoil the overall wireless VR system, in spite of their small required data rate. This causes a large channel access delay for the DL VR video frames, unless the DL and UL are separated. By separating the UL sensing data transmission from the 60-GHz standard, DL VR video frames do not contend with other frames from VR devices. If a VR video frame requires additional transmission time, owing to poor channel conditions or an increasing number of VR users, the VRCL can control the video frame rate based on its packet loss rate. The VRCL in a VR AP encapsulates the video frame rate information that is used to generate VR video frames. The VRCL in VR devices extracts the information from the received frames and delivers it to the VR application, in order to generate video interpolation frames based on the received video frames and motion-sensing information.
Unless the channel has severe coexistence issues, DL frames may not suffer large channel access interference. However, in the WLAN multi-user VR scenario, UL frames always suffer large contention delays, owing to frequent channel access with the contention-based channel access method. In order to solve this problem, a VR AP should support UL frame accommodation by transmitting a trigger frame, which is defined in the IEEE 802.11ax standard. In order to maximize the UL multi-user frame accommodation, the channel access of each VR device should be prohibited. Channel access can be prohibited by setting EDCA parameters, including AIFSN and CW, to very large values; maximum values are particularly recommended. Setting the EDCA parameters does not require any modification to the IEEE 802.11 standards, and manufacturers can configure them easily.
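The parameter change described above can be sketched as a simple configuration transform. This is a hedged illustration: the dictionary keys and the helper name are hypothetical, and the numeric maxima follow the usual IEEE 802.11 value ranges rather than any vendor API.

```python
# Sketch of de-prioritizing single-user EDCA access as described above:
# AIFSN and CW are pushed to their maxima so stations effectively wait
# for the AP's trigger frame instead of contending. Parameter names are
# illustrative; values follow typical IEEE 802.11 ranges.

AIFSN_MAX = 15      # largest arbitration inter-frame space number
CW_MAX = 1023       # largest contention window (2**10 - 1)

def suppress_single_user_access(edca_params):
    """Return a copy of the EDCA parameter set with minimal priority."""
    params = dict(edca_params)
    params["aifsn"] = AIFSN_MAX
    params["cw_min"] = CW_MAX   # CWmin raised to CWmax: longest back-off
    params["cw_max"] = CW_MAX
    return params

# A typical best-effort parameter set, pushed to minimal priority.
low_priority = suppress_single_user_access(
    {"aifsn": 2, "cw_min": 15, "cw_max": 1023})
```

Because only parameter values change, no frame format or state machine of the standard is modified, which matches the claim that manufacturers can deploy this without standards changes.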

2.3. Algorithm

The VRCL in a VR AP can perform DL VR video frame rate control and UL VR sensing frame rate control, depending on the wireless connection status. Figure 5 shows how the VRCL controls the DL VR video frame rate. The manufacturer of a VR system can set the packet loss rate thresholds and the corresponding VR video frame rates. A video refresh rate of 90 Hz is usually used in current-generation VR systems, but next-generation VR systems may support a refresh rate of at least 120 Hz. This means that “refresh_rate_1” in Figure 5 needs to be set to the native refresh rate of the VR display. Refresh rate parameters with larger index numbers should be set to larger values than those with smaller index numbers, and the same rule applies to the “DL_PLR_threshold” parameters listed in Figure 5. The proposed algorithm in Figure 5 aims to solve the connection problem of wireless multi-user VR systems by controlling the required channel throughput. If the channel condition is worse than “DL_PLR_threshold_n” (see Figure 5), the VR AP should perform a recovery procedure. The recovery procedure can be modified by the manufacturer, but channel alternation and 5-GHz off-loading are recommended at a minimum. Figure 6 shows an algorithm that controls the trigger frame transmission rate. A high value of “Trigger_rate_VR” (see Figure 6) indicates a small trigger frame interval, based on the PLR history of the UL sensing data frames. Because associated VR devices never perform EDCA procedures (these are disabled by the VR AP), only overlapped BSS (OBSS) stations can cause channel collisions and channel interference. Similar to the DL VR video frame rate control, the VRCL controls the frame rate and performs a recovery procedure under poor channel conditions; in this paper, channel alternation and bandwidth modification are recommended for the recovery procedures.
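A minimal sketch of the PLR-threshold control loop of Figure 5 follows, under the assumption that a worse observed PLR selects a lower DL video frame rate and that exceeding the last threshold triggers the recovery procedure. The threshold and rate values are illustrative placeholders, not values from the paper.

```python
# A minimal sketch of the DL video frame rate control of Figure 5, under
# the assumption that a higher packet loss rate (PLR) selects a lower VR
# video frame rate; thresholds and rates below are illustrative.

# Ascending PLR thresholds paired with the frame rate used below each one.
PLR_THRESHOLDS = [0.01, 0.05, 0.10]   # DL_PLR_threshold_1..3
REFRESH_RATES = [120, 90, 60]         # Hz; index 0 is the display's native rate

def select_dl_frame_rate(plr):
    """Return the DL video frame rate for the observed PLR, or None to
    signal that the recovery procedure (channel alternation or 5-GHz
    off-loading) should run instead."""
    for threshold, rate in zip(PLR_THRESHOLDS, REFRESH_RATES):
        if plr < threshold:
            return rate
    return None  # worse than DL_PLR_threshold_n: trigger recovery
```

The same ladder structure applies to the trigger rate control of Figure 6, with “Trigger_rate_VR” values in place of the refresh rates.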
The VRCL in a VR headset receives the DL VR video frame rate and UL VR sensing frame rate information from the VR AP. Based on this information, the VR application can generate interpolation frames that are utilized during the DL VR video frame interval. Each interpolation frame should utilize the motion-sensing information, even when that information was not delivered to the VR AP. From the motion-sensing information, the VR application layer should generate a motion vector, and the reverse of the generated motion vector should be applied to the interpolation frame.
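The reverse-vector step above can be sketched as follows. For brevity this hypothetical example models a frame as a list of 2-D point coordinates; a real headset would warp an image buffer on the GPU rather than shift points.

```python
# Sketch of the interpolation step described above: the latest received
# video frame is shifted by the reverse of the user's 2-D motion vector.
# A frame is modeled as a list of (x, y) pixel coordinates for brevity.

def interpolate_frame(last_frame_points, motion_vector):
    """Shift every point by the negated motion vector."""
    dx, dy = motion_vector
    return [(x - dx, y - dy) for (x, y) in last_frame_points]

# If the user's head moved right by 3 pixels, the scene shifts left by 3,
# giving an immediate visual response between received DL frames.
shifted = interpolate_frame([(10, 10), (20, 5)], motion_vector=(3, 0))
# shifted == [(7, 10), (17, 5)]
```

The number of such frames generated between two received frames is predicted from the DL frame rate reported by the VRCL.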

3. Results and Analysis

3.1. Performance Metric Definition

In order to support a 1000-Hz sensing rate, we set the basic UL sensing frame rate to 1000 Hz; this means that VR devices generate a sensing frame every 1 ms. In the simulations, we applied the separated WLAN structure proposed in this paper. In addition, we compared the delay and packet loss rate properties of our proposed algorithm in various situations with conventional EDCA cases.
The main focus of this analysis is Ttransfer_UL of Equation (1), because Ttransfer_UL is relatively large despite its small UL frame size. Ttransfer_UL can be further decomposed as follows:
Ttransfer_UL = Tpreamble + Tdata + Tcontention + TIFSs + Tsync (2)
where Tcontention includes the back-off time, as well as the channel-busy duration during a back-off procedure, covering the delays caused by channel access for both the initial transmission and retransmissions. Because TIFSs is a fixed time duration defined in the IEEE 802.11 standard, it cannot be changed. Tpreamble is also determined by the IEEE 802.11 standard; however, by using UL OFDMA transmission, only one Tpreamble is required for multiple stations, because UL stations transmit their preambles simultaneously at the beginning of the UL transmission. Tdata is the time duration required for over-the-air data transmission. In the EDCA case, the cumulative Tpreamble requires a much longer duration, even though it does not include any information for VR applications. Therefore, the proposed scheme utilizes the multi-user physical frame structure to reduce the effective Tpreamble, although it requires a longer Tdata, owing to the small size of the resource unit (RU) for a single device. The RU is a unit of frequency resource for OFDMA transmission, and the size and location of the RU for UL OFDMA transmission are indicated by the trigger frame.
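The preamble trade-off can be made concrete with a back-of-the-envelope sketch. All durations below are illustrative assumptions, not values taken from the standard or from the simulations.

```python
# Back-of-the-envelope sketch of the preamble saving argued above: with
# single-user access each of N stations pays its own preamble, while UL
# OFDMA pays one shared preamble but a longer Tdata per station because
# each station transmits on a narrow RU. All microsecond durations are
# illustrative assumptions.

def airtime_single_user(n_sta, t_preamble, t_data):
    """Total air time when each station transmits sequentially."""
    return n_sta * (t_preamble + t_data)

def airtime_mu_ofdma(n_sta, t_preamble, t_data_ru):
    """One shared preamble; stations transmit in parallel on RUs."""
    return t_preamble + t_data_ru

# Nine stations, 40 us preamble, 20 us full-band data vs 120 us on an RU.
t_su = airtime_single_user(9, 40, 20)   # sequential: 9 preambles
t_mu = airtime_mu_ofdma(9, 40, 120)     # parallel: 1 shared preamble
```

Even with the longer per-RU Tdata, the multi-user transmission wins under these assumptions, and the gap grows with the number of VR devices, since the single-user total scales linearly with N while the multi-user total does not.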
The proposed wireless multi-user VR system can reduce not only the effective Tpreamble, but also Tcontention. Because not all VR devices in the VR AP BSS are allowed to transmit their UL control data without receiving the trigger frame, there would be no contention. On the other hand, because VR devices cannot transmit their UL data without a trigger frame from the VR AP, we added a new delay factor Tsync. Tsync is a delay parameter caused by the time gap between the trigger frame reception and VR traffic generation.
In this paper, we assumed the existence of nine wireless VR devices, including VR controllers and VR headsets. Additional specific simulation parameters are shown in Table 1. Because of the preamble overhead reduction, a duration of nine times TSU-UL can be replaced by a single TMU-UL duration in the proposed multi-user VR mode. TAIFS is the time duration required for channel sensing before attempting the channel access procedure. MQueue is the queue size parameter for each device; if the queue is full and an additional frame arrives, the IEEE 802.11 system discards the arriving frame, which is counted as a packet loss that increases the packet loss rate. λTrigger is the trigger frame delivery rate controlled by the VRCL, as mentioned above; the VRCL can configure this value to provide seamless VR UX.
Although λMotion, the motion sensing rate, is fixed at 1000 Hz by the hardware, the VRCL controls its delivery rate, and the undelivered information is used to generate interpolation frames in the VR headset. Nretx is the maximum number of retransmissions (retx). Because VR is a real-time application, a large number of retransmissions is not suitable; in this study, we simulated cases with zero to two retransmissions. To test the robustness of the proposed system, we considered the presence of interfering WLAN devices. NOBSS denotes the number of interfering WLAN devices; in the no-interference cases NOBSS is set to 0, and otherwise nine interfering devices are assumed. To measure the actual effect of the interfering devices, a realistic frame generation rate must be assigned to them. In this paper, λOBSS, the frame arrival rate of each interfering device, is set to 1 Hz; that is, each interfering device generates one frame to transmit every second. Each frame generated by an interfering device occupies the air for a duration of TOBSS, which we set to 10 ms.
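The parameters above can be collected into a simple configuration sketch (values taken from Table 1); the last lines compute the aggregate channel occupancy implied by the interference model. The dictionary layout itself is only an illustrative convention.

```python
# Table 1 simulation parameters as a configuration sketch, plus the
# aggregate channel occupancy implied by the interference model.

params = {
    "n_vr_devices": 9,          # 3 headsets + 6 controllers
    "t_ul_su_us": 90,           # single-user UL TXOP (us)
    "t_ul_mu_us": 540,          # overall multi-user UL TXOP (us)
    "t_aifs_us": 36,            # channel sensing before access
    "queue_size_frames": 1000,  # MQueue
    "lambda_trigger_hz": 1000,  # configurable, 100-1000 Hz
    "lambda_motion_hz": 1000,   # fixed by hardware
    "n_retx": 0,                # 0 or 2
    "n_obss": 9,                # interfering devices
    "lambda_obss_hz": 1,        # one frame per second each
    "t_obss_ms": 10,            # airtime per interference frame
}

# Each interferer occupies the channel for 10 ms once per second, so nine
# of them keep the medium busy roughly 9% of the time.
busy_fraction = (params["n_obss"] * params["lambda_obss_hz"]
                 * params["t_obss_ms"] / 1000.0)
print(f"interference channel occupancy ~ {busy_fraction:.0%}")
```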

3.2. Simulation Results

In the conventional EDCA cases, nine devices generated UL frames containing motion-sensing information every 1 ms. The resulting packet loss rate (PLR) and Ttransfer_UL are shown in Figure 7. Because VR systems require real-time processing, the maximum number of retransmissions does not need to be large. When it was set to 2, as shown in Figure 7a, the PLR was lowered for a short time, but increased to about 0.4 shortly afterwards. From the start, the PLR observed in the conventional EDCA cases was unusable for consumer VR devices. In the no-retransmission case, shown in Figure 7b, the PLR was large and the delay relatively small at first, but both soon became unacceptable: after 8 s, the delay of the motion-sensing data delivery exceeded 1 s. From the results in Figure 7, we can conclude that the conventional EDCA procedure cannot support wireless multi-user VR systems, which require millisecond-level delays and a very low PLR for a reasonable user experience.
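The queue-overflow behavior described above can be illustrated with a toy single-queue model. The effective service time under EDCA contention is an assumed stand-in (the real value depends on contention dynamics), but the qualitative outcome — queue saturation, sustained packet loss, and second-level head-of-line delay — mirrors the trend reported for Figure 7.

```python
from collections import deque

# Toy illustration of why EDCA collapses here: a device enqueues a motion
# frame every 1 ms, but the contended channel drains frames more slowly.
# A full 1000-frame queue then drops new arrivals (counted as loss), and
# queued frames age, so delay grows without bound. SERVICE_PERIOD_MS is
# an assumed stand-in for the contention-limited EDCA service time.

QUEUE_LIMIT = 1000
ARRIVAL_PERIOD_MS = 1.0      # one frame per ms
SERVICE_PERIOD_MS = 1.5      # assumed effective service time under contention

queue = deque()
lost = total = 0
next_service = 0.0
for step in range(10_000):                 # simulate 10 s in 1 ms steps
    t = step * ARRIVAL_PERIOD_MS
    total += 1
    if len(queue) >= QUEUE_LIMIT:
        lost += 1                          # queue full: frame discarded
    else:
        queue.append(t)
    while queue and next_service <= t:     # drain at the service rate
        queue.popleft()
        next_service += SERVICE_PERIOD_MS

plr = lost / total
head_delay = (10_000 * ARRIVAL_PERIOD_MS - queue[0]) if queue else 0.0
print(f"PLR ~ {plr:.2f}, oldest queued frame waited ~ {head_delay:.0f} ms")
```

Once arrivals outpace the service rate even slightly, the finite queue fills and both loss and delay become unbounded in practice, which is why no retransmission setting can rescue EDCA in this regime.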
The proposed VR AP disables the conventional EDCA procedure, which can cause the severe delay and PLR results shown in Figure 7. If there are no interfering devices, usually called overlapping BSS (OBSS) devices, the VR AP is the only wireless device that initiates the channel access procedure. In this case, the VR AP always obtains a transmission opportunity without channel contention or a back-off procedure, because the channel is idle at every channel access attempt; thus, the VR AP incurs no additional contention delay caused by back-off.
Although the proposed architecture and schemes work very well, as shown in Figure 8, the channel condition can worsen owing to interfering devices in the 5-GHz frequency band. In contrast to 60-GHz radio, 5-GHz radio has a relatively long transmission range, and hence a long interference range, and 5-GHz devices are widely used by ordinary consumers. A VR AP should therefore consider the influence of interference. To measure the effect of channel interference, we considered nine additional IEEE 802.11 devices operating in the 5-GHz band, each generating a data frame with an air time of 10 ms every 1 s. The results are shown in Figure 9; compared with the no-interference case, they show relatively large delays. Most frame-delivery delays were under 10 ms, but a large delay jitter was observed. Setting the maximum number of retransmissions to 0 decreased the delay jitter, as shown in Figure 9b.
Even though the 10-ms delay performance was relatively good for multi-user VR, the delay jitter in the severe-interference case still needed to be minimized. The proposed algorithm for controlling the trigger frame delivery rate, shown in Figure 6, can handle this case. Because the proposed VR system supports variable motion-sensing delivery rates, owing to the VR video interpolation module in the VR headsets, the sensing data report rate can be reduced. In this study, we assume a sensing data report rate of 100 Hz to 1000 Hz; if the channel condition is very poor, the VR AP should set its trigger frame delivery rate to 100 Hz. To measure the effect of controlling the trigger frame rate, we simulated a 100-Hz trigger frame delivery rate in the severe-interference scenario. The results, shown in Figure 10, indicate that in the no-retransmission case all frames were delivered within a 10-ms delay. This means that the proposed wireless multi-user VR system guarantees a 10-ms wireless uplink delay, even in the presence of severe channel interference.
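The rate-control idea can be sketched as a simple feedback rule. The halving/doubling policy, the 10-ms budget, and the function name below are illustrative assumptions, not the exact algorithm of Figure 6; they only capture the principle of trading report rate for delay when interference is severe.

```python
# Sketch of trigger-frame delivery-rate control: when interference pushes
# uplink delay past budget, the VR AP lowers the trigger rate (the headset
# interpolates the missing motion samples); when the channel recovers, the
# rate is restored. All thresholds here are illustrative assumptions.

RATE_MIN_HZ = 100      # minimum sensing report rate the headset tolerates
RATE_MAX_HZ = 1000     # hardware motion sensing rate
DELAY_BUDGET_MS = 10.0 # uplink delay target for seamless VR UX

def adjust_trigger_rate(rate_hz: int, observed_delay_ms: float) -> int:
    """Halve the trigger rate when delay exceeds the budget; double it
    (up to the maximum) once the channel recovers."""
    if observed_delay_ms > DELAY_BUDGET_MS:
        return max(RATE_MIN_HZ, rate_hz // 2)
    return min(RATE_MAX_HZ, rate_hz * 2)

# Example trace: a burst of interference drives the rate down to 100 Hz,
# then the rate climbs back as the measured delay recovers.
rate = 1000
for delay_ms in [2.0, 14.0, 18.0, 12.0, 11.0, 3.0, 2.0]:
    rate = adjust_trigger_rate(rate, delay_ms)
print(rate)
```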

4. Conclusions

The proposed architecture, schemes, and rate control algorithm substantially improve the delay performance of a wireless multi-user VR system. We showed that the conventional EDCA of WLAN cannot fully support multi-user VR systems, because its delay and PLR are unacceptable for such systems. The proposed multi-user VR system should therefore be considered for next-generation consumer VR devices in order to support multi-user VR services. In the absence of an OBSS, the proposed system achieves very low delay and PLR, which can guarantee a seamless VR UX. We observed that the retransmission procedure did not provide a meaningful PLR improvement in the proposed system; by setting the maximum number of retransmissions to 0, the delay performance can be significantly enhanced. Furthermore, because the proposed system does not require modifications to the standard, it can be easily used with commercial WLAN chipsets available on the market by adding the VRCL layer proposed in this paper.

Acknowledgments

This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT & Future Planning (NRF-2017R1A2B4003987).

Author Contributions

Jinsoo Ahn, Young Yong Kim and Ronny Yongho Kim conceived the study. Jinsoo Ahn and Ronny Yongho Kim conducted the literature review. Jinsoo Ahn designed the model, implemented the simulation program, and obtained the results. Young Yong Kim and Ronny Yongho Kim edited the manuscript and corrected grammatical mistakes. Ronny Yongho Kim, in consultation with the other authors, reconfirmed the credibility of the obtained results. All authors have read and approved the final manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

Figure 1. Wireless multiuser virtual reality (VR) system, based on IEEE 802.11 system.
Figure 2. Motion sensing based video interpolation in VR headsets.
Figure 3. Proposed protocol architecture of a wireless, multi-user VR system.
Figure 4. Distributed coordination function (DCF) and the proposed delay-oriented VR mode operation for wireless VR system. (a) Wireless multiuser VR with conventional wireless local area network (WLAN); (b) Wireless multiuser VR with the proposed delay oriented VR mode WLAN.
Figure 5. Proposed algorithm to adjust downlink (DL) VR video frame rate, and examine its feasibility of VR capacity for a given user profile.
Figure 6. Proposed algorithm to adjust uplink (UL) VR sensing information report rate, and examine its feasibility of VR capacity for a given user profile.
Figure 7. Delay and packet loss rate (PLR) for proposed WLAN structure with conventional enhanced distributed channel access (EDCA). (a) 9 VR devices with max number of retx = 2; (b) 9 VR devices without retx.
Figure 8. Delay and PLR for proposed WLAN structure with proposed schemes in a no-interference environment.
Figure 9. Delay and PLR for proposed WLAN structure with proposed schemes in a severe-interference environment. (a) Nine interference devices with max number of retx = 2; (b) Nine interference devices without retx.
Figure 10. Delay and PLR for proposed WLAN structure with proposed schemes and delivery rate control algorithm in a severe-interference environment. (a) Nine interference devices with max number of retx = 2; (b) Nine interference devices without retx.
Table 1. Simulation Parameters.

| Symbol | Description | Value |
| --- | --- | --- |
| NVR | Number of VR consumer devices | 9 (3 headsets, 6 controllers) |
| TUL-SU | Transmission opportunity time of a control frame in the single-user transmission case | 90 μs |
| TUL-MU | Overall transmission opportunity time in the UL multiuser transmission case | 540 μs |
| TAIFS | Channel sensing duration before initiating the channel access procedure | 36 μs |
| MQueue | Size of the frame queue, in number of frames | 1000 frames |
| λTrigger | Trigger frame delivery rate (configurable) | 100 Hz to 1000 Hz (default 1000 Hz) |
| λMotion | Motion sensing rate | 1000 Hz |
| Nretx | Maximum number of retransmissions (configurable) | 0 or 2 |
| NOBSS | Number of interference devices (configurable) | 0 or 9 |
| λOBSS | Frame arrival rate of each interference device | 1 Hz |
| TOBSS | Transmission opportunity time of an interference frame | 10 ms |
