Article

Sprinkle Prebuffer Strategy to Improve Quality of Experience with Less Data Wastage in Short-Form Video Streaming

1 Graduate School of Engineering and Science, Shibaura Institute of Technology, Tokyo 135-8548, Japan
2 Department of Information and Communications Engineering, Shibaura Institute of Technology, Tokyo 135-8548, Japan
* Authors to whom correspondence should be addressed.
Electronics 2022, 11(13), 1949; https://doi.org/10.3390/electronics11131949
Submission received: 19 April 2022 / Revised: 16 June 2022 / Accepted: 19 June 2022 / Published: 22 June 2022
(This article belongs to the Special Issue Recent Advances on Intelligent Multimedia Networks)

Abstract

In mobile short-form video streaming, the video application usually provides the user with a playlist of recommended videos to be played one by one. In order to prevent playback stalls caused by possible fluctuations in the mobile network, after finishing buffering the currently-playing video, commercial video players continue to prebuffer (i.e., buffer in advance before playback) one subsequent video in the playlist with as much content as possible. However, since the user can skip a video at any time if he/she does not like it, prebuffering too much video content wastes mobile data. Conversely, without prebuffering any subsequent video, the video player is exposed to a high risk of stalling events, which threaten the user's quality of experience (QoE). In this paper, a novel Sprinkle Prebuffer Strategy (SPS) is proposed to overcome such drawbacks. Once the currently-playing video's buffer reaches an optimal buffer threshold, the proposed SPS attempts to concurrently prebuffer all subsequent videos in the playlist, each up to an optimal prebuffer threshold. The evaluation results show that the proposed SPS outperforms the referenced methods, providing the best user QoE with a reasonable compensation in data wastage.

1. Introduction

Nowadays, short-form video streaming platforms, such as TikTok [1], Youtube Shorts [2], or Instagram Reels [3], are key entertainment applications on mobile devices. Thanks to the diversity of genres and the richness of video content, using such video platforms has become a part of most people's everyday life. In fact, recent statistics reveal that a person spends more than 40 min per day streaming short videos on mobile devices [4]. Furthermore, it is expected that short videos will soon account for 80% of mobile data traffic [5]. Such tremendous growth has put pressure on service providers to continuously improve the service quality in order to gain more users and revenue.
As short-form videos are mainly watched on mobile devices, the video content is usually downloaded via the mobile network. Due to the instability of mobile networks [6,7], a video streamed over them is more likely to suffer from playback stalls (i.e., the video content fails to be downloaded in time for playback). Therefore, short-form video streaming players adopt the HTTP-based streaming technique, where the video content is split into multiple small, fixed-duration video segments that are played out chronologically by the client's video player [8,9]. In order to prevent playback stalls, the video player attempts to download as many video segments as possible in advance and stores them in a playback buffer. Such a process is referred to as buffering the video. Moreover, short-form video applications often deliver a playlist of videos to the user, preselected based on current trends, user subscriptions, or user preferences [10,11]. As such, after finishing buffering the currently-playing video on the screen, the player continues to buffer in advance the segments of subsequent pending videos in the playlist [12]. Such a mechanism ensures uninterrupted playback for the user, therefore maintaining a high quality of experience (QoE) [13,14,15].
Nevertheless, due to diverse genres and repetitive content, it is not guaranteed that all videos in the playlist are of interest to the user, despite efforts to improve the recommendation systems. Therefore, the short-form video application allows the user to scroll past the currently-playing video and finish it early if he/she does not like it. Although such a feature helps the user search for interesting content and save time spent on watching irrelevant videos, the unwatched video segments preloaded in the video buffer are wastefully discarded. This also means that the mobile data spent on downloading those segments is wasted. On the one hand, this wastes resources that the provider could otherwise utilize for other purposes (e.g., for other users, or simply not using them to save operational costs). On the other hand, today's mobile network operators often impose hard quotas on mobile data (e.g., 10 GB per month, or 15 GB of fast bandwidth data per month) [16]. Such a data wastage problem directly drains the user's mobile data quota on downloading video content he/she never watches.
As an attempt to solve the data wastage problem of short-form video streaming, a recent study proposed limiting the number of segments prefetched in the buffer and ignoring the subsequent videos [17]. Such a method may benefit the user (and also the provider [17,18]) when the network is fast and stable. However, the buffer has a crucial role in maintaining smooth playback for the user when the network condition varies. Not buffering subsequent videos in advance leaves their buffers completely empty when they start playing out. When the network fluctuates, or when the bandwidth is intentionally throttled due to the mobile network operator's policy, the buffer may not be filled at the same pace as the playback. Therefore, such a method likely increases the risk of playback stalls at the start of a video and degrades the user's QoE [13,14,15]. As a result, there is little point in saving data when the user can hardly enjoy the streaming service.
Realizing the above shortcomings, this work aims at creating a solution that optimizes the user's QoE while maintaining the least data wastage for short-form video streaming. An empirical study was conducted to understand the buffer mechanism of commercial short-form video streaming applications. It was found that, after fully buffering the currently-playing video on the screen, these applications continue to fully buffer one subsequent pending video. The hypothesis in this study is that, under a fluctuating network, such a buffer filling strategy performs suboptimally in terms of both maintaining QoE and reducing data wastage if the user scrolls videos too quickly. Therefore, this paper proposes the novel Sprinkle Prebuffer Strategy (SPS). First of all, the buffer in short-form video streaming is redefined to clarify its roles and targeted videos: the viewing video buffer stores video segments of the viewing video, and the pending video prebuffer stores video segments of a pending video. Then, assuming that all videos preselected in the playlist will eventually be played, the proposed SPS attempts to prebuffer (i.e., download segments in advance into a pending video prebuffer) all pending videos concurrently after the viewing video finishes its buffering process. Moreover, the amount of segments buffered or prebuffered is limited by the optimized viewing buffer threshold or prebuffer threshold, respectively, in order to reduce data wastage. Finally, a comprehensive experimental evaluation is conducted to test the effectiveness of the SPS, covering both various fixed-bandwidth environments and real mobile network traces. The evaluation results show that the SPS outperforms the referenced methods in maximizing the user's QoE with much less sacrifice in data wastage. In summary, the main contributions of this work are as follows:
  • The playback buffer is redefined as viewing video buffer and pending video prebuffer, which better describe their roles and targeted videos in short-form video streaming.
  • The novel Sprinkle Prebuffer Strategy is proposed, which maintains the buffer of the currently-playing video and preloads all pending videos with respect to their optimized viewing buffer and prebuffer thresholds, respectively.
  • A comprehensive evaluation with both fixed-bandwidth networks and real mobile network slices is conducted, proving that the proposed SPS can maintain the highest QoE with less compensation for data wastage.
The remainder of this paper is organized as follows: Section 2 reviews the existing research. An empirical study of commercial short-form video platforms, along with the hypothesis on their performance, is presented in Section 3. Section 4 describes the proposed SPS in detail. Section 5 and Section 6 summarize and discuss the evaluation results, respectively. Finally, Section 7 concludes this paper.

2. Related Work

Quality of Experience (QoE) is the most important optimization goal in online video streaming, which accounts for the user's level of satisfaction with their subjective experience of the streaming session [19,20]. It has been found that the playback stall is the most serious streaming impairment that negatively affects the user's QoE [13,14,15]. In fact, a playback stall happens when the buffer of the video player runs out of video segments to be played out. In this manner, over the years, the common goal of research efforts on improving video streaming systems has been to prevent the buffer from being drained out [8,20].
Existing works on video streaming mostly employed HTTP Adaptive Streaming (HAS) [21], which extends the HTTP-based streaming mechanism by encoding the video into multiple quality versions represented by the video bitrates. To name a few, the work in [22] proposed downloading the video segments at the lowest bitrate at the start of the streaming session. This was because the lower the bitrate, the smaller the segment, and thus the faster it could be downloaded to build up the buffer. As the buffer progressed through multiple predefined thresholds, the bitrate could be increased gradually or aggressively with respect to the buffer range and video segment size. Similarly, the BOLA proposed in [23] utilized Lyapunov optimization of the buffer to determine the suitable bitrate that both maximized the video quality and minimized the risk of buffer underflow. Meanwhile, several works attempted to estimate the future available bandwidth of the user based on mathematical models [24,25,26] or machine learning-based prediction [27,28,29]. After that, a video bitrate lower than the estimated bandwidth was selected to ensure the segment download time did not consume all the remaining duration of the buffer. Despite certain improvements, these works only targeted the general type of video streaming (long-form video streaming), where the influence of user behavior (i.e., the scrolling behavior) is negligible.
While buffering video segments can help avoid playback stalls to ensure the user’s QoE, if the user decides to stop the video early before it finishes, the downloaded segments are wasted [11]. Such a data wastage problem is often neglected in research on long-form video streaming. In the case of short-form video streaming, however, the data wastage catches more attention due to the typical user’s scrolling behavior mentioned in the previous section. As a result, along with maintaining the user’s QoE, recent research on short-form video streaming also focuses on reducing data wastage.
The work in [12] discovered that, different from long-form video streaming, commercial short-form video vendors usually utilize only a single video bitrate instead of adopting the HAS technique. Respecting such a bitrate setting, the authors then introduced LiveClip, which used a Markov Decision Process to decide which video should be buffered within a sliding window. It was reported that LiveClip could improve the QoE and data wastage cost by around 10–40%. Nevertheless, LiveClip required additional entities and connections in the streaming system to monitor, collect, train, and share network and user statistics. On the one hand, such a framework could hinder service scalability in real-life deployment when the number of clients increases [8]. On the other hand, it generated additional periodic traffic for fetching the computing models from such entities to the user's device. Thus, the mobile data spent on downloading video segments was not efficiently saved because a portion of it was used for the internal communications of LiveClip.
The work in [17], however, was more straightforward and aligned well with the spirit of the solution proposed in this paper. The authors proposed WAS to set an optimal threshold for the playback buffer to limit the number of segments preloaded ahead of the current playback, thus minimizing data wastage. WAS optimized the buffer threshold based on a simple binary search over offline trace-driven simulation data and achieved a remarkable reduction in data wastage. Despite such an outperformance, WAS only buffered the currently-playing video and, unlike the commercial solution, did not consider the subsequent videos in the playlist. This could expose the video player to a high risk of playback stalls at the start of a video under challenging network situations and seriously degrade the user's QoE. In this case, the concern about data wastage becomes trivial as the user cannot enjoy the streaming session.
In this work, a straightforward solution without requiring any additional entities to the existing video streaming framework is considered. In the next section, an empirical study of the buffer strategy of the commercial players is presented, based on which the proposed solution is constructed afterward.

3. Empirical Study

Similar to long-form video streaming, short-form video streaming players also hold a buffer storing video segments ahead of the current playback period to prevent stalling events. Still, as short-form video streaming is playlist-based, its buffer strategy is probably different.
A recent work [12] investigated TikTok (also known as Douyin [30]), one of the most famous short-form video streaming platforms nowadays. The work found that TikTok used a strategy called Next-One that attempted to fully buffer the currently-playing video. Once done, it attempted to fully buffer one subsequent video, then stopped buffering until that subsequent video was moved to the main screen for playback.
In this work, the buffer strategy of Youtube Shorts, which is also a popular short-form video streaming service, was also investigated. In order to capture the video traffic, a Fiddler [31] proxy was deployed on a Core-i7 physical machine running Windows 10 with 16 GB of RAM. Then, 6 videos suggested by the Youtube Shorts application were played from an iPhone SE2 smartphone connected to the proxy via WiFi. By inspecting the video requests, the buffer strategy of the application was revealed, as illustrated in Figure 1.
Inferring from Figure 1, the buffer strategy of Youtube Shorts was somewhat similar to that of Next-One. That is, after requesting all available video segments of the first video of the session, which was playing on the screen, Youtube Shorts then requested segments of one subsequent video. After that, from the second video onward, the segments of the viewing video and one subsequent video tended to be requested simultaneously.
Based on the above investigations, it can be concluded that there is a common pattern in the buffer strategies of the commercial short-form video platforms: apart from the currently-playing video on the screen, they only buffer one subsequent video in the pending playlist. When the network is slow (limited bandwidth, congestion, unstable connectivity, etc.), if the user scrolls the videos quickly (e.g., scrolls 2 videos within 1–2 s), there is a possibility that the subsequent video is either not yet buffered or not sufficiently buffered, causing playback stalls. Thus, the hypothesis in this study is that such a buffer strategy may increase the risk of stalling events and impair the user's QoE under slow network situations. Furthermore, the fact that the Next-One strategy of TikTok attempts to fully buffer the subsequent video may increase the data wastage drastically in case that video is skipped too early.
Based on the above hypothesis, the SPS to maintain the user’s QoE with awareness of data wastage is proposed in the next section.

4. The Sprinkle Prebuffer Strategy

4.1. Redefining the Buffer in Short-Form Video Streaming

Short-form video streaming also uses a video buffer to store future segments ahead of the current playback period [10,17]. However, the buffer in short-form video streaming does not just refer to the video on the screen but can also belong to the awaiting videos in the playlist. In order not to confuse them, in this work, the definition of the buffer for short-form video streaming is reconsidered as follows:
  • Viewing video buffer: The buffer of the currently-playing video on the screen. It is to prevent the current playback from being stalled due to network fluctuation.
  • Pending video prebuffer: The buffer of a video that awaits in the playlist. Such a prebuffer helps prepare the video segments for future playback even before the video is shown on the screen, thus better minimizing the risk of playback stalls. When a pending video is moved to the screen for playing, its pending video prebuffer becomes the viewing video buffer.
Without loss of generality, in the later parts of this paper, the term buffer refers to the viewing video buffer and the term prebuffer refers to the pending video prebuffer. In the remainder of this section, details on how the buffer and prebuffer are used to reduce QoE loss with less data wastage trade-off are presented.

4.2. The Sprinkle Prebuffer Strategy

In this subsection, the novel Sprinkle Prebuffer Strategy (SPS) is proposed. Figure 2 illustrates the buffer strategy of the proposed SPS.
Assuming that all videos provided in the playlist will eventually be played, the SPS prebuffers all pending videos remaining in the playlist, one by one, up to a prebuffer threshold of ρ seconds. Such a strategy brings two advantages. Firstly, as every pending video has its chance to be prebuffered, the risk of playback stalls is lowered, especially at the start of the playback. Secondly, in case the video is forced to finish too early, at most ρ seconds of playback are wasted rather than the whole video duration. The decision and mechanism to prebuffer the pending videos are depicted in the flowchart in Figure 3 and explained as follows:
  • As soon as the buffer reaches a viewing buffer threshold at θ seconds, the player finds one closest pending video (with respect to its order in the playlist) that has an empty prebuffer and starts prebuffering it. This is done in parallel with the buffering of the currently-playing video.
  • After a pending video finishes prebuffering up to ρ seconds, the player goes back to monitor the viewing buffer threshold and repeats the above procedure until the last pending video in the playlist.
It should be noted that whenever the user skips the currently viewing video, the video player immediately suspends the prebuffering process (if any) to prioritize the buffering process of the new viewing video. Furthermore, once the viewing video finishes buffering (i.e., all available segments of the viewing video have been downloaded), the video player can focus solely on prebuffering subsequent videos.
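To make the above scheduling logic concrete, the following is a minimal Python sketch of the download-priority decision described in the flowchart. It is an illustrative simplification rather than the authors' implementation: the names (PlaylistVideo, sps_next_download) are hypothetical, and segment requests are serialized here although the SPS buffers and prebuffers in parallel.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class PlaylistVideo:
    """Hypothetical per-video state tracked by the player."""
    video_id: int
    buffered: float = 0.0         # seconds of content downloaded ahead of playback
    fully_buffered: bool = False  # True when all available segments are downloaded

def sps_next_download(viewing: PlaylistVideo,
                      pending: List[PlaylistVideo],
                      theta: float,
                      rho: float) -> Optional[PlaylistVideo]:
    """Pick the video whose next segment should be requested.

    Rule 1: keep the viewing video buffer topped up to the viewing buffer
            threshold theta.
    Rule 2: once theta is satisfied (or the viewing video is fully buffered),
            prebuffer the closest pending video, in playlist order, whose
            prebuffer has not yet reached the prebuffer threshold rho.
    When the user skips the viewing video, the caller should abort any ongoing
    prebuffer request and call this function again with the new viewing video.
    """
    if not viewing.fully_buffered and viewing.buffered < theta:
        return viewing
    for video in pending:  # pending is ordered as in the playlist
        if not video.fully_buffered and video.buffered < rho:
            return video
    return None  # every buffer has reached its threshold; wait for playback to drain it
```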
The aforementioned viewing buffer threshold θ and prebuffer threshold ρ are calculated after every successful segment download in order to continuously adapt to the network situation. In the next subsection, the determinations of these thresholds are presented.

4.3. Determining the Thresholds

Prebuffering subsequent videos can help prevent playback stalls more effectively, thus benefiting the user's QoE. However, prebuffering too many segments may increase the data wastage. Thus, the prebuffer should be limited with respect to both QoE loss and data wastage. Figure 4 depicts the buffer model used in this paper and Table 1 summarizes the notations used throughout this section.
Suppose that at time t, a pending video enters the screen and has already been prebuffered for ϱ seconds. Thus, it now starts playing out with ϱ seconds in its viewing buffer. At the same time, a subsequent segment is downloaded to fill up the buffer. When that segment is fully downloaded at time t′, the buffer gains L seconds corresponding to the segment duration. However, it also loses the amount of time spent downloading that segment, which can be estimated as $\frac{L \cdot r}{C}$, where r and C denote the video bitrate and the utilized bandwidth, respectively. Therefore, the buffer at t′ can be estimated as in Equation (1):
$$0 \le B_{t'} = \varrho + L - \frac{L \cdot r}{C} \le \theta \quad (1)$$
In order to prevent QoE loss caused by playback stalls, the downloading time of the subsequent segment must not exceed ϱ seconds. Such a condition could be controlled by choosing an appropriate bitrate lower than the bandwidth. However, as mentioned in Section 2, short-form video streaming often encodes only one bitrate to save storage costs. Thus, the segment download time relies mainly on the bandwidth C, which cannot be controlled from the user side. As a result, a safe strategy is to prebuffer the video as much as possible to make sure that the segment download progress will not drain the buffer when the playback starts. In other words, the prebuffer amount ϱ must be chosen to be as much larger than the segment download time as possible, minimizing the risk of QoE loss q as in Equation (2):
$$q(\varrho) = \begin{cases} 1, & \varrho = 0 \\ \sqrt{\dfrac{L \cdot r}{C \cdot \varrho}}, & \text{otherwise} \end{cases} \quad (2)$$
In Equation (2), the square root is applied to better emphasize the severity when the download time of a segment increases due to low bandwidth.
On the other hand, in case the user skips the video too soon (i.e., just after seeing the first seconds), a large ϱ will lead to high data wastage. In this manner, ϱ should be kept as low as possible. In reality, the video starts playing as soon as the user scrolls to it. In order for the playback to start, the player must at least download the first segment of the video. Therefore, assuming that the user skips the video only within the playback of the first segment, ϱ should be kept as close to the segment duration L as possible to minimize the data wastage ratio w, as shown in Equation (3):
$$w(\varrho) = \begin{cases} 0, & \varrho < L \\ 1 - \dfrac{L}{\varrho}, & \text{otherwise} \end{cases} \quad (3)$$
Based on the above constraints, the prebuffer amount ϱ can be optimized as Equation (4) in order to minimize QoE loss with minimal sacrifice for data wastage.
$$\varrho = \left\{ \max \varrho : \min q(\varrho) \right\} \ \text{and} \ \left\{ \min \varrho : \min w(\varrho) \right\} \quad (4)$$
Moreover, in video streaming, the buffer or prebuffer is filled in fixed portions of L seconds at a time. Therefore, the ϱ acquired from Equation (4) is rounded up to ρ with respect to the segment duration L. Finally, the prebuffer threshold for the proposed SPS is dynamically selected based on Equation (5):
$$\rho = \left\lceil \frac{\varrho}{L} \right\rceil \cdot L, \quad \text{subject to } \varrho \text{ satisfying Equation (4)} \quad (5)$$
Meanwhile, the viewing buffer threshold θ also plays an important role in the proposed SPS. A short buffer threshold triggers the prebuffer process sooner and also saves more data when a video is skipped. However, a too short buffer threshold exposes the video player to a higher risk of playback stalls. In the proposed SPS, it is noted that when a prebuffer process is happening, the video player is actually downloading segments for both the currently-playing video and the prebuffering video. For this reason, a safe and straightforward solution is to set θ to twice the prebuffer threshold ρ (as in Equation (6)), so that the viewing buffer will not be drained out when a video is prebuffering simultaneously. A more in-depth optimization for θ will be conducted in future work.
$$\theta = 2\rho \quad (6)$$
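For illustration, the following Python sketch evaluates Equations (2)–(6) over a grid of candidate prebuffer amounts. The grid search, its step size, and the combined q + w criterion used to resolve Equation (4) are assumptions made for this example; the paper itself does not prescribe a particular search procedure.

```python
import math

def qoe_loss_risk(varrho: float, L: float, r: float, C: float) -> float:
    """Risk of QoE loss q(varrho), Equation (2)."""
    if varrho == 0:
        return 1.0
    return math.sqrt((L * r) / (C * varrho))

def wastage_ratio(varrho: float, L: float) -> float:
    """Data wastage ratio w(varrho), Equation (3)."""
    if varrho < L:
        return 0.0
    return 1.0 - L / varrho

def select_thresholds(L: float, r: float, C: float,
                      max_prebuffer: float = 30.0, step: float = 0.5):
    """Choose the prebuffer amount balancing q and w (one possible reading of
    Equation (4)), round it to whole segments (Equation (5)), and derive the
    viewing buffer threshold (Equation (6))."""
    candidates = [i * step for i in range(1, int(max_prebuffer / step) + 1)]
    # Minimize the combined penalty q + w; ties go to the smaller candidate.
    varrho = min(candidates,
                 key=lambda v: (qoe_loss_risk(v, L, r, C) + wastage_ratio(v, L), v))
    rho = math.ceil(varrho / L) * L   # Equation (5): round up to a segment multiple
    theta = 2 * rho                   # Equation (6)
    return varrho, rho, theta

# Example with the paper's experimental settings (2 s segments, 1000 Kbps video)
# and an assumed bandwidth of 1500 Kbps:
print(select_thresholds(L=2.0, r=1000.0, C=1500.0))  # -> (2.0, 2.0, 4.0)
```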
In the next section, the evaluation to prove the effectiveness of the proposed SPS is presented.

5. Performance Evaluation

This section provides the details and the results of the experiment conducted to evaluate the effectiveness of the proposed SPS.

5.1. Metrics and Scenarios

5.1.1. Metrics

The proposed SPS and the referenced methods were evaluated with respect to the following metrics. For all metrics, a lower value indicates a better performance.
  • QoE Loss ($L_Q$): The QoE loss $L_Q$ is the normalized loss of the estimated QoE relative to the maximum QoE the user could achieve.
    $$L_Q = 1 - \frac{Q}{Q_{max}} \in [0, 1] \quad (7)$$
    where Q is the estimated QoE of the user with respect to the frequency and duration of stalling events. It is calculated based on the eMOS [32] as follows:
    $$Q = \max\left(5.84 - 4.95 \cdot \frac{7 \cdot \max\left(\ln\left(\frac{N_R}{S_p}\right)/6 + 1,\ 0\right) + \frac{\min(T_R,\ 15)}{15}}{8},\ 0\right) \in [0, 5.84] \quad (8)$$
    where $N_R$ and $T_R$ denote the number and duration of stalling events, respectively, and $S_p$ is the number of segments being played. $Q_{max}$ is the maximum value calculated based on Equation (8), which is 5.84.
  • Data Wastage (W): The data wastage W is calculated as the percentage of unplayed segments versus the total segments downloaded ($S_D$).
    $$W = 1 - \frac{S_p}{S_D} \in [0, 1] \quad (9)$$
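Both metrics can be computed directly from a playback log. Below is a small Python sketch of Equations (7)–(9) under the eMOS model above; the function and variable names are assumptions made for this example, and $Q_{max}$ is fixed at 5.84 as stated.

```python
import math

Q_MAX = 5.84  # maximum eMOS value, reached when no stalling occurs

def estimated_qoe(n_stalls: int, stall_duration: float, segments_played: int) -> float:
    """Estimated QoE Q per the eMOS model, Equation (8)."""
    if segments_played == 0:
        return 0.0
    # With no stalls, ln(N_R / S_p) tends to minus infinity, so the max(...) term is 0.
    freq_term = 7.0 * max(math.log(n_stalls / segments_played) / 6.0 + 1.0, 0.0) if n_stalls > 0 else 0.0
    dur_term = min(stall_duration, 15.0) / 15.0
    return max(5.84 - 4.95 * (freq_term + dur_term) / 8.0, 0.0)

def qoe_loss(n_stalls: int, stall_duration: float, segments_played: int) -> float:
    """QoE loss L_Q, Equation (7)."""
    return 1.0 - estimated_qoe(n_stalls, stall_duration, segments_played) / Q_MAX

def data_wastage(segments_played: int, segments_downloaded: int) -> float:
    """Data wastage W, Equation (9)."""
    return 1.0 - segments_played / segments_downloaded

# Example: 2 stalls totaling 4 s over 90 played segments, with 110 segments downloaded.
print(qoe_loss(2, 4.0, 90), data_wastage(90, 110))
```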

5.1.2. Scenarios

In this experiment, firstly, the proposed SPS and the referenced methods were evaluated under various fixed bandwidths. Specifically, the bandwidth ranged from 750 Kbps to 2500 Kbps with a step size of 250 Kbps.
Then, all evaluated methods were tested under real mobile network slices. In particular, their performances were collected under a fast mobile network slice that represents the most ideal case, and a challenging mobile network slice where the bandwidth deteriorated and fluctuated due to possible congestion, interference, base station handover [6,7], or exceeded quota. Both mobile network slices were obtained from the real mobile network trace in [33] and their bandwidth histograms are shown in Figure 5.

5.2. Experimental Setup

Figure 6 depicts the framework used in this experiment.
The experiment was set up similarly to that in [34]. Both the streaming server and the client were virtual machines running Ubuntu 20.04 LTS with 4 GB of RAM. These machines were deployed on a Core-i9 physical machine with 64 GB of RAM that also ran Ubuntu 20.04 LTS. The streaming server was hosted by the native http package of Golang [35], which provided the widely-deployed HTTP/2 [36]. Furthermore, the emulation of the network bandwidth was configured on the server side with the Linux tc utility [37].
The short-form video streaming service was deployed on the server as a web application based on the dash.js framework [38], which was run on the client side via the Firefox web browser. In this experiment, it was assumed that the user was recommended a playlist of 6 videos, each 60 s long. For all videos, the open-source Big Buck Bunny [39] was chosen, encoded into segments of 2 s at 1000 Kbps. The viewing buffer threshold and the prebuffer threshold were the optimized pair {θ, ρ} updated after every successful segment download, as described in Section 4.3. Meanwhile, to the best of our knowledge, there is a lack of publicly available datasets of user scrolling behavior in short-form video streaming. Therefore, to simulate the user's scrolling behavior, each video was assumed to finish after its playback reached a predetermined period, as shown in Table 2.
In this experiment, video 2 represented a bad recommendation, causing the user to skip the video almost as soon as it was played. On the other hand, the watching time of videos 3–6 gradually decreased, indicating the deterioration of the user's interest over time [40].
The proposed SPS was evaluated in comparison to the following methods. A summary of the settings of all evaluated methods is shown in Table 3.
  • Next-One [12]: This method adopted the mechanism of the commercial TikTok application. The player first downloaded all segments of the currently-playing video, without any limit for the viewing buffer. After that, it did the same thing with one subsequent video.
  • WAS- β [17]: This method was the latest open-source effort in reducing the data wastage in short-form video streaming, by limiting the viewing video buffer to β seconds based on a simple binary search of offline trace-driven simulation data. Unlike Next-One, the player only focused on the currently-playing video on the screen and did not prebuffer any subsequent video.

5.3. Results

In this subsection, the performances of the proposed SPS and the referenced methods in each scenario are summarized.

5.3.1. Fixed Bandwidths Network

Figure 7 illustrates the QoE loss and data wastage of all methods across the fixed bandwidths. In general, as the bandwidth increased, the QoE loss (Figure 7a) tended to drop to 0. This was reasonable since larger bandwidths led to a lower risk of playback stalls. On the contrary, as the player could load the segments faster when the bandwidth increased, the data wastage (Figure 7b) tended to get worse. These trends applied to all evaluated methods. However, the improvement/impairment pace varied across bandwidth ranges and methods.
It can be observed from Figure 7 that, when the bandwidth dropped lower than the video bitrate (C ≤ 1000 Kbps), the QoE loss and data wastage performance of all methods changed at nearly identical rates. This was because such bandwidth ranges could not meet the demand of the video bitrate; it was tough to download the segments in time for playing out, or to preload segments in advance. Such an explanation can be visually clarified by Figure 8, which shows the time-varying buffers of the evaluated methods under 750 Kbps. Since the video buffer is drained as the user watches the video, the closer the finish point was to the horizontal axis (0 s), the less data were wasted.
Figure 8 shows that all methods severely suffered from a huge number of playback stalls for all videos, which is why their buffers usually stayed at or near 0 s throughout the streaming session. This also means that the player played every video segment as soon as it was downloaded and could hardly preload any segment. As a result, the performances of the evaluated methods could not be differentiated since their buffer or prebuffer strategies could not come into play.
However, when the bandwidth grew larger than the video bitrate (C > 1000 Kbps), the performances of the evaluated methods became distinguishable. In particular, Figure 7b shows that the WAS-β managed to keep the data wastage stable and always lower than any other method. Nevertheless, it reduced the QoE loss at the lowest rate among the evaluated methods, as shown in Figure 7a, indicating the worst QoE performance. On the other hand, although the Next-One could recover the QoE much faster than WAS-β, it traded this for the worst data wastage, which was always far beyond the others. Meanwhile, the proposed SPS provided the fastest improvement rate of the QoE. Besides, compared to Next-One, the proposed SPS kept the data wastage much closer to that of the WAS-β. In order to better understand the cause of such results, Figure 9 illustrates the time-varying buffers of the evaluated methods under 1500 Kbps.
Figure 9 demonstrates that, for all methods, there were several playback stalls at the beginning of some videos. This was due to the fact that the video buffer was not built up large enough during these first seconds. It should be noted that video 1 stalled under every method because it was the first video of the streaming session (i.e., there was no way to prebuffer it). For the case of the WAS-β, despite the lowest data wastage, stalling events occurred on almost all videos (Figure 9b), which explained the worst QoE performance. This was understandable because the WAS-β did not load any video other than the currently-playing one on the screen.
On the other hand, thanks to the prebuffer mechanism, Next-One and the proposed SPS could reduce the playback stalls more effectively. However, the Next-One suffered from more stalling events than the proposed SPS. Specifically, apart from video 1, video 3 also stalled when running the Next-One, while the proposed SPS could play it smoothly. This was because the Next-One prebuffered only one subsequent video and only when the currently-playing one had downloaded all segments. In this case, as video 2 was skipped too soon, there was no prebuffered data for video 3 when it started playing, as shown in Figure 9a.
Meanwhile, the novel prebuffer strategy of the proposed SPS could prebuffer subsequent videos much earlier (Figure 9c), thus eliminating playback stalls on all videos except the unavoidable stall on video 1. Furthermore, the SPS could reduce the wasted segments significantly compared to Next-One. Observing Figure 9c, the buffer and prebuffer were maintained at very low thresholds rather than climbing up as with the Next-One (Figure 9a). This was the effect of the viewing buffer threshold θ and prebuffer threshold ρ. As a result, fewer data were wasted when the user scrolled videos.
Interestingly, Figure 7 also shows that the data wastage of the proposed SPS dropped at 2000 Kbps instead of getting worse. This was thanks to the optimization of θ and ρ ; as the bandwidth increased, the { θ , ρ } pair tended to decrease, thus prebuffering even fewer segments.
In summary, the evaluation results under fixed-bandwidth networks show that the proposed SPS provided the lowest QoE loss among all methods while maintaining the data wastage close to the upper bound (WAS-β). In the next part, the performance of the SPS and the other methods is verified under real mobile network slices.

5.3.2. Real Mobile Network Slices

Table 4 and Table 5 summarize the QoE loss and data wastage performance of all methods under the fast and the challenging mobile network slice, respectively. The bold value of each metric indicates the best performance.
Both Table 4 and Table 5 show that, for both mobile network slices, the performances of all methods aligned with those under the fixed bandwidths. That is, with the fast mobile network slice (Table 4), there was no loss of QoE as the video player could download the video segments easily. In addition, the WAS-β provided the best data wastage performance. Next-One was the worst at this metric, approximately 11.3 times higher than WAS-β, whereas the proposed SPS was only 2.6 times higher than WAS-β.
Similarly, for the challenging mobile network slice (Table 5), although the WAS-β was still the best at limiting the data wastage, it traded this for the worst QoE loss. Furthermore, despite the better QoE loss compared to WAS-β (approximately 0.6 times), Next-One caused the highest data wastage, 8.7 times higher than that of WAS-β. Meanwhile, the proposed SPS performed best in reducing the QoE loss (0.3 times of WAS-β and 0.5 times of Next-One), while keeping the data wastage only 4.9 times that of WAS-β.

6. Discussion

Based on the evaluation results, it was shown that the commercial Next-One failed to prebuffer subsequent segments soon enough to prevent playback stalls. In addition, as the pending video was prebuffered without any limit, the data wastage caused by the Next-One was the worst among the evaluated methods. Such results aligned well with the hypothesis stated in Section 3 on the drawbacks of the buffer strategies of the commercial players. On the other hand, although it managed to keep the data wastage at the lowest level, the WAS-β completely gave up the user's QoE. This was because the WAS-β only limited the viewing buffer and did not prebuffer any video. As a result, new videos entering the screen did not have enough buffer to avoid stalling events.
Meanwhile, the proposed SPS outperformed all methods in ensuring the least QoE loss. Examining the experimental results, it was found that such an outperformance was because the pending videos were prebuffered very early compared to the Next-One. This has proven the effectiveness of the proposed prebuffer strategy of the SPS, which prebuffers all remaining pending videos in the playlist instead of only the next one in line. The results also highlighted the contribution of the buffer threshold θ, which allowed the prebuffer process to start simultaneously with the buffer process without interrupting the playback. Furthermore, the optimization of the buffer threshold θ and prebuffer threshold ρ succeeded in holding the data wastage close to the upper bound WAS-β, without any sacrifice in QoE.
In order to investigate whether such performances of the evaluated methods were consistent among different sets of scrolling behaviors, the experiment in Section 5 was also conducted under randomized scrolling behavior sets (considering the real mobile network slices). The randomized scrolling behavior sets were generated following a Gaussian distribution (similar to that of [12]); a minimal sketch of such a generation is given after the list below. Specifically, the following sets were considered:
  • impatient set ( N ( 15 , 7.5 ) ): the user scrolled every video relatively early, and
  • patient set ( N ( 45 , 22.5 ) ): the user scrolled every video relatively late.
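The following Python sketch illustrates one way such a randomized set could be drawn. It is an assumption-laden example: the clipping bounds and the per-video independence are not specified in the paper, which only states that a Gaussian distribution similar to [12] was used.

```python
import random

def random_scroll_times(n_videos: int, mean: float, std: float,
                        video_length: float = 60.0, seed: int = 0):
    """Draw one scrolling-behavior set: a watch time (s) for each video,
    sampled from N(mean, std) and clipped to a playable range."""
    rng = random.Random(seed)
    return [min(max(rng.gauss(mean, std), 1.0), video_length) for _ in range(n_videos)]

impatient_set = random_scroll_times(6, mean=15.0, std=7.5)   # N(15, 7.5)
patient_set = random_scroll_times(6, mean=45.0, std=22.5)    # N(45, 22.5)
```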
The results are depicted in Figure 10 and Figure 11.
Inferring from Figure 10, the result trends were consistent with those of the fast mobile network slice in Section 5.3.2 for both scrolling behavior sets. For the challenging mobile network slice in Figure 11, a performance consistent with that in Section 5.3.2 was achieved with the patient scrolling behavior set (Figure 11b). However, for the impatient scrolling behavior set (Figure 11a), all methods performed relatively identically: the QoE loss was very high while the data wastage was very low. This was because the user scrolled every video so early that the player failed to ever prebuffer any subsequent video. In order to improve the performance of the proposed SPS with respect to such a scrolling behavior, future work may shorten the buffer threshold θ to help the client start the prebuffering process sooner.
On the other hand, it is possible to consider the use of the novel HTTP/3 protocol [41]. Specifically, existing research has found that HTTP/3 brought effective boosts to the user's QoE in long-form video streaming [34,42], especially for the case of the mobile network [43,44]. Yet, no research has investigated the corresponding impact of HTTP/3 on either the QoE or the data wastage in short-form video streaming. Table 6 and Figure 12 show a preliminary HTTP/2 versus HTTP/3 comparison of the proposed SPS under the challenging mobile network slice and with the scrolling behavior as in Section 5.
According to Table 6, HTTP/3 significantly reduced the QoE loss of the proposed SPS compared to HTTP/2, while their data wastage performances did not significantly differ. Inspecting Figure 12, it is shown that the utilized bandwidth dropped when the video player was downloading 2 videos (i.e., buffering the currently-playing video and prebuffering a subsequent video) simultaneously. However, Figure 12a shows that there were some unusual drops of the bandwidth (e.g., near 70 s, 130 s, and 215 s) when using HTTP/2, even though the video player was only downloading 1 video. As a result, more playback stalls occurred when using HTTP/2 compared to HTTP/3, thus degrading the user's QoE.
It is speculated that such a phenomenon was related to the differences in the congestion control mechanism between the transport layer of HTTP/2 and HTTP/3. It has been reported that the transport QUIC [45] of HTTP/3 updates the congestion window more frequently and increases it more aggressively than TCP [46] of HTTP/2, therefore utilizing the bandwidth better [34]. Thus, there was no unusual bandwidth deterioration occurring when using HTTP/3 as shown in Figure 12b. As for future work, such a performance discrepancy of the proposed SPS over HTTP/2 and HTTP/3 will be investigated more deeply to improve the user’s QoE in short-form video streaming.

7. Conclusions

In this paper, the novel Sprinkle Prebuffer Strategy (SPS) is proposed for ensuring the user's quality of experience in short-form video streaming with less compensation for data wastage. In particular, the proposed SPS redefines the buffer as the viewing video buffer and the pending video prebuffer. Then, instead of prebuffering only one subsequent video as the commercial applications do, or not prebuffering any video as the referenced open-source solution does, the SPS prebuffers all pending videos concurrently with respect to the optimized thresholds of the aforementioned buffer and prebuffer. Evaluation results have confirmed the superior performance of the proposed SPS in reducing QoE loss to the least while keeping the data wastage close to the upper bound. In future work, the impact of HTTP/3 will be studied further to benefit the user's QoE.

Author Contributions

Conceptualization, C.M.T., T.N.D. and N.G.B.; Methodology, C.M.T., T.N.D. and P.X.T.; Supervision, P.X.T. and E.K.; Writing—original draft, C.M.T., T.N.D. and N.G.B.; Writing—review and editing, P.X.T. and E.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. TikTok. Available online: https://www.tiktok.com/ (accessed on 28 February 2022).
  2. Youtube Shorts. Available online: https://support.google.com/youtube/answer/10059070 (accessed on 28 February 2022).
  3. Instagram Reels. Available online: https://about.instagram.com/blog/announcements/introducing-instagram-reels-announcement (accessed on 28 February 2022).
  4. Average Time Spent per Session on Selected Short-Form Video Platforms Worldwide as of March 2021. Available online: https://www.statista.com/statistics/1237210/average-time-spent-persession-on-short-form-video-platforms-worldwide/ (accessed on 28 February 2022).
  5. Short Form Video Statistics and Marketing Trends for 2022. Available online: https://www.reelnreel.com/short-form-video-statistics-and-marketing/ (accessed on 28 February 2022).
  6. Qamar, F.; Hindia, M.H.D.N.; Dimyati, K.; Noordin, K.A.; Amiri, I.S. Interference management issues for the future 5G network: A review. Telecommun. Syst. 2019, 71, 627–643.
  7. Mollel, M.S.; Abubakar, A.I.; Ozturk, M.; Kaijage, S.F.; Kisangiri, M.; Hussain, S.; Imran, M.A.; Abbasi, Q.H. A Survey of Machine Learning Applications to Handover Management in 5G and Beyond. IEEE Access 2021, 9, 45770–45802.
  8. Bentaleb, A.; Taani, B.; Begen, A.C.; Timmerer, C.; Zimmermann, R. A Survey on Bitrate Adaptation Schemes for Streaming Media over HTTP. IEEE Commun. Surv. Tutor. 2019, 21, 562–585.
  9. Dao, N.N.; Tran, A.T.; Tu, N.H.; Thanh, T.T.; Bao, V.N.Q.; Cho, S. A Contemporary Survey on Live Video Streaming from a Computation-Driven Perspective. ACM Comput. Surv. 2022.
  10. Ran, D.; Hong, H.; Chen, Y.; Ma, B.; Zhang, Y.; Zhao, P.; Bian, K. Preference-Aware Dynamic Bitrate Adaptation for Mobile Short-Form Video Feed Streaming. IEEE Access 2020, 8, 220083–220094.
  11. Chen, Z.; He, Q.; Mao, Z.; Chung, H.M.; Maharjan, S. A Study on the Characteristics of Douyin Short Videos and Implications for Edge Caching. In Proceedings of the ACM Turing Celebration Conference, ACM TURC'19, Chengdu, China, 17–19 May 2019.
  12. He, J.; Hu, M.; Zhou, Y.; Wu, D. LiveClip: Towards Intelligent Mobile Short-Form Video Streaming with Deep Reinforcement Learning. In Proceedings of the 30th ACM Workshop on Network and Operating Systems Support for Digital Audio and Video, NOSSDAV'20, Istanbul, Turkey, 10–11 June 2020; pp. 54–59.
  13. Duc, T.N.; Minh, C.T.; Xuan, T.P.; Kamioka, E. Convolutional Neural Networks for Continuous QoE Prediction in Video Streaming Services. IEEE Access 2020, 8, 116268–116278.
  14. Nguyen Duc, T.; Minh Tran, C.; Tan, P.X.; Kamioka, E. Modeling of Cumulative QoE in On-Demand Video Services: Role of Memory Effect and Degree of Interest. Future Internet 2019, 11, 171.
  15. Barman, N.; Martini, M.G. QoE Modeling for HTTP Adaptive Video Streaming: A Survey and Open Challenges. IEEE Access 2019, 7, 30831–30859.
  16. Sakura Mobile Data Pricing. Available online: https://www.sakuramobile.jp/long-term/pricing/ (accessed on 28 February 2022).
  17. Zhang, G.; Liu, K.; Hu, H.; Guo, J. Short Video Streaming with Data Wastage Awareness. In Proceedings of the 2021 IEEE International Conference on Multimedia and Expo (ICME), Shenzhen, China, 5–9 July 2021; pp. 1–6.
  18. Chen, L.; Zhou, Y.; Chiu, D.M. Smart Streaming for Online Video Services. IEEE Trans. Multimed. 2015, 17, 485–497.
  19. Seufert, M.; Egger, S.; Slanina, M.; Zinner, T.; Hoßfeld, T.; Tran-Gia, P. A Survey on Quality of Experience of HTTP Adaptive Streaming. IEEE Commun. Surv. Tutor. 2015, 17, 469–492.
  20. Petrangeli, S.; Hooft, J.V.D.; Wauters, T.; Turck, F.D. Quality of Experience-Centric Management of Adaptive Video Streaming Services: Status and Challenges. ACM Trans. Multimed. Comput. Commun. Appl. 2018, 14, 1–29.
  21. Stockhammer, T. Dynamic Adaptive Streaming over HTTP: Standards and Design Principles. In Proceedings of the Second Annual ACM Conference on Multimedia Systems, MMSys'11, San Jose, CA, USA, 23–25 February 2011; pp. 133–144.
  22. Yarnagula, H.K.; Juluri, P.; Mehr, S.K.; Tamarapalli, V.; Medhi, D. QoE for Mobile Clients with Segment-Aware Rate Adaptation Algorithm (SARA) for DASH Video Streaming. ACM Trans. Multimed. Comput. Commun. Appl. 2019, 15, 1–23.
  23. Spiteri, K.; Urgaonkar, R.; Sitaraman, R.K. BOLA: Near-Optimal Bitrate Adaptation for Online Videos. IEEE/ACM Trans. Netw. 2020, 28, 1698–1711.
  24. Jiang, J.; Sekar, V.; Zhang, H. Improving Fairness, Efficiency, and Stability in HTTP-Based Adaptive Video Streaming With Festive. IEEE/ACM Trans. Netw. 2014, 22, 326–340.
  25. Zahran, A.H.; Raca, D.; Sreenan, C.J. ARBITER+: Adaptive Rate-Based InTElligent HTTP StReaming Algorithm for Mobile Networks. IEEE Trans. Mob. Comput. 2018, 17, 2716–2728.
  26. Karn, N.K.; Zhang, H.; Jiang, F.; Yadav, R.; Laghari, A.A. Measuring bandwidth and buffer occupancy to improve the QoE of HTTP adaptive streaming. Signal Image Video Process. 2019, 13, 1367–1375.
  27. Raca, D.; Zahran, A.H.; Sreenan, C.J.; Sinha, R.K.; Halepovic, E.; Jana, R.; Gopalakrishnan, V.; Bathula, B.; Varvello, M. Empowering Video Players in Cellular: Throughput Prediction from Radio Network Measurements. In Proceedings of the 10th ACM Multimedia Systems Conference, MMSys'19, Amherst, MA, USA, 18–21 June 2019; pp. 201–212.
  28. Bentaleb, A.; Begen, A.C.; Harous, S.; Zimmermann, R. Data-Driven Bandwidth Prediction Models and Automated Model Selection for Low Latency. IEEE Trans. Multimed. 2021, 23, 2588–2601.
  29. Mei, L.; Hu, R.; Cao, H.; Liu, Y.; Han, Z.; Li, F.; Li, J. Realtime mobile bandwidth prediction using LSTM neural network and Bayesian fusion. Comput. Netw. 2020, 182, 107515.
  30. Douyin. Available online: https://www.douyin.com/ (accessed on 28 February 2022).
  31. Fiddler. Available online: https://www.telerik.com/fiddler (accessed on 28 February 2022).
  32. Claeys, M.; Latré, S.; Famaey, J.; De Turck, F. Design and Evaluation of a Self-Learning HTTP Adaptive Video Streaming Client. IEEE Commun. Lett. 2014, 18, 716–719.
  33. Mobile Throughput Trace Data. Available online: http://sonar.mclab.info/tracedata/TCP/ (accessed on 28 February 2022).
  34. Tran, C.M.; Nguyen Duc, T.; Tan, P.X.; Kamioka, E. Cross-Protocol Unfairness between Adaptive Streaming Clients over HTTP/3 and HTTP/2: A Root-Cause Analysis. Electronics 2021, 10, 1755.
  35. golang http. Available online: https://golang.org/pkg/net/http/ (accessed on 28 February 2022).
  36. RFC 7540; Hypertext Transfer Protocol Version 2 (HTTP/2). Internet Engineering Task Force: Fremont, CA, USA, 2015.
  37. linux tc. Available online: https://man7.org/linux/man-pages/man8/tc.8.html (accessed on 28 February 2022).
  38. dash.js. Available online: https://github.com/Dash-Industry-Forum/dash.js/wiki (accessed on 28 February 2022).
  39. Big Buck Bunny. Available online: https://peach.blender.org/ (accessed on 28 February 2022).
  40. Li, J.; Zhao, H.; Hussain, S.; Ming, J.; Wu, J. The Dark Side of Personalization Recommendation in Short-Form Video Applications: An Integrated Model from Information Perspective. In Diversity, Divergence, Dialogue; Springer International Publishing: Cham, Switzerland, 2021; pp. 99–113.
  41. Hypertext Transfer Protocol Version 3 (HTTP/3), draft-ietf-quic-http-34; Internet Engineering Task Force: Fremont, CA, USA, 2021.
  42. Nguyen, M.; Amirpour, H.; Timmerer, C.; Hellwagner, H. Scalable High Efficiency Video Coding Based HTTP Adaptive Streaming over QUIC. In Proceedings of the Workshop on the Evolution, Performance, and Interoperability of QUIC, EPIQ'20, Virtual, 10–14 August 2020; Association for Computing Machinery: New York, NY, USA, 2020; pp. 28–34.
  43. Perna, G.; Trevisan, M.; Giordano, D.; Drago, I. A first look at HTTP/3 adoption and performance. Comput. Commun. 2022, 187, 115–124.
  44. Cicconetti, C.; Lossi, L.; Mingozzi, E.; Passarella, A. A Preliminary Evaluation of QUIC for Mobile Serverless Edge Applications. In Proceedings of the 2021 IEEE 22nd International Symposium on a World of Wireless, Mobile and Multimedia Networks (WoWMoM), Pisa, Italy, 7–11 June 2021; pp. 268–273.
  45. RFC 9000; QUIC: A UDP-Based Multiplexed and Secure Transport. Internet Engineering Task Force: Fremont, CA, USA, 2021.
  46. RFC 793; Transmission Control Protocol. Internet Engineering Task Force: Fremont, CA, USA, 1981.
Figure 1. The video requests generated by the Youtube Shorts application when streaming short-form videos.
Figure 2. An illustration of the prebuffer strategy of the proposed SPS.
Figure 3. The flowchart of the proposed SPS.
Figure 4. The buffer model.
Figure 5. The bandwidth histograms of fast mobile network slice (a) and challenging mobile network slice (b).
Figure 6. The experimental framework.
Figure 7. QoE loss (a) and data wastage (b) of the evaluated methods across the fixed bandwidths.
Figure 8. The time-varying buffers under 750 Kbps of the Next-One (a), WAS- β (b), and the proposed SPS (c).
Figure 9. The time-varying buffers under 1500 Kbps of the Next-One (a), WAS- β (b), and the proposed SPS (c).
Figure 10. The QoE loss and data wastage performance of the evaluated methods under the fast mobile network slice, under (a) the impatient scrolling behaviors set and (b) the patient scrolling behaviors set.
Figure 11. The QoE loss and data wastage performance of the evaluated methods under the challenging mobile network slice, under (a) the impatient scrolling behaviors set and (b) the patient scrolling behaviors set.
Figure 12. Preliminary time-varying performance comparison of the proposed SPS under the challenging mobile network slice and with the scrolling behavior as in Section 5, over (a) HTTP/2 and (b) HTTP/3.
Table 1. The notations used in this section.

Notation | Unit | Definition
t | second | a time instant within the streaming session
L | second | the segment duration
B_t | second | the buffer level at time t
r | Kbps | the video bitrate
C | Kbps | the utilized bandwidth
q | - | the risk of QoE loss
w | - | the data wastage ratio
ϱ | second | the prebuffer threshold
ρ | second | the prebuffer threshold rounded with respect to L
θ | second | the buffer threshold
Table 2. Finish period of each video.

Video | 1 | 2 | 3 | 4 | 5 | 6
Finish period (s) | 60 | 2 | 60 | 45 | 30 | 15
Percentage of video duration | 100% | 3% | 100% | 75% | 50% | 25%
Table 3. Settings of the evaluated methods.

Methods | Viewing Buffer Threshold (s) | Prebuffer Threshold (s) | Number of Prebuffered Subsequent Videos
Next-One [12] | Full video | Full video | 1
WAS-β [17] | β | 0 | 0
SPS | θ | ρ | All remaining videos
Table 4. The QoE loss and data wastage performance of the evaluated methods under the fast mobile network slice.

Metrics | Next-One | WAS-β | SPS
QoE Loss | 0.0 | 0.0 | 0.0
Data Wastage | 0.394 | 0.035 | 0.091
Table 5. The QoE loss and data wastage performance of the evaluated methods under the challenging mobile network slice.

Metrics | Next-One | WAS-β | SPS
QoE Loss | 0.303 | 0.490 | 0.222
Data Wastage | 0.146 | 0.016 | 0.080
Table 6. Preliminary HTTP/2 versus HTTP/3 performance comparison of the proposed SPS under the challenging mobile network slice and with the scrolling behavior as in Section 5.

Metrics | HTTP/2 | HTTP/3
QoE Loss | 0.222 | 0.087
Data Wastage | 0.080 | 0.088