1. Introduction
With the rapid development of mobile devices and network technology, we are experiencing an era of explosive mobile data traffic. According to Ericsson’s report [
1], global mobile video traffic topped 136 EB/month, accounting for 74% of global mobile data by the end of 2024. The demand for large-scale video data information storage and transmission has promoted the development of data centers, especially cloud data centers. The processing method of the traditional cloud computing model is to send data to a remote cloud computing center. After processing and analysis, the data result is sent back to the user terminal [
2]. However, according to the Cisco white paper [
3], by 2023, the number of devices connected to IP networks will be more than three times the global population, and there will be 8.7 billion handheld or personal mobile devices and 4.4 billion Machine-To-Machine connections.
The rapid growth of networked devices makes the traditional cloud computing model unable to meet the low-latency and high-speed requirements of mobile video traffic. The mobile edge computing (MEC) model takes advantage of storage and computing resources at the edge of the wireless access network to provide services for mobile users, which can make up for the deficiencies of cloud computing [
4]. In recent years, edge caching is an effective way to solve the rapid increase in bandwidth demand caused by data-intensive mobile video streaming [
5]. Edge caching technology loads data from the data storage into the cache on demand through the mobile edge server, effectively increasing the transmission rate and reducing packet loss and transmission delay. Edge caching strategy has been a hot research issue, and it is widely used in the industrial internet of things [
6], vehicular ad hoc networks (VANETs) [
7,
8], and mobile edge networks [
9,
10,
11].
In contrast to conventional live and on-demand video streaming that are consumed using TVs and PCs, mobile video streaming is generally watched by users on mobile devices with wireless connections, i.e., 3G/4G cellular or WiFi [
12]. To meet user quality of experience (QoE) for mobile video streaming, edge network services should be stable, reliable, and low-cost [
13]. With the rapid development of video streaming media technology, higher requirements are put forward for the smoothness, clarity, and content quality of video playback. Since video streaming is an online service, video latency is the most intuitive factor that affects the quality of user experience. Considering that the video caching service is also online, the cache hit ratio can better reflect whether users can access their favorite videos accurately under the appropriate network conditions and device memory size.
To ensure that mobile users can enjoy video services satisfactorily, we need to pay attention to its mobility characteristics and low delay requirements. As an example shown in
Figure 1, in the edge caching scene of mobile video streaming, video content is provided by the content provider or central cloud server in the core network, and then cached in the macro base station (MBS) and the small base station (SBS) through the backhaul link. The SBS closest to the mobile user transmits the cached content to the user equipment via a high-speed link according to the user’s request. In the whole process, the MEC server is responsible for caching strategy scheduling, as well as content compression, decompression, encoding, and decoding tasks. As the current map navigation technology is relatively mature, our research on mobile edge cache is based on the premise that the user’s moving path is known. Using edge caching strategies to improve mobile video streaming services currently faces the following problems: (1) When the user path is known, arranging the base stations on the route to buffer appropriate video content to minimize the average delay is the key to improving the QoE. (2) When users move, the handoffs between base stations will be caused, which will suddenly increase time delay and cause video service interruption. (3) In the case of limited base station cache capacity, reasonably arranging cache content and updating to improve cache hit ratio is an essential issue for mobile edge cache.
Several relevant studies have been conducted to address the above questions. However, such studies are challenging because many different factors are involved, including user mobility pattern, video content characteristics, and mobile edge network characteristics. Previous research usually focused on a single aspect, such as studying the popularity of mobile video content [
14,
15], user mobility models [
16], or mobile network strategies to support mobile video streaming [
17]. The limitation of previous studies is that they did not consider the combined effects of user movement, video characteristics, and edge network deployment.
In this paper, we suggest solving the above problems from the perspective of the speed of mobile users and the popularity of video content. From a horizontal point of view, selecting the cached base station on the moving path according to the user’s speed and deciding how much video content to cache, which can reduce the high delay and service interruption caused by the handoffs between base stations. From a vertical perspective, according to the popularity of video content, each base station caches high-priority videos, and uses fine-grained and hierarchical storage to improve the cache hit rate. We combined the road traffic model to build a mobile video caching model based on popularity and speed perception.
The main contributions of the paper are summarized as follows:
First, we propose the hierarchical architecture of the mobile edge cache network, which is divided into three layers: cloud, edge, and user. The edge layer is composed of MBSs and SBSs, and users are distinguished by mobile speed. The hierarchical structure can make full use of the cache space and improve the cache hit ratio.
Second, we propose an edge caching strategy based on user’s speed and popularity for mobile video streaming (ECMSP). When the user path is known, we integrate and optimize the caching mechanism of mobile video streams from the perspectives of video popularity and user movement speed. This strategy can significantly reduce the high delay caused by user movement and service interruption caused by the handoffs between base stations.
Finally, we implemented simulation experiments to compare our caching strategy with the other three classic caching schemes. Experimental results show that our scheme achieves the best performance in terms of average latency and cache hit rate for mobile video streaming scenes.
This paper is organized as follows:
Section 2 introduces the related works.
Section 3 presents the architecture and methodology of the proposed edge caching strategy based on user’s speed and popularity for mobile video streaming. The experimental evaluation and performance results are presented in
Section 4.
Section 5 gives some conclusions and discusses potential future work.
2. Related Works
The content distribution technology caches popular video files in intermediate servers and agents, eliminating the need for repeated transmissions from remote servers. This can not only significantly save the transmission resource consumption of the core network but also improve the quality of user experience [
18].
Content distribution networks (CDN) have been well investigated in the Internet [
19]. The work in [
20] characterizes the mechanism by which CDNs serve video content, and the implications for video performance, especially emerging 4K video streaming. Researchers of [
21] elaborate on the comparative between HLS (HTTP Live Streaming) and RTMP (Real Time Messaging Protocol) with CDN and also without it. The result shows live video streaming with CDN has better performance than without CDN. However, we cannot simply apply traditional CDN-based content distribution techniques to mobile networks, as legacy CDN-based content distribution mechanism is generally designed for traditional wired communication network architecture [
22]. In mobile networks, the resources (e.g., storage, bandwidth, computing capacity) and the position of the deployed servers are constrained. More importantly, the hit rate of cached content items could be rather low in mobile networks due to the content dynamics, user mobility, and limited number of users in a cell.
An efficient video content caching strategy is of great significance for improving the quality of user experience, and related research are emerging constantly.
Currently, the combination of mobile edge computing and video distribution caching has become a new method of mobile video stream caching, which mainly handles low-latency and intensive computing tasks by transferring cloud computing power and resources to the network edge. Researchers of [
23] proposed a distributed caching architecture to bring content closer to the requester to reduce content delivery delays. They designed caching algorithms for multiple operators that cooperate by pooling together their co-located caches, in an effort to aid each other, in order to avoid extensive delays due to downloading content from distant servers. Guan et al. [
24] proposed a preference learning-based edge caching strategy (PrefCache). It learns to obtain the user’s preference for the video, and, if the video meets the user’s needs, it will be placed in the cache. Yang et al. [
11] proposed a group-partitioned video caching strategy algorithm (GPC) in vehicular networks. The algorithm first partitions the video requesters and then employs the Lagrange function and Lambert function to solve the cache probability matrix as an optimization variable.
In addition, the key to caching mobile video streams is to pay attention to user mobility. It is the focus of many related works to formulate corresponding caching strategies according to the characteristics of user movement. Su et al. [
25] proposed a cross-entropy-based caching scheme in vehicular networks. They analyzed the features of vehicular content requests based on the content access pattern, vehicle velocity, and road traffic density. The work in [
26] shows a content prefetching strategy to deal with the dynamic topology in VANETs. The author believes that the pre-caching scheme should be associated with multiple locations and multiple factors. Through machine learning technology, they combine driver preferences with map navigation information to determine the content cache node. Yao et al. [
27] chose to use OBU (On Board Unit) as the edge cache node. According to the probability of users’ vehicles reaching different hotspots, they proposed a method based on partial matching (PPM) and chose the nodes that stayed in the hotspots for a long time as cache nodes.
Although there have been some works on the cache hit ratio and cache delay in the edge cache of mobile video streams, they often only pay attention to one aspect of mobility or popularity, especially without considering the impact of user movement speed. They also did not consider the impact of the limitation of cache capacity.
3. Architecture and Methodology
In this section, we first give the layered architecture of the mobile edge caching network, and then explain the overall system of the mobile video stream edge caching. Next, we deploy the base station to the road traffic model and perform modeling. Finally, we propose our edge caching strategy for mobile video streams based on user’s speed and popularity.
3.1. The Hierarchical Architecture of Mobile Edge Cache Network
Mobile edge caching provides a highly distributed caching environment close to mobile users, which can be used to deploy applications and services, and store and process content [
28]. It can effectively place cloud computing and cloud storage on the edge of the network to enable content. Services and applications are accelerated to improve responsiveness from the edge. By caching the data to the local MEC server in advance, users can download the requested content directly from the local cache, thereby reducing redundant transmission, achieving faster service response and higher quality user experience.
Since our caching strategy mainly focuses on the user’s speed, we have changed the architecture of the mobile edge caching network, which is divided into three layers, as shown in
Figure 2.
Cloud Layer: Mainly composed of a central cloud server, and the video content is provided by the content provider of the core network.
Edge Layer: Composed of a MEC server and a base station. The MEC service is responsible for scheduling and coordination, and the base station is responsible for receiving and buffering video content to provide to the user. Base stations are deployed in groups of MBSs and SBSs.
User Layer: Different users are distinguished by different moving speeds, and different speeds have different caching strategies for them. They hold devices that can watch videos and use them to request content from the upper base station.
Based on the above three-layer mobile edge network, we apply mobile edge caching to mobile video streams to form a complete set of efficient caching systems. As shown in
Figure 3, the system consists of three parts, video traffic, mobile edge network, and core network. Video content comes with its own bit rate, transmission delay, and popularity attribute to distinguish, which represents the mobile video stream that can be seen on the market at present. The mobile edge network is responsible for managing user information, request queues, network control, and caching and updating strategies. The user model and the video model are combined. That is, each user not only has his own attributes such as moving speed and location, but also makes a request for one of the pieces of video content. The system forms a user video request queue according to the timeline, and transmits the video in the cloud or edge cache to the user device according to the specified location through the task request and response mechanism. Among them, the mobile edge server is responsible for monitoring the operation of the network, and does an excellent job in network congestion control and coordination with the road traffic model. In addition, the cache strategy includes two parts, one is the pre-caching strategy when the cache space is not full, and the other is the cache update strategy after the cache space is full. These three parts perform their duties and coordinate and cooperate to ensure the stable operation of the cache system.
3.2. Network Model
The deployment method of the base station is shown in
Figure 4. Both MBSs and SBSs are deployed on the road grid in a cellular manner. The set of MBSs and SBSs are denoted by
and
, respectively. Assuming ignoring the obstruction of buildings, each base station is responsible for a circular area centered on itself, where the coverage radius of MBSs is
, and the coverage radius of SBS is
. The cache capacity in each base station is limited. The cache capacity of MBSs and SBSs is determined by
and
, respectively.
Mobile Users: We assume that all users requesting video services are included, conforming to the Poisson Point Process (PPP) distribution. The collection of mobile user equipment (UE) is represented as . Each mobile user is identified with three components, and is presented as , where and are position coordinates of the users. represents the user’s moving speed.
Video Content: We resume that each piece of video content has a random size and popularity. The video content collection can be expressed as
. Each piece of video content can be expressed as
, where
and
represent the size and popularity of the video content.
is the time delay for users to obtain content. The different popularity of the video means different user request probabilities, expressed as vector
:
where
is the Zipf index [
29].
The transmission distance of the video content can be calculated from the real-time position coordinates of MBSs, SBSs, and mobile users. Among them, the maximum transmission distances from MBSs to SBSs and SBSs to UEs are denoted by
and
, respectively. The spectrum efficiency performances of MBS
m, SBS
n, and UE
is given by means of
where
and
are the transmission power. Path loss index corresponding to interference link is expressed as:
,
,
.
is the unilateral power spectral density of additive Gaussian white noise.
3.3. Video Caching Model
3.3.1. Cache Hit Ratio
The cache hit ratio is the most intuitive indicator of the effectiveness of the cache strategy. When a UE requests a video segment in , we can display cache hit probabilities as follows.
Based on the segmented transmission technology of video streaming media [
30], the video content is cached in block grouping. Assume that, in the
i-th prevalent sequence, there are video contents of diverse sizes in the
group, while the average video size of group
) is
. The ratio of video clips in group
g to aggregate files is indicated by
. Furthermore, given a cache memory of
and a request segment of
, the minimum cache hit probability of UEs can be denoted by
, where the caching strategy for all UEs can be expressed as
. Hence, the cache hit probability of a video requested to a SBS is
3.3.2. Average Delay
The average delay is the most important indicator that affects the quality of user experience. After obtaining the video content
, the MBS
m sends content
f to the SBS
n. The delay of acquiring content
from
m to
n can be obtained by
where the transmission distance from
m to
n is demoted by
, and the average bandwidth during the route from
m to
n is denoted by
. The possibility for sending the copy of video content
f can be expressed as
, which is given by
Here,
means that SBS
n has completely cached video content
f. The transmission of video content from the SBS to the UE requires a similar process. To sum up, the average delay of video caching is given by
3.4. Video Caching Strategy
3.4.1. Problem Derivation
Our research work aims to maximize the hit rate of mobile video streaming cache and minimize the average transmission delay for mobile users to obtain the video. In this section, we propose the problem of maximizing video cache hit ratio and minimizing transmission delay, and present a mobile video cache strategy algorithm based on speed and popularity to obtain optimal variable .
The joint caching hit ratio maximization and caching delay minimization problem is formulated by
To find the solution to problem , we first give the following theorem:
Theorem 1. Under the terms of , , , revenue function has a maximum cache probability for UE, i.e., .
Then, we apply the relaxation variable
to find the optimal result by the Karush–Kuhn–Tucher (KKT) condition. With
, we can get the Lagrange function as follows:
In addition, a KKT-based solution to the above optimization problem must satisfy the following conditions:
where the first equation represents the necessary condition for obtaining the extreme value. The second denotes the coefficient constraint. The last equation means that the file size must not exceed the RAM size of users. The optimized model above can be evaluated by means of conditional solution
.
3.4.2. Algorithm Design
In this part, we first explain the caching mechanism of our proposed caching strategy, and then propose the ECMSP algorithm to maximize the profit function by setting and solving the caching probability matrix.
When a user moves from area A to area C as shown in
Figure 5, the duration of the video requires the user to span the area covered by three base stations. Calculate the user’s stay time in each area based on user’s average speed, and then calculate the size of the video content that needs to be cached in each area based on the transmission rate. In the area at the junction of base stations, additional video content needs to be cached due to speed calculation errors and the handoffs between base stations. When there are multiple types of video content that need to be cached, the content with higher priority is selected according to the popularity of the video.
Next, we give the basic algorithm of the ECMSP strategy, which is mainly divided into two parts: grouping and optimization. Our algorithm is shown in Algorithm 1. We group MBSs and SBSs and cooperate to cache popular videos. First, set the buffer capacity
and
of different edge base stations, the total video popularity order
p, and other attributes
of each video content. We initialize the communication radius
and
of MBSs and SBSs, the Poisson distribution intensity
, the user’s position coordinates and speed
, the background thermal noise power
, etc. After a series of cycles and updating the caching strategy matrix, we obtain the optimal caching strategy matrix
and the optimal total revenue function value
.
Algorithm 1 Edge caching strategy based on mobile speed and popularity |
- 1:
Getting the initialization matrix dimension - 2:
- 3:
while do - 4:
while do - 5:
- 6:
while do - 7:
- 8:
while do - 9:
- 10:
Derive the cache probability with the manner in (9) - 11:
if then - 12:
Restricting the optimal caching strategy matrix - 13:
- 14:
end if - 15:
Refresh matrix C and receive a new - 16:
end while - 17:
end while - 18:
end while - 19:
if then - 20:
- 21:
end if - 22:
end while - 23:
Calculating the maximum value of total revenue function in (8)
|
5. Discussion
This paper has proposed an edge caching strategy for mobile video streaming, combined with map navigation services, which can cache the corresponding content in advance on the base station on the user’s route based on the user’s moving speed and the popularity of the video content when the user’s moving path is known. The purpose of this research work is to reduce the impact of the handoffs between base stations and service interruption during user movement, reduce the time delay for users to obtain video, improve the cache hit ratio of edge base stations, and improve the quality of user experience through caching strategies.
In order to evaluate the effectiveness of this strategy, we conducted a comparative experiment with the method in the latest literature. Although it was a simulation experiment, the data set of the video content in our experiment was selected from real data collected by a company engaged in the industry. By comparison, our proposed caching strategy has achieved better performance than other solutions in terms of average delay and cache hit ratio. The fly in the ointment is that only road models can be used in the simulation of mobile scenes, and the real factors of road traffic cannot be considered, such as the signal blocking by buildings or vehicles, and the influence between roadside signal sources. In addition, testing the performance of the caching strategy in the field will encounter other real challenges. On the one hand, the user’s mobile information is obtained, which involves the user’s security and privacy. A strategy is needed that can obtain the user’s moving path in advance without infringing on the user’s interests. On the other hand, some videos cannot be communicated between users due to copyright issues, which involves the intellectual property rights of the video content, and a specific strategy is needed to solve this problem. Therefore, the following other in-depth research needs to be carried out in actual cases to determine the effectiveness of the caching strategy.
In addition to the improvements in the experimental part, our future improvements will focus on the following. First of all, most of the current video content management and distribution technologies are based on traditional CDN systems, and there is no recognized solution for the research of mobile scenarios. Applying some mature technologies of CDN to mobile edge computing is one of our future research directions. Secondly, our caching strategy is based on the premise that the user’s moving path is known; this requires the help of real-time map navigation services. The current user mobile location information is controlled by the operators related to map navigation, so a specific incentive mechanism is needed to obtain this information [
34]. Furthermore, we consider optimizing the pre-caching design to refine the attributes of users, services and content, such as expanding the pre-caching capacity in densely populated areas, or setting additional capacity for emergency tasks to ensure timely response; as some video content has prominent regional characteristics, we need to add regional attributes to the video to increase their cache priority in certain regions. In addition, it is also a good idea to apply machine learning algorithms to the cache update mechanism.