Author Contributions
Conceptualization, T.-H.L., L.-H.C. and Y.-S.C.; methodology, T.-H.L., L.-H.C. and Y.-S.C.; software, Y.-S.C.; validation, T.-H.L., L.-H.C. and Y.-S.C.; formal analysis, T.-H.L., L.-H.C. and Y.-S.C.; investigation, T.-H.L. and L.-H.C.; resources, T.-H.L.; data curation, T.-H.L., L.-H.C. and Y.-S.C.; writing—original draft preparation, T.-H.L., L.-H.C. and Y.-S.C.; writing—review and editing, T.-H.L. and L.-H.C.; visualization, Y.-S.C.; supervision, T.-H.L.; project administration, T.-H.L.; funding acquisition, T.-H.L. All authors have read and agreed to the published version of the manuscript.
Figure 1. The X2 handover process.
Figure 2. The event A3 handover.
Figure 3. The general representative model with 12 inputs, 1 hidden layer, 2 biases, and 1 output.
Figure 4. The relationship between the amount of data received and the frame rate.
Figure 5. Network topology.
Figure 6. The relationship between the packet loss rate and the RSRP measured by the UE.
Figure 7. The relationship between MOS and RSRP for the video conference service on the UE.
Figure 8. The UE movement path along which RSRP was collected as the training dataset.
Figure 9. The model loss for RSRP prediction 2 s ahead: (a) MLP, (b) CNN, (c) RNN, (d) LSTM.
Figure 10. The model loss for RSRP prediction 4 s ahead: (a) MLP, (b) CNN, (c) RNN, (d) LSTM.
Figure 11. The model loss for RSRP prediction 6 s ahead: (a) MLP, (b) CNN, (c) RNN, (d) LSTM.
Figure 12. The model loss for RSRP prediction 8 s ahead: (a) MLP, (b) CNN, (c) RNN, (d) LSTM.
Figure 13. The model loss for RSRP prediction 10 s ahead: (a) MLP, (b) CNN, (c) RNN, (d) LSTM.
Figure 14. The UE's moving path used for deep learning model performance analysis.
Figure 15. Predicted RSRP results for the next 2 s: (a) MLP, (b) CNN, (c) RNN, (d) LSTM.
Figure 16. Predicted RSRP results for the next 4 s: (a) MLP, (b) CNN, (c) RNN, (d) LSTM.
Figure 17. Predicted RSRP results for the next 6 s: (a) MLP, (b) CNN, (c) RNN, (d) LSTM.
Figure 18. Predicted RSRP results for the next 8 s: (a) MLP, (b) CNN, (c) RNN, (d) LSTM.
Figure 19. Predicted RSRP results for the next 10 s: (a) MLP, (b) CNN, (c) RNN, (d) LSTM.
Figure 20. The UE's moving path for handover mechanism testing.
Figure 21. The packet loss rates of the three experiments with the proposed deep-learning-based handover mechanism.
Figure 22. The packet loss rates of the three experiments with the LTE event A3 handover mechanism.
Figure 23. The confidence intervals of the packet loss rates for the three experiments with the proposed deep-learning-based handover mechanism.
Figure 24. The confidence intervals of the packet loss rates for the three experiments with the LTE event A3 handover mechanism.
Figure 25. The MOS of the three experiments with the proposed deep-learning-based handover mechanism.
Figure 26. The MOS of the three experiments with the LTE event A3 handover mechanism.
Figure 27. The confidence intervals of the MOS for the three experiments with the proposed deep-learning-based handover mechanism.
Figure 28. The confidence intervals of the MOS for the three experiments with the LTE event A3 handover mechanism.
Table 1. The relationship between MOS and VoIP quality.
MOS | Quality | Impairment
---|---|---
5 | Excellent | Imperceptible
4 | Good | Perceptible but not annoying
3 | Fair | Slightly annoying
2 | Poor | Annoying
1 | Bad | Very annoying
Table 2. Skype video codec parameters.
Parameter | Value
---|---
Video resolution | 640 × 480, 320 × 240, 160 × 120
Frame rate | 5–30 fps
Video bitrate | 5–950 Kbps
Table 3. The relationship between downlink throughput and RSRP.
RSRP Range (dBm) | Downlink Throughput (Kbps) | | |
---|---|---|---|---
(−130, −125) | 5286.42 | 4957.90 | 4511.61 | 3413.85
(−125, −120) | 9965.03 | 8336.40 | 6276.86 | 3608.57
(−120, −115) | 19,379.11 | 17,267.44 | 14,352.46 | 9847.88
(−115, −110) | 31,757.07 | 27,976.82 | 23,088.08 | 16,024.25
(−110, −105) | 45,420.39 | 39,257.92 | 31,975.05 | 22,218.22
(−105, −100) | 57,578.17 | 48,962.60 | 39,552.47 | 27,603.03
(−100, −95) | 59,869.94 | 49,117.72 | 38,543.68 | 26,140.81
(−95, −90) | 69,091.76 | 57,735.74 | 46,309.86 | 32,471.78
(−90, −85) | 69,119.64 | 57,362.81 | 45,757.79 | 31,899.25
(−85, −80) | 73,730.66 | 62,012.55 | 50,166.78 | 35,667.25
(−80, −75) | 81,108.99 | 69,941.47 | 58,038.26 | 42,717.76
(−75, −70) | 84,024.21 | 73,216.94 | 61,404.49 | 45,849.86
(−70, −65) | 84,040.74 | 73,226.51 | 61,409.52 | 45,851.77
(−65, −60) | 88,638.70 | 78,669.06 | 67,211.11 | 51,452.32
(−60, −55) | 94,461.86 | 86,211.75 | 75,755.57 | 60,218.17
(−55, −50) | 101,295.20 | 99,657.95 | 94,443.59 | 83,012.62
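A minimal sketch of how Table 3 can be used as a lookup: given a measured RSRP, find the 5 dBm bin it falls into and return the tabulated throughput. Only the first four bins and the first throughput column are reproduced here for brevity (the table does not label its four throughput columns, so the choice of column and the helper name `expected_throughput` are illustrative assumptions, not from the paper).

```python
# Lookup table: (lower, upper) RSRP bin in dBm -> downlink throughput in Kbps.
# Values are the first throughput column of Table 3; remaining bins omitted.
THROUGHPUT_KBPS = {
    (-130, -125): 5286.42,
    (-125, -120): 9965.03,
    (-120, -115): 19379.11,
    (-115, -110): 31757.07,
}

def expected_throughput(rsrp_dbm):
    """Return the tabulated throughput for the bin containing rsrp_dbm."""
    for (lo, hi), kbps in THROUGHPUT_KBPS.items():
        if lo <= rsrp_dbm < hi:
            return kbps
    raise ValueError(f"RSRP {rsrp_dbm} dBm outside the tabulated range")

print(expected_throughput(-118))  # falls in the (-120, -115) bin
```

A linear scan is adequate for 16 bins; a `bisect` over the sorted bin edges would do the same job in O(log n).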
Table 4. The network parameters used in the simulation.
Parameter | Value
---|---
Band | 2.1 GHz
Channel bandwidth | 15 MHz
Resource blocks | 75
eNB transmission range | 4200 m
eNB Tx power | 43 dBm
eNB antenna gain | 18 dB
UE Tx power | 18 dBm
UE moving speed | 20 m/s
Table 5. The features for the training dataset.
Prediction Time | Dataset Features | Label
---|---|---
Next 2 s from time t | RSRP_t, RSRP_t−1, RSRP_t−2, ..., RSRP_t−10 | RSRP_t+2
Next 4 s from time t | RSRP_t, RSRP_t−1, RSRP_t−2, ..., RSRP_t−10 | RSRP_t+4
Next 6 s from time t | RSRP_t, RSRP_t−1, RSRP_t−2, ..., RSRP_t−10 | RSRP_t+6
Next 8 s from time t | RSRP_t, RSRP_t−1, RSRP_t−2, ..., RSRP_t−10 | RSRP_t+8
Next 10 s from time t | RSRP_t, RSRP_t−1, RSRP_t−2, ..., RSRP_t−10 | RSRP_t+10
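The Table 5 dataset can be built with a simple sliding window over the RSRP trace. The sketch below assumes a 1 Hz trace where each sample's features are the current reading plus the previous ten, and the label is the reading `horizon` seconds ahead; the function and variable names are illustrative, not from the paper.

```python
# Build (features, label) pairs from a 1 Hz RSRP trace.
def make_dataset(rsrp, horizon, window=11):
    """rsrp: list of per-second RSRP readings; horizon: seconds ahead (2..10)."""
    X, y = [], []
    for t in range(window - 1, len(rsrp) - horizon):
        X.append(rsrp[t - window + 1 : t + 1])  # RSRP_t-10 ... RSRP_t
        y.append(rsrp[t + horizon])             # RSRP_t+horizon
    return X, y

trace = list(range(30))           # toy monotone trace standing in for RSRP
X, y = make_dataset(trace, horizon=2)
print(len(X[0]), y[0])            # 11 features; label is 2 s past the window end
```

The same trace yields one dataset per horizon (2, 4, 6, 8, 10 s), which is why Table 5 lists one label per prediction time over a shared feature window.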
Table 6. Deep learning model parameters.
Model | Hidden Layers | Neurons
---|---|---
MLP | 3 fully connected layers | 128, 128, 64
CNN | 1 convolutional layer, 1 pooling layer, 1 fully connected layer | 32, 16
RNN | 2 RNN layers | 64, 32
LSTM | 2 LSTM layers | 64, 32
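As a rough capacity check, the MLP row of Table 6 can be turned into a trainable-parameter count, assuming standard dense layers with one bias per neuron, an 11-feature input window as in Table 5, and a single output neuron. The helper below is illustrative arithmetic only, not code from the paper.

```python
# Count weights + biases for a stack of fully connected layers.
def dense_params(n_in, layer_sizes):
    total, prev = 0, n_in
    for n in layer_sizes:
        total += (prev + 1) * n   # prev*n weights plus n biases
        prev = n
    return total

# Three hidden layers (128, 128, 64) plus the 1-neuron output layer.
print(dense_params(11, [128, 128, 64, 1]))
```

The same per-layer bookkeeping applies to the CNN/RNN/LSTM rows, though their layer types carry different per-unit weight counts.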
Table 7. Deep learning training parameters.
Parameter | Value
---|---
Epochs | 100
Batch size | 50
Activation | ReLU
Optimizer | Adam
Loss function | MSE
Table 8. RSRP prediction test results using MLP.
Interval | MSE | R-Squared
---|---|---
2 s | 0.814 | 0.993
4 s | 2.115 | 0.979
6 s | 3.526 | 0.962
8 s | 4.949 | 0.942
10 s | 5.633 | 0.929
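The two metrics reported in Tables 8–11 are the mean squared error and the coefficient of determination R². A from-scratch sketch of both (names are illustrative; the toy data below is not from the paper):

```python
# Mean squared error between true and predicted values.
def mse(y_true, y_pred):
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

# R^2 = 1 - SS_res / SS_tot; 1.0 means a perfect fit.
def r_squared(y_true, y_pred):
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

y_true = [-100.0, -98.0, -96.0, -94.0]   # toy RSRP values in dBm
y_pred = [-99.0, -98.0, -97.0, -94.0]
print(mse(y_true, y_pred), r_squared(y_true, y_pred))
```

Lower MSE and higher R² at a given interval indicate a better predictor, which is how the four models in Tables 8–11 can be compared directly.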
Table 9. RSRP prediction test results using CNN.
Interval | MSE | R-Squared
---|---|---
2 s | 0.621 | 0.994
4 s | 0.958 | 0.99
6 s | 1.555 | 0.983
8 s | 1.818 | 0.979
10 s | 2.701 | 0.966
Table 10. RSRP prediction test results using RNN.
Interval | MSE | R-Squared
---|---|---
2 s | 0.111 | 0.999
4 s | 0.409 | 0.996
6 s | 0.968 | 0.99
8 s | 1.219 | 0.986
10 s | 1.259 | 0.984
Table 11. RSRP prediction test results using LSTM.
Interval | MSE | R-Squared
---|---|---
2 s | 0.715 | 0.993
4 s | 1.896 | 0.981
6 s | 2.902 | 0.969
8 s | 3.203 | 0.963
10 s | 4.254 | 0.946