Distributed Machine Learning and Native AI Enablers for End-to-End Resources Management in 6G
Abstract
1. Introduction
1.1. Background and Motivation
1.2. Contributions
- It provides a concise account of 6G challenges, distilling three criteria for benchmarking the suitability of candidate ML-powered RM methodologies for 6G, including their fit with an end-to-end scope.
- Through a focused literature survey, it reviews distributed RL-based methodologies for RM, evaluating them with respect to the three criteria. The considered methodologies are categorized into six methodological frameworks, an approach that yields broader insight into the potential and limitations of these general frameworks, beyond individual methodologies.
- It considers three important network-native AI enablers that are part of the 6G vision, discussing their functionality and interplay and exploring their potential for supporting each of the six methodological frameworks.
- It exploits the insight gained from previous steps towards discussing lessons learned and identifying open issues and promising directions for future research.
1.3. Related Work
1.4. Paper Structure
2. Challenges Relating to RM in 6G
3. Distributed ML for RM in 5G and Beyond
3.1. Overview, Benchmarking Criteria, and Methodological Frameworks
- Centralized training, decentralized execution (CTDE), where actors collect data from the environment and a centralized critic is trained on the collected data. This setup may allow for communication between agents, considering their interactions explicitly. The critic is often discarded after training, which precludes any further online training. See Figure 1a.
- Fully decentralized communicating agents (FDC). Agents are trained locally on each node and information is exchanged in some form between agents, such as the state and action of other agents or a global reward. Learned communication is another variant, where agents learn what sort of information to transmit. See Figure 1b.
- Fully decentralized independent agents (FDI). Agents do not exchange information and each agent trains independently from the others. While this reduces communication costs, it does not consider interactions explicitly, which may lead to suboptimal solutions. See Figure 1c.
- Horizontal federated RL (HFRL) trains models that share the same state and action space but that collect data from different nodes. It may allow for interactions between agents. See Figure 1d.
- Vertical federated RL (VFRL) can provide flexibility in state and action space structure. Essentially, agents can be heterogeneous and have different action spaces. It may also allow for interactions between agents. See Figure 1e.
- Team learning (TeamL) allows for the formation of heterogeneous agents into teams that complement each other by doing different types of tasks. Agents within a team share information and reward signals. This allows for a certain degree of composability. See Figure 1f.
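As a concrete illustration of the HFRL aggregation described above, the following sketch averages per-agent model weights in proportion to local experience counts, in the style of federated averaging (FedAvg). All names, shapes, and the weighting rule are illustrative assumptions, not taken from the surveyed works:

```python
import numpy as np

def fedavg(weight_sets, sample_counts):
    """Aggregate per-node model weights (HFRL-style aggregation step).

    weight_sets: one list of np.ndarray layers per node/agent; all agents
    share the same model structure (same state and action space, as HFRL
    requires). sample_counts: local experiences per node, used to weight
    the average as in standard FedAvg.
    """
    total = sum(sample_counts)
    coeffs = [n / total for n in sample_counts]
    # Weighted sum, layer by layer, across all agents.
    return [
        sum(c * layers[i] for c, layers in zip(coeffs, weight_sets))
        for i in range(len(weight_sets[0]))
    ]

# Three agents with identical (toy) two-layer models, filled with constants.
agents = [[np.full((2, 2), k), np.full(2, k)] for k in (1.0, 2.0, 3.0)]
global_w = fedavg(agents, sample_counts=[10, 10, 20])
# Weights 0.25, 0.25, 0.5 give 0.25*1 + 0.25*2 + 0.5*3 = 2.25 per entry.
```

The broadcast of `global_w` back to the agents (the other half of a federated round) is omitted; it is a plain model download in the simplest case.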
3.2. Radio Resources and Power Allocation
3.3. Edge Caching
3.4. Edge and Core Computing
3.5. End-to-End
3.6. Unifying and Distinguishing Features across Resource Types and Domains
Framework | Communication | Scalability | Composability/Modularity | Transferability | Related Works |
---|---|---|---|---|---|
CTE (baseline) | Experience | Very low: convergence slows down with the state–action space, and communication costs to collect data centrally are high. | No | Low: features depend on system size. | (Only works that employ CTE as a baseline for comparison to distributed RL schemes) RRM: [33,34,40] Compute: [48,50] End-to-end: [57] |
CTDE | Experience | Low: reduced state space, but communication costs are high and no interactions between agents during execution (high potential for instability when scaling up). | No | High: agents have local actions and state. | RRM: [33,34] Compute: [52] End-to-end: [56] |
FDC | State, action, reward | Medium: reduced state space and agents interact (good stability), but communication costs increase. | No | Medium: communication may depend on number of neighbors. | RRM: [36,37,38] Caching: [42,43,44] Compute: [49,50] End-to-end: [58,59] |
FDI | None | Low: reduced state space and no communication costs but no interactions (high potential for instability when scaling up). | Yes (at expense of scalability) | High: agents have local actions and state. | RRM: [35] Compute: [49,51] End-to-end: [60] |
HFRL | Model | Medium: reduced state space and communication rounds can be optimized but may lack interactions. | No | Medium to high: agents have local actions and state unless they include interactions. | Caching: [45,46] Compute: [48,49] |
VFRL | Partial model | Medium: reduced state space and communication rounds can be optimized but may lack interactions. | Yes | Medium: agents have local actions and state unless they include interactions. | RRM: [40] |
TeamL | Intra-team state, action, reward | Medium: reduced state space but has inter-team and intra-team communication costs. | Yes | Medium: depends on the interaction between teams. | RRM: [39] |
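The Communication column above can be made concrete with a toy observation builder that contrasts FDI (local state only, no exchange) with FDC (local state augmented with neighbors' states, so the input size grows with the number of neighbors, which is why FDC transferability is rated only medium). All names and shapes are illustrative assumptions:

```python
import numpy as np

def build_observation(agent_id, local_states, neighbors, mode):
    """Assemble one agent's RL input under the FDI vs. FDC frameworks.

    mode='FDI': only the agent's own state (zero communication cost).
    mode='FDC': own state concatenated with neighbors' states
    (communication cost and input dimension grow with neighborhood size).
    """
    obs = [local_states[agent_id]]
    if mode == "FDC":
        obs += [local_states[j] for j in neighbors[agent_id]]
    return np.concatenate(obs)

# Three agents, each with a 2-feature local state; agent 0 has 2 neighbors.
states = {0: np.array([0.1, 0.2]), 1: np.array([0.3, 0.4]), 2: np.array([0.5, 0.6])}
nbrs = {0: [1, 2], 1: [0], 2: [0]}

obs_fdi = build_observation(0, states, nbrs, "FDI")  # 2 features
obs_fdc = build_observation(0, states, nbrs, "FDC")  # 2 + 2*2 = 6 features
```

Because the FDC input dimension depends on the neighborhood, transferring an FDC policy to a node with a different number of neighbors requires retraining or padding, whereas the FDI/CTDE observation is topology-independent.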
4. Supporting Distributed Frameworks with Native AI Enablers
4.1. Native AI Enablers and Their Functionality
- Inference, which makes network control decisions and monitors model performance, taking steps to keep it acceptable. By monitoring, analyzing, and planning, inference can choose to deploy updated models (made available through online or offline learning, discussed next) to the live environment. Model updates are triggered when a potential for performance enhancement is detected, when performance drops during inference, or when the online learning workflow predicts a drop.
- Online learning, which improves operating models. By collecting monitoring data from a digital twin and analyzing them, the system may plan to update these models while they operate. The employed digital twin is connected to the live environment (see the DTN at the bottom of the figure) and enables training models under conditions as close as possible to the current ones, while avoiding the risk of negatively impacting actual network performance. If the online training workflow detects that the updated model has the potential for improved network performance, a replacement of the model currently used for inference may be triggered. Conversely, if performance is predicted to drop beyond acceptable levels, a replacement of the inference model is again triggered, this time selecting the new model from among those made available through offline learning.
- Offline learning, which is managed by the knowledge reuse system. This system is responsible for processing and organizing monitoring data, maintaining model libraries, and scheduling the training of new models. It takes input from analysis to determine how and which models to train. It also coordinates with planning to provide new models upon request or to prompt the replacement of models with new ones. The DTN (topmost in the figure) employed in offline learning may be configured to explore scopes of different breadth.
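The model-replacement logic described across the three enablers can be summarized as a small decision routine. This is a hedged sketch: the threshold, the model records, and the function names are assumptions for illustration, not part of any standardized workflow:

```python
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    predicted_perf: float  # performance predicted by the monitoring/analysis step

def select_inference_model(current, online_candidate, offline_library, threshold):
    """Decide which model the inference workflow should run next.

    - Deploy the online-learning candidate if it is predicted to
      outperform the current model.
    - If current performance is predicted to fall below `threshold` and
      no better online candidate exists, fall back to the best model
      made available through offline learning.
    - Otherwise, keep the current model.
    """
    if online_candidate is not None and online_candidate.predicted_perf > current.predicted_perf:
        return online_candidate
    if current.predicted_perf < threshold and offline_library:
        return max(offline_library, key=lambda m: m.predicted_perf)
    return current

# Toy usage: an online candidate predicted to beat the live model wins.
current = Model("live", 0.70)
library = [Model("lib-a", 0.75), Model("lib-b", 0.85)]
chosen = select_inference_model(current, Model("online", 0.80), library, threshold=0.60)
```

In a real deployment the performance predictions would come from the DTN-based workflows rather than stored scalars, but the branching structure follows the description above.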
4.2. Interactions between the Distributed RL Frameworks in Table 1 and AI Enablers
5. Lessons Learned, Open Issues, and Future Directions
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Appendix A. Summary of Related Work and Brief Discussion of Surveys Treating Relevant Aspects in a Different Context
Ref. | Description | Comparison to this Work |
---|---|---|
[11] | Focuses on end-to-end RM for the specific context of network slicing. Provides a taxonomy of AI methodologies and discusses aspects of a knowledge reuse system, including the type of data stored and curated and workflows for ML model training. Places emphasis on transfer learning. | This work considers AI workflows and digital twin networks in addition to knowledge transfer, conducts a focused survey of distributed RL methodologies for RM tasks of a much wider scope, organizes/abstracts the surveyed methodologies in a number of frameworks, and discusses how the AI enablers support each framework. |
[14] | Focuses on distributed methodologies and network technologies from an end-to-end network-for-AI viewpoint and examines how network technologies support distributed ML operation. | This work takes an AI-for-network approach. |
[15] | Focuses on architectures of distributed networked ML from an end-to-end network-for-AI viewpoint. Discusses optimizations of the training process in each architecture, such as communication-efficient techniques and asynchronous methods. | This work takes an AI-for-network approach. |
[16] | Proposes virtualization of user demand using digital twins. Discusses pervasive intelligence and categorizes it into network management and service-oriented AI. Describes a network architecture based on an interplay between AI and virtualization supported by digital twins, but does not consider other AI enablers or specify a scope of RM tasks. | This work considers digital twins integrated with AI workflows and a knowledge reuse system, conducts a focused survey of distributed RL methodologies for RM tasks, organizes/abstracts the surveyed methodologies in a number of frameworks, and discusses how the AI enablers support each framework. |
[17] | Focuses on pervasive AI for highly heterogeneous networks with computational facilities, such as IoT networks. Adopting a network-for-AI viewpoint, it focuses on distributed AI architectures and techniques, such as parallelization and model splitting, and considers the optimization of training and inference processes. | This work takes an AI-for-network approach. |
Ref. | Description |
---|---|
[4] | Focuses on zero-touch management and categorizes ML applications by network domain, including distributed solutions. Does not abstract over distributed ML approaches and does not consider AI enablers. Discusses ZTM architecture in detail and surveys extensively how ZTM may benefit from applications of ML. |
[7] | Provides an extensive survey of ML applications to networking, categorizing them by network task (including RM) and ML algorithm. Does not abstract over ML approaches, does not focus on distributed solutions, keeps the scope to 4G/5G and does not consider AI enablers. Offers preliminaries on SL, UL, RL, and neural networks. |
[8] | Surveys RRM tasks in 5G, including distributed ML solutions. Does not abstract over ML approaches, does not include AI enablers, and does not consider other types of resources. Compares ML approaches via simulation. |
[9] | Focuses on edge/core VNF and CNF placement in 5G. Does not abstract over ML approaches, does not discuss AI enablers, and does not consider other types of resources. Thoroughly classifies approaches by task, scenario, algorithm, and objective. Includes non-ML approaches. |
[25] | Focuses on 5G/6G technologies, surveys potential applications of ML to networking, and discusses the benefits of ML compared to other approaches. Does not abstract over ML approaches, and does not focus on distributed solutions or AI enablers. A brief introduction to SL, UL, RL, and neural networks is given. |
[27] | Offers a detailed tutorial on SL, UL, and RL in the context of deep learning for wireless networks. Includes general theory, the most relevant architectures, and training algorithms. Does not include a survey over specific ML applications to networking, does not examine distributed AI architectures or 6G, does not focus on RM, and does not include AI enablers. Discusses a possible cooperation between ML and conventional algorithms. |
[28] | Provides a taxonomy of deep learning applications to networking, per network layer and type of task. Keeps the scope to 5G, does not consider AI enablers, does not focus on RM, and does not abstract over surveyed approaches. Discusses deep learning theory and compares DL frameworks. |
[29] | Focuses on the integration of deep learning into the network, such as deployment of ML and specific ML implementations tailored to networking. Keeps the scope to 5G, does not abstract over approaches, does not examine multi-agent systems, does not include AI enablers, and does not focus on RM. Provides a brief introduction to DL. |
[30] | Surveys deep RL for autonomous control in networks and proposes a reference architecture for DRL, including distributed solutions. Keeps the scope to 5G, does not include AI enablers, does not focus on RM, and does not abstract in terms of distributed architectures, focusing on network layers instead. Provides theory on advanced DRL algorithms. |
[31] | Focuses on RRM tasks in HetNet scenarios. Keeps the scope to 5G, does not abstract over ML approaches, does not include other types of resources, and does not include AI enablers. Discusses non-ML and ML approaches and examines many aspects of each task/approach combination. |
[32] | Focuses on RAN RM tasks and ML applications to RAN physical layer technologies. Keeps the scope to 5G, does not abstract over ML approaches, does not consider other types of resources, and does not include AI enablers. Compares non-ML approaches to ML. |
Appendix B. ML Preliminaries
Appendix B.1. Supervised Learning (SL)
Appendix B.2. Unsupervised Learning (UL)
Appendix B.3. Reinforcement Learning (RL)
References
1. Letaief, K.B.; Shi, Y.; Lu, J.; Lu, J. Edge Artificial Intelligence for 6G: Vision, Enabling Technologies, and Applications. IEEE J. Sel. Areas Commun. 2022, 40, 5–36.
2. Zhang, S.; Zhu, D. Towards artificial intelligence enabled 6G: State of the art, challenges, and opportunities. Comput. Netw. 2020, 183, 107556.
3. Tataria, H.; Shafi, M.; Molisch, A.F.; Dohler, M.; Sjoland, H.; Tufvesson, F. 6G Wireless Systems: Vision, Requirements, Challenges, Insights, and Opportunities. Proc. IEEE 2021, 109, 1166–1199.
4. Coronado, E.; Behravesh, R.; Subramanya, T.; Fernandez-Fernandez, A.; Siddiqui, M.S.; Costa-Perez, X.; Riggio, R. Zero Touch Management: A Survey of Network Automation Solutions for 5G and 6G Networks. IEEE Commun. Surv. Tutor. 2022, 24, 2535–2578.
5. Nassef, O.; Sun, W.; Purmehdi, H.; Tatipamula, M.; Mahmoodi, T. A survey: Distributed Machine Learning for 5G and beyond. Comput. Netw. 2022, 207, 108820.
6. Hu, S.; Chen, X.; Ni, W.; Hossain, E.; Wang, X. Distributed Machine Learning for Wireless Communication Networks: Techniques, Architectures, and Applications. IEEE Commun. Surv. Tutor. 2021, 23, 1458–1493.
7. Sun, Y.; Peng, M.; Zhou, Y.; Huang, Y.; Mao, S. Application of Machine Learning in Wireless Networks: Key Techniques and Open Issues. IEEE Commun. Surv. Tutor. 2019, 21, 3072–3108.
8. Bartsiokas, I.A.; Gkonis, P.K.; Kaklamani, D.I.; Venieris, I.S. ML-Based Radio Resource Management in 5G and Beyond Networks: A Survey. IEEE Access 2022, 10, 83507–83528.
9. Attaoui, W.; Sabir, E.; Elbiaze, H.; Guizani, M. VNF and CNF Placement in 5G: Recent Advances and Future Trends. IEEE Trans. Netw. Serv. Manag. 2023, 1.
10. Camelo, M.; Cominardi, L.; Gramaglia, M.; Fiore, M.; Garcia-Saavedra, A.; Fuentes, L.; De Vleeschauwer, D.; Soto-Arenas, P.; Slamnik-Krijestorac, N.; Ballesteros, J.; et al. Requirements and Specifications for the Orchestration of Network Intelligence in 6G. In Proceedings of the 2022 IEEE 19th Annual Consumer Communications & Networking Conference (CCNC), Virtual, 8–11 January 2022; pp. 1–9.
11. Zhou, H.; Erol-Kantarci, M.; Poor, V. Knowledge Transfer and Reuse: A Case Study of AI-enabled Resource Management in RAN Slicing. IEEE Wirel. Commun. 2022, 1–10.
12. Hui, L.; Wang, M.; Zhang, L.; Lu, L.; Cui, Y. Digital Twin for Networking: A Data-driven Performance Modeling Perspective. IEEE Netw. 2022, 1–8.
13. Zhou, C.; Yang, H.; Duan, X.; Lopez, D.; Pastor, A.; Wu, Q.; Boucadair, M.; Jacquenet, C. Digital Twin Network: Concepts and Reference Architecture; Internet-Draft draft-irtf-nmrg-network-digital-twin-arch-03 (work in progress); Internet Engineering Task Force, 27 April 2023. Available online: https://datatracker.ietf.org/doc/html/draft-irtf-nmrg-network-digital-twin-arch-03 (accessed on 1 July 2023).
14. Campolo, C.; Iera, A.; Molinaro, A. Network for Distributed Intelligence: A Survey and Future Perspectives. IEEE Access 2023, 11, 52840–52861.
15. Liu, X.; Yu, J.; Liu, Y.; Gao, Y.; Mahmoodi, T.; Lambotharan, S.; Tsang, D.H.-K. Distributed Intelligence in Wireless Networks. IEEE Open J. Commun. Soc. 2023, 4, 1001–1039.
16. Shen, X.; Gao, J.; Wu, W.; Li, M.; Zhou, C.; Zhuang, W. Holistic Network Virtualization and Pervasive Network Intelligence for 6G. IEEE Commun. Surv. Tutor. 2022, 24, 1–30.
17. Baccour, E.; Mhaisen, N.; Abdellatif, A.A.; Erbad, A.; Mohamed, A.; Hamdi, M.; Guizani, M. Pervasive AI for IoT Applications: A Survey on Resource-Efficient Distributed Artificial Intelligence. IEEE Commun. Surv. Tutor. 2022, 24, 2366–2418.
18. Saad, W.; Bennis, M.; Chen, M. A Vision of 6G Wireless Systems: Applications, Trends, Technologies, and Open Research Problems. IEEE Netw. 2020, 34, 134–142.
19. Sundarum, M. Distributed Compute and Communications in 5G; 5G Americas, 2022. Available online: https://www.5gamericas.org/distributed-compute-and-communication-in-5g/ (accessed on 1 July 2023).
20. Letaief, K.B.; Chen, W.; Shi, Y.; Zhang, J.; Zhang, Y.-J.A. The Roadmap to 6G: AI Empowered Wireless Networks. IEEE Commun. Mag. 2019, 57, 84–90.
21. Li, Q.; Ding, Z.; Tong, X.; Wu, G.; Stojanovski, S.; Luetzenkirchen, T.; Kolekar, A.; Bangolae, S.; Palat, S. 6G Cloud-Native System: Vision, Challenges, Architecture Framework and Enabling Technologies. IEEE Access 2022, 10, 96602–96625.
22. Wang, C.-X.; Di Renzo, M.; Stanczak, S.; Wang, S.; Larsson, E.G. Artificial Intelligence Enabled Wireless Networking for 5G and Beyond: Recent Advances and Future Challenges. IEEE Wirel. Commun. 2020, 27, 16–23.
23. Liu, G.; Huang, Y.; Dong, J.; Jin, J.; Wang, Q.; Li, N. Vision, requirements and network architecture of 6G mobile network beyond 2030. China Commun. 2020, 17, 92–104.
24. Ahammed, T.B.; Patgiri, R.; Nayak, S. A vision on the artificial intelligence for 6G communication. ICT Express 2023, 9, 197–210.
25. Mahmood, M.R.; Matin, M.A.; Sarigiannidis, P.; Goudos, S.K. A Comprehensive Review on Artificial Intelligence/Machine Learning Algorithms for Empowering the Future IoT Toward 6G Era. IEEE Access 2022, 10, 87535–87562.
26. Shahraki, A.; Ohlenforst, T.; Kreyß, F. When machine learning meets Network Management and Orchestration in Edge-based networking paradigms. J. Netw. Comput. Appl. 2023, 212, 103558.
27. Zappone, A.; Di Renzo, M.; Debbah, M. Wireless Networks Design in the Era of Deep Learning: Model-Based, AI-Based, or Both? IEEE Trans. Commun. 2019, 67, 7331–7376.
28. Mao, Q.; Hu, F.; Hao, Q. Deep Learning for Intelligent Wireless Networks: A Comprehensive Survey. IEEE Commun. Surv. Tutor. 2018, 20, 2595–2621.
29. Zhang, C.; Patras, P.; Haddadi, H. Deep Learning in Mobile and Wireless Networking: A Survey. IEEE Commun. Surv. Tutor. 2019, 21, 2224–2287.
30. Lei, L.; Tan, Y.; Zheng, K.; Liu, S.; Zhang, K.; Shen, X. Deep Reinforcement Learning for Autonomous Internet of Things: Model, Applications and Challenges. IEEE Commun. Surv. Tutor. 2020, 22, 1722–1760.
31. Agarwal, B.; Togou, M.A.; Ruffini, M.; Muntean, G.-M. A Comprehensive Survey on Radio Resource Management in 5G HetNets: Current Solutions, Future Trends and Open Issues. IEEE Commun. Surv. Tutor. 2022, 24, 2495–2534.
32. Hussain, F.; Hassan, S.A.; Hussain, R.; Hossain, E. Machine Learning for Resource Management in Cellular and IoT Networks: Potentials, Current Solutions, and Open Challenges. IEEE Commun. Surv. Tutor. 2020, 22, 1251–1275.
33. Ding, H.; Zhao, F.; Tian, J.; Li, D.; Zhang, H. A deep reinforcement learning for user association and power control in heterogeneous networks. Ad Hoc Netw. 2020, 102, 102069.
34. Nie, H.; Li, S.; Liu, Y. Multi-Agent Deep Reinforcement Learning for Resource Allocation in the Multi-Objective HetNet. In Proceedings of the 2021 International Wireless Communications and Mobile Computing (IWCMC), Harbin, China, 28 June–2 July 2021; pp. 116–121.
35. Elsayed, M.; Erol-Kantarci, M.; Kantarci, B.; Wu, L.; Li, J. Low-Latency Communications for Community Resilience Microgrids: A Reinforcement Learning Approach. IEEE Trans. Smart Grid 2020, 11, 1091–1099.
36. Naderializadeh, N.; Sydir, J.J.; Simsek, M.; Nikopour, H. Resource Management in Wireless Networks via Multi-Agent Deep Reinforcement Learning. IEEE Trans. Wirel. Commun. 2021, 20, 3507–3523.
37. Zhao, N.; Liang, Y.-C.; Niyato, D.; Pei, Y.; Jiang, Y. Deep Reinforcement Learning for User Association and Resource Allocation in Heterogeneous Networks. In Proceedings of the 2018 IEEE Global Communications Conference (GLOBECOM), Abu Dhabi, United Arab Emirates, 9–13 December 2018; pp. 1–6.
38. Giannopoulos, A.; Spantideas, S.; Kapsalis, N.; Gkonis, P.; Sarakis, L.; Capsalis, C.; Vecchio, M.; Trakadas, P. Supporting Intelligence in Disaggregated Open Radio Access Networks: Architectural Principles, AI/ML Workflow, and Use Cases. IEEE Access 2022, 10, 39580–39595.
39. Iturria-Rivera, P.E.; Zhang, H.; Zhou, H.; Mollahasani, S.; Erol-Kantarci, M. Multi-Agent Team Learning in Virtualized Open Radio Access Networks (O-RAN). Sensors 2022, 22, 5375.
40. Zhang, H.; Zhou, H.; Erol-Kantarci, M. Federated Deep Reinforcement Learning for Resource Allocation in O-RAN Slicing. In Proceedings of the GLOBECOM 2022—2022 IEEE Global Communications Conference, Rio de Janeiro, Brazil, 4–8 December 2022; pp. 958–963.
41. Nomikos, N.; Zoupanos, S.; Charalambous, T.; Krikidis, I. A Survey on Reinforcement Learning-Aided Caching in Heterogeneous Mobile Edge Networks. IEEE Access 2022, 10, 4380–4413.
42. Zhang, T.; Fang, X.; Wang, Z.; Liu, Y.; Nallanathan, A. Stochastic Game Based Cooperative Alternating Q-Learning Caching in Dynamic D2D Networks. IEEE Trans. Veh. Technol. 2021, 70, 13255–13269.
43. Chen, S.; Yao, Z.; Jiang, X.; Yang, J.; Hanzo, L. Multi-Agent Deep Reinforcement Learning-Based Cooperative Edge Caching for Ultra-Dense Next-Generation Networks. IEEE Trans. Commun. 2021, 69, 2441–2456.
44. Jiang, W.; Feng, G.; Qin, S.; Liu, Y. Multi-Agent Reinforcement Learning Based Cooperative Content Caching for Mobile Edge Networks. IEEE Access 2019, 7, 61856–61867.
45. Yu, Z.; Hu, J.; Min, G.; Lu, H.; Zhao, Z.; Wang, H.; Georgalas, N. Federated Learning Based Proactive Content Caching in Edge Computing. In Proceedings of the 2018 IEEE Global Communications Conference (GLOBECOM), Abu Dhabi, United Arab Emirates, 9–13 December 2018; pp. 1–6.
46. Zhao, L.; Ran, Y.; Wang, H.; Wang, J.; Luo, J. Towards Cooperative Caching for Vehicular Networks with Multi-level Federated Reinforcement Learning. In Proceedings of the ICC 2021—IEEE International Conference on Communications, Virtual, 14–23 June 2021; pp. 1–6.
47. Haibeh, L.A.; Yagoub, M.C.E.; Jarray, A. A Survey on Mobile Edge Computing Infrastructure: Design, Resource Management, and Optimization Approaches. IEEE Access 2022, 10, 27591–27610.
48. Wang, X.; Han, Y.; Wang, C.; Zhao, Q.; Chen, X.; Chen, M. In-Edge AI: Intelligentizing Mobile Edge Computing, Caching and Communication by Federated Learning. IEEE Netw. 2019, 33, 156–165.
49. Huang, X.; Zhang, K.; Wu, F.; Leng, S. Collaborative Machine Learning for Energy-Efficient Edge Networks in 6G. IEEE Netw. 2021, 35, 12–19.
50. Ren, Y.; Sun, Y.; Peng, M. Deep Reinforcement Learning Based Computation Offloading in Fog Enabled Industrial Internet of Things. IEEE Trans. Ind. Inform. 2021, 17, 4978–4987.
51. Tuong, V.D.; Truong, T.P.; Nguyen, T.-V.; Noh, W.; Cho, S. Partial Computation Offloading in NOMA-Assisted Mobile-Edge Computing Systems Using Deep Reinforcement Learning. IEEE Internet Things J. 2021, 8, 13196–13208.
52. Goudarzi, M.; Palaniswami, M.S.; Buyya, R. A Distributed Deep Reinforcement Learning Technique for Application Placement in Edge and Fog Computing Environments. IEEE Trans. Mob. Comput. 2023, 22, 2491–2505.
53. Afolabi, I.; Taleb, T.; Samdanis, K.; Ksentini, A.; Flinck, H. Network Slicing and Softwarization: A Survey on Principles, Enabling Technologies, and Solutions. IEEE Commun. Surv. Tutor. 2018, 20, 2429–2453.
54. Ssengonzi, C.; Kogeda, O.P.; Olwal, T.O. A survey of deep reinforcement learning application in 5G and beyond network slicing and virtualization. Array 2022, 14, 100142.
55. Phyu, H.P.; Naboulsi, D.; Stanica, R. Machine Learning in Network Slicing—A Survey. IEEE Access 2023, 11, 39123–39153.
56. Mason, F.; Nencioni, G.; Zanella, A. A Multi-Agent Reinforcement Learning Architecture for Network Slicing Orchestration. In Proceedings of the 2021 19th Mediterranean Communication and Computer Networking Conference (MedComNet), Ibiza, Spain, 15–17 June 2021; pp. 1–8.
57. Chergui, H.; Blanco, L.; Garrido, L.A.; Ramantas, K.; Kuklinski, S.; Ksentini, A.; Verikoukis, C. Zero-Touch AI-Driven Distributed Management for Energy-Efficient 6G Massive Network Slicing. IEEE Netw. 2021, 35, 43–49.
58. Liu, Q.; Choi, N.; Han, T. OnSlicing: Online End-to-End Network Slicing with Reinforcement Learning. In Proceedings of the 17th International Conference on emerging Networking EXperiments and Technologies, Munich, Germany, 7–10 December 2021; Association for Computing Machinery: New York, NY, USA, 2021; pp. 141–153.
59. Kim, Y.; Lim, H. Multi-Agent Reinforcement Learning-Based Resource Management for End-to-End Network Slicing. IEEE Access 2021, 9, 56178–56190.
60. Mai, T.; Yao, H.; Zhang, N.; He, W.; Guo, D.; Guizani, M. Transfer Reinforcement Learning Aided Distributed Network Slicing Optimization in Industrial IoT. IEEE Trans. Ind. Inform. 2022, 18, 4308–4316.
61. Sheraz, M.; Ahmed, M.; Hou, X.; Li, Y.; Jin, D.; Han, Z.; Jiang, T. Artificial Intelligence for Wireless Caching: Schemes, Performance, and Challenges. IEEE Commun. Surv. Tutor. 2021, 23, 631–661.
62. Wijethilaka, S.; Liyanage, M. Survey on Network Slicing for Internet of Things Realization in 5G Networks. IEEE Commun. Surv. Tutor. 2021, 23, 957–994.
63. Chafii, M.; Bariah, L.; Muhaidat, S.; Debbah, M. Twelve Scientific Challenges for 6G: Rethinking the Foundations of Communications Theory. IEEE Commun. Surv. Tutor. 2023, 25, 868–904.
64. Feriani, A.; Hossain, E. Single and Multi-Agent Deep Reinforcement Learning for AI-Enabled Wireless Networks: A Tutorial. IEEE Commun. Surv. Tutor. 2021, 23, 1226–1252.
65. Soto, P.; Camelo, M.; De Vleeschauwer, D.; De Bock, Y.; Chang, C.-Y.; Botero, J.F.; Latré, S. Network Intelligence for NFV Scaling in Closed-Loop Architectures. IEEE Commun. Mag. 2023, 61, 66–72.
66. Khan, L.U.; Saad, W.; Niyato, D.; Han, Z.; Hong, C.S. Digital-Twin-Enabled 6G: Vision, Architectural Trends, and Future Directions. IEEE Commun. Mag. 2022, 60, 74–80.
67. Shen, Y.; Shi, Y.; Zhang, J.; Letaief, K.B. Graph Neural Networks for Scalable Radio Resource Management: Architecture Design and Theoretical Analysis. IEEE J. Sel. Areas Commun. 2021, 39, 101–115.
68. Rusek, K.; Suarez-Varela, J.; Almasan, P.; Barlet-Ros, P.; Cabellos-Aparicio, A. RouteNet: Leveraging Graph Neural Networks for Network Modeling and Optimization in SDN. IEEE J. Sel. Areas Commun. 2020, 38, 2260–2270.
69. Wang, H.; Wu, Y.; Min, G.; Miao, W. A Graph Neural Network-Based Digital Twin for Network Slicing Management. IEEE Trans. Ind. Inform. 2020, 18, 1367–1376.
70. He, S.; Xiong, S.; Ou, Y.; Zhang, J.; Wang, J.; Huang, Y.; Zhang, Y. An Overview on the Application of Graph Neural Networks in Wireless Networks. IEEE Open J. Commun. Soc. 2021, 2, 2547–2565.
71. Tam, P.; Song, I.; Kang, S.; Ros, S.; Kim, S. Graph Neural Networks for Intelligent Modelling in Network Management and Orchestration: A Survey on Communications. Electronics 2022, 11, 3371.
72. Chen, M.; Gunduz, D.; Huang, K.; Saad, W.; Bennis, M.; Feljan, A.V.; Poor, H.V. Distributed Learning in Wireless Networks: Recent Progress and Future Challenges. IEEE J. Sel. Areas Commun. 2021, 39, 3579–3605.
73. Muscinelli, E.; Shinde, S.S.; Tarchi, D. Overview of Distributed Machine Learning Techniques for 6G Networks. Algorithms 2022, 15, 210.
74. Hosseinalipour, S.; Brinton, C.G.; Aggarwal, V.; Dai, H.; Chiang, M. From Federated to Fog Learning: Distributed Machine Learning over Heterogeneous Wireless Networks. IEEE Commun. Mag. 2020, 58, 41–47.
75. James, G.; Witten, D.; Hastie, T.; Tibshirani, R.; Taylor, J. An Introduction to Statistical Learning; Springer Texts in Statistics; Springer International Publishing: Cham, Switzerland, 2023; ISBN 978-3-031-38746-3.
76. Sutton, R.S.; Barto, A.G. Reinforcement Learning: An Introduction, 2nd ed.; MIT Press: Cambridge, MA, USA, 2018.
Support of the distributed RL frameworks by the native AI enablers (cf. Section 4.2):

Framework | AI Plane | Digital Twin (Size) | Knowledge Reuse |
---|---|---|---|
CTDE | Easy to plan/hard to replace | Small | Simple, a single unit |
FDC | Hard to plan/easy to replace | Large | Simple, a single unit |
FDI | Easy to plan/easy to replace | Small | Multiple units, depending on the environment |
HFRL | Easy to plan/hard to replace | Large | Simple, a single unit |
VFRL | Easy to plan/hard to replace | Large | Simple, a single unit |
TeamL | Planning depends on inter-team interactions/easy to replace teams | Large | Depends on the size of the team |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Cite as: Karachalios, O.A.; Zafeiropoulos, A.; Kontovasilis, K.; Papavassiliou, S. Distributed Machine Learning and Native AI Enablers for End-to-End Resources Management in 6G. Electronics 2023, 12, 3761. https://doi.org/10.3390/electronics12183761