Machine Learning-Based Resource Management in Fog Computing: A Systematic Literature Review
Abstract
1. Introduction
2. Related Work
3. Methodology
3.1. Identification
3.2. Screening
3.3. Inclusion
3.4. Motivation
4. Results and Discussion
4.1. RQ No. 1 What Are the State-of-the-Art ML and DL Algorithms Leveraging Resource Management in Fog Computing?
4.2. RQ No. 2 What ML/DL Algorithms Are Used for Resource Management in Fog Computing?
4.2.1. ML/DL Algorithms Used to Manage Latency and Their Relevant Issues in Fog Computing
4.2.2. ML/DL Algorithms Used for Task Offloading and Its Relevant Issues in Fog Computing
4.2.3. ML/DL Algorithms Used for Resource Utilization and Their Relevant Issues in Fog Computing
Techniques | Explanation | Issue Addressed | Refs. |
---|---|---|---|
Random Forest | ML techniques such as random forest, decision tree, and SVM can predict the task’s nature and optimize resource allocation based on previous data and patterns, which leads to efficient utilization of available resources. | Efficient resource allocation | [27,28,54] |
Decision Tree | |||
SVM | |||
DRL | These RL algorithms can optimize resource allocation by adopting policies that can balance workload across fog nodes, make the distribution of resources fair, and allocate them based on real-time demand and network conditions, which leads to efficient resource utilization. | Optimal resource allocation | [26,30,35,37,38,40,44,53,65,67,68] |
DDQN | |||
Q-learning | |||
DQN | |||
CNN | Neural network architectures like CNNs, RNNs, and feed-forward networks optimize resource utilization by learning patterns in data, predicting resource demands, and dynamically adjusting resource allocation to improve efficiency in processing tasks and minimize idle resources. | Resource utilization | [25,36,43,58,70] |
RNN | |||
Fuzzy Logic | FL and neuro-fuzzy systems can handle uncertainties and variations in resource demands, so they can adaptively manage resources across fog computing environments. Fuzzy sets and rules support decisions that optimize resource utilization and ensure efficient allocation in complex environments. | Adaptive resource management | [47,55,56,59,62] |
Neuro-Fuzzy | |||
Fuzzy Q-learning | |||
DL | DL techniques such as DNN and A2C maximize resource utilization by learning optimal policies that minimize idle resources, dynamically adjust resource allocation to match varying workload demands, and improve throughput, enhancing overall efficiency. | Maximizing resource utilization | [34,37,39,69,71] |
DNN | |||
A2C |
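To make the reinforcement learning rows in the table above concrete, the following is a minimal sketch of how a Q-learning-style agent can learn which fog node to prefer for incoming tasks. Everything here is an illustrative assumption — three hypothetical nodes, noisy scalar rewards standing in for inverse latency, and a single-state (bandit) simplification of Q-learning — not the formulation of any particular surveyed paper, which typically use richer state spaces and deep function approximation.

```python
import random

random.seed(0)

# Three hypothetical fog nodes; mean_reward stands in for inverse latency
# observed after dispatching a task to each node (illustrative values only).
mean_reward = [0.2, 0.5, 0.9]
NODES = len(mean_reward)
ALPHA = 0.1      # learning rate
EPSILON = 0.1    # exploration probability
EPISODES = 3000

Q = [0.0] * NODES  # one Q-value per node (single-state simplification)

for _ in range(EPISODES):
    # epsilon-greedy node selection
    if random.random() < EPSILON:
        node = random.randrange(NODES)
    else:
        node = max(range(NODES), key=lambda n: Q[n])
    # noisy reward observed for the chosen node
    r = mean_reward[node] + random.uniform(-0.1, 0.1)
    # Q-learning update (no next state, so no discounted bootstrap term)
    Q[node] += ALPHA * (r - Q[node])

best = max(range(NODES), key=lambda n: Q[n])
print("learned Q-values:", [round(q, 2) for q in Q])
print("preferred node:", best)  # the fastest node should win
```

The epsilon-greedy loop is what lets the agent balance exploring under-sampled nodes against exploiting the node currently believed fastest — the same exploration–exploitation trade-off the DRL/DQN approaches in the table resolve at far larger scale.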
4.2.4. ML/DL Algorithms Used to Address Power Consumption and Its Relevant Issues in Fog Computing
4.2.5. ML/DL Algorithms Used to Address Service Placement and Their Relevant Issues in Fog Computing
4.2.6. ML/DL Algorithms Used to Address Cost and Their Relevant Issues in Fog Computing
4.2.7. ML/DL Algorithms Used to Address Load Balancing and Their Relevant Issues in Fog Computing
4.2.8. ML/DL Algorithms Used to Address QoS and Their Relevant Issues in Fog Computing
4.3. RQ No. 3 What Are the Major Challenges That Have Been Addressed During Resource Management in Fog Computing Using ML/DL Techniques?
4.4. RQ No. 4 How Do ML/DL Methods Enable Dynamic Resource Allocation and Distribution Across the Layers, i.e., Edge–Fog or Edge–Fog–Cloud, or Manage It on Fog?
5. Practical Implementation of ML/DL Algorithms
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Angel, N.A.; Ravindran, D.; Vincent, P.D.R.; Srinivasan, K.; Hu, Y.-C. Recent advances in evolving computing paradigms: Cloud, edge, and fog technologies. Sensors 2021, 22, 196. [Google Scholar] [CrossRef] [PubMed]
- Khan, W.Z.; Ahmed, E.; Hakak, S.; Yaqoob, I.; Ahmed, A. Edge computing: A survey. Future Gener. Comput. Syst. 2019, 97, 219–235. [Google Scholar] [CrossRef]
- Peng, X.; Ota, K.; Dong, M. Multiattribute-based double auction toward resource allocation in vehicular fog computing. IEEE Internet Things J. 2020, 7, 3094–3103. [Google Scholar] [CrossRef]
- Aburukba, R.O.; AliKarrar, M.; Landolsi, T.; El-Fakih, K. Scheduling Internet of Things requests to minimize latency in hybrid Fog–Cloud computing. Future Gener. Comput. Syst. 2020, 111, 539–551. [Google Scholar] [CrossRef]
- Aazam, M.; Huh, E.-N. Fog computing: The Cloud-IoT/IoE middleware paradigm. IEEE Potentials 2016, 35, 40–44. [Google Scholar] [CrossRef]
- Al Yami, M.; Schaefer, D. Fog computing as a complementary approach to cloud computing. In Proceedings of the 2019 International Conference on Computer and Information Sciences (ICCIS), Sakaka, Saudi Arabia, 3–4 April 2019; pp. 1–5. [Google Scholar]
- Iftikhar, S.; Gill, S.S.; Song, C.; Xu, M.; Aslanpour, M.S.; Toosi, A.N.; Du, J.; Wu, H.; Ghosh, S.; Chowdhury, D. AI-based fog and edge computing: A systematic review, taxonomy and future directions. Internet Things 2023, 21, 100674. [Google Scholar] [CrossRef]
- Yao, S.; Zhao, Y.; Shao, H.; Zhang, C.; Zhang, A.; Liu, D.; Liu, S.; Su, L.; Abdelzaher, T. Apdeepsense: Deep learning uncertainty estimation without the pain for iot applications. In Proceedings of the 2018 IEEE 38th International Conference on Distributed Computing Systems (ICDCS), Vienna, Austria, 2–6 July 2018; pp. 334–343. [Google Scholar]
- Walia, G.K.; Kumar, M.; Gill, S.S. AI-empowered fog/edge resource management for IoT applications: A comprehensive review, research challenges and future perspectives. IEEE Commun. Surv. Tutor. 2023, 26, 619–669. [Google Scholar] [CrossRef]
- Abdulkareem, K.H.; Mohammed, M.A.; Gunasekaran, S.S.; Al-Mhiqani, M.N.; Mutlag, A.A.; Mostafa, S.A.; Ali, N.S.; Ibrahim, D.A. A review of fog computing and machine learning: Concepts, applications, challenges, and open issues. IEEE Access 2019, 7, 153123–153140. [Google Scholar] [CrossRef]
- Tmamna, J.; Ayed, E.B.; Ayed, M.B. Deep learning for internet of things in fog computing: Survey and open issues. In Proceedings of the 2020 5th International Conference on Advanced Technologies for Signal and Image Processing (ATSIP), Sousse, Tunisia, 2–5 September 2020; pp. 1–6. [Google Scholar]
- Samann, F.E.; Abdulazeez, A.M.; Askar, S. Fog Computing Based on Machine Learning: A Review. Int. J. Interact. Mob. Technol. 2021, 15, 21–46. [Google Scholar] [CrossRef]
- Gupta, S.; Singh, N. Toward intelligent resource management in dynamic Fog Computing-based Internet of Things environment with Deep Reinforcement Learning: A survey. Int. J. Commun. Syst. 2023, 36, e5411. [Google Scholar] [CrossRef]
- Aqib, M.; Kumar, D.; Tripathi, S. Machine learning for fog computing: Review, opportunities and a fog application classifier and scheduler. Wirel. Pers. Commun. 2023, 129, 853–880. [Google Scholar] [CrossRef]
- Abdulazeez, D.H.; Askar, S.K. Offloading mechanisms based on reinforcement learning and deep learning algorithms in the fog computing environment. IEEE Access 2023, 11, 12555–12586. [Google Scholar] [CrossRef]
- Tran-Dang, H.; Bhardwaj, S.; Rahim, T.; Musaddiq, A.; Kim, D.-S. Reinforcement learning based resource management for fog computing environment: Literature review, challenges, and open issues. J. Commun. Netw. 2022, 24, 83–98. [Google Scholar] [CrossRef]
- Hassannataj Joloudari, J.; Mojrian, S.; Saadatfar, H.; Nodehi, I.; Fazl, F.; Khanjani Shirkharkolaie, S.; Alizadehsani, R.; Kabir, H.D.; Tan, R.-S.; Acharya, U.R. Resource allocation problem and artificial intelligence: The state-of-the-art review (2009–2023) and open research challenges. Multimed. Tools Appl. 2024, 83, 67953–67996. [Google Scholar] [CrossRef]
- Ghobaei-Arani, M.; Souri, A.; Rahmanian, A.A. Resource management approaches in fog computing: A comprehensive review. J. Grid Comput. 2020, 18, 1–42. [Google Scholar] [CrossRef]
- Hosseinzadeh, M.; Azhir, E.; Lansky, J.; Mildeova, S.; Ahmed, O.H.; Malik, M.H.; Khan, F. Task scheduling mechanisms for fog computing: A systematic survey. IEEE Access 2023, 11, 50994–51017. [Google Scholar] [CrossRef]
- Park, J.S. New Technologies and Applications of Edge/Fog Computing Based on Artificial Intelligence and Machine Learning. Appl. Sci. 2024, 14, 5583. [Google Scholar] [CrossRef]
- Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ 2021, 372, n71. [Google Scholar] [CrossRef] [PubMed]
- Omrany, H.; Chang, R.; Soebarto, V.; Zhang, Y.; Ghaffarianhoseini, A.; Zuo, J. A bibliometric review of net zero energy building research 1995–2022. Energy Build. 2022, 262, 111996. [Google Scholar] [CrossRef]
- La, Q.D.; Ngo, M.V.; Dinh, T.Q.; Quek, T.Q.; Shin, H. Enabling intelligence in fog computing to achieve energy and latency reduction. Digit. Commun. Netw. 2019, 5, 3–9. [Google Scholar] [CrossRef]
- Talaat, F.M.; Saraya, M.S.; Saleh, A.I.; Ali, H.A.; Ali, S.H. A load balancing and optimization strategy (LBOS) using reinforcement learning in fog computing environment. J. Ambient. Intell. Humaniz. Comput. 2020, 11, 4951–4966. [Google Scholar] [CrossRef]
- Liang, Y.; Li, W.; Lu, X.; Wang, S. Fog computing and convolutional neural network enabled prognosis for machining process optimization. J. Manuf. Syst. 2019, 52, 32–42. [Google Scholar] [CrossRef]
- Jamil, B.; Ijaz, H.; Shojafar, M.; Munir, K. IRATS: A DRL-based intelligent priority and deadline-aware online resource allocation and task scheduling algorithm in a vehicular fog network. Ad Hoc Netw. 2023, 141, 103090. [Google Scholar] [CrossRef]
- Amzil, A.; Abid, M.; Hanini, M.; Zaaloul, A.; El Kafhali, S. Stochastic analysis of fog computing and machine learning for scalable low-latency healthcare monitoring. Clust. Comput. 2024, 27, 6097–6117. [Google Scholar] [CrossRef]
- Atoum, M.S.; Pati, A.; Parhi, M.; Pattanayak, B.K.; Khader, A.; Habboush, M.A.; Qalaja, E. A Fog-Enabled Framework for Ensemble Machine Learning-Based Real-Time Heart Patient Diagnosis. Int. J. Eng. Trends Technol. 2023, 71, 39–47. [Google Scholar] [CrossRef]
- Faraji-Mehmandar, M.; Jabbehdari, S.; Haj Seyyed Javadi, H. A dynamic fog service provisioning approach for IoT applications. Int. J. Commun. Syst. 2020, 33, e4541. [Google Scholar] [CrossRef]
- Sethi, V.; Pal, S. FedDOVe: A Federated Deep Q-learning-based Offloading for Vehicular fog computing. Future Gener. Comput. Syst. 2023, 141, 96–105. [Google Scholar] [CrossRef]
- Nassar, A.; Yilmaz, Y. Reinforcement learning for adaptive resource allocation in fog RAN for IoT with heterogeneous latency requirements. IEEE Access 2019, 7, 128014–128025. [Google Scholar] [CrossRef]
- Fan, G.; Deng, Z.; Ye, Q.; Wang, B. Machine learning-based prediction models for patients no-show in online outpatient appointments. Data Sci. Manag. 2021, 2, 45–52. [Google Scholar] [CrossRef]
- Tahmasebi-Pouya, N.; Sarram, M.A.; Mostafavi, S. A reinforcement learning-based load balancing algorithm for fog computing. Telecommun. Syst. 2023, 84, 321–339. [Google Scholar] [CrossRef]
- Sukanya, V.; Jawade, P.B.; Jayanthi, M. An optimized deep learning framework to enhance internet of things and fog based health care monitoring paradigm. Multimed. Tools Appl. 2024, 1–21. [Google Scholar] [CrossRef]
- Wang, Z.; Goudarzi, M.; Gong, M.; Buyya, R. Deep Reinforcement Learning-based scheduling for optimizing system load and response time in edge and fog computing environments. Future Gener. Comput. Syst. 2024, 152, 55–69. [Google Scholar] [CrossRef]
- Memon, S.; Maheswaran, M. Using machine learning for handover optimization in vehicular fog computing. In Proceedings of the 34th ACM/SIGAPP Symposium on Applied Computing, Limassol, Cyprus, 8–12 April 2019; pp. 182–190. [Google Scholar]
- Faraji-Mehmandar, M.; Jabbehdari, S.; Javadi, H.H.S. A self-learning approach for proactive resource and service provisioning in fog environment. J. Supercomput. 2022, 78, 16997–17026. [Google Scholar] [CrossRef]
- Shabir, B.; Rahman, A.U.; Malik, A.W.; Buyya, R.; Khan, M.A. A federated multi-agent deep reinforcement learning for vehicular fog computing. J. Supercomput. 2023, 79, 6141–6167. [Google Scholar] [CrossRef]
- Bai, W.; Qian, C. Deep reinforcement learning for joint offloading and resource allocation in fog computing. In Proceedings of the 2021 IEEE 12th International Conference on Software Engineering and Service Science (ICSESS), Beijing, China, 20–22 August 2021; pp. 131–134. [Google Scholar]
- Sharma, A.; Thangaraj, V. Intelligent service placement algorithm based on DDQN and prioritized experience replay in IoT-Fog computing environment. Internet Things 2024, 25, 101112. [Google Scholar] [CrossRef]
- Jiang, F.; Ma, R.; Gao, Y.; Gu, Z. A reinforcement learning-based computing offloading and resource allocation scheme in F-RAN. EURASIP J. Adv. Signal Process. 2021, 2021, 91. [Google Scholar] [CrossRef]
- Talaat, F.M. Effective prediction and resource allocation method (EPRAM) in fog computing environment for smart healthcare system. Multimed. Tools Appl. 2022, 81, 8235–8258. [Google Scholar] [CrossRef]
- Sarkar, I.; Kumar, S. Deep learning-based energy-efficient computational offloading strategy in heterogeneous fog computing networks. J. Supercomput. 2022, 78, 15089–15106. [Google Scholar] [CrossRef]
- Lakhan, A.; Mohammed, M.A.; Obaid, O.I.; Chakraborty, C.; Abdulkareem, K.H.; Kadry, S. Efficient deep-reinforcement learning aware resource allocation in SDN-enabled fog paradigm. Autom. Softw. Eng. 2022, 29, 1–25. [Google Scholar] [CrossRef]
- Chen, S.; Wang, Q.; Zhu, X. Energy and delay co-aware intelligent computation offloading and resource allocation for fog computing networks. Multimed. Tools Appl. 2023, 83, 56737–56762. [Google Scholar] [CrossRef]
- Ibrahim, M.A.; Askar, S. An Intelligent Scheduling Strategy in Fog Computing System Based on Multi-Objective Deep Reinforcement Learning Algorithm. IEEE Access 2023, 11, 133607–133622. [Google Scholar] [CrossRef]
- Kamruzzaman, M.; Alanazi, S.; Alruwaili, M.; Alrashdi, I.; Alhwaiti, Y.; Alshammari, N. Fuzzy-assisted machine learning framework for the fog-computing system in remote healthcare monitoring. Measurement 2022, 195, 111085. [Google Scholar] [CrossRef]
- Jazayeri, F.; Shahidinejad, A.; Ghobaei-Arani, M.J. Autonomous computation offloading and auto-scaling the in the mobile fog computing: A deep reinforcement learning-based approach. J. Ambient. Intell. Humaniz. Comput. 2021, 12, 8265–8284. [Google Scholar] [CrossRef]
- Alshammari, N.; Pervaiz, H.; Ahmed, H.; Ni, Q. Delay and Total Network Usage Optimisation Using GGCN in Fog Computing. In Proceedings of the 2023 IEEE 34th Annual International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC), Toronto, ON, Canada, 5–8 September 2023; pp. 1–6. [Google Scholar]
- Ramezani Shahidani, F.; Ghasemi, A.; Toroghi Haghighat, A.; Keshavarzi, A. Task scheduling in edge-fog-cloud architecture: A multi-objective load balancing approach using reinforcement learning algorithm. Computing 2023, 105, 1337–1359. [Google Scholar] [CrossRef]
- Baccarelli, E.; Scarpiniti, M.; Momenzadeh, A.; Ahrabi, S.S. Learning-in-the-fog (LiFo): Deep learning meets fog computing for the minimum-energy distributed early-exit of inference in delay-critical IoT realms. IEEE Access 2021, 9, 25716–25757. [Google Scholar] [CrossRef]
- Jing, B.; Xue, H. IoT Fog Computing Optimization Method Based on Improved Convolutional Neural Network. IEEE Access 2023, 12, 2398–2408. [Google Scholar] [CrossRef]
- Gazori, P.; Rahbari, D.; Nickray, M. Saving time and cost on the scheduling of fog-based IoT applications using deep reinforcement learning approach. Future Gener. Comput. Syst. 2020, 110, 1098–1115. [Google Scholar] [CrossRef]
- Suryadevara, N.K. Energy and latency reductions at the fog gateway using a machine learning classifier. Sustain. Comput. Inform. Syst. 2021, 31, 100582. [Google Scholar] [CrossRef]
- Abdulazeez, D.H.; Askar, S.K. A Novel Offloading Mechanism Leveraging Fuzzy Logic and Deep Reinforcement Learning to Improve IoT Application Performance in a Three-Layer Architecture within the Fog-Cloud Environment. IEEE Access 2024, 12, 39936–39952. [Google Scholar] [CrossRef]
- Garg, K.; Chauhan, N.; Agrawal, R. Optimized resource allocation for fog network using neuro-fuzzy offloading approach. Arab. J. Sci. Eng. 2022, 47, 10333–10346. [Google Scholar] [CrossRef]
- Mishra, K.; Rajareddy, G.N.; Ghugar, U.; Chhabra, G.S.; Gandomi, A.H. A collaborative computation and offloading for compute-intensive and latency-sensitive dependency-aware tasks in dew-enabled vehicular fog computing: A federated deep Q-learning approach. IEEE Trans. Netw. Serv. Manag. 2023, 20, 4600–4614. [Google Scholar] [CrossRef]
- Etemadi, M.; Ghobaei-Arani, M.; Shahidinejad, A. A cost-efficient auto-scaling mechanism for IoT applications in fog computing environment: A deep learning-based approach. Clust. Comput. 2021, 24, 3277–3292. [Google Scholar] [CrossRef]
- Talaat, F.M.; Ali, S.H.; Saleh, A.I.; Ali, H.A. Effective load balancing strategy (ELBS) for real-time fog computing environment using fuzzy and probabilistic neural networks. J. Netw. Syst. Manag. 2019, 27, 883–929. [Google Scholar] [CrossRef]
- Ateya, A.A.; Soliman, N.F.; Alkanhel, R.; Alhussan, A.A.; Muthanna, A.; Koucheryavy, A. Lightweight deep learning-based model for traffic prediction in fog-enabled dense deployed iot networks. J. Electr. Eng. Technol. 2023, 18, 2275–2285. [Google Scholar] [CrossRef]
- Ahlawat, C.; Krishnamurthi, R. HCDQN-ORA: A novel hybrid clustering and deep Q-network technique for dynamic user location-based optimal resource allocation in a fog environment. J. Supercomput. 2024, 80, 1–52. [Google Scholar] [CrossRef]
- Faraji-Mehmandar, M.; Jabbehdari, S.; Javadi, H.H.S. Fuzzy Q-learning approach for autonomic resource provisioning of IoT applications in fog computing environments. J. Ambient Intell. Humaniz. Comput. 2023, 14, 4237–4255. [Google Scholar] [CrossRef]
- Zare, M.; Sola, Y.E.; Hasanpour, H. Towards distributed and autonomous IoT service placement in fog computing using asynchronous advantage actor-critic algorithm. J. King Saud Univ.-Comput. Inf. Sci. 2023, 35, 368–381. [Google Scholar] [CrossRef]
- Santos, J.; Wauters, T.; Volckaert, B.; De Turck, F. Resource provisioning in fog computing through deep reinforcement learning. In Proceedings of the 2021 IFIP/IEEE International Symposium on Integrated Network Management (IM), Bordeaux, France, 17–21 May 2021; pp. 431–437. [Google Scholar]
- Baek, J.; Kaddoum, G. Online partial offloading and task scheduling in SDN-fog networks with deep recurrent reinforcement learning. IEEE Internet Things J. 2021, 9, 11578–11589. [Google Scholar] [CrossRef]
- Chandak, A.; Ray, N.K. A review of load balancing in fog computing. In Proceedings of the 2019 International Conference on Information Technology (ICIT), Bhubaneswar, India, 19–21 December 2019; pp. 460–465. [Google Scholar]
- Khansari, M.E.; Sharifian, S. A scalable modified deep reinforcement learning algorithm for serverless IoT microservice composition infrastructure in fog layer. Future Gener. Comput. Syst. 2024, 153, 206–221. [Google Scholar] [CrossRef]
- Sami, H.; Mourad, A.; Otrok, H.; Bentahar, J. Demand-driven deep reinforcement learning for scalable fog and service placement. IEEE Trans. Serv. Comput. 2021, 15, 2671–2684. [Google Scholar] [CrossRef]
- Tan, J.; Guan, W. Resource allocation of fog radio access network based on deep reinforcement learning. Eng. Rep. 2022, 4, e12497. [Google Scholar] [CrossRef]
- Iftikhar, S.; Golec, M.; Chowdhury, D.; Gill, S.S.; Uhlig, S. FogDLearner: A Deep Learning-based Cardiac Health Diagnosis Framework using Fog Computing. In Proceedings of the Australasian Computer Science Week 2022, Brisbane, Australia, 14–18 February 2022; pp. 136–144. [Google Scholar]
- Verma, P.; Tiwari, R.; Hong, W.-C.; Upadhyay, S.; Yeh, Y.-H. FETCH: A deep learning-based fog computing and IoT integrated environment for healthcare monitoring and diagnosis. IEEE Access 2022, 10, 12548–12563. [Google Scholar] [CrossRef]
- Habibi, P.; Farhoudi, M.; Kazemian, S.; Khorsandi, S.; Leon-Garcia, A. Fog computing: A comprehensive architectural survey. IEEE Access 2020, 8, 69105–69133. [Google Scholar] [CrossRef]
- Abbasi, M.; Yaghoobikia, M.; Rafiee, M.; Jolfaei, A.; Khosravi, M.R. Efficient resource management and workload allocation in fog–cloud computing paradigm in IoT using learning classifier systems. Comput. Commun. 2020, 153, 217–228. [Google Scholar] [CrossRef]
- Iftikhar, S.; Ahmad, M.M.M.; Tuli, S.; Chowdhury, D.; Xu, M.; Gill, S.S.; Uhlig, S. HunterPlus: AI based energy-efficient task scheduling for cloud–fog computing environments. Internet Things 2023, 21, 100667. [Google Scholar] [CrossRef]
- Naouri, A.; Nouri, N.A.; Khelloufi, A.; Sada, A.B.; Ning, H.; Dhelim, S. Efficient fog node placement using nature-inspired metaheuristic for IoT applications. Clust. Comput. 2024, 27, 8225–8241. [Google Scholar] [CrossRef]
- Nazeri, M.; Soltanaghaei, M.; Khorsand, R. A predictive energy-aware scheduling strategy for scientific workflows in fog computing. Expert Syst. Appl. 2024, 247, 123192. [Google Scholar] [CrossRef]
- Kashani, M.H.; Mahdipour, E. Load balancing algorithms in fog computing. IEEE Trans. Serv. Comput. 2022, 16, 1505–1521. [Google Scholar] [CrossRef]
- Mseddi, A.; Jaafar, W.; Elbiaze, H.; Ajib, W. Intelligent resource allocation in dynamic fog computing environments. In Proceedings of the 2019 IEEE 8th International Conference on Cloud Networking (CloudNet), Coimbra, Portugal, 4–6 November 2019; pp. 1–7. [Google Scholar]
- Poltronieri, F.; Tortonesi, M.; Stefanelli, C.; Suri, N. Reinforcement learning for value-based placement of fog services. In Proceedings of the 2021 IFIP/IEEE International Symposium on Integrated Network Management (IM), Bordeaux, France, 17–21 May 2021; pp. 466–472. [Google Scholar]
- Goel, G.; Tiwari, R. Resource scheduling techniques for optimal quality of service in fog computing environment: A review. Wirel. Pers. Commun. 2023, 131, 141–164. [Google Scholar] [CrossRef]
- Boudieb, W.; Malki, A.; Malki, M.; Badawy, A.; Barhamgi, M. Microservice instances selection and load balancing in fog computing using deep reinforcement learning approach. Future Gener. Comput. Syst. 2024, 156, 77–94. [Google Scholar] [CrossRef]
- Eyckerman, R.; Reiter, P.; Latré, S.; Marquez-Barja, J.; Hellinckx, P. Application placement in fog environments using multi-objective reinforcement learning with maximum reward formulation. In Proceedings of the NOMS 2022—2022 IEEE/IFIP Network Operations and Management Symposium, Budapest, Hungary, 25–29 April 2022; pp. 1–6. [Google Scholar]
- Shi, J.; Du, J.; Wang, J.; Yuan, J. Deep reinforcement learning-based V2V partial computation offloading in vehicular fog computing. In Proceedings of the 2021 IEEE Wireless Communications and Networking Conference (WCNC), Nanjing, China, 29 March–1 April 2021; pp. 1–6. [Google Scholar]
- Baek, J.; Kaddoum, G. Heterogeneous task offloading and resource allocations via deep recurrent reinforcement learning in partial observable multifog networks. IEEE Internet Things J. 2020, 8, 1041–1056. [Google Scholar] [CrossRef]
- Santos, J.; Wauters, T.; Volckaert, B.; De Turck, F. Reinforcement learning for service function chain allocation in fog computing. Commun. Netw. Serv. Manag. Era Artif. Intell. Mach. Learn. 2021, 7, 147–173. [Google Scholar]
- Liu, H.-I.; Galindo, M.; Xie, H.; Wong, L.-K.; Shuai, H.-H.; Li, Y.-H.; Cheng, W.-H. Lightweight Deep Learning for Resource-Constrained Environments: A Survey. ACM Comput. Surv. 2024, 56, 1–42. [Google Scholar] [CrossRef]
- Mao, Y.; You, C.; Zhang, J.; Huang, K.; Letaief, K.B. A survey on mobile edge computing: The communication perspective. IEEE Commun. Surv. Tutor. 2017, 19, 2322–2358. [Google Scholar] [CrossRef]
- Mattia, G.P.; Beraldi, R. Online Decentralized Scheduling in Fog Computing for Smart Cities Based On Reinforcement Learning. IEEE Trans. Cogn. Commun. Netw. 2024, 10, 1551–1565. [Google Scholar] [CrossRef]
- Lin, C.-C.; Peng, Y.-C.; Chen, Z.-Y.A.; Fan, Y.-H.; Chin, H.-H. Distributed Flexible Job Shop Scheduling through Deploying Fog and Edge Computing in Smart Factories Using Dual Deep Q Networks. Mob. Netw. Appl. 2024, 29, 886–904. [Google Scholar] [CrossRef]
- Singh, P.P.; Anik, F.I.; Senapati, R.; Sinha, A.; Sakib, N.; Hossain, E. Investigating customer churn in banking: A machine learning approach and visualization app for data science and management. Data Sci. Manag. 2024, 7, 7–16. [Google Scholar] [CrossRef]
- Shefa, F.R.; Sifat, F.H.; Uddin, J.; Ahmad, Z.; Kim, J.-M.; Kibria, M.G. Deep Learning and IoT-Based Ankle–Foot Orthosis for Enhanced Gait Optimization. Healthcare 2024, 12, 2273. [Google Scholar] [CrossRef]
ML | DL | NN | RL | Latency | Energy Consumption and Power Consumption | Resource Utilization | Cost Efficiency | QoS | Task Scheduling | Load Balancing and Service Placement | CPU Utilization | Scalability | Ref. |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
✗ | ✓ | ✗ | ✓ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | [8] |
✓ | ✓ | ✗ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | [9] |
✓ | ✓ | ✗ | ✓ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | [10] |
✓ | ✓ | ✗ | ✗ | ✓ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | [11] |
✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | [12] |
✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | [13] |
✗ | ✓ | ✗ | ✓ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | [14] |
✓ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | [15] |
✗ | ✓ | ✗ | ✗ | ✓ | ✓ | ✗ | ✗ | ✗ | ✗ | ✓ | ✓ | ✗ | [16] |
✗ | ✓ | ✓ | ✓ | ✓ | ✓ | ✗ | ✗ | ✓ | ✗ | ✓ | ✗ | ✗ | [17] |
✓ | ✓ | ✓ | ✓ | ✓ | ✗ | ✗ | ✗ | ✓ | ✗ | ✓ | ✓ | ✓ | [18] |
✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | [19] |
✓ | ✓ | ✓ | ✓ | ✓ | ✗ | ✗ | ✗ | ✗ | ✓ | ✓ | ✗ | ✗ | [20] |
✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | Proposed work |
No. | Research Questions | Significance |
---|---|---|
RQ 1. | What are the state-of-the-art ML and DL techniques leveraging resource management in fog computing? | It will highlight the most recent and advanced ML/DL techniques applied in fog computing for resource management, which in turn will highlight potential opportunities for future research and development. |
RQ 2. | What ML/DL algorithms are used for resource management in fog computing? | It will unravel the current capabilities, leading toward optimal resource allocation in fog computing. |
RQ 3. | What are the significant challenges to optimizing resource management in fog computing using ML/DL? | It will underscore significant challenges faced in optimizing resource management in fog computing. Identification of robust ML/DL solutions for the issues will pave the way for more intelligent and efficient fog computing solutions. |
RQ 4. | How can ML/DL enable dynamic resource allocation and distribution across the layers, i.e., edge–fog, edge–fog–cloud, or manage it on fog? | It will examine how ML/DL can automatically distribute tasks and resources across the edge, fog, and cloud layers to achieve the best performance at the lowest cost. |
Cat ** | Models | Usage/Application | Strengths | NOR * |
---|---|---|---|---|
ML | Decision trees | Optimal node selection | Straightforward and effective, but prone to overfitting with complex data | 7 |
Task distribution | ||||
KNN | Based on workload characteristics | Intuitive and adapts well to local variations, but can be computationally expensive | ||
Adjusts load based on proximity | ||||
SVM | Optimal resource utilization | Effective in complex scenarios, handling large datasets and nonlinear relationships | ||
Random forest | Resource usage forecasting | Accurate and robust, but computationally intensive | ||
Dynamic decision-making | ||||
Identifying idle resources | ||||
Naïve Bayes | Based on probabilistic predictions | Simple and efficient, but weak at handling complex dependencies | ||
Anomaly detection | ||||
Optimal assignment | ||||
FL | Fuzzy logic (FL) | Uncertainty-based modeling | Adapts well to uncertainty, but requires expert knowledge for rule definition | 4 |
Neuro-fuzzy | Optimal resource policies | Integrates adaptive learning with robustness but is complex to implement and train | ||
DRL | Q-Learning | Adaptive resource allocation policies | Q-learning can adapt to dynamic environments. However, its convergence is slower | 41 |
Real-time decision-making | ||||
SARSA | Immediate action based on states | Effective for immediate rewards, weak response for delayed rewards | ||
The decision is made based on learned rewards | ||||
Expected SARSA | Predicting rewards for allocations | Balances exploration and exploitation, is sensitive to reward estimation errors, and adjusts learning rates | ||
AWRR | Real-time load balancing | It is effective for real-time load balancing, which requires continuous monitoring | ||
Allocating tasks to high-performance nodes | ||||
DQN | Optimizes resource allocation and task-offloading decisions | Effective in learning complex policies but computationally intensive | ||
DDQN | Enhances Q-value estimation and stability in dynamic environments | Reduces overestimation bias, improving learning efficiency | ||
DDPG | Manages continuous resource allocation and optimization | Handles complex tasks and constant action spaces effectively | ||
DRQN | Manages tasks with temporal dependencies and dynamic environments | Suitable for tasks requiring memory of past states and actions | ||
Advantage actor-critic (A2C) | Balances exploration and exploitation for adaptive resource management | Provides faster convergence and improved policy learning | ||
Federated Deep Q-learning-based offloading (FedDOVe) | Enables collaborative optimization of resource allocation across distributed fog nodes | Enhances scalability and robustness in federated fog environments | ||
Learning-in-the-fog (LiFo) | Enables autonomous decision-making at the edge for faster response times | Reduces latency and bandwidth consumption by minimizing reliance on centralized servers | ||
DL | CNN | Analyzes image or sensor data for real-time decision-making | Efficient for spatial data processing, suitable for image tasks in fog environments | 13 |
Graph convolutional neural network (GCCN) | Manages network topology analysis, routing optimization, and graph-based resource allocation | Handles complex relationships in network data, improving scalability and performance | ||
PNN | Predicts resource demand | Robust probabilistic modeling, but requires substantial training and computation | ||
Identifying deviation |
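The AWRR entry in the table above can be illustrated with a toy weighted dispatcher. This is a naive blocked variant under assumed node names and capacity weights, not the exact algorithm of any surveyed paper; production load balancers typically interleave assignments more smoothly within a cycle.

```python
from itertools import islice

# Hypothetical fog nodes and capacity weights (assumed values): a node with
# weight 3 should receive three times as many tasks per cycle as weight 1.
weights = {"node-a": 1, "node-b": 2, "node-c": 3}

def weighted_round_robin(weights):
    """Endlessly yield node names, each repeated `weight` times per cycle
    (a blocked variant; smooth WRR would interleave the repeats)."""
    while True:
        for node, w in weights.items():
            for _ in range(w):
                yield node

dispatcher = weighted_round_robin(weights)
# Drain exactly one full cycle: each node appears weight-many times.
one_cycle = list(islice(dispatcher, sum(weights.values())))
print(one_cycle)
```

In the RL-augmented AWRR schemes the survey covers, the weights themselves are not static as here but are adapted online (e.g., via Q-learning) as node load and latency change.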
Technique | Explanation | Issues | Ref. |
---|---|---|---|
RL | Q-learning in RL for load balancing to optimize network performance. | Delay | [33] |
Q-Learning | |||
Q-Network | Uses Q-networks and neural networks to improve handover efficiency and offloading decisions. | [36] | |
DRQN | |||
RNN | |||
DRRL | |||
MODRL | Applies multi-objective deep reinforcement learning (MODRL) to minimize task completion time and optimize resource utilization. | Minimize Latency | [29] |
Q-Networks | |||
DQ-Learning | Reduces energy consumption at RSUs, balances computation load, and optimizes task offloading. | [30] | |
FedDOVe | |||
DRL | Applies the Deep Reinforcement Learning-based IoT application Scheduling (DRLIS) algorithm to reduce response time and balance server load. | [35] |
BMNF | Utilizes NNs to optimize latency and energy consumption. | [34] |
DRL | Reduces delay, improves resource utilization, and optimizes costs. | [37] | |
DNN | |||
Multi-agent DRL | Uses multi-agent DRL to reduce end-to-end delay. | [38] | |
A2C | Applies DRL-based A2C algorithm for adaptive resource management. | [39] | |
A2C | Applies DRL-based A2C algorithm for adaptive resource management. | [40] | |
DDQN | Optimizes service placement and latency | ||
Decision Tree | Uses ML classifiers for optimizing latency and energy consumption. | [41] | |
DDQN | It uses DDQN and DQN phases to make offloading decisions and allocate resources. ||
DRL | Utilizes DRL and NN for effective prediction and resource allocation (EPRAM). | ||
PNN | |||
MORL | It uses MORL and NN to optimize service placement in heterogeneous fog environments. | [42] | |
DNN | |||
DQN | |||
DQBRA | Uses DQN-based resource allocation (DQBRA) for optimal communication and computational delays. | [43] | |
DDN | Utilizes DL for joint offloading decisions and resource allocation in fog computing. | [44] | |
RNN | Improves handover efficiency and service continuity in the Internet of Vehicles (IoV) with intelligent fog node selection. | [45] | |
Feed-forward NN | |||
JODRA | DRL-based joint offloading decision and resource allocation (JODRA) with RL improves decision-making; support vector regression (SVR) is used for workload prediction, reducing average delay and cost. ||
RL | |||
SVR | |||
F-AMLF | The fuzzy-assisted machine learning framework (F-AMLF) optimizes resource allocation and reduces latency. | [46] |
GCCN | Enhances fog computing performance with intelligent resource allocation using GCCN. | Loop delay | [47] |
AWRR | Advanced weighted round robin (AWRR) uses Q-learning RL algorithms to optimize makespan and resource allocation. | Makespan | [48] |
RLFS | Optimizes task scheduling and reduces response time using RL and Fuzzy Systems (RLFS) techniques. | Response Time | [49] |
CML | Utilizes collaborative ML (CML) in software-defined networking (SDN), simulated in CloudSimSDN v2.0, to optimize response time and energy consumption. | [24] |
DQN | Minimizes waiting time, delay, and packet loss while maximizing task completion percentage. | Waiting Time | [50] |
IRAT | | End-to-end delay |
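The Q-learning entries above (e.g., [33,35]) share a common loop: observe node load, choose a placement action, receive a reward that penalizes delay and imbalance, and update a value table. The sketch below illustrates that loop on a hypothetical two-node fog system; the state encoding, reward shaping, and hyperparameters are invented for illustration and are not taken from any surveyed paper.

```python
import random

# Toy model: states are discretized load levels of two fog nodes,
# actions assign an incoming task to node 0 or node 1.
random.seed(0)

N_LEVELS = 4                      # discretized load levels per node
ACTIONS = [0, 1]                  # which fog node receives the task
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

Q = {}                            # Q[(load0, load1)] -> [q_action0, q_action1]

def get_q(state):
    return Q.setdefault(state, [0.0, 0.0])

def step(state, action):
    """Assign a task to node `action`; reward penalizes imbalance and queueing."""
    loads = list(state)
    loads[action] = min(loads[action] + 1, N_LEVELS - 1)
    # each node finishes some work between arrivals
    loads = [max(l - random.randint(0, 1), 0) for l in loads]
    reward = -abs(loads[0] - loads[1]) - loads[action]   # balance + delay penalty
    return tuple(loads), reward

state = (0, 0)
for _ in range(5000):
    qs = get_q(state)
    action = random.choice(ACTIONS) if random.random() < EPSILON else qs.index(max(qs))
    nxt, reward = step(state, action)
    # standard Q-learning update
    qs[action] += ALPHA * (reward + GAMMA * max(get_q(nxt)) - qs[action])
    state = nxt

# the learned policy should tend to route tasks to the less-loaded node
policy = {s: get_q(s).index(max(get_q(s))) for s in [(3, 0), (0, 3)]}
print(policy)
```

The same observe/act/update skeleton underlies the DQN and DDQN variants in the table; they replace the `Q` dictionary with a neural network so that large or continuous state spaces become tractable.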
Technique | Explanation | Issues | Refs. |
---|---|---|---|
DRL | It uses DRL and enhances versions to learn optimal offloading policies while considering resource availability, intelligent decisions, network conditions, and the nature of tasks. | Task Offloading | [35,46] |
CDDN | Conditional deep neural networks (CDDN) are used for task offloading, where the tasks are efficiently placed on processing devices according to their needs. | [51] | |
CNN | This fog-enabled architecture using CNNs efficiently distributes processes between terminal fog and cloud layers. | [25,52] | |
ML | ML algorithms and classifiers (e.g., random forest, SVM) are used to predict the nature of the tasks and make informed offloading decisions. | [27,53,54] | |
FL and DQN | The combination of FL and DQN gives a novel task-offloading technique, which, overall, improves the efficiency of the system. | [47,55] | |
FL and neuro-fuzzy | A novel resource allocation model is presented by combining FL and neuro-fuzzy techniques. This hybrid approach uses FL to model the orchestration decision based on various factors, and the neuro-fuzzy model uses fuzzy sets and rules for decision-making. | [56] | |
NN DNN | It utilizes NN to learn complex patterns in data and optimize offloading decisions. | [43,57] | |
RNN | It uses RNN to predict resource requirements based on workload. In response, it dynamically adjusts the number of resources to match the workload. | [58] | |
PNN | This research work efficiently manages workload by employing probabilistic neural network (PNN)-based algorithms. | [59] | |
CNN | A CNN is used to predict the increasing volume of traffic patterns generated by IoT devices and enables proactive network management. | [60] | |
DRLIS | The DRLIS is used for adaptive decisions and is integrated with the FogBus2 framework to minimize execution cost, response time, and load imbalance, resulting in optimized resource utilization and efficient load balancing. | Offloading Decisions | [35] |
DQN | DQN can learn optimal offloading decisions through interaction with the environment and rewards. | [26,61] | |
It uses intelligent fog service placement and an intelligent fog service scheduler for task offloading. | [58] ||
DRL | It uses offloading as a strategy to optimize resource utilization by distributing the workload across multiple nodes. This approach addresses the challenges of a dynamic environment where task arrival rates and network conditions fluctuate, which affects offloading decisions. | [38] | |
DDQN | In this research, a novel placement algorithm based on DDQN is used, enabling optimal service placement in fog computing. | [40] | |
FL and DRL | A novel resource allocation model is presented by combining FL and neuro-fuzzy techniques. This hybrid approach uses FL to model the orchestration decision based on various factors, and the neuro-fuzzy model uses fuzzy sets and rules for decision-making. | [55,56] | |
FL and Q-learning | Uses fuzzy Q-learning approach for resource provisioning, efficiently allocating resources results in managing a dynamic workload. | [62] | |
DRL | It uses DRL for optimal task scheduling, considering task dependencies, resource availability, and network conditions. | Task Scheduling | [63,64] |
DRL | DRL is used for efficient resource allocation, which is essential for supporting task migration and offloading decisions. | Task Migration | [44,59,65] |
DRL and DQN | Offloads computation-intensive tasks to edge servers to reduce processing time. | Processing Time | [38,58] |
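To make the fuzzy-logic rows above concrete: a fuzzy controller fuzzifies crisp measurements, fires IF–THEN rules, and defuzzifies the result into a decision. The membership functions, rule base, and thresholds below are illustrative assumptions, not the actual rules used in the surveyed systems [55,56].

```python
# Illustrative fuzzy offloading decision: crisp inputs (node load %, task
# deadline in ms) are fuzzified, simple rules fire, and a weighted average
# defuzzifies to an "offload score" in [0, 1].

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzify_load(load):            # load in [0, 100] %
    return {"low": tri(load, -1, 0, 50),
            "high": tri(load, 50, 100, 101)}

def fuzzify_deadline(ms):          # a looser deadline leaves room to offload
    return {"tight": tri(ms, -1, 0, 100),
            "loose": tri(ms, 50, 200, 201) if ms < 201 else 1.0}

def offload_score(load, deadline_ms):
    L, D = fuzzify_load(load), fuzzify_deadline(deadline_ms)
    # rule strength (min for AND); consequent: 1.0 = offload, 0.0 = run locally
    rules = [
        (min(L["high"], D["loose"]), 1.0),   # busy node, relaxed deadline -> offload
        (min(L["high"], D["tight"]), 0.5),   # busy but urgent -> nearby fog node
        (min(L["low"], 1.0), 0.0),           # lightly loaded -> keep local
    ]
    num = sum(w * c for w, c in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0

print(offload_score(90, 180))   # heavily loaded node, loose deadline -> 1.0
print(offload_score(10, 20))    # idle node, tight deadline -> 0.0
```

The neuro-fuzzy hybrids in the table keep this rule structure but learn the membership shapes from data instead of fixing them by hand.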
Technique | Explanation | Issues Addressed | Refs. |
---|---|---|---|
Random forest, SVM, decision trees | ML algorithms, i.e., random forest, SVM, and decision trees, can predict the nature of tasks and their characteristics, which can make effective decisions about scheduling, can fairly manage resources, reduce idle time and overall power consumption, and lead to improved power efficiency. | Power efficiency | [27,28,54] |
DRL, DDQN, Q-learning, DQN | DRL and RL algorithms are deployed to optimize task offloading, scheduling decisions, resource allocation, and resource utilization to minimize energy consumption by enhancing diverse policies that can handle workload variations and network conditions. | Energy consumption reduction | [26,30,35,37,38,40,44,45,53,65,68] |
NN, CNN, RNN | Neural network architectures such as CNN, RNNs, and feed-forward networks optimize energy consumption by learning to predict resource demands and dynamically adjusting resource allocation based on workload characteristics, leading to improved energy efficiency. | Energy efficiency | [25,36,43,70,74] |
FL, Neuro-fuzzy | Fuzzy systems can handle uncertainty, providing energy-efficient decision-making capabilities, and leveraging the system to adjust to real-time data and application for efficient resource allocation, which ensures optimal energy usage and reduces wastage in resource utilization. | Energy-efficient decision-making | [47,55,56,59,62] |
DL, DNN, A2C | DL techniques optimize power consumption by allocating resources effectively, reducing idle time, and matching resource allocation to workload demands to minimize power usage. | Power Usage Minimization | [34,37,39,69,71] |
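The energy-oriented techniques in the table ultimately steer a placement decision: concentrate work on energy-efficient nodes so the rest can idle or sleep. A greedy first-fit sketch of that idea follows; the node capacities and watt figures are invented, and the surveyed papers use learned policies rather than this fixed heuristic.

```python
# Greedy energy-aware placement: pack tasks onto the cheapest (in watts per
# unit of work) nodes that still have capacity, so idle nodes can power down.

def place_tasks(tasks, nodes):
    """tasks: list of CPU demands; nodes: name -> (capacity, watts_per_unit).
    Returns {node: [tasks]}, preferring the most energy-efficient node with room."""
    order = sorted(nodes, key=lambda n: nodes[n][1])   # cheapest energy first
    free = {n: nodes[n][0] for n in nodes}
    placement = {n: [] for n in nodes}
    for t in sorted(tasks, reverse=True):              # largest tasks first
        for n in order:
            if free[n] >= t:
                placement[n].append(t)
                free[n] -= t
                break
        else:
            raise ValueError(f"no capacity for task of size {t}")
    return placement

nodes = {"fog-a": (10, 1.0), "fog-b": (10, 1.5), "cloud": (100, 2.5)}
print(place_tasks([4, 4, 3, 6], nodes))
# -> {'fog-a': [6, 4], 'fog-b': [4, 3], 'cloud': []}
```

The RL approaches in the table learn when to deviate from such a greedy rule, for instance when packing a node tightly would violate a latency deadline.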
Technique | Explanation | Issues Addressed | Ref. |
---|---|---|---|
DQN | Uses DQN for packet scheduling, which minimizes delay, enhances task scheduling, and ensures fair resource allocation. | Minimize packet loss | [26] |
Waiting time | |||
Task scheduling | |||
Random Forest | It uses ML with a K-fold random forest to address latency, which in turn improves QoS and system service performance. | Latency | [27] |
QoS enhancement | |||
Neural Framework | It uses DL neural frameworks to reduce latency, address energy consumption challenges, and find hospitals’ locations for services. | Energy consumption | [34] |
Locating fog node facility | |||
DQN | DRL is a powerful tool that supports adaptability, optimization, and autonomy. DRL techniques such as DQN and Q-learning are deployed for adaptive scheduling, load balancing, and optimizing weighted cost allocations, addressing latency issues effectively. | Adaptive scheduling | [35] |
Q-learning | Load balancing | ||
Weighted cost | |||
DRL | DDQN is deployed with priority experience replay for service placement in fog computing. It reduces service execution time and manages node energy. Priority experience replay enhances the learning rate and improves the system’s performance. | Service placement | [40] |
Service execution time | |||
Node energy | |||
Fuzzy | Fuzzy-assisted ML techniques are applied for latency reduction, to lower energy consumption, and to improve decision-making for the placement of tasks on fog nodes. | Lower energy | [47] |
ML | Cost-effective | ||
Decision-making | |||
FL | Integrates neuro-fuzzy systems to manage the effective allocation of resources in fog computing. | Network latency | [56] |
NN | Computational delay | ||
Resource management optimization, task offloading |
DRL | Deep Q-learning and RNN are applied to balance load, resource utilization, and energy consumption. | Load balancing | [57] |
DQL | Resource utilization | ||
Energy consumption | |||
Response time | |||
A3C | Uses A3C to manage dynamic workloads, optimize resource utilization on fog servers, and meet deadlines. | QoS | [63] |
Q-Networks | Service placement |
Technique | Explanation | Issues Addressed | Refs. |
---|---|---|---|
DQN Q-learning | DRL techniques such as DQN and Q-learning optimize resource allocation and load balancing to reduce the weighted costs associated with service operations. | Adaptive Scheduling, Load Balancing, Weighted Cost | [35] |
DRL Q-Networks | Applies DRL and Q-networks to tackle QoS and service placement challenges while optimizing costs, ensuring that service deployments are cost-effective and aligned with performance metrics. | QoS Service Placement Problem Cost | [46] |
Fuzzy ML Q-learning | Efficient resource utilization, minimized energy consumption, and effective scheduling can reduce costs. | Cost-Effectiveness, Decision-making, Efficiency | [47,62,76] |
Technique | Explanation | Issues | Refs. |
---|---|---|---|
DQN | DQN helps prevent fog nodes from being either overburdened or underutilized. IFSP provides a fair distribution of requests across fog nodes. | Load balancing | [26,68,78] |
DRLIS | DRL addresses load balancing with the scheduling algorithm DRLIS. It optimizes the response time of heterogeneous IoT applications and balances the load of processing servers. | Load balancing | [35,61,79] |
Federated DQL | Federated learning and DQL are used to balance the load across fog environments. | Load balancing | [30,57] |
DL | Applies deep learning techniques to optimize energy efficiency, balance load, and manage network usage effectively in fog computing scenarios. | Load balancing | [49] |
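Several of the surveyed schedulers are benchmarked against classic balancers such as round robin and weighted round robin (see the comparisons referenced in [42]). For reference, here is a smooth weighted round-robin dispatcher; the node names and weights are illustrative.

```python
# Smooth weighted round robin: higher-weight fog nodes receive proportionally
# more requests, while consecutive picks are spread out to avoid bursts.

def make_wrr(weights):
    current = {n: 0 for n in weights}
    total = sum(weights.values())
    def next_node():
        for n in current:
            current[n] += weights[n]          # every node gains its weight
        best = max(current, key=current.get)  # pick the current leader
        current[best] -= total                # and charge it the round total
        return best
    return next_node

pick = make_wrr({"fog-a": 5, "fog-b": 1, "fog-c": 1})
seq = [pick() for _ in range(7)]
print(seq)   # fog-a appears 5 times out of 7, interleaved with b and c
```

The learned balancers in the table effectively replace the static weights with values that track real-time node load and network conditions.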
Technique | Explanation | Issue Addressed | Refs. |
---|---|---|---|
DRL | Uses DRL algorithms like DQL to optimize task offloading and resource allocation, reducing latency by making adaptive decisions based on real-time network conditions. | Latency | [35,40,44,61,71] |
NN, ML | Neural frameworks such as CNNs and RNNs can be employed to predict and reduce latency by efficient task scheduling and resource utilization in network environments. | [34,36,43,70] | |
Q-Learning | Implements Q-learning in RL models to optimize load balancing and resource allocation, thereby enhancing throughput and system efficiency. | Throughput | [33,49,69] |
DDQL | Enhances reliability and accuracy by using advanced RL techniques to minimize service delays, improve response times, and meet SLA requirements consistently. | Reliability and Accuracy | [53,81] |
DNN | Integrates DNNs with RL/DRL methods to achieve high reliability and accuracy in task execution, ensuring precise decision-making under dynamic network conditions. | [42,82,83] | |
DQN | Utilizes DQN for optimal resource allocation and task scheduling to meet deadlines and prevent SLA violations, prioritizing critical tasks based on their requirements. | Deadline and SLA Violation | [26,30,84] |
RL | Applies RL algorithms such as SARSA and Expected SARSA to enforce strict adherence to deadlines by optimizing task completion times and reducing waiting periods. | [29,31,68,85] |
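A recurring pattern behind the NN-based predictors above is forecast-then-provision: predict next-interval demand, then scale capacity with some headroom. In this sketch, exponential smoothing stands in for the RNN/CNN predictors, and the per-unit capacity and 20% headroom are assumed policy values, not figures from the surveyed work.

```python
# Predict next-interval demand (a simple stand-in for the surveyed RNN
# predictors) and provision fog capacity ahead of the load.

def forecast(history, alpha=0.5):
    """One-step-ahead exponentially smoothed forecast."""
    level = history[0]
    for x in history[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

def capacity_needed(history, per_unit=10, headroom=1.2):
    """Units of fog capacity to provision, with 20% headroom (assumed policy)."""
    demand = forecast(history) * headroom
    return -(-demand // per_unit)   # ceiling division

reqs = [80, 100, 120, 110, 130]     # requests per interval
print(capacity_needed(reqs))
```

Proactive provisioning of this kind is what lets the surveyed systems meet deadlines and SLA targets instead of reacting only after queues have built up.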
ML/DL | Algorithms | Key Impacts | Issue Addressed |
---|---|---|---|
ML | Decision Tree, SVM, PNN, random forest, LR, naïve Bayes, ML | Resource optimization, energy management, decision-making | Latency, Energy Consumption
RL | DRL, DRLIS, DDQN, DQ-Learning, A2C, RLFS, DRL-based IoT Scheduling, multi-agent DRL | Dynamic decision-making, latency reduction, resource optimization, energy efficiency, real-time performance, responsiveness | Latency, Resource Optimization, Energy Consumption, Response Time |
DL | BMNF, FedDOVe, MORL, F-AMLF, Collaborative ML, DQBRA, DDN, DQN | Complex pattern recognition, performance improvement, response time optimization, Federated learning, multi-objective optimization, collaborative learning, computational efficiency, accuracy, reliability, improved user experience, system performance | Energy Consumption, Performance Improvement, Resource Optimization, Response Time |
Cat * | Challenges | Challenge Description | IRM ** | Solution Prospect | IA *** |
---|---|---|---|---|---|
Computational Resource and Latency | Limited Resources | Processing Power | Complex models undeployable | Resource-efficient algorithms | DL Algorithms (e.g., CNN, RNN) |
Memory | Efficient model training techniques | ||||
Storage | Lightweight models | ||||
Compressed models | |||||
Latency Constraints | Critical factor for real-time applications | Missed time-critical tasks | Prioritize time-bound tasks to meet deadlines | DRL |
Network delay can impact performance | |||||
Energy Consumption | Balancing performance and energy efficiency | Increased operation cost | Optimizes resource allocation for critical tasks while considering energy efficiency | SVM
Reduced device lifespan |
Scalability and Management | Scalability | Dynamic workload | System complexity | Auto Scaling | ML/DL |
Large-scale fog node management | Resource Provisioning | Distributed resource management | |||
Deployment and maintenance | Complex deployment due to node heterogeneity | Monitoring | Resource allocation | ML/DL (DQN, DDQN, DRL, MORL, ML) | |
Infrastructure | Fault Tolerance | Task offloading | |||
Enhance QoS | |||||
Resource Allocation | Dynamic workload | Cost-effectiveness | Task Scheduling | (ML, DRL, NN, FNN, DQN) | |
Resource heterogeneity | User experience | Resource provisioning | |||
Latency constraints | System performance | ||||
Data Availability and Quality | Limited Data | Insufficient historical data is available for training | Inaccurate Model | Generate synthetic data | |
Performance inefficiency | Transfer learning | ||||
Data augmentation techniques | |||||
Data Heterogeneity | Data Processing | Inefficient resource utilization | Effective data management | ||
Data integration | Model Complexity | Robust ML/DL techniques used |||
Data Privacy and Security | Inconsistent formats, difficult data aggregation | Computation overhead, storage requirements | Federated learning ||
Model Complexity and Interpretability | Model Complexity | Computation overhead | Potential performance issues | Model optimization | ML/DL |
Overburden | Efficient resource utilization | ||||
Model Explainability | Trust and reliability | decision-making | Recognize patterns | ML/DL | |
Model Adaptability | Adapting to network conditions | Deployment efficiency | Learning Models | DL |
Value updating |
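The challenges table points to federated learning as a remedy for data privacy: fog nodes train locally and share only model parameters with the aggregator, never raw data. Below is a minimal FedAvg sketch on a one-parameter linear model; the datasets, learning rate, and round count are illustrative assumptions.

```python
# Minimal federated averaging (FedAvg): each fog node trains locally on
# private data; only model weights travel to the aggregator.

def local_update(weights, data, lr=0.1, epochs=20):
    """One-parameter linear model y = w*x trained by gradient descent."""
    w = weights
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def fed_avg(global_w, node_datasets):
    """Average client models weighted by their dataset sizes (standard FedAvg)."""
    updates = [(local_update(global_w, d), len(d)) for d in node_datasets]
    total = sum(n for _, n in updates)
    return sum(w * n for w, n in updates) / total

# two fog nodes observe y ≈ 3x with local noise; data never leaves its node
node_a = [(1, 3.1), (2, 5.9)]
node_b = [(1, 2.9), (3, 9.2), (2, 6.1)]
w = 0.0
for _ in range(5):                  # communication rounds
    w = fed_avg(w, [node_a, node_b])
print(round(w, 1))                   # converges near 3.0
```

The federated DQL entries in the load-balancing table follow the same protocol, with Q-network weights in place of the scalar `w`.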
Algorithms | Architecture | Task Services | Ref. |
---|---|---|---|
DQN | IoT–Fog–Cloud | Tasks are managed across all layers | [26] |
SVM | IoT–Fog–Cloud | Tasks served across IoT, fog, and cloud layers | [27] |
SVM | Fog–Cloud | Tasks managed across fog and cloud | [28] |
DQN | Mist–Fog–Cloud | Tasks managed in mist, fog, and cloud | [33] |
DRL | Edge–Fog | Edge-to-fog task migration based on conditions | [35] |
DRL | Fog–Cloud | Tasks handled in fog, with potential cloud migration | [38] |
DNN | Fog–Cloud | Tasks executed primarily in fog with cloud capabilities | [43] |
RLFS | Edge–Fog–Cloud | Tasks are scheduled dynamically across edge, fog, and cloud | [50] |
Fuzzy Logic | IoT–Fog–Cloud | Tasks are managed with FL across all layers | [55] |
Neuro-Fuzzy | Edge–Fog–Cloud | Tasks managed across edge, fog, and cloud with neuro-fuzzy techniques | [56] |
CNN | Fog–Cloud | Tasks processed on fog and cloud | [61] |
DNN | Fog–Cloud | Tasks served in fog and possible cloud migration | [88] |
DNN | Edge–Fog | Tasks managed across edge and fog | [89] |
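The architectures above differ mainly in which tier executes a task. A toy cost model makes the trade-off explicit: lower tiers offer less compute but negligible network latency, higher tiers the reverse. All latency and speed figures below are invented for illustration and do not come from the surveyed systems.

```python
# Illustrative tier selection for an IoT-edge-fog-cloud hierarchy: estimate
# total completion time (network round trip + compute) per tier and run the
# task where that estimate is lowest.

TIERS = {
    #        round-trip latency (ms), compute speed (MI per ms)
    "edge":  {"rtt": 1,   "speed": 1},
    "fog":   {"rtt": 10,  "speed": 5},
    "cloud": {"rtt": 100, "speed": 50},
}

def completion_time(task_mi, tier):
    t = TIERS[tier]
    return t["rtt"] + task_mi / t["speed"]

def choose_tier(task_mi):
    return min(TIERS, key=lambda tier: completion_time(task_mi, tier))

print(choose_tier(10))     # tiny task: edge wins (no network cost)
print(choose_tier(400))    # medium task: fog balances latency and speed
print(choose_tier(50000))  # heavy task: cloud compute dominates
```

The ML/DL schedulers in the table learn this mapping from observed conditions instead of fixing the cost parameters, which is what lets them migrate tasks between tiers as load and network state change.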
Real-Time Scenarios | Algorithms Used | Key Metrics | Validation | Ref. |
---|---|---|---|---|
Industrial Monitoring and Analysis | CNN | Energy, Production efficiency | The fog-enabled system improved energy efficiency and production output, reduced bandwidth and data transfer time, and allowed for faster fog layer processing over the cloud. This improvement was validated in a real manufacturing environment. | [25] |
Smart Cities Traffic Control | RL | Smart Cities or IoT environments for Fog Computing resource management | The SARSA algorithm maximized the rate of completed tasks across three scenarios: a single payload with varying traffic and no deadlines, two payloads with deadlines, and real-world traffic distributions. Performance was further supported by each node acting as an independent learner and by consistently superior results in maximizing the rate of tasks completed within deadlines. | [88] |
Vehicular fog networks | DRL | Energy Consumption, Latency, Task Completion Rate, Load Balancing | This system addresses energy concerns, minimizes latency, and optimizes resource use, leading to performance benefits for vehicular fog computing and practical applications in smart transportation systems. | [30] |
Neural Network | Latency | Validation was conducted through experiments that compared predicted costs against actual values, confirming the models’ effectiveness in minimizing service interruptions during handovers. | [36] | |
DRL | Task Offloading, System Cost, End-to-End Delay, Processing Time | The Proposed Federated Multi-Agent DRL algorithm showed significant improvements in task residence time, end-to-end delay, and overall system cost. | [38] | |
DRL | Miscellaneous User Requests | Maximizes satisfied user requests within a delay threshold and outperforms heuristic approaches. | [78] | |
DRL | Mean Utility, Completion Ratio | These key results highlight the effectiveness of the proposed SBPO algorithm in real-time vehicular environments, its superior performance compared to existing methods, and the thorough validation process that supports its efficacy. | [83] | |
Healthcare | RL | Makespan, Completion Time, Dynamic Resource Allocation | Testbed deployment, performance evaluation, robustness and adaptability | [24] |
DRL | Latency, Task Assignment, Resource Utilization | Comparison with state-of-the-art load balancing algorithms (LC, RR, WRR, AWRR) | [42] | |
Neuro-Fuzzy | Network Latency, Computational Delay, Optimization, Resource Management, Task Offloading | Extensive simulations demonstrating efficacy in network, computing, and system performance. | [56] | |
Neural Networks | Load Balancing, Latency, Task Migration, Failure Rate, Task Priority, QoS | ELBS offers a promising approach to load balancing in FC environments for healthcare applications. It effectively addresses key challenges like resource allocation, task scheduling, and data management. However, further research is needed to evaluate its performance in real-world scenarios and to address potential scalability issues. | [59] | |
RNN | Latency, Resource Utilization, Energy Efficiency | FogDLearner should incorporate dynamic scalability to analyze how execution time scales with the number of devices. | [70] |
DRL | Resource Consumption, Power Consumption, Network Bandwidth, Latency, Accuracy | Real-world healthcare data, clinical trials | [71] | |
DR Transformer Model | -- | The real-time predictive capabilities of the Transformer model allow for the accurate assessment of walking patterns and provide tailored recommendations for ankle–foot orthosis (AFO) usage. A feedback loop through physician validation and patient monitoring ensures an adaptive treatment plan, thus validating the potential for real-time clinical use. | [91] |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Khan, F.U.; Shah, I.A.; Jan, S.; Ahmad, S.; Whangbo, T. Machine Learning-Based Resource Management in Fog Computing: A Systematic Literature Review. Sensors 2025, 25, 687. https://doi.org/10.3390/s25030687