Reinforcement Learning for Reducing the Interruptions and Increasing Fault Tolerance in the Cloud Environment
Abstract
1. Introduction
2. Related Works
3. Experimental Design
3.1. Configuring the Simulation Environment
- Scenario i: Number of VMs = 5;
- Scenario ii: Number of VMs = 10;
- Scenario iii: Number of VMs = 15; and so on, up to
- Scenario x: Number of VMs = 50; i.e., the number of VMs increases in steps of 5.
- Scenario xi: Number of VMs = 100;
- Scenario xii: Number of VMs = 200;
- Scenario xiii: Number of VMs = 300;
- Scenario xiv: Number of VMs = 400;
- Scenario xv: Number of VMs = 500; i.e., the number of VMs increases in steps of 100.
- Task ID: a unique number identifying a certain task;
- Planned CPU: the task’s total computing time;
- Task Type: Low (L) if 10 ≤ Planned CPU ≤ 60; Medium (M) if 70 ≤ Planned CPU ≤ 300; High (H) if Planned CPU > 300 (see the sketch after this list).
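To make the classification rule concrete, the following minimal Python sketch maps a task’s Planned CPU value to its task type using the thresholds above. It is illustrative only; the helper name `classify_task` is hypothetical and not taken from the paper.

```python
# Illustrative sketch, assuming Planned CPU is given as an integer number of time units.
def classify_task(planned_cpu: int) -> str:
    """Map a task's Planned CPU (total computing time) to a task type."""
    if 10 <= planned_cpu <= 60:
        return "L"          # Low
    if 70 <= planned_cpu <= 300:
        return "M"          # Medium
    if planned_cpu > 300:
        return "H"          # High
    return "Unclassified"   # values outside the stated ranges (e.g., 61-69) are not covered above

print(classify_task(120))   # -> "M": a task with Planned CPU = 120 is Medium
```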
3.2. Architecture of the RL-SJF and Working of Q-Table with RL-SJF
- Task’s Planned CPU ‘Low’ allotted to ‘Low’ performance VM: Highest Ideal Reward;
- Task’s Planned CPU ‘Medium’ allotted to ‘Low’ performance VM: Medium-Low Reward;
- Task’s Planned CPU ‘High’ allotted to ‘Low’ performance VM: Lowest Reward;
- Task’s Planned CPU ‘Low’ allotted to ‘Medium’ performance VM: Low-Medium Reward;
- Task’s Planned CPU ‘Medium’ allotted to ‘Medium’ performance VM: Highest Ideal Reward;
- Task’s Planned CPU ‘High’ allotted to ‘Medium’ performance VM: High-Medium Reward;
- Task’s Planned CPU ‘Low’ allotted to ‘High’ performance VM: Lowest Reward;
- Task’s Planned CPU ‘Medium’ allotted to ‘High’ performance VM: Medium-High Reward;
- Task’s Planned CPU ‘High’ allotted to ‘High’ performance VM: Highest Ideal Reward (a minimal code sketch of this reward scheme follows the list).
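The following Python sketch shows one way such a reward scheme and Q-table could be encoded. It is a minimal illustration, not the authors' implementation: the numeric reward values only preserve the ordering described above (the paper does not publish the exact numbers), and the learning rate and discount factor are assumed.

```python
import random

# State: the task's Planned CPU class; action: the VM performance class it is allotted to.
TASK_TYPES = ["L", "M", "H"]
VM_CLASSES = ["L", "M", "H"]

# Illustrative reward values that follow the ordering listed above (exact values assumed).
REWARD = {
    ("L", "L"): 1.0,  ("M", "L"): 0.4,  ("H", "L"): 0.0,
    ("L", "M"): 0.4,  ("M", "M"): 1.0,  ("H", "M"): 0.7,
    ("L", "H"): 0.0,  ("M", "H"): 0.7,  ("H", "H"): 1.0,
}

ALPHA, GAMMA = 0.1, 0.9                                    # assumed learning rate and discount factor
Q = {(t, v): 0.0 for t in TASK_TYPES for v in VM_CLASSES}  # Q-table initialised to zero

def q_update(task_type: str, vm_class: str) -> None:
    """One Q-learning step for allotting a task of task_type to a VM of vm_class."""
    reward = REWARD[(task_type, vm_class)]
    next_task = random.choice(TASK_TYPES)                  # next arriving task (simplified)
    best_next = max(Q[(next_task, v)] for v in VM_CLASSES)
    Q[(task_type, vm_class)] += ALPHA * (reward + GAMMA * best_next - Q[(task_type, vm_class)])
```

Under such a scheme the learned Q-values steer the scheduler toward allotting each task to the VM class that matches its Planned CPU, which is the behaviour the reward mapping above favours.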
4. Results and Their Implications
4.1. Resource-Scheduling Results
- ↑ VM = ↑ Cost: The overall cost required rises with the number of VMs.
- Total Cost (SJF) = $893.12.
- Total Cost (RL-SJF) = $706.60.
- Average Cost (SJF) = $59.54.
- Average Cost (RL-SJF) = $47.11.
- Average Decrease in Cost Percentage = 18.32%.
- Performance (RL-SJF) > Performance (SJF) concerning resource scheduling across all the scenarios.
- The cost-reduction percentage in the resource-scheduling table signifies how much cost the RL-SJF algorithm saves compared with SJF (the aggregation is sketched below).
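As a sanity check, the summary figures above can be reproduced from the per-scenario costs listed in the resource-scheduling table; the short Python sketch below is illustrative arithmetic only, not part of the authors' code.

```python
# Per-scenario computational costs (in $) for the 15 scenarios, taken from the table.
sjf_cost    = [17.05, 29.25, 38.66, 45.67, 51.02, 54.74, 57.28, 59.68,
               61.35, 62.68, 73.71, 78.43, 83.14, 87.87, 92.59]
rl_sjf_cost = [16.76, 28.04, 35.11, 39.83, 43.09, 44.82, 45.95, 47.05,
               47.48, 47.85, 56.12, 59.12, 62.12, 65.13, 68.13]

total_sjf, total_rl = sum(sjf_cost), sum(rl_sjf_cost)    # 893.12 and 706.60
avg_sjf, avg_rl = total_sjf / 15, total_rl / 15          # ≈ 59.54 and ≈ 47.11

# The reported 18.32% is the mean of the per-scenario percentage decreases,
# not the decrease between the two totals (which would be ≈ 20.9%).
avg_decrease = sum((s - r) / s * 100 for s, r in zip(sjf_cost, rl_sjf_cost)) / 15   # ≈ 18.32
```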
4.2. Fault-Tolerance Results
- ↑ VM = ↑ Cost: The overall cost required rises with the number of VMs.
- Average Tasks Computed Successfully (SJF) = 11.1017%.
- Average Tasks Computed Successfully (RL-SJF) = 55.4416%.
- Increase in successful task computations (in folds) by RL-SJF compared with SJF = 4.9943.
- Average Tasks Failed to Compute (SJF) = 88.8984%.
- Average Tasks Failed to Compute (RL-SJF) = 44.5585%.
- Decrease in failed task computations (in folds) by RL-SJF compared with SJF = 1.9952.
- Performance (RL-SJF) > Performance (SJF) concerning the fault-tolerance mechanism across all the scenarios (the fold calculations are sketched below).
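The fold comparisons above are essentially ratios of the average success and failure percentages; the sketch below makes the arithmetic explicit (the paper averages the per-scenario folds, which gives nearly identical values).

```python
# Average percentages over the 15 scenarios, taken from the fault-tolerance table.
avg_success_sjf, avg_success_rl = 11.1017, 55.4416   # tasks computed successfully (%)
avg_failed_sjf,  avg_failed_rl  = 88.8984, 44.5585   # tasks that failed to compute (%)

success_folds = avg_success_rl / avg_success_sjf     # ≈ 4.99: RL-SJF completes ~5x more tasks
failure_folds = avg_failed_sjf / avg_failed_rl       # ≈ 2.00: RL-SJF roughly halves the failures
```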
Scenario | VMs | SJF Success (in %) | RL-SJF Success (in %) | Improvement in Successful Computations (in Folds) | SJF Failure (in %) | RL-SJF Failure (in %) | Reduction in Failed Computations (in Folds)
---|---|---|---|---|---|---|---
1 | 5 | 10.9771 | 55.5819 | 5.0635 | 89.0230 | 44.4182 | 2.0043 |
2 | 10 | 11.1475 | 55.552 | 4.9834 | 88.8526 | 44.4481 | 1.9991 |
3 | 15 | 11.1537 | 55.6627 | 4.9906 | 88.8464 | 44.3374 | 2.0039 |
4 | 20 | 11.1388 | 55.4214 | 4.9756 | 88.8613 | 44.5787 | 1.9934 |
5 | 25 | 11.0654 | 55.7399 | 5.0374 | 88.9347 | 44.2602 | 2.0094 |
6 | 30 | 11.0530 | 55.4152 | 5.0136 | 88.9471 | 44.5849 | 1.9951 |
7 | 35 | 11.3615 | 55.5458 | 4.8890 | 88.6386 | 44.4543 | 1.9940 |
8 | 40 | 11.0505 | 55.6466 | 5.0357 | 88.9496 | 44.3535 | 2.0055 |
9 | 45 | 10.9435 | 55.3356 | 5.0565 | 89.0566 | 44.6645 | 1.9940 |
10 | 50 | 11.1226 | 55.2298 | 4.9656 | 88.8775 | 44.7703 | 1.9852 |
11 | 100 | 11.1021 | 55.3559 | 4.9861 | 88.898 | 44.6442 | 1.9913 |
12 | 200 | 11.1022 | 55.3273 | 4.9835 | 88.8979 | 44.6728 | 1.9900 |
13 | 300 | 11.1023 | 55.2987 | 4.9808 | 88.8978 | 44.7014 | 1.9887 |
14 | 400 | 11.1025 | 55.2702 | 4.9782 | 88.8976 | 44.7299 | 1.9874 |
15 | 500 | 11.1026 | 55.2416 | 4.9756 | 88.8975 | 44.7585 | 1.9862 |
Average | | 11.1017 | 55.4416 | 4.9943 | 88.8984 | 44.5585 | 1.9952
5. Validating Experimental Results Using Empirical Analysis
- Linear regression equation: represents the linear relationship between the computational cost required by the SJF and RL-SJF algorithms and the number of VMs across all the scenarios.
- Regression line slope: represents the change in the computational cost required by the SJF and RL-SJF algorithms for a one-unit change in the number of VMs across all the scenarios.
- Slope sign: indicates whether the slope is positive or negative.
- Y-intercept of the line: the point at which the regression line crosses the cost axis.
- Relationship (positive/negative): a positive relationship indicates that the cost required for computations increases as the number of VMs increases; a negative relationship indicates that the cost required for computations decreases as the number of VMs increases.
- R²: a statistical measure of the proportion of variance explained, i.e., how well the regression line fits the data points in the graph of required cost against the number of VMs in each scenario (a regression sketch follows this list).
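For reference, the regression parameters reported in the next subsections can be reproduced with an ordinary least-squares fit. The Python sketch below is illustrative (not the authors' code); it appears that the scenario number (1-15) rather than the raw VM count serves as x, since that choice reproduces the reported SJF equation y = 4.7213x + 21.771.

```python
import numpy as np

x = np.arange(1, 16)                                   # scenario index (1..15)
y_sjf = np.array([17.05, 29.25, 38.66, 45.67, 51.02, 54.74, 57.28, 59.68,
                  61.35, 62.68, 73.71, 78.43, 83.14, 87.87, 92.59])   # SJF cost per scenario ($)

slope, intercept = np.polyfit(x, y_sjf, 1)             # least-squares line: y = slope*x + intercept
y_hat = slope * x + intercept
ss_res = np.sum((y_sjf - y_hat) ** 2)                  # residual sum of squares
ss_tot = np.sum((y_sjf - y_sjf.mean()) ** 2)           # total sum of squares
r_squared = 1 - ss_res / ss_tot                        # coefficient of determination (R²)

print(round(slope, 4), round(intercept, 3), round(r_squared, 4))   # ≈ 4.7213, 21.771, 0.964
```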
5.1. Empirical Analysis Concerning Resource Scheduling
- ↑ VM = ↑ Cost: The overall cost required rises with the number of VMs.
- R² (SJF) = 0.9640.
- R² (RL-SJF) = 0.9279.
- R² (RL-SJF) < R² (SJF): the lower R² value for RL-SJF indicates that the cost required by RL-SJF is lower than that of the SJF algorithm.
- Performance (RL-SJF) > Performance (SJF) concerning resource scheduling.
5.2. Empirical Analysis Concerning Fault-Tolerance
- R² (SJF) = 1 × 10⁻⁵ with respect to successfully computed tasks.
- R² (RL-SJF) = 0.2946 with respect to successfully computed tasks.
- R² (RL-SJF) > R² (SJF): the higher R² value for RL-SJF indicates that the RL-SJF algorithm managed a better fault-tolerance mechanism with respect to successfully computed tasks.
- R² (SJF) = 1 × 10⁻⁵ with respect to failed task computations.
- R² (RL-SJF) = 0.2946 with respect to failed task computations.
- R² (RL-SJF) > R² (SJF): the higher R² value for RL-SJF indicates that the RL-SJF algorithm managed a better fault-tolerance mechanism with respect to failed task computations.
- Performance (RL-SJF) > Performance (SJF) concerning fault tolerance.
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Armbrust, M.; Fox, A.; Griffith, R.; Joseph, A.D.; Katz, R.; Konwinski, A.; Lee, G.; Patterson, D.; Rabkin, A.; Stoica, I.; et al. A view of cloud computing. Commun. ACM 2010, 53, 50–58.
- Dillon, T.S.; Wu, C.; Chang, E. Cloud Computing: Issues and Challenges. In Proceedings of the 2010 24th IEEE International Conference on Advanced Information Networking and Applications, Perth, Australia, 20–23 April 2010.
- Prasad, M.R.; Naik, R.L.; Bapuji, V. Cloud Computing: Research Issues and Implications. Int. J. Cloud Comput. Serv. Sci. 2013, 2, 134–140.
- Kumari, P.; Kaur, P. A survey of fault tolerance in cloud computing. J. King Saud Univ.—Comput. Inf. Sci. 2021, 33, 1159–1176.
- Sutton, R.S.; Barto, A.G. Reinforcement Learning: An Introduction; MIT Press: Cambridge, MA, USA, 2015.
- Cui, T.; Yang, R.; Wang, X.; Yu, S. Deep Reinforcement Learning-Based Resource Allocation for Content Distribution in IoT-Edge-Cloud Computing Environments. Symmetry 2023, 15, 217.
- Akhtar, M.; Hamid, B.; Ur-Rehman, I.; Humayun, M.; Hamayun, M.; Khurshid, H. An Optimized Shortest Job First Scheduling Algorithm for CPU Scheduling. J. Appl. Environ. Biol. Sci. 2015, 5, 42–46.
- Vivekanandan, D.; Wirth, S.; Karlbauer, P.; Klarmann, N. A Reinforcement Learning Approach for Scheduling Problems with Improved Generalization through Order Swapping. Mach. Learn. Knowl. Extr. 2023, 5, 418–430.
- Yang, H.; Ding, W.; Min, Q.; Dai, Z.; Jiang, Q.; Gu, C. A Meta Reinforcement Learning-Based Task Offloading Strategy for IoT Devices in an Edge Cloud Computing Environment. Appl. Sci. 2023, 13, 5412.
- Aberkane, S.; Elarbi-Boudihir, M. Deep Reinforcement Learning-Based Anomaly Detection for Video Surveillance. Informatica 2022, 46, 131–149.
- Sheng, S.; Chen, P.; Chen, Z.; Wu, L.; Yao, Y. Deep Reinforcement Learning-Based Task Scheduling in IoT Edge Computing. Sensors 2021, 21, 1666.
- Shin, D.; Kim, J. Deep Reinforcement Learning-Based Network Routing Technology for Data Recovery in Exa-Scale Cloud Distributed Clustering Systems. Appl. Sci. 2021, 11, 8727.
- Zhang, Z.; Liu, H.; Zhou, M.; Wang, J. Solving Dynamic Traveling Salesman Problems with Deep Reinforcement Learning. IEEE Trans. Neural Netw. Learn. Syst. 2021, 34, 2119–2132.
- Afshar, R.; Zhang, Y.; Firat, M.; Kaymak, U. A State Aggregation Approach for Solving Knapsack Problem with Deep Reinforcement Learning. In Proceedings of the Asian Conference on Machine Learning, Bangkok, Thailand, 18–20 November 2020; pp. 81–96.
- Cappart, Q.; Moisan, T.; Rousseau, L.; Prémont-Schwarz, I.; Cire, A.A. Combining Reinforcement Learning and Constraint Programming for Combinatorial Optimization. Proc. Conf. AAAI Artif. Intell. 2021, 35, 3677–3687.
- Chien, W.; Weng, H.J.; Lai, C. Q-learning based collaborative cache allocation in mobile edge computing. Future Gener. Comput. Syst. 2020, 102, 603–610.
- Li, Y.; Fadda, E.; Manerba, D.; Tadei, R.; Terzo, O. Reinforcement Learning Algorithms for Online Single-Machine Scheduling. In Proceedings of the Computer Science and Information Systems (FedCSIS), Leipzig, Germany, 1–4 September 2019.
- Liu, C.; Chang, C.F.; Tseng, C. Actor-Critic Deep Reinforcement Learning for Solving Job Shop Scheduling Problems. IEEE Access 2020, 8, 71752–71762.
- Wang, X.; Wang, C.; Li, X.; Leung, V.C.M.; Taleb, T. Federated Deep Reinforcement Learning for Internet of Things With Decentralized Cooperative Edge Caching. IEEE Internet Things J. 2020, 7, 9441–9455.
- Zhang, C.; Song, W.Y.; Cao, Z.; Zhang, J.; Tan, P.H.; Chi, X. Learning to Dispatch for Job Shop Scheduling via Deep Reinforcement Learning. Adv. Neural Inf. Process. Syst. 2020, 33, 1621–1632.
- Ren, J.; He, Y.; Yu, G.; Li, G.Y. Joint Communication and Computation Resource Allocation for Cloud-Edge Collaborative System. In Proceedings of the 2019 IEEE Wireless Communications and Networking Conference, Marrakesh, Morocco, 15–18 April 2019.
- Yuan, W.; Yang, M.; He, Y.; Wang, C.; Wang, B. Multi-Reward Architecture based Reinforcement Learning for Highway Driving Policies. In Proceedings of the 2019 IEEE Intelligent Transportation Systems Conference (ITSC), Auckland, New Zealand, 27–30 October 2019.
- Li, J.; Hui, P.; Lv, T.; Lu, Y. Deep reinforcement learning based computation offloading and resource allocation for MEC. In Proceedings of the 2018 IEEE Wireless Communications and Networking Conference (WCNC), Barcelona, Spain, 15–18 April 2018.
- Sun, Y.; Peng, M.; Mao, S. Deep Reinforcement Learning Based Mode Selection and Resource Management for Green Fog Radio Access Networks. IEEE Internet Things J. 2018, 6, 1960–1971.
- Zhang, C.; Liu, Z.; Gu, B.; Yamori, K.; Tanaka, Y. A Deep Reinforcement Learning Based Approach for Cost- and Energy-Aware Multi-Flow Mobile Data Offloading. IEICE Trans. Commun. 2018, 101, 1625–1634.
- Zhong, C.; Gursoy, M.C.; Velipasalar, S. A deep reinforcement learning-based framework for content caching. In Proceedings of the 2018 52nd Annual Conference on Information Sciences and Systems (CISS), Princeton, NJ, USA, 21–23 March 2018.
- Kolomvatsos, K.; Anagnostopoulos, C. Reinforcement Learning for Predictive Analytics in Smart Cities. Informatics 2017, 4, 16.
- Hussin, M.; Asilah Wati Abdul Hamid, N.; Kasmiran, K.A. Improving reliability in resource management through adaptive reinforcement learning for distributed systems. J. Parallel Distrib. Comput. 2015, 75, 93–100.
- Gabel, T.; Riedmiller, M. Distributed policy search reinforcement learning for job-shop scheduling tasks. Int. J. Prod. Res. 2011, 50, 41–61.
- Vengerov, D. A reinforcement learning approach to dynamic resource allocation. Eng. Appl. Artif. Intell. 2007, 20, 383–390.
- Chen, W.; Liu, Y.; Dai, Y.; Luo, Z. Optimal QoS-Aware Network Slicing for Service-Oriented Networks with Flexible Routing. In Proceedings of the ICASSP 2022–2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Singapore, 22–27 May 2022.
- Poryazov, S.; Saranova, E.; Andonov, V. Overall Model Normalization towards Adequate Prediction and Presentation of QoE in Overall Telecommunication Systems. In Proceedings of the 2019 14th International Conference on Advanced Technologies, Systems and Services in Telecommunications (TELSIKS), Niš, Serbia, 23–25 October 2019.
- Manzanares-Lopez, P.; Malgosa-Sanahuja, J.; Muñoz-Gea, J.P. A Software-Defined Networking Framework to Provide Dynamic QoS Management in IEEE 802.11 Networks. Sensors 2018, 18, 2247.
- Chen, W.; Deelman, E. WorkflowSim: A toolkit for simulating scientific workflows in distributed environments. In Proceedings of the 2012 IEEE 8th International Conference on E-Science, Chicago, IL, USA, 8–12 October 2012.
Sr. No. | Task Fault | Description of the Task Fault |
---|---|---|
1 | Unavailability of VMs | No free VMs available to compute tasks at a certain instance of time. |
2 | Breaching the cloud security | The currently computing task on a certain VM breaches the security of the cloud. |
3 | All VMs are deadlocked | A situation where all the VMs are blocked because each VM is holding a cloud resource and waiting for another resource held by another VM. |
4 | Task denied computing service | A certain task has been waiting to be computed in the cloud’s task queue and has suffered from starvation. |
5 | Data loss observed at the cloud | The currently computing task on a certain VM accidentally or intentionally causes data loss at the cloud end. |
6 | Cloud accounts hijacked | The currently computing task on a certain VM intentionally hacks cloud accounts. |
7 | Cloud’s SLAs violations | The currently computing task on a certain VM violates the regulatory measures mentioned in the SLAs. |
8 | Insufficient RAM | The VM does not have enough RAM to compute the task. |
Scenario | VMs | SJF (in $) | RL-SJF (in $) | Decrease in Cost Percentage (in %) | Performance Comparison |
---|---|---|---|---|---|
1 | 5 | 17.05 | 16.76 | 1.71 | RL-SJF > SJF |
2 | 10 | 29.25 | 28.04 | 4.14 | RL-SJF > SJF |
3 | 15 | 38.66 | 35.11 | 9.19 | RL-SJF > SJF |
4 | 20 | 45.67 | 39.83 | 12.79 | RL-SJF > SJF |
5 | 25 | 51.02 | 43.09 | 15.55 | RL-SJF > SJF |
6 | 30 | 54.74 | 44.82 | 18.13 | RL-SJF > SJF |
7 | 35 | 57.28 | 45.95 | 19.79 | RL-SJF > SJF |
8 | 40 | 59.68 | 47.05 | 21.17 | RL-SJF > SJF |
9 | 45 | 61.35 | 47.48 | 22.61 | RL-SJF > SJF |
10 | 50 | 62.68 | 47.85 | 23.66 | RL-SJF > SJF |
11 | 100 | 73.71 | 56.12 | 23.87 | RL-SJF > SJF |
12 | 200 | 78.43 | 59.12 | 24.62 | RL-SJF > SJF |
13 | 300 | 83.14 | 62.12 | 25.28 | RL-SJF > SJF |
14 | 400 | 87.87 | 65.13 | 25.88 | RL-SJF > SJF |
15 | 500 | 92.59 | 68.13 | 26.41 | RL-SJF > SJF |
Average | | 59.54 | 47.11 | 18.32 | RL-SJF > SJF |
SJF: handling of task faults across the number of Virtual Machines (VMs)
Task Fault | 5 | 10 | 15 | 20 | 25 | 30 | 35 | 40 | 45 | 50 | 100 | 200 | 300 | 400 | 500
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
VMs unavailable | × | × | × | × | × | × | × | × | × | × | × | × | × | × | × |
Security Breach | × | × | × | × | × | × | × | × | × | × | × | × | × | × | × |
Deadlocked VMs | × | × | × | × | × | × | × | × | × | × | × | × | × | × | × |
Service Denied | × | × | × | × | × | × | × | × | × | × | × | × | × | × | × |
Loss of Data | × | × | × | × | × | × | × | × | × | × | × | × | × | × | × |
Hijacked Accounts | × | × | × | × | × | × | × | × | × | × | × | × | × | × | × |
Violations of SLAs | × | × | × | × | × | × | × | × | × | × | × | × | × | × | × |
Insufficient RAM | × | × | × | × | × | × | × | × | × | × | × | × | × | × | × |
No Fault | √ | √ | √ | √ | √ | √ | √ | √ | √ | √ | √ | √ | √ | √ | √ |
RL-SJF: handling of task faults across the number of Virtual Machines (VMs)
Task Fault | 5 | 10 | 15 | 20 | 25 | 30 | 35 | 40 | 45 | 50 | 100 | 200 | 300 | 400 | 500
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
VMs unavailable | √ | √ | √ | √ | √ | √ | √ | √ | √ | √ | √ | √ | √ | √ | √ |
Security Breach | × | × | × | × | × | × | × | × | × | × | × | × | × | × | × |
Deadlocked VMs | √ | √ | √ | √ | √ | √ | √ | √ | √ | √ | √ | √ | √ | √ | √ |
Service Denied | √ | √ | √ | √ | √ | √ | √ | √ | √ | √ | √ | √ | √ | √ | √ |
Loss of Data | × | × | × | × | × | × | × | × | × | × | × | × | × | × | × |
Hijacked Accounts | × | × | × | × | × | × | × | × | × | × | × | × | × | × | × |
Violations of SLAs | × | × | × | × | × | × | × | × | × | × | × | × | × | × | × |
Insufficient RAM | × | √ | √ | √ | √ | √ | √ | √ | √ | √ | √ | √ | √ | √ | √ |
No Fault | √ | √ | √ | √ | √ | √ | √ | √ | √ | √ | √ | √ | √ | √ | √ |
Parameters | SJF | RL-SJF |
---|---|---|
Linear Regression Equation | y = 4.7213x + 21.771 | y = 3.0036x + 23.078 |
Regression Line Slope | 4.7213 | 3.0036 |
Slope Sign | Positive | Positive |
Y-Intercept of Line | 21.771 | 23.078 |
Relationship | Positive | Positive |
R² | 0.9640 | 0.9279 |
Analysis of VMs | ↑ VM = ↑ Cost | ↑ VM = ↑ Cost |
Overall Performance | RL-SJF > SJF |
Parameters | SJF (Tasks Successfully Computed) | RL-SJF (Tasks Successfully Computed) | SJF (Tasks Failed to Compute) | RL-SJF (Tasks Failed to Compute)
---|---|---|---|---
Linear Regression Equation | y = 0.0001x + 11.101 | y = −0.0286x + 55.67 | y = −0.0001x + 88.899 | y = 0.0286x + 44.33 |
Regression Line Slope | 0.0001 | −0.0286 | −0.0001 | 0.0286 |
Slope Sign | Positive | Negative | Negative | Positive |
Y-Intercept of Line | 11.101 | 55.67 | 88.899 | 44.33 |
Relationship | Positive | Negative | Negative | Positive |
R² | 1 × 10⁻⁵ | 0.2946 | 1 × 10⁻⁵ | 0.2946
Overall Performance | RL-SJF > SJF | RL-SJF > SJF |