On the Optimization of Kubernetes toward the Enhancement of Cloud Computing
Abstract
1. Introduction
- Optimizing the data distribution latency, alongside improving the cluster backup and restore strategies toward better disaster recovery.
- Reducing configuration time and optimizing zero-downtime rolling updates while improving the robustness of Kubernetes services.
- Optimizing autoscaling strategies for Kubernetes toward enhancing cloud applications and services.
- Introducing a viable Scheduler for Kubernetes toward optimal load balancing and scheduling.
- Exploring different open-source frameworks toward end-to-end enhancement of Kubernetes.
2. Architecture and Principles of Kubernetes
2.1. Kubernetes Architecture
2.1.1. Master Node
- API Server connects various components in the Kubernetes cluster, implements specific operations on each object in the cluster, and provides services such as addition, deletion, modification, and query for resource objects.
- Controller Manager is the administrator and control center of the Kubernetes cluster. If any node in the system becomes abnormal, Controller Manager detects and handles the abnormality promptly.
- Scheduler is the default resource scheduler of the Kubernetes cluster. It schedules pending Pods to the expected nodes according to the configured policy. A new Pod object created via the Controller Manager is received by the Scheduler, which finds a suitable node for it and then writes the binding information to ETCD through the API Server.
- ETCD is a key–value store used to store various information in the cluster, with high availability and persistence, and maintains the stable operation of the Kubernetes cluster.
2.1.2. Worker Node
- The kubelet service process runs on each node of the cluster, executes the tasks delivered by the master node, and manages the entire life cycle of the containers. In addition, kubelet monitors node status and reports the running status and resource usage of all worker nodes to the master node in real time.
- The kube-proxy service process runs on each worker node and forwards the requests received by a Service to the backend Pods.
2.2. Kubernetes Features
- Automation. Kubernetes allows users to automatically scale capacity, update, deploy, and manage resources, and it ships with a set of default automation mechanisms.
- Service-centric. The design of Kubernetes is service-centric: users do not need to care about installation or operation details and can focus on the business logic.
- High availability. Kubernetes regularly checks the status of each Pod instance, including the number of instances, their health status, etc., and ensures high availability by running multiple master nodes and ETCD clusters.
- Rolling updates. Kubernetes can update or replace applications without stopping their internal programs or interrupting external services, saving a lot of time and resources.
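As an illustration of the rolling-update feature, a zero-downtime update can be requested in a Deployment's update strategy. The following is a minimal sketch; the name `demo-app` and the image are hypothetical placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app            # hypothetical application name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: demo-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1           # at most one extra Pod during the update
      maxUnavailable: 0     # never drop below the desired replica count
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - name: app
        image: nginx:1.25   # example image
```

With `maxUnavailable: 0`, old Pods are only removed after their replacements become ready, so the service keeps its full capacity throughout the update.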
2.3. Kubernetes Components
- Pod is the basic unit of scheduling in Kubernetes. Each Pod contains a pause container called the “root container”, as well as one or more user business containers that are closely related to the business.
- Label in Kubernetes exists as a key–value pair whose key and value are defined by the user. Labels allow users to manage resources along multiple dimensions.
- Replication Controller is one of the core components of the Kubernetes system. It manages a set of Pods defined in a YAML file, ensuring that the number of Pod replicas of the application meets the user-defined value throughout the life cycle.
- ReplicaSet helps monitor all the Pods. Particularly, ReplicaSet assists Deployment in maintaining the availability of Pods to a desired level.
- Deployment helps manage ReplicaSets and Pods. In addition, it performs rolling updates when the applications in Pods need to be updated. Moreover, it helps auto-scale Pods with the help of the Horizontal Pod Autoscaler (HPA) [24]. Autoscaling is one of the most important features of Kubernetes, allowing containerized applications and services to run automatically and elastically.
- Service provides a common access address for a group of containers with the same function and load-balances incoming requests across them. Clients access a set of Pod replicas through the address provided by the Service, which connects to the backend Pods through its label selector.
- Ingress binds incoming requests to Services, especially at large scale, and also provides load balancing. Generally, an Ingress controller is needed in the cluster to redirect incoming requests to the Ingress resource, which then forwards them to the appropriate endpoint.
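The Service and Ingress concepts above can be sketched in two small manifests. This is a minimal illustration; the names `demo-svc`, `demo-app`, and the host are hypothetical:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-svc            # hypothetical Service name
spec:
  selector:
    app: demo-app           # label selector matching the backend Pods
  ports:
  - port: 80                # address exposed to clients
    targetPort: 8080        # port of the container in each Pod
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
spec:
  rules:
  - host: demo.example.com  # example external host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: demo-svc  # requests are routed to the Service above
            port:
              number: 80
```

The Service finds its backends purely through the label selector, and the Ingress controller routes external HTTP traffic for the given host to that Service.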
3. Default Kubernetes Cluster and Limitations
3.1. ETCD Data Distribution and Latency
3.2. ETCD Backup and Restore and Issue
3.3. Rolling Update Performance and Issue
- The application service needs to be able to handle the TERM signal. If the container cannot handle the signal properly during execution (e.g., while committing a database transaction), the Pod is still closed ungracefully.
- All Pods that provide services to the application may be lost at once. While new containers start on new nodes, the services may be down; and if the Pods were not deployed through a controller, they may never be restarted.
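A common mitigation for the TERM-signal issue above is to give containers time to finish in-flight work before termination, via a `preStop` hook and a termination grace period. The following Pod template fragment is a sketch; the image and the sleep duration are illustrative assumptions:

```yaml
spec:
  terminationGracePeriodSeconds: 60   # time allowed for graceful shutdown
  containers:
  - name: app
    image: nginx:1.25                 # example image
    lifecycle:
      preStop:
        exec:
          # Drain in-flight work (e.g., commit open transactions)
          # before the TERM signal is delivered to the main process.
          command: ["sh", "-c", "sleep 10"]
```

Kubernetes runs the `preStop` hook before sending TERM, and only sends KILL after the grace period elapses, so a well-behaved application can shut down cleanly.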
3.4. Ingress and Ingress Controller in Handling Requests and Issue
3.5. Horizontal Pod Autoscaler (HPA) and Issue
3.6. Scheduling Model and Issue
- (1) The user sends a request to the API Server to create a Pod through the REST interface of the API Server or the Kubernetes client tool (supported data types include JSON and YAML).
- (2) The API Server handles the request and stores the Pod data in ETCD. The Scheduler watches for unbound Pods through the API Server and attempts to allocate a node to each of them.
- (3) Filter nodes: the Scheduler screens each node according to the preselection algorithms and eliminates the nodes that do not meet the requirements.
- (4) Host scoring: the Scheduler scores the nodes that passed preselection. Each optimization algorithm has a different scoring focus, and each scoring algorithm has a different weight; the weighted average of the scoring results of all algorithms is the final score of a node.
- (5) Select node: the Scheduler selects the node with the highest score, binds the Pod to this node, and stores the binding information in ETCD.
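The weighted scoring in step (4) can be written compactly. With $m$ optimization algorithms, where $s_i(n)$ is the score algorithm $i$ assigns to node $n$ and $w_i$ is its weight:

```latex
% Final score of node n: weighted average over the m scoring algorithms.
\mathrm{Score}(n) = \frac{\sum_{i=1}^{m} w_i \, s_i(n)}{\sum_{i=1}^{m} w_i}
```

The Pod is then bound to $\arg\max_{n} \mathrm{Score}(n)$ over the nodes that survived preselection.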
Scheduling Algorithm
Preselection (filter) algorithms:
- (1) PodFitsHost. Restricts a Pod to a particular node by matching the node name set in the Pod specification.
- (2) PodFitsResources. Checks whether the available resources of the node meet the resource requests of the Pod.
- (3) PodFitsHostPorts. Checks whether the ports requested by the Pod are already occupied on the node.
- (4) PodSelectorMatches. Checks whether the node matches the Pod’s label selector.
- (5) CheckNodeMemoryPressure. Checks whether the node is under memory pressure.
- (6) CheckNodeDiskPressure. Checks whether the node is under disk pressure.
- (7) NoDiskConflict. Checks whether an already mounted storage volume conflicts with a volume in the Pod configuration file.

Optimization (scoring) algorithms:
- (1) LeastRequestedPriority. Prefers the node with the lowest resource consumption, where the considered resources comprise CPU and memory.
- (2) SelectorSpreadPriority. Prefers nodes that host fewer Pods of the same controller (e.g., the same RC); the fewer the replicas already on a node, the higher its score.
- (3) ImageLocalityPriority. Prefers nodes that already hold the required images, avoiding the time spent pulling them.
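As a concrete example, the classic LeastRequestedPriority in upstream Kubernetes scores a node on a 0–10 scale from its unrequested CPU and memory fractions; the sketch below follows the commonly documented form of that formula:

```latex
% 0-10 score: average of the unrequested CPU and memory fractions,
% where "req" sums the requests of all Pods already placed on node n.
\mathrm{score}(n) = \frac{10}{2}\left(
  \frac{\mathrm{cpu}_{\mathrm{cap}}(n) - \mathrm{cpu}_{\mathrm{req}}(n)}{\mathrm{cpu}_{\mathrm{cap}}(n)}
  + \frac{\mathrm{mem}_{\mathrm{cap}}(n) - \mathrm{mem}_{\mathrm{req}}(n)}{\mathrm{mem}_{\mathrm{cap}}(n)}
\right)
```

A node with no requested resources scores 10; a fully requested node scores 0, steering new Pods toward the least loaded nodes.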
3.7. Our Proposed System Framework
- Optimizing ETCD operations with the help of optimized disk drives (e.g., SSD) and giving ETCD services higher disk I/O permissions.
- Using the Velero [32] component to back up data and enabling a zero-downtime rolling update strategy to improve the robustness of Kubernetes services and reduce downtime.
- Using the open-source software Traefik [33] to reduce Ingress update configuration downtime.
- Using Prometheus [34] to obtain more detailed metrics, providing them to the kube-apiserver for the HPA to scale Pods accordingly.
- Customizing Scheduler strategies based on the Scheduler Extender.
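To illustrate the Prometheus-driven HPA idea, an `autoscaling/v2` HorizontalPodAutoscaler can target a custom per-Pod metric. This is a minimal sketch: the names `demo-hpa`, `demo-app`, and the metric `http_requests_per_second` are hypothetical, and serving such a metric to the kube-apiserver additionally requires a metrics adapter (e.g., the Prometheus adapter):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: demo-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: demo-app                     # hypothetical Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metric:
        name: http_requests_per_second # hypothetical custom metric
      target:
        type: AverageValue
        averageValue: "100"            # scale out above ~100 req/s per Pod
```

Compared with the default CPU-only signal, such custom metrics let the HPA react to load indicators that better reflect the application’s actual demand.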
4. Proposed Approach
4.1. ETCD Performance Optimization
4.2. Backup and Restore Performance Optimization
4.3. Rolling Update Performance Optimization
4.4. Ingress Performance Optimization
4.5. Autoscaling Performance Optimization
4.6. Scheduling and Load Balancing Optimization
4.6.1. Scheduler Design
- (1) Modify the source code of the default scheduler, add the scheduling algorithm, and then recompile and redeploy the scheduler. This approach intrudes into the source code of the Scheduler component of Kubernetes and is not conducive to version updates and rollbacks.
- (2) Develop a new scheduler that runs in the cluster alongside the default scheduler. With this approach, if different Pods choose different schedulers, scheduling conflicts or failures may occur because the separate scheduling processes cannot synchronize with each other.
- (3) Implement custom algorithms based on the Kubernetes Scheduler Extender mechanism. This approach is nonintrusive to the source code; the Scheduler Extender runs as a plug-in, enabling flexible custom scheduling by modifying the scheduling policy configuration file.
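Approach (3) is configured declaratively: the scheduler is pointed at an external HTTP service that participates in filtering and scoring. The fragment below is a sketch assuming a hypothetical extender service listening on localhost:

```yaml
apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
extenders:
- urlPrefix: "http://127.0.0.1:8888/"  # hypothetical extender endpoint
  filterVerb: "filter"                 # POSTed to during node preselection
  prioritizeVerb: "prioritize"         # POSTed to during node scoring
  weight: 1                            # weight of the extender's scores
  enableHTTPS: false
```

Because the extender is just a web service, custom filtering and scoring logic can be updated or rolled back by redeploying the service and editing this configuration, without touching the Scheduler source code.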
4.6.2. Algorithm Improvement
4.6.3. Analysis of Algorithm Principles
5. Experiments
5.1. Experimental Setup
5.2. Experimental Analysis
5.2.1. ETCD I/O Operation Analysis
5.2.2. Velero Backup, Migration, and Restore Operation Analysis
5.2.3. Fortio: Zero-Downtime Rolling Update Analysis
5.2.4. Traefik: Ingress Update Configuration Downtime Analysis
5.2.5. Prometheus: HPA Autoscaling Analysis
5.2.6. Custom Scheduler: Scheduling and Load Balancing Analysis
5.2.7. Default Configured K8s vs. Optimized K8s: Overall Comparative Analysis
6. Related Work
6.1. ETCD Performance Optimization
6.2. Backup and Migration
6.3. Horizontal Pod Autoscaler (HPA) Optimization
6.4. Load Balancing and Scheduling
7. Conclusions and Future Work
7.1. Conclusions
7.2. Limitations and Future Work
- HPA performance optimization. The current method of using Prometheus to obtain metrics responds to cluster-wide changes in a more timely manner, but a certain lag remains. Predictive HPA might be a better approach.
- Add more custom scheduling algorithms and more system indicators to improve scheduling performance in different environments.
- Perform more in-depth optimization according to the needs and characteristics of different platforms to achieve performance improvement. For example, scheduling algorithms and HPA indicators can be customized according to platform requirements.
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Xiao, Z.; Song, W.; Chen, Q. Dynamic resource allocation using virtual machines for cloud computing environment. IEEE Trans. Parallel Distrib. Syst. 2012, 24, 1107–1117.
- Huang, K.; Chen, H. The Applied Research on the Virtualization Technology in Cloud Computing. In Proceedings of the 1st International Workshop on Cloud Computing and Information Security, Shanghai, China, 9–11 November 2013; pp. 526–529.
- Bernstein, D. Containers and cloud: From LXC to Docker to Kubernetes. IEEE Cloud Comput. 2014, 1, 81–84.
- Merkel, D. Docker: Lightweight Linux containers for consistent development and deployment. Linux J. 2014, 239, 2.
- Bentaleb, O.; Belloum, A.S.; Sebaa, A.; El-Maouhab, A. Containerization technologies: Taxonomies, applications and challenges. J. Supercomput. 2022, 78, 1144–1181.
- Bigelow, S.J. What Is Docker and How Does It Work? 2020. Available online: https://www.techtarget.com/searchitoperations/definition/Docker/ (accessed on 7 August 2024).
- Anderson, C. Docker [software engineering]. IEEE Softw. 2015, 32, 102–c3.
- Red Hat. What Is Kubernetes? 2020. Available online: https://www.redhat.com/en/topics/containers/what-is-kubernetes/ (accessed on 7 August 2024).
- Burns, B.; Grant, B.; Oppenheimer, D.; Brewer, E.; Wilkes, J. Borg, Omega, and Kubernetes. Commun. ACM 2016, 59, 50–57.
- Mondal, S.K.; Pan, R.; Kabir, H.D.; Tian, T.; Dai, H.N. Kubernetes in IT administration and serverless computing: An empirical study and research challenges. J. Supercomput. 2022, 78, 2937–2987.
- Ongaro, D.; Ousterhout, J. In search of an understandable consensus algorithm. In Proceedings of the 2014 USENIX Annual Technical Conference (USENIX ATC 14), Philadelphia, PA, USA, 19–20 June 2014; pp. 305–319.
- Oliveira, C.; Lung, L.C.; Netto, H.; Rech, L. Evaluating Raft in Docker on Kubernetes. In Proceedings of the International Conference on Systems Science 2016 (ICSS 2016), Wroclaw, Poland, 7–9 September 2016; pp. 123–130.
- Rodríguez, H.; Quarantelli, E.L.; Dynes, R.R.; Smith, G.P.; Wenger, D. Sustainable disaster recovery: Operationalizing an existing agenda. In Handbook of Disaster Research; Springer: New York, NY, USA, 2007; pp. 234–257.
- Sameer; De, S.; Prashant Singh, R. Selective Analogy of Mechanisms and Tools in Kubernetes Lifecycle for Disaster Recovery. In Proceedings of the 2022 IEEE 2nd International Conference on Mobile Networks and Wireless Communications (ICMNWC), Tumkur, Karnataka, India, 2–3 December 2022; pp. 1–6.
- Malviya, A.; Dwivedi, R.K. A Comparative Analysis of Container Orchestration Tools in Cloud Computing. In Proceedings of the 2022 9th International Conference on Computing for Sustainable Global Development (INDIACom), New Delhi, India, 23–25 March 2022; pp. 698–703.
- Jackson, K. Kubernetes Rolling Updates. Available online: https://www.bluematador.com/blog/kubernetes-deployments-rolling-update-configuration (accessed on 20 May 2024).
- Shan, C.; Xia, Y.; Zhan, Y.; Zhang, J. KubeAdaptor: A docking framework for workflow containerization on Kubernetes. Future Gener. Comput. Syst. 2023, 148, 584–599.
- Vayghan, L.A.; Saied, M.A.; Toeroe, M.; Khendek, F. Deploying microservice based applications with Kubernetes: Experiments and lessons learned. In Proceedings of the 2018 IEEE 11th International Conference on Cloud Computing (CLOUD), San Francisco, CA, USA, 2–7 July 2018; pp. 970–973.
- Balla, D.; Simon, C.; Maliosz, M. Adaptive scaling of Kubernetes Pods. In Proceedings of the NOMS 2020—2020 IEEE/IFIP Network Operations and Management Symposium, Budapest, Hungary, 20–24 April 2020; pp. 1–5.
- Menouer, T. KCSS: Kubernetes container scheduling strategy. J. Supercomput. 2021, 77, 4267–4293.
- Pérez de Prado, R.; García-Galán, S.; Muñoz-Expósito, J.E.; Marchewka, A.; Ruiz-Reyes, N. Smart containers schedulers for microservices provision in cloud-fog-IoT networks. Challenges and opportunities. Sensors 2020, 20, 1714.
- Senjab, K.; Abbas, S.; Ahmed, N.; Khan, A.u.R. A survey of Kubernetes scheduling algorithms. J. Cloud Comput. 2023, 12, 87.
- Rejiba, Z.; Chamanara, J. Custom scheduling in Kubernetes: A survey on common problems and solution approaches. ACM Comput. Surv. 2022, 55, 1–37.
- Salinger, N. Autoscaling with Kubernetes HPA: How It Works with Examples. 2022. Available online: https://granulate.io/blog/kubernetes-autoscaling-the-hpa/ (accessed on 7 August 2024).
- Kubernetes Official Documentation. Ingress in Kubernetes. 2023. Available online: https://kubernetes.io/docs/concepts/services-networking/ingress/ (accessed on 11 July 2023).
- Kubernetes Official Documentation. Service in Kubernetes. 2023. Available online: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types (accessed on 7 August 2024).
- Kubernetes Official Documentation. Ingress Controller in Kubernetes. 2023. Available online: https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/ (accessed on 11 July 2023).
- Altaf, U.; Jayaputera, G.; Li, J.; Marques, D.; Meggyesy, D.; Sarwar, S.; Sharma, S.; Voorsluys, W.; Sinnott, R.; Novak, A.; et al. Auto-scaling a defence application across the cloud using Docker and Kubernetes. In Proceedings of the 2018 IEEE/ACM International Conference on Utility and Cloud Computing Companion (UCC Companion), Zurich, Switzerland, 17–20 December 2018; pp. 327–334.
- Alibaba Cloud. Getting Started with Kubernetes|Scheduling Process and Scheduler Algorithms. 2021. Available online: https://alibaba-cloud.medium.com/getting-started-with-kubernetes-scheduling-process-and-scheduler-algorithms-847e660533f1 (accessed on 13 July 2023).
- Fu, P. How to Customize Kubernetes Scheduler. 2021. Available online: https://medium.com/gemini-open-cloud/kubernetes-scheduler-%E5%AE%A2%E8%A3%BD%E5%8C%96%E7%9A%84%E6%96%B9%E6%B3%95-d662b4b7d279 (accessed on 7 August 2024).
- Tao, P. Kubernetes v1.20 Architecture. 2020. Available online: https://blog.csdn.net/projim_tao/article/details/130140048 (accessed on 13 July 2023).
- Diagboya, E. What Is Velero? 2021. Available online: https://medium.com/mycloudseries/what-is-velero-1f205650b76c (accessed on 5 January 2023).
- Singh, M. What Is Traefik and How to Learn Traefik? 2021. Available online: https://www.devopsschool.com/blog/what-is-traefik-how-to-learn-traefik/ (accessed on 1 January 2023).
- Patel, A. Prometheus—Overview. 2023. Available online: https://medium.com/devops-mojo/prometheus-overview-what-is-prometheus-introduction-92e064cff606 (accessed on 1 January 2023).
- Maayan, G.D. What Is Etcd and How Is It Used in Kubernetes? 2019. Available online: https://dev.to/giladmaayan/what-is-etcd-and-how-is-it-used-in-kubernetes-47bg (accessed on 1 January 2023).
- Xiaoshi, B. Analysis of Kubernetes Scheduler SchedulerExtender. 2020. Available online: https://my.oschina.net/u/4131034/blog/3162549 (accessed on 13 July 2023).
- Liggitt, J. Kubernetes. 2023. Available online: https://github.com/kubernetes/kubernetes/tree/master (accessed on 7 August 2024).
- Wittig, K. Kubernetes Metrics—The Complete Guide. 2021. Available online: https://www.kubermatic.com/blog/the-complete-guide-to-kubernetes-metrics/ (accessed on 13 July 2023).
- Labadie, C. Fortio: Load Testing Library, Command Line Tool, Advanced Echo Server. 2022. Available online: https://github.com/fortio/fortio (accessed on 7 August 2024).
- Mukherjee, A. An Inexact Introduction to Envoy. 2020. Available online: https://errindam.medium.com/an-inexact-introduction-to-envoy-ac41949834b5 (accessed on 18 June 2023).
- Lixu, T. Nginx Ingress Controller. 2022. Available online: https://blog.devgenius.io/k8s-nginx-ingress-controller-36bb06f95ac2 (accessed on 18 June 2023).
- Pedamkar, P. Gatling Load Testing. 2022. Available online: https://www.educba.com/gatling-load-testing/ (accessed on 1 January 2023).
- Alkraien, A. Intro to Nginx Web Server. 2022. Available online: https://medium.com/javarevisited/intro-to-nginx-web-server-part-1-bb590fad7035 (accessed on 18 June 2023).
- Larsson, L.; Tärneberg, W.; Klein, C.; Elmroth, E.; Kihl, M. Impact of etcd deployment on Kubernetes, Istio, and application performance. Softw. Pract. Exp. 2020, 50, 1986–2007.
- Jeffery, A.; Howard, H.; Mortier, R. Rearchitecting Kubernetes for the Edge. In Proceedings of the 4th International Workshop on Edge Systems, Analytics and Networking, Online, 26 April 2021; pp. 7–12.
- Zhu, M.; Kang, R.; He, F.; Oki, E. Implementation of Backup Resource Management Controller for Reliable Function Allocation in Kubernetes. In Proceedings of the 2021 IEEE 7th International Conference on Network Softwarization (NetSoft), Virtual, 28 June–2 July 2021; pp. 360–362.
- Deshpande, U.; Linck, N.; Seshadri, S. Self-service data protection for stateful containers. In Proceedings of the 13th ACM Workshop on Hot Topics in Storage and File Systems, Virtual, 27–28 July 2021; pp. 71–76.
- Oh, S.; Kim, J. Stateful container migration employing checkpoint-based restoration for orchestrated container clusters. In Proceedings of the 2018 International Conference on Information and Communication Technology Convergence (ICTC), Jeju Island, Republic of Korea, 17–19 October 2018; pp. 25–30.
- Lee, J.; Jeong, H.; Lee, W.J.; Suh, H.J.; Lee, D.; Kang, K. Advanced Primary–Backup Platform with Container-Based Automatic Deployment for Fault-Tolerant Systems. Wirel. Pers. Commun. 2018, 98, 3177–3194.
- Casalicchio, E.; Perciballi, V. Auto-scaling of containers: The impact of relative and absolute metrics. In Proceedings of the 2017 IEEE 2nd International Workshops on Foundations and Applications of Self* Systems (FAS*W), Tucson, AZ, USA, 18–22 September 2017; pp. 207–214.
- Taherizadeh, S.; Grobelnik, M. Key influencing factors of the Kubernetes auto-scaler for computing-intensive microservice-native cloud-based applications. Adv. Eng. Softw. 2020, 140, 102734.
- Yu, J.-G.; Zhai, Y.-R.; Yu, B.; Li, S. Research and application of auto-scaling unified communication server based on Docker. In Proceedings of the 2017 10th International Conference on Intelligent Computation Technology and Automation (ICICTA), Changsha, China, 9–10 October 2017; pp. 152–156.
- Rossi, F. Auto-scaling Policies to Adapt the Application Deployment in Kubernetes. In Proceedings of the ZEUS, Potsdam, Germany, 20–21 February 2020; pp. 30–38.
- Mondal, S.K.; Wu, X.; Kabir, H.M.D.; Dai, H.N.; Ni, K.; Yuan, H.; Wang, T. Toward Optimal Load Prediction and Customizable Autoscaling Scheme for Kubernetes. Mathematics 2023, 11, 2675.
- Zhang, J.; Ren, R.; Huang, C.; Fei, X.; Qun, W.; Cai, H. Service dependency based dynamic load balancing algorithm for container clusters. In Proceedings of the 2018 IEEE 15th International Conference on e-Business Engineering (ICEBE), Xi’an, China, 12–14 October 2018; pp. 70–77.
- Zhang, W.-g.; Ma, X.-l.; Zhang, J.-z. Research on Kubernetes’ Resource Scheduling Scheme. In Proceedings of the 8th International Conference on Communication and Network Security, Qingdao, China, 2–4 November 2018; pp. 144–148.
- Nguyen, N.D.; Kim, T. Balanced Leader Distribution Algorithm in Kubernetes Clusters. Sensors 2021, 21, 869.
- Liu, Q.; Haihong, E.; Song, M. The design of multi-metric load balancer for Kubernetes. In Proceedings of the 2020 International Conference on Inventive Computation Technologies (ICICT), Coimbatore, Tamilnadu, India, 26–28 February 2020; pp. 1114–1117.
- Huang, J.; Xiao, C.; Wu, W. RLSK: A job scheduler for federated Kubernetes clusters based on reinforcement learning. In Proceedings of the 2020 IEEE International Conference on Cloud Engineering (IC2E), Sydney, Australia, 21–24 April 2020; pp. 116–123.
Keys | Connections | Clients | Target | Write QPS | Latency per Request |
---|---|---|---|---|---|
10,000 | 1 | 1 | only master | 77 | 49.5 ms |
10,000 | 1 | 1 | all members | 74 | 48.8 ms |
100,000 | 100 | 1000 | only master | 1164 | 1685.6 ms |
100,000 | 100 | 1000 | all members | 1147 | 1632.4 ms |
Keys | Connections | Clients | Target | Write QPS | Latency per Request |
---|---|---|---|---|---|
10,000 | 1 | 1 | only master | 1250 | 0.8 ms |
10,000 | 1 | 1 | all members | 840 | 1.2 ms |
100,000 | 100 | 1000 | only master | 21,348 | 45.1 ms |
100,000 | 100 | 1000 | all members | 21,475 | 45.4 ms |
Keys | Connections | Clients | Target | Write QPS | Latency per Request |
---|---|---|---|---|---|
10,000 | 1 | 1 | only master | 1289 | 0.8 ms |
10,000 | 1 | 1 | all members | 865 | 1.1 ms |
100,000 | 100 | 1000 | only master | 22,368 | 40.6 ms |
100,000 | 100 | 1000 | all members | 22,589 | 40.9 ms |
Number | CPU | Memory | Network Limits |
---|---|---|---|
Master | 2 vCPU cores | 4G | 50 Mbps |
Node1 | 2 vCPU cores | 4G | 50 Mbps |
Node2 | 4 vCPU cores | 4G | 75 Mbps |
Node3 | 8 vCPU cores | 8G | 125 Mbps |
Number | Network I/O | CPU Requests | Mem Requests |
---|---|---|---|
1, 2, 3 | 2 Mbps | 100m | 160Mi |
4, 5, 6 | 5 Mbps | 100m | 160Mi |
7, 8, 9 | 10 Mbps | 100m | 160Mi |
Experiment | Node1 | Node2 | Node3 |
---|---|---|---|
DSA | none | 8 | 1, 2, 3, 4, 5, 6, 7, 9 |
BNIP | 2, 6 | 3, 5, 8 | 1, 4, 7, 9 |
Experiment | Average Network IO Usage | Variance |
---|---|---|
DSA | 12.3% | 4.3 |
BNIP | 11.3% | 0.8 |
Number | CPU | Memory | Network Limits |
---|---|---|---|
Master | 4 vCPU cores | 4G | 50 Mbps |
Node1 | 2 vCPU cores | 4G | 50 Mbps |
Node2 | 4 vCPU cores | 4G | 75 Mbps |
Node3 | 8 vCPU cores | 8G | 125 Mbps |
Share and Cite
Mondal, S.K.; Zheng, Z.; Cheng, Y. On the Optimization of Kubernetes toward the Enhancement of Cloud Computing. Mathematics 2024, 12, 2476. https://doi.org/10.3390/math12162476