Cloud Computing and Applications, Volume II

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: closed (31 January 2023) | Viewed by 16574

Special Issue Editor


Prof. Dr. Filipe Araujo
Guest Editor
Department of Informatics Engineering, University of Coimbra, 3004-531 Coimbra, Portugal
Interests: distributed systems; edge and cloud computing; wireless ad hoc networks

Special Issue Information

Dear Colleagues, 

Decades of progress in computer hardware and networks, together with recent advances in virtualization and containerization, have made the long-sought vision of cloud computing possible. Cloud providers compete with a wide portfolio of pay-as-you-go services, from basic computing and storage infrastructure to machine learning services such as image, speech, and text recognition. More than simple IT outsourcing, these services can spark new, innovative, and affordable products.

While the benefits are undeniable, citizens and companies must still consider a few drawbacks. First, as integration with the cloud deepens, complex pricing schemes become harder to manage and control. A second risk is vendor lock-in, as companies upgrade their online presence with state-of-the-art, provider-dependent cloud services. At the same time, companies should be able to retain part of their data on premises, or combine different providers in hybrid and multi-cloud solutions, while maintaining the observability needed to keep distributed systems finely tuned and highly performant.

The challenges for providers are equally demanding. Downtime, data loss, and data breaches can jeopardize third-party businesses, causing all sorts of damage. To preclude such scenarios, providers must replicate data and services while maintaining privacy, preventing access by other users, attackers, and even their own employees. Finally, providers must operate efficiently, or competition will drive them out of the market.

This Special Issue aims to publish high-quality manuscripts covering new research on topics related to cloud computing, including but not limited to the following:

  • Cloud applications
  • Cloud architecture
  • Virtualization, containerization and container orchestration
  • Public, private and hybrid clouds
  • Interoperability and portability
  • Microservices
  • Observability and monitoring of distributed systems
  • Security and privacy
  • Reliable operation
  • Efficient operation

Prof. Dr. Filipe Araujo
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Cloud applications
  • Cloud architecture
  • Virtualization, containerization and container orchestration
  • Public, private and hybrid clouds
  • Interoperability and portability
  • Microservices
  • Observability and monitoring of distributed systems
  • Security and privacy
  • Reliable operation
  • Efficient operation

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (5 papers)


Research


15 pages, 1982 KiB  
Article
A Method of Transparent Graceful Failover in Low Latency Stateful Microservices
by Kęstutis Pakrijauskas and Dalius Mažeika
Electronics 2022, 11(23), 3936; https://doi.org/10.3390/electronics11233936 - 28 Nov 2022
Cited by 1 | Viewed by 1551
Abstract
Microservice architecture is a preferred way to build applications. Being flexible and loosely coupled, it allows code to be deployed at a high pace. State, or in other words data, is not only a commodity but crucial to any business, and its high availability and accessibility enable companies to remain competitive. However, maintaining low-latency stateful microservices, for example performing updates, is difficult compared to stateless microservices. Making changes to a stateful microservice requires a graceful failover, which has an impact on the availability budget. A method of graceful failover is proposed to improve the availability of a low-latency stateful microservice during maintenance. By observing database connection activity and forcefully terminating idle client connections, the method redirects database requests from one node to another with negligible impact on the client. Thus, the proposed method keeps the precious availability budget untouched while maintenance is performed on low-latency stateful microservices. A set of experiments was performed to evaluate stateful microservice availability during failover and to validate the method. The results show that near-zero downtime was achieved during a graceful failover.
(This article belongs to the Special Issue Cloud Computing and Applications, Volume II)
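
As an informal illustration of the connection-draining idea summarized above, the Python sketch below closes idle database connections before switching clients over to a new node. Every identifier here (Connection, drain_and_failover, promote_replica, redirect_clients) and every threshold is a hypothetical stand-in, not taken from the paper.

```python
# Illustrative sketch of idle-connection draining during a planned failover.
# All identifiers and thresholds are hypothetical; the paper's actual method
# and tooling may differ.
import time
from dataclasses import dataclass, field

IDLE_THRESHOLD_S = 0.5   # connections idle at least this long are safe to close
DRAIN_TIMEOUT_S = 10.0   # upper bound on how long the drain phase may take

@dataclass
class Connection:
    client_id: str
    last_activity: float = field(default_factory=time.monotonic)

    def is_idle(self, now: float) -> bool:
        return (now - self.last_activity) >= IDLE_THRESHOLD_S

def drain_and_failover(pool: list, promote_replica, redirect_clients) -> None:
    """Close idle connections until the primary is free, then switch nodes."""
    deadline = time.monotonic() + DRAIN_TIMEOUT_S
    while pool and time.monotonic() < deadline:
        now = time.monotonic()
        # Keep only connections still actively serving requests; idle ones are
        # dropped so their clients reconnect to the new primary.
        pool[:] = [c for c in pool if not c.is_idle(now)]
        time.sleep(0.05)
    promote_replica()    # make the standby node the new primary
    redirect_clients()   # e.g. repoint a virtual IP or service endpoint

if __name__ == "__main__":
    connections = [Connection("app-1"), Connection("app-2")]
    drain_and_failover(connections,
                       promote_replica=lambda: print("replica promoted"),
                       redirect_clients=lambda: print("clients redirected"))
```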

18 pages, 1324 KiB  
Article
Fundamentals of a Novel Debugging Mechanism for Orchestrated Cloud Infrastructures with Macrosteps and Active Control
by Bence Ligetfalvi, Márk Emődi, József Kovács and Róbert Lovas
Electronics 2021, 10(24), 3108; https://doi.org/10.3390/electronics10243108 - 14 Dec 2021
Cited by 3 | Viewed by 2394
Abstract
In Infrastructure-as-a-Service (IaaS) clouds, developing a ready-to-use and reliable infrastructure can be a complex task, because interconnected and dependent services are deployed (and later operated) concurrently on virtual machines. Different timing conditions may change the overall initialisation sequence, which can lead to abnormal behaviour or failure in this non-deterministic environment. The overall motivation of our research is to improve the reliability of cloud-based infrastructures with minimal user interaction and to significantly accelerate the time-consuming debugging process. This paper focuses on the behaviour of cloud-based infrastructures during their deployment phase and introduces the adaptation of a replay- and active-control-enriched debugging technique, called macrostep, to the field of cloud orchestration in order to support developers in troubleshooting deployment-related errors. The fundamental macrostep mechanisms, including the generation of collective breakpoint sets and the traversal method for the corresponding consistent global states, have been combined with the Occopus cloud orchestrator and the Neo4j graph database. The paper describes the novel approach, the design choices, and the implementation of the experimental debugger tool, together with a validation use case and some preliminary numerical results.
(This article belongs to the Special Issue Cloud Computing and Applications, Volume II)
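
To give a concrete flavour of what exploring "consistent global states" of a deployment can look like, the sketch below enumerates every completion order of a few interdependent deployment steps, each of which a macrostep-style debugger could replay deterministically. The step names and dependency map are invented for illustration; the paper's tool works on Occopus-orchestrated infrastructures with state stored in Neo4j.

```python
# Minimal sketch: enumerate the orders in which interdependent deployment steps
# can complete, so that each ordering could be replayed deterministically.
# The infrastructure description below is hypothetical.
from itertools import permutations

# step -> steps that must have completed before it
DEPENDS_ON = {
    "database": set(),
    "backend": {"database"},
    "frontend": {"backend"},
    "monitoring": set(),
}

def consistent_orderings(depends_on):
    """Yield every total order of steps that respects the dependency relation."""
    steps = list(depends_on)
    for order in permutations(steps):
        done = set()
        ok = True
        for step in order:
            if not depends_on[step] <= done:   # a prerequisite has not finished yet
                ok = False
                break
            done.add(step)
        if ok:
            yield order

if __name__ == "__main__":
    for ordering in consistent_orderings(DEPENDS_ON):
        print(" -> ".join(ordering))
```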

19 pages, 358 KiB  
Article
Swarm-Like Distributed Algorithm for Scheduling a Microservice-Based Application to the Cloud Servers
by Marian Rusek and Grzegorz Dwornicki
Electronics 2021, 10(13), 1553; https://doi.org/10.3390/electronics10131553 - 27 Jun 2021
Cited by 1 | Viewed by 1944
Abstract
The introduction of virtualization containers and container orchestrators fundamentally changed the landscape of cloud application development. Containers provide an ideal way to implement microservice-based architecture in practice, which allows for repeatable, generic patterns that make the development of reliable, distributed applications more approachable and efficient. Orchestrators allow the accidental complexity to be shifted from inside an application into the automated cloud infrastructure. Existing container orchestrators are centralized systems that schedule containers to cloud servers only at startup. In this paper, we propose a swarm-like distributed cloud management system that uses live migration of containers to dynamically reassign application components to different servers. It is based on the idea of “pheromone” robots. An additional mobile agent process is placed inside each application container to control the migration process. The number of parallel container migrations needed to reach an optimal state of the cloud is obtained using models, experiments, and simulations. We show that in the most common scenarios the proposed swarm-like algorithm performs better than existing systems, and, due to its architecture, it is also more scalable and resilient to container death. It also adapts automatically to the influx of containers and the addition of new servers to the cloud.
(This article belongs to the Special Issue Cloud Computing and Applications, Volume II)
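
A toy sketch of the pheromone-inspired decision follows, assuming a simple rule in which the agent inside a container migrates it toward a less-loaded server with a probability that grows with the load imbalance. The server names, loads, and rule are illustrative only and do not reproduce the paper's algorithm.

```python
# Toy pheromone-style migration rule: compare the current server's load with the
# least-loaded alternative and migrate with a probability proportional to the
# imbalance. Purely illustrative; not the paper's algorithm.
import random
from typing import Dict, Optional

def migration_target(current: str, loads: Dict[str, float]) -> Optional[str]:
    """Return a server to migrate to, or None to stay put."""
    candidates = [(s, l) for s, l in loads.items() if s != current]
    target, target_load = min(candidates, key=lambda x: x[1])
    imbalance = loads[current] - target_load
    if imbalance <= 1:                                # already balanced enough
        return None
    p_migrate = min(1.0, imbalance / loads[current])  # larger gap, higher chance
    return target if random.random() < p_migrate else None

if __name__ == "__main__":
    loads = {"server-a": 8.0, "server-b": 3.0, "server-c": 5.0}
    decision = migration_target("server-a", loads)
    print("migrate to", decision if decision else "nowhere (stay)")
```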

19 pages, 4612 KiB  
Article
A Huffman-Based Joint Compression and Encryption Scheme for Secure Data Storage Using Physical Unclonable Functions
by Yong Liu, Bing Li, Yan Zhang and Xia Zhao
Electronics 2021, 10(11), 1267; https://doi.org/10.3390/electronics10111267 - 25 May 2021
Cited by 5 | Viewed by 3441
Abstract
With the development of Internet of Things (IoT) and cloud-computing technologies, cloud servers need to store huge volumes of IoT data with high throughput and robust security. Joint Compression and Encryption (JCAE) schemes based on the Huffman algorithm have been regarded as a promising technology for enhancing data storage. Existing JCAE schemes still have the following limitations: (1) the keys in JCAE can be cracked by physical and cloning attacks; (2) rebuilding the Huffman tree reduces operational efficiency; (3) the compression ratio should be further improved. In this paper, a Huffman-based JCAE scheme using Physical Unclonable Functions (PUFs) is proposed. It provides physically secure keys with PUFs, efficient Huffman tree mutation without rebuilding, and a practical compression ratio by combining it with the Lempel–Ziv–Welch (LZW) algorithm. The performance of the instanced PUFs and the derived keys was evaluated. Moreover, our scheme was demonstrated in a file protection system with an average throughput of 473 Mbps and an average compression ratio of 0.5586. Finally, the security analysis shows that our scheme resists physical and cloning attacks as well as several classic attacks, thus improving the security level of existing data protection methods.
(This article belongs to the Special Issue Cloud Computing and Applications, Volume II)
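
The sketch below illustrates the general compress-then-encrypt flow with standard-library stand-ins: zlib replaces the Huffman/LZW coder, and a SHA-256 keystream stands in for both the PUF-derived key and a real cipher. It is a simplified illustration of the idea only, not a faithful or secure implementation of the paper's scheme.

```python
# Simplified compress-then-encrypt sketch. zlib and a SHA-256 keystream are
# stand-ins for the Huffman/LZW coder, the PUF-derived key, and a real cipher;
# this is illustrative only and must not be used as a secure scheme.
import hashlib
import zlib

def keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream of the requested length from the key."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def protect(data: bytes, puf_response: bytes) -> bytes:
    """Compress, then XOR-encrypt with a key derived from the (simulated) PUF."""
    compressed = zlib.compress(data)
    key = hashlib.sha256(puf_response).digest()
    return bytes(a ^ b for a, b in zip(compressed, keystream(key, len(compressed))))

def recover(blob: bytes, puf_response: bytes) -> bytes:
    key = hashlib.sha256(puf_response).digest()
    compressed = bytes(a ^ b for a, b in zip(blob, keystream(key, len(blob))))
    return zlib.decompress(compressed)

if __name__ == "__main__":
    secret_puf = b"\x13\x37\xbe\xef"   # placeholder for a device-unique PUF response
    payload = b"sensor reading: 23.5C " * 100
    stored = protect(payload, secret_puf)
    assert recover(stored, secret_puf) == payload
    print(f"compression+encryption ratio: {len(stored) / len(payload):.3f}")
```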

Review


46 pages, 4167 KiB  
Review
A Survey of Swarm Intelligence Based Load Balancing Techniques in Cloud Computing Environment
by M. A. Elmagzoub, Darakhshan Syed, Asadullah Shaikh, Noman Islam, Abdullah Alghamdi and Syed Rizwan
Electronics 2021, 10(21), 2718; https://doi.org/10.3390/electronics10212718 - 8 Nov 2021
Cited by 24 | Viewed by 6057
Abstract
Cloud computing offers flexible, interactive, and observable access to shared resources on the Internet. It frees users from the requirement of managing computing on their own hardware. It enables users not only to store their data and run computations over the Internet but also to access them whenever and wherever required. The frequent use of smart devices has fueled the rapid growth of cloud computing. As more users adopt the cloud environment, the focus has turned to load balancing. Load balancing allocates tasks or resources to different devices. In cloud computing, load balancing has played a major role in the efficient usage of resources for the highest performance. This requirement has driven the development of algorithms that can optimally assign resources while managing load and improving quality of service (QoS). This paper provides a survey of load-balancing algorithms inspired by swarm intelligence (SI). The algorithms considered in the discussion are Genetic Algorithm, Bat Algorithm, Ant Colony, Grey Wolf, Artificial Bee Colony, Particle Swarm, Whale, Social Spider, Dragonfly, and Raven Roosting Optimization. An analysis of the main objectives, application areas, and targeted issues of each algorithm (with advancements) is presented. In addition, a performance analysis has been carried out based on average response time, data center processing time, and other quality parameters.
(This article belongs to the Special Issue Cloud Computing and Applications, Volume II)
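
As a generic illustration of swarm-inspired load balancing (not any particular algorithm from the survey), the sketch below assigns tasks to virtual machines probabilistically, weighting each VM by the inverse of its current load in the spirit of pheromone-guided selection.

```python
# Toy swarm-inspired load balancer: tasks are placed on VMs with selection
# weights that favour lightly loaded machines. Generic sketch, not any specific
# algorithm from the survey.
import random

def assign_tasks(task_lengths, n_vms):
    loads = [0.0] * n_vms
    placement = [[] for _ in range(n_vms)]
    for length in task_lengths:
        # weight each VM by the inverse of its load (+1 avoids division by zero)
        weights = [1.0 / (1.0 + loads[vm]) for vm in range(n_vms)]
        vm = random.choices(range(n_vms), weights=weights)[0]
        placement[vm].append(length)
        loads[vm] += length
    return placement

if __name__ == "__main__":
    tasks = [random.randint(1, 10) for _ in range(20)]
    for vm, assigned in enumerate(assign_tasks(tasks, n_vms=3)):
        print(f"vm-{vm}: total load {sum(assigned)}")
```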
