Review

A Survey on Modeling Languages for Applications Hosted on Cloud-Edge Computing Environments

by Ioannis Korontanis 1,2,*, Antonios Makris 1,2 and Konstantinos Tserpes 1,2
1 Department of Informatics and Telematics, Harokopio University of Athens, 17671 Athens, Greece
2 School of Electrical and Computer Engineering, National Technical University of Athens, 15780 Athens, Greece
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(6), 2311; https://doi.org/10.3390/app14062311
Submission received: 31 January 2024 / Revised: 23 February 2024 / Accepted: 29 February 2024 / Published: 9 March 2024
(This article belongs to the Topic Cloud and Edge Computing for Smart Devices)

Abstract:
In the field of edge-cloud computing environments, there is a continuous quest for new and simplified methods to automate deployment and runtime adaptation to application lifecycle changes. Towards that end, cloud providers promote their own service description languages to describe deployment and adaptation processes, whereas application developers opt for cloud-agnostic open standards capable of modeling applications. However, not all open standards are able to capture concepts that relate to the adaptation of the underlying computing environment to changes in the application lifecycle. In our quest for a formal approach to encapsulate these concepts, this study presents various Cloud Modeling Languages (CMLs). In this study, when referring to CMLs, we mean service description languages, domain-specific languages, and open standards. The output of this study is a review that classifies CMLs based on their effectiveness in describing the deployment and adaptation of applications in both cloud and edge environments. According to our findings, approximately 90.9% of the examined languages support deployment descriptions overall. In contrast, only around 27.2% of the examined languages let developers specify whether their application components should be deployed on the edge or in a cloud environment. Regarding runtime adaptation descriptions, approximately 54.5% of the languages provide support in general.

1. Introduction

Cloud modeling languages (CMLs) are utilized to generate application models that articulate the software components and the application logic, with the aim of enabling their deployment and execution in a cloud and edge computing infrastructure. This concept is especially relevant when migrating applications to the cloud. Research indicates that modern CMLs can help with deploying applications, as well as setting up auto-scaling and self-healing features for applications transitioning to the cloud. The cloud-based concepts introduced and supported by CMLs may vary depending on their specific objectives and syntax. Typical CML examples such as OASIS TOSCA (Topology and Orchestration Specification for Cloud Applications) (https://docs.oasis-open.org/tosca/TOSCA-Simple-Profile-YAML/v1.3/os/TOSCA-Simple-Profile-YAML-v1.3-os.html, accessed on 10 October 2023) or CAMEL (Cloud Application Modelling and Execution Language) [1] provide effective templates for abstracting the application topology, component descriptions, and requirements. However, these CMLs have not yet been widely adopted by the market. Many production systems still heavily rely on commercial and tailor-made solutions, which may result in lock-in conditions and hinder portability.
The rise of the cloud-edge continuum, enabling next-generation applications, has sparked renewed efforts to find a suitable way to describe applications. One crucial requirement introduced in this context is minimizing the effort involved in transitioning from traditional platforms to these types of infrastructure. In such situations, a CML with appropriate abstractions and a high-level syntax might make a seamless lift-and-shift migration possible. However, the question remains: what exactly are the ‘appropriate abstractions’ that will facilitate these novel requirements?
As per Bergmayr et al. [2], the precise elements of an application that a language ought to delineate, and even the suitable language to employ for a particular issue, remain undefined. They suggest that CMLs could be viewed as domain-specific languages within the domain of cloud computing. A comprehensive definition of CMLs might entail their usage in describing a collection of standalone software components into which an application is broken down. These languages are used to define the requirements of the application components in terms of resources, with the goal of ensuring that the quality level of the end product, such as QoE (Quality of Experience), meets the expectations of end users.
Furthermore, to the best of our knowledge, there is an important aspect that existing CMLs do not adequately address, i.e., the challenge of runtime adaptation in the context of auto-scaling and self-healing. Such a feature would allow applications to leverage the full potential of cloud computing and modern continuous integration and deployment techniques. In the context of our review paper, runtime adaptation refers to the ability of the underlying infrastructure to dynamically adapt to changes in the application’s lifecycle state. This adaptation is accomplished through the invocation of infrastructure-level operations, such as auto-scaling and self-healing actions, for instance, by scaling application components across various clusters and devices within the cloud-edge continuum. To enable this, we are searching for CMLs capable of aiding in the specification of lifecycle events and the corresponding infrastructure-level mitigation actions.
Our research is, in essence, a review that examines various CMLs to determine which ones exhibit the desired characteristics and performs a classification analysis on them. Based on our analysis, developers will be equipped to select the most suitable CML according to their requirements after reading this paper. In particular, developers require a CML capable of effectively describing an ultra-low-latency application deployment in a cloud-edge environment. Our prior research [3] investigated semantic gaps within CMLs used for edge and cloud applications. This analysis led us to conclude that effective CMLs should prioritize both ease of deployment and a high degree of automation in runtime management. In the present study, we pinpoint the CMLs that can model applications such that they can be (a) deployed with minimal re-engineering effort on a cloud-edge continuum infrastructure and (b) adapted at runtime to lifecycle events.
The remaining sections of this paper are structured as follows. In Section 2, an overview of relevant research in the field is presented, along with a comparison based on our research objectives. In the same section, the methodology employed for the review, which led to the identification of the final set of CMLs closely aligned with our goals, is also analyzed. The actual analysis of concepts, semantics, and properties that each modeling language can describe is provided in Section 3. Lastly, in Section 4, the conclusions drawn from the analysis and future research directions in the field are outlined.

2. Review Methodology

Nowadays, several edge-cloud solutions in the market appear to drive the growing interest in CMLs, which can be used to design applications for edge-cloud environments. CMLs may have a different scope and syntax; thus, our plan is to investigate their capacity to model application components and their requirements on edge-cloud computing environments. This investigation leads to a comparison of the selected CMLs that may serve as a valuable resource for researchers and practitioners working on technologies revolving around the edge-cloud continuum. The initial step in conducting this comparison involves understanding how each CML represents an application component and subsequently determining the aspects of the application component linked with deployment and runtime adaptation descriptions.
Therefore, the starting point is the investigation of widely recognized open standards, modeling languages, and specifications such as TOSCA, CAMEL [1], and HOT (https://docs.openstack.org/heat/latest/template_guide/hot_spec.html) (accessed on 10 October 2023). This exploration aims to examine the properties and rationale behind the design of these languages. The secondary aim is to determine whether the grammar of those CMLs has the capability to support semantics for orchestration and runtime adaptation across both cloud and edge platforms. Furthermore, extensions of TOSCA and HOT are examined to understand the specific aspects that are expanded to introduce new edge-related semantics.
Our study also incorporates CMLs with semantic concepts that align with our interests, as highlighted in relevant surveys. In order to avoid the inclusion of irrelevant research works, a specific time window is applied along with a list of keywords to ensure that our survey contains publications that are in line with our objectives.

2.1. Relevant Surveys

This section explores several surveys providing valuable insights into the analysis of CMLs. The selected surveys cover the features, benefits, and classification of these languages, providing a comprehensive overview of their status and impact.
Bergmayr et al. [2] perform a systematic review, categorizing CMLs based on their language scope, characteristics, modeling capabilities, and tool support. The authors emphasize the importance of CMLs being able to describe both the cloud environment and application components, including their deployment into cloud services. Furthermore, the article explores the use of translators or mechanisms that interpret these languages and convert descriptions into a model that facilitates the provisioning of application components.
Similarly, in [4], Alidra et al. investigate Fog modeling languages, which are designed to model Fog systems and their distinct features. The authors evaluate the effectiveness of current Fog modeling languages by checking whether they align with the key attributes outlined in their feature model, covering general characteristics and supported tools. According to the feature model, a Fog modeling language can be defined by its scope and several characteristics such as dimension, layer, architecture type, control type, resources, properties, and genericity. For example, the dimension can represent the structure, behavior, or both of a Fog system, while the architecture type can target different kinds of Fog system architectures, such as distinct layers or complementary views. Additionally, a language can focus on significant properties of Fog systems, such as privacy/security, health, performance, or energy. Furthermore, the feature model highlights that a Fog modeling language can be characterized by its definitions, including abstract syntax, concrete syntax, and semantics. It may also have an extension mechanism to refine or expand the language when necessary. Finally, a Fog modeling language is characterized by its support, which encompasses implementation, capabilities, interoperability, exploitation, validation, and documentation.
Another study [5] classifies service description languages based on domain, delivery model, coverage, objectives, representation, and semantics. The research work categorizes service description languages into four categories:
  • Deployment and provisioning languages: These languages focus on describing the deployment of cloud applications and installation of required libraries on virtual host machines. The domain for these languages is exclusively the cloud.
  • Modeling and composition languages: These languages focus on describing features and composition of services that can be used in cloud environments, SOA (Service-Oriented Architectures), or SWS (Semantic Web Services).
  • Discovery and selection languages: These languages focus on matching user requirements with descriptions of available services. Similar to modeling and composition languages, discovery and selection languages work in the domains of cloud, SOA, or SWS, targeting IaaS (Infrastructure as a Service), PaaS (Platform as a Service), SaaS (Software as a Service), and XaaS (Anything as a Service) infrastructures.
  • SLA (Service level agreement) languages: These languages focus on describing the functionalities and QoS (Quality of Service) that a cloud provider offers to a customer. This type of language works exclusively at the XaaS layer, targeting the cloud and SOA domains.
For each of the above languages, the authors categorize whether the scope is technical, business, or supports both.
A systematic literature review on the TOSCA open standard [6] reveals that TOSCA is widely considered as the most important specification for describing the deployment and lifecycle of cloud applications. The review categorizes works and papers based on TOSCA utilization in the following ways:
  • Tools capable of modeling components using TOSCA and deploying applications based on TOSCA models.
  • Extensions of TOSCA that introduce new topologies, management plans, and visual notations.
  • Methodologies for manipulating TOSCA models.
  • Papers that compare TOSCA with other specifications or standards.
  • Papers that present the usage of TOSCA in DevOps (Development and Operations), IoT (Internet of Things), and for testing purposes using TOSCA models.
  • Papers that introduce TOSCA and its concepts.
The above review was conducted during the timeframe from 2013 to 2017, which coincided with a surge in publications related to TOSCA. This period witnessed a peak in the popularity of TOSCA, with numerous works exploring diverse facets of TOSCA and its applications, including:
  • Reports on the utilization of TOSCA;
  • Presentations of methodologies for manipulating TOSCA models;
  • Extensions of TOSCA.
Alongside CMLs, deployment technologies are crucial in defining the components, relationships, and types in cloud deployment models. Wurster et al. [7] conduct a systematic review that categorizes deployment technologies into three types:
  • General-purpose deployment technologies, such as Puppet, Chef, Ansible, OpenStack Heat, Terraform, SaltStack, Juju, and Cloudify, can support single-, hybrid-, and multi-cloud deployments, as well as different types of cloud services (XaaS). These technologies are also flexible enough to be customized by integrating personalized and reusable components to cater to additional providers or services. The component’s lifecycle can be customized through specific actions, according to Wurster et al.’s classification.
  • Provider-specific deployment technologies, such as AWS (Amazon Web Services) CloudFormation and Azure Resource Manager, are designed for XaaS deployments and support the creation of reusable entities and component lifecycle management. However, unlike general-purpose deployment technologies, these are limited to single-cloud deployments as they are exclusive to specific cloud providers and can only work with the corresponding provider’s cloud services, as noted by Wurster et al.
  • Deployment technologies classified as platform-specific support the creation of reusable entities, allow for the management of a component’s lifecycle, and can support multiple cloud providers. However, they have certain limitations compared to general-purpose deployment technologies, such as constraints on the cloud delivery model and the requirement of specific platform bundles to implement components. The technologies included in this group, according to Wurster et al.’s systematic review, are Kubernetes, CFEngine (Configuration Engine), and Docker Compose.
Kritikos et al. [8] conduct a survey aimed at evaluating the suitability of various CMLs for multi-cloud environments. Their evaluation involves identifying languages that are already multi-cloud enabled and assessing the areas that require extension to support this feature. The assessment criteria include the languages’ expressiveness, runtime support, capability to support multi-cloud environments, and model reusability. Our analysis of application concepts in CMLs is influenced by the findings of this survey. In particular, both our research and Kritikos et al.’s evaluation investigate deployment expressiveness, which pertains to describing the deployment of an application in terms of its structure and content. Additionally, both studies examine the ability to express event-driven descriptions through metric constraints and to bind adaptation actions with workflows. In Kritikos et al.’s evaluation, CAMEL [1] emerges as the top-performing CML, achieving the highest level of satisfaction across all evaluation dimensions. Notably, this survey encompasses several other CMLs, such as CloudFormation, Azure Resource Manager DSL (Domain-Specific Language), Google Cloud Deployment Manager DSL, Hashicorp Configuration Language, HOT (Heat Orchestration Template), JuJu, and TOSCA.
Our analysis aims to combine insights from the aforementioned surveys. This paper delves into the investigation of how CMLs can accurately capture and describe elements associated with cloud and edge computing environments. The goal is to assess which CMLs are suitable for deployment in cloud-edge continuum environments, much like the approach taken by Alidra et al. [4] for Fog modeling languages. Our additional aim is to examine the structure of various CMLs and delineate the properties they support for deployment and runtime adaptation. This analysis is similar to the one presented by Bergmayr et al. [2]. The aim of this property analysis is to investigate how advanced cloud-edge continuum platforms or providers, similar to those referenced by Wurster et al. [7], could employ these features to simplify deployment or runtime adaptation processes. By studying [6,8], our final objective is to identify cloud-independent CMLs capable of supporting these concepts to mitigate vendor lock-in risks. In this manner, our survey offers developers and researchers guidance on the topic of CMLs that can be employed for cloud-edge platforms based on their descriptive capabilities. It may also inspire the creation of new languages tailored to specific needs.

2.2. Research Questions

This survey paper seeks to answer the research question regarding the portrayal of an application component by CMLs. Given the absence of a uniform syntax, scope, and dictionary among CMLs, the paper aims to explore the various approaches they employ in describing application components. Additionally, the survey examines how CMLs address the depiction of deployment phases and runtime adaptation, alongside their descriptions of application components. The research questions are the following:
  • How can modeling languages model application components along with their requirements?
  • How do modeling languages describe the deployment phase of an application?
  • Are there any CMLs that can model the runtime adaptation of application components? If there are any, how are they able to provide this description?

2.3. Data Sources and Search Strategy

A decision was made to utilize the time window spanning from 2011 to 2023, as, from 2011 onward, cloud technologies appeared to be a prominent subject of interest in both the research and enterprise domains. This time window also roughly coincides with the release dates of TOSCA and CAMEL [1]. TOSCA was officially introduced in 2013 [6], but the first papers about TOSCA were already published in 2012. In 2016, the release of the TOSCA Simple YAML profile (a shift from XML to YAML) sparked fresh research interest. Concurrently, in the same year, CAMEL emerged as a promising alternative to TOSCA for describing application components and their behavior in cloud environments. The present survey includes older and newer publications that appeared to be relevant to the stated research questions.
Finding the most relevant research works proved to be challenging due to the wide range of research topics related to cloud computing within the past decade. To locate suitable research papers for our study, electronic databases such as Scopus, SpringerLink, ACM Digital Library, and IEEE Xplore were utilized. In our research, the queries performed in each of these libraries include terms shown in Table 1.
All the terms mentioned above have the potential to yield CMLs with syntax capable of describing the deployment and runtime adaptation of applications in the cloud-edge continuum. To identify CMLs that avoid cloud vendor lock-in issues, the keyword multi-cloud is included. It is crucial to emphasize the supported properties of CMLs that enable deployment and runtime adaptation; hence, additional keywords such as deployment, runtime adaptation, and orchestration are employed. One such keyword is workflows, since some languages depict runtime adaptation through an abstract set of actions. Other CMLs can utilize geolocation constraints to assist orchestrators in selecting a desirable host from a global array. Another vital property of CMLs is their ability to express time constraints. This term is used to identify languages capable of describing constraints based on time, ensuring a higher quality of service. For the same reason, the term time-critical cloud application is used in queries to maximize understanding of the use of time constraints. The syntax used to express runtime adaptation is also significant; there are languages capable of expressing cloud service life cycle or metric-based conditions under which adaptation actions should be applied. CMLs that support conditions in general fall under elasticity specifications. In the context of runtime adaptation, interest also lies in self-adaptive systems that apply CML descriptions to their infrastructure, following a model-driven engineering approach. In a cloud-continuum ecosystem, infrastructure can be located in cloud computing, edge computing, or even fog computing. There are specialized CMLs specifically designed to describe Internet of Things applications in the cloud-edge continuum. These CMLs are intriguing as they offer different description capabilities than traditional CMLs. Detailed queries can be found in Table A1 and Table A2.
A substantial number of findings were collected from diverse sources, including 140 research papers from IEEE Xplore, 408 research papers from the ACM Digital Library, 1655 research papers from Scopus, and 3014 research papers from SpringerLink. This resulted in a total of 5217 findings related to the field of Computer Science. However, because of the varying relevance to the scope of our survey, it became necessary to perform manual filtering. To curate the final set of papers, a manual study selection process was executed, following a systematic literature review approach inspired by Bergmayr’s systematic literature review [2]. This process encompassed the establishment of inclusion and exclusion criteria to identify studies with similar objectives. It involved multiple pruning stages, such as removing duplicates, employing whitelist keywords to refine the results, and reviewing the titles and abstracts of the papers. Ultimately, manual selection was performed based on the content of the studies to identify the most relevant resources for our research.

2.4. Study Selection

This survey diverges from the conventional pruning stages of a traditional literature review. The literature review process involves several key stages. Initially, researchers define their research questions, objectives, and the scope of their study, formulating search queries and identifying relevant keywords. Following this, they conduct database searches to retrieve a broad set of potentially pertinent papers. In the screening phase, researchers review titles and abstracts to assess relevance and exclude papers that do not align with their criteria. Papers that pass this screening undergo a thorough full-text review. Data extraction and synthesis follow, with researchers extracting pertinent information and organizing it to address research questions. Quality assessment, depending on research methodology, may also be conducted on the methodology, research design, sample size, and other factors. Finally, the findings are reported in a structured manner, summarizing insights and discussing implications in the context of research objectives.
Our approach embodies a mixed literature review methodology. In this approach, the initial pruning stage encompasses the definition of search queries, the selection of relevant keywords, the choice of electronic databases, and the removal of duplicate query results. The subsequent step involves a whitelisting process applied to the search outcomes, emphasizing preferred keywords. Following that, the third stage is characterized by a screening phase, during which the titles and abstracts of the results are carefully reviewed to exclude papers falling outside the scope of our study. Finally, in the fourth stage, a comprehensive full-text review is undertaken to identify papers aligned with our research objectives, forming the set of papers subject to analysis in this survey. This approach was previously employed by Bergmayr et al. in [2].
The decision to employ a mixed literature review approach was influenced by several factors. One primary reason is the complexity of the subject matter. Papers spanning a decade, within a field with diverse terminologies, are being examined in our research. This diversity makes accurately identifying and assessing relevant articles challenging. To address this, the whitelisting method was incorporated. Additionally, our choice was influenced by the large number of search results. Traditional methods of manually filtering papers can be time-consuming, especially with many papers. In our mixed approach, time was saved by utilizing Python scripts for deduplication and whitelisting. Figure 1 presents all the steps outlined to reach the final set of examined papers.

2.4.1. Inclusion and Exclusion Criteria

This survey includes CMLs, domain-specific languages, and any other languages suitable for describing application components along with their deployment and runtime adaptation in edge-cloud computing environments. It is required that all related papers be written in the English language. Additionally, it is crucial to gather results directly related to Computer Science.
This survey includes CMLs with specific features like cloud independence and the ability to generate high-level models. Such languages are valuable for avoiding vendor lock-in and facilitating the integration of edge and various cloud providers. Platforms utilize these languages to create provider-agnostic high-level models of applications, which are then transformed into technical configurations interpretable by the platform. According to Rossini [1], this transformation process is executed by a tool called a Reasoner, specialized in generating technical application descriptions from high-level ones.
Another criterion for inclusion is the ability to describe the deployment unit of the application component. While many CMLs describe components as virtual machines, there is a growing trend toward container technologies like Docker and Kubernetes. As a result, there are CMLs capable of describing application components packaged as containers. This survey seeks to compile a comprehensive list of languages that can describe application components as high-level entities, services on virtual machines, or containers, covering a wide range of available options.
For a platform to effectively use a CML for deploying application components, the CML must have particular properties. Application components are usually packaged as images, like Docker containers or virtual machines, and stored in repositories for edge-cloud platforms. These platforms use mechanisms to fetch these images for deployment. Hence, CMLs must describe images, repositories, and credentials for seamless deployment.
Application components collaborate within an application, often relying on input data and sometimes depending on one another. Understanding how CMLs represent relationships, dependencies, and environmental values of these components is crucial. These properties describe interactions and dependencies, and identify input parameters needed for successful execution.
Given our focus on CMLs designed to describe applications in edge-cloud computing environments, we are interested in studies that introduce languages capable of defining placement options for application components as deployment criteria. Developers highly value the capability of a CML to determine whether a component should be deployed in the cloud or at the edge. In edge-cloud platforms, hosts comprise both physical and virtual machines; thus, it is crucial for CMLs to describe both types of machines and their hardware capabilities, as this enables optimal deployment of application components in cloud-edge continuum platforms. Additionally, certain CMLs enable the description of geolocation coordinates as deployment criteria or constraints. This is valuable as hosts may be spread across different countries, and considering geolocation can enhance deployment outcomes.
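To make these inclusion criteria concrete, the deployment-related properties discussed above (image, repository, credentials, placement, hardware capabilities, geolocation, dependencies, and environment values) can be gathered into a single illustrative schema. The following Python sketch is purely hypothetical: the field names are our own and do not correspond to the vocabulary of any particular CML such as TOSCA or CAMEL.

```python
from dataclasses import dataclass, field
from typing import Optional, Tuple

# Hypothetical, simplified component description illustrating the kinds of
# deployment properties a CML is expected to capture. Real CMLs define
# their own schemas; every name below is invented for illustration.
@dataclass
class ComponentDescription:
    name: str
    image: str                                   # e.g., a container or VM image reference
    repository: str                              # registry/repository the image is fetched from
    credentials_ref: Optional[str] = None        # reference to registry credentials
    placement: str = "any"                       # "cloud", "edge", or "any"
    min_cpu_cores: int = 1                       # hardware capability requirements
    min_memory_mb: int = 256
    geolocation: Optional[Tuple[float, float]] = None  # (latitude, longitude) constraint
    depends_on: list = field(default_factory=list)     # inter-component dependencies
    env: dict = field(default_factory=dict)            # environment values / input parameters

aggregator = ComponentDescription(
    name="sensor-aggregator",
    image="example/aggregator:1.0",
    repository="registry.example.org",
    placement="edge",
    min_memory_mb=512,
    depends_on=["message-broker"],
)
print(aggregator.placement)  # prints: edge
```

A platform reading such a model would have everything needed to fetch the image, pick a host matching the placement and hardware constraints, and wire the component to its dependencies.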
Our research is also driven by the importance of considering runtime adaptation in CMLs. Hence, it is vital to identify which CMLs allow for describing event-driven logic or optimization rules. Event syntax enables the description of QoS conditions that trigger actions to adapt an application component. These conditions provide guidelines on when and how adaptations should be executed based on specific QoS thresholds or events. Incorporating these thresholds within a model is akin to providing a step-by-step recipe for the platform to follow when QoS degrades. On the other hand, optimization rules allow application developers to define objectives, such as cost minimization or maximizing the number of running instances, which serve as guiding principles for making runtime adaptation decisions. Taking into account these two methods for describing runtime adaptation, our survey includes CMLs that enable the description of QoS conditions, constraints, execution flow (workflow), and optimization rules.
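The two description styles above can be illustrated with a minimal, hypothetical sketch: an event-driven rule binds a QoS metric condition to an infrastructure-level adaptation action, and evaluating the rules against observed metrics yields the actions a platform should trigger. All metric names, thresholds, and actions here are invented for illustration and do not reflect the syntax of any specific CML.

```python
# Illustrative sketch (not any specific CML): each rule encodes a QoS
# threshold (the "when") and an adaptation action (the "how"), mirroring
# the step-by-step recipe a model hands to the platform.

def evaluate_rules(metrics, rules):
    """Return the adaptation actions whose conditions hold for the given metrics."""
    triggered = []
    for rule in rules:
        value = metrics.get(rule["metric"])
        if value is None:
            continue
        op = rule["operator"]
        if (op == ">" and value > rule["threshold"]) or \
           (op == "<" and value < rule["threshold"]):
            triggered.append(rule["action"])
    return triggered

rules = [
    # Scale out when average latency degrades beyond 200 ms.
    {"metric": "latency_ms", "operator": ">", "threshold": 200, "action": "scale_out"},
    # Self-heal when component availability drops below 99%.
    {"metric": "availability", "operator": "<", "threshold": 0.99, "action": "restart"},
]

actions = evaluate_rules({"latency_ms": 250, "availability": 0.995}, rules)
print(actions)  # prints: ['scale_out']
```

Optimization rules differ in that they state an objective (e.g., minimize cost) rather than an explicit trigger, leaving the platform to decide which concrete actions best serve that objective.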

2.4.2. Pruning Stage 1: Search Limited to Relevant Publication Sources

During the process of conducting queries across multiple electronic databases to identify relevant works for our survey paper, the challenge of managing duplicate entries among the 5217 results was encountered.
To tackle this issue, the results from each electronic database were carefully compared to detect and remove duplicate entries. Papers with identical titles and authors were identified and excluded from consideration. Through the removal of duplicates, the paper count was reduced from the original 5217 to 4636.
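The deduplication step described here might be sketched as follows; we assume, purely for illustration, that each query result is a record with title and authors fields, and that titles are normalized so casing differences across databases do not hide duplicates.

```python
def deduplicate(records):
    """Drop records whose (title, authors) pair has already been seen.

    Each record is assumed to be a dict with 'title' and 'authors' fields;
    both are lower-cased so the same paper exported by two databases with
    different capitalization is still detected as a duplicate.
    """
    seen = set()
    unique = []
    for rec in records:
        key = (
            rec["title"].strip().lower(),
            tuple(sorted(a.lower() for a in rec["authors"])),
        )
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

papers = [
    {"title": "A Survey on CMLs", "authors": ["A. Author"]},
    {"title": "a survey on cmls", "authors": ["A. Author"]},  # duplicate from another database
    {"title": "TOSCA in Practice", "authors": ["B. Author"]},
]
print(len(deduplicate(papers)))  # prints: 2
```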
Subsequently, we had to identify the papers published in the selected sources that corresponded with our research interest.

2.4.3. Pruning Stage 2: Whitelist-Based Keyword Search

The queries presented in Table A1 and Table A2 were initially employed using a keyword whitelist. This was necessary because when combining Scopus and SpringerLink searches, the results yielded over 60,000 articles from unrelated science fields. In the second pruning stage, the results were further refined using both the same keywords that were applied in the queries and some new ones; the complete keyword list can be found in Table 2. Additionally, during this stage, the results related to the keyword Unified Modeling Language were omitted unless the same publication also featured other keywords presented in Table 2. This exclusion was implemented due to the out-of-scope nature of the results received.
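The whitelisting logic, including the special handling of the Unified Modeling Language keyword, could be sketched roughly as follows. The keyword set below is a small hypothetical stand-in for the full list in Table 2, and matching is simplified to a case-insensitive substring test.

```python
# Illustrative stand-in for the Table 2 keyword whitelist.
WHITELIST = {
    "cloud modeling language", "domain-specific language", "tosca", "camel",
    "orchestration", "runtime adaptation", "deployment", "multi-cloud",
    "unified modeling language",
}

def passes_whitelist(text):
    """Keep a paper only if it matches a whitelisted keyword other than UML.

    Papers matched solely via "Unified Modeling Language" were excluded
    unless they also featured at least one other whitelisted keyword.
    """
    matched = {kw for kw in WHITELIST if kw in text.lower()}
    return bool(matched - {"unified modeling language"})

print(passes_whitelist("A TOSCA-based orchestration approach"))    # prints: True
print(passes_whitelist("Teaching the Unified Modeling Language"))  # prints: False
```

In practice such a filter would be run over each paper's title, abstract, and author keywords rather than a single string.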
The inclusion of new keywords became necessary due to the variation in terminologies used by different authors when describing CMLs. For example, certain individuals utilize the expression cloud modeling language, while others may opt for terms like domain-specific language or application topology language. This diversity also extends to the technical standards, commonly referred to as either configuration or infrastructure as code.
A manual search was then conducted, and papers lacking at least one of the keywords were excluded from our list. The remaining papers totaled 1311. It is important to note that while these papers passed the initial screening, they may not directly relate to our research interest. Therefore, additional pruning stages were necessary to ensure a more accurate selection of relevant articles.

2.4.4. Pruning Stage 3: Manual Selection Based on Title and Abstract

After completing the initial pruning phases, most of the publications pertinent to our survey had been pinpointed; however, a few irrelevant ones persisted. As a result, a manual selection process was conducted by thoroughly reading and comparing the titles and abstracts of these papers. Determining a paper’s relevance solely from its title can be challenging, whereas an abstract succinctly outlines a paper’s objectives in a few lines. Following this additional pruning step, a total of 334 papers remained.

2.4.5. Pruning Stage 4: Manual Selection Based on Study Content

During the final pruning stage, all remaining relevant papers were reviewed. An analysis was performed to ascertain their compliance with the concepts specified in the inclusion and exclusion criteria. Furthermore, the references used in those papers were examined to uncover additional sources for our review. The relevant surveys mentioned in Section 2.1 proved beneficial in identifying and including further works related to our review during this process. A final set of 22 papers was generated through manual selection based on study content. These are the papers that are subject to comparison and analysis in this survey.

3. Results

Within each subsequent subsection, an analysis is presented of the supported properties deemed necessary for CMLs to describe deployment and runtime adaptation within cloud-edge platforms. This analysis can be viewed as both addressing the research questions outlined in Section 2.2 and as a means for classifying the examined CMLs based upon their supported properties.

3.1. Cloud Provider-Independent Models

Many users look for modeling languages that are not tied to a specific cloud provider. This is because they often have to learn a new specification each time they switch to a different platform for deploying applications. TOSCA, as a provider-independent modeling language, offers a solution to this challenge by providing a generic description for deployment and lifecycle management, allowing for users to employ the same specification across different cloud platforms.
GENTL is presented as a viable solution for creating cloud provider-independent models in [9]. It is a generic topology language that enables the mapping of different languages to a common model, with the goal of describing deployment and lifecycle management in a cloud provider-agnostic manner. This approach provides an alternative option for achieving cloud portability without being bound to specific cloud providers. EDMM, which was introduced in [7], is another CML that offers universal models that are not specific to any cloud provider.
While languages like GENTL, EDMM, and TOSCA offer an abstract approach that is cloud provider-independent, HOT does not possess this characteristic. It is specifically designed for use within the OpenStack cloud computing platform, although it has also been adopted by multiple cloud providers thanks to its vendor-independent specifications for launching applications in the cloud environment, as mentioned in [10]. The documentation of HOT highlights the presence of a provider section, enabling developers to define providers. In the context of HOT, the deployment unit is portrayed as a virtual machine that must be deployed on the server resources of a particular cloud provider. In their introduction of CAMEL, Rossini et al. [1] discuss how CAMEL can generate cloud provider-independent models usable across multiple cloud providers. Despite sharing a similar capability with HOT, CAMEL uses a different syntax for expressing providers. It supports various packages for designing different aspects of an application and its requirements, including a dedicated provider package specifically for describing cloud providers.
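A minimal HOT template makes the contrast concrete: the deployment unit is an OpenStack server resource, so the model is tied to that platform's resource types (the image, flavor, and network values below are placeholders):

```yaml
heat_template_version: 2018-08-31

resources:
  app_server:
    type: OS::Nova::Server       # the deployment unit is a virtual machine
    properties:
      image: ubuntu-22.04         # placeholder image name on the target cloud
      flavor: m1.small            # placeholder flavor (CPU/RAM sizing)
      networks:
        - network: private-net    # placeholder network identifier
```

A provider-independent CML would express the same component without referencing OpenStack-specific resource types such as `OS::Nova::Server`.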
The abstract approach, which caters to multiple technologies, is also present in container description languages. For instance, Velo DSL [11] functions as an abstract deployment language for container orchestration and is capable of facilitating deployments across three container technologies: Docker Swarm, Kubernetes, and Mesos Marathon. Petrovic et al. [12] introduce SMADA-Fog (Semantic Model driven Approach to Deployment and Adaptivity of container-based applications in Fog Computing), a semantic model-driven approach that presents a framework facilitating automated code generation for infrastructure management. This includes the creation of Docker containers and SDN rules.
As mentioned earlier, many CMLs depend on translator tools to convert high-level, cloud-agnostic models into technical representations compatible with one or more cloud platforms. For instance, Uni4Cloud, as mentioned in [13], utilizes the OVF (Open Virtualization Format) to provide a cloud provider-agnostic representation of application components, their relationships, and requirements. OVF is a standard by DMTF (https://www.dmtf.org/standards/ovf) (accessed on 10 October 2023) that aims to offer a cloud-agnostic solution for deploying virtual machines in the cloud. Uni4Cloud also employs OCCI (Open Cloud Computing Interface) (https://occi-wg.org/) (accessed on 10 October 2023) as a generic API that provides details for the deployment of applications in the cloud. Uni4Cloud includes a Cloud Adapter component that translates the details of OCCI into specific commands for each cloud provider.
In their research, Rossini et al. [1] use the term Reasoner to refer to the translator tool. They propose an architecture that consists of a Profiler component for matching a deployment model with a specific cloud provider, followed by a Reasoner component that resolves constraints to produce a cloud provider-specific model. The proposed architecture includes an Adapter component that reads the cloud provider-specific model and generates deployment plans for the chosen provider. Finally, an Executionware component is used to consume the deployment plans and perform the actual deployment. Notably, this architecture shares similarities with the one used in Uni4Cloud [13]. The following section analyzes the crucial role reasoners play in enabling a platform to implement a CML and execute the deployment actions described in a cloud-independent model on specific technologies.

3.2. Description of the Deployment Phase

In edge-cloud computing environments, CMLs used for applications illustrate the interaction of application components and their deployment within these environments. Nevertheless, there is a lack of consensus among CMLs regarding the standardized approach for describing application components. While some languages represent them as containers, others favor the utilization of services on virtual machines. This discrepancy arises from developer perspectives on streamlining orchestration within specific cloud platforms that may have a preference for certain types of deployment units.
The previous subsection emphasizes the significance of CMLs that are not limited to specific cloud platforms. These languages should be capable of describing the deployment process. For instance, Uni4Cloud [13] utilizes the OCCI specification for cloud-agnostic application deployment. In contrast, CAMEL [1] enables the depiction of application deployment across various cloud providers. Users define components using the design time-domain, and runtime models are created by reasoners to facilitate deployment on the chosen provider. The deployment package in CAMEL allows developers to specify the deployment of their applications.
According to [14], TOSCA provides the ability to describe the deployment and management of application components in a cloud-agnostic way, making it compatible with various cloud providers and container technologies such as Docker and Kubernetes. However, to use TOSCA on cloud providers and container technologies, a reasoner is required. For reasoners to handle translation effectively, it is important to match the representations of a CML with the underlying technology entities. Building on this idea, Borisova et al. [15] conducted a study illustrating the utilization of a TOSCA-based language with Kubernetes to orchestrate pods, which serve as deployment units for one or multiple containers. They identified specific semantics necessary in TOSCA to describe application components, ensuring compatibility with Kubernetes configuration files.
Drawing inspiration from TOSCA’s capabilities, Wurster et al. introduced a language named EDMM in [7]. EDMM facilitates the creation of declarative deployment models that place significant emphasis on defining how an application is deployed in a platform-agnostic fashion. This includes specifying the application’s structure, components, configurations, and interdependencies without being tied to any specific platform. EDMM models can be mapped to Kubernetes and Docker Compose configuration files, and even to TOSCA models.
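A sketch of such a declarative, platform-agnostic model in EDMM’s YAML serialization might look as follows; the component names are illustrative, and the exact type names depend on the EDMM type repository in use:

```yaml
components:
  web_app:
    type: web_application        # the component type supplies install/terminate semantics
    properties:
      port: 8080
    relations:
      - hosted_on: app_runtime   # structural dependency, no platform specifics
  app_runtime:
    type: tomcat
    relations:
      - hosted_on: server_vm
  server_vm:
    type: compute
```

A mapping tool can translate this structure into, e.g., a Docker Compose file or Kubernetes manifests without the model itself naming either technology.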
Tosker [16] is a reasoner that uses TOSCA Simple Profile YAML files to simplify the deployment of Docker containers. Tosker parses extended TOSCA models capable of describing container orchestration and transforms them into Docker deployment files. However, it is important to note that Tosker does not support the deployment of containers in Docker Swarm mode. DesLauriers et al. [17] introduced an advanced reasoner in their paper. They presented MiCADO (Microservices-based Cloud Application-level Dynamic Orchestrator), a container orchestrator utilizing TOSCA on both Docker Swarm and Kubernetes platforms. To support this, they created a reasoner called the MiCADO Submitter. Initially, MiCADO used Docker Swarm but later transitioned to Kubernetes.
Another example is the Cloudify platform, a cloud and network function virtualization (NFV) orchestration platform, which utilizes an extended version of TOSCA to introduce new entities for describing Docker container orchestration (https://cloudify.co/blog/docker-tosca-cloud-orchestration-openstack-heat-kubernetes/) (accessed on 12 October 2023). The Cloudify platform also includes a reasoner capable of translating its own TOSCA entities into Docker deployment files. Solayman et al. [18] presented a noteworthy instance of the cooperation between Docker and Cloudify’s TOSCA, which can deploy IoT applications as containers across both edge and cloud environments.
Moreover, there are reasoners that possess the ability to transform technical deployment files into high-level description models. In their publication [19], Weller et al. proposed the Deployment Model Abstraction Framework (DeMAF), a reasoner tool that converts technology-specific deployment models, including Terraform, Kubernetes, and AWS CloudFormation files, into technology-agnostic deployment models. These technology-agnostic models are structured based on the EDMM [7]. The same approach was employed by Tsagkaropoulos et al. [20], who provided a TOSCA generator tool capable of retrieving annotations as input and generating the appropriate type-level model of the TOSCA extension.
Even in languages not based on TOSCA, the utilization of a reasoner is evident. Alfonso et al. [21] introduced the Self-adaptive IoT DSL and provided a reasoner converting their language models into K3s orchestration files. In their subsequent study [22], they further emphasized that this reasoner can deploy both the application components described in the model and the entire provided framework, which reads the model for deployment and runtime adaptation purposes. Another example is the Velo DSL, presented in [11], which functions as a language specifically designed for describing container orchestration. The Transpiler, as the reasoner of the Velo DSL, has the ability to interpret the abstract specification and generate deployment files for Kubernetes, Docker Swarm, and Mesos Marathon. SMADA-Fog [12] enables a deployment model capable of describing the deployment of application components as a static topology that must be applied to Docker. Zalila et al. [23] also discussed the inclusion of a reasoner named the OCCIWare runtime server in MoDMaCAO (Model-Driven Configuration Management of Cloud Applications with OCCI), which has the ability to interpret models and translate them into actions for provisioning infrastructure on OpenStack.
Certain CMLs are specifically designed to concentrate on delineating the deployment of services on virtual machines. An alternative approach offers developers the capability to craft tailored virtual machine images that incorporate pre-installed services. However, their conventional purpose primarily assists infrastructure owners in configuring virtual machines as hosts within cloud resources. It is worth noting that even these solutions incorporate reasoner logic to enable effective deployment management.
For example, Aeolus Blender [24] generates service deployment configuration using its Zephyrus component. This is achieved by parsing (i) a specification file that contains descriptions of components and their constraints, (ii) a universe file that represents available services and virtual machines in JSON (JavaScript Object Notation) format, and (iii) a configuration file that outlines system level data, such as the number of virtual machines and actions required for a service.
Another example, Wrangler [25], is a tool that facilitates the deployment of virtual machines on infrastructure clouds by allowing users to create an XML file that outlines the desired deployment configuration for virtual machines. The previously mentioned MiCADO orchestrator [17] can also utilize Occopus or Terraform, along with its MiCADO Submitter reasoner, to deploy virtual machines on different private or public cloud infrastructures using the provided descriptions.
Efforts are underway to develop a higher-level CML that combines the strengths of existing languages. An example of such an effort is GENTL [9]. It aims to create a versatile application topology language for describing cloud applications and making it easier to convert from languages like TOSCA or Blueprints into a single model. GENTL might use TOSCA or similar CMLs for the deployment phase. Additionally, it could include Blueprints [26] to help developers make abstract descriptions of cloud service offerings and deployment. It is important to note that a language like GENTL can handle deployment for various deployment units.
Researchers have made efforts to enhance existing modeling languages, aside from TOSCA, or even develop custom ones that incorporate additional descriptions relevant to the deployment phase. In their work, Zalila et al. [23] presented MoDMaCAO as an extension that improves the capabilities of the OCCI Platform. The OCCI Platform, which is itself an extension of OCCI, gains enhanced functionality through MoDMaCAO’s inclusion of additional lifecycle descriptions. This extension simplifies the process of deploying and managing applications and components on IaaS resources, particularly for platforms that adhere to the OCCI standard. The OCCI Platform provides entities that enable the clear description of application components and their relationships. The authors also established a connection between the OCCI Infrastructure and the OCCI Platform to ensure a comprehensive representation of both the application components and the underlying infrastructure. In the BEACON project, a HOT extension [27] is utilized to describe deployments distributed across the continuum through an orchestrator broker. Another example is the UML (Unified Modeling Language) extension proposed in [28], which can describe the deployment of virtual machines using the specification-level deployment diagram and the instance-level deployment diagram. Additionally, it extends the UML class diagram to describe services. Alfonso et al. [21] presented a domain-specific language designed specifically for container deployment in Fog environments. Petrovic et al. [29] recommended using a custom XML syntax to describe the orchestration of containers on Docker Swarm. This language needs to be translated by a reasoner to generate a Docker Swarm deployment plan.
Certain TOSCA extensions deviate from the conventional TOSCA semantics and introduce their own descriptive elements. One such extension is Tosker [16], which enables the depiction of containers using new custom entities. Another extension, TOSCA Light, introduced by Wurster et al. [30], defines custom entities by specifying the set of EDMM compliance rules for TOSCA-based models. The deployment phase in TOSCA Light can be described using the topology template, similar to traditional TOSCA. Furthermore, a set of TOSCA extensions was introduced by Tsagkaropoulos et al. [20] to describe applications at both the design time level and instance level. The suggested syntax achieves a new way, distinct from TOSCA, of expressing constraints, requirements, and criteria that are independent from the underlying hosting infrastructure (hybrid clouds, multi-clouds, edge-based platforms and FaaS-based applications).

3.3. Description of Runtime Adaptation

Besides facilitating the description of deployment processes, certain modeling languages also provide support for runtime adaptation, including self-healing and auto-scaling through event-driven lifecycle operations.
According to Lipton et al. [14], TOSCA allows the description of lifecycle operations through relationship and node types. Specifically, it is mentioned that users can include lifecycle operation scripts in the relationship part of a node type, which contain the necessary lifecycle operations. This approach allows for self-healing adaptation during runtime. Given that EDMM’s [7] syntax is inspired by TOSCA, it shares a comparable logical structure. EDMM features an operation entity that can execute procedures for managing the components or relationships defined in the deployment model. As a result, the operation entity has the potential to describe an executable file that contains a self-healing operation. The research conducted by Wurster et al. [30] introduced TOSCA Light as a new language that combines elements from both EDMM and TOSCA approaches. To enable the use of lifecycle operation scripts in TOSCA Light, the authors performed a mapping between EDMM operations and the interface section of TOSCA.
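In TOSCA Simple Profile YAML, such lifecycle operation scripts are attached through the Standard lifecycle interface of a node template; the script paths below are placeholders:

```yaml
node_templates:
  web_app:
    type: tosca.nodes.SoftwareComponent
    interfaces:
      Standard:                        # tosca.interfaces.node.lifecycle.Standard
        create: scripts/install.sh     # run when the component is created
        configure: scripts/configure.sh
        stop: scripts/graceful_stop.sh # can implement a self-healing restart step
```

An orchestrator invoking these operations at the appropriate lifecycle transitions is what enables the script-based adaptation described above.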
In Wrangler [25], developers are provided with a similar approach for self-healing, where they can add additional scripts to descriptions in order to perform operations on virtual machines dynamically during runtime. CAMEL [1,31] also supports the attached script feature that contains lifecycle operations to describe the execution flow. As it is suggested in CAMEL, the lifecycle operations are performed by the Executionware component, which reads models written in CAMEL in order to perform deployments, apply scalability rules and perform provisioning.
Conversely, the event-driven lifecycle operations are intricately linked to the syntax of workflows, which govern the conditions and steps to dynamically adjust an application at runtime. For instance, the TOSCA official documentation provides examples of workflows that contain conditional logic which depends on the status of components. If TOSCA is extended, it has the potential to incorporate additional custom metrics into the conditional logic. A case in point is the TOSCA extension proposed by Stefanic et al. [32], which enables the description of cloud-native application lifecycle management. This extension utilizes conditional logic that has the potential to support newly introduced metrics as well. On the other hand, TOSCA also has the capability to utilize established workflow languages such as BPMN (Business Process Model and Notation) by referencing BPMN files in the workflow section. BPMN is a specification used to model business process diagrams, which can also be translated into software process components. However, it appears that this syntax may not be very user-friendly when it comes to describing lifecycle operations.
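TOSCA’s declarative workflow syntax expresses this conditional logic through filter clauses on workflow steps. The sketch below, with illustrative workflow and node names, restarts a component when its state attribute reports an error:

```yaml
topology_template:
  workflows:
    heal_web_app:
      steps:
        restart_step:
          target: web_app
          filter:                          # condition guarding this step
            - assert:
                - state: [{ equal: error }]
          activities:                      # executed in order if the filter holds
            - call_operation: Standard.stop
            - call_operation: Standard.start
```

Extensions that introduce custom metrics would add further attributes usable inside such filter clauses.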
TOSCA policies offer a standardized approach to defining scaling, placement, availability, and security requirements in cloud deployments, enabling automated management and orchestration of complex applications. Through the inclusion of policies within TOSCA templates, cloud administrators and developers can ensure consistent behavior across diverse cloud platforms and environments, facilitating interoperability and streamlining application management. DesLauriers et al. [17] utilize policy descriptions for specifying scalability, monitoring, and other non-functional requirements. However, policies alone are insufficient for describing event-driven logic. Tsagkaropoulos et al. [20] propose a TOSCA extension that incorporates various policies to enhance application deployment. These policies include collocation, anti-affinity, precedence, device exclusion, and placement policies. The collocation policy enables grouping application components on the same cloud provider’s hosts, ensuring their colocated placement. The anti-affinity policy ensures the deployment of target components on different cloud providers for fault tolerance or compliance purposes. The precedence policy ensures sequential deployment of fragments in a specified order. Device exclusion policies optimize edge processing by marking unsuitable devices for deployment. Placement policies, modeled using fragment entities in the TOSCA extension, provide a clear visualization of important deployment constraints, offering advantages over native TOSCA relationships.
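For example, a scaling requirement can be attached to a topology through a TOSCA policy. `tosca.policies.Scaling` is a normative but abstract type, so the concrete properties shown here are assumptions standing in for what a derived, orchestrator-specific policy type would define:

```yaml
topology_template:
  policies:
    - app_scaling:
        type: tosca.policies.Scaling
        targets: [ web_app ]          # node templates the policy applies to
        properties:
          min_instances: 1            # assumed property of a derived policy type
          max_instances: 5
          cpu_threshold: 80           # assumed trigger metric, not normative TOSCA
```

As the text notes, such a policy declares a requirement but not the event-driven logic that enforces it; that logic lives in the orchestrator or in workflow descriptions.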
Furthermore, there are occurrences where certain extensions of the TOSCA standard are deficient in their ability to represent workflows. A case in point is TOSCA Light [30], which is incapable of depicting workflows due to its reliance on semantics from both TOSCA and EDMM [7]. Unfortunately, the EDMM framework lacks a dedicated semantic entity to describe the intricacies of workflows.
Workflows are a common concept employed in various works. For instance, Petrovic et al. [29] introduce a language that allows users to describe the execution flow as interconnected tasks with task dependencies. In their work [12], Petrovic et al. discuss that SMADA-Fog incorporates an additional model, apart from the deployment model, to describe runtime adaptation. This model represents runtime adaptation as dynamic behavioral events associated with deployed applications. The authors emphasize that within the SMADA-Fog framework, runtime adaptation can be performed in response to changes in the execution environment, such as variations in traffic load, alterations in QoS parameters, and the number of connected users. Similarly, Alfonso et al. propose a language in [21] where workflows are used to depict runtime adaptation based on event-driven logic triggered by QoS conditions. Section 3.9 provides further insight into how workflows and event-driven logic are associated with the syntax of QoS and QoE conditions in CMLs. High-level languages like GENTL [9] have the potential to combine the capabilities of multiple existing languages to describe runtime adaptation. For instance, GENTL could leverage both TOSCA and Blueprints for describing complex system adaptations at runtime. TOSCA is well suited for describing workflows and conditional logic, while Blueprints offer the ability to describe QoS characteristics and provision resources. This makes GENTL valuable for cloud platforms that require runtime adaptation of applications.

3.4. Description of Application Components

CMLs strive to offer a thorough depiction of application components, encompassing their requirements, characteristics, and properties. In this section, the portrayal of application components in various CMLs is analyzed, and the properties used to describe them are examined.
TOSCA includes the tosca.nodes.SoftwareComponent node type, which is used to represent application components. This node type serves as a generic software component that can be deployed on a host, which is described by the tosca.nodes.Compute node type. In a similar manner, GENTL [9] provides a description of the Component type, encompassing details such as name, unique ID (Identity), relationships with other components, and other attributes. MoDMaCAO [23] leverages the component terminology, extending OCCI Platform’s Component entity, to achieve a comprehensive depiction of application components. Moreover, the Application entity is employed to describe the complete application as a whole. The combined use of these entities within the MoDMaCAO framework enables precise and detailed representations of both individual components and the overall application.
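The TOSCA pairing of a software component with its compute host takes the following shape in Simple Profile YAML; the property values are placeholders:

```yaml
topology_template:
  node_templates:
    web_app:
      type: tosca.nodes.SoftwareComponent
      requirements:
        - host: app_host            # hosted on the Compute node below
    app_host:
      type: tosca.nodes.Compute
      capabilities:
        host:
          properties:
            num_cpus: 2             # placeholder sizing for the host
            mem_size: 4 GB
```

The `host` requirement/capability pair is what formally links the component to the node that hosts it, mirroring the component-relationship structure GENTL and MoDMaCAO express with their own entities.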
Some CMLs have the capability to categorize components and provide specific descriptions based on their category type. For instance, in the UML extension proposed in [28], developers can model components as generic software and use component diagrams to describe Platform-as-a-Service (PaaS) components such as DBMS (Database Management System), OS, runtime, middleware, load balancing, and web servers. Similarly, Uni4Cloud [13] utilizes the OVF to model components and provide details about the type of application component, such as application server, database, etc. In the documentation of TOSCA, we also found some predefined types like Database or DBMS and other application component types. Within EDMM [7], the component entity denotes a unit of an application that can be either logical, functional, or physical and is an instance of the component type entity. These component types are reusable and provide semantics for components that can be employed in diverse deployment models. Since each component can have a distinct component type, specifying the necessary semantics and actions for installation or termination, components can represent unique functionalities for applications, while component types can be repurposed in various deployment models.
In other CMLs, definitions of application components are included in a model that describes constraints. For instance, in Aeolus Blender [24,33], component specification is utilized by Zephyrus as one of the specifications to generate the deployment model, which encompasses descriptions of application components along with their associated constraints. SYBL (Simple Yet Beautiful Language) [34] enables users to describe components and set upper and lower limits associated with their costs. Each of the components has a unique ID to identify them. As previously mentioned in Section 3.2, it was highlighted that not all CMLs portray application components as platform-agnostic deployment units. Within these languages, the semantic representation of an application component is frequently tied to the target cloud service layer and the language’s scope. The most common depictions of application components are services and containers. For example, in HOT [10], application components are utilized to express the abstract representation of cloud services and resources, signifying deployment artifacts. Similarly, in CAMEL [1], developers are afforded the capability to represent application components as services.
Various CMLs can describe container orchestration and represent application components as containers. While these languages share the common objective of describing containers, there may be slight differences in their semantics. For instance, in their research, Petrovic et al. [29] employ the term Tasks to denote containerized application components. The same terminology is utilized in their subsequent research, where they propose the SMADA-Fog framework [12] which presents a deployment model capable of describing the tasks that require deployment on devices. In their proposed TOSCA extension, Tsagkaropoulos et al. [20] utilize the term fragments to refer to application components. In contrast, researchers Alfonso et al. [21,22] utilize the term Application to denote a container-associated application. Within the Application entity, developers can specify the application’s name, port, the Kubernetes port to be utilized, the image repository associated with it, and the minimum memory and CPU requirements for each application component. Meanwhile, developers can define the CPU and memory usage limits within the Container entity.
In the Velo DSL [11], application components are represented as containers that need to be deployed on hosts. The container entity within the Velo DSL encompasses various properties such as the Docker image, labels, resources, network ports, protocols, and a logical volume for data access and persistence. While it is common for languages to have the resource property under a host entity, as described in Section 3.6, some languages without host entities describe resources within the application component entities. Furthermore, the Velo DSL introduces the virtual bag entity, which defines a cohesive unit capable of containing one or more containers. This concept shares similarities with the concept of a pod in Kubernetes. Additionally, the Velo DSL distinguishes between two types of virtual bags: svbag, which can accommodate a single container, and mvbag, designed to host multiple containers. The Velo DSL also provides support for defining properties in the metadata of virtual bags. These metadata include the identifier of the virtual bag, the number of instances, the network endpoints accessible to them, the users with authentication privileges, and the version number. It is worth noting that virtual bags serve as the parent entity of containers within the Velo DSL.
TOSCA includes the tosca.nodes.SoftwareComponent node type for generic application component representation and the tosca.nodes.Container.Application.Docker node type for Docker container descriptions. Despite TOSCA already providing a means to describe containers, researchers continue to explore ways to simplify the syntax for container descriptions. For instance, Stefanic et al. [32] extend TOSCA in order to establish a specification standard for container orchestration. This TOSCA extension aims to introduce a more dynamic and expressive description of containers compared to the original TOSCA, incorporating QoS and QoE attributes, as well as relationships, dependencies, and time constraints. Similarly, Solayman et al. [18] extend Cloudify’s TOSCA to include syntax capable of describing not only container entities but also entities that describe the programming languages and libraries required by a container to run. This approach splits the container’s software requirements into three different entities, making it useful for container orchestration in serverless platforms where vessel containers need to be configured at runtime.
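In TOSCA Simple Profile YAML, a Docker-packaged component is modeled with the container node type and a container-image deployment artifact, along the lines of the specification's Docker example; the image, repository, and host names are placeholders:

```yaml
node_templates:
  my_service:
    type: tosca.nodes.Container.Application.Docker
    requirements:
      - host: docker_runtime        # a node offering a Docker container runtime
    artifacts:
      service_image:
        file: nginx:latest           # placeholder image reference
        type: tosca.artifacts.Deployment.Image.Container.Docker
        repository: docker_hub       # repository declared elsewhere in the template
    interfaces:
      Standard:
        create:
          implementation: service_image   # deploy by pulling the image artifact
```

The extensions discussed above essentially enrich this baseline with QoS/QoE attributes, time constraints, or finer-grained software requirements.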
On the contrary, there are several modeling languages that could be utilized for the representation of application components as pre-installed services on virtual machines. TOSCA, for instance, has the capability to describe application components as virtual machines by extending tosca.nodes.SoftwareComponent. HOT [10] can also describe the deployment of virtual machines on available resources for a declared cloud provider. Uni4Cloud [13], which utilizes OVF, is capable of modeling and deploying virtual machines with pre-installed services as well.
Faisal et al. [28] introduce a UML extension that models IaaS, PaaS, and SaaS layers. In the PaaS layer, virtual machines are described in the instance level deployment diagram, as they are instances of physical devices. In the SaaS layer, the UML extension provides a service diagram that includes three parts of description: the name of the service, characteristics and properties, and operation and functionalities of the service.
MiCADO [17] utilizes TOSCA’s advanced general description for application components. As mentioned earlier, besides supporting container deployment, MiCADO also provides support for deploying virtual machines. This deployment method assumes that the associated virtual machine image already includes the required libraries and binaries, and that the application is in a ready state. This CML includes two distinct entities for describing application components: one for components within containers and another for components within virtual machines.
CMLs have the ability to describe application components and often include properties that are tailored to their target and scope, allowing for the specification of requirements and capabilities of each component. Given that most edge-cloud applications are network- or web-based, CMLs need to provide mechanisms to describe the required ports for these components.
TOSCA provides the capability to define the ports that a component uses by either extending the tosca.nodes.SoftwareComponent type to include port descriptions or by utilizing predefined node types such as Database or DBMS types that already include port information. In Aeolus Blender [33], users define the ports used with component_type inside the universe file. Port names are distinguished from components or packages using a simple syntactic convention: ports start with @.
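As a sketch of the first approach, a custom node type derived from tosca.nodes.SoftwareComponent can expose a port property; the type name and default value below are illustrative:

```yaml
node_types:
  example.nodes.WebService:                   # hypothetical custom type
    derived_from: tosca.nodes.SoftwareComponent
    properties:
      port:
        type: integer
        default: 8080                         # illustrative default port
        description: Listening port exposed by the component
```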
HOT also provides a means to describe ports for resources in the network section. Petrovic et al. [29] provide the ability to describe both internal and external container ports. The aforementioned observation also holds true for the SMADA-Fog deployment model, as presented in the research conducted by Petrovic et al. [12]. In CAMEL [1], users can describe port binding within a deployment package. With the internal type, users can define components along with their respective ports. Additionally, CAMEL allows for the modeling of hosts and their ports by instantiating vm_type as a requirement.

3.5. Relationships, Dependencies and Environment Values

Application components interact with each other to form a cohesive application. As a result, modeling languages need to be capable of representing relationships, dependencies, and environmental values. There are two main types of relationships:
  • host relationship that binds application components with hosts,
  • connection relationship that connects application components.
TOSCA encompasses the host relationship, which is utilized to describe the hosting of a tosca.nodes.SoftwareComponent instance on a tosca.nodes.Compute node instance. Additionally, TOSCA supports the ConnectsTo type of relationship, which models a connection between two components. Despite proposing a TOSCA extension, DesLauriers et al. [17] demonstrate that the language can utilize the host relationship to specify the deployment of a container on a virtual machine. In their work, Tsagkaropoulos et al. [20] introduce a TOSCA extension that expands the host relationship and introduces new relationships, specifically prestocloud.relationships.executedBy.faas and prestocloud.relationships.executedBy.loadBalanced, which establish connections between FaaS fragments and FaaS agents, as well as load-balanced fragments and their associated virtual machines (VMs). FaaS fragments are deployed on agents represented by the prestocloud.nodes.agent.faas type, and the prestocloud.relationships.executedBy.faas relationship indicates which FaaS agents execute which FaaS fragments. Similarly, agents modeled by prestocloud.nodes.agent.loadBalanced host load-balanced fragments represented by prestocloud.nodes.fragment.loadBalanced, and the prestocloud.relationships.executedBy.loadBalanced relationship ensures proper matching between load-balanced fragments and their corresponding agents. In research by Solayman et al. [18], the host and ConnectsTo relationships provided by the basic syntax of TOSCA are used, as well as the contained_in relationship introduced with their presented extension. The contained_in relationship expresses that a programming language or a library should be installed in a target container.
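In standard TOSCA syntax, the two basic relationship types are expressed as requirements of a node template; a minimal sketch, with illustrative node names, could read:

```yaml
node_templates:
  frontend:
    type: tosca.nodes.SoftwareComponent
    requirements:
      - host: app_server                      # HostedOn: component runs on the Compute node
      - dependency:
          node: backend
          relationship: tosca.relationships.ConnectsTo   # connection between two components
  backend:
    type: tosca.nodes.SoftwareComponent
    requirements:
      - host: app_server
  app_server:
    type: tosca.nodes.Compute
```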
The EDMM [7] syntax supports the relation entity to illustrate the connections between components. This entity is an instance of the relation type entity, and each connection is derived from a distinct relation type. A relation type is a reusable entity that specifies the semantics of a connection when this type is assigned to it. EDMM offers several examples of relation types, including the network connection relation that enables two components to communicate and the hosted on relation type, which denotes that a component must be installed on a server.
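In the YAML serialization used by EDMM, such relations are listed per component; a minimal sketch, with illustrative component and type names, might be:

```yaml
components:
  shop_app:
    type: web_application
    relations:
      - connects_to: shop_db                  # network connection relation type
      - hosted_on: app_server                 # hosted on relation type
  shop_db:
    type: database
    relations:
      - hosted_on: db_server
```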
TOSCA Light [30] integrates semantic rules from both TOSCA and EDMM. It is capable of supporting the description of the HostedOn relationship and the ConnectTo relationship, which are common to both languages. Developers have the ability to create custom relationships by extending these existing relationships.
Other CMLs may use different terminology and methodology to describe relationships. The OCCI Platform utilizes links between entities for this purpose. MoDMaCAO [23] further enhances the functionality of the OCCI Platform by introducing a new type of link called PlacementLink. This link serves to establish a connection between a Component entity from the OCCI Platform extension and a Compute entity from the OCCI Infrastructure extension, representing the deployment of an application component onto a virtual machine. Additionally, MoDMaCAO expands the relationship capabilities through the ComponentLink, which supports an extension for Containment annotations. This extension enables the depiction of an application’s hierarchical structure, signifying that an application encompasses one or more components.

The Uni4Cloud [13] modeling language facilitates the description of communication requirements between application components, which are represented as virtual machines, in order to compose the overall application. In GENTL [9], relationships between components are represented using a Connector that captures the association between a source component and a target component. GENTL allows for the definition of attributes associated with the Connector, such as deployed on, which help specify the type of the Connector as there may be multiple classes of Connector.

In CAMEL [1], when users describe application component instances, they define a communication instance in the requirement section. The communication instance enables users to describe the communication between components or binding between components and hosts. The host relationship instance binds a component instance with a virtual machine instance, and the connect relationship instance describes the communication between components. In HOT, templates are used to specify the relationships between resources, such as describing the connections between a volume and a server.
There are models like FRACTAL4EDGE [35] that are designed for edge-cloud computing environments and focus on infrastructure-related aspects. FRACTAL4EDGE supports a single type of relationship that facilitates the connection of physical devices to each other for sharing results within the edge-cloud computing environment. Similarly, the UML extension proposed in [28] employs the specification level deployment diagram, in which developers can describe how physical nodes interact with each other in order to provide a desired result.
The deployment model of SMADA-Fog [12] introduces types of relationships between physical devices not previously discussed in this paper. The masterServer relationship signifies that a host has the role of a master and is responsible for managing other servers. Additionally, within the specific network device type of SMADA-Fog, there is a relationship called managesNF that describes the SDN controller’s ability to enforce traffic shaping rules within the software-defined network for the devices under its control, including both physical devices and virtual network functions. The deployment model of SMADA-Fog incorporates the connectedTo relationship to depict the communication between network devices and other device types. The same model of SMADA-Fog enables the utilization of the hasLocation relationship, linking a host to its execution environment (Edge-Cloud), and the hasEnvironment relationship, which links a Task to its designated execution environment (Edge-Cloud) for running.
Alfonso et al. in [21,22] present a novel approach in their Self-adaptive IOT DSL for communication representation. The language relies on the MQTT (Message Queuing Telemetry Transport) messaging protocol for communication among IoT devices and other nodes. This allows IoT devices, acting as publishers or subscribers, to engage with specific topics on an MQTT broker through the Topic relationship. Additionally, the DSL allows for the modeling of gateways for IoTDevices through the gateway relationship, establishing connections with EdgeNode concepts. This facilitates communication between sensors and various nodes, including the MQTT broker node. The research also introduces the concept of clusters, employing the master relationship to indicate the master node and the worker relationship to represent workers. The communication between nodes is depicted with the linkedNodes relationship. Furthermore, the proposed DSL enables a container composition relationship, describing the deployment of a container to a host node.
The utilization of graphical user interface (GUI) tools can significantly simplify the process of expressing relationships between components and resources. FRACTAL4EDGE [35], MoDMaCAO [23], the UML extension [28] and Uni4Cloud [13] serve as notable examples of modeling tools that employ GUIs for this purpose. These tools enable developers to visually represent and depict the relationships between components through the utilization of graphical elements such as arrows. By leveraging these GUI-based tools, developers can enhance their ability to accurately and efficiently express the intricate connections between various components and resources.
Furthermore, apart from the relationships between components, there are also dependencies among them. If a model indicates that one component depends on another, the cloud platform should prioritize the deployment of the component that has no dependencies.
In TOSCA, dependencies between components can be found under the requirements section. This means that TOSCA models dependencies in the same manner as relationships. Similarly, the EDMM syntax [7] includes a dependency relation that indicates which component is dependent on another component. TOSCA Light [30], which combines the semantic rules of both above-mentioned languages, provides support for a DependsOn relationship that allows the expression of dependencies between components. Moreover, developers can extend this relationship to create custom relationships.
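In TOSCA, such a dependency is declared as an ordinary requirement resolved by the DependsOn relationship; a minimal sketch, with illustrative node names:

```yaml
node_templates:
  frontend:
    type: tosca.nodes.SoftwareComponent
    requirements:
      - dependency:
          node: backend_db
          relationship: tosca.relationships.DependsOn   # backend_db is deployed first
  backend_db:
    type: tosca.nodes.SoftwareComponent
```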
The SMADA-Fog [12] deployment model includes a relationship type called ‘dependsOn’ between application components. This relationship indicates that a component can depend on another, enabling the output of one task to serve as the input for another, thus forming a service chain.
In a similar approach, MoDMaCAO [23] incorporates the Dependency link, derived from the ComponentLink, to effectively depict dependencies between components. ComponentLink is constructed with specific grammatical rules that facilitate the establishment of connections between dependent Component entities. The authors emphasize the flexibility provided to developers in extending the functionality of the Dependency link, enabling the expression of custom dependencies associated with installation, execution, and other relevant aspects. Installation dependency signifies that the deployment of the target component only proceeds if the source component is successfully deployed beforehand. On the other hand, execution dependency indicates that the source component can only be initiated when the target component is already in the active/running state.
In HOT, developers can specify dependencies between resources, such as one virtual machine instance depending on another. It is worth noting that TOSCA and HOT depict dependencies differently from each other, as they have different scopes of usage. Similarly, CAMEL [1] also offers the capability to define dependencies in the deployment description.
Describing dependencies is crucial in CMLs as they involve properties that ensure proper functionality, such as a frontend application requiring the IP address of a backend database. Hence, it is vital for CMLs to offer the capability to describe inputs or environmental variables that can capture and manage these dependencies.
TOSCA incorporates the input section, which permits users to tailor deployments by providing input parameters rather than hardcoded values in a template. This feature is enabled for every node type, allowing for adaptable and dynamic configurations in cloud deployments. Similarly, EDMM [7] can depict inputs in a more abstract manner and utilize them when mapping to a technology- or platform-specific deployment file. While TOSCA Light [30] may have additional limitations, it is capable of incorporating input descriptions similar to those found in both TOSCA and EDMM.
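A TOSCA input is declared once in the topology template and referenced through the get_input function; a minimal sketch, with illustrative input and node names:

```yaml
topology_template:
  inputs:
    db_password:
      type: string
      description: Supplied at deployment time instead of being hardcoded
  node_templates:
    database:
      type: tosca.nodes.Database
      properties:
        password: { get_input: db_password }  # resolved from the inputs section
```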
MiCADO [17] introduces a TOSCA extension, adding the environment property to the custom entity for container description. This property is an array that can hold multiple environment values, following the format utilized by Kubernetes and Docker Compose for handling environment values. In their work, Tsagkaropoulos et al. [20] present a TOSCA extension that enables developers to indicate environmental variables in either the docker_cloud or docker_edge sections of the fragment entity. The inclusion of these two sections is necessary because an application component may require a more lightweight execution, utilizing fewer resources, when deployed at the edge compared to the cloud.
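Following the Kubernetes-style list of name/value pairs that these extensions adopt, the environment property of a container entity could be sketched as follows; the entity, image, and variable names are illustrative:

```yaml
app_container:
  properties:
    image: myorg/app:1.0                      # illustrative container image
    environment:                              # Kubernetes-style name/value pairs
      - name: DB_HOST
        value: backend-db
      - name: DB_PORT
        value: "5432"
```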
In the study by Petrovic et al. [29], developers can define inputs for containerized tasks. These inputs can be specified by the developer or derived from the dependencies of the task. Furthermore, a more detailed description of environment variables is provided in the work by Stefanic et al. [32], which extends TOSCA to include environment variables such as monitoring adapter, Redis host, Redis port, default SIP, API password, API username, and API URL, allowing for richer and more customizable cloud modeling.

3.6. Description of Hosts

In the orchestration’s matchmaking process, application components require mapping to appropriate hosts. This mapping is depicted through relationships in modeling languages, defining associations between components and hosts. Some modeling languages treat hosts and application components as distinct entities, aiding orchestrators in efficiently matching components with hosts. In edge-cloud computing environments, application components have the flexibility to be hosted on either physical devices or virtual machines. Petrovic et al. [29] introduce a modeling language that includes a property enabling developers to describe both virtual machines and physical devices in cloud and edge environments. In their subsequent research, the SMADA-Fog [12] deployment model outlines three types of devices prevalent in fog environments: consumer devices, servers, and network devices. Servers represent the computing infrastructure capable of hosting applications. Further details about consumer devices are provided in Section 3.11, which delves into their representation in recent CMLs. Additionally, the network device type describes physical devices at the network layer, responsible for implementing networking protocols and executing specific network functions to facilitate communication. Examples include routers, switches, firewalls, load balancers, intrusion detection systems, and parental control devices. All device types support descriptions including IP address, MAC address, number of cores, memory and storage capacity, and processor architecture. The SMADA-Fog deployment model is not the sole one to describe network devices. In a similar manner, Zalila et al. present a router description using MoDMaCAO [23].
Similarly, Alfonso et al. [21,22] present in their fog DSL an entity called Node. This entity supports sub-entities capable of describing nodes in edge, fog, and cloud environments. Sub-entities support the description of CPU cores, processor architecture, hostname, IP address, OS, memory, and storage. However, TOSCA only specifies the hardware and operating system requirements of physical devices or virtual machines using the tosca.nodes.Compute node type, without providing information on whether they are located at the edge or in the cloud. One usage example is the TOSCA-based language employed by MiCADO [17], which enables the description of virtual machines provided by the OpenStack, Azure, and EC2 cloud providers. MiCADO’s CML employs separate entities to describe virtual machines belonging to different supported cloud providers. Each entity has its own distinct set of properties, as there are variations in the semantics used by the respective cloud providers. Tsagkaropoulos et al. introduce a processing node entity in their proposed TOSCA extension [20]. This entity can describe hardware and OS capabilities, and includes a resource property indicating whether the processing node belongs to the cloud or the edge. Occasionally, languages may support the same properties in different entities due to variations in their scope. For instance, Tsagkaropoulos et al. [20] illustrate sensors as a list of properties within a processing edge node entity. Conversely, Petrovic et al. in the SMADA-Fog Framework [12] represent sensors as properties of IoT devices within the entity labeled end devices (see Section 3.11). The primary reason for this distinction arises from the classification of IoT devices into two main categories: edge devices and user devices.
Another noteworthy aspect is the perspective of considering both virtual machines and physical devices as hosts within the same cluster. MoDMaCAO [23] demonstrates this capability by enabling the description of physical devices and virtual machines that can serve as hosts for applications. Furthermore, it allows the representation of how multiple servers can be combined to form a cluster of hosts.
In contrast, some CMLs lack the ability to clearly differentiate between application components and underlying resources, which can be attributed to the constrained scope of these languages. For instance, in [35], a CML is described that exclusively focuses on describing physical devices, edge nodes, smartwatches, edge devices, and end devices, assuming that an application component is running on top of them. Unlike other modeling languages, the EDMM syntax, as introduced by Wurster et al. [7], does not feature a separate entity for describing hosts or resources. Rather, it utilizes the extensibility of the component entity to describe hosts and their resources.
Additionally, some CMLs are limited in their ability to describe only a single type of host. For instance, Aeolus Blender [24,33] utilizes a universe file that provides details about available virtual machine hosts for deploying application components. This file contains essential information related to virtual machine resources, such as memory, package repository, existing services, and packages. Similarly, GENTL [9] describes virtual machines as hosts for application components.
However, there are CMLs, such as Aeolus Blender [33], Wrangler [25], HOT, CAMEL [1] and Uni4Cloud [13], that are not specifically designed to describe all types of physical devices that can host applications. To address this limitation, the authors in [29] propose a way to model both physical and virtual devices, based on ARM and x86 architectures.
In some cases, physical devices are depicted as vessels for hosting virtual machines that in turn host application components. For example, in [28], the Unified Modeling Language (UML) is extended to include a specification level deployment diagram that conceptually represents virtual machines as instances of physical machines, implying that a physical machine could instantiate one or more virtual machines. These virtual machines can then serve as hosts in cloud platforms for orchestrating application components.
The same applies to HOT, which can describe virtual machines as deployment units, allowing application developers to deploy multiple virtual machine images. The target cloud platform then attempts to identify suitable physical devices for deploying one or more virtual machines, enabling the execution of these virtual machines. Developers have the option to either instantiate a virtual machine using HOT that would serve as a host in OpenStack or pre-install their application components in various virtual machine images and utilize them as vessels.
As mentioned before, certain CMLs are designed to be used across multiple cloud providers, and therefore need to be able to describe the desired host for an application while specifying the associated cloud provider. For example, in CAMEL [1], users can define a cloud provider along with the desired virtual machine resources, such as hardware, OS, and location, for an application component. The use of the provider package in CAMEL allows for the description of virtual machines offered by a cloud provider. Developers can instantiate one or more objects of the virtual machine type within this package to specify the flavor, OS, RAM size, storage size, and number of CPU cores for the desired virtual machines.
Furthermore, the provider package of CAMEL supports additional features such as indicating whether the target cloud should be private or public, specifying the type of service provided by the cloud (IaaS/SaaS/PaaS), and defining the API endpoint of the cloud provider. In CAMEL, developers specify virtual machine requirements by defining a virtual machine object in the deployment package, with actual values detailed in the requirements package. This allows developers to determine if a component needs a virtual machine with specific resources, such as CPU cores and storage, OS, and location.
Understanding how CAMEL describes hosts suggests that hosts are characterized by their resources, a principle also applicable to other CMLs. TOSCA provides the tosca.nodes.Compute node type, which allows developers to describe CPU cores, memory, and disk space requirements for a component to be executed. Additionally, this node type enables specifying the expected operating system and architecture for the component on the node.
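In standard TOSCA, these requirements are expressed as properties of the host and os capabilities of a tosca.nodes.Compute node; a minimal sketch with illustrative values:

```yaml
node_templates:
  app_server:
    type: tosca.nodes.Compute
    capabilities:
      host:
        properties:
          num_cpus: 4                         # illustrative resource sizing
          mem_size: 8 GB
          disk_size: 40 GB
      os:
        properties:
          type: linux
          distribution: ubuntu
          architecture: x86_64
```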
Despite being a lightweight extension of TOSCA with limited semantic support, TOSCA Light [30] is still capable of providing the same level of support in describing a compute node with resources, as mentioned in the preceding paragraph.
Likewise, in Wrangler [25], users can specify the necessary hardware resources for virtual machine deployment in a deployment specification, which is a straightforward XML file. Each XML file can contain multiple nodes representing virtual machines. In GENTL [9], the syntax describes operating system and hardware requirements in the Component element. Uni4Cloud [13] utilizes OVF for describing virtual machines and their capacity along with hardware resources. OVF employs an XML descriptor file enabling modeling of CPU, disk, RAM, and network configurations. The VirtualHardwareSection element within OVF is utilized for defining resource requirements for virtual machines.
Resource requirements can also be calculated and generated by reasoners. In the Aeolus Blender framework [33], the internal component called Zephyrus is responsible for this task. Zephyrus takes input in the form of a JSON file that contains available service types and a description that maps each service with the required packages. The output generated by Zephyrus includes components along with their architectural and hardware requirements. This output is in the form of a deployment plan that contains resource information such as CPU, storage, and memory, and is then parsed by the Armonic component to generate appropriately sized virtual machines, also known as flavors. Solayman et al. [18] describe flavors as instance types in the Cloudify node cloudify.nodes.aws.ec2.Instances, which also defines the resource capacity of the virtual machine.
The concept of flavors is also present in HOT, which allows for the description of virtual machine properties such as OS image, memory, and number of vCPUs. This property can be utilized by cloud platforms during the orchestration phase to filter out hosts that do not meet the desired description, thereby enabling more fine-grained control over the resource allocation process. The TOSCA extension used by MiCADO [17] enables flavor description within a custom entity representing OpenStack virtual machines and their resources. It includes a property named flavor_name for developers to specify the resource size of their virtual machine. Similarly, the entity for EC2 resources includes the instance_type property, which describes the resource size for EC2 instances.
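In HOT, the flavor is referenced directly from an OS::Nova::Server resource; a minimal sketch, where the flavor, image, and network names are illustrative:

```yaml
heat_template_version: 2018-08-31
resources:
  app_vm:
    type: OS::Nova::Server
    properties:
      flavor: m1.small                        # illustrative flavor (resource size)
      image: ubuntu-22.04                     # illustrative VM image
      networks:
        - network: private-net                # illustrative network
```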

Hosts in Different Geolocations

Geolocation can be used as a deployment criterion in certain CMLs, as hosts may be distributed across different regions. This feature proves advantageous in orchestrating application components. For instance, Villari et al. [27] propose an extension of HOT that enables the deployment of services based on location constraints and location-aware elasticity rules.
Similarly, CAMEL [1] includes a location package that allows users to specify locations as geographical regions or cloud locations. This option allows for the description of parent regions, such as continents, sub-continents, and countries.
In the model utilized by Aeolus Blender [33], there is the option to describe the locations of available hosts/virtual machines in a string format. Aeolus also includes a model that describes package repositories along with their geolocation information.
Cloud platforms can leverage this feature to facilitate the deployment of location-based applications. Uni4Cloud [13], for example, utilizes location information for internationalization purposes, enabling the deployment of location-based applications with specific language packages.
By extending the tosca.nodes.Compute node type, TOSCA can be enhanced to support geolocation criteria. Solayman et al. [18] demonstrate an example of such an extension in their work, where a virtual machine on AWS is described, and the geolocation is represented by the availability zone. Similarly, DesLauriers et al. [17] introduce an extension of tosca.nodes.Compute in their research, specifically for EC2 resources, incorporating the region_name property to denote the region of the corresponding virtual machine. In their study [20], Tsagkaropoulos et al. devise a TOSCA extension for an instance level model, which introduces several processing node entities to describe distinct resources linked to various cloud providers. These entities incorporate a cloud section that allows for describing the type of cloud provider and facilitates specifying the cloud region.
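Such extensions typically attach the region as a property of a custom Compute subtype; the following sketch is purely illustrative and does not reproduce any of the cited extensions verbatim:

```yaml
node_types:
  example.nodes.Compute.EC2:                  # hypothetical extension type
    derived_from: tosca.nodes.Compute
    properties:
      region_name:
        type: string
node_templates:
  worker_vm:
    type: example.nodes.Compute.EC2
    properties:
      region_name: eu-west-1                  # illustrative AWS region
```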
Alfonso et al. in [21] employ region description techniques for compute nodes integrated within edge, fog, or cloud environments. In this context, the region description could specify a particular building, rooms of that building and other workspaces. Similarly, the research addresses geolocation by utilizing latitude and longitude parameters in IoT device descriptions.
Further research is needed to identify the optimal method for expressing geolocation criteria, whether using latitude and longitude coordinates or specifying continent, country, region, and city names, or the availability zone. The selection of values would rely on the orchestration capabilities and architecture requirements of the cloud platform. In order to use geolocation parameters, a platform would need to have a monitoring mechanism in place to expose geolocation information, along with an intelligent reasoner to parse the geolocation data from the model and match it with the monitoring information.

3.7. Deploy Components on Edge and Cloud

Certain CMLs allow for specifying the location of hosts, whether in the cloud or at the edge. Other researchers have proposed languages enabling users to specify whether an application component should be deployed in an edge or cloud computing environment using CMLs. Viewed semantically, these functions express similar concepts, and their integration would improve the process of matching components with hosts.
Languages designed for describing deployments of IoT application components across different environments appear to provide robust support for such functionalities. The research discussed in [36] centers around the utilization of the ADOxx Definition Language (Architecture, Design, and Organization) to determine the most suitable placement of IoT application components in fog and cloud environments. Another language, as described in [18], offers seamless deployment of IoT application components across both cloud and edge environments. Additionally, the research conducted by Alfonso et al. [21] focuses on describing container deployments across edge, fog, and cloud nodes. In the research conducted by Petrovic et al. [29], a method for automating container deployment in edge and cloud resources is introduced. This approach involves using a defined XML syntax that allows users to specify, in the execution environment element, whether a container should be deployed on the cloud or the edge. Unlike other CMLs that describe the placement issue from the components’ perspective, the SMADA-Fog deployment model [12] has the capability to specify the location of a host, whether it is on the cloud side or the edge side. In this model, the Task entity encompasses a property indicating whether the component should be deployed on an edge or a cloud host. It subsequently matches this requirement with a host characterized as either a cloud or edge-side resource within the model.
Other researchers advocate for the use of graphical user interface (GUI) tools to facilitate the description of application placement. The research presented in [35] introduces a GUI tool enabling users to design models depicting communication among various physical devices, including smartwatches, sensors, edge nodes, and cloud resources. The goal is to develop a tool that models communication between physical devices and virtual machines hosting application components, aiming to deploy systems spanning the edge-cloud continuum. An example use case discussed in the research is a Patient Monitoring System, where sensors gather vital signs from patients, sending data to the nearest edge node for preprocessing, and then forwarding processed data to intermediate components hosted on cloud resources. Another notable example is the aforementioned research that employs the ADOxx Definition Language from [36]. This approach allows users to create GUI modeling tools using a meta-model similar to UML.

3.8. Repositories and Images

Modern CMLs aim to delineate the deployment of application components in both cloud and edge computing environments. Additionally, they should facilitate the declaration of an image repository, specifying the source from which cloud platforms retrieve images to generate the application components.
In TOSCA, images are defined in the artifacts section, allowing developers to specify the name, type, and image repository for each image. TOSCA also supports the description of repositories, including external repositories that may contain plain code, virtual machine images, and/or Docker images. This feature can be particularly helpful during the orchestration phase, where an orchestrator, with the assistance of a reasoner, can read the TOSCA specification and pull the necessary code or image from the specified repositories as needed. Wurster et al. [7] explain that the artifact entity is part of the EDMM syntax. Its role is to fetch a Docker image either from a designated repository or to construct it from a Dockerfile.
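For illustration, a minimal TOSCA (Simple Profile in YAML) sketch of the repositories section together with an artifact referencing a Docker image might look as follows; all repository, node, and image names are hypothetical:

```yaml
tosca_definitions_version: tosca_simple_yaml_1_3

repositories:
  my_registry:                        # hypothetical external Docker registry
    url: https://registry.example.com

topology_template:
  node_templates:
    web_app:
      type: tosca.nodes.Container.Application
      artifacts:
        app_image:
          file: example/web-app:1.0   # image name and tag
          type: tosca.artifacts.Deployment.Image.Container.Docker
          repository: my_registry     # where the orchestrator pulls the image from
```

An orchestrator reading this model would resolve the repository reference and pull example/web-app:1.0 from the named registry before instantiating the component.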
In their work [21], Alfonso et al. discuss the inclusion of the related Docker image repository within the Application entity. This approach is reasonable in terms of semantics, as deploying an application component as a container typically requires the orchestrator to retrieve the corresponding Docker image from a repository. The same approach seems to be applied by Velo DSL [11], where the Docker image is described by the image property in the container entity. Within the SMADA-Fog’s deployment model introduced by Petrovic et al. [12], there is also a provision for describing the repository and image name within the Task entity, which represents a containerized application component. In their work, Tsagkaropoulos et al. [20] present the inclusion of two sections, namely docker_cloud and docker_edge, within the fragment entity. These sections allow for the description of images and image repositories that a platform can utilize to deploy an application component either on the cloud or at the edge.
Uni4Cloud [13] uses extra files that include details about ISO and disk images for virtual machines, along with information about their image repositories. These files are used in combination with the OVF description for deployment. Typically, the description of image repositories is placed within the Reference tag, while virtual disks are usually located under the Disk tag.
There are languages available that can describe images for both containers and virtual machines. DesLauriers et al. [17] propose a TOSCA extension that introduces a custom entity specifically for describing containers, which incorporates the image property. This property requires the Docker image’s name, version, and a repository prefix indicating the registry it belongs to, following the conventions established by TOSCA for repository descriptions. Additionally, the CML supports entities for describing virtual machines from various cloud providers. These entities include properties such as image_id for specifying the virtual machine image, along with instance_type or flavor_name. It is worth noting that the instance_type and flavor_name properties refer to the size of the running virtual machine instance and are used within different entities corresponding to specific cloud providers.
However, other CMLs are only capable of describing the image for virtual machines. For instance, in Wrangler [25], the deployment files allow users to define virtual machine images by specifying the name of the image and the type of instance, which represents the size of the virtual machine instance and the target cloud platform. Similarly, in HOT, virtual machine images can be described using the image property to specify the desired image, and the flavor property to define the sizing of the virtual machine instance. Both of these properties are part of the server instance description within the resources section.
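A minimal HOT fragment along these lines, with hypothetical image and flavor names, could look like:

```yaml
heat_template_version: 2018-08-31

resources:
  app_server:
    type: OS::Nova::Server
    properties:
      name: app-server-1      # hypothetical instance name
      image: ubuntu-22.04     # desired VM image
      flavor: m1.small        # flavor encodes the vCPU/RAM/disk sizing
```

Heat resolves the image and flavor names against the target OpenStack deployment, so the same template is portable only across clouds that expose matching names.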

3.9. Description of QoS and QoE

CMLs provide the capability to define constraints or elasticity rules that dictate how the Quality of Service (QoS) and Quality of Experience (QoE) may degrade in response to events, guiding the framework to perform runtime adaptations such as auto-scaling or self-healing for specific application components. Additionally, many CMLs support the specification of time constraints related to network metrics or response time. These constraints or rules allow for the modeling of real-time adaptation of the application to changing conditions, ensuring that the desired performance and user experience are maintained.
Stefanic et al. [32] emphasize that time constraints are often violated due to network issues or high data loads, with response time being the sole time constraint considered in their proposed model. However, there are several CMLs that do support time constraints. For instance, the modeling language proposed in [36] allows for the description of time constraints, including response time and deadline (timeout) constraints, for applications and services running on cloud or fog environments.
Moreover, HOT enables the definition of conditions where developers can describe specific measurement constraint values as triggers for specific actions. For instance, the HOT extension proposed by Villari in [27] supports time-critical constraints as network reachability rules. This demonstrates that CMLs provide the capability to specify time constraints for guiding the behavior and adaptation of cloud applications in response to changing conditions.
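As an illustration of this trigger-style description, classic Heat autoscaling templates pair a scaling policy with a Telemetry alarm that fires when a measured value crosses a threshold. The sketch below follows that pattern; resource names vary across OpenStack releases (the OS::Ceilometer::Alarm type shown here is deprecated in favor of Aodh-based alarms in newer releases), and all image, flavor, and threshold values are illustrative:

```yaml
resources:
  asg:
    type: OS::Heat::AutoScalingGroup
    properties:
      min_size: 1
      max_size: 3
      resource:
        type: OS::Nova::Server
        properties:
          image: ubuntu-22.04          # hypothetical image
          flavor: m1.small

  scale_out_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: { get_resource: asg }
      cooldown: 60
      scaling_adjustment: 1            # add one instance per trigger

  cpu_alarm_high:
    type: OS::Ceilometer::Alarm        # deprecated; newer releases use Aodh alarms
    properties:
      meter_name: cpu_util
      statistic: avg
      period: 60
      evaluation_periods: 1
      threshold: 80                    # percent CPU utilization
      comparison_operator: gt
      alarm_actions:
        - { get_attr: [scale_out_policy, alarm_url] }
```

When the alarm fires, Telemetry invokes the policy’s signed URL, and Heat adds one instance to the group, which is the condition-to-action coupling discussed above.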
CAMEL’s [1] modeling and execution language allows for the description of response time as both a QoS constraint and an objective. A requirement can be stated as a condition (response time < 100 ms) or as an optimization objective (maximize performance). This flexibility in expressing response time constraints in different ways provides versatility in modeling and managing the performance of cloud applications using CAMEL.
Alfonso et al., with their proposed CML [21], support a syntax for defining QoS adaptation rules at runtime. These rules are defined within the AdaptationRule entity and expressed using the extended MPS BaseLanguage. The language allows specifying QoSEvent conditions based on metrics such as CPU consumption or latency, and SensorEvent conditions linked to specific sensor groups. A notable feature is the ability to set a time period for which the conditions must remain true, preventing unnecessary rule triggering. When the conditions are satisfied, runtime adaptation actions are executed. The Self-adaptive IoT DSL language includes the allActions entity, which triggers all associated actions when set to true. Alternatively, when it is set to false, the actionsQuantity attribute determines the number of actions executed, starting from the first action and continuing until the desired quantity is reached. The language supports four types of actions: Offloading, which migrates containers between different layers (edge, fog, cloud); Scaling, which deploys replicas of an application; Redeployment, which stops and redeploys containers on nodes; and OperateActuator, which controls system actuators, for example by activating or deactivating alarms.
Petrovic et al. [12] employ a similar approach in SMADA-Fog, leveraging an adaptation strategy model. This model facilitates the definition of a sequence of adaptation rules, where each rule comprises a condition–action pair that targets a specific container. These adaptation rules delineate the appropriate actions to be executed when specific conditions are met, thereby influencing the designated service. The conditions are triggered by changes in the application’s execution environment, leading to code generation for implementing the prescribed actions. Conditions can be expressed through two approaches: (a) detection of specific events, or (b) evaluation of relational expressions based on QoS metric thresholds or the number of active users/connections requesting the service. Furthermore, events can also be associated with metrics related to end devices, such as connection speed, signal strength, and various service-related aspects including QoS metrics like latency, framerate, and bitrate, as well as the number of devices utilizing the service. These metrics can subsequently impact the application’s QoS by influencing CPU and memory consumption. The adaptation strategy model encompasses a range of actions, including scaling (up/down) rules, re-deployment, latency minimization, traffic filtering, prioritization, and conditioned service creation/shutdown.
To incorporate QoS metrics into constraints and conditions, a platform requires robust monitoring systems. In their research [22], Alfonso et al. employ Prometheus (https://prometheus.io/) (accessed on 12 October 2023), along with a node exporter (https://github.com/prometheus/node_exporter) (accessed on 12 October 2023) and kube-state-metrics (https://github.com/kubernetes/kube-state-metrics) (accessed on 12 October 2023), for collecting infrastructure and pod metrics. Moreover, Prometheus effectively gathers data from the system’s MQTT broker topics, such as temperature, humidity, gas levels, and various other sensor data types. These metric collections play a critical role in detecting both QoS events and sensor events. The authors utilize these metrics to define alerts triggering the platform’s initiation of the runtime adaptation phase based on the specified QoS conditions/events described in the related model.
There are even CMLs that use conditions/constraints to perform different scopes of actions in cloud and edge environments. For instance, the SYBL language [34] provides the ability to write conditions/constraints related to cost, quality, and resources. A developer can utilize a strategy in SYBL by indicating response time as a condition/constraint. For example, the developer could declare that response time should be less than 4 ms when the number of users is less than 1000, and when this rule is violated, the application must scale out. A cloud customer could use SYBL to specify conditions/constraints related to cost. For example, if the total cost is higher than 800 Euro, a scale-in action is performed to keep costs within acceptable limits. On the other hand, a cloud provider could use SYBL to specify conditions/constraints associated with a pricing schema or price computation policies. For instance, if the availability is higher than 99 percent, the cost should be increased by 10 percent. This demonstrates how CMLs, like SYBL, enable the specification of conditions/constraints for different stakeholders to manage various aspects of cloud and edge environments.
As mentioned previously, CAMEL supports the declaration of optimization objectives. However, CAMEL [1] is not the only CML that supports optimization objectives/rules. In many CMLs, optimization objectives/rules are typically written in the form of key–value pairs, rather than following a threshold-based approach.
In CAMEL [1], optimization rules are considered as requirements, allowing users to specify monitoring metrics to be minimized in the requirements package, specifically in the optimization requirement section. CAMEL also supports the inclusion of QoS constraints in the cloud provider-independent model, where developers can describe virtual machine application components, virtual machine locations, and service-level objectives for these components. This demonstrates how CAMEL provides flexibility in defining requirements and constraints for cloud applications in a provider-independent manner.
The utilization of AMPL (A Mathematical Programming Language) within the SMADA-Fog framework [12] allows for the generation of optimization models. AMPL employs expressions that closely resemble traditional algebraic notation while also incorporating extensions to support the definition of network flow constraints and piecewise linearities, thus catering to optimization problems. AMPL is chosen due to its ability to combine and extend the expressive capabilities of similar algebraic modeling languages, making it more versatile while still being user-friendly. However, it is important to note that AMPL itself does not serve as a solver, but rather provides an interface to other programs responsible for solving. In this case, CPLEX (https://www.ibm.com/analytics/cplex-optimizer) (accessed on 13 October 2023) is employed as the linear programming solver, utilizing the simplex method for deployment and traffic optimization. SMADA-Fog has the capability to conduct optimization tasks focused on network optimization, minimizing latency, and maximizing execution speed.
In Aeolus Blender [33], one of the inputs that Zephyrus reads is a high-level specification of the desired system. In this specification, developers can include objective functions as optimization rules. For example, an optimization rule could be to minimize the number of virtual machines used for deployment, along with consideration of the system cost. Similarly, Tsagkaropoulos et al. in their study [20] utilize the optimization_variables section within the fragment entity. This section allows them to assign weights to factors such as distance, cost, and friendliness when evaluating a provider. The distance factor indicates the geographical proximity between a cloud provider’s host and the centroid of the edge devices, the cost factor represents the monetary expenses involved, and the friendliness factor denotes the latency experienced when accessing the services of a cloud provider. The optimization weights obtained from the optimization_variables section are leveraged by an optimization solver to apply related policies. In addition, the same research work enables the specification of optimization constraints at the application level, such as allocating a budget of 1000 euros to be utilized over a span of 720 h on a cloud provider, with the objective of minimizing the overall cost.
As evident from the findings in Aeolus Blender [33], SYBL is not the only CML that incorporates cost-related metrics. In SYBL [34], users can define thresholds for monitoring metrics while also having the ability to specify QoS goals, similar to how optimization rules are defined in CAMEL [1] and Aeolus Blender [33]. These QoS goals allow for components or applications to express their requirements and optimize costs, showcasing the versatility of these CMLs in addressing various aspects of cloud computing.
On the other hand, GENTL [9] approaches the definition of QoS criteria/goals in a different manner. GENTL semantically models these criteria/goals as capabilities or requirements for discovery purposes, encompassing a wide range of functional interface descriptions. This highlights the flexibility and versatility of GENTL in representing QoS criteria/goals in various forms.

3.10. Attached Scripts

There are CMLs that incorporate support for attachments, allowing for the description of events or conditions under which specific scripts or actions must be executed to facilitate self-healing capabilities. This feature enables the modeling of dynamic behaviors and automated responses to changing conditions or events within the cloud and edge environment.
For TOSCA [16], this feature is achieved with the tosca.nodes.SoftwareComponent node type, which includes an interface section. This allows users to describe lifecycle phases such as create, configure, start, stop, and delete. Additionally, users can define custom operations within the interface section by using scripts to facilitate self-adaptive behavior, enabling applications to automatically adapt to changing conditions or events within the cloud environment.
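A minimal sketch of this mechanism, with hypothetical script paths, maps TOSCA’s normative Standard lifecycle interface to shell scripts:

```yaml
topology_template:
  node_templates:
    backend:
      type: tosca.nodes.SoftwareComponent
      interfaces:
        Standard:                       # normative TOSCA lifecycle interface
          create: scripts/install.sh    # hypothetical script artifacts
          configure: scripts/configure.sh
          start: scripts/start.sh
          stop: scripts/stop.sh
          delete: scripts/uninstall.sh
```

The orchestrator invokes each attached script when the component enters the corresponding lifecycle phase, which is what enables the automated, self-adaptive behavior described above.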
MoDMaCAO [23] optimizes configuration management through the utilization of the interface semantic, which establishes a descriptive framework for lifecycle actions. The framework defines distinct phases for both the application and its components, encompassing undeployed, deployed, active, inactive, and error states. The supported lifecycle actions for the application and its components include deploy, undeploy, configure, start, and stop. By employing interfaces, MoDMaCAO simplifies the integration of various configuration management tools, including popular options like Ansible, ensuring compatibility and flexibility. This versatile feature extends its utility to the implementation of self-adaptation scripts and runtime adaptation scripts, making it an invaluable asset across diverse scenarios.
Solayman et al. [18] demonstrate the use of TOSCA’s attached scripts to enable serverless logic in three different scenarios. In the first scenario, a container is instructed to fetch a script during the starting phase and access the source code from a GitHub repository. In the second scenario, the container installs the necessary programming language before executing its code. In the third scenario, potential libraries are installed during container creation and after programming language installation. In all scenarios, the app_library entity is used to describe the programming language and libraries, which are linked to the container entity via the contained_in relationship.
The usage of the interface section in TOSCA extensions is not uniformly unrestricted. TOSCA Light, as elucidated by Wurster et al. [30], adopts the EDMM approach [7], which facilitates the integration and influence of the deployment lifecycle. Consequently, application developers are obliged to specify operations pertaining to the deployment lifecycle, encompassing component installation, startup, shutdown, and termination. The TOSCA standard already defines essential operations such as component startup, configuration, and shutdown, and the EDMM syntax, as expounded by Wurster et al. [7], employs a framework similar to TOSCA for attaching scripts to different lifecycle operations (e.g., utilizing the artifact entity to represent a shell script linked to the component installation phase). As a result, the range of interface types available in TOSCA Light is limited to the standard lifecycle type, and TOSCA Light does not support models that define custom interface types.
DesLauriers et al. [17] propose a unique approach by introducing custom interface types in their work. These interface types provide the ability to specify the orchestration tool responsible for deployment and allow for the inclusion of additional custom parameters through the inputs field. The CML offers two choices for virtual machine orchestration, Terraform and Occopus, while Kubernetes and Docker are the available options for container orchestration. Consequently, the language encompasses four interface types. Each interface is capable of describing the different lifecycle phases of applications on these four technologies and defines the corresponding input that indicates a native operation for creation, update, or deletion. In the case of MICADO, these lifecycle stages are managed by the respective orchestrator associated with that node. Hence, there is no requirement to associate those stages with a script or a piece of automation code, as is typically done in normative TOSCA.
Attached scripts can also be utilized to automate installation scripts or the deployment of application components. Wrangler [25] allows users to attach custom scripts containing operations associated with servers. For instance, a script can be used to start the required NFS (Network File System) services as part of the deployment process.
Despite having different scopes, TOSCA and CAMEL have the potential to utilize the attached scripts feature in a manner similar to Wrangler. CAMEL [1] also supports this feature, enabling users to define a configuration type in the deployment package where scripts can be attached to manage the lifecycle of a component, encompassing tasks such as downloading, installing, starting, configuring, and stopping.

3.11. End Devices

End devices refer to devices owned by users, such as mobile devices or computers, that run client-side applications to retrieve information from applications residing in edge-cloud computing environments. While defining end devices is not typically within the scope of edge-cloud computing, a limited number of CMLs support this description. One such example is the application model suggested in [35], which attempts to define the components that synthesize the edge-cloud continuum.
The deployment model of SMADA-Fog [12] includes a device type referred to as consumer type. According to Petrovic et al., consumer devices are heterogeneous devices utilized by end users, serving as consumers of the provider’s services. These devices encompass a wide range of technologies, including conventional personal desktop computers, smart televisions, laptops, smartphones, mobile robots, IoT devices, and smart home appliances. They have the capacity to support diverse scenarios, ranging from social networking and entertainment to Smart Grid management, healthcare applications, and manufacturing process support.
In Fog computing, IoT devices play a crucial role as end devices that consist of physical devices equipped with sensors for data collection. The proposed CML of Alfonso et al. [21,22] enables the description of these devices. The language includes an entity called IoTDevice, which encompasses two subordinate entities, Sensor and Actuator, used for specifying the sensors and actuators associated with a device. Additionally, the language incorporates a specific connectivity type property to capture the connectivity options such as Ethernet, Wi-Fi, ZigBee (Zonal Intercommunication Global standard), or other types.
Existing CMLs such as TOSCA, CAMEL, and others could potentially be extended to incorporate descriptions of end devices. The aim of this extension would be to enable cloud platforms to benefit from such descriptions. For instance, a cloud platform could leverage the information about the distance between edge devices and end devices to deploy application components to the edge devices that are closest to the users, thereby optimizing performance and user experience. This type of logic could potentially be expressed in a CML as a specific optimization rule, similar to the other optimization rules.

3.12. Authentication Credentials

In CMLs, authentication credentials typically include a username and a password, and are usually modeled in a public or private repository to enable a platform to access and retrieve code or images of application components. TOSCA, for instance, models authentication credentials in the context of external repository description to make the repository accessible.
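In TOSCA’s Simple Profile, for example, a repository definition can carry a credential of type tosca.datatypes.Credential. The sketch below uses hypothetical names and a placeholder token:

```yaml
repositories:
  private_registry:
    url: https://registry.example.com   # hypothetical private registry
    credential:                         # tosca.datatypes.Credential
      protocol: https
      token_type: basic_auth
      user: deploy-user                 # hypothetical account
      token: <password-placeholder>     # never commit real secrets to a model
```

In practice, platforms typically substitute the token at deployment time from a secret store rather than embedding it in the model itself.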
In some CMLs, authentication credentials are used for connecting to the infrastructure. For example, in Wrangler [25], users define the authentication credentials required by the provider within the description of the virtual machine. Similarly, in the modeling language proposed by Petrovic et al. [29], developers can describe the credentials of a device when a component or tool needs to perform an SSH connection. In the subsequent research by Petrovic et al. [12], the SSH connection description is employed to specify the credentials required for user access to a specific server host. Additionally, the host description includes support for the Docker Swarm token, which enables a host to join the container cluster associated with a specific Swarm master.
In certain CMLs, authentication credentials can also be modeled to represent the roles of employees within a business. For example, CAMEL [1] provides support for describing authentication credentials and roles in the organization package. Users can define authentication credentials and roles as part of the employee entity within this package, allowing for differentiation of access rights to machines or applications based on the roles assigned to employees. Similarly, in the Velo DSL [11], developers have the ability to explicitly specify the authorized users for authentication within a particular virtual bag entity that encompasses one or more containers.

4. Conclusions and Future Research Directions

4.1. Main Findings

Based on the findings presented in Section 3 and summarized in Table 3 and Table 4, the main conclusion is that there is no single CML that can fully support all the concepts of edge-cloud computing. This work highlights the ongoing effort from academia to extend existing CMLs in order to address their limitations in meeting the demands of developers for effortless application migration to cloud-edge continuum environments.
Particularly, researchers frequently utilize and extend TOSCA, CAMEL [1], and HOT due to their effectiveness in describing a wide range of concepts found in existing CMLs. TOSCA typically serves as the baseline for creating new CMLs tailored to novel requirements. The availability of analytical documentation further enhances the popularity and widespread adoption of TOSCA among researchers. In our survey, we examine five TOSCA extensions [17,18,20,30,32] and one HOT extension [27], and identify no CAMEL extensions. HOT, on the other hand, does not lend itself to straightforward extension, as it lacks cloud-agnosticism and focuses solely on virtual machine descriptions. While CAMEL provides robust support for orchestrating multiple cloud providers, it lacks comprehensive documentation on extension methodologies. Our results showcase that TOSCA offers a rich set of generic entities that can be extended to describe diverse aspects, making it a strong candidate among the existing CMLs for use in multiple cloud-edge platforms for deployment and runtime adaptation descriptions.
In edge-cloud computing, the placement of application components plays a vital role, but only a few CMLs support it as a deployment criterion. Among the 22 CMLs, only 6 support this crucial feature, which is considered highly significant for cloud-edge continuum platforms. In particular, TOSCA IoT ext. [18], the ADOxx Definition Language [36], SMADA-Fog [12], TOSCA for Edge and Fog [20], FRACTAL4EDGE [35], and the Self-adaptive IoT DSL [21] are the only sources that provide comprehensive semantics beneficial for orchestrating cloud-edge continuum platforms. The observation that only 27.2% of the languages analyzed permit developers to specify whether their application components should be deployed on the edge or in a cloud environment highlights the need for further research to address the lack of support for this feature. Similarly, the representation of runtime adaptation is supported by more than half of the presented CMLs. Among the 22 CMLs discussed, 12 (EDMM [7], TOSCA Light [30], Wrangler [25], CAMEL [1,31], TOSCA lifecycle ext. [32], TOSCA MICADO [17], TOSCA for Edge and Fog [20], SMADA-Fog [12], Self-adaptive IoT DSL [21], GENTL [9], TOSCA, and MoDMaCAO [23]) possess the capability to describe QoS criteria in different formats. Based on our analysis in Section 3.9, SYBL [34], the Self-adaptive IoT DSL [21], the ADOxx Definition Language [36], CAMEL [1], HOT, SMADA-Fog [12], and the HOT extension [27] are the seven languages that exhibit a condition syntax suitable for event-driven logic based on QoS rules. While GENTL [9] does not inherently support this description, it can achieve it by utilizing lower-level languages, given its nature as a high-level language. Developers need to consider both the requirements of the target platform/provider and their own preferences when selecting a CML to describe runtime adaptation, as there are various supported syntaxes available. This remains an open issue, as researchers may introduce a syntax in the future that could potentially supersede the others.
Runtime adaptation description is supported by a greater number of languages compared to the QoS criteria. Although these two aspects are closely related, runtime adaptation can be described using formats other than event-driven syntax with conditions, as analyzed in Section 3.3. This is why Table 3 and Table 4 include separate rows indicating which languages support execution flow (workflows), QoS constraints, and runtime adaptation. While workflows and QoS constraints/conditions can be combined to provide an event-driven logic, some languages only support certain parts of them for runtime adaptation. Among the seventeen languages, eight support the runtime adaptation description, while five support the workflows description. The only languages that support both descriptions are TOSCA and CAMEL [1], making them ideal choices for developers who wish to express runtime adaptation in either a non-event-driven format or with an event-driven format.
Indeed, the majority of CMLs primarily focus on prioritizing the deployment of application components in edge-cloud computing environments, with less emphasis on the aforementioned significant aspects. SYBL [34] and FRACTAL4EDGE [35] are the only languages that do not describe the deployment phase, as they serve a different purpose than describing the deployment of application components. However, they are included in this survey due to their semantics that are relevant to CMLs involved in orchestration across both cloud and edge environments. Languages like SYBL [34] and FRACTAL4EDGE [35], which support event-triggered actions based on cost considerations and flexible modeling of physical devices, can provide significant benefits for environmentally aware orchestrators. These orchestrators manage deployment and runtime adjustments while considering the energy usage of each device, similar to the approach outlined by Theodoropoulos et al. [37].

4.2. Future Work

The findings from our research survey will be utilized as the basis for our CML. As stated at the start of this paper, the main motivation for this research work was to identify how existing CMLs describe the deployment phase and runtime adaptation of applications in edge-cloud computing environments. After an extensive literature review, it was decided to extend TOSCA to create a CML that supports the description of the deployment phase, runtime adaptation, and QoS criteria. This language aims to support numerous properties to facilitate deployment and runtime adaptation across cloud and edge environments, using a syntax that is easy for developers to understand.
The primary challenge is to develop a user-friendly syntax capable of describing runtime adaptation alongside QoS deterioration conditions/events, which will subsequently prompt adaptation actions. Our interest lies in establishing a structure for QoS deterioration conditions akin to those outlined in SYBL [34] and the Self-adaptive IoT DSL presented in [21], as they provide a well-defined condition structure that can be integrated into workflows. Workflows will be described as a list of QoS conditions and runtime adaptation actions that platforms need to perform during the runtime adaptation phase.
The secondary challenge for our CML is to simplify application component deployment on infrastructure with minimal re-engineering, regardless of the deployment unit. Taking inspiration from DesLauriers et al. [17], our CML will empower developers with the ability to describe application components as either virtual machines or containers. To streamline application component orchestration, our CML has to facilitate the description of input parameters and dependencies between components using environment values. Platforms can leverage these features to furnish input parameters or configurations, ensuring seamless deployment and communication among application components. To optimize the orchestration of application components, the description of resource requirements and the concept of placement will also be supported in our CML.
Furthermore, the development of the CML must follow a cloud provider-independent approach to ensure compatibility with any platform. This entails designing a syntax that offers generic information not tied to any specific platform, while remaining easily adaptable across platforms. In line with recommendations from the literature [1,16,17,29,33,38], platforms and frameworks that adopt a CML must employ their own reasoners to extract knowledge from models written in our language; this is crucial for the effective execution of deployment and runtime adaptation plans. To facilitate the implementation of our language, we will draw inspiration from Borisova et al. [15], who demonstrate the possibility of implementing TOSCA with Kubernetes. As documented in [4,6,17], the availability of supporting tools and documentation plays a crucial role in the successful adoption of a language. Consequently, we aim to develop a TOSCA-to-Kubernetes reasoner, leveraging existing research and insights to simplify the implementation process.
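The core of such a reasoner can be sketched as a mapping from a parsed TOSCA-like node template to a Kubernetes manifest. The input field names (`properties`, `environment`, `replicas`) are assumptions about a hypothetical model, not TOSCA's normative node type schema; the output follows the Kubernetes `apps/v1` Deployment structure. The input is a plain dict, as a YAML loader would produce.

```python
def node_to_deployment(name: str, node: dict) -> dict:
    """Map one TOSCA-like node template (parsed dict) to a K8s Deployment."""
    props = node.get("properties", {})
    container = {
        "name": name,
        "image": props["image"],
        # environment values become container env vars
        "env": [{"name": k, "value": str(v)}
                for k, v in props.get("environment", {}).items()],
    }
    # resource requirements, if declared, become K8s resource requests
    if "cpu" in props or "memory" in props:
        container["resources"] = {"requests": {
            k: str(props[k]) for k in ("cpu", "memory") if k in props}}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": props.get("replicas", 1),
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {"containers": [container]},
            },
        },
    }
```

A full reasoner would additionally translate placement hints into node selectors or affinity rules and dependencies into Services, but the sketch shows the essential model-to-manifest translation step.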

Author Contributions

I.K. contributed to the conceptualization, data collection, and writing of the paper. A.M. contributed to the conceptualization, writing, and reviewing and revising of the paper. K.T. contributed to securing the funding for the work, conceptualization, reviewing and revising the paper, as well as to the coordination of the team. The paper reflects only the authors’ views. The EU Commission is not responsible for any use that may be made of the information it contains. All authors have read and agreed to the published version of the manuscript.

Funding

This research received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 871793 (project ACCORDION) and No 101016509 (project CHARITY).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Table A1. Queries for the cloud application model term for IEEE Xplore and ACM Digital Library.
Search EngineQueryResults
IEEE Xplore(“Document Title”: cloud application model OR “Document Title”: cloud modeling language OR “Document Title”:cloud modelling language OR “Document Title”:cloud open standard OR “Document Title”: model-based infrastructure OR “Document Title”:TOSCA OR “Document Title”:Openstack HOT) AND (“Author Keywords”:cloud computing OR “Author Keywords”:edge computing OR “Author Keywords”:workflows OR “Author Keywords”:runtime adaptation OR “Author Keywords”:elasticity specification OR “Author Keywords”:time constraints OR “Author Keywords”:domain-specific language OR “Author Keywords”:model-driven engineering OR “Author Keywords”:fog computing OR “Author Keywords”:internet of things OR “Author Keywords”:TOSCA OR “Author Keywords”:orchestration OR “Author Keywords”:deployment OR “Author Keywords”:infrastructure as code OR “Author Keywords”:multi-cloud OR “Author Keywords”:standards OR “Author Keywords”:self-adaptive system OR “Author Keywords”:cloud service OR “Author Keywords”:time-critical cloud application OR “Author Keywords”:quality of service)140
ACM Digital Library(Title: “cloud application modelling” OR “cloud modeling languages” OR “cloud modelling languages” OR “fog modeling languages” OR “fog modelling languages” OR “service description languages” OR “open standards for deployment” OR “runtime execution” OR “TOSCA” OR “openstack hot” OR “deployment automation technologies” OR “cloud Application topologies” OR “model driven” OR “orchestrating multicomponent applications” OR “Component-based Modeling” OR “deployment model abstraction” OR “extending TOSCA” OR “Modeling self-adaptative IoT” OR “runtime execution of self-adaptive” OR “model-driven” OR “Automatic Deployment of Services” OR “Automating Application Deployment” OR “blueprinting” OR “Deployment of Distributed Applications with Geographical Constraints” OR “deployment of cloud applications” OR “extensible language” OR “deployment of IoT application in fog”) AND (Author Keyword: “cloud computing” OR “deployment and provisioning” OR “edge computing” OR “runtime adaptation” OR “elasticity specification” OR “time constraints” OR “domain-specific language” OR “model-driven engineering” OR “fog computing” OR “internet of things” OR “TOSCA” OR “orchestration” OR “deployment and orchestration” OR “deployment” OR “infrastructure as code” OR “infrastructure-as-code” OR “multi-cloud” OR “standards” OR “self-adaptive system” OR “cloud service life cycle” OR “time-critical cloud application” OR “quality of service” OR “application topology language” OR “Open cloud computing interface” OR “Deployment Automation” OR “Unified Modeling Language” OR “Component Model”)408
Table A2. Queries for the cloud application model term for Scopus and SpringerLink.
Search EngineQueryResults
Scopus( TITLE ( "cloud application modelling" OR "cloud modeling languages" OR "cloud modelling languages" OR "fog modeling languages" OR "fog modelling languages" OR "service description languages" OR "open standards for deployment" OR "runtime execution" OR "TOSCA" OR "openstack hot" OR "deployment automation technologies" OR "cloud Application topologies" OR "model driven" OR "orchestrating multicomponent applications" OR "Component-based Modeling " OR "deployment model abstraction" OR "extending TOSCA" OR "Modeling self-adaptative IoT" OR "runtime execution of self-adaptive" OR "model-driven" OR "Automatic Deployment of Services" OR "Automating Application Deployment" OR "blueprinting" OR "Deployment of Distributed Applications with Geographical Constraints" OR "deployment of cloud applications" OR "extensible language" OR " deployment of IoT application in fog" ) AND KEY ( "cloud computing" OR "deployment and provisioning" OR "edge computing" OR "runtime adaptation" OR "elasticity specification" OR "time constraints" OR "domain-specific language" OR "model-driven engineering" OR "fog computing" OR "internet of things" OR "TOSCA" OR "orchestration" OR "deployment and orchestration" OR "deployment" OR "infrastructure as code" OR "infrastructure-as-code" OR "multi-cloud" OR "standards" OR "self-adaptive system" OR "cloud service life cycle" OR "time-critical cloud application" OR "quality of service" OR "application topology language" OR "Open cloud computing interface" OR "Deployment Automation" OR "Unified Modeling Language" OR "Component Model" ) )1655
SpringerLink(Title: "cloud application modelling" OR "cloud modeling languages" OR "cloud modelling languages" OR "fog modeling languages" OR "fog modelling languages" OR "service description languages" OR "open standards for deployment" OR "runtime execution" OR "TOSCA" OR "openstack hot" OR "deployment automation technologies" OR "cloud Application topologies" OR "model driven" OR "orchestrating multicomponent applications" OR "Component-based Modeling " OR "deployment model abstraction" OR "extending TOSCA" OR "Modeling self-adaptative IoT" OR "runtime execution of self-adaptive" OR "model-driven" OR "Automatic Deployment of Services" OR "Automating Application Deployment" OR "blueprinting" OR "Deployment of Distributed Applications with Geographical Constraints" OR "deployment of cloud applications" OR "extensible language" OR " deployment of IoT application in fog") AND (Phrase: "cloud computing" OR "deployment and provisioning" OR "edge computing" OR "runtime adaptation" OR "elasticity specification" OR "time constraints" OR "domain-specific language" OR "model-driven engineering" OR "fog computing" OR "internet of things" OR "TOSCA" OR "orchestration" OR "deployment and orchestration" OR "deployment" OR "infrastructure as code" OR "infrastructure-as-code" OR "multi-cloud" OR "standards" OR "self-adaptive system" OR "cloud service life cycle" OR "time-critical cloud application" OR "quality of service" OR "application topology language" OR "Open cloud computing interface" OR "Deployment Automation" OR "Unified Modeling Language" OR "Component Model")3014

References

  1. Rossini, A.; Kritikos, K.; Nikolov, N.; Domaschka, J.; Griesinger, F.; Seybold, D.; Romero, D.; Orzechowski, M.; Kapitsaki, G.; Achilleos, A. The Cloud Application Modelling and Execution Language (CAMEL); Universität Ulm: Ulm, Germany, 2017; pp. 1–39. [Google Scholar]
  2. Bergmayr, A.; Breitenbücher, U.; Ferry, N.; Rossini, A.; Solberg, A.; Wimmer, M.; Leymann, F. A Systematic Review of Cloud Modeling Languages. ACM Comput. Surv. 2018, 51, 1–38. [Google Scholar] [CrossRef]
  3. Korontanis, I.; Tserpes, K.; Pateraki, M.; Blasi, L.; Violos, J.; Diego, F.; Marin, E.; Kourtellis, N.; Coppola, M.; Carlini, E.; et al. Inter-Operability and Orchestration in Heterogeneous Cloud/Edge Resources: The ACCORDION Vision. In Proceedings of the 1st Workshop on Flexible Resource and Application Management on the Edge, New York, NY, USA, 25 June 2020; pp. 9–14. [Google Scholar] [CrossRef]
  4. Alidra, A.; Bruneliere, H.; Ledoux, T. A feature-based survey of Fog modeling languages. Future Gener. Comput. Syst. 2022, 138, 104–119. [Google Scholar] [CrossRef]
  5. Nawaz, F.; Mohsin, A.; Janjua, N. Service description languages in cloud computing: State- of-the-art and research issues. Serv. Oriented Comput. Appl. 2019, 13, 109–125. [Google Scholar] [CrossRef]
  6. Bellendorf, J.; Mann, Z.Á. Cloud Topology and Orchestration Using TOSCA: A Systematic Literature Review. In Proceedings of the Service-Oriented and Cloud Computing, Como, Italy, 12–14 September 2018; Kritikos, K., Plebani, P., de Paoli, F., Eds.; Springer International Publishing: Cham, Germany, 2018; pp. 207–215. [Google Scholar]
  7. Wurster, M.; Breitenbücher, U.; Falkenthal, M.; Krieger, C.; Leymann, F.; Saatkamp, K.; Soldani, J. The essential deployment metamodel: A systematic review of deployment automation technologies. SICS Softw. Intensive-Cyber-Phys. Syst. 2020, 35, 63–75. [Google Scholar] [CrossRef]
  8. Kritikos, K.; Skrzypek, P. Are cloud modelling languages ready for multi-cloud? In Proceedings of the 12th IEEE/ACM International Conference on Utility and Cloud Computing Companion, Auckland, New Zealand, 2–5 December 2019; pp. 51–58. [Google Scholar]
  9. Andrikopoulos, V.; Reuter, A.; Gómez Sáez, S.; Leymann, F. A GENTL Approach for Cloud Application Topologies. In Proceedings of the Service-Oriented and Cloud Computing, Manchester, UK, 2–4 September 2014; Villari, M., Zimmermann, W., Lau, K.K., Eds.; Springer: Berlin/Heidelberg, Germany, 2014; pp. 148–159. [Google Scholar]
  10. Esposito, A.; Di Martino, B.; Cretella, G. Defining Cloud Services Workflow: A Comparison between TOSCA and OpenStack Hot. In Proceedings of the 2015 Ninth International Conference on Complex, Intelligent, and Software Intensive Systems, Santa Catarina, Brazil, 8–10 July 2015; IEEE: Piscataway, NJ, USA. [Google Scholar] [CrossRef]
  11. Quenum, J.G.; Ishuuwa, G. Abstracting Containerisation and Orchestration for Cloud-Native Applications. In Proceedings of the Cloud Computing–CLOUD 2020: 13th International Conference, Held as Part of the Services Conference Federation, SCF 2020, Honolulu, HI, USA, 18–20 September 2020; Proceedings 13. Springer: Berlin/Heidelberg, Germany, 2020; pp. 164–180. [Google Scholar]
  12. Petrovic, N.; Tosic, M. SMADA-Fog: Semantic model driven approach to deployment and adaptivity in fog computing. Simul. Model. Pract. Theory 2020, 101, 102033. [Google Scholar] [CrossRef]
  13. Sampaio, A.; Mendonça, N. Uni4Cloud: An Approach Based on Open Standards for Deployment and Management of Multi-Cloud Applications. In Proceedings of the 2nd International Workshop on Software Engineering for Cloud Computing, Honolulu, HI, USA, 22 May 2011; SECLOUD’11. pp. 15–21. [Google Scholar] [CrossRef]
  14. Lipton, P.; Palma, D.; Rutkowski, M.; Tamburri, D. TOSCA Solves Big Problems in the Cloud and Beyond! IEEE Cloud Comput. 2018, 5, 1. [Google Scholar] [CrossRef]
  15. Borisova, A.; Shvetcova, V.; Borisenko, O. Adaptation of the TOSCA standard model for the Kubernetes container environment. In Proceedings of the 2020 Ivannikov Memorial Workshop (IVMEM), Orel, Russia, 25–26 September 2020; pp. 9–14. [Google Scholar]
  16. Brogi, A.; Rinaldi, L.; Soldani, J. TosKer: A synergy between TOSCA and Docker for orchestrating multicomponent applications: TosKer: A synergy between TOSCA and Docker. Softw. Pract. Exp. 2018, 48, 2061–2079. [Google Scholar] [CrossRef]
  17. DesLauriers, J.; Kiss, T.; Ariyattu, R.C.; Dang, H.V.; Ullah, A.; Bowden, J.; Krefting, D.; Pierantoni, G.; Terstyanszky, G. Cloud apps to-go: Cloud portability with TOSCA and MiCADO. Concurr. Comput. Pract. Exp. 2021, 33, e6093. [Google Scholar] [CrossRef]
  18. Solayman, H.E.; Qasha, R.P. Portable Modeling for ICU IoT-based Application using TOSCA on the Edge and Cloud. In Proceedings of the 2022 International Conference on Computer Science and Software Engineering (CSASE), Duhok, Iraq, 15–17 March 2022; pp. 301–305. [Google Scholar] [CrossRef]
  19. Weller, M.; Breitenbücher, U.; Speth, S.; Becker, S. The deployment model abstraction framework. In Proceedings of the Enterprise Design, Operations, and Computing. EDOC 2022 Workshops: IDAMS, SoEA4EE, TEAR, EDOC Forum, Demonstrations Track and Doctoral Consortium, Bozen-Bolzano, Italy, 4–7 October 2022; Revised Selected Papers. Springer: Berlin/Heidelberg, Germany, 2023; pp. 319–325. [Google Scholar]
  20. Tsagkaropoulos, A.; Verginadis, Y.; Compastié, M.; Apostolou, D.; Mentzas, G. Extending tosca for edge and fog deployment support. Electronics 2021, 10, 737. [Google Scholar] [CrossRef]
  21. Alfonso, I.; Garcés, K.; Castro, H.; Cabot, J. Modeling self-adaptative IoT architectures. In Proceedings of the 2021 ACM/IEEE International Conference on Model Driven Engineering Languages and Systems Companion (MODELS-C), Fukuoka, Japan, 10–15 October 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 761–766. [Google Scholar]
  22. Alfonso, I.; Garcés, K.; Castro, H.; Cabot, J. A model-based infrastructure for the specification and runtime execution of self-adaptive IoT architectures. Computing 2023, 105, 1883–1906. [Google Scholar] [CrossRef]
  23. Zalila, F.; Korte, F.; Erbel, J.; Challita, S.; Grabowski, J.; Merle, P. MoDMaCAO: A model-driven framework for the design, validation and configuration management of cloud applications based on OCCI. Softw. Syst. Model. 2022, 22, 871–889. [Google Scholar] [CrossRef]
  24. Di Cosmo, R.; Eiche, A.; Mauro, J.; Zacchiroli, S.; Zavattaro, G.; Zwolakowski, J. Automatic Deployment of Services in the Cloud with Aeolus Blender. In Proceedings of the Service-Oriented Computing, Goa, India, 16–19 November 2015; Barros, A., Grigori, D., Narendra, N.C., Dam, H.K., Eds.; Springer: Berlin/Heidelberg, Germany, 2015; pp. 397–411. [Google Scholar]
  25. Juve, G.; Deelman, E. Automating Application Deployment in Infrastructure Clouds. In Proceedings of the 2011 IEEE Third International Conference on Cloud Computing Technology and Science, Athens, Greece, 29 November–1 December 2011; pp. 658–665. [Google Scholar]
  26. Papazoglou, M.P.; Heuvel, W.J.v.d. Blueprinting the Cloud. IEEE Internet Comput. 2011, 15, 74–79. [Google Scholar] [CrossRef]
  27. Villari, M.; Tricomi, G.; Celesti, A.; Fazio, M. Orchestration for the Deployment of Distributed Applications with Geographical Constraints in Cloud Federation. In Proceedings of the IISSC/CN4IoT, Brindisi, Italy, 20–21 April 2017. [Google Scholar]
  28. Faisal, M.; Nazir, S.; Babar, M. Notational Modeling for Model Driven Cloud Computing using Unified Modeling Language. EAI Endorsed Trans. Scalable Inf. Syst. 2018, 5, e9. [Google Scholar] [CrossRef]
  29. Petrovic, N. Model-driven Approach for Deployment of Container-based Applications in Fog Computing. In Proceedings of the 5th International Conference on Electrical, Electronic and Computing Engineering IcETRAN, Palić, Serbia, 11–14 June 2018. [Google Scholar]
  30. Wurster, M.; Breitenbücher, U.; Harzenetter, L.; Leymann, F.; Soldani, J.; Yussupov, V. TOSCA Light: Bridging the Gap between the TOSCA Specification and Production-ready Deployment Technologies. In Proceedings of the CLOSER, Online Streaming, 7–9 May 2020; pp. 216–226. [Google Scholar]
  31. Achilleos, A.P.; Kritikos, K.; Rossini, A.; Kapitsaki, G.M.; Domaschka, J.; Orzechowski, M.; Seybold, D.; Griesinger, F.; Nikolov, N.; Romero, D.; et al. The cloud application modelling and execution language. J. Cloud Comput. 2019, 8, 20. [Google Scholar] [CrossRef]
  32. Štefanič, P.; Cigale, M.; Jones, A.C.; Knight, L.; Taylor, I. Support for full life cycle cloud-native application management: Dynamic TOSCA and SWITCH IDE. Future Gener. Comput. Syst. 2019, 101, 975–982. [Google Scholar] [CrossRef]
  33. Di Cosmo, R.; Lienhardt, M.; Treinen, R.; Zacchiroli, S.; Zwolakowski, J.; Eiche, A.; Agahi, A. Automated synthesis and deployment of cloud applications. In Proceedings of the ASE 2014—29th ACM/IEEE International Conference on Automated Software Engineering, Västerås, Sweden, 15–19 September 2014; pp. 211–221. [Google Scholar] [CrossRef]
  34. Copil, G.; Moldovan, D.; Truong, H.L.; Dustdar, S. SYBL: An Extensible Language for Controlling Elasticity in Cloud Applications. In Proceedings of the 2013 13th IEEE/ACM International Symposium on Cluster, Cloud, and Grid Computing, Delft, The Netherlands, 13–16 May 2013; pp. 112–119. [Google Scholar] [CrossRef]
  35. Sehout, O.N.; Ghiat, M.; Benzadri, Z.; Belala, F. A Component-based Modeling of Edge Systems Computing. In Proceedings of the ICAASE, Constantine, Algeria, 1–2 December 2018; pp. 99–106. [Google Scholar]
  36. Venticinque, S.; Amato, A. A methodology for deployment of IoT application in fog. J. Ambient. Intell. Humaniz. Comput. 2019, 10, 1955–1976. [Google Scholar] [CrossRef]
  37. Theodoropoulos, T.; Makris, A.; Korontanis, I.; Tserpes, K. GreenKube: Towards Greener Container Orchestration using Artificial Intelligence. In Proceedings of the 2023 IEEE International Conference on Service-Oriented System Engineering (SOSE), Athens, Greece, 17–20 July 2023; pp. 135–139. [Google Scholar] [CrossRef]
  38. Bhattacharjee, A.; Barve, Y.; Kuroda, T.; Gokhale, A. CloudCAMP: A Model-Driven Generative Approach for Automating Cloud Application Deployment and Management; Vanderbilt University: Nashville, TN, USA, 2017. [Google Scholar]
Figure 1. PRISMA workflow of study selection.
Table 1. Search terms to identify CMLs.
Terms
cloud application model
cloud modeling language
open standards for cloud
specifications for cloud-edge
infrastructure as code
TOSCA
Openstack HOT
extensions of TOSCA
extensions of HOT
DSL (Domain-Specific Languages)
Table 2. Keywords for literature review.
Keywords
cloud computing
edge computing
workflows
runtime adaptation
elasticity specification
time constraints
geolocation constraints
domain-specific language
model-driven engineering
fog computing
internet of things
TOSCA
orchestration
deployment
infrastructure as code
multi-cloud
standards
self-adaptive system
cloud service life cycle
time-critical cloud application
quality of service
application topology language
configuration
component model
Table 3. Results—Part A.
TOSCA | Wrangler | FRACTAL4EDGE | GENTL | TOSCA Lifecycle Ext. | Aeolus Blender | HOT Ext. | SYBL | Uni4Cloud | ADOxx | UML Ext. | CAMEL | HOT
time constraints
geolocation constraints
deployment
runtime adaptation
placement
VMs
Containers
resource requirements
images
physical devices (hosts)
application components
relationships
dependencies
environment values
ports
optimization rules
cost constraints
authentication credentials
cloud provider-independent
attached scripts
execution flow
QoS criteria
end devices
✓ indicates that the feature is available; ✗ indicates that it is unavailable.
Table 4. Results—Part B.
TOSCA IoT Ext. | EDDM | TOSCA Light | TOSCA MICADO | Self-Adapt. IoT DSL | velo DSL | SMADA-Fog | MoDMaCAO | TOSCA for Edge and Fog
time constraints
geolocation constraints
deployment
runtime adaptation
placement
VMs
Containers
resource requirements
images
physical devices (hosts)
application components
relationships
dependencies
environment values
ports
optimization rules
cost constraints
authentication credentials
cloud provider-independent
attached scripts
execution flow
QoS criteria
end devices
✓ indicates that the feature is available; ✗ indicates that it is unavailable.

Share and Cite

MDPI and ACS Style

Korontanis, I.; Makris, A.; Tserpes, K. A Survey on Modeling Languages for Applications Hosted on Cloud-Edge Computing Environments. Appl. Sci. 2024, 14, 2311. https://doi.org/10.3390/app14062311
