Article

Flexible Hyper-Distributed IoT–Edge–Cloud Platform for Real-Time Digital Twin Applications on 6G-Intended Testbeds for Logistics and Industry

by Maria Crespo-Aguado *, Raul Lozano, Fernando Hernandez-Gobertti, Nuria Molner and David Gomez-Barquero

Institute of Telecommunications and Multimedia Applications (iTEAM), Universitat Politècnica de Valencia (UPV), 46022 Valencia, Spain

* Author to whom correspondence should be addressed.
Future Internet 2024, 16(11), 431; https://doi.org/10.3390/fi16110431
Submission received: 19 October 2024 / Revised: 15 November 2024 / Accepted: 18 November 2024 / Published: 20 November 2024
(This article belongs to the Special Issue Convergence of Edge Computing and Next Generation Networking)

Abstract
This paper presents the design and development of a flexible hyper-distributed IoT–Edge–Cloud computing platform for real-time Digital Twins in real logistics and industrial environments, intended as a novel living lab and testbed for future 6G applications. It expands the limited capabilities of IoT devices with extended Cloud and Edge computing functionalities, creating an IoT–Edge–Cloud continuum platform composed of multiple stakeholder solutions, in which vertical application developers can take full advantage of the computing resources of the infrastructure. The platform is built together with a private 5G network to connect machines and sensors on a large scale. Artificial intelligence and machine learning are used to allocate computing resources for real-time services by an end-to-end intelligent orchestrator, and real-time distributed analytic tools leverage Edge computing platforms to support different types of Digital Twin applications for logistics and industry, such as immersive remote driving, with specific characteristics and features. Performance evaluations demonstrated the platform’s capability to support the high-throughput communications required for Digital Twins, achieving user-experienced rates close to the maximum theoretical values, up to 552 Mb/s for the downlink and 87.3 Mb/s for the uplink in the n78 frequency band. Moreover, the platform’s support for Digital Twins was validated via QoE assessments conducted on an immersive remote driving prototype, which demonstrated high levels of user satisfaction in key dimensions such as presence, engagement, control, sensory integration, and cognitive load.

1. Introduction

The 6G era envisions connecting the human, digital, and physical worlds and revolves around their interaction: a human world of our senses, bodies, intelligence, and values; a digital world of information, communication, and computing; and a physical world of objects and organisms [1,2]. In this three-pillar vision, depicted in Figure 1, the concept of Digital Twins is presented as a mix of the best of each world’s features [3]. Digital Twins are 3D virtual representations that serve as real-time digital counterparts of physical entities, providing ubiquitous tools for the simulation and analysis of complex environments. Taking advantage of the Internet of Things (IoT) data gathered by sensors, machines, robots, and cameras, Digital Twins enable continuous monitoring and adaptive control and optimize resource use and decision-making by leveraging real-time data for a dynamic virtual representation of physical assets.
The use of Digital Twins is particularly advantageous in complex logistics and industrial environments, where the coordination of moving objects (e.g., vehicles, cranes, ships, or robots) is critical. Using data from sensors, cameras, and LiDARs, Digital Twins can monitor the real-time locations of these objects, optimizing the scheduling and coordination of resources. Through immersive cockpits, operators can visualize a virtual representation of the entire logistics or factory line, detecting bottlenecks and adjusting workflows in real time. The integration of artificial intelligence (AI)-powered systems, including computer vision and sensor analysis, enables Digital Twins to autonomously monitor the operation. When combined with machine learning (ML), Digital Twins can even predict hazards (e.g., traffic collisions, access to restricted areas, equipment failures, or environmental factors) and send alerts to operators.
Nonetheless, the implementation of real-time Digital Twins poses significant challenges to communication networks. Digital Twins rely on real-time data to create virtual replicas of physical environments, including video streams (for 2D object detection, collision avoidance, or security monitoring), LiDAR streams (for precise localization and 3D object detection), or telemetry (e.g., temperature, position, or operational status). Given the computing capacity required to process all this information with low delay, the data gathered by IoT devices must be offloaded to more powerful servers. However, if not managed correctly by communication systems, Digital Twins have the potential to inundate network infrastructures, reducing efficiency and increasing latency, which impedes the real-time functioning of the application. Moreover, the number of IoT devices interconnected in a specific Digital Twin application is expected to grow in the future, further saturating the networks [4].
To mitigate this effect, computations must occur closer to the data source, reducing latency and improving response times. This is particularly beneficial in time-sensitive Digital Twin applications such as autonomous driving [5], industrial robotics [6], and immersive teleoperation [7], where even milliseconds of delay can have critical consequences. As depicted in Figure 1, Edge computing ensures the efficient processing of IoT data, since it allows the data gathered by different devices (e.g., sensors, machines, robots, and cameras) to be offloaded and processed outside these devices in order to create useful information for the user. This implies not only reduced latency and improved response times but also a simplification of the devices themselves, which lowers their fabrication costs. In addition, Edge computing also filters data locally before sending only the necessary insights to the Cloud, thereby reducing bandwidth usage and ensuring quicker decision-making [8].
Throughout the literature, there are many Edge computing architecture proposals to implement real-time Digital Twins or support the testing of future applications for research purposes. However, few of them are implemented in private 5G networks and deployed in real logistics and industrial scenarios. To address this gap, the proposed solution involves designing and implementing a hyper-distributed IoT–Edge–Cloud computing platform that is automatically managed for real-time Digital Twins in logistics and industrial environments. This platform is intended as a living lab and testbed for future 6G applications, developed to meet the requirements of end users and designed in close collaboration with them. The design integrates the latest advancements in AI-driven analytics, machine learning-based automation, and cross-domain interoperability. The system’s hyper-distributed nature ensures that computation and decision-making occur at the optimal point—whether at the Edge, in the Cloud, or within the IoT device itself—depending on the application’s specific requirements.
This document is organized as follows: Section 2 provides a comprehensive overview of the state of the art in Digital Twin technologies, Edge computing, and IoT integration. Section 3 delves into the detailed design and development of the proposed hyper-distributed platform, highlighting its key components and functionalities, such as AI-driven orchestration. Section 4 presents the experimental platform setup, comprising two sites, including the frameworks used for the validation of the platform and early results on the 5G core and RAN performance. The validation also includes the implementation of a Digital Twin application prototype featuring immersive remote driving, where the QoS offered by the platform is demonstrated via an extensive QoE evaluation of the application. Section 5 discusses open challenges and potential future research directions, while Section 6 summarizes the key findings and contributions of this work.

2. Existing Solutions and Similar Testbeds

The rapid evolution of Digital Twin technologies, Edge computing, and IoT ecosystems has paved the way for advanced platforms that aim to address real-time industrial needs [9]. However, the development of such frameworks is still in its early stages, particularly when it comes to large-scale adoption and seamless integration across the IoT–Edge–Cloud continuum. Existing solutions [10] primarily focus on specific applications, and while they demonstrate promising results, gaps remain in areas such as scalability, latency management, and orchestration efficiency. In this section, we review state-of-the-art frameworks, platforms, and testbeds that lay the foundation for our proposed approach, highlighting their strengths and limitations in meeting the demands of 6G-enabled Digital Twins and hyper-distributed [11] Edge architectures that enable flexible and decentralized resource utilization. These testbeds serve as critical experimental environments, enabling researchers to validate novel concepts and technologies that will ultimately shape future industrial systems. In fact, ref. [12] explores how multimodal sensing data inform real-time Digital Twins, aligning closely with the objectives of our hyper-distributed IoT–Edge–Cloud platform for real-time industrial sensing and communication. That study's insights into the 6G research landscape shed light on critical improvements needed to support the platform, especially in testbed configuration and operational requirements.

2.1. Digital Twin Frameworks and Platforms

Digital Twins are becoming a paradigm-changer for different verticals, transforming how products and services are made and delivered, and allowing for the full digitalization of industrial elements (data, sensors, robotics, vehicles, etc.). Digital Twins are especially relevant for logistics and industrial environments, as they optimize the manufacturing process to reduce costs and increase operational efficiency and flexibility [13].
In addition, Digital Twins have immense potential in healthcare [14], smart cities [15], and robotics [16]. For instance, in healthcare, Digital Twins could simulate patient-specific models for personalized treatment planning, enhancing precision medicine. In smart cities, they can improve urban mobility management to reduce traffic congestion and fuel consumption. Furthermore, in robotic systems like drones and self-driving vehicles, Digital Twins enable better decision-making by providing real-time updates and scenario-based predictions, significantly improving safety and reliability.
Several Digital Twin frameworks and platforms have emerged to meet the growing demand for the real-time simulation and management of physical entities. Siemens’ MindSphere [17] and General Electric’s Predix [18] are among the leading platforms that provide end-to-end solutions for industrial IoT and Digital Twin implementations. MindSphere is a Cloud-based open IoT operating system that allows businesses to connect products, plants, systems, and machines, enabling robust analytics and the creation of Digital Twins for predictive maintenance and optimization. GE’s Predix, on the other hand, is a dedicated industrial IoT platform that integrates with AI/ML tools to develop Digital Twins for industrial assets. Both platforms emphasize scalability, offering integration with Edge devices, Cloud infrastructures, and third-party applications, making them suitable for large-scale industrial ecosystems. However, despite their broad capabilities, both platforms face challenges in supporting ultra-low latency and real-time synchronization across distributed environments, particularly in scenarios requiring immediate responses, such as autonomous systems or robotics. While these platforms are robust in terms of industrial IoT capabilities, comparisons with certain studies [19,20] reveal gaps in achieving the ultra-low latency and high scalability necessary for distributed Edge environments. Studies show that MindSphere and Predix face limitations in seamless interoperability, i.e., smooth and application-transparent integration, between Edge and Cloud layers, often relying on proprietary integration methods that restrict flexibility in hyper-distributed architectures.
Recent frameworks like IBM’s Maximo Application Suite (MAS) [21] and Hitachi’s Lumada [22] further exemplify industry efforts to build end-to-end IoT and Digital Twin solutions. IBM MAS focuses on asset performance management and predictive maintenance through AI-driven analytics, enhancing operational efficiency and supporting large-scale industrial IoT deployments. Lumada, developed by Hitachi, offers a modular platform for creating Digital Twins that optimize manufacturing and logistics operations through data-driven insights, with specific capabilities for Edge deployment to minimize latency and improve real-time responsiveness.
Open-source solutions such as Eclipse Ditto [23] and FIWARE [24] also play a significant role in advancing Digital Twin applications by offering more customizable and flexible frameworks. Eclipse Ditto focuses on managing digital representations of physical devices by providing a middleware layer that facilitates the synchronization of data and state between Edge devices and Cloud services. FIWARE, combined with its IoT and context-broker components, enables the development of Digital Twins across smart city and industrial use cases by integrating a wide range of IoT data sources. These platforms offer higher adaptability for research purposes and experimental testbeds due to their open-source nature, but they often require more intricate development and configuration efforts. Furthermore, while these platforms have made significant strides in terms of data management and resource orchestration, they still face limitations in real-time, high-volume data processing, which is essential for next-generation applications that rely on ultra-reliable low-latency communication (URLLC) and large-scale IoT deployments.
FIWARE’s recent updates emphasize modular interoperability, allowing the flexible integration of different IoT devices through open-standard APIs, yet studies indicate potential scalability challenges in handling the high-frequency data streams necessary for real-time Digital Twins [25]. Eclipse Ditto, while effective for synchronizing device states, still requires advancements in latency handling for time-sensitive IoT applications, as highlighted in recent evaluations [26].
Real-time applications of Digital Twins demand a tremendous amount of data collection, as well as virtualization, analytics, and rendering mechanisms. A Digital Twin representation also requires high computational capacity in both the Edge and Cloud domains, demands that 5G may not be able to adequately fulfill but that 5G-Advanced and 6G features will [27]. To this end, future Digital Twin applications will need to rely on the adoption of 5G-Advanced technologies [28] that will enable the maximum exploitation of Digital Twin functionalities: (i) 5G-Advanced–IoT (5G-A-IoT) to connect machines and sensors on a large scale; (ii) a distributed IoT-to-Edge-to-Cloud continuum platform composed of resources from different providers in a transparent manner for the end users and verticals; and (iii) the integration of AI/ML analytic tools to furnish Digital Twins and the IoT-to-Edge-to-Cloud continuum platform with the intelligence required for real-time performance.
After a period of experimental evaluation of the first IoT generations and 5G, companies are now moving to the next level of digitalization of their supply chains. Several industrial elements, like sensors, devices, and machines, remain poorly interconnected [29], an opportunity that can be exploited to move one step closer to the full connection of the industrial ecosystem. In this regard, the 3GPP roadmap for future releases aims to explore new 5G-A-IoT technologies to cover emerging market demands. While 5G adopts a human-centric approach mainly focused on user connectivity aspects and the early demands of verticals, 5G-Advanced needs to go a step beyond to address IoT machine-centered use cases (5G-A-IoT) [30]. At the moment, radio access networks (RANs) in many industrial premises still rely on wired network technologies, since wireless interfaces do not satisfy their requirements [31], and machine-type communication scenarios are covered by LTE-based technologies such as LTE-M and NB-IoT, whose capabilities do not meet the strict latency demands of real-time monitoring [32]. Requirements in terms of capacity, latency, reliability, and flexibility for automated real-time and collaborative robotics applications can only be met with 5G Rel-18, 5G-Advanced, and beyond.

2.2. Edge and Cloud Computing Solutions for IoT Applications

The advent of hyper-distributed platforms integrating the Internet of Things (IoT) and Edge and Cloud computing is key to tackling the complex challenges in modern logistics and industrial settings, especially for real-time Digital Twin applications. As the 6G network paradigm emerges, the development of systems capable of handling vast data from IoT devices while meeting the low-latency and high-throughput demands of real-time applications becomes increasingly crucial. The integration of IoT, Edge, and Cloud layers forms a cohesive platform where data from IoT devices are first processed at Edge nodes and then analyzed further in Cloud infrastructures [33]. By performing initial data filtering and aggregation at Edge nodes, the platform minimizes unnecessary data transmission to the Cloud, which reduces bandwidth usage and alleviates network congestion. This approach is especially beneficial in scenarios with high data volumes, where only relevant or summarized data need to be forwarded for further analysis, ultimately improving both system efficiency and responsiveness. This distribution of computational resources is necessary to optimize performance while minimizing latency, which is vital for real-time applications like Digital Twins that replicate physical environments in real time [34]. In particular, ref. [35] presents an architecture focused on real-time optimization and control within 6G Digital Twin Networks (DTNs). Although primarily focused on DTNs rather than IoT–Edge–Cloud systems for logistics, its architectural principles and real-time control insights indirectly contribute to the development of a hyper-distributed IoT platform, aligning with long-term 6G objectives for high-performance, ultra-reliable systems. Another example [36] introduces a Cloud-based framework for modular and context-aware services in healthcare, but its adaptable, reconfigurable design directly applies to hyper-distributed IoT systems in logistics and manufacturing. By understanding such modular frameworks, we can develop a flexible IoT–Edge–Cloud platform capable of seamlessly handling multiple applications and efficiently managing resource allocation. This flexibility is paramount for high-performance 6G testbeds that support Digital Twin applications and adapt to varying industrial and real-time demands.
IoT-to-Cloud technologies serve as the backbone for connecting a vast array of IoT devices to centralized Cloud infrastructures, enabling data aggregation, analysis, and decision-making at scale. Key technologies facilitating this integration include protocols like Message Queuing Telemetry Transport (MQTT) [37], Hypertext Transfer Protocol (HTTP) [38], and Constrained Application Protocol (CoAP) [39]. These lightweight communication protocols are designed for efficient data transfer between resource-constrained IoT devices and Cloud services, with MQTT being particularly popular for its low-bandwidth consumption and reliable messaging over unreliable networks. Coupled with these protocols, middleware platforms like AWS IoT Core [40], Microsoft Azure IoT Hub [41], and Google Cloud IoT [42] offer essential services, such as device management, data storage, and analytics. These platforms provide seamless integration between IoT devices and Cloud environments, allowing organizations to scale their deployments, implement security mechanisms, and leverage Cloud-native services for data processing and AI-driven insights. Furthermore, Apache OpenWhisk, an open-source platform, allows for serverless function execution in Edge environments, providing flexible resource scaling; however, its performance in latency-sensitive applications is limited by event-driven processing speeds, as reported in recent studies [43].
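As a minimal illustration of this IoT-to-Cloud pattern, the following Python sketch publishes sensor telemetry to an MQTT broker; the broker address, topic, and payload fields are hypothetical placeholders rather than any particular deployment's configuration.

```python
# Minimal sketch of IoT-to-Cloud telemetry over MQTT (paho-mqtt 1.x API).
# Broker address, topic, and payload fields are illustrative placeholders.
import json
import time

import paho.mqtt.client as mqtt

BROKER = "broker.example.com"   # hypothetical Cloud/Edge MQTT broker
TOPIC = "factory/robot-01/telemetry"

client = mqtt.Client()          # paho-mqtt 1.x style constructor
client.connect(BROKER, port=1883, keepalive=60)
client.loop_start()             # handle network traffic in a background thread

while True:
    payload = {
        "temperature_c": 41.7,  # placeholder sensor reading
        "position": {"x": 12.4, "y": 3.1},
        "timestamp": time.time(),
    }
    # QoS 1 trades some overhead for at-least-once delivery over lossy links
    client.publish(TOPIC, json.dumps(payload), qos=1)
    time.sleep(1.0)
```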
In addition to communication protocols and Cloud platforms, Edge computing plays a pivotal role in enhancing IoT-to-Cloud integration by decentralizing data processing and reducing the load on Cloud infrastructures. By introducing computation closer to the IoT devices, Edge nodes can handle time-sensitive tasks locally, filtering data before they are sent to the Cloud, thus minimizing latency and bandwidth usage. Technologies such as Kubernetes [44] and OpenStack [45], extended to support Edge environments, facilitate the deployment of microservices across hybrid IoT-to-Cloud architectures. These containerized applications enable scalable, distributed processing across heterogeneous systems, ensuring that critical tasks, such as real-time analytics and AI inference, are performed at the Edge, while the Cloud handles long-term data storage, large-scale analytics, and machine learning model training. However, while these integration technologies significantly improve IoT performance, challenges remain in orchestrating resources dynamically across distributed layers and ensuring secure, seamless interoperability between diverse IoT devices and Cloud services.
Edge-to-Cloud orchestration is critical in ensuring seamless resource management across this continuum. Orchestration systems handle the deployment and reconfiguration of services, monitor system performance, and enforce security protocols across heterogeneous environments [46]. AI/ML techniques further enhance the orchestration process by enabling dynamic task offloading, adaptive load balancing, and real-time fault detection. These algorithms optimize task scheduling based on latency, energy efficiency, and resource availability, ensuring that services are allocated to the most suitable layer, whether the Edge, the Cloud, or the device itself, depending on the computational requirements and real-time conditions. This flexibility is essential for maintaining performance across distributed architectures. For instance, ref. [47] offers a valuable model for achieving real-time, adaptable security for dispersed IoT systems. Using Behavior–Interaction–Priority components to ensure data-driven security and model-checking, this approach aligns well with the hyper-distributed architecture’s emphasis on security and low latency, providing an additional layer of validation and dependability for Digital Twin and IoT applications that require accurate, real-time data.
One of the major challenges in these platforms is managing the heterogeneity and volatility of Edge nodes [46], which often consist of devices with varying computational power, storage capacity, and connectivity. To address this, modern Edge solutions employ resource orchestration frameworks that dynamically allocate tasks based on the capabilities and real-time conditions of each Edge node [48]. Technologies such as Kubernetes with KubeEdge [49] and Apache OpenWhisk [50] enable the deployment and management of containerized applications across a distributed Edge infrastructure, ensuring efficient resource utilization and low-latency response times. Frameworks like Microsoft Azure IoT Edge and Google Anthos extend Kubernetes to Edge use cases, but their studies reveal constraints in orchestrating resource-intensive tasks across decentralized nodes under variable network conditions [51]. These platforms allow for dynamic task offloading between the Edge and Cloud, optimizing performance by processing time-sensitive tasks at the Edge and more complex, data-intensive workloads in the Cloud. Additionally, federated learning is being leveraged in Edge environments to mitigate the challenges of decentralized data by training AI models locally on Edge nodes and sharing only the learned parameters with the Cloud, reducing data transmission and enhancing privacy. However, ensuring robust security, fault tolerance, and seamless interoperability across these heterogeneous and often transient Edge devices remains a key technical hurdle in realizing the full potential of Edge computing for IoT applications.

3. Design of the IoT–Edge–Cloud Platform

This section presents the architecture for the hyper-distributed IoT–Edge–Cloud platform that enables real-time Digital Twin applications for logistics and industrial scenarios by integrating advanced computing resources across IoT, Edge, and Cloud environments. This platform leverages a flexible and scalable infrastructure that dynamically orchestrates computational tasks across geographically dispersed nodes, ensuring high performance and low latency for real-time operations. The combination of Digital Twins and the self-managed IoT–Edge–Cloud computing platform with artificial intelligence (AI) and machine learning (ML) will minimize human involvement in the design and validation of the physical network, which brings several benefits at once (e.g., lower labor costs and fewer human errors).

3.1. System Architecture Design

The diagram in Figure 2 illustrates a high-level system architecture of a flexible, hyper-distributed IoT–Edge–Cloud platform designed to be deployed in two different sites. It can be scaled based on the number of connected IoT devices, the geographical distribution of Edge nodes, and the specific industrial needs, ensuring the smooth functioning of real-time Digital Twin applications. This level of scalability is essential to meeting the demanding requirements of modern logistics and industrial environments, where operational efficiency, low latency, and high reliability are paramount. The architecture includes several layers: the IoT, Edge, Cloud, and orchestration layers.
In the IoT-Device layer, a diverse set of devices—including sensors, cameras, industrial robots, and vehicles—continuously gather data from the physical environment. The number and type of devices are variable, allowing the architecture to accommodate different scenarios, from a few localized devices to hundreds distributed over larger areas. The IoT devices are connected to the Edge layer via a private 5G network, divided into the radio access network (5G RAN) and core network (5GC). The private 5G network supports slicing at the RAN, transport, and core levels, which allows the creation of multiple virtual networks that can be tailored to the different requirements of each IoT application, such as bandwidth, latency, or security needs.
In the Edge computing layer, powerful multi-core processors and memory in geographically distributed Edge nodes are strategically deployed to manage latency-sensitive data processing. These nodes, located wherever needed, handle real-time responses locally, significantly reducing the need for data transfer to the Cloud. The Edge nodes can vary in number and capacity, depending on the specific application, and can be scaled as needed to ensure low-latency performance.
The Cloud computing layer, with its vast computational resources, serves as a central hub for more resource-intensive tasks, such as long-term data storage and in-depth analysis, allowing the system to offload non-latency-sensitive workloads.
Finally, the orchestration layer is critical in managing resources, applications, and services across the whole distributed platform. It dynamically allocates tasks to the most appropriate computing resources based on real-time conditions, performance needs, and service-level agreements (SLAs). It plays a critical role in managing the lifecycle of services and applications across the IoT, Edge, and Cloud infrastructure. A crucial aspect of this layer is the dynamic placement of network functions (NFs) that comprise the 5G core, which can be distributed between the Cloud and Edge servers based on the specific requirements of the use case. For example, the User Plane Function (UPF) can be moved closer to the Edge to reduce latency for real-time applications like Digital Twins.

3.2. Orchestration and Management

Two main tasks are managed by the orchestrator: inter-node orchestration and intra-node orchestration. Inter-node orchestration handles the distribution of services across multiple geographically dispersed Edge nodes, optimizing performance and resource usage across the system. Intra-node orchestration manages the resources within each Edge node, ensuring that computational power, memory, and other resources are used efficiently to meet the specific needs of applications.
The orchestrator uses declarative configuration to manage application deployment. Through YAML files and containerized services, users define the behavior, configurations, and policies for deployment, such as where to place workloads, how many instances to deploy, and how to monitor key performance indicators (KPIs). Moreover, a service catalog facilitates the deployment of predefined applications and services, streamlining the onboarding process. These definitions allow the orchestrator to automate service deployment and dynamically adjust resource allocation based on current demand, improving both performance and efficiency.
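For illustration purposes only (the platform's actual descriptor schema is not reproduced here), the Python sketch below shows how such a declarative YAML definition might be parsed and registered in a service catalog; all field names and values are hypothetical.

```python
# Illustrative service descriptor a user might submit to the orchestrator.
# The schema (placement, replicas, KPI policy) is hypothetical, shown only
# to make the declarative workflow concrete.
import yaml  # PyYAML

SERVICE_DESCRIPTOR = """
service: digital-twin-analytics
placement: edge-site-1        # where to place the workload
replicas: 2                   # initial number of instances
image: registry.example.com/dt-analytics:1.4
kpis:
  max_response_time_ms: 30    # KPI the orchestrator monitors
  target_cpu_utilization: 0.7
"""

def onboard(descriptor_text: str, catalog: dict) -> dict:
    """Parse a declarative descriptor and register it in the service catalog."""
    spec = yaml.safe_load(descriptor_text)
    catalog[spec["service"]] = spec
    return spec

catalog: dict = {}
spec = onboard(SERVICE_DESCRIPTOR, catalog)
print(f"Onboarded {spec['service']} with {spec['replicas']} replica(s)")
```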
A key feature of this orchestrator is its AI-driven automation, which enhances orchestration by predicting system loads, optimizing resource allocation, and making intelligent deployment decisions in real time. This AI module is composed of two primary components: the Prediction Analytics Engine and the Decision Engine. Together, these components enable the orchestrator to adapt dynamically to changing conditions, improving both performance and energy efficiency while reducing operational costs.
The Prediction Analytics Engine utilizes an ML-based prediction model to anticipate future system demands. By analyzing historical CPU utilization data collected from the infrastructure and applications, the model predicts future CPU usage through an ARIMA time-series prediction approach [52]. This prediction is then used by the Decision Engine to determine the necessary scaling actions in real time, such as adjusting the number of replicas or redistributing resources across nodes. This predictive capability helps ensure that the system can proactively manage resource demands, especially during periods of high traffic or anticipated load surges.
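A minimal sketch of this forecasting step follows, assuming per-minute CPU utilization samples and using the statsmodels ARIMA implementation; the model order (2, 1, 2) and the synthetic data are arbitrary illustrative choices, not the platform's tuned configuration.

```python
# Sketch of the Prediction Analytics Engine's forecasting step.
# Assumes `cpu_history` holds per-minute CPU utilization samples (0-100%);
# the ARIMA order is illustrative, not the platform's tuned configuration.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(seed=0)
# Synthetic stand-in for historical CPU utilization collected from the nodes
cpu_history = 50 + 10 * np.sin(np.linspace(0, 8 * np.pi, 240)) + rng.normal(0, 2, 240)

model = ARIMA(cpu_history, order=(2, 1, 2))   # (p, d, q) chosen for illustration
fitted = model.fit()

# Forecast CPU utilization for the next 15 minutes
forecast = fitted.forecast(steps=15)
print("Predicted peak CPU over next 15 min: %.1f%%" % forecast.max())
```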
The Decision Engine evaluates real-time traffic and system conditions to determine the optimal resource allocation strategy based on one of two focus areas: performance or energy efficiency. In performance optimization mode, the Decision Engine dynamically scales the resources in response to increased user traffic, maintaining the required Quality-of-Service (QoS) metrics, such as response time and throughput, even under heavy-load conditions. For example, if traffic spikes due to a sudden influx of IoT data or requests for Digital Twin applications, the Decision Engine will increase the number of application pods in real time to handle the load, ensuring that response times remain low and throughput remains high. This real-time scaling based on AI-driven predictions allows the orchestrator to meet stringent performance requirements consistently, adapting to fluctuating demand without manual intervention.
Energy efficiency optimization is particularly prioritized during periods of low demand. The Decision Engine focuses on scaling down the resources as user traffic decreases, minimizing energy consumption while still meeting QoS requirements. For instance, in periods of low user activity, the orchestrator may reduce the number of replicas for less critical services or applications, conserving energy and preventing resource over-provisioning. This reduction is based on the Prediction Analytics Engine’s forecast of lower CPU usage, enabling the system to save energy without compromising service quality.
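A simplified sketch of how such a Decision Engine might map a CPU forecast to a replica count and apply it through the Kubernetes scale subresource is shown below; the thresholds, deployment name, and namespace are hypothetical, and the official kubernetes Python client is assumed to be configured for the target Edge cluster.

```python
# Sketch of the Decision Engine's scaling logic: map predicted CPU load to a
# replica count, then apply it via the Kubernetes scale subresource.
# Thresholds, names, and namespace are hypothetical placeholders.
import math

from kubernetes import client, config

def replicas_for(predicted_cpu: float, per_replica_capacity: float = 70.0,
                 min_replicas: int = 1, max_replicas: int = 10) -> int:
    """Scale up ahead of predicted load; scale down toward min_replicas when idle."""
    needed = math.ceil(predicted_cpu / per_replica_capacity)
    return max(min_replicas, min(max_replicas, needed))

config.load_kube_config()          # assumes a kubeconfig for the Edge cluster
apps = client.AppsV1Api()

predicted_cpu = 180.0              # e.g., output of the ARIMA forecast (% of one core)
target = replicas_for(predicted_cpu)

# Patch only the scale subresource of the (hypothetical) deployment
apps.patch_namespaced_deployment_scale(
    name="dt-analytics",
    namespace="edge-site-1",
    body={"spec": {"replicas": target}},
)
print(f"Scaled dt-analytics to {target} replica(s)")
```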
This AI-driven orchestration approach addresses key limitations in current infrastructure by enabling the real-time, adaptive management of resources across the IoT–Edge–Cloud platform. By optimizing either performance or energy efficiency as needed, the orchestrator can support the dynamic requirements of IoT and Digital Twin applications, delivering a flexible and sustainable solution. The use of predictive analytics and automated scaling ensures that the platform can handle unpredictable workloads effectively, balancing real-time response needs with sustainable energy usage.

4. Development and Validation of the Platform

This section describes the experimental development and validation of the proposed IoT-to-Edge-to-Cloud platform architecture and is structured into several parts covering the platform's architecture, experimental setup, and evaluation metrics. Initially, it delves into the detailed architecture of the platform, covering its multi-layered design, integration with Edge nodes, private 5G network, and advanced orchestration mechanisms to support real-time, latency-sensitive applications. This is followed by a thorough explanation of the experimental setup used to validate the architecture, focusing on the performance of critical components such as the 5G core and RAN. Finally, we demonstrate the platform's capabilities by evaluating its performance under real-world conditions using a demanding application for immersive remote driving. The results provide insight into the platform's ability to handle complex, latency-critical scenarios, ensuring scalability, resilience, and efficient resource utilization across IoT, Edge, and Cloud environments.

4.1. Experimental Platform Setup

To validate the architecture proposal, the experimental setup shown in Figure 3 was deployed using a two-site, geographically distributed design. The setup integrates Edge nodes at both sites, interconnected via secure links, with each site playing a vital role in managing data and running real-time applications.
Edge Site 1 hosts a main router (Router 1) that provides access to both the Cloud and two additional servers: one hosting the 5G core and the other responsible for hosting the Edge applications (Edge Server 1). A secure direct connection, protected by a firewall, links Edge Site 1 to Edge Site 2, ensuring safe communication between the two locations. Edge Site 2 similarly features a main router (Router 2) that connects Edge Server 2 to the 5G core network.
Both sites are integrated with their respective 5G radio access networks (RANs), which operate under an open-source framework (OpenRAN) to facilitate the interoperability and replication of the setup. The RAN setup includes a remote Radio Unit (RU) that supports advanced functional splits, allowing efficient communication between the Distributed Units (DUs) and Centralized Units (CUs). Edge Site 1 features a Baseband Unit (BBU) logically composed of a DU and a CU, with the DU positioned near the RU to facilitate communication between the RU and the 5G core. The architecture supports various connectivity options between the two sites, including direct fiber connections and more complex network routes that integrate the 5G core.
To support the computational requirements of real-time Digital Twin applications, each Edge node is equipped with high-performance processors, ample RAM, and storage capable of handling intensive data processing and AI-driven analytics. As an example, one of the servers used in this experimental setup is configured with an Intel® Xeon® Gold 6548N 32-Core Processor at 2.80 GHz, paired with 4 × 64 GB of DDR5 4800 MHz ECC RDIMM server memory and NVMe-based storage for faster data access. Additionally, the server supports GPU processing with an NVIDIA T4 GPU, allowing the efficient handling of compute-intensive tasks, including machine learning inference and real-time analytics.
In terms of communication performance, it is important to note that this network is completely experimental, not commercial, with specific configurations to support research and development. The radio access network at Edge Site 1 is configured with a TDD pattern that has an uplink/downlink slot ratio of 3/7, prioritizing the uplink to better suit the needs of real-time applications that require robust uplink performance. The n40 band operates with a bandwidth of 20 MHz, a 256QAM modulation, and MIMO 2x2 on the downlink. Similarly, the n78 band is configured with a bandwidth of 100 MHz, a 64QAM modulation, and MIMO 2x2 on the downlink. These configurations facilitate high data throughput and low latency, crucial for maintaining seamless communication between the IoT, Edge, and Cloud layers of the platform.
The 5G core at this site deploys seven UPFs to serve different slices, allowing for effective resource allocation and testing across multiple scenarios. The UPF selection is determined by the Data Network Name (DNN) chosen by the UE. Additionally, the core database contains associations between the DNNs and the slices, making it easier to apply different Quality-of-Service (QoS) parameters to UEs, which is crucial for real-time IoT use cases.
Application orchestration across the two locations is handled by a Cloud-based orchestration platform. To enable the seamless deployment and management of Edge applications, a Kubernetes cluster is deployed at each Edge node, serving as the essential environment for hosting either entire applications or components of distributed applications. This setup ensures that any application intended to run at the Edge is appropriately deployed within the Kubernetes environment. The orchestrator manages the platform and application deployment, allowing for automatic and streamlined operation. Through this orchestration system, applications are dynamically deployed, scaled, and monitored across Edge sites, optimizing both energy efficiency and performance. This automation simplifies the overall functioning of the platform, enabling real-time adjustments to resource allocation based on current demands. An integrated AI module further enhances the orchestration process by analyzing continuous streams of data, allowing for advanced automation through the application of AI and ML techniques, as explained in Section 3.2.

4.2. Platform Performance Results

A series of tests was conducted to validate the deployment of the hyper-distributed IoT–Edge–Cloud platform and to characterize it comprehensively. The platform architecture is essentially the same across all proposed sites; therefore, testing was performed exclusively at Edge Site 1. Nonetheless, the results can be extrapolated and scaled to Edge Site 2 due to these similarities.
To thoroughly validate the platform’s performance, tests were divided into two main sets. The first set focused specifically on evaluating the 5G core network, including its handling of control- and user-plane latencies as well as peak data rates under various load scenarios, as shown in Table 1. The second set assessed the complete platform at Edge Site 1, as illustrated in Table 2, encompassing both the 5G core and the radio access network (RAN), to provide a holistic view of the system’s performance, considering real-world conditions across the entire network infrastructure.

4.2.1. Evaluation of the 5G Core

The first set of tests evaluated the latency of the network core in the control and user planes and measured the supported peak data rate. Both tests were performed under different load scenarios to evaluate performance under stress as well as in ideal situations. In addition, some tests were repeated using the seven slices available at Edge Site 1 to evaluate the performance change when using network slicing.
To carry out these 5G core evaluation tests, the LoadCore simulation tool [54] was employed to emulate the radio access network (RAN) and multiple UEs. LoadCore is specifically designed to evaluate 5G Standalone (5G SA) core deployments by testing performance and conformance at scale for both the control and user planes. This tool allows for the comprehensive emulation of UEs, gNodeBs (gNB), and core network functions, facilitating a broad range of tests, including capacity and latency assessments, mobility scenarios, isolated node or interface testing, and Quality-of-Experience (QoE) measurements. LoadCore also supports the validation of complex service-based architectures, which is essential for evaluating the flexibility and scalability of 5G network deployments.
LoadCore not only generates traffic through the N1, N2, and N3 interfaces to enable accurate evaluation of the user and control planes of the 5G core network but also performs in-depth measurements and provides visual representations of the results, making it easier to analyze and understand performance metrics. This functionality is especially beneficial for identifying potential performance limits under various operational conditions.
Regarding the experimental conditions, LoadCore was configured with all the necessary IP addresses for the 5G core, the VLANs, and the UEs, which were also provisioned in the core network. Additionally, the load for each UE, the number of UPFs, and other test-specific parameters were carefully set in each test, ensuring a thorough evaluation of network performance across diverse scenarios. This level of customization allowed for the precise testing of both capacity and latency, ensuring that the 5G core network could be evaluated comprehensively under realistic and high-stress conditions.
In the single-UE test with downlink TCP (Transmission Control Protocol) traffic and a single UPF, the maximum downlink throughput reached 468 Mb/s. Introducing uplink traffic reduced the downlink rate to 345 Mb/s, while the uplink throughput reached 364 Mb/s. To further simulate real-world scenarios where multiple IoT devices are connected, tests with 10 simulated UEs connected to a single UPF were conducted. Each UE was configured with 50 Mb/s uplink and 100 Mb/s downlink TCP traffic. The peak downlink and uplink data rates in this test were 684 Mb/s and 336 Mb/s, respectively, as shown in Figure 4.
Latency tests were conducted to assess the 5G system’s performance in low-latency scenarios, which is critical for determining the platform’s suitability for IoT–Edge–Cloud applications. These tests evaluated both control-plane and user-plane latencies using UDP traffic. For the user-plane latency tests, LoadCore emulated a fixed number of UEs, each configured with an equal UDP traffic data rate. The results showed that 81% of downlink packets had an OWD (one-way delay) between 125 and 250 µs, with 99.8% of downlink jitter below 125 µs. For uplink traffic, 87.8% of packets had an OWD below 125 µs, and 97% of jitter was below 125 µs.
Control-plane latency was evaluated by conducting tests where LoadCore emulated twenty UEs, each cycling between idle and data transmission states, completing a total of 200 cycles with UDP traffic. This setup created a highly demanding scenario to measure the core network's performance under stress. In these tests, which involved a single UPF connected to the core, the average latency was measured at 0.3 s.
These tests were repeated across all seven available slices at Edge Site 1. With all UPFs active, the peak uplink data rate reached 830 Mb/s, while the downlink peaked at 882 Mb/s. Latency tests showed that 69% of downlink packets had an OWD between 125 and 250 µs, with 24% between 250 and 500 µs. For the uplink, 62% of packets had an OWD between 125 and 250 µs, with 24% between 250 and 500 µs.
Both the user- and control-plane latency tests yielded promising results, with latencies aligning well with expected values [55]. The low user-plane latencies are particularly advantageous for applications that require quick information exchange, such as Digital Twins in IoT–Edge–Cloud scenarios, where minimal delay is crucial for real-time responsiveness.
While the control-plane latency might appear elevated, it is important to highlight that these tests were carried out under high-stress conditions, with twenty instances of user equipment (UEs) connected simultaneously to a single User Plane Function (UPF), continuously transmitting traffic and switching between states. This represents an extreme case that is unlikely to occur in typical real-world scenarios. The fact that control-plane latency remains stable under such demanding conditions is highly encouraging, signaling robustness in the network’s core. In a more ideal scenario, where a single UE connects once to the core network, the control-plane latency is anticipated to be significantly lower. These results demonstrate the 5G core’s resilience and suitability for supporting complex, latency-sensitive applications across IoT, Edge, and Cloud environments.

4.2.2. Evaluation of 5G RAN and Core

While RAN simulators are instrumental in facilitating initial performance evaluations, they can overlook real-world performance factors, such as network constraints and variability. To complement the core network tests, additional real, non-simulated evaluations were conducted to assess the full performance of the Edge Site 1 network. These assessments, which included tests on the n40 and n78 frequency bands, measured the maximum throughput experienced by each user, taking into account the characteristics of the radio channel, the physical RAN, and the core network.
As described in Section 4.1, the radio access network at Edge Site 1 uses a 3/7 uplink/downlink slot ratio TDD pattern to prioritize the uplink, with the n40 band configured at 20 MHz (256QAM, MIMO 2x2 on the downlink) and the n78 band at 100 MHz (64QAM, MIMO 2x2 on the downlink), enabling high data throughput and low latency across the IoT, Edge, and Cloud layers.
All network measurements were taken using a 5G-enabled smartphone directly connected to the radio access network, performing iperf tests over a fixed duration to gauge throughput accurately. This approach provided a comprehensive view of the overall performance under real-world conditions, accurately reflecting the user experience across the entire network infrastructure.
To analyze the results of these platform tests, we compared the measured throughput values to theoretical values calculated using a standardized formula from 3GPP, specifically tailored for 5G New Radio (NR) throughput estimations [53]. This formula provides an estimate of the maximum achievable throughput for both user equipment (UE) and cell capacity, taking into account various factors, like the modulation scheme, resource block allocation, and overhead reduction.
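For reference, a simplified rendering of that peak data rate formula from 3GPP TS 38.306 [53] for J aggregated carriers is given below, where v_Layers is the number of MIMO layers, Q_m the modulation order, f the scaling factor, R_max = 948/1024 the maximum code rate, N_PRB the number of allocated resource blocks for numerology µ, and OH the overhead factor.

```latex
% Approximate peak data rate for J aggregated carriers (3GPP TS 38.306):
\text{data rate (Mb/s)} = 10^{-6} \sum_{j=1}^{J}
  v_{\mathrm{Layers}}^{(j)} \cdot Q_m^{(j)} \cdot f^{(j)} \cdot R_{\max}
  \cdot \frac{N_{\mathrm{PRB}}^{\mathrm{BW}(j),\mu} \cdot 12}{T_s^{\mu}}
  \cdot \bigl(1 - \mathrm{OH}^{(j)}\bigr),
\qquad T_s^{\mu} = \frac{10^{-3}}{14 \cdot 2^{\mu}}
```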
These tests, summarized in Table 2 and represented graphically in Figure 5, yielded a maximum downlink of 552 Mb/s and an uplink of 87.3 Mb/s for the indoor n78 band, while the outdoor n40 band achieved a downlink of 120 Mb/s and an uplink of 29 Mb/s. These measured values, being close to the theoretical limits, demonstrate the platform's robust performance under real-world conditions, effectively handling network constraints and channel variability with minimal deviation from ideal calculations.
These preliminary tests demonstrated the architecture’s scalability and high data throughput, which are required for real-time applications like Digital Twins, while highlighting areas for improvement, such as optimizing resource usage and reducing latency during peak loads. Further iterations will enhance the orchestration mechanisms for seamless operation across the IoT–Edge–Cloud continuum.

4.3. Digital Twin Application Prototype

The experimental setup is designed to support and validate a wide range of Digital Twin applications. One notable example is an immersive remote driving application, shown in Figure 6. This particular application has been tested at Edge Site 1 but could also be deployed at Edge Site 2, as the two sites share a very similar architecture. It features the remote control of two mobile robots over the private 5G network, while a Digital Twin of the robots, the network, and the scenario is represented in the user interface.
The robots, situated outdoors, are equipped with 360° cameras and an array of sensors, such as LiDAR, GNSS, and IMUs, to capture high-fidelity environmental data in real time. These sensor data are transmitted to immersive cockpits located indoors at the laboratory, where users experience a fully immersive environment through racing seats, pedal controls, VR headsets, and haptic vests. This setup simulates an authentic remote driving experience, where users perceive the robot's point of view and receive real-time haptic feedback based on robot–environment interactions. The integration of control, perception, and telemetry data (gathered by the Digital Twin) ensures that accurate, real-time feedback is provided to the operators, allowing for seamless bidirectional communication between the physical and digital realms.
Each robot and cockpit is connected via the 5G network, with robots using the n40 band for robust outdoor connectivity, while the indoor cockpits rely on a direct connection for enhanced bandwidth and performance. In this context, the high-speed data exchange between the robots and the control center is fundamental to ensuring the quality of both visual and haptic feedback, which is critical for enhancing the operator’s situational awareness and control accuracy.
The application’s architecture employs Edge computing to process incoming data streams directly at the Edge server, avoiding the need to route data to centralized Cloud servers, thus minimizing delays in teleoperation scenarios where split-second decision-making is essential for safe and efficient robot control. Furthermore, the distributed approach ensures system scalability, allowing the deployment of additional robots without significantly increasing network overhead or latency, which is critical as the platform expands to cover more complex missions or larger areas.
The Edge server executes two distributed applications: AI-based object detection (using models like YOLO) to identify pedestrians and obstacles in order to avoid collisions, and a Cloud Robotics Platform that plays a critical role in integrating the physical, digital, and virtual worlds. The latter is the core of the Digital Twin data, which are stored using InfluxDB and Network-as-Code (NaC) and interpreted to create a virtual replica of the robot, the network, and its environment, which are represented in the 3D user interface.
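As a hedged sketch of this Edge-side pipeline (the model weights, stream URL, connection details, and bucket names are illustrative placeholders, not the deployed configuration), the detection-and-storage loop could look like this:

```python
# Illustrative Edge pipeline: run object detection on a video frame and push
# the detections into InfluxDB for the Digital Twin. Model file, URLs, tokens,
# and bucket names are hypothetical placeholders.
import cv2
from ultralytics import YOLO
from influxdb_client import InfluxDBClient, Point

model = YOLO("yolov8n.pt")                    # lightweight model for Edge inference
influx = InfluxDBClient(url="http://edge-server-1:8086",
                        token="EDGE_TOKEN", org="iteam")
write_api = influx.write_api()

cap = cv2.VideoCapture("rtsp://robot-01/stream")   # hypothetical 360 camera feed
ok, frame = cap.read()
if ok:
    results = model(frame)[0]                 # detections for this frame
    for box in results.boxes:
        cls_name = model.names[int(box.cls)]
        point = (Point("detections")
                 .tag("robot", "robot-01")
                 .tag("class", cls_name)
                 .field("confidence", float(box.conf)))
        write_api.write(bucket="digital-twin", record=point)
```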

4.4. Application Performance

The evaluation of application performance for the Digital Twin prototype in immersive remote driving focuses on validating whether the platform can meet stringent operational requirements under real-world conditions. As teleoperation and feedback demand low-latency, high-throughput communication, a robust assessment is essential for determining the platform’s viability in practical settings. This section introduces the core metrics and goals necessary for analyzing application performance, covering both the QoS and QoE parameters critical for remote driving applications.
The QoS metrics aim to quantify the application’s technical capabilities in terms of bitrate and latency. By setting specific measurement targets for each metric, we aim to simulate realistic usage scenarios and identify potential bottlenecks that could impact the user experience.
In addition, QoE metrics capture user-centered performance indicators, emphasizing that the QoS offered by the platform can support real-time Digital Twin applications in terms of presence, engagement, control, sensory integration, and cognitive load. User feedback from real-world trials is integral to this analysis, as it highlights how well the platform aligns with user expectations and needs.

4.4.1. QoS Metrics and Measurement Goals

To ensure a comprehensive evaluation of KPIs, each measurement process must be designed with practical, real-world conditions in mind, as shown in Table 3.
In terms of network performance, KPIs like RTT, throughput, and reliability provide essential insights into the system’s communication robustness and efficiency. RTT measures the time for a data packet to travel to its destination and back, indicating the latency in the network path. According to our preliminary laboratory tests, we estimate that more than 30 ms latency would impede real-time video streaming. Throughput, captured in both the uplink (UL) and downlink (DL) directions, evaluates the system’s capacity to handle data-intensive applications, especially under high-load conditions. In order to eventually support the immersive remote control of four robots (two per site), we estimate that 32 Mb/s per site is required in the uplink (two 15 Mb/s 4K 360 video streams plus two 1 Mb/s telemetry flows), whereas only 1 Mb/s per site is required in the downlink (two 500 kb/s telecontrol flows). Since the cockpits are placed only at n78 in one of the sites, this band requires double the capacity of n40. Reliability is tracked continuously to assess the consistency of network connectivity, which is crucial for applications requiring uninterrupted, real-time data transmission. According to our experience, packet loss must be maintained below 1% for efficient video decoding.
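Restating this per-site capacity budget from the figures above:

```latex
% Per-site throughput budget for two robots (estimates from Section 4.4.1):
\mathrm{UL} = \underbrace{2 \times 15}_{\text{4K 360 video}}
            + \underbrace{2 \times 1}_{\text{telemetry}} = 32~\mathrm{Mb/s},
\qquad
\mathrm{DL} = \underbrace{2 \times 0.5}_{\text{telecontrol}} = 1~\mathrm{Mb/s}
```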
For video streaming, metrics like the streaming bitrate and latency are key for assessing the quality and responsiveness of transmitted visual information. The streaming bitrate reflects the amount of video data transferred per unit time, impacting image quality and visual clarity. We are limited by the parameters of the camera used in this prototype, which generates 15 Mb/s of traffic for a 4K 360 video using the RTSP protocol. Latency in video streaming is measured for 360-degree video streams, capturing the time delay between capturing the video feed and its display. Early measurements carried out in our laboratory show that the streaming latency is 300 ms on average for RTSP streaming over the 5G network, including its visualization in the user interface. These thresholds, however, are close to the ones defined in industry best practices [56] and empirical studies [57] in teleoperation and VR systems.
For telecontrol and feedback, precision is critical, as it directly affects the efficacy of the operation. The Command-to-Reception delay is recorded from the moment a command is issued by the user until the corresponding action is received by the robot. The Command-to-Execution delay, on the other hand, also considers when the corresponding action is executed by the robot. Note that the execution depends on mechanical and electrical aspects not related to data communication, which are outside the scope of our work but certainly have an impact on the user experience. Measuring these KPIs under real-world network conditions, where latency can be affected by bandwidth availability and network congestion, provides a realistic picture of how responsive the system would be in practice.
Early results show that, on average, the Command-to-Reception delay is only 50 ms, which aligns with industry standards [58,59] and empirical benchmarks [60] commonly used in latency-sensitive applications. However, the Command-to-Execution delay increases up to 350 ms, and the end-to-end (E2E) latency (i.e., Command-to-Execution plus video streaming latency) increases up to 650 ms; these values exceed what can be considered real-time according to the standards and are due to the complexity and experimental character of the prototype.
Finally, the latency threshold for haptic feedback (i.e., under 30 ms) is based on studies [61] indicating that longer delays reduce the user's perception of real-time response, diminishing the immersive quality essential for applications such as remote driving.

4.4.2. QoE Metrics and Measurement Results

Next, we present the analysis of the Quality of Experience (QoE) obtained from a survey of 53 participants who tested the Digital Twin application prototype featuring the immersive remote driving of two robots. The tests took place in a high-density city environment, with a distance of around 7 km between the robots and the cockpits. The assessment covered presence, engagement, control, sensory integration, and cognitive load, and its results help validate the QoS offered by the platform for supporting Digital Twin applications in real-life scenarios.
The metrics for assessing presence indicate a high level of perceived immersion among participants. Specifically, 97.9% of the participants rated their sense of presence at level 4 or above, suggesting that the system successfully created a convincing remote environment. This strong sense of presence is critical for applications requiring teleoperation, as it directly impacts the operator’s situational awareness and effectiveness.
Regarding engagement, a combined 95.8% of participants rated their engagement in the teleoperation experience at level 4 or above. This indicates that the system’s immersive elements, such as real-time video streaming and responsive controls, were effective in maintaining user interest and focus throughout the demonstration.
The participants’ perceptions of control over the remote robot were also high, with 75% of them rating their control experience at level 4 or above. This suggests that the system’s teleoperation interface was intuitive and responsive, which is crucial for real-time control scenarios where precision and responsiveness are required.
Sensory integration, which measures the effectiveness of combining visual and haptic feedback, received less favorable ratings and currently needs improvement: only 39.6% of participants rated it at level 4 or 5, indicating that the multimodal feedback was not perceived as entirely cohesive. Effective sensory integration is vital for enhancing the user's interaction with remote systems, especially in scenarios requiring fine-grained control and quick reaction times; however, most participants were not accustomed to haptic feedback in a remote driving scenario, which likely contributed to the lower scores.
The cognitive load associated with the teleoperation task was assessed to understand how mentally demanding the experience was for participants. While 33.3% rated the task as highly demanding (level 5), a significant proportion (22.9%) found it relatively manageable (level 2). These mixed results indicate that the balance between complexity and usability still needs refinement so that users can focus on control and interaction without being overwhelmed by the cognitive demand.
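For transparency, the percentages above follow directly from the raw Likert responses. A minimal aggregation sketch is shown below, using made-up ratings rather than the actual survey data.

```python
from collections import Counter

def share_at_or_above(ratings, level=4):
    """Fraction of 1-5 Likert ratings at `level` or above, as a percentage."""
    counts = Counter(ratings)
    hits = sum(counts[l] for l in range(level, 6))
    return 100.0 * hits / len(ratings)

# Hypothetical responses for one QoE dimension (n = 10, not the real survey data).
presence = [5, 4, 5, 4, 4, 5, 3, 5, 4, 5]
print(f"Presence rated 4 or above: {share_at_or_above(presence):.1f}%")
```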
The results suggest that, while sensory integration and cognitive load must still be optimized for less experienced users, the platform successfully offers the QoS and QoE required to support generic Digital Twin applications, as well as immersive and interactive experiences.

5. Open Issues and Future Work

One of the key challenges that need to be addressed by the architecture of this platform is ensuring energy efficiency and sustainability across the IoT–Edge–Cloud continuum. The orchestration layer of the platform should be improved, with energy efficiency as a core objective, leveraging AI/ML algorithms to optimize the allocation of resources and reduce unnecessary energy expenditure. The orchestration layer should intelligently distribute tasks to Edge nodes with the appropriate computational capacity and energy profile, minimizing power usage while ensuring the required performance for time-sensitive applications. Techniques like neural network pruning and quantization, as well as the development of low-power hardware, will be necessary to reduce the energy demands of AI/ML processing. In particular, the platform faces technical challenges in managing the energy consumption of high-performance Edge nodes during peak processing loads, which require fine-grained scheduling and adaptive power management mechanisms. Future work could also explore implementing real-time energy monitoring at both the node and cluster levels to detect and adjust for energy spikes, thus optimizing consumption dynamically.
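As a conceptual illustration of such energy-aware placement, the sketch below selects, among the nodes with sufficient free capacity, the one with the lowest estimated energy cost for a task. The node attributes and the linear energy model are simplifying assumptions, not the platform's actual orchestration logic.

```python
from dataclasses import dataclass

@dataclass
class EdgeNode:
    name: str
    free_cpu: float       # available CPU cores
    watts_per_cpu: float  # marginal power draw per core under load

def place_task(nodes, cpu_demand):
    """Pick the feasible node minimizing estimated energy for this task."""
    feasible = [n for n in nodes if n.free_cpu >= cpu_demand]
    if not feasible:
        return None  # caller may fall back to the Cloud layer
    return min(feasible, key=lambda n: n.watts_per_cpu * cpu_demand)

nodes = [EdgeNode("edge-1", free_cpu=4, watts_per_cpu=6.0),
         EdgeNode("edge-2", free_cpu=2, watts_per_cpu=3.5)]
print(place_task(nodes, cpu_demand=2).name)  # -> edge-2, lower energy estimate
```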
Future work should also explore the potential of green computing technologies in reducing the platform's environmental impact. The integration of renewable energy sources into Edge nodes and IoT devices is one possibility: solar-powered IoT sensors or energy-harvesting technologies could significantly reduce the platform's energy consumption, especially in remote or off-grid areas. Research into energy-efficient hardware, such as low-power microcontrollers and processors, will also be critical for minimizing the platform's overall energy footprint, as will low-power AI inference engines specifically optimized for Edge applications. These engines could utilize workload-aware adaptation techniques that dynamically scale AI model complexity according to energy constraints without compromising performance. Additionally, exploring Edge node designs with modular renewable energy inputs, such as solar or wind modules, could facilitate the deployment of sustainable, off-grid solutions, particularly beneficial for rural or hard-to-reach IoT sites.
Predictive maintenance, driven by AI/ML, can play a significant role in extending the lifecycle of hardware components and reducing waste. By analyzing data from IoT devices and Edge nodes, the system could predict potential failures before they occur, minimizing downtime and reducing the need for frequent hardware replacements. Predictive analytics could be used not only for hardware maintenance but also for optimizing network performance, predicting congestion, or identifying potential security threats; the system could also predict when certain devices or components are likely to experience low activity and adjust their energy consumption accordingly. Additionally, by simulating future system states, Digital Twins could help in planning energy-efficient expansion strategies, such as determining when and where to deploy new Edge nodes to meet growing demand without over-provisioning resources. For example, placement algorithms could take into account not only the current workload but also the energy profile of individual devices and their surrounding environment. To address predictive maintenance challenges, the platform could implement lightweight, on-device AI modules that provide real-time analytics on component health, thus minimizing data transfer and processing costs. For instance, anomaly detection algorithms tailored to the behavior of specific hardware components could be deployed at the Edge, enabling early fault detection with minimal impact on computational resources, which is crucial for sustaining continuous operations in resource-constrained environments. Future work could also explore adaptive Digital Twin models that evolve based on real-time operational data, enabling more accurate simulations for predictive maintenance.
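A lightweight on-device anomaly detector of the kind suggested above can be as simple as a rolling z-score over a component's telemetry. The window size, threshold, and simulated temperature trace below are illustrative assumptions.

```python
from collections import deque
from statistics import mean, stdev

class RollingZScoreDetector:
    """Flag telemetry samples that deviate strongly from the recent baseline."""
    def __init__(self, window=100, threshold=3.0):
        self.history = deque(maxlen=window)
        self.threshold = threshold

    def update(self, value: float) -> bool:
        """Return True if `value` is anomalous relative to the rolling window."""
        anomalous = False
        if len(self.history) >= 10:  # need a minimal baseline first
            mu, sigma = mean(self.history), stdev(self.history)
            anomalous = sigma > 0 and abs(value - mu) / sigma > self.threshold
        self.history.append(value)
        return anomalous

detector = RollingZScoreDetector()
for temp in [40.1, 40.3, 39.8] * 10 + [55.0]:  # simulated motor temperature (°C)
    if detector.update(temp):
        print(f"Anomaly detected: {temp} °C")
```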
Security and privacy challenges are another key consideration that cannot be overlooked in future work. With an increasing number of devices connected to this hyper-distributed platform, the attack surface expands exponentially, creating vulnerabilities at various levels, from the IoT layer to the Cloud. As more data are collected and exchanged between nodes, privacy concerns also grow, especially in applications that involve personal or sensitive data. AI/ML could play a role in detecting and mitigating attacks in real time, but lightweight, efficient algorithms are necessary to avoid additional computational burden. Techniques like federated learning, where data remain local while models are trained in a decentralized fashion, could address privacy concerns while still allowing for the development of intelligent systems. The platform’s hyper-distributed nature introduces unique challenges in securing device-to-Cloud data flows, as well as ensuring data integrity across multiple network slices. One specific challenge lies in detecting and mitigating cross-layer attacks that exploit both the IoT and Cloud layers. Future work should also explore implementing cross-layer intrusion detection systems (IDSs) that use federated learning across Edge nodes to identify attack patterns without compromising data privacy. Developing these IDS solutions will require algorithms optimized for Edge deployment, with a minimal footprint to avoid disrupting essential real-time applications.
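To make the federated learning idea concrete, the following sketch implements plain federated averaging (FedAvg) of locally trained model weights; raw training data never leaves the Edge nodes, only the weights are shared. In an actual IDS, the weight vectors would come from per-node intrusion classifiers, which is assumed rather than implemented here.

```python
import numpy as np

def federated_average(local_weights, sample_counts):
    """Aggregate local model weights, weighted by each node's data volume."""
    total = sum(sample_counts)
    stacked = np.stack(local_weights)                       # (nodes, params)
    coeffs = np.array(sample_counts, dtype=float) / total   # per-node weight
    return np.tensordot(coeffs, stacked, axes=1)            # weighted sum

# Three Edge nodes with hypothetical 4-parameter local IDS models.
w_nodes = [np.array([0.2, 1.1, -0.3, 0.8]),
           np.array([0.4, 0.9, -0.1, 1.0]),
           np.array([0.3, 1.0, -0.2, 0.9])]
global_w = federated_average(w_nodes, sample_counts=[500, 1500, 1000])
print(global_w)  # new global model, redistributed to the nodes each round
```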
An area that deserves further exploration is interoperability and standardization. The platform integrates a variety of devices, applications, and communication protocols, making it imperative that these components work seamlessly together. The lack of standardization in IoT ecosystems remains a significant hurdle in achieving full interoperability. Future work should focus on establishing common protocols, communication standards, and APIs that allow diverse devices and applications to interact with each other. Standardization is crucial for scalability, enabling the integration of new technologies without disrupting existing workflows. Open architectures and collaboration between industry and academia could help in defining these global standards. To address interoperability, research could also focus on developing open-source interoperability frameworks for multi-vendor IoT ecosystems, enabling standardized data exchange protocols across devices and Cloud services. For instance, future work could involve implementing Edge-centric gateways that translate proprietary device protocols into a common framework, like MQTT or CoAP, facilitating seamless integration. Collaboration with standardization bodies (e.g., IEEE, ISO) could also help establish industry-wide protocols for Edge–Cloud interoperability.
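A minimal sketch of such an Edge-centric protocol gateway, assuming a hypothetical vendor frame format and the widely used paho-mqtt client library, is given below; the broker hostname and topic scheme are placeholders.

```python
import json
import paho.mqtt.client as mqtt

def parse_proprietary_frame(frame: bytes) -> dict:
    """Translate a hypothetical vendor frame '<id>;<temp>;<status>' into a dict."""
    device_id, temp, status = frame.decode().split(";")
    return {"device": device_id, "temperature": float(temp), "status": status}

# paho-mqtt >= 2.0 constructor; broker hostname is an assumed placeholder.
client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.connect("edge-gateway.local", 1883)
client.loop_start()  # background thread to flush outgoing messages

def forward(frame: bytes) -> None:
    """Republish a translated reading on a standardized MQTT topic."""
    reading = parse_proprietary_frame(frame)
    info = client.publish(f"factory/sensors/{reading['device']}",
                          json.dumps(reading))
    info.wait_for_publish()

forward(b"agv-07;41.5;OK")
```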
Finally, social and ethical considerations must be addressed as hyper-distributed platforms become more embedded in critical infrastructures, such as healthcare, transportation, and smart cities. Issues of data ownership, transparency, and the ethical use of AI in decision-making processes must be carefully considered. Future work should explore the development of explainable AI (XAI) models that provide human-readable explanations for decisions made by AI systems; this will be essential for ensuring trust and accountability in applications where AI plays a central role in real-time decision-making. One concrete step in this direction would be to integrate XAI components that provide clear rationales for decisions in critical applications, such as healthcare monitoring or autonomous systems. Additionally, it will be crucial to develop governance frameworks for data ownership and access rights, particularly for sensitive or personal data generated by IoT devices; such frameworks could help define roles and responsibilities among users, operators, and developers to maintain ethical standards across the platform.

6. Conclusions

This paper presents the design of a flexible, hyper-distributed IoT–Edge–Cloud platform aimed at enabling real-time Digital Twins in industrial and logistics environments. The platform is intended to serve as an innovative living lab and research testbed for future 6G applications. It integrates both open-source and commercial solutions, along with a private 5G network, to connect machines and sensors on a large scale.
By incorporating artificial intelligence (AI) and machine learning (ML) capabilities, the IoT–Edge–Cloud platform optimizes the use of computing and networking resources for real-time applications, significantly reducing human intervention in the design and validation of the physical network. This approach offers several advantages, including lower labor costs and a reduction in human errors.
The platform’s application in supporting immersive remote control applications, such as teleoperation, has demonstrated its potential in scenarios requiring low latency and precise control, which are essential for industries prioritizing real-time responsiveness. Real-world tests validated the platform’s performance in supporting high-throughput, low-latency applications. In the n78 band, the platform achieved downlink speeds of up to 552 Mb/s and uplink speeds of 87.3 Mb/s, close to theoretical maxima, demonstrating its capability for data-intensive applications. Similarly, tests in the n40 band revealed a downlink of 120 Mb/s and an uplink of 29 Mb/s. These results confirm the platform’s suitability for supporting Digital Twin applications in realistic settings, providing high QoS across the network infrastructure.
Additionally, the platform's support for Digital Twins was validated via QoE assessments conducted on an immersive remote driving prototype. Over 97% of participants reported a strong sense of presence and nearly 96% reported high engagement during teleoperation, reflecting the application's capability to deliver an immersive user experience. Perceived control was also rated highly, with 75% of users at level 4 or above, whereas sensory integration, rated favorably by only 39.6% of participants, was identified as the main area for improvement.
In summary, the platform has been deployed in a real-world setting, accompanied by an experimental setup for an emerging application featuring immersive remote driving. This experiment demonstrated the platform's flexibility and scalability as a testbed for future 6G applications and use cases, addressing key challenges in real-time feedback and control and positioning the platform as a critical infrastructure for future innovations in remote operations and interactive digital environments. The paper also identifies open challenges and future research directions, particularly the enhancement of energy efficiency within the IoT–Edge–Cloud continuum; future efforts will explore energy optimization mechanisms to further support sustainable, high-performance applications.

Author Contributions

Conceptualization, M.C.-A., R.L., F.H.-G. and N.M.; methodology, M.C.-A., R.L., F.H.-G. and N.M.; software, M.C.-A., R.L. and F.H.-G.; validation, M.C.-A., R.L. and F.H.-G.; formal analysis, M.C.-A., R.L. and F.H.-G.; investigation, M.C.-A., R.L., F.H.-G. and N.M.; resources, N.M. and D.G.-B.; data curation, N.M. and D.G.-B.; writing—original draft preparation, M.C.-A., R.L., F.H.-G. and N.M.; writing—review and editing, N.M. and D.G.-B.; visualization, M.C.-A., R.L. and F.H.-G.; supervision, N.M. and D.G.-B.; project administration, N.M. and D.G.-B.; funding acquisition, N.M. and D.G.-B. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partially supported by the Spanish Ministry of Economic Affairs and Digital Transformation and the European Union-NextGenerationEU through the UNICO 5G I+D ADVANCING-5G-TWINS (TSI-063000-2021-112, TSI-063000-2021-113, TSI-063000-2021-114) and UNICO 5G I+D ADVANCING-5G-IMMERSIVE (TSI-063000-2021-109, TSI-063000-2021-110, TSI-063000-2021-111) projects, and by the European Union’s Horizon Europe research and innovation programme (HORIZONMSCA-2022-DN-01) through the TOAST project under the Marie Skłodowska-Curie grant agreement No. 101073465.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Uusitalo, M.A.; Rugel, P.; Boldi, M.R.; Strinati, E.C.; Demestichas, P.; Ericson, M.; Fettweis, G.P.; Filippou, M.C.; Gati, A.; Hamon, M.H.; et al. 6G Vision, Value, Use Cases and Technologies From European 6G Flagship Project Hexa-X. IEEE Access 2021, 9, 160004–160020. [Google Scholar] [CrossRef]
  2. Viswanathan, H.; Mogensen, P.E. Communications in the 6G Era. IEEE Access 2020, 8, 57063–57074. [Google Scholar] [CrossRef]
  3. Wang, C.X.; You, X.; Gao, X.; Zhu, X.; Li, Z.; Zhang, C.; Wang, H.; Huang, Y.; Chen, Y.; Haas, H.; et al. On the Road to 6G: Visions, Requirements, Key Technologies, and Testbeds. IEEE Commun. Surv. Tutorials 2023, 25, 905–974. [Google Scholar] [CrossRef]
  4. Allioui, H.; Mourdi, Y. Exploring the full potentials of IoT for better financial growth and stability: A comprehensive survey. Sensors 2023, 23, 8015. [Google Scholar] [CrossRef]
  5. Kwon, J.H.; Kim, H.J.; Lee, S. Optimizing Traffic Scheduling in Autonomous Vehicle Networks Using Machine Learning Techniques and Time-Sensitive Networking. Electronics 2024, 13, 2837. [Google Scholar] [CrossRef]
  6. Nie, Z.; Chen, K.C.; Alanezi, Y. Socially Networked Multi-Robot System of Time-Sensitive Multi-Link Access in a Smart Factory. In Proceedings of the ICC 2023-IEEE International Conference on Communications, Rome, Italy, 28 May–1 June 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 4918–4923. [Google Scholar]
  7. Hazarika, A.; Rahmati, M. Towards an evolved immersive experience: Exploring 5G-and beyond-enabled ultra-low-latency communications for augmented and virtual reality. Sensors 2023, 23, 3682. [Google Scholar] [CrossRef]
  8. Gkonis, P.; Giannopoulos, A.; Trakadas, P.; Masip-Bruin, X.; D’Andria, F. A survey on IoT-edge-cloud continuum systems: Status, challenges, use cases, and open issues. Future Internet 2023, 15, 383. [Google Scholar] [CrossRef]
  9. Jamil, M.N.; Schelén, O.; Monrat, A.A.; Andersson, K. Enabling Industrial Internet of Things by Leveraging Distributed Edge-to-Cloud Computing: Challenges and Opportunities. IEEE Access 2024, 12, 127294–127308. [Google Scholar] [CrossRef]
  10. Hlophe, M.C.; Maharaj, B.T. From cyber–physical convergence to digital twins: A review on edge computing use case designs. Appl. Sci. 2023, 13, 13262. [Google Scholar] [CrossRef]
  11. Lubrano, F.; Caragnano, G.; Scionti, A.; Terzo, O. Challenges, Novel Approaches and Next Generation Computing Architecture for Hyper-Distributed Platforms Towards Real Computing Continuum. In Proceedings of the Advanced Information Networking and Applications, Kitakyushu, Japan, 17–19 April 2024; Barolli, L., Ed.; Springer: Cham, Switzerland, 2024; pp. 449–459. [Google Scholar]
  12. Alkhateeb, A.; Jiang, S.; Charan, G. Real-time digital twins: Vision and research directions for 6G and beyond. IEEE Commun. Mag. 2023, 61, 128–134. [Google Scholar] [CrossRef]
  13. Nguyen, T.N.; Zeadally, S.; Vuduthala, A.B. Cyber-physical cloud manufacturing systems with digital twins. IEEE Internet Comput. 2021, 26, 15–21. [Google Scholar] [CrossRef]
  14. Barnabas, J.; Raj, P. The human body: A digital twin of the cyber physical systems. In Advances in Computers; Elsevier: Amsterdam, The Netherlands, 2020; Volume 117, pp. 219–246. [Google Scholar]
  15. Xu, H.; Berres, A.; Yoginath, S.B.; Sorensen, H.; Nugent, P.J.; Severino, J.; Tennille, S.A.; Moore, A.; Jones, W.; Sanyal, J. Smart mobility in the cloud: Enabling real-time situational awareness and cyber-physical control through a digital twin for traffic. IEEE Trans. Intell. Transp. Syst. 2023, 24, 3145–3156. [Google Scholar] [CrossRef]
  16. Kaigom, E.G.; Roßmann, J. Value-driven robotic digital twins in cyber–Physical applications. IEEE Trans. Ind. Inform. 2020, 17, 3609–3619. [Google Scholar] [CrossRef]
  17. Shi, S. Industrial cloud, automation: The Industrial Internet of Things (IIoT) is being embraced by manufacturers as a natural extension of automation and controls development. Control. Eng. 2023, 70, 31–32. [Google Scholar]
  18. Zorchenko, N.; Tyupina, T.; Parshutin, M. Technologies Used by General Electric to Create Digital Twins for Energy Industry. Power Technol. Eng. 2024, 58, 521–526. [Google Scholar] [CrossRef]
  19. Gupta, R.; Reebadiya, D.; Tanwar, S. 6G-enabled edge intelligence for ultra-reliable low latency applications: Vision and mission. Comput. Stand. Interfaces 2021, 77, 103521. [Google Scholar] [CrossRef]
  20. Santos, J.; Wauters, T.; Volckaert, B.; De Turck, F. Towards low-latency service delivery in a continuum of virtual resources: State-of-the-art and research directions. IEEE Commun. Surv. Tutorials 2021, 23, 2557–2589. [Google Scholar] [CrossRef]
  21. Vaish, R.; Hollinger, M.C. Case Study: IBM–Automating Visual Inspection. In Springer Handbook of Automation; Springer: Berlin/Heidelberg, Germany, 2023; pp. 1439–1450. [Google Scholar]
  22. Fortino, G.; Guerrieri, A.; Pace, P.; Savaglio, C.; Spezzano, G. Iot platforms and security: An analysis of the leading industrial/commercial solutions. Sensors 2022, 22, 2196. [Google Scholar] [CrossRef]
  23. Kherbache, M.; Maimour, M.; Rondeau, E. Digital twin network for the IIoT using eclipse ditto and hono. IFAC-PapersOnLine 2022, 55, 37–42. [Google Scholar] [CrossRef]
  24. De Benedictis, A.; Rocco di Torrepadula, F.; Somma, A. A Digital Twin Architecture for Intelligent Public Transportation Systems: A FIWARE-Based Solution. In Proceedings of the International Symposium on Web and Wireless Geographical Information Systems, Yverdon-Les-Bains, Switzerland, 17–18 June 2024; Springer: Berlin/Heidelberg, Germany, 2024; pp. 165–182. [Google Scholar]
  25. Conde, J.; Munoz-Arcentales, A.; Alonso, Á.; Huecas, G.; Salvachúa, J. Collaboration of digital twins through linked open data: Architecture with fiware as enabling technology. IT Prof. 2022, 24, 41–46. [Google Scholar] [CrossRef]
  26. Robles, J.; Martín, C.; Díaz, M. OpenTwins: An open-source framework for the development of next-gen compositional digital twins. Comput. Ind. 2023, 152, 104007. [Google Scholar] [CrossRef]
  27. Zeb, S.; Mahmood, A.; Hassan, S.A.; Piran, M.J.; Gidlund, M.; Guizani, M. Industrial digital twins at the nexus of NextG wireless networks and computational intelligence: A survey. J. Netw. Comput. Appl. 2022, 200, 103309. [Google Scholar] [CrossRef]
  28. Mihai, S.; Yaqoob, M.; Hung, D.V.; Davis, W.; Towakel, P.; Raza, M.; Karamanoglu, M.; Barn, B.; Shetve, D.; Prasad, R.V.; et al. Digital twins: A survey on enabling technologies, challenges, trends and future prospects. IEEE Commun. Surv. Tutorials 2022, 24, 2255–2291. [Google Scholar] [CrossRef]
  29. Mirani, A.A.; Velasco-Hernandez, G.; Awasthi, A.; Walsh, J. Key challenges and emerging technologies in industrial IoT architectures: A review. Sensors 2022, 22, 5836. [Google Scholar] [CrossRef]
  30. Dong, J.; Xu, Q.; Wang, J.; Yang, C.; Cai, M.; Chen, C.; Liu, Y.; Wang, J.; Li, K. Mixed cloud control testbed: Validating vehicle-road-cloud integration via mixed digital twin. IEEE Trans. Intell. Veh. 2023, 8, 2723–2736. [Google Scholar] [CrossRef]
  31. Alimi, I.A.; Patel, R.K.; Muga, N.J.; Pinto, A.N.; Teixeira, A.L.; Monteiro, P.P. Towards enhanced mobile broadband communications: A tutorial on enabling technologies, design considerations, and prospects of 5G and beyond fixed wireless access networks. Appl. Sci. 2021, 11, 10427. [Google Scholar] [CrossRef]
  32. Milovanovic, D.A.; Bojkovic, Z.S. 5G Ultrareliable and Low-Latency Communication in Vertical Domain Expansion. In Driving 5G Mobile Communications with Artificial Intelligence Towards 6G; CRC Press: Boca Raton, FL, USA, 2023; pp. 137–181. [Google Scholar]
  33. Kong, L.; Tan, J.; Huang, J.; Chen, G.; Wang, S.; Jin, X.; Zeng, P.; Khan, M.; Das, S.K. Edge-computing-driven Internet of Things: A Survey. ACM Comput. Surv. 2022, 55, 1–41. [Google Scholar] [CrossRef]
  34. Shi, W.; Cao, J.; Zhang, Q.; Li, Y.; Xu, L. Edge Computing: Vision and Challenges. IEEE Internet Things J. 2016, 3, 637–646. [Google Scholar] [CrossRef]
  35. Lin, X.; Kundu, L.; Dick, C.; Obiodu, E.; Mostak, T.; Flaxman, M. 6G digital twin networks: From theory to practice. IEEE Commun. Mag. 2023, 61, 72–78. [Google Scholar] [CrossRef]
  36. AlSobeh, A.M.; Hammad, R.; Al-Tamimi, A.K. A modular cloud-based ontology framework for context-aware EHR services. Int. J. Comput. Appl. Technol. 2019, 60, 339–350. [Google Scholar] [CrossRef]
  37. Arbab-Zavar, B.; Palacios-Garcia, E.J.; Vasquez, J.C.; Guerrero, J.M. Message queuing telemetry transport communication infrastructure for grid-connected AC microgrids management. Energies 2021, 14, 5610. [Google Scholar] [CrossRef]
  38. Baig, M.J.A.; Iqbal, M.T.; Jamil, M.; Khan, J. A low-cost, open-source peer-to-peer energy trading system for a remote community using the internet-of-things, blockchain, and hypertext transfer protocol. Energies 2022, 15, 4862. [Google Scholar] [CrossRef]
  39. Yassein, M.B.; Hmeidi, I.; Meqdadi, O.; Alghazo, F.; Odat, B.; AlZoubi, O.; Smairat, A. Challenges and techniques of constrained application protocol (CoAP) for efficient energy consumption. In Proceedings of the 2020 11th International Conference on Information and Communication Systems (ICICS), Irbid, Jordan, 7–9 April 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 373–377. [Google Scholar]
  40. Marino, C.A.; Chinelato, F.; Marufuzzaman, M. AWS IoT analytics platform for microgrid operation management. Comput. Ind. Eng. 2022, 170, 108331. [Google Scholar] [CrossRef]
  41. Satapathi, A.; Mishra, A. Build an IoT Solution with Azure IoT Hub, Azure Functions, and Azure Cosmos DB. In Developing Cloud-Native Solutions with Microsoft Azure and NET: Build Highly Scalable Solutions for the Enterprise; Springer: Berlin/Heidelberg, Germany, 2022; pp. 193–218. [Google Scholar]
  42. Fortino, G.; Guerrieri, A.; Savaglio, C.; Spezzano, G. A review of internet of things platforms through the iot-a reference architecture. In Proceedings of the International Symposium on Intelligent and Distributed Computing, Freiburg, Germany, 4–8 October 2021; Springer: Berlin/Heidelberg, Germany, 2021; pp. 25–34. [Google Scholar]
  43. Alabbas, A.; Kaushal, A.; Almurshed, O.; Rana, O.; Auluck, N.; Perera, C. Performance analysis of apache openwhisk across the edge-cloud continuum. In Proceedings of the 2023 IEEE 16th International Conference on Cloud Computing (CLOUD), Chicago, IL, USA, 2–8 July 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 401–407. [Google Scholar]
  44. Dakić, V.; Kovač, M.; Slovinac, J. Evolving High-Performance Computing Data Centers with Kubernetes, Performance Analysis, and Dynamic Workload Placement Based on Machine Learning Scheduling. Electronics 2024, 13, 2651. [Google Scholar] [CrossRef]
  45. Tricomi, G.; D’Agati, L.; Longo, F.; Merlino, G.; Puliafito, A.; Silvestri, S. Paving the way for an Urban Intelligence OpenStack-based Architecture. In Proceedings of the 2024 IEEE International Conference on Smart Computing (SMARTCOMP), Osaka, Japan, 29 June–2 July 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 284–289. [Google Scholar]
  46. Ullah, A.; Kiss, T.; Kovács, J.; Tusa, F.; Deslauriers, J.; Dagdeviren, H.; Arjun, R.; Hamzeh, H. Orchestration in the Cloud-to-Things compute continuum: Taxonomy, survey and future directions. J. Cloud Comput. 2023, 12, 135. [Google Scholar] [CrossRef]
  47. Alsobeh, A.; Shatnawi, A. Integrating data-driven security, model checking, and self-adaptation for IoT systems using BIP components: A conceptual proposal model. In Proceedings of the International Conference on Advances in Computing Research, Orlando, FL, USA, 8–10 May 2023; Springer: Berlin/Heidelberg, Germany, 2023; pp. 533–549. [Google Scholar]
  48. Khan, W.Z.; Ahmed, E.; Hakak, S.; Yaqoob, I.; Ahmed, A. Edge computing: A survey. Future Gener. Comput. Syst. 2019, 97, 219–235. [Google Scholar] [CrossRef]
  49. Fogli, M.; Kudla, T.; Musters, B.; Pingen, G.; Van den Broek, C.; Bastiaansen, H.; Suri, N.; Webb, S. Performance evaluation of kubernetes distributions (k8s, k3s, kubeedge) in an adaptive and federated cloud infrastructure for disadvantaged tactical networks. In Proceedings of the 2021 International Conference on Military Communication and Information Systems (ICMCIS), The Hague, The Netherlands, 4–5 May 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 1–7. [Google Scholar]
  50. Banaei, A.; Sharifi, M. Etas: Predictive scheduling of functions on worker nodes of apache openwhisk platform. J. Supercomput. 2022, 78, 5358–5393. [Google Scholar] [CrossRef]
  51. Santos, Á.; Correia, N.; Bernardino, J. On the Suitability of Cloud Models for MEC Deployment Purposes. In Proceedings of the 2023 6th Experiment@ International Conference (exp. at’23), Evora, Portugal, 5–7 June 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 255–260. [Google Scholar]
  52. Seabold, S.; Perktold, J. Statsmodels: Statistical Models in Python; Python Software Foundation: Wilmington, DE, USA, 2010. [Google Scholar]
  53. 3rd Generation Partnership Project (3GPP). NR; User Equipment (UE) Radio Access Capabilities (Release 18); Technical Specification TS 38.306 V18.3.0; 3GPP Technical Specification Group Radio Access Network, 2024. Available online: https://www.etsi.org/deliver/etsi_ts/138300_138399/138306/18.01.00_60/ts_138306v180100p.pdf (accessed on 1 November 2024).
  54. Keysight Technologies. P8900S LoadCore—Core Network Solutions. 2024. Available online: https://www.keysight.com/es/en/product/P8900S/loadcore-core-network-solutions.html (accessed on 12 November 2024).
  55. 5G Infrastructure Public Private Partnership (5G PPP). Beyond 5G/6G KPIs and Target Values. 2022. Available online: https://5g-ppp.eu/ (accessed on 12 November 2024).
  56. Khan, T.; Zhu, T.S.; Downes, T.; Cheng, L.; Kass, N.M.; Andrews, E.G.; Biehl, J.T. Understanding effects of visual feedback delay in ar on fine motor surgical tasks. IEEE Trans. Vis. Comput. Graph. 2023, 29, 4697–4707. [Google Scholar] [CrossRef]
  57. Zhao, L.; Nybacka, M.; Aramrattana, M.; Rothhämel, M.; Habibovic, A.; Drugge, L.; Jiang, F. Remote Driving of Road Vehicles: A Survey of Driving Feedback, Latency, Support Control, and Real Applications. IEEE Trans. Intell. Veh. 2024, 1–22. [Google Scholar] [CrossRef]
  58. Dreibholz, T.; Mazumdar, S. Towards a lightweight task scheduling framework for cloud and edge platform. Internet Things 2023, 21, 100651. [Google Scholar] [CrossRef]
  59. Makondo, N.; Kobo, H.I.; Mathonsi, T.E.; Du Plessis, D.; Makhosa, T.M.; Mamushiane, L. An efficient architecture for latency optimisation in 5G using Edge Computing for uRLLC use cases. In Proceedings of the 2024 International Conference on Artificial Intelligence, Big Data, Computing and Data Communication Systems (icABCD), Port Louis, Mauritius, 1–2 August 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 1–7. [Google Scholar]
  60. Velayutham, A. Optimizing sase for low latency and high bandwidth applications: Techniques for enhancing latency-sensitive systems. Int. J. Intell. Autom. Comput. 2023, 6, 63–83. [Google Scholar]
  61. Lin, Y.H.; Wang, Y.W.; Ku, P.S.; Cheng, Y.T.; Hsu, Y.C.; Tsai, C.Y.; Chen, M.Y. Hapticseer: A multi-channel, black-box, platform-agnostic approach to detecting video game events for real-time haptic feedback. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, Online, 8–13 May 2021; pp. 1–14. [Google Scholar]
Figure 1. A conceptual diagram based on the Hexa-X 6G vision of interconnected worlds [1], now also encompassing Digital Twins, IoT, and Edge and Cloud computing components. The physical world comprises IoT devices, sensors, and autonomous machines that collect and act on real-world data, forming the basis of cyber-physical twinning with real-time feedback. The digital world represents a virtual replica of the physical world, comprehended along the IoT-Cloud-Edge computing continuum, which facilitates data processing, AI/ML applications, and storage for large datasets. The human world includes haptic devices, user interfaces, and Human–Machine Interfaces to enable immersive interaction with both digital and physical assets. This interconnected 6G ecosystem enables Immersive Communications across worlds, supports real-time control for responsive applications, and achieves cyber-physical twinning for advanced simulations and monitoring.
Figure 2. The architecture of the proposed IoT–Edge–Cloud platform, exemplified for two sites, structured into three primary layers: IoT-Device, Edge, and Cloud. This design enables efficient communication and scalability for connected devices. The figure illustrates flexibility in application deployment, showing that applications can operate in either the Edge or Cloud layer or even span both. Each site includes a private 5G network, represented by a dotted square, which provides 5G connectivity to the IoT-Device layer. The Edge and Network Orchestration layer manages resource allocation across both sites, which are interconnected by a direct private fiber link, supporting seamless operation and coordination between the sites.
Figure 3. The hyper-distributed experimental setup, comprising two geographically distanced Edge sites for the development and validation of an IoT-to-Edge-to-Cloud platform. The architecture illustrates the distribution of essential elements, such as the 5G core location, distributed RAN components—CUs (Centralized Units), DUs (Distributed Units), and RUs (Remote Units)—and Edge Servers at both sites. Edge Site 1 and Edge Site 2 are interconnected via secure links, with dedicated firewalls and internal routers to manage network traffic. This configuration allows for real-time data processing and transmission across the IoT, Edge, and Cloud layers, supporting latency-sensitive applications and resilient network connectivity.
Figure 4. A graphical representation of maximum data rates achieved for different UE and UPF configurations for both download (DL) and upload (UL) links.
Figure 5. A comparison of the theoretical maximum data rates against the measured rates across different frequency bands (n78 and n40) for both download (DL) and upload (UL) links.
Figure 6. Edge Site 1 network architecture for an immersive remote driving application, which includes two teleoperated vehicles leveraging 5G components and Edge servers for real-time control and feedback. The connectivity is provided through outdoor and indoor 5G bands (n40 and n78, respectively). The setup includes a Cloud Robotics Platform for processing the Digital Twin information and AI-driven applications for video-based environment detection, interfacing with Edge servers for lower-latency processing.
Table 1. An overview of peak data rates and latency (one-way delay, OWD) metrics of the 5G core network under various simulated conditions. Data rates were measured with a simulated RAN for different UE setups, across multiple User Plane Functions (UPFs). Latency measurements were split into user plane (UP) and control plane (CP) metrics, highlighting packet delivery within specific latency brackets. The standard deviation values indicate variability in network performance, with higher deviations observed in scenarios involving multiple UEs and UPFs.

| 5G Core Metric | Conditions | Measured Value | Standard Deviation |
|---|---|---|---|
| Peak data rate | Simulated RAN, 1 UE, 1 UPF (DL) | DL: 468 Mb/s | 36 Mb/s |
| | Simulated RAN, 1 UE, 1 UPF (DL + UL) | DL: 445 Mb/s | 27 Mb/s |
| | | UL: 364 Mb/s | 31 Mb/s |
| | Simulated RAN, 10 UEs, 1 UPF (DL + UL) | DL: 684 Mb/s | 27 Mb/s |
| | | UL: 336 Mb/s | 14 Mb/s |
| | Simulated RAN, 7 UEs, 7 UPFs (DL + UL) | DL: 882 Mb/s | 70 Mb/s |
| | | UL: 830 Mb/s | 56 Mb/s |
| Latency (UP) (OWD) | Simulated RAN, 1 UE, 1 UPF (DL + UL) | DL: 81% of packets, 125–250 µs | N/A |
| | | UL: 88% of packets, <125 µs | N/A |
| | Simulated RAN, 7 UEs, 7 UPFs (DL + UL) | DL: 69% of packets, 125–250 µs | N/A |
| | | UL: 62% of packets, 125–250 µs | N/A |
| Latency (CP) | Simulated RAN, 20 UEs, 1 UPF (200 idle–active cycles) | 0.3 s | 0.2 s |
Table 2. Maximum theoretical and measured data rates for download (DL) and upload (UL) links in the n78 and n40 bands. Testing was conducted under real radio access network conditions using a single UE and a single UPF.

| Platform Metric | Maximum Theoretical Value [53] | Conditions | Measured Value | Standard Deviation |
|---|---|---|---|---|
| User-experienced data rate | DL: 613.5 Mb/s | Real RAN (n78), 1 UE, 1 UPF | DL: 552.0 Mb/s | 43 Mb/s |
| | UL: 140.6 Mb/s | | UL: 87.3 Mb/s | 27 Mb/s |
| | DL: 152.8 Mb/s | Real RAN (n40), 1 UE, 1 UPF | DL: 120 Mb/s | 8 Mb/s |
| | UL: 35.0 Mb/s | | UL: 29 Mb/s | 4 Mb/s |
Table 3. Application performance metrics of Immersive Robot Racing, comprising key metrics and corresponding measurement goals that are essential for ensuring effective teleoperation. The metrics cover aspects such as network performance, video streaming, and telecontrol and feedback.

| Application Metrics | Measurement Attributes | Measurement Goals |
|---|---|---|
| Network performance | Round-Trip Time | <30 ms |
| | Throughput for n40 | UL: >32 Mb/s, DL: >1 Mb/s |
| | Throughput for n78 | UL: >2 Mb/s, DL: >64 Mb/s |
| | Reliability | >99% |
| Video streaming | Streaming bitrate (UL) | <15 Mb/s |
| | Streaming latency (360°) | <300 ms |
| Telecontrol and feedback | Command-to-Reception delay | <50 ms |
| | Command-to-Execution delay | <350 ms |
| | E2E latency | <650 ms |
| | Haptic feedback delay | <30 ms |