1. Introduction
The 6G era envisions connecting the human, digital, and physical worlds and revolves around their interaction: a human world of our senses, bodies, intelligence, and values; a digital world of information, communication, and computing; and a physical world of objects and organisms [
1,
2]. In this three-pillar vision, depicted in
Figure 1, the concept of Digital Twins emerges as a combination of the best features of each world [
3]. Digital Twins are 3D virtual representations that serve as real-time digital counterparts of physical entities, providing ubiquitous tools for the simulation and analysis of complex environments. Taking advantage of the Internet of Things (IoT) data gathered by sensors, machines, robots, and cameras, Digital Twins enable continuous monitoring and adaptive control and optimize resource use and decision-making by leveraging real-time data for a dynamic virtual representation of physical assets.
The use of Digital Twins is particularly advantageous in complex logistics and industrial environments, where the coordination of moving objects (e.g., vehicles, cranes, ships, or robots) is critical. Using data from sensors, cameras, and LiDARs, Digital Twins can monitor the real-time locations of these objects, optimizing the scheduling and coordination of resources. Through immersive cockpits, operators can visualize a virtual representation of the entire logistics or factory line, detecting bottlenecks and adjusting workflows in real time. The integration of artificial intelligence (AI)-powered systems, including computer vision and sensor analysis, enables Digital Twins to autonomously monitor the operation. When combined with machine learning (ML), Digital Twins can even predict hazards (e.g., traffic collisions, access to restricted areas, equipment failures, or environmental factors) and send alerts to operators.
Nonetheless, the implementation of real-time Digital Twins poses significant challenges to communication networks. Digital Twins rely on real-time data to create virtual replicas of physical environments, including video streams (for 2D object detection, collision avoidance, or security monitoring), LiDAR streams (for precise localization and 3D object detection), and telemetry (e.g., temperature, position, or operational status). Given the computing capacity required to process all this information with low delay, the data gathered by IoT devices must be offloaded to more powerful servers. However, if not managed correctly by communication systems, Digital Twins can inundate network infrastructures, reducing efficiency and increasing latency, which impedes the real-time functioning of the application. Moreover, the number of IoT devices interconnected in a specific Digital Twin application is expected to grow in the future, further saturating the networks [
4].
To mitigate this effect, computations must occur closer to the data source, reducing latency and improving response times. This is particularly beneficial in time-sensitive Digital Twin applications such as autonomous driving [
5], industrial robotics [
6], and immersive teleoperation [
7], where even milliseconds of delay can have critical consequences. As depicted in
Figure 1, Edge computing enables the efficient processing of IoT data by offloading the data gathered by different devices (e.g., sensors, machines, robots, and cameras) to nearby compute nodes, where they are transformed into information that is useful to the user. This implies not only reduced latency and improved response times but also simpler devices, which lowers their fabrication costs. In addition, Edge computing filters data locally and sends only the necessary insights to the Cloud, thereby reducing bandwidth usage and ensuring quicker decision-making [
8].
Throughout the literature, there are many Edge computing architecture proposals to implement real-time Digital Twins or support the testing of future applications for research purposes. However, few of them are implemented in private 5G networks and deployed in real logistics and industrial scenarios. To address this gap, the proposed solution involves designing and implementing a hyper-distributed IoT–Edge–Cloud computing platform that is automatically managed for real-time Digital Twins in logistics and industrial environments. This platform is intended as a living lab and testbed for future 6G applications, developed to meet the requirements of end users and designed in close collaboration with them. The design integrates the latest advancements in AI-driven analytics, machine learning-based automation, and cross-domain interoperability. The system’s hyper-distributed nature ensures that computation and decision-making occur at the optimal point—whether at the Edge, in the Cloud, or within the IoT device itself—depending on the application’s specific requirements.
This document is organized as follows:
Section 2 provides a comprehensive overview of the state of the art in Digital Twin technologies, Edge computing, and IoT integration.
Section 3 delves into the detailed design and development of the proposed hyper-distributed platform, highlighting its key components and functionalities, such as AI-driven orchestration.
Section 4 presents the experimental platform setup across two sites, the frameworks used to validate the platform, and early results on 5G core and RAN performance. The validation also includes the implementation of a Digital Twin application prototype featuring immersive remote driving, where the QoS offered by the platform is demonstrated through an extensive QoE evaluation of the application.
Section 5 discusses open challenges and potential future research directions, while
Section 6 summarizes the key findings and contributions of this work.
2. Existing Solutions and Similar Testbeds
The rapid evolution of Digital Twin technologies, Edge computing, and IoT ecosystems has paved the way for advanced platforms that aim to address real-time industrial needs [
9]. However, the development of such frameworks is still in its early stages, particularly when it comes to large-scale adoption and seamless integration across the IoT–Edge–Cloud continuum. Existing solutions [
10] primarily focus on specific applications, and while they demonstrate promising results, gaps remain in areas such as scalability, latency management, and orchestration efficiency. In this section, we review state-of-the-art frameworks, platforms, and testbeds that lay the foundation for our proposed approach, highlighting their strengths and limitations in meeting the demands of 6G-enabled Digital Twins and hyper-distributed [
11] Edge architectures, which enable flexible and decentralized resource utilization. These testbeds serve as critical experimental environments in which researchers can validate novel concepts and technologies that will ultimately shape future industrial systems. In fact, ref. [
12] explores how multimodal sensing data inform real-time Digital Twins, aligning closely with the objectives of our hyper-distributed IoT–Edge–Cloud platform for real-time industrial sensing and communication. This study’s insights into the 6G research landscape shed light on critical improvements needed to support the platform, especially in testbed configuration and operational requirements.
2.1. Digital Twin Frameworks and Platforms
Digital Twins are driving a paradigm shift across different verticals, transforming how products and services are made and delivered and allowing for the full digitalization of industrial elements (data, sensors, robotics, vehicles, etc.). Digital Twins are especially relevant for logistics and industrial environments, as they optimize the manufacturing process to reduce costs and increase operational efficiency and flexibility [
13].
In addition, Digital Twins have immense potential in healthcare [
14], smart cities [
15], and robotics [
16]. For instance, in healthcare, Digital Twins could simulate patient-specific models for personalized treatment planning, enhancing precision medicine. In smart cities, they can improve urban mobility management to reduce traffic congestion and fuel consumption. Furthermore, in robotic systems like drones and self-driving vehicles, Digital Twins enable better decision-making by providing real-time updates and scenario-based predictions, significantly improving safety and reliability.
Several Digital Twin frameworks and platforms have emerged to meet the growing demand for the real-time simulation and management of physical entities. Siemens’ MindSphere [
17] and General Electric’s Predix [
18] are among the leading platforms that provide end-to-end solutions for industrial IoT and Digital Twin implementations. MindSphere is a Cloud-based open IoT operating system that allows businesses to connect products, plants, systems, and machines, enabling robust analytics and the creation of Digital Twins for predictive maintenance and optimization. GE’s Predix, on the other hand, is a dedicated industrial IoT platform that integrates with AI/ML tools to develop Digital Twins for industrial assets. Both platforms emphasize scalability, offering integration with Edge devices, Cloud infrastructures, and third-party applications, making them suitable for large-scale industrial ecosystems. However, despite their broad capabilities, both platforms face challenges in supporting ultra-low latency and real-time synchronization across distributed environments, particularly in scenarios requiring immediate responses, such as autonomous systems or robotics. While these platforms are robust in terms of industrial IoT capabilities, comparisons with certain studies [
19,
20] reveal gaps in achieving the ultra-low latency and high scalability necessary for distributed Edge environments. Studies show that MindSphere and Predix face limitations in seamless interoperability, i.e., smooth and application-transparent integration, between Edge and Cloud layers, often relying on proprietary integration methods that restrict flexibility in hyper-distributed architectures.
Recent frameworks like IBM’s Maximo Application Suite (MAS) [
21] and Hitachi’s Lumada [
22] further exemplify industry efforts to build end-to-end IoT and Digital Twin solutions. IBM MAS focuses on asset performance management and predictive maintenance through AI-driven analytics, enhancing operational efficiency and supporting large-scale industrial IoT deployments. Lumada, developed by Hitachi, offers a modular platform for creating Digital Twins that optimize manufacturing and logistics operations through data-driven insights, with specific capabilities for Edge deployment to minimize latency and improve real-time responsiveness.
Open-source solutions such as Eclipse Ditto [
23] and FIWARE [
24] also play a significant role in advancing Digital Twin applications by offering more customizable and flexible frameworks. Eclipse Ditto focuses on managing digital representations of physical devices by providing a middleware layer that facilitates the synchronization of data and state between Edge devices and Cloud services. FIWARE, combined with its IoT and context-broker components, enables the development of Digital Twins across smart city and industrial use cases by integrating a wide range of IoT data sources. These platforms offer higher adaptability for research purposes and experimental testbeds due to their open-source nature, but they often require more intricate development and configuration efforts. Furthermore, while these platforms have made significant strides in terms of data management and resource orchestration, they still face limitations in real-time, high-volume data processing, which is essential for next-generation applications that rely on ultra-reliable low-latency communication (URLLC) and large-scale IoT deployments.
FIWARE’s recent updates emphasize modular interoperability, allowing the flexible integration of different IoT devices through open-standard APIs, yet studies indicate potential scalability challenges in handling the high-frequency data streams necessary for real-time Digital Twins [
25]. Eclipse Ditto, while effective for synchronizing device states, still requires advancements in latency handling for time-sensitive IoT applications, as highlighted in recent evaluations [
26].
Real-time applications of Digital Twins demand a tremendous amount of data collection, as well as virtualization, analytics, and rendering mechanisms. Also, a Digital Twin representation requires a high computational capacity in both the Edge and Cloud domains, which 5G may not be able to adequately fulfill, but 5G-Advanced and 6G features will [
27]. To this end, future Digital Twin applications will need to rely on the adoption of 5G-Advanced technologies [
28] that will enable the maximum exploitation of Digital Twin functionalities: (i) 5G-Advanced–IoT (5G-A-IoT) to connect machines and sensors on a large scale; (ii) a distributed IoT-to-Edge-to-Cloud continuum platform composed of resources from different providers in a transparent manner for the end users and verticals; and (iii) the integration of AI/ML analytic tools to furnish Digital Twins and the IoT-to-Edge-Cloud continuum platform with intelligence to enable real-time performance.
After a period of experimental evaluation of the first IoT generations and 5G, companies are now moving to the next level of digitalization of their supply chains. Many industrial elements, such as sensors, devices, and machines, remain poorly interconnected [
29], an opportunity that can be exploited to move one step closer to a fully connected industrial ecosystem. In this regard, the 3GPP roadmap for future releases aims to explore new 5G-A-IoT technologies to cover emerging market demands. While 5G adopts a human-centric approach mainly focused on user connectivity aspects and the early demands of verticals, 5G-Advanced needs to go a step further to address machine-centered IoT use cases (5G-A-IoT) [
30]. At the moment, radio access networks (RANs) in many industrial premises still rely on wired network technologies since wireless interfaces do not satisfy their requirements [
31], and machine-type communications scenarios are covered by LTE-based technologies such as LTE-M and NB-IoT, whose capabilities do not meet strict latency demands for real-time monitoring [
32]. Requirements in terms of capacity, latency, reliability, and flexibility for automated real-time and collaborative robotics applications can only be met with 5G Rel-18 (the first 5G-Advanced release) and beyond.
2.2. Edge and Cloud Computing Solutions for IoT Applications
The advent of hyper-distributed platforms integrating the Internet of Things (IoT) and Edge and Cloud computing is key to tackling the complex challenges in modern logistics and industrial settings, especially for real-time Digital Twin applications. As the 6G network paradigm emerges, the development of systems capable of handling vast data from IoT devices while meeting the low-latency and high-throughput demands of real-time applications becomes increasingly crucial. The integration of IoT, Edge, and Cloud layers forms a cohesive platform where data from IoT devices are first processed at Edge nodes and then analyzed further in Cloud infrastructures [
33]. By performing initial data filtering and aggregation at Edge nodes, the platform minimizes unnecessary data transmission to the Cloud, which reduces bandwidth usage and alleviates network congestion. This approach is especially beneficial in scenarios with high data volumes, where only relevant or summarized data need to be forwarded for further analysis, ultimately improving both system efficiency and responsiveness. This distribution of computational resources is necessary to optimize performance while minimizing latency, which is vital for real-time applications like Digital Twins that replicate physical environments in real time [
34]. In particular, ref. [
35] presents an architecture focused on real-time optimization and control within 6G Digital Twin Networks (DTNs). Although primarily focused on DTNs rather than IoT–Edge–Cloud systems for logistics, its architectural principles and real-time control insights indirectly contribute to the development of a hyper-distributed IoT platform, aligning with long-term 6G objectives for high-performance, ultra-reliable systems. Another example [
36] introduces a Cloud-based framework for modular and context-aware services in healthcare, but its adaptable, reconfigurable design directly applies to hyper-distributed IoT systems in logistics and manufacturing. By understanding such modular frameworks, we can develop a flexible IoT–Edge–Cloud platform capable of seamlessly handling multiple applications and efficiently managing resource allocation. This flexibility is paramount for high-performance 6G testbeds that support Digital Twin applications and adapt to varying industrial and real-time demands.
IoT-to-Cloud technologies serve as the backbone for connecting a vast array of IoT devices to centralized Cloud infrastructures, enabling data aggregation, analysis, and decision-making at scale. Key technologies facilitating this integration include protocols like Message Queuing Telemetry Transport (MQTT) [
37], Hypertext Transfer Protocol (HTTP) [
38], and Constrained Application Protocol (CoAP) [
39]. These lightweight communication protocols are designed for efficient data transfer between resource-constrained IoT devices and Cloud services, with MQTT being particularly popular for its low-bandwidth consumption and reliable messaging over unreliable networks. Coupled with these protocols, middleware platforms like AWS IoT Core [
40], Microsoft Azure IoT Hub [
41], and Google Cloud IoT [
42] offer essential services, such as device management, data storage, and analytics. These platforms provide seamless integration between IoT devices and Cloud environments, allowing organizations to scale their deployments, implement security mechanisms, and leverage Cloud-native services for data processing and AI-driven insights. Furthermore, Apache OpenWhisk, an open-source platform, allows for serverless function execution in Edge environments, providing flexible resource scaling; however, its performance in latency-sensitive applications is limited by event-driven processing speeds, as reported in recent studies [
43].
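As a minimal illustration of this publish-oriented pattern, the sketch below pushes periodic telemetry to an MQTT broker using the paho-mqtt client; the broker address, topic hierarchy, and payload fields are hypothetical placeholders rather than details of any platform discussed here.

```python
# Minimal sketch: publishing IoT telemetry over MQTT with paho-mqtt.
# Broker address, topic name, and payload fields are illustrative assumptions.
# Note: paho-mqtt >= 2.0 requires mqtt.Client(mqtt.CallbackAPIVersion.VERSION2).
import json
import time

import paho.mqtt.client as mqtt

BROKER_HOST = "edge-broker.example.local"   # hypothetical Edge-hosted broker
TOPIC = "factory/robot-1/telemetry"         # hypothetical topic hierarchy

client = mqtt.Client()
client.connect(BROKER_HOST, port=1883, keepalive=60)
client.loop_start()  # background network loop handles reconnects and ACKs

for _ in range(10):
    sample = {
        "ts": time.time(),
        "temperature_c": 41.7,      # placeholder sensor readings
        "position": [12.3, 4.5],
        "status": "operational",
    }
    # QoS 1 gives at-least-once delivery over lossy links at low overhead.
    client.publish(TOPIC, json.dumps(sample), qos=1)
    time.sleep(1.0)

client.loop_stop()
client.disconnect()
```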
In addition to communication protocols and Cloud platforms, Edge computing plays a pivotal role in enhancing IoT-to-Cloud integration by decentralizing data processing and reducing the load on Cloud infrastructures. By introducing computation closer to the IoT devices, Edge nodes can handle time-sensitive tasks locally, filtering data before they are sent to the Cloud, thus minimizing latency and bandwidth usage. Technologies such as Kubernetes [
44] and OpenStack [
45], extended to support Edge environments, facilitate the deployment of microservices across hybrid IoT-to-Cloud architectures. The containerized applications they manage enable scalable, distributed processing across heterogeneous systems, ensuring that critical tasks, such as real-time analytics and AI inference, are performed at the Edge, while the Cloud handles long-term data storage, large-scale analytics, and machine learning model training. However, while these integration technologies significantly improve IoT performance, challenges remain in orchestrating resources dynamically across distributed layers and ensuring secure, seamless interoperability between diverse IoT devices and Cloud services.
Edge-to-Cloud orchestration is critical in ensuring seamless resource management across this continuum. Orchestration systems handle the deployment and reconfiguration of services, monitor system performance, and enforce security protocols across heterogeneous environments [
46]. AI/ML techniques further enhance the orchestration process by enabling dynamic task offloading, adaptive load balancing, and real-time fault detection. These algorithms optimize task scheduling based on latency, energy efficiency, and resource availability, ensuring that services are allocated to the most suitable layer, whether the Edge, the Cloud, or the device itself, depending on the computational requirements and real-time conditions. This flexibility is essential for maintaining performance across distributed architectures. For instance, ref. [
47] offers a valuable model for achieving real-time, adaptable security for dispersed IoT systems. Using Behavior–Interaction–Priority components to ensure data-driven security and model-checking, this approach aligns well with the hyper-distributed architecture’s emphasis on security and low latency, providing an additional layer of validation and dependability for Digital Twin and IoT applications that require accurate, real-time data.
One of the major challenges in these platforms is managing the heterogeneity and volatility of Edge nodes [
46], which often consist of devices with varying computational power, storage capacity, and connectivity. To address this, modern Edge solutions employ resource orchestration frameworks that dynamically allocate tasks based on the capabilities and real-time conditions of each Edge node [
48]. Technologies such as Kubernetes with KubeEdge [
49] and Apache OpenWhisk [
50] enable the deployment and management of containerized applications across a distributed Edge infrastructure, ensuring efficient resource utilization and low-latency response times. Frameworks like Microsoft Azure IoT Edge and Google Anthos extend Kubernetes to Edge use cases, but studies of these frameworks reveal constraints in orchestrating resource-intensive tasks across decentralized nodes under variable network conditions [
51]. These platforms allow for dynamic task offloading between the Edge and Cloud, optimizing performance by processing time-sensitive tasks at the Edge and more complex, data-intensive workloads in the Cloud. Additionally, federated learning is being leveraged in Edge environments to mitigate the challenges of decentralized data by training AI models locally on Edge nodes and sharing only the learned parameters with the Cloud, reducing data transmission and enhancing privacy. However, ensuring robust security, fault tolerance, and seamless interoperability across these heterogeneous and often transient Edge devices remains a key technical hurdle in realizing the full potential of Edge computing for IoT applications.
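As a toy illustration of the federated learning pattern mentioned above, the following sketch performs a single round of federated averaging over locally trained parameters; the node counts, parameter shapes, and sample sizes are invented for the example and do not correspond to any system discussed here.

```python
# Toy federated-averaging round: Edge nodes train locally and share only their
# parameters; the Cloud aggregates them weighted by local sample counts.
# All numbers are illustrative assumptions.
import numpy as np

def federated_average(local_weights, sample_counts):
    """Weighted average of per-node parameter vectors (FedAvg aggregation)."""
    total = sum(sample_counts)
    stacked = np.stack(local_weights)                  # shape: (nodes, params)
    coeffs = np.array(sample_counts, dtype=float) / total
    return (coeffs[:, None] * stacked).sum(axis=0)     # global model parameters

# Three hypothetical Edge nodes with locally trained parameter vectors.
rng = np.random.default_rng(0)
local_weights = [rng.normal(size=4) for _ in range(3)]
sample_counts = [1200, 800, 400]                       # local dataset sizes

global_weights = federated_average(local_weights, sample_counts)
print(global_weights)  # parameters sent back to the nodes for the next round
```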
3. Design of the IoT–Edge–Cloud Platform
This section presents the architecture of the hyper-distributed IoT–Edge–Cloud platform that enables real-time Digital Twin applications for logistics and industrial scenarios by integrating advanced computing resources across IoT, Edge, and Cloud environments. This platform leverages a flexible and scalable infrastructure that dynamically orchestrates computational tasks across geographically dispersed nodes, ensuring high performance and low latency for real-time operations. The combination of Digital Twins and the self-managed IoT–Edge–Cloud computing platform with artificial intelligence (AI) and machine learning (ML) will minimize human involvement in the design and validation of the physical network, bringing several benefits such as lower labor costs and fewer human errors.
3.1. System Architecture Design
The diagram in
Figure 2 illustrates a high-level system architecture of a flexible, hyper-distributed IoT–Edge–Cloud platform designed to be deployed in two different sites. It can be scaled based on the number of connected IoT devices, the geographical distribution of Edge nodes, and the specific industrial needs, ensuring the smooth functioning of real-time Digital Twin applications. This level of scalability is essential to meeting the demanding requirements of modern logistics and industrial environments, where operational efficiency, low latency, and high reliability are paramount. The architecture includes several layers: the IoT, Edge, Cloud, and orchestration layers.
In the IoT-Device layer, a diverse set of devices—including sensors, cameras, industrial robots, and vehicles—continuously gather data from the physical environment. The number and type of devices are variable, allowing the architecture to accommodate different scenarios, from a few localized devices to hundreds distributed over larger areas. The IoT devices are connected to the Edge layer via a private 5G network, divided into the radio access network (5G RAN) and core network (5GC). The private 5G network supports slicing at the RAN, transport, and core levels, which allows the creation of multiple virtual networks that can be tailored to the different requirements of each IoT application, such as bandwidth, latency, or security needs.
In the Edge computing layer, powerful multi-core processors and memory in geographically distributed Edge nodes are strategically deployed to manage latency-sensitive data processing. These nodes, located wherever needed, handle real-time responses locally, significantly reducing the need for data transfer to the Cloud. The Edge nodes can vary in number and capacity, depending on the specific application, and can be scaled as needed to ensure low-latency performance.
The Cloud computing layer, with its vast computational resources, serves as a central hub for more resource-intensive tasks, such as long-term data storage and in-depth analysis, allowing the system to offload non-latency-sensitive workloads.
Finally, the orchestration layer is critical in managing resources, applications, and services across the whole distributed platform. It dynamically allocates tasks to the most appropriate computing resources based on real-time conditions, performance needs, and service-level agreements (SLAs). It plays a critical role in managing the lifecycle of services and applications across the IoT, Edge, and Cloud infrastructure. A crucial aspect of this layer is the dynamic placement of network functions (NFs) that comprise the 5G core, which can be distributed between the Cloud and Edge servers based on the specific requirements of the use case. For example, the User Plane Function (UPF) can be moved closer to the Edge to reduce latency for real-time applications like Digital Twins.
3.2. Orchestration and Management
Two main tasks are managed by the orchestrator: inter-node orchestration and intra-node orchestration. Inter-node orchestration handles the distribution of services across multiple geographically dispersed Edge nodes, optimizing performance and resource usage across the system. Intra-node orchestration manages the resources within each Edge node, ensuring that computational power, memory, and other resources are used efficiently to meet the specific needs of applications.
The orchestrator uses declarative configuration to manage application deployment. Through YAML files and containerized services, users define the behavior, configurations, and policies for deployment, such as where to place workloads, how many instances to deploy, and how to monitor key performance indicators (KPIs). Moreover, a service catalog facilitates the deployment of predefined applications and services, streamlining the onboarding process. These definitions allow the orchestrator to automate service deployment and dynamically adjust resource allocation based on current demand, improving both performance and efficiency.
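To make this declarative workflow concrete, the sketch below applies a minimal Deployment manifest through the official Kubernetes Python client; the namespace, node labels, image name, and resource requests are placeholders chosen for illustration, not the actual catalog entries used by the orchestrator.

```python
# Minimal sketch: declaratively deploying an Edge workload with the official
# Kubernetes Python client. Namespace, labels, and image are hypothetical.
import yaml
from kubernetes import client, config

MANIFEST = """
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dt-video-analytics
  namespace: edge-apps
spec:
  replicas: 2                       # initial instance count; adjusted later by the orchestrator
  selector:
    matchLabels: {app: dt-video-analytics}
  template:
    metadata:
      labels: {app: dt-video-analytics}
    spec:
      nodeSelector:
        topology.example.org/site: edge-site-1   # hypothetical placement label
      containers:
      - name: analytics
        image: registry.example.org/dt/video-analytics:1.0
        resources:
          requests: {cpu: "2", memory: 4Gi}
"""

config.load_kube_config()            # or load_incluster_config() inside the cluster
apps = client.AppsV1Api()
apps.create_namespaced_deployment(
    namespace="edge-apps", body=yaml.safe_load(MANIFEST)
)
```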
A key feature of this orchestrator is its AI-driven automation, which enhances orchestration by predicting system loads, optimizing resource allocation, and making intelligent deployment decisions in real time. This AI module is composed of two primary components: the Prediction Analytics Engine and the Decision Engine. Together, these components enable the orchestrator to adapt dynamically to changing conditions, improving both performance and energy efficiency while reducing operational costs.
The Prediction Analytics Engine utilizes an ML-based prediction model to anticipate future system demands. By analyzing historical CPU utilization data collected from the infrastructure and applications, the model predicts future CPU usage through an ARIMA time-series prediction approach [
52]. This prediction is then used by the Decision Engine to determine the necessary scaling actions in real time, such as adjusting the number of replicas or redistributing resources across nodes. This predictive capability helps ensure that the system can proactively manage resource demands, especially during periods of high traffic or anticipated load surges.
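As a hedged sketch of this kind of forecasting (the concrete ARIMA order, sampling interval, and library used by the Prediction Analytics Engine are not specified here, so the values below are assumptions), CPU utilization can be forecast with statsmodels as follows:

```python
# Sketch: forecasting CPU utilization with an ARIMA model (statsmodels).
# The (p, d, q) order, sampling period, and synthetic data are assumptions.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical history: average CPU utilization (%) sampled every minute.
history = pd.Series(
    50 + 10 * np.sin(np.linspace(0, 12 * np.pi, 360)) + np.random.normal(0, 2, 360),
    index=pd.date_range("2024-01-01", periods=360, freq="min"),
)

model = ARIMA(history, order=(2, 1, 2))   # assumed order; tune per workload
fitted = model.fit()

# Forecast the next 15 minutes; the Decision Engine would consume this output.
forecast = fitted.forecast(steps=15)
print(forecast.round(1))
```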
The Decision Engine evaluates real-time traffic and system conditions to determine the optimal resource allocation strategy based on one of two focus areas: performance or energy efficiency. When optimizing for performance, the Decision Engine dynamically scales resources in response to increased user traffic. This approach maintains the required Quality-of-Service (QoS) metrics, such as response time and throughput, even under heavy-load conditions. For example, if traffic spikes due to a sudden influx of IoT data or requests for Digital Twin applications, the Decision Engine increases the number of application pods in real time to handle the load, ensuring that response times remain low and throughput remains high. This real-time scaling based on AI-driven predictions allows the orchestrator to meet stringent performance requirements consistently, adapting to fluctuating demand without manual intervention.
Energy efficiency optimization is particularly prioritized during periods of low demand. The Decision Engine focuses on scaling down the resources as user traffic decreases, minimizing energy consumption while still meeting QoS requirements. For instance, in periods of low user activity, the orchestrator may reduce the number of replicas for less critical services or applications, conserving energy and preventing resource over-provisioning. This reduction is based on the Prediction Analytics Engine’s forecast of lower CPU usage, enabling the system to save energy without compromising service quality.
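A simplified sketch of the scaling decision described above is given below; the thresholds, per-pod capacity, headroom factors, and deployment name are assumptions for illustration rather than the engine's actual policy.

```python
# Simplified Decision Engine sketch: derive a replica count from the predicted
# CPU load and the active optimization mode, then apply it to the cluster.
# Thresholds, capacities, and names are illustrative assumptions.
import math
from kubernetes import client, config

PER_POD_CPU_CAPACITY = 70.0   # assumed sustainable CPU % per replica
MIN_REPLICAS, MAX_REPLICAS = 1, 10

def desired_replicas(predicted_cpu_pct: float, mode: str) -> int:
    """Map a predicted aggregate CPU load to a replica count."""
    if mode == "performance":
        headroom = 1.3        # over-provision to protect QoS under load spikes
    else:                     # "energy": scale tightly to save power
        headroom = 1.0
    replicas = math.ceil((predicted_cpu_pct * headroom) / PER_POD_CPU_CAPACITY)
    return max(MIN_REPLICAS, min(MAX_REPLICAS, replicas))

def apply_scaling(predicted_cpu_pct: float, mode: str) -> None:
    config.load_kube_config()
    apps = client.AppsV1Api()
    apps.patch_namespaced_deployment_scale(
        name="dt-video-analytics",            # hypothetical deployment
        namespace="edge-apps",
        body={"spec": {"replicas": desired_replicas(predicted_cpu_pct, mode)}},
    )

apply_scaling(predicted_cpu_pct=240.0, mode="performance")  # e.g., forecasted peak
```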
This AI-driven orchestration approach addresses key limitations in current infrastructure by enabling the real-time, adaptive management of resources across the IoT–Edge–Cloud platform. By optimizing either performance or energy efficiency as needed, the orchestrator can support the dynamic requirements of IoT and Digital Twin applications, delivering a flexible and sustainable solution. The use of predictive analytics and automated scaling ensures that the platform can handle unpredictable workloads effectively, balancing real-time response needs with sustainable energy usage.
4. Development and Validation of the Platform
This section describes the experimental development and validation of the proposed IoT-to-Edge-to-Cloud platform architecture. It is structured into several parts covering the platform's architecture, experimental setup, and evaluation metrics. First, it details the architecture of the platform, covering its multi-layered design, integration with Edge nodes, private 5G network, and advanced orchestration mechanisms supporting real-time, latency-sensitive applications. This is followed by an explanation of the experimental setup used to validate the architecture, focusing on the performance of critical components such as the 5G core and RAN. Finally, we demonstrate the platform's capabilities by evaluating its performance under real-world conditions using a demanding immersive remote driving application. The results provide insight into the platform's ability to handle complex, latency-critical scenarios, ensuring scalability, resilience, and efficient resource utilization across IoT, Edge, and Cloud environments.
4.1. Experimental Platform Setup
To validate the architecture proposal, the experimental setup shown in
Figure 3 was deployed using a two-site, geographically distributed design. The setup integrates Edge nodes at both sites, interconnected via secure links, with each site playing a vital role in managing data and running real-time applications.
Edge Site 1 hosts a main router (Router 1) that provides access to both the Cloud and two additional servers: one hosting the 5G core and the other responsible for hosting the Edge applications (Edge Server 1). A secure direct connection, protected by a firewall, links Edge Site 1 to Edge Site 2, ensuring safe communication between the two locations. Edge Site 2 similarly features a main router (Router 2) that connects Edge Server 2 to the 5G core network.
Both sites are integrated with their respective 5G radio access network (RAN), which operates under an open-source framework (OpenRAN) to facilitate the interoperability and replication of the setup. The RAN setup includes a remote Radio Unit (RU) that supports advanced radio splitting, allowing efficient communication between the Distributed Units (DUs) and Centralized Units (CUs). Edge Site 1 features a Baseband Unit (BBU) that is logically composed of a DU and CU, with the DU positioned near the RU to facilitate communication between the RU and the 5G core. The architecture supports various connectivity options between the two sites, including direct fiber connections and more complex network routes that integrate the 5G core.
To support the computational requirements of real-time Digital Twin applications, each Edge node is equipped with high-performance processors, ample RAM, and fast storage to handle intensive data processing and AI-driven analytics. As an example, one of the servers used in this experimental setup is configured with an Intel® Xeon® Gold 6548N 32-Core Processor at 2.80 GHz, paired with 4 × 64 GB DDR5 4800 MHz ECC RDIMM server memory and NVMe-based storage for faster data access. Additionally, the server supports GPU processing with an NVIDIA T4 GPU, allowing efficient handling of compute-intensive tasks, including machine learning inference and real-time analytics.
In terms of communication performance, it is important to note that this network is completely experimental, not commercial, with specific configurations to support research and development. The radio access network at Edge Site 1 is configured with a TDD pattern that has an uplink/downlink slot ratio of 3/7, prioritizing the uplink to better suit the needs of real-time applications that require robust uplink performance. The n40 band operates with a bandwidth of 20 MHz, a 256QAM modulation, and MIMO 2x2 on the downlink. Similarly, the n78 band is configured with a bandwidth of 100 MHz, a 64QAM modulation, and MIMO 2x2 on the downlink. These configurations facilitate high data throughput and low latency, crucial for maintaining seamless communication between the IoT, Edge, and Cloud layers of the platform.
The 5G core in this site deploys seven UPFs to serve different slices, allowing for effective resource allocation and testing across multiple scenarios. The UPF selection is determined by the DNN (Data Network Name) chosen by the UE. Additionally, the core database contains associations between the DNN and the slices, making it easier to apply different Quality-of-Service (QoS) parameters to UEs, which is crucial for real-time IoT use cases.
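Conceptually, the association kept in the core database can be viewed as a mapping from DNN to slice identifier, QoS profile, and serving UPF; the entries in the sketch below (S-NSSAI values, 5QIs, and DNN names) are hypothetical and only illustrate the idea.

```python
# Conceptual view of the DNN -> slice/QoS/UPF association held in the core
# database. S-NSSAI values, 5QIs, DNN names, and UPF labels are hypothetical.
DNN_PROFILES = {
    "dt-video":     {"s_nssai": {"sst": 1, "sd": "000001"}, "five_qi": 80, "upf": "upf-1"},
    "dt-telemetry": {"s_nssai": {"sst": 1, "sd": "000002"}, "five_qi": 8,  "upf": "upf-2"},
    "dt-control":   {"s_nssai": {"sst": 1, "sd": "000003"}, "five_qi": 82, "upf": "upf-3"},
}

def select_upf(requested_dnn: str) -> str:
    """Return the UPF serving the slice associated with the UE-requested DNN."""
    return DNN_PROFILES[requested_dnn]["upf"]

print(select_upf("dt-video"))   # -> "upf-1"
```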
Application orchestration across the two locations is handled by a Cloud-based orchestration platform. To enable the seamless deployment and management of Edge applications, a Kubernetes cluster is deployed at each Edge node, serving as the essential environment for hosting either entire applications or components of distributed applications. This setup ensures that any application intended to run at the Edge is appropriately deployed within the Kubernetes environment. The orchestrator manages the platform and application deployment, allowing for automatic and streamlined operation. Through this orchestration system, applications are dynamically deployed, scaled, and monitored across Edge sites, optimizing both energy efficiency and performance. This automation simplifies the overall functioning of the platform, enabling real-time adjustments to resource allocation based on current demands. An integrated AI module further enhances the orchestration process by analyzing continuous streams of data, allowing for advanced automation through the application of AI and ML techniques, as explained in
Section 3.2.
4.2. Platform Performance Results
A series of tests was conducted to validate the deployment of the hyper-distributed IoT–Edge–Cloud platform and to characterize it comprehensively. The platform architecture is essentially the same at both sites; therefore, testing was performed exclusively at Edge Site 1, and the results can be extrapolated to Edge Site 2.
To thoroughly validate the platform’s performance, tests were divided into two main sets. The first set focused specifically on evaluating the 5G core network, including its handling of control- and user-plane latencies as well as peak data rates under various load scenarios, as shown in
Table 1. The second set assessed the complete platform at Edge Site 1, as illustrated in
Table 2, encompassing both the 5G core and the radio access network (RAN), to provide a holistic view of the system’s performance, considering real-world conditions across the entire network infrastructure.
4.2.1. Evaluation of the 5G Core
The first set of tests evaluated the latency of the network core in the control and user planes, as well as measured the supported peak data rate. Both tests were performed in different load scenarios to evaluate the performance under stress or in ideal situations. In addition, some tests were repeated using the seven slices available at Edge Site 1 to evaluate the performance change when using network slicing.
To carry out these 5G core evaluation tests, the LoadCore simulation tool [
54] was employed to emulate the radio access network (RAN) and multiple UEs. LoadCore is specifically designed to evaluate 5G Standalone (5G SA) core deployments by testing performance and conformance at scale for both the control and user planes. This tool allows for the comprehensive emulation of UEs, gNodeBs (gNB), and core network functions, facilitating a broad range of tests, including capacity and latency assessments, mobility scenarios, isolated node or interface testing, and Quality-of-Experience (QoE) measurements. LoadCore also supports the validation of complex service-based architectures, which is essential for evaluating the flexibility and scalability of 5G network deployments.
LoadCore not only generates traffic through the N1, N2, and N3 interfaces to enable accurate evaluation of the user and control planes of the 5G core network but also performs in-depth measurements and provides visual representations of the results, making it easier to analyze and understand performance metrics. This functionality is especially beneficial for identifying potential performance limits under various operational conditions.
Regarding the experimental conditions, LoadCore was configured with all necessary IPs for the 5G core, as well as VLANs, and the UEs, which were also configured in the core network. Additionally, the load for each UE, the number of UPFs, and other test-specific parameters were carefully set in each test, ensuring a thorough evaluation of network performance across diverse scenarios. This level of customization allowed for precise testing of both capacity and latency, ensuring that the 5G core network could be evaluated comprehensively under realistic and high-stress conditions.
In the single-UE test with downlink TCP (Transmission Control Protocol) traffic through a single UPF, the maximum downlink throughput reached 468 Mb/s. Introducing uplink traffic reduced the downlink rate to 345 Mb/s, while uplink throughput reached 364 Mb/s. To further simulate real-world scenarios where multiple IoT devices are connected, tests with 10 simulated UEs connected to a single UPF were conducted. Each UE was configured with 50 Mb/s uplink and 100 Mb/s downlink TCP traffic. The peak downlink and uplink data rates in this test were 684 Mb/s and 336 Mb/s, respectively, as shown in
Figure 4.
Latency tests were conducted to assess the 5G system’s performance in low-latency scenarios, which is critical for determining the platform’s suitability for IoT–Edge–Cloud applications. These tests evaluated both control-plane and user-plane latencies using UDP traffic. For the user-plane latency tests, LoadCore emulated a fixed number of UEs, each configured with an equal UDP traffic data rate. The results showed that 81% of downlink packets had an OWD (one-way delay) between 125 and 250 µs, with 99.8% of downlink jitter below 125 µs. For uplink traffic, 87.8% of packets had an OWD below 125 µs, and 97% of jitter was below 125 µs.
Control-plane latency was evaluated by conducting tests where LoadCore emulated twenty UEs, each cycling between idle and data transmission states, completing a total of 200 cycles with UDP traffic. This setup created a demanding signaling scenario for measuring the core network's performance under high load. In these tests, which involved a single UPF connected to the core, the average latency was measured at 0.3 s.
These tests were repeated across all seven available slices at Edge Site 1. With all UPFs active, the peak uplink data rate reached 830 Mb/s, while the downlink peaked at 882 Mb/s. Latency tests showed that 69% of downlink packets had an OWD between 125 and 250 µs, with 24% reaching 500 µs. For the uplink, 62% of packets had an OWD between 125 and 250 µs, with 24% between 250 and 500 µs.
Both the user- and control-plane latency tests yielded promising results, with latencies aligning well with expected values [
55]. The low user-plane latencies are particularly advantageous for applications that require quick information exchange, such as Digital Twins in IoT–Edge–Cloud scenarios, where minimal delay is crucial for real-time responsiveness.
While the control-plane latency might appear elevated, it is important to highlight that these tests were carried out under high-stress conditions, with twenty instances of user equipment (UEs) connected simultaneously to a single User Plane Function (UPF), continuously transmitting traffic and switching between states. This represents an extreme case that is unlikely to occur in typical real-world scenarios. The fact that control-plane latency remains stable under such demanding conditions is highly encouraging, signaling robustness in the network’s core. In a more ideal scenario, where a single UE connects once to the core network, the control-plane latency is anticipated to be significantly lower. These results demonstrate the 5G core’s resilience and suitability for supporting complex, latency-sensitive applications across IoT, Edge, and Cloud environments.
4.2.2. Evaluation of 5G RAN and Core
While RAN simulators are instrumental in initial performance evaluations, they can overlook real-world factors such as network constraints and channel variability. To complement the core network tests, additional real, non-simulated evaluations were conducted to assess the full performance of the Edge Site 1 network. These assessments, which included tests on the n40 and n78 frequency bands, measured the maximum throughput experienced by each user, taking into account the characteristics of the radio channel, physical RAN, and core network.
The radio access network at Edge Site 1 is configured with a 3/7 uplink/downlink slot ratio TDD pattern to prioritize the uplink, which better supports the real-time demands of applications like Digital Twins. The n40 band uses a 20 MHz bandwidth with 256QAM modulation and MIMO 2x2 on the downlink, while the n78 band is configured with a 100 MHz bandwidth, 64QAM modulation, and MIMO 2x2 on the downlink, enabling high data throughput and low latency across the IoT, Edge, and Cloud layers.
All network measurements were taken using a 5G-enabled smartphone directly connected to the radio access network, performing iperf tests over a fixed duration to gauge throughput accurately. This approach provided a comprehensive view of the overall performance under real-world conditions, accurately reflecting the user experience across the entire network infrastructure.
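A throughput probe of this kind can also be scripted for repeatability. The sketch below wraps iperf3 from a 5G-connected client toward an Edge-side iperf3 server; the server address and test duration are placeholders, and the JSON field names follow iperf3's report format (which may vary slightly across versions).

```python
# Sketch: scripted iperf3 throughput probe toward an Edge server.
# Requires iperf3 on both ends; server address and duration are placeholders.
import json
import subprocess

SERVER = "10.0.0.10"     # hypothetical Edge-side iperf3 server

def run_tcp_test(reverse: bool = False, seconds: int = 30) -> float:
    """Return measured TCP throughput in Mb/s (reverse=True measures downlink)."""
    cmd = ["iperf3", "-c", SERVER, "-t", str(seconds), "-J"]
    if reverse:
        cmd.append("-R")
    report = json.loads(subprocess.run(cmd, capture_output=True, check=True).stdout)
    return report["end"]["sum_received"]["bits_per_second"] / 1e6

print(f"uplink:   {run_tcp_test(reverse=False):.1f} Mb/s")
print(f"downlink: {run_tcp_test(reverse=True):.1f} Mb/s")
```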
It is also important to note that this network setup is entirely experimental, with specific configurations tailored to support research and development, rather than a commercial deployment.
To analyze the results of these platform tests, we compared the measured throughput values to theoretical values calculated using a standardized formula from 3GPP, specifically tailored for 5G New Radio (NR) throughput estimations [
53]. This formula provides an estimate of the maximum achievable throughput for both user equipment (UE) and cell capacity, taking into account various factors, like the modulation scheme, resource block allocation, and overhead reduction.
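For reference, the peak data rate formula of 3GPP TS 38.306 (Section 4.1.2), which we take to be the formula referenced above, can be evaluated directly. In the sketch below the PRB count, overhead, scaling factor, and TDD downlink share are assumptions chosen to roughly match the n78 configuration described earlier, not the exact parameters behind the theoretical values in Table 2.

```python
# Peak throughput per 3GPP TS 38.306, Sec. 4.1.2 (single carrier, one UE).
# PRB count, overhead, scaling factor f, and TDD DL share are assumptions.
R_MAX = 948 / 1024            # maximum code rate used by the formula

def nr_peak_rate_mbps(layers, q_m, n_prb, mu, overhead, f=1.0, duty=1.0):
    """TS 38.306 peak data rate in Mb/s, scaled by an assumed TDD duty factor."""
    t_s = 1e-3 / (14 * 2 ** mu)               # average OFDM symbol duration (s)
    rate = layers * q_m * f * R_MAX * (n_prb * 12) / t_s * (1 - overhead)
    return rate * duty / 1e6

# n78, 100 MHz at 30 kHz SCS (mu=1): 273 PRBs, 64QAM (6 bits/symbol), 2 layers,
# FR1 DL overhead 0.14, and ~0.7 of slots assumed allocated to the downlink.
print(round(nr_peak_rate_mbps(layers=2, q_m=6, n_prb=273, mu=1,
                              overhead=0.14, duty=0.7), 1), "Mb/s")
```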
These tests, as shown in
Table 2 and represented graphically in
Figure 5, yielded a maximum downlink throughput of 552 Mb/s and an uplink throughput of 87.3 Mb/s for the n78 indoor band, while the n40 outdoor band achieved a downlink of 120 Mb/s and an uplink of 29 Mb/s. These measured values, being close to the theoretical limits, demonstrate the platform's robust performance under real-world conditions, effectively handling network constraints and channel variability with minimal deviation from the ideal calculations.
These preliminary tests demonstrated the architecture’s scalability and high data throughput, which are required for real-time applications like Digital Twins, while highlighting areas for improvement, such as optimizing resource usage and reducing latency during peak loads. Further iterations will enhance the orchestration mechanisms for seamless operation across the IoT–Edge–Cloud continuum.
4.3. Digital Twin Application Prototype
The experimental setup is designed to support and validate a wide range of Digital Twin applications. One notable example is an immersive remote driving application, shown in
Figure 6. This application in particular has been tested on Edge Site 1 but could also be deployed on Edge Site 2, as it shares a very similar architecture. It features the remote control of two mobile robots over the private 5G network, while a Digital Twin of the robots, the network, and the scenario is represented in the user interface.
The robots, situated outdoors, are equipped with 360º cameras and an array of sensors, such as LiDAR, GNSS, and IMUs, to capture high-fidelity environmental data in real time. These sensor data are transmitted to immersive cockpits located indoors at the laboratory, where users experience a fully immersive environment through racing seats, pedal controls, VR headsets, and haptic vests. This setup simulates an authentic remote driving experience, where users perceive the robot’s point of view and receive real-time haptic feedback based on robot–environment interactions. The integration of control, perception, and telemetry data (gathered by the Digital Twin) ensures that accurate, real-time feedback is provided to the operators, allowing for seamless bidirectional communication between the physical and digital realms.
Each robot and cockpit is connected via the 5G network, with robots using the n40 band for robust outdoor connectivity, while the indoor cockpits rely on a direct connection for enhanced bandwidth and performance. In this context, the high-speed data exchange between the robots and the control center is fundamental to ensuring the quality of both visual and haptic feedback, which is critical for enhancing the operator’s situational awareness and control accuracy.
The application’s architecture employs Edge computing to process incoming data streams directly at the Edge server, avoiding the need to route data to centralized Cloud servers, thus minimizing delays in teleoperation scenarios where split-second decision-making is essential for safe and efficient robot control. Furthermore, the distributed approach ensures system scalability, allowing the deployment of additional robots without significantly increasing network overhead or latency, which is critical as the platform expands to cover more complex missions or larger areas.
The Edge server executes two distributed applications: AI-based object detection (using models like YOLO) to identify pedestrians and obstacles and thus avoid collisions, and a Cloud Robotics Platform that plays a critical role in integrating the physical, digital, and virtual worlds. The latter forms the core of the Digital Twin: its data are stored using InfluxDB and Network-as-Code (NaC) and interpreted to create a virtual replica of the robot, the network, and its environment, which is represented in the 3D user interface.
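The following fragment sketches the kind of processing such an Edge application might perform: running an object detector on a camera frame and pushing a telemetry point to InfluxDB. The model weights, stream URL, bucket, token, and measurement names are placeholders, and the ultralytics package is used only as a convenient stand-in for the YOLO-based detector described above.

```python
# Sketch of Edge-side processing: object detection on a video frame plus a
# telemetry write to InfluxDB. Weights, URLs, tokens, and field names are
# placeholders; ultralytics serves as a generic YOLO stand-in.
import cv2
from ultralytics import YOLO
from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

detector = YOLO("yolov8n.pt")                       # assumed pretrained weights
influx = InfluxDBClient(url="http://edge-server:8086", token="dev-token", org="lab")
write_api = influx.write_api(write_options=SYNCHRONOUS)

def process_frame(frame, robot_id: str, speed_mps: float) -> None:
    """Detect obstacles in one frame and push a Digital Twin telemetry point."""
    detections = detector(frame)[0]                 # results for this single image
    point = (
        Point("robot_telemetry")                    # hypothetical measurement name
        .tag("robot_id", robot_id)
        .field("speed_mps", speed_mps)
        .field("detected_objects", len(detections.boxes))
    )
    write_api.write(bucket="digital-twin", record=point)

# Example: read a single frame from the 360-degree camera stream (RTSP).
cap = cv2.VideoCapture("rtsp://robot-1.local/stream")  # placeholder stream URL
ok, frame = cap.read()
if ok:
    process_frame(frame, robot_id="robot-1", speed_mps=1.4)
```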
4.4. Application Performance
The evaluation of application performance for the Digital Twin prototype in immersive remote driving focuses on validating whether the platform can meet stringent operational requirements under real-world conditions. As teleoperation and feedback demand low-latency, high-throughput communication, a robust assessment is essential for determining the platform’s viability in practical settings. This section introduces the core metrics and goals necessary for analyzing application performance, covering both the QoS and QoE parameters critical for remote driving applications.
The QoS metrics aim to quantify the application’s technical capabilities in terms of bitrate and latency. By setting specific measurement targets for each metric, we aim to simulate realistic usage scenarios and identify potential bottlenecks that could impact the user experience.
In addition, QoE metrics capture user-centered performance indicators, emphasizing that the QoS offered by the platform can support real-time Digital Twin applications in terms of presence, engagement, control, sensory integration, and cognitive load. User feedback from real-world trials is integral to this analysis, as it highlights how well the platform aligns with user expectations and needs.
4.4.1. QoS Metrics and Measurement Goals
To ensure a comprehensive evaluation of KPIs, each measurement process must be designed with practical, real-world conditions in mind, as shown in
Table 3.
In terms of network performance, KPIs like RTT, throughput, and reliability provide essential insights into the system’s communication robustness and efficiency. RTT measures the time for a data packet to travel to its destination and back, indicating the latency in the network path. According to our preliminary laboratory tests, we estimate that more than 30 ms latency would impede real-time video streaming. Throughput, captured in both the uplink (UL) and downlink (DL) directions, evaluates the system’s capacity to handle data-intensive applications, especially under high-load conditions. In order to eventually support the immersive remote control of four robots (two per site), we estimate that 32 Mb/s per site is required in the uplink (two 15 Mb/s 4K 360 video streams plus two 1 Mb/s telemetry flows), whereas only 1 Mb/s per site is required in the downlink (two 500 kb/s telecontrol flows). Since the cockpits are placed only at n78 in one of the sites, this band requires double the capacity of n40. Reliability is tracked continuously to assess the consistency of network connectivity, which is crucial for applications requiring uninterrupted, real-time data transmission. According to our experience, packet loss must be maintained below 1% for efficient video decoding.
For video streaming, metrics like the streaming bitrate and latency are key for assessing the quality and responsiveness of transmitted visual information. The streaming bitrate reflects the amount of video data transferred per unit time, impacting image quality and visual clarity. We are limited by the parameters of the camera used in this prototype, which generates 15 Mb/s of traffic for a 4K 360 video using the RTSP protocol. Latency in video streaming is measured for 360-degree video streams, capturing the time delay between capturing the video feed and its display. Early measurements carried out in our laboratory show that the streaming latency is 300 ms on average for RTSP streaming over the 5G network, including its visualization in the user interface. These thresholds, however, are close to the ones defined in industry best practices [
56] and empirical studies [
57] in teleoperation and VR systems.
For telecontrol and feedback, precision is critical, as it directly affects the efficacy of the operation. The Command-to-Reception delay is recorded from the moment a command is issued by the user until the corresponding action is received on the robot. The Command-to-Execution delay additionally includes the time until the corresponding action is executed by the robot. Note that the execution depends on mechanical and electrical aspects not related to data communication, which are outside the scope of our work but certainly have an impact on the user experience. Measuring these KPIs under real-world network conditions, where latency can be affected by bandwidth availability and network congestion, provides a realistic picture of how responsive the system would be in practice.
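A minimal way to instrument the Command-to-Reception KPI is to timestamp each command at the cockpit and again at the robot, assuming both clocks are NTP-synchronized; the sketch below illustrates the bookkeeping with placeholder message handling rather than the prototype's actual telecontrol protocol.

```python
# Sketch: measuring Command-to-Reception delay with sender/receiver timestamps.
# Assumes cockpit and robot clocks are NTP-synchronized; the message format and
# transport are placeholders, not the prototype's actual protocol.
import json
import time

def make_command(linear: float, angular: float) -> bytes:
    """Cockpit side: attach a send timestamp to each telecontrol command."""
    return json.dumps(
        {"t_sent": time.time(), "cmd": {"linear": linear, "angular": angular}}
    ).encode()

def on_command_received(payload: bytes) -> float:
    """Robot side: compute Command-to-Reception delay in milliseconds."""
    msg = json.loads(payload)
    delay_ms = (time.time() - msg["t_sent"]) * 1e3
    # The delay sample would be logged to the Digital Twin telemetry store here.
    return delay_ms

# Loopback example (no network path), just to show the bookkeeping:
print(f"{on_command_received(make_command(0.5, 0.1)):.1f} ms")
```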
Early results show that, on average, the Command-to-Reception delay is only 50 ms, which aligns with industry standards [
58,
59] and empirical benchmarks [
60] commonly used in latency-sensitive applications. However, the Command-to-Execution delay rises to 350 ms, and the end-to-end (E2E) latency (i.e., the Command-to-Execution delay plus the video streaming latency) reaches 650 ms; these values exceed what standards consider real-time and are due to the complexity and experimental character of the prototype.
Finally, latency thresholds for haptic feedback (i.e., under 30 ms) are based on studies [
61] indicating that delays above this level reduce the user’s perception of real-time response, diminishing the immersive quality essential for applications like remote driving.
4.4.2. QoE Metrics and Measurement Results
Next, we present the analysis of the Quality of User Experience obtained from a survey conducted among 53 participants who tested the Digital Twin application prototype featuring the immersive remote driving of two robots. The tests took place in high-density city environments, with a distance of around 7 km between the robots and the cockpits. The assessment focused on various factors, such as presence, engagement, control, sensory integration, and cognitive load, whose results help to validate the QoS offered by the platform for supporting Digital Twin applications in real-life scenarios.
The metrics for assessing presence indicate a high level of perceived immersion among participants. Specifically, 97.9% of the participants rated their sense of presence at level 4 or above, suggesting that the system successfully created a convincing remote environment. This strong sense of presence is critical for applications requiring teleoperation, as it directly impacts the operator’s situational awareness and effectiveness.
Regarding engagement, a combined 95.8% of participants rated their engagement in the teleoperation experience at level 4 or above. This indicates that the system’s immersive elements, such as real-time video streaming and responsive controls, were effective in maintaining user interest and focus throughout the demonstration.
The participants’ perceptions of control over the remote robot were also high, with 75% of them rating their control experience at level 4 or above. This suggests that the system’s teleoperation interface was intuitive and responsive, which is crucial for real-time control scenarios where precision and responsiveness are required.
Sensory integration, which measures the effectiveness of combining visual and haptic feedback, received less favorable ratings and currently needs improvement. Only 39.6% of participants rated it at level 4 or 5, indicating that the multimodal feedback did not yet provide a fully cohesive experience. Effective sensory integration is vital for enhancing the user's interaction with remote systems, especially in scenarios requiring fine-grained control and quick reaction times. It should also be noted that most participants were not accustomed to haptic feedback in a remote driving scenario.
The cognitive load associated with the teleoperation task was assessed to understand how mentally demanding the experience was for participants. While 33.3% rated the task as highly demanding (level 5), a sizable proportion (22.9%) found it relatively manageable (level 2). These mixed results indicate that many users could focus on control and interaction without being overwhelmed, but that the cognitive demand remains high for a substantial share of participants.
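For clarity, the sketch below illustrates how Likert-scale responses of this kind can be aggregated into the percentages reported above; the ratings in the example are illustrative placeholders, not the actual survey data.

def share_at_or_above(ratings, threshold=4):
    """Percentage of 1-5 Likert ratings at or above the given level."""
    return 100.0 * sum(r >= threshold for r in ratings) / len(ratings)

# Hypothetical responses (illustrative only; the real survey had 53 participants).
survey = {
    "presence":            [5, 4, 5, 4, 5],
    "engagement":          [4, 5, 4, 4, 5],
    "control":             [4, 3, 5, 4, 4],
    "sensory_integration": [3, 4, 2, 3, 4],
}

for dimension, ratings in survey.items():
    print(f"{dimension}: {share_at_or_above(ratings):.1f}% at level 4 or above")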
The results suggest that, while sensory integration and cognitive load still need to be optimized to better accommodate less experienced users, the platform successfully offers the QoS and QoE required to support generic Digital Twin applications, as well as immersive and interactive experiences.
5. Open Issues and Future Work
One of the key challenges that need to be addressed by the architecture of this platform is ensuring energy efficiency and sustainability across the IoT–Edge–Cloud continuum. The orchestration layer of the platform should be improved, with energy efficiency as a core objective, leveraging AI/ML algorithms to optimize the allocation of resources and reduce unnecessary energy expenditure. The orchestration layer should intelligently distribute tasks to Edge nodes with the appropriate computational capacity and energy profile, minimizing power usage while ensuring the required performance for time-sensitive applications. Techniques like neural network pruning and quantization, as well as the development of low-power hardware, will be necessary to reduce the energy demands of AI/ML processing. In particular, the platform faces technical challenges in managing the energy consumption of high-performance Edge nodes during peak processing loads, which require fine-grained scheduling and adaptive power management mechanisms. Future work could also explore implementing real-time energy monitoring at both the node and cluster levels to detect and adjust for energy spikes, thus optimizing consumption dynamically.
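As a hedged sketch of the kind of energy-aware placement heuristic discussed above (the node attributes, scoring weights, and values are illustrative assumptions rather than the platform's actual orchestration logic):

from dataclasses import dataclass

@dataclass
class EdgeNode:
    name: str
    free_cpu: float          # available CPU cores
    power_per_core_w: float  # approximate marginal power draw per core
    latency_ms: float        # estimated network latency to the data source

def place_task(nodes, cpu_needed, latency_budget_ms, energy_weight=0.7):
    """Pick the feasible node that minimizes a weighted energy/latency score."""
    feasible = [n for n in nodes
                if n.free_cpu >= cpu_needed and n.latency_ms <= latency_budget_ms]
    if not feasible:
        return None  # the orchestrator would fall back to the Cloud or queue the task
    def score(n):
        energy = cpu_needed * n.power_per_core_w
        return energy_weight * energy + (1 - energy_weight) * n.latency_ms
    return min(feasible, key=score)

nodes = [EdgeNode("edge-a", 4, 6.0, 5), EdgeNode("edge-b", 8, 9.0, 2)]
print(place_task(nodes, cpu_needed=2, latency_budget_ms=10).name)  # -> "edge-a"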
Future work should also explore the potential of green computing technologies to reduce the platform's environmental impact. One possibility is the integration of renewable energy sources into Edge nodes and IoT devices: solar-powered IoT sensors or energy-harvesting technologies, for example, could significantly reduce the platform's energy consumption, especially in remote or off-grid areas. Research into energy-efficient hardware, such as low-power microcontrollers and processors, will also be critical for minimizing the platform's overall energy footprint. Another direction is the adoption of low-power AI inference engines specifically optimized for Edge applications, which could apply workload-aware adaptation techniques that dynamically scale AI model complexity according to energy constraints without compromising performance, as sketched below. Additionally, Edge node designs with modular renewable energy inputs (such as solar or wind modules) could facilitate the deployment of sustainable, off-grid solutions, particularly beneficial for rural or hard-to-reach IoT sites.
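A minimal sketch of such workload-aware adaptation follows; the model variants, energy figures, and budget values are hypothetical and serve only to illustrate scaling model complexity to the available energy budget.

# Hypothetical model variants ordered from most to least demanding;
# the relative energy figures are illustrative placeholders.
MODEL_VARIANTS = [
    {"name": "full",      "relative_energy": 1.00},
    {"name": "pruned",    "relative_energy": 0.45},
    {"name": "quantized", "relative_energy": 0.20},
]

def select_model(energy_budget_ratio):
    """Pick the most capable variant whose relative energy fits the budget.

    energy_budget_ratio is the fraction (0-1) of the node's nominal inference
    power currently available (e.g., derived from a solar or battery input).
    """
    for variant in MODEL_VARIANTS:
        if variant["relative_energy"] <= energy_budget_ratio:
            return variant["name"]
    return MODEL_VARIANTS[-1]["name"]  # fall back to the lightest model

print(select_model(0.5))   # -> "pruned"
print(select_model(0.1))   # -> "quantized" (fallback)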
Predictive maintenance, driven by AI/ML, can also play a significant role in extending the lifecycle of hardware components and reducing waste. By analyzing data from IoT devices and Edge nodes, the system could predict potential failures before they occur, minimizing downtime and reducing the need for frequent hardware replacements. Beyond hardware maintenance, predictive analytics could be used for optimizing network performance, predicting congestion, or identifying potential security threats, and the system could anticipate when certain devices or components are likely to experience low activity and adjust their energy consumption accordingly. Additionally, by simulating future system states, Digital Twins could help in planning energy-efficient expansion strategies, such as determining when and where to deploy new Edge nodes to meet growing demand without over-provisioning resources; such algorithms could take into account not only the current workload but also the energy profile of individual devices and their surrounding environment. To address predictive maintenance challenges, the platform could implement lightweight, on-device AI modules that provide real-time analytics on component health, thus minimizing data transfer and processing costs. For instance, anomaly detection algorithms tailored to the behavior of specific hardware components could be deployed at the Edge, enabling early fault detection with minimal impact on computational resources, which is crucial for sustaining continuous operations in resource-constrained environments. Finally, future work could explore adaptive Digital Twin models that evolve based on real-time operational data, enabling more accurate simulations for predictive maintenance.
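As a hedged example of the lightweight, on-device analytics mentioned above, the following sketch applies a simple rolling z-score test to a telemetry stream; the algorithm, window size, and thresholds are illustrative assumptions, not the platform's actual predictive maintenance pipeline.

from collections import deque
import math

class RollingZScoreDetector:
    """Flags telemetry samples that deviate strongly from a recent window."""

    def __init__(self, window=100, threshold=3.0):
        self.window = deque(maxlen=window)
        self.threshold = threshold

    def update(self, value):
        anomalous = False
        if len(self.window) >= 10:  # wait for a minimal history
            mean = sum(self.window) / len(self.window)
            var = sum((x - mean) ** 2 for x in self.window) / len(self.window)
            std = math.sqrt(var)
            if std > 0 and abs(value - mean) / std > self.threshold:
                anomalous = True
        self.window.append(value)
        return anomalous

detector = RollingZScoreDetector()
for temperature in [40.1, 40.3, 39.9] * 10 + [55.0]:  # synthetic drive temperature
    if detector.update(temperature):
        print("possible component fault at", temperature)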
Security and privacy challenges are another key consideration that cannot be overlooked in future work. With an increasing number of devices connected to this hyper-distributed platform, the attack surface expands considerably, creating vulnerabilities at various levels, from the IoT layer to the Cloud. As more data are collected and exchanged between nodes, privacy concerns also grow, especially in applications that involve personal or sensitive data. AI/ML could play a role in detecting and mitigating attacks in real time, but lightweight, efficient algorithms are necessary to avoid additional computational burden. Techniques like federated learning, where data remain local while models are trained in a decentralized fashion, could address privacy concerns while still allowing for the development of intelligent systems. The platform's hyper-distributed nature also introduces unique challenges in securing device-to-Cloud data flows and in ensuring data integrity across multiple network slices; one specific challenge lies in detecting and mitigating cross-layer attacks that exploit both the IoT and Cloud layers. Future work should therefore explore cross-layer intrusion detection systems (IDSs) that use federated learning across Edge nodes to identify attack patterns without compromising data privacy, as sketched below. Developing these IDS solutions will require algorithms optimized for Edge deployment, with a minimal footprint to avoid disrupting essential real-time applications.
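The sketch below illustrates plain federated averaging, in which Edge nodes share only locally trained model weights, never raw traffic data, with an aggregator; the weights and sample counts are illustrative placeholders, and a production IDS would rely on a full federated learning framework rather than this simplified aggregation.

# Minimal federated averaging over locally trained IDS model weights.
# Each Edge node trains on its own traffic and shares only its weight
# vector; the aggregator never sees raw data. Weights are plain lists
# here purely for illustration.

def federated_average(local_weights, sample_counts):
    """Weighted average of per-node model weights by local sample count."""
    total = sum(sample_counts)
    dim = len(local_weights[0])
    global_weights = [0.0] * dim
    for weights, count in zip(local_weights, sample_counts):
        for i in range(dim):
            global_weights[i] += weights[i] * count / total
    return global_weights

# Hypothetical weights from three Edge nodes and their local sample counts.
node_weights = [[0.2, 0.8, -0.1], [0.3, 0.7, 0.0], [0.1, 0.9, -0.2]]
node_samples = [1000, 4000, 500]
print(federated_average(node_weights, node_samples))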
An area that deserves further exploration is interoperability and standardization. The platform integrates a variety of devices, applications, and communication protocols, making it imperative that these components work seamlessly together. The lack of standardization in IoT ecosystems remains a significant hurdle in achieving full interoperability. Future work should focus on establishing common protocols, communication standards, and APIs that allow diverse devices and applications to interact with each other. Standardization is crucial for scalability, enabling the integration of new technologies without disrupting existing workflows. Open architectures and collaboration between industry and academia could help in defining these global standards. To address interoperability, research could also focus on developing open-source interoperability frameworks for multi-vendor IoT ecosystems, enabling standardized data exchange protocols across devices and Cloud services. For instance, future work could involve implementing Edge-centric gateways that translate proprietary device protocols into a common framework, like MQTT or CoAP, facilitating seamless integration. Collaboration with standardization bodies (e.g., IEEE, ISO) could also help establish industry-wide protocols for Edge–Cloud interoperability.
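As a minimal sketch of such an Edge-centric gateway (the vendor payload format and the publish() callable are hypothetical placeholders, e.g., standing in for an MQTT client's publish method):

import json

# Hypothetical per-vendor parsers mapping proprietary payloads to a
# common {"device_id", "metric", "value", "unit"} representation.
def parse_vendor_a(raw: bytes) -> dict:
    device_id, metric, value = raw.decode().split(";")
    return {"device_id": device_id, "metric": metric, "value": float(value), "unit": "C"}

PARSERS = {"vendor_a": parse_vendor_a}

def gateway_forward(vendor: str, raw: bytes, publish) -> None:
    """Translate a proprietary payload and publish it on a common topic.

    publish(topic, payload) stands in for an MQTT or CoAP client call.
    """
    normalized = PARSERS[vendor](raw)
    topic = f"dt/{normalized['device_id']}/{normalized['metric']}"
    publish(topic, json.dumps(normalized))

# Usage with a stand-in publisher that simply prints the message.
gateway_forward("vendor_a", b"crane-07;temperature;41.5", lambda t, p: print(t, p))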
Finally, social and ethical considerations must be addressed as hyper-distributed platforms become more embedded in critical infrastructures, such as healthcare, transportation, and smart cities. Issues of data ownership, transparency, and the ethical use of AI in decision-making processes must be carefully considered. Future work should explore the development of explainable AI (XAI) models that provide human-readable explanations for decisions made by AI systems; this will be essential for ensuring trust and accountability in applications where AI plays a central role in real-time decision-making. One concrete step in this direction would be to integrate XAI components that provide clear rationales for decisions in critical applications, such as healthcare monitoring or autonomous systems. Additionally, it will be crucial to develop governance frameworks for data ownership and access rights, particularly for sensitive or personal data generated by IoT devices. Such frameworks could help define roles and responsibilities among users, operators, and developers to maintain ethical standards across the platform.
6. Conclusions
This paper presents the design of a flexible, hyper-distributed IoT–Edge–Cloud platform aimed at enabling real-time Digital Twins in industrial and logistics environments. The platform is intended to serve as an innovative living lab and research testbed for future 6G applications. It integrates both open-source and commercial solutions, along with a private 5G network, to connect machines and sensors on a large scale.
By incorporating artificial intelligence (AI) and machine learning (ML) capabilities, the IoT–Edge–Cloud platform optimizes the use of computing and networking resources for real-time applications, significantly reducing human intervention in the design and validation of the physical network. This approach offers several advantages, including lower labor costs and a reduction in human errors.
The platform’s application in supporting immersive remote control applications, such as teleoperation, has demonstrated its potential in scenarios requiring low latency and precise control, which are essential for industries prioritizing real-time responsiveness. Real-world tests validated the platform’s performance in supporting high-throughput, low-latency applications. In the n78 band, the platform achieved downlink speeds of up to 552 Mb/s and uplink speeds of 87.3 Mb/s, close to theoretical maxima, demonstrating its capability for data-intensive applications. Similarly, tests in the n40 band revealed a downlink of 120 Mb/s and an uplink of 29 Mb/s. These results confirm the platform’s suitability for supporting Digital Twin applications in realistic settings, providing high QoS across the network infrastructure.
Additionally, the platform's support for Digital Twins was validated via QoE assessments conducted on an immersive remote driving prototype. Nearly 98% of participants reported a strong sense of presence and over 95% reported high engagement during teleoperation, reflecting the application's capability to deliver an immersive user experience. Perceived control was also rated highly (75% at level 4 or above), whereas sensory integration, rated favorably by only 39.6% of participants, was identified as the main area for improvement.
In summary, the platform has been deployed in a real-world setting, accompanied by an experimental setup for an emerging application featuring immersive remote driving. This experiment demonstrated the platform's flexibility and scalability as a testbed for future 6G applications and use cases. Through these applications, the platform addresses key challenges in real-time feedback and control, positioning it as a critical infrastructure for future innovations in remote operations and interactive digital environments. The paper also discusses open challenges and future research directions, particularly the enhancement of energy efficiency within the IoT–Edge–Cloud continuum; future efforts will explore energy optimization mechanisms to further support sustainable, high-performance applications.