Future Internet, Volume 16, Issue 9 (September 2024) – 45 articles

Cover Story: This study envisions a novel framework for declaratively deploying the next-generation 5G mobile core across sites following the liquid computing paradigm (provided by the Liqo operator), offloading both the user and control planes to remote clusters. A full-stack perspective is adopted: the network slicing layer is orchestrated by Kubernetes, while monitoring and observability are achieved with a service-mesh solution (Istio). The networking and service layers, in turn, are covered by the Liqo control plane, which ensures that the overlay network interconnects the sites through its APIs. Different strategies for peering between clusters are proposed, and the results show a significant improvement in network performance for in-band peering when VPN tunneling is applied between sites.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
31 pages, 5689 KiB  
Article
An Improved Routing Protocol for Optimum Quality of Service in Device-to-Device and Energy Efficiency in 5G/B5G
by Sanusi Mohammad Bunu, Omar Younis Alani and Mohammad Saraee
Future Internet 2024, 16(9), 347; https://doi.org/10.3390/fi16090347 - 23 Sep 2024
Viewed by 3562
Abstract
Some challenges when implementing the optimized link state routing (OLSR) protocol on real-life devices and simulators are unmanageable: link quality, rapid energy depletion, and high processor loads. The causes of these challenges are link state processing, unsuitable multipoint relay (MPR) nodes, and information base maintenance. This paper proposes a structured, energy-efficient link sensing and database maintenance technique. The improved OLSR in the paper replaces the OLSRv2’s HELLO, HELLO, and Topology Control (TC) message sequence with a new sequence. MPR nodes are not mandated to broadcast TC messages if the number of nodes and their OLSRv2 addresses remain unchanged after subsequent broadcasts or if no node reported 2-hop symmetric connections. The paper further proposes an MPR selection technique that considers four parameters: node battery level, mobility speed, node degree, and connection to the base station for optimum relay selection. It combines the four parameters into one metric to reduce energy dissipation and control routing overhead. The modifications were implemented in NS-3, and the simulation results show that our improved OLSR outperforms the existing OLSR, OLSRv2 and other improved routing protocols in energy consumption, routing overhead, the packet delivery ratio and end-to-end delay, as compared to the related literature. Full article
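
The combined relay-selection metric is only described above in terms of its four inputs. As a minimal illustrative sketch (the equal weights, normalization bounds, and all names below are assumptions, not the authors' formula), a score over battery level, mobility speed, node degree, and base-station connectivity might be composed like this:

```python
# Hypothetical sketch of a combined MPR-selection score over the four parameters
# named in the abstract; weights and normalization are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Neighbor:
    node_id: str
    battery: float       # remaining energy, normalized to [0, 1]
    speed: float         # mobility speed in m/s
    degree: int          # number of 1-hop neighbours
    bs_connected: bool   # has a direct link to the base station

def mpr_score(n: Neighbor, max_speed: float = 30.0, max_degree: int = 50) -> float:
    """Higher score = better relay candidate (equal weights chosen arbitrarily)."""
    mobility_penalty = min(n.speed / max_speed, 1.0)
    degree_term = min(n.degree / max_degree, 1.0)
    bs_bonus = 1.0 if n.bs_connected else 0.0
    return 0.25 * n.battery + 0.25 * (1.0 - mobility_penalty) \
         + 0.25 * degree_term + 0.25 * bs_bonus

def select_mprs(neighbors: list[Neighbor], k: int = 3) -> list[Neighbor]:
    """Pick the k best-scoring 1-hop neighbours as multipoint relays."""
    return sorted(neighbors, key=mpr_score, reverse=True)[:k]
```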

26 pages, 943 KiB  
Article
Recommendation-Based Trust Evaluation Model for the Internet of Underwater Things
by Abeer Almutairi, Xavier Carpent and Steven Furnell
Future Internet 2024, 16(9), 346; https://doi.org/10.3390/fi16090346 - 23 Sep 2024
Viewed by 5013
Abstract
The Internet of Underwater Things (IoUT) represents an emerging and innovative field with the potential to revolutionize underwater exploration and monitoring. Despite its promise, IoUT faces significant challenges related to reliability and security, which hinder its development and deployment. A particularly critical issue is the establishment of trustworthy communication networks, necessitating the adaptation and enhancement of existing models from terrestrial and marine systems to address the specific requirements of IoUT. This work explores the problem of dishonest recommendations within trust modelling systems, a critical issue that undermines the integrity of trust modelling in IoUT networks. The unique environmental and operational constraints of IoUT exacerbate the severity of this issue, making current detection methods insufficient. To address this issue, a recommendation evaluation method that leverages both filtering and weighting strategies is proposed to enhance the detection of dishonest recommendations. The model introduces a filtering technique that combines outlier detection with deviation analysis to make initial decisions based on both majority outcomes and personal experiences. Additionally, a belief function is developed to weight received recommendations based on multiple criteria, including freshness, similarity, trustworthiness, and the decay of trust over time. This multifaceted weighting strategy ensures that recommendations are evaluated from different perspectives to capture deceptive acts that exploit the complex nature of IoUT to the advantage of dishonest recommenders. To validate the proposed model, extensive comparative analyses with existing trust evaluation methods are conducted. Through a series of simulations, the efficacy of the model in capturing dishonest recommendation attacks and improving the accuracy rate of detecting more sophisticated attack scenarios is demonstrated. These results highlight the potential of the model to significantly enhance the trustworthiness of IoUT establishments. Full article
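
The filtering and weighting rules are not spelled out in this summary; the snippet below is a generic sketch under assumed formulas (exponential freshness decay, median-based outlier filtering, multiplicative weights), intended only to illustrate the kind of recommendation aggregation described:

```python
# Generic sketch: weight received recommendations by freshness, similarity, and
# recommender trust, after filtering values that deviate too far from the median.
# All formulas and thresholds are illustrative assumptions, not the paper's model.
import math
from statistics import median

def recommendation_weight(age_s: float, similarity: float, trust: float,
                          half_life_s: float = 600.0) -> float:
    freshness = math.exp(-age_s * math.log(2) / half_life_s)   # decays over time
    return freshness * similarity * trust

def aggregate(recs: list[dict], deviation_limit: float = 0.3) -> float:
    """recs: [{'value': float, 'age_s': float, 'similarity': float, 'trust': float}]"""
    center = median(r["value"] for r in recs)
    kept = [r for r in recs if abs(r["value"] - center) <= deviation_limit]
    if not kept:
        return center
    weights = [recommendation_weight(r["age_s"], r["similarity"], r["trust"]) for r in kept]
    total = sum(weights) or 1.0
    return sum(w * r["value"] for w, r in zip(weights, kept)) / total
```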

19 pages, 2263 KiB  
Review
Industry 4.0 and Beyond: The Role of 5G, WiFi 7, and Time-Sensitive Networking (TSN) in Enabling Smart Manufacturing
by Jobish John, Md. Noor-A-Rahim, Aswathi Vijayan, H. Vincent Poor and Dirk Pesch
Future Internet 2024, 16(9), 345; https://doi.org/10.3390/fi16090345 - 21 Sep 2024
Cited by 1 | Viewed by 1317
Abstract
This paper explores the role that 5G, WiFi 7, and Time-Sensitive Networking (TSN) play in driving smart manufacturing as a fundamental part of the Industry 4.0 vision. It provides an in-depth analysis of each technology’s application in industrial communications, with a focus on TSN and its key elements that enable reliable and secure communication in industrial networks. In addition, this paper includes a comparative study of these technologies, analyzing them based on several industrial use cases, supported secondary applications, industry adoption, and current market trends. This paper concludes by highlighting the challenges and future directions for adopting these technologies in industrial networks and emphasizes their importance in realizing the Industry 4.0 vision within the context of smart manufacturing. Full article
(This article belongs to the Special Issue Featured Papers in the Section Internet of Things)

28 pages, 6895 KiB  
Article
Alquist 5.0: Dialogue Trees Meet Generative Models, a Novel Approach for Enhancing SocialBot Conversations
by Ondrej Kobza, David Herel, Jan Cuhel, Tommaso Gargiani, Petr Marek and Jan Sedivy
Future Internet 2024, 16(9), 344; https://doi.org/10.3390/fi16090344 - 21 Sep 2024
Viewed by 561
Abstract
This article introduces Alquist 5.0, our SocialBot that was designed for the Alexa Prize SocialBot Grand Challenge 5. Building upon previous iterations, we present the integration of our novel neural response generator (NRG) Barista within a hybrid architecture that combines traditional predefined dialogues with advanced neural response generation. We provide a comprehensive analysis of the current state-of-the-art NRGs and large language models (LLMs), leveraging these insights to enhance Barista’s capabilities. A key focus of our development was in ensuring the safety of our chatbot and implementing robust measures to prevent profanity and inappropriate content. Additionally, we incorporated a new search engine to improve information retrieval and response accuracy. Expanding the capabilities of our system, we designed Alquist 5.0 to accommodate multimodal devices, utilizing APL templates enriched with custom features to deliver an outstanding conversational experience complemented by an excellent user interface. This paper offers detailed insights into the development of Alquist 5.0, which effectively addresses evolving user demands while preserving its empathetic and knowledgeable conversational prowess across a wide range of topics. Full article

33 pages, 1131 KiB  
Review
Artificial Intelligence to Reshape the Healthcare Ecosystem
by Gianluca Reali and Mauro Femminella
Future Internet 2024, 16(9), 343; https://doi.org/10.3390/fi16090343 - 20 Sep 2024
Viewed by 659
Abstract
This paper intends to provide the reader with an overview of the main processes that are introducing artificial intelligence (AI) into healthcare services. The first part is organized according to an evolutionary perspective. We first describe the role that digital technologies have had in shaping the current healthcare methodologies and the relevant foundations for new evolutionary scenarios. Subsequently, the various evolutionary paths are illustrated with reference to AI techniques and their research activities, specifying their degree of readiness for actual clinical use. The organization of this paper is based on the interplay three pillars, namely, algorithms, enabling technologies and regulations, and healthcare methodologies. Through this organization we introduce the reader to the main evolutionary aspects of the healthcare ecosystem, to associate clinical needs with appropriate methodologies. We also explore the different aspects related to the Internet of the future that are not typically presented in papers that focus on AI, but that are equally crucial to determine the success of current research and development activities in healthcare. Full article
(This article belongs to the Special Issue eHealth and mHealth)

13 pages, 9262 KiB  
Article
Decentralized Mechanism for Edge Node Allocation in Access Network: An Experimental Evaluation
by Jesus Calle-Cancho, Carlos Cañada, Rafael Pastor-Vargas, Mercedes E. Paoletti and Juan M. Haut
Future Internet 2024, 16(9), 342; https://doi.org/10.3390/fi16090342 - 20 Sep 2024
Viewed by 555
Abstract
With the rapid advancement of the Internet of Things and the emergence of 6G networks in smart city environments, a growth in the generation of data, commonly known as big data, is expected to consequently lead to higher latency. To mitigate this latency, mobile edge computing has been proposed to alleviate a portion of the workload from mobile devices by offloading it to nearby edge servers equipped with appropriate computational resources. However, existing solutions often exhibit poor performance when confronted with complex network topologies. Thus, this paper introduces a decentralized mechanism aimed at determining the locations of network edge nodes in such complex network topologies, characterized by lengthy execution times. Our proposal provides performance improvements and offers scalability and flexibility as networks become more complex. Experimental evaluations are conducted using the Shanghai Telecom dataset to validate our proposed approach. Full article
(This article belongs to the Special Issue Distributed Storage of Large Knowledge Graphs with Mobility Data)

31 pages, 6685 KiB  
Article
Efficient Data Exchange between WebAssembly Modules
by Lucas Silva, José Metrôlho and Fernando Ribeiro
Future Internet 2024, 16(9), 341; https://doi.org/10.3390/fi16090341 - 20 Sep 2024
Viewed by 3619
Abstract
In the past two decades, there has been a noticeable decoupling of machines and operating systems. In this context, WebAssembly has emerged as a promising alternative to traditional virtual machines. Originally designed for execution in web browsers, it has expanded to executing code in restricted and secure environments, and it stands out for its rapid startup, small footprint, and portability. However, WebAssembly presents significant challenges in data transfer and the management of interactions with the module. Its specification requires each module to have its own memory, resulting in a “share-nothing” architecture. This restriction, combined with the limitations of importing and exporting functions that only handle numerical values, and the absence of an application binary interface (ABI) for sharing more complex data structures, leads to efficiency problems; these are exacerbated by the variety of programming languages that can be compiled and executed in the same environment. To address this inefficiency, Karmem was designed and developed. It includes a new interface description language (IDL) aimed at facilitating the definition of data, functions, and relationships between modules. Additionally, a proprietary protocol, an optimized ABI for efficient data reading and minimal decoding cost, was created. A code generator capable of producing code for various programming languages was also conceived, ensuring harmonious interaction with the ABI and the foreign function interface. Finally, the compact runtime of Karmem, built atop a WebAssembly runtime, enables communication between modules and shared memory. The results of the experiments conducted show that Karmem represents an innovation in data communication for WASM in multiple environments and demonstrates the ability to overcome challenges of efficiency and interoperability. Full article

19 pages, 1298 KiB  
Article
Determinants to Adopt Industrial Internet of Things in Small and Medium-Sized Enterprises
by Abdullah Khanfor
Future Internet 2024, 16(9), 340; https://doi.org/10.3390/fi16090340 - 20 Sep 2024
Viewed by 686
Abstract
The Industrial Internet of Things (IIoT) enhances and optimizes operations and product quality by reducing expenses and preserving critical factory components. The IIoT can also be integrated into the processes of small and medium-sized enterprises (SMEs). However, several factors and risks have discouraged SMEs from adopting the IIoT. This study aims to identify the factors influencing IIoT adoption and address the challenges by conducting semi-structured interviews with experienced stakeholders in SME factories. Group quotations and thematic analysis indicate essential themes from these interviews, suggesting two primary categories, human- and machine-related factors, that affect implementation. The main human-related factor is the decision making of high-level management and owners to implement the IIoT in their plants, which requires skilled individuals to achieve IIoT solutions. Machine-related factors present several challenges, including device compatibility-, device management-, and data storage-associated issues. Comprehending and addressing these factors when deploying the IIoT can ensure successful implementation in SMEs, maximizing the potential benefits of this technology. Full article
(This article belongs to the Special Issue Industrial Internet of Things (IIoT): Trends and Technologies)

22 pages, 2352 KiB  
Article
Optimizing Network Performance: A Comparative Analysis of EIGRP, OSPF, and BGP in IPv6-Based Load-Sharing and Link-Failover Systems
by Kamal Shahid, Saleem Naseer Ahmad and Syed Tahir Hussain Rizvi
Future Internet 2024, 16(9), 339; https://doi.org/10.3390/fi16090339 - 20 Sep 2024
Viewed by 1594
Abstract
The purpose of this study is to evaluate and compare how well different routing protocols perform in terms of load sharing, link failover, and overall network performance. Wireshark was used for packet-level analysis, VMWare was used for virtualization, GNS3 was used for network simulation, and Iperf3 was used to measure network performance parameters. Convergence time, packet losses, network jitter, and network delay are the parameters that were selected for assessment. To examine the behaviors of Open Shortest Path First (OSPF) and Enhanced Interior Gateway Routing Protocol (EIGRP) routing protocols in a variety of network settings, a simulated network environment incorporating both protocols along with Border Gateway Protocol (BGP) is created for the research. The setup for the experiment entails simulating different network conditions, such as fluctuating traffic loads and connection failures, to track how the protocols function in dynamic situations. The efficiency metrics for OSPF and EIGRP with BGP are measured and evaluated using the data generated by Wireshark and Iperf3. The results of this research show that EIGRP has a better failover convergence time and packet loss percentage as compared to the OSPF. Full article

17 pages, 329 KiB  
Article
Traffic Classification in Software-Defined Networking Using Genetic Programming Tools
by Spiridoula V. Margariti, Ioannis G. Tsoulos, Evangelia Kiousi and Eleftherios Stergiou
Future Internet 2024, 16(9), 338; https://doi.org/10.3390/fi16090338 - 19 Sep 2024
Viewed by 877
Abstract
The classification of Software-Defined Networking (SDN) traffic is an essential tool for network management, network monitoring, traffic engineering, dynamic resource allocation planning, and applying Quality of Service (QoS) policies. The programmable nature of SDN, the holistic view of the network through SDN controllers, and the capability for dynamically adjustable and reconfigurable controllers are fertile ground for the development of new techniques for traffic classification. Although many research works have studied traffic classification methods in SDN environments, they have several shortcomings and gaps that need to be further investigated. In this study, we investigated traffic classification methods in SDN using publicly available SDN traffic trace datasets. We apply a series of classifiers, such as MLP (BFGS), FC2 (RBF), FC2 (MLP), Decision Tree, SVM, and GENCLASS, and evaluate their performance in terms of accuracy, detection rate, and precision. Of the methods used, GenClass appears to be more accurate in separating the categories of the problem than the rest, and this is reflected in both precision and recall. The key element of the GenClass method is that it can generate classification rules programmatically and detect the hidden associations that exist between the problem features and the desired classes. However, Genetic Programming-based techniques require significantly higher execution time compared to other machine learning techniques. This is most evident in the feature construction method, where at each generation of the genetic algorithm, a set of learning models must be trained to evaluate the generated artificial features. Full article

40 pages, 4416 KiB  
Review
A Review on Millimeter-Wave Hybrid Beamforming for Wireless Intelligent Transport Systems
by Waleed Shahjehan, Rajkumar Singh Rathore, Syed Waqar Shah, Mohammad Aljaidi, Ali Safaa Sadiq and Omprakash Kaiwartya
Future Internet 2024, 16(9), 337; https://doi.org/10.3390/fi16090337 - 14 Sep 2024
Viewed by 4184
Abstract
As the world braces for an era of ubiquitous and seamless connectivity, hybrid beamforming stands out as a beacon guiding the evolutionary path of wireless communication technologies. Several hybrid beamforming technologies are explored for millimeter-wave multiple-input multi-output (MIMO) communication. The aim is to provide a roadmap for hybrid beamforming that enhances wireless fidelity. In this systematic review, a detailed literature review of algorithms/techniques used in hybrid beamforming along with performance metrics, characteristics, limitations, as well as performance evaluations are provided to enable communication compatible with modern Wireless Intelligent Transport Systems (WITSs). Further, an in-depth analysis of the mmWave hybrid beamforming landscape is provided based on user, link, band, scattering, structure, duplex, carrier, network, applications, codebook, and reflecting intelligent surfaces to optimize system design and performance across diversified user scenarios. Furthermore, the current research trends for hybrid beamforming are provided to enable the development of advanced wireless communication systems with optimized performance and efficiency. Finally, challenges, solutions, and future research directions are provided so that this systematic review can serve as a touchstone for academics and industry professionals alike. The systematic review aims to equip researchers with a deep understanding of the current state of the art and thereby enable the development of next-generation communication in WITSs that are not only adept at coping with contemporary demands but are also future-proofed to assimilate upcoming trends and innovations. Full article

20 pages, 1843 KiB  
Article
Implementation of White-Hat Worms Using Mirai Source Code and Its Optimization through Parameter Tuning
by Yudai Yamamoto, Aoi Fukushima and Shingo Yamaguchi
Future Internet 2024, 16(9), 336; https://doi.org/10.3390/fi16090336 - 13 Sep 2024
Viewed by 630
Abstract
Mirai, an IoT malware that emerged in 2016, has been used for large-scale DDoS attacks. The Mirai source code is publicly available and continues to be a threat with a variety of variants still in existence. In this paper, we propose an implementation system for malicious and white-hat worms created using the Mirai source code, as well as a general and detailed implementation method for white-hat worms that is not limited to the Mirai source code. The white-hat worms have the function of a secondary infection, in which the white-hat worm disinfects the malicious worm by infecting devices already infected by the malicious worm, and two parameters, the values of which can be changed to modify the rate at which the white-hat worms can spread their infection. The values of the parameters of the best white-hat worm for disinfection of the malicious botnet and the impact of the value of each parameter on the disinfection of the malicious botnet were analyzed in detail. The analysis revealed that for a white-hat worm to disinfect a malicious botnet, it must be able to infect at least 80% of all devices and maintain that situation for at least 300 s. Then, by tuning and optimizing the values of the white-hat worm’s parameters, we were able to successfully eliminate the malicious botnet, demonstrating the effectiveness of the white-hat botnet’s function of eliminating the malicious botnet. Full article

25 pages, 3467 KiB  
Article
Parallel and Distributed Frugal Tracking of a Quantile
by Italo Epicoco, Marco Pulimeno and Massimo Cafaro
Future Internet 2024, 16(9), 335; https://doi.org/10.3390/fi16090335 - 13 Sep 2024
Viewed by 528
Abstract
In this paper, we deal with the problem of monitoring network latency. Indeed, latency is a key network metric related to both network performance and quality of service, since it directly impacts on the overall user’s experience. High latency leads to unacceptably slow response times of network services, and may increase network congestion and reduce the throughput, in turn disrupting communications and the user’s experience. A common approach to monitoring network latency takes into account the frequently skewed distribution of latency values, and therefore specific quantiles are monitored, such as the 95th, 98th, and 99th percentiles. We present a comparative analysis of the speed of convergence of the sequential FRUGAL-1U, FRUGAL-2U, and EASYQUANTILE algorithms and the design and analysis of parallel, message-passing-based versions of these algorithms that can be used for monitoring network latency quickly and accurately. Distributed versions are also discussed. Extensive experimental results are provided and discussed as well. Full article
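
For readers unfamiliar with the frugal family, the sequential FRUGAL-1U update rule is a single-value estimator; the sketch below shows its textbook form (the parallel, message-passing and distributed versions analysed in the paper are not reproduced here):

```python
# Textbook form of the sequential FRUGAL-1U quantile estimator: one unit of
# memory per tracked quantile, updated by +/- 1 steps with biased coin flips.
import random

def frugal_1u(stream, q: float = 0.95, m: float = 0.0) -> float:
    """Track an estimate of the q-th quantile of `stream` with one value of state."""
    for s in stream:
        if s > m and random.random() < q:
            m += 1
        elif s < m and random.random() < (1.0 - q):
            m -= 1
    return m

# Example: estimate the 95th-percentile latency (ms) of synthetic samples.
samples = [random.expovariate(1 / 20.0) for _ in range(100_000)]
print(frugal_1u(samples, q=0.95))
```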

25 pages, 2612 KiB  
Article
Measuring the Effectiveness of Carbon-Aware AI Training Strategies in Cloud Instances: A Confirmation Study
by Roberto Vergallo and Luca Mainetti
Future Internet 2024, 16(9), 334; https://doi.org/10.3390/fi16090334 - 13 Sep 2024
Viewed by 794
Abstract
While the massive adoption of Artificial Intelligence (AI) is threatening the environment, new research efforts are beginning to measure and mitigate the carbon footprint of both the training and inference phases. In this domain, two carbon-aware training strategies have been proposed in the literature: Flexible Start and Pause & Resume. Such strategies, which are natively Cloud-based, use the time resource to postpone or pause the training algorithm when the carbon intensity reaches a threshold. While such strategies have proved to achieve interesting results on a benchmark of modern models covering Natural Language Processing (NLP) and computer vision applications and a wide range of model sizes (up to 6.1B parameters), it is still unclear whether such results also hold with different algorithms and in different geographical regions. In this confirmation study, we use the same methodology as the state-of-the-art strategies to recompute the carbon emission savings of Flexible Start and Pause & Resume in the Anomaly Detection (AD) domain. The results confirm their effectiveness in two specific conditions, but the percentage reduction behaves differently from what is stated in the existing literature. Full article
(This article belongs to the Special Issue Generative Artificial Intelligence in Smart Societies)
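
As a rough illustration of the Pause & Resume idea referenced above, the sketch below suspends training whenever a carbon-intensity reading exceeds a threshold; the intensity source, threshold, and polling interval are placeholders rather than the study's actual setup:

```python
# Illustrative Pause & Resume loop: pause between epochs while grid carbon
# intensity is above a threshold, resume once it drops. The intensity source is
# a random stand-in; a real deployment would query a regional carbon-intensity
# API and checkpoint model state before pausing.
import random
import time

CARBON_THRESHOLD_G_PER_KWH = 300.0   # hypothetical threshold

def get_carbon_intensity() -> float:
    # Stand-in for a carbon-intensity query (gCO2eq/kWh).
    return random.uniform(150.0, 450.0)

def train_carbon_aware(train_one_epoch, num_epochs: int, poll_s: float = 1.0) -> None:
    for epoch in range(num_epochs):
        while get_carbon_intensity() > CARBON_THRESHOLD_G_PER_KWH:
            time.sleep(poll_s)        # pause: wait for a cleaner grid
        train_one_epoch(epoch)        # resume: run one epoch, then re-check

# Example usage with a dummy epoch function.
train_carbon_aware(lambda e: print(f"epoch {e} done"), num_epochs=3)
```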

20 pages, 6757 KiB  
Article
A Task Offloading and Resource Allocation Strategy Based on Multi-Agent Reinforcement Learning in Mobile Edge Computing
by Guiwen Jiang, Rongxi Huang, Zhiming Bao and Gaocai Wang
Future Internet 2024, 16(9), 333; https://doi.org/10.3390/fi16090333 - 11 Sep 2024
Viewed by 971
Abstract
Task offloading and resource allocation is a research hotspot in cloud-edge collaborative computing. Many existing studies have adopted single-agent reinforcement learning to solve this problem, which has some defects such as low robustness, a large decision space, and ignoring delayed rewards. In view of the above deficiencies, this paper constructs a cloud-edge collaborative computing model, together with the related task queue, delay, and energy consumption models, and formulates a joint optimization problem for task offloading and resource allocation under multiple constraints. Then, in order to solve the joint optimization problem, this paper designs a decentralized offloading and scheduling scheme based on “task-oriented” multi-agent reinforcement learning. In this scheme, we present information synchronization protocols and offloading scheduling rules and use edge servers as agents to construct a multi-agent system based on the Actor–Critic framework. To handle delayed rewards, this paper models the offloading and scheduling problem as a “task-oriented” Markov decision process. This process abandons the commonly used equidistant time slot model and instead uses dynamic, parallel slots aligned with task processing times. Finally, an offloading decision algorithm, TOMAC-PPO, is proposed. The algorithm applies proximal policy optimization to the multi-agent system and combines it with a Transformer neural network model to realize the memory and prediction of network state information. Experimental results show that this algorithm has a better convergence speed and can effectively reduce the service cost, energy consumption, and task drop rate under high load and high failure rates. For example, the proposed TOMAC-PPO can reduce the average cost by 19.4% to 66.6% compared to other offloading schemes under the same network load. In addition, the drop rate of some baseline algorithms with 50 users can reach 62.5% for critical tasks, whereas the proposed TOMAC-PPO keeps it at only 5.5%. Full article
(This article belongs to the Special Issue Convergence of Edge Computing and Next Generation Networking)

20 pages, 1730 KiB  
Article
Time-Efficient Neural-Network-Based Dynamic Area Optimization Algorithm for High-Altitude Platform Station Mobile Communications
by Wataru Takabatake, Yohei Shibata, Kenji Hoshino and Tomoaki Ohtsuki
Future Internet 2024, 16(9), 332; https://doi.org/10.3390/fi16090332 - 11 Sep 2024
Viewed by 602
Abstract
There is a growing interest in high-altitude platform stations (HAPSs) as potential telecommunication infrastructures in the stratosphere, providing direct communication services to ground-based smartphones. Enhanced coverage and capacity can be realized in HAPSs by adopting multicell configurations. To improve the communication quality, previous studies have investigated methods based on search algorithms, such as genetic algorithms (GAs), which dynamically optimize antenna parameters. However, these methods face hurdles in swiftly adapting to sudden distribution shifts from natural disasters or major events due to their high computational requirements. Moreover, they do not utilize the previous optimization results, which require calculations each time. This study introduces a novel optimization approach based on a neural network (NN) model that is trained on GA solutions. The simple model is easy to implement and allows for instantaneous adaptation to unexpected distribution changes. However, the NN faces the difficulty of capturing the dependencies among neighboring cells. To address the problem, a classifier chain (CC), which chains multiple classifiers to learn output relationships, is integrated into the NN. However, the performance of the CC depends on the output sequence. Therefore, we employ an ensemble approach to integrate the CCs with different sequences and select the best solution. The results of simulations based on distributions in Japan indicate that the proposed method achieves a total throughput whose cumulative distribution function (CDF) is close to that obtained by the GA solutions. In addition, the results show that the proposed method is more time-efficient than GA in terms of the total time required to optimize each user distribution. Full article
(This article belongs to the Special Issue Moving towards 6G Wireless Technologies—Volume II)
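
The ensemble-of-classifier-chains idea is generic enough to sketch with off-the-shelf tooling; the toy example below uses scikit-learn's ClassifierChain on synthetic multi-label data and simply averages the chains' probabilities (the paper's NN model, HAPS antenna parameters, and best-solution selection step are not reproduced):

```python
# Sketch of an ensemble of classifier chains with different random output orders.
# Synthetic data and logistic-regression base learners stand in for the paper's
# neural-network model; predictions are averaged here rather than selecting the
# single best chain as the paper does.
import numpy as np
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.multioutput import ClassifierChain

X, Y = make_multilabel_classification(n_samples=2000, n_features=20,
                                      n_classes=6, random_state=0)
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)

# Train several chains, each with a different random ordering of the outputs.
chains = [ClassifierChain(LogisticRegression(max_iter=1000),
                          order="random", random_state=i).fit(X_tr, Y_tr)
          for i in range(10)]

# Ensemble: average the per-label probabilities across chains, then threshold.
Y_prob = np.mean([c.predict_proba(X_te) for c in chains], axis=0)
Y_pred = (Y_prob >= 0.5).astype(int)
print("exact-match accuracy:", (Y_pred == Y_te).all(axis=1).mean())
```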

25 pages, 3537 KiB  
Article
A Complete EDA and DL Pipeline for Softwarized 5G Network Intrusion Detection
by Abdallah Moubayed
Future Internet 2024, 16(9), 331; https://doi.org/10.3390/fi16090331 - 10 Sep 2024
Viewed by 638
Abstract
The rise of 5G networks is driven by increasing deployments of IoT devices and expanding mobile and fixed broadband subscriptions. Concurrently, the deployment of 5G networks has led to a surge in network-related attacks, due to expanded attack surfaces. Machine learning (ML), particularly deep learning (DL), has emerged as a promising tool for addressing these security challenges in 5G networks. To that end, this work proposed an exploratory data analysis (EDA) and DL-based framework designed for 5G network intrusion detection. The approach aimed to better understand dataset characteristics, implement a DL-based detection pipeline, and evaluate its performance against existing methodologies. Experimental results using the 5G-NIDD dataset showed that the proposed DL-based models had extremely high intrusion detection and attack identification capabilities (above 99.5% and outperforming other models from the literature), while having a reasonable prediction time. This highlights their effectiveness and efficiency for such tasks in softwarized 5G environments. Full article
(This article belongs to the Special Issue Advanced 5G and Beyond Networks)
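
The exact architecture and the 5G-NIDD preprocessing are not detailed in this summary; the fragment below is only a generic stand-in for such a pipeline (synthetic flow features, a small fully connected network from scikit-learn), illustrating the scale-train-evaluate flow rather than the paper's model:

```python
# Generic detection-pipeline sketch: standardize features, train a small fully
# connected network, report accuracy. Synthetic data stand in for 5G-NIDD flows;
# the paper's EDA steps and DL architecture are not reproduced here.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=5000, n_features=30, n_informative=15,
                           random_state=0)          # stand-in for flow records
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(64, 32),
                                  max_iter=300, random_state=0))
clf.fit(X_tr, y_tr)
print("benign/attack accuracy:", clf.score(X_te, y_te))
```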

33 pages, 1632 KiB  
Article
On the Fairness of Internet Congestion Control over WiFi with Deep Reinforcement Learning
by Shyam Kumar Shrestha, Shiva Raj Pokhrel and Jonathan Kua
Future Internet 2024, 16(9), 330; https://doi.org/10.3390/fi16090330 - 10 Sep 2024
Viewed by 3294
Abstract
For over forty years, TCP has been the main protocol for transporting data on the Internet. To improve congestion control algorithms (CCAs), delay bounding algorithms such as Vegas, FAST, BBR, PCC, and Copa have been developed. However, despite being designed to ensure fairness between data flows, these CCAs can still lead to unfairness and, in some cases, even cause data flow starvation in WiFi networks under certain conditions. We propose a new CCA switching solution that works with existing TCP and WiFi standards. This solution is offline and uses Deep Reinforcement Learning (DRL) trained on features such as noncongestive delay variations to predict and prevent extreme unfairness and starvation. Our DRL-driven approach allows for dynamic and efficient CCA switching. We have tested our design preliminarily in realistic datasets, ensuring that they support both fairness and efficiency over WiFi networks, which requires further investigation and extensive evaluation before online deployment. Full article

28 pages, 3973 KiB  
Systematic Review
Edge Computing in Healthcare: Innovations, Opportunities, and Challenges
by Alexandru Rancea, Ionut Anghel and Tudor Cioara
Future Internet 2024, 16(9), 329; https://doi.org/10.3390/fi16090329 - 10 Sep 2024
Viewed by 6660
Abstract
Edge computing, which promises to process data close to its generation point and thereby reduce latency and bandwidth usage compared with traditional cloud computing architectures, has attracted significant attention lately. The integration of edge computing in modern systems takes advantage of Internet of Things (IoT) devices and can potentially improve the systems’ performance, scalability, privacy, and security, with applications in different domains. In the healthcare domain, modern IoT devices can nowadays be used to gather vital parameters and information that can be fed to edge Artificial Intelligence (AI) techniques able to offer precious insights and support to healthcare professionals. However, issues regarding data privacy and security, AI optimization, and computational offloading at the edge pose challenges to the adoption of edge AI. This paper aims to explore the current state of the art of edge AI in healthcare by using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology and analyzing more than 70 Web of Science articles. We have defined the relevant research questions and clear inclusion and exclusion criteria, and classified the research works in three main directions: privacy and security, AI-based optimization methods, and edge offloading techniques. The findings highlight the many advantages of integrating edge computing in a wide range of healthcare use cases requiring data privacy and security, near real-time decision-making, and efficient communication links, with the potential to transform future healthcare services and eHealth applications. However, further research is needed on new security-preserving methods and on better orchestrating and coordinating the load in distributed and decentralized scenarios. Full article
(This article belongs to the Special Issue Privacy and Security Issues with Edge Learning in IoT Systems)

20 pages, 2709 KiB  
Article
A New Framework for Enhancing VANETs through Layer 2 DLT Architectures with Multiparty Threshold Key Management and PETs
by Haitham Y. Adarbah, Mehmet Sabir Kiraz, Suleyman Kardas, Ali H. Al-Bayatti and Hilal M. Y. Al-Bayatti
Future Internet 2024, 16(9), 328; https://doi.org/10.3390/fi16090328 - 9 Sep 2024
Viewed by 1082
Abstract
This work proposes a new architectural approach to enhance the security, privacy, and scalability of VANETs through threshold key management and Privacy Enhancing Technologies (PETs), such as homomorphic encryption and secure multiparty computation, integrated with Decentralized Ledger Technologies (DLTs). These advanced mechanisms are employed to eliminate centralization and protect the privacy of transferred and processed information in VANETs, thereby addressing privacy concerns. We begin by discussing the weaknesses of existing VANET architectures concerning trust, privacy, and scalability and then introduce a new architectural framework that shifts from centralized to decentralized approaches. This transition applies a decentralized ledger mechanism to ensure correctness, reliability, accuracy, and security against various known attacks. The use of Layer 2 DLTs in our framework enhances key management, trust distribution, and data privacy, offering cost and speed advantages over Layer 1 DLTs, thereby enabling secure vehicle-to-everything (V2X) communication. The proposed framework is superior to other frameworks as it improves decentralized trust management, adopts more efficient PETs, and leverages Layer 2 DLT for scalability. The integration of multiparty threshold key management and homomorphic encryption also enhances data confidentiality and integrity, thus securing against various existing cryptographic attacks. Finally, we discuss potential future developments to improve the security and reliability of VANETs in the next generation of networks, including 5G networks. Full article
(This article belongs to the Section Cybersecurity)

25 pages, 2396 KiB  
Article
Internet of Conscious Things: Ontology-Based Social Capabilities for Smart Objects
by Michele Ruta, Floriano Scioscia, Giuseppe Loseto, Agnese Pinto, Corrado Fasciano, Giovanna Capurso and Eugenio Di Sciascio
Future Internet 2024, 16(9), 327; https://doi.org/10.3390/fi16090327 - 8 Sep 2024
Viewed by 772
Abstract
Emerging distributed intelligence paradigms for the Internet of Things (IoT) call for flexible and dynamic reconfiguration of elementary services, resources and devices. In order to achieve such capability, this paper addresses complex interoperability and autonomous decision problems by proposing a thorough framework based on the integration of the Semantic Web of Things (SWoT) and Social Internet of Things (SIoT) paradigms. SWoT enables low-power knowledge representation and autonomous reasoning at the edge of the network through carefully optimized inference services and engines. This layer provides service/resource management and discovery primitives for a decentralized collaborative social protocol in the IoT, based on Linked Data Notifications (LDN) over the Linked Data Platform on Constrained Application Protocol (LDP-CoAP). The creation and evolution of friend and follower relationships between pairs of devices are regulated by means of novel dynamic models assessing trust as a usefulness reputation score. The close SWoT-SIoT integration overcomes the functional limitations of existing proposals, which focus on either social device or semantic resource management only. A smart mobility case study on Plug-in Electric Vehicles (PEVs) illustrates the benefits of the proposal in pervasive collaborative scenarios, while experiments show the computational sustainability of the dynamic relationship management approach. Full article
(This article belongs to the Special Issue Social Internet of Things (SIoT))

17 pages, 3648 KiB  
Article
Privacy-Preserving Authentication Based on PUF for VANETs
by Lihui Li, Hanwen Deng, Zhongyi Zhai and Sheng-Lung Peng
Future Internet 2024, 16(9), 326; https://doi.org/10.3390/fi16090326 - 8 Sep 2024
Viewed by 535
Abstract
Conventionally, the secret key is stored in an ideal tamper-proof device so that a vehicle can implement secure authentication with the road-side units (RSUs) and other drivers. However, some adversaries can capture the secret key through physical attacks. To resist physical attacks, we propose a privacy-preserving authentication scheme based on a physical unclonable function for vehicular ad hoc networks. In the proposed scheme, a physical unclonable function is deployed on the vehicle and the RSU to provide a challenge–response mechanism. A secret key is only generated by the challenge–response mechanism when it is needed, which eliminates the need to store a long-term secret key. As a result, this prevents secret keys from being captured by adversaries, improving system security. In addition, route planning is introduced into the proposed scheme so that a vehicle can obtain the authentication keys of the RSUs on its route before vehicle-to-infrastructure authentication, which greatly speeds up authentication when the vehicle enters the RSUs’ coverage. Furthermore, a detailed analysis demonstrates that the proposed scheme achieves its security objectives in vehicular ad hoc networks. Ultimately, when contrasted with similar schemes, the performance assessment demonstrates that our proposed scheme surpasses others in terms of computational overhead, communication overhead and packet loss rate. Full article

22 pages, 8922 KiB  
Article
A Novel Framework for Cross-Cluster Scaling in Cloud-Native 5G NextGen Core
by Oana-Mihaela Dumitru-Guzu, Vlădeanu Călin and Robert Kooij
Future Internet 2024, 16(9), 325; https://doi.org/10.3390/fi16090325 - 6 Sep 2024
Viewed by 748
Abstract
Cloud-native technologies are widely considered the ideal candidates for the future of vertical application development due to their boost in flexibility, scalability, and especially cost efficiency. Since multi-site support is paramount for 5G, we employ a multi-cluster model that scales on demand, shifting the boundaries of both horizontal and vertical scaling for shared resources. Our approach is based on the liquid computing paradigm, which has the benefit of adapting to the changing environment. Despite being a decentralized deployment shared across data centers, the 5G mobile core can be managed as a single cluster entity running in a public cloud. We achieve this by following the cloud-native patterns for declarative configuration based on Kubernetes APIs and on-demand resource allocation. Moreover, in our setup, we analyze the offloading of both the Open5GS user and control plane functions under two different peering scenarios. A significant improvement in terms of latency and throughput is achieved for the in-band peering, considering the traffic between clusters is ensured by the Liqo control plane through a VPN tunnel. We also validate three end-to-end network slicing use cases, showcasing the full 5G core automation and leveraging the capabilities of Kubernetes multi-cluster deployments and inter-service monitoring through the applied service mesh solution. Full article

15 pages, 539 KiB  
Review
Machine Learning for Blockchain and IoT Systems in Smart Cities: A Survey
by Elias Dritsas and Maria Trigka
Future Internet 2024, 16(9), 324; https://doi.org/10.3390/fi16090324 - 6 Sep 2024
Cited by 1 | Viewed by 1905
Abstract
The integration of machine learning (ML), blockchain, and the Internet of Things (IoT) in smart cities represents a pivotal advancement in urban innovation. This convergence addresses the complexities of modern urban environments by leveraging ML’s data analytics and predictive capabilities to enhance the intelligence of IoT systems, while blockchain provides a secure, decentralized framework that ensures data integrity and trust. The synergy of these technologies not only optimizes urban management but also fortifies security and privacy in increasingly connected cities. This survey explores the transformative potential of ML-driven blockchain-IoT ecosystems in enabling autonomous, resilient, and sustainable smart city infrastructure. It also discusses the challenges such as scalability, privacy, and ethical considerations, and outlines possible applications and future research directions that are critical for advancing smart city initiatives. Understanding these dynamics is essential for realizing the full potential of smart cities, where technology enhances not only efficiency but also urban sustainability and resilience. Full article
(This article belongs to the Special Issue Machine Learning for Blockchain and IoT Systems in Smart City)

24 pages, 1914 KiB  
Article
Enhancing Elderly Care through Low-Cost Wireless Sensor Networks and Artificial Intelligence: A Study on Vital Sign Monitoring and Sleep Improvement
by Carolina Del-Valle-Soto, Ramon A. Briseño, Ramiro Velázquez, Gabriel Guerra-Rosales, Santiago Perez-Ochoa, Isaac H. Preciado-Bazavilvazo, Paolo Visconti and José Varela-Aldás
Future Internet 2024, 16(9), 323; https://doi.org/10.3390/fi16090323 - 6 Sep 2024
Viewed by 942
Abstract
This research explores the application of wireless sensor networks (WSNs) for the non-invasive monitoring of sleep quality and vital signs in elderly individuals, addressing significant challenges faced by the aging population. The study implemented and evaluated WSNs in home environments, focusing on variables such as breathing frequency, deep sleep, snoring, heart rate, heart rate variability (HRV), oxygen saturation, Rapid Eye Movement (REM) sleep, and temperature. The results demonstrated substantial improvements in key metrics: 68% in breathing frequency, 68% in deep sleep, 70% in snoring reduction, 91% in HRV, and 85% in REM sleep. Additionally, temperature control was identified as a critical factor, with higher temperatures negatively impacting sleep quality. By integrating AI with WSN data, this study provided personalized health recommendations, enhancing sleep quality and overall health. This approach also offered significant support to caregivers, reducing their burden. This research highlights the cost-effectiveness and scalability of WSN technology, suggesting its feasibility for widespread adoption. The findings represent a significant advancement in geriatric health monitoring, paving the way for more comprehensive and integrated care solutions. Full article

32 pages, 6053 KiB  
Article
Are Strong Baselines Enough? False News Detection with Machine Learning
by Lara Aslan, Michal Ptaszynski and Jukka Jauhiainen
Future Internet 2024, 16(9), 322; https://doi.org/10.3390/fi16090322 - 5 Sep 2024
Viewed by 3935
Abstract
False news refers to false, fake, or misleading information presented as real news. In recent years, there has been a noticeable increase in false news on the Internet. The goal of this paper was to study the automatic detection of such false news using machine learning and natural language processing techniques and to determine which techniques work the most effectively. This article first studies what constitutes false news and how it differs from other types of misleading information. We also study the results achieved by other researchers on the same topic. After building a foundation to understand false news and the various ways of automatically detecting it, this article provides its own experiments. These experiments were carried out on four different datasets, one that was made just for this article, using 10 different machine learning methods. The results of this article were satisfactory and provided answers to the original research questions set up at the beginning of this article. This article could determine from the experiments that passive aggressive algorithms, support vector machines, and random forests are the most efficient methods for automatic false news detection. This article also concluded that more complex experiments, such as using multiple levels of identifying false news or detecting computer-generated false news, require more complex machine learning models. Full article
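
One of the stronger baselines named above, a passive aggressive classifier over TF-IDF features, is easy to sketch; the two placeholder texts below are obviously not the article's datasets, and the pipeline is shown only to make the baseline concrete:

```python
# Baseline sketch: TF-IDF features + passive-aggressive classifier for false
# news detection. The example texts and labels are placeholders, not data from
# the article's four datasets.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import PassiveAggressiveClassifier
from sklearn.pipeline import make_pipeline

texts = ["stocks rally after quarterly earnings beat expectations",       # real
         "miracle cure hidden by doctors, share before it gets deleted"]  # false
labels = [0, 1]   # 0 = real news, 1 = false news

model = make_pipeline(TfidfVectorizer(stop_words="english", ngram_range=(1, 2)),
                      PassiveAggressiveClassifier(max_iter=1000, random_state=0))
model.fit(texts, labels)
print(model.predict(["doctors shocked by this one weird trick"]))
```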

35 pages, 16771 KiB  
Article
Vulnerability Detection and Classification of Ethereum Smart Contracts Using Deep Learning
by Raed M. Bani-Hani, Ahmed S. Shatnawi and Lana Al-Yahya
Future Internet 2024, 16(9), 321; https://doi.org/10.3390/fi16090321 - 4 Sep 2024
Viewed by 779
Abstract
Smart contracts are programs that reside and execute on a blockchain, like any transaction. They are automatically executed when preprogrammed terms and conditions are met. Although the smart contract (SC) must be presented in the blockchain for the integrity of data and transactions stored within it, it is highly exposed to several vulnerabilities attackers exploit to access the data. In this paper, classification and detection of vulnerabilities targeting smart contracts are performed using deep learning algorithms over two datasets containing 12,253 smart contracts. These contracts are converted into RGB and Grayscale images and then inserted into Residual Network (ResNet50), Visual Geometry Group-19 (VGG19), Dense Convolutional Network (DenseNet201), k-nearest Neighbors (KNN), and Random Forest (RF) algorithms for binary and multi-label classification. A comprehensive analysis is conducted to detect and classify vulnerabilities using different performance metrics. The performance of these algorithms was outstanding, accurately classifying vulnerabilities with high F1 scores and accuracy rates. For binary classification, RF emerged in RGB images as the best algorithm based on the highest F1 score of 86.66% and accuracy of 86.66%. Moving on to multi-label classification, VGG19 stood out in RGB images as the standout algorithm, achieving an impressive accuracy of 89.14% and an F1 score of 85.87%. To the best of our knowledge, and according to the available literature, this study is the first to investigate binary classification of vulnerabilities targeting Ethereum smart contracts, and the experimental results of the proposed methodology for multi-label vulnerability classification outperform existing literature. Full article
24 pages, 4648 KiB  
Article
A Micro-Segmentation Method Based on VLAN-VxLAN Mapping Technology
by Di Li, Zhibang Yang, Siyang Yu, Mingxing Duan and Shenghong Yang
Future Internet 2024, 16(9), 320; https://doi.org/10.3390/fi16090320 - 4 Sep 2024
Viewed by 888
Abstract
As information technology continues to evolve, cloud data centres have become increasingly prominent as the preferred infrastructure for data storage and processing. However, this shift has introduced a new array of security challenges that call for approaches distinct from traditional network security architectures. In response, the Zero Trust Architecture (ZTA) has emerged as a promising solution, with micro-segmentation identified as a crucial component for enabling continuous auditing and stringent security controls. VxLAN technology is widely used in data centres for tenant isolation and for interconnecting virtual machines within tenant environments; despite this prevalence, limited research has focused on its application in micro-segmentation scenarios. To address this gap, we propose a method based on many-to-one VLAN-to-VxLAN mapping that requires all internal data centre traffic to route through the VxLAN gateway. The method can be implemented cost-effectively, without business modifications or service disruptions, thereby overcoming the main obstacles to micro-segmentation deployment. Because it is based on standard public protocols, it is independent of specific product brands and enables a network-centric framework that avoids software compatibility issues. To assess the effectiveness of our micro-segmentation approach, we provide a comprehensive evaluation that includes network aggregation and traffic visualization. Building on the micro-segmentation implementation, we also introduce an enhanced asset behaviour algorithm that constructs behavioural profiles from the historical traffic of internal network assets, enabling rapid identification of abnormal behaviour and timely defensive action. Empirical results demonstrate that the algorithm is highly effective at detecting anomalous behaviour in intranet assets, making it a powerful tool for enhancing security in cloud data centres. In summary, the proposed approach offers a robust and efficient solution to the challenges of micro-segmentation in cloud data centres, contributing to the advancement of secure and reliable cloud infrastructure. Full article
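To make the idea of behavioural profiling concrete, the toy sketch below keeps a per-asset history of traffic volume and flags observations that deviate strongly from the learned baseline. The flow fields, window definition, and z-score threshold are assumptions for illustration; the paper's enhanced asset behaviour algorithm is not reproduced here.

```python
# Illustrative sketch only: a toy per-asset behavioural baseline, not the authors'
# algorithm. Each asset's bytes-per-interval history is used as a simple profile.
from collections import defaultdict
from statistics import mean, stdev

history = defaultdict(list)  # asset IP -> list of observed bytes per interval

def update_profile(asset: str, bytes_seen: int) -> None:
    history[asset].append(bytes_seen)

def is_anomalous(asset: str, bytes_seen: int, threshold: float = 3.0) -> bool:
    """Flag traffic volumes far outside the asset's historical baseline."""
    samples = history[asset]
    if len(samples) < 10:        # not enough history to judge
        return False
    mu, sigma = mean(samples), stdev(samples)
    if sigma == 0:
        return bytes_seen != mu
    return abs(bytes_seen - mu) / sigma > threshold

# Example: learn from past intervals, then check a new observation.
for b in [1200, 1100, 1300, 1250, 1150, 1180, 1220, 1280, 1210, 1190]:
    update_profile("10.0.0.5", b)
print(is_anomalous("10.0.0.5", 50_000))  # True: far beyond the learned baseline
```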
19 pages, 602 KiB  
Article
Workflow Trace Profiling and Execution Time Analysis in Quantitative Verification
by Guoxin Su and Li Liu
Future Internet 2024, 16(9), 319; https://doi.org/10.3390/fi16090319 - 3 Sep 2024
Viewed by 568
Abstract
Workflows orchestrate a collection of computing tasks to form complex workflow logic. Unlike traditional monolithic workflow management systems, modern workflow systems often manifest high throughput, concurrency, and scalability. Because these are service-based systems, execution time monitoring is an important part of maintaining their performance. We developed a trace profiling approach that leverages quantitative verification (also known as probabilistic model checking) to analyse complex time metrics for workflow traces. The strength of probabilistic model checking lies in its ability to express various temporal properties of a stochastic system model and to perform automated quantitative verification. We employ semi-Markov chains (SMCs) as the formal model and consider first passage time (FPT) measures in the SMCs. Our approach maintains simple, mergeable data summaries of the workflow executions and computes the moment parameters for FPT efficiently. We describe an application of our approach to AWS Step Functions, a notable workflow web service. An empirical evaluation shows that our approach is efficient for computing high-order FPT moments for sizeable workflows in practice: it can compute up to the fourth moment for a large workflow model with 10,000 states within 70 s. Full article
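As background for readers unfamiliar with FPT analysis, the expected first passage time in a plain discrete-time Markov chain can be obtained by solving a small linear system. The sketch below shows this for a toy three-state chain; it is a simplified illustration only, not the paper's semi-Markov formulation or its higher-moment computation.

```python
# Illustrative sketch only: expected first passage time to a target state in a toy
# discrete-time Markov chain. The paper uses semi-Markov chains and higher moments.
import numpy as np

# Transition matrix of a 3-state chain (rows sum to 1); state 2 is the target.
P = np.array([
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.0, 0.0, 1.0],
])
target = 2

# For non-target states i: m_i = 1 + sum_{j != target} P[i, j] * m_j,
# i.e. (I - Q) m = 1, where Q is P restricted to the non-target states.
keep = [i for i in range(P.shape[0]) if i != target]
Q = P[np.ix_(keep, keep)]
m = np.linalg.solve(np.eye(len(keep)) - Q, np.ones(len(keep)))

for state, value in zip(keep, m):
    print(f"Expected steps from state {state} to state {target}: {value:.2f}")
```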
34 pages, 2225 KiB  
Review
Graph Attention Networks: A Comprehensive Review of Methods and Applications
by Aristidis G. Vrahatis, Konstantinos Lazaros and Sotiris Kotsiantis
Future Internet 2024, 16(9), 318; https://doi.org/10.3390/fi16090318 - 3 Sep 2024
Cited by 1 | Viewed by 6242
Abstract
Real-world problems often exhibit complex relationships and dependencies, which can be effectively captured by graph learning systems. Graph attention networks (GATs) have emerged as a powerful and versatile framework in this direction, inspiring numerous extensions and applications in several areas. In this review, we present a thorough examination of GATs, covering both diverse approaches and a wide range of applications. We examine the principal GAT-based categories, including Global Attention Networks, Multi-Layer Architectures, graph-embedding techniques, Spatial Approaches, and Variational Models. Furthermore, we delve into the diverse applications of GATs in systems such as recommendation systems, image analysis, the medical domain, sentiment analysis, and anomaly detection. This review seeks to serve as a navigational reference for researchers and practitioners, emphasizing the capabilities and prospects of GATs. Full article
(This article belongs to the Special Issue State-of-the-Art Future Internet Technologies in Greece 2024–2025)
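For context, the canonical GAT layer (following the original formulation by Veličković et al.) computes attention scores, normalized coefficients, and updated node features as below; this is the standard textbook definition rather than a result specific to this review.

```latex
% Canonical GAT layer: attention score, normalized coefficient, updated feature.
e_{ij} = \mathrm{LeakyReLU}\left( \mathbf{a}^{\top} \left[ \mathbf{W}\mathbf{h}_i \,\Vert\, \mathbf{W}\mathbf{h}_j \right] \right),
\qquad
\alpha_{ij} = \frac{\exp(e_{ij})}{\sum_{k \in \mathcal{N}(i)} \exp(e_{ik})},
\qquad
\mathbf{h}_i' = \sigma\left( \sum_{j \in \mathcal{N}(i)} \alpha_{ij}\, \mathbf{W}\mathbf{h}_j \right)
```

Here W is a shared linear transformation, a is a learnable attention vector, ‖ denotes concatenation, and N(i) is the neighbourhood of node i.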