Article

EdgeGuard: Decentralized Medical Resource Orchestration via Blockchain-Secured Federated Learning in IoMT Networks

by
Sakshi Patni
and
Joohyung Lee
*
School of Computing, Gachon University, Seongnam 13120, Republic of Korea
*
Author to whom correspondence should be addressed.
Future Internet 2025, 17(1), 2; https://doi.org/10.3390/fi17010002
Submission received: 19 October 2024 / Revised: 29 November 2024 / Accepted: 16 December 2024 / Published: 25 December 2024
(This article belongs to the Special Issue Edge Intelligence: Edge Computing for 5G and the Internet of Things)

Abstract

Effective management of medical data and resources has become essential for enhancing patient outcomes and operational efficiency in an age when digital innovation in healthcare is increasingly important. The rapid growth of the Internet of Medical Things (IoMT) is transforming healthcare data management, but it also raises serious issues such as data privacy, malicious attacks, and service quality. In this study, we present EdgeGuard, a novel decentralized architecture that combines blockchain technology, federated learning, and edge computing to address these challenges and coordinate medical resources across IoMT networks. EdgeGuard uses a privacy-preserving federated learning approach that keeps sensitive medical data local while enabling collaborative model training. To prevent data modification and unauthorized access, it employs a blockchain-based access control and integrity verification system. EdgeGuard leverages edge computing to improve system scalability and efficiency by offloading computational tasks from resource-constrained IoMT devices. We make several technical contributions, including a lightweight blockchain consensus mechanism designed for IoMT networks, an adaptive edge resource allocation method based on reinforcement learning, and a federated learning algorithm optimized for medical data with differential privacy. We also design a smart-contract-based access control system and a secure multi-party computation protocol for model updates. EdgeGuard outperforms existing solutions in terms of computational performance, data utility, and privacy protection across a wide range of real-world medical datasets. This work advances secure, effective, and privacy-preserving medical data management in IoMT ecosystems while maintaining high standards for data security and resource efficiency, enabling large-scale collaborative learning in healthcare.

1. Introduction

In the era of digital healthcare transformation, the Internet of Medical Things (IoMT) has emerged as a form of critical infrastructure, enabling healthcare providers to collect, analyze, and utilize vast amounts of patient data efficiently [1]. The proliferation of IoMT devices has facilitated unprecedented levels of patient monitoring and care, enabling rapid innovations in personalized medicine and remote healthcare delivery [2]. The goal of collaborative healthcare management in geographically distributed IoMT networks is to efficiently allocate and manage resources across various edge devices to meet healthcare demands effectively. However, as IoMT adoption grows, so do the challenges associated with ensuring security, privacy, and compliance, particularly in distributed healthcare environments [3]. A recent survey highlights the landscape of IoMT adoption, revealing that wearable devices are the leading IoMT category, utilized by 69.6% of surveyed healthcare organizations [2]. Remote patient monitoring systems follow with a 62.3% adoption rate, and smart medical equipment is used by 60.2% of organizations. This diverse usage emphasizes the necessity for robust strategies to manage multi-device IoMT environments effectively [4,5].
Geographically distributed IoMT networks, where medical devices and edge computing resources are deployed across multiple locations, offer key advantages such as reduced latency in critical care scenarios, improved resilience, and enhanced compliance with regional health data regulations. These architectures are vital for meeting the demands of global healthcare delivery and ensuring uninterrupted service during localized disruptions or emergencies [6]. However, they also raise significant security concerns. The need to protect sensitive health information across a variety of locations, defend against a broader spectrum of cyber threats, and ensure compliance with distinct healthcare regulatory standards adds layers of complexity [7]. Although several advanced security measures have been designed for IoMT, significant barriers still remain in meeting the security requirements of geographically dispersed healthcare networks. These problems include communication overhead that causes latency in time-sensitive medical applications, challenges with data integrity across heterogeneous medical devices, and scalability issues arising from the inability of current security mechanisms to manage growing volumes of health data [8]. Additionally, medical device configurations are often non-standardized, which leads to hazards and inconsistencies. Inadequate visibility in auditing and monitoring further contributes to the inability to identify and address security issues in healthcare settings [9]. Furthermore, providing comprehensive security demands substantial processing and storage capacity from medical devices with limited resources, affecting their overall effectiveness and cost. Due to these difficulties, an effective solution for IoMT contexts is urgently needed. The proposed architecture, EdgeGuard, combines federated learning's privacy guarantees with blockchain's decentralized security [9,10] and is specifically designed for the IoMT. This holistic framework, on the one hand, enhances health data security and optimizes resource management in collaborative healthcare environments and, on the other, permits safe and effective cooperation between dispersed medical devices and edge nodes.
EdgeGuard is an innovative framework that seeks to redefine the contours of secure and efficient data management in IoMT networks. It leverages blockchain technology, adaptive federated learning, and edge computing capabilities to address significant limitations of current approaches in healthcare data security and privacy [3,11]. EdgeGuard implements a novel decentralized architecture that optimizes resource utilization across diverse IoMT devices while ensuring robust data privacy and integrity [12]. Our framework introduces several key innovations: a lightweight blockchain consensus mechanism specifically designed for IoMT networks, an adaptive aggregation function for privacy-preserving federated learning, and intelligent resource allocation through reinforcement learning [13]. As illustrated in Figure 1, EdgeGuard introduces a secure blockchain layer that enables safe collaborative learning while preventing information leakage of patient data. The main contributions of this work are as follows:
  • Privacy-preserving federated learning architecture: A novel adaptive aggregation mechanism that enables secure model training across distributed healthcare institutions [14]. This architecture incorporates differential privacy techniques and secure aggregation protocols, ensuring patient privacy while optimizing model performance through quality-aware aggregation.
  • IoMT-optimized blockchain consensus: A lightweight consensus mechanism specifically engineered for resource-constrained medical devices, providing robust security guarantees while maintaining efficiency. This mechanism ensures data integrity and creates immutable audit trails for regulatory compliance.
  • Intelligent resource management: Advanced optimization techniques for heterogeneous IoMT environments include the following:
    A dynamic model complexity adaptation based on available computational resources.
    Adaptive learning rate scheduling, considering resource constraints.
    Quality-aware device selection for optimal federated learning rounds.
    Efficient model update compression for bandwidth-constrained scenarios.
  • Comprehensive performance framework: A multi-dimensional evaluation framework that assesses not only diagnostic accuracy but also computational efficiency, communication effectiveness, energy consumption, and fairness across diverse IoMT devices, ensuring practical deployability in real-world healthcare settings.
This paper is structured as follows: Section 2 discusses the related work. Section 3 presents EdgeGuard's problem formulation, including the system model, assumptions, threat model, problem statement, and design goals. Section 4 describes the proposed EdgeGuard framework in detail, along with its operational design and algorithm analysis. Section 5 discusses the experimental setup, results, and analysis, and Section 6 concludes the paper.

2. Related Work

The integration of blockchain technology with federated learning in IoMT networks has emerged as a significant research area in recent years, driven by concerns over data privacy and security vulnerabilities in healthcare systems. In this section, we will analyze some of the most recent advances in this area, focusing on three main aspects: blockchain mechanisms, federated learning in healthcare, and integrated decentralized blockchain-FL architectures. In the context of IoMT networks, ref. [8] presented a security framework that provides security when medical data are being transferred based on a combination of encryption techniques, pattern recognition modules, and adaptive learning mechanisms. Although this approach has made advances in both the detection of anomalies as well as attack resistance, it neither considers the quality of data in a distributed setting nor does it take into account the computational capabilities of IoMT products. In addition, the framework lacks mechanisms for enabling decentralized yet secure collaborative machine learning across healthcare institutions—a must-have component to enhance diagnostic models while guaranteeing the privacy of records. This shortcoming is something EdgeGuard addresses with its blockchain-secured federated learning architecture.
Mohammed et al. [15] proposed a federated learning paradigm toward collaborative usage of health information while keeping it private based on both secure multi-party computations and differential privacy. Even though it has provided significant privacy preservation, there has been no proper consideration given to securing the model updates, the participating devices, or the data quality. A recent work proposed by Biken Singh et al. [16] introduces a blockchain-supported federated learning system for WBANs, focusing on energy efficiency and privacy through QNNs, differential privacy, and homomorphic encryption. However, their approach primarily focuses on energy optimization and basic privacy preservation, with no consideration of data quality assessment and device reliability in medical contexts. The authors failed to provide explicit security requirements for IoMTs in healthcare environments. EdgeGuard bridges this gap through its adaptive aggregation and lightweight consensus mechanisms, which are specifically tailored for IoMT.
Prior work by Yu et al. [17] proposed an I-UDEC framework combining blockchain, AI, and federated learning to optimize computation offloading and resource allocation in ultra-dense edge computing. The work obtained great improvements in task execution time but was mainly focused on general IoT scenarios and did not consider specific medical data sensitivity and healthcare regulatory compliance. Furthermore, their blockchain implementation was not designed for resource-constrained medical devices—something EdgeGuard addresses in its healthcare-specific design and lightweight consensus mechanism. The paper by Ali Kashif et al. [18] explored the integration of federated learning in healthcare Metaverse applications, highlighting potential benefits and challenges in medical diagnosis, patient monitoring, and drug discovery. While comprehensive in scope, the paper primarily focused on theoretical aspects and future possibilities, lacking practical implementation details or specific solutions for current IoMT security and privacy challenges that EdgeGuard addresses through its concrete blockchain-secured federated learning architecture. Previous research [19] proposed combining DFT with differential privacy in federated learning for healthcare, achieving better accuracy and reduced communication costs, but their approach lacked security mechanisms for model updates and did not address device reliability or data quality validation in medical networks—gaps that EdgeGuard specifically addresses.
Although existing works have made substantial progress on healthcare security through federated learning and blockchain integration, they largely focus on individual aspects such as privacy preservation or energy efficiency and fall short of a holistic solution for IoMT environments. Most approaches overlook critical points related to data quality assessment and device reliability and, most importantly, lack lightweight security mechanisms tailored for resource-constrained medical devices.
EdgeGuard addresses these gaps with an integrated framework that combines blockchain-secured federated learning with IoMT-specific optimizations. Our solution uniquely integrates adaptive aggregation based on data quality and device reliability, a lightweight consensus mechanism designed for medical devices, and comprehensive security measures that maintain HIPAA compliance while enabling efficient collaborative learning. The comparative analysis in Table 1 demonstrates that current works address particular aspects of privacy and security for health networks, but not in totality. As such, EdgeGuard distinguishes itself by combining privacy preservation, lightweight blockchain security, resource optimization, data quality assessment, and device reliability monitoring within a healthcare-specific framework.

3. Problem Formulation

3.1. System Model

An IoMT healthcare network can comprise more than a thousand devices, forming a vast pool of medical data resources. Our adaptive federated learning model with dynamic aggregation builds on a well-established paradigm in the IoMT field, changing how medical data are modeled and analyzed while ensuring privacy and security in rapidly evolving digital health systems. The main components of the proposed model are edge devices, local datasets, edge servers, a central server, local and global models, an adaptive aggregation function, and a blockchain security layer. The main parts of our system are as follows:
  • Edge devices (ED = {ed_1, ed_2, …, ed_N}): A set of N IoMT edge devices that vary in their computing and sensing capabilities. These devices, ranging from simple fitness trackers to cutting-edge medical equipment, are important components of our distributed learning setup.
  • Local datasets (DS = {DS_1, DS_2, …, DS_N}): Every edge device ed_i maintains a local dataset DS_i, reflecting a different portion of the global health data. Each dataset is characterized by its size |DS_i| and a quality metric q_i ∈ [0, 1].
  • Edge servers (ES = {es_1, es_2, …, es_M}): A collection of M edge servers arranged specifically to enable intermediate aggregation and computational offloading. These servers improve system scalability and responsiveness by acting as a link between the central server and resource-limited edge devices, which is especially valuable when the network contains many devices with limited individual capability.
  • Central server ( CS ): A high-performance central server that orchestrates the federated learning process, aggregates model updates, and maintains the global model. It is responsible for initiating learning rounds and disseminating the updated global model to edge devices.
  • Global and local models (w_g, w_l): A shared neural network with d parameters; the global model w_g ∈ R^d represents the collective knowledge extracted from various medical data sources across the network. The collection W consists of local model instances, where each w_l^i ∈ R^d represents the model parameters trained on the local dataset DS_i of edge device ed_i. These local models feed into the global model and are updated regularly, enabling a distributed learning process that protects data privacy and makes use of the IoMT network's collective insights.
  • Communication links (L = {l_ij}): The set of communication links connecting the central server, edge servers, and edge devices. The bandwidth b_ij and latency λ_ij of each connection l_ij are its key characteristics.
  • Adaptive aggregation function (f: R^d × [0,1] × [0,1] → R^d): A novel function that dynamically weighs each edge device's contribution according to measures of data quality and reliability. It is defined as follows:
    w_g = f({w_i}_{i=1}^{N}, {q_i}_{i=1}^{N}, {r_i}_{i=1}^{N})
    where q_i denotes the data quality measure and r_i denotes the reliability of device ed_i.
  • Blockchain security layer (BSL, B): To ensure the security, integrity, and traceability of the federated learning process [20], our system incorporates a BSL. This decentralized ledger system, denoted as B = (B, T, σ, V), consists of a chain of blocks B = {b_1, b_2, …, b_K}, each containing a set of verified transactions. The function T: ES ∪ {CS} → B maps entities to transactions recorded in blocks, while the cryptographic hash function σ: B → {0,1}* ensures the immutability of the blockchain. A validation function V: B × (ES ∪ {CS}) → {0,1} verifies the legitimacy of transactions and blocks. Each model update and aggregation operation is recorded as a transaction in the blockchain, ensuring the integrity and traceability of the learning process:
    b_{k+1} = σ(b_k ∥ T(w_g^{t+1}) ∥ T({w_i^{t+1}}_{i∈E_t}))
    where the symbol ∥ denotes concatenation. This blockchain layer gives our system a higher level of security by providing an auditable and tamper-proof record of all learning activities inside the IoMT network.
  • Quality assessment module (QA: DS → [0,1]): A module that grades the quality of each local dataset based on criteria such as data distribution, label accuracy, and relevance to the global task. The quality score derived for every local dataset is q_i = QA(DS_i).
  • Reliability evaluation function (REL: ED × T → [0,1]): A function that scores the reliability of edge devices over time, accounting for hardware capabilities, uptime, and consistency of contributions. For each device ed_i at time t, it returns a reliability score r_i = REL(ed_i, t).
In this complex ecosystem, the adaptive federated learning scheme proceeds in a series of communication rounds. In each round t, we choose a subset of edge devices ED_t ⊆ ED. Each selected edge device computes a local update on its own dataset:
w_i^{t+1} = w_i^t − η∇LF(w_i^t, DS_i)
where LF denotes the loss function and η represents the learning rate. Subsequently, using our adaptive aggregation function, the central server aggregates these updates:
w_g^{t+1} = f({w_i^{t+1}}_{i∈ED_t}, {q_i^t}_{i∈ED_t}, {r_i^t}_{i∈ED_t})
In this dynamic aggregation technique for the IoMT, global learning depends not just on the volume of data but on the quality of the data and the credibility of its origin. This, in turn, enables a more adaptive and robust learning process.
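Since the adaptive aggregation function f is specified here only abstractly, one communication round can be sketched in Python as follows. The multiplicative weighting q_i · r_i is an illustrative assumption, not the paper's exact rule:

```python
import numpy as np

def local_update(w, grad_fn, data, lr=0.01):
    """One local step: w_i^{t+1} = w_i^t - eta * grad(LF(w_i^t, DS_i))."""
    return w - lr * grad_fn(w, data)

def adaptive_aggregate(local_models, qualities, reliabilities):
    """Aggregate local models, weighting each device by q_i * r_i
    (an assumed multiplicative form of the adaptive aggregation function f)."""
    weights = np.array([q * r for q, r in zip(qualities, reliabilities)], dtype=float)
    weights /= weights.sum()  # normalize so the weights sum to 1
    return sum(wt * m for wt, m in zip(weights, local_models))
```

Under this weighting, a device with either low data quality or low reliability contributes little to the global model, which matches the intent of the dynamic aggregation described above.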

3.2. Assumptions and Threat Model

The main assumptions and threats considered below define our system architecture and the security measures adopted in designing EdgeGuard for safe and effective federated learning in IoMT healthcare networks.

3.2.1. Main Assumptions

  • Data privacy and locality: The local dataset DS_i ∈ DS of every edge device ed_i ∈ ED remains on the device. In accordance with healthcare data standards and patient privacy, only model updates w_i ∈ W are shared. Formally:
    ∀ ed_i ∈ ED: share w_i but not DS_i
  • Device heterogeneity and intermittent connectivity: The edge devices ED = {ed_1, ed_2, …, ed_N} are heterogeneous in terms of processing capability and network reliability. We represent this heterogeneity by defining a time-varying subset of active devices ED_t ⊆ ED in each round t:
    ED_t = {ed_i ∈ ED | device ed_i is active at time t}
  • Semi-honest participants: Participants follow the protocol faithfully but may attempt to learn from the data they receive. We assume the reliability evaluation function REL: ED × T → [0,1] captures this behavior over time:
    ∀ ed_i ∈ ED, ∀ t ∈ T: 0 < REL(ed_i, t) ≤ 1

3.2.2. Primary Threats (μ)

  • Data poisoning attacks (μ_d): Malicious entities may manipulate their local datasets or inject fake data into the updates of the global model w_g. We represent this threat as a perturbation δ of local updates:
    w̃_i^{t+1} = w_i^{t+1} + δ, δ ∼ D_adversary
    where D_adversary is an adversarial distribution.
  • Privacy breaches (μ_p): Adversaries could try to reconstruct private information from model updates. Differential privacy anticipates such risks and protects users from them:
    Pr[f(DS) ∈ S] ≤ e^ε · Pr[f(DS′) ∈ S] + δ
    where datasets DS and DS′ differ by one record; here, f is our learning algorithm, ε is the privacy budget, and δ is the failure probability.
  • Integrity and authentication attacks (μ_i): Attackers might impersonate devices or tamper with model updates. To address this, our blockchain security layer B = (B, T, σ, V) ensures the following:
    V(b_k, e_i) = 1 ⟺ T(w_i^{t+1}) is a valid transaction in b_k
    where b_k ∈ B is a block in the blockchain and V is the validation function.
With its blockchain-based integrity verification B, adaptive aggregation function f, and privacy-preserving methods, EdgeGuard confronts these fundamental assumptions and threats. The reliability evaluation function REL and quality assessment module QA: DS → [0,1] are designed to counteract data poisoning attacks, and the blockchain layer offers an immutable audit trail that guarantees the authenticity and integrity of model updates. By keeping raw data localized and introducing noise to model updates, the federated learning method, paired with differential privacy measures, naturally contributes to protection against privacy breaches.
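As one illustration of the differential privacy defense against μ_p, a standard clip-and-noise step (the Gaussian mechanism) can be applied to each local update before sharing. The clipping norm and noise scale below are illustrative values, not parameters taken from the paper:

```python
import numpy as np

def clip_update(update, clip_norm):
    """Bound the L2 norm of a model update so no single device dominates."""
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / norm) if norm > 0 else update

def privatize_update(update, clip_norm=1.0, noise_std=0.5, rng=None):
    """Clip the update, then add Gaussian noise scaled to the clipping norm
    (a sketch of the Gaussian mechanism used for differential privacy)."""
    if rng is None:
        rng = np.random.default_rng(0)
    clipped = clip_update(np.asarray(update, dtype=float), clip_norm)
    return clipped + rng.normal(0.0, noise_std * clip_norm, size=clipped.shape)
```

The noise standard deviation would in practice be calibrated to the privacy budget ε and failure probability δ from the inequality above; fixed constants are used here only for illustration.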

3.3. Problem Statement and Design Goals

In this study, a federated learning system, called EdgeGuard, is developed using blockchain technology to enable safe and effective cooperation in Internet of Medical Things (IoMT) networks with N distributed edge devices. The challenge involves effectively carrying out federated learning tasks across many IoMT devices while mitigating the threats μ_d, μ_p, and μ_i, so as to enhance the learning process's overall efficiency and provide strong security in a cooperative healthcare setting. During the federated learning process over periods {t_1, t_2, …, T}, the objective is to maximize data utility (DU) and model accuracy (MA) while minimizing communication overhead (CO), potential threats (μ), and model divergence (MD) in the presence of unknown malicious clients. The participation decisions are captured by a set of binary variables:
ω = {ω_ijk : i = 1, …, N; j = 1, …, M; k = 1, …, K}
where ω_ijk = 1 if edge device e_i participates in learning round j at edge server k, and 0 otherwise. Equation (12) formulates the threat model by incorporating the considered potential threats (μ_d: data poisoning, μ_p: privacy breaches, and μ_i: integrity attacks), aiming to minimize these risks and enhance the overall security of the collaborative environment:
μ_k = min(μ_d, μ_p, μ_i)
Equation (13) defines the assignment mapping that guarantees the effective distribution of edge devices to learning rounds across geographically dispersed data centers:
ω: E × T × S × μ_k → {0, 1}
The critical constraints that must be satisfied during the federated learning process within the IoMT network are stated in constraints C1–C5 below:
C1: ∑_{k=1}^{K} ∑_{i=1}^{N} ∑_{j=1}^{M} ω_ijk = 1
C2: ∑_{k=1}^{K} ∑_{i=1}^{N} ∑_{j=1}^{M} E_CPU^i × ω_ijk ≤ S_CPU^k
C3: ∑_{k=1}^{K} ∑_{i=1}^{N} ∑_{j=1}^{M} E_RAM^i × ω_ijk ≤ S_RAM^k
C4: ∑_{k=1}^{K} ∑_{i=1}^{N} ∑_{j=1}^{M} E_BW^i × ω_ijk ≤ S_BW^k
C5: ∑_{k=1}^{K} ∑_{i=1}^{N} ∑_{j=1}^{M} ω_ijk ≤ Avail_ED
The constraint C 1 specifies that each edge device e i must participate in exactly one learning round; the constraints {C2–C4} state that the available resource capacity (CPU, RAM, bandwidth) of the edge network must be greater than or equal to the total requesting resource capacity of participating edge devices; constraint C 5 specifies the geographical constraints, indicating the availability of edge devices within the IoMT network. The design goals of EdgeGuard are, thus, formulated as a multi-objective optimization problem:
min_{w_g, ω} {CO, μ, MD} and max_{w_g, ω} {DU, MA}
subject to constraints {C1–C5}, where w g denotes the global model parameters. To achieve these goals, EdgeGuard employs the following:
  • An adaptive aggregation function f to balance data utility and privacy:
    w_g^{t+1} = f({w_i^{t+1}}_{i∈E_t}, {q_i^t}_{i∈E_t}, {r_i^t}_{i∈E_t})
  • A blockchain security layer B = (B, T, σ, V) to ensure integrity and traceability.
  • A quality assessment module QA: DS → [0,1] to evaluate data quality.
  • A reliability evaluation function REL: ED × T → [0,1] to assess device trustworthiness.
These components work in concert to create a secure, efficient, and privacy-preserving federated learning system for IoMT healthcare networks.
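The resource-capacity constraints C2–C4 can be checked with a small helper function. The dictionary-based device and server descriptions below are an illustrative encoding, not part of the paper's formulation:

```python
def satisfies_capacity(selected, demand, capacity):
    """Check constraints C2-C4: the summed CPU/RAM/bandwidth demand of the
    selected edge devices must not exceed the edge server's capacity.
    `demand` maps device id -> {"cpu", "ram", "bw"} requirements;
    `capacity` gives one edge server's resource limits."""
    for res in ("cpu", "ram", "bw"):
        if sum(demand[i][res] for i in selected) > capacity[res]:
            return False
    return True
```

A scheduler enforcing C1–C5 would call such a check for each candidate device-to-round assignment before fixing the corresponding ω_ijk to 1.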

4. Proposed Framework

The EdgeGuard framework in Figure 1 enables secure federated learning across distributed IoMT devices: edge devices perform local model training on sensitive health data, and encrypted model updates are transmitted through the blockchain layer, safeguarding data integrity and privacy. Guided by continuous security analysis and resource management, the central server then aggregates these updates into a global model via adaptive aggregation. This technique enables collaborative learning while raw patient data remain localized, balancing utility, privacy, and system efficiency in challenging medical settings. By combining these modules, EdgeGuard addresses the particular challenges of processing remote medical data and helps deliver deeper healthcare insights without compromising individual patient privacy or system security.

4.1. EdgeGuard Framework

The EdgeGuard architecture consists of six main steps: local model training, local model upload, cross-verification, block generation and propagation, adaptive aggregation, and global model update. These procedures provide safe and efficient federated learning in IoMT healthcare networks.

4.1.1. Local Model Training

The local training process is conducted independently on each IoMT edge device using locally stored data. We consider N edge devices E = {ed_1, ed_2, …, ed_N}. Each edge device hosts M medical sensors, denoted as S = {s_1, s_2, …, s_M}, sending their health data to the IoMT environment for processing and analysis. Each sensor generates a set of health measurements H = {h_1, h_2, …, h_Z}, each with parameters such as the timestamp, sensor type, and measurement value (h_time^i, h_type^i, h_value^i). The edge device has computational resources characterized by CPU, memory, and bandwidth capacities (E_CPU^i, E_Mem^i, E_BW^i). A convolutional neural network (CNN) is utilized at the edge devices to process and analyze the health data. The network comprises p, q, and r neurons at the input, hidden, and output layers, respectively. These layers are interconnected through NN weights ({w_I1, w_I2, …, w_Ip, w_H1, w_H2, …, w_Hq, w_O1, …, w_Or}), with the size of the NN S given by:
S = (p + 1) × q + (q × r) = q(p + r + 1) = q(p + 2) as r = 1
The NN weights and biases (b) are initialized randomly in the range [0, 1]. The CNN collects the historical health data and normalizes it to provide p input values, {DS_1, DS_2, …, DS_p}, to the input layer. The prediction process consists of three main steps: training, testing, and prediction. Data validation is conducted to improve the model's performance, using the mean absolute percentage error (MAPE) as the error function to assess the model's accuracy. Data pre-processing extracts health measurements from the different sensors and aggregates them over a fixed time interval. To normalize the input data within the range [0, 1], min–max scaling is applied, as shown in Equation (22):
D̂_i = (D_i − D_min) / (D_max − D_min)
In the dataset, D_max represents the highest value obtained, while D_min corresponds to the lowest value. The normalized data, denoted as D̂, comprise the collection of all normalized values {D̂_1, D̂_2, …, D̂_n}. These normalized one-dimensional values are fed to the input layer of the CNN. The model analyzes the previous p health data values to predict the health status (Y_out) at the (p + 1)-th time instance. The ReLU activation function, as depicted in Equation (23), is used in the hidden layers:
ReLU(x) = max(0, x)
The accuracy and performance of the local model training process are evaluated using the MAPE score, as given in Equation (24):
MAPE = (100%/n) ∑_{i=1}^{n} |Y_predicted − Y_actual| / Y_actual
where n represents the total number of data samples, while Y_actual and Y_predicted correspond to the actual and predicted health status, respectively. Stochastic gradient descent with momentum (SGD-M) is employed to achieve dynamic and adaptive optimization of the network weights. In this context, the velocity (v) represents the gradient change needed to reach the global minimum, as expressed in Equation (25):
w_{t+1} = w_t − v_t
The updated weight vector is represented as w_{t+1}, while the current weight vector is denoted as w_t. The velocity v_t can be calculated using Equation (26):
v_t = β · v_{t−1} + η∇w_t
Here, the momentum is represented by the term β · v_{t−1}. The constant β has a value between 0 and 1, the learning rate is denoted as η, and ∇w_t corresponds to the gradient of the loss function with respect to the weights. v_{t−1} represents the velocity at the previous step. The updated local model w_{t+1} is then uploaded to the blockchain to complete the federated learning aggregation.
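The pre-processing and optimization steps of this subsection (Equations (22) and (24)–(26)) can be sketched as follows; the hyperparameter values are placeholders, not the paper's settings:

```python
import numpy as np

def min_max_normalize(d):
    """Eq. (22): scale raw measurements into [0, 1] via min-max scaling."""
    d = np.asarray(d, dtype=float)
    return (d - d.min()) / (d.max() - d.min())

def mape(y_actual, y_predicted):
    """Eq. (24): mean absolute percentage error of the health-status predictions."""
    y_actual = np.asarray(y_actual, dtype=float)
    y_predicted = np.asarray(y_predicted, dtype=float)
    return 100.0 / len(y_actual) * np.sum(np.abs(y_predicted - y_actual) / np.abs(y_actual))

def sgd_momentum_step(w, grad, v_prev, eta=0.01, beta=0.9):
    """Eqs. (25)-(26): v_t = beta * v_{t-1} + eta * grad; w_{t+1} = w_t - v_t."""
    v = beta * v_prev + eta * grad
    return w - v, v
```

The returned velocity v is carried into the next step, so repeated calls reproduce the momentum recursion of Equation (26).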

4.1.2. Local Model Upload

In EdgeGuard, we collect model updates from the medical devices, represented as $\{\Delta w_1, \Delta w_2, \ldots, \Delta w_N\}$. These updates are appended to the blockchain, forming a series of linked blocks $\{B_1, B_2, \ldots, B_N\}$. Each block has two parts: a body and a header. The body, shown in Equation (27), contains the model updates and computation times, as follows:
$\mathrm{Body}_i = \{(\Delta w_k^l, T_{local}^i) \mid \forall k, l, i\}$
Here, $\Delta w_k^l$ denotes the update from device $k$ in training round $l$, and $T_{local}^i$ denotes the computation time of device $i$. The header, given by Equation (28), serves as the block's metadata, as follows:
$\mathrm{Header}_i = \{P_{prev}, \lambda, \mathrm{PoW}\}$
$P_{prev}$ links to the previous block, $\lambda$ denotes the block generation rate, and PoW is the proof-of-work witness that validates the block. We also track the block size using Equation (29):
$B_{size} = h + \delta_m \cdot N_D$
This depends on the header size $h$, the size of each update $\delta_m$, and the number of devices $N_D$. This setup keeps our medical AI updates organized and secure, balancing technical precision with practical application in our IoMT network [21].
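As a rough illustration, the block layout of Equations (27)–(29) can be modeled with a small data structure; the field names below are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Block:
    """Sketch of an EdgeGuard block per Equations (27)-(29)."""
    prev_hash: str          # P_prev: link to the previous block
    gen_rate: float         # lambda: block generation rate
    pow_nonce: int          # proof-of-work witness
    body: list = field(default_factory=list)  # [(delta_w_k_l, T_local_i), ...]

def block_size(h, delta_m, n_d):
    # Equation (29): B_size = h + delta_m * N_D
    return h + delta_m * n_d
```

For example, with an 80-byte header and 16-byte updates from five devices, `block_size(80, 16, 5)` gives 160 bytes.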

4.1.3. Cross-Verification

Miners broadcast and verify model updates, accumulate verified updates in a candidate block $B$, and finalize the block once the condition in Equation (30) holds:
$B_{size} \geq h + \delta_m \cdot N_E \quad \text{or} \quad t \geq T_{wait}$
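Reading the condition with non-strict inequalities (the relational operators are garbled in the source), a miner's finalization check can be sketched as:

```python
def should_finalize(b_size, h, delta_m, n_e, t, t_wait):
    # Finalize when the candidate block is full (first clause of Equation 30)
    # or the waiting timer has expired (second clause).
    return b_size >= h + delta_m * n_e or t >= t_wait
```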

4.1.4. Block Generation and Propagation

EdgeGuard employs a proof of work (PoW) mechanism for secure block generation and propagation. This process unfolds in three key steps: hash generation, block generation rate determination, and block propagation with ledger update. In the hash generation step, a miner $m$ in the network computes a hash value $H$ by iteratively modifying a nonce value $N$. The goal is to find a hash that satisfies the condition expressed in Equation (31):
$H(N) < T$
Here, $H$ represents the hash function, $N$ is the nonce, and $T$ denotes the target value that defines the PoW difficulty. The block generation rate, denoted as $\lambda$, is inversely proportional to the PoW difficulty, which is reflected in the target hash value $T$. This relationship is captured in Equation (32):
$\lambda \propto \frac{1}{T}$
This inverse relationship implies that a more stringent target value $T$ results in a lower $\lambda$, thereby reducing the frequency of block generation. As stated in Equation (33), once a miner generates a valid hash, the new block $B$ is verified, approved, and disseminated to all miners. At that point, the miners cease their proof-of-work calculations and append $B$ to their local ledgers.
$\text{If } H(T(\tilde{w}_i^{t+1}) \,\|\, n) < \text{Target}, \text{ then } B \text{ is added to all local ledgers}$
The proof-of-work mechanism thus ensures integrity and security within the EdgeGuard architecture and provides a solid base for decentralized storage and validation of model updates within the IoMT network.
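The hash generation loop can be sketched as follows. SHA-256 and the 8-byte nonce encoding are our assumptions, since the paper does not fix a hash function:

```python
import hashlib

def mine(header: bytes, target: int, max_nonce: int = 2**20):
    """Search for a nonce N with H(header || N) < target (Equation 31)."""
    for nonce in range(max_nonce):
        digest = hashlib.sha256(header + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce, digest
    return None, None  # difficulty too high for this search budget
```

Lowering `target` makes the condition harder to satisfy, which is exactly the difficulty knob behind the block generation rate $\lambda$.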

4.1.5. Adaptive Aggregation and Global Model Update

Another innovation is the adaptive aggregation function, which weighs the contribution of each edge device as a function of its data quality and device reliability. This feature is critical for keeping the federated learning process robust and effective in the face of data quality problems and security threats. Let $\{\nabla w_1, \nabla w_2, \ldots, \nabla w_N\}$ represent the gradient updates acquired from the blockchain layer. The central server then applies the adaptive aggregation function $f$ to obtain the new global gradient at time step $t+1$. Mathematically, we have the following:
$\nabla w^{t+1} = f(\{\nabla w_i^{t+1}\}_{i \in E_t}, \{q_i^t\}_{i \in E_t}, \{r_i^t\}_{i \in E_t})$
where $E_t$ is the set of participating devices in round $t$, $q_i^t$ is the quality score of device $i$'s data, and $r_i^t$ is the reliability score of device $i$. The adaptive aggregation function $f$ is defined as follows:
$f(\{\nabla w_i\}, \{q_i\}, \{r_i\}) = \frac{\sum_{i \in E_t} \alpha_i \nabla w_i}{\sum_{i \in E_t} \alpha_i}$
where $\alpha_i$ is the weight assigned to device $i$'s update and is calculated as follows:
$\alpha_i = q_i \cdot r_i \cdot \exp(-\beta \, \| \nabla w_i - \overline{\nabla w} \|^2)$
Here, $\overline{\nabla w}$ is the average of all updates, and $\beta$ is a hyperparameter controlling the influence of update similarity. This formulation ensures the following:
  • Higher-quality data (higher $q_i$) have more influence on the global model.
  • More reliable devices (higher $r_i$) contribute more significantly.
  • Updates closer to the average (potentially more trustworthy) are given higher weight.
The quality score $q_i$ is determined by the quality assessment module (QAM):
$q_i = Q(D_i) = \frac{1}{1 + \exp(-(\gamma_1 C_i + \gamma_2 V_i - \gamma_3 O_i))}$
where $C_i$ denotes the completeness of the data, $V_i$ the validity, $O_i$ the outlier ratio, and $\gamma_1, \gamma_2, \gamma_3$ learnable parameters. The reliability score $r_i$ is calculated by the reliability evaluation function (REF), as follows:
$r_i = R(e_i, t) = \delta \, r_{i,t-1} + (1 - \delta) \cdot \frac{1}{1 + \exp(-(\lambda_1 U_i + \lambda_2 A_i - \lambda_3 E_i))}$
where $r_{i,t-1}$ is the previous reliability score, $U_i$ is the uptime ratio, $A_i$ is the contribution accuracy, $E_i$ is the error rate, $\delta$ is a smoothing factor, and $\lambda_1, \lambda_2, \lambda_3$ are learnable parameters. This adaptive aggregation mechanism allows EdgeGuard to do the following:
  • Mitigate the impact of low-quality or malicious updates.
  • Adapt to changing device behaviors and data characteristics.
  • Improve the overall robustness and accuracy of the global model.
  • Provide an implicit defense against various attacks, including data poisoning and free-riding.
The aggregated gradient $\nabla w^{t+1}$ is then used to update the global model $GM$, which has the same architecture as the local models. The update process is expressed in Equation (39):
$GM(t+1) = GM(t) + \nabla w^{t+1}$
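A compact NumPy sketch of this aggregation pipeline follows; the hyperparameter values are illustrative, and the learnable $\gamma$ parameters are held fixed here for simplicity:

```python
import numpy as np

def quality_score(c, v, o, g1=1.0, g2=1.0, g3=1.0):
    # QAM sigmoid over completeness, validity, and outlier ratio
    return 1.0 / (1.0 + np.exp(-(g1 * c + g2 * v - g3 * o)))

def adaptive_aggregate(updates, q, r, beta=0.5):
    # alpha_i = q_i * r_i * exp(-beta * ||w_i - w_bar||^2), then weighted mean
    updates = np.asarray(updates, dtype=float)
    w_bar = updates.mean(axis=0)
    dist2 = ((updates - w_bar) ** 2).sum(axis=1)
    alpha = np.asarray(q, dtype=float) * np.asarray(r, dtype=float) \
            * np.exp(-beta * dist2)
    return (alpha[:, None] * updates).sum(axis=0) / alpha.sum()
```

With equal quality and reliability scores and symmetric updates, the result collapses to the plain average; an outlying update is down-weighted by the exponential term.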

4.1.6. Local Model Update

The global gradient $\nabla w^{t+1}$ is broadcast to all local clients through the blockchain layer, preserving the privacy of each client. Each local model $LM_i$ on the $i$th edge device is then updated using the broadcast global gradient $\nabla w^{t+1}$, as expressed in Equation (40):
$LM_i(t+1) = LM_i(t) + \nabla w^{t+1}$
where $LM_i(t)$ represents the local model parameters at epoch $t$, and $LM_i(t+1)$ represents the updated local model parameters for the next epoch. Local training for the next epoch begins from the updated global parameters, ensuring consistency across all local models while maintaining the privacy and security of individual health data. This iterative cycle of local training, secure aggregation, and model distribution allows EdgeGuard to facilitate collaborative learning across IoMT devices, enabling improved healthcare insights while upholding the stringent privacy and security standards essential to medical applications.

4.2. Smart Contract Implementation for Access Control and Model Updates

Within the EdgeGuard framework, the IoMT network relies on smart contracts for safe and verifiable operation. Our implementation includes three leading types: device access control, model update verification, and secure aggregation protocols. The smart contract layer is important because it bridges the federated learning process and the blockchain security layer.
The proposed smart contract manages device registration, validates model updates, and ensures secure aggregation according to the privacy and security requirements of healthcare data. Specifically, this study targets three main aspects:
  • Access control: Ensures only authorized IoMT devices participate in the federated learning process.
  • Model update verification: Validates and records model updates in an immutable manner.
  • Secure aggregation: Implements privacy-preserving model aggregation using multi-party computation.
Algorithm 1 presents the comprehensive smart contract protocol that governs these interactions within EdgeGuard.
This protocol completes our description of the blockchain security layer $B$ in Section 3.1, ensuring that the federated learning process remains robust against attacks by admitting only legitimate devices into model training. It makes use of the adaptive aggregation function in Equations (34)–(36), in which the quality scores $q_i$ and reliability scores $r_i$ weigh each edge device's contribution.
The implementation guarantees the following three important properties:
  • Security: Through robust access control and validation mechanisms.
  • Privacy: Via secure multi-party computation during aggregation.
  • Verifiability: Through immutable blockchain records of all operations.
This smart contract architecture provides the necessary foundation for secure and verifiable federated learning in the IoMT environment and aligns with EdgeGuard's decentralized approach to medical resource management.
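The production contract is written in Solidity v0.8.0 (Section 5.1); as a language-neutral illustration, the per-round logic of Algorithm 1 can be mocked in Python, with the commitment and secret-sharing steps elided and all names hypothetical:

```python
import numpy as np

def run_round(devices, updates, quality, reliability, beta=0.5):
    """Mock of one Algorithm 1 round: register, validate, weight, aggregate."""
    # Device registration (DeviceInfo structure in the contract)
    registry = {d: {"active": True, "reliability": reliability[d]}
                for d in devices}
    stacked = np.asarray([updates[d] for d in devices], dtype=float)
    w_bar = stacked.mean(axis=0)
    # Per-device weights: alpha_i = q_i * r_i * exp(-beta * ||w_i - w_bar||^2)
    alpha = {d: quality[d] * reliability[d]
                * np.exp(-beta * ((updates[d] - w_bar) ** 2).sum())
             for d in devices if registry[d]["active"]}
    total = sum(alpha.values())
    # Adaptive aggregation (contract step 21)
    w_next = sum(alpha[d] * np.asarray(updates[d], dtype=float)
                 for d in alpha) / total
    return w_next
```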
Algorithm 1 EdgeGuard smart contract protocol.
Require: Set of edge devices $ED = \{ed_1, ed_2, \ldots, ed_N\}$, local models $W = \{w_1, w_2, \ldots, w_N\}$, quality scores $Q = \{q_1, q_2, \ldots, q_N\}$, reliability scores $R = \{r_1, r_2, \ldots, r_N\}$
Ensure: Verified global model update $w_g$
1: Initialize DeviceInfo, ModelUpdate structures
2: for each registered device $ed_i \in ED$ do
3:   DeviceInfo[$ed_i$] ← {active: true, reliability: $r_i$, timestamp: $t$}
4:   ModelUpdate[$ed_i$] ← {model: $w_i$, quality: $q_i$, verified: false}
5: end for
6: for each update round $t$ do
7:   for each active device $ed_i \in ED$ do
8:     if ValidateDevice($ed_i$) then
9:       commitment ← GenerateCommitment($w_i$)
10:      Commitments[$ed_i$] ← commitment
11:      shares[$ed_i$] ← SecureShare($w_i$)
12:    end if
13:  end for
14:  maskedUpdates ← ∅
15:  for each committed device $ed_i$ do
16:    if ValidateCommitment($ed_i$) then
17:      $\alpha_i \leftarrow q_i \cdot r_i \cdot \exp(-\beta \|w_i - \bar{w}\|^2)$   ▹ From Equation (36)
18:      maskedUpdates ← maskedUpdates ∪ {DecryptShare(shares[$ed_i$]) × $\alpha_i$}
19:    end if
20:  end for
21:  $w^{t+1} \leftarrow \sum(\text{maskedUpdates}) \,/\, \sum_i \alpha_i$   ▹ Adaptive aggregation
22:  if $H(T(w^{t+1}) \,\|\, n) < \text{Target}$ then
23:    RecordUpdate($w^{t+1}$)
24:    for each $ed_i \in ED$ do
25:      ModelUpdate[$ed_i$].model ← $w^{t+1}$
26:      ModelUpdate[$ed_i$].verified ← true
27:    end for
28:    if convergence criteria are met then
29:      break
30:    end if
31:  end if
32: end for
33: return $w_g \leftarrow w^{t+1}$

4.3. Operational Design and Complexity Analysis

Algorithm 2 presents the functional design of the proposed EdgeGuard framework. The running time of EdgeGuard is determined by its four main components: edge device computations, blockchain operations, adaptive aggregation, and model update distribution. Edge device computations, including local model training and data preparation, have time complexity $O(I \times n)$, where $n$ is the number of data points in the local dataset and $I$ is the number of training iterations. The blockchain layer generates and verifies secure blocks using the proof of work (PoW) consensus mechanism with time complexity $O(M \times 2^d)$, where $M$ is the number of blocks and $d$ is the PoW difficulty level. The adaptive aggregation performed by the central server introduces a complexity of $O(N \times e_d)$, where $e_d$ denotes the number of edge devices and $N$ the total number of model parameters.
This step aggregates updates from every participating edge device. In addition, the quality assessment and reliability evaluation methods add a complexity of $O(e_d \times K)$, where $K$ is the number of evaluation criteria. The overall time complexity of the EdgeGuard approach can therefore be estimated as follows:
$O(I \times n) + O(M \times 2^d) + O(N \times e_d) + O(e_d \times K)$
This complexity analysis shows how EdgeGuard distributes work across the IoMT setting, balancing secure blockchain operations, local computations, and flexible global aggregation.
Algorithm 2 EdgeGuard: secure federated learning for IoMT.
Require: Set of edge devices $E = \{ed_1, ed_2, \ldots, ed_N\}$, local datasets $D = \{D_1, D_2, \ldots, D_N\}$, global model $w_g$, blockchain $B = (B, T, \sigma, V)$, central server $C$
Ensure: Updated global model $w_g$, secure and private federated learning
1: Initialize $w_g$, $B$
2: for each communication round $t$ do
3:   $E_t$ ← Select participating devices
4:   for each $e_i \in E_t$ in parallel do
5:     Preprocess local data $D_i$
6:     $w_i^t$ ← Train local model on $D_i$
7:     $q_i^t \leftarrow Q(D_i)$   ▹ Quality assessment
8:     $r_i^t \leftarrow R(e_i, t)$   ▹ Reliability evaluation
9:     $\tilde{w}_i^t \leftarrow w_i^t + N(0, \sigma^2)$   ▹ Apply differential privacy
10:    $b_k \leftarrow \sigma(b_{k-1} \,\|\, T(\tilde{w}_i^t, q_i^t, r_i^t))$   ▹ Create blockchain block
11:    while $H(b_k) \geq \text{Target}$ do
12:      Adjust nonce in $b_k$
13:    end while
14:    Broadcast $b_k$ to other miners
15:  end for
16:  $C$ retrieves $\{\tilde{w}_i^t, q_i^t, r_i^t\}_{i \in E_t}$ from $B$
17:  $w_g^{t+1} \leftarrow f(\{\tilde{w}_i^t\}_{i \in E_t}, \{q_i^t\}_{i \in E_t}, \{r_i^t\}_{i \in E_t})$   ▹ Adaptive aggregation
18:  $b_{k+1} \leftarrow \sigma(b_k \,\|\, T(w_g^{t+1}))$   ▹ Record global update in blockchain
19:  for each $e_i \in E$ do
20:    $e_i$ retrieves $w_g^{t+1}$ from $B$
21:    $w_i^{t+1} \leftarrow w_g^{t+1}$   ▹ Update local model
22:  end for
23:  if convergence criteria met then
24:    break
25:  end if
26: end for
27: return $w_g$

5. Performance Analysis

5.1. Experimental Setup

The EdgeGuard framework was evaluated using a comprehensive simulation environment built with PyTorch v1.9.0 for the federated learning implementation, integrated with Ethereum Ganache v2.5.4 for blockchain simulation. The experiments were conducted on a server equipped with two Intel® Xeon® Silver 4114 CPUs (40 cores, 2.20 GHz; Intel, Santa Clara, CA, USA), 128 GB of RAM, and an NVIDIA Tesla V100 GPU with 32 GB of memory, running Ubuntu 20.04 LTS. The federated learning environment was implemented using PyTorch DistributedDataParallel, incorporating our custom adaptive aggregation mechanism with differential privacy support through PyTorch DP. The blockchain component utilized a private Ethereum network with smart contracts written in Solidity v0.8.0, specifically modified for IoMT requirements. Web3.py v5.28.0 facilitated smart contract interactions, while NumPy v1.21.0 and Pandas v1.3.0 were used for efficient data manipulation and numerical computations. To simulate the IoMT environment, we configured three types of edge devices with varying computational capabilities, as shown in Table 2. The network environment was configured to reflect real-world conditions, with bandwidth variations of 1–10 Mbps, latency ranges of 10–100 ms, and packet loss rates of 0.1–1% in a star topology with a central aggregator.
Dataset: We utilized the MIMIC-III dataset, performing comprehensive preprocessing including temporal alignment of vital signs, missing value imputation using forward fill, feature normalization, and time series segmentation into 24-h windows. The dataset was split into training (80%), validation (10%), and test (10%) sets. Performance monitoring was conducted using Linux perf-tools v5.15.0 for resource utilization, iperf3 v3.12 for network statistics, and Intel RAPL (Running Average Power Limit) through powercap-utils v0.6.0 for energy consumption measurements.
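A condensed pandas sketch of this preprocessing pipeline follows; the column name, sampling rate, and window boundaries are illustrative (MIMIC-III itself requires credentialed access):

```python
import numpy as np
import pandas as pd

def preprocess(vitals: pd.DataFrame, freq="5min", window="24h"):
    """Resample, forward-fill, min-max normalize, and segment into windows."""
    df = vitals.resample(freq).mean().ffill()      # temporal alignment + imputation
    df = (df - df.min()) / (df.max() - df.min())   # per-feature normalization
    # Segment the aligned series into fixed-length (e.g., 24 h) windows
    return [w for _, w in df.groupby(pd.Grouper(freq=window))]
```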

5.2. Baseline Implementation

For comparative analysis, we implemented FedAvg as our baseline following McMahan et al.’s seminal work [22]. The FedAvg implementation performs standard federated averaging without the security enhancements of EdgeGuard. In each communication round, the server selects a fraction of available clients ( C = 0.8 ) and broadcasts the current global model. Each selected client trains the model on their local data for E epochs and returns the model updates. The global model is then updated using the following:
$w_g^{t+1} = \sum_{k=1}^{K} \frac{n_k}{n} \, w_k^t$
where $n_k$ represents the size of the local dataset at client $k$, $n$ denotes the total dataset size, and $w_k^t$ represents the local model parameters at round $t$. This vanilla implementation differs from EdgeGuard in several key aspects:
  • No quality-based weighting of client updates.
  • No reliability assessment of participating devices.
  • No blockchain-based security mechanisms.
  • No differential privacy protections.
Both EdgeGuard and the FedAvg baseline were implemented using the same optimization framework to ensure a fair comparison. The optimization configuration incorporates SGD with momentum as the base optimizer and includes additional enhancements such as cosine annealing for learning rate scheduling and gradient clipping to improve training stability. Table 3 provides a comprehensive list of all parameters used in our experiments, including the detailed optimization configuration that was previously omitted.
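The baseline update rule is a size-weighted average and can be written directly; the toy weights and dataset sizes below are illustrative:

```python
import numpy as np

def fedavg(local_weights, local_sizes):
    # w_g^{t+1} = sum_k (n_k / n) * w_k^t  (McMahan et al. [22])
    n = float(sum(local_sizes))
    return sum((n_k / n) * np.asarray(w_k, dtype=float)
               for w_k, n_k in zip(local_weights, local_sizes))
```

Unlike EdgeGuard's adaptive aggregation, the weights depend only on dataset sizes, so a client with a large but low-quality dataset still dominates the average.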

5.3. Evaluation and Simulation Results

We tested EdgeGuard on the MIMIC-III dataset in a simulated IoMT environment. Our evaluation focused on four key aspects: model accuracy, communication efficiency, security robustness, and resource utilization.
The experimentation environment maintains consistent conditions for both implementations, utilizing the MIMIC-III dataset with identical preprocessing steps and evaluation metrics. This setup ensures a fair comparison while highlighting EdgeGuard’s enhanced security and efficiency features.

5.3.1. Model Accuracy

Figure 2 presents how model accuracy converges over communication rounds for EdgeGuard compared with standard federated learning (FedAvg) and a centralized approach.
EdgeGuard reached a final test accuracy of 94.3%, higher than FedAvg's 91.7% and close to the 95.5% accuracy of the centralized approach. The adaptive aggregation mechanism contributed to faster convergence toward this higher final accuracy in the federated setting.

5.3.2. Communication Efficiency

Figure 3 shows the total amount of data transferred during the training process for different numbers of edge devices.
EdgeGuard outperformed FedAvg in terms of communication efficiency, cutting the overall amount of data sent by as much as 30%, especially as the number of edge devices rose.

5.3.3. Security Robustness

Figure 4 illustrates EdgeGuard’s performance under varying percentages of malicious nodes for different types of attacks.
EdgeGuard maintained an accuracy of over 90% even with up to 40% of the nodes being malicious, demonstrating strong resilience against data poisoning, model poisoning, and Sybil attacks.

5.3.4. Resource Utilization

Table 4 presents the average resource utilization per edge device type during training.
EdgeGuard showed effective resource management across all device types. Compared with standard federated learning, the integration of blockchain activities resulted in an average 15% increase in energy usage; however, the reduction in communication overhead compensated for this. In summary, EdgeGuard outperformed conventional federated learning in model accuracy, communication efficiency, and security robustness while incurring tolerable increases in resource use. The system's adaptive mechanisms proved effective at preserving performance in adversarial IoMT scenarios while optimizing resource utilization across heterogeneous edge devices.

6. Conclusions

In this paper, we presented EdgeGuard, a novel architecture that improves the security, efficiency, and speed of federated learning in IoMT contexts. Through EdgeGuard's integration of blockchain technology and adaptive federated learning, data privacy and integrity are guaranteed across distributed edge devices. Our analysis reveals that EdgeGuard resists up to 40.05% of malicious nodes while achieving a model accuracy of 94.34%, surpassing typical techniques by 2.68%. It also optimizes resource utilization in IoMT scenarios by reducing communication overhead by 30.67%. Coupled with proof-of-work consensus and differential privacy techniques, the framework's adaptive aggregation mechanism provides a robust defense against various threats and ensures the protection of patient data. EdgeGuard thus offers a safe, effective, and privacy-preserving solution for machine learning in decentralized healthcare ecosystems, greatly advancing the development of reliable AI-driven healthcare applications.

Author Contributions

S.P.: Conceptualization, methodology, formal analysis, validation, writing - original draft; J.L.: Formal analysis, validation, visualization, review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data presented in this study are available on request from the corresponding author due to privacy and ethical restrictions.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Al-Turjman, F.; Nawaz, M.H.; Ulusar, U.D. Intelligence in the Internet of Medical Things era: A systematic review of current and future trends. Comput. Commun. 2020, 150, 644–660. [Google Scholar] [CrossRef]
  2. Huang, C.; Wang, J.; Wang, S.; Zhang, Y. Internet of medical things: A systematic review. Neurocomputing 2023, 557, 126719. [Google Scholar] [CrossRef]
  3. Chang, Y.; Fang, C.; Sun, W. A Blockchain-Based Federated Learning Method for Smart Healthcare. Comput. Intell. Neurosci. 2021, 2021, 4376418. [Google Scholar] [CrossRef] [PubMed]
  4. Raul, S.; Das, S.; Murty, C.S.; Devi, K. A Review on Intelligent Health Care System Using Learning Methods. In Recent Developments in Electronics and Communication Systems; IOS Press: Amsterdam, The Netherlands, 2023; pp. 154–159. [Google Scholar]
  5. Mohammed, M.A.; Lakhan, A.; Abdulkareem, K.H.; Zebari, D.A.; Nedoma, J.; Martinek, R.; Kadry, S.; Garcia-Zapirain, B. Energy-efficient distributed federated learning offloading and scheduling healthcare system in blockchain based networks. Internet Things 2023, 22, 100815. [Google Scholar] [CrossRef]
  6. Duan, Q.; Huang, J.; Hu, S.; Deng, R.; Lu, Z.; Yu, S. Combining Federated Learning and Edge Computing Toward Ubiquitous Intelligence in 6G Network: Challenges, Recent Advances, and Future Directions. IEEE Commun. Surv. Tutor. 2023, 25, 2892–2950. [Google Scholar] [CrossRef]
  7. Myrzashova, R.; Alsamhi, S.H.; Shvetsov, A.V.; Hawbani, A.; Wei, X. Blockchain meets federated learning in healthcare: A systematic review with challenges and opportunities. IEEE Internet Things J. 2023, 10, 14418–14437. [Google Scholar] [CrossRef]
  8. Khan, M.F.; AbaOud, M. Blockchain-Integrated Security for real-time patient monitoring in the Internet of Medical Things using Federated Learning. IEEE Access 2023, 11, 117826–117850. [Google Scholar] [CrossRef]
  9. Sai, S.; Chamola, V.; Choo, K.K.R.; Sikdar, B.; Rodrigues, J.J.P.C. Confluence of Blockchain and Artificial Intelligence Technologies for Secure and Scalable Healthcare Solutions: A Review. IEEE Internet Things J. 2023, 10, 5873–5897. [Google Scholar] [CrossRef]
  10. Khubrani, M.M. Artificial Rabbits Optimizer with Deep Learning Model for Blockchain-Assisted Secure Smart Healthcare System. Int. J. Adv. Comput. Sci. Appl. 2023, 14. [Google Scholar] [CrossRef]
  11. Manzoor, H.U.; Shabbir, A.; Chen, A.; Flynn, D.; Zoha, A. A Survey of Security Strategies in Federated Learning: Defending Models, Data, and Privacy. Future Internet 2024, 16, 374. [Google Scholar] [CrossRef]
  12. Patni, S.; Lee, J. Explainable AI Empowered Resource Management for Enhanced Communication Efficiency in Hierarchical Federated Learning. Comput. Electr. Eng. 2024, 117, 109260. [Google Scholar] [CrossRef]
  13. Otoum, Y.; Hu, C.; Said, E.H.; Nayak, A. Enhancing Heart Disease Prediction with Federated Learning and Blockchain Integration. Future Internet 2024, 16, 372. [Google Scholar] [CrossRef]
  14. Solat, F.; Patni, S.; Lim, S.; Lee, J. Heterogeneous Privacy Level-Based Client Selection for Hybrid Federated and Centralized Learning in Mobile Edge Computing. IEEE Access 2024, 12, 108556–108572. [Google Scholar] [CrossRef]
  15. Abaoud, M.; Almuqrin, M.A.; Khan, M.F. Advancing Federated Learning Through Novel Mechanism for Privacy Preservation in Healthcare Applications. IEEE Access 2023, 11, 83562–83579. [Google Scholar] [CrossRef]
  16. Singh, M.B.; Singh, H.; Pratap, A. Energy-Efficient and Privacy-Preserving Blockchain Based Federated Learning for Smart Healthcare System. IEEE Trans. Serv. Comput. 2024, 17, 2392–2403. [Google Scholar] [CrossRef]
  17. Yu, S.; Chen, X.; Zhou, Z.; Gong, X.; Wu, D. When Deep Reinforcement Learning Meets Federated Learning: Intelligent Multitimescale Resource Management for Multiaccess Edge Computing in 5G Ultradense Network. IEEE Internet Things J. 2021, 8, 2238–2251. [Google Scholar] [CrossRef]
  18. Bashir, A.K.; Victor, N.; Bhattacharya, S.; Huynh-The, T.; Chengoden, R.; Yenduri, G.; Maddikunta, P.K.R.; Pham, Q.V.; Gadekallu, T.R.; Liyanage, M. Federated Learning for the Healthcare Metaverse: Concepts, Applications, Challenges, and Future Directions. IEEE Internet Things J. 2023, 10, 21873–21891. [Google Scholar] [CrossRef]
  19. Hidayat, M.A.; Nakamura, Y.; Arakawa, Y. Enhancing Efficiency in Privacy-Preserving Federated Learning for Healthcare: Adaptive Gaussian Clipping with DFT Aggregator. IEEE Access 2024, 12, 88445–88457. [Google Scholar] [CrossRef]
  20. Rajagopal, S.M.; Supriya, M.; Buyya, R. Blockchain Integrated Federated Learning in Edge/Fog/Cloud Systems for IoT-Based Healthcare Applications: A Survey. In Federated Learning; Taylor & Francis Group: Oxfordshire, UK, 2024; pp. 237–269. [Google Scholar] [CrossRef]
  21. Rahman, M.A.; Hossain, M.S.; Islam, M.S.; Alrajeh, N.A.; Muhammad, G. Secure and provenance enhanced internet of health things framework: A blockchain managed federated learning approach. IEEE Access 2020, 8, 205071–205087. [Google Scholar] [CrossRef] [PubMed]
  22. McMahan, B.; Moore, E.; Ramage, D.; Hampson, S.; y Arcas, B.A. Communication-efficient learning of deep networks from decentralized data. In Proceedings of the Artificial Intelligence and Statistics, PMLR, Ft. Lauderdale, FL, USA, 20–22 April 2017; pp. 1273–1282. [Google Scholar]
Figure 1. EdgeGuard: a secure federated learning framework for IoMT.
Figure 2. Model accuracy convergence.
Figure 3. Communication efficiency.
Figure 4. Security robustness under different attacks.
Table 1. Comparison of related works in blockchain-secured federated learning for healthcare.

| Work | Privacy Preservation | Security Mechanism | Resource Optimization | Data Quality Assessment | Device Reliability | Healthcare Specific |
|---|---|---|---|---|---|---|
| [8] Pattern Recognition | DP | Encryption | No | No | No | Yes |
| [15] Privacy-preserving FL | DP + MPC | No | No | No | No | Yes |
| [16] WBAN-based FL | DP + HE | Blockchain | Energy-aware | No | No | Yes |
| [17] I-UDEC Framework | FL | Blockchain | 2Ts-DRL | No | No | No |
| [18] DFT-based FL | DP + DFT | No | Communication | No | No | Yes |
| EdgeGuard (Ours) | DP + MPC | Lightweight Blockchain | Resource-aware | Yes | Yes | Yes |

DP: differential privacy, MPC: multi-party computation, HE: homomorphic encryption, DFT: discrete Fourier transform, 2Ts-DRL: two-timescale deep reinforcement learning, FL: federated learning.
Table 2. Edge device configurations.

| Device | CPU Cores | MIPS | RAM (GB) | Storage (GB) | Power (W) |
|---|---|---|---|---|---|
| E1 | 2 | 2660 | 4 | 32 | 5 |
| E2 | 4 | 3067 | 8 | 64 | 8 |
| E3 | 8 | 3467 | 16 | 128 | 12 |
Table 3. Simulation parameters.

| Parameter | Value |
|---|---|
| System Configuration | |
| Number of Edge Devices | 50–500 |
| Number of IoMT Sensors | 100–2000 |
| CPU Cores per Server | 40 |
| RAM per Server | 128 GB |
| GPU Memory | 32 GB |
| Federated Learning Parameters | |
| Train/Test Split | 80:20 |
| Local Epochs | 10–50 |
| Batch Size | 64 |
| Communication Rounds | 1–300 |
| Client Selection Rate | 0.8 |
| Malicious Devices | {10%, 20%, …, 50%} |
| Optimization Parameters | |
| Base Learning Rate | 0.01 |
| Optimizer | SGD with Momentum (β = 0.9) |
| Learning Rate Scheduler | Cosine Annealing |
| Weight Decay | 1 × 10⁻⁴ |
| Gradient Clipping | 1.0 |
| Early Stopping Patience | 10 epochs |
| Momentum | 0.9 |
| Blockchain Parameters | |
| Block Generation Rate (λ) | {0.1, 0.3, 0.5, 0.7} |
| Consensus Algorithm | Proof of Work |
| Gas Limit | 6,721,975 |
| Block Time | 15 s |
| Smart Contract Version | Solidity v0.8.0 |
| Network Parameters | |
| Bandwidth Range | 1–10 Mbps |
| Latency Range | 10–100 ms |
| Packet Loss Rate | 0.1–1% |
| Network Topology | Star |
| Dataset Configuration | |
| Training Set | 80% |
| Validation Set | 10% |
| Test Set | 10% |
| Time Window | 24 h |
| Sampling Rate | 5 min |
| Security Parameters | |
| Differential Privacy ε | 0.1–1.0 |
| Privacy Budget δ | 10⁻⁵ |
| Encryption Method | AES-256 |
| Key Length | 2048 bits |
Table 4. Resource utilization.

| Device | CPU (%) | RAM (GB) | Network (MB/s) | Energy (Wh) |
|---|---|---|---|---|
| E1 | 78.5 | 3.2 | 0.8 | 12.6 |
| E2 | 65.3 | 6.1 | 1.2 | 20.1 |
| E3 | 52.1 | 11.8 | 1.5 | 28.5 |