PLA—A Privacy-Embedded Lightweight and Efficient Automated Breast Cancer Accurate Diagnosis Framework for the Internet of Medical Things
Abstract
1. Introduction
1. Overlooking privacy concerns poses risks that may detrimentally affect patient interests.
2. The computational cost of image processing and modeling remains inadequately addressed, leading to sub-optimal classification outcomes.
1. The development of PLA, a deep-learning-based approach for breast cancer diagnosis in the Internet of Medical Things (IoMT).
2. The adoption of MobileViT as the backbone of the framework to meet the lightweight and efficiency objectives.
3. The use of federated learning to safeguard patient data privacy during training on institutional IoMT devices, enabling timely diagnosis.
4. A comprehensive evaluation of the model through multiple experiments, yielding competitive results.
2. Related Work
Theoretical Analysis of Breast Cancer Image Classification
3. Backbone Design of the Model
3.1. Image Enhancement Processing
3.2. Image Texture Feature Extraction
1. Roughness: Roughness mainly reflects the variation in the image's gray levels and can be expressed as
\[ F_{crs} = \frac{1}{mn} \sum_{i=1}^{m} \sum_{j=1}^{n} S_{best}(i,j), \]
where \(S_{best}(i,j)\) is the optimal averaging-window size selected at pixel \((i,j)\) and \(m \times n\) is the image size.
2. Contrast: The contrast reflects the distribution of pixel intensities in the image and can be expressed as
\[ F_{con} = \frac{\sigma}{\alpha_4^{1/4}}, \qquad \alpha_4 = \frac{\mu_4}{\sigma^4}, \]
where \(\mu_4\) is the fourth-order moment about the mean of the gray values, \(\sigma\) is the standard deviation, and \(\alpha_4\) denotes the kurtosis of the intensity histogram, which can be used as a polarization measure.
3. Orientation: Orientation primarily reflects the concentration of image texture intensity along a certain direction, and the formula is
\[ F_{dir} = 1 - r\, n_p \sum_{p=1}^{n_p} \sum_{\phi \in w_p} (\phi - \phi_p)^2 H_D(\phi), \]
where \(p\) indexes a peak, \(n_p\) is the number of peaks, \(w_p\) is the range of the peak between its neighboring valleys, \(r\) is a normalization factor, \(\phi_p\) is the position of peak \(p\), and \(H_D\) is the corresponding direction histogram.
4. Linearity: Linearity within an image texture refers to a structured and organized arrangement of linear elements. These elements may manifest as straight lines, contours, or patterns that contribute to the visual composition. The linearity equation is
\[ F_{lin} = \frac{\sum_{i=1}^{n}\sum_{j=1}^{n} P_{Dd}(i,j)\,\cos\!\left[(i-j)\tfrac{2\pi}{n}\right]}{\sum_{i=1}^{n}\sum_{j=1}^{n} P_{Dd}(i,j)}. \]
The cosine weighting assigns a weight of +1 to co-occurrences in the same direction and −1 to those in the perpendicular direction. Here \(P_{Dd}\) is the \(n \times n\) direction co-occurrence matrix: its entry \(P_{Dd}(i,j)\) is the relative frequency with which two pixels separated by a distance \(d\) along the edge direction have quantized directions \(i\) and \(j\), respectively.
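As a concrete illustration, the contrast feature above can be computed in a few lines of plain Python. `tamura_contrast` is an illustrative helper, not code from the PLA implementation; it assumes the image is given as a 2-D list of gray values.

```python
def tamura_contrast(gray):
    """Contrast feature: F_con = sigma / alpha4**(1/4),
    where alpha4 = mu4 / sigma**4 is the kurtosis of the intensity histogram."""
    pixels = [p for row in gray for p in row]
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n      # sigma**2
    if var == 0:
        return 0.0                                      # flat image: no contrast
    mu4 = sum((p - mean) ** 4 for p in pixels) / n      # fourth-order central moment
    alpha4 = mu4 / var ** 2                             # kurtosis
    return var ** 0.5 / alpha4 ** 0.25
```

A widely spread (bimodal) intensity distribution yields a larger value than a narrow one, matching the intuition that contrast measures how polarized the histogram is.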
Algorithm 1: The proposed high-level algorithm for PLA
4. Privacy-Embedded Lightweight Multi-Category Classification Design for Breast Cancer
1. Initialize the model: The model parameters are initialized on the central server, and the initial parameters are sent to the devices participating in training.
2. Distribute the model and data: The initialized model is sent to each participating device, and each device holds its corresponding local dataset. The devices can be heterogeneous machines, such as smartphones or edge devices.
3. Local training: Each device \(k\) trains the model on its local data to update the parameters. This step runs in parallel on the devices without transferring raw data to the central server. The parameter update formula is
\[ w_k^{t+1} = w_k^{t} - \eta \,\nabla F_k(w_k^{t}), \]
where \(\eta\) is the learning rate and \(F_k\) is the local loss on device \(k\).
4. Parameter aggregation: After local training, each device sends its model parameters to the central server. The server aggregates them according to a pre-defined strategy (such as the weighted average) to obtain the new global model parameters:
\[ w^{t+1} = \sum_{k=1}^{K} \frac{n_k}{n}\, w_k^{t+1}, \]
where \(n_k\) is the number of samples on device \(k\) and \(n = \sum_k n_k\).
5. Update model: The central server sends the aggregated global parameters back to the devices to update their local models.
6. Final aggregation: Check whether the convergence conditions are met. If not, repeat steps 3 to 5. If so, the central server performs a final aggregation, and the integrated network model can be expressed as
\[ w^{*} = \sum_{k=1}^{K} \frac{n_k}{n}\, w_k^{T}, \]
where \(T\) is the final communication round.
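The local-update and weighted-aggregation steps above can be sketched in plain Python. `local_update` and `fedavg` are illustrative names, not functions from the PLA implementation, and the toy gradient function stands in for backpropagation on a device's local data.

```python
def local_update(weights, grad_fn, lr, steps=1):
    """Step 3: on-device SGD, w <- w - lr * grad F_k(w); raw data never leaves the device."""
    w = list(weights)
    for _ in range(steps):
        g = grad_fn(w)
        w = [wi - lr * gi for wi, gi in zip(w, g)]
    return w

def fedavg(client_weights, client_sizes):
    """Step 4: sample-size-weighted average, w = sum_k (n_k / n) * w_k."""
    n = sum(client_sizes)
    agg = [0.0] * len(client_weights[0])
    for w_k, n_k in zip(client_weights, client_sizes):
        for i, wi in enumerate(w_k):
            agg[i] += (n_k / n) * wi
    return agg

# One communication round (steps 3-5) for two hypothetical devices:
global_w = [1.0, -2.0]                    # step 1: server-initialized parameters
grad = lambda w: w                        # toy gradient of F(w) = 0.5 * ||w||^2
locals_ = [local_update(global_w, grad, lr=0.1) for _ in range(2)]
global_w = fedavg(locals_, client_sizes=[600, 1400])  # step 4: weighted aggregation
```

The server repeats such rounds until the convergence check in step 6 passes; only parameter vectors, never patient images, cross the network.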
5. Implementation
5.1. System Implementation Details
5.2. Dataset
5.3. Metrics
6. Evaluations
7. Discussion
8. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Guo, M.H.; Liu, Z.N.; Mu, T.J.; Hu, S.M. Beyond self-attention: External attention using two linear layers for visual tasks. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 45, 5436–5447.
- Kayikci, S.; Khoshgoftaar, T.M. Breast cancer prediction using gated attentive multimodal deep learning. J. Big Data 2023, 10, 1–11.
- Hamedani-KarAzmoudehFar, F.; Tavakkoli-Moghaddam, R.; Tajally, A.R.; Aria, S.S. Breast cancer classification by a new approach to assessing deep neural network-based uncertainty quantification methods. Biomed. Signal Process. Control 2023, 79, 104057.
- Chakravarthy, S.S.; Bharanidharan, N.; Rajaguru, H. Deep Learning-Based Metaheuristic Weighted K-Nearest Neighbor Algorithm for the Severity Classification of Breast Cancer. IRBM 2023, 44, 100749.
- Zhang, Y.; Liu, Y.L.; Nie, K.; Zhou, J.; Chen, Z.; Chen, J.H.; Wang, X.; Kim, B.; Parajuli, R.; Mehta, R.S.; et al. Deep learning-based automatic diagnosis of breast cancer on MRI using mask R-CNN for detection followed by ResNet50 for classification. Acad. Radiol. 2023, 30, S161–S171.
- Tekin, E.; Yazıcı, Ç.; Kusetogullari, H.; Tokat, F.; Yavariabdi, A.; Iheme, L.O.; Çayır, S.; Bozaba, E.; Solmaz, G.; Darbaz, B.; et al. Tubule-U-Net: A novel dataset and deep learning-based tubule segmentation framework in whole slide images of breast cancer. Sci. Rep. 2023, 13, 128.
- Uddin, K.M.M.; Biswas, N.; Rikta, S.T.; Dey, S.K. Machine learning-based diagnosis of breast cancer utilizing feature optimization technique. Comput. Methods Programs Biomed. Update 2023, 3, 100098.
- Ahmed, A.; Xi, R.; Hou, M.; Shah, S.A.; Hameed, S. Harnessing Big Data Analytics for Healthcare: A Comprehensive Review of Frameworks, Implications, Applications, and Impacts. IEEE Access 2023, 11, 112891–112928.
- Wang, X.; Ahmad, I.; Javeed, D.; Zaidi, S.A.; Alotaibi, F.M.; Ghoneim, M.E.; Daradkeh, Y.I.; Asghar, J.; Eldin, E.T. Intelligent Hybrid Deep Learning Model for Breast Cancer Detection. Electronics 2022, 11, 2767.
- Alzubaidi, L.; Al-Shamma, O.; Fadhel, M.A.; Farhan, L.; Zhang, J.; Duan, Y. Optimizing the performance of breast cancer classification by employing the same domain transfer learning from hybrid deep convolutional neural network model. Electronics 2020, 9, 445.
- Aldhyani, T.H.; Khan, M.A.; Almaiah, M.A.; Alnazzawi, N.; Hwaitat, A.K.A.; Elhag, A.; Shehab, R.T.; Alshebami, A.S. A Secure Internet of Medical Things Framework for Breast Cancer Detection in Sustainable Smart Cities. Electronics 2023, 12, 858.
- Gui, H.; Su, T.; Pang, Z.; Jiao, H.; Xiong, L.; Jiang, X.; Li, L.; Wang, Z. Diagnosis of Breast Cancer with Strongly Supervised Deep Learning Neural Network. Electronics 2022, 11, 3003.
- Li, J.; Shi, J.; Su, H.; Gao, L. Breast cancer histopathological image recognition based on pyramid gray level co-occurrence matrix and incremental broad learning. Electronics 2022, 11, 2322.
- Liang, H.; Li, J.; Wu, H.; Li, L.; Zhou, X.; Jiang, X. Mammographic Classification of Breast Cancer Microcalcifications through Extreme Gradient Boosting. Electronics 2022, 11, 2435.
- Masko, D.; Hensman, P. The Impact of Imbalanced Training Data for Convolutional Neural Networks; Degree Project in Computer Science; KTH: Stockholm, Sweden, 2015. Available online: https://www.kth.se/social/files/588617ebf2765401cfcc478c/PHensmanDMasko_dkand15.pdf (accessed on 15 October 2023).
- Fu, Y.; Du, Y.; Cao, Z.; Li, Q.; Xiang, W. A deep learning model for network intrusion detection with imbalanced data. Electronics 2022, 11, 898.
- Hassanat, A.B.; Tarawneh, A.S.; Abed, S.S.; Altarawneh, G.A.; Alrashidi, M.; Alghamdi, M. RDPVR: Random data partitioning with voting rule for machine learning from class-imbalanced datasets. Electronics 2022, 11, 228.
- Yang, H.; Xu, J.; Xiao, Y.; Hu, L. SPE-ACGAN: A Resampling Approach for Class Imbalance Problem in Network Intrusion Detection Systems. Electronics 2023, 12, 3323.
- Esteva, A.; Robicquet, A.; Ramsundar, B.; Kuleshov, V.; DePristo, M.; Chou, K.; Cui, C.; Corrado, G.; Thrun, S.; Dean, J. A guide to deep learning in healthcare. Nat. Med. 2019, 25, 24–29.
- Zhou, X.; Xu, X.; Liang, W.; Zeng, Z.; Yan, Z. Deep-learning-enhanced multitarget detection for end–edge–cloud surveillance in smart IoT. IEEE Internet Things J. 2021, 8, 12588–12596.
- Li, L.; Xie, N.; Yuan, S. A Federated Learning Framework for Breast Cancer Histopathological Image Classification. Electronics 2022, 11, 3767.
- Agbley, B.L.Y.; Li, J.P.; Haq, A.U.; Bankas, E.K.; Mawuli, C.B.; Ahmad, S.; Khan, S.; Khan, A.R. Federated Fusion of Magnified Histopathological Images for Breast Tumor Classification in the Internet of Medical Things. IEEE J. Biomed. Health Inform. 2023, 1–12.
- Shaheen, M.; Farooq, M.S.; Umer, T.; Kim, B.S. Applications of federated learning: Taxonomy, challenges, and research trends. Electronics 2022, 11, 670.
- Kandati, D.R.; Gadekallu, T.R. Federated learning approach for early detection of chest lesion caused by COVID-19 infection using particle swarm optimization. Electronics 2023, 12, 710.
- Al Husaini, M.A.S.; Hadi Habaebi, M.; Gunawan, T.S.; Islam, M.R. Self-detection of early breast cancer application with infrared camera and deep learning. Electronics 2021, 10, 2538.
- Ding, Y.; Yang, F.; Han, M.; Li, C.; Wang, Y.; Xu, X.; Zhao, M.; Zhao, M.; Yue, M.; Deng, H.; et al. Multi-center study on predicting breast cancer lymph node status from core needle biopsy specimens using multi-modal and multi-instance deep learning. NPJ Breast Cancer 2023, 9, 58.
- Liu, M.; Hu, L.; Tang, Y.; Wang, C.; He, Y.; Zeng, C.; Lin, K.; He, Z.; Huo, W. A deep learning method for breast cancer classification in the pathology images. IEEE J. Biomed. Health Inform. 2022, 26, 5025–5032.
- Hu, T.; Zhang, L.; Xie, L.; Yi, Z. A multi-instance networks with multiple views for classification of mammograms. Neurocomputing 2021, 443, 320–328.
- Liu, S.S.; Lin, L. Adaptive Weighted Multi-View Clustering. In Proceedings of the Conference on Health, Inference, and Learning, PMLR, Cambridge, MA, USA, 22–24 June 2023; pp. 19–36.
- Choudhury, R.; Kitani, K.M.; Jeni, L.A. TEMPO: Efficient multi-view pose estimation, tracking, and forecasting. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 2–6 October 2023; pp. 14750–14760.
- Zhang, Y.; Feng, W.; Wu, Z.; Li, W.; Tao, L.; Liu, X.; Zhang, F.; Gao, Y.; Huang, J.; Guo, X. Deep-Learning Model of ResNet Combined with CBAM for Malignant–Benign Pulmonary Nodules Classification on Computed Tomography Images. Medicina 2023, 59, 1088.
- Fan, M.; Yuan, W.; Zhao, W.; Xu, M.; Wang, S.; Gao, X.; Li, L. Joint prediction of breast cancer histological grade and Ki-67 expression level based on DCE-MRI and DWI radiomics. IEEE J. Biomed. Health Inform. 2019, 24, 1632–1642.
- Johnson, M.; Stanczak, B.; Winblad, O.D.; Amin, A.L. Breast MRI assists in decision-making for surgical excision of atypical ductal hyperplasia. Surgery 2023, 173, 612–618.
- Burçak, K.C.; Baykan, Ö.K.; Uğuz, H. A new deep convolutional neural network model for classifying breast cancer histopathological images and the hyperparameter optimisation of the proposed model. J. Supercomput. 2021, 77, 973–989.
- Li, J.; Mi, W.; Guo, Y.; Ren, X.; Fu, H.; Zhang, T.; Zou, H.; Liang, Z. Artificial intelligence for histological subtype classification of breast cancer: Combining multi-scale feature maps and the recurrent attention model. Histopathology 2022, 80, 836–846.
- Rezaei, Z. A review on image-based approaches for breast cancer detection, segmentation, and classification. Expert Syst. Appl. 2021, 182, 115204.
- Elmannai, H.; Hamdi, M.; AlGarni, A. Deep learning models combining for breast cancer histopathology image classification. Int. J. Comput. Intell. Syst. 2021, 14, 1003.
- Singh, L.K.; Khanna, M.; Singh, R. Artificial intelligence based medical decision support system for early and accurate breast cancer prediction. Adv. Eng. Softw. 2023, 175, 103338.
- Yang, J.; Chen, H.; Zhao, Y.; Yang, F.; Zhang, Y.; He, L.; Yao, J. ReMix: A general and efficient framework for multiple instance learning based whole slide image classification. In Medical Image Computing and Computer-Assisted Intervention; Springer: Berlin/Heidelberg, Germany, 2022; pp. 35–45.
- Sudharshan, P.; Petitjean, C.; Spanhol, F.; Oliveira, L.E.; Heutte, L.; Honeine, P. Multiple instance learning for histopathological breast cancer image classification. Expert Syst. Appl. 2019, 117, 103–111.
- Khadim, E.U.; Shah, S.A.; Wagan, R.A. Evaluation of activation functions in CNN model for detection of malaria parasite using blood smear images. In Proceedings of the 2021 International Conference on Innovative Computing (ICIC), Lahore, Pakistan, 9–10 November 2021; pp. 1–6.
- Esteva, A.; Kuprel, B.; Novoa, R.A.; Ko, J.; Swetter, S.M.; Blau, H.M.; Thrun, S. Dermatologist-level classification of skin cancer with deep neural networks. Nature 2017, 542, 115–118.
- Yan, R.; Ren, F.; Wang, Z.; Wang, L.; Zhang, T.; Liu, Y.; Rao, X.; Zheng, C.; Zhang, F. Breast cancer histopathological image classification using a hybrid deep neural network. Methods 2020, 173, 52–60.
- Dash, S.; Parida, P.; Mohanty, J.R. Illumination robust deep convolutional neural network for medical image classification. Soft Comput. 2023, 1–13.
- Ramadan, S.Z. Methods used in computer-aided diagnosis for breast cancer detection using mammograms: A review. J. Healthc. Eng. 2020, 2020.
- Basodi, S.; Ji, C.; Zhang, H.; Pan, Y. Gradient amplification: An efficient way to train deep neural networks. Big Data Min. Anal. 2020, 3, 196–207.
- Alqahtani, Y.; Mandawkar, U.; Sharma, A.; Hasan, M.N.S.; Kulkarni, M.H.; Sugumar, R. Breast Cancer Pathological Image Classification Based on the Multiscale CNN Squeeze Model. Comput. Intell. Neurosci. 2022, 2022, 7075408.
- Zerouaoui, H.; Idri, A. Deep hybrid architectures for binary classification of medical breast cancer images. Biomed. Signal Process. Control 2022, 71, 103226.
- Kumar, S.; Sharma, S. Sub-classification of invasive and non-invasive cancer from magnification independent histopathological images using hybrid neural networks. Evol. Intell. 2022, 15, 1531–1543.
- Agarwal, P.; Yadav, A.; Mathur, P. Breast cancer prediction on BreakHis dataset using deep CNN and transfer learning model. In Data Engineering for Smart Systems: Proceedings of SSIC 2021; Springer: Berlin/Heidelberg, Germany, 2022; pp. 77–88.
- Djouima, H.; Zitouni, A.; Megherbi, A.C.; Sbaa, S. Classification of Breast Cancer Histopathological Images using DensNet201. In Proceedings of the 2022 7th International Conference on Image and Signal Processing and their Applications (ISPA), Mostaganem, Algeria, 8–9 May 2022; pp. 1–6.
- Singh, S.; Kumar, R. Breast cancer detection from histopathology images with deep inception and residual blocks. Multimed. Tools Appl. 2022, 81, 5849–5865.
- Jakhar, A.K.; Gupta, A.; Singh, M. SELF: A stacked-based ensemble learning framework for breast cancer classification. Evol. Intell. 2023, 1–16.
- Juhong, A.; Li, B.; Yao, C.Y.; Yang, C.W.; Agnew, D.W.; Lei, Y.L.; Huang, X.; Piyawattanametha, W.; Qiu, Z. Super-resolution and segmentation deep learning for breast cancer histopathology image analysis. Biomed. Opt. Express 2023, 14, 18–36.
- Chhipa, P.C.; Upadhyay, R.; Pihlgren, G.G.; Saini, R.; Uchida, S.; Liwicki, M. Magnification prior: A self-supervised method for learning representations on breast cancer histopathological images. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–7 January 2023; pp. 2717–2727.
Magnification | Benign | Malignant | Total |
---|---|---|---|
40× | 598 | 1398 | 1996 |
100× | 642 | 1437 | 2079 |
200× | 594 | 1418 | 2012 |
400× | 590 | 1232 | 1822 |
Total | 2424 | 5485 | 7909 |
Model | Image Size | Feature Extractor | Train/Test/Validation | Classification Accuracy on 40× |
---|---|---|---|---|
Sudharshan et al. [40] | Rnd (64 × 64) | PFTAS | 70%/30%/0% | 0.878 |
Alqahtani et al. [47] | Rnd (224 × 224) | ResNet50 | 85%/15%/0% | 0.888 |
Zerouaoui et al. [48] | Rnd (256 × 256) | Multi-model | Unknown | 0.926 |
Kumar et al. [49] | Rnd (299 × 299) | ResNet152 | Unknown | 0.824 |
Agarwal et al. [50] | WSI | ResNet50 | Unknown | 0.947 |
Djouima et al. [51] | Rnd (384 × 384) | DenseNet201 | 70%/20%/10% | 0.920 |
Singh et al. [52] | Rnd (224 × 224) | Hybrid model | 65%/35%/0% | 0.808 |
Jakhar et al. [53] | WSI | Random forest, Extra tree | 80%/20%/0% | 0.934 |
Juhong et al. [54] | WSI | SRGAN-ResNeXt, Inception U-net | Unknown | 0.947 |
Chhipa et al. [55] | WSI | ResNet152 + MVPNet | 64%/20%/16% | 0.907 |
This model | WSI | PLA | 70%/20%/10% | 0.953 |
EXP Type | Metrics | 7909 Samples | 4000 Samples | 2000 Samples | 1000 Samples |
---|---|---|---|---|---|
EXP-A | ACC | 0.953 | 0.965 | 0.893 | 0.610 |
EXP-A | Precision | 0.988 | 0.958 | 0.930 | 0.613 |
EXP-A | Recall | 0.994 | 0.998 | 0.941 | 0.915 |
EXP-A | F1 | 0.925 | 0.977 | 0.932 | 0.734 |
EXP-B | ACC | 0.941 | 0.861 | 0.828 | 0.613 |
EXP-B | Precision | 0.979 | 0.919 | 0.936 | 0.956 |
EXP-B | Recall | 0.988 | 0.930 | 0.887 | 0.881 |
EXP-B | F1 | 0.983 | 0.918 | 0.960 | 0.968 |
EXP-C | ACC | 0.877 | 0.845 | 0.610 | 0.610 |
EXP-C | Precision | 0.881 | 0.883 | 0.613 | 0.613 |
EXP-C | Recall | 0.974 | 0.949 | 0.915 | 0.915 |
EXP-C | F1 | 0.934 | 0.915 | 0.734 | 0.734 |
Particulars (Time/Memory) | 1000 Samples | 2000 Samples | 4000 Samples | 7909 Samples |
---|---|---|---|---|
Design Model Time (ms) | 45.2 | 52.1 | 57.8 | 59.2 |
Design Model Memory (MB) | 9.42 | 9.42 | 9.42 | 9.42 |
ResNet18 Time (ms) | 56.4 | 63.6 | 70.5 | 72.6 |
ResNet18 Memory (MB) | 48.6 | 48.6 | 48.6 | 48.6 |
VGG11 Time (ms) | 60.6 | 67.6 | 77.8 | 79.1 |
VGG11 Memory (MB) | 138 | 138 | 138 | 138 |
Inception v3 Time (ms) | 64.4 | 71.6 | 78.5 | 80.1 |
Inception v3 Memory (MB) | 238 | 238 | 238 | 238 |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Yan, C.; Zeng, X.; Xi, R.; Ahmed, A.; Hou, M.; Tunio, M.H. PLA—A Privacy-Embedded Lightweight and Efficient Automated Breast Cancer Accurate Diagnosis Framework for the Internet of Medical Things. Electronics 2023, 12, 4923. https://doi.org/10.3390/electronics12244923