Enhancing Data Privacy Protection and Feature Extraction in Secure Computing Using a Hash Tree and Skip Attention Mechanism
Abstract
1. Introduction
2. Related Work
2.1. Hash Algorithms
2.2. Attention Mechanisms
2.3. Large Models
3. Materials and Method
3.1. Materials
3.1.1. Dataset Collection
3.1.2. Data Preprocessing
3.2. Proposed Method
3.2.1. Overview
3.2.2. Hash-Tree-Based Transformer Model
Inserting a new node involves four steps (a code sketch follows this list):
1. Locating the Insertion Position: First, we determine the position of the new node within the hash tree. This is typically carried out by selecting an appropriate leaf position for insertion according to the hash value of the data and the tree’s balancing rules.
2. Calculating the Hash Value of the New Node: Perform a hash operation on the inserted node to generate its hash value. Assume the new node is N; its hash value is h(N).
3. Updating the Hash Value of the Parent Node: Add the hash value of the new node to its parent node’s hash calculation to update the parent’s hash value. Assume the parent node is P; its updated hash value is calculated as h(P) = Hash(h(C_left) || h(C_right)), where one of the children C_left, C_right is the new node N and || denotes concatenation.
4. Recursively Updating Hash Values Along the Path: Each update affects its parent node; therefore, the hash values of all nodes along the path from the new node to the root node need to be recalculated, ensuring the overall hash consistency of the tree structure after inserting the new node.
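The following is a minimal sketch of these insertion steps in Python, assuming a binary hash tree hashed with SHA-256. The `Node` layout, the hash choice, and the way the free child slot is selected stand in for the paper’s balancing rules, which this outline does not specify.

```python
import hashlib

class Node:
    # Binary hash-tree node: leaves carry a data payload, internal nodes do not.
    def __init__(self, data=None):
        self.data = data
        self.left = self.right = self.parent = None
        self.hash = None

def digest(payload: bytes) -> str:
    # SHA-256 is an assumed choice; any collision-resistant hash would do.
    return hashlib.sha256(payload).hexdigest()

def update_path(node):
    # Step 4: recompute hashes from `node` up to the root.
    while node is not None:
        if node.data is not None:                    # leaf: hash its own data
            node.hash = digest(node.data)
        else:                                        # internal: hash child hashes
            parts = "".join(c.hash for c in (node.left, node.right) if c is not None)
            node.hash = digest(parts.encode())
        node = node.parent

def insert(parent, data):
    # Steps 1-3: attach a new leaf N under `parent` (the position chosen by the
    # balancing rule, abstracted away here), then repair hashes along the path.
    new = Node(data)
    new.parent = parent
    if parent.left is None:
        parent.left = new
    else:
        parent.right = new
    update_path(new)    # hashes the new leaf, then updates P and its ancestors
    return new
```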
Deleting a node follows an analogous four-step procedure (a code sketch follows this list):
1. Locating the Node to Delete: Identify the position of the node to delete, N, within the tree structure based on the node’s hash value or positional information.
2. Removing the Node: Remove the node from the tree. If the node is a leaf, it is directly removed. If the node has children, its child nodes are adjusted according to the tree’s balancing strategy to maintain the structure.
3. Updating the Hash Value of the Parent Node: After deletion, the hash value of the parent node needs to be recalculated. Assume the deleted node is N and its parent node is P; the new hash value is calculated as h(P) = Hash(h(C_1) || … || h(C_k)), computed over the remaining children C_1, …, C_k of P after N is removed.
4. Recursively Updating Hash Values Along the Path: Starting from the parent node of the deleted node, recursively update the hash values along the path up to the root node to ensure the hash consistency of the entire tree.
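Continuing the sketch above, deleting a leaf and repairing the hashes along its root path might look as follows; reattaching the children of an internal node is omitted, since it depends on the balancing strategy, which this outline does not specify.

```python
def delete(node):
    # Steps 1-4 for the leaf case: detach `node` from its parent P, then
    # recompute h(P) over the remaining children and bubble up to the root.
    parent = node.parent
    if parent is None:
        raise ValueError("cannot delete the root")
    if parent.left is node:
        parent.left = None
    else:
        parent.right = None
    node.parent = None
    update_path(parent)
```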
3.2.3. Skip Attention Mechanism
- Network layers: Our model is designed with three layers, each including a skip attention unit and a feed-forward network unit. This design ensures sufficient model depth to handle complex data relationships while maintaining computational efficiency.
- Network width and channel numbers: The width of each skip attention unit is set to 64, meaning each attention head processes features of dimension 64. This width was chosen by experimentally balancing model performance against computational cost. The number of channels is set to 256 so that the network can capture sufficient information; a configuration sketch follows this list.
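A minimal PyTorch sketch of this configuration is shown below. The three-layer depth, 256 channels, and 64-dimensional heads (hence 256/64 = 4 heads) follow the text above; the gated fusion of the identity (skip) branch with the attention output is our assumption, as this outline does not pin down the exact form of the skip attention unit.

```python
import torch
import torch.nn as nn

class SkipAttentionBlock(nn.Module):
    # One layer: a skip attention unit followed by a feed-forward unit.
    def __init__(self, channels: int = 256, head_dim: int = 64):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, channels // head_dim,
                                          batch_first=True)
        self.gate = nn.Linear(channels, channels)  # assumed skip-fusion gate
        self.norm1 = nn.LayerNorm(channels)
        self.norm2 = nn.LayerNorm(channels)
        self.ffn = nn.Sequential(nn.Linear(channels, 4 * channels),
                                 nn.GELU(),
                                 nn.Linear(4 * channels, channels))

    def forward(self, x):  # x: (batch, sequence, channels)
        a, _ = self.attn(x, x, x)
        # Fuse the attention output with the identity (skip) path via a gate.
        x = self.norm1(x + torch.sigmoid(self.gate(x)) * a)
        return self.norm2(x + self.ffn(x))

model = nn.Sequential(*[SkipAttentionBlock() for _ in range(3)])  # three layers
```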
3.2.4. Skip Loss Function
The skip loss function provides three benefits (a code sketch follows this list):
1. Improved gradient flow: By adding extra loss calculations for cross-layer connections, it helps address the problem of vanishing gradients, which is especially evident in deeper networks.
2. Enhanced model learning capability: It enables the model to more effectively learn and utilize features transmitted through skip connections, thus improving overall prediction accuracy and generalization ability.
3. Increased training stability: By dynamically adjusting the weights of the loss at different layers, it makes the training process more stable, reducing fluctuations during training.
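A minimal sketch of such a loss is given below, assuming an auxiliary prediction is read out after each skip connection. The softmax-normalized learnable layer weights stand in for the dynamic weighting scheme described above; the exact adjustment rule is an assumption.

```python
import torch
import torch.nn as nn

def skip_loss(layer_outputs, target, layer_weights,
              base_criterion=nn.CrossEntropyLoss()):
    # Weighted sum of per-layer losses over skip-connected readouts. Softmax
    # keeps the per-layer weights positive and summing to one, so training can
    # dynamically shift emphasis between shallow and deep layers.
    w = torch.softmax(layer_weights, dim=0)
    losses = torch.stack([base_criterion(out, target) for out in layer_outputs])
    return (w * losses).sum()

# Usage sketch: one weight per layer for the three-layer model above,
# optimized jointly with the network parameters.
layer_weights = nn.Parameter(torch.zeros(3))
```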
4. Results and Discussion
4.1. Experimental Setup
4.1.1. Hardware and Software Environment
4.1.2. Hyperparameters and Training Settings
4.1.3. Baseline
4.1.4. Evaluation Metrics
4.2. Time Series Data Prediction Results
4.3. Image Data Classification Results
4.4. Results of the Ablation Study on Different Attention Mechanisms
4.5. Encryption Effect Visualization Analysis
4.6. Discussion on Security
4.7. Discussion on Computational Complexities and Future Works
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Bharadiya, J.P. A comparative study of business intelligence and artificial intelligence with big data analytics. Am. J. Artif. Intell. 2023, 7, 24. [Google Scholar]
- Zhang, Y.; Wa, S.; Liu, Y.; Zhou, X.; Sun, P.; Ma, Q. High-accuracy detection of maize leaf diseases CNN based on multi-pathway activation function module. Remote Sens. 2021, 13, 4218. [Google Scholar] [CrossRef]
- Li, D.; Han, D.; Crespi, N.; Minerva, R.; Li, K.C. A blockchain-based secure storage and access control scheme for supply chain finance. J. Supercomput. 2023, 79, 109–138. [Google Scholar] [CrossRef]
- Kafi, M.A.; Akter, N. Securing financial information in the digital realm: Case studies in cybersecurity for accounting data protection. Am. J. Trade Policy 2023, 10, 15–26. [Google Scholar] [CrossRef]
- Zhang, Y.; Wa, S.; Zhang, L.; Lv, C. Automatic plant disease detection based on tranvolution detection network with GAN modules using leaf images. Front. Plant Sci. 2022, 13, 875693. [Google Scholar] [CrossRef]
- Rehan, H. AI-Powered Genomic Analysis in the Cloud: Enhancing Precision Medicine and Ensuring Data Security in Biomedical Research. J. Deep. Learn. Genom. Data Anal. 2023, 3, 37–71. [Google Scholar]
- Zhang, Y.; Yang, X.; Liu, Y.; Zhou, J.; Huang, Y.; Li, J.; Zhang, L.; Ma, Q. A time-series neural network for pig feeding behavior recognition and dangerous detection from videos. Comput. Electron. Agric. 2024, 218, 108710. [Google Scholar] [CrossRef]
- Mohammad, N. Enhancing Security and Privacy in Multi-Cloud Environments: A Comprehensive Study on Encryption Techniques and Access Control Mechanisms. Int. J. Comput. Eng. Technol. (IJCET) 2021, 12, 51–63. [Google Scholar]
- Shivaramakrishna, D.; Nagaratna, M. A novel hybrid cryptographic framework for secure data storage in cloud computing: Integrating AES-OTP and RSA with adaptive key management and Time-Limited access control. Alex. Eng. J. 2023, 84, 275–284. [Google Scholar] [CrossRef]
- Xu, K.; Wu, Y.; Li, Z.; Zhang, R.; Feng, Z. Investigating financial risk behavior prediction using deep learning and big data. Int. J. Innov. Res. Eng. Manag. 2024, 11, 77–81. [Google Scholar] [CrossRef]
- Sahu, S.K.; Mokhade, A.; Bokde, N.D. An overview of machine learning, deep learning, and reinforcement learning-based techniques in quantitative finance: Recent progress and challenges. Appl. Sci. 2023, 13, 1956. [Google Scholar] [CrossRef]
- Ren, Y.; Huang, D.; Wang, W.; Yu, X. BSMD: A blockchain-based secure storage mechanism for big spatio-temporal data. Future Gener. Comput. Syst. 2023, 138, 328–338. [Google Scholar] [CrossRef]
- Suganya, M.; Sasipraba, T. Stochastic Gradient Descent Long Short-Term Memory based secure encryption algorithm for cloud data storage and retrieval in cloud computing environment. J. Cloud Comput. 2023, 12, 74. [Google Scholar] [CrossRef]
- Puneeth, R.; Parthasarathy, G. Security and Data Privacy of Medical Information in Blockchain Using Lightweight Cryptographic System. Int. J. Eng. 2023, 36, 925–933. [Google Scholar] [CrossRef]
- Lu, S.; Liu, M.; Yin, L.; Yin, Z.; Liu, X.; Zheng, W. The multi-modal fusion in visual question answering: A review of attention mechanisms. PeerJ Comput. Sci. 2023, 9, e1400. [Google Scholar] [CrossRef]
- Li, Q.; Ren, J.; Zhang, Y.; Song, C.; Liao, Y.; Zhang, Y. Privacy-Preserving DNN Training with Prefetched Meta-Keys on Heterogeneous Neural Network Accelerators. In Proceedings of the IEEE 2023 60th ACM/IEEE Design Automation Conference (DAC), San Francisco, CA, USA, 9–13 July 2023; pp. 1–6. [Google Scholar]
- Zhang, Y.; Lv, C. TinySegformer: A lightweight visual segmentation model for real-time agricultural pest detection. Comput. Electron. Agric. 2024, 218, 108740. [Google Scholar] [CrossRef]
- Zhang, N.; Kim, J. A Survey on Attention mechanism in NLP. In Proceedings of the IEEE 2023 International Conference on Electronics, Information, and Communication (ICEIC), Singapore, 5–8 February 2023; pp. 1–4. [Google Scholar]
- Li, X.; Li, M.; Yan, P.; Li, G.; Jiang, Y.; Luo, H.; Yin, S. Deep learning attention mechanism in medical image analysis: Basics and beyonds. Int. J. Netw. Dyn. Intell. 2023, 2, 93–116. [Google Scholar] [CrossRef]
- Cheng, G.; Lai, P.; Gao, D.; Han, J. Class attention network for image recognition. Sci. China Inf. Sci. 2023, 66, 132105. [Google Scholar] [CrossRef]
- Han, D.; Zhou, H.; Weng, T.H.; Wu, Z.; Han, B.; Li, K.C.; Pathan, A.S.K. LMCA: A lightweight anomaly network traffic detection model integrating adjusted mobilenet and coordinate attention mechanism for IoT. Telecommun. Syst. 2023, 84, 549–564. [Google Scholar] [CrossRef]
- Lv, Z.; Chen, D.; Cao, B.; Song, H.; Lv, H. Secure deep learning in defense in deep-learning-as-a-service computing systems in digital twins. IEEE Trans. Comput. 2023, 73, 656–668. [Google Scholar] [CrossRef]
- Devlin, J.; Chang, M.-W.; Lee, K.; Toutanova, K. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv 2018, arXiv:1810.04805. [Google Scholar]
- Achiam, J.; Adler, S.; Agarwal, S.; Ahmad, L.; Akkaya, I.; Aleman, F.L.; Almeida, D.; Altenschmidt, J.; Altman, S.; Anadkat, S.; et al. Gpt-4 technical report. arXiv 2023, arXiv:2303.08774. [Google Scholar]
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. arXiv 2017, arXiv:1706.03762. [Google Scholar]
- Li, Q.; Zhang, Y.; Ren, J.; Li, Q.; Zhang, Y. You Can Use But Cannot Recognize: Preserving Visual Privacy in Deep Neural Networks. arXiv 2024, arXiv:2404.04098. [Google Scholar]
- Li, Q.; Zhang, Y. Confidential Federated Learning for Heterogeneous Platforms against Client-Side Privacy Leakages. In Proceedings of the ACM Turing Award Celebration Conference, Changsha, China, 5–7 July 2024; pp. 239–241. [Google Scholar]
- Zhang, P.; Dong, X.; Wang, B.; Cao, Y.; Xu, C.; Ouyang, L.; Zhao, Z.; Duan, H.; Zhang, S.; Ding, S.; et al. Internlm-xcomposer: A vision-language large model for advanced text-image comprehension and composition. arXiv 2023, arXiv:2309.15112. [Google Scholar]
- Dong, X.; Zhang, P.; Zang, Y.; Cao, Y.; Wang, B.; Ouyang, L.; Wei, X.; Zhang, S.; Duan, H.; Cao, M.; et al. Internlm-xcomposer2: Mastering free-form text-image composition and comprehension in vision-language large model. arXiv 2024, arXiv:2401.16420. [Google Scholar]
- Jacobs, S.A.; Tanaka, M.; Zhang, C.; Zhang, M.; Song, S.L.; Rajbhandari, S.; He, Y. Deepspeed ulysses: System optimizations for enabling training of extreme long sequence transformer models. arXiv 2023, arXiv:2309.14509. [Google Scholar]
- Li, H.; Wang, S.X.; Shang, F.; Niu, K.; Song, R. Applications of large language models in cloud computing: An empirical study using real-world data. Int. J. Innov. Res. Comput. Sci. Technol. 2024, 12, 59–69. [Google Scholar] [CrossRef]
- Yao, Y.; Duan, J.; Xu, K.; Cai, Y.; Sun, Z.; Zhang, Y. A survey on large language model (llm) security and privacy: The good, the bad, and the ugly. High-Confid. Comput. 2024, 4, 100211. [Google Scholar] [CrossRef]
- Chen, J.; Liu, Z.; Huang, X.; Wu, C.; Liu, Q.; Jiang, G.; Pu, Y.; Lei, Y.; Chen, X.; Wang, X.; et al. When large language models meet personalization: Perspectives of challenges and opportunities. World Wide Web 2024, 27, 42. [Google Scholar] [CrossRef]
- Qiu, J.; Li, L.; Sun, J.; Peng, J.; Shi, P.; Zhang, R.; Dong, Y.; Lam, K.; Lo, F.P.W.; Xiao, B.; et al. Large ai models in health informatics: Applications, challenges, and the future. IEEE J. Biomed. Health Inform. 2023, 27, 6074–6087. [Google Scholar] [CrossRef] [PubMed]
- Marichamy, V.S.; Natarajan, V. Blockchain based securing medical records in big data analytics. Data Knowl. Eng. 2023, 144, 102122. [Google Scholar] [CrossRef]
- Richter, T.; Artzt, M. International Handbook of Blockchain Law: A Guide to Navigating Legal and Regulatory Challenges of Blockchain Technology and Crypto Assets; Kluwer Law International BV: Alphen aan den Rijn, The Netherlands, 2024. [Google Scholar]
- Hu, S.; Lin, J.; Du, X.; Huang, W.; Lu, Z.; Duan, Q.; Wu, J. ACSarF: A DRL-based adaptive consortium blockchain sharding framework for supply chain finance. Digit. Commun. Netw. 2023. [Google Scholar] [CrossRef]
- Udayakumar, R.; Chowdary, P.B.K.; Devi, T.; Sugumar, R. Integrated SVM-FFNN for Fraud Detection in Banking Financial Transactions. J. Internet Serv. Inf. Secur. 2023, 13, 12–25. [Google Scholar]
- Zheng, J.; Xin, D.; Cheng, Q.; Tian, M.; Yang, L. The Random Forest Model for Analyzing and Forecasting the US Stock Market in the Context of Smart Finance. arXiv 2024, arXiv:2402.17194. [Google Scholar]
- Wang, J.; Hong, S.; Dong, Y.; Li, Z.; Hu, J. Predicting stock market trends using lstm networks: Overcoming RNN limitations for improved financial forecasting. J. Comput. Sci. Softw. Appl. 2024, 4, 1–7. [Google Scholar]
- Fang, Z.; Ma, X.; Pan, H.; Yang, G.; Arce, G.R. Movement forecasting of financial time series based on adaptive LSTM-BN network. Expert Syst. Appl. 2023, 213, 119207. [Google Scholar] [CrossRef]
- Arias-Serrano, I.; Cruz-Varela, J.; Almeida-Galárraga, D.; Tirado-Espin, A.; Velásquez-López, P.A.; Laurido-Mora, F.C.; Villalba-Meneses, F.; Avila-Briones, L.N. Artificial Intelligence Based Glaucoma and Diabetic Retinopathy Detection Using MATLAB—Retrained AlexNet Convolutional Neural Network; Technical Report; PubMed Central (PMC): Bethesda, MD, USA, 2024. [Google Scholar]
- Ma, L.; Wu, H.; Samundeeswari, P. GoogLeNet-AL: A Fully Automated Adaptive Model for Lung Cancer Detection. Pattern Recognit. 2024, 155, 110657. [Google Scholar] [CrossRef]
- Mirza, A.F.; Mansoor, M.; Usman, M.; Ling, Q. Hybrid Inception-embedded deep neural network ResNet for short and medium-term PV-Wind forecasting. Energy Convers. Manag. 2023, 294, 117574. [Google Scholar] [CrossRef]
- Talukder, M.A.; Layek, M.A.; Kazi, M.; Uddin, M.A.; Aryal, S. Empowering COVID-19 detection: Optimizing performance through fine-tuned efficientnet deep learning architecture. Comput. Biol. Med. 2024, 168, 107789. [Google Scholar] [CrossRef]
- Cortes, C.; Vapnik, V. Support-vector networks. Mach. Learn. 1995, 20, 273–297. [Google Scholar] [CrossRef]
- Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef]
- Zaremba, W.; Sutskever, I.; Vinyals, O. Recurrent neural network regularization. arXiv 2014, arXiv:1409.2329. [Google Scholar]
- Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780. [Google Scholar]
- Pan, X.; Ge, C.; Lu, R.; Song, S.; Chen, G.; Huang, Z.; Huang, G. On the integration of self-attention and convolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 815–825. [Google Scholar]
- Woo, S.; Park, J.; Lee, J.Y.; Kweon, I.S. Cbam: Convolutional block attention module. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 3–19. [Google Scholar]
- Rijmen, V.; Daemen, J. Advanced Encryption Standard (AES); Federal Information Processing Standards Publication 197; National Institute of Standards and Technology: Gaithersburg, MD, USA, 2001. [Google Scholar]
- Costache, A.; Smart, N.P. Homomorphic encryption without gaussian noise. Cryptol. ePrint Arch. 2017, 163, 1–24. [Google Scholar]
- Umar, H.G.A.; Aoun, M.; Kaleem, M.A.; Rehman, S.U.; Khan, M.Z.; Younis, M.; Jamil, M. Cryptographic analysis of blur-based encryption: An in-depth examination of resilience against various attack vectors. Res. Sq. 2023. [Google Scholar] [CrossRef]
- Ismail, S.M.; Said, L.A.; Radwan, A.G.; Madian, A.H.; Abu-ElYazeed, M.F. A novel image encryption system merging fractional-order edge detection and generalized chaotic maps. Signal Process. 2020, 167, 107280. [Google Scholar] [CrossRef]
Model | Precision | Recall | Accuracy | F1-Score | Memory (MB) | FPS |
---|---|---|---|---|---|---|
SVM [46] | 0.84 | 0.80 | 0.82 | 0.82 | - | 14,168 |
Random Forest [47] | 0.85 | 0.82 | 0.83 | 0.83 | - | 13,624 |
RNN [48] | 0.90 | 0.87 | 0.88 | 0.88 | 43 | 9103 |
LSTM [49] | 0.93 | 0.88 | 0.91 | 0.90 | 67 | 9857 |
Proposed Method | 0.94 | 0.89 | 0.92 | 0.91 | 381 | 6891 |
Model | Precision | Recall | Accuracy | F1-Score | Memory (MB) | FPS |
---|---|---|---|---|---|---|
SVM [46] | 0.86 | 0.81 | 0.83 | 0.83 | - | 14,168 |
Random Forest [47] | 0.88 | 0.85 | 0.87 | 0.86 | - | 13,631 |
RNN [48] | 0.90 | 0.87 | 0.88 | 0.88 | 46 | 9124 |
LSTM [49] | 0.92 | 0.88 | 0.90 | 0.90 | 61 | 9859 |
Proposed Method | 0.93 | 0.89 | 0.91 | 0.91 | 383 | 6899 |
Model | Precision | Recall | Accuracy | F1-Score | Memory (MB) | FPS |
---|---|---|---|---|---|---|
AlexNet [42] | 0.81 | 0.78 | 0.80 | 0.79 | 240 | 58 |
Inception v1 [43] | 0.84 | 0.81 | 0.83 | 0.82 | 27 | 52 |
ResNet50 [44] | 0.87 | 0.85 | 0.86 | 0.86 | 98 | 49 |
EfficientNet-B0 [45] | 0.90 | 0.88 | 0.89 | 0.89 | 29 | 37 |
Proposed Method | 0.92 | 0.89 | 0.91 | 0.90 | 381 | 24 |
Model | Precision | Recall | Accuracy | F1-Score | Memory (MB) | FPS |
---|---|---|---|---|---|---|
AlexNet [42] | 0.98 | 0.96 | 0.97 | 0.97 | 238 | 57 |
Inception v1 [43] | 0.98 | 0.97 | 0.98 | 0.97 | 27 | 52 |
ResNet50 [44] | 0.99 | 0.98 | 0.98 | 0.98 | 98 | 47 |
EfficientNet-B0 [45] | 0.99 | 0.99 | 0.99 | 0.99 | 27 | 38 |
Proposed Method | 0.99 | 0.99 | 0.99 | 0.99 | 385 | 26 |
Model | Precision | Recall | Accuracy | F1-Score | Memory (MB) | FPS |
---|---|---|---|---|---|---|
AlexNet [42] | 0.90 | 0.88 | 0.89 | 0.89 | 242 | 58 |
Inception v1 [43] | 0.91 | 0.89 | 0.90 | 0.90 | 29 | 54 |
ResNet50 [44] | 0.93 | 0.91 | 0.92 | 0.92 | 95 | 48 |
EfficientNet-B0 [45] | 0.94 | 0.92 | 0.93 | 0.93 | 31 | 37 |
Proposed Method | 0.95 | 0.93 | 0.94 | 0.94 | 385 | 24 |
Model | Precision | Recall | Accuracy | F1-Score |
---|---|---|---|---|
Standard Self-Attention [50] | 0.72 | 0.70 | 0.71 | 0.71 |
Convolutional Block Attention Module [51] | 0.85 | 0.82 | 0.83 | 0.83 |
Skip Attention | 0.94 | 0.89 | 0.92 | 0.91 |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).