A User-Centered Framework for Data Privacy Protection Using Large Language Models and Attention Mechanisms
Abstract
1. Introduction
1. A novel user-centered large language model (LLM) architecture is introduced, which incorporates differential privacy techniques to protect sensitive information during model training and inference.
2. A user attention mechanism is developed that dynamically adjusts attention weights based on privacy requirements, enhancing the protection of sensitive data.
3. A unified loss function (uni-loss) is formulated to balance classification performance, privacy protection, and the effectiveness of the attention mechanism, providing a comprehensive optimization strategy for multi-task learning environments.
4. Extensive experiments across various application scenarios demonstrate the effectiveness and robustness of the proposed framework, establishing a new benchmark in the field of data privacy protection.
2. Related Work
2.1. Security Computing Methods Based on Encryption Algorithms
Fully Homomorphic Encryption
2.2. Security Computing Methods Based on Noise or Frameworks
2.2.1. Differential Privacy
2.2.2. Federated Learning
3. Materials and Methods
3.1. Dataset Collection
3.2. Dataset Preprocessing
3.2.1. Data Cleaning
3.2.2. Data Annotation
3.2.3. Data Preprocessing
3.3. Proposed Method
3.3.1. Overview
1. Model input layer: Preprocessed data first enter the model input layer, where text data and image data are processed separately. Text data are converted into word vectors or sentence vectors; image data are converted into pixel matrices or feature vectors.
2. Privacy protection layer: From the input layer, data enter the privacy protection layer, whose main task is to add noise through differential privacy techniques to protect user privacy. Specifically, given a set of model parameters $\theta$, differential privacy is achieved by adding Gaussian noise to the parameters: $\tilde{\theta} = \theta + \mathcal{N}(0, \sigma^2 I)$, where $\sigma$ controls the noise scale.
3. Core computation layer: After the privacy protection layer, data enter the core computation layer, which consists of multiple neural network layers (e.g., a Transformer architecture) responsible for deep processing and analysis of the data. In this layer, the model uses a multi-head attention mechanism to capture data features: $\mathrm{MultiHead}(Q, K, V) = \mathrm{Concat}(\mathrm{head}_1, \dots, \mathrm{head}_h) W^O$, where $\mathrm{head}_i = \mathrm{softmax}\!\left(\frac{Q W_i^Q (K W_i^K)^\top}{\sqrt{d_k}}\right) V W_i^V$.
4. User attention mechanism: Within the core computation layer, the user attention mechanism is introduced. It dynamically adjusts attention weights so that the model can protect users' sensitive information while processing data, and it involves three steps. First, attention weights are initialized at the beginning of model training, either from a pre-trained model or randomly. Second, during training and inference the attention weights are dynamically adjusted based on the characteristics of the input data and the privacy needs of the users; by introducing user privacy weights, the model can process data accurately while protecting privacy. Third, a multi-level attention mechanism applies attention at different levels to capture different features of the data, achieving finer-grained privacy protection.
5. Output layer: Finally, data enter the output layer, which generates the final model output, such as classification results, regression results, or generated text. To ensure privacy protection, the output layer may further process the results, for example by masking sensitive information or adding additional noise.
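The privacy protection layer and the user attention mechanism described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the noise scale `sigma`, and the way privacy weights enter the attention scores (as a log-space bias on the keys) are all assumptions made for clarity.

```python
import math
import random


def dp_noise(params, sigma=0.1, seed=0):
    """Privacy protection layer: perturb each parameter with Gaussian noise,
    i.e. theta_tilde = theta + N(0, sigma^2)."""
    rng = random.Random(seed)
    return [p + rng.gauss(0.0, sigma) for p in params]


def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]


def user_attention(query, keys, values, privacy_weights):
    """Single-query scaled dot-product attention with user privacy weights:
    keys belonging to sensitive tokens (weight near 0) receive a large
    negative bias and are therefore attended to less."""
    d_k = len(query)
    scores = [
        sum(q * k for q, k in zip(query, key)) / math.sqrt(d_k)
        + math.log(w + 1e-9)  # down-weight sensitive tokens
        for key, w in zip(keys, privacy_weights)
    ]
    attn = softmax(scores)
    dim = len(values[0])
    output = [sum(a * v[j] for a, v in zip(attn, values)) for j in range(dim)]
    return output, attn
```

With identical keys, a token whose privacy weight is near zero ends up with near-zero attention, while the remaining attention mass is redistributed over the non-sensitive tokens.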
3.3.2. User-Centered LLM
3.3.3. User Attention Mechanism
3.4. Proof of Security
Uni-Loss Function
1. Classification loss: This term measures the model's performance on classification tasks. As in the standard Transformer, cross-entropy loss is used: $\mathcal{L}_{\mathrm{cls}} = -\sum_{i} y_i \log \hat{y}_i$, where $y_i$ is the true label and $\hat{y}_i$ the predicted probability.
2. Privacy loss: To protect user privacy, differential privacy techniques are introduced during the model parameter update process; the privacy loss term measures the model's effectiveness in privacy protection.
3. Attention loss: This term optimizes the effectiveness of the user attention mechanism, ensuring the model can protect sensitive user information while processing data.
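The combination of the three terms above can be sketched as a weighted sum. The per-term definitions of the privacy and attention losses, and the weights `alpha`, `beta`, `gamma`, are the paper's own and are not reproduced here; the values below are illustrative placeholders.

```python
import math


def cross_entropy(probs, label):
    """Classification loss: negative log-probability of the true class."""
    return -math.log(probs[label] + 1e-12)


def uni_loss(probs, label, privacy_loss, attention_loss,
             alpha=1.0, beta=0.5, gamma=0.5):
    """Uni-loss: weighted combination of classification, privacy, and
    attention terms. alpha, beta, gamma are illustrative hyperparameters."""
    return (alpha * cross_entropy(probs, label)
            + beta * privacy_loss
            + gamma * attention_loss)
```

Tuning the three weights trades classification accuracy against privacy protection and attention quality, which is the balance the uni-loss is designed to optimize jointly.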
3.5. Evaluation Metrics
3.6. Experimental Setup
3.6.1. Testbed for Computer Vision Tasks
3.6.2. Testbed for Language Tasks
3.6.3. Baseline
3.6.4. Hardware and Software Platform Configuration and Parameter Settings
4. Results and Discussion
4.1. Results on Computer Vision Tasks
4.2. Results on Language Processing Tasks
4.3. Results on Throughput
4.4. Ablation Study on Attention Mechanism
4.5. Ablation Study on Loss Function
4.6. Future Work
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
Model | Proposed Method | FHE | FL | DP | DP + FL |
---|---|---|---|---|---|
FasterRCNN | 0.82 | 0.77 | 0.74 | 0.73 | 0.71 |
SSD | 0.84 | 0.78 | 0.76 | 0.74 | 0.72 |
YOLO | 0.87 | 0.81 | 0.79 | 0.77 | 0.75 |
EfficientDet | 0.90 | 0.83 | 0.81 | 0.79 | 0.77 |
Model | Proposed Method | FHE | FL | DP | DP + FL |
---|---|---|---|---|---|
FasterRCNN | 0.79 | 0.73 | 0.70 | 0.69 | 0.67 |
SSD | 0.81 | 0.75 | 0.72 | 0.71 | 0.69 |
YOLO | 0.85 | 0.78 | 0.76 | 0.74 | 0.72 |
EfficientDet | 0.88 | 0.80 | 0.78 | 0.76 | 0.74 |
Model | Proposed Method | FHE | FL | DP | DP + FL |
---|---|---|---|---|---|
FasterRCNN | 0.80 | 0.75 | 0.72 | 0.71 | 0.69 |
SSD | 0.83 | 0.76 | 0.75 | 0.73 | 0.71 |
YOLO | 0.86 | 0.80 | 0.78 | 0.76 | 0.74 |
EfficientDet | 0.89 | 0.81 | 0.80 | 0.78 | 0.76 |
Model | Our Method | FHE | FL | DP | DP + FL |
---|---|---|---|---|---|
Transformer | 0.81 | 0.75 | 0.73 | 0.72 | 0.70 |
BERT | 0.84 | 0.78 | 0.76 | 0.74 | 0.72 |
CLIP | 0.86 | 0.80 | 0.78 | 0.76 | 0.74 |
BLIP | 0.88 | 0.81 | 0.79 | 0.77 | 0.75 |
BLIP2 | 0.90 | 0.82 | 0.80 | 0.79 | 0.77 |
Model | Our Method | FHE | FL | DP | DP + FL |
---|---|---|---|---|---|
Transformer | 0.76 | 0.70 | 0.68 | 0.67 | 0.64 |
BERT | 0.78 | 0.72 | 0.70 | 0.69 | 0.66 |
CLIP | 0.81 | 0.74 | 0.72 | 0.70 | 0.68 |
BLIP | 0.84 | 0.76 | 0.74 | 0.72 | 0.70 |
BLIP2 | 0.87 | 0.78 | 0.75 | 0.74 | 0.71 |
Model | Our Method | FHE | FL | DP | DP + FL |
---|---|---|---|---|---|
Transformer | 0.78 | 0.72 | 0.70 | 0.69 | 0.66 |
BERT | 0.80 | 0.74 | 0.72 | 0.71 | 0.68 |
CLIP | 0.83 | 0.77 | 0.74 | 0.73 | 0.70 |
BLIP | 0.85 | 0.78 | 0.76 | 0.74 | 0.72 |
BLIP2 | 0.88 | 0.80 | 0.77 | 0.76 | 0.73 |
Baseline (No Privacy Preserving) | Our Method | FHE | FL | DP | DP + FL |
---|---|---|---|---|---|
1 | 0.93 | 0.02 | 0.83 | 0.90 | 0.71 |
Model | User Attention | Multi-Head Attention | Self-Attention |
---|---|---|---|
FasterRCNN | 0.82 | 0.76 | 0.70 |
SSD | 0.84 | 0.78 | 0.72 |
YOLO | 0.87 | 0.80 | 0.74 |
EfficientDet | 0.90 | 0.84 | 0.82 |
Transformer | 0.81 | 0.77 | 0.72 |
BERT | 0.84 | 0.79 | 0.74 |
CLIP | 0.86 | 0.81 | 0.76 |
BLIP | 0.88 | 0.83 | 0.78 |
BLIP2 | 0.90 | 0.85 | 0.80 |
Model | User Attention | Multi-Head Attention | Self-Attention |
---|---|---|---|
FasterRCNN | 0.79 | 0.73 | 0.66 |
SSD | 0.81 | 0.75 | 0.70 |
YOLO | 0.85 | 0.78 | 0.72 |
EfficientDet | 0.88 | 0.81 | 0.79 |
Transformer | 0.76 | 0.75 | 0.70 |
BERT | 0.78 | 0.77 | 0.72 |
CLIP | 0.81 | 0.79 | 0.73 |
BLIP | 0.84 | 0.81 | 0.75 |
BLIP2 | 0.87 | 0.83 | 0.78 |
Model | User Attention | Multi-Head Attention | Self-Attention |
---|---|---|---|
FasterRCNN | 0.80 | 0.74 | 0.68 |
SSD | 0.83 | 0.76 | 0.71 |
YOLO | 0.86 | 0.79 | 0.73 |
EfficientDet | 0.89 | 0.82 | 0.80 |
Transformer | 0.78 | 0.76 | 0.71 |
BERT | 0.80 | 0.78 | 0.71 |
CLIP | 0.83 | 0.80 | 0.75 |
BLIP | 0.85 | 0.82 | 0.76 |
BLIP2 | 0.88 | 0.84 | 0.79 |
Model | Uni-Loss | Cross-Entropy Loss | MSE |
---|---|---|---|
FasterRCNN | 0.82 | 0.78 | 0.73 |
SSD | 0.84 | 0.80 | 0.75 |
YOLO | 0.87 | 0.83 | 0.78 |
EfficientDet | 0.90 | 0.85 | 0.82 |
Transformer | 0.81 | 0.78 | 0.74 |
BERT | 0.84 | 0.81 | 0.76 |
CLIP | 0.86 | 0.84 | 0.79 |
BLIP | 0.88 | 0.86 | 0.82 |
BLIP2 | 0.90 | 0.87 | 0.84 |
Model | Uni-Loss | Cross-Entropy Loss | MSE |
---|---|---|---|
FasterRCNN | 0.79 | 0.76 | 0.71 |
SSD | 0.81 | 0.78 | 0.74 |
YOLO | 0.85 | 0.82 | 0.77 |
EfficientDet | 0.88 | 0.84 | 0.80 |
Transformer | 0.76 | 0.73 | 0.69 |
BERT | 0.78 | 0.76 | 0.73 |
CLIP | 0.82 | 0.79 | 0.76 |
BLIP | 0.86 | 0.82 | 0.79 |
BLIP2 | 0.89 | 0.84 | 0.81 |
Model | User Attention | Multi-Head Attention | Self-Attention |
---|---|---|---|
FasterRCNN | 0.80 | 0.77 | 0.72 |
SSD | 0.83 | 0.79 | 0.74 |
YOLO | 0.86 | 0.82 | 0.77 |
EfficientDet | 0.89 | 0.85 | 0.81 |
Transformer | 0.78 | 0.75 | 0.71 |
BERT | 0.80 | 0.78 | 0.75 |
CLIP | 0.83 | 0.81 | 0.77 |
BLIP | 0.87 | 0.84 | 0.80 |
BLIP2 | 0.89 | 0.85 | 0.83 |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Zhou, S.; Zhou, Z.; Wang, C.; Liang, Y.; Wang, L.; Zhang, J.; Zhang, J.; Lv, C. A User-Centered Framework for Data Privacy Protection Using Large Language Models and Attention Mechanisms. Appl. Sci. 2024, 14, 6824. https://doi.org/10.3390/app14156824