SPIRIT: Structural Entropy Guided Prefix Tuning for Hierarchical Text Classification
Abstract
1. Introduction
- We propose a structural entropy guided prefix tuning method named SPIRIT for hierarchical text classification. To the best of our knowledge, this is the first work to introduce prefix tuning to HTC.
- To construct hierarchy-aware prefixes, we decode the essential structure of label hierarchies with a structural entropy minimization algorithm, providing abstract structural information to be encoded into the prefix.
- Given the essential structure of label hierarchies, a structural prefix network is proposed to construct initial prefix embeddings for prompting all hidden layers of the LM. Further, a depth-wise reparameterization strategy is devised to adapt the structural prefixes to the LM (an illustrative sketch of such layer-wise prefix construction follows this list).
- Experiments on four datasets demonstrate the effectiveness of SPIRIT. For reproducibility, the source code is available at https://github.com/Rooooyy/SPIRIT (accessed on 11 January 2025).
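As a hedged sketch of the idea (not the authors' released implementation), the snippet below shows how a structural prefix network could map learnable embeddings of coding-tree nodes through a bottleneck MLP into per-layer key/value prefixes, in the spirit of prefix tuning with reparameterization. The class name `StructuralPrefixNetwork`, the variable names, and the returned tensor layout are illustrative assumptions only.

```python
# A hedged sketch (NOT the authors' released implementation) of a structural
# prefix network: learnable embeddings for coding-tree nodes are mapped through
# a bottleneck MLP to per-layer key/value prefixes. Names and shapes are
# illustrative assumptions.
import torch
import torch.nn as nn


class StructuralPrefixNetwork(nn.Module):
    """Builds one (key, value) prefix pair per Transformer layer from node embeddings."""

    def __init__(self, num_nodes: int, hidden: int, n_layers: int, n_heads: int):
        super().__init__()
        assert hidden % n_heads == 0
        self.n_layers, self.n_heads = n_layers, n_heads
        self.head_dim = hidden // n_heads
        self.prefix_len = num_nodes                      # one prefix token per coding-tree node
        self.node_emb = nn.Embedding(num_nodes, hidden)  # would be initialized from hierarchy features in practice
        # Reparameterization MLP: bottleneck, then expand to keys and values of all layers.
        self.reparam = nn.Sequential(
            nn.Linear(hidden, hidden),
            nn.Tanh(),
            nn.Linear(hidden, n_layers * 2 * hidden),
        )

    def forward(self, batch_size: int):
        ids = torch.arange(self.prefix_len)
        kv = self.reparam(self.node_emb(ids))            # (prefix_len, n_layers * 2 * hidden)
        kv = kv.view(self.prefix_len, self.n_layers, 2, self.n_heads, self.head_dim)
        kv = kv.permute(1, 2, 3, 0, 4)                   # (n_layers, 2, n_heads, prefix_len, head_dim)
        kv = kv.unsqueeze(2).expand(-1, -1, batch_size, -1, -1, -1)
        # One (key, value) pair per layer, each of shape (batch, n_heads, prefix_len, head_dim),
        # i.e., the layout used by past_key_values-style prefix injection (cf. P-tuning v2 [11]).
        return [(layer[0], layer[1]) for layer in kv]


prefixes = StructuralPrefixNetwork(num_nodes=16, hidden=768, n_layers=12, n_heads=12)(batch_size=2)
print(len(prefixes), prefixes[0][0].shape)  # 12 layers; each key is torch.Size([2, 12, 16, 64])
```

In such a design, only the prefix parameters are trained while the backbone LM stays frozen, which is the usual motivation for prefix tuning [9,11].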
2. Preliminary
3. Methodology
3.1. Input Prompt
3.2. Structural Entropy Guided Prefix
Algorithm 1: Coding Tree Construction
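Algorithm 1 itself is specified in the paper; as a hedged, simplified illustration of the underlying principle (structural entropy minimization over coding trees, following Li and Pan [12]), the toy sketch below builds a two-level coding tree by greedily merging blocks as long as the structural entropy decreases. The edge-list representation, the exhaustive pair search, and the function names are simplifications for demonstration, not the paper's algorithm.

```python
# Toy sketch (not the paper's Algorithm 1): two-level structural entropy and a
# greedy merge loop, following the definitions of Li and Pan [12].
from itertools import combinations
from math import log2


def two_level_structural_entropy(edges, partition):
    """Structural entropy of G under a two-level coding tree whose leaves are
    grouped by `partition` (a list of vertex sets)."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    vol_g = sum(deg.values())                      # vol(G) = 2|E|
    block_of = {v: i for i, block in enumerate(partition) for v in block}
    h = 0.0
    for i, block in enumerate(partition):
        vol_b = sum(deg[v] for v in block)         # volume of the block node
        g_b = sum(1 for u, v in edges if (block_of[u] == i) != (block_of[v] == i))
        if vol_b > 0:
            h -= g_b / vol_g * log2(vol_b / vol_g)                               # block-node cost
            h -= sum(deg[v] / vol_g * log2(deg[v] / vol_b) for v in block)       # leaf costs
    return h


def greedy_merge(edges, nodes):
    """Start from singleton blocks and merge the pair that lowers entropy most."""
    partition = [{v} for v in nodes]
    while True:
        base = two_level_structural_entropy(edges, partition)
        best = None
        for a, b in combinations(range(len(partition)), 2):
            merged = [blk for k, blk in enumerate(partition) if k not in (a, b)]
            merged.append(partition[a] | partition[b])
            gain = base - two_level_structural_entropy(edges, merged)
            if gain > 1e-12 and (best is None or gain > best[0]):
                best = (gain, merged)
        if best is None:
            return partition
        partition = best[1]


# Two triangles joined by one edge: the greedy merge recovers the two communities.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
print(greedy_merge(edges, nodes=range(6)))
```

A full coding tree of bounded height K would require additional merge/combine operators, but the entropy-decreasing criterion illustrated here is the same.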
3.3. Model Training and Prediction
4. Experimental Evaluation
4.1. Results and Analysis
4.2. Ablation Studies
4.3. Evaluation on Path Consistency
4.4. Comparison with Large Language Models
4.5. Qualitative Analysis
5. Related Work
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
HTC | Hierarchical Text Classification |
NLP | Natural Language Processing |
GNN | Graph Neural Network |
MLP | Multi-Layer Perceptron |
LM | Language Model |
Appendix A. More Details
Appendix A.1. A Notation List for Structural Entropy
Notation | Explanation |
---|---|
G = (V, E) | A graph G with node set V and edge set E. |
T | A coding tree of G. |
α | A non-root node on the coding tree T. |
T_α | A subset of V, the composition of which depends on α. |
g_α | The number of edges from T_α to V \ T_α. |
vol(α) | The volume of α, i.e., the total degree of nodes in T_α. |
α^- | The parent node of α on the coding tree T. |
α_i | The i-th child node of α on the coding tree T. |
λ | The root node of coding tree T. |
H^T(G) | The structural entropy of G given coding tree T. |
H^K(G) | The K-dimensional structural entropy of G when the maximum height of T is K. |
P^k | A partition of V in the k-th layer of coding tree T. |
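Using this notation, the standard definitions from Li and Pan [12] can be written compactly (reproduced here for reference, with λ the root of T):

```latex
% Structural entropy under a coding tree T with root \lambda (Li and Pan [12]):
H^{T}(G) \;=\; -\sum_{\substack{\alpha \in T \\ \alpha \neq \lambda}}
  \frac{g_{\alpha}}{\operatorname{vol}(G)}\,
  \log_{2}\frac{\operatorname{vol}(\alpha)}{\operatorname{vol}(\alpha^{-})},
\qquad
H^{K}(G) \;=\; \min_{T:\ \operatorname{height}(T)\le K} H^{T}(G).
```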
Appendix A.2. A Case Study of Algorithm 1
References
- Zhou, J.; Ma, C.; Long, D.; Xu, G.; Ding, N.; Zhang, H.; Xie, P.; Liu, G. Hierarchy-Aware Global Model for Hierarchical Text Classification. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, 5–10 July 2020; Jurafsky, D., Chai, J., Schluter, N., Tetreault, J.R., Eds.; Association for Computational Linguistics: Stroudsburg, PA, USA, 2020; pp. 1106–1117. [Google Scholar] [CrossRef]
- Chen, H.; Ma, Q.; Lin, Z.; Yan, J. Hierarchy-aware Label Semantics Matching Network for Hierarchical Text Classification. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), ACL/IJCNLP 2021, Virtual Event, 1–6 August 2021; Zong, C., Xia, F., Li, W., Navigli, R., Eds.; Association for Computational Linguistics: Stroudsburg, PA, USA, 2021; pp. 4370–4379. [Google Scholar] [CrossRef]
- Deng, Z.; Peng, H.; He, D.; Li, J.; Yu, P.S. HTCInfoMax: A Global Model for Hierarchical Text Classification via Information Maximization. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, 6–11 June 2021; Toutanova, K., Rumshisky, A., Zettlemoyer, L., Hakkani-Tür, D., Beltagy, I., Bethard, S., Cotterell, R., Chakraborty, T., Zhou, Y., Eds.; Association for Computational Linguistics: Stroudsburg, PA, USA, 2021; pp. 3259–3265. [Google Scholar] [CrossRef]
- Wang, Z.; Wang, P.; Huang, L.; Sun, X.; Wang, H. Incorporating Hierarchy into Text Encoder: A Contrastive Learning Approach for Hierarchical Text Classification. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, Dublin, Ireland, 22–27 May 2022; Muresan, S., Nakov, P., Villavicencio, A., Eds.; Association for Computational Linguistics: Stroudsburg, PA, USA, 2022; pp. 7109–7119. [Google Scholar]
- U, S.C.L.; He, J.; Gutiérrez-Basulto, V.; Pan, J.Z. Instances and Labels: Hierarchy-aware Joint Supervised Contrastive Learning for Hierarchical Multi-Label Text Classification. In Proceedings of the Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, 6–10 December 2023; Bouamor, H., Pino, J., Bali, K., Eds.; Association for Computational Linguistics: Stroudsburg, PA, USA, 2023; pp. 8858–8875. [Google Scholar] [CrossRef]
- Wang, Z.; Wang, P.; Liu, T.; Lin, B.; Cao, Y.; Sui, Z.; Wang, H. HPT: Hierarchy-aware Prompt Tuning for Hierarchical Text Classification. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022, Abu Dhabi, United Arab Emirates, 7–11 December 2022; Goldberg, Y., Kozareva, Z., Zhang, Y., Eds.; Association for Computational Linguistics: Stroudsburg, PA, USA, 2022; pp. 3740–3751. [Google Scholar] [CrossRef]
- Jiang, T.; Wang, D.; Sun, L.; Chen, Z.; Zhuang, F.; Yang, Q. Exploiting Global and Local Hierarchies for Hierarchical Text Classification. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022, Abu Dhabi, United Arab Emirates, 7–11 December 2022; Goldberg, Y., Kozareva, Z., Zhang, Y., Eds.; Association for Computational Linguistics: Stroudsburg, PA, USA, 2022; pp. 4030–4039. [Google Scholar] [CrossRef]
- Ji, K.; Lian, Y.; Gao, J.; Wang, B. Hierarchical Verbalizer for Few-Shot Hierarchical Text Classification. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Toronto, ON, Canada, 9–14 July 2023; pp. 2918–2933. [Google Scholar] [CrossRef]
- Li, X.L.; Liang, P. Prefix-tuning: Optimizing continuous prompts for generation. arXiv 2021, arXiv:2101.00190. [Google Scholar]
- Qin, G.; Eisner, J. Learning How to Ask: Querying LMs with Mixtures of Soft Prompts. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, 6–11 June 2021; Toutanova, K., Rumshisky, A., Zettlemoyer, L., Hakkani-Tür, D., Beltagy, I., Bethard, S., Cotterell, R., Chakraborty, T., Zhou, Y., Eds.; Association for Computational Linguistics: Stroudsburg, PA, USA, 2021; pp. 5203–5212. [Google Scholar] [CrossRef]
- Liu, X.; Ji, K.; Fu, Y.; Tam, W.L.; Du, Z.; Yang, Z.; Tang, J. P-tuning v2: Prompt tuning can be comparable to fine-tuning universally across scales and tasks. arXiv 2021, arXiv:2110.07602. [Google Scholar]
- Li, A.; Pan, Y. Structural Information and Dynamical Complexity of Networks. IEEE Trans. Inf. Theory 2016, 62, 3290–3339. [Google Scholar] [CrossRef]
- Velickovic, P.; Cucurull, G.; Casanova, A.; Romero, A.; Liò, P.; Bengio, Y. Graph Attention Networks. In Proceedings of the 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, 30 April–3 May 2018. [Google Scholar]
- Zhu, H.; Zhang, C.; Huang, J.; Wu, J.; Xu, K. HiTIN: Hierarchy-aware Tree Isomorphism Network for Hierarchical Text Classification. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023, Toronto, ON, Canada, 9–14 July 2023; Rogers, A., Boyd-Graber, J.L., Okazaki, N., Eds.; Association for Computational Linguistics: Stroudsburg, PA, USA, 2023; pp. 7809–7821. [Google Scholar] [CrossRef]
- Su, J.; Zhu, M.; Murtadha, A.; Pan, S.; Wen, B.; Liu, Y. ZLPR: A Novel Loss for Multi-label Classification. arXiv 2022. [Google Scholar] [CrossRef]
- Devlin, J.; Chang, M.; Lee, K.; Toutanova, K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, 2–7 June 2019; Volume 1 (Long and Short Papers). Burstein, J., Doran, C., Solorio, T., Eds.; Association for Computational Linguistics: Stroudsburg, PA, USA, 2019; pp. 4171–4186. [Google Scholar] [CrossRef]
- Kowsari, K.; Brown, D.E.; Heidarysafa, M.; Meimandi, K.; Gerber, M.S.; Barnes, L.E. HDLTex: Hierarchical Deep Learning for Text Classification. In Proceedings of the 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), Cancun, Mexico, 18–21 December 2017; pp. 364–371. [Google Scholar]
- Lewis, D.D.; Yang, Y.; Rose, T.G.; Li, F. RCV1: A New Benchmark Collection for Text Categorization Research. J. Mach. Learn. Res. 2004, 5, 361–397. [Google Scholar]
- Sandhaus, E. The New York Times Annotated Corpus; Linguistic Data Consortium: Philadelphia, PA, USA, 2008. [Google Scholar] [CrossRef]
- Gopal, S.; Yang, Y. Recursive regularization for large-scale classification with hierarchical and graphical dependencies. In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Chicago, IL, USA, 11–14 August 2013. [Google Scholar]
- Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. In Proceedings of the 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, 7–9 May 2015; Conference Track Proceedings. Bengio, Y., LeCun, Y., Eds.; ICLR: Appleton, WI, USA, 2015. [Google Scholar]
- Gu, Y.; Han, X.; Liu, Z.; Huang, M. Ppt: Pre-trained prompt tuning for few-shot learning. arXiv 2021, arXiv:2109.04332. [Google Scholar]
- Yu, C.; Shen, Y.; Mao, Y. Constrained sequence-to-tree generation for hierarchical text classification. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, Madrid, Spain, 11–15 July 2022; pp. 1865–1869. [Google Scholar]
- Brown, T.B.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; et al. Language Models are Few-Shot Learners. In Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, Virtual, 6–12 December 2020. [Google Scholar]
- Achiam, J.; Adler, S.; Agarwal, S.; Ahmad, L.; Akkaya, I.; Aleman, F.L.; Almeida, D.; Altenschmidt, J.; Altman, S.; Anadkat, S.; et al. Gpt-4 technical report. arXiv 2023, arXiv:2303.08774. [Google Scholar]
- Shimura, K.; Li, J.; Fukumoto, F. HFT-CNN: Learning Hierarchical Category Structure for Multi-label Short Text Categorization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, 31 October–4 November 2018; Riloff, E., Chiang, D., Hockenmaier, J., Tsujii, J., Eds.; Association for Computational Linguistics: Stroudsburg, PA, USA, 2018; pp. 811–816. [Google Scholar] [CrossRef]
- Banerjee, S.; Akkaya, C.; Perez-Sorrosal, F.; Tsioutsiouliklis, K. Hierarchical Transfer Learning for Multi-label Text Classification. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, 28 July–2 August 2019; Volume 1: Long Papers. Korhonen, A., Traum, D.R., Màrquez, L., Eds.; Association for Computational Linguistics: Stroudsburg, PA, USA, 2019; pp. 6295–6300. [Google Scholar] [CrossRef]
- Huang, W.; Chen, E.; Liu, Q.; Chen, Y.; Huang, Z.; Liu, Y.; Zhao, Z.; Zhang, D.; Wang, S. Hierarchical Multi-label Text Classification: An Attention-based Recurrent Network Approach. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, Beijing, China, 3–7 November 2019. [Google Scholar]
- Aly, R.; Remus, S.; Biemann, C. Hierarchical Multi-label Classification of Text with Capsule Networks. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, 28 July–2 August 2019; Volume 2: Student Research Workshop. Alva-Manchego, F., Choi, E., Khashabi, D., Eds.; Association for Computational Linguistics: Stroudsburg, PA, USA, 2019; pp. 323–330. [Google Scholar] [CrossRef]
- Mao, Y.; Tian, J.; Han, J.; Ren, X. Hierarchical Text Classification with Reinforced Label Assignment. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, 3–7 November 2019; Inui, K., Jiang, J., Ng, V., Wan, X., Eds.; Association for Computational Linguistics: Stroudsburg, PA, USA, 2019; pp. 445–455. [Google Scholar] [CrossRef]
- Wu, J.; Xiong, W.; Wang, W.Y. Learning to Learn and Predict: A Meta-Learning Approach for Multi-Label Classification. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, 3–7 November 2019; Inui, K., Jiang, J., Ng, V., Wan, X., Eds.; Association for Computational Linguistics: Stroudsburg, PA, USA, 2019; pp. 4353–4363. [Google Scholar] [CrossRef]
- Rojas, K.R.; Bustamante, G.; Oncevay, A.; Cabezudo, M.A.S. Efficient Strategies for Hierarchical Text Classification: External Knowledge and Auxiliary Tasks. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, 5–10 July 2020; Jurafsky, D., Chai, J., Schluter, N., Tetreault, J.R., Eds.; Association for Computational Linguistics: Stroudsburg, PA, USA, 2020; pp. 2252–2257. [Google Scholar] [CrossRef]
- Lester, B.; Al-Rfou, R.; Constant, N. The power of scale for parameter-efficient prompt tuning. arXiv 2021, arXiv:2104.08691. [Google Scholar]
- Zhong, Z.; Friedman, D.; Chen, D. Factual probing is [mask]: Learning vs. learning to recall. arXiv 2021, arXiv:2104.05240. [Google Scholar]
- Hambardzumyan, K.; Khachatrian, H.; May, J. Warp: Word-level adversarial reprogramming. arXiv 2021, arXiv:2101.00121. [Google Scholar]
- Liu, X.; Zheng, Y.; Du, Z.; Ding, M.; Qian, Y.; Yang, Z.; Tang, J. GPT understands, too. AI Open 2024, 5, 208–215. [Google Scholar] [CrossRef]
- Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423. [Google Scholar] [CrossRef]
- Liu, Y.; Liu, J.; Zhang, Z.; Zhu, L.; Li, A. REM: From Structural Entropy to Community Structure Deception. In Proceedings of the Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, Vancouver, BC, Canada, 8–14 December 2019; pp. 12918–12928. [Google Scholar]
- Wu, J.; Li, S.; Li, J.; Pan, Y.; Xu, K. A Simple yet Effective Method for Graph Classification. In Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, IJCAI 2022, Vienna, Austria, 23–29 July 2022; pp. 3580–3586. [Google Scholar] [CrossRef]
- Zhang, C.; Zhu, H.; Peng, X.; Wu, J.; Xu, K. Hierarchical Information Matters: Text Classification via Tree Based Graph Neural Network. In Proceedings of the 29th International Conference on Computational Linguistics, COLING 2022, Gyeongju, Republic of Korea, 12–17 October 2022; Calzolari, N., Huang, C., Kim, H., Pustejovsky, J., Wanner, L., Choi, K., Ryu, P., Chen, H., Donatelli, L., Ji, H., et al., Eds.; International Committee on Computational Linguistics: New York, NY, USA, 2022; pp. 950–959. [Google Scholar]
- Wu, J.; Chen, X.; Xu, K.; Li, S. Structural entropy guided graph hierarchical pooling. In Proceedings of the International Conference on Machine Learning, PMLR, Baltimore, MD, USA, 17–23 July 2022; pp. 24017–24030. [Google Scholar]
- Yang, Z.; Zhang, G.; Wu, J.; Yang, J.; Sheng, Q.Z.; Peng, H.; Li, A.; Xue, S.; Su, J. Minimum Entropy Principle Guided Graph Neural Networks. In Proceedings of the Sixteenth ACM International Conference on Web Search and Data Mining, WSDM 2023, Singapore, 27 February–3 March 2023; Chua, T., Lauw, H.W., Si, L., Terzi, E., Tsaparas, P., Eds.; ACM: New York, NY, USA, 2023; pp. 114–122. [Google Scholar] [CrossRef]
- Wu, J.; Chen, X.; Shi, B.; Li, S.; Xu, K. SEGA: Structural entropy guided anchor view for graph contrastive learning. In Proceedings of the International Conference on Machine Learning, PMLR, Honolulu, HI, USA, 23–29 July 2023. [Google Scholar]
- Wang, Y.; Wang, Y.; Zhang, Z.; Yang, S.; Zhao, K.; Liu, J. USER: Unsupervised Structural Entropy-Based Robust Graph Neural Network. In Proceedings of the Thirty-Seventh AAAI Conference on Artificial Intelligence, AAAI 2023, Thirty-Fifth Conference on Innovative Applications of Artificial Intelligence, IAAI 2023, Thirteenth Symposium on Educational Advances in Artificial Intelligence, EAAI 2023, Washington, DC, USA, 7–14 February 2023; Williams, B., Chen, Y., Neville, J., Eds.; AAAI Press: Washington, DC, USA, 2023; pp. 10235–10243. [Google Scholar]
- Zou, D.; Peng, H.; Huang, X.; Yang, R.; Li, J.; Wu, J.; Liu, C.; Yu, P.S. Se-gsl: A general and effective graph structure learning framework through structural entropy optimization. In Proceedings of the ACM Web Conference 2023, Austin, TX, USA, 30 April–4 May 2023; pp. 499–510. [Google Scholar]
Dataset | # Labels | Avg. # Labels per Sample | Depth | # Train | # Dev | # Test
---|---|---|---|---|---|---
WOS | 141 | 2 | 2 | 30,070 | 7518 | 9397 |
RCV1-v2 | 103 | 3.24 | 4 | 20,833 | 2316 | 781,265 |
NYTimes | 166 | 7.60 | 8 | 23,345 | 5834 | 7292 |
BGC | 146 | 3.01 | 4 | 58,715 | 14,785 | 18,394 |
Models | WOS Micro-F1 | WOS Macro-F1 | RCV1-v2 Micro-F1 | RCV1-v2 Macro-F1 | NYTimes Micro-F1 | NYTimes Macro-F1 | BGC Micro-F1 | BGC Macro-F1
---|---|---|---|---|---|---|---|---
Supervised Fine-tuned Methods | ||||||||
BERT [16] † | 85.63 | 79.07 | 85.65 | 67.02 | 78.24 | 65.62 | 78.84 | 61.19 |
HiAGM(BERT) [1] † | 86.04 | 80.19 | 85.58 | 67.93 | 78.64 | 66.76 | 79.48 | 62.84 |
HTCInfoMax(BERT) [3] † | 86.30 | 79.97 | 85.53 | 67.09 | 78.75 | 67.31 | 79.16 | 62.94 |
HiMatch(BERT) [2] † | 86.70 | 81.06 | 86.33 | 68.66 | - | - | 78.89 | 63.19 |
HiTIN(BERT) [14] | 87.19 | 81.51 | 86.71 | 69.95 | 79.65 | 69.31 | - | - |
Supervised Contrastive Learning Methods | ||||||||
HGCLR [4] | 87.11 | 81.20 | 86.49 | 68.31 | 78.86 | 67.96 | 79.22 | 64.04 |
HJCL [5] | - | - | 87.04 | 70.49 | 80.52 | 70.02 | 81.30 | 66.77 |
Supervised Prompt Tuning Methods | ||||||||
HPT [6] | 86.94 | 81.46 | 87.21 | 69.32 | 80.35 | 70.28 | 80.27 | 67.80 |
HBGL [7] | 87.36 | 82.00 | 87.23 | 70.17 | 80.47 | 70.19 | - | - |
SPIRIT (Ours) | 87.44 | 82.04 | 87.42 | 70.29 | 80.65 | 70.73 | 81.88 | 68.60 |
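For context, the Micro-F1 and Macro-F1 columns follow the standard multi-label formulation; the snippet below is a generic illustration of how such scores are conventionally computed (e.g., with scikit-learn), not the authors' evaluation script, and the toy label matrices are invented for demonstration.

```python
# Generic illustration of Micro-F1 / Macro-F1 for multi-label HTC (not the
# authors' evaluation code). Rows are samples, columns are labels at any level.
import numpy as np
from sklearn.metrics import f1_score

y_true = np.array([[1, 1, 0],
                   [1, 0, 1]])
y_pred = np.array([[1, 1, 0],
                   [1, 1, 1]])

micro = f1_score(y_true, y_pred, average="micro")  # pools TP/FP/FN over all labels
macro = f1_score(y_true, y_pred, average="macro")  # unweighted mean of per-label F1
print(f"Micro-F1 = {micro:.4f}, Macro-F1 = {macro:.4f}")
```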
Ablation Models | WOS Micro-F1 | WOS Macro-F1 | NYTimes Micro-F1 | NYTimes Macro-F1
---|---|---|---|---
SPIRIT | 87.44 | 82.04 | 80.65 | 70.73 |
r.m. SPN (Section 3.2) | 87.19 | 81.84 | 80.27 | 70.43 |
r.p. | 87.04 | 81.47 | 80.40 | 70.35 |
r.p. | 86.72 | 81.34 | 80.07 | 70.09 |
r.m. | 86.94 | 81.56 | 80.24 | 70.42 |
r.p. | 86.59 | 80.47 | 79.45 | 68.28 |
Models | WOS Micro-F1 | WOS Macro-F1 | RCV1-v2 Micro-F1 | RCV1-v2 Macro-F1 | NYTimes Micro-F1 | NYTimes Macro-F1 | BGC Micro-F1 | BGC Macro-F1
---|---|---|---|---|---|---|---|---
GPT-3.5-Turbo [24] | 50.30 | 26.06 | 45.43 * | 26.72 * | 37.34 | 20.41 | 52.91 | 34.94 |
+Prompt in Table 5 | 48.44 | 24.98 | 42.16 * | 23.67 * | 34.78 | 18.83 | 50.06 | 31.55 |
GPT-4o-0513 [25] | 62.29 | 51.06 | 55.91 * | 36.47 * | 41.55 | 26.60 | 63.44 | 43.03 |
+Prompt in Table 5 | 63.17 | 49.68 | 59.24 * | 33.40 * | 43.58 | 24.78 | 65.06 | 42.20 |
BERT [16] † | 85.63 | 79.07 | 85.65 | 67.02 | 78.24 | 65.62 | 78.84 | 61.19 |
SPIRIT (Ours) | 87.44 | 82.04 | 87.42 | 70.29 | 80.65 | 70.73 | 81.88 | 68.60 |
Prompt Template | You will be given a list of categories formatted as: |
| A -> B -> C … -> X |
| where A, B, C, and X are categories and '->' represents the parent-child relationship. |
| A -> B means A is a child of B. |
| B -> C means B is a child of C. |
| You should also predict A when you predict B. |
| You should also predict B when you predict C. |
| [Label Paths] |
| Classify the following text into the categories above. |
| Just predict the categories, no explanation is allowed. |
| Texts: [Input] |
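The sketch below shows one way the template above could be instantiated in code; the example label paths, the parsing of the reply, and the commented-out API call are assumptions for demonstration and may differ from the exact prompting setup used in the paper.

```python
# Illustrative assembly of the hierarchy-aware prompt from the template above.
# The label paths and any API call are assumptions, not the authors' exact setup.
def build_prompt(label_paths: list[str], text: str) -> str:
    header = (
        "You will be given a list of categories formatted as:\n"
        "A -> B -> C ... -> X\n"
        "where A, B, C, X are categories and '->' represents the parent-child relationship.\n"
        "A -> B means A is a child of B. B -> C means B is a child of C.\n"
        "You should also predict A when you predict B. "
        "You should also predict B when you predict C.\n"
    )
    paths = "\n".join(label_paths)
    task = (
        "\nClassify the following text into the categories above.\n"
        "Just predict the categories, no explanation is allowed.\n"
        f"Texts: {text}"
    )
    return header + paths + task


# Example with two hypothetical label paths (child -> parent, as in the template).
prompt = build_prompt(
    ["Machine Learning -> Computer Science", "Cardiology -> Medical"],
    "Graph neural networks for arrhythmia detection from ECG signals.",
)
print(prompt)
# The prompt could then be sent to a chat model, e.g., via the OpenAI client:
#   from openai import OpenAI
#   reply = OpenAI().chat.completions.create(
#       model="gpt-4o", messages=[{"role": "user", "content": prompt}])
```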
Methods | Labels |
---|---|
Ground Truths | |
HPT | |
SPIRIT (ones) | |
SPIRIT | |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Zhu, H.; Xia, J.; Liu, R.; Deng, B. SPIRIT: Structural Entropy Guided Prefix Tuning for Hierarchical Text Classification. Entropy 2025, 27, 128. https://doi.org/10.3390/e27020128