The Power of Context: A Novel Hybrid Context-Aware Fake News Detection Approach
Abstract
1. Introduction
1. The development of a novel hybrid deep learning framework that can effectively learn from multiple sources to detect fake news: news content, social user behaviour information, and various language aspects.
2. The evaluation of the proposed framework with two different embedding models, BERT and GloVe, to determine its efficacy.
3. Extensive experimentation on four real-world datasets to demonstrate the framework's effectiveness in detecting fake news.
4. The finding that incorporating user behavioural representations alongside content-based information leads to more accurate outcomes than existing state-of-the-art baselines.
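As a toy illustration of the multi-source setup listed above (field names here are hypothetical, not taken from the paper's implementation), one training example combines the news content, user behaviour signals, and linguistic cues into a single record:

```python
def build_record(text, user_stats, linguistic):
    """Merge content-, context-, and language-based inputs into one example.
    user_stats holds hypothetical user-behaviour counts; missing counts
    default to zero."""
    return {
        "content": text,                       # raw news text (fed to BERT/GloVe)
        "context": [                           # user-behaviour features
            user_stats.get("followers", 0),
            user_stats.get("friends", 0),
            user_stats.get("retweets", 0),
        ],
        "linguistic": linguistic,              # e.g. sentiment/morality scores
    }

example = build_record(
    "Breaking: ...",
    {"followers": 120, "retweets": 4},
    {"sentiment": -0.3},
)
```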
2. Related Work
2.1. Preliminaries
2.1.1. Global Vectors for Word Representation (GloVe)
2.1.2. Bidirectional Encoder Representations from Transformers (BERT)
2.1.3. Long Short-Term Memory (LSTM)
2.1.4. Gated Recurrent Unit (GRU)
2.1.5. Convolutional Neural Network (CNN)
3. Proposed Hybrid Model
3.1. Features
3.1.1. Content-Based Features
3.1.2. Context-Based Features
3.2. Model Architecture
4. Experiments
4.1. Datasets
4.1.1. FakeNewsNet Dataset
4.1.2. FA-KES Dataset
4.1.3. LIAR Dataset
4.2. Comparison of Fake News Detection Methods
4.2.1. Baseline Methods
1. BERT-mCNN [9]: a model consisting of multiple convolution filters that capture different granularities from text data (e.g., a news article). We use BERT as the encoding model; thus, we call this baseline BERT-mCNN.
2.
3.
4.
5.
6.
7.
8.
9. BERT-BiGRU(Att): a stacked BiGRU architecture incorporating multiple BiGRU layers operating over an identical number of time steps, followed by an attention layer. This configuration captures bidirectional dependencies, encompassing both forward and backward contexts, making it more effective than unidirectional GRU models.
10. BERT-text-sBiLSTM: this model focuses solely on textual content, disregarding user posting features and other auxiliary factors. By analysing the text alone, the model may overlook cases where certain portions of the text are true but are used to bolster false claims. Incorporating BiLSTM enables a thorough examination of the input text in both the forward and backward directions, enhancing the model's ability to discern the veracity of the content [4].
11. BERT-CNN-sBiGRU: this framework combines BERT-CNN for encoding text representations with stacked BiGRU layers for modelling additional auxiliary features. The outputs of both components are then merged, followed by a sigmoid layer.
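As a rough sketch (pure Python, no deep-learning library; shapes, weights, and names are illustrative, not the authors' implementation), the merge step shared by the hybrid baselines above reduces to: encode the text into one vector, encode the auxiliary context features into another, concatenate the two, and apply a sigmoid scoring layer:

```python
import math

def sigmoid(z):
    """Logistic function, mapping any real score to (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def fuse_and_score(text_vec, context_vec, weights, bias=0.0):
    """Concatenate the text and context representations and apply a
    sigmoid layer, mirroring the merge step of the hybrid models above.
    All inputs are plain lists of floats; weights must match the fused size."""
    fused = text_vec + context_vec          # concatenation of the two encodings
    assert len(weights) == len(fused)
    z = sum(w * x for w, x in zip(weights, fused)) + bias
    return sigmoid(z)                       # probability the article is fake

# Toy 2-dim text encoding and 1-dim context encoding with unit weights.
p = fuse_and_score([0.2, -0.1], [0.5], [1.0, 1.0, 1.0])
```

In a trained model the two vectors would come from the BERT-CNN text branch and the stacked-BiGRU context branch respectively, and the weights would be learned; here they are fixed toy values.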
4.2.2. Results and Discussion
1. The interplay between the distinct properties of metadata and news content uncovers more patterns for machine learning models to exploit, ultimately enhancing detection performance. This supports the hypothesis that combining content- and context-based cues improves detection. More details on assessing the impact of selecting useful features can be found in Section 4.5.
2. Considering solely news-content features for detecting fake news yielded subpar results, underscoring the importance of behavioural information in distinguishing fake from genuine news articles.
3. Context-aware embedding models such as BERT demonstrated markedly better performance than off-the-shelf context-independent pre-trained models such as GloVe. This highlights the advantage of semantically contextual representations over context-independent embedding methods, albeit at higher computational cost.
4.3. Prediction Performance of BERT vs. GloVe Using Solely Context-Based Features
4.4. Prediction Performance of BERT vs. GloVe Using Solely Various Content-Based Features
4.5. Assessing Impacts of Selecting Useful Features
4.6. Limitations
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Koloski, B.; Stepišnik-Perdih, T.; Robnik-Šikonja, M.; Pollak, S.; Škrlj, B. Knowledge Graph informed Fake News Classification via Heterogeneous Representation Ensembles. arXiv 2021, arXiv:2110.10457.
- Elhadad, M.K.; Li, K.F.; Gebali, F. A Novel Approach for Selecting Hybrid Features from Online News Textual Metadata for Fake News Detection. In Proceedings of the Advances on P2P, Parallel, Grid, Cloud and Internet Computing, Antwerp, Belgium, 7–9 November 2019; Barolli, L., Hellinckx, P., Natwichai, J., Eds.; Springer: Cham, Switzerland, 2020; pp. 914–925.
- Nasir, J.A.; Khan, O.S.; Varlamis, I. Fake news detection: A hybrid CNN-RNN based deep learning approach. Int. J. Inf. Manag. Data Insights 2021, 1, 100007.
- Khan, J.Y.; Khondaker, M.T.I.; Afroz, S.; Uddin, G.; Iqbal, A. A benchmark study of machine learning models for online fake news detection. Mach. Learn. Appl. 2021, 4, 100032.
- Alghamdi, J.; Lin, Y.; Luo, S. A Comparative Study of Machine Learning and Deep Learning Techniques for Fake News Detection. Information 2022, 13, 576.
- Shu, K.; Cui, L.; Wang, S.; Lee, D.; Liu, H. DEFEND: Explainable Fake News Detection. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, Anchorage, AK, USA, 4–8 August 2019; pp. 395–405.
- Pennington, J.; Socher, R.; Manning, C.D. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), 25–29 October 2014; pp. 1532–1543.
- Devlin, J.; Chang, M.W.; Lee, K.; Toutanova, K. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv 2018, arXiv:1810.04805.
- Kim, Y. Convolutional Neural Networks for Sentence Classification. arXiv 2014.
- Cho, K.; Van Merriënboer, B.; Gulcehre, C.; Bahdanau, D.; Bougares, F.; Schwenk, H.; Bengio, Y. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv 2014, arXiv:1406.1078.
- Zhou, X.; Zafarani, R. A survey of fake news: Fundamental theories, detection methods, and opportunities. ACM Comput. Surv. (CSUR) 2020, 53, 1–40.
- Zhou, X.; Jain, A.; Phoha, V.V.; Zafarani, R. Fake News Early Detection: An Interdisciplinary Study. arXiv 2019, arXiv:1904.11679.
- Mikolov, T.; Chen, K.; Corrado, G.; Dean, J. Efficient estimation of word representations in vector space. arXiv 2013, arXiv:1301.3781.
- Le, Q.; Mikolov, T. Distributed representations of sentences and documents. In Proceedings of the International Conference on Machine Learning, Beijing, China, 21–26 June 2014; pp. 1188–1196.
- Wang, W.Y. “Liar, Liar Pants on Fire”: A New Benchmark Dataset for Fake News Detection. arXiv 2017, arXiv:1705.00648.
- Kaliyar, R.K.; Goswami, A.; Narang, P. FakeBERT: Fake news detection in social media with a BERT-based deep learning approach. Multimed. Tools Appl. 2021, 80, 11765–11788.
- Alghamdi, J.; Lin, Y.; Luo, S. Modeling Fake News Detection Using BERT-CNN-BiLSTM Architecture. In Proceedings of the 2022 IEEE 5th International Conference on Multimedia Information Processing and Retrieval (MIPR), Online, 2–4 August 2022; pp. 354–357.
- Xia, H.; Wang, Y.; Zhang, J.Z.; Zheng, L.J.; Kamal, M.M.; Arya, V. COVID-19 fake news detection: A hybrid CNN-BiLSTM-AM model. Technol. Forecast. Soc. Chang. 2023, 195, 122746.
- Zivkovic, M.; Stoean, C.; Petrovic, A.; Bacanin, N.; Strumberger, I.; Zivkovic, T. A Novel Method for COVID-19 Pandemic Information Fake News Detection Based on the Arithmetic Optimization Algorithm. In Proceedings of the 2021 23rd International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC), Timisoara, Romania, 7–10 December 2021; pp. 259–266.
- Shu, K.; Zhou, X.; Wang, S.; Zafarani, R.; Liu, H. The role of user profiles for fake news detection. In Proceedings of the 2019 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, Vancouver, BC, Canada, 27–30 August 2019; pp. 436–439.
- Vosoughi, S. Automatic Detection and Verification of Rumors on Twitter. Ph.D. Thesis, Massachusetts Institute of Technology, Cambridge, MA, USA, 2015.
- Ma, J.; Gao, W.; Mitra, P.; Kwon, S.; Jansen, B.J.; Wong, K.F.; Cha, M. Detecting Rumors from Microblogs with Recurrent Neural Networks. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, New York, NY, USA, 9–15 July 2016; pp. 3818–3824.
- Chen, W.; Yeo, C.K.; Lau, C.T.; Lee, B.S. Behavior deviation: An anomaly detection view of rumor preemption. In Proceedings of the 2016 IEEE 7th Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON), Vancouver, BC, Canada, 13–15 October 2016; pp. 1–7.
- Wu, L.; Liu, H. Tracing Fake-News Footprints: Characterizing Social Media Messages by How They Propagate. In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining, New York, NY, USA, 5–9 February 2018; pp. 637–645.
- Gupta, M.; Zhao, P.; Han, J. Evaluating event credibility on twitter. In Proceedings of the 2012 SIAM International Conference on Data Mining, Anaheim, CA, USA, 26–28 April 2012; pp. 153–164.
- Gupta, A.; Lamba, H.; Kumaraguru, P.; Joshi, A. Faking Sandy: Characterizing and Identifying Fake Images on Twitter during Hurricane Sandy. In Proceedings of the 22nd International Conference on World Wide Web, Rio de Janeiro, Brazil, 13–17 May 2013; pp. 729–736.
- Qazvinian, V.; Rosengren, E.; Radev, D.R.; Mei, Q. Rumor has it: Identifying Misinformation in Microblogs. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, Edinburgh, UK, 27–31 July 2011; pp. 1589–1599.
- Zhao, Z.; Resnick, P.; Mei, Q. Enquiring Minds: Early Detection of Rumors in Social Media from Enquiry Posts. In Proceedings of the WWW ’15: 24th International World Wide Web Conference, Florence, Italy, 18–22 May 2015; pp. 1395–1405.
- Chua, A.Y.; Banerjee, S. Linguistic predictors of rumor veracity on the internet. In Proceedings of the International MultiConference of Engineers and Computer Scientists, Hong Kong, China, 16–18 March 2016; Volume 1, pp. 387–391.
- Ma, J.; Gao, W.; Wong, K.F. Detect Rumors in Microblog Posts using Propagation Structure via Kernel Learning; Association for Computational Linguistics: Vancouver, BC, Canada, 2017.
- Kwon, S.; Cha, M.; Jung, K.; Chen, W.; Wang, Y. Prominent features of rumor propagation in online social media. In Proceedings of the 2013 IEEE 13th International Conference on Data Mining, Dallas, TX, USA, 7–10 December 2013; pp. 1103–1108.
- Kwon, S.; Cha, M.; Jung, K. Rumor Detection over Varying Time Windows. PLoS ONE 2017, 12, e0168344.
- Zubiaga, A.; Liakata, M.; Procter, R. Exploiting context for rumour detection in social media. In Proceedings of the International Conference on Social Informatics, Oxford, UK, 13–15 September 2017; Springer: Cham, Switzerland, 2017; pp. 109–123.
- Qin, Y.; Wurzer, D.; Lavrenko, V.; Tang, C. Spotting rumors via novelty detection. arXiv 2016, arXiv:1611.06322.
- Shu, K.; Wang, S.; Liu, H. Exploiting tri-relationship for fake news detection. arXiv 2017, arXiv:1712.07709.
- Jin, Z.; Cao, J.; Zhang, Y.; Luo, J. News verification by exploiting conflicting social viewpoints in microblogs. In Proceedings of the AAAI Conference on Artificial Intelligence, Phoenix, AZ, USA, 14–17 February 2016; Volume 30.
- Li, Q.; Liu, X.; Fang, R.; Nourbakhsh, A.; Shah, S. User behaviors in newsworthy rumors: A case study of twitter. In Proceedings of the International AAAI Conference on Web and Social Media, Cologne, Germany, 17–20 May 2016; Volume 10.
- Li, Q.; Zhang, Q.; Si, L. eventAI at SemEval-2019 task 7: Rumor detection on social media by exploiting content, user credibility and propagation information. In Proceedings of the 13th International Workshop on Semantic Evaluation, Minneapolis, MN, USA, 6–7 June 2019; pp. 855–859.
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.; Polosukhin, I. Attention is All You Need. In Proceedings of the 31st International Conference on Neural Information Processing Systems, Red Hook, NY, USA, 4–9 December 2017; pp. 6000–6010.
- Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780.
- Tang, D.; Qin, B.; Liu, T. Document Modeling with Gated Recurrent Neural Network for Sentiment Classification. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, Lisbon, Portugal, 17–21 September 2015; pp. 1422–1432.
- Graves, A. Supervised sequence labelling. In Supervised Sequence Labelling with Recurrent Neural Networks; Springer: Berlin/Heidelberg, Germany, 2012; pp. 5–13.
- Ek, A.; Bernardy, J.P.; Chatzikyriakidis, S. How does Punctuation Affect Neural Models in Natural Language Inference. In Proceedings of the Probability and Meaning Conference (PaM 2020), Gothenburg, Sweden, 14–15 October 2020; pp. 109–116.
- Singh, V.; Dasgupta, R.; Sonagra, D.; Raman, K.; Ghosh, I. Automated fake news detection using linguistic analysis and machine learning. In Proceedings of the International Conference on Social Computing, Behavioral-Cultural Modeling, & Prediction and Behavior Representation in Modeling and Simulation (SBP-BRiMS), Washington, DC, USA, 5–8 July 2017; pp. 1–3.
- Stieglitz, S.; Dang-Xuan, L. Emotions and information diffusion in social media—Sentiment of microblogs and sharing behavior. J. Manag. Inf. Syst. 2013, 29, 217–248.
- Ferrara, E.; Yang, Z. Quantifying the effect of sentiment on information diffusion in social media. PeerJ Comput. Sci. 2015, 1, e26.
- Mohammad, S.; Turney, P. Emotions evoked by common words and phrases: Using mechanical turk to create an emotion lexicon. In Proceedings of the NAACL HLT 2010 Workshop on Computational Approaches to Analysis and Generation of Emotion in Text, Los Angeles, CA, USA, 5 June 2010; pp. 26–34.
- Guo, C.; Cao, J.; Zhang, X.; Shu, K.; Yu, M. Exploiting emotions for fake news detection on social media. arXiv 2019, arXiv:1903.01728.
- Chakraborty, A.; Paranjape, B.; Kakarla, S.; Ganguly, N. Stop clickbait: Detecting and preventing clickbaits in online news media. In Proceedings of the 2016 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), Davis, CA, USA, 18–21 August 2016; pp. 9–16.
- Ghanem, B.; Ponzetto, S.P.; Rosso, P.; Rangel, F. FakeFlow: Fake news detection by modeling the flow of affective information. arXiv 2021, arXiv:2101.09810.
- Shu, K.; Wang, S.; Liu, H. Beyond news contents: The role of social context for fake news detection. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining, Melbourne, Australia, 11–15 February 2019; pp. 312–320.
- Shu, K.; Mahudeswaran, D.; Wang, S.; Lee, D.; Liu, H. FakeNewsNet: A data repository with news content, social context, and spatiotemporal information for studying fake news on social media. Big Data 2020, 8, 171–188.
- Sadeghi, F.; Bidgoly, A.J.; Amirkhani, H. Fake news detection on social media using a natural language inference approach. Multimed. Tools Appl. 2022, 81, 33801–33821.
- Vosoughi, S.; Roy, D.; Aral, S. The spread of true and false news online. Science 2018, 359, 1146–1151.
- Wang, S.; Pang, M.S.; Pavlou, P.A. Cure or Poison? Identity Verification and the Posting of Fake News on Social Media. J. Manag. Inf. Syst. 2021, 38, 1011–1038.
Study | TF | UF | PF | Approach
---|---|---|---|---
Vosoughi et al. [21] | ✓ | ✓ | Hidden Markov Models | |
Ma et al. [22] | ✓ | RNN | ||
Chen et al. [23] | ✓ | ✓ | Anomaly detection, KNN | |
Wu et al. [24] | ✓ | LSTM-RNN | ||
Gupta et al. [25] | ✓ | ✓ | Graph-based method | |
Gupta et al. [26] | ✓ | ✓ | Graph-based method, DT | |
Qazvinian et al. [27] | ✓ | L1-regularized log-linear model | |
Zhao et al. [28] | ✓ | DT ranking method | ||
Chua et al. [29] | ✓ | Linear Regression (LR) | ||
Ma et al. [30] | ✓ | Kernel-based method | ||
Kwon et al. [31] | ✓ | ✓ | Random Forest | |
Kwon et al. [32] | ✓ | ✓ | SpikeM | |
Zubiaga et al. [33] | ✓ | ✓ | Conditional Random Fields | |
Qin et al. [34] | ✓ | SVM | ||
Shu et al. [35] | ✓ | ✓ | Neural Network | |
Jin et al. [36] | ✓ | ✓ | LDA, Graph | |
Li et al. [37] | ✓ | ✓ | SVM | |
Li et al. [38] | ✓ | ✓ | LSTM | |
Shu et al. [6] | ✓ | Hierarchical Attention Network | ||
Ours | ✓ | ✓ | ✓ | BERT-mCNN-sBiGRU |
Dataset | PolitiFact | GossipCop |
---|---|---|
No. Candidate News | 694 | 18676 |
No. True News | 356 | 14129 |
No. Fake News | 338 | 4547 |
Dataset | FA-KES | LIAR
---|---|---
No. Candidate News | 804 | 12791
No. True News | 426 | 7134
No. Fake News | 378 | 5657
Model | A | P | R | F1
---|---|---|---|---
SAF | 0.691 | 0.638 | 0.789 | 0.706
BiLSTM-BERT | 0.885 | NA | NA | NA
LNN-KG | 0.880 | 0.9011 | 0.880 | 0.8892
BERT-BiGRU(Att) | 0.9137 | 0.9722 | 0.8750 | 0.9211
BERT-text-sBiLSTM | 0.8705 | 0.8760 | 0.7750 | 0.8732
BERT-mCNN | 0.8849 | 0.9571 | 0.8375 | 0.8933
BERT-CNN-sBiGRU | 0.8921 | 0.9012 | 0.9125 | 0.9068
BERT-mCNN-sBiGRU (ours) | 0.9209 | 0.9600 | 0.9000 | 0.9290
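The reported metrics follow the standard relation F1 = 2PR/(P + R); for instance, the precision and recall of BERT-mCNN-sBiGRU in the table above (0.9600 and 0.9000) reproduce its reported F1 of 0.9290:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

f1 = f1_score(0.9600, 0.9000)   # matches the 0.9290 reported in the table
```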
Model | A | P | R | F1
---|---|---|---|---
SAF | 0.796 | 0.820 | 0.753 | 0.785
BERT-BiGRU(Att) | 0.8450 | 0.8892 | 0.9101 | 0.8996
BERT-text-sBiLSTM | 0.8389 | 0.8668 | 0.9319 | 0.8982
BERT-mCNN | 0.8086 | 0.8906 | 0.8540 | 0.8719
BERT-CNN-sBiGRU | 0.8555 | 0.8942 | 0.9193 | 0.9065
BERT-mCNN-sBiGRU (ours) | 0.8640 | 0.8731 | 0.9614 | 0.9151
Model | A | P | R | F1
---|---|---|---|---
Multinomial Naive Bayes | 0.5809 | 0.63 | 0.58 | 0.50
Hybrid CNN-RNN | 0.60 | 0.59 | 0.60 | 0.59
BERT-BiGRU(Att) | 0.5679 | 0.5890 | 0.8958 | 0.7107
BERT-text-sBiLSTM | 0.5062 | 0.5769 | 0.6250 | 0.6000
BERT-mCNN | 0.5280 | 0.5823 | 0.5169 | 0.5476
BERT-CNN-sBiGRU | 0.5404 | 0.5547 | 0.8539 | 0.6726
BERT-mCNN-sBiGRU (ours) | 0.6296 | 0.6552 | 0.7917 | 0.7170
Model | A | P | R | F1
---|---|---|---|---
Naive Bayes | 0.60 | 0.59 | 0.60 | 0.59
SVM | 0.62 | NA | NA | NA
BERT-BiGRU(Att) | 0.7411 | 0.7487 | 0.8137 | 0.7799
BERT-text-sBiLSTM | 0.6196 | 0.6391 | 0.7465 | 0.6886
BERT-mCNN | 0.6030 | 0.6087 | 0.8277 | 0.7015
BERT-CNN-sBiGRU | 0.7277 | 0.7524 | 0.7703 | 0.7612
BERT-mCNN-sBiGRU (ours) | 0.7451 | 0.7576 | 0.8053 | 0.7807
(a) Model | A | (b) Model | A
---|---|---|---
sBiGRU(Att) | 0.8561 | sBiGRU(Att) | 0.8273 |
CNN-sBiGRU | 0.8777 | CNN-sBiGRU | 0.8489 |
mCNN-sBiGRU | 0.9137 | mCNN-sBiGRU | 0.8201 |
(a) Model | A | (b) Model | A
---|---|---|---
sBiGRU(Att) | 0.8578 | sBiGRU(Att) | 0.8476 |
CNN-sBiGRU | 0.8349 | CNN-sBiGRU | 0.8121 |
mCNN-sBiGRU | 0.8560 | mCNN-sBiGRU | 0.8420 |
(a) Model | A | (b) Model | A
---|---|---|---
sBiGRU(Att) | 0.4968 | sBiGRU(Att) | 0.4596 |
CNN-sBiGRU | 0.5217 | CNN-sBiGRU | 0.4347 |
mCNN-sBiGRU | 0.5590 | mCNN-sBiGRU | 0.5465 |
(a) Model | A | (b) Model | A
---|---|---|---
sBiGRU(Att) | 0.7332 | sBiGRU(Att) | 0.7277 |
CNN-sBiGRU | 0.7205 | CNN-sBiGRU | 0.7127 |
mCNN-sBiGRU | 0.7285 | mCNN-sBiGRU | 0.7158 |
(a) Model | A | (b) Model | A
---|---|---|---
sBiGRU(Att) | 0.8993 | sBiGRU(Att) | 0.7554 |
CNN-sBiGRU | 0.8705 | CNN-sBiGRU | 0.8633 |
mCNN-sBiGRU | 0.9065 | mCNN-sBiGRU | 0.8849 |
(a) Model | A | (b) Model | A
---|---|---|---
sBiGRU(Att) | 0.8445 | sBiGRU(Att) | 0.8340 |
CNN-sBiGRU | 0.8480 | CNN-sBiGRU | 0.8252 |
mCNN-sBiGRU | 0.8509 | mCNN-sBiGRU | 0.8263 |
(a) Model | A | (b) Model | A
---|---|---|---
sBiGRU(Att) | 0.4658 | sBiGRU(Att) | 0.4783 |
CNN-sBiGRU | 0.5093 | CNN-sBiGRU | 0.4907 |
mCNN-sBiGRU | 0.5404 | mCNN-sBiGRU | 0.4410 |
(a) Model | A | (b) Model | A
---|---|---|---
sBiGRU(Att) | 0.6148 | sBiGRU(Att) | 0.5643 |
CNN-sBiGRU | 0.5643 | CNN-sBiGRU | 0.5746 |
mCNN-sBiGRU | 0.6046 | mCNN-sBiGRU | 0.5817 |
Dataset | Training Time | Inference Time |
---|---|---|
PolitiFact | 79.81 | 9.64 |
GossipCop | 1356.94 | 81.98 |
FA-KES | 153.54 | 10.65 |
LIAR | 788.57 | 20.52 |
Features | A | P | R | F1
---|---|---|---|---
+title | 0.8345 | 0.8353 | 0.8875 | 0.8606 |
+followers | 0.7698 | 0.7553 | 0.8875 | 0.8161 |
+friends | 0.7482 | 0.7473 | 0.8500 | 0.7953 |
+favorites | 0.8273 | 0.8333 | 0.8750 | 0.8537 |
+retweets | 0.8058 | 0.7849 | 0.9125 | 0.8439 |
+statuses | 0.8489 | 0.8831 | 0.8500 | 0.86625 |
+verified_count | 0.7842 | 0.8571 | 0.7500 | 0.8000 |
+title+followers | 0.8561 | 0.8750 | 0.8750 | 0.8750 |
+title+friends | 0.8129 | 0.7879 | 0.8125 | 0.8000 |
+title+favorites | 0.7914 | 0.7215 | 0.8906 | 0.7972 |
+title+retweets | 0.8201 | 0.7826 | 0.8438 | 0.8120 |
+title+statuses | 0.8417 | 0.7692 | 0.9375 | 0.8451 |
+title+verified_count | 0.8633 | 0.8689 | 0.8281 | 0.8480 |
+followers+friends | 0.7986 | 0.8600 | 0.6719 | 0.7544 |
+followers+favorites | 0.7770 | 0.7089 | 0.8750 | 0.7832 |
+followers+retweets | 0.7626 | 0.8605 | 0.5781 | 0.6916 |
+followers+statuses | 0.8129 | 0.8065 | 0.7812 | 0.7937 |
+followers+verified_count | 0.8273 | 0.7632 | 0.9062 | 0.8286 |
+friends+favorites | 0.6978 | 0.6146 | 0.9219 | 0.7375 |
+friends+retweets | 0.7122 | 0.6224 | 0.9531 | 0.7531 |
+friends+statuses | 0.7842 | 0.7361 | 0.8281 | 0.7794 |
+friends+verified_count | 0.8273 | 0.8030 | 0.8281 | 0.8154 |
+favorites+retweets | 0.7985 | 0.7432 | 0.8594 | 0.7971 |
+favorites+statuses | 0.7986 | 0.7308 | 0.8906 | 0.8028 |
+favorites+verified_count | 0.8489 | 0.9388 | 0.7188 | 0.8142 |
+retweets+statuses | 0.8345 | 0.8727 | 0.7500 | 0.8067 |
+retweets+verified_count | 0.8561 | 0.8143 | 0.8906 | 0.8507 |
+statuses+verified_count | 0.8057 | 0.7079 | 0.9844 | 0.8235 |
+title+followers+friends | 0.8417 | 0.7917 | 0.8906 | 0.8382 |
+title+followers+favorites | 0.8201 | 0.7532 | 0.9062 | 0.8227 |
+title+followers+retweets | 0.8489 | 0.8413 | 0.8281 | 0.8346 |
+title+followers+statuses | 0.8489 | 0.8644 | 0.7969 | 0.8293 |
+title+followers+verified_count | 0.8633 | 0.8082 | 0.9219 | 0.8613 |
+title+followers+friends+favorites | 0.8345 | 0.8361 | 0.7969 | 0.8160 |
+title+followers+friends+retweets | 0.8058 | 0.7229 | 0.9375 | 0.8163 |
+title+followers+friends+verified | 0.8417 | 0.8387 | 0.8125 | 0.8254 |
+followers+friends+favorites | 0.7914 | 0.7869 | 0.7500 | 0.7680 |
+favorites+friends+verified | 0.8849 | 0.8636 | 0.8906 | 0.8769 |
+followers+favorites+retweets+verified | 0.9137 | 0.9595 | 0.8875 | 0.9221 |
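Ablation grids like the one above enumerate subsets of the context features; the subset labels can be generated mechanically. A minimal sketch (feature names mirror the table; the helper itself is illustrative, not from the paper):

```python
from itertools import combinations

# Context-feature names appearing in the ablation table above.
FEATURES = ["title", "followers", "friends", "favorites",
            "retweets", "statuses", "verified_count"]

def feature_subsets(max_size):
    """Yield all non-empty feature combinations up to max_size,
    formatted as '+a+b' labels like the table rows."""
    for k in range(1, max_size + 1):
        for combo in combinations(FEATURES, k):
            yield "+" + "+".join(combo)

singles = [s for s in feature_subsets(2) if s.count("+") == 1]
pairs = [s for s in feature_subsets(2) if s.count("+") == 2]
# 7 singletons and C(7, 2) = 21 pairs
```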
Features | A | P | R | F1
---|---|---|---|---
+title | 0.8440 | 0.8425 | 0.9782 | 0.9053 |
+followers | 0.8124 | 0.8482 | 0.9182 | 0.8818 |
+friends | 0.8472 | 0.8591 | 0.9565 | 0.9052 |
+favorites | 0.8314 | 0.8594 | 0.9312 | 0.8939 |
+retweets | 0.8279 | 0.8330 | 0.9684 | 0.8956 |
+statuses | 0.8381 | 0.8321 | 0.9867 | 0.9028 |
+verified_count | 0.8504 | 0.8605 | 0.9593 | 0.9072 |
+title+followers | 0.8191 | 0.8616 | 0.9087 | 0.8845 |
+title+friends | 0.8431 | 0.8528 | 0.9600 | 0.9032 |
+title+favorites | 0.8220 | 0.8900 | 0.8747 | 0.8823 |
+title+retweets | 0.8522 | 0.8740 | 0.9421 | 0.9068 |
+title+statuses | 0.8456 | 0.8606 | 0.9516 | 0.9038 |
+title+verified_count | 0.8375 | 0.8920 | 0.8954 | 0.8937 |
+title+followers+friends+retweets | 0.8466 | 0.8530 | 0.9653 | 0.9056 |
+title+followers+friends+statuses | 0.8308 | 0.8431 | 0.9561 | 0.8961 |
+title+followers+friends+verified | 0.8469 | 0.8508 | 0.9691 | 0.9061 |
+followers+friends+favorites | 0.8448 | 0.8475 | 0.9712 | 0.9051 |
+followers+friends | 0.8410 | 0.8405 | 0.9768 | 0.9036 |
+followers+favorites | 0.8386 | 0.8706 | 0.9259 | 0.8974 |
+followers+retweets | 0.8453 | 0.8690 | 0.9386 | 0.9025 |
+followers+statuses | 0.8426 | 0.8512 | 0.9617 | 0.9031 |
+followers+verified_count | 0.8461 | 0.8668 | 0.9431 | 0.9033 |
+friends+retweets | 0.8319 | 0.8664 | 0.9217 | 0.8932 |
+friends+statuses | 0.8311 | 0.8314 | 0.9765 | 0.8981 |
+friends+verified_count | 0.8429 | 0.8733 | 0.9287 | 0.9002 |
+favorites+retweets | 0.8386 | 0.8611 | 0.9400 | 0.8988 |
+favorites+verified_count | 0.8541 | 0.8810 | 0.9351 | 0.9072 |
+retweets+statuses | 0.8469 | 0.8548 | 0.9628 | 0.9056 |
+retweets+verified_count | 0.8405 | 0.8666 | 0.9347 | 0.8994 |
+title+followers+friends | 0.8324 | 0.8313 | 0.9789 | 0.8991 |
+title+followers+retweets | 0.8512 | 0.8531 | 0.9723 | 0.9088 |
+title+followers+verified_count | 0.8539 | 0.8764 | 0.9410 | 0.9076 |
+title+followers+friends+favorites | 0.8453 | 0.8683 | 0.9396 | 0.9026 |
+favorites+friends+verified | 0.8423 | 0.8615 | 0.9452 | 0.9014 |
+retweets+friends+verified | 0.8472 | 0.8679 | 0.9431 | 0.9040 |
+retweets+statuses+verified | 0.8480 | 0.8726 | 0.9375 | 0.9039 |
+followers+favorites+friends+verified | 0.8330 | 0.8433 | 0.9593 | 0.8975 |
+follow+fav+fri+retw+stat+verif | 0.8560 | 0.8556 | 0.9758 | 0.9118 |
+followers+favorites+retweets+verified | 0.8549 | 0.8798 | 0.9379 | 0.9079 |
Model | Features | PolitiFact (A) | GossipCop (A) | FA-KES (A) | LIAR (A)
---|---|---|---|---|---
Our model | + sentiment | 0.8561 | 0.8565 | 0.4720 | 0.7245
 | + morality | 0.8345 | 0.8474 | 0.5031 | 0.7332
 | + sentiment and morality | 0.8058 | 0.8399 | 0.5403 | 0.7088
 | + linguistic | 0.8417 | 0.8520 | 0.4968 | 0.7301
 | + linguistic and sentiment | 0.8633 | 0.8386 | 0.5155 | 0.7380
 | + linguistic and morality | 0.8273 | 0.8526 | 0.5714 | 0.7167
 | + all | 0.9209 | 0.8640 | 0.6296 | 0.7451
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Alghamdi, J.; Lin, Y.; Luo, S. The Power of Context: A Novel Hybrid Context-Aware Fake News Detection Approach. Information 2024, 15, 122. https://doi.org/10.3390/info15030122