A Comparative Analysis of Active Learning for Rumor Detection on Social Media Platforms
Round 1
Reviewer 1 Report
Comments and Suggestions for Authors
In the reference list I can see the paper:
4. Soroush, V.; Deb, R.; Sinan, A. The spread of true and false news online. science, 2018, 359(6380):1146–1151
It was written by Soroush Vosoughi, Deb Roy, and Sinan Aral. Here, given names come first and surnames follow them, so the reference should read S. Vosoughi, D. Roy, S. Aral, not Soroush V., Deb R., Sinan A.
The other references should also be checked; the same issue definitely applies to [13].
Author Response
Please see the attachment.
Author Response File: Author Response.pdf
Reviewer 2 Report
Comments and Suggestions for Authors
This manuscript provides a comparative analysis of active learning for rumor detection in order to overcome data scarcity, and its main goal is to assess the effectiveness of various active learning query strategies in the context of rumor detection models. On the whole, the manuscript is well described and well organized, but the following points should be improved for readers:
1) In Figure 2, there is a typo, "Query Stratey", which must be corrected. The figure is also somewhat awkward and not intuitive: do the arrows represent data flow or workflow, and what do the arrow labels mean? In general, it is inconsistent.
2) Chapters 4 and 5 would be better merged.
3) Many ML classifiers are used in the manuscript. Why were no deep learning models used? In general, better results can be obtained when the underlying model performs well.
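For reference, below is a minimal, hypothetical sketch of the kind of pool-based uncertainty-sampling query strategy the manuscript compares, paired with a scikit-learn classifier; the classifier choice, feature matrices, batch size, and loop structure are illustrative assumptions rather than the authors' actual implementation.

```python
# Hypothetical sketch: pool-based active learning with least-confidence
# (uncertainty) sampling for a binary rumor/non-rumor classifier.
# The data, features, and batch size are placeholders, not the
# manuscript's actual setup.
import numpy as np
from sklearn.linear_model import LogisticRegression

def uncertainty_query(model, X_pool, batch_size=10):
    """Return indices of the pool samples the model is least confident about."""
    proba = model.predict_proba(X_pool)          # shape: (n_pool, 2)
    confidence = proba.max(axis=1)               # confidence of the predicted class
    return np.argsort(confidence)[:batch_size]   # least confident first

def active_learning_loop(X_labeled, y_labeled, X_pool, y_pool, rounds=5):
    model = LogisticRegression(max_iter=1000)
    for _ in range(rounds):
        model.fit(X_labeled, y_labeled)
        idx = uncertainty_query(model, X_pool)
        # Simulate annotation by moving the queried samples (with their
        # labels) from the unlabeled pool into the labeled training set.
        X_labeled = np.vstack([X_labeled, X_pool[idx]])
        y_labeled = np.concatenate([y_labeled, y_pool[idx]])
        X_pool = np.delete(X_pool, idx, axis=0)
        y_pool = np.delete(y_pool, idx, axis=0)
    return model
```

Other query strategies (e.g., margin or entropy sampling) would differ only in how uncertainty_query ranks the pool, and the classifier could be swapped for a deep learning model without changing the loop.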
Author Response
Please see the attachment.
Author Response File: Author Response.pdf
Reviewer 3 Report
Comments and Suggestions for Authors
The authors present a comparative analysis of active learning for rumor detection on social media platforms. The following points should be addressed:
1. The quality of the figures is very poor; they should be improved.
2. How were the social media datasets collected, preprocessed, and labeled for the rumor detection task?
3. What methods were used to ensure the reliability and quality of the annotations for rumors and non-rumors?
4. Were there any potential biases in the labeled data, and if so, how were they addressed?
5. What were the main findings and results regarding the effectiveness of active learning in rumor detection, and how did these results compare to non-active learning methods or baselines?
6. Were statistical significance tests performed to validate the significance of the reported results?
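Regarding point 6, comparisons of this kind are often validated with McNemar's test on paired predictions over the same test set; the sketch below is a hypothetical illustration using statsmodels, with placeholder prediction arrays rather than the manuscript's actual results.

```python
# Hypothetical sketch: McNemar's test comparing an active-learning model
# against a non-active-learning baseline on the same held-out test set.
# y_true, pred_active, and pred_baseline are placeholder arrays.
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

def compare_classifiers(y_true, pred_active, pred_baseline):
    correct_a = pred_active == y_true
    correct_b = pred_baseline == y_true
    # 2x2 contingency table of agreement/disagreement between the two models.
    table = np.array([
        [np.sum(correct_a & correct_b),  np.sum(correct_a & ~correct_b)],
        [np.sum(~correct_a & correct_b), np.sum(~correct_a & ~correct_b)],
    ])
    result = mcnemar(table, exact=True)
    return result.statistic, result.pvalue
```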
Comments on the Quality of English Language
Moderate changes are needed.
Author Response
Please see the attachment.
Author Response File: Author Response.pdf
Round 2
Reviewer 3 Report
Comments and Suggestions for Authors
It can be accepted for publication.