Publications, Volume 13, Issue 1 (March 2025) – 5 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • Papers are published in both HTML and PDF form; the PDF is the official format. To view a paper in PDF format, click its "PDF Full-text" link and open the file with the free Adobe Reader.
20 pages, 506 KiB  
Article
Social Media Analysis of High-Impact Information and Communication Journals: Adoption, Use, and Content Curation
by Jesús Cascón-Katchadourian, Javier Guallar and Wileidys Artigas
Publications 2025, 13(1), 5; https://doi.org/10.3390/publications13010005 - 17 Jan 2025
Viewed by 1367
Abstract
The use of social media to disseminate academic content is increasing, particularly among scientific journals. This study has two main objectives: first, to explore the use of social media by high-impact academic journals in two SJR categories (Library and Information Sciences and Communication), and second, to analyze the content curation carried out by the world’s most influential journals in both areas. The research methodology is descriptive, with a quantitative approach to the items studied. The study finds that COM journals have a stronger social media presence than LIS journals, and that X dominates as the top social network in both categories and regions, being the only platform with significant influence. Content curation, on the other hand, was found to a high degree in both areas, especially in LIS (93% vs. 80% in COM). The study highlights that both COM and LIS journals focus primarily on promoting recent articles, with COM diversifying its content more than LIS. The content curation techniques used most often in both areas are abstracting and summarizing.

9 pages, 606 KiB  
Article
Analyzing the Drivers Behind Retractions in Tuberculosis Research
by Franko O. Garcia-Solorzano, Shirley M. De la Cruz Anticona, Mario Pezua-Espinoza, Fernando A. Chuquispuma Jesus, Karen D. Sanabria-Pinilla, Christopher Chavez Veliz, Vladimir A. Huayta-Alarcón, Percy Mayta-Tristan and Leonid Lecca
Publications 2025, 13(1), 4; https://doi.org/10.3390/publications13010004 - 14 Jan 2025
Viewed by 546
Abstract
Tuberculosis research plays a crucial role in understanding and responding to the needs of people with this disease, yet the integrity of this research is compromised by frequent retractions. Identifying and analyzing the main reasons for the retraction of tuberculosis articles is essential for improving research practices and ensuring reliable scientific output. In this study, we conducted a systematic literature review of retracted original articles on tuberculosis, using databases including Web of Science, Embase, Scopus, PubMed, LILACS, and the Retraction Watch Database. We found that falsification and plagiarism were the most frequent reasons for retraction, although 16% of the retracted articles did not state the reason behind the retraction. Almost half of the retracted studies had received external funding, affecting not only those specific studies but also future funding opportunities for this research field. Stronger research integrity measures are needed to prevent misconduct affecting this vulnerable population.

8 pages, 591 KiB  
Opinion
Output-Normalized Score (OnS) for Ranking Researchers Based on Number of Publications, Citations, Coauthors, and Author Position
by Antonije Onjia
Publications 2025, 13(1), 3; https://doi.org/10.3390/publications13010003 - 4 Jan 2025
Viewed by 671
Abstract
This article discusses current methods for ranking researchers and proposes a new metric, the output-normalized score (OnS), which considers the number of publications, citations, coauthors, and the author’s position within each publication. The proposed OnS offers a balanced approach to evaluating a researcher’s scientific contributions while addressing the limitations of widely used metrics such as the h-index and its modifications. It favors publications with fewer coauthors while giving significant weight to both the author’s position in the publication and the total number of citations.
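
The abstract does not give the OnS formula, so the following is only a hypothetical Python sketch of the kind of normalization it describes: each paper’s citation count discounted by its number of coauthors and by the author’s byline position. The 1/position weighting and the example figures are assumptions made for illustration, not the scoring the article actually proposes.

    def output_normalized_score(papers):
        """papers: list of (citations, n_authors, author_position) tuples.
        Hypothetical illustration only; not the OnS formula from the article."""
        score = 0.0
        for citations, n_authors, position in papers:
            position_weight = 1.0 / position   # assumed: earlier byline position counts more
            score += (1.0 + citations) * position_weight / n_authors
        return score

    # Example: a researcher with three papers
    papers = [(40, 3, 1),   # 40 citations, 3 authors, first author
              (10, 6, 4),   # 10 citations, 6 authors, fourth author
              (25, 2, 2)]   # 25 citations, 2 authors, second author
    print(round(output_normalized_score(papers), 2))   # ~20.62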

12 pages, 888 KiB  
Article
Practicing Meta-Analytics with Rectification
by Ramalingam Shanmugam and Karan P. Singh
Publications 2025, 13(1), 2; https://doi.org/10.3390/publications13010002 - 2 Jan 2025
Viewed by 568
Abstract
This article demonstrates the necessity of assessing homogeneity in meta-analyses using the Higgins method. Researchers recognize the importance of assessing homogeneity in meta-analytic work; however, a significant issue with the Higgins method has been identified. In this article, we explain the nature of this problem and propose solutions to address it. A prerequisite for checking the consistency of findings across comparable studies in a meta-analysis is that the studies be homogeneous, not heterogeneous. The Higgins I² score, a version of the Cochran Q value, is commonly used to assess heterogeneity and is an improvement on the Q value. However, the Higgins score has a statistical problem: it is supposed to follow a Chi-squared distribution, but it fails to do so because the Chi-squared distribution becomes invalid once the Q score is less than the degrees of freedom. This problem was recently rectified using an alternative method (the S² score). Using this method, we examined 14 published articles representing 133 datasets and observed that many studies declared homogeneous by the Higgins method were, in fact, heterogeneous. This article urges the research community to be cautious when making inferences using the Higgins method.
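
For context, the minimal Python sketch below (with invented example data) computes Cochran’s Q and the standard Higgins score, I² = max(0, (Q − df)/Q) × 100%. The clipping at zero is precisely where the problem described above arises: whenever Q falls below the degrees of freedom, the Chi-squared reference distribution no longer applies. The S² rectification itself is not reproduced here, since the abstract does not give its formula.

    import numpy as np

    def higgins_i2(effects, variances):
        """Cochran's Q and Higgins' I^2 for k study effect estimates."""
        effects = np.asarray(effects, dtype=float)
        weights = 1.0 / np.asarray(variances, dtype=float)     # inverse-variance weights
        pooled = np.sum(weights * effects) / np.sum(weights)   # fixed-effect pooled estimate
        q = float(np.sum(weights * (effects - pooled) ** 2))   # Cochran's Q statistic
        df = len(effects) - 1                                  # degrees of freedom, k - 1
        i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0  # floored at 0 when Q < df
        return q, df, i2

    # Five hypothetical studies with similar effects: Q comes out below df,
    # so I^2 is clipped to 0% and the study set is declared homogeneous.
    q, df, i2 = higgins_i2([0.10, 0.15, 0.08, 0.12, 0.11],
                           [0.04, 0.05, 0.03, 0.06, 0.04])
    print(f"Q = {q:.3f}, df = {df}, I^2 = {i2:.1f}%")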

11 pages, 235 KiB  
Opinion
Exploring the Need to Use “Plagiarism” Detection Software Rationally
by Petar Milovanovic, Tatjana Pekmezovic and Marija Djuric
Publications 2025, 13(1), 1; https://doi.org/10.3390/publications13010001 - 2 Jan 2025
Viewed by 776
Abstract
Universities and journals increasingly rely on software tools that detect textual overlap between a scientific text and previously published literature in order to flag potential plagiarism. Although software outputs need to be carefully reviewed by competent humans to verify the existence of plagiarism, university and journal staff, for various reasons, often erroneously judge the degree of plagiarism from the percentage of textual overlap shown in the similarity report. This is often accompanied by explicit recommendations to the author(s) to paraphrase the text until an “acceptable” percentage of overlap is reached. Here, based on the available literature and real-world examples from similarity reports, we provide a classification, with extensive examples, of phrases that falsely inflate the similarity index, and we argue that rephrasing such statements merely to reduce the similarity index is futile and even dangerous. The examples provided in this paper call for a more reasonable assessment of text similarity. To fully endorse the principles of academic integrity and prevent loss of clarity in the scientific literature, we believe it is important to shift from a purely bureaucratic, quantitative view of the originality of scientific texts to a human-centered, qualitative assessment of manuscripts, including the software outputs.
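
To make the point about percentages concrete, here is one simple way a similarity index can be computed: the fraction of a text’s word trigrams that also occur in prior literature. This minimal Python sketch is an illustration only, not the algorithm of any specific detection tool; note how a single stock methods phrase pushes the index above 80% even though no idea has been copied.

    def trigram_overlap(text, prior_text):
        """Fraction of word trigrams in `text` that also occur in `prior_text`."""
        def trigrams(s):
            words = s.lower().split()
            return {tuple(words[i:i + 3]) for i in range(len(words) - 2)}
        t, p = trigrams(text), trigrams(prior_text)
        return len(t & p) / len(t) if t else 0.0

    # A standard methods phrase shared with earlier papers inflates the index
    # even though it conveys no copied ideas.
    prior = ("data were analyzed using spss version 25 with p less than "
             "0.05 considered significant")
    manuscript = ("all data were analyzed using spss version 25 with p less than "
                  "0.05 considered significant in this cohort")
    print(f"similarity index: {trigram_overlap(manuscript, prior):.0%}")  # 81%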