Article

Is Citation Count a Legitimate Indicator of Scientific Impact? A Case Study of Upper (1974) “The Unsuccessful Self-Treatment of a Case of Writer’s Block” and Its Derivatives

by
Andy Wai Kan Yeung
Oral and Maxillofacial Radiology, Applied Oral Sciences and Community Dental Care, Faculty of Dentistry, University of Hong Kong, Hong Kong, China
Publications 2024, 12(4), 35; https://doi.org/10.3390/publications12040035
Submission received: 21 August 2024 / Revised: 10 October 2024 / Accepted: 14 October 2024 / Published: 15 October 2024

Abstract

The work by Upper (1974) was a blank paper. Multiple replication studies were subsequently published. This work examined the number of citations received by these papers and manually checked the citing papers to determine why the citations were made. The Dimensions literature database was queried with the search string: (unsuccessful treatment writer’s block). The search yielded 14 articles, two of which were irrelevant and excluded. The 12 papers that remained after screening comprised the original study by Upper (1974), nine replication studies, one review, and one meta-analysis. The original work received 43 citations, but the related works had fewer than 10 citations each. One fourth of the citations to Upper (1974) were satirical takes on “nothing” or “precise” made by papers dealing with unrelated concepts, and five citations were deemed erroneous/digressed. One citation was made to acknowledge the reviewer’s comments on Upper (1974), which did not involve Upper’s own ideas. This work exposed a scenario that illustrates the limitations of using citation count as the only metric to gauge the scientific impact of journal articles.

1. Introduction

The number of citations a paper receives has long been considered a key metric for measuring its scientific impact [1,2,3]. However, it is important to remember that citation counts may not always accurately reflect the quality or impact of a paper. For example, a paper that has been retracted due to scientific misconduct or errors may continue to receive citations long after its retraction, particularly if it had already gained a large number of citations beforehand [4]. Similarly, it was found that papers criticized in a piece of commentary or a letter to the editors were generally much more likely to be among the most cited papers of a journal and more cited than non-commented (i.e., not criticized) papers [5]. In addition, the Matthew effect can skew citation counts, as researchers tend to selectively cite references that have already been highly cited, regardless of their actual scientific merit [6]. For instance, a large-scale citation analysis indicated that a small number of elite scientists have received increasing citation shares over time and that citation inequality is on the rise in the natural sciences, medical sciences, and agricultural sciences [7]. Furthermore, citation practices may be influenced by factors such as author reputation, gender, and institutional affiliations, which may not necessarily reflect the actual quality of the specific research being cited [8]. Another potential issue with citation metrics is that they do not take into account the context of each citation. For instance, a paper may be cited for reasons other than its scientific merit, such as to criticize its flawed methodology, or to highlight a minor point that is not central to its main findings.
Citations could also be made by papers from so-called predatory journals, papers with false authors, or papers deemed to have research integrity issues; in these scenarios, some researchers might even wish to refuse the citations to distance themselves from the citers [9]. These limitations suggest that, while citation counts can be a useful tool for gauging the impact of a paper, they should be interpreted with caution and should not be relied upon as the sole measure of scientific quality.
Instructional papers such as reporting guidelines might be published concurrently in multiple journals to reach the maximum readership. One example was the CONSORT (CONsolidated Standards of Reporting Trials) 2010 guideline, which was simultaneously published in 10 journals. Among these 10 duplicate papers, some gained nearly 2000 citations, whereas others received as few as a single citation [10]. In addition, each citation may carry its own context, with some citations being “wrong” in the sense that the cited reference did not support the stance of the citing paper or did not contain the information quoted by the citing paper [11,12]. These examples highlighted the drawback of judging a paper’s scientific impact based on citation count alone. Other research qualities, such as originality and societal value, should also be considered for a more comprehensive evaluation [13].
The current study focused on an unusual set of papers that have attracted significant attention in the scientific community. The origin of this collection of papers can be traced back to Upper’s paper published in 1974 [14], titled The Unsuccessful Self-Treatment of a Case of “Writer’s Block”. This paper was a satirical take on a case study, consisting of a blank page with only a title and a brief footnote saying that portions of the paper were not presented at a particular conference and that reprints might be obtained from the sole author, Dennis Upper. Viewed in a serious sense, this work should have no originality or societal value. Despite its lack of content, the paper was accepted for publication and even received comments from a “Reviewer A”, who praised it as “concise” and recommended that it be published without revision. Since its publication, Upper’s paper has become a well-known example of scientific satire and has spawned numerous replication studies. These studies range from direct imitations of the original paper to more elaborate parodies built upon it. The proliferation of these replication studies raises interesting questions about the nature of scientific citation and its role in shaping scientific discourse. The current study aimed to identify the replication studies inspired by Upper’s paper. Specifically, it sought to investigate the context in which these papers have been cited, and whether they have received similar numbers of citations as the original paper. By examining the citation patterns of these replication studies, it was hoped to shed light on the ways in which scientific satire can have an impact on the research literature and to reflect on the meaning of scientific citations.

2. Materials and Methods

On 30 December 2022, the Dimensions literature database (https://www.dimensions.ai/ accessed on 30 December 2022) was queried with the search string: (unsuccessful treatment writer’s block). This search string was referenced from a prior meta-analysis [15]. The query searched for any articles that mentioned all four of these words in their title and/or abstract. Articles were excluded if they were not purely satirical in nature. The search resulted in 14 articles. One of them contained all four words separately in its abstract but was totally irrelevant, being an essay on a musician, and was thus excluded [16]. Another paper had a very long abstract that expressed the author’s views on the non-dual, a psychological or philosophical concept [17]. That paper mentioned the Upper paper [14] in its long abstract and had a blank main text. Since this paper contained serious informative content in its abstract, it was also excluded. Finally, 12 articles remained after screening: the original study by Upper [14], nine replication studies, one review on this topic, and one meta-analysis on this topic [15]. Both the review and the meta-analysis were purely satirical in nature. Table 1 lists the key points of these 12 papers. The citation count and Altmetric Attention Score (AAS) of these 12 papers were recorded from Dimensions. Papers citing them were read to categorize the rationale for each citation into the following groups: (1) paper replicating the work by Upper in 1974; (2) paper seriously dealing with writer’s block or related psychological concepts; (3) paper being satirical about the concepts of “having nothing” or “writing with precision” while dealing with unrelated concepts; (4) erroneous/digressed citation; (5) wrong entry by Dimensions/publisher; or (6) no access, and thus the rationale could not be determined.
Similar to a recent citation analysis study [18], these citation groups were originally devised for this study, and subsequently amended and finalized after evaluating the collected data.
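The screening arithmetic described above can be tallied in a few lines. This is only an illustrative sketch using the counts reported in this section; the function and variable names are ours, not part of the study’s workflow:

```python
# Screening tally for the Dimensions query (counts from the Methods text).
# Names here are illustrative, not taken from the study.

def screen(search_hits: int, excluded: int) -> int:
    """Return the number of papers remaining after screening."""
    return search_hits - excluded

# Composition of the final set, as reported in the Methods.
final_set = {
    "original (Upper 1974)": 1,
    "replication studies": 9,
    "review": 1,
    "meta-analysis": 1,
}

remaining = screen(search_hits=14, excluded=2)
assert remaining == sum(final_set.values())  # 12 papers after screening
```

The consistency check at the end simply confirms that the 14 search hits minus the 2 exclusions equal the 12 categorized papers.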

3. Results

As expected, the original work by Upper (1974) received the highest number of citations (n = 43) and the highest AAS (3825). Subsequent related works had fewer than 10 citations each and did not seem to have a strong impact on academia (Table 1). Next, the citations received by Upper (1974) were thoroughly examined, and it was revealed that one fourth of its citations were made by its replication studies and by the review/meta-analysis of this group of studies (Table 2). Another fourth of its citations came from papers seriously dealing with writer’s block or related psychological concepts (Table 3): many of these citations recognized that the Upper (1974) paper was satirical, but some seemed to cite it as a legitimate study without explicitly telling readers about its satire. Yet another fourth of the citations were satirical takes on “nothing” or “precise” from papers dealing with unrelated concepts (Table 4). Five citations were deemed erroneous/irrelevant and are listed in Table 5; several of them were related to gene analysis.

4. Discussion

Submitting a blank manuscript seemed to be a bold action by the authors. Since each submission may have its own contextual background, in certain circumstances journal editors will appreciate the satirical nature of the submission and even allow so-called replication studies of a blank paper. However, if the journal editors cannot appreciate the submission, the submitting authors may risk being dismissed by the journal’s editorial office for future submissions [47]. Moreover, it would be very difficult for readers to comprehend the original intentions or underlying thoughts of the authors of a blank paper if it were not purely satirical. In fact, some authors published commentaries to supplement their papers that were blank or contained only a few words. Some notable examples are discussed below.
The first example was from the philosophy field. Habgood-Coote et al. published a paper titled Can a Good Philosophical Contribution Be Made Just by Asking a Question? with no abstract, main text, references, or supplementary files [48]. They then published a commentary to explain their reasoning and intentions behind this act [49]. The second example was from the law field. Jensen published a paper titled The Shortest Article in Law Review History with only three words, “This is it”, and two footnotes [50]. Years later, he also published a separate commentary to explain why he wrote it and how some readers responded [51].
Of course, there were also purely satirical publications. Among the papers citing [14], there was a paper titled Nothing: A Review [52]. It had no abstract or main text, but its reference list cited several replication studies of [14], as well as two other blank papers: [53] from the linguistics field and [54] from the chemistry field.
Though Upper (1974) [14] was a blank paper, readers would usually perceive it as a lively example of writer’s block, as its title suggested. However, one paper cited it at the end of a sentence stating that some investigators evaluated an author’s underlying intentions by checking an old manuscript with X-rays [44]. The idea of checking a manuscript with X-rays did not originate from Upper; rather, it appeared in the reviewer’s comments given to, and printed with, Upper (1974), which said:
“I have studied this manuscript very carefully with lemon juice and X-rays and have not detected a single flaw in either design or writing style”.
This citation brought out the issue of giving proper recognition to the contributor of an idea. In this case, the authors actually wanted to acknowledge the reviewer of Upper (1974), not Upper himself. A similar scenario arises when authors cite a review paper instead of the original articles, thereby unfairly giving credit to the review authors for other researchers’ ideas [55]. However, there is currently no proper mechanism to cite reviewer comments, though more and more journals are now publishing papers alongside open peer review reports.
Several citations of Upper (1974) were related to gene analysis and deemed irrelevant. It remains to be elucidated whether the researchers who made these citations influenced each other or made the citations independently. At first glance, it does not make sense to cite a blank paper to support any concept or justify any methodology concerning genetics. Searching the literature, one could identify a much more prominent and famous example of irrelevant citations: a three-page commentary on moth pheromones by Vickers (2017) [56]. It has attracted over 2300 citations according to Google Scholar. According to Vickers and bibliometricians, over 99% of the citations of Vickers (2017) were off-topic or invalid [57]. One potential reason for the presence of so many invalid citations could be the use of a wrong (truncated) DOI to search for Vickers (2017) in Google Scholar during the creation of a citation file for citation management software [57]. Hence, there could be many reasons for making irrelevant or invalid citations, but they cannot be revealed without input from the citing authors. A less prominent issue with citations is wrong entries by databases, which over- or under-count the number of citations. This issue is equally difficult to check in an automated manner on a large scale.
One issue with citation counts, not evaluated in the current work, is the different coverage of different literature databases. In particular, the Web of Science platform hosts several databases, with the most popular pre-set combination called the Core Collection. Even though the name of the database seems straightforward and intuitive, the number of databases actually included in the Core Collection and their corresponding coverage timespans vary according to the subscription plan [58]. Therefore, results may not be directly comparable across studies if different databases and coverage timespans are used to generate the dataset for analysis. Meanwhile, bibliometric analysis has nowadays become a popular paper type in the scientific literature, ranging from the analysis of a pre-defined dataset to the identification of the top 100 most cited papers on a particular topic [59]. The latter type of paper may identify and cite the top 100 most cited papers of a field in the form of a table, but may not actually discuss or elaborate on the content of each of them. These citations may inflate the citation counts of the highly cited papers in the context of the Matthew effect, whereby highly cited papers tend to accumulate more citations due to their established recognition [60].
Another phenomenon observed from the results was that the replication studies of Upper (1974) were much less cited than the original work. A prior study also reported that the majority of articles citing the original studies did not cite relevant replication results from later studies [61]. In fact, even some of the latest replication studies of Upper (1974) neglected to cite their earlier counterparts, see [25,26,27].
The above examples highlighted the problem of citation analysis based on simple citation counts. Upper (1974) and its derivative works were theoretically identical in terms of the message delivered to readers and to the academic community. Though some of them were cited more than others, this should not mean that they had a larger scientific impact. Instead of simply counting the number of citations to determine the impact of the cited references, it is more informative to consider the citing behavior, or the context of the citations. Readers are directed to a comprehensive review by [62]. For example, a citation could be made out of professional motivations (a contextual need to cite) versus connectional motivations (social connections or personal benefits) [63]. Such a categorization/analysis is not possible without making direct queries to the authors. Another categorization method focuses on the citation context within the text, such as historical background, description of relevant work, and use of established methodology and/or mathematical equations [64,65]. A more intuitive categorization method is to classify citations into supportive, negative, and others (neutral) [62]. This is easier to perform, whether manually or by artificial intelligence [66,67].
Since Upper (1974) was a blank paper, none of the above categorization methods is strictly applicable. Still, the citing sentences could be examined to determine the context of each citation and whether erroneous citations could be found. It was found that 12% of the citations received by Upper (1974) came from irrelevant fields such as genetic studies. It is unclear whether such citations were indeed made by the authors or resulted from typesetting errors during article production. Notwithstanding, this finding seemed to be consistent with the estimation that only 20% of citers read the original paper [68] and that 13.3% of the references listed at the end of published papers contain errors [69].
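As a back-of-the-envelope check, the 12% figure is consistent with the counts reported in the Results (five erroneous/irrelevant citations out of 43 in total); the variable names below are ours:

```python
# Arithmetic check for the "12%" share of irrelevant citations to Upper (1974).
# Counts are taken from the Results section; the names are illustrative.
erroneous = 5          # citations deemed erroneous/irrelevant (Table 5)
total_citations = 43   # total citations received by Upper (1974)

share_percent = round(100 * erroneous / total_citations)
# 5 / 43 = 11.6...%, which rounds to the reported 12%
```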
The current analysis of the citations of Upper (1974) demonstrates the limitations and potential pitfalls of using citation count as the only measure of scientific impact. One may argue that this is only an isolated incident, but it may actually serve as a valid reflection of a critical issue in the current academic landscape: over-reliance on citation metrics can distort the perceived value of research and lead to unintended consequences. There are two reasons. First, citation metrics do not necessarily reflect the potential research or societal impact of the cited paper. In the case of Upper (1974), the humorous and satirical nature of the blank paper garnered a significant number of citations. Researchers who are not aware of this extreme case beforehand may easily perceive it as a genuine case report based on its title. Second, even this satirical blank paper received unrelated or erroneous citations made to justify certain procedures or concepts in gene studies. Therefore, it is important to adopt more holistic and multi-dimensional evaluation metrics. These should consider quality, reproducibility, and even relevance beyond academia, such as impact on policy, customer service, and society, rather than solely relying on the quantity of citations [70].
In conclusion, the findings of this study have demonstrated that even a satirical blank paper could receive many citations. Moreover, some of these citations were made to substantiate certain concepts or justify certain procedures in gene studies. Hence, it is not always legitimate to rely on citation count as the single metric to evaluate the scientific impact of research works. Although this may be an extreme case or an isolated incident, research performance would be better evaluated with a more holistic approach in the future. Apart from citation count, relevance beyond academia, such as impact on policy, customer service, and society, could be considered. Even for the citation count itself, supplementary metrics based on citation context, such as the ratios of citations that are supportive, negative, or neutral, could be exploited.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All data are provided in the main text.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Podlubny, I. Comparison of scientific impact expressed by the number of citations in different fields of science. Scientometrics 2005, 64, 95–99. [Google Scholar] [CrossRef]
  2. Radicchi, F.; Fortunato, S.; Castellano, C. Universality of citation distributions: Toward an objective measure of scientific impact. Proc. Natl. Acad. Sci. USA 2008, 105, 17268–17272. [Google Scholar] [CrossRef] [PubMed]
  3. Smith, M.J.; Weinberger, C.; Bruna, E.M.; Allesina, S. The scientific impact of nations: Journal placement and citation performance. PLoS ONE 2014, 9, e109195. [Google Scholar] [CrossRef] [PubMed]
  4. Teixeira da Silva, J.A.; Dobránszki, J. Highly cited retracted papers. Scientometrics 2017, 110, 1653–1661. [Google Scholar] [CrossRef]
  5. Radicchi, F. In science “there is no bad publicity”: Papers criticized in comments have high scientific impact. Sci. Rep. 2012, 2, 815. [Google Scholar] [CrossRef]
  6. Teixeira da Silva, J.A. The Matthew effect impacts science and academic publishing by preferentially amplifying citations, metrics and status. Scientometrics 2021, 126, 5373–5377. [Google Scholar] [CrossRef]
  7. Nielsen, M.W.; Andersen, J.P. Global citation inequality is on the rise. Proc. Natl. Acad. Sci. USA 2021, 118, e2012208118. [Google Scholar] [CrossRef]
  8. Ray, K.S.; Zurn, P.; Dworkin, J.D.; Bassett, D.S.; Resnik, D.B. Citation bias, diversity, and ethics. Account. Res. 2024, 31, 158–172. [Google Scholar] [CrossRef]
  9. Teixeira da Silva, J.A.; Vuong, Q.-H. The right to refuse unwanted citations: Rethinking the culture of science around the citation. Scientometrics 2021, 126, 5355–5360. [Google Scholar] [CrossRef]
  10. Yeung, A.W.K.; Wang, D.; El-Demerdash, A.; Horbanczuk, O.K.; Das, N.; Pirgozliev, V.; Lucarini, M.; Durazzo, A.; Souto, E.B.; Santini, A.; et al. Animal versus human research reporting guidelines impacts: Literature analysis reveals citation count bias. Anim. Sci. Pap. Rep. 2021, 39, 5–18. [Google Scholar]
  11. Yeung, A.W.K. Do “Ten simple rules for neuroimaging meta-analysis” receive equal attention and accurate quotation? An examination on the quotations to an influential neuroimaging meta-analysis guideline. NeuroImage Clin. 2023, 39, 103496. [Google Scholar] [CrossRef] [PubMed]
  12. Yeung, A.W.K. A citation analysis of (f) MRI papers that cited Lieberman and Cunningham (2009) to justify their statistical threshold. PLoS ONE 2024, 19, e0309813. [Google Scholar] [CrossRef] [PubMed]
  13. Aksnes, D.W.; Langfeldt, L.; Wouters, P. Citations, citation indicators, and research quality: An overview of basic concepts and theories. Sage Open 2019, 9, 2158244019829575. [Google Scholar] [CrossRef]
  14. Upper, D. The unsuccessful self-treatment of a case of “writer’s block”. J. Appl. Behav. Anal. 1974, 7, 497. [Google Scholar] [CrossRef] [PubMed]
  15. McLean, D.C.; Thomas, B.R. Unsuccessful treatments of “writer’s block”: A meta-analysis. Psychol. Rep. 2014, 115, 276–278. [Google Scholar] [CrossRef] [PubMed]
  16. Stewart, J. If I had possession over judgment day: Augmenting Robert Johnson. M/C J. 2013, 16. [Google Scholar] [CrossRef]
  17. Friedman, H.L. An Explication of All Cogent Scientific Conceptualizations Regarding the Non-Dual: Finding Nothing to Write. Int. J. Transpers. Stud. 2018, 37, 9. [Google Scholar]
  18. Yeung, A.W.K. The reverberation of implementation errors in a neuroimaging meta-analytic software package: A citation analysis to a technical report on GingerALE. Heliyon 2024, 10, e38084. [Google Scholar] [CrossRef]
  19. Molloy, G.N. The unsuccessful self-treatment of a case of “writer’s block”: A replication. Percept. Mot. Ski. 1983, 57, 566. [Google Scholar] [CrossRef]
  20. Hermann, B.P. Unsuccessful self-treatment of a case of “writer’s block”: A partial failure to replicate. Percept. Mot. Ski. 1984, 58, 350. [Google Scholar] [CrossRef]
  21. Olson, K.R. Unsuccessful Self-Treatment of “Writer’s Block”: A Review of the Literature. Percept. Mot. Ski. 1984, 59, 158. [Google Scholar] [CrossRef]
  22. Skinner, N.F.; Perlini, A.H.; Fric, L.; Werstine, E.P.; Calla, J. The Unsuccessful Group-Treatment of “Writer’s Block”. Percept. Mot. Ski. 1985, 61, 298. [Google Scholar] [CrossRef]
  23. Skinner, N.F.; Perlini, A.H. The unsuccessful group treatment of “writer’s block”: A ten-year follow-up. Percept. Mot. Ski. 1996, 82, 138. [Google Scholar] [CrossRef]
  24. Didden, R.; Sigafoos, J.; O’Reilly, M.F.; Lancioni, G.E.; Sturmey, P. A Multisite Cross-Cultural Replication of Unsuccessful Self-Treatment of Writer’s Block. J. Appl. Behav. Anal. 2007, 40, 773. [Google Scholar] [CrossRef]
  25. Artino, A.R., Jr. The unsuccessful treatment of a case of ‘Writer’s Block’: A replication in medical education. Med. Educ. 2016, 50, 1262–1263. [Google Scholar] [CrossRef]
  26. Brodhead, M.T.; Truckenmiller, A.J.; Cox, D.J.; Della Sala, M.R.; Yough, M.; Hartzheim, D.U. A multidisciplinary replication of Upper’s (1974) unsuccessful self-treatment of writer’s block. Behav. Anal. Pract. 2019, 12, 547. [Google Scholar] [CrossRef]
  27. van Compernolle, R.A.; Leontjev, D. It’s Beyond Our Group ZPD: A Sociocultural Approach to the Unsuccessful Self-treatment of Writer’s Block in Times of COVID-19. Lang. Sociocult. Theory 2020, 7, 223. [Google Scholar] [CrossRef]
  28. Ampatzidis, G. The Unsuccessful Self-treatment of a Case of ‘Writer’s Block’: A Replication in Science Education. J. Trial Error 2021, 2, 56–57. [Google Scholar] [CrossRef]
  29. Ahmed, S.J.; Güss, C.D. An analysis of writer’s block: Causes and solutions. Creat. Res. J. 2022, 34, 339–354. [Google Scholar] [CrossRef]
  30. Steinert, C.; Heim, N.; Leichsenring, F. Procrastination, perfectionism, and other work-related mental problems: Prevalence, types, assessment, and treatment—A scoping review. Front. Psychiatry 2021, 12, 736776. [Google Scholar] [CrossRef]
  31. Payakachat, N.; Hight, K.; Reinhardt, M.; Pate, A.; Franks, A.M. Exploring factors associated with scholarly writing among US pharmacy practice faculty. Res. Soc. Adm. Pharm. 2021, 17, 531–540. [Google Scholar] [CrossRef] [PubMed]
  32. Carlbring, P.; Andersson, G. Successful Self-Treatment of a Case of Writer’s Block. Cogn. Behav. Ther. 2011, 40, 1–4. [Google Scholar] [CrossRef]
  33. Boice, R. Writing blocks and tacit knowledge. J. High. Educ. 1993, 64, 19–54. [Google Scholar] [CrossRef]
  34. Boice, R. Combining writing block treatments: Theory and research. Behav. Res. Ther. 1992, 30, 107–116. [Google Scholar] [CrossRef] [PubMed]
  35. Salovey, P.; Haar, M.D. The efficacy of cognitive-behavior therapy and writing process training for alleviating writing anxiety. Cognit. Ther. Res. 1990, 14, 513–526. [Google Scholar] [CrossRef]
  36. Boice, R. Teaching of writing in psychology: A review of sources. Teach. Psychol. 1982, 9, 143–147. [Google Scholar] [CrossRef]
  37. Rosenberg, H.; Lah, M.I. A comprehensive behavioral-cognitive treatment of writer’s block. Behav. Cogn. Psychother. 1982, 10, 356–363. [Google Scholar] [CrossRef]
  38. Stevens, V.J. Increasing professional productivity while teaching full time: A case study in self-control. Teach. Psychol. 1978, 5, 203–205. [Google Scholar] [CrossRef]
  39. Passman, R.H. A procedure for eliminating writer’s block in a college student. J. Behav. Ther. Exp. Psychiatry 1976, 7, 297–298. [Google Scholar] [CrossRef]
  40. Kessel, D. Photodynamic therapy: Critical PDT theory. Photochem. Photobiol. 2023, 99, 199–203. [Google Scholar] [CrossRef]
  41. Cameron, R.P. Constraints for electric charge from Maxwell’s equations and boundary conditions. Phys. Scr. 2022, 97, 035502. [Google Scholar] [CrossRef]
  42. Schipper, T.; Storms, G.; Janssens, G.; Schoofs, S.; Capiau, E.; Verdonck, D.; Smets, P.; Peelman, L.J.; Broeckx, B.J. Genetic Aspects of Corneal Sequestra in a Population of Persian, Himalayan and Exotic Cats. Animals 2022, 12, 2008. [Google Scholar] [CrossRef] [PubMed]
  43. Speicher, D.J.; Luinstra, K.; Smith, E.J.; Castriciano, S.; Smieja, M. Non-invasive detection of viral antibodies using oral flocked swabs. BioRxiv 2020, 536227. [Google Scholar] [CrossRef]
  44. Alfeld, M.; De Viguerie, L. Recent developments in spectroscopic imaging techniques for historical paintings-a review. Spectrochim. Acta Part B At. Spectrosc. 2017, 136, 81–105. [Google Scholar] [CrossRef]
  45. Bromke, M.A.; Hesse, H. Phylogenetic analysis of methionine synthesis genes from Thalassiosira pseudonana. SpringerPlus 2015, 4, 391. [Google Scholar] [CrossRef]
  46. Lorenz, R.; Bernhart, S.H.; Höner zu Siederdissen, C.; Tafer, H.; Flamm, C.; Stadler, P.F.; Hofacker, I.L. ViennaRNA Package 2.0. Algorithms Mol. Biol. 2011, 6, 26. [Google Scholar] [CrossRef]
  47. Nigg, H.N.; Radulescu, G. Scientific misconduct in environmental science and toxicology. JAMA 1994, 272, 168–170. [Google Scholar] [CrossRef]
  48. Habgood-Coote, J.; Watson, L.; Whitcomb, D. Can a good philosophical contribution be made just by asking a question? Metaphilosophy 2022. ahead of print. [Google Scholar]
  49. Habgood-Coote, J.; Watson, L.; Whitcomb, D. Commentary on “Can a good philosophical contribution be made just by asking a question?”. Metaphilosophy 2022. ahead of print. [Google Scholar] [CrossRef]
  50. Jensen, E.M. The Shortest Article in Law Review History. J. Leg. Educ. 2000, 50, 156. [Google Scholar]
  51. Jensen, E.M. The Intellectual History of the Shortest Article in Law Review History. Case West. Reserve Law Rev. 2008, 59, 445–450. [Google Scholar]
  52. Karhulahti, V.-M. Nothing: A review. Humanit. Soc. 2021, 45, 653–654. [Google Scholar] [CrossRef]
  53. Fiengo, R.; Lasnik, H. On nonrecoverable deletion in syntax. Linguist. Inq. 1972, 3, 528. [Google Scholar]
  54. Goldberg, A.F.; Roth, K.; Chemjobber, C. A comprehensive Overview Chemical-free Consumer Products. Chem. Unserer Zeit 2016, 50, 144–145. [Google Scholar] [CrossRef]
  55. Teixeira, M.C.; Thomaz, S.M.; Michelan, T.S.; Mormul, R.P.; Meurer, T.; Fasolli, J.V.B.; Silveira, M.J. Incorrect citations give unfair credit to review authors in ecology journals. PLoS ONE 2013, 8, e81871. [Google Scholar] [CrossRef]
  56. Vickers, N.J. Animal communication: When I’m calling you, will you answer too? Curr. Biol. 2017, 27, R713–R715. [Google Scholar] [CrossRef]
  57. Teixeira da Silva, J.A.; Vickers, N.J.; Nazarovets, S. From citation metrics to citation ethics: Critical examination of a highly-cited 2017 moth pheromone paper. Scientometrics 2024, 129, 693–703. [Google Scholar] [CrossRef]
  58. Yeung, A.W.K. A revisit to the specification of sub-datasets and corresponding coverage timespans when using Web of Science Core Collection. Heliyon 2023, 9, e21527. [Google Scholar] [CrossRef]
  59. Yeung, A.W.K. Document type assignment by Web of Science, Scopus, PubMed, and publishers to “Top 100” papers. Malays. J. Libr. Inf. Sci. 2021, 26, 97–103. [Google Scholar]
  60. Wang, J. Unpacking the Matthew effect in citations. J. Informetr. 2014, 8, 329–339. [Google Scholar] [CrossRef]
  61. Hardwicke, T.E.; Szűcs, D.; Thibault, R.T.; Crüwell, S.; van den Akker, O.R.; Nuijten, M.B.; Ioannidis, J.P. Citation patterns following a strongly contradictory replication result: Four case studies from psychology. Adv. Methods Pract. Psychol. Sci. 2021, 4, 1–14. [Google Scholar] [CrossRef]
  62. Bornmann, L.; Daniel, H.D. What do citation counts measure? A review of studies on citing behavior. J. Doc. 2008, 64, 45–80. [Google Scholar] [CrossRef]
  63. Vinkler, P. A quasi-quantitative citation model. Scientometrics 1987, 12, 47–72. [Google Scholar] [CrossRef]
  64. Ahmed, T.; Johnson, B.; Oppenheim, C.; Peck, C. Highly cited old papers and the reasons why they continue to be cited. Part II. The 1953 Watson and Crick article on the structure of DNA. Scientometrics 2004, 61, 147–156. [Google Scholar] [CrossRef]
  65. Oppenheim, C.; Renn, S.P. Highly cited old papers and the reasons why they continue to be cited. J. Am. Soc. Inf. Sci. 1978, 29, 225–231. [Google Scholar] [CrossRef]
  66. Nicholson, J.M.; Mordaunt, M.; Lopez, P.; Uppala, A.; Rosati, D.; Rodrigues, N.P.; Grabitz, P.; Rife, S.C. Scite: A smart citation index that displays the context of citations and classifies their intent using deep learning. Quant. Sci. Stud. 2021, 2, 882–898. [Google Scholar] [CrossRef]
  67. Yeung, A.W.K.; Cushing, C.A.; Lee, A.L. A bibliometric evaluation of the impact of theories of consciousness in academia and on social media. Conscious. Cogn. 2022, 100, 103296. [Google Scholar] [CrossRef]
  68. Simkin, M.V.; Roychowdhury, V.P. Read before you cite! Complex Syst. 2003, 14, 269–274. [Google Scholar] [CrossRef]
  69. Al-Benna, S.; Rajgarhia, P.; Ahmed, S.; Sheikh, Z. Accuracy of references in burns journals. Burns 2009, 35, 677–680. [Google Scholar] [CrossRef]
  70. Friesen, F.; Baker, L.R.; Ziegler, C.; Dionne, A.; Ng, S.L. Approaching impact meaningfully in medical education research. Acad. Med. 2019, 94, 955–961. [Google Scholar] [CrossRef]
Table 1. Summary of 12 papers related to unsuccessful treatment of writer’s block.
| Reference | Paper Type | Key Points | Citations | AAS Score |
| --- | --- | --- | --- | --- |
| Upper (1974) [14] | Original | The original work. Abstract, main text, and reference were all blank. | 43 | 3825 |
| Molloy (1983) [19] | Original | Abstract and main text were blank. Reference cited [14]. | 8 | 82 |
| Hermann (1984) [20] | Original | No abstract. Main text contained two sentences, one finished and the other unfinished. Cited [14,19]. | 4 | 63 |
| Olson (1984) [21] | Review | No abstract. Main text contained two sentences. It concluded that “the consistent outcome of these studies is [blank]”. Cited [14,19,20]. | 1 | 28 |
| Skinner et al. (1985) [22] | Original | Abstract and main text were blank. Reference cited [14,19]. Footnotes explained that the treatment procedures for weekly 1-h sessions over 2 years were ineffective for the group. | 3 | 22 |
| Skinner and Perlini (1996) [23] | Original | Abstract mentioned that the treatment procedures for weekly 1-h sessions over 10 years were still ineffective for the group. Main text was blank. Cited [14,19,22]. | 1 | 18 |
| Didden et al. (2007) [24] | Original | Abstract and main text were blank. Reference cited [14]. | 3 | 217 |
| McLean and Thomas (2014) [15] | Meta-analysis | Abstract and main text looked genuine. Concluded that group treatment tended to be more unsuccessful, due to the partially successful self-treatment by [20]. Cited [14,19,20,21,22,23,24]. | 4 | 26 |
| Artino Jr (2016) [25] | Original | Abstract and main text were blank. Reference cited [14]. | 2 | 55 |
| Brodhead et al. (2019) [26] | Original | Abstract and main text were blank. Reference cited [14]. | 1 | 33 |
| van Compernolle and Leontjev (2020) [27] | Original | Abstract had 4 sentences, claiming that COVID-19 had given the authors writer’s block that could not be cured by the treatment proposed by Upper in 1974. Main text was blank. Cited [14]. | 0 | 0 |
| Ampatzidis (2021) [28] | Original | Abstract and main text were blank. Reference cited [14,19,20,25]. | 0 | 6 |
Table 2. Breakdown of citations received by the 12 papers related to unsuccessful treatment of writer’s block.
| Reference | Type 1 (Replication) | Type 2 (Serious) | Type 3 (Satiric) | Type 4 (Irrelevant) | Type 5 (Wrong Entry) | Type 6 (No Access) |
| --- | --- | --- | --- | --- | --- | --- |
| Upper (1974) [14] | 11 | 11 | 12 | 5 | 3 | 1 |
| Molloy (1983) [19] | 6 | 1 | 1 | 0 | 0 | 0 |
| Hermann (1984) [20] | 3 | 1 | 0 | 0 | 0 | 0 |
| Olson (1984) [21] | 1 | 0 | 0 | 0 | 0 | 0 |
| Skinner et al. (1985) [22] | 2 | 0 | 1 | 0 | 0 | 0 |
| Skinner and Perlini (1996) [23] | 1 | 0 | 0 | 0 | 0 | 0 |
| Didden et al. (2007) [24] | 1 | 1 | 1 | 0 | 0 | 0 |
| McLean and Thomas (2014) [15] | 0 | 3 | 1 | 0 | 0 | 0 |
| Artino Jr (2016) [25] | 1 | 0 | 1 | 0 | 0 | 0 |
| Brodhead et al. (2019) [26] | 0 | 0 | 1 | 0 | 0 | 0 |
| van Compernolle and Leontjev (2020) [27] | 0 | 0 | 0 | 0 | 0 | 0 |
| Ampatzidis (2021) [28] | 0 | 0 | 0 | 0 | 0 | 0 |
Type 1: citations from papers replicating the work by Upper in 1974. Type 2: citations from papers dealing seriously with writer’s block or related psychological concepts. Type 3: citations being satiric about the concepts of “having nothing” or “writing with precision” from papers dealing with unrelated concepts. Type 4: citations deemed erroneous/irrelevant. Type 5: wrong entry by Dimensions/publisher (actually no citation). Type 6: no access to the citing paper.
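The per-type counts in Table 2 can be cross-checked against the total citation counts reported in Table 1. The following is a minimal consistency-check sketch (a hypothetical helper script, not part of the study’s methodology), with both tables transcribed verbatim from the data above:

```python
# Type 1-6 citation counts per paper, transcribed from Table 2.
type_counts = {
    "Upper (1974)": [11, 11, 12, 5, 3, 1],
    "Molloy (1983)": [6, 1, 1, 0, 0, 0],
    "Hermann (1984)": [3, 1, 0, 0, 0, 0],
    "Olson (1984)": [1, 0, 0, 0, 0, 0],
    "Skinner et al. (1985)": [2, 0, 1, 0, 0, 0],
    "Skinner and Perlini (1996)": [1, 0, 0, 0, 0, 0],
    "Didden et al. (2007)": [1, 1, 1, 0, 0, 0],
    "McLean and Thomas (2014)": [0, 3, 1, 0, 0, 0],
    "Artino Jr (2016)": [1, 0, 1, 0, 0, 0],
    "Brodhead et al. (2019)": [0, 0, 1, 0, 0, 0],
    "van Compernolle and Leontjev (2020)": [0, 0, 0, 0, 0, 0],
    "Ampatzidis (2021)": [0, 0, 0, 0, 0, 0],
}

# Total citation counts per paper, transcribed from Table 1.
table1_totals = {
    "Upper (1974)": 43, "Molloy (1983)": 8, "Hermann (1984)": 4,
    "Olson (1984)": 1, "Skinner et al. (1985)": 3,
    "Skinner and Perlini (1996)": 1, "Didden et al. (2007)": 3,
    "McLean and Thomas (2014)": 4, "Artino Jr (2016)": 2,
    "Brodhead et al. (2019)": 1, "van Compernolle and Leontjev (2020)": 0,
    "Ampatzidis (2021)": 0,
}

def check_totals():
    """Return papers whose Table 2 row does not sum to the Table 1 total."""
    return [paper for paper, counts in type_counts.items()
            if sum(counts) != table1_totals[paper]]

print(check_totals())  # an empty list means the two tables agree
```

For every paper the six type counts sum exactly to the Table 1 total (e.g., Upper (1974): 11 + 11 + 12 + 5 + 3 + 1 = 43), confirming that the breakdown is exhaustive and mutually exclusive.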
Table 3. Citations to Upper (1974) from papers dealing seriously with writer’s block or related psychological concepts.
| Citing Sentence | Citing Paper |
| --- | --- |
| “Previous researchers have attempted to develop interventions based on strategies that writers have described as useful in overcoming writer’s block [citation here, 4 other citations]”. | [29] |
| “Most of these solutions have been taken from interviews with successful novelists about their routines and writing habits [citation here, 3 other citations]”. | [29] |
| “Unlike the samples in previous studies, which comprised mainly students [citation here, 2 other citations], the current sample consisted of semiprofessional and professional writers”. | [29] |
| “A famous and funny series of articles around this phenomenon concerns the unsuccessful self-treatment of writer’s block [citation here, 2 other citations]”. | [30] |
| “Writer’s block is a well-known phenomenon recognized by faculty [citation here, 1 other citation]”. | [31] |
| “Frankly, it is easy to catch a bit of writer’s block. Luckily, this has been dealt with in a number of articles throughout the years. Perhaps the most important one is [citation here] ‘Unsuccessful self-treatment of a case of “writer’s block”’. This is a truly unique article”. | [32] |
| “In my twenty-five years of conversations with colleagues on this topic, the best-known of articles follow the same format: They are humorous, consisting of a title about blocking (for example, ‘My latest attempt to overcome my writing block’) followed by a blank page [citation here]”. | [33] |
| “Consider Bergler’s (1950) likening of writing blocks to a rejection of mother’s milk, Ross’s (1985) claim that stymied writers can write while they sleep, or Upper’s (1974) [citation here] widely-known joke (a blank page) about blocking”. | [34] |
| “Because people often study phenomena that have personal meaning and application to themselves, there may be anxious writers studying writing anxiety [citation here, 2 other citations]”. | [35] |
| “So it is that when we discuss writing at all we tend to a nervous kind of humor, the single best-known article on writing in psychology is Upper’s (1974) [citation here] irreverent report of unsuccessfully treating a writing block”. | [36] |
| “Finally, Upper (1974) [citation here] has presented a humorous example of the unsuccessful self-treatment of a case of writer’s block”. | [37] |
| “This technique seemed perfectly suited for me because, like many of my colleagues [citation here] I had completed a number of research projects, but had not prepared them for publication”. | [38] |
| “However, little attention has been given to adults who have the more specific problem of writer’s block, except, for example, in a humorous fashion with an unsuccessful case [citation here]”. | [39] |
| “No doubt, writer’s block has also affected many researchers [citation here]”. | [39] |
| “Thus, a rapid technique (five sessions, including information gathering and an unsuccessful session without contingency-fulfillment) for attenuating the common problem of writer’s block [citation here, 1 other citation] was demonstrated with no evidence of recidivism”. | [39] |
Table 4. Representative example citations to Upper (1974) being satiric about the concepts of “having nothing” or “writing with precision” from papers dealing with unrelated concepts.
| Citing Sentence | Citing Paper |
| --- | --- |
| On “writing with precision”: “While most reports may not be as accurate and succinct as an example from the remote past [citation here], the comments of ‘Reviewer A’ should be our goal: ‘I have not detected a single flaw in either design or writing style’”. | [40] |
| On “nothing”: “The possibility that the Universe is indeed electrically neutral is appealing, perhaps, as it aligns with the hypothesis that the Universe is literally an intricate embodiment of nothing [citation here, 2 other citations]”. | [41] |
Table 5. Citations to Upper (1974) that were deemed irrelevant.
Table 5. Citations to Upper (1974) that were deemed irrelevant.
Citing SentenceCiting Paper
“As the value and reliability of analyzing such a highly altered pedigree is questionable, no segregation analysis was performed [citation]”.[42]
“R-Biopharm: 33/45 (73.3%; 95% CI: 58.4%, 84.2%)].…(citation)”.[43]
“With this they confirmed the concept of gaining insight into an author’s intentions by X-rays [citation]”. (“They” referred to investigators that used micro x-ray fluorescence to examine an old book)[44]
“Very little is known about the transcriptional regulation of gene expression in diatoms (citation)”.[45]
“Since its initial publication, no comprehensive description [citation] of the ViennaRNA Package has appeared”.[46]
Share and Cite

MDPI and ACS Style

Yeung, A.W.K. Is Citation Count a Legitimate Indicator of Scientific Impact? A Case Study of Upper (1974) “The Unsuccessful Self-Treatment of a Case of Writer’s Block” and Its Derivatives. Publications 2024, 12, 35. https://doi.org/10.3390/publications12040035
