Editorial

Who Is (Likely) Peer-Reviewing Your Papers? A Partial Insight into the World’s Top Reviewers

School of Engineering and the Built Environment, Edinburgh Napier University, 10 Colinton Road, Edinburgh EH10 5DT, UK
* Author to whom correspondence should be addressed.
Publications 2019, 7(1), 15; https://doi.org/10.3390/publications7010015
Submission received: 11 December 2018 / Revised: 7 February 2019 / Accepted: 27 February 2019 / Published: 4 March 2019

Abstract

Scientific publishing is experiencing unprecedented growth in terms of outputs across all fields. Inevitably, this creates pressure throughout the system on a number of entities. One key element is represented by peer-reviewers, whose demand increases at an even higher pace than that of publications, since more than one reviewer per paper is needed and not all papers that get reviewed get published. The relatively recent Publons platform allows unprecedented insight into the usual ‘blindness’ of the peer-review system. At a time when the world’s top peer-reviewers are announced and celebrated, we have taken a step back in order to attempt a partial mapping of their profiles and to identify trends and key dimensions of this community of ‘super-reviewers’. This commentary necessarily focuses on a limited sample, due to the manual processing of data, which needs to be completed within a single day for the type of information we seek. Comparing the number of reviews performed against academic citations, our analysis suggests that most reviews are carried out by relatively inexperienced academics. For some of these early career academics, peer-reviewing seems to be the only activity they engage with, given the high number of reviews performed (e.g., three manuscripts per day) and the lack of outputs (zero academic papers and citations in some cases). Additionally, the world’s top researchers (i.e., highly-cited researchers) are understandably busy with research activities and therefore far less active in peer-reviewing. Lastly, there seems to be an uneven distribution at a national level between scientific outputs (e.g., publications) and reviews performed. Our analysis contributes to the ongoing global discourse on the health of scientific peer-review, and it raises some important questions for further discussion.

Commentary

Science is a complex and crowded system [1], and academic peer-review is currently at the heart of it. “It is the method by which grants are allocated, papers published, academics promoted, and Nobel prizes won”, wrote Richard Smith [2], a former editor and chief executive of the BMJ. Cole and Simon [3] looked at research grants and found that funding depends to a significant extent on chance. This was somewhat contradicted by Li and Agha [4], who established that peer-review does select the most promising proposals, i.e., those yielding more papers, patents, and citations.
Enthusiasts and critics of peer-review have fought their corners for decades, and there are certainly good arguments on either side [5]. Peer-review has, however, undoubtedly failed at times, sometimes spectacularly, to identify good science, rejecting papers that eventually turned into Nobel prizes [6]. Grant-reviewing and paper-reviewing are nonetheless quite different, at least in the sense that the former varies greatly according to guidelines set by national research councils and scientific foundations, while the latter follows to a great extent a similar pattern across the globe.
We want to reflect on paper-peer-reviewing, which Smith [2] likened to “democracy, poetry, love, and justice” for the impossibility of defining it in operational terms. Bohannon [7] exposed, with a remarkable experiment, the little or no scrutiny applied by many academic journals, while Ioannidis [8] argued that most published research findings are false. These might be seen as extreme cases, yet they do reveal some critical issues with the peer-reviewing system. Bunner and Larson [9] surveyed the quality of the peer-review process in a specific journal and concluded, perhaps unsurprisingly, that authors of accepted manuscripts were more likely to rate the peer-reviews they received positively, while editorial board members had a somewhat fairer and less biased judgement of what makes a good quality review.
Here, however, we are interested not in peer-reviewing but in the peer-reviewers. Scrutiny of peer-reviewers is not new: Evans et al. [10] looked into the characteristics of peer-reviewers who produce good quality reviews in a specific journal, and Cabezas De Fierro et al. [11] examined about 300 referee reports from three journals, concluding that neither time nor length was related to review quality. Our own interest, however, was sparked by a relatively recent platform which opened a door onto the usual (single/double/triple1) blindness of peer-review: Publons.
We should probably start by saying that we love Publons. It is a wonderful initiative that makes it possible to track and highlight the enormous effort that academics put into paper-peer-reviewing. The whole system would collapse if we stopped doing it, and since it is a voluntary, vastly unpaid, spare-time-consuming activity for most of us, it is great to have a platform that keeps track of the work done.
What we did not feel was a necessary side of Publons is the umpteenth metric/ranking system that comes with it. In addition to citations across a number of platforms (Scopus, Scholar, ResearchGate, etc.) and quantitative metrics of our academic stature (h-index, i10-index, RG score, etc.), we must now also worry about our reviewing metrics and merits. Publons indeed offers reviewers merit points and badges: yet another collection of excellence that we should start, get good at, and then show off (Publons specifically encourages its use for promotion purposes). So much so that at the time of writing this piece, the top of the homepage reads: “Were you named in Publons’ global Peer Review Awards? Find out—and see the world’s top peer reviewers…” And so we did.
To curious researchers like ourselves, Publons represents an unmissable and unprecedented opportunity to see who actually does peer-review and to understand in greater detail their profiles, the credentials they hold for such a vital job in the advancement (and gate-keeping!) of science, and the time likely spent peer-reviewing compared to the myriad other academic activities we must all engage with. A disclaimer we feel is necessary at this point: we do not aim to shame any one reviewer, nor to question the moral integrity of the majority of the peer-reviewing community. We do believe, however, that the publishing/peer-reviewing academic system is so inflated that it will blow up sooner or later, and we wish to play our little part in helping identify the issues and hopefully rectify its current trajectory.
Our analysis is limited to the top 250 reviewers on Publons at the time of writing. We acknowledge that (1) this is a small sample of the Publons community and (2) an even smaller sample of the peer-reviewing community. Yet, we believe it is a sufficiently broad sample to identify trends within that sub-community (Publons’ reviewers) and raise questions relevant to the broader community (peer-review in academia). The limit of 250 is due to the manual work required to track Publons’ top reviewers on Scopus, in order to match their reviewer and academic profiles; since citations and reviews change daily, 250 profiles was the maximum we managed to process in a single working day (18 October 2018).
In total, we analysed 46,079 reviews, which works out at an average of about 185 reviews per reviewer over the previous 12 months. The maths is quite straightforward: that is a paper every other day, all year round, with no holiday whatsoever. In our own experience (openly available on Publons, of course), peer-reviewing a paper is a time-consuming activity whose overall length varies greatly depending on (1) the scientific quality and content of the paper, (2) its proximity to the reviewer’s own field of expertise, (3) the presence of supplementary material to be reviewed, and (4) the clarity of the paper (both in terms of language and structure).
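The back-of-the-envelope arithmetic, using only the figures above, runs as follows:

```latex
% Average reviewing load in the 250-strong sample (12 months to 18 October 2018)
\frac{46{,}079 \ \text{reviews}}{250 \ \text{reviewers}} \approx 184.3 \ \frac{\text{reviews}}{\text{reviewer} \cdot \text{year}}
\qquad \Rightarrow \qquad
\frac{365 \ \text{days}}{184.3 \ \text{reviews}} \approx 2.0 \ \frac{\text{days}}{\text{review}}
```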
After discussing our experience with colleagues from other universities in different countries and fields, we agreed that a good average of the lower-bound estimates for peer-reviewing a scientific paper would be four hours. This includes: reading the abstract before accepting the invitation; reading the paper; doing some background reading for those subjects which might not fall into our immediate area of expertise; identifying the revisions needed and writing them up; transferring this information onto the online editorial system—which is often quite unfriendly and duplicates the information going to the authors and that going instead to the (handling) editor(s)—and double-checking what we wrote to ensure it is clear enough to be understood by editors and prospective authors.
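Scaling this lower bound to the sample average computed above gives a sense of the implied annual workload (our extrapolation, not a figure reported by Publons):

```latex
% Annual reviewing workload implied by the 4 h lower-bound estimate
184.3 \ \frac{\text{reviews}}{\text{year}} \times 4 \ \frac{\text{h}}{\text{review}} \approx 737 \ \frac{\text{h}}{\text{year}}
```

That is roughly 40% of a nominal 1840-hour working year (46 weeks at 40 h per week, again our assumption).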
All this considered, we tried to imagine a typical day of a Publons top reviewer. One remarkable profile high in the list has totalled over 800 reviews in the last 12 months. Assuming that this person takes only two weeks’ holiday over the course of a year, their weekly review rate works out at an impressive 18 papers per week. Using data from Publons’ punchcards (i.e., the number of reviews mapped against days of the week), it turns out that on a typical weekday this person reviews about three papers. It might not sound like a lot but, honestly, would you be able to squeeze that into your daily academic routine and feel you are serving the journal, the authors, the editors, and the wider community well?
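For transparency, the arithmetic behind these figures (assuming the ‘over 800’ reviews total roughly 900, and that the punchcard spreads reviewing over six days a week; both are our assumptions):

```latex
% Weekly and per-day rates for the profile discussed (~900 reviews assumed)
\frac{900 \ \text{reviews}}{50 \ \text{working weeks}} = 18 \ \frac{\text{reviews}}{\text{week}},
\qquad
\frac{18 \ \text{reviews/week}}{6 \ \text{reviewing days/week}} = 3 \ \frac{\text{reviews}}{\text{day}}
```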
Publons also shows each reviewer’s average words per review. The case above is a rather prolific reviewer, not just in the number of papers they review but also in the extensive information provided, which averages about 2400 words per review. We had a quick look into typewriting speeds, and it seems that for a proficient typist (and this reviewer certainly is one) an average speed is about 45 words per minute. Their average review, therefore, turns out to require an astonishing 53 min of pure typing. This does not include the actual reading of the paper, understanding it, formulating critical thoughts, deciding how best to write the review, reading it through before submitting, or correcting mistakes and typos (just pure typing time). Add whatever you think these remaining activities are worth and multiply by three papers per day. What is left of a working day to actually be an academic and not just a peer-reviewer? By our own slow standards, we would spend over 12 h a day doing peer-review at such rates. Our research would soon be dead, in turn putting us in a weaker position as reviewers who are no longer researchers.
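The typing-time estimate, spelled out (45 words per minute is the assumed proficient speed, as above):

```latex
% Pure typing time per review, and per day at three reviews a day
\frac{2400 \ \text{words/review}}{45 \ \text{words/minute}} \approx 53.3 \ \frac{\text{min}}{\text{review}},
\qquad
3 \ \frac{\text{reviews}}{\text{day}} \times 53.3 \ \text{min} \approx 2.7 \ \frac{\text{h}}{\text{day}} \ \text{of typing alone}
```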
Of course, this is an outlier in the peer-review community, but it is nonetheless a rewarded outlier, and a risky role model that others might be encouraged to follow. Unfortunately, they are not alone, and the daily and weekly review numbers simply do not add up once one takes into account the busy life of a fully-fledged academic (teaching, meeting students, writing grants, writing papers, revising papers, reviewing papers, supervising PhD students, networking, serving on scientific committees, serving on a number of institutional committees, doing consultancy work, engaging with the public, reviewing grants, etc.) and then allows some spare time for biological necessities and some basic form of a social life.
But then we thought: these are the world’s top peer-reviewers, who are probably able to review a paper at a glance thanks to their academic standing. They all score in the top 1% in their respective fields (i.e., pretty much all fields of scientific enquiry). We had, therefore, imagined that these people must also be world-leading researchers in order to be so fast and effective at such a delicate job; after all, understanding a paper takes time! It turns out the reality is quite different, as Figure 1a shows (and indeed only one of the top 250 reviewers is also a 2017 Highly Cited Researcher [13]; well done to this person!).
With the double caveat of (1) this not being a scientific paper and (2) the limited sample analysed, it can still be seen from Figure 1a that data points are far denser towards low citation counts and high numbers of reviews (as high as the equivalent of a paper a day). We found this quite puzzling; how can someone still likely to be in the early stages of their career (signalled by a potentially growing, but still limited, number of citations and academic outputs) take on so many reviews? And be so effective and fast with them or, tertium non datur, devote so much time to them? As only one of the global top reviewers was also a highly cited researcher, we had to come up with an alternative clustering criterion. Fully arbitrarily, we took 999 citations as a proxy threshold for a lowly-cited researcher. Incidentally, none of us has that many citations yet, so we are labelling ourselves as lowly-cited! Similarly, we took 5000 citations as a threshold signalling some form of high citation count. We do acknowledge that citations vary hugely across fields, career stages, research maturity, etc., but our clustering has the sole purpose of identifying trends.
With this in mind, we found that lowly-cited researchers dominate the Top 250 in both numbers (118, i.e., 47.2% of the sample) and reviews (22,439, equal to 48.7% of the total reviews analysed; see Figure 1b). Significantly, 49 of the 118 lowly-cited reviewers were in the Top 100; of these, 24 had fewer than 99 citations, and seven had zero citations and zero published outputs. The latter have yet to prove that they can get published themselves and that their work has some relevance to their peers, yet they are entrusted with deciding whether or not others should be published. World-leaders (i.e., highly-cited scholars), on the other hand, carried out a mere 14.8% of the reviews analysed, and there were only 35 of them in the selected sample (14%). The similarity of the numbers in the two clusters (14% of the highly-cited reviewers doing 14.8% of the reviews, and 47.2% of the lowly-cited doing 48.7%) suggests that the world-leaders are not any better or any faster than their early career counterparts. Is this realistic?
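For transparency, here is a minimal sketch in Python of the clustering and share computation described above. The `profiles` list and its two example entries are hypothetical; in our case, the input was the 250 manually collected (citations, reviews) pairs:

```python
# Illustrative sketch of the citation-based clustering used above.
# `profiles` stands in for the 250 manually collected profiles; the two
# entries below are hypothetical and shown only to make the code runnable.
profiles = [
    {"citations": 120, "reviews": 310},   # hypothetical lowly-cited reviewer
    {"citations": 7200, "reviews": 95},   # hypothetical highly-cited reviewer
    # ... the remaining profiles would follow ...
]

def cluster(citations: int) -> str:
    """Assign a citation cluster using the (fully arbitrary) thresholds
    from the commentary: <999 lowly-cited, >5000 highly-cited."""
    if citations < 999:
        return "lowly-cited"
    if citations > 5000:
        return "highly-cited"
    return "mid-range"

# Count reviewers and sum reviews per cluster, then report each cluster's
# share of the sample and of the total reviews analysed.
counts: dict[str, int] = {}
review_totals: dict[str, int] = {}
for p in profiles:
    c = cluster(p["citations"])
    counts[c] = counts.get(c, 0) + 1
    review_totals[c] = review_totals.get(c, 0) + p["reviews"]

total_reviews = sum(review_totals.values())
for c in sorted(counts):
    print(f"{c}: {counts[c] / len(profiles):.1%} of reviewers, "
          f"{review_totals[c] / total_reviews:.1%} of reviews")
```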
Lastly, we tried to map where the world’s top reviewers are based and the reviewer/review intensity of the different countries, shown in Figure 2a,b respectively. The usual suspects emerged right at the top: the USA, China, and India hosted the most reviewers, who in turn did most of the reviews. It is interesting to see that the whole of Latin America, home to 652 million people, had nearly no reviewers in the Top 250 (with the exception of one from Brazil and one from Mexico). The vast majority of African countries were also underrepresented, apart from Egypt, which showed a surprising total of 13 reviewers in the Top 250 (making Egypt fourth in terms of both reviews and reviewers in the sample we analysed). Other countries that scored in the Top 10 for reviewers and reviews were Greece, Iran, Italy, Portugal, Turkey, Malaysia, and the UK.
It is also worth noting a mismatch between countries’ research outputs and their supply of top reviewers, as shown in Table 1. Interestingly, countries traditionally strong in academic outputs (i.e., Japan, Brazil, and Germany) did not show a similar profile when it came to global top reviewers. Significantly, not a single German scholar featured in the Top 250 global reviewers that we analysed.
We have already acknowledged that this is not a scientific paper, and these should therefore not be read as scientific findings. However, some trends did emerge from our analysis: developing countries are almost absent from the global top reviewers’ list; there is a mismatch between the national demand for and supply of reviewers2; lowly-cited academics do the most reviews, at rates equal to or faster than those of highly-cited academics; and the number of reviews carried out by single individuals is unrealistic. Beyond these notional elements, a number of questions remain in our heads.
  • Is it healthy for the advancement of science that academics review three or more papers per week?
  • Is it worth rewarding this frenzy of peer-reviewing more and more papers, and trying to excel in yet another metric imposed on us?
  • Is there a need to vet the profiles of the world’s top reviewers on platforms like Publons, and the journals for which they review, to avoid the top places in the list being populated by inexperienced academics reviewing for predatory journals?
  • Can one still be considered an academic if their full-time job is reviewing other people’s papers?
  • Is the current peer-reviewing system best suited to meet the future challenges of academic publishing, given the impressive annual growth rates in the number of papers produced?
We don’t have any answers, but we felt the questions were worth asking.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Azoulay, P.; Graff-Zivin, J.; Uzzi, B.; Wang, D.; Williams, H.; Evans, J.A.; Jin, G.Z.; Lu, S.F.; Jones, B.F.; Börner, K. Toward a more scientific science. Science 2018, 361, 1194–1197. [Google Scholar] [CrossRef] [PubMed]
  2. Smith, R. Peer review: A flawed process at the heart of science and journals. J. R. Soc. Med. 2006, 99, 178–182. [Google Scholar] [CrossRef] [PubMed]
  3. Cole, S.; Simon, G.A. Chance and consensus in peer review. Science 1981, 214, 881–886. [Google Scholar] [CrossRef] [PubMed]
  4. Li, D.; Agha, L. Big names or big ideas: Do peer-review panels select the best science proposals? Science 2015, 348, 434–438. [Google Scholar] [CrossRef] [PubMed]
  5. Bornmann, L. Scientific peer review. Annu. Rev. Inf. Sci. Technol. 2011, 45, 197–245. [Google Scholar] [CrossRef]
  6. Nielsen, M. Three Myths about Scientific Peer Review. Michael Nielsen Blog, 8 January 2009. Available online: http://michaelnielsen.org/blog/three-myths-about-scientific-peer-review/ (accessed on 18 November 2018).
  7. Bohannon, J. Who’s Afraid of Peer Review? Science 2013, 342, 60–65. [Google Scholar]
  8. Ioannidis, J.P. Why most published research findings are false. PLoS Med. 2005, 2, e124. [Google Scholar] [CrossRef] [PubMed]
  9. Bunner, C.; Larson, E.L. Assessing the quality of the peer review process: Author and editorial board member perspectives. Am. J. Infect. Control 2012, 40, 701–704. [Google Scholar] [CrossRef] [PubMed]
  10. Evans, A.T.; McNutt, R.A.; Fletcher, S.W.; Fletcher, R.H. The characteristics of peer reviewers who produce good-quality reviews. J. Gen. Intern. Med. 1993, 8, 422–428. [Google Scholar] [CrossRef] [PubMed]
  11. Cabezas De Fierro, P.; Meruane, O.S.; Espinoza, G.V.; Herrera, V.G. Peering into peer review: Good quality reviews of research articles require neither writing too much nor taking too long. Transinformação 2018, 30, 209–218. [Google Scholar] [CrossRef]
  12. Publons|Clarivate Analytics. Global State of Peer Review. 2018. Available online: https://publons.com/community/gspr/ (accessed on 18 November 2018).
  13. Clarivate Analytics|Web of Science. Look up to the Brightest Stars Introducing 2017’s Highly Cited Researchers. 2018. Available online: https://hcr.clarivate.com/wp-content/uploads/2017/11/2017-Highly-Cited-Researchers-Report-1.pdf (accessed on 18 November 2018).
Notes
1. For a definition of the different blindness levels, the reader is referred to: “Global State of Peer Review—2018”, Publons—Clarivate Analytics [12].
2. This aspect of global peer review is comprehensively dealt with in [12].
Figure 1. (a) Verified reviews in the last 12 months vs. academic citations for the sample analysed (as of 18 October 2018). * Citations have been sourced exclusively from Scopus through the open researcher and contributor ID (ORCID) on the Publons profiles. In those few cases where the ORCID did not return a positive match on Scopus, we have searched for outputs in ORCID and in turn searched for those outputs on Scopus to identify the academic and gather their citations. (b) Contribution to the sample of reviews analysed according to citation cluster. Lowly-cited reviewers are those with <999 citations, and highly-cited those with >5000 citations.
Figure 2. (a) Global distribution of the 250-strong sample of top reviewers. (b) Global distribution of the 46,079-strong sample of ‘top’ reviews.
Table 1. Ranking of countries per research outputs and corresponding match in terms of reviews and reviewers in the sample analysed. * Research outputs ranking has been retrieved from Reference [12].
Country     | Publication Outputs * | Reviews | Reviewers
USA         | 1st                   | 1st     | 1st
China       | 2nd                   | 3rd     | 3rd
India       | 3rd                   | 2nd     | 2nd
UK          | 4th                   | 8th     | 9th
Japan       | 5th                   | 30th    | 29th
Iran        | 6th                   | 9th     | 6th
Brazil      | 7th                   | 52nd    | 35th
Australia   | 8th                   | 12th    | 12th
Germany     | 9th                   | N/A     | N/A
Canada      | 10th                  | 14th    | 13th
