Article
Peer-Review Record

Ensuring Sustainable Evaluation: How to Improve Quality of Evaluating Grant Proposals?

Sustainability 2021, 13(5), 2842; https://doi.org/10.3390/su13052842
by Grażyna Wieczorkowska and Katarzyna Kowalczyk *
Reviewer 1: Anonymous
Reviewer 2:
Submission received: 1 February 2021 / Revised: 26 February 2021 / Accepted: 1 March 2021 / Published: 5 March 2021
(This article belongs to the Special Issue Innovations Management and Technology for Sustainability)

Round 1

Reviewer 1 Report

The paper is well documented by the authors, with references that support the topic, and the research results are reflected in expressive diagrams accompanied by relevant comments. The aspects and arguments presented in the results section are based on the data obtained from the two studies; to highlight them properly, we recommend a separate conclusions section at the end of the paper (Section 4), which could be further developed by indicating future research directions, opinions supporting the results of the study, and recommendations for those involved in the analyzed process.

Author Response

Dear Reviewer

Thank you very much for your review. Attached you will find our response to your comment.

Author Response File: Author Response.docx

Reviewer 2 Report

I have several concerns regarding your research work that are described as follows:

1.- Introduction: "The following biases may distort the process of evaluation and influence the distribution on public funding". Do you refer to the biases given in the previous sentence?

2.- The introduction should describe the most relevant studies in the literature on the evaluation of grant proposals.

3.- Regarding the "Activating reference pattern" subsection, there are several proposals along the same lines. For instance, in [García, J.A., Rodriguez-Sánchez, R. & Fdez-Valdivia, J. STRATEGY: a tool for the formulation of peer-review strategies. Scientometrics 113, 45–60 (2017)], the peer-review strategy is defined as the smallest set of editorial decisions that optimally guides the other manuscript decisions. The editor-in-chief formulates this review strategy by choosing those strategic editorial decisions optimally and having the other editorial decisions align with them in terms of the quality of the collected reviews. In this process, a quality-assurance editor is in charge of evaluating how well the reviewer reports align with each other across the review process. This policy ensures that all final decisions fit together, since the editors' choices align with the announced strategic decisions. The authors presented an automatic tool, called 'STRATEGY', for the formulation of such peer-review strategies.

4.- Also, regarding rating style, its influence on the assessment has been analyzed in studies directly related to your application. For instance, in [Chamorro-Padial, J., Rodriguez-Sánchez, R., Fdez-Valdivia, J. et al. An evolutionary explanation of assassins and zealots in peer review. Scientometrics 120, 1373–1385 (2019)], the authors explain why assassins and zealots emerge in peer review: it is a consequence of the evolutionary success of reviewers who do not distinguish between acceptable and unacceptable manuscripts.

5.- You say that "For that reason we suggest that in order to minimalize the negative effects of rating style and different comparison pattern – all the projects within one panel should be evaluated by the same set of reviewers." In this case, the adverse selection of reviewers is a problem to be considered. See for instance [García, J.A., Rodriguez-Sánchez, R. and Fdez-Valdivia, J. (2015), Adverse Selection of Reviewers. J Assn Inf Sci Tec, 66: 1252-1262]. Adverse selection occurs when a firm signs a contract with a potential worker whose key skills are still unknown at that time, which can lead the employer to make a wrong decision. In that article, the authors study adverse selection of reviewers when a potential referee, whose ability is his private information, faces a finite sequence of review processes for several scholarly journals, one after the other. The editor's problem is to design a system that guarantees that each manuscript is reviewed by a referee if and only if the reviewer's ability matches the review's complexity. As is typically the case when solving adverse-selection problems in agency theory, the journal editor offers a menu of contracts to the potential referee, from which the reviewer chooses the contract that is best for him given his ability. The optimal contract is the one that provides the right incentives to match the complexity of the review and the ability of the reviewer.
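
To make the contract-menu logic concrete, here is a minimal two-type screening sketch in the standard agency-theory form; the notation (reviewer ability θ, review complexity e, transfer or credit t) is illustrative and not taken from the cited article. The editor proposes a menu {(e_H, t_H), (e_L, t_L)} and each referee self-selects:

\[
\max_{\{(e_H,\,t_H),\,(e_L,\,t_L)\}} \; p_H\,[V(e_H)-t_H] \;+\; p_L\,[V(e_L)-t_L]
\]
\[
\text{s.t.}\quad t_i - c(e_i,\theta_i) \;\ge\; 0 \qquad \text{(participation, IR}_i\text{)}
\]
\[
\quad\;\; t_i - c(e_i,\theta_i) \;\ge\; t_j - c(e_j,\theta_i),\;\; i \ne j \qquad \text{(self-selection, IC}_i\text{)}
\]

With the review cost c increasing in complexity e and decreasing in ability θ, the IC constraints make the high-ability referee choose the complex reviews and the low-ability referee the simple ones: exactly the matching the editor cannot impose directly, because θ is the referee's private information.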

6.- What is the meaning of the following sentence: "All rating are an effect of comparison."? Please clarify this point.

7.- Please rewrite the following paragraph in order to increase its clarity: "All rating are an effect of comparison. With reviewers conducting single or just a few evaluations we can’t control the reference standard which is activated in an individual reviewers mind. We can’t also determine their rating style which might unproportionally affect evaluations. When reviewers conduct multiple evaluations - the reference pattern is adjusted or begins to emerge during subsequent evaluations – so it can be more congruent among experts."
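
The argument in the quoted paragraph can be made concrete: with a single rating per reviewer, a score confounds the proposal's quality with the reviewer's personal baseline, whereas multiple ratings per reviewer allow that baseline to be estimated and removed. A minimal sketch with hypothetical data (per-reviewer z-scoring is one illustrative correction, not the authors' method):

```python
from statistics import mean, stdev

# Hypothetical raw scores: two reviewers rate the same five proposals,
# but with very different personal baselines ("rating styles").
ratings = {
    "lenient_reviewer": [9, 8, 9, 7, 8],
    "harsh_reviewer":   [5, 4, 5, 3, 4],
}

def zscores(xs):
    """Standardize a reviewer's scores against their own mean and spread."""
    m, s = mean(xs), stdev(xs)
    return [round((x - m) / s, 2) for x in xs]

for reviewer, xs in ratings.items():
    print(reviewer, zscores(xs))
# Both reviewers yield identical standardized profiles: the rating
# style is removed and only the relative quality ordering remains.
```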

8.- You say that "However, conducting multiple reviews could pose a certain risk especially if evaluations are conducted in a sequence. This risk is a cognitive bias called a serial position effect." However, there exist other risks in this type of research evaluation, e.g., the problem of adverse selection of reviewers, as described above.

9.- You say that "When evaluating multiple objects, a comparison pattern emerge during the evaluation in the process of calibration." This can be true for mainstream reviewers. However, reviewers may favor a submission, or may find flaws in the methodology, results or discussion and conclude that the manuscript is invalid. In fact, reviewers have been characterized as zealots (referees who may uncritically favor a manuscript), assassins (referees with stringent standards who advise rejection much more frequently than the norm), and mainstream referees (the mean) [see Siegelman, S. S. (1991). Assassins and zealots: Variations in peer review. Special report. Radiology, 178(3), 637–642]. Variations among referees in the perception of manuscript categories (acceptable or unacceptable) would make the review system unfair to authors whose manuscripts happened to be sent to an assassin reviewer; however, uncritical acceptance of a research work would also constitute unfairness. As a result, for assassins and zealots it is not so clear that "when evaluating multiple objects, a comparison pattern emerge during the evaluation in the process of calibration."
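
A toy Monte Carlo sketch illustrates the point; the three response curves below are assumptions chosen for illustration, not parameters from Siegelman (1991). The same pool of manuscripts receives very different verdicts depending on the reviewer type, so calibration across multiple evaluations cannot be taken for granted:

```python
import random

random.seed(42)

# Probability of recommending acceptance as a function of true
# manuscript quality q in [0, 1]. Curves are illustrative assumptions.
def mainstream(q):
    return q                   # verdict tracks quality

def assassin(q):
    return 0.05 + 0.15 * q     # rejects almost everything

def zealot(q):
    return 0.85 + 0.15 * q     # accepts almost everything

def accept_rate(reviewer, qualities, trials=10_000):
    """Monte Carlo estimate of a reviewer's overall acceptance rate."""
    hits = sum(
        random.random() < reviewer(random.choice(qualities))
        for _ in range(trials)
    )
    return hits / trials

pool = [0.2, 0.4, 0.6, 0.8]    # a mix of weak and strong manuscripts
for name, fn in [("mainstream", mainstream), ("assassin", assassin), ("zealot", zealot)]:
    print(f"{name:10s} accept rate: {accept_rate(fn, pool):.2f}")
# mainstream ~0.50, assassin ~0.12, zealot ~0.92: identical manuscripts,
# yet the outcome depends mostly on which reviewer happens to be drawn.
```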

10.- In your experiment, eighty-six management students took on the role of evaluators of abstracts submitted to a conference. So, in the subsequent analysis, have you considered the effect of a possible lack of training of the participants for this complex task? If not, you should state it so that your results can be better understood. Please recall the problem of adverse selection of reviewers.

11.- The following passage seems to be repeated several times across the text: "When applying for a grant, many donors require that the proposal need to demonstrate the positive or neutral impact of the project on sustainable development. To be able to select projects that will ensure sustainability we need to ensure the effective evaluation of the proposals."

12.- Regarding your suggestion “For that reason as a method of minimizing the distorting impact of experts’ rating style we recommend calibrating experts by training them before they start evaluation. One approach would be asking experts to evaluate the best and worst projects from the previous competitions.” What is the basis for this insight? Did you try it with your participants? It is not clear to me.

13.- In the discussion section you say that: “…This effect is especially important in times of overflow in the presence of an uncontrollable flood of information [37], which results in shortening the time to concentrate on a single stimulus, succumbing to the influence of attention-grabbing stimuli, reducing the depth of processing (e.g. scanning text rather than reading it). Our attempt to minimalize this effect by introducing a small break tasks failed.” If this effect comes from the adverse selection problem in peer review, introducing a small break is not going to be a solution, because in that case you need to match the reviewer’s ability and the evaluation’s complexity.

14.- I do not understand the following sentence: "The second method of evaluation – where all the grant proposals are evaluated by the same set of reviewers we see a reduction in the problem of rating bias." So, please rewrite it.

15.- The same happens at the end of the sentence: "This effect is especially important in times of overflow in the presence of an uncontrollable flood of information [37] which results in shortening the time to concentrate on a single stimulus, succumbing to the influence of attention-grabbing stimuli, reducing the depth of processing (e.g. scanning text rather than reading it); increased susceptibility to cognitive errors." So, what is the meaning of "increased susceptibility to cognitive errors"?


Author Response

Dear Reviewer

Thank you very much for your review. Attached you will find our response to your comments.

Author Response File: Author Response.docx

Round 2

Reviewer 2 Report

The authors have done a good job: the results are interesting and well presented.
