Evaluating Grant Peer Review in the Health Sciences: An Update Study for the Canadian Institutes of Health Research

Grant peer review has long been held up as the gold standard for quality assurance in research funding, despite criticism from both within and outside academia. Critics point in particular to inefficiency and structural flaws that compromise its effectiveness in allocating funding.

In 2009, RAND Europe conducted a literature review evaluating such criticisms but could not find evidence strong enough to draw firm conclusions. The evidence did, however, confirm the high cost of peer review and the variability of review ratings.


In 2016, RAND Europe was asked by the Canadian Institutes of Health Research (CIHR) to update the 2009 study. The aim was to support the CIHR's ongoing review of their own peer review system and provide a more widely applicable source of evidence around the strengths and weaknesses of peer review for grant funding assessment.

The study covered examples of accepted practice in peer review, examining investigator-initiated peer review as well as the indicators and methods that major international funders use to evaluate the effectiveness and burden of their peer review systems.


The new report updates the literature review from 2009, and also provides case studies of current practice across six major international biomedical and health research funders.


Acknowledging that peer review is likely to remain central to how CIHR funds research, the report suggested ways to minimise its flaws in five areas:

Effectiveness - Bias against innovative research is the bias most clearly identified in the literature. One suggested mitigation is to allow individual panel members to rescue innovative applications.

The CIHR has instituted training to raise awareness of commonly identified biases (age and gender) with the intention of reducing unconscious bias. However, the report found no evidence on whether such training is effective.

Burden - Applicant burden needs to be considered alongside reviewer and administrative burden. Because the majority of the burden falls on applicants, it is important that they gain some benefit from unsuccessful applications, an aspect that becomes even more significant as success rates fall. Providing feedback to applicants is one way to do this.

Efficiency - Studies addressing the trade-off between effectiveness and burden are rare. Those that exist suggest that reducing the length of applications and the complexity of the biographical information required has only small effects on funding decisions, and that such reductions would need to be drastic to yield meaningful savings.

Monitoring and evaluation - It remains striking how little robust evidence is available on the effectiveness, burden and efficiency of peer review as a method of grant allocation. This absence of empirical data underlines the importance of a reflective monitoring and evaluation system with benchmarks for reproducibility and consistency, alongside methods for stimulating discussion, such as external observers.

Improving the evidence base - Peer review is a central process in determining the allocation of resources in science. Given this importance, there is a need for better evidence, not only on its overall effectiveness but also to support the design of improved peer review processes.