Research involves experimentation and exploring the unknown. When it comes to choosing what to support, however, more than 95 per cent of academic biomedical research funding is controlled by the same system—peer review of grant applications. Despite significant criticism, peer review is endorsed by all major funders and generally cited as the gold standard for awarding funding.
Experimenting with peer review is challenging for funders, as the experience of the Canadian Institutes of Health Research shows. Last summer, the CIHR made changes to its peer-review system, in particular increasing the use of virtual discussion forums.
The system had some teething problems, including difficulty finding enough reviewers with the right qualifications. These were exacerbated by parallel structural changes to the CIHR's funding schemes and, crucially, a decrease in overall funding. It was a perfect storm: more than 1,000 researchers signed a protest letter to the health minister, who intervened to try to resolve the issue.
In response, the CIHR convened an international review panel to advise on the best approaches to awarding grants. RAND Europe was commissioned to review the empirical evidence on the effectiveness and burden of peer review to support the panel's deliberations.
Our most startling finding was the dearth of evidence on the effectiveness of peer review, especially given its importance in the research system. Moreover, the evidence that does exist is not reassuring.
There is evidence that peer review is vulnerable to cronyism and that it stifles innovation. Review scores are highly variable, suggesting a lack of reliability in the process and making it difficult to judge whether scores predict the eventual success of the research. The lack of alternative funding approaches against which to judge the effectiveness of peer review is also part of the problem.
What evidence there is suggests that peer review is most effective when used conservatively. It is better at identifying applications that meet a minimum threshold for funding than at picking out a tiny number of stellar applications from a wider pool of good ones. Therefore, as success rates fall, as has been happening at funders worldwide, peer review is pushed further out of its comfort zone. Alongside the growing number of disappointed applicants, this fuels dissatisfaction among researchers.
The evidence also shows that, despite a rising volume of complaints about the workload of review panels, the burden of peer review falls largely on applicants. This, too, is increasingly concerning in light of falling success rates.
Some pioneering studies have convened dummy panels to re-review real applications. These showed that application length can be halved with little effect on decision-making.
A triage stage in the peer-review process, identifying the most promising applications from dramatically shorter proposals, is therefore likely to reduce the application burden. When we say “dramatically”, we mean it: there is evidence that making applications only slightly shorter does not reduce the burden, presumably because applicants spend more time on each remaining sentence.
The international panel has now released its report, which makes a number of recommendations to the CIHR. It recommends a peer-review process that incorporates triage, with improved feedback for applicants who pass this stage, alongside raising the calibre and training of the administrators who run the process, to improve reviewer selection and feedback. To address concerns about parochialism, the panel suggests a rotating panel system with more international members.
Critically, the panel also backs the idea of experimentation, recommending that the CIHR continue to develop novel methods such as virtual panel meetings and virtual reviewing. At the same time, it suggests that researchers must be more willing to accept such experimentation, to improve our understanding of peer-review processes.
Experiments could be designed both to trial new ideas in practice and to test what really works. Are shorter applications less burdensome, or do they simply lead to a greater number of more polished proposals? Are the applications rejected at triage demonstrably worse than those retained?
Ultimately, it is vital to be as reflective and sceptical about peer review, the central process by which funding is allocated and scientific resources are directed, as we are about the subject of the research itself.
Steven Wooding is a visiting research fellow at the University of Cambridge and a former RAND Europe researcher. Susan Guthrie is a research leader at RAND Europe. They led the study ‘What do we know about grant peer review in the health sciences?’, which will be published later this year.
This commentary originally appeared on Research Fortnight on April 23, 2017.