Measuring Up: How to Ensure Peer Review for Grant Applications Remains Up to the Mark


Jul 26, 2019

Photo by jacoblund/Getty Images

This commentary originally appeared on Research Fortnight on July 24, 2019.

Expert peer review is considered the gold standard for assessing the validity, significance and originality of research. When it comes to grant applications, however, peer review is not without its shortcomings.

In a recent assessment of the grant application process, RAND Europe identified several flaws in the use of peer review for determining funding, including bias, burden and conservatism.


Because it relies on individual judgment, peer review is inherently subjective and at risk of bias. One expert's great science might be another's mediocrity. Bias can surface in a variety of ways—around the career stage and characteristics of the applicant, for example, the research field or institution, or the characteristics of the reviewer.

Peer review is also time-consuming. The burden falls mostly on those applying for the grants, with one study estimating that grant preparation and review can cost somewhere between a fifth and a third of the size of the total fund (PDF).

Decision by committee can result in a conservative approach. There is evidence that peer review can stifle innovation because of risk-avoidance by both applicants and reviewers.

Despite these increasingly well-known weaknesses, very few funders have experimented with potential solutions. One exception is the Australian National Health and Medical Research Council (NHMRC).

The council has restructured its grant programme with the aim of encouraging creativity and innovation, minimising the burden on applicants and reviewers, and providing opportunities for researchers at all career stages.

Grants have been restructured into four funding streams: investigator grants for individuals; synergy grants for multidisciplinary teams; ideas grants for innovative and novel research; and strategic and leveraging grants, primarily targeted calls. Each has different weights assigned to its selection criteria: track record, for example, is more significant for synergy grants than for ideas grants. The number of grants an individual can hold simultaneously is also capped, with the intention of reducing applications.

To help assess whether these changes will achieve NHMRC's aims, we conducted an international literature review on studies of bias, burden and conservatism in the grant funding process. The existing literature is not extensive, and measuring any of these elements is not straightforward. Still, there are lessons to be learned.

Gender, for instance, is one of the most studied areas of bias. One study analysing linguistic patterns in reviewer critiques found that male investigators were described in terms of leadership and personal achievement as “leaders,” “pioneers,” and producers of “highly significant research,” whereas women were described in terms of their working environments and “expertise”.

There is strong evidence of falling success rates, increasing the average amount of work that goes into each successful proposal. Studies suggest that the majority of the burden of application—around 75 per cent—falls on applicants. We also found evidence that peer review processes are conservative, and that low success rates may exacerbate this.

Measuring—or even defining—what constitutes innovative research can be challenging. One interesting possibility is to monitor disagreement among peer reviewers. New ideas are unlikely to hold universal appeal, so controversy might be an indicator of innovation.

Based on our literature review, as well as interviews and an assessment of NHMRC data, we developed 43 metrics to evaluate whether the council's new grant programme is effective. Some of these metrics are specific to a type of grant, but most are broadly applicable, such as the mean and median number of hours spent by researchers preparing and reviewing grants.

This framework will help the council assess whether its aims are being met, and give it confidence in the work that it funds. Other funders around the world could adapt it to assess bias, burden and conservatism in their own grant review programmes, and measure the impact of any changes made.

Peer review remains the dominant way to assess research grant applications, but it is hard to measure its effectiveness. Without comparable alternatives, funders need to experiment with the grant process itself, and evaluate the effects of changing it.

The growing evidence showing some of the challenges and shortcomings of peer review has helped start to build acceptance that change is needed—even if it may not be welcomed by all who work in research. Addressing some of the challenges that peer review poses could ensure that the best research receives the financial support it deserves.

Daniela Rodriguez-Rincon is an analyst and Susan Guthrie is a research leader in the innovation, health and science group at the non-profit, non-partisan research organisation RAND Europe.