Funders Using Bibliometrics Should Keep It Simple

commentary

(Research Fortnight)


by Salil Gunashekar and Sue Guthrie

September 15, 2017

Experience shows that avoiding information overload is crucial if measures are to inform grant decisions, say Salil Gunashekar and Sue Guthrie.

Assessment panels make extensive use of bibliometrics as an evaluation tool to assist their decisions on research funding. In the UK, the Research Excellence Framework uses bibliometric data to inform its assessments. And the National Institute for Health Research (NIHR) has employed bibliometric analysis alongside wider information in several awarding panels for major funding schemes.

What is less clear, however, is just how these panels use bibliometric information, and what impact it has on the decision-making process. This lack of understanding may be one reason for the scepticism and cynicism about the value of metrics among some researchers.

A recently published study by RAND Europe examines this issue, looking at the way assessment panels use bibliometrics. The research is based on interviews with 10 panel members from three NIHR funding programmes: the Senior Investigators competition; the Collaborations for Leadership in Applied Health Research and Care; and the Biomedical Research Centres.

RAND Europe has provided bibliometric information and advice to these schemes' grant-making panels for almost 10 years. Combining this experience with the interviews has enabled us to recommend better ways to use such information to support funding decisions.

One of the risks of using bibliometrics is information overload. We found that, although panel members broadly support the use of bibliometrics as an assessment tool, few had any experience of them before or outside their role as grant reviewers. This means that many do not fully understand the techniques involved and can become overwhelmed by too much data.

One crucial issue, therefore, is how data are presented. Visualisations can help, and it is useful to present data in a few different ways, such as in tables and graphs, since different people find different ways of presenting information more intuitive. Concise, quick-reference guides on how to interpret bibliometric analysis could also assist panel members. And training on how to use bibliometrics can support and inform decision making.

Another way to reduce the risk of information overload is to focus on metrics that panel members find most useful and in which they have the most faith. Normalised citation scores and numbers of highly cited papers—for example, those in the top 5 per cent or 10 per cent for the field—were typically considered most useful. Data provided should be limited to these few reliable and robust metrics.
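To make these two metrics concrete, here is a minimal sketch of how they are typically computed. The data, baseline value, and function names are illustrative assumptions for this example, not figures from the RAND study:

```python
# Hedged sketch of two common bibliometrics mentioned above.
# A normalised citation score divides a paper's citations by the average
# citations of papers in the same field and publication year; a score above
# 1.0 indicates above-average impact. "Highly cited" papers are those in the
# top share (e.g. 10 per cent) of scores. All numbers here are made up.

def normalised_citation_score(citations, field_baseline):
    """Citations divided by the field/year average (the baseline)."""
    return citations / field_baseline

def highly_cited(scores, top_share=0.10):
    """Flag papers whose score falls within the top `top_share` of the list."""
    cutoff_index = max(0, int(len(scores) * top_share) - 1)
    cutoff = sorted(scores, reverse=True)[cutoff_index]
    return [s >= cutoff for s in scores]

papers = [12, 3, 45, 7, 20]   # raw citation counts (illustrative)
baseline = 10.0               # mean citations for the field and year (assumed)
scores = [normalised_citation_score(c, baseline) for c in papers]
print(scores)                 # → [1.2, 0.3, 4.5, 0.7, 2.0]
print(highly_cited(scores))   # → [False, False, True, False, False]
```

In practice funders use database-specific baselines (for example, field and year averages from Web of Science or Scopus) rather than a single hand-set value, but the arithmetic is as simple as shown here, which is part of why panels find these metrics easy to trust.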

Timing is another issue for grant-review panels. We found that bibliometrics matter where evaluation begins, not where it ends: reviewers found them most useful early in the assessment process, when the data primarily inform each reviewer's initial individual assessment of candidates for research funding, before the panel convenes. Such information should therefore be provided well in advance of the panel's final meeting, to help inform those individual evaluations.

A longer timescale also gives panel members the chance to challenge the bibliometric results, both during the individual assessment period and at the selection-panel meeting. Reviewers value having an opportunity to talk to the bibliometric experts, seeing it as a helpful way to address any concerns about the validity of the information provided and how it should be interpreted.

When using bibliometrics, it's important to address and clarify potential biases. One concern is whether the bibliometric 'scores' of applicants in different fields of research can be reliably compared with one another. Analyses can also introduce biases related to gender and career stage, disadvantaging women and early-career researchers. To enable the data to be used appropriately, and to be credible to panel members, it is crucial to highlight the caveats and drawbacks of particular bibliometric analyses when presenting them to the panel.

Finally, bibliometric analysis should not be used in isolation, especially on major grants; one interviewee described it as "part of the sauce, not the meat on the plate". Any analysis should complement wider evaluation criteria. Most panel members felt it was a useful tool for decision making, alongside other sources of information.

To get the most out of bibliometrics, panel members must be given clear, timely information, with appropriate caveats and explanations. With the right guidance, panels can use these data to make better-informed decisions on research grants.


Salil Gunashekar is a senior analyst and Sue Guthrie is a research leader at RAND Europe.

This commentary originally appeared on Research Fortnight on September 6, 2017. Commentary gives RAND researchers a platform to convey insights based on their professional expertise and often on their peer-reviewed research and analysis.