Better evaluations to support the needs of older people in the UK


Researchers explored how to improve the evaluation and evaluability of services for older people in the UK and made several recommendations for different stakeholder groups, including commissioners, evaluators, service providers, and national policymakers in NGOs and government.


Transforming health and care for older people is complex and demanding. Evaluating such efforts requires a range of approaches and ideas, and a reflection on ways to improve how evaluations are commissioned, completed and used in a changing policy landscape.


RAND Europe was first commissioned by Age UK to consider how evaluations of integrated care programmes could be carried out. The unpublished work produced from this project supported Age UK in defining its plans for future evaluations of its Integrated Care Programme. The study also highlighted wider problems with how evaluations of services for older people were being conducted.

In light of this, RAND Europe was commissioned by Age UK to conduct a second study to address the problem of how to improve the evaluation and evaluability of services for older people.


Three workshops with different stakeholder groups were convened to discuss areas for improvement in how evaluations of initiatives in older care are commissioned, undertaken, reported and utilised.


The key areas to focus on for improvement were:

  • Relevance: evaluations were often not focused on major challenges; consequently, findings were ‘unsurprising’ or even trivial.
  • Timeliness: evaluation findings were often not available when decisions had to be made.
  • Replication and lack of cumulative building of knowledge: evaluations were often designed as stand-alone pieces rather than building on previous evaluations and contributing to future evaluations.
  • Reluctance to share knowledge: competition, a lack of clarity in communication, and poor knowledge management/mobilisation led to a reluctance to share information, pool data, or draw on the evaluations of potential competitors.
  • Efficiency: evaluations often failed to use routine data and ignored the costs imposed on service providers and users.


The workshops generated a number of key recommendations for different groups working in relevant areas:

Commissioners from multiple organisations should collaborate routinely to identify key cross-cutting challenges, jointly fund evaluations addressing common concerns, engage the evaluation community earlier when developing an evaluation specification, and ensure budgets and timelines match the scale of the question(s) asked.

Evaluators should engage earlier with commissioners and strike a better balance between competitive and collaborative behaviours in order to inform commissioners, develop standardised data sets, and make better use of routine data.

Service providers can be a great source of tacit knowledge, but they often experience evaluations as something to be endured rather than as something to shape. Providers should work with evaluators to develop better routine monitoring and data collection.

National policymakers in NGOs and government should support the conditions under which the ‘tragedy of the commons’ can be avoided. This can be done by: identifying and communicating priorities for evaluation research locally and nationally; monitoring the evaluation landscape; encouraging and rewarding organisations for engaging with national priorities; and helping to nourish the ‘commons’ through sharing and collaborating while also ensuring adequate competition and variety.