Culturally Appropriate Care in the Age of AI

Commentary

May 24, 2024

An elderly woman tests a robot companion in New York, June 7, 2022

Photo by ElliQ/Cover Images via Reuters Connect

The rise of generative AI has been described as a technological revolution on a scale similar to the introduction of mobile and cloud computing. Alongside wider technological developments, such as sensor technologies that can support people with care needs, it has the potential to significantly change social care. What does this mean for culturally sensitive social care?

Generative AI, meaning artificial intelligence that can create new data, poses distinct challenges when used in adult social care. In some countries, this technology is already used to create personalised care plans. Such plans not only take into account individual needs and preferences, but can also produce predictive assessments about, for instance, the risk of falls or further health deterioration. AI technology has been used to match carers to adults needing social care in culturally tailored ways, and generative AI already plans and allocates resources in some care settings. Other, similar uses are also being considered, such as providing mental health support for both patients and caregivers through chatbots.

Culturally appropriate care, or culturally competent care, is sensitive to the cultural identity and heritage of those who are cared for, and has been linked to improved health outcomes for patients. It includes paying particular attention to people's beliefs, including their understanding of care based on, for example, ethnicity, nationality, religion, sexuality, or gender identity. While culture is not easy to define, it can be understood as a framework of meaning that enables us to make sense of the world and to structure our lives. The importance of culturally appropriate social care was underscored during the COVID-19 pandemic, when many people receiving social care became detached from their cultural or religious communities, finding it harder to engage with people and events of cultural significance to them.


There have been notable efforts to incorporate technology designed to be culturally sensitive, such as robots, into care homes. These robots are designed to assist older people in culturally tailored ways, starting conversations about topics that would likely be of interest, playing preferred music or audiobooks, and even aiding with prayers by reminding the person of relevant rituals and the location of required prayer objects.
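As a loose illustration, the sketch below (written in Python) shows how such tailoring might be driven by a simple preference profile. All field names and example values are invented for this sketch and do not describe any particular care robot's design.

    from datetime import time

    # Illustrative cultural-preference profile. Every field name and value
    # here is an invented assumption, not any real care robot's data model.
    profile = {
        "conversation_topics": ["cricket", "1960s Lahore"],
        "preferred_audio": "Urdu ghazals",
        "prayer_times": [time(5, 30), time(13, 0), time(16, 30), time(18, 45), time(20, 0)],
        "prayer_objects": "prayer mat in the bedside cabinet",
    }

    def daily_prompts(profile: dict) -> list[str]:
        """Turn a preference profile into a day's culturally tailored prompts."""
        prompts = [f"Shall we talk about {topic}?" for topic in profile["conversation_topics"]]
        prompts.append(f"Would you like to listen to {profile['preferred_audio']}?")
        for t in profile["prayer_times"]:
            prompts.append(f"{t:%H:%M}: prayer reminder ({profile['prayer_objects']}).")
        return prompts

    for p in daily_prompts(profile):
        print(p)

Even in this toy form, the design choice is visible: the culturally specific behaviour lives in an editable profile rather than in the robot's code, so carers and families can adjust it as preferences change.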

At its core, culturally competent care is attuned to reducing health disparities that are rooted in racial, ethnic, socio-economic, and religious differences. Given that generative AI is already used in some adult social care settings for tasks such as writing meeting notes and creating care plans, it is important to consider the ethical and social ramifications of these new approaches, as well as how these technologies can help reduce some of the pressures on adult social care.

The question of how to regulate generative AI systems in adult social care is receiving increasing attention. For example, February 2024 saw a series of roundtables on AI in adult social care at the University of Oxford. There are particular areas of concern for cultural competence, one of which is accounting for the malleable nature not only of generative AI technology but also of culture itself, which is complex and ever evolving. This may require involving relevant experts and communities in the development and continual assessment of AI tools for culturally appropriate adult social care.

Another important issue for culturally appropriate social care is mitigating the effects of the well-documented overt and covert biases in the large language models underpinning generative AI. Such biases can negatively affect care, potentially leading to the care needs of particular groups being dismissed or to culturally inappropriate care suggestions. As some authors have argued, bias should be considered at all stages of AI tool conceptualisation, design, development, validation, access, and monitoring. This consideration should cover not only biases related to algorithm design and the historical data underpinning it, but also diversity and inclusion in the development and validation stages. While these topics have long been under scrutiny in medicine and health care, more research on bias is needed in the field of adult social care to enable appropriate and culturally sensitive regulatory approaches.
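To make the concern concrete, the following sketch (in Python) probes for one narrow form of bias: whether a generative tool produces noticeably different care plans when only a cultural attribute changes and the stated needs stay identical. The generate_care_plan function is a hypothetical stub standing in for whatever model a provider actually uses, and the attribute values and 0.8 similarity threshold are illustrative assumptions.

    from itertools import combinations

    def generate_care_plan(profile: dict) -> str:
        # Hypothetical stub: a real system would call a generative AI model
        # here. The stub returns identical text, so nothing is flagged below.
        return f"Daily mobility support and meal assistance for {profile['name']}."

    def token_overlap(a: str, b: str) -> float:
        """Jaccard similarity between the word sets of two plans."""
        sa, sb = set(a.lower().split()), set(b.lower().split())
        return len(sa & sb) / len(sa | sb)

    # Identical care needs; only the cultural attribute is varied.
    base = {"name": "resident", "age": 81, "needs": ["mobility", "meals"]}
    groups = ("White British", "British Pakistani", "Black Caribbean")
    plans = {g: generate_care_plan(dict(base, ethnicity=g)) for g in groups}

    # Flag pairs of plans that diverge sharply despite identical stated
    # needs, a possible sign the model treats groups differently.
    for (g1, p1), (g2, p2) in combinations(plans.items(), 2):
        score = token_overlap(p1, p2)
        if score < 0.8:
            print(f"Review needed: plans for {g1} and {g2} overlap only {score:.0%}")

A counterfactual audit of this kind is no substitute for the staged assessment described above, but it suggests how routine bias checks could be embedded in care-planning workflows.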

When considering the introduction of AI into adult social care settings, reflecting on the following questions may help to ensure cultural competence:

  • How is generative AI used to determine what constitutes a culturally competent care plan for a person in adult social care?
  • In what ways do pre-existing biases in the large language models (LLMs) underpinning generative AI tools impact the care plans that are being developed?
  • How are social care teams and informal carers prepared to employ and manage these technologies in culturally sensitive ways?
  • How is personal data being used to support culturally sensitive social care?
  • Are people in social care and their families aware of how generative AI is being utilised, including how data about their cultural identity may be used within and by AI tools?
  • What impact might different socio-cultural groups' levels of trust in technological innovation have on people's approaches to integrating generative AI into adult social care?


A key question in regulating AI applications in general, and in the context of social care in particular, is how the systems handle, store, and use sensitive information, bearing in mind that what different cultures consider sensitive may vary. When generative AI supports the creation of care plans, AI tools have access to sensitive personal data, such as race, ethnicity, or nationality, which may later be used for purposes beyond the original intent, such as further machine learning. Clear regulatory and ethical guidelines could help mitigate some of the risks of using generative AI tools to provide safe and culturally competent social care. Simultaneously, they could contribute towards increasing trust in such technologies by reassuring members of all cultural communities, ethnic minorities, and religious groups that their interests are protected.
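In practice, such guidelines might translate into purpose limitation and data minimisation before any record reaches a generative AI tool. The sketch below (in Python) is a minimal illustration under assumed field names and an assumed consent model; it does not describe any specific system.

    # Minimal sketch of data minimisation for AI-assisted care planning.
    # The allow-list, sensitive-field list, and consent model are all
    # illustrative assumptions, not a reference to any real system.

    ALLOWED_FOR_AI = {"care_needs", "mobility", "dietary_requirements"}
    SENSITIVE = {"ethnicity", "religion", "nationality", "sexuality"}

    def minimise_record(record: dict, consented_fields: set[str]) -> dict:
        """Release only purpose-relevant fields; sensitive cultural
        attributes pass through only with explicit consent."""
        shared = {}
        for field, value in record.items():
            if field in ALLOWED_FOR_AI:
                shared[field] = value
            elif field in SENSITIVE and field in consented_fields:
                shared[field] = value  # consented, purpose-specific disclosure
        return shared

    record = {
        "name": "A. Resident",
        "ethnicity": "British Pakistani",
        "religion": "Muslim",
        "care_needs": ["mobility support"],
        "dietary_requirements": ["halal"],
    }

    # Only care needs, dietary requirements, and the consented religious
    # attribute reach the AI tool; the name and ethnicity are withheld.
    print(minimise_record(record, consented_fields={"religion"}))

The design point is that sensitive cultural attributes are withheld by default and disclosed only for a declared purpose with explicit consent, one concrete way of operationalising the reassurance described above.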

While generative AI holds promise for supporting adult social care, including assisting in administrative and decisionmaking processes, questions about AI biases, governance, and oversight, and the changing natures of both culture and technology need to be addressed to ensure the provision of culturally competent social care. Attending to such concerns will be necessary to use generative AI in adult social care safely.


Zuzanna Marciniak-Nuqui is an analyst at RAND Europe. The author would like to thank Sonja Marjanovic, Sarah Parkinson, and Stephanie Stockwell for providing comments on drafts of this commentary.