Healthcare and AI: Can African knowledges help reduce inequities?

5 Feb 2024 | Research Spotlight

In recent years, the use of artificial intelligence (AI) in healthcare has increased dramatically. AI is used to help with various tasks, from recognising medical images and making diagnoses to creating personalised treatment plans based on individual rather than population data.

How does AI Work?

An AI model is a computer programme that analyses a set of data and makes predictions based on patterns it finds. Most AI models are fed large amounts of data to “train” them and improve the accuracy of their predictions.
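To make “finds patterns in data, then predicts” concrete, here is a minimal sketch of one of the simplest possible models: a nearest-neighbour classifier that labels a new patient by the most similar patient in its training data. The features and values are entirely hypothetical, purely for illustration.

```python
def nearest_neighbour_predict(training_data, new_point):
    """Return the label of the training example closest to new_point."""
    def distance(a, b):
        # Straight-line (Euclidean) distance between two feature vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    closest = min(training_data, key=lambda example: distance(example[0], new_point))
    return closest[1]

# Hypothetical training set: (features, label) pairs,
# e.g. (age, resting heart rate) -> "at risk" or "healthy".
training_data = [
    ((25, 60), "healthy"),
    ((30, 65), "healthy"),
    ((60, 90), "at risk"),
    ((65, 95), "at risk"),
]

# A new patient most resembles the "at risk" examples.
print(nearest_neighbour_predict(training_data, (62, 92)))  # -> at risk
```

Real healthcare models are vastly more complex, but the principle is the same: the prediction is only ever a reflection of the patterns present in the training data.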

How can AI Worsen Health Inequalities?

The data used to train AI models tends to be biased. First, by disproportionately reflecting specific demographics (for example, white men are overrepresented, creating the so-called “pale male” data problem). Second, by having certain assumptions or stereotypes about different groups embedded in the data. This means AI models often reproduce and perpetuate the same inequities found in the socio-political and cultural context in which they are developed. In other words: “bias in, bias out”.
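A toy illustration of “bias in, bias out”, with entirely invented numbers: a single-threshold classifier trained on data dominated by group A misclassifies members of group B, whose healthy baseline for this hypothetical biomarker is different.

```python
def fit_threshold(samples):
    """Learn a cut-off halfway between the mean healthy and mean ill value."""
    healthy = [v for v, label in samples if label == "healthy"]
    ill = [v for v, label in samples if label == "ill"]
    return (sum(healthy) / len(healthy) + sum(ill) / len(ill)) / 2

def predict(value, threshold):
    return "ill" if value >= threshold else "healthy"

# Group A: healthy ~50, ill ~80.  Group B: healthy ~65, ill ~95.
# The training set is 90% group A -- the "pale male" data problem in miniature.
training = [(50, "healthy")] * 45 + [(80, "ill")] * 45 \
         + [(65, "healthy")] * 5 + [(95, "ill")] * 5

threshold = fit_threshold(training)  # lands near group A's midpoint (66.5)
print(predict(67, threshold))        # a healthy group-B patient -> "ill"
```

The model is working exactly as designed; it has simply learned the skew in its data, and the underrepresented group pays the price in misdiagnoses.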

How can incorporating African knowledges in the development of healthcare AI help?

  • Diversifying data used to train AI: One way of mitigating bias in healthcare AI is to train AI using data that is more representative of the global population, including data about African people and knowledges.
  • Co-designing AI with communities: Involving African communities in the development of AI by consulting members during the research, design and deployment of AI can help reduce bias. Instead of developing AI solutions and then trying to adapt them to the contexts of rural, resource-scarce or underserved communities, it is crucial to develop AI to meet the specific needs of these communities.
  • Interrogating our assumptions: Much of AI technology is developed using a Eurocentric worldview that centres on Western values such as individualism. In contrast, the values of many communities in Southern Africa (Botswana, South Africa, Namibia, Zambia and Zimbabwe) have long been underpinned by “Ubuntu/botho” – an emphasis on the inherent interconnectedness between a person and their community. To de-centre Western perspectives, we need to ask ourselves: What assumptions are embedded in AI’s mathematical models? In co-designing with, for example, indigenous African communities, might we still be asking members to fit their perspectives into our Western framework? Examining our assumptions can help ensure that in our attempts to make different cultures and ethnicities visible in AI, we do not inadvertently render the worldviews of said cultures invisible.
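The first bullet can be made concrete with a toy single-threshold classifier (every number invented): the same model is fit first on data dominated by one group, then on a balanced set, and the learned cut-off shifts to serve both groups.

```python
def fit_threshold(samples):
    """Learn a cut-off halfway between the mean healthy and mean ill value."""
    healthy = [v for v, label in samples if label == "healthy"]
    ill = [v for v, label in samples if label == "ill"]
    return (sum(healthy) / len(healthy) + sum(ill) / len(ill)) / 2

def predict(value, threshold):
    return "ill" if value >= threshold else "healthy"

# Two hypothetical groups with different healthy/ill biomarker baselines.
skewed = [(50, "healthy")] * 45 + [(80, "ill")] * 45 \
       + [(65, "healthy")] * 5 + [(95, "ill")] * 5     # 90% group A
balanced = [(50, "healthy")] * 25 + [(80, "ill")] * 25 \
         + [(65, "healthy")] * 25 + [(95, "ill")] * 25  # 50/50

skewed_threshold = fit_threshold(skewed)      # 66.5: fits group A only
balanced_threshold = fit_threshold(balanced)  # 72.5: between both baselines

patient = 67  # a healthy group-B patient
print(predict(patient, skewed_threshold))    # -> ill (misdiagnosed)
print(predict(patient, balanced_threshold))  # -> healthy
```

Balanced data alone is not a complete fix for a real system, which is why the co-design and assumption-interrogation points above matter alongside it, but even this toy shows how representativeness directly changes what a model learns.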

What Next?

Reducing inequities in healthcare requires that we use more representative datasets and collaborate with community members in the development of healthcare AI. It is also important to go beyond this and question the assumptions embedded not only in the development of AI but in our attempts to make AI more inclusive. The spirit of “Ubuntu/botho” is just one example of how we can appreciate the different ways that humans have made sense of the world, and that ancient African knowledges have value for contemporary life.

What is the Role of Health Psychology?

Because of their training in research ethics, cognition and human behaviour, health psychologists are well placed to help mitigate these inequities. By collaborating with AI scientists where possible, they can help inform the development of inclusive AI models. Additionally, when using AI for psychological research or interventions, they can remain aware of its shortcomings and more actively include human oversight in the process.

Resources:

Book chapter: The rise of artificial intelligence in healthcare applications

Book chapter: Bidwell, Nicola J., Helen Arnold, Alan F. Blackwell, Charlie Nqeisji, Kun Kunta, and Martin Ujakpa. “AI Design and Everyday Logics in the Kalahari.” In The Routledge Companion to Media Anthropology, pp. 557-569. Routledge, 2022.

Journal article: Invigorating Ubuntu Ethics in AI for healthcare: Enabling equitable care

About the Author:

Karabo Sibasa is a PhD student in Psychology at the University of Manchester, based at the Manchester Centre for Health Psychology. She attended the 4th African Human-Computer Interaction Conference in South Africa in 2023, where she sat down with Professors Nicola Bidwell and Nobert Jere to talk about using AI to improve healthcare and better serve marginalised communities.
