
Want to develop ethical AI? Then we need more African voices

Posted on November 27, 2021 by admin

Artificial intelligence (AI) was once the stuff of science fiction. But it’s becoming widespread. It is used in mobile phone technology and motor vehicles. It powers tools for agriculture and healthcare.

But concerns have emerged about the accountability of AI and related technologies like machine learning. In December 2020 a computer scientist, Timnit Gebru, was fired from Google’s Ethical AI team. She had previously raised the alarm about the social effects of bias in AI technologies. For instance, in a 2018 paper Gebru and another researcher, Joy Buolamwini, had shown how facial recognition software was less accurate in identifying women and people of color than white men. Biases in training data can have far-reaching and unintended effects.

There is already a substantial body of research about ethics in AI. This highlights the importance of principles to ensure technologies do not simply worsen biases or even introduce new social harms. As the UNESCO draft recommendation on the ethics of AI states:

We need international and national policies and regulatory frameworks to ensure that these emerging technologies benefit humanity as a whole.

In recent years, many frameworks and guidelines have been created that identify objectives and priorities for ethical AI.

This is certainly a step in the right direction. But it’s also critical to look beyond technical solutions when addressing issues of bias or inclusivity. Biases can enter at the level of who frames the objectives and balances the priorities.

In a recent paper, we argue that inclusivity and diversity also need to be at the level of identifying values and defining frameworks of what counts as ethical AI in the first place. This is especially pertinent when considering the growth of AI research and machine learning across the African continent.

Context

Research and development of AI and machine learning technologies are growing in African countries. Programs such as Data Science Africa, Data Science Nigeria, and the Deep Learning Indaba with its satellite IndabaX events, which have so far been held in 27 different African countries, illustrate the interest and human investment in the fields.

The potential of AI and related technologies to promote opportunities for growth, development, and democratization in Africa is a key driver of this research.

Yet very few African voices have so far been involved in the international ethical frameworks that aim to guide the research. This might not be a problem if the principles and values in those frameworks have universal application. But it’s not clear that they do.

For instance, the European AI4People framework offers a synthesis of six other ethical frameworks. It identifies respect for autonomy as one of its key principles. This principle has been criticized within the applied ethical field of bioethics, where it is seen as failing to do justice to the communitarian values common across Africa. These focus less on the individual and more on the community, and may even require that exceptions be made to such a principle to allow for effective interventions.

Challenges like these – or even acknowledgment that there could be such challenges – are largely absent from the discussions and frameworks for ethical AI.

Just as training data can entrench existing inequalities and injustices, so can a failure to recognize that sets of values may vary across social, cultural, and political contexts.

Unusable results

In addition, failing to take into account social, cultural, and political contexts can mean that even a seemingly perfect ethical technical solution can be ineffective or misguided once implemented.

For machine learning to be effective at making useful predictions, any learning system needs access to training data. This consists of samples of the data of interest: inputs in the form of multiple features or measurements, and outputs, which are the labels scientists want to predict. In most cases, both the features and the labels require human knowledge of the problem. But a failure to correctly account for the local context can result in underperforming systems.
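
To make the roles of features and labels concrete, here is a minimal supervised-learning sketch in Python, assuming scikit-learn is available; the numbers and the two-feature setup are hypothetical, chosen only to show where human judgment enters the pipeline.

```python
# Minimal supervised-learning sketch: features (inputs) and labels (outputs).
# Hypothetical data; assumes scikit-learn is installed.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is one sample; each column is a human-chosen feature
# (measurements someone decided were relevant to the problem).
X_train = np.array([
    [0.2, 1.5],
    [0.4, 1.1],
    [0.9, 0.3],
    [1.1, 0.2],
])

# Labels are also human-assigned: what counts as class 0 vs class 1
# is itself a judgment made by whoever curated the data.
y_train = np.array([0, 0, 1, 1])

model = LogisticRegression()
model.fit(X_train, y_train)

# Predictions inherit whatever the training samples did (or didn't) cover.
print(model.predict(np.array([[0.3, 1.2]])))  # likely predicts class 0
```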

For example, mobile phone call records have been used to estimate population sizes before and after disasters. However, vulnerable populations are less likely to have access to mobile devices. So, this kind of approach could yield results that aren’t useful.
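
A toy calculation, with entirely made-up numbers, illustrates the point: if subscriber counts are scaled up by a single average ownership rate, districts where phone access is low are systematically undercounted.

```python
# Toy illustration of coverage bias: estimating district populations
# by scaling mobile subscriber counts by one national ownership rate.
# All figures below are invented for illustration.
national_ownership = 0.75  # assumed average phone-ownership rate

districts = {
    # name: (true population, actual local ownership rate)
    "urban_a": (100_000, 0.90),
    "rural_b": (100_000, 0.40),  # vulnerable group, low phone access
}

for name, (true_pop, local_rate) in districts.items():
    observed_subscribers = true_pop * local_rate
    estimate = observed_subscribers / national_ownership
    print(f"{name}: true={true_pop:,}, estimated={estimate:,.0f}")

# urban_a is overestimated (~120,000) while rural_b is badly
# undercounted (~53,333): the population most in need of aid
# looks smallest in the data.
```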

Similarly, computer vision technologies for identifying different kinds of structures in an area will likely underperform where different construction materials are used. In both of these cases, as we and other colleagues discuss in another recent paper, not accounting for regional differences may have profound effects on anything from the delivery of disaster aid, to the performance of autonomous systems.
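
One way such regional gaps stay hidden is that performance is reported as a single aggregate number. The sketch below, using invented results for a hypothetical structure-detection model, shows how disaggregating by region reveals a failure the overall accuracy conceals.

```python
# Aggregate accuracy can hide subgroup failures. Hypothetical
# per-sample results for a structure-detection model, tagged by region.
from collections import defaultdict

results = [
    ("region_brick", True), ("region_brick", True), ("region_brick", True),
    ("region_brick", True), ("region_mud", True), ("region_mud", False),
    ("region_mud", False), ("region_mud", False),
]

totals, hits = defaultdict(int), defaultdict(int)
for region, correct in results:
    totals[region] += 1
    hits[region] += correct  # bool counts as 0 or 1

overall = sum(hits.values()) / len(results)
print(f"overall accuracy: {overall:.0%}")  # one number: ~62%
for region in totals:
    print(f"{region}: {hits[region] / totals[region]:.0%}")

# Disaggregating shows 100% on brick structures vs 25% on mud ones:
# exactly the kind of regional gap a single aggregate score conceals.
```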

Going forward

AI technologies must not simply worsen or incorporate the problematic aspects of current human societies.

Being sensitive to and inclusive of different contexts is vital for designing effective technical solutions. It is equally important not to assume that values are universal. Those developing AI need to start including people of different backgrounds: not just in the technical aspects of designing data sets and the like, but also in defining the values used to frame and set objectives and priorities.

This article by Mary Carman, Lecturer in Philosophy, University of the Witwatersrand and Benjamin Rosman, Associate Professor in the School of Computer Science and Applied Mathematics, University of the Witwatersrand, is republished from The Conversation under a Creative Commons license. Read the original article.
