{"id":9177,"date":"2021-11-27T14:00:56","date_gmt":"2021-11-27T14:00:56","guid":{"rendered":"http:\/\/TheNextWeb=1374337"},"modified":"2021-11-27T14:00:56","modified_gmt":"2021-11-27T14:00:56","slug":"want-to-develop-ethical-ai-then-we-need-more-african-voices","status":"publish","type":"post","link":"https:\/\/www.londonchiropracter.com\/?p=9177","title":{"rendered":"Want to develop ethical AI? Then we need more African voices"},"content":{"rendered":"\n<p>Artificial intelligence (<a href=\"https:\/\/thenextweb.com\/topic\/artificial-intelligence\">AI<\/a>) was once the stuff of science fiction. But it\u2019s becoming widespread. It is used in <a href=\"https:\/\/www.lifewire.com\/mobile-technology-ai-in-phones-4584792\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">mobile phone technology<\/a> and <a href=\"https:\/\/builtin.com\/artificial-intelligence\/artificial-intelligence-automotive-industry\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">motor vehicles<\/a>. It powers tools for <a href=\"https:\/\/www.forbes.com\/sites\/cognitiveworld\/2019\/07\/05\/how-ai-is-transforming-agriculture\/?sh=3e1838924ad1\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">agriculture<\/a> and <a href=\"https:\/\/theconversation.com\/africas-health-systems-should-use-ai-technology-in-their-fight-against-covid-19-135862\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">healthcare<\/a>.<\/p>\n<p>But concerns have emerged about the accountability of AI and related technologies like machine learning. In December 2020 a computer scientist, Timnit Gebru, <a href=\"https:\/\/www.wired.com\/story\/google-timnit-gebru-ai-what-really-happened\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">was fired<\/a> from Google\u2019s Ethical AI team. She had previously raised the alarm about the social effects of bias in AI technologies. 
For instance, in a <a href=\"http:\/\/proceedings.mlr.press\/v81\/buolamwini18a.html\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">2018 paper<\/a> Gebru and another researcher, Joy Buolamwini, had shown how facial recognition software was less accurate in identifying women and people of color than white men. Biases in training <a href=\"https:\/\/thenextweb.com\/topic\/data\">data<\/a> can have far-reaching and unintended effects.<\/p>\n<p>There is already a substantial body of research about ethics in AI. This highlights the importance of principles to ensure technologies do not simply worsen biases or even introduce new social harms. As the <a href=\"https:\/\/en.unesco.org\/artificial-intelligence\/ethics#drafttext\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">UNESCO draft recommendation on the ethics of AI<\/a> states:<\/p>\n<blockquote readability=\"6\">\n<p>We need international and national policies and regulatory frameworks to ensure that these emerging technologies benefit humanity as a whole.<\/p>\n<\/blockquote>\n<p>In recent years, many <a href=\"https:\/\/futureoflife.org\/ai-principles\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">frameworks<\/a> and <a href=\"https:\/\/standards.ieee.org\/content\/dam\/ieee-standards\/standards\/web\/documents\/other\/ead1e.pdf\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">guidelines<\/a> have been created that identify objectives and priorities for ethical AI.<\/p>\n<p>This is certainly a step in the right direction. But it\u2019s also critical to <a href=\"https:\/\/www.cell.com\/patterns\/fulltext\/S2666-3899(21)00015-5?_returnURL=https%3A%2F%2Flinkinghub.elsevier.com%2Fretrieve%2Fpii%2FS2666389921000155%3Fshowall%3Dtrue\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">look beyond<\/a> technical solutions when addressing issues of bias or inclusivity. 
Biases can enter at the level of who frames the objectives and balances the priorities.<\/p>\n<p>In a <a href=\"https:\/\/link.springer.com\/article\/10.1007\/s10676-020-09534-2\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">recent paper<\/a>, we argue that inclusivity and diversity also need to be at the level of identifying values and defining frameworks of what counts as ethical AI in the first place. This is especially pertinent when considering the growth of AI research and machine learning across the African continent.<\/p>\n<h2>Context<\/h2>\n<p>Research and development of AI and machine learning technologies are growing in African countries. Programs such as <a href=\"http:\/\/www.datascienceafrica.org\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">Data Science Africa<\/a>, <a href=\"https:\/\/www.datasciencenigeria.org\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">Data Science Nigeria<\/a>, and the <a href=\"https:\/\/deeplearningindaba.com\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">Deep Learning Indaba<\/a> with its <a href=\"https:\/\/deeplearningindaba.com\/2021\/indabax\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">satellite IndabaX events<\/a>, which have so far been held in 27 different African countries, illustrate the interest and human investment in the fields.<\/p>\n<p>The potential of AI and related technologies to promote opportunities for <a href=\"https:\/\/info.microsoft.com\/ME-DIGTRNS-WBNR-FY19-11Nov-02-AIinAfrica-MGC0003244_01Registration-ForminBody.html\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">growth, development, and democratization in Africa<\/a> is a key driver of this research.<\/p>\n<p>Yet very few African voices have so far been involved in the international ethical frameworks that aim to guide the research. This might not be a problem if the principles and values in those frameworks have universal application. 
But it\u2019s not clear that they do.<\/p>\n<p>For instance, the <a href=\"https:\/\/link.springer.com\/article\/10.1007\/s11023-018-9482-5\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">European AI4People framework<\/a> offers a synthesis of six other ethical frameworks. It identifies respect for autonomy as one of its key principles. This principle has been <a href=\"https:\/\/onlinelibrary.wiley.com\/doi\/abs\/10.1111\/dewb.12145\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">criticized<\/a> within the applied ethical field of bioethics. It is seen as <a href=\"https:\/\/www.tandfonline.com\/doi\/abs\/10.1080\/02580136.2016.1223983\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">failing to do justice to the communitarian values<\/a> common across Africa. These focus less on the individual and more on community, even <a href=\"https:\/\/bmcmedethics.biomedcentral.com\/articles\/10.1186\/1472-6939-8-10\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">requiring that exceptions<\/a> to such a principle be made to allow for effective interventions.<\/p>\n<p>Challenges like these \u2013 or even acknowledgment that there could be such challenges \u2013 are largely absent from the discussions and frameworks for ethical AI.<\/p>\n<p>Just as training data can entrench existing inequalities and injustices, so can failing to recognize the possibility of diverse sets of values that vary across social, cultural, and political contexts.<\/p>\n<h2>Unusable results<\/h2>\n<p>In addition, failing to take into account social, cultural, and political contexts can mean that even a seemingly perfect <a href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3287560.3287598\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">ethical technical solution can be ineffective or misguided once implemented<\/a>.<\/p>\n<p>For machine learning to be effective at making useful predictions, any learning system needs access to training data. 
This involves samples of the data of interest: inputs in the form of multiple features or measurements, and outputs, which are the labels scientists want to predict. In most cases, both these features and labels require human knowledge of the problem. But a failure to correctly account for the local context could result in underperforming systems.<\/p>\n<p>For example, mobile phone call records have <a href=\"https:\/\/www.mdpi.com\/2076-3263\/8\/5\/165\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">been used<\/a> to estimate population sizes before and after disasters. However, vulnerable populations are less likely to have access to mobile devices. So this kind of approach <a href=\"https:\/\/elibrary.worldbank.org\/doi\/10.1093\/wber\/lhz039\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">could yield results that aren\u2019t useful<\/a>.<\/p>\n<p>Similarly, computer vision technologies for identifying different kinds of structures in an area will likely underperform where different construction materials are used. In both of these cases, as we and other colleagues discuss in <a href=\"https:\/\/www.cell.com\/patterns\/fulltext\/S2666-3899(21)00225-7?_returnURL=https%3A%2F%2Flinkinghub.elsevier.com%2Fretrieve%2Fpii%2FS2666389921002257%3Fshowall%3Dtrue\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">another recent paper<\/a>, not accounting for regional differences may have profound effects on anything from the delivery of disaster aid to the performance of autonomous systems.<\/p>\n<h2>Going forward<\/h2>\n<p>AI technologies must not simply worsen or incorporate the problematic aspects of current human societies.<\/p>\n<p>Being sensitive to and inclusive of different contexts is vital for designing effective technical solutions. It is equally important not to assume that values are universal. 
Those developing AI need to start including people of different backgrounds: not just in the technical aspects of designing data sets and the like, but also in defining the values that can be called upon to frame and set objectives and priorities.<img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/counter.theconversation.com\/content\/171837\/count.gif?distributor=republish-lightbox-basic\" alt=\"The Conversation\" width=\"1\" height=\"1\" class=\"js-lazy\"><\/p>\n<p><em>This article by <a href=\"https:\/\/theconversation.com\/profiles\/mary-carman-1290812\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">Mary Carman<\/a>, Lecturer in Philosophy, <a href=\"https:\/\/theconversation.com\/institutions\/university-of-the-witwatersrand-894\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">University of the Witwatersrand<\/a>, and <a href=\"https:\/\/theconversation.com\/profiles\/benjamin-rosman-1224003\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">Benjamin Rosman<\/a>, Associate Professor in the School of Computer Science and Applied Mathematics, <a href=\"https:\/\/theconversation.com\/institutions\/university-of-the-witwatersrand-894\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">University of the Witwatersrand<\/a>, is republished from <a href=\"https:\/\/theconversation.com\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">The Conversation<\/a> under a Creative 
Commons license. Read the <a href=\"https:\/\/theconversation.com\/defining-whats-ethical-in-artificial-intelligence-needs-input-from-africans-171837\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">original article<\/a>.<\/em><\/p>\n<p> <a href=\"https:\/\/thenextweb.com\/news\/defining-ethical-artificial-intelligence-input-africans-syndication\">Source<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Artificial intelligence (AI) was once the stuff of science fiction. But it\u2019s becoming widespread. It is used in mobile phone technology and motor vehicles. It powers tools for agriculture and healthcare. But&#8230;<\/p>\n","protected":false},"author":1,"featured_media":9178,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[1],"tags":[],"_links":{"self":[{"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/posts\/9177"}],"collection":[{"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=9177"}],"version-history":[{"count":0,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/posts\/9177\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/media\/9178"}],"wp:attachment":[{"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=9177"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=9177"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www
.londonchiropracter.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=9177"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}