{"id":9171,"date":"2021-11-27T10:00:49","date_gmt":"2021-11-27T10:00:49","guid":{"rendered":"http:\/\/TheNextWeb=1374238"},"modified":"2021-11-27T10:00:49","modified_gmt":"2021-11-27T10:00:49","slug":"worried-about-ai-ethics-worry-about-developers-ethics-first","status":"publish","type":"post","link":"https:\/\/www.londonchiropracter.com\/?p=9171","title":{"rendered":"Worried about AI ethics? Worry about developers\u2019 ethics first"},"content":{"rendered":"\n<p>Artificial intelligence is already making decisions in the fields of business, health care and manufacturing. But AI algorithms generally still get help from people applying checks and making the final call.<\/p>\n<p>What would happen if <a href=\"https:\/\/thenextweb.com\/topic\/artificial-intelligence\">AI<\/a> systems had to make independent decisions, and ones that could mean life or death for humans?<\/p>\n<p>Pop culture has long portrayed our general distrust of AI. In the 2004 sci-fi movie <em>I, Robot<\/em>, detective Del Spooner (played by Will Smith) is suspicious of robots after being rescued by one from a car crash, while a 12-year-old girl was left to drown. He <a href=\"https:\/\/www.imdb.com\/title\/tt0343818\/characters\/nm0371671\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">says<\/a>:<\/p>\n<blockquote readability=\"6\">\n<p>I was the logical choice. It calculated that I had a 45% chance of survival. Sarah only had an 11% chance. That was somebody\u2019s baby \u2013 11% is more than enough. A human being would\u2019ve known that.<\/p>\n<\/blockquote>\n<blockquote class=\"twitter-tweet\" data-width=\"500\" data-dnt=\"true\" readability=\"4.3703703703704\">\n<p lang=\"en\" dir=\"ltr\">Asimov&#8217;s Three Laws of Robotics. 
<a href=\"https:\/\/twitter.com\/hashtag\/MSInspire?src=hash&amp;ref_src=twsrc%5Etfw\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">#MSInspire<\/a><a href=\"https:\/\/t.co\/A4FkpBOBYa\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">pic.twitter.com\/A4FkpBOBYa<\/a><\/p>\n<p>\u2014 Richard Hay (@WinObs) <a href=\"https:\/\/twitter.com\/WinObs\/status\/1151548669830860802?ref_src=twsrc%5Etfw\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">July 17, 2019<\/a><\/p>\n<\/blockquote>\n<p>Unlike humans, robots lack a moral conscience and follow the \u201cethics\u201d programmed into them. At the same time, human morality is highly variable. The \u201cright\u201d thing to do in any situation will depend on who you ask.<\/p>\n<p>For machines to help us to their full potential, we need to make sure they <a href=\"https:\/\/www.moralmachine.net\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">behave ethically<\/a>. So the question becomes: how do the ethics of AI developers and engineers influence the decisions made by AI?<\/p>\n<h2>The self-driving future<\/h2>\n<p>Imagine a future with self-driving cars that are fully autonomous. If everything works as intended, the morning commute will be an opportunity to prepare for the day\u2019s meetings, catch up on news, or sit back and relax.<\/p>\n<p>But what if things go wrong? The car approaches a traffic light, but suddenly the brakes fail and the computer has to make a split-second decision. It can swerve into a nearby pole and kill the passenger, or keep going and kill the pedestrian ahead.<\/p>\n<p>The computer controlling the car will only have access to limited information collected through car sensors, and will have to make a decision based on this. 
As dramatic as this may seem, we\u2019re only a few years away from potentially facing such dilemmas.<\/p>\n<p>Autonomous cars will generally provide safer driving, but accidents will be inevitable \u2013 especially in the foreseeable future, when these cars will be sharing the roads with human drivers and other road users.<\/p>\n<p>Tesla <a href=\"https:\/\/techcrunch.com\/2021\/05\/07\/tesla-refutes-elon-musks-timeline-on-full-self-driving\/#\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">does not yet produce<\/a> fully autonomous cars, although it plans to. In collision situations, Tesla cars don\u2019t automatically operate or deactivate the Automatic Emergency Braking (AEB) system if a human driver is in control.<\/p>\n<p>In other words, the driver\u2019s actions are not disrupted \u2013 even if they themselves are causing the collision. Instead, if the <a href=\"https:\/\/www.forbes.com\/sites\/patricklin\/2017\/04\/05\/heres-how-tesla-solves-a-self-driving-crash-dilemma\/?sh=1a3225616813\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">car detects a potential collision<\/a>, it sends alerts to the driver to take action.<\/p>\n<p>In \u201cautopilot\u201d mode, however, the car should automatically brake for pedestrians. Some argue if the car can prevent a collision, then there is a moral obligation for it to override the driver\u2019s actions in every scenario. But would we want an autonomous car to make this decision?<\/p>\n<figure><iframe loading=\"lazy\" src=\"https:\/\/player.vimeo.com\/video\/192179726\" width=\"500\" height=\"281\" frameborder=\"0\" allowfullscreen=\"allowfullscreen\">[embedded content]<\/iframe><\/figure>\n<h2>What\u2019s a life worth?<\/h2>\n<p>What if a car\u2019s computer could evaluate the relative \u201cvalue\u201d of the passenger in its car and of the pedestrian? 
If its decision considered this value, technically it would just be making a cost-benefit analysis.<\/p>\n<p>This may sound alarming, but there are already technologies being developed that could allow for this to happen. For instance, the recently re-branded Meta (formerly Facebook) has highly evolved facial recognition that can easily identify individuals in a scene.<\/p>\n<p>If these data were incorporated into an autonomous vehicle\u2019s AI system, the algorithm could place a dollar value on each life. This possibility was explored in an extensive 2018 study conducted by experts at the Massachusetts Institute of Technology and colleagues.<\/p>\n<p>Through the <a href=\"https:\/\/www.nature.com\/articles\/s41586-018-0637-6\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">Moral Machine<\/a> experiment, researchers posed various self-driving car scenarios that compelled participants to decide whether to kill a homeless pedestrian or an executive pedestrian.<\/p>\n<p>Results revealed participants\u2019 choices depended on the level of economic inequality in their country: the greater the inequality, the more likely participants were to sacrifice the homeless man.<\/p>\n<p>While not quite as evolved, such data aggregation is already in use with China\u2019s <a href=\"https:\/\/www.businessinsider.com.au\/china-social-credit-system-punishments-and-rewards-explained-2018-4\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">social credit<\/a> system, which decides what social entitlements people have.<\/p>\n<p>The health-care industry is another area where we will see AI making decisions that could save or harm humans. 
Experts are increasingly <a href=\"https:\/\/theconversation.com\/ai-could-be-our-radiologists-of-the-future-amid-a-healthcare-staff-crisis-120631\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">developing AI to spot anomalies<\/a> in <a href=\"https:\/\/www.aidoc.com\/blog\/5-ways-ai-can-assist-radiologists\/#\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">medical imaging<\/a>, and to help physicians in prioritizing medical care.<\/p>\n<p>For now, doctors have the final say, but as these technologies become increasingly advanced, what will happen when a doctor and AI algorithm don\u2019t make the same diagnosis?<\/p>\n<p>Another example is an automated medicine reminder system. How should the system react if a patient refuses to take their medication? And how does that affect the patient\u2019s autonomy, and the overall accountability of the system?<\/p>\n<p>AI-powered drones and weaponry are also ethically concerning, as they can make the decision to kill. There are conflicting views on whether such technologies should be completely <a href=\"https:\/\/www.theguardian.com\/news\/2020\/oct\/15\/dangerous-rise-of-military-ai-drone-swarm-autonomous-weapons\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">banned or regulated<\/a>. For example, the use of autonomous drones can be limited to surveillance.<\/p>\n<p>Some have called for military robots to be programmed with ethics. But this raises issues about the programmer\u2019s accountability in the case where a drone kills civilians by mistake.<\/p>\n<h2>Philosophical dilemmas<\/h2>\n<p>There have been many philosophical debates regarding the ethical decisions AI will have to make. 
The classic example of this is the <a href=\"https:\/\/theconversation.com\/the-trolley-dilemma-would-you-kill-one-person-to-save-five-57111\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">trolley problem<\/a>.<\/p>\n<figure><iframe loading=\"lazy\" src=\"https:\/\/www.youtube.com\/embed\/bOpf6KcWYyw\" width=\"500\" height=\"281\" frameborder=\"0\" allowfullscreen=\"allowfullscreen\">[embedded content]<\/iframe><\/figure>\n<p>People often struggle to make decisions that could have a life-changing outcome. 
When evaluating how we react to such situations, one study reported that choices can vary depending on <a href=\"https:\/\/www.technologyreview.com\/2018\/10\/24\/139313\/a-global-ethics-study-aims-to-help-ai-solve-the-self-driving-trolley-problem\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">a range of factors<\/a>, including the respondent\u2019s age, gender and culture.<\/p>\n<p>When it comes to AI systems, the algorithms\u2019 training processes are critical to how they will work in the real world. A system developed in one country can be influenced by the views, politics, ethics and morals of that country, making it unsuitable for use in another place and time.<\/p>\n<p>If the system were controlling aircraft, or guiding a missile, you\u2019d want a high level of confidence it was trained with data that\u2019s representative of the environment it\u2019s being used in.<\/p>\n<p>Examples of failures and bias in technology implementation have included a <a href=\"https:\/\/www.iflscience.com\/technology\/this-racist-soap-dispenser-reveals-why-diversity-in-tech-is-muchneeded\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">racist soap dispenser<\/a> and inappropriate <a href=\"https:\/\/www.theguardian.com\/technology\/2015\/jul\/01\/google-sorry-racist-auto-tag-photo-app\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">automatic image labelling<\/a>.<\/p>\n<p>AI is not \u201cgood\u201d or \u201cevil\u201d. The effects it has on people will depend on the ethics of its developers. 
So to make the most of it, we\u2019ll need to reach a consensus on what we consider \u201cethical\u201d.<\/p>\n<p>While private companies, public organizations and research institutions have their own guidelines for ethical AI, the United Nations has recommended developing what they call \u201c<a href=\"https:\/\/en.unesco.org\/artificial-intelligence\/ethics#recommendation\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">a comprehensive global standard-setting instrument<\/a>\u201d to provide a global ethical AI framework \u2013 and ensure human rights are protected.<!-- Below is The Conversation's page counter tag. Please DO NOT REMOVE. --><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/counter.theconversation.com\/content\/170961\/count.gif?distributor=republish-lightbox-basic\" alt=\"The Conversation\" width=\"1\" height=\"1\" class=\"js-lazy\"><!-- End of code. If you don't see any code above, please get new code from the Advanced tab after you click the republish button. The page counter does not collect any personal data. 
More info: https:\/\/theconversation.com\/republishing-guidelines --><\/p>\n<p><noscript><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/counter.theconversation.com\/content\/170961\/count.gif?distributor=republish-lightbox-basic\" alt=\"The Conversation\" width=\"1\" height=\"1\" class><\/noscript><\/p>\n<p><em>This article by <a href=\"https:\/\/theconversation.com\/profiles\/jumana-abu-khalaf-1206676\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">Jumana Abu-Khalaf<\/a>, Research Fellow in Computing and Security, <a href=\"https:\/\/theconversation.com\/institutions\/edith-cowan-university-720\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">Edith Cowan University<\/a> and <a href=\"https:\/\/theconversation.com\/profiles\/paul-haskell-dowland-382903\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">Paul Haskell-Dowland<\/a>, Professor of Cyber Security Practice, <a href=\"https:\/\/theconversation.com\/institutions\/edith-cowan-university-720\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">Edith Cowan University<\/a>, is republished from <a href=\"https:\/\/theconversation.com\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">The Conversation<\/a> under a Creative Commons license. Read the <a href=\"https:\/\/theconversation.com\/the-self-driving-trolley-problem-how-will-future-ai-systems-make-the-most-ethical-choices-for-all-of-us-170961\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">original article<\/a>.<\/em><\/p>\n<p> <a href=\"https:\/\/thenextweb.com\/news\/worried-about-ai-ethics-worry-about-developers-ethics-first-syndication\">Source<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Artificial intelligence is already making decisions in the fields of business, health care and manufacturing. But AI algorithms generally still get help from people applying checks and making the final call. 
What&#8230;<\/p>\n","protected":false},"author":1,"featured_media":9172,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[1],"tags":[],"_links":{"self":[{"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/posts\/9171"}],"collection":[{"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=9171"}],"version-history":[{"count":0,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/posts\/9171\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/media\/9172"}],"wp:attachment":[{"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=9171"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=9171"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=9171"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}