{"id":8619,"date":"2021-10-27T18:07:19","date_gmt":"2021-10-27T18:07:19","guid":{"rendered":"http:\/\/TheNextWeb=1371273"},"modified":"2021-10-27T18:07:19","modified_gmt":"2021-10-27T18:07:19","slug":"a-beginners-guide-to-ai-ethics","status":"publish","type":"post","link":"https:\/\/www.londonchiropracter.com\/?p=8619","title":{"rendered":"A beginner\u2019s guide to AI: Ethics"},"content":{"rendered":"\n<div><img decoding=\"async\" src=\"https:\/\/img-cdn.tnwcdn.com\/image\/neural?filter_last=1&amp;fit=1280%2C640&amp;url=https%3A%2F%2Fcdn0.tnwcdn.com%2Fwp-content%2Fblogs.dir%2F1%2Ffiles%2F2021%2F10%2Fpichaiethics.jpg&amp;signature=8c3d968a9a275c2f483385e3d5fac783\" class=\"ff-og-image-inserted\"><\/div>\n<p><em>Welcome to Neural\u2019s beginner\u2019s guide to AI. This multi-part feature should provide you with a very basic understanding of what AI is, what it can do, and how it works. The guide contains articles on (in order published)&nbsp;<a href=\"https:\/\/thenextweb.com\/artificial-intelligence\/2018\/07\/03\/a-beginners-guide-to-ai-neural-networks\/\">neural networks<\/a>,<span>&nbsp;<\/span><a href=\"https:\/\/thenextweb.com\/artificial-intelligence\/2018\/07\/18\/a-beginners-guide-to-ai-computer-vision-and-image-recognition\/\">computer vision<\/a>,<span>&nbsp;<\/span><a href=\"https:\/\/thenextweb.com\/artificial-intelligence\/2018\/07\/25\/a-beginners-guide-to-ai-natural-language-processing\/\">natural language processing<\/a>,<span>&nbsp;<\/span><a href=\"https:\/\/thenextweb.com\/artificial-intelligence\/2018\/08\/02\/a-beginners-guide-to-ai-algorithms\/\">algorithms<\/a>,<span>&nbsp;<\/span><a href=\"https:\/\/thenextweb.com\/artificial-intelligence\/2018\/11\/16\/a-beginners-guide-to-ai-human-level-machine-intelligence\/\">artificial general intelligence<\/a>,<span>&nbsp;<\/span><a href=\"https:\/\/thenextweb.com\/neural\/2020\/08\/10\/a-beginners-guide-to-ai-the-difference-between-video-game-ai-and-real-ai\/\">the difference between video game AI 
and real AI<\/a>, and <a href=\"https:\/\/thenextweb.com\/news\/a-beginners-guide-to-ai-the-difference-between-human-and-machine-intelligence\">the difference between human and machine intelligence<\/a>.<\/em><\/p>\n<p>The discourse surrounding artificial intelligence ethics is wide, varied, and completely out of control.<\/p>\n<p>Those debating technology ethics tend to be the people with the most at stake financially \u2013 politicians, big tech developers, and researchers from major universities.<\/p>\n<p>It can be difficult to gauge their motivations when the biggest argument against deploying dangerous AI systems without consideration for the potential harm they can do typically boils down to: \u201c<a href=\"https:\/\/www.ncbi.nlm.nih.gov\/books\/NBK189657\/#:~:text=Economic%20regulation%20tends%20to%20stifle%20market%20innovation.&amp;text=Regulation%20that%20does%20not%20require,to%20escape%20the%20regulatory%20constraints.\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">regulation might stifle innovation<\/a>.\u201d<\/p>\n<p>Worse, the media tends to muddy up the issue by <a href=\"https:\/\/arxiv.org\/pdf\/2001.05046.pdf\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">conflating artificial intelligence ethics with speculative science fiction<\/a>. Should we worry about sentient AI rising up and killing us all? Yes. But is it an ethical issue? We\u2019ll come back to this question.<\/p>\n<p>It can be difficult even for industry insiders to grasp the scientific, political, and moral implications involved in the development and deployment of a given artificial intelligence system.<\/p>\n<p>So how do we, as laypersons who haven\u2019t dedicated our careers to understanding artificial intelligence, parse the incredibly abstruse and often nonsensical world of AI ethics? We use common sense.<\/p>\n<h2>Ethics? Morals? Values?<\/h2>\n<p>AI ethics are a sticky wicket because they comprise a two-fold scenario. 
Primarily, ethics refers to a single concern: human behavior. But in the case of AI, we must also consider the behavior of the machine.<\/p>\n<p>Traditional automobiles, for example, don\u2019t have the ability to make decisions that could harm humans. Your 1984 Ford Escort can\u2019t choose to switch lanes on its own against your will unless there\u2019s a severe mechanical failure. But your 2021 Tesla with so-called \u201cFull Self Driving\u201d enabled can.<\/p>\n<p>However, the Tesla isn\u2019t making a decision based on its personal ethics, morals, or values. It\u2019s doing what it was programmed to do. It\u2019s not smart; it doesn\u2019t understand roads or what driving is. It\u2019s just code executing with the ability to integrate new data in real time.<\/p>\n<p>The first example of morality that people seem to want to bring up when it comes to AI is <a href=\"https:\/\/en.wikipedia.org\/wiki\/Trolley_problem\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">the trolley problem<\/a>. This ethics conundrum supposes you\u2019re in a trolley that will crash into five people if you do nothing or one person if you pull a lever.<\/p>\n<p>On the surface, the ethical thing to do seems to be to sacrifice the one to save the many. But what if you did that and found out the five people were all serial killers and the one person was a nun?<\/p>\n<h2>These aren\u2019t the ethics you\u2019re looking for<\/h2>\n<p>It doesn\u2019t matter. Seriously. This isn\u2019t an AI ethics question even if the trolley is autonomous and the operator is a neural network. It\u2019s just a moral conundrum.<\/p>\n<p>It\u2019s almost impossible to train a Tesla on how to handle a situation where it absolutely has to murder someone because those types of split-second decisions don\u2019t manifest in a void.<\/p>\n<p>Questions of such an esoteric nature are usually meant to distract from the real situation. 
In this case, Tesla vehicles don\u2019t have a problem deciding between the greater good and the least harmful situation; they\u2019re not sentient or \u201csmart\u201d by any definition. They struggle to perform incredibly basic feats of morality such as \u201c<a href=\"https:\/\/driving.ca\/auto-news\/industry\/u-s-opens-probe-into-teslas-autopilot-over-emergency-vehicle-crashes\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">should I veer out of my way to smash into that parked ambulance<\/a> or keep driving past it?\u201d<\/p>\n<p>The only ethical issue here is whether these systems should be falsely advertised as \u201cAutopilot\u201d and \u201cFull Self Driving\u201d when they can\u2019t do either.<\/p>\n<p>This isn\u2019t an ethical problem concerning the development of AI.<\/p>\n<p>It\u2019s an ethical issue concerning the <i>deployment<\/i><span> of AI. Is it ethical to test a product that could potentially kill people on city streets? Does it remain ethical to continue testing this product on open roads even after its misuse has resulted in multiple deaths?<\/span><\/p>\n<p><span>It\u2019s the same with discussions surrounding bias. Algorithms are biased and, most often, it\u2019s impossible to determine why or how these biases will manifest until they\u2019re discovered in the open.<\/span><\/p>\n<h2>They\u2019re all biased<\/h2>\n<p><span>It would be pretty hard to make the case against AI research for fear a system could manifest bias, because it\u2019s a safe assumption that every AI system has bias. We need to push the boundaries of technology in order to advance as a civilization.<\/span><\/p>\n<p><span>But is it ethical to keep a system in production after it\u2019s been found to contain harmful bias? When the city of Flint, Michigan, determined its water supply was poisoned, it decided to keep the water on and hide the danger from its citizens. 
<\/span><\/p>\n<p><span>Even the President at the time, Barack Obama, went on TV and <a href=\"https:\/\/time.com\/4318358\/obama-drinks-flint-tap-water\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">drank what was supposedly a glass of Flint tap water<\/a> to assure residents that everything was fine. Approximately 95,000 people suffered harm from the US government\u2019s ethical decisions concerning that tainted water.<\/span><\/p>\n<p><span>When it comes to AI, the government and big tech are even more feckless and disingenuous. <\/span><\/p>\n<p><span>Google, for example, is one of the richest companies in the history of the world. Yet, <a href=\"https:\/\/theconversation.com\/googles-algorithms-discriminate-against-women-and-people-of-colour-112516\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">its algorithms manifest bias in ways that demonstrate racism and bigotry<\/a>. According to Google, it isn\u2019t a racist company. So why would it continue to develop and deploy algorithms it knows to be racist?<\/span><\/p>\n<p><span>Because the people who work for the company feel that the harm their products do doesn\u2019t outweigh the value they provide. <\/span><\/p>\n<p><span>Search works fine for most people. Every once in a while it does something incredibly racist, such as <a href=\"https:\/\/www.wsj.com\/articles\/BL-DGB-42522\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">displaying pictures of Black people when someone searches for the term \u201cgorilla,\u201d<\/a> and that\u2019s just fine with all of us. <\/span><\/p>\n<p><span>Ethically, those of us who use Google and the people creating Google\u2019s products have decided that there is an amount of racism and bigotry we\u2019re willing to accept and support. <\/span><\/p>\n<p>We don\u2019t talk about AI ethics in terms of the harms we\u2019ve chosen to tacitly endorse. 
The discussion tends to surround the unknowable \u2014&nbsp;<em>what should we do about sentient AGI?<\/em><\/p>\n<h2>The ethics of ignoring ethical implications<\/h2>\n<p><span>One day it could be extremely important to determine whether AI should be allowed to purchase property or whatever. But today we may as well be having a discussion on land rights in the Andromeda Galaxy. It\u2019s moot. There\u2019s no current indication that we\u2019re within a millennium or even a century of AI sentience. <\/span><\/p>\n<p><span>We do, however, have thousands of companies around the world using <a href=\"https:\/\/hbr.org\/2019\/05\/all-the-ways-hiring-algorithms-can-introduce-bias\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">racist AI systems to judge job candidates<\/a>. We\u2019ve got law enforcement agencies using harmful, <a href=\"https:\/\/www.theregreview.org\/2021\/03\/20\/saturday-seminar-facing-bias-in-facial-recognition-technology\/#:~:text=According%20to%20the%20researchers%2C%20facial,particularly%20vulnerable%20to%20algorithmic%20bias.\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">biased facial recognition<\/a> and <a href=\"https:\/\/thenextweb.com\/news\/predictive-policing-is-a-scam-that-perpetuates-systemic-bias\">predictive policing systems<\/a> to <a href=\"https:\/\/www.nytimes.com\/2020\/06\/24\/technology\/facial-recognition-arrest.html\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">wrongfully arrest<\/a>, <a href=\"https:\/\/www.americanbar.org\/groups\/judicial\/publications\/judges_journal\/2020\/winter\/ai-and-judges-ethical-obligations\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">judge<\/a>, and <a href=\"https:\/\/www.propublica.org\/article\/machine-bias-risk-assessments-in-criminal-sentencing\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">sentence people<\/a>. 
And social media companies deploy AI at a scale massive enough to <a href=\"https:\/\/www.oii.ox.ac.uk\/news\/releases\/use-of-social-media-to-manipulate-public-opinion-now-a-global-problem-says-new-report\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">affect global public perception<\/a>. <\/span><\/p>\n<p><span>Modern use of AI is dangerous and unregulated. The term \u201cWild West\u201d has been used so often in conjunction with descriptions of the current state of AI that it\u2019s lost all meaning, but it remains apt. <\/span><\/p>\n<p><span>Common sense tells us that a product that kills people or demonstrates racism and bigotry is unethical to use without some form of regulation.&nbsp;<\/span><\/p>\n<p><span>But there\u2019s almost nothing stopping anyone from developing an AI system capable of causing sweeping harm.<\/span><\/p>\n<p> <a href=\"https:\/\/thenextweb.com\/news\/a-beginners-guide-ai-ethics\">Source<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Welcome to Neural\u2019s beginner\u2019s guide to AI. This multi-part feature should provide you with a very basic understanding of what AI is, what it can do, and how it works. 
The guide&#8230;<\/p>\n","protected":false},"author":1,"featured_media":8620,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[1],"tags":[],"_links":{"self":[{"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/posts\/8619"}],"collection":[{"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=8619"}],"version-history":[{"count":0,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/posts\/8619\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/media\/8620"}],"wp:attachment":[{"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=8619"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=8619"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=8619"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}