{"id":768,"date":"2020-10-29T09:55:31","date_gmt":"2020-10-29T09:55:31","guid":{"rendered":"https:\/\/thenextweb.com\/?p=1325868"},"modified":"2020-10-29T09:55:31","modified_gmt":"2020-10-29T09:55:31","slug":"should-a-conscious-robot-get-the-same-rights-as-a-human","status":"publish","type":"post","link":"https:\/\/www.londonchiropracter.com\/?p=768","title":{"rendered":"Should a conscious robot get the same rights as a human?"},"content":{"rendered":"\n<p>In the \u201c<a href=\"https:\/\/www.imdb.com\/title\/tt0092455\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">Star Trek: The Next Generation<\/a>\u201d episode \u201c<a href=\"https:\/\/www.youtube.com\/watch?v=vjuQRCG_sUw\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">The Measure of a Man<\/a>,\u201d&nbsp;Data, an android crew member of the Enterprise, is to be dismantled for research purposes unless Captain Picard can argue that Data deserves the same rights as a human being. Naturally, the question arises: What is the basis upon which something has rights? What gives an entity moral standing?<\/p>\n<p>The philosopher <a href=\"https:\/\/uchv.princeton.edu\/people\/peter-singer\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">Peter Singer<\/a> argues that <a href=\"https:\/\/press.princeton.edu\/books\/paperback\/9780691150697\/the-expanding-circle\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">creatures that can feel pain or suffer have a claim<\/a> to moral standing. He argues that nonhuman animals have moral standing since they can feel pain and suffer. Limiting moral standing to people would be a form of speciesism, something akin to racism and sexism.<\/p>\n<p>Without endorsing Singer\u2019s line of reasoning, we might wonder if it can be extended further to an android robot like Data. It would require that Data can either feel pain or suffer. 
And how you answer that depends on how you understand consciousness and intelligence.<\/p>\n<p>As real artificial intelligence technology advances toward Hollywood\u2019s imagined versions, the question of moral standing grows more important. If AIs have moral standing, <a href=\"https:\/\/scholar.google.com\/citations?hl=en&amp;user=p8IBbFgAAAAJ&amp;view_op=list_works&amp;citft=1&amp;citft=2&amp;citft=3&amp;email_for_op=anand.vaidya%40sjsu.edu&amp;gmla=AJsN-F5dgp1wqST6325SGkx3GDfsuDj1T0bjxLMYTYACMHnsI9bz6KE47rKKwPP6_QhT3W8pQ75gTI-HE5UKm6Yuy-xDaIxMhTCW0fteFvhSyYxWd8lbRRiIB3UJa9Ae_ICCLAhpkgmnLy8Fb5MqDWpLfZI3lUJn79B3uWEmyfktBXWwdP9BWQvE2dmyfOZw6RKZ_ysSudgdzzT2zzxIVbVSxbvi_KwU_rBpHCllTxkWfvgkbF3hzX1HdNN6hPcmqO5mWgyxAro2\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">philosophers like me<\/a> reason, it could follow that they have a right to life. That means you cannot simply dismantle them, and might also mean that people shouldn\u2019t interfere with their pursuit of their goals.<\/p>\n<h2>Two flavors of intelligence and a test<\/h2>\n<p>IBM\u2019s <a href=\"https:\/\/doi.org\/10.1016\/S0004-3702(01)00129-1\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">Deep Blue chess machine<\/a> famously defeated grandmaster Garry Kasparov. But it could not do anything else. This computer had what\u2019s called domain-specific intelligence.<\/p>\n<p>On the other hand, there\u2019s the kind of intelligence that allows one to do a variety of things well. 
It is called domain-general intelligence. It\u2019s what lets people cook, ski, and raise children \u2013 tasks that are related, but also very different.<\/p>\n<p>Artificial general intelligence, AGI, is the term for machines that have domain-general intelligence. Arguably no machine has yet demonstrated that kind of intelligence. This summer, a startup called <a href=\"https:\/\/openai.com\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">OpenAI<\/a> released a new version of its <a href=\"https:\/\/www.cs.ubc.ca\/%7Eamuham01\/LING530\/papers\/radford2018improving.pdf\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">Generative Pre-Training<\/a> language model. GPT-3 is a natural-language-processing system, trained to read and write so that its output can be easily understood by people.<\/p>\n<p><a href=\"http:\/\/dailynous.com\/2020\/07\/30\/philosophers-gpt-3\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">It drew immediate notice<\/a>, not just because of its impressive ability to mimic stylistic flourishes and put together <a href=\"https:\/\/theconversation.com\/a-language-generation-programs-ability-to-write-articles-produce-code-and-compose-poetry-has-wowed-scientists-145591\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">plausible content<\/a>, but also because of how far it had come from a previous version. Despite this impressive performance, GPT-3 <a href=\"https:\/\/www.technologyreview.com\/2020\/08\/22\/1007539\/gpt3-openai-language-generator-artificial-intelligence-ai-opinion\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">doesn\u2019t actually know anything<\/a> beyond how to string words together in various ways. AGI remains quite far off.<\/p>\n<p>Named after pioneering AI researcher Alan Turing, the <a href=\"https:\/\/plato.stanford.edu\/entries\/turing-test\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">Turing test<\/a> helps determine when an AI is intelligent. 
Can a person conversing with a hidden AI tell whether it\u2019s an AI or a human being? If they can\u2019t, then for all practical purposes, the AI is intelligent. But this test says nothing about whether the AI might be conscious.<\/p>\n<h2>Two kinds of consciousness<\/h2>\n<p>There are <a href=\"http:\/\/www.nyu.edu\/gsas\/dept\/philo\/faculty\/block\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">two parts<\/a> of consciousness. First, there\u2019s the what-it\u2019s-like-for-me aspect of an experience, the sensory part of consciousness. Philosophers call this phenomenal consciousness. It\u2019s about how you experience a phenomenon, like smelling a rose or feeling pain.<\/p>\n<p>In contrast, there\u2019s also access consciousness. That\u2019s the ability to report, reason, behave, and act in a coordinated and responsive manner to stimuli based on goals. For example, when I pass the soccer ball to my friend making a play on the goal, I am responding to visual stimuli, acting from prior training, and pursuing a goal determined by the rules of the game. I make the pass automatically, without conscious deliberation, in the flow of the game.<\/p>\n<p><a href=\"https:\/\/doi.org\/10.1177\/1073858416673817\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">Blindsight nicely illustrates the difference<\/a> between the two types of consciousness. Someone with this neurological condition might report, for example, that they cannot see anything on the left side of their visual field. But if asked to pick up a pen from an array of objects on the left side of their visual field, they can reliably do so. They cannot see the pen, yet they can pick it up when prompted \u2013 an example of access consciousness without phenomenal consciousness.<\/p>\n<p>Data is an android. 
How do these distinctions play out with respect to him?<\/p>\n<figure class=\"align-center zoomable\" readability=\"2\">\n<p><figure class=\"post-image post-mediaBleed aligncenter\"><a href=\"https:\/\/images.theconversation.com\/files\/365309\/original\/file-20201023-17-pg6o2n.jpg?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=1000&amp;fit=clip\" target=\"_blank\" rel=\"nofollow noopener noreferrer\"><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/images.theconversation.com\/files\/365309\/original\/file-20201023-17-pg6o2n.jpg?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=754&amp;fit=clip\" sizes=\"(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px\" alt=\"Still from Star Trek: The Next Generation\" width=\"600\" height=\"445\" class=\" lazy\" data-lazy=\"true\" data-srcset=\"https:\/\/images.theconversation.com\/files\/365309\/original\/file-20201023-17-pg6o2n.jpg?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=600&amp;h=445&amp;fit=crop&amp;dpr=1 600w, https:\/\/images.theconversation.com\/files\/365309\/original\/file-20201023-17-pg6o2n.jpg?ixlib=rb-1.1.0&amp;q=30&amp;auto=format&amp;w=600&amp;h=445&amp;fit=crop&amp;dpr=2 1200w, https:\/\/images.theconversation.com\/files\/365309\/original\/file-20201023-17-pg6o2n.jpg?ixlib=rb-1.1.0&amp;q=15&amp;auto=format&amp;w=600&amp;h=445&amp;fit=crop&amp;dpr=3 1800w, https:\/\/images.theconversation.com\/files\/365309\/original\/file-20201023-17-pg6o2n.jpg?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=754&amp;h=559&amp;fit=crop&amp;dpr=1 754w, https:\/\/images.theconversation.com\/files\/365309\/original\/file-20201023-17-pg6o2n.jpg?ixlib=rb-1.1.0&amp;q=30&amp;auto=format&amp;w=754&amp;h=559&amp;fit=crop&amp;dpr=2 1508w, https:\/\/images.theconversation.com\/files\/365309\/original\/file-20201023-17-pg6o2n.jpg?ixlib=rb-1.1.0&amp;q=15&amp;auto=format&amp;w=754&amp;h=559&amp;fit=crop&amp;dpr=3 2262w\"><\/a><figcaption><a 
href=\"https:\/\/thenextweb.com\/neural\/2020\/10\/29\/should-a-conscious-robot-get-the-same-rights-as-a-human\/#\" data-url=\"https:\/\/twitter.com\/intent\/tweet?url=https%3A%2F%2Fthenextweb.com%2Fneural%2F2020%2F10%2F29%2Fshould-a-conscious-robot-get-the-same-rights-as-a-human%2F&amp;via=thenextweb&amp;related=thenextweb&amp;text=Check out this picture on: Do Data\u2019s qualities grant him moral standing? CBS\" data-title=\"Share Do Data\u2019s qualities grant him moral standing? CBS on Twitter\" data-width=\"685\" data-height=\"500\" class=\"post-image-share popitup\" title=\"Share Do Data\u2019s qualities grant him moral standing? CBS on Twitter\"><i class=\"icon icon--inline icon--twitter--dark\"><\/i><\/a>Do Data\u2019s qualities grant him moral standing? CBS<\/figcaption><\/figure>\n<\/p>\n<\/figure>\n<h2>The Data dilemma<\/h2>\n<p>The android Data demonstrates that he is self-aware in that he can monitor whether or not, for example, he is optimally charged or there is internal damage to his robotic arm.<\/p>\n<p>Data is also intelligent in the general sense. He does a lot of distinct things at a high level of mastery. He can fly the Enterprise, take orders from Captain Picard, and reason with him about the best path to take.<\/p>\n<p>He can also play poker with his shipmates, cook, discuss topical issues with close friends, fight with enemies on alien planets, and engage in various forms of physical labor. Data has access consciousness. He would clearly pass the Turing test.<\/p>\n<p>However, Data most likely lacks phenomenal consciousness \u2013 he does not, for example, delight in the scent of roses or experience pain. He embodies a supersized version of blindsight. He\u2019s self-aware and has access consciousness \u2013 can grab the pen \u2013 but across all his senses he lacks phenomenal consciousness.<\/p>\n<p>Now, if Data doesn\u2019t feel pain, at least one of the reasons Singer offers for giving a creature moral standing is not fulfilled. 
But Data might fulfill the other condition of being able to suffer, even without feeling pain. Suffering might not require phenomenal consciousness the way pain essentially does.<\/p>\n<p>For example, what if suffering were also defined as being thwarted from pursuing a just cause that causes no harm to others? Suppose Data\u2019s goal is to save his crewmate, but he can\u2019t reach her because of damage to one of his limbs. Data\u2019s reduction in functioning that keeps him from saving his crewmate is a kind of nonphenomenal suffering. He would have preferred to save the crewmate, and would be better off if he did.<\/p>\n<p>In the episode, the question ends up resting not on whether Data is self-aware \u2013 that is not in doubt. Nor is it in question whether he is intelligent \u2013 he easily demonstrates that he is in the general sense. What is unclear is whether he is phenomenally conscious. Data is not dismantled because, in the end, his human judges cannot agree on the significance of consciousness for moral standing.<\/p>\n<h2>Should an AI get moral standing?<\/h2>\n<p>Data is kind \u2013 he acts to support the well-being of his crewmates and those he encounters on alien planets. He obeys orders from people and appears unlikely to harm them, and he seems to <a href=\"https:\/\/theconversation.com\/after-75-years-isaac-asimovs-three-laws-of-robotics-need-updating-74501\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">protect his own existence<\/a>. For these reasons he appears peaceful and easier to accept into the realm of things that have moral standing.<\/p>\n<p>But what about <a href=\"https:\/\/www.youtube.com\/watch?v=YbEWJXld3Ig\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">Skynet<\/a> in the <a href=\"https:\/\/www.youtube.com\/watch?v=k64P4l2Wmeg\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">\u201cTerminator\u201d<\/a> movies? 
Or the worries recently expressed by <a href=\"https:\/\/www.tesla.com\/elon-musk\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">Elon Musk<\/a> about <a href=\"https:\/\/www.cnbc.com\/2018\/03\/13\/elon-musk-at-sxsw-a-i-is-more-dangerous-than-nuclear-weapons.html\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">AI being more dangerous than nukes<\/a>, and by <a href=\"https:\/\/www.hawking.org.uk\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">Stephen Hawking<\/a> on <a href=\"https:\/\/www.bbc.com\/news\/technology-30290540\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">AI ending humankind<\/a>?<\/p>\n<p>Human beings don\u2019t lose their claim to moral standing just because they act against the interests of another person. In the same way, you can\u2019t automatically say that just because an AI acts against the interests of humanity or another AI it doesn\u2019t have moral standing. You might be justified in fighting back against an AI like Skynet, but that does not take away its moral standing. If moral standing is given in virtue of the capacity to nonphenomenally suffer, then Skynet and Data both get it even if only Data wants to help human beings.<\/p>\n<p>There are no artificial general intelligence machines yet. But now is the time to consider what it would take to grant them moral standing. How humanity chooses to answer the question of moral standing for nonbiological creatures will have big implications for how we deal with future AIs \u2013 whether kind and helpful like Data, or set on destruction, like Skynet.<!-- Below is The Conversation's page counter tag. Please DO NOT REMOVE. --><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/counter.theconversation.com\/content\/130453\/count.gif?distributor=republish-lightbox-basic\" alt=\"The Conversation\" width=\"1\" height=\"1\" class=\" lazy\" data-lazy=\"true\"><!-- End of code. 
If you don't see any code above, please get new code from the Advanced tab after you click the republish button. The page counter does not collect any personal data. More info: https:\/\/theconversation.com\/republishing-guidelines --><\/p>\n<hr>\n<p><em>This article is republished from <a href=\"https:\/\/theconversation.com\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">The Conversation<\/a>&nbsp;by&nbsp;<a href=\"https:\/\/theconversation.com\/profiles\/anand-vaidya-684855\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">Anand Vaidya<\/a>, Associate Professor of Philosophy, <a href=\"https:\/\/theconversation.com\/institutions\/san-jose-state-university-2091\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">San Jos\u00e9 State University<\/a>&nbsp;under a Creative Commons license. Read the <a href=\"https:\/\/theconversation.com\/if-a-robot-is-conscious-is-it-ok-to-turn-it-off-the-moral-implications-of-building-true-ais-130453\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">original article<\/a>.<\/em><\/p>\n<p class=\"c-post-pubDate\"> Published October 29, 2020 \u2014 09:55 UTC <\/p>\n<p> <a href=\"https:\/\/thenextweb.com\/neural\/2020\/10\/29\/should-a-conscious-robot-get-the-same-rights-as-a-human\/\">Source<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>In the \u201cStar Trek: The Next Generation\u201d episode \u201cThe Measure of a Man\u201d&nbsp;Data, an android crew member of the Enterprise, is to be dismantled for research purposes unless Captain Picard can 
argue&#8230;<\/p>\n","protected":false},"author":1,"featured_media":769,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[1],"tags":[],"_links":{"self":[{"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/posts\/768"}],"collection":[{"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=768"}],"version-history":[{"count":0,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/posts\/768\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/media\/769"}],"wp:attachment":[{"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=768"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=768"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=768"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}