{"id":3197,"date":"2021-02-19T19:35:46","date_gmt":"2021-02-19T19:35:46","guid":{"rendered":"https:\/\/thenextweb.com\/?p=1339872"},"modified":"2021-02-19T19:35:46","modified_gmt":"2021-02-19T19:35:46","slug":"researchers-propose-ethically-correct-ai-for-smart-guns-that-locks-out-mass-shooters","status":"publish","type":"post","link":"https:\/\/www.londonchiropracter.com\/?p=3197","title":{"rendered":"Researchers propose \u2018ethically correct AI\u2019 for smart guns that locks out mass shooters"},"content":{"rendered":"\n<div><img decoding=\"async\" src=\"https:\/\/img-cdn.tnwcdn.com\/image\/neural?filter_last=1&amp;fit=1280%2C640&amp;url=https%3A%2F%2Fcdn0.tnwcdn.com%2Fwp-content%2Fblogs.dir%2F1%2Ffiles%2F2019%2F05%2Fcybersecurity1200.png&amp;signature=4a2bcfbe8989051cd93a893648ae8100\" class=\"ff-og-image-inserted\"><\/div>\n<p>A trio of computer scientists from the Rensselaer Polytechnic Institute in New York recently published research detailing a potential AI intervention for murder: an ethical lockout.<\/p>\n<p>The big idea here is to stop mass shootings and other ethically incorrect uses for firearms through the development of an AI that can recognize intent, judge whether it\u2019s ethical use, and ultimately render a firearm inert if a user tries to ready it for improper fire.<\/p>\n<p>That sounds like a lofty goal, in fact the researchers themselves refer to it as a \u201cblue sky\u201d idea, but the technology to make it possible is already here.<\/p>\n<p>According to the team\u2019s <a href=\"https:\/\/arxiv.org\/pdf\/2102.09343.pdf\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">research<\/a>:<\/p>\n<blockquote readability=\"11\">\n<p align=\"LEFT\">Predictably, some will object as follows: \u201cThe concept you introduce is attractive. But unfortunately it\u2019s nothing more than a dream; actually, nothing more than a pipe dream. 
Is this AI really feasible, science- and engineering-wise?\u201d We answer in the affirmative, confidently.<\/p>\n<\/blockquote>\n<p align=\"LEFT\">The research goes on to explain how recent breakthroughs involving long-term studies have led to the development of various AI-powered reasoning systems that could serve as the basis for a fairly simple ethical judgment system for firearms.<\/p>\n<p align=\"LEFT\">This paper doesn\u2019t describe the creation of a smart gun itself, but the potential efficacy of an AI system that can make the same kinds of decisions for firearms users as, for example, cars that can lock out drivers if they can\u2019t pass&nbsp;a breathalyzer.<\/p>\n<p align=\"LEFT\">In this way, the AI would be trained to recognize the human intent behind an action. The researchers describe the recent mass shooting at a Walmart in El Paso and offer a different view of what could have happened:<\/p>\n<blockquote readability=\"27\">\n<p align=\"LEFT\">The shooter is driving to Walmart, an assault rifle, and a massive amount of ammunition, in his vehicle. The AI we envisage knows that this weapon is there, and that it can be used only for very specific purposes, in very specific environments (and of course it knows what those purposes and environments are).<\/p>\n<p align=\"LEFT\">At Walmart itself, in the parking lot, any attempt on the part of the would-be assailant to use his weapon, or even position it for use in any way, will result in it being locked out by the AI. In the particular case at hand, the AI knows that killing anyone with the gun, except perhaps e.g. for self-defense purposes, is unethical. Since the AI rules out self-defense, the gun is rendered useless, and locked out.<\/p>\n<\/blockquote>\n<p align=\"LEFT\">This paints a wonderful picture. It\u2019s hard to imagine any objections to a system that worked perfectly. Nobody needs to load, rack, or fire a firearm in a Walmart parking lot unless they\u2019re in danger. 
If the AI could be developed in such a way that it would only allow users to fire in ethical situations such as self-defense, while at a firing range, or in designated legal hunting areas, thousands of lives could be saved every year.<\/p>\n<p align=\"LEFT\">Of course, the researchers predict myriad objections. After all, they\u2019re focused on navigating the US political landscape. In most civilized nations, gun control is common sense.<\/p>\n<p align=\"LEFT\">The team anticipates people pointing out that criminals will just use firearms that don\u2019t have an AI watchdog embedded:<\/p>\n<blockquote readability=\"7\">\n<p align=\"LEFT\">In reply, we note that our blue-sky conception is in no way restricted to the idea that the guarding AI is only in the weapons in question.<\/p>\n<\/blockquote>\n<p align=\"LEFT\">Clearly the contribution here isn\u2019t the development of a smart gun, but the creation of an <i>ethically correct<\/i><span> AI. If criminals won\u2019t put the AI on their guns, or they continue to use dumb weapons, the AI can still be effective when installed in other sensors. It could, hypothetically, be used to perform any number of functions once it determines violent human intent. <\/span><\/p>\n<p align=\"LEFT\"><span>It could lock doors, stop elevators, alert authorities, change traffic light patterns, text location-based alerts, and take any number of other reactive measures, including unlocking law enforcement and security personnel\u2019s weapons for defense. <\/span><\/p>\n<p align=\"LEFT\"><span>The researchers also figure there will be objections based on the idea that people could hack the weapons. This one\u2019s pretty easily dismissed: firearms will be easier to secure than robots, and we\u2019re already putting AI in those. 
<\/span><\/p>\n<p align=\"LEFT\"><span>While there\u2019s no such thing as total security, the US military fills its ships, planes, and missiles with AI, and we\u2019ve managed to keep the enemy from hacking them. We should be able to keep police officers\u2019 service weapons just as safe.<\/span><\/p>\n<p align=\"LEFT\"><span>Realistically, it takes a leap of faith to assume an ethical AI can be made to understand the difference between situations such as&nbsp;home invasion and domestic violence, but the groundwork is already there. <\/span><\/p>\n<p align=\"LEFT\"><span>If you look at driverless cars, we know people have already died because they relied on an AI to protect them. But we also know that the potential to save tens of thousands of lives is too great to ignore in the face of a so-far relatively small number of accidental fatalities. <\/span><\/p>\n<p align=\"LEFT\"><span>It\u2019s likely that, just like <a href=\"https:\/\/www.vox.com\/recode\/2020\/2\/26\/21154502\/tesla-autopilot-fatal-crashes\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">Tesla\u2019s AI<\/a>, a gun control AI could result in accidental and unnecessary deaths. But approximately 24,000 people <a href=\"https:\/\/health.ucdavis.edu\/what-you-can-do\/facts.html#:~:text=In%202018%2C%2024%2C432%20people%20in%20the%20U.S.%20died%20by%20firearm%20suicide.&amp;text=Firearms%20are%20the%20means%20in,74%25%20of%20homicides%20in%202018.\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">die annually<\/a> in the US due to suicide by firearm, 1,500 children are killed by gun violence, and almost 14,000 adults are murdered with guns<\/span><i>. <\/i><span>It stands to reason an AI intervention could significantly decrease those numbers. 
<\/span><\/p>\n<p align=\"LEFT\"><span>You can read the whole paper <a href=\"https:\/\/arxiv.org\/pdf\/2102.09343.pdf\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">here<\/a>.<\/span><\/p>\n<p class=\"c-post-pubDate\"> Published February 19, 2021 \u2014 19:35 UTC <\/p>\n<p> <a href=\"https:\/\/thenextweb.com\/neural\/2021\/02\/19\/researchers-ethically-correct-ai-smart-guns-lock-out-mass-shooters\/\">Source<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>A trio of computer scientists from the Rensselaer Polytechnic Institute in New York recently published research detailing a potential AI intervention for murder: an ethical lockout. The big idea here is to&#8230;<\/p>\n","protected":false},"author":1,"featured_media":3198,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[1],"tags":[],"_links":{"self":[{"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/posts\/3197"}],"collection":[{"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=3197"}],"version-history":[{"count":0,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/posts\/3197\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/media\/3198"}],"wp:attachment":[{"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=3197"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=3197"},{"taxonomy":
"post_tag","embeddable":true,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=3197"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}