
Researchers propose ‘ethically correct AI’ for smart guns that locks out mass shooters

Posted on February 19, 2021 by admin

A trio of computer scientists from the Rensselaer Polytechnic Institute in New York recently published research detailing a potential AI intervention for murder: an ethical lockout.

The big idea here is to stop mass shootings and other ethically incorrect uses for firearms through the development of an AI that can recognize intent, judge whether it’s ethical use, and ultimately render a firearm inert if a user tries to ready it for improper fire.

That sounds like a lofty goal; in fact, the researchers themselves refer to it as a “blue sky” idea. But the technology to make it possible is already here.

According to the team’s research:

Predictably, some will object as follows: “The concept you introduce is attractive. But unfortunately it’s nothing more than a dream; actually, nothing more than a pipe dream. Is this AI really feasible, science- and engineering-wise?” We answer in the affirmative, confidently.

The research goes on to explain how recent breakthroughs involving long-term studies have led to the development of various AI-powered reasoning systems that could make a fairly simple ethical judgment system for firearms relatively trivial to implement.

This paper doesn’t describe the creation of a smart gun itself, but the potential efficacy of an AI system that can make the same kinds of decisions for firearms users as, for example, cars that can lock out drivers if they can’t pass a breathalyzer.

In this way, the AI would be trained to recognize the human intent behind an action. The researchers describe the recent mass shooting at a Walmart in El Paso and offer a different view of what could have happened:

The shooter is driving to Walmart, an assault rifle, and a massive amount of ammunition, in his vehicle. The AI we envisage knows that this weapon is there, and that it can be used only for very specific purposes, in very specific environments (and of course it knows what those purposes and environments are).

At Walmart itself, in the parking lot, any attempt on the part of the would-be assailant to use his weapon, or even position it for use in any way, will result in it being locked out by the AI. In the particular case at hand, the AI knows that killing anyone with the gun, except perhaps e.g. for self-defense purposes, is unethical. Since the AI rules out self-defense, the gun is rendered useless, and locked out.

This paints a wonderful picture. It’s hard to imagine any objections to a system that worked perfectly. Nobody needs to load, rack, or fire a firearm in a Walmart parking lot unless they’re in danger. If the AI could be developed in such a way that it would only allow users to fire in ethical situations such as self-defense, while at a firing range, or in designated legal hunting areas, thousands of lives could be saved every year.
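The decision logic described above, lock the weapon out unless the situation is an ethically permitted one, can be caricatured as a simple rule check. The following is a minimal illustrative sketch, not the researchers' actual system; `Context`, `decide_lockout`, and the permitted-situation labels are all invented assumptions:

```python
from dataclasses import dataclass

# Situations in which firing is treated as permitted, per the article's
# examples: self-defense, a firing range, a designated legal hunting area.
PERMITTED_SITUATIONS = {"self_defense", "firing_range", "legal_hunting_area"}

@dataclass
class Context:
    situation: str        # e.g. "parking_lot", from an upstream classifier
    intent_violent: bool  # inferred intent, from an upstream model

def decide_lockout(ctx: Context) -> bool:
    """Return True if the firearm should be rendered inert."""
    if ctx.intent_violent:
        return True  # violent intent always triggers a lockout
    # Otherwise, firing is allowed only in explicitly permitted situations.
    return ctx.situation not in PERMITTED_SITUATIONS

# The El Paso scenario as described: a parking lot, self-defense ruled out.
print(decide_lockout(Context(situation="parking_lot", intent_violent=True)))
```

Of course, the entire difficulty hides inside the two upstream fields: classifying the situation and inferring intent is where the real AI research lies, not in this trivial final check.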

Of course, the researchers certainly predict myriad objections. After all, they’re focused on navigating the US political landscape. In most civilized nations gun control is common sense.

The team anticipates people pointing out that criminals will just use firearms that don’t have an AI watchdog embedded:

In reply, we note that our blue-sky conception is in no way restricted to the idea that the guarding AI is only in the weapons in question.

Clearly the contribution here isn’t the development of a smart gun, but the creation of an ethically correct AI. If criminals won’t put the AI on their guns, or they continue to use dumb weapons, the AI can still be effective when installed in other sensors. It could, hypothetically, be used to perform any number of functions once it determines violent human intent.

It could lock doors, stop elevators, alert authorities, change traffic light patterns, text location-based alerts, and any number of other reactionary measures including unlocking law enforcement and security personnel’s weapons for defense.
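In principle, those reactionary measures could hang off a single dispatcher that fans one intent alert out to every configured response. A hypothetical sketch, with every name invented purely for illustration:

```python
from typing import Callable

# Placeholder responses standing in for the measures the article lists.
def lock_doors(loc: str) -> str: return f"lock doors: {loc}"
def stop_elevators(loc: str) -> str: return f"stop elevators: {loc}"
def alert_authorities(loc: str) -> str: return f"alert authorities: {loc}"
def send_location_alerts(loc: str) -> str: return f"text alerts: {loc}"

# Ordered list of responses; a real system would prioritize and rate-limit.
RESPONSES: list[Callable[[str], str]] = [
    lock_doors, stop_elevators, alert_authorities, send_location_alerts,
]

def on_violent_intent(location: str) -> list[str]:
    """Fire every configured response for an alert at the given location."""
    return [respond(location) for respond in RESPONSES]
```

The dispatcher pattern is the point here: the hard problem (detecting violent intent) is assumed solved upstream, and the responses are deliberately decoupled from the detector.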

The researchers also figure there will be objections based on the idea that people could hack the weapons. This one’s pretty easily dismissed: firearms will be easier to secure than robots, and we’re already putting AI in those.

While there’s no such thing as total security, the US military fills its ships, planes, and missiles with AI, and we’ve managed to figure out how to keep the enemy from hacking them. We should be able to keep police officers’ service weapons just as safe.

Realistically, it takes a leap of faith to assume an ethical AI can be made to understand the difference between situations such as, for example, home invasion and domestic violence, but the groundwork is already there.

If you look at driverless cars, we know people have already died because they relied on an AI to protect them. But we also know that the potential to save tens of thousands of lives is too great to ignore in the face of a, so far, relatively small number of accidental fatalities.

It’s likely that, just like Tesla’s AI, a gun control AI could result in accidental and unnecessary deaths. But approximately 24,000 people die annually in the US due to suicide by firearm, 1,500 children are killed by gun violence, and almost 14,000 adults are murdered with guns. It stands to reason an AI-intervention could significantly decrease those numbers.

You can read the whole paper here.

Published February 19, 2021 — 19:35 UTC
