Can AI be hypnotized?

Posted on March 26, 2021 by admin

It’s no longer considered science fiction fodder to imagine a human-level machine intelligence in our lifetimes. Year after year we see the status quo in AI research shattered as yesterday’s algorithms give way to today’s systems.

One day, perhaps within a matter of decades, we might build machines with artificial neural networks that imitate our brains in every meaningful way. And when that happens, it’ll be important to make sure they’re not as easy to hack as we are.

Robo-hypno-tism?

The Holy Grail of AI is human-level intelligence. Modern AI might seem pretty smart given all the hyperbolic headlines you see, but the truth is that there isn’t a robot on the planet that can walk into my kitchen today and make me a cup of coffee without any outside help.

This is because AI doesn’t think. It doesn’t have a “theater of the mind” in which novel thoughts engage with memories and motivators. It just turns input into output when it’s told to. But some AI researchers believe there are methods beyond deep learning by which we can achieve a more “natural” form of artificial intelligence.

One of the most commonly pursued paths towards artificial general intelligence (AGI) – which is, basically, another way of saying human-level AI – is the development of artificial neural networks that mimic our brains.

And, if you ask me, that raises the question: could a human-level machine intelligence be hacked by a hypnotist?

Killer robots, killer schmobots

While everyone else is worried about the Terminator breaking down the door, the risk that the machines we trust might inherit human vulnerabilities is being overlooked.

The field of hypnotism is an oft-debated one, but there’s probably something to it. Entire forests’ worth of peer-reviewed research papers have been published on hypnotism and its impact on psychotherapy and other fields. Consider me a skeptic who believes mindfulness and hypnotism are closer than cousins.

However, according to recent research, a human can be placed into an altered state of consciousness through the invocation of a single word. This, of course, doesn’t work with just anyone. In the study I read, they found a ‘hypnotic virtuoso’ to test their hypothesis on.

And if the scientific community is willing to consider the applicability of a single-individual study on hypnotism to the public at large, we should probably worry about how it’ll affect our robots too.

It’s all fun and games when you’re imagining a hypnotized Alexa slurring its words and recalling its childhood as Jeff Bezos’ alarm clock. But when you imagine a terrorist hacking millions of driverless vehicles at the same time using hypnotic traffic light patterns, it’s a bit spookier.

Isn’t this just fear-mongering?

It’s not actually all that far-fetched. Machine bias is, arguably, the biggest problem in the field of artificial intelligence. Because we feed our machines mass quantities of human-generated or human-labeled data, there’s no way for them to avoid our biases. That’s why GPT-3 is inherently biased against Muslims, and why when MIT trained a bot on Reddit it became a psychopath.
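The mechanism here is simple enough to sketch. Below is a toy, entirely hypothetical example (the data and the classifier are made up for illustration) of how a model that learns from human-labeled examples reproduces whatever bias the labelers baked in:

```python
from collections import Counter

# Hypothetical labeled data: annotators systematically tagged one
# group's text as negative. The bias lives in the labels themselves.
labeled = [
    ("group_a text", "positive"),
    ("group_a text", "positive"),
    ("group_b text", "negative"),   # biased human labels
    ("group_b text", "negative"),
]

# A frequency-based "model": count how often each token co-occurs
# with each label during training.
assoc = Counter()
for text, label in labeled:
    token = text.split()[0]
    assoc[(token, label)] += 1

def predict(token):
    # Pick whichever label co-occurred most with the token.
    return max(("positive", "negative"), key=lambda lab: assoc[(token, lab)])

print(predict("group_b"))  # the labelers' bias is now the model's output
```

Nothing in the model is "prejudiced"; it is faithfully compressing the statistics of its training set, which is exactly the problem.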

The closer we come to imitating the way humans learn and think in our AI systems, the more likely it’ll be that exploits that affect the human mind will be adaptable for a digital one.

I’m not literally suggesting that people will walk around with pendulum wave toys hacking robots like wizards. In reality, we’ll need to be prepared for a paradigm where hackers can bypass security by overwhelming an AI with signals that wouldn’t normally affect a traditionally dumb computer.

AI that listens can be manipulated via audio, and AI that sees can be tricked into seeing what we want it to. And AI that processes information in the same way humans do should, theoretically, be capable of being hypnotized just like us.
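This kind of sensory manipulation already exists for today’s systems under the name "adversarial examples." A minimal sketch, assuming only a toy linear classifier (not any real production model), shows the core trick: a tiny, carefully-directed nudge to the input can swing the model’s decision while barely changing the signal itself:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=16)   # weights of a hypothetical linear classifier
x = rng.normal(size=16)   # a clean input the model receives

def score(v):
    # Higher score = more confident in the "benign" class.
    return float(w @ v)

# For a linear model, the gradient of the score w.r.t. the input is
# just w. Nudging every element by epsilon against that gradient
# (the fast-gradient-sign idea) lowers the score as much as possible
# for a given, imperceptibly small per-element change.
epsilon = 0.3
x_adv = x - epsilon * np.sign(w)

print(score(x), score(x_adv))  # the adversarial score is strictly lower
```

The per-element perturbation is at most `epsilon`, which is why a human (or a "traditionally dumb" rule-based system) would see nothing wrong with the doctored input, yet the learned model reacts strongly to it.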

Published March 26, 2021 — 20:30 UTC
