Who gets to decide if an AI is alive?

Posted on February 22, 2022 by admin

Experts predict artificial intelligence will gain sentience within the next 100 years. Some predict it’ll happen sooner. Others say it’ll never happen. Still other experts say it already has happened.

It’s possible the experts are just guessing.

The problem with identifying “sentience” and “consciousness” is there’s no precedent when it comes to machine intelligence. You can’t just check a robot’s pulse or ask it to define “love” to see if it’s alive.

The closest we have to a test for sentience is the Turing Test and, arguably, Alexa and Siri passed that years ago.

At some point, if and when AI does become sentient, we’ll need an empirical method for determining the difference between clever programming and machines that are actually self-aware.

Sentience and scientists

Any developer, marketing team, CEO, or scientist can claim they’ve created a machine that thinks and feels. There’s just one thing stopping them: the truth.

And that barrier’s only as strong as the consequences for breaking it. Currently, the companies dabbling at the edge of artificial general intelligence (AGI) have wisely stayed on the border of “it’s just a machine” without crossing into the land of “it can think.”

They use terms such as “human-level” and “strong AI” to indicate they’re working towards something that imitates human intelligence. But they usually stop short of claiming these systems are capable of experiencing thoughts and feelings.

Well, most of them anyway. Ilya Sutskever, the chief scientist at OpenAI, seems to think AI is already sentient:

it may be that today’s large neural networks are slightly conscious

— Ilya Sutskever (@ilyasut) February 9, 2022

But Yann LeCun, Facebook/Meta’s AI guru, believes the opposite:

Nope.
Not even true for small values of “slightly conscious” and large values of “large neural nets”.
I think you would need a particular kind of macro-architecture that none of the current networks possess.

— Yann LeCun (@ylecun) February 12, 2022

And Judea Pearl, a Turing Award-winning computer scientist, thinks even fake sentience should be considered consciousness since, as he puts it, “faking it is having it.”

As far as I know we do not have an agreed Turing test for consciousness, except, of course, systems that act and communicate as though they have consciousness. Here, my faithful guideline is: faking it is having it, because it is practically impossible to fake w/o having.

— Judea Pearl (@yudapearl) February 15, 2022

Here we have three of the world’s most famous computer scientists, each of them progenitors of modern artificial intelligence in their own right, debating consciousness on Twitter with the temerity and gravitas of a Star Wars versus Star Trek argument.

And this is not an isolated incident by any means. We’ve written about Twitter beefs and wacky arguments between AI experts for years.

It would appear that computer scientists are no more qualified to opine on machine sentience than philosophers are.

Living machines and their lawyers

If we can’t rely on OpenAI’s chief scientist to determine whether, for example, GPT-3 can think, then we’ll have to shift perspectives.

Perhaps a machine is only sentient if it can meet a simple set of rational qualifications for sentience. If so, we’d need to turn to the legal system to codify and verify any potential incidents of machine consciousness.

The problem is that there’s only one country with an existing legal framework by which the rights of a sentient machine can be discussed, and that’s Saudi Arabia.

As we reported back in 2017:

A robot called Sophia, made by Hong Kong company Hanson Robotics, was given citizenship during an investment event where plans to build a supercity full of robotic technology were unveiled to a crowd of wealthy attendees.

Let’s be perfectly clear here: if Sophia the Robot is sentient, so are Amazon’s Alexa, Teddy Ruxpin, and The Rock-afire Explosion.

It’s an animatronic puppet that uses natural language processing to generate phrases. From an engineering point of view, the machine is quite impressive. But the AI powering it is no more sophisticated than the machine learning algorithms Netflix uses to figure out which TV show you’ll want to watch next.

In the US, the legal system consistently demonstrates an absolute failure to grasp even the most basic concepts related to artificial intelligence.

Last year, Judge Bruce Schroeder banned prosecutors from using the “pinch to zoom” feature of an Apple iPad in the Kyle Rittenhouse trial because nobody in the courtroom properly understood how it worked.

Per an article by Ars Technica’s Jon Brodkin:

Schroeder prevented … [Kenosha County prosecutor Thomas Binger] from pinching and zooming after Rittenhouse’s defense attorney Mark Richards claimed that when a user zooms in on a video, “Apple’s iPad programming creat[es] what it thinks is there, not what necessarily is there.”

Richards provided no evidence for this claim and admitted that he doesn’t understand how the pinch-to-zoom feature works, but the judge decided the burden was on the prosecution to prove that zooming in doesn’t add new images into the video.

And the US government remains staunchly hands-off when it comes to AI regulation.

It’s just as bad in the EU, where lawmakers are currently stymied over numerous sticking points including facial recognition regulations, with conservative and liberal party lines fueling the dissonance.

What this means is that we’re unlikely to see any court, in any democratic country, make rational observations on machine sentience.

Judges and lawyers often lack basic comprehension of the systems at play, and scientists are too busy deciding where the goalposts for sentience lie to provide any sort of consistent view on the matter.

Currently, the utter confusion surrounding the field of AI has led to a paradigm where academia and peer-review act as the first and only arbiters of machine sentience. Unfortunately, that puts us back into the realm of scientists arguing over science.

That just leaves PR teams and the media. On the bright side, the artificial intelligence beat is quite competitive. And many of us on it are painfully aware of how hyperbolic the entire field has become since the advent of modern deep learning.

But the dark side is that intelligent voices of reason with expertise in the field they’re covering — the reporters with years of experience telling shit from Shinola and snake oil from AI — are often shouted over by access journalists with larger audiences or peers providing straight-up coverage of big tech press releases.

No Turing Test for consciousness

The simple fact of the matter is that we don’t have a legitimate, agreed-upon test for AI sentience for the exact same reason we don’t have one for aliens: nobody’s sure exactly what we’re looking for.

Are aliens going to look like us? What if they’re two-dimensional beings who can hide by turning sideways? Will sentient AI take a form we can recognize? Or is Ilya Sutskever correct and AI is already sentient?

Maybe AI is already superintelligent and it knows that coming out as alive would upset a delicate balance. It could be secretly working in the background to make things a tiny bit better for us every day — or worse.

Perhaps AI will never gain sentience because it’s impossible to imbue computer code with the spark of life. Maybe the best we can ever hope for is AGI.

The only thing that’s clear is that we need a Turing Test for consciousness that actually works for modern AI. If some of the smartest people on the planet think we could stumble onto machine sentience at any moment, it seems pragmatic to be as prepared for that moment as we possibly can.

But we need to figure out what we’re looking for before we can find it, something easier said than done.

How would you define, detect, and determine machine sentience? Let us know on Twitter.
