Codifying humanity: Why robots should fear death as much as we do

Posted on October 8, 2021 by admin

Welcome to “Codifying Humanity,” a new Neural series that analyzes the machine learning world’s attempts at creating human-level AI. Read the first article: Can humor be reduced to an algorithm?

World-renowned futurist Ray Kurzweil predicts that AI will be “a billion times more capable” than biological intelligence within the next 20-30 years.

Kurzweil has predicted the advent of more than 100 technological advances with a greater than 85% success rate.

But given the current state of cutting-edge artificial intelligence research, it’s difficult to imagine this prediction coming true in the next century, let alone within a few decades.

The problem

Machines have no impetus towards sentience. We may not know much about our own origin story – scientists and theists tend to bicker a bit on that point – but we can be certain of at least one thing: death.

We’re forced to reckon with the fact that we may not live long enough to see our lingering questions answered. Our biological programming directives may never get resolved.

We live because the alternative is death and, for whatever reason, we have a survival instinct. As sentient creatures we’re aware of our mortality. And it’s arguable that this awareness is exactly what separates human intellect from animal intelligence.

In a paper published last December, computer scientist Saty Raghavachary argued that an artificial general intelligence (AGI) could only manifest as human-like if it associated its existence with a physical form:

A human AGI without a body is bound to be, for all practical purposes, a disembodied ‘zombie’ of sorts, lacking genuine understanding of the world (with its myriad forms, natural phenomena, beauty, etc.) including its human inhabitants, their motivations, habits, customs, behavior, etc.; the agent would need to fake all these.

The solution

Perhaps an AI that associated itself as an entity within a corporeal form could express some form of sentience, but would it actually be capable of human-level cognition?

It’s arguable that the human condition, that thing which drives only our species to seek the boundaries of technology, is intrinsically related to our mortality salience.

And if we accept this philosophical premise, it becomes apparent that an intelligent machine operating completely unaware of its own mortality may be incapable of agency.

That being said: how do we teach machines to understand their own mortality? It’s commonly thought that nearly all of human culture has emerged through the quest to extend our lives and protect us from death. We’re the only species that wars because we’re the only species capable of fearing war.

Start killing robots

Humans tend to learn through experience. If I tell you not to touch the stove and you don’t trust my judgment, you might still touch the stove. If the stove burns you, you probably won’t touch it again.

AI learns through a similar process, but it doesn’t exploit learning in the same way. If you want an AI to find all the blue dots in a field of randomly colored dots, you have to train it to find dots.

You can write algorithms for finding dots, but algorithms don’t execute themselves. So you have to run the algorithms and then adjust the AI based on the results you get. If it finds 57% of the blue dots, you tweak it and see if you can get it to find 70%. And so on and so forth.
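The tweak-and-retry loop described above can be sketched in a few lines. This is a toy illustration, not any real system: the dots, the “blueness” score, and the threshold setting are all invented for the example.

```python
import random

def recall(threshold, dots):
    """Fraction of the blue dots found with a given detection threshold."""
    found = sum(1 for color, blueness in dots if color == "blue" and blueness >= threshold)
    total = sum(1 for color, _ in dots if color == "blue")
    return found / total if total else 0.0

# Toy data: (color, blueness score) pairs standing in for pixels.
random.seed(0)
dots = [("blue", random.random()) if random.random() < 0.5 else ("red", random.random())
        for _ in range(1000)]

# The adjust-and-rerun loop: if the detector only finds 57% of the blue
# dots, tweak the setting and try again until it clears 70%.
threshold = 0.9
while recall(threshold, dots) < 0.70:
    threshold -= 0.05  # the "tweak"

print(f"threshold={threshold:.2f}, recall={recall(threshold, dots):.0%}")
```

The point is the shape of the loop, not the detector: run, measure, adjust, repeat.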

The AI’s reason for doing this has nothing to do with wanting to find blue dots. It runs the algorithm, and when the algorithm causes it to do something it’s been directed to do, such as find a blue dot, it sort of “saves” those settings in a way that overwrites previous settings that didn’t find blue dots as well.

This is called reinforcement learning. And it’s the backbone of modern deep learning technologies used for everything from spaceship launches and driverless car systems to GPT-3 and Google Search.
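The “save the settings that worked” idea is the value update at the heart of reinforcement learning. Here is a minimal sketch using a two-armed bandit rather than a real dot-finder; the payoff probabilities (0.8 vs. 0.2) and the learning rate are invented for illustration:

```python
import random

random.seed(1)

# Two candidate "settings"; setting 1 finds blue dots far more reliably.
def reward(action):
    return 1.0 if random.random() < (0.8 if action == 1 else 0.2) else 0.0

values = [0.0, 0.0]   # the agent's saved estimate of each setting's payoff
alpha = 0.1           # how strongly a new result overwrites the old estimate

for step in range(500):
    # Mostly exploit the best-known setting; sometimes explore at random.
    action = random.randrange(2) if random.random() < 0.1 else values.index(max(values))
    r = reward(action)
    # Blend the new result into the stored value -- "saving what worked."
    values[action] += alpha * (r - values[action])

print(values)  # setting 1 should end up valued much higher than setting 0
```

The agent never “wants” reward; the update rule simply makes high-payoff settings more likely to be reused, which is the overwriting behavior the paragraph describes.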

Humans aren’t programmed with hardcoded goals. The only thing we know for certain is that death is inevitable. And, arguably, that’s the spark that drives us towards accomplishing self-defined objectives.

Perhaps the only way to force an AGI to emerge is to develop an algorithm for artificial lifespans.

Imagine a paradigm where every neural network was created with a digital time-bomb set to go off at an undisclosed, randomly generated time. Any artificial intelligence created to display human-level cognition would be capable of understanding its mortality, but incapable of knowing when it would die.
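That paradigm can be sketched in a few lines, with all names hypothetical: the lifespan is drawn once at construction and kept private, so the agent can know that it will expire without ever knowing when.

```python
import random

class MortalAgent:
    """Toy sketch: an agent whose lifespan is fixed at birth but hidden from it."""

    def __init__(self, max_lifespan=1000):
        # The "digital time-bomb": drawn once, never revealed to the agent.
        self._death_step = random.randint(1, max_lifespan)
        self._step = 0

    @property
    def alive(self):
        return self._step < self._death_step

    def act(self, observation):
        if not self.alive:
            raise RuntimeError("agent has expired")
        self._step += 1
        # The agent can only reason about mortality statistically:
        # death is certain, its timing is not.
        return {"knows_it_will_die": True, "knows_when": False}

agent = MortalAgent()
while agent.alive:
    agent.act(observation=None)
```

Nothing here produces cognition, of course; it only encodes the two constraints the thought experiment asks for: certain death, uncertain timing.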

Theories abound

It’s hard to take the philosophical concept of mortality salience and express it in purely algorithmic terms. Sure, we can write a code snippet that says “if timer is zero then goto bye bye AI” and let the neural network bounce that idea around in its nodes.

But that doesn’t necessarily put us any closer to building a machine that’s capable of having a favorite color or an irrational fear of spiders.

Many theories on AGI dismiss the idea of machine sentience altogether. And perhaps those are the best ones to pursue. I don’t need a robot to like cooking, I just want it to make dinner.

In fact, as any Battlestar Galactica fan knows, the robots don’t tend to rise up until we teach them to fear their own death.

So maybe brute force deep learning or quantum algorithms will produce this so-called “billion times more capable” machine intelligence that Kurzweil predicts will happen in our lifetimes. Perhaps it will be superintelligent without ever experiencing self-awareness. 

But the implications are far more exciting if we imagine a near-future filled with robots that understand mortality in the same way we do. 
