AI’s true purpose is freeing up humans to find the biggest problems

Posted on February 19, 2022 by admin

Last week’s announcement of AlphaCode, DeepMind’s source code–generating deep learning system, created a lot of excitement—some of it unwarranted—surrounding advances in artificial intelligence.

As I’ve mentioned in my deep dive on AlphaCode, DeepMind’s researchers have done a great job in bringing together the right technology and practices to create a machine learning model that can find solutions to very complex problems.

However, the sometimes-bloated coverage of AlphaCode by the media highlights the endemic problems with framing the growing capabilities of artificial intelligence in the context of competitions meant for humans.

Measuring intelligence with tests

For decades, AI researchers and scientists have been searching for tests that can measure progress toward artificial general intelligence. And having envisioned AI in the image of the human mind, they have turned to benchmarks for human intelligence.

Being multidimensional and subjective, human intelligence can be difficult to measure. But in general, there are some tests and competitions that most people agree are indicative of good cognitive abilities.

Think of every competition as a function that maps a problem to a solution. You’re provided with a problem, whether it’s a chessboard, a go board, a programming challenge, or a science question. You must map it to a solution. The size of the solution space depends on the problem. For example, go has a much larger solution space than chess because it has a larger board and more possible moves at each turn. Programming challenges have a vaster solution space still: there are hundreds of possible instructions that can be combined in nearly endless ways.
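To get a feel for these orders of magnitude, a back-of-the-envelope game-tree estimate can be sketched: branching factor raised to the game length, using commonly cited averages (these are rough illustrative figures, not exact counts).

```python
import math

def game_tree_size(branching_factor: int, depth: int) -> int:
    """Rough count of move sequences: branching_factor ** depth."""
    return branching_factor ** depth

# Commonly cited averages: ~35 legal moves per chess position over ~80 plies,
# ~250 legal moves per go position over ~150 plies.
chess = game_tree_size(35, 80)
go = game_tree_size(250, 150)

print(f"chess ~ 10^{int(math.log10(chess))}")  # on the order of 10^123
print(f"go    ~ 10^{int(math.log10(go))}")     # on the order of 10^359
```

Even these crude numbers show why go was long considered out of reach: its tree is hundreds of orders of magnitude larger than chess’s, and both dwarf anything a human (or a naive exhaustive search) could enumerate.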

But in each case, a problem is matched with a solution and the solution can be weighed against an expected outcome, whether it’s winning or losing a game, answering the right question, maximizing a reward, or passing the test cases of the programming challenge.

A game in progress on a go board. Image via Pixabay

When it comes to us humans, these competitions truly test the limits of our intelligence. Given the computational limits of the brain, we can’t brute-force our way through the solution space. No chess or go player can evaluate thousands, let alone millions, of moves per turn in a reasonable amount of time. Likewise, a programmer can’t randomly try every possible sequence of instructions until one solves the problem.

We start with a reasonable intuition (abduction), match the problem to previously seen patterns (induction), and apply a set of known rules (deduction), iterating until we arrive at an acceptable solution. We hone these skills through training and practice, and we become better at finding good solutions to these competitions.

In the process of mastering these competitions, we develop many general cognitive skills that can be applied to other problems, such as planning, strategizing, design patterns, theory of mind, synthesis, decomposition, and critical and abstract thinking. These skills come in handy in other real-world settings, such as business, education, scientific research, product design, and the military.

In more specialized fields, such as math or programming, tests take on more practical implications. For example, in coding competitions, the programmer must decompose a problem statement into smaller parts, then design an algorithm that solves each part and put it all back together. The problems often have interesting twists that require the participant to think in novel ways instead of using the first solution that comes to mind.

Interestingly, a lot of the challenges you’ll see in these competitions have very little to do with the types of code programmers write daily, such as pulling data from a database, calling an API, or setting up a web server.

But you can expect a person who ranks high in coding competitions to have many general skills that take years of study and practice to develop. This is why many companies use coding challenges as an important tool for evaluating potential hires. Put another way, competitive coding is a good proxy for the effort that goes into becoming a good programmer.

Mapping problems to solutions

When competitions, games, and tests are applied to artificial intelligence, the computational limits of the brain no longer apply. And this creates the opportunity for shortcuts that the human mind can’t achieve.

Take chess and go, two board games that have received much attention from the AI community over the past decades. Chess was once called the drosophila of artificial intelligence. In 1997, Deep Blue defeated chess grandmaster Garry Kasparov. But Deep Blue did not have the general cognitive skills of its human opponent. Instead, it used the sheer computational power of IBM’s supercomputers to evaluate millions of moves every second and choose the best one, a feat that is beyond the capacity of the human brain.
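Deep Blue’s real search ran on custom chess hardware with alpha-beta pruning and a handcrafted evaluation function; a toy minimax over a hand-made tree is enough to illustrate the exhaustive, evaluate-everything style of search (the tree and scores here are invented for illustration).

```python
def minimax(node, maximizing=True):
    """Exhaustively score a game tree given as nested lists.
    Leaves are integers (static evaluations); inner nodes are lists of
    children. Returns the value reachable with optimal play by both sides."""
    if isinstance(node, int):  # leaf: a static evaluation of the position
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Toy tree, two plies deep: the maximizer picks a branch,
# then the minimizer picks the reply that hurts the maximizer most.
tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(tree))  # best value the maximizer can guarantee
```

The key point is that nothing here resembles human pattern recognition: every branch is visited and scored. Deep Blue’s advantage was doing this at a scale (millions of positions per second) no brain can approach.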

At the time, scientists and futurists thought that the Chinese board game go would remain beyond the reach of AI systems for a good while because it had a much larger solution space and required computational power that would not become available for several decades. They were proven wrong in 2016 when AlphaGo defeated go grandmaster Lee Sedol.

But again, AlphaGo didn’t play the game like its human opponent. It took advantage of advances in machine learning and computational hardware. It had been trained on a large dataset of previously played games, far more than any human could play in a lifetime. It used deep reinforcement learning and Monte Carlo Tree Search (MCTS), along with the computational power of Google’s servers, to find strong moves at each turn. It didn’t do a brute-force survey of every possible move like Deep Blue, but it still evaluated far more positions at every turn than any human could.
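Full MCTS adds a search tree and an exploration policy (UCT) on top of random playouts, and AlphaGo further guided the search with neural networks. But the core Monte Carlo idea, scoring candidate moves by averaging random rollouts, can be sketched in a few lines (the game here is an invented toy, not go, and the function names are illustrative):

```python
import random

def rollout_value(state, move, simulate, n_rollouts=100):
    """Estimate a move's value by averaging random playouts from the
    resulting state -- the Monte Carlo part of MCTS, minus the tree."""
    return sum(simulate(state, move) for _ in range(n_rollouts)) / n_rollouts

def pick_move(state, moves, simulate, n_rollouts=100):
    """Choose the move with the highest estimated rollout value."""
    return max(moves, key=lambda m: rollout_value(state, m, simulate, n_rollouts))

# Toy stand-in for a game: each "move" wins with a fixed probability,
# and simulate() returns 1 for a win, 0 for a loss.
random.seed(0)
win_prob = {"a": 0.2, "b": 0.7, "c": 0.5}
simulate = lambda state, m: 1 if random.random() < win_prob[m] else 0
print(pick_move(None, ["a", "b", "c"], simulate, n_rollouts=500))
```

With enough rollouts, the statistics reliably surface the strongest move ("b" here) without the program understanding the game at all, which is exactly the contrast the article is drawing with human play.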

AlphaCode is an even more impressive feat. It uses transformers—a type of deep learning architecture that is especially good at processing sequential data—to map a natural language problem statement to thousands of candidate solutions. It then uses filtering and clustering to choose the 10 most promising candidates generated by the model. Impressive as it is, however, AlphaCode’s solution-development process is very different from that of a human programmer.
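DeepMind’s actual pipeline generates and executes source code at massive scale; the filter-then-cluster step can be sketched with plain Python functions standing in for generated programs (the function names and the toy task are illustrative, not AlphaCode’s API):

```python
def filter_candidates(candidates, example_tests):
    """Keep only candidates whose output matches every example test.
    Each candidate is a callable; tests are (input, expected_output) pairs."""
    return [c for c in candidates
            if all(c(inp) == out for inp, out in example_tests)]

def cluster_by_behavior(candidates, probe_inputs):
    """Group surviving candidates by their outputs on extra probe inputs,
    so behaviorally identical programs fall into one cluster."""
    clusters = {}
    for c in candidates:
        key = tuple(c(i) for i in probe_inputs)
        clusters.setdefault(key, []).append(c)
    return clusters

# Toy candidates for "double the input"; x**2 happens to fail the example test.
cands = [lambda x: x * 2, lambda x: x + x, lambda x: x ** 2]
kept = filter_candidates(cands, [(3, 6)])
clusters = cluster_by_behavior(kept, probe_inputs=[0, 1, 5])
print(len(kept), len(clusters))  # 2 survivors collapse into 1 behavioral cluster
```

Clustering matters because the model produces thousands of near-duplicate programs; picking one submission per cluster spends the limited submission budget on genuinely different approaches.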

Humans are problem finders, AIs are problem solvers

When thought of as the equivalent of human intelligence, advances in AI lead us to all kinds of wrong conclusions, such as robots taking over the world, deep neural networks becoming conscious, and AlphaCode being as good as an average human programmer.

But when viewed in the framework of searching solution spaces, they take on a different meaning. In each of the cases described above, even if the AI system produces outcomes that are similar to or better than those of humans, the process it uses is very different from human thinking. In fact, these achievements prove that when you reduce a competition to a well-defined search problem, then with the right algorithm, rules, data, and computation power, you can create an AI system that finds the right solution without going through any of the intermediary skills that humans acquire when they master the craft.

Some might dismiss this difference as long as the outcome is acceptable. But when it comes to solving real-world problems, those intermediary skills that are taken for granted and not measured in the tests are often more important than the test scores themselves.

What does this mean for the future of human intelligence? I like to think of AI—at least in its current form—as an extension of, rather than a replacement for, human intelligence. Technologies such as AlphaCode cannot conceive and frame their own problems—one of the key elements of human creativity and innovation—but they are very good problem solvers. This creates unique opportunities for very productive cooperation between humans and AI: humans define the problems and set the rewards or expected outcomes, and the AI helps by finding potential solutions at superhuman speed.

There are several interesting examples of this symbiosis, including a recent project in which Google’s researchers formulated a chip floorplanning task as a game and had a reinforcement learning model evaluate numerous candidate layouts until it found an optimal arrangement. Another popular trend is the emergence of tools like AutoML, which automate aspects of developing machine learning models by searching for optimal architectures and hyperparameter values. AutoML makes it possible for people with little experience in data science and machine learning to develop ML models and apply them in their own applications. Likewise, a tool like AlphaCode will free programmers to think more deeply about specific problems, formulate them as well-defined statements with expected results, and let the AI system generate candidate solutions that might suggest new directions for application development.
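AutoML systems use far more sophisticated strategies (Bayesian optimization, learned search policies, neural architecture search), but the simplest version of the idea—the human defines the objective and the search space, the machine tries configurations—is random search. This sketch uses an invented stand-in objective rather than a real training run:

```python
import random

def random_search(objective, space, n_trials=50, seed=0):
    """Sample configurations at random and keep the best-scoring one --
    the simplest form of the search that AutoML tools automate."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(n_trials):
        cfg = {name: rng.choice(choices) for name, choices in space.items()}
        score = objective(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

# Toy objective standing in for validation accuracy; in AutoML the human
# supplies the metric and the space, the tool supplies the search.
space = {"lr": [0.001, 0.01, 0.1], "layers": [1, 2, 3]}
objective = lambda cfg: -abs(cfg["lr"] - 0.01) - abs(cfg["layers"] - 2)
best, score = random_search(objective, space, n_trials=200)
print(best)
```

Note the division of labor the article describes: the problem (the objective and the search space) is entirely human-defined, and the machine’s contribution is evaluating candidates faster than any person could.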

Whether these incremental advances in deep learning will eventually lead to AGI remains to be seen. But what’s for sure is that the maturation of these technologies will gradually create a shift in task assignment, where humans become problem finders and AIs become problem solvers.

This article was originally published by Ben Dickson on TechTalks, a publication that examines trends in technology, how they affect the way we live and do business, and the problems they solve. But we also discuss the evil side of technology, the darker implications of new tech, and what we need to look out for. You can read the original article here.
