
Meta recently announced a long-term research partnership to study the human brain. According to the company, it intends to use the results of this study to “guide the development of AI that processes speech and text as efficiently as people.”
This is the latest in Meta’s ongoing quest to perform the machine learning equivalent of alchemy: producing thought from language.
The big idea: Meta wants to understand exactly what’s going on in people’s brains when they process language. Then, somehow, it’s going to use this data to develop an AI capable of understanding language.
According to Meta AI, the company has spent the past two years developing an AI system that processes datasets of brainwave recordings to glean insights into how the brain handles communication.
Now, the company’s working with research partners to create its own databases.
Per a Meta AI blog post:
Our collaborators at NeuroSpin are creating an original neuroimaging data set to expand this research. We’ll be open-sourcing the data set, deep learning models, code, and research papers resulting from this effort to help spur discoveries in both AI and neuroscience communities. All of this work is part of Meta AI’s broader investments toward human-level AI that learns from limited to no supervision.
The plan is to create an end-to-end decoder for the human brain. This would involve building a neural network capable of translating raw brainwave data into words or images.
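To make the idea concrete, here's a minimal sketch of what such an end-to-end decoder could look like: a network that maps a window of multi-channel brainwave recordings to a sequence of word tokens. This is not Meta's actual model, which hasn't been published in this form; the architecture, dimensions, and names below are all illustrative assumptions.

```python
# Illustrative sketch only: a toy end-to-end "brainwave to text" decoder.
# Architecture, dimensions, and names are assumptions, not Meta's model.
import torch
import torch.nn as nn

class BrainwaveDecoder(nn.Module):
    def __init__(self, n_channels=64, vocab_size=8000, d_model=256):
        super().__init__()
        # 1D convolutions compress the raw multi-channel signal in time.
        self.encoder = nn.Sequential(
            nn.Conv1d(n_channels, d_model, kernel_size=7, stride=2, padding=3),
            nn.GELU(),
            nn.Conv1d(d_model, d_model, kernel_size=7, stride=2, padding=3),
            nn.GELU(),
        )
        # A recurrent layer models temporal structure in the compressed signal.
        self.rnn = nn.GRU(d_model, d_model, batch_first=True)
        # A linear head predicts a distribution over word tokens per step.
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, x):
        # x: (batch, n_channels, n_timesteps) of raw brainwave samples
        h = self.encoder(x)          # (batch, d_model, n_timesteps // 4)
        h = h.transpose(1, 2)        # (batch, time, d_model) for the GRU
        h, _ = self.rnn(h)
        return self.head(h)          # (batch, time, vocab_size) token logits

# Toy usage: decode one batch of fake recordings into token predictions.
model = BrainwaveDecoder()
fake_eeg = torch.randn(2, 64, 512)        # two fake 64-channel recordings
logits = model(fake_eeg)
predicted_tokens = logits.argmax(dim=-1)  # greedy decode, for illustration
print(predicted_tokens.shape)             # torch.Size([2, 128])
```

The hard part, of course, isn't the architecture; it's getting labeled pairs of brain activity and the words a person was hearing or saying, which is exactly what the NeuroSpin dataset is meant to provide.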
That sounds pretty rad, but things quickly veer into stranger territory as the blog post continues beneath a subheading titled “Toward human-level AI”:
Overall, these studies support an exciting possibility — there are, in fact, quantifiable similarities between brains and AI models. And these similarities can help generate new insights about how the brain functions. This opens new avenues, where neuroscience will guide the development of more intelligent AI, and where, in turn, AI will help uncover the wonders of the brain.
But both efforts are going about it the same way: by trying to back into human-level intelligence through natural language processing (NLP).
It’s unclear how predicting speech from brainwaves will lead to human-level speech recognition, just as it’s unclear how GPT-3, or any future text generator, will lead to AGI.
There’s an argument to be made that, absent a clear goal, researchers are merely solving problems in the general area of human understanding on the way to the eventual promise of AGI.
But there’s also the argument that deep learning simply isn’t robust enough to imitate or emulate the human brain well enough to produce machines capable of human-level reasoning.
“Tesla AI might play a role in AGI, given that it trains against the outside world, especially with the advent of Optimus,” Elon Musk tweeted on January 19, 2022.
It’s about time big tech stopped framing every AI advancement as the direct bridge to the sentient robots of tomorrow. And, perhaps, it’s also time to consider a different approach to AGI.