Meta recently announced a long-term research partnership to study the human brain. According to the company, it intends to use the results of this study to “guide the development of AI that processes speech and text as efficiently as people.”
This is the latest in Meta’s ongoing quest to perform the machine learning equivalent of alchemy: producing thought from language.
The big idea: Meta wants to understand exactly what’s going on in people’s brains when they process language. Then, somehow, it’s going to use those insights to develop an AI capable of understanding language.
According to Meta AI, the company has spent the past two years developing an AI system that processes brainwave datasets to glean insights into how the brain handles communication.
Now, the company’s working with research partners to create its own datasets.
Per a Meta AI blog post:
Our collaborators at NeuroSpin are creating an original neuroimaging data set to expand this research. We’ll be open-sourcing the data set, deep learning models, code, and research papers resulting from this effort to help spur discoveries in both AI and neuroscience communities. All of this work is part of Meta AI’s broader investments toward human-level AI that learns from limited to no supervision.
The plan is to create an end-to-end decoder for the human brain. This would involve building a neural network capable of translating raw brainwave data into words or images.
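To make that idea concrete, here’s a minimal sketch of what such a decoder could look like in practice: a small network that takes a window of raw sensor channels (say, MEG or EEG) and spits out scores over a vocabulary of words. Everything here is hypothetical; the channel counts, layer sizes, and vocabulary are stand-ins, and Meta hasn’t published this architecture.

```python
# Hypothetical sketch of a brain-to-word decoder: raw sensor channels in, word logits out.
# Shapes, layer sizes, and vocabulary size are illustrative assumptions, not Meta's design.
import torch
import torch.nn as nn

class BrainToWordDecoder(nn.Module):
    def __init__(self, n_channels=273, n_timesteps=200, vocab_size=10_000):
        super().__init__()
        # Convolve across time to pick up temporal patterns in the recordings.
        self.temporal = nn.Sequential(
            nn.Conv1d(n_channels, 128, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.Conv1d(128, 128, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis into one feature vector
        )
        # Map the pooled representation to a score for every word in the vocabulary.
        self.classifier = nn.Linear(128, vocab_size)

    def forward(self, x):
        # x: (batch, n_channels, n_timesteps) of raw sensor data
        features = self.temporal(x).squeeze(-1)  # (batch, 128)
        return self.classifier(features)         # (batch, vocab_size) word logits

if __name__ == "__main__":
    model = BrainToWordDecoder()
    fake_recording = torch.randn(4, 273, 200)  # four made-up trials
    print(model(fake_recording).shape)          # torch.Size([4, 10000])
```

The hard part, of course, isn’t the network; it’s whether raw brainwave signals carry enough decodable information about language in the first place.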
That sounds pretty rad, but things quickly veer into stranger territory as the blog post continues beneath a subheading titled “Toward human-level AI”:
Overall, these studies support an exciting possibility — there are, in fact, quantifiable similarities between brains and AI models. And these similarities can help generate new insights about how the brain functions. This opens new avenues, where neuroscience will guide the development of more intelligent AI, and where, in turn, AI will help uncover the wonders of the brain.
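For what it’s worth, those “quantifiable similarities” are usually measured with something like the encoding-model analysis sketched below: fit a regularized linear map from a language model’s activations to recorded brain responses, then check how well it predicts held-out data. The arrays here are random stand-ins for real recordings, and the specifics are an assumption about the general method, not a reproduction of Meta’s pipeline.

```python
# A hedged illustration of one common way to quantify brain/model similarity:
# linear regression from model activations to brain responses, scored on held-out data.
# All data below is random filler, not actual neuroimaging recordings.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_words, n_model_dims, n_sensors = 1000, 768, 50

model_activations = rng.standard_normal((n_words, n_model_dims))  # e.g., per-word model embeddings
brain_responses = rng.standard_normal((n_words, n_sensors))       # e.g., per-word sensor averages

X_train, X_test, y_train, y_test = train_test_split(
    model_activations, brain_responses, test_size=0.2, random_state=0
)

# Regularized linear map from model space to brain space.
encoder = Ridge(alpha=1.0).fit(X_train, y_train)
predictions = encoder.predict(X_test)

# "Brain score": correlation between predicted and actual responses, per sensor.
scores = [
    np.corrcoef(predictions[:, i], y_test[:, i])[0, 1] for i in range(n_sensors)
]
print(f"mean held-out correlation: {np.mean(scores):.3f}")
```

A positive held-out correlation is evidence that the model’s representations track something the brain is doing; it is a long way from that to “human-level AI.”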