
Neuroscience and AI's Future

How The Thousand Brains Theory of Intelligence may unlock machine intelligence.

Jeff Hawkins. Source: Numenta

What if the key to unlocking the future of artificial intelligence (AI) by achieving artificial general intelligence (AGI) lies in understanding how human intelligence works in the biological brain? Jeff Hawkins, an American inventor, scientist, entrepreneur, engineer, and author, thinks he knows the way forward. Hawkins is the co-founder and chief scientist at Numenta. On May 20, 2021, Numenta announced it had increased the speed of AI deep learning networks by 100 times using its sparse algorithms derived from neuroscience research, and it published its results in a new white paper.

In 2018, Hawkins introduced The Thousand Brains Theory of Intelligence, a sensory-motor theory, in which every part of the human neocortex learns complete models of objects and concepts by combining input with a grid cell-derived location, then integrating over movements.

According to the theory, the human brain contains roughly 150,000 cortical columns, each acting as a learning machine that learns a predictive model of its inputs by observing how they change over time. Cortical columns create map-like structures called reference frames, which are then filled with links to other reference frames. The neocortex is likely to have hundreds of thousands of models of objects in the world, and these models vote together to reach a consensus, hence the name of the theory. The Thousand Brains Theory of Intelligence was detailed in a paper published in 2019 in Frontiers in Neural Circuits.
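A toy sketch can make the voting idea concrete. The Python code below is purely illustrative and is not Numenta's implementation; the ToyColumn class, its (location, feature) object models, and the consensus function are hypothetical names invented for this example. Each simulated column stores its own models of objects and votes for every object consistent with what it currently senses, and a simple consensus step picks the object most columns agree on.

from collections import Counter

# Toy illustration (not Numenta's code): each "column" keeps its own models
# of objects, here as sets of (location, feature) pairs, and votes on what
# it is currently sensing.
class ToyColumn:
    def __init__(self, object_models):
        self.object_models = object_models

    def vote(self, observation):
        # Return every object whose model contains the observed (location, feature).
        return {name for name, model in self.object_models.items() if observation in model}

def consensus(columns, observations):
    # Combine votes from many columns; the most supported object wins.
    tally = Counter()
    for column, obs in zip(columns, observations):
        for candidate in column.vote(obs):
            tally[candidate] += 1
    return tally.most_common(1)[0][0] if tally else None

# Example: three columns sensing different parts of the same cup.
cup = {("rim", "smooth"), ("handle", "curved"), ("base", "flat")}
ball = {("top", "smooth"), ("side", "smooth"), ("bottom", "smooth")}
columns = [ToyColumn({"cup": cup, "ball": ball}) for _ in range(3)]
observations = [("rim", "smooth"), ("handle", "curved"), ("base", "flat")]
print(consensus(columns, observations))  # prints: cup

In the theory itself, of course, each column builds its models through movement and grid cell-derived location signals rather than hand-coded feature lists; the sketch only shows why many imperfect models voting together can settle on a single answer.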

In 2005, Hawkins co-founded Numenta with the mission to develop machine intelligence through neocortical theory. From 2002 to 2005, he directed the Redwood Neuroscience Institute, now housed at U.C. Berkeley. In 2004, he authored On Intelligence: How a New Understanding of the Brain Will Lead to the Creation of Truly Intelligent Machines with Sandra Blakeslee. Hawkins is considered a pioneer of the handheld computer industry that rose in the 1990s: he is the architect of the PalmPilot and the Treo smartphone, as well as the co-founder of Palm Computing and Handspring.

A Thousand Brains. Source: Numenta

In 1992, Hawkins was invited to speak at the Intel Corporation during its three-day annual planning meetings for hundreds of senior employees. Hawkins spoke presciently of a future in which pocket-sized computing devices costing between $500 and $1,000 would be owned by billions of people worldwide. Afterward, Hawkins was seated at a lunch table with Gordon Moore, the co-founder of Intel Corporation. He asked Moore what he thought of the talk. Not only did Hawkins not get a direct answer, but Moore avoided talking to him for the remainder of the lunch. “It soon became clear that neither he nor anyone else at the table believed what I had said,” wrote Hawkins in his latest book, A Thousand Brains: A New Theory of Intelligence.

Following is an interview that has been edited and condensed.

Cami Rosso: Do you feel validated now that smartphones are ubiquitous as you had predicted after your presentation at Intel and the uncomfortable silence at Moore’s table?

Jeff Hawkins: I don’t feel validated in the sense that there's always some level of self-doubt that you have, right? But I also felt at that meeting that was what, some 20 to 30 years ago, that it was inevitable that it was going to happen … I never really had a lot of doubt that the smartphone revolution was going to happen.

CR: How far off are we from AGI?

JH: It’s really, really hard to put a date on it. But what I feel very confident about now is that we have a blueprint for how to do it. We literally have written out a set of tasks and milestones in my company. And we talk openly about it; it’s nothing secret. I wrote about some of it in the book. So how quickly? It’s very difficult to say. There’s no doubt in my mind that in the later part of this century it’s going to be the dominant technology, the same way computers were in the last century. But is it 10 years away? Is it 20 years away? 25 years away? I don’t think it’s more than that.

Where are we? It’s not a binary thing, of course, where we basically have an agreed-upon paradigm of what intelligent machines are … One of the reasons why I wrote the book is to make that argument for what’s the paradigm for AGI. What’s AGI going to look like? How are we going to build this thing? What’s going to play out? And we’ll see how many people believe it.

But I have no doubt about it: It’s going to happen. And I’m pretty confident that it’s going to happen under the principles I wrote about. So, some of it will be wrong, but most of it will be right … It’s a matter of doing it correctly, and not rushing it.

CR: Can deep learning get us to AGI, or do we need something else?

JH: I don’t think so. I make the argument in the book that there are many things that have to change in deep learning. It’s not just a matter of tweaking it and doing more of the same. That position, by the way, is not too controversial.

There are a lot of very senior AI people who feel the same way: that we have a lot of roadblocks here. And I don’t think most people know what the next thing to do is, so in my book I propose the principles of how the brain works, how intelligence works.

What we’re doing at Numenta is starting with deep learning networks and seeing how far we can go by modifying them. Instead of throwing everything away, how do we start moving forward from where we are?

We have a whole theory on sparsity, and we’ve now been able to speed up existing deep neural networks by anywhere from five to over a hundred times, depending on the network architecture. They become more robust, and they’re less prone to adversarial attacks.
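As a rough illustration of what sparsity can mean in practice (this is not Numenta's actual algorithm, and the top_k_sparse function below is a hypothetical example), one can keep only the k strongest activations in a layer and zero out the rest, so that downstream computation only has to touch a small fraction of nonzero values:

import numpy as np

# Illustrative sketch of activation sparsity (not Numenta's implementation):
# keep only the k largest entries of a layer's activation vector, zero the rest.
def top_k_sparse(activations, k):
    if k >= activations.size:
        return activations
    threshold = np.partition(activations, -k)[-k]  # value of the k-th largest entry
    return np.where(activations >= threshold, activations, 0.0)

rng = np.random.default_rng(0)
dense = rng.standard_normal(64)      # stand-in for a dense layer's output
sparse = top_k_sparse(dense, k=8)    # only about 8 of 64 values remain nonzero
print(np.count_nonzero(sparse))

The intuition behind the reported speedups is that when both activations and weights are mostly zero, hardware and software can skip most of the multiply-accumulate work in the next layer; the details of how Numenta achieves this are in its white paper.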

We’re now adding dendrite theory that I wrote about in the book. We’re showing that we can solve another big problem in deep learning, which is continuous learning. Deep learning networks can’t be incrementally trained; you have to start over and train the whole thing again.

We think we can solve that problem. So those are the things we can do to improve the AI machine learning community today, continuing with the deep learning paradigm, but ultimately making it much better.

CR: Is this project your final frontier?

JH: I do have other things that I want to work on … One thing I will work on in the future, if I’m able to mentally and physically, and this gets into one of the other big questions of the universe, is the nature of time.

CR: What is the highest good?

JH: To me, my personal goal for good is to help humanity over the very long term: all humans around the world, very far into the future.

Copyright © 2021 Cami Rosso. All rights reserved.
