Book Review: On Intelligence by Jeff Hawkins

I spent the last month reading Jeff Hawkins’s On Intelligence. It was one of the better books I’ve read in a while. I’ve read two other treatments of AI in the past, and (not surprisingly) both were mentioned in On Intelligence. The first was Bill Joy’s famous Wired article from April 2000. The second was Ray Kurzweil’s The Age of Spiritual Machines.

Hawkins argues that traditional approaches to artificial intelligence are wrong. Most traditional thinking about AI treats intelligence as an advanced algorithm: we build lots of complicated algorithms designed to solve everyday problems, and we fail again and again to reproduce what a child would consider mundane. The key argument of the book is that traditional approaches treat intelligence as a massively complicated computer system. Hawkins says this is all wrong. We need to understand how our brain works to understand intelligence, and the brain is quite plainly a memory system, not a computer system.

He spends a lot of time describing how the brain, specifically the neocortex, is what drives intelligence. He explains how uniform it is in structure and how there are no special areas of the neocortex for processing vision, or sound, or any of our other senses. Other parts of the brain are dedicated to these things, and what those parts do is adapt sensory signals from one sense (such as sight or hearing) and send them to the cortex in a uniform fashion. Essentially, the neocortex has an advanced structure but runs very simple algorithms, and those algorithms are applied to all of our senses in exactly the same way.

He also talks about how the brain has a hierarchical structure. The neocortex has six layers and is about as thick as a credit card. Lower regions of the hierarchy deal with low-level information coming right from the senses, while higher regions deal with more general information. Everything the brain stores is stored as something called invariant representations, which is how you can remember what a person looks like even though your senses see them in many different ways (angles, lighting, ages, etc.).
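To make the invariant-representation idea concrete, here’s a toy sketch of my own (this is my illustration, not a mechanism from the book): many raw views of the same shape collapse to a single stored form by discarding what varies, in this case position and scale.

```python
# Toy invariant representation: normalize a 2D shape so that
# translation and scale differences disappear, leaving one
# canonical form that all views of the shape map to.

def invariant(points):
    """Map a list of (x, y) points to a translation/scale-invariant form."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    # Discard position: center the shape on its centroid.
    cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)
    centered = [(x - cx, y - cy) for x, y in points]
    # Discard size: scale the largest coordinate to 1.
    scale = max(max(abs(x), abs(y)) for x, y in centered) or 1.0
    # Round so float noise doesn't break equality.
    return tuple((round(x / scale, 6), round(y / scale, 6)) for x, y in centered)

small = [(0, 0), (1, 0), (0, 1)]
big_shifted = [(10, 10), (12, 10), (10, 12)]  # same triangle, moved and doubled
print(invariant(small) == invariant(big_shifted))  # prints "True"
```

The brain’s version is of course far richer (it also handles rotation, lighting, aging, and so on), but the principle is the same: store what stays constant, throw away what varies.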

Hawkins also says that sensory signals flood up into the neocortex, and the neocortex floods even more information back down to the lower regions. This feedback system is a major element of Hawkins’s theories. He says the feedback system is how the brain predicts what its senses will experience next. The brain predicts the future, whether a few milliseconds ahead or 10 years ahead. Prediction is the basis of all intelligence. You can read all about the memory-prediction framework here.
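Here’s a tiny sketch of the prediction idea, in the spirit of the book but entirely my own simplification: memory records which input tends to follow which, uses that to predict the next input, and notices when the prediction is wrong.

```python
# Toy memory-prediction sketch: store observed transitions between
# sensory tokens, predict the most common successor, and report
# whether each new input matched the prediction.

from collections import defaultdict, Counter

class SequenceMemory:
    def __init__(self):
        self.transitions = defaultdict(Counter)  # token -> counts of what followed it
        self.prev = None

    def predict(self):
        """Most frequent successor of the last token seen, or None."""
        if self.prev is None or not self.transitions[self.prev]:
            return None
        return self.transitions[self.prev].most_common(1)[0][0]

    def observe(self, token):
        """Feed one sensory token; return whether it matched the prediction."""
        predicted = self.predict()
        matched = predicted is None or predicted == token
        if self.prev is not None:
            self.transitions[self.prev][token] += 1  # learn the transition
        self.prev = token
        return matched

mem = SequenceMemory()
for tok in "abcabcabc":
    mem.observe(tok)
# After seeing the repeating pattern, memory expects 'a' to follow 'c'.
print(mem.predict())  # prints "a"
```

The surprising cases, where `observe` returns `False`, are exactly the moments Hawkins would say grab our attention: the world didn’t match the brain’s prediction.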

So, rather than inventing a complicated calculus algorithm for a baseball-playing robot, where the robot analyzes frames from its video camera and calculates a ball’s trajectory and the physical movements it needs to make in order to catch it, you stick a memory system in the robot and have it learn what an incoming ball looks like and record what it needs to do in order to catch that ball. It will fail at first, but if the memory system is designed like the one in our human brain, it will slowly learn what works and what doesn’t over time and eventually master the task. Practice makes perfect.
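The contrast between the two approaches can be sketched in a few lines. This is my own minimal illustration, not anything from the book: instead of computing a trajectory, the robot stores (observation, action, outcome) episodes and, next time, recalls the action that worked for the most similar past observation.

```python
# Toy "learn by remembering" sketch: no physics, just episodes.
# The robot keeps only the actions that led to a catch and recalls
# the one whose observation was most similar to the current one.

import math

class CatchMemory:
    def __init__(self):
        self.episodes = []  # (observation, action) pairs that succeeded

    def record(self, observation, action, caught):
        """Store an attempt; failures are simply not kept."""
        if caught:
            self.episodes.append((observation, action))

    def recall(self, observation):
        """Return the action from the most similar successful episode."""
        if not self.episodes:
            return None  # no experience yet: the robot will fail at first
        return min(
            self.episodes,
            key=lambda ep: math.dist(ep[0], observation),
        )[1]

mem = CatchMemory()
mem.record((0.0, 1.0), "step left", caught=True)
mem.record((1.0, 0.0), "step right", caught=True)
mem.record((0.5, 0.5), "stand still", caught=False)  # failure, not stored
print(mem.recall((0.1, 0.9)))  # prints "step left"
```

Early on the memory is empty and the robot fails; with practice, more and more situations have a remembered answer nearby, which is the "practice makes perfect" loop in miniature.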

All of this was new to me and made perfect sense. Then Hawkins spends the last parts of the book explaining how traditional visions of AI are all wrong. This is where I was pissed that I hadn’t thought of all this before. It’s all just common sense. We are not going to build human-like robots at first. There is no use. Why pay 20 million bucks for a robot butler when a human butler will do a better job for far less cost? Why build a human-like robot at all? It would need to have exactly our senses, exactly our vulnerabilities, and exactly our life experiences in order to come close to being human-like.

Why not instead take advantage of the fact that we can build intelligent machines and equip them with senses beyond what a human could ever have? In this way, if we ever figure out how to create a workable hierarchical memory system that yields an intelligent machine, we’ll set this AI off to do things that a human-like robot would be just as ill-equipped to handle as a real human.

Some examples:

  • Create a feed from global weather stations and make this the vision sense for the AI. It will learn things about global weather patterns that humans cannot since we don’t have these senses.
  • Equip the AI with traffic monitors and it will learn things about traffic patterns on a grand scale.
  • Create a computer where CPU, memory, IO, and process thread information flows into the AI and it will learn about how to optimize a system in ways a human could never understand.
  • Have a computer live in a 10-dimensional world and experience it the way we experience our 3D/4D world. The AI will be able to unlock an understanding of these worlds that a human simply cannot.
  • and more

All in all, it was an easy read and had lots of new ideas. Go pick it up and we can discuss it!