
Brain, Mind and Cognition


On Intelligence,

by Jeff Hawkins & Sandra Blakeslee

Jeff Hawkins' way of explaining the word “intelligence” in his book is spectacular and fascinating to me. The main intention of the author is to explain human intelligence, and to show how an intelligent machine could be built, in a way the reader can understand. I think that is why the book is so comfortable to read. On every page I get the feeling that Hawkins' greatest wish is for the reader to grasp what he wants to say.

The basic idea of Hawkins is that the brain is a mechanism to predict the future. Specifically, hierarchical regions of the brain predict their future input sequences. The brain is like a feed-forward hierarchical state machine with special properties that enable it to learn.

After the introduction, he explains why previous attempts at understanding intelligence and at building intelligent machines have failed. In the middle of the book, he details how the physical brain implements the memory-prediction model; more specifically, he describes the individual components of the brain and explains how they work together. In the next chapter he discusses social and other implications of the theory, which may be the most thought-provoking section for many readers. At the end of the book he gives an outlook on how, in his opinion, the issue of building intelligent machines could develop in the future.

Already at the beginning of the book, where Hawkins explains why he is so interested in how the brain works, it became clear to me that this book would show me a new way of thinking about the intelligence of humans and of machines. Until now I had not thought about my own intelligence so carefully. It is simply accepted that some people are "smarter" than others, but only a few know what intelligence exactly means. It seems self-evident that we understand what we see, that we know what we did yesterday, that we can remember the faces of different people, and so on.

However, situations such as the artificial-hand experiment show that our senses can sometimes be fooled. Therefore, understanding the human brain is not quite as easy as it seems. It is very surprising that we know the lyrics of a song as soon as we hear the melody, or that we know which song comes next on a CD. While reading, I came more and more to the question: which part of the brain is responsible for what?

In this book there are many different approaches to the constituent parts of the human brain and to how they work together, all of which are very impressive. However, for me it is only the combination of all of them that makes the understanding of the human brain and of behavior so impressive. Because of that, it would be difficult for me to say that one particular thought or concept proposed by Hawkins is the most interesting to me. I think all of the ideas in the book are


important for understanding how a human brain works and whether it is possible to build an intelligent machine that works like a human brain.

Now I want to talk about the parts of the book that impressed me the most.

One is human memory; it is really fascinating how many things we can remember. The brain uses vast amounts of memory to create a model of the world. Everything you know and have learned is stored in this model. The brain uses this memory-based model to make continuous predictions of future events. It is this ability to make predictions about the future that is the crux of intelligence.

For example, we know that today is Monday, so it is logical that tomorrow is Tuesday. But why do we know that?

The neocortex is the part of the brain which is responsible for memorizing everything. It represents the sensory input, the motor areas responsible for movements, and large-scale association centers. How the neocortex handles this data is very important. Every day you do things in the same habitual way, for example: come home, put the key into the lock, turn the key, and the door opens. If anything deviates from the usual pattern (the door is a lot lighter, a different door handle), you recognize it immediately, because the neocortex compares memorized patterns with new ones.
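As a programmer's illustration of this pattern-comparison idea, here is a minimal sketch. Everything in it, the function names and the door routine included, is my own toy invention rather than anything from the book: a memory learns a fixed sequence of events, predicts the next one, and flags any deviation.

```python
# Toy sketch of the memory-prediction idea: a learned sequence of
# events lets us predict the next one and notice any deviation.

def learn_sequence(events):
    """Store which event usually follows which (a trivial 'memory')."""
    return {a: b for a, b in zip(events, events[1:])}

def check(memory, current, observed):
    """Predict the next event from memory and compare it with reality."""
    predicted = memory.get(current)
    if predicted == observed:
        return f"as expected: {observed}"
    return f"surprise! predicted {predicted}, but got {observed}"

# The everyday door routine from the text.
routine = ["arrive home", "insert key", "turn key", "door opens"]
memory = learn_sequence(routine)

print(check(memory, "turn key", "door opens"))          # matches memory
print(check(memory, "turn key", "door feels lighter"))  # deviation noticed
```

The point of the sketch is only the comparison step: the stored pattern is recalled and checked against the new input, and a mismatch is noticed immediately.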

But not every creature has one. A fish, for example, does not have a neocortex. Fish are not able to learn that nets mean death, or to figure out how to build tools to cut nets.

Many muscle commands are stored in our brain, for doing sports, taking a cup of coffee, or opening the door. I want to talk about the “catch the ball” example mentioned in the book. Our brain has to combine three things: the appropriate memory is automatically recalled by the sight of the ball; the memory actually recalls a temporal sequence of muscle commands; and the retrieved memory is adjusted as it is recalled to accommodate the particulars of the moment, such as the ball's actual path and the position of your body.

In addition, it is very important to mention that our memory does not know how to catch a ball from birth. We have to learn everything during our lifetime: speaking, walking, swimming, reading; everything we can do, we had to learn first. Furthermore, we do not learn just by seeing; we learn over years of repetitive practice, and the result is stored, not calculated, in our neurons.

Part of this memory lies in the DNA of the organism, which is why some creatures are able to learn to speak and others are not. But intelligence does not come from our DNA alone; it needs a lot of practice. Nevertheless, some brains probably have more cells or different kinds of connections than others. This means that some people are in a position to learn some things faster than others.

Albert Einstein had a brain that was measurably unusual. Some scientists discovered that his brain had more support cells, called glia, per neuron than average. It also showed an unusual pattern of grooves, or sulci, in the parietal lobes, a region thought to be important for mathematical abilities and spatial reasoning. Because of that, genetic factors play an


important role in the intelligence of a human. So some people can learn things faster than others, or can remember more.

Now, after presenting some parts of the book, I can only say that it contains many thoughts that are interesting to me, but their interaction is the main part.

After describing which ideas are the most appealing or interesting to me, I come to one of the most obvious questions you ask yourself after reading this book: did Hawkins' book stimulate thoughts in me about what it takes to build intelligent technical systems? To put it more precisely: do I feel like I could build intelligent technical systems, and do I want to spend more time on this issue in my future?

Some part of me says yes and some says no. When I read the book, I understood what Hawkins wanted to tell the reader; everything about human behavior seemed logical. But when I thought about it once or twice more, it was not so easy for me to understand everything. I noticed that I only grasped the individual parts, but not the connections between them. The hardest part of the book for me was understanding how the individual components (like memory and the neocortex) are connected, that is, what the signal flows look like. And I am of the opinion that this would be the significant problem in building an intelligent technical system.

Creating a memory, or a function similar to human movements, does not represent a problem. However, the interaction between recognizing a ball moving toward a person and the human reaction represents the main task, and the main problem, in the implementation of intelligent machines. The timing between the different parts of the brain, which has to be very fast, also hinders the building of intelligent machines. In addition, it must not be ignored that research on the human brain, as well as the implementation of intelligent machines, causes significant costs.

In conclusion, it can be said that this is a very attractive research field, which needs more time to be fully understood. This is especially recognizable in today's developments in robotics. Many films also show what people wish for in relation to intelligent machines.

In the book “On Intelligence” you get a good overview of the basics of intelligence and of how an intelligent machine could work. However, much more is needed to gain a real comprehension of this issue; otherwise Hawkins would not have spent so many years of his life on it. The curious thing is that nearly every person or professor Hawkins mentions had a different definition of intelligence. Besides, we must remember that this book was published in 2004, and a lot has changed since then.

In the end, I want to say that I liked reading this book. I had not engaged with this issue before and probably will not in the future, but I still got a little insight. I like that the book addresses the physical components of the brain, and not only the technical aspects. Hawkins has managed to make this rather difficult subject very easy for the reader. I am of the opinion that many people who are interested in the human brain and in the idea of building intelligent machines would be strengthened in their passion after reading this book.

Essay on “On Intelligence”

The book was really mind-blowing in so many respects that if I started writing about everything that impressed me, I would be downgraded at least a point for length and for not answering the questions. I like the perspective of the author, as it comes from an engineer with a lot of practical experience in computer science and mobile computing. This aspect has made the book very appealing, and the explanations and examples were also clear to me.

Probably the book will first help me improve myself, as I now have an idea of how my brain works, and afterwards I should try to apply the knowledge somehow. I know for sure that I will read more about the framework he is proposing, as it has now won my interest.

I think it is quite hard to name a single idea which impressed me. Probably the whole idea can be summed up by the four attributes of neocortical memory that are fundamentally different from computers and from whatever humans have come up with so far:

• The neocortex stores sequences of patterns.

• The neocortex recalls patterns auto-associatively.

• The neocortex stores patterns in an invariant form.

• The neocortex stores patterns in a hierarchy.

I like how this resembles object-oriented programming: the hierarchy being the classes with inheritance and all, the patterns being the functions and attributes, the invariant forms being the data classes, and the way the brain does not care where the signals come from (which sensor).
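The four attributes above can even be sketched as a deliberately tiny program. Everything in it, the class name, the "invariant form" rule, and the note sequence, is my own analogy for illustration, not Hawkins' actual model:

```python
# Miniature analogy for the four attributes of neocortical memory:
# sequences, auto-associative recall, invariant form, and hierarchy.

class Region:
    """One level of the hierarchy; stores sequences in an invariant form."""

    def __init__(self, parent=None):
        self.sequences = []   # stored sequences of patterns
        self.parent = parent  # a higher region sees more abstract patterns

    @staticmethod
    def invariant(pattern):
        # "Invariant form": ignore superficial variation (here: case/spacing).
        return pattern.strip().lower()

    def store(self, sequence):
        self.sequences.append([self.invariant(p) for p in sequence])

    def recall(self, partial):
        # Auto-associative recall: a fragment retrieves the whole sequence.
        cue = [self.invariant(p) for p in partial]
        for seq in self.sequences:
            if all(p in seq for p in cue):
                return seq
        return self.parent.recall(partial) if self.parent else None

top = Region()
bottom = Region(parent=top)
bottom.store(["Do", "Re", "Mi"])

print(bottom.recall(["re"]))  # a fragment recalls the whole stored sequence
```

The analogy is crude, but it shows all four attributes at once: the memory holds sequences, completes them from fragments, normalizes away superficial variation, and falls back to a higher region when the lower one has no match.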

From these four, I was probably most impressed by the theory behind invariant representations, because it totally changes the way we should think when developing intelligent applications (either software or even full robots). The idea behind an auto-associative memory has already been exploited, and it is clear that it


will not work in all cases. As the author mentions, in real life things never happen in exactly the same manner, so with a literal, exact-match memory it would be almost impossible to get a match. The example given in the book with the ball is very comprehensive. When a ball is thrown to you, your brain does not do all the maths and physics behind it to see where the ball will be and then call up the memory of the process of catching a ball. The brain has the ability to detect that a certain action is similar to something it already has a pattern for, and applies that knowledge.
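To illustrate why exact matching fails and why a brain-like memory should settle for the closest stored pattern instead, here is a minimal sketch. The binary patterns and the Hamming-distance measure are my own toy choices, not anything from the book:

```python
# Toy auto-associative recall: a noisy input pattern retrieves the
# closest stored pattern instead of requiring an exact match.

def hamming(a, b):
    """Number of positions where two equal-length patterns differ."""
    return sum(x != y for x, y in zip(a, b))

def recall(stored_patterns, noisy_input):
    """Return the stored pattern closest to the (possibly corrupted) input."""
    return min(stored_patterns, key=lambda p: hamming(p, noisy_input))

# Two stored binary patterns, e.g. snapshots of earlier ball trajectories.
stored = [
    (1, 1, 0, 0, 1, 0),
    (0, 0, 1, 1, 0, 1),
]

# The current situation never repeats exactly: one bit is flipped.
observed = (1, 1, 0, 1, 1, 0)

print(recall(stored, observed))  # the closest stored experience wins
```

An exact lookup of `observed` would find nothing, while the nearest-match recall still retrieves the right experience; that tolerance to variation is the whole point.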

The concept in the book which really impressed me was the example of a person thinking about going into the kitchen of their house. First you have to imagine yourself doing all the required actions, and then the brain tries to make those predictions come true by actually triggering all the movements and actions.

The applications which we call artificial intelligence today have not been so impressive, because I think that a current robot is not intelligent at all; rather, the intelligence lies with the programmer or team of programmers. I always considered that way too much work is needed for some device or piece of software to be able to act “intelligently”. The solution was not simple; in fact it is closer to a "brute force" method, if we think about the fact that a lot of processing power is needed to compute all the details. That is not satisfying.

To some extent, I consider that the book "has ruined" the idea in my head about intelligent solutions. Recently I was presented with a problem at work in which I am asked to develop intelligent software, and all I can think of is the framework presented in the book. Probably I will have to actually implement something which resembles the approach described in the book and experiment, to get a glimpse of the possibilities.

Right now I have almost no clue how one would start tackling such a problem. I have to start reading some more books and articles. It is also very interesting how the brain makes predictions by analogy to past experiences. Given this, I totally


agree with the author when he mentions that we imagine new technology will be used to do something familiar, only faster and cheaper. We need to first experiment, get results, and see where the technology goes. In my opinion, one aspect is very important: we should not start to develop artificial intelligence just to solve a problem like speech recognition. This would only limit us in seeing the full potential, and could even take us in a totally wrong direction (the author stated this at the beginning, when he said that people started to think of brains as computers). There will definitely be someone who, after seeing all the data from the experiments, will come up with some ingenious and creative way to put it to good use.

I am now very curious whether a "memory" similar to the one our brains possess can be simulated on a computer, in order to be used for very small tasks. This pattern recognition, alongside the learning process, would fit perfectly as a replacement in certain applications where the system is just too complicated to be observed by any human (only certain specialists can do it). By developing such software, costs would drop for companies, and the specialists would be able to do something else, more valuable.

First, I think we need to really understand how the human brain works: how the number of layers, the number of neurons, the number of connections, and the other parts of the brain (like the thalamus) relate to each other. After completing the framework, a certain algorithm needs to be created: an algorithm which tells us that if we want to make, for example, an application able to do speech recognition for approximately 100,000 words, we need x layers and y neurons, out of which y1 need to be of some type and need to have connections from layer 1 to layer x, and so on. This will be very important from the optimization point of view. The training part will also somehow need to be shortened. I don't think we can wait a couple of years, maybe more, until some memory is trained and ready for mass production. Presumably, in the beginning, it will mostly be like electric cars today: they have good potential, but they are not optimized, they take way too long to charge, the speed is not so good either, not to mention the price, which is too high right now to actually make a difference.


Jeff Hawkins proposes several interesting topics in his book, “On Intelligence”. He explains at length a top-down overview of human learning and intelligence. The most interesting topic by far, in my opinion, does not have anything to do with the function of the brain, though: it is the concept of loosening one's attachment to one's educational background. As an electrical engineer, I always tried to interpret findings about the human brain, about intelligence, social interaction, and many more topics from an engineer's point of view. Just like Hawkins described his initial problems with computer scientists like himself, I always tried to find analogies between new ideas and old knowledge from my studies. It was simpler, more convenient, and less of a mental step to take. I never quite saw it as a drawback: it actually enabled me to understand certain principles much faster than, say, a student of biology. An ear uses principles very similar to the amplifier in my car's stereo; why not imagine some of its functions as a tiny, biological amplifier? Why not interpret the ear's way of regulating noise like a system from automated control engineering? It definitely helps to understand the basic function of the ear, at least for me, but I didn't see the danger: over-simplification and misinterpretation, combined with losing sight of the overall picture. This applied not only to my understanding of the ear, but also to other subjects, derived mainly from psychology and biology. But it went even deeper than just the comprehension of principles. Getting back to the example of the human ear: without actually realizing it, I had thought of the 'construction' of the ear by 'nature' as being just like the construction process for a car stereo in a big company. My mental image was somewhere close to a guy sitting in a cubicle, thinking about how to design a perfect receiver of audio input, later coming up with the ear and shouting 'Eureka!'.
Hawkins' book led me to remember those mental images I had made, and to correct them. Being an engineer, I had already made assumptions that did not really manifest that way in reality. The ear, of course, had evolved over time, and was not constructed by some imaginary engineer in ages past. Hawkins very convincingly pointed out that, more often than not, a top-down view combining different sciences is advantageous. Reading the book, I realized how easy my rather technical view of biological subjects had made my studies of some of them, but how difficult it had made other considerations. After reading the book, I now see neural networks not only with the eye of an engineer, but also try to think about them as a psychologist would, or as a human in general. Each of us has a brain available, so why only think about it from a technical viewpoint? Many different ideas developed in my head through that multi-disciplinary view of my knowledge, especially in areas where engineering overlaps with biology. Why not try, just like Hawkins, to build intelligence instead of imitation? Especially if our technical imitations spawned a lot of interesting spin-offs, but remained rather pale imitations for a long time? Why not, on the other hand, playfully, just like nature, experiment

with things similar to nature, instead of always seeing them as a solution to the world's most pressing problems? Great minds like Goethe are said to have been jacks-of-all-trades, to have dabbled in many sciences and arts and to have included the best aspects of each in their work. Hawkins, with his top-down view, blurring the borders between his own technical background and other sciences, is not unlike them in that way. I assume he will get a lot of recognition for this ability to see beyond his own nose, even if, or especially if, some of his predictions about intelligence turn out to be wrong. Theories exist to be proven wrong, or to gather more evidence in their favor. But a way of solving problems, a new way of interpreting and extending theories, is in my opinion a masterpiece in itself. And conveying this underlying principle in a book that also explains many other topics makes it an especially interesting read. On the other hand, Hawkins' book made me contemplate a lot about the term “intelligent technical system”. I guess the first step in developing one would be to define what “intelligent” means, as even Hawkins, in my opinion, didn't give a satisfactory answer. For some people, the Turing test, or the Chinese Room, might be a good starting point, with the hurdles for really counting as intelligent simply being raised later on. Or is real human intelligence, our way of hierarchically storing data and later making predictions about the future, the only real starting point? I guess this will remain a topic of discussion for quite some time, which does not have to be a bad thing. Sometimes it is more difficult to try to achieve the overall, top-down goal from scratch. Maybe altering our bottom-up approaches, though with the overall goal still in mind, will someday lead to an acceptable result. Maybe our knowledge of brain, mind and cognition, but also of computational intelligence and hardware, needs to increase before a breakthrough is achieved.
But in any case, a broader view, as mentioned earlier, will be of benefit. I'd consider including more 'outsiders' in computational intelligence research to be a first step. Why not ask a psychologist for help? Even though the vocabulary will differ quite a lot, and misunderstandings and problems will turn up in the beginning, a task as daunting as creating real 'intelligence' might not be tackled with just engineering minds working on it. I would even go as far as proposing more classes related to biology or psychology for engineers interested in the subject (and vice versa), increasing the chance of a proper understanding between the different branches of study. Engineers are often seen as one-track specialists by other people, not without reason. This will have to change, to my mind, if we want to see progress in artificial intelligence any time soon. Results will be found by large, diverse teams of great minds, or at least by great minds with a very broad spectrum of abilities, and not by a single, very focused engineer sitting in a back room perfecting his latest algorithm. On the other hand, technology, at least for interpreting and finding knowledge about the human brain, will have to improve in some ways. A less static, more flexible view of the

human brain is of the essence to get a proper understanding of our cognitive functions, not only some still images derived from a stationary MRI device. I'm convinced, especially with the recent developments in biological imaging in mind, that these improvements are possible within the next ten years, and that even more interesting results will appear because of them. Who would have considered modern-day MRI possible some decades ago? I guess we can expect some epochal inventions and results still to come, and it will be difficult to predict them, even with a prediction machine as advanced as our brain at our disposal. Yet again, these inventions will have to be judged from a broad view, and not just by an engineer happy about colourful pictures and his pay-check, or a doctor happy to cure a certain disease, as important as these aspects might be on their own. It will take some effort to find the most promising candidates for the task of identifying and interpreting cognitive architecture. But this, of all the requirements for better technical intelligence, is the most easily achievable one. Lastly, I'm afraid we will eventually have to let go of some fancy ideas the movies got us into. When developing technical intelligence, we should perhaps get away from C-3PO and other very human-like entities. Maybe an advanced version of Siri, able not only to give predefined answers but also to understand and learn about human conversation, would be enough, making a perfect conversation partner for the ever-increasing number of lonely people in the western world, or stimulating philosophical questions as artificial intelligences interact and speak with each other. Maybe reaching each and every aspect of intelligence is too much to ask, and an intermediate solution is more reasonable in our lifetimes. Or in the lifetimes of our children.
While keeping our own abilities in mind, finding a machine intelligent in some ways might be sufficient; maybe the goal of rebuilding the human mind from scratch is simply too much. And maybe it will even be important, to a certain extent, to then think about how to treat those machines. If they are intelligent, should they be considered to have the same rights as other sentient beings? At which point of development should they perhaps even count the same as a human being, if ever? Maybe it will turn out that it's not a bad thing not to have to worry about some of the moral issues of the discussed topics right now. And maybe another Jeff Hawkins, a few years from now, will formulate theories and concepts for solving these philosophical questions. Until then, his book mainly stimulated one last thought in me: that research on technical intelligence will need some of the best brains in the world over the next few decades, and that there is a vast market, a vast number of research topics, and a vast number of answers waiting somewhere out there for engineers, doctors, and people simply interested in the topic alike.

Essay

Now your neocortex is reading this essay. In this book, Hawkins first talks about his research in his early years. He indicates that many biologists and computer scientists went the wrong way in their research: biologists tend to reject and ignore the idea of theorizing about the brain, and computer scientists often don't believe they have anything to learn from biology. In other words, interdisciplinarity plays an important role in science, and the research environment is really hard. There are many neuroscience centers in the world, but no others are dedicated to finding an overall theoretical understanding of the neocortex, the part of the human brain responsible for intelligence. He is pursuing a dream that many people think is unattainable. He refers to his approach to studying intelligence as 'real intelligence' to distinguish it from 'artificial intelligence'. In contrast to Hawkins, many AI scientists tried to program computers to act like humans without first answering what intelligence is and what it means to understand. They left out the most important part of building intelligent machines: the intelligence! Intelligence is not defined by intelligent behavior. Computers and brains are built on completely different principles. One is programmed, the other is self-learning. One has to be perfect to work at all, the other is naturally flexible and tolerant of failures. One has a central processor, the other has no centralized control. In all cases, successful AI programs are only good at the one particular thing for which they were specifically designed. They don't generalize or show flexibility, and even their creators admit they don't think like humans. Even today, no computer can understand language as well as a three-year-old, or see as well as a mouse. The ultimate defensive argument

of AI is that computers could, in theory, simulate the entire brain. A computer could model all the neurons and their connections, and if it did there would be nothing to distinguish the "intelligence" of the brain from the "intelligence" of the computer simulation.

In the second chapter the author first indicates that behavior is a manifestation of intelligence, but not the central characteristic or primary definition of being intelligent. Intelligence is something that happens in your head; behavior is an optional ingredient. Further, he points out that understanding and researching the neocortex is the most important first step.

Hawkins thinks that all the essential aspects of intelligence occur in the neocortex, with important roles also played by two other brain regions, the thalamus and the hippocampus. He proposes that the cortex uses the same computational tool to accomplish everything it does: there is a single powerful algorithm implemented by every region of the cortex. If you connect regions of cortex together in a suitable hierarchy and provide a stream of input, it will learn about its environment. Therefore, there is no reason for intelligent machines of the future to have the same senses or capabilities as we humans.
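The "one algorithm, many regions" idea can be sketched abstractly. The grouping rule below is an invented stand-in for the real cortical algorithm; the point is only that every level of the hierarchy runs the exact same function on the output of the level beneath it:

```python
# Sketch of "one cortical algorithm" applied at every level of a
# hierarchy: each region runs the *same* function, just on the
# output of the region below it.

def region(stream, chunk_size=2):
    """The single, shared algorithm: group the input stream into
    larger patterns and pass those up as names (tuples)."""
    return [tuple(stream[i:i + chunk_size])
            for i in range(0, len(stream), chunk_size)]

# A raw sensory stream entering the lowest region.
stream = ["a", "b", "a", "b", "c", "d", "c", "d"]

level1 = region(stream)   # pairs of raw inputs
level2 = region(level1)   # pairs of pairs: still the same algorithm

print(level2)
```

Nothing in `region` knows whether the stream is sound, vision, or something else entirely, which mirrors the claim that future machines need not share our senses.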

Brains are pattern machines. It's not incorrect to express the brain's functions in terms of hearing or vision, but at the most fundamental level, patterns are the name of the game. No matter how different the activities of the various cortical areas may seem from each other, the same basic cortical algorithm is at work. And the brain is not a calculator; it simply retrieves the answer from memory. In total, this memory has three characteristics: the first is that we always remember things as sequences; the second is auto-association; and the last is invariant representations. Real intelligence works just like this.

Then he describes a much stronger proposition. Prediction is not just one of the things your brain does. It is the primary function of the neocortex, and the foundation of intelligence. The cortex is an organ of prediction.

Now I will describe my personal idea of how to build an intelligent machine. The important characteristics of an intelligent machine are that it can borrow knowledge from nature, and that it can compare input patterns with its memory. We should be able to teach it easily, as we teach children. Through our teaching the system should build its own model of our world. But this is not the only way the machine recognizes the world: it should also have the skill of self-learning, observing what happens around it. These are the most important characteristics of an intelligent machine.

To realize this we need about 8 trillion bytes of memory. Because silicon chips are small, low-power, and rugged, they are the best choice. Then we should build a communication system in which each part can communicate with every other part, without overly complicated circuitry. To quote an example of the author: the telephone system is a really good system; we can make phone calls easily through high-capacity lines. So the connecting lines of the intelligent system should be high-capacity and high-speed, and there must be many feedback controls in the system. The algorithm plays an important role. The author says the hearing and seeing areas of our brain run the same algorithm: hearing cells rewired to the seeing area can also learn to see the world. In our intelligent system I think we can likewise use a single global algorithm, one that compares input patterns with memory. The sensors and the corresponding processors are not the same: the processors convert analog signals to digital signals, so the central processor can compute all the input signals in the same way.
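The core capability described above, storing sequences of patterns and comparing new input against them, can be sketched in a few lines. This is a minimal toy model of my own, not Hawkins' actual algorithm: it just remembers which pattern tends to follow which, and predicts the most frequent successor.

```python
# Minimal sketch (not Hawkins' algorithm): a memory that stores sequences
# of patterns and predicts the next pattern from the current one.
from collections import defaultdict

class SequenceMemory:
    def __init__(self):
        # Map each observed pattern to the patterns that followed it,
        # with counts, so the most frequent successor wins.
        self.successors = defaultdict(lambda: defaultdict(int))

    def learn(self, sequence):
        for current, nxt in zip(sequence, sequence[1:]):
            self.successors[current][nxt] += 1

    def predict(self, pattern):
        followers = self.successors.get(pattern)
        if not followers:
            return None  # no memory of this pattern -> no prediction
        return max(followers, key=followers.get)

memory = SequenceMemory()
memory.learn(["do", "re", "mi", "fa"])
memory.learn(["do", "re", "mi", "sol"])
memory.learn(["do", "re", "mi", "fa"])
print(memory.predict("mi"))  # most frequent successor: "fa"
```

Teaching such a system "like a child" would simply mean feeding it many sequences; its model of the world is whatever statistics of succession it has accumulated.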

Essay

“On Intelligence” – Jeff Hawkins

1. What do you think is the most interesting thought/concept proposed by Hawkins?

In my opinion, the most interesting concept proposed by Hawkins is the "theory of the patterns stored in a sequence fashion in the neocortex". Storing and recalling sequences are necessary for making predictions, recognizing time-based patterns and generating behavior.

The fact that memories are stored and recalled in a time-based sequence is probably a key attribute of many, if not all, cortical areas.

Before reading the book I had never thought of the brain this way, and I think the book captured my interest in learning more about it, its components and the way everything works together; about the way the synaptic connections between the neurons work and how the neurons support the stored memories.

Neurons are the basic information processing structures in the central nervous system. They process all of the motor information, all of the sensory information (smell, taste, sight, hearing, touch) and all of the cognitive information through which we are able to reason, think, dream and do everything we do with our minds. They communicate with one another via synapses. Because of the enormous amount of information stored in our brain, only a limited number of synapses and neurons in the brain are playing an active role in memory recall at any one time.

Through the example of telling a story, or an apparently simple task such as reciting the alphabet backwards, Hawkins succeeded in explaining, and made it possible for me to visualize, his theory that information is stored as a sequence of patterns in the neocortex.

One can also observe this in one's own actions. When doing something recurrently we tend to follow the same pattern. We don't even notice it anymore, but when we try to change it we find that it gets difficult. We can force ourselves to change the way we do it, but we have to stay focused. If our focus drifts away we return to the way we were accustomed to doing it.

The fact that you need to finish one part of a story before you can start telling the next one is also a consequence of the theory mentioned above. Every pattern evokes the next one, and this is also related to the way in which memories are stored. Without telling the previous part you cannot recall how the story will continue.

Another exemplification of his theory can be found in everyday life. When thinking about past experiences, one notices that one recalls them in much the same way and sequence as they happened. I think it is logical that the things that happen first have priority in being stored. This allows the brain to gain quick access to past information when it needs to be integrated with incoming sensory perceptions. Things that happened first are more important because they predict what comes later. Particular neurons are activated, which in turn activate the next neurons.

I think stored memories are an important feature of mammals, and a feature that makes us "intelligent". They allow us to learn from previous experiences in order to improve and to evolve.

I can also identify with an example found in the book: remembering the lyrics of a song. Sometimes when I want to recall a certain line of the lyrics I have to sing the whole song in my head until I can remember the line I was looking for. From time to time I wondered what this could mean and why it was like this, but I never found an explanation, and I didn't suspect it was because of this feature of the neocortex.
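The lyric-recall experience can be sketched as a chain of associations in which each line only evokes the next, so reaching a given line means replaying the song from the start. This is a toy illustration of mine, not the book's mechanism:

```python
# Toy model of sequence recall: each stored line only points to the next,
# so finding a particular line means replaying from the beginning.
song = ["line one", "line two", "line three", "line four"]
links = {a: b for a, b in zip(song, song[1:])}  # each line -> the next line

def recall(target, start="line one"):
    steps, current = 0, start
    while current != target:
        current = links[current]  # follow the association chain
        steps += 1
    return steps  # how much of the song we had to replay

print(recall("line four"))  # 3 steps: we replayed everything before it
```

There is no direct lookup of "line four"; the only path to it runs through the lines that precede it, just as with the lyrics.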

Another interesting point is his claim that truly random thoughts don't exist and that memory recall almost always follows a pathway of associations. This also works hand in hand with Hawkins' theory of the memory-prediction framework.

I found it very intriguing to learn that we anticipate the phrases and individual words we use, and all our daily actions. Previous experiences are memorized as invariant patterns, in a sequence fashion and in a hierarchical form, in the neocortex, so the brain builds a model of the world around it and predicts outcomes based on the stored data. A prerequisite for the memory-prediction framework to work is the theory of the patterns stored in a sequence fashion in the neocortex.

2. Does Hawkins' book stimulate thoughts inside of you about what it takes to build intelligent technical systems?

Yes, Hawkins' book stimulates thoughts inside of me about what it takes to build intelligent technical systems. I agree with his statement that there is no reason that intelligent machines built in the future should look or act like a human. I share Hawkins' opinion that before we start building intelligent machines it is important to first understand how the human brain works. Even if we don't build them as a copy of the human, it is wise to use the cortex as a model for building intelligent machines and thus get inspired by nature.

In my opinion, the biggest challenge in building intelligent machines is the way we have to engineer sensors. I think that, in order for a machine to be called "intelligent", it has to know how to adjust its actions based on what it senses. A machine can be useful if it knows how to act and react in certain cases. Even if we implemented all the senses humans have, I don't see how the machine would be capable of processing the information on its own and acting proactively.

I also think that there is no benefit for us in building intelligent machines which are built like a human, or which interact with us in human-like ways. My belief is that we should see machines as tools to improve and ease our lives. We should try to build them to do what people cannot do, or what is too dangerous for people to do. An important advantage when building intelligent machines is that we can locate the memory system of a machine remotely from its senses. Thus, we can build them to help us in dangerous environments, e.g. when defusing a bomb.

Deep learning architectures seem to me a good approach for succeeding in building intelligent machines. However, a big mystery remaining for me is how artificial neural networks will overcome the big issue of "connectivity".

Essay

“On Intelligence” – Jeff Hawkins

1. What do you think is the most interesting thought/concept proposed by Hawkins?

The most interesting thought to me was that my consciousness is just the state of a hierarchically structured memory, called the neocortex. The idea that my consciousness, the awareness of myself as an individual with a character, a mind, a personality and creativity, sensing the environment and taking actions, would be the result of just a specially structured memory, some biochemical hardware device.

My identity, the way I think, the fact that I think and the fact that I think about my own thinking, are naturally just results of that hardware, that neocortex. It's all just the wiring inside the architecture described by Jeff Hawkins. Can it be that simple? All it needs to make the difference between my mind and a fly's is a grid which features properties such as a hierarchy, invariant storage of patterns, auto-associative memory and prediction? It will then take some time to learn and voilà, out comes me? More or less. Of course he states that it takes a lot more to be a human being. But then the consciousness, and what I consider as myself, is the configuration in that grid?

It is both astonishing and yet unsurprising to me. I always felt there couldn't be any magic or soul to what it means to be a person: that particular person you are, not your sister, your best friend or your neighbour. I was always aware the difference would be in the brain. All people are built quite similarly, with a similar brain architecture. The chemistry is all the same. But still each adult differs; even twins do. It was always clear to me that the character as well as the intellect of a person derive from what he or she has learned and experienced. Those experiences had to be stored somehow to influence our further thoughts and reactions: reactions based on decisions we make because we are conscious.

A newborn baby has no idea of itself; the mind is quite empty. Not until some age in childhood does it understand its situation as a human being in a social environment. Then, once it gains a grasp of its own personality and the role it plays in the world, its consciousness grows. At least that is what I remember experiencing. And to me that is the most prominent indication that my brain is intelligent: it keeps learning from the inputs given and understands more and more of the world around it.

That learning could be implemented in a structured memory is what I find appealing in Jeff Hawkins' book. You would need to organize a special hierarchical architecture, along with a few feedback and strengthening mechanisms, in a memory to implement intelligence. This system would then be able to learn and understand: understand what the system itself is to its environment and what it does, why it does what it does. In my understanding of what Hawkins describes as an intelligent, brain-like system, this gain of understanding of the system itself is implied, given a long enough learning phase.

What makes the idea I declared most interesting in the book so interesting to me is the consequence for machines to develop consciousness. Because, given you build a brain-like system which is intelligent, and hence able to learn and understand what it does for whom, why it does it and how it's done, it has what I consider consciousness. How would a system that has a consciousness behave; would it gain a personality? Would it become egoistical? Some people could consider egoism a valid symptom of intelligence. At which level would that consciousness stand compared to the one of humans, animals or lower animals? Would you kill if you turned the machine off? These are thoughts which came to me whilst reading the book. Probably my consciousness is just the state of that electrical-biochemical hardware in my head.

2. Does Hawkins' book stimulate thoughts inside of you about what it takes to build intelligent technical systems?

Basically, the book brings three thoughts to my mind:

1. It would not only be very interesting but rather necessary to verify his theory. I guess one could write a software simulation of an intelligent system. The description of the neocortex in his framework is already very object-oriented. I imagine one could write a sort of software library which implements the particular mechanisms and entities in his framework. Then instantiating a few hundred thousand neurons on a fast computer could give a nice simulation of a small brain, or at least a cortex-like system. It would be interesting to see whether it would then start learning. It could also give hints on flaws in Hawkins' framework of intelligent systems; he already states that he expects there to be several flaws.
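A skeleton of such a simulation library might look like the following. The names (`Region`, `feed`) and the learning rule are purely illustrative assumptions of mine, not taken from the book; the sketch only captures the object-oriented flavor: regions learn local transitions and pass unexpected input up the hierarchy.

```python
# Hypothetical skeleton for simulating a small cortex-like hierarchy.
# Names and the learning rule are illustrative, not from Hawkins' book.
class Region:
    """One cortical region: learns which input pattern follows which."""
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent        # next region up the hierarchy
        self.transitions = {}       # last pattern -> expected next pattern
        self.last = None

    def feed(self, pattern):
        predicted = self.transitions.get(self.last)
        if self.last is not None:
            self.transitions[self.last] = pattern  # learn/update transition
        self.last = pattern
        # Unexpected input is escalated to the parent region, mirroring
        # the framework's idea that only surprises travel up.
        if predicted != pattern and self.parent is not None:
            self.parent.feed((self.name, pattern))
        return predicted

top = Region("association")
v1 = Region("V1", parent=top)
for _ in range(3):                  # repeated exposure to the same sequence
    for p in ["edge", "corner", "edge", "corner"]:
        v1.feed(p)
# After training, V1 predicts the familiar alternation locally.
print(v1.feed("edge"))
```

Scaling this to "a few hundred thousand neurons" would of course need far richer state per unit, but the class structure shows how naturally the framework maps onto code.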

2. Another important point would be to adapt an implementation of a neuronal system to its specific application. As also stated in the book, one could decide whether to keep the system learning or freeze it at any point to keep its learning status. This, I guess, will be a highly demanded property of such systems. There would also be a scalable parameter in the capacity of a system. Since the tasks a system can be dedicated to are of different complexity, an intelligent system would only need a reasonable amount of neurons.

3. In order to build intelligent technical systems it would be very sensible to build them on a silicon base, since the technology for that is already there. I guess a very promising approach would be to find structures on silicon which realize elementary functions or mechanisms in the framework Jeff Hawkins gives in the book. Just as a transistor realizes the function of amplification, something else could constitute, basically, an axon. If an adaptation of the framework weren't feasible at the silicon level, it may be at the logic level. I'm sure the lower the level at which one finds an adaptation, the better the performance of the implemented design of an intelligent system would be.

On Intelligence: assignment work

Q. What do you think is the most interesting thought/concept proposed by Hawkins?

Ans: Hawkins proposes several ideas in the book "On Intelligence", authored by him in association with Sandra Blakeslee. In my opinion the single most important thought is the way he has tried to define intelligence, through his analogies and deductive logic, in an easy-to-understand way, targeted at a larger audience. He proposes that the basis of intelligence is prediction, and not reactive behavior, which has long been assumed to be the proof of intelligence, as in the Universal Turing Machine. The prediction-based hypothesis is the cornerstone of the book in my opinion.

Intuition tells us that we humans are intelligent because we can react in a certain way depending on the inputs that we receive via our senses. This has been the driving force behind the Artificial Intelligence approach to building intelligent machines, but as we know now, this approach has not produced any remarkable result. The fallacy is that intuition does not always give us the correct interpretation. Glancing at history we see a lot of such instances. Some intuitions which we now know were incorrect are that the earth is flat and at the centre of the universe; it was also believed that it is the sun that revolves around a static earth. Kepler, who first discovered the elliptical orbit, believed that the planets move on a large plane. Thus we can see that intuition does not always lead us to correct conclusions.

While we can easily recognize any face, almost instantly, there is not a machine in the world right now which can do this as well as us. All a machine can do is try to match the observed face against those saved in the template repository of the program and find a close match. Even this works only under a lot of constraints, like the lighting conditions, the orientation of the face, etc., and both the accuracy and the speed are low. The same goes for other AI attempts like speech synthesis and pattern recognition; these AI-based solutions still cannot make out all the 3D objects in a video feed. Another approach that scientists tried was the neural network approach. Though this approach showed more promise, it came nowhere near achieving the objective of making intelligent machines. This was partly because the neural networks studied and researched were too simplistic, had few hierarchies and were not auto-associative. Proponents of AI suggest that with increased computing power the AI solutions would be able to perform these tasks like humans.

Hawkins rebuts this line of thinking of AI scientists and engineers with a well-reasoned approach. He also goes on to argue logically, using the Chinese Room experiment, that a machine fulfilling the criteria of a 'Turing Machine' is not necessarily intelligent. We know that neurons work based on electrochemical potential spikes that propagate between neurons, and these travel much slower than the electrical pulses in the electrical wires or silicon chips of today. Using the 100-step rule, Hawkins dismisses the argument that we need faster computing power for building intelligent machines. We can recognize an object, say a cat in a picture, in less than a second. In this time the information entering the brain can only travel across about 100 neurons. So our brain can achieve the result in fewer than 100 steps, whereas computers attempting to solve the same problem would need billions of steps. So, definitely, building faster computers is not a prerequisite for building intelligent machines.
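The arithmetic behind the 100-step rule is simple to verify. The exact figures below are my own order-of-magnitude assumptions (roughly 5 ms per neuron-to-neuron step, roughly half a second to recognize an image), chosen to reproduce the rule:

```python
# Back-of-the-envelope arithmetic for the "100-step rule".
# Both numbers are order-of-magnitude assumptions, not measurements.
neuron_step_ms = 5        # time for one neuron to integrate inputs and fire
recognition_ms = 500      # roughly how fast we recognize e.g. a cat in a picture
steps = recognition_ms // neuron_step_ms
print(steps)              # at most ~100 sequential neural steps
```

Whatever the precise constants, the conclusion is robust: any algorithm the brain uses for recognition must finish in on the order of a hundred serial steps, not billions.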

Thus, he establishes that in order to build intelligent machines we need to understand what exactly intelligence is, and only then can we go on to achieve engineering solutions to create intelligent machines. In order to understand intelligence we need to start from something which we know is intelligent, and that is the brain. Therefore, it is imperative for us to understand in detail how the brain works. A lot of people would have us believe that understanding the working of the brain is impossible for humans. No doubt the task is highly challenging, but we cannot just abandon it because of its sheer toughness. Even if we look at current microprocessors with billions of transistors, we cannot comprehend each of the transistors and its connections, but we do know how the individual transistors work and we know the basic logic gates built from transistors. Using this knowledge and EDA techniques we are able to design such a microprocessor and ensure its flawless function at the system level. Even this would have seemed impossible 60 years ago. Similarly, if we understand the basics of the brain's working we can then come up with relevant technology and design intelligent machines using this knowledge.

The knowledge that neurons in our neocortex have a large number of feedback connections, and that these neurons are arranged hierarchically in layers, suggests that it is memory-based and that the feedback connections coordinate and regulate the excitation or non-excitation of neurons. Though not many anatomical details at the individual cell level are available, Hawkins hypothesises, based on his work and the work of other researchers in the field, that prediction is the essence of our intelligence. A newborn child has no memories and so cannot speak or write; as the same child grows up observing, its neocortical neurons learn to recognize patterns. Higher-level cortical neurons have a broader knowledge of these patterns than the lower ones, but they are slower; hence we think slowly when we encounter something which we have never experienced before. Based on these sequences of patterns, auto-associativity, invariant storage of patterns and storage of patterns in a hierarchical manner, the brain is able to predict simultaneously the next possible set of events. He explains this further with several examples, like listening to a piece of music over and over again: we can easily recognise the music even when we hear only a small part of it, which might even be played in a different scale. Similarly, while climbing down the stairs our mind predicts the next step in advance and instructs the muscles to move accordingly, and we can climb down effortlessly without looking or feeling every step. However, if we miss even a single step we get a little shock, and it takes some time before we can rebalance ourselves; this happens because in this situation the prediction of the lower layers failed, and now the cortex tries to understand the anomaly at a higher layer and then instructs the muscles accordingly. The altered-door experiment displays the same thing.

Thus we see that 'prediction' is the cornerstone of Hawkins' theory, and it seems logically sound. Of course we need more anatomical details at the cellular level to come to a final conclusion. Nevertheless, the proposed theory is a good starting point for neuroscientists and engineers to pursue further research in this area.

Q. Does Hawkins' book stimulate thoughts inside of you about what it takes to build intelligent technical systems?

Ans. Hawkins' book definitely is thought-provoking, and it has some compelling thoughts about what it takes to build intelligent systems. I had doubts about the AI approach earlier, but after reading his book I am convinced that AI is not the solution for building intelligent systems. The book provided me with some solid points which help me understand better what it takes to make an intelligent system. I understood earlier that just by feeding the known possibilities into memory we cannot hope to make a system intelligent, and that we also need some adaptability and flexibility to make a system that can be termed intelligent. We face a lot of circumstances which might be unknown to us, but we learn to adapt to them quite fast. The simple act of catching a ball in the air is quite instinctive for us, but the same task is tough for a modern-day robot. As kids, when we have no memory, every circumstance is new and it takes more time to adjust and react to it, like learning a language or learning to recognise faces and objects. Once we become accustomed to it we can do these tasks almost instantaneously, and we can accommodate a lot of variation as well, but the same is not feasible for any modern AI application.

Reading the book gave me the idea that the prediction framework is the essence of intelligence. An intelligent system shall have the following abilities: it should be able to store information in an invariant form; it should have auto-associative abilities, so that it can identify the information even when only a part of it is provided or it is somewhat altered; and it should have a hierarchical memory, so that if one level of memory fails to understand, a higher level of memory can bring some insight.
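The auto-associative ability mentioned above, recovering a full stored pattern from a partial cue, can be sketched as a best-overlap lookup. This is a deliberate simplification of my own; real auto-associative networks (e.g. Hopfield-style nets) achieve the same effect through recurrent dynamics:

```python
# Sketch of auto-associative recall: given a partial binary pattern,
# return the stored pattern that agrees with it on the most known bits.
STORED = [
    (1, 1, 0, 0, 1, 0),
    (0, 0, 1, 1, 0, 1),
    (1, 0, 1, 0, 1, 0),
]

def recall(cue):
    # cue uses None for missing bits; score = number of matching known bits
    def score(pattern):
        return sum(1 for c, p in zip(cue, pattern) if c is not None and c == p)
    return max(STORED, key=score)

partial = (1, 1, None, None, None, 0)   # only three bits observed
print(recall(partial))                  # completes to the full stored pattern
```

The same lookup also tolerates alteration, since a noisy cue still overlaps its original pattern more than it overlaps the others.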

Hawkins then goes on to show how the meaning of intelligence can vary in the context of different organisms. For example, a single-celled organism just needs to know where to find food and react accordingly, whereas for complex living beings like humans, intelligence produces much more varied behaviour. This tells us that our machines can be made intelligent in different contexts.

The book does not go into much detail at the anatomical or biological level, which is not surprising to me, as the book is meant for a larger audience and also because not all the fine details about the working of the brain are known. So the book, I think, does a fair job of proposing a plausible hypothesis. I believe that Hawkins has provided us with a broad, higher-level understanding of the functioning of the brain. This can serve as a starting position for future researchers in this field, but a lot more detail and clarity is needed about the lower-level functioning of the neurons in the brain. This knowledge can help in testing the hypothesis proposed by Hawkins and would also provide more information for engineers to find technological solutions aimed at building intelligent systems. Overall, I can say the book provided me with the basic information about what an intelligent system can be, but comes up short on the granular details about achieving the results.

ON INTELLIGENCE by Jeff Hawkins

Coming from a Communications Engineering background, my knowledge of brains was limited to the fact that it's a very powerful organ capable of performing various computationally demanding tasks in the blink of an eye. I knew that it consisted of several billions of neurons and that these brain cells communicated with each other in a way that makes us the person we are today. I had never envisioned that even our brain could follow a fixed algorithm in order to perform daily life tasks.

In the book, the author introduces us to the model of the brain as perceived by AI researchers and neural network researchers. He explains the various flaws in those models, and instead introduces his own views on how the brain works and on how intelligence and behavior are different things.

The world is an ocean of constantly changing patterns which come slapping onto our brains continuously. I see my friend out in the garden, inside the room or during sunset; it doesn't matter to me what she is wearing, and yet I am able to tell that she is my friend. However, the same thing is almost impossible for a robot: a little shift in pixels, and the robot fails. The author claims that the brain is able to do this because of its neocortical memory and its ability to predict. At one point in the book the author writes: “Our brains have a model of the world and are constantly checking that model against reality.” This sentence is the whole essence of his memory-prediction framework theory of intelligence, and it interested me the most in this book.

The foundation of this theory lies in the neocortex and its associated neocortical memory. The author explains the biological structure of the neocortex and how it is made up of layers arranged in a hierarchical manner. Later he explains the attributes of the neocortical memory without which the brain would not be able to predict responses. He states that our neocortex perceives everything in this world as patterns and stores sequences of such patterns in the neocortical memory. In addition, the neocortex is an auto-associative system, in the sense that when fed only some parts of a pattern, it is able to recall the whole pattern. Owing to this auto-associative nature, we are able to recall a whole song on knowing just some of its notes. One of the most interesting attributes of the neocortex is that all the patterns are stored in an invariant form in memory. It is by virtue of this invariant storage of patterns that I can recognize my friend whether she is out in the sun or inside a dimly lit room. Unlike in robots, where the minutest details have to be fed in, the human brain's memory captures the essence of the relationship and not the smallest details. Memory storage, memory recall and memory recognition occur at the level of invariant forms; there is no equivalent concept in computers. But an important function of the cortex is to use this memory to predict. With this invariance, how can it predict something?

The way we interpret the world is by finding invariant structures in the constantly changing input, just as in the case of recognizing my friend, whether she is out on the beach or in the mountains. The key to prediction is combining our memory with the current input: the neocortex takes past memories and combines them with the present input to predict something. In most cases, when the prediction matches the real outcome, we don't feel any difference. However, when I am hearing my favorite song, and I know all the notes no matter what the pitch is, and due to some technical error a note is missed, I immediately sense a problem, because the outcome did not correspond to my prediction. As the author states, memory and prediction are the keys to unlocking intelligence; they allow us to use our behaviors more intelligently.
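The wrong-note experience can be sketched as: predict each next input from memory, compare it to what actually arrives, and flag only the disagreements. This is a toy illustration of my own, not the cortical mechanism itself:

```python
# Toy illustration of memory-based prediction: the learned melody acts as
# the "memory"; incoming notes are compared against the remembered ones,
# and only mismatches (surprises) are reported.
melody = ["C", "E", "G", "E", "C"]  # the remembered song

def listen(heard):
    surprises = []
    for i, (expected, actual) in enumerate(zip(melody, heard)):
        if expected != actual:
            surprises.append((i, expected, actual))  # position, predicted, heard
    return surprises

# Correct playback: every prediction matches, nothing stands out.
print(listen(["C", "E", "G", "E", "C"]))
# One wrong note: the mismatch is flagged immediately.
print(listen(["C", "E", "F", "E", "C"]))
```

Matching input passes silently, exactly as in everyday listening; only the violated prediction reaches awareness.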

This whole concept of bringing together memory and present inputs to predict, and thus unlocking our intelligence, was something I never knew, and it really generated interest in me to explore it further.

Another instance in the book which was really intriguing was the fact that the brain has no clue whether an incoming signal is a picture, audio or touch. To the brain, the signals from all the senses are the same, and the way it routes this information is based on the same algorithm. It was a bit difficult for me to digest this idea of sensory substitution, but reading a case study in which a researcher enabled a blind person to see by displaying visual patterns on the person's tongue really strengthened my belief in this theory.

How to build intelligent systems

As children, we learn some things naturally, some come through experience and some are taught by our parents. All our experiences since childhood are stored somewhere in our memory, waiting to be triggered by the one important pattern related to that memory. Every time we walk into a restaurant, we do not have to be reminded that we should sit down and wait until someone comes to take our order.

Is it the same with any machine in this technologically advanced world? Machines are still struggling with speech recognition: even though they can recognize words, in no way are they able to make sense of them. Let us take this step by step. In setting up an intelligent system, silicon easily surpasses neurons as far as speed is concerned. With current technology, transistors are able to switch on the order of nanoseconds, whereas neurons take on the order of some milliseconds. The next issue is memory. There is no doubt that with today's available technology, achieving memory capacity comparable to the neocortex is not a daunting task. However, given that our neocortex works with a hierarchical memory system, one of the challenges is to come up with silicon memories based on a hierarchical structure which may enable machines to predict.

An important ingredient in enabling a machine to predict, and hence to become intelligent, is to expose it to suitable patterns. In order to achieve this, a good sensory system, not necessarily equivalent to our senses, is needed. Once this is achieved, how can we proceed to make the machine learn and store its experiences in the form of patterns? In my view, pattern recognition, and whether the machine is able to combine past memory with current input, are the most important issues to be addressed. By teaching the machine every day and exposing it to various patterns, it may be able to form its own world. It may happen that this world is different from what we sense. But what is more important is that the machine is able to respond to events, and to recall events based on past patterns and the present state.

As mentioned in the book, even the most difficult problems can have the simplest solutions. The right amount of dedication and time to dig through the vast amount of information could lead us to a solution of "How can we model intelligent systems?", and ultimately to an era of intelligent systems.


Thoughts about "On Intelligence" by Jeff Hawkins

In his book "On Intelligence" Hawkins tries to explain the difference between artificial and real intelligence. He also introduces models of how the human brain works.

One of the most interesting concepts Hawkins proposes is that the human brain works with a hierarchical system and makes predictions all the time.

The hierarchical system is shown by the model of different layers, in which

each layer interacts with other layers. An input signal from the senses, for

example what we see and hear, arrives at the first layer. The neurons and

synapses in this layer influence each other and other ones in the upper

layers. In the upper layers a neuron only fires if it has the right input from its

connected neurons. Thus many signals going up the hierarchy lead to a

representation of the whole outside environment within the brain by a few

neurons in the upper layers. But the model does not only include signals going up the hierarchy; signals also travel down. This means, for example, that one neuron in an upper layer gives feedback to other neurons or whole areas, which in effect tells them when they are expected to be active in the next moment. The brain is able to learn patterns that often occur and to make a prediction of which input signal should arrive next within the current pattern. In my opinion this concept is very important for understanding intelligence, and for understanding why natural intelligence is much more efficient, and able to predict things the right way, in comparison to machine intelligence. What makes the brain so fast are not the connections between the neurons (which are slow compared to a computer) but the right way of signal


processing. If the incoming signals fit the learned pattern that is expected to occur, they don't need to be passed all the way up; only differences from the prediction are registered. What makes this model of prediction so interesting is that the brain tries to find a fitting pattern all the time and is also able to correct existing patterns or find new ones. Real intelligence isn't like an algorithm that is used to search for expected patterns; it simply adapts to the incoming signals if they arrive in the same scheme many times over. Another interesting thing about these predictions is that we are still able to make them even if the incoming signals all change in the same way. For example, if a known song is played in a different key, the next note can nevertheless be predicted. No computer is able to make predictions in so many different ways; it has to search for things the brain simply discovers through repetition.
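The mechanism described above, where a region memorizes recurring sequences and passes only surprises upward, can be illustrated with a small toy program. This is a deliberately simplified sketch of the memory-prediction idea, not Hawkins' actual model; the `Region` class and its one-step transition memory are inventions for this example:

```python
# Toy sketch of the memory-prediction idea: a region learns which input
# usually follows the current one; matching inputs are absorbed locally,
# and only surprises (prediction errors) are passed up the hierarchy.
class Region:
    def __init__(self):
        self.transitions = {}   # memorized pattern: input -> usual next input
        self.previous = None

    def process(self, signal):
        """Return None if the signal was predicted, else the surprise."""
        predicted = self.transitions.get(self.previous)
        surprise = None if signal == predicted else signal
        if self.previous is not None:
            self.transitions[self.previous] = signal  # learn/correct the pattern
        self.previous = signal
        return surprise

region = Region()
sequence = ["key", "turn", "push", "key", "turn", "push", "key", "turn"]
surprises = [s for s in sequence if region.process(s) is not None]
print(surprises)
```

After a single pass through the familiar key-turn-push routine, the repeated inputs match the memorized transitions, so they no longer generate surprises to send up the hierarchy; only novel inputs would.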

Hawkins' book leads to an understanding of why the approach of artificial intelligence can't solve the problem of creating real intelligence. At the moment, each artificial intelligence is developed for a special problem, like understanding human speech or playing chess. Some of them are able to learn within their specific, predefined sector, but confronted with totally new problems they must fail, because they are not able to learn unexpected patterns the way a brain does. Thus they aren't able to make predictions in a new field the computer isn't programmed for. As an electrical engineer I would have seen intelligence as a kind of computer problem, but after reading Hawkins' book one understands why, even with future, much faster computers, we still won't be able to make an intelligent system. A system consisting mostly of transistors can't learn new patterns in the required way. The book raises the question of whether the approach of connecting neurons would be the right way to create intelligence. But in which way should


they be connected? After reading the book I think that we still don't know enough about the way learning, understanding and, as a whole, our intelligence work to even think about creating intelligence. It would perhaps be better if not only neuroscientists, but everyone interested in understanding and creating intelligence, made an effort to find out how the brain actually works. Knowing only where within the brain certain signals are processed doesn't help much. Thus it would be a great step forward to invent new methods that show what is happening in the active brain area, and which signals flow up and especially down the layers. If we understood the way the neurons are connected, how new synapses are formed and how neurons can interact with each other in different ways, it should be much easier to define intelligence not by behavior but by the way the system works. And only if we have an idea of how our created intelligence should work and look will we be able to bring it into being. So maybe we should try again to connect neurons in the lab in different ways, and we will definitely have to find new ways to scan brain activity, separating it not only across different areas but also within the small cell layers. Nevertheless, it is likely that we don't really need to create real human-like intelligence. To make life easier and to help us in everyday life, different computer-based systems, each developed for a special field of tasks, will probably be good enough.

All in all, Hawkins gives a very good and easily understood definition of intelligence. I would have preferred a more thorough treatment of understanding, which he only describes as hidden from the outside. The book leaves many open questions that are worth thinking about, but they will possibly be answered only in many years.

“Look deep into the nature, and then you will understand better” – Albert Einstein

Nature has thrown a lot of challenges at us. We have spent many years understanding complex entities such as the stars, the planets, the millions of living creatures on earth and our own human body. Once we arrive at an understanding of these complex creations of nature, we are forced to wonder at the simplicity behind their design. As Jeff Hawkins says, "The best ideas in science are always simple, elegant and unexpected". This is the standout thought that comes to my mind after reading the book "On Intelligence" by Jeff Hawkins.

In pursuit of understanding intelligence in the human brain, the author points out various ideas which eventually lead to the formation of a generalized framework. Of particular importance is the observation of Vernon Mountcastle in his paper "An Organizing Principle for Cerebral Function". It was a well-known fact that the cortex, which is responsible for the intelligence of the human brain, has a remarkably uniform structure everywhere. Mountcastle suggested that as the regions of the cortex look similar, they might also perform the same operation; in short, the human brain runs the same algorithm repeatedly in all regions of the cortex. This, I feel, is the most powerful idea of the book, and the author builds on it to construct his framework of human intelligence. As soon as I read this concept, the first thing that came to my mind was the simple but repetitive pattern that nature follows in all its designs. It is really fascinating how this could explain some of the most complex things we see around us, including the human brain. In general, all the creations of nature follow the same process of continuously iterating a single pattern or algorithm.

Look at the things around us: mountains, rivers, clouds, trees and so on. One can sense the obviously chaotic nature of all these objects. You probably cannot describe the exact shape of a mountain with any of the known regular geometric shapes. Mathematicians around the world had concluded that the shapes in nature are chaotic and could not be defined using equations. All these assumptions changed with Benoit Mandelbrot's invention of fractal geometry. In his book The Fractal Geometry of Nature, Mandelbrot showed that complex shapes in nature can be formed by the iterative repetition of a simple rule; the most famous example of such a rule gives rise to what is now called the Mandelbrot set. Once this was known, scientists from various domains, including physics, wireless communications, biology and many more, started using this concept to explain complex phenomena and solve problems. Many scientists felt that this was a design secret of nature that had evolved over a long period of time, and they called it the thumbprint of God. It was present everywhere: cloud formations, the branching of trees, the stalks of broccoli, the rings of Saturn, snowflakes… Of prime importance is the use of this fractal architecture in the human body. Not only do our blood vessels and nerves have the shape of fractals, but even our heartbeat has a rhythm that follows a strict fractal architecture.
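The "simple rule iterated endlessly" behind such shapes can be made concrete with the Mandelbrot set itself, which arises from repeatedly applying z → z² + c. The following is a minimal escape-time sketch; the iteration cap of 50 is an arbitrary choice for illustration:

```python
# Escape-time test for the Mandelbrot set: iterate z -> z*z + c and count
# how many steps it takes |z| to exceed 2. Points that never escape within
# the cap are treated as members of the set.
def mandelbrot_iterations(c, max_iter=50):
    z = 0
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return n          # escaped: c lies outside the set
    return max_iter           # never escaped: c is (probably) inside

print(mandelbrot_iterations(0))     # 0 is a fixed point, stays inside
print(mandelbrot_iterations(0.5))   # escapes after a few iterations
```

Zooming in on the boundary of the resulting shape reveals endlessly repeating self-similar structure, which is exactly the "infinite resolution" property discussed below.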

One of the most famous science fiction writers, Arthur C. Clarke, made a documentary on fractals in which he says, "We may never understand how our brain works; if we do, I suspect that will depend on some application of fractal geometry". As I see it, Jeff Hawkins' explanation of the working of the cortex is another representation of the same fractal idea.

According to Hawkins, the cortex uses the same algorithm irrespective of the type of sensory input, and it does so in a hierarchical fashion. I feel that this process is logically the same as taking a simple shape and iterating it over and over again, which is nothing but a fractal. Another important observation is that a fractal has infinite resolution: if you zoom in on a fractal image, you see the same image in a slightly different representation. According to Hawkins, the same thing happens in the human brain: at each level of the hierarchy one finds the same cortical algorithm. The job of any cortical region is to find out how its inputs are related, to memorize the sequence of correlations between them, and to use this memory to predict how the inputs will behave in the future.

From the discussion above it is clear that nature uses a similar design strategy whether it is the shape of lightning bursts or the functioning of human brains. And most of the solutions found by nature are simple yet elegant as well as exact. The take-home message of this discussion for engineers like us is that, when faced with a problem, it is not such a bad idea to look to nature for solutions. The design of intelligent machines has been an endless endeavor of the human race ever since the advent of simple machines. Unfortunately, this effort has been inspired more by fiction and movies than by nature. As the author states, efforts have been made to replicate the human brain rather than understand its underlying principle.

Consider the example of the mobile phones of the current era. We refer to them as "smart" phones. By definition, smart means quick-witted intelligence, but in this context the word has been used to differentiate multi-utility handheld devices from trivial mobile telephones or feature phones. People might argue that the so-called smartphones can do more than just make phone calls. I feel that this ability makes them multi-utility devices, not smart ones. Furthermore, the development of these devices has been in the direction of making them more and more powerful. High-end cameras, multi-core processors and sophisticated hardware have appeared in recent times. This makes them faster, more efficient, capable of multitasking, and cheap. But they miss the one crucial quality: "smartness", or intelligence.

I fully support the author's view of making intelligent machines. He says that there are two vital parts needed to build them.

The first is a set of sensors, essential for communicating with the outer world. I feel that we already have a sufficient number of sensory devices at our disposal. Consider the example of the mobile phone: we have the usual sound and visual inputs, along with an array of sensors that includes motion detectors, infrared, Bluetooth, a location sensor (GPS) and many more.

Next comes the most important part: a memory system that can do more than just store data. As the author points out, it has to have invariant storage and a constant prediction mechanism in order to be intelligent. Building such a system is a very challenging task, but we have the advantages of speed, capacity and replicability on our side. With these, I feel that we will be able to conquer the challenge of building intelligent systems soon.

Having an understanding of what exactly intelligence is and how nature designs intelligent systems (human brains) has equipped us with a strong tool, and every engineer has to keep these ideas in the back of his mind while designing any system. I feel that there are more mysteries to be unraveled and more problems to be solved using the design algorithm of nature.


On Intelligence

by Jeff Hawkins and Sandra Blakeslee

In the book "On Intelligence", the author Jeff Hawkins talks about his background and experiences with AI, and especially about how the brain works, in order to find a way of building a machine that could replace a human mind.

The book begins with some background on why previous attempts at understanding intelligence and building intelligent machines have failed. After that he introduces and develops the central idea of his theory, which he calls the memory-prediction framework. In Chapter 6, he details how the physical brain implements the memory-prediction model; in other words, how the brain works. Then he discusses social and other implications of the theory, which for many readers could be the most thought-provoking section. The book ends with a discussion of intelligent machines, how we can build on the theory, and what the future may hold.

First of all, he talks about how he became familiar with the subject of AI, and how other famous professors and acknowledged AI researchers think about his endeavor, about how one could start on AI, and about what they understand intelligence to be; in other words, how they define intelligence. In a way, it sounded as if they had accepted their current situation, thinking it not worthwhile (or too expensive) to research further into how a brain works or how to build a machine capable of the same things as a brain. The funny thing is that nearly every professor or person Hawkins mentioned had a different definition of intelligence and a different opinion on how to get started if they were to implement such a machine. That was the first hint for me that getting this task done is not as easy as I had thought.

After a proper introduction to what the book is about and what he is trying to achieve, he presents a number of concepts and ideas about how the brain works, some of them his own, others read or heard from people who had already researched in this direction.

He mentions that the brain is auto-associative, which means that auto-associative memories are capable of retrieving a piece of data upon presentation of only partial information from that piece of data. Traditional memory, by contrast, stores data at a unique address and can recall the data only upon presentation of the complete unique address.
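The contrast can be illustrated with a classic engineering model of auto-associative recall, the Hopfield network. This is only a minimal sketch of the idea, not how the brain literally works, and the two stored patterns below are arbitrary examples:

```python
# Minimal sketch of a Hopfield-style auto-associative memory: store
# patterns of +1/-1 values, then recover a complete stored pattern
# from a partial or corrupted cue.
def train(patterns):
    n = len(patterns[0])
    weights = [[0.0] * n for _ in range(n)]
    for p in patterns:                       # Hebbian outer-product rule
        for i in range(n):
            for j in range(n):
                if i != j:                   # no self-connections
                    weights[i][j] += p[i] * p[j]
    return weights

def recall(weights, cue, steps=10):
    state = list(cue)
    for _ in range(steps):                   # iterate until the state settles
        totals = [sum(w * s for w, s in zip(row, state)) for row in weights]
        state = [1 if t >= 0 else -1 for t in totals]
    return state

stored = [[1, 1, 1, -1, -1, -1],
          [1, -1, 1, -1, 1, -1]]
weights = train(stored)
corrupted = [1, 1, 1, -1, -1, 1]             # first pattern, last bit flipped
print(recall(weights, corrupted))            # settles back to the first pattern
```

Unlike an address-based memory, nothing here looks anything up: the corrupted cue itself pulls the network's state toward the nearest stored pattern.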

Furthermore, he talks about Vernon Mountcastle, who strengthened his theory with the observation that the cortex is remarkably uniform in appearance and structure. What this suggests is that every part of the neocortex probably performs the same basic operations.

John Searle, an influential philosophy professor at the University of California at Berkeley, devised a thought experiment called the Chinese Room. In this experiment, an English speaker answers Chinese questions by simply following the instructions of a book, although he doesn't speak a word of Chinese. With


this experiment he was trying to prove that machines, and computers in particular, aren't and can't be intelligent.

The ideas I just mentioned are only a few of those that came to my mind while I was writing this essay.

I think the most appealing concept proposed by Hawkins in this book is the influence of patterns. What I mean is that Hawkins explains, with a few simple examples, how the brain, especially the neocortex, handles data. He gives examples from everyday experience, like coming home, putting the key into the lock, turning it, and so on. These are actions which usually happen in the same way every day. The salient point is that if any of these actions differs from its usual course, you would recognize that immediately, because the neocortex compares memorized patterns with new ones. For example, if the door opened more easily because somebody changed the weight of the door, you would notice it. Or if the shape of the door handle were a little different than before, you would notice it.

The prediction of future moments by using patterns is another way the neocortex works. An example of this is music: imagine you are hearing a song you like, and just from the first two notes you can already tell what's coming next. The neocortex can predict the future by referring to memorized patterns. In detail, these patterns develop through the different layers of the neocortex.
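This kind of next-note prediction from memorized patterns can be sketched as a toy program. The melody, the two-note context, and the function names are invented for this illustration; the real neocortex of course does nothing this literal:

```python
# Toy melody prediction: memorize a tune as overlapping
# (previous two notes -> next note) patterns, then predict the
# next note from whatever fragment we are currently hearing.
def memorize(melody, context=2):
    memory = {}
    for i in range(context, len(melody)):
        memory[tuple(melody[i - context:i])] = melody[i]
    return memory

def predict_next(memory, heard, context=2):
    # Return the memorized continuation, or None if the fragment is unknown.
    return memory.get(tuple(heard[-context:]))

twinkle = ["C", "C", "G", "G", "A", "A", "G", "F", "F", "E", "E", "D", "D", "C"]
memory = memorize(twinkle)
print(predict_next(memory, ["C", "C", "G"]))   # predicts "G"
print(predict_next(memory, ["A", "A"]))        # predicts "G"
```

A few notes of context are enough to retrieve the continuation; an unfamiliar fragment yields no prediction, which corresponds to the "surprise" case where a pattern fails to match.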

Having referred to some concepts of how the brain works and identified the one most appealing or interesting to me, I come to one of the most obvious questions you ask yourself after reading this book. Given the explanations and experiences offered in the book, would it be possible for me to build such a system or machine? To put it precisely: do I feel I could do this, especially on my own?

While I was reading this book, at first, after he described the interaction between the different layers of the cortex, I thought I didn't understand anything he was talking about. But afterwards, when I thought about it once or twice more and remembered, for example, his account of predicting patterns by comparing them with memorized ones, I got this feeling of "I can do this". Out of nowhere, it all sounded quite logical; I felt as if I had written the book myself (I am exaggerating), and I wanted to see the current state of the people already researching in this direction. But if it were that easy, a lot of people who have read the book would already have implemented such a complex construct. I am pretty sure there are people who have tried, because "On Intelligence" was published in 2004. That is a long time until now, and the current situation is probably quite different from the one Hawkins was describing in his book.

As I noted before, there is no general definition of intelligence, nor any settled way of making a machine work like a brain, so you can't just go ahead and implement something like that. In addition, a brain isn't just a brain; that is one reason, I think, why Hawkins compares the brains of different mammals. Granted, the task is to build a machine which can replace, or act like, a human brain. But my point is that not only do different mammals have different brains; different humans also have differences in their brains. The general structure is the same, but every brain works a little differently.


The memory is in the DNA of the organism. Some creatures are able to speak and others are not, because some brains have more cells or different connections than others. For the same reason, some people are able to learn faster than others. But intelligence comes not only from our DNA; it needs a lot of practice.

Hawkins also points out that our brains have to learn. What he means is that no brain can do everything from the start; it has to learn step by step.

For example, he talks about Albert Einstein, whose brain was measurably unusual. Some scientists discovered that his brain had more support cells, called glia, per neuron than average, and it showed an unusual pattern of grooves, or sulci, in the parietal lobes, a region thought to be important for mathematical ability and spatial reasoning. Genetic factors thus play an important role in the intelligence of a human. As you can see, Albert Einstein is a good example of a person being smarter because of having more, or different, cells than other people.

But I think the real problem, as Hawkins also proposes and as this whole book is about, is that it has to be clear what we expect of the machine and how we think the brain works. To be exact: instead of how we think it works, we should know how it does work, because good preparation is the alpha and the omega. The next problem is that you need a vast amount of money and time. You can't do this as a hobby; it is not done within a few weeks. In addition, you have to be smart; not everybody can do such a thing.

Looking at the whole thing again, some parts of me say yes and others say no, even though I said that, after thinking about it once or twice more, I felt I understood everything and could do it all on my own. The most difficult part of the book for me was understanding how the individual components are connected, that is, how the signal flows develop through the cortex. I am of the opinion that this wiring system would cause a big problem if I tried to implement such a technical system.

Creating a memory, or functions similar to human movements, poses no great difficulty. However, the interaction, as in the "catching a ball" example, between recognizing a ball moving toward you and the human act of catching it with your bare hands, represents the main task and the main problem in the implementation of intelligent machines. The timing, which has to be very fast between the different layers of the brain, also hinders the building of intelligent machines.

To come to a conclusion, the book gives a pretty good overview of intelligence and of how the brain works in general. But I don't think that "On Intelligence" stimulates me in such a way that I would be able to build such a complex system or to write an algorithm that tells a machine how to behave. This book is not like the instruction book in the Chinese Room experiment devised by John Searle; you can't just mindlessly follow its commands. As I said earlier, it does stimulate me in a way, but only because it is a very interesting topic with a lot of potential for the future, not to the point of building such a system myself.

Essay

on Jeff Hawkins' "On Intelligence"

Some time ago I read an interesting thought: a good book should not only give you answers but also induce you to ask more questions. Exactly this happened after I read "On Intelligence". I now surely know more about the brain and the mind and the way they work, but, on the other hand, I also have more questions than I had before reading it. Some of them I am going to discuss in this essay.

This book mostly addresses scientific questions: how, when and for what purpose the parts of our brain work, and how they interact with each other. But since the topic is very sophisticated and, considered in a more abstract fashion, spreads far beyond its biological and physical aspects, some of the ideas proposed by the author are closely related to, and have immediate consequences in, the metaphysical and philosophical domain, for instance in epistemology and ontology. The most thought-provoking (and consequently most interesting) idea I found in the book was exactly one of these, namely the author's definition and explanation of creativity.

Creativity has been an urgent and attractive topic for me for a long time, especially the question of how one can develop and train this ability; therefore, I was looking forward to discovering how the topic could be explained within the author's brain theory even before I had actually found the corresponding chapter. The result was surprising.

I have met plenty of definitions of this term, but this one is outstanding. Basically, the author considers creativity to be a process of remembering something you have experienced before and applying it to the current situation, in other words, using it in a new fashion. On the other hand, creativity is usually referred to as a creation process, fundamentally opposite to recalling something learned before. That cannot be correct, I thought, and I started to look for a counterexample. Unexpectedly, I did not manage to find one: every creative process I could think of could, with some restrictions, be explained in terms of Hawkins' definition of creativity.

But I am still sure there is something wrong about it. The definition contains a contradiction in itself. We start to recall our past experience and, when an analogy has been found, we apply it in a new way. But where does this "new way" come from? According to the book, everything is stored in our brain as an invariant representation, so the way we use something is also supposed to be stored in our memory somehow. It means the "new way" is in reality not new; we recall it from memory. Hence it immediately follows that creativity has nothing creative in it and is just recalling. And that, consequently, implies that humanity never really invented anything; we have only been recombining things that already existed.

This idea contradicts my perception of the world. I think the author's explanation and theory are far from complete. If we consider the recombination of experience to be the actual "creative" part, then it becomes impossible to explain, in terms of the theory, how creative people know how to recombine past experience. Is it every time just an accident? I don't think so.

Another problem I see is the way the author accounts for differences in people's creativity. He argues that the only differences are the amount of experience you have and the physical capacity of your brain. On the contrary, to my mind, creativity is not determined by any measurable characteristic of the human mind, and it is surely not determined by experience; if anything, the relationship is reversed, for otherwise it would not be possible to explain the fact that children tend to be more creative than adults.

Although some of the author's ideas provoke us to dive into the underlying metaphysical aspects of the human mind, the book has indeed a very practical aim: to explore how intelligent machines could be built. The author explains his theory of how the human brain works and discusses existing attempts to construct artificial intelligence. As someone who entered the AI field from the technical side, the author gives us plenty of analogies and correspondences from the technical world. He is surely aiming to arouse the reader's interest in creating AI, and, I must admit, in my case he has succeeded in stimulating thought about it.

Before starting to do something, I tend to make sure first whether "the road is clear", mentally going through the whole "way" and checking whether I can imagine any obstacles to realization. In this regard, I have noticed an interesting thing about the book: the author doesn't explicitly discuss any vital difficulties in constructing an intelligent system. Some of them are mentioned in passing here and there throughout the book, but there is no dedicated chapter or even subsection about them. I think the author does this intentionally, because once you are aware of obstacles, you become stuck on them and they restrict your thoughts. Sometimes a fresh angle can help you find a solution which avoids the pressing problems completely, just because you weren't aware of their existence.

The book leaves a strong impression that building intelligent machines is not a problem at all. There already exist prototypes of artificial neural networks which are capable of learning some patterns; therefore, assuming the brain model proposed by the author is valid, we only need to improve them and implement the described layered model and the algorithm of communication between the layers.

Of course, the book describes only the concept; many of the particular processes remain unclear, and our understanding is not complete. To my mind, the next step would be a software simulation of the system, perhaps scaled according to currently achievable performance. This should clearly be possible, even with an incomplete model.

Because they are artificial, the simulations won't give us answers to all existing questions, but they will help us formulate new questions and find weaknesses in the model. I am sure some kinds of brain simulations are already in progress. While I was reading about the intelligent car concept, I remembered that Google is exploiting this concept: their intelligent cars are currently being tested on the road, so maybe they have already built an intelligent system and are now training and teaching it?

The author assumes that AI will require some dedicated underlying hardware for a successful implementation, but I think that may not even be necessary: by the time we have a complete and reliable model of the working brain (even assuming an optimistic horizon of 5-10 years), PC performance will most likely allow us to deploy artificial intelligence at the software level alone.

Talking about this book in general, I must say it was very useful for me. The proposed ideas were quite unexpected to me, but now I have a complete model of how the brain functions; it replaced and adjusted my previous notion, which was based on school biology classes. Of course, the model is not complete and possibly not correct, but it is a good point to start from.

The topics of mind and intelligence have always interested me, but, I should admit, only as theoretical questions. I never considered them capable of gaining practical, applied importance in the foreseeable future, and I had exactly the same thoughts about artificial technical systems. However, through the book "On Intelligence" I discovered that artificial intelligence is not as far away as I had assumed, and that it is worth paying attention to.