Does some form of self organizing map exist in the brain?

Self-organizing maps (SOMs) are an extremely interesting and powerful tool in the field of artificial neural networks. My question is: does something similar exist in the brain? Or are these systems completely "man-made"? Also, if such things do exist in the brain, how do they work?


These are great questions. The SOM was absolutely biologically inspired, and yes, there are many examples.

Examples of self-organizing topographical feature maps in the human brain include the sensory homunculus, retinotopic maps, and the spatial maps of the entorhinal cortex. There are even such maps in the midbrain auditory nucleus of the barn owl and in the auditory cortex of the mustache bat.

As to how they work and organize in nature, that is still an active area of research, and so far no experimental results establish universal rules for the self-organization of sensory maps in the brain. As far as I know, however, Kohonen's proposal that these maps organize via a combination of lateral inhibition and Hebbian learning has not yet been displaced by a more modern idea.
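To make the artificial side of the comparison concrete, here is a minimal Kohonen-style SOM sketch in Python (an editorial illustration, not code from Kohonen; the Gaussian neighborhood stands in for lateral interactions, and the weight update is a Hebbian-style pull toward the input):

```python
import numpy as np

def train_som(data, grid_shape=(10, 10), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    rng = np.random.default_rng(seed)
    n_rows, n_cols = grid_shape
    weights = rng.random((n_rows, n_cols, data.shape[1]))
    # Grid coordinates, used to compute neighborhood distances on the map.
    coords = np.stack(np.meshgrid(np.arange(n_rows), np.arange(n_cols),
                                  indexing="ij"), axis=-1)
    n_steps, step = epochs * len(data), 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            # Best-matching unit: the node whose weights are closest to x.
            dists = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dists), dists.shape)
            # Learning rate and neighborhood radius decay over training.
            frac = step / n_steps
            lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 0.5
            # Gaussian neighborhood around the BMU: a stand-in for
            # short-range excitation / longer-range inhibition.
            grid_d2 = ((coords - np.array(bmu)) ** 2).sum(axis=-1)
            h = np.exp(-grid_d2 / (2 * sigma ** 2))[..., None]
            # Hebbian-style update: co-active nodes move toward the input.
            weights += lr * h * (x - weights)
            step += 1
    return weights

# Toy usage: map 3-D "colors" onto a 2-D sheet; after training, nearby
# nodes carry similar colors, a crude analogue of a topographic map.
som = train_som(np.random.default_rng(1).random((500, 3)))
```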


Self-organized complexity in the physical, biological, and social sciences

The National Academy of Sciences convened an Arthur M. Sackler Colloquium on “Self-organized complexity in the physical, biological, and social sciences” at the NAS Beckman Center, Irvine, CA, on March 23–24, 2001. The organizers were D.L.T. (Cornell), J.B.R. (Colorado), and Hans Frauenfelder (Los Alamos National Laboratory, Los Alamos, NM). The organizers had no difficulty in finding many examples of complexity in subjects ranging from fluid turbulence to social networks. However, an acceptable definition of self-organizing complexity is much more elusive. Symptoms of systems that exhibit self-organizing complexity include fractal statistics and chaotic behavior. Some such systems are completely deterministic (e.g., fluid turbulence), whereas others have a large stochastic component (e.g., exchange rates). The governing equations (if they exist) are generally nonlinear and may also have a stochastic driver. Many of the concepts that have evolved in statistical physics are applicable (e.g., renormalization group theory and self-organized criticality). As a brief introduction, we consider a few of the symptoms that are associated with self-organizing complexity.


A Neuroscientist’s Theory of Everything

Karl Friston wanted me to know he had plenty of time. That wasn’t quite true. He just didn’t want our conversation—about his passion, the physics of mental life—to end. Once it did, he would have to step outside, have a cigarette, and get straight back to modeling COVID-19. I caught the University College London neuroscientist at 6 p.m., his time, just after he had sat on a panel at a COVID-related press conference. He apologized for still having on a tie and seemed grateful to me for supplying some “light relief and a distraction.”

A decade ago, Friston published a paper called “The Free-Energy Principle: A Unified Brain Theory?” It spells out the idea that the brain works as an editor, constantly minimizing, “squashing” input from the outside world, and in the process balancing internal models of the world with sensations and perceptions. Life, in Friston’s view, is about minimizing free energy. But it’s not just a view of the brain. It’s more like a theory of everything. Friston’s free-energy theory practically sets your brain on fire when you read it, and it has become one of the most-cited papers in the world of neuroscience. This May, Friston published a new paper, “Sentience and the Origins of Consciousness,” that takes his ideas into new intellectual territory.

NOT A FAN OF SURPRISES: Karl Friston (above) has argued that lifeforms, in order to survive, must limit the long-term average of surprise they experience in their sensory exchanges with the world. Being surprised too often is tantamount to a failure to resist a natural tendency toward disorder. Kate Peters

Friston, currently a Wellcome Trust Principal Fellow and Scientific Director of the Wellcome Trust Centre for Neuroimaging, invented statistical parametric mapping, a brain scanning technique that has allowed neuroscientists to assess, as never before, the activity in specific brain regions and their roles in behavior. The discoveries he’s helping to make about the nature of the brain come out of a psychiatrist’s concern for the well-being of his patients, suffering from chronic schizophrenia. “Most of our practical work on causal modeling, data analysis, and imaging sciences was motivated and actually funded by schizophrenia research,” Friston said. “It has been a central part of my life and career for decades.”

The applications of Friston’s research are tangible and have made major contributions to the study of mental disease, to brain imaging, and now to the COVID-19 pandemic. Venturing into the theory behind them, however, is a safari through a jungle of fascinating and at times beguiling concepts. Friston’s ideas have been on my radar for some time, so I was excited to jump right in. He was a passionate tour guide, taking us through a landscape of some of the most stimulating topics in science today, from consciousness to quantum physics to psychedelics.

In “The Free-Energy Principle,” you write the world is uncertain and full of surprises. Action and human perception, you argue, are all about minimizing surprise. Why is it important that things—including us—minimize surprise?

If we minimize surprise now, then on average over time, we’re minimizing the average surprise, which is entropy. If a thermostat could have beliefs about its world—it might say, “My world is about living at 22 degrees centigrade”—then any sensory information from its thermal receptors that departs from that is surprising. It will then act on the world to try and minimize that surprise and bring that prediction error back to zero. Your body’s homeostasis is doing exactly the same thing.
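Friston’s thermostat can be written down in a few lines. This is a toy editorial sketch, not his model: the device holds a fixed belief of 22 °C and acts to cancel the prediction error between belief and sensation.

```python
def thermostat(sensed, believed=22.0, gain=0.1, steps=50):
    """Act on the world to drive prediction error toward zero."""
    for _ in range(steps):
        error = sensed - believed   # "surprise" as a prediction error
        sensed += -gain * error     # action nudges the room toward the belief
    return sensed

print(thermostat(sensed=17.0))  # converges toward the believed 22.0
```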

Does the brain minimize surprise in order to conserve energy?

You could certainly say that. But I wouldn’t quite put it like that. It’s not that the brain has a goal to exist. It just exists. It looks as if it has a goal to exist. What does existing mean? It’s always found in that configuration. The brain has to sample the world in a way that it knows what’s going to happen next. If it didn’t, you’d be full of surprises and you’d die.

Anything you talk about is really just an explanation for your lived world.

What’s the core argument of the free energy principle?

Variational free energy is basically a quantity that stands in for surprise. It’s the ultimate goal-function of life. Why is there a difference between free energy and surprise? Let’s say I tasked you with engineering an oil droplet. You want to engineer an oil droplet and sell it on Amazon. You would have to write down its equations of motion, flows on gradients, where these gradients are defined by surprise, and the surprise defines the likely configuration that characterizes an oil droplet. The problem is, when you come to evaluate that surprise, that potential energy, it becomes numerically impossible to do because of all the different ways in which its configuration could have been caused. This means those flows can’t be physically realized by you as you try to engineer your Amazon oil droplet.

So how can you build an oil droplet, or anything, for that matter?

There is a way of doing it, invented by the physicist Richard Feynman. He had exactly the same problem in quantum electrodynamics. He wanted to evaluate the probability of all the ways that an electron could get from the initial state in which it was prepared to some final or end state. The number of paths that a particle could take is infinite. Feynman was facing an enormous problem. He wanted to calculate the most likely electron path, but he couldn’t evaluate all the possible ways that the particle could get from here to there, let alone start looking for the most likely path.

So he came up with variational free energy, which is essentially a mathematical quantity that is always bigger than the surprise. If you squash or reduce the free energy, which you can measure quite easily, you can do your gradient descent, and get to the bottom of free energy. What Feynman effectively did was replace an impossible integration problem with a tractable optimization problem. If you want to emulate self-organization, you’re going to have to change the problem from the way the world works into an approximation of the way the world works, and develop an optimization scheme.
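In standard variational notation (an editorial gloss, not Friston’s wording): for data x with hidden causes z and an approximate posterior q(z), the free energy F is a tractable upper bound on surprise, and minimizing it tightens the bound without ever evaluating the impossible integral p(x) = ∫ p(x, z) dz:

```latex
\begin{aligned}
F[q] &= \mathbb{E}_{q(z)}\big[\ln q(z) - \ln p(x,z)\big] \\
     &= -\ln p(x) \;+\; \underbrace{D_{\mathrm{KL}}\big[q(z)\,\|\,p(z\mid x)\big]}_{\;\ge\, 0}
     \;\;\ge\;\; -\ln p(x).
\end{aligned}
```

Because the KL term is non-negative, descending the free energy both squashes the bound on surprise and pulls q toward the true posterior—the tractable optimization problem Friston describes.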

You can see why free energy starts to take a key role in articulating these ideas about how the brain works. It may well be that it wasn’t just Richard Feynman who realized this was the way to self-evidence efficiently and effectively. It might be that evolution has also realized this. This variational trick—minimizing Feynman’s variational free energy—has become installed in us.


The idea of a Markov blanket shows up in your recent paper on the origins of consciousness. Who was Markov, and what is this blanket named after him?

Andrey Markov was one of the grandfathers of stochastic processes and probability theory. The notion of a Markov blanket arises not in consciousness research, and not really even in neuroscience—it’s much more fundamental than that. Any thing that exists necessarily has a Markov blanket. Because if it didn’t, you wouldn’t be able to measure anything that distinguished the thing from something else. It’s absolutely crucial. If something doesn’t have a Markov blanket, it doesn’t exist. From the perspective of systems neuroscience, the Markov blanket is a new and important thing in the toolkit that allows you to demystify and talk with a different calculus, and a different language, about things such as sentience.

What’s a good example of something that illustrates the Markov blanket?

Let’s go back to our oil droplet. Imagine it in a glass of water. The agenda is to understand, “Why does the oil droplet hang together? Why does it resist the tendency to be dispersed, dissolve, dissipate, and distribute all its molecules around the solvent?” There’s something special about certain things or systems, like the oil droplet, that manage to distinguish themselves from the universe, or the environment in which they are immersed.

The Markov blanket helps explain how things can exist—but what is it, exactly?

The Markov blanket is a permeable interface between the inside and the outside, enabling a two-way exchange. Stuff on the outside—the environment, the universe, the heat bath—impacts what’s going on inside via the sensory part of the Markov blanket. The Markov blanket has sensory and active states. Stuff on the outside, the external states, influence the blanket’s sensory states, what the blanket senses. And stuff on the inside of the blanket, the internal states, influence the blanket’s active states. The active states close that circle of causality, or, if you like, they disclose what’s going on inside by acting on the outside state. With that mathematical construct in place, you can go a lot further than 20th-century physics, which was all about equilibrium statistics and thermodynamics—the kind of physics that you would have been taught in school. Implicit in equilibrium physics is the notion that you’ve got an isolated or a closed system immersed in a heat bath—without ever asking where the heat bath came from. That implicitly assumed the Markov blanket.
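The statistical content of the blanket can be stated in one line (a gloss in notation Friston uses elsewhere; here μ are internal states, η external states, and the blanket comprises sensory states s and active states a): internal and external states are conditionally independent given the blanket.

```latex
p(\mu, \eta \mid s, a) \;=\; p(\mu \mid s, a)\, p(\eta \mid s, a)
```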

The brain has to sample the world in a way that it knows what’s going to happen next.

Where can the Markov blanket take physics in the 21st century?

You can start to address what most people in physics—not most, but those with a more adventurous mind and the time and money to do it—want to ask and address. Which is, “How do things self-organize when they are exposed to something out there?” Open systems, in other words. Systems that are far from equilibrium—non-equilibrium steady states that persist despite the fact that they are in exchange with their environment. We are the perfect examples of things that seem to persist over time, despite a fluctuating and capricious world out there.

Your work seems to make a physics of sentience possible. Can we really quantify forces that govern our minds?

That’s exactly right. Force is just an expression of a gradient, and the gradient is a construct that determines the flow of states. You can think of gravity, for example, as a force or as a gradient flow on a gravitational potential. I’m getting a bit abstract now, but I think it’s important and demystifying to say that the potential energy function that produces the forces that are causing all your neural activity at this moment—while you’re listening in a sentient way to what I’m talking about—can all be written down as flows on a potential energy, the logarithm of the probability of my sensations, of being in a particular state, given my model of the world.

This, almost tautologically, has to be the case, mathematically speaking. If I’ve got a good model of the world on the other side of my Markov blanket, and I keep on sampling things that I predict and I expect to sample, then it must be the case that everything on the inside is recapitulating and minimizing that potential. Mathematically, this means maximizing my evidence for the likelihood of those sensations, given my understanding, or model prediction, of what’s going on. This leads you into this notion of self-evidencing, which is just another way of saying “to exist.”

OK, you’re getting a bit abstract. Can you put this mental action in more concrete terms?

There’s a really nice way of thinking about the mechanics of this in terms of belief-updating. So, in the world of Bayesian statistics, you get some new data. You update your prior beliefs to posterior beliefs after seeing the data. You’re assimilating that data, updating it, then revising and changing your mind. Changing your mind on the basis of the new information at hand is called belief-updating. That belief-updating is a measure of the degree to which you have moved in this space of beliefs, in this information geometry. If you get lots of new information that changes your mind a lot, you’ve moved a long distance. And that means that there has been a big force, a big pressure, on you exerted by this new information to change your mind.

If, on the other hand, what you are sensing from, say, the soles of your feet, conveys no information, and you’re not changing your mind, you wouldn’t notice it. There’s no belief-updating unless you attend to the sensations from the soles of your feet. I find this fascinating: The measure of how much you move in these information geometries, how much force is exerted, or how steep these gradients are that are pulling you this way or that, very much rests upon the precision of information. It is this precision that determines mental action.
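Here is a hedged sketch of what precision-weighted belief-updating looks like in the simplest case, a Gaussian belief (an editorial illustration, not Friston’s code): the distance the posterior moves scales with the precision—the inverse variance—of the incoming information.

```python
def update_gaussian(prior_mean, prior_prec, obs, obs_prec):
    """Bayesian update of a Gaussian belief; precision = 1 / variance."""
    post_prec = prior_prec + obs_prec
    post_mean = (prior_prec * prior_mean + obs_prec * obs) / post_prec
    return post_mean, post_prec

# Precise sensory evidence moves the belief a long way...
print(update_gaussian(0.0, 1.0, obs=5.0, obs_prec=4.0))   # mean jumps to 4.0
# ...while imprecise evidence (the soles of your feet) barely moves it.
print(update_gaussian(0.0, 1.0, obs=5.0, obs_prec=0.01))  # mean ~0.05
```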

Richard Feynman had exactly the same problem in quantum electrodynamics.

What do you mean by “precision of information”?

It’s the curvature, or the gradients of the force fields causing your belief-updating, literally causing increases or decreases in neural activity. It ties together very nicely from a mathematical point of view. If you can start to choose how much precision to afford this kind of information, or that kind of information, then you’ve got a bit of an inner life. Of course, oil droplets can’t choose where to deploy their precision—but you can.

While looking at you, I can focus on the lamp to my left without moving my eyes to it. You’re saying that ability is a kind of prerequisite for a rudimentary form of inner life. Is the ability to be aware that you have this ability a further step toward a richer inner life?

Absolutely. You and I have this hypothesis that it’s me having these qualitative experiences, of talking to you. And it’s me prosecuting these mental actions. But that’s just another hypothesis, and it’s just another representation of some reality out there. If you start to put these representations into the mix, then you’re getting much closer to a minimal selfhood that underwrites the agency and the ownership of these qualitative experiences—experiences that have been actively constructed through some covert mental actions and inner life.

How do Markov blankets help make sense of our inner life?

I have to be clear that I’m speaking as a physicist, because I’m not a philosopher. That said, there is a representationalist interpretation of the internal states of something with a Markov blanket. You could say that all that matters in terms of sentience, perception, and active inference is just on the inside. It’s our neuronal activity, say, the internal states that are dependent upon, and influencing, the blanket states. The blanket states comprise our sensory states, our sensory receptors—our sensorium, if you like—and ways of changing that sensorium through acting, like my eyes palpating the world to get new sensory information. This means you’re never going to be able to transparently sample—or know—what is out there generating sensory blanket states. You could then easily adopt an anti-realist position about external reality.

Do Markov blankets support the theories that cast doubt on the existence of the world outside individual consciousness?

Not necessarily. If you have a Markov blanket, it is true mathematically that the inside has to have some form of synchronization with the outside. Any system that survives in a world and regulates its exchange with that world must embody or contain a model of that world. There has to be a reality out there that is sympathetic to, and has engendered, the notion of self-organization, the very existence of the blanket that contains internal states.

For example, if I measured a certain neuronal population in your visual cortex, I could infer that because this particular population has increased its activity, there is more than likely a bar of visual contrast moving across your visual receptive field, at that point in your visual space. So in a sense, it dissolves the realist/anti-realist debate. You’re licensed to adopt an anti-realist perspective, but at the end of the day, you have to acknowledge, “Yes, I can only do that because there’s something out there.”

I’ve been through the doors in the Huxleyan sense. It’s mind-opening.

Why do you think we are so puzzled by the hard problem of consciousness?

This is something that Andy Clark is very bemused by, and relates to David Chalmers’ question, “Why are we so puzzled about the fact that we have qualitative experience?” One deflationary approach is to say, “Well, you’re committed to the meta-problem itself.” What kinds of sentience, or what kinds of creatures, could possibly be puzzled by the fact that they perceive? Just asking that question provides some really interesting insights.

As soon as you have a bit of your brain that is deploying internal action, it means implicitly that you have to have representations of different kinds of action. You’ve now got the opportunity to think: What would happen if I didn’t attend to this qualitative experience, this redness, say? It may well be that having a brain that has these counterfactual hypotheses at its disposal is all you need to explain why philosophers exist and ask these questions.

This brings you back to this notion that anything you talk about is really just an explanation for my lived world. It’s the simplest explanation for all of these sensations that I’m getting, in all the modalities that I possess. And it doesn’t have to be true or false. As long as it’s a good-enough explanation that keeps your surprise down and self-evidence nice and high—that’s all it’s required to do. Selfhood in itself now becomes just another explanation. Anything that a philosopher says also succumbs to exactly the same argument, including qualia. Qualia become reifications of the best explanation for my understanding of my sensory data and my internal view of this inner life. The highest form of consciousness is the philosopher’s brain.

That’s very flattering to philosophers.

They deserve it. Not only do they have internal representations of qualia and can talk about them, they actually have this ability to generate an effective world in which it’s possible to not have qualia. You start to entertain the following puzzle: Oh, if it’s possible not to have qualia, why do we have qualia?

Is the self an illusion?

Well, say you’re a feral child who’s never seen another mammal. There would be no need to have a notion of self. You and the universe would just be one thing. But as soon as you start to notice other things that look like you, a question has to be resolved, “Did you do that, or did I do that?” Now you need to have, as part of your internal model, the hypothesis, the fantasy, the illusion—which may be absolutely right—that there are other things like me out there, and I need to model that. I think theory of mind and the necessity of encultured brains provides a simple answer as to why we have self. But to come back to your question, I think, yes, selfhood is another plausible hypothesis in my generative model that provides the best explanation for my sensory exchanges.

I’ve read some of your work on psychedelics. What do they do to our belief-updating minds?

Psychedelics act on certain kinds of neurotransmitter systems called neuromodulators and literally disintegrate your belief-updating, freeing lower-level neuronal populations from the evidence and the influences of higher-level populations. You’ll now be unable to call on high-level representations of selfhood to constrain your qualitative experience at a lower level of abstraction. You may experience a dissolution of the ego: literally, the “self” representation may no longer be able to influence experience. You might be attending to vivid sensations that now have other interpretations, ones not glued together by a self-centered, egocentric narrative. You will have alternative explanations for what caused this sensation and what caused that sensation. All of this is perfectly sensible under a generative model in which the constraints of one level of processing on another have been removed. For example, alongside the hypothesis that I am me, there’s another hypothesis: I’m not me. If you start to remove the evidence for these two hypotheses, then you create an enormous uncertainty about me-hood. You’ll get depersonalization and possibly a bad trip.

Do you have any personal experiences of tripping?

When I was younger, I enjoyed magic mushrooms. So I’ve been through the doors in the Huxleyan sense. It’s a mind-opening and revealing experience where you never quite take for granted the grip you have on reality. This grip is a gift of a well-oiled inference machine generating fantasy after fantasy that’s just spot on. When that goes away, you’ll know about it, and you’ll appreciate how finely tuned our grip on that sensorium is and how privileged we are to exist.

Brian Gallagher is an associate editor at Nautilus. Follow him on Twitter @BSGallagher.


Stuart Hameroff’s research involves a theory of consciousness developed over the past 20 years with eminent British physicist Sir Roger Penrose. Called ‘orchestrated objective reduction’ (‘Orch OR’), it suggests consciousness arises from quantum vibrations in protein polymers called microtubules inside the brain’s neurons, vibrations which interfere, ‘collapse’ and resonate across scale, control neuronal firings, generate consciousness, and connect ultimately to ‘deeper order’ ripples in spacetime geometry. Consciousness is more like music than computation.

What is consciousness?

How does the brain, a lump of pinkish-gray meat, produce feelings, emotions, understanding and awareness (a question termed the ‘hard problem’ by philosopher David Chalmers)? The mystery has been pondered since ancient times, and is currently approached from many disciplines, e.g. neuroscience, medicine, philosophy, psychology, physics, biology, cosmology, the arts, meditative and spiritual traditions, etc. All these have something to say, but from different directions, like the proverbial blind men describing an elephant. Moreover, consciousness cannot be directly measured, observed or verified, a problem in my field of anesthesiology, where we want our patients to be decidedly unconscious. How do we even study consciousness scientifically?

In 1994 I co-organized the first international, interdisciplinary conference ‘Toward a Science of Consciousness’ at the University of Arizona in Tucson, bringing together all approaches under one umbrella, or more aptly perhaps, one circus tent. After some confusion, the interdisciplinary concept took hold, thanks largely to a famous talk the opening morning by David Chalmers.

The audience was restless after two boring lectures when Dave took the stage. With waist-length hair, strutting like Mick Jagger, he explained that brain functions including memory, learning, language and behavior were difficult, but still relatively easy compared to the really ‘hard problem’ of subjective experience, feelings, emotions, awareness, thinking, composed of raw components termed ‘qualia’. Moreover he offered his own view that qualia were somehow ‘funda-mental’, akin to basic features of the universe, like electrical charge, magnetic spin, photons or mass, and that there must exist some ‘psycho-physical bridge’ between brain activities and a basic level of the universe. The audience buzzed. At the coffee break I eavesdropped on chatter about the ‘hard problem’.

The conference was a hit. Soon thereafter we started the Center for Consciousness Studies at the University of Arizona, Dave Chalmers joining our philosophy department to become the Center’s director. The conferences have been held annually since 1994, alternating between Tucson and elsewhere around the globe. In 2014 we celebrated the 20-year anniversary, borrowing from the Beatles’ famous Sgt. Pepper album cover to feature prominent consciousness researchers past and present (Figure 1). The 2015 conference was held last June at the University of Helsinki in Finland, and plans are underway for the 2016 conference (now called just ‘The Science of Consciousness’) back in Tucson next April (Figure 2).

Figure 1. ‘It was 20 years ago TODAY!’ Conference poster art for the 20-year anniversary conference ‘Toward a Science of Consciousness 2014’.

Thanks to the Beatles, Abi Behar-Montefiore and Dave Cantrell, Biomedical Communications, The University of Arizona.

Figure 2. Conference poster for ‘The Science of Consciousness 2016’. Thanks to Roma Krebs, Biomedical Communications, The University of Arizona.

So… what is consciousness? Where do we stand after all these years?

Scientists and philosophers have historically likened the brain to contemporary information technology, from the ancient Greeks comparing memory to a ‘seal ring in wax,’ to the 19th-century brain as a ‘telegraph switching circuit,’ to Freud’s sub-conscious desires ‘boiling over like a steam engine,’ to Karl Pribram’s notion of the mind as a hologram, and finally, with a fairly large-scale modern consensus, the computer. The current standard dogma is that consciousness emerges from complex computation among brain neurons and their synaptic connections acting like binary ‘bits’ and logic switches. Within this general view are approaches such as ‘integrated information’, ‘global workspace’, ‘predictive coding’, ‘scale-invariance’, ‘Bayesian probabilities’, ‘pre-frontal feedback’, ‘higher order thought’, ‘coherent volleys’ and ‘synchronous oscillations’. But the core idea is that the brain is a computer, a complex network of simple bit-like neurons.

Accordingly, and because brain disorders like depression, Alzheimer’s disease and traumatic brain injury ravage humanity without effective treatments, scientists, governments and funding agencies have bet big on the brain-as-computer analogy. Billions and billions of dollars and euros are being poured into ‘brain mapping,’ the notion that identifying, and then simulating, brain neurons and their synaptic connections can elucidate and reproduce brain function, leading to brain prosthetics and perhaps even ‘mind downloading’, transferring one’s consciousness into a computer when facing bodily death. President Obama’s BRAIN Initiative, the European Human Brain Project and the Allen Institute’s efforts in Seattle to map the mouse cortex are aimed at such goals. But so far at least, the bet isn’t paying off.

For example, beginning more modestly, a world-wide consortium has precisely simulated the already-known 302-neuron ‘brain’ of a simple roundworm called C. elegans. The biological worm swims nimbly to forage, eat and mate. But even when prodded, the simulated C. elegans just lies there, with no functional behavior. Something is missing. Funding agencies are getting nervous. Bring in the ‘P.R. guys.’

In a New York Times piece ‘Face It, Your Brain is a Computer’ (June 27, 2015), NYU psychologist/neuroscientist Gary Marcus desperately beat the dead horse. Following a series of failures by computers to simulate basic brain functions (much less approach the ‘C-word’, consciousness), Marcus is left to ask, in essence: if the brain isn’t a computer, what else could it possibly be?

Actually, rather than a computer, the brain is looking like a multi-scalar vibrational resonance system – not unlike an orchestra. Rather than a computational output, consciousness seems more like music.

Like many natural systems, dynamical brain information patterns repeat over spatiotemporal scales in fractal-like (‘1/f’) nested hierarchies of neuronal networks, with resonances and interference beats. One example of multi-scalar spatial mapping is the 2014 Nobel Prize-winning work (O’Keefe, Moser and Moser) on ‘grid cells’, hexagonal representations of spatial location arrayed in layers of entorhinal cortex, each layer encoding a different spatial scale of the surrounding environment. Moving from layer to layer in entorhinal cortex is precisely like zooming in and out on a Google map.

Indeed, neuroscientist Karl Pribram’s assessment of the brain as a ‘holographic storage device’ (which Marcus summarily dismissed) now seems on target. Holograms encode distributed information as multi-scalar interference of coherent vibrations, e.g. from lasers. Pribram lacked a proper coherent source—a laser in the brain—but evidence now points to coherent dynamics in ubiquitous structures called microtubules inside brain neurons as the high-frequency origins of the brain’s vibrational hierarchy: the drumbeat, or percussion section, of the orchestra.

Microtubule madness

Microtubules are cylindrical polymers of the protein ‘tubulin’, major components of the structural cytoskeleton inside cells, and the brain’s most prevalent protein. Their computer-like lattice structure, self-organizing capabilities and vibrational resonances have suggested to various scientists that microtubules might process information. Pondering purposeful behavior of single cell organisms without synaptic connections, the famed neuroscientist Charles Sherrington observed in 1951 ‘of nerve there is no trace, but the cytoskeleton might serve’, calling microtubules ‘the cell’s nervous system’.

I became obsessed with microtubules in a research project on mitosis in medical school in the early 1970s. In mitosis, or cell division, microtubules (‘mitotic spindles’) delicately tease and separate chromosomes into precisely equal ‘daughter cell’ pairs. If the process isn’t perfect, if something goes awry, if the chromosomes are separated unequally, abnormal genotypes, maldevelopment or cancer can ensue. My colleagues in the lab focused on the chromosomes and genes—this was the dawn of gene sequencing and genetic engineering—but I was fascinated with how the spindle microtubules ‘knew’ what to do and where to go. There seemed to be some form of intelligence, if not consciousness, at that level. As microtubules were also shown at that time to have a polymer lattice structure similar to computers (also new to me then) and to be prevalent in neurons, I got the idea that they were molecular-scale information processors underlying consciousness.

In the 1980s, my colleagues and I proposed microtubules acted like computers, specifically ‘molecular automata’, processing information, encoding memory, oscillating coherently and regulating functions from within each neuron and other cells. During that time proponents of artificial intelligence (‘AI’) and later ‘The Singularity’ were assuming the brain’s 100 billion (10¹¹) neurons, each with about a thousand (10³) synapses switching at about 100 (10²) times per second, would give a brain computational capacity of about 10¹⁶ operations (‘ops’) per second. They asserted (and continue to assert) that when this 10¹⁶ ops/sec capacity becomes realized and properly configured (e.g. via brain mapping) in silicon computers, then brain equivalence, including consciousness, would be attained. With lots more money, they implied, conscious computers were around the corner.

But as there were about a billion (10⁹) microtubule subunits (‘tubulins’) in every neuron, each tubulin oscillating and switching about 10 million (10⁷) times per second, my colleagues and I calculated about 10¹⁶ ops/sec per neuron at the microtubule level, pushing the AI/Singularity goalpost way ‘down the field’, to roughly 10²⁷ ops/sec for the whole brain. Publishing these ideas and presenting at various meetings, I became unpopular and a thorn in the side of AI/Singularity advocates.
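The two estimates are straightforward to check, using only the figures quoted above (a back-of-envelope sketch):

```python
neurons  = 1e11   # neurons in the brain
synapses = 1e3    # synapses per neuron
syn_rate = 1e2    # switches per synapse per second
print(neurons * synapses * syn_rate)   # 1e16 ops/sec, synaptic estimate

tubulins = 1e9    # tubulin subunits per neuron
tub_rate = 1e7    # oscillations per tubulin per second
print(tubulins * tub_rate)             # 1e16 ops/sec per single neuron
print(tubulins * tub_rate * neurons)   # 1e27 ops/sec, microtubule estimate
```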

Figure 3. Vibrational resonances at different frequencies and structural levels inside one neuron. Left to right: Interior of a pyramidal neuron, single microtubule, row of ‘tubulins’ and dipole oscillations (with anesthetic effect). Corresponding dynamics at various frequencies from work of Bandyopadhyay’s group are shown at bottom.

Artwork by Dave Cantrell and Paul Fini, Biomedical Communications, University of Arizona.

But then, one day in the early 1990s, I ran smack into the ‘hard problem’ (though the term itself had not yet been coined). Someone said to me, ‘Let’s say you’re right and there’s all this processing inside neurons. How would that explain consciousness?’ Essentially he was saying: how could computation, or vibrational resonances of any sort, yield ‘qualia’—the taste of chocolate, the smell of lilac, the touch of soft skin or the feeling of love? I was a bit stunned. He was correct. Even if microtubules were the biological structures most directly related to consciousness, what was their mechanism? What was consciousness?

Fortunately that same anonymous person (to whom I remain grateful) suggested I read a book by the eminent British physicist Sir Roger Penrose called ‘The Emperor’s New Mind’. And so I did.

A quantum leap

The book was many things. First, The Emperor’s New Mind was a putdown of the AI assertion that consciousness would emerge from complex computation per se. A computer could be enormously intelligent, Roger explained, but completely lack understanding, feelings or awareness. Second, the book was an overview of modern physics, and third, amazingly, Penrose explained consciousness by triangulation with two other mysteries, general relativity and the measurement problem in quantum mechanics. In so doing, he connected consciousness to the fine scale structure of the universe.

I was stunned again, but in a good way. It seemed far-fetched, but was indeed an actual proposed mechanism for consciousness, still, to this day, the only actual mechanism ever proposed. The main idea was that the brain contained certain types of quantum computers connected to processes and events in the basic makeup of reality, spacetime geometry. As Chalmers later described it, consciousness was in some way ‘funda-mental’, intrinsic to the universe, and connected to the brain by a ‘psycho-physical bridge’. But how?

I learned that in quantum computers information states (e.g. binary ‘bits’ of 1 or 0) exist in ‘quantum superposition’, wave-like coexistence of multiple possible states, e.g. quantum bits, or ‘qubits’, of both 1 AND 0. After some period of interaction and computation, the qubits reduce, or collapse, to specific states of 1 OR 0 as the solution. But the mechanism by which reduction or collapse occurs is mysterious (the ‘measurement problem’ in quantum mechanics). It seemed to have something to do with consciousness.
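The statistics of that reduction are easy to simulate classically (a toy editorial sketch of the Born rule, not of the mystery itself): amplitudes for both outcomes coexist until a measurement yields one of them.

```python
import random

def measure(alpha0=2 ** -0.5, alpha1=2 ** -0.5):
    """Collapse a qubit: '1 AND 0' becomes '1 OR 0' with Born-rule odds."""
    p0 = abs(alpha0) ** 2            # probability of reading out 0
    return 0 if random.random() < p0 else 1

counts = {0: 0, 1: 0}
for _ in range(10_000):
    counts[measure()] += 1
print(counts)                        # roughly 50/50 for an equal superposition
```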

In the early days of quantum mechanics, Niels Bohr, John von Neumann, Eugene Wigner and later Henry Stapp suggested that quantum superpositions persisted until observed by a conscious human, the ‘observer effect’, commonly termed the ‘Copenhagen interpretation’ after Bohr’s Danish origin. According to ‘Copenhagen’, conscious observation causes quantum superpositioned possibilities to reduce to definite states—consciousness collapses the wavefunction.

But this approach put consciousness outside science, as an unknown mysterious entity, and avoided addressing the underlying reality. To illustrate the absurdity, Erwin Schrödinger designed his still-famous thought experiment known as ‘Schrödinger’s cat’. Imagine a cat in a box with a vial of poison whose release is coupled to a quantum superposition of possibilities. According to Copenhagen, Schrödinger concluded, the cat would be both dead and alive until a conscious human opened the box and looked inside. Absurd it was, but the problem persists.

Another potential solution is ‘decoherence’ in which interaction between a quantum system and its classical environment disrupts superposition. But how can any quantum system truly be isolated?

Other views include the ‘multiple worlds interpretation’ (‘MWI’) proposed by Hugh Everett and others in which there is no collapse. Every possibility in a superposition survives, continuing to evolve into its own new ‘parallel’ universe, resulting in an infinite number of coexisting overlapping worlds. As crazy as it sounds, MWI is extremely popular among physicists.

Another set of interpretations assumes some sort of objective threshold causes reduction—‘objective reduction’ (‘OR’). Among these is an OR mechanism proposed by Sir Roger Penrose in ‘The Emperor’s New Mind’.

He began by considering how particles could conceivably exist in two or more locations simultaneously, relating it to Einstein’s general relativity in which mass is equivalent to curvature in spacetime geometry. Using simple 2-dimensional spacetime sheets (Figure 4), superposition is then seen as simultaneous alternative curvatures, essentially bubbles or blisters in the fine scale structure of the universe.

Figure 4. A ‘spacetime qubit’, with four-dimensional spacetime geometry depicted as a two-dimensional sheet. Left: A particle and its equivalent spacetime curvature oscillate between two positions. Right: quantum superposition of the particle in both locations is equivalent to alternative, separated spacetime curvatures.

Artwork by Dave Cantrell and Paul Fini, Biomedical Communications, University of Arizona.

Figure 5. Two possible fates of a superposition. Left: According to the multiple worlds interpretation (‘MWI’), superpositioned possibilities each evolve to form their own universe. Right: According to Penrose OR, superpositions evolve only until reaching the threshold for objective reduction (‘OR’) at time τ given by E_G ≈ ℏ/τ. Self-collapse then occurs, accompanied by a moment of conscious experience (‘BING!!’).

Artwork by Dave Cantrell and Paul Fini, Biomedical Communications, University of Arizona.

If these separations were to evolve, each possibility might then give rise to its own universe, as in MWI (Figure 5a). But Penrose considered spacetime separations to be unstable, reaching an objective threshold for ‘self-collapse’, or objective reduction (OR), at time τ ≈ ℏ/E_G, where ℏ is Planck’s constant over 2π and E_G is the gravitational self-energy of the separation. Thus each OR event creates reality, which then again dissolves into superposition, rippling and rearranging the structure of the universe (Figure 5b). OR avoids the need for multiple worlds.
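Numerically, the threshold works like an inverse relation: large superposed masses collapse almost instantly, tiny ones persist. A hedged illustration (the 25 ms figure is the gamma-band timescale the Orch OR literature associates with conscious moments; the energy below is simply ℏ/τ, not an independently derived quantity):

```python
hbar = 1.054571817e-34      # reduced Planck constant, J·s
tau = 25e-3                 # 25 ms, roughly one gamma-frequency cycle
E_G = hbar / tau            # gravitational self-energy at threshold
print(E_G)                  # ≈ 4.2e-33 J: a minuscule energy, hence the
                            # need to "orchestrate" many tubulins at once
```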

But then Penrose added two profound features. First, he proposed each OR event is an instant of subjective experience—a moment of conscious awareness, of ‘qualia’ intrinsic to the universe (‘BING!!’ in Figure 5). Thus rather than consciousness causing collapse (as in Copenhagen), collapse causes consciousness, or is identical to consciousness. This meant that simple, random OR moments of awareness occur ‘here, there and everywhere’ throughout the fine scale structure of the universe (appearing as ‘decoherence’). These would be generally random, non-cognitive, lacking meaning or memory, and accordingly termed ‘proto-conscious’. I later thought of such random OR moments as the spurious sounds, tones and notes of an orchestra warming up. Somehow, the brain ‘orchestrates’ random OR notes into music.

The second feature was that particular spacetime curvatures and material states selected in organized (‘orchestrated’) OR events were not chosen randomly, as is proposed to be the case in Copenhagen and decoherence, but rather were influenced by what Penrose termed ‘non-computable Platonic values’ embedded in fundamental spacetime geometry. Within its very structure, the universe encoded mathematical truth, ethical and aesthetic values and qualia, with which our conscious thoughts and actions could resonate.

I was impressed. Roger Penrose had turned the observer effect upside-down, putting consciousness back into science, precisely on the edge between quantum and classical worlds. And the connection to spacetime geometry, non-locality and Platonic influences (following the ‘way of the Tao’, ‘divine guidance’) seemed to me a source of creativity and spirituality (though Roger has always avoided such terminology). Intuitively it felt right, and was maybe ‘crazy enough’ to be correct.

But Penrose lacked a biological candidate for OR-terminated quantum computing in the brain—a means for orchestration. He had a mechanism for consciousness, but not a biological structure. In microtubules, I had a biological structure, but not a mechanism. We teamed up in the mid-1990s on a quantum theory of consciousness (‘orchestrated objective reduction’, ‘Orch OR’) linking microtubule quantum processes to fluctuations in the structure of the universe.

Our theory was immediately, harshly and repeatedly criticized and ridiculed, as the brain was thought too ‘warm, wet and noisy’ for seemingly delicate quantum coherence. And we were (and are) a threat to the AI/Singularity/Brain Mapping ‘industrial complex’. But evidence now shows that (1) plant photosynthesis routinely uses quantum coherence in warm sunlight (if a potato can do it…?), (2) microtubules have quantum resonances in gigahertz, megahertz and kilohertz frequency ranges (the work of Anirban Bandyopadhyay and colleagues at the National Institute for Materials Science in Tsukuba, Japan), and (3) anesthetic gases selectively erase consciousness by quantum-level actions on brain microtubules (the work of Rod Eckenhoff and colleagues at the University of Pennsylvania). In 1998 we published 20 testable Orch OR predictions, 6 of which have been verified and none refuted.

Figure 6. The ‘psycho-physical bridge’, from biology to the structure of the universe. Left: Dipole qubit in a microtubule governed by pi electron quantum resonance (Figure 3). Right: Corresponding smaller (e.g. Planck) scale spacetime qubit (Figure 4). These are proposed to be self-similar and linked in 1/f fractal-like spacetime geometry.

Artwork by Dave Cantrell and Paul Fini, Biomedical Communications, University of Arizona.

Consciousness in the Universe

The Orch OR theory portrays consciousness as rippling vibrations in the structure of the universe, self-similar patterns traversing enormous differences in scale – multiple ‘octaves’ – from the infinitesimally tiny levels of spacetime geometry, resonating upward to reach biology by quantum effects in microtubules, a ‘psycho-physical bridge’ (Figure 6). In this view consciousness is akin to music, and the brain more like a quantum orchestra than a computer. Random OR-mediated tones, notes and sounds intrinsic to the structure of the universe – the orchestra warming up – become music as the band begins to play. (There’s no need for a conductor as the music is self-organizing like jazz, jam sessions or Indian raga.)

These ideas are based on logic and evidence, but are admittedly speculative. However, mainstream approaches from materialist science, brain mappers, AI/Singularity advocates (the ‘still-naked Emperor’) and militant atheists offer no evidence that consciousness emerges strictly from the brain-as-neuronal-computer. Based on synaptic connectivity, they can’t simulate the behavior of a simple worm. And the AI/Singularity view necessitates consciousness being an epiphenomenal illusion, with no real role to play. Accordingly, we are merely ‘helpless spectators’, as Thomas Huxley bleakly summarized.

On the contrary, implications of the brain as a quantum orchestra tuned to the universe include: (1) causality—each self-collapse chooses a particular state, which may lead to behavior. (2) Altered states of consciousness can occur at deeper levels and higher frequencies, both within brain neurons and within the structure of the universe (as the Beatles sang, ‘The deeper you go, the higher you fly…’). (3) At sufficiently deep levels of spacetime geometry, consciousness may conceivably exist without biology and remain unified by entanglement, supporting possibilities for telepathy, so-called out-of-body experiences, and even afterlife and reincarnation. (4) If OR-mediated feelings are at large in the universe, they would have been there all along, able to spark the origin of life and drive its evolution. Human and animal psychological behavior is predicated on ‘reward’, or avoiding pain, i.e. on ‘feelings’. Without feelings, the Darwinian view that creatures act to promote survival of their genes is incomplete. Evolution may require an OR-mediated ‘quantum pleasure principle’ as its feedback fitness function. And (5) OR may also drive evolution of the universe itself.

In the year 2000 the journal Nature asked ten prominent physicists about prospects for a ‘Theory of Everything’, or ‘Grand Unified Theory’, reconciling seemingly disparate features in cosmology, quantum physics and relativity. Among them, only Sir Roger Penrose included consciousness as a key component of such a theory, tying together various mysteries.

How might this be so? ‘The anthropic principle’ is the philosophical consideration that the universe is perfectly tuned to accommodate life and consciousness. The 20 or so fundamental constants which govern the universe (e.g. the mass of the proton, the gravitational constant, etc.) are all precisely, exactly what are needed for us to exist – a coincidence of astronomically unlikely probability. We won the cosmic lottery. But how?

There are two types of explanations, the so-called ‘weak’ and ‘strong’ anthropic principles. The weak anthropic principle (e.g. by Brandon Carter) explains the apparent fine tuning as ‘selection bias’. Only in this particular universe (out of an infinite number of possible universes, e.g. as in the multiple worlds interpretation, ‘MWI’) are living beings present and able to ponder this question. The strong anthropic principle, as advocated by John Barrow and Frank Tipler, argues this one-and-only universe is somehow compelled to harbor conscious beings. But why would the universe be so compelled?

A logical answer is that consciousness is intrinsic to the universe, as suggested in Orch OR, and related to the 20 or so fundamental constants which regulate the universe and can evolve, perhaps over cycles of ‘big bangs’ as suggested in Roger’s book ‘Cycles of Time’. The serial universe evolves to optimize, tune and resonate consciousness… to feel good. Consciousness may be driving the universe.

Hameroff S, Penrose R (2014) ‘Consciousness in the Universe: A Review of the Orch OR Theory’. Physics of Life Reviews 11(1):39–78.

Stuart Hameroff MD
Professor, Anesthesiology and Psychology

Director, Center for Consciousness Studies

Banner-University Medical Center, The University of Arizona, Tucson, Arizona



Is Life Special Just Because It’s Rare?

A rocket powered by kerosene and liquid oxygen and carrying a scientific observatory blasted off into space at 10:49 p.m., March 6, 2009 (by local calendars and clocks). The launch came from the third planet out from a G-type star, 25,000 light-years from the center of a galaxy called the Milky Way, itself located on the outskirts of the Virgo Cluster of galaxies. On the night of the launch, the sky was clear, with no precipitation or wind, and the temperature was 292 degrees by the absolute temperature scale. Local intelligent life forms cheered the launch. Shortly before the blastoff, the government agency responsible for spacecraft, named the National Aeronautics and Space Administration, wrote in the global network of computers: “We are looking at a gorgeous night to launch the Kepler observatory on the first-ever mission dedicated to finding planets like ours outside the solar system.”

A grain of sand: The Gobi Desert has an area of 500,000 square miles. If it represents all of the matter in the cosmos, then living matter is the equivalent of a single grain of sand. Adrienc / Getty Images

The above account might have been written by an intelligent life form located on exactly the kind of distant planet that Kepler would soon begin to search for. Named after the Renaissance astronomer Johannes Kepler, the observatory was specifically designed to find planets outside our solar system that would be “habitable”—that is, neither so near their central star that water would be boiled off, nor so far away that water would freeze. Most biologists consider liquid water to be a precondition for life, even life very different from that on Earth. Kepler has surveyed about 150,000 sun-like stellar systems in our galaxy and discovered over 1,000 alien planets. Its enormous stockpile of data is still being analyzed.

If the Gobi Desert represents all of the matter flung across the cosmos, living matter is a single grain of sand.

For centuries, we human beings have speculated on the possible existence and prevalence of life elsewhere in the universe. For the first time in history, we can begin to answer that profound question. At this point, the results of the Kepler mission can be extrapolated to suggest that something like 10 percent of all stars have a habitable planet in orbit. That fraction is large. With 100 billion stars just in our galaxy alone, and so many other galaxies out there, it is highly probable that there are many, many other solar systems with life. From this perspective, life in the cosmos is common.

However, there’s another, grander perspective from which life in the cosmos is rare. That perspective considers all forms of matter, both animate and inanimate. Even if all “habitable” planets (as determined by Kepler) do indeed harbor life, the fraction of all material in the universe in living form is fantastically small. Assuming that the fraction of planet Earth in living form, called the biosphere, is typical of other life-sustaining planets, I have estimated that the fraction of all matter in the universe in living form is roughly one-billionth of one-billionth. Here’s a way to visualize such a tiny fraction. If the Gobi Desert represents all of the matter flung across the cosmos, living matter is a single grain of sand on that desert. How should we think about this extreme rarity of life?
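The grain-of-sand image can be sanity-checked with rough numbers (an editorial sketch; the grain size and single-layer assumption are mine, not Lightman’s):

```python
area_mi2 = 5e5                    # Gobi Desert area given above, square miles
area_m2 = area_mi2 * 2.59e6       # square miles to square meters
grains_per_m2 = 1e6               # one layer of 1 mm grains per square meter
print(area_m2 * grains_per_m2)    # ≈ 1.3e18 grains: one grain is about
                                  # one-billionth of one-billionth of the layer
```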


Most of us human beings throughout history have considered ourselves and other life forms to contain some special, nonmaterial essence that is absent in nonliving matter and that obeys different principles than does nonliving matter. Such a belief is called “vitalism.” Plato and Aristotle were vitalists. Descartes was a vitalist. Jöns Jakob Berzelius, the 19th-century father of modern chemistry, was a vitalist. The hypothesized nonmaterial vital essence, especially in human beings, has sometimes been called “spirit.” Sometimes “soul.” The eighth-century B.C. royal official Kuttamuwa built an 800-pound monument to house his immortal soul and asked that his friends feast there after his physical demise to commemorate him in his afterlife. The 10th-century Persian polymath Avicenna argued that since we would be able to think and to be self-aware even if we were totally disconnected from all external sensory input, there must be some nonmaterial soul inside of us. These are all vitalist ideas.

Modern biology has challenged the theory of vitalism. In 1828, the German chemist Friedrich Wöhler synthesized the organic substance urea from nonorganic chemicals. Urea is a byproduct of metabolism in many living organisms and, previous to Wöhler’s work, was believed to be uniquely associated with living beings. Later in the century, the German physiologist Max Rubner showed that the energy used by human beings in movement, respiration, and other forms of activity is precisely equal to the energy content of food consumed. That is, there are no hidden and nonmaterial sources of energy that power human beings. In more recent years, the composition of proteins, hormones, brain cells, and genes has been reduced to individual atoms, without the need to invoke nonmaterial substances.

Yet, I would argue that most of us, either knowingly or unknowingly, remain closet vitalists. Although there are moments when the material nature of our bodies screams out at us, such as when we have muscle injuries or change our mood with psychoactive drugs, our mental life seems to be a unique phenomenon arising from a different kind of substance, a nonmaterial substance. The sensations of consciousness, of thought and self-awareness, are so gripping and immediate and magnificent that we find it preposterous that they could have their origins entirely within the humdrum electrical and chemical tinglings of cells in our brains. However, neuroscientists say that is so.


Polls of the American public show that three-quarters of people believe in some form of life after death. Surely, this belief too is a version of vitalism. If our bodies and brains are nothing more than material atoms, then, as Lucretius wrote two millennia ago, when those atoms disperse as they do after death, there can be no further existence of the living being that once was.

Paradoxically, if we can give up the belief that our bodies and brains contain some transcendent, nonmaterial essence, if we can embrace the idea that we are completely material, then we arrive at a new kind of specialness—an alternative to the specialness of “vitalism.” We are special material. We humans living on our one planet wring our hands about the brevity of our lives and our mortal restraints, but we do not often think about how improbable it is to be alive at all. Of all the zillions of atoms and molecules in the universe, we have the privilege of being composed of those very, very few atoms that have joined together in the special arrangement to make living matter. We exist in that one-billionth of one-billionth. We are that one grain of sand on the desert.

And what is that special arrangement deemed “life”? The ability to form an outer membrane around the organism that separates it from the external world. The ability to organize material and processes within the organism. The ability to extract energy from the external world. The ability to respond to stimuli from the external world. The ability to maintain stability within the organism. The ability to grow. The ability to reproduce. We human beings, of course, have all of these properties and more. For we have billions of neurons connected to each other in an exquisite tapestry of communication and feedback loops. We have consciousness and self-awareness.

The two tramps in Samuel Beckett’s Waiting for Godot, placed on a minimalist stage without time and without space, waiting interminably for the mysterious Godot, capture our bafflement with the meaning of existence.

Estragon: “What did we do yesterday?”
Vladimir: “What did we do yesterday?”
Estragon: “Yes.”
Vladimir: “Why… (Angrily.) Nothing is certain when you’re about.”

Of course, there are questions that do not have answers.

But if we can manage to get outside of our usual thinking, if we can rise to a truly mind-bending view of the cosmos, there’s another way to think of existence. In our extraordinarily entitled position of being not only living matter but conscious matter, we are the cosmic “observers.” We are uniquely aware of ourselves and the cosmos around us. We can watch and record. We are the only mechanism by which the universe can comment on itself. All the rest, all those other grains of sand on the desert, are dumb, lifeless matter.

Of course, the universe does not need to comment on itself. A universe with no living matter at all could function without any trouble—mindlessly following the conservation of energy and the principle of cause and effect and the other laws of physics. A universe does not need minds, or any living matter at all. (Indeed, in the recent “multiverse” hypothesis endorsed by many physicists, the vast majority of universes are totally lifeless.) But in this writer’s opinion, a universe without comment is a universe without meaning. What does it mean to say that a waterfall, or a mountain, is beautiful? The concept of beauty, and indeed all concepts of value and meaning, require observers. Without a mind to observe it, a waterfall is only a waterfall, a mountain is only a mountain. It is we conscious matter, the rarest of all forms of matter, that can take stock and record and announce this cosmic panorama of existence before us.

I realize that there is a certain amount of circularity in the above comments. For meaning is relevant, perhaps, only in the context of minds and intelligence. If the minds don’t exist, then neither does meaning. However, the fact is that we do exist. And we have minds. We have thoughts. The physicists may contemplate billions of self-consistent universes that do not have planets or stars or living material, but we should not neglect our own modest universe and the fact of our own existence. And even though I have argued that our bodies and brains are nothing more than material atoms and molecules, we have created our own cosmos of meaning. We make societies. We create values. We make cities. We make science and art. And we have done so as far back as recorded history.

In his book The Mysterious Flame (1999), the British philosopher Colin McGinn argues that it is impossible to understand the phenomenon of consciousness because we cannot get outside of our minds to discuss it. We are inescapably trapped within the network of neurons whose mysterious experience we are attempting to analyze. Likewise, I would argue that we are imprisoned within our own cosmos of meaning. We cannot imagine a universe without meaning. We are not talking necessarily about some grand cosmic meaning, or a divine meaning bestowed by God, or even a lasting, eternal meaning. But just the simple, particular meaning of everyday events, fleeting events like the momentary play of light on a lake, or the birth of a child. For better or for worse, meaning is part of the way we exist in the world.

And given our existence, our universe must have meaning, big and small meanings. I have not met any of the life forms living out there in the vast cosmos beyond Earth. But I would be astonished if some of them were not intelligent. And I would be further astonished if those intelligences were not, like us, making science and art and attempting to take stock and record this cosmic panorama of existence. We share with those other beings not the mysterious, transcendent essence of vitalism, but the highly improbable fact of being alive.

Alan Lightman is professor of the practice of the humanities at MIT. This article, which first appeared online in our “Scaling” issue in October 2015, is included in his forthcoming book, Probable Impossibilities: Musings on Beginnings and Endings. He is the author of six novels, including the international bestseller Einstein’s Dreams, as well as The Diagnosis, a finalist for the National Book Award. He is also the author of a memoir, three collections of essays, and several books on science.


DISCRETE GENERATIVE MODELS

This section focuses on generative models of discrete outcomes caused by discrete states that cannot be observed directly (i.e., latent or hidden states). In brief, the unknown variables in these models correspond to states of the world that generate the outcomes of policies or sequences of actions. Note that policies have to be inferred. In other words, in active inference one has to infer what policy one is currently pursuing, where this inference can be biased by prior beliefs or preferences. It is these prior preferences that lend action a purposeful and goal-directed aspect.

Figure 1 describes the basic form of these generative models in complementary formats, and the implicit Bayesian belief updating following the observation of new (sensory) outcomes. The equations on the left specify the generative model in terms of a probability distribution over outcomes, states, and policies that can be expressed in terms of marginal densities or factors. These factors are conditional distributions that entail conditional dependencies, encoded by the edges in the Bayesian network on the upper right. The model in Figure 1 generates outcomes in the following way. First, a policy (i.e., action sequence) is selected at the highest level using a softmax function of the free energy expected under plausible policies. Sequences of hidden states are then generated using the probability transitions specified by the selected policy, which are encoded in B matrices. These encode probability transitions in terms of policy-specific categorical distributions. As the policy unfolds, the states generate probabilistic outcomes at each point in time. The likelihood of each outcome is encoded by A matrices, in terms of categorical distributions over outcomes, under each state.
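To make this generative process concrete, here is a minimal sketch in Python. The names A, B, D, G, and π follow the notation of Figure 1, but the dimensions are invented, the parameters are sampled at random, and G is a placeholder vector rather than an expected free energy computed from the preferences C and ambiguity H, so the sketch covers only the forward (generative) direction, not belief updating:

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_outcomes, n_policies, T = 4, 3, 5, 6   # illustrative sizes only

# A: likelihood P(outcome | state); columns are categorical distributions.
A = rng.dirichlet(np.ones(n_outcomes), size=n_states).T
# B: per-policy transition matrices P(next state | state, policy).
B = rng.dirichlet(np.ones(n_states), size=(n_policies, n_states)).transpose(0, 2, 1)
# D: prior over the initial state.
D = np.full(n_states, 1.0 / n_states)
# G: expected free energy of each policy (placeholder values, not computed).
G = rng.normal(size=n_policies)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# 1. Policies are more probable a priori if they minimize expected free energy.
pi = rng.choice(n_policies, p=softmax(-G))

# 2. Hidden states unfold via the chosen policy's transition matrices (B),
# 3. and each state generates an outcome via the likelihood mapping (A).
s = rng.choice(n_states, p=D)
outcomes = []
for t in range(T):
    outcomes.append(rng.choice(n_outcomes, p=A[:, s]))
    s = rng.choice(n_states, p=B[pi][:, s])

print(pi, outcomes)
```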

Figure 1. Generative model for discrete states and outcomes.

Upper left panel: These equations specify the generative model. A generative model is the joint probability of outcomes or consequences and their (latent or hidden) causes; see the first equation. Usually, the model is expressed in terms of a likelihood (the probability of consequences given causes) and priors over causes. When a prior depends upon a random variable it is called an empirical prior. Here, the likelihood is specified by a matrix A whose elements are the probability of an outcome under every combination of hidden states. Cat denotes a categorical probability distribution. The empirical priors pertain to probabilistic transitions (in the B matrix) among hidden states that can depend upon actions, which are determined by policies (sequences of actions encoded by π). The key aspect of this generative model is that policies are more probable a priori if they minimize the (time integral of) expected free energy G, which depends upon prior preferences about outcomes or costs encoded in C and the uncertainty or ambiguity about outcomes under each state, encoded by H. Finally, the vector D specifies the initial state. This completes the specification of the model in terms of parameters that constitute A, B, C, and D. Bayesian model inversion refers to the inverse mapping from consequences to causes; that is, estimating the hidden states and other variables that cause outcomes. In approximate Bayesian inference, one specifies the form of an approximate posterior distribution. The particular form used in this paper is a mean-field approximation, in which posterior beliefs are approximated by the product of marginal distributions over time points. Subscripts index time (or policy). See the main text and Table 1a in Friston, Parr, et al. (2017) for a detailed explanation of the variables (italic variables represent hidden states, while bold variables indicate expectations about those states).

Upper right panel: This Bayesian network represents the conditional dependencies among hidden states and how they cause outcomes. Open circles are random variables (hidden states and policies), while filled circles denote observable outcomes. Squares indicate fixed or known variables, such as the model parameters. We have used a slightly unusual convention in which parameters are placed on top of the edges (conditional dependencies) they may mediate.

Lower left panel: These equalities are the belief updates mediating approximate Bayesian inference and action selection. The (Iverson) brackets in the action selection panel return one if the condition in square brackets is satisfied and zero otherwise.

Lower right panel: This is an equivalent representation of the Bayesian network in terms of a Forney or normal-style factor graph. Here the nodes (square boxes) correspond to factors, and the edges are associated with unknown variables. Filled squares denote observable outcomes. The edges are labeled in terms of the sufficient statistics of their marginal posteriors (see the approximate posterior). Factors are labeled intuitively in terms of the parameters encoding the associated probability distributions (on the upper left). The circled numbers correspond to the messages that are passed from nodes to edges (each label is placed on the edge that carries the message from its node). These correspond to the messages implicit in the belief updates (on the lower left).





Advantages of Sensory Maps

Mapped sensory processing areas are complex structures, and complex structures are unlikely to arise and persist unless they confer an adaptive advantage. Sensory maps are also evolutionarily ancient: they appear in nearly all animal species and for nearly all sensory systems. Scientific work has identified several advantages of sensory maps:

  1. Filling in: When sensory stimulation is organized in the brain in a topographic pattern, the animal can “fill in” missing information using neighboring regions of the map, since neighboring regions are usually activated together when all information is present. Loss of signal from one area can be compensated by adjacent areas of the brain if those areas serve physically related parts of the periphery. This is evident in animal studies in which neurons bordering a lesioned (damaged) brain area that once processed the sense of touch in a hand recover processing of that sensory region, because they already handle information from adjacent areas of the hand.
  2. Lateral inhibition: Lateral inhibition is an organizing principle that provides contrast in many systems, from the visual to the somatosensory. If adjacent areas inhibit one another, then stimulation that activates one brain region simultaneously suppresses the adjoining regions, creating sharper resolution between stimuli. This is evident in the human visual system, where sharp lines can be detected between bright and dark regions because cells inhibit their neighbors (see the sketch after this list).
  3. Summation: Topographic organization also allows related stimuli to be summed in the neural assessment of sensory information. Examples include the neural summation of tactile inputs and the summation of visual inputs under low light.
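To make the lateral-inhibition point concrete, here is a minimal 1-D sketch in Python. The center-surround kernel and its weights are invented for illustration; nothing here models a specific neural circuit:

```python
import numpy as np

# A step edge in brightness: dark on the left, bright on the right.
signal = np.concatenate([np.zeros(20), np.ones(20)])

# Center-surround kernel: self-excitation minus inhibition from neighbors.
kernel = np.array([0.0, 1.0, 0.0]) - 0.3 * np.array([1.0, 0.0, 1.0])

response = np.convolve(signal, kernel, mode="same")

# The response undershoots just before the edge and overshoots just after it,
# so the boundary stands out more sharply than in the raw signal.
print(response[17:24].round(2))   # [ 0.   0.  -0.3  0.7  0.4  0.4  0.4]
```

The same principle, applied in two dimensions with broader kernels, gives the kind of contrast enhancement between bright and dark regions described in item 2 above.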



Emergent Theories of Learning

How can this system of edge-filters and shape pattern dictionaries develop automatically?

It appears that it self-organizes based on some simple local rules, very much like a cellular automaton. This was recognized more than a decade ago. The short paper that really put it together for me is called “A Self-Organizing Neural Network Model of the Primary Visual Cortex”. The key idea is rather simple. Take a prototypical 2D laminar neural network like the simple cortical model discussed above. A 2D input pattern flows into the neural array from the bottom, and each neuron forms a set of connections across the input grid in something like a circular pattern centered on the neuron, with synaptic weights falling off in a Gaussian-like profile.

Mathematically, each neuron computes something like a dot product of a local patch of the input with its synaptic weights. If you apply an appropriately simple Hebbian learning rule to a random initial configuration of this system (synaptic weights increase in proportion to presynaptic-postsynaptic coincidence), then these neurons will evolve to represent frequently occurring input patterns.
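As a minimal sketch of that rule, here is one neuron learning a recurring input pattern. The patch size, learning rate, and the added weight-normalization step (used to keep the weights bounded) are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
patch_size = 25                       # a 5x5 input patch, flattened
w = rng.random(patch_size)
w /= np.linalg.norm(w)

eta = 0.1                             # learning rate
template = rng.random(patch_size)     # a "frequently occurring" input pattern

for step in range(500):
    x = template + 0.1 * rng.standard_normal(patch_size)  # noisy presentation
    y = w @ x                         # postsynaptic activity: weighted sum of the patch
    w += eta * y * x                  # Hebbian update: pre * post coincidence
    w /= np.linalg.norm(w)            # simple homeostatic normalization

# After training, the weight vector aligns with the recurring input pattern.
print(np.dot(w, template / np.linalg.norm(template)))    # close to 1.0
```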

But now it gets more interesting: if you add a set of positive and negative lateral connections between neurons within a layer, you can get more complex, cellular-automaton-like behavior. More specifically, if the random lateral connections are drawn from a distribution in which short-range connections tend to be positive and long-range connections tend to be negative, the neurons will evolve into small column-like pockets: neurons are mutually supportive within columns but antagonistic between columns. This representation also performs a nice segmentation of the hypothesis space. The model developed in the paper, RF-LISSOM, and its later follow-ups provide a very convincing account of how V1’s features can be fully explained by the evolution of basic neurons with simple local Hebbian learning rules and a couple of homeostatic self-regulating principles.
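A toy version of that lateral scheme, on a 1-D sheet of neurons with invented Gaussian widths and gains (the actual RF-LISSOM parameters and dynamics differ), looks like this:

```python
import numpy as np

n = 60
pos = np.arange(n)
d = np.abs(pos[:, None] - pos[None, :])     # pairwise distances on a 1-D sheet

excite = np.exp(-(d / 2.0) ** 2)            # short-range, positive
inhibit = 0.3 * np.exp(-(d / 10.0) ** 2)    # long-range, negative
L = excite - inhibit                        # lateral weight matrix

rng = np.random.default_rng(2)
drive = rng.random(n)                       # noisy afferent input

a = drive.copy()
for _ in range(20):
    # settle under afferent drive plus lateral excitation/inhibition
    a = np.clip(drive + 0.5 * (L @ a), 0.0, 1.0)

# Activity tends to concentrate into separated pockets ("columns") while the
# zones between them are suppressed; the exact pattern depends on the seed.
print((a > 0.9).astype(int))
```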

Can such a simple emergent model explain the rest of the ventral visual pathway?

It seems likely. If you took the output of V1 and fed it to another layer built of the same adapting neurons, you’d probably get something like V2. It wouldn’t be the exact softmax operation described by Poggio et al., but that is something of an idealization anyway. The V2 layer would organize into micro-columns tuned to frequent output patterns of V1. The presence of an edge of a particular orientation is a good predictor of an edge of the same orientation activating somewhere nearby, both because the edge may be long and because, as the image moves across the visual stream, edges will move to nearby neuron populations. It thus seems likely that V2 neurons would self-organize into micro-columns tuned to edges of a particular orientation anywhere in their field, similar to the softmax description. Higher up the hierarchy, the tuning would become more complex, with micro-columns adapting to represent more complex common collections of edges.

Feedback

The self-organizing model discussed so far is missing one important type of connection pattern found in the real cortex: feedback connections, which flow from higher regions back down toward the lower regions close to the input. These feedback connections tend to mirror the feedforward connections that bring processed visual input up the hierarchy, but they flow in the opposite direction. They seem quite natural if we think of a pathway such as the visual system as a connected 3D region instead of a collection of 2D patches. If you took the various 2D patches of V1, V2, and so on and stacked them on top of each other, you’d get a tapered blob, something like a truncated pyramid: wide at the base (V1, the largest region) and narrowing as the layers shrink going up the hierarchy. If you arranged the visual stream into such a 3D volume, the connections could be described by a simple 3D distribution. Visual input comes in from the bottom and flows up the hierarchy, but information can also flow laterally within a layer and back down from higher to lower layers.

What is the role of the downward flowing feedback connections?

They help reinforce stable hypotheses in the system. An initial flow of information up the hierarchy may lead to numerous competing theories about the scene. Feedback connections tracing the same paths as the inputs will tend to bias the supportive components. For example, if the higher regions are expecting to see a building, that expectation flows down the feedback connections to bias neurons representing appropriate collections of right angles, corners, horizontal and vertical edges, and numerous other unnameable statistical observations that lead to the building conclusion. If these supporting beliefs are strong enough relative to their competition, the ‘building’ pathway will form a stable, self-reinforcing loop. This is essentially similar to Bayesian belief propagation, without necessarily simulating it exactly (which could be burdensome).
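In Bayesian terms, the effect of that feedback can be caricatured as a top-down prior multiplying bottom-up evidence. The hypotheses and numbers below are invented purely for illustration:

```python
import numpy as np

hypotheses = ["building", "tree"]
prior_topdown = np.array([0.7, 0.3])   # higher region expects a building
likelihood = np.array([0.4, 0.6])      # low-level evidence weakly favors "tree"

# Posterior: top-down expectation times bottom-up evidence, renormalized.
posterior = prior_topdown * likelihood
posterior /= posterior.sum()

# The feedback-biased pathway wins despite weakly contrary evidence.
print(dict(zip(hypotheses, posterior.round(2))))   # {'building': 0.61, 'tree': 0.39}
```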

It’s also interesting to note that the feedback connections perform something similar to backpropagation. When a neuron fires, the Hebbian learning rule up-regulates any recently active synapses that contributed. With feedback connections, this neuron also sends a signal back down to the lower-layer input neurons. As the system evolves into mutually supportive pathways, the feedback signal comes to be closely associated with the input neurons that activated the higher-level synapses. The feedback signal thus traces back the input and reinforces the contributing connections.
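One way to caricature that trace-back, assuming a single linear higher-level neuron whose feedback is distributed back down its own weights (an illustrative simplification, not a claim about actual cortical circuitry):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.random(8)        # lower-layer input activity
w = rng.random(8)        # feedforward weights onto one higher-level neuron

y = w @ x                # feedforward pass: higher-level activation
feedback = y * w         # feedback routed back down along each synapse

eta = 0.05
w += eta * feedback * x  # Hebbian update gated by the returning feedback:
                         # largest where input AND feedback coincide
print(w.round(3))
```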

From cortical maps to a full intelligence engine

If you have read this far, and better yet have read my other short pieces about the brain or the literature they derive from, you have a pretty good idea of how self-organizing hierarchical cortical maps work in theory, and you understand their great power. But there is still a long way to go from there to a full-scale intelligence engine such as a brain. In theory, one of these hierarchical inference networks, operating in reverse flow, can also translate high-level abstract commands into detailed motor control sequences, very much like the hierarchical sensory input stream but in reverse. Hawkins gives some believable accounts of how such mechanisms could work.

What’s missing then? A good deal. There is much more to the brain than a hierarchical probabilistic knowledge engine, although that certainly is a core component. Anyone familiar with computer architecture would next ask: what performs data routing? This is a crucial question, because it’s pretty clear you can’t do much useful computation with a fixed topology; to run any interesting algorithms, you need some way for different brain regions to communicate with other brain regions dynamically. A fixed topology is simply not sufficient.

That functionality appears to be provided by the thalamus, one of the oldest brain regions and still part of the core networks. It is also perhaps the most important: damage to the thalamus generally results in death or coma, which is what you would expect of a major routing hub (vaguely analogous to a CPU). For example, when you focus your attention on a speaker’s words, the first stages of processing probably flow through a fixed topology of layered computation, but once those signals are translated into abstract thoughts, they need to be routed more widely to the many general cortical regions that deal with abstract thinking, and this cannot use a fixed topology.

At this apex level of the hierarchy, it doesn't much matter whether the words originated as audio signals, visual patterns, or internal monologue; they need to eventually reach the same abstract processing regions for semantic parsing, memory recall, and the general mechanisms of cognition. This requires at least some basic one-to-many and many-to-one dynamic routing. Selective attention requires similar routing.
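
Here's a rough sketch of the routing idea. RoutingHub, the region names, and the gating scheme are hypothetical stand-ins of my own, not claims about actual thalamic circuitry:

    from typing import Callable, Dict, List

    class RoutingHub:
        """Toy dynamic router: gates decide which regions hear which messages."""

        def __init__(self) -> None:
            self.regions: Dict[str, Callable[[str], None]] = {}
            self.gates: Dict[str, List[str]] = {}  # source -> allowed targets

        def register(self, name: str, handler: Callable[[str], None]) -> None:
            self.regions[name] = handler

        def attend(self, source: str, targets: List[str]) -> None:
            """Re-route on the fly - the analogue of shifting attention."""
            self.gates[source] = targets

        def send(self, source: str, message: str) -> None:
            for target in self.gates.get(source, []):
                self.regions[target](message)

    hub = RoutingHub()
    hub.register("semantics", lambda m: print("semantics parsing:", m))
    hub.register("memory", lambda m: print("memory recall on:", m))

    # Whether the words arrived as audio or as text, they get routed to the
    # same abstract regions: one-to-many routing through the hub.
    hub.attend("auditory", ["semantics", "memory"])
    hub.send("auditory", "the spoken sentence")
    hub.attend("visual", ["semantics", "memory"])
    hub.send("visual", "the same sentence, read")

The key property is that the topology is data: attend() rewires the network at runtime, which is exactly what a fixed feedforward hierarchy cannot do.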

The visual system performs selective attention and dynamic routing mechanically, by actually moving the eye and thus the fovea, but the same mechanism is needed in many domains where the mechanical trick doesn't apply. For instance, your body's somatosensory network (the sense of touch) also supports selective attention – focusing a large set of general processing resources on a narrow input domain – and this suggests a neural mechanism of dynamic routing.

Internal Monologue and the Core Routing Network

Venturing out of the realm of the current literature and into my own theoretical space, I have the beginnings of a meta-theory of the brain's higher-level organization, centered on a serial core routing network. We tend to think of the brain as massively parallel, which is true at the level of the cortical hierarchy described earlier. But at the highest level of organization, at the apex of the cortical pyramid, there is a network – involving largely the hippocampus, cortex, and thalamus – which is functionally serial. We have a serial stream of consciousness, which makes some sense for coordinating actions, producing language as a serial audible stream, and so on. Our inner monologue is essentially serial at the conscious level.

Note that having a serial top-level network is not in any sense preordained. We could have evolved vocal cords that encoded two or more independent audio streams and had a community of voices echoing in our heads. Indeed, the range of human mind space already encompasses such variants on the fringe.

In my current simple model, the (typically) serial inner core routing network functions mostly as a simple broadcast network connecting the highest layers of the cortex, the hippocampus, and the thalamus. This core network maps onto both the task-positive and task-negative networks in the neuroscience literature.
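
A bare-bones sketch of what "functionally serial broadcast" means here; the subscriber names and the example thoughts are placeholders:

    from collections import deque

    # Each subscriber is internally massively parallel; the serial part is
    # only the one-at-a-time broadcast of thoughts to all of them.
    subscribers = {
        "cortex": lambda t: print(f"cortex elaborates '{t}'"),
        "hippocampus": lambda t: print(f"hippocampus indexes '{t}'"),
        "thalamus": lambda t: print(f"thalamus routes '{t}'"),
    }

    stream = deque(["see beach", "recall trip", "plan reply"])  # inner monologue

    while stream:
        thought = stream.popleft()   # strictly one thought at a time
        for react in subscribers.values():
            react(thought)           # broadcast to the whole core network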

What types of messages are broadcast on the core routing network? Thoughts, naturally.

The neurotypical experience of a serial inner monologue is the reverberation of symbolic thoughts activating the speech and auditory pathways. Most of us first learn to understand and then speak words through the audio interface, and only later learn to read. As you read these words, you are probably hearing a voice in your head – your projection of my voice, to be exact. In a literal sense, I am programming your mind right now. But don't be alarmed: this happens whenever you read and understand anything.

Perhaps if one learned words first through the visual senses and only later learned to understand speech, one would 'see' words in the mind's eye. I'm not aware of any such examples; this is just a thought experiment.

It's difficult to imagine pre-linguistic thoughts: raw thoughts that are not connected to words. It's difficult to project down into that more constrained, primitive realm of mindspace. Certainly some of our thought streams are directly experiential (such as recalling a visual and tactile memory of walking barefoot on a sunny tropical beach), but it's difficult to imagine a long period of thinking constrained to this domain alone.

The core routing network allows us to take words and translate them into patterns of mental activation which simulate the state of mind that originally generated the words. This sounds interesting; it's probably worth reading again.

Imagine the following in a little more detail:

You are walking on a deserted jungle beach somewhere in Costa Rica. The sun is blazing but a slight breeze keeps the air pleasant. Your feet sink gently into the wet sand as small waves lap at your ankles. A lone mosquito nibbles on your shoulder and you quickly brush it off.

Those are just words, but in reading them you recreate that scene in your mind as the words activate specific high-level cortical patterns, which cascade down into the lower levels of the sensory and motor pyramids using the feedback path discussed earlier. The pattern associations were learnt long ago and have been reinforced through numerous rapid replays coordinated by the hippocampus during sleep. If you were to look at your thought patterns as visualized by a high-resolution scanner, you would see a trace very similar to the trace of your brain actually experiencing the described scene. It's different, of course – not quite as detailed, and the task-negative network does not activate motor outputs – but at the neural level, thinking about performing an action is just a tad shy of performing it.

This is the power of words.

So for a brain architecture, the high-level recipe looks something like this: take a hierarchical feedforward-and-feedback (bidirectional) multi-sensory and motor cortex, combine it with a hippo-cortical-thalamic core routing network, add an offline selective memory optimization process (sleep) and finally some form of widely parallel goal-directed search operating in compressed cortical symbolic space, and you have something interesting. This is of course an oversimplification of the brain, which has many more major circuits and pathways, but we don't need all of the brain's specific complexity. What's more important are the general mechanisms underlying emergent complexity – such as learning.
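
Rendered as a schematic, with every class a named placeholder of mine rather than a working component, the recipe might look like this:

    class CorticalHierarchy:      # feedforward + feedback sensory/motor maps
        def infer(self, inputs): ...
        def generate(self, intent): ...  # reverse flow: commands -> motor sequences

    class CoreRoutingNetwork:     # hippo-cortical-thalamic serial broadcast
        def broadcast(self, thought): ...

    class SleepConsolidation:     # offline selective memory optimization
        def replay(self, memories): ...

    class GoalDirectedSearch:     # parallel search in compressed symbol space
        def plan(self, goal, world_model): ...

    class IntelligenceEngine:
        """Just the wiring diagram of the recipe - every part is a stub."""
        def __init__(self):
            self.cortex = CorticalHierarchy()
            self.core = CoreRoutingNetwork()
            self.sleep = SleepConsolidation()
            self.search = GoalDirectedSearch()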

Of course, the devil is in the details, but it looks like the main components of a brain architecture are within reasonable reach this decade. I see the outline of a next step where you take the components discussed above and integrate them into an AIXI-like search optimizer – but crucially, one searching within the extremely compressed abstract symbolic space at the apex of the cortical pyramid.

Simulating and searching in such extraordinarily compressed spaces is the key to computational efficiency in the supremely complex realities the brain operates in; AIXI can never scale by using actual full-blown computer programs as the basis for simulation. The key lesson of the cortex is that intelligence relies on compressing and abstracting away nearly everything. Efficiency comes from destroying most of the information.
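
A back-of-envelope illustration with invented numbers, just to show the scale of the win from compression:

    raw_actions_per_step = 10**6    # e.g. fine-grained sensory/motor outcomes
    symbolic_actions_per_step = 10  # e.g. abstract choices like 'grasp the cup'
    depth = 5                       # planning horizon in steps

    print(f"raw search tree:      {raw_actions_per_step ** depth:.1e} nodes")
    print(f"symbolic search tree: {symbolic_actions_per_step ** depth:.1e} nodes")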

