What Has AI Given to Neuroscience?

Yuri Barzov
Apr 8, 2018 · 6 min read

The current hype around AI originates from successful industrial applications of deep learning models, which “are engineered systems inspired by the biological brain (whether the human brain or the brain of another animal),” as Ian Goodfellow, Yoshua Bengio and Aaron Courville put it in their renowned textbook Deep Learning.

While neuroscience provided the initial models for deep learning and keeps inspiring AI researchers, the reverse impact of AI on neuroscience is also groundbreaking. It is especially interesting to explore this unfolding breakthrough now, while it is already happening but has neither reached mainstream status nor provoked any significant hype.

Let’s have a look.

Our brain performs statistical computations

Over the past decade, many groups of researchers have discovered through human imaging studies that the activity of the human brain may not only be modelled with Bayesian statistical algorithms, but that the brain actually performs calculations in accordance with the Bayesian model of an ideal observer. Here are just a few sample works.

“We demonstrate that the auditory system, supported by a network of auditory cortical, hippocampal, and frontal sources, continually scans the environment, efficiently represents complex stimulus statistics, and rapidly (close to the bounds implied by an ideal observer model) responds to emergence of regular patterns, even when these are not behaviorally relevant. Neuronal activity correlated with the predictability of ongoing auditory input, both in terms of deterministic structure and the entropy of random sequences, providing clear neurophysiological evidence of the brain’s capacity to automatically encode high-order statistics in sensory input,” a group of UK researchers stated in a January 2016 paper.
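As an illustrative sketch (my own toy model, not the study’s), an ideal observer detecting the emergence of a regular pattern is just repeated Bayesian updating: each observation multiplies the prior odds on “pattern present” by a likelihood ratio.

```python
# Toy Bayesian ideal observer (illustrative; the probabilities 0.9 and
# 0.5 are made-up assumptions, not values from the cited paper).
# Two hypotheses about a stimulus stream: "regular" (a repeat occurs
# with p=0.9) vs "random" (a repeat occurs with p=0.5).

def update(prior_regular, observation_is_repeat,
           p_repeat_regular=0.9, p_repeat_random=0.5):
    """One step of Bayes' rule: posterior ∝ likelihood × prior."""
    like_reg = p_repeat_regular if observation_is_repeat else 1 - p_repeat_regular
    like_rnd = p_repeat_random if observation_is_repeat else 1 - p_repeat_random
    unnorm_reg = like_reg * prior_regular
    unnorm_rnd = like_rnd * (1 - prior_regular)
    return unnorm_reg / (unnorm_reg + unnorm_rnd)

belief = 0.5  # start undecided
for obs in [True, True, True, True]:  # a run of repeats emerges
    belief = update(belief, obs)
    print(f"P(regular) = {belief:.3f}")
```

A few repeated observations are enough for the posterior to swing decisively toward “regular” — the same rapid response to emerging structure the quote describes.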

Our brain measures confidence

“We propose that humans possess an accurate sense of confidence that allows them to evaluate the reliability of their knowledge, and use this information to strike the balance between prior knowledge and current evidence. Our functional MRI data suggest that a frontoparietal network implements this confidence-weighted learning algorithm, acting as a statistician that uses probabilistic information to estimate a hierarchical model of the world,” French neuroscientists Florent Meyniel and Stanislas Dehaene expand on the idea in their paper from May 2017.
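The “balance between prior knowledge and current evidence” that Meyniel and Dehaene describe has a standard formal reading (my illustrative sketch here assumes Gaussian beliefs, which is not necessarily the paper’s model): each source of information is weighted by its confidence, i.e. its precision (inverse variance).

```python
# Confidence-weighted combination of a prior belief and new evidence
# under a Gaussian assumption (illustrative sketch, not the paper's
# exact algorithm): the more confident source dominates the result.

def combine(prior_mean, prior_var, obs_mean, obs_var):
    w_prior = 1 / prior_var   # precision = confidence in the prior
    w_obs = 1 / obs_var       # precision = confidence in the evidence
    mean = (w_prior * prior_mean + w_obs * obs_mean) / (w_prior + w_obs)
    var = 1 / (w_prior + w_obs)   # combined belief is more confident than either
    return mean, var

# Weak prior (var=1.0) meets strong evidence (var=0.25):
mean, var = combine(0.0, 1.0, 10.0, 0.25)
print(mean, var)
```

Here the evidence is four times more precise than the prior, so the combined estimate lands four-fifths of the way toward it.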

Our brain detects unexpected uncertainty

Neural implementation of Bayesian learning would require separate encoding of three levels of uncertainty: expected uncertainty (risk), estimation uncertainty (credibility), and unexpected uncertainty (prediction error, surprise). Human imaging studies appear to be consistent with this view.
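To make the three levels concrete, here is one possible toy separation (my own illustrative mapping, not the papers’ exact models), using a Beta posterior over a binary outcome probability:

```python
import math

# Three levels of uncertainty for a Bernoulli outcome with a
# Beta(a, b) posterior over its probability p (illustrative sketch):
#   expected uncertainty (risk)       -> entropy of the predicted outcome
#   estimation uncertainty            -> variance of the posterior over p
#   unexpected uncertainty (surprise) -> surprisal of the actual outcome

def uncertainties(a, b, outcome):
    p = a / (a + b)  # posterior mean estimate of p
    risk = -(p * math.log2(p) + (1 - p) * math.log2(1 - p))  # bits
    estimation = (a * b) / ((a + b) ** 2 * (a + b + 1))      # Beta variance
    surprise = -math.log2(p if outcome else 1 - p)           # bits
    return risk, estimation, surprise

# After seeing 8 successes and 2 failures, an unlikely failure arrives:
risk, estimation, surprise = uncertainties(8, 2, outcome=0)
print(risk, estimation, surprise)
```

Note that the three numbers can move independently: risk stays moderate, estimation uncertainty shrinks as data accumulate, while surprise spikes only when an improbable outcome actually occurs.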

“Activation of the amygdala-hippocampus complex to novel images in a learning context may be conjectured to reflect unexpected uncertainty [37], [38]. Neural correlates with the Bayesian learning rate have been identified in the precuneus and anterior cingulate cortex [4], [39]. Because of the close relationship between the Bayesian learning rate and unexpected uncertainty (effects of risk and estimation uncertainty on the learning rate operate through unexpected uncertainty, as explained before), these neural signals could as well reflect unexpected uncertainty (changes in the likelihood that outcome probabilities have jumped),” Elise Payzan-LeNestour and Peter Bossaerts reported in their research paper in 2011.

“Here, participants performed a decision-making task while undergoing functional magnetic resonance imaging (fMRI), which, in combination with a Bayesian model-based analysis, enabled each form of uncertainty to be separately measured. We found representations of unexpected uncertainty in multiple cortical areas, as well as the noradrenergic brainstem nucleus locus coeruleus,” an expanded group of researchers including authors of the above paper stated in their publication in 2013.

Our brain feeds on unexpected uncertainty

Karl Friston, a British neuroscientist and an authority on brain mapping, introduced and developed the free energy principle as a candidate for a unified brain theory in a number of works from 2007 to 2010 and beyond. Free energy makes unexpected uncertainty (prediction error, surprise, whatever you call it) quantifiable. The brain lives by minimising free energy; it thus feeds on unexpected uncertainty.

“The free-energy principle is a simple postulate with complicated implications. It says that any adaptive change in the brain will minimize free-energy. This minimisation could be over evolutionary time (during natural selection) or milliseconds (during perceptual synthesis). In fact, the principle applies to any biological system that resists a tendency to disorder; from single-cell organisms to social networks.” Friston wrote in his paper in 2009.

In the simplest terms, free energy is just the amount of prediction error. The motivation for the free-energy principle “rests upon the fact that self organising biological agents resist a tendency to disorder and therefore minimize the entropy of their sensory states.”

“Because entropy is the long-term average of surprise, agents must avoid surprising states (e.g. a fish out of water). But there is a problem; agents cannot evaluate surprise directly; this would entail knowing all the hidden states of the world causing sensory input. However, an agent can avoid surprising exchanges with the world if it minimises its free-energy because free-energy is always bigger than surprise,” Friston continued. “Biological agents must engage in some form of Bayesian perception to avoid surprising exchanges with the world.”
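The claim that free energy is always bigger than surprise can be checked numerically in a toy model (my own illustrative numbers, not Friston’s): variational free energy equals surprise plus a non-negative KL divergence, and touches surprise exactly when the approximate posterior matches the true one.

```python
import math

# Toy check that variational free energy upper-bounds surprise.
# Two hidden states z, one observation x (all numbers are made up).

prior = {"z1": 0.5, "z2": 0.5}        # p(z)
likelihood = {"z1": 0.9, "z2": 0.1}   # p(x | z)

evidence = sum(prior[z] * likelihood[z] for z in prior)  # p(x) = 0.5
surprise = -math.log(evidence)

def free_energy(q):
    # F = sum_z q(z) * [log q(z) - log p(x, z)] = surprise + KL(q || p(z|x))
    return sum(q[z] * (math.log(q[z]) - math.log(prior[z] * likelihood[z]))
               for z in q if q[z] > 0)

for q1 in [0.5, 0.7, 0.9]:
    q = {"z1": q1, "z2": 1 - q1}
    print(f"q(z1)={q1}: F={free_energy(q):.4f} >= surprise={surprise:.4f}")
```

With these numbers the true posterior is q(z1) = 0.9, and at that point F collapses to the surprise itself; every other choice of q sits strictly above it. Minimising free energy is therefore a tractable stand-in for minimising the surprise the agent cannot evaluate directly.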

“In summary, the free-energy principle prescribes recognition dynamics if we specify (i) the form of the generative model used by the brain, (ii) the form of the recognition density and (iii) how its sufficient statistics are optimised.”

We encounter quite a lot of machine learning terminology in Friston’s papers. That shouldn’t be a surprise, given the influence Geoffrey Hinton had on him.

It all began with AI

“For me, and I suspect many others, Geoffrey Hinton’s ideas placed the Bayesian brain center stage in a tangible and formal fashion. In more general terms, Bayesian formulations of problems in machine learning provided an inescapable metaphor for neuronal computations. Notions like the Helmholtz machine and the central role of generative models not only became a natural way of thinking about the brain but also prescribed a principled approach to data analysis, particularly in the context of the ill posed problems we were dealing with at that time,” Karl Friston recalled in 2012, looking back at the early 1990s.

It doesn’t end in neuroscience…

Are you ready to learn what life is? Hint: it feeds on unexpected uncertainty. No kidding!

Erwin Schrödinger, who won the Nobel Prize for his work in quantum physics and famously devised the Schrödinger’s cat thought experiment, wrote his book What Is Life? in 1944. In it he presented the idea that life avoids a rapid decay to an inert state of equilibrium (the decay to maximum entropy, in other words, death) because it “feeds on negative entropy.”

“What an organism feeds upon is negative entropy. Or, to put it less paradoxically, the essential thing in metabolism is that the organism succeeds in freeing itself from all the entropy it cannot help producing while alive,” he wrote in Chapter 6 of his book, further explaining that since “entropy = k log D, where k is the so-called Boltzmann constant (= 3.2983 × 10⁻²⁴ cal./°C), and D a quantitative measure of the atomistic disorder of the body in question,” it follows that -(entropy) = k log (1/D).
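Schrödinger’s sign flip is just a logarithm identity, -(k log D) = k log(1/D); a two-line numerical check (with a made-up disorder value D, since D is only schematic in the book):

```python
import math

# Schrödinger's formula from What Is Life?: entropy = k log D,
# so negative entropy = k log(1/D) = -(entropy).
k = 3.2983e-24  # cal./°C, the Boltzmann constant value Schrödinger quotes
D = 1e6         # made-up measure of atomistic disorder, for illustration

entropy = k * math.log(D)
neg_entropy = k * math.log(1 / D)
print(entropy, neg_entropy)  # equal magnitudes, opposite signs
```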

In a later note to Chapter 6 he clarified: “The remarks on negative entropy have met with doubt and opposition from physicist colleagues. Let me say first, that if I had been catering for them alone I should have let the discussion turn on free energy instead. It is the more familiar notion in this context. But this highly technical term seemed linguistically too near to energy for making the average reader alive to the contrast between the two things.”

Does life feed on unexpected uncertainty?

Karl Friston, with a group of researchers, has recently answered the question in the title of Schrödinger’s book: “The free-energy principle (FEP) is a formal model of neuronal processes that is widely recognised in neuroscience as a unifying theory of the brain and biobehaviour. More recently, however, it has been extended beyond the brain to explain the dynamics of living systems, and their unique capacity to avoid decay.”

We avoid death by feeding on surprise. Surprised? That’s what AI has already given to neuroscience. What will be the next hype?