Learning: Brain vs xNN

"The neural and cognitive architecture for learning from a small sample" is a nice article I would like to recommend. It highlights how human learners avoid the generalization issues found in machine learning and proposes a general model explaining how the brain may simplify complex problems. Its key points:
Synergy between cognitive functions and reinforcement learning allows simplification.
Recurrent loops between basal ganglia and cortex form the neuroanatomical substrate.
Neural oscillatory frequencies provide the computational means to reach convergence.

Artificial intelligence algorithms are capable of fantastic exploits, yet they are still grossly inefficient compared with the brain’s ability to learn from few exemplars or solve problems that have not been explicitly defined. What is the secret that the evolution of human intelligence has unlocked? Generalization is one answer, but there is more to it. The brain does not directly solve difficult problems; it is able to recast them into new, more tractable ones. Here, we propose a model whereby higher cognitive functions profoundly interact with reinforcement learning to drastically reduce the degrees of freedom of the search space, simplifying complex problems and fostering more efficient learning.

This week, the opposite approach is taken: the comparison is made from the point of view of the neural network.

How learning unfolds in the brain: toward an optimization view

How do changes in the brain lead to learning?
To answer this question, consider an artificial neural network (ANN), where learning proceeds by optimizing a given objective or cost function. This “optimization framework” may provide new insights into how the brain learns, as many idiosyncratic features of neural activity can be recapitulated by an ANN trained to perform the same task.
Nevertheless, there are key features of how neural population activity changes throughout learning that cannot be readily explained in terms of optimization and are not typically features of ANNs.
Here we detail three of these features:
(1) the inflexibility of neural variability throughout learning,
(2) the use of multiple learning processes even during simple tasks, and
(3) the presence of large task-nonspecific activity changes.
We propose that understanding the role of these features in the brain will be key to describing biological learning using an optimization framework.
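To make the "optimization framework" concrete, here is a minimal sketch of its three ingredients named above and in the conclusions: an objective (cost) function, a learning rule, and a network architecture. Everything in it is illustrative (a single linear layer trained by gradient descent on mean squared error), not a model from either article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Architecture: a single linear layer mapping 3 inputs to 1 output.
W = rng.normal(size=(1, 3))

# Task (illustrative): recover a fixed target mapping from examples.
W_true = np.array([[2.0, -1.0, 0.5]])
X = rng.normal(size=(3, 200))
Y = W_true @ X

def cost(W):
    """Objective function: mean squared error over the dataset."""
    return np.mean((W @ X - Y) ** 2)

# Learning rule: gradient descent, i.e. follow the cost downhill.
lr = 0.05
losses = [cost(W)]
for _ in range(200):
    grad = 2 * (W @ X - Y) @ X.T / X.shape[1]  # gradient of the MSE
    W -= lr * grad
    losses.append(cost(W))

print(f"initial cost: {losses[0]:.4f}, final cost: {losses[-1]:.8f}")
```

In this framework, "learning" is fully described by those three choices; the articles' point is that several features of biological neural activity do not fit so neatly into such a description.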

The conclusions are:

To learn, the brain must discover the changes in neural activity that lead to improved behavior. How can we begin to understand the complex processes in the brain that drive these changes? The optimization framework for learning suggests that changes in neural activity during learning might be understood as the natural outcome of an objective function, learning rule, and network architecture.
As we propose here, applying the optimization framework to biological networks requires us to focus on the key ways in which neural activity in the brain differs from network activity in typical artificial networks. In particular, we have presented three key observations about learning from studies of neural population activity that we believe need to be accounted for by models of learning in the brain: (1) the inflexibility of neural variability throughout learning, (2) the use of multiple learning processes even during simple tasks, and (3) the presence of large task-nonspecific activity changes. These challenges were readily apparent when considering neural population activity but would have been harder to detect from the vantage points either of synaptic weight changes or single-unit tuning properties.

The optimization framework is a promising starting point for understanding learning in the brain. But as we have seen, even in relatively simple tasks, changes in neural population activity during learning are not always easy to interpret as an optimization process. This difficulty may be even more salient in the context of understanding how more complex, naturalistic tasks are learned. Moving forward, accounting for the features of population activity described here into new computational models of learning in the brain and new experimental designs may be a useful next step for reverse engineering the process by which the brain learns.
