Karl Friston Joins VERSES as Chief Scientist to Lead New Era in Artificial Intelligence
VERSES has published its research paper on arxiv.org, exploring the applications and implications of active inference for the future of artificial intelligence.
“Designing Ecosystems of Intelligence from First Principles” lays out a vision of research and development in the field of artificial intelligence for the next decade (and beyond). Its denouement is a cyberphysical ecosystem of natural and synthetic sense-making, in which humans are integral participants—what we call “shared intelligence”.
This vision is premised on active inference, a formulation of adaptive behavior that can be read as a physics of intelligence, and which inherits from the physics of self-organization. In this context, we understand intelligence as the capacity to accumulate evidence for a generative model of one’s sensed world—also known as self-evidencing. Formally, this corresponds to maximizing (Bayesian) model evidence, via belief updating over several scales: i.e., inference, learning, and model selection. Operationally, this self-evidencing can be realized via (variational) message passing or belief propagation on a factor graph.
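To make the idea of self-evidencing concrete, here is a minimal sketch (illustrative only, not VERSES code) of exact Bayesian belief updating on a toy discrete generative model, accumulating log model evidence over a stream of observations; all states, observations, and probabilities are invented for the example:

```python
import numpy as np

# Toy generative model: hidden state s in {0, 1}, observation o in {0, 1}.
prior = np.array([0.5, 0.5])          # p(s)
likelihood = np.array([[0.8, 0.2],    # p(o | s): rows index o, columns index s
                       [0.2, 0.8]])

def update_belief(belief, obs):
    """Exact Bayesian belief update for a single observation."""
    joint = likelihood[obs] * belief  # p(o, s), unnormalized
    evidence = joint.sum()            # p(o): the model evidence for this datum
    posterior = joint / evidence
    return posterior, evidence

belief = prior
log_evidence = 0.0
for obs in [0, 0, 1, 0]:              # a toy sensory stream
    belief, ev = update_belief(belief, obs)
    log_evidence += np.log(ev)        # accumulated log model evidence

print(belief, log_evidence)
```

Self-evidencing, in this miniature setting, is just the accumulation of `log_evidence`: a model whose beliefs track its sensory stream accrues more evidence than one that does not.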

Crucially, active inference foregrounds an existential imperative of intelligent systems; namely, curiosity or the resolution of uncertainty. This same imperative underwrites belief sharing in ensembles of agents, in which certain aspects (i.e., factors) of each agent’s generative world model provide a common ground or frame of reference.
Active inference plays a foundational role in this ecology of belief sharing—leading to a formal account of collective intelligence that rests on shared narratives and goals.
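One simple way to picture belief sharing over a common factor is opinion pooling: agents that broadcast their posteriors over a shared variable can combine them multiplicatively (a product-of-experts rule). The sketch below is a toy illustration with made-up numbers, not a proposal for the actual protocol:

```python
import numpy as np

# Three agents hold beliefs over one shared discrete factor (two levels).
beliefs = np.array([[0.60, 0.40],
                    [0.70, 0.30],
                    [0.55, 0.45]])

def share_beliefs(beliefs):
    """Pool beliefs by summing log-probabilities (product-of-experts pooling)."""
    log_pooled = np.log(beliefs).sum(axis=0)
    pooled = np.exp(log_pooled - log_pooled.max())  # subtract max for stability
    return pooled / pooled.sum()

consensus = share_beliefs(beliefs)
```

Note that the pooled posterior is sharper than any individual agent's belief: sharing evidence over a common frame of reference resolves uncertainty collectively.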
We also consider the kinds of communication protocols that must be developed to enable such an ecosystem of intelligences and motivate the development of a shared hyper-spatial modeling language and transaction protocol, as a first—and key—step towards such an ecology.
Stages of development for active inference
S0: Systemic Intelligence.
This is contemporary state-of-the-art AI; namely, universal function approximation—mapping from input or sensory states to outputs or action states—that optimizes some well-defined value function or cost of (systemic) states. Examples include deep learning, Bayesian reinforcement learning, etc.
S1: Sentient Intelligence.
Sentient behavior or active inference based on belief updating and propagation (i.e., optimizing beliefs about states as opposed to states per se); where “sentient” means “responsive to sensory impressions.” This entails planning as inference;
namely, inferring courses of action that maximize expected information gain and expected value, where value is part of a generative (i.e., world) model; namely, prior preferences. This kind of intelligence is both information-seeking and preference-seeking. It is quintessentially curious.
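The trade-off between information-seeking and preference-seeking can be sketched numerically: a policy is scored by its expected information gain plus the expected (log) value of its outcomes under prior preferences. The following toy example (illustrative numbers and names only) compares an informative action against an uninformative one:

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -(p * np.log(p)).sum()

q_s = np.array([0.5, 0.5])            # current beliefs about hidden states
A_informative = np.array([[0.9, 0.1],  # p(o | s) under an informative action
                          [0.1, 0.9]])
A_flat = np.array([[0.5, 0.5],         # p(o | s) under an uninformative action
                   [0.5, 0.5]])
log_C = np.log(np.array([0.7, 0.3]))   # log prior preferences over outcomes

def policy_score(likelihood, q_s, log_C):
    """Expected information gain plus expected value (negative expected free energy)."""
    q_o = likelihood @ q_s                     # predicted outcome distribution
    cond_H = np.apply_along_axis(entropy, 0, likelihood)  # H[p(o | s)] per state
    info_gain = entropy(q_o) - (q_s * cond_H).sum()       # mutual information
    pragmatic = (q_o * log_C).sum()                       # expected log preference
    return info_gain + pragmatic

scores = [policy_score(A_informative, q_s, log_C),
          policy_score(A_flat, q_s, log_C)]
best = int(np.argmax(scores))   # the curious, informative action wins here
```

Under a flat belief, both actions have the same pragmatic value, so the epistemic term breaks the tie in favor of the action that resolves uncertainty; this is curiosity in its simplest numerical form.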
S2: Sophisticated Intelligence.
Sentient behavior—as defined under S1—in which plans are predicated on the consequences of action for beliefs about states of the world, as opposed to states per se. I.e., a move from “what will happen if I do this?” to “what will I believe or know if I do this?”. This kind of inference generally uses generative models with discrete states that “carve nature at its joints”; namely, inference over coarse-grained representations and ensuing world models. This kind of intelligence is amenable to formulation in terms of modal logic, quantum computation, and category theory. This stage corresponds to “artificial general intelligence” in the popular narrative about the progress of AI.
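The move from "what will happen if I do this?" to "what will I believe if I do this?" can be illustrated with a small recursive lookahead over belief states: each branch asks what the posterior would become under each possible observation, and averages the resulting uncertainty. This is a toy sketch of the idea (all quantities invented), not an implementation of sophisticated inference proper:

```python
import numpy as np

likelihood = np.array([[0.85, 0.15],   # p(o | s): rows index o, columns index s
                       [0.15, 0.85]])

def posterior(belief, obs):
    joint = likelihood[obs] * belief
    return joint / joint.sum()

def entropy(p):
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def expected_uncertainty(belief, depth):
    """Recurse over possible futures: 'what will I believe if I observe o?'"""
    if depth == 0:
        return entropy(belief)
    total = 0.0
    for obs in range(2):
        p_obs = (likelihood[obs] * belief).sum()  # predictive probability of o
        if p_obs > 0:
            total += p_obs * expected_uncertainty(posterior(belief, obs), depth - 1)
    return total

flat = np.array([0.5, 0.5])
u1 = expected_uncertainty(flat, 1)   # one-step lookahead over beliefs
u2 = expected_uncertainty(flat, 2)   # deeper lookahead resolves more uncertainty
```

The planning tree here is over beliefs, not states: deeper lookahead anticipates greater resolution of uncertainty, which is the hallmark of sophisticated inference.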
S3: Sympathetic (or Sapient) Intelligence.
The deployment of sophisticated AI to recognize the nature and dispositions of users and other AI and—in consequence—recognize (and instantiate) attentional and dispositional states of self; namely, a kind of minimal selfhood (which entails generative models equipped with the capacity for Theory of Mind). This kind of intelligence is able to take the perspective of its users and interaction partners—it is perspectival, in the robust sense of being able to engage in dyadic and shared perspective taking.
S4: Shared (or Super) Intelligence.
The kind of collective that emerges from the coordination of Sympathetic Intelligence (as defined in S3) and their interaction partners or users—which may include naturally occurring intelligence such as ourselves, but also other sapient artifacts. This stage corresponds, roughly speaking, to “artificial super-intelligence” in the popular narrative about the progress of AI—with the important distinction that we believe that such intelligence will emerge from dense interactions between agents networked into a hyper-spatial web. We believe that the approach that we have outlined here is the most likely route toward this kind of hypothetical, planetary-scale, distributed super-intelligence.
| Stage | Theoretical | Proof of principle | Deployment at scale | Biomimetic | Timeframe |
| --- | --- | --- | --- | --- | --- |
| S1: Sentient | Established 1,2 | Established 3 | Provisional 4 | Aspirational | 2 years |
| S2: Sophisticated | Established 5 | Provisional 6 | Aspirational | | 4 years |
| S3: Sympathetic | Provisional 7 | Aspirational | | | 8 years |
| S4: Shared | Provisional 8,9 | Aspirational | | | 16 years |
1 Friston, K.J. A free energy principle for a particular physics. doi:10.48550/arXiv.1906.10184 (2019).
2 Ramstead, M.J.D. et al. On Bayesian Mechanics: A Physics of and by Beliefs. doi:10.48550/arXiv.2205.11543 (2022).
3 Parr, T., Pezzulo, G. & Friston, K.J. Active Inference: The Free Energy Principle in Mind, Brain, and Behavior. (MIT Press, 2022). doi:10.7551/mitpress/12441.001.0001
4 Mazzaglia, P., Verbelen, T., Catal, O. & Dhoedt, B. The Free Energy Principle for Perception and Action: A Deep Learning Perspective. Entropy 24, 301, doi:10.3390/e24020301 (2022).
5 Da Costa, L. et al. Active inference on discrete state-spaces: A synthesis. Journal of Mathematical Psychology 99, 102447, doi:10.1016/j.jmp.2020.102447 (2020).
6 Friston, K.J., Parr, T. & de Vries, B. The graphical brain: Belief propagation and active inference. Network Neuroscience 1, 381-414, doi:10.1162/NETN_a_00018 (2017).
7 Friston, K.J. et al. Generative models, linguistic communication and active inference. Neuroscience and Biobehavioral Reviews 118, 42-64, doi:10.1016/j.neubiorev.2020.07.005 (2020).
8 Friston, K.J., Levin, M., Sengupta, B. & Pezzulo, G. Knowing one’s place: a free-energy approach to pattern regulation. Journal of the Royal Society Interface 12, doi:10.1098/rsif.2014.1383 (2015).
9 Albarracin, M., Demekas, D., Ramstead, M.J.D. & Heins, C. Epistemic Communities under Active Inference. Entropy 24, doi:10.3390/e24040476 (2022).
Implementation
A: Theoretical. The basis of belief updating (i.e., inference and learning) is underwritten by a formal calculus (e.g., Bayesian mechanics), with clear links to the physics of self-organization of open systems far from equilibrium.
B: Proof of principle. Software instances of the formal (mathematical) scheme, usually on a classical (i.e., von Neumann) architecture.
C: Deployment at scale. Scaled and efficient application of the theoretical principles (i.e., methods) in a real-world setting (e.g., edge computing, robotics, variational message passing on the web, etc.).
D: Biomimetic hardware. Implementations that elude the von Neumann bottleneck, on biomimetic or neuromorphic architectures. E.g., photonics, soft robotics, and belief propagation: i.e., message passing of the sufficient statistics of (Bayesian) beliefs.