Unlocking the Future of AI: Active Inference vs. LLMs

Author: Denise Holt

November 16, 2023

Summary

In a world where AI reigns supreme, a question looms among the tech elite: are we on the cusp of true intelligence, or merely toying with sophisticated text generators? As we marvel at the prowess of Large Language Models (LLMs) such as ChatGPT and Claude, it’s time to investigate further and seek answers. Beyond the hype and fascination lies a catalyst for…

Related Articles

Syllabus: Spinning Up in Active Inference and the Free Energy Principle, by Jared Tumiel

A list of materials that provides the background needed to understand the full Free Energy Principle (and Active Inference), and that also functions as a reasonably complete map of the territory, so readers can quickly see what the field encompasses and how its parts relate. The syllabus is designed for the kinds of people who end up interested in the FEP, so it offers different starting points and different levels of technical detail.

Benchmarking Predictive Coding Networks Made Simple

VERSES AI Research Blog – “Artificial intelligence (AI), as exemplified by large models such as ChatGPT, is one of the key technologies of this century. While most AI breakthroughs rely on deep neural networks trained via backpropagation on GPUs, this approach has several drawbacks, the main one being the need of a large amount of computational power. This makes this technology less useful locally in low-power edge devices, less democratic, leaves a substantial carbon footprint, and does not store information similarly to how biological systems do. How can we go past this computational wall, and develop systems that reason using an efficiency comparable to that of the human brain? A promising solution is to move from GPUs — general purpose machines not specifically designed for AI — towards specialized hardware, designed to efficiently perform tensor operations and circumvent the Von Neumann bottleneck — the separation between memory and processing units. While it is still unclear which kind of hardware will be used in the future to address this problem, there is a large consensus that transitioning to new hardware will require transition to new training algorithms due to two central issues of backpropagation, that are the requirement of sequential forward and backward passes, and the need to analytically compute gradients of a global cost to perform a synaptic update. These features demand digital hardware architectures that can precisely match forward and backward passes in a low-noise environment, as even minor fluctuations can propagate numerical errors that can alter the final performance of the model. Thus, there is an urgent need for alternative, hardware-friendly, neuroscience-inspired learning algorithms that would allow us to reach brain-like efficiency.”
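The post contrasts backpropagation's global, sequential error signal with local, neuroscience-inspired learning rules such as predictive coding. As one way to make that contrast concrete, below is a minimal predictive-coding-style sketch in NumPy: hidden activities are inferred by descending a local energy, and each weight matrix is then updated from purely local prediction errors. The network sizes, learning rates, and toy task here are illustrative assumptions, not taken from the benchmark described in the post.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 3-layer predictive coding network (layer sizes are arbitrary choices).
sizes = [2, 8, 1]  # input, hidden, output
W = [rng.normal(0, 0.1, (sizes[i + 1], sizes[i])) for i in range(2)]

def f(x):
    return np.tanh(x)

def df(x):
    return 1.0 - np.tanh(x) ** 2

def pcn_step(x_in, y_target, W, n_infer=20, lr_x=0.1, lr_w=0.01):
    """One training step: infer hidden activity, then update weights locally."""
    # Clamp input and output layers; initialize the hidden layer feedforward.
    x = [x_in, f(W[0] @ x_in), y_target]
    for _ in range(n_infer):
        # Prediction errors are purely local quantities at each layer.
        e1 = x[1] - W[0] @ x[0]
        e2 = x[2] - W[1] @ f(x[1])
        # Hidden activity descends the energy using only adjacent errors.
        x[1] += lr_x * (-e1 + df(x[1]) * (W[1].T @ e2))
    # Hebbian-style weight updates: each uses one error and one activity.
    e1 = x[1] - W[0] @ x[0]
    e2 = x[2] - W[1] @ f(x[1])
    W[0] += lr_w * np.outer(e1, x[0])
    W[1] += lr_w * np.outer(e2, f(x[1]))
    return 0.5 * (e1 @ e1 + e2 @ e2)  # free-energy proxy for monitoring

# Train on a toy mapping y = x0 * x1 (illustrative only).
for step in range(2000):
    x_in = rng.uniform(-1, 1, 2)
    energy = pcn_step(x_in, np.array([x_in[0] * x_in[1]]), W)
```

Note that no global backward pass ever runs: each weight update reads only the error and activity at its own layer, which is exactly the kind of locality the post argues makes such algorithms attractive for noisy, non-Von-Neumann hardware.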

GenRL: Multimodal-foundation world models for generalization in embodied agents

GenRL presents a novel multimodal-foundation world model for developing generalist embodied agents capable of handling diverse tasks. By grounding vision and language prompts in the world model without the need for hand-crafted reward systems, GenRL advances reinforcement learning for embodied agents. It demonstrates effective multi-task generalization and introduces a data-free policy learning approach to strengthen foundational policy learning.

Active Inference and Intentional Behaviour

This paper explores how spontaneous learning in neuronal networks and cell cultures exemplifies emergent cognition and behavior. By employing the free energy principle, the authors differentiate reactive, sentient, and intentional behaviors. Simulations illustrate these distinctions through in vitro experiments and machine learning tasks, demonstrating efficient adaptive behavior via active inference.