Unlocking the Future of AI: Active Inference vs. LLMs

Author: Denise Holt

November 16, 2023

Summary

In a world where AI reigns supreme, a question looms among the tech elite: are we on the cusp of true intelligence, or are we merely toying with sophisticated text generators? As we marvel at the prowess of Large Language Models (LLMs) like ChatGPT, Claude, and others, it’s time to investigate further and seek answers. Beyond the hype and fascination lies a catalyst for…

Related Articles

Syllabus: Spinning Up in Active Inference and the Free Energy Principle, by Jared Tumiel

A list of materials that gives you the necessary background to understand the full Free Energy Principle (and Active Inference). It also functions as a reasonably complete map of the territory, so that people can quickly see what the field encompasses and how its parts relate. The syllabus aims to cater to the kinds of people who end up interested in the FEP, so it includes different starting points and different levels of technical detail.
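For readers starting on that syllabus, the central quantity in both frameworks is the variational free energy. The following is the standard formulation from the variational inference literature, sketched here for orientation; it is not taken from Tumiel's syllabus itself:

```latex
% Variational free energy for observations o, hidden states s,
% generative model p(o, s), and approximate posterior q(s):
F \;=\; \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big]
  \;=\; \underbrace{D_{\mathrm{KL}}\big[q(s)\,\Vert\,p(s \mid o)\big]}_{\text{inference error}} \;-\; \ln p(o)
% Because the KL term is non-negative, F upper-bounds surprise, -ln p(o).
% Minimizing F therefore improves inference; in Active Inference, action
% is selected to minimize (expected) free energy as well.
```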

MATRIXX Software Collaborates on Groundbreaking TM Forum Catalyst for Spatial Web Monetization

MATRIXX Software, a global leader in billing and monetization solutions, today announced its participation in a groundbreaking TM Forum Moonshot Catalyst project titled “Spatial Web: Open Gateway to the Immersive Future.” The project, developed in collaboration with industry leaders including Telefónica, TIM Brasil, Red Hat, Hansen Technologies and nabstract.io, will be unveiled at TM Forum’s Digital Transformation World (DTW) Ignite event, June 17–19, in Copenhagen, Denmark.

The Next Leap in AI: Why Active Inference Could Redefine Machine Intelligence

We’re entering a new era of Artificial Intelligence. While today’s large language models (LLMs) dominate the headlines, they may only represent the first chapter in the story of truly intelligent systems.

The world has been mesmerized by the rise of Large Language Models (LLMs), but what if we’ve been looking at intelligence all wrong? A deeper framework is emerging, one that doesn’t just react to data, but embodies intention, adaptation, and action.

It’s called Active Inference, and it could radically reshape the way we build machines that think, feel, and evolve.

Dr. Karl Friston, the neuroscientist behind the Free Energy Principle, thinks so. His groundbreaking work on Active Inference offers a radically new direction: one where AI doesn’t just process language or big data, but actually understands and adapts like living organisms…
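In concrete terms, "intention, adaptation, and action" is usually cashed out as an agent that updates its beliefs from observations (perception) and then selects the action that minimizes expected free energy. The sketch below is a minimal, hypothetical discrete example; it is not code from Friston's work or from VERSES, and the matrices A, B, C and all rates are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-state, two-observation, two-action world.
A = np.array([[0.9, 0.1],            # p(obs | state): likelihood
              [0.1, 0.9]])
B = np.stack([np.array([[0.9, 0.9],  # action 0 drifts toward state 0
                        [0.1, 0.1]]),
              np.array([[0.1, 0.1],  # action 1 drifts toward state 1
                        [0.9, 0.9]])])
C = np.array([0.05, 0.95])           # preferred distribution over observations

def update_belief(belief, obs):
    """Perception: Bayesian update of the state belief given an observation."""
    posterior = A[obs] * belief
    return posterior / posterior.sum()

def expected_free_energy(belief, action):
    """One-step expected free energy: risk (KL from preferences) + ambiguity."""
    qs = B[action] @ belief          # predicted next-state distribution
    qo = A @ qs                      # predicted observation distribution
    risk = np.sum(qo * (np.log(qo + 1e-12) - np.log(C)))
    ambiguity = -np.sum(qs * np.sum(A * np.log(A + 1e-12), axis=0))
    return risk + ambiguity

belief, state = np.array([0.5, 0.5]), 0
for t in range(5):
    obs = rng.choice(2, p=A[:, state])       # observe the hidden true state
    belief = update_belief(belief, obs)
    action = int(np.argmin([expected_free_energy(belief, a) for a in (0, 1)]))
    state = rng.choice(2, p=B[action][:, state])
    belief = B[action] @ belief              # propagate belief through the action
    print(f"t={t} obs={obs} action={action} belief={belief.round(2)}")
```

The loop makes the contrast with an LLM explicit: instead of mapping input text to output text, the agent maintains beliefs about a hidden world and acts on it to bring observations in line with its preferences.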

Benchmarking Predictive Coding Networks Made Simple

VERSES AI Research Blog: “Artificial intelligence (AI), as exemplified by large models such as ChatGPT, is one of the key technologies of this century. While most AI breakthroughs rely on deep neural networks trained via backpropagation on GPUs, this approach has several drawbacks, the main one being the need for a large amount of computational power. This makes the technology less useful on low-power edge devices, less democratic, leaves a substantial carbon footprint, and does not store information the way biological systems do.

How can we get past this computational wall and develop systems that reason with an efficiency comparable to that of the human brain? A promising solution is to move from GPUs, general-purpose machines not specifically designed for AI, toward specialized hardware designed to efficiently perform tensor operations and circumvent the von Neumann bottleneck, the separation between memory and processing units.

While it is still unclear which kind of hardware will be used in the future to address this problem, there is broad consensus that transitioning to new hardware will require a transition to new training algorithms, due to two central issues with backpropagation: the requirement of sequential forward and backward passes, and the need to analytically compute gradients of a global cost to perform a synaptic update. These features demand digital hardware architectures that can precisely match forward and backward passes in a low-noise environment, since even minor fluctuations can propagate numerical errors that alter the final performance of the model. Thus, there is an urgent need for alternative, hardware-friendly, neuroscience-inspired learning algorithms that would allow us to reach brain-like efficiency.”
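To make the quoted argument concrete: predictive coding replaces backpropagation's global backward pass with layer-local error signals, so each weight update depends only on quantities available at that layer. The sketch below illustrates the general scheme in the spirit of the predictive coding literature; it is not the benchmark code from the VERSES post, and the architecture, data, and learning rates are all made up:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny 2-4-1 network; all sizes and rates are illustrative.
sizes = [2, 4, 1]
W = [rng.normal(0.0, 0.5, (sizes[l + 1], sizes[l])) for l in range(2)]

f = np.tanh
df = lambda a: 1.0 - np.tanh(a) ** 2

def step(x_in, target, n_relax=30, lr_x=0.2, lr_w=0.05):
    """One training step: clamp input and target, relax the hidden activity
    to reduce local prediction errors, then apply a local (Hebbian-like)
    weight update per layer -- no global backward pass."""
    x = [x_in, np.zeros(sizes[1]), target]
    for _ in range(n_relax):
        e1 = x[1] - W[0] @ f(x[0])        # prediction error at hidden layer
        e2 = x[2] - W[1] @ f(x[1])        # prediction error at output layer
        # The hidden activity moves to explain both adjacent errors.
        x[1] += lr_x * (-e1 + df(x[1]) * (W[1].T @ e2))
    e1 = x[1] - W[0] @ f(x[0])
    e2 = x[2] - W[1] @ f(x[1])
    W[0] += lr_w * np.outer(e1, f(x[0]))  # each update uses only signals
    W[1] += lr_w * np.outer(e2, f(x[1]))  # local to that layer
    return float(e2 @ e2)

# Toy XOR data; print the summed output error to watch training progress.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
Y = np.array([[0.], [1.], [1.], [0.]])
for epoch in range(200):
    loss = sum(step(x, y) for x, y in zip(X, Y))
    if epoch % 50 == 0:
        print(f"epoch {epoch}: output error {loss:.4f}")
```

Because every update is local, nothing in this scheme requires precisely matched forward and backward passes, which is exactly the property the quoted post argues makes such algorithms a better fit for noisy, non-von-Neumann hardware.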