Resources
Benchmarking Predictive Coding Networks Made Simple
Author: Tommaso Salvatori
March 6, 2025
Summary
VERSES AI Research Blog – "Artificial intelligence (AI), as exemplified by large models such as ChatGPT, is one of the key technologies of this century. While most AI breakthroughs rely on deep neural networks trained via backpropagation on GPUs, this approach has several drawbacks, the main one being the need for large amounts of computational power. This makes the technology harder to deploy locally on low-power edge devices, less democratic, leaves a substantial carbon footprint, and stores information in a way unlike that of biological systems. How can we get past this computational wall and develop systems that reason with an efficiency comparable to that of the human brain? A promising solution is to move from GPUs — general-purpose machines not specifically designed for AI — towards specialized hardware, designed to perform tensor operations efficiently and circumvent the Von Neumann bottleneck — the separation between memory and processing units. While it is still unclear which kind of hardware will be used in the future to address this problem, there is broad consensus that transitioning to new hardware will also require a transition to new training algorithms, because of two central issues with backpropagation: the requirement of sequential forward and backward passes, and the need to analytically compute gradients of a global cost to perform a synaptic update. These features demand digital hardware architectures that can precisely match forward and backward passes in a low-noise environment, since even minor fluctuations can propagate numerical errors that alter the final performance of the model. Thus, there is an urgent need for alternative, hardware-friendly, neuroscience-inspired learning algorithms that would allow us to reach brain-like efficiency."
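To make the contrast with backpropagation concrete, the sketch below shows a minimal supervised predictive coding step in plain NumPy: the hidden activity is first relaxed against purely local prediction errors, and each weight update then uses only its pre-synaptic activity and post-synaptic error, with no global backward pass over a single cost. This is an illustrative toy under standard predictive coding assumptions, not the implementation described in the blog post or in any particular library; all names, dimensions, and learning rates are placeholders.

```python
import numpy as np

def f(z):
    """Layer activation (tanh chosen only for illustration)."""
    return np.tanh(z)

def df(z):
    """Derivative of the activation, used in the local error terms."""
    return 1.0 - np.tanh(z) ** 2

def pc_train_step(W1, W2, x0, target, n_relax=20, lr_x=0.1, lr_w=0.01):
    """One predictive coding step: relax the hidden state, then apply local weight updates."""
    x1 = f(W1 @ x0)   # feedforward guess for the hidden layer
    x2 = target       # output layer clamped to the label during training

    for _ in range(n_relax):
        e1 = x1 - f(W1 @ x0)   # local prediction error at the hidden layer
        e2 = x2 - f(W2 @ x1)   # local prediction error at the output layer
        # The hidden activity descends the local energy: pushed down by its own
        # error and pulled by the error it causes in the layer above.
        x1 = x1 + lr_x * (-e1 + W2.T @ (e2 * df(W2 @ x1)))

    # Each weight update needs only pre-synaptic activity and the post-synaptic error.
    e1 = x1 - f(W1 @ x0)
    e2 = x2 - f(W2 @ x1)
    W1 = W1 + lr_w * np.outer(e1 * df(W1 @ x0), x0)
    W2 = W2 + lr_w * np.outer(e2 * df(W2 @ x1), x1)
    return W1, W2

# Toy usage on random data (dimensions chosen only for illustration).
rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(8, 4))
W2 = rng.normal(scale=0.1, size=(2, 8))
x0, target = rng.normal(size=4), np.array([0.5, -0.5])
W1, W2 = pc_train_step(W1, W2, x0, target)
```

Because every update in this sketch depends only on quantities available at the two layers a weight connects, the scheme avoids the matched forward/backward passes and global gradient computation that the post identifies as obstacles for non-digital or noisy hardware.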