Learning Lab LIVE: UNIT 10 – Understanding Explainable AI

Denise Holt · December 14, 2025

This Learning Lab LIVE session provides a comprehensive exploration of explainable AI and why transparency is essential for building trustworthy autonomous intelligence. It explains what explainability means in practice, why it is increasingly required by regulators, and how it directly impacts trust, adoption, and ethical deployment in enterprise and mission-critical systems. The session begins by examining why many modern AI systems—particularly deep learning models and large language models—struggle to provide reliable explanations for their decisions.

The presentation then breaks down how large language models actually operate, revealing why correlation-based pattern matching leads to hallucinations, fabricated reasoning, eigenvector traps, and mode collapse. Through concrete examples and recent interpretability research, viewers gain a clear understanding of why LLMs can produce fluent outputs without genuine understanding or causal reasoning, and why this creates serious risks in high-stakes applications.
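
To make that distinction tangible outside the session itself, the toy sketch below (an illustrative assumption, not part of the course materials) builds a tiny bigram-style next-word generator from co-occurrence counts alone. It can emit fluent-sounding sentences, including the word "because", while holding no model of what any word refers to, which is the essence of correlation-based pattern matching.

```python
# Toy illustration: a bigram "language model" that generates text purely from
# co-occurrence statistics. It has no notion of meaning or causation, yet it
# can still produce fluent-looking output, which is the failure mode described
# for much larger correlation-driven models.
import random
from collections import defaultdict

corpus = (
    "the system flags the transaction as fraud because the amount is high "
    "the system flags the transaction as safe because the amount is low"
).split()

# Count how often each word follows each other word (pure correlation).
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start: str, length: int = 10, seed: int = 0) -> str:
    """Sample a continuation word-by-word from observed co-occurrences."""
    random.seed(seed)
    words = [start]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

# The output sounds grammatical, but "because" encodes no causal model at all;
# it is simply a word that often followed "fraud" or "safe" in the data.
print(generate("the"))
```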

The session concludes by comparing alternative approaches to explainability, including instance-based learning and Active Inference. Viewers learn why Active Inference offers a fundamentally different solution: one that embeds causal reasoning, explicit generative models, and real-time adaptation directly into the architecture. By the end of this session, viewers will understand why Active Inference provides the strongest foundation for explainable, auditable, and trustworthy autonomous systems.
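
As a rough intuition for how such an architecture differs, the sketch below is a heavily simplified, hypothetical Active Inference-style loop; the state space, likelihood values, and decision threshold are all invented for illustration and do not reproduce the full formalism. The agent's beliefs, its generative likelihood model, and the uncertainty it expects each action to leave behind are explicit named quantities, so each choice can be read back and audited.

```python
# Minimal sketch of an Active Inference-style loop over a two-state world.
# This is a didactic simplification: beliefs, the generative (likelihood)
# model, and expected uncertainty are explicit, so every choice the agent
# makes can be traced back to named quantities.
import math

STATES = ["reward_left", "reward_right"]

# Generative model: P(observation | hidden state) when the agent peeks.
LIKELIHOOD = {
    "reward_left":  {"cue_left": 0.9, "cue_right": 0.1},
    "reward_right": {"cue_left": 0.1, "cue_right": 0.9},
}

def entropy(belief):
    """Uncertainty (in nats) of the agent's current belief over states."""
    return -sum(p * math.log(p) for p in belief.values() if p > 0)

def update_belief(belief, observation):
    """Bayesian belief update using the explicit likelihood model."""
    posterior = {s: belief[s] * LIKELIHOOD[s][observation] for s in STATES}
    total = sum(posterior.values())
    return {s: p / total for s, p in posterior.items()}

def expected_entropy_after_peek(belief):
    """Average posterior uncertainty the agent predicts if it peeks again."""
    expected = 0.0
    for obs in ("cue_left", "cue_right"):
        p_obs = sum(belief[s] * LIKELIHOOD[s][obs] for s in STATES)
        expected += p_obs * entropy(update_belief(belief, obs))
    return expected

belief = {"reward_left": 0.5, "reward_right": 0.5}

# Decision rule (simplified): keep peeking while peeking is expected to
# reduce uncertainty appreciably, then commit to the more probable state.
for step in range(5):
    if entropy(belief) - expected_entropy_after_peek(belief) > 0.05:
        observation = "cue_left"          # stand-in for sensing the world
        belief = update_belief(belief, observation)
        print(f"step {step}: peeked, saw {observation}, belief={belief}")
    else:
        choice = max(belief, key=belief.get)
        print(f"step {step}: acting on belief, going to {choice}")
        break
```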


Key Topics Covered

  • What explainable AI is and why it matters
    Defines explainable AI and explains its role in trust, regulatory compliance, ethical deployment, and enterprise adoption.
  • Black-box versus transparent AI models
    Compares opaque systems such as deep neural networks and LLMs with transparent models like decision trees and Bayesian networks.
  • How large language models actually work
    Explains LLM training, transformer architectures, attention mechanisms, and why these systems predict language without understanding meaning.
  • Why LLM explanations cannot be trusted
    Demonstrates how LLMs fabricate post-hoc explanations that do not reflect their true internal decision processes.
  • Hallucinations and reliability risks
    Explores why hallucinations occur, why they undermine trust, and why mitigation is difficult without architectural changes.
  • Eigenvector traps and mode collapse
    Explains why LLMs converge on repetitive, generic outputs and why scaling or prompt tuning does not solve the problem.
  • Correlation versus causation in AI reasoning
    Clarifies why statistical correlation cannot replace causal reasoning in real-world decision-making.
  • Instance-based learning and explainability
    Introduces instance-based learning as a transparent, data-driven approach that ties predictions directly to historical examples (a minimal sketch follows this list).
  • Strengths of instance-based learning
    Covers transparency, auditability, anomaly detection, robustness to sparse data, and resistance to hallucinations.
  • Limitations of instance-based learning
    Explains why static data dependence, scaling challenges, and lack of real-time interaction limit its applicability.
  • Active Inference as a truly explainable AI framework
    Introduces Active Inference as a causal, generative, and inherently interpretable approach grounded in the Free Energy Principle.
  • Generative models and predictive simulation
    Explains how Active Inference agents simulate future scenarios, act, observe outcomes, and update models continuously.
  • Causal reasoning and uncertainty reduction
    Shows how Active Inference explicitly models cause-and-effect relationships and minimizes uncertainty through action.
  • Real-time adaptability and decision-making
    Demonstrates why Active Inference excels in dynamic, safety-critical, and mission-critical environments.
  • Ethical AI and regulatory alignment
    Explains how explicit decision logic supports auditing, accountability, compliance, and ethical governance.
  • Comparing AI approaches for explainability
    Provides a direct comparison of LLMs, instance-based learning, and Active Inference across explainability, adaptability, and risk.
  • Integrating Active Inference with other AI systems
    Explores how Active Inference can supervise or augment LLMs and other models to add transparency and safeguards.
  • Enterprise strategy for explainable AI adoption
    Outlines how leaders can identify high-value transparency use cases and implement explainable AI incrementally.
  • Preparing organizations for trustworthy AI
    Covers workforce education, leadership alignment, pilot programs, and building an AI-literate culture.
  • The future of explainable autonomous intelligence
    Explains why causal, self-explaining, adaptive AI systems will define the next phase of enterprise and societal transformation.
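
As a companion to the instance-based learning topics above, the following sketch (using invented transaction records purely for illustration) shows the property the session highlights: every prediction is justified by pointing at the specific stored examples that produced it.

```python
# Instance-based learning in miniature: a k-nearest-neighbour classifier whose
# every prediction can be traced back to the concrete historical records that
# produced it. The data below is invented purely for illustration.
import math

# Historical examples: (amount_in_dollars, hour_of_day, label)
history = [
    (12.50,  9, "legitimate"),
    (18.00, 14, "legitimate"),
    (25.00, 11, "legitimate"),
    (950.00, 3, "fraud"),
    (880.00, 2, "fraud"),
]

def explainable_predict(amount, hour, k=3):
    """Return a prediction plus the exact neighbours that justify it."""
    scored = sorted(
        history,
        key=lambda rec: math.dist((amount, hour), (rec[0], rec[1])),
    )
    neighbours = scored[:k]
    labels = [label for _, _, label in neighbours]
    prediction = max(set(labels), key=labels.count)
    return prediction, neighbours

prediction, evidence = explainable_predict(amount=910.00, hour=4)
print(f"prediction: {prediction}")
print("because the most similar past cases were:")
for amount, hour, label in evidence:
    print(f"  ${amount:.2f} at {hour}:00 -> {label}")
```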

About Instructor

Denise Holt

15 Courses

Course Includes

  • 1 Course File
