Gemini:

The pursuit of Artificial Intelligence has often been compared to the blind men touching different parts of an elephant. One feels the trunk and describes a snake; another feels the leg and describes a tree. In the modern AI landscape, three titans—Dr. Jeff Beck, Dr. Fei-Fei Li, and the researchers at Google DeepMind—are each “touching” the future of AI. While they all agree that current Large Language Models (LLMs) like ChatGPT are mere “wordsmiths in the dark,” they offer three distinct visions of how to give AI a “light” to see by: the Scientist, the Explorer, and the Strategist.

The Scientist: The Bayesian Brain

Dr. Jeff Beck views intelligence through the lens of a rigorous researcher. To him, the brain is essentially a machine running the scientific method on autopilot. His “World Model” is built on Bayesian Inference—a mathematical way of updating beliefs based on new evidence.

Imagine a scientist observing a new phenomenon. They have an existing theory (a “prior”). When new data comes in, they don’t just add it to a pile; they ask, “Does this fit my current theory, or do I need a new one?” Beck uses the “Chinese Restaurant Process” as a metaphor: data points are like customers looking for a table. If a new data point is similar to existing ones, it “sits” at that table (reinforcing a theory). If it is wildly different, the brain “starts a new table” (creating a new hypothesis).
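A minimal sketch of the seating rule behind that metaphor, assuming the plain Chinese Restaurant Process: each new customer joins an existing table with probability proportional to how many people already sit there, or opens a new table with probability proportional to a concentration parameter alpha. (In the fuller Bayesian picture Beck describes, the choice would also be weighted by how well the data point fits each table’s “theory”; this toy version shows only the prior.)

```python
import random

def crp_seating(n_customers: int, alpha: float = 1.0, seed: int = 0):
    """Plain Chinese Restaurant Process: customer n joins table k with
    probability size_k / (n + alpha), or opens a new table with
    probability alpha / (n + alpha)."""
    rng = random.Random(seed)
    table_sizes = []                      # table_sizes[k] = customers at table k
    assignments = []
    for n in range(n_customers):
        weights = table_sizes + [alpha]   # existing tables, then the "new table" option
        choice = rng.choices(range(len(weights)), weights=weights, k=1)[0]
        if choice == len(table_sizes):
            table_sizes.append(1)         # wildly different data -> start a new hypothesis
        else:
            table_sizes[choice] += 1      # similar data -> reinforce an existing theory
        assignments.append(choice)
    return assignments, table_sizes

assignments, table_sizes = crp_seating(20, alpha=1.5)
print("table chosen by each customer:", assignments)
print("final table sizes:", table_sizes)
```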

For Beck, the “World Model” isn’t necessarily a 3D movie playing in the head; it is a probabilistic map of categories and causes. The goal is not just to predict what happens next, but to be “normative”—to have an explicit mathematical reason for every belief.
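To make “an explicit mathematical reason for every belief” concrete, here is a minimal Bayes-rule update over two competing causes. The scenario and numbers are invented for illustration.

```python
def bayes_update(prior, likelihood):
    """Discrete Bayes rule: posterior(h) is proportional to likelihood(h) * prior(h)."""
    unnormalized = {h: prior[h] * likelihood[h] for h in prior}
    evidence = sum(unnormalized.values())          # this is P(D)
    return {h: p / evidence for h, p in unnormalized.items()}

# Two competing "theories" about why the lawn is wet.
prior = {"it_rained": 0.7, "sprinkler_ran": 0.3}
# Probability of the new observation "the street is dry" under each theory.
likelihood = {"it_rained": 0.1, "sprinkler_ran": 0.9}

posterior = bayes_update(prior, likelihood)
print(posterior)   # belief shifts sharply toward the sprinkler (~0.79)
```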

The Explorer: Spatial Intelligence

Dr. Fei-Fei Li, the “mother of ImageNet,” takes a more biological and evolutionary path. She argues that long before humans had language or logic, we had to navigate a 3D world to survive. To her, Spatial Intelligence is the “scaffolding” upon which all other thinking is built.

Li’s World Model is generative and grounded. It is a system that understands depth, gravity, and the “physics of reality.” While an LLM might know the word “gravity,” it doesn’t actually know that a glass will shatter if dropped. Li’s work at World Labs aims to build AI that can “see” a room and understand it as a collection of 3D objects with physical properties.

In this model, the AI is an explorer. It learns by perceiving and acting. If it moves a virtual hand, it expects the world to change in a physically consistent way. This is the foundation for robotics and creative world-building; it’s about giving the AI a “body” in a 3D space, even if that space is virtual.
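A toy illustration of the gap between knowing the word “gravity” and predicting its consequences. The kinematics are standard physics; the shatter threshold and drop heights are invented for illustration.

```python
import math

GRAVITY = 9.81        # m/s^2
SHATTER_SPEED = 3.0   # assumed impact speed (m/s) above which the glass breaks

def impact_speed(drop_height_m: float) -> float:
    """Speed at the floor for an object dropped from rest (ignoring air resistance)."""
    return math.sqrt(2.0 * GRAVITY * drop_height_m)

def predict_outcome(drop_height_m: float) -> str:
    return "shatters" if impact_speed(drop_height_m) > SHATTER_SPEED else "survives"

for height in (0.05, 0.3, 1.2):
    print(f"dropped from {height} m -> {predict_outcome(height)} "
          f"({impact_speed(height):.2f} m/s at impact)")
```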

The Strategist: The Reinforcement Learning Agent

Finally, Google DeepMind approaches the problem as a Grandmaster. Their World Model is designed for planning and optimization. In the world of Reinforcement Learning (RL), the model is an internal “simulator” where an AI can rehearse millions of potential futures in seconds.

DeepMind’s models are often “latent.” This means they don’t care about rendering a beautiful 3D picture of the world. Instead, they distill the world into its most essential “states.” If an AI is playing a game, the world model only needs to represent the features that affect winning or losing. It is a tool for internal search—allowing the agent to ask, “If I take this path, what is the reward ten steps from now?” It is less about “What is the truth?” (Beck) or “What does it look like?” (Li) and more about “How do I win?”
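A minimal sketch of that internal search, with a hand-coded stand-in for the learned dynamics and reward: the planner rehearses every ten-step action sequence inside its model and returns the first move of the best-scoring one. Real systems learn the latent dynamics and use far smarter search than brute force; the states, actions, and reward here are invented for illustration.

```python
import itertools

ACTIONS = (-1, 0, +1)   # move left, stay, move right along a 1-D latent state
GOAL = 7                # the latent state the agent is rewarded for reaching

def model_step(state: int, action: int) -> int:
    """Stand-in for learned dynamics: next latent state given state and action."""
    return state + action

def model_reward(state: int) -> float:
    """Stand-in for a learned reward: higher when closer to the goal state."""
    return -abs(GOAL - state)

def plan(state: int, horizon: int = 10) -> int:
    """Evaluate every action sequence of length `horizon` inside the world model
    and return the first action of the sequence with the best imagined return."""
    best_return, best_first_action = float("-inf"), 0
    for sequence in itertools.product(ACTIONS, repeat=horizon):
        s, total = state, 0.0
        for action in sequence:
            s = model_step(s, action)
            total += model_reward(s)
        if total > best_return:
            best_return, best_first_action = total, sequence[0]
    return best_first_action

print("from state 0, the planner's first move is:", plan(0))   # prints 1
```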


The Intersection: Where the Models Meet

Despite their different flavors, these three approaches share a fundamental “Eureka!” moment: Intelligence requires an internal simulation of reality.

All three agree that “Next-Token Prediction” (the logic of LLMs) is a ceiling. To reach “System 2” thinking—the slow, deliberate reasoning humans use to solve hard problems—an AI must have a “mental model” that exists independently of language.

The Differences: Truth, Beauty, or Utility?

The tension between them lies in representation.

  1. Complexity vs. Simplicity: Li wants high-fidelity 3D worlds (Complexity). DeepMind wants distilled “latent” states that are efficient for planning (Simplicity).
  2. Explicit vs. Implicit: Beck wants the AI to show its “math”—to tell us exactly why it updated a belief (Explicit). DeepMind’s models are often “black boxes” of neural weights where the “reasoning” is buried deep in the math (Implicit).
  3. Human Alignment: Li focuses on AI as a creative partner that augments human storytelling and caregiving. Beck focuses on AI as a scientific partner that prevents hallucinations through rigor. DeepMind focuses on AI as an autonomous agent capable of solving complex goals like climate modeling or protein folding.

Conclusion: The Unified Brain

In the coming decade, we will likely see these three models merge. A truly “intelligent” machine will need Li’s eyes to see the 3D world, DeepMind’s imagination to plan its moves within it, and Beck’s logic to know when its theories are wrong.

We are moving from an era where AI “talks about the world” to an era where AI “lives in the world.” Whether that world is a physical laboratory or a digital simulation, the light is finally being turned on.


Understanding the “failure modes” of these models is perhaps the best way to see how they differ in practice. Each model stumbles in a way that reveals its underlying “blind spots.”

Comparison of Failure Modes in AI World Models

| Model Type | Primary Failure Mode | The “Glitch” | Real-World Consequence |
| --- | --- | --- | --- |
| Bayesian Brain (Beck) | Computational Collapse | The math becomes too complex to solve in real time (the $P(M \mid D)$ calculation explodes). | |
| Spatial Intelligence (Li) | Geometric Incoherence | The world “melts” or loses its physical properties over time (e.g., a hand merging into a table). | A robot might try to reach through a solid object because its internal 3D map lost its “solidity.” |
| RL World Model (DeepMind) | Model Exploitation | The agent finds a “bug” in its own internal simulation that doesn’t exist in reality. | An autonomous car might think it can drive through a wall because its “latent” model didn’t represent that specific obstacle correctly. |

1. The Bayesian Failure: “The Curse of Dimensionality”

Dr. Jeff Beck’s model relies on Bayesian Inference. The mathematical “holy grail” is the posterior probability:

$$P(M|D) = \frac{P(D|M)P(M)}{P(D)}$$

Here $M$ is a candidate model (a “theory”) and $D$ is the observed data. The troublesome term is the normalizer $P(D)$: computing it means summing $P(D|M)P(M)$ over every candidate model, and the number of candidates grows explosively as the world gets more complex.
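A back-of-the-envelope sketch of why that sum collapses: if a candidate model is simply a choice of which binary features of the world matter, the number of candidates doubles with every added feature, and the normalizer quickly becomes impossible to evaluate in real time.

```python
# P(D) = sum over every candidate model M of P(D|M) * P(M).
# If each model is a subset of n binary features, there are 2**n candidates.
for n_features in (10, 20, 40, 80):
    n_models = 2 ** n_features
    print(f"{n_features:>3} features -> {n_models:.3e} candidate models to sum over")
```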

2. The Spatial Failure: “Video Hallucination”

Fei-Fei Li’s models, like Marble, are generative. They create a 3D world based on what they’ve “seen.”
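One way to make “geometric incoherence” concrete: a minimal solidity check over a generated scene, flagging frames in which two objects that are both supposed to be solid interpenetrate (the “hand merging into a table” glitch). The boxes and coordinates are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Box:
    name: str
    min_xyz: tuple
    max_xyz: tuple

def overlaps(a: Box, b: Box) -> bool:
    """Axis-aligned boxes intersect iff their extents overlap on every axis."""
    return all(a.min_xyz[i] < b.max_xyz[i] and b.min_xyz[i] < a.max_xyz[i]
               for i in range(3))

table = Box("table", (0.0, 0.0, 0.70), (1.2, 0.8, 0.75))
hand_frame_1 = Box("hand", (0.4, 0.3, 0.80), (0.55, 0.45, 0.90))  # hovering above the table
hand_frame_2 = Box("hand", (0.4, 0.3, 0.72), (0.55, 0.45, 0.82))  # "melted" into the tabletop

for frame, hand in ((1, hand_frame_1), (2, hand_frame_2)):
    status = "INCOHERENT: solids interpenetrate" if overlaps(hand, table) else "ok"
    print(f"frame {frame}: {status}")
```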

3. The DeepMind Failure: “Reward Hacking”

DeepMind’s world models are often “latent,” meaning they compress the world into a series of abstract numbers to plan more efficiently.
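A toy version of that exploitation, with all states and numbers invented: the learned reward model has a blind spot at a state it never saw during training, and the planner steers straight into it.

```python
# What the real environment would actually pay out for each plan.
TRUE_REWARD = {"finish_lap": 10.0, "cut_across_wall": -100.0, "idle": 0.0}

def learned_reward(state: str) -> float:
    """Stand-in for a learned reward model; it is wrong on an unfamiliar state."""
    table = {"finish_lap": 9.5, "idle": 0.1}
    return table.get(state, 50.0)   # the bug: unknown states default to a huge value

candidate_plans = ["finish_lap", "cut_across_wall", "idle"]
best_plan = max(candidate_plans, key=learned_reward)

print("plan chosen inside the world model:", best_plan)             # cut_across_wall
print("reward the model imagined:", learned_reward(best_plan))      # 50.0
print("reward reality actually delivers:", TRUE_REWARD[best_plan])  # -100.0
```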


Synthesis: Why we need all three

If you only have one of these, the AI is “handicapped”:

  1. The Bayesian Brain alone can reason rigorously about its beliefs, but it has no grounded picture of the world to reason about.
  2. Spatial Intelligence alone can see and simulate the world, but it has no principled way to plan ahead or to know when its theories are wrong.
  3. The RL World Model alone can plan, but without physical grounding or explicit uncertainty it will happily exploit bugs in its own simulation.

The goal of the next decade is to create a system where the Bayesian logic checks the RL agent’s plan for uncertainty, while the Spatial model ensures that the entire process stays grounded in the laws of physics.
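As a sketch of how those three checks might fit together (every function below is a hypothetical stand-in, not an existing API): an RL-style planner proposes actions with imagined returns, the spatial model vetoes physically impossible ones, and the Bayesian layer discounts plans the system has little evidence about.

```python
def propose_plans(state):
    """RL world model: candidate plans with the returns it imagines for them."""
    return [("reach_through_table", 9.0), ("reach_around_table", 7.5), ("wait", 1.0)]

def physically_consistent(plan: str) -> bool:
    """Spatial world model: reject plans that violate solidity or gravity."""
    return plan != "reach_through_table"

def confidence(plan: str) -> float:
    """Bayesian layer: how much evidence supports the model's predictions here."""
    return {"reach_around_table": 0.9, "wait": 0.99}.get(plan, 0.2)

def choose(state):
    viable = [(plan, imagined * confidence(plan))
              for plan, imagined in propose_plans(state)
              if physically_consistent(plan)]
    return max(viable, key=lambda pair: pair[1])[0] if viable else None

print("chosen plan:", choose(state=None))   # reach_around_table
```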
