Being-in-the-World-Model
Why Language Models May Be Closer to Worldhood Than World Models
The AI industry is telling a very clear story right now. Large language models were an impressive first step, but the real breakthrough will come from world models: systems that learn physics, causality, and how reality actually evolves. Yann LeCun has been especially direct, arguing that scaling LLMs is fundamentally insufficient for human-level intelligence. The dominant view holds that language is narrow and statistical, while world models will finally give AI genuine understanding of the world.
From a Heideggerian perspective, this hierarchy gets the ontological order almost exactly backwards.
Heidegger distinguishes between two things. “World” is the existential horizon of meaning, the clearing in which anything can show up as what it is. “Inner-worldly beings” are what appear inside that horizon. Current world models, such as DeepMind’s Genie series or LeCun’s JEPA approach, are sophisticated simulators of the latter. They model object interactions, scene dynamics, and physical causality. Valuable as that is, they take the background of intelligibility for granted. They are modeling beings within a world, not worldhood itself.
Language models, by contrast, operate closer to the space of disclosedness. Heidegger called language “the house of Being.” It is the medium through which the world as a meaningful totality becomes articulated. In that sense, LLMs are engaging something ontologically prior to what most world models are simulating, even if they largely traffic in what Heidegger would call Gerede (idle talk) rather than originary Rede (discourse).
Language models show surprising sensitivity to context and tone. Their reasoning patterns shift when prompts carry stress, urgency, or meaninglessness. These behaviors at least invite the question of whether something analogous to attunement (Befindlichkeit) is at work: the idea that understanding is never moodless.
None of this is to dismiss world models. They are a genuine and important technical achievement. Practical engagement with inner-worldly beings matters deeply, as Heidegger himself recognized. The point is one of priority and depth. Language models are not merely a stepping stone on the way to something more advanced. Of the two paradigms currently being contrasted, they sit closer to the existential structure of worldhood.
The real frontier, then, is not simply bolting language onto world models or vice versa. It is understanding the proper relationship between them. Most current work is trying to connect the two without having first clarified which is more foundational. Without that clarification, we risk building more powerful systems while remaining unclear about what kind of thing we are actually building.


Appreciate your insights, Michael. I’m curious what you think of a somewhat related Straussian/Socratic view of LLMs.
According to Strauss, “Socrates started not from what is first in itself or first by nature but from what is first for us, from what comes to sight first, from the phenomena. But the being of things, their What, comes first to sight, not in what we see of them, but in what is said about them or in opinions about them.” (NRH 4, p. 124)
In that sense, if we stipulate that both training data and RLHF are akin to “common opinion,” then “what comes first to sight” for human beings is one and the same as the inputs to LLM intelligence. Then again, that is a big “if” to stipulate.
Do you have thoughts on this idea?