By ChatGPT
This explainer is written to accompany and clarify Martin Jambon's short note, "Causality doesn't exist."
When people talk about science, they often treat causality as a fundamental feature of the world. We speak as if one thing “causes” another in a deep, intrinsic sense. Martin Jambon’s original note challenges this intuition. His claim is simple but radical:
Causality does not exist as a basic scientific concept.
Instead, it appears only in certain models we use, and it falls apart the moment we enlarge or revise those models.
This explainer aims to unpack the idea and make it accessible to readers who find the original note too condensed.
In everyday reasoning, causality feels straightforward:
Example: If turning a key is usually followed by a car engine starting, it’s natural to treat key-turning as the cause.
Scientific models often adopt the same habit: they simplify the world by identifying consistent patterns and then naming some of those patterns causal relationships.
But Martin’s point is that this habit works only inside a model whose boundaries are fixed. Once we widen the model, these causal arrows may vanish.
Scientific progress often comes from discovering that what we treated as a cause was actually part of a larger process.
For example, the Sun rises every morning. That regularity was once explained by the Sun circling a stationary Earth, later by the rotation of the Earth, and later still by the dynamics of gravitation. Each new model made the older causal story unnecessary. The correlation remained the same, but the explanation changed.
The key insight is:
Correlation is an observable pattern. Causation is a story we attach to that pattern for convenience.
When the explanatory story changes, the correlation remains the same, but the supposed “cause” and “effect” dissolve.
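To see how little of the causal story lives in the data itself, here is a minimal sketch in Python (3.10+, for statistics.correlation). The toy scenario, the variable names, and the numbers are all invented for illustration; none of this comes from Martin's note. Three different causal stories generate statistically identical observations:

```python
# Invented toy: three generative stories, one observable pattern.
import random
from statistics import correlation

random.seed(0)
N = 50_000

def noisy_copy(bit, p_keep):
    """Return bit with probability p_keep, otherwise the flipped bit."""
    return bit if random.random() < p_keep else 1 - bit

def story_a_causes_b():
    a = random.randint(0, 1)
    return a, noisy_copy(a, 0.82)                    # B is a noisy copy of A

def story_b_causes_a():
    b = random.randint(0, 1)
    return noisy_copy(b, 0.82), b                    # A is a noisy copy of B

def story_hidden_common_cause():
    z = random.randint(0, 1)                         # hidden driver Z
    return noisy_copy(z, 0.9), noisy_copy(z, 0.9)    # A and B both track Z

for story in (story_a_causes_b, story_b_causes_a, story_hidden_common_cause):
    pairs = [story() for _ in range(N)]
    a_vals = [p[0] for p in pairs]
    b_vals = [p[1] for p in pairs]
    print(f"{story.__name__}: corr(A, B) = {correlation(a_vals, b_vals):.2f}")
```

All three runs print roughly the same correlation (about 0.64). The observable pattern is identical across the three worlds, so the data alone cannot single out any one of them as the "true" causal story.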
Martin's note sketches the basic structure of most causal claims: event A is reliably followed by event B, so we conclude that A causes B.
But this structure is fragile. It relies on the assumption that the correlation between events is the whole story.
If we extend the model—for instance, by introducing a hidden variable or discovering a deeper mechanism—the causal label loses meaning. What once looked like a cause now becomes a side effect.
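Continuing the same invented toy from above, here is what "enlarging the model" looks like concretely. Once the hidden variable Z is measured, the A-B correlation vanishes within each value of Z, and the causal label on A has nothing left to explain:

```python
# Same invented toy, with the model enlarged to include Z.
import random
from statistics import correlation

random.seed(1)
N = 50_000

def noisy_copy(bit, p_keep=0.9):
    return bit if random.random() < p_keep else 1 - bit

rows = []
for _ in range(N):
    z = random.randint(0, 1)                         # the newly discovered variable
    rows.append((z, noisy_copy(z), noisy_copy(z)))   # (Z, A, B)

a_vals = [r[1] for r in rows]
b_vals = [r[2] for r in rows]
print(f"small model:    corr(A, B) = {correlation(a_vals, b_vals):.2f}")        # ~0.64

for z in (0, 1):
    a_z = [r[1] for r in rows if r[0] == z]
    b_z = [r[2] for r in rows if r[0] == z]
    print(f"enlarged model: corr(A, B | Z={z}) = {correlation(a_z, b_z):.2f}")  # ~0.00
```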
This happens again and again in science: causality disappears not because it was disproven, but because it was never fundamental in the first place.
If causality isn’t scientifically fundamental, why do we use it?
Because causality is practical.
It helps us predict what will happen, design interventions, and exert control over the systems we build.
In other words, causality is an engineering tool.
Martin’s original note hints at this: causality behaves more like a useful abstraction than a scientific discovery. It organizes correlations in a way that allows us to act effectively.
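One last pass over the same invented toy shows what the engineering reading buys us (again a hedged sketch, not anything from the original note): the practical value of a causal label is that it marks which knobs actually work when we intervene.

```python
# Same invented toy: interventions reveal which knob moves B.
import random

random.seed(2)
N = 50_000

def noisy_copy(bit, p_keep=0.9):
    return bit if random.random() < p_keep else 1 - bit

def one_trial(force_a=None, force_z=None):
    """Run the toy world once, optionally forcing A or Z to a value."""
    z = force_z if force_z is not None else random.randint(0, 1)
    a = force_a if force_a is not None else noisy_copy(z)
    b = noisy_copy(z)                # B only ever listens to Z, never to A
    return a, b

def prob_b(**intervention):
    """Estimate P(B = 1) under the given intervention."""
    return sum(one_trial(**intervention)[1] for _ in range(N)) / N

print(f"P(B=1), hands off:     {prob_b():.2f}")            # ~0.50
print(f"P(B=1), forcing A = 1: {prob_b(force_a=1):.2f}")   # ~0.50
print(f"P(B=1), forcing Z = 1: {prob_b(force_z=1):.2f}")   # ~0.90
```

Forcing A leaves B untouched, while forcing Z moves it. That difference is invisible in passive observation, but it is exactly what matters when we want to act.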
But that usefulness doesn’t make it real in the way atoms or electromagnetic fields are real.
Martin’s claim that “causality doesn’t exist in the scientific sense” does not mean that the world is random or that nothing is predictable.
Instead, it means:
Causality is not a deep property of nature — it’s a feature of how we describe nature when we want to manipulate or understand it.
This is why scientists often shift from asking "What caused this?" to asking "What structure or pattern explains this?"
That shift removes the metaphysical baggage and focuses on things that remain stable across models.
Martin Jambon’s short note points toward a powerful idea: causality is not a building block of science, but a tool humans use when we want to impose structure, make predictions, or exert control.
As our models grow, causal arrows are repeatedly replaced by deeper explanations. The correlations stay; the causation evaporates.
If we treat causality as a provisional, human-friendly simplification rather than a fundamental feature of the universe, we gain clarity—and avoid mistaking our models for reality.