“What if I told you that everything you’re seeing, hearing, and feeling at this moment isn’t actually reality? That it is a controlled hallucination?”
Imagine walking through an art gallery and encountering a sculpture of a face carved into a mask. As you admire it, you notice something strange: the inside of the mask appears to pop out toward you, even though logically you know it is concave and hollow. This visual trick, often referred to as a hollow-face illusion, showcases the extraordinary power of your brain to override raw sensory data with strong expectations.
Why does your mind cling to the idea that every face must protrude outward? How can it so stubbornly reject the truth in front of your eyes—and what does this reveal about the very nature of perception and consciousness?
According to one compelling framework in neuroscience, known as the free energy principle, your brain is not a passive camera simply collecting data from the world. Instead, it is a sophisticated prediction machine, constantly generating hypotheses about what you are seeing, hearing, and feeling—and then using incoming sensory signals to correct or confirm those predictions.
This blog post provides a deep dive into the key ideas from a transcript that discusses the free energy principle and related concepts like Markov blankets, Bayesian mechanics, and generative models of the brain. We will extend and elaborate on each of these themes, offering examples, references, and additional insights. By the end, you will see why illusions like the concave mask remain so convincing; how we evolved to be so adept at predicting reality; and what all of this means for understanding consciousness, intelligence, mental disorders, and even the future of AI.
This is more than just a novel perspective on how we see things. It’s a universal theory that ties together physics, biology, and psychology—a sweeping framework that might just revolutionize our understanding of the mind.
Peering into a Mask: Illusions as a Portal into the Brain
A Simple, Yet Astonishing Trick
In the referenced video, we begin by observing a rotating mask. At first glance, it looks predictably convex (protruding outward). Then the mask flips around, revealing its hollow, concave interior—yet our visual system resists seeing it as concave. We get an eerie sense of distortion, as though the inside is “popping out” to form another face.
Why Our Brain Insists on Convex Faces
The puzzle here, repeated in countless illusions, is that your brain knows faces are typically convex. We have grown up seeing faces around us—parents, friends, strangers in the street—and never encountered a hollow face in everyday life. This repeated exposure leads us to develop powerful “prior beliefs” about the shape of faces. Thus, when we see lighting and shadow patterns that should indicate an inward shape, the brain effectively vetoes that interpretation. It decides that a weird lighting scenario is more likely than a concave face.
In a broader sense, illusions are not mere party tricks. They highlight the tension between bottom-up sensory data and top-down predictions. By studying illusions, neuroscientists gain a window into the underlying computations of perception—how our brains filter, interpret, and even distort reality to better fit our internal models.
The Brain as a Prediction Machine
Controlled Hallucinations
Neuroscientist Anil Seth calls our perceptions “controlled hallucinations” because the brain actively generates predictions about what is “out there.” The raw sensory input—photons hitting your retina, vibrations in your ears—serves to update or refine these predictions, rather than to produce them from scratch.
In the transcript, the narrator puts it succinctly:
“Your brain isn’t passively receiving information; it is actively generating predictions about what should be out there, and then uses sensory input to check those predictions.”
This synergy between prediction and sensory check means that what you consciously perceive is less about the raw data in your eyes and more about the brain’s best guess about what that data signifies.
Predictive Coding: A Two-Way Street
In more technical terms, this mechanism has been explored under the banner of predictive coding. Neural signals travel downward from high-level cortical areas that encode abstract beliefs (like “faces protrude outward”) toward lower-level sensory areas. Meanwhile, ascending signals (from retinas and ears) carry prediction errors—the mismatch between what was predicted and what is actually happening.
Your brain tries to minimize prediction error by updating either:
- The internal prediction (i.e., changing your belief about the scene)
- The incoming data (i.e., selectively attending to or ignoring contradictory sensory inputs)
When strong prior beliefs—such as “faces are convex”—clash with ambiguous or subtle sensory evidence—like “this mask is concave”—the belief often wins.
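This tug-of-war between prior and evidence can be sketched numerically. Below is a minimal illustration (the function name and numbers are ours, not from the transcript) using Gaussian belief fusion, the simplest instance of predictive-coding-style updating: the posterior shifts by a precision-weighted fraction of the prediction error.

```python
# Minimal sketch (not from the source): updating a belief by a
# precision-weighted prediction error, predictive-coding style.
def update_belief(prior_mean, prior_precision, observation, sensory_precision):
    """Fuse a Gaussian prior with a Gaussian observation.

    Precision is inverse variance; the posterior mean moves toward
    whichever signal carries more precision.
    """
    prediction_error = observation - prior_mean
    # Kalman-style gain: how strongly the error shifts the belief.
    gain = sensory_precision / (prior_precision + sensory_precision)
    posterior_mean = prior_mean + gain * prediction_error
    posterior_precision = prior_precision + sensory_precision
    return posterior_mean, posterior_precision

# A strong prior (high precision) barely moves, even for a surprising observation.
mean, prec = update_belief(prior_mean=0.0, prior_precision=10.0,
                           observation=5.0, sensory_precision=1.0)
print(round(mean, 3))  # 0.455: the belief shifts only slightly
```

With the precisions reversed, the same observation would pull the belief most of the way to 5.0—which is exactly the “belief often wins” asymmetry the mask illusion exploits.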
Evolutionary Roots: Why Prediction Matters
Brains as Model Builders for Survival
It can be easy to think that illusions are mere quirks of the mind. But illusions illustrate a broader evolutionary adaptation. Brains did not evolve to accurately reconstruct every photon. They evolved to help organisms survive and reproduce. That means making rapid, good-enough guesses about what is happening in the environment—even if those guesses occasionally lead us astray.
From this vantage point, illusions are not flaws; they are demonstrations of a highly optimized system. In the transcript, we read:
“The main purpose of the brain, like any trait favored by evolution, is to increase the chances of survival. Organisms need to react to stimuli appropriately, and having a sophisticated model to make sense of partial or ambiguous data is crucial.”
Noise, Ambiguity, and Partial Information
Over evolutionary time, organisms encountered incomplete sensory inputs. Imagine a tiger half-hidden by tall grass. If your sensory system needed a perfect view before detecting danger, you might be eaten first. Instead, we developed a mechanism for filling in the gaps and arriving at a best guess as soon as possible.
This bias toward meaningful interpretation means that if you see glimpses of orange stripes in a safari park, you quickly assume it’s a tiger, not half a stuffed toy. Sure, you might be mistaken sometimes, but the cost of a false positive (jumping when it’s only a toy) is less than the cost of a false negative (failing to notice a real tiger).
The Tiger Example: Inferring Hidden Causes
Explaining Away Ambiguities
The transcript explores a scenario where you see something that looks only partway like a tiger—maybe some stripes are occluded by a tree. A naive pattern-matching approach would only confirm “tiger” if every pixel matched a memorized template. But living brains do better:
- They recall the high-level concept of “tiger.”
- They know that objects can be partially occluded.
- They infer that partial stripes and partial shapes glimpsed behind a tree likely indicate a fully formed tiger.
You do not see a “half-tiger.” Instead, your brain leaps to the more probable hidden cause: a real tiger partially hidden from view. You run—and in evolutionary terms, that’s a more effective survival strategy.
Minimizing Mistakes
In everyday life, we rarely notice that the retina captures only a partial image. Our perception seamlessly completes shapes, interprets partial edges, and clarifies uncertain signals. This capacity to “complete” patterns is the same capacity that can lead us astray when certain contexts—artificial lighting, mirrors, carefully constructed stimuli—trick the generative model into seeing something that isn’t truly there.
Minimizing Free Energy: A Balancing Act
A Simple Metaphor: Tight vs. Loose Predictions
In the transcript, free energy is described in psychological or informational terms as the tension between raw sensory data and what the mind expects to see. The more your predictions diverge from actual data, the higher the “free energy,” and the more the brain must work to reconcile the difference.
You can think of it like a scale with:
- Sensory data on one side,
- Prior beliefs on the other.
If they do not balance, you have an “error signal” that pushes your system to update either the data you attend to or your prior beliefs (or both).
What Is “Free Energy”?
Originally, free energy was a concept in thermodynamics: the portion of a system’s energy available to do useful work, which physical systems tend to minimize as they settle into stable states. In the context of brain function, Karl Friston and others adapted it into a variational free energy measure—a mathematical proxy for “surprise” or prediction error.
A low free-energy state means your internal model has high predictive power for the sensory inputs you encounter. You rarely get blindsided by unexpected signals or illusions. A high free-energy state means you keep encountering errors that you can’t explain away—leading to confusion, anxiety, or conflict in perception.
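For readers who want the formal version (this is standard variational-inference notation; the transcript itself stays informal), variational free energy for sensory data $x$, latent causes $z$, and an approximate posterior belief $q(z)$ can be written as:

$$
F = \mathbb{E}_{q(z)}\!\left[\ln q(z) - \ln p(x, z)\right]
  = D_{\mathrm{KL}}\!\left(q(z)\,\middle\|\,p(z \mid x)\right) - \ln p(x)
$$

Because the KL term is never negative, $F$ is an upper bound on surprise, $-\ln p(x)$. Driving $F$ down therefore does two things at once: it reduces how surprising the data are under the model, and it pulls the brain’s approximate belief $q(z)$ toward the true posterior over hidden causes.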
Generative Models and the Rendering of Reality
The Blender Analogy
One of the helpful metaphors in the transcript compares perception to a 3D graphics program like Blender. In a rendering tool, you can adjust just a few parameters—like an object’s angle, color, or light source—to produce a complex 2D image.
- A few sliders in your 3D scene can yield millions of pixel combinations in the final image.
- Similarly, your brain has latent (hidden) neurons encoding abstract causes, which can generate the vast array of possible sensory patterns you might see.
This is known as a generative model: from high-level causes (like “tiger,” “face,” “chair”), the brain renders lower-level features (like stripes, edges, shading) that eventually match or approximate the raw sensory input.
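The Blender analogy can be made concrete with a toy generative model (entirely our construction, for illustration): a handful of latent “sliders” deterministically render a much larger sensory pattern.

```python
# Hypothetical sketch of a generative model: a few latent "sliders"
# render a larger sensory pattern (here, a toy 1-D image).
import math

def render(cause, angle, brightness, width=16):
    """Map high-level causes to low-level features.

    `cause` picks a template ("stripe" vs. "edge"); `angle` and
    `brightness` are continuous latent parameters.
    """
    image = []
    for x in range(width):
        if cause == "stripe":
            value = math.sin(x * angle)              # periodic pattern
        else:
            value = 1.0 if x > width // 2 else 0.0   # step edge
        image.append(brightness * value)
    return image

# Three latent values generate sixteen pixels; inverting this mapping —
# inferring (cause, angle, brightness) from pixels — is perception.
pixels = render("stripe", angle=0.8, brightness=0.5)
print(len(pixels))  # 16
```

The asymmetry is the point: going from causes to pixels is easy and compact, while going from pixels back to causes is the hard inference problem the rest of the post describes.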
Compression and Abstraction
One might wonder, “Why all this complexity? Why not just store every possible image?” Because storing every possible arrangement of photons is astronomically inefficient. Evolution found a more compact solution: abstract categories, regularities, and latent variables. This helps your brain generalize from limited data and quickly adapt to new contexts.
Priors: The Lenses Through Which We See the World
Prior Beliefs in Action
Your brain does not start each moment from scratch. It carries “prior beliefs” about how objects look, how lighting typically falls, how gravity works, which foods are edible, and so on. These priors are acquired through both:
- Evolutionary timescales (e.g., we’re wired to fear large predators).
- Individual experience (e.g., we learn that cars typically appear in certain shapes and sizes).
When new sensory data arrives, it is interpreted against these pre-loaded assumptions. Hence illusions often exploit deeply embedded priors to create surprising experiences.
Faces and Light Sources
In the face-mask illusion, a crucial prior is that faces we encounter are almost always convex. Another prior might be that the primary light source usually comes from above, casting shadows in predictable ways. When the lighting and shape deviate from these norms, the easiest solution is to reinterpret the input so that it aligns with your existing beliefs (“face protruding outward under normal lighting”).
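The transcript’s verdict—“convex face plus strange lighting” beats “concave face”—can be mimicked with Bayes’ rule. The numbers below are invented purely for illustration; the shape of the result is what matters: even when the sensory evidence genuinely favors the concave reading, a lifetime’s worth of prior swamps it.

```python
# Illustrative numbers (not from the source): Bayes' rule applied to
# the hollow-face illusion as a choice between two hypotheses.
prior = {"convex_face": 0.999, "concave_face": 0.001}   # lifetime experience
# Likelihood of the observed shading under each hypothesis: the image
# actually fits "concave" better, but not overwhelmingly so.
likelihood = {"convex_face": 0.2, "concave_face": 0.8}

unnormalized = {h: prior[h] * likelihood[h] for h in prior}
evidence = sum(unnormalized.values())
posterior = {h: p / evidence for h, p in unnormalized.items()}

print(round(posterior["convex_face"], 3))  # 0.996: the prior wins
```

Only when the sensory likelihood ratio becomes extreme (for example, when you reach in and touch the mask) does the posterior flip—which matches the observation that added evidence from another modality, not more staring, is what breaks the illusion.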
Approximate Inference: How the Brain Gets It Right (Most of the Time)
The Impossible Task of Exhaustive Search
If the brain tried every possible combination of latent variables to explain raw data, it would never finish. Even with only a handful of “sliders” in a mental 3D scene, you get combinatorial explosion.
Yet your perception is fast. When you see stripes in the grass, you identify “tiger!” in milliseconds. This suggests the brain must use efficient approximate inference strategies, known in computational terms as recognition networks, approximate Bayesian inference, or variational methods.
Recognition Models and Generative Models
- Generative Model: Given a latent cause (like “tiger,” “occlusion,” or “face”), it can render the expected sensory pattern.
- Recognition Model: Given the sensory pattern, it infers likely latent causes.
These two are trained together over a lifetime so that the recognition model can quickly produce good guesses, which the generative model then checks against the data. When mismatches occur, a few refinement steps correct the guess. Within fractions of a second, you lock onto a consistent interpretation.
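One way to sketch this division of labor (a toy construction of ours, not an implementation from the transcript): a cheap recognition model produces a fast, slightly-off guess, and a few gradient steps on the prediction error refine it against the generative model.

```python
# Hedged sketch of amortized inference: a fast recognition guess,
# refined by gradient steps that shrink the prediction error.
def generate(z):
    """Toy generative model: latent cause -> predicted observation."""
    return 2.0 * z + 1.0

def recognize(x):
    """Toy recognition model: quick approximate inverse of `generate`."""
    return (x - 1.0) / 2.0 + 0.1   # deliberately slightly biased

def infer(x, steps=20, lr=0.1):
    z = recognize(x)                 # fast initial guess
    for _ in range(steps):
        error = generate(z) - x      # prediction error
        z -= lr * 2.0 * error * 2.0  # gradient of squared error w.r.t. z
    return z

z_hat = infer(5.0)
print(round(z_hat, 3))  # 2.0, since generate(2.0) == 5.0
```

The guess starts wrong by design (the recognition model is biased) but converges in a handful of iterations—a crude analogue of “locking onto a consistent interpretation within fractions of a second.”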
The Mask Illusion Revisited: When Priors Override Sensory Input
Hardwired for Convex Faces
Returning to the rotating mask example, the transcript remarks:
“Your brain has two possible explanations: a concave face or a normal convex face with strange lighting. Given your lifetime experience of faces, the second explanation has much lower free energy.”
Your visual system effectively says, “It’s easier to assume the face is normal and the lighting is weird.” This keeps free energy lower than if you tried to accept the truly bizarre notion of a hollow face.
Conscious Knowledge vs. Unconscious Inference
Interestingly, you know the mask is concave. You can place your hand inside it, confirm with logic. Yet the illusion persists. This indicates that some aspects of visual perception are handled by evolutionarily older, reflex-like circuits in the brain. The cognitive knowledge in your frontal lobes doesn’t always override those deep visual priors.
This disjunction is part of why illusions are so fascinating: they demonstrate that “knowing” is distinct from “seeing.”
Markov Blankets and Bayesian Mechanics: Drawing Boundaries Between ‘Self’ and ‘World’
The Markov Blanket Concept
Beyond illusions, the transcript—and other commentary referencing Karl Friston’s work—introduces the idea of the Markov blanket. This is a statistical boundary that separates a system’s internal states from the external environment.
- Internal states represent the organism’s hidden structure (neurons firing, biochemical processes).
- Blanket states include sensory inputs (information flowing in from the environment) and active outputs (the actions the organism takes to influence the environment).
- External states lie outside the boundary, in the world at large.
In more formal Bayesian terms, the Markov blanket ensures that internal states are conditionally independent of external states, given the blanket variables. This means that if you want to model the inside of a cell, a brain, or even an entire organism, you only need to consider how it communicates with the world via those blanket states.
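In symbols (standard notation from Friston’s papers, not spelled out in the transcript), write internal states as $\mu$, external states as $\eta$, and the blanket states as $b$ (sensory plus active states). The Markov blanket condition is then:

$$
p(\mu \mid \eta, b) = p(\mu \mid b), \qquad p(\eta \mid \mu, b) = p(\eta \mid b)
$$

Conditioned on the blanket, inside and outside are statistically independent: everything the organism can “know” about the world, and everything the world can “know” about the organism, is mediated by sensory and active states.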
Bayesian Mechanics and Self-Organization
Friston’s free energy principle extends beyond illusions to all self-organizing systems. Whether we talk about single cells, brains, or societies, if a system persists over time, it can be mathematically described as minimizing surprise (or free energy) relative to environmental inputs.
Key takeaway: Markov blankets define how internal states remain coherent despite external fluctuations. Organisms do this by updating their predictions (internal states) to reduce error signals that come through the blanket—explaining how and why living systems can maintain homeostasis or equilibrium-like states far from simple thermodynamic equilibrium.
Beyond Illusions: Insights into Learning, Decision-Making, and Disorders
Learning and Neural Plasticity
The free energy principle does not just explain how we perceive illusions; it also gives a roadmap to how we learn. Each time your brain receives sensory inputs that deviate from your predictions, synaptic connections update. Over time, you refine your generative model, building new priors—or adjusting old ones—about the world.
- Example: If you move to a new country with different local customs, your prior beliefs about social cues get updated. What was surprising at first becomes normal after you reduce your prediction error through repeated interactions.
Decision-Making and Active Inference
Another extension is active inference. Instead of passively waiting to sense the world, an organism acts on the environment to gather better information or to realize its predictions.
- Perception: “What is out there?”
- Action: “Ensure the environment matches my model or reduce the discrepancy.”
Whether you reach out to touch the mask to confirm its shape, or run from an orange-striped shape in the grass, you are using action to minimize free energy in the future.
Disorders of Prediction
In mental health research, some theorists propose that schizophrenia, autism, and depression may involve disruptions in predictive processing. For instance:
- Schizophrenia could reflect atypical precision-weighting of prediction-error signals, contributing to hallucinations and delusions.
- Autism could involve aberrant priors or differences in how new evidence updates existing beliefs.
- Depression might be linked to prior beliefs that the world is consistently negative or uncontrollable, lowering the ability to update in the face of positive experiences.
The free energy principle, in that sense, frames these conditions as disruptions in the balance between top-down predictions and bottom-up data.
Implications for AI, Cognitive Science, and Future Research
A Blueprint for Artificial Systems
The generative-model approach heavily influences cutting-edge machine learning, especially in Variational Autoencoders (VAEs), Generative Adversarial Networks (GANs), and predictive coding algorithms. By refining an internal model of data, these architectures can generate images or predict text in ways that eerily resemble human creativity.
If the free energy principle truly is universal, then we can build AI agents that minimize free energy to learn robust representations of the world. This might lead to more flexible, human-like intelligence that can handle ambiguity, occlusion, and partial data as skillfully as we do.
Markov Blankets at Multiple Scales
One fascinating idea is that Markov blankets may appear at multiple organizational levels: cells within tissues, individuals within societies, societies within ecosystems. Each level might function as a self-organizing system that maintains certain boundaries, using the free energy principle to keep a stable identity.
Research exploring these nested Markov blankets could unify concepts from quantum physics, evolutionary biology, and social sciences under a single conceptual umbrella.
Practical Applications
- Healthcare: Understanding how the brain’s predictive process can become unbalanced could yield better therapies for mental disorders.
- Education: Emphasizing how the brain actively constructs knowledge might shape more interactive, exploratory learning models.
- Human-Machine Interaction: If AI systems become more generative-model-based, we may need new ways to interface with them, possibly training them to share or align with our priors about the world.
Conclusion: A Call to See the World with New Eyes
Our journey through the free energy principle, generative models, priors, and illusions sheds light on a stunning realization: You do not see reality as it is; you see reality as your brain predicts it to be. This predictive capacity is evolution’s ingenious solution to survival in a noisy, ambiguous world. It compresses vast amounts of sensory input into simpler latent causes—like “face,” “tiger,” or “chair”—and uses these to rapidly interpret the environment.
Illusions, from the rotating mask to the half-hidden tiger, reveal the cracks in this predictive machinery—moments when top-down beliefs steamroll the truth, or when incomplete data leads to misinterpretations. These illusions are not failures of perception; they are windows into the deeper computational processes that keep us alive.
And in embracing ideas like Markov blankets and Bayesian mechanics, we glimpse a universal principle that may govern not only the human brain but any self-organizing system, from single cells to entire ecosystems. We see how the boundaries between “self” and “world” can be understood as flows of information, how internal states fight to minimize “free energy” in the face of relentless external pressures.
A Final Thought
Whether you consider illusions a quirky fascination or a profound clue about consciousness, one lesson rings clear: perception is not a passive mirror of reality. It is a constant dance between what your brain expects and what the world offers. That dance, orchestrated by the free energy principle, might just be the secret to how life maintains itself against the tide of entropy—how it keeps shape, function, and meaning in a sea of noise.
Thank you for reading. Next time you see an illusion—be it a rotated face mask, a mirage in the desert, or an odd reflection—remember that you are peering into the cogs and gears of your own predictive mind. The same mechanism that keeps you alive can also make you see things that aren’t really there. And that, more than anything, is a testament to the power and mystery of the human brain.
Further Reading and References
- Friston, K. (2010). “The Free-Energy Principle: A Unified Brain Theory?” Nature Reviews Neuroscience, 11, 127–138.
- Friston, K. J. (2013). “Life as We Know It.” Journal of the Royal Society Interface, 10, 20130475.
- Hohwy, J. (2013). The Predictive Mind. Oxford University Press.
- Seth, A. (2015). “The Cybernetic Bayesian Brain: From Interoceptive Inference to Sensorimotor Contingencies.” In Open MIND, ed. Metzinger, T.K., & Windt, J.M.
- Levin, M. (2022). “Technological Approach to Mind Everywhere: An Experimentally-Grounded Framework for Understanding Diverse Bodies and Minds.” Frontiers in Systems Neuroscience, 16, 768201.
- Kirsanov, A. (2024). A Universal Theory of Brain Function [Video Transcript Excerpt].
- Namjoshi, S. (2025). “Engineering Explained: Bayesian Mechanics” [Video Transcript Excerpt].
- Fields, C. & Levin, M. (2023). “Regulative Development as a Model for the Origin of Life and Artificial Life Studies.” BioSystems, 229, 104927.
(Note: Some references combine ideas from the original transcript content and related scholarly work to provide a solid grounding for further exploration.)