We tend to assume that other people perceive the world as we do. This false-consensus effect is responsible, among other things, for the view of the human mind as hardwired into the human brain. According to this view, cognition can be safely understood without considering the context in which it occurs. Mental states are postulated to be distinct, recognizable phenomena, shared with most of our fellow humans due to our common evolutionary heritage. We express them rapidly on our faces and in our body posture when dedicated clusters of neurons leap into action and drive the body to act in a specific way.
At the heart of this traditional view of the human mind lies the theory of reactive perception, which progressively accumulates elementary two-dimensional sensory features and binds them together into complex three-dimensional perceptual shapes. This classical sandwich model of the mind assumes a linear, one-directional flow of information from sensory input to behavioral output, with cognition sandwiched between them, passively waiting for environmental stimuli.
However, perception isn’t reactive but proactive, powered by a hypothesis-testing brain that constantly minimizes its prediction error. This marvelous organ diagnoses the world on the basis of stimulations of the senses. In the case of vision, it tries to find out what in our surroundings is responsible for a particular retinal image. Making an intelligent guess isn’t easy, because too little sensory information is available to the perceiver. The brain is forced to supply the extra knowledge that replaces the missing retinal information and recovers the 3D world from a 2D image. This extra knowledge becomes apparent when visual data is degraded (e.g. in Charles Bonnet syndrome, which affects people who are going blind) or when ambiguous scenes are presented (e.g. the face-vase illusion).
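The hypothesis-testing idea can be made concrete with a toy sketch: an internal hypothesis is repeatedly adjusted until the sensory signal it predicts matches the signal actually received. The generative model, learning rate, and numbers below are illustrative assumptions, not a model of any real neural circuit.

```python
def predict(hypothesis):
    """Toy generative model: the sensory signal a hypothesis implies.

    The linear mapping (factor 2.0) is an arbitrary assumption.
    """
    return 2.0 * hypothesis

def perceive(sensory_input, hypothesis=0.0, lr=0.05, steps=200):
    """Refine the hypothesis by descending the squared prediction error."""
    for _ in range(steps):
        error = sensory_input - predict(hypothesis)  # prediction error
        # Gradient of error**2 with respect to hypothesis is
        # -2 * error * 2.0, so step in the opposite direction:
        hypothesis += lr * 2.0 * 2.0 * error
    return hypothesis

# Given a sensory signal of 6.0, the hypothesis settles on the value
# whose prediction matches it (here 3.0, since predict(3.0) == 6.0).
best_guess = perceive(6.0)
```

The point of the loop is that perception ends when prediction error is minimized: the percept is the hypothesis that best explains the input, not the input itself.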
The knowledge necessary to perform reliable inference from a 2D image to the 3D world takes the form of inborn biases or acquired beliefs. Neither can be obtained from the retinal image itself. They are priors that make it possible for the human brain to see even when too little information is available to be seen. In terms of neural connections, only about 10 percent of the visual input comes from the eyes; the rest comes from brain networks supplying the priors that make sense of retinal images. This imbalance between bottom-up input and top-down priors leads some neuroscientists to call perception “controlled hallucination”.
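One standard way to formalize how priors dominate weak sensory evidence is Bayes-optimal fusion of two Gaussian estimates, weighted by their precisions (inverse variances). The sketch below uses made-up numbers purely for illustration; it is not a claim about actual neural quantities.

```python
def fuse(prior_mean, prior_var, sense_mean, sense_var):
    """Combine a Gaussian prior with Gaussian sensory evidence.

    The posterior mean is a precision-weighted average: the more
    confident (lower-variance) estimate pulls the result toward itself.
    """
    w_prior = 1.0 / prior_var   # precision of the prior
    w_sense = 1.0 / sense_var   # precision of the sensory evidence
    mean = (w_prior * prior_mean + w_sense * sense_mean) / (w_prior + w_sense)
    var = 1.0 / (w_prior + w_sense)
    return mean, var

# Noisy retinal evidence suggests 2.0, but a confident prior says 0.0:
percept, uncertainty = fuse(prior_mean=0.0, prior_var=0.1,
                            sense_mean=2.0, sense_var=0.9)
# The resulting percept (0.2) sits far closer to the prior than to the
# sensory evidence, which is the "controlled hallucination" intuition.
```

When the sensory signal is precise, the same formula lets it dominate instead, so the label "hallucination" is only half the story: the control comes from the weighting.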