Visual context facilitates perception, but how this is neurally implemented remains unclear. One example of contextual facilitation is found in reading, where letters are more easily identified when embedded in a word. Bottom-up models explain this word advantage as a post-perceptual decision bias, while top-down models propose that word contexts enhance perception itself. Here, we arbitrate between these accounts by presenting words and nonwords and probing the representational fidelity of individual letters using functional magnetic resonance imaging. In line with top-down models, we find that word contexts enhance letter representations in early visual cortex. Moreover, we observe increased coupling between letter information in visual cortex and brain activity in key areas of the reading network, suggesting these areas may be the source of the enhancement. Our results provide evidence for top-down representational enhancement in word recognition, demonstrating that word contexts can modulate perceptual processing already at the earliest visual regions.