Visual context facilitates perception, but how this is neurally implemented remains unclear. One example of contextual facilitation is found in reading, where letters are more easily identified when embedded in a word. Bottom-up models explain this word advantage as a post-perceptual decision bias, while top-down models propose that word contexts enhance perception itself. Here, we arbitrate between these accounts by presenting words and nonwords and probing the representational fidelity of individual letters using functional magnetic resonance imaging. In line with top-down models, we find that word contexts enhance letter representations in early visual cortex. Moreover, we observe increased coupling between letter information in visual cortex and brain activity in key areas of the reading network, suggesting these areas may be the source of the enhancement. Our results provide evidence for top-down representational enhancement in word recognition, demonstrating that word contexts can modulate perceptual processing already in the earliest visual regions.