Visual context facilitates perception, but how this is neurally implemented remains unclear. One example of contextual facilitation is found in reading, where letters are more easily identified when embedded in a word. Bottom-up models explain this word advantage as a post-perceptual decision bias, while top-down models propose that word contexts enhance perception itself. Here, we arbitrate between these accounts by presenting words and nonwords and probing the representational fidelity of individual letters using functional magnetic resonance imaging. In line with top-down models, we find that word contexts enhance letter representations in early visual cortex. Moreover, we observe increased coupling between letter information in visual cortex and brain activity in key areas of the reading network, suggesting these areas may be the source of the enhancement. Our results provide evidence for top-down representational enhancement in word recognition, demonstrating that word contexts can modulate perceptual processing even in the earliest visual regions.