Visual context facilitates perception, but how this is neurally implemented remains unclear. One example of contextual facilitation is found in reading, where letters are more easily identified when embedded in a word. Bottom-up models explain this word advantage as a post-perceptual decision bias, while top-down models propose that word contexts enhance perception itself. Here, we arbitrate between these accounts by presenting words and nonwords and probing the representational fidelity of individual letters using functional magnetic resonance imaging. In line with top-down models, we find that word contexts enhance letter representations in early visual cortex. Moreover, we observe increased coupling between letter information in visual cortex and brain activity in key areas of the reading network, suggesting these areas may be the source of the enhancement. Our results provide evidence for top-down representational enhancement in word recognition, demonstrating that word contexts can modulate perceptual processing already at the earliest visual regions.