Exploration server on haptic devices

Warning: this site is under development!
Warning: this site is generated automatically from raw corpora.
The information is therefore not validated.

The Interactive Account of ventral occipitotemporal contributions to reading

Internal identifier: 001959 (Pmc/Checkpoint); previous: 001958; next: 001960

Authors: Cathy J. Price [United Kingdom]; Joseph T. Devlin [United Kingdom]

Source:

RBID: PMC:3223525

Abstract

The ventral occipitotemporal cortex (vOT) is involved in the perception of visually presented objects and written words. The Interactive Account of vOT function is based on the premise that perception involves the synthesis of bottom-up sensory input with top-down predictions that are generated automatically from prior experience. We propose that vOT integrates visuospatial features abstracted from sensory inputs with higher level associations such as speech sounds, actions and meanings. In this context, specialization for orthography emerges from regional interactions without assuming that vOT is selectively tuned to orthographic features. We discuss how the Interactive Account explains left vOT responses during normal reading and developmental dyslexia; and how it accounts for the behavioural consequences of left vOT damage.


Url:
DOI: 10.1016/j.tics.2011.04.001
PubMed: 21549634
PubMed Central: 3223525


The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">The Interactive Account of ventral occipitotemporal contributions to reading</title>
<author>
<name sortKey="Price, Cathy J" sort="Price, Cathy J" uniqKey="Price C" first="Cathy J." last="Price">Cathy J. Price</name>
<affiliation wicri:level="1">
<nlm:aff id="aff0005">Wellcome Trust Centre for Neuro-imaging, University College London, London WC1N 3BG, UK</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea>Wellcome Trust Centre for Neuro-imaging, University College London, London WC1N 3BG</wicri:regionArea>
<wicri:noRegion>London WC1N 3BG</wicri:noRegion>
</affiliation>
</author>
<author>
<name sortKey="Devlin, Joseph T" sort="Devlin, Joseph T" uniqKey="Devlin J" first="Joseph T." last="Devlin">Joseph T. Devlin</name>
<affiliation wicri:level="4">
<nlm:aff id="aff0010">Cognitive, Perceptual and Brain Sciences, Division of Psychology and Language Sciences, University of London, London WC1E 6BT, UK</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea>Cognitive, Perceptual and Brain Sciences, Division of Psychology and Language Sciences, University of London, London WC1E 6BT</wicri:regionArea>
<orgName type="university">Université de Londres</orgName>
<placeName>
<settlement type="city">Londres</settlement>
<region type="country">Angleterre</region>
<region type="région" nuts="1">Grand Londres</region>
</placeName>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">21549634</idno>
<idno type="pmc">3223525</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3223525</idno>
<idno type="RBID">PMC:3223525</idno>
<idno type="doi">10.1016/j.tics.2011.04.001</idno>
<date when="2011">2011</date>
<idno type="wicri:Area/Pmc/Corpus">002820</idno>
<idno type="wicri:Area/Pmc/Curation">002820</idno>
<idno type="wicri:Area/Pmc/Checkpoint">001959</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">The Interactive Account of ventral occipitotemporal contributions to reading</title>
<author>
<name sortKey="Price, Cathy J" sort="Price, Cathy J" uniqKey="Price C" first="Cathy J." last="Price">Cathy J. Price</name>
<affiliation wicri:level="1">
<nlm:aff id="aff0005">Wellcome Trust Centre for Neuro-imaging, University College London, London WC1N 3BG, UK</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea>Wellcome Trust Centre for Neuro-imaging, University College London, London WC1N 3BG</wicri:regionArea>
<wicri:noRegion>London WC1N 3BG</wicri:noRegion>
</affiliation>
</author>
<author>
<name sortKey="Devlin, Joseph T" sort="Devlin, Joseph T" uniqKey="Devlin J" first="Joseph T." last="Devlin">Joseph T. Devlin</name>
<affiliation wicri:level="4">
<nlm:aff id="aff0010">Cognitive, Perceptual and Brain Sciences, Division of Psychology and Language Sciences, University of London, London WC1E 6BT, UK</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea>Cognitive, Perceptual and Brain Sciences, Division of Psychology and Language Sciences, University of London, London WC1E 6BT</wicri:regionArea>
<orgName type="university">Université de Londres</orgName>
<placeName>
<settlement type="city">Londres</settlement>
<region type="country">Angleterre</region>
<region type="région" nuts="1">Grand Londres</region>
</placeName>
</affiliation>
</author>
</analytic>
<series>
<title level="j">Trends in Cognitive Sciences</title>
<idno type="ISSN">1364-6613</idno>
<idno type="eISSN">1879-307X</idno>
<imprint>
<date when="2011">2011</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>The ventral occipitotemporal cortex (vOT) is involved in the perception of visually presented objects and written words. The Interactive Account of vOT function is based on the premise that perception involves the synthesis of bottom-up sensory input with top-down predictions that are generated automatically from prior experience. We propose that vOT integrates visuospatial features abstracted from sensory inputs with higher level associations such as speech sounds, actions and meanings. In this context, specialization for orthography emerges from regional interactions without assuming that vOT is selectively tuned to orthographic features. We discuss how the Interactive Account explains left vOT responses during normal reading and developmental dyslexia; and how it accounts for the behavioural consequences of left vOT damage.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Xue, G" uniqKey="Xue G">G. Xue</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ben Shachar, M" uniqKey="Ben Shachar M">M. Ben-Shachar</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Cohen, L" uniqKey="Cohen L">L. Cohen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Leff, A P" uniqKey="Leff A">A.P. Leff</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Pflugshaupt, T" uniqKey="Pflugshaupt T">T. Pflugshaupt</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Starrfelt, R" uniqKey="Starrfelt R">R. Starrfelt</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Dehaene, S" uniqKey="Dehaene S">S. Dehaene</name>
</author>
<author>
<name sortKey="Cohen, L" uniqKey="Cohen L">L. Cohen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Dehaene, S" uniqKey="Dehaene S">S. Dehaene</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Song, Y" uniqKey="Song Y">Y. Song</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Starrfelt, R" uniqKey="Starrfelt R">R. Starrfelt</name>
</author>
<author>
<name sortKey="Gerlach, C" uniqKey="Gerlach C">C. Gerlach</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Xue, G" uniqKey="Xue G">G. Xue</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Amedi, A" uniqKey="Amedi A">A. Amedi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Buchel, C" uniqKey="Buchel C">C. Buchel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Costantini, M" uniqKey="Costantini M">M. Costantini</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Price, C J" uniqKey="Price C">C.J. Price</name>
</author>
<author>
<name sortKey="Devlin, J T" uniqKey="Devlin J">J.T. Devlin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Xue, G" uniqKey="Xue G">G. Xue</name>
</author>
<author>
<name sortKey="Poldrack, R A" uniqKey="Poldrack R">R.A. Poldrack</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Devlin, J T" uniqKey="Devlin J">J.T. Devlin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Price, C J" uniqKey="Price C">C.J. Price</name>
</author>
<author>
<name sortKey="Friston, K J" uniqKey="Friston K">K.J. Friston</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Woodhead, Z V" uniqKey="Woodhead Z">Z.V. Woodhead</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Reinke, K" uniqKey="Reinke K">K. Reinke</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Song, Y" uniqKey="Song Y">Y. Song</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Friston, K" uniqKey="Friston K">K. Friston</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hinton, G E" uniqKey="Hinton G">G.E. Hinton</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kassuba, T" uniqKey="Kassuba T">T. Kassuba</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Luders, H" uniqKey="Luders H">H. Luders</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Cai, Q" uniqKey="Cai Q">Q. Cai</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Friston, K" uniqKey="Friston K">K. Friston</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mcclelland, J L" uniqKey="Mcclelland J">J.L. McClelland</name>
</author>
<author>
<name sortKey="Rumelhart, D E" uniqKey="Rumelhart D">D.E. Rumelhart</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Seidenberg, M S" uniqKey="Seidenberg M">M.S. Seidenberg</name>
</author>
<author>
<name sortKey="Mcclelland, J L" uniqKey="Mcclelland J">J.L. McClelland</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rueckl, J" uniqKey="Rueckl J">J. Rueckl</name>
</author>
<author>
<name sortKey="Seidenberg, M S" uniqKey="Seidenberg M">M.S. Seidenberg</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Plaut, D C" uniqKey="Plaut D">D.C. Plaut</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Coltheart, M" uniqKey="Coltheart M">M. Coltheart</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Jacobs, A M" uniqKey="Jacobs A">A.M. Jacobs</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hupe, J M" uniqKey="Hupe J">J.M. Hupe</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rao, R P" uniqKey="Rao R">R.P. Rao</name>
</author>
<author>
<name sortKey="Ballard, D H" uniqKey="Ballard D">D.H. Ballard</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Twomey, T" uniqKey="Twomey T">T. Twomey</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Yoncheva, Y N" uniqKey="Yoncheva Y">Y.N. Yoncheva</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kherif, F" uniqKey="Kherif F">F. Kherif</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Dehaene, S" uniqKey="Dehaene S">S. Dehaene</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Price, C J" uniqKey="Price C">C.J. Price</name>
</author>
<author>
<name sortKey="Mechelli, A" uniqKey="Mechelli A">A. Mechelli</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Price, C J" uniqKey="Price C">C.J. Price</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kronbichler, M" uniqKey="Kronbichler M">M. Kronbichler</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Graves, W W" uniqKey="Graves W">W.W. Graves</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Binder, J R" uniqKey="Binder J">J.R. Binder</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Glezer, L S" uniqKey="Glezer L">L.S. Glezer</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Devlin, J T" uniqKey="Devlin J">J.T. Devlin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wang, X" uniqKey="Wang X">X. Wang</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Duncan, K J" uniqKey="Duncan K">K.J. Duncan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wright, N D" uniqKey="Wright N">N.D. Wright</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Szwed, M" uniqKey="Szwed M">M. Szwed</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hegde, J" uniqKey="Hegde J">J. Hegde</name>
</author>
<author>
<name sortKey="Van Essen, D C" uniqKey="Van Essen D">D.C. Van Essen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Connor, C E" uniqKey="Connor C">C.E. Connor</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Dehaene, S" uniqKey="Dehaene S">S. Dehaene</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Goswami, U" uniqKey="Goswami U">U. Goswami</name>
</author>
<author>
<name sortKey="Ziegler, J C" uniqKey="Ziegler J">J.C. Ziegler</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Blau, V" uniqKey="Blau V">V. Blau</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Brunswick, N" uniqKey="Brunswick N">N. Brunswick</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Richlan, F" uniqKey="Richlan F">F. Richlan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Shaywitz, B A" uniqKey="Shaywitz B">B.A. Shaywitz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Van Der Mark, S" uniqKey="Van Der Mark S">S. van der Mark</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wimmer, H" uniqKey="Wimmer H">H. Wimmer</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Van Der Mark, S" uniqKey="Van Der Mark S">S. van der Mark</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Brem, S" uniqKey="Brem S">S. Brem</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="James, K H" uniqKey="James K">K.H. James</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Turkeltaub, P E" uniqKey="Turkeltaub P">P.E. Turkeltaub</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mccrory, E J" uniqKey="Mccrory E">E.J. McCrory</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hillis, A E" uniqKey="Hillis A">A.E. Hillis</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hillis, A E" uniqKey="Hillis A">A.E. Hillis</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Marsh, E B" uniqKey="Marsh E">E.B. Marsh</name>
</author>
<author>
<name sortKey="Hillis, A E" uniqKey="Hillis A">A.E. Hillis</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gaillard, R" uniqKey="Gaillard R">R. Gaillard</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Starrfelt, R" uniqKey="Starrfelt R">R. Starrfelt</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sereno, M I" uniqKey="Sereno M">M.I. Sereno</name>
</author>
<author>
<name sortKey="Tootell, R B" uniqKey="Tootell R">R.B. Tootell</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="David, S V" uniqKey="David S">S.V. David</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Brincat, S L" uniqKey="Brincat S">S.L. Brincat</name>
</author>
<author>
<name sortKey="Connor, C E" uniqKey="Connor C">C.E. Connor</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gross, C G" uniqKey="Gross C">C.G. Gross</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Changizi, M A" uniqKey="Changizi M">M.A. Changizi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mumford, D" uniqKey="Mumford D">D. Mumford</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Dayan, P" uniqKey="Dayan P">P. Dayan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Friston, K" uniqKey="Friston K">K. Friston</name>
</author>
<author>
<name sortKey="Kiebel, S" uniqKey="Kiebel S">S. Kiebel</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="review-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">Trends Cogn Sci</journal-id>
<journal-id journal-id-type="iso-abbrev">Trends Cogn. Sci. (Regul. Ed.)</journal-id>
<journal-title-group>
<journal-title>Trends in Cognitive Sciences</journal-title>
</journal-title-group>
<issn pub-type="ppub">1364-6613</issn>
<issn pub-type="epub">1879-307X</issn>
<publisher>
<publisher-name>Elsevier Science</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">21549634</article-id>
<article-id pub-id-type="pmc">3223525</article-id>
<article-id pub-id-type="publisher-id">S1364-6613(11)00057-X</article-id>
<article-id pub-id-type="doi">10.1016/j.tics.2011.04.001</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Opinion</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>The Interactive Account of ventral occipitotemporal contributions to reading</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Price</surname>
<given-names>Cathy J.</given-names>
</name>
<email>c.price@fil.ion.ucl.ac.uk</email>
<xref rid="aff0005" ref-type="aff">1</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Devlin</surname>
<given-names>Joseph T.</given-names>
</name>
<xref rid="aff0010" ref-type="aff">2</xref>
</contrib>
</contrib-group>
<aff id="aff0005">
<label>1</label>
Wellcome Trust Centre for Neuro-imaging, University College London, London WC1N 3BG, UK</aff>
<aff id="aff0010">
<label>2</label>
Cognitive, Perceptual and Brain Sciences, Division of Psychology and Language Sciences, University of London, London WC1E 6BT, UK</aff>
<pub-date pub-type="pmc-release">
<day>1</day>
<month>6</month>
<year>2011</year>
</pub-date>
<pmc-comment> PMC Release delay is 0 months and 0 days and was based on .</pmc-comment>
<pub-date pub-type="ppub">
<month>6</month>
<year>2011</year>
</pub-date>
<volume>15</volume>
<issue>6</issue>
<fpage>246</fpage>
<lpage>253</lpage>
<permissions>
<copyright-statement>© 2011 Elsevier Ltd.</copyright-statement>
<copyright-year>2011</copyright-year>
<copyright-holder>Elsevier Ltd</copyright-holder>
<license xlink:href="https://creativecommons.org/licenses/by/3.0/">
<license-p>Open Access under
<ext-link ext-link-type="uri" xlink:href="https://creativecommons.org/licenses/by/3.0/">CC BY 3.0</ext-link>
license</license-p>
</license>
</permissions>
<abstract>
<p>The ventral occipitotemporal cortex (vOT) is involved in the perception of visually presented objects and written words. The Interactive Account of vOT function is based on the premise that perception involves the synthesis of bottom-up sensory input with top-down predictions that are generated automatically from prior experience. We propose that vOT integrates visuospatial features abstracted from sensory inputs with higher level associations such as speech sounds, actions and meanings. In this context, specialization for orthography emerges from regional interactions without assuming that vOT is selectively tuned to orthographic features. We discuss how the Interactive Account explains left vOT responses during normal reading and developmental dyslexia; and how it accounts for the behavioural consequences of left vOT damage.</p>
</abstract>
</article-meta>
</front>
<body>
<sec id="sec0005">
<title>The diverse response properties of vOT</title>
<p>There has been considerable interest in the role of the ventral occipitotemporal cortex (vOT) during reading. Learning to read increases left vOT activation in response to written words
<xref rid="bib0005 bib0010" ref-type="bibr">[1,2]</xref>
and damage to left vOT impairs the ability to read
<xref rid="bib0015 bib0020 bib0025 bib0030" ref-type="bibr">[3–6]</xref>
. These and other findings have led to claims that the response properties of vOT change during reading acquisition, leading to neuronal populations that are selectively tuned to orthographic inputs
<xref rid="bib0035 bib0040" ref-type="bibr">[7,8]</xref>
. However, a significant number of studies have reported that, even after learning to read, vOT is highly responsive to non-orthographic stimuli, with a selectivity that depends on the nature of the task and the stimulus
<xref rid="bib0045 bib0050 bib0055" ref-type="bibr">[9–11]</xref>
. The same vOT area also responds to orthographic and non-orthographic tactile stimuli
<xref rid="bib0060 bib0065 bib0070 bib0075" ref-type="bibr">[12–15]</xref>
. These diverse response properties suggest that vOT contributes to many different functions that change as it interacts with different areas
<xref rid="bib0005 bib0045 bib0055 bib0075 bib0080 bib0085 bib0090 bib0095 bib0100 bib0105" ref-type="bibr">[1,9,11,15–21]</xref>
. In this context, it is difficult to find a functional label that explains all vOT responses.</p>
<p>To explain the heterogeneity of responses in vOT, we formalize the Interactive Account of vOT function during reading by presenting it within a predictive coding (i.e. a generative) framework
<xref rid="bib0110 bib0115" ref-type="bibr">[22,23]</xref>
. This perspective provides a parsimonious explanation of empirical findings and is based on established theoretical and neurobiological principles. Before presenting this framework, we begin with an anatomical description of vOT.</p>
</sec>
<sec id="sec0010">
<title>The anatomy of vOT</title>
<p>vOT is centred on the occipitotemporal sulcus but extends medially onto the lateral crest of the fusiform gyrus and laterally onto the medial crest of the inferior temporal gyrus. In the posterior–anterior direction, vOT is located on the ventral border of the occipital and temporal lobes (
<xref rid="fig0005" ref-type="fig">Figure 1</xref>
a), which lies between
<italic>y</italic>
 = –50 and
<italic>y</italic>
 = –60 in standard Montreal Neurological Institute (MNI) space. More posteriorly, activation is highest to visual inputs, but more anteriorly activity increases in response to familiar visual, tactile or auditory stimuli
<xref rid="bib0120" ref-type="bibr">[24]</xref>
, consistent with a basal temporal language area
<xref rid="bib0125" ref-type="bibr">[25]</xref>
. Given its position between visual and language areas, it is not surprising that vOT responds to a range of visual stimuli as well as the language demands of the task
<xref rid="bib0005 bib0045 bib0055 bib0075 bib0080 bib0085 bib0090 bib0095 bib0100 bib0105" ref-type="bibr">[1,9,11,15–21]</xref>
. The association between vOT and language processing is further supported by observations that lateralization (left versus right hemisphere dominance) in vOT correlates with language lateralization in frontal language areas
<xref rid="bib0130" ref-type="bibr">[26]</xref>
.</p>
</sec>
<sec id="sec0015">
<title>The Interactive Account of vOT function</title>
<p>The Interactive Account is based on the premise that perception involves recurrent or reciprocal interactions between sensory cortices and higher order processing regions via a hierarchy of forward and backward connections (
<xref rid="fig0010" ref-type="fig">Figure 2</xref>
)
<xref rid="bib0110" ref-type="bibr">[22]</xref>
. Within the hierarchy, the function of a region depends on its synthesis of bottom-up sensory inputs conveyed by forward connections and top-down predictions mediated by backward connections. These predictions are based on prior experience and are needed to resolve uncertainty and ambiguity about the causes of the sensory inputs on which predictions are based. The hierarchical nature of neocortical organization is reflected in the abundance of backward relative to forward connections
<xref rid="bib0135" ref-type="bibr">[27]</xref>
. Because functional magnetic resonance imaging (fMRI) does not distinguish between synaptic activity induced by forward connections and that induced by backward connections, it reports their combined contribution (
<xref rid="fig0010" ref-type="fig">Figure 2</xref>
), which includes prediction error.</p>
<p>For reading, the sensory inputs are written words (or Braille in the tactile modality) and the predictions are based on prior association of visual or tactile inputs with phonology and semantics. In cognitive terms, vOT is therefore an interface between bottom-up sensory inputs and top-down predictions that call on non-visual stimulus attributes. Without prior knowledge of the relationship between orthography and phonology, vOT activation to words will be low because phonological areas do not send backward predictions to vOT (
<xref rid="fig0010" ref-type="fig">Figure 2</xref>
and
<xref rid="tb0010" ref-type="boxed-text">Box 1</xref>
). Once phonological associations are learned, backward connections can deliver top-down predictions to vOT when the stimuli are words or word-like. In this context, top-down processing does not imply a conscious strategy; it is mandated by unconscious (hierarchical) perceptual inference. In other words, it represents the intimate association between visual inputs and higher level linguistic representations that occurs automatically and is modulated by attention and task demands. Interpreting activation in vOT therefore requires consideration of the stimulus, experience-dependent learning and context (i.e. the task requirements and the attentional demands). Likewise, interpreting the effect of damage to vOT depends on how word recognition is affected by disrupting top-down inputs from higher order regions to vOT, and from vOT to lower level visual regions (
<xref rid="tb0015" ref-type="boxed-text">Box 2</xref>
).</p>
<p>Our account assumes that neuronal populations in vOT are not tuned selectively to orthographic inputs (
<xref rid="tb0020" ref-type="boxed-text">Box 3</xref>
). Instead, orthographic representations emerge from the interaction of backward and forward influences. In the forward direction, we postulate that neurons in vOT accumulate information about the elemental form of stimuli from complex receptive fields (
<xref rid="fig0005" ref-type="fig">Figure 1</xref>
and
<xref rid="tb0020" ref-type="boxed-text">Box 3</xref>
). In the backward direction, higher order conceptual and phonological knowledge predicts the pattern of activity distributed across multiple neurons within vOT. Put another way, orthographic representations are maintained by the consensual integration of visual inputs with higher level language representations
<xref rid="bib0085 bib0095 bib0100" ref-type="bibr">[17,19,20]</xref>
. This perspective allows the same neuronal populations to contribute to different functions depending on the regions with which they interact and the predictions for which the current context calls. In this context, the neural implementation of classical cognitive functions (e.g. orthography, semantics, phonology) is in distributed patterns of activity across hierarchical levels that are not fully dissociable from one another.</p>
<p>The visual information that is accumulated in vOT must be sufficiently specific to induce coherent patterns of activation in semantic and phonological areas that send top-down predictions back to vOT. For example, in McClelland and Rumelhart's
<xref rid="bib0140" ref-type="bibr">[28]</xref>
Interactive Activation model of visual word recognition, partial visual information cascades forward activating incomplete phonological and semantic patterns, which in turn feed back to support consistent orthographic patterns and suppress inconsistent ones. As in connectionist models of reading
<xref rid="bib0145 bib0150 bib0155" ref-type="bibr">[29–31]</xref>
, we propose that patterns of activation across vOT neurons encoding shape information are sufficient to partially activate neurons encoding semantics and phonology in higher order association regions, which provide recurrent inputs to vOT until the top-down predictions and bottom-up inputs are maximally consistent. Thus, predictions are optimized during the synthesis of bottom-up and top-down information (
<xref rid="fig0010" ref-type="fig">Figure 2</xref>
).</p>
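To make the settling process described above concrete, the following toy sketch (purely illustrative; the vector size, learning rate and variable names are assumptions, not taken from the article) iterates a bottom-up visual pattern against a top-down prediction until the two are maximally consistent, in the spirit of interactive activation and predictive coding schemes. The mean absolute prediction error at each step stands in for the activation that fMRI would report.

    # Illustrative only: iterate bottom-up input and top-down prediction
    # until they agree (prediction error approaches zero).
    import numpy as np

    rng = np.random.default_rng(0)
    visual_input = rng.random(8)          # bottom-up evidence (e.g. letter shapes)
    prediction = np.zeros(8)              # top-down prediction (phonology/semantics)
    pattern = visual_input.copy()         # current vOT-like activity pattern

    for step in range(100):
        error = pattern - prediction                  # prediction error
        prediction += 0.2 * error                     # higher levels refine their prediction
        pattern = 0.5 * (visual_input + prediction)   # synthesis of bottom-up and top-down
        if np.abs(error).mean() < 1e-4:               # settled: the two sources agree
            break

    print(f"settled after {step} iterations; mean error = {np.abs(error).mean():.5f}")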
</sec>
<sec id="sec0020">
<title>Evidence for automatic (non-strategic) top-down influences on vOT</title>
<p>In cognitive terms, top-down processing typically refers to conscious, strategic and task-related effects. Automatic, non-strategic top-down processes are also recognized, particularly in computational models of reading
<xref rid="bib0115 bib0140 bib0155 bib0160 bib0165" ref-type="bibr">[23,28,31–33]</xref>
. The ubiquity of automatic top-down effects has been demonstrated neurophysiologically in monkeys, where inactivating higher-order cortical areas (by cooling) results in changes to extra-classical receptive fields, despite the monkey being anesthetized
<xref rid="bib0170 bib0175" ref-type="bibr">[34,35]</xref>
.</p>
<p>Here we make a clear distinction between strategic and non-strategic top-down influences on vOT activation. Strategic influences have been demonstrated in studies showing that vOT activation changes with task, even when the stimulus, attention and response times are controlled
<xref rid="bib0045 bib0105 bib0180 bib0185" ref-type="bibr">[9,21,36,37]</xref>
. In contrast, non-strategic top-down influences on vOT activation are generated automatically and unconsciously from previous experience with similar stimuli (
<xref rid="fig0010" ref-type="fig">Figure 2</xref>
and
<xref rid="tb0010" ref-type="boxed-text">Box 1</xref>
). That is, visual words automatically engage processing of their sounds and meaning, which provide predictive feedback to the bottom-up processing of visual attributes.</p>
<p>A clear example of automatic (non-strategic) top-down effects on vOT activation comes from a picture-word priming experiment that found reduced vOT activation for unconsciously perceived primes that were conceptually and phonologically identical to a stimulus that was subsequently named
<xref rid="bib0190" ref-type="bibr">[38]</xref>
. For example, when a visually presented written object name (e.g. LION) was preceded by a rapidly presented, masked (unconscious) picture of the same object, activation in vOT was reduced relative to when it was preceded by a picture of a different object (e.g. a chair). Similarly, masked written object names (words) reduced vOT activation for pictures of the same objects. These findings can be explained easily by automatic, top-down predictions that prime visual shape information in vOT. In essence, the brief (and unconsciously perceived) prime is sufficient to engage phonological and/or semantic processing that automatically sends predictions regarding the identity of the next stimulus (the target) back to vOT, thereby reducing prediction error and activation. The fact that priming occurs across stimulus formats (pictures/words) demonstrates that these backward projections predict all visual forms of a concept (e.g. object form and written form). The same account also explains reduced vOT activation when a word is primed by the same word in a different case (e.g. AGE–age) without postulating the need for abstract visual word form detectors
<xref rid="bib0085 bib0195" ref-type="bibr">[17,39]</xref>
.</p>
<p>The effect of word–picture priming on vOT activation cannot be explained in terms of feed-forward visual processing because there is no visual similarity between the prime and the target that can serve as the basis for reduced vOT activation (e.g. through simple adaptation effects). Explanations based on strategic top-down processing are also insufficient, because participants are not aware of the primes and thus cannot use them to generate conscious expectations. The effects can nevertheless be explained by the Interactive Account in terms of automatic top-down influences that combine with bottom-up visual information to determine information processing in vOT.</p>
</sec>
<sec id="sec0025">
<title>vOT selectivity to words and other orthographic stimuli</title>
<p>Several studies have shown that activity is higher in response to pseudowords than to words in posterior parts of the occipitotemporal sulcus (
<italic>y</italic>
 = –60 to
<italic>y</italic>
 = –70 in MNI space) and higher in response to words than to pseudowords in anterior parts of the occipitotemporal sulcus (
<italic>y</italic>
 = –40 to
<italic>y</italic>
 = –50) (for a review, see
<xref rid="bib0200" ref-type="bibr">[40]</xref>
). However, here we consider the more perplexing pattern of selectivity that occurs at the centre of vOT (
<italic>y</italic>
 = –50 to
<italic>y</italic>
 = –60), where activity has been reported to be greater for: (i) pseudowords (e.g. GHOTS) than for consonant letter strings (e.g. GHVST)
<xref rid="bib0205" ref-type="bibr">[41]</xref>
; (ii) pseudowords than words (e.g. GHOST)
<xref rid="bib0210" ref-type="bibr">[42]</xref>
; and (iii) low versus high frequency words (GHOST versus GREEN)
<xref rid="bib0215" ref-type="bibr">[43]</xref>
. This combination of effects cannot be explained by a progressive increase or decrease in vOT response to familiarity (consonants < pseudowords < low frequency words < high frequency words) because responses to pseudowords are higher than those to both unfamiliar consonants and familiar words. Nor can vOT response selectivity be explained by bigram or trigram frequency
<xref rid="bib0220" ref-type="bibr">[44]</xref>
, because greater activation has been reported for pseudowords than for words when bigram and trigram frequency are controlled
<xref rid="bib0210" ref-type="bibr">[42]</xref>
.</p>
<p>The Interactive Account explains vOT responses to different types of stimulus simply, in terms of interactions between bottom-up visual information and top-down predictions (
<xref rid="fig0010" ref-type="fig">Figure 2</xref>
). During passive viewing tasks, activation increases for pseudowords relative to consonant letter strings because pseudowords are more word-like and therefore engage top-down predictions from phonological areas. By contrast, activation is greater for pseudowords than for words because, although both activate top-down predictions, there is a greater prediction error for pseudowords. That is, for a previously encountered stimulus (i.e. a word) there is a good match between predictions and the visual representations being predicted, producing minimal prediction error, whereas for unfamiliar pseudowords there is a poor match that increases prediction error and activation in vOT. Similarly, prediction error and activation will be less for high than for low frequency words because high frequency words are more familiar, which means their predictions are more efficient because they call on stronger associations between visual and linguistic codes.</p>
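The pattern described in this paragraph can be summarized with a generic predictive-coding expression (standard notation in the spirit of the hierarchical schemes cited in the Glossary, not an equation taken from the article):

    \varepsilon = u - g(v), \qquad \text{vOT response} \propto \lVert \varepsilon \rVert

where u is the bottom-up visual input reaching vOT, v denotes the higher-level (phonological and semantic) causes inferred from it, and g is the learned top-down mapping that generates a prediction of u. For a familiar word, g(v) reproduces u closely and the residual is small; for a pseudoword, the best available v still leaves a sizeable residual; and for low relative to high frequency words the mapping is weaker, so in each case the larger residual is reflected in greater measured activation.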
<p>This account also explains apparent word selectivity, such as repetition suppression in vOT for words primed by an identical word but not for those where the prime differs from the target by one letter (e.g. coat–boat)
<xref rid="bib0225" ref-type="bibr">[45]</xref>
. Clearly, the non-identical prime activates different phonological and semantic patterns than the target word, leading to increased prediction error in vOT
<xref rid="bib0190" ref-type="bibr">[38]</xref>
. In contrast, small orthographic differences between the prime and the target that result in only minor phonological and semantic changes (e.g. teacher–teach) yield minimal prediction error, resulting in reduced vOT activation
<xref rid="bib0230" ref-type="bibr">[46]</xref>
.</p>
<p>It is important to note that selectivity (in terms of greater activation for one stimulus relative to another) depends on numerous bottom-up and top-down processing demands that change with the task, familiarity with the stimulus, and the degree of overlap between the stimulus and other stimuli that might compete for a response (i.e. the orthographic neighbourhood effect). It is possible that selectivity can be reversed in one context relative to another. For example, during passive viewing conditions, vOT activation can be higher for words than for consonant strings because top-down predictions are activated by words that look familiar. In contrast, in attentionally demanding paradigms (e.g. the one-back task), vOT activation can be higher for consonants than for words
<xref rid="bib0235" ref-type="bibr">[47]</xref>
because, in the absence of top-down support from semantics and phonology, the visual processing demands of the task are greater for consonants.</p>
</sec>
<sec id="sec0030">
<title>vOT selectivity to words and pictures</title>
<p>When semantic and phonological associations are controlled by comparing written object names to pictures of the same objects, activation in vOT is typically greater for pictures than for written words
<xref rid="bib0240 bib0245" ref-type="bibr">[48,49]</xref>
, but again, it depends on the combination of the task
<xref rid="bib0050" ref-type="bibr">[10]</xref>
and the bottom-up visual inputs. During a non-linguistic task such as passive viewing, colour decision or a one-back task, vOT activation can be higher for words than for pictures when the physical dimensions of the visual stimuli are matched
<xref rid="bib0010 bib0050" ref-type="bibr">[2,10]</xref>
, although the location of this effect may be anterior to vOT proper
<xref rid="bib0250" ref-type="bibr">[50]</xref>
. By contrast, during naming tasks, vOT activation has only been reported as greater for pictures than for words
<xref rid="bib0190 bib0245" ref-type="bibr">[38,49]</xref>
.</p>
<p>Again, the task-specific reversal of stimulus selectivity can be explained by the Interactive Account in terms of a combination of forward inputs, top-down predictions and the mismatch between them (i.e. the prediction error). Activation related to forward inputs is greater for larger and more complex visual stimuli (e.g. pictures). Activation related to top-down predictions is greater for words than for pictures during non-linguistic tasks because only words have a sufficiently tight relationship with phonology to induce top-down predictions automatically. Activation related to prediction error is higher for pictures than for words during naming tasks because access to phonology is needed to name pictures and words, but the links between vOT and phonological areas are less accurate (more error-prone) for pictures. Thus, the Interactive Account provides a systematic and parsimonious explanation of a previously unexplained range of empirical data.</p>
</sec>
<sec id="sec0035">
<title>Concluding remarks</title>
<p>In summary, we have presented an Interactive Account that is based on a generic framework for understanding brain function
<xref rid="bib0110" ref-type="bibr">[22]</xref>
(
<xref rid="fig0010" ref-type="fig">Figure 2</xref>
). It explains vOT activation in terms of the synthesis of visual inputs carried in the forward connections, top-down predictions conveyed by backward connections, and the mismatch between these bottom-up and top-down inputs.</p>
<p>Although there are many outstanding questions (
<xref rid="tb0025" ref-type="boxed-text">Box 4</xref>
), we suggest that: (i) vOT activation to orthographic stimuli increases while individuals are learning to read because inter-regional interactions become established and top-down predictions from phonological and semantic processing areas become available; (ii) vOT activation is greater for pseudowords than for words, and for low relative to high frequency words because of increased prediction error; (iii) greater activation for pictures of objects than for their written names is the combined consequence of more complex visual features, less constrained top-down predictions and therefore increased prediction error; (iv) greater activation for written words than objects is observed when the task does not control for the top-down influence of language on written word processing; (v) damage to vOT impairs reading, object naming and perceptual processing because visual inputs are disconnected from top-down predictions from vOT; and (vi) vOT activation will be lower in developmental dyslexics, in whom top-down predictions from phonological and semantic processing areas are less automatically generated than in age-matched skilled readers.</p>
<p>The automatic interactions between visual, phonological and semantic information that we argue for are a fundamental property of almost all cognitive models of visual word recognition and are necessary to explain a range of reading behaviours
<xref rid="bib0140 bib0155 bib0160 bib0165" ref-type="bibr">[28,31–33]</xref>
. Incorporating them within a neural framework obviates the need to postulate a novel form of learning-related plasticity (e.g. ‘neuronal recycling’)
<xref rid="bib0035" ref-type="bibr">[7]</xref>
or reading-specific neuronal responses (e.g. ‘bigram detectors’)
<xref rid="bib0040" ref-type="bibr">[8]</xref>
. Instead, the Interactive Account relies on well established principles of neocortical function that are not specific to reading, but nonetheless accommodate this recently developed cultural skill.
<boxed-text id="tb0005">
<caption>
<title>Glossary</title>
</caption>
<p>
<list list-type="simple">
<list-item id="lsti0005">
<p>
<bold>Bottom-up sensory information</bold>
: external information arrives at the senses and projects to primary sensory cortices. These drive secondary, tertiary and higher order association cortices via forward connections arising primarily from superficial (layer II and III) pyramidal neurons. Within the ventral occipitotemporal cortex (vOT), the primary source of bottom-up information is visual, presumably from areas V2, V4v, and posterior parts of the lingual and fusiform gyri.</p>
</list-item>
<list-item id="lsti0010">
<p>
<bold>Generative models</bold>
: probabilistic models of how (sensory) data are caused. In machine learning, they include both bottom-up ‘recognition’ connections and top-down ‘predictive’ connections
<xref rid="bib0115" ref-type="bibr">[23]</xref>
. These models learn multilayer representations by adjusting the top-down connection weights to better predict sensory input. Existing computational models of reading use implicit generative models and share many important features such as interactivity and the use of prediction errors to learn weights (e.g. through back-propagation of errors).</p>
</list-item>
<list-item id="lsti0015">
<p>
<bold>Predictive coding</bold>
: a ubiquitous estimation scheme (developed in engineering) and instantiated in hierarchical generative models of brain function
<xref rid="bib0175 bib0380 bib0385 bib0390" ref-type="bibr">[35,76–78]</xref>
. Here, cortical regions receive bottom-up input encoding features present in the environment as well as top-down predictions. These predictions attempt to reconcile sensory input with one's internal knowledge of how input is generated. Thus, the function of any region is to integrate these two sources of input dynamically into a coherent, consistent, stable pattern of activity.</p>
</list-item>
<list-item id="lsti0020">
<p>
<bold>Prediction error</bold>
: the difference between bottom-up (sensory) input and top-down predictions. Within vOT, prediction error is minimized when they agree. Any irresolvable mismatch (e.g. when processing pseudowords) produces prediction error, which elicits an increased BOLD signal response (
<xref rid="fig0010" ref-type="fig">Figure 2</xref>
).</p>
</list-item>
<list-item id="lsti0025">
<p>
<bold>Top-down predictions</bold>
: the automatic input a region receives from areas above it in the anatomical hierarchy. These connections attempt to predict the bottom-up inputs based on the context and active features. Important sources of top-down input to vOT are (deep) pyramidal cells in cortical areas that contribute to representing the sound, meaning and actions associated with a given stimulus.</p>
</list-item>
</list>
</p>
</boxed-text>
</p>
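The 'Generative models' entry in the Glossary above notes that such models learn by adjusting top-down connection weights to better predict sensory input. A minimal, purely illustrative sketch of that idea follows (a delta-rule update on hypothetical weights; none of the names or values come from the article):

    # Illustrative only: learn top-down weights W so that the prediction
    # W @ cause reproduces the sensory input, by reducing the prediction error.
    import numpy as np

    rng = np.random.default_rng(1)
    cause = rng.random(4)              # higher-level representation (e.g. a word's identity)
    sensory_input = rng.random(8)      # bottom-up pattern to be predicted
    W = np.zeros((8, 4))               # top-down connection weights

    for _ in range(200):
        prediction = W @ cause
        error = sensory_input - prediction        # prediction error
        W += 0.1 * np.outer(error, cause)         # adjust weights to better predict the input

    print(f"remaining mean error: {np.abs(error).mean():.4f}")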
</sec>
</body>
<back>
<ref-list>
<title>References</title>
<ref id="bib0005">
<label>1</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Xue</surname>
<given-names>G.</given-names>
</name>
</person-group>
<article-title>Language experience shapes fusiform activation when processing a logographic artificial language: an fMRI training study</article-title>
<source>Neuroimage</source>
<volume>31</volume>
<year>2006</year>
<fpage>1315</fpage>
<lpage>1326</lpage>
<pub-id pub-id-type="pmid">16644241</pub-id>
</element-citation>
</ref>
<ref id="bib0010">
<label>2</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ben-Shachar</surname>
<given-names>M.</given-names>
</name>
</person-group>
<article-title>The development of cortical sensitivity to visual word forms</article-title>
<source>J. Cogn. Neurosci.</source>
<year>2011</year>
<fpage>21615</fpage>
</element-citation>
</ref>
<ref id="bib0015">
<label>3</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Cohen</surname>
<given-names>L.</given-names>
</name>
</person-group>
<article-title>The pathophysiology of letter-by-letter reading</article-title>
<source>Neuropsychologia</source>
<volume>42</volume>
<year>2004</year>
<fpage>1768</fpage>
<lpage>1780</lpage>
<pub-id pub-id-type="pmid">15351626</pub-id>
</element-citation>
</ref>
<ref id="bib0020">
<label>4</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Leff</surname>
<given-names>A.P.</given-names>
</name>
</person-group>
<article-title>Structural anatomy of pure and hemianopic alexia</article-title>
<source>J. Neurol. Neurosurg. Psychiatry</source>
<volume>77</volume>
<year>2006</year>
<fpage>1004</fpage>
<lpage>1007</lpage>
<pub-id pub-id-type="pmid">16801352</pub-id>
</element-citation>
</ref>
<ref id="bib0025">
<label>5</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Pflugshaupt</surname>
<given-names>T.</given-names>
</name>
</person-group>
<article-title>About the role of visual field defects in pure alexia</article-title>
<source>Brain</source>
<volume>132</volume>
<year>2009</year>
<fpage>1907</fpage>
<lpage>1917</lpage>
<pub-id pub-id-type="pmid">19498088</pub-id>
</element-citation>
</ref>
<ref id="bib0030">
<label>6</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Starrfelt</surname>
<given-names>R.</given-names>
</name>
</person-group>
<article-title>Too little, too late: reduced visual span and speed characterize pure alexia</article-title>
<source>Cereb. Cortex</source>
<volume>19</volume>
<year>2009</year>
<fpage>2880</fpage>
<lpage>2890</lpage>
<pub-id pub-id-type="pmid">19366870</pub-id>
</element-citation>
</ref>
<ref id="bib0035">
<label>7</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Dehaene</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Cohen</surname>
<given-names>L.</given-names>
</name>
</person-group>
<article-title>Cultural recycling of cortical maps</article-title>
<source>Neuron</source>
<volume>56</volume>
<year>2007</year>
<fpage>384</fpage>
<lpage>398</lpage>
<pub-id pub-id-type="pmid">17964253</pub-id>
</element-citation>
</ref>
<ref id="bib0040">
<label>8</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Dehaene</surname>
<given-names>S.</given-names>
</name>
</person-group>
<article-title>The neural code for written words: a proposal</article-title>
<source>Trends Cogn. Sci.</source>
<volume>9</volume>
<year>2005</year>
<fpage>335</fpage>
<lpage>341</lpage>
<pub-id pub-id-type="pmid">15951224</pub-id>
</element-citation>
</ref>
<ref id="bib0045">
<label>9</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Song</surname>
<given-names>Y.</given-names>
</name>
</person-group>
<article-title>The role of top-down task context in learning to perceive objects</article-title>
<source>J. Neurosci.</source>
<volume>30</volume>
<year>2010</year>
<fpage>9869</fpage>
<lpage>9876</lpage>
<pub-id pub-id-type="pmid">20660269</pub-id>
</element-citation>
</ref>
<ref id="bib0050">
<label>10</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Starrfelt</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Gerlach</surname>
<given-names>C.</given-names>
</name>
</person-group>
<article-title>The visual what for area: words and pictures in the left fusiform gyrus</article-title>
<source>Neuroimage</source>
<volume>35</volume>
<year>2007</year>
<fpage>334</fpage>
<lpage>342</lpage>
<pub-id pub-id-type="pmid">17239621</pub-id>
</element-citation>
</ref>
<ref id="bib0055">
<label>11</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Xue</surname>
<given-names>G.</given-names>
</name>
</person-group>
<article-title>Facilitating memory for novel characters by reducing neural repetition suppression in the left fusiform cortex</article-title>
<source>PLoS ONE</source>
<volume>5</volume>
<year>2010</year>
<fpage>e13204</fpage>
<pub-id pub-id-type="pmid">20949093</pub-id>
</element-citation>
</ref>
<ref id="bib0060">
<label>12</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Amedi</surname>
<given-names>A.</given-names>
</name>
</person-group>
<article-title>Visuo-haptic object-related activation in the ventral visual pathway</article-title>
<source>Nat. Neurosci.</source>
<volume>4</volume>
<year>2001</year>
<fpage>324</fpage>
<lpage>330</lpage>
<pub-id pub-id-type="pmid">11224551</pub-id>
</element-citation>
</ref>
<ref id="bib0065">
<label>13</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Buchel</surname>
<given-names>C.</given-names>
</name>
</person-group>
<article-title>A multimodal language region in the ventral visual pathway</article-title>
<source>Nature</source>
<volume>394</volume>
<year>1998</year>
<fpage>274</fpage>
<lpage>277</lpage>
<pub-id pub-id-type="pmid">9685156</pub-id>
</element-citation>
</ref>
<ref id="bib0070">
<label>14</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Costantini</surname>
<given-names>M.</given-names>
</name>
</person-group>
<article-title>Haptic perception and body representation in lateral and medial occipito-temporal cortices</article-title>
<source>Neuropsychologia</source>
<volume>49</volume>
<year>2011</year>
<fpage>821</fpage>
<lpage>829</lpage>
<pub-id pub-id-type="pmid">21316376</pub-id>
</element-citation>
</ref>
<ref id="bib0075">
<label>15</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Price</surname>
<given-names>C.J.</given-names>
</name>
<name>
<surname>Devlin</surname>
<given-names>J.T.</given-names>
</name>
</person-group>
<article-title>The myth of the visual word form area</article-title>
<source>Neuroimage</source>
<volume>19</volume>
<year>2003</year>
<fpage>473</fpage>
<lpage>481</lpage>
<pub-id pub-id-type="pmid">12880781</pub-id>
</element-citation>
</ref>
<ref id="bib0080">
<label>16</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Xue</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Poldrack</surname>
<given-names>R.A.</given-names>
</name>
</person-group>
<article-title>The neural substrates of visual perceptual learning of words: implications for the visual word form area hypothesis</article-title>
<source>J. Cogn. Neurosci.</source>
<volume>19</volume>
<year>2007</year>
<fpage>1643</fpage>
<lpage>1655</lpage>
<pub-id pub-id-type="pmid">18271738</pub-id>
</element-citation>
</ref>
<ref id="bib0085">
<label>17</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Devlin</surname>
<given-names>J.T.</given-names>
</name>
</person-group>
<article-title>The role of the posterior fusiform gyrus in reading</article-title>
<source>J. Cogn. Neurosci.</source>
<volume>18</volume>
<year>2006</year>
<fpage>911</fpage>
<lpage>922</lpage>
<pub-id pub-id-type="pmid">16839299</pub-id>
</element-citation>
</ref>
<ref id="bib0090">
<label>18</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Price</surname>
<given-names>C.J.</given-names>
</name>
<name>
<surname>Friston</surname>
<given-names>K.J.</given-names>
</name>
</person-group>
<article-title>Functional ontologies for cognition: the systematic definition of structure and function</article-title>
<source>Cogn. Neuropsychol.</source>
<volume>22</volume>
<year>2005</year>
<fpage>262</fpage>
<lpage>275</lpage>
<pub-id pub-id-type="pmid">21038249</pub-id>
</element-citation>
</ref>
<ref id="bib0095">
<label>19</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Woodhead</surname>
<given-names>Z.V.</given-names>
</name>
</person-group>
<article-title>The visual word form system in context</article-title>
<source>J. Neurosci.</source>
<volume>31</volume>
<year>2011</year>
<fpage>193</fpage>
<lpage>199</lpage>
<pub-id pub-id-type="pmid">21209204</pub-id>
</element-citation>
</ref>
<ref id="bib0100">
<label>20</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Reinke</surname>
<given-names>K.</given-names>
</name>
</person-group>
<article-title>Functional specificity of the visual word form area: general activation for words and symbols but specific network activation for words</article-title>
<source>Brain Lang.</source>
<volume>104</volume>
<year>2008</year>
<fpage>180</fpage>
<lpage>189</lpage>
<pub-id pub-id-type="pmid">17531309</pub-id>
</element-citation>
</ref>
<ref id="bib0105">
<label>21</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Song</surname>
<given-names>Y.</given-names>
</name>
</person-group>
<article-title>Short-term language experience shapes the plasticity of the visual word form area</article-title>
<source>Brain Res.</source>
<volume>1316</volume>
<year>2010</year>
<fpage>83</fpage>
<lpage>91</lpage>
<pub-id pub-id-type="pmid">20034482</pub-id>
</element-citation>
</ref>
<ref id="bib0110">
<label>22</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Friston</surname>
<given-names>K.</given-names>
</name>
</person-group>
<article-title>The free-energy principle: a unified brain theory?</article-title>
<source>Nat. Rev. Neurosci.</source>
<volume>11</volume>
<year>2010</year>
<fpage>127</fpage>
<lpage>138</lpage>
<pub-id pub-id-type="pmid">20068583</pub-id>
</element-citation>
</ref>
<ref id="bib0115">
<label>23</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hinton</surname>
<given-names>G.E.</given-names>
</name>
</person-group>
<article-title>Learning multiple layers of representation</article-title>
<source>Trends Cogn. Sci.</source>
<volume>11</volume>
<year>2007</year>
<fpage>428</fpage>
<lpage>434</lpage>
<pub-id pub-id-type="pmid">17921042</pub-id>
</element-citation>
</ref>
<ref id="bib0120">
<label>24</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kassuba</surname>
<given-names>T.</given-names>
</name>
</person-group>
<article-title>The left fusiform gyrus hosts trisensory representations of manipulable objects</article-title>
<source>Neuroimage</source>
<volume>56</volume>
<year>2011</year>
<fpage>1566</fpage>
<lpage>1577</lpage>
<pub-id pub-id-type="pmid">21334444</pub-id>
</element-citation>
</ref>
<ref id="bib0125">
<label>25</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Luders</surname>
<given-names>H.</given-names>
</name>
</person-group>
<article-title>Basal temporal language area</article-title>
<source>Brain</source>
<volume>114</volume>
<year>1991</year>
<fpage>743</fpage>
<lpage>754</lpage>
<pub-id pub-id-type="pmid">2043946</pub-id>
</element-citation>
</ref>
<ref id="bib0130">
<label>26</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Cai</surname>
<given-names>Q.</given-names>
</name>
</person-group>
<article-title>The left ventral occipito-temporal response to words depends on language lateralization but not on visual familiarity</article-title>
<source>Cereb. Cortex</source>
<volume>20</volume>
<year>2010</year>
<fpage>1153</fpage>
<lpage>1163</lpage>
<pub-id pub-id-type="pmid">19684250</pub-id>
</element-citation>
</ref>
<ref id="bib0135">
<label>27</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Friston</surname>
<given-names>K.</given-names>
</name>
</person-group>
<article-title>A free energy principle for the brain</article-title>
<source>J. Physiol. Paris</source>
<volume>100</volume>
<year>2006</year>
<fpage>70</fpage>
<lpage>87</lpage>
<pub-id pub-id-type="pmid">17097864</pub-id>
</element-citation>
</ref>
<ref id="bib0140">
<label>28</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>McClelland</surname>
<given-names>J.L.</given-names>
</name>
<name>
<surname>Rumelhart</surname>
<given-names>D.E.</given-names>
</name>
</person-group>
<article-title>An interactive activation model of context effects in letter perception, part I: an account of basic findings</article-title>
<source>Psychol. Rev.</source>
<volume>88</volume>
<year>1981</year>
<fpage>375</fpage>
<lpage>407</lpage>
</element-citation>
</ref>
<ref id="bib0145">
<label>29</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Seidenberg</surname>
<given-names>M.S.</given-names>
</name>
<name>
<surname>McClelland</surname>
<given-names>J.L.</given-names>
</name>
</person-group>
<article-title>A distributed, developmental model of word recognition and naming</article-title>
<source>Psychol. Rev.</source>
<volume>96</volume>
<year>1989</year>
<fpage>523</fpage>
<lpage>568</lpage>
<pub-id pub-id-type="pmid">2798649</pub-id>
</element-citation>
</ref>
<ref id="bib0150">
<label>30</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Rueckl</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Seidenberg</surname>
<given-names>M.S.</given-names>
</name>
</person-group>
<chapter-title>Computational modeling and the neural bases of reading and reading disorders</chapter-title>
<person-group person-group-type="editor">
<name>
<surname>Pugh</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>McCardle</surname>
<given-names>P.</given-names>
</name>
</person-group>
<source>How Children Learn to Read</source>
<year>2009</year>
<publisher-name>Psychology Press</publisher-name>
<fpage>99</fpage>
<lpage>131</lpage>
</element-citation>
</ref>
<ref id="bib0155">
<label>31</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Plaut</surname>
<given-names>D.C.</given-names>
</name>
</person-group>
<article-title>Understanding normal and impaired word reading: computational principles in quasi-regular domains</article-title>
<source>Psychol. Rev.</source>
<volume>103</volume>
<year>1996</year>
<fpage>56</fpage>
<lpage>115</lpage>
<pub-id pub-id-type="pmid">8650300</pub-id>
</element-citation>
</ref>
<ref id="bib0160">
<label>32</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Coltheart</surname>
<given-names>M.</given-names>
</name>
</person-group>
<article-title>DRC: a dual route cascaded model of visual word recognition and reading aloud</article-title>
<source>Psychol. Rev.</source>
<volume>108</volume>
<year>2001</year>
<fpage>204</fpage>
<lpage>256</lpage>
<pub-id pub-id-type="pmid">11212628</pub-id>
</element-citation>
</ref>
<ref id="bib0165">
<label>33</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Jacobs</surname>
<given-names>A.M.</given-names>
</name>
</person-group>
<article-title>Receiver operating characteristics in the lexical decision task: evidence for a simple signal-detection process simulated by the multiple read-out model</article-title>
<source>J. Exp. Psychol. Learn. Mem. Cogn.</source>
<volume>29</volume>
<year>2003</year>
<fpage>481</fpage>
<lpage>488</lpage>
<pub-id pub-id-type="pmid">12776758</pub-id>
</element-citation>
</ref>
<ref id="bib0170">
<label>34</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hupe</surname>
<given-names>J.M.</given-names>
</name>
</person-group>
<article-title>Cortical feedback improves discrimination between figure and background by V1, V2 and V3 neurons</article-title>
<source>Nature</source>
<volume>394</volume>
<year>1998</year>
<fpage>784</fpage>
<lpage>787</lpage>
<pub-id pub-id-type="pmid">9723617</pub-id>
</element-citation>
</ref>
<ref id="bib0175">
<label>35</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rao</surname>
<given-names>R.P.</given-names>
</name>
<name>
<surname>Ballard</surname>
<given-names>D.H.</given-names>
</name>
</person-group>
<article-title>Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects</article-title>
<source>Nat. Neurosci.</source>
<volume>2</volume>
<year>1999</year>
<fpage>79</fpage>
<lpage>87</lpage>
<pub-id pub-id-type="pmid">10195184</pub-id>
</element-citation>
</ref>
<ref id="bib0180">
<label>36</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Twomey</surname>
<given-names>T.</given-names>
</name>
</person-group>
<article-title>Top-down modulation of ventral occipito-temporal responses during visual word recognition</article-title>
<source>Neuroimage</source>
<volume>55</volume>
<year>2011</year>
<fpage>1242</fpage>
<lpage>1251</lpage>
<pub-id pub-id-type="pmid">21232615</pub-id>
</element-citation>
</ref>
<ref id="bib0185">
<label>37</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Yoncheva</surname>
<given-names>Y.N.</given-names>
</name>
</person-group>
<article-title>Auditory selective attention to speech modulates activity in the visual word form area</article-title>
<source>Cereb. Cortex</source>
<volume>20</volume>
<year>2010</year>
<fpage>622</fpage>
<lpage>632</lpage>
<pub-id pub-id-type="pmid">19571269</pub-id>
</element-citation>
</ref>
<ref id="bib0190">
<label>38</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kherif</surname>
<given-names>F.</given-names>
</name>
</person-group>
<article-title>Automatic top-down processing explains common left occipito-temporal responses to visual words and objects</article-title>
<source>Cereb. Cortex</source>
<volume>21</volume>
<year>2011</year>
<fpage>103</fpage>
<lpage>114</lpage>
<pub-id pub-id-type="pmid">20413450</pub-id>
</element-citation>
</ref>
<ref id="bib0195">
<label>39</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Dehaene</surname>
<given-names>S.</given-names>
</name>
</person-group>
<article-title>Cerebral mechanisms of word masking and unconscious repetition priming</article-title>
<source>Nat. Neurosci.</source>
<volume>4</volume>
<year>2001</year>
<fpage>752</fpage>
<lpage>758</lpage>
<pub-id pub-id-type="pmid">11426233</pub-id>
</element-citation>
</ref>
<ref id="bib0200">
<label>40</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Price</surname>
<given-names>C.J.</given-names>
</name>
<name>
<surname>Mechelli</surname>
<given-names>A.</given-names>
</name>
</person-group>
<article-title>Reading and reading disturbance</article-title>
<source>Curr. Opin. Neurobiol.</source>
<volume>15</volume>
<year>2005</year>
<fpage>231</fpage>
<lpage>238</lpage>
<pub-id pub-id-type="pmid">15831408</pub-id>
</element-citation>
</ref>
<ref id="bib0205">
<label>41</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Price</surname>
<given-names>C.J.</given-names>
</name>
</person-group>
<article-title>Demonstrating the implicit processing of visually presented words and pseudowords</article-title>
<source>Cereb. Cortex</source>
<volume>6</volume>
<year>1996</year>
<fpage>62</fpage>
<lpage>70</lpage>
<pub-id pub-id-type="pmid">8670639</pub-id>
</element-citation>
</ref>
<ref id="bib0210">
<label>42</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kronbichler</surname>
<given-names>M.</given-names>
</name>
</person-group>
<article-title>The visual word form area and the frequency with which words are encountered: evidence from a parametric fMRI study</article-title>
<source>Neuroimage</source>
<volume>21</volume>
<year>2004</year>
<fpage>946</fpage>
<lpage>953</lpage>
<pub-id pub-id-type="pmid">15006661</pub-id>
</element-citation>
</ref>
<ref id="bib0215">
<label>43</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Graves</surname>
<given-names>W.W.</given-names>
</name>
</person-group>
<article-title>Neural systems for reading aloud: a multiparametric approach</article-title>
<source>Cereb. Cortex</source>
<volume>20</volume>
<year>2010</year>
<fpage>1799</fpage>
<lpage>1815</lpage>
<pub-id pub-id-type="pmid">19920057</pub-id>
</element-citation>
</ref>
<ref id="bib0220">
<label>44</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Binder</surname>
<given-names>J.R.</given-names>
</name>
</person-group>
<article-title>Tuning of the human left fusiform gyrus to sublexical orthographic structure</article-title>
<source>Neuroimage</source>
<volume>33</volume>
<year>2006</year>
<fpage>739</fpage>
<lpage>748</lpage>
<pub-id pub-id-type="pmid">16956773</pub-id>
</element-citation>
</ref>
<ref id="bib0225">
<label>45</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Glezer</surname>
<given-names>L.S.</given-names>
</name>
</person-group>
<article-title>Evidence for highly selective neuronal tuning to whole words in the “visual word form area”</article-title>
<source>Neuron</source>
<volume>62</volume>
<year>2009</year>
<fpage>199</fpage>
<lpage>204</lpage>
<pub-id pub-id-type="pmid">19409265</pub-id>
</element-citation>
</ref>
<ref id="bib0230">
<label>46</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Devlin</surname>
<given-names>J.T.</given-names>
</name>
</person-group>
<article-title>Morphology and the internal structure of words</article-title>
<source>Proc. Natl. Acad. Sci. U.S.A.</source>
<volume>101</volume>
<year>2004</year>
<fpage>14984</fpage>
<lpage>14988</lpage>
<pub-id pub-id-type="pmid">15358857</pub-id>
</element-citation>
</ref>
<ref id="bib0235">
<label>47</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wang</surname>
<given-names>X.</given-names>
</name>
</person-group>
<article-title>Left fusiform BOLD responses are inversely related to word-likeness in a one-back task</article-title>
<source>Neuroimage</source>
<volume>55</volume>
<year>2011</year>
<fpage>1346</fpage>
<lpage>1356</lpage>
<pub-id pub-id-type="pmid">21216293</pub-id>
</element-citation>
</ref>
<ref id="bib0240">
<label>48</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Duncan</surname>
<given-names>K.J.</given-names>
</name>
</person-group>
<article-title>Consistency and variability in functional localisers</article-title>
<source>Neuroimage</source>
<volume>46</volume>
<year>2009</year>
<fpage>1018</fpage>
<lpage>1026</lpage>
<pub-id pub-id-type="pmid">19289173</pub-id>
</element-citation>
</ref>
<ref id="bib0245">
<label>49</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wright</surname>
<given-names>N.D.</given-names>
</name>
</person-group>
<article-title>Selective activation around the left occipito-temporal sulcus for words relative to pictures: individual variability or false positives?</article-title>
<source>Hum. Brain Mapp.</source>
<volume>29</volume>
<year>2008</year>
<fpage>986</fpage>
<lpage>1000</lpage>
<pub-id pub-id-type="pmid">17712786</pub-id>
</element-citation>
</ref>
<ref id="bib0250">
<label>50</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Szwed</surname>
<given-names>M.</given-names>
</name>
</person-group>
<article-title>Specialization for written words over objects in the visual cortex</article-title>
<source>Neuroimage</source>
<volume>56</volume>
<year>2011</year>
<fpage>330</fpage>
<lpage>344</lpage>
<pub-id pub-id-type="pmid">21296170</pub-id>
</element-citation>
</ref>
<ref id="bib0255">
<label>51</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hegde</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Van Essen</surname>
<given-names>D.C.</given-names>
</name>
</person-group>
<article-title>A comparative study of shape representation in macaque visual areas V2 and V4</article-title>
<source>Cereb. Cortex</source>
<volume>17</volume>
<year>2007</year>
<fpage>1100</fpage>
<lpage>1116</lpage>
<pub-id pub-id-type="pmid">16785255</pub-id>
</element-citation>
</ref>
<ref id="bib0260">
<label>52</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Connor</surname>
<given-names>C.E.</given-names>
</name>
</person-group>
<article-title>Transformation of shape information in the ventral pathway</article-title>
<source>Curr. Opin. Neurobiol.</source>
<volume>17</volume>
<year>2007</year>
<fpage>140</fpage>
<lpage>147</lpage>
<pub-id pub-id-type="pmid">17369035</pub-id>
</element-citation>
</ref>
<ref id="bib0265">
<label>53</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Dehaene</surname>
<given-names>S.</given-names>
</name>
</person-group>
<article-title>How learning to read changes the cortical networks for vision and language</article-title>
<source>Science</source>
<volume>330</volume>
<year>2010</year>
<fpage>1359</fpage>
<lpage>1364</lpage>
<pub-id pub-id-type="pmid">21071632</pub-id>
</element-citation>
</ref>
<ref id="bib0270">
<label>54</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Goswami</surname>
<given-names>U.</given-names>
</name>
<name>
<surname>Ziegler</surname>
<given-names>J.C.</given-names>
</name>
</person-group>
<article-title>A developmental perspective on the neural code for written words</article-title>
<source>Trends Cogn. Sci.</source>
<volume>10</volume>
<year>2006</year>
<fpage>142</fpage>
<lpage>143</lpage>
<pub-id pub-id-type="pmid">16517209</pub-id>
</element-citation>
</ref>
<ref id="bib0275">
<label>55</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Blau</surname>
<given-names>V.</given-names>
</name>
</person-group>
<article-title>Deviant processing of letters and speech sounds as proximate cause of reading failure: a functional magnetic resonance imaging study of dyslexic children</article-title>
<source>Brain</source>
<volume>133</volume>
<year>2010</year>
<fpage>868</fpage>
<lpage>879</lpage>
<pub-id pub-id-type="pmid">20061325</pub-id>
</element-citation>
</ref>
<ref id="bib0280">
<label>56</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Brunswick</surname>
<given-names>N.</given-names>
</name>
</person-group>
<article-title>Explicit and implicit processing of words and pseudowords by adult developmental dyslexics: a search for Wernicke's Wortschatz?</article-title>
<source>Brain</source>
<volume>122</volume>
<year>1999</year>
<fpage>1901</fpage>
<lpage>1917</lpage>
<pub-id pub-id-type="pmid">10506092</pub-id>
</element-citation>
</ref>
<ref id="bib0285">
<label>57</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Richlan</surname>
<given-names>F.</given-names>
</name>
</person-group>
<article-title>A common left occipito-temporal dysfunction in developmental dyslexia and acquired letter-by-letter reading?</article-title>
<source>PLoS ONE</source>
<volume>5</volume>
<year>2010</year>
<fpage>e12073</fpage>
<pub-id pub-id-type="pmid">20711448</pub-id>
</element-citation>
</ref>
<ref id="bib0290">
<label>58</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Shaywitz</surname>
<given-names>B.A.</given-names>
</name>
</person-group>
<article-title>Disruption of posterior brain systems for reading in children with developmental dyslexia</article-title>
<source>Biol. Psychiatry</source>
<volume>52</volume>
<year>2002</year>
<fpage>101</fpage>
<lpage>110</lpage>
<pub-id pub-id-type="pmid">12114001</pub-id>
</element-citation>
</ref>
<ref id="bib0295">
<label>59</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>van der Mark</surname>
<given-names>S.</given-names>
</name>
</person-group>
<article-title>Children with dyslexia lack multiple specializations along the visual word-form (VWF) system</article-title>
<source>Neuroimage</source>
<volume>47</volume>
<year>2009</year>
<fpage>1940</fpage>
<lpage>1949</lpage>
<pub-id pub-id-type="pmid">19446640</pub-id>
</element-citation>
</ref>
<ref id="bib0300">
<label>60</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wimmer</surname>
<given-names>H.</given-names>
</name>
</person-group>
<article-title>A dual-route perspective on poor reading in a regular orthography: an fMRI study</article-title>
<source>Cortex</source>
<volume>46</volume>
<year>2010</year>
<fpage>1284</fpage>
<lpage>1298</lpage>
<pub-id pub-id-type="pmid">20650450</pub-id>
</element-citation>
</ref>
<ref id="bib0305">
<label>61</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>van der Mark</surname>
<given-names>S.</given-names>
</name>
</person-group>
<article-title>The left occipitotemporal system in reading: disruption of focal fMRI connectivity to left inferior frontal and inferior parietal language areas in children with dyslexia</article-title>
<source>Neuroimage</source>
<volume>54</volume>
<year>2011</year>
<fpage>2426</fpage>
<lpage>2436</lpage>
<pub-id pub-id-type="pmid">20934519</pub-id>
</element-citation>
</ref>
<ref id="bib0310">
<label>62</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Brem</surname>
<given-names>S.</given-names>
</name>
</person-group>
<article-title>Brain sensitivity to print emerges when children learn letter-speech sound correspondences</article-title>
<source>Proc. Natl. Acad. Sci. U.S.A.</source>
<volume>107</volume>
<year>2010</year>
<fpage>7939</fpage>
<lpage>7944</lpage>
<pub-id pub-id-type="pmid">20395549</pub-id>
</element-citation>
</ref>
<ref id="bib0315">
<label>63</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>James</surname>
<given-names>K.H.</given-names>
</name>
</person-group>
<article-title>Sensori-motor experience leads to changes in visual processing in the developing brain</article-title>
<source>Dev. Sci.</source>
<volume>13</volume>
<year>2010</year>
<fpage>279</fpage>
<lpage>288</lpage>
<pub-id pub-id-type="pmid">20136924</pub-id>
</element-citation>
</ref>
<ref id="bib0320">
<label>64</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Turkeltaub</surname>
<given-names>P.E.</given-names>
</name>
</person-group>
<article-title>Development of ventral stream representations for single letters</article-title>
<source>Ann. N. Y. Acad. Sci.</source>
<volume>1145</volume>
<year>2008</year>
<fpage>13</fpage>
<lpage>29</lpage>
<pub-id pub-id-type="pmid">19076386</pub-id>
</element-citation>
</ref>
<ref id="bib0325">
<label>65</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>McCrory</surname>
<given-names>E.J.</given-names>
</name>
</person-group>
<article-title>More than words: a common neural basis for reading and naming deficits in developmental dyslexia?</article-title>
<source>Brain</source>
<volume>128</volume>
<year>2005</year>
<fpage>261</fpage>
<lpage>267</lpage>
<pub-id pub-id-type="pmid">15574467</pub-id>
</element-citation>
</ref>
<ref id="bib0330">
<label>66</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hillis</surname>
<given-names>A.E.</given-names>
</name>
</person-group>
<article-title>Restoring cerebral blood flow reveals neural regions critical for naming</article-title>
<source>J. Neurosci.</source>
<volume>26</volume>
<year>2006</year>
<fpage>8069</fpage>
<lpage>8073</lpage>
<pub-id pub-id-type="pmid">16885220</pub-id>
</element-citation>
</ref>
<ref id="bib0335">
<label>67</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hillis</surname>
<given-names>A.E.</given-names>
</name>
</person-group>
<article-title>The roles of the “visual word form area” in reading</article-title>
<source>Neuroimage</source>
<volume>24</volume>
<year>2005</year>
<fpage>548</fpage>
<lpage>559</lpage>
<pub-id pub-id-type="pmid">15627597</pub-id>
</element-citation>
</ref>
<ref id="bib0340">
<label>68</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Marsh</surname>
<given-names>E.B.</given-names>
</name>
<name>
<surname>Hillis</surname>
<given-names>A.E.</given-names>
</name>
</person-group>
<article-title>Cognitive and neural mechanisms underlying reading and naming: evidence from letter-by-letter reading and optic aphasia</article-title>
<source>Neurocase</source>
<volume>11</volume>
<year>2005</year>
<fpage>325</fpage>
<lpage>337</lpage>
<pub-id pub-id-type="pmid">16251134</pub-id>
</element-citation>
</ref>
<ref id="bib0345">
<label>69</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gaillard</surname>
<given-names>R.</given-names>
</name>
</person-group>
<article-title>Direct intracranial, FMRI, and lesion evidence for the causal role of left inferotemporal cortex in reading</article-title>
<source>Neuron</source>
<volume>50</volume>
<year>2006</year>
<fpage>191</fpage>
<lpage>204</lpage>
<pub-id pub-id-type="pmid">16630832</pub-id>
</element-citation>
</ref>
<ref id="bib0350">
<label>70</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Starrfelt</surname>
<given-names>R.</given-names>
</name>
</person-group>
<article-title>Visual processing in pure alexia: a case study</article-title>
<source>Cortex</source>
<volume>46</volume>
<year>2010</year>
<fpage>242</fpage>
<lpage>255</lpage>
<pub-id pub-id-type="pmid">19446802</pub-id>
</element-citation>
</ref>
<ref id="bib0355">
<label>71</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sereno</surname>
<given-names>M.I.</given-names>
</name>
<name>
<surname>Tootell</surname>
<given-names>R.B.</given-names>
</name>
</person-group>
<article-title>From monkeys to humans: what do we now know about brain homologies?</article-title>
<source>Curr. Opin. Neurobiol.</source>
<volume>15</volume>
<year>2005</year>
<fpage>135</fpage>
<lpage>144</lpage>
<pub-id pub-id-type="pmid">15831394</pub-id>
</element-citation>
</ref>
<ref id="bib0360">
<label>72</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>David</surname>
<given-names>S.V.</given-names>
</name>
</person-group>
<article-title>Spectral receptive field properties explain shape selectivity in area V4</article-title>
<source>J. Neurophysiol.</source>
<volume>96</volume>
<year>2006</year>
<fpage>3492</fpage>
<lpage>3505</lpage>
<pub-id pub-id-type="pmid">16987926</pub-id>
</element-citation>
</ref>
<ref id="bib0365">
<label>73</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Brincat</surname>
<given-names>S.L.</given-names>
</name>
<name>
<surname>Connor</surname>
<given-names>C.E.</given-names>
</name>
</person-group>
<article-title>Underlying principles of visual shape selectivity in posterior inferotemporal cortex</article-title>
<source>Nat. Neurosci.</source>
<volume>7</volume>
<year>2004</year>
<fpage>880</fpage>
<lpage>886</lpage>
<pub-id pub-id-type="pmid">15235606</pub-id>
</element-citation>
</ref>
<ref id="bib0370">
<label>74</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gross</surname>
<given-names>C.G.</given-names>
</name>
</person-group>
<article-title>Visual receptive fields of neurons in inferotemporal cortex of the monkey</article-title>
<source>Science</source>
<volume>166</volume>
<year>1969</year>
<fpage>1303</fpage>
<lpage>1306</lpage>
<pub-id pub-id-type="pmid">4982685</pub-id>
</element-citation>
</ref>
<ref id="bib0375">
<label>75</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Changizi</surname>
<given-names>M.A.</given-names>
</name>
</person-group>
<article-title>The structures of letters and symbols throughout human history are selected to match those found in objects in natural scenes</article-title>
<source>Am. Nat.</source>
<volume>167</volume>
<year>2006</year>
<fpage>E117</fpage>
<lpage>E139</lpage>
<pub-id pub-id-type="pmid">16671005</pub-id>
</element-citation>
</ref>
<ref id="bib0380">
<label>76</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mumford</surname>
<given-names>D.</given-names>
</name>
</person-group>
<article-title>On the computational architecture of the neocortex. II. The role of cortico-cortical loops</article-title>
<source>Biol. Cybern.</source>
<volume>66</volume>
<year>1992</year>
<fpage>241</fpage>
<lpage>251</lpage>
<pub-id pub-id-type="pmid">1540675</pub-id>
</element-citation>
</ref>
<ref id="bib0385">
<label>77</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Dayan</surname>
<given-names>P.</given-names>
</name>
</person-group>
<article-title>The Helmholtz machine</article-title>
<source>Neural Comput.</source>
<volume>7</volume>
<year>1995</year>
<fpage>889</fpage>
<lpage>904</lpage>
<pub-id pub-id-type="pmid">7584891</pub-id>
</element-citation>
</ref>
<ref id="bib0390">
<label>78</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Friston</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Kiebel</surname>
<given-names>S.</given-names>
</name>
</person-group>
<article-title>Predictive coding under the free-energy principle</article-title>
<source>Philos. Trans. R. Soc. Lond. B: Biol. Sci.</source>
<volume>364</volume>
<year>2009</year>
<fpage>1211</fpage>
<lpage>1221</lpage>
<pub-id pub-id-type="pmid">19528002</pub-id>
</element-citation>
</ref>
</ref-list>
<ack>
<title>Acknowledgments</title>
<p>This work was funded by the Wellcome Trust. The authors thank Karl Friston and Marty Sereno for many useful discussions.</p>
</ack>
</back>
<floats-group>
<fig id="fig0005">
<label>Figure 1</label>
<caption>
<p>Visual word recognition in the ventral occipitotemporal cortex (vOT).
<bold>(a)</bold>
The anatomy of vOT and its relation to activation for visual word recognition (red-yellow) shown on the ventral surface of an inflated left hemisphere. vOT is centred on the occipitotemporal sulcus (broken white line) at the transition from the occipital (blue) to the temporal lobe (green).
<bold>(b)</bold>
Examples of simple shape stimuli that are important for recognizing both visual words and objects. Neurons within V2 respond to these types of simple shapes and project to V4, where the cells have more complex receptive fields that respond to combinations of these shapes within a retinotopic reference frame. These in turn project to vOT neurons that have receptive fields with multidimensional tuning functions, where simple shape elements are combined nonlinearly in an object-centred reference frame. Thus, unlike earlier visual areas, it is difficult – if not impossible – to find the optimal stimulus driving a cell using a simple line drawing. Adapted with permission from
<xref rid="bib0255" ref-type="bibr">[51]</xref>
.
<bold>(c)</bold>
A hypothetical example of a complex, object-centred receptive field for a vOT neuron. On the left are three ‘J's of different sizes in different retinal positions. Within early retinotopic areas, each J would be encoded by non-overlapping sets of neurons. By contrast, the receptive field illustrated on the right by a three by three grid of panels provides a more compact, stable object-centred representation. Here, curvature and orientation are plotted recursively within each receptive field region such that it will respond strongly to any combination of a vertical straight line at the top right and a concave-up curved horizontal line at the bottom. Although it is tempting to call this a ‘J-detector’, this would be incorrect – the receptive field responds equally well to the handle of an umbrella or trunk of an elephant but does not respond to the letter j written in script. Reproduced with permission from
<xref rid="bib0260" ref-type="bibr">[52]</xref>
. cs, collateral sulcus; mt, visual motion area; ots, occipitotemporal sulcus; pITG, posterior inferior temporal gyrus; sts, superior temporal sulcus; V1, central field of primary visual cortex; V2, secondary visual cortex; V4v, ventral component of visual area 4.</p>
</caption>
<graphic xlink:href="gr1"></graphic>
</fig>
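
To make the object-centred receptive field described in the Figure 1c caption more concrete, here is a minimal Python sketch. It is not from the article: the part descriptors, coordinate convention and threshold are illustrative assumptions. The unit responds to the conjunction of a straight vertical segment in the upper right of the object and a concave-up horizontal curve at the bottom, in object-relative rather than retinal coordinates, so it fires for a printed 'J', an umbrella handle or an elephant's trunk alike but not for a script j.

# Minimal sketch of a conjunction-style, object-centred receptive field.
# Parts are described in object-relative coordinates (0-1), not retinal ones,
# so translating or rescaling the whole stimulus leaves the response unchanged.

def vot_unit_response(parts):
    """parts: list of dicts with object-relative 'x', 'y', an 'orientation'
    ('vertical'/'horizontal') and a 'curvature' ('straight'/'concave_up').
    Returns 1.0 if the preferred conjunction is present, else 0.0."""
    has_top_right_vertical = any(
        p["x"] > 0.6 and p["y"] > 0.6
        and p["orientation"] == "vertical" and p["curvature"] == "straight"
        for p in parts)
    has_bottom_concave_curve = any(
        p["y"] < 0.4
        and p["orientation"] == "horizontal" and p["curvature"] == "concave_up"
        for p in parts)
    # Nonlinear combination: both parts must be present (logical AND), mirroring
    # the multidimensional tuning described in the caption.
    return 1.0 if (has_top_right_vertical and has_bottom_concave_curve) else 0.0

printed_J = [
    {"x": 0.8, "y": 0.8, "orientation": "vertical", "curvature": "straight"},
    {"x": 0.5, "y": 0.2, "orientation": "horizontal", "curvature": "concave_up"},
]
umbrella_handle = printed_J             # same part structure, different object
script_j = [                            # slanted, curved stroke: no straight vertical
    {"x": 0.7, "y": 0.8, "orientation": "horizontal", "curvature": "concave_up"},
    {"x": 0.5, "y": 0.2, "orientation": "horizontal", "curvature": "concave_up"},
]

print(vot_unit_response(printed_J))        # 1.0
print(vot_unit_response(umbrella_handle))  # 1.0, so this is not a 'J detector'
print(vot_unit_response(script_j))         # 0.0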
<fig id="fig0010">
<label>Figure 2</label>
<caption>
<p>Activation in ventral occipitotemporal cortex (vOT) according to the predictive coding framework. The schematic in
<bold>(a)</bold>
, adapted from
<xref rid="bib0110" ref-type="bibr">[22]</xref>
, outlines the hierarchical architecture that underlies neuronal responses involved in the perception of visual inputs according to the predictive coding framework
<xref rid="bib0110" ref-type="bibr">[22]</xref>
. It shows the putative (pyramidal) cells that send forward driving connections (red) from the supragranular cortical layer; and nonlinear (modulatory) backward connections (black) from the infragranular layer. The backward connections predict the response to the forward connections. Predictions are optimized to minimize prediction error at each level in the hierarchy. Prediction error is the difference between the top-down prediction and the representations being predicted at each level. Prediction errors change the predictions through recurrent neuronal message passing until the error is minimized. Recurrent connectivity between different levels of the hierarchy is optimized by experience and therefore depends on learning (as illustrated by the broken lines between vOT and higher levels). In functional magnetic resonance imaging, activation is a measure of combined neuronal firing from the stimulus, predictions and their prediction error.
<bold>(b)</bold>
Inverted-U shape of activation levels in vOT across three stages of learning. Before learning (stage 1), activation from top-down predictions is precluded because stimuli cannot elicit them (because the appropriate associations have not been learned). This would be the case, for example, in pre-literates and illiterates viewing orthographic stimuli that have no semantic or phonological associations
<xref rid="bib0265" ref-type="bibr">[53]</xref>
or in literates viewing an unknown orthography (e.g. English readers viewing Chinese characters or an artificial orthography)
<xref rid="bib0005" ref-type="bibr">[1]</xref>
. In contrast, vOT activation levels are highest during learning (stage 2), when the stimulus is recognized as potentially meaningful (with semantic or phonological associations) but is not predicted efficiently (high prediction error). An example here would be when subjects view pseudowords (that engage high-level representations) but cannot predict their visual form efficiently
<xref rid="bib0205" ref-type="bibr">[41]</xref>
. With practice, exposure and experience-dependent learning or expertise (stage 3), prediction error decreases and vOT activation declines. The difference between stages 2 and 3 explains why vOT responses are lower for high versus low frequency words
<xref rid="bib0215" ref-type="bibr">[43]</xref>
, real words relative to pseudowords
<xref rid="bib0210" ref-type="bibr">[42]</xref>
and when words are primed by identical words versus pseudowords
<xref rid="bib0225" ref-type="bibr">[45]</xref>
.</p>
</caption>
<graphic xlink:href="gr2"></graphic>
</fig>
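
The recurrent message passing sketched in Figure 2a can also be illustrated in a few lines of code. The following is a minimal sketch in the spirit of predictive coding [22,35], not the authors' implementation; the generative weights, learning rate and noise-free input are illustrative assumptions. A higher-level estimate generates a top-down prediction of the input, the mismatch is passed forward as prediction error, and recurrent updates reduce that error until the input is explained.

import numpy as np

# Toy two-level predictive-coding loop: a higher-level estimate r predicts the
# lower-level input x through a generative mapping W; the prediction error is
# passed forward and drives recurrent updates of r until it is explained away.

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 3))          # backward (generative) mapping: causes -> features
true_causes = np.array([1.0, -0.5, 0.3])
x = W @ true_causes                  # bottom-up visual input (noise-free for clarity)

r = np.zeros(3)                      # initial top-down estimate of the causes
lr = 0.02
for step in range(301):
    prediction = W @ r               # backward, modulatory prediction of the input
    error = x - prediction           # forward-propagated prediction error
    r += lr * W.T @ error            # recurrent update: gradient descent on squared error
    if step % 100 == 0:
        print(f"step {step:3d}  summed |prediction error| = {np.abs(error).sum():.3f}")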
<boxed-text id="tb0010">
<label>Box 1</label>
<caption>
<title>Learning to read and developmental dyslexia</title>
</caption>
<p>Reading involves linking orthography (i.e. written symbols) to phonology (speech sounds) and semantics (meaning). Learning these associations enhances the ability to predict and perceive the defining visual features of symbols that have been learned. For example, letter combinations will be recognized more efficiently when they are familiar and strongly linked to phonology (e.g. WINE) than when they are less familiar (e.g. WINO)
<xref rid="bib0270" ref-type="bibr">[54]</xref>
. At the neural level, learning involves experience-dependent synaptic plasticity, which changes connection strengths and the efficiency of perceptual inference.</p>
<p>According to the Interactive Account of ventral occipitotemporal cortex (vOT) function during reading, top-down predictions are conveyed by backward connections from phonological and semantic areas to vOT (
<xref rid="fig0010" ref-type="fig">Figure 2</xref>
). These top-down predictions are engaged during the early stages of learning to name objects, and when learning to read words or learning a new orthography. The predictions produce prediction errors, which drive learning to improve prediction. In pre-literates, vOT activation is low because orthographic inputs do not trigger appropriate representations in phonological or semantic areas and therefore there are no top-down influences (stage 1 in
<xref rid="fig0010" ref-type="fig">Figure 2</xref>
b). In the early stages of learning to read, vOT activation is high because top-down predictions are engaged imprecisely and it takes longer for the system to suppress prediction errors and identify the word (stage 2 in
<xref rid="fig0010" ref-type="fig">Figure 2</xref>
b). In skilled readers, vOT activation declines because learning improves the predictions, which explain prediction error efficiently (stage 3 in
<xref rid="fig0010" ref-type="fig">Figure 2</xref>
b). In developmental dyslexics, abnormally low vOT activation
<xref rid="bib0275 bib0280 bib0285 bib0290 bib0295 bib0300" ref-type="bibr">[55–60]</xref>
and reduced functional connectivity between vOT and other language areas
<xref rid="bib0305" ref-type="bibr">[61]</xref>
are consistent with failure to establish hierarchical connections and access top-down predictions, perhaps because of a paucity of phonological knowledge (i.e. failure to progress from stage 1 to stage 2 [
<xref rid="fig0010" ref-type="fig">Figure 2</xref>
b]).</p>
<p>This perspective explains the learning-related increases in vOT activation that have been demonstrated in non-reading pre-school children learning the sounds of letters
<xref rid="bib0310" ref-type="bibr">[62]</xref>
, adults learning sounds and meanings in an artificial orthography
<xref rid="bib0005" ref-type="bibr">[1]</xref>
and children improving their overt word reading speed
<xref rid="bib0010" ref-type="bibr">[2]</xref>
. In addition, vOT activation is reduced following visual form learning
<xref rid="bib0005" ref-type="bibr">[1]</xref>
, which demonstrates that learning-related effects are task dependent
<xref rid="bib0005 bib0315" ref-type="bibr">[1,63]</xref>
. The Interactive Account explains these effects in terms of experience-dependent plasticity and the resulting increases and decreases in prediction error (
<xref rid="fig0010" ref-type="fig">Figure 2</xref>
b). The same learning-related principles apply irrespective of whether the stimuli are letters, words or objects
<xref rid="bib0045 bib0105 bib0320 bib0325" ref-type="bibr">[9,21,64,65]</xref>
.</p>
</boxed-text>
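
A minimal simulation can make the three learning stages of Figure 2b and Box 1 concrete. In the sketch below (illustrative, not from the article), vOT activity is proxied by the summed magnitude of the top-down prediction plus its prediction error, or by the raw stimulus drive alone when no associations have been learned; the association weights, delta-rule learning and all numerical values are assumptions made only for illustration. The printed values follow the inverted-U: low before learning, highest while predictions are engaged but imprecise, and low again once the predictions explain the input.

import numpy as np

rng = np.random.default_rng(1)
n_visual, n_assoc = 12, 6
visual = rng.normal(size=n_visual)            # abstracted visuospatial input
assoc = rng.normal(size=n_assoc)              # phonological/semantic pattern

def vot_activity(W, engaged):
    """Proxy for vOT activation under the predictive coding framework."""
    if not engaged:                           # stage 1: no top-down predictions
        return np.abs(visual).sum()
    prediction = W.T @ assoc                  # backward prediction of the visual form
    error = visual - prediction               # prediction error
    return np.abs(prediction).sum() + np.abs(error).sum()

W = 0.5 * rng.normal(size=(n_assoc, n_visual))   # imprecise early associations
print("stage 1 (no associations):      ", round(vot_activity(W, False), 1))
print("stage 2 (imprecise predictions):", round(vot_activity(W, True), 1))

for _ in range(500):                          # experience-dependent learning
    error = visual - W.T @ assoc
    W += 0.01 * np.outer(assoc, error)        # delta rule: reduce prediction error
print("stage 3 (accurate predictions): ", round(vot_activity(W, True), 1))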
<boxed-text id="tb0015">
<label>Box 2</label>
<caption>
<title>Damage to the ventral occipitotemporal cortex and pure alexia</title>
</caption>
<p>Reading impairment is the most notable effect of selective damage to the ventral occipitotemporal cortex (vOT)
<xref rid="bib0015 bib0020 bib0025 bib0030" ref-type="bibr">[3–6]</xref>
. This deficit is typically referred to as ‘pure alexia’ because speech and language abilities remain intact, as does the ability to write words. Most patients with vOT damage also have difficulty naming objects
<xref rid="bib0330 bib0335 bib0340" ref-type="bibr">[66–68]</xref>
, consistent with a generic difficulty linking visual inputs to the language system. Nevertheless, a few patients with vOT damage have been reported with worse reading than naming accuracy
<xref rid="bib0030 bib0345" ref-type="bibr">[6,69]</xref>
. This does not mean that vOT is only necessary for reading, because: (i) accurate object naming following vOT damage has only been reported in patients with mild alexia, which manifests in reading speed rather than reading accuracy; (ii) difficulties with object recognition and naming become apparent when the speed of processing is taken into account
<xref rid="bib0030 bib0350" ref-type="bibr">[6,70]</xref>
; and (iii) better object naming after vOT damage may be supported by post hoc learning-related changes in other brain regions that provide alternative connections from vision to the language system, with these connections being more successful for object recognition than word recognition.</p>
<p>How does the Interactive Account explain why vOT is more critical for reading than for object recognition? According to the Interactive Account, damage to vOT will disconnect forward and backward connections at all levels of the hierarchy (
<xref rid="fig0010" ref-type="fig">Figure 2</xref>
), leading to imprecise perceptual inference. This will have a disproportionate effect on reading because written words comprise the same component parts occurring repeatedly in different, but sometimes highly similar, combinations (e.g. attitude, altitude, aptitude). Object recognition will also be impaired when vOT is disconnected from occipital and higher order language areas, but it will be less impaired than reading when it can proceed on the basis of holistic shape information and a limited number of defining features.</p>
</boxed-text>
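
The combinatorial point in Box 2 can be made concrete with a small calculation: the example words differ from one another by a single letter, so even mildly imprecise perceptual inference can flip one onto another, whereas holistic object shapes rarely have such close competitors. The sketch below is illustrative only; the word set comes from the box, and the positional letter-difference (Hamming) measure is an assumption chosen for simplicity.

from itertools import combinations

# Count positional letter differences among the example words from Box 2.

def letter_differences(a, b):
    return sum(ca != cb for ca, cb in zip(a, b))

words = ["attitude", "altitude", "aptitude"]
for a, b in combinations(words, 2):
    print(f"{a} vs {b}: {letter_differences(a, b)} differing letter(s)")
# Each pair differs in exactly one letter position, so a noisy estimate of one
# mid-word letter is enough to confuse them.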
<boxed-text id="tb0020">
<label>Box 3</label>
<caption>
<title>Neuronal properties of the ventral occipitotemporal cortex</title>
</caption>
<p>Reading-sensitive areas within the ventral occipitotemporal cortex (vOT) lie anterior and lateral to V4 (
<xref rid="fig0005" ref-type="fig">Figure 1</xref>
) and correspond most closely to the ventral posterior inferior temporal cortex in non-human primates
<xref rid="bib0355" ref-type="bibr">[71]</xref>
. Because we are unaware of any single cell neurophysiology data from vOT in humans, we have extrapolated the following three properties of vOT cells from monkey studies. First, individual neurons receive forward afferents from earlier visual fields such as V4 where combinations of simple shapes (forms) such as oriented bars, intersections, angles, arcs and contours are encoded (
<xref rid="fig0005" ref-type="fig">Figure 1</xref>
b)
<xref rid="bib0360" ref-type="bibr">[72]</xref>
. vOT neurons integrate information from these shape elements resulting in complex receptive fields that cannot be characterized solely in terms of example stimuli
<xref rid="bib0365" ref-type="bibr">[73]</xref>
. Second, vOT neurons tend to have large receptive fields that include the fovea
<xref rid="bib0370" ref-type="bibr">[74]</xref>
. As a result, they rely on an object-centred reference frame that provides a measure of independence from retinotopic location (
<xref rid="fig0005" ref-type="fig">Figure 1</xref>
c). This type of multipart, object-centred receptive field provides a compact, efficient representation that is largely insensitive to the specific placement or size of the stimulus on the retina. Third, each cell contributes to the encoding of multiple visual stimuli; there is no one-to-one mapping between neuronal activity and the orthography of words such as letters, bigrams and trigrams. Instead, encoding a visual word is accomplished via a pattern of firing over a population of vOT neurons. Any given neuron participates in multiple patterns, which can include both written words and other visual stimuli such as objects. Note that the opposite need not be true. Not all vOT neurons will contribute to visual word recognition given the limited set of shapes necessary to encode orthographic forms relative to those necessary for encoding natural scenes
<xref rid="bib0375" ref-type="bibr">[75]</xref>
. In other words, a neural population response represents a complex stimulus – be it a word or an object – in terms of its constituent elements.</p>
<p>In summary, vOT neurons are general-purpose analyzers of visual forms and underlie all types of complex visual pattern recognition, not just reading. Even the most selective cells respond to various shape patterns, providing a distributed structural code that is highly generative – that is, different combinations of these coding elements can represent a virtually infinite set of visual objects. Visual experience results in plastic changes that tune the receptive fields to facilitate recognition of the most commonly occurring patterns, but this does not alter their fundamental nature; no cells are ‘recycled’ to become reading-specific
<xref rid="bib0035 bib0040" ref-type="bibr">[7,8]</xref>
. Consequently, reading relies on the same neurophysiological mechanisms as any other form of higher order vision.</p>
</boxed-text>
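
To illustrate the population-code claim in Box 3, the following sketch encodes two words and an object as binary vectors over a shared pool of shape-element 'neurons'. The feature set and stimuli are illustrative assumptions, not data from the article; the point is only that each unit participates in several patterns, several units contribute to both word and object patterns, and identity is carried by the pattern as a whole.

# Illustrative distributed code over shared shape-element 'neurons'.
# Each unit codes a generic visual element; a word or an object is identified
# only by the whole population pattern, never by a single dedicated unit.

elements = ["vertical_bar", "horizontal_bar", "T_junction", "arc", "closed_loop"]

patterns = {
    "word_HI": {"vertical_bar", "horizontal_bar", "T_junction"},
    "word_JO": {"vertical_bar", "arc", "closed_loop"},
    "cup":     {"horizontal_bar", "T_junction", "arc", "closed_loop"},
}

def population_vector(active):
    return [1 if e in active else 0 for e in elements]

for name, active in patterns.items():
    print(f"{name:8s} -> {population_vector(active)}")

for e in elements:
    uses = [name for name, active in patterns.items() if e in active]
    print(f"{e:15s} participates in: {', '.join(uses)}")
# Every unit takes part in more than one pattern, and most contribute to both a
# word and the object: there is no one-to-one mapping between a unit and an
# orthographic form.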
<boxed-text id="tb0025">
<label>Box 4</label>
<caption>
<title>Outstanding questions</title>
</caption>
<p>
<list list-type="simple">
<list-item id="lsti0030">
<label></label>
<p>Where are the anatomical sources of top-down phonological and semantic influences and how do they depend on the task and attentional set?</p>
</list-item>
<list-item id="lsti0035">
<label></label>
<p>What are the anatomical pathways linking higher order association cortices to the ventral occipitotemporal cortex (vOT)?</p>
</list-item>
<list-item id="lsti0040">
<label></label>
<p>Are there paths linking vision to language that bypass vOT, and if so, under what circumstances can these sustain reading?</p>
</list-item>
<list-item id="lsti0045">
<label></label>
<p>What are the temporal dynamics of vOT contributions to reading?</p>
</list-item>
<list-item id="lsti0050">
<label></label>
<p>Do the left and right vOT contribute differentially to visual word and object recognition?</p>
</list-item>
<list-item id="lsti0055">
<label></label>
<p>Can direct single cell neurophysiology of the human vOT differentiate between reading-specific neuronal responses and the domain-general neural properties proposed here?</p>
</list-item>
<list-item id="lsti0060">
<label></label>
<p>Would damage (or transcranial magnetic stimulation) to the sources of backward connections to vOT impair the ability to distinguish words, pseudowords and random letter strings?</p>
</list-item>
</list>
</p>
</boxed-text>
</floats-group>
</pmc>
<affiliations>
<list>
<country>
<li>Royaume-Uni</li>
</country>
<region>
<li>Angleterre</li>
<li>Grand Londres</li>
</region>
<settlement>
<li>Londres</li>
</settlement>
<orgName>
<li>Université de Londres</li>
</orgName>
</list>
<tree>
<country name="Royaume-Uni">
<noRegion>
<name sortKey="Price, Cathy J" sort="Price, Cathy J" uniqKey="Price C" first="Cathy J." last="Price">Cathy J. Price</name>
</noRegion>
<name sortKey="Devlin, Joseph T" sort="Devlin, Joseph T" uniqKey="Devlin J" first="Joseph T." last="Devlin">Joseph T. Devlin</name>
</country>
</tree>
</affiliations>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Pmc/Checkpoint
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 001959 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Pmc/Checkpoint/biblio.hfd -nk 001959 | SxmlIndent | more

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Pmc
   |étape=   Checkpoint
   |type=    RBID
   |clé=     PMC:3223525
   |texte=   The Interactive Account of ventral occipitotemporal contributions to reading
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Pmc/Checkpoint/RBID.i   -Sk "pubmed:21549634" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Pmc/Checkpoint/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024