Exploration server on haptic devices

Warning: this site is under development!
Warning: this site is generated automatically from raw corpora.
The information has therefore not been validated.

Decoding Visual Object Categories in Early Somatosensory Cortex

Internal identifier: 001333 (Pmc/Checkpoint); previous: 001332; next: 001334

Decoding Visual Object Categories in Early Somatosensory Cortex

Authors: Fraser W. Smith [United Kingdom, Canada]; Melvyn A. Goodale [Canada]

Source:

RBID: PMC:4380001

Abstract

Neurons, even in the earliest sensory areas of cortex, are subject to a great deal of contextual influence from both within and across modality connections. In the present work, we investigated whether the earliest regions of somatosensory cortex (S1 and S2) would contain content-specific information about visual object categories. We reasoned that this might be possible due to the associations formed through experience that link different sensory aspects of a given object. Participants were presented with visual images of different object categories in 2 fMRI experiments. Multivariate pattern analysis revealed reliable decoding of familiar visual object category in bilateral S1 (i.e., postcentral gyri) and right S2. We further show that this decoding is observed for familiar but not unfamiliar visual objects in S1. In addition, whole-brain searchlight decoding analyses revealed several areas in the parietal lobe that could mediate the observed context effects between vision and somatosensation. These results demonstrate that even the first cortical stages of somatosensory processing carry information about the category of visually presented familiar objects.


Url:
DOI: 10.1093/cercor/bht292
PubMed: 24122136
PubMed Central: 4380001



The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Decoding Visual Object Categories in Early Somatosensory Cortex</title>
<author>
<name sortKey="Smith, Fraser W" sort="Smith, Fraser W" uniqKey="Smith F" first="Fraser W." last="Smith">Fraser W. Smith</name>
<affiliation wicri:level="1">
<nlm:aff id="af1">
<addr-line>School of Psychology</addr-line>
,
<institution>University of East Anglia</institution>
,
<addr-line>Norwich NR4 7TJ</addr-line>
,
<country>UK</country>
</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="af2">
<addr-line>The Brain and Mind Institute</addr-line>
,
<institution>University of Western Ontario,</institution>
<addr-line>London, ON</addr-line>
,
<country>Canada</country>
<addr-line>N6A 5B7</addr-line>
</nlm:aff>
<country xml:lang="fr">Canada</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Goodale, Melvyn A" sort="Goodale, Melvyn A" uniqKey="Goodale M" first="Melvyn A." last="Goodale">Melvyn A. Goodale</name>
<affiliation wicri:level="1">
<nlm:aff id="af2">
<addr-line>The Brain and Mind Institute</addr-line>
,
<institution>University of Western Ontario,</institution>
<addr-line>London, ON</addr-line>
,
<country>Canada</country>
<addr-line>N6A 5B7</addr-line>
</nlm:aff>
<country xml:lang="fr">Canada</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">24122136</idno>
<idno type="pmc">4380001</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4380001</idno>
<idno type="RBID">PMC:4380001</idno>
<idno type="doi">10.1093/cercor/bht292</idno>
<date when="2013">2013</date>
<idno type="wicri:Area/Pmc/Corpus">001C29</idno>
<idno type="wicri:Area/Pmc/Curation">001C29</idno>
<idno type="wicri:Area/Pmc/Checkpoint">001333</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Decoding Visual Object Categories in Early Somatosensory Cortex</title>
<author>
<name sortKey="Smith, Fraser W" sort="Smith, Fraser W" uniqKey="Smith F" first="Fraser W." last="Smith">Fraser W. Smith</name>
<affiliation wicri:level="1">
<nlm:aff id="af1">
<addr-line>School of Psychology</addr-line>
,
<institution>University of East Anglia</institution>
,
<addr-line>Norwich NR4 7TJ</addr-line>
,
<country>UK</country>
</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="af2">
<addr-line>The Brain and Mind Institute</addr-line>
,
<institution>University of Western Ontario,</institution>
<addr-line>London, ON</addr-line>
,
<country>Canada</country>
<addr-line>N6A 5B7</addr-line>
</nlm:aff>
<country xml:lang="fr">Canada</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Goodale, Melvyn A" sort="Goodale, Melvyn A" uniqKey="Goodale M" first="Melvyn A." last="Goodale">Melvyn A. Goodale</name>
<affiliation wicri:level="1">
<nlm:aff id="af2">
<addr-line>The Brain and Mind Institute</addr-line>
,
<institution>University of Western Ontario,</institution>
<addr-line>London, ON</addr-line>
,
<country>Canada</country>
<addr-line>N6A 5B7</addr-line>
</nlm:aff>
<country xml:lang="fr">Canada</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
</analytic>
<series>
<title level="j">Cerebral Cortex (New York, NY)</title>
<idno type="ISSN">1047-3211</idno>
<idno type="eISSN">1460-2199</idno>
<imprint>
<date when="2013">2013</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>Neurons, even in the earliest sensory areas of cortex, are subject to a great deal of contextual influence from both within and across modality connections. In the present work, we investigated whether the earliest regions of somatosensory cortex (S1 and S2) would contain content-specific information about visual object categories. We reasoned that this might be possible due to the associations formed through experience that link different sensory aspects of a given object. Participants were presented with visual images of different object categories in 2 fMRI experiments. Multivariate pattern analysis revealed reliable decoding of familiar visual object category in bilateral S1 (i.e., postcentral gyri) and right S2. We further show that this decoding is observed for familiar but not unfamiliar visual objects in S1. In addition, whole-brain searchlight decoding analyses revealed several areas in the parietal lobe that could mediate the observed context effects between vision and somatosensation. These results demonstrate that even the first cortical stages of somatosensory processing carry information about the category of visually presented familiar objects.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Amunts, K" uniqKey="Amunts K">K Amunts</name>
</author>
<author>
<name sortKey="Malikovic, A" uniqKey="Malikovic A">A Malikovic</name>
</author>
<author>
<name sortKey="Mohlberg, H" uniqKey="Mohlberg H">H Mohlberg</name>
</author>
<author>
<name sortKey="Schormann, T" uniqKey="Schormann T">T Schormann</name>
</author>
<author>
<name sortKey="Zilles, K" uniqKey="Zilles K">K Zilles</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Baler, B" uniqKey="Baler B">B Baler</name>
</author>
<author>
<name sortKey="Kleinschmidt, A" uniqKey="Kleinschmidt A">A Kleinschmidt</name>
</author>
<author>
<name sortKey="Muller, Ng" uniqKey="Muller N">NG Muller</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Calvert, Ga" uniqKey="Calvert G">GA Calvert</name>
</author>
<author>
<name sortKey="Bullmore, Et" uniqKey="Bullmore E">ET Bullmore</name>
</author>
<author>
<name sortKey="Brammer, Mj" uniqKey="Brammer M">MJ Brammer</name>
</author>
<author>
<name sortKey="Campbell, R" uniqKey="Campbell R">R Campbell</name>
</author>
<author>
<name sortKey="Williams, Scr" uniqKey="Williams S">SCR Williams</name>
</author>
<author>
<name sortKey="Mcquire, Pk" uniqKey="Mcquire P">PK McQuire</name>
</author>
<author>
<name sortKey="Woodruff, Pwr" uniqKey="Woodruff P">PWR Woodruff</name>
</author>
<author>
<name sortKey="Iversen, Sd" uniqKey="Iversen S">SD Iversen</name>
</author>
<author>
<name sortKey="David, As" uniqKey="David A">AS David</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Chang, Cc" uniqKey="Chang C">CC Chang</name>
</author>
<author>
<name sortKey="Lin, Cj" uniqKey="Lin C">CJ Lin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Chen, Y" uniqKey="Chen Y">Y Chen</name>
</author>
<author>
<name sortKey="Namburi, P" uniqKey="Namburi P">P Namburi</name>
</author>
<author>
<name sortKey="Elliott, Lt" uniqKey="Elliott L">LT Elliott</name>
</author>
<author>
<name sortKey="Heinzle, J" uniqKey="Heinzle J">J Heinzle</name>
</author>
<author>
<name sortKey="Soon, Cs" uniqKey="Soon C">CS Soon</name>
</author>
<author>
<name sortKey="Chee, Mw" uniqKey="Chee M">MW Chee</name>
</author>
<author>
<name sortKey="Haynes, Jd" uniqKey="Haynes J">JD Haynes</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Clark, A" uniqKey="Clark A">A Clark</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Coutanche, Mn" uniqKey="Coutanche M">MN Coutanche</name>
</author>
<author>
<name sortKey="Thompson Schill, Sl" uniqKey="Thompson Schill S">SL Thompson-Schill</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Damasio, Ar" uniqKey="Damasio A">AR Damasio</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Driver, J" uniqKey="Driver J">J Driver</name>
</author>
<author>
<name sortKey="Noesselt, T" uniqKey="Noesselt T">T Noesselt</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Duda, Ro" uniqKey="Duda R">RO Duda</name>
</author>
<author>
<name sortKey="Hart, Pe" uniqKey="Hart P">PE Hart</name>
</author>
<author>
<name sortKey="Stork, Dg" uniqKey="Stork D">DG Stork</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Eickhoff, Sb" uniqKey="Eickhoff S">SB Eickhoff</name>
</author>
<author>
<name sortKey="Amunts, K" uniqKey="Amunts K">K Amunts</name>
</author>
<author>
<name sortKey="Mohlberg, H" uniqKey="Mohlberg H">H Mohlberg</name>
</author>
<author>
<name sortKey="Zilles, K" uniqKey="Zilles K">K Zilles</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Eickhoff, Sb" uniqKey="Eickhoff S">SB Eickhoff</name>
</author>
<author>
<name sortKey="Schleicher, A" uniqKey="Schleicher A">A Schleicher</name>
</author>
<author>
<name sortKey="Zilles, K" uniqKey="Zilles K">K Zilles</name>
</author>
<author>
<name sortKey="Amunts, K" uniqKey="Amunts K">K Amunts</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Eickhoff, Sb" uniqKey="Eickhoff S">SB Eickhoff</name>
</author>
<author>
<name sortKey="Stephan, Ke" uniqKey="Stephan K">KE Stephan</name>
</author>
<author>
<name sortKey="Mohlberg, H" uniqKey="Mohlberg H">H Mohlberg</name>
</author>
<author>
<name sortKey="Grefkes, C" uniqKey="Grefkes C">C Grefkes</name>
</author>
<author>
<name sortKey="Fink, Gr" uniqKey="Fink G">GR Fink</name>
</author>
<author>
<name sortKey="Amunts, K" uniqKey="Amunts K">K Amunts</name>
</author>
<author>
<name sortKey="Zilles, K" uniqKey="Zilles K">K Zilles</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ethofer, T" uniqKey="Ethofer T">T Ethofer</name>
</author>
<author>
<name sortKey="Van De Ville, D" uniqKey="Van De Ville D">D Van De Ville</name>
</author>
<author>
<name sortKey="Scherer, K" uniqKey="Scherer K">K Scherer</name>
</author>
<author>
<name sortKey="Vuilleumier, P" uniqKey="Vuilleumier P">P Vuilleumier</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Etzel, Ja" uniqKey="Etzel J">JA Etzel</name>
</author>
<author>
<name sortKey="Gazzola, V" uniqKey="Gazzola V">V Gazzola</name>
</author>
<author>
<name sortKey="Keysers, C" uniqKey="Keysers C">C Keysers</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gallivan, Jp" uniqKey="Gallivan J">JP Gallivan</name>
</author>
<author>
<name sortKey="Mcclean, Da" uniqKey="Mcclean D">DA McClean</name>
</author>
<author>
<name sortKey="Flanagan, Jr" uniqKey="Flanagan J">JR Flanagan</name>
</author>
<author>
<name sortKey="Culham, Jc" uniqKey="Culham J">JC Culham</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gallivan, Jp" uniqKey="Gallivan J">JP Gallivan</name>
</author>
<author>
<name sortKey="Mcclean, Da" uniqKey="Mcclean D">DA McClean</name>
</author>
<author>
<name sortKey="Smith, Fw" uniqKey="Smith F">FW Smith</name>
</author>
<author>
<name sortKey="Culham, Jc" uniqKey="Culham J">JC Culham</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gallivan, Jp" uniqKey="Gallivan J">JP Gallivan</name>
</author>
<author>
<name sortKey="Mcclean, Da" uniqKey="Mcclean D">DA McClean</name>
</author>
<author>
<name sortKey="Valyear, Kf" uniqKey="Valyear K">KF Valyear</name>
</author>
<author>
<name sortKey="Pettypiece, Ce" uniqKey="Pettypiece C">CE Pettypiece</name>
</author>
<author>
<name sortKey="Culham, Jc" uniqKey="Culham J">JC Culham</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Geyer, S" uniqKey="Geyer S">S Geyer</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Geyer, S" uniqKey="Geyer S">S Geyer</name>
</author>
<author>
<name sortKey="Ledberg, A" uniqKey="Ledberg A">A Ledberg</name>
</author>
<author>
<name sortKey="Schleicher, A" uniqKey="Schleicher A">A Schleicher</name>
</author>
<author>
<name sortKey="Kinomura, S" uniqKey="Kinomura S">S Kinomura</name>
</author>
<author>
<name sortKey="Schormann, T" uniqKey="Schormann T">T Schormann</name>
</author>
<author>
<name sortKey="Burgel, U" uniqKey="Burgel U">U Burgel</name>
</author>
<author>
<name sortKey="Klingberg, T" uniqKey="Klingberg T">T Klingberg</name>
</author>
<author>
<name sortKey="Larsson, J" uniqKey="Larsson J">J Larsson</name>
</author>
<author>
<name sortKey="Zilles, K" uniqKey="Zilles K">K Zilles</name>
</author>
<author>
<name sortKey="Roland, Pe" uniqKey="Roland P">PE Roland</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Harrison, Sa" uniqKey="Harrison S">SA Harrison</name>
</author>
<author>
<name sortKey="Tong, F" uniqKey="Tong F">F Tong</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hsiao, S" uniqKey="Hsiao S">S Hsiao</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Huang, Rs" uniqKey="Huang R">RS Huang</name>
</author>
<author>
<name sortKey="Sereno, Mi" uniqKey="Sereno M">MI Sereno</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kamitani, Y" uniqKey="Kamitani Y">Y Kamitani</name>
</author>
<author>
<name sortKey="Tong, F" uniqKey="Tong F">F Tong</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Keysers, C" uniqKey="Keysers C">C Keysers</name>
</author>
<author>
<name sortKey="Kaas, Jh" uniqKey="Kaas J">JH Kaas</name>
</author>
<author>
<name sortKey="Gazzola, V" uniqKey="Gazzola V">V Gazzola</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kriegeskorte, N" uniqKey="Kriegeskorte N">N Kriegeskorte</name>
</author>
<author>
<name sortKey="Goebel, R" uniqKey="Goebel R">R Goebel</name>
</author>
<author>
<name sortKey="Bandettini, P" uniqKey="Bandettini P">P Bandettini</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kosslyn, Sm" uniqKey="Kosslyn S">SM Kosslyn</name>
</author>
<author>
<name sortKey="Ganis, G" uniqKey="Ganis G">G Ganis</name>
</author>
<author>
<name sortKey="Thompson, Wl" uniqKey="Thompson W">WL Thompson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lancaster, Jl" uniqKey="Lancaster J">JL Lancaster</name>
</author>
<author>
<name sortKey="Tordesillas, D" uniqKey="Tordesillas D">D Tordesillas</name>
</author>
<author>
<name sortKey="Martinez, M" uniqKey="Martinez M">M Martinez</name>
</author>
<author>
<name sortKey="Salinas, F" uniqKey="Salinas F">F Salinas</name>
</author>
<author>
<name sortKey="Evans, E" uniqKey="Evans E">E Evans</name>
</author>
<author>
<name sortKey="Zilles, K" uniqKey="Zilles K">K Zilles</name>
</author>
<author>
<name sortKey="Mazziota, Jc" uniqKey="Mazziota J">JC Mazziota</name>
</author>
<author>
<name sortKey="Fox, Pt" uniqKey="Fox P">PT Fox</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mcintosh, Ar" uniqKey="Mcintosh A">AR McIntosh</name>
</author>
<author>
<name sortKey="Cabeza, Re" uniqKey="Cabeza R">RE Cabeza</name>
</author>
<author>
<name sortKey="Lobaugh, Nj" uniqKey="Lobaugh N">NJ Lobaugh</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Meyer, K" uniqKey="Meyer K">K Meyer</name>
</author>
<author>
<name sortKey="Damasio, A" uniqKey="Damasio A">A Damasio</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Meyer, K" uniqKey="Meyer K">K Meyer</name>
</author>
<author>
<name sortKey="Kaplan, Jt" uniqKey="Kaplan J">JT Kaplan</name>
</author>
<author>
<name sortKey="Essex, R" uniqKey="Essex R">R Essex</name>
</author>
<author>
<name sortKey="Webber, C" uniqKey="Webber C">C Webber</name>
</author>
<author>
<name sortKey="Damasio, H" uniqKey="Damasio H">H Damasio</name>
</author>
<author>
<name sortKey="Damasio, A" uniqKey="Damasio A">A Damasio</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Meyer, K" uniqKey="Meyer K">K Meyer</name>
</author>
<author>
<name sortKey="Kaplan, Jt" uniqKey="Kaplan J">JT Kaplan</name>
</author>
<author>
<name sortKey="Essex, R" uniqKey="Essex R">R Essex</name>
</author>
<author>
<name sortKey="Damasio, H" uniqKey="Damasio H">H Damasio</name>
</author>
<author>
<name sortKey="Damasio, Ar" uniqKey="Damasio A">AR Damasio</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Milner, Ad" uniqKey="Milner A">AD Milner</name>
</author>
<author>
<name sortKey="Goodale, Ma" uniqKey="Goodale M">MA Goodale</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Misaki, M" uniqKey="Misaki M">M Misaki</name>
</author>
<author>
<name sortKey="Kim, Y" uniqKey="Kim Y">Y Kim</name>
</author>
<author>
<name sortKey="Bandettini, Pa" uniqKey="Bandettini P">PA Bandettini</name>
</author>
<author>
<name sortKey="Kriegeskorte, N" uniqKey="Kriegeskorte N">N Kriegeskorte</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Morrison, I" uniqKey="Morrison I">I Morrison</name>
</author>
<author>
<name sortKey="Tipper, Sp" uniqKey="Tipper S">SP Tipper</name>
</author>
<author>
<name sortKey="Fenton Adams, Wl" uniqKey="Fenton Adams W">WL Fenton-Adams</name>
</author>
<author>
<name sortKey="Bach, P" uniqKey="Bach P">P Bach</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Muckli, L" uniqKey="Muckli L">L Muckli</name>
</author>
<author>
<name sortKey="Petro, Ls" uniqKey="Petro L">LS Petro</name>
</author>
<author>
<name sortKey="Smith, Fw" uniqKey="Smith F">FW Smith</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nelson, Aj" uniqKey="Nelson A">AJ Nelson</name>
</author>
<author>
<name sortKey="Chen, R" uniqKey="Chen R">R Chen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nestor, A" uniqKey="Nestor A">A Nestor</name>
</author>
<author>
<name sortKey="Plaut, Dc" uniqKey="Plaut D">DC Plaut</name>
</author>
<author>
<name sortKey="Behrmann, M" uniqKey="Behrmann M">M Behrmann</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Op De Beeck, Hp" uniqKey="Op De Beeck H">HP Op De Beeck</name>
</author>
<author>
<name sortKey="Torfs, K" uniqKey="Torfs K">K Torfs</name>
</author>
<author>
<name sortKey="Wagemans, J" uniqKey="Wagemans J">J Wagemans</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Pereira, F" uniqKey="Pereira F">F Pereira</name>
</author>
<author>
<name sortKey="Botvinick, M" uniqKey="Botvinick M">M Botvinick</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rorden, C" uniqKey="Rorden C">C Rorden</name>
</author>
<author>
<name sortKey="Karnath, Ho" uniqKey="Karnath H">HO Karnath</name>
</author>
<author>
<name sortKey="Bonilla, L" uniqKey="Bonilla L">L Bonilla</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rossit, S" uniqKey="Rossit S">S Rossit</name>
</author>
<author>
<name sortKey="Mcadam, T" uniqKey="Mcadam T">T McAdam</name>
</author>
<author>
<name sortKey="Mclean, Da" uniqKey="Mclean D">DA Mclean</name>
</author>
<author>
<name sortKey="Goodale, Ma" uniqKey="Goodale M">MA Goodale</name>
</author>
<author>
<name sortKey="Culham, Jc" uniqKey="Culham J">JC Culham</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ruben, J" uniqKey="Ruben J">J Ruben</name>
</author>
<author>
<name sortKey="Schwiemann, J" uniqKey="Schwiemann J">J Schwiemann</name>
</author>
<author>
<name sortKey="Deuchert, M" uniqKey="Deuchert M">M Deuchert</name>
</author>
<author>
<name sortKey="Meyer, R" uniqKey="Meyer R">R Meyer</name>
</author>
<author>
<name sortKey="Krause, T" uniqKey="Krause T">T Krause</name>
</author>
<author>
<name sortKey="Curio, G" uniqKey="Curio G">G Curio</name>
</author>
<author>
<name sortKey="Villringer, K" uniqKey="Villringer K">K Villringer</name>
</author>
<author>
<name sortKey="Kurth, R" uniqKey="Kurth R">R Kurth</name>
</author>
<author>
<name sortKey="Villringer, A" uniqKey="Villringer A">A Villringer</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sakata, H" uniqKey="Sakata H">H Sakata</name>
</author>
<author>
<name sortKey="Tatra, M" uniqKey="Tatra M">M Tatra</name>
</author>
<author>
<name sortKey="Kusunoki, M" uniqKey="Kusunoki M">M Kusunoki</name>
</author>
<author>
<name sortKey="Murata, A" uniqKey="Murata A">A Murata</name>
</author>
<author>
<name sortKey="Tanaka, Y" uniqKey="Tanaka Y">Y Tanaka</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Smith, Fw" uniqKey="Smith F">FW Smith</name>
</author>
<author>
<name sortKey="Muckli, L" uniqKey="Muckli L">L Muckli</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stokes, M" uniqKey="Stokes M">M Stokes</name>
</author>
<author>
<name sortKey="Thompson, R" uniqKey="Thompson R">R Thompson</name>
</author>
<author>
<name sortKey="Cusack, R" uniqKey="Cusack R">R Cusack</name>
</author>
<author>
<name sortKey="Duncan, J" uniqKey="Duncan J">J Duncan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Van Westen, D" uniqKey="Van Westen D">D Van Westen</name>
</author>
<author>
<name sortKey="Fransson, P" uniqKey="Fransson P">P Fransson</name>
</author>
<author>
<name sortKey="Olsrud, J" uniqKey="Olsrud J">J Olsrud</name>
</author>
<author>
<name sortKey="Rosen, B" uniqKey="Rosen B">B Rosen</name>
</author>
<author>
<name sortKey="Lundborg, G" uniqKey="Lundborg G">G Lundborg</name>
</author>
<author>
<name sortKey="Larsson, E" uniqKey="Larsson E">E Larsson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Vetter, P" uniqKey="Vetter P">P Vetter</name>
</author>
<author>
<name sortKey="Smith, Fw" uniqKey="Smith F">FW Smith</name>
</author>
<author>
<name sortKey="Muckli, L" uniqKey="Muckli L">L Muckli</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Walther, Db" uniqKey="Walther D">DB Walther</name>
</author>
<author>
<name sortKey="Caddigan, E" uniqKey="Caddigan E">E Caddigan</name>
</author>
<author>
<name sortKey="Fei Fei, L" uniqKey="Fei Fei L">L Fei-Fei</name>
</author>
<author>
<name sortKey="Beck, Dm" uniqKey="Beck D">DM Beck</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Walther, Db" uniqKey="Walther D">DB Walther</name>
</author>
<author>
<name sortKey="Chai, B" uniqKey="Chai B">B Chai</name>
</author>
<author>
<name sortKey="Caddigan, E" uniqKey="Caddigan E">E Caddigan</name>
</author>
<author>
<name sortKey="Beck, Dm" uniqKey="Beck D">DM Beck</name>
</author>
<author>
<name sortKey="Fei Fei, L" uniqKey="Fei Fei L">L Fei-Fei</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zangenehpour, S" uniqKey="Zangenehpour S">S Zangenehpour</name>
</author>
<author>
<name sortKey="Zatorre, Rj" uniqKey="Zatorre R">RJ Zatorre</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zhou, Y" uniqKey="Zhou Y">Y Zhou</name>
</author>
<author>
<name sortKey="Fuster, Jm" uniqKey="Fuster J">JM Fuster</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">Cereb Cortex</journal-id>
<journal-id journal-id-type="iso-abbrev">Cereb. Cortex</journal-id>
<journal-id journal-id-type="publisher-id">cercor</journal-id>
<journal-id journal-id-type="hwp">cercor</journal-id>
<journal-title-group>
<journal-title>Cerebral Cortex (New York, NY)</journal-title>
</journal-title-group>
<issn pub-type="ppub">1047-3211</issn>
<issn pub-type="epub">1460-2199</issn>
<publisher>
<publisher-name>Oxford University Press</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">24122136</article-id>
<article-id pub-id-type="pmc">4380001</article-id>
<article-id pub-id-type="doi">10.1093/cercor/bht292</article-id>
<article-id pub-id-type="publisher-id">bht292</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Articles</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Decoding Visual Object Categories in Early Somatosensory Cortex</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Smith</surname>
<given-names>Fraser W.</given-names>
</name>
<xref ref-type="aff" rid="af1">1</xref>
<xref ref-type="aff" rid="af2">2</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Goodale</surname>
<given-names>Melvyn A.</given-names>
</name>
<xref ref-type="aff" rid="af2">2</xref>
</contrib>
<aff id="af1">
<label>1</label>
<addr-line>School of Psychology</addr-line>
,
<institution>University of East Anglia</institution>
,
<addr-line>Norwich NR4 7TJ</addr-line>
,
<country>UK</country>
</aff>
<aff id="af2">
<label>2</label>
<addr-line>The Brain and Mind Institute</addr-line>
,
<institution>University of Western Ontario,</institution>
<addr-line>London, ON</addr-line>
,
<country>Canada</country>
<addr-line>N6A 5B7</addr-line>
</aff>
</contrib-group>
<author-notes>
<corresp>Address correspondence to Dr Fraser W. Smith, School of Psychology, LSB Building, University of East Anglia, Norwich Research Park, Norwich NR4 7TJ, UK. Email:
<email>fraser.smith@uea.ac.uk</email>
</corresp>
</author-notes>
<pub-date pub-type="ppub">
<month>4</month>
<year>2015</year>
</pub-date>
<pub-date pub-type="epub">
<day>11</day>
<month>10</month>
<year>2013</year>
</pub-date>
<pub-date pub-type="pmc-release">
<day>11</day>
<month>10</month>
<year>2013</year>
</pub-date>
<pmc-comment> PMC Release delay is 0 months and 0 days and was based on the . </pmc-comment>
<volume>25</volume>
<issue>4</issue>
<fpage>1020</fpage>
<lpage>1031</lpage>
<permissions>
<copyright-statement>© The Author 2013. Published by Oxford University Press.</copyright-statement>
<copyright-year>2013</copyright-year>
<license license-type="creative-commons" xlink:href="http://creativecommons.org/licenses/by-nc/3.0/">
<license-p>This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (
<ext-link ext-link-type="uri" xlink:href="http://creativecommons.org/licenses/by-nc/3.0/">http://creativecommons.org/licenses/by-nc/3.0/</ext-link>
), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original work is properly cited. For commercial re-use, please contact journals.permissions@oup.com</license-p>
</license>
</permissions>
<self-uri content-type="pdf" xlink:type="simple" xlink:href="bht292.pdf"></self-uri>
<abstract>
<p>Neurons, even in the earliest sensory areas of cortex, are subject to a great deal of contextual influence from both within and across modality connections. In the present work, we investigated whether the earliest regions of somatosensory cortex (S1 and S2) would contain content-specific information about visual object categories. We reasoned that this might be possible due to the associations formed through experience that link different sensory aspects of a given object. Participants were presented with visual images of different object categories in 2 fMRI experiments. Multivariate pattern analysis revealed reliable decoding of familiar visual object category in bilateral S1 (i.e., postcentral gyri) and right S2. We further show that this decoding is observed for familiar but not unfamiliar visual objects in S1. In addition, whole-brain searchlight decoding analyses revealed several areas in the parietal lobe that could mediate the observed context effects between vision and somatosensation. These results demonstrate that even the first cortical stages of somatosensory processing carry information about the category of visually presented familiar objects.</p>
</abstract>
<kwd-group>
<kwd>multisensory</kwd>
<kwd>multivoxel pattern analysis</kwd>
<kwd>posterior parietal cortex</kwd>
<kwd>S1</kwd>
<kwd>S2</kwd>
</kwd-group>
</article-meta>
</front>
<body>
<sec sec-type="intro" id="s1">
<title>Introduction</title>
<p>It is well known that most input to neurons even in early sensory areas of cortex (i.e., primary sensory areas) comes from other cortical neurons (i.e., local or long-range connections). These connections provide a way for prior experience and context to shape responses of early sensory neurons. We recently showed (
<xref rid="BHT292C44" ref-type="bibr">Smith and Muckli 2010</xref>
) that such connections transmit content-specific information about natural visual scenes to nonstimulated subregions of primary visual cortex. Early sensory neurons, however, are also subjected to contextual influences from other sensory modalities via feedback from multisensory areas and direct links between early sensory cortices of different modalities (see
<xref rid="BHT292C9" ref-type="bibr">Driver and Noesselt 2008</xref>
). Here, we investigated whether different visual images of common object categories would be reliably discriminated in early somatosensory cortex despite participants having no haptic interaction whatsoever with those visual stimuli during the experiment. We expected that this might be possible due to associative links that are formed through experience between different sensory aspects of specific objects (e.g., the sight and haptic experiences associated with wine glasses; see
<xref rid="BHT292C8" ref-type="bibr">Damasio 1989</xref>
;
<xref rid="BHT292C30" ref-type="bibr">Meyer and Damasio 2009</xref>
).</p>
<p>Support for the above hypothesis is provided by several lines of evidence: 1) early sensory areas are subject to modulatory influences from other modalities (e.g.,
<xref rid="BHT292C3" ref-type="bibr">Calvert et al. 1997</xref>
;
<xref rid="BHT292C29" ref-type="bibr">McIntosh et al. 1998</xref>
;
<xref rid="BHT292C9" ref-type="bibr">Driver and Noesselt 2008</xref>
), 2) silent but sound-evoking movies (e.g., a movie of a piano key being struck) can be discriminated in early auditory cortex (
<xref rid="BHT292C31" ref-type="bibr">Meyer et al. 2010</xref>
), 3) natural auditory scenes (e.g., traffic vs. people talking) presented to blindfolded participants can be discriminated in early visual cortex (
<xref rid="BHT292C47" ref-type="bibr">Vetter et al. 2011</xref>
), and 4) visual stimuli presented in isolation lead to selective firing in primary somatosensory cortex after previous association with tactile stimulation (
<xref rid="BHT292C51" ref-type="bibr">Zhou and Fuster 2000</xref>
). Thus, there is some evidence for the hypothesis that cross-modal context can trigger content-specific representation in early sensory areas. In the present experiment, we ask whether the same is true with respect to object categories in the domain of vision and touch.
<xref rid="BHT292C32" ref-type="bibr">Meyer et al. (2011)</xref>
addressed a related point. These authors presented visual movies of hands exploring real objects and successfully discriminated such movies in primary somatosensory cortex. In contrast, here we only ever presented visual images of object categories, and no haptic interactions were ever depicted in our visual stimuli. Hence, the present experiments test whether the observation of haptic interaction is required in order to observe such cross-modal context effects from vision to touch. In addition, in the present work, we explicitly test whether such effects are present only for visual object categories for which we have a significant degree of visuo-haptic experience.</p>
<p>In Experiment 1, we presented participants with visual images of object categories for which they would tend to have rich visuo-haptic experience (wine glasses, mobile phones, and apples; see Fig.
<xref ref-type="fig" rid="BHT292F1">1</xref>
<italic>A</italic>
and
<ext-link ext-link-type="uri" xlink:href="http://cercor.oxfordjournals.org/lookup/suppl/doi:10.1093/cercor/bht292/-/DC1">Supplementary Fig. 1</ext-link>
<italic>A</italic>
) in a block design fMRI experiment. We used common object categories since we expected cross-modal connections to transmit contextual information maximally about specific object categories when participants have rich multisensory experience with those objects. Participants fixated and performed an orthogonal color-change detection task at fixation. We used a linear classifier to investigate whether or not the 3 categories of visual object could be discriminated from the fMRI activity in primary somatosensory cortex (see, e.g.,
<xref rid="BHT292C24" ref-type="bibr">Kamitani and Tong 2005</xref>
;
<xref rid="BHT292C44" ref-type="bibr">Smith and Muckli 2010</xref>
). If the decoding accuracy were significantly greater than chance then we could conclude that cross-modal connections from vision to the earliest cortical stages of somatosensory processing transmit content-specific information about object categories, even in the absence of any haptic interaction with those object categories during the experiment. In Experiment 2, we replicated Experiment 1, but also included a set of unfamiliar objects (modified from
<xref rid="BHT292C38" ref-type="bibr">Op De Beeck et al. 2008</xref>
) to explicitly test whether any observed cross-modal context effects were due to prior visuo-haptic experience with familiar objects. In addition, in Experiment 2, we removed any need for a button-press response from the participant, in order to eliminate any potential interactions with our effects of interest.
<fig id="BHT292F1" position="float">
<label>Figure 1.</label>
<caption>
<p>Visual object categories and exemplars. (
<italic>A</italic>
) The 3 different familiar visual object categories (rows: wine glasses, mobile phones, and apples) and individual exemplars (columns) shown to participants in Experiments 1 and 2. (
<italic>B</italic>
) The 3 different unfamiliar object categories (cubies, smoothies, and spikies; modified from
<xref rid="BHT292C38" ref-type="bibr">Op De Beeck et al. 2008</xref>
) that were shown to participants in Experiment 2. To see a full color version of this figure, please see
<ext-link ext-link-type="uri" xlink:href="http://cercor.oxfordjournals.org/lookup/suppl/doi:10.1093/cercor/bht292/-/DC1">Supplementary Material</ext-link>
.</p>
</caption>
<graphic xlink:href="bht29201"></graphic>
</fig>
</p>
</sec>
<sec sec-type="materials|methods" id="s2">
<title>Materials and Methods</title>
<sec id="s2a">
<title>Subjects</title>
<p>A total of 8 right-handed (via self-report) participants took part in Experiment 1 (6 females) and a new set of 10 right-handed (via self-report) participants took part in Experiment 2 (6 females). All participants gave written, informed consent in accordance with procedures approved by the ethics committee of the Department of Psychology at the University of Western Ontario.</p>
</sec>
<sec id="s2b">
<title>Stimuli and Design</title>
<p>Participants were presented with full-color images of visual objects in a block design protocol. There were 9 images in Experiment 1, comprising 3 different exemplars of each of 3 different categories of visual object (wine glasses, mobile phones, and apples; see Fig.
<xref ref-type="fig" rid="BHT292F1">1</xref>
<italic>A</italic>
and
<ext-link ext-link-type="uri" xlink:href="http://cercor.oxfordjournals.org/lookup/suppl/doi:10.1093/cercor/bht292/-/DC1">Supplementary Fig. 1</ext-link>
<italic>A</italic>
). We chose these visual object categories since most participants would tend to have rich haptic experience with such objects. Each stimulus was superimposed upon a white background of 14 × 14° of visual angle. The maximum horizontal and vertical extent of any stimulus was ∼10.5° and 12°, respectively.</p>
<p>Each run comprised 18 stimulation blocks (12 s), each preceded and followed by blocks of fixation (12 s). Thus, each visual image was shown twice within 1 run, with the sequence being randomly determined. Each run hence lasted 444 s (an initial 12 s of fixation plus 18 × 24 s of stimulation and fixation). Within each 12-s block of stimulus presentation, the stimulus was flashed on and off repeatedly (i.e., on 200 ms, off 200 ms, repeated 30 times; we used this presentation cycle to maximize signal-to-noise, see
<xref rid="BHT292C44" ref-type="bibr">Smith and Muckli 2010</xref>
). Participants were instructed to maintain fixation throughout each run on a small central fixation checkerboard (0.4°). The task of the participant was to monitor the stream of frames for a change in color of the fixation marker (randomly chosen frames were changed from black and white to red and green). Participants responded to such a change with a button press with their left (i.e., nondominant) hand. Note that this task is orthogonal to the category of visual object presented. Participants completed between 4 and 6 runs of the main experiment.</p>
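To make the run timing concrete, the following is a minimal sketch (not the authors' presentation code; category and exemplar names are placeholders) of how one Experiment 1 run could be assembled and its 444-s length checked:

```python
# Minimal sketch (not the authors' code) of one Experiment 1 run: 9 images
# (3 categories x 3 exemplars), each shown twice per run in random order,
# in 12-s stimulation blocks alternating with 12-s fixation blocks.
import random

images = [(cat, ex) for cat in ("wine_glass", "phone", "apple") for ex in range(1, 4)]
blocks = images * 2              # each image appears twice per run -> 18 stimulation blocks
random.shuffle(blocks)           # randomly determined sequence

run = [("fixation", 12.0)]       # initial 12 s of fixation
for cat, ex in blocks:
    # within each block the image flickers: 200 ms on, 200 ms off, 30 cycles = 12 s
    run.append((f"{cat}_{ex}", 12.0))
    run.append(("fixation", 12.0))

assert sum(duration for _, duration in run) == 444.0   # 12 + 18 * (12 + 12) = 444 s
```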
<p>In an independent finger-mapping experiment, we localized the cortical representation of the first 4 digits of the right (dominant) hand in the primary and secondary somatosensory cortices (see, e.g.,
<xref rid="BHT292C42" ref-type="bibr">Ruben et al. 2001</xref>
; Van
<xref rid="BHT292C46" ref-type="bibr">Westen et al. 2004</xref>
;
<xref rid="BHT292C23" ref-type="bibr">Huang and Sereno 2007</xref>
;
<xref rid="BHT292C36" ref-type="bibr">Nelson and Chen 2008</xref>
). Note that no mention of this part of the experiment, which was run at the very end of the session, was ever made to participants until all the visual object runs had been completed. We stimulated the pads of the first 4 digits of the right hand in a random sequence as in
<xref rid="BHT292C23" ref-type="bibr">Huang and Sereno (2007)</xref>
in a traditional block design (alternating blocks of 12 s stimulation vs. baseline). We chose this localizer design after pilot testing revealed that it reliably activated both primary and secondary somatosensory cortices. There were 18 stimulation blocks in each run, each preceded and followed by 12 s of baseline, thus giving a total run length of 444 s. Small plastic clamps, which could be driven by air pressure to displace the skin surface, were placed around the pad of each of the first 4 fingers (somatosensory stimulus system, 4D Neuroimaging). The amplitude and timing of stimulus delivery were controlled by a pneumatic delivery system (300 ms off, 100 ms on, within each 12-s stimulation block), itself controlled via in-house software written in Matlab. Participants were required to close their eyes for the duration of each finger-mapping run. Each participant completed either 2 or 3 runs of this localizer.</p>
<p>In Experiment 2, participants were presented with the same objects as those in Experiment 1 (i.e., familiar objects: wine glasses, mobile phones, and apples) as well as a matched set of unfamiliar objects (Fig.
<xref ref-type="fig" rid="BHT292F1">1</xref>
<italic>B</italic>
and
<ext-link ext-link-type="uri" xlink:href="http://cercor.oxfordjournals.org/lookup/suppl/doi:10.1093/cercor/bht292/-/DC1">Supplementary Fig. 1</ext-link>
<italic>B</italic>
). The unfamiliar objects were included to explicitly test to what extent prior visuo-haptic experience with the object categories might mediate decoding from visual images in somatosensory cortex. There were 3 categories of unfamiliar object (modified from
<xref rid="BHT292C38" ref-type="bibr">Op De Beeck et al. 2008</xref>
): cubies, smoothies and spikies (see Fig.
<xref ref-type="fig" rid="BHT292F1">1</xref>
<italic>B</italic>
and
<ext-link ext-link-type="uri" xlink:href="http://cercor.oxfordjournals.org/lookup/suppl/doi:10.1093/cercor/bht292/-/DC1">Supplementary Fig. 1</ext-link>
<italic>B</italic>
) and 3 instances of each category to match the design with the familiar objects. In a block design, each object (9 familiar plus 9 unfamiliar) was presented for 12 s, being preceded and followed by 12 s fixation. Order of object presentation was random and participants each completed 10 runs. The task in this experiment was altered in order to remove any requirement for a finger-based button response—hence, participants again fixated centrally but here simply had to count the number of fixation color changes present in each run (randomly sampled from 7 to 17). In addition, we again performed a finger mapping experiment as in Experiment 1 but, this time, we mapped the digits of both hands independently (left and right). The remaining details were the same as Experiment 1.</p>
</sec>
<sec id="s2c">
<title>MRI Data Acquisition</title>
<p>Visual images were rear-projected via an LCD projector (Silent Vision 6011, screen resolution of 1024 × 768, Avotec) onto a screen mounted behind the participant as he or she lay in the bore of the magnet. MRI data were collected with a 3T Siemens Tim Trio System with a 32-channel head coil and parallel imaging techniques (IPAT factor: 2) at the Robarts Research Institute, London, Ontario. Blood oxygen level–dependent (BOLD) signals were measured with an echo-planar imaging sequence (echo time: 30 ms, repetition time: 2000 ms, field of view: 200 mm, flip angle: 75°, 34 oblique slices, in-plane resolution 2.5 × 2.5 mm, slice thickness of 2.8 mm, no gap). Slices were positioned to cover the entire brain except anterior frontal and temporal regions (see Fig.
<xref ref-type="fig" rid="BHT292F5">5</xref>
). A high-resolution 3D anatomical scan (3D MPRAGE, 1 × 1 × 1 mm
<sup>3</sup>
resolution) was recorded in the same session as the functional scans. All experimental runs (either 4 or 5) plus 2 or 3 runs of the finger mapping localizer were completed in the same imaging session in Experiment 1. In Experiment 2, experimental runs (10) were completed in a first session followed by finger mapping runs in a second session.</p>
</sec>
<sec id="s2d">
<title>MRI Data Processing</title>
<p>Functional data for each main experimental run were slice-time corrected, corrected for 3D motion, linearly detrended, and spatially normalized into Talairach space with Brain Voyager QX (Brain Innovation, Maastricht, The Netherlands). Note that we did not apply temporal filtering to the data used for pattern classification (see, e.g.,
<xref rid="BHT292C24" ref-type="bibr">Kamitani and Tong 2005</xref>
;
<xref rid="BHT292C48" ref-type="bibr">Walther et al. 2009</xref>
). We then normalized the time series of each voxel by its mean value, independently per run. Activity patterns were obtained by averaging the fMRI BOLD activity during each 12-s stimulation block (shifted for 4 s or 2 TRs in order to account for the hemodynamic lag) independently for each voxel. These are the estimates that we use as the input to the pattern classification algorithm described below. The functional localizer data were subject to the standard processing stream above including temporal filtering (2 cycles) and subsequently analyzed by using a GLM approach with 1 stimulus predictor coding stimulus onset convolved with a double gamma model of the hemodynamic response. We used
<italic>t</italic>
-values defined from the contrast of stimulation minus fixation to determine “finger-pad-sensitive” voxels in each of our predefined anatomical masks of S1 and S2. For the mapping data of Experiment 1, we selected the top 100 voxels responding to contralateral stimulation (left PCG), or when pooled across hemispheres, and the top 50 voxels responding to ipsilateral stimulation (right PCG), since we would expect only smaller subregions to be active in this case. For the mapping data of Experiment 2, we selected the top 100 voxels within each region corresponding to either (a) contralateral stimulation or (b) both contra- and ipsilateral stimulation (top 50 for each).</p>
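As a rough illustration of the pattern-estimation steps just described (per-run mean normalization, block averaging with a 2-TR hemodynamic shift, and selection of the most finger-pad-sensitive voxels), the following NumPy sketch uses assumed array names and is not the authors' actual pipeline:

```python
# Illustrative sketch (assumed inputs, not the authors' pipeline) of pattern estimation.
import numpy as np

def block_patterns(run_ts, block_onsets_tr, block_len_tr=6, shift_tr=2):
    """run_ts: (time, voxels) preprocessed time series of one run.
    Returns one pattern per block: the mean BOLD over the 12-s block
    (6 TRs at TR = 2 s), shifted by 2 TRs (4 s) for the hemodynamic lag."""
    ts = run_ts / run_ts.mean(axis=0, keepdims=True)   # normalize each voxel by its run mean
    patterns = []
    for onset in block_onsets_tr:
        start = onset + shift_tr
        patterns.append(ts[start:start + block_len_tr].mean(axis=0))
    return np.vstack(patterns)

def select_top_voxels(t_values, mask, n=100):
    """Keep the n voxels with the highest localizer t-values (stimulation minus
    fixation) inside an anatomical mask such as the PCG (both arrays 1D)."""
    idx = np.where(mask)[0]
    return idx[np.argsort(t_values[idx])[::-1][:n]]
```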
</sec>
<sec id="s2e">
<title>Anatomical Mask of S1</title>
<p>The primary somatosensory cortex, also known as S1 or the postcentral gyrus (PCG), is a complex structure consisting of 4 separate areas (see, e.g.,
<xref rid="BHT292C25" ref-type="bibr">Keysers et al. 2010</xref>
): areas 3a, 3b, 1, and 2. Area 3a deals primarily with proprioceptive information, area 3b with tactile information, area 1 with a second level of tactile analysis, and finally area 2 combines information from the other areas (3a, 3b, and 1), thus being the first level at which tactile and proprioceptive information is combined. Thus, defining an anatomical mask allows us to provide our pattern classification algorithms with the entire repertoire of information that is processed in primary somatosensory cortex (see
<xref rid="BHT292C32" ref-type="bibr">Meyer et al. 2011</xref>
). It further allows us to directly compare our results to those of
<xref rid="BHT292C32" ref-type="bibr">Meyer et al. (2011)</xref>
who likewise defined an anatomical mask of S1. Anatomical masks of both the left and the right PCG were defined for each participant on their anatomical MRI in Talairach space (we used
<italic>MRIcron</italic>
for this purpose,
<xref rid="BHT292C40" ref-type="bibr">Rorden et al. 2007</xref>
). We defined the masks in Talairach space so they would be directly comparable across participants and in the same space as our S2 masks (see below). In defining our masks, we used the same criteria as
<xref rid="BHT292C32" ref-type="bibr">Meyer et al. (2011)</xref>
: the lateroinferior border was taken to be the last axial slice upon which the corpus callosum was not visible. From anterior to posterior, the masks were defined by the floors of the central and postcentral sulci, and they did not extend to the medial wall. Example masks are shown for a representative participant in Figure
<xref ref-type="fig" rid="BHT292F2">2</xref>
. The size of each mask is reported in
<ext-link ext-link-type="uri" xlink:href="http://cercor.oxfordjournals.org/lookup/suppl/doi:10.1093/cercor/bht292/-/DC1">Supplementary Tables 1 and 2</ext-link>
.
<fig id="BHT292F2" position="float">
<label>Figure 2.</label>
<caption>
<p>Anatomical masks of the postcentral gyri for a representative participant. The numbers in white refer to the slices through the
<italic>Z</italic>
-plane. The white box in the lower right image depicts the slices of the brain on which the PCG was marked (see Materials and Methods).</p>
</caption>
<graphic xlink:href="bht29202"></graphic>
</fig>
</p>
</sec>
<sec id="s2f">
<title>Probabilistic Maps of S2</title>
<p>The human secondary somatosensory cortices (S2) are located along the parietal opercula and consist of at least 4 cytoarchitectonic subregions (OP1–4;
<xref rid="BHT292C12" ref-type="bibr">Eickhoff et al. 2006A</xref>
,
<xref rid="BHT292C11" ref-type="bibr">2006B</xref>
). In order to define anatomical masks of S2 we summed the probabilistic maps of OP1–OP4 given in the Juelich Anatomy toolbox (
<xref rid="BHT292C13" ref-type="bibr">Eickhoff et al. 2005</xref>
). We used the 30% probability cutoff for each map since, when summed together across maps, this gave numbers of voxels comparable to those in our anatomical masks of S1. The probability maps are defined in the MNI single-subject space (COLIN27) with an affine correction in the
<italic>y</italic>
and
<italic>z</italic>
axes to make the Anterior Commissure the exact center of the coordinate system (so-called MNI Anatomical Space; see
<xref rid="BHT292C13" ref-type="bibr">Eickhoff et al. 2005</xref>
). In order to transform the coordinates from MNI space to TAL space (the reference space used in Brain Voyager and hence the present work), we applied the MNI to TAL transformation given in (
<xref rid="BHT292C28" ref-type="bibr">Lancaster et al. (2007</xref>
); icbm_fsl2tal.m downloaded from
<uri xlink:type="simple" xlink:href="http://brainmap.org/icbm2tal/">http://brainmap.org/icbm2tal/</uri>
) and shown to generate relatively good agreement between MNI and TAL space across the brain volume. Thus, we obtained one left S2 and one right S2 anatomical mask from these procedures that were subsequently used for each individual participant.</p>
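One reading of this mask-construction procedure is sketched below with assumed inputs (probability volumes for OP1–OP4 exported from the Anatomy toolbox): each map is thresholded at 30% and the results are combined; the MNI-to-Talairach step is only indicated in a comment, not implemented.

```python
# Minimal sketch (assumed inputs) of building a combined S2 mask from the OP1-OP4
# probabilistic maps: voxels are kept where any map reaches the 30% cutoff.
import numpy as np

def s2_mask(op_maps, cutoff=0.30):
    """op_maps: list of 4 probability volumes (OP1-OP4) in the same space,
    values in [0, 1]. Returns a boolean mask of the combined S2 region."""
    stacked = np.stack(op_maps)                # shape (4, x, y, z)
    return (stacked >= cutoff).any(axis=0)     # union of the thresholded maps

# The resulting mask would then be brought from MNI space into Talairach space
# (e.g., via the icbm_fsl2tal transform of Lancaster et al. 2007) before being
# applied to the functional data.
```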
</sec>
<sec id="s2g">
<title>Additional ROIs</title>
<p>We used the same procedure, as outlined above for S2, to define probabilistic anatomical masks of V1, premotor, and motor cortex in each hemisphere (again from the Juelich Anatomy toolbox,
<xref rid="BHT292C13" ref-type="bibr">Eickhoff et al. 2005</xref>
; see
<xref rid="BHT292C1" ref-type="bibr">Amunts et al. 2000</xref>
;
<xref rid="BHT292C20" ref-type="bibr">Geyer et al. 1996</xref>
; and
<xref rid="BHT292C19" ref-type="bibr">Geyer 2003</xref>
for the cytoarchitectonic maps corresponding to V1, primary motor cortex, and premotor cortex, respectively).</p>
</sec>
<sec id="s2h">
<title>Multivariate Pattern Classification Analysis</title>
<p>We trained a linear classifier (linear support vector machine, SVM) to learn the mapping between a set of multivariate observations of brain activity and the particular visual object category that had been presented (3 categories). We then tested the classifier on an “independent” set of test data. We trained the classifiers with a set of single-block brain activity patterns and tested them on single-block activity patterns for each stimulus class in the independent set of test data. To assess the performance of our classifiers in Experiment 1, we used a leave-one-example-out cross-validation procedure: thus, our models were trained on all examples minus one for each class, and then tested on the remaining example in each class (see, e.g.,
<xref rid="BHT292C10" ref-type="bibr">Duda et al. 2001</xref>
;
<xref rid="BHT292C14" ref-type="bibr">Ethofer et al. 2009</xref>
;
<xref rid="BHT292C21" ref-type="bibr">Harrison and Tong 2009</xref>
). As we used a traditional block design (12 s on, 12 s off), this ensures that there is no significant influence of one trial on the next in terms of BOLD signal estimates, unlike in a rapid event-related design (see
<xref rid="BHT292C34" ref-type="bibr">Misaki et al. 2010</xref>
). Thus, a leave-one-example-out procedure is justified in the present case (see also
<xref rid="BHT292C39" ref-type="bibr">Pereira and Botvinick 2011</xref>
). In order to specify the sequence of cross-validation cycles (there are many possible here), we sorted the trials from each class (24 or 30, respectively) in terms of temporal occurrence and then left out the
<italic>n</italic>
th trial from each class (see also
<xref rid="BHT292C15" ref-type="bibr">Etzel et al. 2008</xref>
;
<xref rid="BHT292C21" ref-type="bibr">Harrison and Tong 2009</xref>
). This ensures that the test data were independent across cross-validation cycles, as would be the case in leave-one-run-out cross-validation schemes (see, e.g.,
<xref rid="BHT292C24" ref-type="bibr">Kamitani and Tong 2005</xref>
;
<xref rid="BHT292C44" ref-type="bibr">Smith and Muckli 2010</xref>
). We tested whether group level decoding accuracy was above chance by performing 1-sample
<italic>t</italic>
-tests against the chance level of 1/3 (see, e.g.,
<xref rid="BHT292C48" ref-type="bibr">Walther et al. 2009</xref>
,
<xref rid="BHT292C49" ref-type="bibr">2011</xref>
;
<xref rid="BHT292C5" ref-type="bibr">Chen et al. 2011</xref>
). In Experiment 2, we report our results with an even stronger test of the classifier's ability to generalize: leave-one-run-out cross-validation performance (see
<xref rid="BHT292C7" ref-type="bibr">Coutanche and Thompson-Schill 2012</xref>
, for the advantage of having more runs to cross-validate over, even with the same number of events).</p>
<p>The linear SVM algorithm was implemented using the LIBSVM toolbox (
<xref rid="BHT292C4" ref-type="bibr">Chang and Lin 2011</xref>
), with default parameters (notably
<italic>C</italic>
= 1). Note that the activity of each voxel in the training data was normalized within a range of −1 to 1 prior to input to the SVM. The test data were normalized using the same parameters (min, max) as obtained from the training set normalization in order to optimize the classification performance (see
<xref rid="BHT292C4" ref-type="bibr">Chang and Lin 2011</xref>
).</p>
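For concreteness, the following is an illustrative re-creation of this decoding scheme in Python with scikit-learn (the authors used the Matlab interface to LIBSVM); the array names and the per-class leave-one-out bookkeeping are assumptions made for the sketch:

```python
# Illustrative sketch (scikit-learn, not the authors' Matlab/LIBSVM code) of the
# 3-class decoding scheme: min-max scaling to [-1, 1] is fit on the training
# patterns only and then applied to the held-out test patterns.
import numpy as np
from scipy import stats
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

def decode_leave_one_example_out(patterns, labels):
    """patterns: (n_blocks, n_voxels) block-averaged activity; labels: (n_blocks,)
    category codes for the 3 classes, assumed sorted by temporal occurrence within
    each class. On each cycle, the nth example of every class is held out."""
    labels = np.asarray(labels)
    per_class = [np.where(labels == c)[0] for c in np.unique(labels)]
    n_folds = min(len(idx) for idx in per_class)
    correct = []
    for n in range(n_folds):
        test = np.array([idx[n] for idx in per_class])
        train = np.setdiff1d(np.arange(len(labels)), test)
        scaler = MinMaxScaler(feature_range=(-1, 1)).fit(patterns[train])
        clf = SVC(kernel="linear", C=1.0).fit(scaler.transform(patterns[train]), labels[train])
        pred = clf.predict(scaler.transform(patterns[test]))
        correct.extend(pred == labels[test])
    return float(np.mean(correct))

# Group level: one-sample t-test of per-participant accuracies against chance (1/3).
def group_level_test(participant_accuracies, chance=1.0 / 3.0):
    return stats.ttest_1samp(participant_accuracies, chance)
```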
</sec>
<sec id="s2i">
<title>Searchlight Analysis</title>
<p>We additionally performed a whole-brain searchlight analysis (
<xref rid="BHT292C26" ref-type="bibr">Kriegeskorte et al. 2006</xref>
;
<xref rid="BHT292C48" ref-type="bibr">Walther et al. 2009</xref>
;
<xref rid="BHT292C37" ref-type="bibr">Nestor et al. 2011</xref>
;
<xref rid="BHT292C39" ref-type="bibr">Pereira and Botvinick 2011</xref>
). We used the SearchMight toolbox (
<xref rid="BHT292C39" ref-type="bibr">Pereira and Botvinick 2011</xref>
), implemented in Matlab, to perform whole-brain decoding with a linear SVM (
<italic>C</italic>
= 1, as above). This involved selecting a specific searchlight size, here a cube of 343 (7 × 7 × 7) voxels of 2 mm
<sup>3</sup>
each, systematically shifting this cube throughout the whole-brain volume, and performing the decoding analysis independently at each center voxel position. These analyses were performed independently for each participant, using a common group mask, and the resulting decoding accuracy maps were then averaged across participants. Statistical significance was assessed by testing whether the mean accuracy across participants was significantly higher than chance (1/3) at each voxel (see also
<xref rid="BHT292C48" ref-type="bibr">Walther et al. 2009</xref>
,
<xref rid="BHT292C49" ref-type="bibr">2011</xref>
;
<xref rid="BHT292C5" ref-type="bibr">Chen et al. 2011</xref>
). Correction for multiple comparisons was ensured by use of a cluster threshold estimated by the program AlphaSim (B.D. Ward,
<uri xlink:type="simple" xlink:href="http://afni.nimh.nih.gov/afni/docpdf/AlphaSim.pdf">http://afni.nimh.nih.gov/afni/docpdf/AlphaSim.pdf</uri>
): Experiment 1 (
<italic>t</italic>
> 3.5, voxelwise
<italic>P</italic>
< 0.01, cluster
<italic>P</italic>
< 0.05, equal to 35 voxels), Experiment 2 (
<italic>t</italic>
> 3.25, voxelwise
<italic>P</italic>
< 0.01; cluster
<italic>P</italic>
< 0.05; equal to 37 voxels and 33 voxels for familiar and unfamiliar maps, respectively), Experiments 1 and 2 pooled (
<italic>t</italic>
> 2.9, voxelwise
<italic>P</italic>
< 0.01; cluster
<italic>P</italic>
< 0.05, equal to 64 voxels). Results are shown projected onto the cortical surface reconstruction of a reference brain (COLIN27) available in the NeuroElf package (
<uri xlink:type="simple" xlink:href="www.neuroelf.net">www.neuroelf.net</uri>
).</p>
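<p>Schematically, the searchlight and the cluster-extent correction can be summarized as in the sketch below. This is a minimal illustration, not the SearchMight toolbox used in the study: the function names, array shapes, and scikit-learn calls are assumptions, the per-fold voxel scaling described above is omitted for brevity, and the minimum cluster size is taken as given (in the study it was estimated with AlphaSim).</p>
<preformat>
# Minimal searchlight sketch (not the SearchMight toolbox): a 7 x 7 x 7 voxel cube is
# centred on every voxel of a brain mask and a linear SVM (C = 1) is trained and
# tested on the voxels it contains, with leave-one-run-out cross-validation.
import numpy as np
from scipy import ndimage
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

def searchlight_accuracy(data, mask, labels, runs, radius=3):
    """data: 4-D array (x, y, z, trials); mask: 3-D boolean brain mask;
    labels: one category label per trial; runs: run index per trial."""
    nx, ny, nz, _ = data.shape
    acc_map = np.full((nx, ny, nz), np.nan)
    clf = SVC(kernel="linear", C=1.0)
    logo = LeaveOneGroupOut()
    for x, y, z in zip(*np.nonzero(mask)):
        sl = (slice(max(x - radius, 0), x + radius + 1),
              slice(max(y - radius, 0), y + radius + 1),
              slice(max(z - radius, 0), z + radius + 1))
        cube = mask[sl]
        X = data[sl][cube].T                              # trials x voxels in the cube
        scores = cross_val_score(clf, X, labels, groups=runs, cv=logo)
        acc_map[x, y, z] = scores.mean()                  # accuracy stored at the centre voxel
    return acc_map

def cluster_threshold(t_map, t_crit, min_cluster_size):
    """Keep above-threshold voxels only if they belong to a cluster of at least
    min_cluster_size voxels (the size itself comes from a Monte Carlo estimate)."""
    above = t_map > t_crit
    cluster_labels, n_clusters = ndimage.label(above)
    sizes = ndimage.sum(above, cluster_labels, index=np.arange(1, n_clusters + 1))
    keep = np.isin(cluster_labels, np.flatnonzero(sizes >= min_cluster_size) + 1)
    return np.where(keep, t_map, np.nan)
</preformat>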
</sec>
</sec>
<sec sec-type="results" id="s3">
<title>Results</title>
<p>We defined anatomical masks of the bilateral postcentral gyrus (PCG) in each individual participant (see Fig.
<xref ref-type="fig" rid="BHT292F2">2</xref>
and Materials and Methods). Figure
<xref ref-type="fig" rid="BHT292F3">3</xref>
<italic>A</italic>
shows the cross-validated performance of the linear classifier (linear SVM) in decoding visual object category (3AFC) from the PCG. Remarkably, decoding of visual object category significantly exceeded chance in each case: right PCG mean decoding accuracy 40%,
<italic>t</italic>
<sub>(7)</sub>
= 3.68,
<italic>P</italic>
= 0.004 (chance 1/3); left PCG 36.8%,
<italic>t</italic>
<sub>(7)</sub>
= 2.18,
<italic>P</italic>
= 0.03 (chance 1/3); and pooled 37.5%,
<italic>t</italic>
<sub>(7)</sub>
= 2.71,
<italic>P</italic>
= 0.015 (chance 1/3). Decoding accuracy was significantly higher in the right as opposed to the left PCG (
<italic>t</italic>
<sub>(7)</sub>
= 2.61,
<italic>P</italic>
= 0.035). Thus, the earliest stages of somatosensory cortex, that is, the PCG, carry information about different categories of visual object presented to participants. The full confusion matrices underlying this performance can be seen in
<ext-link ext-link-type="uri" xlink:href="http://cercor.oxfordjournals.org/lookup/suppl/doi:10.1093/cercor/bht292/-/DC1">Supplementary Figure 2</ext-link>
. Crucially, univariate signal changes did not differ across the 3 object categories in the PCG (
<ext-link ext-link-type="uri" xlink:href="http://cercor.oxfordjournals.org/lookup/suppl/doi:10.1093/cercor/bht292/-/DC1">Supplementary Fig. 3 and Supplementary Text</ext-link>
).
<fig id="BHT292F3" position="float">
<label>Figure 3.</label>
<caption>
<p>Decoding of visual object categories: Experiment 1. (
<italic>A</italic>
) Cross-validated 3AFC mean decoding performance for the right and left postcentral gyri independently and pooled across hemispheres (error bars show one standard error of the mean across participants). Double stars:
<italic>P</italic>
< 0.0167, single star:
<italic>P</italic>
< 0.05. (
<italic>B</italic>
) The same when using voxels that were responsive to finger-pad stimulation of the right-hand finger-pads. (C–F) As in
<italic>A</italic>
but for several additional, anatomically defined, regions of interest.</p>
</caption>
<graphic xlink:href="bht29203"></graphic>
</fig>
</p>
<p>We additionally performed an independent tactile finger mapping experiment where we localized the cortical representation of the first 4 digits (thumb to ring finger) of the right hand (see
<ext-link ext-link-type="uri" xlink:href="http://cercor.oxfordjournals.org/lookup/suppl/doi:10.1093/cercor/bht292/-/DC1">Supplementary Fig. 4</ext-link>
). This allowed us to define voxels that were maximally sensitive to tactile stimulation of the fingertips in the right hand in each hemisphere (note that we would of course expect a smaller number in the right hemisphere—ipsilateral to tactile stimulation). We used these data to select a subset of those voxels present in the anatomical PCG masks that showed high sensitivity to such tactile stimulation (see Materials and Methods). Cross-validated mean decoding accuracy was again above chance for each hemisphere independently and pooled across hemispheres (see Fig.
<xref ref-type="fig" rid="BHT292F3">3</xref>
<italic>B</italic>
: left PCG 37%,
<italic>t</italic>
<sub>(7)</sub>
= 2.01,
<italic>P</italic>
= 0.042 (chance 1/3); right PCG 39%,
<italic>t</italic>
<sub>(7)</sub>
= 3.36,
<italic>P</italic>
= 0.006; pooled 39%,
<italic>t</italic>
<sub>(7)</sub>
= 2.02,
<italic>P</italic>
= 0.042), although there was no difference in decoding across hemispheres (
<italic>P</italic>
= 0.26). Thus, visual object category could still be decoded reliably above chance even when restricting the classifier only to voxels that show high responses to tactile stimulation of the right-hand finger pads.</p>
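<p>Schematically, this kind of voxel pre-selection can be sketched as follows, assuming a per-voxel localizer statistic (e.g., a <italic>t</italic>-value for responses to finger-pad stimulation) is available for every voxel of the anatomical mask; the function name and the choice of keeping a fixed number of voxels are illustrative assumptions.</p>
<preformat>
# Hypothetical sketch: restrict an anatomical ROI to its most tactile-responsive
# voxels (largest localizer statistic) before running the decoding analysis.
import numpy as np

def select_tactile_voxels(roi_data, localizer_t, n_voxels=100):
    """roi_data: trials x voxels within the anatomical mask;
    localizer_t: one localizer t-value per voxel, in the same voxel order."""
    n_keep = min(n_voxels, localizer_t.size)
    top = np.argsort(localizer_t)[::-1][:n_keep]   # indices of the most responsive voxels
    return roi_data[:, top], top
</preformat>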
<p>What about higher order somatosensory brain regions? We defined anatomical masks of S2 on the basis of probabilistic maps (
<xref rid="BHT292C13" ref-type="bibr">Eickhoff et al. 2005</xref>
). Figure
<xref ref-type="fig" rid="BHT292F3">3</xref>
<italic>C</italic>
 shows mean decoding accuracy independently in left and right S2, and pooled across hemispheres. Decoding accuracy was reliably above chance in right S2: mean decoding accuracy 37% (
<italic>t</italic>
<sub>(7)</sub>
= 3.18,
<italic>P</italic>
= 0.008, chance = 1/3), and when pooled 38% (
<italic>t</italic>
<sub>(7)</sub>
= 2.84,
<italic>P</italic>
= 0.0125, chance = 1/3) but not in left S2: 35% (
<italic>t</italic>
<sub>(7)</sub>
= 0.76,
<italic>P</italic>
= 0.235, chance = 1/3, NS). There was no difference in decoding performance across hemispheres (
<italic>P</italic>
= 0.54). Thus, in agreement with our results in S1, S2 contains information that discriminates between categories of visual objects. We also repeated the analysis in S2 when preselecting voxels that had high responses to tactile stimulation of the right-hand fingertips (see Materials and Methods) and found reliable decoding only when pooled across hemispheres (mean 39.8%,
<italic>t</italic>
<sub>(7)</sub>
= 2.28,
<italic>P</italic>
= 0.028). Thus, in agreement with our analyses on S1, S2 also contains information about visual object categories.</p>
<sec id="s3a">
<title>Additional Regions</title>
<p>We also defined anatomical masks of the left and right motor and premotor cortices. Decoding of visual objects never reached significance in any of these ROIs (Fig.
<xref ref-type="fig" rid="BHT292F3">3</xref>
<italic>E</italic>
,
<italic>F</italic>
, all
<italic>P</italic>
's > 0.05). In V1, however, as expected, decoding accuracy was very high and robustly greater than chance (Fig.
<xref ref-type="fig" rid="BHT292F3">3</xref>
<italic>D</italic>
: right V1: 97%,
<italic>t</italic>
<sub>(7)</sub>
= 41.03,
<italic>P</italic>
< 0.0001; left V1: 98%,
<italic>t</italic>
<sub>(7)</sub>
= 80.74,
<italic>P</italic>
< 0.0001; pooled: 98%,
<italic>t</italic>
<sub>(7)</sub>
= 70.33,
<italic>P</italic>
< 0.0001). Furthermore, we ran a 2-way RM ANOVA (ROI: PCG, S2, V1, premotor, motor; side: right, left, pooled) to test whether decoding performance differed across different regions of interest. This analysis revealed only a main effect of ROI,
<italic>F</italic>
<sub>4,28</sub>
= 379.02,
<italic>P</italic>
< 0.0001. Follow-up tests demonstrated that decoding performance was significantly higher in V1 than in each other region (all
<italic>t</italic>
's > 22.2, all
<italic>P</italic>
's < 0.0001). No other effects reached significance although there was a trend for higher decoding in PCG than in premotor cortex (
<italic>P</italic>
= 0.0785).</p>
</sec>
<sec id="s3b">
<title>Within-Category Decoding in V1 and PCG</title>
<p>Finally, in order to further probe the nature of the representations present in the PCG we investigated within-category decoding within the PCG and V1 (i.e., discriminating between different images of the same object category—e.g., the 3 different images of apples). If the decoding within the PCG is due to differences in grasping/motor affordances for each object category then we would not expect to find any within-category decoding in this region, since such affordances would be equal within a category. Within V1, in contrast, we would expect to be able to decode the different images within a category (as well as between categories). We found, as expected, that within-category decoding was highly significant within each subregion of V1 (right: 59%,
<italic>t</italic>
<sub>(7)</sub>
= 4.55,
<italic>P</italic>
= .0013; left: 57%,
<italic>t</italic>
<sub>(7)</sub>
= 5.23,
<italic>P</italic>
= 0.0006; pooled: 60%,
<italic>t</italic>
<sub>(7)</sub>
= 5.07,
<italic>P</italic>
= 0.0007; chance = 33%). Crucially, however, such decoding was never significant within the PCG (all
<italic>P</italic>
's > 0.05, all means ≤ 33%) and moreover, between-category decoding (38%) was reliably stronger than within-category decoding (33%):
<italic>F</italic>
<sub>1,7</sub>
= 5.69,
<italic>P</italic>
= 0.048, in this region. Note that no reliable within-category decoding was found for any individual object category (i.e., wine glasses, mobile phones, or apples) in any region of the PCG (left, right, or pooled; see
<ext-link ext-link-type="uri" xlink:href="http://cercor.oxfordjournals.org/lookup/suppl/doi:10.1093/cercor/bht292/-/DC1">Supplementary Fig. 5</ext-link>
. In addition, as expected, within-category decoding (59%) was significantly weaker than between-category decoding (98%) in V1 (
<italic>F</italic>
<sub>1,7</sub>
= 78.280,
<italic>P</italic>
< 0.001). Hence, our results demonstrate that information is present within the PCG that can discriminate between but not within different visual object categories. For a full depiction of within-category decoding within the PCG, see
<ext-link ext-link-type="uri" xlink:href="http://cercor.oxfordjournals.org/lookup/suppl/doi:10.1093/cercor/bht292/-/DC1">Supplementary Figure 5</ext-link>
.</p>
</sec>
<sec id="s3c">
<title>Whole-Brain Analysis</title>
<p>Finally, we ran a whole-brain searchlight decoding analysis (see
<xref rid="BHT292C26" ref-type="bibr">Kriegeskorte et al. 2006</xref>
) to investigate which other regions of cortex might discriminate between visual object categories. As expected, this analysis revealed high decoding performance throughout much of occipito-temporal cortex (Fig.
<xref ref-type="fig" rid="BHT292F5">5</xref>
<italic>A</italic>
), peaking at 98% correct in early visual cortex. We note that this high decoding performance observed in early visual cortex serves as validation of our methodological approach. The searchlight analysis corroborated our earlier region-of-interest (ROI) analysis by finding reliable decoding in a cluster centered in the right postcentral sulcus bordering area 2 of the PCG and the superior extent of right S2 (Fig.
<xref ref-type="fig" rid="BHT292F5">5</xref>
<italic>A</italic>
).</p>
<p>We found significant decoding, moreover, in several additional regions of posterior parietal cortex that could potentially mediate the transfer of information from visual cortex to early somatosensory brain regions (Fig.
<xref ref-type="fig" rid="BHT292F5">5</xref>
<italic>A</italic>
). We observed significant decoding in the superior parietal lobule, especially on the left (comprising area 7 and area 5), areas known to be important for integration of proprioceptive and visual information (e.g.,
<xref rid="BHT292C22" ref-type="bibr">Hsiao 2008</xref>
. We also found reliable decoding in several additional regions important in visuo-motor control: from posterior to more anterior sections of the IPS and bilateral superior parieto-occipital cortex—SPOC (e.g.,
<xref rid="BHT292C18" ref-type="bibr">Gallivan et al. 2011A</xref>
,
<xref rid="BHT292C17" ref-type="bibr">2011B</xref>
;
<xref rid="BHT292C41" ref-type="bibr">Rossit et al. 2013</xref>
). We also found smaller clusters around left dorsolateral prefrontal cortex and left premotor cortex—all areas important in visuo-motor control (e.g.,
<xref rid="BHT292C17" ref-type="bibr">Gallivan et al. 2011B</xref>
). In summary, the whole-brain analysis supports our ROI-based analyses, and highlights regions of parietal cortex that may potentially transmit information regarding different visual objects to early somatosensory cortex (S1 and S2) and regions of frontal cortex that might also be involved in representing motor plans.</p>
</sec>
<sec id="s3d">
<title>Experiment 2</title>
<p>In a second experiment, we replicated the design of Experiment 1 but added a set of unfamiliar visual objects (Fig.
<xref ref-type="fig" rid="BHT292F1">1</xref>
<italic>B</italic>
and
<ext-link ext-link-type="uri" xlink:href="http://cercor.oxfordjournals.org/lookup/suppl/doi:10.1093/cercor/bht292/-/DC1">Supplementary Fig. 1</ext-link>
<italic>B</italic>
) to explicitly test whether prior visuo-haptic experience with the presented visual objects is necessary to observe reliable decoding in somatosensory cortex. In addition, this experiment allowed us to perform an independent replication of the results of Experiment 1 (with familiar visual objects) while testing the generalization of our effects to a task that excluded any overt motor response during scanning (i.e., no button press response required).</p>
</sec>
<sec id="s3e">
<title>Postcentral Gyri</title>
<p>As in Experiment 1, we defined individual anatomical masks of the postcentral gyri in each of our 10 participants. Decoding of familiar visual objects was above chance in each region (Fig.
<xref ref-type="fig" rid="BHT292F4">4</xref>
<italic>A</italic>
: right PCG: 37%
<italic>t</italic>
<sub>(9)</sub>
= 2.69,
<italic>P</italic>
= 0.012; left PCG: 37%
<italic>t</italic>
<sub>(9)</sub>
= 1.89,
<italic>P</italic>
= 0.046; pooled: 39%
<italic>t</italic>
<sub>(9)</sub>
= 3.43,
<italic>P</italic>
= 0.004), thus replicating our findings in Experiment 1. Decoding of unfamiliar objects (Fig.
<xref ref-type="fig" rid="BHT292F4">4</xref>
<italic>A</italic>
), on the other hand, was not above chance in any region (all
<italic>P</italic>
's > 0.092). There were no differences between decoding performance across hemispheres (left vs. right) for either familiar or unfamiliar objects (both
<italic>P</italic>
's > 0.58). The confusion matrices underlying this performance can be seen in
<ext-link ext-link-type="uri" xlink:href="http://cercor.oxfordjournals.org/lookup/suppl/doi:10.1093/cercor/bht292/-/DC1">Supplementary Figure 2</ext-link>
. Importantly, univariate signal changes did not lead to reliable differences between object categories in the PCG (
<ext-link ext-link-type="uri" xlink:href="http://cercor.oxfordjournals.org/lookup/suppl/doi:10.1093/cercor/bht292/-/DC1">Supplementary Fig. 3</ext-link>
). In addition, in Experiment 2, we improved our finger mapping procedure to map the somatosensory representation of the digits of each hand independently, allowing us to examine whether such effects were present in the PCG when limited to “contralateral finger-sensitive voxels” (top 100 voxels responding to stimulation of the contralateral hand). Mean decoding of familiar visual objects was significant in the right PCG 38% (
<italic>t</italic>
<sub>(9)</sub>
= 2.16,
<italic>P</italic>
= 0.029) and pooled PCG 36% (
<italic>t</italic>
<sub>(9)</sub>
= 2.19,
<italic>P</italic>
= 0.028) but not in left PCG (34%,
<italic>P</italic>
= 0.28), again replicating Experiment 1. Once more, however, no significant decoding was present for unfamiliar objects in any region (all
<italic>P</italic>
's > 0.11). There was a nonsignificant trend for higher decoding in the right than in the left PCG (
<italic>t</italic>
<sub>(9)</sub>
= 1.99,
<italic>P</italic>
= 0.078) for familiar but not unfamiliar (
<italic>P</italic>
= 0.34) objects. Thus, even when limiting the analysis to “contralateral finger-pad-sensitive voxels”, decoding of familiar visual objects is possible in the right PCG and pooled PCG.
<fig id="BHT292F4" position="float">
<label>Figure 4.</label>
<caption>
<p>Decoding of visual object categories: Experiment 2. (
<italic>A</italic>
) Cross-validated 3AFC mean decoding performance for the right and left postcentral gyri (independently and pooled across hemispheres) for both familiar and unfamiliar visual object categories (chance 33%; error bars show 1 standard error of the mean across participants). Double stars
<italic>P</italic>
< 0.0167, single star:
<italic>P</italic>
< 0.05. (
<italic>B</italic>
) The same as in
<italic>A</italic>
but with voxels selected as a function of response to tactile stimulation of the finger-pads of both hands (see Materials and Methods). (C–F) As in
<italic>A</italic>
but for several additional, anatomically defined, regions of interest.</p>
</caption>
<graphic xlink:href="bht29204"></graphic>
</fig>
</p>
<p>Moreover, in an additional analysis, we selected tactile-sensitive voxels in each region that responded to either contra- or ipsilateral stimulation (top 50 for each): we did this to ensure that both earlier (e.g., areas 3b and 1) and later (e.g., area 2) regions within the PCG would have a good chance of being selected (see
<xref rid="BHT292C25" ref-type="bibr">Keysers et al. 2010</xref>
). This analysis (Fig.
<xref ref-type="fig" rid="BHT292F4">4</xref>
<italic>B</italic>
) revealed reliable decoding in each region again only for familiar objects: right PCG 40% (
<italic>t</italic>
<sub>(9)</sub>
= 4.52,
<italic>P</italic>
= 0.0007), left PCG 37% (
<italic>t</italic>
<sub>(9)</sub>
= 2.4,
<italic>P</italic>
= 0.02) and pooled 36.8% (
<italic>t</italic>
<sub>(9)</sub>
= 2.67,
<italic>P</italic>
= 0.013). Again, there were no significant effects of hemisphere on decoding performance for either familiar or unfamiliar objects (both
<italic>P</italic>
's > 0.15). This analysis hints toward the role that higher regions (e.g., area 2) within the PCG may play in contributing to the present effects, as area 2 is the only region within the primary somatosensory cortex to receive input from ipsilateral stimulation of the hand (see
<xref rid="BHT292C25" ref-type="bibr">Keysers et al. 2010</xref>
).</p>
<p>Thus, the results of Experiment 2 replicate those of Experiment 1 in showing that decoding of familiar visual objects is present in the bilateral PCG. The more refined somatosensory mapping of the hand digits allows us to demonstrate that such decoding is still present whether voxels are limited to those active in response to contralateral finger stimulation or to those active during both contra- and ipsilateral finger stimulation. Furthermore, on no occasion is decoding of the unfamiliar object categories above chance in the PCG.</p>
</sec>
<sec id="s3f">
<title>Secondary Somatosensory Cortex (S2)</title>
<p>Decoding of familiar visual objects was above chance in the right S2 (Fig.
<xref ref-type="fig" rid="BHT292F4">4</xref>
<italic>C</italic>
: mean 38%
<italic>t</italic>
<sub>(9)</sub>
= 2.64,
<italic>P</italic>
= 0.013; all other
<italic>P</italic>
's > 0.12), replicating Experiment 1. Decoding of unfamiliar objects was not above chance in any region (Fig.
<xref ref-type="fig" rid="BHT292F4">4</xref>
<italic>C</italic>
: all
<italic>P</italic>
's > 0.12). There was no difference in performance across hemispheres (both
<italic>P</italic>
's > 0.1). When limiting the analysis to “contralateral finger-sensitive voxels” in S2 (top 100 voxels) decoding of familiar objects was only possible in the right S2 (mean 37%
<italic>t</italic>
<sub>(9)</sub>
= 2.82,
<italic>P</italic>
= 0.01). Decoding of unfamiliar objects was not possible in left or right S2, although there was a weak effect when pooled across hemispheres (mean 36%,
<italic>t</italic>
<sub>(9)</sub>
= 1.92,
<italic>P</italic>
= 0.0435). Again there were no differences across hemisphere (both
<italic>P</italic>
's > 0.29). Hence, there is clear evidence of decoding of familiar visual objects in right S2, whether using the whole anatomical region or whether pre-selecting voxels as a function of sensitivity to tactile stimulation of the contralateral hand. There is also a weak suggestion of a reliable effect of discriminating unfamiliar objects when pooling across hemispheres, for voxels preselected as a function of sensitivity to finger-pad stimulation.</p>
</sec>
<sec id="s3g">
<title>Additional Regions</title>
<p>As expected, decoding in V1 was robustly present in all participants for both familiar (Fig.
<xref ref-type="fig" rid="BHT292F4">4</xref>
<italic>D</italic>
: right V1: 97%
<italic>t</italic>
<sub>(9)</sub>
= 49.1,
<italic>P</italic>
< 0.0001; left V1: 96.8%
<italic>t</italic>
<sub>(9)</sub>
= 55.2,
<italic>P</italic>
< 0.0001; pooled: 98%
<italic>t</italic>
<sub>(9)</sub>
= 76.6,
<italic>P</italic>
< 0.0001) and unfamiliar object categories (Fig.
<xref ref-type="fig" rid="BHT292F4">4</xref>
<italic>D</italic>
: right V1: 95.6%
<italic>t</italic>
<sub>(9)</sub>
= 30.1,
<italic>P</italic>
< 0.0001; left V1: 94%
<italic>t</italic>
<sub>(9)</sub>
= 22.7,
<italic>P</italic>
< 0.0001; pooled: 96.5%
<italic>t</italic>
<sub>(9)</sub>
= 35.6,
<italic>P</italic>
< 0.0001). To investigate whether decoding differed for familiar and unfamiliar objects in V1, we conducted a 2-way ANOVA with the factors region and familiarity: there were no significant main effects or interactions in this analysis, although there was a trend (
<italic>P</italic>
= 0.065) for a main effect of region. Thus, decoding was similar for familiar and unfamiliar objects in V1, as expected.</p>
<p>We used probabilistic anatomical masks to define the location of several additional control regions in our participants. Decoding of familiar visual objects (Fig.
<xref ref-type="fig" rid="BHT292F4">4</xref>
<italic>E</italic>
) was significant in both left (37%
<italic>t</italic>
<sub>(9)</sub>
= 2.68,
<italic>P</italic>
= 0.013) and right (36%,
<italic>t</italic>
<sub>(9)</sub>
= 1.97,
<italic>P</italic>
= 0.04) premotor cortex but not when pooled (36%,
<italic>t</italic>
<sub>(9)</sub>
= 1.41,
<italic>P</italic>
= 0.097). However, no reliable decoding was present in motor cortex itself (Fig.
<xref ref-type="fig" rid="BHT292F4">4</xref>
<italic>F</italic>
: all
<italic>P</italic>
's > 0.14). Decoding of unfamiliar objects was not significant in any motor or premotor region tested (Fig.
<xref ref-type="fig" rid="BHT292F4">4</xref>
<italic>E,F</italic>
: all
<italic>P</italic>
's > 0.41).</p>
<p>To investigate whether decoding performance differed as a function of familiarity and region, we also ran the same 2-way ANOVA in each of the other ROIs. Critically, only 2 regions showed a main effect of familiarity on decoding performance: the PCG, but only when limited to finger-sensitive voxels from stimulation of both hands,
<italic>F</italic>
<sub>1,9</sub>
= 7.29,
<italic>P</italic>
= 0.024, mean familiar 38%, unfamiliar 33%; and in premotor cortex,
<italic>F</italic>
<sub>1,9</sub>
= 5.75,
<italic>P</italic>
= 0.04, mean familiar 37%, unfamiliar 33%. No other main effects or interactions were significant in these or any other ROIs. Hence, decoding was reliably higher for familiar compared with unfamiliar object categories in these two brain regions, suggesting a specific influence of prior knowledge.</p>
<p>Finally, we also ran a 3-way ANOVA with the following factors: ROI (PCG, S2, V1, premotor, motor), familiarity (familiar, unfamiliar), and side (right, left, or pooled). This analysis revealed only a main effect of ROI,
<italic>F</italic>
<sub>4,36</sub>
= 586.07,
<italic>P</italic>
< 0.0001. Follow-up
<italic>t</italic>
-tests revealed that decoding performance in V1 was reliably higher than that present in each other region (all
<italic>t</italic>
's > 29.9, all
<italic>P</italic>
's < 0.0001). No other effects were significant (all
<italic>P</italic>
's > 0.05) although there was a nonsignificant trend for decoding to be higher in PCG than in motor cortex (
<italic>P</italic>
= 0.0787).</p>
</sec>
<sec id="s3h">
<title>Within-Category Decoding in the PCG and V1</title>
<p>Within-category decoding was not significant in any region of the PCG (all
<italic>P</italic>
's > 0.05) for familiar objects, for any particular object category independently, or when averaged across categories (see
<ext-link ext-link-type="uri" xlink:href="http://cercor.oxfordjournals.org/lookup/suppl/doi:10.1093/cercor/bht292/-/DC1">Supplementary Fig. 5</ext-link>
). This finding therefore replicates Experiment 1. However, for unfamiliar objects, there was, surprisingly, significant within-category decoding in the right PCG (38%,
<italic>t</italic>
<sub>(9)</sub>
= 2.44,
<italic>P</italic>
= 0.019) and when pooled across hemispheres (37%,
<italic>t</italic>
<sub>(9)</sub>
= 1.97,
<italic>P</italic>
= 0.04) but not in left PCG (34%,
<italic>t</italic>
<sub>(9)</sub>
= 0.29,
<italic>P</italic>
= 0.39), for the analysis that averaged across object categories. When split by object category, the results of this analysis clearly reveal that this effect is entirely driven by the “spikies” object category (see
<ext-link ext-link-type="uri" xlink:href="http://cercor.oxfordjournals.org/lookup/suppl/doi:10.1093/cercor/bht292/-/DC1">Supplementary Fig. 5</ext-link>
) and that hence, for 2 of the 3 unfamiliar object categories, no within-category decoding was observed (“cubies” and “smoothies”; see
<ext-link ext-link-type="uri" xlink:href="http://cercor.oxfordjournals.org/lookup/suppl/doi:10.1093/cercor/bht292/-/DC1">Supplementary Fig. 5</ext-link>
). We discuss below the special features present in the “spikies” object category that render it, in retrospect, a less than ideal control in the present experiment (see Discussion). A 3-way ANOVA (decoding type: between vs. within; region: left, right, pooled; familiarity: familiar, unfamiliar) revealed that there was no difference across type of decoding (between vs. within;
<italic>P</italic>
= 0.52) or any interactions with this factor. There was a main effect of region (
<italic>F</italic>
<sub>2,18</sub>
= 4.02,
<italic>P</italic>
= 0.036) with decoding overall being higher in right than left PCG.</p>
<p>In V1, as expected, within-category decoding was again highly significant for familiar objects (right: 64%,
<italic>t</italic>
<sub>(9)</sub>
= 7.22,
<italic>P</italic>
< 0.0001; left: 58%,
<italic>t</italic>
<sub>(9)</sub>
= 6.78,
<italic>P</italic>
< 0.0001; pooled: 64%,
<italic>t</italic>
<sub>(9)</sub>
= 7.04,
<italic>P</italic>
< 0.0001). The same pattern was also evident for unfamiliar objects (right: 68%,
<italic>t</italic>
<sub>(9)</sub>
= 8.35,
<italic>P</italic>
< 0.0001; left: 65%,
<italic>t</italic>
<sub>(9)</sub>
= 8.28,
<italic>P</italic>
< .0001; pooled: 70%,
<italic>t</italic>
<sub>(9)</sub>
= 10.26,
<italic>P</italic>
< 0.0001). Decoding was higher for between-category (96%) than within-category (65%) classification (
<italic>F</italic>
<sub>1,9</sub>
= 141.56,
<italic>P</italic>
< 0.001) in V1. In addition, there was an interaction between type of decoding and familiarity (
<italic>F</italic>
<sub>1,9</sub>
= 10.78,
<italic>P</italic>
= 0.009) with between-category decoding being slightly better for familiar than unfamiliar objects but within-category decoding being slightly better for unfamiliar objects. There was also an interaction between decoding type and region (
<italic>F</italic>
<sub>2,18</sub>
= 4.7,
<italic>P</italic>
= 0.023) due to a greater decrease in decoding in left V1 relative to right V1 (or pooled) for within-category but not between-category decoding.</p>
</sec>
<sec id="s3i">
<title>Whole-Brain Analysis</title>
<p>We show in Figure
<xref ref-type="fig" rid="BHT292F5">5</xref>
<italic>B,C</italic>
the results of a whole-brain searchlight analysis independently for familiar and for unfamiliar visual object categories in Experiment 2. The broad pattern of decoding is similar in both cases: a vast region of visual cortex extending into temporal and parietal cortex has very high decoding accuracies. There is evidence of reliable decoding in the SPOC, and along the IPS and the superior parietal lobule, as in Experiment 1. This pattern is, however, broadly similar across both familiar and unfamiliar object categories. The findings for familiar objects fit well with those in Experiment 1 in terms of visual and parietal cortex although there is no evidence of significant decoding around the postcentral sulcus/postcentral gyrus in Experiment 2, for familiar objects. In addition, fewer areas are found in frontal regions in Experiment 2 for familiar objects (only left dorsolateral prefrontal and right premotor cortex). In order to increase power, we combined our data from Experiments 1 and 2 in the familiar object condition (Fig.
<xref ref-type="fig" rid="BHT292F5">5</xref>
<italic>D</italic>
). This analysis revealed high decoding throughout visual cortex, extending again into posterior parietal (SPOC, IPS, SPL) cortex, all areas important in visuo-motor control (e.g.,
<xref rid="BHT292C18" ref-type="bibr">Gallivan et al. 2011a</xref>
,
<xref rid="BHT292C17" ref-type="bibr">2011b</xref>
;
<xref rid="BHT292C41" ref-type="bibr">Rossit et al. 2013</xref>
). In addition, reliable decoding was found around the postcentral sulcus bilaterally, extending into the postcentral gyri. Moreover, we also found reliable decoding in left premotor, motor, and dorsolateral prefrontal cortex and right premotor cortex, again all areas important in visuo-motor control (e.g.,
<xref rid="BHT292C17" ref-type="bibr">Gallivan et al. 2011B</xref>
;
<xref rid="BHT292C41" ref-type="bibr">Rossit et al. 2013</xref>
) and furthermore in left posterior cingulate. Thus, our combined analysis implies that the representation of familiar object categories spans at least the visual, tactile, and motor components associated with interacting with real world objects and further suggests a network of brain areas that could play a role in transmitting visual information to early somatosensory brain regions.
<fig id="BHT292F5" position="float">
<label>Figure 5.</label>
<caption>
<p>Whole-brain SearchLight Decoding. (
<italic>A</italic>
) Group information-based map for familiar visual object decoding in Experiment 1 projected onto an inflated cortical reconstruction of a reference brain. The value at each voxel in the map designates how well the decoder discriminates visual object categories reliably across participants at that position in space (voxelwise
<italic>P</italic>
< 0.01, cluster
<italic>P</italic>
< 0.05; see Materials and Methods). The continuous white lines mark the approximate anterior boundaries of the functional data coverage. (
<italic>B</italic>
) As in
<italic>A</italic>
but for familiar object decoding in Experiment 2. (
<italic>C</italic>
) As in
<italic>A</italic>
but for unfamiliar object decoding in Experiment 2. (
<italic>D</italic>
) As in
<italic>A</italic>
 but for an analysis that pooled data for familiar object decoding across Experiments 1 and 2. PCG, postcentral gyrus; PCS, postcentral sulcus; SPL, superior parietal lobule; IPS, intraparietal sulcus; SPOC, superior parieto-occipital cortex.</p>
</caption>
<graphic xlink:href="bht29205"></graphic>
</fig>
</p>
</sec>
<sec id="s3j">
<title>Overlap Between Searchlight and ROI Analyses</title>
<p>For the searchlight analysis that pooled across Experiments (i.e., familiar object decoding), almost 1/5 of voxels (18%) within a group mask of the right PCG (defined as voxels shared across at least 40% of subjects; see also
<xref rid="BHT292C32" ref-type="bibr">Meyer et al. 2011</xref>
) were found to be significant (
<italic>P</italic>
< 0.01, uncorrected) via the searchlight analysis (13% in the left hemisphere; see
<ext-link ext-link-type="uri" xlink:href="http://cercor.oxfordjournals.org/lookup/suppl/doi:10.1093/cercor/bht292/-/DC1">Supplementary Fig. 6</ext-link>
. This differs considerably from the proportion that overlaps when each experiment is analyzed separately: only 3–7% of voxels overlap for decoding of familiar objects in Experiments 1 and 2. Hence, pooling results across experiments yields a considerable increase in power for the searchlight analysis. A similar pattern holds for S2: overlap ranges between 1% and 9% for the individual experiments, whereas it reaches 13% in right S2 (4% in left S2) for the analysis pooled across experiments.</p>
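<p>A minimal sketch of this overlap measure is given below; the variable names and array shapes are illustrative assumptions. The group mask keeps voxels present in at least 40% of the individual subject masks, and the overlap is the percentage of those voxels that reach significance in the group searchlight map.</p>
<preformat>
# Hypothetical sketch of the searchlight/ROI overlap measure described above.
import numpy as np

def roi_searchlight_overlap(subject_masks, searchlight_p, subject_frac=0.4, alpha=0.01):
    """subject_masks: n_subjects x X x Y x Z boolean ROI masks;
    searchlight_p: X x Y x Z map of group-level searchlight p-values."""
    group_mask = subject_masks.mean(axis=0) >= subject_frac   # voxels shared by >= 40% of subjects
    significant = searchlight_p < alpha                        # P < 0.01, uncorrected
    return 100.0 * np.count_nonzero(group_mask & significant) / np.count_nonzero(group_mask)
</preformat>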
</sec>
</sec>
<sec sec-type="discussion" id="s4">
<title>Discussion</title>
<p>We have shown in 2 independent experiments that early somatosensory cortex (PCG/S1) carries information that discriminates familiar visual object categories. This is true despite the fact that participants had no concurrent haptic interaction with the objects presented in the experiment, nor did our stimuli, which were static photographs, depict any haptic interaction with the objects. In Experiment 2, we demonstrated that this effect was not found for visual object categories in general, but only for familiar visual object categories. Thus, cross-modal connections from vision to early somatosensory cortex transmit content-specific information about familiar object categories based on visual appearance alone. We further show that right S2, a higher order somatosensory brain region, and premotor cortex also contain information about familiar object categories. Our whole-brain analyses point toward several areas in posterior parietal cortex (e.g., IPS, SPL) that might underlie the transmission of information from vision to early somatosensory cortex.</p>
<sec id="s4a">
<title>Cross-Modal Context Effects in Early Sensory Areas</title>
<p>The present study is in agreement with a set of studies that show the important role that cross-modal connections can play in influencing activity even in the earliest regions of supposedly unimodal sensory cortex, both in a content-specific (
<xref rid="BHT292C31" ref-type="bibr">Meyer et al. 2010</xref>
,
<xref rid="BHT292C32" ref-type="bibr">2011</xref>
;
<xref rid="BHT292C47" ref-type="bibr">Vetter et al. 2011</xref>
) and modulatory fashion (e.g.,
<xref rid="BHT292C3" ref-type="bibr">Calvert et al. 1997</xref>
;
<xref rid="BHT292C29" ref-type="bibr">Mcintosh et al. 1998</xref>
;
<xref rid="BHT292C2" ref-type="bibr">Baler et al. 2006</xref>
;
<xref rid="BHT292C50" ref-type="bibr">Zangenehpour and Zatorre 2010</xref>
). Our results are consistent with
<xref rid="BHT292C32" ref-type="bibr">Meyer et al. (2011)</xref>
, but importantly show that cross-modal triggering of differential activity patterns in somatosensory cortex occurs even when participants simply view the different objects rather than view someone haptically exploring the objects. Indeed, it is conceivable that the different patterns of activation
<xref rid="BHT292C32" ref-type="bibr">Meyer et al. (2011)</xref>
observed in somatosensory cortex were driven by the differences in the depicted hand movements rather than by differences in the visual appearance of the objects themselves. In short, as those authors discuss, it is possible that what they observed is the activation of somatosensory components of mirror-neuron networks (e.g.,
<xref rid="BHT292C25" ref-type="bibr">Keysers et al. 2010</xref>
). Nevertheless, both our study and that of Meyer et al. found reliable decoding of visual stimuli in primary somatosensory cortex and in S2. Moreover, in the present work, we have shown that the effect is only present for familiar but not unfamiliar object categories (all categories were familiar in
<xref rid="BHT292C31" ref-type="bibr">Meyer et al. 2010</xref>
).</p>
<p>In addition, the whole-brain searchlight analysis uniquely allows us to suggest several regions in parietal cortex that might underlie the visual context sensitivity observed in early somatosensory cortex. Specifically, in our analysis that pooled across experiments, we found reliable decoding of familiar visual objects in the SPL (bilaterally), in the IPS (both posterior and anterior), and in SPOC. Importantly, these regions are all known to be crucial for the online control of actions (
<xref rid="BHT292C33" ref-type="bibr">Milner and Goodale 2006</xref>
), and in fact, parts of the SPL are thought to be a multimodal integration site for visual and proprioceptive information (e.g.,
<xref rid="BHT292C22" ref-type="bibr">Hsiao 2008</xref>
). We also found reliable decoding in the postcentral sulcus, extending into the PCG itself, which corroborates our ROI analyses. For unfamiliar objects, in contrast, the network of brain areas containing information that discriminates the object categories is restricted to visual and posterior parietal regions (including parts of the SPL, IPS, and SPOC) and does not include any frontal brain regions (i.e., motor or premotor). The whole-brain results suggest the intriguing idea that visual images of familiar graspable objects may activate associated action/haptic coding routines in parietal cortex (see, e.g.,
<xref rid="BHT292C43" ref-type="bibr">Sakata et al. 1997</xref>
), which subsequently feed back all the way to early somatosensory cortex. In addition, we found reliable decoding in frontal cortices (premotor and dorsolateral prefrontal cortex, both areas that have been implicated in visuo-motor processing, e.g.,
<xref rid="BHT292C17" ref-type="bibr">Gallivan et al. 2011B</xref>
). Although speculative, these ideas could be further tested by training a classifier on data from participants performing real actions with real objects and testing it on data acquired from visual presentation of the same objects alone. This would give an indication of whether or not the same representational space is being activated in each case. For unfamiliar objects, on the other hand, visual images trigger differential activity in posterior parietal, visual, and temporal brain areas while not leading to feedback that discriminates object categories in the early somatosensory cortices.</p>
<p>Moreover, our results fit well with an existing theory about the neural representation of perceptual experience (
<xref rid="BHT292C8" ref-type="bibr">Damasio 1989</xref>
;
<xref rid="BHT292C30" ref-type="bibr">Meyer and Damasio 2009</xref>
) that implies that early sensory areas (such as S1, V1, etc.) simultaneously represent the perceptual content of experience and that these sensory representations can be reactivated during recall or recognition. The present study provides converging evidence for the idea that activity in supposedly modality-specific early sensory areas can be shaped in a content-specific manner by relevant contextual information from other sensory modalities (e.g.,
<xref rid="BHT292C30" ref-type="bibr">Meyer and Damasio 2009</xref>
;
<xref rid="BHT292C31" ref-type="bibr">Meyer et al. 2010</xref>
,
<xref rid="BHT292C32" ref-type="bibr">2011</xref>
;
<xref rid="BHT292C47" ref-type="bibr">Vetter et al. 2011</xref>
).</p>
</sec>
<sec id="s4b">
<title>Finger-Pad Representations Within the PCG</title>
<p>We demonstrated that reliable decoding of familiar visual objects was possible in the PCG even when limiting our analysis to those voxels that are highly responsive to tactile stimulation of the finger-pads. In fact, these decoding analyses worked best when we selected voxels from both contra- and ipsilateral stimulation (Experiment 2), implying that higher order areas within the PCG (area 2) may underlie the present decoding effects. In addition, in Experiment 1, these analyses produced more robust classification in the right PCG, which represents the left-hand finger-pads and was therefore only subject to ipsilateral finger-pad stimulation in the localizer (which stimulated the right-hand finger-pads), again suggesting the involvement of area 2 within the PCG. This evidence is further supported by our searchlight decoding analyses, which reveal a cluster in the right postcentral sulcus that extends into posterior PCG, that is, area 2. In any case, these analyses imply that there is fine-grained information within the finger-specific representations of the somatosensory cortices that permits decoding of visual images of familiar object categories. It will be interesting in future research to attempt to localize such effects specifically to tactile or proprioceptive representations within the PCG, and specifically to finger/hand representations versus those of another effector (e.g., the leg).</p>
</sec>
<sec id="s4c">
<title>The Role of Prior Visuo-Haptic Experience</title>
<p>In Experiment 2 we demonstrated reliable decoding of familiar visual objects in bilateral PCG, right S2, bilateral premotor cortex, and of course in V1. Unfamiliar objects, on the other hand, were only reliably decoded in V1 (and on one subsampling analysis in right S2). In addition, within the PCG (when limited to finger-pad-sensitive voxels from both hemispheres) and premotor cortex, we found significantly stronger decoding for familiar than for unfamiliar visual objects. The whole-brain decoding analysis revealed reliable decoding in similar occipito-temporal and posterior parietal areas (IPS, SPL, and SPOC) for both familiar and unfamiliar object categories. However, we did not find any evidence of decoding for unfamiliar objects in more anterior parietal regions in any analysis carried out, that is, the postcentral sulcus or the PCG whereas for familiar objects, both the ROI analyses and the searchlight results (when pooled across experiments and in Experiment 1 by itself) provide evidence of reliable decoding within the PCG. We initially used familiar visual objects as we expected that cross-modal activation of object-specific activity patterns within early somatosensory cortex would be most likely to be detected with such objects since strong links already exist between different sensory modalities from joint multisensory experience for such object categories (see
<xref rid="BHT292C30" ref-type="bibr">Meyer and Damasio 2009</xref>
). Unfamiliar visual objects, however, just like familiar objects also afford certain grasping actions and tactile sensations directly from the visual appearance alone: hence it might still be useful and advantageous to activate early somatosensory representations even in this case, where such novel objects would be mapped onto our previous visuo-haptic experience and hence could still potentially trigger differential activity even in early somatosensory cortex. It may just be much harder to detect such effects in this case. This is especially true if we consider predictive coding as the basic computational strategy of the brain (see, e.g.,
<xref rid="BHT292C6" ref-type="bibr">Clark 2013</xref>
;
<xref rid="BHT292C35" ref-type="bibr">Muckli et al. 2013</xref>
). Although we did not find reliable decoding of novel object categories in the present experiment, we did not consistently find greater decoding for familiar than unfamiliar object categories across our analysis: the 2 compelling exceptions to this were in premotor cortex and in the PCG (when selecting only voxels responsive to finger-pad stimulation of both hands). One prediction of this hypothesis would be the existence of a gradient of decoding accuracy as a function of the amount of visuo-haptic experience a participant has with different sets of object categories. In addition, a further important test of the role of visual haptic experience in causing the present effects would be to explicitly train participants with a set of novel objects (visuo-haptically) and then examine at what stage of learning such objects could be decoded in the PCG, from visual presentation alone.</p>
<p>Additional analyses demonstrated that within-category decoding effects were absent for familiar objects in the PCG in both experiments. This is what we would expect if the between-category decoding results are due to motor/grasping affordances, since these are constant within a category but differ across categories. If motor/grasping affordances are indeed responsible for the present effects, then this would predict that decoding accuracy in the PCG should be higher for a set of familiar objects with more distinct motor affordances than for a set with more similar affordances. For unfamiliar objects, on the other hand, no within-category decoding was detected in the PCG for 2 of the 3 object categories, as expected. However, somewhat puzzlingly, reliable within-category decoding was found for the “spikies” object category within the PCG.</p>
<p>Specifically, in the context of the familiar object categories associated with hand-grasping actions (wine glasses, mobile phones, and apples), presentation of images from the “spikies” category might be especially attention-grabbing (indeed, these objects do lead to qualitatively high levels of BOLD signal amplitude; see
<ext-link ext-link-type="uri" xlink:href="http://cercor.oxfordjournals.org/lookup/suppl/doi:10.1093/cercor/bht292/-/DC1">Supplementary Fig. 3</ext-link>
). This could reflect the potential for injury that might result from performing a hand grasp on such objects. No other object category presented in the current experiments would appear to possess such a quality. Existing research has shown, furthermore, that simply viewing others in painful encounters (e.g., grasping a broken glass) leads to reliable activity within the somatosensory cortices, even within the PCG or S1 (see
<xref rid="BHT292C25" ref-type="bibr">Keysers et al. 2010</xref>
;
<xref rid="BHT292C52" ref-type="bibr">Morrison et al. 2013</xref>
). We speculate that, in the present case, this particular object category might have triggered increased sensitivity in the PCG due to the potential for pain, which in turn leads to reliable within-category discrimination. Although this result might seem odd given the absence of any between-category decoding for unfamiliar objects, it is possible to achieve such a result given the mathematics of machine learning algorithms.</p>
</sec>
<sec id="s4d">
<title>Decoding in Premotor Regions</title>
<p>In Experiment 2, we found reliable decoding of familiar objects in premotor cortex. This is in contrast to Experiment 1, where no reliable decoding of familiar objects was observed in this region. Recall that Experiment 2 was explicitly designed to remove any requirement for a motor response during the fMRI scanning, whereas Experiment 1 required such a response (button press). Thus, it seems reasonable that any feedback from vision to premotor cortex was overwritten by the need for planning a motor action. Indeed,
<xref rid="BHT292C16" ref-type="bibr">Gallivan et al. (2013)</xref>
 have shown that both left and right premotor areas (both ventral and dorsal) code information about both left- and right-hand actions during the planning phase of an action.</p>
</sec>
<sec id="s4e">
<title>The Role of Somatosensory Imagery</title>
<p>Visual imagery can activate the earliest sensory areas (i.e., V1) if sufficient detail is required in the mental image (see
<xref rid="BHT292C27" ref-type="bibr">Kosslyn et al. 2000</xref>
). This, however, requires a task optimized for generating a detailed visual image (e.g., visualizing line drawings of objects at different sizes:
<xref rid="BHT292C27" ref-type="bibr">Kosslyn et al. 2000</xref>
). Moreover,
<xref rid="BHT292C45" ref-type="bibr">Stokes et al. (2009)</xref>
, using MVPA, did not detect reliable decoding in early visual areas when subjects were asked to explicitly imagine the letters X or O. In the present experiments, we used orthogonal tasks requiring participants to fixate and either to detect color changes of the fixation marker (Experiment 1) or to count such changes and report the total number at the end of a run (Experiment 2). There was hence no task (volitional) requirement to form explicit mental images regarding the somatosensory (or motor) associations that the visual object images could potentially bring to mind. Although it is conceivable that our results might be explained to some extent by somatosensory imagery mechanisms, it seems unlikely that presentation of visual object images alone, with no task instruction to form detailed somatosensory images of such objects, would be expected to generate discriminable patterns of activity in the earliest regions of somatosensory cortex (i.e., S1). If imagery did occur during the present task then it must have been automatically generated by pre-existing links between vision and somatosensation for our particular object categories (see also
<xref rid="BHT292C50" ref-type="bibr">Zangenehpour and Zatorre 2010</xref>
). Thus, although we cannot rule out imagery as a potential mechanism we argue that it is unlikely that “explicit” mental imagery underlies the present effects. In future, a rapid event-related design and short stimulus presentation durations could be used in order to further explore this question.</p>
</sec>
</sec>
<sec sec-type="conclusions" id="s5">
<title>Conclusion</title>
<p>We have shown that familiar visual object categories can be discriminated in early regions of somatosensory cortex (S1 and S2), in the absence of concurrent somatosensory stimulation and in the absence of any object-hand interactions in the visual stimuli. Thus, cross-modal connections from vision to the primary somatosensory cortex transmit content-specific information about familiar object categories. We have further highlighted several areas in the parietal lobe (IPS, SPOC, SPL) that may underlie the transmission of information from visual to early somatosensory cortex. Our results fit well with theories on the neural representation of perceptual experience that suggest that early sensory areas jointly represent the perceptual contents of experience (e.g.,
<xref rid="BHT292C8" ref-type="bibr">Damasio 1989</xref>
;
<xref rid="BHT292C30" ref-type="bibr">Meyer and Damasio 2009</xref>
).</p>
</sec>
<sec id="s6">
<title>Supplementary Material</title>
<p>
<ext-link ext-link-type="uri" xlink:href="http://cercor.oxfordjournals.org/lookup/suppl/doi:10.1093/cercor/bht292/-/DC1">Supplementary material can be found at: http://www.cercor.oxfordjournals.org/</ext-link>
</p>
</sec>
<sec id="s7">
<title>Funding</title>
<p>F.W.S. was supported by a salary award from the NSERC-CREATE program. The research was supported by a grant to M.A.G. by the Canadian Institutes of Health Research. Funding to pay the Open Access publication charges for this article was provided by the University of East Anglia.</p>
</sec>
<sec sec-type="supplementary-material">
<title>Supplementary Material</title>
<supplementary-material id="PMC_1" content-type="local-data">
<caption>
<title>Supplementary Data</title>
</caption>
<media mimetype="text" mime-subtype="html" xlink:href="supp_25_4_1020__index.html"></media>
<media xlink:role="associated-file" mimetype="application" mime-subtype="msword" xlink:href="supp_bht292_bht292supp.docx"></media>
</supplementary-material>
</sec>
</body>
<back>
<ack>
<title>Notes</title>
<p>We thank Adam Mclean for help with I/O card setup and operation of the somatosensory stimulation device, and Stephanie Rossit for discussions about the functions of parietal cortex and insightful comments on the manuscript.
<italic>Conflict of Interest</italic>
: None declared.</p>
</ack>
<ref-list>
<title>References</title>
<ref id="BHT292C1">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Amunts</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Malikovic</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Mohlberg</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Schormann</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Zilles</surname>
<given-names>K</given-names>
</name>
</person-group>
<article-title>Brodmann's areas 17 and 18 brought into stereotaxic space-where and how variable?</article-title>
<source>Neuroimage</source>
<year>2000</year>
<volume>11</volume>
<fpage>66</fpage>
<lpage>84</lpage>
<comment>
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1006/nimg.1999.0516">doi:10.1006/nimg.1999.0516</ext-link>
</comment>
<pub-id pub-id-type="pmid">10686118</pub-id>
</element-citation>
</ref>
<ref id="BHT292C2">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Baier</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Kleinschmidt</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Muller</surname>
<given-names>NG</given-names>
</name>
</person-group>
<article-title>Cross-modal processing in early visual and auditory cortices depends on expected statistical relationship of multisensory information</article-title>
<source>J Neurosci</source>
<year>2006</year>
<volume>26</volume>
<fpage>12260</fpage>
<lpage>12265</lpage>
<comment>
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1523/JNEUROSCI.1457-06.2006">doi:10.1523/JNEUROSCI.1457-06.2006</ext-link>
</comment>
<pub-id pub-id-type="pmid">17122051</pub-id>
</element-citation>
</ref>
<ref id="BHT292C3">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Calvert</surname>
<given-names>GA</given-names>
</name>
<name>
<surname>Bullmore</surname>
<given-names>ET</given-names>
</name>
<name>
<surname>Brammer</surname>
<given-names>MJ</given-names>
</name>
<name>
<surname>Campbell</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Williams</surname>
<given-names>SCR</given-names>
</name>
<name>
<surname>McGuire</surname>
<given-names>PK</given-names>
</name>
<name>
<surname>Woodruff</surname>
<given-names>PWR</given-names>
</name>
<name>
<surname>Iversen</surname>
<given-names>SD</given-names>
</name>
<name>
<surname>David</surname>
<given-names>AS</given-names>
</name>
</person-group>
<article-title>Activation of auditory cortex during silent lipreading</article-title>
<source>Science</source>
<year>1997</year>
<volume>276</volume>
<fpage>593</fpage>
<lpage>596</lpage>
<comment>
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1126/science.276.5312.593">doi:10.1126/science.276.5312.593</ext-link>
</comment>
<pub-id pub-id-type="pmid">9110978</pub-id>
</element-citation>
</ref>
<ref id="BHT292C4">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Chang</surname>
<given-names>CC</given-names>
</name>
<name>
<surname>Lin</surname>
<given-names>CJ</given-names>
</name>
</person-group>
<article-title>LIBSVM: a library for support vector machines</article-title>
<source>ACM Trans Intell Syst Technol</source>
<year>2011</year>
<volume>2</volume>
<fpage>27</fpage>
<comment>
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1145/1961189.1961199">doi:10.1145/1961189.1961199</ext-link>
</comment>
</element-citation>
</ref>
<ref id="BHT292C5">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Chen</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Namburi</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Elliott</surname>
<given-names>LT</given-names>
</name>
<name>
<surname>Heinzle</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Soon</surname>
<given-names>CS</given-names>
</name>
<name>
<surname>Chee</surname>
<given-names>MW</given-names>
</name>
<name>
<surname>Haynes</surname>
<given-names>JD</given-names>
</name>
</person-group>
<article-title>Cortical surface-based searchlight decoding</article-title>
<source>Neuroimage</source>
<year>2011</year>
<volume>56</volume>
<fpage>582</fpage>
<lpage>592</lpage>
<comment>
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1016/j.neuroimage.2010.07.035">doi:10.1016/j.neuroimage.2010.07.035</ext-link>
</comment>
<pub-id pub-id-type="pmid">20656043</pub-id>
</element-citation>
</ref>
<ref id="BHT292C6">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Clark</surname>
<given-names>A</given-names>
</name>
</person-group>
<article-title>Whatever next? Predictive brains, situated agents and the future of cognitive science</article-title>
<source>Behav Brain Sci</source>
<year>2013</year>
<volume>36</volume>
<fpage>181</fpage>
<lpage>204</lpage>
<comment>
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1017/S0140525X12000477">doi:10.1017/S0140525X12000477</ext-link>
</comment>
<pub-id pub-id-type="pmid">23663408</pub-id>
</element-citation>
</ref>
<ref id="BHT292C7">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Coutanche</surname>
<given-names>MN</given-names>
</name>
<name>
<surname>Thompson-Schill</surname>
<given-names>SL</given-names>
</name>
</person-group>
<article-title>The advantage of brief fMRI acquisition runs for multi-voxel pattern detection across runs</article-title>
<source>Neuroimage</source>
<year>2012</year>
<volume>57</volume>
<fpage>1113</fpage>
<lpage>1119</lpage>
<comment>
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1016/j.neuroimage.2012.03.076">doi:10.1016/j.neuroimage.2012.03.076</ext-link>
</comment>
<pub-id pub-id-type="pmid">22498658</pub-id>
</element-citation>
</ref>
<ref id="BHT292C8">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Damasio</surname>
<given-names>AR</given-names>
</name>
</person-group>
<article-title>Time-locked multiregional retroactivation: a systems-level proposal for the neural substrates of recall and recognition</article-title>
<source>Cognition</source>
<year>1989</year>
<volume>33</volume>
<fpage>25</fpage>
<lpage>62</lpage>
<comment>
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1016/0010-0277(89)90005-X">doi:10.1016/0010-0277(89)90005-X</ext-link>
</comment>
<pub-id pub-id-type="pmid">2691184</pub-id>
</element-citation>
</ref>
<ref id="BHT292C9">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Driver</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Noesselt</surname>
<given-names>T</given-names>
</name>
</person-group>
<article-title>Multisensory interplay reveals cross-modal influences on ‘sensory-specific’ brain regions, neural responses and judgements</article-title>
<source>Neuron</source>
<year>2008</year>
<volume>57</volume>
<fpage>11</fpage>
<lpage>23</lpage>
<comment>
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1016/j.neuron.2007.12.013">doi:10.1016/j.neuron.2007.12.013</ext-link>
</comment>
<pub-id pub-id-type="pmid">18184561</pub-id>
</element-citation>
</ref>
<ref id="BHT292C10">
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Duda</surname>
<given-names>RO</given-names>
</name>
<name>
<surname>Hart</surname>
<given-names>PE</given-names>
</name>
<name>
<surname>Stork</surname>
<given-names>DG</given-names>
</name>
</person-group>
<source>Pattern classification</source>
<year>2001</year>
<publisher-loc>New York</publisher-loc>
<publisher-name>Wiley</publisher-name>
</element-citation>
</ref>
<ref id="BHT292C11">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Eickhoff</surname>
<given-names>SB</given-names>
</name>
<name>
<surname>Amunts</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Mohlberg</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Zilles</surname>
<given-names>K</given-names>
</name>
</person-group>
<article-title>The human parietal operculum. II. Stereotaxic maps and correlation with functional imaging results</article-title>
<source>Cereb Cortex</source>
<year>2006b</year>
<volume>16</volume>
<fpage>268</fpage>
<lpage>279</lpage>
<comment>
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1093/cercor/bhi106">doi:10.1093/cercor/bhi106</ext-link>
</comment>
<pub-id pub-id-type="pmid">15888606</pub-id>
</element-citation>
</ref>
<ref id="BHT292C12">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Eickhoff</surname>
<given-names>SB</given-names>
</name>
<name>
<surname>Schleicher</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Zilles</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Amunts</surname>
<given-names>K</given-names>
</name>
</person-group>
<article-title>The human parietal operculum. I. Cytoarchitectonic mapping of subdivisions</article-title>
<source>Cereb Cortex</source>
<year>2006a</year>
<volume>16</volume>
<fpage>254</fpage>
<lpage>267</lpage>
<comment>
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1093/cercor/bhi105">doi:10.1093/cercor/bhi105</ext-link>
</comment>
<pub-id pub-id-type="pmid">15888607</pub-id>
</element-citation>
</ref>
<ref id="BHT292C13">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Eickhoff</surname>
<given-names>SB</given-names>
</name>
<name>
<surname>Stephan</surname>
<given-names>KE</given-names>
</name>
<name>
<surname>Mohlberg</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Grefkes</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Fink</surname>
<given-names>GR</given-names>
</name>
<name>
<surname>Amunts</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Zilles</surname>
<given-names>K</given-names>
</name>
</person-group>
<article-title>A new SPM toolbox for combining probabilistic maps and functional imaging data</article-title>
<source>Neuroimage</source>
<year>2005</year>
<volume>25</volume>
<fpage>1325</fpage>
<lpage>1335</lpage>
<comment>
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1016/j.neuroimage.2004.12.034">doi:10.1016/j.neuroimage.2004.12.034</ext-link>
</comment>
<pub-id pub-id-type="pmid">15850749</pub-id>
</element-citation>
</ref>
<ref id="BHT292C14">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ethofer</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Van De Ville</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Scherer</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Vuilleumier</surname>
<given-names>P</given-names>
</name>
</person-group>
<article-title>Decoding of emotional information in voice-sensitive cortices</article-title>
<source>Curr Biol</source>
<year>2009</year>
<volume>19</volume>
<fpage>1028</fpage>
<lpage>1033</lpage>
<comment>
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1016/j.cub.2009.04.054">doi:10.1016/j.cub.2009.04.054</ext-link>
</comment>
<pub-id pub-id-type="pmid">19446457</pub-id>
</element-citation>
</ref>
<ref id="BHT292C15">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Etzel</surname>
<given-names>JA</given-names>
</name>
<name>
<surname>Gazzola</surname>
<given-names>V</given-names>
</name>
<name>
<surname>Keysers</surname>
<given-names>C</given-names>
</name>
</person-group>
<article-title>Testing simulation theory with cross-modal multivariate classification of fMRI data</article-title>
<source>PLOS One</source>
<year>2008</year>
<volume>3</volume>
<issue>11</issue>
<fpage>e3690</fpage>
<comment>
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1371/journal.pone.0003690">doi:10.1371/journal.pone.0003690</ext-link>
</comment>
<pub-id pub-id-type="pmid">18997869</pub-id>
</element-citation>
</ref>
<ref id="BHT292C16">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gallivan</surname>
<given-names>JP</given-names>
</name>
<name>
<surname>McLean</surname>
<given-names>DA</given-names>
</name>
<name>
<surname>Flanagan</surname>
<given-names>JR</given-names>
</name>
<name>
<surname>Culham</surname>
<given-names>JC</given-names>
</name>
</person-group>
<article-title>Where one hand meets the other: limb-specific and action-dependent movement plans decoded from preparatory signals in single human frontoparietal brain areas</article-title>
<source>J Neurosci</source>
<year>2013</year>
<volume>33</volume>
<fpage>1991</fpage>
<lpage>2008</lpage>
<comment>
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1523/JNEUROSCI.0541-12.2013">doi:10.1523/JNEUROSCI.0541-12.2013</ext-link>
</comment>
<pub-id pub-id-type="pmid">23365237</pub-id>
</element-citation>
</ref>
<ref id="BHT292C17">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gallivan</surname>
<given-names>JP</given-names>
</name>
<name>
<surname>McLean</surname>
<given-names>DA</given-names>
</name>
<name>
<surname>Smith</surname>
<given-names>FW</given-names>
</name>
<name>
<surname>Culham</surname>
<given-names>JC</given-names>
</name>
</person-group>
<article-title>Decoding effector-dependent and effector-independent movement intentions from human parieto-frontal brain networks</article-title>
<source>J Neurosci</source>
<year>2011b</year>
<volume>31</volume>
<fpage>17149</fpage>
<lpage>17168</lpage>
<comment>
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1523/JNEUROSCI.1058-11.2011">doi:10.1523/JNEUROSCI.1058-11.2011</ext-link>
</comment>
<pub-id pub-id-type="pmid">22114283</pub-id>
</element-citation>
</ref>
<ref id="BHT292C18">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gallivan</surname>
<given-names>JP</given-names>
</name>
<name>
<surname>McLean</surname>
<given-names>DA</given-names>
</name>
<name>
<surname>Valyear</surname>
<given-names>KF</given-names>
</name>
<name>
<surname>Pettypiece</surname>
<given-names>CE</given-names>
</name>
<name>
<surname>Culham</surname>
<given-names>JC</given-names>
</name>
</person-group>
<article-title>Decoding action intentions from preparatory brain activity in human parieto-frontal networks</article-title>
<source>J Neurosci</source>
<year>2011a</year>
<volume>31</volume>
<fpage>9599</fpage>
<lpage>9610</lpage>
<comment>
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1523/JNEUROSCI.0080-11.2011">doi:10.1523/JNEUROSCI.0080-11.2011</ext-link>
</comment>
<pub-id pub-id-type="pmid">21715625</pub-id>
</element-citation>
</ref>
<ref id="BHT292C19">
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Geyer</surname>
<given-names>S</given-names>
</name>
</person-group>
<source>The microstructural border between the motor and the cognitive domain in the human cerebral cortex</source>
<year>2003</year>
<edition>1st ed</edition>
<publisher-loc>Wien</publisher-loc>
<publisher-name>Springer</publisher-name>
</element-citation>
</ref>
<ref id="BHT292C20">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Geyer</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Ledberg</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Schleicher</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Kinomura</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Schormann</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Burgel</surname>
<given-names>U</given-names>
</name>
<name>
<surname>Klingberg</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Larsson</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Zilles</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Roland</surname>
<given-names>PE</given-names>
</name>
</person-group>
<article-title>Two different areas within the primary motor cortex of man</article-title>
<source>Nature</source>
<year>1996</year>
<volume>382</volume>
<fpage>805</fpage>
<lpage>807</lpage>
<comment>
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1038/382805a0">doi:10.1038/382805a0</ext-link>
</comment>
<pub-id pub-id-type="pmid">8752272</pub-id>
</element-citation>
</ref>
<ref id="BHT292C21">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Harrison</surname>
<given-names>SA</given-names>
</name>
<name>
<surname>Tong</surname>
<given-names>F</given-names>
</name>
</person-group>
<article-title>Decoding reveals the contents of visual working memory in early visual areas</article-title>
<source>Nature</source>
<year>2009</year>
<volume>458</volume>
<fpage>632</fpage>
<lpage>635</lpage>
<comment>
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1038/nature07832">doi:10.1038/nature07832</ext-link>
</comment>
<pub-id pub-id-type="pmid">19225460</pub-id>
</element-citation>
</ref>
<ref id="BHT292C22">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hsiao</surname>
<given-names>S</given-names>
</name>
</person-group>
<article-title>Central mechanisms of tactile shape perception</article-title>
<source>Curr Opin Neurobiol</source>
<year>2008</year>
<volume>18</volume>
<fpage>418</fpage>
<lpage>424</lpage>
<comment>
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1016/j.conb.2008.09.001">doi:10.1016/j.conb.2008.09.001</ext-link>
</comment>
<pub-id pub-id-type="pmid">18809491</pub-id>
</element-citation>
</ref>
<ref id="BHT292C23">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Huang</surname>
<given-names>RS</given-names>
</name>
<name>
<surname>Sereno</surname>
<given-names>MI</given-names>
</name>
</person-group>
<article-title>Dodecapus: An MR-compatible system for somatosensory stimulation</article-title>
<source>Neuroimage</source>
<year>2007</year>
<volume>34</volume>
<fpage>1060</fpage>
<lpage>1073</lpage>
<comment>
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1016/j.neuroimage.2006.10.024">doi:10.1016/j.neuroimage.2006.10.024</ext-link>
</comment>
<pub-id pub-id-type="pmid">17182259</pub-id>
</element-citation>
</ref>
<ref id="BHT292C24">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kamitani</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Tong</surname>
<given-names>F</given-names>
</name>
</person-group>
<article-title>Decoding the visual and subjective contents of the human brain</article-title>
<source>Nat Neurosci</source>
<year>2005</year>
<volume>8</volume>
<fpage>679</fpage>
<lpage>685</lpage>
<comment>
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1038/nn1444">doi:10.1038/nn1444</ext-link>
</comment>
<pub-id pub-id-type="pmid">15852014</pub-id>
</element-citation>
</ref>
<ref id="BHT292C25">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Keysers</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Kaas</surname>
<given-names>JH</given-names>
</name>
<name>
<surname>Gazzola</surname>
<given-names>V</given-names>
</name>
</person-group>
<article-title>Somatosensation in social perception</article-title>
<source>Nat Rev Neurosci</source>
<year>2010</year>
<volume>11</volume>
<fpage>417</fpage>
<lpage>428</lpage>
<comment>
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1038/nrn2833">doi:10.1038/nrn2833</ext-link>
</comment>
<pub-id pub-id-type="pmid">20445542</pub-id>
</element-citation>
</ref>
<ref id="BHT292C26">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kriegeskorte</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Goebel</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Bandettini</surname>
<given-names>P</given-names>
</name>
</person-group>
<article-title>Information-based functional brain mapping</article-title>
<source>Proc Natl Acad Sci USA</source>
<year>2006</year>
<volume>103</volume>
<fpage>3863</fpage>
<lpage>3868</lpage>
<comment>
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1073/pnas.0600244103">doi:10.1073/pnas.0600244103</ext-link>
</comment>
<pub-id pub-id-type="pmid">16537458</pub-id>
</element-citation>
</ref>
<ref id="BHT292C27">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kosslyn</surname>
<given-names>SM</given-names>
</name>
<name>
<surname>Ganis</surname>
<given-names>G</given-names>
</name>
<name>
<surname>Thompson</surname>
<given-names>WL</given-names>
</name>
</person-group>
<article-title>Neural foundations of imagery</article-title>
<source>Nat Rev Neurosci</source>
<year>2000</year>
<volume>2</volume>
<fpage>635</fpage>
<lpage>642</lpage>
<comment>
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1038/35090055">doi:10.1038/35090055</ext-link>
</comment>
<pub-id pub-id-type="pmid">11533731</pub-id>
</element-citation>
</ref>
<ref id="BHT292C28">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lancaster</surname>
<given-names>JL</given-names>
</name>
<name>
<surname>Tordesillas-Gutiérrez</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Martinez</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Salinas</surname>
<given-names>F</given-names>
</name>
<name>
<surname>Evans</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Zilles</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Mazziotta</surname>
<given-names>JC</given-names>
</name>
<name>
<surname>Fox</surname>
<given-names>PT</given-names>
</name>
</person-group>
<article-title>Bias between MNI and Talairach coordinates analyzed using the ICBM-152 brain template</article-title>
<source>Hum Brain Mapp</source>
<year>2007</year>
<volume>28</volume>
<fpage>1194</fpage>
<lpage>1205</lpage>
<comment>
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1002/hbm.20345">doi:10.1002/hbm.20345</ext-link>
</comment>
<pub-id pub-id-type="pmid">17266101</pub-id>
</element-citation>
</ref>
<ref id="BHT292C29">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>McIntosh</surname>
<given-names>AR</given-names>
</name>
<name>
<surname>Cabeza</surname>
<given-names>RE</given-names>
</name>
<name>
<surname>Lobaugh</surname>
<given-names>NJ</given-names>
</name>
</person-group>
<article-title>Analysis of neural interactions explains the activation of occipital cortex by an auditory stimulus</article-title>
<source>J Neurophysiol</source>
<year>1998</year>
<volume>80</volume>
<fpage>2790</fpage>
<lpage>2796</lpage>
<pub-id pub-id-type="pmid">9819283</pub-id>
</element-citation>
</ref>
<ref id="BHT292C30">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Meyer</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Damasio</surname>
<given-names>A</given-names>
</name>
</person-group>
<article-title>Convergence and divergence in a neural architecture for recognition and memory</article-title>
<source>Trends Neurosci</source>
<year>2009</year>
<volume>32</volume>
<fpage>376</fpage>
<lpage>382</lpage>
<comment>
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1016/j.tins.2009.04.002">doi:10.1016/j.tins.2009.04.002</ext-link>
</comment>
<pub-id pub-id-type="pmid">19520438</pub-id>
</element-citation>
</ref>
<ref id="BHT292C31">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Meyer</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Kaplan</surname>
<given-names>JT</given-names>
</name>
<name>
<surname>Essex</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Webber</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Damasio</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Damasio</surname>
<given-names>A</given-names>
</name>
</person-group>
<article-title>Predicting visual stimuli on the basis of activity in auditory cortices</article-title>
<source>Nat Neurosci</source>
<year>2010</year>
<volume>13</volume>
<fpage>667</fpage>
<lpage>668</lpage>
<comment>
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1038/nn.2533">doi:10.1038/nn.2533</ext-link>
</comment>
<pub-id pub-id-type="pmid">20436482</pub-id>
</element-citation>
</ref>
<ref id="BHT292C32">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Meyer</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Kaplan</surname>
<given-names>JT</given-names>
</name>
<name>
<surname>Essex</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Damasio</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Damasio</surname>
<given-names>AR</given-names>
</name>
</person-group>
<article-title>Seeing touch is correlated with content-specific activity in primary somatosensory cortex</article-title>
<source>Cereb Cortex</source>
<year>2011</year>
<volume>21</volume>
<fpage>2113</fpage>
<lpage>2121</lpage>
<comment>
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1093/cercor/bhq289">doi:10.1093/cercor/bhq289</ext-link>
</comment>
<pub-id pub-id-type="pmid">21330469</pub-id>
</element-citation>
</ref>
<ref id="BHT292C33">
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Milner</surname>
<given-names>AD</given-names>
</name>
<name>
<surname>Goodale</surname>
<given-names>MA</given-names>
</name>
</person-group>
<source>The visual brain in action</source>
<year>2006</year>
<edition>2nd ed</edition>
<publisher-loc>Oxford</publisher-loc>
<publisher-name>Oxford University Press</publisher-name>
</element-citation>
</ref>
<ref id="BHT292C34">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Misaki</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Kim</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Bandettini</surname>
<given-names>PA</given-names>
</name>
<name>
<surname>Kriegeskorte</surname>
<given-names>N</given-names>
</name>
</person-group>
<article-title>Comparison of multivariate classifiers and response normalizations for pattern-information fMRI</article-title>
<source>Neuroimage</source>
<year>2010</year>
<volume>53</volume>
<fpage>103</fpage>
<lpage>118</lpage>
<comment>
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1016/j.neuroimage.2010.05.051">doi:10.1016/j.neuroimage.2010.05.051</ext-link>
</comment>
<pub-id pub-id-type="pmid">20580933</pub-id>
</element-citation>
</ref>
<ref id="BHT292C52">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Morrison</surname>
<given-names>I</given-names>
</name>
<name>
<surname>Tipper</surname>
<given-names>SP</given-names>
</name>
<name>
<surname>Fenton-Adams</surname>
<given-names>WL</given-names>
</name>
<name>
<surname>Bach</surname>
<given-names>P</given-names>
</name>
</person-group>
<article-title>“Feeling” others' painful actions: the sensorimotor integration of pain and action information</article-title>
<source>Hum Brain Mapp</source>
<year>2013</year>
<volume>34</volume>
<fpage>1982</fpage>
<lpage>1998</lpage>
</element-citation>
</ref>
<ref id="BHT292C35">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Muckli</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Petro</surname>
<given-names>LS</given-names>
</name>
<name>
<surname>Smith</surname>
<given-names>FW</given-names>
</name>
</person-group>
<article-title>Backwards is the way forward: feedback in the cortical hierarchy predicts the expected future</article-title>
<source>Behav Brain Sci</source>
<year>2013</year>
<volume>36</volume>
<fpage>221</fpage>
<comment>
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1017/S0140525X12002361">doi:10.1017/S0140525X12002361</ext-link>
</comment>
<pub-id pub-id-type="pmid">23663531</pub-id>
</element-citation>
</ref>
<ref id="BHT292C36">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Nelson</surname>
<given-names>AJ</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>R</given-names>
</name>
</person-group>
<article-title>Digit somatotopy within cortical areas of the postcentral gyrus in humans</article-title>
<source>Cereb Cortex</source>
<year>2008</year>
<volume>18</volume>
<fpage>2341</fpage>
<lpage>2351</lpage>
<comment>
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1093/cercor/bhm257">doi:10.1093/cercor/bhm257</ext-link>
</comment>
<pub-id pub-id-type="pmid">18245039</pub-id>
</element-citation>
</ref>
<ref id="BHT292C37">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Nestor</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Plaut</surname>
<given-names>DC</given-names>
</name>
<name>
<surname>Behrmann</surname>
<given-names>M</given-names>
</name>
</person-group>
<article-title>Unraveling the distributed neural code of facial identity through spatiotemporal pattern analysis</article-title>
<source>Proc Natl Acad Sci USA</source>
<year>2011</year>
<volume>108</volume>
<fpage>9998</fpage>
<lpage>10003</lpage>
<comment>
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1073/pnas.1102433108">doi:10.1073/pnas.1102433108</ext-link>
</comment>
<pub-id pub-id-type="pmid">21628569</pub-id>
</element-citation>
</ref>
<ref id="BHT292C38">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Op De Beeck</surname>
<given-names>HP</given-names>
</name>
<name>
<surname>Torfs</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Wagemans</surname>
<given-names>J</given-names>
</name>
</person-group>
<article-title>Perceived shape similarity among unfamiliar objects and the organization of the human object vision pathway</article-title>
<source>J Neurosci</source>
<year>2008</year>
<volume>28</volume>
<fpage>10111</fpage>
<lpage>10123</lpage>
<comment>
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1523/JNEUROSCI.2511-08.2008">doi:10.1523/JNEUROSCI.2511-08.2008</ext-link>
</comment>
<pub-id pub-id-type="pmid">18829969</pub-id>
</element-citation>
</ref>
<ref id="BHT292C39">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Pereira</surname>
<given-names>F</given-names>
</name>
<name>
<surname>Botvinick</surname>
<given-names>M</given-names>
</name>
</person-group>
<article-title>Information mapping with pattern classifiers: a comparative study</article-title>
<source>Neuroimage</source>
<year>2011</year>
<volume>56</volume>
<fpage>476</fpage>
<lpage>496</lpage>
<comment>
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1016/j.neuroimage.2010.05.026">doi:10.1016/j.neuroimage.2010.05.026</ext-link>
</comment>
<pub-id pub-id-type="pmid">20488249</pub-id>
</element-citation>
</ref>
<ref id="BHT292C40">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rorden</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Karnath</surname>
<given-names>HO</given-names>
</name>
<name>
<surname>Bonilla</surname>
<given-names>L</given-names>
</name>
</person-group>
<article-title>Improving lesion-symptom mapping</article-title>
<source>J Cogn Neurosci</source>
<year>2007</year>
<volume>19</volume>
<fpage>1081</fpage>
<lpage>1088</lpage>
<comment>
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1162/jocn.2007.19.7.1081">doi:10.1162/jocn.2007.19.7.1081</ext-link>
</comment>
<pub-id pub-id-type="pmid">17583985</pub-id>
</element-citation>
</ref>
<ref id="BHT292C41">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rossit</surname>
<given-names>S</given-names>
</name>
<name>
<surname>McAdam</surname>
<given-names>T</given-names>
</name>
<name>
<surname>McLean</surname>
<given-names>DA</given-names>
</name>
<name>
<surname>Goodale</surname>
<given-names>MA</given-names>
</name>
<name>
<surname>Culham</surname>
<given-names>JC</given-names>
</name>
</person-group>
<article-title>fMRI reveals a lower visual field preference for hand actions in human superior parieto-occipital cortex (SPOC) and precuneus</article-title>
<source>Cortex</source>
<year>2013</year>
<volume>49</volume>
<fpage>2525</fpage>
<lpage>2541</lpage>
<pub-id pub-id-type="pmid">23453790</pub-id>
</element-citation>
</ref>
<ref id="BHT292C42">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ruben</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Schwiemann</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Deuchert</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Meyer</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Krause</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Curio</surname>
<given-names>G</given-names>
</name>
<name>
<surname>Villringer</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Kurth</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Villringer</surname>
<given-names>A</given-names>
</name>
</person-group>
<article-title>Somatotopic organization of human secondary somatosensory cortex</article-title>
<source>Cereb Cortex</source>
<year>2001</year>
<volume>11</volume>
<fpage>463</fpage>
<lpage>473</lpage>
<comment>
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1093/cercor/11.5.463">doi:10.1093/cercor/11.5.463</ext-link>
</comment>
<pub-id pub-id-type="pmid">11313298</pub-id>
</element-citation>
</ref>
<ref id="BHT292C43">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sakata</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Taira</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Kusunoki</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Murata</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Tanaka</surname>
<given-names>Y</given-names>
</name>
</person-group>
<article-title>The parietal association cortex in depth perception and visual control of hand action</article-title>
<source>Trends Neurosci</source>
<year>1997</year>
<volume>20</volume>
<fpage>350</fpage>
<lpage>357</lpage>
<comment>
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1016/S0166-2236(97)01067-9">doi:10.1016/S0166-2236(97)01067-9</ext-link>
</comment>
<pub-id pub-id-type="pmid">9246729</pub-id>
</element-citation>
</ref>
<ref id="BHT292C44">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Smith</surname>
<given-names>FW</given-names>
</name>
<name>
<surname>Muckli</surname>
<given-names>L</given-names>
</name>
</person-group>
<article-title>Non-stimulated early visual areas carry information about surrounding context</article-title>
<source>Proc Natl Acad Sci USA</source>
<year>2010</year>
<volume>107</volume>
<fpage>20099</fpage>
<lpage>20103</lpage>
<comment>
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1073/pnas.1000233107">doi:10.1073/pnas.1000233107</ext-link>
</comment>
<pub-id pub-id-type="pmid">21041652</pub-id>
</element-citation>
</ref>
<ref id="BHT292C45">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Stokes</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Thompson</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Cusack</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Duncan</surname>
<given-names>J</given-names>
</name>
</person-group>
<article-title>Top-down activation of shape-specific population codes in visual cortex during mental imagery</article-title>
<source>J Neurosci</source>
<year>2009</year>
<volume>29</volume>
<fpage>1565</fpage>
<lpage>1572</lpage>
<comment>
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1523/JNEUROSCI.4657-08.2009">doi:10.1523/JNEUROSCI.4657-08.2009</ext-link>
</comment>
<pub-id pub-id-type="pmid">19193903</pub-id>
</element-citation>
</ref>
<ref id="BHT292C46">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Van Westen</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Fransson</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Olsrud</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Rosen</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Lundborg</surname>
<given-names>G</given-names>
</name>
<name>
<surname>Larsson</surname>
<given-names>E</given-names>
</name>
</person-group>
<article-title>Finger somatotopy in area 3b: an fMRI-study</article-title>
<source>BMC Neurosci</source>
<year>2004</year>
<volume>5</volume>
<fpage>28</fpage>
<comment>
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1186/1471-2202-5-28">doi:10.1186/1471-2202-5-28</ext-link>
</comment>
<pub-id pub-id-type="pmid">15320953</pub-id>
</element-citation>
</ref>
<ref id="BHT292C47">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Vetter</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Smith</surname>
<given-names>FW</given-names>
</name>
<name>
<surname>Muckli</surname>
<given-names>L</given-names>
</name>
</person-group>
<article-title>Decoding natural sounds in early visual cortex</article-title>
<source>J Vis</source>
<year>2011</year>
<volume>11</volume>
<fpage>779</fpage>
<comment>
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1167/11.11.779">doi:10.1167/11.11.779</ext-link>
</comment>
</element-citation>
</ref>
<ref id="BHT292C48">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Walther</surname>
<given-names>DB</given-names>
</name>
<name>
<surname>Caddigan</surname>
<given-names>E</given-names>
</name>
<name>
<surname>Fei-Fei</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Beck</surname>
<given-names>DM</given-names>
</name>
</person-group>
<article-title>Natural scene categories revealed in distributed patterns of activity in the human brain</article-title>
<source>J Neurosci</source>
<year>2009</year>
<volume>29</volume>
<fpage>10573</fpage>
<lpage>10581</lpage>
<comment>
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1523/JNEUROSCI.0559-09.2009">doi:10.1523/JNEUROSCI.0559-09.2009</ext-link>
</comment>
<pub-id pub-id-type="pmid">19710310</pub-id>
</element-citation>
</ref>
<ref id="BHT292C49">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Walther</surname>
<given-names>DB</given-names>
</name>
<name>
<surname>Chai</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Caddigan</surname>
<given-names>E</given-names>
</name>
<name>
<surname>Beck</surname>
<given-names>DM</given-names>
</name>
<name>
<surname>Fei-Fei</surname>
<given-names>L</given-names>
</name>
</person-group>
<article-title>Simple line drawings suffice for functional MRI decoding of natural scene categories</article-title>
<source>Proc Natl Acad Sci USA</source>
<year>2011</year>
<volume>108</volume>
<issue>23</issue>
<fpage>9661</fpage>
<lpage>9666</lpage>
<comment>
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1073/pnas.1015666108">doi:10.1073/pnas.1015666108</ext-link>
</comment>
<pub-id pub-id-type="pmid">21593417</pub-id>
</element-citation>
</ref>
<ref id="BHT292C50">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zangenehpour</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Zatorre</surname>
<given-names>RJ</given-names>
</name>
</person-group>
<article-title>Crossmodal recruitment of primary visual cortex following brief exposure to bimodal audiovisual stimuli</article-title>
<source>Neuropsychologia</source>
<year>2010</year>
<volume>48</volume>
<fpage>591</fpage>
<lpage>600</lpage>
<comment>
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1016/j.neuropsychologia.2009.10.022">doi:10.1016/j.neuropsychologia.2009.10.022</ext-link>
</comment>
<pub-id pub-id-type="pmid">19883668</pub-id>
</element-citation>
</ref>
<ref id="BHT292C51">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zhou</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Fuster</surname>
<given-names>JM</given-names>
</name>
</person-group>
<article-title>Visuo-tactile cross-modal associations in cortical somatosensory cells</article-title>
<source>Proc Natl Acad Sci USA</source>
<year>2000</year>
<volume>97</volume>
<fpage>9777</fpage>
<lpage>9782</lpage>
<comment>
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1073/pnas.97.17.9777">doi:10.1073/pnas.97.17.9777</ext-link>
</comment>
<pub-id pub-id-type="pmid">10944237</pub-id>
</element-citation>
</ref>
</ref-list>
</back>
</pmc>
<affiliations>
<list>
<country>
<li>Canada</li>
<li>Royaume-Uni</li>
</country>
</list>
<tree>
<country name="Royaume-Uni">
<noRegion>
<name sortKey="Smith, Fraser W" sort="Smith, Fraser W" uniqKey="Smith F" first="Fraser W." last="Smith">Fraser W. Smith</name>
</noRegion>
</country>
<country name="Canada">
<noRegion>
<name sortKey="Smith, Fraser W" sort="Smith, Fraser W" uniqKey="Smith F" first="Fraser W." last="Smith">Fraser W. Smith</name>
</noRegion>
<name sortKey="Goodale, Melvyn A" sort="Goodale, Melvyn A" uniqKey="Goodale M" first="Melvyn A." last="Goodale">Melvyn A. Goodale</name>
</country>
</tree>
</affiliations>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Pmc/Checkpoint
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 001333 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Pmc/Checkpoint/biblio.hfd -nk 001333 | SxmlIndent | more
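For repeated lookups, the HfdSelect call above can be wrapped in a small shell helper. The sketch below only reuses the commands shown here; the function name show_record and the default key are illustrative, not part of Dilib.

# Minimal helper (assumes EXPLOR_STEP is set as above; name and default key are illustrative).
show_record () {
    key="${1:-001333}"    # internal record identifier passed to -nk
    HfdSelect -h "$EXPLOR_STEP/biblio.hfd" -nk "$key" | SxmlIndent | more
}

# Example: show_record 001333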

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Pmc
   |étape=   Checkpoint
   |type=    RBID
   |clé=     PMC:4380001
   |texte=   Decoding Visual Object Categories in Early Somatosensory Cortex
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Pmc/Checkpoint/RBID.i   -Sk "pubmed:24122136" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Pmc/Checkpoint/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 
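To generate pages for several records, the same three-stage pipeline can be repeated over a list of PubMed identifiers. This is only a sketch built from the commands above; the input file pmids.txt (one identifier per line) is an assumption, not something Dilib produces.

# For each PubMed ID in pmids.txt (hypothetical input file),
# select the matching record and convert it to a Wicri wiki page.
while read -r pmid; do
    HfdIndexSelect -h $EXPLOR_AREA/Data/Pmc/Checkpoint/RBID.i -Sk "pubmed:$pmid" \
        | HfdSelect -Kh $EXPLOR_AREA/Data/Pmc/Checkpoint/biblio.hfd \
        | NlmPubMed2Wicri -a HapticV1
done < pmids.txt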

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024