Exploration server on haptic devices


Auditory modulation of visual stimulus encoding in human retinotopic cortex

Internal identifier: 002131 (Pmc/Curation); previous: 002130; next: 002132

Authors: Benjamin De Haas; D. Samuel Schwarzkopf; Maren Urner; Geraint Rees

Source:

RBID: PMC:3625122

Abstract

Sounds can modulate visual perception as well as neural activity in retinotopic cortex. Most studies in this context investigated how sounds change neural amplitude and oscillatory phase reset in visual cortex. However, recent studies in macaque monkeys show that congruence of audio-visual stimuli also modulates the amount of stimulus information carried by spiking activity of primary auditory and visual neurons. Here, we used naturalistic video stimuli and recorded the spatial patterns of functional MRI signals in human retinotopic cortex to test whether the discriminability of such patterns varied with the presence and congruence of co-occurring sounds. We found that incongruent sounds significantly impaired stimulus decoding from area V2 and there was a similar trend for V3. This effect was associated with reduced inter-trial reliability of patterns (i.e. higher levels of noise), but was not accompanied by any detectable modulation of overall signal amplitude. We conclude that sounds modulate naturalistic stimulus encoding in early human retinotopic cortex without affecting overall signal amplitude. Subthreshold modulation, oscillatory phase reset and dynamic attentional modulation are candidate neural and cognitive mechanisms mediating these effects.


URL:
DOI: 10.1016/j.neuroimage.2012.12.061
PubMed: 23296187
PubMed Central: 3625122

Links to Exploration step

PMC:3625122

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Auditory modulation of visual stimulus encoding in human retinotopic cortex</title>
<author>
<name sortKey="De Haas, Benjamin" sort="De Haas, Benjamin" uniqKey="De Haas B" first="Benjamin" last="De Haas">Benjamin De Haas</name>
</author>
<author>
<name sortKey="Schwarzkopf, D Samuel" sort="Schwarzkopf, D Samuel" uniqKey="Schwarzkopf D" first="D. Samuel" last="Schwarzkopf">D. Samuel Schwarzkopf</name>
</author>
<author>
<name sortKey="Urner, Maren" sort="Urner, Maren" uniqKey="Urner M" first="Maren" last="Urner">Maren Urner</name>
</author>
<author>
<name sortKey="Rees, Geraint" sort="Rees, Geraint" uniqKey="Rees G" first="Geraint" last="Rees">Geraint Rees</name>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">23296187</idno>
<idno type="pmc">3625122</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3625122</idno>
<idno type="RBID">PMC:3625122</idno>
<idno type="doi">10.1016/j.neuroimage.2012.12.061</idno>
<date when="2013">2013</date>
<idno type="wicri:Area/Pmc/Corpus">002131</idno>
<idno type="wicri:Area/Pmc/Curation">002131</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Auditory modulation of visual stimulus encoding in human retinotopic cortex</title>
<author>
<name sortKey="De Haas, Benjamin" sort="De Haas, Benjamin" uniqKey="De Haas B" first="Benjamin" last="De Haas">Benjamin De Haas</name>
</author>
<author>
<name sortKey="Schwarzkopf, D Samuel" sort="Schwarzkopf, D Samuel" uniqKey="Schwarzkopf D" first="D. Samuel" last="Schwarzkopf">D. Samuel Schwarzkopf</name>
</author>
<author>
<name sortKey="Urner, Maren" sort="Urner, Maren" uniqKey="Urner M" first="Maren" last="Urner">Maren Urner</name>
</author>
<author>
<name sortKey="Rees, Geraint" sort="Rees, Geraint" uniqKey="Rees G" first="Geraint" last="Rees">Geraint Rees</name>
</author>
</analytic>
<series>
<title level="j">Neuroimage</title>
<idno type="ISSN">1053-8119</idno>
<idno type="eISSN">1095-9572</idno>
<imprint>
<date when="2013">2013</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>Sounds can modulate visual perception as well as neural activity in retinotopic cortex. Most studies in this context investigated how sounds change neural amplitude and oscillatory phase reset in visual cortex. However, recent studies in macaque monkeys show that congruence of audio-visual stimuli also modulates the amount of stimulus information carried by spiking activity of primary auditory and visual neurons. Here, we used naturalistic video stimuli and recorded the spatial patterns of functional MRI signals in human retinotopic cortex to test whether the discriminability of such patterns varied with the presence and congruence of co-occurring sounds. We found that incongruent sounds significantly impaired stimulus decoding from area V2 and there was a similar trend for V3. This effect was associated with reduced inter-trial reliability of patterns (i.e. higher levels of noise), but was not accompanied by any detectable modulation of overall signal amplitude. We conclude that sounds modulate naturalistic stimulus encoding in early human retinotopic cortex without affecting overall signal amplitude. Subthreshold modulation, oscillatory phase reset and dynamic attentional modulation are candidate neural and cognitive mechanisms mediating these effects.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Alink, A" uniqKey="Alink A">A. Alink</name>
</author>
<author>
<name sortKey="Euler, F" uniqKey="Euler F">F. Euler</name>
</author>
<author>
<name sortKey="Kriegeskorte, N" uniqKey="Kriegeskorte N">N. Kriegeskorte</name>
</author>
<author>
<name sortKey="Singer, W" uniqKey="Singer W">W. Singer</name>
</author>
<author>
<name sortKey="Kohler, A" uniqKey="Kohler A">A. Kohler</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Allman, B L" uniqKey="Allman B">B.L. Allman</name>
</author>
<author>
<name sortKey="Meredith, M A" uniqKey="Meredith M">M.A. Meredith</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Allman, B L" uniqKey="Allman B">B.L. Allman</name>
</author>
<author>
<name sortKey="Bittencourt Navarrete, R E" uniqKey="Bittencourt Navarrete R">R.E. Bittencourt-Navarrete</name>
</author>
<author>
<name sortKey="Keniston, L P" uniqKey="Keniston L">L.P. Keniston</name>
</author>
<author>
<name sortKey="Medina, A E" uniqKey="Medina A">A.E. Medina</name>
</author>
<author>
<name sortKey="Wang, M Y" uniqKey="Wang M">M.Y. Wang</name>
</author>
<author>
<name sortKey="Meredith, M A" uniqKey="Meredith M">M.A. Meredith</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Allman, B L" uniqKey="Allman B">B.L. Allman</name>
</author>
<author>
<name sortKey="Keniston, L P" uniqKey="Keniston ">Æ.L.P. Keniston</name>
</author>
<author>
<name sortKey="Meredith, M A" uniqKey="Meredith M">M.A. Meredith</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Beauchamp, M S" uniqKey="Beauchamp M">M.S. Beauchamp</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Brainard, D H" uniqKey="Brainard D">D.H. Brainard</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Budinger, E" uniqKey="Budinger E">E. Budinger</name>
</author>
<author>
<name sortKey="Heil, P" uniqKey="Heil P">P. Heil</name>
</author>
<author>
<name sortKey="Hess, A" uniqKey="Hess A">A. Hess</name>
</author>
<author>
<name sortKey="Scheich, H" uniqKey="Scheich H">H. Scheich</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Cappe, C" uniqKey="Cappe C">C. Cappe</name>
</author>
<author>
<name sortKey="Thut, G" uniqKey="Thut G">G. Thut</name>
</author>
<author>
<name sortKey="Romei, V" uniqKey="Romei V">V. Romei</name>
</author>
<author>
<name sortKey="Murray, M M" uniqKey="Murray M">M.M. Murray</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Cardoso, M M B" uniqKey="Cardoso M">M.M.B. Cardoso</name>
</author>
<author>
<name sortKey="Sirotin, Y B" uniqKey="Sirotin Y">Y.B. Sirotin</name>
</author>
<author>
<name sortKey="Lima, B" uniqKey="Lima B">B. Lima</name>
</author>
<author>
<name sortKey="Glushenkova, E" uniqKey="Glushenkova E">E. Glushenkova</name>
</author>
<author>
<name sortKey="Das, A" uniqKey="Das A">A. Das</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Clavagnier, S" uniqKey="Clavagnier S">S. Clavagnier</name>
</author>
<author>
<name sortKey="Falchier, A" uniqKey="Falchier A">A. Falchier</name>
</author>
<author>
<name sortKey="Kennedy, H" uniqKey="Kennedy H">H. Kennedy</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Dahl, C D" uniqKey="Dahl C">C.D. Dahl</name>
</author>
<author>
<name sortKey="Logothetis, N K" uniqKey="Logothetis N">N.K. Logothetis</name>
</author>
<author>
<name sortKey="Kayser, C" uniqKey="Kayser C">C. Kayser</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Deichmann, R" uniqKey="Deichmann R">R. Deichmann</name>
</author>
<author>
<name sortKey="Schwarzbauer, C" uniqKey="Schwarzbauer C">C. Schwarzbauer</name>
</author>
<author>
<name sortKey="Turner, R" uniqKey="Turner R">R. Turner</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Destrieux, C" uniqKey="Destrieux C">C. Destrieux</name>
</author>
<author>
<name sortKey="Fischl, B" uniqKey="Fischl B">B. Fischl</name>
</author>
<author>
<name sortKey="Dale, A" uniqKey="Dale A">A. Dale</name>
</author>
<author>
<name sortKey="Halgren, E" uniqKey="Halgren E">E. Halgren</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Driver, J" uniqKey="Driver J">J. Driver</name>
</author>
<author>
<name sortKey="Noesselt, T" uniqKey="Noesselt T">T. Noesselt</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ernst, M O" uniqKey="Ernst M">M.O. Ernst</name>
</author>
<author>
<name sortKey="Banks, M S" uniqKey="Banks M">M.S. Banks</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Falchier, A" uniqKey="Falchier A">A. Falchier</name>
</author>
<author>
<name sortKey="Clavagnier, S" uniqKey="Clavagnier S">S. Clavagnier</name>
</author>
<author>
<name sortKey="Barone, P" uniqKey="Barone P">P. Barone</name>
</author>
<author>
<name sortKey="Kennedy, H" uniqKey="Kennedy H">H. Kennedy</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fetsch, C R" uniqKey="Fetsch C">C.R. Fetsch</name>
</author>
<author>
<name sortKey="Pouget, A" uniqKey="Pouget A">A. Pouget</name>
</author>
<author>
<name sortKey="Deangelis, G C" uniqKey="Deangelis G">G.C. DeAngelis</name>
</author>
<author>
<name sortKey="Angelaki, D E" uniqKey="Angelaki D">D.E. Angelaki</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fishman, M C" uniqKey="Fishman M">M.C. Fishman</name>
</author>
<author>
<name sortKey="Michael, P" uniqKey="Michael P">P. Michael</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Formisano, E" uniqKey="Formisano E">E. Formisano</name>
</author>
<author>
<name sortKey="De Martino, F" uniqKey="De Martino F">F. De Martino</name>
</author>
<author>
<name sortKey="Bonte, M" uniqKey="Bonte M">M. Bonte</name>
</author>
<author>
<name sortKey="Goebel, R" uniqKey="Goebel R">R. Goebel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Friston, K" uniqKey="Friston K">K. Friston</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Friston, K J" uniqKey="Friston K">K.J. Friston</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Friston, K J" uniqKey="Friston K">K.J. Friston</name>
</author>
<author>
<name sortKey="Buechel, C" uniqKey="Buechel C">C. Buechel</name>
</author>
<author>
<name sortKey="Fink, G R" uniqKey="Fink G">G.R. Fink</name>
</author>
<author>
<name sortKey="Morris, J" uniqKey="Morris J">J. Morris</name>
</author>
<author>
<name sortKey="Rolls, E" uniqKey="Rolls E">E. Rolls</name>
</author>
<author>
<name sortKey="Dolan, R J" uniqKey="Dolan R">R.J. Dolan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Giani, A S" uniqKey="Giani A">A.S. Giani</name>
</author>
<author>
<name sortKey="Ortiz, E" uniqKey="Ortiz E">E. Ortiz</name>
</author>
<author>
<name sortKey="Belardinelli, P" uniqKey="Belardinelli P">P. Belardinelli</name>
</author>
<author>
<name sortKey="Kleiner, M" uniqKey="Kleiner M">M. Kleiner</name>
</author>
<author>
<name sortKey="Preissl, H" uniqKey="Preissl H">H. Preissl</name>
</author>
<author>
<name sortKey="Noppeney, U" uniqKey="Noppeney U">U. Noppeney</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hsieh, P J" uniqKey="Hsieh P">P.-J. Hsieh</name>
</author>
<author>
<name sortKey="Colas, J T" uniqKey="Colas J">J.T. Colas</name>
</author>
<author>
<name sortKey="Kanwisher, N" uniqKey="Kanwisher N">N. Kanwisher</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hsu, C W" uniqKey="Hsu C">C.-W. Hsu</name>
</author>
<author>
<name sortKey="Lin, C J" uniqKey="Lin C">C.-J. Lin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Iurilli, G" uniqKey="Iurilli G">G. Iurilli</name>
</author>
<author>
<name sortKey="Ghezzi, D" uniqKey="Ghezzi D">D. Ghezzi</name>
</author>
<author>
<name sortKey="Olcese, U" uniqKey="Olcese U">U. Olcese</name>
</author>
<author>
<name sortKey="Lassi, G" uniqKey="Lassi G">G. Lassi</name>
</author>
<author>
<name sortKey="Nazzaro, C" uniqKey="Nazzaro C">C. Nazzaro</name>
</author>
<author>
<name sortKey="Tonini, R" uniqKey="Tonini R">R. Tonini</name>
</author>
<author>
<name sortKey="Tucci, V" uniqKey="Tucci V">V. Tucci</name>
</author>
<author>
<name sortKey="Benfenati, F" uniqKey="Benfenati F">F. Benfenati</name>
</author>
<author>
<name sortKey="Medini, P" uniqKey="Medini P">P. Medini</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Jones, M R" uniqKey="Jones M">M.R. Jones</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kayser, C" uniqKey="Kayser C">C. Kayser</name>
</author>
<author>
<name sortKey="Logothetis, N K" uniqKey="Logothetis N">N.K. Logothetis</name>
</author>
<author>
<name sortKey="Panzeri, S" uniqKey="Panzeri S">S. Panzeri</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Klemen, J" uniqKey="Klemen J">J. Klemen</name>
</author>
<author>
<name sortKey="Chambers, C D" uniqKey="Chambers C">C.D. Chambers</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kriegeskorte, N" uniqKey="Kriegeskorte N">N. Kriegeskorte</name>
</author>
<author>
<name sortKey="Goebel, R" uniqKey="Goebel R">R. Goebel</name>
</author>
<author>
<name sortKey="Bandettini, P" uniqKey="Bandettini P">P. Bandettini</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lakatos, P" uniqKey="Lakatos P">P. Lakatos</name>
</author>
<author>
<name sortKey="Chen, C M" uniqKey="Chen C">C.-M. Chen</name>
</author>
<author>
<name sortKey="O Connell, M N" uniqKey="O Connell M">M.N. O'Connell</name>
</author>
<author>
<name sortKey="Mills, A" uniqKey="Mills A">A. Mills</name>
</author>
<author>
<name sortKey="Schroeder, C E" uniqKey="Schroeder C">C.E. Schroeder</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lakatos, P" uniqKey="Lakatos P">P. Lakatos</name>
</author>
<author>
<name sortKey="O Connell, M N" uniqKey="O Connell M">M.N. O'Connell</name>
</author>
<author>
<name sortKey="Barczak, A" uniqKey="Barczak A">A. Barczak</name>
</author>
<author>
<name sortKey="Mills, A" uniqKey="Mills A">A. Mills</name>
</author>
<author>
<name sortKey="Javitt, D C" uniqKey="Javitt D">D.C. Javitt</name>
</author>
<author>
<name sortKey="Schroeder, C E" uniqKey="Schroeder C">C.E. Schroeder</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Large, E" uniqKey="Large E">E. Large</name>
</author>
<author>
<name sortKey="Jones, M" uniqKey="Jones M">M. Jones</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Larsson, J" uniqKey="Larsson J">J. Larsson</name>
</author>
<author>
<name sortKey="Heeger, D J" uniqKey="Heeger D">D.J. Heeger</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Luo, H" uniqKey="Luo H">H. Luo</name>
</author>
<author>
<name sortKey="Liu, Z" uniqKey="Liu Z">Z. Liu</name>
</author>
<author>
<name sortKey="Poeppel, D" uniqKey="Poeppel D">D. Poeppel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Macaluso, E" uniqKey="Macaluso E">E. Macaluso</name>
</author>
<author>
<name sortKey="Frith, C D" uniqKey="Frith C">C.D. Frith</name>
</author>
<author>
<name sortKey="Driver, J" uniqKey="Driver J">J. Driver</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Martuzzi, R" uniqKey="Martuzzi R">R. Martuzzi</name>
</author>
<author>
<name sortKey="Murray, M M" uniqKey="Murray M">M.M. Murray</name>
</author>
<author>
<name sortKey="Michel, C M" uniqKey="Michel C">C.M. Michel</name>
</author>
<author>
<name sortKey="Thiran, J P" uniqKey="Thiran J">J.-P. Thiran</name>
</author>
<author>
<name sortKey="Maeder, P P" uniqKey="Maeder P">P.P. Maeder</name>
</author>
<author>
<name sortKey="Clarke, S" uniqKey="Clarke S">S. Clarke</name>
</author>
<author>
<name sortKey="Meuli, R A" uniqKey="Meuli R">R.A. Meuli</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Meienbrock, A" uniqKey="Meienbrock A">A. Meienbrock</name>
</author>
<author>
<name sortKey="Naumer, M J" uniqKey="Naumer M">M.J. Naumer</name>
</author>
<author>
<name sortKey="Doehrmann, O" uniqKey="Doehrmann O">O. Doehrmann</name>
</author>
<author>
<name sortKey="Singer, W" uniqKey="Singer W">W. Singer</name>
</author>
<author>
<name sortKey="Muckli, L" uniqKey="Muckli L">L. Muckli</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mesulam, M M" uniqKey="Mesulam M">M.M. Mesulam</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Meyer, K" uniqKey="Meyer K">K. Meyer</name>
</author>
<author>
<name sortKey="Kaplan, J T" uniqKey="Kaplan J">J.T. Kaplan</name>
</author>
<author>
<name sortKey="Essex, R" uniqKey="Essex R">R. Essex</name>
</author>
<author>
<name sortKey="Webber, C" uniqKey="Webber C">C. Webber</name>
</author>
<author>
<name sortKey="Damasio, H" uniqKey="Damasio H">H. Damasio</name>
</author>
<author>
<name sortKey="Damasio, A" uniqKey="Damasio A">A. Damasio</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Molholm, S" uniqKey="Molholm S">S. Molholm</name>
</author>
<author>
<name sortKey="Ritter, W" uniqKey="Ritter W">W. Ritter</name>
</author>
<author>
<name sortKey="Murray, M M" uniqKey="Murray M">M.M. Murray</name>
</author>
<author>
<name sortKey="Javitt, D C" uniqKey="Javitt D">D.C. Javitt</name>
</author>
<author>
<name sortKey="Schroeder, C E" uniqKey="Schroeder C">C.E. Schroeder</name>
</author>
<author>
<name sortKey="Foxe, J J" uniqKey="Foxe J">J.J. Foxe</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Morey, R D" uniqKey="Morey R">R.D. Morey</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Naue, N" uniqKey="Naue N">N. Naue</name>
</author>
<author>
<name sortKey="Rach, S" uniqKey="Rach S">S. Rach</name>
</author>
<author>
<name sortKey="Struber, D" uniqKey="Struber D">D. Strüber</name>
</author>
<author>
<name sortKey="Huster, R J" uniqKey="Huster R">R.J. Huster</name>
</author>
<author>
<name sortKey="Zaehle, T" uniqKey="Zaehle T">T. Zaehle</name>
</author>
<author>
<name sortKey="Korner, U" uniqKey="Korner U">U. Körner</name>
</author>
<author>
<name sortKey="Herrmann, C S" uniqKey="Herrmann C">C.S. Herrmann</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nishimoto, S" uniqKey="Nishimoto S">S. Nishimoto</name>
</author>
<author>
<name sortKey="Vu, A T" uniqKey="Vu A">A.T. Vu</name>
</author>
<author>
<name sortKey="Naselaris, T" uniqKey="Naselaris T">T. Naselaris</name>
</author>
<author>
<name sortKey="Benjamini, Y" uniqKey="Benjamini Y">Y. Benjamini</name>
</author>
<author>
<name sortKey="Yu, B" uniqKey="Yu B">B. Yu</name>
</author>
<author>
<name sortKey="Gallant, J L" uniqKey="Gallant J">J.L. Gallant</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Noesselt, T" uniqKey="Noesselt T">T. Noesselt</name>
</author>
<author>
<name sortKey="Rieger, J W" uniqKey="Rieger J">J.W. Rieger</name>
</author>
<author>
<name sortKey="Schoenfeld, M A" uniqKey="Schoenfeld M">M.A. Schoenfeld</name>
</author>
<author>
<name sortKey="Kanowski, M" uniqKey="Kanowski M">M. Kanowski</name>
</author>
<author>
<name sortKey="Hinrichs, H" uniqKey="Hinrichs H">H. Hinrichs</name>
</author>
<author>
<name sortKey="Heinze, H J" uniqKey="Heinze H">H.-J. Heinze</name>
</author>
<author>
<name sortKey="Driver, J" uniqKey="Driver J">J. Driver</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Pelli, D G" uniqKey="Pelli D">D.G. Pelli</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rockland, K S" uniqKey="Rockland K">K.S. Rockland</name>
</author>
<author>
<name sortKey="Ojima, H" uniqKey="Ojima H">H. Ojima</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Romei, V" uniqKey="Romei V">V. Romei</name>
</author>
<author>
<name sortKey="Murray, M M" uniqKey="Murray M">M.M. Murray</name>
</author>
<author>
<name sortKey="Cappe, C" uniqKey="Cappe C">C. Cappe</name>
</author>
<author>
<name sortKey="Thut, G" uniqKey="Thut G">G. Thut</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Romei, V" uniqKey="Romei V">V. Romei</name>
</author>
<author>
<name sortKey="Gross, J" uniqKey="Gross J">J. Gross</name>
</author>
<author>
<name sortKey="Thut, G" uniqKey="Thut G">G. Thut</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schroeder, C E" uniqKey="Schroeder C">C.E. Schroeder</name>
</author>
<author>
<name sortKey="Lakatos, P" uniqKey="Lakatos P">P. Lakatos</name>
</author>
<author>
<name sortKey="Kajikawa, Y" uniqKey="Kajikawa Y">Y. Kajikawa</name>
</author>
<author>
<name sortKey="Partan, S" uniqKey="Partan S">S. Partan</name>
</author>
<author>
<name sortKey="Puce, A" uniqKey="Puce A">A. Puce</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sereno, M I" uniqKey="Sereno M">M.I. Sereno</name>
</author>
<author>
<name sortKey="Dale, A M" uniqKey="Dale A">A.M. Dale</name>
</author>
<author>
<name sortKey="Reppas, J B" uniqKey="Reppas J">J.B. Reppas</name>
</author>
<author>
<name sortKey="Kwong, K K" uniqKey="Kwong K">K.K. Kwong</name>
</author>
<author>
<name sortKey="Belliveau, J W" uniqKey="Belliveau J">J.W. Belliveau</name>
</author>
<author>
<name sortKey="Brady, T J" uniqKey="Brady T">T.J. Brady</name>
</author>
<author>
<name sortKey="Rosen, B R" uniqKey="Rosen B">B.R. Rosen</name>
</author>
<author>
<name sortKey="Tootell, R B" uniqKey="Tootell R">R.B. Tootell</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Staeren, N" uniqKey="Staeren N">N. Staeren</name>
</author>
<author>
<name sortKey="Renvall, H" uniqKey="Renvall H">H. Renvall</name>
</author>
<author>
<name sortKey="De Martino, F" uniqKey="De Martino F">F. De Martino</name>
</author>
<author>
<name sortKey="Goebel, R" uniqKey="Goebel R">R. Goebel</name>
</author>
<author>
<name sortKey="Formisano, E" uniqKey="Formisano E">E. Formisano</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Thorne, J D" uniqKey="Thorne J">J.D. Thorne</name>
</author>
<author>
<name sortKey="De Vos, M" uniqKey="De Vos M">M. De Vos</name>
</author>
<author>
<name sortKey="Viola, F C" uniqKey="Viola F">F.C. Viola</name>
</author>
<author>
<name sortKey="Debener, S" uniqKey="Debener S">S. Debener</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Watkins, S" uniqKey="Watkins S">S. Watkins</name>
</author>
<author>
<name sortKey="Shams, L" uniqKey="Shams L">L. Shams</name>
</author>
<author>
<name sortKey="Tanaka, S" uniqKey="Tanaka S">S. Tanaka</name>
</author>
<author>
<name sortKey="Haynes, J D" uniqKey="Haynes J">J.D. Haynes</name>
</author>
<author>
<name sortKey="Rees, G" uniqKey="Rees G">G. Rees</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Werner, S" uniqKey="Werner S">S. Werner</name>
</author>
<author>
<name sortKey="Noppeney, U" uniqKey="Noppeney U">U. Noppeney</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">Neuroimage</journal-id>
<journal-id journal-id-type="iso-abbrev">Neuroimage</journal-id>
<journal-title-group>
<journal-title>Neuroimage</journal-title>
</journal-title-group>
<issn pub-type="ppub">1053-8119</issn>
<issn pub-type="epub">1095-9572</issn>
<publisher>
<publisher-name>Academic Press</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">23296187</article-id>
<article-id pub-id-type="pmc">3625122</article-id>
<article-id pub-id-type="publisher-id">S1053-8119(12)01247-5</article-id>
<article-id pub-id-type="doi">10.1016/j.neuroimage.2012.12.061</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Auditory modulation of visual stimulus encoding in human retinotopic cortex</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>de Haas</surname>
<given-names>Benjamin</given-names>
</name>
<email>benjamin.haas.09@ucl.ac.uk</email>
<xref rid="cr0005" ref-type="corresp"></xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Schwarzkopf</surname>
<given-names>D. Samuel</given-names>
</name>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Urner</surname>
<given-names>Maren</given-names>
</name>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Rees</surname>
<given-names>Geraint</given-names>
</name>
</contrib>
</contrib-group>
<aff id="af0005">UCL Institute of Cognitive Neuroscience, 17 Queen Square, London WC1N 3BG, UK</aff>
<aff id="af0010">Wellcome Trust Centre for Neuroimaging, University College London, 12 Queen Square, London WC1N 3AR, UK</aff>
<author-notes>
<corresp id="cr0005">
<label></label>
Corresponding author at: Institute of Cognitive Neuroscience, University College London, Alexandra House, 17 Queen Square, London WC1N 3AR, UK.
<email>benjamin.haas.09@ucl.ac.uk</email>
</corresp>
</author-notes>
<pub-date pub-type="pmc-release">
<day>15</day>
<month>4</month>
<year>2013</year>
</pub-date>
<pmc-comment> PMC Release delay is 0 months and 0 days and was based on .</pmc-comment>
<pub-date pub-type="ppub">
<day>15</day>
<month>4</month>
<year>2013</year>
</pub-date>
<volume>70</volume>
<fpage>258</fpage>
<lpage>267</lpage>
<history>
<date date-type="accepted">
<day>23</day>
<month>12</month>
<year>2012</year>
</date>
</history>
<permissions>
<copyright-statement>© 2013 Elsevier Inc.</copyright-statement>
<copyright-year>2013</copyright-year>
<copyright-holder>Elsevier Inc.</copyright-holder>
<license xlink:href="https://creativecommons.org/licenses/by/3.0/">
<license-p>Open Access under
<ext-link ext-link-type="uri" xlink:href="https://creativecommons.org/licenses/by/3.0/">CC BY 3.0</ext-link>
license</license-p>
</license>
</permissions>
<abstract>
<p>Sounds can modulate visual perception as well as neural activity in retinotopic cortex. Most studies in this context investigated how sounds change neural amplitude and oscillatory phase reset in visual cortex. However, recent studies in macaque monkeys show that congruence of audio-visual stimuli also modulates the amount of stimulus information carried by spiking activity of primary auditory and visual neurons. Here, we used naturalistic video stimuli and recorded the spatial patterns of functional MRI signals in human retinotopic cortex to test whether the discriminability of such patterns varied with the presence and congruence of co-occurring sounds. We found that incongruent sounds significantly impaired stimulus decoding from area V2 and there was a similar trend for V3. This effect was associated with reduced inter-trial reliability of patterns (i.e. higher levels of noise), but was not accompanied by any detectable modulation of overall signal amplitude. We conclude that sounds modulate naturalistic stimulus encoding in early human retinotopic cortex without affecting overall signal amplitude. Subthreshold modulation, oscillatory phase reset and dynamic attentional modulation are candidate neural and cognitive mechanisms mediating these effects.</p>
</abstract>
<abstract abstract-type="graphical">
<title>Highlights</title>
<p>► Multivariate decoding of video identity from fMRI signals in V1–V3. ► Decoding accuracy in V2 is significantly reduced for incongruent sounds. ► Reduced decoding accuracy is associated with reduced inter-trial reliability. ► No modulation of univariate signal amplitude by sounds. ► Noise levels in sensory areas are affected by multisensory congruence.</p>
</abstract>
<kwd-group>
<title>Keywords</title>
<kwd>Multisensory</kwd>
<kwd>Audio-visual</kwd>
<kwd>V2</kwd>
<kwd>Decoding</kwd>
<kwd>MVPA</kwd>
<kwd>fMRI</kwd>
</kwd-group>
</article-meta>
</front>
<body>
<sec sec-type="intro" id="s0005">
<title>Introduction</title>
<p>Perception of the environment requires integration of sensory information across the senses, but how our brains combine information from different sensory streams is still poorly understood. The earliest stages of cortical sensory processing were long thought to be unimodal and multisensory processing to be restricted to dedicated convergence areas (
<xref rid="bb0200" ref-type="bibr">Mesulam, 1998</xref>
). However, the past decade has seen new anatomical and functional evidence for multisensory interactions even at the level of primary sensory areas (see
<xref rid="bb0075 bb0150" ref-type="bibr">Driver and Noesselt, 2008; Klemen and Chambers, 2012</xref>
for an overview).</p>
<p>Tracer studies provide anatomical evidence for multisensory interactions at early stages of cortical processing (here referred to as ‘early multisensory interactions’ for convenience, not necessarily implying temporal precedence). There are direct feedback connections from primary auditory and multisensory areas to V1 and V2 in macaque (
<xref rid="bb0055 bb0085 bb0240" ref-type="bibr">Clavagnier et al., 2004; Falchier et al., 2002; Rockland and Ojima, 2003</xref>
) and similar connections in rodents (
<xref rid="bb0010 bb0040" ref-type="bibr">Allman et al., 2008; Budinger et al., 2006</xref>
). Although some bimodal neurons can be found even in primary sensory areas (i.e. neurons that can be driven by either visual or auditory input, e.g.
<xref rid="bb0095" ref-type="bibr">Fishman and Michael, 1973</xref>
), the effect of direct cross-modal connections seems to be modulatory, rather than driving. Recent evidence from cats and rodents points to subthreshold modulation of ‘unimodal’ visual neurons (that cannot be driven by auditory input alone) as the dominant form of multisensory interaction in early visual cortex (
<xref rid="bb0025 bb0020 bb0135" ref-type="bibr">Allman and Meredith, 2007; Allman et al., 2008, 2009; Iurilli et al., 2012</xref>
). Early multisensory interactions also result in phase resetting of ongoing oscillations, thereby modulating and aligning the periodic excitability of affected neurons (e.g.
<xref rid="bb0160 bb0165" ref-type="bibr">Lakatos et al., 2007, 2009</xref>
, cf.
<xref rid="bb0255" ref-type="bibr">Schroeder et al., 2008</xref>
).</p>
<p>In humans, cross-modal interactions can modulate the amplitude of, or even drive, neural signals in early visual cortex, as indexed by Blood Oxygenation Level Dependent (BOLD) fMRI (e.g.
<xref rid="bb0185 bb0190 bb0195 bb0230 bb0275" ref-type="bibr">Macaluso et al., 2000; Martuzzi et al., 2007; Meienbrock et al., 2007; Noesselt et al., 2007; Watkins et al., 2006</xref>
), event-related potentials (ERPs) (e.g.
<xref rid="bb0045 bb0210" ref-type="bibr">Cappe et al., 2010; Molholm et al., 2002</xref>
) and transcranial magnetic stimulation (TMS) excitability (e.g.
<xref rid="bb0250" ref-type="bibr">Romei et al., 2009</xref>
). Cross-modal phase reset of ongoing oscillations in visual cortex is found in human magnetoencephalography (MEG;
<xref rid="bb0180" ref-type="bibr">Luo et al., 2010</xref>
) and electroencephalography (EEG; consistent with phase-locked periodic modulations of perceptual performance;
<xref rid="bb0220 bb0245 bb0270" ref-type="bibr">Naue et al., 2011; Romei et al., 2012; Thorne et al., 2011</xref>
).</p>
<p>When monkeys are presented with naturalistic sound stimuli, accompanying visual stimulation reduces the mean firing rate of primary auditory cortex neurons (
<xref rid="bb0060 bb0145" ref-type="bibr">Dahl et al., 2010; Kayser et al., 2010</xref>
). Moreover, inter-trial variability of spike trains is greatly reduced, thus enhancing mutual information between stimuli and spiking patterns. This effect is significantly stronger when the auditory and the visual input are congruent (
<xref rid="bb0145" ref-type="bibr">Kayser et al., 2010</xref>
). Visual neurons in STS show a similar behaviour for naturalistic visual stimuli (
<xref rid="bb0060" ref-type="bibr">Dahl et al., 2010</xref>
). Their response amplitude is somewhat reduced for bimodal audio-visual stimulation and the stimulus information carried by spike patterns is affected by multisensory context: incongruent sounds significantly worsen stimulus decoding based on spike trains.</p>
<p>Here we sought to test whether multisensory modulation of stimulus encoding extended to humans and early retinotopic visual cortices. We presented participants with naturalistic audiovisual stimuli in four different conditions: audio only (A), visual only (V), audiovisual congruent (AV congruent) and audio-visual incongruent (AV incongruent). We then used multivoxel pattern analysis (MVPA) to decode stimulus identities based on spatial patterns of BOLD signals evoked in V1-3 (as identified by retinotopic mapping,
<xref rid="bb0260" ref-type="bibr">Sereno et al., 1995</xref>
). Separate multivariate classifiers were trained and tested for each of the four conditions and for each ROI. This allowed us to compare decoding accuracies between conditions, thus obtaining an index of pattern discriminability for each condition.</p>
</sec>
<sec sec-type="materials|methods" id="s0010">
<title>Materials and methods</title>
<sec id="s0015">
<title>Participants</title>
<p>Fifteen participants from the University College London (UCL) participant pool took part (mean age 26 years, SD 4 years; 7 female; 1 left-handed). All participants had normal or corrected-to-normal vision and reported no hearing problems. Written informed consent was obtained from each participant and the study was approved by the UCL ethics committee. Participants were paid 10 GBP per hour for taking part in the experiment, which lasted up to 2.5 h.</p>
</sec>
<sec id="s0020">
<title>Stimuli</title>
<p>Four video clips were used as audio-visual stimuli, each lasting 3 s. Two clips showed natural scenes containing animals (a croaking frog and a crowing rooster). These two clips were downloaded from
<ext-link ext-link-type="uri" xlink:href="http://www.youtube.com">http://www.youtube.com</ext-link>
and edited. The two remaining clips showed the clothed torso of the first author while turning a key in a lock or ripping a paper apart. All clips were similar with regard to luminance and loudness and were projected onto a screen at the end of the scanner bore. Participants viewed the clips via a mirror mounted at the head coil of the scanner at a viewing distance of ~ 72 cm. Video clips were presented at a resolution of 640 × 360 pixels and subtended ~ 18 by 10° visual angle when viewed by participants in the scanner. During the experiment participants were asked to fixate a white dot projected on top of the videos at the centre of the screen (radius ~ 0.1° visual angle). In each trial the dot turned blue once, twice or three times and participants were asked to count and indicate the number of colour changes via a button box in a 2 s inter stimulus interval.</p>
<p>Audio tracks accompanying the video clips were presented via MRI compatible in-ear headphones (
<ext-link ext-link-type="uri" xlink:href="http://www.etymotic.com">http://www.etymotic.com</ext-link>
). Loudness was adjusted individually before the start of the experiment, aiming for a level that was comfortable for participants but still enabled them to easily tell apart sound clips in the presence of scanner noise.</p>
<p>All stimuli were programmed and presented in MATLAB (Mathworks, Ltd.) using the Cogent Graphics (
<ext-link ext-link-type="uri" xlink:href="http://www.vislab.ucl.ac.uk/cogent.php">http://www.vislab.ucl.ac.uk/cogent.php</ext-link>
) and Psychophysics Toolbox 3 extensions (
<xref rid="bb0035 bb0235" ref-type="bibr">Brainard, 1997; Pelli, 1997</xref>
;
<ext-link ext-link-type="uri" xlink:href="http://psychtoolbox.org">http://psychtoolbox.org</ext-link>
).</p>
</sec>
<sec id="s0025">
<title>Procedure</title>
<p>Each participant completed 17–24 runs of scanning in the main experiment, each run lasting just under 2 min. During the runs participants were presented with audio and/or visual clips and completed an incidental, superimposed fixation task (cf. above). During each run each of the 4 stimuli was presented once for each experimental condition (i.e. four times), amounting to 16 stimulus trials per run (cf.
<xref rid="f0005" ref-type="fig">Fig. 1</xref>
). Participants were either presented with videos only (V), sounds only (A), matching videos and sounds (AV congruent condition) or mismatching videos and sounds (AV incongruent condition). For audio-visually incongruent trials the sound files were swapped between fixed pairs of videos (rooster crowing and paper ripping; frog croaking and keys turning). Each 3 s clip was followed by a 2 s inter-stimulus interval during which participants were asked to indicate via a button box how many times the fixation dot changed its colour. In addition to the 16 stimulus trials there were 4 blank trials in each run that served as a baseline measure. During these trials participants completed the fixation task in the absence of audio-visual clips. The order of the 20 trials was randomised for each run, as was the number of fixation dot colour changes in each trial (1–3).</p>
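For illustration, one run's trial sequence could be reconstructed as follows. This is a hypothetical Python sketch (the actual experiment was programmed in MATLAB, and the stimulus names are placeholders):

    import random

    STIMULI = ['frog', 'rooster', 'key', 'paper']             # the four video clips
    CONDITIONS = ['V', 'A', 'AV_congruent', 'AV_incongruent']

    def make_run():
        # 16 stimulus trials (each clip once per condition) plus 4 blank
        # baseline trials, in randomised order; each trial is assigned
        # 1-3 fixation dot colour changes for the incidental task.
        trials = [(s, c) for s in STIMULI for c in CONDITIONS]
        trials += [('blank', 'baseline')] * 4
        random.shuffle(trials)
        return [(s, c, random.randint(1, 3)) for (s, c) in trials]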
</sec>
<sec id="s0030">
<title>Retinotopic mapping</title>
<p>To delineate the borders of visual areas V1-3 on an individual basis, each participant underwent an additional fMRI run viewing stimuli for phase encoded retinotopic mapping (
<xref rid="bb0260" ref-type="bibr">Sereno et al., 1995</xref>
). Stimuli for this run consisted of a wedge rotating clockwise and an expanding ring. Both stimuli moved in discrete steps, synchronised with the acquisition of fMRI volumes, but with different frequencies (wedge: 12 cycles, 20 steps per cycle; ring: 20 cycles, 12 steps per cycle). They were centred on a fixation dot of ~ 0.25° diameter and spanned up to 8° of eccentricity. It is generally difficult to distinguish retinotopic maps inside the foveal confluence because the borders between regions are difficult to resolve at conventional voxel sizes. Moreover, the presence of a stable fixation dot precludes any systematic variation in the BOLD signal related to the mapping stimulus. Note that the fixation dot for our mapping stimuli was slightly larger than the fixation dot for our audiovisual stimuli (~ 0.25 vs. ~ 0.2° diameter). We are therefore confident that our region of interest analyses did not include the foveal representations. Ring and wedge were presented on a grey background and served as apertures revealing a dynamic high-contrast stimulus. Participants were asked to fixate at all times and count brief colour changes of the fixation dot from blue to purple. These colour-change events lasted 200 ms and could occur in every non-consecutive 200 ms window of the run with a probability of 5%.</p>
</sec>
<sec id="s0035">
<title>Image acquisition and pre-processing</title>
<p>All functional and structural scans were obtained with a Tim Trio 3T scanner (Siemens Medical Systems, Erlangen, Germany), using a 12-channel head coil. Functional images for the main experiment were acquired with a gradient echo planar imaging (EPI) sequence (3 mm isotropic resolution, matrix size 64 × 64, 40 transverse slices per volume, acquired in ascending order (whole head coverage); slice acquisition time 68 ms, TE 30 ms, TR 2.72 s). We obtained 42 volumes per run of the main experiment (including three dummy volumes at the beginning of each run and two at the end), resulting in a run duration of 114.24 s. Functional images for retinotopic mapping were acquired in one run of 247 volumes with an EPI sequence (including five dummy volumes at the beginning and two at the end of the run; 2.3 mm isotropic resolution, matrix size 96 × 96, 36 transverse slices per volume, acquired in interleaved order (centred on the occipital cortex); slice acquisition time 85 ms, TE 36 ms, TR 3.06 s per volume). In between the main experiment and the retinotopic mapping run we acquired fieldmaps to correct for geometric distortions in the functional images caused by inhomogeneities in the B0 magnetic field (double-echo FLASH sequence with a short TE of 10 ms and a long TE of 12.46 ms, 3 × 3 × 2 mm, 1 mm gap). Finally, we acquired a T1-weighted structural image of each participant using an MDEFT sequence (
<xref rid="bb0065" ref-type="bibr">Deichmann et al., 2004</xref>
; 1 mm isotropic resolution, matrix size 256 × 240, 176 sagittal slices, TE 2.48 ms, TR 7.92 ms, TI 910 ms).</p>
<p>All image files were converted to NIfTI format and pre-processed using SPM 8 (
<ext-link ext-link-type="uri" xlink:href="http://www.fil.ion.ucl.ac.uk/spm/software/spm8/">http://www.fil.ion.ucl.ac.uk/spm/software/spm8/</ext-link>
). The dummy volumes for each run were discarded to allow the T1 signal to reach steady state. The remaining functional images of the main experiment and the retinotopic mapping session were independently mean bias corrected, realigned and unwarped (using voxel displacement maps generated from the fieldmaps). Finally, the functional images were co-registered with the respective anatomical MDEFT scan for each participant and smoothed with a 5 mm Gaussian kernel.</p>
</sec>
<sec id="s0040">
<title>Data analysis</title>
<sec id="s0045">
<title>Multivariate pattern analysis</title>
<p>We specified separate general linear models for each run and each participant. Each general linear model contained regressors for each of the 16 trial types plus one regressor for the blank trials (boxcar regressors convolved with a canonical hemodynamic response function). Additional regressors of no interest were modelled for response intervals and for the six motion parameters estimated during re-alignment. The general linear models for each run and each participant were estimated and contrast images for each of the 16 trials (per run and condition) calculated. This resulted in separate contrast images and t-maps for each trial type of the experiment for each participant. These t-maps were masked with the retinotopic regions of interest (see below) and the resulting patterns were vectorised. For the decoding and correlation analyses the resulting patterns were mean corrected across stimuli within each condition. Note that this did not affect classification performance — the distribution of patterns in feature space was preserved, but now centred on zero. This allowed us to ensure that any common intercept of patterns across stimuli was disregarded for the similarity and reliability correlation analyses (see below). Beta maps for univariate analyses were not mean corrected. The aim of the decoding analysis was to decode
<italic>stimulus identity</italic>
from activation patterns in visual areas (i.e. which of the four videos was presented in a given trial) and to compare the accuracies of decoders
<italic>across conditions</italic>
(i.e. did stimulus decoding accuracy vary depending on audiovisual condition, cf.
<xref rid="f0005" ref-type="fig">Fig. 1</xref>
). Stimulus decoding was performed with custom code based on the linear support vector machine (lSVM) implemented in the Bioinformatics toolbox for MATLAB (version R2010b,
<ext-link ext-link-type="uri" xlink:href="http://www.mathworks.com">http://www.mathworks.com</ext-link>
). Data from each condition were used for training and testing of separate classifiers to get condition-specific decoding accuracies. For each condition a four-way classifier was built, to decode which of the four stimuli was presented from a given activation pattern. The four-way classifier consisted of six lSVMs to test all possible pair-wise comparisons between the four stimuli. It then assigned one of the stimulus labels based on a one-against-one voting procedure (
<xref rid="bb0130" ref-type="bibr">Hsu and Lin, 2002</xref>
). The four-way classifier was trained and tested for accuracy in a jackknife procedure. In each iteration, the (condition-specific) data from all runs but one served as training data and the (condition-specific) data from the remaining run were used to test the prediction accuracy of the lSVM. Accuracies were stored and averaged across iterations at the end of this procedure, and the whole procedure was applied to each retinotopic ROI (V1-3) independently, yielding a four-way classification accuracy for each condition and ROI. Statistical analysis of the resulting accuracies was done in MATLAB and PASW 18.0 (SPSS Inc./IBM). Accuracies were compared against chance level by subtracting .25 and using one-sample t-tests. Accuracies were compared between conditions using ANOVAs and paired t-tests.</p>
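The decoding scheme can be sketched compactly in Python with scikit-learn; this is not the authors' MATLAB implementation, but scikit-learn's linear SVC resolves four-way classification by the same one-against-one voting over six pairwise classifiers (Hsu and Lin, 2002). Array names and shapes are illustrative assumptions:

    import numpy as np
    from sklearn.svm import SVC

    def decode_condition(patterns, stimulus, run):
        # patterns: (n_trials, n_voxels) vectorised, mean-corrected t-map
        # patterns of ONE condition and ONE ROI; stimulus: (n_trials,)
        # labels 0-3; run: (n_trials,) run index of each trial.
        accuracies = []
        for held_out in np.unique(run):                  # leave-one-run-out
            train, test = run != held_out, run == held_out
            clf = SVC(kernel='linear', C=1.0)            # one-vs-one voting
            clf.fit(patterns[train], stimulus[train])
            accuracies.append(clf.score(patterns[test], stimulus[test]))
        return np.mean(accuracies)                       # chance level is .25

One such accuracy per participant, condition and ROI would then enter the one-sample t-tests against chance and the between-condition comparisons.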
<p>Potential differences in decoding accuracy between conditions could stem from two different sources: changes in pattern reliability across trials, changes in pattern similarity between patterns evoked by different stimuli, or both. We employed additional analyses to differentiate between these options. Changes in pattern reliability were tested by averaging the patterns for a given stimulus across trials separately for odd and even runs and computing the Pearson correlation coefficient for the two resulting mean patterns (in a ROI- and condition-specific manner). The resulting correlation coefficients were Fisher z-transformed, averaged for each condition and then compared across conditions using ANOVAs and paired t-tests. Changes in pattern similarity were tested by averaging the patterns for a given stimulus across all trials and computing correlations between these mean patterns for different stimuli (again, in a ROI- and condition-specific manner). The resulting Pearson correlation coefficients were compared as described above.</p>
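Under the same assumed array layout, the two correlation measures reduce to a few lines (again a Python sketch, not the authors' code):

    import numpy as np

    def pattern_reliability(patterns, run):
        # patterns: (n_trials, n_voxels) patterns of one stimulus in one
        # condition and ROI; run: (n_trials,) run index of each trial.
        odd  = patterns[run % 2 == 1].mean(axis=0)   # mean over odd runs
        even = patterns[run % 2 == 0].mean(axis=0)   # mean over even runs
        r = np.corrcoef(odd, even)[0, 1]             # Pearson correlation
        return np.arctanh(r)                         # Fisher z-transform

    def pattern_similarity(mean_a, mean_b):
        # correlation between the across-trial mean patterns of two stimuli
        return np.arctanh(np.corrcoef(mean_a, mean_b)[0, 1])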
</sec>
<sec id="s0050">
<title>Searchlight analysis</title>
<p>To test whether and where stimulus information was modulated by audiovisual context outside retinotopic cortices, we set up an additional, exploratory searchlight analysis (
<xref rid="bb0155" ref-type="bibr">Kriegeskorte et al., 2006</xref>
). For this analysis, activation patterns were derived from the same (trial-specific) t-maps that were used for the ROI analysis described above. The searchlight consisted of a sphere with a radius of 4 voxels that was centred on each grey matter voxel of each participant's brain in turn. During each iteration, the searchlight was used as a mask and the patterns of activation within this mask were read out for each trial. Then the same 4-way classification procedure used for the ROI analysis was applied to those patterns (cf. above). The resulting (condition-specific) classification accuracies were projected back onto the seed voxel. Repeating this procedure for every grey matter voxel, we thus derived four accuracy maps for each participant (one per condition). To test for significant accuracy differences between conditions, we subtracted the respective accuracy maps from each other. Specifically, we contrasted the AV congruent condition with the V (muted) condition, the AV congruent condition with the AV incongruent condition, and the V condition with the AV incongruent condition. The resulting accuracy contrast maps were normalised to MNI space (
<ext-link ext-link-type="uri" xlink:href="http://www.loni.ucla.edu/ICBM/">http://www.loni.ucla.edu/ICBM/</ext-link>
) and tested for whole brain family-wise error (FWE) corrected significance at cluster level in SPM 8 (cluster forming threshold
<italic>p</italic>
 < .001 uncorrected). Significant clusters were identified anatomically using the Juelich Histological Atlas implemented in the SPM Anatomy Toolbox (v. 1.8,
<ext-link ext-link-type="uri" xlink:href="http://www.fz-juelich.de/inm/inm-1/DE/Forschung/_docs/SPMAnatomyToolbox/SPMAnatomyToolbox_node.html">http://www.fz-juelich.de/inm/inm-1/DE/Forschung/_docs/SPMAnatomyToolbox/SPMAnatomyToolbox_node.html</ext-link>
).</p>
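Schematically, the searchlight reduces to the following loop (a plain-NumPy sketch; t_maps, grey_mask and the decode_condition helper from the earlier sketch are assumed inputs, not the authors' code):

    import numpy as np

    def searchlight_map(t_maps, grey_mask, stimulus, run, radius=4):
        # t_maps: (n_trials, x, y, z) trial-wise t-maps of one condition;
        # returns one decoding accuracy per grey matter seed voxel.
        acc_map = np.full(grey_mask.shape, np.nan)
        grid = np.indices(grey_mask.shape)            # voxel coordinates
        for x, y, z in zip(*np.nonzero(grey_mask)):
            dist2 = ((grid[0] - x) ** 2 + (grid[1] - y) ** 2
                     + (grid[2] - z) ** 2)
            sphere = dist2 <= radius ** 2             # 4-voxel-radius sphere
            acc_map[x, y, z] = decode_condition(t_maps[:, sphere],
                                                stimulus, run)
        return acc_map   # condition maps are subtracted to form contrasts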
</sec>
<sec id="s0055">
<title>Univariate analysis</title>
<p>To test whether audio-visual context had any influence on the overall signal amplitude in our ROIs, we employed an additional univariate analysis. For this analysis we averaged the condition-specific beta weights of voxels within our ROIs across stimuli and trials for each participant. We then compared the mean beta values between conditions for each ROI using ANOVAs and paired t-tests.</p>
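A minimal sketch of this amplitude comparison, under the same assumptions about array layout (paired tests run across the 15 participants):

    import numpy as np
    from scipy import stats

    def roi_amplitude(betas):
        # betas: (n_trials, n_voxels) condition-specific beta weights in
        # one ROI; average over voxels, stimuli and trials.
        return betas.mean()

    # Per ROI, e.g.: t, p = stats.ttest_rel(amp_congruent, amp_incongruent),
    # where each argument holds one mean amplitude per participant.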
<p>We additionally tested whether a different approach to univariate analyses would have yielded any differences between conditions. To test this, we concatenated all runs of a given participant in one design matrix in SPM8. This allowed us to build contrasts between conditions on the first level, utilising all trials of the respective conditions. These first level contrasts were then normalised to MNI space and tested for whole brain FWE corrected significance at cluster level in SPM8 (cluster forming threshold
<italic>p</italic>
 < .001 uncorrected).</p>
</sec>
<sec id="s0060">
<title>Retinotopic mapping</title>
<p>Retinotopic ROIs were identified using standard phase-encoded retinotopic mapping procedures (
<xref rid="bb0260" ref-type="bibr">Sereno et al., 1995</xref>
). We extracted and normalised the time series for each voxel and applied a fast Fourier transform to it. Visually responsive voxels were identified by peaks in their power spectra that corresponded to our stimulus frequencies. The preferred polar angle and eccentricity of each voxel were then identified as the phase lag of the signal at the corresponding stimulus frequency (wedge and ring, respectively). The phase lags for each voxel were stored in a ‘polar’ and an ‘eccentricity’ volume and then projected onto the reconstructed, inflated cortical surface (surface-based analysis was performed using FreeSurfer:
<ext-link ext-link-type="uri" xlink:href="http://surfer.nmr.mgh.harvard.edu">http://surfer.nmr.mgh.harvard.edu</ext-link>
). The resulting maps allowed us to identify meridian polar angle reversals and thus to delineate the borders of visual areas V1-3 on the cortical surface. These labels were then exported as three-dimensional masks into NIfTI space and served as ROIs.</p>
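The voxel-wise Fourier step can be sketched as follows (assumed inputs; the surface-based projection and delineation were done in FreeSurfer):

    import numpy as np

    def phase_encoded_map(timeseries, n_cycles):
        # timeseries: (n_volumes, n_voxels), normalised; n_cycles: cycles
        # of the mapping stimulus per run (12 for the wedge, 20 for the ring).
        spectrum = np.fft.fft(timeseries, axis=0)
        component = spectrum[n_cycles]      # bin at the stimulus frequency
        power = np.abs(component) ** 2      # peaks mark responsive voxels
        phase = np.angle(component)         # phase lag -> polar angle or
        return power, phase                 # eccentricity preference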
</sec>
</sec>
</sec>
<sec sec-type="results" id="s0065">
<title>Results</title>
<sec id="s0070">
<title>Behavioural data</title>
<p>Participants performed well on the fixation task for all four stimulus categories and the baseline category. Performance did not differ significantly between conditions (note that the task was independent of stimulus category; 95 ± 1%, 96 ± 1%, 96 ± 1%, 97 ± 1%, and 97 ± 1% correct for the AV congruent, AV incongruent, V, A and baseline category, respectively (mean ± standard error of the mean);
<italic>F</italic>
<sub>(2.49, 34.85)</sub>
 = 1.59,
<italic>p</italic>
 = .22,
<italic>n.s.</italic>
, Greenhouse–Geisser corrected for non-sphericity).</p>
</sec>
<sec id="s0075">
<title>Multivariate fMRI results</title>
<sec id="s0080">
<title>Multivariate ROI results</title>
<p>Visual stimulus identities could be decoded significantly above chance level (0.25) from V1-3 (ROIs were combined across hemispheres; all
<italic>p</italic>
 < 10
<sup>− 5</sup>
, cf.
<xref rid="f0010" ref-type="fig">Fig. 2</xref>
a). When no visual stimulus was presented (A condition), decoding performance was at chance level (all
<italic>p</italic>
 > .4). To test whether the presence and congruence of co-occurring sounds had an influence on visual stimulus encoding we compared decoding accuracy in the three conditions containing visual stimuli (AV congruent, AV incongruent, V) for V1-3. Decoding performance did not differ significantly between conditions in V1 (
<italic>F</italic>
<sub>(2,28)</sub>
 = 0.46,
<italic>p</italic>
 = .64,
<italic>n.s.</italic>
). However, the presence and congruence of sounds had a significant effect on decoding performance in area V2 (
<italic>F</italic>
<sub>(2,28)</sub>
 = 7.17,
<italic>p</italic>
 = .003) and there was a non-significant trend for such an effect in area V3 (
<italic>F</italic>
<sub>(2,28)</sub>
 = 2.12,
<italic>p</italic>
 = .14,
<italic>n.s.</italic>
). Post-hoc t-tests revealed that stimulus decoding from activity patterns in area V2 was significantly worse in the AV incongruent condition compared to decoding in both the AV congruent (
<italic>t</italic>
<sub>(14)</sub>
 = 3.29,
<italic>p</italic>
 = .005) and V (
<italic>t</italic>
<sub>(14)</sub>
 = 3.46,
<italic>p</italic>
 = .004) conditions. Pattern decoding from area V3 was significantly worse for the AV incongruent condition compared to the V condition (
<italic>t</italic>
<sub>(14)</sub>
 = 2.15,
<italic>p</italic>
 = .049).</p>
<p>To further investigate the effect of sounds on stimulus decoding from activation patterns in V1-3 we compared the reliability and similarity of stimulus-evoked patterns (cf.
<xref rid="s0010" ref-type="sec">Materials and methods</xref>
for details). There was no detectable influence of sounds on pattern similarity in V1-3 (V1:
<italic>F</italic>
(2,28) = 0.762,
<italic>p</italic>
 = .476,
<italic>n.s.</italic>
, V2:
<italic>F</italic>
(2,28) = 1.069,
<italic>p</italic>
 = .357,
<italic>n.s.</italic>
, V3:
<italic>F</italic>
(2,28) = 1.815,
<italic>p</italic>
 = .181,
<italic>n.s.</italic>
; cf.
<xref rid="f0010" ref-type="fig">Fig. 2</xref>
d). However, pattern reliability was significantly affected by the presence of sounds in V2 and V3 (V1:
<italic>F</italic>
(2,28) = 2.013,
<italic>p</italic>
 = .152,
<italic>n.s.</italic>
, V2:
<italic>F</italic>
(1.4,28, Greenhouse–Geisser corrected) = 6.647,
<italic>p</italic>
 = .011, V3:
<italic>F</italic>
(2,28) = 5.133,
<italic>p</italic>
 = .013; cf.
<xref rid="f0010" ref-type="fig">Fig. 2</xref>
c). Post-hoc paired t-tests revealed that pattern reliability in V2 was significantly reduced in the AV incongruent condition, compared to both the AV congruent condition (
<italic>t</italic>
<sub>(14)</sub>
 = − 2.376,
<italic>p</italic>
 = .032) and the V condition (
<italic>t</italic>
<sub>(14)</sub>
 = − 5.406,
<italic>p</italic>
 < .0001). Pattern reliability in V3 was significantly reduced in the AV incongruent condition, compared to the V condition (
<italic>t</italic>
<sub>(14)</sub>
 = − 3.004,
<italic>p</italic>
 = .010).</p>
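<p>The two pattern measures can be made explicit with a short sketch: inter-trial reliability is the correlation between the average pattern for a stimulus in odd runs and the average pattern for the same stimulus in even runs (cf. the diagonal of Fig. S1), while inter-stimulus similarity is the correlation between average patterns of different stimuli. All names below are illustrative.</p>
<preformat>
# Schematic split-half reliability and inter-stimulus similarity
# for one ROI (illustrative sketch, not the study code).
import numpy as np

def mean_pattern(patterns, stim_labels, run_labels, stim, runs):
    # average pattern for one stimulus over a subset of runs
    sel = (stim_labels == stim) & np.isin(run_labels, runs)
    return patterns[sel].mean(axis=0)

def reliability_and_similarity(patterns, stim_labels, run_labels):
    stims = np.unique(stim_labels)
    runs = np.unique(run_labels)
    odd, even = runs[::2], runs[1::2]            # split-half over runs
    rel, sim = [], []
    for i, s in enumerate(stims):
        a = mean_pattern(patterns, stim_labels, run_labels, s, odd)
        b = mean_pattern(patterns, stim_labels, run_labels, s, even)
        rel.append(np.corrcoef(a, b)[0, 1])      # same stimulus: reliability
        for t in stims[i + 1:]:                  # different stimuli: similarity
            c = patterns[stim_labels == s].mean(axis=0)
            d = patterns[stim_labels == t].mean(axis=0)
            sim.append(np.corrcoef(c, d)[0, 1])
    return float(np.mean(rel)), float(np.mean(sim))
</preformat>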
<p>For completeness, we also computed the full 16 × 16 matrix of stimulus pattern correlations; please see the Supplementary results and
<xref rid="ec0005" ref-type="supplementary-material">Fig. S1</xref>
.</p>
<p>Our study was limited to investigating multisensory modulation of pattern discriminability in early visual cortices. It would have been interesting to compare this to similar modulations in early auditory cortex. However, auditory pattern decoding from BOLD signals typically yields much lower accuracy than visual pattern decoding and appears to require high spatial resolution MRI sequences (e.g.
<xref rid="bb0100 bb0265" ref-type="bibr">Formisano et al., 2008; Staeren et al., 2009</xref>
). Nevertheless, for completeness we also extracted patterns of BOLD signals from bilateral anterior transversal temporal gyri (
<xref rid="bb0070" ref-type="bibr">Destrieux et al., 2010</xref>
) and tried to classify them. Stimulus decoding was generally unsuccessful for these data and did not improve even when using a more lenient anatomical criterion (including the whole of the superior temporal gyrus and plane). We conclude that an investigation of primary auditory cortex similar to our visual cortex analysis would require high-resolution scans and adequate functional localizers, ideally tonotopic mapping.</p>
</sec>
<sec id="s0085">
<title>Searchlight results</title>
<p>We tested three contrasts: AV congruent–AV incongruent, AV congruent–V and V–AV incongruent (see
<xref rid="s0010" ref-type="sec">Materials and methods</xref>
for details).</p>
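<p>The core logic of the searchlight approach can be sketched as follows: for each voxel, stimulus identity is decoded from the small sphere of voxels around it, yielding a whole-brain accuracy map per participant and condition; these maps are then contrasted at the group level. The sketch below is schematic (sphere radius, data layout and classifier are illustrative assumptions) and omits the efficiency tricks a real implementation would use.</p>
<preformat>
# Schematic searchlight decoding: one accuracy value per voxel
# (illustrative sketch; `vol` is assumed to be x * y * z * trials).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

def sphere_indices(center, shape, radius=3):
    # indices of all voxels within `radius` voxels of `center`
    grid = np.indices(shape)
    offset = np.array(center)[:, None, None, None]
    dist = np.sqrt(((grid - offset) ** 2).sum(axis=0))
    return np.where(dist <= radius)

def searchlight_accuracy(vol, labels, runs, mask, radius=3):
    acc_map = np.full(mask.shape, np.nan)
    clf = SVC(kernel="linear")
    for center in zip(*np.where(mask)):
        idx = sphere_indices(center, mask.shape, radius)
        X = vol[idx].T                       # trials x voxels-in-sphere
        acc_map[center] = cross_val_score(
            clf, X, labels, groups=runs, cv=LeaveOneGroupOut()).mean()
    return acc_map
</preformat>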
<p>The AV congruent–AV incongruent contrast yielded no significant clusters at the corrected threshold. The AV congruent–V contrast revealed two significant clusters in the bilateral superior temporal gyri (FWE corrected
<italic>p</italic>
 < .05). Both clusters included early auditory cortex and part of the superior temporal gyrus (including TE 1.0, 1.2 and 3), and the right cluster extended anteriorly to the temporal pole (cf.
<xref rid="t0005" ref-type="table">Table 1</xref>
and
<xref rid="f0015" ref-type="fig">Fig. 3</xref>
a)). The V–AV incongruent contrast yielded two significant clusters in visual cortex (FWE corrected
<italic>p</italic>
 < .05). The first cluster spanned part of the bilateral calcarine gyrus near the occipital pole, including parts of Brodmann areas 17 and 18. The second cluster was located in the left lateral inferior occipital gyrus and coincided with the location reported for areas LO1/2 (
<xref rid="bb0175" ref-type="bibr">Larsson and Heeger, 2006</xref>
). See
<xref rid="t0005" ref-type="table">Table 1</xref>
and
<xref rid="f0015" ref-type="fig">Fig. 3</xref>
b).</p>
</sec>
</sec>
<sec id="s0090">
<title>Univariate fMRI analysis</title>
<p>To test where in the brain auditory context modulated the amplitude of the signal evoked by our stimuli (as opposed to the information carried), we employed a univariate whole-brain analysis. We tested the same three contrasts as in the searchlight analysis: AV congruent–AV incongruent, AV congruent–V and V–AV incongruent (see
<xref rid="s0010" ref-type="sec">Materials and methods</xref>
for details).</p>
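<p>At the group level, such a contrast amounts to a voxel-wise paired test of condition estimates across participants, followed by correction for multiple comparisons. A minimal sketch of that second-level logic (simplified relative to the full whole-brain model with FWE correction reported below; names are illustrative):</p>
<preformat>
# Simplified second-level contrast: voxel-wise paired t-test across
# participants (illustrative; FWE correction over voxels would follow).
import numpy as np
from scipy.stats import ttest_rel

def second_level_contrast(betas_a, betas_b):
    """betas_a, betas_b: n_subjects x n_voxels condition estimates."""
    t, p = ttest_rel(betas_a, betas_b, axis=0)  # paired over subjects
    return t, p
</preformat>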
<p>The AV congruent–AV incongruent contrast yielded no significant results. The AV congruent–V contrast yielded two significant clusters in the bilateral superior temporal gyri (FWE corrected
<italic>p</italic>
 < .05). Both clusters included early auditory cortex (including TE 1.0, 1.1, 1.2 and 3), and the right cluster extended anteriorly to the temporal pole (cf.
<xref rid="t0010" ref-type="table">Table 2</xref>
and
<xref rid="f0020" ref-type="fig">Fig. 4</xref>
a), note the similarity to the corresponding searchlight contrast). The V–AV incongruent contrast yielded two similar clusters of significantly greater activation for the AV incongruent condition (i.e. the one including auditory stimulation). These clusters again spanned almost the whole of bilateral superior temporal gyri, including early auditory cortex (cf.
<xref rid="t0010" ref-type="table">Table 2</xref>
and
<xref rid="f0020" ref-type="fig">Fig 4</xref>
b).</p>
<p>For a more direct comparison between univariate contrasts and the multivariate analysis we also tested for univariate effects in the retinotopically defined ROIs of each participant. For this contrast we averaged the voxel responses (betas) for each participant and condition across the whole of the respective ROI (cf.
<xref rid="f0010" ref-type="fig">Fig. 2</xref>
 b)). Response amplitudes did not differ significantly between the three conditions involving visual stimuli in any of the three ROIs (V1:
<italic>F</italic>
<sub>(2,28)</sub>
 = 0.01,
<italic>p</italic>
 = .99,
<italic>n.s.</italic>
; V2:
<italic>F</italic>
<sub>(2,28)</sub>
 = 0.25,
<italic>p</italic>
 = .78,
<italic>n.s.</italic>
; V3:
<italic>F</italic>
<sub>(2,28)</sub>
 = 1.12,
<italic>p</italic>
 = .34).</p>
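<p>The ROI-level test is a one-way repeated-measures ANOVA on the ROI-mean betas, with condition (AV congruent, AV incongruent, V) as the within-subject factor. A minimal sketch with toy numbers (the statsmodels implementation and column names are illustrative assumptions; where sphericity was violated we report Greenhouse–Geisser-corrected statistics, which this basic call does not apply):</p>
<preformat>
# Hedged sketch of the ROI-wise repeated-measures ANOVA on mean betas
# (toy data; one mean beta per participant and condition for one ROI).
import pandas as pd
from statsmodels.stats.anova import AnovaRM

df = pd.DataFrame({
    "subject":   ["s01"] * 3 + ["s02"] * 3 + ["s03"] * 3,
    "condition": ["AVc", "AVi", "V"] * 3,
    "beta":      [0.41, 0.38, 0.40, 0.52, 0.47, 0.50, 0.33, 0.30, 0.34],
})
res = AnovaRM(df, depvar="beta", subject="subject",
              within=["condition"]).fit()
print(res)  # F and p for the condition effect in this ROI
</preformat>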
</sec>
</sec>
<sec sec-type="discussion" id="s0095">
<title>Discussion</title>
<p>We presented participants with naturalistic, dynamic audiovisual stimuli while they performed an incidental fixation task. Replicating previous studies (e.g.
<xref rid="bb0225" ref-type="bibr">Nishimoto et al., 2011</xref>
), we could decode stimulus identity from spatial patterns of BOLD signals in retinotopic cortices well above chance. More specifically, we could decode stimulus identity significantly better than chance from BOLD patterns in V1-3 (separately) for all conditions containing visual stimuli (AV congruent, AV incongruent and V), but not for the audio only (A) condition.</p>
<p>There were no detectable differences in the mean amplitudes of BOLD signals evoked in V1-3 for the AV congruent, AV incongruent and V conditions. However, and most importantly, decoding accuracy varied significantly with the presence and congruence of sounds in V2, with a similar trend in V3. Decoding accuracy for patterns in V2 was worse for the AV incongruent condition compared to both the V and AV congruent conditions. Decoding accuracy in V3 was worse for the AV incongruent compared to the V condition. The worsening of local decoding accuracies for the AV incongruent (compared to V) condition was confirmed and extended to area LO (and possibly V1) by searchlight analyses.</p>
<p>Significantly worse decoding for the AV incongruent condition in V2 (compared to the AV congruent and V conditions) was associated with reduced inter-trial reliability of patterns for a given stimulus in this condition (again, in comparison to the AV congruent and V conditions). In V3, reduced decoding accuracy for the AV incongruent condition relative to the V condition was accompanied by reduced inter-trial reliability for the same comparison. In contrast to the reliability of
<italic>intra</italic>
-stimulus patterns, no significant modulation of
<italic>inter</italic>
-stimulus pattern similarity could be found.</p>
<sec id="s0100">
<title>Modulation of pattern discriminability</title>
<p>Our results demonstrate modulation of stimulus-evoked pattern discriminability as a consequence of multisensory interactions in early human retinotopic cortex. They are in accord with, and extend, recent findings in macaque primary auditory cortex (
<xref rid="bb0145" ref-type="bibr">Kayser et al., 2010</xref>
) and superior temporal sulcus (
<xref rid="bb0060" ref-type="bibr">Dahl et al., 2010</xref>
). Notably, we observed these modulations in early visual cortex using high-contrast visual stimuli that covered only central parts of the visual field (< 10° eccentricity). Our data suggest that this effect reflected modulations of the inter-trial reliability of neural activation patterns for a given stimulus, i.e. the multivariate mean for a given stimulus was not shifted, but the trial-by-trial scatter around this mean depended on multisensory context. This is also in line with the findings of
<xref rid="bb0145" ref-type="bibr">Kayser et al. (2010)</xref>
and
<xref rid="bb0060" ref-type="bibr">Dahl et al. (2010)</xref>
.</p>
<p>Note that we could not discriminate BOLD signal patterns in visual cortex evoked by purely auditory stimuli. This contrasts with the findings that auditory motion direction can be decoded from lateral occipital cortex (
<xref rid="bb0005" ref-type="bibr">Alink et al., 2012</xref>
) and visual stimulus identity can be decoded from early auditory cortex (
<xref rid="bb0125 bb0205" ref-type="bibr">Hsieh et al., 2012; Meyer et al., 2010</xref>
). A possible explanation for this difference is that such effects rely on top-down attention or even cross-modally evoked imagery (
<xref rid="bb0125 bb0205" ref-type="bibr">Hsieh et al., 2012; Meyer et al., 2010</xref>
). It is possible that this kind of effect was prevented or attenuated by our fixation task. Alternatively, it is possible that only certain types of auditory signal, such as those associated with motion, can be decoded from visual cortex.</p>
<p>Interestingly, modulations of BOLD pattern discriminability in visual cortices were not accompanied by overall amplitude modulations in our experiment. This differs from the results of previous fMRI studies that found increased univariate signals in early sensory areas for concurrent audiovisual compared to purely visual stimulation (e.g.
<xref rid="bb0190 bb0230 bb0275" ref-type="bibr">Martuzzi et al., 2007; Noesselt et al., 2007; Watkins et al., 2006</xref>
). This difference might reflect the fact that these earlier studies used transient, periliminal or low-contrast stimuli, while here we used naturalistic stimuli. Also,
<xref rid="bb0145" ref-type="bibr">Kayser et al. (2010)</xref>
and
<xref rid="bb0060" ref-type="bibr">Dahl et al. (2010)</xref>
found some net amplitude
<italic>reduction</italic>
for bimodal stimulation. However, our V condition differed from their design: in our experiment it was not truly unimodal because scanner noise was present throughout the experiment. Increased BOLD amplitude is also observed in parts of early visual cortex for spatially incongruent (vs. congruent) audiovisual stimuli (
<xref rid="bb0195" ref-type="bibr">Meienbrock et al., 2007</xref>
). Our failure to find such an effect might be due to differences in stimuli and design. Audiovisual in/congruence was specific to spatial alignment in that earlier study, while our manipulation affected temporal and semantic congruence as well. Also, we used an orthogonal fixation task, while the earlier study required participants to explicitly judge the spatial congruency of stimuli. Congruency effects may therefore be task-dependent, and this should be examined in future work; stimulus- and congruency-directed attention might influence multisensory modulation of univariate response levels. Finally, the effect reported in that earlier study was only observed for a subgroup of vertices within retinotopic ROIs of one hemisphere at a relaxed statistical threshold, so our failure to observe such moderate effects may be due to a lack of statistical power. Whatever the reason for the dissociation between modulation of overall amplitude and pattern discriminability in the present work, it renders our results important in the context of the debate about criteria for multisensory interactions. These usually concern different types of amplitude modulation and the question of which of them qualify as ‘multisensory’ (e.g.
<xref rid="bb0030" ref-type="bibr">Beauchamp, 2005</xref>
). Our results demonstrate multisensory interactions in the absence of
<italic>any</italic>
detectable net amplitude modulation. Furthermore, one might argue that, in the context of naturalistic stimuli, modulation of pattern discriminability may be the most relevant effect of multisensory interactions. Recently, it has been argued that the role of primary sensory cortices in audio-visual integration might be limited to low level stimulus features and transient stimuli (
<xref rid="bb0120 bb0280" ref-type="bibr">Giani et al., 2012; Werner and Noppeney, 2010</xref>
). The basis for this argument is the observed insensitivity of the (univariate) BOLD signal amplitude in primary auditory cortex to higher order stimulus congruence (
<xref rid="bb0280" ref-type="bibr">Werner and Noppeney, 2010</xref>
) and the absence of cross-modulation frequencies for audio-visual steady-state responses in MEG (
<xref rid="bb0120" ref-type="bibr">Giani et al., 2012</xref>
; note that the latter method does not allow the presentation of audio-visually congruent stimuli). Our results suggest that the null results in these studies could reflect the insensitivity of the analysis methods used to modulations of encoded stimulus information (such as pattern discriminability or pattern reliability). This underscores the need for further research to clarify the exact role of primary sensory cortices in audiovisual stimulus integration.</p>
</sec>
<sec id="s0105">
<title>Potential mechanisms modulating audiovisual pattern discriminability</title>
<p>How do sounds affect the reliability of early visual cortex signals? Most likely this effect rests on subthreshold modulation of visual neurons, rather than on classical bimodal neurons. Bimodal neurons in early visual cortex seem to be restricted to the far periphery of visual space (which we did not stimulate here) whereas subthreshold modulation also affects more central representations (
<xref rid="bb0025" ref-type="bibr">Allman and Meredith, 2007</xref>
). Furthermore, multisensory modulation of spike train discriminability is found for subthreshold modulation of visual neurons (
<xref rid="bb0060" ref-type="bibr">Dahl et al., 2010</xref>
). One could speculate that this subthreshold modulation might in turn be mediated by phase alignment of ongoing oscillations (e.g.
<xref rid="bb0160 bb0220 bb0245" ref-type="bibr">Lakatos et al., 2007; Naue et al., 2011; Romei et al., 2012</xref>
). Some results from a recent MEG study are of particular interest (
<xref rid="bb0180" ref-type="bibr">Luo et al., 2010</xref>
), which showed that the accuracy of decoding video stimuli from phase patterns of occipital channels depends on audiovisual congruency. Furthermore, in that MEG study the trial-by-trial phase coherence (i.e. reliability) for a given video stimulus was affected by audiovisual congruency as well. It has been proposed that temporal profiles of neural activity in different primary sensory areas can act as oscillatory attractors on each other, effectively yielding an ongoing modulation of excitability (
<xref rid="bb0165 bb0255" ref-type="bibr">Lakatos et al., 2009; Schroeder et al., 2008</xref>
). This could serve to minimise temporal uncertainty (
<xref rid="bb0105" ref-type="bibr">Friston, 2009</xref>
) and would be very similar to what was proposed as an early theory of ‘dynamic attention’ (
<xref rid="bb0140 bb0170" ref-type="bibr">Jones, 1976; Large and Jones, 1999</xref>
). Note that, for our design, such effects would likely be stimulus-driven rather than top-down controlled: participants were engaged in a fixation task and had no incentive to concentrate on the dynamic stimuli in the background.</p>
<p>If temporal fine-tuning is indeed a mechanism behind our finding, it is interesting that MVPA was sensitive enough to pick it up despite the coarse temporal resolution of fMRI and the fact that decoding rests on
<italic>spatial</italic>
patterns of activation. The studies by
<xref rid="bb0145" ref-type="bibr">Kayser et al. (2010)</xref>
and
<xref rid="bb0060" ref-type="bibr">Dahl et al. (2010)</xref>
investigated modulation of single-unit firing rate variability. This could translate into BOLD pattern variability if, in effect, the variance of the net population amplitude in a voxel were modulated, or at least the variance of modulatory pre-synaptic activity contributing to the BOLD signal (
<xref rid="bb0050 bb0110" ref-type="bibr">Cardoso et al., 2012; Friston, 2012</xref>
).</p>
</sec>
<sec id="s0110">
<title>Null results with regard to enhanced pattern discriminability and V1</title>
<p>Our data did not show significant modulation of pattern discriminability in V1. For V2 and V3 they only showed reduced pattern discriminability in the AV incongruent condition, but no enhancement for the AV congruent condition. Null results need to be interpreted cautiously for several reasons. In our case, there are additional, design-specific reasons to be cautious: multisensory interactions are generally more likely for peripheral (e.g.
<xref rid="bb0025" ref-type="bibr">Allman and Meredith, 2007</xref>
) and degraded (e.g.
<xref rid="bb0080 bb0090" ref-type="bibr">Ernst and Banks, 2002; Fetsch et al., 2012</xref>
) stimuli. However, our visual stimuli were naturalistic and had high contrast, while the sounds we used were degraded due to scanner noise. Thus our design was suboptimal for evoking maximum cross-modal interaction effects and potentially biased towards detrimental effects on visual processing rather than enhancement. That said, one might expect audio-visual effects to be stronger in V2 than V1 if they rest on direct crosstalk with auditory cortex, because these connections seem to be much sparser in V1 than in V2 (
<xref rid="bb0240" ref-type="bibr">Rockland and Ojima, 2003</xref>
). Furthermore,
<xref rid="bb0145" ref-type="bibr">Kayser et al. (2010)</xref>
found enhancement of information representation in macaque A1 for AV congruent as well as for AV incongruent stimuli. However,
<xref rid="bb0060" ref-type="bibr">Dahl et al. (2010)</xref>
found significant information degradation for visual neurons only in the AV incongruent condition, but no significant enhancement for the AV congruent condition. In sum, it is possible that the signal-to-noise ratio (SNR) of early visual responses is close to ceiling for naturalistic stimuli, and thus early auditory responses are more likely to gain from multisensory interactions. Future studies should parametrically vary the SNR of visual stimuli (or possibly of both modalities) to shed further light on this question.</p>
</sec>
<sec id="s0115">
<title>Possible sources of multisensory interactions</title>
<p>Our data provide information about the effects of multisensory interactions in V1-3, but not about their source(s). The multisensory effects we observed could be mediated by feedback connections from multisensory cortices, by feed-forward connections from the superior colliculus and/or by direct connections between primary sensory areas (cf.
<xref rid="bb0075 bb0150" ref-type="bibr">Driver and Noesselt, 2008; Klemen and Chambers, 2012</xref>
 for an overview). In humans, analyses of functional connectivity could provide hints regarding these possibilities (e.g. psycho-physiological interactions (PPI);
<xref rid="bb0115" ref-type="bibr">Friston et al., 1997</xref>
). Unfortunately, however, the optimal design requirements for MVPA are very different from those for connectivity analyses (e.g. fast event-related designs to acquire many pattern examples for MVPA vs. longer task blocks for PPI). Future studies could try to combine both analysis techniques by applying both kinds of designs in one sample. This would allow testing whether the individual strength of modulation of information representation correlates with the individual strength of connectivity modulation.</p>
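<p>For illustration, the core of a PPI analysis is an interaction regressor formed from the product of a seed timecourse and the psychological context; its coefficient tests whether seed-to-voxel coupling changes with context. The sketch below is simplified (a full PPI deconvolves the seed BOLD signal to the neural level before forming the product, which is omitted here; all names are illustrative):</p>
<preformat>
# Simplified PPI sketch (cf. Friston et al., 1997): context-dependent
# coupling is carried by the seed-by-context interaction regressor.
import numpy as np

def build_ppi_design(seed_bold, psych, hrf):
    # design: task regressor, seed regressor, interaction, constant
    psych_conv = np.convolve(psych, hrf)[: len(psych)]
    interaction = seed_bold * (psych - psych.mean())
    const = np.ones_like(seed_bold)
    return np.column_stack([psych_conv, seed_bold, interaction, const])

def ppi_interaction_beta(voxel_ts, X):
    # OLS fit; betas[2] is the interaction (PPI) effect
    betas, *_ = np.linalg.lstsq(X, voxel_ts, rcond=None)
    return betas[2]
</preformat>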
</sec>
</sec>
<sec sec-type="conclusions" id="s0120">
<title>Conclusions</title>
<p>Multisensory interactions affect human visual cortex processing from its earliest stages. For naturalistic stimuli, these interactions can be restricted to reliability modulations of fine-grained patterns and thus go undetected by common univariate analyses. This calls into question the exclusivity of criteria for multisensory interactions based on net amplitude modulation. Pattern discriminability modulations likely serve to enhance encoding reliability (especially for weak stimuli), but further research is needed.</p>
<p>The following are the supplementary data related to this article:
<supplementary-material content-type="local-data" id="ec0005">
<caption>
<p>Supplementary material.</p>
</caption>
<media xlink:href="mmc1.doc"></media>
</supplementary-material>
<supplementary-material content-type="local-data" id="ec0010">
<caption>
<title>Fig. S1</title>
<p>Full correlation matrices for stimulus evoked patterns in V1-3.</p>
<p>Each cell represents the correlation between two average patterns of activation. Correlations on the diagonal represent correlations of the average pattern of activity for a given stimulus in odd runs with the average pattern for this stimulus in even runs. All other averages are across all runs. Letters indicate stimulus identity (‘F’rog, ‘K’eys, ‘R’ooster, ‘P’aper), following a ‘visual/auditory’ convention (cf. Supplementary results for further explanation). Coloured rectangles indicate within-condition correlations. The colour bar to the right indicates Pearson's correlation coefficient. Panels a) to c) represent pattern correlations for V1-3, respectively.</p>
</caption>
<media xlink:href="mmc2.pptx"></media>
</supplementary-material>
</p>
</sec>
</body>
<back>
<ref-list>
<title>References</title>
<ref id="bb0005">
<element-citation publication-type="journal" id="rf0005">
<person-group person-group-type="author">
<name>
<surname>Alink</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Euler</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Kriegeskorte</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Singer</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Kohler</surname>
<given-names>A.</given-names>
</name>
</person-group>
<article-title>Auditory motion direction encoding in auditory cortex and high-level visual cortex</article-title>
<source>Hum. Brain Mapp.</source>
<volume>33</volume>
<year>2012</year>
<fpage>969</fpage>
<lpage>978</lpage>
<pub-id pub-id-type="pmid">21692141</pub-id>
</element-citation>
</ref>
<ref id="bb0025">
<element-citation publication-type="journal" id="rf0015">
<person-group person-group-type="author">
<name>
<surname>Allman</surname>
<given-names>B.L.</given-names>
</name>
<name>
<surname>Meredith</surname>
<given-names>M.A.</given-names>
</name>
</person-group>
<article-title>Multisensory processing in “unimodal” neurons: cross-modal subthreshold auditory effects in cat extrastriate visual cortex</article-title>
<source>J. Neurophysiol.</source>
<volume>98</volume>
<year>2007</year>
<fpage>545</fpage>
<lpage>549</lpage>
<pub-id pub-id-type="pmid">17475717</pub-id>
</element-citation>
</ref>
<ref id="bb0010">
<element-citation publication-type="journal" id="rf0250">
<person-group person-group-type="author">
<name>
<surname>Allman</surname>
<given-names>B.L.</given-names>
</name>
<name>
<surname>Bittencourt-Navarrete</surname>
<given-names>R.E.</given-names>
</name>
<name>
<surname>Keniston</surname>
<given-names>L.P.</given-names>
</name>
<name>
<surname>Medina</surname>
<given-names>A.E.</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>M.Y.</given-names>
</name>
<name>
<surname>Meredith</surname>
<given-names>M.A.</given-names>
</name>
</person-group>
<article-title>Do cross-modal projections always result in multisensory integration?</article-title>
<source>Cereb. Cortex</source>
<volume>18</volume>
<year>2008</year>
<fpage>2066</fpage>
<lpage>2076</lpage>
<pub-id pub-id-type="pmid">18203695</pub-id>
</element-citation>
</ref>
<ref id="bb0020">
<element-citation publication-type="journal" id="rf0260">
<person-group person-group-type="author">
<name>
<surname>Allman</surname>
<given-names>B.L.</given-names>
</name>
<name>
<surname>Keniston</surname>
<given-names>L.P.</given-names>
</name>
<name>
<surname>Meredith</surname>
<given-names>M.A.</given-names>
</name>
</person-group>
<article-title>Not just for bimodal neurons anymore: the contribution of unimodal neurons to cortical multisensory processing</article-title>
<source>Brain Topogr.</source>
<year>2009</year>
<fpage>157</fpage>
<lpage>167</lpage>
<pub-id pub-id-type="pmid">19326204</pub-id>
</element-citation>
</ref>
<ref id="bb0030">
<element-citation publication-type="journal" id="rf0020">
<person-group person-group-type="author">
<name>
<surname>Beauchamp</surname>
<given-names>M.S.</given-names>
</name>
</person-group>
<article-title>Statistical criteria in FMRI studies of multisensory integration</article-title>
<source>Neuroinformatics</source>
<volume>3</volume>
<year>2005</year>
<fpage>93</fpage>
<lpage>113</lpage>
<pub-id pub-id-type="pmid">15988040</pub-id>
</element-citation>
</ref>
<ref id="bb0035">
<element-citation publication-type="journal" id="rf0025">
<person-group person-group-type="author">
<name>
<surname>Brainard</surname>
<given-names>D.H.</given-names>
</name>
</person-group>
<article-title>The psychophysics toolbox</article-title>
<source>Spat. Vis.</source>
<volume>10</volume>
<year>1997</year>
<fpage>433</fpage>
<lpage>436</lpage>
<pub-id pub-id-type="pmid">9176952</pub-id>
</element-citation>
</ref>
<ref id="bb0040">
<element-citation publication-type="journal" id="rf0030">
<person-group person-group-type="author">
<name>
<surname>Budinger</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Heil</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Hess</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Scheich</surname>
<given-names>H.</given-names>
</name>
</person-group>
<article-title>Multisensory processing via early cortical stages: connections of the primary auditory cortical field with other sensory systems</article-title>
<source>Neuroscience</source>
<volume>143</volume>
<year>2006</year>
<fpage>1065</fpage>
<lpage>1083</lpage>
<pub-id pub-id-type="pmid">17027173</pub-id>
</element-citation>
</ref>
<ref id="bb0045">
<element-citation publication-type="journal" id="rf0035">
<person-group person-group-type="author">
<name>
<surname>Cappe</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Thut</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Romei</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Murray</surname>
<given-names>M.M.</given-names>
</name>
</person-group>
<article-title>Auditory-visual multisensory interactions in humans: timing, topography, directionality, and sources</article-title>
<source>J. Neurosci.</source>
<volume>30</volume>
<year>2010</year>
<fpage>12572</fpage>
<lpage>12580</lpage>
<pub-id pub-id-type="pmid">20861363</pub-id>
</element-citation>
</ref>
<ref id="bb0050">
<element-citation publication-type="journal" id="rf0040">
<person-group person-group-type="author">
<name>
<surname>Cardoso</surname>
<given-names>M.M.B.</given-names>
</name>
<name>
<surname>Sirotin</surname>
<given-names>Y.B.</given-names>
</name>
<name>
<surname>Lima</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Glushenkova</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Das</surname>
<given-names>A.</given-names>
</name>
</person-group>
<article-title>The neuroimaging signal is a linear sum of neurally distinct stimulus- and task-related components</article-title>
<source>Nat. Neurosci.</source>
<volume>15</volume>
<year>2012</year>
<fpage>1298</fpage>
<lpage>1306</lpage>
<pub-id pub-id-type="pmid">22842146</pub-id>
</element-citation>
</ref>
<ref id="bb0055">
<element-citation publication-type="journal" id="rf0045">
<person-group person-group-type="author">
<name>
<surname>Clavagnier</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Falchier</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Kennedy</surname>
<given-names>H.</given-names>
</name>
</person-group>
<article-title>Long-distance feedback projections to area V1: implications for multisensory integration, spatial awareness, and visual consciousness</article-title>
<source>Cogn. Affect. Behav. Neurosci.</source>
<volume>4</volume>
<year>2004</year>
<fpage>117</fpage>
<lpage>126</lpage>
<pub-id pub-id-type="pmid">15460918</pub-id>
</element-citation>
</ref>
<ref id="bb0060">
<element-citation publication-type="journal" id="rf0050">
<person-group person-group-type="author">
<name>
<surname>Dahl</surname>
<given-names>C.D.</given-names>
</name>
<name>
<surname>Logothetis</surname>
<given-names>N.K.</given-names>
</name>
<name>
<surname>Kayser</surname>
<given-names>C.</given-names>
</name>
</person-group>
<article-title>Modulation of visual responses in the superior temporal sulcus by audio-visual congruency</article-title>
<source>Front. Integr. Neurosci.</source>
<volume>4</volume>
<year>2010</year>
<fpage>10</fpage>
<pub-id pub-id-type="pmid">20428507</pub-id>
</element-citation>
</ref>
<ref id="bb0065">
<element-citation publication-type="journal" id="rf0055">
<person-group person-group-type="author">
<name>
<surname>Deichmann</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Schwarzbauer</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Turner</surname>
<given-names>R.</given-names>
</name>
</person-group>
<article-title>Optimisation of the 3D MDEFT sequence for anatomical brain imaging: technical implications at 1.5 and 3 T</article-title>
<source>Neuroimage</source>
<volume>21</volume>
<year>2004</year>
<fpage>757</fpage>
<lpage>767</lpage>
<pub-id pub-id-type="pmid">14980579</pub-id>
</element-citation>
</ref>
<ref id="bb0070">
<element-citation publication-type="journal" id="rf0060">
<person-group person-group-type="author">
<name>
<surname>Destrieux</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Fischl</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Dale</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Halgren</surname>
<given-names>E.</given-names>
</name>
</person-group>
<article-title>Automatic parcellation of human cortical gyri and sulci using standard anatomical nomenclature</article-title>
<source>Neuroimage</source>
<volume>53</volume>
<year>2010</year>
<fpage>1</fpage>
<lpage>15</lpage>
<pub-id pub-id-type="pmid">20547229</pub-id>
</element-citation>
</ref>
<ref id="bb0075">
<element-citation publication-type="journal" id="rf0065">
<person-group person-group-type="author">
<name>
<surname>Driver</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Noesselt</surname>
<given-names>T.</given-names>
</name>
</person-group>
<article-title>Multisensory interplay reveals crossmodal influences on “sensory-specific” brain regions, neural responses, and judgments</article-title>
<source>Neuron</source>
<volume>57</volume>
<year>2008</year>
<fpage>11</fpage>
<lpage>23</lpage>
<pub-id pub-id-type="pmid">18184561</pub-id>
</element-citation>
</ref>
<ref id="bb0080">
<element-citation publication-type="journal" id="rf0070">
<person-group person-group-type="author">
<name>
<surname>Ernst</surname>
<given-names>M.O.</given-names>
</name>
<name>
<surname>Banks</surname>
<given-names>M.S.</given-names>
</name>
</person-group>
<article-title>Humans integrate visual and haptic information in a statistically optimal fashion</article-title>
<source>Nature</source>
<volume>415</volume>
<year>2002</year>
<fpage>429</fpage>
<lpage>433</lpage>
<pub-id pub-id-type="pmid">11807554</pub-id>
</element-citation>
</ref>
<ref id="bb0085">
<element-citation publication-type="journal" id="rf0075">
<person-group person-group-type="author">
<name>
<surname>Falchier</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Clavagnier</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Barone</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Kennedy</surname>
<given-names>H.</given-names>
</name>
</person-group>
<article-title>Anatomical evidence of multimodal integration in primate striate cortex</article-title>
<source>J. Neurosci.</source>
<volume>22</volume>
<year>2002</year>
<fpage>5749</fpage>
<lpage>5759</lpage>
<pub-id pub-id-type="pmid">12097528</pub-id>
</element-citation>
</ref>
<ref id="bb0090">
<element-citation publication-type="journal" id="rf0080">
<person-group person-group-type="author">
<name>
<surname>Fetsch</surname>
<given-names>C.R.</given-names>
</name>
<name>
<surname>Pouget</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>DeAngelis</surname>
<given-names>G.C.</given-names>
</name>
<name>
<surname>Angelaki</surname>
<given-names>D.E.</given-names>
</name>
</person-group>
<article-title>Neural correlates of reliability-based cue weighting during multisensory integration</article-title>
<source>Nat. Neurosci.</source>
<volume>15</volume>
<year>2012</year>
<fpage>146</fpage>
<lpage>154</lpage>
<pub-id pub-id-type="pmid">22101645</pub-id>
</element-citation>
</ref>
<ref id="bb0095">
<element-citation publication-type="journal" id="rf0085">
<person-group person-group-type="author">
<name>
<surname>Fishman</surname>
<given-names>M.C.</given-names>
</name>
<name>
<surname>Michael</surname>
<given-names>P.</given-names>
</name>
</person-group>
<article-title>Integration of auditory information in the cat's visual cortex</article-title>
<source>Vision Res.</source>
<volume>13</volume>
<year>1973</year>
<fpage>1415</fpage>
<lpage>1419</lpage>
<pub-id pub-id-type="pmid">4719075</pub-id>
</element-citation>
</ref>
<ref id="bb0100">
<element-citation publication-type="journal" id="rf0265">
<person-group person-group-type="author">
<name>
<surname>Formisano</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>De Martino</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Bonte</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Goebel</surname>
<given-names>R.</given-names>
</name>
</person-group>
<article-title>“Who” is saying “what”? Brain-based decoding of human voice and speech</article-title>
<source>Science</source>
<year>2008</year>
<fpage>970</fpage>
<lpage>973</lpage>
<pub-id pub-id-type="pmid">18988858</pub-id>
</element-citation>
</ref>
<ref id="bb0105">
<element-citation publication-type="journal" id="rf0090">
<person-group person-group-type="author">
<name>
<surname>Friston</surname>
<given-names>K.</given-names>
</name>
</person-group>
<article-title>The free-energy principle: a rough guide to the brain?</article-title>
<source>Trends Cogn. Sci.</source>
<volume>13</volume>
<year>2009</year>
<fpage>293</fpage>
<lpage>301</lpage>
<pub-id pub-id-type="pmid">19559644</pub-id>
</element-citation>
</ref>
<ref id="bb0110">
<element-citation publication-type="journal" id="rf0095">
<person-group person-group-type="author">
<name>
<surname>Friston</surname>
<given-names>K.J.</given-names>
</name>
</person-group>
<article-title>What does functional MRI measure? Two complementary perspectives</article-title>
<source>Trends Cogn. Sci.</source>
<volume>16</volume>
<year>2012</year>
<fpage>491</fpage>
<lpage>492</lpage>
</element-citation>
</ref>
<ref id="bb0115">
<element-citation publication-type="journal" id="rf0100">
<person-group person-group-type="author">
<name>
<surname>Friston</surname>
<given-names>K.J.</given-names>
</name>
<name>
<surname>Buechel</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Fink</surname>
<given-names>G.R.</given-names>
</name>
<name>
<surname>Morris</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Rolls</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Dolan</surname>
<given-names>R.J.</given-names>
</name>
</person-group>
<article-title>Psychophysiological and modulatory interactions in neuroimaging</article-title>
<source>Neuroimage</source>
<volume>6</volume>
<year>1997</year>
<fpage>218</fpage>
<lpage>229</lpage>
<pub-id pub-id-type="pmid">9344826</pub-id>
</element-citation>
</ref>
<ref id="bb0120">
<element-citation publication-type="journal" id="rf0105">
<person-group person-group-type="author">
<name>
<surname>Giani</surname>
<given-names>A.S.</given-names>
</name>
<name>
<surname>Ortiz</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Belardinelli</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Kleiner</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Preissl</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Noppeney</surname>
<given-names>U.</given-names>
</name>
</person-group>
<article-title>Steady-state responses in MEG demonstrate information integration within but not across the auditory and visual senses</article-title>
<source>Neuroimage</source>
<volume>60</volume>
<year>2012</year>
<fpage>1478</fpage>
<lpage>1489</lpage>
<pub-id pub-id-type="pmid">22305992</pub-id>
</element-citation>
</ref>
<ref id="bb0125">
<element-citation publication-type="journal" id="rf0110">
<person-group person-group-type="author">
<name>
<surname>Hsieh</surname>
<given-names>P.-J.</given-names>
</name>
<name>
<surname>Colas</surname>
<given-names>J.T.</given-names>
</name>
<name>
<surname>Kanwisher</surname>
<given-names>N.</given-names>
</name>
</person-group>
<article-title>Spatial pattern of BOLD fMRI activation reveals cross-modal information in auditory cortex</article-title>
<source>J. Neurophysiol.</source>
<volume>107</volume>
<year>2012</year>
<fpage>3428</fpage>
<lpage>3432</lpage>
<pub-id pub-id-type="pmid">22514287</pub-id>
</element-citation>
</ref>
<ref id="bb0130">
<element-citation publication-type="journal" id="rf0115">
<person-group person-group-type="author">
<name>
<surname>Hsu</surname>
<given-names>C.-W.</given-names>
</name>
<name>
<surname>Lin</surname>
<given-names>C.-J.</given-names>
</name>
</person-group>
<article-title>A comparison of methods for multiclass support vector machines</article-title>
<source>IEEE Trans. Neural Netw.</source>
<volume>13</volume>
<year>2002</year>
<fpage>415</fpage>
<lpage>425</lpage>
</element-citation>
</ref>
<ref id="bb0135">
<element-citation publication-type="journal" id="rf0120">
<person-group person-group-type="author">
<name>
<surname>Iurilli</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Ghezzi</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Olcese</surname>
<given-names>U.</given-names>
</name>
<name>
<surname>Lassi</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Nazzaro</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Tonini</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Tucci</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Benfenati</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Medini</surname>
<given-names>P.</given-names>
</name>
</person-group>
<article-title>Sound-driven synaptic inhibition in primary visual cortex</article-title>
<source>Neuron</source>
<volume>73</volume>
<year>2012</year>
<fpage>814</fpage>
<lpage>828</lpage>
<pub-id pub-id-type="pmid">22365553</pub-id>
</element-citation>
</ref>
<ref id="bb0140">
<element-citation publication-type="journal" id="rf0270">
<person-group person-group-type="author">
<name>
<surname>Jones</surname>
<given-names>M.R.</given-names>
</name>
</person-group>
<article-title>Time, our lost dimension: toward a new theory of perception, attention, and memory</article-title>
<source>Psychol. Rev.</source>
<volume>83</volume>
<issue>5</issue>
<year>1976</year>
<fpage>323</fpage>
<lpage>355</lpage>
<pub-id pub-id-type="pmid">794904</pub-id>
</element-citation>
</ref>
<ref id="bb0145">
<element-citation publication-type="journal" id="rf0130">
<person-group person-group-type="author">
<name>
<surname>Kayser</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Logothetis</surname>
<given-names>N.K.</given-names>
</name>
<name>
<surname>Panzeri</surname>
<given-names>S.</given-names>
</name>
</person-group>
<article-title>Visual enhancement of the information representation in auditory cortex</article-title>
<source>Curr. Biol.</source>
<volume>20</volume>
<year>2010</year>
<fpage>19</fpage>
<lpage>24</lpage>
<pub-id pub-id-type="pmid">20036538</pub-id>
</element-citation>
</ref>
<ref id="bb0150">
<element-citation publication-type="journal" id="rf0135">
<person-group person-group-type="author">
<name>
<surname>Klemen</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Chambers</surname>
<given-names>C.D.</given-names>
</name>
</person-group>
<article-title>Current perspectives and methods in studying neural mechanisms of multisensory interactions</article-title>
<source>Neurosci. Biobehav. Rev.</source>
<volume>36</volume>
<year>2012</year>
<fpage>111</fpage>
<lpage>133</lpage>
<pub-id pub-id-type="pmid">21569794</pub-id>
</element-citation>
</ref>
<ref id="bb0155">
<element-citation publication-type="journal" id="rf0140">
<person-group person-group-type="author">
<name>
<surname>Kriegeskorte</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Goebel</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Bandettini</surname>
<given-names>P.</given-names>
</name>
</person-group>
<article-title>Information-based functional brain mapping</article-title>
<source>Proc. Natl. Acad. Sci. U. S. A.</source>
<volume>103</volume>
<year>2006</year>
<fpage>3863</fpage>
<lpage>3868</lpage>
<pub-id pub-id-type="pmid">16537458</pub-id>
</element-citation>
</ref>
<ref id="bb0160">
<element-citation publication-type="journal" id="rf0145">
<person-group person-group-type="author">
<name>
<surname>Lakatos</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>C.-M.</given-names>
</name>
<name>
<surname>O'Connell</surname>
<given-names>M.N.</given-names>
</name>
<name>
<surname>Mills</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Schroeder</surname>
<given-names>C.E.</given-names>
</name>
</person-group>
<article-title>Neuronal oscillations and multisensory interaction in primary auditory cortex</article-title>
<source>Neuron</source>
<volume>53</volume>
<year>2007</year>
<fpage>279</fpage>
<lpage>292</lpage>
<pub-id pub-id-type="pmid">17224408</pub-id>
</element-citation>
</ref>
<ref id="bb0165">
<element-citation publication-type="journal" id="rf0150">
<person-group person-group-type="author">
<name>
<surname>Lakatos</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>O'Connell</surname>
<given-names>M.N.</given-names>
</name>
<name>
<surname>Barczak</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Mills</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Javitt</surname>
<given-names>D.C.</given-names>
</name>
<name>
<surname>Schroeder</surname>
<given-names>C.E.</given-names>
</name>
</person-group>
<article-title>The leading sense: supramodal control of neurophysiological context by attention</article-title>
<source>Neuron</source>
<volume>64</volume>
<year>2009</year>
<fpage>419</fpage>
<lpage>430</lpage>
<pub-id pub-id-type="pmid">19914189</pub-id>
</element-citation>
</ref>
<ref id="bb0170">
<element-citation publication-type="journal" id="rf0155">
<person-group person-group-type="author">
<name>
<surname>Large</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Jones</surname>
<given-names>M.</given-names>
</name>
</person-group>
<article-title>The dynamics of attending: how people track time-varying events</article-title>
<source>Psychol. Rev.</source>
<volume>106</volume>
<year>1999</year>
<fpage>119</fpage>
<lpage>159</lpage>
</element-citation>
</ref>
<ref id="bb0175">
<element-citation publication-type="journal" id="rf0160">
<person-group person-group-type="author">
<name>
<surname>Larsson</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Heeger</surname>
<given-names>D.J.</given-names>
</name>
</person-group>
<article-title>Two retinotopic visual areas in human lateral occipital cortex</article-title>
<source>J. Neurosci.</source>
<volume>26</volume>
<year>2006</year>
<fpage>13128</fpage>
<lpage>13142</lpage>
<pub-id pub-id-type="pmid">17182764</pub-id>
</element-citation>
</ref>
<ref id="bb0180">
<element-citation publication-type="journal" id="rf0165">
<person-group person-group-type="author">
<name>
<surname>Luo</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Poeppel</surname>
<given-names>D.</given-names>
</name>
</person-group>
<article-title>Auditory cortex tracks both auditory and visual stimulus dynamics using low-frequency neuronal phase modulation</article-title>
<source>PLoS Biol.</source>
<volume>8</volume>
<year>2010</year>
<fpage>e1000445</fpage>
<pub-id pub-id-type="pmid">20711473</pub-id>
</element-citation>
</ref>
<ref id="bb0185">
<element-citation publication-type="journal" id="rf0275">
<person-group person-group-type="author">
<name>
<surname>Macaluso</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Frith</surname>
<given-names>C.D.</given-names>
</name>
<name>
<surname>Driver</surname>
<given-names>J.</given-names>
</name>
</person-group>
<article-title>Modulation of human visual cortex by crossmodal spatial attention</article-title>
<source>Science</source>
<volume>289</volume>
<year>2000</year>
<fpage>1206</fpage>
<lpage>1208</lpage>
<pub-id pub-id-type="pmid">10947990</pub-id>
</element-citation>
</ref>
<ref id="bb0190">
<element-citation publication-type="journal" id="rf0280">
<person-group person-group-type="author">
<name>
<surname>Martuzzi</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Murray</surname>
<given-names>M.M.</given-names>
</name>
<name>
<surname>Michel</surname>
<given-names>C.M.</given-names>
</name>
<name>
<surname>Thiran</surname>
<given-names>J.-P.</given-names>
</name>
<name>
<surname>Maeder</surname>
<given-names>P.P.</given-names>
</name>
<name>
<surname>Clarke</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Meuli</surname>
<given-names>R.A.</given-names>
</name>
</person-group>
<article-title>Multisensory interactions within human primary cortices revealed by BOLD dynamics</article-title>
<source>Cereb. Cortex</source>
<volume>17</volume>
<year>2007</year>
<fpage>1672</fpage>
<lpage>1679</lpage>
<pub-id pub-id-type="pmid">16968869</pub-id>
</element-citation>
</ref>
<ref id="bb0195">
<element-citation publication-type="journal" id="rf0170">
<person-group person-group-type="author">
<name>
<surname>Meienbrock</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Naumer</surname>
<given-names>M.J.</given-names>
</name>
<name>
<surname>Doehrmann</surname>
<given-names>O.</given-names>
</name>
<name>
<surname>Singer</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Muckli</surname>
<given-names>L.</given-names>
</name>
</person-group>
<article-title>Retinotopic effects during spatial audio-visual integration</article-title>
<source>Neuropsychologia</source>
<volume>45</volume>
<year>2007</year>
<fpage>531</fpage>
<lpage>539</lpage>
<pub-id pub-id-type="pmid">16797610</pub-id>
</element-citation>
</ref>
<ref id="bb0200">
<element-citation publication-type="journal" id="rf0285">
<person-group person-group-type="author">
<name>
<surname>Mesulam</surname>
<given-names>M.M.</given-names>
</name>
</person-group>
<article-title>From sensation to cognition</article-title>
<source>Brain</source>
<volume>121</volume>
<issue>Pt 6</issue>
<year>1998</year>
<fpage>1013</fpage>
<lpage>1052</lpage>
<pub-id pub-id-type="pmid">9648540</pub-id>
</element-citation>
</ref>
<ref id="bb0205">
<element-citation publication-type="journal" id="rf0175">
<person-group person-group-type="author">
<name>
<surname>Meyer</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Kaplan</surname>
<given-names>J.T.</given-names>
</name>
<name>
<surname>Essex</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Webber</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Damasio</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Damasio</surname>
<given-names>A.</given-names>
</name>
</person-group>
<article-title>Predicting visual stimuli on the basis of activity in auditory cortices</article-title>
<source>Nat. Neurosci.</source>
<volume>13</volume>
<year>2010</year>
<fpage>667</fpage>
<lpage>668</lpage>
<pub-id pub-id-type="pmid">20436482</pub-id>
</element-citation>
</ref>
<ref id="bb0210">
<element-citation publication-type="journal" id="rf0180">
<person-group person-group-type="author">
<name>
<surname>Molholm</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Ritter</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Murray</surname>
<given-names>M.M.</given-names>
</name>
<name>
<surname>Javitt</surname>
<given-names>D.C.</given-names>
</name>
<name>
<surname>Schroeder</surname>
<given-names>C.E.</given-names>
</name>
<name>
<surname>Foxe</surname>
<given-names>J.J.</given-names>
</name>
</person-group>
<article-title>Multisensory auditory-visual interactions during early sensory processing in humans: a high-density electrical mapping study</article-title>
<source>Brain Res. Cogn. Brain Res.</source>
<volume>14</volume>
<year>2002</year>
<fpage>115</fpage>
<lpage>128</lpage>
<pub-id pub-id-type="pmid">12063135</pub-id>
</element-citation>
</ref>
<ref id="bb0215">
<element-citation publication-type="journal" id="rf0185">
<person-group person-group-type="author">
<name>
<surname>Morey</surname>
<given-names>R.D.</given-names>
</name>
</person-group>
<article-title>Confidence intervals from normalized data: a correction to Cousineau (2005)</article-title>
<source>Tutor. Quant. Methods Psychol.</source>
<volume>4</volume>
<year>2008</year>
<fpage>61</fpage>
<lpage>64</lpage>
</element-citation>
</ref>
<ref id="bb0220">
<element-citation publication-type="journal" id="rf0190">
<person-group person-group-type="author">
<name>
<surname>Naue</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Rach</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Strüber</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Huster</surname>
<given-names>R.J.</given-names>
</name>
<name>
<surname>Zaehle</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Körner</surname>
<given-names>U.</given-names>
</name>
<name>
<surname>Herrmann</surname>
<given-names>C.S.</given-names>
</name>
</person-group>
<article-title>Auditory event-related response in visual cortex modulates subsequent visual responses in humans</article-title>
<source>J. Neurosci.</source>
<volume>31</volume>
<year>2011</year>
<fpage>7729</fpage>
<lpage>7736</lpage>
<pub-id pub-id-type="pmid">21613485</pub-id>
</element-citation>
</ref>
<ref id="bb0225">
<element-citation publication-type="journal" id="rf0195">
<person-group person-group-type="author">
<name>
<surname>Nishimoto</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Vu</surname>
<given-names>A.T.</given-names>
</name>
<name>
<surname>Naselaris</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Benjamini</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Yu</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Gallant</surname>
<given-names>J.L.</given-names>
</name>
</person-group>
<article-title>Reconstructing visual experiences from brain activity evoked by natural movies</article-title>
<source>Curr. Biol.</source>
<volume>21</volume>
<year>2011</year>
<fpage>1641</fpage>
<lpage>1646</lpage>
<pub-id pub-id-type="pmid">21945275</pub-id>
</element-citation>
</ref>
<ref id="bb0230">
<element-citation publication-type="journal" id="rf0200">
<person-group person-group-type="author">
<name>
<surname>Noesselt</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Rieger</surname>
<given-names>J.W.</given-names>
</name>
<name>
<surname>Schoenfeld</surname>
<given-names>M.A.</given-names>
</name>
<name>
<surname>Kanowski</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Hinrichs</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Heinze</surname>
<given-names>H.-J.</given-names>
</name>
<name>
<surname>Driver</surname>
<given-names>J.</given-names>
</name>
</person-group>
<article-title>Audiovisual temporal correspondence modulates human multisensory superior temporal sulcus plus primary sensory cortices</article-title>
<source>J. Neurosci.</source>
<volume>27</volume>
<year>2007</year>
<fpage>11431</fpage>
<lpage>11441</lpage>
<pub-id pub-id-type="pmid">17942738</pub-id>
</element-citation>
</ref>
<ref id="bb0235">
<element-citation publication-type="journal" id="rf0205">
<person-group person-group-type="author">
<name>
<surname>Pelli</surname>
<given-names>D.G.</given-names>
</name>
</person-group>
<article-title>The VideoToolbox software for visual psychophysics: transforming numbers into movies</article-title>
<source>Spat. Vis.</source>
<volume>10</volume>
<year>1997</year>
<fpage>437</fpage>
<lpage>442</lpage>
<pub-id pub-id-type="pmid">9176953</pub-id>
</element-citation>
</ref>
<ref id="bb0240">
<element-citation publication-type="journal" id="rf0210">
<person-group person-group-type="author">
<name>
<surname>Rockland</surname>
<given-names>K.S.</given-names>
</name>
<name>
<surname>Ojima</surname>
<given-names>H.</given-names>
</name>
</person-group>
<article-title>Multisensory convergence in calcarine visual areas in macaque monkey</article-title>
<source>Int. J. Psychophysiol.</source>
<volume>50</volume>
<year>2003</year>
<fpage>19</fpage>
<lpage>26</lpage>
<pub-id pub-id-type="pmid">14511833</pub-id>
</element-citation>
</ref>
<ref id="bb0250">
<element-citation publication-type="journal" id="rf0220">
<person-group person-group-type="author">
<name>
<surname>Romei</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Murray</surname>
<given-names>M.M.</given-names>
</name>
<name>
<surname>Cappe</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Thut</surname>
<given-names>G.</given-names>
</name>
</person-group>
<article-title>Preperceptual and stimulus-selective enhancement of low-level human visual cortex excitability by sounds</article-title>
<source>Curr. Biol.</source>
<volume>19</volume>
<year>2009</year>
<fpage>1799</fpage>
<lpage>1805</lpage>
<pub-id pub-id-type="pmid">19836243</pub-id>
</element-citation>
</ref>
<ref id="bb0245">
<element-citation publication-type="journal" id="rf0215">
<person-group person-group-type="author">
<name>
<surname>Romei</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Gross</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Thut</surname>
<given-names>G.</given-names>
</name>
</person-group>
<article-title>Sounds reset rhythms of visual cortex and corresponding human visual perception</article-title>
<source>Curr. Biol.</source>
<volume>22</volume>
<year>2012</year>
<fpage>807</fpage>
<lpage>813</lpage>
<pub-id pub-id-type="pmid">22503499</pub-id>
</element-citation>
</ref>
<ref id="bb0255">
<element-citation publication-type="journal" id="rf0225">
<person-group person-group-type="author">
<name>
<surname>Schroeder</surname>
<given-names>C.E.</given-names>
</name>
<name>
<surname>Lakatos</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Kajikawa</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Partan</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Puce</surname>
<given-names>A.</given-names>
</name>
</person-group>
<article-title>Neuronal oscillations and visual amplification of speech</article-title>
<source>Trends Cogn. Sci.</source>
<volume>12</volume>
<year>2008</year>
<fpage>106</fpage>
<lpage>113</lpage>
<pub-id pub-id-type="pmid">18280772</pub-id>
</element-citation>
</ref>
<ref id="bb0260">
<element-citation publication-type="journal" id="rf0290">
<person-group person-group-type="author">
<name>
<surname>Sereno</surname>
<given-names>M.I.</given-names>
</name>
<name>
<surname>Dale</surname>
<given-names>A.M.</given-names>
</name>
<name>
<surname>Reppas</surname>
<given-names>J.B.</given-names>
</name>
<name>
<surname>Kwong</surname>
<given-names>K.K.</given-names>
</name>
<name>
<surname>Belliveau</surname>
<given-names>J.W.</given-names>
</name>
<name>
<surname>Brady</surname>
<given-names>T.J.</given-names>
</name>
<name>
<surname>Rosen</surname>
<given-names>B.R.</given-names>
</name>
<name>
<surname>Tootell</surname>
<given-names>R.B.</given-names>
</name>
</person-group>
<article-title>Borders of multiple visual areas in humans revealed by functional magnetic resonance imaging</article-title>
<source>Science</source>
<volume>268</volume>
<year>1995</year>
<fpage>889</fpage>
<lpage>893</lpage>
<pub-id pub-id-type="pmid">7754376</pub-id>
</element-citation>
</ref>
<ref id="bb0265">
<element-citation publication-type="journal" id="rf0230">
<person-group person-group-type="author">
<name>
<surname>Staeren</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Renvall</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>De Martino</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Goebel</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Formisano</surname>
<given-names>E.</given-names>
</name>
</person-group>
<article-title>Sound categories are represented as distributed patterns in the human auditory cortex</article-title>
<source>Curr. Biol.</source>
<volume>19</volume>
<year>2009</year>
<fpage>498</fpage>
<lpage>502</lpage>
<pub-id pub-id-type="pmid">19268594</pub-id>
</element-citation>
</ref>
<ref id="bb0270">
<element-citation publication-type="journal" id="rf0235">
<person-group person-group-type="author">
<name>
<surname>Thorne</surname>
<given-names>J.D.</given-names>
</name>
<name>
<surname>De Vos</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Viola</surname>
<given-names>F.C.</given-names>
</name>
<name>
<surname>Debener</surname>
<given-names>S.</given-names>
</name>
</person-group>
<article-title>Cross-modal phase reset predicts auditory task performance in humans</article-title>
<source>J. Neurosci.</source>
<volume>31</volume>
<year>2011</year>
<fpage>3853</fpage>
<lpage>3861</lpage>
<pub-id pub-id-type="pmid">21389240</pub-id>
</element-citation>
</ref>
<ref id="bb0275">
<element-citation publication-type="journal" id="rf0240">
<person-group person-group-type="author">
<name>
<surname>Watkins</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Shams</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Tanaka</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Haynes</surname>
<given-names>J.D.</given-names>
</name>
<name>
<surname>Rees</surname>
<given-names>G.</given-names>
</name>
</person-group>
<article-title>Sound alters activity in human V1 in association with illusory visual perception</article-title>
<source>Neuroimage</source>
<volume>31</volume>
<year>2006</year>
<fpage>1247</fpage>
<lpage>1256</lpage>
<pub-id pub-id-type="pmid">16556505</pub-id>
</element-citation>
</ref>
<ref id="bb0280">
<element-citation publication-type="journal" id="rf0245">
<person-group person-group-type="author">
<name>
<surname>Werner</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Noppeney</surname>
<given-names>U.</given-names>
</name>
</person-group>
<article-title>Distinct functional contributions of primary sensory and association areas to audiovisual integration in object categorization</article-title>
<source>J. Neurosci.</source>
<volume>30</volume>
<year>2010</year>
<fpage>2662</fpage>
<lpage>2675</lpage>
<pub-id pub-id-type="pmid">20164350</pub-id>
</element-citation>
</ref>
</ref-list>
<ack id="ac0005">
<title>Acknowledgments</title>
<p>This work was funded by the
<funding-source id="gts0005">Wellcome Trust</funding-source>
. The Wellcome Trust Centre for Neuroimaging is supported by core funding from the Wellcome Trust (091593/Z/10/Z). We thank Martin Hebart for helpful comments and the support staff for help with scanning. Jon Driver provided valuable input to the design of this study.</p>
</ack>
</back>
<floats-group>
<fig id="f0005">
<label>Fig. 1</label>
<caption>
<p>Design.</p>
<p>
<list list-type="simple">
<list-item id="o0005">
<label>a)</label>
<p>Four audiovisual clips used as stimuli, each lasting 3 s. Participants counted colour changes of the fixation dot in each trial.</p>
</list-item>
<list-item id="o0010">
<label>b)</label>
<p>Each of the clips was presented multiple times in four conditions (illustrated here for one example clip): audiovisual congruent (AV congruent) in green, audiovisual incongruent (AV incongruent) in red, visual only (V) in light grey and audio only (A) in dark grey.</p>
</list-item>
<list-item id="o0015">
<label>c)</label>
<p>For each condition, separate multivariate classifiers were trained to decode which of the four stimuli was presented.</p>
</list-item>
</list>
</p>
</caption>
<graphic xlink:href="gr1"></graphic>
</fig>
<fig id="f0010">
<label>Fig. 2</label>
<caption>
<p>Results for regions of interest (ROIs).</p>
<p>Results for areas V1-3 are shown as bar plots. Bar colours indicate conditions: audiovisual congruent in green, audiovisual incongruent in red, visual only in light grey and audio only in dark grey. Error bars indicate the standard error of the mean adjusted for repeated measurements (
<xref rid="bb0215" ref-type="bibr">Morey, 2008</xref>
).</p>
<p>
<list list-type="simple">
<list-item id="o0020">
<label>a)</label>
<p>Accuracies for 4-way classification using linear support vector machines (see
<xref rid="s0010" ref-type="sec">Materials and methods</xref>
for details). The dashed line indicates chance level (.25). Stars indicate significant differences in decoding accuracy between conditions involving visual stimulation (paired
<italic>t</italic>
-tests, see
<xref rid="s0065" ref-type="sec">Results</xref>
section for details of respective ANOVAs; *
<italic>p</italic>
 < .05, **
<italic>p</italic>
 < .01).</p>
</list-item>
<list-item id="o0025">
<label>b)</label>
<p>Mean signal amplitudes estimated by the GLM. Amplitudes were not significantly different between conditions involving visual stimulation in any of the regions of interest. The beta maps used for this analysis were not mean corrected (see
<xref rid="s0010" ref-type="sec">Materials and methods</xref>
for details).</p>
</list-item>
<list-item id="o0030">
<label>c)</label>
<p>Pattern reliability, indexed by the mean of Fisher z-transformed correlation coefficients between patterns for a given stimulus in odd and even runs (see
<xref rid="s0010" ref-type="sec">Materials and methods</xref>
for details). Stars indicate significant differences in pattern reliability between conditions involving visual stimulation (paired
<italic>t</italic>
-tests, see
<xref rid="s0065" ref-type="sec">Results</xref>
section for details of respective ANOVAs; *
<italic>p</italic>
 < .05, **
<italic>p</italic>
 < .01, ***
<italic>p</italic>
 < .001).</p>
</list-item>
<list-item id="o0035">
<label>d)</label>
<p>Pattern similarity, indexed by the mean of Fisher z-transformed correlation coefficients between patterns for different stimuli (see
<xref rid="s0010" ref-type="sec">Materials and methods</xref>
for details). Note that pattern similarities were not significantly different between conditions involving visual stimulation in any of the regions of interest. Patterns are negatively correlated because they were mean corrected across stimuli within each condition (see
<xref rid="s0010" ref-type="sec">Materials and methods</xref>
for details).</p>
</list-item>
</list>
</p>
</caption>
<graphic xlink:href="gr2"></graphic>
</fig>
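The two starred analyses in Fig. 2 can be sketched compactly. Below is a minimal Python illustration, assuming scikit-learn and NumPy; the data shapes, variable names and leave-one-run-out cross-validation scheme are assumptions for illustration, not the authors' exact pipeline (see Materials and methods in the full text for the actual procedure).

import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)

# Hypothetical data: one beta pattern per trial over the voxels of an ROI.
n_runs, n_stimuli, n_voxels = 8, 4, 500
X = rng.standard_normal((n_runs * n_stimuli, n_voxels))  # trial patterns
y = np.tile(np.arange(n_stimuli), n_runs)                # stimulus labels 0..3
runs = np.repeat(np.arange(n_runs), n_stimuli)           # run labels

# Fig. 2a: 4-way decoding with a linear support vector machine,
# cross-validated across runs; chance level is .25.
accuracy = cross_val_score(LinearSVC(C=1.0), X, y,
                           groups=runs, cv=LeaveOneGroupOut()).mean()
print(f"decoding accuracy: {accuracy:.3f} (chance = .25)")

# Fig. 2c: pattern reliability. Correlate the mean pattern for each
# stimulus in odd runs with that in even runs, then Fisher z-transform
# the coefficients (z = arctanh(r)) before averaging.
odd, even = runs % 2 == 1, runs % 2 == 0
z = [np.arctanh(np.corrcoef(X[odd & (y == s)].mean(axis=0),
                            X[even & (y == s)].mean(axis=0))[0, 1])
     for s in range(n_stimuli)]
print(f"mean pattern reliability (Fisher z): {np.mean(z):.3f}")

With the random data above the decoder sits at chance; with real ROI patterns the same scheme yields the per-condition accuracies plotted in Fig. 2a.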
<fig id="f0015">
<label>Fig. 3</label>
<caption>
<p>Results for whole brain searchlight analysis.</p>
<p>Heat maps for searchlight contrasts. Searchlight maps indicating local pattern discriminability for each condition were normalised and contrasted on the second level (see
<xref rid="s0010" ref-type="sec">Materials and methods</xref>
for details). Colour coding for
<italic>t</italic>
-values is indicated by the colour bars at the bottom of a) and b). The contrast between the audiovisual incongruent and congruent conditions was also tested but yielded no significant results. Contrasts are directed; contrasts of the opposite direction likewise yielded no significant results.</p>
<p>
<list list-type="simple">
<list-item id="o0040">
<label>a)</label>
<p>Increased pattern discriminability for the audio-visual congruent condition as compared with the visual only condition in bilateral superior temporal gyrus (see
<xref rid="t0005" ref-type="table">Table 1</xref>
and
<xref rid="s0065" ref-type="sec">Results</xref>
section for details).</p>
</list-item>
<list-item id="o0045">
<label>b)</label>
<p>Increased pattern discriminability for the visual only condition as compared with the audio-visual incongruent condition in the left lateral occipital area and on the banks of the calcarine sulcus.</p>
</list-item>
</list>
</p>
</caption>
<graphic xlink:href="gr3"></graphic>
</fig>
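The whole-brain searchlight of Fig. 3 runs the same kind of decoder on a small sphere of voxels centred on each brain location in turn. A minimal sketch using nilearn's SearchLight follows; the file names, 6 mm radius and estimator choice are assumptions rather than the authors' settings.

from nilearn.decoding import SearchLight
from sklearn.svm import LinearSVC
from sklearn.model_selection import LeaveOneGroupOut

searchlight = SearchLight(
    mask_img="brain_mask.nii.gz",  # hypothetical whole-brain mask
    radius=6.0,                    # sphere radius in mm (assumed)
    estimator=LinearSVC(),
    cv=LeaveOneGroupOut(),
    n_jobs=-1,
)
# betas_4d.nii.gz: hypothetical 4D image of trial-wise beta maps;
# y and runs are the label vectors from the ROI sketch above.
searchlight.fit("betas_4d.nii.gz", y, groups=runs)
# searchlight.scores_ is a 3D array of cross-validated accuracies, one
# per voxel; per-condition accuracy maps are then normalised and
# contrasted at the second level to produce the t-maps shown in Fig. 3.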
<fig id="f0020">
<label>Fig. 4</label>
<caption>
<p>Results for whole brain univariate analysis.</p>
<p>Heat maps indicating differences in signal amplitude between conditions. Colour coding for
<italic>t</italic>
-values is indicated by the colour bar at the bottom. See
<xref rid="s0065" ref-type="sec">Results</xref>
section and
<xref rid="t0010" ref-type="table">Table 2</xref>
for details. Note that contrasts are directed and that contrasts of opposite direction yielded no significant results.</p>
<p>
<list list-type="simple">
<list-item id="o0050">
<label>a)</label>
<p>Increased signal amplitude for the audio-visual congruent condition as compared with the visual only condition in bilateral superior temporal gyri.</p>
</list-item>
<list-item id="o0055">
<label>b)</label>
<p>Increased signal amplitude for the audio-visual incongruent condition as compared with the visual only condition in bilateral superior temporal gyri.</p>
</list-item>
</list>
</p>
</caption>
<graphic xlink:href="gr4"></graphic>
</fig>
<table-wrap id="t0005" position="float">
<label>Table 1</label>
<caption>
<p>Significant searchlight clusters. Details of clusters in which decoding accuracy differed significantly between conditions. Coordinates of peak voxels are in MNI space, cluster sizes are in voxels,
<italic>p</italic>
-values are whole-brain FWE-corrected at the cluster level, and
<italic>t</italic>
-values correspond to peak voxels. Anatomical labels refer to the Juelich Histological atlas. See
<xref rid="s0010" ref-type="sec">Materials and methods</xref>
for details.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left">Contrast</th>
<th align="left">
<italic>p</italic>
value</th>
<th align="left">Cluster size</th>
<th align="left">
<italic>t</italic>
-value</th>
<th align="left">Peak voxel</th>
<th align="left">Label</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">AV congruent–V</td>
<td align="char">< .001</td>
<td align="left">861</td>
<td align="char">6.33</td>
<td align="left">[62 − 2 0]</td>
<td align="left">r superior temporal gyrus</td>
</tr>
<tr>
<td align="left">[50 16 − 12]</td>
<td align="left">r temporal pole</td>
</tr>
<tr>
<td align="left">[62 20 − 12]</td>
<td align="left">(Not assigned)</td>
</tr>
<tr>
<td align="char">.006</td>
<td align="left">408</td>
<td align="char">5.40</td>
<td align="left">[− 56 − 2 8]</td>
<td align="left">l superior temporal gyrus</td>
</tr>
<tr>
<td align="left">[− 52 4 2]</td>
<td align="left">l Rolandic operculum</td>
</tr>
<tr>
<td align="left">[− 60 6 − 10]</td>
<td align="left">(Not assigned)</td>
</tr>
<tr>
<td align="left">V–AV incongruent</td>
<td align="char">< .001</td>
<td align="left">699</td>
<td align="char">6.93</td>
<td align="left">[− 32 − 82 − 4]</td>
<td align="left">l inferior occipital gyrus</td>
</tr>
<tr>
<td align="char">< .022</td>
<td align="left">303</td>
<td align="char">7.75</td>
<td align="left">[− 4 − 94 − 2]</td>
<td align="left">l calcarine bank</td>
</tr>
</tbody>
</table>
</table-wrap>
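Peak voxels in the tables are reported as millimetre coordinates in MNI space. To locate such a peak in a statistical image, the MNI coordinates are passed through the inverse of the image affine; below is a minimal sketch using nibabel, with a hypothetical file name.

import numpy as np
import nibabel as nib

img = nib.load("searchlight_tmap.nii.gz")   # hypothetical t-map in MNI space
mni_peak = np.array([62.0, -2.0, 0.0])      # r superior temporal gyrus peak
ijk = nib.affines.apply_affine(np.linalg.inv(img.affine), mni_peak)
print(np.round(ijk).astype(int))            # voxel indices [i j k]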
<table-wrap id="t0010" position="float">
<label>Table 2</label>
<caption>
<p>Significant clusters for the univariate analysis. Details of clusters in which signal amplitude differed significantly between conditions. Coordinates of peak voxels are in MNI space, cluster sizes are in voxels,
<italic>p</italic>
-values are whole-brain FWE-corrected at the cluster level, and
<italic>t</italic>
-values correspond to peak voxels. Anatomical labels refer to the Juelich Histological atlas. See
<xref rid="s0010" ref-type="sec">Materials and methods</xref>
for details.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left">Contrast</th>
<th align="left">
<italic>p</italic>
value</th>
<th align="left">Cluster size</th>
<th align="left">
<italic>t</italic>
-value</th>
<th align="left">Peak voxels</th>
<th align="left">Labels</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">AV congruent–V</td>
<td align="char">< .001</td>
<td align="char">1392</td>
<td align="char">8.32</td>
<td align="left">[57 − 31 13]</td>
<td align="left">r superior temporal gyrus</td>
</tr>
<tr>
<td align="char">7.81</td>
<td align="left">[69 − 22 16]</td>
<td align="left"></td>
</tr>
<tr>
<td align="char">7.69</td>
<td align="left">[54 − 7 − 8]</td>
<td align="left"></td>
</tr>
<tr>
<td align="char">< .001</td>
<td align="char">900</td>
<td align="char">8.26</td>
<td align="left">[− 57 − 16 10]</td>
<td align="left">l superior temporal gyrus</td>
</tr>
<tr>
<td align="char">7.76</td>
<td align="left">[− 48 − 25 10]</td>
<td align="left"></td>
</tr>
<tr>
<td align="char">7.05</td>
<td align="left">[− 42 − 19 13]</td>
<td align="left">l Rolandic operculum</td>
</tr>
<tr>
<td align="left">AV incongruent–V</td>
<td align="char">< .001</td>
<td align="char">1461</td>
<td align="char">8.04</td>
<td align="left">[54 − 7 8]</td>
<td align="left">r superior temporal gyrus</td>
</tr>
<tr>
<td align="char">7.34</td>
<td align="left">[57 − 31 13]</td>
<td align="left"></td>
</tr>
<tr>
<td align="char">7.20</td>
<td align="left">[45 − 19 13]</td>
<td align="left">r Heschl's gyrus</td>
</tr>
<tr>
<td align="char">< .001</td>
<td align="char">1002</td>
<td align="char">7.27</td>
<td align="left">[− 48 − 25 10]</td>
<td align="left">l superior temporal gyrus</td>
</tr>
<tr>
<td align="char">7.17</td>
<td align="left">[− 54 − 1 − 14]</td>
<td align="left"></td>
</tr>
<tr>
<td align="char">7.02</td>
<td align="left">[− 48 − 1 − 8]</td>
<td align="left"></td>
</tr>
</tbody>
</table>
</table-wrap>
</floats-group>
</pmc>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Pmc/Curation
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 002131 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Pmc/Curation/biblio.hfd -nk 002131 | SxmlIndent | more

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Pmc
   |étape=   Curation
   |type=    RBID
   |clé=     PMC:3625122
   |texte=   Auditory modulation of visual stimulus encoding in human retinotopic cortex
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Pmc/Curation/RBID.i   -Sk "pubmed:23296187" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Pmc/Curation/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024