Exploration server on haptic devices


Spatiotemporal Processing in Crossmodal Interactions for Perception of the External World: A Review

Internal identifier: 000413 (Pmc/Curation); previous: 000412; next: 000414

Authors: Souta Hidaka [Japan]; Wataru Teramoto [Japan]; Yoichi Sugita [Japan]

Source:

RBID: PMC:4686600

Abstract

Research on crossmodal interactions has garnered much interest in the last few decades. A variety of studies have demonstrated that inputs to multiple senses (vision, audition, tactile sensation, and so on) can interact perceptually in the spatial and temporal domains. Findings regarding crossmodal interactions in the spatiotemporal domain (i.e., motion processing) have also been reported, with notable updates in the last few years. In this review, we summarize past and recent findings on spatiotemporal processing in crossmodal interactions regarding perception of the external world. A traditional view of crossmodal interactions holds that vision is superior to audition in spatial processing, whereas audition is dominant over vision in temporal processing. Similarly, vision is considered to dominate the other sensory modalities (i.e., visual capture) in spatiotemporal processing. However, recent findings demonstrate that sound can have a driving effect on visual motion perception. Moreover, studies of perceptual associative learning have reported that, once an association is established between a sound sequence carrying no spatial information and visual motion, the sound sequence alone can trigger visual motion perception. Other kinds of information, such as motor action or smell, have also exhibited similar driving effects on visual motion perception. Additionally, recent brain imaging studies demonstrate that spatiotemporal information from different sensory modalities evokes similar activation patterns in several brain areas, including motion-processing areas. Based on these findings, we suggest that multimodal information interacts mutually in spatiotemporal processing for perception of the external world, and that common underlying perceptual and neural mechanisms may exist for spatiotemporal processing.


URL:
DOI: 10.3389/fnint.2015.00062
PubMed: 26733827
PubMed Central: 4686600


The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Spatiotemporal Processing in Crossmodal Interactions for Perception of the External World: A Review</title>
<author>
<name sortKey="Hidaka, Souta" sort="Hidaka, Souta" uniqKey="Hidaka S" first="Souta" last="Hidaka">Souta Hidaka</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Department of Psychology, Rikkyo University</institution>
<country>Saitama, Japan</country>
</nlm:aff>
<country xml:lang="fr">Japon</country>
</affiliation>
</author>
<author>
<name sortKey="Teramoto, Wataru" sort="Teramoto, Wataru" uniqKey="Teramoto W" first="Wataru" last="Teramoto">Wataru Teramoto</name>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<institution>Department of Psychology, Kumamoto University</institution>
<country>Kumamoto, Japan</country>
</nlm:aff>
<country xml:lang="fr">Japon</country>
</affiliation>
</author>
<author>
<name sortKey="Sugita, Yoichi" sort="Sugita, Yoichi" uniqKey="Sugita Y" first="Yoichi" last="Sugita">Yoichi Sugita</name>
<affiliation wicri:level="1">
<nlm:aff id="aff3">
<institution>Department of Psychology, Waseda University</institution>
<country>Tokyo, Japan</country>
</nlm:aff>
<country xml:lang="fr">Japon</country>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">26733827</idno>
<idno type="pmc">4686600</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4686600</idno>
<idno type="RBID">PMC:4686600</idno>
<idno type="doi">10.3389/fnint.2015.00062</idno>
<date when="2015">2015</date>
<idno type="wicri:Area/Pmc/Corpus">000413</idno>
<idno type="wicri:Area/Pmc/Curation">000413</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Spatiotemporal Processing in Crossmodal Interactions for Perception of the External World: A Review</title>
<author>
<name sortKey="Hidaka, Souta" sort="Hidaka, Souta" uniqKey="Hidaka S" first="Souta" last="Hidaka">Souta Hidaka</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Department of Psychology, Rikkyo University</institution>
<country>Saitama, Japan</country>
</nlm:aff>
<country xml:lang="fr">Japon</country>
</affiliation>
</author>
<author>
<name sortKey="Teramoto, Wataru" sort="Teramoto, Wataru" uniqKey="Teramoto W" first="Wataru" last="Teramoto">Wataru Teramoto</name>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<institution>Department of Psychology, Kumamoto University</institution>
<country>Kumamoto, Japan</country>
</nlm:aff>
<country xml:lang="fr">Japon</country>
</affiliation>
</author>
<author>
<name sortKey="Sugita, Yoichi" sort="Sugita, Yoichi" uniqKey="Sugita Y" first="Yoichi" last="Sugita">Yoichi Sugita</name>
<affiliation wicri:level="1">
<nlm:aff id="aff3">
<institution>Department of Psychology, Waseda University</institution>
<country>Tokyo, Japan</country>
</nlm:aff>
<country xml:lang="fr">Japon</country>
</affiliation>
</author>
</analytic>
<series>
<title level="j">Frontiers in Integrative Neuroscience</title>
<idno type="eISSN">1662-5145</idno>
<imprint>
<date when="2015">2015</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>Research on crossmodal interactions has garnered much interest in the last few decades. A variety of studies have demonstrated that inputs to multiple senses (vision, audition, tactile sensation, and so on) can interact perceptually in the spatial and temporal domains. Findings regarding crossmodal interactions in the spatiotemporal domain (i.e., motion processing) have also been reported, with notable updates in the last few years. In this review, we summarize past and recent findings on spatiotemporal processing in crossmodal interactions regarding perception of the external world. A traditional view of crossmodal interactions holds that vision is superior to audition in spatial processing, whereas audition is dominant over vision in temporal processing. Similarly, vision is considered to dominate the other sensory modalities (i.e., visual capture) in spatiotemporal processing. However, recent findings demonstrate that sound can have a driving effect on visual motion perception. Moreover, studies of perceptual associative learning have reported that, once an association is established between a sound sequence carrying no spatial information and visual motion, the sound sequence alone can trigger visual motion perception. Other kinds of information, such as motor action or smell, have also exhibited similar driving effects on visual motion perception. Additionally, recent brain imaging studies demonstrate that spatiotemporal information from different sensory modalities evokes similar activation patterns in several brain areas, including motion-processing areas. Based on these findings, we suggest that multimodal information interacts mutually in spatiotemporal processing for perception of the external world, and that common underlying perceptual and neural mechanisms may exist for spatiotemporal processing.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Alais, D" uniqKey="Alais D">D. Alais</name>
</author>
<author>
<name sortKey="Burr, D" uniqKey="Burr D">D. Burr</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Alais, D" uniqKey="Alais D">D. Alais</name>
</author>
<author>
<name sortKey="Burr, D" uniqKey="Burr D">D. Burr</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Alink, A" uniqKey="Alink A">A. Alink</name>
</author>
<author>
<name sortKey="Euler, F" uniqKey="Euler F">F. Euler</name>
</author>
<author>
<name sortKey="Kriegeskorte, N" uniqKey="Kriegeskorte N">N. Kriegeskorte</name>
</author>
<author>
<name sortKey="Singer, W" uniqKey="Singer W">W. Singer</name>
</author>
<author>
<name sortKey="Kohler, A" uniqKey="Kohler A">A. Kohler</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Alink, A" uniqKey="Alink A">A. Alink</name>
</author>
<author>
<name sortKey="Singer, W" uniqKey="Singer W">W. Singer</name>
</author>
<author>
<name sortKey="Muckli, L" uniqKey="Muckli L">L. Muckli</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Angelaki, D E" uniqKey="Angelaki D">D. E. Angelaki</name>
</author>
<author>
<name sortKey="Cullen, K E" uniqKey="Cullen K">K. E. Cullen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Anstis, S" uniqKey="Anstis S">S. Anstis</name>
</author>
<author>
<name sortKey="Verstraten, F A" uniqKey="Verstraten F">F. A. Verstraten</name>
</author>
<author>
<name sortKey="Mather, G" uniqKey="Mather G">G. Mather</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Baumann, O" uniqKey="Baumann O">O. Baumann</name>
</author>
<author>
<name sortKey="Greenlee, M W" uniqKey="Greenlee M">M. W. Greenlee</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Beer, A L" uniqKey="Beer A">A. L. Beer</name>
</author>
<author>
<name sortKey="Roder, B" uniqKey="Roder B">B. Röder</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bernstein, I H" uniqKey="Bernstein I">I. H. Bernstein</name>
</author>
<author>
<name sortKey="Edelstein, B A" uniqKey="Edelstein B">B. A. Edelstein</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Blake, R" uniqKey="Blake R">R. Blake</name>
</author>
<author>
<name sortKey="Sobel, K V" uniqKey="Sobel K">K. V. Sobel</name>
</author>
<author>
<name sortKey="James, T W" uniqKey="James T">T. W. James</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bremmer, F" uniqKey="Bremmer F">F. Bremmer</name>
</author>
<author>
<name sortKey="Schlack, A" uniqKey="Schlack A">A. Schlack</name>
</author>
<author>
<name sortKey="Shah, N J" uniqKey="Shah N">N. J. Shah</name>
</author>
<author>
<name sortKey="Zafiris, O" uniqKey="Zafiris O">O. Zafiris</name>
</author>
<author>
<name sortKey="Kubischik, M" uniqKey="Kubischik M">M. Kubischik</name>
</author>
<author>
<name sortKey="Hoffmann, K P" uniqKey="Hoffmann K">K. P. Hoffmann</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bresciani, J P" uniqKey="Bresciani J">J. P. Bresciani</name>
</author>
<author>
<name sortKey="Ernst, M O" uniqKey="Ernst M">M. O. Ernst</name>
</author>
<author>
<name sortKey="Drewing, K" uniqKey="Drewing K">K. Drewing</name>
</author>
<author>
<name sortKey="Bouyer, G" uniqKey="Bouyer G">G. Bouyer</name>
</author>
<author>
<name sortKey="Maury, V" uniqKey="Maury V">V. Maury</name>
</author>
<author>
<name sortKey="Kheddar, A" uniqKey="Kheddar A">A. Kheddar</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bruce, C" uniqKey="Bruce C">C. Bruce</name>
</author>
<author>
<name sortKey="Desimone, R" uniqKey="Desimone R">R. Desimone</name>
</author>
<author>
<name sortKey="Gross, C G" uniqKey="Gross C">C. G. Gross</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Calvert, G A" uniqKey="Calvert G">G. A. Calvert</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Calvert, G A" uniqKey="Calvert G">G. A. Calvert</name>
</author>
<author>
<name sortKey="Spence, C" uniqKey="Spence C">C. Spence</name>
</author>
<author>
<name sortKey="Stein, B E" uniqKey="Stein B">B. E. Stein</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Cappe, C" uniqKey="Cappe C">C. Cappe</name>
</author>
<author>
<name sortKey="Thelen, A" uniqKey="Thelen A">A. Thelen</name>
</author>
<author>
<name sortKey="Romei, V" uniqKey="Romei V">V. Romei</name>
</author>
<author>
<name sortKey="Thut, G" uniqKey="Thut G">G. Thut</name>
</author>
<author>
<name sortKey="Murray, M M" uniqKey="Murray M">M. M. Murray</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Driver, J" uniqKey="Driver J">J. Driver</name>
</author>
<author>
<name sortKey="Noesselt, T" uniqKey="Noesselt T">T. Noesselt</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Driver, J" uniqKey="Driver J">J. Driver</name>
</author>
<author>
<name sortKey="Spence, C" uniqKey="Spence C">C. Spence</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ernst, M O" uniqKey="Ernst M">M. O. Ernst</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ernst, M O" uniqKey="Ernst M">M. O. Ernst</name>
</author>
<author>
<name sortKey="Banks, M S" uniqKey="Banks M">M. S. Banks</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ernst, M O" uniqKey="Ernst M">M. O. Ernst</name>
</author>
<author>
<name sortKey="Banks, M S" uniqKey="Banks M">M. S. Banks</name>
</author>
<author>
<name sortKey="Bulthoff, H H" uniqKey="Bulthoff H">H. H. Bülthoff</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ernst, M O" uniqKey="Ernst M">M. O. Ernst</name>
</author>
<author>
<name sortKey="Bulthoff, H H" uniqKey="Bulthoff H">H. H. Bülthoff</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Felleman, D J" uniqKey="Felleman D">D. J. Felleman</name>
</author>
<author>
<name sortKey="Kaas, J H" uniqKey="Kaas J">J. H. Kaas</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fendrich, R" uniqKey="Fendrich R">R. Fendrich</name>
</author>
<author>
<name sortKey="Corballis, P M" uniqKey="Corballis P">P. M. Corballis</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fracasso, A" uniqKey="Fracasso A">A. Fracasso</name>
</author>
<author>
<name sortKey="Targher, S" uniqKey="Targher S">S. Targher</name>
</author>
<author>
<name sortKey="Zampini, M" uniqKey="Zampini M">M. Zampini</name>
</author>
<author>
<name sortKey="Melcher, D" uniqKey="Melcher D">D. Melcher</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Freeman, E" uniqKey="Freeman E">E. Freeman</name>
</author>
<author>
<name sortKey="Driver, J" uniqKey="Driver J">J. Driver</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fujisaki, W" uniqKey="Fujisaki W">W. Fujisaki</name>
</author>
<author>
<name sortKey="Kitazawa, S" uniqKey="Kitazawa S">S. Kitazawa</name>
</author>
<author>
<name sortKey="Nishida, S" uniqKey="Nishida S">S. Nishida</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fujisaki, W" uniqKey="Fujisaki W">W. Fujisaki</name>
</author>
<author>
<name sortKey="Nishida, S" uniqKey="Nishida S">S. Nishida</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fujisaki, W" uniqKey="Fujisaki W">W. Fujisaki</name>
</author>
<author>
<name sortKey="Nishida, S" uniqKey="Nishida S">S. Nishida</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fujisaki, W" uniqKey="Fujisaki W">W. Fujisaki</name>
</author>
<author>
<name sortKey="Shimojo, S" uniqKey="Shimojo S">S. Shimojo</name>
</author>
<author>
<name sortKey="Kashino, M" uniqKey="Kashino M">M. Kashino</name>
</author>
<author>
<name sortKey="Nishida, S" uniqKey="Nishida S">S. Nishida</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gebhard, J W" uniqKey="Gebhard J">J. W. Gebhard</name>
</author>
<author>
<name sortKey="Mowbray, G H" uniqKey="Mowbray G">G. H. Mowbray</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Getzmann, S" uniqKey="Getzmann S">S. Getzmann</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gleiss, S" uniqKey="Gleiss S">S. Gleiss</name>
</author>
<author>
<name sortKey="Kayser, C" uniqKey="Kayser C">C. Kayser</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Grefkes, C" uniqKey="Grefkes C">C. Grefkes</name>
</author>
<author>
<name sortKey="Fink, G R" uniqKey="Fink G">G. R. Fink</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hagen, M C" uniqKey="Hagen M">M. C. Hagen</name>
</author>
<author>
<name sortKey="Franzen, O" uniqKey="Franzen O">O. Franzén</name>
</author>
<author>
<name sortKey="Mcglone, F" uniqKey="Mcglone F">F. McGlone</name>
</author>
<author>
<name sortKey="Essick, G" uniqKey="Essick G">G. Essick</name>
</author>
<author>
<name sortKey="Dancer, C" uniqKey="Dancer C">C. Dancer</name>
</author>
<author>
<name sortKey="Pardo, J V" uniqKey="Pardo J">J. V. Pardo</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Haijiang, Q" uniqKey="Haijiang Q">Q. Haijiang</name>
</author>
<author>
<name sortKey="Saunders, J A" uniqKey="Saunders J">J. A. Saunders</name>
</author>
<author>
<name sortKey="Stone, R W" uniqKey="Stone R">R. W. Stone</name>
</author>
<author>
<name sortKey="Backus, B T" uniqKey="Backus B">B. T. Backus</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hidaka, S" uniqKey="Hidaka S">S. Hidaka</name>
</author>
<author>
<name sortKey="Manaka, Y" uniqKey="Manaka Y">Y. Manaka</name>
</author>
<author>
<name sortKey="Teramoto, W" uniqKey="Teramoto W">W. Teramoto</name>
</author>
<author>
<name sortKey="Sugita, Y" uniqKey="Sugita Y">Y. Sugita</name>
</author>
<author>
<name sortKey="Miyauchi, R" uniqKey="Miyauchi R">R. Miyauchi</name>
</author>
<author>
<name sortKey="Gyoba, J" uniqKey="Gyoba J">J. Gyoba</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hidaka, S" uniqKey="Hidaka S">S. Hidaka</name>
</author>
<author>
<name sortKey="Teramoto, W" uniqKey="Teramoto W">W. Teramoto</name>
</author>
<author>
<name sortKey="Keetels, M" uniqKey="Keetels M">M. Keetels</name>
</author>
<author>
<name sortKey="Vroomen, J" uniqKey="Vroomen J">J. Vroomen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hidaka, S" uniqKey="Hidaka S">S. Hidaka</name>
</author>
<author>
<name sortKey="Teramoto, W" uniqKey="Teramoto W">W. Teramoto</name>
</author>
<author>
<name sortKey="Kobayashi, M" uniqKey="Kobayashi M">M. Kobayashi</name>
</author>
<author>
<name sortKey="Sugita, Y" uniqKey="Sugita Y">Y. Sugita</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hidaka, S" uniqKey="Hidaka S">S. Hidaka</name>
</author>
<author>
<name sortKey="Teramoto, W" uniqKey="Teramoto W">W. Teramoto</name>
</author>
<author>
<name sortKey="Sugita, Y" uniqKey="Sugita Y">Y. Sugita</name>
</author>
<author>
<name sortKey="Manaka, Y" uniqKey="Manaka Y">Y. Manaka</name>
</author>
<author>
<name sortKey="Sakamoto, S" uniqKey="Sakamoto S">S. Sakamoto</name>
</author>
<author>
<name sortKey="Suzuki, Y" uniqKey="Suzuki Y">Y. Suzuki</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Howard, I P" uniqKey="Howard I">I. P. Howard</name>
</author>
<author>
<name sortKey="Templeton, W B" uniqKey="Templeton W">W. B. Templeton</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kafaligonul, H" uniqKey="Kafaligonul H">H. Kafaligonul</name>
</author>
<author>
<name sortKey="Oluk, C" uniqKey="Oluk C">C. Oluk</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kawabe, T" uniqKey="Kawabe T">T. Kawabe</name>
</author>
<author>
<name sortKey="Miura, K" uniqKey="Miura K">K. Miura</name>
</author>
<author>
<name sortKey="Yamada, Y" uniqKey="Yamada Y">Y. Yamada</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Keetels, M" uniqKey="Keetels M">M. Keetels</name>
</author>
<author>
<name sortKey="Stekelenburg, J J" uniqKey="Stekelenburg J">J. J. Stekelenburg</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kim, R" uniqKey="Kim R">R. Kim</name>
</author>
<author>
<name sortKey="Peters, M A" uniqKey="Peters M">M. A. Peters</name>
</author>
<author>
<name sortKey="Shams, L" uniqKey="Shams L">L. Shams</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kim, R S" uniqKey="Kim R">R. S. Kim</name>
</author>
<author>
<name sortKey="Seitz, A R" uniqKey="Seitz A">A. R. Seitz</name>
</author>
<author>
<name sortKey="Shams, L" uniqKey="Shams L">L. Shams</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kitagawa, N" uniqKey="Kitagawa N">N. Kitagawa</name>
</author>
<author>
<name sortKey="Ichihara, S" uniqKey="Ichihara S">S. Ichihara</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kobayashi, M" uniqKey="Kobayashi M">M. Kobayashi</name>
</author>
<author>
<name sortKey="Teramoto, W" uniqKey="Teramoto W">W. Teramoto</name>
</author>
<author>
<name sortKey="Hidaka, S" uniqKey="Hidaka S">S. Hidaka</name>
</author>
<author>
<name sortKey="Sugita, Y" uniqKey="Sugita Y">Y. Sugita</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kobayashi, M" uniqKey="Kobayashi M">M. Kobayashi</name>
</author>
<author>
<name sortKey="Teramoto, W" uniqKey="Teramoto W">W. Teramoto</name>
</author>
<author>
<name sortKey="Hidaka, S" uniqKey="Hidaka S">S. Hidaka</name>
</author>
<author>
<name sortKey="Sugita, Y" uniqKey="Sugita Y">Y. Sugita</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Konkle, T" uniqKey="Konkle T">T. Konkle</name>
</author>
<author>
<name sortKey="Wang, Q" uniqKey="Wang Q">Q. Wang</name>
</author>
<author>
<name sortKey="Hayward, V" uniqKey="Hayward V">V. Hayward</name>
</author>
<author>
<name sortKey="Moore, C I" uniqKey="Moore C">C. I. Moore</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kording, K P" uniqKey="Kording K">K. P. Körding</name>
</author>
<author>
<name sortKey="Beierholm, U" uniqKey="Beierholm U">U. Beierholm</name>
</author>
<author>
<name sortKey="Ma, W J" uniqKey="Ma W">W. J. Ma</name>
</author>
<author>
<name sortKey="Quartz, S" uniqKey="Quartz S">S. Quartz</name>
</author>
<author>
<name sortKey="Tenenbaum, J B" uniqKey="Tenenbaum J">J. B. Tenenbaum</name>
</author>
<author>
<name sortKey="Shams, L" uniqKey="Shams L">L. Shams</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Krebber, M" uniqKey="Krebber M">M. Krebber</name>
</author>
<author>
<name sortKey="Harwood, J" uniqKey="Harwood J">J. Harwood</name>
</author>
<author>
<name sortKey="Spitzer, B" uniqKey="Spitzer B">B. Spitzer</name>
</author>
<author>
<name sortKey="Keil, J" uniqKey="Keil J">J. Keil</name>
</author>
<author>
<name sortKey="Senkowski, D" uniqKey="Senkowski D">D. Senkowski</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kuang, S" uniqKey="Kuang S">S. Kuang</name>
</author>
<author>
<name sortKey="Zhang, T" uniqKey="Zhang T">T. Zhang</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Leung, J" uniqKey="Leung J">J. Leung</name>
</author>
<author>
<name sortKey="Alais, D" uniqKey="Alais D">D. Alais</name>
</author>
<author>
<name sortKey="Carlile, S" uniqKey="Carlile S">S. Carlile</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lewis, J W" uniqKey="Lewis J">J. W. Lewis</name>
</author>
<author>
<name sortKey="Beauchamp, M S" uniqKey="Beauchamp M">M. S. Beauchamp</name>
</author>
<author>
<name sortKey="Deyoe, E A" uniqKey="Deyoe E">E. A. DeYoe</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Maeda, F" uniqKey="Maeda F">F. Maeda</name>
</author>
<author>
<name sortKey="Kanai, R" uniqKey="Kanai R">R. Kanai</name>
</author>
<author>
<name sortKey="Shimojo, S" uniqKey="Shimojo S">S. Shimojo</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mateeff, S" uniqKey="Mateeff S">S. Mateeff</name>
</author>
<author>
<name sortKey="Hohnsbein, J" uniqKey="Hohnsbein J">J. Hohnsbein</name>
</author>
<author>
<name sortKey="Noack, T" uniqKey="Noack T">T. Noack</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Meyer, G F" uniqKey="Meyer G">G. F. Meyer</name>
</author>
<author>
<name sortKey="Wuerger, S M" uniqKey="Wuerger S">S. M. Wuerger</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Michel, M M" uniqKey="Michel M">M. M. Michel</name>
</author>
<author>
<name sortKey="Jacobs, R A" uniqKey="Jacobs R">R. A. Jacobs</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Morein Zamir, S" uniqKey="Morein Zamir S">S. Morein-Zamir</name>
</author>
<author>
<name sortKey="Soto Faraco, S" uniqKey="Soto Faraco S">S. Soto-Faraco</name>
</author>
<author>
<name sortKey="Kingstone, A" uniqKey="Kingstone A">A. Kingstone</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Murray, M M" uniqKey="Murray M">M. M. Murray</name>
</author>
<author>
<name sortKey="Wallace, M T" uniqKey="Wallace M">M. T. Wallace</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Murray, M" uniqKey="Murray M">M. Murray</name>
</author>
<author>
<name sortKey="Spence, C" uniqKey="Spence C">C. Spence</name>
</author>
<author>
<name sortKey="Harris, L" uniqKey="Harris L">L. Harris</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ogawa, A" uniqKey="Ogawa A">A. Ogawa</name>
</author>
<author>
<name sortKey="Amd Macaluso, E" uniqKey="Amd Macaluso E">E. amd Macaluso</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Parise, C V" uniqKey="Parise C">C. V. Parise</name>
</author>
<author>
<name sortKey="Harrar, V" uniqKey="Harrar V">V. Harrar</name>
</author>
<author>
<name sortKey="Ernst, M O" uniqKey="Ernst M">M. O. Ernst</name>
</author>
<author>
<name sortKey="Spence, C" uniqKey="Spence C">C. Spence</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Parise, C V" uniqKey="Parise C">C. V. Parise</name>
</author>
<author>
<name sortKey="Spence, C" uniqKey="Spence C">C. Spence</name>
</author>
<author>
<name sortKey="Ernst, M O" uniqKey="Ernst M">M. O. Ernst</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Radeau, M" uniqKey="Radeau M">M. Radeau</name>
</author>
<author>
<name sortKey="Bertelson, P" uniqKey="Bertelson P">P. Bertelson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rizzo, M" uniqKey="Rizzo M">M. Rizzo</name>
</author>
<author>
<name sortKey="Nawrot, M" uniqKey="Nawrot M">M. Nawrot</name>
</author>
<author>
<name sortKey="Zihl, J" uniqKey="Zihl J">J. Zihl</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Romei, V" uniqKey="Romei V">V. Romei</name>
</author>
<author>
<name sortKey="Murray, M M" uniqKey="Murray M">M. M. Murray</name>
</author>
<author>
<name sortKey="Cappe, C" uniqKey="Cappe C">C. Cappe</name>
</author>
<author>
<name sortKey="Thut, G" uniqKey="Thut G">G. Thut</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Roseboom, W" uniqKey="Roseboom W">W. Roseboom</name>
</author>
<author>
<name sortKey="Kawabe, T" uniqKey="Kawabe T">T. Kawabe</name>
</author>
<author>
<name sortKey="Nishida, S" uniqKey="Nishida S">S. Nishida</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sanabria, D" uniqKey="Sanabria D">D. Sanabria</name>
</author>
<author>
<name sortKey="Soto Faraco, S" uniqKey="Soto Faraco S">S. Soto-Faraco</name>
</author>
<author>
<name sortKey="Spence, C" uniqKey="Spence C">C. Spence</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Scheef, L" uniqKey="Scheef L">L. Scheef</name>
</author>
<author>
<name sortKey="Boecker, H" uniqKey="Boecker H">H. Boecker</name>
</author>
<author>
<name sortKey="Daamen, M" uniqKey="Daamen M">M. Daamen</name>
</author>
<author>
<name sortKey="Fehse, U" uniqKey="Fehse U">U. Fehse</name>
</author>
<author>
<name sortKey="Landsberg, M W" uniqKey="Landsberg M">M. W. Landsberg</name>
</author>
<author>
<name sortKey="Granath, D O" uniqKey="Granath D">D. O. Granath</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schlack, A" uniqKey="Schlack A">A. Schlack</name>
</author>
<author>
<name sortKey="Albright, T D" uniqKey="Albright T">T. D. Albright</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Seitz, A R" uniqKey="Seitz A">A. R. Seitz</name>
</author>
<author>
<name sortKey="Kim, R" uniqKey="Kim R">R. Kim</name>
</author>
<author>
<name sortKey="Shams, L" uniqKey="Shams L">L. Shams</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Seitz, A R" uniqKey="Seitz A">A. R. Seitz</name>
</author>
<author>
<name sortKey="Kim, R" uniqKey="Kim R">R. Kim</name>
</author>
<author>
<name sortKey="Van Wassenhove, V" uniqKey="Van Wassenhove V">V. van Wassenhove</name>
</author>
<author>
<name sortKey="Shams, L" uniqKey="Shams L">L. Shams</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sekuler, R" uniqKey="Sekuler R">R. Sekuler</name>
</author>
<author>
<name sortKey="Sekuler, A B" uniqKey="Sekuler A">A. B. Sekuler</name>
</author>
<author>
<name sortKey="Lau, R" uniqKey="Lau R">R. Lau</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Shams, L" uniqKey="Shams L">L. Shams</name>
</author>
<author>
<name sortKey="Beierholm, U R" uniqKey="Beierholm U">U. R. Beierholm</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Shams, L" uniqKey="Shams L">L. Shams</name>
</author>
<author>
<name sortKey="Kamitani, Y" uniqKey="Kamitani Y">Y. Kamitani</name>
</author>
<author>
<name sortKey="Shimojo, S" uniqKey="Shimojo S">S. Shimojo</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Soto Faraco, S" uniqKey="Soto Faraco S">S. Soto-Faraco</name>
</author>
<author>
<name sortKey="Kingstone, A" uniqKey="Kingstone A">A. Kingstone</name>
</author>
<author>
<name sortKey="Spence, C" uniqKey="Spence C">C. Spence</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Soto Faraco, S" uniqKey="Soto Faraco S">S. Soto-Faraco</name>
</author>
<author>
<name sortKey="Lyons, J" uniqKey="Lyons J">J. Lyons</name>
</author>
<author>
<name sortKey="Gazzaniga, M" uniqKey="Gazzaniga M">M. Gazzaniga</name>
</author>
<author>
<name sortKey="Spence, C" uniqKey="Spence C">C. Spence</name>
</author>
<author>
<name sortKey="Kingstone, A" uniqKey="Kingstone A">A. Kingstone</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Soto Faraco, S" uniqKey="Soto Faraco S">S. Soto-Faraco</name>
</author>
<author>
<name sortKey="Spence, C" uniqKey="Spence C">C. Spence</name>
</author>
<author>
<name sortKey="Kingstone, A" uniqKey="Kingstone A">A. Kingstone</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Soto Faraco, S" uniqKey="Soto Faraco S">S. Soto-Faraco</name>
</author>
<author>
<name sortKey="Spence, C" uniqKey="Spence C">C. Spence</name>
</author>
<author>
<name sortKey="Kingstone, A" uniqKey="Kingstone A">A. Kingstone</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Soto Faraco, S" uniqKey="Soto Faraco S">S. Soto-Faraco</name>
</author>
<author>
<name sortKey="Spence, C" uniqKey="Spence C">C. Spence</name>
</author>
<author>
<name sortKey="Lloyd, D" uniqKey="Lloyd D">D. Lloyd</name>
</author>
<author>
<name sortKey="Kingstone, A" uniqKey="Kingstone A">A. Kingstone</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Spence, C" uniqKey="Spence C">C. Spence</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Spence, C" uniqKey="Spence C">C. Spence</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Spence, C" uniqKey="Spence C">C. Spence</name>
</author>
<author>
<name sortKey="Driver, J" uniqKey="Driver J">J. Driver</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stein, B E" uniqKey="Stein B">B. E. Stein</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stein, B E" uniqKey="Stein B">B. E. Stein</name>
</author>
<author>
<name sortKey="Meredith, M A" uniqKey="Meredith M">M. A. Meredith</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stein, B E" uniqKey="Stein B">B. E. Stein</name>
</author>
<author>
<name sortKey="Stanford, T R" uniqKey="Stanford T">T. R. Stanford</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stekelenburg, J J" uniqKey="Stekelenburg J">J. J. Stekelenburg</name>
</author>
<author>
<name sortKey="Vroomen, J" uniqKey="Vroomen J">J. Vroomen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sugita, Y" uniqKey="Sugita Y">Y. Sugita</name>
</author>
<author>
<name sortKey="Suzuki, Y" uniqKey="Suzuki Y">Y. Suzuki</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tajadura Jimenez, A" uniqKey="Tajadura Jimenez A">A. Tajadura-Jiménez</name>
</author>
<author>
<name sortKey="V Ljam E, A" uniqKey="V Ljam E A">A. Väljamäe</name>
</author>
<author>
<name sortKey="Toshima, I" uniqKey="Toshima I">I. Toshima</name>
</author>
<author>
<name sortKey="Kimura, T" uniqKey="Kimura T">T. Kimura</name>
</author>
<author>
<name sortKey="Tsakiris, M" uniqKey="Tsakiris M">M. Tsakiris</name>
</author>
<author>
<name sortKey="Kitagawa, N" uniqKey="Kitagawa N">N. Kitagawa</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Teramoto, W" uniqKey="Teramoto W">W. Teramoto</name>
</author>
<author>
<name sortKey="Hidaka, S" uniqKey="Hidaka S">S. Hidaka</name>
</author>
<author>
<name sortKey="Sugita, Y" uniqKey="Sugita Y">Y. Sugita</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Teramoto, W" uniqKey="Teramoto W">W. Teramoto</name>
</author>
<author>
<name sortKey="Kobayashi, M" uniqKey="Kobayashi M">M. Kobayashi</name>
</author>
<author>
<name sortKey="Hidaka, S" uniqKey="Hidaka S">S. Hidaka</name>
</author>
<author>
<name sortKey="Sugita, Y" uniqKey="Sugita Y">Y. Sugita</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Teramoto, W" uniqKey="Teramoto W">W. Teramoto</name>
</author>
<author>
<name sortKey="Manaka, Y" uniqKey="Manaka Y">Y. Manaka</name>
</author>
<author>
<name sortKey="Hidaka, S" uniqKey="Hidaka S">S. Hidaka</name>
</author>
<author>
<name sortKey="Sugita, Y" uniqKey="Sugita Y">Y. Sugita</name>
</author>
<author>
<name sortKey="Miyauchi, R" uniqKey="Miyauchi R">R. Miyauchi</name>
</author>
<author>
<name sortKey="Sakamoto, S" uniqKey="Sakamoto S">S. Sakamoto</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Teramoto, W" uniqKey="Teramoto W">W. Teramoto</name>
</author>
<author>
<name sortKey="Sakamoto, S" uniqKey="Sakamoto S">S. Sakamoto</name>
</author>
<author>
<name sortKey="Furune, F" uniqKey="Furune F">F. Furune</name>
</author>
<author>
<name sortKey="Gyoba, J" uniqKey="Gyoba J">J. Gyoba</name>
</author>
<author>
<name sortKey="Suzuki, Y" uniqKey="Suzuki Y">Y. Suzuki</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Van Dam, L C" uniqKey="Van Dam L">L. C. van Dam</name>
</author>
<author>
<name sortKey="Parise, C V" uniqKey="Parise C">C. V. Parise</name>
</author>
<author>
<name sortKey="Ernst, M O" uniqKey="Ernst M">M. O. Ernst</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Van Der Stoep, N" uniqKey="Van Der Stoep N">N. Van der Stoep</name>
</author>
<author>
<name sortKey="Nijboer, T C W" uniqKey="Nijboer T">T. C. W. Nijboer</name>
</author>
<author>
<name sortKey="Van Der Stigchel, S" uniqKey="Van Der Stigchel S">S. van der Stigchel</name>
</author>
<author>
<name sortKey="Spence, C" uniqKey="Spence C">C. Spence</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Van Kemenade, B M" uniqKey="Van Kemenade B">B. M. van Kemenade</name>
</author>
<author>
<name sortKey="Seymour, K" uniqKey="Seymour K">K. Seymour</name>
</author>
<author>
<name sortKey="Wacker, E" uniqKey="Wacker E">E. Wacker</name>
</author>
<author>
<name sortKey="Spitzer, B" uniqKey="Spitzer B">B. Spitzer</name>
</author>
<author>
<name sortKey="Blankenburg, F" uniqKey="Blankenburg F">F. Blankenburg</name>
</author>
<author>
<name sortKey="Sterzer, P" uniqKey="Sterzer P">P. Sterzer</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Watanabe, K" uniqKey="Watanabe K">K. Watanabe</name>
</author>
<author>
<name sortKey="Shimojo, S" uniqKey="Shimojo S">S. Shimojo</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Welch, R B" uniqKey="Welch R">R. B. Welch</name>
</author>
<author>
<name sortKey="Warren, D H" uniqKey="Warren D">D. H. Warren</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Welch, R B" uniqKey="Welch R">R. B. Welch</name>
</author>
<author>
<name sortKey="Warren, D H" uniqKey="Warren D">D. H. Warren</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Whitney, D" uniqKey="Whitney D">D. Whitney</name>
</author>
<author>
<name sortKey="Cavanagh, P" uniqKey="Cavanagh P">P. Cavanagh</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Williams, D W" uniqKey="Williams D">D. W. Williams</name>
</author>
<author>
<name sortKey="Sekuler, R" uniqKey="Sekuler R">R. Sekuler</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wuerger, S M" uniqKey="Wuerger S">S. M. Wuerger</name>
</author>
<author>
<name sortKey="Hofbauer, M" uniqKey="Hofbauer M">M. Hofbauer</name>
</author>
<author>
<name sortKey="Meyer, G F" uniqKey="Meyer G">G. F. Meyer</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Yamamoto, S" uniqKey="Yamamoto S">S. Yamamoto</name>
</author>
<author>
<name sortKey="Kitazawa, S" uniqKey="Kitazawa S">S. Kitazawa</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zihl, J" uniqKey="Zihl J">J. Zihl</name>
</author>
<author>
<name sortKey="Von Cramon, D" uniqKey="Von Cramon D">D. von Cramon</name>
</author>
<author>
<name sortKey="Mai, N" uniqKey="Mai N">N. Mai</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zihl, J" uniqKey="Zihl J">J. Zihl</name>
</author>
<author>
<name sortKey="Von Cramon, D" uniqKey="Von Cramon D">D. von Cramon</name>
</author>
<author>
<name sortKey="Mai, N" uniqKey="Mai N">N. Mai</name>
</author>
<author>
<name sortKey="Schmid, C H" uniqKey="Schmid C">C. H. Schmid</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zvyagintsev, M" uniqKey="Zvyagintsev M">M. Zvyagintsev</name>
</author>
<author>
<name sortKey="Nikolaev, A R" uniqKey="Nikolaev A">A. R. Nikolaev</name>
</author>
<author>
<name sortKey="Thonnessen, H" uniqKey="Thonnessen H">H. Thönnessen</name>
</author>
<author>
<name sortKey="Sachs, O" uniqKey="Sachs O">O. Sachs</name>
</author>
<author>
<name sortKey="Dammers, J" uniqKey="Dammers J">J. Dammers</name>
</author>
<author>
<name sortKey="Mathiak, K" uniqKey="Mathiak K">K. Mathiak</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="review-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">Front Integr Neurosci</journal-id>
<journal-id journal-id-type="iso-abbrev">Front Integr Neurosci</journal-id>
<journal-id journal-id-type="publisher-id">Front. Integr. Neurosci.</journal-id>
<journal-title-group>
<journal-title>Frontiers in Integrative Neuroscience</journal-title>
</journal-title-group>
<issn pub-type="epub">1662-5145</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">26733827</article-id>
<article-id pub-id-type="pmc">4686600</article-id>
<article-id pub-id-type="doi">10.3389/fnint.2015.00062</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Neuroscience</subject>
<subj-group>
<subject>Review</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Spatiotemporal Processing in Crossmodal Interactions for Perception of the External World: A Review</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Hidaka</surname>
<given-names>Souta</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="author-notes" rid="fn001">
<sup>*</sup>
</xref>
<xref ref-type="author-notes" rid="fn003">
<sup>†</sup>
</xref>
<uri xlink:type="simple" xlink:href="http://loop.frontiersin.org/people/67635/overview"></uri>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Teramoto</surname>
<given-names>Wataru</given-names>
</name>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
<xref ref-type="author-notes" rid="fn002">
<sup>*</sup>
</xref>
<xref ref-type="author-notes" rid="fn003">
<sup>†</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Sugita</surname>
<given-names>Yoichi</given-names>
</name>
<xref ref-type="aff" rid="aff3">
<sup>3</sup>
</xref>
</contrib>
</contrib-group>
<aff id="aff1">
<sup>1</sup>
<institution>Department of Psychology, Rikkyo University</institution>
<country>Saitama, Japan</country>
</aff>
<aff id="aff2">
<sup>2</sup>
<institution>Department of Psychology, Kumamoto University</institution>
<country>Kumamoto, Japan</country>
</aff>
<aff id="aff3">
<sup>3</sup>
<institution>Department of Psychology, Waseda University</institution>
<country>Tokyo, Japan</country>
</aff>
<author-notes>
<fn fn-type="edited-by">
<p>Edited by: Shinya Yamamoto, National Institute of Advanced Industrial Science and Technology, Japan</p>
</fn>
<fn fn-type="edited-by">
<p>Reviewed by: Georg Meyer, University of Liverpool, UK; Megan Peters, University of California, Los Angeles, USA</p>
</fn>
<corresp id="fn001">*Correspondence: Souta Hidaka
<email xlink:type="simple">hidaka@rikkyo.ac.jp</email>
;</corresp>
<corresp id="fn002">Wataru Teramoto
<email xlink:type="simple">teramoto@kumamoto-u.ac.jp</email>
</corresp>
<fn fn-type="other" id="fn003">
<p>†These authors have contributed equally to this work.</p>
</fn>
</author-notes>
<pub-date pub-type="epub">
<day>22</day>
<month>12</month>
<year>2015</year>
</pub-date>
<pub-date pub-type="collection">
<year>2015</year>
</pub-date>
<volume>9</volume>
<elocation-id>62</elocation-id>
<history>
<date date-type="received">
<day>30</day>
<month>9</month>
<year>2015</year>
</date>
<date date-type="accepted">
<day>03</day>
<month>12</month>
<year>2015</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright © 2015 Hidaka, Teramoto and Sugita.</copyright-statement>
<copyright-year>2015</copyright-year>
<copyright-holder>Hidaka, Teramoto and Sugita</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/">
<license-p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</license-p>
</license>
</permissions>
<abstract>
<p>Research on crossmodal interactions has garnered much interest in the last few decades. A variety of studies have demonstrated that inputs to multiple senses (vision, audition, tactile sensation, and so on) can interact perceptually in the spatial and temporal domains. Findings regarding crossmodal interactions in the spatiotemporal domain (i.e., motion processing) have also been reported, with notable updates in the last few years. In this review, we summarize past and recent findings on spatiotemporal processing in crossmodal interactions regarding perception of the external world. A traditional view of crossmodal interactions holds that vision is superior to audition in spatial processing, whereas audition is dominant over vision in temporal processing. Similarly, vision is considered to dominate the other sensory modalities (i.e., visual capture) in spatiotemporal processing. However, recent findings demonstrate that sound can have a driving effect on visual motion perception. Moreover, studies of perceptual associative learning have reported that, once an association is established between a sound sequence carrying no spatial information and visual motion, the sound sequence alone can trigger visual motion perception. Other kinds of information, such as motor action or smell, have also exhibited similar driving effects on visual motion perception. Additionally, recent brain imaging studies demonstrate that spatiotemporal information from different sensory modalities evokes similar activation patterns in several brain areas, including motion-processing areas. Based on these findings, we suggest that multimodal information interacts mutually in spatiotemporal processing for perception of the external world, and that common underlying perceptual and neural mechanisms may exist for spatiotemporal processing.</p>
</abstract>
<kwd-group>
<kwd>crossmodal interaction</kwd>
<kwd>spatial processing</kwd>
<kwd>temporal processing</kwd>
<kwd>spatiotemporal processing</kwd>
<kwd>motion processing</kwd>
<kwd>neural representations</kwd>
</kwd-group>
<funding-group>
<award-group>
<funding-source id="cn001">Japan Society for the Promotion of Science
<named-content content-type="fundref-id">10.13039/501100001691</named-content>
</funding-source>
<award-id rid="cn001">26285160</award-id>
</award-group>
</funding-group>
<counts>
<fig-count count="5"></fig-count>
<table-count count="1"></table-count>
<equation-count count="0"></equation-count>
<ref-count count="108"></ref-count>
<page-count count="13"></page-count>
<word-count count="9389"></word-count>
</counts>
</article-meta>
</front>
<body>
<sec sec-type="intro" id="s1">
<title>Introduction</title>
<p>In daily life, we receive dynamic inputs to multiple modalities from, for example, moving cars or the face of a friend with whom we are conversing. Although a large number of inputs change continuously and independently in each sensory modality, we perceive them as integrated, coherent objects or scenes. Our perceptual systems appropriately and flexibly associate and integrate these inputs (Ernst and Bülthoff,
<xref rid="B22" ref-type="bibr">2004</xref>
), thus enabling us to establish coherent and robust percepts of the external world in our brains.</p>
<p>Research regarding crossmodal perception/interactions and their underlying mechanisms has garnered much interest in the last few decades. The number of studies related to these issues has risen dramatically (Murray et al.,
<xref rid="B62" ref-type="bibr">2013</xref>
; Van der Stoep et al.,
<xref rid="B97" ref-type="bibr">2015</xref>
). Many researchers have investigated how multiple sensory inputs are integrated/associated in our perceptual systems. They have focused on spatial and temporal integration/association rules (Calvert et al.,
<xref rid="B15" ref-type="bibr">2004</xref>
; Stein,
<xref rid="B86" ref-type="bibr">2012</xref>
), as well as attentional (see Driver and Spence,
<xref rid="B18" ref-type="bibr">1998</xref>
for a review) and neural mechanisms (see Stein and Meredith,
<xref rid="B87" ref-type="bibr">1993</xref>
; Driver and Noesselt,
<xref rid="B17" ref-type="bibr">2008</xref>
; Stein and Stanford,
<xref rid="B88" ref-type="bibr">2008</xref>
for review). In addition to these studies, crossmodal interactions in the
<italic>spatiotemporal</italic>
domain (i.e., motion processing) have also been investigated (see Soto-Faraco et al.,
<xref rid="B78" ref-type="bibr">2003</xref>
,
<xref rid="B82" ref-type="bibr">2004a</xref>
, for review). A traditional view holds that vision is superior to audition in spatial processing, while audition is dominant over vision in temporal processing (Welch and Warren,
<xref rid="B101" ref-type="bibr">1986</xref>
). Similarly, in spatiotemporal processing, visual information is considered predominant over information from other sensory modalities (Soto-Faraco et al.,
<xref rid="B82" ref-type="bibr">2004a</xref>
). However, recent studies have demonstrated that sound can have a driving effect on visual motion perception (e.g., Hidaka et al.,
<xref rid="B37" ref-type="bibr">2009</xref>
). Moreover, studies regarding audio-visual perceptual associative learning have reported that, after an association is established between sounds and visual motion, sounds without spatial information can trigger visual motion perception (e.g., Teramoto et al.,
<xref rid="B92" ref-type="bibr">2010a</xref>
). Signals from other modalities, such as motor action or smell, have also exhibited similar driving effects on visual motion perception (e.g., Keetels and Stekelenburg,
<xref rid="B44" ref-type="bibr">2014</xref>
).</p>
<p>Thus, findings regarding spatiotemporal processing in crossmodal interactions have been substantially updated in recent years. Here, we summarize past and recent findings on spatiotemporal processing in crossmodal interactions. First, we briefly review key findings on spatial and temporal processing in crossmodal interactions. Then, we turn to the literature on spatiotemporal processing in crossmodal interactions, covering both psychophysical and brain imaging findings.</p>
</sec>
<sec id="s2">
<title>Crossmodal interactions in spatial domain</title>
<p>One famous phenomenon in crossmodal interactions is the “spatial ventriloquism” effect. Typically, a visual event is presented in front of observers and a sound source related to the event is placed in a spatially discrepant position. In this situation, the sound is perceived as occurring at the position of the visual event (Howard and Templeton,
<xref rid="B41" ref-type="bibr">1966</xref>
; Figure
<xref ref-type="fig" rid="F1">1A</xref>
). As such, the visual modality is known to be dominant over the other sensory modalities in spatial processing. The reason could simply be that vision inherently has the highest spatial resolution among the sensory modalities (modality appropriateness hypothesis: Welch and Warren,
<xref rid="B100" ref-type="bibr">1980</xref>
,
<xref rid="B101" ref-type="bibr">1986</xref>
). However, the classical modality appropriateness hypothesis has been refined by more recent evidence. Alais and Burr (
<xref rid="B2" ref-type="bibr">2004b</xref>
) demonstrated that sounds could exert a dominant spatial ventriloquism effect when the visual stimuli were spatially ambiguous (see also Radeau and Bertelson,
<xref rid="B66" ref-type="bibr">1987</xref>
). This suggests that crossmodal interactions in the spatial domain do not depend solely on the fixed properties of each sensory modality. Rather, they also depend on the relative certainty/reliability of the inputs (Ernst and Banks,
<xref rid="B20" ref-type="bibr">2002</xref>
; Ernst and Bülthoff,
<xref rid="B22" ref-type="bibr">2004</xref>
).</p>
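The reliability-based account can be illustrated with a maximum-likelihood integration scheme, in which each modality's location estimate is weighted by its inverse variance. The following minimal Python sketch uses purely hypothetical variance values (they are not taken from the studies cited above) to show how the dominant modality flips as visual reliability degrades:

# Reliability-weighted (maximum-likelihood) cue integration sketch.
# All numbers are illustrative, not values from the cited studies.

def integrate(x_vis, var_vis, x_aud, var_aud):
    """Combine visual and auditory location estimates.

    Each estimate is weighted by its inverse variance, so the more
    reliable cue dominates; the combined estimate is always at least
    as reliable as the better single cue.
    """
    w_vis = (1.0 / var_vis) / (1.0 / var_vis + 1.0 / var_aud)
    x_hat = w_vis * x_vis + (1.0 - w_vis) * x_aud
    var_hat = 1.0 / (1.0 / var_vis + 1.0 / var_aud)
    return x_hat, var_hat

# Sharp visual stimulus: the percept is captured by vision
# (spatial ventriloquism).
print(integrate(x_vis=0.0, var_vis=1.0, x_aud=10.0, var_aud=25.0))

# Blurred, spatially ambiguous visual stimulus: audition dominates,
# consistent with Alais and Burr (2004b).
print(integrate(x_vis=0.0, var_vis=25.0, x_aud=10.0, var_aud=1.0))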
<fig id="F1" position="float">
<label>Figure 1</label>
<caption>
<p>
<bold>Schematic illustrations of spatial and temporal ventriloquism effects</bold>
.
<bold>(A)</bold>
In a typical spatial ventriloquism effect, the perceived position of the sound source shifts to that of the visual source.
<bold>(B)</bold>
In contrast, in a typical temporal ventriloquism effect, the perceived temporal position of the visual stimulus shifts to that of the sound.</p>
</caption>
<graphic xlink:href="fnint-09-00062-g0001"></graphic>
</fig>
<p>Crossmodal spatial interactions that do not involve vision have also been reported. For example, sound can affect tactile distance perception (Tajadura-Jiménez et al.,
<xref rid="B91" ref-type="bibr">2012</xref>
). Observers were exposed to a situation in which sounds were always presented from locations somewhat farther away than the tapped position on their arm. After this exposure, the perceived locations of the tactile sensations on the arm were shifted by the sounds beyond the actually stimulated positions.</p>
<p>Crossmodal interactions have also been investigated not only in two-dimensional but also in three-dimensional space (see Van der Stoep et al.,
<xref rid="B97" ref-type="bibr">2015</xref>
for a review). For instance, Sugita and Suzuki (
<xref rid="B90" ref-type="bibr">2003</xref>
) reported that the timing of perceived co-occurrence between a light and a sound changed as a function of the distance between the observer and the light. This suggests that the brain compensates for the temporal lag using viewing distance information, as if it “knows” the physical rule that light travels faster than sound.</p>
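The size of the lag being compensated is easy to estimate: sound travels at roughly 343 m/s in air (a standard textbook approximation, not a value from Sugita and Suzuki, 2003), while light arrives effectively instantaneously at such distances, so the audio lag grows by about 2.9 ms per meter of viewing distance. A minimal Python sketch of the arithmetic:

# Arrival lag of sound relative to light as a function of distance.
# 343 m/s is the approximate speed of sound in air at 20 degrees C;
# light's travel time is negligible at these scales.
SPEED_OF_SOUND = 343.0  # m/s

def audio_lag_ms(distance_m: float) -> float:
    """Milliseconds by which sound arrives after light."""
    return distance_m / SPEED_OF_SOUND * 1000.0

for d in (1.0, 5.0, 10.0, 20.0):
    print(f"{d:4.1f} m -> sound lags by {audio_lag_ms(d):5.1f} ms")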
<p>In traditional crossmodal interaction studies, it has been reported that interactions among crossmodal stimuli are most frequently observed when these stimuli are spatially congruent (spatial co-localization rule: Calvert et al.,
<xref rid="B15" ref-type="bibr">2004</xref>
; Stein,
<xref rid="B86" ref-type="bibr">2012</xref>
). However, recent findings suggest that the “spatial co-localization rule” does not hold for all crossmodal interactions; its applicability depends on the phenomenon and task at hand (see Spence,
<xref rid="B84" ref-type="bibr">2013</xref>
for a review). As these examples show, the understanding of crossmodal interactions in the spatial domain has been updated in recent years.</p>
</sec>
<sec id="s3">
<title>Crossmodal interactions in temporal domain</title>
<p>In contrast to vision, audition and tactile sensation are known to be dominant in temporal processing. For example, an auditory driving effect has been reported: the perceived rate of visual flicker is modulated by the rate of concurrently presented sounds (Gebhard and Mowbray,
<xref rid="B31" ref-type="bibr">1959</xref>
). Shams et al. (
<xref rid="B77" ref-type="bibr">2000</xref>
) reported that a single visual flash is perceived as two flashes when two sounds are presented concurrently. A similar temporal modulatory effect of auditory stimuli on tactile sensation has also been reported (Bresciani et al.,
<xref rid="B12" ref-type="bibr">2005</xref>
). Furthermore, in a well-known “temporal ventriloquism” effect, judgments of the presentation order of two visual flashes were improved when two sounds were presented before and after the flashes. In contrast, judgments degraded when two sounds were interspersed between the flashes. These findings indicate that the perceived temporal position of the visual stimuli is captured by the sounds (Fendrich and Corballis,
<xref rid="B24" ref-type="bibr">2001</xref>
; Morein-Zamir et al.,
<xref rid="B60" ref-type="bibr">2003</xref>
; Figure
<xref ref-type="fig" rid="F1">1B</xref>
). Regarding tactile sensation, researchers have reported that judgments of the presentation order of two tactile stimuli presented to observers' hands became worse when the hands were crossed. This indicates that tactile temporal perception could interact with proprioceptive information (Yamamoto and Kitazawa,
<xref rid="B105" ref-type="bibr">2001</xref>
).</p>
<p>How is temporal information integrated across sensory modalities (vision, audition, and tactile sensation), and what rules govern crossmodal temporal binding? Fujisaki and Nishida (
<xref rid="B28" ref-type="bibr">2009</xref>
) found that the upper temporal limit of integration is higher for the audio-tactile combination than for other combinations. This seems to indicate that the unique temporal characteristics of each modality determine the upper temporal integration limit of each sensory pair. However, Fujisaki and Nishida (
<xref rid="B29" ref-type="bibr">2010</xref>
) also reported that similar differences in upper temporal integration limits could be observed for different feature combinations within a single modality (i.e., color, luminance, and orientation in vision). According to a recent viewpoint, differences in temporal processing among sensory modalities depend on which processes are involved (e.g., bottom-up or top-down/attentional processes, and “what,” “when,” or “where” processing; see Fujisaki et al.,
<xref rid="B27" ref-type="bibr">2012</xref>
for a review).</p>
</sec>
<sec id="s4">
<title>Crossmodal interactions in spatiotemporal domain</title>
<p>Thus far, we have briefly reviewed crossmodal interactions in the spatial and temporal domains. The findings generally suggest that vision is superior to audition in spatial processing, whereas audition and tactile sensation are dominant over vision in temporal processing. These dominance relations can nevertheless change depending on the reliability of the stimuli and/or the processes involved. It should be noted, however, that the inputs in our surrounding environments are dynamic, so that spatial and temporal information are inseparable. Accordingly, studies on crossmodal interactions have also focused on the spatiotemporal processing (namely, motion perception) of information from multiple senses. Next, we review past and recent findings regarding crossmodal interactions in motion perception (Table
<xref ref-type="table" rid="T1">1</xref>
).</p>
<table-wrap id="T1" position="float">
<label>Table 1</label>
<caption>
<p>
<bold>Summary of psychophysical evidence on crossmodal motion perception</bold>
.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th valign="top" align="left" rowspan="1" colspan="1">
<bold>Representative studies</bold>
</th>
<th valign="top" align="left" rowspan="1" colspan="1">
<bold>Affecting modality</bold>
</th>
<th valign="top" align="left" rowspan="1" colspan="1">
<bold>Motion information</bold>
</th>
<th valign="top" align="left" rowspan="1" colspan="1">
<bold>Affected modality</bold>
</th>
<th valign="top" align="left" rowspan="1" colspan="1">
<bold>Motion information</bold>
</th>
<th valign="top" align="left" rowspan="1" colspan="1">
<bold>Effect domain</bold>
</th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left" colspan="6" style="background-color:#bbbdc0" rowspan="1">
<bold>MODULATORY EFFECTS FROM VISION</bold>
</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">Soto-Faraco et al.,
<xref rid="B79" ref-type="bibr">2002</xref>
</td>
<td valign="top" align="left" rowspan="1" colspan="1">Vision</td>
<td valign="top" align="left" rowspan="1" colspan="1">Yes</td>
<td valign="top" align="left" rowspan="1" colspan="1">Audition</td>
<td valign="top" align="left" rowspan="1" colspan="1">Yes</td>
<td valign="top" align="left" rowspan="1" colspan="1">Motion</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">Konkle et al.,
<xref rid="B50" ref-type="bibr">2009</xref>
</td>
<td valign="top" align="left" rowspan="1" colspan="1">Vision</td>
<td valign="top" align="left" rowspan="1" colspan="1">Yes</td>
<td valign="top" align="left" rowspan="1" colspan="1">Touch</td>
<td valign="top" align="left" rowspan="1" colspan="1">Yes</td>
<td valign="top" align="left" rowspan="1" colspan="1">Motion</td>
</tr>
<tr>
<td valign="top" align="left" colspan="6" style="background-color:#bbbdc0" rowspan="1">
<bold>MODULATORY EFFECTS FROM SENSORY MODALITIES OTHER THAN VISION</bold>
</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">Sekuler et al.,
<xref rid="B75" ref-type="bibr">1997</xref>
</td>
<td valign="top" align="left" rowspan="1" colspan="1">Audition</td>
<td valign="top" align="left" rowspan="1" colspan="1">No</td>
<td valign="top" align="left" rowspan="1" colspan="1">Vision</td>
<td valign="top" align="left" rowspan="1" colspan="1">Yes</td>
<td valign="top" align="left" rowspan="1" colspan="1">Event</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">Getzmann,
<xref rid="B32" ref-type="bibr">2007</xref>
</td>
<td valign="top" align="left" rowspan="1" colspan="1">Audition</td>
<td valign="top" align="left" rowspan="1" colspan="1">No</td>
<td valign="top" align="left" rowspan="1" colspan="1">Vision</td>
<td valign="top" align="left" rowspan="1" colspan="1">Yes</td>
<td valign="top" align="left" rowspan="1" colspan="1">Time</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">Kim et al.,
<xref rid="B45" ref-type="bibr">2012</xref>
</td>
<td valign="top" align="left" rowspan="1" colspan="1">Audition</td>
<td valign="top" align="left" rowspan="1" colspan="1">Yes</td>
<td valign="top" align="left" rowspan="1" colspan="1">Vision</td>
<td valign="top" align="left" rowspan="1" colspan="1">Yes</td>
<td valign="top" align="left" rowspan="1" colspan="1">Motion</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">Sanabria et al.,
<xref rid="B70" ref-type="bibr">2005</xref>
</td>
<td valign="top" align="left" rowspan="1" colspan="1">Audition</td>
<td valign="top" align="left" rowspan="1" colspan="1">Yes</td>
<td valign="top" align="left" rowspan="1" colspan="1">Touch</td>
<td valign="top" align="left" rowspan="1" colspan="1">Yes</td>
<td valign="top" align="left" rowspan="1" colspan="1">Motion</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">Sanabria et al.,
<xref rid="B70" ref-type="bibr">2005</xref>
</td>
<td valign="top" align="left" rowspan="1" colspan="1">Touch</td>
<td valign="top" align="left" rowspan="1" colspan="1">Yes</td>
<td valign="top" align="left" rowspan="1" colspan="1">Audition</td>
<td valign="top" align="left" rowspan="1" colspan="1">Yes</td>
<td valign="top" align="left" rowspan="1" colspan="1">Motion</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">Konkle et al.,
<xref rid="B50" ref-type="bibr">2009</xref>
</td>
<td valign="top" align="left" rowspan="1" colspan="1">Touch</td>
<td valign="top" align="left" rowspan="1" colspan="1">Yes</td>
<td valign="top" align="left" rowspan="1" colspan="1">Vision</td>
<td valign="top" align="left" rowspan="1" colspan="1">Yes</td>
<td valign="top" align="left" rowspan="1" colspan="1">Motion</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">Kuang and Zhang,
<xref rid="B53" ref-type="bibr">2014</xref>
<xref ref-type="table-fn" rid="TN1">
<sup>*</sup>
</xref>
</td>
<td valign="top" align="left" rowspan="1" colspan="1">Smell</td>
<td valign="top" align="left" rowspan="1" colspan="1">No</td>
<td valign="top" align="left" rowspan="1" colspan="1">Vision</td>
<td valign="top" align="left" rowspan="1" colspan="1">Yes</td>
<td valign="top" align="left" rowspan="1" colspan="1">Motion</td>
</tr>
<tr>
<td valign="top" align="left" colspan="6" style="background-color:#bbbdc0" rowspan="1">
<bold>DRIVING EFFECTS</bold>
</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">Kitagawa and Ichihara,
<xref rid="B47" ref-type="bibr">2002</xref>
</td>
<td valign="top" align="left" rowspan="1" colspan="1">Vision</td>
<td valign="top" align="left" rowspan="1" colspan="1">Yes</td>
<td valign="top" align="left" rowspan="1" colspan="1">Audition</td>
<td valign="top" align="left" rowspan="1" colspan="1">No</td>
<td valign="top" align="left" rowspan="1" colspan="1">Motion</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">Hidaka et al.,
<xref rid="B37" ref-type="bibr">2009</xref>
</td>
<td valign="top" align="left" rowspan="1" colspan="1">Audition</td>
<td valign="top" align="left" rowspan="1" colspan="1">Yes</td>
<td valign="top" align="left" rowspan="1" colspan="1">Vision</td>
<td valign="top" align="left" rowspan="1" colspan="1">No</td>
<td valign="top" align="left" rowspan="1" colspan="1">Motion</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">Teramoto et al.,
<xref rid="B92" ref-type="bibr">2010a</xref>
<xref ref-type="table-fn" rid="TN1">
<sup>*</sup>
</xref>
</td>
<td valign="top" align="left" rowspan="1" colspan="1">Audition</td>
<td valign="top" align="left" rowspan="1" colspan="1">No</td>
<td valign="top" align="left" rowspan="1" colspan="1">Vision</td>
<td valign="top" align="left" rowspan="1" colspan="1">Yes</td>
<td valign="top" align="left" rowspan="1" colspan="1">Motion</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">Keetels and Stekelenburg,
<xref rid="B44" ref-type="bibr">2014</xref>
</td>
<td valign="top" align="left" rowspan="1" colspan="1">Motor action</td>
<td valign="top" align="left" rowspan="1" colspan="1">Yes</td>
<td valign="top" align="left" rowspan="1" colspan="1">Vision</td>
<td valign="top" align="left" rowspan="1" colspan="1">No</td>
<td valign="top" align="left" rowspan="1" colspan="1">Motion</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<fn id="TN1">
<label>*</label>
<p>Effects appear after perceptual associative learning.</p>
</fn>
</table-wrap-foot>
</table-wrap>
<sec>
<title>Modulatory effects</title>
<p>Regarding audio-visual interactions in motion perception, Sekuler et al. (
<xref rid="B75" ref-type="bibr">1997</xref>
) reported a pioneering phenomenon called the “stream-bounce illusion.” In this illusion, two visual stimuli move horizontally from opposite sides of a display toward each other at the same vertical location, appear to overlap at the center, and then continue along their trajectories. When only the visual stimuli are presented, observers perceive them either as streaming through each other or as bouncing off each other (Figure
<xref ref-type="fig" rid="F2">2A</xref>
). However, when a transient sound is presented at the moment the visual stimuli overlap, the bouncing percept becomes dominant. This phenomenon is assumed to be purely based on audiovisual interactions (Watanabe and Shimojo,
<xref rid="B99" ref-type="bibr">2001</xref>
). Sound has also been reported to change the perceived onset timing of the inducers of visual apparent motion, consequently modulating the optimal presentation timing of visual apparent motion (Getzmann,
<xref rid="B32" ref-type="bibr">2007</xref>
), the distance between apparently moving stimuli (Kawabe et al.,
<xref rid="B43" ref-type="bibr">2008</xref>
), and the perceived direction of an ambiguous visual apparent motion display (Freeman and Driver,
<xref rid="B26" ref-type="bibr">2008</xref>
, but also see Roseboom et al.,
<xref rid="B69" ref-type="bibr">2013</xref>
). These findings indicate that sound can modulate the perception of visual motion through changes in the interpretation of a visual event or by temporal ventriloquism effects.</p>
<fig id="F2" position="float">
<label>Figure 2</label>
<caption>
<p>
<bold>Schematic illustrations of audiovisual illusions in motion perception</bold>
.
<bold>(A)</bold>
Stream-bounce illusion. Two visual stimuli moving toward each other at the same vertical location are perceived to overlap at the center and then to continue moving along their trajectories. Both streaming and bouncing percepts occur equally without auditory information. However, when a transient sound is presented at the timing of the visual coincidence, the bouncing perception becomes dominant. This is a typical auditory modulatory effect on visual motion perception.
<bold>(B)</bold>
Auditory aftereffects induced by visual adaptation. Adaptation to visual size changes (e.g., expansion) induces not only a size-change aftereffect for a constant-size visual test stimulus (apparent shrinking), but also a loudness-change aftereffect for a constant-loudness auditory stimulus (apparent decrease in loudness). Motion processing is assumed to be involved because both the auditory and visual stimuli can be interpreted as signaling motion in depth. The reverse effect, from audition to vision, has not been reported.</p>
</caption>
<graphic xlink:href="fnint-09-00062-g0002"></graphic>
</fig>
<p>With regard to the effect of visual information in audiovisual spatiotemporal interactions, some studies have reported that visual stimuli caused a static sound to be perceived as moving (e.g., Mateeff et al.,
<xref rid="B57" ref-type="bibr">1985</xref>
). Kitagawa and Ichihara (
<xref rid="B47" ref-type="bibr">2002</xref>
) demonstrated that the co-presentation of visual size changes (increasing or decreasing the size of visual images) with an auditory loudness change in a congruent direction could enhance the auditory loudness aftereffects (Figure
<xref ref-type="fig" rid="F2">2B</xref>
). Moreover, adaptation to visual size changes alone could induce a perceived loudness change of a steady sound in the direction opposite to the visual adapting stimuli. In this situation, both auditory and visual stimuli are assumed to signify motion in depth, and the aftereffect is considered to occur purely at the perceptual level. These findings thus suggest that visual motion information can directly affect or induce auditory motion perception. In contrast, in their study, auditory loudness changes had no modulatory or inducing effect on visual aftereffects.</p>
<p>While these audiovisual interactions in motion perception have been mainly demonstrated between moving and static/constant stimuli, Soto-Faraco and his colleagues have reported crossmodal interactions when auditory and visual stimuli were both dynamic (Figure
<xref ref-type="fig" rid="F3">3A</xref>
). Soto-Faraco et al. (
<xref rid="B79" ref-type="bibr">2002</xref>
) presented auditory stimuli through speakers set at horizontally separated positions so that auditory apparent motion was perceived. Visual stimuli were concurrently presented from LEDs attached to the speakers to induce visual apparent motion. Observers correctly reported the motion direction (left/right) of auditory apparent motion when its direction was consistent with that of visual apparent motion. However, when the motion directions of the auditory and visual stimuli were inconsistent, observers tended to misperceive the direction of auditory motion as consistent with the visual direction. Together with the findings of several control experiments manipulating the spatial and temporal relationships between auditory and visual stimuli, they concluded that this visual capture effect on auditory motion reflects direct crossmodal interactions in motion perception. Based on detailed investigations, including tests of sensory pairs other than audiovisual stimuli (Soto-Faraco et al.,
<xref rid="B78" ref-type="bibr">2003</xref>
,
<xref rid="B82" ref-type="bibr">2004a</xref>
,
<xref rid="B80" ref-type="bibr">b</xref>
,
<xref rid="B81" ref-type="bibr">2005</xref>
; Sanabria et al.,
<xref rid="B70" ref-type="bibr">2005</xref>
), Soto-Faraco et al. concluded that common perceptual mechanisms and shared neural substrates for different sensory information exist in motion perception. They also suggested that vision is superior to the other senses in crossmodal interactions in motion perception, because sounds did not exert comparable capturing effects on vision.</p>
<fig id="F3" position="float">
<label>Figure 3</label>
<caption>
<p>
<bold>Schematic illustrations of dynamic visual capture and sound-induced visual motion. (A)</bold>
Dynamic visual capture. When the motion direction of apparent motion is incongruent between visual and auditory stimuli, the direction of auditory apparent motion is perceived as congruent with that of visual apparent motion. The opposite effect is reported less often.
<bold>(B)</bold>
Sound-induced visual motion. Visual flashes presented at a fixed position are perceived as moving horizontally when auditory stimuli are presented in horizontal motion, especially at larger eccentricities. This was the first demonstration of an auditory driving effect on visual motion perception.</p>
</caption>
<graphic xlink:href="fnint-09-00062-g0003"></graphic>
</fig>
</sec>
<sec>
<title>Driving effects</title>
<p>The perceived direction of visual stimuli moving in opposing vertical directions could be biased by sounds with a change in pitch (i.e., a rising pitch induced upward and a falling pitch induced downward visual motion perception; Maeda et al.,
<xref rid="B56" ref-type="bibr">2004</xref>
). The sensitivity to detect horizontal visual motion was also improved by sounds moving in a consistent direction (Kim et al.,
<xref rid="B45" ref-type="bibr">2012</xref>
). Consistent with the aforementioned auditory effect on the interpretation of visual events and temporal ventriloquism effects on visual stimuli, these findings clearly demonstrate the modulatory effect of sound on visual motion perception (see Blake et al.,
<xref rid="B10" ref-type="bibr">2004</xref>
for tactile modulatory effect on visual motion perception). In contrast, little or no auditory driving or inducing effects on visual motion perception have been demonstrated (Meyer and Wuerger,
<xref rid="B58" ref-type="bibr">2001</xref>
; Wuerger et al.,
<xref rid="B104" ref-type="bibr">2003</xref>
; Alais and Burr,
<xref rid="B1" ref-type="bibr">2004a</xref>
; Soto-Faraco et al.,
<xref rid="B80" ref-type="bibr">2004b</xref>
; but see Kitagawa and Ichihara,
<xref rid="B47" ref-type="bibr">2002</xref>
for a visual driving effect on audition). In these studies, visual stimuli were presented clearly at foveal or parafoveal positions so that the percept of visual motion was salient. However, visual dominance over audition in the spatial domain collapsed when the visibility or reliability of the visual inputs was degraded (Alais and Burr,
<xref rid="B2" ref-type="bibr">2004b</xref>
). We can therefore predict that auditory information could have capturing or inducing effects on visual motion perception in this situation.</p>
<p>From this viewpoint, Hidaka and his colleagues have demonstrated an auditory driving effect on visual motion perception (Figure
<xref ref-type="fig" rid="F3">3B</xref>
). In their experiment (Hidaka et al.,
<xref rid="B37" ref-type="bibr">2009</xref>
), auditory apparent motion was presented through headphones and a blinking visual target at a fixed location was presented either in foveal, parafoveal, or perifoveal positions. They found that the static visual target tended to be perceived as moving when it was presented at the perifoveal position (>10°). This auditory driving effect on visual motion perception was reported not only for horizontal, but also for vertical auditory motion (Teramoto et al.,
<xref rid="B94" ref-type="bibr">2010b</xref>
). Furthermore, Hidaka et al. (
<xref rid="B40" ref-type="bibr">2011b</xref>
) found that auditory continuous motion information induced visual motion perception for a static visual target. In addition, the auditory continuous motion determined the perceived direction of an ambiguous visual global motion display in which motion information was extracted from the integration of multiple local motion signals (Williams and Sekuler,
<xref rid="B103" ref-type="bibr">1984</xref>
). These findings indicate that the auditory driving effect could be dissociated from the attentional spatial capture effect (Spence and Driver,
<xref rid="B85" ref-type="bibr">1997</xref>
) or auditory spatial bias effect on visual targets (Radeau and Bertelson,
<xref rid="B66" ref-type="bibr">1987</xref>
; Alais and Burr,
<xref rid="B2" ref-type="bibr">2004b</xref>
). Rather, the effect is considered to be purely based on perceptual motion processing.</p>
<p>Recent studies have shown that similar effects are observed beyond the auditory-visual domain. Fracasso et al. (
<xref rid="B25" ref-type="bibr">2013</xref>
) reported that auditorily induced illusory visual motion triggered visuo-motor responses (eye movements) similar to those evoked by actual visual motion. Keetels and Stekelenburg (
<xref rid="B44" ref-type="bibr">2014</xref>
) found that motor-induced motion information (finger movements) also induces visual motion perception for a static visual target. Furthermore, it was recently reported that motion aftereffects can mutually transfer between the visual and tactile modalities (Konkle et al.,
<xref rid="B50" ref-type="bibr">2009</xref>
).</p>
<p>These findings extend the suggestions originally proposed by Soto-Faraco et al.: different sensory signals flexibly and adaptively cooperate with each other, based on the reliability and saliency of the information, under common perceptual mechanisms and shared neural substrates in motion perception.</p>
</sec>
<sec>
<title>Effects of associative learning</title>
<p>How are common perceptual and neural mechanisms established in the brain across sensory modalities in motion perception? Given that each sensory organ has unique properties in perceptual processing, we may learn after birth how the inputs of different sensory modalities should be associated or integrated. The most influential cue for integration/association between sensory modalities is considered to be spatiotemporal consistency/proximity (Calvert et al.,
<xref rid="B15" ref-type="bibr">2004</xref>
). Research on crossmodal perceptual learning reported that repeated/redundant presentations of paired moving visual and auditory stimuli induced the facilitation of visual motion discrimination (Seitz et al.,
<xref rid="B73" ref-type="bibr">2006</xref>
; Kim et al.,
<xref rid="B46" ref-type="bibr">2008</xref>
). In addition, reliability of crossmodal inputs was also reported to affect the establishment of crossmodal associations (Ernst et al.,
<xref rid="B21" ref-type="bibr">2000</xref>
). Based on these findings, the establishment of crossmodal associations has been considered in the context of a maximum likelihood estimation model (Ernst and Banks,
<xref rid="B20" ref-type="bibr">2002</xref>
). However, this type of model does not explain how we come to use spatiotemporal information and/or reliability as cues for deciding whether crossmodal inputs should be integrated or segregated. Recently, Bayesian models/frameworks have approached this problem (Ernst,
<xref rid="B19" ref-type="bibr">2007</xref>
). Körding et al. (
<xref rid="B51" ref-type="bibr">2007</xref>
) designed a model that implements prior knowledge about whether auditory and visual stimuli should be integrated, inferred from the spatial proximity of the stimuli. They demonstrated that this model predicted behavioral performance in audio-visual spatial ventriloquism well (see also Shams and Beierholm,
<xref rid="B76" ref-type="bibr">2010</xref>
for a review). Further, not only spatiotemporal proximity but also correlative relationships of crossmodal inputs can play a key role in the determination of integration and segregation of these inputs (Parise et al.,
<xref rid="B65" ref-type="bibr">2012</xref>
,
<xref rid="B64" ref-type="bibr">2013</xref>
). These findings indicate that any combination of crossmodal stimuli can be integrated if prior knowledge or experience establishes that they are associable (see also van Dam et al.,
<xref rid="B96" ref-type="bibr">2014</xref>
for a review). In fact, new associations/relationships could be learned across arbitrary static/constant crossmodal inputs, even by adults (Fujisaki et al.,
<xref rid="B30" ref-type="bibr">2004</xref>
; Ernst,
<xref rid="B19" ref-type="bibr">2007</xref>
; Seitz et al.,
<xref rid="B74" ref-type="bibr">2007</xref>
). Thus, we could predict that arbitrary crossmodal associations could be established in motion perception as well.</p>
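<p>To make the logic of such models concrete, the following Python sketch implements a simplified version of the causal inference model of Körding et al. (<xref rid="B51" ref-type="bibr">2007</xref>); the parameter names and example values are ours and purely illustrative:</p>
<preformat>
import numpy as np

def causal_inference(x_v, x_a, sigma_v, sigma_a, sigma_p, p_common):
    """Sketch of Bayesian causal inference for audiovisual localization
    (after Koerding et al., 2007). x_v, x_a: noisy visual and auditory
    position measurements; sigma_p: width of a zero-centered spatial
    prior; p_common: prior probability that both signals share a cause."""
    var_v, var_a, var_p = sigma_v**2, sigma_a**2, sigma_p**2

    # Marginal likelihood of both measurements under one common cause
    # (the shared source position is integrated out analytically).
    d1 = var_v * var_a + var_v * var_p + var_a * var_p
    like_c1 = np.exp(-0.5 * ((x_v - x_a)**2 * var_p + x_v**2 * var_a
                             + x_a**2 * var_v) / d1) / (2 * np.pi * np.sqrt(d1))

    # Marginal likelihood under two independent causes.
    like_c2 = (np.exp(-0.5 * x_v**2 / (var_v + var_p))
               / np.sqrt(2 * np.pi * (var_v + var_p))
               * np.exp(-0.5 * x_a**2 / (var_a + var_p))
               / np.sqrt(2 * np.pi * (var_a + var_p)))

    # Posterior probability that the signals share a common cause.
    p_c1 = like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))

    # Optimal position estimates under each causal structure.
    s_integrate = (x_v / var_v + x_a / var_a) / (1/var_v + 1/var_a + 1/var_p)
    s_v_alone = (x_v / var_v) / (1/var_v + 1/var_p)

    # Model averaging yields the reported visual position: measurements
    # that are close together are fused (ventriloquism); distant ones are not.
    return p_c1 * s_integrate + (1 - p_c1) * s_v_alone

# Example: a sound 5 deg from the visual measurement barely attracts a
# reliable visual estimate, but attracts it strongly when sigma_v is large.
print(causal_inference(x_v=0.0, x_a=5.0, sigma_v=1.0, sigma_a=4.0,
                       sigma_p=15.0, p_common=0.5))
</preformat>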
<p>Teramoto et al. (
<xref rid="B92" ref-type="bibr">2010a</xref>
) demonstrated that sounds without spatial information become a driver for visual motion perception after associative learning (Figure
<xref ref-type="fig" rid="F4">4</xref>
). In their experiments, two visual flashes were presented as horizontal visual apparent motion stimuli. The onset of each visual stimulus was accompanied by an auditory stimulus at one of two arbitrary pitches [higher (H) or lower (L)]. Before a 3 min exposure to a paired presentation of the visual and auditory stimuli (e.g., leftward motion with an H-to-L pitch change and rightward motion with an L-to-H pitch change), the sounds did not affect the percept of visual apparent motion. After the exposure, in contrast, the sounds induced visual apparent motion in the exposed manner (in this case, the H-to-L pitch change induced leftward motion perception and vice versa). These association effects did not appear when the inter-stimulus interval of the visual stimuli was too long for apparent motion to be perceived during the exposure. In addition, the association effect was also found for the pairing of auditory pitch changes with directional information in a visual global motion display (Michel and Jacobs,
<xref rid="B59" ref-type="bibr">2007</xref>
; Hidaka et al.,
<xref rid="B39" ref-type="bibr">2011a</xref>
). Kafaligonul and Oluk (
<xref rid="B42" ref-type="bibr">2015</xref>
) also reported that exposure to auditory pitch changes paired with higher-order visual motion induced the association effect on both lower- and higher-order visual motion perception, whereas exposure to pitch changes paired with lower-order visual motion induced the association effect only for lower-order visual motion perception. These findings indicate that motion processing plays a key role in the establishment of crossmodal associations in motion perception.</p>
<fig id="F4" position="float">
<label>Figure 4</label>
<caption>
<p>
<bold>Schematic illustrations of perceptual associative learning between a sound sequence and visual apparent motion</bold>
. After a 3 min exposure (adaptation) to a paired presentation of sounds with arbitrary pitch changes [higher (H) to lower (L)] and visual horizontal apparent motion (leftward motion), the sounds began to induce visual apparent motion in the previously exposed manner, specifically in the exposed visual field.</p>
</caption>
<graphic xlink:href="fnint-09-00062-g0004"></graphic>
</fig>
<p>Kobayashi et al. (
<xref rid="B48" ref-type="bibr">2012a</xref>
) presented auditory frequency changes that were physically different but perceptually indiscriminable in their experiments. They demonstrated that the undetectable frequency differences were unconsciously extracted and utilized for establishing associations with visual apparent motion. Furthermore, the effect did not occur when the stimulated eyes differed between the exposure and test sessions. Kobayashi et al. (
<xref rid="B49" ref-type="bibr">2012b</xref>
) further reported that the association effects exhibited sharp selectivity in auditory processing. After exposure to visual apparent motion and a specific frequency change of tones (e.g., 400 and 2100 Hz), tones that differed from the previously associated frequencies by 0.25 octave (476–2496 or 566–2970 Hz in this case) had no effect on visual motion perception in the subsequent test session. Similarly, when the sounds were presented to the observers' right ear during the exposure, sounds presented to the left ear did not affect visual motion perception. Similar selectivity was reported for the visual domain. The effect of exposure to auditory pitch changes and visual moving stimuli (e.g., at 10° in the right visual field) was not observed on the contralateral side of the visual display (the left side in this case), or even on the ipsilateral side with small deviations (e.g., at 5° in the right visual field; Teramoto et al.,
<xref rid="B92" ref-type="bibr">2010a</xref>
; Hidaka et al.,
<xref rid="B39" ref-type="bibr">2011a</xref>
). Similar eye field selectivity was found in the association effects from visual to auditory stimuli (i.e., visual motion information induced changes in the percept of auditory pitches; Teramoto et al.,
<xref rid="B93" ref-type="bibr">2013</xref>
). These selective aspects suggest that lower levels of processing, such as subthreshold processing, monaural/monocular channels, and frequency-band and receptive-field processing, are involved in the establishment of associations in crossmodal spatiotemporal processing.</p>
<p>The aforementioned perceptual association paradigms in crossmodal motion perception have been utilized for further investigations. The existence of crossmodal correspondences is well known. For example, higher/lower pitches can induce impressions of upper/lower positions in space and modulate responses (e.g., reaction times; Bernstein and Edelstein,
<xref rid="B9" ref-type="bibr">1971</xref>
; see also Spence,
<xref rid="B83" ref-type="bibr">2011</xref>
for a review). Hidaka et al. (
<xref rid="B38" ref-type="bibr">2013</xref>
) investigated whether this pitch-space correspondence could have a perceptual effect on motion perception. They found that, unlike the vertical alternation of sound locations, the alternation of pitch (higher and lower) did not by itself induce vertical visual apparent motion perception. In contrast, after an association was established between the alternation of pitch and visual vertical apparent motion, the pitch changes affected visual motion perception. Notably, the association effects appeared to follow pitch-space correspondence rules: the upward and downward directions of visual apparent motion were triggered by lower-to-higher and higher-to-lower pitch changes, respectively, regardless of how the pitch change (lower to higher or higher to lower) had been paired with upward/downward visual motion in the exposure phase. The authors speculated that the associative exposure activated existing representations of pitch-space correspondence, which then induced the crossmodal effects on motion perception. Kuang and Zhang (
<xref rid="B53" ref-type="bibr">2014</xref>
) investigated whether a sensory combination other than audition and vision could produce association effects. They presented changes in smells (banana and fennel) with a visual global motion display. After exposure to these stimuli, the smells affected the perceived direction of the visual global motion display. This result suggests that crossmodal associative learning in spatiotemporal processing is not limited to audio-visual domains, but could generally occur among a variety of sensory pairs.</p>
<p>Recent findings clearly suggest that new perceptual associations can be established between arbitrary inputs through crossmodal spatiotemporal processing. After the associations are formed, each sensory input affects the percept of the other as if “replaying” the associated relationship. The association effects are assumed not to be limited to particular sensory combinations, yet they show sharp selectivity within each sensory modality. These findings suggest that perceptual associative learning is one of the most plausible mechanisms underlying the establishment of common perceptual and neural mechanisms in crossmodal spatiotemporal processing.</p>
</sec>
<sec>
<title>Functional brain characteristics in crossmodal spatiotemporal processing</title>
<p>The neural substrates of crossmodal interactions have been investigated using neurophysiological and brain imaging techniques in animals and humans. Researchers have shown that multisensory inputs can activate both subcortical regions (e.g., the superior colliculus, pulvinar nucleus, and putamen) and cortical regions (e.g., sensory association areas in the temporal, parietal, and frontal lobes), and even brain areas that have traditionally been considered primary sensory areas (e.g., visual and auditory areas; see Calvert,
<xref rid="B14" ref-type="bibr">2001</xref>
; Driver and Noesselt,
<xref rid="B17" ref-type="bibr">2008</xref>
; Murray and Wallace,
<xref rid="B61" ref-type="bibr">2011</xref>
for review).</p>
<p>Some researchers have investigated the neural mechanisms for motion processing in crossmodal interactions by using brain-imaging techniques. Lewis et al. (
<xref rid="B55" ref-type="bibr">2000</xref>
) presented visual and auditory motion stimuli independently and then investigated the overlapping and non-overlapping activation areas for those inputs. Both types of motion stimuli activated the lateral parietal, lateral frontal, anterior midline, and anterior insula areas. In contrast, visual stimulation selectively activated the primary visual and V5/MT areas, whereas auditory stimulation selectively activated the primary auditory areas and surrounding areas, including the periarcuate cortex. Interestingly, the inferior parietal lobule, dorsal occipital cortex, and the cortex overlapping hMT+ were activated by visual motion but suppressed by auditory motion. Auditory motion information strongly activated the superior temporal sulcus. In addition, during a speeded discrimination task on these motion stimuli, the intraparietal sulcus, anterior midline, and anterior insula were activated. Researchers have also reported that independently presented visual, auditory, and tactile motion information activated the same sensory association areas, such as the intraparietal sulcus, as well as the lateral inferior postcentral cortex and the premotor cortex (Bremmer et al.,
<xref rid="B11" ref-type="bibr">2001</xref>
; see also Grefkes and Fink,
<xref rid="B34" ref-type="bibr">2005</xref>
).</p>
<p>Furthermore, Baumann and Greenlee (
<xref rid="B7" ref-type="bibr">2007</xref>
) found that the brain areas related to crossmodal integration (e.g., the superior parietal lobule, the superior temporal gyrus, the intraparietal sulcus, and the supra marginal gyrus) were activated when visual random-dot motion display and auditory motion stimuli were presented in a congruent direction. However, activation in the V5/MT area was not observed. The authors speculated that this might be due to relatively weak visual motion stimulation. In contrast, Alink et al. (
<xref rid="B4" ref-type="bibr">2008</xref>
) reported that activation in the V5/MT area became stronger when the visual and auditory motion signals were coherent rather than incoherent. Interestingly, auditory motion information alone also activated the V5/MT area (see also Alink et al.,
<xref rid="B3" ref-type="bibr">2012</xref>
). Similarly, the areas identified to respond to auditory motion (auditory motion complex: AMC) were also activated by visual motion information. In addition, when visual motion information perceptually captured auditory information (Soto-Faraco et al.,
<xref rid="B79" ref-type="bibr">2002</xref>
), activation was enhanced in the V5/MT area, while AMC activation decreased.</p>
<p>Scheef et al. (
<xref rid="B71" ref-type="bibr">2009</xref>
) used a more complex situation in which a biologically meaningful visual motion signal (a human jumping) was presented together with sounds consistent with that motion (implying the jumping). They reported that activation in the V5/MT area, as well as in the superior temporal sulcus, the intraparietal complex, and the prefrontal regions, was enhanced by the sounds. Studies have also indicated that tactile motion information can activate the V5/MT area, as well as the somatosensory areas, in a manner similar to a visual motion signal or in interaction with visual information (Hagen et al.,
<xref rid="B35" ref-type="bibr">2002</xref>
; Blake et al.,
<xref rid="B10" ref-type="bibr">2004</xref>
; van Kemenade et al.,
<xref rid="B98" ref-type="bibr">2014</xref>
).</p>
<p>There have also been several electrophysiological studies regarding crossmodal interactions in motion perception. For instance, Stekelenburg and Vroomen (
<xref rid="B89" ref-type="bibr">2009</xref>
) focused on early event-related mismatch negativity (MMN) components (around 200 ms). They reported that MMN induced by changes in the auditory motion direction diminished when the visual capture effect on auditory motion occurred. Since MMN is assumed to reflect automatic, pre-attentive processes, these findings indicate the involvement of perceptual processes in crossmodal motion perception (see also Beer and Röder,
<xref rid="B8" ref-type="bibr">2005</xref>
; Zvyagintsev et al.,
<xref rid="B108" ref-type="bibr">2009</xref>
). Moreover, congruent audio-visual (Gleiss and Kayser,
<xref rid="B33" ref-type="bibr">2014</xref>
) and visuo-tactile (Krebber et al.,
<xref rid="B52" ref-type="bibr">2015</xref>
) motion information enhanced alpha-band and gamma-band activities in each primary sensory area. This suggests that both top-down and bottom-up processes underlie the integration of crossmodal motion information. While these studies mainly used motion stimuli in a two-dimensional plane, recent studies also demonstrate the involvement of early sensory areas in processing audio-visual stimuli that simulate motion in depth, especially looming stimuli (Romei et al.,
<xref rid="B68" ref-type="bibr">2009</xref>
; Cappe et al.,
<xref rid="B16" ref-type="bibr">2012</xref>
; Ogawa and Macaluso,
<xref rid="B63" ref-type="bibr">2013</xref>
).</p>
<p>Taken together, crossmodal motion information can activate regions ranging from higher sensory association areas (e.g., the intraparietal and superior temporal sulci) to relatively lower motion processing areas (e.g., the V5/MT area) and primary sensory areas related to motion processing. The activation patterns in these areas also appear to vary depending on the congruency of the motion signals, the types of stimuli, and the experimental paradigm.</p>
</sec>
<sec>
<title>Possible linkages between perceptual and neural aspects</title>
<p>Thus far, we have reviewed the recent literature regarding perceptual aspects and functional brain characteristics related to crossmodal spatiotemporal processing. Here, we discuss possible linkages between these aspects.</p>
<p>Some psychophysical studies have shown that motion aftereffects can occur across sensory modalities (Kitagawa and Ichihara,
<xref rid="B47" ref-type="bibr">2002</xref>
; Konkle et al.,
<xref rid="B50" ref-type="bibr">2009</xref>
). Specifically, the aftereffect was
<italic>negative</italic>
(e.g., adaptation to upward motion subsequently induced downward motion perception for static stimuli). Visual
<italic>negative</italic>
motion aftereffects are assumed to arise from the inhibition of neurons selective for the adapted motion direction and the relative enhancement of neurons tuned to the opposite direction (Anstis et al.,
<xref rid="B6" ref-type="bibr">1998</xref>
). In this case, we could assume that direction-selective neurons in V5/MT and the sensory association areas (e.g., the superior temporal sulcus; Bruce et al.,
<xref rid="B13" ref-type="bibr">1981</xref>
) mediated the interplay of motion processing across sensory modalities, as shown in the brain imaging studies (Lewis et al.,
<xref rid="B55" ref-type="bibr">2000</xref>
; Bremmer et al.,
<xref rid="B11" ref-type="bibr">2001</xref>
; Grefkes and Fink,
<xref rid="B34" ref-type="bibr">2005</xref>
; Baumann and Greenlee,
<xref rid="B7" ref-type="bibr">2007</xref>
; Alink et al.,
<xref rid="B4" ref-type="bibr">2008</xref>
; Figure
<xref ref-type="fig" rid="F5">5</xref>
).</p>
<fig id="F5" position="float">
<label>Figure 5</label>
<caption>
<p>
<bold>Schematic illustrations of neural bases of crossmodal motion perception</bold>
. Interactions of motion information across sensory modalities could occur in V5/MT and association areas and induce negative aftereffects. In contrast, lower-level representations and processing areas, including the subcortical, primary sensory, and V5/MT areas, may play key roles in establishing the new neural representations that associate motion information with arbitrary information lacking motion signals, producing positive aftereffects. See main text for details.</p>
</caption>
<graphic xlink:href="fnint-09-00062-g0005"></graphic>
</fig>
<p>On the other hand, studies on crossmodal perceptual associative learning have consistently demonstrated
<italic>positive</italic>
aftereffects (Michel and Jacobs,
<xref rid="B59" ref-type="bibr">2007</xref>
; Teramoto et al.,
<xref rid="B92" ref-type="bibr">2010a</xref>
; Hidaka et al.,
<xref rid="B39" ref-type="bibr">2011a</xref>
; Kuang and Zhang,
<xref rid="B53" ref-type="bibr">2014</xref>
; Kafaligonul and Oluk,
<xref rid="B42" ref-type="bibr">2015</xref>
). In these studies, visual motion stimuli were presented with sounds or smells that contained no motion information. In a crossmodal
<italic>negative</italic>
motion aftereffect, motion information is clearly present in multiple modalities, so existing neural representations responsible for motion processing are considered to be involved. In contrast, during crossmodal perceptual associative learning, we could assume that new neural representations are established between motion information and arbitrary information without a motion signal (cf. Haijiang et al.,
<xref rid="B36" ref-type="bibr">2006</xref>
). After the association is formed, the arbitrary information works as a predictive cue for the paired motion percept, inducing a
<italic>positive</italic>
aftereffect. In line with this idea, Schlack and Albright (
<xref rid="B72" ref-type="bibr">2007</xref>
) reported that, after associations were established between the orientation of static visual arrows and motion directions, neurons in the MT area of rhesus monkeys became selective for the orientation of the static arrows. Moreover, behavioral results showed that audiovisual perceptual association effects in motion perception have a visual field selectivity of within 5° at 5–10° of eccentricity (Teramoto et al.,
<xref rid="B92" ref-type="bibr">2010a</xref>
; Hidaka et al.,
<xref rid="B39" ref-type="bibr">2011a</xref>
). This spatial selectivity closely matches the receptive field size of V5/MT neurons (e.g., Felleman and Kaas,
<xref rid="B23" ref-type="bibr">1984</xref>
). We could thus assume that the motion processing area (V5/MT) is a likely brain region in which new neural representations for crossmodal motion processing are formed in crossmodal
<italic>positive</italic>
motion aftereffects.</p>
<p>As discussed above, functional brain research regarding crossmodal spatiotemporal processing has focused on cortical activity, from higher sensory association areas to motion processing areas. However, in spatial and temporal processing, multisensory inputs have been reported to activate both subcortical and lower cortical regions. Additionally, audiovisual perceptual association effects in motion perception demonstrated sharp selectivity in both the visual (eye selectivity; Kobayashi et al.,
<xref rid="B48" ref-type="bibr">2012a</xref>
) and auditory (ear and frequency selectivity; Kobayashi et al.,
<xref rid="B49" ref-type="bibr">2012b</xref>
) domains. These findings indicate that lower cortical regions, such as primary sensory areas, and/or subcortical regions could play key roles in crossmodal motion processing.</p>
<p>Consistent with the suggestions of many crossmodal studies, lower and higher brain regions, and bottom-up and top-down processing, are likely to be mutually and closely involved in crossmodal interactions in spatiotemporal processing.</p>
</sec>
</sec>
<sec id="s5">
<title>Concluding remarks</title>
<p>In this review, we have summarized recent evidence regarding crossmodal interactions in spatiotemporal processing (i.e., motion perception). The traditional view in crossmodal studies held that vision has dominant effects over the other sensory modalities (i.e., visual capture) in spatiotemporal processing; sensory information other than vision (e.g., sound) had been assumed to have only modulatory effects on crossmodal motion perception. However, recent findings clearly demonstrate that sound and motor action can have a driving effect on visual motion perception, specifically when the visibility of the visual stimuli is degraded. Studies regarding perceptual associative learning have further reported that an association can be established between sounds without spatial information and visual motion information by a 3 min exposure, after which the sounds acquire a driving effect on visual motion perception. Other sensory information (smell) has also been reported to have similar driving effects on visual motion perception. Crossmodal interaction studies at the neural level demonstrate that activation in lower and higher cortical regions, including the areas related to visual motion processing, is commonly modulated by crossmodal motion information.</p>
<p>These findings clearly suggest that multimodal information can mutually interact in spatiotemporal processing and that common underlying perceptual and neural mechanisms for crossmodal spatiotemporal processing exist. Importantly, it has also been demonstrated that crossmodal interactions in motion perception occur flexibly and adaptively, based on the reliability and saliency of information in the spatiotemporal domain. The brain activation patterns related to crossmodal motion perception also appear to vary depending on the congruency of the motion signals. These characteristics are concordant with recent Bayesian models/frameworks of crossmodal integration, which assume that prior knowledge/experience about whether crossmodal inputs should be integrated or segregated plays a key role (Shams and Beierholm,
<xref rid="B76" ref-type="bibr">2010</xref>
; van Dam et al.,
<xref rid="B96" ref-type="bibr">2014</xref>
). The perceptual associative learning effects also indicate that arbitrary, unrelated crossmodal spatiotemporal information could interact if prior knowledge/experiences of integration are formed between them. These indicate that perceptual associative learning is one of the most plausible underlying mechanisms to establish common perceptual and neural representations of crossmodal spatiotemporal processing in the brain.</p>
<p>Several research questions remain to be addressed in future studies. For example, studies regarding crossmodal spatiotemporal processing have mainly investigated interactions between vision and other sensory modalities. Visual information has been considered the most influential input for motion perception. Therefore, past findings inevitably include the products of visual processing and related neural activities (e.g., the involvement of the V5/MT area). Of course, our perceptual systems can also receive motion information from the auditory and tactile modalities, which can affect visual motion perception. Some studies have reported that auditory or tactile motion can still be perceived despite bilateral lesions of the lateral occipital cortex, including V5/MT and/or the posterior parietal cortex, even though such lesions induce visual motion blindness (Zihl et al.,
<xref rid="B106" ref-type="bibr">1983</xref>
,
<xref rid="B107" ref-type="bibr">1991</xref>
; Rizzo et al.,
<xref rid="B67" ref-type="bibr">1995</xref>
). Moreover, spatiotemporal information does not exist only in the external world. Internal spatiotemporal information, namely vestibular signals arising from head movement or self-motion, is also present and interacts and coordinates with the other senses (see Angelaki and Cullen,
<xref rid="B5" ref-type="bibr">2008</xref>
for a review). Thus, the generality and validity of the existing evidence regarding crossmodal spatiotemporal processing should be confirmed by focusing on crossmodal interactions that exclude vision.</p>
<p>Detailed investigation of how neural substrates are established through perceptual associative learning is also necessary. For example, when and where do new neural substrates appear in the brain during perceptual associative learning? Which brain areas begin to activate, and how does the amount of neural activity change over time? Does the perceptual aftereffect shift from positive to negative as the neural representations develop? Answers to these questions could contribute to an understanding of how the brain acquires the perceptual and neural bases for crossmodal interactions in spatiotemporal processing.</p>
<p>We should also focus on possible effects from spatiotemporal processes on spatial and temporal processes, whereas previous studies have mainly investigated the opposite direction. For example, in the visual domain, motion information changes the percept of the surrounding space (Whitney and Cavanagh,
<xref rid="B102" ref-type="bibr">2000</xref>
). In crossmodal studies, vestibular motion information, such as head movement (Leung et al.,
<xref rid="B54" ref-type="bibr">2008</xref>
) and forward self-motion (Teramoto et al.,
<xref rid="B95" ref-type="bibr">2012</xref>
), could distort the percept of auditory space. Further investigation of this aspect could contribute to a comprehensive understanding of the influence of crossmodal interactions in spatiotemporal, spatial, and temporal processing.</p>
</sec>
<sec id="s6">
<title>Author contributions</title>
<p>SH and WT were involved in the literature search. SH, WT, and YS were involved in interpreting the findings and writing the paper, including the figures and tables.</p>
<sec>
<title>Conflict of interest statement</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
</sec>
</body>
<back>
<ack>
<p>This research was supported by Grant-in-Aid for Scientific Research (B) (No. 26285160) from the Japan Society for the Promotion of Science.</p>
</ack>
<ref-list>
<title>References</title>
<ref id="B1">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Alais</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Burr</surname>
<given-names>D.</given-names>
</name>
</person-group>
(
<year>2004a</year>
).
<article-title>No direction-specific bimodal facilitation for audiovisual motion detection</article-title>
.
<source>Brain Res. Cogn. Brain Res.</source>
<volume>19</volume>
,
<fpage>185</fpage>
<lpage>194</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.cogbrainres.2003.11.011</pub-id>
<pub-id pub-id-type="pmid">15019714</pub-id>
</mixed-citation>
</ref>
<ref id="B2">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Alais</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Burr</surname>
<given-names>D.</given-names>
</name>
</person-group>
(
<year>2004b</year>
).
<article-title>The ventriloquist effect results from near-optimal bimodal integration</article-title>
.
<source>Curr. Biol.</source>
<volume>14</volume>
,
<fpage>257</fpage>
<lpage>262</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.cub.2004.01.029</pub-id>
<pub-id pub-id-type="pmid">14761661</pub-id>
</mixed-citation>
</ref>
<ref id="B3">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Alink</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Euler</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Kriegeskorte</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Singer</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Kohler</surname>
<given-names>A.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>Auditory motion direction encoding in auditory cortex and high-level visual cortex</article-title>
.
<source>Hum. Brain Mapp.</source>
<volume>33</volume>
,
<fpage>969</fpage>
<lpage>978</lpage>
.
<pub-id pub-id-type="doi">10.1002/hbm.21263</pub-id>
<pub-id pub-id-type="pmid">21692141</pub-id>
</mixed-citation>
</ref>
<ref id="B4">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Alink</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Singer</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Muckli</surname>
<given-names>L.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>Capture of auditory motion by vision is represented by an activation shift from auditory to visual motion cortex</article-title>
.
<source>J. Neurosci.</source>
<volume>28</volume>
,
<fpage>2690</fpage>
<lpage>2697</lpage>
.
<pub-id pub-id-type="doi">10.1523/JNEUROSCI.2980-07.2008</pub-id>
<pub-id pub-id-type="pmid">18337398</pub-id>
</mixed-citation>
</ref>
<ref id="B5">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Angelaki</surname>
<given-names>D. E.</given-names>
</name>
<name>
<surname>Cullen</surname>
<given-names>K. E.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>Vestibular system: the many facets of a multimodal sense</article-title>
.
<source>Annu. Rev. Neurosci.</source>
<volume>31</volume>
,
<fpage>125</fpage>
<lpage>150</lpage>
.
<pub-id pub-id-type="doi">10.1146/annurev.neuro.31.060407.125555</pub-id>
<pub-id pub-id-type="pmid">18338968</pub-id>
</mixed-citation>
</ref>
<ref id="B6">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Anstis</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Verstraten</surname>
<given-names>F. A.</given-names>
</name>
<name>
<surname>Mather</surname>
<given-names>G.</given-names>
</name>
</person-group>
(
<year>1998</year>
).
<article-title>The motion aftereffect</article-title>
.
<source>Trends Cogn. Sci. (Regul. Ed).</source>
<volume>2</volume>
,
<fpage>111</fpage>
<lpage>117</lpage>
.
<pub-id pub-id-type="doi">10.1016/S1364-6613(98)01142-5</pub-id>
<pub-id pub-id-type="pmid">21227087</pub-id>
</mixed-citation>
</ref>
<ref id="B7">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Baumann</surname>
<given-names>O.</given-names>
</name>
<name>
<surname>Greenlee</surname>
<given-names>M. W.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>Neural correlates of coherent audiovisual motion perception</article-title>
.
<source>Cereb. Cortex</source>
<volume>17</volume>
,
<fpage>1433</fpage>
<lpage>1443</lpage>
.
<pub-id pub-id-type="doi">10.1093/cercor/bhl055</pub-id>
<pub-id pub-id-type="pmid">16928890</pub-id>
</mixed-citation>
</ref>
<ref id="B8">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Beer</surname>
<given-names>A. L.</given-names>
</name>
<name>
<surname>Röder</surname>
<given-names>B.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>Attending to visual or auditory motion affects perception within and across modalities: an event-related potential study</article-title>
.
<source>Eur. J. Neurosci.</source>
<volume>21</volume>
,
<fpage>1116</fpage>
<lpage>1130</lpage>
.
<pub-id pub-id-type="doi">10.1111/j.1460-9568.2005.03927.x</pub-id>
<pub-id pub-id-type="pmid">15787717</pub-id>
</mixed-citation>
</ref>
<ref id="B9">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bernstein</surname>
<given-names>I. H.</given-names>
</name>
<name>
<surname>Edelstein</surname>
<given-names>B. A.</given-names>
</name>
</person-group>
(
<year>1971</year>
).
<article-title>Effects of some variations in auditory input upon visual choice reaction time</article-title>
.
<source>J. Exp. Psychol.</source>
<volume>87</volume>
,
<fpage>241</fpage>
<lpage>247</lpage>
.
<pub-id pub-id-type="doi">10.1037/h0030524</pub-id>
<pub-id pub-id-type="pmid">5542226</pub-id>
</mixed-citation>
</ref>
<ref id="B10">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Blake</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Sobel</surname>
<given-names>K. V.</given-names>
</name>
<name>
<surname>James</surname>
<given-names>T. W.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Neural synergy between kinetic vision and touch</article-title>
.
<source>Psychol. Sci.</source>
<volume>15</volume>
,
<fpage>397</fpage>
<lpage>402</lpage>
.
<pub-id pub-id-type="doi">10.1111/j.0956-7976.2004.00691.x</pub-id>
<pub-id pub-id-type="pmid">15147493</pub-id>
</mixed-citation>
</ref>
<ref id="B11">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bremmer</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Schlack</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Shah</surname>
<given-names>N. J.</given-names>
</name>
<name>
<surname>Zafiris</surname>
<given-names>O.</given-names>
</name>
<name>
<surname>Kubischik</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Hoffmann</surname>
<given-names>K. P.</given-names>
</name>
<etal></etal>
</person-group>
. (
<year>2001</year>
).
<article-title>Polymodal motion processing in posterior parietal and premotor cortex: a human fMRI study strongly implies equivalencies between humans and monkeys</article-title>
.
<source>Neuron</source>
<volume>29</volume>
,
<fpage>287</fpage>
<lpage>296</lpage>
.
<pub-id pub-id-type="doi">10.1016/S0896-6273(01)00198-2</pub-id>
<pub-id pub-id-type="pmid">11182099</pub-id>
</mixed-citation>
</ref>
<ref id="B12">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bresciani</surname>
<given-names>J. P.</given-names>
</name>
<name>
<surname>Ernst</surname>
<given-names>M. O.</given-names>
</name>
<name>
<surname>Drewing</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Bouyer</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Maury</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Kheddar</surname>
<given-names>A.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>Feeling what you hear: auditory signals can modulate tactile tap perception</article-title>
.
<source>Exp. Brain Res.</source>
<volume>162</volume>
,
<fpage>172</fpage>
<lpage>180</lpage>
.
<pub-id pub-id-type="doi">10.1007/s00221-004-2128-2</pub-id>
<pub-id pub-id-type="pmid">15791465</pub-id>
</mixed-citation>
</ref>
<ref id="B13">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bruce</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Desimone</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Gross</surname>
<given-names>C. G.</given-names>
</name>
</person-group>
(
<year>1981</year>
).
<article-title>Visual properties of neurons in a polysensory area in superior temporal sulcus of the macaque</article-title>
.
<source>J. Neurophysiol.</source>
<volume>46</volume>
,
<fpage>369</fpage>
<lpage>384</lpage>
.
<pub-id pub-id-type="pmid">6267219</pub-id>
</mixed-citation>
</ref>
<ref id="B14">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Calvert</surname>
<given-names>G. A.</given-names>
</name>
</person-group>
(
<year>2001</year>
).
<article-title>Crossmodal processing in the human brain: insights from functional neuroimaging studies</article-title>
.
<source>Cereb. Cortex</source>
<volume>11</volume>
,
<fpage>1110</fpage>
<lpage>1123</lpage>
.
<pub-id pub-id-type="doi">10.1093/cercor/11.12.1110</pub-id>
<pub-id pub-id-type="pmid">11709482</pub-id>
</mixed-citation>
</ref>
<ref id="B15">
<mixed-citation publication-type="book">
<person-group person-group-type="editor">
<name>
<surname>Calvert</surname>
<given-names>G. A.</given-names>
</name>
<name>
<surname>Spence</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Stein</surname>
<given-names>B. E.</given-names>
</name>
</person-group>
(eds.). (
<year>2004</year>
).
<source>The Handbook of Multisensory Processing</source>
.
<publisher-loc>Cambridge, MA</publisher-loc>
:
<publisher-name>MIT Press</publisher-name>
.</mixed-citation>
</ref>
<ref id="B16">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Cappe</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Thelen</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Romei</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Thut</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Murray</surname>
<given-names>M. M.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>Looming signals reveal synergistic principles of multisensory integration</article-title>
.
<source>J. Neurosci.</source>
<volume>32</volume>
,
<fpage>1171</fpage>
<lpage>1182</lpage>
.
<pub-id pub-id-type="doi">10.1523/JNEUROSCI.5517-11.2012</pub-id>
<pub-id pub-id-type="pmid">22279203</pub-id>
</mixed-citation>
</ref>
<ref id="B17">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Driver</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Noesselt</surname>
<given-names>T.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>Multisensory interplay reveals crossmodal influences on ‘sensory-specific’ brain regions, neural responses, and judgments</article-title>
.
<source>Neuron</source>
<volume>57</volume>
,
<fpage>11</fpage>
<lpage>23</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.neuron.2007.12.013</pub-id>
<pub-id pub-id-type="pmid">18184561</pub-id>
</mixed-citation>
</ref>
<ref id="B18">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Driver</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Spence</surname>
<given-names>C.</given-names>
</name>
</person-group>
(
<year>1998</year>
).
<article-title>Attention and the crossmodal construction of space</article-title>
.
<source>Trends Cogn. Sci. (Regul. Ed).</source>
<volume>2</volume>
,
<fpage>254</fpage>
<lpage>262</lpage>
.
<pub-id pub-id-type="doi">10.1016/S1364-6613(98)01188-7</pub-id>
<pub-id pub-id-type="pmid">21244924</pub-id>
</mixed-citation>
</ref>
<ref id="B19">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ernst</surname>
<given-names>M. O.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>Learning to integrate arbitrary signals from vision and touch</article-title>
.
<source>J. Vis.</source>
<volume>7</volume>
:
<fpage>7</fpage>
.
<pub-id pub-id-type="doi">10.1167/7.5.7</pub-id>
<pub-id pub-id-type="pmid">18217847</pub-id>
</mixed-citation>
</ref>
<ref id="B20">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ernst</surname>
<given-names>M. O.</given-names>
</name>
<name>
<surname>Banks</surname>
<given-names>M. S.</given-names>
</name>
</person-group>
(
<year>2002</year>
).
<article-title>Humans integrate visual and haptic information in a statistically optimal fashion</article-title>
.
<source>Nature</source>
<volume>415</volume>
,
<fpage>429</fpage>
<lpage>433</lpage>
.
<pub-id pub-id-type="doi">10.1038/415429a</pub-id>
<pub-id pub-id-type="pmid">11807554</pub-id>
</mixed-citation>
</ref>
<ref id="B21">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ernst</surname>
<given-names>M. O.</given-names>
</name>
<name>
<surname>Banks</surname>
<given-names>M. S.</given-names>
</name>
<name>
<surname>Bülthoff</surname>
<given-names>H. H.</given-names>
</name>
</person-group>
(
<year>2000</year>
).
<article-title>Touch can change visual slant perception</article-title>
.
<source>Nat. Neurosci.</source>
<volume>3</volume>
,
<fpage>69</fpage>
<lpage>73</lpage>
.
<pub-id pub-id-type="doi">10.1038/71140</pub-id>
<pub-id pub-id-type="pmid">10607397</pub-id>
</mixed-citation>
</ref>
<ref id="B22">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ernst</surname>
<given-names>M. O.</given-names>
</name>
<name>
<surname>Bülthoff</surname>
<given-names>H. H.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Merging the senses into a robust percept</article-title>
.
<source>Trends Cogn. Sci. (Regul. Ed).</source>
<volume>8</volume>
,
<fpage>162</fpage>
<lpage>169</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.tics.2004.02.002</pub-id>
<pub-id pub-id-type="pmid">15050512</pub-id>
</mixed-citation>
</ref>
<ref id="B23">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Felleman</surname>
<given-names>D. J.</given-names>
</name>
<name>
<surname>Kaas</surname>
<given-names>J. H.</given-names>
</name>
</person-group>
(
<year>1984</year>
).
<article-title>Receptive-field properties of neurons in middle temporal visual area (MT) of owl monkeys</article-title>
.
<source>J. Neurophysiol.</source>
<volume>52</volume>
,
<fpage>488</fpage>
<lpage>513</lpage>
.
<pub-id pub-id-type="pmid">6481441</pub-id>
</mixed-citation>
</ref>
<ref id="B24">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fendrich</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Corballis</surname>
<given-names>P. M.</given-names>
</name>
</person-group>
(
<year>2001</year>
).
<article-title>The temporal cross-capture of audition and vision</article-title>
.
<source>Percept. Psychophys.</source>
<volume>63</volume>
,
<fpage>719</fpage>
<lpage>725</lpage>
.
<pub-id pub-id-type="doi">10.3758/BF03194432</pub-id>
<pub-id pub-id-type="pmid">11436740</pub-id>
</mixed-citation>
</ref>
<ref id="B25">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fracasso</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Targher</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Zampini</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Melcher</surname>
<given-names>D.</given-names>
</name>
</person-group>
(
<year>2013</year>
).
<article-title>Fooling the eyes: the influence of a sound-induced visual motion illusion on eye movements</article-title>
.
<source>PLoS ONE</source>
<volume>8</volume>
:
<fpage>e62131</fpage>
.
<pub-id pub-id-type="doi">10.1371/journal.pone.0062131</pub-id>
<pub-id pub-id-type="pmid">23637981</pub-id>
</mixed-citation>
</ref>
<ref id="B26">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Freeman</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Driver</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>Direction of visual apparent motion driven solely by timing of a static sound</article-title>
.
<source>Curr. Biol.</source>
<volume>18</volume>
,
<fpage>1262</fpage>
<lpage>1266</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.cub.2008.07.066</pub-id>
<pub-id pub-id-type="pmid">18718760</pub-id>
</mixed-citation>
</ref>
<ref id="B27">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Fujisaki</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Kitazawa</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Nishida</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>Multisensory timing</article-title>
, in
<source>The New Handbook of Multisensory Processing</source>
, ed
<person-group person-group-type="editor">
<name>
<surname>Stein</surname>
<given-names>B. E.</given-names>
</name>
</person-group>
(
<publisher-loc>Cambridge, MA</publisher-loc>
:
<publisher-name>The MIT Press</publisher-name>
),
<fpage>301</fpage>
<lpage>317</lpage>
.</mixed-citation>
</ref>
<ref id="B28">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fujisaki</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Nishida</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>Audio–tactile superiority over visuo–tactile and audio–visual combinations in the temporal resolution of synchrony perception</article-title>
.
<source>Exp. Brain Res.</source>
<volume>198</volume>
,
<fpage>245</fpage>
<lpage>259</lpage>
.
<pub-id pub-id-type="doi">10.1007/s00221-009-1870-x</pub-id>
<pub-id pub-id-type="pmid">19499212</pub-id>
</mixed-citation>
</ref>
<ref id="B29">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fujisaki</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Nishida</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>A common perceptual temporal limit of binding synchronous inputs across different sensory attributes and modalities</article-title>
.
<source>Proc. Biol. Sci.</source>
<volume>277</volume>
,
<fpage>2281</fpage>
<lpage>2290</lpage>
.
<pub-id pub-id-type="doi">10.1098/rspb.2010.0243</pub-id>
<pub-id pub-id-type="pmid">20335212</pub-id>
</mixed-citation>
</ref>
<ref id="B30">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fujisaki</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Shimojo</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Kashino</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Nishida</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Recalibration of audiovisual simultaneity</article-title>
.
<source>Nat. Neurosci.</source>
<volume>7</volume>
,
<fpage>773</fpage>
<lpage>778</lpage>
.
<pub-id pub-id-type="doi">10.1038/nn1268</pub-id>
<pub-id pub-id-type="pmid">15195098</pub-id>
</mixed-citation>
</ref>
<ref id="B31">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gebhard</surname>
<given-names>J. W.</given-names>
</name>
<name>
<surname>Mowbray</surname>
<given-names>G. H.</given-names>
</name>
</person-group>
(
<year>1959</year>
).
<article-title>On discriminating the rate of visual flicker and auditory flutter</article-title>
.
<source>Am. J. Psychol.</source>
<volume>72</volume>
,
<fpage>521</fpage>
<lpage>529</lpage>
.
<pub-id pub-id-type="doi">10.2307/1419493</pub-id>
<pub-id pub-id-type="pmid">13827044</pub-id>
</mixed-citation>
</ref>
<ref id="B32">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Getzmann</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>The effect of brief auditory stimuli on visual apparent motion</article-title>
.
<source>Perception</source>
<volume>36</volume>
,
<fpage>1089</fpage>
<lpage>1103</lpage>
.
<pub-id pub-id-type="doi">10.1068/p5741</pub-id>
<pub-id pub-id-type="pmid">17844974</pub-id>
</mixed-citation>
</ref>
<ref id="B33">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gleiss</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Kayser</surname>
<given-names>C.</given-names>
</name>
</person-group>
(
<year>2014</year>
).
<article-title>Oscillatory mechanisms underlying the enhancement of visual motion perception by multisensory congruency</article-title>
.
<source>Neuropsychologia</source>
<volume>53</volume>
,
<fpage>84</fpage>
<lpage>93</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2013.11.005</pub-id>
<pub-id pub-id-type="pmid">24262657</pub-id>
</mixed-citation>
</ref>
<ref id="B34">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Grefkes</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Fink</surname>
<given-names>G. R.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>The functional organization of the intraparietal sulcus in humans and monkeys</article-title>
.
<source>J. Anat.</source>
<volume>207</volume>
,
<fpage>3</fpage>
<lpage>17</lpage>
.
<pub-id pub-id-type="doi">10.1111/j.1469-7580.2005.00426.x</pub-id>
<pub-id pub-id-type="pmid">16011542</pub-id>
</mixed-citation>
</ref>
<ref id="B35">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hagen</surname>
<given-names>M. C.</given-names>
</name>
<name>
<surname>Franzén</surname>
<given-names>O.</given-names>
</name>
<name>
<surname>McGlone</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Essick</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Dancer</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Pardo</surname>
<given-names>J. V.</given-names>
</name>
</person-group>
(
<year>2002</year>
).
<article-title>Tactile motion activates the human middle temporal/V5 (MT/V5) complex</article-title>
.
<source>Eur. J. Neurosci.</source>
<volume>16</volume>
,
<fpage>957</fpage>
<lpage>964</lpage>
.
<pub-id pub-id-type="doi">10.1046/j.1460-9568.2002.02139.x</pub-id>
<pub-id pub-id-type="pmid">12372032</pub-id>
</mixed-citation>
</ref>
<ref id="B36">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Haijiang</surname>
<given-names>Q.</given-names>
</name>
<name>
<surname>Saunders</surname>
<given-names>J. A.</given-names>
</name>
<name>
<surname>Stone</surname>
<given-names>R. W.</given-names>
</name>
<name>
<surname>Backus</surname>
<given-names>B. T.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>Demonstration of cue recruitment: change in visual appearance by means of Pavlovian conditioning</article-title>
.
<source>Proc. Natl. Acad. Sci. U.S.A.</source>
<volume>103</volume>
,
<fpage>483</fpage>
<lpage>488</lpage>
.
<pub-id pub-id-type="doi">10.1073/pnas.0506728103</pub-id>
<pub-id pub-id-type="pmid">16387858</pub-id>
</mixed-citation>
</ref>
<ref id="B37">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hidaka</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Manaka</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Teramoto</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Sugita</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Miyauchi</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Gyoba</surname>
<given-names>J.</given-names>
</name>
<etal></etal>
</person-group>
. (
<year>2009</year>
).
<article-title>Alternation of sound location induces visual motion perception of a static object</article-title>
.
<source>PLoS ONE</source>
<volume>4</volume>
:
<fpage>e8188</fpage>
.
<pub-id pub-id-type="doi">10.1371/journal.pone.0008188</pub-id>
<pub-id pub-id-type="pmid">19997648</pub-id>
</mixed-citation>
</ref>
<ref id="B38">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hidaka</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Teramoto</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Keetels</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Vroomen</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>2013</year>
).
<article-title>Effect of pitch-space correspondence on sound-induced visual motion perception</article-title>
.
<source>Exp. Brain Res.</source>
<volume>231</volume>
,
<fpage>117</fpage>
<lpage>126</lpage>
.
<pub-id pub-id-type="doi">10.1007/s00221-013-3674-2</pub-id>
<pub-id pub-id-type="pmid">24030519</pub-id>
</mixed-citation>
</ref>
<ref id="B39">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hidaka</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Teramoto</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Kobayashi</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Sugita</surname>
<given-names>Y.</given-names>
</name>
</person-group>
(
<year>2011a</year>
).
<article-title>Sound-contingent visual motion aftereffect</article-title>
.
<source>BMC Neurosci.</source>
<volume>12</volume>
:
<fpage>44</fpage>
.
<pub-id pub-id-type="doi">10.1186/1471-2202-12-44</pub-id>
<pub-id pub-id-type="pmid">21569617</pub-id>
</mixed-citation>
</ref>
<ref id="B40">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hidaka</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Teramoto</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Sugita</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Manaka</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Sakamoto</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Suzuki</surname>
<given-names>Y.</given-names>
</name>
</person-group>
(
<year>2011b</year>
).
<article-title>Auditory motion information drives visual motion perception</article-title>
.
<source>PLoS ONE</source>
<volume>6</volume>
:
<fpage>e17499</fpage>
.
<pub-id pub-id-type="doi">10.1371/journal.pone.0017499</pub-id>
<pub-id pub-id-type="pmid">21408078</pub-id>
</mixed-citation>
</ref>
<ref id="B41">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Howard</surname>
<given-names>I. P.</given-names>
</name>
<name>
<surname>Templeton</surname>
<given-names>W. B.</given-names>
</name>
</person-group>
(
<year>1966</year>
).
<source>Human Spatial Orientation</source>
.
<publisher-loc>New York, NY</publisher-loc>
:
<publisher-name>Wiley</publisher-name>
.</mixed-citation>
</ref>
<ref id="B42">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kafaligonul</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Oluk</surname>
<given-names>C.</given-names>
</name>
</person-group>
(
<year>2015</year>
).
<article-title>Audiovisual associations alter the perception of low-level visual motion</article-title>
.
<source>Front. Integr. Neurosci.</source>
<volume>9</volume>
:
<fpage>26</fpage>
.
<pub-id pub-id-type="doi">10.3389/fnint.2015.00026</pub-id>
<pub-id pub-id-type="pmid">25873869</pub-id>
</mixed-citation>
</ref>
<ref id="B43">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kawabe</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Miura</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Yamada</surname>
<given-names>Y.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>Audiovisual tau effect</article-title>
.
<source>Acta Psychol.</source>
<volume>128</volume>
,
<fpage>249</fpage>
<lpage>254</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.actpsy.2008.01.004</pub-id>
<pub-id pub-id-type="pmid">18328993</pub-id>
</mixed-citation>
</ref>
<ref id="B44">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Keetels</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Stekelenburg</surname>
<given-names>J. J.</given-names>
</name>
</person-group>
(
<year>2014</year>
).
<article-title>Motor-induced visual motion: hand movements driving visual motion perception</article-title>
.
<source>Exp. Brain Res.</source>
<volume>232</volume>
,
<fpage>2865</fpage>
<lpage>2877</lpage>
.
<pub-id pub-id-type="doi">10.1007/s00221-014-3959-0</pub-id>
<pub-id pub-id-type="pmid">24820287</pub-id>
</mixed-citation>
</ref>
<ref id="B45">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kim</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Peters</surname>
<given-names>M. A.</given-names>
</name>
<name>
<surname>Shams</surname>
<given-names>L.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>0+1 > 1: How adding noninformative sound improves performance on a visual task</article-title>
.
<source>Psychol. Sci.</source>
<volume>23</volume>
,
<fpage>6</fpage>
<lpage>12</lpage>
.
<pub-id pub-id-type="doi">10.1177/0956797611420662</pub-id>
<pub-id pub-id-type="pmid">22127367</pub-id>
</mixed-citation>
</ref>
<ref id="B46">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kim</surname>
<given-names>R. S.</given-names>
</name>
<name>
<surname>Seitz</surname>
<given-names>A. R.</given-names>
</name>
<name>
<surname>Shams</surname>
<given-names>L.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>Benefits of stimulus congruency for multisensory facilitation of visual learning</article-title>
.
<source>PLoS ONE</source>
<volume>3</volume>
:
<fpage>e1532</fpage>
.
<pub-id pub-id-type="doi">10.1371/journal.pone.0001532</pub-id>
<pub-id pub-id-type="pmid">18231612</pub-id>
</mixed-citation>
</ref>
<ref id="B47">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kitagawa</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Ichihara</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>2002</year>
).
<article-title>Hearing visual motion in depth</article-title>
.
<source>Nature</source>
<volume>416</volume>
,
<fpage>172</fpage>
<lpage>174</lpage>
.
<pub-id pub-id-type="doi">10.1038/416172a</pub-id>
<pub-id pub-id-type="pmid">11894093</pub-id>
</mixed-citation>
</ref>
<ref id="B48">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kobayashi</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Teramoto</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Hidaka</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Sugita</surname>
<given-names>Y.</given-names>
</name>
</person-group>
(
<year>2012a</year>
).
<article-title>Indiscriminable sounds determine the direction of visual motion</article-title>
.
<source>Sci. Rep.</source>
<volume>2</volume>
:
<fpage>365</fpage>
.
<pub-id pub-id-type="doi">10.1038/srep00365</pub-id>
<pub-id pub-id-type="pmid">22511997</pub-id>
</mixed-citation>
</ref>
<ref id="B49">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kobayashi</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Teramoto</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Hidaka</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Sugita</surname>
<given-names>Y.</given-names>
</name>
</person-group>
(
<year>2012b</year>
).
<article-title>Sound frequency and aural selectivity in sound-contingent visual motion aftereffect</article-title>
.
<source>PLoS ONE</source>
<volume>7</volume>
:
<fpage>e36803</fpage>
.
<pub-id pub-id-type="doi">10.1371/journal.pone.0036803</pub-id>
<pub-id pub-id-type="pmid">22649500</pub-id>
</mixed-citation>
</ref>
<ref id="B50">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Konkle</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>Q.</given-names>
</name>
<name>
<surname>Hayward</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Moore</surname>
<given-names>C. I.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>Motion aftereffects transfer between touch and vision</article-title>
.
<source>Curr. Biol.</source>
<volume>19</volume>
,
<fpage>745</fpage>
<lpage>750</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.cub.2009.03.035</pub-id>
<pub-id pub-id-type="pmid">19361996</pub-id>
</mixed-citation>
</ref>
<ref id="B51">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Körding</surname>
<given-names>K. P.</given-names>
</name>
<name>
<surname>Beierholm</surname>
<given-names>U.</given-names>
</name>
<name>
<surname>Ma</surname>
<given-names>W. J.</given-names>
</name>
<name>
<surname>Quartz</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Tenenbaum</surname>
<given-names>J. B.</given-names>
</name>
<name>
<surname>Shams</surname>
<given-names>L.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>Causal inference in multisensory perception</article-title>
.
<source>PLoS ONE</source>
<volume>2</volume>
:
<fpage>e943</fpage>
.
<pub-id pub-id-type="doi">10.1371/journal.pone.0000943</pub-id>
<pub-id pub-id-type="pmid">17895984</pub-id>
</mixed-citation>
</ref>
<ref id="B52">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Krebber</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Harwood</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Spitzer</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Keil</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Senkowski</surname>
<given-names>D.</given-names>
</name>
</person-group>
(
<year>2015</year>
).
<article-title>Visuotactile motion congruence enhances gamma-band activity in visual and somatosensory cortices</article-title>
.
<source>Neuroimage</source>
<volume>117</volume>
,
<fpage>160</fpage>
<lpage>169</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.neuroimage.2015.05.056</pub-id>
<pub-id pub-id-type="pmid">26026813</pub-id>
</mixed-citation>
</ref>
<ref id="B53">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kuang</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>T.</given-names>
</name>
</person-group>
(
<year>2014</year>
).
<article-title>Smelling directions: olfaction modulates ambiguous visual motion perception</article-title>
.
<source>Sci. Rep.</source>
<volume>4</volume>
:
<fpage>5796</fpage>
.
<pub-id pub-id-type="doi">10.1038/srep05796</pub-id>
<pub-id pub-id-type="pmid">25052162</pub-id>
</mixed-citation>
</ref>
<ref id="B54">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Leung</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Alais</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Carlile</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>Compression of auditory space during rapid head turns</article-title>
.
<source>Proc. Natl. Acad. Sci. U.S.A.</source>
<volume>105</volume>
,
<fpage>6492</fpage>
<lpage>6497</lpage>
.
<pub-id pub-id-type="doi">10.1073/pnas.0710837105</pub-id>
<pub-id pub-id-type="pmid">18427118</pub-id>
</mixed-citation>
</ref>
<ref id="B55">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lewis</surname>
<given-names>J. W.</given-names>
</name>
<name>
<surname>Beauchamp</surname>
<given-names>M. S.</given-names>
</name>
<name>
<surname>DeYoe</surname>
<given-names>E. A.</given-names>
</name>
</person-group>
(
<year>2000</year>
).
<article-title>A comparison of visual and auditory motion processing in human cerebral cortex</article-title>
.
<source>Cereb. Cortex</source>
<volume>10</volume>
,
<fpage>873</fpage>
<lpage>888</lpage>
.
<pub-id pub-id-type="doi">10.1093/cercor/10.9.873</pub-id>
<pub-id pub-id-type="pmid">10982748</pub-id>
</mixed-citation>
</ref>
<ref id="B56">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Maeda</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Kanai</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Shimojo</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Changing pitch induced visual motion illusion</article-title>
.
<source>Curr. Biol.</source>
<volume>14</volume>
,
<fpage>R990</fpage>
<lpage>R991</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.cub.2004.11.018</pub-id>
<pub-id pub-id-type="pmid">15589145</pub-id>
</mixed-citation>
</ref>
<ref id="B57">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mateeff</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Hohnsbein</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Noack</surname>
<given-names>T.</given-names>
</name>
</person-group>
(
<year>1985</year>
).
<article-title>Dynamic visual capture: apparent auditory motion induced by a moving visual target</article-title>
.
<source>Perception</source>
<volume>14</volume>
,
<fpage>721</fpage>
<lpage>727</lpage>
.
<pub-id pub-id-type="doi">10.1068/p140721</pub-id>
<pub-id pub-id-type="pmid">3837873</pub-id>
</mixed-citation>
</ref>
<ref id="B58">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Meyer</surname>
<given-names>G. F.</given-names>
</name>
<name>
<surname>Wuerger</surname>
<given-names>S. M.</given-names>
</name>
</person-group>
(
<year>2001</year>
).
<article-title>Cross-modal integration of auditory and visual motion signals</article-title>
.
<source>Neuroreport</source>
<volume>12</volume>
,
<fpage>2557</fpage>
<lpage>2560</lpage>
.
<pub-id pub-id-type="doi">10.1097/00001756-200108080-00053</pub-id>
<pub-id pub-id-type="pmid">11496148</pub-id>
</mixed-citation>
</ref>
<ref id="B59">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Michel</surname>
<given-names>M. M.</given-names>
</name>
<name>
<surname>Jacobs</surname>
<given-names>R. A.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>Parameter learning but not structure learning: a Bayesian network model of constraints on early perceptual learning</article-title>
.
<source>J. Vis.</source>
<volume>7</volume>
:
<fpage>4</fpage>
.
<pub-id pub-id-type="doi">10.1167/7.1.4</pub-id>
<pub-id pub-id-type="pmid">17461672</pub-id>
</mixed-citation>
</ref>
<ref id="B60">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Morein-Zamir</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Soto-Faraco</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Kingstone</surname>
<given-names>A.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>Auditory capture of vision: examining temporal ventriloquism</article-title>
.
<source>Cogn. Brain Res.</source>
<volume>17</volume>
,
<fpage>154</fpage>
<lpage>163</lpage>
.
<pub-id pub-id-type="doi">10.1016/S0926-6410(03)00089-2</pub-id>
<pub-id pub-id-type="pmid">12763201</pub-id>
</mixed-citation>
</ref>
<ref id="B61">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Murray</surname>
<given-names>M. M.</given-names>
</name>
<name>
<surname>Wallace</surname>
<given-names>M. T.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<source>The Neural Bases of Multisensory Processes.</source>
<publisher-loc>Boca Raton, FL</publisher-loc>
:
<publisher-name>CRC Press</publisher-name>
.</mixed-citation>
</ref>
<ref id="B62">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Murray</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Spence</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Harris</surname>
<given-names>L.</given-names>
</name>
</person-group>
(
<year>2013</year>
).
<article-title>International Multisensory Research Forum 2012 meeting special issue</article-title>
.
<source>Multisens. Res.</source>
<volume>26</volume>
,
<fpage>287</fpage>
<lpage>289</lpage>
.
<pub-id pub-id-type="doi">10.1163/22134808-00002416</pub-id>
<pub-id pub-id-type="pmid">23964480</pub-id>
</mixed-citation>
</ref>
<ref id="B63">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ogawa</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Macaluso</surname>
<given-names>E.</given-names>
</name>
</person-group>
(
<year>2013</year>
).
<article-title>Audio–visual interactions for motion perception in depth modulate activity in visual area V3A</article-title>
.
<source>Neuroimage</source>
<volume>71</volume>
,
<fpage>158</fpage>
<lpage>167</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.neuroimage.2013.01.012</pub-id>
<pub-id pub-id-type="pmid">23333414</pub-id>
</mixed-citation>
</ref>
<ref id="B64">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Parise</surname>
<given-names>C. V.</given-names>
</name>
<name>
<surname>Harrar</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Ernst</surname>
<given-names>M. O.</given-names>
</name>
<name>
<surname>Spence</surname>
<given-names>C.</given-names>
</name>
</person-group>
(
<year>2013</year>
).
<article-title>Cross-correlation between auditory and visual signals promotes multisensory integration</article-title>
.
<source>Multisens. Res.</source>
<volume>26</volume>
,
<fpage>307</fpage>
<lpage>316</lpage>
.
<pub-id pub-id-type="doi">10.1163/22134808-00002417</pub-id>
<pub-id pub-id-type="pmid">23964482</pub-id>
</mixed-citation>
</ref>
<ref id="B65">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Parise</surname>
<given-names>C. V.</given-names>
</name>
<name>
<surname>Spence</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Ernst</surname>
<given-names>M. O.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>When correlation implies causation in multisensory integration</article-title>
.
<source>Curr. Biol.</source>
<volume>22</volume>
,
<fpage>46</fpage>
<lpage>49</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.cub.2011.11.039</pub-id>
<pub-id pub-id-type="pmid">22177899</pub-id>
</mixed-citation>
</ref>
<ref id="B66">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Radeau</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Bertelson</surname>
<given-names>P.</given-names>
</name>
</person-group>
(
<year>1987</year>
).
<article-title>Auditory–visual interaction and the timing of inputs. Thomas (1941) revisited</article-title>
.
<source>Psychol. Res.</source>
<volume>49</volume>
,
<fpage>17</fpage>
<lpage>22</lpage>
.
<pub-id pub-id-type="doi">10.1007/BF00309198</pub-id>
<pub-id pub-id-type="pmid">3615744</pub-id>
</mixed-citation>
</ref>
<ref id="B67">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rizzo</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Nawrot</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Zihl</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>1995</year>
).
<article-title>Motion and shape perception in cerebral akinetopsia</article-title>
.
<source>Brain</source>
<volume>118</volume>
,
<fpage>1105</fpage>
<lpage>1127</lpage>
.
<pub-id pub-id-type="doi">10.1093/brain/118.5.1105</pub-id>
<pub-id pub-id-type="pmid">7496774</pub-id>
</mixed-citation>
</ref>
<ref id="B68">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Romei</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Murray</surname>
<given-names>M. M.</given-names>
</name>
<name>
<surname>Cappe</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Thut</surname>
<given-names>G.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>Preperceptual and stimulus-selective enhancement of low-level human visual cortex excitability by sounds</article-title>
.
<source>Curr. Biol.</source>
<volume>19</volume>
,
<fpage>1799</fpage>
<lpage>1805</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.cub.2009.09.027</pub-id>
<pub-id pub-id-type="pmid">19836243</pub-id>
</mixed-citation>
</ref>
<ref id="B69">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Roseboom</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Kawabe</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Nishida</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>2013</year>
).
<article-title>Direction of visual apparent motion driven by perceptual organization of cross-modal signals</article-title>
.
<source>J. Vis.</source>
<volume>13</volume>
:
<fpage>6</fpage>
.
<pub-id pub-id-type="doi">10.1167/13.1.6</pub-id>
<pub-id pub-id-type="pmid">23291646</pub-id>
</mixed-citation>
</ref>
<ref id="B70">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sanabria</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Soto-Faraco</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Spence</surname>
<given-names>C.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>Spatiotemporal interactions between audition and touch depend on hand posture</article-title>
.
<source>Exp. Brain Res.</source>
<volume>165</volume>
,
<fpage>505</fpage>
<lpage>514</lpage>
.
<pub-id pub-id-type="doi">10.1007/s00221-005-2327-5</pub-id>
<pub-id pub-id-type="pmid">15942735</pub-id>
</mixed-citation>
</ref>
<ref id="B71">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Scheef</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Boecker</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Daamen</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Fehse</surname>
<given-names>U.</given-names>
</name>
<name>
<surname>Landsberg</surname>
<given-names>M. W.</given-names>
</name>
<name>
<surname>Granath</surname>
<given-names>D. O.</given-names>
</name>
<etal></etal>
</person-group>
. (
<year>2009</year>
).
<article-title>Multimodal motion processing in area V5/MT: evidence from an artificial class of audio-visual events</article-title>
.
<source>Brain Res.</source>
<volume>1252</volume>
,
<fpage>94</fpage>
<lpage>104</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.brainres.2008.10.067</pub-id>
<pub-id pub-id-type="pmid">19083992</pub-id>
</mixed-citation>
</ref>
<ref id="B72">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Schlack</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Albright</surname>
<given-names>T. D.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>Remembering visual motion: neural correlates of associative plasticity and motion recall in cortical area MT</article-title>
.
<source>Neuron</source>
<volume>53</volume>
,
<fpage>881</fpage>
<lpage>890</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.neuron.2007.02.028</pub-id>
<pub-id pub-id-type="pmid">17359922</pub-id>
</mixed-citation>
</ref>
<ref id="B73">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Seitz</surname>
<given-names>A. R.</given-names>
</name>
<name>
<surname>Kim</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Shams</surname>
<given-names>L.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>Sound facilitates visual learning</article-title>
.
<source>Curr. Biol.</source>
<volume>16</volume>
,
<fpage>1422</fpage>
<lpage>1427</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.cub.2006.05.048</pub-id>
<pub-id pub-id-type="pmid">16860741</pub-id>
</mixed-citation>
</ref>
<ref id="B74">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Seitz</surname>
<given-names>A. R.</given-names>
</name>
<name>
<surname>Kim</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>van Wassenhove</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Shams</surname>
<given-names>L.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>Simultaneous and independent acquisition of multisensory and unisensory associations</article-title>
.
<source>Perception</source>
<volume>36</volume>
,
<fpage>1445</fpage>
<lpage>1454</lpage>
.
<pub-id pub-id-type="doi">10.1068/p5843</pub-id>
<pub-id pub-id-type="pmid">18265827</pub-id>
</mixed-citation>
</ref>
<ref id="B75">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sekuler</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Sekuler</surname>
<given-names>A. B.</given-names>
</name>
<name>
<surname>Lau</surname>
<given-names>R.</given-names>
</name>
</person-group>
(
<year>1997</year>
).
<article-title>Sound alters visual motion perception</article-title>
.
<source>Nature</source>
<volume>385</volume>
,
<fpage>308</fpage>
.
<pub-id pub-id-type="doi">10.1038/385308a0</pub-id>
<pub-id pub-id-type="pmid">9002513</pub-id>
</mixed-citation>
</ref>
<ref id="B76">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Shams</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Beierholm</surname>
<given-names>U. R.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Causal inference in perception</article-title>
.
<source>Trends Cogn. Sci. (Regul. Ed).</source>
<volume>14</volume>
,
<fpage>425</fpage>
<lpage>432</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.tics.2010.07.001</pub-id>
<pub-id pub-id-type="pmid">20705502</pub-id>
</mixed-citation>
</ref>
<ref id="B77">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Shams</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Kamitani</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Shimojo</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>2000</year>
).
<article-title>Illusions: What you see is what you hear</article-title>
.
<source>Nature</source>
<volume>408</volume>
,
<fpage>788</fpage>
.
<pub-id pub-id-type="doi">10.1038/35048669</pub-id>
<pub-id pub-id-type="pmid">11130706</pub-id>
</mixed-citation>
</ref>
<ref id="B78">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Soto-Faraco</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Kingstone</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Spence</surname>
<given-names>C.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>Multisensory contributions to the perception of motion</article-title>
.
<source>Neuropsychologia</source>
<volume>41</volume>
,
<fpage>1847</fpage>
<lpage>1862</lpage>
.
<pub-id pub-id-type="doi">10.1016/S0028-3932(03)00185-4</pub-id>
<pub-id pub-id-type="pmid">14527547</pub-id>
</mixed-citation>
</ref>
<ref id="B79">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Soto-Faraco</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Lyons</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Gazzaniga</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Spence</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Kingstone</surname>
<given-names>A.</given-names>
</name>
</person-group>
(
<year>2002</year>
).
<article-title>The ventriloquist in motion: illusory capture of dynamic information across sensory modalities</article-title>
.
<source>Brain Res. Cogn. Brain Res.</source>
<volume>14</volume>
,
<fpage>139</fpage>
<lpage>146</lpage>
.
<pub-id pub-id-type="doi">10.1016/S0926-6410(02)00068-X</pub-id>
<pub-id pub-id-type="pmid">12063137</pub-id>
</mixed-citation>
</ref>
<ref id="B80">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Soto-Faraco</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Spence</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Kingstone</surname>
<given-names>A.</given-names>
</name>
</person-group>
(
<year>2004b</year>
).
<article-title>Cross-modal dynamic capture: congruency effects in the perception of motion across sensory modalities</article-title>
.
<source>J. Exp. Psychol. Hum. Percept. Perform.</source>
<volume>30</volume>
,
<fpage>330</fpage>
<lpage>345</lpage>
.
<pub-id pub-id-type="doi">10.1037/0096-1523.30.2.330</pub-id>
<pub-id pub-id-type="pmid">15053692</pub-id>
</mixed-citation>
</ref>
<ref id="B81">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Soto-Faraco</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Spence</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Kingstone</surname>
<given-names>A.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>Assessing automaticity in the audiovisual integration of motion</article-title>
.
<source>Acta Psychol.</source>
<volume>118</volume>
,
<fpage>71</fpage>
<lpage>92</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.actpsy.2004.10.008</pub-id>
<pub-id pub-id-type="pmid">15627410</pub-id>
</mixed-citation>
</ref>
<ref id="B82">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Soto-Faraco</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Spence</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Lloyd</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Kingstone</surname>
<given-names>A.</given-names>
</name>
</person-group>
(
<year>2004a</year>
).
<article-title>Moving multisensory research along: motion perception across sensory modalities</article-title>
.
<source>Curr. Dir. Psychol. Sci.</source>
<volume>13</volume>
,
<fpage>29</fpage>
<lpage>32</lpage>
.
<pub-id pub-id-type="doi">10.1111/j.0963-7214.2004.01301008.x</pub-id>
</mixed-citation>
</ref>
<ref id="B83">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Spence</surname>
<given-names>C.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Crossmodal correspondences: a tutorial review</article-title>
.
<source>Atten. Percept. Psychophys.</source>
<volume>73</volume>
,
<fpage>971</fpage>
<lpage>995</lpage>
.
<pub-id pub-id-type="doi">10.3758/s13414-010-0073-7</pub-id>
<pub-id pub-id-type="pmid">21264748</pub-id>
</mixed-citation>
</ref>
<ref id="B84">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Spence</surname>
<given-names>C.</given-names>
</name>
</person-group>
(
<year>2013</year>
).
<article-title>Just how important is spatial coincidence to multisensory integration? Evaluating the spatial rule</article-title>
.
<source>Ann. N. Y. Acad. Sci.</source>
<volume>1296</volume>
,
<fpage>31</fpage>
<lpage>49</lpage>
.
<pub-id pub-id-type="doi">10.1111/nyas.12121</pub-id>
<pub-id pub-id-type="pmid">23710729</pub-id>
</mixed-citation>
</ref>
<ref id="B85">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Spence</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Driver</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>1997</year>
).
<article-title>Audiovisual links in exogenous covert spatial orienting</article-title>
.
<source>Percept. Psychophys.</source>
<volume>59</volume>
,
<fpage>1</fpage>
<lpage>22</lpage>
.
<pub-id pub-id-type="doi">10.3758/BF03206843</pub-id>
<pub-id pub-id-type="pmid">9038403</pub-id>
</mixed-citation>
</ref>
<ref id="B86">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Stein</surname>
<given-names>B. E.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<source>The New Handbook of Multisensory Processing.</source>
<publisher-loc>Cambridge, MA</publisher-loc>
:
<publisher-name>The MIT Press</publisher-name>
.</mixed-citation>
</ref>
<ref id="B87">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Stein</surname>
<given-names>B. E.</given-names>
</name>
<name>
<surname>Meredith</surname>
<given-names>M. A.</given-names>
</name>
</person-group>
(
<year>1993</year>
).
<source>The Merging of the Senses</source>
.
<publisher-loc>Cambridge, MA</publisher-loc>
:
<publisher-name>The MIT Press</publisher-name>
.</mixed-citation>
</ref>
<ref id="B88">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Stein</surname>
<given-names>B. E.</given-names>
</name>
<name>
<surname>Stanford</surname>
<given-names>T. R.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>Multisensory integration: current issues from the perspective of the single neuron</article-title>
.
<source>Nat. Rev. Neurosci.</source>
<volume>9</volume>
,
<fpage>255</fpage>
<lpage>266</lpage>
.
<pub-id pub-id-type="doi">10.1038/nrn2331</pub-id>
<pub-id pub-id-type="pmid">18354398</pub-id>
</mixed-citation>
</ref>
<ref id="B89">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Stekelenburg</surname>
<given-names>J. J.</given-names>
</name>
<name>
<surname>Vroomen</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>Neural correlates of audiovisual motion capture</article-title>
.
<source>Exp. Brain Res.</source>
<volume>198</volume>
,
<fpage>383</fpage>
<lpage>390</lpage>
.
<pub-id pub-id-type="doi">10.1007/s00221-009-1763-z</pub-id>
<pub-id pub-id-type="pmid">19296094</pub-id>
</mixed-citation>
</ref>
<ref id="B90">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sugita</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Suzuki</surname>
<given-names>Y.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>Audiovisual perception: implicit estimation of sound-arrival time</article-title>
.
<source>Nature</source>
<volume>421</volume>
,
<fpage>911</fpage>
.
<pub-id pub-id-type="doi">10.1038/421911a</pub-id>
<pub-id pub-id-type="pmid">12606990</pub-id>
</mixed-citation>
</ref>
<ref id="B91">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tajadura-Jiménez</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Väljamäe</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Toshima</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Kimura</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Tsakiris</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Kitagawa</surname>
<given-names>N.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>Action sounds recalibrate perceived tactile distance</article-title>
.
<source>Curr. Biol.</source>
<volume>22</volume>
,
<fpage>R516</fpage>
<lpage>R517</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.cub.2012.04.028</pub-id>
<pub-id pub-id-type="pmid">22789996</pub-id>
</mixed-citation>
</ref>
<ref id="B92">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Teramoto</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Hidaka</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Sugita</surname>
<given-names>Y.</given-names>
</name>
</person-group>
(
<year>2010a</year>
).
<article-title>Sounds move a static visual object</article-title>
.
<source>PLoS ONE</source>
<volume>5</volume>
:
<fpage>e12255</fpage>
.
<pub-id pub-id-type="doi">10.1371/journal.pone.0012255</pub-id>
<pub-id pub-id-type="pmid">20808861</pub-id>
</mixed-citation>
</ref>
<ref id="B93">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Teramoto</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Kobayashi</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Hidaka</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Sugita</surname>
<given-names>Y.</given-names>
</name>
</person-group>
(
<year>2013</year>
).
<article-title>Vision contingent auditory pitch aftereffects</article-title>
.
<source>Exp. Brain Res.</source>
<volume>229</volume>
,
<fpage>97</fpage>
<lpage>102</lpage>
.
<pub-id pub-id-type="doi">10.1007/s00221-013-3596-z</pub-id>
<pub-id pub-id-type="pmid">23727883</pub-id>
</mixed-citation>
</ref>
<ref id="B94">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Teramoto</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Manaka</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Hidaka</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Sugita</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Miyauchi</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Sakamoto</surname>
<given-names>S.</given-names>
</name>
<etal></etal>
</person-group>
. (
<year>2010b</year>
).
<article-title>Visual motion perception induced by sounds in vertical plane</article-title>
.
<source>Neurosci. Lett.</source>
<volume>479</volume>
,
<fpage>221</fpage>
<lpage>225</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.neulet.2010.05.065</pub-id>
<pub-id pub-id-type="pmid">20639000</pub-id>
</mixed-citation>
</ref>
<ref id="B95">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Teramoto</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Sakamoto</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Furune</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Gyoba</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Suzuki</surname>
<given-names>Y.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>Compression of auditory space during forward self-motion</article-title>
.
<source>PLoS ONE</source>
<volume>7</volume>
:
<fpage>e39402</fpage>
.
<pub-id pub-id-type="doi">10.1371/journal.pone.0039402</pub-id>
<pub-id pub-id-type="pmid">22768076</pub-id>
</mixed-citation>
</ref>
<ref id="B96">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>van Dam</surname>
<given-names>L. C.</given-names>
</name>
<name>
<surname>Parise</surname>
<given-names>C. V.</given-names>
</name>
<name>
<surname>Ernst</surname>
<given-names>M. O.</given-names>
</name>
</person-group>
(
<year>2014</year>
).
<article-title>Modeling multisensory integration</article-title>
, in
<source>Sensory Integration and the Unity of Consciousness</source>
, eds
<person-group person-group-type="editor">
<name>
<surname>Bennett</surname>
<given-names>D. J.</given-names>
</name>
<name>
<surname>Hill</surname>
<given-names>C. S.</given-names>
</name>
</person-group>
(
<publisher-loc>Cambridge, MA</publisher-loc>
:
<publisher-name>The MIT Press</publisher-name>
),
<fpage>209</fpage>
<lpage>229</lpage>
.</mixed-citation>
</ref>
<ref id="B97">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Van der Stoep</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Nijboer</surname>
<given-names>T. C. W.</given-names>
</name>
<name>
<surname>van der Stigchel</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Spence</surname>
<given-names>C.</given-names>
</name>
</person-group>
(
<year>2015</year>
).
<article-title>Multisensory interactions in the depth plane in front and rear space: a review</article-title>
.
<source>Neuropsychologia</source>
<volume>70</volume>
,
<fpage>335</fpage>
<lpage>349</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2014.12.007</pub-id>
<pub-id pub-id-type="pmid">25498407</pub-id>
</mixed-citation>
</ref>
<ref id="B98">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>van Kemenade</surname>
<given-names>B. M.</given-names>
</name>
<name>
<surname>Seymour</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Wacker</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Spitzer</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Blankenburg</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Sterzer</surname>
<given-names>P.</given-names>
</name>
</person-group>
(
<year>2014</year>
).
<article-title>Tactile and visual motion direction processing in hMT+/V5</article-title>
.
<source>Neuroimage</source>
<volume>84</volume>
,
<fpage>420</fpage>
<lpage>427</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.neuroimage.2013.09.004</pub-id>
<pub-id pub-id-type="pmid">24036354</pub-id>
</mixed-citation>
</ref>
<ref id="B99">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Watanabe</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Shimojo</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>2001</year>
).
<article-title>When sound affects vision: effects of auditory grouping on visual motion perception</article-title>
.
<source>Psychol. Sci.</source>
<volume>12</volume>
,
<fpage>109</fpage>
<lpage>116</lpage>
.
<pub-id pub-id-type="doi">10.1111/1467-9280.00319</pub-id>
<pub-id pub-id-type="pmid">11340918</pub-id>
</mixed-citation>
</ref>
<ref id="B100">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Welch</surname>
<given-names>R. B.</given-names>
</name>
<name>
<surname>Warren</surname>
<given-names>D. H.</given-names>
</name>
</person-group>
(
<year>1980</year>
).
<article-title>Immediate perceptual response to intersensory discrepancy</article-title>
.
<source>Psychol. Bull.</source>
<volume>88</volume>
,
<fpage>638</fpage>
<lpage>667</lpage>
.
<pub-id pub-id-type="doi">10.1037/0033-2909.88.3.638</pub-id>
<pub-id pub-id-type="pmid">7003641</pub-id>
</mixed-citation>
</ref>
<ref id="B101">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Welch</surname>
<given-names>R. B.</given-names>
</name>
<name>
<surname>Warren</surname>
<given-names>D. H.</given-names>
</name>
</person-group>
(
<year>1986</year>
).
<article-title>Intersensory interactions</article-title>
, in
<source>Handbook of Perception and Human Performance</source>
, eds
<person-group person-group-type="editor">
<name>
<surname>Boff</surname>
<given-names>K. R.</given-names>
</name>
<name>
<surname>Kaufman</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Thomas</surname>
<given-names>J. P.</given-names>
</name>
</person-group>
(
<publisher-loc>New York, NY</publisher-loc>
:
<publisher-name>Wiley</publisher-name>
),
<fpage>25.1</fpage>
<lpage>25.36</lpage>
.</mixed-citation>
</ref>
<ref id="B102">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Whitney</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Cavanagh</surname>
<given-names>P.</given-names>
</name>
</person-group>
(
<year>2000</year>
).
<article-title>Motion distorts visual space: shifting the perceived position of remote stationary objects</article-title>
.
<source>Nat. Neurosci.</source>
<volume>3</volume>
,
<fpage>954</fpage>
<lpage>959</lpage>
.
<pub-id pub-id-type="doi">10.1038/78878</pub-id>
<pub-id pub-id-type="pmid">10966628</pub-id>
</mixed-citation>
</ref>
<ref id="B103">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Williams</surname>
<given-names>D. W.</given-names>
</name>
<name>
<surname>Sekuler</surname>
<given-names>R.</given-names>
</name>
</person-group>
(
<year>1984</year>
).
<article-title>Coherent global motion percepts from stochastic local motions</article-title>
.
<source>Vision Res.</source>
<volume>24</volume>
,
<fpage>55</fpage>
<lpage>62</lpage>
.
<pub-id pub-id-type="doi">10.1016/0042-6989(84)90144-5</pub-id>
<pub-id pub-id-type="pmid">6695508</pub-id>
</mixed-citation>
</ref>
<ref id="B104">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wuerger</surname>
<given-names>S. M.</given-names>
</name>
<name>
<surname>Hofbauer</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Meyer</surname>
<given-names>G. F.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>The integration of auditory and visual motion signals at threshold</article-title>
.
<source>Percept. Psychophys.</source>
<volume>65</volume>
,
<fpage>1188</fpage>
<lpage>1196</lpage>
.
<pub-id pub-id-type="doi">10.3758/BF03194844</pub-id>
<pub-id pub-id-type="pmid">14710954</pub-id>
</mixed-citation>
</ref>
<ref id="B105">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Yamamoto</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Kitazawa</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>2001</year>
).
<article-title>Reversal of subjective temporal order due to arm crossing</article-title>
.
<source>Nat. Neurosci.</source>
<volume>4</volume>
,
<fpage>759</fpage>
<lpage>765</lpage>
.
<pub-id pub-id-type="doi">10.1038/89559</pub-id>
<pub-id pub-id-type="pmid">11426234</pub-id>
</mixed-citation>
</ref>
<ref id="B106">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zihl</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>von Cramon</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Mai</surname>
<given-names>N.</given-names>
</name>
</person-group>
(
<year>1983</year>
).
<article-title>Selective disturbance of movement vision after bilateral brain damage</article-title>
.
<source>Brain</source>
<volume>106</volume>
,
<fpage>313</fpage>
<lpage>340</lpage>
.
<pub-id pub-id-type="doi">10.1093/brain/106.2.313</pub-id>
<pub-id pub-id-type="pmid">6850272</pub-id>
</mixed-citation>
</ref>
<ref id="B107">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zihl</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>von Cramon</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Mai</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Schmid</surname>
<given-names>C. H.</given-names>
</name>
</person-group>
(
<year>1991</year>
).
<article-title>Disturbance of movement vision after bilateral posterior brain damage</article-title>
.
<source>Brain</source>
<volume>114</volume>
,
<fpage>2235</fpage>
<lpage>2252</lpage>
.
<pub-id pub-id-type="doi">10.1093/brain/114.5.2235</pub-id>
<pub-id pub-id-type="pmid">1933243</pub-id>
</mixed-citation>
</ref>
<ref id="B108">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zvyagintsev</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Nikolaev</surname>
<given-names>A. R.</given-names>
</name>
<name>
<surname>Thönnessen</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Sachs</surname>
<given-names>O.</given-names>
</name>
<name>
<surname>Dammers</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Mathiak</surname>
<given-names>K.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>Spatially congruent visual motion modulates activity of the primary auditory cortex</article-title>
.
<source>Exp. Brain Res.</source>
<volume>198</volume>
,
<fpage>391</fpage>
<lpage>402</lpage>
.
<pub-id pub-id-type="doi">10.1007/s00221-009-1830-5</pub-id>
<pub-id pub-id-type="pmid">19449155</pub-id>
</mixed-citation>
</ref>
</ref-list>
</back>
</pmc>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Pmc/Curation
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000413 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Pmc/Curation/biblio.hfd -nk 000413 | SxmlIndent | more
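
As a minimal sketch building on the commands above, the record's raw XML can also be piped through standard Unix tools; for example, to list every DOI cited in this record:

# Sketch: reuses the HfdSelect call shown above, adding only standard
# grep/sed filtering to pull out the DOI identifiers.
HfdSelect -h $EXPLOR_AREA/Data/Pmc/Curation/biblio.hfd -nk 000413 \
       | grep -o '<pub-id pub-id-type="doi">[^<]*</pub-id>' \
       | sed -e 's/<[^>]*>//g'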

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Pmc
   |étape=   Curation
   |type=    RBID
   |clé=     PMC:4686600
   |texte=   Spatiotemporal Processing in Crossmodal Interactions for Perception of the External World: A Review
}}
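
The field values above mirror this record's own metadata: flux and étape name the Pmc/Curation exploration step, clé carries the record's RBID (PMC:4686600), and texte its title.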

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Pmc/Curation/RBID.i   -Sk "pubmed:26733827" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Pmc/Curation/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 
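
To keep a copy of the generated page for offline inspection, the same pipeline can simply be redirected to a file (a minimal sketch; the output filename is arbitrary):

# Sketch: identical pipeline to the one above, adding only a shell redirection.
HfdIndexSelect -h $EXPLOR_AREA/Data/Pmc/Curation/RBID.i   -Sk "pubmed:26733827" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Pmc/Curation/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 > 000413.wiki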

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024