Exploration server on haptic devices

Warning: this site is under development!
Warning: this site was generated automatically from raw corpora.
The information has therefore not been validated.

Cross-Modal Integration of Lexical-Semantic Features during Word Processing: Evidence from Oscillatory Dynamics during EEG

Internal identifier: 003131 (Ncbi/Merge); previous: 003130; next: 003132

Cross-Modal Integration of Lexical-Semantic Features during Word Processing: Evidence from Oscillatory Dynamics during EEG

Authors: Markus J. Van Ackeren; Shirley-Ann Rueschemeyer

Source :

RBID : PMC:4090000

Abstract

In recent years, numerous studies have provided converging evidence that word meaning is partially stored in modality-specific cortical networks. However, little is known about the mechanisms supporting the integration of this distributed semantic content into coherent conceptual representations. In the current study we aimed to address this issue by using EEG to look at the spatial and temporal dynamics of feature integration during word comprehension. Specifically, participants were presented with two modality-specific features (i.e., visual or auditory features such as silver and loud) and asked to verify whether these two features were compatible with a subsequently presented target word (e.g., WHISTLE). Each pair of features described properties from either the same modality (e.g., silver, tiny = visual features) or different modalities (e.g., silver, loud = visual, auditory). Behavioral and EEG data were collected. The results show that verifying features that are putatively represented in the same modality-specific network is faster than verifying features across modalities. At the neural level, integrating features across modalities induces sustained oscillatory activity around the theta range (4–6 Hz) in left anterior temporal lobe (ATL), a putative hub for integrating distributed semantic content. In addition, enhanced long-range network interactions in the theta range were seen between left ATL and a widespread cortical network. These results suggest that oscillatory dynamics in the theta range could be involved in integrating multimodal semantic content by creating transient functional networks linking distributed modality-specific networks and multimodal semantic hubs such as left ATL.
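As an illustrative aside (this is not the authors' analysis pipeline, which the abstract does not detail): sustained theta-range (4–6 Hz) power of the kind reported here is commonly estimated by bandpass-filtering the EEG signal and taking the amplitude envelope of the analytic signal. A minimal sketch on a simulated channel, with an assumed 250 Hz sampling rate:

```python
# Illustrative sketch only: theta-band (4-6 Hz) power from a simulated EEG
# channel via Butterworth bandpass + Hilbert envelope. Sampling rate, trial
# length, and the synthetic signal are all assumptions for the demo.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250                              # sampling rate in Hz (assumed)
t = np.arange(0, 2.0, 1 / fs)         # one 2-second epoch
rng = np.random.default_rng(0)
# synthetic signal: a 5 Hz theta component buried in broadband noise
eeg = np.sin(2 * np.pi * 5 * t) + 0.5 * rng.standard_normal(t.size)

# 4th-order Butterworth bandpass over the theta range reported (4-6 Hz)
b, a = butter(4, [4 / (fs / 2), 6 / (fs / 2)], btype="band")
theta = filtfilt(b, a, eeg)           # zero-phase filtering

# instantaneous amplitude envelope from the analytic signal
envelope = np.abs(hilbert(theta))
theta_power = float(np.mean(envelope ** 2))
print(round(theta_power, 3))
```

The envelope-squared mean recovers the power of the 5 Hz component; in practice such estimates are computed per trial and per source location and contrasted across conditions.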


URL:
DOI: 10.1371/journal.pone.0101042
PubMed: 25007074
PubMed Central: 4090000

Links to previous steps (curation, corpus, ...)


Links to Exploration step

PMC:4090000

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Cross-Modal Integration of Lexical-Semantic Features during Word Processing: Evidence from Oscillatory Dynamics during EEG</title>
<author>
<name sortKey="Van Ackeren, Markus J" sort="Van Ackeren, Markus J" uniqKey="Van Ackeren M" first="Markus J." last="Van Ackeren">Markus J. Van Ackeren</name>
<affiliation>
<nlm:aff id="aff1"></nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Rueschemeyer, Shirley Ann" sort="Rueschemeyer, Shirley Ann" uniqKey="Rueschemeyer S" first="Shirley-Ann" last="Rueschemeyer">Shirley-Ann Rueschemeyer</name>
<affiliation>
<nlm:aff id="aff1"></nlm:aff>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">25007074</idno>
<idno type="pmc">4090000</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4090000</idno>
<idno type="RBID">PMC:4090000</idno>
<idno type="doi">10.1371/journal.pone.0101042</idno>
<date when="2014">2014</date>
<idno type="wicri:Area/Pmc/Corpus">002363</idno>
<idno type="wicri:Area/Pmc/Curation">002363</idno>
<idno type="wicri:Area/Pmc/Checkpoint">000D32</idno>
<idno type="wicri:Area/Ncbi/Merge">003131</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Cross-Modal Integration of Lexical-Semantic Features during Word Processing: Evidence from Oscillatory Dynamics during EEG</title>
<author>
<name sortKey="Van Ackeren, Markus J" sort="Van Ackeren, Markus J" uniqKey="Van Ackeren M" first="Markus J." last="Van Ackeren">Markus J. Van Ackeren</name>
<affiliation>
<nlm:aff id="aff1"></nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Rueschemeyer, Shirley Ann" sort="Rueschemeyer, Shirley Ann" uniqKey="Rueschemeyer S" first="Shirley-Ann" last="Rueschemeyer">Shirley-Ann Rueschemeyer</name>
<affiliation>
<nlm:aff id="aff1"></nlm:aff>
</affiliation>
</author>
</analytic>
<series>
<title level="j">PLoS ONE</title>
<idno type="eISSN">1932-6203</idno>
<imprint>
<date when="2014">2014</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>In recent years, numerous studies have provided converging evidence that word meaning is partially stored in modality-specific cortical networks. However, little is known about the mechanisms supporting the integration of this distributed semantic content into coherent conceptual representations. In the current study we aimed to address this issue by using EEG to look at the spatial and temporal dynamics of feature integration during word comprehension. Specifically, participants were presented with two modality-specific features (i.e., visual or auditory features such as
<italic>silver</italic>
and
<italic>loud</italic>
) and asked to verify whether these two features were compatible with a subsequently presented target word (e.g.,
<italic>WHISTLE</italic>
). Each pair of features described properties from either the same modality (e.g.,
<italic>silver, tiny</italic>
 =  visual features) or different modalities (e.g.,
<italic>silver, loud</italic>
 =  visual, auditory). Behavioral and EEG data were collected. The results show that verifying features that are putatively represented in the same modality-specific network is faster than verifying features across modalities. At the neural level, integrating features across modalities induces sustained oscillatory activity around the theta range (4–6 Hz) in left anterior temporal lobe (ATL), a putative hub for integrating distributed semantic content. In addition, enhanced long-range network interactions in the theta range were seen between left ATL and a widespread cortical network. These results suggest that oscillatory dynamics in the theta range could be involved in integrating multimodal semantic content by creating transient functional networks linking distributed modality-specific networks and multimodal semantic hubs such as left ATL.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Barsalou, Lw" uniqKey="Barsalou L">LW Barsalou</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Binder, Jr" uniqKey="Binder J">JR Binder</name>
</author>
<author>
<name sortKey="Desai, Rh" uniqKey="Desai R">RH Desai</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Pulvermuller, F" uniqKey="Pulvermuller F">F Pulvermüller</name>
</author>
<author>
<name sortKey="Fadiga, L" uniqKey="Fadiga L">L Fadiga</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Vigliocco, G" uniqKey="Vigliocco G">G Vigliocco</name>
</author>
<author>
<name sortKey="Meteyard, L" uniqKey="Meteyard L">L Meteyard</name>
</author>
<author>
<name sortKey="Andrews, M" uniqKey="Andrews M">M Andrews</name>
</author>
<author>
<name sortKey="Kousta, S" uniqKey="Kousta S">S Kousta</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Simmons, Wk" uniqKey="Simmons W">WK Simmons</name>
</author>
<author>
<name sortKey="Ramjee, V" uniqKey="Ramjee V">V Ramjee</name>
</author>
<author>
<name sortKey="Beauchamp, Ms" uniqKey="Beauchamp M">MS Beauchamp</name>
</author>
<author>
<name sortKey="Mcrae, K" uniqKey="Mcrae K">K McRae</name>
</author>
<author>
<name sortKey="Martin, A" uniqKey="Martin A">A Martin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hauk, O" uniqKey="Hauk O">O Hauk</name>
</author>
<author>
<name sortKey="Johnsrude, I" uniqKey="Johnsrude I">I Johnsrude</name>
</author>
<author>
<name sortKey="Pulvermuller, F" uniqKey="Pulvermuller F">F Pulvermüller</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gonzalez, J" uniqKey="Gonzalez J">J González</name>
</author>
<author>
<name sortKey="Barros Loscertales, A" uniqKey="Barros Loscertales A">A Barros-Loscertales</name>
</author>
<author>
<name sortKey="Pulvermuller, F" uniqKey="Pulvermuller F">F Pulvermüller</name>
</author>
<author>
<name sortKey="Meseguer, V" uniqKey="Meseguer V">V Meseguer</name>
</author>
<author>
<name sortKey="Sanjuan, A" uniqKey="Sanjuan A">A Sanjuán</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hoenig, K" uniqKey="Hoenig K">K Hoenig</name>
</author>
<author>
<name sortKey="Sim, E J" uniqKey="Sim E">E-J Sim</name>
</author>
<author>
<name sortKey="Bochev, V" uniqKey="Bochev V">V Bochev</name>
</author>
<author>
<name sortKey="Herrnberger, B" uniqKey="Herrnberger B">B Herrnberger</name>
</author>
<author>
<name sortKey="Kiefer, M" uniqKey="Kiefer M">M Kiefer</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Martin, A" uniqKey="Martin A">A Martin</name>
</author>
<author>
<name sortKey="Chao, Ll" uniqKey="Chao L">LL Chao</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Van Dam, Wo" uniqKey="Van Dam W">WO Van Dam</name>
</author>
<author>
<name sortKey="Van Dijk, M" uniqKey="Van Dijk M">M van Dijk</name>
</author>
<author>
<name sortKey="Bekkering, H" uniqKey="Bekkering H">H Bekkering</name>
</author>
<author>
<name sortKey="Rueschemeyer, S A" uniqKey="Rueschemeyer S">S-A Rueschemeyer</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Chatterjee, A" uniqKey="Chatterjee A">A Chatterjee</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hauk, O" uniqKey="Hauk O">O Hauk</name>
</author>
<author>
<name sortKey="Tschentscher, N" uniqKey="Tschentscher N">N Tschentscher</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schneider, Tr" uniqKey="Schneider T">TR Schneider</name>
</author>
<author>
<name sortKey="Debener, S" uniqKey="Debener S">S Debener</name>
</author>
<author>
<name sortKey="Oostenveld, R" uniqKey="Oostenveld R">R Oostenveld</name>
</author>
<author>
<name sortKey="Engel, Ak" uniqKey="Engel A">AK Engel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schneider, Tr" uniqKey="Schneider T">TR Schneider</name>
</author>
<author>
<name sortKey="Engel, Ak" uniqKey="Engel A">AK Engel</name>
</author>
<author>
<name sortKey="Debener, S" uniqKey="Debener S">S Debener</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schneider, Tr" uniqKey="Schneider T">TR Schneider</name>
</author>
<author>
<name sortKey="Lorenz, S" uniqKey="Lorenz S">S Lorenz</name>
</author>
<author>
<name sortKey="Senkowski, D" uniqKey="Senkowski D">D Senkowski</name>
</author>
<author>
<name sortKey="Engel, Ak" uniqKey="Engel A">AK Engel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Senkowski, D" uniqKey="Senkowski D">D Senkowski</name>
</author>
<author>
<name sortKey="Schneider, Tr" uniqKey="Schneider T">TR Schneider</name>
</author>
<author>
<name sortKey="Foxe, Jj" uniqKey="Foxe J">JJ Foxe</name>
</author>
<author>
<name sortKey="Engel, Ak" uniqKey="Engel A">AK Engel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Damasio, Ar" uniqKey="Damasio A">AR Damasio</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Patterson, K" uniqKey="Patterson K">K Patterson</name>
</author>
<author>
<name sortKey="Nestor, Pj" uniqKey="Nestor P">PJ Nestor</name>
</author>
<author>
<name sortKey="Rogers, Tt" uniqKey="Rogers T">TT Rogers</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Warrington, Ek" uniqKey="Warrington E">EK Warrington</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Donner, Th" uniqKey="Donner T">TH Donner</name>
</author>
<author>
<name sortKey="Siegel, M" uniqKey="Siegel M">M Siegel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Von Stein, A" uniqKey="Von Stein A">A Von Stein</name>
</author>
<author>
<name sortKey="Sarnthein, J" uniqKey="Sarnthein J">J Sarnthein</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bastiaansen, Mc" uniqKey="Bastiaansen M">MC Bastiaansen</name>
</author>
<author>
<name sortKey="Van Der Linden, M" uniqKey="Van Der Linden M">M van der Linden</name>
</author>
<author>
<name sortKey="Ter Keurs, M" uniqKey="Ter Keurs M">M Ter Keurs</name>
</author>
<author>
<name sortKey="Dijkstra, T" uniqKey="Dijkstra T">T Dijkstra</name>
</author>
<author>
<name sortKey="Hagoort, P" uniqKey="Hagoort P">P Hagoort</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bastiaansen, Mcm" uniqKey="Bastiaansen M">MCM Bastiaansen</name>
</author>
<author>
<name sortKey="Oostenveld, R" uniqKey="Oostenveld R">R Oostenveld</name>
</author>
<author>
<name sortKey="Jensen, O" uniqKey="Jensen O">O Jensen</name>
</author>
<author>
<name sortKey="Hagoort, P" uniqKey="Hagoort P">P Hagoort</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Klimesch, W" uniqKey="Klimesch W">W Klimesch</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Klimesch, W" uniqKey="Klimesch W">W Klimesch</name>
</author>
<author>
<name sortKey="Freunberger, R" uniqKey="Freunberger R">R Freunberger</name>
</author>
<author>
<name sortKey="Sauseng, P" uniqKey="Sauseng P">P Sauseng</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tallon Baudry, C" uniqKey="Tallon Baudry C">C Tallon-Baudry</name>
</author>
<author>
<name sortKey="Bertrand, O" uniqKey="Bertrand O">O Bertrand</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Van Der Werf, J" uniqKey="Van Der Werf J">J Van Der Werf</name>
</author>
<author>
<name sortKey="Jensen, O" uniqKey="Jensen O">O Jensen</name>
</author>
<author>
<name sortKey="Fries, P" uniqKey="Fries P">P Fries</name>
</author>
<author>
<name sortKey="Medendorp, Wp" uniqKey="Medendorp W">WP Medendorp</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Landauer, Tk" uniqKey="Landauer T">TK Landauer</name>
</author>
<author>
<name sortKey="Foltz, Pw" uniqKey="Foltz P">PW Foltz</name>
</author>
<author>
<name sortKey="Laham, D" uniqKey="Laham D">D Laham</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Van Dantzig, S" uniqKey="Van Dantzig S">S Van Dantzig</name>
</author>
<author>
<name sortKey="Cowell, Ra" uniqKey="Cowell R">RA Cowell</name>
</author>
<author>
<name sortKey="Zeelenberg, R" uniqKey="Zeelenberg R">R Zeelenberg</name>
</author>
<author>
<name sortKey="Pecher, D" uniqKey="Pecher D">D Pecher</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lynott, D" uniqKey="Lynott D">D Lynott</name>
</author>
<author>
<name sortKey="Connell, L" uniqKey="Connell L">L Connell</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Neuper, C" uniqKey="Neuper C">C Neuper</name>
</author>
<author>
<name sortKey="Wortz, M" uniqKey="Wortz M">M Wörtz</name>
</author>
<author>
<name sortKey="Pfurtscheller, G" uniqKey="Pfurtscheller G">G Pfurtscheller</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Oostenveld, R" uniqKey="Oostenveld R">R Oostenveld</name>
</author>
<author>
<name sortKey="Fries, P" uniqKey="Fries P">P Fries</name>
</author>
<author>
<name sortKey="Maris, E" uniqKey="Maris E">E Maris</name>
</author>
<author>
<name sortKey="Schoffelen, Jm" uniqKey="Schoffelen J">JM Schoffelen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Maris, E" uniqKey="Maris E">E Maris</name>
</author>
<author>
<name sortKey="Oostenveld, R" uniqKey="Oostenveld R">R Oostenveld</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Oostenveld, R" uniqKey="Oostenveld R">R Oostenveld</name>
</author>
<author>
<name sortKey="Stegeman, Df" uniqKey="Stegeman D">DF Stegeman</name>
</author>
<author>
<name sortKey="Praamstra, P" uniqKey="Praamstra P">P Praamstra</name>
</author>
<author>
<name sortKey="Van Oosterom, A" uniqKey="Van Oosterom A">A Van Oosterom</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nolte, G" uniqKey="Nolte G">G Nolte</name>
</author>
<author>
<name sortKey="Bai, O" uniqKey="Bai O">O Bai</name>
</author>
<author>
<name sortKey="Wheaton, L" uniqKey="Wheaton L">L Wheaton</name>
</author>
<author>
<name sortKey="Mari, Z" uniqKey="Mari Z">Z Mari</name>
</author>
<author>
<name sortKey="Vorbach, S" uniqKey="Vorbach S">S Vorbach</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Pecher, D" uniqKey="Pecher D">D Pecher</name>
</author>
<author>
<name sortKey="Zeelenberg, R" uniqKey="Zeelenberg R">R Zeelenberg</name>
</author>
<author>
<name sortKey="Barsalou, Lw" uniqKey="Barsalou L">LW Barsalou</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mcnorgan, C" uniqKey="Mcnorgan C">C McNorgan</name>
</author>
<author>
<name sortKey="Reid, J" uniqKey="Reid J">J Reid</name>
</author>
<author>
<name sortKey="Mcrae, K" uniqKey="Mcrae K">K McRae</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Plaut, Dc" uniqKey="Plaut D">DC Plaut</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Milner, Pm" uniqKey="Milner P">PM Milner</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Singer, W" uniqKey="Singer W">W Singer</name>
</author>
<author>
<name sortKey="Gray, Cm" uniqKey="Gray C">CM Gray</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Von Der Malsburg, C" uniqKey="Von Der Malsburg C">C Von der Malsburg</name>
</author>
<author>
<name sortKey="Schneider, W" uniqKey="Schneider W">W Schneider</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Jensen, O" uniqKey="Jensen O">O Jensen</name>
</author>
<author>
<name sortKey="Tesche, Cd" uniqKey="Tesche C">CD Tesche</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Raghavachari, S" uniqKey="Raghavachari S">S Raghavachari</name>
</author>
<author>
<name sortKey="Lisman, Je" uniqKey="Lisman J">JE Lisman</name>
</author>
<author>
<name sortKey="Tully, M" uniqKey="Tully M">M Tully</name>
</author>
<author>
<name sortKey="Madsen, Jr" uniqKey="Madsen J">JR Madsen</name>
</author>
<author>
<name sortKey="Bromfield, Eb" uniqKey="Bromfield E">EB Bromfield</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Summerfield, C" uniqKey="Summerfield C">C Summerfield</name>
</author>
<author>
<name sortKey="Mangels J, A" uniqKey="Mangels J A">JA Mangels</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wu, X" uniqKey="Wu X">X Wu</name>
</author>
<author>
<name sortKey="Chen, X" uniqKey="Chen X">X Chen</name>
</author>
<author>
<name sortKey="Li, Z" uniqKey="Li Z">Z Li</name>
</author>
<author>
<name sortKey="Han, S" uniqKey="Han S">S Han</name>
</author>
<author>
<name sortKey="Zhang, D" uniqKey="Zhang D">D Zhang</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bastiaansen, Mcm" uniqKey="Bastiaansen M">MCM Bastiaansen</name>
</author>
<author>
<name sortKey="Van Berkum Jj, A" uniqKey="Van Berkum Jj A">JJA van Berkum</name>
</author>
<author>
<name sortKey="Hagoort, P" uniqKey="Hagoort P">P Hagoort</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kutas, M" uniqKey="Kutas M">M Kutas</name>
</author>
<author>
<name sortKey="Federmeier, K" uniqKey="Federmeier K">K Federmeier</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sober, Sj" uniqKey="Sober S">SJ Sober</name>
</author>
<author>
<name sortKey="Sabes, Pn" uniqKey="Sabes P">PN Sabes</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rosenblum, Ld" uniqKey="Rosenblum L">LD Rosenblum</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hald L, A" uniqKey="Hald L A">LA Hald</name>
</author>
<author>
<name sortKey="Bastiaansen, Mcm" uniqKey="Bastiaansen M">MCM Bastiaansen</name>
</author>
<author>
<name sortKey="Hagoort, P" uniqKey="Hagoort P">P Hagoort</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wang, L" uniqKey="Wang L">L Wang</name>
</author>
<author>
<name sortKey="Jensen, O" uniqKey="Jensen O">O Jensen</name>
</author>
<author>
<name sortKey="Van Den Brink, D" uniqKey="Van Den Brink D">D van den Brink</name>
</author>
<author>
<name sortKey="Weder, N" uniqKey="Weder N">N Weder</name>
</author>
<author>
<name sortKey="Schoffelen, J M" uniqKey="Schoffelen J">J-M Schoffelen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hagoort, P" uniqKey="Hagoort P">P Hagoort</name>
</author>
<author>
<name sortKey="Hald, L" uniqKey="Hald L">L Hald</name>
</author>
<author>
<name sortKey="Bastiaansen, M" uniqKey="Bastiaansen M">M Bastiaansen</name>
</author>
<author>
<name sortKey="Petersson, Km" uniqKey="Petersson K">KM Petersson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Raposo, A" uniqKey="Raposo A">A Raposo</name>
</author>
<author>
<name sortKey="Moss, He" uniqKey="Moss H">HE Moss</name>
</author>
<author>
<name sortKey="Stamatakis, Ea" uniqKey="Stamatakis E">EA Stamatakis</name>
</author>
<author>
<name sortKey="Tyler, Lk" uniqKey="Tyler L">LK Tyler</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Boulenger, V" uniqKey="Boulenger V">V Boulenger</name>
</author>
<author>
<name sortKey="Hauk, O" uniqKey="Hauk O">O Hauk</name>
</author>
<author>
<name sortKey="Pulvermuller, F" uniqKey="Pulvermuller F">F Pulvermüller</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Van Ackeren, Mj" uniqKey="Van Ackeren M">MJ Van Ackeren</name>
</author>
<author>
<name sortKey="Casasanto, D" uniqKey="Casasanto D">D Casasanto</name>
</author>
<author>
<name sortKey="Bekkering, H" uniqKey="Bekkering H">H Bekkering</name>
</author>
<author>
<name sortKey="Hagoort, P" uniqKey="Hagoort P">P Hagoort</name>
</author>
<author>
<name sortKey="Rueschemeyer, S A" uniqKey="Rueschemeyer S">S-A Rueschemeyer</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kiefer, M" uniqKey="Kiefer M">M Kiefer</name>
</author>
<author>
<name sortKey="Sim, E J" uniqKey="Sim E">E-J Sim</name>
</author>
<author>
<name sortKey="Herrnberger, B" uniqKey="Herrnberger B">B Herrnberger</name>
</author>
<author>
<name sortKey="Grothe, J" uniqKey="Grothe J">J Grothe</name>
</author>
<author>
<name sortKey="Hoenig, K" uniqKey="Hoenig K">K Hoenig</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">PLoS One</journal-id>
<journal-id journal-id-type="iso-abbrev">PLoS ONE</journal-id>
<journal-id journal-id-type="publisher-id">plos</journal-id>
<journal-id journal-id-type="pmc">plosone</journal-id>
<journal-title-group>
<journal-title>PLoS ONE</journal-title>
</journal-title-group>
<issn pub-type="epub">1932-6203</issn>
<publisher>
<publisher-name>Public Library of Science</publisher-name>
<publisher-loc>San Francisco, USA</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">25007074</article-id>
<article-id pub-id-type="pmc">4090000</article-id>
<article-id pub-id-type="publisher-id">PONE-D-13-53015</article-id>
<article-id pub-id-type="doi">10.1371/journal.pone.0101042</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Research Article</subject>
</subj-group>
<subj-group subj-group-type="Discipline-v2">
<subject>Biology and Life Sciences</subject>
<subj-group>
<subject>Neuroscience</subject>
<subj-group>
<subject>Cognitive Science</subject>
<subj-group>
<subject>Cognitive Psychology</subject>
<subj-group>
<subject>Learning</subject>
</subj-group>
</subj-group>
</subj-group>
<subj-group>
<subject>Behavioral Neuroscience</subject>
<subject>Cognitive Neuroscience</subject>
<subject>Learning and Memory</subject>
<subject>Neurolinguistics</subject>
</subj-group>
</subj-group>
<subj-group>
<subject>Psychology</subject>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v2">
<subject>Medicine and Health Sciences</subject>
<subj-group>
<subject>Diagnostic Medicine</subject>
<subj-group>
<subject>Clinical Neurophysiology</subject>
</subj-group>
</subj-group>
<subj-group>
<subject>Mental Health and Psychiatry</subject>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v2">
<subject>Social Sciences</subject>
<subj-group>
<subject>Linguistics</subject>
<subj-group>
<subject>Languages</subject>
<subj-group>
<subject>Natural Language</subject>
</subj-group>
</subj-group>
<subj-group>
<subject>Psycholinguistics</subject>
<subject>Semantics</subject>
</subj-group>
</subj-group>
<subj-group>
<subject>Sociology</subject>
<subj-group>
<subject>Communications</subject>
</subj-group>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Cross-Modal Integration of Lexical-Semantic Features during Word Processing: Evidence from Oscillatory Dynamics during EEG</article-title>
<alt-title alt-title-type="running-head">Cross-Modal Integration of Semantic Features</alt-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>van Ackeren</surname>
<given-names>Markus J.</given-names>
</name>
<xref ref-type="aff" rid="aff1"></xref>
<xref ref-type="corresp" rid="cor1">
<sup>*</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Rueschemeyer</surname>
<given-names>Shirley-Ann</given-names>
</name>
<xref ref-type="aff" rid="aff1"></xref>
</contrib>
</contrib-group>
<aff id="aff1">
<addr-line>Department of Psychology, University of York, York, United Kingdom</addr-line>
</aff>
<contrib-group>
<contrib contrib-type="editor">
<name>
<surname>Bolhuis</surname>
<given-names>Johan J.</given-names>
</name>
<role>Editor</role>
<xref ref-type="aff" rid="edit1"></xref>
</contrib>
</contrib-group>
<aff id="edit1">
<addr-line>Utrecht University, Netherlands</addr-line>
</aff>
<author-notes>
<corresp id="cor1">* E-mail:
<email>mjva500@york.ac.uk</email>
</corresp>
<fn fn-type="conflict">
<p>
<bold>Competing Interests: </bold>
The authors have declared that no competing interests exist.</p>
</fn>
<fn fn-type="con">
<p>Conceived and designed the experiments: MVA SAR. Performed the experiments: MVA. Analyzed the data: MVA. Contributed reagents/materials/analysis tools: MVA. Wrote the paper: MVA SAR.</p>
</fn>
</author-notes>
<pub-date pub-type="collection">
<year>2014</year>
</pub-date>
<pub-date pub-type="epub">
<day>9</day>
<month>7</month>
<year>2014</year>
</pub-date>
<volume>9</volume>
<issue>7</issue>
<elocation-id>e101042</elocation-id>
<history>
<date date-type="received">
<day>18</day>
<month>12</month>
<year>2013</year>
</date>
<date date-type="accepted">
<day>3</day>
<month>6</month>
<year>2014</year>
</date>
</history>
<permissions>
<copyright-year>2014</copyright-year>
<copyright-holder>van Ackeren, Rueschemeyer</copyright-holder>
<license>
<license-p>This is an open-access article distributed under the terms of the
<ext-link ext-link-type="uri" xlink:href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution License</ext-link>
, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.</license-p>
</license>
</permissions>
<abstract>
<p>In recent years, numerous studies have provided converging evidence that word meaning is partially stored in modality-specific cortical networks. However, little is known about the mechanisms supporting the integration of this distributed semantic content into coherent conceptual representations. In the current study we aimed to address this issue by using EEG to look at the spatial and temporal dynamics of feature integration during word comprehension. Specifically, participants were presented with two modality-specific features (i.e., visual or auditory features such as
<italic>silver</italic>
and
<italic>loud</italic>
) and asked to verify whether these two features were compatible with a subsequently presented target word (e.g.,
<italic>WHISTLE</italic>
). Each pair of features described properties from either the same modality (e.g.,
<italic>silver, tiny</italic>
 =  visual features) or different modalities (e.g.,
<italic>silver, loud</italic>
 =  visual, auditory). Behavioral and EEG data were collected. The results show that verifying features that are putatively represented in the same modality-specific network is faster than verifying features across modalities. At the neural level, integrating features across modalities induces sustained oscillatory activity around the theta range (4–6 Hz) in left anterior temporal lobe (ATL), a putative hub for integrating distributed semantic content. In addition, enhanced long-range network interactions in the theta range were seen between left ATL and a widespread cortical network. These results suggest that oscillatory dynamics in the theta range could be involved in integrating multimodal semantic content by creating transient functional networks linking distributed modality-specific networks and multimodal semantic hubs such as left ATL.</p>
</abstract>
<funding-group>
<funding-statement>This work was funded by the Department of Psychology at the University of York. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.</funding-statement>
</funding-group>
<counts>
<page-count count="10"></page-count>
</counts>
</article-meta>
</front>
<body>
<sec sec-type="intro" id="s1">
<title>Introduction</title>
<p>The embodied framework of language suggests that lexical-semantic knowledge (i.e., word meaning) is stored in part in modality-specific networks that are distributed across the cortex
<xref rid="pone.0101042-Barsalou1" ref-type="bibr">[1]</xref>
<xref rid="pone.0101042-Vigliocco1" ref-type="bibr">[4]</xref>
. For example, words denoting colors (e.g.,
<italic>red, green</italic>
) have been shown to engage parts of the ventral visual stream
<xref rid="pone.0101042-Simmons1" ref-type="bibr">[5]</xref>
, while words denoting actions (e.g.,
<italic>kick</italic>
,
<italic>pick</italic>
) engage the dorsal motor network
<xref rid="pone.0101042-Hauk1" ref-type="bibr">[6]</xref>
. In recent years, much has been done to understand the automaticity, flexibility and reliability of the link between action/perception and word meaning
<xref rid="pone.0101042-Simmons1" ref-type="bibr">[5]</xref>
,
<xref rid="pone.0101042-Gonzlez1" ref-type="bibr">[7]</xref>
<xref rid="pone.0101042-VanDam1" ref-type="bibr">[10]</xref>
. The current study extends this body of literature by addressing the question of how distributed lexical-semantic features are
<italic>integrated</italic>
during word comprehension.</p>
<p>Although ample evidence for the link between word meaning and perception/action systems exists, the bulk of research in this field has reduced lexical-semantic information to one dominant modality (e.g., vision for
<italic>red</italic>
and action for
<italic>kick</italic>
). The motivation for focusing on single modalities is clearly methodological: by focusing on words with a clear association to one modality, good hypotheses can be generated for testing empirically. However, words clearly refer to items that are experienced through multiple modalities in the real world (e.g., a football is associated with both a specific visual form and a specific action), and embodied accounts of language have done little to address how multimodal information interacts during the processing of word meaning. The one exception to this rule has been the attempt to understand how lexical-semantic processing can be focused flexibly on information from one modality versus another. For example, van Dam and colleagues
<xref rid="pone.0101042-VanDam1" ref-type="bibr">[10]</xref>
demonstrated that words denoting objects that are strongly associated with both action and visual information (e.g.,
<italic>tennis ball</italic>
) reliably activate both motor and visual pathways in the cortex. Interestingly, motor pathways also responded more strongly when participants were asked to indicate what to do with the object rather than what it looks like. Likewise, Hoenig and colleagues
<xref rid="pone.0101042-Hoenig1" ref-type="bibr">[8]</xref>
have shown that even for objects with dominant modality-specific features (e.g., actions for artifacts), the pattern of activation in visual and motor networks is differentially modulated if a dominant (action) or non-dominant (visual) feature is primed. Notably, modality-specific networks show a stronger response to the target if the prime was not a dominant feature. Taken together, the studies by van Dam et al.
<xref rid="pone.0101042-VanDam1" ref-type="bibr">[10]</xref>
and Hoenig et al.
<xref rid="pone.0101042-Hoenig1" ref-type="bibr">[8]</xref>
suggest that word meaning is partially stored in a network of areas that are recruited in a modality-specific and flexible way. However, it should also be pointed out that most of this evidence is of a correlational nature. As yet, little is known about the causal role of modality-specific networks in lexical-semantic processing, and how they are related to more abstract semantic knowledge
<xref rid="pone.0101042-Chatterjee1" ref-type="bibr">[11]</xref>
,
<xref rid="pone.0101042-Hauk2" ref-type="bibr">[12]</xref>
.</p>
<p>While studies highlighting the flexible recruitment of different types of modality-specific information confirm that single words are associated with multiple types of perceptual experience, it is still unknown how information from multiple sources in the brain (e.g., visual and action features) is united to form a coherent concept that is both visual and motoric. Cross-modal integration has been studied extensively with respect to object perception
<xref rid="pone.0101042-Schneider1" ref-type="bibr">[13]</xref>
<xref rid="pone.0101042-Senkowski1" ref-type="bibr">[16]</xref>
. However, its role in forming lexical-semantic representations has been largely neglected, even within the embodied framework. Several theoretical perspectives have argued for the existence of amodal integration ‘hubs’ or foci, at which information relevant for lexical-semantic processing is combined
<xref rid="pone.0101042-Damasio1" ref-type="bibr">[17]</xref>
,
<xref rid="pone.0101042-Patterson1" ref-type="bibr">[18]</xref>
. Neuropsychological data have provided compelling evidence that the anterior temporal lobes (ATL) may be a good candidate for such a hub
<xref rid="pone.0101042-Patterson1" ref-type="bibr">[18]</xref>
,
<xref rid="pone.0101042-Warrington1" ref-type="bibr">[19]</xref>
. Thus, there is a general acceptance that information from distributed modality-specific networks is integrated in some way, somewhere in the brain. However, virtually no research has looked at what the neural mechanisms underlying semantic integration might be in these hub regions or more widely across the brain.</p>
<p>One way to investigate the mechanisms underlying integration across cortical areas is to study modulations in oscillatory power in EEG and MEG signals that have been related to network interactions at different cortical scales
<xref rid="pone.0101042-Donner1" ref-type="bibr">[20]</xref>
,
<xref rid="pone.0101042-VonStein1" ref-type="bibr">[21]</xref>
. Specifically, low frequency modulations (< 20 Hz) are often reported when tasks require the retrieval and integration of information from distant cortical sites, which is generally the case for memory and language
<xref rid="pone.0101042-Bastiaansen1" ref-type="bibr">[22]</xref>
<xref rid="pone.0101042-Klimesch2" ref-type="bibr">[25]</xref>
. In contrast, modulations in high frequency bands (>30 Hz) are observed when tasks require local, modality-specific, network interactions such as saccade planning or visual object binding
<xref rid="pone.0101042-TallonBaudry1" ref-type="bibr">[26]</xref>
,
<xref rid="pone.0101042-VanDerWerf1" ref-type="bibr">[27]</xref>
. According to this framework, the specific network dynamics underlying the integration of lexical-semantic features across different modalities should be reflected in a modulation in low frequencies.</p>
<p>The aim of the current study was to investigate what mechanisms underlie the integration of semantic features across modalities. This question was addressed in two experiments using a dual property verification task. Participants were asked to indicate whether a feature pair (e.g.,
<italic>silver, loud</italic>
) is consistent with a target word (e.g.,
<italic>WHISTLE</italic>
). Critically, the feature pair could either be from the same modality (e.g., both visual) or from different modalities (e.g., visual and auditory). In Experiment 1 we analyzed verification times for cross-modal and modality-specific feature contexts to investigate whether integrating multimodal semantic content, that is, content represented in distributed semantic networks, incurs a processing cost. Specifically, we hypothesize that integrating features represented within a single modality-specific network is faster than integrating features across modalities. In Experiment 2, we used EEG to measure changes in oscillatory neuronal activity during the target word when participants were asked to integrate features from the same or different modalities. Oscillatory neuronal activity could be a neural mechanism that contributes to semantic integration by linking modality-specific networks to multimodal convergence zones such as ATL. In line with this idea, we hypothesize that integrating semantic information from multiple modalities will be reflected in enhanced low frequency oscillatory activity in multimodal convergence zones, as well as substantial network interaction between these regions and a widespread cortical network.</p>
</sec>
<sec id="s2">
<title>Experiment 1</title>
<p>In Experiment 1 participants indicate whether two features (e.g.,
<italic>silver, loud</italic>
) are consistent with a target word (e.g.,
<italic>WHISTLE</italic>
). Specifically, a feature pair could either be associated with modality-specific or cross-modal semantic content. We hypothesize that integrating modality-specific feature pairs is faster than integrating cross-modal feature pairs, highlighting that word meaning is integrated more readily within modality-specific semantic networks than across them.</p>
<sec id="s2a">
<title>Methods</title>
<sec id="s2a1">
<title>Participants</title>
<p>Sixteen healthy individuals participated in Experiment 1 (13 female), all of whom had normal or corrected-to-normal vision and no known auditory deficit. The age range was 18 to 24 (
<italic>M</italic>
 = 19.88).</p>
<p>All participants were students at the University of York, and participated on a voluntary basis. As compensation for their participation, participants received either a financial reward or course credits. Participants gave written informed consent according to the Declaration of Helsinki. In addition they were given the opportunity of a more detailed debriefing after the study. The study was approved by the Ethics Committee of the Psychology Department at the University of York.</p>
</sec>
<sec id="s2a2">
<title>Stimulus material</title>
<p>120 target nouns (e.g.,
<italic>WHISTLE</italic>
) were each paired with two adjective features from the same (e.g.,
<italic>silver-tiny</italic>
), and two features from different modalities (e.g.,
<italic>silver-loud</italic>
) (
<xref ref-type="fig" rid="pone-0101042-g001">Figure 1a</xref>
). Crucially, targets were presented only in one of the two feature contexts. That is, each participant saw 60 targets with a modality-specific (MS) feature pair and 60 different targets with a cross-modal (CM) feature pair. The conditions were counterbalanced and trials were presented in a pseudo-randomized order. In addition, 60 trials were included in which at least one feature was false. To familiarize participants with the experiment 10 additional practice trials were presented before the start of the experiment. Thus, each participant saw 190 target words and feature pairs.</p>
<fig id="pone-0101042-g001" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0101042.g001</object-id>
<label>Figure 1</label>
<caption>
<title>Experimental design of the dual property verification paradigm.</title>
<p>A The top panel provides an overview of the design in which a target was either paired with a cross-modal (visual-haptic [VH; HV], visual-auditory [VA; AV], auditory-haptic [AH; HA]), or modality-specific feature pair (Visual [V], Auditory [A], Haptic [H]). The three modalities of interest were visual, haptic, and auditory. B The bottom panel depicts the time course of a single trial. All words are presented one after the other. Therefore, features can only be fully integrated when the target appears (e.g.,
<italic>WHISTLE</italic>
).</p>
</caption>
<graphic xlink:href="pone.0101042.g001"></graphic>
</fig>
<p>Since the target (
<italic>WHISTLE</italic>
) and one feature (
<italic>silver</italic>
) were the same in both conditions, only variable features (
<italic>tiny, loud</italic>
) were matched for word frequency (log-scaled, British National Corpus), and length. In order to control for differences in semantic association between feature pairs and targets, latent semantic analysis (LSA) scores were extracted for each feature pair and target combination. LSA is a measure of semantic similarity that quantifies how commonly two or more words occur in the same context in written texts
<xref rid="pone.0101042-Landauer1" ref-type="bibr">[28]</xref>
. For example highly associated words like
<italic>camel</italic>
and
<italic>hump</italic>
yield a higher LSA score (LSA  = .53) than less highly associated words such as
<italic>camel</italic>
and
<italic>hairy</italic>
(LSA  = .20). Lastly, each feature pair was rated on a five-point scale (N  =  18) for how diagnostic and how related it is to its target word. None of these scores differed significantly between conditions (see
<xref ref-type="table" rid="pone-0101042-t001">Table 1</xref>
).</p>
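As a rough illustration only (the study used LSA scores from an existing trained semantic space, not the hypothetical vectors below), an association score of this kind reduces to the cosine between word vectors in a reduced co-occurrence space:

```python
import numpy as np

def cosine_similarity(u, v):
    """LSA-style association score: cosine of the angle between word vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical low-dimensional vectors standing in for rows of an LSA space
camel = np.array([0.8, 0.1, 0.3])
hump = np.array([0.7, 0.2, 0.4])
hairy = np.array([0.1, 0.9, 0.2])

print(cosine_similarity(camel, hump))   # strongly associated pair
print(cosine_similarity(camel, hairy))  # weakly associated pair
```

Strongly associated word pairs occupy nearby directions in the space and therefore receive higher scores than weakly associated pairs.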
<table-wrap id="pone-0101042-t001" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0101042.t001</object-id>
<label>Table 1</label>
<caption>
<title>Matching of the experimental items.</title>
</caption>
<alternatives>
<graphic id="pone-0101042-t001-1" xlink:href="pone.0101042.t001"></graphic>
<table frame="hsides" rules="groups">
<colgroup span="1">
<col align="left" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
</colgroup>
<thead>
<tr>
<td align="left" rowspan="1" colspan="1">Feature Pair</td>
<td align="left" rowspan="1" colspan="1">LSA</td>
<td align="left" rowspan="1" colspan="1">Relatedness</td>
<td align="left" rowspan="1" colspan="1">Diagnosticity</td>
<td align="left" rowspan="1" colspan="1">Frequency</td>
<td align="left" rowspan="1" colspan="1">Length</td>
</tr>
</thead>
<tbody>
<tr>
<td align="left" rowspan="1" colspan="1">
<bold>Cross-modal</bold>
</td>
<td align="left" rowspan="1" colspan="1">0.21 (.01)</td>
<td align="left" rowspan="1" colspan="1">3.28 (.06)</td>
<td align="left" rowspan="1" colspan="1">2.65 (.07)</td>
<td align="left" rowspan="1" colspan="1">3.88 (.07)</td>
<td align="left" rowspan="1" colspan="1">6.48 (.18)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<bold>Modality-specific</bold>
</td>
<td align="left" rowspan="1" colspan="1">0.22 (.01)</td>
<td align="left" rowspan="1" colspan="1">3.37 (.07)</td>
<td align="left" rowspan="1" colspan="1">2.66 (.08)</td>
<td align="left" rowspan="1" colspan="1">3.87 (.08)</td>
<td align="left" rowspan="1" colspan="1">6.24 (.17)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<bold>p-value</bold>
</td>
<td align="left" rowspan="1" colspan="1">(
<italic>p</italic>
 = .66)</td>
<td align="left" rowspan="1" colspan="1">(
<italic>p</italic>
 = .32)</td>
<td align="left" rowspan="1" colspan="1">(
<italic>p</italic>
 = .92)</td>
<td align="left" rowspan="1" colspan="1">(
<italic>p</italic>
 = .93)</td>
<td align="left" rowspan="1" colspan="1">(
<italic>p</italic>
 = .48)</td>
</tr>
</tbody>
</table>
</alternatives>
<table-wrap-foot>
<fn id="nt101">
<label></label>
<p>Scores were averaged over all items in each condition. P-values were computed using independent-samples t-tests. The standard error of the mean is provided in brackets.</p>
</fn>
</table-wrap-foot>
</table-wrap>
<p>Language is inherently polysemous, and most semantic features can be associated with multiple modalities, depending on the context. For example, a feature like
<italic>high</italic>
can be used to describe the size of a mountain (visual) or the pitch of a sound (auditory). This issue was addressed recently in two norming studies
<xref rid="pone.0101042-VanDantzig1" ref-type="bibr">[29]</xref>
,
<xref rid="pone.0101042-Lynott1" ref-type="bibr">[30]</xref>
. Specifically, participants were asked to rate features in isolation or as feature-concept pairs on how likely the feature is experienced through one of five modalities (visual, haptic, auditory, olfactory, and gustatory). The features in the current study were based on averaged ratings from previous studies
<xref rid="pone.0101042-VanDantzig1" ref-type="bibr">[29]</xref>
,
<xref rid="pone.0101042-Lynott1" ref-type="bibr">[30]</xref>
and a small proportion (2.6%) of additional auditory features (e.g.,
<italic>ticking, quacking</italic>
). Features were selected that had been categorized as predominantly visual, haptic, or auditory (see
<xref ref-type="fig" rid="pone-0101042-g002">Figure 2</xref>
).</p>
<fig id="pone-0101042-g002" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0101042.g002</object-id>
<label>Figure 2</label>
<caption>
<title>Mean of the modality ratings for visual, haptic, and auditory features.</title>
<p>The three spider plots indicate the mean rating score
<xref rid="pone.0101042-VanDantzig1" ref-type="bibr">[29]</xref>
,
<xref rid="pone.0101042-Lynott1" ref-type="bibr">[30]</xref>
over all features in each of the three modalities of interest (Visual, Haptic, and Auditory).</p>
</caption>
<graphic xlink:href="pone.0101042.g002"></graphic>
</fig>
<p>All stimuli were presented using Neurobehavioral Systems Presentation software (
<ext-link ext-link-type="uri" xlink:href="http://www.neurobs.com">www.neurobs.com</ext-link>
) on a 22” TFT screen with a screen resolution of 1680×1050 and a refresh rate of 60 Hz.</p>
</sec>
<sec id="s2a3">
<title>Procedure</title>
<p>Participants were seated in front of a computer screen at a distance of 40 cm. Words were presented in light grey on a black background with a font size of 40 pt. Each trial began with a fixation cross presented for a variable interval between 1500 and 2500 ms; the trial started when the cross disappeared. The two features were then presented sequentially, each for 500 ms, separated by a 500 ms blank screen. The target was presented last (
<xref ref-type="fig" rid="pone-0101042-g001">Figure 1b</xref>
). Participants were instructed to indicate whether both features were consistent with the target. Responses were provided on a button box while the target was on the screen (2000 ms). Response times and number of errors were measured for subsequent analyses. Each participant saw a target only once and in one of two conditions (CM or MS).</p>
</sec>
</sec>
<sec id="s2b">
<title>Results and Discussion</title>
<p>One participant was excluded from the analysis because performance rates on the task were at chance. Furthermore, outliers beyond three standard deviations from the mean were excluded from the analysis.</p>
<p>In order to test whether participants were able to perform the task, a one-sample t-test was conducted on the proportion of correctly identified feature-target pairs, against a test-value of 0.5. This test confirmed that participants' performance on the task was well above chance (
<italic>t</italic>
(14)  =  15.43,
<italic>p</italic>
<.001) with a mean proportion of .73 correctly recognized features.</p>
<p>To test for a main effect of modality-specificity, the median reaction time was computed for each condition and participant, and averaged separately for MS (visual, auditory, haptic) and CM (visual-auditory, auditory-haptic, and visual-haptic) feature pairs, resulting in two values per participant (CM and MS). The distribution of these values across participants met the assumptions of a paired-sample t-test. The test revealed that participants were overall slower to respond to CM (
<italic>M</italic>
 = 981.6,
<italic>SE</italic>
 = 64.64) versus MS (
<italic>M</italic>
 = 909.36,
<italic>SE</italic>
 = 55.95) feature pairs (
<italic>t</italic>
(14) = 3.65,
<italic>p</italic>
 = .003).</p>
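A minimal numpy sketch of this analysis step, using simulated reaction times rather than the actual data: per-participant median RTs are collapsed into one CM and one MS value, and the paired-sample t statistic is computed from the participant-wise differences.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 15  # participants retained in the analysis

# Simulated per-participant median verification times (ms); the CM condition
# is generated ~70 ms slower than MS, mirroring the reported group means
cm = np.array([np.median(rng.normal(980, 60, size=60)) for _ in range(n)])
ms = np.array([np.median(rng.normal(910, 60, size=60)) for _ in range(n)])

d = cm - ms
t = d.mean() / (d.std(ddof=1) / np.sqrt(n))  # paired-sample t statistic, df = n - 1
print(f"t({n - 1}) = {t:.2f}")
```

Using the median rather than the mean per participant makes the per-subject summary robust to occasional very slow responses.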
<p>The effect of modality-specificity on verification time was further investigated for each of the three possible modality combinations using analysis of variance (ANOVA) with repeated measures (
<xref ref-type="fig" rid="pone-0101042-g003">Figure 3</xref>
). In each analysis, a CM condition (e.g., visual-auditory) was compared to two MS conditions (e.g., visual and auditory). The first ANOVA tested for an effect of condition on verification time across the visual (V), auditory (A), and visual-auditory (VA) conditions. The test revealed a significant main effect of condition (Wilks' Lambda  = .33,
<italic>F</italic>
(2,13) = 13.24,
<italic>p</italic>
 = .001, partial η
<sup>2</sup>
 = .67). Planned comparisons using a Helmert contrast indicated that participants responded more slowly during CM (visual-auditory) than MS feature pairs (visual and auditory, respectively) (
<italic>F</italic>
(1,14) = 26.67,
<italic>p</italic>
<.001, partial η
<sup>2</sup>
 = .66). The second ANOVA tested for a main effect of condition on verification time across the auditory (A), haptic (H), and auditory-haptic (AH) conditions. The results showed a significant main effect of condition (Wilks' Lambda  = .43,
<italic>F</italic>
(2,13) = 8.61,
<italic>p</italic>
 = .004, partial η
<sup>2</sup>
 = .57). Planned comparisons using a Helmert contrast revealed that participants verified CM feature pairs (auditory-haptic) more slowly than MS feature pairs (auditory and haptic, respectively) (
<italic>F</italic>
(1,14) = 9.22,
<italic>p</italic>
 = .009, partial η
<sup>2</sup>
 = .40). The final ANOVA was conducted to test for a main effect of condition across the visual (V), haptic (H), and visual-haptic (VH) conditions. There was no main effect in this analysis (Wilks' Lambda  = .72,
<italic>F</italic>
(2,13) = 2.64,
<italic>p</italic>
>.1).</p>
<fig id="pone-0101042-g003" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0101042.g003</object-id>
<label>Figure 3</label>
<caption>
<title>Cross-modal integration costs in verification times.</title>
<p>Bar graphs depict the mean verification time in the MS (Visual, Auditory, and Haptic), and CM condition (Visual-Auditory, Auditory-Haptic, Visual-Haptic). Error bars denote standard error of the mean (***
<italic>p</italic>
<.001; **
<italic>p</italic>
<.01).</p>
</caption>
<graphic xlink:href="pone.0101042.g003"></graphic>
</fig>
<p>The goal of Experiment 1 was to investigate whether integrating semantic features represented within a single modality is faster than integrating features across modalities. The current results suggest that this is indeed the case. Verification times for two semantic features with respect to a target (e.g.,
<italic>WHISTLE</italic>
) were delayed when participants saw two features from different modalities (e.g.,
<italic>silver, loud</italic>
). However, this effect seems to be restricted to visual-auditory and auditory-haptic feature combinations. A possible explanation for this finding is that visual lexical-semantic features can be difficult to distinguish from haptic features. This was also evident in the rating study, in which features were often rated as similarly likely to be experienced through seeing and touching (
<xref ref-type="fig" rid="pone-0101042-g002">Figure 2</xref>
)
<xref rid="pone.0101042-VanDantzig1" ref-type="bibr">[29]</xref>
,
<xref rid="pone.0101042-Lynott1" ref-type="bibr">[30]</xref>
.</p>
</sec>
</sec>
<sec id="s3">
<title>Experiment 2</title>
<p>Experiment 2 uses EEG to investigate oscillatory dynamics during semantic integration within, and across different modalities. We hypothesize that integrating cross-modal semantic content will be reflected in enhanced low frequency oscillatory activity in multimodal semantic hubs, such as ATL, as well as substantial network interaction between these regions and a widespread cortical network.</p>
<sec id="s3a">
<title>Methods</title>
<sec id="s3a1">
<title>Participants</title>
<p>For Experiment 2, 22 healthy participants (8 female) were tested, all of whom had normal or corrected-to-normal vision and no known auditory deficit. The age range was 19 to 34 (M = 21.26). Four participants were excluded from the analysis due to excessive movement and blinking (3), and a technical error (1). None of the participants had taken part in Experiment 1.</p>
<p>Participants gave written informed consent according to the Declaration of Helsinki. In addition they were given the opportunity of a more detailed debriefing after the study. The study was approved by the Ethics Committee of the Psychology Department at the University of York.</p>
</sec>
<sec id="s3a2">
<title>Stimulus material</title>
<p>The stimulus materials in Experiment 2 were exactly the same as in Experiment 1.</p>
</sec>
<sec id="s3a3">
<title>Procedure</title>
<p>In Experiment 2, participants performed the verification task while wearing an electrode cap connected via an amplifier to the recording computer. The setting was the same as in Experiment 1. However, in order to prevent contamination of the EEG signal by movement and response planning
<xref rid="pone.0101042-Neuper1" ref-type="bibr">[31]</xref>
, the task was changed such that participants responded only when they encountered a false feature.</p>
</sec>
<sec id="s3a4">
<title>Data recording and pre-processing</title>
<p>EEG was acquired from 64 Ag-AgCl electrodes that were positioned on an electrode cap according to the 10–20 system. All electrodes were re-referenced offline to the algebraic average of the two mastoids. Horizontal and vertical eye movements were recorded with a set of bipolar Ag-AgCl electrodes. The signal was amplified using an ANT amplifier with a band-pass filter between 0.5 and 100 Hz. Impedances of the cortical electrodes were kept below 10 kΩ. The signal was recorded with a sampling frequency of 500 Hz.</p>
<p>Offline analyses were conducted using Matlab 7.14 (Mathworks, Natick, MA) and Fieldtrip, a Matlab toolbox for analyzing EEG/MEG data
<xref rid="pone.0101042-Oostenveld1" ref-type="bibr">[32]</xref>
. Trials were only considered if the participant correctly withheld the response on a target. Artifact rejection was performed in three consecutive steps. First, muscle artifacts were removed using semi-automatic artifact rejection. Subsequently, extended infomax independent component analysis (ICA), with a weight change stop criterion of <10
<sup>−7</sup>
, was performed to identify and reject ocular components. Finally, each trial was visually inspected for any remaining artifacts. The average number of correct trials that survived the rejection protocol did not differ significantly between conditions (MS:
<italic>M</italic>
 = 48,
<italic>SE</italic>
 = 1.26; CM:
<italic>M</italic>
 = 47,
<italic>SE</italic>
 = 1.24;
<italic>t</italic>
(17) = −1.29,
<italic>p</italic>
 = .21).</p>
</sec>
<sec id="s3a5">
<title>Spectral analysis</title>
<p>In order to estimate spectral power changes over time, time-frequency representations (TFRs) were computed for each trial, using a 500 ms fixed sliding time window with time steps of 50 ms, resulting in a frequency resolution of ∼2 Hz. A Hanning taper was applied to each of these segments to reduce spectral leakage. TFRs were calculated for frequencies between 2 and 20 Hz in steps of 2 Hz. These transformations were performed at the individual trial level and reflect both evoked and induced components of the signal. Subsequently, trials were averaged for each condition and subject, and percentage signal change was computed using a common baseline over both conditions. The time window for the baseline was between 750 and 250 ms before the onset of the trial. The baseline normalization procedure is equivalent to the event-related de-synchronization technique (Pfurtscheller &amp; Lopes da Silva, 1999), except that positive values denote synchronization and negative values de-synchronization ((active-passive)/passive*100). Total power was averaged over 6 regions of interest.</p>
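The baseline normalization step can be sketched as follows (random stand-in power values; real TFRs would come from the Hanning-tapered sliding-window transform described above):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical condition-averaged TFR: frequencies (2-20 Hz) x time points (-1 to 1 s)
times = np.linspace(-1.0, 1.0, 41)
freqs = np.arange(2, 21, 2)
tfr = rng.gamma(shape=2.0, scale=1.0, size=(len(freqs), len(times)))

# Common baseline window: 750 to 250 ms before trial onset
bl = (times >= -0.75) & (times <= -0.25)
baseline = tfr[:, bl].mean(axis=1, keepdims=True)

# ((active - passive) / passive) * 100: positive values denote synchronization,
# negative values de-synchronization, relative to the baseline window
pct = (tfr - baseline) / baseline * 100
print(pct.shape)
```

By construction, the normalized power averages to zero within the baseline window for every frequency, so post-stimulus values read directly as percent change.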
</sec>
<sec id="s3a6">
<title>Statistical analysis</title>
<p>Inferential statistics on the time-frequency windows following the presentation of the target word were computed using a cluster-based permutation approach
<xref rid="pone.0101042-Maris1" ref-type="bibr">[33]</xref>
. Cluster-based permutation effectively reduces the number of comparisons by clustering neighboring samples above a given threshold along the time, frequency, and spatial dimensions. In the current study, paired-sample t-tests were computed over subjects for each ROI-time-frequency point (0–1000 ms, 2–20 Hz, 6 ROI). Subsequently, t-values were thresholded at α = .05. Neighboring t-values above the threshold criterion were assigned to the same cluster, and clusters were ranked by size. Finally, cluster-level statistics were computed by comparing the sum of all t-values within a given cluster against a permutation null-distribution. The null-distribution was constructed by randomly permuting the conditions (iterations = 1000), and calculating the maximum cluster-level statistic for each iteration.</p>
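A simplified one-dimensional (time-only) sketch of this procedure on simulated data; the actual analysis clusters over time, frequency, and ROI, and in a paired design permuting condition labels is equivalent to randomly sign-flipping the per-subject condition differences:

```python
import numpy as np

rng = np.random.default_rng(2)

def paired_t(d):
    """Paired-sample t-values over subjects (axis 0) for each time sample."""
    return d.mean(0) / (d.std(0, ddof=1) / np.sqrt(d.shape[0]))

def max_cluster_mass(t, thresh):
    """Summed t-values of the largest supra-threshold run (1-D clustering)."""
    mass, best = 0.0, 0.0
    for v in t:
        mass = mass + v if v > thresh else 0.0
        best = max(best, mass)
    return best

# Simulated data: 18 subjects x 50 time samples, with a CM > MS effect mid-trial
n_sub, n_time = 18, 50
diff = rng.normal(0, 1, (n_sub, n_time))  # per-subject CM minus MS differences
diff[:, 20:30] += 1.5                     # injected effect

thresh = 2.11  # two-tailed alpha = .05 critical t for df = 17
observed = max_cluster_mass(paired_t(diff), thresh)

# Null distribution: randomly flip the sign of each subject's difference
null = np.array([
    max_cluster_mass(paired_t(diff * rng.choice([1, -1], size=(n_sub, 1))), thresh)
    for _ in range(1000)
])
p = (np.sum(null >= observed) + 1) / (1000 + 1)
print(f"cluster-level p = {p:.3f}")
```

Comparing the observed maximum cluster mass against the permutation maxima controls the family-wise error rate across all time points in one test.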
<p>A similar procedure was used for the seed-based whole-brain connectivity analysis. The difference between each condition (CM and MS) and the baseline was computed for an early (0–500 ms) and late (500–1000 ms) time window. The value at each location in source space was thresholded using a permutation distribution (α = .05, 1000 iterations), and combined with values from spatially adjacent locations. We used a maximum statistic to control for multiple comparisons at the cluster-level, which was equivalent to the sensor space analysis.</p>
</sec>
<sec id="s3a7">
<title>Source reconstruction</title>
<p>Sources of oscillatory activity at the whole-brain level were estimated using a linear beamforming method (Gross et al., 2001; Liljeström et al., 2005). The forward model was computed on a regular three dimensional grid (10×10×10 mm spacing) using a realistic volume-conductor model
<xref rid="pone.0101042-Oostenveld2" ref-type="bibr">[34]</xref>
. Paired-sample t-tests were computed for the difference between conditions at each location in the brain. Subsequently, t-values were transformed into z-values and masked at α  =  0.05.</p>
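The beamformer equations are not spelled out in the text; under the standard linearly constrained minimum variance (LCMV) formulation, the spatial filter for one grid point is W = (L' C^-1 L)^-1 L' C^-1, which passes activity from that location with unit gain while minimizing variance from everywhere else. A minimal numpy sketch with a hypothetical leadfield and covariance matrix:

```python
import numpy as np

rng = np.random.default_rng(3)

n_sensors = 64
# Hypothetical leadfield for one grid point: sensors x 3 dipole orientations
L = rng.normal(0, 1, (n_sensors, 3))

# Hypothetical sensor covariance: a random symmetric positive-definite matrix
# standing in for the data covariance (or the CSD, for frequency-domain variants)
A = rng.normal(0, 1, (n_sensors, n_sensors))
C = A @ A.T + 1e-2 * np.eye(n_sensors)

# LCMV filter weights: W = (L' C^-1 L)^-1 L' C^-1
Cinv = np.linalg.inv(C)
W = np.linalg.solve(L.T @ Cinv @ L, L.T @ Cinv)  # 3 x sensors

# The unit-gain constraint holds: projecting the leadfield recovers the identity
print(np.allclose(W @ L, np.eye(3)))
```

Applying W to the sensor data yields the estimated source time course (or Fourier coefficients) at that grid point along three dipole orientations.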
</sec>
<sec id="s3a8">
<title>Connectivity analysis</title>
<p>The analysis of cortico-cortical connectivity in source space was conducted for an early (0–500 ms) and a late time window (500–1000 ms) at the frequency that showed the strongest power difference in sensor space (∼6 Hz). The same number of trials was randomly selected for the CM and MS conditions as well as the baseline period. A cross-spectral density (CSD) matrix was computed from the tapered Fourier spectra of each trial and used to estimate filter coefficients for the adaptive spatial filter. Subsequently, the Fourier spectra were projected through these filter coefficients along the strongest dipole orientation.</p>
<p>Functional connectivity between each location in the brain and all others was estimated using the imaginary part of coherency (ImCoh). ImCoh is only sensitive to signals at a non-zero time-lag, and therefore insensitive to connectivity artifacts resulting from volume conduction
<xref rid="pone.0101042-Nolte1" ref-type="bibr">[35]</xref>
. We computed ImCoh based on the Fourier spectra at each location in the grid. Subsequently, a stabilizing z-transform was applied using the inverse hyperbolic tangent (tanh
<sup>−1</sup>
). Since the main interest was in the functional connectivity between nodes rather than the direction of the effect, the absolute value was computed for each of the resulting z-values.</p>
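A minimal sketch of the ImCoh computation on simulated single-frequency Fourier spectra (hypothetical coupling between two sources at a 45° phase lag; not the actual source data):

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated Fourier spectra at one frequency: trials x sources (complex-valued)
n_trials, n_src = 60, 5
X = rng.normal(size=(n_trials, n_src)) + 1j * rng.normal(size=(n_trials, n_src))
X[:, 1] += 0.8 * X[:, 0] * np.exp(1j * np.pi / 4)  # lagged coupling: source 0 -> 1

# Cross-spectral density and coherency
csd = X.conj().T @ X / n_trials
power = np.real(np.diag(csd))
coherency = csd / np.sqrt(np.outer(power, power))

# Imaginary part (insensitive to zero-lag volume conduction), z-stabilized
# with the inverse hyperbolic tangent, then made unsigned
imcoh = np.abs(np.arctanh(np.imag(coherency)))
print(imcoh[0, 1], imcoh[0, 2])  # coupled pair vs. independent pair
```

Because the coupling between sources 0 and 1 carries a non-zero phase lag, their ImCoh is markedly larger than that of the uncoupled pairs, while any instantaneous (zero-lag) mixing would contribute nothing to the imaginary part.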
<p>For subsequent graph analysis, a binary adjacency matrix was computed for each participant by thresholding with the maximum value at which none of the nodes in any of the conditions was disconnected from the rest of the network. Finally, the log10 transformed difference between the number of connections (degrees) in the seed region versus baseline was computed for each condition, and subjected to statistical testing.</p>
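The thresholding step can be sketched for a single connectivity matrix as follows (hypothetical weights; the study determined one common threshold across all conditions so that no node was disconnected in any of them):

```python
import numpy as np

rng = np.random.default_rng(5)

def is_connected(adj):
    """Depth-first search from node 0 reaches every node."""
    n = adj.shape[0]
    seen, stack = {0}, [0]
    while stack:
        node = stack.pop()
        for nb in np.flatnonzero(adj[node]):
            if nb not in seen:
                seen.add(nb)
                stack.append(nb)
    return len(seen) == n

def binarize_connected(w):
    """Highest threshold at which the binary graph stays fully connected."""
    n = w.shape[0]
    for thr in np.sort(np.unique(w[np.triu_indices(n, k=1)]))[::-1]:
        adj = (w >= thr).astype(int)
        np.fill_diagonal(adj, 0)
        if is_connected(adj):
            return adj, thr
    raise ValueError("graph is never connected")

# Hypothetical symmetric ImCoh weight matrix for 8 nodes
n = 8
w = rng.random((n, n))
w = (w + w.T) / 2
np.fill_diagonal(w, 0.0)

adj, thr = binarize_connected(w)
degrees = adj.sum(axis=1)  # number of connections (degree) per node
print(thr, degrees)
```

The seed region's degree in each condition would then be compared against the baseline period on a log10 scale before statistical testing.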
</sec>
</sec>
<sec id="s3b">
<title>Results and Discussion</title>
<p>The time-frequency analysis of total power revealed a sustained increase in the theta band (4–6 Hz) and a decrease in the alpha and low beta bands (8–20 Hz) while the target word (e.g.,
<italic>WHISTLE</italic>
) was on the screen (
<xref ref-type="fig" rid="pone-0101042-g004">Figure 4a</xref>
). In order to test for differences between conditions (CM>MS), a cluster-based permutation approach was used
<xref rid="pone.0101042-Maris1" ref-type="bibr">[33]</xref>
. In the first step of the analysis, the clustering algorithm revealed one significant cluster (4–6 Hz, peak at 750–850 ms) at left and central electrodes (LA, LP, MA, MP) (
<xref ref-type="fig" rid="pone-0101042-g004">Figure 4b</xref>
;
<xref ref-type="fig" rid="pone-0101042-g005">Figure 5</xref>
). In order to control for multiple comparisons, a maximum permutation statistic was used in which the summed cluster t-value was compared against a permutation distribution with 1000 iterations. The maximum statistic revealed a significant difference between conditions at the cluster level (
<italic>p</italic>
 = .002, two-tailed), suggesting enhanced theta power in the cross-modal condition. Source reconstruction of this effect revealed a major peak in left ATL as well as left middle occipital gyrus (MOG) (
<xref ref-type="fig" rid="pone-0101042-g006">Figure 6A</xref>
).</p>
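The cluster-level maximum statistic can be illustrated with a simplified one-dimensional sign-flip permutation test. The sketch below is our own minimal version, not the multidimensional channel-time-frequency implementation used in the study; the cluster-forming threshold and all names are illustrative.

```python
import numpy as np

def max_cluster_permutation(diff, t_crit=2.0, n_perm=1000, seed=0):
    """Simplified 1D cluster permutation test via subject-level sign flips.

    diff : (n_subjects, n_samples) per-subject condition differences.
    Returns the p-value for the largest observed cluster mass.
    """
    rng = np.random.default_rng(seed)
    n_sub = diff.shape[0]

    def max_cluster_mass(data):
        # One-sample t-value at each sample, two-tailed via abs().
        t = data.mean(0) / (data.std(0, ddof=1) / np.sqrt(n_sub))
        mass, best = 0.0, 0.0
        for v in np.abs(t):
            # Sum |t| over contiguous suprathreshold runs (clusters).
            mass = mass + v if v > t_crit else 0.0
            best = max(best, mass)
        return best

    observed = max_cluster_mass(diff)
    # Null distribution: randomly flip the sign of each subject's difference.
    null = [max_cluster_mass(diff * rng.choice([-1, 1], (n_sub, 1)))
            for _ in range(n_perm)]
    return (np.sum(np.array(null) >= observed) + 1) / (n_perm + 1)
```

Comparing the summed cluster t-value against the permutation maximum controls the family-wise error rate across all samples, which is the logic behind the reported cluster-corrected p-value.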
<fig id="pone-0101042-g004" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0101042.g004</object-id>
<label>Figure 4</label>
<caption>
<title>Modulation in low frequency cortical oscillations for the target word in a cross-modal or modality-specific context.</title>
<p>A The top panel shows time-frequency representations, averaged over all significant clusters. The first two panels show the grand average percent signal change with respect to the baseline. The third panel depicts the masked statistical difference between the two conditions in t-values. The contour plot reveals one significant cluster in the theta range (4–6 Hz). B The first two bottom panels depict the topography of the effect in each condition (4–6 Hz, peak at 750–850 ms) relative to baseline. The third panel shows the statistical difference between conditions in t-values. Electrodes within significant clusters are marked with dots (p = .002, cluster-corrected).</p>
</caption>
<graphic xlink:href="pone.0101042.g004"></graphic>
</fig>
<fig id="pone-0101042-g005" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0101042.g005</object-id>
<label>Figure 5</label>
<caption>
<title>Time-frequency plots for each of the six ROIs.</title>
<p>The ROIs were middle anterior (MA), left anterior (LA), right anterior (RA), middle posterior (MP), left posterior (LP), and right posterior (RP) electrodes. Time-frequency representations depict the statistical difference in t-values for the target word in the CM versus MS feature context. The contours indicate the peak of the cluster-corrected statistical difference (p = .002).</p>
</caption>
<graphic xlink:href="pone.0101042.g005"></graphic>
</fig>
<fig id="pone-0101042-g006" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0101042.g006</object-id>
<label>Figure 6</label>
<caption>
<title>Source reconstruction and connectivity analysis.</title>
<p>A Source reconstruction of the effect in the theta band, depicted as thresholded z-values, reveals peaks in left ATL and MOG. B Bar graphs show a significant increase in the number of connections between ATL and the rest of the brain in the early time window (0–500 ms). In the late time window (500–1000 ms), only the CM condition shows a significant increase in the number of connections relative to baseline. Error bars depict SEM. C Results of the whole-brain connectivity analysis, seeded in the ATL (white dot). Connectivity maps show the difference in absolute, z-transformed, imaginary coherence between each condition and the baseline. In the early time window both conditions show a strong increase in connectivity between the ATL and a widespread cortical network. In the second time window, only the cross-modal condition shows continuing network activity above baseline.</p>
</caption>
<graphic xlink:href="pone.0101042.g006"></graphic>
</fig>
<p>The grid point in the left ATL (MNI coordinates: −49, 22, −30), which was most sensitive to the power difference between conditions, was taken as the seed for subsequent connectivity analyses. One-sample t-tests were used to test for an increase in the log-transformed number of connections (degrees) relative to baseline in an early (0–500 ms) and late (500–1000 ms) time window. In the early time window, both conditions showed a significant increase in the number of connections (CM:
<italic>t</italic>
(17)  =  3.89,
<italic>p</italic>
<.001; MS:
<italic>t</italic>
(17)  =  3.55,
<italic>p</italic>
 = .001, one-sided). However, in the late time window, an effect was found only in the CM condition (CM:
<italic>t</italic>
(17)  =  2.13,
<italic>p</italic>
 = .024; MS:
<italic>t</italic>
(17) = .56,
<italic>p</italic>
 = .291, one-sided). Further, paired-sample t-tests were used to test for a difference between conditions directly. A difference between conditions was observed only in the late (
<italic>t</italic>
(17) = 2.36,
<italic>p</italic>
 = .031, two-sided), but not the early time window (
<italic>t</italic>
(17) = .012,
<italic>p</italic>
 = .991, two-sided). Taken together, this suggests that during the first 500 ms after target onset, the ATL is communicating with a wide cortical network in both conditions (CM and MS). However, during the second 500 ms, this effect persists only in the CM condition (
<xref ref-type="fig" rid="pone-0101042-g006">Figure 6B</xref>
).</p>
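The degree statistic entering these one-sample t-tests can be sketched as follows; this is a minimal illustration assuming per-participant connection counts for the seed and a matched baseline window (function and variable names are our own).

```python
import numpy as np

def log_degree_t(seed_degrees, baseline_degrees):
    """One-sample t-statistic on the log10-transformed difference between
    seed and baseline connection counts across participants (illustrative).
    """
    seed = np.asarray(seed_degrees, dtype=float)
    base = np.asarray(baseline_degrees, dtype=float)
    # The log10 difference of degrees equals log10 of the seed/baseline ratio.
    diff = np.log10(seed) - np.log10(base)
    n = diff.size
    return diff.mean() / (diff.std(ddof=1) / np.sqrt(n))
```

The resulting t-value would then be referred to a t-distribution with n − 1 degrees of freedom (here, 17), one- or two-sided as appropriate.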
<p>To illustrate which specific regions show enhanced functional connectivity with the ATL, we used a whole-brain cluster-based permutation procedure on the z-transformed ImCoh values, comparing each condition to the baseline. This approach was similar to the procedure we used in sensor space. As depicted in the top panel of
<xref ref-type="fig" rid="pone-0101042-g006">figure 6C</xref>
, a large cluster of nodes was connected to the ATL in the early time window for both conditions (CM:
<italic>p</italic>
 = .004; MS:
<italic>p</italic>
 = .008, one-sided). However, in the late time window a significant difference relative to baseline was only observed in the CM condition (
<italic>p</italic>
 = .032, one-sided). During the second time window, connections were observed with regions involved in auditory (right Heschl's gyrus), somatosensory (bilateral post-central gyrus), and visual object processing (right posterior MTG), as well as with the medial and lateral frontal lobes.</p>
<p>The aim of Experiment 2 was to investigate whether integrating semantic features over a wider cortical network is reflected in enhanced oscillatory activity at low frequencies. Time-frequency analysis revealed an increase in theta power (4–6 Hz) for both conditions, which was more sustained during cross-modal integration. This effect localized most strongly to the left ATL, which is thought to be a major hub for integrating multimodal semantic content
<xref rid="pone.0101042-Patterson1" ref-type="bibr">[18]</xref>
. Subsequent seed-based whole-brain connectivity analysis confirmed that the number of connections between the ATL and the rest of the network increases in both CM and MS conditions during the first 500 ms. However, these network interactions extend into the second 500 ms only in the CM condition. Specifically, the ATL communicates with modality-specific auditory, somatosensory and high-level visual areas as well as regions in the frontal lobe. Taken together, these findings suggest that theta oscillations reflect the dynamics of a widespread cortical network. Previous research has associated theta oscillations with lexical-semantic processing
<xref rid="pone.0101042-Bastiaansen1" ref-type="bibr">[22]</xref>
,
<xref rid="pone.0101042-Bastiaansen2" ref-type="bibr">[23]</xref>
. However, the current study is the first to show that theta power is sensitive to the spatial distribution of semantic features in the cortex. The implications of these findings for semantic processing are discussed in the next section.</p>
</sec>
</sec>
<sec id="s4">
<title>General Discussion</title>
<p>Embodied theories of language have argued that word meaning is partially stored in modality-specific cortical networks, converging in multisensory association areas in the anterior temporal and inferior parietal lobes
<xref rid="pone.0101042-Barsalou1" ref-type="bibr">[1]</xref>
<xref rid="pone.0101042-Vigliocco1" ref-type="bibr">[4]</xref>
,
<xref rid="pone.0101042-Damasio1" ref-type="bibr">[17]</xref>
. The aim of the current study was to investigate the mechanisms underlying integration of semantic features during language processing. Two experiments are reported in which participants were asked to verify whether two features from the same (e.g.,
<italic>silver - tiny</italic>
) or different modality (e.g.,
<italic>silver - loud</italic>
) are consistent with a given target word (e.g.,
<italic>WHISTLE</italic>
). The results from Experiment 1 show that integrating features from the same modality is faster than integrating features from different modalities. These findings suggest that word meaning is integrated more readily within a single modality-specific network than across networks. Integrating information across networks in particular should engage multimodal convergence zones.
<xref ref-type="sec" rid="s3">Experiment 2</xref>
shows that integrating features from different modalities induces a sustained theta power increase in left ATL, a putative hub for semantic convergence
<xref rid="pone.0101042-Patterson1" ref-type="bibr">[18]</xref>
. Low frequency theta oscillations could reflect a neural mechanism by which multimodal word meaning is combined locally in temporal association cortices. However, assuming that word meaning is partially stored in distributed cortical networks, multimodal integration necessarily requires long-range communication between left ATL and the rest of the cortex. The seed-based connectivity analysis in the theta range revealed that this is indeed the case; left ATL communicates with a widespread cortical network that includes, but is not limited to, modality-specific regions. In other words, local theta power in left ATL reflects long-range communication between temporal areas and the rest of the cortex, which, according to embodied theories of semantics, is necessary for the integration of word meaning from multiple modality-specific semantic networks.</p>
<sec id="s4a">
<title>Integrating multimodal semantic information comes at a cost</title>
<p>Experiment 1 shows that participants are faster to verify features of a target word (e.g.,
<italic>WHISTLE</italic>
) from the same (e.g.,
<italic>silver-tiny</italic>
) versus two different modalities (e.g.,
<italic>silver-loud</italic>
), suggesting that word meaning converges more readily within a modality-specific semantic network than across networks. This is in line with behavioral studies that have examined switching costs during word comprehension
<xref rid="pone.0101042-Pecher1" ref-type="bibr">[36]</xref>
as well as dual property verification tasks
<xref rid="pone.0101042-Barsalou2" ref-type="bibr">[37]</xref>
(but see also
<xref rid="pone.0101042-McNorgan1" ref-type="bibr">[38]</xref>
). It is also broadly in accordance with a cognitive model proposing graded semantic convergence from modality-specific to multimodal representations
<xref rid="pone.0101042-Plaut1" ref-type="bibr">[39]</xref>
.</p>
</sec>
<sec id="s4b">
<title>Theta oscillations in left ATL during multimodal semantic feature integration</title>
<p>The principle by which information from distributed neural populations is combined is a much-debated topic in neuroscience. It has been argued that transient networks emerge from synchronized firing of large neuronal populations, which is recorded as oscillatory activity at the scalp
<xref rid="pone.0101042-Milner1" ref-type="bibr">[40]</xref>
<xref rid="pone.0101042-VonderMalsburg1" ref-type="bibr">[42]</xref>
. In humans, changes in oscillatory neuronal activity in the theta range have been observed during different stages of memory processing, as well as lexical-semantic retrieval
<xref rid="pone.0101042-Bastiaansen1" ref-type="bibr">[22]</xref>
<xref rid="pone.0101042-Klimesch2" ref-type="bibr">[25]</xref>
,
<xref rid="pone.0101042-Jensen1" ref-type="bibr">[43]</xref>
<xref rid="pone.0101042-Wu1" ref-type="bibr">[46]</xref>
. The current study extends previous findings to show that theta oscillations are particularly sensitive to the
<italic>integration</italic>
of semantic features of an object, which are thought to be partially represented in distributed, modality-specific networks
<xref rid="pone.0101042-Barsalou1" ref-type="bibr">[1]</xref>
<xref rid="pone.0101042-Vigliocco1" ref-type="bibr">[4]</xref>
.</p>
<p>It has been argued that modality-specific semantic networks converge in multimodal association cortices
<xref rid="pone.0101042-Damasio1" ref-type="bibr">[17]</xref>
,
<xref rid="pone.0101042-Patterson1" ref-type="bibr">[18]</xref>
. For example, there is compelling evidence from patients with semantic dementia suggesting that ATL is involved in semantic processing at a general level
<xref rid="pone.0101042-Patterson1" ref-type="bibr">[18]</xref>
,
<xref rid="pone.0101042-Warrington1" ref-type="bibr">[19]</xref>
, yet little is known about the neural dynamics within this region.
<xref ref-type="sec" rid="s3">Experiment 2</xref>
reports a modulation in local theta power within left ATL when participants integrate features from multiple modality-specific semantic networks. The connectivity analysis of the data from Experiment 2 further revealed that theta oscillations also participate in long-range network interactions linking left ATL with a widespread cortical network. These findings are an important step in bridging the gap between anatomy and cognition; the theta rhythm could be a neural signature reflecting transient network interactions within left ATL, as well as between this region and distributed modality-specific networks. Such functional networks are necessary for linking semantic content in space and time.</p>
<p>Lastly, we find that the effect peaks very late in time (∼750 ms), most likely reflecting the tail of a sustained oscillatory response that is triggered much earlier. Importantly, we do not argue that this is the moment at which semantic integration takes place. Rather, oscillatory dynamics in the theta range could help create the conditions necessary for semantic integration by linking multiple functional networks over a period of time. The higher processing demand of cross-modal integration is reflected in a longer integration window. This is also in line with the finding that theta is the only known oscillatory frequency that shows a linear increase during sentence processing
<xref rid="pone.0101042-Bastiaansen3" ref-type="bibr">[47]</xref>
. Again, we would like to emphasize that the primary goal of the current study was to investigate the oscillatory dynamics, rather than the timing of semantic integration, which has been addressed extensively in previous work using the event-related potential technique
<xref rid="pone.0101042-Kutas1" ref-type="bibr">[48]</xref>
.</p>
</sec>
<sec id="s4c">
<title>Relation to multisensory integration and cross-modal matching</title>
<p>Multisensory integration is an essential component of everyday life. For example, both visual and proprioceptive information are required when performing goal-directed actions
<xref rid="pone.0101042-Sober1" ref-type="bibr">[49]</xref>
, speech comprehension greatly benefits from visual information about lip movements
<xref rid="pone.0101042-Rosenblum1" ref-type="bibr">[50]</xref>
, and hearing the sound of an animal facilitates its visual detection
<xref rid="pone.0101042-Schneider2" ref-type="bibr">[14]</xref>
. Although these examples bear a superficial resemblance to the processes investigated in the current study, there are fundamental differences between integrating cross-modal sensory content and integrating lexical-semantic content. These differences concern (a) the time scale and (b) the directionality of information flow.</p>
<p>Previous studies have investigated oscillatory changes during multisensory integration using cross-modal matching. For example, Schneider and colleagues
<xref rid="pone.0101042-Schneider1" ref-type="bibr">[13]</xref>
showed that matching the visual image of an object (e.g., picture of a sheep) to its sound (e.g., sound of a sheep) induces an early increase in the gamma band (40–50 Hz) between 120–180 ms. Similar findings have been reported for haptic-to-auditory matching
<xref rid="pone.0101042-Schneider3" ref-type="bibr">[15]</xref>
. In contrast, effects of semantic integration in language are usually observed around 400 ms
<xref rid="pone.0101042-Kutas1" ref-type="bibr">[48]</xref>
and at frequencies below 30 Hz
<xref rid="pone.0101042-Bastiaansen1" ref-type="bibr">[22]</xref>
,
<xref rid="pone.0101042-Bastiaansen2" ref-type="bibr">[23]</xref>
,
<xref rid="pone.0101042-HaldL1" ref-type="bibr">[51]</xref>
,
<xref rid="pone.0101042-Wang1" ref-type="bibr">[52]</xref>
(however, see
<xref rid="pone.0101042-Hagoort1" ref-type="bibr">[53]</xref>
). This is not surprising given that lexical retrieval involves multiple processing stages (e.g., visual processing of letters). In this respect, the current findings should primarily be interpreted as reflecting language rather than sensory processing.</p>
<p>Another difference between sensory and semantic integration is the directionality of information flow. While sensory processing in a given modality is largely automatic and dependent on external stimulation (bottom-up), retrieving modality-specific word meaning requires prior experience with the referent of a word and is highly context-dependent (top-down). For example, previous imaging work has shown that action words do not activate the action system to the same extent if they are presented as idiomatic expressions (e.g.,
<italic>he kicked the bucket</italic>
)
<xref rid="pone.0101042-Raposo1" ref-type="bibr">[54]</xref>
(but see
<xref rid="pone.0101042-Boulenger1" ref-type="bibr">[55]</xref>
). Furthermore, it has been shown that neutral sentences (e.g.,
<italic>it is hot in here</italic>
) activate parts of the action system if presented in a context in which they are interpreted as indirect requests (e.g., a room with a closed window)
<xref rid="pone.0101042-VanAckeren1" ref-type="bibr">[56]</xref>
. In the current study, participants were primed to think about a particular instance of an object (e.g.,
<italic>a silver and loud whistle</italic>
). In other words, the relevant information was not directly encoded in the stimulus (a visual word), but needed to be retrieved from memory.</p>
<p>In sum, imaging studies have shown that lexical-semantic content activates modality-specific cortical networks similar to those engaged by sensory stimulation
<xref rid="pone.0101042-Simmons1" ref-type="bibr">[5]</xref>
,
<xref rid="pone.0101042-Gonzlez1" ref-type="bibr">[7]</xref>
,
<xref rid="pone.0101042-Kiefer1" ref-type="bibr">[57]</xref>
. But despite their spatial similarity, lexical-semantic and sensory processes operate at very different time scales and through different computations (top-down versus bottom-up, respectively). While much is known about the mechanisms underlying multisensory integration, the current study is among the first to address how cross-modal semantic information is integrated through language.</p>
</sec>
</sec>
<sec sec-type="conclusions" id="s5">
<title>Conclusions</title>
<p>Previous research suggests that word meaning is partially stored in modality-specific cortical networks. However, little is known about the mechanisms by which distributed semantic information is combined into a coherent conceptual representation. The current study addresses exactly this question: What are the mechanisms underlying cross-modal semantic integration? Participants were asked to indicate whether two features from the same (e.g.,
<italic>silver - tiny</italic>
) or different modalities (e.g.,
<italic>silver - loud</italic>
) are consistent with a target word (e.g.,
<italic>WHISTLE</italic>
).
<xref ref-type="sec" rid="s2">Experiment 1</xref>
 revealed that integrating semantic features represented within a single modality is faster than integrating features across modalities. In Experiment 2, EEG recordings revealed sustained oscillatory activity in the theta range when participants were asked to integrate features from different modalities. The effect was localized to left ATL, a putative semantic hub that is thought to be involved in linking multimodal semantic content
<xref rid="pone.0101042-Patterson1" ref-type="bibr">[18]</xref>
. While the importance of this region for semantic processing and integration is largely uncontested, little is known about its underlying mechanisms. The current findings are an important step towards bridging this gap between anatomy and function; oscillatory dynamics in the theta range could be a neural mechanism involved in establishing transient functional connections between distributed modality-specific and multimodal semantic networks. Further evidence for this claim is the finding that theta oscillations in Experiment 2 also participate in long-range interactions linking left ATL to a widespread cortical network.</p>
</sec>
</body>
<back>
<ack>
<p>The authors thank Kathrin Müsch and Garreth Prendergast for valuable comments.</p>
</ack>
<ref-list>
<title>References</title>
<ref id="pone.0101042-Barsalou1">
<label>1</label>
<mixed-citation publication-type="journal">
<name>
<surname>Barsalou</surname>
<given-names>LW</given-names>
</name>
(
<year>2008</year>
)
<article-title>Grounded cognition</article-title>
.
<source>Annu Rev Psychol</source>
<volume>59</volume>
:
<fpage>617</fpage>
<lpage>645</lpage>
<pub-id pub-id-type="pmid">17705682</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0101042-Binder1">
<label>2</label>
<mixed-citation publication-type="journal">
<name>
<surname>Binder</surname>
<given-names>JR</given-names>
</name>
,
<name>
<surname>Desai</surname>
<given-names>RH</given-names>
</name>
(
<year>2011</year>
)
<article-title>The neurobiology of semantic memory</article-title>
.
<source>Trends Cogn Sci</source>
<volume>15</volume>
:
<fpage>527</fpage>
<lpage>536</lpage>
<pub-id pub-id-type="pmid">22001867</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0101042-Pulvermller1">
<label>3</label>
<mixed-citation publication-type="journal">
<name>
<surname>Pulvermüller</surname>
<given-names>F</given-names>
</name>
,
<name>
<surname>Fadiga</surname>
<given-names>L</given-names>
</name>
(
<year>2010</year>
)
<article-title>Active perception: sensorimotor circuits as a cortical basis for language</article-title>
.
<source>Nat Rev Neurosci</source>
<volume>11</volume>
:
<fpage>351</fpage>
<lpage>360</lpage>
<pub-id pub-id-type="pmid">20383203</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0101042-Vigliocco1">
<label>4</label>
<mixed-citation publication-type="journal">
<name>
<surname>Vigliocco</surname>
<given-names>G</given-names>
</name>
,
<name>
<surname>Meteyard</surname>
<given-names>L</given-names>
</name>
,
<name>
<surname>Andrews</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Kousta</surname>
<given-names>S</given-names>
</name>
(
<year>2009</year>
)
<article-title>Toward a theory of semantic representation</article-title>
.
<source>Lang Cogn</source>
<volume>1</volume>
:
<fpage>219</fpage>
<lpage>247</lpage>
</mixed-citation>
</ref>
<ref id="pone.0101042-Simmons1">
<label>5</label>
<mixed-citation publication-type="journal">
<name>
<surname>Simmons</surname>
<given-names>WK</given-names>
</name>
,
<name>
<surname>Ramjee</surname>
<given-names>V</given-names>
</name>
,
<name>
<surname>Beauchamp</surname>
<given-names>MS</given-names>
</name>
,
<name>
<surname>McRae</surname>
<given-names>K</given-names>
</name>
,
<name>
<surname>Martin</surname>
<given-names>A</given-names>
</name>
,
<etal>et al</etal>
(
<year>2007</year>
)
<article-title>A common neural substrate for perceiving and knowing about color</article-title>
.
<source>Neuropsychologia</source>
<volume>45</volume>
:
<fpage>2802</fpage>
<lpage>2810</lpage>
<pub-id pub-id-type="pmid">17575989</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0101042-Hauk1">
<label>6</label>
<mixed-citation publication-type="journal">
<name>
<surname>Hauk</surname>
<given-names>O</given-names>
</name>
,
<name>
<surname>Johnsrude</surname>
<given-names>I</given-names>
</name>
,
<name>
<surname>Pulvermüller</surname>
<given-names>F</given-names>
</name>
,
<name>
<surname>Pulvermuller</surname>
<given-names>F</given-names>
</name>
(
<year>2004</year>
)
<article-title>Somatotopic representation of action words in human motor and premotor cortex</article-title>
.
<source>Neuron</source>
<volume>41</volume>
:
<fpage>301</fpage>
<lpage>307</lpage>
<pub-id pub-id-type="pmid">14741110</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0101042-Gonzlez1">
<label>7</label>
<mixed-citation publication-type="journal">
<name>
<surname>González</surname>
<given-names>J</given-names>
</name>
,
<name>
<surname>Barros-Loscertales</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Pulvermüller</surname>
<given-names>F</given-names>
</name>
,
<name>
<surname>Meseguer</surname>
<given-names>V</given-names>
</name>
,
<name>
<surname>Sanjuán</surname>
<given-names>A</given-names>
</name>
,
<etal>et al</etal>
(
<year>2006</year>
)
<article-title>Reading cinnamon activates olfactory brain regions</article-title>
.
<source>NeuroImage</source>
<volume>32</volume>
:
<fpage>906</fpage>
<lpage>912</lpage>
<pub-id pub-id-type="pmid">16651007</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0101042-Hoenig1">
<label>8</label>
<mixed-citation publication-type="journal">
<name>
<surname>Hoenig</surname>
<given-names>K</given-names>
</name>
,
<name>
<surname>Sim</surname>
<given-names>E-J</given-names>
</name>
,
<name>
<surname>Bochev</surname>
<given-names>V</given-names>
</name>
,
<name>
<surname>Herrnberger</surname>
<given-names>B</given-names>
</name>
,
<name>
<surname>Kiefer</surname>
<given-names>M</given-names>
</name>
(
<year>2008</year>
)
<article-title>Conceptual flexibility in the human brain: dynamic recruitment of semantic maps from visual, motor, and motion-related areas</article-title>
.
<source>J Cogn Neurosci</source>
<volume>20</volume>
:
<fpage>1799</fpage>
<lpage>1814</lpage>
<pub-id pub-id-type="pmid">18370598</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0101042-Martin1">
<label>9</label>
<mixed-citation publication-type="journal">
<name>
<surname>Martin</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Chao</surname>
<given-names>LL</given-names>
</name>
(
<year>2001</year>
)
<article-title>Semantic memory and the brain: structure and processes</article-title>
.
<source>Curr Opin Neurobiol</source>
<volume>11</volume>
:
<fpage>194</fpage>
<lpage>201</lpage>
<pub-id pub-id-type="pmid">11301239</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0101042-VanDam1">
<label>10</label>
<mixed-citation publication-type="journal">
<name>
<surname>Van Dam</surname>
<given-names>WO</given-names>
</name>
,
<name>
<surname>van Dijk</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Bekkering</surname>
<given-names>H</given-names>
</name>
,
<name>
<surname>Rueschemeyer</surname>
<given-names>S-A</given-names>
</name>
(
<year>2012</year>
)
<article-title>Flexibility in embodied lexical-semantic representations</article-title>
.
<source>Hum Brain Mapp</source>
<volume>33</volume>
:
<fpage>2322</fpage>
<lpage>2333</lpage>
<pub-id pub-id-type="pmid">21976384</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0101042-Chatterjee1">
<label>11</label>
<mixed-citation publication-type="journal">
<name>
<surname>Chatterjee</surname>
<given-names>A</given-names>
</name>
(
<year>2010</year>
)
<article-title>Disembodying cognition</article-title>
.
<source>Lang Cogn</source>
<volume>2</volume>
:
<fpage>79</fpage>
<lpage>116</lpage>
<pub-id pub-id-type="pmid">20802833</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0101042-Hauk2">
<label>12</label>
<mixed-citation publication-type="journal">
<name>
<surname>Hauk</surname>
<given-names>O</given-names>
</name>
,
<name>
<surname>Tschentscher</surname>
<given-names>N</given-names>
</name>
(
<year>2013</year>
)
<article-title>The Body of Evidence: What Can Neuroscience Tell Us about Embodied Semantics?</article-title>
<source>Front Psychol</source>
<volume>4</volume>
:
<fpage>50</fpage>
<pub-id pub-id-type="pmid">23407791</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0101042-Schneider1">
<label>13</label>
<mixed-citation publication-type="journal">
<name>
<surname>Schneider</surname>
<given-names>TR</given-names>
</name>
,
<name>
<surname>Debener</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Oostenveld</surname>
<given-names>R</given-names>
</name>
,
<name>
<surname>Engel</surname>
<given-names>AK</given-names>
</name>
(
<year>2008</year>
)
<article-title>Enhanced EEG gamma-band activity reflects multisensory semantic matching in visual-to-auditory object priming</article-title>
.
<source>NeuroImage</source>
<volume>42</volume>
:
<fpage>1244</fpage>
<lpage>1254</lpage>
<pub-id pub-id-type="pmid">18617422</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0101042-Schneider2">
<label>14</label>
<mixed-citation publication-type="journal">
<name>
<surname>Schneider</surname>
<given-names>TR</given-names>
</name>
,
<name>
<surname>Engel</surname>
<given-names>AK</given-names>
</name>
,
<name>
<surname>Debener</surname>
<given-names>S</given-names>
</name>
(
<year>2008</year>
)
<article-title>Multisensory Identification of Natural Objects in a Two-Way Crossmodal Priming Paradigm</article-title>
.
<source>Exp Psychol</source>
<volume>55</volume>
:
<fpage>121</fpage>
<lpage>132</lpage>
<pub-id pub-id-type="pmid">18444522</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0101042-Schneider3">
<label>15</label>
<mixed-citation publication-type="journal">
<name>
<surname>Schneider</surname>
<given-names>TR</given-names>
</name>
,
<name>
<surname>Lorenz</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Senkowski</surname>
<given-names>D</given-names>
</name>
,
<name>
<surname>Engel</surname>
<given-names>AK</given-names>
</name>
(
<year>2011</year>
)
<article-title>Gamma-band activity as a signature for cross-modal priming of auditory object recognition by active haptic exploration</article-title>
.
<source>J Neurosci</source>
<volume>31</volume>
:
<fpage>2502</fpage>
<lpage>2510</lpage>
<pub-id pub-id-type="pmid">21325518</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0101042-Senkowski1">
<label>16</label>
<mixed-citation publication-type="journal">
<name>
<surname>Senkowski</surname>
<given-names>D</given-names>
</name>
,
<name>
<surname>Schneider</surname>
<given-names>TR</given-names>
</name>
,
<name>
<surname>Foxe</surname>
<given-names>JJ</given-names>
</name>
,
<name>
<surname>Engel</surname>
<given-names>AK</given-names>
</name>
(
<year>2008</year>
)
<article-title>Crossmodal binding through neural coherence: implications for multisensory processing</article-title>
.
<source>Trends Neurosci</source>
<volume>31</volume>
:
<fpage>401</fpage>
<lpage>409</lpage>
<pub-id pub-id-type="pmid">18602171</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0101042-Damasio1">
<label>17</label>
<mixed-citation publication-type="journal">
<name>
<surname>Damasio</surname>
<given-names>AR</given-names>
</name>
(
<year>1989</year>
)
<article-title>The brain binds entities and events by multiregional activation from convergence zones</article-title>
.
<source>Neural Comput</source>
<volume>1</volume>
:
<fpage>123</fpage>
<lpage>132</lpage>
</mixed-citation>
</ref>
<ref id="pone.0101042-Patterson1">
<label>18</label>
<mixed-citation publication-type="journal">
<name>
<surname>Patterson</surname>
<given-names>K</given-names>
</name>
,
<name>
<surname>Nestor</surname>
<given-names>PJ</given-names>
</name>
,
<name>
<surname>Rogers</surname>
<given-names>TT</given-names>
</name>
(
<year>2007</year>
)
<article-title>Where do you know what you know? The representation of semantic knowledge in the human brain</article-title>
.
<source>Nat Rev Neurosci</source>
<volume>8</volume>
:
<fpage>976</fpage>
<lpage>987</lpage>
<pub-id pub-id-type="pmid">18026167</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0101042-Warrington1">
<label>19</label>
<mixed-citation publication-type="journal">
<name>
<surname>Warrington</surname>
<given-names>EK</given-names>
</name>
(
<year>1975</year>
)
<article-title>The selective impairment of semantic memory</article-title>
.
<source>Q J Exp Psychol</source>
<volume>27</volume>
:
<fpage>635</fpage>
<lpage>657</lpage>
<pub-id pub-id-type="pmid">1197619</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0101042-Donner1">
<label>20</label>
<mixed-citation publication-type="journal">
<name>
<surname>Donner</surname>
<given-names>TH</given-names>
</name>
,
<name>
<surname>Siegel</surname>
<given-names>M</given-names>
</name>
(
<year>2011</year>
)
<article-title>A framework for local cortical oscillation patterns</article-title>
.
<source>Trends Cogn Sci</source>
<volume>15</volume>
:
<fpage>191</fpage>
<lpage>199</lpage>
<pub-id pub-id-type="pmid">21481630</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0101042-VonStein1">
<label>21</label>
<mixed-citation publication-type="journal">
<name>
<surname>Von Stein</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Sarnthein</surname>
<given-names>J</given-names>
</name>
(
<year>2000</year>
)
<article-title>Different frequencies for different scales of cortical integration: from local gamma to long range alpha/theta synchronization</article-title>
.
<source>Int J Psychophysiol</source>
<volume>38</volume>
:
<fpage>301</fpage>
<lpage>313</lpage>
<pub-id pub-id-type="pmid">11102669</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0101042-Bastiaansen1">
<label>22</label>
<mixed-citation publication-type="journal">
<name>
<surname>Bastiaansen</surname>
<given-names>MC</given-names>
</name>
,
<name>
<surname>van der Linden</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Ter Keurs</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Dijkstra</surname>
<given-names>T</given-names>
</name>
,
<name>
<surname>Hagoort</surname>
<given-names>P</given-names>
</name>
(
<year>2005</year>
)
<article-title>Theta responses are involved in lexical-semantic retrieval during language processing</article-title>
.
<source>J Cogn Neurosci</source>
<volume>17</volume>
:
<fpage>530</fpage>
<lpage>541</lpage>
<pub-id pub-id-type="pmid">15814011</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0101042-Bastiaansen2">
<label>23</label>
<mixed-citation publication-type="journal">
<name>
<surname>Bastiaansen</surname>
<given-names>MCM</given-names>
</name>
,
<name>
<surname>Oostenveld</surname>
<given-names>R</given-names>
</name>
,
<name>
<surname>Jensen</surname>
<given-names>O</given-names>
</name>
,
<name>
<surname>Hagoort</surname>
<given-names>P</given-names>
</name>
(
<year>2008</year>
)
<article-title>I see what you mean: Theta power increases are involved in the retrieval of lexical semantic information</article-title>
.
<source>Brain Lang</source>
<volume>106</volume>
:
<fpage>15</fpage>
<lpage>28</lpage>
<pub-id pub-id-type="pmid">18262262</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0101042-Klimesch1">
<label>24</label>
<mixed-citation publication-type="journal">
<name>
<surname>Klimesch</surname>
<given-names>W</given-names>
</name>
(
<year>1999</year>
)
<article-title>EEG alpha and theta oscillations reflect cognitive and memory performance: a review and analysis</article-title>
.
<source>Brain Res Brain Res Rev</source>
<volume>29</volume>
:
<fpage>169</fpage>
<lpage>195</lpage>
<pub-id pub-id-type="pmid">10209231</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0101042-Klimesch2">
<label>25</label>
<mixed-citation publication-type="journal">
<name>
<surname>Klimesch</surname>
<given-names>W</given-names>
</name>
,
<name>
<surname>Freunberger</surname>
<given-names>R</given-names>
</name>
,
<name>
<surname>Sauseng</surname>
<given-names>P</given-names>
</name>
(
<year>2010</year>
)
<article-title>Oscillatory mechanisms of process binding in memory</article-title>
.
<source>Neurosci Biobehav Rev</source>
<volume>34</volume>
:
<fpage>1002</fpage>
<lpage>1014</lpage>
<pub-id pub-id-type="pmid">19837109</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0101042-TallonBaudry1">
<label>26</label>
<mixed-citation publication-type="journal">
<name>
<surname>Tallon-Baudry</surname>
<given-names>C</given-names>
</name>
,
<name>
<surname>Bertrand</surname>
<given-names>O</given-names>
</name>
(
<year>1999</year>
)
<article-title>Oscillatory gamma activity in humans and its role in object representation</article-title>
.
<source>Trends Cogn Sci</source>
<volume>3</volume>
:
<fpage>151</fpage>
<lpage>162</lpage>
<pub-id pub-id-type="pmid">10322469</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0101042-VanDerWerf1">
<label>27</label>
<mixed-citation publication-type="journal">
<name>
<surname>Van Der Werf</surname>
<given-names>J</given-names>
</name>
,
<name>
<surname>Jensen</surname>
<given-names>O</given-names>
</name>
,
<name>
<surname>Fries</surname>
<given-names>P</given-names>
</name>
,
<name>
<surname>Medendorp</surname>
<given-names>WP</given-names>
</name>
(
<year>2008</year>
)
<article-title>Gamma-band activity in human posterior parietal cortex encodes the motor goal during delayed prosaccades and antisaccades</article-title>
.
<source>J Neurosci</source>
<volume>28</volume>
:
<fpage>8397</fpage>
<lpage>8405</lpage>
<pub-id pub-id-type="pmid">18716198</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0101042-Landauer1">
<label>28</label>
<mixed-citation publication-type="journal">
<name>
<surname>Landauer</surname>
<given-names>TK</given-names>
</name>
,
<name>
<surname>Foltz</surname>
<given-names>PW</given-names>
</name>
,
<name>
<surname>Laham</surname>
<given-names>D</given-names>
</name>
(
<year>1998</year>
)
<article-title>An introduction to latent semantic analysis</article-title>
.
<source>Discourse Process</source>
<volume>25</volume>
:
<fpage>259</fpage>
<lpage>284</lpage>
</mixed-citation>
</ref>
<ref id="pone.0101042-VanDantzig1">
<label>29</label>
<mixed-citation publication-type="journal">
<name>
<surname>Van Dantzig</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Cowell</surname>
<given-names>RA</given-names>
</name>
,
<name>
<surname>Zeelenberg</surname>
<given-names>R</given-names>
</name>
,
<name>
<surname>Pecher</surname>
<given-names>D</given-names>
</name>
(
<year>2011</year>
)
<article-title>A sharp image or a sharp knife: norms for the modality-exclusivity of 774 concept-property items</article-title>
.
<source>Behav Res Methods</source>
<volume>43</volume>
:
<fpage>145</fpage>
<lpage>154</lpage>
<pub-id pub-id-type="pmid">21287109</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0101042-Lynott1">
<label>30</label>
<mixed-citation publication-type="journal">
<name>
<surname>Lynott</surname>
<given-names>D</given-names>
</name>
,
<name>
<surname>Connell</surname>
<given-names>L</given-names>
</name>
(
<year>2009</year>
)
<article-title>Modality exclusivity norms for 423 object properties</article-title>
.
<source>Behav Res Methods</source>
<volume>41</volume>
:
<fpage>558</fpage>
<lpage>564</lpage>
<pub-id pub-id-type="pmid">19363198</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0101042-Neuper1">
<label>31</label>
<mixed-citation publication-type="journal">
<name>
<surname>Neuper</surname>
<given-names>C</given-names>
</name>
,
<name>
<surname>Wörtz</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Pfurtscheller</surname>
<given-names>G</given-names>
</name>
(
<year>2006</year>
)
<article-title>ERD/ERS patterns reflecting sensorimotor activation and deactivation</article-title>
.
<source>Prog Brain Res</source>
<volume>159</volume>
:
<fpage>211</fpage>
<lpage>222</lpage>
<pub-id pub-id-type="pmid">17071233</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0101042-Oostenveld1">
<label>32</label>
<mixed-citation publication-type="journal">
<name>
<surname>Oostenveld</surname>
<given-names>R</given-names>
</name>
,
<name>
<surname>Fries</surname>
<given-names>P</given-names>
</name>
,
<name>
<surname>Maris</surname>
<given-names>E</given-names>
</name>
,
<name>
<surname>Schoffelen</surname>
<given-names>JM</given-names>
</name>
(
<year>2011</year>
)
<article-title>FieldTrip: open source software for advanced analysis of MEG, EEG, and invasive electrophysiological data</article-title>
.
<source>Comput Intell Neurosci</source>
<volume>2011</volume>
:
<fpage>1</fpage>
<pub-id pub-id-type="pmid">21837235</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0101042-Maris1">
<label>33</label>
<mixed-citation publication-type="journal">
<name>
<surname>Maris</surname>
<given-names>E</given-names>
</name>
,
<name>
<surname>Oostenveld</surname>
<given-names>R</given-names>
</name>
(
<year>2007</year>
)
<article-title>Nonparametric statistical testing of EEG- and MEG-data</article-title>
.
<source>J Neurosci Methods</source>
<volume>164</volume>
:
<fpage>177</fpage>
<lpage>190</lpage>
<pub-id pub-id-type="pmid">17517438</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0101042-Oostenveld2">
<label>34</label>
<mixed-citation publication-type="journal">
<name>
<surname>Oostenveld</surname>
<given-names>R</given-names>
</name>
,
<name>
<surname>Stegeman</surname>
<given-names>DF</given-names>
</name>
,
<name>
<surname>Praamstra</surname>
<given-names>P</given-names>
</name>
,
<name>
<surname>Van Oosterom</surname>
<given-names>A</given-names>
</name>
(
<year>2003</year>
)
<article-title>Brain symmetry and topographic analysis of lateralized event-related potentials</article-title>
.
<source>Clin Neurophysiol</source>
<volume>114</volume>
:
<fpage>1194</fpage>
<lpage>1202</lpage>
<pub-id pub-id-type="pmid">12842715</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0101042-Nolte1">
<label>35</label>
<mixed-citation publication-type="journal">
<name>
<surname>Nolte</surname>
<given-names>G</given-names>
</name>
,
<name>
<surname>Bai</surname>
<given-names>O</given-names>
</name>
,
<name>
<surname>Wheaton</surname>
<given-names>L</given-names>
</name>
,
<name>
<surname>Mari</surname>
<given-names>Z</given-names>
</name>
,
<name>
<surname>Vorbach</surname>
<given-names>S</given-names>
</name>
,
<etal>et al</etal>
(
<year>2004</year>
)
<article-title>Identifying true brain interaction from EEG data using the imaginary part of coherency</article-title>
.
<source>Clin Neurophysiol</source>
<volume>115</volume>
:
<fpage>2292</fpage>
<lpage>2307</lpage>
<pub-id pub-id-type="pmid">15351371</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0101042-Pecher1">
<label>36</label>
<mixed-citation publication-type="journal">
<name>
<surname>Pecher</surname>
<given-names>D</given-names>
</name>
,
<name>
<surname>Zeelenberg</surname>
<given-names>R</given-names>
</name>
,
<name>
<surname>Barsalou</surname>
<given-names>LW</given-names>
</name>
(
<year>2003</year>
)
<article-title>Verifying different-modality properties for concepts produces switching costs</article-title>
.
<source>Psychol Sci</source>
<volume>14</volume>
:
<fpage>119</fpage>
<lpage>124</lpage>
<pub-id pub-id-type="pmid">12661672</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0101042-Barsalou2">
<label>37</label>
<mixed-citation publication-type="book">Barsalou LW, Pecher D, Zeelenberg R, Simmons WK, Hamann SB (2005) Multimodal simulation in conceptual processing. In: Ahn W, Goldstone R, Love B, Markman A, Wolff P, editors. Categorization inside and outside the lab: Festschrift in honor of Douglas L. Medin. Washington, DC: American Psychological Association. pp. 249–270.</mixed-citation>
</ref>
<ref id="pone.0101042-McNorgan1">
<label>38</label>
<mixed-citation publication-type="journal">
<name>
<surname>McNorgan</surname>
<given-names>C</given-names>
</name>
,
<name>
<surname>Reid</surname>
<given-names>J</given-names>
</name>
,
<name>
<surname>McRae</surname>
<given-names>K</given-names>
</name>
(
<year>2011</year>
)
<article-title>Integrating conceptual knowledge within and across representational modalities</article-title>
.
<source>Cognition</source>
<volume>118</volume>
:
<fpage>211</fpage>
<lpage>233</lpage>
<pub-id pub-id-type="pmid">21093853</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0101042-Plaut1">
<label>39</label>
<mixed-citation publication-type="journal">
<name>
<surname>Plaut</surname>
<given-names>DC</given-names>
</name>
(
<year>2002</year>
)
<article-title>Graded modality-specific specialization in semantics: a computational account of optic aphasia</article-title>
.
<source>Cogn Neuropsychol</source>
<volume>19</volume>
:
<fpage>603</fpage>
<lpage>639</lpage>
<pub-id pub-id-type="pmid">20957556</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0101042-Milner1">
<label>40</label>
<mixed-citation publication-type="journal">
<name>
<surname>Milner</surname>
<given-names>PM</given-names>
</name>
(
<year>1974</year>
)
<article-title>A model for visual shape recognition</article-title>
.
<source>Psychol Rev</source>
<volume>81</volume>
:
<fpage>521</fpage>
<lpage>535</lpage>
<pub-id pub-id-type="pmid">4445414</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0101042-Singer1">
<label>41</label>
<mixed-citation publication-type="journal">
<name>
<surname>Singer</surname>
<given-names>W</given-names>
</name>
,
<name>
<surname>Gray</surname>
<given-names>CM</given-names>
</name>
(
<year>1995</year>
)
<article-title>Visual Feature Integration and the Temporal Correlation Hypothesis</article-title>
.
<source>Annu Rev Neurosci</source>
<volume>18</volume>
:
<fpage>555</fpage>
<lpage>586</lpage>
<pub-id pub-id-type="pmid">7605074</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0101042-VonderMalsburg1">
<label>42</label>
<mixed-citation publication-type="journal">
<name>
<surname>Von der Malsburg</surname>
<given-names>C</given-names>
</name>
,
<name>
<surname>Schneider</surname>
<given-names>W</given-names>
</name>
(
<year>1986</year>
)
<article-title>A neural cocktail-party processor</article-title>
.
<source>Biol Cybern</source>
<volume>54</volume>
:
<fpage>29</fpage>
<lpage>40</lpage>
<pub-id pub-id-type="pmid">3719028</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0101042-Jensen1">
<label>43</label>
<mixed-citation publication-type="journal">
<name>
<surname>Jensen</surname>
<given-names>O</given-names>
</name>
,
<name>
<surname>Tesche</surname>
<given-names>CD</given-names>
</name>
(
<year>2002</year>
)
<article-title>Frontal theta activity in humans increases with memory load in a working memory task</article-title>
.
<source>Eur J Neurosci</source>
<volume>15</volume>
:
<fpage>1395</fpage>
<lpage>1399</lpage>
<pub-id pub-id-type="pmid">11994134</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0101042-Raghavachari1">
<label>44</label>
<mixed-citation publication-type="journal">
<name>
<surname>Raghavachari</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Lisman</surname>
<given-names>JE</given-names>
</name>
,
<name>
<surname>Tully</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Madsen</surname>
<given-names>JR</given-names>
</name>
,
<name>
<surname>Bromfield</surname>
<given-names>EB</given-names>
</name>
,
<etal>et al</etal>
(
<year>2006</year>
)
<article-title>Theta oscillations in human cortex during a working-memory task: evidence for local generators</article-title>
.
<source>J Neurophysiol</source>
<volume>95</volume>
:
<fpage>1630</fpage>
<lpage>1638</lpage>
<pub-id pub-id-type="pmid">16207788</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0101042-Summerfield1">
<label>45</label>
<mixed-citation publication-type="journal">
<name>
<surname>Summerfield</surname>
<given-names>C</given-names>
</name>
,
<name>
<surname>Mangels</surname>
<given-names>JA</given-names>
</name>
(
<year>2005</year>
)
<article-title>Coherent theta-band EEG activity predicts item-context binding during encoding</article-title>
.
<source>NeuroImage</source>
<volume>24</volume>
:
<fpage>692</fpage>
<lpage>703</lpage>
<pub-id pub-id-type="pmid">15652304</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0101042-Wu1">
<label>46</label>
<mixed-citation publication-type="journal">
<name>
<surname>Wu</surname>
<given-names>X</given-names>
</name>
,
<name>
<surname>Chen</surname>
<given-names>X</given-names>
</name>
,
<name>
<surname>Li</surname>
<given-names>Z</given-names>
</name>
,
<name>
<surname>Han</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Zhang</surname>
<given-names>D</given-names>
</name>
(
<year>2007</year>
)
<article-title>Binding of verbal and spatial information in human working memory involves large-scale neural synchronization at theta frequency</article-title>
.
<source>NeuroImage</source>
<volume>35</volume>
:
<fpage>1654</fpage>
<lpage>1662</lpage>
<pub-id pub-id-type="pmid">17379539</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0101042-Bastiaansen3">
<label>47</label>
<mixed-citation publication-type="journal">
<name>
<surname>Bastiaansen</surname>
<given-names>MCM</given-names>
</name>
,
<name>
<surname>van Berkum</surname>
<given-names>JJA</given-names>
</name>
,
<name>
<surname>Hagoort</surname>
<given-names>P</given-names>
</name>
(
<year>2002</year>
)
<article-title>Event-related theta power increases in the human EEG during online sentence processing</article-title>
.
<source>Neurosci Lett</source>
<volume>323</volume>
:
<fpage>13</fpage>
<lpage>16</lpage>
<pub-id pub-id-type="pmid">11911979</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0101042-Kutas1">
<label>48</label>
<mixed-citation publication-type="journal">
<name>
<surname>Kutas</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Federmeier</surname>
<given-names>K</given-names>
</name>
(
<year>2000</year>
)
<article-title>Electrophysiology reveals semantic memory use in language comprehension</article-title>
.
<source>Trends Cogn Sci</source>
<volume>4</volume>
:
<fpage>463</fpage>
<lpage>470</lpage>
<pub-id pub-id-type="pmid">11115760</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0101042-Sober1">
<label>49</label>
<mixed-citation publication-type="journal">
<name>
<surname>Sober</surname>
<given-names>SJ</given-names>
</name>
,
<name>
<surname>Sabes</surname>
<given-names>PN</given-names>
</name>
(
<year>2003</year>
)
<article-title>Multisensory integration during motor planning</article-title>
.
<source>J Neurosci</source>
<volume>23</volume>
:
<fpage>6982</fpage>
<lpage>6992</lpage>
<pub-id pub-id-type="pmid">12904459</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0101042-Rosenblum1">
<label>50</label>
<mixed-citation publication-type="journal">
<name>
<surname>Rosenblum</surname>
<given-names>LD</given-names>
</name>
(
<year>2008</year>
)
<article-title>Speech Perception as a Multimodal Phenomenon</article-title>
.
<source>Curr Dir Psychol Sci</source>
<volume>17</volume>
:
<fpage>405</fpage>
<lpage>409</lpage>
<pub-id pub-id-type="pmid">23914077</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0101042-HaldL1">
<label>51</label>
<mixed-citation publication-type="journal">
<name>
<surname>Hald</surname>
<given-names>LA</given-names>
</name>
,
<name>
<surname>Bastiaansen</surname>
<given-names>MCM</given-names>
</name>
,
<name>
<surname>Hagoort</surname>
<given-names>P</given-names>
</name>
(
<year>2006</year>
)
<article-title>EEG theta and gamma responses to semantic violations in online sentence processing</article-title>
.
<source>Brain Lang</source>
<volume>96</volume>
:
<fpage>90</fpage>
<lpage>105</lpage>
<pub-id pub-id-type="pmid">16083953</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0101042-Wang1">
<label>52</label>
<mixed-citation publication-type="journal">
<name>
<surname>Wang</surname>
<given-names>L</given-names>
</name>
,
<name>
<surname>Jensen</surname>
<given-names>O</given-names>
</name>
,
<name>
<surname>van den Brink</surname>
<given-names>D</given-names>
</name>
,
<name>
<surname>Weder</surname>
<given-names>N</given-names>
</name>
,
<name>
<surname>Schoffelen</surname>
<given-names>J-M</given-names>
</name>
,
<etal>et al</etal>
(
<year>2012</year>
)
<article-title>Beta oscillations relate to the N400m during language comprehension</article-title>
.
<source>Hum Brain Mapp</source>
<volume>33</volume>
:
<fpage>2898</fpage>
<lpage>2912</lpage>
<pub-id pub-id-type="pmid">22488914</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0101042-Hagoort1">
<label>53</label>
<mixed-citation publication-type="journal">
<name>
<surname>Hagoort</surname>
<given-names>P</given-names>
</name>
,
<name>
<surname>Hald</surname>
<given-names>L</given-names>
</name>
,
<name>
<surname>Bastiaansen</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Petersson</surname>
<given-names>KM</given-names>
</name>
(
<year>2004</year>
)
<article-title>Integration of Word Meaning and World Knowledge in Language Comprehension</article-title>
.
<source>Science</source>
<volume>304</volume>
:
<fpage>438</fpage>
<lpage>441</lpage>
<pub-id pub-id-type="pmid">15031438</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0101042-Raposo1">
<label>54</label>
<mixed-citation publication-type="journal">
<name>
<surname>Raposo</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Moss</surname>
<given-names>HE</given-names>
</name>
,
<name>
<surname>Stamatakis</surname>
<given-names>EA</given-names>
</name>
,
<name>
<surname>Tyler</surname>
<given-names>LK</given-names>
</name>
(
<year>2009</year>
)
<article-title>Modulation of motor and premotor cortices by actions, action words and action sentences</article-title>
.
<source>Neuropsychologia</source>
<volume>47</volume>
:
<fpage>388</fpage>
<lpage>396</lpage>
<pub-id pub-id-type="pmid">18930749</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0101042-Boulenger1">
<label>55</label>
<mixed-citation publication-type="journal">
<name>
<surname>Boulenger</surname>
<given-names>V</given-names>
</name>
,
<name>
<surname>Hauk</surname>
<given-names>O</given-names>
</name>
,
<name>
<surname>Pulvermüller</surname>
<given-names>F</given-names>
</name>
(
<year>2009</year>
)
<article-title>Grasping ideas with the motor system: semantic somatotopy in idiom comprehension</article-title>
.
<source>Cereb Cortex</source>
<volume>19</volume>
:
<fpage>1905</fpage>
<lpage>1914</lpage>
<pub-id pub-id-type="pmid">19068489</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0101042-VanAckeren1">
<label>56</label>
<mixed-citation publication-type="journal">
<name>
<surname>Van Ackeren</surname>
<given-names>MJ</given-names>
</name>
,
<name>
<surname>Casasanto</surname>
<given-names>D</given-names>
</name>
,
<name>
<surname>Bekkering</surname>
<given-names>H</given-names>
</name>
,
<name>
<surname>Hagoort</surname>
<given-names>P</given-names>
</name>
,
<name>
<surname>Rueschemeyer</surname>
<given-names>S-A</given-names>
</name>
(
<year>2012</year>
)
<article-title>Pragmatics in Action: Indirect Requests Engage Theory of Mind Areas and the Cortical Motor Network</article-title>
.
<source>J Cogn Neurosci</source>
<volume>24</volume>
</mixed-citation>
</ref>
<ref id="pone.0101042-Kiefer1">
<label>57</label>
<mixed-citation publication-type="journal">
<name>
<surname>Kiefer</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Sim</surname>
<given-names>E-J</given-names>
</name>
,
<name>
<surname>Herrnberger</surname>
<given-names>B</given-names>
</name>
,
<name>
<surname>Grothe</surname>
<given-names>J</given-names>
</name>
,
<name>
<surname>Hoenig</surname>
<given-names>K</given-names>
</name>
(
<year>2008</year>
)
<article-title>The sound of concepts: four markers for a link between auditory and conceptual brain systems</article-title>
.
<source>J Neurosci</source>
<volume>28</volume>
:
<fpage>12224</fpage>
<lpage>12230</lpage>
<pub-id pub-id-type="pmid">19020016</pub-id>
</mixed-citation>
</ref>
</ref-list>
</back>
</pmc>
<affiliations>
<list></list>
<tree>
<noCountry>
<name sortKey="Rueschemeyer, Shirley Ann" sort="Rueschemeyer, Shirley Ann" uniqKey="Rueschemeyer S" first="Shirley-Ann" last="Rueschemeyer">Shirley-Ann Rueschemeyer</name>
<name sortKey="Van Ackeren, Markus J" sort="Van Ackeren, Markus J" uniqKey="Van Ackeren M" first="Markus J." last="Van Ackeren">Markus J. Van Ackeren</name>
</noCountry>
</tree>
</affiliations>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Ncbi/Merge
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 003131 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Ncbi/Merge/biblio.hfd -nk 003131 | SxmlIndent | more

To add a link to this page within the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Ncbi
   |étape=   Merge
   |type=    RBID
   |clé=     PMC:4090000
   |texte=   Cross-Modal Integration of Lexical-Semantic Features during Word Processing: Evidence from Oscillatory Dynamics during EEG
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Ncbi/Merge/RBID.i   -Sk "pubmed:25007074" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Ncbi/Merge/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 
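The bibliographic entries exported by the pipelines above follow the JATS schema (`<mixed-citation>`, `<name>`, `<article-title>`, ...). As a minimal sketch outside of Dilib (the element names are taken from this record; the helper `format_citation` is hypothetical, not a Dilib tool), one such entry can be rendered as a compact reference string:

```python
# Minimal sketch: render one JATS <mixed-citation> journal entry,
# like those in this record, as "Authors (Year) Title. Source Vol: fpage-lpage".
import xml.etree.ElementTree as ET

# Sample entry copied from the reference list of this record.
SAMPLE = """
<mixed-citation publication-type="journal">
  <name><surname>Patterson</surname><given-names>K</given-names></name>
  <name><surname>Nestor</surname><given-names>PJ</given-names></name>
  <name><surname>Rogers</surname><given-names>TT</given-names></name>
  (<year>2007</year>)
  <article-title>Where do you know what you know?</article-title>
  <source>Nat Rev Neurosci</source>
  <volume>8</volume>:
  <fpage>976</fpage><lpage>987</lpage>
</mixed-citation>
"""

def format_citation(xml_text: str) -> str:
    """Flatten a JATS journal citation into a single reference line."""
    elem = ET.fromstring(xml_text)
    # Each <name> child holds one author as surname + given-names initials.
    authors = ", ".join(
        f"{n.findtext('surname')} {n.findtext('given-names')}"
        for n in elem.findall("name")
    )
    return (
        f"{authors} ({elem.findtext('year')}) "
        f"{elem.findtext('article-title')}. "
        f"{elem.findtext('source')} {elem.findtext('volume')}: "
        f"{elem.findtext('fpage')}-{elem.findtext('lpage')}"
    )

print(format_citation(SAMPLE))
```

The same function can be mapped over every `<mixed-citation>` of a record extracted with HfdSelect, e.g. to build a plain-text bibliography.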

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024