Telematics exploration server

Warning: this site is under development!
Warning: this site was generated automatically from raw corpora.
The information has therefore not been validated.

Efficient Transfer Entropy Analysis of Non-Stationary Neural Time Series

Internal identifier: 000529 (Pmc/Corpus); previous: 000528; next: 000530

Efficient Transfer Entropy Analysis of Non-Stationary Neural Time Series

Authors: Patricia Wollstadt; Mario Martínez-Zarzuela; Raul Vicente; Francisco J. Díaz-Pernas; Michael Wibral

Source:

RBID: PMC:4113280

Abstract

Information theory allows us to investigate information processing in neural systems in terms of information transfer, storage and modification. In particular, the measure of information transfer, transfer entropy, has seen a dramatic surge of interest in neuroscience. Estimating transfer entropy from two processes requires the observation of multiple realizations of these processes to estimate associated probability density functions. To obtain these necessary observations, available estimators typically assume stationarity of processes to allow pooling of observations over time. This assumption, however, is a major obstacle to the application of these estimators in neuroscience, as observed processes are often non-stationary. As a solution, Gomez-Herrero and colleagues theoretically showed that the stationarity assumption may be avoided by estimating transfer entropy from an ensemble of realizations. Such an ensemble of realizations is often readily available in neuroscience experiments in the form of experimental trials. Thus, in this work we combine the ensemble method with a recently proposed transfer entropy estimator to make transfer entropy estimation applicable to non-stationary time series. We present an efficient implementation of the approach that is suitable for the increased computational demand of the ensemble method's practical application. In particular, we use a massively parallel implementation for a graphics processing unit to handle the computationally heaviest aspects of the ensemble method for transfer entropy estimation. We test the performance and robustness of our implementation on data from numerical simulations of stochastic processes. We also demonstrate the applicability of the ensemble method to magnetoencephalographic data. While we mainly evaluate the proposed method for neuroscience data, we expect it to be applicable in a variety of fields that are concerned with the analysis of information transfer in complex biological, social, and artificial systems.
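The quantity at the heart of the abstract can be made concrete with a toy estimator. The paper itself uses a nearest-neighbour (Kraskov-style) estimator for continuous data with GPU acceleration; the sketch below is instead a minimal plug-in estimator for discrete time series with history length 1, chosen only to illustrate what transfer entropy measures: the reduction in uncertainty about the next value of Y gained from the past of X, beyond what the past of Y already provides. The function name and history-length choice are illustrative assumptions, not the paper's implementation.

```python
from collections import Counter
from math import log2

def transfer_entropy(x, y):
    # Plug-in estimate of TE(X -> Y) in bits, with history length 1:
    # TE = sum over (y_t+1, y_t, x_t) of
    #      p(y_t+1, y_t, x_t) * log2[ p(y_t+1 | y_t, x_t) / p(y_t+1 | y_t) ]
    n = len(x) - 1
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))  # counts of (y_t+1, y_t, x_t)
    pairs_yx = Counter(zip(y[:-1], x[:-1]))        # counts of (y_t, x_t)
    pairs_yy = Counter(zip(y[1:], y[:-1]))         # counts of (y_t+1, y_t)
    singles = Counter(y[:-1])                      # counts of y_t
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_cond_full = c / pairs_yx[(y0, x0)]           # p(y_t+1 | y_t, x_t)
        p_cond_hist = pairs_yy[(y1, y0)] / singles[y0]  # p(y_t+1 | y_t)
        te += p_joint * log2(p_cond_full / p_cond_hist)
    return te
```

If Y simply copies X with a one-sample lag, the estimate in the X-to-Y direction approaches 1 bit for a binary source, while the reverse direction stays near zero. The ensemble method discussed in the abstract replaces the pooling over time implicit in these counts with pooling over trials at each time point, which is what removes the stationarity assumption.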


URL:
DOI: 10.1371/journal.pone.0102833
PubMed: 25068489
PubMed Central: 4113280

Links to Exploration step

PMC:4113280

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Efficient Transfer Entropy Analysis of Non-Stationary Neural Time Series</title>
<author>
<name sortKey="Wollstadt, Patricia" sort="Wollstadt, Patricia" uniqKey="Wollstadt P" first="Patricia" last="Wollstadt">Patricia Wollstadt</name>
<affiliation>
<nlm:aff id="aff1">
<addr-line>MEG Unit, Brain Imaging Center, Goethe University, Frankfurt, Germany</addr-line>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Martinez Zarzuela, Mario" sort="Martinez Zarzuela, Mario" uniqKey="Martinez Zarzuela M" first="Mario" last="Martínez-Zarzuela">Mario Martínez-Zarzuela</name>
<affiliation>
<nlm:aff id="aff4">
<addr-line>Department of Signal Theory and Communications and Telematics Engineering, University of Valladolid, Valladolid, Spain</addr-line>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Vicente, Raul" sort="Vicente, Raul" uniqKey="Vicente R" first="Raul" last="Vicente">Raul Vicente</name>
<affiliation>
<nlm:aff id="aff2">
<addr-line>Frankfurt Institute for Advanced Studies (FIAS), Goethe University, Frankfurt, Germany</addr-line>
</nlm:aff>
</affiliation>
<affiliation>
<nlm:aff id="aff3">
<addr-line>Max-Planck Institute for Brain Research, Frankfurt, Germany</addr-line>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Diaz Pernas, Francisco J" sort="Diaz Pernas, Francisco J" uniqKey="Diaz Pernas F" first="Francisco J." last="Díaz-Pernas">Francisco J. Díaz-Pernas</name>
<affiliation>
<nlm:aff id="aff4">
<addr-line>Department of Signal Theory and Communications and Telematics Engineering, University of Valladolid, Valladolid, Spain</addr-line>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Wibral, Michael" sort="Wibral, Michael" uniqKey="Wibral M" first="Michael" last="Wibral">Michael Wibral</name>
<affiliation>
<nlm:aff id="aff1">
<addr-line>MEG Unit, Brain Imaging Center, Goethe University, Frankfurt, Germany</addr-line>
</nlm:aff>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">25068489</idno>
<idno type="pmc">4113280</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4113280</idno>
<idno type="RBID">PMC:4113280</idno>
<idno type="doi">10.1371/journal.pone.0102833</idno>
<date when="2014">2014</date>
<idno type="wicri:Area/Pmc/Corpus">000529</idno>
<idno type="wicri:explorRef" wicri:stream="Pmc" wicri:step="Corpus" wicri:corpus="PMC">000529</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Efficient Transfer Entropy Analysis of Non-Stationary Neural Time Series</title>
<author>
<name sortKey="Wollstadt, Patricia" sort="Wollstadt, Patricia" uniqKey="Wollstadt P" first="Patricia" last="Wollstadt">Patricia Wollstadt</name>
<affiliation>
<nlm:aff id="aff1">
<addr-line>MEG Unit, Brain Imaging Center, Goethe University, Frankfurt, Germany</addr-line>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Martinez Zarzuela, Mario" sort="Martinez Zarzuela, Mario" uniqKey="Martinez Zarzuela M" first="Mario" last="Martínez-Zarzuela">Mario Martínez-Zarzuela</name>
<affiliation>
<nlm:aff id="aff4">
<addr-line>Department of Signal Theory and Communications and Telematics Engineering, University of Valladolid, Valladolid, Spain</addr-line>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Vicente, Raul" sort="Vicente, Raul" uniqKey="Vicente R" first="Raul" last="Vicente">Raul Vicente</name>
<affiliation>
<nlm:aff id="aff2">
<addr-line>Frankfurt Institute for Advanced Studies (FIAS), Goethe University, Frankfurt, Germany</addr-line>
</nlm:aff>
</affiliation>
<affiliation>
<nlm:aff id="aff3">
<addr-line>Max-Planck Institute for Brain Research, Frankfurt, Germany</addr-line>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Diaz Pernas, Francisco J" sort="Diaz Pernas, Francisco J" uniqKey="Diaz Pernas F" first="Francisco J." last="Díaz-Pernas">Francisco J. Díaz-Pernas</name>
<affiliation>
<nlm:aff id="aff4">
<addr-line>Department of Signal Theory and Communications and Telematics Engineering, University of Valladolid, Valladolid, Spain</addr-line>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Wibral, Michael" sort="Wibral, Michael" uniqKey="Wibral M" first="Michael" last="Wibral">Michael Wibral</name>
<affiliation>
<nlm:aff id="aff1">
<addr-line>MEG Unit, Brain Imaging Center, Goethe University, Frankfurt, Germany</addr-line>
</nlm:aff>
</affiliation>
</author>
</analytic>
<series>
<title level="j">PLoS ONE</title>
<idno type="eISSN">1932-6203</idno>
<imprint>
<date when="2014">2014</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>Information theory allows us to investigate information processing in neural systems in terms of information transfer, storage and modification. In particular, the measure of information transfer, transfer entropy, has seen a dramatic surge of interest in neuroscience. Estimating transfer entropy from two processes requires the observation of multiple realizations of these processes to estimate associated probability density functions. To obtain these necessary observations, available estimators typically assume stationarity of processes to allow pooling of observations over time. This assumption, however, is a major obstacle to the application of these estimators in neuroscience, as observed processes are often non-stationary. As a solution, Gomez-Herrero and colleagues theoretically showed that the stationarity assumption may be avoided by estimating transfer entropy from an ensemble of realizations. Such an ensemble of realizations is often readily available in neuroscience experiments in the form of experimental trials. Thus, in this work we combine the ensemble method with a recently proposed transfer entropy estimator to make transfer entropy estimation applicable to non-stationary time series. We present an efficient implementation of the approach that is suitable for the increased computational demand of the ensemble method's practical application. In particular, we use a massively parallel implementation for a graphics processing unit to handle the computationally heaviest aspects of the ensemble method for transfer entropy estimation. We test the performance and robustness of our implementation on data from numerical simulations of stochastic processes. We also demonstrate the applicability of the ensemble method to magnetoencephalographic data. While we mainly evaluate the proposed method for neuroscience data, we expect it to be applicable in a variety of fields that are concerned with the analysis of information transfer in complex biological, social, and artificial systems.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Turing, Am" uniqKey="Turing A">AM Turing</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Langton, Cg" uniqKey="Langton C">CG Langton</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wibral, M" uniqKey="Wibral M">M Wibral</name>
</author>
<author>
<name sortKey="Lizier, Jt" uniqKey="Lizier J">JT Lizier</name>
</author>
<author>
<name sortKey="Vogler, S" uniqKey="Vogler S">S Vögler</name>
</author>
<author>
<name sortKey="Priesemann, V" uniqKey="Priesemann V">V Priesemann</name>
</author>
<author>
<name sortKey="Galuske, R" uniqKey="Galuske R">R Galuske</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lizier, Jt" uniqKey="Lizier J">JT Lizier</name>
</author>
<author>
<name sortKey="Prokopenko, M" uniqKey="Prokopenko M">M Prokopenko</name>
</author>
<author>
<name sortKey="Zomaya, Ay" uniqKey="Zomaya A">AY Zomaya</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Harder, M" uniqKey="Harder M">M Harder</name>
</author>
<author>
<name sortKey="Salge, C" uniqKey="Salge C">C Salge</name>
</author>
<author>
<name sortKey="Polani, D" uniqKey="Polani D">D Polani</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bertschinger, N" uniqKey="Bertschinger N">N Bertschinger</name>
</author>
<author>
<name sortKey="Rauh, J" uniqKey="Rauh J">J Rauh</name>
</author>
<author>
<name sortKey="Olbrich, E" uniqKey="Olbrich E">E Olbrich</name>
</author>
<author>
<name sortKey="Jost, J" uniqKey="Jost J">J Jost</name>
</author>
<author>
<name sortKey="Ay, N" uniqKey="Ay N">N Ay</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lizier, Jt" uniqKey="Lizier J">JT Lizier</name>
</author>
<author>
<name sortKey="Prokopenko, M" uniqKey="Prokopenko M">M Prokopenko</name>
</author>
<author>
<name sortKey="Zomaya, Ay" uniqKey="Zomaya A">AY Zomaya</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schreiber, T" uniqKey="Schreiber T">T Schreiber</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lizier, Jt" uniqKey="Lizier J">JT Lizier</name>
</author>
<author>
<name sortKey="Prokopenko, M" uniqKey="Prokopenko M">M Prokopenko</name>
</author>
<author>
<name sortKey="Zomaya, Ay" uniqKey="Zomaya A">AY Zomaya</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="G Mez, C" uniqKey="G Mez C">C Gómez</name>
</author>
<author>
<name sortKey="Lizier, Jt" uniqKey="Lizier J">JT Lizier</name>
</author>
<author>
<name sortKey="Schaum, M" uniqKey="Schaum M">M Schaum</name>
</author>
<author>
<name sortKey="Wollstadt, P" uniqKey="Wollstadt P">P Wollstadt</name>
</author>
<author>
<name sortKey="Grutzner, C" uniqKey="Grutzner C">C Grützner</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Vicente, R" uniqKey="Vicente R">R Vicente</name>
</author>
<author>
<name sortKey="Wibral, M" uniqKey="Wibral M">M Wibral</name>
</author>
<author>
<name sortKey="Lindner, M" uniqKey="Lindner M">M Lindner</name>
</author>
<author>
<name sortKey="Pipa, G" uniqKey="Pipa G">G Pipa</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wibral, M" uniqKey="Wibral M">M Wibral</name>
</author>
<author>
<name sortKey="Rahm, B" uniqKey="Rahm B">B Rahm</name>
</author>
<author>
<name sortKey="Rieder, M" uniqKey="Rieder M">M Rieder</name>
</author>
<author>
<name sortKey="Lindner, M" uniqKey="Lindner M">M Lindner</name>
</author>
<author>
<name sortKey="Vicente, R" uniqKey="Vicente R">R Vicente</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Palus, M" uniqKey="Palus M">M Paluš</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Vakorin, Va" uniqKey="Vakorin V">VA Vakorin</name>
</author>
<author>
<name sortKey="Kovacevic, N" uniqKey="Kovacevic N">N Kovacevic</name>
</author>
<author>
<name sortKey="Mcintosh, Ar" uniqKey="Mcintosh A">AR McIntosh</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Vakorin, Va" uniqKey="Vakorin V">VA Vakorin</name>
</author>
<author>
<name sortKey="Krakovska, Oa" uniqKey="Krakovska O">OA Krakovska</name>
</author>
<author>
<name sortKey="Mcintosh, Ar" uniqKey="Mcintosh A">AR McIntosh</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Chavez, M" uniqKey="Chavez M">M Chávez</name>
</author>
<author>
<name sortKey="Martinerie, J" uniqKey="Martinerie J">J Martinerie</name>
</author>
<author>
<name sortKey="Le Van Quyen, M" uniqKey="Le Van Quyen M">M Le Van Quyen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Amblard, Po" uniqKey="Amblard P">PO Amblard</name>
</author>
<author>
<name sortKey="Michel, Oj" uniqKey="Michel O">OJ Michel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Barnett, L" uniqKey="Barnett L">L Barnett</name>
</author>
<author>
<name sortKey="Barrett, Ab" uniqKey="Barrett A">AB Barrett</name>
</author>
<author>
<name sortKey="Seth, Ak" uniqKey="Seth A">AK Seth</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Besserve, M" uniqKey="Besserve M">M Besserve</name>
</author>
<author>
<name sortKey="Scholkopf, B" uniqKey="Scholkopf B">B Scholkopf</name>
</author>
<author>
<name sortKey="Logothetis, Nk" uniqKey="Logothetis N">NK Logothetis</name>
</author>
<author>
<name sortKey="Panzeri, S" uniqKey="Panzeri S">S Panzeri</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Buehlmann, A" uniqKey="Buehlmann A">A Buehlmann</name>
</author>
<author>
<name sortKey="Deco, G" uniqKey="Deco G">G Deco</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Garofalo, M" uniqKey="Garofalo M">M Garofalo</name>
</author>
<author>
<name sortKey="Nieus, T" uniqKey="Nieus T">T Nieus</name>
</author>
<author>
<name sortKey="Massobrio, P" uniqKey="Massobrio P">P Massobrio</name>
</author>
<author>
<name sortKey="Martinoia, S" uniqKey="Martinoia S">S Martinoia</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gourevitch, B" uniqKey="Gourevitch B">B Gourevitch</name>
</author>
<author>
<name sortKey="Eggermont, Jj" uniqKey="Eggermont J">JJ Eggermont</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lizier, Jt" uniqKey="Lizier J">JT Lizier</name>
</author>
<author>
<name sortKey="Heinzle, J" uniqKey="Heinzle J">J Heinzle</name>
</author>
<author>
<name sortKey="Horstmann, A" uniqKey="Horstmann A">A Horstmann</name>
</author>
<author>
<name sortKey="Haynes, Jd" uniqKey="Haynes J">JD Haynes</name>
</author>
<author>
<name sortKey="Prokopenko, M" uniqKey="Prokopenko M">M Prokopenko</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ludtke, N" uniqKey="Ludtke N">N Lüdtke</name>
</author>
<author>
<name sortKey="Logothetis, Nk" uniqKey="Logothetis N">NK Logothetis</name>
</author>
<author>
<name sortKey="Panzeri, S" uniqKey="Panzeri S">S Panzeri</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Neymotin, Sa" uniqKey="Neymotin S">SA Neymotin</name>
</author>
<author>
<name sortKey="Jacobs, Km" uniqKey="Jacobs K">KM Jacobs</name>
</author>
<author>
<name sortKey="Fenton, Aa" uniqKey="Fenton A">AA Fenton</name>
</author>
<author>
<name sortKey="Lytton, Ww" uniqKey="Lytton W">WW Lytton</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sabesan, S" uniqKey="Sabesan S">S Sabesan</name>
</author>
<author>
<name sortKey="Good, Lb" uniqKey="Good L">LB Good</name>
</author>
<author>
<name sortKey="Tsakalis, Ks" uniqKey="Tsakalis K">KS Tsakalis</name>
</author>
<author>
<name sortKey="Spanias, A" uniqKey="Spanias A">A Spanias</name>
</author>
<author>
<name sortKey="Treiman, Dm" uniqKey="Treiman D">DM Treiman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Staniek, M" uniqKey="Staniek M">M Staniek</name>
</author>
<author>
<name sortKey="Lehnertz, K" uniqKey="Lehnertz K">K Lehnertz</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Roux, F" uniqKey="Roux F">F Roux</name>
</author>
<author>
<name sortKey="Wibral, M" uniqKey="Wibral M">M Wibral</name>
</author>
<author>
<name sortKey="Singer, W" uniqKey="Singer W">W Singer</name>
</author>
<author>
<name sortKey="Aru, J" uniqKey="Aru J">J Aru</name>
</author>
<author>
<name sortKey="Uhlhaas, Pj" uniqKey="Uhlhaas P">PJ Uhlhaas</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Faes, L" uniqKey="Faes L">L Faes</name>
</author>
<author>
<name sortKey="Nollo, G" uniqKey="Nollo G">G Nollo</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Faes, L" uniqKey="Faes L">L Faes</name>
</author>
<author>
<name sortKey="Nollo, G" uniqKey="Nollo G">G Nollo</name>
</author>
<author>
<name sortKey="Porta, A" uniqKey="Porta A">A Porta</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Faes, L" uniqKey="Faes L">L Faes</name>
</author>
<author>
<name sortKey="Nollo, G" uniqKey="Nollo G">G Nollo</name>
</author>
<author>
<name sortKey="Porta, A" uniqKey="Porta A">A Porta</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kwon, O" uniqKey="Kwon O">O Kwon</name>
</author>
<author>
<name sortKey="Yang, Js" uniqKey="Yang J">JS Yang</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kim, J" uniqKey="Kim J">J Kim</name>
</author>
<author>
<name sortKey="Kim, G" uniqKey="Kim G">G Kim</name>
</author>
<author>
<name sortKey="An, S" uniqKey="An S">S An</name>
</author>
<author>
<name sortKey="Kwon, Yk" uniqKey="Kwon Y">YK Kwon</name>
</author>
<author>
<name sortKey="Yoon, S" uniqKey="Yoon S">S Yoon</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ay, N" uniqKey="Ay N">N Ay</name>
</author>
<author>
<name sortKey="Polani, D" uniqKey="Polani D">D Polani</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lizier, Jt" uniqKey="Lizier J">JT Lizier</name>
</author>
<author>
<name sortKey="Prokopenko, M" uniqKey="Prokopenko M">M Prokopenko</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Chicharro, D" uniqKey="Chicharro D">D Chicharro</name>
</author>
<author>
<name sortKey="Ledberg, A" uniqKey="Ledberg A">A Ledberg</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stramaglia, S" uniqKey="Stramaglia S">S Stramaglia</name>
</author>
<author>
<name sortKey="Wu, Gr" uniqKey="Wu G">GR Wu</name>
</author>
<author>
<name sortKey="Pellicoro, M" uniqKey="Pellicoro M">M Pellicoro</name>
</author>
<author>
<name sortKey="Marinazzo, D" uniqKey="Marinazzo D">D Marinazzo</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bettencourt, Lm" uniqKey="Bettencourt L">LM Bettencourt</name>
</author>
<author>
<name sortKey="Stephens, Gj" uniqKey="Stephens G">GJ Stephens</name>
</author>
<author>
<name sortKey="Ham, Mi" uniqKey="Ham M">MI Ham</name>
</author>
<author>
<name sortKey="Gross, Gw" uniqKey="Gross G">GW Gross</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wibral, M" uniqKey="Wibral M">M Wibral</name>
</author>
<author>
<name sortKey="Pampu, N" uniqKey="Pampu N">N Pampu</name>
</author>
<author>
<name sortKey="Priesemann, V" uniqKey="Priesemann V">V Priesemann</name>
</author>
<author>
<name sortKey="Siebenhuhner, F" uniqKey="Siebenhuhner F">F Siebenhühner</name>
</author>
<author>
<name sortKey="Seiwert, H" uniqKey="Seiwert H">H Seiwert</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wibral, M" uniqKey="Wibral M">M Wibral</name>
</author>
<author>
<name sortKey="Wollstadt, P" uniqKey="Wollstadt P">P Wollstadt</name>
</author>
<author>
<name sortKey="Meyer, U" uniqKey="Meyer U">U Meyer</name>
</author>
<author>
<name sortKey="Pampu, N" uniqKey="Pampu N">N Pampu</name>
</author>
<author>
<name sortKey="Priesemann, V" uniqKey="Priesemann V">V Priesemann</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lindner, M" uniqKey="Lindner M">M Lindner</name>
</author>
<author>
<name sortKey="Vicente, R" uniqKey="Vicente R">R Vicente</name>
</author>
<author>
<name sortKey="Priesemann, V" uniqKey="Priesemann V">V Priesemann</name>
</author>
<author>
<name sortKey="Wibral, M" uniqKey="Wibral M">M Wibral</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kraskov, A" uniqKey="Kraskov A">A Kraskov</name>
</author>
<author>
<name sortKey="Stoegbauer, H" uniqKey="Stoegbauer H">H Stoegbauer</name>
</author>
<author>
<name sortKey="Grassberger, P" uniqKey="Grassberger P">P Grassberger</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Owens, Jd" uniqKey="Owens J">JD Owens</name>
</author>
<author>
<name sortKey="Houston, M" uniqKey="Houston M">M Houston</name>
</author>
<author>
<name sortKey="Luebke, D" uniqKey="Luebke D">D Luebke</name>
</author>
<author>
<name sortKey="Green, S" uniqKey="Green S">S Green</name>
</author>
<author>
<name sortKey="Stone, Je" uniqKey="Stone J">JE Stone</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Brodtkorb, Ar" uniqKey="Brodtkorb A">AR Brodtkorb</name>
</author>
<author>
<name sortKey="Hagen, Tr" uniqKey="Hagen T">TR Hagen</name>
</author>
<author>
<name sortKey="S Tra, Ml" uniqKey="S Tra M">ML Sætra</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lee, D" uniqKey="Lee D">D Lee</name>
</author>
<author>
<name sortKey="Dinov, I" uniqKey="Dinov I">I Dinov</name>
</author>
<author>
<name sortKey="Dong, B" uniqKey="Dong B">B Dong</name>
</author>
<author>
<name sortKey="Gutman, B" uniqKey="Gutman B">B Gutman</name>
</author>
<author>
<name sortKey="Yanovsky, I" uniqKey="Yanovsky I">I Yanovsky</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Martinez Zarzuela, M" uniqKey="Martinez Zarzuela M">M Martínez-Zarzuela</name>
</author>
<author>
<name sortKey="G Mez, C" uniqKey="G Mez C">C Gómez</name>
</author>
<author>
<name sortKey="Diaz Pernas, Fj" uniqKey="Diaz Pernas F">FJ Díaz-Pernas</name>
</author>
<author>
<name sortKey="Fernandez, A" uniqKey="Fernandez A">A Fernández</name>
</author>
<author>
<name sortKey="Hornero, R" uniqKey="Hornero R">R Hornero</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Konstantinidis, Ei" uniqKey="Konstantinidis E">EI Konstantinidis</name>
</author>
<author>
<name sortKey="Frantzidis, Ca" uniqKey="Frantzidis C">CA Frantzidis</name>
</author>
<author>
<name sortKey="Pappas, C" uniqKey="Pappas C">C Pappas</name>
</author>
<author>
<name sortKey="Bamidis, Pd" uniqKey="Bamidis P">PD Bamidis</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Arefin, As" uniqKey="Arefin A">AS Arefin</name>
</author>
<author>
<name sortKey="Riveros, C" uniqKey="Riveros C">C Riveros</name>
</author>
<author>
<name sortKey="Berretta, R" uniqKey="Berretta R">R Berretta</name>
</author>
<author>
<name sortKey="Moscato, P" uniqKey="Moscato P">P Moscato</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wilson, Ja" uniqKey="Wilson J">JA Wilson</name>
</author>
<author>
<name sortKey="Williams, Jc" uniqKey="Williams J">JC Williams</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Chen, D" uniqKey="Chen D">D Chen</name>
</author>
<author>
<name sortKey="Wang, L" uniqKey="Wang L">L Wang</name>
</author>
<author>
<name sortKey="Ouyang, G" uniqKey="Ouyang G">G Ouyang</name>
</author>
<author>
<name sortKey="Li, X" uniqKey="Li X">X Li</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Liu, Y" uniqKey="Liu Y">Y Liu</name>
</author>
<author>
<name sortKey="Schmidt, B" uniqKey="Schmidt B">B Schmidt</name>
</author>
<author>
<name sortKey="Liu, W" uniqKey="Liu W">W Liu</name>
</author>
<author>
<name sortKey="Maskell, Dl" uniqKey="Maskell D">DL Maskell</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Merkwirth, C" uniqKey="Merkwirth C">C Merkwirth</name>
</author>
<author>
<name sortKey="Parlitz, U" uniqKey="Parlitz U">U Parlitz</name>
</author>
<author>
<name sortKey="Lauterborn, W" uniqKey="Lauterborn W">W Lauterborn</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gardner, Wa" uniqKey="Gardner W">WA Gardner</name>
</author>
<author>
<name sortKey="Napolitano, A" uniqKey="Napolitano A">A Napolitano</name>
</author>
<author>
<name sortKey="Paura, L" uniqKey="Paura L">L Paura</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ragwitz, M" uniqKey="Ragwitz M">M Ragwitz</name>
</author>
<author>
<name sortKey="Kantz, H" uniqKey="Kantz H">H Kantz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kozachenko, L" uniqKey="Kozachenko L">L Kozachenko</name>
</author>
<author>
<name sortKey="Leonenko, N" uniqKey="Leonenko N">N Leonenko</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Victor, Jd" uniqKey="Victor J">JD Victor</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Maris, E" uniqKey="Maris E">E Maris</name>
</author>
<author>
<name sortKey="Oostenveld, R" uniqKey="Oostenveld R">R Oostenveld</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bentley, Jl" uniqKey="Bentley J">JL Bentley</name>
</author>
<author>
<name sortKey="Friedman, Jh" uniqKey="Friedman J">JH Friedman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Arya, S" uniqKey="Arya S">S Arya</name>
</author>
<author>
<name sortKey="Mount, Dm" uniqKey="Mount D">DM Mount</name>
</author>
<author>
<name sortKey="Netanyahu, Ns" uniqKey="Netanyahu N">NS Netanyahu</name>
</author>
<author>
<name sortKey="Silverman, R" uniqKey="Silverman R">R Silverman</name>
</author>
<author>
<name sortKey="Wu, Ay" uniqKey="Wu A">AY Wu</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Grutzner, C" uniqKey="Grutzner C">C Grützner</name>
</author>
<author>
<name sortKey="Uhlhaas, Pj" uniqKey="Uhlhaas P">PJ Uhlhaas</name>
</author>
<author>
<name sortKey="Genc, E" uniqKey="Genc E">E Genc</name>
</author>
<author>
<name sortKey="Kohler, A" uniqKey="Kohler A">A Kohler</name>
</author>
<author>
<name sortKey="Singer, W" uniqKey="Singer W">W Singer</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mooney, Cm" uniqKey="Mooney C">CM Mooney</name>
</author>
<author>
<name sortKey="Ferguson, Ga" uniqKey="Ferguson G">GA Ferguson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Oostenveld, R" uniqKey="Oostenveld R">R Oostenveld</name>
</author>
<author>
<name sortKey="Fries, P" uniqKey="Fries P">P Fries</name>
</author>
<author>
<name sortKey="Maris, E" uniqKey="Maris E">E Maris</name>
</author>
<author>
<name sortKey="Schoffelen, Jm" uniqKey="Schoffelen J">JM Schoffelen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gross, J" uniqKey="Gross J">J Gross</name>
</author>
<author>
<name sortKey="Kujala, J" uniqKey="Kujala J">J Kujala</name>
</author>
<author>
<name sortKey="Hamalainen, M" uniqKey="Hamalainen M">M Hamalainen</name>
</author>
<author>
<name sortKey="Timmermann, L" uniqKey="Timmermann L">L Timmermann</name>
</author>
<author>
<name sortKey="Schnitzler, A" uniqKey="Schnitzler A">A Schnitzler</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Brookes, Mj" uniqKey="Brookes M">MJ Brookes</name>
</author>
<author>
<name sortKey="Vrba, J" uniqKey="Vrba J">J Vrba</name>
</author>
<author>
<name sortKey="Robinson, Se" uniqKey="Robinson S">SE Robinson</name>
</author>
<author>
<name sortKey="Stevenson, Cm" uniqKey="Stevenson C">CM Stevenson</name>
</author>
<author>
<name sortKey="Peters, Am" uniqKey="Peters A">AM Peters</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bar, M" uniqKey="Bar M">M Bar</name>
</author>
<author>
<name sortKey="Kassam, Ks" uniqKey="Kassam K">KS Kassam</name>
</author>
<author>
<name sortKey="Ghuman, As" uniqKey="Ghuman A">AS Ghuman</name>
</author>
<author>
<name sortKey="Boshyan, J" uniqKey="Boshyan J">J Boshyan</name>
</author>
<author>
<name sortKey="Schmid, Am" uniqKey="Schmid A">AM Schmid</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Verdes, Pf" uniqKey="Verdes P">PF Verdes</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Pompe, B" uniqKey="Pompe B">B Pompe</name>
</author>
<author>
<name sortKey="Runge, J" uniqKey="Runge J">J Runge</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Marschinski, R" uniqKey="Marschinski R">R Marschinski</name>
</author>
<author>
<name sortKey="Kantz, H" uniqKey="Kantz H">H Kantz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sauseng, P" uniqKey="Sauseng P">P Sauseng</name>
</author>
<author>
<name sortKey="Klimesch, W" uniqKey="Klimesch W">W Klimesch</name>
</author>
<author>
<name sortKey="Gruber, Wr" uniqKey="Gruber W">WR Gruber</name>
</author>
<author>
<name sortKey="Hanslmayr, S" uniqKey="Hanslmayr S">S Hanslmayr</name>
</author>
<author>
<name sortKey="Freunberger, R" uniqKey="Freunberger R">R Freunberger</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Makeig, S" uniqKey="Makeig S">S Makeig</name>
</author>
<author>
<name sortKey="Debener, S" uniqKey="Debener S">S Debener</name>
</author>
<author>
<name sortKey="Onton, J" uniqKey="Onton J">J Onton</name>
</author>
<author>
<name sortKey="Delorme, A" uniqKey="Delorme A">A Delorme</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Shah, As" uniqKey="Shah A">AS Shah</name>
</author>
<author>
<name sortKey="Bressler, Sl" uniqKey="Bressler S">SL Bressler</name>
</author>
<author>
<name sortKey="Knuth, Kh" uniqKey="Knuth K">KH Knuth</name>
</author>
<author>
<name sortKey="Ding, M" uniqKey="Ding M">M Ding</name>
</author>
<author>
<name sortKey="Mehta, Ad" uniqKey="Mehta A">AD Mehta</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sayers, Bm" uniqKey="Sayers B">BM Sayers</name>
</author>
<author>
<name sortKey="Beagley, H" uniqKey="Beagley H">H Beagley</name>
</author>
<author>
<name sortKey="Henshall, W" uniqKey="Henshall W">W Henshall</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Makeig, S" uniqKey="Makeig S">S Makeig</name>
</author>
<author>
<name sortKey="Westerfield, M" uniqKey="Westerfield M">M Westerfield</name>
</author>
<author>
<name sortKey="Jung, Tp" uniqKey="Jung T">TP Jung</name>
</author>
<author>
<name sortKey="Enghoff, S" uniqKey="Enghoff S">S Enghoff</name>
</author>
<author>
<name sortKey="Townsend, J" uniqKey="Townsend J">J Townsend</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Jansen, Bh" uniqKey="Jansen B">BH Jansen</name>
</author>
<author>
<name sortKey="Agarwal, G" uniqKey="Agarwal G">G Agarwal</name>
</author>
<author>
<name sortKey="Hegde, A" uniqKey="Hegde A">A Hegde</name>
</author>
<author>
<name sortKey="Boutros, Nn" uniqKey="Boutros N">NN Boutros</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Klimesch, W" uniqKey="Klimesch W">W Klimesch</name>
</author>
<author>
<name sortKey="Schack, B" uniqKey="Schack B">B Schack</name>
</author>
<author>
<name sortKey="Schabus, M" uniqKey="Schabus M">M Schabus</name>
</author>
<author>
<name sortKey="Doppelmayr, M" uniqKey="Doppelmayr M">M Doppelmayr</name>
</author>
<author>
<name sortKey="Gruber, W" uniqKey="Gruber W">W Gruber</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Turi, G" uniqKey="Turi G">G Turi</name>
</author>
<author>
<name sortKey="Gotthardt, S" uniqKey="Gotthardt S">S Gotthardt</name>
</author>
<author>
<name sortKey="Singer, W" uniqKey="Singer W">W Singer</name>
</author>
<author>
<name sortKey="Vuong, Ta" uniqKey="Vuong T">TA Vuong</name>
</author>
<author>
<name sortKey="Munk, M" uniqKey="Munk M">M Munk</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Moller, E" uniqKey="Moller E">E Möller</name>
</author>
<author>
<name sortKey="Schack, B" uniqKey="Schack B">B Schack</name>
</author>
<author>
<name sortKey="Arnold, M" uniqKey="Arnold M">M Arnold</name>
</author>
<author>
<name sortKey="Witte, H" uniqKey="Witte H">H Witte</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ding, M" uniqKey="Ding M">M Ding</name>
</author>
<author>
<name sortKey="Bressler, Sl" uniqKey="Bressler S">SL Bressler</name>
</author>
<author>
<name sortKey="Yang, W" uniqKey="Yang W">W Yang</name>
</author>
<author>
<name sortKey="Liang, H" uniqKey="Liang H">H Liang</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hesse, W" uniqKey="Hesse W">W Hesse</name>
</author>
<author>
<name sortKey="Moller, E" uniqKey="Moller E">E Möller</name>
</author>
<author>
<name sortKey="Arnold, M" uniqKey="Arnold M">M Arnold</name>
</author>
<author>
<name sortKey="Schack, B" uniqKey="Schack B">B Schack</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Leistritz, L" uniqKey="Leistritz L">L Leistritz</name>
</author>
<author>
<name sortKey="Hesse, W" uniqKey="Hesse W">W Hesse</name>
</author>
<author>
<name sortKey="Arnold, M" uniqKey="Arnold M">M Arnold</name>
</author>
<author>
<name sortKey="Witte, H" uniqKey="Witte H">H Witte</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wibral, M" uniqKey="Wibral M">M Wibral</name>
</author>
<author>
<name sortKey="Turi, G" uniqKey="Turi G">G Turi</name>
</author>
<author>
<name sortKey="Linden, Dej" uniqKey="Linden D">DEJ Linden</name>
</author>
<author>
<name sortKey="Kaiser, J" uniqKey="Kaiser J">J Kaiser</name>
</author>
<author>
<name sortKey="Bledowski, C" uniqKey="Bledowski C">C Bledowski</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Andrzejak, Rg" uniqKey="Andrzejak R">RG Andrzejak</name>
</author>
<author>
<name sortKey="Ledberg, A" uniqKey="Ledberg A">A Ledberg</name>
</author>
<author>
<name sortKey="Deco, G" uniqKey="Deco G">G Deco</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Georgieva, Ss" uniqKey="Georgieva S">SS Georgieva</name>
</author>
<author>
<name sortKey="Todd, Jt" uniqKey="Todd J">JT Todd</name>
</author>
<author>
<name sortKey="Peeters, R" uniqKey="Peeters R">R Peeters</name>
</author>
<author>
<name sortKey="Orban, Ga" uniqKey="Orban G">GA Orban</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kanwisher, N" uniqKey="Kanwisher N">N Kanwisher</name>
</author>
<author>
<name sortKey="Tong, F" uniqKey="Tong F">F Tong</name>
</author>
<author>
<name sortKey="Nakayama, K" uniqKey="Nakayama K">K Nakayama</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Andrews, Tj" uniqKey="Andrews T">TJ Andrews</name>
</author>
<author>
<name sortKey="Schluppeck, D" uniqKey="Schluppeck D">D Schluppeck</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mckeeff, Tj" uniqKey="Mckeeff T">TJ McKeeff</name>
</author>
<author>
<name sortKey="Tong, F" uniqKey="Tong F">F Tong</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Faes, L" uniqKey="Faes L">L Faes</name>
</author>
<author>
<name sortKey="Nollo, G" uniqKey="Nollo G">G Nollo</name>
</author>
<author>
<name sortKey="Porta, A" uniqKey="Porta A">A Porta</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">PLoS One</journal-id>
<journal-id journal-id-type="iso-abbrev">PLoS ONE</journal-id>
<journal-id journal-id-type="publisher-id">plos</journal-id>
<journal-id journal-id-type="pmc">plosone</journal-id>
<journal-title-group>
<journal-title>PLoS ONE</journal-title>
</journal-title-group>
<issn pub-type="epub">1932-6203</issn>
<publisher>
<publisher-name>Public Library of Science</publisher-name>
<publisher-loc>San Francisco, USA</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">25068489</article-id>
<article-id pub-id-type="pmc">4113280</article-id>
<article-id pub-id-type="publisher-id">PONE-D-13-53869</article-id>
<article-id pub-id-type="doi">10.1371/journal.pone.0102833</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Research Article</subject>
</subj-group>
<subj-group subj-group-type="Discipline-v2">
<subject>Biology and Life Sciences</subject>
<subj-group>
<subject>Biophysics</subject>
<subj-group>
<subject>Biophysics Theory</subject>
</subj-group>
</subj-group>
<subj-group>
<subject>Computational Biology</subject>
<subj-group>
<subject>Computational Neuroscience</subject>
</subj-group>
</subj-group>
<subj-group>
<subject>Neuroscience</subject>
<subj-group>
<subject>Cognitive Neuroscience</subject>
<subject>Neuroimaging</subject>
</subj-group>
</subj-group>
<subj-group>
<subject>Physiology</subject>
<subj-group>
<subject>Electrophysiology</subject>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v2">
<subject>Computer and Information Sciences</subject>
<subj-group>
<subject>Computing Methods</subject>
<subj-group>
<subject>Mathematical Computing</subject>
</subj-group>
</subj-group>
<subj-group>
<subject>Information Theory</subject>
</subj-group>
<subj-group>
<subject>Software Engineering</subject>
<subj-group>
<subject>Software Tools</subject>
</subj-group>
</subj-group>
<subj-group>
<subject>Systems Science</subject>
<subj-group>
<subject>Complex Systems</subject>
<subject>Nonlinear Dynamics</subject>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v2">
<subject>Medicine and Health Sciences</subject>
<subj-group>
<subject>Medical Physics</subject>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v2">
<subject>Physical Sciences</subject>
<subj-group>
<subject>Mathematics</subject>
<subj-group>
<subject>Applied Mathematics</subject>
<subj-group>
<subject>Algorithms</subject>
</subj-group>
</subj-group>
<subj-group>
<subject>Probability Theory</subject>
<subj-group>
<subject>Probability Density</subject>
<subject>Probability Distribution</subject>
<subject>Stochastic Processes</subject>
</subj-group>
</subj-group>
<subj-group>
<subject>Statistics (Mathematics)</subject>
<subj-group>
<subject>Biostatistics</subject>
<subject>Statistical Methods</subject>
</subj-group>
</subj-group>
<subj-group>
<subject>Numerical Analysis</subject>
</subj-group>
</subj-group>
<subj-group>
<subject>Physics</subject>
<subj-group>
<subject>Interdisciplinary Physics</subject>
<subject>Statistical Mechanics</subject>
</subj-group>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Efficient Transfer Entropy Analysis of Non-Stationary Neural Time Series</article-title>
<alt-title alt-title-type="running-head">Non-Stationary Transfer Entropy</alt-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" equal-contrib="yes">
<name>
<surname>Wollstadt</surname>
<given-names>Patricia</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="corresp" rid="cor1">
<sup>*</sup>
</xref>
</contrib>
<contrib contrib-type="author" equal-contrib="yes">
<name>
<surname>Martínez-Zarzuela</surname>
<given-names>Mario</given-names>
</name>
<xref ref-type="aff" rid="aff4">
<sup>4</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Vicente</surname>
<given-names>Raul</given-names>
</name>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
<xref ref-type="aff" rid="aff3">
<sup>3</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Díaz-Pernas</surname>
<given-names>Francisco J.</given-names>
</name>
<xref ref-type="aff" rid="aff4">
<sup>4</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Wibral</surname>
<given-names>Michael</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
</contrib>
</contrib-group>
<aff id="aff1">
<label>1</label>
<addr-line>MEG Unit, Brain Imaging Center, Goethe University, Frankfurt, Germany</addr-line>
</aff>
<aff id="aff2">
<label>2</label>
<addr-line>Frankfurt Institute for Advanced Studies (FIAS), Goethe University, Frankfurt, Germany</addr-line>
</aff>
<aff id="aff3">
<label>3</label>
<addr-line>Max-Planck Institute for Brain Research, Frankfurt, Germany</addr-line>
</aff>
<aff id="aff4">
<label>4</label>
<addr-line>Department of Signal Theory and Communications and Telematics Engineering, University of Valladolid, Valladolid, Spain</addr-line>
</aff>
<contrib-group>
<contrib contrib-type="editor">
<name>
<surname>Marinazzo</surname>
<given-names>Daniele</given-names>
</name>
<role>Editor</role>
<xref ref-type="aff" rid="edit1"></xref>
</contrib>
</contrib-group>
<aff id="edit1">
<addr-line>Universiteit Gent, Belgium</addr-line>
</aff>
<author-notes>
<corresp id="cor1">* E-mail:
<email>p.wollstadt@stud.uni-frankfurt.de</email>
</corresp>
<fn fn-type="conflict">
<p>
<bold>Competing Interests: </bold>
The authors have declared that no competing interests exist.</p>
</fn>
<fn fn-type="con">
<p>Conceived and designed the experiments: MW RV MMZ PW FDP. Performed the experiments: PW MMZ MW. Analyzed the data: PW MMZ MW. Contributed reagents/materials/analysis tools: MMZ PW MW RV FDP. Wrote the paper: PW MMZ RV MW. Conceived and designed the parallel algorithm: MW MMZ. Implemented the algorithm: MMZ PW MW FDP. Designed the software used in analysis: MW RV MMZ PW FDP.</p>
</fn>
</author-notes>
<pub-date pub-type="collection">
<year>2014</year>
</pub-date>
<pub-date pub-type="epub">
<day>28</day>
<month>7</month>
<year>2014</year>
</pub-date>
<volume>9</volume>
<issue>7</issue>
<elocation-id>e102833</elocation-id>
<history>
<date date-type="received">
<day>20</day>
<month>12</month>
<year>2013</year>
</date>
<date date-type="accepted">
<day>24</day>
<month>6</month>
<year>2014</year>
</date>
</history>
<permissions>
<copyright-year>2014</copyright-year>
<copyright-holder>Wollstadt et al</copyright-holder>
<license>
<license-p>This is an open-access article distributed under the terms of the
<ext-link ext-link-type="uri" xlink:href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution License</ext-link>
, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.</license-p>
</license>
</permissions>
<abstract>
<p>Information theory allows us to investigate information processing in neural systems in terms of information transfer, storage and modification. Transfer entropy, the measure of information transfer, has seen a particularly dramatic surge of interest in neuroscience. Estimating transfer entropy from two processes requires the observation of multiple realizations of these processes to estimate associated probability density functions. To obtain these necessary observations, available estimators typically assume stationarity of processes to allow pooling of observations over time. This assumption, however, is a major obstacle to the application of these estimators in neuroscience, as observed processes are often non-stationary. As a solution, Gomez-Herrero and colleagues theoretically showed that the stationarity assumption may be avoided by estimating transfer entropy from an ensemble of realizations. Such an ensemble of realizations is often readily available in neuroscience experiments in the form of experimental trials. Thus, in this work we combine the ensemble method with a recently proposed transfer entropy estimator to make transfer entropy estimation applicable to non-stationary time series. We present an efficient implementation of the approach that is suitable for the increased computational demand of the ensemble method's practical application. In particular, we use a massively parallel implementation for a graphics processing unit to handle the computationally heaviest aspects of the ensemble method for transfer entropy estimation. We test the performance and robustness of our implementation on data from numerical simulations of stochastic processes. We also demonstrate the applicability of the ensemble method to magnetoencephalographic data. 
While we mainly evaluate the proposed method for neuroscience data, we expect it to be applicable in a variety of fields that are concerned with the analysis of information transfer in complex biological, social, and artificial systems.</p>
</abstract>
<funding-group>
<funding-statement>MW and RV received financial support from LOEWE Grant “Neuronale Koordination Forschungsschwerpunkt Frankfurt (NeFF)”. MMZ received financial support from the University of Valladolid. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.</funding-statement>
</funding-group>
<counts>
<page-count count="21"></page-count>
</counts>
</article-meta>
</front>
<body>
<sec sec-type="intro" id="s1">
<title>Introduction</title>
<p>We typically think of the brain as some kind of information processing system, albeit mostly without having a strict definition of information processing in mind. However, more formal accounts of information processing exist, and may be applied to brain research. In efforts dating back to Alan Turing
<xref rid="pone.0102833-Turing1" ref-type="bibr">[1]</xref>
it was shown that any act of information processing can be broken down into the three components of information storage, information transfer, and information modification
<xref rid="pone.0102833-Turing1" ref-type="bibr">[1]</xref>
<xref rid="pone.0102833-Lizier1" ref-type="bibr">[4]</xref>
. These components can be easily identified in theoretical or technical information processing systems, such as ordinary computers, based on the specialized machinery for and the spatial separation of these component functions. In these examples, a separation of the components of information processing via a specialized mathematical formalism seems almost superfluous. However, in biological systems in general, and in the brain in particular, we deal with a form of distributed information processing based on a large number of interacting agents (neurons), and each agent at each moment in time subserves any of the three component functions to a varying degree (see
<xref rid="pone.0102833-Wibral1" ref-type="bibr">[5]</xref>
for an example of time-varying storage). In neural systems it is indeed crucial to understand where and when information storage, transfer and modification take place, to constrain possible algorithms run by the system. While there is still a struggle to properly define information modification
<xref rid="pone.0102833-Lizier2" ref-type="bibr">[6]</xref>
,
<xref rid="pone.0102833-Lizier3" ref-type="bibr">[7]</xref>
and its proper measure
<xref rid="pone.0102833-Williams1" ref-type="bibr">[8]</xref>
<xref rid="pone.0102833-Bertschinger2" ref-type="bibr">[12]</xref>
, well established measures for (local active) information storage
<xref rid="pone.0102833-Lizier4" ref-type="bibr">[13]</xref>
, information transfer
<xref rid="pone.0102833-Schreiber1" ref-type="bibr">[14]</xref>
, and its localization in time and space
<xref rid="pone.0102833-Lizier5" ref-type="bibr">[15]</xref>
,
<xref rid="pone.0102833-Lizier6" ref-type="bibr">[16]</xref>
exist, and are applied in neuroscience (for information storage see
<xref rid="pone.0102833-Wibral1" ref-type="bibr">[5]</xref>
,
<xref rid="pone.0102833-Gmez1" ref-type="bibr">[17]</xref>
,
<xref rid="pone.0102833-Dasgupta1" ref-type="bibr">[18]</xref>
, for information transfer see below).</p>
<p>In particular, the measure of information transfer, transfer entropy (TE), has seen a dramatic surge of interest in neuroscience
<xref rid="pone.0102833-Vicente1" ref-type="bibr">[19]</xref>
<xref rid="pone.0102833-Faes1" ref-type="bibr">[41]</xref>
, physiology
<xref rid="pone.0102833-Faes2" ref-type="bibr">[42]</xref>
<xref rid="pone.0102833-Faes4" ref-type="bibr">[44]</xref>
, and other fields
<xref rid="pone.0102833-Lizier2" ref-type="bibr">[6]</xref>
,
<xref rid="pone.0102833-Lizier5" ref-type="bibr">[15]</xref>
,
<xref rid="pone.0102833-Lizier7" ref-type="bibr">[31]</xref>
,
<xref rid="pone.0102833-Kwon1" ref-type="bibr">[45]</xref>
,
<xref rid="pone.0102833-Kim1" ref-type="bibr">[46]</xref>
. Nevertheless, conceptual and practical problems still exist. On the conceptual side, information transfer has for a while been confused with causal interactions, and only some recent studies
<xref rid="pone.0102833-Ay1" ref-type="bibr">[47]</xref>
<xref rid="pone.0102833-Chicharro1" ref-type="bibr">[49]</xref>
made clear that there can be no one-to-one mapping between causal interactions and information transfer, because causal interactions will subserve all
<italic>three</italic>
components of information processing (transfer, storage, modification). However, it is information transfer, rather than causal interactions, we might be interested in when trying to understand a computational process in the brain
<xref rid="pone.0102833-Lizier8" ref-type="bibr">[48]</xref>
.</p>
<p>On the practical side, efforts to apply measures of information transfer in neuroscience have been hampered by two obstacles: (1) the need to analyze the information processing in a multivariate manner, to arrive at unambiguous conclusions that are not clouded by spurious traces of information transfer, e.g. due to effects of cascades and common drivers; (2) the fact that available estimators of information transfer typically require the processes under investigation to be stationary.</p>
<p>The first obstacle can in principle be overcome by conditioning TE on all other processes in a system, using a fully multivariate approach that had already been formulated by Schreiber
<xref rid="pone.0102833-Schreiber1" ref-type="bibr">[14]</xref>
. However, the naive application of this approach normally fails because the samples available for estimation are typically too few. Therefore, four approaches to building an approximate representation of the information transfer network have recently been suggested: Lizier and Rubinov
<xref rid="pone.0102833-Lizier9" ref-type="bibr">[50]</xref>
, Faes and colleagues
<xref rid="pone.0102833-Faes4" ref-type="bibr">[44]</xref>
, and Stramaglia and colleagues
<xref rid="pone.0102833-Stramaglia1" ref-type="bibr">[51]</xref>
presented algorithms for iterative inclusion of processes into an approximate multivariate description. In the approach suggested by Stramaglia and colleagues, conditional mutual information terms are additionally computed at each level as a self-truncating series expansion, following a suggestion by Bettencourt and colleagues
<xref rid="pone.0102833-Bettencourt1" ref-type="bibr">[52]</xref>
. In contrast to these approaches that explicitly compute conditional TE terms, we recently suggested an approximation based on a reconstruction of information transfer delays
<xref rid="pone.0102833-Wibral4" ref-type="bibr">[53]</xref>
and a graphical pruning algorithm
<xref rid="pone.0102833-Wibral5" ref-type="bibr">[54]</xref>
. While the first three approaches will eventually be closer to the ground truth, the graphical method may be better applicable to very limited amounts of data. In sum, the first problem of multivariate analysis can be considered solved for practical purposes, given enough data are available.</p>
<p>The second obstacle of dealing with non-stationary processes is also not a fundamental one, as the definition of TE relies on the availability of multiple realizations of (two or more) random processes, which can be obtained by running an ensemble of many identical copies of the processes in question, or by running one process multiple times. Only when obtaining data from such copies or repetitions is impossible do we have to turn to a stationarity assumption in order to evaluate the necessary probability density functions (PDF) based on a single realization.</p>
<p>Fortunately, in neuroscience we can often obtain many realizations of the processes in question by repeating an experiment. In fact, this is the typical procedure in neuroscience: we repeat trials under conditions that are kept as constant as possible (i.e., we create a cyclostationary process). The possibility of using such an
<italic>ensemble</italic>
of data to estimate the time resolved TE has already been demonstrated theoretically by Gomez-Herrero and colleagues
<xref rid="pone.0102833-GomezHerrero1" ref-type="bibr">[55]</xref>
. Practically, however, the statistical testing necessary for this ensemble-based method leads to an increase in computational cost by several orders of magnitude, as some shortcuts in statistical validation that can be taken for stationary data cannot be used for the ensemble approach (see
<xref rid="pone.0102833-Lindner1" ref-type="bibr">[56]</xref>
): For stationary data, TE is calculated per trial and
<italic>one</italic>
set of trial-based surrogate data may be used for statistical testing. The ensemble method does not allow for trial-based TE estimation as TE is estimated across trials. Instead, the ensemble method requires the generation of a sufficiently large number of surrogate data sets, for
<italic>all</italic>
of which TE has to be estimated, thus multiplying the computational demand by the number of surrogate data sets. Therefore, the use of the ensemble method has remained a theoretical possibility so far, especially in combination with the nearest neighbor-based estimation techniques by Kraskov and colleagues
<xref rid="pone.0102833-Kraskov1" ref-type="bibr">[57]</xref>
that provide the most precise, yet computationally most heavy TE estimates. For example, the analysis of magnetoencephalographic data presented here would require a runtime of 8200 h for 15 subjects and a single experimental condition. It is easy to see that any practical application of the methods hinges on a substantial speed-up of the computation.</p>
<p>Fortunately, the algorithms involved in ensemble-based TE estimation lend themselves easily to data-parallel processing, since most of the algorithm's fundamental parts can be computed simultaneously. Thus, our problem matches the massively parallel architecture of Graphics Processing Unit (GPU) devices well. GPUs were originally devised only for computer graphics, but are routinely used to speed up computations in many areas today
<xref rid="pone.0102833-Owens1" ref-type="bibr">[58]</xref>
,
<xref rid="pone.0102833-Brodtkorb1" ref-type="bibr">[59]</xref>
. Also in neuroscience, where applied algorithms continue to grow faster in complexity than the CPU performance, the use of GPUs with data-parallel methods is becoming increasingly important
<xref rid="pone.0102833-Lee1" ref-type="bibr">[60]</xref>
and GPUs have successfully been used to speed up time series analysis in neuroscientific experiments
<xref rid="pone.0102833-MartnezZarzuela1" ref-type="bibr">[61]</xref>
<xref rid="pone.0102833-Liu1" ref-type="bibr">[66]</xref>
.</p>
<p>Thus, in order to overcome the limitations set by the computational demands of TE analysis from an ensemble of data, we developed a GPU implementation of the algorithm, where the neighbor searches underlying the binless TE estimation
<xref rid="pone.0102833-Kraskov1" ref-type="bibr">[57]</xref>
are executed in parallel on the GPU. After parallelizing this computationally heaviest aspect of TE estimation, we were able to use the ensemble method for TE estimation proposed by
<xref rid="pone.0102833-GomezHerrero1" ref-type="bibr">[55]</xref>
, to estimate time-resolved TE from non-stationary neural time series in acceptable time. Using the new GPU-based TE estimation tool on a high-end consumer graphics card reduced computation time by a factor of 50 compared to the CPU-optimized TE search used previously
<xref rid="pone.0102833-Merkwirth1" ref-type="bibr">[67]</xref>
. In practical terms, this speedup shortens the duration of an ensemble-based analysis for typical neural data sets enough to make the application of the ensemble method feasible for the first time.</p>
</sec>
<sec id="s2">
<title>Background</title>
<p>Our study focuses on making the application of ensemble-based estimation of TE from non-stationary data practical using a GPU-based algorithm. For the convenience of the reader, we will also present the necessary background on stationarity, TE estimation using the Kraskov-Stögbauer-Grassberger (KSG) estimator
<xref rid="pone.0102833-Vicente1" ref-type="bibr">[19]</xref>
, and the ensemble method of Gomez-Herrero et al.
<xref rid="pone.0102833-GomezHerrero1" ref-type="bibr">[55]</xref>
in condensed form in a short background section below. Readers well familiar with these topics can safely skip ahead to the
<italic>Implementation</italic>
section below.</p>
<sec id="s2a">
<title>Notation</title>
<p>To describe practical TE estimation from time series recorded in a system of interest
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e001.jpg"></inline-graphic>
</inline-formula>
(e.g. a brain area), we first have to formalize these recordings mathematically: We define an observed time series
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e002.jpg"></inline-graphic>
</inline-formula>
as a realization of a random process
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e003.jpg"></inline-graphic>
</inline-formula>
. A random process here is simply a collection of individual random variables sorted by an integer index
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e004.jpg"></inline-graphic>
</inline-formula>
, representing time. TE or other information theoretic functionals are then calculated from the random variables' joint PDFs
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e005.jpg"></inline-graphic>
</inline-formula>
and conditional PDFs
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e006.jpg"></inline-graphic>
</inline-formula>
(with
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e007.jpg"></inline-graphic>
</inline-formula>
), where
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e008.jpg"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e009.jpg"></inline-graphic>
</inline-formula>
are all possible outcomes of the random variables
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e010.jpg"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e011.jpg"></inline-graphic>
</inline-formula>
, and where
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e012.jpg"></inline-graphic>
</inline-formula>
.</p>
<p>We call information theoretic quantities functionals as they are defined as functions that map from the space of PDFs to the real numbers. If we have to estimate the underlying probabilities from experimental data first, the mapping from the data to the information theoretic quantity (a real number) is called an estimator.</p>
</sec>
<sec id="s2b">
<title>Stationarity and non-stationarity in experimental time series</title>
<p>PDFs in neuroscience are typically not known
<italic>a priori</italic>
, so in order to estimate information theoretic functionals, these PDFs have to be reconstructed from a sufficient amount of observed realizations of the process. How these realizations are obtained from data depends on whether the process in question is stationary or non-stationary. Stationarity of a process means that PDFs of the random variables that form the random process do not change over time, such that
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e013.jpg"></inline-graphic>
</inline-formula>
. Any PDF
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e014.jpg"></inline-graphic>
</inline-formula>
may then be estimated from one observation of process
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e015.jpg"></inline-graphic>
</inline-formula>
by means of collecting realizations
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e016.jpg"></inline-graphic>
</inline-formula>
<italic>over time</italic>
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e017.jpg"></inline-graphic>
</inline-formula>
.</p>
<p>For processes that do not fulfill the stationarity assumption, temporal pooling is not applicable, as PDFs vary over time
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e018.jpg"></inline-graphic>
</inline-formula>
and some random variables
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e019.jpg"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e020.jpg"></inline-graphic>
</inline-formula>
(at least two) are associated with different PDFs
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e021.jpg"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e022.jpg"></inline-graphic>
</inline-formula>
(
<xref ref-type="fig" rid="pone-0102833-g001">Figure 1</xref>
). To still gain the necessary multiple observations of a random variable
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e023.jpg"></inline-graphic>
</inline-formula>
we may resort to either running multiple physical copies of the process
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e024.jpg"></inline-graphic>
</inline-formula>
or – in cases where physical copies are unavailable – to repeating the process in time. If we choose the number of repetitions large enough, i.e. if there is a sufficiently large set
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e025.jpg"></inline-graphic>
</inline-formula>
of time points
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e026.jpg"></inline-graphic>
</inline-formula>
, at which the process is repeated, we can assume that
<disp-formula id="pone.0102833.e027">
<graphic xlink:href="pone.0102833.e027.jpg" position="anchor" orientation="portrait"></graphic>
<label>(1)</label>
</disp-formula>
</p>
<fig id="pone-0102833-g001" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0102833.g001</object-id>
<label>Figure 1</label>
<caption>
<title>Pooling of data over an ensemble of time series for transfer entropy (TE) estimation.</title>
<p>(A) Schematic account of TE. Two scalar time series
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e028.jpg"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e029.jpg"></inline-graphic>
</inline-formula>
recorded from the
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e030.jpg"></inline-graphic>
</inline-formula>
repetition of processes
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e031.jpg"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e032.jpg"></inline-graphic>
</inline-formula>
, coupled with a delay
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e033.jpg"></inline-graphic>
</inline-formula>
(indicated by green arrow). Colored boxes indicate delay embedded states
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e034.jpg"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e035.jpg"></inline-graphic>
</inline-formula>
for both time series with dimension
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e036.jpg"></inline-graphic>
</inline-formula>
samples (colored dots). The star on the
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e037.jpg"></inline-graphic>
</inline-formula>
time series indicates the scalar observation
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e038.jpg"></inline-graphic>
</inline-formula>
that is obtained at the target time of information transfer
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e039.jpg"></inline-graphic>
</inline-formula>
. The red arrow indicates self-information-transfer from the past of the target process to the random variable
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e040.jpg"></inline-graphic>
</inline-formula>
at the target time.
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e041.jpg"></inline-graphic>
</inline-formula>
is chosen such that
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e042.jpg"></inline-graphic>
</inline-formula>
and influences of the state
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e043.jpg"></inline-graphic>
</inline-formula>
arrive exactly at the information target variable
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e044.jpg"></inline-graphic>
</inline-formula>
. Information in the past state of
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e045.jpg"></inline-graphic>
</inline-formula>
is useful to predict the future value of
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e046.jpg"></inline-graphic>
</inline-formula>
and we obtain nonzero TE. (B) To estimate probability density functions for
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e047.jpg"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e048.jpg"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e049.jpg"></inline-graphic>
</inline-formula>
at a certain point in time
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e050.jpg"></inline-graphic>
</inline-formula>
, we collect their realizations from observed repetitions
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e051.jpg"></inline-graphic>
</inline-formula>
. (C) Realizations for a single repetition are concatenated into one embedding vector and (D) combined into one ensemble state space. Note that data are pooled over the ensemble instead of over time. Nearest neighbor counts within the ensemble state space can then be used to derive TE using the Kraskov estimator proposed in
<xref rid="pone.0102833-Kraskov1" ref-type="bibr">[57]</xref>
.</p>
</caption>
<graphic xlink:href="pone.0102833.g001"></graphic>
</fig>
<p>i.e. PDFs
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e052.jpg"></inline-graphic>
</inline-formula>
at time point
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e053.jpg"></inline-graphic>
</inline-formula>
relative to the onset of the repetition at
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e054.jpg"></inline-graphic>
</inline-formula>
are equal over all
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e055.jpg"></inline-graphic>
</inline-formula>
repetitions. We call the repeated observations of a process an
<italic>ensemble</italic>
of time series. We may obtain a reliable estimation of
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e056.jpg"></inline-graphic>
</inline-formula>
from this ensemble by evaluating
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e057.jpg"></inline-graphic>
</inline-formula>
over all observations
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e058.jpg"></inline-graphic>
</inline-formula>
. For the sake of readability, we will refer to these observations from the ensemble as
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e059.jpg"></inline-graphic>
</inline-formula>
, where
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e060.jpg"></inline-graphic>
</inline-formula>
refers to a time point
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e061.jpg"></inline-graphic>
</inline-formula>
, relative to the beginning of the process at time
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e062.jpg"></inline-graphic>
</inline-formula>
, and
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e063.jpg"></inline-graphic>
</inline-formula>
refers to the index of the repetition. If a process is repeated periodically, i.e. the repetitions are spaced by a fixed interval
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e064.jpg"></inline-graphic>
</inline-formula>
, we call such a process cyclostationary
<xref rid="pone.0102833-Gardner1" ref-type="bibr">[68]</xref>
:
<disp-formula id="pone.0102833.e065">
<graphic xlink:href="pone.0102833.e065.jpg" position="anchor" orientation="portrait"></graphic>
<label>(2)</label>
</disp-formula>
</p>
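<p>The contrast between temporal pooling (valid only under stationarity) and the ensemble pooling of equation (1) can be sketched numerically as follows (a minimal illustration; all variable names are ours, not TRENTOOL's):</p>

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a non-stationary ensemble: R repetitions of a process whose
# mean drifts over time, so the PDF at each time point t differs.
R, T = 500, 100
drift = np.linspace(0.0, 3.0, T)            # time-varying mean
data = drift + rng.normal(size=(R, T))      # data[r, t] = observation in repetition r

# Ensemble pooling: estimate a property of the PDF (here simply its mean)
# at a fixed time t from realizations across repetitions, as in equation (1).
t = 80
mean_ensemble = data[:, t].mean()           # close to drift[t]

# Temporal pooling within one repetition (valid only under stationarity)
# mixes different PDFs and misses the time dependence.
mean_temporal = data[0, :].mean()           # close to the average drift instead
```

With enough repetitions, the ensemble estimate tracks the true time-resolved statistic, whereas the temporal estimate does not.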
<p>In neuroscience, ensemble evaluation for the estimation of information theoretic functionals becomes relevant as physical copies of a process are typically not available and stationarity of a process cannot necessarily be assumed. Gomez-Herrero and colleagues recently showed how ensemble averaging may be used to nevertheless estimate information theoretic functionals from cyclostationary processes
<xref rid="pone.0102833-GomezHerrero1" ref-type="bibr">[55]</xref>
. In neuroscience for example, a cyclostationary process, and thus an ensemble of data, is obtained by repeating an experimental manipulation, e.g. the presentation of a stimulus; these repetitions are often called experimental
<italic>trials</italic>
. In the remainder of this article, we will use the term repetition, and interpret trials from a neuroscience experiment as a special case of repetitions of a random process. Building on such repetitions, we next demonstrate a computationally efficient approach to the estimation of TE using the ensemble method proposed in
<xref rid="pone.0102833-GomezHerrero1" ref-type="bibr">[55]</xref>
.</p>
</sec>
<sec id="s2c">
<title>Transfer entropy estimation from an ensemble of time series</title>
<sec id="s2c1">
<title>Ensemble-based TE functional</title>
<p>When independent repetitions of an experimental condition are available, it is possible to use ensemble evaluation to estimate various PDFs from an ensemble of repetitions of the time series
<xref rid="pone.0102833-GomezHerrero1" ref-type="bibr">[55]</xref>
. By eliminating the need for pooling data over time, and instead pooling over repetitions, ensemble methods can be used to estimate information theoretic functionals for non-stationary time series. Here, we follow the approach of
<xref rid="pone.0102833-GomezHerrero1" ref-type="bibr">[55]</xref>
and present an ensemble TE functional that extends the TE functional presented in
<xref rid="pone.0102833-Vicente1" ref-type="bibr">[19]</xref>
,
<xref rid="pone.0102833-Wibral2" ref-type="bibr">[20]</xref>
,
<xref rid="pone.0102833-Wibral4" ref-type="bibr">[53]</xref>
and also takes into account an extension of the original formulation of TE, presented in
<xref rid="pone.0102833-Wibral4" ref-type="bibr">[53]</xref>
, guaranteeing self-prediction optimality (indicated by the subscript
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e066.jpg"></inline-graphic>
</inline-formula>
). In the next subsection, we will then present a practical and data-efficient estimator of this functional. The functional reads
<disp-formula id="pone.0102833.e067">
<graphic xlink:href="pone.0102833.e067.jpg" position="anchor" orientation="portrait"></graphic>
<label>(3)</label>
</disp-formula>
where
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e068.jpg"></inline-graphic>
</inline-formula>
is the conditional mutual information, and
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e069.jpg"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e070.jpg"></inline-graphic>
</inline-formula>
, and
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e071.jpg"></inline-graphic>
</inline-formula>
are the current value and the
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e072.jpg"></inline-graphic>
</inline-formula>
-dimensional past state variables of the target process Y, and the
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e073.jpg"></inline-graphic>
</inline-formula>
-dimensional past state variable at time
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e074.jpg"></inline-graphic>
</inline-formula>
of the source process X, respectively (see next paragraph for an explanation of states).</p>
<p>Rewriting this, taking into account repetitions
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e075.jpg"></inline-graphic>
</inline-formula>
of the random processes explicitly, we obtain:
<disp-formula id="pone.0102833.e076">
<graphic xlink:href="pone.0102833.e076.jpg" position="anchor" orientation="portrait"></graphic>
<label>(4)</label>
</disp-formula>
</p>
<p>Here,
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e077.jpg"></inline-graphic>
</inline-formula>
is the assumed delay of the information transfer between processes
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e078.jpg"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e079.jpg"></inline-graphic>
</inline-formula>
<xref rid="pone.0102833-Wibral4" ref-type="bibr">[53]</xref>
;
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e080.jpg"></inline-graphic>
</inline-formula>
denotes the future observation of
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e081.jpg"></inline-graphic>
</inline-formula>
in repetition
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e082.jpg"></inline-graphic>
</inline-formula>
;
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e083.jpg"></inline-graphic>
</inline-formula>
denotes the past state of
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e084.jpg"></inline-graphic>
</inline-formula>
in repetition
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e085.jpg"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e086.jpg"></inline-graphic>
</inline-formula>
denotes the past state of
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e087.jpg"></inline-graphic>
</inline-formula>
in repetition
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e088.jpg"></inline-graphic>
</inline-formula>
. Note that the functional
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e089.jpg"></inline-graphic>
</inline-formula>
used here is a modified form of the original TE formulation introduced by Schreiber
<xref rid="pone.0102833-Schreiber1" ref-type="bibr">[14]</xref>
. Schreiber defined TE as a conditional mutual information
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e090.jpg"></inline-graphic>
</inline-formula>
, whereas the functional in eq. 3 implements the conditional mutual information
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e091.jpg"></inline-graphic>
</inline-formula>
<xref rid="pone.0102833-Wibral4" ref-type="bibr">[53]</xref>
. The latter functional,
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e092.jpg"></inline-graphic>
</inline-formula>
, contains the definition of Schreiber as a special case for
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e093.jpg"></inline-graphic>
</inline-formula>
. Note that the two functionals are identical if
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e094.jpg"></inline-graphic>
</inline-formula>
is used with the physically correct delay
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e095.jpg"></inline-graphic>
</inline-formula>
(i.e.
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e096.jpg"></inline-graphic>
</inline-formula>
) and a proper embedding for the source, and the Schreiber measure is used with an over-embedding such that the source state at
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e097.jpg"></inline-graphic>
</inline-formula>
is still fully covered by the source embedding.</p>
<p>In addition to the original formulation of
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e098.jpg"></inline-graphic>
</inline-formula>
in
<xref rid="pone.0102833-Wibral4" ref-type="bibr">[53]</xref>
, here we explicitly state that the necessary realizations of the random variables in question are obtained through
<italic>ensemble evaluation</italic>
over repetitions
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e099.jpg"></inline-graphic>
</inline-formula>
– assuming the underlying processes to be repeatable or cyclostationary. Furthermore, we note explicitly that this ensemble-based functional introduces the possibility of time-resolved TE estimates.</p>
<p>We recently showed that the estimator presented in
<xref rid="pone.0102833-Wibral4" ref-type="bibr">[53]</xref>
can also be used to recover an unknown information transfer delay
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e100.jpg"></inline-graphic>
</inline-formula>
between two processes
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e101.jpg"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e102.jpg"></inline-graphic>
</inline-formula>
, as
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e103.jpg"></inline-graphic>
</inline-formula>
is maximal when the assumed delay
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e104.jpg"></inline-graphic>
</inline-formula>
is equal to the true information transfer delay
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e105.jpg"></inline-graphic>
</inline-formula>
<xref rid="pone.0102833-Wibral4" ref-type="bibr">[53]</xref>
. This holds for the extended estimator presented here, thus
<disp-formula id="pone.0102833.e106">
<graphic xlink:href="pone.0102833.e106.jpg" position="anchor" orientation="portrait"></graphic>
<label>(5)</label>
</disp-formula>
</p>
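<p>The delay-recovery relation in equation (5) can be illustrated with a toy system: scanning the assumed delay and keeping the TE-maximizing value recovers the true coupling delay. The sketch below uses a crude binned plug-in estimator of the conditional mutual information purely for brevity; the toy system and all names are ours, and the paper's actual estimator is the nearest-neighbor one of equation (9):</p>

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy coupled processes with a true information transfer delay of 3 samples:
# y depends on x three samples back (this system is ours, not the paper's).
N, true_delay = 20000, 3
x = rng.normal(size=N)
y = np.zeros(N)
for t in range(3, N):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - true_delay] + 0.1 * rng.normal()

def binned_te(x, y, u, lag=1):
    """Plug-in TE(X -> Y) for assumed delay u, with one-sample past states
    and quartile binning (for illustration only)."""
    start = max(u, lag)
    qx, qy = np.quantile(x, [.25, .5, .75]), np.quantile(y, [.25, .5, .75])
    yf = np.digitize(y[start:], qy)                # future of the target
    yp = np.digitize(y[start - lag:-lag], qy)      # target past state
    xp = np.digitize(x[start - u:len(x) - u], qx)  # source past state at delay u
    def H(*cols):                                  # joint Shannon entropy (nats)
        _, c = np.unique(np.stack(cols), axis=1, return_counts=True)
        p = c / c.sum()
        return -(p * np.log(p)).sum()
    # TE as a conditional MI, written as a sum of four entropies (cf. eq. 8)
    return H(yf, yp) + H(yp, xp) - H(yp) - H(yf, yp, xp)

te = {u: binned_te(x, y, u) for u in range(1, 7)}
u_hat = max(te, key=te.get)   # the TE-maximizing assumed delay
```

Here `u_hat` coincides with the true delay of 3 samples, in line with equation (5).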
</sec>
<sec id="s2c2">
<title>State space reconstruction and practical estimator</title>
<p>Transfer entropy differs from the lagged mutual information
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e107.jpg"></inline-graphic>
</inline-formula>
by the additional conditioning on the past of the target time series,
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e108.jpg"></inline-graphic>
</inline-formula>
. This additional conditioning serves two important functions. First, as mentioned already by Schreiber in the original paper
<xref rid="pone.0102833-Schreiber1" ref-type="bibr">[14]</xref>
, and later detailed by Lizier
<xref rid="pone.0102833-Lizier1" ref-type="bibr">[4]</xref>
and Wibral and colleagues
<xref rid="pone.0102833-Wibral3" ref-type="bibr">[39]</xref>
,
<xref rid="pone.0102833-Wibral4" ref-type="bibr">[53]</xref>
, it removes the information about the future of the target time-series
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e109.jpg"></inline-graphic>
</inline-formula>
that is already contained in its own past,
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e110.jpg"></inline-graphic>
</inline-formula>
. Second, this additional conditioning allows for a discovery of information transfer from the source
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e111.jpg"></inline-graphic>
</inline-formula>
to the target that can only be seen when taking into account information from the past of the target
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e112.jpg"></inline-graphic>
</inline-formula>
<xref rid="pone.0102833-Williams2" ref-type="bibr">[69]</xref>
. In the second case, the past information from the target serves to ‘decode’ this information transfer, and acts like a key in cryptography. Given this importance of the past of the target process, it is crucial to take all the necessary information in this past into account when evaluating the TE as in
<xref ref-type="disp-formula" rid="pone.0102833.e076">equation 4</xref>
.</p>
<p>To this end we need to form a collection of past random variables
<disp-formula id="pone.0102833.e113">
<graphic xlink:href="pone.0102833.e113.jpg" position="anchor" orientation="portrait"></graphic>
<label>(6)</label>
</disp-formula>
such that their realizations,
<disp-formula id="pone.0102833.e114">
<graphic xlink:href="pone.0102833.e114.jpg" position="anchor" orientation="portrait"></graphic>
<label>(7)</label>
</disp-formula>
are maximally informative about the future of the target process,
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e115.jpg"></inline-graphic>
</inline-formula>
.</p>
<p>This task is complicated by the fact that we often deal with multidimensional systems of which we observe only a scalar variable (here modeled as our random processes X, Y). To see this, think for example of a pendulum (a two-dimensional system) of which we record only the current position
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e116.jpg"></inline-graphic>
</inline-formula>
. If the pendulum is at its lowest point, it could be standing still, going left, or going right. To properly describe which state the pendulum is in, we need to know at least the realization of one more random variable
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e117.jpg"></inline-graphic>
</inline-formula>
back in time. Collections of such past random variables whose realizations uniquely describe the state of a process are called
<italic>state variables</italic>
.</p>
<p>Such a sufficient collection of past variables, called a delay embedding vector, can always be reconstructed from scalar observations for low dimensional deterministic systems, such as the above pendulum, as shown by Takens
<xref rid="pone.0102833-Takens1" ref-type="bibr">[70]</xref>
. Unfortunately, most real world systems are high-dimensional stochastic dynamic systems (best described by non-linear Langevin equations) rather than low-dimensional deterministic ones. For these systems it is not obvious that a delay embedding similar to Takens' approach would yield the desired results. In fact, many systems can be shown to require an infinite number of past random variables when only a scalar observable of the high-dimensional stochastic process is accessible. Nevertheless, as shown by Ragwitz and Kantz
<xref rid="pone.0102833-Ragwitz1" ref-type="bibr">[71]</xref>
, the behavior of scalar observables of most of these systems can be approximated very well by a finite collection of such past variables for all practical purposes; in other words, these systems can be approximated well by a finite-order, one-dimensional Markov process.</p>
<p>For practical TE estimation using
<xref ref-type="disp-formula" rid="pone.0102833.e076">equation 4</xref>
, we therefore proceed by first reconstructing the state variables of such approximated Markov processes for the two systems
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e118.jpg"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e119.jpg"></inline-graphic>
</inline-formula>
from their scalar time series. Then, we use the statistics of nearest ensemble neighbors with a modified KSG estimator for TE evaluation
<xref rid="pone.0102833-Kraskov1" ref-type="bibr">[57]</xref>
.</p>
<p>Thus, we select a delay embedding vector of the form
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e120.jpg"></inline-graphic>
</inline-formula>
from
<xref ref-type="disp-formula" rid="pone.0102833.e113">equation 6</xref>
as our collection of past random variables – with realizations in repetition
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e121.jpg"></inline-graphic>
</inline-formula>
given by
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e122.jpg"></inline-graphic>
</inline-formula>
. Here,
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e123.jpg"></inline-graphic>
</inline-formula>
is called the embedding dimension and
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e124.jpg"></inline-graphic>
</inline-formula>
the embedding delay. These embedding parameters
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e125.jpg"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e126.jpg"></inline-graphic>
</inline-formula>
are chosen such that they optimize a local predictor
<xref rid="pone.0102833-Ragwitz1" ref-type="bibr">[71]</xref>
, as this avoids an overestimation of TE
<xref rid="pone.0102833-Wibral4" ref-type="bibr">[53]</xref>
; other approaches related to minimizing non-linear prediction errors are also possible
<xref rid="pone.0102833-Faes4" ref-type="bibr">[44]</xref>
. In particular,
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e127.jpg"></inline-graphic>
</inline-formula>
is chosen such that
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e128.jpg"></inline-graphic>
</inline-formula>
is conditionally independent of any
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e129.jpg"></inline-graphic>
</inline-formula>
with
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e130.jpg"></inline-graphic>
</inline-formula>
given
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e131.jpg"></inline-graphic>
</inline-formula>
. The same is done for the process X at time
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e132.jpg"></inline-graphic>
</inline-formula>
.</p>
<p>Next, we decompose
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e133.jpg"></inline-graphic>
</inline-formula>
into a sum of four individual Shannon entropies:
<disp-formula id="pone.0102833.e134">
<graphic xlink:href="pone.0102833.e134.jpg" position="anchor" orientation="portrait"></graphic>
<label>(8)</label>
</disp-formula>
</p>
<p>The Shannon differential entropies in
<xref ref-type="disp-formula" rid="pone.0102833.e134">equation 8</xref>
can be estimated in a data-efficient way using nearest neighbor techniques
<xref rid="pone.0102833-Kozachenko1" ref-type="bibr">[72]</xref>
,
<xref rid="pone.0102833-Victor1" ref-type="bibr">[73]</xref>
. Nearest neighbor estimators yield a non-parametric estimate of entropies, assuming only a smoothness of the underlying PDF. It is however problematic to simply apply a nearest neighbor estimator (for example the Kozachenko-Leonenko estimator
<xref rid="pone.0102833-Kozachenko1" ref-type="bibr">[72]</xref>
) to each term appearing in eq. 8. This is because the dimensionality of the spaces associated with the individual terms differs greatly. Thus, a fixed number of neighbors for the search would lead to very different spatial scales (ranges of distances) for each term. Since the bias of each term depends on these scales, the errors would not cancel each other but accumulate. We therefore use a modified KSG estimator which handles this problem by fixing the number of neighbors
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e135.jpg"></inline-graphic>
</inline-formula>
only in the highest dimensional space (k-nearest neighbor search, kNNS) and by projecting the resulting distances into the lower dimensional spaces as the range within which neighbors are counted (range search, RS) (see
<xref rid="pone.0102833-Kraskov1" ref-type="bibr">[57]</xref>
, type 1 estimator, and
<xref rid="pone.0102833-Lindner1" ref-type="bibr">[56]</xref>
,
<xref rid="pone.0102833-Vicente2" ref-type="bibr">[74]</xref>
). In the ensemble variant of TE estimation, we search for nearest neighbors across points from all repetitions instead of only within the repetition containing the reference point – thus we form an
<italic>ensemble search space</italic>
by combining points over repetitions. Finally, the ensemble estimator of TE reads
<disp-formula id="pone.0102833.e136">
<graphic xlink:href="pone.0102833.e136.jpg" position="anchor" orientation="portrait"></graphic>
<label>(9)</label>
</disp-formula>
where
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e137.jpg"></inline-graphic>
</inline-formula>
denotes the digamma function and the angle brackets (
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e138.jpg"></inline-graphic>
</inline-formula>
) indicate an averaging over points in different repetitions
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e139.jpg"></inline-graphic>
</inline-formula>
at time instant
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e140.jpg"></inline-graphic>
</inline-formula>
. The distances to the
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e141.jpg"></inline-graphic>
</inline-formula>
-th nearest neighbor in the highest dimensional space (spanned by
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e142.jpg"></inline-graphic>
</inline-formula>
) define the radius of the spheres for the counting of the number of points (
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e143.jpg"></inline-graphic>
</inline-formula>
) in these spheres around each state vector (
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e144.jpg"></inline-graphic>
</inline-formula>
) involved.</p>
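<p>A brute-force sketch of the neighbor-count logic behind equation (9) is given below: the k-nearest-neighbor search is performed once in the highest dimensional joint space, and the resulting distance is reused as the search range in the marginal spaces. It implements a KSG-type (Frenzel-Pompe) conditional mutual information; all names and the toy data are ours, and plain distance computations stand in for the fast search structures of the actual implementation:</p>

```python
import numpy as np

GAMMA = 0.5772156649015329   # Euler-Mascheroni constant

def psi(n):
    # Digamma at a positive integer: psi(n) = -gamma + sum_{i=1}^{n-1} 1/i.
    return -GAMMA + sum(1.0 / i for i in range(1, int(n)))

def count_in_range(a, i, eps):
    # Range search (RS): points of a (rows) strictly within Chebyshev
    # distance eps of a[i], excluding the reference point itself.
    return int(np.sum(np.abs(a - a[i]).max(axis=1) < eps)) - 1

def ksg_te(y_fut, x_past, y_past, k=4):
    """KSG-type estimate of I(y_fut; x_past | y_past), i.e. TE. Rows of the
    2-D inputs are realizations pooled over the ensemble of repetitions."""
    joint = np.hstack([y_fut, x_past, y_past])
    acc = 0.0
    for i in range(len(joint)):
        # kNNS: distance to the k-th neighbor in the highest dimensional space
        eps = np.sort(np.abs(joint - joint[i]).max(axis=1))[k]
        acc += (psi(count_in_range(np.hstack([x_past, y_past]), i, eps) + 1)
                + psi(count_in_range(np.hstack([y_fut, y_past]), i, eps) + 1)
                - psi(count_in_range(y_past, i, eps) + 1))
    return psi(k) - acc / len(joint)

rng = np.random.default_rng(2)
n = 500
x_past = rng.normal(size=(n, 1))
y_past = rng.normal(size=(n, 1))
y_fut = x_past + 0.2 * rng.normal(size=(n, 1))  # source past drives target future
te_coupled = ksg_te(y_fut, x_past, y_past)      # clearly positive
te_none = ksg_te(rng.normal(size=(n, 1)), x_past, y_past)  # near zero
```

The O(n²) loop makes the computational burden of the neighbor searches evident, which motivates the GPU implementation described in the next section.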
<p>In cases where the number of repetitions is not sufficient to provide the necessary amount of data to reliably estimate Shannon entropies through an ensemble average, one may combine ensemble evaluation with collecting realizations over time. In these cases, we count neighbors in a time window
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e145.jpg"></inline-graphic>
</inline-formula>
with
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e146.jpg"></inline-graphic>
</inline-formula>
, where
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e147.jpg"></inline-graphic>
</inline-formula>
controls the temporal resolution of the TE estimation:
<disp-formula id="pone.0102833.e148">
<graphic xlink:href="pone.0102833.e148.jpg" position="anchor" orientation="portrait"></graphic>
<label>(10)</label>
</disp-formula>
</p>
</sec>
</sec>
</sec>
<sec id="s3">
<title>Implementation</title>
<p>The estimation of TE from finite time series consists of the estimation of joint and marginal entropies as shown in
<xref ref-type="disp-formula" rid="pone.0102833.e136">equations 9</xref>
and
<xref ref-type="disp-formula" rid="pone.0102833.e148">10</xref>
, calculated from nearest neighbor statistics, i.e. distances and the count of neighbors within these distances. In practice we obtain these neighbor counts by applying kNNS and RS to reconstructed state spaces. In particular, we use a kNNS in the highest dimensional space to determine the k-th nearest neighbor of a data point and the associated distance. This distance is then used as the range for the RS in the marginal spaces, which returns the point counts
<italic>n</italic>
. Both searches have a high computational cost. This cost increases even further in a practical setting, where we need to calculate TE for a sufficient number of surrogate data sets for statistical testing (see
<xref rid="pone.0102833-Vicente1" ref-type="bibr">[19]</xref>
and below for details). To enable TE estimation and statistical testing despite its computational cost, we implemented ad-hoc kNNS and RS algorithms in NVIDIA® CUDA™ C/C++ code
<xref rid="pone.0102833-NVIDIA1" ref-type="bibr">[75]</xref>
This allows us to run thousands of searches in parallel on a modern GPU.</p>
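<p>The units of work that make this problem massively parallel are independent of one another: each surrogate data set requires its own full set of neighbor searches. A minimal sketch of one common surrogate scheme – permuting the repetition labels of the source, which destroys source-target relations while preserving within-repetition dynamics – is shown below (the function name is ours, and the exact surrogate scheme used by TRENTOOL may differ in detail):</p>

```python
import numpy as np

def source_surrogate(x, rng):
    """One surrogate data set: permute the repetition (trial) labels of the
    source data x, shape (repetitions, samples). Within-repetition dynamics
    are preserved, but the source-target pairing is destroyed, so surrogate
    TE values form a null distribution for statistical testing."""
    return x[rng.permutation(len(x))]

rng = np.random.default_rng(3)
x = np.arange(12.0).reshape(4, 3)            # 4 repetitions, 3 samples each
# Each surrogate triggers an independent TE estimation (kNNS + RS) -- these
# are the embarrassingly parallel jobs that a GPU implementation can batch.
surrogates = [source_surrogate(x, rng) for _ in range(100)]
```

Every surrogate contains exactly the original repetitions, only in permuted order, so each surrogate TE estimate is as expensive as the original one.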
<p>To allow for a better understanding of the parallelization used, we will now briefly describe the main work flow of TE analysis in the open source MathWorks® MATLAB® toolbox TRENTOOL
<xref rid="pone.0102833-Lindner1" ref-type="bibr">[56]</xref>
, which implements the approach to TE estimation described in the
<italic>Background</italic>
section. The work flow includes the steps of data preprocessing prior to the use of the GPU algorithm for neighbor searches as well as the statistical testing of resulting TE values. In a subsequent section we will describe the core implementation of the algorithm in more detail and present its integration into TRENTOOL.</p>
<sec id="s3a">
<title>Main analysis work flow in TRENTOOL</title>
<sec id="s3a1">
<title>Practical TE estimation in TRENTOOL</title>
<p>The practical GPU-based TE estimation in TRENTOOL 3.0 is divided into the two steps of data preparation and TE estimation (see
<xref ref-type="fig" rid="pone-0102833-g002">Figure 2</xref>
and the TRENTOOL 3.0 manual:
<ext-link ext-link-type="uri" xlink:href="http://www.trentool.de">www.trentool.de</ext-link>
). As a first step, data is prepared by optimizing embedding parameters for state space reconstruction (
<xref ref-type="fig" rid="pone-0102833-g002">Figure 2</xref>
, panel
<bold>A</bold>
. As a second step, TE is estimated by following the approach for ensemble-based TE estimation outlined in the preceding section (
<xref ref-type="fig" rid="pone-0102833-g002">Figure 2</xref>
, panel
<bold>B</bold>
). TRENTOOL estimates
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e149.jpg"></inline-graphic>
</inline-formula>
(eq. 4) for a given pair of processes
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e150.jpg"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e151.jpg"></inline-graphic>
</inline-formula>
and given values for
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e152.jpg"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e153.jpg"></inline-graphic>
</inline-formula>
. For each pair, we call
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e154.jpg"></inline-graphic>
</inline-formula>
the source and
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e155.jpg"></inline-graphic>
</inline-formula>
the target process.</p>
<fig id="pone-0102833-g002" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0102833.g002</object-id>
<label>Figure 2</label>
<caption>
<title>Transfer entropy estimation using the ensemble method in TRENTOOL 3.0.</title>
<p>(A) Data preparation and optimization of embedding parameters in function
<monospace>TEprepare.m</monospace>
; (B) transfer entropy (TE) estimation from prepared data in
<monospace>TEsurrogatestats_ensemble.m</monospace>
(yellow boxes indicate variables being passed between sub-functions). TE is estimated by iterating over all channel combinations provided in the data. For each channel combination: (1) Data is embedded individually per repetition and combined over repetitions into one ensemble state space (chunk), (2)
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e156.jpg"></inline-graphic>
</inline-formula>
surrogate data sets are created by shuffling the repetitions of the target time series, (3) each surrogate data set is embedded per repetition and combined into one chunk (forming
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e157.jpg"></inline-graphic>
</inline-formula>
chunks in total), (4)
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e158.jpg"></inline-graphic>
</inline-formula>
chunks of original and surrogate data are passed to the GPU where nearest neighbor searches are conducted in parallel, (5) calculation of TE values from returned neighbor counts for original data and
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e159.jpg"></inline-graphic>
</inline-formula>
surrogate data sets using the KSG-estimator
<xref rid="pone.0102833-Kraskov1" ref-type="bibr">[57]</xref>
, (6) statistical testing of the original TE value against the distribution of surrogate TE values; (C) output of
<monospace>TEsurrogatestats_ensemble.m</monospace>
, an array with dimension [no. channels
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e160.jpg"></inline-graphic>
</inline-formula>
5], where rows hold results for all channel combinations: (1) p-value of TE for this channel combination, (2) significance at the designated alpha level (1 - significant, 0 - not significant), (3) significance after correction for multiple comparisons, (4) absolute difference between the TE value for original data and the median of surrogate TE values, (5) presence of volume conduction (this is always set to 0 when using the ensemble method as instantaneous mixing is by default controlled for by conditioning on the current state of the source time series
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e161.jpg"></inline-graphic>
</inline-formula>
<xref rid="pone.0102833-Faes5" ref-type="bibr">[119]</xref>
).</p>
</caption>
<graphic xlink:href="pone.0102833.g002"></graphic>
</fig>
<p>After data preparation,
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e162.jpg"></inline-graphic>
</inline-formula>
(eq. 9 and 10) is estimated in six steps: (1) using optimized embedding parameters, original data is embedded per repetition and repetitions are concatenated forming the ensemble search space of the original data, (2)
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e163.jpg"></inline-graphic>
</inline-formula>
sets of surrogate data are created from the original data by shuffling the repetitions of the target process
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e164.jpg"></inline-graphic>
</inline-formula>
, (3) each surrogate dataset is embedded per repetition and concatenated forming
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e165.jpg"></inline-graphic>
</inline-formula>
additional ensemble search spaces for surrogate data, (4) all
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e166.jpg"></inline-graphic>
</inline-formula>
search spaces of embedded original and surrogate data are passed to a wrapper function that calls the GPU functions to perform individual neighbor searches for each search space in parallel (in the following, we will refer to each of the
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e167.jpg"></inline-graphic>
</inline-formula>
ensembles as one data
<italic>chunk</italic>
), (5) TE values are calculated for original and surrogate data chunks from the neighbor counts using the KSG estimator
<xref rid="pone.0102833-Kraskov1" ref-type="bibr">[57]</xref>
, (6) TE values for original data are tested statistically against the distribution of surrogate TE values.</p>
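Step (5) can be sketched in Python, assuming the per-point neighbor counts have already been returned by the searches. This is a simplified illustration of a KSG-type combination of counts via the digamma function (evaluated here by its integer recurrence); the exact estimator used by TRENTOOL follows eq. 9 and 10, and the names below are illustrative.

```python
# Illustrative sketch: combining per-point neighbor counts into a TE estimate,
# KSG-style (cf. Kraskov et al.). Digamma for positive integers is computed via
# the recurrence psi(n+1) = psi(n) + 1/n with psi(1) = -gamma.

EULER_GAMMA = 0.5772156649015329

def digamma(n):
    """Digamma of a positive integer n."""
    val = -EULER_GAMMA
    for i in range(1, n):
        val += 1.0 / i
    return val

def te_from_counts(k, n_ypast, n_ypast_y, n_ypast_x):
    """TE estimate from neighbor counts in the three marginal spaces:
    target past, (target present, target past), and (target past, source past)."""
    m = len(n_ypast)
    acc = sum(digamma(a + 1) - digamma(b + 1) - digamma(c + 1)
              for a, b, c in zip(n_ypast, n_ypast_y, n_ypast_x))
    return digamma(k) + acc / m
```

When all marginal counts coincide, the estimate is zero, as expected for data without conditional dependence between source past and target present.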
<p>The proposed GPU algorithm is accessed in step (4). As we will further explain below (see paragraph on
<italic>Input data</italic>
), the GPU implementation uses the fact that all of the necessary computations on surrogate data sets and the original data are independent and can thus be performed in parallel.</p>
</sec>
<sec id="s3a2">
<title>TE calculation and statistical testing against surrogate data</title>
<p>Estimated TE values need to be tested for their statistical significance
<xref rid="pone.0102833-Lindner1" ref-type="bibr">[56]</xref>
(step (6) of the main TRENTOOL work flow). For this statistical test under a null hypothesis of
<italic>no</italic>
information transfer between a source
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e168.jpg"></inline-graphic>
</inline-formula>
and target time series
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e169.jpg"></inline-graphic>
</inline-formula>
, we estimate
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e170.jpg"></inline-graphic>
</inline-formula>
and compare it to a distribution of TE values calculated from surrogate data sets. Surrogate data sets are formed by shuffling repetitions in
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e171.jpg"></inline-graphic>
</inline-formula>
to obtain
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e172.jpg"></inline-graphic>
</inline-formula>
, such that
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e173.jpg"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e174.jpg"></inline-graphic>
</inline-formula>
, where
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e175.jpg"></inline-graphic>
</inline-formula>
denotes a random permutation of the repetitions
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e176.jpg"></inline-graphic>
</inline-formula>
(
<xref ref-type="fig" rid="pone-0102833-g003">Figure 3</xref>
). From this surrogate data set, we calculate surrogate TE values
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e177.jpg"></inline-graphic>
</inline-formula>
. By repeating this process a sufficient number of times
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e178.jpg"></inline-graphic>
</inline-formula>
, we obtain a distribution of values
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e179.jpg"></inline-graphic>
</inline-formula>
. To assess the statistical significance of
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e180.jpg"></inline-graphic>
</inline-formula>
, we calculate a p-value as the proportion of surrogate TE values
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e181.jpg"></inline-graphic>
</inline-formula>
equal to or larger than
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e182.jpg"></inline-graphic>
</inline-formula>
. This p-value is then compared to a critical alpha level (see for example
<xref rid="pone.0102833-Lindner1" ref-type="bibr">[56]</xref>
,
<xref rid="pone.0102833-Maris1" ref-type="bibr">[76]</xref>
).</p>
<fig id="pone-0102833-g003" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0102833.g003</object-id>
<label>Figure 3</label>
<caption>
<title>Creation of surrogate data sets.</title>
<p>(A) Original time series with information transfer (solid arrow) from a source state
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e183.jpg"></inline-graphic>
</inline-formula>
to a corresponding target time point
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e184.jpg"></inline-graphic>
</inline-formula>
, given the time point's history
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e185.jpg"></inline-graphic>
</inline-formula>
. Solid arrows indicate the direction of transfer entropy (TE) analysis; here, information transfer is present. (B) Shuffled target time series: repetitions are permuted such that
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e186.jpg"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e187.jpg"></inline-graphic>
</inline-formula>
, where
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e188.jpg"></inline-graphic>
</inline-formula>
denotes a random permutation. Dashed arrows indicate the direction of TE analysis; after shuffling, information transfer is no longer present.</p>
</caption>
<graphic xlink:href="pone.0102833.g003"></graphic>
</fig>
</sec>
<sec id="s3a3">
<title>Reconstruction of information transfer delays</title>
<p>
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e189.jpg"></inline-graphic>
</inline-formula>
may be used to reconstruct the information transfer delay
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e190.jpg"></inline-graphic>
</inline-formula>
between
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e191.jpg"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e192.jpg"></inline-graphic>
</inline-formula>
(eq. 5,
<xref rid="pone.0102833-Wibral4" ref-type="bibr">[53]</xref>
).
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e193.jpg"></inline-graphic>
</inline-formula>
may be reconstructed by
<italic>scanning</italic>
possible values for
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e194.jpg"></inline-graphic>
</inline-formula>
:
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e195.jpg"></inline-graphic>
</inline-formula>
is estimated for all values in
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e196.jpg"></inline-graphic>
</inline-formula>
; the value that maximizes the
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e197.jpg"></inline-graphic>
</inline-formula>
is kept as the reconstructed information transfer delay. We used the reconstruction of information transfer delays as an additional parameter when testing the proposed implementation for correctness and robustness.</p>
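The scanning procedure amounts to a simple argmax over candidate delays, sketched here in Python; `te_at_delay` is a hypothetical stand-in for a full TE estimation with the source embedding shifted by the assumed delay u.

```python
# Illustrative sketch: reconstruct the information transfer delay by scanning
# candidate values of u and keeping the TE-maximizing one.

def reconstruct_delay(te_at_delay, candidates):
    """Return the candidate delay u with the maximum TE estimate."""
    return max(candidates, key=te_at_delay)

# Toy TE profile over candidate delays; the maximum sits at u = 3.
toy_te = {1: 0.01, 2: 0.05, 3: 0.40, 4: 0.12}
best_u = reconstruct_delay(toy_te.get, sorted(toy_te))
```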
</sec>
</sec>
<sec id="s3b">
<title>Implementation of the GPU algorithm</title>
<sec id="s3b1">
<title>Parallelized nearest neighbor searches</title>
<p>The KSG estimator used for estimating
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e198.jpg"></inline-graphic>
</inline-formula>
in eq. 9 and 10 uses neighbor (distance-)statistics obtained from kNNS and RS algorithms to estimate Shannon differential entropies. Thus, the choice of computationally efficient kNNS and RS algorithms is crucial to any practical implementation of the
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e199.jpg"></inline-graphic>
</inline-formula>
estimator. kNNS algorithms typically return a list of the k nearest neighbors for each reference point, while RS algorithms typically return a list of all neighbors within a given range for each reference point. kNNS and RS algorithms have been studied extensively because of their broad potential for application in nearest neighbor searches and related problems. Several approaches have been proposed to reduce their high computational cost: partitioning of input data into k-d trees, quadtrees, or equivalent data structures
<xref rid="pone.0102833-Bentley1" ref-type="bibr">[77]</xref>
or approximation algorithms (ANN: Approximate Nearest Neighbors)
<xref rid="pone.0102833-Arya1" ref-type="bibr">[78]</xref>
,
<xref rid="pone.0102833-Muja1" ref-type="bibr">[79]</xref>
. Furthermore, some authors have explored how to parallelize the kNNS algorithm on a GPU using different implementations: exhaustive brute force searches
<xref rid="pone.0102833-Garcia1" ref-type="bibr">[80]</xref>
,
<xref rid="pone.0102833-Sismanis1" ref-type="bibr">[81]</xref>
, tree-based searches
<xref rid="pone.0102833-Brown1" ref-type="bibr">[82]</xref>
,
<xref rid="pone.0102833-Li1" ref-type="bibr">[83]</xref>
and ANN searches
<xref rid="pone.0102833-Li1" ref-type="bibr">[83]</xref>
,
<xref rid="pone.0102833-Pan1" ref-type="bibr">[84]</xref>
.</p>
<p>Although the performance of existing GPU implementations of kNNS was promising, they were not directly applicable to TE estimation. The most critical reason was that existing implementations did not allow for the concurrent treatment of several problem instances by the GPU, and maximum performance was only achieved for very large kNNS problem instances. Unfortunately, the problem instances typically expected in our application are numerous (i.e.
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e200.jpg"></inline-graphic>
</inline-formula>
problem instances per pair of time series), but rather small compared to the main memory on a typical GPU device in use today. Thus, an implementation that handled only one instance at a time would not have made optimal use of the underlying hardware. Therefore, we designed an implementation that is able to handle several problem instances at once to perform neighbor searches for chunks of embedded original and surrogate data in parallel. Moreover, we aimed at a flexible GPU implementation of kNNS and RS that maximized the use of the GPU's hardware resources for variable configurations of data – thus making the implementation independent of the design of the neuroscientific experiment.</p>
<p>Our implementation is written in CUDA (Compute Unified Device Architecture)
<xref rid="pone.0102833-NVIDIA1" ref-type="bibr">[75]</xref>
(a port to OpenCL™
<xref rid="pone.0102833-Khronos1" ref-type="bibr">[85]</xref>
is work in progress). CUDA is a parallel computing framework created by NVIDIA that includes extensions to high-level languages such as C/C++, giving access to the native instruction set and memory of the parallel computational elements in CUDA-enabled GPUs. Accelerating an algorithm using CUDA involves translating it into data-parallel sequences of operations and then carefully mapping these operations to the underlying resources to obtain maximum performance
<xref rid="pone.0102833-Owens1" ref-type="bibr">[58]</xref>
,
<xref rid="pone.0102833-Brodtkorb1" ref-type="bibr">[59]</xref>
. To understand the implementation suggested here, we will give a brief explanation of these resources, i.e. the GPU's hardware architecture, before explaining the implementation in more detail (additionally, see
<xref rid="pone.0102833-Owens1" ref-type="bibr">[58]</xref>
,
<xref rid="pone.0102833-Brodtkorb1" ref-type="bibr">[59]</xref>
,
<xref rid="pone.0102833-NVIDIA1" ref-type="bibr">[75]</xref>
).</p>
</sec>
<sec id="s3b2">
<title>GPU resources</title>
<p>GPU resources comprise massively parallel processors with up to thousands of cores (processing units). These cores are divided among Stream Multiprocessors (SMs) in order to guarantee automatic scalability of the algorithms to different versions of the hardware. Each SM contains 32 to 192 cores that execute operations described in the CUDA kernel code. The sequence of operations executed by one core is called a CUDA thread. Threads are grouped in blocks, which are in turn organized in a grid. The grid is the entry point to the GPU resources. It handles one kernel call at a time and executes it on multiple data in parallel. Within the grid, each block of threads is executed by one SM. The SM executes the threads of a block by issuing them in groups of 32 threads, called warps. Threads within one warp are executed concurrently, while as many warps as possible are scheduled per SM to be resident at a time, such that the utilization of all the cores is maximized.</p>
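The thread indexing implied by this hierarchy can be emulated in a few lines of Python (a didactic sketch, not CUDA code); a block size of 512 threads, as used later in the implementation, corresponds to 16 warps.

```python
# Illustrative sketch: how CUDA's grid/block/thread hierarchy maps a thread to
# a linear work-item index, emulated in Python.

WARP_SIZE = 32
BLOCK_DIM = 512                           # threads per block

def global_thread_id(block_idx, thread_idx):
    """Linear index a thread computes as blockIdx.x * blockDim.x + threadIdx.x."""
    return block_idx * BLOCK_DIM + thread_idx

warps_per_block = BLOCK_DIM // WARP_SIZE  # threads are issued in warps of 32
```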
</sec>
<sec id="s3b3">
<title>Input data</title>
<p>As input, the proposed RS and kNNS algorithms expect a set of data points representing the search space and a second set of data points that serve as reference points in the searches. One such problem instance is considered one data chunk. Our implementation is able to handle several data chunks simultaneously to make maximum use of the GPU resources. Thus, several chunks may be combined, using an additional index vector to encode the sizes of individual chunks. These chunks are then passed at once to the GPU algorithm to be searched in parallel.</p>
<p>In the estimation of
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e201.jpg"></inline-graphic>
</inline-formula>
, according to the work flow described in paragraph
<italic>Practical TE estimation in TRENTOOL</italic>
, we used the proposed implementation to parallelize neighbor searches over surrogate data sets for a given pair of time series
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e202.jpg"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e203.jpg"></inline-graphic>
</inline-formula>
and given values for
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e204.jpg"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e205.jpg"></inline-graphic>
</inline-formula>
. Thus, in one call to the GPU algorithms
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e206.jpg"></inline-graphic>
</inline-formula>
data chunks were passed as input, where one chunk represented the search space for the original pair of time series and the other
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e207.jpg"></inline-graphic>
</inline-formula>
search spaces for corresponding surrogate data sets. Points within the search spaces may have either been collected through temporal or ensemble pooling of embedded data points or a combination of both (eq. 9 or 10).</p>
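The chunk packing described above (a flat array of points plus an index vector of chunk sizes) can be sketched in Python; layout and names are illustrative and do not reproduce TRENTOOL's actual memory format.

```python
# Illustrative sketch: pack several chunks (problem instances) into one flat
# array plus an index vector of chunk sizes, as passed to the GPU in one call.

def pack_chunks(chunks):
    """Concatenate chunks and record their sizes so searches can be separated."""
    flat, sizes = [], []
    for chunk in chunks:
        flat.extend(chunk)
        sizes.append(len(chunk))
    return flat, sizes

def unpack_chunks(flat, sizes):
    """Recover the per-chunk point lists from the flat array and index vector."""
    out, start = [], 0
    for n in sizes:
        out.append(flat[start:start + n])
        start += n
    return out

chunks = [[(0.0,), (1.0,)], [(2.0,), (3.0,), (4.0,)]]
flat, sizes = pack_chunks(chunks)
```

The index vector is what allows one kernel launch to keep the searches of different chunks strictly separate.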
</sec>
<sec id="s3b4">
<title>Core algorithm</title>
<p>In the core GPU-based search algorithm, the kNNS implementation is mapped to CUDA threads as depicted in
<xref ref-type="fig" rid="pone-0102833-g004">Figure 4</xref>
(the RS implementation behaves similarly). Each chunk consists of a set of data points that represent the search space and at the same time serve as reference points for individual searches. Each individual search is handled by one CUDA thread. Parallelization of these searches on the GPU happens in two ways: (1) the GPU algorithm is able to handle several chunks at once, and (2) each chunk is searched in parallel, such that individual searches within one chunk are handled simultaneously. An individual search is conducted by a CUDA thread that computes, by brute force, the infinity-norm distance from the given reference point to every other point within the same chunk. Simultaneously, other threads compute these distances for other reference points in the same chunk or handle a different chunk altogether. Searching several chunks in parallel is an essential feature of the proposed solution that maximizes the utilization of GPU resources. From the GPU execution point of view, simultaneous searches are realized by handling a variable number of kNNS (or RS) problem instances through one grid launch. The number of searches that can be executed in parallel is thus only limited by the device's global memory, which holds the input data, and by the number of threads that can be started simultaneously (both limitations are taken into account). Further low-level strategies used to achieve good performance are described below.</p>
<fig id="pone-0102833-g004" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0102833.g004</object-id>
<label>Figure 4</label>
<caption>
<title>GPU implementation of the parallelized nearest neighbor search in TRENTOOL 3.0.</title>
<p>Chunks of data are prepared on the CPU (embedding and concatenation) and passed to the GPU. Data points are managed in the global memory as Structures of Arrays (SoA). To make maximum use of the memory bandwidth, data is padded to ensure coalesced reading and writing from and to the streaming multiprocessor (SM) units. Each SM handles one chunk in one thread block (dashed box). One block conducts brute force neighbor searches for all data points in the chunk and collects results in its shared memory (red and blue arrows and shaded areas). Results are eventually returned to the CPU.</p>
</caption>
<graphic xlink:href="pone.0102833.g004"></graphic>
</fig>
</sec>
<sec id="s3b5">
<title>Low-level implementation details</title>
<p>There are several strategies that are essential for optimal performance when implementing algorithms for GPU devices. Most important are the reduction of memory latencies and the optimal use of hardware resources by ensuring high occupancy (the ratio of the number of active warps per SM to the maximum number of possible active warps
<xref rid="pone.0102833-Owens1" ref-type="bibr">[58]</xref>
). To maximize occupancy, we designed our algorithm's kernels such that more than one block of threads (ideally many) is always loaded per SM
<xref rid="pone.0102833-Owens1" ref-type="bibr">[58]</xref>
. We can do this since many searches are executed concurrently in every kernel launch. By maximizing occupancy, we both ensure hardware utilization and improve performance by hiding data memory latency from the GPU's global memory to the SMs' registers
<xref rid="pone.0102833-NVIDIA1" ref-type="bibr">[75]</xref>
. Moreover, in order to reduce memory latencies we take care of input data memory alignment and guarantee that memory reads issued by the threads of a warp are coalesced into as few memory transactions as possible. Additionally, with the aim of minimizing sparse memory accesses, data points are organized as Structures of Arrays (SoA). Finally, we use the shared memory inside the SMs (a manually managed intermediate cache between global memory and the SMs) to keep track of the information associated with the nearest neighbors during searches. The amount of shared memory and registers available per SM is limited. The maximum possible occupancy depends on the number of registers and the amount of shared memory needed by a block, which in turn depend on the number of threads in the block. For our implementation, a block size of 512 threads proved suitable.</p>
</sec>
<sec id="s3b6">
<title>Implementation interface</title>
<p>The GPU functionality is accessed through MATLAB scripts for kNNS (‘fnearneigh_gpu.mex’) and RS (‘range_search_all_gpu.mex’), which encapsulate all the associated complexity. Both scripts are called from TRENTOOL using a wrapper function. In its current implementation in TRENTOOL (see paragraph
<italic>Practical TE estimation in TRENTOOL</italic>
), the wrapper function takes all
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e208.jpg"></inline-graphic>
</inline-formula>
chunks as input and launches a kernel that searches all chunks in parallel through the mex-files for kNNS and RS. The wrapper makes sure that the input size does not exceed the GPU device's available global memory and the maximum number of threads that can be started simultaneously. If necessary, the wrapper function splits the input into several kernel calls; it also manages the output, i.e. the neighbor counts for each chunk, which are passed on for TE calculation.</p>
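The wrapper's splitting logic can be approximated by a greedy batching sketch in Python, assuming for simplicity a fixed memory and thread cost per chunk (the actual TRENTOOL wrapper derives these quantities from the data and device at hand).

```python
# Illustrative sketch: group chunks into kernel calls that respect both the
# device's free global memory and its maximum number of launchable threads.

def split_into_calls(n_chunks, bytes_per_chunk, free_gpu_bytes,
                     max_threads, threads_per_chunk):
    """Return a list with the number of chunks handled by each kernel call."""
    per_call = min(free_gpu_bytes // bytes_per_chunk,
                   max_threads // threads_per_chunk)
    if per_call < 1:
        raise ValueError("a single chunk already exceeds the device limits")
    calls, remaining = [], n_chunks
    while remaining > 0:
        take = min(per_call, remaining)
        calls.append(take)   # one kernel call searches `take` chunks in parallel
        remaining -= take
    return calls
```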
</sec>
</sec>
</sec>
<sec id="s4">
<title>Evaluation</title>
<p>To evaluate the proposed algorithm we investigated four properties: first, whether the speedup is sufficient to allow the application of the method to real-world neural datasets; second, the correctness of results on simulated data, where the ground truth is known; third, the robustness of the algorithm for limited sample sizes; fourth, whether plausible results are achieved on a neural example dataset.</p>
<sec id="s4a">
<title>Ethics statement</title>
<p>The neural example dataset was taken from an experiment described in
<xref rid="pone.0102833-Grtzner1" ref-type="bibr">[86]</xref>
. All subjects gave written informed consent before the experiment. The study was approved by the local ethics committee (Johann Wolfgang Goethe University, Frankfurt, Germany).</p>
</sec>
<sec id="s4b">
<title>Evaluation of computational speedup</title>
<p>To test for an increase in performance due to the parallelization of neighbor searches, we compared practical execution times of the proposed GPU implementation to execution times of the serial kNNS and RS algorithms implemented in the MATLAB toolbox TSTOOL (
<ext-link ext-link-type="uri" xlink:href="http://www.dpi.physik.uni-goettingen.de/tstool/">www.dpi.physik.uni-goettingen.de/tstool/</ext-link>
). This toolbox wraps a FORTRAN implementation of kNNS and RS and has proven to be the fastest CPU implementation for our purpose. All testing was done in MATLAB 2008b (MATLAB 7.7, The MathWorks Inc., Natick, MA, 2008). As input, we used increasing numbers of chunks of simulated data from two coupled Lorenz systems, further described below. Repetitions of simulated time series were embedded and combined to form ensemble state spaces, i.e. chunks of data (cf. paragraph
<italic>Input Data</italic>
). To obtain increasing input sizes, we duplicated these chunks the desired number of times. While the CPU implementation needed to perform searches on individual chunks iteratively, the GPU implementation searched chunks in parallel (note that chunks are treated independently here, so there is no speedup due to the duplicated chunk data). For both the CPU and the GPU implementation, all data handling prior to the nearest neighbor searches was identical and conducted using the same, highly optimized TRENTOOL functionalities; we were thus able to confine the performance comparison to the respective kNNS and RS algorithms.</p>
<p>Analogous to TE estimation implemented in TRENTOOL, we conducted one kNNS (with
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e209.jpg"></inline-graphic>
</inline-formula>
, TRENTOOL default, see also
<xref rid="pone.0102833-Kraskov2" ref-type="bibr">[87]</xref>
) in the highest dimensional space and used the returned distances for a RS in one lower dimensional space. Both functions were called for increasing numbers of chunks to obtain the execution time as a function of input size. One chunk of data from the highest dimensional space had dimensions [30094
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e210.jpg"></inline-graphic>
</inline-formula>
17] and size 1.952 MB (single precision); one chunk of data from the lower dimensional space had dimensions [30094
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e211.jpg"></inline-graphic>
</inline-formula>
8] and size 0.918 MB (single precision). Performance testing of the serial implementation was carried out on an Intel Xeon CPU (E5540, clocked at 2.53 GHz), where we measured execution times of the TSTOOL kNNS (functions ‘nn_prepare.m’ and ‘nn_search.m’) and the TSTOOL RS (function ‘range_search.m’). Testing of the parallel implementation was carried out three times on GPU devices of varying processing power (NVIDIA Tesla C2075, GeForce GTX 580 and GeForce GTX Titan). On the GPUs, we measured execution times for the proposed kNNS (‘fnearneigh_gpu.mex’) and RS (‘range_search_all_gpu.mex’) implementations. When the GPU's global memory capacity was exceeded by higher input sizes, data was split and computed over several runs (i.e. calls to the GPU). All performance testing was done by measuring execution times using the MATLAB functions <monospace>tic</monospace> and <monospace>toc</monospace>.</p>
<p>To obtain reliable results for the serial implementation we ran both kNNS and RS 200 times on the data, receiving an average execution time of 1.26 s for kNNS and an average execution time of 24.1 s for RS. We extrapolated these execution times to higher numbers of chunks and compared them to measured execution times of the parallel searches on three NVIDIA GPU devices. On average, execution times on the GPU compared to the CPU were faster by a factor of 22 on the NVIDIA Tesla C2075, by a factor of 33 for the NVIDIA GTX 580 and by a factor of 50 for the NVIDIA GTX Titan (
<xref ref-type="fig" rid="pone-0102833-g005">Figure 5</xref>
).</p>
<fig id="pone-0102833-g005" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0102833.g005</object-id>
<label>Figure 5</label>
<caption>
<title>Practical performance measures of the ensemble method for GPU compared to CPU.</title>
<p>Combined execution times in s for serial and parallel implementations of k-nearest neighbor and range search as a function of input size (number of data chunks). Execution times were measured for the serial implementation running on a CPU (black) and for our parallel implementation using one of three GPU devices (blue, red, green) of varying computing power. Computation using a GPU was considerably faster than using a CPU (by factors 22, 33 and 50 respectively).</p>
</caption>
<graphic xlink:href="pone.0102833.g005"></graphic>
</fig>
<p>To put these numbers into perspective, we note that in a neuroscience experiment the number of chunks to be processed is the product of (typical numbers): channel pairs for TE (100) * number of surrogate data sets (1000) * experimental conditions (4) * number of subjects (15). This results in a total computational load on the order of
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e212.jpg"></inline-graphic>
</inline-formula>
chunks to be processed. Given an execution time of 24.1 s/50 on the NVIDIA GTX Titan for a typical test dataset, these computations will take
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e213.jpg"></inline-graphic>
</inline-formula>
or 4.8 weeks on a single GPU, which is feasible compared to the initial duration of 240 weeks on a single CPU. Even when considering a trivial parallelization of the computations over multiple CPU cores and CPUs, the GPU-based solution is by far more cost- and energy-efficient than any possible CPU-based solution. If in addition a scanning of various possible information transfer delays is important, then parallelization over multiple GPUs seems to be the only viable option.</p>
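The arithmetic behind these figures can be checked in a few lines of Python, using the numbers quoted above.

```python
# Back-of-the-envelope check of the load estimate: 100 channel pairs x 1000
# surrogates x 4 conditions x 15 subjects, at the measured CPU time of 24.1 s
# per chunk, sped up 50-fold on the GTX Titan.

n_chunks = 100 * 1000 * 4 * 15            # total chunks to process
seconds_gpu = n_chunks * (24.1 / 50)      # per-chunk GPU time on the GTX Titan
weeks_gpu = seconds_gpu / (3600 * 24 * 7) # ~4.8 weeks on a single GPU
weeks_cpu = weeks_gpu * 50                # ~240 weeks without the GPU speedup
```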
</sec>
<sec id="s4c">
<title>Evaluation on Lorenz systems</title>
<p>To test the ability of the presented implementation to successfully reconstruct information transfer between systems with a non-stationary coupling, we simulated various coupling scenarios between stochastic and deterministic systems. We introduced non-stationarity into the coupling of two processes by varying the coupling strength over the course of a repetition (all other parameters were held constant). Simulations for individual scenarios are described in detail below. For the estimation of TE we used MathWorks' MATLAB and the TRENTOOL toolbox extended by the implementation of the ensemble method proposed above (version 3.0, see also
<xref rid="pone.0102833-Lindner1" ref-type="bibr">[56]</xref>
and
<ext-link ext-link-type="uri" xlink:href="http://www.trentool.de">www.trentool.de</ext-link>
). For detailed testing of the estimator
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e214.jpg"></inline-graphic>
</inline-formula>
(eq. 4) refer to
<xref rid="pone.0102833-Wibral4" ref-type="bibr">[53]</xref>
.</p>
<sec id="s4c1">
<title>Coupled Lorenz systems</title>
<p>Simulated data was taken from two unidirectionally coupled Lorenz systems labeled
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e215.jpg"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e216.jpg"></inline-graphic>
</inline-formula>
. Systems interacted in direction
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e217.jpg"></inline-graphic>
</inline-formula>
according to equations:
<disp-formula id="pone.0102833.e218">
<graphic xlink:href="pone.0102833.e218.jpg" position="anchor" orientation="portrait"></graphic>
<label>(11)</label>
</disp-formula>
where
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e219.jpg"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e220.jpg"></inline-graphic>
</inline-formula>
is the coupling delay and
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e221.jpg"></inline-graphic>
</inline-formula>
is the coupling strength;
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e222.jpg"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e223.jpg"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e224.jpg"></inline-graphic>
</inline-formula>
are the
<italic>Prandtl number</italic>
, the
<italic>Rayleigh number</italic>
, and a geometrical scale. Note that
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e225.jpg"></inline-graphic>
</inline-formula>
for the test cases (no self feedback, no coupling from
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e226.jpg"></inline-graphic>
</inline-formula>
to
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e227.jpg"></inline-graphic>
</inline-formula>
). Numerical solutions to these differential equations were computed using the
<italic>dde23</italic>
solver in MATLAB and results were resampled such that the delays amounted to the values given below. For analysis purposes we analyzed the V-coordinates of the systems.</p>
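The delayed unidirectional coupling scheme of equation 11 can be sketched with a simple Euler integration; the standard Lorenz parameters (sigma = 10, r = 28, b = 8/3), the step size, and the exact form of the delayed coupling term below are illustrative assumptions, not the dde23 setup used in the paper.

```python
import numpy as np

def coupled_lorenz(T=3000, dt=0.01, delta=45, gamma=0.5,
                   sigma=10.0, r=28.0, b=8.0 / 3.0, seed=0):
    """Euler integration of two Lorenz systems X and Y with a delayed
    unidirectional coupling X -> Y added to the V-equation of Y.
    delta is the coupling delay in samples; parameter values are
    illustrative assumptions. Returns the analyzed V-coordinates."""
    rng = np.random.default_rng(seed)
    n, d = T + delta, delta            # extra samples to fill the delay buffer
    xs = np.zeros((n, 3))
    ys = np.zeros((n, 3))
    xs[0] = rng.normal(size=3)
    ys[0] = rng.normal(size=3)
    for t in range(1, n):
        u, v, w = xs[t - 1]
        xs[t] = xs[t - 1] + dt * np.array(
            [sigma * (v - u), u * (r - w) - v, u * v - b * w])
        u2, v2, w2 = ys[t - 1]
        # delayed drive: V-coordinate of X, delta samples in the past
        drive = gamma * xs[t - 1 - d, 1] if t - 1 - d >= 0 else 0.0
        ys[t] = ys[t - 1] + dt * np.array(
            [sigma * (v2 - u2), u2 * (r - w2) - v2 + drive,
             u2 * v2 - b * w2])
    return xs[d:, 1], ys[d:, 1]
```

Setting gamma to 0 outside a chosen interval reproduces the non-stationary coupling scenario described above.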
<p>We introduced non-stationarity in the coupling between both systems by varying the coupling strength
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e228.jpg"></inline-graphic>
</inline-formula>
over time. In particular, a coupling
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e229.jpg"></inline-graphic>
</inline-formula>
was set for a limited time interval only, whereas before and after the coupling interval
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e230.jpg"></inline-graphic>
</inline-formula>
was set to
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e231.jpg"></inline-graphic>
</inline-formula>
. A constant information transfer delay
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e232.jpg"></inline-graphic>
</inline-formula>
was simulated for the whole coupling interval. We simulated 150 repetitions of 3000 data points each, with a coupling interval from approximately data point 1000 to data point 2000 (see
<xref ref-type="fig" rid="pone-0102833-g006">Figure 6</xref>
, panel
<bold>A</bold>
).</p>
<fig id="pone-0102833-g006" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0102833.g006</object-id>
<label>Figure 6</label>
<caption>
<title>Transfer entropy reconstruction from non-stationary Lorenz systems.</title>
<p>We used two dynamically coupled Lorenz systems (A) to simulate non-stationarity in data generating processes. A coupling
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e233.jpg"></inline-graphic>
</inline-formula>
was present during a time interval from 1000 to 2000 ms only (
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e234.jpg"></inline-graphic>
</inline-formula>
otherwise). The information transfer delay was set to
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e235.jpg"></inline-graphic>
</inline-formula>
. Transfer entropy (TE) values were reconstructed using the ensemble method combined with the scanning approach proposed in
<xref rid="pone.0102833-Wibral4" ref-type="bibr">[53]</xref>
to reconstruct information transfer delays. Assumed delays
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e236.jpg"></inline-graphic>
</inline-formula>
were scanned from 35 to 55 ms (1 ms resolution). In (B) the maximum TE values for original data over this interval are shown in blue. Red bars indicate the corresponding mean over surrogate TE values (error bars indicate 1 SD). Significant TE was found for the second time window only; here, the delay was reconstructed as
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e237.jpg"></inline-graphic>
</inline-formula>
.</p>
</caption>
<graphic xlink:href="pone.0102833.g006"></graphic>
</fig>
<p>For each scenario, 500 surrogate data sets were computed to allow for statistical testing of the reconstructed information transfer. Surrogate data were created by permutation of data points in blocks of the target time series (
<xref ref-type="fig" rid="pone-0102833-g003">Figure 3</xref>
), leaving each repetition intact. The value
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e238.jpg"></inline-graphic>
</inline-formula>
for the nearest neighbor search was set to 4 for all analyses (TRENTOOL default, see also
<xref rid="pone.0102833-Kraskov2" ref-type="bibr">[87]</xref>
).</p>
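The surrogate construction can be sketched as follows; this is a minimal sketch of repetition-wise permutation, whereas the actual blockwise scheme of Figure 3 is implemented in TRENTOOL.

```python
import numpy as np

def trial_shuffle_surrogates(target, n_surrogates=500, seed=0):
    """Create surrogate data sets by permuting whole repetitions (blocks)
    of the target time series. Each repetition stays internally intact,
    which preserves its temporal structure while destroying the
    trial-to-trial pairing with the source.
    target: array of shape (n_repetitions, n_samples)."""
    rng = np.random.default_rng(seed)
    return [target[rng.permutation(target.shape[0])]
            for _ in range(n_surrogates)]
```

Estimating TE between the unchanged source and each surrogate target yields the null distribution against which the original TE value is tested.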
</sec>
<sec id="s4c2">
<title>Results</title>
<p>We analyzed data from three time windows from 200 to 450 ms, 1600 to 1850 ms and 2750 to 3000 ms using the estimator proposed in eq. 10 with
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e239.jpg"></inline-graphic>
</inline-formula>
, assuming local stationarity (
<xref ref-type="fig" rid="pone-0102833-g006">Figure 6</xref>
, panel
<bold>A</bold>
). For each time window, we scanned assumed delays in the interval
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e240.jpg"></inline-graphic>
</inline-formula>
.
<xref ref-type="fig" rid="pone-0102833-g006">Figure 6</xref>
, panel
<bold>B</bold>
, shows the maximum TE value from original data (blue) over all assumed
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e241.jpg"></inline-graphic>
</inline-formula>
and the corresponding mean surrogate TE value (red). Significant differences between original TE and surrogate TE were found in the second time window only (indicated by an asterisk). No significant information transfer was found during the non-coupling intervals. The information transfer delay reconstructed for the second analysis window was 49 ms (true information transfer delay
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e242.jpg"></inline-graphic>
</inline-formula>
). Thus, the proposed implementation was able to reliably detect a coupling between both systems and reconstructed the corresponding information transfer delay with an error of less than 10%.</p>
</sec>
</sec>
<sec id="s4d">
<title>Evaluation on autoregressive processes</title>
<p>To assess the performance of the proposed implementation on non-abrupt changes in coupling, we simulated various coupling scenarios for two autoregressive processes
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e243.jpg"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e244.jpg"></inline-graphic>
</inline-formula>
of order 1 (AR(1)-processes) with couplings that varied over time. In each scenario, couplings were modulated using hyperbolic tangent functions to realize a smooth transition between uncoupled and coupled regimes. The AR(1)-processes were simulated according to the equations
<disp-formula id="pone.0102833.e245">
<graphic xlink:href="pone.0102833.e245.jpg" position="anchor" orientation="portrait"></graphic>
<label>(12)</label>
</disp-formula>
</p>
<p>
<disp-formula id="pone.0102833.e246">
<graphic xlink:href="pone.0102833.e246.jpg" position="anchor" orientation="portrait"></graphic>
<label>(13)</label>
</disp-formula>
where
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e247.jpg"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e248.jpg"></inline-graphic>
</inline-formula>
are the AR parameters,
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e249.jpg"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e250.jpg"></inline-graphic>
</inline-formula>
denote coupling strength,
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e251.jpg"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e252.jpg"></inline-graphic>
</inline-formula>
are the coupling delays and
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e253.jpg"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e254.jpg"></inline-graphic>
</inline-formula>
denote uncorrelated, unit-variance, zero-mean Gaussian white noise terms.</p>
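For illustration, a pair of delay-coupled AR(1) processes in the spirit of equations 12 and 13 can be simulated as below; the exact update form and the mapping of table 1's columns to parameters are assumptions, since the published equations appear only as image formulas.

```python
import numpy as np

def simulate_ar1_pair(n=3000, a_x=0.75, a_y=0.35, g_xy=-0.35, g_yx=0.0,
                      d_xy=10, d_yx=20, seed=0):
    """Two AR(1) processes with delayed cross-coupling (cf. eqs. 12-13).
    g_xy may be a scalar or a length-n array of time-varying coupling
    strengths; default values follow the unidirectional scenario of
    table 1 under an assumed column-to-parameter mapping."""
    rng = np.random.default_rng(seed)
    g_xy = np.broadcast_to(np.asarray(g_xy, dtype=float), (n,))
    x, y = np.zeros(n), np.zeros(n)
    ex = rng.standard_normal(n)   # unit-variance, zero-mean Gaussian noise
    ey = rng.standard_normal(n)
    for t in range(1, n):
        x[t] = a_x * x[t - 1] + (g_yx * y[t - d_yx] if t >= d_yx else 0.0) + ex[t]
        y[t] = a_y * y[t - 1] + (g_xy[t] * x[t - d_xy] if t >= d_xy else 0.0) + ey[t]
    return x, y
```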
<sec id="s4d1">
<title>Simulated coupling scenarios</title>
<p>We simulated three coupling scenarios, where the coupling varied in strength over the course of a repetition (duration 3000 ms): (1) unidirectional coupling
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e255.jpg"></inline-graphic>
</inline-formula>
with a coupling onset around 1000 ms; (2) unidirectional coupling with a two-step increase in coupling
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e256.jpg"></inline-graphic>
</inline-formula>
at around 1000 ms and around 2000 ms; (3) bidirectional coupling
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e257.jpg"></inline-graphic>
</inline-formula>
with onset around 1000 ms and
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e258.jpg"></inline-graphic>
</inline-formula>
with onset around 2000 ms. See
<xref ref-type="table" rid="pone-0102833-t001">table 1</xref>
for specific parameter values used in each scenario.</p>
<table-wrap id="pone-0102833-t001" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0102833.t001</object-id>
<label>Table 1</label>
<caption>
<title>Parameter settings for simulated autoregressive processes.</title>
</caption>
<alternatives>
<graphic id="pone-0102833-t001-1" xlink:href="pone.0102833.t001"></graphic>
<table frame="hsides" rules="groups">
<colgroup span="1">
<col align="left" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
</colgroup>
<thead>
<tr>
<td align="left" rowspan="1" colspan="1">Testcase</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e259.jpg"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e260.jpg"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e261.jpg"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e262.jpg"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e263.jpg"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e264.jpg"></inline-graphic>
</inline-formula>
</td>
</tr>
</thead>
<tbody>
<tr>
<td align="left" rowspan="1" colspan="1">Unidirectional</td>
<td align="left" rowspan="1" colspan="1">0.75</td>
<td align="left" rowspan="1" colspan="1">0.35</td>
<td align="left" rowspan="1" colspan="1">0</td>
<td align="left" rowspan="1" colspan="1">−0.35</td>
<td align="left" rowspan="1" colspan="1">0</td>
<td align="left" rowspan="1" colspan="1">10</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Two-step unidirectional</td>
<td align="left" rowspan="1" colspan="1">0.75</td>
<td align="left" rowspan="1" colspan="1">0.35</td>
<td align="left" rowspan="1" colspan="1">0</td>
<td align="left" rowspan="1" colspan="1">−0.35</td>
<td align="left" rowspan="1" colspan="1">0</td>
<td align="left" rowspan="1" colspan="1">10</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Bidirectional</td>
<td align="left" rowspan="1" colspan="1">0.475</td>
<td align="left" rowspan="1" colspan="1">0.35</td>
<td align="left" rowspan="1" colspan="1">−0.4</td>
<td align="left" rowspan="1" colspan="1">−0.35</td>
<td align="left" rowspan="1" colspan="1">20</td>
<td align="left" rowspan="1" colspan="1">10</td>
</tr>
</tbody>
</table>
</alternatives>
</table-wrap>
<p>We realized a varying coupling strength
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e265.jpg"></inline-graphic>
</inline-formula>
(and
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e266.jpg"></inline-graphic>
</inline-formula>
for scenario (3)) by modulating coupling parameters
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e267.jpg"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e268.jpg"></inline-graphic>
</inline-formula>
with a hyperbolic tangent function. No coupling was realized by setting
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e269.jpg"></inline-graphic>
</inline-formula>
. For scenarios (1) and (3) we used the coupling
<disp-formula id="pone.0102833.e270">
<graphic xlink:href="pone.0102833.e270.jpg" position="anchor" orientation="portrait"></graphic>
<label>(14)</label>
</disp-formula>
</p>
<p>
<disp-formula id="pone.0102833.e271">
<graphic xlink:href="pone.0102833.e271.jpg" position="anchor" orientation="portrait"></graphic>
<label>(15)</label>
</disp-formula>
where 0.05 is the slope, and 2000 and 1000 are the inflection points of the respective hyperbolic tangents. Note that we additionally scaled the tanh functions such that their values ranged from 0 to 1. For coupling scenario (2), the two-step increase in
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e272.jpg"></inline-graphic>
</inline-formula>
was expressed as:
<disp-formula id="pone.0102833.e273">
<graphic xlink:href="pone.0102833.e273.jpg" position="anchor" orientation="portrait"></graphic>
<label>(16)</label>
</disp-formula>
</p>
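The coupling modulations of equations 14 to 16 can be written out as follows; the 0.5·(tanh + 1) scaling to the range [0, 1] and the target strength of −0.35 follow the text, while the split into two half-height steps for scenario (2) is an assumption.

```python
import numpy as np

def tanh_step(t, t0, slope=0.05):
    """Scaled hyperbolic-tangent step from 0 to 1 with inflection at t0
    (slope taken from the text; the exact scaling is an assumption)."""
    return 0.5 * (np.tanh(slope * (t - t0)) + 1.0)

t = np.arange(3000)
# scenarios (1)/(3): single smooth onset around t = 1000 ms
gamma_1 = -0.35 * tanh_step(t, 1000)
# scenario (2): two-step increase with onsets around t = 1000 and 2000 ms
gamma_2 = -0.175 * (tanh_step(t, 1000) + tanh_step(t, 2000))
```

With slope 0.05, each step rises over roughly 200 ms around its inflection point, matching the transition epochs described above.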
<p>We chose the arguments of the hyperbolic function such that the function's slope led to a smooth increase in the coupling over an epoch of approximately 200 ms around the inflection points at 1 and 2 s respectively (
<xref ref-type="fig" rid="pone-0102833-g007">Figure 7</xref>
, panels
<bold>A</bold>–<bold>D</bold>
). For each scenario, we simulated 50 trials of length 3000 ms at a sampling rate of 1000 Hz. We then estimated time-resolved TE for analysis windows of length
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e274.jpg"></inline-graphic>
</inline-formula>
. Again, we mixed temporal and ensemble pooling according to eq. 10. For the scenario with unidirectional coupling (1) we used four analysis windows to cover the change in coupling (from 0.2 to 0.5 s, 0.5 to 0.8 s, 0.8 to 1.1 s, and 1.1 to 1.4 s, see
<xref ref-type="fig" rid="pone-0102833-g007">Figure 7</xref>
, panel
<bold>E</bold>
), for the two-step increase (2) and bidirectional (3) scenarios, we used eight analysis windows each (from 0.2 to 0.5 s, 0.5 to 0.8 s, 0.8 to 1.1 s, 1.1 to 1.4 s, 1.4 to 1.7 s, 1.7 to 2.0 s, 2.0 to 2.3 s, and 2.3 to 2.6 s, see
<xref ref-type="fig" rid="pone-0102833-g007">Figure 7</xref>
, panels
<bold>F</bold>
and
<bold>G</bold>
). As for the Lorenz systems, 500 surrogate data sets were used for the statistical testing in each analysis. Surrogate data were created by blockwise (i.e. repetitionwise) permutation of data points in the target time series. The value
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e275.jpg"></inline-graphic>
</inline-formula>
for the nearest neighbor search was set to 4 for all analyses (TRENTOOL default, see also
<xref rid="pone.0102833-Kraskov2" ref-type="bibr">[87]</xref>
).</p>
<fig id="pone-0102833-g007" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0102833.g007</object-id>
<label>Figure 7</label>
<caption>
<title>Transfer entropy reconstruction from coupled autoregressive processes.</title>
<p>We simulated two dynamically coupled autoregressive processes (A) with coupling delays
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e276.jpg"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e277.jpg"></inline-graphic>
</inline-formula>
, and coupling scenarios: (B) unidirectional coupling
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e278.jpg"></inline-graphic>
</inline-formula>
(blue line) with onset around 1 s, coupling
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e279.jpg"></inline-graphic>
</inline-formula>
set to 0 (red line); (C) unidirectional coupling
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e280.jpg"></inline-graphic>
</inline-formula>
(blue line) with onset around 1 s and an increase in coupling strength at around 2 s, coupling
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e281.jpg"></inline-graphic>
</inline-formula>
set to 0 (red line); (D) bidirectional coupling
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e282.jpg"></inline-graphic>
</inline-formula>
(blue line) with onset around 1 s and
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e283.jpg"></inline-graphic>
</inline-formula>
(red line) with onset around 2 s. (E-G) Time-resolved transfer entropy (TE) for both directions of interaction, blue and red lines indicate raw TE values for
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e284.jpg"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e285.jpg"></inline-graphic>
</inline-formula>
respectively. Dashed lines denote significance thresholds at 0.01% (corrected for multiple comparisons over signal combinations). Shaded areas (red and blue) indicate the maximum absolute TE values for significant information transfer (indicated by asterisks in red and blue). (E) TE values for unidirectional coupling; (F) unidirectional coupling with a two-step increase in coupling strength; (G) bidirectional coupling.</p>
</caption>
<graphic xlink:href="pone.0102833.g007"></graphic>
</fig>
</sec>
<sec id="s4d2">
<title>Results – Scenario (1), unidirectional coupling</title>
<p>For scenario (1) of two unidirectionally coupled AR(1)-processes with a delay
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e286.jpg"></inline-graphic>
</inline-formula>
, we used a scanning approach
<xref rid="pone.0102833-Wibral4" ref-type="bibr">[53]</xref>
to reconstruct TE and the corresponding information transfer delay. We scanned assumed delays in the interval
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e287.jpg"></inline-graphic>
</inline-formula>
and used four analysis windows of length 300 ms each, ranging from 0.2 to 1.4 s. For the first two analysis windows (0.2 to 0.5 and 0.5 to 0.8 s), no significant information transfer was found. For the third and fourth analysis windows we detected significant TE, with a maximum significant TE value at an assumed delay of 7 ms for the third window (0.8 to 1.1 s) and at 9 ms for the fourth window (1.1 to 1.4 s). Thus, the proposed implementation was able to detect information transfer between both processes where present (after 1.1 s). During the transition in coupling strength between 0.8 and 1.1 s, TE was detected, but the method showed a small error in the reconstructed information transfer delay. This may be due to insufficient data for detecting the weaker coupling during this epoch of the simulated coupling (see below).</p>
</sec>
<sec id="s4d3">
<title>Results – Scenario (2), unidirectional coupling with two-step increase</title>
<p>For scenario (2), we again used the scanning approach for TE reconstruction, using an interval of assumed delays
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e288.jpg"></inline-graphic>
</inline-formula>
, where the true delay was simulated at
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e289.jpg"></inline-graphic>
</inline-formula>
. No TE was detected prior to the coupling onset around 1 s. TE was detected for analysis windows 4, 5, and 6 (1.1 to 1.4, 1.4 to 1.7, 1.7 to 2.0 s) with reconstructed information transfer delays of 10, 4, and 7 ms, respectively. Further, significant TE was found for analysis windows 7 and 8 (after the second increase in coupling strength around 2 s). Here, the correct coupling delay of 10 ms was reconstructed. One false positive result was obtained in window 6 (1.7 to 2.0 s), where significant TE was found in the direction
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e290.jpg"></inline-graphic>
</inline-formula>
.</p>
<p>Note that the method's ability to recover information transfer from data depends on the strength of the coupling relative to the amount of data available for TE estimation. This is observable in the reconstructed TE in the third analysis window for scenarios (1) and (2): in scenario (2) no TE is detected, whereas in scenario (1) weak information transfer is already reconstructed for the third window. Note that in scenario (2) the simulated coupling between 1 and 2 s is much weaker than the coupling in the unidirectional scenario (1) (
<xref ref-type="fig" rid="pone-0102833-g007">Figure 7</xref>
, panels
<bold>C</bold>
and
<bold>B</bold>
). This resulted in smaller and non-significant absolute TE values and in reconstructed information transfer delays that were less precise.</p>
</sec>
<sec id="s4d4">
<title>Results – Scenario (3), bidirectional coupling</title>
<p>For scenario (3), we used the scanning approach for TE reconstruction, using an interval of assumed delays
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e291.jpg"></inline-graphic>
</inline-formula>
, where the true delay was simulated at
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e292.jpg"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e293.jpg"></inline-graphic>
</inline-formula>
. No TE in either direction was detected prior to the first coupling onset around 1 s. TE for the first direction
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e294.jpg"></inline-graphic>
</inline-formula>
was detected after coupling onset around 1 s for analysis windows 4, 5, 6, 7, and 8. Reconstructed information transfer delays were 8 and 2 ms for analysis windows 4 and 5. For each of the following analysis windows 6 to 8 the correct delay of 10 ms was reconstructed.</p>
<p>TE for the second direction
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e295.jpg"></inline-graphic>
</inline-formula>
was detected after the coupling onset around 2 s for analysis windows 7 and 8, where the correct coupling delay of 20 ms was also reconstructed. Thus, the proposed implementation was able to reconstruct information transfer in bidirectionally coupled systems.</p>
</sec>
</sec>
<sec id="s4e">
<title>Evaluation of the robustness of ensemble-based TE-estimation</title>
<p>We tested the robustness of the ensemble method for cases where the amount of data available for TE estimation was severely limited. We created two coupled Lorenz systems
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e296.jpg"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e297.jpg"></inline-graphic>
</inline-formula>
from which we sampled a maximum of 300 repetitions of 300 ms each at 1000 Hz, using a coupling delay of
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e298.jpg"></inline-graphic>
</inline-formula>
(see
<xref ref-type="disp-formula" rid="pone.0102833.e218">equation 11</xref>
). We embedded the resulting data with their optimal embedding parameters for different values of the assumed delay
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e299.jpg"></inline-graphic>
</inline-formula>
(30 to 60 ms, step size of 1 ms, also see
<xref ref-type="disp-formula" rid="pone.0102833.e076">equation 4</xref>
). From the embedded data, we used subsets of data points with varying size
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e300.jpg"></inline-graphic>
</inline-formula>
(
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e301.jpg"></inline-graphic>
</inline-formula>
) to estimate TE according to
<xref ref-type="disp-formula" rid="pone.0102833.e148">equation 10</xref>
(we always used the first
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e302.jpg"></inline-graphic>
</inline-formula>
consecutive data points for TE estimation). For each
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e303.jpg"></inline-graphic>
</inline-formula>
and number of data points
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e304.jpg"></inline-graphic>
</inline-formula>
, we created surrogate data to test the estimated TE value for statistical significance. Furthermore, we reconstructed the corresponding information transfer delay for each
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e305.jpg"></inline-graphic>
</inline-formula>
by finding the maximum TE value over all values for
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e306.jpg"></inline-graphic>
</inline-formula>
. A reconstructed TE value was considered a robust estimate of the simulated coupling if the reconstructed delay recovered the simulated information transfer delay of
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e307.jpg"></inline-graphic>
</inline-formula>
with an error of
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e308.jpg"></inline-graphic>
</inline-formula>
, i.e.
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e309.jpg"></inline-graphic>
</inline-formula>
.</p>
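The scanning and robustness check described above amount to an argmax over assumed delays plus a tolerance test; the tolerance value below is a placeholder, since the exact error bound appears only as an image formula in the text.

```python
import numpy as np

def reconstruct_delay(te_by_u, scanned_u):
    """Reconstructed information transfer delay: the assumed delay u
    that maximizes the estimated TE (scanning approach of [53])."""
    return scanned_u[int(np.argmax(te_by_u))]

def is_robust(u_hat, delta=45, tol_ms=2):
    """Robustness criterion sketch: the reconstruction counts as robust
    if the recovered delay lies within tol_ms of the simulated delay.
    tol_ms is a placeholder for the bound given in the text."""
    return abs(u_hat - delta) <= tol_ms

# toy example: TE peaking at u = 45 ms over the scanned range 30..60 ms
u = np.arange(30, 61)
te = np.exp(-0.5 * ((u - 45) / 3.0) ** 2)
```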
<p>A sufficiently accurate reconstruction was reached for 10000 and 30000 data points (
<xref ref-type="fig" rid="pone-0102833-g008">Figure 8</xref>
). For 5000 data points, estimation was off by approximately 7% (the reconstructed information transfer delay was 48 ms); using less data led to a further decline in the accuracy of the recovered information transfer delay (reconstructed delays were 50 ms and 54 ms for 2000 and 500 data points, respectively).</p>
<fig id="pone-0102833-g008" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0102833.g008</object-id>
<label>Figure 8</label>
<caption>
<title>Robustness of transfer entropy estimation with respect to limited amounts of data.</title>
<p>Estimated transfer entropy (TE) values
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e310.jpg"></inline-graphic>
</inline-formula>
for estimations using varying numbers of data points (color coded) as a function of
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e311.jpg"></inline-graphic>
</inline-formula>
. Data was sampled from two Lorenz systems
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e312.jpg"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e313.jpg"></inline-graphic>
</inline-formula>
with coupling
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e314.jpg"></inline-graphic>
</inline-formula>
. The simulated information transfer delay
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e315.jpg"></inline-graphic>
</inline-formula>
is indicated by a vertical dotted line. Sampled data was embedded and varying numbers of embedded data points (500, 2000, 5000, 10000, 30000) were used for TE estimation. For each estimation, the maximum
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e316.jpg"></inline-graphic>
</inline-formula>
values for all values of
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e317.jpg"></inline-graphic>
</inline-formula>
are indicated by solid dots. Dashed lines indicate significance thresholds (
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e318.jpg"></inline-graphic>
</inline-formula>
).</p>
</caption>
<graphic xlink:href="pone.0102833.g008"></graphic>
</fig>
</sec>
<sec id="s4f">
<title>Evaluation on neural time series from magnetoencephalography</title>
<p>To demonstrate the proposed method's suitability for time-resolved reconstruction of information transfer and the corresponding delays from biological time series, we analyzed magnetoencephalographic (MEG) recordings from a perceptual closure experiment described in
<xref rid="pone.0102833-Grtzner1" ref-type="bibr">[86]</xref>
.</p>
<sec id="s4f1">
<title>Subjects</title>
<p>MEG data were obtained from 15 healthy subjects (11 females; mean
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e319.jpg"></inline-graphic>
</inline-formula>
SD age, 25.4
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e320.jpg"></inline-graphic>
</inline-formula>
5.6 years), recruited from the local community.</p>
</sec>
<sec id="s4f2">
<title>Task</title>
<p>Subjects were presented with a randomized sequence of degraded black-and-white pictures of human faces
<xref rid="pone.0102833-Mooney1" ref-type="bibr">[88]</xref>
(
<xref ref-type="fig" rid="pone-0102833-g009">Figure 9</xref>
, panel
<bold>A</bold>
) and scrambled stimuli, in which the black and white patches were randomly rearranged to minimize the likelihood of detecting a face. Subjects had to indicate the detection of a face or no face by a button press. Each stimulus was presented for 200 ms, with a random inter-repetition interval (IRI) of 3500 to 4500 ms (<xref ref-type="fig" rid="pone-0102833-g009">Figure 9</xref>, panel
<bold>E</bold>
). For further analysis, we used only repetitions in which faces were correctly identified.</p>
<fig id="pone-0102833-g009" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0102833.g009</object-id>
<label>Figure 9</label>
<caption>
<title>Transfer entropy reconstruction from electrophysiological data.</title>
<p>Time-resolved reconstruction of transfer entropy (TE) from magnetoencephalographic (MEG) source data recorded during a face recognition task. (A) Face stimulus
<xref rid="pone.0102833-Mooney1" ref-type="bibr">[88]</xref>
. (B) Cortical sources after beamforming of MEG data (L, left; R, right: L orbitofrontal cortex (OFC); R middle frontal gyrus (MiFG); L inferior frontal gyrus (IFG left); R inferior frontal gyrus (IFG right); L anterior inferotemporal cortex (aTL left); L cingulate gyrus (cing); R premotor cortex (premotor); R superior temporal gyrus (STG); R anterior inferotemporal cortex (aTL right); L fusiform gyrus (FFA); L angular/supramarginal gyrus (SMG); R superior parietal lobule/precuneus (SPL); L caudal ITG/LOC (cITG); R primary visual cortex (V1)). (C) Reconstructed TE in three single subjects (red box) in three time windows (0−150 ms, 150−300 ms, 300−450 ms). Each link (red arrows) corresponds to significant TE on single subject level (corrected for multiple comparisons). (D) Thresholded TE links over 15 subjects (blue box) in three time windows (0−150 ms, 150−300 ms, 300−450 ms). Each link (black arrows) corresponds to significant TE in eight and more individual subjects (
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e321.jpg"></inline-graphic>
</inline-formula>
, after correction for multiple comparisons). Blue arrows indicate differences between time windows, i.e. links that occur for the first time in the respective window. (E) Experimental design: stimulus was presented for 200 ms (gray shading), during the inter stimulus interval (ISI, 1800 ms) a fixation cross was displayed.</p>
</caption>
<graphic xlink:href="pone.0102833.g009"></graphic>
</fig>
</sec>
<sec id="s4f3">
<title>MEG and MRI data acquisition</title>
<p>MEG data were recorded using a 275-channel whole-head system (Omega 2005, VSM MedTech Ltd., BC, Canada) at a rate of 600 Hz in a synthetic third order axial gradiometer configuration. The data were filtered with 4th order Butterworth filters with 0.5 Hz high-pass and 150 Hz low-pass. Behavioral responses were recorded using a fiber optic response pad (Lumitouch, Photon Control Inc., Burnaby, BC, Canada).</p>
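The filter settings above can be reproduced offline as follows; a minimal SciPy sketch (the zero-phase `filtfilt` application and all variable names are our assumptions, not the acquisition pipeline's actual implementation):

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 600.0  # sampling rate in Hz, as in the recording described above

# 4th-order Butterworth filters: 0.5 Hz high-pass and 150 Hz low-pass
b_hp, a_hp = butter(4, 0.5 / (FS / 2), btype="high")
b_lp, a_lp = butter(4, 150.0 / (FS / 2), btype="low")

def bandpass(signal):
    """Apply zero-phase high-pass, then low-pass filtering to one channel."""
    return filtfilt(b_lp, a_lp, filtfilt(b_hp, a_hp, signal))
```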
<p>Structural magnetic resonance images (MRI) were obtained with a 3 T Siemens Allegra, using a 3D magnetization-prepared rapid-acquisition gradient echo sequence. Anatomical images were used to create individual head models for MEG source reconstruction.</p>
</sec>
<sec id="s4f4">
<title>Data analysis</title>
<p>MEG data were analyzed using the open source MATLAB toolboxes FieldTrip (version 2008-12-08;
<xref rid="pone.0102833-Oostenveld1" ref-type="bibr">[89]</xref>
), SPM2 (
<ext-link ext-link-type="uri" xlink:href="http://www.fil.ion.ucl.ac.uk/spm/">http://www.fil.ion.ucl.ac.uk/spm/</ext-link>
), and TRENTOOL
<xref rid="pone.0102833-Lindner1" ref-type="bibr">[56]</xref>
. We will briefly describe the applied analysis here; for a more in-depth treatment, refer to
<xref rid="pone.0102833-Grtzner1" ref-type="bibr">[86]</xref>
.</p>
<p>For data preprocessing, data epochs (repetitions) were defined from the continuously recorded MEG signals from −1000 to 1000 ms with respect to the onset of the visual stimulus. Only data repetitions with correct responses were considered for analysis. Data epochs contaminated by eye blinks, muscle activity, or jump artifacts in the sensors were discarded. Data epochs were baseline corrected by subtracting the mean amplitude during an epoch ranging from −500 to −100 ms before stimulus onset.</p>
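The baseline correction step can be sketched as follows (a minimal illustration; array shapes and names are hypothetical):

```python
import numpy as np

FS = 600                 # sampling rate in Hz
ONSET = int(1.0 * FS)    # index of stimulus onset in a -1000..1000 ms epoch

def baseline_correct(epoch, t_start=-0.5, t_end=-0.1):
    """Subtract the mean amplitude of the [-500, -100] ms pre-stimulus window."""
    i0 = ONSET + int(t_start * FS)
    i1 = ONSET + int(t_end * FS)
    return epoch - epoch[i0:i1].mean()
```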
<p>To investigate differences in source activation in the face and non-face condition, we used a frequency domain beamformer
<xref rid="pone.0102833-Gross1" ref-type="bibr">[90]</xref>
at frequencies of interest that had been identified at the sensor level (80 Hz with a spectral smoothing of 20 Hz). We computed the frequency domain beamformer filters for combined data epochs (“common filters”) consisting of activation (multiple windows of 200 ms duration; onsets every 50 ms from 0 to 450 ms) and baseline data (−350 to −150 ms) for each analysis interval. To compensate for the short duration of the data windows, we used a regularization of
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e322.jpg"></inline-graphic>
</inline-formula>
<xref rid="pone.0102833-Brookes1" ref-type="bibr">[91]</xref>
.</p>
<p>To find significant source activations in the face versus non-face condition, we first conducted a within-subject t-test for activation versus baseline effects. Next, the t-values from this test were subjected to a second-level randomization test at the group level to assess differences between the face and no-face conditions; a p-value
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e323.jpg"></inline-graphic>
</inline-formula>
0.01 was considered significant. We identified 14 sources with differential spectral power between both conditions in the frequency band of interest in occipital, parietal, temporal, and frontal cortices (see
<xref ref-type="fig" rid="pone-0102833-g009">Figure 9</xref>
, panel
<bold>B</bold>
, and
<xref rid="pone.0102833-Grtzner1" ref-type="bibr">[86]</xref>
for exact anatomical locations). We then reconstructed source time courses for TE analysis, this time using a broadband beamformer with a bandwidth of 10 to 150 Hz.</p>
<p>We estimated TE between beamformer source time courses using our ensemble method with a mixed pooling of embedded time points over repetitions
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e324.jpg"></inline-graphic>
</inline-formula>
and time windows
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e325.jpg"></inline-graphic>
</inline-formula>
(eq. 10). We analyzed three non-overlapping time windows
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e326.jpg"></inline-graphic>
</inline-formula>
of 150 ms each (0–150 ms, 150–300 ms, 300–450 ms,
<xref ref-type="fig" rid="pone-0102833-g009">Figure 9</xref>
, panel
<bold>C</bold>
). We furthermore reconstructed information transfer delays for significant information transfer by scanning over a range of assumed delays from 5 to 17 ms (resolution 2 ms), following the approach in
<xref rid="pone.0102833-Wibral4" ref-type="bibr">[53]</xref>
. We corrected the resulting information transfer pattern for cascade effects as well as common drive effects using a graph-based post-hoc correction proposed in
<xref rid="pone.0102833-Wibral5" ref-type="bibr">[54]</xref>
.</p>
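Following the approach in [53], the reconstructed delay is the assumed delay that maximizes estimated TE; a minimal sketch (`te_for_delay` is a hypothetical stand-in for the actual TRENTOOL estimator):

```python
ASSUMED_DELAYS_MS = range(5, 18, 2)   # scanned delays: 5, 7, ..., 17 ms

def reconstruct_delay(te_for_delay):
    """Return the assumed delay u (in ms) that maximizes the TE estimate.

    te_for_delay: a callable mapping an assumed delay to the TE estimated
    with that delay (here a placeholder for the actual estimator).
    """
    te_values = {u: te_for_delay(u) for u in ASSUMED_DELAYS_MS}
    return max(te_values, key=te_values.get)
```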
</sec>
<sec id="s4f5">
<title>Results</title>
<p>Time-resolved GPU-based TE analysis revealed significant information transfer at the group level (
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e327.jpg"></inline-graphic>
</inline-formula>
corrected for multiple comparisons; binomial test under the null hypothesis that the number of occurrences
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e328.jpg"></inline-graphic>
</inline-formula>
of a link is
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e329.jpg"></inline-graphic>
</inline-formula>
-distributed, where
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e330.jpg"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e331.jpg"></inline-graphic>
</inline-formula>
), that changed over time (
<xref ref-type="fig" rid="pone-0102833-g009">Figure 9</xref>
, panel
<bold>D</bold>
and
<xref ref-type="table" rid="pone-0102833-t002">Table 2</xref>
for reconstructed information transfer delays). Our preliminary findings of information transfer are in line with hypotheses formulated in
<xref rid="pone.0102833-Bar1" ref-type="bibr">[92]</xref>
,
<xref rid="pone.0102833-Cavanagh1" ref-type="bibr">[93]</xref>
and
<xref rid="pone.0102833-Grtzner1" ref-type="bibr">[86]</xref>
, and the time-dependent changes show our method's sensitivity to the dynamics of information processing during experimental stimulation, in line with the simulation results above.</p>
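The group-level binomial statistic can be sketched as follows (the values of p and alpha below are hypothetical placeholders for the study's corrected single-subject significance level, under which eight or more of 15 subjects were required):

```python
from math import comb

def binom_sf(k, n, p):
    """P(K >= k) for K ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

def min_significant_count(n=15, p=0.05, alpha=0.05):
    """Smallest number of subjects showing a link that is significant
    under the binomial null hypothesis of chance-level occurrence."""
    for k in range(n + 1):
        if binom_sf(k, n, p) < alpha:
            return k
```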
<table-wrap id="pone-0102833-t002" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0102833.t002</object-id>
<label>Table 2</label>
<caption>
<title>Reconstructed information transfer delays for magnetoencephalographic data.</title>
</caption>
<alternatives>
<graphic id="pone-0102833-t002-2" xlink:href="pone.0102833.t002"></graphic>
<table frame="hsides" rules="groups">
<colgroup span="1">
<col align="left" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
</colgroup>
<thead>
<tr>
<td align="left" rowspan="1" colspan="1">Source</td>
<td align="left" rowspan="1" colspan="1">Target</td>
<td align="left" rowspan="1" colspan="1">0–150 ms</td>
<td align="left" rowspan="1" colspan="1">150–300 ms</td>
<td align="left" rowspan="1" colspan="1">300–450 ms</td>
</tr>
</thead>
<tbody>
<tr>
<td align="left" rowspan="1" colspan="1">SPL</td>
<td align="left" rowspan="1" colspan="1">IFG left</td>
<td align="left" rowspan="1" colspan="1">5.00</td>
<td align="left" rowspan="1" colspan="1">-</td>
<td align="left" rowspan="1" colspan="1">5.50</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">SPL</td>
<td align="left" rowspan="1" colspan="1">cITG</td>
<td align="left" rowspan="1" colspan="1">-</td>
<td align="left" rowspan="1" colspan="1">-</td>
<td align="left" rowspan="1" colspan="1">5.00</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">cITG</td>
<td align="left" rowspan="1" colspan="1">IFG left</td>
<td align="left" rowspan="1" colspan="1">5.00</td>
<td align="left" rowspan="1" colspan="1">5.00</td>
<td align="left" rowspan="1" colspan="1">5.00</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">cITG</td>
<td align="left" rowspan="1" colspan="1">FFA</td>
<td align="left" rowspan="1" colspan="1">5.00</td>
<td align="left" rowspan="1" colspan="1">5.00</td>
<td align="left" rowspan="1" colspan="1">5.00</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">cITG</td>
<td align="left" rowspan="1" colspan="1">SMG</td>
<td align="left" rowspan="1" colspan="1">-</td>
<td align="left" rowspan="1" colspan="1">5.00</td>
<td align="left" rowspan="1" colspan="1">-</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">STG</td>
<td align="left" rowspan="1" colspan="1">aTL right</td>
<td align="left" rowspan="1" colspan="1">5.00</td>
<td align="left" rowspan="1" colspan="1">5.00</td>
<td align="left" rowspan="1" colspan="1">5.00</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">STG</td>
<td align="left" rowspan="1" colspan="1">Premotor</td>
<td align="left" rowspan="1" colspan="1">-</td>
<td align="left" rowspan="1" colspan="1">-</td>
<td align="left" rowspan="1" colspan="1">5.83</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">STG</td>
<td align="left" rowspan="1" colspan="1">FFA</td>
<td align="left" rowspan="1" colspan="1">-</td>
<td align="left" rowspan="1" colspan="1">5.50</td>
<td align="left" rowspan="1" colspan="1">-</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">aTL right</td>
<td align="left" rowspan="1" colspan="1">STG</td>
<td align="left" rowspan="1" colspan="1">5.00</td>
<td align="left" rowspan="1" colspan="1">5.00</td>
<td align="left" rowspan="1" colspan="1">5.00</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">aTL right</td>
<td align="left" rowspan="1" colspan="1">Premotor</td>
<td align="left" rowspan="1" colspan="1">5.00</td>
<td align="left" rowspan="1" colspan="1">5.60</td>
<td align="left" rowspan="1" colspan="1">-</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">SMG</td>
<td align="left" rowspan="1" colspan="1">SPL</td>
<td align="left" rowspan="1" colspan="1">5.00</td>
<td align="left" rowspan="1" colspan="1">-</td>
<td align="left" rowspan="1" colspan="1">-</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">SMG</td>
<td align="left" rowspan="1" colspan="1">V1</td>
<td align="left" rowspan="1" colspan="1">5.00</td>
<td align="left" rowspan="1" colspan="1">-</td>
<td align="left" rowspan="1" colspan="1">-</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">SMG</td>
<td align="left" rowspan="1" colspan="1">IFG left</td>
<td align="left" rowspan="1" colspan="1">-</td>
<td align="left" rowspan="1" colspan="1">5.20</td>
<td align="left" rowspan="1" colspan="1">-</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">SMG</td>
<td align="left" rowspan="1" colspan="1">FFA</td>
<td align="left" rowspan="1" colspan="1">-</td>
<td align="left" rowspan="1" colspan="1">5.22</td>
<td align="left" rowspan="1" colspan="1">5.20</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">OFC</td>
<td align="left" rowspan="1" colspan="1">IFG left</td>
<td align="left" rowspan="1" colspan="1">5.18</td>
<td align="left" rowspan="1" colspan="1">5.00</td>
<td align="left" rowspan="1" colspan="1">5.00</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">OFC</td>
<td align="left" rowspan="1" colspan="1">FFA</td>
<td align="left" rowspan="1" colspan="1">-</td>
<td align="left" rowspan="1" colspan="1">5.00</td>
<td align="left" rowspan="1" colspan="1">5.20</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">MiFG</td>
<td align="left" rowspan="1" colspan="1">IFG right</td>
<td align="left" rowspan="1" colspan="1">5.00</td>
<td align="left" rowspan="1" colspan="1">5.00</td>
<td align="left" rowspan="1" colspan="1">5.00</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">MiFG</td>
<td align="left" rowspan="1" colspan="1">Premotor</td>
<td align="left" rowspan="1" colspan="1">5.00</td>
<td align="left" rowspan="1" colspan="1">5.00</td>
<td align="left" rowspan="1" colspan="1">5.00</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">IFG right</td>
<td align="left" rowspan="1" colspan="1">MiFG</td>
<td align="left" rowspan="1" colspan="1">5.00</td>
<td align="left" rowspan="1" colspan="1">5.00</td>
<td align="left" rowspan="1" colspan="1">5.00</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">IFG right</td>
<td align="left" rowspan="1" colspan="1">Premotor</td>
<td align="left" rowspan="1" colspan="1">5.00</td>
<td align="left" rowspan="1" colspan="1">5.00</td>
<td align="left" rowspan="1" colspan="1">5.00</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">IFG left</td>
<td align="left" rowspan="1" colspan="1">SPL</td>
<td align="left" rowspan="1" colspan="1">5.00</td>
<td align="left" rowspan="1" colspan="1">5.00</td>
<td align="left" rowspan="1" colspan="1">-</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">IFG left</td>
<td align="left" rowspan="1" colspan="1">cITG</td>
<td align="left" rowspan="1" colspan="1">5.00</td>
<td align="left" rowspan="1" colspan="1">-</td>
<td align="left" rowspan="1" colspan="1">5.00</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">IFG left</td>
<td align="left" rowspan="1" colspan="1">SMG</td>
<td align="left" rowspan="1" colspan="1">5.00</td>
<td align="left" rowspan="1" colspan="1">5.40</td>
<td align="left" rowspan="1" colspan="1">5.00</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">IFG left</td>
<td align="left" rowspan="1" colspan="1">OFC</td>
<td align="left" rowspan="1" colspan="1">5.00</td>
<td align="left" rowspan="1" colspan="1">5.00</td>
<td align="left" rowspan="1" colspan="1">5.00</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">IFG left</td>
<td align="left" rowspan="1" colspan="1">FFA</td>
<td align="left" rowspan="1" colspan="1">5.00</td>
<td align="left" rowspan="1" colspan="1">5.00</td>
<td align="left" rowspan="1" colspan="1">5.00</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">FFA</td>
<td align="left" rowspan="1" colspan="1">cITG</td>
<td align="left" rowspan="1" colspan="1">5.00</td>
<td align="left" rowspan="1" colspan="1">5.00</td>
<td align="left" rowspan="1" colspan="1">5.22</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">FFA</td>
<td align="left" rowspan="1" colspan="1">OFC</td>
<td align="left" rowspan="1" colspan="1">5.00</td>
<td align="left" rowspan="1" colspan="1">-</td>
<td align="left" rowspan="1" colspan="1">-</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">FFA</td>
<td align="left" rowspan="1" colspan="1">IFG left</td>
<td align="left" rowspan="1" colspan="1">5.00</td>
<td align="left" rowspan="1" colspan="1">-</td>
<td align="left" rowspan="1" colspan="1">-</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">FFA</td>
<td align="left" rowspan="1" colspan="1">SMG</td>
<td align="left" rowspan="1" colspan="1">-</td>
<td align="left" rowspan="1" colspan="1">5.00</td>
<td align="left" rowspan="1" colspan="1">-</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">V1</td>
<td align="left" rowspan="1" colspan="1">cITG</td>
<td align="left" rowspan="1" colspan="1">5.25</td>
<td align="left" rowspan="1" colspan="1">-</td>
<td align="left" rowspan="1" colspan="1">5.25</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">V1</td>
<td align="left" rowspan="1" colspan="1">SMG</td>
<td align="left" rowspan="1" colspan="1">-</td>
<td align="left" rowspan="1" colspan="1">-</td>
<td align="left" rowspan="1" colspan="1">5.00</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Premotor</td>
<td align="left" rowspan="1" colspan="1">STG</td>
<td align="left" rowspan="1" colspan="1">5.00</td>
<td align="left" rowspan="1" colspan="1">-</td>
<td align="left" rowspan="1" colspan="1">-</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Premotor</td>
<td align="left" rowspan="1" colspan="1">MiFG</td>
<td align="left" rowspan="1" colspan="1">5.00</td>
<td align="left" rowspan="1" colspan="1">5.00</td>
<td align="left" rowspan="1" colspan="1">5.00</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Premotor</td>
<td align="left" rowspan="1" colspan="1">IFG right</td>
<td align="left" rowspan="1" colspan="1">5.00</td>
<td align="left" rowspan="1" colspan="1">5.00</td>
<td align="left" rowspan="1" colspan="1">5.00</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Cing</td>
<td align="left" rowspan="1" colspan="1">STG</td>
<td align="left" rowspan="1" colspan="1">5.25</td>
<td align="left" rowspan="1" colspan="1">-</td>
<td align="left" rowspan="1" colspan="1">-</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Cing</td>
<td align="left" rowspan="1" colspan="1">FFA</td>
<td align="left" rowspan="1" colspan="1">-</td>
<td align="left" rowspan="1" colspan="1">-</td>
<td align="left" rowspan="1" colspan="1">5.67</td>
</tr>
</tbody>
</table>
</alternatives>
<table-wrap-foot>
<fn id="nt101">
<label></label>
<p>Means of reconstructed interaction delays (in ms) for significant information transfer in three analysis windows. Information transfer delays were scanned in steps of 2 ms, from 5 to 17 ms. Fractional numbers arise from averaging over subjects.</p>
</fn>
</table-wrap-foot>
</table-wrap>
</sec>
</sec>
</sec>
<sec sec-type="discussion" id="s5">
<title>Discussion</title>
<sec id="s5a">
<title>Efficient transfer entropy estimation from an ensemble of time series</title>
<p>We presented an efficient implementation of the ensemble method for TE estimation proposed by
<xref rid="pone.0102833-GomezHerrero1" ref-type="bibr">[55]</xref>
. As laid out in the introduction, estimating TE from an ensemble of data makes it possible to analyze information transfer between non-stationary time series and enables TE estimation in a time-resolved fashion. This is especially relevant to neuroscientific experiments, where rapidly changing (and thus non-stationary) neural activity is believed to reflect neural information processing. Until now, however, the ensemble method has remained out of reach for applications in neuroscience because of its computational cost. Only with parallelization on a GPU, as presented here, does the ensemble method become a viable tool for the analysis of neural data. Our approach thus makes it possible for the first time to efficiently analyze information transfer between neural time series on short time scales; this accommodates the non-stationarity of the underlying processes and makes a time-resolved estimation of TE possible. To facilitate its use, the ensemble method has been implemented as part of the open source toolbox TRENTOOL (version 3.0).</p>
<p>Even though we will focus on neural data when discussing applications of the ensemble method for TE estimation below, this approach is well suited for applications in other fields. For example, TE as defined in
<xref rid="pone.0102833-Schreiber1" ref-type="bibr">[14]</xref>
has been applied in physiology
<xref rid="pone.0102833-Faes2" ref-type="bibr">[42]</xref>
–<xref rid="pone.0102833-Faes4" ref-type="bibr">[44]</xref>
, climatology
<xref rid="pone.0102833-Verdes1" ref-type="bibr">[94]</xref>
,
<xref rid="pone.0102833-Pompe1" ref-type="bibr">[95]</xref>
, financial time series analysis
<xref rid="pone.0102833-Kwon1" ref-type="bibr">[45]</xref>
,
<xref rid="pone.0102833-Marschinski1" ref-type="bibr">[96]</xref>
, and in the theory of cellular automata
<xref rid="pone.0102833-Lizier8" ref-type="bibr">[48]</xref>
. Large datasets from these and other fields may now be easily analyzed with the presented approach and its implementation in TRENTOOL.</p>
</sec>
<sec id="s5b">
<title>Notes on the practical application of the ensemble method for TE estimation</title>
<sec id="s5b1">
<title>Applicability to simulated and real world experimental data</title>
<p>To validate the proposed implementation of the ensemble method, we applied it to simulated data as well as MEG recordings. For simulated data, information transfer could reliably be reconstructed despite the non-stationarity in the underlying generating processes. For MEG data the obtained speed-up was large enough to analyze these data in practical time. Information transfer reconstructed in a time-resolved fashion from the MEG source data was in line with findings by
<xref rid="pone.0102833-Grtzner1" ref-type="bibr">[86]</xref>
,
<xref rid="pone.0102833-Bar1" ref-type="bibr">[92]</xref>
,
<xref rid="pone.0102833-Cavanagh1" ref-type="bibr">[93]</xref>
, as discussed below.</p>
<p>Note that even though our proposed implementation of the ensemble method reduces analysis times considerably, the estimation of TE from neural time series is still time consuming relative to other measures of connectivity. For the example MEG data set presented in this paper, TE estimation for one subject and one analysis window took 93 hours on average (when scanning over seven values of the assumed information transfer delay
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e332.jpg"></inline-graphic>
</inline-formula>
and reconstructing TE for all possible combinations of 14 sources). Thus, for 15 subjects with three analysis windows each, the whole analysis would take approximately six months when carried out serially on one computer equipped with a modern GPU (e.g. an NVIDIA GTX Titan). This time may, however, be reduced by parallelizing the analysis over subjects and analysis windows on multiple GPUs, as was done for this study.</p>
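The runtime arithmetic above can be made explicit (a small sketch; the per-run figure of 93 hours is taken from the text, everything else is illustrative):

```python
HOURS_PER_RUN = 93          # one subject, one analysis window (from the text)
N_SUBJECTS, N_WINDOWS = 15, 3

# total serial wall-clock time in days, roughly six months
serial_days = HOURS_PER_RUN * N_SUBJECTS * N_WINDOWS / 24

def wall_clock_days(n_gpus):
    """Approximate wall-clock time when runs are distributed over n_gpus GPUs."""
    return serial_days / n_gpus
```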
</sec>
<sec id="s5b2">
<title>Available data and choice of window size</title>
<p>As available data is often limited in neuroscience and other real-world applications, the user has to make sure that enough data enters the analysis for a reliable estimation of TE. In the proposed implementation of the ensemble method, the amount of data entering the estimation depends directly on the size of the chosen analysis window and the number of available repetitions of the process under analysis. Furthermore, the choice of embedding parameters determines how many embedded points can be obtained from a scalar time series. When estimating TE from neural data, we therefore recommend controlling the amount of data per analysis window that remains after embedding, and designing experiments accordingly. For example, the presented MEG data set was sampled at 600 Hz, with 137 repetitions of the stimulus on average, which after embedding led to 8800 data points per analysis window of 150 ms. In comparison, for simulated data TE was reconstructed correctly for 10000 data points and more. Thus, in our example MEG data set, shorter analysis windows would not have been advisable, as too little data per window would have been available for reliable TE estimation. If shorter analysis windows are necessary, they have to be counterbalanced by a higher number of experimental repetitions.</p>
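A back-of-the-envelope calculator for this design consideration might look as follows (a hypothetical helper; the study's figure of 8800 points additionally reflects the number of repetitions actually usable per subject after artifact rejection):

```python
def points_per_window(window_ms, fs_hz, n_repetitions):
    """Embedded data points available in one analysis window.

    Under the ensemble method every sample inside the window can be embedded
    (its history is drawn from the recording preceding the window), so the
    count is the samples per window times the number of usable repetitions.
    """
    return int(window_ms / 1000 * fs_hz) * n_repetitions
```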
<p>Thus, the choice of an appropriate analysis window is crucial to guarantee reliable TE estimation while still resolving the temporal dynamics under investigation. A further data-limiting factor is the need for an appropriate embedding of the scalar time series. To embed the time series at a given point
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e333.jpg"></inline-graphic>
</inline-formula>
, enough history for this sample (embedding dimension times the embedding delay in sample points) has to be recorded. We call this epoch the
<italic>embedding window</italic>
. The need for an appropriate embedding thus constitutes another constraint on the data necessary for TE estimation. Choosing an optimal embedding dimension (e.g. via the Ragwitz criterion
<xref rid="pone.0102833-Ragwitz1" ref-type="bibr">[71]</xref>
) is crucial, as larger-than-optimal embedding dimensions waste available data and may lead to a weaker detection rate in noisy data
<xref rid="pone.0102833-Lindner1" ref-type="bibr">[56]</xref>
.</p>
<p>Note that the embedding window should not be confused with the analysis window. The analysis window strictly describes the data points for which neighbor statistics enter TE estimation – where neighbor counts may be averaged over an epoch
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e334.jpg"></inline-graphic>
</inline-formula>
or may come from a single point in time
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e335.jpg"></inline-graphic>
</inline-formula>
only. The embedding window, however, describes the data points that enter the embedding of a single point in time. Thus, the temporal resolution of TE analysis may still be in single time steps
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e336.jpg"></inline-graphic>
</inline-formula>
(i.e. only one time point entering the analysis), even though the embedding window spans several points in time that contain the history for this single point.</p>
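The distinction can be made concrete with a generic Takens-style delay embedding (a sketch, not TRENTOOL's actual implementation): each embedded point corresponds to one time step of the analysis, while its coordinates span the preceding embedding window.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Takens-style delay embedding of a scalar series x.

    Row t holds [x[t], x[t - tau], ..., x[t - (dim - 1) * tau]], i.e. the
    embedding window for one time point spans (dim - 1) * tau + 1 samples.
    """
    x = np.asarray(x)
    span = (dim - 1) * tau
    rows = [x[span - d * tau: len(x) - d * tau] for d in range(dim)]
    return np.stack(rows, axis=1)
```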
</sec>
</sec>
<sec id="s5c">
<title>Repeatability of neuronal processes</title>
<p>When applying the ensemble method to estimate TE from neural recordings, we treat experimental repetitions as multiple realizations of the neural processes under investigation. In doing so, we assume stationarity of these processes
<italic>over repetitions</italic>
. We claim that in most cases this assumption of stationarity is justified for processes concerned with the processing of experimental stimuli, and that the assumption also holds for stimulus-independent processes that contribute to neural recordings. We will first present the different contributions to neural recordings and subsequently discuss their individual statistical properties, i.e. their stationarity over repetitions. Note that the term stationarity refers to the stability of the
<italic>probability distribution underlying</italic>
the observed realizations over repetitions and does not require individual realizations to be identical; stationarity does not preclude variability in observed realizations, but rather implies a variance in the observations that reflects the variance of the underlying probability distribution.</p>
<p>Contributions to neural recordings may either be stimulus-related (
<italic>event-related activity</italic>
) or stimulus-independent (
<italic>spontaneous ongoing activity</italic>
). Within the category of event-related activity, contributions can be further distinguished into phase-locked and non phase-locked contributions (the latter is commonly called
<italic>induced activity</italic>
). Phase-locked activity has a fixed polarity and latency with respect to the stimulus and – on averaging over repetitions – contributes to an event-related potential or field (ERP/F). Phase-locked activity is further distinguished into two types of contributions that are discussed as mechanisms of ERP/F generation (e.g.
<xref rid="pone.0102833-Sauseng1" ref-type="bibr">[97]</xref>
–<xref rid="pone.0102833-Shah1" ref-type="bibr">[99]</xref>
): (1)
<italic>additive evoked contributions</italic>
, i.e. neural activity that adds to the ongoing activity and represents the stereotypical response of a neural population to the presented stimulus in each repetition
<xref rid="pone.0102833-Jervis1" ref-type="bibr">[100]</xref>
–<xref rid="pone.0102833-Schroeder1" ref-type="bibr">[102]</xref>
; (2)
<italic>phase-reset contributions</italic>
, i.e. the phase of ongoing activity is reset by the stimulus, such that phase-aligned signals no longer cancel each other out on averaging over repetitions
<xref rid="pone.0102833-Sayers1" ref-type="bibr">[103]</xref>
–<xref rid="pone.0102833-Klimesch1" ref-type="bibr">[106]</xref>
. In contrast to these two subtypes of phase-locked activity, induced activity is event-related activity that is not phase-locked to the stimulus: latency and polarity vary randomly from repetition to repetition, so that induced activity averages out over repetitions.</p>
<p>We therefore have to consider four types of contributions to neural recordings: (1) additive evoked contributions, (2) phase-reset contributions, (3) induced contributions, and (4) spontaneous ongoing contributions, the last being stimulus-independent. Stationarity can be assumed for all these contributions if no learning effects occur during the experiment. Learning effects may lead to slow drifts, i.e. changing means and variances, in the recorded signal. Such learning effects may easily be tested for by comparing the first and second halves of the recorded repetitions with respect to equal variances and means. If variances and means are equal, learning effects can most likely be excluded. Empirically, the stationarity assumption, specifically of phase-locked contributions, can also be verified using a modified independent component analysis recently proposed in
<xref rid="pone.0102833-Turi1" ref-type="bibr">[107]</xref>
.</p>
<p>To sum up the statistical properties of different contributions to neural data and their relevance for using an ensemble approach to TE estimation, we conclude that all contributions to neural recordings can be considered stationary over repetitions by default. Non-stationarity over repetitions will only be a problem in paradigms that introduce (slow) drifts or trends in the recorded signal, for example by facilitating learning during the experiment. Testing for drifts may be done by comparing mean and variance in a split-half analysis.</p>
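Such a split-half check could be implemented as follows (a sketch; the choice of Welch's t-test for means and Levene's test for variances is ours, not prescribed by the text):

```python
import numpy as np
from scipy import stats

def split_half_check(repetitions, alpha=0.05):
    """Compare first vs. second half of repetitions for equal means and variances.

    repetitions: array of shape (n_repetitions, n_samples). Returns True if
    neither Welch's t-test (means) nor Levene's test (variances) rejects,
    i.e. no evidence for slow drifts such as learning effects.
    """
    half = len(repetitions) // 2
    a, b = np.ravel(repetitions[:half]), np.ravel(repetitions[half:])
    _, p_mean = stats.ttest_ind(a, b, equal_var=False)
    _, p_var = stats.levene(a, b)
    return bool(p_mean > alpha and p_var > alpha)
```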
</sec>
<sec id="s5d">
<title>Relation of the ensemble method to local information dynamics</title>
<p>We will now discuss the relation of the ensemble approach suggested here to the local transfer entropy (LTE) approach of Lizier
<xref rid="pone.0102833-Lizier1" ref-type="bibr">[4]</xref>
,
<xref rid="pone.0102833-Lizier5" ref-type="bibr">[15]</xref>
. This may be useful as both approaches at first glance seem to have a similar goal, i.e. assessing information transfer more locally in time. As we will show, the approaches differ in what quantities they localize. From this difference it also follows that they can (and should) be combined when necessary.</p>
<p>In detail, the ensemble approach used here tries to exploit cyclostationarity or repeatability of random processes to obtain multiple PDFs from the different (quasi-)stationary parts of the repeated process cycle, or a PDF for each step in time from replications of a process, respectively. In contrast, local information dynamics localizes information transfer in time (and space) given the PDF of a
<italic>stationary</italic>
process.</p>
<p>The local information dynamics approach to information transfer computes information transfer for stationary random processes from their joint and marginal PDFs for each process step, thereby fully localizing information transfer in time. The quantity proposed for this purpose is the LTE
<xref rid="pone.0102833-Lizier5" ref-type="bibr">[15]</xref>
:
<disp-formula id="pone.0102833.e337">
<graphic xlink:href="pone.0102833.e337.jpg" position="anchor" orientation="portrait"></graphic>
<label>(17)</label>
</disp-formula>
</p>
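Eq. (17) is rendered as an image above; in the standard notation of local information dynamics it presumably reads as follows (a hedged reconstruction, with $u$ denoting the information transfer delay and $k$, $l$ the embedding dimensions of target and source):

```latex
\operatorname{lte}_{X \rightarrow Y}(t) =
\log \frac{p\left(y_t \mid \mathbf{y}_{t-1}^{(k)},\, \mathbf{x}_{t-u}^{(l)}\right)}
          {p\left(y_t \mid \mathbf{y}_{t-1}^{(k)}\right)}
```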
<p>LTE relates to TE in the same way Shannon information relates to Shannon entropy – by means of taking an expected value under the common PDF
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e338.jpg"></inline-graphic>
</inline-formula>
of the collection of random variables
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e339.jpg"></inline-graphic>
</inline-formula>
that form the processes
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e340.jpg"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e341.jpg"></inline-graphic>
</inline-formula>
, which exchange information. Stationarity here guarantees that all the random variables
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e342.jpg"></inline-graphic>
</inline-formula>
(
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e343.jpg"></inline-graphic>
</inline-formula>
) have a common PDF (as the PDF is not allowed to change over time):
<disp-formula id="pone.0102833.e344">
<graphic xlink:href="pone.0102833.e344.jpg" position="anchor" orientation="portrait"></graphic>
<label>(18)</label>
</disp-formula>
</p>
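<p>The relation above, TE as the expected value of LTE, can be made concrete for discrete data. The following sketch is a plug-in estimator for binary sequences with history length 1; it is illustrative only, not the Kraskov-type estimator used in this work, and the function name and test process are our own.</p>

```python
import numpy as np
from collections import Counter

def local_te(x, y, base=2):
    """Plug-in local transfer entropy x -> y for binary sequences.

    Illustrative simplification: history length 1 for source and target,
    histogram probabilities. Returns one local value per transition; the
    mean of these values is the plug-in TE estimate (cf. Eqs. 17, 18).
    """
    triples = list(zip(y[1:], y[:-1], x[:-1]))  # (y_next, y_past, x_past)
    p_xyz = Counter(triples)
    p_yz = Counter((a, b) for a, b, _ in triples)
    p_zc = Counter((b, c) for _, b, c in triples)
    p_z = Counter(b for _, b, _ in triples)
    out = []
    for a, b, c in triples:
        # log p(y_next | y_past, x_past) / p(y_next | y_past)
        num = p_xyz[(a, b, c)] / p_zc[(b, c)]
        den = p_yz[(a, b)] / p_z[b]
        out.append(np.log(num / den) / np.log(base))
    return np.array(out)

rng = np.random.default_rng(0)
x = rng.integers(0, 2, 5000)
y = np.empty_like(x)
y[0] = 0
y[1:] = x[:-1]            # y copies x with lag 1
lte = local_te(x, y)
print(lte.mean())         # plug-in TE estimate, close to 1 bit
```

<p>Averaging the local values recovers the expected 1 bit of transfer, while the individual values localize the transfer to single transitions.</p>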
<p>In contrast, the approach presented here does not assume that the random processes
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e345.jpg"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e346.jpg"></inline-graphic>
</inline-formula>
are stationary, but rather that either replications of the process can be obtained, or that the process is cyclostationary. Under these constraints a
<italic>local</italic>
PDF can be obtained. The events drawn from this PDF may then be analyzed in terms of their average information transfer, i.e. using TE as presented here, or by inspecting them individually, computing LTE for each event. In this sense, the approach suggested here is aimed at extracting the proper local PDFs, while local information dynamics comes into play once these proper PDFs have been obtained. We are certain that both approaches can be fruitfully combined in future studies.</p>
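<p>This division of labor can be sketched in code: ensemble pooling collects observations at a fixed time point across replications, so a time-local PDF, and any statistic derived from it, may change from step to step. The toy example below uses a variance statistic in place of TE and a simulated non-stationary process; it illustrates the pooling idea only.</p>

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_time = 200, 100
# non-stationary process: the standard deviation grows linearly in time
scale = np.linspace(1.0, 3.0, n_time)
data = rng.standard_normal((n_trials, n_time)) * scale

def local_std(ensemble, t):
    """Estimate a time-local statistic by pooling over trials at time t."""
    return ensemble[:, t].std()

# pooling over time (stationarity assumption) would mix these regimes;
# pooling over trials resolves the change
print(local_std(data, 5), local_std(data, 95))
```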
</sec>
<sec id="s5e">
<title>Relation of the ensemble method to other measures of connectivity for non-stationary data</title>
<p>Linear Granger causality (GC) is – as has been shown recently by
<xref rid="pone.0102833-Barnett1" ref-type="bibr">[26]</xref>
– equivalent to TE for variables with a
<italic>jointly</italic>
Gaussian distribution. Thus, for data that exhibit such a distribution, information transfer may be analyzed more easily within the GC framework. Similar to the ensemble method for TE estimation, extensions to GC estimation have been proposed that deal with non-stationary data by fitting time-variant parameters. For example, Möller and colleagues presented an approach that fitted multivariate autoregressive models (MVAR) with time-dependent parameters to an ensemble of EEG signals
<xref rid="pone.0102833-Mller1" ref-type="bibr">[108]</xref>
. Similar methods, which fit time-dependent parameters of autoregressive models to data ensembles, were used by
<xref rid="pone.0102833-Ding1" ref-type="bibr">[109]</xref>
and
<xref rid="pone.0102833-Hesse1" ref-type="bibr">[110]</xref>
. A different approach to dealing with non-stationarity was taken by Leistritz and colleagues
<xref rid="pone.0102833-Leistritz1" ref-type="bibr">[111]</xref>
. These authors proposed to use self-exciting threshold autoregressive (SETAR) models to model neural time series within a GC framework. SETAR models extend traditional AR models by introducing state-dependent model parameters and allow for the modeling of transient components in the signal.</p>
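<p>The ensemble idea behind such time-variant model fitting can be sketched for a univariate toy process: at each time step, the AR coefficient is estimated across trials rather than across time and may therefore drift with t. This is an illustration only, not the multivariate estimators of [108]–[110].</p>

```python
import numpy as np

rng = np.random.default_rng(3)
n_trials, n_time = 500, 60
a_true = np.linspace(0.1, 0.8, n_time)     # slowly drifting AR coefficient
y = np.zeros((n_trials, n_time))
y[:, 0] = rng.standard_normal(n_trials)
for t in range(1, n_time):
    y[:, t] = a_true[t] * y[:, t - 1] + rng.standard_normal(n_trials)

# least-squares estimate of a(t) from the trial ensemble at each time step
a_hat = np.array([
    (y[:, t - 1] @ y[:, t]) / (y[:, t - 1] @ y[:, t - 1])
    for t in range(1, n_time)
])
print(np.abs(a_hat - a_true[1:]).mean())   # small mean estimation error
```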
<p>The presented methods for the estimation of time-variant linear GC may offer a computationally less expensive approach to the estimation of information transfer from an ensemble of data. However, linear GC is equivalent to TE, in the sense of fully recovering information transfer, only for data with a
<italic>jointly</italic>
Gaussian distribution. For non-Gaussian data, linear GC may fail to capture higher-order interactions. As neural data are most likely non-Gaussian, TE may be advantageous for analyzing information transfer in this type of data. The non-Gaussian nature of neural data can, for example, be seen when comparing brain electrical source signals obtained from physical inverse methods to the time courses of corresponding ICA components
<xref rid="pone.0102833-Wibral6" ref-type="bibr">[112]</xref>
. Here, ICA components and extracted brain signals closely match. Given that ICA components are as non-Gaussian as possible (by definition of the ICA), we can infer that brain signals are very likely non-Gaussian.</p>
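<p>The equivalence result of [26] can be checked numerically: for jointly Gaussian data, GC computed from OLS residual variances equals twice the TE obtained from Gaussian entropies (log-determinants of sub-covariances), in nats. The simulated linearly coupled process and variable names below are our own illustration.</p>

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
x = rng.standard_normal(n)
noise = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):               # y is linearly driven by past x
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + noise[t]

# TE from Gaussian entropies of (y_t, y_past, x_past)
z = np.column_stack([y[1:], y[:-1], x[:-1]])
S = np.cov(z, rowvar=False)

def logdet(idx):
    """Log-determinant of a sub-covariance matrix."""
    return np.linalg.slogdet(S[np.ix_(idx, idx)])[1]

te = 0.5 * (logdet([0, 1]) + logdet([1, 2]) - logdet([1]) - logdet([0, 1, 2]))

# GC from residual variances of the restricted vs. the full regression
def resid_var(target, regressors):
    A = np.column_stack([np.ones(len(target))] + regressors)
    beta, *_ = np.linalg.lstsq(A, target, rcond=None)
    return np.var(target - A @ beta)

gc = np.log(resid_var(y[1:], [y[:-1]]) / resid_var(y[1:], [y[:-1], x[:-1]]))
print(gc / te)  # close to 2
```

<p>For the sample covariance and OLS with an intercept, the two expressions coincide up to numerical precision, mirroring the analytical identity for jointly Gaussian variables.</p>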
<p>We also note that a non-stationary measure of
<italic>coupling</italic>
between dynamic systems building on repetitions of time series and next-neighbor statistics was suggested by Andrzejak and colleagues
<xref rid="pone.0102833-Andrzejak1" ref-type="bibr">[113]</xref>
. The key difference between their approach and the ensemble method suggested here is that their method does not explicitly take the previous states of the target time series into account. Hence, their measure is not (and was not intended to be) a measure of information transfer (see
<xref rid="pone.0102833-Wibral4" ref-type="bibr">[53]</xref>
for details on why a measure of information transfer needs to include the past state of the target time series, and
<xref rid="pone.0102833-Lizier8" ref-type="bibr">[48]</xref>
for the difference between measures of (causal) coupling and information transfer). In addition, their method explicitly tries to determine the
<italic>direction</italic>
of coupling between two systems. This implies that there should be a dominant direction of coupling in order to obtain meaningful results. Transfer entropy, in contrast, easily separates and quantifies both directions of information transfer related to bidirectional coupling, under some mild conditions on entropy production in each of the two coupled systems
<xref rid="pone.0102833-Wibral4" ref-type="bibr">[53]</xref>
.</p>
</sec>
<sec id="s5f">
<title>Relation of the ensemble method to the direct method for the calculation of mutual information of Strong and colleagues</title>
<p>The ensemble method proposed here shares the use of replications (or trials) with the so-called ‘direct method’ of Strong and colleagues
<xref rid="pone.0102833-Strong1" ref-type="bibr">[114]</xref>
. The authors introduced this method to calculate mutual information between a controlled stimulus set and neural responses. Similarities also exist in the sense that the surrogate data method for statistical evaluation used in our ensemble method builds on trial-to-trial variability, as does Strong's method (by looking at intrinsic variability versus variability driven by stimulus changes).</p>
<p>However, the two methods differ conceptually on two accounts: First, the quantity estimated is different – symmetric mutual information in Strong's method compared to inherently asymmetric conditional mutual information in the case of TE. Second, the method of Strong and colleagues requires a direct intervention in the source of information (i.e. the stimuli) to work, whereas TE in general is independent of such interventions. This has far reaching consequences for the interpretation of the two measures: The intervention inherent in Strong's method places it somewhat closer to causal measures such as Ay and Polani's causal information flow
<xref rid="pone.0102833-Ay1" ref-type="bibr">[47]</xref>
, whereas intervention-free TE has a clear interpretation as the information transferred in relation to distributed computation
<xref rid="pone.0102833-Lizier8" ref-type="bibr">[48]</xref>
. As a consequence, TE may easily be applied to quantify neural information transfer from one neuron or brain area to another even under
<italic>constant</italic>
stimulus conditions. In contrast, using Strong's method inside a neural system in this way would require precisely setting the activity of the source neuron or brain area, something that may often be difficult to do.</p>
</sec>
<sec id="s5g">
<title>Application of the proposed implementation to other dependency measures</title>
<p>The use of ensemble pooling of observations for the estimation of time-resolved dependency measures has been proposed in a variety of frameworks. For example, Andrzejak and colleagues
<xref rid="pone.0102833-Andrzejak1" ref-type="bibr">[113]</xref>
use ensemble pooling of delay-embedded time series in combination with nearest neighbor statistics as a general approach to the estimation of arbitrary non-linear dependency measures. However, the practical application of ensemble pooling and nearest neighbor statistics together with the necessary generation of a sufficient amount of surrogate data sets (typically
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e347.jpg"></inline-graphic>
</inline-formula>
1000 in neuroscience applications where correction for multiple comparisons is necessary) has so far been hindered by its high computational cost. By introducing a GPU algorithm for nearest neighbor searches, we provide an implementation of the ensemble method that makes its practical application feasible. Note that even though we use ensemble pooling and GPU search algorithms specifically to estimate TE, the presented implementation may easily be adapted to other dependency measures that are calculated from (conditional) mutual information estimated from nearest neighbor statistics.</p>
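<p>The computational core of these nearest-neighbor statistics is an all-pairs distance computation that parallelizes trivially over points, which is why a GPU helps. The following minimal CPU sketch uses the maximum (Chebyshev) norm typical for Kraskov-type estimators; it is an illustration, not TRENTOOL's CUDA kernel.</p>

```python
import numpy as np

def knn_distances(points, k):
    """Chebyshev distance from each point to its k-th nearest neighbor.

    Brute force over all pairs; each row of the distance matrix can be
    processed independently, which is what a GPU implementation exploits.
    """
    diff = np.abs(points[:, None, :] - points[None, :, :])  # (n, n, d)
    dist = diff.max(axis=-1)                                # max norm
    np.fill_diagonal(dist, np.inf)                          # no self-matches
    return np.sort(dist, axis=1)[:, k - 1]

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [5.0, 5.0]])
print(knn_distances(pts, 1))  # [1. 1. 2. 5.]
```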
</sec>
<sec id="s5h">
<title>Application to MEG source activity in a perceptual closure task</title>
<p>Application of the ensemble-based TE estimation to MEG source activities revealed a time-varying pattern of information transfer, as expected in the non-stationary setting of the visual task. While a full discussion of the revealed information transfer pattern is beyond the scope of this study, we point out individual connections transferring information that underline the validity of our results. Notable connections in the first time window transfer information from the early visual cortices (V1) to the orbitofrontal cortex (OFC) – in line with earlier findings by Bar and colleagues
<xref rid="pone.0102833-Bar1" ref-type="bibr">[92]</xref>
, which suggest a role of the OFC in early visual scene segmentation and gist perception. Another brain area receiving information from early visual cortex is the caudal inferior temporal gyrus (cITG)
<xref rid="pone.0102833-Georgieva1" ref-type="bibr">[115]</xref>
, an area responsible for the processing of shape-from-shading information, which is thought to be essential for the perception of Mooney stimuli such as those used here. Both of these areas, OFC and cITG, exchange information at later stages of processing with the fusiform face area (FFA), which is essential for the processing of faces
<xref rid="pone.0102833-Kanwisher1" ref-type="bibr">[116]</xref>
–
<xref rid="pone.0102833-McKeeff1" ref-type="bibr">[118]</xref>
, and is thereby expected to receive information from other areas in this task. Indeed, the FFA seems to be an essential hub in the task-related network investigated in this study and receives increasing amounts of incoming information transfer as the task progresses in time. This is in line with the fact that the most pronounced task-related differences in FFA activity were previously found at latencies
<inline-formula>
<inline-graphic xlink:href="pone.0102833.e348.jpg"></inline-graphic>
</inline-formula>
200 ms
<xref rid="pone.0102833-Grtzner1" ref-type="bibr">[86]</xref>
.</p>
<p>Our data also clearly show great variability in the information transfer patterns across subjects, which we relate to the limited amount of data per subject rather than to true variation. Moreover, future investigations will have to show whether a more fine-grained temporal segmentation of the neural information processing in this task is possible and whether it will provide additional insights.</p>
</sec>
<sec id="s5i">
<title>Conclusion and further directions</title>
<p>We presented an implementation of the ensemble method for TE presented in
<xref rid="pone.0102833-GomezHerrero1" ref-type="bibr">[55]</xref>
, which uses a GPU to handle the computationally most demanding aspects of the analysis. We chose an implementation that is flexible enough to scale well with different experimental designs as well as with future hardware developments. Our implementation was able to successfully reconstruct information transfer in simulated and neural data in a time-resolved fashion. Nearest neighbor searches using a GPU exhibited substantially reduced execution times. The implementation has been made available as part of the open source MATLAB toolbox TRENTOOL
<xref rid="pone.0102833-Lindner1" ref-type="bibr">[56]</xref>
for use with CUDA-enabled GPU devices.</p>
<p>We conclude that the ensemble method in its presented implementation is a suitable tool for the analysis of non-stationary neural time series, enabling this type of analysis for the first time. It may also be applicable in other fields that are concerned with the analysis of information transfer within complex dynamic systems.</p>
</sec>
</sec>
</body>
<back>
<ref-list>
<title>References</title>
<ref id="pone.0102833-Turing1">
<label>1</label>
<mixed-citation publication-type="journal">
<name>
<surname>Turing</surname>
<given-names>AM</given-names>
</name>
(
<year>1936</year>
)
<article-title>On computable numbers, with an application to the Entscheidungsproblem</article-title>
.
<source>Proceedings of the London Mathematical Society</source>
<volume>42</volume>
:
<fpage>230</fpage>
<lpage>265</lpage>
</mixed-citation>
</ref>
<ref id="pone.0102833-Langton1">
<label>2</label>
<mixed-citation publication-type="journal">
<name>
<surname>Langton</surname>
<given-names>CG</given-names>
</name>
(
<year>1990</year>
)
<article-title>Computation at the edge of chaos: Phase transitions and emergent computation</article-title>
.
<source>Physica D: Nonlinear Phenomena</source>
<volume>42</volume>
:
<fpage>12</fpage>
<lpage>37</lpage>
</mixed-citation>
</ref>
<ref id="pone.0102833-Mitchell1">
<label>3</label>
<mixed-citation publication-type="other">Mitchell M (1998) Computation in cellular automata: A selected review. In: Gramß T, Bornholdt S, Groß M, Mitchell M, Pellizzari T, editors, Non-Standard Computation, Weinheim: Wiley-VCH Verlag GmbH & Co. KGaA. pp. 95–140.</mixed-citation>
</ref>
<ref id="pone.0102833-Lizier1">
<label>4</label>
<mixed-citation publication-type="other">Lizier JT (2013) The local information dynamics of distributed computation in complex systems. Springer Theses Series. Berlin/Heidelberg: Springer.</mixed-citation>
</ref>
<ref id="pone.0102833-Wibral1">
<label>5</label>
<mixed-citation publication-type="journal">
<name>
<surname>Wibral</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Lizier</surname>
<given-names>JT</given-names>
</name>
,
<name>
<surname>Vögler</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Priesemann</surname>
<given-names>V</given-names>
</name>
,
<name>
<surname>Galuske</surname>
<given-names>R</given-names>
</name>
(
<year>2014</year>
)
<article-title>Local active information storage as a tool to understand distributed neural information processing</article-title>
.
<source>Front Neuroinform</source>
<volume>8</volume>
:
<fpage>1</fpage>
<pub-id pub-id-type="pmid">24501593</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0102833-Lizier2">
<label>6</label>
<mixed-citation publication-type="journal">
<name>
<surname>Lizier</surname>
<given-names>JT</given-names>
</name>
,
<name>
<surname>Prokopenko</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Zomaya</surname>
<given-names>AY</given-names>
</name>
(
<year>2010</year>
)
<article-title>Information modification and particle collisions in distributed computation</article-title>
.
<source>Chaos</source>
<volume>20</volume>
:
<fpage>037109</fpage>
<lpage>037109</lpage>
<pub-id pub-id-type="pmid">20887075</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0102833-Lizier3">
<label>7</label>
<mixed-citation publication-type="other">Lizier JT, Flecker B, Williams PL (2013) Towards a synergy-based approach to measuring information modification. arXiv preprint arXiv:1303.3440.</mixed-citation>
</ref>
<ref id="pone.0102833-Williams1">
<label>8</label>
<mixed-citation publication-type="other">Williams PL, Beer RD (2010) Nonnegative decomposition of multivariate information. arXiv preprint arXiv:1004.2515.</mixed-citation>
</ref>
<ref id="pone.0102833-Bertschinger1">
<label>9</label>
<mixed-citation publication-type="other">Bertschinger N, Rauh J, Olbrich E, Jost J (2012) Shared information – New insights and problems in decomposing information in complex systems. arXiv preprint arXiv:1210.5902.</mixed-citation>
</ref>
<ref id="pone.0102833-Griffith1">
<label>10</label>
<mixed-citation publication-type="other">Griffith V, Koch C (2012) Quantifying synergistic mutual information. arXiv preprint arXiv:1205.4265.</mixed-citation>
</ref>
<ref id="pone.0102833-Harder1">
<label>11</label>
<mixed-citation publication-type="journal">
<name>
<surname>Harder</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Salge</surname>
<given-names>C</given-names>
</name>
,
<name>
<surname>Polani</surname>
<given-names>D</given-names>
</name>
(
<year>2013</year>
)
<article-title>Bivariate measure of redundant information</article-title>
.
<source>Phys Rev E Stat Nonlin Soft Matter Phys</source>
<volume>87</volume>
:
<fpage>012130</fpage>
<pub-id pub-id-type="pmid">23410306</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0102833-Bertschinger2">
<label>12</label>
<mixed-citation publication-type="journal">
<name>
<surname>Bertschinger</surname>
<given-names>N</given-names>
</name>
,
<name>
<surname>Rauh</surname>
<given-names>J</given-names>
</name>
,
<name>
<surname>Olbrich</surname>
<given-names>E</given-names>
</name>
,
<name>
<surname>Jost</surname>
<given-names>J</given-names>
</name>
,
<name>
<surname>Ay</surname>
<given-names>N</given-names>
</name>
(
<year>2014</year>
)
<article-title>Quantifying unique information</article-title>
.
<source>Entropy</source>
<volume>16</volume>
:
<fpage>2161</fpage>
<lpage>2183</lpage>
</mixed-citation>
</ref>
<ref id="pone.0102833-Lizier4">
<label>13</label>
<mixed-citation publication-type="journal">
<name>
<surname>Lizier</surname>
<given-names>JT</given-names>
</name>
,
<name>
<surname>Prokopenko</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Zomaya</surname>
<given-names>AY</given-names>
</name>
(
<year>2012</year>
)
<article-title>Local measures of information storage in complex distributed computation</article-title>
.
<source>Inform Sciences</source>
<volume>208</volume>
:
<fpage>39</fpage>
<lpage>54</lpage>
</mixed-citation>
</ref>
<ref id="pone.0102833-Schreiber1">
<label>14</label>
<mixed-citation publication-type="journal">
<name>
<surname>Schreiber</surname>
<given-names>T</given-names>
</name>
(
<year>2000</year>
)
<article-title>Measuring information transfer</article-title>
.
<source>Phys Rev Lett</source>
<volume>85</volume>
:
<fpage>461</fpage>
<lpage>464</lpage>
<pub-id pub-id-type="pmid">10991308</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0102833-Lizier5">
<label>15</label>
<mixed-citation publication-type="journal">
<name>
<surname>Lizier</surname>
<given-names>JT</given-names>
</name>
,
<name>
<surname>Prokopenko</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Zomaya</surname>
<given-names>AY</given-names>
</name>
(
<year>2008</year>
)
<article-title>Local information transfer as a spatiotemporal filter for complex systems</article-title>
.
<source>Phys Rev E Stat Nonlin Soft Matter Phys</source>
<volume>77</volume>
:
<fpage>026110</fpage>
<pub-id pub-id-type="pmid">18352093</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0102833-Lizier6">
<label>16</label>
<mixed-citation publication-type="other">Lizier JT (2014) Measuring the dynamics of information processing on a local scale in time and space. In: Wibral M, Vicente R, Lizier JT, editors, Directed Information Measures in Neuroscience, Springer Berlin Heidelberg, Understanding Complex Systems. pp. 161–193.</mixed-citation>
</ref>
<ref id="pone.0102833-Gmez1">
<label>17</label>
<mixed-citation publication-type="journal">
<name>
<surname>Gómez</surname>
<given-names>C</given-names>
</name>
,
<name>
<surname>Lizier</surname>
<given-names>JT</given-names>
</name>
,
<name>
<surname>Schaum</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Wollstadt</surname>
<given-names>P</given-names>
</name>
,
<name>
<surname>Grützner</surname>
<given-names>C</given-names>
</name>
,
<etal>et al</etal>
(
<year>2014</year>
)
<article-title>Reduced predictable information in brain signals in autism spectrum disorder</article-title>
.
<source>Front Neuroinform</source>
<volume>8</volume>
:
<fpage>9</fpage>
<pub-id pub-id-type="pmid">24592235</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0102833-Dasgupta1">
<label>18</label>
<mixed-citation publication-type="other">Dasgupta S, Wörgötter F, Manoonpong P (2013) Information dynamics based self-adaptive reservoir for delay temporal memory tasks. Evolving Systems: 1–15.</mixed-citation>
</ref>
<ref id="pone.0102833-Vicente1">
<label>19</label>
<mixed-citation publication-type="journal">
<name>
<surname>Vicente</surname>
<given-names>R</given-names>
</name>
,
<name>
<surname>Wibral</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Lindner</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Pipa</surname>
<given-names>G</given-names>
</name>
(
<year>2011</year>
)
<article-title>Transfer entropy – a model-free measure of effective connectivity for the neurosciences</article-title>
.
<source>J Comput Neurosci</source>
<volume>30</volume>
:
<fpage>45</fpage>
<lpage>67</lpage>
<pub-id pub-id-type="pmid">20706781</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0102833-Wibral2">
<label>20</label>
<mixed-citation publication-type="journal">
<name>
<surname>Wibral</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Rahm</surname>
<given-names>B</given-names>
</name>
,
<name>
<surname>Rieder</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Lindner</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Vicente</surname>
<given-names>R</given-names>
</name>
,
<etal>et al</etal>
(
<year>2011</year>
)
<article-title>Transfer entropy in magnetoencephalographic data: quantifying information flow in cortical and cerebellar networks</article-title>
.
<source>Prog Biophys Mol Biol</source>
<volume>105</volume>
:
<fpage>80</fpage>
<lpage>97</lpage>
<pub-id pub-id-type="pmid">21115029</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0102833-Palu1">
<label>21</label>
<mixed-citation publication-type="journal">
<name>
<surname>Paluš</surname>
<given-names>M</given-names>
</name>
(
<year>2001</year>
)
<article-title>Synchronization as adjustment of information rates: detection from bivariate time series</article-title>
.
<source>Phys Rev E Stat Nonlin Soft Matter Phys</source>
<volume>63</volume>
:
<fpage>046211</fpage>
<pub-id pub-id-type="pmid">11308934</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0102833-Vakorin1">
<label>22</label>
<mixed-citation publication-type="journal">
<name>
<surname>Vakorin</surname>
<given-names>VA</given-names>
</name>
,
<name>
<surname>Kovacevic</surname>
<given-names>N</given-names>
</name>
,
<name>
<surname>McIntosh</surname>
<given-names>AR</given-names>
</name>
(
<year>2010</year>
)
<article-title>Exploring transient transfer entropy based on a group-wise ICA decomposition of EEG data</article-title>
.
<source>Neuroimage</source>
<volume>49</volume>
:
<fpage>1593</fpage>
<lpage>1600</lpage>
<pub-id pub-id-type="pmid">19698792</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0102833-Vakorin2">
<label>23</label>
<mixed-citation publication-type="journal">
<name>
<surname>Vakorin</surname>
<given-names>VA</given-names>
</name>
,
<name>
<surname>Krakovska</surname>
<given-names>OA</given-names>
</name>
,
<name>
<surname>McIntosh</surname>
<given-names>AR</given-names>
</name>
(
<year>2009</year>
)
<article-title>Confounding effects of indirect connections on causality estimation</article-title>
.
<source>J Neurosci Methods</source>
<volume>184</volume>
:
<fpage>152</fpage>
<lpage>160</lpage>
<pub-id pub-id-type="pmid">19628006</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0102833-Chvez1">
<label>24</label>
<mixed-citation publication-type="journal">
<name>
<surname>Chávez</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Martinerie</surname>
<given-names>J</given-names>
</name>
,
<name>
<surname>Le Van Quyen</surname>
<given-names>M</given-names>
</name>
(
<year>2003</year>
)
<article-title>Statistical assessment of nonlinear causality: application to epileptic EEG signals</article-title>
.
<source>J Neurosci Methods</source>
<volume>124</volume>
:
<fpage>113</fpage>
<lpage>28</lpage>
<pub-id pub-id-type="pmid">12706841</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0102833-Amblard1">
<label>25</label>
<mixed-citation publication-type="journal">
<name>
<surname>Amblard</surname>
<given-names>PO</given-names>
</name>
,
<name>
<surname>Michel</surname>
<given-names>OJ</given-names>
</name>
(
<year>2011</year>
)
<article-title>On directed information theory and Granger causality graphs</article-title>
.
<source>J Comput Neurosci</source>
<volume>30</volume>
:
<fpage>7</fpage>
<lpage>16</lpage>
<pub-id pub-id-type="pmid">20333542</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0102833-Barnett1">
<label>26</label>
<mixed-citation publication-type="journal">
<name>
<surname>Barnett</surname>
<given-names>L</given-names>
</name>
,
<name>
<surname>Barrett</surname>
<given-names>AB</given-names>
</name>
,
<name>
<surname>Seth</surname>
<given-names>AK</given-names>
</name>
(
<year>2009</year>
)
<article-title>Granger causality and transfer entropy are equivalent for Gaussian variables</article-title>
.
<source>Phys Rev Lett</source>
<volume>103</volume>
:
<fpage>238701</fpage>
<pub-id pub-id-type="pmid">20366183</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0102833-Besserve1">
<label>27</label>
<mixed-citation publication-type="journal">
<name>
<surname>Besserve</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Scholkopf</surname>
<given-names>B</given-names>
</name>
,
<name>
<surname>Logothetis</surname>
<given-names>NK</given-names>
</name>
,
<name>
<surname>Panzeri</surname>
<given-names>S</given-names>
</name>
(
<year>2010</year>
)
<article-title>Causal relationships between frequency bands of extracellular signals in visual cortex revealed by an information theoretic analysis</article-title>
.
<source>J Comput Neurosci</source>
<volume>29</volume>
:
<fpage>547</fpage>
<lpage>566</lpage>
<pub-id pub-id-type="pmid">20396940</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0102833-Buehlmann1">
<label>28</label>
<mixed-citation publication-type="journal">
<name>
<surname>Buehlmann</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Deco</surname>
<given-names>G</given-names>
</name>
(
<year>2010</year>
)
<article-title>Optimal information transfer in the cortex through synchronization</article-title>
.
<source>PLoS Comput Biol</source>
<volume>6</volume>
:
<fpage>e1000934</fpage>
<pub-id pub-id-type="pmid">20862355</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0102833-Garofalo1">
<label>29</label>
<mixed-citation publication-type="journal">
<name>
<surname>Garofalo</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Nieus</surname>
<given-names>T</given-names>
</name>
,
<name>
<surname>Massobrio</surname>
<given-names>P</given-names>
</name>
,
<name>
<surname>Martinoia</surname>
<given-names>S</given-names>
</name>
(
<year>2009</year>
)
<article-title>Evaluation of the performance of information theory-based methods and cross-correlation to estimate the functional connectivity in cortical networks</article-title>
.
<source>PLoS One</source>
<volume>4</volume>
:
<fpage>e6482</fpage>
<pub-id pub-id-type="pmid">19652720</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0102833-Gourevitch1">
<label>30</label>
<mixed-citation publication-type="journal">
<name>
<surname>Gourevitch</surname>
<given-names>B</given-names>
</name>
,
<name>
<surname>Eggermont</surname>
<given-names>JJ</given-names>
</name>
(
<year>2007</year>
)
<article-title>Evaluating information transfer between auditory cortical neurons</article-title>
.
<source>J Neurophysiol</source>
<volume>97</volume>
:
<fpage>2533</fpage>
<lpage>2543</lpage>
<pub-id pub-id-type="pmid">17202243</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0102833-Lizier7">
<label>31</label>
<mixed-citation publication-type="journal">
<name>
<surname>Lizier</surname>
<given-names>JT</given-names>
</name>
,
<name>
<surname>Heinzle</surname>
<given-names>J</given-names>
</name>
,
<name>
<surname>Horstmann</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Haynes</surname>
<given-names>JD</given-names>
</name>
,
<name>
<surname>Prokopenko</surname>
<given-names>M</given-names>
</name>
(
<year>2011</year>
)
<article-title>Multivariate information-theoretic measures reveal directed information structure and task relevant changes in fMRI connectivity</article-title>
.
<source>J Comput Neurosci</source>
<volume>30</volume>
:
<fpage>85</fpage>
<lpage>107</lpage>
<pub-id pub-id-type="pmid">20799057</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0102833-Ldtke1">
<label>32</label>
<mixed-citation publication-type="journal">
<name>
<surname>Lüdtke</surname>
<given-names>N</given-names>
</name>
,
<name>
<surname>Logothetis</surname>
<given-names>NK</given-names>
</name>
,
<name>
<surname>Panzeri</surname>
<given-names>S</given-names>
</name>
(
<year>2010</year>
)
<article-title>Testing methodologies for the nonlinear analysis of causal relationships in neurovascular coupling</article-title>
.
<source>Magn Reson Imaging</source>
<volume>28</volume>
:
<fpage>1113</fpage>
<lpage>1119</lpage>
<pub-id pub-id-type="pmid">20409664</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0102833-Neymotin1">
<label>33</label>
<mixed-citation publication-type="journal">
<name>
<surname>Neymotin</surname>
<given-names>SA</given-names>
</name>
,
<name>
<surname>Jacobs</surname>
<given-names>KM</given-names>
</name>
,
<name>
<surname>Fenton</surname>
<given-names>AA</given-names>
</name>
,
<name>
<surname>Lytton</surname>
<given-names>WW</given-names>
</name>
(
<year>2011</year>
)
<article-title>Synaptic information transfer in computer models of neocortical columns</article-title>
.
<source>J Comput Neurosci</source>
<volume>30</volume>
:
<fpage>69</fpage>
<lpage>84</lpage>
<pub-id pub-id-type="pmid">20556639</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0102833-Sabesan1">
<label>34</label>
<mixed-citation publication-type="journal">
<name>
<surname>Sabesan</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Good</surname>
<given-names>LB</given-names>
</name>
,
<name>
<surname>Tsakalis</surname>
<given-names>KS</given-names>
</name>
,
<name>
<surname>Spanias</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Treiman</surname>
<given-names>DM</given-names>
</name>
,
<etal>et al</etal>
(
<year>2009</year>
)
<article-title>Information flow and application to epileptogenic focus localization from intracranial EEG</article-title>
.
<source>IEEE Trans Neural Syst Rehabil Eng</source>
<volume>17</volume>
:
<fpage>244</fpage>
<lpage>53</lpage>
<pub-id pub-id-type="pmid">19497831</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0102833-Staniek1">
<label>35</label>
<mixed-citation publication-type="journal">
<name>
<surname>Staniek</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Lehnertz</surname>
<given-names>K</given-names>
</name>
(
<year>2009</year>
)
<article-title>Symbolic transfer entropy: inferring directionality in biosignals</article-title>
.
<source>Biomed Tech (Berl)</source>
<volume>54</volume>
:
<fpage>323</fpage>
<lpage>8</lpage>
<pub-id pub-id-type="pmid">19938889</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0102833-Vakorin3">
<label>36</label>
<mixed-citation publication-type="other">Vakorin VA, Misic B, Krakovska O, McIntosh AR (2011) Empirical and theoretical aspects of generation and transfer of information in a neuromagnetic source network. Front Syst Neurosci 5.</mixed-citation>
</ref>
<ref id="pone.0102833-Roux1">
<label>37</label>
<mixed-citation publication-type="journal">
<name>
<surname>Roux</surname>
<given-names>F</given-names>
</name>
,
<name>
<surname>Wibral</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Singer</surname>
<given-names>W</given-names>
</name>
,
<name>
<surname>Aru</surname>
<given-names>J</given-names>
</name>
,
<name>
<surname>Uhlhaas</surname>
<given-names>PJ</given-names>
</name>
(
<year>2013</year>
)
<article-title>The phase of thalamic alpha activity modulates cortical gamma-band activity: evidence from resting-state meg recordings</article-title>
.
<source>J Neurosci</source>
<volume>33</volume>
:
<fpage>17827</fpage>
<lpage>17835</lpage>
<pub-id pub-id-type="pmid">24198372</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0102833-Pampu1">
<label>38</label>
<mixed-citation publication-type="other">Pampu NC, Vicente R, Muresan RC, Priesemann V, Siebenhuhner F, et al. (2013) Transfer entropy as a tool for reconstructing interaction delays in neural signals. In: Signals, Circuits and Systems (ISSCS), 2013 International Symposium on. IEEE, pp. 1–4.</mixed-citation>
</ref>
<ref id="pone.0102833-Wibral3">
<label>39</label>
<mixed-citation publication-type="other">Wibral M, Vicente R, Lindner M (2014) Transfer entropy in neuroscience. In: Wibral M, Vicente R, Lizier JT, editors, Directed Information Measures in Neuroscience, Springer Berlin Heidelberg, Understanding Complex Systems. pp. 3–36.</mixed-citation>
</ref>
<ref id="pone.0102833-Marinazzo1">
<label>40</label>
<mixed-citation publication-type="other">Marinazzo D, Wu G, Pellicoro M, Stramaglia S (2014) Information transfer in the brain: Insights from a unified approach. In: Wibral M, Vicente R, Lizier JT, editors, Directed Information Measures in Neuroscience, Springer Berlin Heidelberg, Understanding Complex Systems. pp. 87–110.</mixed-citation>
</ref>
<ref id="pone.0102833-Faes1">
<label>41</label>
<mixed-citation publication-type="other">Faes L, Porta A (2014) Conditional entropy-based evaluation of information dynamics in physiological systems. In: Wibral M, Vicente R, Lizier JT, editors, Directed Information Measures in Neuroscience, Springer Berlin Heidelberg, Understanding Complex Systems. pp. 61–86.</mixed-citation>
</ref>
<ref id="pone.0102833-Faes2">
<label>42</label>
<mixed-citation publication-type="journal">
<name>
<surname>Faes</surname>
<given-names>L</given-names>
</name>
,
<name>
<surname>Nollo</surname>
<given-names>G</given-names>
</name>
(
<year>2006</year>
)
<article-title>Bivariate nonlinear prediction to quantify the strength of complex dynamical interactions in short-term cardiovascular variability</article-title>
.
<source>Med Biol Eng Comput</source>
<volume>44</volume>
:
<fpage>383</fpage>
<lpage>392</lpage>
<pub-id pub-id-type="pmid">16937180</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0102833-Faes3">
<label>43</label>
<mixed-citation publication-type="journal">
<name>
<surname>Faes</surname>
<given-names>L</given-names>
</name>
,
<name>
<surname>Nollo</surname>
<given-names>G</given-names>
</name>
,
<name>
<surname>Porta</surname>
<given-names>A</given-names>
</name>
(
<year>2011</year>
)
<article-title>Non-uniform multivariate embedding to assess the information transfer in cardiovascular and cardiorespiratory variability series</article-title>
.
<source>Comput Biol Med</source>
<volume>42</volume>
:
<fpage>290</fpage>
<lpage>297</lpage>
<pub-id pub-id-type="pmid">21419400</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0102833-Faes4">
<label>44</label>
<mixed-citation publication-type="journal">
<name>
<surname>Faes</surname>
<given-names>L</given-names>
</name>
,
<name>
<surname>Nollo</surname>
<given-names>G</given-names>
</name>
,
<name>
<surname>Porta</surname>
<given-names>A</given-names>
</name>
(
<year>2011</year>
)
<article-title>Information-based detection of nonlinear granger causality in multivariate processes via a nonuniform embedding technique</article-title>
.
<source>Phys Rev E Stat Nonlin Soft Matter Phys</source>
<volume>83</volume>
:
<fpage>051112</fpage>
<pub-id pub-id-type="pmid">21728495</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0102833-Kwon1">
<label>45</label>
<mixed-citation publication-type="journal">
<name>
<surname>Kwon</surname>
<given-names>O</given-names>
</name>
,
<name>
<surname>Yang</surname>
<given-names>JS</given-names>
</name>
(
<year>2008</year>
)
<article-title>Information flow between stock indices</article-title>
.
<source>Europhys Lett</source>
<volume>82</volume>
:
<fpage>68003</fpage>
</mixed-citation>
</ref>
<ref id="pone.0102833-Kim1">
<label>46</label>
<mixed-citation publication-type="journal">
<name>
<surname>Kim</surname>
<given-names>J</given-names>
</name>
,
<name>
<surname>Kim</surname>
<given-names>G</given-names>
</name>
,
<name>
<surname>An</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Kwon</surname>
<given-names>YK</given-names>
</name>
,
<name>
<surname>Yoon</surname>
<given-names>S</given-names>
</name>
(
<year>2013</year>
)
<article-title>Entropy-based analysis and bioinformatics-inspired integration of global economic information transfer</article-title>
.
<source>PLoS ONE</source>
<volume>8</volume>
:
<fpage>e51986</fpage>
<pub-id pub-id-type="pmid">23300959</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0102833-Ay1">
<label>47</label>
<mixed-citation publication-type="journal">
<name>
<surname>Ay</surname>
<given-names>N</given-names>
</name>
,
<name>
<surname>Polani</surname>
<given-names>D</given-names>
</name>
(
<year>2008</year>
)
<article-title>Information flows in causal networks</article-title>
.
<source>Adv Complex Syst</source>
<volume>11</volume>
:
<fpage>17</fpage>
</mixed-citation>
</ref>
<ref id="pone.0102833-Lizier8">
<label>48</label>
<mixed-citation publication-type="journal">
<name>
<surname>Lizier</surname>
<given-names>JT</given-names>
</name>
,
<name>
<surname>Prokopenko</surname>
<given-names>M</given-names>
</name>
(
<year>2010</year>
)
<article-title>Differentiating information transfer and causal effect</article-title>
.
<source>Eur Phys J B</source>
<volume>73</volume>
:
<fpage>605</fpage>
<lpage>615</lpage>
</mixed-citation>
</ref>
<ref id="pone.0102833-Chicharro1">
<label>49</label>
<mixed-citation publication-type="journal">
<name>
<surname>Chicharro</surname>
<given-names>D</given-names>
</name>
,
<name>
<surname>Ledberg</surname>
<given-names>A</given-names>
</name>
(
<year>2012</year>
)
<article-title>When two become one: the limits of causality analysis of brain dynamics</article-title>
.
<source>PLoS One</source>
<volume>7</volume>
:
<fpage>e32466</fpage>
<pub-id pub-id-type="pmid">22438878</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0102833-Lizier9">
<label>50</label>
<mixed-citation publication-type="other">Lizier JT, Rubinov M (2012) Multivariate construction of effective computational networks from observational data. Max Planck Institute for Mathematics in the Sciences Preprint 25/2012.</mixed-citation>
</ref>
<ref id="pone.0102833-Stramaglia1">
<label>51</label>
<mixed-citation publication-type="journal">
<name>
<surname>Stramaglia</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Wu</surname>
<given-names>GR</given-names>
</name>
,
<name>
<surname>Pellicoro</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Marinazzo</surname>
<given-names>D</given-names>
</name>
(
<year>2012</year>
)
<article-title>Expanding the transfer entropy to identify information circuits in complex systems</article-title>
.
<source>Phys Rev E Stat Nonlin Soft Matter Phys</source>
<volume>86</volume>
:
<fpage>066211</fpage>
<pub-id pub-id-type="pmid">23368028</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0102833-Bettencourt1">
<label>52</label>
<mixed-citation publication-type="journal">
<name>
<surname>Bettencourt</surname>
<given-names>LM</given-names>
</name>
,
<name>
<surname>Stephens</surname>
<given-names>GJ</given-names>
</name>
,
<name>
<surname>Ham</surname>
<given-names>MI</given-names>
</name>
,
<name>
<surname>Gross</surname>
<given-names>GW</given-names>
</name>
(
<year>2007</year>
)
<article-title>Functional structure of cortical neuronal networks grown in vitro</article-title>
.
<source>Phys Rev E Stat Nonlin Soft Matter Phys</source>
<volume>75</volume>
:
<fpage>021915</fpage>
<pub-id pub-id-type="pmid">17358375</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0102833-Wibral4">
<label>53</label>
<mixed-citation publication-type="journal">
<name>
<surname>Wibral</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Pampu</surname>
<given-names>N</given-names>
</name>
,
<name>
<surname>Priesemann</surname>
<given-names>V</given-names>
</name>
,
<name>
<surname>Siebenhühner</surname>
<given-names>F</given-names>
</name>
,
<name>
<surname>Seiwert</surname>
<given-names>H</given-names>
</name>
,
<etal>et al</etal>
(
<year>2013</year>
)
<article-title>Measuring information-transfer delays</article-title>
.
<source>PLoS ONE</source>
<volume>8</volume>
:
<fpage>e55809</fpage>
<pub-id pub-id-type="pmid">23468850</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0102833-Wibral5">
<label>54</label>
<mixed-citation publication-type="journal">
<name>
<surname>Wibral</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Wollstadt</surname>
<given-names>P</given-names>
</name>
,
<name>
<surname>Meyer</surname>
<given-names>U</given-names>
</name>
,
<name>
<surname>Pampu</surname>
<given-names>N</given-names>
</name>
,
<name>
<surname>Priesemann</surname>
<given-names>V</given-names>
</name>
,
<etal>et al</etal>
(
<year>2012</year>
)
<article-title>Revisiting Wiener's principle of causality – interaction-delay reconstruction using transfer entropy and multivariate analysis on delay-weighted graphs</article-title>
.
<source>Conf Proc IEEE Eng Med Biol Soc</source>
<volume>2012</volume>
:
<fpage>3676</fpage>
<lpage>3679</lpage>
<pub-id pub-id-type="pmid">23366725</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0102833-GomezHerrero1">
<label>55</label>
<mixed-citation publication-type="other">Gomez-Herrero G, Wu W, Rutanen K, Soriano M, Pipa G, et al. (2010) Assessing coupling dynamics from an ensemble of time series. arXiv preprint arXiv:10080539.</mixed-citation>
</ref>
<ref id="pone.0102833-Lindner1">
<label>56</label>
<mixed-citation publication-type="journal">
<name>
<surname>Lindner</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Vicente</surname>
<given-names>R</given-names>
</name>
,
<name>
<surname>Priesemann</surname>
<given-names>V</given-names>
</name>
,
<name>
<surname>Wibral</surname>
<given-names>M</given-names>
</name>
(
<year>2011</year>
)
<article-title>TRENTOOL: A MATLAB open source toolbox to analyse information flow in time series data with transfer entropy</article-title>
.
<source>BMC Neurosci</source>
<volume>12</volume>
:
<fpage>119</fpage>
<pub-id pub-id-type="pmid">22098775</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0102833-Kraskov1">
<label>57</label>
<mixed-citation publication-type="journal">
<name>
<surname>Kraskov</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Stoegbauer</surname>
<given-names>H</given-names>
</name>
,
<name>
<surname>Grassberger</surname>
<given-names>P</given-names>
</name>
(
<year>2004</year>
)
<article-title>Estimating mutual information</article-title>
.
<source>Phys Rev E Stat Nonlin Soft Matter Phys</source>
<volume>69</volume>
:
<fpage>066138</fpage>
<pub-id pub-id-type="pmid">15244698</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0102833-Owens1">
<label>58</label>
<mixed-citation publication-type="journal">
<name>
<surname>Owens</surname>
<given-names>JD</given-names>
</name>
,
<name>
<surname>Houston</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Luebke</surname>
<given-names>D</given-names>
</name>
,
<name>
<surname>Green</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Stone</surname>
<given-names>JE</given-names>
</name>
,
<etal>et al</etal>
(
<year>2008</year>
)
<article-title>GPU computing</article-title>
.
<source>Proc IEEE</source>
<volume>96</volume>
:
<fpage>879</fpage>
<lpage>899</lpage>
</mixed-citation>
</ref>
<ref id="pone.0102833-Brodtkorb1">
<label>59</label>
<mixed-citation publication-type="journal">
<name>
<surname>Brodtkorb</surname>
<given-names>AR</given-names>
</name>
,
<name>
<surname>Hagen</surname>
<given-names>TR</given-names>
</name>
,
<name>
<surname>Sætra</surname>
<given-names>ML</given-names>
</name>
(
<year>2013</year>
)
<article-title>Graphics processing unit (GPU) programming strategies and trends in GPU computing</article-title>
.
<source>J Parallel Distr Com</source>
<volume>73</volume>
:
<fpage>4</fpage>
<lpage>13</lpage>
</mixed-citation>
</ref>
<ref id="pone.0102833-Lee1">
<label>60</label>
<mixed-citation publication-type="journal">
<name>
<surname>Lee</surname>
<given-names>D</given-names>
</name>
,
<name>
<surname>Dinov</surname>
<given-names>I</given-names>
</name>
,
<name>
<surname>Dong</surname>
<given-names>B</given-names>
</name>
,
<name>
<surname>Gutman</surname>
<given-names>B</given-names>
</name>
,
<name>
<surname>Yanovsky</surname>
<given-names>I</given-names>
</name>
,
<etal>et al</etal>
(
<year>2012</year>
)
<article-title>CUDA optimization strategies for compute- and memory-bound neuroimaging algorithms</article-title>
.
<source>Comput Methods Programs Biomed</source>
<volume>106</volume>
:
<fpage>175</fpage>
<lpage>187</lpage>
<pub-id pub-id-type="pmid">21159404</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0102833-MartnezZarzuela1">
<label>61</label>
<mixed-citation publication-type="journal">
<name>
<surname>Martínez-Zarzuela</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Gómez</surname>
<given-names>C</given-names>
</name>
,
<name>
<surname>Díaz-Pernas</surname>
<given-names>FJ</given-names>
</name>
,
<name>
<surname>Fernández</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Hornero</surname>
<given-names>R</given-names>
</name>
(
<year>2013</year>
)
<article-title>Cross-approximate entropy parallel computation on GPUs for biomedical signal analysis. Application to MEG recordings</article-title>
.
<source>Comput Methods Programs Biomed</source>
<volume>112</volume>
:
<fpage>189</fpage>
<lpage>199</lpage>
<pub-id pub-id-type="pmid">23915803</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0102833-Konstantinidis1">
<label>62</label>
<mixed-citation publication-type="journal">
<name>
<surname>Konstantinidis</surname>
<given-names>EI</given-names>
</name>
,
<name>
<surname>Frantzidis</surname>
<given-names>CA</given-names>
</name>
,
<name>
<surname>Pappas</surname>
<given-names>C</given-names>
</name>
,
<name>
<surname>Bamidis</surname>
<given-names>PD</given-names>
</name>
(
<year>2012</year>
)
<article-title>Real time emotion aware applications: A case study employing emotion evocative pictures and neuro-physiological sensing enhanced by graphic processor units</article-title>
.
<source>Comput Methods Programs Biomed</source>
<volume>107</volume>
:
<fpage>16</fpage>
<lpage>27</lpage>
<pub-id pub-id-type="pmid">22520825</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0102833-Arefin1">
<label>63</label>
<mixed-citation publication-type="journal">
<name>
<surname>Arefin</surname>
<given-names>AS</given-names>
</name>
,
<name>
<surname>Riveros</surname>
<given-names>C</given-names>
</name>
,
<name>
<surname>Berretta</surname>
<given-names>R</given-names>
</name>
,
<name>
<surname>Moscato</surname>
<given-names>P</given-names>
</name>
(
<year>2012</year>
)
<article-title>GPU-FS-kNN: A software tool for fast and scalable kNN computation using GPUs</article-title>
.
<source>PLoS One</source>
<volume>7</volume>
:
<fpage>e44000</fpage>
<pub-id pub-id-type="pmid">22937144</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0102833-Wilson1">
<label>64</label>
<mixed-citation publication-type="journal">
<name>
<surname>Wilson</surname>
<given-names>JA</given-names>
</name>
,
<name>
<surname>Williams</surname>
<given-names>JC</given-names>
</name>
(
<year>2009</year>
)
<article-title>Massively parallel signal processing using the graphics processing unit for real-time brain-computer interface feature extraction</article-title>
.
<source>Front Neuroeng</source>
<volume>2</volume>
:
<fpage>11</fpage>
<pub-id pub-id-type="pmid">19636394</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0102833-Chen1">
<label>65</label>
<mixed-citation publication-type="journal">
<name>
<surname>Chen</surname>
<given-names>D</given-names>
</name>
,
<name>
<surname>Wang</surname>
<given-names>L</given-names>
</name>
,
<name>
<surname>Ouyang</surname>
<given-names>G</given-names>
</name>
,
<name>
<surname>Li</surname>
<given-names>X</given-names>
</name>
(
<year>2011</year>
)
<article-title>Massively parallel neural signal processing on a manycore platform</article-title>
.
<source>Comput Sci Eng</source>
<volume>13</volume>
:
<fpage>42</fpage>
<lpage>51</lpage>
</mixed-citation>
</ref>
<ref id="pone.0102833-Liu1">
<label>66</label>
<mixed-citation publication-type="journal">
<name>
<surname>Liu</surname>
<given-names>Y</given-names>
</name>
,
<name>
<surname>Schmidt</surname>
<given-names>B</given-names>
</name>
,
<name>
<surname>Liu</surname>
<given-names>W</given-names>
</name>
,
<name>
<surname>Maskell</surname>
<given-names>DL</given-names>
</name>
(
<year>2010</year>
)
<article-title>CUDA-MEME: Accelerating motif discovery in biological sequences using CUDA-enabled graphics processing units</article-title>
.
<source>Pattern Recognit Lett</source>
<volume>31</volume>
:
<fpage>2170</fpage>
<lpage>2177</lpage>
</mixed-citation>
</ref>
<ref id="pone.0102833-Merkwirth1">
<label>67</label>
<mixed-citation publication-type="journal">
<name>
<surname>Merkwirth</surname>
<given-names>C</given-names>
</name>
,
<name>
<surname>Parlitz</surname>
<given-names>U</given-names>
</name>
,
<name>
<surname>Lauterborn</surname>
<given-names>W</given-names>
</name>
(
<year>2000</year>
)
<article-title>Fast nearest-neighbor searching for nonlinear signal processing</article-title>
.
<source>Phys Rev E Stat Nonlin Soft Matter Phys</source>
<volume>62</volume>
:
<fpage>2089</fpage>
<lpage>2097</lpage>
</mixed-citation>
</ref>
<ref id="pone.0102833-Gardner1">
<label>68</label>
<mixed-citation publication-type="journal">
<name>
<surname>Gardner</surname>
<given-names>WA</given-names>
</name>
,
<name>
<surname>Napolitano</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Paura</surname>
<given-names>L</given-names>
</name>
(
<year>2006</year>
)
<article-title>Cyclostationarity: Half a century of research</article-title>
.
<source>Signal Process</source>
<volume>86</volume>
:
<fpage>639</fpage>
<lpage>697</lpage>
</mixed-citation>
</ref>
<ref id="pone.0102833-Williams2">
<label>69</label>
<mixed-citation publication-type="other">Williams PL, Beer RD (2011) Generalized measures of information transfer. arXiv preprint arXiv:11021507.</mixed-citation>
</ref>
<ref id="pone.0102833-Takens1">
<label>70</label>
<mixed-citation publication-type="other">Takens F (1981) Dynamical Systems and Turbulence, Warwick 1980, Springer, volume 898 of
<italic>Lecture Notes in Mathematics</italic>
, chapter Detecting Strange Attractors in Turbulence. pp. 366–381.</mixed-citation>
</ref>
<ref id="pone.0102833-Ragwitz1">
<label>71</label>
<mixed-citation publication-type="journal">
<name>
<surname>Ragwitz</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Kantz</surname>
<given-names>H</given-names>
</name>
(
<year>2002</year>
)
<article-title>Markov models from data by simple nonlinear time series predictors in delay embedding spaces</article-title>
.
<source>Phys Rev E Stat Nonlin Soft Matter Phys</source>
<volume>65</volume>
:
<fpage>056201</fpage>
<pub-id pub-id-type="pmid">12059674</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0102833-Kozachenko1">
<label>72</label>
<mixed-citation publication-type="journal">
<name>
<surname>Kozachenko</surname>
<given-names>L</given-names>
</name>
,
<name>
<surname>Leonenko</surname>
<given-names>N</given-names>
</name>
(
<year>1987</year>
)
<article-title>Sample estimate of entropy of a random vector</article-title>
.
<source>Probl Inform Transm</source>
<volume>23</volume>
:
<fpage>95</fpage>
<lpage>100</lpage>
</mixed-citation>
</ref>
<ref id="pone.0102833-Victor1">
<label>73</label>
<mixed-citation publication-type="journal">
<name>
<surname>Victor</surname>
<given-names>JD</given-names>
</name>
(
<year>2005</year>
)
<article-title>Binless strategies for estimation of information from neural data</article-title>
.
<source>Phys Rev E Stat Nonlin Soft Matter Phys</source>
<volume>72</volume>
:
<fpage>051903</fpage>
<pub-id pub-id-type="pmid">16383641</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0102833-Vicente2">
<label>74</label>
<mixed-citation publication-type="other">Vicente R, Wibral M (2014) Efficient estimation of information transfer. In: Wibral M, Vicente R, Lizier JT, editors, Directed Information Measures in Neuroscience, Springer Berlin Heidelberg, Understanding Complex Systems. pp. 37–58.</mixed-citation>
</ref>
<ref id="pone.0102833-NVIDIA1">
<label>75</label>
<mixed-citation publication-type="other">NVIDIA Corporation (2013). CUDA toolkit documentation. Available:
<ext-link ext-link-type="uri" xlink:href="http://docs.nvidia.com/cuda">http://docs.nvidia.com/cuda</ext-link>
Accessed 7 November 2013.</mixed-citation>
</ref>
<ref id="pone.0102833-Maris1">
<label>76</label>
<mixed-citation publication-type="journal">
<name>
<surname>Maris</surname>
<given-names>E</given-names>
</name>
,
<name>
<surname>Oostenveld</surname>
<given-names>R</given-names>
</name>
(
<year>2007</year>
)
<article-title>Nonparametric statistical testing of EEG- and MEG-data</article-title>
.
<source>J Neurosci Methods</source>
<volume>164</volume>
:
<fpage>177</fpage>
<lpage>90</lpage>
<pub-id pub-id-type="pmid">17517438</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0102833-Bentley1">
<label>77</label>
<mixed-citation publication-type="journal">
<name>
<surname>Bentley</surname>
<given-names>JL</given-names>
</name>
,
<name>
<surname>Friedman</surname>
<given-names>JH</given-names>
</name>
(
<year>1979</year>
)
<article-title>Data structures for range searching</article-title>
.
<source>ACM Comput Surv</source>
<volume>11</volume>
:
<fpage>397</fpage>
<lpage>409</lpage>
</mixed-citation>
</ref>
<ref id="pone.0102833-Arya1">
<label>78</label>
<mixed-citation publication-type="journal">
<name>
<surname>Arya</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Mount</surname>
<given-names>DM</given-names>
</name>
,
<name>
<surname>Netanyahu</surname>
<given-names>NS</given-names>
</name>
,
<name>
<surname>Silverman</surname>
<given-names>R</given-names>
</name>
,
<name>
<surname>Wu</surname>
<given-names>AY</given-names>
</name>
(
<year>1998</year>
)
<article-title>An optimal algorithm for approximate nearest neighbor searching fixed dimensions</article-title>
.
<source>J ACM</source>
<volume>45</volume>
:
<fpage>891</fpage>
<lpage>923</lpage>
</mixed-citation>
</ref>
<ref id="pone.0102833-Muja1">
<label>79</label>
<mixed-citation publication-type="other">Muja M, Lowe DG (2009) Fast approximate nearest neighbors with automatic algorithm configuration. In: In VISAPP International Conference on Computer Vision Theory and Applications. pp. 331–340.</mixed-citation>
</ref>
<ref id="pone.0102833-Garcia1">
<label>80</label>
<mixed-citation publication-type="other">Garcia V, Debreuve E, Nielsen F, Barlaud M (2010) K-nearest neighbor search: Fast GPU-based implementations and application to high-dimensional feature matching. In: Image Processing (ICIP), 2010 17th IEEE International Conference on. pp. 3757–3760.</mixed-citation>
</ref>
<ref id="pone.0102833-Sismanis1">
<label>81</label>
<mixed-citation publication-type="other">Sismanis N, Pitsianis N, Sun X (2012) Parallel search of k-nearest neighbors with synchronous operations. In: High Performance Extreme Computing (HPEC), 2012 IEEE Conference on. pp. 1–6.</mixed-citation>
</ref>
<ref id="pone.0102833-Brown1">
<label>82</label>
<mixed-citation publication-type="other">Brown S, Snoeyink J. GPU nearest neighbor searches using a minimal kd-tree. Available:
<ext-link ext-link-type="uri" xlink:href="http://cs.unc.edu/~shawndb">http://cs.unc.edu/~shawndb</ext-link>
Accessed 7 November 2013.</mixed-citation>
</ref>
<ref id="pone.0102833-Li1">
<label>83</label>
<mixed-citation publication-type="other">Li S, Simons LC, Pakaravoor JB, Abbasinejad F, Owens JD, et al. (2012) kANN on the GPU with shifted sorting. In: Dachsbacher C, Munkberg J, Pantaleoni J, editors, Proceedings of the Fourth ACM SIGGRAPH/Eurographics conference on High-Performance Graphics. High Performance Graphics 2012, The Eurographics Association, pp. 39–47.</mixed-citation>
</ref>
<ref id="pone.0102833-Pan1">
<label>84</label>
<mixed-citation publication-type="other">Pan J, Manocha D (2012) Bi-level locality sensitive hashing for k-nearest neighbor computation. In: Data Engineering (ICDE), 2012 IEEE 28th International Conference on. pp. 378–389. doi: 10.1109/ICDE.2012.40.</mixed-citation>
</ref>
<ref id="pone.0102833-Khronos1">
<label>85</label>
<mixed-citation publication-type="other">Khronos OpenCL Working Group, Munshi A (2009). The OpenCL specification version: 1.0 document revision: 48. Available:
<ext-link ext-link-type="uri" xlink:href="http://www.khronos.org/registry/cl/specs/opencl-1.0.pdf">http://www.khronos.org/registry/cl/specs/opencl-1.0.pdf</ext-link>
Accessed 30 May 2014.</mixed-citation>
</ref>
<ref id="pone.0102833-Grtzner1">
<label>86</label>
<mixed-citation publication-type="journal">
<name>
<surname>Grützner</surname>
<given-names>C</given-names>
</name>
,
<name>
<surname>Uhlhaas</surname>
<given-names>PJ</given-names>
</name>
,
<name>
<surname>Genc</surname>
<given-names>E</given-names>
</name>
,
<name>
<surname>Kohler</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Singer</surname>
<given-names>W</given-names>
</name>
,
<etal>et al</etal>
(
<year>2010</year>
)
<article-title>Neuroelectromagnetic correlates of perceptual closure processes</article-title>
.
<source>J Neurosci</source>
<volume>30</volume>
:
<fpage>8342</fpage>
<lpage>8352</lpage>
<pub-id pub-id-type="pmid">20554885</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0102833-Kraskov2">
<label>87</label>
<mixed-citation publication-type="other">Kraskov A (2004) Synchronization and Interdependence measures and their application to the electroencephalogram of epilepsy patients and clustering of data. Ph.D. thesis, University of Wuppertal.</mixed-citation>
</ref>
<ref id="pone.0102833-Mooney1">
<label>88</label>
<mixed-citation publication-type="journal">
<name>
<surname>Mooney</surname>
<given-names>CM</given-names>
</name>
,
<name>
<surname>Ferguson</surname>
<given-names>GA</given-names>
</name>
(
<year>1951</year>
)
<article-title>A new closure test</article-title>
.
<source>Can J Psychol</source>
<volume>5</volume>
:
<fpage>129</fpage>
<lpage>133</lpage>
<pub-id pub-id-type="pmid">14870072</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0102833-Oostenveld1">
<label>89</label>
<mixed-citation publication-type="journal">
<name>
<surname>Oostenveld</surname>
<given-names>R</given-names>
</name>
,
<name>
<surname>Fries</surname>
<given-names>P</given-names>
</name>
,
<name>
<surname>Maris</surname>
<given-names>E</given-names>
</name>
,
<name>
<surname>Schoffelen</surname>
<given-names>JM</given-names>
</name>
(
<year>2011</year>
)
<article-title>FieldTrip: Open source software for advanced analysis of MEG, EEG, and invasive electrophysiological data</article-title>
.
<source>Comput Intell Neurosci</source>
<volume>2011</volume>
:
<fpage>156869</fpage>
<pub-id pub-id-type="pmid">21253357</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0102833-Gross1">
<label>90</label>
<mixed-citation publication-type="journal">
<name>
<surname>Gross</surname>
<given-names>J</given-names>
</name>
,
<name>
<surname>Kujala</surname>
<given-names>J</given-names>
</name>
,
<name>
<surname>Hamalainen</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Timmermann</surname>
<given-names>L</given-names>
</name>
,
<name>
<surname>Schnitzler</surname>
<given-names>A</given-names>
</name>
,
<etal>et al</etal>
(
<year>2001</year>
)
<article-title>Dynamic imaging of coherent sources: studying neural interactions in the human brain</article-title>
.
<source>Proc Natl Acad Sci U S A</source>
<volume>98</volume>
:
<fpage>694</fpage>
<lpage>699</lpage>
<pub-id pub-id-type="pmid">11209067</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0102833-Brookes1">
<label>91</label>
<mixed-citation publication-type="journal">
<name>
<surname>Brookes</surname>
<given-names>MJ</given-names>
</name>
,
<name>
<surname>Vrba</surname>
<given-names>J</given-names>
</name>
,
<name>
<surname>Robinson</surname>
<given-names>SE</given-names>
</name>
,
<name>
<surname>Stevenson</surname>
<given-names>CM</given-names>
</name>
,
<name>
<surname>Peters</surname>
<given-names>AM</given-names>
</name>
,
<etal>et al</etal>
(
<year>2008</year>
)
<article-title>Optimising experimental design for MEG beamformer imaging</article-title>
.
<source>Neuroimage</source>
<volume>39</volume>
:
<fpage>1788</fpage>
<lpage>1802</lpage>
<pub-id pub-id-type="pmid">18155612</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0102833-Bar1">
<label>92</label>
<mixed-citation publication-type="journal">
<name>
<surname>Bar</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Kassam</surname>
<given-names>KS</given-names>
</name>
,
<name>
<surname>Ghuman</surname>
<given-names>AS</given-names>
</name>
,
<name>
<surname>Boshyan</surname>
<given-names>J</given-names>
</name>
,
<name>
<surname>Schmid</surname>
<given-names>AM</given-names>
</name>
,
<etal>et al</etal>
(
<year>2006</year>
)
<article-title>Top-down facilitation of visual recognition</article-title>
.
<source>Proc Natl Acad Sci U S A</source>
<volume>103</volume>
:
<fpage>449</fpage>
<lpage>454</lpage>
</mixed-citation>
</ref>
<ref id="pone.0102833-Cavanagh1">
<label>93</label>
<mixed-citation publication-type="other">Cavanagh P (1991) Whats up in top-down processing. In: Gorea A, editor, Representations of vision: Trends and tacit assumptions in vision research, Cambridge University Press. pp. 295–304.</mixed-citation>
</ref>
<ref id="pone.0102833-Verdes1">
<label>94</label>
<mixed-citation publication-type="journal">
<name>
<surname>Verdes</surname>
<given-names>PF</given-names>
</name>
(
<year>2005</year>
)
<article-title>Assessing causality from multivariate time series</article-title>
.
<source>Phys Rev E Stat Nonlin Soft Matter Phys</source>
<volume>72</volume>
:
<fpage>026222</fpage>
<pub-id pub-id-type="pmid">16196699</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0102833-Pompe1">
<label>95</label>
<mixed-citation publication-type="journal">
<name>
<surname>Pompe</surname>
<given-names>B</given-names>
</name>
,
<name>
<surname>Runge</surname>
<given-names>J</given-names>
</name>
(
<year>2011</year>
)
<article-title>Momentary information transfer as a coupling measure of time series</article-title>
.
<source>Phys Rev E Stat Nonlin Soft Matter Phys</source>
<volume>83</volume>
:
<fpage>051122</fpage>
<pub-id pub-id-type="pmid">21728505</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0102833-Marschinski1">
<label>96</label>
<mixed-citation publication-type="journal">
<name>
<surname>Marschinski</surname>
<given-names>R</given-names>
</name>
,
<name>
<surname>Kantz</surname>
<given-names>H</given-names>
</name>
(
<year>2002</year>
)
<article-title>Analysing the information flow between financial time series</article-title>
.
<source>Eur Phys J B</source>
<volume>30</volume>
:
<fpage>275</fpage>
<lpage>281</lpage>
</mixed-citation>
</ref>
<ref id="pone.0102833-Sauseng1">
<label>97</label>
<mixed-citation publication-type="journal">
<name>
<surname>Sauseng</surname>
<given-names>P</given-names>
</name>
,
<name>
<surname>Klimesch</surname>
<given-names>W</given-names>
</name>
,
<name>
<surname>Gruber</surname>
<given-names>WR</given-names>
</name>
,
<name>
<surname>Hanslmayr</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Freunberger</surname>
<given-names>R</given-names>
</name>
,
<etal>et al</etal>
(
<year>2007</year>
)
<article-title>Are event-related potential components generated by phase resetting of brain oscillations? A critical discussion</article-title>
.
<source>Neuroscience</source>
<volume>146</volume>
:
<fpage>1435</fpage>
<lpage>1444</lpage>
<pub-id pub-id-type="pmid">17459593</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0102833-Makeig1">
<label>98</label>
<mixed-citation publication-type="journal">
<name>
<surname>Makeig</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Debener</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Onton</surname>
<given-names>J</given-names>
</name>
,
<name>
<surname>Delorme</surname>
<given-names>A</given-names>
</name>
(
<year>2004</year>
)
<article-title>Mining event-related brain dynamics</article-title>
.
<source>Trends Cogn Sci</source>
<volume>8</volume>
:
<fpage>204</fpage>
<lpage>210</lpage>
<pub-id pub-id-type="pmid">15120678</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0102833-Shah1">
<label>99</label>
<mixed-citation publication-type="journal">
<name>
<surname>Shah</surname>
<given-names>AS</given-names>
</name>
,
<name>
<surname>Bressler</surname>
<given-names>SL</given-names>
</name>
,
<name>
<surname>Knuth</surname>
<given-names>KH</given-names>
</name>
,
<name>
<surname>Ding</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Mehta</surname>
<given-names>AD</given-names>
</name>
,
<etal>et al</etal>
(
<year>2004</year>
)
<article-title>Neural dynamics and the fundamental mechanisms of event-related brain potentials</article-title>
.
<source>Cereb Cortex</source>
<volume>14</volume>
:
<fpage>476</fpage>
<lpage>483</lpage>
<pub-id pub-id-type="pmid">15054063</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0102833-Jervis1">
<label>100</label>
<mixed-citation publication-type="other">Jervis BW, Nichols MJ, Johnson TE, Allen E, Hudson NR (1983) A fundamental investigation of the composition of auditory evoked potentials. IEEE Trans Biomed Eng: 43–50.</mixed-citation>
</ref>
<ref id="pone.0102833-Mangun1">
<label>101</label>
<mixed-citation publication-type="other">Mangun GR (1992) Human brain potentials evoked by visual stimuli: induced rhythms or timelocked components? In: Basar E, Bullock TH, editors, Induced rhythms in the brain, Boston, MA: Birkhauser. pp. 217–231.</mixed-citation>
</ref>
<ref id="pone.0102833-Schroeder1">
<label>102</label>
<mixed-citation publication-type="other">Schroeder CE, Steinschneider M, Javitt DC, Tenke CE, Givre SJ, et al. (1995) Localization of ERP generators and identification of underlying neural processes. Electroen Clin Neuro Suppl 44: 55.</mixed-citation>
</ref>
<ref id="pone.0102833-Sayers1">
<label>103</label>
<mixed-citation publication-type="journal">
<name>
<surname>Sayers</surname>
<given-names>BM</given-names>
</name>
,
<name>
<surname>Beagley</surname>
<given-names>H</given-names>
</name>
,
<name>
<surname>Henshall</surname>
<given-names>W</given-names>
</name>
(
<year>1974</year>
)
<article-title>The mechanism of auditory evoked EEG responses</article-title>
.
<source>Nature</source>
<volume>247</volume>
:
<fpage>481</fpage>
<lpage>483</lpage>
<pub-id pub-id-type="pmid">4818547</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0102833-Makeig2">
<label>104</label>
<mixed-citation publication-type="journal">
<name>
<surname>Makeig</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Westerfield</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Jung</surname>
<given-names>TP</given-names>
</name>
,
<name>
<surname>Enghoff</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Townsend</surname>
<given-names>J</given-names>
</name>
,
<etal>et al</etal>
(
<year>2002</year>
)
<article-title>Dynamic brain sources of visual evoked responses</article-title>
.
<source>Science</source>
<volume>295</volume>
:
<fpage>690</fpage>
<lpage>694</lpage>
<pub-id pub-id-type="pmid">11809976</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0102833-Jansen1">
<label>105</label>
<mixed-citation publication-type="journal">
<name>
<surname>Jansen</surname>
<given-names>BH</given-names>
</name>
,
<name>
<surname>Agarwal</surname>
<given-names>G</given-names>
</name>
,
<name>
<surname>Hegde</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Boutros</surname>
<given-names>NN</given-names>
</name>
(
<year>2003</year>
)
<article-title>Phase synchronization of the ongoing EEG and auditory EP generation</article-title>
.
<source>Clin Neurophysiol</source>
<volume>114</volume>
:
<fpage>79</fpage>
<lpage>85</lpage>
<pub-id pub-id-type="pmid">12495767</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0102833-Klimesch1">
<label>106</label>
<mixed-citation publication-type="journal">
<name>
<surname>Klimesch</surname>
<given-names>W</given-names>
</name>
,
<name>
<surname>Schack</surname>
<given-names>B</given-names>
</name>
,
<name>
<surname>Schabus</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Doppelmayr</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Gruber</surname>
<given-names>W</given-names>
</name>
,
<etal>et al</etal>
(
<year>2004</year>
)
<article-title>Phase-locked alpha and theta oscillations generate the P1–N1 complex and are related to memory performance</article-title>
.
<source>Cognitive Brain Res</source>
<volume>19</volume>
:
<fpage>302</fpage>
<lpage>316</lpage>
</mixed-citation>
</ref>
<ref id="pone.0102833-Turi1">
<label>107</label>
<mixed-citation publication-type="journal">
<name>
<surname>Turi</surname>
<given-names>G</given-names>
</name>
,
<name>
<surname>Gotthardt</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Singer</surname>
<given-names>W</given-names>
</name>
,
<name>
<surname>Vuong</surname>
<given-names>TA</given-names>
</name>
,
<name>
<surname>Munk</surname>
<given-names>M</given-names>
</name>
,
<etal>et al</etal>
(
<year>2012</year>
)
<article-title>Quantifying additive evoked contributions to the event-related potential</article-title>
.
<source>Neuroimage</source>
<volume>59</volume>
:
<fpage>2607</fpage>
<lpage>2624</lpage>
<pub-id pub-id-type="pmid">21982933</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0102833-Mller1">
<label>108</label>
<mixed-citation publication-type="journal">
<name>
<surname>Möller</surname>
<given-names>E</given-names>
</name>
,
<name>
<surname>Schack</surname>
<given-names>B</given-names>
</name>
,
<name>
<surname>Arnold</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Witte</surname>
<given-names>H</given-names>
</name>
(
<year>2001</year>
)
<article-title>Instantaneous multivariate EEG coherence analysis by means of adaptive high-dimensional autoregressive models</article-title>
.
<source>J Neurosci Meth</source>
<volume>105</volume>
:
<fpage>143</fpage>
<lpage>158</lpage>
</mixed-citation>
</ref>
<ref id="pone.0102833-Ding1">
<label>109</label>
<mixed-citation publication-type="journal">
<name>
<surname>Ding</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Bressler</surname>
<given-names>SL</given-names>
</name>
,
<name>
<surname>Yang</surname>
<given-names>W</given-names>
</name>
,
<name>
<surname>Liang</surname>
<given-names>H</given-names>
</name>
(
<year>2000</year>
)
<article-title>Short-window spectral analysis of cortical event-related potentials by adaptive multivariate autoregressive modeling: data preprocessing, model validation, and variability assessment</article-title>
.
<source>Biol Cybern</source>
<volume>83</volume>
:
<fpage>35</fpage>
<lpage>45</lpage>
<pub-id pub-id-type="pmid">10933236</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0102833-Hesse1">
<label>110</label>
<mixed-citation publication-type="journal">
<name>
<surname>Hesse</surname>
<given-names>W</given-names>
</name>
,
<name>
<surname>Möller</surname>
<given-names>E</given-names>
</name>
,
<name>
<surname>Arnold</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Schack</surname>
<given-names>B</given-names>
</name>
(
<year>2003</year>
)
<article-title>The use of time-variant EEG Granger causality for inspecting directed interdependencies of neural assemblies</article-title>
.
<source>J Neurosci Meth</source>
<volume>124</volume>
:
<fpage>27</fpage>
<lpage>44</lpage>
</mixed-citation>
</ref>
<ref id="pone.0102833-Leistritz1">
<label>111</label>
<mixed-citation publication-type="journal">
<name>
<surname>Leistritz</surname>
<given-names>L</given-names>
</name>
,
<name>
<surname>Hesse</surname>
<given-names>W</given-names>
</name>
,
<name>
<surname>Arnold</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Witte</surname>
<given-names>H</given-names>
</name>
(
<year>2006</year>
)
<article-title>Development of interaction measures based on adaptive non-linear time series analysis of biomedical signals</article-title>
.
<source>Biomed Tech</source>
<volume>51</volume>
:
<fpage>64</fpage>
<lpage>69</lpage>
</mixed-citation>
</ref>
<ref id="pone.0102833-Wibral6">
<label>112</label>
<mixed-citation publication-type="journal">
<name>
<surname>Wibral</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Turi</surname>
<given-names>G</given-names>
</name>
,
<name>
<surname>Linden</surname>
<given-names>DEJ</given-names>
</name>
,
<name>
<surname>Kaiser</surname>
<given-names>J</given-names>
</name>
,
<name>
<surname>Bledowski</surname>
<given-names>C</given-names>
</name>
(
<year>2008</year>
)
<article-title>Decomposition of working memory-related scalp ERPs: cross-validation of fMRI-constrained source analysis and ICA</article-title>
.
<source>Int J Psychophysiol</source>
<volume>67</volume>
:
<fpage>200</fpage>
<lpage>211</lpage>
<pub-id pub-id-type="pmid">17692981</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0102833-Andrzejak1">
<label>113</label>
<mixed-citation publication-type="journal">
<name>
<surname>Andrzejak</surname>
<given-names>RG</given-names>
</name>
,
<name>
<surname>Ledberg</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Deco</surname>
<given-names>G</given-names>
</name>
(
<year>2006</year>
)
<article-title>Detecting event-related time-dependent directional couplings</article-title>
.
<source>New Journal of Physics</source>
<volume>8</volume>
:
<fpage>6</fpage>
</mixed-citation>
</ref>
<ref id="pone.0102833-Strong1">
<label>114</label>
<mixed-citation publication-type="other">Strong SP, de Ruyter van Steveninck RR, Bialek W, Koberle R (1998) On the application of information theory to neural spike trains. Pac Symp Biocomput: 621–632.</mixed-citation>
</ref>
<ref id="pone.0102833-Georgieva1">
<label>115</label>
<mixed-citation publication-type="journal">
<name>
<surname>Georgieva</surname>
<given-names>SS</given-names>
</name>
,
<name>
<surname>Todd</surname>
<given-names>JT</given-names>
</name>
,
<name>
<surname>Peeters</surname>
<given-names>R</given-names>
</name>
,
<name>
<surname>Orban</surname>
<given-names>GA</given-names>
</name>
(
<year>2008</year>
)
<article-title>The extraction of 3D shape from texture and shading in the human brain</article-title>
.
<source>Cereb Cortex</source>
<volume>18</volume>
:
<fpage>2416</fpage>
<lpage>2438</lpage>
<pub-id pub-id-type="pmid">18281304</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0102833-Kanwisher1">
<label>116</label>
<mixed-citation publication-type="journal">
<name>
<surname>Kanwisher</surname>
<given-names>N</given-names>
</name>
,
<name>
<surname>Tong</surname>
<given-names>F</given-names>
</name>
,
<name>
<surname>Nakayama</surname>
<given-names>K</given-names>
</name>
(
<year>1998</year>
)
<article-title>The effect of face inversion on the human fusiform face area</article-title>
.
<source>Cognition</source>
<volume>68</volume>
:
<fpage>B1</fpage>
<lpage>B11</lpage>
<pub-id pub-id-type="pmid">9775518</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0102833-Andrews1">
<label>117</label>
<mixed-citation publication-type="journal">
<name>
<surname>Andrews</surname>
<given-names>TJ</given-names>
</name>
,
<name>
<surname>Schluppeck</surname>
<given-names>D</given-names>
</name>
(
<year>2004</year>
)
<article-title>Neural responses to Mooney images reveal a modular representation of faces in human visual cortex</article-title>
.
<source>Neuroimage</source>
<volume>21</volume>
:
<fpage>91</fpage>
<lpage>98</lpage>
<pub-id pub-id-type="pmid">14741646</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0102833-McKeeff1">
<label>118</label>
<mixed-citation publication-type="journal">
<name>
<surname>McKeeff</surname>
<given-names>TJ</given-names>
</name>
,
<name>
<surname>Tong</surname>
<given-names>F</given-names>
</name>
(
<year>2007</year>
)
<article-title>The timing of perceptual decisions for ambiguous face stimuli in the human ventral visual cortex</article-title>
.
<source>Cerebral Cortex</source>
<volume>17</volume>
:
<fpage>669</fpage>
<lpage>678</lpage>
<pub-id pub-id-type="pmid">16648454</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0102833-Faes5">
<label>119</label>
<mixed-citation publication-type="journal">
<name>
<surname>Faes</surname>
<given-names>L</given-names>
</name>
,
<name>
<surname>Nollo</surname>
<given-names>G</given-names>
</name>
,
<name>
<surname>Porta</surname>
<given-names>A</given-names>
</name>
(
<year>2013</year>
)
<article-title>Compensated transfer entropy as a tool for reliably estimating information transfer in physiological time series</article-title>
.
<source>Entropy</source>
<volume>15</volume>
:
<fpage>198</fpage>
<lpage>219</lpage>
</mixed-citation>
</ref>
</ref-list>
</back>
</pmc>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/TelematiV1/Data/Pmc/Corpus
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000529 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Pmc/Corpus/biblio.hfd -nk 000529 | SxmlIndent | more

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    TelematiV1
   |flux=    Pmc
   |étape=   Corpus
   |type=    RBID
   |clé=     PMC:4113280
   |texte=   Efficient Transfer Entropy Analysis of Non-Stationary Neural Time Series
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Pmc/Corpus/RBID.i   -Sk "pubmed:25068489" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Pmc/Corpus/biblio.hfd   \
       | NlmPubMed2Wicri -a TelematiV1 

Wicri

This area was generated with Dilib version V0.6.31.
Data generation: Thu Nov 2 16:09:04 2017. Site generation: Sun Mar 10 16:42:28 2024