OCR exploration server


Precise-Spike-Driven Synaptic Plasticity: Learning Hetero-Association of Spatiotemporal Spike Patterns

Internal identifier: 000176 (Pmc/Curation); previous: 000175; next: 000177


Authors: Qiang Yu [Singapore]; Huajin Tang [Singapore, People's Republic of China]; Kay Chen Tan [Singapore]; Haizhou Li [Singapore, Australia]

Source:

RBID: PMC:3818323

Abstract

A new learning rule (Precise-Spike-Driven (PSD) Synaptic Plasticity) is proposed for processing and memorizing spatiotemporal patterns. PSD is a supervised learning rule that is analytically derived from the traditional Widrow-Hoff rule and can be used to train neurons to associate an input spatiotemporal spike pattern with a desired spike train. Synaptic adaptation is driven by the error between the desired and the actual output spikes, with positive errors causing long-term potentiation and negative errors causing long-term depression. The amount of modification is proportional to an eligibility trace that is triggered by afferent spikes. The PSD rule is both computationally efficient and biologically plausible. The properties of this learning rule are investigated extensively through experimental simulations, including its learning performance, its generality to different neuron models, its robustness against noisy conditions, its memory capacity, and the effects of its learning parameters. Experimental results show that the PSD rule is capable of spatiotemporal pattern classification, and can even outperform a well studied benchmark algorithm with the proposed relative confidence criterion. The PSD rule is further validated on a practical example of an optical character recognition problem. The results again show that it can achieve a good recognition performance with a proper encoding. Finally, a detailed discussion is provided about the PSD rule and several related algorithms including tempotron, SPAN, Chronotron and ReSuMe.


URL:
DOI: 10.1371/journal.pone.0078318
PubMed: 24223789
PubMed Central: 3818323


The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Precise-Spike-Driven Synaptic Plasticity: Learning Hetero-Association of Spatiotemporal Spike Patterns</title>
<author>
<name sortKey="Yu, Qiang" sort="Yu, Qiang" uniqKey="Yu Q" first="Qiang" last="Yu">Qiang Yu</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<addr-line>Department of Electrical and Computer Engineering, National University of Singapore, Singapore, Singapore</addr-line>
</nlm:aff>
<country xml:lang="fr">Singapour</country>
<wicri:regionArea>Department of Electrical and Computer Engineering, National University of Singapore, Singapore</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Tang, Huajin" sort="Tang, Huajin" uniqKey="Tang H" first="Huajin" last="Tang">Huajin Tang</name>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<addr-line>Institute for Infocomm Research, Agency for Science Technology and Research (A*STAR), Singapore, Singapore</addr-line>
</nlm:aff>
<country xml:lang="fr">Singapour</country>
<wicri:regionArea>Institute for Infocomm Research, Agency for Science Technology and Research (A*STAR), Singapore</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff3">
<addr-line>College of Computer Science, Sichuan University, Chengdu, China</addr-line>
</nlm:aff>
<country xml:lang="fr">République populaire de Chine</country>
<wicri:regionArea>College of Computer Science, Sichuan University, Chengdu</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Tan, Kay Chen" sort="Tan, Kay Chen" uniqKey="Tan K" first="Kay Chen" last="Tan">Kay Chen Tan</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<addr-line>Department of Electrical and Computer Engineering, National University of Singapore, Singapore, Singapore</addr-line>
</nlm:aff>
<country xml:lang="fr">Singapour</country>
<wicri:regionArea>Department of Electrical and Computer Engineering, National University of Singapore, Singapore</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Li, Haizhou" sort="Li, Haizhou" uniqKey="Li H" first="Haizhou" last="Li">Haizhou Li</name>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<addr-line>Institute for Infocomm Research, Agency for Science Technology and Research (A*STAR), Singapore, Singapore</addr-line>
</nlm:aff>
<country xml:lang="fr">Singapour</country>
<wicri:regionArea>Institute for Infocomm Research, Agency for Science Technology and Research (A*STAR), Singapore</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff4">
<addr-line>School of Electrical Engineering and Telecommunications, University of New South Wales, Sydney, Australia</addr-line>
</nlm:aff>
<country xml:lang="fr">Australie</country>
<wicri:regionArea>School of Electrical Engineering and Telecommunications, University of New South Wales, Sydney</wicri:regionArea>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">24223789</idno>
<idno type="pmc">3818323</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3818323</idno>
<idno type="RBID">PMC:3818323</idno>
<idno type="doi">10.1371/journal.pone.0078318</idno>
<date when="2013">2013</date>
<idno type="wicri:Area/Pmc/Corpus">000176</idno>
<idno type="wicri:Area/Pmc/Curation">000176</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Precise-Spike-Driven Synaptic Plasticity: Learning Hetero-Association of Spatiotemporal Spike Patterns</title>
<author>
<name sortKey="Yu, Qiang" sort="Yu, Qiang" uniqKey="Yu Q" first="Qiang" last="Yu">Qiang Yu</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<addr-line>Department of Electrical and Computer Engineering, National University of Singapore, Singapore, Singapore</addr-line>
</nlm:aff>
<country xml:lang="fr">Singapour</country>
<wicri:regionArea>Department of Electrical and Computer Engineering, National University of Singapore, Singapore</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Tang, Huajin" sort="Tang, Huajin" uniqKey="Tang H" first="Huajin" last="Tang">Huajin Tang</name>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<addr-line>Institute for Infocomm Research, Agency for Science Technology and Research (A*STAR), Singapore, Singapore</addr-line>
</nlm:aff>
<country xml:lang="fr">Singapour</country>
<wicri:regionArea>Institute for Infocomm Research, Agency for Science Technology and Research (A*STAR), Singapore</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff3">
<addr-line>College of Computer Science, Sichuan University, Chengdu, China</addr-line>
</nlm:aff>
<country xml:lang="fr">République populaire de Chine</country>
<wicri:regionArea>College of Computer Science, Sichuan University, Chengdu</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Tan, Kay Chen" sort="Tan, Kay Chen" uniqKey="Tan K" first="Kay Chen" last="Tan">Kay Chen Tan</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<addr-line>Department of Electrical and Computer Engineering, National University of Singapore, Singapore, Singapore</addr-line>
</nlm:aff>
<country xml:lang="fr">Singapour</country>
<wicri:regionArea>Department of Electrical and Computer Engineering, National University of Singapore, Singapore</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Li, Haizhou" sort="Li, Haizhou" uniqKey="Li H" first="Haizhou" last="Li">Haizhou Li</name>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<addr-line>Institute for Infocomm Research, Agency for Science Technology and Research (A*STAR), Singapore, Singapore</addr-line>
</nlm:aff>
<country xml:lang="fr">Singapour</country>
<wicri:regionArea>Institute for Infocomm Research, Agency for Science Technology and Research (A*STAR), Singapore</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff4">
<addr-line>School of Electrical Engineering and Telecommunications, University of New South Wales, Sydney, Australia</addr-line>
</nlm:aff>
<country xml:lang="fr">Australie</country>
<wicri:regionArea>School of Electrical Engineering and Telecommunications, University of New South Wales, Sydney</wicri:regionArea>
</affiliation>
</author>
</analytic>
<series>
<title level="j">PLoS ONE</title>
<idno type="eISSN">1932-6203</idno>
<imprint>
<date when="2013">2013</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>A new learning rule (Precise-Spike-Driven (PSD) Synaptic Plasticity) is proposed for processing and memorizing spatiotemporal patterns. PSD is a supervised learning rule that is analytically derived from the traditional Widrow-Hoff rule and can be used to train neurons to associate an input spatiotemporal spike pattern with a desired spike train. Synaptic adaptation is driven by the error between the desired and the actual output spikes, with positive errors causing long-term potentiation and negative errors causing long-term depression. The amount of modification is proportional to an eligibility trace that is triggered by afferent spikes. The PSD rule is both computationally efficient and biologically plausible. The properties of this learning rule are investigated extensively through experimental simulations, including its learning performance, its generality to different neuron models, its robustness against noisy conditions, its memory capacity, and the effects of its learning parameters. Experimental results show that the PSD rule is capable of spatiotemporal pattern classification, and can even outperform a well studied benchmark algorithm with the proposed relative confidence criterion. The PSD rule is further validated on a practical example of an optical character recognition problem. The results again show that it can achieve a good recognition performance with a proper encoding. Finally, a detailed discussion is provided about the PSD rule and several related algorithms including tempotron, SPAN, Chronotron and ReSuMe.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ghosh Dastidar, S" uniqKey="Ghosh Dastidar S">S Ghosh-Dastidar</name>
</author>
<author>
<name sortKey="Adeli, H" uniqKey="Adeli H">H Adeli</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Maass, W" uniqKey="Maass W">W Maass</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Panzeri, S" uniqKey="Panzeri S">S Panzeri</name>
</author>
<author>
<name sortKey="Brunel, N" uniqKey="Brunel N">N Brunel</name>
</author>
<author>
<name sortKey="Logothetis, Nk" uniqKey="Logothetis N">NK Logothetis</name>
</author>
<author>
<name sortKey="Kayser, C" uniqKey="Kayser C">C Kayser</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Berry, Mj" uniqKey="Berry M">MJ Berry</name>
</author>
<author>
<name sortKey="Meister, M" uniqKey="Meister M">M Meister</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Uzzell, Vj" uniqKey="Uzzell V">VJ Uzzell</name>
</author>
<author>
<name sortKey="Chichilnisky, Ej" uniqKey="Chichilnisky E">EJ Chichilnisky</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Reinagel, P" uniqKey="Reinagel P">P Reinagel</name>
</author>
<author>
<name sortKey="Reid, Rc" uniqKey="Reid R">RC Reid</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bair, W" uniqKey="Bair W">W Bair</name>
</author>
<author>
<name sortKey="Koch, C" uniqKey="Koch C">C Koch</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mainen, Zf" uniqKey="Mainen Z">ZF Mainen</name>
</author>
<author>
<name sortKey="Sejnowski, Tj" uniqKey="Sejnowski T">TJ Sejnowski</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kempter, R" uniqKey="Kempter R">R Kempter</name>
</author>
<author>
<name sortKey="Gerstner, W" uniqKey="Gerstner W">W Gerstner</name>
</author>
<author>
<name sortKey="Van Hemmen, Jl" uniqKey="Van Hemmen J">JL van Hemmen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Borst, A" uniqKey="Borst A">A Borst</name>
</author>
<author>
<name sortKey="Theunissen, Fe" uniqKey="Theunissen F">FE Theunissen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hopfield, Jj" uniqKey="Hopfield J">JJ Hopfield</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Shadlen, Mn" uniqKey="Shadlen M">MN Shadlen</name>
</author>
<author>
<name sortKey="Movshon, Ja" uniqKey="Movshon J">JA Movshon</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gutig, R" uniqKey="Gutig R">R Gütig</name>
</author>
<author>
<name sortKey="Sompolinsky, H" uniqKey="Sompolinsky H">H Sompolinsky</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Widrow, B" uniqKey="Widrow B">B Widrow</name>
</author>
<author>
<name sortKey="Lehr, M" uniqKey="Lehr M">M Lehr</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Knudsen, Ei" uniqKey="Knudsen E">EI Knudsen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Thach, Wt" uniqKey="Thach W">WT Thach</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ito, M" uniqKey="Ito M">M Ito</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Carey, Mr" uniqKey="Carey M">MR Carey</name>
</author>
<author>
<name sortKey="Medina, Jf" uniqKey="Medina J">JF Medina</name>
</author>
<author>
<name sortKey="Lisberger, Sg" uniqKey="Lisberger S">SG Lisberger</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Brader, Jm" uniqKey="Brader J">JM Brader</name>
</author>
<author>
<name sortKey="Senn, W" uniqKey="Senn W">W Senn</name>
</author>
<author>
<name sortKey="Fusi, S" uniqKey="Fusi S">S Fusi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bohte, Sm" uniqKey="Bohte S">SM Bohte</name>
</author>
<author>
<name sortKey="Kok, Jn" uniqKey="Kok J">JN Kok</name>
</author>
<author>
<name sortKey="Poutre, Jal" uniqKey="Poutre J">JAL Poutré</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Florian, Rv" uniqKey="Florian R">RV Florian</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mohemmed, A" uniqKey="Mohemmed A">A Mohemmed</name>
</author>
<author>
<name sortKey="Schliebs, S" uniqKey="Schliebs S">S Schliebs</name>
</author>
<author>
<name sortKey="Matsuda, S" uniqKey="Matsuda S">S Matsuda</name>
</author>
<author>
<name sortKey="Kasabov, N" uniqKey="Kasabov N">N Kasabov</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Yu, Q" uniqKey="Yu Q">Q Yu</name>
</author>
<author>
<name sortKey="Tang, H" uniqKey="Tang H">H Tang</name>
</author>
<author>
<name sortKey="Tan, Kc" uniqKey="Tan K">KC Tan</name>
</author>
<author>
<name sortKey="Li, H" uniqKey="Li H">H Li</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hu, J" uniqKey="Hu J">J Hu</name>
</author>
<author>
<name sortKey="Tang, H" uniqKey="Tang H">H Tang</name>
</author>
<author>
<name sortKey="Tan, Kc" uniqKey="Tan K">KC Tan</name>
</author>
<author>
<name sortKey="Li, H" uniqKey="Li H">H Li</name>
</author>
<author>
<name sortKey="Shi, L" uniqKey="Shi L">L Shi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ponulak, F" uniqKey="Ponulak F">F Ponulak</name>
</author>
<author>
<name sortKey="Kasinski, Aj" uniqKey="Kasinski A">AJ Kasinski</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kempter, R" uniqKey="Kempter R">R Kempter</name>
</author>
<author>
<name sortKey="Gerstner, W" uniqKey="Gerstner W">W Gerstner</name>
</author>
<author>
<name sortKey="Van Hemmen, Jl" uniqKey="Van Hemmen J">JL van Hemmen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bi, Gq" uniqKey="Bi G">GQ Bi</name>
</author>
<author>
<name sortKey="Poo, Mm" uniqKey="Poo M">MM Poo</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ghosh Dastidar, S" uniqKey="Ghosh Dastidar S">S Ghosh-Dastidar</name>
</author>
<author>
<name sortKey="Adeli, H" uniqKey="Adeli H">H Adeli</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Izhikevich, Em" uniqKey="Izhikevich E">EM Izhikevich</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hodgkin, A" uniqKey="Hodgkin A">A Hodgkin</name>
</author>
<author>
<name sortKey="Huxley, A" uniqKey="Huxley A">A Huxley</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Izhikevich, Em" uniqKey="Izhikevich E">EM Izhikevich</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wade, Jj" uniqKey="Wade J">JJ Wade</name>
</author>
<author>
<name sortKey="Mcdaid, Lj" uniqKey="Mcdaid L">LJ McDaid</name>
</author>
<author>
<name sortKey="Santos, Ja" uniqKey="Santos J">JA Santos</name>
</author>
<author>
<name sortKey="Sayers, Hm" uniqKey="Sayers H">HM Sayers</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Masquelier, T" uniqKey="Masquelier T">T Masquelier</name>
</author>
<author>
<name sortKey="Guyonneau, R" uniqKey="Guyonneau R">R Guyonneau</name>
</author>
<author>
<name sortKey="Thorpe, Sj" uniqKey="Thorpe S">SJ Thorpe</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rubinov, M" uniqKey="Rubinov M">M Rubinov</name>
</author>
<author>
<name sortKey="Sporns, O" uniqKey="Sporns O">O Sporns</name>
</author>
<author>
<name sortKey="Thivierge, J" uniqKey="Thivierge J">J Thivierge</name>
</author>
<author>
<name sortKey="Breakspear, M" uniqKey="Breakspear M">M Breakspear</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rossum, M" uniqKey="Rossum M">M Rossum</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gardner, E" uniqKey="Gardner E">E Gardner</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Shriki, O" uniqKey="Shriki O">O Shriki</name>
</author>
<author>
<name sortKey="Kohn, A" uniqKey="Kohn A">A Kohn</name>
</author>
<author>
<name sortKey="Shamir, M" uniqKey="Shamir M">M Shamir</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nadasdy, Z" uniqKey="Nadasdy Z">Z Nadasdy</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gollisch, T" uniqKey="Gollisch T">T Gollisch</name>
</author>
<author>
<name sortKey="Meister, M" uniqKey="Meister M">M Meister</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Llinas, Rr" uniqKey="Llinas R">RR Llinas</name>
</author>
<author>
<name sortKey="Grace, Aa" uniqKey="Grace A">AA Grace</name>
</author>
<author>
<name sortKey="Yarom, Y" uniqKey="Yarom Y">Y Yarom</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Koepsell, K" uniqKey="Koepsell K">K Koepsell</name>
</author>
<author>
<name sortKey="Wang, X" uniqKey="Wang X">X Wang</name>
</author>
<author>
<name sortKey="Vaingankar, V" uniqKey="Vaingankar V">V Vaingankar</name>
</author>
<author>
<name sortKey="Wei, Y" uniqKey="Wei Y">Y Wei</name>
</author>
<author>
<name sortKey="Wang, Q" uniqKey="Wang Q">Q Wang</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Jacobs, J" uniqKey="Jacobs J">J Jacobs</name>
</author>
<author>
<name sortKey="Kahana, Mj" uniqKey="Kahana M">MJ Kahana</name>
</author>
<author>
<name sortKey="Ekstrom, Ad" uniqKey="Ekstrom A">AD Ekstrom</name>
</author>
<author>
<name sortKey="Fried, I" uniqKey="Fried I">I Fried</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Foehring, Rc" uniqKey="Foehring R">RC Foehring</name>
</author>
<author>
<name sortKey="Lorenzon, Nm" uniqKey="Lorenzon N">NM Lorenzon</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Seamans, Jk" uniqKey="Seamans J">JK Seamans</name>
</author>
<author>
<name sortKey="Yang, Cr" uniqKey="Yang C">CR Yang</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Artola, A" uniqKey="Artola A">A Artola</name>
</author>
<author>
<name sortKey="Brocher, S" uniqKey="Brocher S">S Bröcher</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ngezahayo, A" uniqKey="Ngezahayo A">A Ngezahayo</name>
</author>
<author>
<name sortKey="Schachner, M" uniqKey="Schachner M">M Schachner</name>
</author>
<author>
<name sortKey="Artola, A" uniqKey="Artola A">A Artola</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lisman, J" uniqKey="Lisman J">J Lisman</name>
</author>
<author>
<name sortKey="Spruston, N" uniqKey="Spruston N">N Spruston</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Froemke, Rc" uniqKey="Froemke R">RC Froemke</name>
</author>
<author>
<name sortKey="Poo, Mm" uniqKey="Poo M">Mm Poo</name>
</author>
<author>
<name sortKey="Dan, Y" uniqKey="Dan Y">Y Dan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Xu, Y" uniqKey="Xu Y">Y Xu</name>
</author>
<author>
<name sortKey="Zeng, X" uniqKey="Zeng X">X Zeng</name>
</author>
<author>
<name sortKey="Zhong, S" uniqKey="Zhong S">S Zhong</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Thorpe, S" uniqKey="Thorpe S">S Thorpe</name>
</author>
<author>
<name sortKey="Fize, D" uniqKey="Fize D">D Fize</name>
</author>
<author>
<name sortKey="Marlot, C" uniqKey="Marlot C">C Marlot</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Vanrullen, R" uniqKey="Vanrullen R">R VanRullen</name>
</author>
<author>
<name sortKey="Guyonneau, R" uniqKey="Guyonneau R">R Guyonneau</name>
</author>
<author>
<name sortKey="Thorpe, Sj" uniqKey="Thorpe S">SJ Thorpe</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Booij, O" uniqKey="Booij O">O Booij</name>
</author>
<author>
<name sortKey="Nguyen, Ht" uniqKey="Nguyen H">HT Nguyen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Victor, Jd" uniqKey="Victor J">JD Victor</name>
</author>
<author>
<name sortKey="Purpura, Kp" uniqKey="Purpura K">KP Purpura</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Van Rossum, Mc" uniqKey="Van Rossum M">MC Van Rossum</name>
</author>
<author>
<name sortKey="Bi, G" uniqKey="Bi G">G Bi</name>
</author>
<author>
<name sortKey="Turrigiano, G" uniqKey="Turrigiano G">G Turrigiano</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Buonomano, Dv" uniqKey="Buonomano D">DV Buonomano</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">PLoS One</journal-id>
<journal-id journal-id-type="iso-abbrev">PLoS ONE</journal-id>
<journal-id journal-id-type="publisher-id">plos</journal-id>
<journal-id journal-id-type="pmc">plosone</journal-id>
<journal-title-group>
<journal-title>PLoS ONE</journal-title>
</journal-title-group>
<issn pub-type="epub">1932-6203</issn>
<publisher>
<publisher-name>Public Library of Science</publisher-name>
<publisher-loc>San Francisco, USA</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">24223789</article-id>
<article-id pub-id-type="pmc">3818323</article-id>
<article-id pub-id-type="publisher-id">PONE-D-13-13263</article-id>
<article-id pub-id-type="doi">10.1371/journal.pone.0078318</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Research Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Precise-Spike-Driven Synaptic Plasticity: Learning Hetero-Association of Spatiotemporal Spike Patterns</article-title>
<alt-title alt-title-type="running-head">Precise-Spike-Driven (PSD) Synaptic Plasticity</alt-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Yu</surname>
<given-names>Qiang</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Tang</surname>
<given-names>Huajin</given-names>
</name>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
<xref ref-type="aff" rid="aff3">
<sup>3</sup>
</xref>
<xref ref-type="corresp" rid="cor1">
<sup>*</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Tan</surname>
<given-names>Kay Chen</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Li</surname>
<given-names>Haizhou</given-names>
</name>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
<xref ref-type="aff" rid="aff4">
<sup>4</sup>
</xref>
</contrib>
</contrib-group>
<aff id="aff1">
<label>1</label>
<addr-line>Department of Electrical and Computer Engineering, National University of Singapore, Singapore, Singapore</addr-line>
</aff>
<aff id="aff2">
<label>2</label>
<addr-line>Institute for Infocomm Research, Agency for Science Technology and Research (A*STAR), Singapore, Singapore</addr-line>
</aff>
<aff id="aff3">
<label>3</label>
<addr-line>College of Computer Science, Sichuan University, Chengdu, China</addr-line>
</aff>
<aff id="aff4">
<label>4</label>
<addr-line>School of Electrical Engineering and Telecommunications, University of New South Wales, Sydney, Australia</addr-line>
</aff>
<contrib-group>
<contrib contrib-type="editor">
<name>
<surname>Lytton</surname>
<given-names>William W.</given-names>
</name>
<role>Editor</role>
<xref ref-type="aff" rid="edit1"></xref>
</contrib>
</contrib-group>
<aff id="edit1">
<addr-line>SUNY Downstate MC, United States of America</addr-line>
</aff>
<author-notes>
<corresp id="cor1">* E-mail:
<email>htang@i2r.a-star.edu.sg</email>
</corresp>
<fn fn-type="conflict">
<p>
<bold>Competing Interests: </bold>
The authors have declared that no competing interests exist.</p>
</fn>
<fn fn-type="con">
<p>Conceived and designed the experiments: QY HT. Performed the experiments: QY HT. Analyzed the data: QY HT. Contributed reagents/materials/analysis tools: KCT HL. Wrote the paper: QY HT KCT HL.</p>
</fn>
</author-notes>
<pub-date pub-type="collection">
<year>2013</year>
</pub-date>
<pub-date pub-type="epub">
<day>5</day>
<month>11</month>
<year>2013</year>
</pub-date>
<volume>8</volume>
<issue>11</issue>
<elocation-id>e78318</elocation-id>
<history>
<date date-type="received">
<day>1</day>
<month>4</month>
<year>2013</year>
</date>
<date date-type="accepted">
<day>11</day>
<month>9</month>
<year>2013</year>
</date>
</history>
<permissions>
<copyright-year>2013</copyright-year>
<copyright-holder>Yu et al</copyright-holder>
<license>
<license-p>This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.</license-p>
</license>
</permissions>
<abstract>
<p>A new learning rule (Precise-Spike-Driven (PSD) Synaptic Plasticity) is proposed for processing and memorizing spatiotemporal patterns. PSD is a supervised learning rule that is analytically derived from the traditional Widrow-Hoff rule and can be used to train neurons to associate an input spatiotemporal spike pattern with a desired spike train. Synaptic adaptation is driven by the error between the desired and the actual output spikes, with positive errors causing long-term potentiation and negative errors causing long-term depression. The amount of modification is proportional to an eligibility trace that is triggered by afferent spikes. The PSD rule is both computationally efficient and biologically plausible. The properties of this learning rule are investigated extensively through experimental simulations, including its learning performance, its generality to different neuron models, its robustness against noisy conditions, its memory capacity, and the effects of its learning parameters. Experimental results show that the PSD rule is capable of spatiotemporal pattern classification, and can even outperform a well studied benchmark algorithm with the proposed relative confidence criterion. The PSD rule is further validated on a practical example of an optical character recognition problem. The results again show that it can achieve a good recognition performance with a proper encoding. Finally, a detailed discussion is provided about the PSD rule and several related algorithms including tempotron, SPAN, Chronotron and ReSuMe.</p>
</abstract>
<funding-group>
<funding-statement>This work was supported by Agency for Science, Technology, and Research (A*STAR), Singapore under SERC Grant 092 157 0130. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.</funding-statement>
</funding-group>
<counts>
<page-count count="16"></page-count>
</counts>
</article-meta>
</front>
<body>
<sec id="s1">
<title>Introduction</title>
<p>With the same capability of processing spikes as biological neural systems, spiking neural networks (SNNs)
<xref ref-type="bibr" rid="pone.0078318-Gerstner1">[1]</xref>
<xref ref-type="bibr" rid="pone.0078318-Maass1">[3]</xref>
are more biologically realistic and computationally powerful than traditional artificial neural networks (ANNs). Spikes are believed to be the principal feature of information processing in neural systems, though the neural coding mechanism, i.e., how information is encoded in spikes, remains unclear. Many different neural codes have been introduced to describe how spatiotemporal spikes convey information about external stimuli; among them, the rate code and the temporal code
<xref ref-type="bibr" rid="pone.0078318-Panzeri1">[4]</xref>
are the two most widely studied coding schemes. The rate code is a basic example of a neural code where information is conveyed through the spike count within a time window. Evidence to support the hypothesis of the rate code is demonstrated in
<xref ref-type="bibr" rid="pone.0078318-Adrian1">[5]</xref>
, where firing rates are shown to correlate with sensory variables. In the temporal code, the precise timing of each spike is considered. Increasing experimental evidence suggests that neural systems use the exact timing of spikes to convey information. For example, neurons have been shown to respond to stimuli with millisecond precision in the retina
<xref ref-type="bibr" rid="pone.0078318-Berry1">[6]</xref>
,
<xref ref-type="bibr" rid="pone.0078318-Uzzell1">[7]</xref>
, the lateral geniculate nucleus
<xref ref-type="bibr" rid="pone.0078318-Reinagel1">[8]</xref>
and the visual cortex
<xref ref-type="bibr" rid="pone.0078318-Bair1">[9]</xref>
,
<xref ref-type="bibr" rid="pone.0078318-Mainen1">[10]</xref>
These observations support the hypothesis of the temporal code. Additionally, recent studies show that the temporal coding scheme can offer significant computational advantages over the rate coding scheme
<xref ref-type="bibr" rid="pone.0078318-Kempter1">[11]</xref>
<xref ref-type="bibr" rid="pone.0078318-Hopfield1">[13]</xref>
. However, the complexity of processing temporal codes
<xref ref-type="bibr" rid="pone.0078318-Shadlen1">[14]</xref>
,
<xref ref-type="bibr" rid="pone.0078318-Gtig1">[15]</xref>
might limit their usage in SNNs, which calls for the development of efficient learning algorithms.</p>
<p>Supervised learning was proposed as a successful concept of information processing
<xref ref-type="bibr" rid="pone.0078318-Widrow1">[16]</xref>
. Neurons are driven to respond at desired states under a supervisory signal, and an increasing body of evidence shows that this kind of learning is exploited by the brain
<xref ref-type="bibr" rid="pone.0078318-Knudsen1">[17]</xref>
<xref ref-type="bibr" rid="pone.0078318-Carey1">[20]</xref>
. Supervised mechanisms have been widely used to develop various learning algorithms for processing spatiotemporal spike patterns in SNNs
<xref ref-type="bibr" rid="pone.0078318-Gtig1">[15]</xref>
,
<xref ref-type="bibr" rid="pone.0078318-Brader1">[21]</xref>
<xref ref-type="bibr" rid="pone.0078318-Hu1">[27]</xref>
.</p>
<p>Some of the existing supervised learning rules, such as spike-driven synaptic plasticity
<xref ref-type="bibr" rid="pone.0078318-Brader1">[21]</xref>
, are formulated in a rate-based framework and are not suitable for processing precisely timed spike patterns. In the spike-driven synaptic plasticity approach, the learning process is supervised and stochastic: a teacher signal steers the output neuron toward a desired firing rate. Under this algorithm, synaptic weights are modified upon the arrival of presynaptic spikes, taking into account the state of the postsynaptic neuron's potential and its recent firing activity.</p>
<p>SpikeProb
<xref ref-type="bibr" rid="pone.0078318-Bohte1">[22]</xref>
is one of the first supervised learning algorithms for processing precise spatiotemporal patterns in SNNs. It is a gradient-descent-based learning rule that can solve nonlinear classification tasks by emitting single spikes at the desired firing times. However, in its original form, SpikeProp cannot learn to reproduce a multi-spike train. The tempotron rule
<xref ref-type="bibr" rid="pone.0078318-Gtig1">[15]</xref>
, another gradient-descent approach that has proven efficient for binary temporal classification tasks, cannot output multiple spikes either. As the tempotron is designed mainly for pattern recognition, it is unable to produce precisely timed spikes: the time of its output spike appears arbitrary and does not carry information. By this nature, the output of one tempotron cannot serve as the input for another. To produce a desired spike train, several learning algorithms have been proposed, such as ReSuMe
<xref ref-type="bibr" rid="pone.0078318-Ponulak1">[23]</xref>
,
<xref ref-type="bibr" rid="pone.0078318-Ponulak2">[28]</xref>
, Chronotron
<xref ref-type="bibr" rid="pone.0078318-Florian1">[24]</xref>
and SPAN
<xref ref-type="bibr" rid="pone.0078318-Mohemmed1">[25]</xref>
. These three learning rules are all capable of training a neuron to generate a desired spike train in response to an input stimulus. The ReSuMe rule is based on a learning window concept similar to spike-timing-dependent plasticity (STDP)
<xref ref-type="bibr" rid="pone.0078318-Kempter2">[29]</xref>
,
<xref ref-type="bibr" rid="pone.0078318-Bi1">[30]</xref>
. ReSuMe interprets the Widrow-Hoff (WH) rule
<xref ref-type="bibr" rid="pone.0078318-Widrow1">[16]</xref>
through the interaction of two biological processes: Hebbian and anti-Hebbian learning. In the Chronotron, two learning rules are introduced: one analytically derived (E-learning) and one heuristically defined (I-learning). The I-learning rule is more biologically plausible but has less memory capacity than the E-learning rule. Its performance also depends on the weight initialization, since initial zero values cause information loss from the corresponding afferent neurons. The E-learning rule and the SPAN rule are both based on an error function of the difference between the actual and the desired output spike trains. Their applicability is therefore limited by the tractability of the error evaluation, which might be unavailable in actual biological networks and inefficient from a computational point of view. These arithmetic-based rules reveal explicitly how SNNs can be trained, but the biological plausibility of the error calculation is questionable.</p>
<p>In this paper, we propose an alternative learning mechanism called Precise-Spike-Driven (PSD) synaptic plasticity, which is able to learn associations between precise spike patterns. Similar to ReSuMe
<xref ref-type="bibr" rid="pone.0078318-Ponulak1">[23]</xref>
and SPAN
<xref ref-type="bibr" rid="pone.0078318-Mohemmed1">[25]</xref>
, the PSD rule is derived from the WH rule, but with a different interpretation: it is derived analytically by converting spike trains into analog signals through spike convolution. Such an approach is rarely reported in existing studies of learning rules
<xref ref-type="bibr" rid="pone.0078318-Mohemmed1">[25]</xref>
. Synaptic adaptation in the PSD is driven by the error between the desired and the actual output spikes, with positive errors causing long-term potentiation (LTP) and negative errors causing long-term depression (LTD). The amount of adaptation depends on an eligibility trace determined by the afferent spikes. By avoiding complex error calculations, the PSD rule provides an efficient way to process spatiotemporal patterns. We show that the PSD rule inherits the advantageous properties of both arithmetic-based and biologically realistic rules, being simple and efficient for computation, yet biologically plausible. Furthermore, the PSD is an independent plasticity rule that can be applied to different neuron models. This straightforward interpretation of the WH rule also provides a possible direction for further exploitation of the rich theory of ANNs, and narrows the gap between the learning algorithms of SNNs and those of traditional ANNs.</p>
<p>Various properties of the PSD rule are investigated through an extensive experimental analysis. In the first experiment, the basic concepts of the PSD rule are demonstrated, and its ability to learn hetero-associations of spatiotemporal spike patterns is investigated. In the second experiment, the PSD rule is shown to be applicable to different neuron models. Thereafter, experiments are conducted to analyze the learning rule regarding its robustness against noisy conditions, its memory capacity, the effects of the learning parameters and its classification performance. The capability of the PSD rule is further demonstrated on a practical example of an optical character recognition (OCR) problem. Finally, a detailed discussion about the PSD rule and several related algorithms including tempotron, SPAN, Chronotron and ReSuMe is presented.</p>
</sec>
<sec sec-type="methods" id="s2">
<title>Methods</title>
<p>In this section, we begin by presenting the spiking neuron models. We then describe the PSD rule for learning hetero-association between the input spatiotemporal spike patterns and the desired spike trains.</p>
<sec id="s2a">
<title>Spiking Neuron Model</title>
<p>As the third generation of neuron models, spiking neurons raise the level of biological realism by utilizing spikes
<xref ref-type="bibr" rid="pone.0078318-Maass1">[3]</xref>
. Spiking neurons perform computation using precisely timed spikes, and offer improvements over traditional neural models in terms of accuracy and computational power
<xref ref-type="bibr" rid="pone.0078318-GhoshDastidar2">[31]</xref>
. There are several kinds of spiking neuron models such as the integrate-and-fire (IF) model
<xref ref-type="bibr" rid="pone.0078318-Gerstner1">[1]</xref>
, the resonate-and-fire model
<xref ref-type="bibr" rid="pone.0078318-Izhikevich1">[32]</xref>
, the Hodgkin-Huxley model
<xref ref-type="bibr" rid="pone.0078318-Hodgkin1">[33]</xref>
, and the Izhikevich (IM) model
<xref ref-type="bibr" rid="pone.0078318-Izhikevich2">[34]</xref>
. Because the IF model is simple and computationally effective, it has become the most widely used spiking neuron model
<xref ref-type="bibr" rid="pone.0078318-Gtig1">[15]</xref>
,
<xref ref-type="bibr" rid="pone.0078318-Brader1">[21]</xref>
,
<xref ref-type="bibr" rid="pone.0078318-Bohte1">[22]</xref>
,
<xref ref-type="bibr" rid="pone.0078318-Ponulak2">[28]</xref>
,
<xref ref-type="bibr" rid="pone.0078318-Wade1">[35]</xref>
<xref ref-type="bibr" rid="pone.0078318-Rubinov1">[37]</xref>
, despite the existence of more biologically realistic models.</p>
<p>For the sake of simplicity, the leaky integrate-and-fire (LIF) model is considered first. The dynamics of each neuron evolve according to:

$$\tau_m \frac{dV_m}{dt} = -(V_m - V_{rest}) + R\,(I_b(t) + I_s(t)) \qquad (1)$$

where $V_m$ is the membrane potential, $\tau_m = RC$ is the membrane time constant, $R$ and $C$ are the membrane resistance and capacitance, respectively, $V_{rest}$ is the resting potential, and $I_b$ and $I_s$ are the background current noise and the synaptic current, respectively. When $V_m$ exceeds a constant threshold $\vartheta$, the neuron is said to fire, and $V_m$ is reset to $V_{rest}$ for a refractory period $t_{ref}$. We set $V_{rest}$ and $\vartheta$ to normalized values for clarity, but any other choice will result in equivalent dynamics as long as the relationships among $V_{rest}$, $\vartheta$ and $V_m$ are kept.</p>
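For illustration, here is a minimal Python sketch of this LIF dynamics using simple Euler integration. All parameter values (tau_m, r, the unit threshold, the refractory period) are placeholder assumptions, not the paper's settings:

import numpy as np

def simulate_lif(i_input, dt=0.1, tau_m=10.0, r=1.0, v_rest=0.0,
                 v_thresh=1.0, t_ref=1.0):
    """Euler integration of tau_m * dV/dt = -(V - v_rest) + r * I(t).
    i_input: total input current (background + synaptic) sampled every dt (ms).
    Returns the membrane-potential trace and the output spike times."""
    v = v_rest
    refractory_until = -1.0
    trace, spikes = [], []
    for step, i_t in enumerate(i_input):
        t = step * dt
        if t >= refractory_until:                 # integrate only outside refractoriness
            v += dt / tau_m * (-(v - v_rest) + r * i_t)
            if v >= v_thresh:                     # threshold crossing -> output spike
                spikes.append(t)
                v = v_rest                        # reset membrane potential
                refractory_until = t + t_ref      # hold for the refractory period
        trace.append(v)
    return np.array(trace), spikes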
<p>For the postsynaptic neuron, we model the input synaptic current as:

$$I_s(t) = \sum_i w_i\, I_{PSC}^{i}(t) \qquad (2)$$

where $w_i$ is the synaptic efficacy of the $i$-th afferent neuron, and $I_{PSC}^{i}(t)$ is the un-weighted postsynaptic current from the corresponding afferent:

$$I_{PSC}^{i}(t) = \sum_{t_i^f} K(t - t_i^f)\, H(t - t_i^f) \qquad (3)$$

where $t_i^f$ is the time of the $f$-th spike emitted from the $i$-th afferent neuron, $H(\cdot)$ refers to the Heaviside function, and $K$ denotes a normalized kernel, which we choose as:

$$K(t - t_i^f) = V_0 \left( \exp\!\left(-\frac{t - t_i^f}{\tau_s}\right) - \exp\!\left(-\frac{t - t_i^f}{\tau_f}\right) \right) \qquad (4)$$

where $V_0$ is a normalization factor such that the maximum value of the kernel is 1, and $\tau_s$ and $\tau_f$ are the slow and fast decay constants, respectively, with their ratio $\tau_s/\tau_f$ held fixed.</p>
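A sketch of this kernel and the resulting un-weighted PSC of a single afferent follows; the decay constants tau_s and tau_f are illustrative choices, and V0 is computed so that the kernel peaks at 1, as the text requires:

import numpy as np

def psc_kernel(s, tau_s=5.0, tau_f=1.25):
    """K(s) = V0 * (exp(-s/tau_s) - exp(-s/tau_f)) for s >= 0, else 0 (causal).
    V0 is chosen so that the kernel's peak value is 1."""
    s = np.maximum(np.asarray(s, dtype=float), 0.0)    # K(0) = 0, so clamping is safe
    s_peak = tau_s * tau_f / (tau_s - tau_f) * np.log(tau_s / tau_f)
    v0 = 1.0 / (np.exp(-s_peak / tau_s) - np.exp(-s_peak / tau_f))
    return v0 * (np.exp(-s / tau_s) - np.exp(-s / tau_f))

def unweighted_psc(t, spike_times, **kw):
    """I_PSC(t) of one afferent: a kernel triggered at each of its spike times."""
    return sum(psc_kernel(t - tf, **kw) for tf in spike_times)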
<p>
<xref ref-type="fig" rid="pone-0078318-g001">Fig. 1</xref>
illustrates the neuron structure. Each spike from an afferent neuron results in a postsynaptic current (PSC). The membrane potential of the postsynaptic neuron is the weighted sum of the incoming PSCs over all afferent neurons.</p>
<fig id="pone-0078318-g001" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0078318.g001</object-id>
<label>Figure 1</label>
<caption>
<title>Illustration of the neuron structure.</title>
<p>The afferent neurons are connected to the postsynaptic neuron through synapses. Each spike emitted by an afferent neuron triggers a postsynaptic current (PSC). The membrane potential of the postsynaptic neuron is the weighted sum of the incoming PSCs from all afferent neurons. The yellow neuron denotes the instructor, which is used for learning.</p>
</caption>
<graphic xlink:href="pone.0078318.g001"></graphic>
</fig>
<p>In addition to the LIF model, we also investigate the flexibility of the PSD rule with respect to different neuron models. For this, we use the IM model
<xref ref-type="bibr" rid="pone.0078318-Izhikevich2">[34]</xref>
, whose dynamics are described as:

$$\frac{dv}{dt} = 0.04 v^2 + 5 v + 140 - u + I_s(t) + I_b(t), \qquad \frac{du}{dt} = a\,(b v - u) \qquad (5)$$

with the after-spike reset $v \leftarrow c$ and $u \leftarrow u + d$. Here $v$ again represents the membrane potential and $u$ is the membrane recovery variable. The synaptic current $I_s$ has the same form as described before, and $I_b$ again represents the background noise. The parameters $a$, $b$, $c$ and $d$ are chosen such that the neuron exhibits regular spiking behavior, which is the most typical behavior observed in cortex
<xref ref-type="bibr" rid="pone.0078318-Izhikevich2">[34]</xref>
.</p>
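A minimal sketch of this model with Izhikevich's published regular-spiking parameters (a = 0.02, b = 0.2, c = -65, d = 8); the time step and the input current array are arbitrary assumptions:

def simulate_izhikevich(i_input, dt=0.1, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Izhikevich model: v' = 0.04 v^2 + 5 v + 140 - u + I, u' = a (b v - u),
    with the after-spike reset v <- c, u <- u + d when v >= 30 mV.
    Defaults are the published regular-spiking parameters."""
    v, u = c, b * c
    spikes = []
    for step, i_t in enumerate(i_input):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + i_t)
        u += dt * a * (b * v - u)
        if v >= 30.0:                  # spike: record the time and apply the reset
            spikes.append(step * dt)
            v, u = c, u + d
    return spikes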
<p>For computational efficiency, the LIF model is used in the following studies, unless otherwise stated.</p>
</sec>
<sec id="s2b">
<title>PSD Learning Rule</title>
<p>In this section, we describe the PSD learning rule in detail. Just as spiking neuron models were developed from traditional neuron models, we develop the learning rule for spiking neurons from traditional algorithms. Inspired by
<xref ref-type="bibr" rid="pone.0078318-Mohemmed1">[25]</xref>
, we derive the proposed rule from the common Widrow-Hoff (WH) rule, which is described as:

$$\Delta w_i = \eta\, x_i\, (y_d - y_o) \qquad (6)$$

where $\eta$ is a positive constant referring to the learning rate, and $x_i$, $y_d$ and $y_o$ refer to the input, the desired output and the actual output, respectively.</p>
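In its original setting, the WH rule is a one-line update over real-valued vectors; a sketch (the learning-rate value is illustrative):

import numpy as np

def widrow_hoff_step(w, x, y_desired, y_actual, eta=0.01):
    """Delta rule over real-valued vectors: w_i <- w_i + eta * x_i * (y_d - y_o)."""
    return w + eta * np.asarray(x) * (y_desired - y_actual)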
<p>Note that because the WH rule was introduced for traditional neuron models such as the perceptron, its variables are regarded as real-valued vectors. In the case of spiking neurons, the input and output signals are described by the timing of spikes, so a direct implementation of the WH rule does not work for spiking neurons. This motivates the development of the PSD rule.</p>
<p>A spike train is defined as a sequence of impulses triggered by a particular neuron at its firing times, expressed in the form:

$$s(t) = \sum_f \delta(t - t^f) \qquad (7)$$

where $t^f$ is the $f$-th firing time and $\delta(x)$ is the Dirac function: $\delta(x) = 0$ for $x \neq 0$ and $\int_{-\infty}^{\infty} \delta(x)\,dx = 1$. Thus, the input, the desired output and the actual output of the spiking neuron are described as:

$$x_i(t) = \sum_f \delta(t - t_i^f), \quad y_d(t) = \sum_g \delta(t - t_d^g), \quad y_o(t) = \sum_h \delta(t - t_o^h) \qquad (8)$$
</p>
<p>The products of Dirac functions are mathematically problematic. To resolve this difficulty, we apply an approach called spike convolution. Unlike the method used in
<xref ref-type="bibr" rid="pone.0078318-Mohemmed1">[25]</xref>
, which needs a complex error evaluation and requires spike convolution on all the spike trains of the input, the desired output and the actual output, we only convolve the input spike trains:

$$\tilde{x}_i(t) = \sum_f K(t - t_i^f)\, H(t - t_i^f) \qquad (9)$$

where $K$ is the convolving kernel, which we choose to be the same as in Eq. (4). In this case, the convolved signal has the same form as $I_{PSC}^{i}(t)$ in Eq. (3). Thus, we use $I_{PSC}^{i}(t)$ as the eligibility trace for the weight adaptation, and the learning rule becomes:

$$\frac{dw_i(t)}{dt} = \eta\, [\,y_d(t) - y_o(t)\,]\, I_{PSC}^{i}(t) \qquad (10)$$
</p>
<p>Eq. (10) formulates an online learning rule, whose dynamics are illustrated in
<xref ref-type="fig" rid="pone-0078318-g002">Fig. 2</xref>
. It can be seen that the polarity of the synaptic change depends on three cases: (1) a positive error (a miss), where the neuron does not spike at the desired time; (2) a zero error (a hit), where the neuron spikes at the desired time; and (3) a negative error (a false alarm), where the neuron spikes when it is not supposed to.</p>
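A sketch of one simulation step of the online rule in Eq. (10): at a desired spike time the eligibility trace (the PSC) is added to the weights (LTP), and at an erroneous actual spike it is subtracted (LTD). The names and discretization are our assumptions, not the paper's code:

import numpy as np

def psd_step(w, i_psc, desired_spike, actual_spike, eta=0.01):
    """One simulation step of the discretized PSD rule (Eq. 10).
    w: weight vector, one entry per afferent.
    i_psc: un-weighted PSC of each afferent at this time step (eligibility trace).
    desired_spike / actual_spike: whether a desired / actual output spike
    falls within this time step."""
    error = int(desired_spike) - int(actual_spike)   # +1 miss, -1 false alarm, 0 hit
    if error:
        w = w + eta * error * np.asarray(i_psc)      # LTP on misses, LTD on false alarms
    return w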
<fig id="pone-0078318-g002" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0078318.g002</object-id>
<label>Figure 2</label>
<caption>
<title>Illustration of the weight adaptation.</title>
<p>$x_i(t)$ is the presynaptic spike train of afferent $i$. $y_d(t)$ and $y_o(t)$ are the desired and the actual postsynaptic spike trains, respectively. $I_{PSC}^{i}(t)$ is the postsynaptic current and serves as the eligibility trace for the adaptation of $w_i$. A positive error, where the neuron does not spike at the desired time, causes synaptic potentiation. A negative error, where the neuron spikes when it is not supposed to, results in synaptic depression. The amount of adaptation is proportional to the postsynaptic current. There will be no modification when the actual output spike fires exactly at the desired time. This figure is inspired by
<xref ref-type="bibr" rid="pone.0078318-Ponulak2">[28]</xref>
.</p>
</caption>
<graphic xlink:href="pone.0078318.g002"></graphic>
</fig>
<p>Thus, the weight adaptation is triggered by the error between the desired and the actual output spikes, with positive errors causing long-term potentiation and negative errors causing long-term depression. No synaptic change occurs if the actual output spike fires at the desired time. The amount of synaptic change is determined by the current $I_{PSC}^{i}(t)$.</p>
<p>With the PSD learning rule, each of the variables involved has its own physical meaning. Moreover, the weight adaptation depends only on the current states. This differs from rules involving STDP, where both the pre- and postsynaptic spike times must be stored and used for adaptation.</p>
<p>By integrating Eq. (10), we get:

$$\Delta w_i = \eta \int_0^{\infty} [\,y_d(t) - y_o(t)\,]\, I_{PSC}^{i}(t)\, dt \qquad (11)$$
</p>
<p>This equation can be used for trial learning, where the weight modification is performed at the end of each pattern presentation.</p>
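Since $y_d(t)$ and $y_o(t)$ are Dirac trains, the integral in Eq. (11) reduces to sampling each afferent's PSC at the desired and the actual output spike times. A sketch of this trial-wise update, reusing the hypothetical unweighted_psc helper sketched earlier:

import numpy as np

def psd_trial_update(w, afferent_spike_times, desired_times, actual_times, eta=0.01):
    """Trial-wise PSD update:
    dw_i = eta * (sum_g I_PSC_i(t_d^g) - sum_h I_PSC_i(t_o^h))."""
    dw = np.zeros_like(np.asarray(w, dtype=float))
    for i, spikes in enumerate(afferent_spike_times):   # spike-time list per afferent
        ltp = sum(unweighted_psc(t_d, spikes) for t_d in desired_times)
        ltd = sum(unweighted_psc(t_o, spikes) for t_o in actual_times)
        dw[i] = eta * (ltp - ltd)
    return w + dw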
<p>In order to measure the distance between two spike trains, we use the van Rossum metric
<xref ref-type="bibr" rid="pone.0078318-Rossum1">[38]</xref>
but with a different filter function as described in Eq. (4). This filter is used to compensate for the discontinuity of the original filter function. The distance can be written as:
<disp-formula id="pone.0078318.e075">
<graphic xlink:href="pone.0078318.e075"></graphic>
<label>(12)</label>
</disp-formula>
where
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e076.jpg"></inline-graphic>
</inline-formula>
is a free parameter (we set
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e077.jpg"></inline-graphic>
</inline-formula>
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e078.jpg"></inline-graphic>
</inline-formula>
here),
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e079.jpg"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e080.jpg"></inline-graphic>
</inline-formula>
are filtered signals of the two spike trains that are considered for distance measurement. More details can be found in
<xref ref-type="bibr" rid="pone.0078318-Rossum1">[38]</xref>
.</p>
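<p>For illustration, a van Rossum-style distance can be computed by filtering both spike trains and integrating the squared difference of the filtered signals. The sketch below uses a plain exponential filter on a discrete time grid; the filter of Eq. (4) used in this work differs, so this is an approximation under stated assumptions:</p>
<preformat>
import numpy as np

def van_rossum_distance(train_a, train_b, tau=5.0, t_max=200.0, dt=0.1):
    """Approximate van Rossum distance between two spike-time arrays (ms)."""
    t = np.arange(0.0, t_max, dt)

    def filtered(train):
        # Convolve the train with an exponential kernel exp(-t/tau).
        f = np.zeros_like(t)
        for s in train:
            mask = t >= s
            f[mask] += np.exp(-(t[mask] - s) / tau)
        return f

    diff = filtered(train_a) - filtered(train_b)
    # Integrate the squared difference numerically and normalize by tau.
    return np.sqrt(np.sum(diff ** 2) * dt / tau)
</preformat>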
<p>Notably, this distance parameter
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e081.jpg"></inline-graphic>
</inline-formula>
is not involved in the PSD learning rule itself; it is used only to measure and analyze the learning performance, as it reflects the dissimilarity between the desired and the actual spike trains. In the following experiments, different values of
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e082.jpg"></inline-graphic>
</inline-formula>
are used for analysis depending on the problems. For single-spike and multi-spike target trains, we set
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e083.jpg"></inline-graphic>
</inline-formula>
to be 0.2 and 0.5, respectively, corresponding to an average time difference of around
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e084.jpg"></inline-graphic>
</inline-formula>
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e085.jpg"></inline-graphic>
</inline-formula>
for each pair of the actual and desired spikes. Smaller
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e086.jpg"></inline-graphic>
</inline-formula>
can be used if exact association is the main focus, e.g.,
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e087.jpg"></inline-graphic>
</inline-formula>
corresponds to a time difference of about
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e088.jpg"></inline-graphic>
</inline-formula>
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e089.jpg"></inline-graphic>
</inline-formula>
, where no obvious dissimilarity can be seen between the two spike trains.</p>
</sec>
</sec>
<sec id="s3">
<title>Results</title>
<p>In this section, several experiments are presented to demonstrate the characteristics of the PSD rule. The basic concepts of the PSD rule are first examined by demonstrating its ability to associate a spatiotemporal spike pattern with a target spike train. We then show that the PSD rule has desirable properties, such as generality across different neuron models, robustness against noise, and a large learning capacity. The effects of the parameters on the learning are also investigated. The application of the proposed algorithm to the classification of spike patterns is then shown, with a final experiment demonstrating its performance on a practical OCR task.</p>
<sec id="s3a">
<title>Association of Single-Spike and Multi-Spike Patterns</title>
<p>This experiment is devised to demonstrate the ability of the proposed PSD rule to learn a spatiotemporal spike pattern. The neuron is trained to reproduce spikes at the same times as those of a target train.</p>
<sec id="s3a1">
<title>Experiment setup</title>
<p>The neuron is connected with
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e090.jpg"></inline-graphic>
</inline-formula>
afferent neurons, each of which fires a single spike within the time interval of
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e091.jpg"></inline-graphic>
</inline-formula>
. Each spike is randomly generated with a uniform distribution. We set
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e092.jpg"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e093.jpg"></inline-graphic>
</inline-formula>
here. To avoid a single synapse dominating the firing of the neuron, we limit the weight below
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e094.jpg"></inline-graphic>
</inline-formula>
. The initial synaptic weights are drawn randomly from a normal distribution with a mean of
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e095.jpg"></inline-graphic>
</inline-formula>
and a standard deviation of
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e096.jpg"></inline-graphic>
</inline-formula>
. For the learning parameters, we set
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e097.jpg"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e098.jpg"></inline-graphic>
</inline-formula>
. The target spike train can be randomly generated, but for simplicity, we specify it as
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e099.jpg"></inline-graphic>
</inline-formula>
. In this way, the spikes are evenly distributed over the whole interval
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e100.jpg"></inline-graphic>
</inline-formula>
.</p>
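<p>The setup above can be reproduced with a few lines of Python; the constants below (number of afferents, time window and weight statistics) are placeholders standing in for the exact values given in the text, which are rendered as images in this copy:</p>
<preformat>
import numpy as np

rng = np.random.default_rng(0)

num_afferents = 500                      # placeholder for the value in the text
t_window = 200.0                         # time window in ms (placeholder)
w_mean, w_std, w_max = 0.2, 0.05, 0.5    # placeholder weight statistics

# Each afferent fires a single spike, uniformly distributed in the window.
pattern = rng.uniform(0.0, t_window, size=num_afferents)

# Initial weights: normally distributed, capped so that no single synapse
# can dominate the firing of the neuron.
weights = np.minimum(rng.normal(w_mean, w_std, size=num_afferents), w_max)
</preformat>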
</sec>
<sec id="s3a2">
<title>Learning process</title>
<p>
<xref ref-type="fig" rid="pone-0078318-g003">Fig. 3</xref>
illustrates a typical run of the learning. Initially, the neuron fires at arbitrary times and with a firing rate different from that of the target train, so the actual output spike train is quite different from the target train and the distance value is large. During the learning process, the neuron gradually learns to produce spikes at the target times, which is also reflected by the decreasing distance. After the first 10 epochs of learning, both the firing rate and the firing times of the output spikes match those in the target spike train. The dynamics of the neuron's membrane potential are also shown in
<xref ref-type="fig" rid="pone-0078318-g003">Fig. 3</xref>
. Whenever the membrane potential exceeds the threshold, a spike is emitted and the potential is kept at the reset level for a refractory period. The detailed mathematical description governing this behaviour was presented previously in the section on the Spiking Neuron Model.</p>
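<p>The threshold-and-reset behaviour described above corresponds to a standard LIF integration loop; a minimal sketch follows, with placeholder constants and a hypothetical synaptic_current helper (the actual model equations appear in the Methods):</p>
<preformat>
# Minimal LIF threshold/reset loop (placeholder constants throughout).
v, v_rest, v_thresh = 0.0, 0.0, 1.0
tau_m, dt, refrac, refrac_left = 10.0, 0.1, 2.0, 0.0
output_spikes = []

for step in range(num_steps):
    if refrac_left > 0.0:
        refrac_left -= dt            # potential held at reset while refractory
        continue
    i_syn = synaptic_current(step)   # hypothetical helper: summed input PSCs
    v += dt * (i_syn - (v - v_rest)) / tau_m
    if v >= v_thresh:                # threshold crossed: emit a spike, reset
        output_spikes.append(step * dt)
        v = v_rest
        refrac_left = refrac
</preformat>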
<fig id="pone-0078318-g003" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0078318.g003</object-id>
<label>Figure 3</label>
<caption>
<title>Illustration of the temporal sequence learning of a typical run.</title>
<p>The neuron is connected with
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e101.jpg"></inline-graphic>
</inline-formula>
synapses, and is trained to reproduce spikes at the target time (denoted as light blue bars in the middle). The bottom and top show the dynamics of the neuron's potential before and after learning, respectively. The dashed red lines denote the firing threshold. In the middle, each spike is denoted as a dot. The right figure shows the distance between the actual output spike train and the target spike train.</p>
</caption>
<graphic xlink:href="pone.0078318.g003"></graphic>
</fig>
<p>This experiment shows that the PSD rule can train the neuron to reproduce a desired spike train. After several learning epochs, the neuron successfully spikes at the target times. In other words, the proposed rule is able to train the neuron to associate the input spatiotemporal pattern with a desired output spike train within several training epochs. The information of the input pattern is thus stored in a specified spike train.</p>
</sec>
<sec id="s3a3">
<title>Causal weight distribution</title>
<p>We further examine how the PSD rule drives the synaptic weights and the evolution of the distance between the actual and the target spike trains. To guarantee statistical significance, the task described in
<xref ref-type="fig" rid="pone-0078318-g003">Fig. 3</xref>
is repeated 100 times; each repetition is referred to as one run. At the start of each run, different random initial weights are used for training. As can be seen from
<xref ref-type="fig" rid="pone-0078318-g004">Fig. 4</xref>
, the initial weights are normally distributed around
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e102.jpg"></inline-graphic>
</inline-formula>
, which reflects the fact that there are no significant differences among the input synapses. This initial distribution of weights is expected from the experimental setup. After learning, a causal connectivity is established. According to the learning rule, synapses that fire temporally close to the target spike times are potentiated, while synapses that lead to undesired output spikes are depressed. This temporal causality is clearly reflected in the distribution of weights after learning (
<xref ref-type="fig" rid="pone-0078318-g004">Fig. 4</xref>
). Among the causal synapses, those whose spike times are closer to the desired times generally have relatively higher synaptic strengths, whereas synapses firing far from the desired times have weaker causal effects. Additionally, the evolution of the distance over learning shows that the PSD rule successfully trains the neuron to reproduce the desired spikes in around ten epochs. These results also validate the efficiency of the PSD learning rule in accomplishing this single association task.</p>
<fig id="pone-0078318-g004" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0078318.g004</object-id>
<label>Figure 4</label>
<caption>
<title>Effect of the learning on synaptic weights and the evolution of distance along the learning process.</title>
<p>The top and the middle show the averaged weights before and after learning, respectively. The height of each bar in the figure reflects the corresponding synaptic strength. All the afferent neurons are chronologically sorted according to their spike times. The target spikes are overlaid on the weights figure according to their times, and are denoted as red lines. The bottom shows the averaged distance between the actual spike train and the desired spike train along the learning process. All the data are averaged over 100 runs.</p>
</caption>
<graphic xlink:href="pone.0078318.g004"></graphic>
</fig>
</sec>
<sec id="s3a4">
<title>Adaptive learning performance</title>
<p>At the beginning, the neuron is trained to learn a target train as in the previous tasks. After this learning succeeds, the target spike train is changed to another arbitrarily generated train whose precise spike times and firing rate differ from those of the previous target. We find that the PSD learning rule successfully retrains the neuron to the new target within several epochs. As shown in
<xref ref-type="fig" rid="pone-0078318-g005">Fig. 5</xref>
, the neuron gradually adapts its firing from the old target to the new one during learning.</p>
<fig id="pone-0078318-g005" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0078318.g005</object-id>
<label>Figure 5</label>
<caption>
<title>Illustration of the adaptive learning of the changed target trains.</title>
<p>Each dot denotes a spike. At the beginning, the neuron is trained to learn one target (denoted by the light blue bars). After 25 epochs of learning (the dashed red line), the target is changed to another randomly generated train (denoted by the green bars). The right figure shows the distance between the actual output spike train and the target spike train along the learning process.</p>
</caption>
<graphic xlink:href="pone.0078318.g005"></graphic>
</fig>
</sec>
<sec id="s3a5">
<title>Learning multiple spikes</title>
<p>In the scenario considered above, all afferent neurons are assumed to fire only once during the entire time window. The applicability of the PSD rule is not limited to this single-spike code. We further illustrate the case where each synaptic input transmits multiple spikes during the time window. We again use the same setup as above, but each synaptic input is now generated by a homogeneous Poisson process with a random rate ranging from
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e103.jpg"></inline-graphic>
</inline-formula>
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e104.jpg"></inline-graphic>
</inline-formula>
. Multiple spikes increase the difficulty of the learning since these spikes interfere with the local learning processes
<xref ref-type="bibr" rid="pone.0078318-Ponulak2">[28]</xref>
. As shown in
<xref ref-type="fig" rid="pone-0078318-g006">Fig. 6</xref>
, the learning, although slower, is again successful. The interference among local learning processes results in fluctuations of the output spikes around the target times. In subsequent learning epochs, the neuron gradually converges to spiking at the target times. This experiment demonstrates that the PSD rule deals with multiple spikes quite well. Compared with multiple spikes, the single-spike code is simpler for analysis and more efficient for computation. Thus, for simplicity, we use the single-spike code in the following experiments, where each afferent neuron fires only once during the time window.</p>
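<p>Homogeneous Poisson inputs of the kind used here can be generated by drawing exponential inter-spike intervals; a sketch, with the rate range and time window as placeholders for the image-rendered values:</p>
<preformat>
import numpy as np

def poisson_train(rate_hz, t_window_ms, rng):
    """One homogeneous Poisson spike train over [0, t_window_ms)."""
    times, t = [], 0.0
    while True:
        t += rng.exponential(1000.0 / rate_hz)   # inter-spike interval in ms
        if t >= t_window_ms:
            return np.array(times)
        times.append(t)

rng = np.random.default_rng(1)
# One train per afferent, each with its own random rate (placeholder range).
inputs = [poisson_train(rng.uniform(2.0, 10.0), 200.0, rng)
          for _ in range(1000)]
</preformat>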
<fig id="pone-0078318-g006" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0078318.g006</object-id>
<label>Figure 6</label>
<caption>
<title>Illustration of a typical run for learning multi-spike pattern.</title>
<p>Each dot denotes a spike. The top left shows the input spikes from the first 50 afferent neurons out of 1000. Each synaptic input is generated by a homogeneous Poisson process with a random rate from
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e105.jpg"></inline-graphic>
</inline-formula>
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e106.jpg"></inline-graphic>
</inline-formula>
. The bottom left shows the neuron's output spikes. The right column shows the distance between the actual output spike train and the target spike train along learning.</p>
</caption>
<graphic xlink:href="pone.0078318.g006"></graphic>
</fig>
<p>These experiments clearly demonstrate that the PSD rule is capable of training the neuron to fire at the desired times, and that a causal connectivity is established after learning with this rule. In the following sections, more challenging learning scenarios are considered to further investigate the properties of the PSD rule.</p>
</sec>
</sec>
<sec id="s3b">
<title>Generality to Different Neuron Models</title>
<p>We carry out this experiment to demonstrate that the PSD learning rule is independent of the neuron model. Here, we compare the association learning results only for the LIF and IM neuron models described previously. For a fair comparison, both neurons are connected to the same afferent neurons and are trained to reproduce the same target spike train. The setup for generating the input spatiotemporal patterns is the same as in the experiment of
<xref ref-type="fig" rid="pone-0078318-g005">Fig. 5</xref>
. The connection setup is illustrated in
<xref ref-type="fig" rid="pone-0078318-g007">Fig. 7</xref>
. Except for the neuron dynamics described in Eq. (1) and Eq. (5) respectively, all the other parameters are the same for the two neurons.</p>
<fig id="pone-0078318-g007" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0078318.g007</object-id>
<label>Figure 7</label>
<caption>
<title>Learning with different spiking neuron models.</title>
<p>The LIF and IM neuron models are considered. The left panel shows the connection setup of the experiment. Both neurons are connected to the same
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e107.jpg"></inline-graphic>
</inline-formula>
afferent neurons, and are trained to reproduce target spikes (denoted by the yellow parts). The right panel shows the dynamics of neurons' potential before and after learning. The dashed red lines denote the firing threshold.</p>
</caption>
<graphic xlink:href="pone.0078318.g007"></graphic>
</fig>
<p>The dynamic difference between the two types of spiking neuron models is clearly demonstrated in
<xref ref-type="fig" rid="pone-0078318-g007">Fig. 7</xref>
. Although the neuron models are different, both neurons can be trained to successfully reproduce the target spike train with the proposed PSD learning rule. The two neurons fire at arbitrary times before learning, whereas after learning they fire spikes at the desired times.</p>
<p>In the PSD rule, synaptic adaptation is triggered by both the desired spikes and the actual output spikes. The amount of updating depends on the presynaptic spikes firing before the triggering spikes. That is to say, the weight adaptation of our rule is based only on the correlation between spike times. This suggests that the PSD rule can generalize to various neuron models, a capability similar to that of the ReSuMe rule
<xref ref-type="bibr" rid="pone.0078318-Ponulak2">[28]</xref>
.</p>
</sec>
<sec id="s3c">
<title>Robustness to Noise</title>
<p>In previous experiments, we only considered the simple case where the neuron is trained to learn a single pattern under noise-free conditions. However, the reliability of the neuron response can be significantly affected by noise. In this experiment, two noise sources are considered: stimulus noise and background noise.</p>
<sec id="s3c1">
<title>Experiment setup</title>
<p>In this experiment, a single LIF neuron with
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e108.jpg"></inline-graphic>
</inline-formula>
afferent neurons is tested. Initially, a set of 10 spike patterns is randomly generated as in previous experiments; these 10 spike patterns are fixed as the templates. The neuron is trained for 400 epochs to associate all patterns in the training set with a desired spike train (the same train as used before). Two training scenarios are considered in this experiment: deterministic training (under the noise-free condition) and noisy training. In the testing phase, a total of 200 noisy patterns are used, with each template used to construct 20 testing patterns. An association is deemed correct if the distance between the output spike train and the desired spike train is lower than a specified level (0.5 is used here).</p>
</sec>
<sec id="s3c2">
<title>Input jittering noise</title>
<p>In the case of input jittering noise, a Gaussian jitter with a standard deviation (
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e109.jpg"></inline-graphic>
</inline-formula>
) is added to each input spike to generate the noisy patterns. The strength of the jitter is controlled by the standard deviation of the Gaussian. The top row in
<xref ref-type="fig" rid="pone-0078318-g008">Fig. 8</xref>
shows the learning performance. In the deterministic training, the neuron is trained purely with the initial templates. In the noisy training, a noise level of
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e110.jpg"></inline-graphic>
</inline-formula>
is used. Different levels of noise are used in the testing phase to evaluate the generalization ability. With deterministic training, the output stabilizes quickly and converges exactly to the desired spike train within tens of learning epochs. However, the generalization accuracy decreases quickly with increasing jitter strength. With noisy training, although the training error does not reach zero, better generalization is obtained: the neuron can reproduce the desired spike train with relatively high accuracy when the noise strength is no higher than that used in training. In conclusion, the neuron is less sensitive to noise when noisy training is performed.</p>
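<p>For illustration, jittered testing patterns can be produced by adding Gaussian noise to each template spike time; in the sketch below, the jitter level, time window and the templates list are placeholder assumptions:</p>
<preformat>
import numpy as np

def jitter_pattern(template, sigma_ms, t_window_ms, rng):
    """Add Gaussian jitter to every spike of a single-spike template."""
    noisy = template + rng.normal(0.0, sigma_ms, size=template.shape)
    return np.clip(noisy, 0.0, t_window_ms)

rng = np.random.default_rng(2)
# templates: assumed list of spike-time arrays (the 10 fixed templates).
testing = [jitter_pattern(tmpl, sigma_ms=2.0, t_window_ms=200.0, rng=rng)
           for tmpl in templates
           for _ in range(20)]   # 20 testing patterns per template
</preformat>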
<fig id="pone-0078318-g008" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0078318.g008</object-id>
<label>Figure 8</label>
<caption>
<title>Robustness of the learning rule against jittering noise of input stimuli and background noise.</title>
<p>The top row presents the case where the noise comes from the input spike jitters. The bottom row presents the case of background noise. The neuron is trained under noise-free conditions (denoted as deterministic training), or is trained under noisy conditions (denoted as noisy training). In the training phase (left two columns), the neuron is trained for 400 epochs. Along the training process, the average distance between the actual output spike train and the desired spike train is shown. The standard deviation is denoted by the shaded area. In the testing phase (right column), the generalization accuracies of the trained neuron on different levels of noise patterns are presented. Both the average value and the standard deviation are shown. All the data are averaged over 100 runs.</p>
</caption>
<graphic xlink:href="pone.0078318.g008"></graphic>
</fig>
</sec>
<sec id="s3c3">
<title>Background current noise</title>
<p>In this case, the background current noise (
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e111.jpg"></inline-graphic>
</inline-formula>
) is considered as the noise source. The mean value of
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e112.jpg"></inline-graphic>
</inline-formula>
is assumed to be zero, and the strength of the noise is determined by its variance (
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e113.jpg"></inline-graphic>
</inline-formula>
). A strength of
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e114.jpg"></inline-graphic>
</inline-formula>
noise is used in the noisy training. We report the results in the bottom row of
<xref ref-type="fig" rid="pone-0078318-g008">Fig. 8</xref>
. Results similar to those of the first case are obtained. Although the output can quickly converge to zero error under deterministic training, the generalization performance is quite sensitive to the noise, and the association accuracy drops quickly as the noise strength increases. When the neuron is trained with noisy patterns, it becomes less sensitive to the noise. A relatively high accuracy can be obtained with a noise level up to
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e115.jpg"></inline-graphic>
</inline-formula>
.</p>
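<p>In simulation terms, this noise source simply adds a zero-mean Gaussian term to the input current at each integration step of the LIF sketch shown earlier; the noise strength below is a placeholder:</p>
<preformat>
# Zero-mean Gaussian background current added at each integration step
# (sigma_noise is a placeholder; rng, i_syn, v, dt, tau_m as defined earlier).
sigma_noise = 0.1
i_total = i_syn + rng.normal(0.0, sigma_noise)
v += dt * (i_total - (v - v_rest)) / tau_m
</preformat>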
<p>This experiment shows that a neuron trained under noise-free conditions is significantly affected by noise. Such an influence of noise on the timing accuracy and reliability of the neuron response has been considered in many studies
<xref ref-type="bibr" rid="pone.0078318-Gtig1">[15]</xref>
,
<xref ref-type="bibr" rid="pone.0078318-Florian1">[24]</xref>
,
<xref ref-type="bibr" rid="pone.0078318-Mohemmed1">[25]</xref>
,
<xref ref-type="bibr" rid="pone.0078318-Hu1">[27]</xref>
,
<xref ref-type="bibr" rid="pone.0078318-Ponulak2">[28]</xref>
,
<xref ref-type="bibr" rid="pone.0078318-Rieke1">[39]</xref>
. Under noisy training, the trained neuron demonstrates high robustness against noise: noisy training enables the neuron to reproduce the desired spikes more reliably and precisely.</p>
</sec>
</sec>
<sec id="s3d">
<title>Learning Capacity</title>
<p>As used for the perceptron
<xref ref-type="bibr" rid="pone.0078318-Gardner1">[40]</xref>
and tempotron
<xref ref-type="bibr" rid="pone.0078318-Gtig1">[15]</xref>
,
<xref ref-type="bibr" rid="pone.0078318-Yu1">[26]</xref>
learning rules, the ratio of the number of random patterns (
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e116.jpg"></inline-graphic>
</inline-formula>
) that a neuron can correctly classify over the number of its synapses (
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e117.jpg"></inline-graphic>
</inline-formula>
),
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e118.jpg"></inline-graphic>
</inline-formula>
, is used to measure the memory load. An important characteristic of a neuron's capacity is the maximum load that it can learn. In this experiment, the memory capacity of the PSD rule is investigated.</p>
<sec id="s3d1">
<title>Experiment setup</title>
<p>We devise an experiment that has a similar setup to that in
<xref ref-type="bibr" rid="pone.0078318-Mohemmed1">[25]</xref>
. A number of
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e119.jpg"></inline-graphic>
</inline-formula>
patterns are randomly generated using the same process as in previous experiments, where each pattern contains
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e120.jpg"></inline-graphic>
</inline-formula>
spike trains and each train has a single spike. The patterns are randomly and evenly assigned to
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e121.jpg"></inline-graphic>
</inline-formula>
different categories. Here we choose
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e122.jpg"></inline-graphic>
</inline-formula>
for this experiment. A single LIF neuron is trained to memorize all patterns correctly within a maximum of 500 training epochs. The neuron is trained to emit a single spike at a specified time for patterns from each category. The desired spikes for the 4 generated categories are set to the times of
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e123.jpg"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e124.jpg"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e125.jpg"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e126.jpg"></inline-graphic>
</inline-formula>
, respectively. A pattern is considered to have been correctly memorized by the neuron if the distance between the actual spike train and the desired train is below 0.2. The learning process is considered a failure if the number of training epochs reaches this maximum.</p>
</sec>
<sec id="s3d2">
<title>Maximum load factor</title>
<p>
<xref ref-type="fig" rid="pone-0078318-g009">Fig. 9</xref>
shows the results of the experiment for the case of 500, 750 and 1000 afferent neurons, respectively. All the data are averaged over 100 runs. In each run, different initial weights are used. As seen from
<xref ref-type="fig" rid="pone-0078318-g009">Fig. 9</xref>
, the number of epochs required for training increases slightly with the number of patterns when the load is not too high, but the number of learning epochs increases sharply beyond a certain load. This suggests that the task becomes tougher with increasing load. It is also noted that a larger number of synapses gives the same neuron a larger memory capacity. The maximum load factors for 500, 750 and 1000 synapses are 0.144, 0.133 and 0.124, respectively.</p>
<fig id="pone-0078318-g009" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0078318.g009</object-id>
<label>Figure 9</label>
<caption>
<title>The memory capacity of the PSD rule with different numbers of synapses.</title>
<p>The neuron is trained to memorize all patterns correctly within a maximum of 500 epochs. Runs that reach 500 epochs are regarded as learning failures. The cases of 500, 750 and 1000 synapses are denoted by the blue, red and green parts, respectively. The marked lines denote the average learning epochs and the shaded areas show the standard deviation. The dashed line at 100 epochs is used for evaluating the efficient load
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e127.jpg"></inline-graphic>
</inline-formula>
described in the main text. All the data are averaged over 100 runs.</p>
</caption>
<graphic xlink:href="pone.0078318.g009"></graphic>
</fig>
</sec>
<sec id="s3d3">
<title>Efficient load factor</title>
<p>Besides the maximum load factor, we heuristically define another factor, the efficient load
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e128.jpg"></inline-graphic>
</inline-formula>
. As described above, the neuron can perform the task efficiently with a relatively high load when the number of patterns does not exceed a certain value (
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e129.jpg"></inline-graphic>
</inline-formula>
). The efficient load is denoted as
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e130.jpg"></inline-graphic>
</inline-formula>
. When the load is below
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e131.jpg"></inline-graphic>
</inline-formula>
, the neuron can reliably memorize all patterns with a small number of training epochs. There are different ways to define
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e132.jpg"></inline-graphic>
</inline-formula>
. We show two possible ways. One is to derive the definition from a mathematical calculation such as
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e133.jpg"></inline-graphic>
</inline-formula>
, where
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e134.jpg"></inline-graphic>
</inline-formula>
is a specified value (for example
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e135.jpg"></inline-graphic>
</inline-formula>
). A simpler method is to fix a specified number of training epochs; the corresponding number of patterns that can be correctly learnt is then taken as
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e136.jpg"></inline-graphic>
</inline-formula>
. For simplicity, we use the latter as an example for demonstration and the specified number of epochs is set to 100. As seen from
<xref ref-type="fig" rid="pone-0078318-g009">Fig. 9</xref>
, the efficient load factors for 500, 750 and 1000 synapses are 0.112, 0.109 and 0.108, respectively. Surprisingly, these efficient load factors all lie around a stable value that changes only slightly across different numbers of synapses. This near-constant efficient load factor for different values of
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e137.jpg"></inline-graphic>
</inline-formula>
indicates that the number of patterns that a neuron can efficiently memorize grows linearly with the number of afferent synapses. It is worth noting that the concept of efficient load factor
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e138.jpg"></inline-graphic>
</inline-formula>
provides an important guideline for choosing the pattern load when reliable and efficient training is required.</p>
</sec>
</sec>
<sec id="s3e">
<title>Effects of Learning Parameters</title>
<p>Two of the major parameters involved in the PSD learning rule are the learning rate
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e139.jpg"></inline-graphic>
</inline-formula>
and the decay constant
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e140.jpg"></inline-graphic>
</inline-formula>
. In this section, we aim to investigate the effects of these parameters on the learning process.</p>
<sec id="s3e1">
<title>Small
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e141.jpg"></inline-graphic>
</inline-formula>
results in strong causal weight distribution</title>
<p>As a decay constant,
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e142.jpg"></inline-graphic>
</inline-formula>
is an important parameter of the postsynaptic current. It determines how long a presynaptic spike retains a causal effect on the postsynaptic neuron. During synaptic adaptation,
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e143.jpg"></inline-graphic>
</inline-formula>
also determines the magnitude of modification on the synaptic weights at the time of a triggering spike. Thus,
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e144.jpg"></inline-graphic>
</inline-formula>
will affect the distribution of weights after training. To examine this effect, we conduct an experiment with a setup similar to that in
<xref ref-type="fig" rid="pone-0078318-g004">Fig. 4</xref>
but with different values of
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e145.jpg"></inline-graphic>
</inline-formula>
. Here we choose
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e146.jpg"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e147.jpg"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e148.jpg"></inline-graphic>
</inline-formula>
. As can be seen from
<xref ref-type="fig" rid="pone-0078318-g010">Fig. 10</xref>
, a smaller
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e149.jpg"></inline-graphic>
</inline-formula>
(
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e150.jpg"></inline-graphic>
</inline-formula>
) can result in a very uneven distribution, with only a few synapses receiving relatively high weights. A flatter distribution is obtained with an increasing
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e151.jpg"></inline-graphic>
</inline-formula>
. This is because
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e152.jpg"></inline-graphic>
</inline-formula>
determines how long the causal effect of an afferent spike is sustained. A smaller
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e153.jpg"></inline-graphic>
</inline-formula>
means that only the nearer temporal neighbors are involved in generating the desired spikes, resulting in a smaller number of causal synapses; with fewer causal synapses, higher synaptic strengths are required to generate spikes at the desired times. On the other hand, with a larger
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e154.jpg"></inline-graphic>
</inline-formula>
, a wider range of causal neighbors can contribute to generating the desired spikes, and therefore lower synaptic strengths are sufficient. The resulting synaptic strengths and distributions for different values of
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e155.jpg"></inline-graphic>
</inline-formula>
are shown in
<xref ref-type="fig" rid="pone-0078318-g010">Fig. 10</xref>
.</p>
<fig id="pone-0078318-g010" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0078318.g010</object-id>
<label>Figure 10</label>
<caption>
<title>Effect of decay constant
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e156.jpg"></inline-graphic>
</inline-formula>
on the distribution of weights.</title>
<p>The averaged weights after learning are shown. The height of each bar reflects the synaptic strength. The afferent neurons are chronologically sorted according to their spike times. The target spikes are overlaid and denoted as red lines. Cases of
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e157.jpg"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e158.jpg"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e159.jpg"></inline-graphic>
</inline-formula>
are depicted. All the data are averaged over 100 runs.</p>
</caption>
<graphic xlink:href="pone.0078318.g010"></graphic>
</fig>
</sec>
<sec id="s3e2">
<title>Effects of both
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e160.jpg"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e161.jpg"></inline-graphic>
</inline-formula>
on the learning</title>
<p>We further conduct another experiment to evaluate the effects of both
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e162.jpg"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e163.jpg"></inline-graphic>
</inline-formula>
on the learning. In this experiment, a single LIF neuron with
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e164.jpg"></inline-graphic>
</inline-formula>
afferent neurons is considered. The neuron is trained to correctly memorize a set of 10 spike patterns randomly generated over a time window of
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e165.jpg"></inline-graphic>
</inline-formula>
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e166.jpg"></inline-graphic>
</inline-formula>
. The neuron is trained for a maximum of 500 epochs to correctly associate all these patterns with a desired spike train of [
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e167.jpg"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e168.jpg"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e169.jpg"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e170.jpg"></inline-graphic>
</inline-formula>
]
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e171.jpg"></inline-graphic>
</inline-formula>
. A pattern is deemed correctly memorized if the distance between the output spike train and the desired spike train is below
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e172.jpg"></inline-graphic>
</inline-formula>
. If the number of training epochs exceeds 500, we regard it as a failure. We conduct an exhaustive search over a wide range of
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e173.jpg"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e174.jpg"></inline-graphic>
</inline-formula>
.
<xref ref-type="fig" rid="pone-0078318-g011">Fig. 11</xref>
shows how
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e175.jpg"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e176.jpg"></inline-graphic>
</inline-formula>
jointly affect the learning performance, which can be used as guidance for selecting the learning parameters. With a fixed
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e177.jpg"></inline-graphic>
</inline-formula>
, a larger
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e178.jpg"></inline-graphic>
</inline-formula>
results in a faster learning speed (shown in
<xref ref-type="fig" rid="pone-0078318-g011">Fig. 11, right panel</xref>
), but when
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e179.jpg"></inline-graphic>
</inline-formula>
is increased above a critical value (e.g., 0.1 for
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e180.jpg"></inline-graphic>
</inline-formula>
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e181.jpg"></inline-graphic>
</inline-formula>
in our experiments), the learning will slow down or even fail. For small
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e182.jpg"></inline-graphic>
</inline-formula>
, a larger
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e183.jpg"></inline-graphic>
</inline-formula>
leads to faster learning; however, for large
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e184.jpg"></inline-graphic>
</inline-formula>
, a larger
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e185.jpg"></inline-graphic>
</inline-formula>
has the opposite effect. As a consequence, when
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e186.jpg"></inline-graphic>
</inline-formula>
is set in a suitable range (e.g., [5, 15]
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e187.jpg"></inline-graphic>
</inline-formula>
), a wide range of
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e188.jpg"></inline-graphic>
</inline-formula>
can result in a fast learning speed (e.g., below 100 epochs).</p>
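<p>The exhaustive search described above amounts to a simple grid scan; a sketch follows, where the grids are placeholders and train_until_memorized is a hypothetical helper that runs PSD training on the 10-pattern set and returns the number of epochs used (with 500 counting as failure):</p>
<preformat>
import numpy as np

lrs = np.logspace(-3, 0, 10)        # candidate learning rates (placeholder)
taus = np.linspace(1.0, 20.0, 10)   # candidate decay constants in ms
epochs = np.zeros((len(lrs), len(taus)))

for i, lr in enumerate(lrs):
    for j, tau_s in enumerate(taus):
        # train_until_memorized: hypothetical helper that runs PSD training
        # on the 10-pattern set and returns the epochs used (500 = failure).
        runs = [train_until_memorized(lr, tau_s, seed=r) for r in range(30)]
        epochs[i, j] = np.mean(runs)
</preformat>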
<fig id="pone-0078318-g011" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0078318.g011</object-id>
<label>Figure 11</label>
<caption>
<title>Effects of
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e189.jpg"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e190.jpg"></inline-graphic>
</inline-formula>
on the learning.</title>
<p>The neuron is trained for a maximum of 500 epochs to correctly memorize a set of 10 spike patterns. The average learning epochs are recorded for each pair of
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e191.jpg"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e192.jpg"></inline-graphic>
</inline-formula>
. Runs that reach 500 epochs are regarded as learning failures. The left shows an exhaustive investigation of a wide range of
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e193.jpg"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e194.jpg"></inline-graphic>
</inline-formula>
, and the data are averaged over 30 runs. A small number of learning parameters are examined in the right figure, and the data are averaged over 100 runs.</p>
</caption>
<graphic xlink:href="pone.0078318.g011"></graphic>
</fig>
</sec>
</sec>
<sec id="s3f">
<title>Classification of Spatiotemporal Patterns</title>
<p>In this experiment, the ability of the proposed PSD rule to classify spatiotemporal patterns is investigated using a multi-category classification task. The setup of this experiment is similar to that in
<xref ref-type="bibr" rid="pone.0078318-Mohemmed1">[25]</xref>
. Three random spike patterns representing three categories are generated in a similar fashion to that in the previous experiments, and they are fixed as the templates. A Gaussian jitter with a standard deviation of
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e195.jpg"></inline-graphic>
</inline-formula>
is used to generate training and testing patterns. The training set and the testing set contain
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e196.jpg"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e197.jpg"></inline-graphic>
</inline-formula>
samples, respectively. Three neurons are trained to classify these three categories, with each neuron representing one category. The neurons for the different categories could be assigned different target spike trains; however, for simplicity, all the neurons in this experiment are trained to fire the same spike train (
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e198.jpg"></inline-graphic>
</inline-formula>
). The experiment is repeated 100 times, with each run having different initial conditions.</p>
<p>After training, classification is performed on both the training and the testing set. In the classification task, we propose two decision-making criteria: absolute confidence and relative confidence. With the absolute confidence criterion, the input pattern is regarded as correctly classified only if the distance between the desired spike train and the actual output spike train of the corresponding neuron is smaller than a specified value (0.5 is used here). With the relative confidence criterion, a competitive scheme is used: the incoming pattern is labeled by the winning neuron, i.e., the one whose output spike train is closest to its desired spike train.</p>
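<p>The two criteria can be expressed compactly in terms of spike-train distances, reusing the distance sketch given earlier (the threshold and function names are illustrative):</p>
<preformat>
import numpy as np

def absolute_confidence_correct(output, desired, threshold=0.5):
    """Absolute confidence: correct only if the category neuron's output
    train lies within `threshold` of its desired train."""
    return bool(threshold > van_rossum_distance(output, desired))

def relative_confidence_label(outputs, desired_trains):
    """Relative confidence: winner-take-all; the pattern is labeled by the
    neuron whose output is closest to its own desired train."""
    dists = [van_rossum_distance(o, d)
             for o, d in zip(outputs, desired_trains)]
    return int(np.argmin(dists))
</preformat>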
<p>
<xref ref-type="fig" rid="pone-0078318-g012">Fig. 12</xref>
shows the average classification accuracy for each category under the two proposed decision criteria. From the absolute confidence criterion, we see that the neuron successfully classifies the training set with an average accuracy of
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e199.jpg"></inline-graphic>
</inline-formula>
. The average accuracy for the testing set is
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e200.jpg"></inline-graphic>
</inline-formula>
. Notably, under the relative confidence criterion, the average accuracies for both the training and the testing set reach
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e201.jpg"></inline-graphic>
</inline-formula>
. The performance on the classification task is therefore significantly improved by the relative confidence decision-making criterion. With the absolute confidence criterion, the trained neuron strives to find a good match with the memorized patterns, whereas with the relative confidence criterion, it attempts to find the most likely category through competition.</p>
<fig id="pone-0078318-g012" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0078318.g012</object-id>
<label>Figure 12</label>
<caption>
<title>The average accuracies for the classification of spatiotemporal patterns.</title>
<p>There are 3 categories to be classified. The average accuracies are represented by shaded bars. Two types of criteria for making decision are proposed and investigated. The left is the absolute confidence criterion, and the right is the relative confidence criterion. All the data are averaged over 100 runs.</p>
</caption>
<graphic xlink:href="pone.0078318.g012"></graphic>
</fig>
<p>For the classification of spatiotemporal patterns, the tempotron is an efficient rule
<xref ref-type="bibr" rid="pone.0078318-Gtig1">[15]</xref>
for training LIF neurons to distinguish two classes of patterns by firing a single spike or remaining quiescent. We therefore use the tempotron rule as a benchmark for the PSD rule, applying it to the same classification task as above. The classification accuracies are shown in
<xref ref-type="table" rid="pone-0078318-t001">Table 1</xref>
. As can be seen from
<xref ref-type="table" rid="pone-0078318-t001">Table 1</xref>
, our proposed rule with the relative confidence criterion has a performance comparable to that of the tempotron rule. Moreover, the PSD rule is advantageous in that it is not limited to classification: it can also memorize patterns by firing desired spikes at precise times.</p>
<table-wrap id="pone-0078318-t001" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0078318.t001</object-id>
<label>Table 1</label>
<caption>
<title>Multi-Category Classification of Spatiotemporal Patterns.</title>
</caption>
<alternatives>
<graphic id="pone-0078318-t001-1" xlink:href="pone.0078318.t001"></graphic>
<table frame="hsides" rules="groups">
<colgroup span="1">
<col align="left" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
</colgroup>
<thead>
<tr>
<td align="left" rowspan="1" colspan="1">Accuracy (%)</td>
<td colspan="2" align="left" rowspan="1">Category 1</td>
<td colspan="2" align="left" rowspan="1">Category 2</td>
<td colspan="2" align="left" rowspan="1">Category 3</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Training</td>
<td align="left" rowspan="1" colspan="1">Testing</td>
<td align="left" rowspan="1" colspan="1">Training</td>
<td align="left" rowspan="1" colspan="1">Testing</td>
<td align="left" rowspan="1" colspan="1">Training</td>
<td align="left" rowspan="1" colspan="1">Testing</td>
</tr>
</thead>
<tbody>
<tr>
<td align="left" rowspan="1" colspan="1">Absolute Confidence</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e202.jpg"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e203.jpg"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e204.jpg"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e205.jpg"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e206.jpg"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e207.jpg"></inline-graphic>
</inline-formula>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e208.jpg"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e209.jpg"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e210.jpg"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e211.jpg"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e212.jpg"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e213.jpg"></inline-graphic>
</inline-formula>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Relative Confidence</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e214.jpg"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e215.jpg"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e216.jpg"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e217.jpg"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e218.jpg"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e219.jpg"></inline-graphic>
</inline-formula>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Tempotron</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e220.jpg"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e221.jpg"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e222.jpg"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e223.jpg"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e224.jpg"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e225.jpg"></inline-graphic>
</inline-formula>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e226.jpg"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e227.jpg"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e228.jpg"></inline-graphic>
</inline-formula>
</td>
</tr>
</tbody>
</table>
</alternatives>
</table-wrap>
</sec>
<sec id="s3g">
<title>Optical Character Recognition</title>
<p>To investigate the capability of the PSD rule on a practical problem, an OCR task is considered in this experiment. Images of the digits 0-9 are used. Each image has a size of
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e229.jpg"></inline-graphic>
</inline-formula>
black/white (B/W) pixels. Additionally, reversal noise is introduced to generate noisy images: each pixel is reversed randomly with a probability denoted as the noise level.
<xref ref-type="fig" rid="pone-0078318-g013">Fig. 13</xref>
illustrates some image samples. The digits degrade gradually with increasing noise level. When the noise level is above
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e230.jpg"></inline-graphic>
</inline-formula>
, the digits are hardly recognizable.</p>
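<p>Reversal noise of this kind flips each binary pixel independently with probability equal to the noise level; a sketch, where template_image stands for one of the digit templates:</p>
<preformat>
import numpy as np

def add_reversal_noise(image, noise_level, rng):
    """Flip each B/W pixel independently with probability `noise_level`."""
    flips = rng.random(image.shape) >= (1.0 - noise_level)
    return np.where(flips, 1 - image, image)

rng = np.random.default_rng(3)
# template_image: assumed 0/1 array for one digit (400 pixels in total).
noisy = add_reversal_noise(template_image, noise_level=0.1, rng=rng)
</preformat>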
<fig id="pone-0078318-g013" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0078318.g013</object-id>
<label>Figure 13</label>
<caption>
<title>Illustration of image samples.</title>
<p>Each image has a size of
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e231.jpg"></inline-graphic>
</inline-formula>
B/W pixels. The top two rows show template images. The bottom two rows show images with noise introduced to the templates. Reversal noise is used where each pixel is randomly reversed with a probability denoted as the noise level. A range of noise level of
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e232.jpg"></inline-graphic>
</inline-formula>
is illustrated.</p>
</caption>
<graphic xlink:href="pone.0078318.g013"></graphic>
</fig>
<p>One of the major challenges of applying SNNs to practical problems is that proper encoding methods are required to produce the input data
<xref ref-type="bibr" rid="pone.0078318-Yu1">[26]</xref>
,
<xref ref-type="bibr" rid="pone.0078318-Yu2">[41]</xref>
. Encoding is the first step in utilizing spiking neurons; it aims to generate spike patterns that represent the external stimuli. However, how external information is encoded in the brain remains unclear. Many encoding mechanisms have been proposed for converting images into spikes, such as the rate code
<xref ref-type="bibr" rid="pone.0078318-Brader1">[21]</xref>
, latency code
<xref ref-type="bibr" rid="pone.0078318-Yu1">[26]</xref>
,
<xref ref-type="bibr" rid="pone.0078318-Shriki1">[42]</xref>
and phase code
<xref ref-type="bibr" rid="pone.0078318-Hu1">[27]</xref>
,
<xref ref-type="bibr" rid="pone.0078318-Nadasdy1">[43]</xref>
. The rate code is unsuitable for rules that learn precise spike patterns. A direct use of the latency code is also inappropriate. For example, if a simple latency code is used in this OCR task, the spikes in the input spatiotemporal pattern will all occur at
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e233.jpg"></inline-graphic>
</inline-formula>
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e234.jpg"></inline-graphic>
</inline-formula>
. This does not work for spatiotemporal mapping algorithms including PSD, ReSuMe
<xref ref-type="bibr" rid="pone.0078318-Ponulak1">[23]</xref>
, Chronotron
<xref ref-type="bibr" rid="pone.0078318-Florian1">[24]</xref>
and SPAN
<xref ref-type="bibr" rid="pone.0078318-Mohemmed1">[25]</xref>
. These spatiotemporal mapping algorithms cannot guarantee successful learning of an arbitrary spatiotemporal spike pattern: to trigger a desired spike, a sufficient number of input spikes around it are required, and long delays cannot be learnt effectively since the causal connection cannot be built. In real nervous systems, neurons rarely fire in such a highly synchronized manner but rather in a distributed one
<xref ref-type="bibr" rid="pone.0078318-Uzzell1">[7]</xref>
,
<xref ref-type="bibr" rid="pone.0078318-Reinagel1">[8]</xref>
,
<xref ref-type="bibr" rid="pone.0078318-Gollisch1">[44]</xref>
. Thus, proper encoding is required not only for successful association learning but also for maintaining some level of biological realism.</p>
<p>An increasing body of evidence shows that action potentials are related to the phases of the intrinsic subthreshold membrane potential oscillations
<xref ref-type="bibr" rid="pone.0078318-Llinas1">[45]</xref>
<xref ref-type="bibr" rid="pone.0078318-Jacobs1">[47]</xref>
. These observations support the hypothesis of a phase code. Following the phase code presented in
<xref ref-type="bibr" rid="pone.0078318-Hu1">[27]</xref>
,
<xref ref-type="bibr" rid="pone.0078318-Nadasdy1">[43]</xref>
, we develop a simple encoding method for this task. The mechanism of our encoding model is illustrated in
<xref ref-type="fig" rid="pone-0078318-g014">Fig. 14</xref>
. The encoding unit consists of a positive neuron (
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e235.jpg"></inline-graphic>
</inline-formula>
), a negative neuron (
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e236.jpg"></inline-graphic>
</inline-formula>
) and an output neuron (
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e237.jpg"></inline-graphic>
</inline-formula>
). Each encoding unit is connected to one pixel and is assigned a subthreshold membrane potential oscillation. For simplicity, the oscillation for the
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e238.jpg"></inline-graphic>
</inline-formula>
-
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e239.jpg"></inline-graphic>
</inline-formula>
encoding unit is described as:
<disp-formula id="pone.0078318.e240">
<graphic xlink:href="pone.0078318.e240"></graphic>
<label>(13)</label>
</disp-formula>
where
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e241.jpg"></inline-graphic>
</inline-formula>
is the magnitude of the subthreshold membrane oscillation,
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e242.jpg"></inline-graphic>
</inline-formula>
is the phase angular velocity and
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e243.jpg"></inline-graphic>
</inline-formula>
is the initial phase.
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e244.jpg"></inline-graphic>
</inline-formula>
is defined as:
<disp-formula id="pone.0078318.e245">
<graphic xlink:href="pone.0078318.e245"></graphic>
<label>(14)</label>
</disp-formula>
where
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e246.jpg"></inline-graphic>
</inline-formula>
is the reference phase and
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e247.jpg"></inline-graphic>
</inline-formula>
is the phase difference between nearby encoding units. We set
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e248.jpg"></inline-graphic>
</inline-formula>
where
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e249.jpg"></inline-graphic>
</inline-formula>
is the number of encoding units.
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e250.jpg"></inline-graphic>
</inline-formula>
is equal to the number of pixels in the image (400 here). The oscillation period is set to be
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e251.jpg"></inline-graphic>
</inline-formula>
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e252.jpg"></inline-graphic>
</inline-formula>
which corresponds to a frequency of
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e253.jpg"></inline-graphic>
</inline-formula>
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e254.jpg"></inline-graphic>
</inline-formula>
.</p>
<fig id="pone-0078318-g014" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0078318.g014</object-id>
<label>Figure 14</label>
<caption>
<title>Illustration of the encoding schema.</title>
<p>The left shows the structure of an encoding unit. The encoding unit includes a positive neuron (
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e255.jpg"></inline-graphic>
</inline-formula>
), a negative neuron (
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e256.jpg"></inline-graphic>
</inline-formula>
) and an output neuron (
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e257.jpg"></inline-graphic>
</inline-formula>
). Each encoding unit is assigned a subthreshold membrane oscillation. Both
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e258.jpg"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e259.jpg"></inline-graphic>
</inline-formula>
neurons receive signals from this subthreshold membrane oscillation and the corresponding pixel. The
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e260.jpg"></inline-graphic>
</inline-formula>
neuron only reacts to positive activation voltage, while the
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e261.jpg"></inline-graphic>
</inline-formula>
neuron only reacts to negative activation voltage. The firing of either the
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e262.jpg"></inline-graphic>
</inline-formula>
neuron or the
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e263.jpg"></inline-graphic>
</inline-formula>
neuron will immediately cause the firing of the
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e264.jpg"></inline-graphic>
</inline-formula>
neuron. The right illustrates the dynamics of the encoding. A black/white pixel causes a downward/upward shift of the subthreshold membrane oscillation. A spike is generated if the membrane potential crosses the threshold line (
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e265.jpg"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e266.jpg"></inline-graphic>
</inline-formula>
).</p>
</caption>
<graphic xlink:href="pone.0078318.g014"></graphic>
</fig>
<p>The
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e267.jpg"></inline-graphic>
</inline-formula>
neuron only responds to positive activation potential, while the
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e268.jpg"></inline-graphic>
</inline-formula>
neuron only reacts to negative activation potential. A black/white input pixel causes a downward/upward shift of the subthreshold membrane oscillation. Whenever the membrane potential crosses the threshold, a spike is generated. Through fine tuning of the parameter
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e269.jpg"></inline-graphic>
</inline-formula>
, the amount of shift, and the threshold values, we set the spikes to occur at the peaks of the oscillation. The firing of either the
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e270.jpg"></inline-graphic>
</inline-formula>
neuron or the
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e271.jpg"></inline-graphic>
</inline-formula>
neuron will immediately trigger the firing of the
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e272.jpg"></inline-graphic>
</inline-formula>
neuron. Each encoding unit therefore emits a spike at one phase for a white pixel and at a phase shifted by 180 degrees for a black pixel. The emitted phase also depends on which pixel provides the input.</p>
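<p>To make the encoding concrete, the following Python sketch (ours, not the authors' code) implements one pass of the encoder. The oscillation period and the ±1 pixel coding are placeholders, since the paper gives the actual values only in rendered form; the essential behavior is that each unit emits one spike per cycle, at the peak phase selected by its pixel.</p>
<preformat>
import numpy as np

def encode_image(pixels, T=200.0):
    """Phase-encoder sketch (illustrative; parameter values assumed).

    Unit i carries the oscillation M_i(t) = A*cos(w*t + phi_i) with
    phi_i = i * dphi and dphi = 2*pi/N (cf. Eqs. (13)-(14)).  A white
    pixel shifts the trace upward so the P neuron fires at the positive
    peak; a black pixel shifts it downward so the N neuron fires at the
    negative peak, 180 degrees away.  Either firing drives the O neuron.
    """
    N = len(pixels)                    # one encoding unit per pixel
    w = 2.0 * np.pi / T                # phase angular velocity
    dphi = 2.0 * np.pi / N             # phase step between nearby units
    spike_times = np.empty(N)
    for i, p in enumerate(pixels):     # p = +1 (white) or -1 (black)
        phi_i = i * dphi
        peak_phase = 0.0 if p > 0 else np.pi
        spike_times[i] = ((peak_phase - phi_i) % (2.0 * np.pi)) / w
    return spike_times                 # one spike per unit in [0, T)

# usage: a 20x20 B/W image flattened to +/-1 values
img = np.where(np.random.rand(400) > 0.5, 1.0, -1.0)
print(encode_image(img)[:5])
</preformat>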
<p>We select 10 neurons to learn the patterns generated by the encoding units. Each learning neuron corresponds to one category. The parameter setting of the learning neurons is the same as that in the previous task of spatiotemporal pattern classification. Each neuron is trained to produce a target spike train (
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e273.jpg"></inline-graphic>
</inline-formula>
) when a pattern from the assigned class is presented, and not to spike when patterns from other classes are presented. In principle, different target spike trains can be used for different digits. The neurons are trained for
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e274.jpg"></inline-graphic>
</inline-formula>
epochs. In each training epoch, a training data set of
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e275.jpg"></inline-graphic>
</inline-formula>
samples is formed. There are
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e276.jpg"></inline-graphic>
</inline-formula>
samples for each digit. Among these
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e277.jpg"></inline-graphic>
</inline-formula>
samples, one is the template image and the other
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e278.jpg"></inline-graphic>
</inline-formula>
are generated with a random noise level of
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e279.jpg"></inline-graphic>
</inline-formula>
. After training, the neurons are tested at different noise levels. At each noise level,
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e280.jpg"></inline-graphic>
</inline-formula>
noise patterns are generated for each digit. The relative confidence criterion is used for decision making: the category of an input pattern is assigned to the neuron that produces the lowest spike distance.</p>
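<p>A minimal sketch of this decision procedure is given below (our illustration). It assumes a van Rossum-style spike distance, the metric family referred to elsewhere in the paper; all names and parameter values are ours.</p>
<preformat>
import numpy as np

def van_rossum_distance(train_a, train_b, tau=10.0, dt=0.1, T=200.0):
    """Filter both spike trains with an exponential kernel and
    integrate the squared difference of the filtered traces."""
    t = np.arange(0.0, T, dt)
    def filtered(train):
        out = np.zeros_like(t)
        for ts in train:
            mask = t >= ts
            out[mask] += np.exp(-(t[mask] - ts) / tau)
        return out
    diff = filtered(train_a) - filtered(train_b)
    return np.sqrt(np.sum(diff * diff) * dt / tau)

def classify(output_trains, target_trains):
    """Relative confidence criterion: assign the input pattern to the
    neuron whose actual output is closest to its own target train."""
    dists = [van_rossum_distance(o, g)
             for o, g in zip(output_trains, target_trains)]
    return int(np.argmin(dists))
</preformat>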
<p>
<xref ref-type="fig" rid="pone-0078318-g015">Fig. 15</xref>
shows the testing results. To observe the neuron's ability to associate a digit with the desired spike train, digit “8” is used as an example. The neuron corresponding to digit “8” successfully produces a spike train close to the target when the noise level is low; this association worsens as the noise level increases. As shown in
<xref ref-type="fig" rid="pone-0078318-g015">Fig. 15</xref>
, the classification accuracy remains high when the noise level is low and drops gradually as the noise level increases. Even when the image is severely corrupted by noise (
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e281.jpg"></inline-graphic>
</inline-formula>
noise level), a high accuracy of around
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e282.jpg"></inline-graphic>
</inline-formula>
can still be obtained. The results show that the trained neurons can successfully associate the template images with the target spike train. Moreover, under the relative confidence criterion, the trained neurons retain a high recognition ability even when the images are corrupted by noise.</p>
<fig id="pone-0078318-g015" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0078318.g015</object-id>
<label>Figure 15</label>
<caption>
<title>Performance on OCR task.</title>
<p>The left shows the neuron's ability to associate a typical digit with the desired spike train. Digit “8” is used as an example here. The distance between the output spike train and the desired spike train is plotted against the noise level. The right shows the classification accuracy on the testing set. Solid lines denote the average and shaded areas denote the standard deviation. All data are averaged over 30 runs.</p>
</caption>
<graphic xlink:href="pone.0078318.g015"></graphic>
</fig>
</sec>
</sec>
<sec id="s4">
<title>Discussion</title>
<p>The PSD rule is proposed for the association and recognition of spatiotemporal spike patterns. In summary, the PSD rule transforms the input spike trains into analog signals by convolving the spikes with a kernel function. With the chosen kernel, these analog signals take the simple form of postsynaptic currents, which gives them a direct physical interpretation and thus biological plausibility. Synaptic adaptation is driven by the error between the desired and the actual output spikes, with positive errors causing LTP and negative errors causing LTD. The amount of synaptic adaptation is determined by the transformed signal of the input spikes (the postsynaptic currents) at the time the modification occurs. When the actual spike train matches the desired spike train, the adaptation of the weights terminates.</p>
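<p>The following fragment sketches one simulation step of such an update in Python (ours, under the stated reading of the rule): each afferent's postsynaptic-current trace serves as its eligibility trace, and the spike error gates the sign of the change.</p>
<preformat>
import numpy as np

def psd_step(w, psc, desired_spike, actual_spike, eta=0.01):
    """One time step of a PSD-style update (illustrative sketch).

    w             : weight vector, one entry per afferent synapse
    psc           : postsynaptic-current (eligibility) trace of each
                    afferent at the current time step
    desired_spike : 1 if the desired train has a spike now, else 0
    actual_spike  : 1 if the neuron actually fired now, else 0
    """
    err = desired_spike - actual_spike  # +1 -> LTP, -1 -> LTD, 0 -> stop
    if err != 0:
        w += eta * err * psc            # change proportional to the trace
    return w
</preformat>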
<p>There is a supervisory signal involved in the PSD rule. The most documented evidence for supervised rules comes from studies of the cerebellum and the cerebellar cortex
<xref ref-type="bibr" rid="pone.0078318-Thach1">[18]</xref>
,
<xref ref-type="bibr" rid="pone.0078318-Ito1">[19]</xref>
. It is shown that supervisory signals are provided to the learning modules by sensory feedback
<xref ref-type="bibr" rid="pone.0078318-Carey1">[20]</xref>
or other supervisory neural structures in the brain
<xref ref-type="bibr" rid="pone.0078318-Ito1">[19]</xref>
. A neuromodulator released by the supervisory system can control the adaptation. Such control operates through several neuromodulatory pathways, such as dopamine and acetylcholine
<xref ref-type="bibr" rid="pone.0078318-Foehring1">[48]</xref>
,
<xref ref-type="bibr" rid="pone.0078318-Seamans1">[49]</xref>
. Experimental evidence shows that N-methyl-D-aspartate (NMDA) receptors are critically involved in the processes of LTP and LTD
<xref ref-type="bibr" rid="pone.0078318-Artola1">[50]</xref>
-
<xref ref-type="bibr" rid="pone.0078318-Lisman1">[52]</xref>
. Once the NMDA channels open, the resulting
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e283.jpg"></inline-graphic>
</inline-formula>
entry activates the biochemistry of potentiation, which leads to LTP
<xref ref-type="bibr" rid="pone.0078318-Lisman1">[52]</xref>
. Suppression of NMDA receptors by spike-mediated calcium entry may be a necessary step in the induction of LTD
<xref ref-type="bibr" rid="pone.0078318-Lisman1">[52]</xref>
,
<xref ref-type="bibr" rid="pone.0078318-Froemke1">[53]</xref>
. The synaptic modification can thus be implemented through supervisory control of the opening or suppression of these NMDA channels.</p>
<p>The PSD rule is simple and efficient in synaptic adaptation. Firstly, using the postsynaptic current as the eligibility trace means that the same signal drives both the neuron dynamics and the synaptic adaptation, unlike learning rules such as
<xref ref-type="bibr" rid="pone.0078318-Bohte1">[22]</xref>
,
<xref ref-type="bibr" rid="pone.0078318-Mohemmed1">[25]</xref>
,
<xref ref-type="bibr" rid="pone.0078318-Ponulak2">[28]</xref>
where different sources of signals are used. Thus, the number of signal sources involved in the learning is reduced, which directly benefits the computation. Secondly, unlike the arithmetic-based rules
<xref ref-type="bibr" rid="pone.0078318-Bohte1">[22]</xref>
,
<xref ref-type="bibr" rid="pone.0078318-Florian1">[24]</xref>
,
<xref ref-type="bibr" rid="pone.0078318-Mohemmed1">[25]</xref>
, where a complex error calculation is required for the synaptic adaptation, the PSD rule relies on a simple spike error between the actual and the desired spikes, so the synaptic adaptation is driven by these precise spikes without any complex error calculation. In fact, the weight modification depends only on currently available information (as shown in
<xref ref-type="fig" rid="pone-0078318-g002">Fig. 2</xref>
). Additionally, because the PSD rule can operate online, it is suitable for real-time applications. Within the PSD framework, different kernels, such as the exponential kernel and the
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e284.jpg"></inline-graphic>
</inline-formula>
kernel, can also be used to convolve the spikes and provide different eligibility traces.</p>
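<p>As an illustration of this flexibility, the sketch below (ours; the exact kernel forms and time constants are assumptions, not the paper's values) defines two such choices and the resulting eligibility trace.</p>
<preformat>
import numpy as np

def exp_kernel(s, tau=10.0):
    """Exponential kernel: exp(-s/tau) for s >= 0, else 0."""
    sp = np.maximum(s, 0.0)            # clip to avoid overflow for s < 0
    return np.where(s >= 0.0, np.exp(-sp / tau), 0.0)

def alpha_kernel(s, tau=10.0):
    """Alpha-shaped kernel: (s/tau)*exp(1 - s/tau) for s >= 0, else 0;
    normalized so that its peak value is 1 at s = tau."""
    sp = np.maximum(s, 0.0)
    return np.where(s >= 0.0, (sp / tau) * np.exp(1.0 - sp / tau), 0.0)

def eligibility_trace(t, spike_times, kernel=exp_kernel, tau=10.0):
    """Convolve an afferent spike train with the chosen kernel."""
    s = t - np.asarray(spike_times)
    return float(np.sum(kernel(s, tau)))
</preformat>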
<p>The PSD rule is designed for processing spatiotemporal patterns, where the exact time of each spike is used for information transmission. The PSD rule is unsuitable for learning patterns under the rate code because it is by nature designed to process precisely timed spikes, whereas the rate code conveys information through spike counts rather than precise times. Like other spatiotemporal mapping algorithms, including ReSuMe
<xref ref-type="bibr" rid="pone.0078318-Ponulak1">[23]</xref>
, Chronotron
<xref ref-type="bibr" rid="pone.0078318-Florian1">[24]</xref>
and SPAN
<xref ref-type="bibr" rid="pone.0078318-Mohemmed1">[25]</xref>
, the PSD rule cannot guarantee successful learning of an arbitrary spatiotemporal spike pattern. A sufficient number of input spikes around the desired time are required for establishing causal connections. In other words, the temporal range covered by the desired spikes should be covered by the input spikes.</p>
<p>Spiking neurons are equivalent to traditional neurons such as the perceptron under certain conditions
<xref ref-type="bibr" rid="pone.0078318-Gtig1">[15]</xref>
,
<xref ref-type="bibr" rid="pone.0078318-Xu1">[54]</xref>
. A spiking neuron runs as a continuous process over a period of time, while a perceptron does not involve the concept of time; the feature common to both is the calculation of a weighted sum. By segmenting the running time of the spiking neuron into several fixed time points, a perceptron can replace the spiking neuron, with its input vectors given by the postsynaptic currents at those time points. According to
<xref ref-type="bibr" rid="pone.0078318-Xu1">[54]</xref>
, the supervised association can thus be transformed into a classification problem and solved with the perceptron learning rule, where the target of the classification is to distinguish spike-firing times from non-firing times. However, a large number of fixed points is required for the perceptron to approximate the dynamics of the spiking neuron, which means the perceptron must store the pattern vectors at all of these points. In this respect, replacing spiking neurons with perceptrons sacrifices their computational power.</p>
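<p>A minimal sketch of this construction follows (ours; the sampling grid, tolerance, and names are assumptions). The PSCs of all afferents are sampled on a fixed time grid, each sample becomes one perceptron input vector, and the target is 1 at desired firing times and 0 elsewhere.</p>
<preformat>
import numpy as np

def build_perceptron_dataset(psc_traces, desired_times, dt=0.1, tol=0.5):
    """psc_traces : array (n_steps, n_afferents) of postsynaptic
    currents sampled on a fixed time grid with step dt;
    desired_times: list of desired firing times.
    Returns (X, y) with y[k] = 1 iff grid point k is a firing time."""
    t = np.arange(psc_traces.shape[0]) * dt
    y = np.zeros(len(t))
    for td in desired_times:
        y[np.abs(t - td) < tol] = 1.0
    return psc_traces, y

def perceptron_train(X, y, epochs=50, eta=0.1):
    """Classic perceptron rule applied to the sampled PSC vectors."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for x, target in zip(X, y):
            pred = 1.0 if np.dot(w, x) + b > 0.0 else 0.0
            w += eta * (target - pred) * x
            b += eta * (target - pred)
    return w, b
</preformat>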
<p>In all the experiments, a single spike code is used for afferent neurons, where each input neuron fires only a single spike during the entire time window. This code is chosen for several reasons, although the PSD rule also allows more than one spike per afferent. Firstly, a single spike code is simple to analyze and efficient to compute
<xref ref-type="bibr" rid="pone.0078318-Bohte1">[22]</xref>
,
<xref ref-type="bibr" rid="pone.0078318-Yu1">[26]</xref>
. Secondly, there is strong biological evidence supporting the single spike code. Visual systems can analyze a new complex scene in less than
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e285.jpg"></inline-graphic>
</inline-formula>
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e286.jpg"></inline-graphic>
</inline-formula>
<xref ref-type="bibr" rid="pone.0078318-Gollisch1">[44]</xref>
,
<xref ref-type="bibr" rid="pone.0078318-Thorpe1">[55]</xref>
. Such processing speed is remarkable considering the billions of neurons involved, and it suggests that neurons exchange only one or a few spikes. Single spike codes also fit situations where information is coded in the time of the first spike relative to the onset of stimuli
<xref ref-type="bibr" rid="pone.0078318-VanRullen1">[56]</xref>
, or situations where information is coded relative to a background oscillation
<xref ref-type="bibr" rid="pone.0078318-Florian1">[24]</xref>
,
<xref ref-type="bibr" rid="pone.0078318-Jacobs1">[47]</xref>
. The PSD rule is also suitable for multi-spike trains (results shown in
<xref ref-type="fig" rid="pone-0078318-g006">Fig. 6</xref>
). When the number of spikes from each afferent neuron is moderate, the neuron can produce the desired spike train after several epochs; as the number of spikes increases, learning becomes slower and harder to converge. Additionally, the biological plausibility of an encoding scheme that uses multiple spikes to code information is still unclear.</p>
<sec id="s4a">
<title>Related Works</title>
<p>Several learning algorithms have been proposed to explore how spiking neurons can be trained to process and memorize spatiotemporal patterns.</p>
<p>The tempotron rule
<xref ref-type="bibr" rid="pone.0078318-Gtig1">[15]</xref>
is one such learning rule, in which neurons are trained to discriminate between two classes of spatiotemporal patterns. It is based on a gradient descent approach, with synaptic plasticity governed by the temporal contiguity of presynaptic spikes and postsynaptic depolarization together with a supervisory signal. Neurons can be trained to distinguish two classes by firing a spike or by remaining quiescent; however, they do not learn to fire at precise times. Since the tempotron rule mainly aims at decision-making tasks, it cannot support the same coding scheme in both the input and output spikes. To support the same coding scheme throughout input and output, a learning rule must let the neuron not only fire but fire at the specified time. In addition, the tempotron is designed for a specific neuron model, which might limit its usage on other spiking neuron models. For the decision-making task (classification), our proposed rule obtains performance comparable to the tempotron rule (see
<xref ref-type="table" rid="pone-0078318-t001">Table 1</xref>
).</p>
<p>SpikeProp
<xref ref-type="bibr" rid="pone.0078318-Bohte1">[22]</xref>
is a supervised learning rule for SNNs that can solve nonlinear classification problems by emitting a single spike at the desired time. The major limitation is that SpikeProp and its extension in
<xref ref-type="bibr" rid="pone.0078318-Booij1">[57]</xref>
do not allow multiple spikes in the output spike train. Thus, several different learning rules have been developed to train neurons to produce multiple output spikes in response to a spatiotemporal stimulus, such as ReSuMe
<xref ref-type="bibr" rid="pone.0078318-Ponulak1">[23]</xref>
,
<xref ref-type="bibr" rid="pone.0078318-Ponulak2">[28]</xref>
, Chronotron
<xref ref-type="bibr" rid="pone.0078318-Florian1">[24]</xref>
and SPAN
<xref ref-type="bibr" rid="pone.0078318-Mohemmed1">[25]</xref>
, as well as our PSD rule.</p>
<p>In both the SPAN rule and the Chronotron E-learning rule, the synaptic weights are modified according to a gradient descent approach in an error landscape. The error function in the Chronotron is based on the Victor
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e287.jpg"></inline-graphic>
</inline-formula>
Purpura (VP) distance
<xref ref-type="bibr" rid="pone.0078318-Victor1">[58]</xref>
, while in the SPAN rule the error function is based on a metric similar to the van Rossum metric
<xref ref-type="bibr" rid="pone.0078318-Rossum1">[38]</xref>
. These arithmetic formulations make it easy to see why and how networks of spiking neurons can be trained, but arithmetic-based rules are a poor choice when biological plausibility is the design goal, since the biological plausibility of explicit error calculation is at least questionable. In contrast, the PSD rule minimizes the error between the actual and the desired output spike trains without an explicit gradient calculation, and by avoiding this extra error computation it provides an efficient way to process spatiotemporal patterns. On the other hand, since the PSD rule is derived from the common WH rule, it can also reveal why and how neurons can be trained in much the same way as the arithmetic-based rules.</p>
<p>From the perspective of increased biological plausibility, the Chronotron I-learning rule and the ReSuMe rule are considered below. The I-learning rule was heuristically defined in
<xref ref-type="bibr" rid="pone.0078318-Florian1">[24]</xref>
where synaptic changes depend on the synaptic currents. This rule is quite similar to the PSD rule and can be considered a variation of it. However, the I-learning rule appears to be developed for a particular case of the Spike Response Model
<xref ref-type="bibr" rid="pone.0078318-Gerstner1">[1]</xref>
, which might limit its usage on other spiking neuron models, or at least this is not clearly demonstrated. Moreover, synapses with zero initial weights are never updated under the I-learning rule, which inevitably loses information from the corresponding afferent neurons. The PSD rule addresses all of these issues: it is more general and is analytically derived. Through careful choice, the eligibility trace in the PSD rule can be represented by the postsynaptic current. In the tempotron rule, the postsynaptic voltage is involved in the learning; we refer to both the postsynaptic current and the postsynaptic voltage as the postsynaptic state. A crucial role of the postsynaptic state in the induction of long-term plasticity has been demonstrated in
<xref ref-type="bibr" rid="pone.0078318-Artola1">[50]</xref>
<xref ref-type="bibr" rid="pone.0078318-Lisman1">[52]</xref>
. Like the PSD rule and the SPAN rule, the ReSuMe rule is derived from the WH rule; ReSuMe interprets the WH rule as a combination of a Hebbian and an anti-Hebbian process within a learning window. It was demonstrated in
<xref ref-type="bibr" rid="pone.0078318-Mohemmed1">[25]</xref>
that the form of the SPAN rule has a surprising similarity to the ReSuMe rule with an exponential kernel. Similarly, we can transform the PSD rule by replacing the kernel used in Eq. (11) with the exponential kernel. This leads to:
<disp-formula id="pone.0078318.e288">
<graphic xlink:href="pone.0078318.e288"></graphic>
<label>(15)</label>
</disp-formula>
</p>
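<p>Written out pairwise, this substitution plausibly yields a form like the following (our transcription, assuming the exponential kernel e^{-s/tau}H(s); t_j^g denotes afferent spike times, t_d^f desired output spikes, t_a^h actual output spikes, and H the Heaviside step function; the rendered Eq. (15) remains authoritative):</p>
<preformat>
\Delta w_j = \eta \Bigg[ \sum_{f}\sum_{g}
      e^{-(t_d^{f}-t_j^{g})/\tau}\, H\big(t_d^{f}-t_j^{g}\big)
    - \sum_{h}\sum_{g}
      e^{-(t_a^{h}-t_j^{g})/\tau}\, H\big(t_a^{h}-t_j^{g}\big) \Bigg]
</preformat>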
<p>A batch learning version of the ReSuMe rule given in
<xref ref-type="bibr" rid="pone.0078318-Florian1">[24]</xref>
is described as:
<disp-formula id="pone.0078318.e289">
<graphic xlink:href="pone.0078318.e289"></graphic>
<label>(16)</label>
</disp-formula>
where
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e290.jpg"></inline-graphic>
</inline-formula>
is a non-Hebbian term used to speed up the convergence of the learning.</p>
<p>As can be seen from the above equations, the PSD rule is also mathematically similar to the ReSuMe rule under certain conditions. The similarity among PSD, SPAN and ReSuMe stems from their common origin: all three are derived from the WH rule, with different interpretations.</p>
<p>Surprisingly, the WH rule also guarantees an intrinsic similarity to other learning rules, such as the synaptic scaling rules
<xref ref-type="bibr" rid="pone.0078318-VanRossum1">[59]</xref>
,
<xref ref-type="bibr" rid="pone.0078318-Buonomano1">[60]</xref>
. For example, a synaptic scaling rule was introduced in
<xref ref-type="bibr" rid="pone.0078318-Buonomano1">[60]</xref>
as:
<disp-formula id="pone.0078318.e291">
<graphic xlink:href="pone.0078318.e291"></graphic>
<label>(17)</label>
</disp-formula>
where the variable
<inline-formula>
<inline-graphic xlink:href="pone.0078318.e292.jpg"></inline-graphic>
</inline-formula>
measures the average activity of neurons and can be interpreted as the firing rate. If a kernel with a long time constant is used to convolve the input, the actual output, and the desired spike trains, a similar measure of average firing activity is obtained. Thus, the common WH rule can be presented in a form similar to the scaling rule.</p>
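<p>This claim can be checked numerically with the short sketch below (ours): convolving a regular spike train with an exponential kernel whose time constant far exceeds the inter-spike interval yields a trace proportional to the firing rate.</p>
<preformat>
import numpy as np

dt, T = 0.001, 1.0
t = np.arange(0.0, T, dt)
spikes = np.zeros_like(t)
spikes[::50] = 1.0                  # one spike every 50 ms -> 20 Hz

tau = 0.5                           # time constant >> inter-spike interval
kernel = np.exp(-t / tau)
trace = np.convolve(spikes, kernel)[:len(t)]

# Steady state of the trace is about rate * tau; correct for the
# finite window to recover the rate itself.
rate_hz = trace[-1] / (tau * (1.0 - np.exp(-T / tau)))
print(rate_hz)                      # close to the true 20 Hz
</preformat>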
</sec>
</sec>
</body>
<back>
<ack>
<p>The authors are grateful to the anonymous reviewers for their constructive comments.</p>
</ack>
<ref-list>
<title>References</title>
<ref id="pone.0078318-Gerstner1">
<label>1</label>
<mixed-citation publication-type="other">Gerstner W, Kistler WM (2002) Spiking neuron models: single neurons, populations, plasticity. Cambridge University Press.</mixed-citation>
</ref>
<ref id="pone.0078318-GhoshDastidar1">
<label>2</label>
<mixed-citation publication-type="journal">
<name>
<surname>Ghosh-Dastidar</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Adeli</surname>
<given-names>H</given-names>
</name>
(
<year>2009</year>
)
<article-title>Spiking neural networks</article-title>
.
<source>International Journal of Neural Systems</source>
<volume>19</volume>
:
<fpage>295</fpage>
<lpage>308</lpage>
<pub-id pub-id-type="pmid">19731402</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0078318-Maass1">
<label>3</label>
<mixed-citation publication-type="journal">
<name>
<surname>Maass</surname>
<given-names>W</given-names>
</name>
(
<year>1997</year>
)
<article-title>Networks of spiking neurons: the third generation of neural network models</article-title>
.
<source>Neural Networks</source>
<volume>10</volume>
:
<fpage>1659</fpage>
<lpage>1671</lpage>
</mixed-citation>
</ref>
<ref id="pone.0078318-Panzeri1">
<label>4</label>
<mixed-citation publication-type="journal">
<name>
<surname>Panzeri</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Brunel</surname>
<given-names>N</given-names>
</name>
,
<name>
<surname>Logothetis</surname>
<given-names>NK</given-names>
</name>
,
<name>
<surname>Kayser</surname>
<given-names>C</given-names>
</name>
(
<year>2010</year>
)
<article-title>Sensory neural codes using multiplexed temporal scales</article-title>
.
<source>Trends in Neurosciences</source>
<volume>33</volume>
:
<fpage>111</fpage>
<lpage>120</lpage>
<pub-id pub-id-type="pmid">20045201</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0078318-Adrian1">
<label>5</label>
<mixed-citation publication-type="other">Adrian E (1928) The basis of sensation: the action of the sense organs. W. W. Norton, New York.</mixed-citation>
</ref>
<ref id="pone.0078318-Berry1">
<label>6</label>
<mixed-citation publication-type="journal">
<name>
<surname>Berry</surname>
<given-names>MJ</given-names>
</name>
,
<name>
<surname>Meister</surname>
<given-names>M</given-names>
</name>
(
<year>1998</year>
)
<article-title>Refractoriness and neural precision</article-title>
.
<source>The Journal of Neuroscience</source>
<volume>18</volume>
:
<fpage>2200</fpage>
<lpage>2211</lpage>
<pub-id pub-id-type="pmid">9482804</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0078318-Uzzell1">
<label>7</label>
<mixed-citation publication-type="journal">
<name>
<surname>Uzzell</surname>
<given-names>VJ</given-names>
</name>
,
<name>
<surname>Chichilnisky</surname>
<given-names>EJ</given-names>
</name>
(
<year>2004</year>
)
<article-title>Precision of spike trains in primate retinal ganglion cells</article-title>
.
<source>Journal of Neurophysiology</source>
<volume>92</volume>
:
<fpage>780</fpage>
<lpage>789</lpage>
<pub-id pub-id-type="pmid">15277596</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0078318-Reinagel1">
<label>8</label>
<mixed-citation publication-type="journal">
<name>
<surname>Reinagel</surname>
<given-names>P</given-names>
</name>
,
<name>
<surname>Reid</surname>
<given-names>RC</given-names>
</name>
(
<year>2000</year>
)
<article-title>Temporal coding of visual information in the thalamus</article-title>
.
<source>The Journal of Neuroscience</source>
<volume>20</volume>
:
<fpage>5392</fpage>
<lpage>5400</lpage>
<pub-id pub-id-type="pmid">10884324</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0078318-Bair1">
<label>9</label>
<mixed-citation publication-type="journal">
<name>
<surname>Bair</surname>
<given-names>W</given-names>
</name>
,
<name>
<surname>Koch</surname>
<given-names>C</given-names>
</name>
(
<year>1996</year>
)
<article-title>Temporal precision of spike trains in extrastriate cortex of the behaving macaque monkey</article-title>
.
<source>Neural Computation</source>
<volume>8</volume>
:
<fpage>1185</fpage>
<lpage>1202</lpage>
<pub-id pub-id-type="pmid">8768391</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0078318-Mainen1">
<label>10</label>
<mixed-citation publication-type="journal">
<name>
<surname>Mainen</surname>
<given-names>ZF</given-names>
</name>
,
<name>
<surname>Sejnowski</surname>
<given-names>TJ</given-names>
</name>
(
<year>1995</year>
)
<article-title>Reliability of spike timing in neocortical neurons</article-title>
.
<source>Science</source>
<volume>268</volume>
:
<fpage>1503</fpage>
<lpage>1506</lpage>
<pub-id pub-id-type="pmid">7770778</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0078318-Kempter1">
<label>11</label>
<mixed-citation publication-type="journal">
<name>
<surname>Kempter</surname>
<given-names>R</given-names>
</name>
,
<name>
<surname>Gerstner</surname>
<given-names>W</given-names>
</name>
,
<name>
<surname>van Hemmen</surname>
<given-names>JL</given-names>
</name>
(
<year>1998</year>
)
<article-title>Spike-based compared to rate-based Hebbian learning</article-title>
.
<source>Advances in Neural Information Processing Systems (NIPS'98)</source>
:
<fpage>125</fpage>
<lpage>131</lpage>
</mixed-citation>
</ref>
<ref id="pone.0078318-Borst1">
<label>12</label>
<mixed-citation publication-type="journal">
<name>
<surname>Borst</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Theunissen</surname>
<given-names>FE</given-names>
</name>
(
<year>1999</year>
)
<article-title>Information theory and neural coding</article-title>
.
<source>Nature Neuroscience</source>
<volume>2</volume>
:
<fpage>947</fpage>
<lpage>957</lpage>
</mixed-citation>
</ref>
<ref id="pone.0078318-Hopfield1">
<label>13</label>
<mixed-citation publication-type="journal">
<name>
<surname>Hopfield</surname>
<given-names>JJ</given-names>
</name>
(
<year>1995</year>
)
<article-title>Pattern recognition computation using action potential timing for stimulus representation</article-title>
.
<source>Nature</source>
<volume>376</volume>
:
<fpage>33</fpage>
<lpage>36</lpage>
<pub-id pub-id-type="pmid">7596429</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0078318-Shadlen1">
<label>14</label>
<mixed-citation publication-type="journal">
<name>
<surname>Shadlen</surname>
<given-names>MN</given-names>
</name>
,
<name>
<surname>Movshon</surname>
<given-names>JA</given-names>
</name>
(
<year>1999</year>
)
<article-title>Synchrony unbound: review a critical evaluation of the temporal binding hypothesis</article-title>
.
<source>Neuron</source>
<volume>24</volume>
:
<fpage>67</fpage>
<lpage>77</lpage>
<pub-id pub-id-type="pmid">10677027</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0078318-Gtig1">
<label>15</label>
<mixed-citation publication-type="journal">
<name>
<surname>Gütig</surname>
<given-names>R</given-names>
</name>
,
<name>
<surname>Sompolinsky</surname>
<given-names>H</given-names>
</name>
(
<year>2006</year>
)
<article-title>The tempotron: a neuron that learns spike timing-based decisions</article-title>
.
<source>Nature Neuroscience</source>
<volume>9</volume>
:
<fpage>420</fpage>
<lpage>428</lpage>
</mixed-citation>
</ref>
<ref id="pone.0078318-Widrow1">
<label>16</label>
<mixed-citation publication-type="journal">
<name>
<surname>Widrow</surname>
<given-names>B</given-names>
</name>
,
<name>
<surname>Lehr</surname>
<given-names>M</given-names>
</name>
(
<year>1990</year>
)
<article-title>30 years of adaptive neural networks: Perceptron, madaline, and backpropagation</article-title>
.
<source>Proceedings of the IEEE</source>
<volume>78</volume>
:
<fpage>1415</fpage>
<lpage>1442</lpage>
</mixed-citation>
</ref>
<ref id="pone.0078318-Knudsen1">
<label>17</label>
<mixed-citation publication-type="journal">
<name>
<surname>Knudsen</surname>
<given-names>EI</given-names>
</name>
(
<year>1994</year>
)
<article-title>Supervised learning in the brain</article-title>
.
<source>Journal of Neuroscience</source>
<volume>14</volume>
:
<fpage>3985</fpage>
<lpage>3997</lpage>
<pub-id pub-id-type="pmid">8027757</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0078318-Thach1">
<label>18</label>
<mixed-citation publication-type="journal">
<name>
<surname>Thach</surname>
<given-names>WT</given-names>
</name>
(
<year>1996</year>
)
<article-title>On the specific role of the cerebellum in motor learning and cognition: clues from PET activation and lesion studies in man</article-title>
.
<source>Behavioral and Brain Sciences</source>
<volume>19</volume>
:
<fpage>411</fpage>
<lpage>431</lpage>
</mixed-citation>
</ref>
<ref id="pone.0078318-Ito1">
<label>19</label>
<mixed-citation publication-type="journal">
<name>
<surname>Ito</surname>
<given-names>M</given-names>
</name>
(
<year>2000</year>
)
<article-title>Mechanisms of motor learning in the cerebellum</article-title>
.
<source>Brain Research</source>
<volume>886</volume>
:
<fpage>237</fpage>
<lpage>245</lpage>
<pub-id pub-id-type="pmid">11119699</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0078318-Carey1">
<label>20</label>
<mixed-citation publication-type="journal">
<name>
<surname>Carey</surname>
<given-names>MR</given-names>
</name>
,
<name>
<surname>Medina</surname>
<given-names>JF</given-names>
</name>
,
<name>
<surname>Lisberger</surname>
<given-names>SG</given-names>
</name>
(
<year>2005</year>
)
<article-title>Instructive signals for motor learning from visual cortical area MT</article-title>
.
<source>Nature Neuroscience</source>
<volume>8</volume>
:
<fpage>813</fpage>
<lpage>819</lpage>
</mixed-citation>
</ref>
<ref id="pone.0078318-Brader1">
<label>21</label>
<mixed-citation publication-type="journal">
<name>
<surname>Brader</surname>
<given-names>JM</given-names>
</name>
,
<name>
<surname>Senn</surname>
<given-names>W</given-names>
</name>
,
<name>
<surname>Fusi</surname>
<given-names>S</given-names>
</name>
(
<year>2007</year>
)
<article-title>Learning real-world stimuli in a neural network with spike-driven synaptic dynamics</article-title>
.
<source>Neural Computation</source>
<volume>19</volume>
:
<fpage>2881</fpage>
<lpage>2912</lpage>
<pub-id pub-id-type="pmid">17883345</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0078318-Bohte1">
<label>22</label>
<mixed-citation publication-type="journal">
<name>
<surname>Bohte</surname>
<given-names>SM</given-names>
</name>
,
<name>
<surname>Kok</surname>
<given-names>JN</given-names>
</name>
,
<name>
<surname>Poutré</surname>
<given-names>JAL</given-names>
</name>
(
<year>2002</year>
)
<article-title>Error-backpropagation in temporally encoded networks of spiking neurons</article-title>
.
<source>Neurocomputing</source>
<volume>48</volume>
:
<fpage>17</fpage>
<lpage>37</lpage>
</mixed-citation>
</ref>
<ref id="pone.0078318-Ponulak1">
<label>23</label>
<mixed-citation publication-type="other">Ponulak F (2005) ReSuMe–new supervised learning method for spiking neural networks. Technical report, Institute of Control and Information Engineering, Poznań University of Technology.</mixed-citation>
</ref>
<ref id="pone.0078318-Florian1">
<label>24</label>
<mixed-citation publication-type="journal">
<name>
<surname>Florian</surname>
<given-names>RV</given-names>
</name>
(
<year>2012</year>
)
<article-title>The Chronotron: a neuron that learns to fire temporally precise spike patterns</article-title>
.
<source>PloS One</source>
<volume>7</volume>
:
<fpage>e40233</fpage>
<pub-id pub-id-type="pmid">22879876</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0078318-Mohemmed1">
<label>25</label>
<mixed-citation publication-type="journal">
<name>
<surname>Mohemmed</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Schliebs</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Matsuda</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Kasabov</surname>
<given-names>N</given-names>
</name>
(
<year>2012</year>
)
<article-title>SPAN: spike pattern association neuron for learning spatio-temporal spike patterns</article-title>
.
<source>International Journal of Neural Systems</source>
<volume>22</volume>
:
<fpage>1250012</fpage>
<pub-id pub-id-type="pmid">22830962</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0078318-Yu1">
<label>26</label>
<mixed-citation publication-type="journal">
<name>
<surname>Yu</surname>
<given-names>Q</given-names>
</name>
,
<name>
<surname>Tang</surname>
<given-names>H</given-names>
</name>
,
<name>
<surname>Tan</surname>
<given-names>KC</given-names>
</name>
,
<name>
<surname>Li</surname>
<given-names>H</given-names>
</name>
(
<year>2013</year>
)
<article-title>Rapid feedforward computation by temporal encoding and learning with spiking neurons</article-title>
.
<source>IEEE Transactions on Neural Networks and Learning Systems</source>
<volume>24</volume>
:
<fpage>1539</fpage>
<lpage>1552</lpage>
</mixed-citation>
</ref>
<ref id="pone.0078318-Hu1">
<label>27</label>
<mixed-citation publication-type="journal">
<name>
<surname>Hu</surname>
<given-names>J</given-names>
</name>
,
<name>
<surname>Tang</surname>
<given-names>H</given-names>
</name>
,
<name>
<surname>Tan</surname>
<given-names>KC</given-names>
</name>
,
<name>
<surname>Li</surname>
<given-names>H</given-names>
</name>
,
<name>
<surname>Shi</surname>
<given-names>L</given-names>
</name>
(
<year>2013</year>
)
<article-title>A spike-timing-based integrated model for pattern recognition</article-title>
.
<source>Neural Computation</source>
<volume>25</volume>
:
<fpage>450</fpage>
<lpage>472</lpage>
<pub-id pub-id-type="pmid">23148414</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0078318-Ponulak2">
<label>28</label>
<mixed-citation publication-type="journal">
<name>
<surname>Ponulak</surname>
<given-names>F</given-names>
</name>
,
<name>
<surname>Kasinski</surname>
<given-names>AJ</given-names>
</name>
(
<year>2010</year>
)
<article-title>Supervised learning in spiking neural networks with ReSuMe: sequence learning, classification, and spike shifting</article-title>
.
<source>Neural Computation</source>
<volume>22</volume>
:
<fpage>467</fpage>
<lpage>510</lpage>
<pub-id pub-id-type="pmid">19842989</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0078318-Kempter2">
<label>29</label>
<mixed-citation publication-type="journal">
<name>
<surname>Kempter</surname>
<given-names>R</given-names>
</name>
,
<name>
<surname>Gerstner</surname>
<given-names>W</given-names>
</name>
,
<name>
<surname>van Hemmen</surname>
<given-names>JL</given-names>
</name>
(
<year>1999</year>
)
<article-title>Hebbian learning and spiking neurons</article-title>
.
<source>Physical Review E</source>
<volume>59</volume>
:
<fpage>4498</fpage>
<lpage>4514</lpage>
</mixed-citation>
</ref>
<ref id="pone.0078318-Bi1">
<label>30</label>
<mixed-citation publication-type="journal">
<name>
<surname>Bi</surname>
<given-names>GQ</given-names>
</name>
,
<name>
<surname>Poo</surname>
<given-names>MM</given-names>
</name>
(
<year>2001</year>
)
<article-title>Synaptic modification by correlated activity: Hebb's postulate revisited</article-title>
.
<source>Annual Review of Neuroscience</source>
<volume>24</volume>
:
<fpage>139</fpage>
<lpage>166</lpage>
</mixed-citation>
</ref>
<ref id="pone.0078318-GhoshDastidar2">
<label>31</label>
<mixed-citation publication-type="journal">
<name>
<surname>Ghosh-Dastidar</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Adeli</surname>
<given-names>H</given-names>
</name>
(
<year>2007</year>
)
<article-title>Improved spiking neural networks for EEG classification and epilepsy and seizure detection</article-title>
.
<source>Integr Comput-Aided Eng</source>
<volume>14</volume>
:
<fpage>187</fpage>
<lpage>212</lpage>
</mixed-citation>
</ref>
<ref id="pone.0078318-Izhikevich1">
<label>32</label>
<mixed-citation publication-type="journal">
<name>
<surname>Izhikevich</surname>
<given-names>EM</given-names>
</name>
(
<year>2001</year>
)
<article-title>Resonate-and-fire neurons</article-title>
.
<source>Neural Networks</source>
<volume>14</volume>
:
<fpage>883</fpage>
<lpage>894</lpage>
<pub-id pub-id-type="pmid">11665779</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0078318-Hodgkin1">
<label>33</label>
<mixed-citation publication-type="journal">
<name>
<surname>Hodgkin</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Huxley</surname>
<given-names>A</given-names>
</name>
(
<year>1952</year>
)
<article-title>A quantitative description of membrane current and its application to conduction and excitation in nerve</article-title>
.
<source>Journal of Physiology</source>
<volume>117</volume>
:
<fpage>500</fpage>
<lpage>544</lpage>
<pub-id pub-id-type="pmid">12991237</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0078318-Izhikevich2">
<label>34</label>
<mixed-citation publication-type="journal">
<name>
<surname>Izhikevich</surname>
<given-names>EM</given-names>
</name>
(
<year>2003</year>
)
<article-title>Simple model of spiking neurons</article-title>
.
<source>IEEE Transactions on Neural Networks</source>
<volume>14</volume>
:
<fpage>1569</fpage>
<lpage>1572</lpage>
<pub-id pub-id-type="pmid">18244602</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0078318-Wade1">
<label>35</label>
<mixed-citation publication-type="journal">
<name>
<surname>Wade</surname>
<given-names>JJ</given-names>
</name>
,
<name>
<surname>McDaid</surname>
<given-names>LJ</given-names>
</name>
,
<name>
<surname>Santos</surname>
<given-names>JA</given-names>
</name>
,
<name>
<surname>Sayers</surname>
<given-names>HM</given-names>
</name>
(
<year>2010</year>
)
<article-title>SWAT: a spiking neural network training algorithm for classification problems</article-title>
.
<source>IEEE Transactions on Neural Networks</source>
<volume>21</volume>
:
<fpage>1817</fpage>
<lpage>1830</lpage>
<pub-id pub-id-type="pmid">20876015</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0078318-Masquelier1">
<label>36</label>
<mixed-citation publication-type="journal">
<name>
<surname>Masquelier</surname>
<given-names>T</given-names>
</name>
,
<name>
<surname>Guyonneau</surname>
<given-names>R</given-names>
</name>
,
<name>
<surname>Thorpe</surname>
<given-names>SJ</given-names>
</name>
(
<year>2009</year>
)
<article-title>Competitive STDP-based spike pattern learning</article-title>
.
<source>Neural Computation</source>
<volume>21</volume>
:
<fpage>1259</fpage>
<lpage>1276</lpage>
<pub-id pub-id-type="pmid">19718815</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0078318-Rubinov1">
<label>37</label>
<mixed-citation publication-type="journal">
<name>
<surname>Rubinov</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Sporns</surname>
<given-names>O</given-names>
</name>
,
<name>
<surname>Thivierge</surname>
<given-names>J</given-names>
</name>
,
<name>
<surname>Breakspear</surname>
<given-names>M</given-names>
</name>
(
<year>2011</year>
)
<article-title>Neurobiologically realistic determinants of self-organized criticality in networks of spiking neurons</article-title>
.
<source>PLoS Computational Biology</source>
<volume>7</volume>
:
<fpage>e1002038</fpage>
<pub-id pub-id-type="pmid">21673863</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0078318-Rossum1">
<label>38</label>
<mixed-citation publication-type="journal">
<name>
<surname>Rossum</surname>
<given-names>M</given-names>
</name>
(
<year>2001</year>
)
<article-title>A novel spike distance</article-title>
.
<source>Neural Computation</source>
<volume>13</volume>
:
<fpage>751</fpage>
<lpage>763</lpage>
<pub-id pub-id-type="pmid">11255567</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0078318-Rieke1">
<label>39</label>
<mixed-citation publication-type="other">Rieke F, Warland D, Deruytervansteveninck R, Bialek W (1997) Spikes: exploring the neural code. Cambridge, MA: MIT Press, 1st edition.</mixed-citation>
</ref>
<ref id="pone.0078318-Gardner1">
<label>40</label>
<mixed-citation publication-type="journal">
<name>
<surname>Gardner</surname>
<given-names>E</given-names>
</name>
(
<year>1988</year>
)
<article-title>The space of interactions in neural network models</article-title>
.
<source>Journal of Physics</source>
<volume>A21</volume>
:
<fpage>257</fpage>
<lpage>270</lpage>
</mixed-citation>
</ref>
<ref id="pone.0078318-Yu2">
<label>41</label>
<mixed-citation publication-type="other">Yu Q, Tan KC, Tang H (2012) Pattern recognition computation in a spiking neural network with temporal encoding and learning. In: Proceedings of 2012 International Joint Conference on Neural Networks. IEEE Press. 466–472.</mixed-citation>
</ref>
<ref id="pone.0078318-Shriki1">
<label>42</label>
<mixed-citation publication-type="journal">
<name>
<surname>Shriki</surname>
<given-names>O</given-names>
</name>
,
<name>
<surname>Kohn</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Shamir</surname>
<given-names>M</given-names>
</name>
(
<year>2012</year>
)
<article-title>Fast coding of orientation in primary visual cortex</article-title>
.
<source>PLoS Computational Biology</source>
<volume>8</volume>
:
<fpage>e1002536</fpage>
<pub-id pub-id-type="pmid">22719237</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0078318-Nadasdy1">
<label>43</label>
<mixed-citation publication-type="journal">
<name>
<surname>Nadasdy</surname>
<given-names>Z</given-names>
</name>
(
<year>2009</year>
)
<article-title>Information encoding and reconstruction from the phase of action potentials</article-title>
.
<source>Frontiers in Systems Neuroscience</source>
<volume>3</volume>
:
<fpage>6</fpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.3389/neuro.06.006.2009">10.3389/neuro.06.006.2009</ext-link>
</comment>
<pub-id pub-id-type="pmid">19668700</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0078318-Gollisch1">
<label>44</label>
<mixed-citation publication-type="journal">
<name>
<surname>Gollisch</surname>
<given-names>T</given-names>
</name>
,
<name>
<surname>Meister</surname>
<given-names>M</given-names>
</name>
(
<year>2008</year>
)
<article-title>Rapid neural coding in the retina with relative spike latencies</article-title>
.
<source>Science</source>
<volume>319</volume>
:
<fpage>1108</fpage>
<lpage>1111</lpage>
<pub-id pub-id-type="pmid">18292344</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0078318-Llinas1">
<label>45</label>
<mixed-citation publication-type="journal">
<name>
<surname>Llinas</surname>
<given-names>RR</given-names>
</name>
,
<name>
<surname>Grace</surname>
<given-names>AA</given-names>
</name>
,
<name>
<surname>Yarom</surname>
<given-names>Y</given-names>
</name>
(
<year>1991</year>
)
<article-title>In vitro neurons in mammalian cortical layer 4 exhibit intrinsic oscillatory activity in the 10-to 50-Hz frequency range</article-title>
.
<source>Proceedings of the National Academy of Sciences</source>
<volume>88</volume>
:
<fpage>897</fpage>
<lpage>901</lpage>
</mixed-citation>
</ref>
<ref id="pone.0078318-Koepsell1">
<label>46</label>
<mixed-citation publication-type="journal">
<name>
<surname>Koepsell</surname>
<given-names>K</given-names>
</name>
,
<name>
<surname>Wang</surname>
<given-names>X</given-names>
</name>
,
<name>
<surname>Vaingankar</surname>
<given-names>V</given-names>
</name>
,
<name>
<surname>Wei</surname>
<given-names>Y</given-names>
</name>
,
<name>
<surname>Wang</surname>
<given-names>Q</given-names>
</name>
,
<etal>et al</etal>
(
<year>2009</year>
)
<article-title>Retinal oscillations carry visual information to cortex</article-title>
.
<source>Frontiers in Systems Neuroscience</source>
<volume>3</volume>
:
<fpage>4</fpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.3389/neuro.06.004.2009">10.3389/neuro.06.004.2009</ext-link>
</comment>
<pub-id pub-id-type="pmid">19404487</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0078318-Jacobs1">
<label>47</label>
<mixed-citation publication-type="journal">
<name>
<surname>Jacobs</surname>
<given-names>J</given-names>
</name>
,
<name>
<surname>Kahana</surname>
<given-names>MJ</given-names>
</name>
,
<name>
<surname>Ekstrom</surname>
<given-names>AD</given-names>
</name>
,
<name>
<surname>Fried</surname>
<given-names>I</given-names>
</name>
(
<year>2007</year>
)
<article-title>Brain oscillations control timing of single-neuron activity in humans</article-title>
.
<source>The Journal of Neuroscience</source>
<volume>27</volume>
:
<fpage>3839</fpage>
<lpage>3844</lpage>
<pub-id pub-id-type="pmid">17409248</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0078318-Foehring1">
<label>48</label>
<mixed-citation publication-type="journal">
<name>
<surname>Foehring</surname>
<given-names>RC</given-names>
</name>
,
<name>
<surname>Lorenzon</surname>
<given-names>NM</given-names>
</name>
(
<year>1999</year>
)
<article-title>Neuromodulation, development and synaptic plasticity</article-title>
.
<source>Canadian Journal of Experimental Psychology/Revue canadienne de psychologie expérimentale</source>
<volume>53</volume>
:
<fpage>45</fpage>
<lpage>61</lpage>
</mixed-citation>
</ref>
<ref id="pone.0078318-Seamans1">
<label>49</label>
<mixed-citation publication-type="journal">
<name>
<surname>Seamans</surname>
<given-names>JK</given-names>
</name>
,
<name>
<surname>Yang</surname>
<given-names>CR</given-names>
</name>
(
<year>2004</year>
)
<article-title>The principal features and mechanisms of dopamine modulation in the prefrontal cortex</article-title>
.
<source>Progress in Neurobiology</source>
<volume>74</volume>
:
<fpage>1</fpage>
<lpage>57</lpage>
<pub-id pub-id-type="pmid">15381316</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0078318-Artola1">
<label>50</label>
<mixed-citation publication-type="journal">
<name>
<surname>Artola</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Bröcher</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Singer</surname>
<given-names>W</given-names>
</name>
(
<year>1990</year>
)
<article-title>Different voltage-dependent thresholds for inducing long-term depression and long-term potentiation in slices of rat visual cortex</article-title>
.
<source>Nature</source>
<volume>347</volume>
:
<fpage>69</fpage>
<lpage>72</lpage>
<pub-id pub-id-type="pmid">1975639</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0078318-Ngezahayo1">
<label>51</label>
<mixed-citation publication-type="journal">
<name>
<surname>Ngezahayo</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Schachner</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Artola</surname>
<given-names>A</given-names>
</name>
(
<year>2000</year>
)
<article-title>Synaptic activity modulates the induction of bidirectional synaptic changes in adult mouse hippocampus</article-title>
.
<source>The Journal of Neuroscience</source>
<volume>20</volume>
:
<fpage>2451</fpage>
<lpage>2458</lpage>
<pub-id pub-id-type="pmid">10729325</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0078318-Lisman1">
<label>52</label>
<mixed-citation publication-type="journal">
<name>
<surname>Lisman</surname>
<given-names>J</given-names>
</name>
,
<name>
<surname>Spruston</surname>
<given-names>N</given-names>
</name>
(
<year>2005</year>
)
<article-title>Postsynaptic depolarization requirements for LTP and LTD: a critique of spike timing-dependent plasticity</article-title>
.
<source>Nature Neuroscience</source>
<volume>8</volume>
:
<fpage>839</fpage>
<lpage>841</lpage>
</mixed-citation>
</ref>
<ref id="pone.0078318-Froemke1">
<label>53</label>
<mixed-citation publication-type="journal">
<name>
<surname>Froemke</surname>
<given-names>RC</given-names>
</name>
,
<name>
<surname>Poo</surname>
<given-names>MM</given-names>
</name>
,
<name>
<surname>Dan</surname>
<given-names>Y</given-names>
</name>
(
<year>2005</year>
)
<article-title>Spike-timing-dependent synaptic plasticity depends on dendritic location</article-title>
.
<source>Nature</source>
<volume>434</volume>
:
<fpage>221</fpage>
<lpage>225</lpage>
<pub-id pub-id-type="pmid">15759002</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0078318-Xu1">
<label>54</label>
<mixed-citation publication-type="journal">
<name>
<surname>Xu</surname>
<given-names>Y</given-names>
</name>
,
<name>
<surname>Zeng</surname>
<given-names>X</given-names>
</name>
,
<name>
<surname>Zhong</surname>
<given-names>S</given-names>
</name>
(
<year>2013</year>
)
<article-title>A new supervised learning algorithm for spiking neurons</article-title>
.
<source>Neural Computation</source>
<volume>25</volume>
:
<fpage>1472</fpage>
<lpage>1511</lpage>
<pub-id pub-id-type="pmid">23517101</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0078318-Thorpe1">
<label>55</label>
<mixed-citation publication-type="journal">
<name>
<surname>Thorpe</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Fize</surname>
<given-names>D</given-names>
</name>
,
<name>
<surname>Marlot</surname>
<given-names>C</given-names>
</name>
(
<year>1996</year>
)
<article-title>Speed of processing in the human visual system</article-title>
.
<source>Nature</source>
<volume>381</volume>
:
<fpage>520</fpage>
<lpage>522</lpage>
<pub-id pub-id-type="pmid">8632824</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0078318-VanRullen1">
<label>56</label>
<mixed-citation publication-type="journal">
<name>
<surname>VanRullen</surname>
<given-names>R</given-names>
</name>
,
<name>
<surname>Guyonneau</surname>
<given-names>R</given-names>
</name>
,
<name>
<surname>Thorpe</surname>
<given-names>SJ</given-names>
</name>
(
<year>2005</year>
)
<article-title>Spike times make sense</article-title>
.
<source>Trends in Neurosciences</source>
<volume>28</volume>
:
<fpage>1</fpage>
<lpage>4</lpage>
<pub-id pub-id-type="pmid">15626490</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0078318-Booij1">
<label>57</label>
<mixed-citation publication-type="journal">
<name>
<surname>Booij</surname>
<given-names>O</given-names>
</name>
,
<name>
<surname>Nguyen</surname>
<given-names>HT</given-names>
</name>
(
<year>2005</year>
)
<article-title>A gradient descent rule for spiking neurons emitting multiple spikes</article-title>
.
<source>Information Processing Letters</source>
<volume>95</volume>
:
<fpage>552</fpage>
<lpage>558</lpage>
</mixed-citation>
</ref>
<ref id="pone.0078318-Victor1">
<label>58</label>
<mixed-citation publication-type="journal">
<name>
<surname>Victor</surname>
<given-names>JD</given-names>
</name>
,
<name>
<surname>Purpura</surname>
<given-names>KP</given-names>
</name>
(
<year>1997</year>
)
<article-title>Metric-space analysis of spike trains: theory, algorithms and application</article-title>
.
<source>Network: Computation in Neural Systems</source>
<volume>8</volume>
:
<fpage>127</fpage>
<lpage>164</lpage>
</mixed-citation>
</ref>
<ref id="pone.0078318-VanRossum1">
<label>59</label>
<mixed-citation publication-type="journal">
<name>
<surname>Van Rossum</surname>
<given-names>MC</given-names>
</name>
,
<name>
<surname>Bi</surname>
<given-names>G</given-names>
</name>
,
<name>
<surname>Turrigiano</surname>
<given-names>G</given-names>
</name>
(
<year>2000</year>
)
<article-title>Stable Hebbian learning from spike timing-dependent plasticity</article-title>
.
<source>The Journal of Neuroscience</source>
<volume>20</volume>
:
<fpage>8812</fpage>
<lpage>8821</lpage>
<pub-id pub-id-type="pmid">11102489</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0078318-Buonomano1">
<label>60</label>
<mixed-citation publication-type="journal">
<name>
<surname>Buonomano</surname>
<given-names>DV</given-names>
</name>
(
<year>2005</year>
)
<article-title>A learning rule for the emergence of stable dynamics and timing in recurrent networks</article-title>
.
<source>Journal of Neurophysiology</source>
<volume>94</volume>
:
<fpage>2275</fpage>
<lpage>2283</lpage>
<pub-id pub-id-type="pmid">16160088</pub-id>
</mixed-citation>
</ref>
</ref-list>
</back>
</pmc>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/OcrV1/Data/Pmc/Curation
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000176 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Pmc/Curation/biblio.hfd -nk 000176 | SxmlIndent | more

To add a link to this page within the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    OcrV1
   |flux=    Pmc
   |étape=   Curation
   |type=    RBID
   |clé=     PMC:3818323
   |texte=   Precise-Spike-Driven Synaptic Plasticity: Learning Hetero-Association of Spatiotemporal Spike Patterns
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Pmc/Curation/RBID.i   -Sk "pubmed:24223789" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Pmc/Curation/biblio.hfd   \
       | NlmPubMed2Wicri -a OcrV1 

Wicri

This area was generated with Dilib version V0.6.32.
Data generation: Sat Nov 11 16:53:45 2017. Site generation: Mon Mar 11 23:15:16 2024