Exploration server on the relations between France and Australia

Warning: this site is under development!
Warning: this site was generated by computational means from raw corpora.
The information presented here has therefore not been validated.

Capacity of very noisy communication channels based on Fisher information

Internal identifier: 000830 (Pmc/Corpus); previous: 000829; next: 000831


Authors: Fabing Duan; François Chapeau-Blondeau; Derek Abbott

Source:

RBID : PMC:4910081

Abstract

We generalize the asymptotic capacity expression for very noisy communication channels to now include coloured noise. For the practical scenario of a non-optimal receiver, we consider the common case of a correlation receiver. Due to the central limit theorem and the cumulative characteristic of a correlation receiver, we model this channel noise as additive Gaussian noise. Then, the channel capacity proves to be directly related to the Fisher information of the noise distribution and the weak signal energy. The conditions for occurrence of a noise-enhanced capacity effect are discussed, and the capacity difference between this noisy communication channel and other nonlinear channels is clarified.


URL:
DOI: 10.1038/srep27946
PubMed: 27306041
PubMed Central: 4910081

Links to Exploration step

PMC:4910081

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Capacity of very noisy communication channels based on Fisher information</title>
<author>
<name sortKey="Duan, Fabing" sort="Duan, Fabing" uniqKey="Duan F" first="Fabing" last="Duan">Fabing Duan</name>
<affiliation>
<nlm:aff id="a1">
<institution>Institute of Complexity Science, Qingdao University</institution>
, 266071 Qingdao,
<country>China</country>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Chapeau Blondeau, Francois" sort="Chapeau Blondeau, Francois" uniqKey="Chapeau Blondeau F" first="François" last="Chapeau-Blondeau">François Chapeau-Blondeau</name>
<affiliation>
<nlm:aff id="a2">
<institution>Laboratoire Angevin de Recherche en Ingénierie des Systèmes (LARIS), Université d’Angers</institution>
, 49000 Angers,
<country>France</country>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Abbott, Derek" sort="Abbott, Derek" uniqKey="Abbott D" first="Derek" last="Abbott">Derek Abbott</name>
<affiliation>
<nlm:aff id="a3">
<institution>Centre for Biomedical Engineering (CBME) and School of Electrical &amp; Electronic Engineering, The University of Adelaide</institution>
, Adelaide, SA 5005,
<country>Australia</country>
</nlm:aff>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">27306041</idno>
<idno type="pmc">4910081</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4910081</idno>
<idno type="RBID">PMC:4910081</idno>
<idno type="doi">10.1038/srep27946</idno>
<date when="2016">2016</date>
<idno type="wicri:Area/Pmc/Corpus">000830</idno>
<idno type="wicri:explorRef" wicri:stream="Pmc" wicri:step="Corpus" wicri:corpus="PMC">000830</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Capacity of very noisy communication channels based on Fisher information</title>
<author>
<name sortKey="Duan, Fabing" sort="Duan, Fabing" uniqKey="Duan F" first="Fabing" last="Duan">Fabing Duan</name>
<affiliation>
<nlm:aff id="a1">
<institution>Institute of Complexity Science, Qingdao University</institution>
, 266071 Qingdao,
<country>China</country>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Chapeau Blondeau, Francois" sort="Chapeau Blondeau, Francois" uniqKey="Chapeau Blondeau F" first="François" last="Chapeau-Blondeau">François Chapeau-Blondeau</name>
<affiliation>
<nlm:aff id="a2">
<institution>Laboratoire Angevin de Recherche en Ingénierie des Systèmes (LARIS), Université d’Angers</institution>
, 49000 Angers,
<country>France</country>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Abbott, Derek" sort="Abbott, Derek" uniqKey="Abbott D" first="Derek" last="Abbott">Derek Abbott</name>
<affiliation>
<nlm:aff id="a3">
<institution>Centre for Biomedical Engineering (CBME) and School of Electrical &amp; Electronic Engineering, The University of Adelaide</institution>
, Adelaide, SA 5005,
<country>Australia</country>
</nlm:aff>
</affiliation>
</author>
</analytic>
<series>
<title level="j">Scientific Reports</title>
<idno type="eISSN">2045-2322</idno>
<imprint>
<date when="2016">2016</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>We generalize the asymptotic capacity expression for very noisy communication channels to now include coloured noise. For the practical scenario of a non-optimal receiver, we consider the common case of a correlation receiver. Due to the central limit theorem and the cumulative characteristic of a correlation receiver, we model this channel noise as additive Gaussian noise. Then, the channel capacity proves to be directly related to the Fisher information of the noise distribution and the weak signal energy. The conditions for occurrence of a noise-enhanced capacity effect are discussed, and the capacity difference between this noisy communication channel and other nonlinear channels is clarified.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Shannon, C E" uniqKey="Shannon C">C. E. Shannon</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gallager, R G" uniqKey="Gallager R">R. G. Gallager</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Cover, T M" uniqKey="Cover T">T. M. Cover</name>
</author>
<author>
<name sortKey="Thomas, J A" uniqKey="Thomas J">J. A. Thomas</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Yeung, R W" uniqKey="Yeung R">R. W. Yeung</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nirenberg, L M" uniqKey="Nirenberg L">L. M. Nirenberg</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kassam, S A" uniqKey="Kassam S">S. A. Kassam</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kay, S" uniqKey="Kay S">S. Kay</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Reiffen, B" uniqKey="Reiffen B">B. Reiffen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Majani, E E" uniqKey="Majani E">E. E. Majani</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kullback, S" uniqKey="Kullback S">S. Kullback</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Verdu, S" uniqKey="Verdu S">S. Verdú</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Prelov, V V" uniqKey="Prelov V">V. V. Prelov</name>
</author>
<author>
<name sortKey="Van Der Meulen, E C" uniqKey="Van Der Meulen E">E. C. van der Meulen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kostal, L" uniqKey="Kostal L">L. Kostal</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kostal, L" uniqKey="Kostal L">L. Kostal</name>
</author>
<author>
<name sortKey="Lansky, P" uniqKey="Lansky P">P. Lansky</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Divincenzo, D P" uniqKey="Divincenzo D">D. P. DiVincenzo</name>
</author>
<author>
<name sortKey="Shor, P W" uniqKey="Shor P">P. W. Shor</name>
</author>
<author>
<name sortKey="Smolin, J A" uniqKey="Smolin J">J. A. Smolin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Abdel Ghaffar, K" uniqKey="Abdel Ghaffar K">K. Abdel-Ghaffar</name>
</author>
<author>
<name sortKey="Mceliece, R J" uniqKey="Mceliece R">R. J. McEliece</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Huber, P J" uniqKey="Huber P">P. J. Huber</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stam, A J" uniqKey="Stam A">A. J. Stam</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Patel, A" uniqKey="Patel A">A. Patel</name>
</author>
<author>
<name sortKey="Kosko, B" uniqKey="Kosko B">B. Kosko</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Benzi, R" uniqKey="Benzi R">R. Benzi</name>
</author>
<author>
<name sortKey="Sutera, A" uniqKey="Sutera A">A. Sutera</name>
</author>
<author>
<name sortKey="Vulpiani, A" uniqKey="Vulpiani A">A. Vulpiani</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Chapeau Blondeau, F" uniqKey="Chapeau Blondeau F">F. Chapeau-Blondeau</name>
</author>
<author>
<name sortKey="Godivier, X" uniqKey="Godivier X">X. Godivier</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Collins, J J" uniqKey="Collins J">J. J. Collins</name>
</author>
<author>
<name sortKey="Chow, C C" uniqKey="Chow C">C. C. Chow</name>
</author>
<author>
<name sortKey="Imhoff, T T" uniqKey="Imhoff T">T. T. Imhoff</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Floriani, E" uniqKey="Floriani E">E. Floriani</name>
</author>
<author>
<name sortKey="Mannella, R" uniqKey="Mannella R">R. Mannella</name>
</author>
<author>
<name sortKey="Grigolini, P" uniqKey="Grigolini P">P. Grigolini</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bulsara, A R" uniqKey="Bulsara A">A. R. Bulsara</name>
</author>
<author>
<name sortKey="Zador, A" uniqKey="Zador A">A. Zador</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Greenwood, P E" uniqKey="Greenwood P">P. E. Greenwood</name>
</author>
<author>
<name sortKey="Ward, L M" uniqKey="Ward L">L. M. Ward</name>
</author>
<author>
<name sortKey="Wefelmeyer, W" uniqKey="Wefelmeyer W">W. Wefelmeyer</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Duan, F" uniqKey="Duan F">F. Duan</name>
</author>
<author>
<name sortKey="Chapeau Blondeau, F" uniqKey="Chapeau Blondeau F">F. Chapeau-Blondeau</name>
</author>
<author>
<name sortKey="Abbott, D" uniqKey="Abbott D">D. Abbott</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gammaitoni, L" uniqKey="Gammaitoni L">L. Gammaitoni</name>
</author>
<author>
<name sortKey="H Nggi, P" uniqKey="H Nggi P">P. Hänggi</name>
</author>
<author>
<name sortKey="Jung, P" uniqKey="Jung P">P. Jung</name>
</author>
<author>
<name sortKey="Marchesoni, F" uniqKey="Marchesoni F">F. Marchesoni</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kay, S" uniqKey="Kay S">S. Kay</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mcdonnell, M D" uniqKey="Mcdonnell M">M. D. McDonnell</name>
</author>
<author>
<name sortKey="Stocks, N G" uniqKey="Stocks N">N. G. Stocks</name>
</author>
<author>
<name sortKey="Pearce, C E M" uniqKey="Pearce C">C. E. M. Pearce</name>
</author>
<author>
<name sortKey="Abbott, D" uniqKey="Abbott D">D. Abbott</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Moss, F" uniqKey="Moss F">F. Moss</name>
</author>
<author>
<name sortKey="Ward, L M" uniqKey="Ward L">L. M. Ward</name>
</author>
<author>
<name sortKey="Sannita, W G" uniqKey="Sannita W">W. G. Sannita</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stocks, N G" uniqKey="Stocks N">N. G. Stocks</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Martignoli, S" uniqKey="Martignoli S">S. Martignoli</name>
</author>
<author>
<name sortKey="Gomez, F" uniqKey="Gomez F">F. Gomez</name>
</author>
<author>
<name sortKey="Stoop, R" uniqKey="Stoop R">R. Stoop</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zozor, S" uniqKey="Zozor S">S. Zozor</name>
</author>
<author>
<name sortKey="Amblard, P O" uniqKey="Amblard P">P. O. Amblard</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Duan, F" uniqKey="Duan F">F. Duan</name>
</author>
<author>
<name sortKey="Chapeau Blondeau, F" uniqKey="Chapeau Blondeau F">F. Chapeau-Blondeau</name>
</author>
<author>
<name sortKey="Abbott, D" uniqKey="Abbott D">D. Abbott</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Czaplicka, A" uniqKey="Czaplicka A">A. Czaplicka</name>
</author>
<author>
<name sortKey="Holyst, J A" uniqKey="Holyst J">J. A. Holyst</name>
</author>
<author>
<name sortKey="Sloot, P M A" uniqKey="Sloot P">P. M. A. Sloot</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Han, J" uniqKey="Han J">J. Han</name>
</author>
<author>
<name sortKey="Liu, H" uniqKey="Liu H">H. Liu</name>
</author>
<author>
<name sortKey="Sun, Q" uniqKey="Sun Q">Q. Sun</name>
</author>
<author>
<name sortKey="Huang, N" uniqKey="Huang N">N. Huang</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Blachman, N M" uniqKey="Blachman N">N. M. Blachman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Dembo, A" uniqKey="Dembo A">A. Dembo</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Dembo, A" uniqKey="Dembo A">A. Dembo</name>
</author>
<author>
<name sortKey="Cover, T M" uniqKey="Cover T">T. M. Cover</name>
</author>
<author>
<name sortKey="Thomas, J A" uniqKey="Thomas J">J. A. Thomas</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zamir, R" uniqKey="Zamir R">R. Zamir</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Poor, H V" uniqKey="Poor H">H. V. Poor</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Perc, M" uniqKey="Perc M">M. Perc</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Benzi, R" uniqKey="Benzi R">R. Benzi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Perc, M" uniqKey="Perc M">M. Perc</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wang, Q" uniqKey="Wang Q">Q. Wang</name>
</author>
<author>
<name sortKey="Perc, M" uniqKey="Perc M">M. Perc</name>
</author>
<author>
<name sortKey="Duan, Z" uniqKey="Duan Z">Z. Duan</name>
</author>
<author>
<name sortKey="Chen, G" uniqKey="Chen G">G. Chen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gan, C" uniqKey="Gan C">C. Gan</name>
</author>
<author>
<name sortKey="Perc, M" uniqKey="Perc M">M. Perc</name>
</author>
<author>
<name sortKey="Wang, Q" uniqKey="Wang Q">Q. Wang</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">Sci Rep</journal-id>
<journal-id journal-id-type="iso-abbrev">Sci Rep</journal-id>
<journal-title-group>
<journal-title>Scientific Reports</journal-title>
</journal-title-group>
<issn pub-type="epub">2045-2322</issn>
<publisher>
<publisher-name>Nature Publishing Group</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">27306041</article-id>
<article-id pub-id-type="pmc">4910081</article-id>
<article-id pub-id-type="pii">srep27946</article-id>
<article-id pub-id-type="doi">10.1038/srep27946</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Capacity of very noisy communication channels based on Fisher information</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Duan</surname>
<given-names>Fabing</given-names>
</name>
<xref ref-type="corresp" rid="c1">a</xref>
<xref ref-type="aff" rid="a1">1</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Chapeau-Blondeau</surname>
<given-names>François</given-names>
</name>
<xref ref-type="aff" rid="a2">2</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Abbott</surname>
<given-names>Derek</given-names>
</name>
<xref ref-type="aff" rid="a3">3</xref>
</contrib>
<aff id="a1">
<label>1</label>
<institution>Institute of Complexity Science, Qingdao University</institution>
, 266071 Qingdao,
<country>China</country>
</aff>
<aff id="a2">
<label>2</label>
<institution>Laboratoire Angevin de Recherche en Ingénierie des Systèmes (LARIS), Université d’Angers</institution>
, 49000 Angers,
<country>France</country>
</aff>
<aff id="a3">
<label>3</label>
<institution>Centre for Biomedical Engineering (CBME) and School of Electrical &amp; Electronic Engineering, The University of Adelaide</institution>
, Adelaide, SA 5005,
<country>Australia</country>
</aff>
</contrib-group>
<author-notes>
<corresp id="c1">
<label>a</label>
<email>fabing.duan@gmail.com</email>
</corresp>
</author-notes>
<pub-date pub-type="epub">
<day>16</day>
<month>06</month>
<year>2016</year>
</pub-date>
<pub-date pub-type="collection">
<year>2016</year>
</pub-date>
<volume>6</volume>
<elocation-id>27946</elocation-id>
<history>
<date date-type="received">
<day>22</day>
<month>02</month>
<year>2016</year>
</date>
<date date-type="accepted">
<day>27</day>
<month>05</month>
<year>2016</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright © 2016, Macmillan Publishers Limited</copyright-statement>
<copyright-year>2016</copyright-year>
<copyright-holder>Macmillan Publishers Limited</copyright-holder>
<license license-type="open-access" xlink:href="http://creativecommons.org/licenses/by/4.0/">
<pmc-comment>author-paid</pmc-comment>
<license-p>This work is licensed under a Creative Commons Attribution 4.0 International License. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in the credit line; if the material is not included under the Creative Commons license, users will need to obtain permission from the license holder to reproduce the material. To view a copy of this license, visit
<ext-link ext-link-type="uri" xlink:href="http://creativecommons.org/licenses/by/4.0/">http://creativecommons.org/licenses/by/4.0/</ext-link>
</license-p>
</license>
</permissions>
<abstract>
<p>We generalize the asymptotic capacity expression for very noisy communication channels to now include coloured noise. For the practical scenario of a non-optimal receiver, we consider the common case of a correlation receiver. Due to the central limit theorem and the cumulative characteristic of a correlation receiver, we model this channel noise as additive Gaussian noise. Then, the channel capacity proves to be directly related to the Fisher information of the noise distribution and the weak signal energy. The conditions for occurrence of a noise-enhanced capacity effect are discussed, and the capacity difference between this noisy communication channel and other nonlinear channels is clarified.</p>
</abstract>
</article-meta>
</front>
<body>
<p>It is well known that, for an additive Gaussian noise channel and an energy constrained input signal, the channel capacity can be explicitly calculated
<xref ref-type="bibr" rid="b1">1</xref>
<xref ref-type="bibr" rid="b2">2</xref>
<xref ref-type="bibr" rid="b3">3</xref>
<xref ref-type="bibr" rid="b4">4</xref>
. In practical applications, however, communication systems frequently encounter non-Gaussian noise environments, for instance, underwater acoustic noise and low-frequency atmospheric noise
<xref ref-type="bibr" rid="b5">5</xref>
<xref ref-type="bibr" rid="b6">6</xref>
<xref ref-type="bibr" rid="b7">7</xref>
. Among all channels with power-constrained noise, the Gaussian channel has the smallest capacity
<xref ref-type="bibr" rid="b1">1</xref>
<xref ref-type="bibr" rid="b2">2</xref>
<xref ref-type="bibr" rid="b3">3</xref>
<xref ref-type="bibr" rid="b4">4</xref>
. Thus, the capacities of non-Gaussian channels are of great interest
<xref ref-type="bibr" rid="b5">5</xref>
<xref ref-type="bibr" rid="b6">6</xref>
<xref ref-type="bibr" rid="b7">7</xref>
<xref ref-type="bibr" rid="b8">8</xref>
<xref ref-type="bibr" rid="b9">9</xref>
<xref ref-type="bibr" rid="b10">10</xref>
<xref ref-type="bibr" rid="b11">11</xref>
<xref ref-type="bibr" rid="b12">12</xref>
<xref ref-type="bibr" rid="b13">13</xref>
<xref ref-type="bibr" rid="b14">14</xref>
<xref ref-type="bibr" rid="b15">15</xref>
. Moreover, from theoretical and practical viewpoints, a very interesting topic is the investigation of the channel capacity with very weak input signals, e.g. deep space communication channels
<xref ref-type="bibr" rid="b2">2</xref>
<xref ref-type="bibr" rid="b9">9</xref>
and qubit depolarizing channels
<xref ref-type="bibr" rid="b15">15</xref>
. A very noisy channel was introduced by Reiffen
<xref ref-type="bibr" rid="b8">8</xref>
, and extended by Gallager
<xref ref-type="bibr" rid="b2">2</xref>
and Majani
<xref ref-type="bibr" rid="b9">9</xref>
to model many physical communication channels operating at very low signal-to-noise ratio (SNR). “Very noisy” channels with very low capacity are of significant interest in communications, since Shannon’s theorem guarantees reliable communication as long as the capacity is nonzero
<xref ref-type="bibr" rid="b1">1</xref>
<xref ref-type="bibr" rid="b2">2</xref>
<xref ref-type="bibr" rid="b3">3</xref>
<xref ref-type="bibr" rid="b4">4</xref>
<xref ref-type="bibr" rid="b9">9</xref>
<xref ref-type="bibr" rid="b16">16</xref>
. Following the approaches developed in
<xref ref-type="bibr" rid="b2">2</xref>
<xref ref-type="bibr" rid="b8">8</xref>
<xref ref-type="bibr" rid="b9">9</xref>
and using a power series of characteristic functions, Nirenberg
<xref ref-type="bibr" rid="b5">5</xref>
derived a simple formula for the capacity of the coherent threshold channel with an optimum receiver. For memoryless channels with very weak inputs, Kullback
<xref ref-type="bibr" rid="b10">10</xref>
, Verdú
<xref ref-type="bibr" rid="b11">11</xref>
and Prelov
<xref ref-type="bibr" rid="b12">12</xref>
explicitly derived asymptotic expressions of the channel capacity that are closely related to the Fisher information matrix. Recently, Kostal and Lansky
<xref ref-type="bibr" rid="b14">14</xref>
presented an approximate expression for the information capacity in a broad class of discrete-time channels under the constraint of vanishing input amplitude or power, which allows us to analyse the capacity of channels with memory in a convenient way
<xref ref-type="bibr" rid="b13">13</xref>
<xref ref-type="bibr" rid="b14">14</xref>
.</p>
<p>In this paper, under the assumption of low SNR, we further derive the capacity of a very noisy communication channel in which the optimum receiver may be unavailable and the noise is not restricted to be white. Based on the central limit theorem, we argue that, for sufficiently long observation times and under the constraint of weak signal energy, the receiver output tends to be Gaussian distributed, and the channel capacity is then computed by a simple formula directly related to the Fisher information of the noise distribution. We demonstrate that the enhancement of capacity via stochastic resonance does not occur in a very noisy communication channel with an optimum receiver, but that it can occur with generalized correlation receivers suited to practical implementation. Finally, we compare the asymptotic capacity expressions of this noisy communication channel with other capacity formulas in refs
<xref ref-type="bibr" rid="b10">10</xref>
,
<xref ref-type="bibr" rid="b11">11</xref>
,
<xref ref-type="bibr" rid="b12">12</xref>
,
<xref ref-type="bibr" rid="b13">13</xref>
,
<xref ref-type="bibr" rid="b14">14</xref>
.</p>
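As a hedged consistency check (the paper's equations are rendered as images in this record, so the exact prefactor is an assumption here): if the asymptotic capacity takes the form C ≈ εJ/2 nats, with J the shift-invariant Fisher information of the noise and ε the weak signal energy, then for Gaussian noise it should reproduce the small-ε expansion of Shannon's AWGN capacity. A minimal sketch:

```python
import math

# Hedged sketch: assume the asymptotic capacity has the form C ~ eps * J / 2
# (nats), with J the shift-invariant Fisher information of the noise and eps
# the weak signal energy; the exact expression is given by the paper's
# equations, which appear as images in this record.
def asymptotic_capacity(eps, fisher_info):
    return 0.5 * eps * fisher_info

sigma2 = 4.0             # Gaussian noise variance
J_gauss = 1.0 / sigma2   # Fisher information of N(0, sigma2)
eps = 1e-3               # weak signal energy

approx = asymptotic_capacity(eps, J_gauss)
exact = 0.5 * math.log(1.0 + eps / sigma2)  # Shannon AWGN capacity (nats)
print(approx, exact)  # agree to first order in eps
```

For Gaussian noise J = 1/σ², so εJ/2 matches the leading term of (1/2)ln(1 + ε/σ²), consistent with the Gaussian channel being the limiting case.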
<sec disp-level="1">
<title>Results</title>
<sec disp-level="2">
<title>Channel capacity for coloured noise</title>
<p>For the
<italic>M</italic>
-ary communication channel shown in
<xref ref-type="fig" rid="f1">Fig. 1</xref>
, the observation data vector
<bold>X</bold>
contains the additive noise vector
<bold>Z</bold>
and the signal vector
<bold>S</bold>
<sub>
<italic>m</italic>
</sub>
,
<inline-formula id="d33e314">
<inline-graphic id="d33e315" xlink:href="srep27946-m1.jpg"></inline-graphic>
</inline-formula>
. With the assumptions of white noise and very low SNR, Nirenberg
<xref ref-type="bibr" rid="b5">5</xref>
derived the capacity for the coherent threshold channel with an optimum receiver. We briefly present the conclusions of ref.
<xref ref-type="bibr" rid="b5">5</xref>
for reference (see
<bold>Methods</bold>
. However, the idealized assumption of white noise is impractical, and coloured noise is of practical significance
<xref ref-type="bibr" rid="b2">2</xref>
<xref ref-type="bibr" rid="b3">3</xref>
<xref ref-type="bibr" rid="b4">4</xref>
. Here we further derive a general asymptotic expression of the channel capacity for coloured noise, which applies not only to the optimum receiver but also to an arbitrary correlation receiver.</p>
<p>In the case of coloured noise and for very low SNR, the conditional probability function can be expanded to first order</p>
<p>
<disp-formula id="eq2">
<inline-graphic id="d33e331" xlink:href="srep27946-m2.jpg"></inline-graphic>
</disp-formula>
</p>
<p>where the operator
<inline-formula id="d33e334">
<inline-graphic id="d33e335" xlink:href="srep27946-m3.jpg"></inline-graphic>
</inline-formula>
and the statistic
<inline-formula id="d33e337">
<inline-graphic id="d33e338" xlink:href="srep27946-m4.jpg"></inline-graphic>
</inline-formula>
. Here, from the information-theoretic point of view of Reiffen
<xref ref-type="bibr" rid="b8">8</xref>
and Gallager
<xref ref-type="bibr" rid="b2">2</xref>
, the condition
<inline-formula id="d33e344">
<inline-graphic id="d33e345" xlink:href="srep27946-m5.jpg"></inline-graphic>
</inline-formula>
indicates that the channel is very noisy, in the sense that the channel output is almost independent of the input. For
<italic>M</italic>
equiprobable signals
<bold>S</bold>
<sub>
<italic>m</italic>
</sub>
, the receiver takes the maximum likelihood rule</p>
<p>
<disp-formula id="eq6">
<inline-graphic id="d33e359" xlink:href="srep27946-m6.jpg"></inline-graphic>
</disp-formula>
</p>
<p>to optimally choose the
<italic>m</italic>
th signal
<xref ref-type="bibr" rid="b5">5</xref>
<xref ref-type="bibr" rid="b17">17</xref>
. Substituting
<xref ref-type="disp-formula" rid="eq12">equation (1)</xref>
into
<xref ref-type="disp-formula" rid="eq12">equation (2)</xref>
, the optimum receiver</p>
<p>
<disp-formula id="eq7">
<inline-graphic id="d33e375" xlink:href="srep27946-m7.jpg"></inline-graphic>
</disp-formula>
</p>
<p>enables us to decide if the
<italic>m</italic>
th signal was transmitted. For clarity, we state that the statistic Γ(
<bold>s</bold>
<sub>
<italic>m</italic>
</sub>
,
<bold>x</bold>
) and the maximum likelihood decoding rule of
<xref ref-type="disp-formula" rid="eq12">equation (2)</xref>
compose an optimum correlation receiver. The channel output is the decoding signal
<italic>ω</italic>
<sub>
<italic>m</italic>
</sub>
of the receiver, as shown in
<xref ref-type="fig" rid="f1">Fig. 1</xref>
.</p>
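The statistic Γ(s_m, x) itself appears in the image equations above; in the special case of white Gaussian noise the optimum correlation receiver reduces to the familiar matched filter. A purely illustrative sketch (the templates and noise level are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sketch only: for white Gaussian noise the optimum correlation
# receiver reduces to a matched filter: correlate the observation x with each
# candidate template s_m and decode the index with the largest score.
templates = np.array([[1.0, 1.0, -1.0, -1.0],   # S_1
                      [1.0, -1.0, 1.0, -1.0]])  # S_2  (M = 2 signals)

def correlation_receiver(x, templates):
    scores = templates @ x         # Gamma(s_m, x) up to constants
    return int(np.argmax(scores))  # maximum-likelihood index m

x = templates[1] + 0.1 * rng.standard_normal(4)  # observation X = S_m + Z
print(correlation_receiver(x, templates))        # decodes index 1
```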
<p>Then, supposing the zero-mean condition E
<sub>
<bold>S</bold>
</sub>
(
<bold>s</bold>
) = 
<bold>0</bold>
and extending the very noisy vector channel
<inline-formula id="d33e415">
<inline-graphic id="d33e416" xlink:href="srep27946-m8.jpg"></inline-graphic>
</inline-formula>
<xref ref-type="bibr" rid="b2">2</xref>
<xref ref-type="bibr" rid="b5">5</xref>
<xref ref-type="bibr" rid="b8">8</xref>
<xref ref-type="bibr" rid="b9">9</xref>
, the mutual information between the input signal space
<bold>Φ</bold>
and the channel output space
<bold>Ω</bold>
is given by</p>
<p>
<disp-formula id="eq9">
<inline-graphic id="d33e428" xlink:href="srep27946-m9.jpg"></inline-graphic>
</disp-formula>
</p>
<p>where the Fisher information matrix of the noise distribution is defined as
<xref ref-type="bibr" rid="b3">3</xref>
<xref ref-type="bibr" rid="b7">7</xref>
</p>
<p>
<disp-formula id="eq10">
<inline-graphic id="d33e435" xlink:href="srep27946-m10.jpg"></inline-graphic>
</disp-formula>
</p>
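The scalar version of this definition, J(f) = ∫ f′(z)²/f(z) dz, can be approximated numerically; a sketch (the grid and density are chosen for illustration):

```python
import numpy as np

# Numeric sketch of the scalar shift-invariant Fisher information
# J(f) = integral of f'(z)^2 / f(z) dz, approximated on a fine grid.
def fisher_information(f, z):
    fz = f(z)
    dfz = np.gradient(fz, z)  # numerical derivative f'(z)
    return float(np.sum(dfz**2 / fz) * (z[1] - z[0]))

sigma = 1.5
gauss = lambda t: np.exp(-t**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

z = np.linspace(-12.0, 12.0, 200001)
print(fisher_information(gauss, z))  # ~ 1 / sigma^2
```

For the Gaussian density the result is 1/σ², the classical value for a location parameter.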
<p>It is noted that
<bold>J</bold>
(
<italic>f</italic>
<sub>
<bold>Z</bold>
</sub>
) is also called the Fisher information of a location parameter or the shift-invariant Fisher information
<xref ref-type="bibr" rid="b3">3</xref>
<xref ref-type="bibr" rid="b6">6</xref>
<xref ref-type="bibr" rid="b7">7</xref>
<xref ref-type="bibr" rid="b18">18</xref>
, which can be viewed as a special case of the Fisher information measuring the statistical information contained in data about an unknown parameter. Therefore, with the energy constraint of E
<sub>
<bold>S</bold>
</sub>
(
<bold>s</bold>
<sup>
<italic>T</italic>
</sup>
<bold>s</bold>
) ≤ 
<italic>ε</italic>
and for the standardized vector
<inline-formula id="d33e465">
<inline-graphic id="d33e466" xlink:href="srep27946-m11.jpg"></inline-graphic>
</inline-formula>
, the channel capacity can be expressed as</p>
<p>
<disp-formula id="eq12">
<inline-graphic id="d33e470" xlink:href="srep27946-m12.jpg"></inline-graphic>
</disp-formula>
</p>
<p>where Λ is the largest eigenvalue of the matrix
<bold>J</bold>
(
<italic>f</italic>
<sub>
<bold>Z</bold>
</sub>
) and
<bold>u</bold>
is the corresponding eigenvector.</p>
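For coloured Gaussian noise the Fisher information matrix is J(f_Z) = Σ_Z⁻¹, so Λ and u can be computed directly; the sketch below assumes a capacity of the form C ≈ εΛ/2 (the exact expression is in the image equations), with an illustrative covariance matrix:

```python
import numpy as np

# Hedged sketch: for coloured Gaussian noise, J(f_Z) = inv(Sigma_Z), so the
# capacity-achieving direction u is the eigenvector of Sigma_Z with the
# smallest noise eigenvalue, and Lambda = 1 / lambda_min(Sigma_Z).
Sigma_Z = np.array([[2.0, 0.5],
                    [0.5, 1.0]])      # illustrative noise covariance

J = np.linalg.inv(Sigma_Z)
eigvals, eigvecs = np.linalg.eigh(J)  # eigenvalues in ascending order
Lambda = eigvals[-1]                  # largest eigenvalue of J
u = eigvecs[:, -1]                    # corresponding eigenvector

eps = 1e-3                            # weak signal energy constraint
C = 0.5 * eps * Lambda                # assumed asymptotic capacity form (nats)
print(Lambda, C)
```

Putting the signal energy along u, the direction in which the noise is weakest, maximizes the quadratic form uᵀJu = Λ.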
<p>For positive definite matrices
<bold>A</bold>
,
<inline-formula id="d33e490">
<inline-graphic id="d33e491" xlink:href="srep27946-m13.jpg"></inline-graphic>
</inline-formula>
and an arbitrary column vector
<inline-formula id="d33e493">
<inline-graphic id="d33e494" xlink:href="srep27946-m14.jpg"></inline-graphic>
</inline-formula>
, the inequality
<bold>X</bold>
<sup>
<italic>T</italic>
</sup>
(
<bold>A</bold>
 − 
<bold>B</bold>
)
<bold>X</bold>
 ≥ 0 is abbreviated as
<inline-formula id="d33e512">
<inline-graphic id="d33e513" xlink:href="srep27946-m15.jpg"></inline-graphic>
</inline-formula>
. Then, for the positive semidefinite matrix</p>
<p>
<disp-formula id="eq16">
<inline-graphic id="d33e517" xlink:href="srep27946-m16.jpg"></inline-graphic>
</disp-formula>
</p>
<p>and the noise covariance matrix
<bold>Σ</bold>
<sub>
<bold>Z</bold>
</sub>
 = E
<sub>
<bold>Z</bold>
</sub>
(
<bold>zz</bold>
<sup>
<italic>T</italic>
</sup>
), we have</p>
<p>
<disp-formula id="eq17">
<inline-graphic id="d33e538" xlink:href="srep27946-m17.jpg"></inline-graphic>
</disp-formula>
</p>
<p>where equality holds for the
<italic>N</italic>
-dimensional Gaussian distribution
<inline-formula id="d33e544">
<inline-graphic id="d33e545" xlink:href="srep27946-m18.jpg"></inline-graphic>
</inline-formula>
with its Fisher information matrix
<inline-formula id="d33e547">
<inline-graphic id="d33e548" xlink:href="srep27946-m19.jpg"></inline-graphic>
</inline-formula>
. Thus,
<xref ref-type="disp-formula" rid="eq17">equation (7)</xref>
indicates that the maximum eigenvalue of
<inline-formula id="d33e553">
<inline-graphic id="d33e554" xlink:href="srep27946-m20.jpg"></inline-graphic>
</inline-formula>
is no greater than that of the Fisher information matrix of non-Gaussian noise. This result extends the conclusion of
<xref ref-type="disp-formula" rid="eq102">equation (36)</xref>
by Nirenberg
<xref ref-type="bibr" rid="b5">5</xref>
, and also confirms that, in terms of the channel capacity, zero-mean Gaussian noise is the worst case given that the noise vector has a fixed covariance matrix
<xref ref-type="bibr" rid="b3">3</xref>
<xref ref-type="bibr" rid="b4">4</xref>
.</p>
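<p>As a simple numerical illustration of this worst-case property (a sketch added here, not part of the original derivation), the following Python snippet estimates the scalar Fisher information J = E[(f′(z)/f(z))²] by Monte Carlo for Gaussian and Laplacian noise of equal variance, using their known score functions; the Gaussian attains the smaller value.</p>

```python
import numpy as np

rng = np.random.default_rng(0)
sigma2 = 1.0                 # common noise variance for both distributions
n = 200_000

# Gaussian noise: score -f'(z)/f(z) = z/sigma^2, so J = E[(z/sigma^2)^2] = 1/sigma^2
z_g = rng.normal(0.0, np.sqrt(sigma2), n)
J_gauss = np.mean((z_g / sigma2) ** 2)

# Laplacian noise with the same variance: scale b = sigma/sqrt(2),
# score -f'(z)/f(z) = sign(z)/b, so J = 1/b^2 = 2/sigma^2
b = np.sqrt(sigma2 / 2.0)
z_l = rng.laplace(0.0, b, n)
J_lap = np.mean((np.sign(z_l) / b) ** 2)

print(J_gauss, J_lap)        # about 1.0 versus 2.0
assert J_gauss < J_lap       # Gaussian noise minimizes the Fisher information
```

<p>At a fixed variance, the non-Gaussian choice yields the larger Fisher information and hence, by equation (6), the larger capacity.</p>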
<p>However, we note the channel capacity of
<xref ref-type="disp-formula" rid="eq12">equation (6)</xref>
is achieved by the optimum receiver of
<xref ref-type="disp-formula" rid="eq17">equation (3)</xref>
. In many practical cases, the optimum receiver may not be implementable because the noise distribution is unknown or lacks a closed-form expression (e.g.
<italic>α</italic>
-stable noise
<xref ref-type="bibr" rid="b19">19</xref>
). Thus, we further consider the generalized correlation receiver</p>
<p>
<disp-formula id="eq21">
<inline-graphic id="d33e580" xlink:href="srep27946-m21.jpg"></inline-graphic>
</disp-formula>
</p>
<p>where
<inline-formula id="d33e583">
<inline-graphic id="d33e584" xlink:href="srep27946-m22.jpg"></inline-graphic>
</inline-formula>
is the coefficient vector and the function
<italic>g</italic>
(
<bold>x</bold>
) is not restricted to be memoryless. For the zero-mean vector E
<sub>
<bold>Z</bold>
</sub>
[
<italic>g</italic>
(
<bold>z</bold>
)] = 
<bold>0</bold>
(for a shift in mean)
<xref ref-type="bibr" rid="b6">6</xref>
under
<italic>f</italic>
<sub>
<bold>Z</bold>
</sub>
and for very low SNR,
<italic>g</italic>
(
<bold>x</bold>
) can be expanded to the first-order</p>
<p>
<disp-formula id="eq23">
<inline-graphic id="d33e623" xlink:href="srep27946-m23.jpg"></inline-graphic>
</disp-formula>
</p>
<p>Then, for a large observation size
<italic>N</italic>
, the statistic
<italic>T</italic>
<sub>
<italic>m</italic>
</sub>
has the mean
<inline-formula id="d33e635">
<inline-graphic id="d33e636" xlink:href="srep27946-m24.jpg"></inline-graphic>
</inline-formula>
and the variance
<inline-formula id="d33e638">
<inline-graphic id="d33e639" xlink:href="srep27946-m25.jpg"></inline-graphic>
</inline-formula>
. Using the Cholesky decomposition of the symmetric matrix
<bold>V</bold>
 = E
<sub>
<bold>Z</bold>
</sub>
[
<italic>g</italic>
(
<bold>z</bold>
)
<italic>g</italic>
(
<bold>z</bold>
)
<sup>
<italic>T</italic>
</sup>
] = 
<bold>LL</bold>
<sup>
<italic>T</italic>
</sup>
, the output SNR of the receiver can be calculated as</p>
<p>
<disp-formula id="eq26">
<inline-graphic id="d33e674" xlink:href="srep27946-m26.jpg"></inline-graphic>
</disp-formula>
</p>
<p>by optimally choosing
<inline-formula id="d33e677">
<inline-graphic id="d33e678" xlink:href="srep27946-m27.jpg"></inline-graphic>
</inline-formula>
. Then, we argue that, for a sufficiently large observation size and with the constraint of weak signal energy, the receiver output tends to be Gaussian distributed, and the capacity can be approximately calculated as
<p>
<disp-formula id="eq28">
<inline-graphic id="d33e682" xlink:href="srep27946-m28.jpg"></inline-graphic>
</disp-formula>
</p>
<p>where Λ
<sub>
<italic>g</italic>
</sub>
is the largest eigenvalue of the matrix
<inline-formula id="d33e689">
<inline-graphic id="d33e690" xlink:href="srep27946-m29.jpg"></inline-graphic>
</inline-formula>
. Observing</p>
<p>
<disp-formula id="eq30">
<inline-graphic id="d33e694" xlink:href="srep27946-m30.jpg"></inline-graphic>
</disp-formula>
</p>
<p>and for the positive semidefinite matrix</p>
<p>
<disp-formula id="eq31">
<inline-graphic id="d33e699" xlink:href="srep27946-m31.jpg"></inline-graphic>
</disp-formula>
</p>
<p>we have</p>
<p>
<disp-formula id="eq32">
<inline-graphic id="d33e705" xlink:href="srep27946-m32.jpg"></inline-graphic>
</disp-formula>
</p>
<p>with
<inline-formula id="d33e708">
<inline-graphic id="d33e709" xlink:href="srep27946-m33.jpg"></inline-graphic>
</inline-formula>
and the equality occurring for
<inline-formula id="d33e711">
<inline-graphic id="d33e712" xlink:href="srep27946-m34.jpg"></inline-graphic>
</inline-formula>
. This inequality (13) indicates that the eigenvalue Λ of
<bold>J</bold>
(
<italic>f</italic>
<sub>
<bold>Z</bold>
</sub>
) is not less than the eigenvalue Λ
<sub>
<italic>g</italic>
</sub>
of the matrix
<inline-formula id="d33e728">
<inline-graphic id="d33e729" xlink:href="srep27946-m35.jpg"></inline-graphic>
</inline-formula>
. Therefore, based on
<xref ref-type="disp-formula" rid="eq12">equations (6</xref>
), (
<xref ref-type="disp-formula" rid="eq76">11</xref>
) and (
<xref ref-type="disp-formula" rid="eq91">13</xref>
), we find</p>
<p>
<disp-formula id="eq36">
<inline-graphic id="d33e742" xlink:href="srep27946-m36.jpg"></inline-graphic>
</disp-formula>
</p>
<p>which extends the conclusion of ref.
<xref ref-type="bibr" rid="b5">5</xref>
to the case of coloured noise. In addition, the equality in
<xref ref-type="disp-formula" rid="eq91">equation (13)</xref>
also demonstrates the receiver of
<xref ref-type="disp-formula" rid="eq21">equation (8)</xref>
is optimal when
<inline-formula id="d33e754">
<inline-graphic id="d33e755" xlink:href="srep27946-m37.jpg"></inline-graphic>
</inline-formula>
, i.e. the optimum receiver of
<xref ref-type="disp-formula" rid="eq17">equation (3)</xref>
.</p>
<p>We argue that the asymptotic capacity expression of
<xref ref-type="disp-formula" rid="eq76">equation (11)</xref>
has a broader applicability for an arbitrary correlation receiver operated in coloured or white noise environments. As a simple check for the consistency of the results from
<xref ref-type="disp-formula" rid="eq76">equation (11)</xref>
to
<xref ref-type="disp-formula" rid="eq102">equation (14)</xref>
, we consider the case of white noise. Then, due to the statistical independence of the components of
<italic>g</italic>
(
<bold>z</bold>
), the expectation matrices
<inline-formula id="d33e778">
<inline-graphic id="d33e779" xlink:href="srep27946-m38.jpg"></inline-graphic>
</inline-formula>
and
<bold>V</bold>
 = E
<sub>
<italic>z</italic>
</sub>
[
<italic>g</italic>
<sup>2</sup>
(
<italic>z</italic>
)]
<bold>I</bold>
. Here, the derivative
<italic>g</italic>
′(
<italic>z</italic>
) = 
<italic>dg</italic>
(
<italic>z</italic>
)/
<italic>dz</italic>
and
<bold>I</bold>
is the unit matrix. Therefore, the matrix
<inline-formula id="d33e819">
<inline-graphic id="d33e820" xlink:href="srep27946-m39.jpg"></inline-graphic>
</inline-formula>
in
<xref ref-type="disp-formula" rid="eq76">equation (11)</xref>
has
<italic>N</italic>
identical eigenvalues
<inline-formula id="d33e828">
<inline-graphic id="d33e829" xlink:href="srep27946-m40.jpg"></inline-graphic>
</inline-formula>
, and the channel capacity becomes</p>
<p>
<disp-formula id="eq41">
<inline-graphic id="d33e833" xlink:href="srep27946-m41.jpg"></inline-graphic>
</disp-formula>
</p>
<p>where the eigenvalue Λ = 
<italic>J</italic>
(
<italic>f</italic>
<sub>
<italic>z</italic>
</sub>
) corresponds to the Fisher information matrix
<bold>J</bold>
(
<italic>f</italic>
<sub>
<bold>Z</bold>
</sub>
) in
<xref ref-type="disp-formula" rid="eq12">equation (6)</xref>
. Using the Cauchy-Schwarz inequality and integration by parts, we obtain
<inline-formula id="d33e858">
<inline-graphic id="d33e859" xlink:href="srep27946-m42.jpg"></inline-graphic>
</inline-formula>
, and the equality in
<xref ref-type="disp-formula" rid="eq41">equation (15)</xref>
occurs when
<inline-formula id="d33e864">
<inline-graphic id="d33e865" xlink:href="srep27946-m43.jpg"></inline-graphic>
</inline-formula>
, which specifies the optimum receiver in the presence of white noise
<xref ref-type="bibr" rid="b5">5</xref>
.</p>
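<p>The white-noise bound of equation (15) can be checked numerically. The sketch below is an illustration under the white-noise assumption; the nonlinearity tanh is chosen arbitrarily here and is not from the original text. The eigenvalue Λ<sub>g</sub> = (E[g′(z)])²/E[g²(z)] of the suboptimal correlator never exceeds the Fisher information J = 1/σ² of the Gaussian noise, whereas the linear receiver g(z) = z/σ², being the Gaussian score function, attains it.</p>

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 1.0
z = rng.normal(0.0, sigma, 500_000)

# Suboptimal memoryless receiver g(z) = tanh(z)
g = np.tanh(z)
g_prime = 1.0 - g ** 2                        # derivative of tanh
lam_g = np.mean(g_prime) ** 2 / np.mean(g ** 2)

# Fisher information of Gaussian noise and the optimum (linear) receiver
J = 1.0 / sigma ** 2
g_opt = z / sigma ** 2                        # score function of the Gaussian
lam_opt = (1.0 / sigma ** 2) ** 2 / np.mean(g_opt ** 2)

print(lam_g, lam_opt, J)   # lam_g below J; lam_opt matches J up to MC error
assert lam_g < J
```
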
</sec>
<sec disp-level="2">
<title>Conditions for noise-enhanced capacity</title>
<p>Since the emergence of the concept of stochastic resonance
<xref ref-type="bibr" rid="b20">20</xref>
, the employment of noise in enhancing the performance of nonlinear systems has become an interesting option
<xref ref-type="bibr" rid="b13">13</xref>
<xref ref-type="bibr" rid="b14">14</xref>
<xref ref-type="bibr" rid="b21">21</xref>
<xref ref-type="bibr" rid="b22">22</xref>
<xref ref-type="bibr" rid="b23">23</xref>
<xref ref-type="bibr" rid="b24">24</xref>
<xref ref-type="bibr" rid="b25">25</xref>
<xref ref-type="bibr" rid="b26">26</xref>
<xref ref-type="bibr" rid="b27">27</xref>
<xref ref-type="bibr" rid="b28">28</xref>
<xref ref-type="bibr" rid="b29">29</xref>
<xref ref-type="bibr" rid="b30">30</xref>
<xref ref-type="bibr" rid="b31">31</xref>
<xref ref-type="bibr" rid="b32">32</xref>
<xref ref-type="bibr" rid="b33">33</xref>
<xref ref-type="bibr" rid="b34">34</xref>
<xref ref-type="bibr" rid="b35">35</xref>
<xref ref-type="bibr" rid="b36">36</xref>
. Initially, the mechanism of stochastic resonance manifests itself as a time-scale matching condition for the noise-induced characteristic time of systems and the signal period
<xref ref-type="bibr" rid="b20">20</xref>
<xref ref-type="bibr" rid="b27">27</xref>
. Later, the notion of stochastic resonance has been widened to a number of different mechanisms, e.g. aperiodic stochastic resonance
<xref ref-type="bibr" rid="b22">22</xref>
and suprathreshold stochastic resonance
<xref ref-type="bibr" rid="b31">31</xref>
. For such stochastic resonance effects
<xref ref-type="bibr" rid="b22">22</xref>
<xref ref-type="bibr" rid="b31">31</xref>
, there is no matching time-scale that corresponds to the input aperiodic or information-carrying random signal, but the system performance still reaches a maximum at an optimal non-zero noise level. Therefore, the noise-enhanced effect, rather than stochastic resonance, is the more appropriate term for describing the enhancement of system responses via the addition of noise. Here, if the channel capacity reaches a maximum at an optimal non-zero noise level, then the noise-enhanced capacity effect occurs. Otherwise, the channel capacity monotonically decreases as the noise level increases; that is to say, the noise-enhanced capacity effect does not exist.</p>
<p>There are two approaches for varying the noise in stochastic resonance. One is tuning the noise level without changing the noise type; the other is adding extra noise to a given noisy signal, where the extra noise type may differ from the original one. Next, we demonstrate the conditions for the occurrence or non-occurrence of the noise-enhanced capacity effect under these two methods.</p>
<p>First, we will prove that no noise-enhanced capacity effect exists for tuning the scaled noise level in an optimum receiver. For the scaled noise vector
<bold>Z</bold>
 = 
<bold>DZ</bold>
<sub>
<italic>n</italic>
</sub>
, the covariance matrix
<bold>Σ</bold>
<sub>
<bold>Z</bold>
</sub>
can be factored as
<bold>Σ</bold>
<sub>
<bold>Z</bold>
</sub>
 = 
<bold>DD</bold>
<sup>
<italic>T</italic>
</sup>
and the standardized noise vector
<bold>Z</bold>
<sub>
<italic>n</italic>
</sub>
has a covariance matrix being the unit matrix
<inline-formula id="d33e925">
<inline-graphic id="d33e926" xlink:href="srep27946-m44.jpg"></inline-graphic>
</inline-formula>
<xref ref-type="bibr" rid="b7">7</xref>
. A well-known scaling property of the Fisher information matrix is
<xref ref-type="bibr" rid="b7">7</xref>
<xref ref-type="bibr" rid="b18">18</xref>
<xref ref-type="bibr" rid="b37">37</xref>
<xref ref-type="bibr" rid="b38">38</xref>
<xref ref-type="bibr" rid="b39">39</xref>
<xref ref-type="bibr" rid="b40">40</xref>
</p>
<p>
<disp-formula id="eq45">
<inline-graphic id="d33e932" xlink:href="srep27946-m45.jpg"></inline-graphic>
</disp-formula>
</p>
<p>which implies the largest eigenvalue Λ of
<bold>J</bold>
(
<italic>f</italic>
<sub>
<bold>Z</bold>
</sub>
) is a monotonically decreasing function of Λ
<sub>
<italic>n</italic>
</sub>
/det(
<bold>Σ</bold>
<sub>
<bold>Z</bold>
</sub>
) for the determinants det
<sup>2</sup>
(
<bold>D</bold>
) = det
<sup>2</sup>
(
<bold>D</bold>
<sup>
<italic>T</italic>
</sup>
) = det(
<bold>Σ</bold>
<sub>
<bold>Z</bold>
</sub>
). Here, the largest eigenvalue of
<inline-formula id="d33e977">
<inline-graphic id="d33e978" xlink:href="srep27946-m46.jpg"></inline-graphic>
</inline-formula>
is Λ
<sub>
<italic>n</italic>
</sub>
that is a fixed quantity for
<bold>Z</bold>
<sub>
<italic>n</italic>
</sub>
. For such a channel with its optimum receiver,
<xref ref-type="disp-formula" rid="eq76">equation (11)</xref>
indicates the channel capacity
<inline-formula id="d33e993">
<inline-graphic id="d33e994" xlink:href="srep27946-m47.jpg"></inline-graphic>
</inline-formula>
monotonically decreases as the noise intensity increases. Thus, no noise-enhanced capacity phenomenon will occur by tuning the noise level.</p>
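<p>The scaling argument can be sketched numerically. Assuming, for illustration, two-dimensional coloured Gaussian noise <bold>Z</bold> = <bold>DZ</bold><sub><italic>n</italic></sub> (the Gaussian case, where the Fisher information matrix is the inverse covariance matrix), scaling <bold>D</bold> by a factor <italic>c</italic> lowers the largest eigenvalue of <bold>J</bold>, and hence the capacity of equation (11), monotonically:</p>

```python
import numpy as np

# Coloured Gaussian noise Z = D Z_n with Cov(Z_n) = I, so J(f_Z) = (D D^T)^{-1}
D0 = np.array([[1.0, 0.0],
               [0.5, 1.0]])
eigs = []
for c in (0.5, 1.0, 2.0, 4.0):            # increasing noise intensity
    Sigma = (c * D0) @ (c * D0).T         # covariance of the scaled noise
    J = np.linalg.inv(Sigma)              # Fisher information matrix
    eigs.append(np.linalg.eigvalsh(J).max())

print(eigs)                               # largest eigenvalue falls as 1/c^2
assert all(a > b for a, b in zip(eigs, eigs[1:]))
```
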
<p>For instance, we consider a threshold receiver based on the function
<italic>g</italic>
(
<italic>x</italic>
) = sign(
<italic>x</italic>
) and the Laplacian white noise with its distribution
<inline-formula id="d33e1007">
<inline-graphic id="d33e1008" xlink:href="srep27946-m48.jpg"></inline-graphic>
</inline-formula>
. We note that the threshold receiver is optimum for the Laplacian noise, and
<inline-formula id="d33e1010">
<inline-graphic id="d33e1011" xlink:href="srep27946-m49.jpg"></inline-graphic>
</inline-formula>
satisfies the equality condition in
<xref ref-type="disp-formula" rid="eq41">equation (15)</xref>
. In this case, the channel capacity in
<xref ref-type="disp-formula" rid="eq41">equation (15)</xref>
can be calculated as
<inline-formula id="d33e1020">
<inline-graphic id="d33e1021" xlink:href="srep27946-m50.jpg"></inline-graphic>
</inline-formula>
, which monotonically decreases as the noise level
<italic>σ</italic>
increases. Thus, there is no noise-enhanced capacity effect.</p>
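<p>This Laplacian example admits a direct check. In the illustrative sketch below, the sign receiver attains Λ<sub><italic>g</italic></sub> = (2<italic>f</italic>(0))² = 1/<italic>b</italic>², which equals the Fisher information of the Laplacian noise, and the resulting capacity per unit signal energy <italic>C</italic><sub><italic>g</italic></sub>/<italic>ε</italic> = Λ<sub><italic>g</italic></sub>/2 falls monotonically with the noise level <italic>σ</italic> (where <italic>σ</italic>² = 2<italic>b</italic>²):</p>

```python
import numpy as np

# Laplacian noise f(z) = exp(-|z|/b)/(2b), variance sigma^2 = 2 b^2.
# Sign receiver g(z) = sign(z): E[g'(z)] = 2 f(0) = 1/b and E[g^2(z)] = 1.
caps = []
for sigma in (0.5, 1.0, 2.0, 4.0):
    b = sigma / np.sqrt(2.0)
    lam_g = (2.0 * (1.0 / (2.0 * b))) ** 2   # (2 f(0))^2 = 1/b^2
    J = 1.0 / b ** 2                         # Fisher information of Laplacian
    assert np.isclose(lam_g, J)              # equality: sign is optimum here
    caps.append(lam_g / 2.0)                 # capacity per unit signal energy

print(caps)   # strictly decreasing: no noise-enhanced capacity effect
```
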
<p>Secondly, in practice we are usually given a signal corrupted by noise whose initial level cannot be adjusted. We will prove that the addition of extra noise cannot further improve the channel capacity achieved by the optimum receiver. Under this circumstance, we add an extra noise vector
<bold>W</bold>
, independent of
<bold>Z</bold>
and
<bold>S</bold>
<sub>
<italic>m</italic>
</sub>
, to the observation
<bold>X</bold>
, and the updated data vector is</p>
<p>
<disp-formula id="eq51">
<inline-graphic id="d33e1045" xlink:href="srep27946-m51.jpg"></inline-graphic>
</disp-formula>
</p>
<p>where the composite noise vector
<bold>U</bold>
 = 
<bold>Z</bold>
 + 
<bold>W</bold>
with its distribution
<italic>f</italic>
<sub>
<bold>U</bold>
</sub>
. In this case, we should employ the statistic
<inline-formula id="d33e1063">
<inline-graphic id="d33e1064" xlink:href="srep27946-m52.jpg"></inline-graphic>
</inline-formula>
to specify the optimum receiver, and the corresponding capacity is then given by</p>
<p>
<disp-formula id="eq53">
<inline-graphic id="d33e1068" xlink:href="srep27946-m53.jpg"></inline-graphic>
</disp-formula>
</p>
<p>with the largest eigenvalue
<inline-formula id="d33e1072">
<inline-graphic id="d33e1073" xlink:href="srep27946-m54.jpg"></inline-graphic>
</inline-formula>
of the Fisher information matrix
<bold>J</bold>
(
<italic>f</italic>
<sub>
<bold>U</bold>
</sub>
). For any nonsingular matrix
<inline-formula id="d33e1084">
<inline-graphic id="d33e1085" xlink:href="srep27946-m55.jpg"></inline-graphic>
</inline-formula>
, the Fisher information matrix inequality
<xref ref-type="bibr" rid="b3">3</xref>
<xref ref-type="bibr" rid="b37">37</xref>
<xref ref-type="bibr" rid="b38">38</xref>
<xref ref-type="bibr" rid="b39">39</xref>
<xref ref-type="bibr" rid="b40">40</xref>
holds for</p>
<p>
<disp-formula id="eq56">
<inline-graphic id="d33e1091" xlink:href="srep27946-m56.jpg"></inline-graphic>
</disp-formula>
</p>
<p>Thus, the largest eigenvalue Λ of
<bold>J</bold>
(
<italic>f</italic>
<sub>
<bold>Z</bold>
</sub>
) is not less than the largest eigenvalue
<inline-formula id="d33e1103">
<inline-graphic id="d33e1104" xlink:href="srep27946-m57.jpg"></inline-graphic>
</inline-formula>
of
<bold>J</bold>
(
<italic>f</italic>
<sub>
<bold>U</bold>
</sub>
) and</p>
<p>
<disp-formula id="eq58">
<inline-graphic id="d33e1118" xlink:href="srep27946-m58.jpg"></inline-graphic>
</disp-formula>
</p>
<p>This result of
<xref ref-type="disp-formula" rid="eq58">equation (20)</xref>
clearly shows that stochastic resonance cannot further improve the channel capacity achieved by the optimum receiver, regardless of adding white or coloured noise vector
<bold>W</bold>
.</p>
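<p>A scalar Gaussian illustration of equation (20) (an added sketch, not the general proof): for Gaussian background noise <bold>Z</bold> of variance <italic>v</italic><sub><italic>Z</italic></sub>, adding independent Gaussian noise <bold>W</bold> of variance <italic>v</italic><sub><italic>W</italic></sub> gives <bold>U</bold> = <bold>Z</bold> + <bold>W</bold> with Fisher information 1/(<italic>v</italic><sub><italic>Z</italic></sub> + <italic>v</italic><sub><italic>W</italic></sub>), which can never exceed the original 1/<italic>v</italic><sub><italic>Z</italic></sub>:</p>

```python
# Fisher information of U = Z + W for scalar Gaussian Z and W
vz = 1.0                                     # background noise variance
lams = [1.0 / (vz + vw) for vw in (0.0, 0.5, 1.0, 2.0)]

print(lams)        # monotonically decreasing in the amount of added noise
assert lams[0] == 1.0 / vz                   # best case: no extra noise
assert all(a >= b for a, b in zip(lams, lams[1:]))
```
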
<p>Thirdly, we note that the above two negative results on the noise-enhanced capacity effect arise when the optimum receiver is matched to the distribution of the background noise. By contrast, if the generalized correlation receivers of
<xref ref-type="disp-formula" rid="eq21">equation (8)</xref>
are not optimal for the background noise, stochastic resonance may play an important role in enhancing the capacity. For example, we consider a non-scaled Gaussian mixture noise vector
<bold>W</bold>
with its distribution
<xref ref-type="bibr" rid="b6">6</xref>
<xref ref-type="bibr" rid="b21">21</xref>
<xref ref-type="bibr" rid="b28">28</xref>
<xref ref-type="bibr" rid="b33">33</xref>
</p>
<p>
<disp-formula id="eq59">
<inline-graphic id="d33e1138" xlink:href="srep27946-m59.jpg"></inline-graphic>
</disp-formula>
</p>
<p>where the variance
<inline-formula id="d33e1141">
<inline-graphic id="d33e1142" xlink:href="srep27946-m60.jpg"></inline-graphic>
</inline-formula>
and parameters
<italic>μ</italic>
,
<italic>ζ</italic>
 ≥ 0. A useful coloured noise model is the first-order moving-average process
<xref ref-type="bibr" rid="b41">41</xref>
as</p>
<p>
<disp-formula id="eq61">
<inline-graphic id="d33e1154" xlink:href="srep27946-m61.jpg"></inline-graphic>
</disp-formula>
</p>
<p>where the correlation coefficients are
<italic>ρ</italic>
<sub>1,2</sub>
and
<inline-formula id="d33e1162">
<inline-graphic id="d33e1163" xlink:href="srep27946-m62.jpg"></inline-graphic>
</inline-formula>
is an independent identically distributed (i.i.d.) random vector. For small values of
<italic>ρ</italic>
<sub>1,2</sub>
<inline-formula id="d33e1170">
<inline-graphic id="d33e1171" xlink:href="srep27946-m63.jpg"></inline-graphic>
</inline-formula>
, the dependence among noise samples
<italic>Z</italic>
<sub>
<italic>n</italic>
</sub>
will be weak
<xref ref-type="bibr" rid="b41">41</xref>
. The signum function
<italic>g</italic>
(
<italic>x</italic>
) = sign(
<italic>x</italic>
) is adopted to construct the generalized correlation receiver of
<xref ref-type="disp-formula" rid="eq21">equation (8)</xref>
, which is not optimal for the coloured noise
<bold>Z</bold>
. The optimum receiver indicated in
<xref ref-type="disp-formula" rid="eq17">equation (3)</xref>
for the coloured noise
<bold>Z</bold>
is rather complicated, since the distribution
<italic>f</italic>
<sub>
<bold>Z</bold>
</sub>
does not have a tractable analytic expression
<xref ref-type="bibr" rid="b41">41</xref>
. Using the approach developed in ref.
<xref ref-type="bibr" rid="b41">41</xref>
, we have the expectation matrix</p>
<p>
<disp-formula id="eq64">
<inline-graphic id="d33e1218" xlink:href="srep27946-m64.jpg"></inline-graphic>
</disp-formula>
</p>
<p>with the unit matrix
<bold>I</bold>
and
<inline-formula id="d33e1225">
<inline-graphic id="d33e1226" xlink:href="srep27946-m65.jpg"></inline-graphic>
</inline-formula>
, and the matrix
<bold>V</bold>
becomes tridiagonal with elements</p>
<p>
<disp-formula id="eq66">
<inline-graphic id="d33e1233" xlink:href="srep27946-m66.jpg"></inline-graphic>
</disp-formula>
</p>
<p>
<disp-formula id="eq67">
<inline-graphic id="d33e1236" xlink:href="srep27946-m67.jpg"></inline-graphic>
</disp-formula>
</p>
<p>for
<inline-formula id="d33e1239">
<inline-graphic id="d33e1240" xlink:href="srep27946-m68.jpg"></inline-graphic>
</inline-formula>
, and other elements are higher-order infinitesimals of
<italic>ρ</italic>
<sub>1</sub>
 + 
<italic>ρ</italic>
<sub>2</sub>
<inline-formula id="d33e1252">
<inline-graphic id="d33e1253" xlink:href="srep27946-m69.jpg"></inline-graphic>
</inline-formula>
. Then, we calculate the largest eigenvalue of the matrix
<inline-formula id="d33e1256">
<inline-graphic id="d33e1257" xlink:href="srep27946-m70.jpg"></inline-graphic>
</inline-formula>
as</p>
<p>
<disp-formula id="eq71">
<inline-graphic id="d33e1261" xlink:href="srep27946-m71.jpg"></inline-graphic>
</disp-formula>
</p>
<p>where the error function
<inline-formula id="d33e1264">
<inline-graphic id="d33e1265" xlink:href="srep27946-m72.jpg"></inline-graphic>
</inline-formula>
. In
<xref ref-type="fig" rid="f2">Fig. 2</xref>
, we show the capacity per signal energy
<italic>C</italic>
<sub>
<italic>g</italic>
</sub>
/
<italic>ε</italic>
 = Λ
<sub>
<italic>g</italic>
</sub>
/2 in
<xref ref-type="disp-formula" rid="eq76">equation (11)</xref>
versus the noise parameters
<italic>μ</italic>
and
<italic>ζ</italic>
in
<xref ref-type="disp-formula" rid="eq59">equation (21)</xref>
. Here, the correlation coefficient
<italic>ρ</italic>
<sub>1</sub>
 = 0.2 and
<italic>ρ</italic>
<sub>2</sub>
 = 0 in the coloured noise model of
<xref ref-type="disp-formula" rid="eq61">equation (22)</xref>
. We regard the parameters ±
<italic>μ</italic>
as the peak locations of the Gaussian mixture distribution in
<xref ref-type="disp-formula" rid="eq59">equation (21)</xref>
, and the parameter
<italic>ζ</italic>
as the noise level. It is then clearly shown in
<xref ref-type="fig" rid="f2">Fig. 2</xref>
that, upon increasing
<italic>ζ</italic>
for a fixed value of
<italic>μ</italic>
(the noise variance
<inline-formula id="d33e1329">
<inline-graphic id="d33e1330" xlink:href="srep27946-m73.jpg"></inline-graphic>
</inline-formula>
also increases), the noise-enhanced capacity effects exist. The corresponding maxima of
<italic>C</italic>
<sub>
<italic>g</italic>
</sub>
/
<italic>ε</italic>
versus optimal values of
<italic>ζ</italic>
are also marked by squares in
<xref ref-type="fig" rid="f2">Fig. 2</xref>
.</p>
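<p>The behaviour of Fig. 2 can be reproduced in a simplified setting. The sketch below assumes i.i.d. Gaussian mixture noise, i.e. <italic>ρ</italic><sub>1</sub> = <italic>ρ</italic><sub>2</sub> = 0, a white-noise simplification of the example above. With <italic>g</italic>(<italic>x</italic>) = sign(<italic>x</italic>), E[<italic>g</italic>′(<italic>z</italic>)] = 2<italic>f</italic><sub><italic>W</italic></sub>(0) and E[<italic>g</italic>²(<italic>z</italic>)] = 1, so <italic>C</italic><sub><italic>g</italic></sub>/<italic>ε</italic> = 2<italic>f</italic><sub><italic>W</italic></sub>(0)², which peaks at the non-zero noise level <italic>ζ</italic> = <italic>μ</italic>:</p>

```python
import numpy as np

# i.i.d. Gaussian mixture noise: f_W(w) = 0.5 N(-mu, zeta^2) + 0.5 N(mu, zeta^2)
mu = 1.0
zetas = np.linspace(0.05, 3.0, 600)
f0 = np.exp(-mu ** 2 / (2.0 * zetas ** 2)) / (np.sqrt(2.0 * np.pi) * zetas)
cap = 2.0 * f0 ** 2                       # C_g/eps = (2 f_W(0))^2 / 2

zeta_opt = zetas[np.argmax(cap)]
print(zeta_opt)                           # close to mu = 1: noise helps
assert abs(zeta_opt - mu) < 0.05          # capacity peaks at non-zero noise
assert cap.max() > cap[0]                 # better than the small-noise limit
```
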
<p>We emphasize that the above noise-enhanced capacity effect is an illustrative case of stochastic resonance that exists for a suboptimal receiver not matched to the background noise. However, this mismatch condition is not the decision criterion for the occurrence of the noise-enhanced effect, since our illustration assumes a small signal and a correlation receiver with a large observation size. Beyond these restrictive assumptions, the noise-enhanced effect has been frequently observed
<xref ref-type="bibr" rid="b21">21</xref>
<xref ref-type="bibr" rid="b24">24</xref>
<xref ref-type="bibr" rid="b25">25</xref>
<xref ref-type="bibr" rid="b28">28</xref>
<xref ref-type="bibr" rid="b29">29</xref>
<xref ref-type="bibr" rid="b30">30</xref>
<xref ref-type="bibr" rid="b31">31</xref>
. For instance, the noise-enhanced effect has been demonstrated for non-weak signals in threshold neurons
<xref ref-type="bibr" rid="b25">25</xref>
<xref ref-type="bibr" rid="b29">29</xref>
<xref ref-type="bibr" rid="b31">31</xref>
, where an optimal matching condition is inapplicable to the neuronal model immersed in complex noisy environments. It is widely recognized that a well-established criterion for the noise-enhanced effect is to observe an optimal noise level at which the system response can be optimized.</p>
</sec>
</sec>
<sec disp-level="1">
<title>Discussion</title>
<p>In this paper, we analyse the capacity of a very noisy communication channel with correlation receivers. With the weak signal energy constraint and for very low SNR, we generalize an asymptotic expression of capacity achieved by the optimum receivers in a coloured noisy environment. Moreover, for the case when the optimum receiver is unavailable in practice, a capacity formula is presented for the communication channel with a generalized correlation receiver. We further discuss the occurrence condition of the noise-enhanced capacity effect in the considered communication channel.</p>
<p>A similar asymptotic expression of capacity is also obtained in memoryless
<xref ref-type="bibr" rid="b10">10</xref>
<xref ref-type="bibr" rid="b11">11</xref>
or memory additive-noise channels
<xref ref-type="bibr" rid="b12">12</xref>
<xref ref-type="bibr" rid="b13">13</xref>
<xref ref-type="bibr" rid="b14">14</xref>
. We emphasize the asymptotic capacity expressions of
<xref ref-type="disp-formula" rid="eq12">equations (6</xref>
) and (
<xref ref-type="disp-formula" rid="eq76">11</xref>
) are different from that in previous literature
<xref ref-type="bibr" rid="b10">10</xref>
<xref ref-type="bibr" rid="b11">11</xref>
<xref ref-type="bibr" rid="b12">12</xref>
<xref ref-type="bibr" rid="b13">13</xref>
<xref ref-type="bibr" rid="b14">14</xref>
. In
<xref ref-type="fig" rid="f1">Fig. 1</xref>
, for the channel output
<bold>Y</bold>
 = 
<italic>g</italic>
(
<bold>X</bold>
), these studies assume the conditional probability density as
<italic>f</italic>
<sub>
<bold>Y</bold>
|
<bold>S</bold>
</sub>
(
<bold>y</bold>
|
<bold>s</bold>
). Then, the Fisher information matrix is defined as
<xref ref-type="bibr" rid="b10">10</xref>
<xref ref-type="bibr" rid="b11">11</xref>
<xref ref-type="bibr" rid="b12">12</xref>
<xref ref-type="bibr" rid="b13">13</xref>
<xref ref-type="bibr" rid="b14">14</xref>
</p>
<p>
<disp-formula id="eq74">
<inline-graphic id="d33e1405" xlink:href="srep27946-m74.jpg"></inline-graphic>
</disp-formula>
</p>
<p>with the operator
<inline-formula id="d33e1408">
<inline-graphic id="d33e1409" xlink:href="srep27946-m75.jpg"></inline-graphic>
</inline-formula>
. Then, for the zero-mean signal vector E
<sub>
<bold>S</bold>
</sub>
(
<bold>s</bold>
) = 
<bold>0</bold>
and the weak signal energy
<italic>ε</italic>
, the mutual information between the input space
<bold>Φ</bold>
and the output space
<bold>Ψ</bold>
is approximated as
<xref ref-type="bibr" rid="b10">10</xref>
<xref ref-type="bibr" rid="b11">11</xref>
<xref ref-type="bibr" rid="b12">12</xref>
<xref ref-type="bibr" rid="b13">13</xref>
<xref ref-type="bibr" rid="b14">14</xref>
</p>
<p>
<disp-formula id="eq76">
<inline-graphic id="d33e1434" xlink:href="srep27946-m76.jpg"></inline-graphic>
</disp-formula>
</p>
<p>which is different from the mutual information
<italic>I</italic>
(
<bold>Φ</bold>
, Ω) of
<xref ref-type="disp-formula" rid="eq23">equation (4)</xref>
based on the Fisher information matrix
<bold>J</bold>
(
<italic>f</italic>
<sub>
<bold>Z</bold>
</sub>
) of the noise distribution
<italic>f</italic>
<sub>
<bold>Z</bold>
</sub>
. It is shown in
<xref ref-type="fig" rid="f1">Fig. 1</xref>
that the receiver weights the nonlinear transformation
<italic>g</italic>
(
<bold>x</bold>
) with optimized coefficients, and obtains a cumulative statistic
<italic>T</italic>
<sub>
<italic>m</italic>
</sub>
that decides whether the
<italic>m</italic>
th signal
<bold>S</bold>
<sub>
<italic>m</italic>
</sub>
is sent or not. Then, the considered communication channel chooses an optimal signal
<bold>S</bold>
<sub>
<italic>m</italic>
</sub>
from the signal space to maximize the average mutual information. Since the receiver collects the weighted nonlinear outputs into the statistic
<inline-formula id="d33e1493">
<inline-graphic id="d33e1494" xlink:href="srep27946-m77.jpg"></inline-graphic>
</inline-formula>
, for any nonlinear function
<italic>g</italic>
, the distribution of
<italic>T</italic>
<sub>
<italic>m</italic>
</sub>
tends to be Gaussian. This leads to the asymptotic expressions of capacity of
<xref ref-type="disp-formula" rid="eq12">equations (6</xref>
) and (
<xref ref-type="disp-formula" rid="eq76">11</xref>
). We recognize that the asymptotic capacity expressions in
<xref ref-type="disp-formula" rid="eq12">equations (6</xref>
) and (
<xref ref-type="disp-formula" rid="eq76">11</xref>
) have application in the context of a very noisy communication channel with a correlation receiver. As a new analytical result of the channel capacity, it has theoretical significance and deserves some exposition.</p>
<p>We also note that, for the linear transfer function of
<bold>Y</bold>
 = 
<bold>Z</bold>
 + 
<bold>S</bold>
, the conditional probability density
<italic>f</italic>
<sub>
<bold>Y</bold>
|
<bold>S</bold>
</sub>
(
<bold>y</bold>
|
<bold>s</bold>
) = 
<italic>f</italic>
<sub>
<bold>Z</bold>
</sub>
(
<bold>y</bold>
 − 
<bold>s</bold>
), the Fisher information matrix of
<xref ref-type="disp-formula" rid="eq74">equation (27)</xref>
becomes</p>
<p>
<disp-formula id="eq78">
<inline-graphic id="d33e1562" xlink:href="srep27946-m78.jpg"></inline-graphic>
</disp-formula>
</p>
<p>where the differentiation operator
<inline-formula id="d33e1565">
<inline-graphic id="d33e1566" xlink:href="srep27946-m79.jpg"></inline-graphic>
</inline-formula>
with respect to
<bold>S</bold>
is equivalent to differentiation with respect to
<bold>Z</bold>
<xref ref-type="bibr" rid="b3">3</xref>
. Therefore, for the linear additive-noise channel, the considered communication channel has the same capacity as that denoted in refs
<xref ref-type="bibr" rid="b10">10</xref>
,
<xref ref-type="bibr" rid="b11">11</xref>
,
<xref ref-type="bibr" rid="b12">12</xref>
,
<xref ref-type="bibr" rid="b13">13</xref>
,
<xref ref-type="bibr" rid="b14">14</xref>
.</p>
<p>Besides the linear channel capacity defined and calculated by Shannon
<xref ref-type="bibr" rid="b1">1</xref>
, only a few analytical results exist for the variety of different nonlinear channel models. We argue that our asymptotic capacity expression for a nonlinear channel may be valuable for practical channels and for coding techniques designed to approach the established linear Shannon limit, and it deserves further extensive study. Here we only consider a single correlation receiver for detecting the weak signal; however, recent studies provide evidence that, besides an optimal noise intensity, there exists an optimal network configuration at which the best system response can be obtained
<xref ref-type="bibr" rid="b22">22</xref>
<xref ref-type="bibr" rid="b31">31</xref>
<xref ref-type="bibr" rid="b42">42</xref>
<xref ref-type="bibr" rid="b43">43</xref>
<xref ref-type="bibr" rid="b44">44</xref>
<xref ref-type="bibr" rid="b45">45</xref>
<xref ref-type="bibr" rid="b46">46</xref>
. Thus, an interesting extension for future work is to investigate the capacity of a very noisy communication channel with receivers connected in various network configurations.</p>
</sec>
<sec disp-level="1">
<title>Methods</title>
<sec disp-level="2">
<title>Very noisy communication channel model</title>
<p>Consider a coherent
<italic>M</italic>
-ary communication channel transmitting
<italic>M</italic>
possible signals
<bold>S</bold>
<sub>
<italic>m</italic>
</sub>
for
<inline-formula id="d33e1617">
<inline-graphic id="d33e1618" xlink:href="srep27946-m80.jpg"></inline-graphic>
</inline-formula>
, as shown in
<xref ref-type="fig" rid="f1">Fig. 1</xref>
. In an interval, the observation vector</p>
<p>
<disp-formula id="eq81">
<inline-graphic id="d33e1625" xlink:href="srep27946-m81.jpg"></inline-graphic>
</disp-formula>
</p>
<p>where
<inline-formula id="d33e1628">
<inline-graphic id="d33e1629" xlink:href="srep27946-m82.jpg"></inline-graphic>
</inline-formula>
contains the noise vector
<inline-formula id="d33e1631">
<inline-graphic id="d33e1632" xlink:href="srep27946-m83.jpg"></inline-graphic>
</inline-formula>
and the signal vector
<inline-formula id="d33e1634">
<inline-graphic id="d33e1635" xlink:href="srep27946-m84.jpg"></inline-graphic>
</inline-formula>
. Then, a receiver weights the transformation
<italic>g</italic>
(
<bold>X</bold>
) with optimized coefficients, resulting in a cumulative statistic
<italic>T</italic>
<sub>
<italic>m</italic>
</sub>
(
<bold>X</bold>
) for deciding whether the
<italic>m</italic>
th signal
<bold>S</bold>
<sub>
<italic>m</italic>
</sub>
is sent or not. The capacity
<italic>C</italic>
of a communication channel is given by the maximum of the mutual information
<italic>I</italic>
(
<bold>Φ</bold>
,
<bold>Ω</bold>
) between the input signal space
<bold>Φ</bold>
and the channel output space
<bold>Ω</bold>
</p>
<p>
<disp-formula id="eq85">
<inline-graphic id="d33e1682" xlink:href="srep27946-m85.jpg"></inline-graphic>
</disp-formula>
</p>
<p>where the maximization is with respect to the input distribution
<italic>f</italic>
<sub>
<bold>S</bold>
</sub>
over the signal space
<bold>Φ</bold>
<xref ref-type="bibr" rid="b1">1</xref>
<xref ref-type="bibr" rid="b2">2</xref>
<xref ref-type="bibr" rid="b3">3</xref>
<xref ref-type="bibr" rid="b4">4</xref>
<xref ref-type="bibr" rid="b5">5</xref>
.</p>
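This maximization over input distributions can be carried out numerically for a discrete channel. As an illustration added here (not part of the original analysis), the following sketch applies the standard Blahut-Arimoto iteration to a binary symmetric channel with crossover probability 0.1, whose capacity 1 − H₂(0.1) is known in closed form; all names and parameters are our own choices.

```python
import math

def blahut_arimoto(W, iters=500):
    """Capacity (bits) of a discrete channel W[x][y] = P(y|x),
    by Blahut-Arimoto iteration over the input distribution."""
    n, ny = len(W), len(W[0])
    p = [1.0 / n] * n  # start from the uniform input distribution
    for _ in range(iters):
        # q[y]: output distribution induced by the current input p
        q = [sum(p[x] * W[x][y] for x in range(n)) for y in range(ny)]
        # multiplicative update: p[x] proportional to p[x] * exp(D(W[x] || q))
        c = [math.exp(sum(W[x][y] * math.log(W[x][y] / q[y])
                          for y in range(ny) if W[x][y] > 0))
             for x in range(n)]
        Z = sum(p[x] * c[x] for x in range(n))
        p = [p[x] * c[x] / Z for x in range(n)]
    # mutual information achieved at convergence (in bits)
    q = [sum(p[x] * W[x][y] for x in range(n)) for y in range(ny)]
    I = sum(p[x] * W[x][y] * math.log2(W[x][y] / q[y])
            for x in range(n) for y in range(ny) if W[x][y] > 0)
    return I

eps = 0.1  # crossover probability of a binary symmetric channel
C = blahut_arimoto([[1 - eps, eps], [eps, 1 - eps]])
# closed form for comparison: C = 1 - H2(eps)
H2 = -eps * math.log2(eps) - (1 - eps) * math.log2(1 - eps)
```

For the symmetric channel the uniform input is already optimal, so the iteration converges immediately to the closed-form value.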
</sec>
<sec disp-level="2">
<title>Nirenberg’s approach for white noise</title>
<p>The white noise
<bold>Z</bold>
has the multivariate distribution
<inline-formula id="d33e1703">
<inline-graphic id="d33e1704" xlink:href="srep27946-m86.jpg"></inline-graphic>
</inline-formula>
with zero-mean and variance
<inline-formula id="d33e1706">
<inline-graphic id="d33e1707" xlink:href="srep27946-m87.jpg"></inline-graphic>
</inline-formula>
. Let the statistically independent signal components be constrained to satisfy
<inline-formula id="d33e1709">
<inline-graphic id="d33e1710" xlink:href="srep27946-m88.jpg"></inline-graphic>
</inline-formula>
, and the total signal energy is constrained as
<inline-formula id="d33e1712">
<inline-graphic id="d33e1713" xlink:href="srep27946-m89.jpg"></inline-graphic>
</inline-formula>
. Then, for very low SNR of
<inline-formula id="d33e1716">
<inline-graphic id="d33e1717" xlink:href="srep27946-m90.jpg"></inline-graphic>
</inline-formula>
, the conditional probability density can be approximated as</p>
<p>
<disp-formula id="eq91">
<inline-graphic id="d33e1721" xlink:href="srep27946-m91.jpg"></inline-graphic>
</disp-formula>
</p>
<p>obtained by retaining the first two terms of the Taylor series. Here,
<inline-formula id="d33e1724">
<inline-graphic id="d33e1725" xlink:href="srep27946-m92.jpg"></inline-graphic>
</inline-formula>
and the statistic
<inline-formula id="d33e1727">
<inline-graphic id="d33e1728" xlink:href="srep27946-m93.jpg"></inline-graphic>
</inline-formula>
<inline-formula id="d33e1729">
<inline-graphic id="d33e1730" xlink:href="srep27946-m94.jpg"></inline-graphic>
</inline-formula>
<xref ref-type="bibr" rid="b5">5</xref>
. Using the maximum likelihood rule
<xref ref-type="bibr" rid="b17">17</xref>
, the conditional probability density given that the
<italic>m</italic>
th signal was sent satisfies</p>
<p>
<disp-formula id="eq95">
<inline-graphic id="d33e1740" xlink:href="srep27946-m95.jpg"></inline-graphic>
</disp-formula>
</p>
<p>which leads to the optimum receiver</p>
<p>
<disp-formula id="eq96">
<inline-graphic id="d33e1745" xlink:href="srep27946-m96.jpg"></inline-graphic>
</disp-formula>
</p>
<p>to decide if the
<italic>m</italic>
th signal was sent.</p>
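As a concrete illustration of this decision rule (a sketch we add for clarity, with assumed signal and noise parameters), the simulation below decides which of <italic>M</italic> orthogonal weak signals was sent by choosing the largest correlation statistic over the transformed samples, using the locally optimal nonlinearity <italic>g</italic> = −<italic>f</italic>′<sub><italic>z</italic></sub>/<italic>f</italic><sub><italic>z</italic></sub>; for Laplacian noise of scale <italic>b</italic> this reduces to <italic>g</italic>(<italic>x</italic>) = sign(<italic>x</italic>)/<italic>b</italic>.

```python
import math
import random

random.seed(1)
b = 1.0                  # Laplacian noise scale (variance 2*b**2); assumed value
N, M, A = 64, 4, 0.8     # samples per interval, alphabet size, weak amplitude

# M orthogonal signal vectors: one "active" block of N/M samples per symbol
S = [[A if k // (N // M) == m else 0.0 for k in range(N)] for m in range(M)]

def g(x):
    # locally optimal nonlinearity g = -f'/f for the Laplacian density
    return math.copysign(1.0 / b, x)

def detect(x):
    # correlation receiver on the transformed samples: T_m = sum_k s_mk g(x_k)
    T = [sum(S[m][k] * g(x[k]) for k in range(N)) for m in range(M)]
    return max(range(M), key=lambda m: T[m])

trials, correct = 400, 0
for _ in range(trials):
    m = random.randrange(M)
    # Laplacian noise sample: difference of two independent exponentials
    x = [S[m][k] + random.expovariate(1 / b) - random.expovariate(1 / b)
         for k in range(N)]
    correct += (detect(x) == m)
rate = correct / trials  # should clearly beat chance level 1/M = 0.25
```

Despite the per-sample SNR being low, accumulating the transformed samples over the interval makes the correct symbol win most of the time.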
<p>To simplify the mathematical manipulations, Nirenberg
<xref ref-type="bibr" rid="b5">5</xref>
assumes an even noise density function
<italic>f</italic>
<sub>
<italic>z</italic>
</sub>
(
<italic>z</italic>
) = 
<italic>f</italic>
<sub>
<italic>z</italic>
</sub>
(−
<italic>z</italic>
) and a very noisy channel
<inline-formula id="d33e1774">
<inline-graphic id="d33e1775" xlink:href="srep27946-m97.jpg"></inline-graphic>
</inline-formula>
<xref ref-type="bibr" rid="b2">2</xref>
<xref ref-type="bibr" rid="b8">8</xref>
<xref ref-type="bibr" rid="b9">9</xref>
yielding the mutual information between the output space
<bold>Ω</bold>
and the input signal space
<bold>Φ</bold>
as</p>
<p>
<disp-formula id="eq98">
<inline-graphic id="d33e1786" xlink:href="srep27946-m98.jpg"></inline-graphic>
</disp-formula>
</p>
<p>where the Fisher information
<inline-formula id="d33e1789">
<inline-graphic id="d33e1790" xlink:href="srep27946-m99.jpg"></inline-graphic>
</inline-formula>
of the noise density
<italic>f</italic>
<sub>
<italic>z</italic>
</sub>
and the expectation
<inline-formula id="d33e1798">
<inline-graphic id="d33e1799" xlink:href="srep27946-m100.jpg"></inline-graphic>
</inline-formula>
. Since the same bias
<inline-formula id="d33e1801">
<inline-graphic id="d33e1802" xlink:href="srep27946-m101.jpg"></inline-graphic>
</inline-formula>
does not affect the decision inequality of
<xref ref-type="disp-formula" rid="eq96">equation (34)</xref>
, it may conveniently be set to zero
<xref ref-type="bibr" rid="b5">5</xref>
. Then, over the class of signal distributions
<italic>f</italic>
<sub>
<bold>S</bold>
</sub>
, the channel capacity is computed as
<xref ref-type="bibr" rid="b5">5</xref>
</p>
<p>
<disp-formula id="eq102">
<inline-graphic id="d33e1820" xlink:href="srep27946-m102.jpg"></inline-graphic>
</disp-formula>
</p>
<p>which is applicable to various white noise types.</p>
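The Fisher information <italic>I</italic>(<italic>f</italic><sub><italic>z</italic></sub>) entering this capacity expression can be evaluated by numerically integrating (<italic>f</italic>′<sub><italic>z</italic></sub>)²/<italic>f</italic><sub><italic>z</italic></sub>. The sketch below (our illustration, with assumed densities) recovers the Gaussian value 1/<italic>σ</italic>² and shows that a Laplacian density of the same variance carries twice that information, consistent with Gaussian noise being the worst case.

```python
import math

def fisher_information(f, df, lo=-40.0, hi=40.0, n=200_000):
    """I(f) = integral of f'(z)**2 / f(z) dz, by the midpoint rule."""
    h = (hi - lo) / n
    total = 0.0
    for i in range(n):
        z = lo + (i + 0.5) * h
        fz = f(z)
        if fz > 1e-300:  # avoid division by a vanishing density in the tails
            total += df(z) ** 2 / fz * h
    return total

sigma = 2.0
# Gaussian density of variance sigma**2: I = 1/sigma**2
gauss = lambda z: math.exp(-z * z / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))
dgauss = lambda z: -z / sigma**2 * gauss(z)
I_gauss = fisher_information(gauss, dgauss)

# Laplacian density with the same variance (scale b, variance 2*b**2): I = 1/b**2
b = sigma / math.sqrt(2.0)
lap = lambda z: math.exp(-abs(z) / b) / (2 * b)
dlap = lambda z: -math.copysign(1.0, z) / b * lap(z)
I_lap = fisher_information(lap, dlap)
```

At equal variance, I_lap = 2/σ² exceeds the Gaussian minimum I_gauss = 1/σ², so by equation (36) the Laplacian channel supports twice the low-SNR capacity.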
<p>Furthermore, for a fixed noise variance
<inline-formula id="d33e1825">
<inline-graphic id="d33e1826" xlink:href="srep27946-m103.jpg"></inline-graphic>
</inline-formula>
and an arbitrary noise density function
<italic>f</italic>
<sub>
<italic>z</italic>
</sub>
,
<inline-formula id="d33e1834">
<inline-graphic id="d33e1835" xlink:href="srep27946-m104.jpg"></inline-graphic>
</inline-formula>
<xref ref-type="bibr" rid="b17">17</xref>
, where equality holds for the Gaussian distribution
<inline-formula id="d33e1838">
<inline-graphic id="d33e1839" xlink:href="srep27946-m105.jpg"></inline-graphic>
</inline-formula>
with its Fisher information
<inline-formula id="d33e1842">
<inline-graphic id="d33e1843" xlink:href="srep27946-m106.jpg"></inline-graphic>
</inline-formula>
<xref ref-type="bibr" rid="b7">7</xref>
<xref ref-type="bibr" rid="b18">18</xref>
<xref ref-type="bibr" rid="b37">37</xref>
. Accordingly, the additive Gaussian noise channel is the worst case, with the minimum capacity, as indicated in
<xref ref-type="disp-formula" rid="eq102">equation (36)</xref>
. It is well known that, for very low SNR of
<inline-formula id="d33e1849">
<inline-graphic id="d33e1850" xlink:href="srep27946-m107.jpg"></inline-graphic>
</inline-formula>
, the capacity of the Gaussian vector channel is approximately calculated as
<xref ref-type="bibr" rid="b2">2</xref>
<xref ref-type="bibr" rid="b3">3</xref>
<xref ref-type="bibr" rid="b4">4</xref>
</p>
<p>
<disp-formula id="eq108">
<inline-graphic id="d33e1855" xlink:href="srep27946-m108.jpg"></inline-graphic>
</disp-formula>
</p>
<p>which accords well with equation (36)
<xref ref-type="bibr" rid="b5">5</xref>
.</p>
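This agreement in the Gaussian case rests on the expansion ln(1 + <italic>x</italic>) ≈ <italic>x</italic> for small <italic>x</italic>: the exact Gaussian capacity (1/2)ln(1 + <italic>E</italic>/<italic>σ</italic>²) nats reduces at low SNR to its leading term <italic>E</italic>/(2<italic>σ</italic>²), which is the Fisher-information form with <italic>I</italic>(<italic>f</italic><sub><italic>z</italic></sub>) = 1/<italic>σ</italic>². A quick numeric check (our addition, with assumed SNR values):

```python
import math

sigma2 = 1.0
errs = []
for snr in (1e-2, 1e-3, 1e-4):  # very low SNR: E/sigma2 << 1
    E = snr * sigma2
    exact = 0.5 * math.log(1.0 + E / sigma2)  # Gaussian capacity, nats
    approx = E / (2.0 * sigma2)               # Fisher-information form
    errs.append(abs(exact - approx) / exact)  # relative error ~ snr/2
```

The relative error shrinks in proportion to the SNR, confirming that the two expressions coincide in the very noisy limit.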
</sec>
</sec>
<sec disp-level="1">
<title>Additional Information</title>
<p>
<bold>How to cite this article</bold>
: Duan, F.
<italic>et al.</italic>
Capacity of very noisy communication channels based on Fisher information.
<italic>Sci. Rep.</italic>
<bold>6</bold>
, 27946; doi: 10.1038/srep27946 (2016).</p>
</sec>
</body>
<back>
<ack>
<p>This work is sponsored by the National Natural Science Foundation of China (No. 61573202), the Science and Technology Development Program of Shandong Province (No. 2014GGX101031), and the China Ship Research & Development Academy.</p>
</ack>
<ref-list>
<ref id="b1">
<mixed-citation publication-type="journal">
<name>
<surname>Shannon</surname>
<given-names>C. E.</given-names>
</name>
<article-title>A mathematical theory of communication</article-title>
.
<source>Bell System Technical Journal</source>
<volume>27</volume>
,
<fpage>379</fpage>
<lpage>423</lpage>
(
<year>1948</year>
).</mixed-citation>
</ref>
<ref id="b2">
<mixed-citation publication-type="journal">
<name>
<surname>Gallager</surname>
<given-names>R. G.</given-names>
</name>
<source>Information Theory and Reliable Communication</source>
(Wiley, New York,
<year>1968</year>
).</mixed-citation>
</ref>
<ref id="b3">
<mixed-citation publication-type="journal">
<name>
<surname>Cover</surname>
<given-names>T. M.</given-names>
</name>
&
<name>
<surname>Thomas</surname>
<given-names>J. A.</given-names>
</name>
<source>Elements of Information Theory</source>
(Wiley, New York,
<year>1991</year>
).</mixed-citation>
</ref>
<ref id="b4">
<mixed-citation publication-type="journal">
<name>
<surname>Yeung</surname>
<given-names>R. W.</given-names>
</name>
<source>Information Theory and Network Coding</source>
(Springer, New York,
<year>2008</year>
).</mixed-citation>
</ref>
<ref id="b5">
<mixed-citation publication-type="journal">
<name>
<surname>Nirenberg</surname>
<given-names>L. M.</given-names>
</name>
<article-title>Low SNR digital communication over certain additive non-Gaussian channels</article-title>
.
<source>IEEE Transactions on Communications</source>
<volume>23</volume>
,
<fpage>332</fpage>
<lpage>341</lpage>
(
<year>1975</year>
).</mixed-citation>
</ref>
<ref id="b6">
<mixed-citation publication-type="journal">
<name>
<surname>Kassam</surname>
<given-names>S. A.</given-names>
</name>
<source>Signal Detection in Non-Gaussian Noise</source>
(Springer-Verlag, New York,
<year>1988</year>
).</mixed-citation>
</ref>
<ref id="b7">
<mixed-citation publication-type="journal">
<name>
<surname>Kay</surname>
<given-names>S.</given-names>
</name>
<source>Fundamentals of Statistical Signal Processing</source>
(Prentice-Hall, Englewood Cliffs, New Jersey,
<year>1998</year>
).</mixed-citation>
</ref>
<ref id="b8">
<mixed-citation publication-type="journal">
<name>
<surname>Reiffen</surname>
<given-names>B.</given-names>
</name>
<article-title>A note on ‘very noisy’ channel</article-title>
.
<source>Information and Control</source>
<volume>6</volume>
,
<fpage>126</fpage>
<lpage>130</lpage>
(
<year>1963</year>
).</mixed-citation>
</ref>
<ref id="b9">
<mixed-citation publication-type="journal">
<name>
<surname>Majani</surname>
<given-names>E. E.</given-names>
</name>
<source>A Model for the Study of Very Noisy Channels and Applications</source>
(PhD. Thesis, California Institute of Technology, California,
<year>1988</year>
).</mixed-citation>
</ref>
<ref id="b10">
<mixed-citation publication-type="journal">
<name>
<surname>Kullback</surname>
<given-names>S.</given-names>
</name>
<source>Information Theory and Statistics</source>
(Dover, New York,
<year>1968</year>
).</mixed-citation>
</ref>
<ref id="b11">
<mixed-citation publication-type="journal">
<name>
<surname>Verdú</surname>
<given-names>S.</given-names>
</name>
<article-title>On channel capacity per unit cost</article-title>
.
<source>IEEE Transactions on Information Theory</source>
<volume>36</volume>
,
<fpage>1019</fpage>
<lpage>1030</lpage>
(
<year>1990</year>
).</mixed-citation>
</ref>
<ref id="b12">
<mixed-citation publication-type="journal">
<name>
<surname>Prelov</surname>
<given-names>V. V.</given-names>
</name>
&
<name>
<surname>van der Meulen</surname>
<given-names>E. C.</given-names>
</name>
<article-title>An asymptotic expression for the information and capacity of a multidimensional channel with weak input signals</article-title>
.
<source>IEEE Transactions on Information Theory</source>
<volume>39</volume>
,
<fpage>1728</fpage>
<lpage>1735</lpage>
(
<year>1993</year>
).</mixed-citation>
</ref>
<ref id="b13">
<mixed-citation publication-type="journal">
<name>
<surname>Kostal</surname>
<given-names>L.</given-names>
</name>
<article-title>Information capacity in the weak-signal approximation</article-title>
.
<source>Physical Review E</source>
<volume>82</volume>
,
<fpage>026115</fpage>
(
<year>2010</year>
).</mixed-citation>
</ref>
<ref id="b14">
<mixed-citation publication-type="journal">
<name>
<surname>Kostal</surname>
<given-names>L.</given-names>
</name>
&
<name>
<surname>Lansky</surname>
<given-names>P.</given-names>
</name>
<article-title>Information transfer for small-amplitude signals</article-title>
.
<source>Physical Review E</source>
<volume>81</volume>
,
<fpage>050901(R)</fpage>
(
<year>2010</year>
).</mixed-citation>
</ref>
<ref id="b15">
<mixed-citation publication-type="journal">
<name>
<surname>DiVincenzo</surname>
<given-names>D. P.</given-names>
</name>
,
<name>
<surname>Shor</surname>
<given-names>P. W.</given-names>
</name>
&
<name>
<surname>Smolin</surname>
<given-names>J. A.</given-names>
</name>
<article-title>Quantum-channel capacity of very noisy channels</article-title>
.
<source>Physical Review A</source>
<volume>57</volume>
,
<fpage>830</fpage>
<lpage>838</lpage>
(
<year>1998</year>
).</mixed-citation>
</ref>
<ref id="b16">
<mixed-citation publication-type="journal">
<name>
<surname>Abdel-Ghaffar</surname>
<given-names>K.</given-names>
</name>
&
<name>
<surname>McEliece</surname>
<given-names>R. J.</given-names>
</name>
<article-title>The ultimate limits of information density</article-title>
.
<source>Proceedings of the NATO Advanced Study Institute on Performance Limits in Communication Theory and Practice</source>
, Il Ciocco, Italy,
<volume>142</volume>
,
<fpage>267</fpage>
<lpage>279</lpage>
(
<year>1986</year>
).</mixed-citation>
</ref>
<ref id="b17">
<mixed-citation publication-type="journal">
<name>
<surname>Huber</surname>
<given-names>P. J.</given-names>
</name>
<source>Robust Statistics</source>
(Wiley, New York,
<year>1981</year>
).</mixed-citation>
</ref>
<ref id="b18">
<mixed-citation publication-type="journal">
<name>
<surname>Stam</surname>
<given-names>A. J.</given-names>
</name>
<article-title>Some inequalities satisfied by the quantities of information of Fisher and Shannon</article-title>
.
<source>Information and Control</source>
<volume>2</volume>
,
<fpage>101</fpage>
<lpage>112</lpage>
(
<year>1959</year>
).</mixed-citation>
</ref>
<ref id="b19">
<mixed-citation publication-type="journal">
<name>
<surname>Patel</surname>
<given-names>A.</given-names>
</name>
&
<name>
<surname>Kosko</surname>
<given-names>B.</given-names>
</name>
<article-title>Noise benefits in quantizer-array correlation detection and watermark decoding</article-title>
.
<source>IEEE Transactions on Signal Processing</source>
<volume>59</volume>
,
<fpage>488</fpage>
<lpage>505</lpage>
(
<year>2011</year>
).</mixed-citation>
</ref>
<ref id="b20">
<mixed-citation publication-type="journal">
<name>
<surname>Benzi</surname>
<given-names>R.</given-names>
</name>
,
<name>
<surname>Sutera</surname>
<given-names>A.</given-names>
</name>
&
<name>
<surname>Vulpiani</surname>
<given-names>A.</given-names>
</name>
<article-title>The mechanism of stochastic resonance</article-title>
.
<source>Journal of Physics A: Mathematical and General</source>
<volume>14</volume>
,
<fpage>L453</fpage>
<lpage>L457</lpage>
(
<year>1981</year>
).</mixed-citation>
</ref>
<ref id="b21">
<mixed-citation publication-type="journal">
<name>
<surname>Chapeau-Blondeau</surname>
<given-names>F.</given-names>
</name>
&
<name>
<surname>Godivier</surname>
<given-names>X.</given-names>
</name>
<article-title>Theory of stochastic resonance in signal transmission by static nonlinear systems</article-title>
.
<source>Physical Review E</source>
<volume>55</volume>
,
<fpage>1478</fpage>
<lpage>1495</lpage>
(
<year>1997</year>
).</mixed-citation>
</ref>
<ref id="b22">
<mixed-citation publication-type="journal">
<name>
<surname>Collins</surname>
<given-names>J. J.</given-names>
</name>
,
<name>
<surname>Chow</surname>
<given-names>C. C.</given-names>
</name>
&
<name>
<surname>Imhoff</surname>
<given-names>T. T.</given-names>
</name>
<article-title>Stochastic resonance without tuning</article-title>
.
<source>Nature</source>
<volume>376</volume>
,
<fpage>236</fpage>
<lpage>238</lpage>
(
<year>1995</year>
).
<pub-id pub-id-type="pmid">7617033</pub-id>
</mixed-citation>
</ref>
<ref id="b23">
<mixed-citation publication-type="journal">
<name>
<surname>Floriani</surname>
<given-names>E.</given-names>
</name>
,
<name>
<surname>Mannella</surname>
<given-names>R.</given-names>
</name>
&
<name>
<surname>Grigolini</surname>
<given-names>P.</given-names>
</name>
<article-title>Noise-induced transition from anomalous to ordinary diffusion: The crossover time as a function of noise intensity</article-title>
.
<source>Physical Review E</source>
<volume>52</volume>
,
<fpage>5910</fpage>
<lpage>5917</lpage>
(
<year>1995</year>
).</mixed-citation>
</ref>
<ref id="b24">
<mixed-citation publication-type="journal">
<name>
<surname>Bulsara</surname>
<given-names>A. R.</given-names>
</name>
&
<name>
<surname>Zador</surname>
<given-names>A.</given-names>
</name>
<article-title>Threshold detection of wideband signals: A noise-induced maximum in the mutual information</article-title>
.
<source>Physical Review E</source>
<volume>54</volume>
,
<fpage>R2185</fpage>
<lpage>R2188</lpage>
(
<year>1996</year>
).</mixed-citation>
</ref>
<ref id="b25">
<mixed-citation publication-type="journal">
<name>
<surname>Greenwood</surname>
<given-names>P. E.</given-names>
</name>
,
<name>
<surname>Ward</surname>
<given-names>L. M.</given-names>
</name>
&
<name>
<surname>Wefelmeyer</surname>
<given-names>W.</given-names>
</name>
<article-title>Statistical analysis of stochastic resonance in a simple setting</article-title>
.
<source>Physical Review E</source>
<volume>60</volume>
,
<fpage>4687</fpage>
<lpage>4695</lpage>
(
<year>1999</year>
).</mixed-citation>
</ref>
<ref id="b26">
<mixed-citation publication-type="journal">
<name>
<surname>Duan</surname>
<given-names>F.</given-names>
</name>
,
<name>
<surname>Chapeau-Blondeau</surname>
<given-names>F.</given-names>
</name>
&
<name>
<surname>Abbott</surname>
<given-names>D.</given-names>
</name>
<article-title>Fisher information as a metric of locally optimal processing and stochastic resonance</article-title>
.
<source>PLoS One</source>
<volume>7</volume>
,
<fpage>e34282</fpage>
(
<year>2012</year>
).
<pub-id pub-id-type="pmid">22493686</pub-id>
</mixed-citation>
</ref>
<ref id="b27">
<mixed-citation publication-type="journal">
<name>
<surname>Gammaitoni</surname>
<given-names>L.</given-names>
</name>
,
<name>
<surname>Hänggi</surname>
<given-names>P.</given-names>
</name>
,
<name>
<surname>Jung</surname>
<given-names>P.</given-names>
</name>
&
<name>
<surname>Marchesoni</surname>
<given-names>F.</given-names>
</name>
<article-title>Stochastic resonance</article-title>
.
<source>Reviews of Modern Physics</source>
<volume>70</volume>
,
<fpage>233</fpage>
<lpage>287</lpage>
(
<year>1998</year>
).</mixed-citation>
</ref>
<ref id="b28">
<mixed-citation publication-type="journal">
<name>
<surname>Kay</surname>
<given-names>S.</given-names>
</name>
<article-title>Can detectability be improved by adding noise?</article-title>
<source>IEEE Signal Processing Letters</source>
<volume>7</volume>
,
<fpage>8</fpage>
<lpage>10</lpage>
(
<year>2000</year>
).</mixed-citation>
</ref>
<ref id="b29">
<mixed-citation publication-type="journal">
<name>
<surname>McDonnell</surname>
<given-names>M. D.</given-names>
</name>
,
<name>
<surname>Stocks</surname>
<given-names>N. G.</given-names>
</name>
,
<name>
<surname>Pearce</surname>
<given-names>C. E. M.</given-names>
</name>
&
<name>
<surname>Abbott</surname>
<given-names>D.</given-names>
</name>
<source>Stochastic Resonance: From Suprathreshold Stochastic Resonance to Stochastic Signal Quantization</source>
(Cambridge University Press, Cambridge,
<year>2008</year>
).</mixed-citation>
</ref>
<ref id="b30">
<mixed-citation publication-type="journal">
<name>
<surname>Moss</surname>
<given-names>F.</given-names>
</name>
,
<name>
<surname>Ward</surname>
<given-names>L. M.</given-names>
</name>
&
<name>
<surname>Sannita</surname>
<given-names>W. G.</given-names>
</name>
<article-title>Stochastic resonance and sensory information processing: A tutorial and review of application</article-title>
.
<source>Clinical NeuroPhysiology</source>
<volume>115</volume>
,
<fpage>267</fpage>
<lpage>281</lpage>
(
<year>2004</year>
).
<pub-id pub-id-type="pmid">14744566</pub-id>
</mixed-citation>
</ref>
<ref id="b31">
<mixed-citation publication-type="journal">
<name>
<surname>Stocks</surname>
<given-names>N. G.</given-names>
</name>
<article-title>Suprathreshold stochastic resonance in multilevel threshold systems</article-title>
.
<source>Physical Review Letters</source>
<volume>84</volume>
,
<fpage>2310</fpage>
<lpage>2313</lpage>
(
<year>2000</year>
).
<pub-id pub-id-type="pmid">11018872</pub-id>
</mixed-citation>
</ref>
<ref id="b32">
<mixed-citation publication-type="journal">
<name>
<surname>Martignoli</surname>
<given-names>S.</given-names>
</name>
,
<name>
<surname>Gomez</surname>
<given-names>F.</given-names>
</name>
&
<name>
<surname>Stoop</surname>
<given-names>R.</given-names>
</name>
<article-title>Pitch sensation involves stochastic resonance</article-title>
.
<source>Scientific Reports</source>
<volume>3</volume>
,
<fpage>2676</fpage>
(
<year>2013</year>
).
<pub-id pub-id-type="pmid">24045830</pub-id>
</mixed-citation>
</ref>
<ref id="b33">
<mixed-citation publication-type="journal">
<name>
<surname>Zozor</surname>
<given-names>S.</given-names>
</name>
&
<name>
<surname>Amblard</surname>
<given-names>P. O.</given-names>
</name>
<article-title>Stochastic resonance in locally optimal detectors</article-title>
.
<source>IEEE Transactions on Signal Processing</source>
<volume>51</volume>
,
<fpage>3177</fpage>
<lpage>3181</lpage>
(
<year>2003</year>
).</mixed-citation>
</ref>
<ref id="b34">
<mixed-citation publication-type="journal">
<name>
<surname>Duan</surname>
<given-names>F.</given-names>
</name>
,
<name>
<surname>Chapeau-Blondeau</surname>
<given-names>F.</given-names>
</name>
&
<name>
<surname>Abbott</surname>
<given-names>D.</given-names>
</name>
<article-title>Stochastic resonance with coloured noise for neural signal detection</article-title>
.
<source>PLoS One</source>
<volume>9</volume>
,
<fpage>e91345</fpage>
(
<year>2014</year>
).
<pub-id pub-id-type="pmid">24632853</pub-id>
</mixed-citation>
</ref>
<ref id="b35">
<mixed-citation publication-type="journal">
<name>
<surname>Czaplicka</surname>
<given-names>A.</given-names>
</name>
,
<name>
<surname>Holyst</surname>
<given-names>J. A.</given-names>
</name>
&
<name>
<surname>Sloot</surname>
<given-names>P. M. A.</given-names>
</name>
<article-title>Noise enhances information transfer in hierarchical networks</article-title>
.
<source>Scientific Reports</source>
<volume>3</volume>
,
<fpage>1223</fpage>
(
<year>2013</year>
).
<pub-id pub-id-type="pmid">23390574</pub-id>
</mixed-citation>
</ref>
<ref id="b36">
<mixed-citation publication-type="journal">
<name>
<surname>Han</surname>
<given-names>J.</given-names>
</name>
,
<name>
<surname>Liu</surname>
<given-names>H.</given-names>
</name>
,
<name>
<surname>Sun</surname>
<given-names>Q.</given-names>
</name>
&
<name>
<surname>Huang</surname>
<given-names>N.</given-names>
</name>
<article-title>Reconstruction of pulse noisy images via stochastic resonance</article-title>
.
<source>Scientific Reports</source>
<volume>5</volume>
,
<fpage>10616</fpage>
(
<year>2015</year>
).
<pub-id pub-id-type="pmid">26067911</pub-id>
</mixed-citation>
</ref>
<ref id="b37">
<mixed-citation publication-type="journal">
<name>
<surname>Blachman</surname>
<given-names>N. M.</given-names>
</name>
<article-title>The convolution inequality for entropy power</article-title>
.
<source>IEEE Transactions on Information Theory</source>
<volume>IT-11</volume>
,
<fpage>267</fpage>
<lpage>271</lpage>
(
<year>1965</year>
).</mixed-citation>
</ref>
<ref id="b38">
<mixed-citation publication-type="journal">
<name>
<surname>Dembo</surname>
<given-names>A.</given-names>
</name>
<article-title>Simple proof of the concavity of the entropy power with respect to added Gaussian noise</article-title>
.
<source>IEEE Transactions on Information Theory</source>
<volume>35</volume>
,
<fpage>887</fpage>
<lpage>888</lpage>
(
<year>1989</year>
).</mixed-citation>
</ref>
<ref id="b39">
<mixed-citation publication-type="journal">
<name>
<surname>Dembo</surname>
<given-names>A.</given-names>
</name>
,
<name>
<surname>Cover</surname>
<given-names>T. M.</given-names>
</name>
&
<name>
<surname>Thomas</surname>
<given-names>J. A.</given-names>
</name>
<article-title>Information theoretic inequalities</article-title>
.
<source>IEEE Transactions on Information Theory</source>
<volume>37</volume>
,
<fpage>1501</fpage>
<lpage>1518</lpage>
(
<year>1991</year>
).</mixed-citation>
</ref>
<ref id="b40">
<mixed-citation publication-type="journal">
<name>
<surname>Zamir</surname>
<given-names>R.</given-names>
</name>
<article-title>A proof of the Fisher information inequality via a data processing argument</article-title>
.
<source>IEEE Transactions on Information Theory</source>
<volume>44</volume>
,
<fpage>1246</fpage>
<lpage>1250</lpage>
(
<year>1998</year>
).</mixed-citation>
</ref>
<ref id="b41">
<mixed-citation publication-type="journal">
<name>
<surname>Poor</surname>
<given-names>H. V.</given-names>
</name>
<article-title>Signal detection in the presence of weakly dependent noise—Part I: Optimum detection</article-title>
.
<source>IEEE Transactions on Information Theory</source>
<volume>28</volume>
,
<fpage>735</fpage>
<lpage>744</lpage>
(
<year>1982</year>
).</mixed-citation>
</ref>
<ref id="b42">
<mixed-citation publication-type="journal">
<name>
<surname>Perc</surname>
<given-names>M.</given-names>
</name>
<article-title>Stochastic resonance on excitable small-world networks via a pacemaker</article-title>
.
<source>Physical Review E</source>
<volume>76</volume>
,
<fpage>066203</fpage>
(
<year>2007</year>
).</mixed-citation>
</ref>
<ref id="b43">
<mixed-citation publication-type="journal">
<name>
<surname>Benzi</surname>
<given-names>R.</given-names>
</name>
<article-title>Stochastic resonance in complex systems</article-title>
.
<source>Journal of Statistical Mechanics: Theory and Experiment</source>
<volume>1</volume>
,
<fpage>P01052</fpage>
(
<year>2009</year>
).</mixed-citation>
</ref>
<ref id="b44">
<mixed-citation publication-type="journal">
<name>
<surname>Perc</surname>
<given-names>M.</given-names>
</name>
<article-title>Stochastic resonance on weakly paced scale-free networks</article-title>
.
<source>Physical Review E</source>
<volume>78</volume>
,
<fpage>036105</fpage>
(
<year>2008</year>
).</mixed-citation>
</ref>
<ref id="b45">
<mixed-citation publication-type="journal">
<name>
<surname>Wang</surname>
<given-names>Q.</given-names>
</name>
,
<name>
<surname>Perc</surname>
<given-names>M.</given-names>
</name>
,
<name>
<surname>Duan</surname>
<given-names>Z.</given-names>
</name>
&
<name>
<surname>Chen</surname>
<given-names>G.</given-names>
</name>
<article-title>Delay-induced multiple stochastic resonances on scale-free neuronal networks</article-title>
.
<source>Chaos</source>
<volume>19</volume>
,
<fpage>023112</fpage>
(
<year>2009</year>
).
<pub-id pub-id-type="pmid">19566247</pub-id>
</mixed-citation>
</ref>
<ref id="b46">
<mixed-citation publication-type="journal">
<name>
<surname>Gan</surname>
<given-names>C.</given-names>
</name>
,
<name>
<surname>Perc</surname>
<given-names>M.</given-names>
</name>
&
<name>
<surname>Wang</surname>
<given-names>Q.</given-names>
</name>
<article-title>Delay-aided stochastic multiresonances on scale-free FitzHugh-Nagumo neuronal networks</article-title>
.
<source>Chinese Physics B</source>
<volume>19</volume>
,
<fpage>040508</fpage>
(
<year>2010</year>
).</mixed-citation>
</ref>
</ref-list>
<fn-group>
<fn>
<p>
<bold>Author Contributions</bold>
All authors performed the theoretical analyses and participated in the writing of the manuscript. F.D. performed the experiments.</p>
</fn>
</fn-group>
</back>
<floats-group>
<fig id="f1">
<label>Figure 1</label>
<caption>
<title>Mutual information
<italic>I</italic>
(
<bold>Φ</bold>
,
<bold>Ω</bold>
) of the communication channel and
<italic>I</italic>
(
<bold>Φ</bold>
,
<bold>Ψ</bold>
) of the nonlinear channel.</title>
</caption>
<graphic xlink:href="srep27946-f1"></graphic>
</fig>
<fig id="f2">
<label>Figure 2</label>
<caption>
<title>Stochastic resonance effect of the capacity per signal energy
<italic>C</italic>
<sub>
<italic>g</italic>
</sub>
/
<italic>ε</italic>
 = Λ
<sub>
<italic>g</italic>
</sub>
/2 in
<xref ref-type="disp-formula" rid="eq76">equation (11)</xref>
versus the noise parameters
<italic>μ</italic>
and
<italic>ζ</italic>
in
<xref ref-type="disp-formula" rid="eq59">equation (21)</xref>
.</title>
<p>Here, the correlation coefficient
<italic>ρ</italic>
<sub>1</sub>
 = 0.2 and
<italic>ρ</italic>
<sub>2</sub>
 = 0 in the coloured noise model of
<xref ref-type="disp-formula" rid="eq61">equation (22)</xref>
. The corresponding maxima of
<italic>C</italic>
<sub>
<italic>g</italic>
</sub>
/
<italic>ε</italic>
at the optimal values of
<italic>ζ</italic>
are also marked by squares.</p>
</caption>
<graphic xlink:href="srep27946-f2"></graphic>
</fig>
</floats-group>
</pmc>
</record>
