OCR exploration server

Warning: this site is under development!
Warning: this site is generated automatically from raw corpora.
The information has therefore not been validated.

Speeding up the training process of the MFNN by optimizing the hidden layers' outputs

Internal identifier: 002769 (Main/Merge); previous: 002768; next: 002770

Authors: Mingsheng Zhao [People's Republic of China]; Youshou Wu [People's Republic of China]; Xiaoqing Ding [People's Republic of China]

Source: Neurocomputing, vol. 11, no. 1, pp. 89-100 (Elsevier)

RBID : ISTEX:D47F5084898AD80018E1F288351784B0B45A84C5

Abstract

A new rapid and efficient learning algorithm (Optimizing the Hidden Layers' Outputs, OHLO Algorithm) for Multilayer Feedforward Neural Networks (MFNN) is proposed in this paper. In the process of learning, both the weights and the outputs of the hidden layers are optimized, and the networks are trained layer by layer. This differs from standard BP and other modified algorithms, which minimize the output errors only with respect to the weights. Experiments show that the training speed and the convergence stability of the proposed method are better than those of standard BP.
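
As an informal illustration of the layer-by-layer idea described in the abstract (a sketch only, not the paper's exact OHLO update rules, which are not reproduced here), the Python code below assumes a one-hidden-layer network with sigmoid units: the hidden outputs are first nudged toward target values that reduce the output error while the output weights are held fixed, and each layer is then trained to reproduce its own target. The network size, step sizes and the XOR toy data are illustrative assumptions.

# Informal sketch of training an MFNN layer by layer while also optimizing the
# hidden layer's outputs -- NOT the paper's exact OHLO algorithm.
# Assumptions (not from the source): one hidden layer, sigmoid units, hidden
# targets obtained by a gradient step on the hidden outputs with the output
# weights held fixed, and plain gradient steps to fit each layer.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy data: XOR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)

n_in, n_hid, n_out = 2, 4, 1
W1 = rng.normal(scale=0.5, size=(n_in, n_hid));  b1 = np.zeros(n_hid)
W2 = rng.normal(scale=0.5, size=(n_hid, n_out)); b2 = np.zeros(n_out)

eta_h, eta_w = 0.5, 0.5            # step sizes: hidden targets / weights

for epoch in range(5000):
    # Forward pass.
    H = sigmoid(X @ W1 + b1)       # hidden layer outputs
    O = sigmoid(H @ W2 + b2)       # network outputs

    # Step 1: optimize the hidden layer's outputs with W2 held fixed --
    # move H toward values that lower the output error.
    dO = (O - Y) * O * (1 - O)     # error signal at the output layer
    H_target = np.clip(H - eta_h * (dO @ W2.T), 1e-3, 1 - 1e-3)

    # Step 2: train the network layer by layer against these targets.
    # Output layer: fit H -> Y.
    W2 -= eta_w * H.T @ dO
    b2 -= eta_w * dO.sum(axis=0)
    # Hidden layer: fit X -> H_target.
    dH = (H - H_target) * H * (1 - H)
    W1 -= eta_w * X.T @ dH
    b1 -= eta_w * dH.sum(axis=0)

# If training converged, the outputs should be close to [0, 1, 1, 0].
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))

With these particular choices the hidden-layer update reduces to a scaled backpropagation step, so the sketch mainly shows where an explicit "optimize the hidden outputs" stage plugs into the training loop; the paper's own update rules for the hidden targets and weights may differ.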

Url: https://api.istex.fr/document/D47F5084898AD80018E1F288351784B0B45A84C5/fulltext/pdf
DOI: 10.1016/0925-2312(95)00025-9

Links toward previous steps (curation, corpus...)


Links to Exploration step

ISTEX:D47F5084898AD80018E1F288351784B0B45A84C5

The document in XML format

<record>
<TEI wicri:istexFullTextTei="biblStruct">
<teiHeader>
<fileDesc>
<titleStmt>
<title>Speeding up the training process of the MFNN by optimizing the hidden layers' outputs</title>
<author>
<name sortKey="Zhao, Mingsheng" sort="Zhao, Mingsheng" uniqKey="Zhao M" first="Mingsheng" last="Zhao">Mingsheng Zhao</name>
</author>
<author>
<name sortKey="Wu, Youshou" sort="Wu, Youshou" uniqKey="Wu Y" first="Youshou" last="Wu">Youshou Wu</name>
</author>
<author>
<name sortKey="Ding, Xiaoqing" sort="Ding, Xiaoqing" uniqKey="Ding X" first="Xiaoqing" last="Ding">Xiaoqing Ding</name>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">ISTEX</idno>
<idno type="RBID">ISTEX:D47F5084898AD80018E1F288351784B0B45A84C5</idno>
<date when="1996" year="1996">1996</date>
<idno type="doi">10.1016/0925-2312(95)00025-9</idno>
<idno type="url">https://api.istex.fr/document/D47F5084898AD80018E1F288351784B0B45A84C5/fulltext/pdf</idno>
<idno type="wicri:Area/Istex/Corpus">001E20</idno>
<idno type="wicri:Area/Istex/Curation">001C98</idno>
<idno type="wicri:Area/Istex/Checkpoint">001A36</idno>
<idno type="wicri:doubleKey">0925-2312:1996:Zhao M:speeding:up:the</idno>
<idno type="wicri:Area/Main/Merge">002769</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title level="a">Speeding up the training process of the MFNN by optimizing the hidden layers' outputs</title>
<author>
<name sortKey="Zhao, Mingsheng" sort="Zhao, Mingsheng" uniqKey="Zhao M" first="Mingsheng" last="Zhao">Mingsheng Zhao</name>
<affiliation wicri:level="1">
<country xml:lang="fr">République populaire de Chine</country>
<wicri:regionArea>Image Processing Division, Department of Electronic Engineering, Tsinghua University, Beijing 100084</wicri:regionArea>
<placeName>
<settlement type="city">Pékin</settlement>
</placeName>
</affiliation>
</author>
<author>
<name sortKey="Wu, Youshou" sort="Wu, Youshou" uniqKey="Wu Y" first="Youshou" last="Wu">Youshou Wu</name>
<affiliation wicri:level="1">
<country xml:lang="fr">République populaire de Chine</country>
<wicri:regionArea>Image Processing Division, Department of Electronic Engineering, Tsinghua University, Beijing 100084</wicri:regionArea>
<placeName>
<settlement type="city">Pékin</settlement>
</placeName>
</affiliation>
</author>
<author>
<name sortKey="Ding, Xiaoqing" sort="Ding, Xiaoqing" uniqKey="Ding X" first="Xiaoqing" last="Ding">Xiaoqing Ding</name>
<affiliation wicri:level="1">
<country xml:lang="fr">République populaire de Chine</country>
<wicri:regionArea>Image Processing Division, Department of Electronic Engineering, Tsinghua University, Beijing 100084</wicri:regionArea>
<placeName>
<settlement type="city">Pékin</settlement>
</placeName>
</affiliation>
</author>
</analytic>
<monogr></monogr>
<series>
<title level="j">Neurocomputing</title>
<title level="j" type="abbrev">NEUCOM</title>
<idno type="ISSN">0925-2312</idno>
<imprint>
<publisher>ELSEVIER</publisher>
<date type="published" when="1995">1995</date>
<biblScope unit="volume">11</biblScope>
<biblScope unit="issue">1</biblScope>
<biblScope unit="page" from="89">89</biblScope>
<biblScope unit="page" to="100">100</biblScope>
</imprint>
<idno type="ISSN">0925-2312</idno>
</series>
<idno type="istex">D47F5084898AD80018E1F288351784B0B45A84C5</idno>
<idno type="DOI">10.1016/0925-2312(95)00025-9</idno>
<idno type="PII">0925-2312(95)00025-9</idno>
</biblStruct>
</sourceDesc>
<seriesStmt>
<idno type="ISSN">0925-2312</idno>
</seriesStmt>
</fileDesc>
<profileDesc>
<textClass></textClass>
<langUsage>
<language ident="en">en</language>
</langUsage>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">A new rapid and efficient learning algorithm (Optimizing the Hidden Layers' Outputs, OHLO Algorithm) for Multilayer Feedforward Neural Networks (MFNN) is proposed in this paper. In the process of learning, both the weights and the outputs of hidden layers are optimized. The networks are trained layer by layer. This is different from the standard BP and other modified algorithms which minimized the outputs errors only with respect to the weights. Experiments show that the training speed and the convergence stability of the proposed method are better than that of standard BP.</div>
</front>
</TEI>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/OcrV1/Data/Main/Merge
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 002769 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Main/Merge/biblio.hfd -nk 002769 | SxmlIndent | more

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    OcrV1
   |flux=    Main
   |étape=   Merge
   |type=    RBID
   |clé=     ISTEX:D47F5084898AD80018E1F288351784B0B45A84C5
   |texte=   Speeding up the training process of the MFNN by optimizing the hidden layers' outputs
}}

Wicri

This area was generated with Dilib version V0.6.32.
Data generation: Sat Nov 11 16:53:45 2017. Site generation: Mon Mar 11 23:15:16 2024