Exploration server on computer science research in Lorraine

Warning: this site is under development!
Warning: this site was generated automatically from raw corpora.
The information has therefore not been validated.

A pruned higher-order network for knowledge extraction

Internal identifier: 000726 (PascalFrancis/Corpus); previous: 000725; next: 000727


Author: Laurent Bougrain

Source:

RBID : Pascal:04-0132540

French descriptors

English descriptors

Abstract

Usually, the learning stage of a neural network leads to a single model. But a complex problem cannot always be solved adequately by a global system. On the other hand, several systems each specialized in a subspace have difficulty dealing with situations located at the boundary between two classes. This article presents a new adaptive architecture based upon higher-order computation to adjust a general model to each pattern, using a pruning algorithm to improve generalization and extract knowledge. We use one small multi-layer perceptron to predict each weight of the model from the current pattern (one estimator per weight). This architecture introduces a biologically inspired higher-order computation, similar to the modulation of a synapse between two neurons by a third neuron. The general model can then be smaller, more adaptive and more informative.
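
To make the architecture concrete, here is a minimal sketch in Python (NumPy). It assumes the general model is a single linear unit whose every weight is produced, for each input pattern, by its own small one-hidden-layer perceptron, and it illustrates pruning as disabling connections whose modulated weight stays negligible over a sample. The class and parameter names (WeightEstimator, HigherOrderNet, n_hidden, eps) are illustrative choices, not taken from the paper, and the training procedure is omitted.

import numpy as np

class WeightEstimator:
    """One small MLP per weight: predicts a single weight of the
    general model from the current input pattern (hypothetical names)."""
    def __init__(self, n_in, n_hidden, rng):
        self.W1 = rng.normal(0.0, 0.1, (n_hidden, n_in))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.1, n_hidden)
        self.b2 = 0.0

    def predict(self, x):
        h = np.tanh(self.W1 @ x + self.b1)   # hidden layer
        return float(self.W2 @ h + self.b2)  # predicted weight

class HigherOrderNet:
    """General linear model whose weights are modulated per pattern,
    analogous to a third neuron modulating a synapse between two neurons."""
    def __init__(self, n_in, n_hidden=3, seed=0):
        rng = np.random.default_rng(seed)
        self.estimators = [WeightEstimator(n_in, n_hidden, rng)
                           for _ in range(n_in)]
        self.mask = np.ones(n_in, dtype=bool)  # all connections active

    def forward(self, x):
        w = np.array([e.predict(x) for e in self.estimators])
        w = w * self.mask                      # pruned weights stay at zero
        return w @ x, w

    def prune(self, X, eps=1e-2):
        """Disable connections whose modulated weight is negligible on
        average over the sample X (one crude pruning criterion, assumed
        here for illustration)."""
        W = np.array([[e.predict(x) for e in self.estimators] for x in X])
        self.mask = np.mean(np.abs(W), axis=0) > eps
        return self.mask

# Example: a 4-input pattern produces pattern-specific weights.
net = HigherOrderNet(n_in=4)
x = np.array([0.5, -1.0, 0.2, 0.0])
y, w = net.forward(x)
print("output:", y, "modulated weights:", w)

Because each surviving estimator maps the input pattern to one weight, inspecting which connections survive pruning, and how their weights vary with the input, is what makes the pruned model more informative for knowledge extraction.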

Record in standard format (ISO 2709)

See the documentation on the Inist Standard format.

pA  
A01 01  1    @0 1098-7576
A08 01  1  ENG  @1 A pruned higher-order network for knowledge extraction
A09 01  1  ENG  @1 IJCNN'02 : international joint conference on neural networks : Honolulu HI, 12-17 May 2002
A11 01  1    @1 BOUGRAIN (Laurent)
A14 01      @1 LORIA INRIA-Lorraine, B.P. 239 @2 54506 Vandoeuvre-Les-Nancy @3 FRA @Z 1 aut.
A18 01  1    @1 IEEE. Neural Networks Society @3 USA @9 patr.
A18 02  1    @1 International Neural Network Society @3 USA @9 patr.
A20       @1 1726-1729
A21       @1 2002
A23 01      @0 ENG
A26 01      @0 0-7803-7278-6
A43 01      @1 INIST @2 Y 37961 @5 354000117750883070
A44       @0 0000 @1 © 2004 INIST-CNRS. All rights reserved.
A45       @0 12 ref.
A47 01  1    @0 04-0132540
A60       @1 P @2 C
A61       @0 A
A64 01  1    @0 IEEE ... International Conference on Neural Networks
A66 01      @0 USA
C01 01    ENG  @0 Usually, the learning stage of a neural network leads to a single model. But a complex problem cannot always be solved adequately by a global system. On the other side, several systems specialized on a subspace have some difficulties to deal with situations located at the limit of two classes. This article presents a new adaptive architecture based upon higher-order computation to adjust a general model to each pattern and using a pruning algorithm to improve the generalization and extract knowledge. We use one small multi-layer perceptron to predict each weight of the model from the current pattern (we have one estimator per weight). This architecture introduces a higher-order computation, biologically inspired, similar to the modulation of a synapse between two neurons by a third neuron. The general model can then be smaller, more adaptative and more informative.
C02 01  X    @0 001D02C06
C03 01  X  FRE  @0 Réseau multicouche @5 01
C03 01  X  ENG  @0 Multilayer network @5 01
C03 01  X  SPA  @0 Red multinivel @5 01
C03 02  3  FRE  @0 Perceptron multicouche @5 02
C03 02  3  ENG  @0 Multilayer perceptrons @5 02
C03 03  X  FRE  @0 Réseau neuronal @5 03
C03 03  X  ENG  @0 Neural network @5 03
C03 03  X  SPA  @0 Red neuronal @5 03
C03 04  X  FRE  @0 Synapse @5 04
C03 04  X  ENG  @0 Synapse @5 04
C03 04  X  SPA  @0 Sinapsis @5 04
C03 05  X  FRE  @0 Mode ordre élevé @5 05
C03 05  X  ENG  @0 High order mode @5 05
C03 05  X  SPA  @0 Modo orden elevado @5 05
C03 06  X  FRE  @0 Méthode adaptative @5 06
C03 06  X  ENG  @0 Adaptive method @5 06
C03 06  X  SPA  @0 Método adaptativo @5 06
C03 07  X  FRE  @0 Extraction connaissance @4 CD @5 96
C03 07  X  ENG  @0 Knowledge extraction @4 CD @5 96
N21       @1 082
N82       @1 PSI
pR  
A30 01  1  ENG  @1 2002 International joint conference on neural networks @3 Honolulu HI USA @4 2002-05-12

Inist format (server)

NO : PASCAL 04-0132540 INIST
ET : A pruned higher-order network for knowledge extraction
AU : BOUGRAIN (Laurent)
AF : LORIA INRIA-Lorraine, B.P. 239/54506 Vandoeuvre-Les-Nancy/France (1 aut.)
DT : Publication en série; Congrès; Niveau analytique
SO : IEEE ... International Conference on Neural Networks; ISSN 1098-7576; Etats-Unis; Da. 2002; Pp. 1726-1729; Bibl. 12 ref.
LA : Anglais
EA : Usually, the learning stage of a neural network leads to a single model. But a complex problem cannot always be solved adequately by a global system. On the other side, several systems specialized on a subspace have some difficulties to deal with situations located at the limit of two classes. This article presents a new adaptive architecture based upon higher-order computation to adjust a general model to each pattern and using a pruning algorithm to improve the generalization and extract knowledge. We use one small multi-layer perceptron to predict each weight of the model from the current pattern (we have one estimator per weight). This architecture introduces a higher-order computation, biologically inspired, similar to the modulation of a synapse between two neurons by a third neuron. The general model can then be smaller, more adaptative and more informative.
CC : 001D02C06
FD : Réseau multicouche; Perceptron multicouche; Réseau neuronal; Synapse; Mode ordre élevé; Méthode adaptative; Extraction connaissance
ED : Multilayer network; Multilayer perceptrons; Neural network; Synapse; High order mode; Adaptive method; Knowledge extraction
SD : Red multinivel; Red neuronal; Sinapsis; Modo orden elevado; Método adaptativo
LO : INIST-Y 37961.354000117750883070
ID : 04-0132540

Links to Exploration step

Pascal:04-0132540

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en" level="a">A pruned higher-order network for knowledge extraction</title>
<author>
<name sortKey="Bougrain, Laurent" sort="Bougrain, Laurent" uniqKey="Bougrain L" first="Laurent" last="Bougrain">Laurent Bougrain</name>
<affiliation>
<inist:fA14 i1="01">
<s1>LORIA INRIA-Lorraine, B.P. 239</s1>
<s2>54506 Vandoeuvre-Les-Nancy</s2>
<s3>FRA</s3>
<sZ>1 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">INIST</idno>
<idno type="inist">04-0132540</idno>
<date when="2002">2002</date>
<idno type="stanalyst">PASCAL 04-0132540 INIST</idno>
<idno type="RBID">Pascal:04-0132540</idno>
<idno type="wicri:Area/PascalFrancis/Corpus">000726</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a">A pruned higher-order network for knowledge extraction</title>
<author>
<name sortKey="Bougrain, Laurent" sort="Bougrain, Laurent" uniqKey="Bougrain L" first="Laurent" last="Bougrain">Laurent Bougrain</name>
<affiliation>
<inist:fA14 i1="01">
<s1>LORIA INRIA-Lorraine, B.P. 239</s1>
<s2>54506 Vandoeuvre-Les-Nancy</s2>
<s3>FRA</s3>
<sZ>1 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
</analytic>
<series>
<title level="j" type="main">IEEE ... International Conference on Neural Networks</title>
<idno type="ISSN">1098-7576</idno>
<imprint>
<date when="2002">2002</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
<seriesStmt>
<title level="j" type="main">IEEE ... International Conference on Neural Networks</title>
<idno type="ISSN">1098-7576</idno>
</seriesStmt>
</fileDesc>
<profileDesc>
<textClass>
<keywords scheme="KwdEn" xml:lang="en">
<term>Adaptive method</term>
<term>High order mode</term>
<term>Knowledge extraction</term>
<term>Multilayer network</term>
<term>Multilayer perceptrons</term>
<term>Neural network</term>
<term>Synapse</term>
</keywords>
<keywords scheme="Pascal" xml:lang="fr">
<term>Réseau multicouche</term>
<term>Perceptron multicouche</term>
<term>Réseau neuronal</term>
<term>Synapse</term>
<term>Mode ordre élevé</term>
<term>Méthode adaptative</term>
<term>Extraction connaissance</term>
</keywords>
</textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">Usually, the learning stage of a neural network leads to a single model. But a complex problem cannot always be solved adequately by a global system. On the other side, several systems specialized on a subspace have some difficulties to deal with situations located at the limit of two classes. This article presents a new adaptive architecture based upon higher-order computation to adjust a general model to each pattern and using a pruning algorithm to improve the generalization and extract knowledge. We use one small multi-layer perceptron to predict each weight of the model from the current pattern (we have one estimator per weight). This architecture introduces a higher-order computation, biologically inspired, similar to the modulation of a synapse between two neurons by a third neuron. The general model can then be smaller, more adaptative and more informative.</div>
</front>
</TEI>
<inist>
<standard h6="B">
<pA>
<fA01 i1="01" i2="1">
<s0>1098-7576</s0>
</fA01>
<fA08 i1="01" i2="1" l="ENG">
<s1>A pruned higher-order network for knowledge extraction</s1>
</fA08>
<fA09 i1="01" i2="1" l="ENG">
<s1>IJCNN'02 : international joint conference on neural networks : Honolulu HI, 12-17 May 2002</s1>
</fA09>
<fA11 i1="01" i2="1">
<s1>BOUGRAIN (Laurent)</s1>
</fA11>
<fA14 i1="01">
<s1>LORIA INRIA-Lorraine, B.P. 239</s1>
<s2>54506 Vandoeuvre-Les-Nancy</s2>
<s3>FRA</s3>
<sZ>1 aut.</sZ>
</fA14>
<fA18 i1="01" i2="1">
<s1>IEEE. Neural Networks Society</s1>
<s3>USA</s3>
<s9>patr.</s9>
</fA18>
<fA18 i1="02" i2="1">
<s1>International Neural Network Society</s1>
<s3>USA</s3>
<s9>patr.</s9>
</fA18>
<fA20>
<s1>1726-1729</s1>
</fA20>
<fA21>
<s1>2002</s1>
</fA21>
<fA23 i1="01">
<s0>ENG</s0>
</fA23>
<fA26 i1="01">
<s0>0-7803-7278-6</s0>
</fA26>
<fA43 i1="01">
<s1>INIST</s1>
<s2>Y 37961</s2>
<s5>354000117750883070</s5>
</fA43>
<fA44>
<s0>0000</s0>
<s1>© 2004 INIST-CNRS. All rights reserved.</s1>
</fA44>
<fA45>
<s0>12 ref.</s0>
</fA45>
<fA47 i1="01" i2="1">
<s0>04-0132540</s0>
</fA47>
<fA60>
<s1>P</s1>
<s2>C</s2>
</fA60>
<fA61>
<s0>A</s0>
</fA61>
<fA64 i1="01" i2="1">
<s0>IEEE ... International Conference on Neural Networks</s0>
</fA64>
<fA66 i1="01">
<s0>USA</s0>
</fA66>
<fC01 i1="01" l="ENG">
<s0>Usually, the learning stage of a neural network leads to a single model. But a complex problem cannot always be solved adequately by a global system. On the other side, several systems specialized on a subspace have some difficulties to deal with situations located at the limit of two classes. This article presents a new adaptive architecture based upon higher-order computation to adjust a general model to each pattern and using a pruning algorithm to improve the generalization and extract knowledge. We use one small multi-layer perceptron to predict each weight of the model from the current pattern (we have one estimator per weight). This architecture introduces a higher-order computation, biologically inspired, similar to the modulation of a synapse between two neurons by a third neuron. The general model can then be smaller, more adaptative and more informative.</s0>
</fC01>
<fC02 i1="01" i2="X">
<s0>001D02C06</s0>
</fC02>
<fC03 i1="01" i2="X" l="FRE">
<s0>Réseau multicouche</s0>
<s5>01</s5>
</fC03>
<fC03 i1="01" i2="X" l="ENG">
<s0>Multilayer network</s0>
<s5>01</s5>
</fC03>
<fC03 i1="01" i2="X" l="SPA">
<s0>Red multinivel</s0>
<s5>01</s5>
</fC03>
<fC03 i1="02" i2="3" l="FRE">
<s0>Perceptron multicouche</s0>
<s5>02</s5>
</fC03>
<fC03 i1="02" i2="3" l="ENG">
<s0>Multilayer perceptrons</s0>
<s5>02</s5>
</fC03>
<fC03 i1="03" i2="X" l="FRE">
<s0>Réseau neuronal</s0>
<s5>03</s5>
</fC03>
<fC03 i1="03" i2="X" l="ENG">
<s0>Neural network</s0>
<s5>03</s5>
</fC03>
<fC03 i1="03" i2="X" l="SPA">
<s0>Red neuronal</s0>
<s5>03</s5>
</fC03>
<fC03 i1="04" i2="X" l="FRE">
<s0>Synapse</s0>
<s5>04</s5>
</fC03>
<fC03 i1="04" i2="X" l="ENG">
<s0>Synapse</s0>
<s5>04</s5>
</fC03>
<fC03 i1="04" i2="X" l="SPA">
<s0>Sinapsis</s0>
<s5>04</s5>
</fC03>
<fC03 i1="05" i2="X" l="FRE">
<s0>Mode ordre élevé</s0>
<s5>05</s5>
</fC03>
<fC03 i1="05" i2="X" l="ENG">
<s0>High order mode</s0>
<s5>05</s5>
</fC03>
<fC03 i1="05" i2="X" l="SPA">
<s0>Modo orden elevado</s0>
<s5>05</s5>
</fC03>
<fC03 i1="06" i2="X" l="FRE">
<s0>Méthode adaptative</s0>
<s5>06</s5>
</fC03>
<fC03 i1="06" i2="X" l="ENG">
<s0>Adaptive method</s0>
<s5>06</s5>
</fC03>
<fC03 i1="06" i2="X" l="SPA">
<s0>Método adaptativo</s0>
<s5>06</s5>
</fC03>
<fC03 i1="07" i2="X" l="FRE">
<s0>Extraction connaissance</s0>
<s4>CD</s4>
<s5>96</s5>
</fC03>
<fC03 i1="07" i2="X" l="ENG">
<s0>Knowledge extraction</s0>
<s4>CD</s4>
<s5>96</s5>
</fC03>
<fN21>
<s1>082</s1>
</fN21>
<fN82>
<s1>PSI</s1>
</fN82>
</pA>
<pR>
<fA30 i1="01" i2="1" l="ENG">
<s1>2002 International joint conference on neural networks</s1>
<s3>Honolulu HI USA</s3>
<s4>2002-05-12</s4>
</fA30>
</pR>
</standard>
<server>
<NO>PASCAL 04-0132540 INIST</NO>
<ET>A pruned higher-order network for knowledge extraction</ET>
<AU>BOUGRAIN (Laurent)</AU>
<AF>LORIA INRIA-Lorraine, B.P. 239/54506 Vandoeuvre-Les-Nancy/France (1 aut.)</AF>
<DT>Publication en série; Congrès; Niveau analytique</DT>
<SO>IEEE ... International Conference on Neural Networks; ISSN 1098-7576; Etats-Unis; Da. 2002; Pp. 1726-1729; Bibl. 12 ref.</SO>
<LA>Anglais</LA>
<EA>Usually, the learning stage of a neural network leads to a single model. But a complex problem cannot always be solved adequately by a global system. On the other side, several systems specialized on a subspace have some difficulties to deal with situations located at the limit of two classes. This article presents a new adaptive architecture based upon higher-order computation to adjust a general model to each pattern and using a pruning algorithm to improve the generalization and extract knowledge. We use one small multi-layer perceptron to predict each weight of the model from the current pattern (we have one estimator per weight). This architecture introduces a higher-order computation, biologically inspired, similar to the modulation of a synapse between two neurons by a third neuron. The general model can then be smaller, more adaptative and more informative.</EA>
<CC>001D02C06</CC>
<FD>Réseau multicouche; Perceptron multicouche; Réseau neuronal; Synapse; Mode ordre élevé; Méthode adaptative; Extraction connaissance</FD>
<ED>Multilayer network; Multilayer perceptrons; Neural network; Synapse; High order mode; Adaptive method; Knowledge extraction</ED>
<SD>Red multinivel; Red neuronal; Sinapsis; Modo orden elevado; Método adaptativo</SD>
<LO>INIST-Y 37961.354000117750883070</LO>
<ID>04-0132540</ID>
</server>
</inist>
</record>

To manipulate this document under Unix (Dilib)

Both commands below select record 000726 from the biblio.hfd file of the exploration step and pretty-print its XML; the first sets the step path explicitly, the second relies on $EXPLOR_AREA:

EXPLOR_STEP=$WICRI_ROOT/Wicri/Lorraine/explor/InforLorV4/Data/PascalFrancis/Corpus
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000726 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/PascalFrancis/Corpus/biblio.hfd -nk 000726 | SxmlIndent | more

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Wicri/Lorraine
   |area=    InforLorV4
   |flux=    PascalFrancis
   |étape=   Corpus
   |type=    RBID
   |clé=     Pascal:04-0132540
   |texte=   A pruned higher-order network for knowledge extraction
}}

Wicri

This area was generated with Dilib version V0.6.33.
Data generation: Mon Jun 10 21:56:28 2019. Site generation: Fri Feb 25 15:29:27 2022