A Self-Made Agent Based on Action-Selection
Internal identifier: 007A94 (Main/Merge)
Authors: Olivier Buffet; Alain Dutech
Source:
English descriptors
Abstract
Some agents have to face multiple objectives simultaneously. In such cases, and in partially observable environments, classical Reinforcement Learning (RL) is prone to falling into poor local optima, learning only straightforward behaviors. We present a method that tries to identify and learn independent "basic" behaviors, each solving a separate task the agent has to face. Using a combination of these behaviors (an action-selection algorithm), the agent can then deal efficiently with various complex goals in complex environments.
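The abstract describes combining independently learned "basic" behaviors through an action-selection algorithm. The record does not specify the combination rule, so the sketch below uses Q-value summation ("greatest mass" action selection), one common scheme; all names, Q-tables, and values are illustrative assumptions, not the authors' actual method.

```python
# Hypothetical sketch: combining independently learned "basic" behaviors
# via a simple action-selection rule (Q-value summation, a.k.a. "greatest
# mass"). The paper's exact combination mechanism is not given in this
# record; everything here is illustrative.

def select_action(q_tables, observation, actions):
    """Pick the action whose summed Q-value across all basic behaviors is highest."""
    def combined_value(action):
        # Sum each behavior's estimate of this (observation, action) pair.
        return sum(q[(observation, action)] for q in q_tables)
    return max(actions, key=combined_value)

# Two toy behaviors over one observation "o" and two actions.
q_food   = {("o", "left"): 0.9,  ("o", "right"): 0.2}   # behavior: reach food
q_danger = {("o", "left"): -0.8, ("o", "right"): 0.3}   # behavior: avoid danger

print(select_action([q_food, q_danger], "o", ["left", "right"]))  # -> right
```

Here the food-seeking behavior alone would go left, but once the danger-avoidance behavior's estimates are added in, the combined agent goes right, which is the point of merging separately learned behaviors rather than learning one monolithic policy.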
Links toward previous steps (curation, corpus...)
- to stream Crin, to step Corpus: 003855
- to stream Crin, to step Curation: 003855
- to stream Crin, to step Checkpoint: 000C47
Links to Exploration step
CRIN:buffet03b
The document in XML format
<record><TEI><teiHeader><fileDesc><titleStmt><title xml:lang="en" wicri:score="274">A Self-Made Agent Based on Action-Selection</title>
</titleStmt>
<publicationStmt><idno type="RBID">CRIN:buffet03b</idno>
<date when="2003" year="2003">2003</date>
<idno type="wicri:Area/Crin/Corpus">003855</idno>
<idno type="wicri:Area/Crin/Curation">003855</idno>
<idno type="wicri:explorRef" wicri:stream="Crin" wicri:step="Curation">003855</idno>
<idno type="wicri:Area/Crin/Checkpoint">000C47</idno>
<idno type="wicri:explorRef" wicri:stream="Crin" wicri:step="Checkpoint">000C47</idno>
<idno type="wicri:Area/Main/Merge">007A94</idno>
</publicationStmt>
<sourceDesc><biblStruct><analytic><title xml:lang="en">A Self-Made Agent Based on Action-Selection</title>
<author><name sortKey="Buffet, Olivier" sort="Buffet, Olivier" uniqKey="Buffet O" first="Olivier" last="Buffet">Olivier Buffet</name>
</author>
<author><name sortKey="Dutech, Alain" sort="Dutech, Alain" uniqKey="Dutech A" first="Alain" last="Dutech">Alain Dutech</name>
</author>
</analytic>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc><textClass><keywords scheme="KwdEn" xml:lang="en"><term>action selection</term>
<term>behavior generation</term>
<term>pomdp</term>
<term>reinforcement learning</term>
</keywords>
</textClass>
</profileDesc>
</teiHeader>
<front><div type="abstract" xml:lang="en" wicri:score="2344">Some agents have to face multiple objectives simultaneously. In such cases, and in partially observable environments, classical Reinforcement Learning (RL) is prone to falling into poor local optima, learning only straightforward behaviors. We present a method that tries to identify and learn independent "basic" behaviors, each solving a separate task the agent has to face. Using a combination of these behaviors (an action-selection algorithm), the agent can then deal efficiently with various complex goals in complex environments.</div>
</front>
</TEI>
</record>
To manipulate this document under Unix (Dilib)
EXPLOR_STEP=$WICRI_ROOT/Wicri/Lorraine/explor/InforLorV4/Data/Main/Merge
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 007A94 | SxmlIndent | more
Or
HfdSelect -h $EXPLOR_AREA/Data/Main/Merge/biblio.hfd -nk 007A94 | SxmlIndent | more
To link to this page in the Wicri network
{{Explor lien |wiki= Wicri/Lorraine |area= InforLorV4 |flux= Main |étape= Merge |type= RBID |clé= CRIN:buffet03b |texte= A Self-Made Agent Based on Action-Selection }}
This area was generated with Dilib version V0.6.33.