Haptic devices exploration server


Human Augmented Cognition Based on Integration of Visual and Auditory Information

Internal identifier: 003934 (Main/Merge); previous: 003933; next: 003935


Authors: Jae Won [South Korea]; Wono Lee [South Korea]; Sang-Woo Ban [South Korea]; Minook Kim [South Korea]; Hyung-Min Park [South Korea]; Minho Lee [South Korea]

Source:

RBID: ISTEX:CCDFD3F3AE4C31DFC95EEE441D4754D601241AB9

Abstract

In this paper, we propose a new multi-sensory fusion model for human identification that supports human augmented cognition. In the proposed model, facial features and mel-frequency cepstral coefficients (MFCCs) serve as the visual and auditory features, respectively, for identifying a person, and an AdaBoost model performs the identification from the integrated visual and auditory features. Facial form features are obtained by principal component analysis (PCA) of the face area, which is localized by an Adaboost algorithm in conjunction with a skin-color preferable attention model; MFCCs are extracted from the person's speech. The multi-sensory integration model thus aims to improve identification performance by letting the visual and auditory features work complementarily under partly distorted sensory conditions. A human augmented cognition system based on the proposed identification model is implemented as a goggle-type device that presents information, such as an unknown person's profile, based on the identification result. Experimental results show that the proposed model can plausibly perform human identification in an indoor meeting situation.
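The pipeline described above (PCA-compressed face features concatenated with MFCCs, then classified by AdaBoost) can be sketched in a few lines of scikit-learn. This is an illustrative reconstruction, not the authors' code: the array shapes, the number of PCA components, and the synthetic data standing in for the face-detection and speech front ends are all assumptions.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins for the paper's inputs (all shapes are assumptions):
# 200 samples of 5 people; "faces" mimics detected face crops flattened to
# 1024 pixels, "mfccs" mimics 13 MFCCs averaged over an utterance.
n_samples, n_classes = 200, 5
labels = rng.integers(0, n_classes, n_samples)
faces = rng.normal(size=(n_samples, 1024)) + 0.5 * labels[:, None]
mfccs = rng.normal(size=(n_samples, 13)) + 0.5 * labels[:, None]

f_tr, f_te, m_tr, m_te, y_tr, y_te = train_test_split(
    faces, mfccs, labels, test_size=0.25, random_state=0)

# Visual pathway: PCA compresses the face crops to a low-dimensional
# "facial form" representation, fitted on the training faces only.
pca = PCA(n_components=20).fit(f_tr)

# Feature-level fusion: concatenate the visual and auditory features.
X_tr = np.hstack([pca.transform(f_tr), m_tr])
X_te = np.hstack([pca.transform(f_te), m_te])

# An AdaBoost classifier over the fused features identifies the person.
clf = AdaBoostClassifier(n_estimators=100, random_state=0)
clf.fit(X_tr, y_tr)
print("identification accuracy:", clf.score(X_te, y_te))

In the paper, the face crops come from an Adaboost face detector guided by a skin-color attention model and the MFCCs from recorded speech; the random arrays above merely stand in for those front ends.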

URL: https://api.istex.fr/document/CCDFD3F3AE4C31DFC95EEE441D4754D601241AB9/fulltext/pdf
DOI: 10.1007/978-3-642-15246-7_50

Links to previous steps (curation, corpus...)


Links to Exploration step

ISTEX:CCDFD3F3AE4C31DFC95EEE441D4754D601241AB9

The document in XML format

<record>
<TEI wicri:istexFullTextTei="biblStruct:series">
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Human Augmented Cognition Based on Integration of Visual and Auditory Information</title>
<author>
<name sortKey="Won, Jae" sort="Won, Jae" uniqKey="Won J" first="Jae" last="Won">Jae Won</name>
</author>
<author>
<name sortKey="Lee, Wono" sort="Lee, Wono" uniqKey="Lee W" first="Wono" last="Lee">Wono Lee</name>
</author>
<author>
<name sortKey="Ban, Sang Woo" sort="Ban, Sang Woo" uniqKey="Ban S" first="Sang-Woo" last="Ban">Sang-Woo Ban</name>
</author>
<author>
<name sortKey="Kim, Minook" sort="Kim, Minook" uniqKey="Kim M" first="Minook" last="Kim">Minook Kim</name>
</author>
<author>
<name sortKey="Park, Hyung Min" sort="Park, Hyung Min" uniqKey="Park H" first="Hyung-Min" last="Park">Hyung-Min Park</name>
</author>
<author>
<name sortKey="Lee, Minho" sort="Lee, Minho" uniqKey="Lee M" first="Minho" last="Lee">Minho Lee</name>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">ISTEX</idno>
<idno type="RBID">ISTEX:CCDFD3F3AE4C31DFC95EEE441D4754D601241AB9</idno>
<date when="2010" year="2010">2010</date>
<idno type="doi">10.1007/978-3-642-15246-7_50</idno>
<idno type="url">https://api.istex.fr/document/CCDFD3F3AE4C31DFC95EEE441D4754D601241AB9/fulltext/pdf</idno>
<idno type="wicri:Area/Istex/Corpus">004B68</idno>
<idno type="wicri:Area/Istex/Curation">004B68</idno>
<idno type="wicri:Area/Istex/Checkpoint">000673</idno>
<idno type="wicri:doubleKey">0302-9743:2010:Won J:human:augmented:cognition</idno>
<idno type="wicri:Area/Main/Merge">003934</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title level="a" type="main" xml:lang="en">Human Augmented Cognition Based on Integration of Visual and Auditory Information</title>
<author>
<name sortKey="Won, Jae" sort="Won, Jae" uniqKey="Won J" first="Jae" last="Won">Jae Won</name>
<affiliation wicri:level="1">
<country xml:lang="fr">Corée du Sud</country>
<wicri:regionArea>School of Electrical Engineering and Computer Science, Kyungpook National University, 1370 Sankyuk-Dong, Puk-Gu, 702-701, Taegu</wicri:regionArea>
<wicri:noRegion>Taegu</wicri:noRegion>
</affiliation>
<affiliation wicri:level="1">
<country wicri:rule="url">Corée du Sud</country>
</affiliation>
</author>
<author>
<name sortKey="Lee, Wono" sort="Lee, Wono" uniqKey="Lee W" first="Wono" last="Lee">Wono Lee</name>
<affiliation wicri:level="1">
<country xml:lang="fr">Corée du Sud</country>
<wicri:regionArea>School of Electrical Engineering and Computer Science, Kyungpook National University, 1370 Sankyuk-Dong, Puk-Gu, 702-701, Taegu</wicri:regionArea>
<wicri:noRegion>Taegu</wicri:noRegion>
</affiliation>
<affiliation wicri:level="1">
<country wicri:rule="url">Corée du Sud</country>
</affiliation>
</author>
<author>
<name sortKey="Ban, Sang Woo" sort="Ban, Sang Woo" uniqKey="Ban S" first="Sang-Woo" last="Ban">Sang-Woo Ban</name>
<affiliation wicri:level="1">
<country xml:lang="fr">Corée du Sud</country>
<wicri:regionArea>Department of Information &amp; Communication Engineering, Dongguk University, 707 Seokjang-Dong, Gyeongju, 780-714, Gyeongbuk</wicri:regionArea>
<wicri:noRegion>Gyeongbuk</wicri:noRegion>
</affiliation>
<affiliation wicri:level="1">
<country wicri:rule="url">Corée du Sud</country>
</affiliation>
</author>
<author>
<name sortKey="Kim, Minook" sort="Kim, Minook" uniqKey="Kim M" first="Minook" last="Kim">Minook Kim</name>
<affiliation wicri:level="3">
<country xml:lang="fr">Corée du Sud</country>
<wicri:regionArea>Department of Electronic Engineering, Sogang University, 1 Shinsu-Dong, Mapo-Gu, 121-742, Seoul</wicri:regionArea>
<placeName>
<settlement type="city">Séoul</settlement>
</placeName>
</affiliation>
<affiliation wicri:level="1">
<country wicri:rule="url">Corée du Sud</country>
</affiliation>
</author>
<author>
<name sortKey="Park, Hyung Min" sort="Park, Hyung Min" uniqKey="Park H" first="Hyung-Min" last="Park">Hyung-Min Park</name>
<affiliation wicri:level="3">
<country xml:lang="fr">Corée du Sud</country>
<wicri:regionArea>Department of Electronic Engineering, Sogang University, 1 Shinsu-Dong, Mapo-Gu, 121-742, Seoul</wicri:regionArea>
<placeName>
<settlement type="city">Séoul</settlement>
</placeName>
</affiliation>
<affiliation wicri:level="1">
<country wicri:rule="url">Corée du Sud</country>
</affiliation>
</author>
<author>
<name sortKey="Lee, Minho" sort="Lee, Minho" uniqKey="Lee M" first="Minho" last="Lee">Minho Lee</name>
<affiliation wicri:level="1">
<country xml:lang="fr">Corée du Sud</country>
<wicri:regionArea>School of Electrical Engineering and Computer Science, Kyungpook National University, 1370 Sankyuk-Dong, Puk-Gu, 702-701, Taegu</wicri:regionArea>
<wicri:noRegion>Taegu</wicri:noRegion>
</affiliation>
<affiliation wicri:level="1">
<country wicri:rule="url">Corée du Sud</country>
</affiliation>
</author>
</analytic>
<monogr></monogr>
<series>
<title level="s">Lecture Notes in Computer Science</title>
<imprint>
<date>2010</date>
</imprint>
<idno type="ISSN">0302-9743</idno>
<idno type="eISSN">1611-3349</idno>
<idno type="ISSN">0302-9743</idno>
</series>
<idno type="istex">CCDFD3F3AE4C31DFC95EEE441D4754D601241AB9</idno>
<idno type="DOI">10.1007/978-3-642-15246-7_50</idno>
<idno type="ChapterID">50</idno>
<idno type="ChapterID">Chap50</idno>
</biblStruct>
</sourceDesc>
<seriesStmt>
<idno type="ISSN">0302-9743</idno>
</seriesStmt>
</fileDesc>
<profileDesc>
<textClass></textClass>
<langUsage>
<language ident="en">en</language>
</langUsage>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">Abstract: In this paper, we propose a new multiple sensory fused human identification model for providing human augmented cognition. In the proposed model, both facial features and mel-frequency cepstral coefficients (MFCCs) are considered as visual features and auditory features for identifying a human, respectively. As well, an adaboosting model identifies a human using the integrated sensory features of both visual and auditory features. In the proposed model, facial form features are obtained from the principal component analysis (PCA) of a human’s face area localized by an Adaboost algorithm in conjunction with a skin color preferable attention model. Moreover, MFCCs are extracted from human speech. Thus, the proposed multiple sensory integration model is aimed to enhance the performance of human identification by considering both visual and auditory complementarily working under partly distorted sensory environments. A human augmented cognition system with the proposed human identification model is implemented as a goggle type, on which it presents information such as unknown people’s profile based on human identification. Experimental results show that the proposed model can plausibly conduct human identification in an indoor meeting situation.</div>
</front>
</TEI>
</record>

To manipulate this document under Unix (Dilib)

# Path to the Main/Merge step of the HapticV1 exploration area
EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Main/Merge
# Select record 003934 from the bibliographic HFD file, indent the XML, and page through it
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 003934 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Main/Merge/biblio.hfd -nk 003934 | SxmlIndent | more
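For instance, to write the indented record to a file rather than paging through it (a minimal variation on the commands above, using plain shell redirection):

HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 003934 | SxmlIndent > 003934.xml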

To link to this page within the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Main
   |étape=   Merge
   |type=    RBID
   |clé=     ISTEX:CCDFD3F3AE4C31DFC95EEE441D4754D601241AB9
   |texte=   Human Augmented Cognition Based on Integration of Visual and Auditory Information
}}

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024