MMSDS : Ubiquitous computing and WWW-based multi-modal sentential dialog system
Internal identifier: 000965 (PascalFrancis/Checkpoint); previous: 000964; next: 000966
Authors: Jung-Hyun Kim [South Korea]; Kwang-Seok Hong [South Korea]
Source:
- Lecture notes in computer science [ 0302-9743 ] ; 2006.
French descriptors
- Pascal (Inist)
- Calculateur embarqué, Système réparti, Informatique diffuse, Système conversationnel, Interface utilisateur, Education, Langage gestuel, Synthèse parole, Temps réel, Mobilité, Informatique mobile, Sensibilité tactile, Trouble de l'audition, Coréen, Personnalisation, Besoin de l'utilisateur, Modélisation
English descriptors
- KwdEn
Abstract
In this study, we propose and implement a Multi-Modal Sentential Dialog System (MMSDS) that integrates two sensory channels, speech and haptic information, based on ubiquitous computing and the WWW, for clear communication. The importance and necessity of MMSDS for HCI are as follows: 1) it allows more interactive and natural communication between hearing-impaired and hearing persons without special learning or education; 2) because it recognizes sentential Korean Standard Sign Language (KSSL) represented with speech and haptics, and translates the recognition results into synthetic speech and a visual illustration in real time, it can provide a wider range of personalized, differentiated information more effectively; and 3) above all, a user need not be constrained by the limitations of a particular interaction mode at any given moment, because the system guarantees the mobility of the WPS (Wearable Personal Station for the post-PC) with a built-in sentential sign-language recognizer. In our experiments, while the average recognition rate of the uni-modal recognizer was 93.1% using KSSL only and 95.5% using speech only, the MMSDS achieved an average recognition rate of 96.1% over 32 sentential KSSL recognition models.
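The abstract reports that combining the two uni-modal recognizers (93.1% for sign language, 95.5% for speech) raises accuracy to 96.1%. The record does not describe the fusion rule the authors used; purely as an illustrative sketch, one common approach is score-level (late) fusion, where each modality scores every candidate sentence model and the system picks the best weighted sum. All function names, weights, and scores below are hypothetical, not taken from the paper.

```python
# Hypothetical sketch of score-level (late) fusion of two uni-modal
# recognizers. Weights and scores are illustrative only; the paper
# does not specify MMSDS's actual fusion method.

def fuse_scores(speech_scores, sign_scores, w_speech=0.6, w_sign=0.4):
    """Combine per-sentence confidence scores from two modalities by a
    weighted sum and return the best-scoring sentence model."""
    assert speech_scores.keys() == sign_scores.keys()
    fused = {
        sentence: w_speech * speech_scores[sentence] + w_sign * sign_scores[sentence]
        for sentence in speech_scores
    }
    best = max(fused, key=fused.get)  # sentence with the highest fused score
    return best, fused

# Toy candidate scores for three sentential models from each channel.
speech = {"hello": 0.7, "thanks": 0.2, "goodbye": 0.1}
sign = {"hello": 0.5, "thanks": 0.4, "goodbye": 0.1}
best, fused = fuse_scores(speech, sign)
```

A disagreement between channels (e.g. speech favoring one sentence, sign another) is resolved by the weights, which is one intuition for why a fused recognizer can outperform either channel alone.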
Affiliations:
Pascal:08-0009649
The document in XML format
<record><TEI><teiHeader><fileDesc><titleStmt><title xml:lang="en" level="a">MMSDS : Ubiquitous computing and WWW-based multi-modal sentential dialog system</title>
<author><name sortKey="Kim, Jung Hyun" sort="Kim, Jung Hyun" uniqKey="Kim J" first="Jung-Hyun" last="Kim">Jung-Hyun Kim</name>
<affiliation wicri:level="1"><inist:fA14 i1="01"><s1>School of Information and Communication Engineering, Sungkyunkwan University, 300, Chunchun-dong, Jangan-gu</s1>
<s2>Suwon, KyungKi-do, 440-746</s2>
<s3>KOR</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
</inist:fA14>
<country>Corée du Sud</country>
<wicri:noRegion>Suwon, KyungKi-do, 440-746</wicri:noRegion>
</affiliation>
</author>
<author><name sortKey="Hong, Kwang Seok" sort="Hong, Kwang Seok" uniqKey="Hong K" first="Kwang-Seok" last="Hong">Kwang-Seok Hong</name>
<affiliation wicri:level="1"><inist:fA14 i1="01"><s1>School of Information and Communication Engineering, Sungkyunkwan University, 300, Chunchun-dong, Jangan-gu</s1>
<s2>Suwon, KyungKi-do, 440-746</s2>
<s3>KOR</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
</inist:fA14>
<country>Corée du Sud</country>
<wicri:noRegion>Suwon, KyungKi-do, 440-746</wicri:noRegion>
</affiliation>
</author>
</titleStmt>
<publicationStmt><idno type="wicri:source">INIST</idno>
<idno type="inist">08-0009649</idno>
<date when="2006">2006</date>
<idno type="stanalyst">PASCAL 08-0009649 INIST</idno>
<idno type="RBID">Pascal:08-0009649</idno>
<idno type="wicri:Area/PascalFrancis/Corpus">000A30</idno>
<idno type="wicri:Area/PascalFrancis/Curation">000A31</idno>
<idno type="wicri:Area/PascalFrancis/Checkpoint">000965</idno>
</publicationStmt>
<sourceDesc><biblStruct><analytic><title xml:lang="en" level="a">MMSDS : Ubiquitous computing and WWW-based multi-modal sentential dialog system</title>
<author><name sortKey="Kim, Jung Hyun" sort="Kim, Jung Hyun" uniqKey="Kim J" first="Jung-Hyun" last="Kim">Jung-Hyun Kim</name>
<affiliation wicri:level="1"><inist:fA14 i1="01"><s1>School of Information and Communication Engineering, Sungkyunkwan University, 300, Chunchun-dong, Jangan-gu</s1>
<s2>Suwon, KyungKi-do, 440-746</s2>
<s3>KOR</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
</inist:fA14>
<country>Corée du Sud</country>
<wicri:noRegion>Suwon, KyungKi-do, 440-746</wicri:noRegion>
</affiliation>
</author>
<author><name sortKey="Hong, Kwang Seok" sort="Hong, Kwang Seok" uniqKey="Hong K" first="Kwang-Seok" last="Hong">Kwang-Seok Hong</name>
<affiliation wicri:level="1"><inist:fA14 i1="01"><s1>School of Information and Communication Engineering, Sungkyunkwan University, 300, Chunchun-dong, Jangan-gu</s1>
<s2>Suwon, KyungKi-do, 440-746</s2>
<s3>KOR</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
</inist:fA14>
<country>Corée du Sud</country>
<wicri:noRegion>Suwon, KyungKi-do, 440-746</wicri:noRegion>
</affiliation>
</author>
</analytic>
<series><title level="j" type="main">Lecture notes in computer science</title>
<idno type="ISSN">0302-9743</idno>
<imprint><date when="2006">2006</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
<seriesStmt><title level="j" type="main">Lecture notes in computer science</title>
<idno type="ISSN">0302-9743</idno>
</seriesStmt>
</fileDesc>
<profileDesc><textClass><keywords scheme="KwdEn" xml:lang="en"><term>Auditory disorder</term>
<term>Boarded computer</term>
<term>Customization</term>
<term>Distributed system</term>
<term>Education</term>
<term>Interactive system</term>
<term>Korean</term>
<term>Mobile computing</term>
<term>Mobility</term>
<term>Modeling</term>
<term>Pervasive computing</term>
<term>Real time</term>
<term>Sign language</term>
<term>Speech synthesis</term>
<term>Tactile sensitivity</term>
<term>User interface</term>
<term>User need</term>
</keywords>
<keywords scheme="Pascal" xml:lang="fr"><term>Calculateur embarqué</term>
<term>Système réparti</term>
<term>Informatique diffuse</term>
<term>Système conversationnel</term>
<term>Interface utilisateur</term>
<term>Education</term>
<term>Langage gestuel</term>
<term>Synthèse parole</term>
<term>Temps réel</term>
<term>Mobilité</term>
<term>Informatique mobile</term>
<term>Sensibilité tactile</term>
<term>Trouble de l'audition</term>
<term>Coréen</term>
<term>Personnalisation</term>
<term>Besoin de l'utilisateur</term>
<term>Modélisation</term>
<term>.</term>
</keywords>
</textClass>
</profileDesc>
</teiHeader>
<front><div type="abstract" xml:lang="en">In this study, we suggest and implement Multi-Modal Sentential Dialog System (MMSDS) integrating 2 sensory channels with speech and haptic information based on ubiquitous computing and WWW for clear communication. The importance and necessity of MMSDS for HCI as following: 1) it can allow more interactive and natural communication functions between the hearing-impaired and hearing person without special learning and education, 2) according as it recognizes a sentential Korean Standard Sign Language (KSSL) which is represented with speech and haptics and then translates recognition results into a synthetic speech and visual illustration in real-time, it may provide a wider range of personalized and differentiated information more effectively to them, and 3) above all things, a user need not be constrained by the limitations of a particular interaction mode at any given moment because it can guarantee mobility of WPS (Wearable Personal Station for the post PC) with a built-in sentential sign language recognizer. In experiment results, while an average recognition rate of uni-modal recognizer using KSSL only is 93.1% and speech only is 95.5%, advanced MMSDS deduced an average recognition rate of 96.1% for 32 sentential KSSL recognition models.</div>
</front>
</TEI>
<inist><standard h6="B"><pA><fA01 i1="01" i2="1"><s0>0302-9743</s0>
</fA01>
<fA05><s2>4096</s2>
</fA05>
<fA08 i1="01" i2="1" l="ENG"><s1>MMSDS : Ubiquitous computing and WWW-based multi-modal sentential dialog system</s1>
</fA08>
<fA09 i1="01" i2="1" l="ENG"><s1>Embedded and ubiquitous computing : International conference, EUC 2006, Seoul, Korea, August 1-4, 2006 : proceedings</s1>
</fA09>
<fA11 i1="01" i2="1"><s1>KIM (Jung-Hyun)</s1>
</fA11>
<fA11 i1="02" i2="1"><s1>HONG (Kwang-Seok)</s1>
</fA11>
<fA14 i1="01"><s1>School of Information and Communication Engineering, Sungkyunkwan University, 300, Chunchun-dong, Jangan-gu</s1>
<s2>Suwon, KyungKi-do, 440-746</s2>
<s3>KOR</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
</fA14>
<fA20><s1>539-548</s1>
</fA20>
<fA21><s1>2006</s1>
</fA21>
<fA23 i1="01"><s0>ENG</s0>
</fA23>
<fA26 i1="01"><s0>3-540-36679-2</s0>
</fA26>
<fA43 i1="01"><s1>INIST</s1>
<s2>16343</s2>
<s5>354000153642770530</s5>
</fA43>
<fA44><s0>0000</s0>
<s1>© 2008 INIST-CNRS. All rights reserved.</s1>
</fA44>
<fA45><s0>17 ref.</s0>
</fA45>
<fA47 i1="01" i2="1"><s0>08-0009649</s0>
</fA47>
<fA60><s1>P</s1>
<s2>C</s2>
</fA60>
<fA61><s0>A</s0>
</fA61>
<fA64 i1="01" i2="1"><s0>Lecture notes in computer science</s0>
</fA64>
<fA66 i1="01"><s0>DEU</s0>
</fA66>
<fA66 i1="02"><s0>USA</s0>
</fA66>
<fC01 i1="01" l="ENG"><s0>In this study, we suggest and implement Multi-Modal Sentential Dialog System (MMSDS) integrating 2 sensory channels with speech and haptic information based on ubiquitous computing and WWW for clear communication. The importance and necessity of MMSDS for HCI as following: 1) it can allow more interactive and natural communication functions between the hearing-impaired and hearing person without special learning and education, 2) according as it recognizes a sentential Korean Standard Sign Language (KSSL) which is represented with speech and haptics and then translates recognition results into a synthetic speech and visual illustration in real-time, it may provide a wider range of personalized and differentiated information more effectively to them, and 3) above all things, a user need not be constrained by the limitations of a particular interaction mode at any given moment because it can guarantee mobility of WPS (Wearable Personal Station for the post PC) with a built-in sentential sign language recognizer. In experiment results, while an average recognition rate of uni-modal recognizer using KSSL only is 93.1% and speech only is 95.5%, advanced MMSDS deduced an average recognition rate of 96.1% for 32 sentential KSSL recognition models.</s0>
</fC01>
<fC02 i1="01" i2="X"><s0>001D02B04</s0>
</fC02>
<fC02 i1="02" i2="X"><s0>001D02C04</s0>
</fC02>
<fC03 i1="01" i2="X" l="FRE"><s0>Calculateur embarqué</s0>
<s5>01</s5>
</fC03>
<fC03 i1="01" i2="X" l="ENG"><s0>Boarded computer</s0>
<s5>01</s5>
</fC03>
<fC03 i1="01" i2="X" l="SPA"><s0>Calculador embarque</s0>
<s5>01</s5>
</fC03>
<fC03 i1="02" i2="X" l="FRE"><s0>Système réparti</s0>
<s5>02</s5>
</fC03>
<fC03 i1="02" i2="X" l="ENG"><s0>Distributed system</s0>
<s5>02</s5>
</fC03>
<fC03 i1="02" i2="X" l="SPA"><s0>Sistema repartido</s0>
<s5>02</s5>
</fC03>
<fC03 i1="03" i2="X" l="FRE"><s0>Informatique diffuse</s0>
<s5>06</s5>
</fC03>
<fC03 i1="03" i2="X" l="ENG"><s0>Pervasive computing</s0>
<s5>06</s5>
</fC03>
<fC03 i1="03" i2="X" l="SPA"><s0>Informática difusa</s0>
<s5>06</s5>
</fC03>
<fC03 i1="04" i2="X" l="FRE"><s0>Système conversationnel</s0>
<s5>07</s5>
</fC03>
<fC03 i1="04" i2="X" l="ENG"><s0>Interactive system</s0>
<s5>07</s5>
</fC03>
<fC03 i1="04" i2="X" l="SPA"><s0>Sistema interactivo</s0>
<s5>07</s5>
</fC03>
<fC03 i1="05" i2="X" l="FRE"><s0>Interface utilisateur</s0>
<s5>08</s5>
</fC03>
<fC03 i1="05" i2="X" l="ENG"><s0>User interface</s0>
<s5>08</s5>
</fC03>
<fC03 i1="05" i2="X" l="SPA"><s0>Interfase usuario</s0>
<s5>08</s5>
</fC03>
<fC03 i1="06" i2="X" l="FRE"><s0>Education</s0>
<s5>09</s5>
</fC03>
<fC03 i1="06" i2="X" l="ENG"><s0>Education</s0>
<s5>09</s5>
</fC03>
<fC03 i1="06" i2="X" l="SPA"><s0>Educación</s0>
<s5>09</s5>
</fC03>
<fC03 i1="07" i2="X" l="FRE"><s0>Langage gestuel</s0>
<s5>10</s5>
</fC03>
<fC03 i1="07" i2="X" l="ENG"><s0>Sign language</s0>
<s5>10</s5>
</fC03>
<fC03 i1="07" i2="X" l="SPA"><s0>Lenguaje por signos</s0>
<s5>10</s5>
</fC03>
<fC03 i1="08" i2="X" l="FRE"><s0>Synthèse parole</s0>
<s5>11</s5>
</fC03>
<fC03 i1="08" i2="X" l="ENG"><s0>Speech synthesis</s0>
<s5>11</s5>
</fC03>
<fC03 i1="08" i2="X" l="SPA"><s0>Síntesis palabra</s0>
<s5>11</s5>
</fC03>
<fC03 i1="09" i2="X" l="FRE"><s0>Temps réel</s0>
<s5>12</s5>
</fC03>
<fC03 i1="09" i2="X" l="ENG"><s0>Real time</s0>
<s5>12</s5>
</fC03>
<fC03 i1="09" i2="X" l="SPA"><s0>Tiempo real</s0>
<s5>12</s5>
</fC03>
<fC03 i1="10" i2="X" l="FRE"><s0>Mobilité</s0>
<s5>13</s5>
</fC03>
<fC03 i1="10" i2="X" l="ENG"><s0>Mobility</s0>
<s5>13</s5>
</fC03>
<fC03 i1="10" i2="X" l="SPA"><s0>Movilidad</s0>
<s5>13</s5>
</fC03>
<fC03 i1="11" i2="3" l="FRE"><s0>Informatique mobile</s0>
<s5>18</s5>
</fC03>
<fC03 i1="11" i2="3" l="ENG"><s0>Mobile computing</s0>
<s5>18</s5>
</fC03>
<fC03 i1="12" i2="X" l="FRE"><s0>Sensibilité tactile</s0>
<s5>19</s5>
</fC03>
<fC03 i1="12" i2="X" l="ENG"><s0>Tactile sensitivity</s0>
<s5>19</s5>
</fC03>
<fC03 i1="12" i2="X" l="SPA"><s0>Sensibilidad tactil</s0>
<s5>19</s5>
</fC03>
<fC03 i1="13" i2="X" l="FRE"><s0>Trouble de l'audition</s0>
<s5>20</s5>
</fC03>
<fC03 i1="13" i2="X" l="ENG"><s0>Auditory disorder</s0>
<s5>20</s5>
</fC03>
<fC03 i1="13" i2="X" l="SPA"><s0>Trastorno auditivo</s0>
<s5>20</s5>
</fC03>
<fC03 i1="14" i2="X" l="FRE"><s0>Coréen</s0>
<s5>21</s5>
</fC03>
<fC03 i1="14" i2="X" l="ENG"><s0>Korean</s0>
<s5>21</s5>
</fC03>
<fC03 i1="14" i2="X" l="SPA"><s0>Coreano</s0>
<s5>21</s5>
</fC03>
<fC03 i1="15" i2="X" l="FRE"><s0>Personnalisation</s0>
<s5>22</s5>
</fC03>
<fC03 i1="15" i2="X" l="ENG"><s0>Customization</s0>
<s5>22</s5>
</fC03>
<fC03 i1="15" i2="X" l="SPA"><s0>Personalización</s0>
<s5>22</s5>
</fC03>
<fC03 i1="16" i2="X" l="FRE"><s0>Besoin de l'utilisateur</s0>
<s5>23</s5>
</fC03>
<fC03 i1="16" i2="X" l="ENG"><s0>User need</s0>
<s5>23</s5>
</fC03>
<fC03 i1="16" i2="X" l="SPA"><s0>Necesidad usuario</s0>
<s5>23</s5>
</fC03>
<fC03 i1="17" i2="X" l="FRE"><s0>Modélisation</s0>
<s5>24</s5>
</fC03>
<fC03 i1="17" i2="X" l="ENG"><s0>Modeling</s0>
<s5>24</s5>
</fC03>
<fC03 i1="17" i2="X" l="SPA"><s0>Modelización</s0>
<s5>24</s5>
</fC03>
<fC03 i1="18" i2="X" l="FRE"><s0>.</s0>
<s4>INC</s4>
<s5>82</s5>
</fC03>
<fN21><s1>007</s1>
</fN21>
<fN44 i1="01"><s1>OTO</s1>
</fN44>
<fN82><s1>OTO</s1>
</fN82>
</pA>
<pR><fA30 i1="01" i2="1" l="ENG"><s1>International Conference on Embedded and Ubiquitous Computing</s1>
<s3>Seoul KOR</s3>
<s4>2006</s4>
</fA30>
</pR>
</standard>
</inist>
<affiliations><list><country><li>Corée du Sud</li>
</country>
</list>
<tree><country name="Corée du Sud"><noRegion><name sortKey="Kim, Jung Hyun" sort="Kim, Jung Hyun" uniqKey="Kim J" first="Jung-Hyun" last="Kim">Jung-Hyun Kim</name>
</noRegion>
<name sortKey="Hong, Kwang Seok" sort="Hong, Kwang Seok" uniqKey="Hong K" first="Kwang-Seok" last="Hong">Kwang-Seok Hong</name>
</country>
</tree>
</affiliations>
</record>
To manipulate this document under Unix (Dilib)
EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/PascalFrancis/Checkpoint
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000965 | SxmlIndent | more
Or
HfdSelect -h $EXPLOR_AREA/Data/PascalFrancis/Checkpoint/biblio.hfd -nk 000965 | SxmlIndent | more
To link to this page within the Wicri network
{{Explor lien |wiki= Ticri/CIDE |area= HapticV1 |flux= PascalFrancis |étape= Checkpoint |type= RBID |clé= Pascal:08-0009649 |texte= MMSDS : Ubiquitous computing and WWW-based multi-modal sentential dialog system }}
This area was generated with Dilib version V0.6.23.