Semantic indexing of multimedia content using textual and visual information
Internal identifier: 000059 (PascalFrancis/Corpus); previous: 000058; next: 000060
Authors: Abdesalam Amrane; Hakima Mellah; Rachid Aliradi; Youssef Amghar
Source:
- International journal of advanced media and communication [1462-4613]; 2014.
French descriptors
- Pascal (Inist)
- Indexation, Multimédia, Donnée textuelle, Information visuelle, Base donnée multimédia, Recherche information, Texte, Gestion contenu, Interrogation base donnée, Linguistique, Ontologie, Traitement image, Sémantique, Mot clé, Lexique, Analyse conceptuelle, Annotation, Classification à vaste marge, Représentation parcimonieuse, Capteur multiple, Analyse sémantique, Recherche par contenu, Classification image, Appariement image.
English descriptors
- KwdEn :
- Annotation, Conceptual analysis, Content management, Content-based retrieval, Database query, Image classification, Image matching, Image processing, Indexing, Information retrieval, Keyword, Lexicon, Linguistics, Multimedia, Multimedia databases, Multisensor, Ontology, Semantic analysis, Semantics, Sparse representation, Text, Textual data, Vector support machine, Visual information.
Abstract
The challenge in multimedia information retrieval remains the indexing process, an active research area. There are three fundamental techniques for indexing multimedia content: using textual information, using low-level visual information, and combining the different kinds of information extracted from the multimedia content. Each approach has its own advantages and disadvantages for improving multimedia retrieval systems. Recent work is oriented towards multimodal approaches. In this paper, we propose an approach that combines the surrounding text with information extracted from the visual content of multimedia documents, represented in the same repository, in order to allow querying multimedia content by keywords or concepts. Each word contained in a query or in a multimedia description is disambiguated using the WordNet ontology in order to determine its semantic concept. Support vector machines (SVMs) are used to classify images into one of the defined semantic concepts based on SIFT (scale-invariant feature transform) descriptors.
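The two steps named in the abstract can be illustrated with a minimal, self-contained sketch. Everything below is illustrative: the sense inventory is a toy stand-in for WordNet, the overlap-based `disambiguate` is a simplified Lesk heuristic standing in for the paper's disambiguation step, and the hand-set weight vectors in `classify` stand in for SVM hyperplanes that would actually be learned from SIFT bag-of-visual-words histograms.

```python
# Toy sense inventory: each sense of a word has a small "gloss" word set.
# In the paper's pipeline this role is played by WordNet synsets.
SENSES = {
    "bank": {
        "finance": {"money", "deposit", "loan", "account"},
        "river":   {"water", "shore", "slope", "stream"},
    },
}

def disambiguate(word, context_words):
    """Pick the sense whose gloss overlaps most with the surrounding text
    (a simplified Lesk heuristic)."""
    context = set(context_words)
    sense, _ = max(SENSES[word].items(), key=lambda kv: len(kv[1] & context))
    return sense

# One-vs-rest linear scoring over a 4-bin visual-word histogram.
# These weights are hypothetical; a real system would train an SVM
# on SIFT-based histograms for each semantic concept.
CONCEPT_WEIGHTS = {
    "beach": [0.9, 0.1, -0.2, 0.0],
    "city":  [-0.3, 0.8, 0.7, 0.1],
}

def classify(histogram):
    """Assign the histogram to the concept with the highest linear score."""
    scores = {concept: sum(w * h for w, h in zip(weights, histogram))
              for concept, weights in CONCEPT_WEIGHTS.items()}
    return max(scores, key=scores.get)

print(disambiguate("bank", ["the", "water", "near", "the", "shore"]))  # river
print(classify([0.7, 0.1, 0.1, 0.1]))  # beach
```

Both steps map their input into the same concept vocabulary, which is what lets textual queries and visual content be indexed in one shared repository.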
Record in standard format (ISO 2709)
See the documentation on the Inist Standard format.
Inist format (server)
NO: PASCAL 15-0028757 INIST
ET: Semantic indexing of multimedia content using textual and visual information
AU: AMRANE (Abdesalam); MELLAH (Hakima); ALIRADI (Rachid); AMGHAR (Youssef); WAI CHI FANG; KIM (Tai-hoon); RAMOS (Carlos); MOHAMMED (Sabah); GERVASI (Osvaldo); STOICA (Adrian)
AF: Research Center on Scientific and Technical Information (CERIST)/Ben Aknoun, Algiers/Algeria (1 aut., 2 aut., 3 aut.); University of Lyon, CNRS, INSA-Lyon, LIRIS/UMR5205, 69621/France (4 aut.); Department of Electronics Engineering, National Chiao Tung University, 1001 Ta Hsueh Road/Hsinchu, Taiwan 300/Taiwan (1 aut.); School of Computing and Information Science, University of Tasmania, Australia, Centenary Building, Room 350, Private Bag 87/Hobart, TAS 7001/Australia (2 aut.); Instituto Politécnico do Porto, Rua Dr. António Bernardino de Almeida, 431/Porto 4200-072/Portugal (3 aut.); Department of Computer Science, Lakehead University, 955 Oliver Road, Thunder Bay/Ontario P7B 5E1/Canada (4 aut.); Department of Mathematics and Computer Science, University of Perugia/106123 Perugia/Italy (5 aut.); NASA JPL, M/S 303-300, 4800 Oak Grove Drive/Pasadena, CA 91109/United States (6 aut.)
DT: Serial publication; Analytical level
SO: International journal of advanced media and communication; ISSN 1462-4613; Switzerland; 2014; Vol. 5; No. 2-3; pp. 182-194; Bibl. 1 p.
LA: English
EA: The challenge in multimedia information retrieval remains the indexing process, an active research area. There are three fundamental techniques for indexing multimedia content: using textual information, using low-level visual information, and combining the different kinds of information extracted from the multimedia content. Each approach has its own advantages and disadvantages for improving multimedia retrieval systems. Recent work is oriented towards multimodal approaches. In this paper, we propose an approach that combines the surrounding text with information extracted from the visual content of multimedia documents, represented in the same repository, in order to allow querying multimedia content by keywords or concepts. Each word contained in a query or in a multimedia description is disambiguated using the WordNet ontology in order to determine its semantic concept. Support vector machines (SVMs) are used to classify images into one of the defined semantic concepts based on SIFT (scale-invariant feature transform) descriptors.
CC: 001D02C03; 001D02B07D; 001D02B04; 001D02B07B
FD: Indexation; Multimédia; Donnée textuelle; Information visuelle; Base donnée multimédia; Recherche information; Texte; Gestion contenu; Interrogation base donnée; Linguistique; Ontologie; Traitement image; Sémantique; Mot clé; Lexique; Analyse conceptuelle; Annotation; Classification à vaste marge; Représentation parcimonieuse; Capteur multiple; Analyse sémantique; Recherche par contenu; Classification image; Appariement image
ED: Indexing; Multimedia; Textual data; Visual information; Multimedia databases; Information retrieval; Text; Content management; Database query; Linguistics; Ontology; Image processing; Semantics; Keyword; Lexicon; Conceptual analysis; Annotation; Vector support machine; Sparse representation; Multisensor; Semantic analysis; Content-based retrieval; Image classification; Image matching
SD: Indización; Multimedia; Dato textual; Información visual; Búsqueda información; Texto; Gestión contenido; Interrogación base datos; Linguística; Ontología; Procesamiento imagen; Semántica; Palabra clave; Léxico; Análisis conceptual; Anotación; Máquina ejemplo soporte; Representación parsimoniosa; Multisensor; Análisis semántico; Búsqueda por Contenidos; Clasificación de imágenes; reconocimiento de patrones en imágenes
LO: INIST-27778.354000504548010080
ID: 15-0028757
Links to Exploration step
Pascal:15-0028757 (the document in XML format)
<record><TEI><teiHeader><fileDesc><titleStmt><title xml:lang="en" level="a">Semantic indexing of multimedia content using textual and visual information</title>
<author><name sortKey="Amrane, Abdesalam" sort="Amrane, Abdesalam" uniqKey="Amrane A" first="Abdesalam" last="Amrane">Abdesalam Amrane</name>
<affiliation><inist:fA14 i1="01"><s1>Research Center on Scientific and Technical Information (CERIST)</s1>
<s2>Ben Aknoun, Algiers</s2>
<s3>DZA</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
<sZ>3 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
<author><name sortKey="Mellah, Hakima" sort="Mellah, Hakima" uniqKey="Mellah H" first="Hakima" last="Mellah">Hakima Mellah</name>
<affiliation><inist:fA14 i1="01"><s1>Research Center on Scientific and Technical Information (CERIST)</s1>
<s2>Ben Aknoun, Algiers</s2>
<s3>DZA</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
<sZ>3 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
<author><name sortKey="Aliradi, Rachid" sort="Aliradi, Rachid" uniqKey="Aliradi R" first="Rachid" last="Aliradi">Rachid Aliradi</name>
<affiliation><inist:fA14 i1="01"><s1>Research Center on Scientific and Technical Information (CERIST)</s1>
<s2>Ben Aknoun, Algiers</s2>
<s3>DZA</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
<sZ>3 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
<author><name sortKey="Amghar, Youssef" sort="Amghar, Youssef" uniqKey="Amghar Y" first="Youssef" last="Amghar">Youssef Amghar</name>
<affiliation><inist:fA14 i1="02"><s1>University of Lyon, CNRS, INSA-Lyon, LIRIS</s1>
<s2>UMR5205, 69621</s2>
<s3>FRA</s3>
<sZ>4 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
</titleStmt>
<publicationStmt><idno type="wicri:source">INIST</idno>
<idno type="inist">15-0028757</idno>
<date when="2014">2014</date>
<idno type="stanalyst">PASCAL 15-0028757 INIST</idno>
<idno type="RBID">Pascal:15-0028757</idno>
<idno type="wicri:Area/PascalFrancis/Corpus">000059</idno>
</publicationStmt>
<sourceDesc><biblStruct><analytic><title xml:lang="en" level="a">Semantic indexing of multimedia content using textual and visual information</title>
<author><name sortKey="Amrane, Abdesalam" sort="Amrane, Abdesalam" uniqKey="Amrane A" first="Abdesalam" last="Amrane">Abdesalam Amrane</name>
<affiliation><inist:fA14 i1="01"><s1>Research Center on Scientific and Technical Information (CERIST)</s1>
<s2>Ben Aknoun, Algiers</s2>
<s3>DZA</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
<sZ>3 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
<author><name sortKey="Mellah, Hakima" sort="Mellah, Hakima" uniqKey="Mellah H" first="Hakima" last="Mellah">Hakima Mellah</name>
<affiliation><inist:fA14 i1="01"><s1>Research Center on Scientific and Technical Information (CERIST)</s1>
<s2>Ben Aknoun, Algiers</s2>
<s3>DZA</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
<sZ>3 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
<author><name sortKey="Aliradi, Rachid" sort="Aliradi, Rachid" uniqKey="Aliradi R" first="Rachid" last="Aliradi">Rachid Aliradi</name>
<affiliation><inist:fA14 i1="01"><s1>Research Center on Scientific and Technical Information (CERIST)</s1>
<s2>Ben Aknoun, Algiers</s2>
<s3>DZA</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
<sZ>3 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
<author><name sortKey="Amghar, Youssef" sort="Amghar, Youssef" uniqKey="Amghar Y" first="Youssef" last="Amghar">Youssef Amghar</name>
<affiliation><inist:fA14 i1="02"><s1>University of Lyon, CNRS, INSA-Lyon, LIRIS</s1>
<s2>UMR5205, 69621</s2>
<s3>FRA</s3>
<sZ>4 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
</analytic>
<series><title level="j" type="main">International journal of advanced media and communication</title>
<idno type="ISSN">1462-4613</idno>
<imprint><date when="2014">2014</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
<seriesStmt><title level="j" type="main">International journal of advanced media and communication</title>
<idno type="ISSN">1462-4613</idno>
</seriesStmt>
</fileDesc>
<profileDesc><textClass><keywords scheme="KwdEn" xml:lang="en"><term>Annotation</term>
<term>Conceptual analysis</term>
<term>Content management</term>
<term>Content-based retrieval</term>
<term>Database query</term>
<term>Image classification</term>
<term>Image matching</term>
<term>Image processing</term>
<term>Indexing</term>
<term>Information retrieval</term>
<term>Keyword</term>
<term>Lexicon</term>
<term>Linguistics</term>
<term>Multimedia</term>
<term>Multimedia databases</term>
<term>Multisensor</term>
<term>Ontology</term>
<term>Semantic analysis</term>
<term>Semantics</term>
<term>Sparse representation</term>
<term>Text</term>
<term>Textual data</term>
<term>Vector support machine</term>
<term>Visual information</term>
</keywords>
<keywords scheme="Pascal" xml:lang="fr"><term>Indexation</term>
<term>Multimédia</term>
<term>Donnée textuelle</term>
<term>Information visuelle</term>
<term>Base donnée multimédia</term>
<term>Recherche information</term>
<term>Texte</term>
<term>Gestion contenu</term>
<term>Interrogation base donnée</term>
<term>Linguistique</term>
<term>Ontologie</term>
<term>Traitement image</term>
<term>Sémantique</term>
<term>Mot clé</term>
<term>Lexique</term>
<term>Analyse conceptuelle</term>
<term>Annotation</term>
<term>Classification à vaste marge</term>
<term>Représentation parcimonieuse</term>
<term>Capteur multiple</term>
<term>Analyse sémantique</term>
<term>.</term>
<term>Recherche par contenu</term>
<term>Classification image</term>
<term>Appariement image</term>
</keywords>
</textClass>
</profileDesc>
</teiHeader>
<front><div type="abstract" xml:lang="en">The challenge in multimedia information retrieval remains in the indexing process, an active search area. There are three fundamental techniques for indexing multimedia content: using textual information, using low-level information and combining different information extracted from multimedia. Each approach has its advantages and disadvantages as well to improve multimedia retrieval systems. The recent works are oriented towards multimodal approaches. In this paper, we propose an approach that combines the surrounding text with the information extracted from the visual content of multimedia and represented in the same repository in order to allow querying multimedia content based on keywords or concepts. Each word contained in queries or in description of multimedia is disambiguated using the WordNet ontology in order to define its semantic concept. Support vector machines (SVMs) are used for image classification in one of the defined semantic concept based on SIFT (scale invariant feature transform) descriptors.</div>
</front>
</TEI>
<inist><standard h6="B"><pA><fA01 i1="01" i2="1"><s0>1462-4613</s0>
</fA01>
<fA05><s2>5</s2>
</fA05>
<fA06><s2>2-3</s2>
</fA06>
<fA08 i1="01" i2="1" l="ENG"><s1>Semantic indexing of multimedia content using textual and visual information</s1>
</fA08>
<fA09 i1="01" i2="1" l="ENG"><s1>ADVANCES IN MULTIMEDIA, COMPUTER GRAPHICS AND BROADCASTING</s1>
</fA09>
<fA11 i1="01" i2="1"><s1>AMRANE (Abdesalam)</s1>
</fA11>
<fA11 i1="02" i2="1"><s1>MELLAH (Hakima)</s1>
</fA11>
<fA11 i1="03" i2="1"><s1>ALIRADI (Rachid)</s1>
</fA11>
<fA11 i1="04" i2="1"><s1>AMGHAR (Youssef)</s1>
</fA11>
<fA12 i1="01" i2="1"><s1>WAI CHI FANG</s1>
<s9>ed.</s9>
</fA12>
<fA12 i1="02" i2="1"><s1>KIM (Tai-hoon)</s1>
<s9>ed.</s9>
</fA12>
<fA12 i1="03" i2="1"><s1>RAMOS (Carlos)</s1>
<s9>ed.</s9>
</fA12>
<fA12 i1="04" i2="1"><s1>MOHAMMED (Sabah)</s1>
<s9>ed.</s9>
</fA12>
<fA12 i1="05" i2="1"><s1>GERVASI (Osvaldo)</s1>
<s9>ed.</s9>
</fA12>
<fA12 i1="06" i2="1"><s1>STOICA (Adrian)</s1>
<s9>ed.</s9>
</fA12>
<fA14 i1="01"><s1>Research Center on Scientific and Technical Information (CERIST)</s1>
<s2>Ben Aknoun, Algiers</s2>
<s3>DZA</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
<sZ>3 aut.</sZ>
</fA14>
<fA14 i1="02"><s1>University of Lyon, CNRS, INSA-Lyon, LIRIS</s1>
<s2>UMR5205, 69621</s2>
<s3>FRA</s3>
<sZ>4 aut.</sZ>
</fA14>
<fA15 i1="01"><s1>Department of Electronics Engineering, National Chiao Tung University, 1001 Ta Hsueh Road</s1>
<s2>Hinschu, Taiwan 300</s2>
<s3>TWN</s3>
<sZ>1 aut.</sZ>
</fA15>
<fA15 i1="02"><s1>School of Computing and Information Science, University of Tasmania, Australia, Centenary Building, Room 350, Private Bag 87</s1>
<s2>Hobart, TAS 7001</s2>
<s3>AUS</s3>
<sZ>2 aut.</sZ>
</fA15>
<fA15 i1="03"><s1>Instituto Politécnico do Porto, Rua Dr. António Bernardino de Almeida, 431</s1>
<s2>Porto 4200-072</s2>
<s3>PRT</s3>
<sZ>3 aut.</sZ>
</fA15>
<fA15 i1="04"><s1>Department of Computer Science, Lakehead University, 955 Oliver Road, Thunder Bay</s1>
<s2>Ontario P7B 5E1</s2>
<s3>CAN</s3>
<sZ>4 aut.</sZ>
</fA15>
<fA15 i1="05"><s1>Department of Mathematics and Computer Science, University of Perugia</s1>
<s2>106123 Perugia</s2>
<s3>ITA</s3>
<sZ>5 aut.</sZ>
</fA15>
<fA15 i1="06"><s1>NASA JPL, M/S 303-300, 4800 Oak Grove Drive</s1>
<s2>Pasadena, CA 91109</s2>
<s3>USA</s3>
<sZ>6 aut.</sZ>
</fA15>
<fA20><s1>182-194</s1>
</fA20>
<fA21><s1>2014</s1>
</fA21>
<fA23 i1="01"><s0>ENG</s0>
</fA23>
<fA43 i1="01"><s1>INIST</s1>
<s2>27778</s2>
<s5>354000504548010080</s5>
</fA43>
<fA44><s0>0000</s0>
<s1>© 2015 INIST-CNRS. All rights reserved.</s1>
</fA44>
<fA45><s0>1 p.</s0>
</fA45>
<fA47 i1="01" i2="1"><s0>15-0028757</s0>
</fA47>
<fA60><s1>P</s1>
</fA60>
<fA61><s0>A</s0>
</fA61>
<fA64 i1="01" i2="1"><s0>International journal of advanced media and communication</s0>
</fA64>
<fA66 i1="01"><s0>CHE</s0>
</fA66>
<fA99><s0>1 notes</s0>
</fA99>
<fC01 i1="01" l="ENG"><s0>The challenge in multimedia information retrieval remains in the indexing process, an active search area. There are three fundamental techniques for indexing multimedia content: using textual information, using low-level information and combining different information extracted from multimedia. Each approach has its advantages and disadvantages as well to improve multimedia retrieval systems. The recent works are oriented towards multimodal approaches. In this paper, we propose an approach that combines the surrounding text with the information extracted from the visual content of multimedia and represented in the same repository in order to allow querying multimedia content based on keywords or concepts. Each word contained in queries or in description of multimedia is disambiguated using the WordNet ontology in order to define its semantic concept. Support vector machines (SVMs) are used for image classification in one of the defined semantic concept based on SIFT (scale invariant feature transform) descriptors.</s0>
</fC01>
<fC02 i1="01" i2="X"><s0>001D02C03</s0>
</fC02>
<fC02 i1="02" i2="X"><s0>001D02B07D</s0>
</fC02>
<fC02 i1="03" i2="X"><s0>001D02B04</s0>
</fC02>
<fC02 i1="04" i2="X"><s0>001D02B07B</s0>
</fC02>
<fC03 i1="01" i2="X" l="FRE"><s0>Indexation</s0>
<s5>06</s5>
</fC03>
<fC03 i1="01" i2="X" l="ENG"><s0>Indexing</s0>
<s5>06</s5>
</fC03>
<fC03 i1="01" i2="X" l="SPA"><s0>Indización</s0>
<s5>06</s5>
</fC03>
<fC03 i1="02" i2="X" l="FRE"><s0>Multimédia</s0>
<s5>07</s5>
</fC03>
<fC03 i1="02" i2="X" l="ENG"><s0>Multimedia</s0>
<s5>07</s5>
</fC03>
<fC03 i1="02" i2="X" l="SPA"><s0>Multimedia</s0>
<s5>07</s5>
</fC03>
<fC03 i1="03" i2="X" l="FRE"><s0>Donnée textuelle</s0>
<s5>08</s5>
</fC03>
<fC03 i1="03" i2="X" l="ENG"><s0>Textual data</s0>
<s5>08</s5>
</fC03>
<fC03 i1="03" i2="X" l="SPA"><s0>Dato textual</s0>
<s5>08</s5>
</fC03>
<fC03 i1="04" i2="X" l="FRE"><s0>Information visuelle</s0>
<s5>09</s5>
</fC03>
<fC03 i1="04" i2="X" l="ENG"><s0>Visual information</s0>
<s5>09</s5>
</fC03>
<fC03 i1="04" i2="X" l="SPA"><s0>Información visual</s0>
<s5>09</s5>
</fC03>
<fC03 i1="05" i2="3" l="FRE"><s0>Base donnée multimédia</s0>
<s5>10</s5>
</fC03>
<fC03 i1="05" i2="3" l="ENG"><s0>Multimedia databases</s0>
<s5>10</s5>
</fC03>
<fC03 i1="06" i2="X" l="FRE"><s0>Recherche information</s0>
<s5>11</s5>
</fC03>
<fC03 i1="06" i2="X" l="ENG"><s0>Information retrieval</s0>
<s5>11</s5>
</fC03>
<fC03 i1="06" i2="X" l="SPA"><s0>Búsqueda información</s0>
<s5>11</s5>
</fC03>
<fC03 i1="07" i2="X" l="FRE"><s0>Texte</s0>
<s5>12</s5>
</fC03>
<fC03 i1="07" i2="X" l="ENG"><s0>Text</s0>
<s5>12</s5>
</fC03>
<fC03 i1="07" i2="X" l="SPA"><s0>Texto</s0>
<s5>12</s5>
</fC03>
<fC03 i1="08" i2="X" l="FRE"><s0>Gestion contenu</s0>
<s5>13</s5>
</fC03>
<fC03 i1="08" i2="X" l="ENG"><s0>Content management</s0>
<s5>13</s5>
</fC03>
<fC03 i1="08" i2="X" l="SPA"><s0>Gestión contenido</s0>
<s5>13</s5>
</fC03>
<fC03 i1="09" i2="X" l="FRE"><s0>Interrogation base donnée</s0>
<s5>14</s5>
</fC03>
<fC03 i1="09" i2="X" l="ENG"><s0>Database query</s0>
<s5>14</s5>
</fC03>
<fC03 i1="09" i2="X" l="SPA"><s0>Interrogación base datos</s0>
<s5>14</s5>
</fC03>
<fC03 i1="10" i2="X" l="FRE"><s0>Linguistique</s0>
<s5>15</s5>
</fC03>
<fC03 i1="10" i2="X" l="ENG"><s0>Linguistics</s0>
<s5>15</s5>
</fC03>
<fC03 i1="10" i2="X" l="SPA"><s0>Linguística</s0>
<s5>15</s5>
</fC03>
<fC03 i1="11" i2="X" l="FRE"><s0>Ontologie</s0>
<s5>16</s5>
</fC03>
<fC03 i1="11" i2="X" l="ENG"><s0>Ontology</s0>
<s5>16</s5>
</fC03>
<fC03 i1="11" i2="X" l="SPA"><s0>Ontología</s0>
<s5>16</s5>
</fC03>
<fC03 i1="12" i2="X" l="FRE"><s0>Traitement image</s0>
<s5>17</s5>
</fC03>
<fC03 i1="12" i2="X" l="ENG"><s0>Image processing</s0>
<s5>17</s5>
</fC03>
<fC03 i1="12" i2="X" l="SPA"><s0>Procesamiento imagen</s0>
<s5>17</s5>
</fC03>
<fC03 i1="13" i2="X" l="FRE"><s0>Sémantique</s0>
<s5>18</s5>
</fC03>
<fC03 i1="13" i2="X" l="ENG"><s0>Semantics</s0>
<s5>18</s5>
</fC03>
<fC03 i1="13" i2="X" l="SPA"><s0>Semántica</s0>
<s5>18</s5>
</fC03>
<fC03 i1="14" i2="X" l="FRE"><s0>Mot clé</s0>
<s5>19</s5>
</fC03>
<fC03 i1="14" i2="X" l="ENG"><s0>Keyword</s0>
<s5>19</s5>
</fC03>
<fC03 i1="14" i2="X" l="SPA"><s0>Palabra clave</s0>
<s5>19</s5>
</fC03>
<fC03 i1="15" i2="X" l="FRE"><s0>Lexique</s0>
<s5>20</s5>
</fC03>
<fC03 i1="15" i2="X" l="ENG"><s0>Lexicon</s0>
<s5>20</s5>
</fC03>
<fC03 i1="15" i2="X" l="SPA"><s0>Léxico</s0>
<s5>20</s5>
</fC03>
<fC03 i1="16" i2="X" l="FRE"><s0>Analyse conceptuelle</s0>
<s5>21</s5>
</fC03>
<fC03 i1="16" i2="X" l="ENG"><s0>Conceptual analysis</s0>
<s5>21</s5>
</fC03>
<fC03 i1="16" i2="X" l="SPA"><s0>Análisis conceptual</s0>
<s5>21</s5>
</fC03>
<fC03 i1="17" i2="X" l="FRE"><s0>Annotation</s0>
<s5>22</s5>
</fC03>
<fC03 i1="17" i2="X" l="ENG"><s0>Annotation</s0>
<s5>22</s5>
</fC03>
<fC03 i1="17" i2="X" l="SPA"><s0>Anotación</s0>
<s5>22</s5>
</fC03>
<fC03 i1="18" i2="X" l="FRE"><s0>Classification à vaste marge</s0>
<s5>23</s5>
</fC03>
<fC03 i1="18" i2="X" l="ENG"><s0>Vector support machine</s0>
<s5>23</s5>
</fC03>
<fC03 i1="18" i2="X" l="SPA"><s0>Máquina ejemplo soporte</s0>
<s5>23</s5>
</fC03>
<fC03 i1="19" i2="X" l="FRE"><s0>Représentation parcimonieuse</s0>
<s5>24</s5>
</fC03>
<fC03 i1="19" i2="X" l="ENG"><s0>Sparse representation</s0>
<s5>24</s5>
</fC03>
<fC03 i1="19" i2="X" l="SPA"><s0>Representación parsimoniosa</s0>
<s5>24</s5>
</fC03>
<fC03 i1="20" i2="X" l="FRE"><s0>Capteur multiple</s0>
<s5>25</s5>
</fC03>
<fC03 i1="20" i2="X" l="ENG"><s0>Multisensor</s0>
<s5>25</s5>
</fC03>
<fC03 i1="20" i2="X" l="SPA"><s0>Multisensor</s0>
<s5>25</s5>
</fC03>
<fC03 i1="21" i2="X" l="FRE"><s0>Analyse sémantique</s0>
<s5>41</s5>
</fC03>
<fC03 i1="21" i2="X" l="ENG"><s0>Semantic analysis</s0>
<s5>41</s5>
</fC03>
<fC03 i1="21" i2="X" l="SPA"><s0>Análisis semántico</s0>
<s5>41</s5>
</fC03>
<fC03 i1="22" i2="X" l="FRE"><s0>.</s0>
<s4>INC</s4>
<s5>82</s5>
</fC03>
<fC03 i1="23" i2="X" l="FRE"><s0>Recherche par contenu</s0>
<s4>CD</s4>
<s5>96</s5>
</fC03>
<fC03 i1="23" i2="X" l="ENG"><s0>Content-based retrieval</s0>
<s4>CD</s4>
<s5>96</s5>
</fC03>
<fC03 i1="23" i2="X" l="SPA"><s0>Búsqueda por Contenidos</s0>
<s4>CD</s4>
<s5>96</s5>
</fC03>
<fC03 i1="24" i2="X" l="FRE"><s0>Classification image</s0>
<s4>CD</s4>
<s5>97</s5>
</fC03>
<fC03 i1="24" i2="X" l="ENG"><s0>Image classification</s0>
<s4>CD</s4>
<s5>97</s5>
</fC03>
<fC03 i1="24" i2="X" l="SPA"><s0>Clasificación de imágenes</s0>
<s4>CD</s4>
<s5>97</s5>
</fC03>
<fC03 i1="25" i2="X" l="FRE"><s0>Appariement image</s0>
<s4>CD</s4>
<s5>98</s5>
</fC03>
<fC03 i1="25" i2="X" l="ENG"><s0>Image matching</s0>
<s4>CD</s4>
<s5>98</s5>
</fC03>
<fC03 i1="25" i2="X" l="SPA"><s0>reconocimiento de patrones en imágenes</s0>
<s4>CD</s4>
<s5>98</s5>
</fC03>
<fN21><s1>047</s1>
</fN21>
<fN44 i1="01"><s1>OTO</s1>
</fN44>
<fN82><s1>OTO</s1>
</fN82>
</pA>
</standard>
<server><NO>PASCAL 15-0028757 INIST</NO>
<ET>Semantic indexing of multimedia content using textual and visual information</ET>
<AU>AMRANE (Abdesalam); MELLAH (Hakima); ALIRADI (Rachid); AMGHAR (Youssef); WAI CHI FANG; KIM (Tai-hoon); RAMOS (Carlos); MOHAMMED (Sabah); GERVASI (Osvaldo); STOICA (Adrian)</AU>
<AF>Research Center on Scientific and Technical Information (CERIST)/Ben Aknoun, Algiers/Algérie (1 aut., 2 aut., 3 aut.); University of Lyon, CNRS, INSA-Lyon, LIRIS/UMR5205, 69621/France (4 aut.); Department of Electronics Engineering, National Chiao Tung University, 1001 Ta Hsueh Road/Hinschu, Taiwan 300/Taïwan (1 aut.); School of Computing and Information Science, University of Tasmania, Australia, Centenary Building, Room 350, Private Bag 87/Hobart, TAS 7001/Australie (2 aut.); Instituto Politécnico do Porto, Rua Dr. António Bernardino de Almeida, 431/Porto 4200-072/Portugal (3 aut.); Department of Computer Science, Lakehead University, 955 Oliver Road, Thunder Bay/Ontario P7B 5E1/Canada (4 aut.); Department of Mathematics and Computer Science, University of Perugia/106123 Perugia/Italie (5 aut.); NASA JPL, M/S 303-300, 4800 Oak Grove Drive/Pasadena, CA 91109/Etats-Unis (6 aut.)</AF>
<DT>Publication en série; Niveau analytique</DT>
<SO>International journal of advanced media and communication; ISSN 1462-4613; Suisse; Da. 2014; Vol. 5; No. 2-3; Pp. 182-194; Bibl. 1 p.</SO>
<LA>Anglais</LA>
<EA>The challenge in multimedia information retrieval remains in the indexing process, an active search area. There are three fundamental techniques for indexing multimedia content: using textual information, using low-level information and combining different information extracted from multimedia. Each approach has its advantages and disadvantages as well to improve multimedia retrieval systems. The recent works are oriented towards multimodal approaches. In this paper, we propose an approach that combines the surrounding text with the information extracted from the visual content of multimedia and represented in the same repository in order to allow querying multimedia content based on keywords or concepts. Each word contained in queries or in description of multimedia is disambiguated using the WordNet ontology in order to define its semantic concept. Support vector machines (SVMs) are used for image classification in one of the defined semantic concept based on SIFT (scale invariant feature transform) descriptors.</EA>
<CC>001D02C03; 001D02B07D; 001D02B04; 001D02B07B</CC>
<FD>Indexation; Multimédia; Donnée textuelle; Information visuelle; Base donnée multimédia; Recherche information; Texte; Gestion contenu; Interrogation base donnée; Linguistique; Ontologie; Traitement image; Sémantique; Mot clé; Lexique; Analyse conceptuelle; Annotation; Classification à vaste marge; Représentation parcimonieuse; Capteur multiple; Analyse sémantique; .; Recherche par contenu; Classification image; Appariement image</FD>
<ED>Indexing; Multimedia; Textual data; Visual information; Multimedia databases; Information retrieval; Text; Content management; Database query; Linguistics; Ontology; Image processing; Semantics; Keyword; Lexicon; Conceptual analysis; Annotation; Vector support machine; Sparse representation; Multisensor; Semantic analysis; Content-based retrieval; Image classification; Image matching</ED>
<SD>Indización; Multimedia; Dato textual; Información visual; Búsqueda información; Texto; Gestión contenido; Interrogación base datos; Linguística; Ontología; Procesamiento imagen; Semántica; Palabra clave; Léxico; Análisis conceptual; Anotación; Máquina ejemplo soporte; Representación parsimoniosa; Multisensor; Análisis semántico; Búsqueda por Contenidos; Clasificación de imágenes; reconocimiento de patrones en imágenes</SD>
<LO>INIST-27778.354000504548010080</LO>
<ID>15-0028757</ID>
</server>
</inist>
</record>
To work with this document under Unix (Dilib):
EXPLOR_STEP=$WICRI_ROOT/Wicri/Asie/explor/AustralieFrV1/Data/PascalFrancis/Corpus
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000059 | SxmlIndent | more
Or
HfdSelect -h $EXPLOR_AREA/Data/PascalFrancis/Corpus/biblio.hfd -nk 000059 | SxmlIndent | more
To link to this page in the Wicri network:
{{Explor lien |wiki= Wicri/Asie |area= AustralieFrV1 |flux= PascalFrancis |étape= Corpus |type= RBID |clé= Pascal:15-0028757 |texte= Semantic indexing of multimedia content using textual and visual information }}
This area was generated with Dilib version V0.6.33.