Exploration server on Pittsburgh

Warning: this site is under development!
Warning: this site is generated automatically from raw corpora.
The information has therefore not been validated.

Multilabel classification with meta-level features in a learning-to-rank framework

Internal identifier: 001A91 (PascalFrancis/Curation); previous: 001A90; next: 001A92

Multilabel classification with meta-level features in a learning-to-rank framework

Authors: YIMING YANG [United States]; Siddharth Gopal [United States]

Source:

RBID : Pascal:12-0433529

French descriptors

English descriptors

Abstract

Effective learning in multi-label classification (MLC) requires an appropriate level of abstraction for representing the relationship between each instance and multiple categories. Current MLC methods have focused on learning-to-map from instances to categories in a relatively low-level feature space, such as individual words. The fine-grained features in such a space may not be sufficiently expressive for learning to rank categories, which is essential in multi-label classification. This paper presents an alternative solution by transforming the conventional representation of instances and categories into meta-level features, and by leveraging successful learning-to-rank retrieval algorithms over this feature space. Controlled experiments on six benchmark datasets using eight evaluation metrics show strong evidence for the effectiveness of the proposed approach, which significantly outperformed other state-of-the-art methods such as Rank-SVM, ML-kNN (Multilabel kNN), IBLR-ML (Instance-based logistic regression for multi-label classification) on most of the datasets. Thorough analyses are also provided for separating the factors responsible for the improved performance.
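The abstract describes transforming instances and categories into meta-level features and ranking categories over that feature space. The minimal sketch below is an illustration only, not the paper's method: it assumes two hypothetical meta-features per (instance, category) pair, a kNN label vote and a cosine similarity to the category centroid, and ranks categories with a fixed linear blend where the paper would learn weights with a learning-to-rank objective.

```python
# Illustrative sketch of meta-level features for multilabel ranking.
# Assumptions (not from the record): the two meta-features and the fixed
# blend weights are hypothetical stand-ins for the paper's learned model.
import numpy as np

def knn_vote(X_train, Y_train, x, k=3):
    """Fraction of x's k nearest training neighbours carrying each label."""
    d = np.linalg.norm(X_train - x, axis=1)
    nn = np.argsort(d)[:k]
    return Y_train[nn].mean(axis=0)              # shape: (n_labels,)

def centroid_cosine(X_train, Y_train, x):
    """Cosine similarity between x and each label's member centroid."""
    sims = np.zeros(Y_train.shape[1])
    for c in range(Y_train.shape[1]):
        members = X_train[Y_train[:, c] == 1]
        if len(members) == 0:
            continue                             # no positives: leave score 0
        mu = members.mean(axis=0)
        sims[c] = x @ mu / (np.linalg.norm(x) * np.linalg.norm(mu) + 1e-12)
    return sims

def meta_features(X_train, Y_train, x, k=3):
    """Stack the meta-features into an (n_labels, 2) matrix."""
    return np.column_stack([knn_vote(X_train, Y_train, x, k),
                            centroid_cosine(X_train, Y_train, x)])

def rank_categories(X_train, Y_train, x, w=(0.5, 0.5)):
    """Score categories by a fixed linear blend; a real system would learn
    w with a learning-to-rank objective (e.g. a pairwise hinge loss)."""
    scores = meta_features(X_train, Y_train, x) @ np.asarray(w)
    return np.argsort(-scores)                   # best category first
```

On a separable toy set, the category whose neighbours and centroid both match the test point ranks first; the point of the representation is that the ranker sees a small, category-comparable feature vector instead of raw word-level features.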
pA  
A01 01  1    @0 0885-6125
A03   1    @0 Mach. learn.
A05       @2 88
A06       @2 1-2
A08 01  1  ENG  @1 Multilabel classification with meta-level features in a learning-to-rank framework
A09 01  1  ENG  @1 Special Issue on Learning from Multi-Label Data
A11 01  1    @1 YIMING YANG
A11 02  1    @1 GOPAL (Siddharth)
A12 01  1    @1 TSOUMAKAS (Grigorios) @9 ed.
A12 02  1    @1 ZHANG (Min-Ling) @9 ed.
A12 03  1    @1 ZHOU (Zhi-Hua) @9 ed.
A14 01      @1 Language Technologies Institute & Machine Learning Department, Carnegie Mellon University @2 Pittsburgh @3 USA @Z 1 aut.
A14 02      @1 Language Technologies Institute, Carnegie Mellon University @2 Pittsburgh @3 USA @Z 2 aut.
A15 01      @1 Department of Informatics, Aristotle University of Thessaloniki @2 54124 Thessaloniki @3 GRC @Z 1 aut.
A15 02      @1 MOE Key Laboratory of Computer Network and Information Integration, School of Computer Science and Engineering, Southeast University, 2 Sipailou @2 Nanjing 210096 @3 CHN @Z 2 aut.
A15 03      @1 National Key Laboratory for Novel Software Technology, Nanjing University, 163 Xianlin Avenue @2 Nanjing 210046 @3 CHN @Z 3 aut.
A20       @1 47-68
A21       @1 2012
A23 01      @0 ENG
A43 01      @1 INIST @2 21011 @5 354000506629240020
A44       @0 0000 @1 © 2012 INIST-CNRS. All rights reserved.
A45       @0 1 p.3/4
A47 01  1    @0 12-0433529
A60       @1 P
A61       @0 A
A64 01  1    @0 Machine learning
A66 01      @0 DEU
C01 01    ENG  @0 Effective learning in multi-label classification (MLC) requires an appropriate level of abstraction for representing the relationship between each instance and multiple categories. Current MLC methods have focused on learning-to-map from instances to categories in a relatively low-level feature space, such as individual words. The fine-grained features in such a space may not be sufficiently expressive for learning to rank categories, which is essential in multi-label classification. This paper presents an alternative solution by transforming the conventional representation of instances and categories into meta-level features, and by leveraging successful learning-to-rank retrieval algorithms over this feature space. Controlled experiments on six benchmark datasets using eight evaluation metrics show strong evidence for the effectiveness of the proposed approach, which significantly outperformed other state-of-the-art methods such as Rank-SVM, ML-kNN (Multilabel kNN), IBLR-ML (Instance-based logistic regression for multi-label classification) on most of the datasets. Thorough analyses are also provided for separating the factors responsible for the improved performance.
C02 01  X    @0 001D02C02
C02 02  X    @0 001D02B07B
C03 01  X  FRE  @0 Intelligence artificielle @5 06
C03 01  X  ENG  @0 Artificial intelligence @5 06
C03 01  X  SPA  @0 Inteligencia artificial @5 06
C03 02  X  FRE  @0 Analyse donnée @5 07
C03 02  X  ENG  @0 Data analysis @5 07
C03 02  X  SPA  @0 Análisis datos @5 07
C03 03  X  FRE  @0 Classification hiérarchique @5 18
C03 03  X  ENG  @0 Hierarchical classification @5 18
C03 03  X  SPA  @0 Clasificación jerarquizada @5 18
C03 04  X  FRE  @0 Abstraction @5 19
C03 04  X  ENG  @0 Abstraction @5 19
C03 04  X  SPA  @0 Abstracción @5 19
C03 05  X  FRE  @0 Métrique @5 20
C03 05  X  ENG  @0 Metric @5 20
C03 05  X  SPA  @0 Métrico @5 20
C03 06  X  FRE  @0 Régression logistique @5 21
C03 06  X  ENG  @0 Logistic regression @5 21
C03 06  X  SPA  @0 Regresión logística @5 21
C03 07  X  FRE  @0 Métamodèle @5 23
C03 07  X  ENG  @0 Metamodel @5 23
C03 07  X  SPA  @0 Metamodelo @5 23
C03 08  X  FRE  @0 Algorithme apprentissage @5 24
C03 08  X  ENG  @0 Learning algorithm @5 24
C03 08  X  SPA  @0 Algoritmo aprendizaje @5 24
C03 09  X  FRE  @0 Structure grain fin @5 25
C03 09  X  ENG  @0 Fine grain structure @5 25
C03 09  X  SPA  @0 Estructura grano fino @5 25
C03 10  X  FRE  @0 Efficacité @5 26
C03 10  X  ENG  @0 Efficiency @5 26
C03 10  X  SPA  @0 Eficacia @5 26
C03 11  X  FRE  @0 Classification multi-étiquettes @4 CD @5 96
C03 11  X  ENG  @0 Multi-label classification @4 CD @5 96
C03 11  X  SPA  @0 Clasificación multi-etiqueta @4 CD @5 96
N21       @1 338
N44 01      @1 OTO
N82       @1 OTO

Links to previous steps (curation, corpus...)


Links to Exploration step

Pascal:12-0433529

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en" level="a">Multilabel classification with meta-level features in a learning-to-rank framework</title>
<author>
<name sortKey="Yiming Yang" sort="Yiming Yang" uniqKey="Yiming Yang" last="Yiming Yang">YIMING YANG</name>
<affiliation wicri:level="1">
<inist:fA14 i1="01">
<s1>Language Technologies Institute &amp; Machine Learning Department, Carnegie Mellon University</s1>
<s2>Pittsburgh</s2>
<s3>USA</s3>
<sZ>1 aut.</sZ>
</inist:fA14>
<country>États-Unis</country>
</affiliation>
</author>
<author>
<name sortKey="Gopal, Siddharth" sort="Gopal, Siddharth" uniqKey="Gopal S" first="Siddharth" last="Gopal">Siddharth Gopal</name>
<affiliation wicri:level="1">
<inist:fA14 i1="02">
<s1>Language Technologies Institute, Carnegie Mellon University</s1>
<s2>Pittsburgh</s2>
<s3>USA</s3>
<sZ>2 aut.</sZ>
</inist:fA14>
<country>États-Unis</country>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">INIST</idno>
<idno type="inist">12-0433529</idno>
<date when="2012">2012</date>
<idno type="stanalyst">PASCAL 12-0433529 INIST</idno>
<idno type="RBID">Pascal:12-0433529</idno>
<idno type="wicri:Area/PascalFrancis/Corpus">003438</idno>
<idno type="wicri:Area/PascalFrancis/Curation">001A91</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a">Multilabel classification with meta-level features in a learning-to-rank framework</title>
<author>
<name sortKey="Yiming Yang" sort="Yiming Yang" uniqKey="Yiming Yang" last="Yiming Yang">YIMING YANG</name>
<affiliation wicri:level="1">
<inist:fA14 i1="01">
<s1>Language Technologies Institute &amp; Machine Learning Department, Carnegie Mellon University</s1>
<s2>Pittsburgh</s2>
<s3>USA</s3>
<sZ>1 aut.</sZ>
</inist:fA14>
<country>États-Unis</country>
</affiliation>
</author>
<author>
<name sortKey="Gopal, Siddharth" sort="Gopal, Siddharth" uniqKey="Gopal S" first="Siddharth" last="Gopal">Siddharth Gopal</name>
<affiliation wicri:level="1">
<inist:fA14 i1="02">
<s1>Language Technologies Institute, Carnegie Mellon University</s1>
<s2>Pittsburgh</s2>
<s3>USA</s3>
<sZ>2 aut.</sZ>
</inist:fA14>
<country>États-Unis</country>
</affiliation>
</author>
</analytic>
<series>
<title level="j" type="main">Machine learning</title>
<title level="j" type="abbreviated">Mach. learn.</title>
<idno type="ISSN">0885-6125</idno>
<imprint>
<date when="2012">2012</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
<seriesStmt>
<title level="j" type="main">Machine learning</title>
<title level="j" type="abbreviated">Mach. learn.</title>
<idno type="ISSN">0885-6125</idno>
</seriesStmt>
</fileDesc>
<profileDesc>
<textClass>
<keywords scheme="KwdEn" xml:lang="en">
<term>Abstraction</term>
<term>Artificial intelligence</term>
<term>Data analysis</term>
<term>Efficiency</term>
<term>Fine grain structure</term>
<term>Hierarchical classification</term>
<term>Learning algorithm</term>
<term>Logistic regression</term>
<term>Metamodel</term>
<term>Metric</term>
<term>Multi-label classification</term>
</keywords>
<keywords scheme="Pascal" xml:lang="fr">
<term>Intelligence artificielle</term>
<term>Analyse donnée</term>
<term>Classification hiérarchique</term>
<term>Abstraction</term>
<term>Métrique</term>
<term>Régression logistique</term>
<term>Métamodèle</term>
<term>Algorithme apprentissage</term>
<term>Structure grain fin</term>
<term>Efficacité</term>
<term>Classification multi-étiquettes</term>
</keywords>
<keywords scheme="Wicri" type="topic" xml:lang="fr">
<term>Intelligence artificielle</term>
</keywords>
</textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">Effective learning in multi-label classification (MLC) requires an appropriate level of abstraction for representing the relationship between each instance and multiple categories. Current MLC methods have focused on learning-to-map from instances to categories in a relatively low-level feature space, such as individual words. The fine-grained features in such a space may not be sufficiently expressive for learning to rank categories, which is essential in multi-label classification. This paper presents an alternative solution by transforming the conventional representation of instances and categories into meta-level features, and by leveraging successful learning-to-rank retrieval algorithms over this feature space. Controlled experiments on six benchmark datasets using eight evaluation metrics show strong evidence for the effectiveness of the proposed approach, which significantly outperformed other state-of-the-art methods such as Rank-SVM, ML-kNN (Multilabel kNN), IBLR-ML (Instance-based logistic regression for multi-label classification) on most of the datasets. Thorough analyses are also provided for separating the factors responsible for the improved performance.</div>
</front>
</TEI>
<inist>
<standard h6="B">
<pA>
<fA01 i1="01" i2="1">
<s0>0885-6125</s0>
</fA01>
<fA03 i2="1">
<s0>Mach. learn.</s0>
</fA03>
<fA05>
<s2>88</s2>
</fA05>
<fA06>
<s2>1-2</s2>
</fA06>
<fA08 i1="01" i2="1" l="ENG">
<s1>Multilabel classification with meta-level features in a learning-to-rank framework</s1>
</fA08>
<fA09 i1="01" i2="1" l="ENG">
<s1>Special Issue on Learning from Multi-Label Data</s1>
</fA09>
<fA11 i1="01" i2="1">
<s1>YIMING YANG</s1>
</fA11>
<fA11 i1="02" i2="1">
<s1>GOPAL (Siddharth)</s1>
</fA11>
<fA12 i1="01" i2="1">
<s1>TSOUMAKAS (Grigorios)</s1>
<s9>ed.</s9>
</fA12>
<fA12 i1="02" i2="1">
<s1>ZHANG (Min-Ling)</s1>
<s9>ed.</s9>
</fA12>
<fA12 i1="03" i2="1">
<s1>ZHOU (Zhi-Hua)</s1>
<s9>ed.</s9>
</fA12>
<fA14 i1="01">
<s1>Language Technologies Institute &amp; Machine Learning Department, Carnegie Mellon University</s1>
<s2>Pittsburgh</s2>
<s3>USA</s3>
<sZ>1 aut.</sZ>
</fA14>
<fA14 i1="02">
<s1>Language Technologies Institute, Carnegie Mellon University</s1>
<s2>Pittsburgh</s2>
<s3>USA</s3>
<sZ>2 aut.</sZ>
</fA14>
<fA15 i1="01">
<s1>Department of Informatics, Aristotle University of Thessaloniki</s1>
<s2>54124 Thessaloniki</s2>
<s3>GRC</s3>
<sZ>1 aut.</sZ>
</fA15>
<fA15 i1="02">
<s1>MOE Key Laboratory of Computer Network and Information Integration, School of Computer Science and Engineering, Southeast University, 2 Sipailou</s1>
<s2>Nanjing 210096</s2>
<s3>CHN</s3>
<sZ>2 aut.</sZ>
</fA15>
<fA15 i1="03">
<s1>National Key Laboratory for Novel Software Technology, Nanjing University, 163 Xianlin Avenue</s1>
<s2>Nanjing 210046</s2>
<s3>CHN</s3>
<sZ>3 aut.</sZ>
</fA15>
<fA20>
<s1>47-68</s1>
</fA20>
<fA21>
<s1>2012</s1>
</fA21>
<fA23 i1="01">
<s0>ENG</s0>
</fA23>
<fA43 i1="01">
<s1>INIST</s1>
<s2>21011</s2>
<s5>354000506629240020</s5>
</fA43>
<fA44>
<s0>0000</s0>
<s1>© 2012 INIST-CNRS. All rights reserved.</s1>
</fA44>
<fA45>
<s0>1 p.3/4</s0>
</fA45>
<fA47 i1="01" i2="1">
<s0>12-0433529</s0>
</fA47>
<fA60>
<s1>P</s1>
</fA60>
<fA61>
<s0>A</s0>
</fA61>
<fA64 i1="01" i2="1">
<s0>Machine learning</s0>
</fA64>
<fA66 i1="01">
<s0>DEU</s0>
</fA66>
<fC01 i1="01" l="ENG">
<s0>Effective learning in multi-label classification (MLC) requires an appropriate level of abstraction for representing the relationship between each instance and multiple categories. Current MLC methods have focused on learning-to-map from instances to categories in a relatively low-level feature space, such as individual words. The fine-grained features in such a space may not be sufficiently expressive for learning to rank categories, which is essential in multi-label classification. This paper presents an alternative solution by transforming the conventional representation of instances and categories into meta-level features, and by leveraging successful learning-to-rank retrieval algorithms over this feature space. Controlled experiments on six benchmark datasets using eight evaluation metrics show strong evidence for the effectiveness of the proposed approach, which significantly outperformed other state-of-the-art methods such as Rank-SVM, ML-kNN (Multilabel kNN), IBLR-ML (Instance-based logistic regression for multi-label classification) on most of the datasets. Thorough analyses are also provided for separating the factors responsible for the improved performance.</s0>
</fC01>
<fC02 i1="01" i2="X">
<s0>001D02C02</s0>
</fC02>
<fC02 i1="02" i2="X">
<s0>001D02B07B</s0>
</fC02>
<fC03 i1="01" i2="X" l="FRE">
<s0>Intelligence artificielle</s0>
<s5>06</s5>
</fC03>
<fC03 i1="01" i2="X" l="ENG">
<s0>Artificial intelligence</s0>
<s5>06</s5>
</fC03>
<fC03 i1="01" i2="X" l="SPA">
<s0>Inteligencia artificial</s0>
<s5>06</s5>
</fC03>
<fC03 i1="02" i2="X" l="FRE">
<s0>Analyse donnée</s0>
<s5>07</s5>
</fC03>
<fC03 i1="02" i2="X" l="ENG">
<s0>Data analysis</s0>
<s5>07</s5>
</fC03>
<fC03 i1="02" i2="X" l="SPA">
<s0>Análisis datos</s0>
<s5>07</s5>
</fC03>
<fC03 i1="03" i2="X" l="FRE">
<s0>Classification hiérarchique</s0>
<s5>18</s5>
</fC03>
<fC03 i1="03" i2="X" l="ENG">
<s0>Hierarchical classification</s0>
<s5>18</s5>
</fC03>
<fC03 i1="03" i2="X" l="SPA">
<s0>Clasificación jerarquizada</s0>
<s5>18</s5>
</fC03>
<fC03 i1="04" i2="X" l="FRE">
<s0>Abstraction</s0>
<s5>19</s5>
</fC03>
<fC03 i1="04" i2="X" l="ENG">
<s0>Abstraction</s0>
<s5>19</s5>
</fC03>
<fC03 i1="04" i2="X" l="SPA">
<s0>Abstracción</s0>
<s5>19</s5>
</fC03>
<fC03 i1="05" i2="X" l="FRE">
<s0>Métrique</s0>
<s5>20</s5>
</fC03>
<fC03 i1="05" i2="X" l="ENG">
<s0>Metric</s0>
<s5>20</s5>
</fC03>
<fC03 i1="05" i2="X" l="SPA">
<s0>Métrico</s0>
<s5>20</s5>
</fC03>
<fC03 i1="06" i2="X" l="FRE">
<s0>Régression logistique</s0>
<s5>21</s5>
</fC03>
<fC03 i1="06" i2="X" l="ENG">
<s0>Logistic regression</s0>
<s5>21</s5>
</fC03>
<fC03 i1="06" i2="X" l="SPA">
<s0>Regresión logística</s0>
<s5>21</s5>
</fC03>
<fC03 i1="07" i2="X" l="FRE">
<s0>Métamodèle</s0>
<s5>23</s5>
</fC03>
<fC03 i1="07" i2="X" l="ENG">
<s0>Metamodel</s0>
<s5>23</s5>
</fC03>
<fC03 i1="07" i2="X" l="SPA">
<s0>Metamodelo</s0>
<s5>23</s5>
</fC03>
<fC03 i1="08" i2="X" l="FRE">
<s0>Algorithme apprentissage</s0>
<s5>24</s5>
</fC03>
<fC03 i1="08" i2="X" l="ENG">
<s0>Learning algorithm</s0>
<s5>24</s5>
</fC03>
<fC03 i1="08" i2="X" l="SPA">
<s0>Algoritmo aprendizaje</s0>
<s5>24</s5>
</fC03>
<fC03 i1="09" i2="X" l="FRE">
<s0>Structure grain fin</s0>
<s5>25</s5>
</fC03>
<fC03 i1="09" i2="X" l="ENG">
<s0>Fine grain structure</s0>
<s5>25</s5>
</fC03>
<fC03 i1="09" i2="X" l="SPA">
<s0>Estructura grano fino</s0>
<s5>25</s5>
</fC03>
<fC03 i1="10" i2="X" l="FRE">
<s0>Efficacité</s0>
<s5>26</s5>
</fC03>
<fC03 i1="10" i2="X" l="ENG">
<s0>Efficiency</s0>
<s5>26</s5>
</fC03>
<fC03 i1="10" i2="X" l="SPA">
<s0>Eficacia</s0>
<s5>26</s5>
</fC03>
<fC03 i1="11" i2="X" l="FRE">
<s0>Classification multi-étiquettes</s0>
<s4>CD</s4>
<s5>96</s5>
</fC03>
<fC03 i1="11" i2="X" l="ENG">
<s0>Multi-label classification</s0>
<s4>CD</s4>
<s5>96</s5>
</fC03>
<fC03 i1="11" i2="X" l="SPA">
<s0>Clasificación multi-etiqueta</s0>
<s4>CD</s4>
<s5>96</s5>
</fC03>
<fN21>
<s1>338</s1>
</fN21>
<fN44 i1="01">
<s1>OTO</s1>
</fN44>
<fN82>
<s1>OTO</s1>
</fN82>
</pA>
</standard>
</inist>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Wicri/Amérique/explor/PittsburghV1/Data/PascalFrancis/Curation
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 001A91 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/PascalFrancis/Curation/biblio.hfd -nk 001A91 | SxmlIndent | more

To place a link to this page in the Wicri network

{{Explor lien
   |wiki=    Wicri/Amérique
   |area=    PittsburghV1
   |flux=    PascalFrancis
   |étape=   Curation
   |type=    RBID
   |clé=     Pascal:12-0433529
   |texte=   Multilabel classification with meta-level features in a learning-to-rank framework
}}

Wicri

This area was generated with Dilib version V0.6.38.
Data generation: Fri Jun 18 17:37:45 2021. Site generation: Fri Jun 18 18:15:47 2021