Local scene flow by tracking in intensity and depth
Internal identifier: 000405 (PascalFrancis/Corpus); previous: 000404; next: 000406
Local scene flow by tracking in intensity and depth
Authors: Julian Quiroga; Frédéric Devernay; James Crowley
Source:
- Journal of visual communication and image representation [ISSN 1047-3203]; 2014.
French descriptors
- Pascal (Inist)
English descriptors
- KwdEn:
Abstract
Scene flow describes the motion of each 3D point between two time steps. With the arrival of new depth sensors such as the Microsoft Kinect, it is now possible to compute scene flow with a single camera, with promising repercussions in a wide range of computer vision scenarios. We propose a novel method to compute a local scene flow by tracking in a Lucas-Kanade framework. Scene flow is estimated from a pair of aligned intensity and depth images, but rather than computing a dense scene flow as in most previous methods, we obtain a set of 3D motion vectors by tracking surface patches. Assuming local 3D rigidity of the scene, we propose a rigid translation flow model that allows solving directly for the scene flow by constraining the 3D motion field in both the intensity and the depth data. In our experiments we achieve very encouraging results. Since this approach solves simultaneously for the 2D tracking and the scene flow, it can be used for motion analysis in existing 2D-tracking-based methods or to define scene flow descriptors.
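The constraint structure sketched in the abstract can be illustrated with a small, hypothetical example. The code below is not the authors' implementation: it is a single linearized least-squares step for one patch, under assumptions chosen for illustration (a pinhole camera with the principal point at the patch center, a focal length `f`, small motion). A 3D translation V = (vx, vy, vz) induces the pixel motion u = (f·vx − x·vz)/Z, v = (f·vy − y·vz)/Z, which is substituted into both the intensity-constancy and the depth-constancy constraints, the latter augmented by vz since the depth itself changes.

```python
import numpy as np

def lk_rigid_translation(I0, I1, Z0, Z1, f=500.0):
    """Estimate a 3D translation V = (vx, vy, vz) for one patch from a pair
    of aligned intensity (I0, I1) and depth (Z0, Z1) images.

    One linearized least-squares step:
      intensity constancy:  Ix*u + Iy*v + It = 0
      depth constancy:      Zx*u + Zy*v + Zt = vz
    with u = (f*vx - x*vz)/Z and v = (f*vy - y*vz)/Z.
    """
    h, w = I0.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    x, y = xs - w / 2.0, ys - h / 2.0          # principal point at the center
    Z = Z0
    Ix, Iy = np.gradient(I0, axis=1), np.gradient(I0, axis=0)
    Zx, Zy = np.gradient(Z0, axis=1), np.gradient(Z0, axis=0)
    It, Zt = I1 - I0, Z1 - Z0
    # Intensity rows: coefficients of (vx, vy, vz) after substituting (u, v)
    A_i = np.stack([Ix * f / Z, Iy * f / Z,
                    -(Ix * x + Iy * y) / Z], axis=-1).reshape(-1, 3)
    # Depth rows: same flow model, with an extra -1 on vz (depth itself changes)
    A_z = np.stack([Zx * f / Z, Zy * f / Z,
                    -(Zx * x + Zy * y) / Z - 1.0], axis=-1).reshape(-1, 3)
    A = np.vstack([A_i, A_z])
    b = -np.concatenate([It.reshape(-1), Zt.reshape(-1)])
    V, *_ = np.linalg.lstsq(A, b, rcond=None)
    return V
```

For a fronto-parallel patch at constant depth Z = 1000 translating by vx = 2 (hence a pixel motion of f·vx/Z = 1 pixel), the recovered V is close to (2, 0, 0). In the paper this local solve is embedded in an iterative Lucas-Kanade framework with warping; the one-shot linearization above is only meant to show how intensity and depth jointly constrain the 3D motion.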
Record in standard format (ISO 2709)
See the documentation on the Inist Standard format.
Inist format (server)
NO : PASCAL 14-0147828 INIST
ET : Local scene flow by tracking in intensity and depth
AU : QUIROGA (Julian); DEVERNAY (Frédéric); CROWLEY (James); BEETZ (Michael); CREMERS (Daniel); GALL (Juergen); LI (Wanqing); LIU (Zicheng); PANGERCIC (Dejan); STURM (Juergen); TAI (Yu-Wing)
AF : INRIA Grenoble Rhone-Alpes, 655 avenue de l'Europe/38334 Saint Ismier/France (1 aut., 2 aut., 3 aut.); Departamento de Electrónica, Pontificia Universidad Javeriana/Bogotá/Colombia (1 aut.); University of Bremen/Bremen/Germany (1 aut.); Technical University of Munich/Munich/Germany (2 aut., 7 aut.); University of Bonn/Bonn/Germany (3 aut.); University of Wollongong/Wollongong/Australia (4 aut.); Microsoft Research/Redmond/United States (5 aut.); Bosch Research/Palo Alto/United States (6 aut.); Korea Advanced Institute of Science and Technology/Daejeon/Korea, Republic of (8 aut.)
DT : Serial publication; Analytical level
SO : Journal of visual communication and image representation; ISSN 1047-3203; Netherlands; 2014; Vol. 25; No. 1; pp. 98-107; Bibl. 41 ref.
LA : English
EA : Scene flow describes the motion of each 3D point between two time steps. With the arrival of new depth sensors such as the Microsoft Kinect, it is now possible to compute scene flow with a single camera, with promising repercussions in a wide range of computer vision scenarios. We propose a novel method to compute a local scene flow by tracking in a Lucas-Kanade framework. Scene flow is estimated from a pair of aligned intensity and depth images, but rather than computing a dense scene flow as in most previous methods, we obtain a set of 3D motion vectors by tracking surface patches. Assuming local 3D rigidity of the scene, we propose a rigid translation flow model that allows solving directly for the scene flow by constraining the 3D motion field in both the intensity and the depth data. In our experiments we achieve very encouraging results. Since this approach solves simultaneously for the 2D tracking and the scene flow, it can be used for motion analysis in existing 2D-tracking-based methods or to define scene flow descriptors.
CC : 001D02C03
FD : Vision ordinateur; Pistage; Estimation mouvement; Flux optique; Analyse mouvement; Recalage image; Brillance; Modélisation; .; Flot de scène; Interface naturelle; Caméra vidéo
ED : Computer vision; Tracking; Motion estimation; Optical flow; Motion analysis; Image registration; Brightness; Modeling; Scene flow; Natural interface; Video cameras
SD : Visión ordenador; Rastreo; Estimación movimiento; Flujo óptico; Análisis movimiento; Registro imagen; Brillantez; Modelización; flujo de escena; Interfase natural; Cámara de vídeo
LO : INIST-28026.354000506131090090
ID : 14-0147828
Links to Exploration step
Pascal:14-0147828
The document in XML format
<record><TEI><teiHeader><fileDesc><titleStmt><title xml:lang="en" level="a">Local scene flow by tracking in intensity and depth</title>
<author><name sortKey="Quiroga, Julian" sort="Quiroga, Julian" uniqKey="Quiroga J" first="Julian" last="Quiroga">Julian Quiroga</name>
<affiliation><inist:fA14 i1="01"><s1>INRIA Grenoble Rhone-Alpes, 655 avenue de l'Europe</s1>
<s2>38334 Saint Ismier</s2>
<s3>FRA</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
<sZ>3 aut.</sZ>
</inist:fA14>
</affiliation>
<affiliation><inist:fA14 i1="02"><s1>Departamento de Electrónica, Pontificia Universidad Javeriana</s1>
<s2>Bogotá</s2>
<s3>COL</s3>
<sZ>1 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
<author><name sortKey="Devernay, Frederic" sort="Devernay, Frederic" uniqKey="Devernay F" first="Frédéric" last="Devernay">Frédéric Devernay</name>
<affiliation><inist:fA14 i1="01"><s1>INRIA Grenoble Rhone-Alpes, 655 avenue de l'Europe</s1>
<s2>38334 Saint Ismier</s2>
<s3>FRA</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
<sZ>3 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
<author><name sortKey="Crowley, James" sort="Crowley, James" uniqKey="Crowley J" first="James" last="Crowley">James Crowley</name>
<affiliation><inist:fA14 i1="01"><s1>INRIA Grenoble Rhone-Alpes, 655 avenue de l'Europe</s1>
<s2>38334 Saint Ismier</s2>
<s3>FRA</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
<sZ>3 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
</titleStmt>
<publicationStmt><idno type="wicri:source">INIST</idno>
<idno type="inist">14-0147828</idno>
<date when="2014">2014</date>
<idno type="stanalyst">PASCAL 14-0147828 INIST</idno>
<idno type="RBID">Pascal:14-0147828</idno>
<idno type="wicri:Area/PascalFrancis/Corpus">000405</idno>
</publicationStmt>
<sourceDesc><biblStruct><analytic><title xml:lang="en" level="a">Local scene flow by tracking in intensity and depth</title>
<author><name sortKey="Quiroga, Julian" sort="Quiroga, Julian" uniqKey="Quiroga J" first="Julian" last="Quiroga">Julian Quiroga</name>
<affiliation><inist:fA14 i1="01"><s1>INRIA Grenoble Rhone-Alpes, 655 avenue de l'Europe</s1>
<s2>38334 Saint Ismier</s2>
<s3>FRA</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
<sZ>3 aut.</sZ>
</inist:fA14>
</affiliation>
<affiliation><inist:fA14 i1="02"><s1>Departamento de Electrónica, Pontificia Universidad Javeriana</s1>
<s2>Bogotá</s2>
<s3>COL</s3>
<sZ>1 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
<author><name sortKey="Devernay, Frederic" sort="Devernay, Frederic" uniqKey="Devernay F" first="Frédéric" last="Devernay">Frédéric Devernay</name>
<affiliation><inist:fA14 i1="01"><s1>INRIA Grenoble Rhone-Alpes, 655 avenue de l'Europe</s1>
<s2>38334 Saint Ismier</s2>
<s3>FRA</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
<sZ>3 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
<author><name sortKey="Crowley, James" sort="Crowley, James" uniqKey="Crowley J" first="James" last="Crowley">James Crowley</name>
<affiliation><inist:fA14 i1="01"><s1>INRIA Grenoble Rhone-Alpes, 655 avenue de l'Europe</s1>
<s2>38334 Saint Ismier</s2>
<s3>FRA</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
<sZ>3 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
</analytic>
<series><title level="j" type="main">Journal of visual communication and image representation</title>
<title level="j" type="abbreviated">J. vis. commun. image represent.</title>
<idno type="ISSN">1047-3203</idno>
<imprint><date when="2014">2014</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
<seriesStmt><title level="j" type="main">Journal of visual communication and image representation</title>
<title level="j" type="abbreviated">J. vis. commun. image represent.</title>
<idno type="ISSN">1047-3203</idno>
</seriesStmt>
</fileDesc>
<profileDesc><textClass><keywords scheme="KwdEn" xml:lang="en"><term>Brightness</term>
<term>Computer vision</term>
<term>Image registration</term>
<term>Modeling</term>
<term>Motion analysis</term>
<term>Motion estimation</term>
<term>Natural interface</term>
<term>Optical flow</term>
<term>Scene flow</term>
<term>Tracking</term>
<term>Video cameras</term>
</keywords>
<keywords scheme="Pascal" xml:lang="fr"><term>Vision ordinateur</term>
<term>Pistage</term>
<term>Estimation mouvement</term>
<term>Flux optique</term>
<term>Analyse mouvement</term>
<term>Recalage image</term>
<term>Brillance</term>
<term>Modélisation</term>
<term>.</term>
<term>Flot de scène</term>
<term>Interface naturelle</term>
<term>Caméra vidéo</term>
</keywords>
</textClass>
</profileDesc>
</teiHeader>
<front><div type="abstract" xml:lang="en">Scene flow describes the motion of each 3D point between two time steps. With the arrival of new depth sensors such as the Microsoft Kinect, it is now possible to compute scene flow with a single camera, with promising repercussions in a wide range of computer vision scenarios. We propose a novel method to compute a local scene flow by tracking in a Lucas-Kanade framework. Scene flow is estimated from a pair of aligned intensity and depth images, but rather than computing a dense scene flow as in most previous methods, we obtain a set of 3D motion vectors by tracking surface patches. Assuming local 3D rigidity of the scene, we propose a rigid translation flow model that allows solving directly for the scene flow by constraining the 3D motion field in both the intensity and the depth data. In our experiments we achieve very encouraging results. Since this approach solves simultaneously for the 2D tracking and the scene flow, it can be used for motion analysis in existing 2D-tracking-based methods or to define scene flow descriptors.</div>
</front>
</TEI>
<inist><standard h6="B"><pA><fA01 i1="01" i2="1"><s0>1047-3203</s0>
</fA01>
<fA03 i2="1"><s0>J. vis. commun. image represent.</s0>
</fA03>
<fA05><s2>25</s2>
</fA05>
<fA06><s2>1</s2>
</fA06>
<fA08 i1="01" i2="1" l="ENG"><s1>Local scene flow by tracking in intensity and depth</s1>
</fA08>
<fA09 i1="01" i2="1" l="ENG"><s1>Special Issue on Visual Understanding and Applications with RGB-D Cameras</s1>
</fA09>
<fA11 i1="01" i2="1"><s1>QUIROGA (Julian)</s1>
</fA11>
<fA11 i1="02" i2="1"><s1>DEVERNAY (Frédéric)</s1>
</fA11>
<fA11 i1="03" i2="1"><s1>CROWLEY (James)</s1>
</fA11>
<fA12 i1="01" i2="1"><s1>BEETZ (Michael)</s1>
<s9>ed.</s9>
</fA12>
<fA12 i1="02" i2="1"><s1>CREMERS (Daniel)</s1>
<s9>ed.</s9>
</fA12>
<fA12 i1="03" i2="1"><s1>GALL (Juergen)</s1>
<s9>ed.</s9>
</fA12>
<fA12 i1="04" i2="1"><s1>LI (Wanqing)</s1>
<s9>ed.</s9>
</fA12>
<fA12 i1="05" i2="1"><s1>LIU (Zicheng)</s1>
<s9>ed.</s9>
</fA12>
<fA12 i1="06" i2="1"><s1>PANGERCIC (Dejan)</s1>
<s9>ed.</s9>
</fA12>
<fA12 i1="07" i2="1"><s1>STURM (Juergen)</s1>
<s9>ed.</s9>
</fA12>
<fA12 i1="08" i2="1"><s1>TAI (Yu-Wing)</s1>
<s9>ed.</s9>
</fA12>
<fA14 i1="01"><s1>INRIA Grenoble Rhone-Alpes, 655 avenue de l'Europe</s1>
<s2>38334 Saint Ismier</s2>
<s3>FRA</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
<sZ>3 aut.</sZ>
</fA14>
<fA14 i1="02"><s1>Departamento de Electrónica, Pontificia Universidad Javeriana</s1>
<s2>Bogotá</s2>
<s3>COL</s3>
<sZ>1 aut.</sZ>
</fA14>
<fA15 i1="01"><s1>University of Bremen</s1>
<s2>Bremen</s2>
<s3>DEU</s3>
<sZ>1 aut.</sZ>
</fA15>
<fA15 i1="02"><s1>Technical University of Munich</s1>
<s2>Munich</s2>
<s3>DEU</s3>
<sZ>2 aut.</sZ>
<sZ>7 aut.</sZ>
</fA15>
<fA15 i1="03"><s1>University of Bonn</s1>
<s2>Bonn</s2>
<s3>DEU</s3>
<sZ>3 aut.</sZ>
</fA15>
<fA15 i1="04"><s1>University of Wollongong</s1>
<s2>Wollongong</s2>
<s3>AUS</s3>
<sZ>4 aut.</sZ>
</fA15>
<fA15 i1="05"><s1>Microsoft Research</s1>
<s2>Redmond</s2>
<s3>USA</s3>
<sZ>5 aut.</sZ>
</fA15>
<fA15 i1="06"><s1>Bosch Research</s1>
<s2>Palo Alto</s2>
<s3>USA</s3>
<sZ>6 aut.</sZ>
</fA15>
<fA15 i1="07"><s1>Korea Advanced Institute of Science and Technology</s1>
<s2>Daejeon</s2>
<s3>KOR</s3>
<sZ>8 aut.</sZ>
</fA15>
<fA20><s1>98-107</s1>
</fA20>
<fA21><s1>2014</s1>
</fA21>
<fA23 i1="01"><s0>ENG</s0>
</fA23>
<fA43 i1="01"><s1>INIST</s1>
<s2>28026</s2>
<s5>354000506131090090</s5>
</fA43>
<fA44><s0>0000</s0>
<s1>© 2014 INIST-CNRS. All rights reserved.</s1>
</fA44>
<fA45><s0>41 ref.</s0>
</fA45>
<fA47 i1="01" i2="1"><s0>14-0147828</s0>
</fA47>
<fA60><s1>P</s1>
</fA60>
<fA61><s0>A</s0>
</fA61>
<fA64 i1="01" i2="1"><s0>Journal of visual communication and image representation</s0>
</fA64>
<fA66 i1="01"><s0>NLD</s0>
</fA66>
<fC01 i1="01" l="ENG"><s0>Scene flow describes the motion of each 3D point between two time steps. With the arrival of new depth sensors such as the Microsoft Kinect, it is now possible to compute scene flow with a single camera, with promising repercussions in a wide range of computer vision scenarios. We propose a novel method to compute a local scene flow by tracking in a Lucas-Kanade framework. Scene flow is estimated from a pair of aligned intensity and depth images, but rather than computing a dense scene flow as in most previous methods, we obtain a set of 3D motion vectors by tracking surface patches. Assuming local 3D rigidity of the scene, we propose a rigid translation flow model that allows solving directly for the scene flow by constraining the 3D motion field in both the intensity and the depth data. In our experiments we achieve very encouraging results. Since this approach solves simultaneously for the 2D tracking and the scene flow, it can be used for motion analysis in existing 2D-tracking-based methods or to define scene flow descriptors.</s0>
</fC01>
<fC02 i1="01" i2="X"><s0>001D02C03</s0>
</fC02>
<fC03 i1="01" i2="X" l="FRE"><s0>Vision ordinateur</s0>
<s5>06</s5>
</fC03>
<fC03 i1="01" i2="X" l="ENG"><s0>Computer vision</s0>
<s5>06</s5>
</fC03>
<fC03 i1="01" i2="X" l="SPA"><s0>Visión ordenador</s0>
<s5>06</s5>
</fC03>
<fC03 i1="02" i2="X" l="FRE"><s0>Pistage</s0>
<s5>07</s5>
</fC03>
<fC03 i1="02" i2="X" l="ENG"><s0>Tracking</s0>
<s5>07</s5>
</fC03>
<fC03 i1="02" i2="X" l="SPA"><s0>Rastreo</s0>
<s5>07</s5>
</fC03>
<fC03 i1="03" i2="X" l="FRE"><s0>Estimation mouvement</s0>
<s5>08</s5>
</fC03>
<fC03 i1="03" i2="X" l="ENG"><s0>Motion estimation</s0>
<s5>08</s5>
</fC03>
<fC03 i1="03" i2="X" l="SPA"><s0>Estimación movimiento</s0>
<s5>08</s5>
</fC03>
<fC03 i1="04" i2="X" l="FRE"><s0>Flux optique</s0>
<s5>09</s5>
</fC03>
<fC03 i1="04" i2="X" l="ENG"><s0>Optical flow</s0>
<s5>09</s5>
</fC03>
<fC03 i1="04" i2="X" l="SPA"><s0>Flujo óptico</s0>
<s5>09</s5>
</fC03>
<fC03 i1="05" i2="X" l="FRE"><s0>Analyse mouvement</s0>
<s5>10</s5>
</fC03>
<fC03 i1="05" i2="X" l="ENG"><s0>Motion analysis</s0>
<s5>10</s5>
</fC03>
<fC03 i1="05" i2="X" l="SPA"><s0>Análisis movimiento</s0>
<s5>10</s5>
</fC03>
<fC03 i1="06" i2="X" l="FRE"><s0>Recalage image</s0>
<s5>11</s5>
</fC03>
<fC03 i1="06" i2="X" l="ENG"><s0>Image registration</s0>
<s5>11</s5>
</fC03>
<fC03 i1="06" i2="X" l="SPA"><s0>Registro imagen</s0>
<s5>11</s5>
</fC03>
<fC03 i1="07" i2="X" l="FRE"><s0>Brillance</s0>
<s5>18</s5>
</fC03>
<fC03 i1="07" i2="X" l="ENG"><s0>Brightness</s0>
<s5>18</s5>
</fC03>
<fC03 i1="07" i2="X" l="SPA"><s0>Brillantez</s0>
<s5>18</s5>
</fC03>
<fC03 i1="08" i2="X" l="FRE"><s0>Modélisation</s0>
<s5>23</s5>
</fC03>
<fC03 i1="08" i2="X" l="ENG"><s0>Modeling</s0>
<s5>23</s5>
</fC03>
<fC03 i1="08" i2="X" l="SPA"><s0>Modelización</s0>
<s5>23</s5>
</fC03>
<fC03 i1="09" i2="X" l="FRE"><s0>.</s0>
<s4>INC</s4>
<s5>82</s5>
</fC03>
<fC03 i1="10" i2="X" l="FRE"><s0>Flot de scène</s0>
<s4>CD</s4>
<s5>96</s5>
</fC03>
<fC03 i1="10" i2="X" l="ENG"><s0>Scene flow</s0>
<s4>CD</s4>
<s5>96</s5>
</fC03>
<fC03 i1="10" i2="X" l="SPA"><s0>flujo de escena</s0>
<s4>CD</s4>
<s5>96</s5>
</fC03>
<fC03 i1="11" i2="X" l="FRE"><s0>Interface naturelle</s0>
<s4>CD</s4>
<s5>97</s5>
</fC03>
<fC03 i1="11" i2="X" l="ENG"><s0>Natural interface</s0>
<s4>CD</s4>
<s5>97</s5>
</fC03>
<fC03 i1="11" i2="X" l="SPA"><s0>Interfase natural</s0>
<s4>CD</s4>
<s5>97</s5>
</fC03>
<fC03 i1="12" i2="X" l="FRE"><s0>Caméra vidéo</s0>
<s4>CD</s4>
<s5>98</s5>
</fC03>
<fC03 i1="12" i2="X" l="ENG"><s0>Video cameras</s0>
<s4>CD</s4>
<s5>98</s5>
</fC03>
<fC03 i1="12" i2="X" l="SPA"><s0>Cámara de vídeo</s0>
<s4>CD</s4>
<s5>98</s5>
</fC03>
<fN21><s1>188</s1>
</fN21>
<fN44 i1="01"><s1>OTO</s1>
</fN44>
<fN82><s1>OTO</s1>
</fN82>
</pA>
</standard>
<server><NO>PASCAL 14-0147828 INIST</NO>
<ET>Local scene flow by tracking in intensity and depth</ET>
<AU>QUIROGA (Julian); DEVERNAY (Frédéric); CROWLEY (James); BEETZ (Michael); CREMERS (Daniel); GALL (Juergen); LI (Wanqing); LIU (Zicheng); PANGERCIC (Dejan); STURM (Juergen); TAI (Yu-Wing)</AU>
<AF>INRIA Grenoble Rhone-Alpes, 655 avenue de l'Europe/38334 Saint Ismier/France (1 aut., 2 aut., 3 aut.); Departamento de Electrónica, Pontificia Universidad Javeriana/Bogotá/Colombia (1 aut.); University of Bremen/Bremen/Germany (1 aut.); Technical University of Munich/Munich/Germany (2 aut., 7 aut.); University of Bonn/Bonn/Germany (3 aut.); University of Wollongong/Wollongong/Australia (4 aut.); Microsoft Research/Redmond/United States (5 aut.); Bosch Research/Palo Alto/United States (6 aut.); Korea Advanced Institute of Science and Technology/Daejeon/Korea, Republic of (8 aut.)</AF>
<DT>Serial publication; Analytical level</DT>
<SO>Journal of visual communication and image representation; ISSN 1047-3203; Netherlands; 2014; Vol. 25; No. 1; pp. 98-107; Bibl. 41 ref.</SO>
<LA>English</LA>
<EA>Scene flow describes the motion of each 3D point between two time steps. With the arrival of new depth sensors such as the Microsoft Kinect, it is now possible to compute scene flow with a single camera, with promising repercussions in a wide range of computer vision scenarios. We propose a novel method to compute a local scene flow by tracking in a Lucas-Kanade framework. Scene flow is estimated from a pair of aligned intensity and depth images, but rather than computing a dense scene flow as in most previous methods, we obtain a set of 3D motion vectors by tracking surface patches. Assuming local 3D rigidity of the scene, we propose a rigid translation flow model that allows solving directly for the scene flow by constraining the 3D motion field in both the intensity and the depth data. In our experiments we achieve very encouraging results. Since this approach solves simultaneously for the 2D tracking and the scene flow, it can be used for motion analysis in existing 2D-tracking-based methods or to define scene flow descriptors.</EA>
<CC>001D02C03</CC>
<FD>Vision ordinateur; Pistage; Estimation mouvement; Flux optique; Analyse mouvement; Recalage image; Brillance; Modélisation; .; Flot de scène; Interface naturelle; Caméra vidéo</FD>
<ED>Computer vision; Tracking; Motion estimation; Optical flow; Motion analysis; Image registration; Brightness; Modeling; Scene flow; Natural interface; Video cameras</ED>
<SD>Visión ordenador; Rastreo; Estimación movimiento; Flujo óptico; Análisis movimiento; Registro imagen; Brillantez; Modelización; flujo de escena; Interfase natural; Cámara de vídeo</SD>
<LO>INIST-28026.354000506131090090</LO>
<ID>14-0147828</ID>
</server>
</inist>
</record>
To manipulate this document under Unix (Dilib):
EXPLOR_STEP=$WICRI_ROOT/Wicri/Asie/explor/AustralieFrV1/Data/PascalFrancis/Corpus
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000405 | SxmlIndent | more
Or
HfdSelect -h $EXPLOR_AREA/Data/PascalFrancis/Corpus/biblio.hfd -nk 000405 | SxmlIndent | more
To link to this page from the Wicri network:
{{Explor lien |wiki= Wicri/Asie |area= AustralieFrV1 |flux= PascalFrancis |étape= Corpus |type= RBID |clé= Pascal:14-0147828 |texte= Local scene flow by tracking in intensity and depth }}
This area was generated with Dilib version V0.6.33.