Exploration server on relations between France and Australia

Warning: this site is under development!
Warning: this site is generated automatically from raw corpora.
The information is therefore not validated.

Local scene flow by tracking in intensity and depth

Internal identifier: 005A57 (PascalFrancis/Curation); previous: 005A56; next: 005A58

Local scene flow by tracking in intensity and depth

Authors: Julian Quiroga [France, Colombia]; Frédéric Devernay [France]; James Crowley [France]

Source:

RBID : Pascal:14-0147828

French descriptors

English descriptors

Abstract

The scene flow describes the motion of each 3D point between two time steps. With the arrival of new depth sensors, as the Microsoft Kinect, it is now possible to compute scene flow with a single camera, with promising repercussion in a wide range of computer vision scenarios. We propose a novel method to compute a local scene flow by tracking in a Lucas-Kanade framework. Scene flow is estimated using a pair of aligned intensity and depth images but rather than computing a dense scene flow as in most previous methods, we get a set of 3D motion vectors by tracking surface patches. Assuming a 3D local rigidity of the scene, we propose a rigid translation flow model that allows solving directly for the scene flow by constraining the 3D motion field both in intensity and depth data. In our experimentation we achieve very encouraging results. Since this approach solves simultaneously for the 2D tracking and for the scene flow, it can be used for motion analysis in existing 2D tracking based methods or to define scene flow descriptors.
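The abstract's idea of solving for a per-patch 3D motion by jointly constraining intensity and depth can be sketched as a single linearized least-squares step. This is only an illustrative approximation, not the authors' implementation: the function name, patch size, weighting, and the simplified constraints (brightness constancy for the 2D flow, plus a depth-change term) are assumptions.

```python
import numpy as np

def local_scene_flow(I0, I1, Z0, Z1, x, y, half=7, lam=1.0):
    """Estimate a per-patch motion (u, v, w): 2D image flow (u, v) in
    pixels and depth change w, by jointly solving the linearized
    constraints  Ix*u + Iy*v + It = 0  (brightness constancy) and
    Zx*u + Zy*v + Zt = w  (depth change) over a square patch.
    Illustrative sketch only; not the paper's exact formulation."""
    ys = slice(y - half, y + half + 1)
    xs = slice(x - half, x + half + 1)
    # finite-difference spatial gradients on the first frame
    Iy_g, Ix_g = np.gradient(I0)
    Zy_g, Zx_g = np.gradient(Z0)
    It = (I1 - I0)[ys, xs].ravel()      # temporal intensity difference
    Zt = (Z1 - Z0)[ys, xs].ravel()      # temporal depth difference
    Ix = Ix_g[ys, xs].ravel(); Iy = Iy_g[ys, xs].ravel()
    Zx = Zx_g[ys, xs].ravel(); Zy = Zy_g[ys, xs].ravel()
    n = It.size
    # intensity rows constrain (u, v); depth rows also involve w,
    # with lam weighting depth against intensity
    A = np.vstack([
        np.column_stack([Ix, Iy, np.zeros(n)]),
        lam * np.column_stack([Zx, Zy, -np.ones(n)]),
    ])
    b = np.concatenate([-It, -lam * Zt])
    (u, v, w), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v, w
```

In the paper this kind of step would be iterated in a coarse-to-fine Lucas-Kanade loop and the recovered (u, v, w) back-projected to a 3D translation using the camera intrinsics; the one-shot solve above only shows the joint intensity-depth constraint.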
pA  
A01 01  1    @0 1047-3203
A03   1    @0 J. vis. commun. image represent.
A05       @2 25
A06       @2 1
A08 01  1  ENG  @1 Local scene flow by tracking in intensity and depth
A09 01  1  ENG  @1 Special Issue on Visual Understanding and Applications with RGB-D Cameras
A11 01  1    @1 QUIROGA (Julian)
A11 02  1    @1 DEVERNAY (Frédéric)
A11 03  1    @1 CROWLEY (James)
A12 01  1    @1 BEETZ (Michael) @9 ed.
A12 02  1    @1 CREMERS (Daniel) @9 ed.
A12 03  1    @1 GALL (Juergen) @9 ed.
A12 04  1    @1 LI (Wanqing) @9 ed.
A12 05  1    @1 LIU (Zicheng) @9 ed.
A12 06  1    @1 PANGERCIC (Dejan) @9 ed.
A12 07  1    @1 STURM (Juergen) @9 ed.
A12 08  1    @1 TAI (Yu-Wing) @9 ed.
A14 01      @1 INRIA Grenoble Rhone-Alpes, 655 avenue de l'Europe @2 38334 Saint Ismier @3 FRA @Z 1 aut. @Z 2 aut. @Z 3 aut.
A14 02      @1 Departamento de Electrónica, Pontificia Universidad Javeriana @2 Bogotá @3 COL @Z 1 aut.
A15 01      @1 University of Bremen @2 Bremen @3 DEU @Z 1 aut.
A15 02      @1 Technical University of Munich @2 Munich @3 DEU @Z 2 aut. @Z 7 aut.
A15 03      @1 University of Bonn @2 Bonn @3 DEU @Z 3 aut.
A15 04      @1 University of Wollongong @2 Wollongong @3 AUS @Z 4 aut.
A15 05      @1 Microsoft Research @2 Redmond @3 USA @Z 5 aut.
A15 06      @1 Bosch Research @2 Palo Alto @3 USA @Z 6 aut.
A15 07      @1 Korea Advanced Institute of Science and Technology @2 Daejeon @3 KOR @Z 8 aut.
A20       @1 98-107
A21       @1 2014
A23 01      @0 ENG
A43 01      @1 INIST @2 28026 @5 354000506131090090
A44       @0 0000 @1 © 2014 INIST-CNRS. All rights reserved.
A45       @0 41 ref.
A47 01  1    @0 14-0147828
A60       @1 P
A61       @0 A
A64 01  1    @0 Journal of visual communication and image representation
A66 01      @0 NLD
C01 01    ENG  @0 The scene flow describes the motion of each 3D point between two time steps. With the arrival of new depth sensors, as the Microsoft Kinect, it is now possible to compute scene flow with a single camera, with promising repercussion in a wide range of computer vision scenarios. We propose a novel method to compute a local scene flow by tracking in a Lucas-Kanade framework. Scene flow is estimated using a pair of aligned intensity and depth images but rather than computing a dense scene flow as in most previous methods, we get a set of 3D motion vectors by tracking surface patches. Assuming a 3D local rigidity of the scene, we propose a rigid translation flow model that allows solving directly for the scene flow by constraining the 3D motion field both in intensity and depth data. In our experimentation we achieve very encouraging results. Since this approach solves simultaneously for the 2D tracking and for the scene flow, it can be used for motion analysis in existing 2D tracking based methods or to define scene flow descriptors.
C02 01  X    @0 001D02C03
C03 01  X  FRE  @0 Vision ordinateur @5 06
C03 01  X  ENG  @0 Computer vision @5 06
C03 01  X  SPA  @0 Visión ordenador @5 06
C03 02  X  FRE  @0 Pistage @5 07
C03 02  X  ENG  @0 Tracking @5 07
C03 02  X  SPA  @0 Rastreo @5 07
C03 03  X  FRE  @0 Estimation mouvement @5 08
C03 03  X  ENG  @0 Motion estimation @5 08
C03 03  X  SPA  @0 Estimación movimiento @5 08
C03 04  X  FRE  @0 Flux optique @5 09
C03 04  X  ENG  @0 Optical flow @5 09
C03 04  X  SPA  @0 Flujo óptico @5 09
C03 05  X  FRE  @0 Analyse mouvement @5 10
C03 05  X  ENG  @0 Motion analysis @5 10
C03 05  X  SPA  @0 Análisis movimiento @5 10
C03 06  X  FRE  @0 Recalage image @5 11
C03 06  X  ENG  @0 Image registration @5 11
C03 06  X  SPA  @0 Registro imagen @5 11
C03 07  X  FRE  @0 Brillance @5 18
C03 07  X  ENG  @0 Brightness @5 18
C03 07  X  SPA  @0 Brillantez @5 18
C03 08  X  FRE  @0 Modélisation @5 23
C03 08  X  ENG  @0 Modeling @5 23
C03 08  X  SPA  @0 Modelización @5 23
C03 09  X  FRE  @0 . @4 INC @5 82
C03 10  X  FRE  @0 Flot de scène @4 CD @5 96
C03 10  X  ENG  @0 Scene flow @4 CD @5 96
C03 10  X  SPA  @0 flujo de escena @4 CD @5 96
C03 11  X  FRE  @0 Interface naturelle @4 CD @5 97
C03 11  X  ENG  @0 Natural interface @4 CD @5 97
C03 11  X  SPA  @0 Interfase natural @4 CD @5 97
C03 12  X  FRE  @0 Caméra vidéo @4 CD @5 98
C03 12  X  ENG  @0 Video cameras @4 CD @5 98
C03 12  X  SPA  @0 Cámara de vídeo @4 CD @5 98
N21       @1 188
N44 01      @1 OTO
N82       @1 OTO

Links to previous steps (curation, corpus, ...)


Links to Exploration step

Pascal:14-0147828

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en" level="a">Local scene flow by tracking in intensity and depth</title>
<author>
<name sortKey="Quiroga, Julian" sort="Quiroga, Julian" uniqKey="Quiroga J" first="Julian" last="Quiroga">Julian Quiroga</name>
<affiliation wicri:level="1">
<inist:fA14 i1="01">
<s1>INRIA Grenoble Rhone-Alpes, 655 avenue de l'Europe</s1>
<s2>38334 Saint Ismier</s2>
<s3>FRA</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
<sZ>3 aut.</sZ>
</inist:fA14>
<country>France</country>
</affiliation>
<affiliation wicri:level="1">
<inist:fA14 i1="02">
<s1>Departamento de Electrónica, Pontificia Universidad Javeriana</s1>
<s2>Bogotá</s2>
<s3>COL</s3>
<sZ>1 aut.</sZ>
</inist:fA14>
<country>Colombie</country>
</affiliation>
</author>
<author>
<name sortKey="Devernay, Frederic" sort="Devernay, Frederic" uniqKey="Devernay F" first="Frédéric" last="Devernay">Frédéric Devernay</name>
<affiliation wicri:level="1">
<inist:fA14 i1="01">
<s1>INRIA Grenoble Rhone-Alpes, 655 avenue de l'Europe</s1>
<s2>38334 Saint Ismier</s2>
<s3>FRA</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
<sZ>3 aut.</sZ>
</inist:fA14>
<country>France</country>
</affiliation>
</author>
<author>
<name sortKey="Crowley, James" sort="Crowley, James" uniqKey="Crowley J" first="James" last="Crowley">James Crowley</name>
<affiliation wicri:level="1">
<inist:fA14 i1="01">
<s1>INRIA Grenoble Rhone-Alpes, 655 avenue de l'Europe</s1>
<s2>38334 Saint Ismier</s2>
<s3>FRA</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
<sZ>3 aut.</sZ>
</inist:fA14>
<country>France</country>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">INIST</idno>
<idno type="inist">14-0147828</idno>
<date when="2014">2014</date>
<idno type="stanalyst">PASCAL 14-0147828 INIST</idno>
<idno type="RBID">Pascal:14-0147828</idno>
<idno type="wicri:Area/PascalFrancis/Corpus">000405</idno>
<idno type="wicri:Area/PascalFrancis/Curation">005A57</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a">Local scene flow by tracking in intensity and depth</title>
<author>
<name sortKey="Quiroga, Julian" sort="Quiroga, Julian" uniqKey="Quiroga J" first="Julian" last="Quiroga">Julian Quiroga</name>
<affiliation wicri:level="1">
<inist:fA14 i1="01">
<s1>INRIA Grenoble Rhone-Alpes, 655 avenue de l'Europe</s1>
<s2>38334 Saint Ismier</s2>
<s3>FRA</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
<sZ>3 aut.</sZ>
</inist:fA14>
<country>France</country>
</affiliation>
<affiliation wicri:level="1">
<inist:fA14 i1="02">
<s1>Departamento de Electrónica, Pontificia Universidad Javeriana</s1>
<s2>Bogotá</s2>
<s3>COL</s3>
<sZ>1 aut.</sZ>
</inist:fA14>
<country>Colombie</country>
</affiliation>
</author>
<author>
<name sortKey="Devernay, Frederic" sort="Devernay, Frederic" uniqKey="Devernay F" first="Frédéric" last="Devernay">Frédéric Devernay</name>
<affiliation wicri:level="1">
<inist:fA14 i1="01">
<s1>INRIA Grenoble Rhone-Alpes, 655 avenue de l'Europe</s1>
<s2>38334 Saint Ismier</s2>
<s3>FRA</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
<sZ>3 aut.</sZ>
</inist:fA14>
<country>France</country>
</affiliation>
</author>
<author>
<name sortKey="Crowley, James" sort="Crowley, James" uniqKey="Crowley J" first="James" last="Crowley">James Crowley</name>
<affiliation wicri:level="1">
<inist:fA14 i1="01">
<s1>INRIA Grenoble Rhone-Alpes, 655 avenue de l'Europe</s1>
<s2>38334 Saint Ismier</s2>
<s3>FRA</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
<sZ>3 aut.</sZ>
</inist:fA14>
<country>France</country>
</affiliation>
</author>
</analytic>
<series>
<title level="j" type="main">Journal of visual communication and image representation</title>
<title level="j" type="abbreviated">J. vis. commun. image represent.</title>
<idno type="ISSN">1047-3203</idno>
<imprint>
<date when="2014">2014</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
<seriesStmt>
<title level="j" type="main">Journal of visual communication and image representation</title>
<title level="j" type="abbreviated">J. vis. commun. image represent.</title>
<idno type="ISSN">1047-3203</idno>
</seriesStmt>
</fileDesc>
<profileDesc>
<textClass>
<keywords scheme="KwdEn" xml:lang="en">
<term>Brightness</term>
<term>Computer vision</term>
<term>Image registration</term>
<term>Modeling</term>
<term>Motion analysis</term>
<term>Motion estimation</term>
<term>Natural interface</term>
<term>Optical flow</term>
<term>Scene flow</term>
<term>Tracking</term>
<term>Video cameras</term>
</keywords>
<keywords scheme="Pascal" xml:lang="fr">
<term>Vision ordinateur</term>
<term>Pistage</term>
<term>Estimation mouvement</term>
<term>Flux optique</term>
<term>Analyse mouvement</term>
<term>Recalage image</term>
<term>Brillance</term>
<term>Modélisation</term>
<term>.</term>
<term>Flot de scène</term>
<term>Interface naturelle</term>
<term>Caméra vidéo</term>
</keywords>
</textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">The scene flow describes the motion of each 3D point between two time steps. With the arrival of new depth sensors, as the Microsoft Kinect, it is now possible to compute scene flow with a single camera, with promising repercussion in a wide range of computer vision scenarios. We propose a novel method to compute a local scene flow by tracking in a Lucas-Kanade framework. Scene flow is estimated using a pair of aligned intensity and depth images but rather than computing a dense scene flow as in most previous methods, we get a set of 3D motion vectors by tracking surface patches. Assuming a 3D local rigidity of the scene, we propose a rigid translation flow model that allows solving directly for the scene flow by constraining the 3D motion field both in intensity and depth data. In our experimentation we achieve very encouraging results. Since this approach solves simultaneously for the 2D tracking and for the scene flow, it can be used for motion analysis in existing 2D tracking based methods or to define scene flow descriptors.</div>
</front>
</TEI>
<inist>
<standard h6="B">
<pA>
<fA01 i1="01" i2="1">
<s0>1047-3203</s0>
</fA01>
<fA03 i2="1">
<s0>J. vis. commun. image represent.</s0>
</fA03>
<fA05>
<s2>25</s2>
</fA05>
<fA06>
<s2>1</s2>
</fA06>
<fA08 i1="01" i2="1" l="ENG">
<s1>Local scene flow by tracking in intensity and depth</s1>
</fA08>
<fA09 i1="01" i2="1" l="ENG">
<s1>Special Issue on Visual Understanding and Applications with RGB-D Cameras</s1>
</fA09>
<fA11 i1="01" i2="1">
<s1>QUIROGA (Julian)</s1>
</fA11>
<fA11 i1="02" i2="1">
<s1>DEVERNAY (Frédéric)</s1>
</fA11>
<fA11 i1="03" i2="1">
<s1>CROWLEY (James)</s1>
</fA11>
<fA12 i1="01" i2="1">
<s1>BEETZ (Michael)</s1>
<s9>ed.</s9>
</fA12>
<fA12 i1="02" i2="1">
<s1>CREMERS (Daniel)</s1>
<s9>ed.</s9>
</fA12>
<fA12 i1="03" i2="1">
<s1>GALL (Juergen)</s1>
<s9>ed.</s9>
</fA12>
<fA12 i1="04" i2="1">
<s1>LI (Wanqing)</s1>
<s9>ed.</s9>
</fA12>
<fA12 i1="05" i2="1">
<s1>LIU (Zicheng)</s1>
<s9>ed.</s9>
</fA12>
<fA12 i1="06" i2="1">
<s1>PANGERCIC (Dejan)</s1>
<s9>ed.</s9>
</fA12>
<fA12 i1="07" i2="1">
<s1>STURM (Juergen)</s1>
<s9>ed.</s9>
</fA12>
<fA12 i1="08" i2="1">
<s1>TAI (Yu-Wing)</s1>
<s9>ed.</s9>
</fA12>
<fA14 i1="01">
<s1>INRIA Grenoble Rhone-Alpes, 655 avenue de l'Europe</s1>
<s2>38334 Saint Ismier</s2>
<s3>FRA</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
<sZ>3 aut.</sZ>
</fA14>
<fA14 i1="02">
<s1>Departamento de Electrónica, Pontificia Universidad Javeriana</s1>
<s2>Bogotá</s2>
<s3>COL</s3>
<sZ>1 aut.</sZ>
</fA14>
<fA15 i1="01">
<s1>University of Bremen</s1>
<s2>Bremen</s2>
<s3>DEU</s3>
<sZ>1 aut.</sZ>
</fA15>
<fA15 i1="02">
<s1>Technical University of Munich</s1>
<s2>Munich</s2>
<s3>DEU</s3>
<sZ>2 aut.</sZ>
<sZ>7 aut.</sZ>
</fA15>
<fA15 i1="03">
<s1>University of Bonn</s1>
<s2>Bonn</s2>
<s3>DEU</s3>
<sZ>3 aut.</sZ>
</fA15>
<fA15 i1="04">
<s1>University of Wollongong</s1>
<s2>Wollongong</s2>
<s3>AUS</s3>
<sZ>4 aut.</sZ>
</fA15>
<fA15 i1="05">
<s1>Microsoft Research</s1>
<s2>Redmond</s2>
<s3>USA</s3>
<sZ>5 aut.</sZ>
</fA15>
<fA15 i1="06">
<s1>Bosch Research</s1>
<s2>Palo Alto</s2>
<s3>USA</s3>
<sZ>6 aut.</sZ>
</fA15>
<fA15 i1="07">
<s1>Korea Advanced Institute of Science and Technology</s1>
<s2>Daejeon</s2>
<s3>KOR</s3>
<sZ>8 aut.</sZ>
</fA15>
<fA20>
<s1>98-107</s1>
</fA20>
<fA21>
<s1>2014</s1>
</fA21>
<fA23 i1="01">
<s0>ENG</s0>
</fA23>
<fA43 i1="01">
<s1>INIST</s1>
<s2>28026</s2>
<s5>354000506131090090</s5>
</fA43>
<fA44>
<s0>0000</s0>
<s1>© 2014 INIST-CNRS. All rights reserved.</s1>
</fA44>
<fA45>
<s0>41 ref.</s0>
</fA45>
<fA47 i1="01" i2="1">
<s0>14-0147828</s0>
</fA47>
<fA60>
<s1>P</s1>
</fA60>
<fA61>
<s0>A</s0>
</fA61>
<fA64 i1="01" i2="1">
<s0>Journal of visual communication and image representation</s0>
</fA64>
<fA66 i1="01">
<s0>NLD</s0>
</fA66>
<fC01 i1="01" l="ENG">
<s0>The scene flow describes the motion of each 3D point between two time steps. With the arrival of new depth sensors, as the Microsoft Kinect, it is now possible to compute scene flow with a single camera, with promising repercussion in a wide range of computer vision scenarios. We propose a novel method to compute a local scene flow by tracking in a Lucas-Kanade framework. Scene flow is estimated using a pair of aligned intensity and depth images but rather than computing a dense scene flow as in most previous methods, we get a set of 3D motion vectors by tracking surface patches. Assuming a 3D local rigidity of the scene, we propose a rigid translation flow model that allows solving directly for the scene flow by constraining the 3D motion field both in intensity and depth data. In our experimentation we achieve very encouraging results. Since this approach solves simultaneously for the 2D tracking and for the scene flow, it can be used for motion analysis in existing 2D tracking based methods or to define scene flow descriptors.</s0>
</fC01>
<fC02 i1="01" i2="X">
<s0>001D02C03</s0>
</fC02>
<fC03 i1="01" i2="X" l="FRE">
<s0>Vision ordinateur</s0>
<s5>06</s5>
</fC03>
<fC03 i1="01" i2="X" l="ENG">
<s0>Computer vision</s0>
<s5>06</s5>
</fC03>
<fC03 i1="01" i2="X" l="SPA">
<s0>Visión ordenador</s0>
<s5>06</s5>
</fC03>
<fC03 i1="02" i2="X" l="FRE">
<s0>Pistage</s0>
<s5>07</s5>
</fC03>
<fC03 i1="02" i2="X" l="ENG">
<s0>Tracking</s0>
<s5>07</s5>
</fC03>
<fC03 i1="02" i2="X" l="SPA">
<s0>Rastreo</s0>
<s5>07</s5>
</fC03>
<fC03 i1="03" i2="X" l="FRE">
<s0>Estimation mouvement</s0>
<s5>08</s5>
</fC03>
<fC03 i1="03" i2="X" l="ENG">
<s0>Motion estimation</s0>
<s5>08</s5>
</fC03>
<fC03 i1="03" i2="X" l="SPA">
<s0>Estimación movimiento</s0>
<s5>08</s5>
</fC03>
<fC03 i1="04" i2="X" l="FRE">
<s0>Flux optique</s0>
<s5>09</s5>
</fC03>
<fC03 i1="04" i2="X" l="ENG">
<s0>Optical flow</s0>
<s5>09</s5>
</fC03>
<fC03 i1="04" i2="X" l="SPA">
<s0>Flujo óptico</s0>
<s5>09</s5>
</fC03>
<fC03 i1="05" i2="X" l="FRE">
<s0>Analyse mouvement</s0>
<s5>10</s5>
</fC03>
<fC03 i1="05" i2="X" l="ENG">
<s0>Motion analysis</s0>
<s5>10</s5>
</fC03>
<fC03 i1="05" i2="X" l="SPA">
<s0>Análisis movimiento</s0>
<s5>10</s5>
</fC03>
<fC03 i1="06" i2="X" l="FRE">
<s0>Recalage image</s0>
<s5>11</s5>
</fC03>
<fC03 i1="06" i2="X" l="ENG">
<s0>Image registration</s0>
<s5>11</s5>
</fC03>
<fC03 i1="06" i2="X" l="SPA">
<s0>Registro imagen</s0>
<s5>11</s5>
</fC03>
<fC03 i1="07" i2="X" l="FRE">
<s0>Brillance</s0>
<s5>18</s5>
</fC03>
<fC03 i1="07" i2="X" l="ENG">
<s0>Brightness</s0>
<s5>18</s5>
</fC03>
<fC03 i1="07" i2="X" l="SPA">
<s0>Brillantez</s0>
<s5>18</s5>
</fC03>
<fC03 i1="08" i2="X" l="FRE">
<s0>Modélisation</s0>
<s5>23</s5>
</fC03>
<fC03 i1="08" i2="X" l="ENG">
<s0>Modeling</s0>
<s5>23</s5>
</fC03>
<fC03 i1="08" i2="X" l="SPA">
<s0>Modelización</s0>
<s5>23</s5>
</fC03>
<fC03 i1="09" i2="X" l="FRE">
<s0>.</s0>
<s4>INC</s4>
<s5>82</s5>
</fC03>
<fC03 i1="10" i2="X" l="FRE">
<s0>Flot de scène</s0>
<s4>CD</s4>
<s5>96</s5>
</fC03>
<fC03 i1="10" i2="X" l="ENG">
<s0>Scene flow</s0>
<s4>CD</s4>
<s5>96</s5>
</fC03>
<fC03 i1="10" i2="X" l="SPA">
<s0>flujo de escena</s0>
<s4>CD</s4>
<s5>96</s5>
</fC03>
<fC03 i1="11" i2="X" l="FRE">
<s0>Interface naturelle</s0>
<s4>CD</s4>
<s5>97</s5>
</fC03>
<fC03 i1="11" i2="X" l="ENG">
<s0>Natural interface</s0>
<s4>CD</s4>
<s5>97</s5>
</fC03>
<fC03 i1="11" i2="X" l="SPA">
<s0>Interfase natural</s0>
<s4>CD</s4>
<s5>97</s5>
</fC03>
<fC03 i1="12" i2="X" l="FRE">
<s0>Caméra vidéo</s0>
<s4>CD</s4>
<s5>98</s5>
</fC03>
<fC03 i1="12" i2="X" l="ENG">
<s0>Video cameras</s0>
<s4>CD</s4>
<s5>98</s5>
</fC03>
<fC03 i1="12" i2="X" l="SPA">
<s0>Cámara de vídeo</s0>
<s4>CD</s4>
<s5>98</s5>
</fC03>
<fN21>
<s1>188</s1>
</fN21>
<fN44 i1="01">
<s1>OTO</s1>
</fN44>
<fN82>
<s1>OTO</s1>
</fN82>
</pA>
</standard>
</inist>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Wicri/Asie/explor/AustralieFrV1/Data/PascalFrancis/Curation
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 005A57 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/PascalFrancis/Curation/biblio.hfd -nk 005A57 | SxmlIndent | more

To link to this page within the Wicri network

{{Explor lien
   |wiki=    Wicri/Asie
   |area=    AustralieFrV1
   |flux=    PascalFrancis
   |étape=   Curation
   |type=    RBID
   |clé=     Pascal:14-0147828
   |texte=   Local scene flow by tracking in intensity and depth
}}

Wicri

This area was generated with Dilib version V0.6.33.
Data generation: Tue Dec 5 10:43:12 2017. Site generation: Tue Mar 5 14:07:20 2024