Vision assisted control for manipulation using virtual fixtures: Experiments at macro and micro scales
Internal identifier:
001030 (PascalFrancis/Corpus);
previous:
001029;
next:
001031
Authors: A. Bettini;
S. Lang;
A. Okamura;
G. Hager
Source:
Proceedings - IEEE International Conference on Robotics and Automation [1050-4729]; 2002.
RBID : Pascal:04-0205417
French descriptors
- Pascal (Inist)
- Système coopératif,
Programme commande,
Guidage,
Commande mouvement,
Rétroaction,
Robotique,
Positionnement,
Bridage,
Porte pièce,
Implémentation,
Chirurgie,
Vision artificielle,
Sensibilité tactile,
Main,
Structure macroscopique,
Courbe niveau,
Méthode continuation,
Méthode prédicteur correcteur.
English descriptors
- KwdEn :
- Artificial vision,
Clamping,
Continuation method,
Contour line,
Control program,
Cooperative systems,
Feedback regulation,
Guidance,
Hand,
Implementation,
Macroscopic structure,
Motion control,
Positioning,
Predictor corrector method,
Robotics,
Surgery,
Tactile sensitivity,
Work holder.
Abstract
We present the design and implementation of a vision-based system for micron-scale, cooperative manipulation of a surgical tool. The system is based on a control algorithm that implements a broad class of guidance modes called virtual fixtures. A virtual fixture, like a real fixture, limits the motion of a tool to a prescribed class or range. The implemented system uses vision as a sensor for providing a reference trajectory, and the control algorithm then provides haptic feedback involving direct, shared manipulation of a surgical tool. We have tested this system on the JHU Steady Hand robot and provide experimental results for path following and positioning on structures at both macroscopic and microscopic scales.
Record in standard format (ISO 2709)
See the documentation on the Inist Standard format.
pA
A01 | 01 | 1 | | @0 1050-4729
A08 | 01 | 1 | ENG | @1 Vision assisted control for manipulation using virtual fixtures: Experiments at macro and micro scales
A09 | 01 | 1 | ENG | @1 Robotics and automation : Washington DC, 11-15 May 2002
A11 | 01 | 1 | | @1 BETTINI (A.)
A11 | 02 | 1 | | @1 LANG (S.)
A11 | 03 | 1 | | @1 OKAMURA (A.)
A11 | 04 | 1 | | @1 HAGER (G.)
A14 | 01 | | | @1 Engineering Research Center for Computer-Integrated Surgical Systems and Technology, Department of Computer Science, The Johns Hopkins University @3 USA @Z 1 aut. @Z 2 aut. @Z 3 aut. @Z 4 aut.
A18 | 01 | 1 | | @1 IEEE Robotics and Automation Society @3 USA @9 patr.
A20 | | | | @1 3354-3361
A21 | | | | @1 2002
A23 | 01 | | | @0 ENG
A26 | 01 | | | @0 0-7803-7272-7
A43 | 01 | | | @1 INIST @2 Y 37947 @5 354000117766645310
A44 | | | | @0 0000 @1 © 2004 INIST-CNRS. All rights reserved.
A45 | | | | @0 9 ref.
A47 | 01 | 1 | | @0 04-0205417
A60 | | | | @1 P @2 C
A61 | | | | @0 A
A64 | 01 | 1 | | @0 Proceedings - IEEE International Conference on Robotics and Automation
A66 | 01 | | | @0 USA
C01 | 01 | | ENG | @0 We present the design and implementation of a vision-based system for micron-scale, cooperative manipulation of a surgical tool. The system is based on a control algorithm that implements a broad class of guidance modes called virtual fixtures. A virtual fixture, like a real fixture, limits the motion of a tool to a prescribed class or range. The implemented system uses vision as a sensor for providing a reference trajectory, and the control algorithm then provides haptic feedback involving direct, shared manipulation of a surgical tool. We have tested this system on the JHU Steady Hand robot and provide experimental results for path following and positioning on structures at both macroscopic and microscopic scales.
C02 | 01 | X | | @0 001D02D11
C03 | 01 | 3 | FRE | @0 Système coopératif @5 09
C03 | 01 | 3 | ENG | @0 Cooperative systems @5 09
C03 | 02 | X | FRE | @0 Programme commande @5 10
C03 | 02 | X | ENG | @0 Control program @5 10
C03 | 02 | X | SPA | @0 Programa mando @5 10
C03 | 03 | X | FRE | @0 Guidage @5 11
C03 | 03 | X | ENG | @0 Guidance @5 11
C03 | 03 | X | SPA | @0 Guiado @5 11
C03 | 04 | X | FRE | @0 Commande mouvement @5 12
C03 | 04 | X | ENG | @0 Motion control @5 12
C03 | 04 | X | SPA | @0 Control movimiento @5 12
C03 | 05 | X | FRE | @0 Rétroaction @5 13
C03 | 05 | X | ENG | @0 Feedback regulation @5 13
C03 | 05 | X | SPA | @0 Retroacción @5 13
C03 | 06 | X | FRE | @0 Robotique @5 14
C03 | 06 | X | ENG | @0 Robotics @5 14
C03 | 06 | X | SPA | @0 Robótica @5 14
C03 | 07 | X | FRE | @0 Positionnement @5 15
C03 | 07 | X | ENG | @0 Positioning @5 15
C03 | 07 | X | SPA | @0 Posicionamiento @5 15
C03 | 08 | X | FRE | @0 Bridage @5 18
C03 | 08 | X | ENG | @0 Clamping @5 18
C03 | 08 | X | SPA | @0 Apriete @5 18
C03 | 09 | X | FRE | @0 Porte pièce @5 19
C03 | 09 | X | ENG | @0 Work holder @5 19
C03 | 09 | X | SPA | @0 Portapieza @5 19
C03 | 10 | X | FRE | @0 Implémentation @5 20
C03 | 10 | X | ENG | @0 Implementation @5 20
C03 | 10 | X | SPA | @0 Implementación @5 20
C03 | 11 | X | FRE | @0 Chirurgie @5 21
C03 | 11 | X | ENG | @0 Surgery @5 21
C03 | 11 | X | SPA | @0 Cirugía @5 21
C03 | 12 | X | FRE | @0 Vision artificielle @5 22
C03 | 12 | X | ENG | @0 Artificial vision @5 22
C03 | 12 | X | SPA | @0 Visión artificial @5 22
C03 | 13 | X | FRE | @0 Sensibilité tactile @5 23
C03 | 13 | X | ENG | @0 Tactile sensitivity @5 23
C03 | 13 | X | SPA | @0 Sensibilidad tactil @5 23
C03 | 14 | X | FRE | @0 Main @5 24
C03 | 14 | X | ENG | @0 Hand @5 24
C03 | 14 | X | SPA | @0 Mano @5 24
C03 | 15 | X | FRE | @0 Structure macroscopique @5 25
C03 | 15 | X | ENG | @0 Macroscopic structure @5 25
C03 | 15 | X | SPA | @0 Estructura macroscópica @5 25
C03 | 16 | X | FRE | @0 Courbe niveau @5 28
C03 | 16 | X | ENG | @0 Contour line @5 28
C03 | 16 | X | SPA | @0 Curva nivel @5 28
C03 | 17 | X | FRE | @0 Méthode continuation @5 29
C03 | 17 | X | ENG | @0 Continuation method @5 29
C03 | 17 | X | SPA | @0 Método continuación @5 29
C03 | 18 | X | FRE | @0 Méthode prédicteur correcteur @5 30
C03 | 18 | X | ENG | @0 Predictor corrector method @5 30
C03 | 18 | X | SPA | @0 Método predictor corrector @5 30
N21 | | | | @1 138
N82 | | | | @1 OTO

pR
A30 | 01 | 1 | ENG | @1 IEEE international conference on robotics and automation @3 Washington DC USA @4 2002-05-11
Inist format (server)
NO : | PASCAL 04-0205417 INIST |
ET : | Vision assisted control for manipulation using virtual fixtures: Experiments at macro and micro scales |
AU : | BETTINI (A.); LANG (S.); OKAMURA (A.); HAGER (G.) |
AF : | Engineering Research Center for Computer-Integrated Surgical Systems and Technology, Department of Computer Science, The Johns Hopkins University/Etats-Unis (1 aut., 2 aut., 3 aut., 4 aut.) |
DT : | Publication en série; Congrès; Niveau analytique |
SO : | Proceedings - IEEE International Conference on Robotics and Automation; ISSN 1050-4729; Etats-Unis; Da. 2002; Pp. 3354-3361; Bibl. 9 ref. |
LA : | Anglais |
EA : | We present the design and implementation of a vision-based system for micron-scale, cooperative manipulation of a surgical tool. The system is based on a control algorithm that implements a broad class of guidance modes called virtual fixtures. A virtual fixture, like a real fixture, limits the motion of a tool to a prescribed class or range. The implemented system uses vision as a sensor for providing a reference trajectory, and the control algorithm then provides haptic feedback involving direct, shared manipulation of a surgical tool. We have tested this system on the JHU Steady Hand robot and provide experimental results for path following and positioning on structures at both macroscopic and microscopic scales. |
CC : | 001D02D11 |
FD : | Système coopératif; Programme commande; Guidage; Commande mouvement; Rétroaction; Robotique; Positionnement; Bridage; Porte pièce; Implémentation; Chirurgie; Vision artificielle; Sensibilité tactile; Main; Structure macroscopique; Courbe niveau; Méthode continuation; Méthode prédicteur correcteur |
ED : | Cooperative systems; Control program; Guidance; Motion control; Feedback regulation; Robotics; Positioning; Clamping; Work holder; Implementation; Surgery; Artificial vision; Tactile sensitivity; Hand; Macroscopic structure; Contour line; Continuation method; Predictor corrector method |
SD : | Programa mando; Guiado; Control movimiento; Retroacción; Robótica; Posicionamiento; Apriete; Portapieza; Implementación; Cirugía; Visión artificial; Sensibilidad tactil; Mano; Estructura macroscópica; Curva nivel; Método continuación; Método predictor corrector |
LO : | INIST-Y 37947.354000117766645310 |
ID : | 04-0205417 |
Links to Exploration step
Pascal:04-0205417
The document in XML format
<record><TEI><teiHeader><fileDesc><titleStmt><title xml:lang="en" level="a">Vision assisted control for manipulation using virtual fixtures: Experiments at macro and micro scales</title>
<author><name sortKey="Bettini, A" sort="Bettini, A" uniqKey="Bettini A" first="A." last="Bettini">A. Bettini</name>
<affiliation><inist:fA14 i1="01"><s1>Engineering Research Center for Computer-Integrated Surgical Systems and Technology, Department of Computer Science, The Johns Hopkins University</s1>
<s3>USA</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
<sZ>3 aut.</sZ>
<sZ>4 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
<author><name sortKey="Lang, S" sort="Lang, S" uniqKey="Lang S" first="S." last="Lang">S. Lang</name>
<affiliation><inist:fA14 i1="01"><s1>Engineering Research Center for Computer-Integrated Surgical Systems and Technology, Department of Computer Science, The Johns Hopkins University</s1>
<s3>USA</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
<sZ>3 aut.</sZ>
<sZ>4 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
<author><name sortKey="Okamura, A" sort="Okamura, A" uniqKey="Okamura A" first="A." last="Okamura">A. Okamura</name>
<affiliation><inist:fA14 i1="01"><s1>Engineering Research Center for Computer-Integrated Surgical Systems and Technology, Department of Computer Science, The Johns Hopkins University</s1>
<s3>USA</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
<sZ>3 aut.</sZ>
<sZ>4 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
<author><name sortKey="Hager, G" sort="Hager, G" uniqKey="Hager G" first="G." last="Hager">G. Hager</name>
<affiliation><inist:fA14 i1="01"><s1>Engineering Research Center for Computer-Integrated Surgical Systems and Technology, Department of Computer Science, The Johns Hopkins University</s1>
<s3>USA</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
<sZ>3 aut.</sZ>
<sZ>4 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
</titleStmt>
<publicationStmt><idno type="wicri:source">INIST</idno>
<idno type="inist">04-0205417</idno>
<date when="2002">2002</date>
<idno type="stanalyst">PASCAL 04-0205417 INIST</idno>
<idno type="RBID">Pascal:04-0205417</idno>
<idno type="wicri:Area/PascalFrancis/Corpus">001030</idno>
</publicationStmt>
<sourceDesc><biblStruct><analytic><title xml:lang="en" level="a">Vision assisted control for manipulation using virtual fixtures: Experiments at macro and micro scales</title>
<author><name sortKey="Bettini, A" sort="Bettini, A" uniqKey="Bettini A" first="A." last="Bettini">A. Bettini</name>
<affiliation><inist:fA14 i1="01"><s1>Engineering Research Center for Computer-Integrated Surgical Systems and Technology, Department of Computer Science, The Johns Hopkins University</s1>
<s3>USA</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
<sZ>3 aut.</sZ>
<sZ>4 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
<author><name sortKey="Lang, S" sort="Lang, S" uniqKey="Lang S" first="S." last="Lang">S. Lang</name>
<affiliation><inist:fA14 i1="01"><s1>Engineering Research Center for Computer-Integrated Surgical Systems and Technology, Department of Computer Science, The Johns Hopkins University</s1>
<s3>USA</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
<sZ>3 aut.</sZ>
<sZ>4 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
<author><name sortKey="Okamura, A" sort="Okamura, A" uniqKey="Okamura A" first="A." last="Okamura">A. Okamura</name>
<affiliation><inist:fA14 i1="01"><s1>Engineering Research Center for Computer-Integrated Surgical Systems and Technology, Department of Computer Science, The Johns Hopkins University</s1>
<s3>USA</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
<sZ>3 aut.</sZ>
<sZ>4 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
<author><name sortKey="Hager, G" sort="Hager, G" uniqKey="Hager G" first="G." last="Hager">G. Hager</name>
<affiliation><inist:fA14 i1="01"><s1>Engineering Research Center for Computer-Integrated Surgical Systems and Technology, Department of Computer Science, The Johns Hopkins University</s1>
<s3>USA</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
<sZ>3 aut.</sZ>
<sZ>4 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
</analytic>
<series><title level="j" type="main">Proceedings - IEEE International Conference on Robotics and Automation</title>
<idno type="ISSN">1050-4729</idno>
<imprint><date when="2002">2002</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
<seriesStmt><title level="j" type="main">Proceedings - IEEE International Conference on Robotics and Automation</title>
<idno type="ISSN">1050-4729</idno>
</seriesStmt>
</fileDesc>
<profileDesc><textClass><keywords scheme="KwdEn" xml:lang="en"><term>Artificial vision</term>
<term>Clamping</term>
<term>Continuation method</term>
<term>Contour line</term>
<term>Control program</term>
<term>Cooperative systems</term>
<term>Feedback regulation</term>
<term>Guidance</term>
<term>Hand</term>
<term>Implementation</term>
<term>Macroscopic structure</term>
<term>Motion control</term>
<term>Positioning</term>
<term>Predictor corrector method</term>
<term>Robotics</term>
<term>Surgery</term>
<term>Tactile sensitivity</term>
<term>Work holder</term>
</keywords>
<keywords scheme="Pascal" xml:lang="fr"><term>Système coopératif</term>
<term>Programme commande</term>
<term>Guidage</term>
<term>Commande mouvement</term>
<term>Rétroaction</term>
<term>Robotique</term>
<term>Positionnement</term>
<term>Bridage</term>
<term>Porte pièce</term>
<term>Implémentation</term>
<term>Chirurgie</term>
<term>Vision artificielle</term>
<term>Sensibilité tactile</term>
<term>Main</term>
<term>Structure macroscopique</term>
<term>Courbe niveau</term>
<term>Méthode continuation</term>
<term>Méthode prédicteur correcteur</term>
</keywords>
</textClass>
</profileDesc>
</teiHeader>
<front><div type="abstract" xml:lang="en">We present the design and implementation of a vision-based system for micron-scale, cooperative manipulation of a surgical tool. The system is based on a control algorithm that implements a broad class of guidance modes called virtual fixtures. A virtual fixture, like a real fixture, limits the motion of a tool to a prescribed class or range. The implemented system uses vision as a sensor for providing a reference trajectory, and the control algorithm then provides haptic feedback involving direct, shared manipulation of a surgical tool. We have tested this system on the JHU Steady Hand robot and provide experimental results for path following and positioning on structures at both macroscopic and microscopic scales.</div>
</front>
</TEI>
<inist><standard h6="B"><pA><fA01 i1="01" i2="1"><s0>1050-4729</s0>
</fA01>
<fA08 i1="01" i2="1" l="ENG"><s1>Vision assisted control for manipulation using virtual fixtures: Experiments at macro and micro scales</s1>
</fA08>
<fA09 i1="01" i2="1" l="ENG"><s1>Robotics and automation : Washington DC, 11-15 May 2002</s1>
</fA09>
<fA11 i1="01" i2="1"><s1>BETTINI (A.)</s1>
</fA11>
<fA11 i1="02" i2="1"><s1>LANG (S.)</s1>
</fA11>
<fA11 i1="03" i2="1"><s1>OKAMURA (A.)</s1>
</fA11>
<fA11 i1="04" i2="1"><s1>HAGER (G.)</s1>
</fA11>
<fA14 i1="01"><s1>Engineering Research Center for Computer-Integrated Surgical Systems and Technology, Department of Computer Science, The Johns Hopkins University</s1>
<s3>USA</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
<sZ>3 aut.</sZ>
<sZ>4 aut.</sZ>
</fA14>
<fA18 i1="01" i2="1"><s1>IEEE Robotics and Automation Society</s1>
<s3>USA</s3>
<s9>patr.</s9>
</fA18>
<fA20><s1>3354-3361</s1>
</fA20>
<fA21><s1>2002</s1>
</fA21>
<fA23 i1="01"><s0>ENG</s0>
</fA23>
<fA26 i1="01"><s0>0-7803-7272-7</s0>
</fA26>
<fA43 i1="01"><s1>INIST</s1>
<s2>Y 37947</s2>
<s5>354000117766645310</s5>
</fA43>
<fA44><s0>0000</s0>
<s1>© 2004 INIST-CNRS. All rights reserved.</s1>
</fA44>
<fA45><s0>9 ref.</s0>
</fA45>
<fA47 i1="01" i2="1"><s0>04-0205417</s0>
</fA47>
<fA60><s1>P</s1>
<s2>C</s2>
</fA60>
<fA64 i1="01" i2="1"><s0>Proceedings - IEEE International Conference on Robotics and Automation</s0>
</fA64>
<fA66 i1="01"><s0>USA</s0>
</fA66>
<fC01 i1="01" l="ENG"><s0>We present the design and implementation of a vision-based system for micron-scale, cooperative manipulation of a surgical tool. The system is based on a control algorithm that implements a broad class of guidance modes called virtual fixtures. A virtual fixture, like a real fixture, limits the motion of a tool to a prescribed class or range. The implemented system uses vision as a sensor for providing a reference trajectory, and the control algorithm then provides haptic feedback involving direct, shared manipulation of a surgical tool. We have tested this system on the JHU Steady Hand robot and provide experimental results for path following and positioning on structures at both macroscopic and microscopic scales.</s0>
</fC01>
<fC02 i1="01" i2="X"><s0>001D02D11</s0>
</fC02>
<fC03 i1="01" i2="3" l="FRE"><s0>Système coopératif</s0>
<s5>09</s5>
</fC03>
<fC03 i1="01" i2="3" l="ENG"><s0>Cooperative systems</s0>
<s5>09</s5>
</fC03>
<fC03 i1="02" i2="X" l="FRE"><s0>Programme commande</s0>
<s5>10</s5>
</fC03>
<fC03 i1="02" i2="X" l="ENG"><s0>Control program</s0>
<s5>10</s5>
</fC03>
<fC03 i1="02" i2="X" l="SPA"><s0>Programa mando</s0>
<s5>10</s5>
</fC03>
<fC03 i1="03" i2="X" l="FRE"><s0>Guidage</s0>
<s5>11</s5>
</fC03>
<fC03 i1="03" i2="X" l="ENG"><s0>Guidance</s0>
<s5>11</s5>
</fC03>
<fC03 i1="03" i2="X" l="SPA"><s0>Guiado</s0>
<s5>11</s5>
</fC03>
<fC03 i1="04" i2="X" l="FRE"><s0>Commande mouvement</s0>
<s5>12</s5>
</fC03>
<fC03 i1="04" i2="X" l="ENG"><s0>Motion control</s0>
<s5>12</s5>
</fC03>
<fC03 i1="04" i2="X" l="SPA"><s0>Control movimiento</s0>
<s5>12</s5>
</fC03>
<fC03 i1="05" i2="X" l="FRE"><s0>Rétroaction</s0>
<s5>13</s5>
</fC03>
<fC03 i1="05" i2="X" l="ENG"><s0>Feedback regulation</s0>
<s5>13</s5>
</fC03>
<fC03 i1="05" i2="X" l="SPA"><s0>Retroacción</s0>
<s5>13</s5>
</fC03>
<fC03 i1="06" i2="X" l="FRE"><s0>Robotique</s0>
<s5>14</s5>
</fC03>
<fC03 i1="06" i2="X" l="ENG"><s0>Robotics</s0>
<s5>14</s5>
</fC03>
<fC03 i1="06" i2="X" l="SPA"><s0>Robótica</s0>
<s5>14</s5>
</fC03>
<fC03 i1="07" i2="X" l="FRE"><s0>Positionnement</s0>
<s5>15</s5>
</fC03>
<fC03 i1="07" i2="X" l="ENG"><s0>Positioning</s0>
<s5>15</s5>
</fC03>
<fC03 i1="07" i2="X" l="SPA"><s0>Posicionamiento</s0>
<s5>15</s5>
</fC03>
<fC03 i1="08" i2="X" l="FRE"><s0>Bridage</s0>
<s5>18</s5>
</fC03>
<fC03 i1="08" i2="X" l="ENG"><s0>Clamping</s0>
<s5>18</s5>
</fC03>
<fC03 i1="08" i2="X" l="SPA"><s0>Apriete</s0>
<s5>18</s5>
</fC03>
<fC03 i1="09" i2="X" l="FRE"><s0>Porte pièce</s0>
<s5>19</s5>
</fC03>
<fC03 i1="09" i2="X" l="ENG"><s0>Work holder</s0>
<s5>19</s5>
</fC03>
<fC03 i1="09" i2="X" l="SPA"><s0>Portapieza</s0>
<s5>19</s5>
</fC03>
<fC03 i1="10" i2="X" l="FRE"><s0>Implémentation</s0>
<s5>20</s5>
</fC03>
<fC03 i1="10" i2="X" l="ENG"><s0>Implementation</s0>
<s5>20</s5>
</fC03>
<fC03 i1="10" i2="X" l="SPA"><s0>Implementación</s0>
<s5>20</s5>
</fC03>
<fC03 i1="11" i2="X" l="FRE"><s0>Chirurgie</s0>
<s5>21</s5>
</fC03>
<fC03 i1="11" i2="X" l="ENG"><s0>Surgery</s0>
<s5>21</s5>
</fC03>
<fC03 i1="11" i2="X" l="SPA"><s0>Cirugía</s0>
<s5>21</s5>
</fC03>
<fC03 i1="12" i2="X" l="FRE"><s0>Vision artificielle</s0>
<s5>22</s5>
</fC03>
<fC03 i1="12" i2="X" l="ENG"><s0>Artificial vision</s0>
<s5>22</s5>
</fC03>
<fC03 i1="12" i2="X" l="SPA"><s0>Visión artificial</s0>
<s5>22</s5>
</fC03>
<fC03 i1="13" i2="X" l="FRE"><s0>Sensibilité tactile</s0>
<s5>23</s5>
</fC03>
<fC03 i1="13" i2="X" l="ENG"><s0>Tactile sensitivity</s0>
<s5>23</s5>
</fC03>
<fC03 i1="13" i2="X" l="SPA"><s0>Sensibilidad tactil</s0>
<s5>23</s5>
</fC03>
<fC03 i1="14" i2="X" l="FRE"><s0>Main</s0>
<s5>24</s5>
</fC03>
<fC03 i1="14" i2="X" l="ENG"><s0>Hand</s0>
<s5>24</s5>
</fC03>
<fC03 i1="14" i2="X" l="SPA"><s0>Mano</s0>
<s5>24</s5>
</fC03>
<fC03 i1="15" i2="X" l="FRE"><s0>Structure macroscopique</s0>
<s5>25</s5>
</fC03>
<fC03 i1="15" i2="X" l="ENG"><s0>Macroscopic structure</s0>
<s5>25</s5>
</fC03>
<fC03 i1="15" i2="X" l="SPA"><s0>Estructura macroscópica</s0>
<s5>25</s5>
</fC03>
<fC03 i1="16" i2="X" l="FRE"><s0>Courbe niveau</s0>
<s5>28</s5>
</fC03>
<fC03 i1="16" i2="X" l="ENG"><s0>Contour line</s0>
<s5>28</s5>
</fC03>
<fC03 i1="16" i2="X" l="SPA"><s0>Curva nivel</s0>
<s5>28</s5>
</fC03>
<fC03 i1="17" i2="X" l="FRE"><s0>Méthode continuation</s0>
<s5>29</s5>
</fC03>
<fC03 i1="17" i2="X" l="ENG"><s0>Continuation method</s0>
<s5>29</s5>
</fC03>
<fC03 i1="17" i2="X" l="SPA"><s0>Método continuación</s0>
<s5>29</s5>
</fC03>
<fC03 i1="18" i2="X" l="FRE"><s0>Méthode prédicteur correcteur</s0>
<s5>30</s5>
</fC03>
<fC03 i1="18" i2="X" l="ENG"><s0>Predictor corrector method</s0>
<s5>30</s5>
</fC03>
<fC03 i1="18" i2="X" l="SPA"><s0>Método predictor corrector</s0>
<s5>30</s5>
</fC03>
<fN21><s1>138</s1>
</fN21>
<fN82><s1>OTO</s1>
</fN82>
</pA>
<pR><fA30 i1="01" i2="1" l="ENG"><s1>IEEE international conference on robotics and automation</s1>
<s3>Washington DC USA</s3>
<s4>2002-05-11</s4>
</fA30>
</pR>
</standard>
<server><NO>PASCAL 04-0205417 INIST</NO>
<ET>Vision assisted control for manipulation using virtual fixtures: Experiments at macro and micro scales</ET>
<AU>BETTINI (A.); LANG (S.); OKAMURA (A.); HAGER (G.)</AU>
<AF>Engineering Research Center for Computer-Integrated Surgical Systems and Technology, Department of Computer Science, The Johns Hopkins University/Etats-Unis (1 aut., 2 aut., 3 aut., 4 aut.)</AF>
<DT>Publication en série; Congrès; Niveau analytique</DT>
<SO>Proceedings - IEEE International Conference on Robotics and Automation; ISSN 1050-4729; Etats-Unis; Da. 2002; Pp. 3354-3361; Bibl. 9 ref.</SO>
<LA>Anglais</LA>
<EA>We present the design and implementation of a vision-based system for micron-scale, cooperative manipulation of a surgical tool. The system is based on a control algorithm that implements a broad class of guidance modes called virtual fixtures. A virtual fixture, like a real fixture, limits the motion of a tool to a prescribed class or range. The implemented system uses vision as a sensor for providing a reference trajectory, and the control algorithm then provides haptic feedback involving direct, shared manipulation of a surgical tool. We have tested this system on the JHU Steady Hand robot and provide experimental results for path following and positioning on structures at both macroscopic and microscopic scales.</EA>
<CC>001D02D11</CC>
<FD>Système coopératif; Programme commande; Guidage; Commande mouvement; Rétroaction; Robotique; Positionnement; Bridage; Porte pièce; Implémentation; Chirurgie; Vision artificielle; Sensibilité tactile; Main; Structure macroscopique; Courbe niveau; Méthode continuation; Méthode prédicteur correcteur</FD>
<ED>Cooperative systems; Control program; Guidance; Motion control; Feedback regulation; Robotics; Positioning; Clamping; Work holder; Implementation; Surgery; Artificial vision; Tactile sensitivity; Hand; Macroscopic structure; Contour line; Continuation method; Predictor corrector method</ED>
<SD>Programa mando; Guiado; Control movimiento; Retroacción; Robótica; Posicionamiento; Apriete; Portapieza; Implementación; Cirugía; Visión artificial; Sensibilidad tactil; Mano; Estructura macroscópica; Curva nivel; Método continuación; Método predictor corrector</SD>
<LO>INIST-Y 37947.354000117766645310</LO>
<ID>04-0205417</ID>
</server>
</inist>
</record>
To manipulate this document under Unix (Dilib)
EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/PascalFrancis/Corpus
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 001030 | SxmlIndent | more
Or
HfdSelect -h $EXPLOR_AREA/Data/PascalFrancis/Corpus/biblio.hfd -nk 001030 | SxmlIndent | more
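Once the record has been exported with one of the commands above, individual fields can be pulled out of the XML with standard Unix tools. The sketch below is self-contained under stated assumptions: `record.xml` is a hypothetical filename (a minimal stand-in for the dump is created in place), and the abstract text is the first sentence of the record's own abstract.

```shell
# Sketch: extract the English abstract from a saved copy of the record.
# "record.xml" is an assumed filename; a minimal stand-in is written here
# so the example runs on its own.
cat > record.xml <<'EOF'
<record>
<front><div type="abstract" xml:lang="en">We present the design and implementation of a vision-based system for micron-scale, cooperative manipulation of a surgical tool.</div></front>
</record>
EOF
# Print the text between the abstract's opening and closing tags.
sed -n 's/.*<div type="abstract"[^>]*>\(.*\)<\/div>.*/\1/p' record.xml
```

In the real pipeline the XML would come from `HfdSelect ... | SxmlIndent` as above; for anything beyond a one-off extraction, a proper XML parser (e.g. `xmllint --xpath`) is a safer choice than line-oriented `sed`.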
To add a link to this page in the Wicri network
{{Explor lien
|wiki= Ticri/CIDE
|area= HapticV1
|flux= PascalFrancis
|étape= Corpus
|type= RBID
|clé= Pascal:04-0205417
|texte= Vision assisted control for manipulation using virtual fixtures: Experiments at macro and micro scales
}}
This area was generated with Dilib version V0.6.23. Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024.