Gaze-Contingent Motor Channelling, haptic constraints and associated cognitive demand for robotic MIS.
Internal identifier: 001022 (PubMed/Corpus); previous: 001021; next: 001023
Authors: George P. Mylonas; Ka-Wai Kwok; David R C. James; Daniel Leff; Felipe Orihuela-Espina; Ara Darzi; Guang-Zhong Yang
Source:
- Medical image analysis [ 1361-8423 ] ; 2012.
English descriptors
- KwdEn: Cognitive Reserve (physiology); Eye Movement Measurements; Fixation, Ocular (physiology); Humans; Minimally Invasive Surgical Procedures (methods); Robotics (methods); Surgery, Computer-Assisted (methods); Touch (physiology); User-Computer Interface
- MESH: Minimally Invasive Surgical Procedures (methods); Robotics (methods); Surgery, Computer-Assisted (methods); Cognitive Reserve (physiology); Fixation, Ocular (physiology); Touch (physiology); Eye Movement Measurements; Humans; User-Computer Interface
Abstract
The success of MIS is coupled with an increasing demand on surgeons' manual dexterity and visuomotor coordination due to the complexity of instrument manipulations. The use of master-slave surgical robots has avoided many of the drawbacks of MIS, but at the same time, has increased the physical separation between the surgeon and the patient. Tissue deformation combined with restricted workspace and visibility of an already cluttered environment can raise critical issues related to surgical precision and safety. Reconnecting the essential visuomotor sensory feedback is important for the safe practice of robot-assisted MIS procedures. This paper introduces a novel gaze-contingent framework for real-time haptic feedback and virtual fixtures by transforming visual sensory information into physical constraints that can interact with the motor sensory channel. We demonstrate how motor tracking of deforming tissue can be made more effective and accurate through the concept of Gaze-Contingent Motor Channelling. The method is also extended to 3D by introducing the concept of Gaze-Contingent Haptic Constraints where eye gaze is used to dynamically prescribe and update safety boundaries during robot-assisted MIS without prior knowledge of the soft-tissue morphology. Initial validation results on both simulated and robot assisted phantom procedures demonstrate the potential clinical value of the technique. In order to assess the associated cognitive demand of the proposed concepts, functional Near-Infrared Spectroscopy is used and preliminary results are discussed.
DOI: 10.1016/j.media.2010.07.007
PubMed: 20889367
Links to Exploration step
pubmed:20889367
The document in XML format
<record><TEI><teiHeader><fileDesc><titleStmt><title xml:lang="en">Gaze-Contingent Motor Channelling, haptic constraints and associated cognitive demand for robotic MIS.</title>
<author><name sortKey="Mylonas, George P" sort="Mylonas, George P" uniqKey="Mylonas G" first="George P" last="Mylonas">George P. Mylonas</name>
<affiliation><nlm:affiliation>Royal Society/Wolfson Foundation Medical Image Computing Laboratory, Imperial College London, London, United Kingdom. george.mylonas@imperial.ac.uk</nlm:affiliation>
</affiliation>
</author>
<author><name sortKey="Kwok, Ka Wai" sort="Kwok, Ka Wai" uniqKey="Kwok K" first="Ka-Wai" last="Kwok">Ka-Wai Kwok</name>
</author>
<author><name sortKey="James, David R C" sort="James, David R C" uniqKey="James D" first="David R C" last="James">David R C. James</name>
</author>
<author><name sortKey="Leff, Daniel" sort="Leff, Daniel" uniqKey="Leff D" first="Daniel" last="Leff">Daniel Leff</name>
</author>
<author><name sortKey="Orihuela Espina, Felipe" sort="Orihuela Espina, Felipe" uniqKey="Orihuela Espina F" first="Felipe" last="Orihuela-Espina">Felipe Orihuela-Espina</name>
</author>
<author><name sortKey="Darzi, Ara" sort="Darzi, Ara" uniqKey="Darzi A" first="Ara" last="Darzi">Ara Darzi</name>
</author>
<author><name sortKey="Yang, Guang Zhong" sort="Yang, Guang Zhong" uniqKey="Yang G" first="Guang-Zhong" last="Yang">Guang-Zhong Yang</name>
</author>
</titleStmt>
<publicationStmt><idno type="wicri:source">PubMed</idno>
<date when="2012">2012</date>
<idno type="doi">10.1016/j.media.2010.07.007</idno>
<idno type="RBID">pubmed:20889367</idno>
<idno type="pmid">20889367</idno>
<idno type="wicri:Area/PubMed/Corpus">001022</idno>
</publicationStmt>
<sourceDesc><biblStruct><analytic><title xml:lang="en">Gaze-Contingent Motor Channelling, haptic constraints and associated cognitive demand for robotic MIS.</title>
<author><name sortKey="Mylonas, George P" sort="Mylonas, George P" uniqKey="Mylonas G" first="George P" last="Mylonas">George P. Mylonas</name>
<affiliation><nlm:affiliation>Royal Society/Wolfson Foundation Medical Image Computing Laboratory, Imperial College London, London, United Kingdom. george.mylonas@imperial.ac.uk</nlm:affiliation>
</affiliation>
</author>
<author><name sortKey="Kwok, Ka Wai" sort="Kwok, Ka Wai" uniqKey="Kwok K" first="Ka-Wai" last="Kwok">Ka-Wai Kwok</name>
</author>
<author><name sortKey="James, David R C" sort="James, David R C" uniqKey="James D" first="David R C" last="James">David R C. James</name>
</author>
<author><name sortKey="Leff, Daniel" sort="Leff, Daniel" uniqKey="Leff D" first="Daniel" last="Leff">Daniel Leff</name>
</author>
<author><name sortKey="Orihuela Espina, Felipe" sort="Orihuela Espina, Felipe" uniqKey="Orihuela Espina F" first="Felipe" last="Orihuela-Espina">Felipe Orihuela-Espina</name>
</author>
<author><name sortKey="Darzi, Ara" sort="Darzi, Ara" uniqKey="Darzi A" first="Ara" last="Darzi">Ara Darzi</name>
</author>
<author><name sortKey="Yang, Guang Zhong" sort="Yang, Guang Zhong" uniqKey="Yang G" first="Guang-Zhong" last="Yang">Guang-Zhong Yang</name>
</author>
</analytic>
<series><title level="j">Medical image analysis</title>
<idno type="eISSN">1361-8423</idno>
<imprint><date when="2012" type="published">2012</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc><textClass><keywords scheme="KwdEn" xml:lang="en"><term>Cognitive Reserve (physiology)</term>
<term>Eye Movement Measurements</term>
<term>Fixation, Ocular (physiology)</term>
<term>Humans</term>
<term>Minimally Invasive Surgical Procedures (methods)</term>
<term>Robotics (methods)</term>
<term>Surgery, Computer-Assisted (methods)</term>
<term>Touch (physiology)</term>
<term>User-Computer Interface</term>
</keywords>
<keywords scheme="MESH" qualifier="methods" xml:lang="en"><term>Minimally Invasive Surgical Procedures</term>
<term>Robotics</term>
<term>Surgery, Computer-Assisted</term>
</keywords>
<keywords scheme="MESH" qualifier="physiology" xml:lang="en"><term>Cognitive Reserve</term>
<term>Fixation, Ocular</term>
<term>Touch</term>
</keywords>
<keywords scheme="MESH" xml:lang="en"><term>Eye Movement Measurements</term>
<term>Humans</term>
<term>User-Computer Interface</term>
</keywords>
</textClass>
</profileDesc>
</teiHeader>
<front><div type="abstract" xml:lang="en">The success of MIS is coupled with an increasing demand on surgeons' manual dexterity and visuomotor coordination due to the complexity of instrument manipulations. The use of master-slave surgical robots has avoided many of the drawbacks of MIS, but at the same time, has increased the physical separation between the surgeon and the patient. Tissue deformation combined with restricted workspace and visibility of an already cluttered environment can raise critical issues related to surgical precision and safety. Reconnecting the essential visuomotor sensory feedback is important for the safe practice of robot-assisted MIS procedures. This paper introduces a novel gaze-contingent framework for real-time haptic feedback and virtual fixtures by transforming visual sensory information into physical constraints that can interact with the motor sensory channel. We demonstrate how motor tracking of deforming tissue can be made more effective and accurate through the concept of Gaze-Contingent Motor Channelling. The method is also extended to 3D by introducing the concept of Gaze-Contingent Haptic Constraints where eye gaze is used to dynamically prescribe and update safety boundaries during robot-assisted MIS without prior knowledge of the soft-tissue morphology. Initial validation results on both simulated and robot assisted phantom procedures demonstrate the potential clinical value of the technique. In order to assess the associated cognitive demand of the proposed concepts, functional Near-Infrared Spectroscopy is used and preliminary results are discussed.</div>
</front>
</TEI>
<pubmed><MedlineCitation Owner="NLM" Status="MEDLINE"><PMID Version="1">20889367</PMID>
<DateCreated><Year>2012</Year>
<Month>03</Month>
<Day>12</Day>
</DateCreated>
<DateCompleted><Year>2012</Year>
<Month>07</Month>
<Day>19</Day>
</DateCompleted>
<DateRevised><Year>2014</Year>
<Month>11</Month>
<Day>20</Day>
</DateRevised>
<Article PubModel="Print-Electronic"><Journal><ISSN IssnType="Electronic">1361-8423</ISSN>
<JournalIssue CitedMedium="Internet"><Volume>16</Volume>
<Issue>3</Issue>
<PubDate><Year>2012</Year>
<Month>Apr</Month>
</PubDate>
</JournalIssue>
<Title>Medical image analysis</Title>
<ISOAbbreviation>Med Image Anal</ISOAbbreviation>
</Journal>
<ArticleTitle>Gaze-Contingent Motor Channelling, haptic constraints and associated cognitive demand for robotic MIS.</ArticleTitle>
<Pagination><MedlinePgn>612-31</MedlinePgn>
</Pagination>
<ELocationID EIdType="doi" ValidYN="Y">10.1016/j.media.2010.07.007</ELocationID>
<Abstract><AbstractText>The success of MIS is coupled with an increasing demand on surgeons' manual dexterity and visuomotor coordination due to the complexity of instrument manipulations. The use of master-slave surgical robots has avoided many of the drawbacks of MIS, but at the same time, has increased the physical separation between the surgeon and the patient. Tissue deformation combined with restricted workspace and visibility of an already cluttered environment can raise critical issues related to surgical precision and safety. Reconnecting the essential visuomotor sensory feedback is important for the safe practice of robot-assisted MIS procedures. This paper introduces a novel gaze-contingent framework for real-time haptic feedback and virtual fixtures by transforming visual sensory information into physical constraints that can interact with the motor sensory channel. We demonstrate how motor tracking of deforming tissue can be made more effective and accurate through the concept of Gaze-Contingent Motor Channelling. The method is also extended to 3D by introducing the concept of Gaze-Contingent Haptic Constraints where eye gaze is used to dynamically prescribe and update safety boundaries during robot-assisted MIS without prior knowledge of the soft-tissue morphology. Initial validation results on both simulated and robot assisted phantom procedures demonstrate the potential clinical value of the technique. In order to assess the associated cognitive demand of the proposed concepts, functional Near-Infrared Spectroscopy is used and preliminary results are discussed.</AbstractText>
<CopyrightInformation>Copyright © 2010 Elsevier B.V. All rights reserved.</CopyrightInformation>
</Abstract>
<AuthorList CompleteYN="Y"><Author ValidYN="Y"><LastName>Mylonas</LastName>
<ForeName>George P</ForeName>
<Initials>GP</Initials>
<AffiliationInfo><Affiliation>Royal Society/Wolfson Foundation Medical Image Computing Laboratory, Imperial College London, London, United Kingdom. george.mylonas@imperial.ac.uk</Affiliation>
</AffiliationInfo>
</Author>
<Author ValidYN="Y"><LastName>Kwok</LastName>
<ForeName>Ka-Wai</ForeName>
<Initials>KW</Initials>
</Author>
<Author ValidYN="Y"><LastName>James</LastName>
<ForeName>David R C</ForeName>
<Initials>DR</Initials>
</Author>
<Author ValidYN="Y"><LastName>Leff</LastName>
<ForeName>Daniel</ForeName>
<Initials>D</Initials>
</Author>
<Author ValidYN="Y"><LastName>Orihuela-Espina</LastName>
<ForeName>Felipe</ForeName>
<Initials>F</Initials>
</Author>
<Author ValidYN="Y"><LastName>Darzi</LastName>
<ForeName>Ara</ForeName>
<Initials>A</Initials>
</Author>
<Author ValidYN="Y"><LastName>Yang</LastName>
<ForeName>Guang-Zhong</ForeName>
<Initials>GZ</Initials>
</Author>
</AuthorList>
<Language>eng</Language>
<PublicationTypeList><PublicationType UI="D016428">Journal Article</PublicationType>
<PublicationType UI="D013485">Research Support, Non-U.S. Gov't</PublicationType>
</PublicationTypeList>
<ArticleDate DateType="Electronic"><Year>2010</Year>
<Month>08</Month>
<Day>01</Day>
</ArticleDate>
</Article>
<MedlineJournalInfo><Country>Netherlands</Country>
<MedlineTA>Med Image Anal</MedlineTA>
<NlmUniqueID>9713490</NlmUniqueID>
<ISSNLinking>1361-8415</ISSNLinking>
</MedlineJournalInfo>
<CitationSubset>IM</CitationSubset>
<MeshHeadingList><MeshHeading><DescriptorName MajorTopicYN="N" UI="D058245">Cognitive Reserve</DescriptorName>
<QualifierName MajorTopicYN="N" UI="Q000502">physiology</QualifierName>
</MeshHeading>
<MeshHeading><DescriptorName MajorTopicYN="N" UI="D053483">Eye Movement Measurements</DescriptorName>
</MeshHeading>
<MeshHeading><DescriptorName MajorTopicYN="N" UI="D005403">Fixation, Ocular</DescriptorName>
<QualifierName MajorTopicYN="Y" UI="Q000502">physiology</QualifierName>
</MeshHeading>
<MeshHeading><DescriptorName MajorTopicYN="N" UI="D006801">Humans</DescriptorName>
</MeshHeading>
<MeshHeading><DescriptorName MajorTopicYN="N" UI="D019060">Minimally Invasive Surgical Procedures</DescriptorName>
<QualifierName MajorTopicYN="Y" UI="Q000379">methods</QualifierName>
</MeshHeading>
<MeshHeading><DescriptorName MajorTopicYN="N" UI="D012371">Robotics</DescriptorName>
<QualifierName MajorTopicYN="Y" UI="Q000379">methods</QualifierName>
</MeshHeading>
<MeshHeading><DescriptorName MajorTopicYN="N" UI="D025321">Surgery, Computer-Assisted</DescriptorName>
<QualifierName MajorTopicYN="Y" UI="Q000379">methods</QualifierName>
</MeshHeading>
<MeshHeading><DescriptorName MajorTopicYN="N" UI="D014110">Touch</DescriptorName>
<QualifierName MajorTopicYN="Y" UI="Q000502">physiology</QualifierName>
</MeshHeading>
<MeshHeading><DescriptorName MajorTopicYN="Y" UI="D014584">User-Computer Interface</DescriptorName>
</MeshHeading>
</MeshHeadingList>
</MedlineCitation>
<PubmedData><History><PubMedPubDate PubStatus="received"><Year>2009</Year>
<Month>9</Month>
<Day>13</Day>
</PubMedPubDate>
<PubMedPubDate PubStatus="revised"><Year>2010</Year>
<Month>7</Month>
<Day>5</Day>
</PubMedPubDate>
<PubMedPubDate PubStatus="accepted"><Year>2010</Year>
<Month>7</Month>
<Day>22</Day>
</PubMedPubDate>
<PubMedPubDate PubStatus="aheadofprint"><Year>2010</Year>
<Month>8</Month>
<Day>1</Day>
</PubMedPubDate>
<PubMedPubDate PubStatus="entrez"><Year>2010</Year>
<Month>10</Month>
<Day>5</Day>
<Hour>6</Hour>
<Minute>0</Minute>
</PubMedPubDate>
<PubMedPubDate PubStatus="pubmed"><Year>2010</Year>
<Month>10</Month>
<Day>5</Day>
<Hour>6</Hour>
<Minute>0</Minute>
</PubMedPubDate>
<PubMedPubDate PubStatus="medline"><Year>2012</Year>
<Month>7</Month>
<Day>20</Day>
<Hour>6</Hour>
<Minute>0</Minute>
</PubMedPubDate>
</History>
<PublicationStatus>ppublish</PublicationStatus>
<ArticleIdList><ArticleId IdType="pii">S1361-8415(10)00099-X</ArticleId>
<ArticleId IdType="doi">10.1016/j.media.2010.07.007</ArticleId>
<ArticleId IdType="pubmed">20889367</ArticleId>
</ArticleIdList>
</PubmedData>
</pubmed>
</record>
To manipulate this document under Unix (Dilib)
EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/PubMed/Corpus
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 001022 | SxmlIndent | more
Or
HfdSelect -h $EXPLOR_AREA/Data/PubMed/Corpus/biblio.hfd -nk 001022 | SxmlIndent | more
To put a link to this page in the Wicri network
{{Explor lien |wiki= Ticri/CIDE |area= HapticV1 |flux= PubMed |étape= Corpus |type= RBID |clé= pubmed:20889367 |texte= Gaze-Contingent Motor Channelling, haptic constraints and associated cognitive demand for robotic MIS. }}
To generate wiki pages
HfdIndexSelect -h $EXPLOR_AREA/Data/PubMed/Corpus/RBID.i -Sk "pubmed:20889367" \
  | HfdSelect -Kh $EXPLOR_AREA/Data/PubMed/Corpus/biblio.hfd \
  | NlmPubMed2Wicri -a HapticV1
This area was generated with Dilib version V0.6.23.