Exploration server on haptic devices

Warning: this site is under development!
Warning: this site is generated automatically from raw corpora.
The information it contains has therefore not been validated.

Episode Classification for the Analysis of Tissue/Instrument Interaction with Multiple Visual Cues

Internal identifier: 006E90 (Main/Exploration); previous: 006E89; next: 006E91


Authors: L. Lo [United Kingdom]; Ara Darzi [United Kingdom]; Guang-Zhong Yang [United Kingdom]

Source: Lecture Notes in Computer Science (2003)

RBID: ISTEX:8BCC7FC2D643FD3FC02B81393670B76F301B3671

Abstract

The assessment of surgical skills for Minimally Invasive Surgery (MIS) has traditionally been conducted with visual observation and objective scoring. This paper presents a practical framework for the detection of instrument/tissue interaction from MIS video sequences by incorporating multiple visual cues. The proposed technique investigates the characteristics of four major events involved in MIS procedures: idle, retraction, cauterisation, and suturing. Constant instrument tracking is maintained, and multiple visual cues related to shape, deformation, changes in light reflection, and other low-level image features are combined in a Bayesian framework to achieve an overall frame-by-frame classification accuracy of 77% and an episode classification accuracy of 85%.
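
As a rough illustration of the cue fusion the abstract describes, the sketch below combines discretised per-frame visual cues under a naive Bayes assumption to score the four episode classes. This is a minimal sketch, not the authors' implementation: the cue names, likelihood tables, and class priors are hypothetical placeholders.

    # Naive Bayes cue fusion sketch (hypothetical values, not the paper's model).
    CLASSES = ["idle", "retraction", "cauterisation", "suturing"]

    # Hypothetical class priors and per-cue likelihood tables P(cue value | class).
    PRIORS = {"idle": 0.4, "retraction": 0.2, "cauterisation": 0.2, "suturing": 0.2}
    LIKELIHOODS = {
        "instrument_motion": {    # instrument displacement between frames
            "low":  {"idle": 0.8, "retraction": 0.3, "cauterisation": 0.5, "suturing": 0.2},
            "high": {"idle": 0.2, "retraction": 0.7, "cauterisation": 0.5, "suturing": 0.8},
        },
        "specular_change": {      # change in light reflection (glare, smoke)
            "low":  {"idle": 0.9, "retraction": 0.7, "cauterisation": 0.2, "suturing": 0.6},
            "high": {"idle": 0.1, "retraction": 0.3, "cauterisation": 0.8, "suturing": 0.4},
        },
        "tissue_deformation": {   # local shape deformation around the tool tip
            "low":  {"idle": 0.9, "retraction": 0.2, "cauterisation": 0.6, "suturing": 0.5},
            "high": {"idle": 0.1, "retraction": 0.8, "cauterisation": 0.4, "suturing": 0.5},
        },
    }

    def classify_frame(cues):
        """Return (best class, normalised posterior) for observed cue values."""
        scores = {}
        for c in CLASSES:
            p = PRIORS[c]
            for cue, value in cues.items():
                p *= LIKELIHOODS[cue][value][c]   # naive independence across cues
            scores[c] = p
        best = max(scores, key=scores.get)
        return best, scores[best] / sum(scores.values())

    # Example frame: fast-moving instrument, strong specular change, little deformation.
    print(classify_frame({"instrument_motion": "high",
                          "specular_change": "high",
                          "tissue_deformation": "low"}))

Per-frame labels obtained this way can then be aggregated over time (for instance by majority vote within a candidate episode) to produce episode-level labels, which is consistent with the paper reporting separate frame-by-frame (77%) and episode (85%) accuracies.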

Url: https://api.istex.fr/document/8BCC7FC2D643FD3FC02B81393670B76F301B3671/fulltext/pdf
DOI: 10.1007/978-3-540-39899-8_29


Affiliations: Royal Society/Wolfson Medical Image Computing Laboratory, Imperial College London, London, United Kingdom (all three authors)



The document in XML format

<record>
<TEI wicri:istexFullTextTei="biblStruct">
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Episode Classification for the Analysis of Tissue/Instrument Interaction with Multiple Visual Cues</title>
<author>
<name sortKey="Lo, L" sort="Lo, L" uniqKey="Lo L" first="L." last="Lo">L. Lo</name>
</author>
<author>
<name sortKey="Darzi, Ara" sort="Darzi, Ara" uniqKey="Darzi A" first="Ara" last="Darzi">Ara Darzi</name>
</author>
<author>
<name sortKey="Yang, Guang Zhong" sort="Yang, Guang Zhong" uniqKey="Yang G" first="Guang-Zhong" last="Yang">Guang-Zhong Yang</name>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">ISTEX</idno>
<idno type="RBID">ISTEX:8BCC7FC2D643FD3FC02B81393670B76F301B3671</idno>
<date when="2003" year="2003">2003</date>
<idno type="doi">10.1007/978-3-540-39899-8_29</idno>
<idno type="url">https://api.istex.fr/document/8BCC7FC2D643FD3FC02B81393670B76F301B3671/fulltext/pdf</idno>
<idno type="wicri:Area/Istex/Corpus">000A45</idno>
<idno type="wicri:Area/Istex/Curation">000A45</idno>
<idno type="wicri:Area/Istex/Checkpoint">002676</idno>
<idno type="wicri:doubleKey">0302-9743:2003:Lo L:episode:classification:for</idno>
<idno type="wicri:Area/Main/Merge">007366</idno>
<idno type="wicri:Area/Main/Curation">006E90</idno>
<idno type="wicri:Area/Main/Exploration">006E90</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title level="a" type="main" xml:lang="en">Episode Classification for the Analysis of Tissue/Instrument Interaction with Multiple Visual Cues</title>
<author>
<name sortKey="Lo, L" sort="Lo, L" uniqKey="Lo L" first="L." last="Lo">L. Lo</name>
<affiliation wicri:level="3">
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea>Royal Society/Wolfson Medical Image Computing Laboratory, Imperial College London, London</wicri:regionArea>
<placeName>
<settlement type="city">Londres</settlement>
<region type="country">Angleterre</region>
<region type="région" nuts="1">Grand Londres</region>
</placeName>
</affiliation>
<affiliation wicri:level="1">
<country wicri:rule="url">Royaume-Uni</country>
</affiliation>
</author>
<author>
<name sortKey="Darzi, Ara" sort="Darzi, Ara" uniqKey="Darzi A" first="Ara" last="Darzi">Ara Darzi</name>
<affiliation wicri:level="3">
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea>Royal Society/Wolfson Medical Image Computing Laboratory, Imperial College London, London</wicri:regionArea>
<placeName>
<settlement type="city">Londres</settlement>
<region type="country">Angleterre</region>
<region type="région" nuts="1">Grand Londres</region>
</placeName>
</affiliation>
<affiliation wicri:level="1">
<country wicri:rule="url">Royaume-Uni</country>
</affiliation>
</author>
<author>
<name sortKey="Yang, Guang Zhong" sort="Yang, Guang Zhong" uniqKey="Yang G" first="Guang-Zhong" last="Yang">Guang-Zhong Yang</name>
<affiliation wicri:level="3">
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea>Royal Society/Wolfson Medical Image Computing Laboratory, Imperial College London, London</wicri:regionArea>
<placeName>
<settlement type="city">Londres</settlement>
<region type="country">Angleterre</region>
<region type="région" nuts="1">Grand Londres</region>
</placeName>
</affiliation>
<affiliation wicri:level="1">
<country wicri:rule="url">Royaume-Uni</country>
</affiliation>
</author>
</analytic>
<monogr></monogr>
<series>
<title level="s">Lecture Notes in Computer Science</title>
<imprint>
<date>2003</date>
</imprint>
<idno type="ISSN">0302-9743</idno>
<idno type="eISSN">1611-3349</idno>
<idno type="ISSN">0302-9743</idno>
</series>
<idno type="istex">8BCC7FC2D643FD3FC02B81393670B76F301B3671</idno>
<idno type="DOI">10.1007/978-3-540-39899-8_29</idno>
<idno type="ChapterID">29</idno>
<idno type="ChapterID">Chap29</idno>
</biblStruct>
</sourceDesc>
<seriesStmt>
<idno type="ISSN">0302-9743</idno>
</seriesStmt>
</fileDesc>
<profileDesc>
<textClass></textClass>
<langUsage>
<language ident="en">en</language>
</langUsage>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">Abstract: The assessment of surgical skills for Minimally Invasive Surgery (MIS) has traditionally been conducted with visual observation and objective scoring. This paper presents a practical framework for the detection of instrument/tissue interaction from MIS video sequences by incorporating multiple visual cues. The proposed technique investigates the characteristics of four major events involved in MIS procedures including idle, retraction, cauterisation and suturing. Constant instrument tracking is maintained and multiple visual cues related to shape, deformation, changes in light reflection and other low level images featured are combined in a Bayesian framework to achieve an overall frame-by-frame classification accuracy of 77% and episode classification accuracy of 85%.</div>
</front>
</TEI>
<affiliations>
<list>
<country>
<li>Royaume-Uni</li>
</country>
<region>
<li>Angleterre</li>
<li>Grand Londres</li>
</region>
<settlement>
<li>Londres</li>
</settlement>
</list>
<tree>
<country name="Royaume-Uni">
<region name="Angleterre">
<name sortKey="Lo, L" sort="Lo, L" uniqKey="Lo L" first="L." last="Lo">L. Lo</name>
</region>
<name sortKey="Darzi, Ara" sort="Darzi, Ara" uniqKey="Darzi A" first="Ara" last="Darzi">Ara Darzi</name>
<name sortKey="Darzi, Ara" sort="Darzi, Ara" uniqKey="Darzi A" first="Ara" last="Darzi">Ara Darzi</name>
<name sortKey="Lo, L" sort="Lo, L" uniqKey="Lo L" first="L." last="Lo">L. Lo</name>
<name sortKey="Yang, Guang Zhong" sort="Yang, Guang Zhong" uniqKey="Yang G" first="Guang-Zhong" last="Yang">Guang-Zhong Yang</name>
<name sortKey="Yang, Guang Zhong" sort="Yang, Guang Zhong" uniqKey="Yang G" first="Guang-Zhong" last="Yang">Guang-Zhong Yang</name>
</country>
</tree>
</affiliations>
</record>

To manipulate this document under Unix (Dilib)

# Point EXPLOR_STEP at the Main/Exploration data area of the HapticV1 corpus
EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Main/Exploration
# Select record 006E90 from the bibliographic store, indent the XML, and page through it
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 006E90 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Main/Exploration/biblio.hfd -nk 006E90 | SxmlIndent | more
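
For readers without Dilib, the record can also be inspected with standard XML tooling. The Python sketch below is an assumption-laden alternative, not part of the Wicri toolchain: it supposes the record above has been saved locally as record.xml (a hypothetical filename) with its namespace declarations intact (the page display omits the xmlns:wicri declaration, without which the file is not well-formed XML) and, as displayed, with no default TEI namespace.

    import xml.etree.ElementTree as ET

    # Parse the saved record and pull out a few bibliographic fields.
    root = ET.parse("record.xml").getroot()        # the <record> element
    title = root.find(".//title[@level='a']")      # analytic article title
    doi = root.find(".//idno[@type='doi']")        # DOI from the publicationStmt
    title_stmt = root.find(".//titleStmt")
    authors = ["{} {}".format(n.get("first"), n.get("last"))
               for n in title_stmt.iter("name")]

    print("Title:  ", title.text if title is not None else "?")
    print("DOI:    ", doi.text if doi is not None else "?")
    print("Authors:", "; ".join(authors))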

To add a link to this page within the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Main
   |étape=   Exploration
   |type=    RBID
   |clé=     ISTEX:8BCC7FC2D643FD3FC02B81393670B76F301B3671
   |texte=   Episode Classification for the Analysis of Tissue/Instrument Interaction with Multiple Visual Cues
}}

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024