Serveur d'exploration sur les dispositifs haptiques

Warning: this site is under development!
Warning: this site is generated by computational means from raw corpora.
The information is therefore not validated.

Gesture-controlled interactive three dimensional anatomy: a novel teaching tool in head and neck surgery

Internal identifier: 000854 (Pmc/Curation); previous: 000853; next: 000855

Gesture-controlled interactive three dimensional anatomy: a novel teaching tool in head and neck surgery

Authors: Jordan B. Hochman; Bertram Unger; Jay Kraut; Justyn Pisa; Sabine Hombach-Klonisch

Source :

RBID : PMC:4193987

Abstract

Background

There is a need for innovative anatomic teaching tools. This paper describes a three dimensional (3D) tool employing the Microsoft Kinect™. Using this instrument, 3D temporal bone anatomy can be manipulated with the use of hand gestures, in the absence of mouse or keyboard.

Methods

CT temporal bone data is imported into an image processing program and segmented. This information is then exported in polygonal mesh format to an in-house designed 3D graphics engine with an integrated Microsoft Kinect™. Motion in the virtual environment is controlled by tracking hand position relative to the user’s left shoulder.

Results

The tool successfully tracked scene depth and user joint locations. This permitted gesture-based control over the entire 3D environment. Stereoscopy was deemed appropriate with significant object projection, while still maintaining the operator’s ability to resolve image details. Specific anatomical structures can be selected from within the larger virtual environment. These structures can be extracted and rotated at the discretion of the user. Voice command employing the Kinect’s™ intrinsic speech library was also implemented, but is easily confounded by environmental noise.

Conclusion

There is a need for the development of virtual anatomy models to complement traditional education. Initial development is time intensive. Nonetheless, our novel gesture-controlled interactive 3D model of the temporal bone represents a promising interactive teaching tool utilizing a novel interface.


Url:
DOI: 10.1186/s40463-014-0038-2
PubMed: 25286966
PubMed Central: 4193987

Links to previous steps (curation, corpus...)


Links to Exploration step

PMC:4193987

Curation

No country items

Jordan B. Hochman
<affiliation>
<nlm:aff id="Aff1">Neurotologic Surgery, Department of Otolaryngology - Head and Neck Surgery, Faculty of Medicine, University of Manitoba, GB421, 820 Sherbrook Street, Winnipeg, Manitoba Canada</nlm:aff>
<wicri:noCountry code="subfield">Manitoba Canada</wicri:noCountry>
</affiliation>
Bertram Unger
<affiliation>
<nlm:aff id="Aff2">Clinical Learning and Simulation Facility, Department of Medical Education, Faculty of Medicine, University of Manitoba, Winnipeg, Manitoba Canada</nlm:aff>
<wicri:noCountry code="subfield">Manitoba Canada</wicri:noCountry>
</affiliation>
Jay Kraut
<affiliation>
<nlm:aff id="Aff3">Department of Medical Education, Faculty of Medicine, University of Manitoba, Winnipeg, Manitoba Canada</nlm:aff>
<wicri:noCountry code="subfield">Manitoba Canada</wicri:noCountry>
</affiliation>
Justyn Pisa
<affiliation>
<nlm:aff id="Aff4">Department of Otolaryngology - Head and Neck Surgery, Health Sciences Centre, Surgical Hearing Implant Program, GB421, 820 Sherbrook Street, Winnipeg, Manitoba Canada</nlm:aff>
<wicri:noCountry code="subfield">Manitoba Canada</wicri:noCountry>
</affiliation>
Sabine Hombach-Klonisch
<affiliation>
<nlm:aff id="Aff5">Department of Human Anatomy and Cell Science, Faculty of Medicine, University of Manitoba, Winnipeg, Manitoba Canada</nlm:aff>
<wicri:noCountry code="subfield">Manitoba Canada</wicri:noCountry>
</affiliation>

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Gesture-controlled interactive three dimensional anatomy: a novel teaching tool in head and neck surgery</title>
<author>
<name sortKey="Hochman, Jordan B" sort="Hochman, Jordan B" uniqKey="Hochman J" first="Jordan B" last="Hochman">Jordan B. Hochman</name>
<affiliation>
<nlm:aff id="Aff1">Neurotologic Surgery, Department of Otolaryngology - Head and Neck Surgery, Faculty of Medicine, University of Manitoba, GB421, 820 Sherbrook Street, Winnipeg, Manitoba Canada</nlm:aff>
<wicri:noCountry code="subfield">Manitoba Canada</wicri:noCountry>
</affiliation>
</author>
<author>
<name sortKey="Unger, Bertram" sort="Unger, Bertram" uniqKey="Unger B" first="Bertram" last="Unger">Bertram Unger</name>
<affiliation>
<nlm:aff id="Aff2">Clinical Learning and Simulation Facility, Department of Medical Education, Faculty of Medicine, University of Manitoba, Winnipeg, Manitoba Canada</nlm:aff>
<wicri:noCountry code="subfield">Manitoba Canada</wicri:noCountry>
</affiliation>
</author>
<author>
<name sortKey="Kraut, Jay" sort="Kraut, Jay" uniqKey="Kraut J" first="Jay" last="Kraut">Jay Kraut</name>
<affiliation>
<nlm:aff id="Aff3">Department of Medical Education, Faculty of Medicine, University of Manitoba, Winnipeg, Manitoba Canada</nlm:aff>
<wicri:noCountry code="subfield">Manitoba Canada</wicri:noCountry>
</affiliation>
</author>
<author>
<name sortKey="Pisa, Justyn" sort="Pisa, Justyn" uniqKey="Pisa J" first="Justyn" last="Pisa">Justyn Pisa</name>
<affiliation>
<nlm:aff id="Aff4">Department of Otolaryngology - Head and Neck Surgery, Health Sciences Centre, Surgical Hearing Implant Program, GB421, 820 Sherbrook Street, Winnipeg, Manitoba Canada</nlm:aff>
<wicri:noCountry code="subfield">Manitoba Canada</wicri:noCountry>
</affiliation>
</author>
<author>
<name sortKey="Hombach Klonisch, Sabine" sort="Hombach Klonisch, Sabine" uniqKey="Hombach Klonisch S" first="Sabine" last="Hombach-Klonisch">Sabine Hombach-Klonisch</name>
<affiliation>
<nlm:aff id="Aff5">Department of Human Anatomy and Cell Science, Faculty of Medicine, University of Manitoba, Winnipeg, Manitoba Canada</nlm:aff>
<wicri:noCountry code="subfield">Manitoba Canada</wicri:noCountry>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">25286966</idno>
<idno type="pmc">4193987</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4193987</idno>
<idno type="RBID">PMC:4193987</idno>
<idno type="doi">10.1186/s40463-014-0038-2</idno>
<date when="2014">2014</date>
<idno type="wicri:Area/Pmc/Corpus">000854</idno>
<idno type="wicri:Area/Pmc/Curation">000854</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Gesture-controlled interactive three dimensional anatomy: a novel teaching tool in head and neck surgery</title>
<author>
<name sortKey="Hochman, Jordan B" sort="Hochman, Jordan B" uniqKey="Hochman J" first="Jordan B" last="Hochman">Jordan B. Hochman</name>
<affiliation>
<nlm:aff id="Aff1">Neurotologic Surgery, Department of Otolaryngology - Head and Neck Surgery, Faculty of Medicine, University of Manitoba, GB421, 820 Sherbrook Street, Winnipeg, Manitoba Canada</nlm:aff>
<wicri:noCountry code="subfield">Manitoba Canada</wicri:noCountry>
</affiliation>
</author>
<author>
<name sortKey="Unger, Bertram" sort="Unger, Bertram" uniqKey="Unger B" first="Bertram" last="Unger">Bertram Unger</name>
<affiliation>
<nlm:aff id="Aff2">Clinical Learning and Simulation Facility, Department of Medical Education, Faculty of Medicine, University of Manitoba, Winnipeg, Manitoba Canada</nlm:aff>
<wicri:noCountry code="subfield">Manitoba Canada</wicri:noCountry>
</affiliation>
</author>
<author>
<name sortKey="Kraut, Jay" sort="Kraut, Jay" uniqKey="Kraut J" first="Jay" last="Kraut">Jay Kraut</name>
<affiliation>
<nlm:aff id="Aff3">Department of Medical Education, Faculty of Medicine, University of Manitoba, Winnipeg, Manitoba Canada</nlm:aff>
<wicri:noCountry code="subfield">Manitoba Canada</wicri:noCountry>
</affiliation>
</author>
<author>
<name sortKey="Pisa, Justyn" sort="Pisa, Justyn" uniqKey="Pisa J" first="Justyn" last="Pisa">Justyn Pisa</name>
<affiliation>
<nlm:aff id="Aff4">Department of Otolaryngology - Head and Neck Surgery, Health Sciences Centre, Surgical Hearing Implant Program, GB421, 820 Sherbrook Street, Winnipeg, Manitoba Canada</nlm:aff>
<wicri:noCountry code="subfield">Manitoba Canada</wicri:noCountry>
</affiliation>
</author>
<author>
<name sortKey="Hombach Klonisch, Sabine" sort="Hombach Klonisch, Sabine" uniqKey="Hombach Klonisch S" first="Sabine" last="Hombach-Klonisch">Sabine Hombach-Klonisch</name>
<affiliation>
<nlm:aff id="Aff5">Department of Human Anatomy and Cell Science, Faculty of Medicine, University of Manitoba, Winnipeg, Manitoba Canada</nlm:aff>
<wicri:noCountry code="subfield">Manitoba Canada</wicri:noCountry>
</affiliation>
</author>
</analytic>
<series>
<title level="j">Journal of Otolaryngology - Head & Neck Surgery</title>
<idno type="ISSN">1916-0208</idno>
<idno type="eISSN">1916-0216</idno>
<imprint>
<date when="2014">2014</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<sec>
<title>Background</title>
<p>There is a need for innovative anatomic teaching tools. This paper describes a three dimensional (3D) tool employing the Microsoft Kinect™. Using this instrument, 3D temporal bone anatomy can be manipulated with the use of hand gestures, in the absence of mouse or keyboard.</p>
</sec>
<sec>
<title>Methods</title>
<p>CT temporal bone data is imported into an image processing program and segmented. This information is then exported in polygonal mesh format to an in-house designed 3D graphics engine with an integrated Microsoft Kinect™. Motion in the virtual environment is controlled by tracking hand position relative to the user’s left shoulder.</p>
</sec>
<sec>
<title>Results</title>
<p>The tool successfully tracked scene depth and user joint locations. This permitted gesture-based control over the entire 3D environment. Stereoscopy was deemed appropriate with significant object projection, while still maintaining the operator’s ability to resolve image details. Specific anatomical structures can be selected from within the larger virtual environment. These structures can be extracted and rotated at the discretion of the user. Voice command employing the Kinect’s™ intrinsic speech library was also implemented, but is easily confounded by environmental noise.</p>
</sec>
<sec>
<title>Conclusion</title>
<p>There is a need for the development of virtual anatomy models to complement traditional education. Initial development is time intensive. Nonetheless, our novel gesture-controlled interactive 3D model of the temporal bone represents a promising interactive teaching tool utilizing a novel interface.</p>
</sec>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Yeung, Jc" uniqKey="Yeung J">JC Yeung</name>
</author>
<author>
<name sortKey="Fung, K" uniqKey="Fung K">K Fung</name>
</author>
<author>
<name sortKey="Wilson, Td" uniqKey="Wilson T">TD Wilson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Venail, F" uniqKey="Venail F">F Venail</name>
</author>
<author>
<name sortKey="Deveze, A" uniqKey="Deveze A">A Deveze</name>
</author>
<author>
<name sortKey="Lallemant, B" uniqKey="Lallemant B">B Lallemant</name>
</author>
<author>
<name sortKey="Guevara, N" uniqKey="Guevara N">N Guevara</name>
</author>
<author>
<name sortKey="Mondain, M" uniqKey="Mondain M">M Mondain</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nicholson, Dt" uniqKey="Nicholson D">DT Nicholson</name>
</author>
<author>
<name sortKey="Chalk, C" uniqKey="Chalk C">C Chalk</name>
</author>
<author>
<name sortKey="Funnell, Wr" uniqKey="Funnell W">WR Funnell</name>
</author>
<author>
<name sortKey="Daniel, Sj" uniqKey="Daniel S">SJ Daniel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Glittenberg, C" uniqKey="Glittenberg C">C Glittenberg</name>
</author>
<author>
<name sortKey="Binder, S" uniqKey="Binder S">S Binder</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nance, Et" uniqKey="Nance E">ET Nance</name>
</author>
<author>
<name sortKey="Lanning, Sk" uniqKey="Lanning S">SK Lanning</name>
</author>
<author>
<name sortKey="Gunsolley, Jc" uniqKey="Gunsolley J">JC Gunsolley</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Agur, Amr" uniqKey="Agur A">AMR Agur</name>
</author>
<author>
<name sortKey="Lee, Mj" uniqKey="Lee M">MJ Lee</name>
</author>
<author>
<name sortKey="Anderson, Je" uniqKey="Anderson J">JE Anderson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Netter, Fh" uniqKey="Netter F">FH Netter</name>
</author>
<author>
<name sortKey="Colacino, S" uniqKey="Colacino S">S Colacino</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gray, H" uniqKey="Gray H">H Gray</name>
</author>
<author>
<name sortKey="Williams, Pl" uniqKey="Williams P">PL Williams</name>
</author>
<author>
<name sortKey="Bannister, Lh" uniqKey="Bannister L">LH Bannister</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Garg, Ax" uniqKey="Garg A">AX Garg</name>
</author>
<author>
<name sortKey="Norman, G" uniqKey="Norman G">G Norman</name>
</author>
<author>
<name sortKey="Sperotable, L" uniqKey="Sperotable L">L Sperotable</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Temkin, B" uniqKey="Temkin B">B Temkin</name>
</author>
<author>
<name sortKey="Acosta, E" uniqKey="Acosta E">E Acosta</name>
</author>
<author>
<name sortKey="Malvankar, A" uniqKey="Malvankar A">A Malvankar</name>
</author>
<author>
<name sortKey="Vaidyanath, S" uniqKey="Vaidyanath S">S Vaidyanath</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="George, Ap" uniqKey="George A">AP George</name>
</author>
<author>
<name sortKey="De, R" uniqKey="De R">R De</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fried, Mp" uniqKey="Fried M">MP Fried</name>
</author>
<author>
<name sortKey="Uribe, Ji" uniqKey="Uribe J">JI Uribe</name>
</author>
<author>
<name sortKey="Sadoughi, B" uniqKey="Sadoughi B">B Sadoughi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schubert, O" uniqKey="Schubert O">O Schubert</name>
</author>
<author>
<name sortKey="Sartor, K" uniqKey="Sartor K">K Sartor</name>
</author>
<author>
<name sortKey="Forsting, M" uniqKey="Forsting M">M Forsting</name>
</author>
<author>
<name sortKey="Reisser, C" uniqKey="Reisser C">C Reisser</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rodt, T" uniqKey="Rodt T">T Rodt</name>
</author>
<author>
<name sortKey="Sartor, K" uniqKey="Sartor K">K Sartor</name>
</author>
<author>
<name sortKey="Forsting, M" uniqKey="Forsting M">M Forsting</name>
</author>
<author>
<name sortKey="Reisser, C" uniqKey="Reisser C">C Reisser</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Turmezei, Td" uniqKey="Turmezei T">TD Turmezei</name>
</author>
<author>
<name sortKey="Tam, Md" uniqKey="Tam M">MD Tam</name>
</author>
<author>
<name sortKey="Loughna, S" uniqKey="Loughna S">S Loughna</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lufler, Rs" uniqKey="Lufler R">RS Lufler</name>
</author>
<author>
<name sortKey="Zumwalt, Ac" uniqKey="Zumwalt A">AC Zumwalt</name>
</author>
<author>
<name sortKey="Romney, Ca" uniqKey="Romney C">CA Romney</name>
</author>
<author>
<name sortKey="Hoagland, Tm" uniqKey="Hoagland T">TM Hoagland</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Luursema, J M" uniqKey="Luursema J">J-M Luursema</name>
</author>
<author>
<name sortKey="Zumwalt, Ac" uniqKey="Zumwalt A">AC Zumwalt</name>
</author>
<author>
<name sortKey="Romney, Ca" uniqKey="Romney C">CA Romney</name>
</author>
<author>
<name sortKey="Hoagland, Tm" uniqKey="Hoagland T">TM Hoagland</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Jacobson, S" uniqKey="Jacobson S">S Jacobson</name>
</author>
<author>
<name sortKey="Epstein, Sk" uniqKey="Epstein S">SK Epstein</name>
</author>
<author>
<name sortKey="Albright, S" uniqKey="Albright S">S Albright</name>
</author>
<author>
<name sortKey="Ochieng, J" uniqKey="Ochieng J">J Ochieng</name>
</author>
<author>
<name sortKey="Griffiths, J" uniqKey="Griffiths J">J Griffiths</name>
</author>
<author>
<name sortKey="Coppersmith, V" uniqKey="Coppersmith V">V Coppersmith</name>
</author>
<author>
<name sortKey="Polak, Jf" uniqKey="Polak J">JF Polak</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hisley, Kc" uniqKey="Hisley K">KC Hisley</name>
</author>
<author>
<name sortKey="Anderson, Ld" uniqKey="Anderson L">LD Anderson</name>
</author>
<author>
<name sortKey="Smith, Se" uniqKey="Smith S">SE Smith</name>
</author>
<author>
<name sortKey="Kavic, Sm" uniqKey="Kavic S">SM Kavic</name>
</author>
<author>
<name sortKey="Tracy, Jk" uniqKey="Tracy J">JK Tracy</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Petersson, H" uniqKey="Petersson H">H Petersson</name>
</author>
<author>
<name sortKey="Sinkvist, D" uniqKey="Sinkvist D">D Sinkvist</name>
</author>
<author>
<name sortKey="Wang, C" uniqKey="Wang C">C Wang</name>
</author>
<author>
<name sortKey="Smedby, O" uniqKey="Smedby O">O Smedby</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Crossingham, Jl" uniqKey="Crossingham J">JL Crossingham</name>
</author>
<author>
<name sortKey="Jenkinson, J" uniqKey="Jenkinson J">J Jenkinson</name>
</author>
<author>
<name sortKey="Woolridge, N" uniqKey="Woolridge N">N Woolridge</name>
</author>
<author>
<name sortKey="Gallinger, S" uniqKey="Gallinger S">S Gallinger</name>
</author>
<author>
<name sortKey="Tait, Ga" uniqKey="Tait G">GA Tait</name>
</author>
<author>
<name sortKey="Moulton, Ca" uniqKey="Moulton C">CA Moulton</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rodt, T" uniqKey="Rodt T">T Rodt</name>
</author>
<author>
<name sortKey="Burmeister, Hp" uniqKey="Burmeister H">HP Burmeister</name>
</author>
<author>
<name sortKey="Bartling, S" uniqKey="Bartling S">S Bartling</name>
</author>
<author>
<name sortKey="Kaminsky, J" uniqKey="Kaminsky J">J Kaminsky</name>
</author>
<author>
<name sortKey="Schwab, B" uniqKey="Schwab B">B Schwab</name>
</author>
<author>
<name sortKey="Kikinis, R" uniqKey="Kikinis R">R Kikinis</name>
</author>
<author>
<name sortKey="Backer, H" uniqKey="Backer H">H Backer</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gould, Dj" uniqKey="Gould D">DJ Gould</name>
</author>
<author>
<name sortKey="Terrell, Ma" uniqKey="Terrell M">MA Terrell</name>
</author>
<author>
<name sortKey="Fleming, J" uniqKey="Fleming J">J Fleming</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Yip, Gw" uniqKey="Yip G">GW Yip</name>
</author>
<author>
<name sortKey="Rajendran, K" uniqKey="Rajendran K">K Rajendran</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Trelease, Rb" uniqKey="Trelease R">RB Trelease</name>
</author>
<author>
<name sortKey="Rosset, A" uniqKey="Rosset A">A Rosset</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nguyen, N" uniqKey="Nguyen N">N Nguyen</name>
</author>
<author>
<name sortKey="Wilson, Td" uniqKey="Wilson T">TD Wilson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Vazquez, Pp" uniqKey="Vazquez P">PP Vazquez</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hariri, S" uniqKey="Hariri S">S Hariri</name>
</author>
<author>
<name sortKey="Rawn, C" uniqKey="Rawn C">C Rawn</name>
</author>
<author>
<name sortKey="Srivastava, S" uniqKey="Srivastava S">S Srivastava</name>
</author>
<author>
<name sortKey="Youngblood, P" uniqKey="Youngblood P">P Youngblood</name>
</author>
<author>
<name sortKey="Ladd, A" uniqKey="Ladd A">A Ladd</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Brenton, H" uniqKey="Brenton H">H Brenton</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mcrackan, Tr" uniqKey="Mcrackan T">TR McRackan</name>
</author>
<author>
<name sortKey="Reda, Fa" uniqKey="Reda F">FA Reda</name>
</author>
<author>
<name sortKey="Rivas, A" uniqKey="Rivas A">A Rivas</name>
</author>
<author>
<name sortKey="Noble, Jh" uniqKey="Noble J">JH Noble</name>
</author>
<author>
<name sortKey="Dietrich, Ms" uniqKey="Dietrich M">MS Dietrich</name>
</author>
<author>
<name sortKey="Dawant, Bm" uniqKey="Dawant B">BM Dawant</name>
</author>
<author>
<name sortKey="Labadie, Rf" uniqKey="Labadie R">RF Labadie</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Reda, Fa" uniqKey="Reda F">FA Reda</name>
</author>
<author>
<name sortKey="Noble, Jh" uniqKey="Noble J">JH Noble</name>
</author>
<author>
<name sortKey="Rivas, A" uniqKey="Rivas A">A Rivas</name>
</author>
<author>
<name sortKey="Mcrackan, Tr" uniqKey="Mcrackan T">TR McRackan</name>
</author>
<author>
<name sortKey="Labadie, Rf" uniqKey="Labadie R">RF Labadie</name>
</author>
<author>
<name sortKey="Dawant, Bm" uniqKey="Dawant B">BM Dawant</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">J Otolaryngol Head Neck Surg</journal-id>
<journal-id journal-id-type="iso-abbrev">J Otolaryngol Head Neck Surg</journal-id>
<journal-title-group>
<journal-title>Journal of Otolaryngology - Head & Neck Surgery</journal-title>
</journal-title-group>
<issn pub-type="ppub">1916-0208</issn>
<issn pub-type="epub">1916-0216</issn>
<publisher>
<publisher-name>BioMed Central</publisher-name>
<publisher-loc>London</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">25286966</article-id>
<article-id pub-id-type="pmc">4193987</article-id>
<article-id pub-id-type="publisher-id">38</article-id>
<article-id pub-id-type="doi">10.1186/s40463-014-0038-2</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Original Research Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Gesture-controlled interactive three dimensional anatomy: a novel teaching tool in head and neck surgery</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Hochman</surname>
<given-names>Jordan B</given-names>
</name>
<address>
<email>jordanhochman@hotmail.com</email>
</address>
<xref ref-type="aff" rid="Aff1"></xref>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name>
<surname>Unger</surname>
<given-names>Bertram</given-names>
</name>
<address>
<email>bertram.j.unger@gmail.com</email>
</address>
<xref ref-type="aff" rid="Aff2"></xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Kraut</surname>
<given-names>Jay</given-names>
</name>
<address>
<email>jaykraut@gmail.com</email>
</address>
<xref ref-type="aff" rid="Aff3"></xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Pisa</surname>
<given-names>Justyn</given-names>
</name>
<address>
<email>jpisa@hsc.mb.ca</email>
</address>
<xref ref-type="aff" rid="Aff4"></xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Hombach-Klonisch</surname>
<given-names>Sabine</given-names>
</name>
<address>
<email>sabine.hombach-klonisch@med.umanitoba.ca</email>
</address>
<xref ref-type="aff" rid="Aff5"></xref>
</contrib>
<aff id="Aff1">
<label></label>
Neurotologic Surgery, Department of Otolaryngology - Head and Neck Surgery, Faculty of Medicine, University of Manitoba, GB421, 820 Sherbrook Street, Winnipeg, Manitoba Canada</aff>
<aff id="Aff2">
<label></label>
Clinical Learning and Simulation Facility, Department of Medical Education, Faculty of Medicine, University of Manitoba, Winnipeg, Manitoba Canada</aff>
<aff id="Aff3">
<label></label>
Department of Medical Education, Faculty of Medicine, University of Manitoba, Winnipeg, Manitoba Canada</aff>
<aff id="Aff4">
<label></label>
Department of Otolaryngology - Head and Neck Surgery, Health Sciences Centre, Surgical Hearing Implant Program, GB421, 820 Sherbrook Street, Winnipeg, Manitoba Canada</aff>
<aff id="Aff5">
<label></label>
Department of Human Anatomy and Cell Science, Faculty of Medicine, University of Manitoba, Winnipeg, Manitoba Canada</aff>
</contrib-group>
<pub-date pub-type="epub">
<day>7</day>
<month>10</month>
<year>2014</year>
</pub-date>
<pub-date pub-type="pmc-release">
<day>7</day>
<month>10</month>
<year>2014</year>
</pub-date>
<pub-date pub-type="collection">
<year>2014</year>
</pub-date>
<volume>43</volume>
<issue>1</issue>
<elocation-id>38</elocation-id>
<history>
<date date-type="received">
<day>28</day>
<month>1</month>
<year>2014</year>
</date>
<date date-type="accepted">
<day>22</day>
<month>9</month>
<year>2014</year>
</date>
</history>
<permissions>
<copyright-statement>© Hochman et al.; licensee BioMed Central Ltd. 2014</copyright-statement>
<license license-type="open-access">
<license-p>This is an Open Access article distributed under the terms of the Creative Commons Attribution License (
<ext-link ext-link-type="uri" xlink:href="http://creativecommons.org/licenses/by/2.0">http://creativecommons.org/licenses/by/2.0</ext-link>
), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver (
<ext-link ext-link-type="uri" xlink:href="http://creativecommons.org/publicdomain/zero/1.0/">http://creativecommons.org/publicdomain/zero/1.0/</ext-link>
) applies to the data made available in this article, unless otherwise stated.</license-p>
</license>
</permissions>
<abstract id="Abs1">
<sec>
<title>Background</title>
<p>There is a need for innovative anatomic teaching tools. This paper describes a three dimensional (3D) tool employing the Microsoft Kinect™. Using this instrument, 3D temporal bone anatomy can be manipulated with the use of hand gestures, in the absence of mouse or keyboard.</p>
</sec>
<sec>
<title>Methods</title>
<p>CT temporal bone data is imported into an image processing program and segmented. This information is then exported in polygonal mesh format to an in-house designed 3D graphics engine with an integrated Microsoft Kinect™. Motion in the virtual environment is controlled by tracking hand position relative to the user’s left shoulder.</p>
</sec>
<sec>
<title>Results</title>
<p>The tool successfully tracked scene depth and user joint locations. This permitted gesture-based control over the entire 3D environment. Stereoscopy was deemed appropriate with significant object projection, while still maintaining the operator’s ability to resolve image details. Specific anatomical structures can be selected from within the larger virtual environment. These structures can be extracted and rotated at the discretion of the user. Voice command employing the Kinect’s™ intrinsic speech library was also implemented, but is easily confounded by environmental noise.</p>
</sec>
<sec>
<title>Conclusion</title>
<p>There is a need for the development of virtual anatomy models to complement traditional education. Initial development is time intensive. Nonetheless, our novel gesture-controlled interactive 3D model of the temporal bone represents a promising interactive teaching tool utilizing a novel interface.</p>
</sec>
</abstract>
<kwd-group xml:lang="en">
<title>Keywords</title>
<kwd>Interactive</kwd>
<kwd>3D model</kwd>
<kwd>Gesture controlled</kwd>
<kwd>Virtual reality</kwd>
<kwd>Haptic</kwd>
<kwd>Temporal bone</kwd>
</kwd-group>
<custom-meta-group>
<custom-meta>
<meta-name>issue-copyright-statement</meta-name>
<meta-value>© The Author(s) 2014</meta-value>
</custom-meta>
</custom-meta-group>
</article-meta>
</front>
<body>
<sec id="Sec1" sec-type="introduction">
<title>Introduction</title>
<p>Three-dimensional (3D) virtual imagery can be an important tool for understanding the spatial relationships between distinct anatomical structures. This is particularly relevant in regions for which the classical dissection technique has limitations. For example, the complexity and microscopic nature of head and neck anatomy has proven to be an ongoing challenge for learners [
<xref ref-type="bibr" rid="CR1">1</xref>
]. The temporal bone contains numerous soft tissue structures densely situated within bone, placing severe demands on visuo-spatial ability. New learners and senior residents must grapple with complex normative and pathologic conditions, some of which occur only infrequently. Here, novel tools are needed to facilitate spatial anatomic learning and to adequately prepare the professional trainee for the practical demands of surgery. Previous research has indicated that the learning experience of students is positively affected when 3D teaching tools are used in parallel with traditional teaching methods [
<xref ref-type="bibr" rid="CR2">2</xref>
]. 3D computer simulations have been introduced in the teaching of the middle and inner ear [
<xref ref-type="bibr" rid="CR3">3</xref>
], the orbital anatomy [
<xref ref-type="bibr" rid="CR4">4</xref>
], and dental anatomy [
<xref ref-type="bibr" rid="CR5">5</xref>
], with encouraging results.</p>
<p>Medical students still learn the anatomy of this region primarily through illustrated texts, many of which have been in print for decades [
<xref ref-type="bibr" rid="CR6">6</xref>
-
<xref ref-type="bibr" rid="CR8">8</xref>
], but the dissection of the temporal bone itself is usually limited to senior trainees, largely due to the relative scarcity of available samples for practicing operative approaches.</p>
<p>With the advent of high-speed computing, 3D graphical models of complex anatomy have become possible [
<xref ref-type="bibr" rid="CR3">3</xref>
,
<xref ref-type="bibr" rid="CR9">9</xref>
-
<xref ref-type="bibr" rid="CR14">14</xref>
]. Interaction with 3D anatomical models can occur at several levels. In the simplest form, this may involve allowing the user to examine an object in 3D or from different viewpoints [
<xref ref-type="bibr" rid="CR9">9</xref>
,
<xref ref-type="bibr" rid="CR15">15</xref>
-
<xref ref-type="bibr" rid="CR18">18</xref>
]. In more complex cases, a user may be able to select components for closer study, move them about and examine supplementary data such as labels, radiographs and animations [
<xref ref-type="bibr" rid="CR2">2</xref>
,
<xref ref-type="bibr" rid="CR3">3</xref>
,
<xref ref-type="bibr" rid="CR19">19</xref>
-
<xref ref-type="bibr" rid="CR27">27</xref>
]. At the highest levels, users may interact in a natural way with the model, moving it by grasping it with a hand or altering it by cutting or drilling with a tool [
<xref ref-type="bibr" rid="CR10">10</xref>
,
<xref ref-type="bibr" rid="CR28">28</xref>
]. The addition of gesture-based interaction to stereoscopic models combines intuitive interaction with immersive visualization. It is postulated that such a system could alleviate cognitive overload by providing a learner with an environment in which their natural actions act on objects, without the need for complex input devices.</p>
<p>While the technology and accompanying literature surrounding 3D imagery develop, education needs to continue to advance in the setting of both time and fiscal constraints. In this paper we describe a novel gesture-controlled 3D teaching tool in which three dimensional temporal bone anatomy is manipulated with the use of hand gestures through a Microsoft Kinect™, in the absence of mouse and keyboard. Key structures are easily maneuvered and can be removed and better examined in reference to the whole. This novel tool provides a learning environment in which the physical involvement of the user may enhance the learning experience and increase motivation.</p>
</sec>
<sec id="Sec2" sec-type="materials|methods">
<title>Methods</title>
<p>In order to take advantage of recent advances in technology, we have developed a 3D stereoscopic display that uses the Microsoft Kinect™ (Microsoft Corporation, Redmond, Washington, USA) to allow gesture control of anatomical images. Images can be selected, translated, magnified and rotated with simple body motions. The system uses 3D models extracted from CT data by segmentation of anatomical structures of interest. The models are then displayed stereoscopically by a 3D graphics engine which incorporates gesture control from the Microsoft Kinect™. What follows is a description of the system and the process by which anatomical information is converted from tomographic data to a gesture-based anatomy teaching tool.</p>
<p>Our aim is to provide a teaching tool for patient-specific anatomy. To facilitate this, we use actual CT images as the basis. In our prototype, cadaveric temporal bone images are acquired at 0.150 mm slice thickness (General Electric MicroCT - eXplore speCZT) and imported into a 3D image processing program (Mimics v. 11.02, Materialise NV, Leuven, Belgium). The dataset is resampled to a slice interval of 0.1 mm to aid volume interpolation. Anatomical regions of interest, such as the temporal bone, internal carotid artery and facial nerve, are identified by segmentation. Initial segmentation is carried out by thresholding the CT data by density. For example, the temporal bone is identified by retaining all voxels with densities between 382 and 3071 Hounsfield units (HU). Soft tissue regions and ossicles are manually segmented by visual inspection of the data while varying the density threshold; an expert then inspects the margins of the rough segmentation and adds or removes voxels as needed, based on knowledge of the anatomy. For example, with the contrast set to less than -50 HU, the tympanic membrane can be partly resolved and the margins of the membrane extrapolated by estimation. To ensure that the membrane appears intact in the final model, it is thickened to 2-3 voxels.</p>
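As a concrete illustration of the thresholding step, the following is a minimal C++ sketch with illustrative function and variable names; it is not the Mimics workflow used in our pipeline. Voxels whose Hounsfield value falls within a chosen range, e.g. 382 to 3071 HU for bone, are kept in a binary mask that can then be edited manually.

#include <cstddef>
#include <cstdint>
#include <vector>

// Minimal sketch of density thresholding over a CT volume stored as a flat array of
// Hounsfield values. The mask marks voxels belonging to the structure of interest.
std::vector<uint8_t> thresholdSegment(const std::vector<int16_t>& hounsfield,
                                      int16_t lowHU, int16_t highHU)
{
    std::vector<uint8_t> mask(hounsfield.size(), 0);
    for (std::size_t i = 0; i < hounsfield.size(); ++i)
        if (hounsfield[i] >= lowHU && hounsfield[i] <= highHU)
            mask[i] = 1;  // voxel retained for the segmented structure
    return mask;
}

// Example: a rough bone mask for the temporal bone model.
// auto boneMask = thresholdSegment(ctVolume, 382, 3071);

The manual editing of margins described above follows this automatic step.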
<p>The segmented anatomical models are converted to 3D polygonal mesh format and exported in stereolithography file format (STL) (Figure 
<xref rid="Fig1" ref-type="fig">1</xref>
). The resulting models can be displayed in 3D, using a commercially available 3D graphics card (Nvidia GeForce GTX560 - Santa Clara, California, USA), active shutter glasses and either a 3D capable monitor or projector. We have developed our own 3D anatomical graphics engine which loads and renders multiple large polygonal mesh models in 3D and allows users to manipulate camera positions as well as select and manipulate individual models.
<fig id="Fig1">
<label>Figure 1</label>
<caption>
<p>
<bold>Segmented 3D temporal bone anatomy. a)</bold>
Cochleo-vestibular apparatus with medial to lateral orientation and direct view into the internal auditory canal.
<bold>b)</bold>
Sagittal view of external meatus. Note the ossicular network (brown), vertical segment of the facial nerve (yellow), and cochleo-vestibular apparatus (transparent grey).
<bold>c)</bold>
View perpendicular to the internal acoustic meatus with appreciation of facial, cochlear and both inferior and superior vestibular nerves (yellow).</p>
</caption>
<graphic xlink:href="40463_2014_38_Fig1_HTML" id="MO1"></graphic>
</fig>
</p>
<p>Our graphics engine is developed in Microsoft Visual Studio 2008 using the Microsoft Foundation Class (MFC) library and the C++ programming language. The Microsoft Kinect™ Software Development Kit (MKSDK) and the NVidia Application Programming Interface (API) were integrated. To render in 3D with stereoscopy (Nvidia’s 3D Vision), the DirectX 11.0 API is employed; 3D Vision is automatically engaged when the application is set to full screen. The hardware and software requirements needed to run our engine are widely available and accessible to the general user.</p>
<p>The MKSDK uses input from a colour camera and infrared depth sensor to detect human motion. It provides information on scene depth and color (Figure 
<xref rid="Fig2" ref-type="fig">2</xref>
) as well as the user's joint locations (Figure
<xref rid="Fig3" ref-type="fig">3</xref>
). It also contains an intrinsic speech library that facilitates speech recognition using a built-in microphone. Using the MKSDK, the software is able to integrate user body motions detected by the Kinect™ into our anatomical graphics engine.
<fig id="Fig2">
<label>Figure 2</label>
<caption>
<p>
<bold>Screen shot of 3D Kinect™</bold>
<bold>gesture controlled demo.</bold>
The large red cubes in the forefront govern navigation, with the left hand controlling translational movement and the right hand controlling rotation and orientation. The smaller white cubes, set inside the control cubes, are used to visualize hand locations. The user is shown pictorially by the colour camera and infrared depth sensor images on the left, and graphically by the avatar in the top right.</p>
</caption>
<graphic xlink:href="40463_2014_38_Fig2_HTML" id="MO2"></graphic>
</fig>
<fig id="Fig3">
<label>Figure 3</label>
<caption>
<p>
<bold>Joints identified and tracked by the Kinect™.</bold>
An in-house generated image depicting the use of the joints by the Kinect for gesture control.</p>
</caption>
<graphic xlink:href="40463_2014_38_Fig3_HTML" id="MO3"></graphic>
</fig>
</p>
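For orientation, the following is a minimal C++ sketch, assuming the Kinect for Windows SDK 1.x API, of how the joint positions used for gesture control might be read each frame; it is illustrative only and not the code of our engine.

#include <Windows.h>
#include <NuiApi.h>   // Kinect for Windows SDK 1.x; link against Kinect10.lib

// Read the hand and left-shoulder joint positions of the first tracked user.
bool ReadControlJoints(Vector4& leftHand, Vector4& rightHand, Vector4& leftShoulder)
{
    NUI_SKELETON_FRAME frame = {0};
    if (FAILED(NuiSkeletonGetNextFrame(0, &frame)))
        return false;                      // no new skeleton frame available
    NuiTransformSmooth(&frame, NULL);      // default jitter smoothing

    for (int i = 0; i < NUI_SKELETON_COUNT; ++i)
    {
        const NUI_SKELETON_DATA& s = frame.SkeletonData[i];
        if (s.eTrackingState == NUI_SKELETON_TRACKED)
        {
            leftHand     = s.SkeletonPositions[NUI_SKELETON_POSITION_HAND_LEFT];
            rightHand    = s.SkeletonPositions[NUI_SKELETON_POSITION_HAND_RIGHT];
            leftShoulder = s.SkeletonPositions[NUI_SKELETON_POSITION_SHOULDER_LEFT];
            return true;
        }
    }
    return false;                          // no tracked user in view
}

// One-time setup elsewhere in the application (error handling omitted):
//   NuiInitialize(NUI_INITIALIZE_FLAG_USES_SKELETON);
//   NuiSkeletonTrackingEnable(NULL, 0);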
</sec>
<sec id="Sec3" sec-type="results">
<title>Results</title>
<p>Our software uses the Kinect™ to allow an operator to navigate in 3D space and to select specific anatomical structures of interest from within the larger virtual environment (Figure 
<xref rid="Fig4" ref-type="fig">4</xref>
). These structures can then be extracted and rotated in all planes at the discretion of the user.
<fig id="Fig4">
<label>Figure 4</label>
<caption>
<p>
<bold>3D anatomy tool selection mode with cochleo-vestibular apparatus brought to forefront.</bold>
Objects may be manipulated both by gesture and voice control.
<bold>a)</bold>
Cochleo-vestibular apparatus, having been selected, in transit towards viewer.
<bold>b)</bold>
Cochleo-vestibular apparatus “popped” out of screen in 3D and rotated by 180°. It may be translated, magnified or rotated under user control using gestures. The users are first author Jordan Hochman and 2
<sup>nd</sup>
author Bert Unger.</p>
</caption>
<graphic xlink:href="40463_2014_38_Fig4_HTML" id="MO4"></graphic>
</fig>
</p>
<p>To move in 3D space, both the left and right hands are tracked relative to the position of the left shoulder. The left hand controls translational movement, and the right hand controls rotation and orientation. Two cubes, shown at the bottom of both Figures
<xref rid="Fig2" ref-type="fig">2</xref>
and
<xref rid="Fig4" ref-type="fig">4</xref>
, are used to visualize hand locations. A preset distance from the hand to the shoulder is defined as the center of each cube. When the hand, represented by a small sphere, is centered in a cube, no movement or rotation occurs. As the hand moves away from the center, camera movement or rotation is proportional to the hand’s distance from the center. When the user’s hand lies outside of the cube for several seconds, motion control of the scene is disabled. Motion control can be re-enabled by again placing one’s hand in the center reference position.</p>
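The dead-zone mapping described above can be sketched as follows; this is a minimal C++ illustration with assumed cube dimensions and speeds, not the actual engine code. The hand offset relative to the left shoulder is compared with the centre of its control cube, motion is suppressed inside a small dead zone, and velocity grows with the distance from the centre.

#include <cmath>

struct Vec3 { float x, y, z; };

// Per-axis control value in [-1, 1]: zero inside the dead zone, growing linearly
// with the hand's distance from the cube centre, clamped at the cube boundary.
static float axisControl(float offset, float centre, float deadZone, float cubeHalfSize)
{
    float d = offset - centre;
    if (std::fabs(d) < deadZone) return 0.0f;                 // hand centred: no motion
    float s = (std::fabs(d) - deadZone) / (cubeHalfSize - deadZone);
    if (s > 1.0f) s = 1.0f;
    return (d < 0.0f) ? -s : s;
}

// Left hand (relative to the left shoulder) drives translation; right hand drives rotation.
void updateCamera(const Vec3& leftHandRel, const Vec3& rightHandRel,
                  const Vec3& cubeCentre, float dt,
                  Vec3& cameraPos, Vec3& cameraEuler)
{
    const float deadZone = 0.05f, halfSize = 0.25f;           // metres; assumed values
    const float moveSpeed = 0.5f, turnSpeed = 45.0f;          // m/s and deg/s; assumed values
    cameraPos.x   += moveSpeed * dt * axisControl(leftHandRel.x,  cubeCentre.x, deadZone, halfSize);
    cameraPos.y   += moveSpeed * dt * axisControl(leftHandRel.y,  cubeCentre.y, deadZone, halfSize);
    cameraPos.z   += moveSpeed * dt * axisControl(leftHandRel.z,  cubeCentre.z, deadZone, halfSize);
    cameraEuler.x += turnSpeed * dt * axisControl(rightHandRel.y, cubeCentre.y, deadZone, halfSize);
    cameraEuler.y += turnSpeed * dt * axisControl(rightHandRel.x, cubeCentre.x, deadZone, halfSize);
}

Disabling motion control when the hand remains outside the cube, as described above, simply amounts to no longer calling updateCamera until the hand returns to the centre reference position.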
<p>The NVidia API allows the software to control the depth and convergence of 3D Vision in our system. Depth settings control the illusion of depth in the 3D image; convergence settings control the distance from the camera at which objects appear to “pop” out of the screen. If these settings are too low, the 3D stereoscopy may not be noticeable; if they are too large, divergence can occur and the stereoscopy may not be resolved as a single image, resulting in eye strain.</p>
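A minimal sketch of keeping these two parameters in a comfortable range is shown below; the limits are illustrative assumptions, and values of this kind would then be passed on through the NVidia stereo interface mentioned above.

#include <algorithm>

struct StereoSettings
{
    float separationPercent;  // "depth": strength of the stereo effect, in percent
    float convergence;        // distance of the zero-parallax (screen) plane
};

// Clamp user-adjusted stereo settings so the effect stays visible without divergence.
StereoSettings clampStereo(StereoSettings s)
{
    // Too little separation: the 3D effect is imperceptible.
    // Too much: the eyes diverge, the image doubles and eye strain results.
    s.separationPercent = std::min(std::max(s.separationPercent, 5.0f), 50.0f);

    // Convergence too small: nothing "pops" out of the screen.
    // Too large: the whole scene floats uncomfortably in front of it.
    s.convergence = std::min(std::max(s.convergence, 0.1f), 5.0f);
    return s;
}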
<p>When the camera is at a desired location, the user can switch modes to select objects of interest for closer inspection. The operator switches modes either by tapping their left shoulder with their right hand or by employing an audio command. When the selection mode is activated, the left cube controls a sphere that can move within the 3D scene to highlight any desired structure. Once an object is highlighted, it can then be selected by another shoulder tap or an audio command. Once an object is selected (Figure
<xref rid="Fig4" ref-type="fig">4</xref>
), the left hand controls the location of the structure while the right hand controls its orientation. The 3D Vision effect is set to bring the selected object towards the user, enabling a “pop out” so the anatomy can be observed more closely and manipulated separately from the larger model.</p>
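The shoulder-tap mode switch can be sketched as follows; this is a minimal C++ illustration with an assumed 0.15 m touch threshold, not the actual engine code. When the right hand comes within a small distance of the left shoulder, the interface toggles between navigation and selection modes, and the toggle is suppressed while the hand rests on the shoulder.

#include <cmath>

struct Vec3f { float x, y, z; };

static float dist(const Vec3f& a, const Vec3f& b)
{
    return std::sqrt((a.x - b.x) * (a.x - b.x) +
                     (a.y - b.y) * (a.y - b.y) +
                     (a.z - b.z) * (a.z - b.z));
}

enum Mode { MODE_NAVIGATE, MODE_SELECT };

// Call once per tracked frame; 'wasTouching' prevents the toggle from firing repeatedly
// while the hand remains on the shoulder.
Mode updateMode(Mode current, const Vec3f& rightHand, const Vec3f& leftShoulder, bool& wasTouching)
{
    bool touching = dist(rightHand, leftShoulder) < 0.15f;  // assumed threshold in metres
    if (touching && !wasTouching)
        current = (current == MODE_NAVIGATE) ? MODE_SELECT : MODE_NAVIGATE;
    wasTouching = touching;
    return current;
}

An audio command recognized through the Kinect's™ speech library toggles the same state.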
</sec>
<sec id="Sec4" sec-type="discussion">
<title>Discussion</title>
<p>New technologies are advocated not to replace, but rather to complement, classic learning. These modalities are best perceived as fueling a renaissance in anatomy learning as opposed to supplanting cadaveric education. They represent a promising opportunity in medical education. Successful integration into standard training and patient care requires significant interplay among anatomists, clinicians and engineers. Collaborative development of educational and manipulative tools needs to advance before global acceptance is assured.</p>
<p>Requisite to any teaching model is the recognition that anatomy is fundamental to responsible and effective medical education and patient management, and that the deconstruction of anatomic education, with its associated undermining of crucial knowledge and skills, may lead to under-qualified doctors. Medical education needs to be enduring and not solely pertinent to exam purposes. Patient-oriented, safe care rests on a sound anatomical basis provided during the formative years, in association with lifelong regular learning.</p>
<p>Initial costs in setup and design of 3D digital medical education tools may seem prohibitive. A cost comparison between physical and digital dissection was undertaken by Hisley
<italic>et al</italic>
. in 2007 [
<xref ref-type="bibr" rid="CR19">19</xref>
]. Physical dissection appeared more economical when a single cadaver was compared to the initial setup of a virtually dissected specimen. However, even accounting for multiple workstations and the accrual of a broad anatomic library, digital dissection quickly becomes the less expensive option when considered longitudinally.</p>
<p>Unfortunately, the development of three dimensional models is time intensive. The constructed images are highly accurate and drawn from real anatomy, but ultimately remain a stylized abstraction. Additionally, it is difficult to determine the appropriate level of detail to include, as a teaching module may be used by disparate learners. Dissimilar file formats are employed by different institutions, and the sharing of information and crafted modules is complicated for proprietary programs [
<xref ref-type="bibr" rid="CR29">29</xref>
]. If the data is obtained from histologic samples, difficulties inherent in embalming, freezing and slicing may cause irregularities within the data sets and ultimate inaccuracies in the anatomy.</p>
<p>Case-specific three dimensional visualization is now possible. The process is limited by the requisite time for segmentation. However, complex, variant and unusual cases may dictate such an investment. The near future holds the promise of automated segmentation [
<xref ref-type="bibr" rid="CR30">30</xref>
,
<xref ref-type="bibr" rid="CR31">31</xref>
], further encouraging these newer technologies. The current iteration of the Kinect™ can also be employed in the operative theatre, allowing the user to maintain sterility while providing valuable spatial information on the relationship between normal and pathologic anatomical structures, with the aim of preserving the former.</p>
</sec>
<sec id="Sec5" sec-type="conclusion">
<title>Conclusion</title>
<p>There is a great need for the development of advanced virtual anatomy models to complement traditional education. Our novel gesture-controlled interactive 3D model of temporal bone anatomy constitutes a promising teaching tool, not only for the early learner but in particular for the advanced learner, with the aim of better preparing professionals for advanced spatial comprehension in surgical practice.</p>
</sec>
</body>
<back>
<fn-group>
<fn>
<p>
<bold>Competing interests</bold>
</p>
<p>The authors declare that they have no competing interests.</p>
</fn>
<fn>
<p>
<bold>Authors’ contributions</bold>
</p>
<p>JH provided the literature review, was responsible for the study design, and was the major contributor to the written manuscript. BU supplied engineering expertise on the test equipment and contributed to the study design and data analysis. JK offered engineering expertise on the testing equipment and the study protocol. JP carried out data analysis and contributed to writing the manuscript. SHK contributed to the literature review, study design and editing of the manuscript. All authors read and approved the final manuscript.</p>
</fn>
</fn-group>
<ack>
<title>Acknowledgements</title>
<p>The authors thank Ms. Sharmin Farzana-Khan for her excellent assistance with the segmentation process.</p>
<p>We are grateful to have received financial support from 1) the Health Sciences Center Foundation, 2) the Virtual Reality Application Fund, Government of Manitoba, and 3) the Dean’s Strategic Research Fund of the Faculty of Medicine, University of Manitoba.</p>
</ack>
<ref-list id="Bib1">
<title>References</title>
<ref id="CR1">
<label>1.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Yeung</surname>
<given-names>JC</given-names>
</name>
<name>
<surname>Fung</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Wilson</surname>
<given-names>TD</given-names>
</name>
</person-group>
<article-title>Development of a computer-assisted cranial nerve simulation from the visible human dataset</article-title>
<source>Anat Sci Educ</source>
<year>2011</year>
<volume>4</volume>
<issue>2</issue>
<fpage>92</fpage>
<lpage>97</lpage>
<pub-id pub-id-type="doi">10.1002/ase.190</pub-id>
<pub-id pub-id-type="pmid">21438158</pub-id>
</element-citation>
</ref>
<ref id="CR2">
<label>2.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Venail</surname>
<given-names>F</given-names>
</name>
<name>
<surname>Deveze</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Lallemant</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Guevara</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Mondain</surname>
<given-names>M</given-names>
</name>
</person-group>
<article-title>Enhancement of temporal bone anatomy learning with computer 3D rendered imaging software</article-title>
<source>Med Teach</source>
<year>2010</year>
<volume>32</volume>
<issue>7</issue>
<fpage>e282</fpage>
<lpage>e288</lpage>
<pub-id pub-id-type="doi">10.3109/0142159X.2010.490280</pub-id>
<pub-id pub-id-type="pmid">20653370</pub-id>
</element-citation>
</ref>
<ref id="CR3">
<label>3.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Nicholson</surname>
<given-names>DT</given-names>
</name>
<name>
<surname>Chalk</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Funnell</surname>
<given-names>WR</given-names>
</name>
<name>
<surname>Daniel</surname>
<given-names>SJ</given-names>
</name>
</person-group>
<article-title>Can virtual reality improve anatomy education? A randomised controlled study of a computer-generated three-dimensional anatomical ear model</article-title>
<source>Med Educ</source>
<year>2006</year>
<volume>40</volume>
<issue>11</issue>
<fpage>1081</fpage>
<lpage>1087</lpage>
<pub-id pub-id-type="doi">10.1111/j.1365-2929.2006.02611.x</pub-id>
<pub-id pub-id-type="pmid">17054617</pub-id>
</element-citation>
</ref>
<ref id="CR4">
<label>4.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Glittenberg</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Binder</surname>
<given-names>S</given-names>
</name>
</person-group>
<article-title>Using 3D computer simulations to enhance ophthalmic training</article-title>
<source>Ophthalmic Physiol Opt</source>
<year>2006</year>
<volume>26</volume>
<issue>1</issue>
<fpage>40</fpage>
<lpage>49</lpage>
<pub-id pub-id-type="doi">10.1111/j.1475-1313.2005.00358.x</pub-id>
<pub-id pub-id-type="pmid">16390481</pub-id>
</element-citation>
</ref>
<ref id="CR5">
<label>5.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Nance</surname>
<given-names>ET</given-names>
</name>
<name>
<surname>Lanning</surname>
<given-names>SK</given-names>
</name>
<name>
<surname>Gunsolley</surname>
<given-names>JC</given-names>
</name>
</person-group>
<article-title>Dental anatomy carving computer-assisted instruction program: an assessment of student performance and perceptions</article-title>
<source>J Dent Educ</source>
<year>2009</year>
<volume>73</volume>
<issue>8</issue>
<fpage>972</fpage>
<lpage>979</lpage>
<pub-id pub-id-type="pmid">19648568</pub-id>
</element-citation>
</ref>
<ref id="CR6">
<label>6.</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Agur</surname>
<given-names>AMR</given-names>
</name>
<name>
<surname>Lee</surname>
<given-names>MJ</given-names>
</name>
<name>
<surname>Anderson</surname>
<given-names>JE</given-names>
</name>
</person-group>
<source>Grant’s Atlas of Anatomy</source>
<year>1991</year>
<edition>9</edition>
<publisher-loc>Baltimore</publisher-loc>
<publisher-name>Williams & Wilkins</publisher-name>
<fpage>650</fpage>
</element-citation>
</ref>
<ref id="CR7">
<label>7.</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Netter</surname>
<given-names>FH</given-names>
</name>
<name>
<surname>Colacino</surname>
<given-names>S</given-names>
</name>
</person-group>
<source>Atlas of Human Anatomy</source>
<year>1997</year>
<edition>2</edition>
<publisher-loc>East Hanover</publisher-loc>
<publisher-name>Novartis</publisher-name>
<fpage>525</fpage>
</element-citation>
</ref>
<ref id="CR8">
<label>8.</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Gray</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Williams</surname>
<given-names>PL</given-names>
</name>
<name>
<surname>Bannister</surname>
<given-names>LH</given-names>
</name>
</person-group>
<source>Gray’s Anatomy: The Anatomical Basis of Medicine and Surgery</source>
<year>1995</year>
<edition>38</edition>
<publisher-loc>New York</publisher-loc>
<publisher-name>Churchill Livingstone</publisher-name>
<fpage>2092</fpage>
</element-citation>
</ref>
<ref id="CR9">
<label>9.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Garg</surname>
<given-names>AX</given-names>
</name>
<name>
<surname>Norman</surname>
<given-names>G</given-names>
</name>
<name>
<surname>Sperotable</surname>
<given-names>L</given-names>
</name>
</person-group>
<article-title>How medical students learn spatial anatomy</article-title>
<source>Lancet</source>
<year>2001</year>
<volume>357</volume>
<issue>9253</issue>
<fpage>363</fpage>
<lpage>364</lpage>
<pub-id pub-id-type="doi">10.1016/S0140-6736(00)03649-7</pub-id>
<pub-id pub-id-type="pmid">11211004</pub-id>
</element-citation>
</ref>
<ref id="CR10">
<label>10.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Temkin</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Acosta</surname>
<given-names>E</given-names>
</name>
<name>
<surname>Malvankar</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Vaidyanath</surname>
<given-names>S</given-names>
</name>
</person-group>
<article-title>An interactive three-dimensional virtual body structures system for anatomical training over the internet</article-title>
<source>Clin Anat</source>
<year>2006</year>
<volume>19</volume>
<issue>3</issue>
<fpage>267</fpage>
<lpage>274</lpage>
<pub-id pub-id-type="doi">10.1002/ca.20230</pub-id>
<pub-id pub-id-type="pmid">16506202</pub-id>
</element-citation>
</ref>
<ref id="CR11">
<label>11.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>George</surname>
<given-names>AP</given-names>
</name>
<name>
<surname>De</surname>
<given-names>R</given-names>
</name>
</person-group>
<article-title>Review of temporal bone dissection teaching: how it was, is and will be</article-title>
<source>J Laryngol Otol</source>
<year>2010</year>
<volume>124</volume>
<issue>2</issue>
<fpage>119</fpage>
<lpage>125</lpage>
<pub-id pub-id-type="doi">10.1017/S0022215109991617</pub-id>
<pub-id pub-id-type="pmid">19954559</pub-id>
</element-citation>
</ref>
<ref id="CR12">
<label>12.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fried</surname>
<given-names>MP</given-names>
</name>
<name>
<surname>Uribe</surname>
<given-names>JI</given-names>
</name>
<name>
<surname>Sadoughi</surname>
<given-names>B</given-names>
</name>
</person-group>
<article-title>The role of virtual reality in surgical training in otorhinolaryngology</article-title>
<source>Curr Opin Otolaryngol Head Neck Surg</source>
<year>2007</year>
<volume>15</volume>
<issue>3</issue>
<fpage>163</fpage>
<lpage>169</lpage>
<pub-id pub-id-type="doi">10.1097/MOO.0b013e32814b0802</pub-id>
<pub-id pub-id-type="pmid">17483684</pub-id>
</element-citation>
</ref>
<ref id="CR13">
<label>13.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Schubert</surname>
<given-names>O</given-names>
</name>
<name>
<surname>Sartor</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Forsting</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Reisser</surname>
<given-names>C</given-names>
</name>
</person-group>
<article-title>Three-dimensional computed display of otosurgical operation sites by spiral CT</article-title>
<source>Neuroradiology</source>
<year>1996</year>
<volume>38</volume>
<issue>7</issue>
<fpage>663</fpage>
<lpage>668</lpage>
<pub-id pub-id-type="doi">10.1007/s002340050330</pub-id>
<pub-id pub-id-type="pmid">8912325</pub-id>
</element-citation>
</ref>
<ref id="CR14">
<label>14.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rodt</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Sartor</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Forsting</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Reisser</surname>
<given-names>C</given-names>
</name>
</person-group>
<article-title>3D visualisation of the middle ear and adjacent structures using reconstructed multi-slice CT datasets, correlating 3D images and virtual endoscopy to the 2D cross-sectional images</article-title>
<source>Neuroradiology</source>
<year>2002</year>
<volume>44</volume>
<issue>9</issue>
<fpage>783</fpage>
<lpage>790</lpage>
<pub-id pub-id-type="doi">10.1007/s00234-002-0784-0</pub-id>
<pub-id pub-id-type="pmid">12221454</pub-id>
</element-citation>
</ref>
<ref id="CR15">
<label>15.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Turmezei</surname>
<given-names>TD</given-names>
</name>
<name>
<surname>Tam</surname>
<given-names>MD</given-names>
</name>
<name>
<surname>Loughna</surname>
<given-names>S</given-names>
</name>
</person-group>
<article-title>A survey of medical students on the impact of a new digital imaging library in the dissection room</article-title>
<source>Clin Anat</source>
<year>2009</year>
<volume>22</volume>
<issue>6</issue>
<fpage>761</fpage>
<lpage>769</lpage>
<pub-id pub-id-type="doi">10.1002/ca.20833</pub-id>
<pub-id pub-id-type="pmid">19637297</pub-id>
</element-citation>
</ref>
<ref id="CR16">
<label>16.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lufler</surname>
<given-names>RS</given-names>
</name>
<name>
<surname>Zumwalt</surname>
<given-names>AC</given-names>
</name>
<name>
<surname>Romney</surname>
<given-names>CA</given-names>
</name>
<name>
<surname>Hoagland</surname>
<given-names>TM</given-names>
</name>
</person-group>
<article-title>Incorporating radiology into medical gross anatomy: does the use of cadaver CT scans improve students’ academic performance in anatomy?</article-title>
<source>Anat Sci Educ</source>
<year>2010</year>
<volume>3</volume>
<issue>2</issue>
<fpage>56</fpage>
<lpage>63</lpage>
<pub-id pub-id-type="pmid">20213692</pub-id>
</element-citation>
</ref>
<ref id="CR17">
<label>17.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Luursema</surname>
<given-names>J-M</given-names>
</name>
<name>
<surname>Zumwalt</surname>
<given-names>AC</given-names>
</name>
<name>
<surname>Romney</surname>
<given-names>CA</given-names>
</name>
<name>
<surname>Hoagland</surname>
<given-names>TM</given-names>
</name>
</person-group>
<article-title>The role of stereopsis in virtual anatomic learning</article-title>
<source>Interact Comput</source>
<year>2008</year>
<volume>20</volume>
<fpage>455</fpage>
<lpage>460</lpage>
<pub-id pub-id-type="doi">10.1016/j.intcom.2008.04.003</pub-id>
</element-citation>
</ref>
<ref id="CR18">
<label>18.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Jacobson</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Epstein</surname>
<given-names>SK</given-names>
</name>
<name>
<surname>Albright</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Ochieng</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Griffiths</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Coppersmith</surname>
<given-names>V</given-names>
</name>
<name>
<surname>Polak</surname>
<given-names>JF</given-names>
</name>
</person-group>
<article-title>Creation of virtual patients from CT images of cadavers to enhance integration of clinical and basic science student learning in anatomy</article-title>
<source>Med Teach</source>
<year>2009</year>
<volume>31</volume>
<issue>8</issue>
<fpage>749</fpage>
<lpage>751</lpage>
<pub-id pub-id-type="doi">10.1080/01421590903124757</pub-id>
<pub-id pub-id-type="pmid">19811213</pub-id>
</element-citation>
</ref>
<ref id="CR19">
<label>19.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hisley</surname>
<given-names>KC</given-names>
</name>
<name>
<surname>Anderson</surname>
<given-names>LD</given-names>
</name>
<name>
<surname>Smith</surname>
<given-names>SE</given-names>
</name>
<name>
<surname>Kavic</surname>
<given-names>SM</given-names>
</name>
<name>
<surname>Tracy</surname>
<given-names>JK</given-names>
</name>
</person-group>
<article-title>Coupled physical and digital cadaver dissection followed by a visual test protocol provides insights into the nature of anatomical knowledge and its evaluation</article-title>
<source>Anat Sci Educ</source>
<year>2008</year>
<volume>1</volume>
<issue>1</issue>
<fpage>27</fpage>
<lpage>40</lpage>
<pub-id pub-id-type="doi">10.1002/ase.4</pub-id>
<pub-id pub-id-type="pmid">19177376</pub-id>
</element-citation>
</ref>
<ref id="CR20">
<label>20.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Petersson</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Sinkvist</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Smedby</surname>
<given-names>O</given-names>
</name>
</person-group>
<article-title>Web-based interactive 3D visualization as a tool for improved anatomy learning</article-title>
<source>Anat Sci Educ</source>
<year>2009</year>
<volume>2</volume>
<issue>2</issue>
<fpage>61</fpage>
<lpage>68</lpage>
<pub-id pub-id-type="doi">10.1002/ase.76</pub-id>
<pub-id pub-id-type="pmid">19363804</pub-id>
</element-citation>
</ref>
<ref id="CR21">
<label>21.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Crossingham</surname>
<given-names>JL</given-names>
</name>
<name>
<surname>Jenkinson</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Woolridge</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Gallinger</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Tait</surname>
<given-names>GA</given-names>
</name>
<name>
<surname>Moulton</surname>
<given-names>CA</given-names>
</name>
</person-group>
<article-title>Interpreting three-dimensional structures from two-dimensional images: a web-based interactive 3D teaching model of surgical liver anatomy</article-title>
<source>HPB (Oxford)</source>
<year>2009</year>
<volume>11</volume>
<issue>6</issue>
<fpage>523</fpage>
<lpage>528</lpage>
<pub-id pub-id-type="doi">10.1111/j.1477-2574.2009.00097.x</pub-id>
<pub-id pub-id-type="pmid">19816618</pub-id>
</element-citation>
</ref>
<ref id="CR22">
<label>22.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rodt</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Burmeister</surname>
<given-names>HP</given-names>
</name>
<name>
<surname>Bartling</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Kaminsky</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Schwab</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Kikinis</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Backer</surname>
<given-names>H</given-names>
</name>
</person-group>
<article-title>3D-Visualisation of the middle ear by computer-assisted post-processing of helical multi-slice CT data</article-title>
<source>Laryngorhinootologie</source>
<year>2004</year>
<volume>83</volume>
<issue>7</issue>
<fpage>438</fpage>
<lpage>444</lpage>
<pub-id pub-id-type="doi">10.1055/s-2004-814370</pub-id>
<pub-id pub-id-type="pmid">15257492</pub-id>
</element-citation>
</ref>
<ref id="CR23">
<label>23.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gould</surname>
<given-names>DJ</given-names>
</name>
<name>
<surname>Terrell</surname>
<given-names>MA</given-names>
</name>
<name>
<surname>Fleming</surname>
<given-names>J</given-names>
</name>
</person-group>
<article-title>A usability study of users’ perceptions toward a multimedia computer-assisted learning tool for neuroanatomy</article-title>
<source>Anat Sci Educ</source>
<year>2008</year>
<volume>1</volume>
<issue>4</issue>
<fpage>175</fpage>
<lpage>183</lpage>
<pub-id pub-id-type="doi">10.1002/ase.36</pub-id>
<pub-id pub-id-type="pmid">19177405</pub-id>
</element-citation>
</ref>
<ref id="CR24">
<label>24.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Yip</surname>
<given-names>GW</given-names>
</name>
<name>
<surname>Rajendran</surname>
<given-names>K</given-names>
</name>
</person-group>
<article-title>SnapAnatomy, a computer-based interactive tool for independent learning of human anatomy</article-title>
<source>J Vis Commun Med</source>
<year>2008</year>
<volume>31</volume>
<issue>2</issue>
<fpage>46</fpage>
<lpage>50</lpage>
<pub-id pub-id-type="doi">10.1080/17453050802241548</pub-id>
<pub-id pub-id-type="pmid">18802832</pub-id>
</element-citation>
</ref>
<ref id="CR25">
<label>25.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Trelease</surname>
<given-names>RB</given-names>
</name>
<name>
<surname>Rosset</surname>
<given-names>A</given-names>
</name>
</person-group>
<article-title>Transforming clinical imaging data for virtual reality learning objects</article-title>
<source>Anat Sci Educ</source>
<year>2008</year>
<volume>1</volume>
<issue>2</issue>
<fpage>50</fpage>
<lpage>55</lpage>
<pub-id pub-id-type="doi">10.1002/ase.13</pub-id>
<pub-id pub-id-type="pmid">19177381</pub-id>
</element-citation>
</ref>
<ref id="CR26">
<label>26.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Nguyen</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Wilson</surname>
<given-names>TD</given-names>
</name>
</person-group>
<article-title>A head in virtual reality: development of a dynamic head and neck model</article-title>
<source>Anat Sci Educ</source>
<year>2009</year>
<volume>2</volume>
<issue>6</issue>
<fpage>294</fpage>
<lpage>301</lpage>
<pub-id pub-id-type="doi">10.1002/ase.115</pub-id>
<pub-id pub-id-type="pmid">19890983</pub-id>
</element-citation>
</ref>
<ref id="CR27">
<label>27.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Vazquez</surname>
<given-names>PP</given-names>
</name>
</person-group>
<article-title>An interactive 3D framework for anatomical education</article-title>
<source>Int J Comput-Assist Radiol Surg</source>
<year>2008</year>
<volume>3</volume>
<fpage>511</fpage>
<lpage>524</lpage>
<pub-id pub-id-type="doi">10.1007/s11548-008-0251-4</pub-id>
</element-citation>
</ref>
<ref id="CR28">
<label>28.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hariri</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Rawn</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Srivastava</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Youngblood</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Ladd</surname>
<given-names>A</given-names>
</name>
</person-group>
<article-title>Evaluation of a surgical simulator for learning clinical anatomy</article-title>
<source>Med Educ</source>
<year>2004</year>
<volume>38</volume>
<issue>8</issue>
<fpage>896</fpage>
<lpage>902</lpage>
<pub-id pub-id-type="doi">10.1111/j.1365-2929.2004.01897.x</pub-id>
<pub-id pub-id-type="pmid">15271051</pub-id>
</element-citation>
</ref>
<ref id="CR29">
<label>29.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Brenton</surname>
<given-names>H</given-names>
</name>
</person-group>
<article-title>Using multimedia and Web3D to enhance anatomy teaching</article-title>
<source>Comput Educ</source>
<year>2007</year>
<volume>49</volume>
<issue>1</issue>
<fpage>32</fpage>
<lpage>53</lpage>
<pub-id pub-id-type="doi">10.1016/j.compedu.2005.06.005</pub-id>
</element-citation>
</ref>
<ref id="CR30">
<label>30.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>McRackan</surname>
<given-names>TR</given-names>
</name>
<name>
<surname>Reda</surname>
<given-names>FA</given-names>
</name>
<name>
<surname>Rivas</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Noble</surname>
<given-names>JH</given-names>
</name>
<name>
<surname>Dietrich</surname>
<given-names>MS</given-names>
</name>
<name>
<surname>Dawant</surname>
<given-names>BM</given-names>
</name>
<name>
<surname>Labadie</surname>
<given-names>RF</given-names>
</name>
</person-group>
<article-title>Comparison of cochlear implant relevant anatomy in children versus adults</article-title>
<source>Otol Neurotol</source>
<year>2012</year>
<volume>33</volume>
<issue>3</issue>
<fpage>328</fpage>
<lpage>334</lpage>
<pub-id pub-id-type="doi">10.1097/MAO.0b013e318245cc9f</pub-id>
<pub-id pub-id-type="pmid">22377644</pub-id>
</element-citation>
</ref>
<ref id="CR31">
<label>31.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Reda</surname>
<given-names>FA</given-names>
</name>
<name>
<surname>Noble</surname>
<given-names>JH</given-names>
</name>
<name>
<surname>Rivas</surname>
<given-names>A</given-names>
</name>
<name>
<surname>McRackan</surname>
<given-names>TR</given-names>
</name>
<name>
<surname>Labadie</surname>
<given-names>RF</given-names>
</name>
<name>
<surname>Dawant</surname>
<given-names>BM</given-names>
</name>
</person-group>
<article-title>Automatic segmentation of the facial nerve and chorda tympani in pediatric CT scans</article-title>
<source>Med Phys</source>
<year>2011</year>
<volume>38</volume>
<issue>10</issue>
<fpage>5590</fpage>
<lpage>5600</lpage>
<pub-id pub-id-type="doi">10.1118/1.3634048</pub-id>
<pub-id pub-id-type="pmid">21992377</pub-id>
</element-citation>
</ref>
</ref-list>
</back>
</pmc>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Pmc/Curation
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000854 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Pmc/Curation/biblio.hfd -nk 000854 | SxmlIndent | more
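
As a small variant of the commands above (a minimal sketch, assuming the Dilib tools HfdSelect and SxmlIndent are on the PATH and WICRI_ROOT is set as in the first example), the indented record can be written to a file instead of paged with more; the output filename record_000854.xml is an arbitrary choice for illustration.

# Extract record 000854 from the Curation step and save the indented XML to a file
EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Pmc/Curation
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000854 | SxmlIndent > record_000854.xml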

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Pmc
   |étape=   Curation
   |type=    RBID
   |clé=     PMC:4193987
   |texte=   Gesture-controlled interactive three dimensional anatomy: a novel teaching tool in head and neck surgery
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Pmc/Curation/RBID.i   -Sk "pubmed:25286966" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Pmc/Curation/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024