Exploration server on the edentulous patient

Warning: this site is under development!
Warning: this site was generated by computational means from raw corpora.
The information has therefore not been validated.

Vision-based markerless registration using stereo vision and an augmented reality surgical navigation system: a pilot study

Internal identifier: 000B76 (Pmc/Corpus); previous: 000B75; next: 000B77


Auteurs : Hideyuki Suenaga ; Huy Hoang Tran ; Hongen Liao ; Ken Masamune ; Takeyoshi Dohi ; Kazuto Hoshi ; Tsuyoshi Takato

Source:

RBID: PMC:4630916

Abstract

Background

This study evaluated the use of an augmented reality navigation system that provides a markerless registration system using stereo vision in oral and maxillofacial surgery.

Method

A feasibility study was performed on a subject, wherein a stereo camera was used for tracking and markerless registration. The computed tomography data obtained from the volunteer was used to create an integral videography image and a 3-dimensional rapid prototype model of the jaw. The overlay of the subject’s anatomic site and its 3D-IV image were displayed in real space using a 3D-AR display. Extraction of characteristic points and teeth matching were done using parallax images from two stereo cameras for patient-image registration.

Results

Accurate registration of the volunteer’s anatomy with IV stereoscopic images via image matching was done using the fully automated markerless system, which recognized the incisal edges of the teeth and captured information pertaining to their position with an average target registration error of < 1 mm. These 3D-CT images were then displayed in real space with high accuracy using AR. Even when the viewing position was changed, the 3D images could be observed as if they were floating in real space without using special glasses.

Conclusion

Teeth were successfully used for registration via 3D image (contour) matching. This system, without using references or fiducial markers, displayed 3D-CT images in real space with high accuracy. The system provided real-time markerless registration and 3D image matching via stereo vision, which, combined with AR, could have significant clinical applications.

Electronic supplementary material

The online version of this article (doi:10.1186/s12880-015-0089-5) contains supplementary material, which is available to authorized users.


Url: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4630916
DOI: 10.1186/s12880-015-0089-5
PubMed: 26525142
PubMed Central: 4630916

Links to Exploration step

PMC:4630916

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Vision-based markerless registration using stereo vision and an augmented reality surgical navigation system: a pilot study</title>
<author>
<name sortKey="Suenaga, Hideyuki" sort="Suenaga, Hideyuki" uniqKey="Suenaga H" first="Hideyuki" last="Suenaga">Hideyuki Suenaga</name>
<affiliation>
<nlm:aff id="Aff1">Department of Oral-Maxillofacial Surgery, Dentistry and Orthodontics, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo ku, Tokyo, 113 8656 Japan</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Tran, Huy Hoang" sort="Tran, Huy Hoang" uniqKey="Tran H" first="Huy Hoang" last="Tran">Huy Hoang Tran</name>
<affiliation>
<nlm:aff id="Aff2">Department of Mechano-Informatics, Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Liao, Hongen" sort="Liao, Hongen" uniqKey="Liao H" first="Hongen" last="Liao">Hongen Liao</name>
<affiliation>
<nlm:aff id="Aff3">Department of Bioengineering, Graduate School of Engineering, The University of Tokyo, Tokyo, Japan</nlm:aff>
</affiliation>
<affiliation>
<nlm:aff id="Aff4">Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Masamune, Ken" sort="Masamune, Ken" uniqKey="Masamune K" first="Ken" last="Masamune">Ken Masamune</name>
<affiliation>
<nlm:aff id="Aff2">Department of Mechano-Informatics, Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan</nlm:aff>
</affiliation>
<affiliation>
<nlm:aff id="Aff5">Faculty of Advanced Technology and Surgery, Institute of Advanced Biomedical Engineering and Science, Tokyo Women’s Medical University, Tokyo, Japan</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Dohi, Takeyoshi" sort="Dohi, Takeyoshi" uniqKey="Dohi T" first="Takeyoshi" last="Dohi">Takeyoshi Dohi</name>
<affiliation>
<nlm:aff id="Aff6">Department of Mechanical Engineering, School of Engineering, Tokyo Denki University, Tokyo, Japan</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Hoshi, Kazuto" sort="Hoshi, Kazuto" uniqKey="Hoshi K" first="Kazuto" last="Hoshi">Kazuto Hoshi</name>
<affiliation>
<nlm:aff id="Aff1">Department of Oral-Maxillofacial Surgery, Dentistry and Orthodontics, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo ku, Tokyo, 113 8656 Japan</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Takato, Tsuyoshi" sort="Takato, Tsuyoshi" uniqKey="Takato T" first="Tsuyoshi" last="Takato">Tsuyoshi Takato</name>
<affiliation>
<nlm:aff id="Aff1">Department of Oral-Maxillofacial Surgery, Dentistry and Orthodontics, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo ku, Tokyo, 113 8656 Japan</nlm:aff>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">26525142</idno>
<idno type="pmc">4630916</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4630916</idno>
<idno type="RBID">PMC:4630916</idno>
<idno type="doi">10.1186/s12880-015-0089-5</idno>
<date when="2015">2015</date>
<idno type="wicri:Area/Pmc/Corpus">000B76</idno>
<idno type="wicri:explorRef" wicri:stream="Pmc" wicri:step="Corpus" wicri:corpus="PMC">000B76</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Vision-based markerless registration using stereo vision and an augmented reality surgical navigation system: a pilot study</title>
<author>
<name sortKey="Suenaga, Hideyuki" sort="Suenaga, Hideyuki" uniqKey="Suenaga H" first="Hideyuki" last="Suenaga">Hideyuki Suenaga</name>
<affiliation>
<nlm:aff id="Aff1">Department of Oral-Maxillofacial Surgery, Dentistry and Orthodontics, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo ku, Tokyo, 113 8656 Japan</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Tran, Huy Hoang" sort="Tran, Huy Hoang" uniqKey="Tran H" first="Huy Hoang" last="Tran">Huy Hoang Tran</name>
<affiliation>
<nlm:aff id="Aff2">Department of Mechano-Informatics, Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Liao, Hongen" sort="Liao, Hongen" uniqKey="Liao H" first="Hongen" last="Liao">Hongen Liao</name>
<affiliation>
<nlm:aff id="Aff3">Department of Bioengineering, Graduate School of Engineering, The University of Tokyo, Tokyo, Japan</nlm:aff>
</affiliation>
<affiliation>
<nlm:aff id="Aff4">Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Masamune, Ken" sort="Masamune, Ken" uniqKey="Masamune K" first="Ken" last="Masamune">Ken Masamune</name>
<affiliation>
<nlm:aff id="Aff2">Department of Mechano-Informatics, Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan</nlm:aff>
</affiliation>
<affiliation>
<nlm:aff id="Aff5">Faculty of Advanced Technology and Surgery, Institute of Advanced Biomedical Engineering and Science, Tokyo Women’s Medical University, Tokyo, Japan</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Dohi, Takeyoshi" sort="Dohi, Takeyoshi" uniqKey="Dohi T" first="Takeyoshi" last="Dohi">Takeyoshi Dohi</name>
<affiliation>
<nlm:aff id="Aff6">Department of Mechanical Engineering, School of Engineering, Tokyo Denki University, Tokyo, Japan</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Hoshi, Kazuto" sort="Hoshi, Kazuto" uniqKey="Hoshi K" first="Kazuto" last="Hoshi">Kazuto Hoshi</name>
<affiliation>
<nlm:aff id="Aff1">Department of Oral-Maxillofacial Surgery, Dentistry and Orthodontics, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo ku, Tokyo, 113 8656 Japan</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Takato, Tsuyoshi" sort="Takato, Tsuyoshi" uniqKey="Takato T" first="Tsuyoshi" last="Takato">Tsuyoshi Takato</name>
<affiliation>
<nlm:aff id="Aff1">Department of Oral-Maxillofacial Surgery, Dentistry and Orthodontics, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo ku, Tokyo, 113 8656 Japan</nlm:aff>
</affiliation>
</author>
</analytic>
<series>
<title level="j">BMC Medical Imaging</title>
<idno type="eISSN">1471-2342</idno>
<imprint>
<date when="2015">2015</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<sec>
<title>Background</title>
<p>This study evaluated the use of an augmented reality navigation system that provides a markerless registration system using stereo vision in oral and maxillofacial surgery.</p>
</sec>
<sec>
<title>Method</title>
<p>A feasibility study was performed on a subject, wherein a stereo camera was used for tracking and markerless registration. The computed tomography data obtained from the volunteer was used to create an integral videography image and a 3-dimensional rapid prototype model of the jaw. The overlay of the subject’s anatomic site and its 3D-IV image were displayed in real space using a 3D-AR display. Extraction of characteristic points and teeth matching were done using parallax images from two stereo cameras for patient-image registration.</p>
</sec>
<sec>
<title>Results</title>
<p>Accurate registration of the volunteer’s anatomy with IV stereoscopic images via image matching was done using the fully automated markerless system, which recognized the incisal edges of the teeth and captured information pertaining to their position with an average target registration error of < 1 mm. These 3D-CT images were then displayed in real space with high accuracy using AR. Even when the viewing position was changed, the 3D images could be observed as if they were floating in real space without using special glasses.</p>
</sec>
<sec>
<title>Conclusion</title>
<p>Teeth were successfully used for registration via 3D image (contour) matching. This system, without using references or fiducial markers, displayed 3D-CT images in real space with high accuracy. The system provided real-time markerless registration and 3D image matching via stereo vision, which, combined with AR, could have significant clinical applications.</p>
</sec>
<sec>
<title>Electronic supplementary material</title>
<p>The online version of this article (doi:10.1186/s12880-015-0089-5) contains supplementary material, which is available to authorized users.</p>
</sec>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Lovo, Ee" uniqKey="Lovo E">EE Lovo</name>
</author>
<author>
<name sortKey="Quintana, Jc" uniqKey="Quintana J">JC Quintana</name>
</author>
<author>
<name sortKey="Puebla, Mc" uniqKey="Puebla M">MC Puebla</name>
</author>
<author>
<name sortKey="Torrealba, G" uniqKey="Torrealba G">G Torrealba</name>
</author>
<author>
<name sortKey="Santos, Jl" uniqKey="Santos J">JL Santos</name>
</author>
<author>
<name sortKey="Lira, Ih" uniqKey="Lira I">IH Lira</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sielhorst, T" uniqKey="Sielhorst T">T Sielhorst</name>
</author>
<author>
<name sortKey="Bichlmeier, C" uniqKey="Bichlmeier C">C Bichlmeier</name>
</author>
<author>
<name sortKey="Heining, Sm" uniqKey="Heining S">SM Heining</name>
</author>
<author>
<name sortKey="Navab, N" uniqKey="Navab N">N Navab</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kang, X" uniqKey="Kang X">X Kang</name>
</author>
<author>
<name sortKey="Azizian, M" uniqKey="Azizian M">M Azizian</name>
</author>
<author>
<name sortKey="Wilson, E" uniqKey="Wilson E">E Wilson</name>
</author>
<author>
<name sortKey="Wu, K" uniqKey="Wu K">K Wu</name>
</author>
<author>
<name sortKey="Martin, Ad" uniqKey="Martin A">AD Martin</name>
</author>
<author>
<name sortKey="Kane, Td" uniqKey="Kane T">TD Kane</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Volonte, F" uniqKey="Volonte F">F Volonte</name>
</author>
<author>
<name sortKey="Pugin, F" uniqKey="Pugin F">F Pugin</name>
</author>
<author>
<name sortKey="Bucher, P" uniqKey="Bucher P">P Bucher</name>
</author>
<author>
<name sortKey="Sugimoto, M" uniqKey="Sugimoto M">M Sugimoto</name>
</author>
<author>
<name sortKey="Ratib, O" uniqKey="Ratib O">O Ratib</name>
</author>
<author>
<name sortKey="Morel, P" uniqKey="Morel P">P Morel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fritz, J" uniqKey="Fritz J">J Fritz</name>
</author>
<author>
<name sortKey="Thainual, P" uniqKey="Thainual P">P Thainual</name>
</author>
<author>
<name sortKey="Ungi, T" uniqKey="Ungi T">T Ungi</name>
</author>
<author>
<name sortKey="Flammang, Aj" uniqKey="Flammang A">AJ Flammang</name>
</author>
<author>
<name sortKey="Cho, Nb" uniqKey="Cho N">NB Cho</name>
</author>
<author>
<name sortKey="Fichtinger, G" uniqKey="Fichtinger G">G Fichtinger</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mahvash, M" uniqKey="Mahvash M">M Mahvash</name>
</author>
<author>
<name sortKey="Besharati, Tl" uniqKey="Besharati T">TL Besharati</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Puerto Souza, G" uniqKey="Puerto Souza G">G Puerto-Souza</name>
</author>
<author>
<name sortKey="Cadeddu, J" uniqKey="Cadeddu J">J Cadeddu</name>
</author>
<author>
<name sortKey="Mariottini, Gl" uniqKey="Mariottini G">GL Mariottini</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kersten Oertel, M" uniqKey="Kersten Oertel M">M Kersten-Oertel</name>
</author>
<author>
<name sortKey="Chen, Ss" uniqKey="Chen S">SS Chen</name>
</author>
<author>
<name sortKey="Drouin, S" uniqKey="Drouin S">S Drouin</name>
</author>
<author>
<name sortKey="Sinclair, Ds" uniqKey="Sinclair D">DS Sinclair</name>
</author>
<author>
<name sortKey="Collins, Dl" uniqKey="Collins D">DL Collins</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Matsumoto, T" uniqKey="Matsumoto T">T Matsumoto</name>
</author>
<author>
<name sortKey="Kubo, S" uniqKey="Kubo S">S Kubo</name>
</author>
<author>
<name sortKey="Muratsu, H" uniqKey="Muratsu H">H Muratsu</name>
</author>
<author>
<name sortKey="Tsumura, N" uniqKey="Tsumura N">N Tsumura</name>
</author>
<author>
<name sortKey="Ishida, K" uniqKey="Ishida K">K Ishida</name>
</author>
<author>
<name sortKey="Matsushita, T" uniqKey="Matsushita T">T Matsushita</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Yokoyama, Y" uniqKey="Yokoyama Y">Y Yokoyama</name>
</author>
<author>
<name sortKey="Abe, N" uniqKey="Abe N">N Abe</name>
</author>
<author>
<name sortKey="Fujiwara, K" uniqKey="Fujiwara K">K Fujiwara</name>
</author>
<author>
<name sortKey="Suzuki, M" uniqKey="Suzuki M">M Suzuki</name>
</author>
<author>
<name sortKey="Nakajima, Y" uniqKey="Nakajima Y">Y Nakajima</name>
</author>
<author>
<name sortKey="Sugita, N" uniqKey="Sugita N">N Sugita</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Suzuki, N" uniqKey="Suzuki N">N Suzuki</name>
</author>
<author>
<name sortKey="Hattori, A" uniqKey="Hattori A">A Hattori</name>
</author>
<author>
<name sortKey="Iimura, J" uniqKey="Iimura J">J Iimura</name>
</author>
<author>
<name sortKey="Otori, N" uniqKey="Otori N">N Otori</name>
</author>
<author>
<name sortKey="Onda, S" uniqKey="Onda S">S Onda</name>
</author>
<author>
<name sortKey="Okamoto, T" uniqKey="Okamoto T">T Okamoto</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Badiali, G" uniqKey="Badiali G">G Badiali</name>
</author>
<author>
<name sortKey="Ferrari, V" uniqKey="Ferrari V">V Ferrari</name>
</author>
<author>
<name sortKey="Cutolo, F" uniqKey="Cutolo F">F Cutolo</name>
</author>
<author>
<name sortKey="Freschi, C" uniqKey="Freschi C">C Freschi</name>
</author>
<author>
<name sortKey="Caramella, D" uniqKey="Caramella D">D Caramella</name>
</author>
<author>
<name sortKey="Bianchi, A" uniqKey="Bianchi A">A Bianchi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nijmeh, Ad" uniqKey="Nijmeh A">AD Nijmeh</name>
</author>
<author>
<name sortKey="Goodger, Nm" uniqKey="Goodger N">NM Goodger</name>
</author>
<author>
<name sortKey="Hawkes, D" uniqKey="Hawkes D">D Hawkes</name>
</author>
<author>
<name sortKey="Edwards, Pj" uniqKey="Edwards P">PJ Edwards</name>
</author>
<author>
<name sortKey="Mcgurk, M" uniqKey="Mcgurk M">M McGurk</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wang, J" uniqKey="Wang J">J Wang</name>
</author>
<author>
<name sortKey="Suenaga, H" uniqKey="Suenaga H">H Suenaga</name>
</author>
<author>
<name sortKey="Yang, L" uniqKey="Yang L">L Yang</name>
</author>
<author>
<name sortKey="Liao, H" uniqKey="Liao H">H Liao</name>
</author>
<author>
<name sortKey="Kobayashi, E" uniqKey="Kobayashi E">E Kobayashi</name>
</author>
<author>
<name sortKey="Takato, T" uniqKey="Takato T">T Takato</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wang, J" uniqKey="Wang J">J Wang</name>
</author>
<author>
<name sortKey="Suenaga, H" uniqKey="Suenaga H">H Suenaga</name>
</author>
<author>
<name sortKey="Hoshi, K" uniqKey="Hoshi K">K Hoshi</name>
</author>
<author>
<name sortKey="Yang, L" uniqKey="Yang L">L Yang</name>
</author>
<author>
<name sortKey="Kobayashi, E" uniqKey="Kobayashi E">E Kobayashi</name>
</author>
<author>
<name sortKey="Sakuma, I" uniqKey="Sakuma I">I Sakuma</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lin, Yk" uniqKey="Lin Y">YK Lin</name>
</author>
<author>
<name sortKey="Yau, Ht" uniqKey="Yau H">HT Yau</name>
</author>
<author>
<name sortKey="Wang, Ic" uniqKey="Wang I">IC Wang</name>
</author>
<author>
<name sortKey="Zheng, C" uniqKey="Zheng C">C Zheng</name>
</author>
<author>
<name sortKey="Chung, Kh" uniqKey="Chung K">KH Chung</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kang, Sh" uniqKey="Kang S">SH Kang</name>
</author>
<author>
<name sortKey="Kim, Mk" uniqKey="Kim M">MK Kim</name>
</author>
<author>
<name sortKey="Kim, Jh" uniqKey="Kim J">JH Kim</name>
</author>
<author>
<name sortKey="Park, Hk" uniqKey="Park H">HK Park</name>
</author>
<author>
<name sortKey="Lee, Sh" uniqKey="Lee S">SH Lee</name>
</author>
<author>
<name sortKey="Park, W" uniqKey="Park W">W Park</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sun, Y" uniqKey="Sun Y">Y Sun</name>
</author>
<author>
<name sortKey="Luebbers, Ht" uniqKey="Luebbers H">HT Luebbers</name>
</author>
<author>
<name sortKey="Agbaje, Jo" uniqKey="Agbaje J">JO Agbaje</name>
</author>
<author>
<name sortKey="Schepers, S" uniqKey="Schepers S">S Schepers</name>
</author>
<author>
<name sortKey="Vrielinck, L" uniqKey="Vrielinck L">L Vrielinck</name>
</author>
<author>
<name sortKey="Lambrichts, I" uniqKey="Lambrichts I">I Lambrichts</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Liao, H" uniqKey="Liao H">H Liao</name>
</author>
<author>
<name sortKey="Hata, N" uniqKey="Hata N">N Hata</name>
</author>
<author>
<name sortKey="Nakajima, S" uniqKey="Nakajima S">S Nakajima</name>
</author>
<author>
<name sortKey="Iwahara, M" uniqKey="Iwahara M">M Iwahara</name>
</author>
<author>
<name sortKey="Sakuma, I" uniqKey="Sakuma I">I Sakuma</name>
</author>
<author>
<name sortKey="Dohi, T" uniqKey="Dohi T">T Dohi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Liao, H" uniqKey="Liao H">H Liao</name>
</author>
<author>
<name sortKey="Ishihara, H" uniqKey="Ishihara H">H Ishihara</name>
</author>
<author>
<name sortKey="Tran, Hh" uniqKey="Tran H">HH Tran</name>
</author>
<author>
<name sortKey="Masamune, K" uniqKey="Masamune K">K Masamune</name>
</author>
<author>
<name sortKey="Sakuma, I" uniqKey="Sakuma I">I Sakuma</name>
</author>
<author>
<name sortKey="Dohi, T" uniqKey="Dohi T">T Dohi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tran, Hh" uniqKey="Tran H">HH Tran</name>
</author>
<author>
<name sortKey="Matsumiya, K" uniqKey="Matsumiya K">K Matsumiya</name>
</author>
<author>
<name sortKey="Masamune, K" uniqKey="Masamune K">K Masamune</name>
</author>
<author>
<name sortKey="Sakuma, I" uniqKey="Sakuma I">I Sakuma</name>
</author>
<author>
<name sortKey="Dohi, T" uniqKey="Dohi T">T Dohi</name>
</author>
<author>
<name sortKey="Liao, H" uniqKey="Liao H">H Liao</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Suenaga, H" uniqKey="Suenaga H">H Suenaga</name>
</author>
<author>
<name sortKey="Hoang Tran, H" uniqKey="Hoang Tran H">H Hoang Tran</name>
</author>
<author>
<name sortKey="Liao, H" uniqKey="Liao H">H Liao</name>
</author>
<author>
<name sortKey="Masamune, K" uniqKey="Masamune K">K Masamune</name>
</author>
<author>
<name sortKey="Dohi, T" uniqKey="Dohi T">T Dohi</name>
</author>
<author>
<name sortKey="Hoshi, K" uniqKey="Hoshi K">K Hoshi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wang, J" uniqKey="Wang J">J Wang</name>
</author>
<author>
<name sortKey="Suenaga, H" uniqKey="Suenaga H">H Suenaga</name>
</author>
<author>
<name sortKey="Liao, H" uniqKey="Liao H">H Liao</name>
</author>
<author>
<name sortKey="Hoshi, K" uniqKey="Hoshi K">K Hoshi</name>
</author>
<author>
<name sortKey="Yang, L" uniqKey="Yang L">L Yang</name>
</author>
<author>
<name sortKey="Kobayashi, E" uniqKey="Kobayashi E">E Kobayashi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Widmann, G" uniqKey="Widmann G">G Widmann</name>
</author>
<author>
<name sortKey="Stoffner, R" uniqKey="Stoffner R">R Stoffner</name>
</author>
<author>
<name sortKey="Bale, R" uniqKey="Bale R">R Bale</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Noh, H" uniqKey="Noh H">H Noh</name>
</author>
<author>
<name sortKey="Nabha, W" uniqKey="Nabha W">W Nabha</name>
</author>
<author>
<name sortKey="Cho, Jh" uniqKey="Cho J">JH Cho</name>
</author>
<author>
<name sortKey="Hwang, Hs" uniqKey="Hwang H">HS Hwang</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fitzpatrick, Jm" uniqKey="Fitzpatrick J">JM Fitzpatrick</name>
</author>
<author>
<name sortKey="West, Jb" uniqKey="West J">JB West</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Casap, N" uniqKey="Casap N">N Casap</name>
</author>
<author>
<name sortKey="Wexler, A" uniqKey="Wexler A">A Wexler</name>
</author>
<author>
<name sortKey="Eliashar, R" uniqKey="Eliashar R">R Eliashar</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Eggers, G" uniqKey="Eggers G">G Eggers</name>
</author>
<author>
<name sortKey="Kress, B" uniqKey="Kress B">B Kress</name>
</author>
<author>
<name sortKey="Muhling, J" uniqKey="Muhling J">J Muhling</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zhu, M" uniqKey="Zhu M">M Zhu</name>
</author>
<author>
<name sortKey="Chai, G" uniqKey="Chai G">G Chai</name>
</author>
<author>
<name sortKey="Zhang, Y" uniqKey="Zhang Y">Y Zhang</name>
</author>
<author>
<name sortKey="Ma, X" uniqKey="Ma X">X Ma</name>
</author>
<author>
<name sortKey="Gan, J" uniqKey="Gan J">J Gan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kang, Sh" uniqKey="Kang S">SH Kang</name>
</author>
<author>
<name sortKey="Kim, Mk" uniqKey="Kim M">MK Kim</name>
</author>
<author>
<name sortKey="Kim, Jh" uniqKey="Kim J">JH Kim</name>
</author>
<author>
<name sortKey="Park, Hk" uniqKey="Park H">HK Park</name>
</author>
<author>
<name sortKey="Park, W" uniqKey="Park W">W Park</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bouchard, C" uniqKey="Bouchard C">C Bouchard</name>
</author>
<author>
<name sortKey="Magill, Jc" uniqKey="Magill J">JC Magill</name>
</author>
<author>
<name sortKey="Nikonovskiy, V" uniqKey="Nikonovskiy V">V Nikonovskiy</name>
</author>
<author>
<name sortKey="Byl, M" uniqKey="Byl M">M Byl</name>
</author>
<author>
<name sortKey="Murphy, Ba" uniqKey="Murphy B">BA Murphy</name>
</author>
<author>
<name sortKey="Kaban, Lb" uniqKey="Kaban L">LB Kaban</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Marmulla, R" uniqKey="Marmulla R">R Marmulla</name>
</author>
<author>
<name sortKey="Luth, T" uniqKey="Luth T">T Luth</name>
</author>
<author>
<name sortKey="Muhling, J" uniqKey="Muhling J">J Muhling</name>
</author>
<author>
<name sortKey="Hassfeld, S" uniqKey="Hassfeld S">S Hassfeld</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Shamir, Rr" uniqKey="Shamir R">RR Shamir</name>
</author>
<author>
<name sortKey="Joskowicz, L" uniqKey="Joskowicz L">L Joskowicz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Khadem, R" uniqKey="Khadem R">R Khadem</name>
</author>
<author>
<name sortKey="Yeh, Cc" uniqKey="Yeh C">CC Yeh</name>
</author>
<author>
<name sortKey="Sadeghi Tehrani, M" uniqKey="Sadeghi Tehrani M">M Sadeghi-Tehrani</name>
</author>
<author>
<name sortKey="Bax, Mr" uniqKey="Bax M">MR Bax</name>
</author>
<author>
<name sortKey="Johnson, Ja" uniqKey="Johnson J">JA Johnson</name>
</author>
<author>
<name sortKey="Welch, Jn" uniqKey="Welch J">JN Welch</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">BMC Med Imaging</journal-id>
<journal-id journal-id-type="iso-abbrev">BMC Med Imaging</journal-id>
<journal-title-group>
<journal-title>BMC Medical Imaging</journal-title>
</journal-title-group>
<issn pub-type="epub">1471-2342</issn>
<publisher>
<publisher-name>BioMed Central</publisher-name>
<publisher-loc>London</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">26525142</article-id>
<article-id pub-id-type="pmc">4630916</article-id>
<article-id pub-id-type="publisher-id">89</article-id>
<article-id pub-id-type="doi">10.1186/s12880-015-0089-5</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Research Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Vision-based markerless registration using stereo vision and an augmented reality surgical navigation system: a pilot study</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name>
<surname>Suenaga</surname>
<given-names>Hideyuki</given-names>
</name>
<address>
<phone>+81 3 5800 8669</phone>
<email>suenaga-tky@umin.ac.jp</email>
</address>
<xref ref-type="aff" rid="Aff1"></xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Tran</surname>
<given-names>Huy Hoang</given-names>
</name>
<address>
<email>h2.tran@yahoo.com</email>
</address>
<xref ref-type="aff" rid="Aff2"></xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Liao</surname>
<given-names>Hongen</given-names>
</name>
<address>
<email>liao@tsinghua.edu.cn</email>
</address>
<xref ref-type="aff" rid="Aff3"></xref>
<xref ref-type="aff" rid="Aff4"></xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Masamune</surname>
<given-names>Ken</given-names>
</name>
<address>
<email>masamune.ken@twmu.ac.jp</email>
</address>
<xref ref-type="aff" rid="Aff2"></xref>
<xref ref-type="aff" rid="Aff5"></xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Dohi</surname>
<given-names>Takeyoshi</given-names>
</name>
<address>
<email>take14-dohi82@mail.dendai.ac.jp</email>
</address>
<xref ref-type="aff" rid="Aff6"></xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Hoshi</surname>
<given-names>Kazuto</given-names>
</name>
<address>
<email>pochi-tky@umin.net</email>
</address>
<xref ref-type="aff" rid="Aff1"></xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Takato</surname>
<given-names>Tsuyoshi</given-names>
</name>
<address>
<email>takato-ora@h.u-tokyo.ac.jp</email>
</address>
<xref ref-type="aff" rid="Aff1"></xref>
</contrib>
<aff id="Aff1">
<label></label>
Department of Oral-Maxillofacial Surgery, Dentistry and Orthodontics, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo ku, Tokyo, 113 8656 Japan</aff>
<aff id="Aff2">
<label></label>
Department of Mechano-Informatics, Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan</aff>
<aff id="Aff3">
<label></label>
Department of Bioengineering, Graduate School of Engineering, The University of Tokyo, Tokyo, Japan</aff>
<aff id="Aff4">
<label></label>
Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China</aff>
<aff id="Aff5">
<label></label>
Faculty of Advanced Technology and Surgery, Institute of Advanced Biomedical Engineering and Science, Tokyo Women’s Medical University, Tokyo, Japan</aff>
<aff id="Aff6">
<label></label>
Department of Mechanical Engineering, School of Engineering, Tokyo Denki University, Tokyo, Japan</aff>
</contrib-group>
<pub-date pub-type="epub">
<day>2</day>
<month>11</month>
<year>2015</year>
</pub-date>
<pub-date pub-type="pmc-release">
<day>2</day>
<month>11</month>
<year>2015</year>
</pub-date>
<pub-date pub-type="collection">
<year>2015</year>
</pub-date>
<volume>15</volume>
<elocation-id>51</elocation-id>
<history>
<date date-type="received">
<day>23</day>
<month>4</month>
<year>2015</year>
</date>
<date date-type="accepted">
<day>9</day>
<month>10</month>
<year>2015</year>
</date>
</history>
<permissions>
<copyright-statement>© Suenaga et al. 2015</copyright-statement>
<license license-type="OpenAccess">
<license-p>
<bold>Open Access</bold>
This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (
<ext-link ext-link-type="uri" xlink:href="http://creativecommons.org/licenses/by/4.0/">http://creativecommons.org/licenses/by/4.0/</ext-link>
), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (
<ext-link ext-link-type="uri" xlink:href="http://creativecommons.org/publicdomain/zero/1.0/">http://creativecommons.org/publicdomain/zero/1.0/</ext-link>
) applies to the data made available in this article, unless otherwise stated.</license-p>
</license>
</permissions>
<abstract id="Abs1">
<sec>
<title>Background</title>
<p>This study evaluated the use of an augmented reality navigation system that provides a markerless registration system using stereo vision in oral and maxillofacial surgery.</p>
</sec>
<sec>
<title>Method</title>
<p>A feasibility study was performed on a subject, wherein a stereo camera was used for tracking and markerless registration. The computed tomography data obtained from the volunteer was used to create an integral videography image and a 3-dimensional rapid prototype model of the jaw. The overlay of the subject’s anatomic site and its 3D-IV image were displayed in real space using a 3D-AR display. Extraction of characteristic points and teeth matching were done using parallax images from two stereo cameras for patient-image registration.</p>
</sec>
<sec>
<title>Results</title>
<p>Accurate registration of the volunteer’s anatomy with IV stereoscopic images via image matching was done using the fully automated markerless system, which recognized the incisal edges of the teeth and captured information pertaining to their position with an average target registration error of < 1 mm. These 3D-CT images were then displayed in real space with high accuracy using AR. Even when the viewing position was changed, the 3D images could be observed as if they were floating in real space without using special glasses.</p>
</sec>
<sec>
<title>Conclusion</title>
<p>Teeth were successfully used for registration via 3D image (contour) matching. This system, without using references or fiducial markers, displayed 3D-CT images in real space with high accuracy. The system provided real-time markerless registration and 3D image matching via stereo vision, which, combined with AR, could have significant clinical applications.</p>
</sec>
<sec>
<title>Electronic supplementary material</title>
<p>The online version of this article (doi:10.1186/s12880-015-0089-5) contains supplementary material, which is available to authorized users.</p>
</sec>
</abstract>
<kwd-group xml:lang="en">
<title>Keywords</title>
<kwd>Augmented reality</kwd>
<kwd>Integral videography</kwd>
<kwd>Markerless registration</kwd>
<kwd>Stereo vision</kwd>
<kwd>Three-dimensional image</kwd>
</kwd-group>
<custom-meta-group>
<custom-meta>
<meta-name>issue-copyright-statement</meta-name>
<meta-value>© The Author(s) 2015</meta-value>
</custom-meta>
</custom-meta-group>
</article-meta>
</front>
<body>
<sec id="Sec1" sec-type="introduction">
<title>Background</title>
<p>Augmented reality (AR) involves the co-display of a virtual image and a real-time image so that the user is able to utilize and interact with the components of both images simultaneously [
<xref ref-type="bibr" rid="CR1">1</xref>
]. This image-based navigation facilitates
<italic>in situ</italic>
visualization during surgical procedures [
<xref ref-type="bibr" rid="CR2">2</xref>
] because visual cues obtained from a preoperative radiological virtual image can enhance visualization of surgical anatomy [
<xref ref-type="bibr" rid="CR3">3</xref>
], thus improving preoperative planning and supporting the surgeon’s skill by simplifying the anatomical approach to complex procedures [
<xref ref-type="bibr" rid="CR4">4</xref>
]. In recent years, the technical application of AR has been studied in the context of various clinical applications. Examples of recent research which has sought to determine how the application of AR may lead to improvements in medical outcomes have included a study examining the use of AR to improve the precision of minimally invasive laparoscopic surgeries [
<xref ref-type="bibr" rid="CR3">3</xref>
]; comparison between planned and actual needle locations in MRI-guided lumbar spinal injection procedures [
<xref ref-type="bibr" rid="CR5">5</xref>
]; and studies examining the application of AR for image-guided neurosurgery for brain tumors [
<xref ref-type="bibr" rid="CR6">6</xref>
], for the overlay of preoperative radiological 3-dimensional (3D) models onto the intraoperative laparoscopic videos [
<xref ref-type="bibr" rid="CR7">7</xref>
]; and to facilitate vessel localization in neurovascular surgery [
<xref ref-type="bibr" rid="CR8">8</xref>
]. AR also has potential as an aid to surgical teaching [
<xref ref-type="bibr" rid="CR4">4</xref>
]. Furthermore, CT-free navigation systems, which do not rely on pre-procedure image acquisition but instead intra-operatively recognize the position and orientation of defined patient features, are also being evaluated [
<xref ref-type="bibr" rid="CR9">9</xref>
,
<xref ref-type="bibr" rid="CR10">10</xref>
]. AR has the potential to increase the surgeon’s visual awareness of high-risk surgical targets [
<xref ref-type="bibr" rid="CR7">7</xref>
] and to improve the surgeon’s intuitive grasp of the structures within the operational fields [
<xref ref-type="bibr" rid="CR11">11</xref>
].</p>
<p>Similarly, there are an increasing number of studies examining the potential use of image-guided systems for oral and maxillofacial surgeries (OMS) [
<xref ref-type="bibr" rid="CR12">12</xref>
,
<xref ref-type="bibr" rid="CR13">13</xref>
]. Patient or image registration (overlay) is key to associating the surgical field with its virtual counterpart [
<xref ref-type="bibr" rid="CR14">14</xref>
,
<xref ref-type="bibr" rid="CR15">15</xref>
]. The disadvantages of the current navigation systems used in OMS include bulky optical trackers, the lower accuracy of electromagnetic trackers in locating surgical instruments, invasive and error-prone image registration procedures, and the need for an additional reference marker to track patient movement [
<xref ref-type="bibr" rid="CR16">16</xref>
]. In addition, errors related to the position, angle, distance, and vibration of the optical tracker, the reference frame, and the probe tip of the equipment are high. With anatomical landmark-based registration, each observer is prone to human error arising from personal preference in the choice of anatomical landmarks in the surgical field [
<xref ref-type="bibr" rid="CR17">17</xref>
,
<xref ref-type="bibr" rid="CR18">18</xref>
]. Moreover, frequent hand-eye transformation, which corrects the displacement between the probe tip and the image reference frame, is required for constant comparison between the surgical field and the displayed image. Furthermore, images in real space are projected using a 3D display via binocular stereopsis, with the disadvantage that the observed view does not change with viewing position, since only relative depth is recognized; accurate 3D positioning therefore cannot be reproduced because motion parallax is absent. Head-mounted displays and head-mounted operating microscopes with stereoscopic vision have often been used for AR visualization in the medical field. However, such video see-through devices provide two views that present only horizontal parallax instead of full parallax. Projector-based AR visualization is appropriate for large operative-field overlays; however, it lacks depth perception. As described in our previous study, we have developed an autostereoscopic 3D image overlay using a translucent mirror [
<xref ref-type="bibr" rid="CR15">15</xref>
]. The integral videography (IV) principle applied in this study differs from binocular stereopsis, and allows both binocular parallaxes for depth perception and motion parallax, wherein depth cues are recognized even if the observer is in motion [
<xref ref-type="bibr" rid="CR15">15</xref>
,
<xref ref-type="bibr" rid="CR19">19</xref>
<xref ref-type="bibr" rid="CR21">21</xref>
]. Results from our previous research have shown that the 3D AR system using integral video-graphic images is a highly effective and accurate tool for surgical navigation in OMS [
<xref ref-type="bibr" rid="CR22">22</xref>
].</p>
<p>To overcome the challenges of image-guided OMS, we developed a simplified AR navigation system that provides automatic markerless image registration using real-time autostereoscopic 3D (IV) imaging and stereo vision for dental surgery. Patient-image registration achieved by patient tracking via contour matching has been previously described [
<xref ref-type="bibr" rid="CR14">14</xref>
]. The current study evaluated the feasibility of using a combination of AR and stereo vision technologies to project IV images obtained from preoperative CT data onto the actual surgical site in real time, with automatic markerless registration, in a clinical setting; the feasibility study was performed on a volunteer. This study therefore proposes the use of this simplified image-guided AR technology for superimposing a region-specific 3D image of the jaw bone on the actual surgical site in real time. The technology can aid the surgical treatment of structures that occupy fixed spatial positions but are not directly observable.</p>
</sec>
<sec id="Sec2" sec-type="materials|methods">
<title>Methods</title>
<p>The apparatus for the entire system comprised a 3D stereo camera and the 3D-IV imaging system, as shown in Fig. 
<xref rid="Fig1" ref-type="fig">1a</xref>
. We used two computer systems: one to track the surgical procedure using stereo vision and the other to generate 3D-IV images for a projected overlay. The study was conducted in accordance with Good Clinical Practice (GCP) guidelines and the Declaration of Helsinki, and the study protocol was approved by the medical ethics committee of the Graduate School of Medicine of the University of Tokyo, Japan. Written informed consent was provided by the volunteer prior to study initiation.
<fig id="Fig1">
<label>Fig. 1</label>
<caption>
<p>The physical setup of the system.
<bold>a</bold>
The configuration of the markerless surgical navigation system based on stereo vision and augmented reality, and
<bold>b</bold>
a 3D rapid prototyping model</p>
</caption>
<graphic xlink:href="12880_2015_89_Fig1_HTML" id="MO1"></graphic>
</fig>
</p>
<sec id="Sec3">
<title>Generation of 3D-IV images</title>
<p>The 3D-IV images to be projected onto the surgical site were generated from CT images of the jaw bones using an Aquilion ONE™ (Toshiba, Tokyo, Japan) 320-row area detector CT scanner and from images of the teeth using a Rexcan DS2 3D scanner (Solutionix, Seoul, Korea). Conditions for the area detector CT (ADCT) scan were: tube voltage, 120 kV; tube current, 270 mA; and slice thickness, 0.5 mm. Thus, the IV image generated from the preoperative CT data was a “real” 3D representation of the jaws. Next, 3D surface models of the upper and lower jawbones were generated using Mimics® Version 16 (Materialise, Leuven, Belgium) and Geomagic Control (Geomagic, Cary, NC, USA) medical image-processing software. Similarly, the 3D scanned images of a dental plaster model were recorded onto a CT image using the Rexcan DS2 3D scanner. Briefly, the IV image of the 3D CT was constructed as an assembly of reconstructed light sources. The complete 3D-IV image was displayed directly onto the surgical site using a half-silvered mirror (Fig. 
<xref rid="Fig1" ref-type="fig">1a</xref>
), which made the 3D image appear to be inside the patient's body and allowed it to be viewed directly without special glasses. Technical details for the generation of the 3D-IV images have been described in previous publications [
<xref ref-type="bibr" rid="CR15">15</xref>
,
<xref ref-type="bibr" rid="CR19">19</xref>
,
<xref ref-type="bibr" rid="CR20">20</xref>
,
<xref ref-type="bibr" rid="CR22">22</xref>
,
<xref ref-type="bibr" rid="CR23">23</xref>
]. Each point, shown in a 3D space, was reconstructed at the same position as the actual object by the convergence of rays from the pixels of the element images on the computer display after they pass through the lenses in a microconvex lens array. The surgeon can see any point on the display from various directions, as though it were fixed in 3D space. Each point appears as a different light source. The system was able to render IV images at a rate of 5 frames per second. For the IV image to be displayed in the correct position, the coordinates of the preoperative model obtained from CT data are registered intra-operatively with the contour derived from the 3D scanner image of the subject in real space. The triangle mesh model of the teeth is created using the marching cubes algorithm and is rendered by OpenGL.</p>
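The paper names the marching cubes algorithm for building the triangle mesh of the teeth but gives no code (the actual pipeline used Mimics and Geomagic). As a rough illustration only, the following Python sketch extracts an iso-surface mesh from a CT volume with scikit-image; the HU threshold, spacing, and function name `ct_to_mesh` are hypothetical choices, not values from the study.

```python
# Hypothetical sketch: triangle mesh of dense (tooth/bone-like) structures
# from a CT volume via marching cubes. Threshold and names are illustrative.
import numpy as np
from skimage import measure

def ct_to_mesh(ct_volume, voxel_spacing=(0.5, 0.5, 0.5), threshold_hu=1200):
    """Return vertices (mm) and triangle faces for an iso-surface of the CT."""
    # marching_cubes returns vertices in voxel coordinates; 'spacing'
    # rescales them to millimetres using the CT slice/pixel spacing.
    verts, faces, normals, _ = measure.marching_cubes(
        ct_volume, level=threshold_hu, spacing=voxel_spacing)
    return verts, faces

# Example: a synthetic volume standing in for ADCT data (0.5-mm slices).
volume = np.random.normal(0, 100, size=(64, 64, 64))
volume[20:40, 20:40, 20:40] = 2000   # a dense "tooth-like" block
vertices, triangles = ct_to_mesh(volume)
print(vertices.shape, triangles.shape)
```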
</sec>
<sec id="Sec4">
<title>Rapid prototyping model</title>
<p>With the 3D-IV images generated, a feasibility study was conducted on a phantom head using a 3D rapid prototyping (RP) model of the mandible, fabricated with Alaris™ 30U RP technology (Objet Geometries, Rehovot, Israel) from CT data of the subject (Fig. 
<xref rid="Fig1" ref-type="fig">1b</xref>
). Technical details of the registration of the RP model have been described in previous publications, with registration errors reported to be between 0.27 and 0.33 mm [
<xref ref-type="bibr" rid="CR18">18</xref>
]. The mean overall error of the 3D image overlay in the current study was 0.71 mm, which met clinical requirements [
<xref ref-type="bibr" rid="CR15">15</xref>
].</p>
</sec>
<sec id="Sec5">
<title>Patient tracking</title>
<p>“Patient tracking” refers to tracking of the 3D contours of the teeth (incisal margins). The incisal margins were tracked in real time using the right and left images obtained through the stereo camera; the spatial positions of the teeth were obtained by matching the right and left images for 3D-contour reconstruction. The reconstructed 3D image was then compared with the actual image of the subject from the stereo camera.</p>
<p>Specifications for the tracking system included an Intel® Core™ i7 3.33 GHz processor combined with an NVIDIA® GeForce® GTX 285 GPU and an EO-0813CCD 2 charge-coupled device stereo camera (Edmund Optics Inc., Barrington, NJ, USA). The camera had a maximum frame rate of 60 frames per second with an image resolution of 1280 × 1024 pixels.</p>
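As a minimal sketch of how matched left/right edge points yield 3D positions, the following Python/OpenCV example triangulates two hypothetical point pairs. The intrinsics are back-of-the-envelope values derived from the reported optics (f = 12 mm, 4.65-μm pixels since Δd = 2.325 μm is half a pixel, 120-mm baseline, ~500-mm working distance); they stand in for, and are not, the system's actual calibration.

```python
# Illustrative stereo triangulation of matched incisal-edge points.
import numpy as np
import cv2

# Approximate intrinsics: 12 mm / 4.65 um pixel pitch -> ~2580 px focal length.
K = np.array([[2580.0, 0.0, 640.0],
              [0.0, 2580.0, 512.0],
              [0.0, 0.0, 1.0]])
P_left = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
# Right camera 120 mm along +x of the left camera.
P_right = K @ np.hstack([np.eye(3), np.array([[-120.0], [0.0], [0.0]])])

# Corresponding 2D edge points (px) found by epipolar + NCC matching.
pts_left = np.array([[640.0, 512.0], [700.0, 520.0]]).T    # shape (2, N)
pts_right = np.array([[20.8, 512.0], [80.8, 520.0]]).T

pts_h = cv2.triangulatePoints(P_left, P_right, pts_left, pts_right)
pts_3d = (pts_h[:3] / pts_h[3]).T   # de-homogenize -> N x 3, in mm
print(pts_3d)   # points near z ~ 500 mm, the reported working distance
```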
</sec>
<sec id="Sec6">
<title>Image-based calibration of IV display</title>
<p>The 3D-IV images were calibrated using a calibration model with known geometry. Calibration included: a) visualization of the calibration model with five feature points in the IV frame; b) display of the 3D image of the calibration model; c) stereo image capture of the 3D image with the stereo camera through the half-silvered mirror; and d) matching of parallax images (right and left images) from the stereo camera to obtain a final 3D-IV image. Similarly, the final calibrated 3D-IV images of the subject’s jaw, which appeared to be floating in real space, were projected into the correct position based on the coordinates of the image obtained from preoperative CT data and the real object, using HALCON software Version 11 (MVTec Software GmbH, Munich, Germany) and OpenCV, the Open Source Computer Vision Library.</p>
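The paper used HALCON and OpenCV for this step and does not publish the algorithm. As an illustration of the underlying idea, the sketch below estimates the rigid transform aligning five known calibration-model points to their stereo-measured positions using the standard SVD (Kabsch) solution; all point values and the function name are hypothetical.

```python
# Hedged sketch: rigid transform from five calibration feature points
# (IV display frame) to their 3D positions measured by the stereo camera.
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t with dst ~ R @ src + t."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, c_dst - R @ c_src

# Five feature points of the calibration model in the IV frame (mm) ...
model_pts = np.array([[0, 0, 0], [30, 0, 0], [0, 30, 0],
                      [30, 30, 0], [15, 15, 10]], float)
# ... and the same points as triangulated by the stereo camera (simulated).
theta = np.deg2rad(5.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta), np.cos(theta), 0], [0, 0, 1]])
measured = model_pts @ R_true.T + np.array([100.0, 50.0, 480.0])
R, t = rigid_transform(model_pts, measured)
print(np.allclose(R, R_true), np.round(t, 3))
```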
</sec>
<sec id="Sec7">
<title>Patient-image registration</title>
<p>Typically, fixed external anatomic landmarks on the patient and the imaging data define the accuracy of an imaging system [
<xref ref-type="bibr" rid="CR24">24</xref>
] whereby anatomic landmarks identified on the surface of the organ can be accurately correlated with the predefined landmarks in the computer’s coordinate system. In the current study, the natural landmarks (incisal margins) were tracked with the stereo camera instead of being identified manually. The 3D positions of these natural landmarks were accurately determined using the right and left images (parallax images) captured by the stereo camera [
<xref ref-type="bibr" rid="CR25">25</xref>
]. Thereafter, the preoperative 3D-CT image was integrated with images of the subject using 3D image-matching technology (stereo vision): each detected feature point of a tooth in the actual image was correlated with the corresponding feature point of a tooth on the 3D-CT image, so that mutually corresponding feature points in the two images were matched. Matching of the 3D-CT image and the volunteer’s position was based on the correlation of ≥ 200 feature points on the incisal margin. Because of the high contrast between the teeth and the oral cavity, the 3D contour of the front teeth is easily extracted using template matching and edge extraction. An image template is first selected manually in the left camera image and then matched to the corresponding right camera image to select the regions of interest (ROIs). 2D edges of the front teeth are then extracted with subpixel accuracy within the detected ROIs, and the extracted teeth edges are stereo-matched using epipolar constraint searching. Subpixel estimation is the process of estimating the value of a geometric quantity to better than pixel accuracy, even though the data were originally sampled on an integer-pixel quantized grid. Frequency-based shift-estimation methods using phase correlation (PC) have been widely used because of their accuracy and low complexity for shifts due to translation, rotation, or scale changes between images; the PC method for image alignment relies mainly on the shift property of the Fourier transform to estimate the translation between two images. The epipolar line is the straight line of intersection of the epipolar plane with the image plane; it is the image in one camera of a ray through the optical centre and the image point in the other camera, and all epipolar lines intersect at the epipole. The epipolar line is an extremely important constraint in the stereo-matching step: epipolar constraint searching establishes a mapping between points in the left image and lines in the right image (and vice versa) so that the correct match must lie on the epipolar line. The patch size is defined as an 11 × 11-pixel area, and a normalized cross-correlation coefficient describing the similarity between two patches is used to solve the correspondence problem between images. The basic steps involve (i) extracting a reference patch from the reference image and determining the conjugate position of this reference patch in the search image; (ii) defining a search area and specific sampling positions (search patches) for correlation within the search image; (iii) computing the correlation value (with respect to the reference patch) at each of the defined sample positions; and (iv) finding the sample position with the maximum correlation value, which indicates the search patch with the highest similarity to the reference patch. If multiple edge points appear on the epipolar line, the one with the best normalized cross-correlation value (calculated in an 11 × 11 area centered at the candidate edge point) is chosen as the match. Full details of the algorithms used for matching have been previously described [
<xref ref-type="bibr" rid="CR15">15</xref>
]. An HDC-Z10000 3D video camera (Panasonic Co, Ltd, Tokyo, Japan) was used to document the procedure. Although we adapted the steps from published literature [
<xref ref-type="bibr" rid="CR15">15</xref>
,
<xref ref-type="bibr" rid="CR25">25</xref>
], the novelty of our method is that this is the first study in which dentomaxillofacial 3D computed tomography (CT) data (both the maxillary and mandibular jaws, along with the teeth) were superimposed on a human volunteer using a 3D IV display and an augmented reality (AR) navigation system providing markerless registration via stereo vision in oral and maxillofacial surgery; previous studies were performed on phantom models. We focused on investigating the properties of the human intraoral environment. Patient movement was a challenge that needed to be addressed when applying the method to a human subject in a real clinical setting; it was overcome by using a custom-designed stereo camera that tracked patient movement and updated the image registration in real time without manual intervention.</p>
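To make the epipolar/NCC matching rule above concrete, here is a minimal, self-contained sketch (toy images; the function names and values are hypothetical, not the authors' implementation). Among candidate edge points lying on the epipolar line in the right image, it keeps the one whose 11 × 11 patch has the best normalized cross-correlation with the reference patch from the left image.

```python
# Illustrative NCC matching of edge points along an epipolar line.
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def best_epipolar_match(left, right, pt_left, candidates, half=5):
    """pt_left: (row, col) edge point in the left image; candidates: (row, col)
    edge points on the corresponding epipolar line. half=5 -> 11 x 11 patch."""
    r, c = pt_left
    ref = left[r - half:r + half + 1, c - half:c + half + 1]
    scores = [ncc(ref, right[rr - half:rr + half + 1, cc - half:cc + half + 1])
              for (rr, cc) in candidates]
    return candidates[int(np.argmax(scores))], max(scores)

# Toy images: a bright "tooth edge" shifted by 10 px between the views.
left = np.zeros((64, 64)); left[30:34, 20:40] = 255
right = np.zeros((64, 64)); right[30:34, 10:30] = 255
match, score = best_epipolar_match(left, right, (32, 25),
                                   [(32, 8), (32, 15), (32, 50)])
print(match, round(score, 3))   # -> (32, 15) 1.0, the correctly shifted point
```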
</sec>
<sec id="Sec8">
<title>Evaluation of recognition time and positioning error</title>
<p>Because registration and tracking were performed using the stereo camera, the measurement error of the stereo camera system was considered a major source of registration error. Since it was considered impractical to evaluate the accuracy of each stage of the registration process, the accuracy of the final 3D-IV image was used to confirm the accuracy of this novel system. Error calculation was conducted as per our previous study, based on the alignment of 14 points on the surface of the teeth with the actual 3D-IV images using the stereo camera [
<xref ref-type="bibr" rid="CR23">23</xref>
]. Because these points were not used at any stage of the registration process, the accuracy of this experiment can be considered a target registration error (TRE). Each point was measured 20 times in the stereoscopic model and the 3D-IV images to determine the average value, standard deviation (SD), and 3D differences for the target position [
<xref ref-type="bibr" rid="CR1">1</xref>
]. Calculations were performed according to Fitzpatrick and West [
<xref ref-type="bibr" rid="CR26">26</xref>
].</p>
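As an illustration of this evaluation scheme, the sketch below computes per-axis errors and a mean TRE from simulated repeated measurements (14 points × 20 repeats). The data are synthetic stand-ins, not the study's measurements, and the noise level is an arbitrary assumption.

```python
# Simulated TRE evaluation: 14 target points, each measured 20 times.
import numpy as np

rng = np.random.default_rng(0)
n_points, n_repeats = 14, 20

truth = rng.uniform(0, 50, size=(n_points, 3))                  # object (mm)
measured = truth + rng.normal(0, 0.3, size=(n_repeats, n_points, 3))

mean_pos = measured.mean(axis=0)                                # per point
tre = np.linalg.norm(mean_pos - truth, axis=1)                  # 3D distance
per_axis_err = np.abs(measured - truth).mean(axis=(0, 1))       # |dx|,|dy|,|dz|
per_axis_sd = np.abs(measured - truth).std(axis=(0, 1))

print("mean TRE: %.3f mm" % tre.mean())
print("per-axis mean (SD), mm:",
      [f"{m:.2f} ({s:.2f})" for m, s in zip(per_axis_err, per_axis_sd)])
```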
<p>The accuracy of the cameras was based on the following equations:
<disp-formula id="Equa">
<tex-math id="M1">$$ \begin{array}{l} XY\ \text{direction}:\quad \varDelta_x = \varDelta_y = \dfrac{z}{f}\,\varDelta_d \\ Z\ \text{direction}:\quad \varDelta_z = \dfrac{z^{2}}{fb}\,\varDelta_d \end{array} $$</tex-math>
</disp-formula>
</p>
<p>where, z is the distance from the camera to the object (~500 mm), ƒ is the focal length of the cameras (12 mm), b is the distance between the cameras (120 mm) and ∆
<sub>d</sub>
represents half of a pixel’s size on the sensor (2.325 μm).</p>
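Plugging the stated parameters into these equations reproduces the accuracy figures reported in the Results (0.096 mm in XY, 0.403 mm in Z, 0.425 mm overall, up to rounding). The root-sum-square combination for the overall positioning limit is our reading of how that figure was obtained, not a step spelled out in the paper.

```python
# Worked check of the stereo accuracy equations with the stated parameters:
# z = 500 mm, f = 12 mm, b = 120 mm, delta_d = 2.325 um (half a pixel).
z, f, b = 500.0, 12.0, 120.0             # mm
delta_d = 2.325e-3                       # mm

delta_xy = (z / f) * delta_d             # XY direction
delta_z = (z ** 2) / (f * b) * delta_d   # Z direction (depth)
total = (2 * delta_xy ** 2 + delta_z ** 2) ** 0.5   # assumed RSS combination

print(f"XY: {delta_xy:.4f} mm, Z: {delta_z:.4f} mm, total: {total:.4f} mm")
# -> XY: 0.0969 mm, Z: 0.4036 mm, total: 0.4263 mm
```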
</sec>
</sec>
<sec id="Sec9" sec-type="results">
<title>Results</title>
<p>The calibrated images of the setup based on five feature points and the resulting 3D-IV image were displayed in real space as shown in Fig. 
<xref rid="Fig2" ref-type="fig">2a</xref>
and
<xref rid="Fig2" ref-type="fig">2b</xref>
, respectively, which were then recognized and matched by our automatic measurement method using the stereo camera (Fig. 
<xref rid="Fig2" ref-type="fig">2c</xref>
); extraction of characteristic points is shown in Fig. 
<xref rid="Fig3" ref-type="fig">3a</xref>
. Registration of the 3D-IV and subject images was performed, wherein the contours were automatically detected, their outline generated and both were matched using the stereo camera (Fig. 
<xref rid="Fig3" ref-type="fig">3b</xref>
). The 3D reconstruction of the teeth contours in the stereo camera after image matching is shown in Fig. 
<xref rid="Fig3" ref-type="fig">3c</xref>
. By automatically detecting the feature points of the teeth, complete automatic registration was possible (Fig. 
<xref rid="Fig3" ref-type="fig">3b</xref>
and
<xref rid="Fig3" ref-type="fig">3c</xref>
). Therefore, this system allowed real-time patient-image registration through tracking of teeth contours and image matching with the pre-operative model. The 3D-CT images of the mandible and maxilla in real space obtained using AR technology are shown in Fig. 
<xref rid="Fig4" ref-type="fig">4a</xref>
and
<xref rid="Fig4" ref-type="fig">4b</xref>
(Additional files
<xref rid="MOESM1" ref-type="media">1</xref>
and
<xref rid="MOESM2" ref-type="media">2</xref>
). Furthermore, CT data were displayed in real space as high-accuracy stereoscopic images with the teeth as landmarks for capturing information regarding the position of structures, thus negating the need for markers. The mandibular canal, tooth root and impacted third molar could also be visualized in the 3D-IV image (Fig. 
<xref rid="Fig4" ref-type="fig">4c</xref>
; Additional file
<xref rid="MOESM3" ref-type="media">3</xref>
). The actual accuracy of the camera system was computed to be 0.096 mm along the XY axis and 0.403 mm along the Z axis (depth); the accuracy limit in positioning was theoretically calculated to be 0.425 mm. The error component in each direction is shown in Fig. 
<xref rid="Fig5" ref-type="fig">5</xref>
. The mean (SD) error between the IV image and object was 0.28 (0.21) mm along the X axis, 0.25 (0.17) mm along the Y axis and 0.36 (0.33) mm along the Z axis.
<fig id="Fig2">
<label>Fig. 2</label>
<caption>
<p>Calibration of the integral videography (IV) display. This includes
<bold>a</bold>
five feature points for calibration,
<bold>b</bold>
the IV image displayed in real space, and
<bold>c</bold>
recognition results for the calibrated IV images (matching of left and right images via stereo vision)</p>
</caption>
<graphic xlink:href="12880_2015_89_Fig2_HTML" id="MO2"></graphic>
</fig>
<fig id="Fig3">
<label>Fig. 3</label>
<caption>
<p>Automatic registration of the 3D-CT image and volunteer’s position. This included
<bold>a</bold>
extraction of characteristic points,
<bold>b</bold>
automatic detection of teeth contour and matching of right and left images via stereo vision, and
<bold>c</bold>
3D contour reconstruction in the stereo camera frame</p>
</caption>
<graphic xlink:href="12880_2015_89_Fig3_HTML" id="MO3"></graphic>
</fig>
<fig id="Fig4">
<label>Fig. 4</label>
<caption>
<p>The IV images are overlaid on the surgical site. This included the
<bold>a</bold>
mandible,
<bold>b</bold>
maxilla overlaid on the surgical site, and
<bold>c</bold>
visualization of the mandibular canal, tooth root, and impacted third molar</p>
</caption>
<graphic xlink:href="12880_2015_89_Fig4_HTML" id="MO4"></graphic>
</fig>
<fig id="Fig5">
<label>Fig. 5</label>
<caption>
<p>Positional errors along different axes (
<italic>x</italic>
,
<italic>y</italic>
, and
<italic>z</italic>
)</p>
</caption>
<graphic xlink:href="12880_2015_89_Fig5_HTML" id="MO5"></graphic>
</fig>
</p>
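<p>For concreteness, the pinhole geometry underlying the contour reconstruction in Fig. 
<xref rid="Fig3" ref-type="fig">3c</xref>
 can be sketched as follows. This is our own illustrative code, not the authors’ implementation: the focal length in pixels follows from the quoted optics (12 mm lens, 4.65 μm pixel pitch, i.e. twice the 2.325 μm half-pixel used above), while the pixel coordinates and the principal point are invented for the example.</p>
<preformat>
import numpy as np

def triangulate(u_left, u_right, v, f_px, baseline_mm, cx, cy):
    """Recover a 3D point (mm, camera frame) from one matched pixel pair
    in a rectified stereo rig."""
    d = u_left - u_right          # disparity in pixels
    z = f_px * baseline_mm / d    # depth from z = f*b/d
    x = (u_left - cx) * z / f_px  # back-project to metric X
    y = (v - cy) * z / f_px       # back-project to metric Y
    return np.array([x, y, z])

# Example: one tooth-contour point matched at sub-pixel precision.
# f_px = 12 mm / 4.65 um = ~2580.6 px; baseline = 120 mm.
p = triangulate(u_left=812.40, u_right=193.06, v=460.0,
                f_px=2580.6, baseline_mm=120.0, cx=640.0, cy=480.0)
print(p)  # point at roughly z = 500 mm, the working distance in the text
</preformat>
<p>Differentiating z = fb/d with respect to the disparity recovers the depth-resolution formula used above: |∂z/∂d| = fb/d<sup>2</sup> = z<sup>2</sup>/(fb).</p>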
<p>The current study evaluated the feasibility of combining AR and stereo vision technologies to project IV images obtained from preoperative CT data onto the actual surgical site in real time, with automatic markerless registration, in a clinical setting on a volunteer. Existing methods using this type of system have so far been tested only on phantom models. To our knowledge, this is the first time markerless patient-image registration has been performed on a volunteer in a clinical setup; consequently, no directly comparable study is available.</p>
</sec>
<sec id="Sec10" sec-type="discussion">
<title>Discussion</title>
<p>Dental surgery requires highly precise operations, with surgical targets often hidden by surrounding structures that must remain undamaged during the procedure. The use of an AR system may provide a solution to the challenges presented in routine surgical practice.</p>
<p>The current study strategically simplified and improved the application of AR in OMS. Region-specific 3D-CT images were displayed in real space with high accuracy and depth perception by means of markerless registration through stereo vision. The study also used 320-row ADCT, in which a single gantry rotation acquires 320 CT slices covering a 16 cm volume without a helical scan. Traditional methods of registration rely on an external registration frame or screw-fixed marker frames [
<xref ref-type="bibr" rid="CR13">13</xref>
,
<xref ref-type="bibr" rid="CR24">24</xref>
,
<xref ref-type="bibr" rid="CR27">27</xref>
], which are fraught with errors and also restrict the operating space. Furthermore, registrations for soft tissues are associated with low accuracy [
<xref ref-type="bibr" rid="CR28">28</xref>
]. Zhu and colleagues [
<xref ref-type="bibr" rid="CR29">29</xref>
] used an occlusal splint for registration in mandibular angle oblique-split osteotomy with good results; however, the system could not be used in edentulous patients and the markers were limited to the lower half of the face. A study using markerless registration showed variations between three different methods based on anatomic landmarks such as the zygoma, sinus posterior wall, molar alveolar, premolar alveolar, lateral nasal aperture and the infra-orbital areas that were used during navigational surgery of the maxillary sinus [
<xref ref-type="bibr" rid="CR30">30</xref>
]. The results of that study showed that although the use of skin adhesive markers and anatomic landmarks was noninvasive and practical, it had limited accuracy and was restricted to craniofacial surgery. In the current study, complete automatic registration was possible because of the use of the anatomic features of teeth; teeth are the only externally exposed hard tissues, which makes them useful targets for registration based on 3D image (contour) matching via stereo vision. We believe that the registration procedure used in the present study can be applied to the anterior teeth as well as the molars (in both jaws), thus covering the entire oral surgery site, the only requirement being that the teeth be registered. The introduction of stereo cameras into the IV image-overlay system eliminated the need for an external optical tracker.</p>
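<p>The rigid transform implied by such 3D contour matching can be illustrated with the standard SVD-based least-squares alignment of paired point sets (the textbook Kabsch/Umeyama step). This is a generic sketch shown under the assumption of known correspondences, not a reproduction of the authors’ matching pipeline:</p>
<preformat>
import numpy as np

def rigid_align(P, Q):
    """Find rotation R and translation t minimizing ||R @ p + t - q||
    over paired Nx3 point sets P (stereo contour points) and
    Q (corresponding points on the pre-operative CT model)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    sign = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflection
    D = np.diag([1.0, 1.0, sign])
    R = Vt.T @ D @ U.T                         # proper rotation, det = +1
    t = cQ - R @ cP
    return R, t

# The RMS residual of R @ P.T + t[:, None] against Q.T after alignment
# is one simple proxy for the registration error at the contour points.
</preformat>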
<p>During navigation in image-guided surgery, the surgeon cannot simultaneously look at the screen and the operative site, a limitation that can cause surgical errors [
<xref ref-type="bibr" rid="CR24">24</xref>
]. In the current study, which combined AR and stereo vision technologies, the IV image could be accurately aligned with the preoperative patient model when observed from either direction. An added benefit was the ability to observe the 3D images while changing the viewing position from horizontal to vertical relative to the subject, just as in real space, without the need for special glasses. Because the 3D-AR display in this system obviated the need to avert the operator’s eyes from the surgical field or to change focus, the necessary information could be read in real time during surgery.</p>
<p>In a comparative study of two navigation systems using frames for registration, the accuracy varied from 0.5 to 4 mm [
<xref ref-type="bibr" rid="CR27">27</xref>
]. The average registration error in the current system approximated the theoretical value, at 0.63 ± 0.23 mm, which was much lower than the values reported in other studies [
<xref ref-type="bibr" rid="CR31">31</xref>
,
<xref ref-type="bibr" rid="CR32">32</xref>
]. The measurement resolution of the stereo camera system in the current study was 0.1 mm along the XY axis and 0.4 mm along the Z axis. A comparative tracking error analysis of five different optical tracking systems showed that their position-measuring accuracy, ranging from 0.1 to 0.4 mm [
<xref ref-type="bibr" rid="CR33">33</xref>
], was considered highly acceptable; the position-measuring accuracy of the stereo camera in the current study was theoretically calculated as 0.425 mm [
<xref ref-type="bibr" rid="CR34">34</xref>
]. Because our extraction algorithm was accurate to the sub-pixel level, the planar coordinates of a 3D point could be computed to within 0.1 mm, which was superior to the accuracy of an optical tracking system. However, as both registration and tracking were carried out with a stereo camera set, measurement error was one of the major sources of registration error. It was anticipated that the TRE near the ‘characteristic features’ used for registration would be smaller than that in more distal regions. The pre- and intra-operative contours of each part (anterior teeth or left and right molars) could be easily extracted using the method shown in this study, and the pair nearest the surgical site should be used for patient-image registration. Thus, the location of the surgical site should determine whether anterior teeth or molars are tracked for patient-image registration. The potential for error increases with the number of steps in a procedure. Therefore, the possibility of segmentation errors in the CT data and of registration errors related to incorporating digital dental models into the CT data cannot be completely ruled out. In the field of oral surgery, the complexity of the anatomical structures involved often makes it difficult to visualize the operating site. The ability to grasp the 3D relationships between such structures through direct visualization promises to greatly facilitate surgical procedures. Finally, the actual clinical accuracy in terms of clinical outcomes will require assessment of this procedure in surgery-specific randomized controlled trials.</p>
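<p>The sub-pixel accuracy mentioned above is typically obtained by interpolating around the integer maximum of a matching score, for example the peak of a phase correlation (PC; see the abbreviations below). A minimal, generic sketch of such a refinement step, not the authors’ exact estimator:</p>
<preformat>
def subpixel_peak(c, i):
    """Refine the integer peak index i of a 1-D correlation curve c by
    fitting a parabola through (i-1, i, i+1) and returning its vertex."""
    denom = c[i - 1] - 2.0 * c[i] + c[i + 1]
    if denom == 0.0:
        return float(i)  # flat neighbourhood: keep the integer peak
    return i + 0.5 * (c[i - 1] - c[i + 1]) / denom

# Example: a correlation curve whose true maximum lies slightly to the
# right of the integer peak at index 5.
print(subpixel_peak([0.1, 0.2, 0.4, 0.7, 0.9, 1.0, 0.95, 0.6], 5))
# prints ~5.17
</preformat>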
</sec>
<sec id="Sec11" sec-type="conclusion">
<title>Conclusion</title>
<p>In conclusion, this system displayed 3D-CT images in real space with high accuracy without the use of reference frames or fiducial markers. The system provided real-time markerless registration and 3D image matching using stereo vision, which, combined with AR, could have significant clinical applications. The entire registration process took less than three seconds, an acceptable computation time. To improve accuracy and processing speed, future research should aim to improve camera resolution, create finer displays and increase the computational power of the systems. Depth resolution could be improved by using a camera with a smaller pixel pitch, and a higher-precision system will require a high-definition rear display and lens array to increase the pixel density of the AR display. The most important contribution of this study is that it demonstrated the feasibility of the AR system with IV imaging for clinical use, with a registration error of &lt; 1 mm. Our framework provided markerless registration and tracking of IV images and the patient, thereby simplifying intraoperative tasks, increasing accuracy and reducing potential discomfort and inconvenience to the patient. Therefore, this modified AR technique with stereo vision has significant potential for clinical applications. The method adopted here cannot be rigidly classified as either a methodology or a validation study; we describe it as a methodology because so many methodological aspects were needed to conduct this work. The repeatability of the method remains open to question when demonstrated on only one subject. Future work should therefore include clinical trials of this technology in a larger number of patients to assess its many potential applications and to generalize the technique.</p>
</sec>
</body>
<back>
<app-group>
<app id="App1">
<sec id="Sec12">
<title>Additional files</title>
<p>
<media position="anchor" xlink:href="12880_2015_89_MOESM1_ESM.avi" id="MOESM1">
<label>Additional file 1:</label>
<caption>
<p>
<bold>Supplementary video 1.</bold>
Real-time computer-generated integral images of the mandible are overlaid on the surgical site. (mp4 2.37MB)</p>
</caption>
</media>
<media position="anchor" xlink:href="12880_2015_89_MOESM2_ESM.avi" id="MOESM2">
<label>Additional file 2:</label>
<caption>
<p>
<bold>Supplementary video 2.</bold>
Real-time computer-generated integral images of the maxilla are overlaid on the surgical site. (mp4 1.21MB)</p>
</caption>
</media>
<media position="anchor" xlink:href="12880_2015_89_MOESM3_ESM.avi" id="MOESM3">
<label>Additional file 3:</label>
<caption>
<p>
<bold>Supplementary video 3.</bold>
Real-time computer-generated integral images of the mandibular canal, tooth root, and impacted third molar are overlaid on the surgical site. (mp4 2.42MB)</p>
</caption>
</media>
</p>
</sec>
</app>
</app-group>
<glossary>
<title>Abbreviations</title>
<def-list>
<def-item>
<term>AR</term>
<def>
<p>Augmented reality</p>
</def>
</def-item>
<def-item>
<term>OMS</term>
<def>
<p>Oral and maxillofacial surgery</p>
</def>
</def-item>
<def-item>
<term>IV</term>
<def>
<p>Integral videography</p>
</def>
</def-item>
<def-item>
<term>GCP</term>
<def>
<p>Good clinical practice</p>
</def>
</def-item>
<def-item>
<term>ADCT</term>
<def>
<p>Area detector computed tomography</p>
</def>
</def-item>
<def-item>
<term>RP</term>
<def>
<p>Rapid prototyping</p>
</def>
</def-item>
<def-item>
<term>PC</term>
<def>
<p>Phase correlation</p>
</def>
</def-item>
<def-item>
<term>ROI</term>
<def>
<p>Regions of interest</p>
</def>
</def-item>
<def-item>
<term>CT</term>
<def>
<p>Computed tomography</p>
</def>
</def-item>
<def-item>
<term>TRE</term>
<def>
<p>Target registration error</p>
</def>
</def-item>
<def-item>
<term>SD</term>
<def>
<p>Standard deviation</p>
</def>
</def-item>
</def-list>
</glossary>
<fn-group>
<fn>
<p>
<bold>Competing interests</bold>
</p>
<p>The authors declare that they have no competing interests.</p>
</fn>
<fn>
<p>
<bold>Authors’ contributions</bold>
</p>
<p>HS conceived of the study, and participated in its design and coordination and drafted the manuscript. HHT and HL contributed to the design and analysis of the study data, and revised the manuscript. KM, TD, KH, and TT provided clinical insight that pervades the manuscript. All authors read and approved the final manuscript.</p>
</fn>
</fn-group>
<ack>
<p>This work was supported by a Grant-in-Aid for Scientific Research (23792318) from the Japan Society for the Promotion of Science (JSPS), Japan.</p>
</ack>
<ref-list id="Bib1">
<title>References</title>
<ref id="CR1">
<label>1.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lovo</surname>
<given-names>EE</given-names>
</name>
<name>
<surname>Quintana</surname>
<given-names>JC</given-names>
</name>
<name>
<surname>Puebla</surname>
<given-names>MC</given-names>
</name>
<name>
<surname>Torrealba</surname>
<given-names>G</given-names>
</name>
<name>
<surname>Santos</surname>
<given-names>JL</given-names>
</name>
<name>
<surname>Lira</surname>
<given-names>IH</given-names>
</name>
<etal></etal>
</person-group>
<article-title>A novel, inexpensive method of image coregistration for applications in image-guided surgery using augmented reality</article-title>
<source>Neurosurgery</source>
<year>2007</year>
<volume>60</volume>
<fpage>366</fpage>
<lpage>371</lpage>
<pub-id pub-id-type="doi">10.1227/01.NEU.0000255360.32689.FA</pub-id>
<pub-id pub-id-type="pmid">17415176</pub-id>
</element-citation>
</ref>
<ref id="CR2">
<label>2.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sielhorst</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Bichlmeier</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Heining</surname>
<given-names>SM</given-names>
</name>
<name>
<surname>Navab</surname>
<given-names>N</given-names>
</name>
</person-group>
<article-title>Depth perception--a major issue in medical AR: evaluation study by twenty surgeons</article-title>
<source>Med Image Comput Comput Assist Interv</source>
<year>2006</year>
<volume>9</volume>
<fpage>364</fpage>
<lpage>372</lpage>
<pub-id pub-id-type="pmid">17354911</pub-id>
</element-citation>
</ref>
<ref id="CR3">
<label>3.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kang</surname>
<given-names>X</given-names>
</name>
<name>
<surname>Azizian</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Wilson</surname>
<given-names>E</given-names>
</name>
<name>
<surname>Wu</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Martin</surname>
<given-names>AD</given-names>
</name>
<name>
<surname>Kane</surname>
<given-names>TD</given-names>
</name>
</person-group>
<article-title>Stereoscopic augmented reality for laparoscopic surgery</article-title>
<source>Surg Endosc</source>
<year>2014</year>
<volume>28</volume>
<fpage>2227</fpage>
<lpage>2235</lpage>
<pub-id pub-id-type="doi">10.1007/s00464-014-3433-x</pub-id>
<pub-id pub-id-type="pmid">24488352</pub-id>
</element-citation>
</ref>
<ref id="CR4">
<label>4.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Volonte</surname>
<given-names>F</given-names>
</name>
<name>
<surname>Pugin</surname>
<given-names>F</given-names>
</name>
<name>
<surname>Bucher</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Sugimoto</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Ratib</surname>
<given-names>O</given-names>
</name>
<name>
<surname>Morel</surname>
<given-names>P</given-names>
</name>
</person-group>
<article-title>Augmented reality and image overlay navigation with OsiriX in laparoscopic and robotic surgery: not only a matter of fashion</article-title>
<source>J Hepatobiliary Pancreat Sci</source>
<year>2011</year>
<volume>18</volume>
<fpage>506</fpage>
<lpage>509</lpage>
<pub-id pub-id-type="doi">10.1007/s00534-011-0385-6</pub-id>
<pub-id pub-id-type="pmid">21487758</pub-id>
</element-citation>
</ref>
<ref id="CR5">
<label>5.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fritz</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Thainual</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Ungi</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Flammang</surname>
<given-names>AJ</given-names>
</name>
<name>
<surname>Cho</surname>
<given-names>NB</given-names>
</name>
<name>
<surname>Fichtinger</surname>
<given-names>G</given-names>
</name>
<etal></etal>
</person-group>
<article-title>Augmented reality visualization with image overlay for MRI-guided intervention: accuracy for lumbar spinal procedures with a 1.5-T MRI system</article-title>
<source>AJR Am J Roentgenol</source>
<year>2012</year>
<volume>198</volume>
<fpage>W266</fpage>
<lpage>W273</lpage>
<pub-id pub-id-type="doi">10.2214/AJR.11.6918</pub-id>
<pub-id pub-id-type="pmid">22358024</pub-id>
</element-citation>
</ref>
<ref id="CR6">
<label>6.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mahvash</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Besharati</surname>
<given-names>TL</given-names>
</name>
</person-group>
<article-title>A novel augmented reality system of image projection for image-guided neurosurgery</article-title>
<source>Acta Neurochir</source>
<year>2013</year>
<volume>155</volume>
<fpage>943</fpage>
<lpage>947</lpage>
<pub-id pub-id-type="doi">10.1007/s00701-013-1668-2</pub-id>
<pub-id pub-id-type="pmid">23494133</pub-id>
</element-citation>
</ref>
<ref id="CR7">
<label>7.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Puerto-Souza</surname>
<given-names>G</given-names>
</name>
<name>
<surname>Cadeddu</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Mariottini</surname>
<given-names>GL</given-names>
</name>
</person-group>
<article-title>Toward Long-term and Accurate Augmented-Reality for Monocular Endoscopic Videos</article-title>
<source>IEEE Trans Biomed Eng</source>
<year>2014</year>
<volume>61</volume>
<fpage>2609</fpage>
<lpage>2620</lpage>
<pub-id pub-id-type="doi">10.1109/TBME.2014.2323999</pub-id>
<pub-id pub-id-type="pmid">24835126</pub-id>
</element-citation>
</ref>
<ref id="CR8">
<label>8.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kersten-Oertel</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>SS</given-names>
</name>
<name>
<surname>Drouin</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Sinclair</surname>
<given-names>DS</given-names>
</name>
<name>
<surname>Collins</surname>
<given-names>DL</given-names>
</name>
</person-group>
<article-title>Augmented reality visualization for guidance in neurovascular surgery</article-title>
<source>Stud Health Technol Inform</source>
<year>2012</year>
<volume>173</volume>
<fpage>225</fpage>
<lpage>229</lpage>
<pub-id pub-id-type="pmid">22356991</pub-id>
</element-citation>
</ref>
<ref id="CR9">
<label>9.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Matsumoto</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Kubo</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Muratsu</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Tsumura</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Ishida</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Matsushita</surname>
<given-names>T</given-names>
</name>
<etal></etal>
</person-group>
<article-title>Differing prosthetic alignment and femoral component sizing between 2 computer-assisted CT-free navigation systems in TKA</article-title>
<source>Orthopedics</source>
<year>2011</year>
<volume>34</volume>
<fpage>e860</fpage>
<lpage>e865</lpage>
<pub-id pub-id-type="pmid">22146202</pub-id>
</element-citation>
</ref>
<ref id="CR10">
<label>10.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Yokoyama</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Abe</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Fujiwara</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Suzuki</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Nakajima</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Sugita</surname>
<given-names>N</given-names>
</name>
<etal></etal>
</person-group>
<article-title>A new navigation system for minimally invasive total knee arthroplasty</article-title>
<source>Acta Med Okayama</source>
<year>2013</year>
<volume>67</volume>
<fpage>351</fpage>
<lpage>358</lpage>
<pub-id pub-id-type="pmid">24356719</pub-id>
</element-citation>
</ref>
<ref id="CR11">
<label>11.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Suzuki</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Hattori</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Iimura</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Otori</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Onda</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Okamoto</surname>
<given-names>T</given-names>
</name>
<etal></etal>
</person-group>
<article-title>Development of AR Surgical Navigation Systems for Multiple Surgical Regions</article-title>
<source>Stud Health Technol Inform</source>
<year>2014</year>
<volume>196</volume>
<fpage>404</fpage>
<lpage>408</lpage>
<pub-id pub-id-type="pmid">24732545</pub-id>
</element-citation>
</ref>
<ref id="CR12">
<label>12.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Badiali</surname>
<given-names>G</given-names>
</name>
<name>
<surname>Ferrari</surname>
<given-names>V</given-names>
</name>
<name>
<surname>Cutolo</surname>
<given-names>F</given-names>
</name>
<name>
<surname>Freschi</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Caramella</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Bianchi</surname>
<given-names>A</given-names>
</name>
<etal></etal>
</person-group>
<article-title>Augmented reality as an aid in maxillofacial surgery: validation of a wearable system allowing maxillary repositioning</article-title>
<source>J Craniomaxillofac Surg</source>
<year>2014</year>
<volume>42</volume>
<fpage>1970</fpage>
<lpage>1976</lpage>
<pub-id pub-id-type="doi">10.1016/j.jcms.2014.09.001</pub-id>
<pub-id pub-id-type="pmid">25441867</pub-id>
</element-citation>
</ref>
<ref id="CR13">
<label>13.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Nijmeh</surname>
<given-names>AD</given-names>
</name>
<name>
<surname>Goodger</surname>
<given-names>NM</given-names>
</name>
<name>
<surname>Hawkes</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Edwards</surname>
<given-names>PJ</given-names>
</name>
<name>
<surname>McGurk</surname>
<given-names>M</given-names>
</name>
</person-group>
<article-title>Image-guided navigation in oral and maxillofacial surgery</article-title>
<source>Br J Oral Maxillofac Surg</source>
<year>2005</year>
<volume>43</volume>
<fpage>294</fpage>
<lpage>302</lpage>
<pub-id pub-id-type="doi">10.1016/j.bjoms.2004.11.018</pub-id>
<pub-id pub-id-type="pmid">15993282</pub-id>
</element-citation>
</ref>
<ref id="CR14">
<label>14.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wang</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Suenaga</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Yang</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Liao</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Kobayashi</surname>
<given-names>E</given-names>
</name>
<name>
<surname>Takato</surname>
<given-names>T</given-names>
</name>
<etal></etal>
</person-group>
<article-title>Real-time marker-free patient registration and image-based navigation using stereo vision for dental surgery</article-title>
<source>LNCS</source>
<year>2013</year>
<volume>8090</volume>
<fpage>9</fpage>
<lpage>18</lpage>
</element-citation>
</ref>
<ref id="CR15">
<label>15.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wang</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Suenaga</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Hoshi</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Yang</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Kobayashi</surname>
<given-names>E</given-names>
</name>
<name>
<surname>Sakuma</surname>
<given-names>I</given-names>
</name>
<etal></etal>
</person-group>
<article-title>Augmented reality navigation with automatic marker-free image registration using 3-D image overlay for dental surgery</article-title>
<source>IEEE Trans Biomed Eng</source>
<year>2014</year>
<volume>61</volume>
<fpage>1295</fpage>
<lpage>1304</lpage>
<pub-id pub-id-type="doi">10.1109/TBME.2014.2301191</pub-id>
<pub-id pub-id-type="pmid">24658253</pub-id>
</element-citation>
</ref>
<ref id="CR16">
<label>16.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lin</surname>
<given-names>YK</given-names>
</name>
<name>
<surname>Yau</surname>
<given-names>HT</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>IC</given-names>
</name>
<name>
<surname>Zheng</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Chung</surname>
<given-names>KH</given-names>
</name>
</person-group>
<article-title>A Novel Dental Implant Guided Surgery Based on Integration of Surgical Template and Augmented Reality</article-title>
<source>Clin Implant Dent Relat Res</source>
<year>2013</year>
</element-citation>
</ref>
<ref id="CR17">
<label>17.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kang</surname>
<given-names>SH</given-names>
</name>
<name>
<surname>Kim</surname>
<given-names>MK</given-names>
</name>
<name>
<surname>Kim</surname>
<given-names>JH</given-names>
</name>
<name>
<surname>Park</surname>
<given-names>HK</given-names>
</name>
<name>
<surname>Lee</surname>
<given-names>SH</given-names>
</name>
<name>
<surname>Park</surname>
<given-names>W</given-names>
</name>
</person-group>
<article-title>The validity of marker registration for an optimal integration method in mandibular navigation surgery</article-title>
<source>J Oral Maxillofac Surg</source>
<year>2013</year>
<volume>71</volume>
<fpage>366</fpage>
<lpage>375</lpage>
<pub-id pub-id-type="doi">10.1016/j.joms.2012.03.037</pub-id>
<pub-id pub-id-type="pmid">22695020</pub-id>
</element-citation>
</ref>
<ref id="CR18">
<label>18.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sun</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Luebbers</surname>
<given-names>HT</given-names>
</name>
<name>
<surname>Agbaje</surname>
<given-names>JO</given-names>
</name>
<name>
<surname>Schepers</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Vrielinck</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Lambrichts</surname>
<given-names>I</given-names>
</name>
<etal></etal>
</person-group>
<article-title>Validation of anatomical landmarks-based registration for image-guided surgery: an in-vitro study</article-title>
<source>J Craniomaxillofac Surg</source>
<year>2013</year>
<volume>41</volume>
<fpage>522</fpage>
<lpage>526</lpage>
<pub-id pub-id-type="doi">10.1016/j.jcms.2012.11.017</pub-id>
<pub-id pub-id-type="pmid">23273492</pub-id>
</element-citation>
</ref>
<ref id="CR19">
<label>19.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Liao</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Hata</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Nakajima</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Iwahara</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Sakuma</surname>
<given-names>I</given-names>
</name>
<name>
<surname>Dohi</surname>
<given-names>T</given-names>
</name>
</person-group>
<article-title>Surgical navigation by autostereoscopic image overlay of integral videography</article-title>
<source>IEEE Trans Inf Technol Biomed</source>
<year>2004</year>
<volume>8</volume>
<fpage>114</fpage>
<lpage>121</lpage>
<pub-id pub-id-type="doi">10.1109/TITB.2004.826734</pub-id>
<pub-id pub-id-type="pmid">15217256</pub-id>
</element-citation>
</ref>
<ref id="CR20">
<label>20.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Liao</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Ishihara</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Tran</surname>
<given-names>HH</given-names>
</name>
<name>
<surname>Masamune</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Sakuma</surname>
<given-names>I</given-names>
</name>
<name>
<surname>Dohi</surname>
<given-names>T</given-names>
</name>
</person-group>
<article-title>Precision-guided surgical navigation system using laser guidance and 3D autostereoscopic image overlay</article-title>
<source>Comput Med Imaging Graph</source>
<year>2010</year>
<volume>34</volume>
<fpage>46</fpage>
<lpage>54</lpage>
<pub-id pub-id-type="doi">10.1016/j.compmedimag.2009.07.003</pub-id>
<pub-id pub-id-type="pmid">19674871</pub-id>
</element-citation>
</ref>
<ref id="CR21">
<label>21.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tran</surname>
<given-names>HH</given-names>
</name>
<name>
<surname>Matsumiya</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Masamune</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Sakuma</surname>
<given-names>I</given-names>
</name>
<name>
<surname>Dohi</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Liao</surname>
<given-names>H</given-names>
</name>
</person-group>
<article-title>Interactive 3-D navigation system for image-guided surgery</article-title>
<source>Int J Virtual Real</source>
<year>2009</year>
<volume>8</volume>
<fpage>9</fpage>
<lpage>16</lpage>
</element-citation>
</ref>
<ref id="CR22">
<label>22.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Suenaga</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Hoang Tran</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Liao</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Masamune</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Dohi</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Hoshi</surname>
<given-names>K</given-names>
</name>
<etal></etal>
</person-group>
<article-title>Real-time
<italic>in situ</italic>
three-dimensional integral videography and surgical navigation using augmented reality: a pilot study</article-title>
<source>Int J Oral Sci</source>
<year>2013</year>
<volume>5</volume>
<fpage>98</fpage>
<lpage>102</lpage>
<pub-id pub-id-type="doi">10.1038/ijos.2013.26</pub-id>
<pub-id pub-id-type="pmid">23703710</pub-id>
</element-citation>
</ref>
<ref id="CR23">
<label>23.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wang</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Suenaga</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Liao</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Hoshi</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Yang</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Kobayashi</surname>
<given-names>E</given-names>
</name>
<etal></etal>
</person-group>
<article-title>Real-time computer-generated integral imaging and 3D image calibration for augmented reality surgical navigation</article-title>
<source>Comput Med Imaging Graph</source>
<year>2015</year>
<volume>40</volume>
<fpage>147</fpage>
<lpage>159</lpage>
<pub-id pub-id-type="doi">10.1016/j.compmedimag.2014.11.003</pub-id>
<pub-id pub-id-type="pmid">25465067</pub-id>
</element-citation>
</ref>
<ref id="CR24">
<label>24.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Widmann</surname>
<given-names>G</given-names>
</name>
<name>
<surname>Stoffner</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Bale</surname>
<given-names>R</given-names>
</name>
</person-group>
<article-title>Errors and error management in image-guided craniomaxillofacial surgery</article-title>
<source>Oral Surg Oral Med Oral Pathol Oral Radiol Endod</source>
<year>2009</year>
<volume>107</volume>
<fpage>701</fpage>
<lpage>715</lpage>
<pub-id pub-id-type="doi">10.1016/j.tripleo.2009.02.011</pub-id>
<pub-id pub-id-type="pmid">19426922</pub-id>
</element-citation>
</ref>
<ref id="CR25">
<label>25.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Noh</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Nabha</surname>
<given-names>W</given-names>
</name>
<name>
<surname>Cho</surname>
<given-names>JH</given-names>
</name>
<name>
<surname>Hwang</surname>
<given-names>HS</given-names>
</name>
</person-group>
<article-title>Registration accuracy in the integration of laser-scanned dental images into maxillofacial cone-beam computed tomography images</article-title>
<source>Am J Orthod Dentofacial Orthop</source>
<year>2011</year>
<volume>140</volume>
<fpage>585</fpage>
<lpage>591</lpage>
<pub-id pub-id-type="doi">10.1016/j.ajodo.2011.04.018</pub-id>
<pub-id pub-id-type="pmid">21967948</pub-id>
</element-citation>
</ref>
<ref id="CR26">
<label>26.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fitzpatrick</surname>
<given-names>JM</given-names>
</name>
<name>
<surname>West</surname>
<given-names>JB</given-names>
</name>
</person-group>
<article-title>The Distribution of Target Registration Error in Rigid-Body Point-Based Registration</article-title>
<source>IEEE Trans Med Imaging</source>
<year>2001</year>
<volume>20</volume>
<fpage>917</fpage>
<lpage>927</lpage>
<pub-id pub-id-type="doi">10.1109/42.952729</pub-id>
<pub-id pub-id-type="pmid">11585208</pub-id>
</element-citation>
</ref>
<ref id="CR27">
<label>27.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Casap</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Wexler</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Eliashar</surname>
<given-names>R</given-names>
</name>
</person-group>
<article-title>Computerized navigation for surgery of the lower jaw: comparison of 2 navigation systems</article-title>
<source>J Oral Maxillofac Surg</source>
<year>2008</year>
<volume>66</volume>
<fpage>1467</fpage>
<lpage>1475</lpage>
<pub-id pub-id-type="doi">10.1016/j.joms.2006.06.272</pub-id>
<pub-id pub-id-type="pmid">18571032</pub-id>
</element-citation>
</ref>
<ref id="CR28">
<label>28.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Eggers</surname>
<given-names>G</given-names>
</name>
<name>
<surname>Kress</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Muhling</surname>
<given-names>J</given-names>
</name>
</person-group>
<article-title>Fully automated registration of intraoperative computed tomography image data for image-guided craniofacial surgery</article-title>
<source>J Oral Maxillofac Surg</source>
<year>2008</year>
<volume>66</volume>
<fpage>1754</fpage>
<lpage>1760</lpage>
<pub-id pub-id-type="doi">10.1016/j.joms.2007.12.019</pub-id>
<pub-id pub-id-type="pmid">18634971</pub-id>
</element-citation>
</ref>
<ref id="CR29">
<label>29.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zhu</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Chai</surname>
<given-names>G</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Ma</surname>
<given-names>X</given-names>
</name>
<name>
<surname>Gan</surname>
<given-names>J</given-names>
</name>
</person-group>
<article-title>Registration strategy using occlusal splint based on augmented reality for mandibular angle oblique split osteotomy</article-title>
<source>J Craniofac Surg</source>
<year>2011</year>
<volume>22</volume>
<fpage>1806</fpage>
<lpage>1809</lpage>
<pub-id pub-id-type="doi">10.1097/SCS.0b013e31822e8064</pub-id>
<pub-id pub-id-type="pmid">21959439</pub-id>
</element-citation>
</ref>
<ref id="CR30">
<label>30.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kang</surname>
<given-names>SH</given-names>
</name>
<name>
<surname>Kim</surname>
<given-names>MK</given-names>
</name>
<name>
<surname>Kim</surname>
<given-names>JH</given-names>
</name>
<name>
<surname>Park</surname>
<given-names>HK</given-names>
</name>
<name>
<surname>Park</surname>
<given-names>W</given-names>
</name>
</person-group>
<article-title>Marker-free registration for the accurate integration of CT images and the subject's anatomy during navigation surgery of the maxillary sinus</article-title>
<source>Dentomaxillofac Radiol</source>
<year>2012</year>
<volume>41</volume>
<fpage>679</fpage>
<lpage>685</lpage>
<pub-id pub-id-type="doi">10.1259/dmfr/21358271</pub-id>
<pub-id pub-id-type="pmid">22499127</pub-id>
</element-citation>
</ref>
<ref id="CR31">
<label>31.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bouchard</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Magill</surname>
<given-names>JC</given-names>
</name>
<name>
<surname>Nikonovskiy</surname>
<given-names>V</given-names>
</name>
<name>
<surname>Byl</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Murphy</surname>
<given-names>BA</given-names>
</name>
<name>
<surname>Kaban</surname>
<given-names>LB</given-names>
</name>
<etal></etal>
</person-group>
<article-title>Osteomark: a surgical navigation system for oral and maxillofacial surgery</article-title>
<source>Int J Oral Maxillofac Surg</source>
<year>2012</year>
<volume>41</volume>
<fpage>265</fpage>
<lpage>270</lpage>
<pub-id pub-id-type="doi">10.1016/j.ijom.2011.10.017</pub-id>
<pub-id pub-id-type="pmid">22103996</pub-id>
</element-citation>
</ref>
<ref id="CR32">
<label>32.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Marmulla</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Luth</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Muhling</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Hassfeld</surname>
<given-names>S</given-names>
</name>
</person-group>
<article-title>Markerless laser registration in image-guided oral and maxillofacial surgery</article-title>
<source>J Oral Maxillofac Surg</source>
<year>2004</year>
<volume>62</volume>
<fpage>845</fpage>
<lpage>851</lpage>
<pub-id pub-id-type="doi">10.1016/j.joms.2004.01.014</pub-id>
<pub-id pub-id-type="pmid">15218564</pub-id>
</element-citation>
</ref>
<ref id="CR33">
<label>33.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Shamir</surname>
<given-names>RR</given-names>
</name>
<name>
<surname>Joskowicz</surname>
<given-names>L</given-names>
</name>
</person-group>
<article-title>Geometrical analysis of registration errors in point-based rigid-body registration using invariants</article-title>
<source>Med Image Anal</source>
<year>2011</year>
<volume>15</volume>
<fpage>85</fpage>
<lpage>95</lpage>
<pub-id pub-id-type="doi">10.1016/j.media.2010.07.010</pub-id>
<pub-id pub-id-type="pmid">20800534</pub-id>
</element-citation>
</ref>
<ref id="CR34">
<label>34.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Khadem</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Yeh</surname>
<given-names>CC</given-names>
</name>
<name>
<surname>Sadeghi-Tehrani</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Bax</surname>
<given-names>MR</given-names>
</name>
<name>
<surname>Johnson</surname>
<given-names>JA</given-names>
</name>
<name>
<surname>Welch</surname>
<given-names>JN</given-names>
</name>
</person-group>
<article-title>Comparative tracking error analysis of five different optical tracking systems</article-title>
<source>Comput Aided Surg</source>
<year>2000</year>
<volume>5</volume>
<fpage>98</fpage>
<lpage>107</lpage>
<pub-id pub-id-type="doi">10.3109/10929080009148876</pub-id>
<pub-id pub-id-type="pmid">10862132</pub-id>
</element-citation>
</ref>
</ref-list>
</back>
</pmc>
</record>
