Exploration server on haptic devices

Warning: this site is under development!
Warning: this site was generated by computational means from raw corpora.
The information is therefore not validated.

Spectrally encoded fiber-based structured lighting probe for intraoperative 3D imaging

Internal identifier: 001C86 (Ncbi/Merge); previous: 001C85; next: 001C87

Spectrally encoded fiber-based structured lighting probe for intraoperative 3D imaging

Authors: Neil T. Clancy [United Kingdom]; Danail Stoyanov [United Kingdom]; Lena Maier-Hein [Germany]; Anja Groch [Germany]; Guang-Zhong Yang [United Kingdom]; Daniel S. Elson [United Kingdom]

Source:

RBID: PMC:3207380

Abstract

Three dimensional quantification of organ shape and structure during minimally invasive surgery (MIS) could enhance precision by allowing the registration of multi-modal or pre-operative image data (US/MRI/CT) with the live optical image. Structured illumination is one technique to obtain 3D information through the projection of a known pattern onto the tissue, although currently these systems tend to be used only for macroscopic imaging or open procedures rather than in endoscopy. To account for occlusions, where a projected feature may be hidden from view and/or confused with a neighboring point, a flexible multispectral structured illumination probe has been developed that labels each projected point with a specific wavelength using a supercontinuum laser. When imaged by a standard endoscope camera they can then be segmented using their RGB values, and their 3D coordinates calculated after camera calibration. The probe itself is sufficiently small (1.7 mm diameter) to allow it to be used in the biopsy channel of commonly used medical endoscopes. Surgical robots could therefore also employ this technology to solve navigation and visualization problems in MIS, and help to develop advanced surgical procedures such as natural orifice translumenal endoscopic surgery.


URL:
DOI: 10.1364/BOE.2.003119
PubMed: 22076272
PubMed Central: 3207380

Links to previous steps (curation, corpus...)


Links to Exploration step

PMC:3207380

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Spectrally encoded fiber-based structured lighting probe for intraoperative 3D imaging</title>
<author>
<name sortKey="Clancy, Neil T" sort="Clancy, Neil T" uniqKey="Clancy N" first="Neil T." last="Clancy">Neil T. Clancy</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">Hamlyn Centre for Robotic Surgery, Institute of Global Health Innovation, Imperial College London, SW7 2AZ,
<country>UK</country>
</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff2">Department of Surgery and Cancer, Imperial College London, SW7 2AZ,
<country>UK</country>
</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Stoyanov, Danail" sort="Stoyanov, Danail" uniqKey="Stoyanov D" first="Danail" last="Stoyanov">Danail Stoyanov</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">Hamlyn Centre for Robotic Surgery, Institute of Global Health Innovation, Imperial College London, SW7 2AZ,
<country>UK</country>
</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff3">Department of Computing, Imperial College London, SW7 2AZ,
<country>UK</country>
</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Maier Hein, Lena" sort="Maier Hein, Lena" uniqKey="Maier Hein L" first="Lena" last="Maier-Hein">Lena Maier-Hein</name>
<affiliation wicri:level="1">
<nlm:aff id="aff4">German Cancer Research Center, Div. Medical and Biological Informatics, D-69120, Heidelberg,
<country>Germany</country>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Groch, Anja" sort="Groch, Anja" uniqKey="Groch A" first="Anja" last="Groch">Anja Groch</name>
<affiliation wicri:level="1">
<nlm:aff id="aff4">German Cancer Research Center, Div. Medical and Biological Informatics, D-69120, Heidelberg,
<country>Germany</country>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Yang, Guang Zhong" sort="Yang, Guang Zhong" uniqKey="Yang G" first="Guang-Zhong" last="Yang">Guang-Zhong Yang</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">Hamlyn Centre for Robotic Surgery, Institute of Global Health Innovation, Imperial College London, SW7 2AZ,
<country>UK</country>
</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff3">Department of Computing, Imperial College London, SW7 2AZ,
<country>UK</country>
</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Elson, Daniel S" sort="Elson, Daniel S" uniqKey="Elson D" first="Daniel S." last="Elson">Daniel S. Elson</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">Hamlyn Centre for Robotic Surgery, Institute of Global Health Innovation, Imperial College London, SW7 2AZ,
<country>UK</country>
</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff2">Department of Surgery and Cancer, Imperial College London, SW7 2AZ,
<country>UK</country>
</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">22076272</idno>
<idno type="pmc">3207380</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3207380</idno>
<idno type="RBID">PMC:3207380</idno>
<idno type="doi">10.1364/BOE.2.003119</idno>
<date when="2011">2011</date>
<idno type="wicri:Area/Pmc/Corpus">001C03</idno>
<idno type="wicri:Area/Pmc/Curation">001C03</idno>
<idno type="wicri:Area/Pmc/Checkpoint">001978</idno>
<idno type="wicri:Area/Ncbi/Merge">001C86</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Spectrally encoded fiber-based structured lighting probe for intraoperative 3D imaging</title>
<author>
<name sortKey="Clancy, Neil T" sort="Clancy, Neil T" uniqKey="Clancy N" first="Neil T." last="Clancy">Neil T. Clancy</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">Hamlyn Centre for Robotic Surgery, Institute of Global Health Innovation, Imperial College London, SW7 2AZ,
<country>UK</country>
</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff2">Department of Surgery and Cancer, Imperial College London, SW7 2AZ,
<country>UK</country>
</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Stoyanov, Danail" sort="Stoyanov, Danail" uniqKey="Stoyanov D" first="Danail" last="Stoyanov">Danail Stoyanov</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">Hamlyn Centre for Robotic Surgery, Institute of Global Health Innovation, Imperial College London, SW7 2AZ,
<country>UK</country>
</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff3">Department of Computing, Imperial College London, SW7 2AZ,
<country>UK</country>
</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Maier Hein, Lena" sort="Maier Hein, Lena" uniqKey="Maier Hein L" first="Lena" last="Maier-Hein">Lena Maier-Hein</name>
<affiliation wicri:level="1">
<nlm:aff id="aff4">German Cancer Research Center, Div. Medical and Biological Informatics, D-69120, Heidelberg,
<country>Germany</country>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Groch, Anja" sort="Groch, Anja" uniqKey="Groch A" first="Anja" last="Groch">Anja Groch</name>
<affiliation wicri:level="1">
<nlm:aff id="aff4">German Cancer Research Center, Div. Medical and Biological Informatics, D-69120, Heidelberg,
<country>Germany</country>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Yang, Guang Zhong" sort="Yang, Guang Zhong" uniqKey="Yang G" first="Guang-Zhong" last="Yang">Guang-Zhong Yang</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">Hamlyn Centre for Robotic Surgery, Institute of Global Health Innovation, Imperial College London, SW7 2AZ,
<country>UK</country>
</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff3">Department of Computing, Imperial College London, SW7 2AZ,
<country>UK</country>
</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Elson, Daniel S" sort="Elson, Daniel S" uniqKey="Elson D" first="Daniel S." last="Elson">Daniel S. Elson</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">Hamlyn Centre for Robotic Surgery, Institute of Global Health Innovation, Imperial College London, SW7 2AZ,
<country>UK</country>
</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff2">Department of Surgery and Cancer, Imperial College London, SW7 2AZ,
<country>UK</country>
</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
</analytic>
<series>
<title level="j">Biomedical Optics Express</title>
<idno type="eISSN">2156-7085</idno>
<imprint>
<date when="2011">2011</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>Three dimensional quantification of organ shape and structure during minimally invasive surgery (MIS) could enhance precision by allowing the registration of multi-modal or pre-operative image data (US/MRI/CT) with the live optical image. Structured illumination is one technique to obtain 3D information through the projection of a known pattern onto the tissue, although currently these systems tend to be used only for macroscopic imaging or open procedures rather than in endoscopy. To account for occlusions, where a projected feature may be hidden from view and/or confused with a neighboring point, a flexible multispectral structured illumination probe has been developed that labels each projected point with a specific wavelength using a supercontinuum laser. When imaged by a standard endoscope camera they can then be segmented using their RGB values, and their 3D coordinates calculated after camera calibration. The probe itself is sufficiently small (1.7 mm diameter) to allow it to be used in the biopsy channel of commonly used medical endoscopes. Surgical robots could therefore also employ this technology to solve navigation and visualization problems in MIS, and help to develop advanced surgical procedures such as natural orifice translumenal endoscopic surgery.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Voros, S" uniqKey="Voros S">S. Voros</name>
</author>
<author>
<name sortKey="Long, J A" uniqKey="Long J">J.-A. Long</name>
</author>
<author>
<name sortKey="Cinquin, P" uniqKey="Cinquin P">P. Cinquin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schwartz, J J" uniqKey="Schwartz J">J. J. Schwartz</name>
</author>
<author>
<name sortKey="Lichtenstein, G R" uniqKey="Lichtenstein G">G. R. Lichtenstein</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kato, S" uniqKey="Kato S">S. Kato</name>
</author>
<author>
<name sortKey="Fu, K I" uniqKey="Fu K">K. I. Fu</name>
</author>
<author>
<name sortKey="Sano, Y" uniqKey="Sano Y">Y. Sano</name>
</author>
<author>
<name sortKey="Fujii, T" uniqKey="Fujii T">T. Fujii</name>
</author>
<author>
<name sortKey="Saito, Y" uniqKey="Saito Y">Y. Saito</name>
</author>
<author>
<name sortKey="Matsuda, T" uniqKey="Matsuda T">T. Matsuda</name>
</author>
<author>
<name sortKey="Koba, I" uniqKey="Koba I">I. Koba</name>
</author>
<author>
<name sortKey="Yoshida, S" uniqKey="Yoshida S">S. Yoshida</name>
</author>
<author>
<name sortKey="Fujimori, T" uniqKey="Fujimori T">T. Fujimori</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Edwards, P J" uniqKey="Edwards P">P. J. Edwards</name>
</author>
<author>
<name sortKey="King, A P" uniqKey="King A">A. P. King</name>
</author>
<author>
<name sortKey="Hawkes, D J" uniqKey="Hawkes D">D. J. Hawkes</name>
</author>
<author>
<name sortKey="Fleig, O" uniqKey="Fleig O">O. Fleig</name>
</author>
<author>
<name sortKey="Maurer, C R J" uniqKey="Maurer C">C. R. J. Maurer</name>
</author>
<author>
<name sortKey="Hill, D L" uniqKey="Hill D">D. L. Hill</name>
</author>
<author>
<name sortKey="Fenlon, M R" uniqKey="Fenlon M">M. R. Fenlon</name>
</author>
<author>
<name sortKey="De Cunha, D A" uniqKey="De Cunha D">D. A. de Cunha</name>
</author>
<author>
<name sortKey="Gaston, R P" uniqKey="Gaston R">R. P. Gaston</name>
</author>
<author>
<name sortKey="Chandra, S" uniqKey="Chandra S">S. Chandra</name>
</author>
<author>
<name sortKey="Mannss, J" uniqKey="Mannss J">J. Mannss</name>
</author>
<author>
<name sortKey="Strong, A J" uniqKey="Strong A">A. J. Strong</name>
</author>
<author>
<name sortKey="Gleeson, M J" uniqKey="Gleeson M">M. J. Gleeson</name>
</author>
<author>
<name sortKey="Cox, T C" uniqKey="Cox T">T. C. Cox</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Penney, G P" uniqKey="Penney G">G. P. Penney</name>
</author>
<author>
<name sortKey="Barratt, D C" uniqKey="Barratt D">D. C. Barratt</name>
</author>
<author>
<name sortKey="Chan, C S K" uniqKey="Chan C">C. S. K. Chan</name>
</author>
<author>
<name sortKey="Slomczykowski, M" uniqKey="Slomczykowski M">M. Slomczykowski</name>
</author>
<author>
<name sortKey="Carter, T J" uniqKey="Carter T">T. J. Carter</name>
</author>
<author>
<name sortKey="Edwards, P J" uniqKey="Edwards P">P. J. Edwards</name>
</author>
<author>
<name sortKey="Hawkes, D J" uniqKey="Hawkes D">D. J. Hawkes</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stoyanov, D" uniqKey="Stoyanov D">D. Stoyanov</name>
</author>
<author>
<name sortKey="Darzi, A" uniqKey="Darzi A">A. Darzi</name>
</author>
<author>
<name sortKey="Yang, G Z" uniqKey="Yang G">G.-Z. Yang</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Geng, J" uniqKey="Geng J">J. Geng</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wu, T T" uniqKey="Wu T">T. T. Wu</name>
</author>
<author>
<name sortKey="Cheung, T H" uniqKey="Cheung T">T.-H. Cheung</name>
</author>
<author>
<name sortKey="Yim, S F" uniqKey="Yim S">S.-F. Yim</name>
</author>
<author>
<name sortKey="Qu, J Y" uniqKey="Qu J">J. Y. Qu</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Chan, M" uniqKey="Chan M">M. Chan</name>
</author>
<author>
<name sortKey="Lin, W" uniqKey="Lin W">W. Lin</name>
</author>
<author>
<name sortKey="Zhou, C" uniqKey="Zhou C">C. Zhou</name>
</author>
<author>
<name sortKey="Qu, J Y" uniqKey="Qu J">J. Y. Qu</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Batlle, J" uniqKey="Batlle J">J. Batlle</name>
</author>
<author>
<name sortKey="Mouaddib, E" uniqKey="Mouaddib E">E. Mouaddib</name>
</author>
<author>
<name sortKey="Salvi, J" uniqKey="Salvi J">J. Salvi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Salvi, J" uniqKey="Salvi J">J. Salvi</name>
</author>
<author>
<name sortKey="Pages, J" uniqKey="Pages J">J. Pagès</name>
</author>
<author>
<name sortKey="Batlle, J" uniqKey="Batlle J">J. Batlle</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Chen, H J" uniqKey="Chen H">H. J. Chen</name>
</author>
<author>
<name sortKey="Zhang, J" uniqKey="Zhang J">J. Zhang</name>
</author>
<author>
<name sortKey="Fang, J" uniqKey="Fang J">J. Fang</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Smith, T" uniqKey="Smith T">T. Smith</name>
</author>
<author>
<name sortKey="Guild, J" uniqKey="Guild J">J. Guild</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">Biomed Opt Express</journal-id>
<journal-id journal-id-type="publisher-id">BOE</journal-id>
<journal-title-group>
<journal-title>Biomedical Optics Express</journal-title>
</journal-title-group>
<issn pub-type="epub">2156-7085</issn>
<publisher>
<publisher-name>Optical Society of America</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">22076272</article-id>
<article-id pub-id-type="pmc">3207380</article-id>
<article-id pub-id-type="publisher-id">152969</article-id>
<article-id pub-id-type="doi">10.1364/BOE.2.003119</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Endoscopes, Catheters and Micro-Optics</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Spectrally encoded fiber-based structured lighting probe for intraoperative 3D imaging</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Clancy</surname>
<given-names>Neil T.</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
<xref ref-type="corresp" rid="cor1">
<sup>*</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Stoyanov</surname>
<given-names>Danail</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="aff" rid="aff3">
<sup>3</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Maier-Hein</surname>
<given-names>Lena</given-names>
</name>
<xref ref-type="aff" rid="aff4">
<sup>4</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Groch</surname>
<given-names>Anja</given-names>
</name>
<xref ref-type="aff" rid="aff4">
<sup>4</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Yang</surname>
<given-names>Guang-Zhong</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="aff" rid="aff3">
<sup>3</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Elson</surname>
<given-names>Daniel S.</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
</contrib>
<aff id="aff1">
<label>1</label>
Hamlyn Centre for Robotic Surgery, Institute of Global Health Innovation, Imperial College London, SW7 2AZ,
<country>UK</country>
</aff>
<aff id="aff2">
<label>2</label>
Department of Surgery and Cancer, Imperial College London, SW7 2AZ,
<country>UK</country>
</aff>
<aff id="aff3">
<label>3</label>
Department of Computing, Imperial College London, SW7 2AZ,
<country>UK</country>
</aff>
<aff id="aff4">
<label>4</label>
German Cancer Research Center, Div. Medical and Biological Informatics, D-69120, Heidelberg,
<country>Germany</country>
</aff>
</contrib-group>
<author-notes>
<corresp id="cor1">
<label>*</label>
<email xlink:href="n.clancy@imperial.ac.uk">n.clancy@imperial.ac.uk</email>
</corresp>
</author-notes>
<pub-date pub-type="epub">
<day>25</day>
<month>10</month>
<year>2011</year>
</pub-date>
<pub-date pub-type="collection">
<day>01</day>
<month>11</month>
<year>2011</year>
</pub-date>
<pub-date pub-type="pmc-release">
<day>25</day>
<month>10</month>
<year>2011</year>
</pub-date>
<pmc-comment> PMC Release delay is 0 months and 0 days. </pmc-comment>
<volume>2</volume>
<issue>11</issue>
<fpage>3119</fpage>
<lpage>3128</lpage>
<history>
<date date-type="received">
<day>16</day>
<month>8</month>
<year>2011</year>
</date>
<date date-type="rev-recd">
<day>19</day>
<month>10</month>
<year>2011</year>
</date>
<date date-type="accepted">
<day>21</day>
<month>10</month>
<year>2011</year>
</date>
</history>
<permissions>
<copyright-statement>©2011 Optical Society of America</copyright-statement>
<copyright-year>2011</copyright-year>
<copyright-holder>OSA</copyright-holder>
<license license-type="open-access" xlink:href="http://creativecommons.org/licenses/by-nc-nd/3.0/">
<license-p>This is an open-access article distributed under the terms of the Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 Unported License, which permits download and redistribution, provided that the original work is properly cited. This license restricts the article from being modified or used commercially.</license-p>
</license>
</permissions>
<abstract>
<p>Three dimensional quantification of organ shape and structure during minimally invasive surgery (MIS) could enhance precision by allowing the registration of multi-modal or pre-operative image data (US/MRI/CT) with the live optical image. Structured illumination is one technique to obtain 3D information through the projection of a known pattern onto the tissue, although currently these systems tend to be used only for macroscopic imaging or open procedures rather than in endoscopy. To account for occlusions, where a projected feature may be hidden from view and/or confused with a neighboring point, a flexible multispectral structured illumination probe has been developed that labels each projected point with a specific wavelength using a supercontinuum laser. When imaged by a standard endoscope camera they can then be segmented using their RGB values, and their 3D coordinates calculated after camera calibration. The probe itself is sufficiently small (1.7 mm diameter) to allow it to be used in the biopsy channel of commonly used medical endoscopes. Surgical robots could therefore also employ this technology to solve navigation and visualization problems in MIS, and help to develop advanced surgical procedures such as natural orifice translumenal endoscopic surgery.</p>
</abstract>
<kwd-group kwd-group-type="OCIS">
<title>OCIS codes: </title>
<kwd>(330.1710) Color, measurement</kwd>
<kwd>(110.6880) Three-dimensional image acquisition</kwd>
<kwd>(170.1610) Clinical applications</kwd>
<kwd>(170.2150) Endoscopic imaging</kwd>
<kwd>(170.3890) Medical optics instrumentation</kwd>
</kwd-group>
<funding-group>
<award-group>
<funding-source id="sp1">EPSRC</funding-source>
<award-id rid="sp1">EP/E06342X/1</award-id>
<award-id rid="sp1">DT/F003064/1</award-id>
</award-group>
</funding-group>
<custom-meta-group>
<custom-meta>
<meta-name>OpenAccess</meta-name>
<meta-value>True</meta-value>
</custom-meta>
<custom-meta>
<meta-name>OpenAccessEmbargo</meta-name>
<meta-value>0</meta-value>
</custom-meta>
</custom-meta-group>
</article-meta>
</front>
<body>
<sec id="sec1-1">
<title>1. Introduction</title>
<p>Computer vision techniques have recently been applied in minimally invasive surgery (MIS) to provide more information to the surgeon from the white light endoscopic view and to compensate for the natural cues that are sacrificed when moving from the open surgery platform, for instance the loss of haptic feedback or stereo vision. These techniques may be used to track the surgical instruments, predict the motion of the tissue or detect the tissue surface profile [
<xref ref-type="bibr" rid="r1">1</xref>
,
<xref ref-type="bibr" rid="r2">2</xref>
], but they also have the potential to improve surgical accuracy and reliability by providing augmented views of the tissue and enabling diagnostic assistance.</p>
<p>The detection of tissue surface information in three dimensions has many possible applications in MIS including the registration of multimodal images or pre-operative image data, the improved navigation of robotic instruments and the measurement or characterization of tissue morphology for diagnostic purposes [
<xref ref-type="bibr" rid="r3">3</xref>
,
<xref ref-type="bibr" rid="r4">4</xref>
]. It is commonplace for preoperative MRI, CT or ultrasound scans of a patient to be used for surgical planning to locate and navigate to parenchymal tumors. However, during the surgery itself, surgeons often rely on the optical identification of anatomical landmarks and navigate based on the relative positions between the static pre-operative images and the intraoperative view to make a judgment on how to dissect the tissue. A longstanding aim of the field of computer-aided surgery is to help guide the surgery using an augmented reality view of the tissue, revealing hidden structures that are present in the preoperative images but not in the live endoscopic view [
<xref ref-type="bibr" rid="r5">5</xref>
]. This could also allow the current finite element tissue models to be fitted to the preoperative data to allow the augmented view to deform and shift as the tissue is manipulated under known boundary constraints [
<xref ref-type="bibr" rid="r6">6</xref>
]. An example of the use of tissue surface morphology is the evaluation of the shape of colonic polyps. These have been shown to be related to the status of the lesion itself and have been proposed as a method of guiding or even replacing biopsies [
<xref ref-type="bibr" rid="r3">3</xref>
,
<xref ref-type="bibr" rid="r4">4</xref>
]. The accuracy of this type of technique still depends on the experience and judgment of the observer. Quantification of polyp surface shape could be used as an objective measure and compared against a library of known cases to guide biopsy selection.</p>
<p>For the reasons mentioned above, obtaining an intraoperative measurement of tissue surface geometry is a widely pursued goal of the surgical imaging community. This has previously been attempted to allow multimodal registration in cadaver measurements [
<xref ref-type="bibr" rid="r7">7</xref>
], microscopic imaging [
<xref ref-type="bibr" rid="r5">5</xref>
] and radiotherapy guidance [
<xref ref-type="bibr" rid="r8">8</xref>
], although MIS applications remain relatively unexplored. Arguably the most successful method of 3D surface measurement during MIS has been stereo reconstruction, where a rigid endoscope with two imaging channels simultaneously acquires corresponding ‘left’ and ‘right’ views of the scene and uses a mathematical model based on a pinhole camera assumption to triangulate the position of salient features (areas of high contrast, such as the intersection of two blood vessels) found in both channels [
<xref ref-type="bibr" rid="r9">9</xref>
<xref ref-type="bibr" rid="r11">11</xref>
]. However, this technique is not suited to homogeneous or featureless tissues. In this case, structured lighting must be used to project artificial features onto the tissue surface. A number of techniques for extracting depth information using patterned light exist and involve the projection of lines, grids or dots [
<xref ref-type="bibr" rid="r12">12</xref>
]. However, the brightness required and the size of the optics needed to provide the pattern have, thus far, mostly limited their use to biomedical applications outside of the endoscopic regime due to the difficulties in miniaturizing projective imaging systems [
<xref ref-type="bibr" rid="r13">13</xref>
,
<xref ref-type="bibr" rid="r14">14</xref>
]. Previous attempts at endoscopic implementation have only been suitable for rigid endoscopy and were not compatible with pattern encoding techniques which are more immune to tissue occlusions [
<xref ref-type="bibr" rid="r15">15</xref>
].</p>
<p>To achieve dense surface reconstruction, a number of projected features are required, which introduces computational problems in identifying specific features in the presence of occlusions, where parts of the tissue may obscure part of the pattern. This results in discontinuities and errors in the 3D reconstruction. To overcome this, the pattern must be coded so that each projected feature can be identified, thereby minimizing the risk that a feature is confused with a ‘missing’ one, which would cause an error in the reconstruction. In other applications, this has been done spatially (using binary words in the pattern, for example) or by introducing a number of different colors into the pattern using digital projection technology [
<xref ref-type="bibr" rid="r16">16</xref>
<xref ref-type="bibr" rid="r18">18</xref>
]. However, these techniques were applied to non-endoscopic and in most cases non-biological situations.</p>
<p>In this paper, an endoscopic fiber-optic structured lighting probe for intraoperative 3D imaging is proposed to overcome the limitations of the technologies described above. It uses a broadband laser to encode a dense pattern of spots with a unique color for each, and a segmentation algorithm that recovers the spectral signature of each dot from standard RGB color CCD images. The probe is highly flexible, has a small diameter and produces a pattern of high brightness, making it suitable for use with biological tissue and with endoscope systems of low light collection efficiency. The optical set-up and spot identification algorithm are described, along with initial results characterizing the performance and limitations of the system and 3D reconstructions of
<italic>ex vivo</italic>
tissue.</p>
</sec>
<sec id="sec1-2">
<title>2. Materials and methods</title>
<sec id="sec2-1">
<title>2.1. Probe design</title>
<p>The optical set-up in
<xref ref-type="fig" rid="g001">Fig. 1</xref>
<fig id="g001" fig-type="figure" orientation="portrait" position="float">
<label>Fig. 1</label>
<caption>
<p>(a) Broadband laser light from the supercontinuum is dispersed by an SF-11 prism, which is then coupled into the fibers (50 μm core) at the array end of the probe. The projected pattern is a magnified image of the end face of the bundle formed by the projection lens. (b) Emission spectrum of the supercontinuum laser source, with the wavelength range used by the probe indicated between the dashed lines.</p>
</caption>
<graphic xlink:href="boe-2-11-3119-g001"></graphic>
</fig>
shows how the spectrally encoded pattern is generated. The collimated output of a 4 W supercontinuum laser (SC400-4, Fianium Ltd., Southampton, UK) is dispersed using a prism (SF-11 glass) and focused into a thin line to couple it into the 127 fibers of the array end of a custom-made line-to-spot converter (FiberTech Optica, Inc., Canada). The distances between the prism and the coupling lens, and the coupling lens and the fiber array are set to the lens focal length so that the spectral components are sharply focused at the linear array. At the distal end of the probe, a short focal length aspheric lens (
<italic>f</italic>
= 4.51 mm) is used to form an image of the end face of the probe. Since the fibers are incoherently bundled, the result is that a random pattern of colored dots is projected onto the screen or sample. The brass ferrule at the array end of the probe is 12.5 mm in diameter, but the rest of the probe has a maximum outer diameter of 1.7 mm making it compatible with biopsy ports of commonly used flexible endoscopes.</p>
<p>Since the light source is a broadband laser and the fibers are closely spaced at the input end (average separation of 68 ± 3 μm), each projected spot effectively contains a unique low bandwidth spectral feature. It should be noted that although the laser is nominally 4 W, it emits over a 420-2200 nm range meaning that most of that power is contained in the near infrared, which is filtered out prior to reaching the prism. Only visible wavelengths are coupled into the fiber bundle so that the final output power at the probe tip after coupling losses is approximately 110 mW, well below the tissue damage threshold.</p>
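As a rough illustration of the spectral encoding, the per-fiber wavelength separation can be estimated by dividing the dispersed visible band across the 127 fibers. The minimal sketch below assumes a ~450-650 nm usable band, an illustrative figure only, since the exact range between the dashed lines in Fig. 1(b) is not stated numerically here; only the 127-fiber count comes from the text.

# Back-of-the-envelope estimate of the wavelength separation between
# adjacent fibers in the linear array. The ~450-650 nm usable band is
# an assumption for illustration.
N_FIBERS = 127
BAND_NM = (450.0, 650.0)  # assumed visible band coupled into the bundle
delta_nm = (BAND_NM[1] - BAND_NM[0]) / (N_FIBERS - 1)
print(f"approx. {delta_nm:.2f} nm between adjacent fibers")  # ~1.6 nm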
</sec>
<sec id="sec2-2">
<title>2.2. Wavelength segmentation and centroiding</title>
<p>Spot identification and segmentation are based on recovery of the peak wavelength of each spot using knowledge of the relationship between the RGB colorspace and the wavelength of the light. A schematic is shown in
<xref ref-type="fig" rid="g002">Fig. 2</xref>
<fig id="g002" fig-type="figure" orientation="portrait" position="float">
<label>Fig. 2</label>
<caption>
<p>Spot segmentation and 3D calibration. (a) Cartoon image showing three projected spots, having different RGB values. (b) Each RGB triplet is converted to
<italic>xy</italic>
coordinates. A line projected through these coordinates from a reference white spot intersects the spectrum locus (dashed) at the dominant wavelength of the pixel. (c) RGB pixels are replaced by the calculated wavelength to form a greyscale ‘
<italic>λ</italic>
-space’ image which can be thresholded to find the centroids of spots of a particular wavelength. (d) Epipolar geometry showing different positions of a calibration object (c
<sub>1</sub>
-c
<sub>3</sub>
) and triangulation of points using spot centroids.</p>
</caption>
<graphic xlink:href="boe-2-11-3119-g002"></graphic>
</fig>
to illustrate the transformation between RGB and wavelength for each pixel and is further described in the text below.</p>
<p>The CIE 1931 xy colorspace is used to represent the colors of visible light in an area defined by wavelengths visible to the human eye [
<xref ref-type="bibr" rid="r19">19</xref>
]. The
<italic>xy</italic>
chromaticity coordinates of pure wavelengths form a curve called the spectrum locus at the boundary of the colorspace. A standard color digital camera, however, can only record a triangular subset of these colors, defined by the camera's RGB response indicated in
<xref ref-type="fig" rid="g002">Fig. 2 (b)</xref>
. When an RGB image of the spot pattern is recorded by a camera, the position of each pixel within this triangular space can be calculated with knowledge of the RGB system used. This is achieved for each pixel by converting the RGB triplet into the tristimulus values
<italic>XYZ</italic>
, and then into the chromaticity coordinates
<italic>xy</italic>
by multiplication by the 3 × 3 transformation matrix
<italic>M</italic>
. The elements of <italic>M</italic> are calculated based on the color coordinates of the vertices of the RGB triangle and the reference white [
<xref ref-type="bibr" rid="r20">20</xref>
].</p>
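For illustration, a minimal sketch of this per-pixel conversion follows, assuming the standard sRGB primaries with D65 reference white; the paper instead derives the matrix from the measured RGB system of the endoscope camera, which is not reproduced here.

import numpy as np

# Linear RGB -> XYZ matrix for the sRGB primaries with D65 reference
# white; a stand-in for the camera-specific matrix M described above.
M = np.array([[0.4124, 0.3576, 0.1805],
              [0.2126, 0.7152, 0.0722],
              [0.0193, 0.1192, 0.9505]])

def rgb_to_xy(rgb):
    """Convert a linear RGB triplet (components in [0, 1]) to CIE 1931
    xy chromaticity coordinates."""
    X, Y, Z = M @ np.asarray(rgb, dtype=float)
    total = X + Y + Z
    if total == 0.0:
        raise ValueError("black pixel has undefined chromaticity")
    return X / total, Y / total

# Example: a predominantly green pixel.
print(rgb_to_xy([0.1, 0.8, 0.2]))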
<p>The dominant wavelength detected at the pixel of interest is found by projecting a line from a reference white point, through the pixel’s
<italic>xy</italic>
coordinates, to the point of intersection with the spectrum locus. In practice this is achieved computationally by creating a look-up table of color coordinates representing the mapping of the spectrum locus onto the triangular RGB gamut. For each pixel in an image, the intersection point of a line defined by the reference white and pixel
<italic>xy</italic>
, and the nearest side of the RGB gamut is calculated. The closest point to this intersection is then located in the look-up table and its corresponding wavelength returned (
<xref ref-type="fig" rid="g002">Fig. 2 (b)</xref>
). The result is a greyscale ‘
<italic>λ</italic>
-space’ image where each pixel value is the calculated wavelength of the light detected.</p>
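A simplified sketch of the dominant-wavelength lookup is given below. It matches the direction of the white-point-to-pixel ray directly against a spectrum-locus table, rather than first intersecting the RGB gamut edge as the paper describes; `locus_xy` is a placeholder for the tabulated CIE 1931 locus chromaticities, which are not reproduced here, and the equal-energy white point and 1 nm sampling are assumptions.

import numpy as np

# Placeholder table: CIE 1931 spectrum-locus chromaticities sampled at
# 1 nm (assumed). Real entries would come from the tabulated CIE data.
locus_nm = np.arange(420, 681)
locus_xy = np.zeros((locus_nm.size, 2))   # fill with tabulated (x, y) values
WHITE = np.array([1.0 / 3.0, 1.0 / 3.0])  # equal-energy white (assumed)

def dominant_wavelength(xy):
    """Return the locus wavelength whose direction from the white point
    best matches the direction of the given pixel chromaticity."""
    d = np.asarray(xy, dtype=float) - WHITE
    d /= np.linalg.norm(d)
    v = locus_xy - WHITE
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    return locus_nm[np.argmax(v @ d)]  # most closely aligned locus entry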
<p>Once the ‘
<italic>λ</italic>
-image’ is acquired, spots of a particular wavelength can be isolated by wavelength thresholding and finding their centroids. The finite bandwidth of each projected spot, sensor noise and uncertainty in locating an RGB triplet’s dominant wavelength mean that each spot consists of a narrow distribution of wavelength values in
<italic>λ</italic>
-space. Therefore a simple thresholding of the
<italic>λ</italic>
-image at a wavelength of interest results in a cluster of pixels scattered within the region of the spot rather than a continuous area. A region-growing algorithm searches for these clusters by examining the nearest neighbors of each pixel in the thresholded image, and those with values within ± 1 nm of the wavelength of interest are deemed suitable for inclusion in the region. In this way, clusters of pixels at the location of a projected spot are ‘grown’ into each other to form a single identifiable region. As a final step, a median filter is applied to remove any erroneously detected pixels outside of the spot. The centroid of the detected region is then recorded along with the histogram of wavelength values within it. The peak value of this distribution is recorded as the ‘label’ for that particular spot and the process is repeated for multiple spots within the image.</p>
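The region-growing step can be sketched as a flood fill over the λ-image, as below. This is a minimal illustration assuming 4-connectivity and the ±1 nm tolerance described above; the final median-filtering clean-up step is omitted.

import numpy as np
from collections import deque

def grow_spot(lam_img, seed, tol_nm=1.0):
    """Grow a region in the wavelength image from an in-bounds seed pixel,
    accepting 4-connected neighbors within +/- tol_nm of the seed
    wavelength, and return the region with its centroid."""
    rows, cols = lam_img.shape
    target = lam_img[seed]
    region = set()
    frontier = deque([seed])
    while frontier:
        r, c = frontier.popleft()
        if (r, c) in region or not (0 <= r < rows and 0 <= c < cols):
            continue
        if abs(lam_img[r, c] - target) > tol_nm:
            continue
        region.add((r, c))
        frontier.extend([(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)])
    ys, xs = zip(*region)
    return region, (float(np.mean(ys)), float(np.mean(xs)))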
</sec>
<sec id="sec2-3">
<title>2.3. 3D reconstruction</title>
<p>Triangulation of the 3D location of the imaged spots is accomplished through a calibration based on the pinhole camera model and methods previously developed for stereo reconstruction [
<xref ref-type="bibr" rid="r9">9</xref>
]. A calibration object (a plane with a checkerboard pattern of known dimensions) was imaged at 12 different positions in the camera’s field of view. White light and patterned illumination images were recorded for each position of the object to acquire calibration data for calculating the intrinsic geometrical parameters of the camera and also for estimating the 3D path of light rays projected from the probe.</p>
<p>Using the intrinsic results, it was possible to estimate the 3D position of the calibration object in metric space, so with the knowledge that the corresponding images of patterned illumination must show the spots lying on these planes, their 3D locations could be calculated. For each calibration image of the projected spots, the centroid of each spot was located using the algorithm described in Section 2.2. Backprojecting each spot’s rays to find the intersection with the calibration plane provided a set of 3D positions that the structured light ray passed through. Lines representing light rays were fitted to each spot’s 3D data, composed of 12 calibration positions, giving the origin of the projection and its position relative to the camera (
<xref ref-type="fig" rid="g002">Fig. 2 (d)</xref>
). With the projection paths of the rays mapped out, the 3D position of an individual spot reflected from an object of unknown shape could be determined by finding its position on the camera’s image plane and projecting a ray through the ‘pinhole’ to intersect with its known projection path. Due to noise in the image, the intersection is not guaranteed, and the solution is taken as the midpoint of the shortest line connecting the two rays [
<xref ref-type="bibr" rid="r9">9</xref>
].</p>
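The two geometric steps described above, fitting a ray to each spot's calibration points and triangulating via the midpoint of the shortest segment between two rays, can be sketched as follows. This is a minimal illustration assuming unit-norm direction vectors, not the authors' actual implementation.

import numpy as np

def fit_ray(points):
    """Least-squares 3D line through a set of points (e.g. one spot's
    back-projected intersections with the 12 calibration planes):
    returns (origin, unit direction) via SVD."""
    P = np.asarray(points, dtype=float)
    origin = P.mean(axis=0)
    _, _, vt = np.linalg.svd(P - origin)
    return origin, vt[0]

def triangulate_midpoint(o1, d1, o2, d2):
    """Midpoint of the shortest segment connecting two skew rays
    x = o + t * d, with d1 and d2 unit-norm."""
    o1, d1, o2, d2 = (np.asarray(a, dtype=float) for a in (o1, d1, o2, d2))
    w = o1 - o2
    b = d1 @ d2
    denom = 1.0 - b * b
    if abs(denom) < 1e-12:
        raise ValueError("rays are (nearly) parallel")
    t1 = (b * (d2 @ w) - (d1 @ w)) / denom
    t2 = ((d2 @ w) - b * (d1 @ w)) / denom
    return 0.5 * ((o1 + t1 * d1) + (o2 + t2 * d2))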
</sec>
<sec id="sec2-4">
<title>2.4. Characterization and ex vivo testing</title>
<p>To test the effectiveness of the algorithm in recovering the wavelength of each spot, the pattern was projected onto a white screen and RGB images were recorded using a color CCD camera (DCU 223C, Thorlabs Ltd., UK). The white screen was then removed and spectra of individual spots were recorded using an optical fiber probe placed in the pattern’s plane and connected to a spectrometer (HR 4000, Ocean Optics, Inc., USA).</p>
<p>Since the optical properties of tissue within the body may vary significantly, the effect of background reflectivity on the detected spot wavelength was examined by imaging the pattern on different colored backgrounds. The performance of the system was then evaluated in biological tissue of varying optical properties using
<italic>ex vivo</italic>
kidney (ovine) and intestinal (porcine) tissue. Finally, the 3D reconstruction capability of the system was demonstrated by calibrating the projection probe and camera, and determining the shape of two validation objects (plane and cylinder) as well as
<italic>ex vivo</italic>
tissue of varying type, color and shape (kidney, fat and liver).</p>
</sec>
</sec>
<sec id="sec1-3">
<title>3. Results</title>
<sec id="sec2-5">
<title>3.1. Characterization: algorithm testing</title>
<p>
<xref ref-type="fig" rid="g003">Figure 3 (a)</xref>
<fig id="g003" fig-type="figure" orientation="portrait" position="float">
<label>Fig. 3</label>
<caption>
<p>(a) RGB image of pattern recorded by camera. (b)
<italic>λ</italic>
-space image with centroids of spots. (c) Plot showing spot wavelength as calculated by the segmentation algorithm against the wavelength measured using a spectrometer. The transmission response of the camera’s filters (normalized to 1) is overlaid and the identity line is shown in black. The error bars indicate ± 1 standard deviation.</p>
</caption>
<graphic xlink:href="boe-2-11-3119-g003"></graphic>
</fig>
shows the raw RGB image acquired by the camera when the pattern was incident on a plane white screen, while
<xref ref-type="fig" rid="g003">Fig. 3 (b)</xref>
shows the same pattern after conversion to
<italic>λ</italic>
-space. The wavelengths of the spots in the
<italic>λ</italic>
-space image after processing are plotted against the wavelengths measured with the spectrometer in
<xref ref-type="fig" rid="g003">Fig. 3 (c)</xref>
. In order to distinguish closely spaced wavelengths reliably, the algorithm requires a signal in at least two out of the three channels (red, green and blue), which define the working space of the camera’s RGB system. The filter response of the camera used, as measured in the lab using a reflectance standard and tunable filter, is also plotted in
<xref ref-type="fig" rid="g003">Fig. 3 (c)</xref>
and shows that the areas of the plot with the strongest correlation between calculated and measured wavelength correspond to spectral regions with strong overlap between adjacent channels.</p>
<p>The wavelength plot is marked by two areas of strong correlation separated by flat regions in the red (
<italic>λ</italic>
> 600 nm) and green (530 <
<italic>λ</italic>
< 560 nm), with significant noise in the blue (
<italic>λ</italic>
< 490 nm) due to low signal levels.</p>
<p>
<xref ref-type="fig" rid="g004">Figure 4</xref>
<fig id="g004" fig-type="figure" orientation="portrait" position="float">
<label>Fig. 4</label>
<caption>
<p>Calculated wavelength of a set of spots projected onto surfaces of different colors. (a) Blue and red card. (b)
<italic>Ex vivo</italic>
tissue: porcine intestine (inset, top left) and lamb kidney (inset, bottom right). The error bars represent ± 1 standard deviation.</p>
</caption>
<graphic xlink:href="boe-2-11-3119-g004"></graphic>
</fig>
shows how the detected spot wavelength varies with background optical properties. When card colored at opposite ends of the spectrum (red and blue) was used as a background (
<xref ref-type="fig" rid="g004">Fig. 4 (a)</xref>
), a strong correlation between the calculated wavelengths of corresponding spots is seen, and all values lie along the identity line (with the exception of a number of points at the far end of the red region;
<italic>λ</italic>
> 640 nm).</p>
<p>For
<italic>ex vivo</italic>
tissue (
<xref ref-type="fig" rid="g004">Fig. 4 (b)</xref>
), the wavelength values are again scattered along the identity line, showing that the values output by the algorithm are in broad agreement in the 450-500 nm and 550-650 nm regions despite the clear difference in tissue color (inset). There is a noticeable increase in error, however, at the blue end of the spectrum, where a number of points deviate from the expected values due to the lower signal levels.</p>
</sec>
<sec id="sec2-6">
<title>3.2. 3D tissue surface reconstruction</title>
<p>The results of the projector-camera calibration are shown in
<xref ref-type="fig" rid="g005">Fig. 5 (a)</xref>
<fig id="g005" fig-type="figure" orientation="portrait" position="float">
<label>Fig. 5</label>
<caption>
<p>Three dimensional calibration and validation. (a) Origin and propagation direction of projected spots with respect to the camera (origin) as determined during calibration routine. (b) Planar object. (c) Cylindrical object, diameter = 81 mm. (d) Cross-section of cylindrical object with least-squares fit.</p>
</caption>
<graphic xlink:href="boe-2-11-3119-g005"></graphic>
</fig>
and indicate the relationship between the origin and propagation direction of the ‘best fit’ rays emitted from the probe tip and their intersection with the calibration object at different positions. The average error in fitting each ‘ray’ to its set of calibration points was 0.32 mm.</p>
<p>Two opaque validation objects were used to test the system’s 3D reconstruction. These were a flat object (black planar plastic divider, thickness 2 mm) and a brown polymer cylinder (radius 40.5 mm), whose 3D coordinates are shown in
<xref ref-type="fig" rid="g005">Figs. 5 (b) and (c)</xref>
respectively. A mesh surface was fitted to the 3D point clouds using a least-squares minimization algorithm implemented in Matlab (The MathWorks, Inc., USA). Reprojection errors using the calibrated rays were 1.3 pixels in
<italic>x</italic>
and 1.06 pixels in
<italic>y.</italic>
The data in
<xref ref-type="fig" rid="g005">Fig. 5 (b)</xref>
were fitted by a planar surface with an error of 0.05 mm. The reconstructed cylindrical object was fitted by a cylinder with radius matching that of the validation object (40.5 mm) to sub-millimeter accuracy (rms difference = 0.15 mm), as shown in
<xref ref-type="fig" rid="g005">Figs. 5 (c) and (d)</xref>
.</p>
<p>Following the validation experiments, a number of samples of
<italic>ex vivo</italic>
tissue were examined to demonstrate operation of the probe and analysis algorithms over varying background optical properties and curvatures. The results in
<xref ref-type="fig" rid="g006">Fig. 6</xref>
<fig id="g006" fig-type="figure" orientation="portrait" position="float">
<label>Fig. 6</label>
<caption>
<p>Three-dimensional reconstruction of
<italic>ex vivo</italic>
tissue. (a) Porcine liver, ‘step’. (b) Ovine kidney, convex curve. (c) Porcine liver, convex curve. (d) Porcine tissue, border between fatty tissue and liver. (e) Porcine liver, ‘valley’.</p>
</caption>
<graphic xlink:href="boe-2-11-3119-g006"></graphic>
</fig>
show reconstructed 3D data for ovine kidney, porcine liver and fatty tissue, with varying physical features (convex/concave curve, ‘step’ discontinuity and miscellaneous ‘fine’ structure). The reconstructed surface profiles matched the observed tissue surfaces well, as can be seen in the color photographs presented. These tests demonstrate that the technique may be applied to
<italic>ex vivo</italic>
tissue, while the validation with the test objects demonstrates the accuracy that may be obtained.</p>
</sec>
</sec>
<sec id="sec1-4">
<title>4. Discussion</title>
<p>The structured lighting probe described in this paper seeks to address the requirements for pattern projection in minimally invasive surgery by providing a pattern of high brightness and density, and by employing a codification strategy based on spectral discrimination. The dispersed light from the supercontinuum laser ensures that each projected feature is labeled with a unique color, which can then be detected from the camera RGB image using the segmentation algorithm described.</p>
<p>Initial results have shown that by using the segmentation algorithm with knowledge of the camera’s RGB system it is possible to identify the wavelength of individual spots. However, the performance of this algorithm is limited to regions of the spectrum where there is an overlap between the transmission spectra of adjacent filters (blue/green, green/red). Outside of these regions, the camera only perceives a red, green or blue spot of varying intensity, resulting in a flat response below 490 nm, above 600 nm and between 530 and 560 nm.</p>
<p>Using flat screens of different colors as objects, the calculated wavelength of individual spots was observed to be constant. This is a feature of this structured lighting probe that stems from the sharp spectra of the individual spots (FWHM ≈5 nm), which meant that the observed color did not change and only a variation in spot intensity was observed as the background reflectance spectrum was altered. In biological tissues the same color invariance is observed, particularly in the overlapping spectral regions identified earlier. However, there are noticeable errors in the blue region, caused by longer-wavelength light from neighboring spots diffusing through the tissue. The intestinal tissue sample was not as strongly absorbing in the 400-500 nm region as the kidney, meaning that adjacent spots were more likely to be influenced by scattered light. For the blue spots this had the effect of biasing the calculated wavelength towards the green. The blue spots were more susceptible to this contamination from green spots than from red ones due to the relatively lower number of red spots in close proximity. Strong absorption by hemoglobin at wavelengths below 500 nm also led to low reflected intensities at these wavelengths.</p>
<p>For non-biological validation objects, reconstruction of their 3D shape in metric space was demonstrated. High accuracy was achieved, with reprojection errors on the order of one pixel in the calibration and sub-millimeter accuracy in measuring the cylindrical and planar validation objects.</p>
<p>Sections of
<italic>ex vivo</italic>
tissue were also reconstructed and visually appeared to match the observed tissue surfaces. For each tissue type the morphology could be distinguished, including the convex curves, valley and ‘step change’. However, fine structure in the fatty tissue was not resolved due to undersampling of the surface by the patterned light at that working distance, where the inter-spot spacing was of the order of several millimeters. This means that at long working distances this probe is limited to resolving sparse 3D structure with a resolution of the order of a centimeter. However, at short working distances (2 cm or less) the pattern diameter could be reduced to ≈2 cm and the inter-spot separation to ≈2 mm or less, which would make denser reconstruction possible. This would also be compatible with potential applications such as quantification of polyp shape during colonoscopy, where typical polyps can have diameters between 7 and 10 mm [
<xref ref-type="bibr" rid="r4">4</xref>
]. The segmentation and centroiding algorithm worked well in the green-red region of the spectrum, but a number of spots at the blue end could not be segmented and used in the reconstruction, reducing the accuracy of the final surface fit.</p>
<p>More calibration images could improve the accuracy of the calibration and hence of the reconstruction; however, the main source of error in the 3D reconstruction is in the detection of the spots. Their shape depends on distance and surface orientation, and any errors, from a partial occlusion of a spot for example, mean that the centroid does not reflect the correct ray path. The amount by which these image measurement errors are converted into triangulation errors is determined by the ‘baseline’ (the distance between the projector and camera). Previous results from stereo endoscopy have already demonstrated that reliable reconstructions can be achieved using a small baseline (≈5 mm) if the working distance is sufficiently short (< 5 cm) [
<xref ref-type="bibr" rid="r11">11</xref>
,
<xref ref-type="bibr" rid="r21">21</xref>
], which suggests that a future application for this device could be a colonoscope.</p>
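The baseline's effect can be illustrated with the standard pinhole-stereo error approximation, ΔZ ≈ Z²·Δd/(f·b). This is a textbook scaling relation, not a formula taken from the paper, and the parameter values below (in particular the 800 px focal length) are hypothetical.

# Illustrative depth-error scaling for a projector-camera baseline,
# using the standard stereo approximation dZ = Z^2 * d_err / (f * b).
# All parameter values here are hypothetical.
def depth_error_mm(Z_mm, baseline_mm, focal_px, disparity_err_px=1.0):
    return Z_mm ** 2 * disparity_err_px / (focal_px * baseline_mm)

# ~5 mm baseline at a 50 mm working distance, assumed 800 px focal length:
print(f"{depth_error_mm(50.0, 5.0, 800.0):.2f} mm")  # about 0.6 mm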
</sec>
<sec id="sec1-5">
<title>5. Conclusions</title>
<p>A structured lighting probe has been built that is capable of delivering patterned light to the tissue using a compact fiber probe that is compatible with existing endoscopic devices. The optical set-up addresses the correspondence problem by coding the pattern with different wavelengths, assigning a unique wavelength to each projected spot. This can minimize the problem of occlusion, common to structured lighting set-ups, by tracking exactly which projected features have been blocked from view and avoiding the risk of detected spots being matched to the calibrated rays of ‘missing’ spots.</p>
<p>A segmentation algorithm based on the CIE 1931 chromaticity diagram has been shown to successfully recover the wavelengths of individual spots in areas of the spectrum where there is an overlap between the transmission spectra of the camera’s RGB filters. In measurements on colored paper and tissue of varying optical properties it was shown that the calculated wavelength did not change with background reflectivity in the optimum region. This means that it will be possible to use the probe on a variety of tissues or at organ boundaries where optical properties may vary significantly.</p>
<p>Future work will be focused on optimizing the spectral output of the probe so that the number of spots with wavelengths in the optimal blue/green and green/red regions is maximized. A customized filter arrangement with increased spectral overlap will also be used to extend the range of these spectral regions. Spatial constraints will be added to the algorithm to aid discrimination of any spots that appear to have the same wavelength. To overcome some of the limitations imposed by absorption in tissue and to allow for practical clinical use, an interleaved imaging system will also be implemented. This would involve a high speed synchronized camera/shutter system to acquire patterned and white light images alternately in such a ratio as to minimize the visual impact of the pattern and make simultaneous normal white light viewing and 3D data acquisition possible. High-speed acquisition sequences of varying exposure times would also enable the acquisition of composite high dynamic range images where blue spots are made visible in the presence of hemoglobin without saturating those closer to the red end. Furthermore, absolute validations of the instrument accuracy in measurement of tissue will be carried out using ground truth data from CT images in order to better understand and minimize sources of error arising from diffusion and absorption of light. We believe that these modifications will allow a clinical structured lighting system to be constructed.</p>
</sec>
</body>
<back>
<ack>
<title>Acknowledgments</title>
<p>Funding for this project was provided by
<funding-source rid="sp1">EPSRC</funding-source>
grant
<award-id rid="sp1">EP/E06342X/1</award-id>
and
<award-id rid="sp1">DT/F003064/1</award-id>
.</p>
</ack>
</back>
</pmc>
<affiliations>
<list>
<country>
<li>Allemagne</li>
<li>Royaume-Uni</li>
</country>
</list>
<tree>
<country name="Royaume-Uni">
<noRegion>
<name sortKey="Clancy, Neil T" sort="Clancy, Neil T" uniqKey="Clancy N" first="Neil T." last="Clancy">Neil T. Clancy</name>
</noRegion>
<name sortKey="Clancy, Neil T" sort="Clancy, Neil T" uniqKey="Clancy N" first="Neil T." last="Clancy">Neil T. Clancy</name>
<name sortKey="Elson, Daniel S" sort="Elson, Daniel S" uniqKey="Elson D" first="Daniel S." last="Elson">Daniel S. Elson</name>
<name sortKey="Elson, Daniel S" sort="Elson, Daniel S" uniqKey="Elson D" first="Daniel S." last="Elson">Daniel S. Elson</name>
<name sortKey="Stoyanov, Danail" sort="Stoyanov, Danail" uniqKey="Stoyanov D" first="Danail" last="Stoyanov">Danail Stoyanov</name>
<name sortKey="Stoyanov, Danail" sort="Stoyanov, Danail" uniqKey="Stoyanov D" first="Danail" last="Stoyanov">Danail Stoyanov</name>
<name sortKey="Yang, Guang Zhong" sort="Yang, Guang Zhong" uniqKey="Yang G" first="Guang-Zhong" last="Yang">Guang-Zhong Yang</name>
<name sortKey="Yang, Guang Zhong" sort="Yang, Guang Zhong" uniqKey="Yang G" first="Guang-Zhong" last="Yang">Guang-Zhong Yang</name>
</country>
<country name="Allemagne">
<noRegion>
<name sortKey="Maier Hein, Lena" sort="Maier Hein, Lena" uniqKey="Maier Hein L" first="Lena" last="Maier-Hein">Lena Maier-Hein</name>
</noRegion>
<name sortKey="Groch, Anja" sort="Groch, Anja" uniqKey="Groch A" first="Anja" last="Groch">Anja Groch</name>
</country>
</tree>
</affiliations>
</record>

To work with this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Ncbi/Merge
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 001C86 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Ncbi/Merge/biblio.hfd -nk 001C86 | SxmlIndent | more

To link to this page within the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Ncbi
   |étape=   Merge
   |type=    RBID
   |clé=     PMC:3207380
   |texte=   Spectrally encoded fiber-based structured lighting probe for intraoperative 3D imaging
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Ncbi/Merge/RBID.i   -Sk "pubmed:22076272" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Ncbi/Merge/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024