Telematics exploration server


Stereo Vision Tracking of Multiple Objects in Complex Indoor Environments

Internal identifier: 000581 (Pmc/Corpus); previous: 000580; next: 000582


Authors: Marta Marrón-Romera; Juan C. García; Miguel A. Sotelo; Daniel Pizarro; Manuel Mazo; José M. Cañas; Cristina Losada; Álvaro Marcos

Source :

RBID : PMC:3230977

Abstract

This paper presents a novel system capable of solving the problem of tracking multiple targets in a crowded, complex and dynamic indoor environment, like those typical of mobile robot applications. The proposed solution is based on a stereo vision set in the acquisition step and a probabilistic algorithm in the obstacles position estimation process. The system obtains 3D position and speed information related to each object in the robot’s environment; then it achieves a classification between building elements (ceiling, walls, columns and so on) and the rest of items in robot surroundings. All objects in robot surroundings, both dynamic and static, are considered to be obstacles but the structure of the environment itself. A combination of a Bayesian algorithm and a deterministic clustering process is used in order to obtain a multimodal representation of speed and position of detected obstacles. Performance of the final system has been tested against state of the art proposals; test results validate the authors’ proposal. The designed algorithms and procedures provide a solution to those applications where similar multimodal data structures are found.


Url:
DOI: 10.3390/s101008865
PubMed: 22163385
PubMed Central: 3230977

Links to Exploration step

PMC:3230977

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Stereo Vision Tracking of Multiple Objects in Complex Indoor Environments</title>
<author>
<name sortKey="Marr N Romera, Marta" sort="Marr N Romera, Marta" uniqKey="Marr N Romera M" first="Marta" last="Marr N-Romera">Marta Marr N-Romera</name>
<affiliation>
<nlm:aff id="af1-sensors-10-08865"> Electronics Department, University of Alcalá, Campus Universitario s/n, 28805, Alcalá de Henares, Madrid, Spain; E-Mails:
<email>jcarlos@depeca.uah.es</email>
(J.G.);
<email>sotelo@depeca.uah.es</email>
(M.S.);
<email>pizarro@depeca.uah.es</email>
(D.P.);
<email>mazo@depeca.uah.es</email>
(M.M.);
<email>losada@depeca.uah.es</email>
(C.L.);
<email>alvaro.marcos@depeca.uah.es</email>
(A.M.)</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Garcia, Juan C" sort="Garcia, Juan C" uniqKey="Garcia J" first="Juan C." last="García">Juan C. García</name>
<affiliation>
<nlm:aff id="af1-sensors-10-08865"> Electronics Department, University of Alcalá, Campus Universitario s/n, 28805, Alcalá de Henares, Madrid, Spain; E-Mails:
<email>jcarlos@depeca.uah.es</email>
(J.G.);
<email>sotelo@depeca.uah.es</email>
(M.S.);
<email>pizarro@depeca.uah.es</email>
(D.P.);
<email>mazo@depeca.uah.es</email>
(M.M.);
<email>losada@depeca.uah.es</email>
(C.L.);
<email>alvaro.marcos@depeca.uah.es</email>
(A.M.)</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Sotelo, Miguel A" sort="Sotelo, Miguel A" uniqKey="Sotelo M" first="Miguel A." last="Sotelo">Miguel A. Sotelo</name>
<affiliation>
<nlm:aff id="af1-sensors-10-08865"> Electronics Department, University of Alcalá, Campus Universitario s/n, 28805, Alcalá de Henares, Madrid, Spain; E-Mails:
<email>jcarlos@depeca.uah.es</email>
(J.G.);
<email>sotelo@depeca.uah.es</email>
(M.S.);
<email>pizarro@depeca.uah.es</email>
(D.P.);
<email>mazo@depeca.uah.es</email>
(M.M.);
<email>losada@depeca.uah.es</email>
(C.L.);
<email>alvaro.marcos@depeca.uah.es</email>
(A.M.)</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Pizarro, Daniel" sort="Pizarro, Daniel" uniqKey="Pizarro D" first="Daniel" last="Pizarro">Daniel Pizarro</name>
<affiliation>
<nlm:aff id="af1-sensors-10-08865"> Electronics Department, University of Alcalá, Campus Universitario s/n, 28805, Alcalá de Henares, Madrid, Spain; E-Mails:
<email>jcarlos@depeca.uah.es</email>
(J.G.);
<email>sotelo@depeca.uah.es</email>
(M.S.);
<email>pizarro@depeca.uah.es</email>
(D.P.);
<email>mazo@depeca.uah.es</email>
(M.M.);
<email>losada@depeca.uah.es</email>
(C.L.);
<email>alvaro.marcos@depeca.uah.es</email>
(A.M.)</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Mazo, Manuel" sort="Mazo, Manuel" uniqKey="Mazo M" first="Manuel" last="Mazo">Manuel Mazo</name>
<affiliation>
<nlm:aff id="af1-sensors-10-08865"> Electronics Department, University of Alcalá, Campus Universitario s/n, 28805, Alcalá de Henares, Madrid, Spain; E-Mails:
<email>jcarlos@depeca.uah.es</email>
(J.G.);
<email>sotelo@depeca.uah.es</email>
(M.S.);
<email>pizarro@depeca.uah.es</email>
(D.P.);
<email>mazo@depeca.uah.es</email>
(M.M.);
<email>losada@depeca.uah.es</email>
(C.L.);
<email>alvaro.marcos@depeca.uah.es</email>
(A.M.)</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Ca As, Jose M" sort="Ca As, Jose M" uniqKey="Ca As J" first="José M." last="Ca As">José M. Ca As</name>
<affiliation>
<nlm:aff id="af2-sensors-10-08865"> Departamento de Sistemas Telemáticos y Computación, Universidad Rey Juan Carlos, C/Tulipán s/n, 28933, Móstoles, Madrid, Spain; E-Mail:
<email>jmplaza@gsyc.es</email>
(J.C.)</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Losada, Cristina" sort="Losada, Cristina" uniqKey="Losada C" first="Cristina" last="Losada">Cristina Losada</name>
<affiliation>
<nlm:aff id="af1-sensors-10-08865"> Electronics Department, University of Alcalá, Campus Universitario s/n, 28805, Alcalá de Henares, Madrid, Spain; E-Mails:
<email>jcarlos@depeca.uah.es</email>
(J.G.);
<email>sotelo@depeca.uah.es</email>
(M.S.);
<email>pizarro@depeca.uah.es</email>
(D.P.);
<email>mazo@depeca.uah.es</email>
(M.M.);
<email>losada@depeca.uah.es</email>
(C.L.);
<email>alvaro.marcos@depeca.uah.es</email>
(A.M.)</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Marcos, Alvaro" sort="Marcos, Alvaro" uniqKey="Marcos A" first="Álvaro" last="Marcos">Álvaro Marcos</name>
<affiliation>
<nlm:aff id="af1-sensors-10-08865"> Electronics Department, University of Alcalá, Campus Universitario s/n, 28805, Alcalá de Henares, Madrid, Spain; E-Mails:
<email>jcarlos@depeca.uah.es</email>
(J.G.);
<email>sotelo@depeca.uah.es</email>
(M.S.);
<email>pizarro@depeca.uah.es</email>
(D.P.);
<email>mazo@depeca.uah.es</email>
(M.M.);
<email>losada@depeca.uah.es</email>
(C.L.);
<email>alvaro.marcos@depeca.uah.es</email>
(A.M.)</nlm:aff>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">22163385</idno>
<idno type="pmc">3230977</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3230977</idno>
<idno type="RBID">PMC:3230977</idno>
<idno type="doi">10.3390/s101008865</idno>
<date when="2010">2010</date>
<idno type="wicri:Area/Pmc/Corpus">000581</idno>
<idno type="wicri:explorRef" wicri:stream="Pmc" wicri:step="Corpus" wicri:corpus="PMC">000581</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Stereo Vision Tracking of Multiple Objects in Complex Indoor Environments</title>
<author>
<name sortKey="Marr N Romera, Marta" sort="Marr N Romera, Marta" uniqKey="Marr N Romera M" first="Marta" last="Marr N-Romera">Marta Marr N-Romera</name>
<affiliation>
<nlm:aff id="af1-sensors-10-08865"> Electronics Department, University of Alcalá, Campus Universitario s/n, 28805, Alcalá de Henares, Madrid, Spain; E-Mails:
<email>jcarlos@depeca.uah.es</email>
(J.G.);
<email>sotelo@depeca.uah.es</email>
(M.S.);
<email>pizarro@depeca.uah.es</email>
(D.P.);
<email>mazo@depeca.uah.es</email>
(M.M.);
<email>losada@depeca.uah.es</email>
(C.L.);
<email>alvaro.marcos@depeca.uah.es</email>
(A.M.)</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Garcia, Juan C" sort="Garcia, Juan C" uniqKey="Garcia J" first="Juan C." last="García">Juan C. García</name>
<affiliation>
<nlm:aff id="af1-sensors-10-08865"> Electronics Department, University of Alcalá, Campus Universitario s/n, 28805, Alcalá de Henares, Madrid, Spain; E-Mails:
<email>jcarlos@depeca.uah.es</email>
(J.G.);
<email>sotelo@depeca.uah.es</email>
(M.S.);
<email>pizarro@depeca.uah.es</email>
(D.P.);
<email>mazo@depeca.uah.es</email>
(M.M.);
<email>losada@depeca.uah.es</email>
(C.L.);
<email>alvaro.marcos@depeca.uah.es</email>
(A.M.)</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Sotelo, Miguel A" sort="Sotelo, Miguel A" uniqKey="Sotelo M" first="Miguel A." last="Sotelo">Miguel A. Sotelo</name>
<affiliation>
<nlm:aff id="af1-sensors-10-08865"> Electronics Department, University of Alcalá, Campus Universitario s/n, 28805, Alcalá de Henares, Madrid, Spain; E-Mails:
<email>jcarlos@depeca.uah.es</email>
(J.G.);
<email>sotelo@depeca.uah.es</email>
(M.S.);
<email>pizarro@depeca.uah.es</email>
(D.P.);
<email>mazo@depeca.uah.es</email>
(M.M.);
<email>losada@depeca.uah.es</email>
(C.L.);
<email>alvaro.marcos@depeca.uah.es</email>
(A.M.)</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Pizarro, Daniel" sort="Pizarro, Daniel" uniqKey="Pizarro D" first="Daniel" last="Pizarro">Daniel Pizarro</name>
<affiliation>
<nlm:aff id="af1-sensors-10-08865"> Electronics Department, University of Alcalá, Campus Universitario s/n, 28805, Alcalá de Henares, Madrid, Spain; E-Mails:
<email>jcarlos@depeca.uah.es</email>
(J.G.);
<email>sotelo@depeca.uah.es</email>
(M.S.);
<email>pizarro@depeca.uah.es</email>
(D.P.);
<email>mazo@depeca.uah.es</email>
(M.M.);
<email>losada@depeca.uah.es</email>
(C.L.);
<email>alvaro.marcos@depeca.uah.es</email>
(A.M.)</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Mazo, Manuel" sort="Mazo, Manuel" uniqKey="Mazo M" first="Manuel" last="Mazo">Manuel Mazo</name>
<affiliation>
<nlm:aff id="af1-sensors-10-08865"> Electronics Department, University of Alcalá, Campus Universitario s/n, 28805, Alcalá de Henares, Madrid, Spain; E-Mails:
<email>jcarlos@depeca.uah.es</email>
(J.G.);
<email>sotelo@depeca.uah.es</email>
(M.S.);
<email>pizarro@depeca.uah.es</email>
(D.P.);
<email>mazo@depeca.uah.es</email>
(M.M.);
<email>losada@depeca.uah.es</email>
(C.L.);
<email>alvaro.marcos@depeca.uah.es</email>
(A.M.)</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Ca As, Jose M" sort="Ca As, Jose M" uniqKey="Ca As J" first="José M." last="Ca As">José M. Ca As</name>
<affiliation>
<nlm:aff id="af2-sensors-10-08865"> Departamento de Sistemas Telemáticos y Computación, Universidad Rey Juan Carlos, C/Tulipán s/n, 28933, Móstoles, Madrid, Spain; E-Mail:
<email>jmplaza@gsyc.es</email>
(J.C.)</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Losada, Cristina" sort="Losada, Cristina" uniqKey="Losada C" first="Cristina" last="Losada">Cristina Losada</name>
<affiliation>
<nlm:aff id="af1-sensors-10-08865"> Electronics Department, University of Alcalá, Campus Universitario s/n, 28805, Alcalá de Henares, Madrid, Spain; E-Mails:
<email>jcarlos@depeca.uah.es</email>
(J.G.);
<email>sotelo@depeca.uah.es</email>
(M.S.);
<email>pizarro@depeca.uah.es</email>
(D.P.);
<email>mazo@depeca.uah.es</email>
(M.M.);
<email>losada@depeca.uah.es</email>
(C.L.);
<email>alvaro.marcos@depeca.uah.es</email>
(A.M.)</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Marcos, Alvaro" sort="Marcos, Alvaro" uniqKey="Marcos A" first="Álvaro" last="Marcos">Álvaro Marcos</name>
<affiliation>
<nlm:aff id="af1-sensors-10-08865"> Electronics Department, University of Alcalá, Campus Universitario s/n, 28805, Alcalá de Henares, Madrid, Spain; E-Mails:
<email>jcarlos@depeca.uah.es</email>
(J.G.);
<email>sotelo@depeca.uah.es</email>
(M.S.);
<email>pizarro@depeca.uah.es</email>
(D.P.);
<email>mazo@depeca.uah.es</email>
(M.M.);
<email>losada@depeca.uah.es</email>
(C.L.);
<email>alvaro.marcos@depeca.uah.es</email>
(A.M.)</nlm:aff>
</affiliation>
</author>
</analytic>
<series>
<title level="j">Sensors (Basel, Switzerland)</title>
<idno type="eISSN">1424-8220</idno>
<imprint>
<date when="2010">2010</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>This paper presents a novel system capable of solving the problem of tracking multiple targets in a crowded, complex and dynamic indoor environment, like those typical of mobile robot applications. The proposed solution is based on a stereo vision set in the acquisition step and a probabilistic algorithm in the obstacles position estimation process. The system obtains 3D position and speed information related to each object in the robot’s environment; then it achieves a classification between building elements (ceiling, walls, columns and so on) and the rest of items in robot surroundings. All objects in robot surroundings, both dynamic and static, are considered to be obstacles but the structure of the environment itself. A combination of a Bayesian algorithm and a deterministic clustering process is used in order to obtain a multimodal representation of speed and position of detected obstacles. Performance of the final system has been tested against state of the art proposals; test results validate the authors’ proposal. The designed algorithms and procedures provide a solution to those applications where similar multimodal data structures are found.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Jia, Z" uniqKey="Jia Z">Z Jia</name>
</author>
<author>
<name sortKey="Balasuriya, A" uniqKey="Balasuriya A">A Balasuriya</name>
</author>
<author>
<name sortKey="Challa, S" uniqKey="Challa S">S Challa</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Khan, Z" uniqKey="Khan Z">Z Khan</name>
</author>
<author>
<name sortKey="Balch, T" uniqKey="Balch T">T Balch</name>
</author>
<author>
<name sortKey="Dellaert, F" uniqKey="Dellaert F">F Dellaert</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Isard, M" uniqKey="Isard M">M Isard</name>
</author>
<author>
<name sortKey="Blake, A" uniqKey="Blake A">A Blake</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Chen, Y" uniqKey="Chen Y">Y Chen</name>
</author>
<author>
<name sortKey="Huang, Ts" uniqKey="Huang T">TS Huang</name>
</author>
<author>
<name sortKey="Rui, Y" uniqKey="Rui Y">Y Rui</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Odobez, Jm" uniqKey="Odobez J">JM Odobez</name>
</author>
<author>
<name sortKey="Gatica Perez, D" uniqKey="Gatica Perez D">D Gatica-Perez</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Okuma, K" uniqKey="Okuma K">K Okuma</name>
</author>
<author>
<name sortKey="Taleghani, A" uniqKey="Taleghani A">A Taleghani</name>
</author>
<author>
<name sortKey="De Freitas, N" uniqKey="De Freitas N">N De Freitas</name>
</author>
<author>
<name sortKey="Little, Jj" uniqKey="Little J">JJ Little</name>
</author>
<author>
<name sortKey="Lowe, Dg" uniqKey="Lowe D">DG Lowe</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Thrun, S" uniqKey="Thrun S">S Thrun</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Arulampalam, Ms" uniqKey="Arulampalam M">MS Arulampalam</name>
</author>
<author>
<name sortKey="Maskell, S" uniqKey="Maskell S">S Maskell</name>
</author>
<author>
<name sortKey="Gordon, N" uniqKey="Gordon N">N Gordon</name>
</author>
<author>
<name sortKey="Clapp, T" uniqKey="Clapp T">T Clapp</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gordon, Nj" uniqKey="Gordon N">NJ Gordon</name>
</author>
<author>
<name sortKey="Salmond, Dj" uniqKey="Salmond D">DJ Salmond</name>
</author>
<author>
<name sortKey="Smith, Afm" uniqKey="Smith A">AFM Smith</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wang, X" uniqKey="Wang X">X Wang</name>
</author>
<author>
<name sortKey="Wang, S" uniqKey="Wang S">S Wang</name>
</author>
<author>
<name sortKey="Ma, J J" uniqKey="Ma J">J-J Ma</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Welch, G" uniqKey="Welch G">G Welch</name>
</author>
<author>
<name sortKey="Bishop, G" uniqKey="Bishop G">G Bishop</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Reid, Db" uniqKey="Reid D">DB Reid</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tweed, D" uniqKey="Tweed D">D Tweed</name>
</author>
<author>
<name sortKey="Calway, A" uniqKey="Calway A">A Calway</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Smith, K" uniqKey="Smith K">K Smith</name>
</author>
<author>
<name sortKey="Gatica Perez, D" uniqKey="Gatica Perez D">D Gatica-Perez</name>
</author>
<author>
<name sortKey="Odobez, Jm" uniqKey="Odobez J">JM Odobez</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Maccormick, J" uniqKey="Maccormick J">J MacCormick</name>
</author>
<author>
<name sortKey="Blake, A" uniqKey="Blake A">A Blake</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schulz, D" uniqKey="Schulz D">D Schulz</name>
</author>
<author>
<name sortKey="Burgard, W" uniqKey="Burgard W">W Burgard</name>
</author>
<author>
<name sortKey="Fox, D" uniqKey="Fox D">D Fox</name>
</author>
<author>
<name sortKey="Cremers, Ab" uniqKey="Cremers A">AB Cremers</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hue, C" uniqKey="Hue C">C Hue</name>
</author>
<author>
<name sortKey="Le Cadre, Jp" uniqKey="Le Cadre J">JP Le Cadre</name>
</author>
<author>
<name sortKey="Perez, P" uniqKey="Perez P">P Pérez</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Koller Meier, Eb" uniqKey="Koller Meier E">EB Koller-Meier</name>
</author>
<author>
<name sortKey="Ade, F" uniqKey="Ade F">F Ade</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schulz, D" uniqKey="Schulz D">D Schulz</name>
</author>
<author>
<name sortKey="Burgard, W" uniqKey="Burgard W">W Burgard</name>
</author>
<author>
<name sortKey="Fox, D" uniqKey="Fox D">D Fox</name>
</author>
<author>
<name sortKey="Cremers, Ab" uniqKey="Cremers A">AB Cremers</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bar Shalom, Y" uniqKey="Bar Shalom Y">Y Bar-Shalom</name>
</author>
<author>
<name sortKey="Fortmann, T" uniqKey="Fortmann T">T Fortmann</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Burguera, A" uniqKey="Burguera A">A Burguera</name>
</author>
<author>
<name sortKey="Gonzalez, Y" uniqKey="Gonzalez Y">Y González</name>
</author>
<author>
<name sortKey="Oliver, G" uniqKey="Oliver G">G Oliver</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Boufama, B" uniqKey="Boufama B">B Boufama</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Canny, Fj" uniqKey="Canny F">FJ Canny</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Vermaak, J" uniqKey="Vermaak J">J Vermaak</name>
</author>
<author>
<name sortKey="Doucet, A" uniqKey="Doucet A">A Doucet</name>
</author>
<author>
<name sortKey="Perez, P" uniqKey="Perez P">P Perez</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Marr N, M" uniqKey="Marr N M">M Marrón</name>
</author>
<author>
<name sortKey="Sotelo, Ma" uniqKey="Sotelo M">MA Sotelo</name>
</author>
<author>
<name sortKey="Garcia, Jc" uniqKey="Garcia J">JC García</name>
</author>
<author>
<name sortKey="Broddfelt, J" uniqKey="Broddfelt J">J Broddfelt</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bar Shalom, Y" uniqKey="Bar Shalom Y">Y Bar Shalom</name>
</author>
<author>
<name sortKey="Li, Xr" uniqKey="Li X">XR Li</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">Sensors (Basel)</journal-id>
<journal-title-group>
<journal-title>Sensors (Basel, Switzerland)</journal-title>
</journal-title-group>
<issn pub-type="epub">1424-8220</issn>
<publisher>
<publisher-name>Molecular Diversity Preservation International (MDPI)</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">22163385</article-id>
<article-id pub-id-type="pmc">3230977</article-id>
<article-id pub-id-type="doi">10.3390/s101008865</article-id>
<article-id pub-id-type="publisher-id">sensors-10-08865</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Stereo Vision Tracking of Multiple Objects in Complex Indoor Environments</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Marrón-Romera</surname>
<given-names>Marta</given-names>
</name>
<xref ref-type="aff" rid="af1-sensors-10-08865">
<sup>1</sup>
</xref>
<xref ref-type="corresp" rid="c1-sensors-10-08865">*</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>García</surname>
<given-names>Juan C.</given-names>
</name>
<xref ref-type="aff" rid="af1-sensors-10-08865">
<sup>1</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Sotelo</surname>
<given-names>Miguel A.</given-names>
</name>
<xref ref-type="aff" rid="af1-sensors-10-08865">
<sup>1</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Pizarro</surname>
<given-names>Daniel</given-names>
</name>
<xref ref-type="aff" rid="af1-sensors-10-08865">
<sup>1</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Mazo</surname>
<given-names>Manuel</given-names>
</name>
<xref ref-type="aff" rid="af1-sensors-10-08865">
<sup>1</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Cañas</surname>
<given-names>José M.</given-names>
</name>
<xref ref-type="aff" rid="af2-sensors-10-08865">
<sup>2</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Losada</surname>
<given-names>Cristina</given-names>
</name>
<xref ref-type="aff" rid="af1-sensors-10-08865">
<sup>1</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Marcos</surname>
<given-names>Álvaro</given-names>
</name>
<xref ref-type="aff" rid="af1-sensors-10-08865">
<sup>1</sup>
</xref>
</contrib>
</contrib-group>
<aff id="af1-sensors-10-08865">
<label>1</label>
Electronics Department, University of Alcalá, Campus Universitario s/n, 28805, Alcalá de Henares, Madrid, Spain; E-Mails:
<email>jcarlos@depeca.uah.es</email>
(J.G.);
<email>sotelo@depeca.uah.es</email>
(M.S.);
<email>pizarro@depeca.uah.es</email>
(D.P.);
<email>mazo@depeca.uah.es</email>
(M.M.);
<email>losada@depeca.uah.es</email>
(C.L.);
<email>alvaro.marcos@depeca.uah.es</email>
(A.M.)</aff>
<aff id="af2-sensors-10-08865">
<label>2</label>
Departamento de Sistemas Telemáticos y Computación, Universidad Rey Juan Carlos, C/Tulipán s/n, 28933, Móstoles, Madrid, Spain; E-Mail:
<email>jmplaza@gsyc.es</email>
(J.C.)</aff>
<author-notes>
<corresp id="c1-sensors-10-08865">
<label>*</label>
Author to whom correspondence should be addressed; E-Mail:
<email>marta@depeca.uah.es</email>
; Tel.: +34-918856586; Fax: +34-918856591.</corresp>
</author-notes>
<pub-date pub-type="collection">
<year>2010</year>
</pub-date>
<pub-date pub-type="epub">
<day>28</day>
<month>9</month>
<year>2010</year>
</pub-date>
<volume>10</volume>
<issue>10</issue>
<fpage>8865</fpage>
<lpage>8887</lpage>
<history>
<date date-type="received">
<day>31</day>
<month>8</month>
<year>2010</year>
</date>
<date date-type="rev-recd">
<day>7</day>
<month>9</month>
<year>2010</year>
</date>
<date date-type="accepted">
<day>25</day>
<month>9</month>
<year>2010</year>
</date>
</history>
<permissions>
<copyright-statement>© 2010 by the authors; licensee MDPI, Basel, Switzerland.</copyright-statement>
<copyright-year>2010</copyright-year>
<license>
<license-p>
<pmc-comment>CREATIVE COMMONS</pmc-comment>
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (
<ext-link ext-link-type="uri" xlink:href="http://creativecommons.org/licenses/by/3.0/">http://creativecommons.org/licenses/by/3.0/</ext-link>
).</license-p>
</license>
</permissions>
<abstract>
<p>This paper presents a novel system capable of solving the problem of tracking multiple targets in a crowded, complex and dynamic indoor environment, like those typical of mobile robot applications. The proposed solution is based on a stereo vision set in the acquisition step and a probabilistic algorithm in the obstacles position estimation process. The system obtains 3D position and speed information related to each object in the robot’s environment; then it achieves a classification between building elements (ceiling, walls, columns and so on) and the rest of items in robot surroundings. All objects in robot surroundings, both dynamic and static, are considered to be obstacles but the structure of the environment itself. A combination of a Bayesian algorithm and a deterministic clustering process is used in order to obtain a multimodal representation of speed and position of detected obstacles. Performance of the final system has been tested against state of the art proposals; test results validate the authors’ proposal. The designed algorithms and procedures provide a solution to those applications where similar multimodal data structures are found.</p>
</abstract>
<kwd-group>
<kwd>3D tracking</kwd>
<kwd>Bayesian estimation</kwd>
<kwd>stereo vision sensor</kwd>
<kwd>mobile robots</kwd>
</kwd-group>
</article-meta>
</front>
<body>
<sec>
<label>1.</label>
<title>Introduction</title>
<p>Visual tracking is one of the areas of greatest interest in robotics, as it is related to topics such as visual surveillance and mobile robot navigation. Multiple approaches to this problem have been developed by the research community over the last decades [
<xref ref-type="bibr" rid="b1-sensors-10-08865">1</xref>
]. Among them, a classification can be made according to the methods used to detect or extract information about the objects in the scene from the image:
<list list-type="bullet">
<list-item>
<p>With static cameras: background subtraction is generally applied to extract the image information corresponding to dynamic objects in the scene. This method is widespread among the research community [
<xref ref-type="bibr" rid="b2-sensors-10-08865">2</xref>
<xref ref-type="bibr" rid="b4-sensors-10-08865">4</xref>
], mainly in surveillance applications.</p>
</list-item>
<list-item>
<p>With a known model of the object to be tracked: this situation is very common in tracking applications, either using static cameras [
<xref ref-type="bibr" rid="b3-sensors-10-08865">3</xref>
,
<xref ref-type="bibr" rid="b4-sensors-10-08865">4</xref>
] or dynamic ones [
<xref ref-type="bibr" rid="b5-sensors-10-08865">5</xref>
,
<xref ref-type="bibr" rid="b6-sensors-10-08865">6</xref>
]. The detection process is computationally more expensive, but the detector is more robust and produces fewer false alarms than one looking for arbitrary kinds of objects.</p>
</list-item>
</list>
</p>
<p>All the cited works solve the detection problem quite easily, thanks to the application of the mentioned restrictions. However, an appropriate solution is more difficult to find when the problem to be solved is the navigation of a mobile robot in complex and crowded indoor environments (
<xref ref-type="fig" rid="f1-sensors-10-08865">Figure 1</xref>
), like museums, railway stations, airports, commercial centers,
<italic>etc.</italic>
In those scenarios any number of dynamic obstacles may be present, and the robot has to detect and track all of them in order to find a suitable path.</p>
<p>In this kind of scenario, both standard methods have important drawbacks. When models are used to detect the obstacles, there are problems with the execution time (obstacles may have moved far away before being identified) and with modeling every possible object that could be found in the environment. On the other hand, background subtraction cannot be used because the visual appearance of the scene changes continuously; any element in the robot’s visual environment may be an obstacle, apart from the objects that belong to the building structure in which the robot is located.</p>
<p>Because of the complexity of the information available from a visual sensor, it is convenient to first organize the visual data in the images into at least two classes: measurements coming from obstacles (obstacles class) and measurements coming from the environment (structural features class).</p>
<p>Once this information is available, data classified in the environment class can be used to reconstruct the structure of the robot’s surroundings. This process is especially interesting for robot navigation, as it can be used in a SLAM (Simultaneous Localization and Mapping [
<xref ref-type="bibr" rid="b7-sensors-10-08865">7</xref>
]) task.</p>
<p>At the same time, data assigned to the obstacles class can be used as input for any of the tracking algorithms proposed by the scientific community. Taking into account the characteristics of the measurements, the position tracker has to consider the noise associated with them in order to achieve reliable tracking results. Probabilistic algorithms, such as particle filters (PFs, [
<xref ref-type="bibr" rid="b8-sensors-10-08865">8</xref>
<xref ref-type="bibr" rid="b10-sensors-10-08865">10</xref>
]) and Kalman filters (KFs, [
<xref ref-type="bibr" rid="b11-sensors-10-08865">11</xref>
,
<xref ref-type="bibr" rid="b12-sensors-10-08865">12</xref>
]), can be used for this task, as they include this noisy behavior in the estimation process by means of a probabilistic model.</p>
<p>In any case, the objective is to calculate the posterior probability (also called belief,
<italic>p</italic>
(
<italic>x⃗
<sub>t</sub>
</italic>
|
<italic>y⃗</italic>
<sub>1:
<italic>t</italic>
</sub>
)) of the state vector
<italic>x⃗
<sub>t</sub>
</italic>
given the output vector
<italic>y⃗
<sub>t</sub>
</italic>
, which provides information about the position of the target. This is done by means of the Bayes rule, through a recursive two-step estimation process (prediction-correction) in which some of the involved variables are stochastic.</p>
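<p>For reference, the standard two-step Bayesian recursion behind this prediction-correction scheme can be written as follows (notation follows the text; the concrete motion model and observation model are left unspecified here):</p>
\begin{aligned}
\text{prediction:}\quad & p(\vec{x}_t \mid \vec{y}_{1:t-1}) = \int p(\vec{x}_t \mid \vec{x}_{t-1})\, p(\vec{x}_{t-1} \mid \vec{y}_{1:t-1})\, d\vec{x}_{t-1}\\
\text{correction:}\quad & p(\vec{x}_t \mid \vec{y}_{1:t}) \propto p(\vec{y}_t \mid \vec{x}_t)\, p(\vec{x}_t \mid \vec{y}_{1:t-1})
\end{aligned}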
<p>Most solutions to this multi-tracking problem use one estimator for each object to be tracked [
<xref ref-type="bibr" rid="b12-sensors-10-08865">12</xref>
,
<xref ref-type="bibr" rid="b13-sensors-10-08865">13</xref>
]. These techniques fall within what is called the MHT (Multi-Hypothesis Tracking) approach. It is also possible to use a single estimator for all the targets if the state vector size is dynamically adapted to include the state variables of the objects’ model as they appear or disappear in the scene [
<xref ref-type="bibr" rid="b14-sensors-10-08865">14</xref>
,
<xref ref-type="bibr" rid="b15-sensors-10-08865">15</xref>
]. Nevertheless, both options are too computationally expensive to be used in real-time applications.</p>
<p>Therefore, the most suitable solution is to exploit the multimodality of the probabilistic algorithms in order to include all the needed estimations in a single density function. With this idea, a PF is used as a multimodal estimator [
<xref ref-type="bibr" rid="b16-sensors-10-08865">16</xref>
,
<xref ref-type="bibr" rid="b17-sensors-10-08865">17</xref>
]. This idea has not been widely exploited by the scientific community, which cites the inefficiency of the estimation due to the impoverishment problem that the PF suffers when working with multimodal densities [
<xref ref-type="bibr" rid="b18-sensors-10-08865">18</xref>
,
<xref ref-type="bibr" rid="b19-sensors-10-08865">19</xref>
].</p>
<p>In any case, an association algorithm is needed. The association problem is easier if a single measurement per target is available at each sample time [
<xref ref-type="bibr" rid="b20-sensors-10-08865">20</xref>
]. In contrast, the larger the amount of information available from each object, the more reliable the estimation will be.</p>
<p>In the work presented here, the source of information is a vision system, chosen in order to obtain as much position information from each tracked object as possible. As a consequence, the required association algorithm also has a high computational load, but the reliability of the tracking process is increased.</p>
<p>The scientific community has tested different alternatives for the association task, including Maximum Likelihood (ML), Nearest Neighbor (NN) and Probabilistic Data Association (PDA) [
<xref ref-type="bibr" rid="b20-sensors-10-08865">20</xref>
]. In our case, we have selected the NN solution due to its deterministic character. Finally, not all the proposals referred to in this introduction are appropriate if the number of objects to track is variable: an extension of the previously mentioned algorithms is necessary.</p>
<p>In our work, the multimodal ability of the PF is used, and its impoverishment problem is mitigated by a deterministic NN clustering process that, used as the association process, is combined with the probabilistic approach in order to obtain efficient multi-tracking results. We use an extended version of the Bootstrap particle filter [
<xref ref-type="bibr" rid="b9-sensors-10-08865">9</xref>
], called XPFCP (eXtended Particle Filter with Clustering Process), to perform the position estimation task with a single filter, in real time, while tracking a variable number of objects detected with the on-board stereo vision process.
<xref ref-type="fig" rid="f2-sensors-10-08865">Figure 2</xref>
shows a functional description of the whole tracking application.</p>
<p>Data classified as belonging to the structural features class can be used by standard SLAM algorithms for environmental reconstruction tasks; however, this question is out of the scope of the present paper, as is a detailed description of the stereo vision system.</p>
<p>This paper describes the functionality of the two main processes of the multi-tracking proposal: Section 2 details the object detector, classifier and 3D locator; Section 3 describes the multiple obstacles tracker, the XPFCP algorithm; Section 4 shows the results obtained under a set of testing scenarios. Finally, the paper ends with conclusions about the behavior of the whole system and the results obtained.</p>
</sec>
<sec>
<label>2.</label>
<title>Detection, Classification and Localization Processes</title>
<p>A stereo vision subsystem is considered one of the most suitable ways to acquire relevant information about the different elements found in a dynamic environment. That is because:
<list list-type="bullet">
<list-item>
<p>The amount of information that can be extracted from an image is much larger than what can be obtained from any other kind of sensor, such as laser or sonar [
<xref ref-type="bibr" rid="b21-sensors-10-08865">21</xref>
].</p>
</list-item>
<list-item>
<p>As the environmental configuration changes with time, it is not possible to obtain the depth coordinate of the objects’ position vector with a single camera, and thus a stereo vision arrangement is needed.</p>
</list-item>
</list>
</p>
<p>An alternative to this visual sensor configuration could be a Time-Of-Flight (TOF) camera, which provides depth information directly. However, these cameras are currently not available at an affordable price, and the information obtained with this sensor is still far from versatile (not valid for long distances) or accurate (a post-acquisition process is normally needed in order to compensate for reflection effects).</p>
<p>A matching process based on the epipolar geometry of the stereo vision system provides the desired 3D position input information [
<italic>x
<sub>p,t</sub>
y
<sub>p,t</sub>
z
<sub>p,t</sub>
</italic>
]
<italic>
<sup>T</sup>
</italic>
of a point
<bold>P</bold>
<italic>
<sub>t</sub>
</italic>
from its projections,
<bold>p</bold>
<italic>
<sub>l,t</sub>
</italic>
and
<bold>p</bold>
<italic>
<sub>r,t</sub>
</italic>
, in a pair of synchronized images (
<italic>I
<sub>l,t</sub>
</italic>
= [
<italic>u
<sub>l,p,t</sub>
v
<sub>l,p,t</sub>
</italic>
]
<italic>
<sup>T</sup>
</italic>
,
<italic>I
<sub>r,t</sub>
</italic>
= [
<italic>u
<sub>r,p,t</sub>
v
<sub>r,p,t</sub>
</italic>
]
<italic>
<sup>T</sup>
</italic>
), as shown in
<xref ref-type="fig" rid="f3-sensors-10-08865">Figure 3</xref>
.</p>
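<p>As an illustration only, the sketch below recovers the 3D point from a matched pixel pair for the simplified case of a rectified stereo rig; the focal length, principal point and baseline are assumed example parameters, whereas the paper works with a general calibrated stereo configuration:</p>
import numpy as np

def triangulate_rectified(ul, vl, ur, f=700.0, cx=320.0, cy=240.0, b=0.30):
    """Recover [x, y, z] (camera frame) from left pixel (ul, vl) and the
    matched right pixel column ur, assuming a rectified rig with focal
    length f (pixels), principal point (cx, cy) and baseline b (meters).
    All parameter values are illustrative assumptions."""
    d = ul - ur                      # disparity in pixels
    if d <= 0:
        return None                  # point at infinity or bad match
    z = f * b / d                    # depth
    x = (ul - cx) * z / f
    y = (vl - cy) * z / f
    return np.array([x, y, z])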
<p>In this work, the left-right image matching process is solved with a Zero Mean Normalized Cross Correlation (ZNCC), due to its robustness [
<xref ref-type="bibr" rid="b22-sensors-10-08865">22</xref>
]. Each sampling time,
<italic>t</italic>
, for every pixel of interest (
<italic>i.e.</italic>
, in the left image),
<italic>I
<sub>l,t</sub>
</italic>
= [
<italic>u
<sub>l,p,t</sub>
v
<sub>l,p,t</sub>
</italic>
]
<italic>
<sup>T</sup>
</italic>
), this process consists of looking for a similar gray level among the pixels on the epipolar line of the paired image (the right one,
<italic>I
<sub>r,t</sub>
</italic>
). The 3D location of paired pixels can be found if, after a careful calibration of both camera positions, the extrinsic geometric parameters of rotation,
<italic>R
<sub>lr</sub>
</italic>
, and translation,
<italic>T
<sub>lr</sub>
</italic>
, are known.</p>
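<p>A minimal sketch of the ZNCC matching along the epipolar line is given below for a rectified pair (so the epipolar line is an image row); the window size, search range and acceptance threshold are illustrative assumptions, not values from the paper:</p>
import numpy as np

def zncc(a, b, eps=1e-6):
    """Zero-mean normalized cross correlation between two equal-size windows."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.sqrt((a * a).sum() * (b * b).sum()) + eps))

def match_pixel(left, right, u, v, half=5, max_disp=64):
    """Return (best disparity, best ZNCC score) for left pixel (u, v),
    scanning the same row of the rectified right image."""
    win_l = left[v - half:v + half + 1, u - half:u + half + 1].astype(np.float64)
    best_d, best_s = 0, -1.0
    for d in range(max_disp):
        ur = u - d
        if ur - half < 0:
            break
        win_r = right[v - half:v + half + 1, ur - half:ur + half + 1].astype(np.float64)
        s = zncc(win_l, win_r)
        if s > best_s:
            best_d, best_s = d, s
    return best_d, best_s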
<p>As can be expected, this process is very time consuming. Therefore the 3D information to be obtained should be limited to a set of points of interest in both images. In the case of this work, points coming from object edges carry enough information to perform the tracking task. Moreover, the edge information alone enables the possibility of partially reconstructing the structure of the environment in which the tracking is carried out. The global data acquisition process proposed in this paper includes the following main tasks: detection and classification, and 3D localization. Details of these two tasks are shown in
<xref ref-type="fig" rid="f4-sensors-10-08865">Figure 4</xref>
.</p>
<sec>
<label>2.1.</label>
<title>Detection and Classification</title>
<p>The detection and classification process (top group in
<xref ref-type="fig" rid="f4-sensors-10-08865">Figure 4</xref>
) is executed with each pair of frames (
<italic>I
<sub>l,t</sub>
</italic>
and
<italic>I
<sub>r,t</sub>
</italic>
) synchronously acquired in sampling time,
<italic>t</italic>
, from the stereo-camera set. This process is developed through the following steps.</p>
<sec>
<label>2.1.1.</label>
<title>Detection</title>
<p>Edge information is extracted from the pair of cameras with a Canny filter [
<xref ref-type="bibr" rid="b23-sensors-10-08865">23</xref>
]. This information is enough both to track all the objects in the wandering robot’s environment and to partially reconstruct the environment structure.</p>
<p>Left image
<italic>I
<sub>l,t</sub>
</italic>
= [
<italic>u
<sub>l,p,t</sub>
v
<sub>l,p,t</sub>
</italic>
]
<italic>
<sup>T</sup>
</italic>
is used to extract those pixels that may be of interest for the tracking process. Image edges from human contours, tables, doors, columns, and so on are visible and distinguishable from the background (even in quite crowded scenes) and can be easily extracted from the filtered image.</p>
<p>In order to robustly find structural features, the Canny image is zeroed in the Regions Of Interest (ROIs) where an obstacle is expected to appear. Therefore, the classification step is run over a partial Canny image,
<inline-formula>
<mml:math id="M1">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>I</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">canny</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>l</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:msub>
<mml:mi>u</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>l</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mi></mml:mi>
<mml:msub>
<mml:mi>v</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>l</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>]</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>:</mml:mo>
<mml:mi mathvariant="italic">mcanny</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>T</mml:mi>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula>
, though the full image is recovered to develop the 3D localization.</p>
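<p>A minimal sketch of this detection step using OpenCV’s Python bindings is shown below; the file name, Canny thresholds and the example ROI are illustrative assumptions (the paper does not report concrete values):</p>
import cv2
import numpy as np

# Left image of the synchronized pair (path is illustrative).
I_l = cv2.imread("left_frame.png", cv2.IMREAD_GRAYSCALE)

# Edge detection (thresholds are assumptions, not values from the paper).
I_canny_l = cv2.Canny(I_l, 50, 150)

# Zero the edge image inside the ROIs where obstacles are expected,
# so that the structural-feature search runs on a partial Canny image.
obstacle_rois = [(120, 80, 60, 200)]          # (x, y, w, h), illustrative
I_partial = I_canny_l.copy()
for (x, y, w, h) in obstacle_rois:
    I_partial[y:y + h, x:x + w] = 0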
</sec>
<sec>
<label>2.1.2.</label>
<title>Classification: Structural and Non-Structural Features</title>
<p>Within the partial Canny image
<italic>I
<sub>canny,l,t</sub>
</italic>
, edges corresponding to environmental structures are characterized by forming long lines. Thus, the classification process starts by seeking structural shapes in the resulting image through these typical features. The Hough transform is used to search for these long line segments in the partial Canny image.</p>
<p>The function
<italic>cvHoughLines2</italic>
[
<xref ref-type="bibr" rid="b24-sensors-10-08865">24</xref>
] from OpenCV [
<xref ref-type="bibr" rid="b25-sensors-10-08865">25</xref>
] library is used to compute the probabilistic Hough transform. This OpenCV version of the Hough transform allows finding line segments instead of whole lines when the image contains few long linear segments, which is the case in the present application, where obstacles in front of the camera set may occlude the structural elements of the scene.</p>
<p>This probabilistic version of the Hough transform has five parameters to be tuned (a usage sketch is given after this list):
<list list-type="bullet">
<list-item>
<p>
<italic>rho</italic>
and
<italic>theta</italic>
are respectively the basic Hough transform distance and angle resolution parameters in pixels and radians.</p>
</list-item>
<list-item>
<p>
<italic>threshold</italic>
is the minimum value that the Hough accumulator must exceed in order to consider that a line exists.</p>
</list-item>
<list-item>
<p>
<italic>length</italic>
is specific to the probabilistic version of the Hough transform, and is the minimum line length, in pixels, accepted by the segment detector. This parameter is very important in this work, as it allows taking into account a line made up of very short segments, like those generated in scenes with many occlusions.</p>
</list-item>
<list-item>
<p>
<italic>gap</italic>
is also specific to the probabilistic version of the Hough transform. It is the maximum gap, in pixels, between collinear segments that are still treated as a single line segment. This parameter is significant here, because it allows generating valid lines from widely separated segments, a situation caused by occluding obstacles.</p>
</list-item>
</list>
</p>
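<p>As a sketch only, the equivalent call in the OpenCV Python interface is cv2.HoughLinesP, which exposes the same five parameters; the numeric values below are illustrative assumptions, since the paper tunes them empirically without reporting numbers:</p>
import numpy as np
import cv2

# I_partial is the partial Canny image from the detection sketch above.
lines = cv2.HoughLinesP(
    I_partial,
    rho=1,                  # distance resolution in pixels (assumed)
    theta=np.pi / 180,      # angle resolution in radians (assumed)
    threshold=40,           # minimum accumulator votes (assumed)
    minLineLength=30,       # 'length': shortest accepted segment, pixels (assumed)
    maxLineGap=20,          # 'gap': largest gap merged into one segment (assumed)
)

# Draw the detected structural segments onto a mask image.
I_structure = np.zeros_like(I_partial)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(I_structure, (x1, y1), (x2, y2), 255, 1)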
<p>Due to the diversity of conditions that may appear in the experiments, an analytical study cannot be performed, and thus all the parameters have been set empirically. As a result of the challenging placement of obstacles in the present application, not all lines related to structural elements in the environment are classified as structural features. In any case, the algorithm detects the structural features existing in the scene (walls, columns, ceiling, floor, windows and so on) well enough. In the same way, it also generates an obstacles’ features class clean enough to be used in the tracking step.</p>
<p>At the end of this classification step, two images are therefore obtained using the described process (a short sketch follows the list):
<list list-type="bullet">
<list-item>
<p>
<inline-formula>
<mml:math id="M2">
<mml:mrow>
<mml:msub>
<mml:mi>I</mml:mi>
<mml:mrow>
<mml:mi mathvariant="italic">structure</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>l</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>u</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>l</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo></mml:mo>
<mml:msub>
<mml:mi>v</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>l</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>]</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>:</mml:mo>
<mml:mi mathvariant="italic">mstructure</mml:mi>
</mml:mrow>
<mml:mi>T</mml:mi>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula>
with the environmental structures, formed by the long lines found at the partial Canny image.</p>
</list-item>
<list-item>
<p>
<inline-formula>
<mml:math id="M3">
<mml:mrow>
<mml:msub>
<mml:mi>I</mml:mi>
<mml:mrow>
<mml:mi mathvariant="italic">obstacles</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>l</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>u</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>l</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo></mml:mo>
<mml:msub>
<mml:mi>v</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>l</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>]</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>:</mml:mo>
<mml:mi mathvariant="italic">mobstacles</mml:mi>
</mml:mrow>
<mml:mi>T</mml:mi>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula>
with the full Canny image zeroed at the environmental structures.</p>
</list-item>
</list>
</p>
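<p>Continuing the illustrative sketches above (variable names are the ones introduced there, not the paper’s code), one plausible way to materialize these two images is:</p>
import cv2

# From the previous sketches: I_canny_l is the full Canny image of the left
# frame and I_structure is the mask of the long Hough segments.
structure_mask = cv2.dilate(I_structure, None)      # tolerate 1-pixel offsets
I_structure_l = cv2.bitwise_and(I_canny_l, structure_mask)
I_obstacles_l = cv2.bitwise_and(I_canny_l, cv2.bitwise_not(structure_mask))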
</sec>
</sec>
<sec>
<label>2.2.</label>
<title>3D Localization of Structural and Obstacles’ Features</title>
<p>Both images are the inputs to a 3D localization process to obtain the 3D coordinates of structural
<inline-formula>
<mml:math id="M4">
<mml:mrow>
<mml:msub>
<mml:mi>Y</mml:mi>
<mml:mrow>
<mml:mi mathvariant="italic">structure</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo></mml:mo>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo></mml:mo>
<mml:msub>
<mml:mi>z</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>]</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>:</mml:mo>
<mml:mi mathvariant="italic">mstructure</mml:mi>
</mml:mrow>
<mml:mi>T</mml:mi>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula>
and obstacles’ features
<inline-formula>
<mml:math id="M5">
<mml:msub>
<mml:mi>Y</mml:mi>
<mml:mrow>
<mml:mi mathvariant="italic">obstacles</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo></mml:mo>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo></mml:mo>
<mml:msub>
<mml:mi>z</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>]</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>:</mml:mo>
<mml:mi mathvariant="italic">mobstacles</mml:mi>
</mml:mrow>
<mml:mi>T</mml:mi>
</mml:msubsup>
</mml:math>
</inline-formula>
. This is done in two phases by a matching process based on the epipolar geometry of the vision system; these phases are: 3D localization and obstacles’ features filtering.</p>
<sec>
<label>2.2.1.</label>
<title>Phase 1: 3D Localization</title>
<p>Features’ classes
<italic>Y
<sub>structure,t</sub>
</italic>
and
<italic>Y
<sub>obstacles,t</sub>
</italic>
are respectively obtained by calculating the ZNCC value for each non-zero pixel of the corresponding modified left images,
<italic>I
<sub>structure,l,t</sub>
</italic>
and
<italic>I
<sub>obstacles,l,t</sub>
</italic>
and using the full right image
<italic>I
<sub>r,t</sub>
</italic>
. Those features whose ZNCC value reaches a threshold are validated and finally classified into the corresponding features’ class,
<italic>Y
<sub>structure,t</sub>
</italic>
or
<italic>Y
<sub>obstacles,t</sub>
</italic>
.</p>
</sec>
<sec>
<label>2.2.2.</label>
<title>Phase 2: Obstacles’ Features Filtering</title>
<p>Due to occlusions and repetitive patterns, correspondences between points in the left and right images are often not correct and some outliers appear. This effect mainly affects the obstacles’ features. In order to reject these outliers, a neighborhood filter is run in the XZ plane over all points classified in the obstacles’ class
<italic>Y
<sub>obstacles,t</sub>
</italic>
.</p>
<p>The height coordinate (Y) of each 3D position vector
<inline-formula>
<mml:math id="M6">
<mml:msubsup>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo></mml:mo>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo></mml:mo>
<mml:msub>
<mml:mi>z</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>]</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>:</mml:mo>
<mml:mi mathvariant="italic">mobstacles</mml:mi>
</mml:mrow>
<mml:mi>T</mml:mi>
</mml:msubsup>
</mml:math>
</inline-formula>
is also used to filter out spurious noise. Thus, a feasible set of points that characterizes the obstacles’ positions in the scene is obtained, to be used as the measurement vector (observation model) in the subsequent multiple obstacles’ tracking task (see
<xref ref-type="fig" rid="f2-sensors-10-08865">Figure 2</xref>
).
<xref ref-type="fig" rid="f5-sensors-10-08865">Figure 5</xref>
and
<xref ref-type="fig" rid="f6-sensors-10-08865">Figure 6</xref>
show some results obtained at the end of the whole detection, classification and 3D localization process.</p>
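<p>As an illustration only, a simple version of such a filtering step could look as follows; the neighborhood radius, the minimum number of neighbors and the height bounds are assumptions made for the example, not values from the paper:</p>
import numpy as np

def filter_obstacle_points(Y_obstacles, radius=0.15, min_neighbors=3,
                           y_min=0.1, y_max=2.2):
    """Y_obstacles: (m, 3) array of [x, y, z] points (meters).
    Keeps points with a plausible height and enough XZ-plane neighbors.
    All thresholds are illustrative assumptions."""
    # Height filter on the Y coordinate.
    ok_height = (Y_obstacles[:, 1] > y_min) & (Y_obstacles[:, 1] < y_max)
    pts = Y_obstacles[ok_height]

    # Neighborhood filter in the XZ (ground) plane.
    xz = pts[:, [0, 2]]
    d2 = ((xz[:, None, :] - xz[None, :, :]) ** 2).sum(-1)
    neighbors = (d2 < radius ** 2).sum(axis=1) - 1   # exclude the point itself
    return pts[neighbors >= min_neighbors]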
<p>
<xref ref-type="fig" rid="f5-sensors-10-08865">Figure 5</xref>
shows a sequence of three frames belonging to a certain section of a single experiment. It is organized in two rows: the one at the top shows the results of the classification
<italic>I
<sub>structure,l,t</sub>
</italic>
over the input Canny image
<italic>I
<sub>canny,l,t</sub>
</italic>
while the one at the bottom shows them over the original images. Those elements identified as members of the
<italic>structural features</italic>
class
<italic>Y
<sub>structure,t</sub>
</italic>
have been highlighted in both rows of images in order to show the behavior of the algorithm: in colors in the Canny image, and in yellow in the original image if their 3D localization
<inline-formula>
<mml:math id="M7">
<mml:msubsup>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo></mml:mo>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo></mml:mo>
<mml:msub>
<mml:mi>z</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>]</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>:</mml:mo>
<mml:mi mathvariant="italic">mstructure</mml:mi>
</mml:mrow>
<mml:mi>T</mml:mi>
</mml:msubsup>
</mml:math>
</inline-formula>
has been found.</p>
<p>In turn,
<xref ref-type="fig" rid="f6-sensors-10-08865">Figure 6</xref>
shows a different section of the same experiment. There are four frames in sequence from left to right organized in three rows. The row at the top shows the Canny image
<italic>I
<sub>canny,l,t</sub>
</italic>
input to the classification process; the central row shows the set of original images, where those 3D points (
<inline-formula>
<mml:math id="M8">
<mml:msubsup>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo></mml:mo>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo></mml:mo>
<mml:msub>
<mml:mi>z</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>]</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>:</mml:mo>
<mml:mi mathvariant="italic">mobstacles</mml:mi>
</mml:mrow>
<mml:mi>T</mml:mi>
</mml:msubsup>
</mml:math>
</inline-formula>
) assigned to the
<italic>obstacles’ features</italic>
class
<italic>Y
<sub>obstacles,t</sub>
</italic>
are then projected back in colors according to their height in the Y coordinate (light blue for lower values, dark blue for middle ones and green for higher ones). Finally, the row at the bottom is a 2D projection onto the ground (XZ plane) of the set of points of the
<italic>obstacles’ features</italic>
class
<italic>Y
<sub>obstacles,t</sub>
</italic>
. The clouds of points in the 2D projection make it possible to perform the tracking of the four persons found in the original sequence.</p>
<p>In this last figure, it can be noticed that obstacles’ features
<italic>Y
<sub>obstacles,t</sub>
</italic>
related to the legs of the persons in the scene do not include all edge points related to them in the preliminary Canny image
<italic>I
<sub>canny,l,t</sub>
</italic>
. Nevertheless, the multi-obstacles’ tracker works properly in all these situations, as demonstrated in the video
<italic>MTracker.avi</italic>
(see
<xref ref-type="supplementary-material" rid="SD1">supplementary materials</xref>
) from the experiment shown in
<xref ref-type="fig" rid="f6-sensors-10-08865">Figure 6</xref>
. In all the frames there are enough edge points on all the obstacles, from 115 to 150 features per person to be tracked; the total number of them is displayed at the bottom of each column in
<xref ref-type="fig" rid="f6-sensors-10-08865">Figure 6</xref>
(parameter nPtosObs, text in red).</p>
<p>The difference between the points found in the Canny image and the final obstacles’ features class is related to the probabilistic Hough transform used. As described in the previous section, the Hough algorithm is tuned to detect short line segments and classify them as structural features, in order to find them even in situations with a high level of occlusion such as the one displayed in
<xref ref-type="fig" rid="f6-sensors-10-08865">Figure 6</xref>
. As a consequence, some linear features belonging to people’s arms or legs are assigned to the structural class.</p>
</sec>
</sec>
</sec>
<sec>
<label>3.</label>
<title>The Multiple Obstacles’ Tracker</title>
<p>As discussed in the introduction, a probabilistic algorithm is the best solution to implement the multi-obstacles tracking task. The XPFCP (eXtended Particle Filter with Clustering Process), an extended version of the PF, has been chosen for this process in order to exploit its multimodality.</p>
<p>The combination of both techniques (probabilistic estimation and deterministic association) increases the robustness of the PF multimodality, a behavior which is difficult to achieve when this combination is not used, as seen in [
<xref ref-type="bibr" rid="b18-sensors-10-08865">18</xref>
]. In fact, the idea of combining probabilistic and deterministic techniques for tracking multiple objects has been proposed in different previous works, such as [
<xref ref-type="bibr" rid="b6-sensors-10-08865">6</xref>
] or [
<xref ref-type="bibr" rid="b26-sensors-10-08865">26</xref>
]. However, none of them addressed the idea of reinforcing the PF multimodality within a deterministic framework.</p>
<p>
<xref ref-type="fig" rid="f7-sensors-10-08865">Figure 7</xref>
shows a functional description of the proposed multiple obstacles’ tracking algorithm. As can be noticed in the upper left corner of the figure, the input of the XPFCP is the obstacles’ features class
<italic>Y
<sub>obstacles,t</sub>
</italic>
: the set of measurements, unequally distributed among all obstacles in the scene, is clustered into a set of
<italic>k
<sub>in,t</sub>
</italic>
groups
<italic>G</italic>
<sub>1:</sub>
<italic>
<sub>k,t|in</sub>
</italic>
to serve as the observation density
<italic>p</italic>
(
<italic>y⃗
<sub>t</sub>
</italic>
) ≈
<italic>p</italic>
(
<italic>G</italic>
<sub>1:</sub>
<italic>
<sub>k,t|in</sub>
</italic>
).</p>
<p>On the other hand, the image at the lower left corner in
<xref ref-type="fig" rid="f7-sensors-10-08865">Figure 7</xref>
shows the output of the XPFCP-based multi-obstacle tracking: a set of
<italic>k
<sub>out,t</sub>
</italic>
objects
<italic>G</italic>
<sub>1:</sub>
<italic>
<sub>k,t|out</sub>
</italic>
identified by colors, with their corresponding location, speed and trajectory in XYZ space.</p>
<p>The three standard steps of the Bootstrap PF (prediction, correction and selection) can also be seen in
<xref ref-type="fig" rid="f7-sensors-10-08865">Figure 7</xref>
. As shown in the figure, the PF implements a discrete representation of the belief
<italic>p</italic>
(
<italic>x⃗
<sub>t</sub>
</italic>
|
<italic>y⃗</italic>
<sub>1:
<italic>t</italic>
</sub>
) with a set of
<italic>n</italic>
weighted samples
<inline-formula>
<mml:math id="M9">
<mml:mi>p</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo></mml:mo>
</mml:mover>
<mml:mi>t</mml:mi>
</mml:msub>
<mml:mo>|</mml:mo>
<mml:msub>
<mml:mover accent="true">
<mml:mi>y</mml:mi>
<mml:mo></mml:mo>
</mml:mover>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>:</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
<mml:mo></mml:mo>
<mml:msub>
<mml:mi>S</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mo>{</mml:mo>
<mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mover accent="true">
<mml:mi>s</mml:mi>
<mml:mo></mml:mo>
</mml:mover>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>}</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>n</mml:mi>
</mml:msubsup>
<mml:mo>=</mml:mo>
<mml:mrow>
<mml:mo>{</mml:mo>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo></mml:mo>
</mml:mover>
<mml:mi>t</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:msubsup>
<mml:mo>,</mml:mo>
<mml:msubsup>
<mml:mi>w</mml:mi>
<mml:mi>t</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
<mml:mo>}</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>n</mml:mi>
</mml:msubsup>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
(generally called particles) to develop the estimation task. Thanks to this kind of representation, different modes can coexist in the discrete belief generated by the PF, which, applied to the case of interest, allows characterizing the different tracked objects.</p>
<p>Besides, a new re-initialization step prior to the prediction one has also been included in the loop (dashed lines in
<xref ref-type="fig" rid="f7-sensors-10-08865">Figure 7</xref>
) in order to ease the generation of new modes in the
<italic>t</italic>
− 1 modified belief
<italic></italic>
(
<italic>x⃗</italic>
<sub>
<italic>t</italic>
−1</sub>
|
<italic>y⃗</italic>
<sub>1:
<italic>t</italic>
−1</sub>
) output by this step. As shown in this figure, this new re-initialization step is executed using the clusters segmented from the XPFCP input data set of obstacles’ features
<italic>G</italic>
<sub>1:
<italic>k,t</italic>
−1|
<italic>in</italic>
</sub>
, thus incorporating a deterministic framework into the tracking task (blocks in blue in
<xref ref-type="fig" rid="f7-sensors-10-08865">Figure 7</xref>
).</p>
<p>The set
<italic>G</italic>
<sub>1:
<italic>k,t|in</italic>
</sub>
is also used at the correction step of the XPFCP, modifying the standard step of the Bootstrap PF, as displayed in
<xref ref-type="fig" rid="f7-sensors-10-08865">Figure 7</xref>
(dashed lines). At this point, the clustering process works as a NN association step, reinforcing the preservation of multiple modes (as many as obstacles being tracked at each moment) in the output of the selection step: the final belief
<italic>p</italic>
(
<italic>x⃗
<sub>t</sub>
</italic>
|
<italic>y⃗</italic>
<sub>1:
<italic>t</italic>
</sub>
).</p>
<p>The deterministic output
<italic>G</italic>
<sub>1:
<italic>k,t|out</italic>
</sub>
is obtained by organizing into clusters the set of particles
<inline-formula>
<mml:math id="M10">
<mml:mrow>
<mml:msub>
<mml:mi>S</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mo>{</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>s</mml:mi>
<mml:mo></mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>}</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>n</mml:mi>
<mml:mo></mml:mo>
<mml:mi>n</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula>
that characterizes the belief
<italic>p</italic>
(
<italic>x⃗
<sub>t</sub>
</italic>
|
<italic>y⃗</italic>
<sub>1:
<italic>t</italic>
</sub>
) at the end of the XPFCP selection step. This new clustering process discriminates the different modes or maximum probability peaks in
<italic>p</italic>
(
<italic>x⃗
<sub>t</sub>
</italic>
|
<italic>y⃗</italic>
<sub>1:
<italic>t</italic>
</sub>
), representing the state
<italic>x⃗
<sub>t</sub>
</italic>
of all
<italic>k
<sub>out,t</sub>
</italic>
objects being tracked by the probabilistic filter at that moment. The following subsections describe the XPFCP functionality in more detail.</p>
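<p>For readers less familiar with particle filters, the following minimal sketch (all values illustrative) shows the weighted-sample representation used here: a single particle set can hold several modes at once, and a deterministic estimate per mode is simply the weighted mean of the particles that form it:</p>
<preformat>
# Hedged sketch of a multimodal discrete belief: each particle is a state
# vector [x, y, z, vx, vz]; two groups of particles represent two tracked
# obstacles in the same set. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 600
particles = np.zeros((n, 5))
weights = np.full(n, 1.0 / n)

particles[:300, :3] = rng.normal([1000.0, 900.0, 2000.0], 50.0, (300, 3))
particles[300:, :3] = rng.normal([-500.0, 900.0, 3500.0], 50.0, (300, 3))

mode_a = np.average(particles[:300], axis=0, weights=weights[:300])
mode_b = np.average(particles[300:], axis=0, weights=weights[300:])
print(mode_a[:3], mode_b[:3])   # one 3D position estimate per mode
</preformat>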
<sec>
<label>3.1.</label>
<title>The Tracking Model</title>
<p>The application of the XPFCP to the position estimation problem requires a model definition. In the application of interest, a Constant Velocity (CV) model is used [
<xref ref-type="bibr" rid="b27-sensors-10-08865">27</xref>
], where the actuation and observation models are defined by
<xref ref-type="disp-formula" rid="FD1">equation (1)</xref>
and
<xref ref-type="disp-formula" rid="FD2">equation (2)</xref>
, respectively:
<disp-formula id="FD1">
<label>(1)</label>
<mml:math id="M11">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo></mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>|</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo></mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mrow>
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>|</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo></mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>|</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo></mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>z</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>|</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo></mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo>˙</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>|</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo></mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>z</mml:mi>
<mml:mo>˙</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>|</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo></mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
<mml:mo>]</mml:mo>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mrow>
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mn>1</mml:mn>
</mml:mtd>
<mml:mtd>
<mml:mn>0</mml:mn>
</mml:mtd>
<mml:mtd>
<mml:mn>0</mml:mn>
</mml:mtd>
<mml:mtd>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>s</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mtd>
<mml:mtd>
<mml:mn>0</mml:mn>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mn>0</mml:mn>
</mml:mtd>
<mml:mtd>
<mml:mn>1</mml:mn>
</mml:mtd>
<mml:mtd>
<mml:mn>0</mml:mn>
</mml:mtd>
<mml:mtd>
<mml:mn>0</mml:mn>
</mml:mtd>
<mml:mtd>
<mml:mn>0</mml:mn>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mn>0</mml:mn>
</mml:mtd>
<mml:mtd>
<mml:mn>0</mml:mn>
</mml:mtd>
<mml:mtd>
<mml:mn>1</mml:mn>
</mml:mtd>
<mml:mtd>
<mml:mn>0</mml:mn>
</mml:mtd>
<mml:mtd>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>s</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mn>0</mml:mn>
</mml:mtd>
<mml:mtd>
<mml:mn>0</mml:mn>
</mml:mtd>
<mml:mtd>
<mml:mn>0</mml:mn>
</mml:mtd>
<mml:mtd>
<mml:mn>1</mml:mn>
</mml:mtd>
<mml:mtd>
<mml:mn>0</mml:mn>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mn>0</mml:mn>
</mml:mtd>
<mml:mtd>
<mml:mn>0</mml:mn>
</mml:mtd>
<mml:mtd>
<mml:mn>0</mml:mn>
</mml:mtd>
<mml:mtd>
<mml:mn>0</mml:mn>
</mml:mtd>
<mml:mtd>
<mml:mn>1</mml:mn>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
<mml:mo>]</mml:mo>
</mml:mrow>
<mml:mo></mml:mo>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mrow>
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo></mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo></mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>z</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo></mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo>˙</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo></mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>z</mml:mi>
<mml:mo>˙</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo></mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
<mml:mo>]</mml:mo>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>v</mml:mi>
<mml:mo></mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo></mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</disp-formula>
<disp-formula id="FD2">
<label>(2)</label>
<mml:math id="M12">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>y</mml:mi>
<mml:mo></mml:mo>
</mml:mover>
</mml:mrow>
<mml:mi>t</mml:mi>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mrow>
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mi>t</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mi>t</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>z</mml:mi>
</mml:mrow>
<mml:mi>t</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
<mml:mo>]</mml:mo>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mrow>
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mn>1</mml:mn>
</mml:mtd>
<mml:mtd>
<mml:mn>0</mml:mn>
</mml:mtd>
<mml:mtd>
<mml:mn>0</mml:mn>
</mml:mtd>
<mml:mtd>
<mml:mn>0</mml:mn>
</mml:mtd>
<mml:mtd>
<mml:mn>0</mml:mn>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mn>0</mml:mn>
</mml:mtd>
<mml:mtd>
<mml:mn>1</mml:mn>
</mml:mtd>
<mml:mtd>
<mml:mn>0</mml:mn>
</mml:mtd>
<mml:mtd>
<mml:mn>0</mml:mn>
</mml:mtd>
<mml:mtd>
<mml:mn>0</mml:mn>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mn>0</mml:mn>
</mml:mtd>
<mml:mtd>
<mml:mn>0</mml:mn>
</mml:mtd>
<mml:mtd>
<mml:mn>1</mml:mn>
</mml:mtd>
<mml:mtd>
<mml:mn>0</mml:mn>
</mml:mtd>
<mml:mtd>
<mml:mn>0</mml:mn>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
<mml:mo>]</mml:mo>
</mml:mrow>
<mml:mo></mml:mo>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mrow>
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mi>t</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mi>t</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>z</mml:mi>
</mml:mrow>
<mml:mi>t</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mi>t</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:msub>
<mml:mrow>
<mml:mi>z</mml:mi>
</mml:mrow>
<mml:mi>t</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
<mml:mo>]</mml:mo>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>o</mml:mi>
<mml:mo></mml:mo>
</mml:mover>
</mml:mrow>
<mml:mi>t</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</disp-formula>
</p>
<p>As shown in
<xref ref-type="disp-formula" rid="FD1">equation (1)</xref>
, the estimation vector
<italic>x⃗</italic>
<sub>
<italic>t|t−</italic>
1</sub>
will define the position and speed state of the obstacle being tracked. In addition, the state noise vector
<italic>v⃗
<sub>t</sub>
</italic>
(empirically characterized as Gaussian and white) is included in the actuation model both to modify the constant speed of the obstacle and to model the uncertainty related to the probabilistic estimation process.</p>
<p>Furthermore in
<xref ref-type="disp-formula" rid="FD2">equation (2)</xref>
,
<italic>y⃗
<sub>t</sub>
</italic>
defines the observable part of the state
<italic>x⃗</italic>
<sub>
<italic>t|t−</italic>
1</sub>
, which in this case corresponds to the 3D position information (
<inline-formula>
<mml:math id="M13">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>Y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">obstacles</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo></mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo></mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>z</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>]</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>:</mml:mo>
<mml:mi mathvariant="italic">mobstacles</mml:mi>
</mml:mrow>
<mml:mi>T</mml:mi>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula>
) extracted by the stereo vision process described in section 2. An observation noise vector
<italic>o⃗
<sub>t</sub>
</italic>
has also been included to model the noise related to that vision process; it is therefore characterized in a previous off-line step. This noise model makes it possible to keep tracking objects when they are partially occluded.</p>
<p>Empirical studies over test results, covering different environmental and tracking conditions, were used to identify the standard deviation of all components in
<italic>v⃗
<sub>t</sub>
</italic>
and in
<italic>o⃗
<sub>t</sub>
</italic>
, resulting in σ
<italic>
<sub>v,i</sub>
</italic>
= 100
<italic>mm</italic>
/
<italic>i</italic>
= {
<italic>x, y, z, ẋ, ż</italic>
} and σ
<italic>
<sub>o,i</sub>
</italic>
= [150,200]
<italic>mm</italic>
/
<italic>i</italic>
= {
<italic>x,y,z</italic>
}. Besides, a sensitivity study concluded that a 100% modification in any of σ
<italic>
<sub>o,i</sub>
</italic>
generates an increase in the tracking error of around 24%, while the same modification in any of σ
<italic>
<sub>v,i</sub>
</italic>
generates figures ten times lower. This result highlights the importance of the observation noise vector in the multi-obstacle tracking task.</p>
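<p>A minimal sketch of this model is given below, assuming a sampling period of 100 ms and the noise levels quoted above; the matrices follow equations (1) and (2), while the sampling period and function names are only illustrative:</p>
<preformat>
# Hedged sketch of the CV actuation and observation models of equations (1)
# and (2); ts and the noise draws are illustrative.
import numpy as np

ts = 0.1                                   # sampling period (s), assumed
F = np.array([[1., 0., 0., ts, 0.],        # x  = x + vx * ts
              [0., 1., 0., 0., 0.],        # y  (no vertical speed in the state)
              [0., 0., 1., 0., ts],        # z  = z + vz * ts
              [0., 0., 0., 1., 0.],        # vx = vx
              [0., 0., 0., 0., 1.]])       # vz = vz
H = np.array([[1., 0., 0., 0., 0.],        # only the 3D position is observed
              [0., 1., 0., 0., 0.],
              [0., 0., 1., 0., 0.]])

sigma_v = 100.0                            # state noise std (mm), as identified above
sigma_o = 175.0                            # observation noise std (mm), within [150, 200]
rng = np.random.default_rng()

def actuate(x_prev):
    return F @ x_prev + rng.normal(0.0, sigma_v, 5)     # equation (1)

def observe(x):
    return H @ x + rng.normal(0.0, sigma_o, 3)           # equation (2)
</preformat>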
</sec>
<sec>
<label>3.2.</label>
<title>Steps of the XPFCP</title>
<sec>
<label>3.2.1.</label>
<title>Clustering Measurements</title>
<p>The clustering process is done over the 3D position data set
<italic>Y
<sub>obstacles,t</sub>
</italic>
extracted by the stereo vision process. The output set of groups
<italic>G</italic>
<sub>1:
<italic>k,t|in</italic>
</sub>
generated by this process is then used in the re-initialization and correction steps of the XPFCP.</p>
<p>We propose an adapted version of Extended K-Means [
<xref ref-type="bibr" rid="b28-sensors-10-08865">28</xref>
] to solve this clustering task, called
<italic>Sequential K-Means with Validation</italic>
; a general description of it is presented in
<xref ref-type="fig" rid="f8-sensors-10-08865">Figure 8</xref>
. The simplicity and reliability of this clustering process ensure correct re-initialization and association tasks in the XPFCP, with a computational load low enough to allow real-time execution of the global tracking task, as the results obtained in our tests reveal.</p>
<p>The main characteristics of this clustering proposal are listed below; a deeper description can be found in [
<xref ref-type="bibr" rid="b28-sensors-10-08865">28</xref>
]:
<list list-type="bullet">
<list-item>
<p>The clustering algorithm adapts itself to an unknown and variable number
<italic>k
<sub>in,t</sub>
</italic>
of clusters, as needed in this application.</p>
</list-item>
<list-item>
<p>A preliminary centroid
<italic>g⃗</italic>
<sub>1:
<italic>k,t|in</italic>
</sub>
prediction is included in the process in order to make its convergence fast and reliable (the execution time of the proposal is reduced by 75% with respect to that of the standard K-Means). This centroid prediction is possible thanks to the first and third steps of the block diagram in
<xref ref-type="fig" rid="f8-sensors-10-08865">Figure 8</xref>
: predicting an initial value for each centroid
<italic>g⃗</italic>
<sub>0,1:
<italic>k,t|in</italic>
</sub>
, and computing each centroid updating vector
<italic>u⃗</italic>
<sub>1:
<italic>k,t|in</italic>
</sub>
.</p>
</list-item>
<list-item>
<p>A window-based validation process is added to the clustering proposal in order to increase its robustness against outliers, achieving a noise rejection rate of almost 70%. Besides, this process provides an identifier τ
<sub>1:
<italic>k|out</italic>
</sub>
for each cluster obtained, with a 99% success rate as long as the cluster keeps appearing in the input data set
<italic>Y
<sub>obstacles,t</sub>
</italic>
. Thanks to this functionality, the validation process (last step, highlighted in green in
<xref ref-type="fig" rid="f8-sensors-10-08865">Figure 8</xref>
) helps keep track of objects through temporary total occlusions in the scene, as demonstrated in the video sequence
<italic>MTracker.avi</italic>
(see
<xref ref-type="supplementary-material" rid="SD1">supplementary materials</xref>
).</p>
</list-item>
</list>
</p>
<p>With these characteristics the set
<italic>G</italic>
<sub>1:
<italic>k,t|in</italic>
</sub>
≡ {
<italic>g⃗
<sub>j,t</sub>
</italic>
, τ
<italic>
<sub>j</sub>
</italic>
/
<italic>j</italic>
= 1 :
<italic>k
<sub>in,t</sub>
</italic>
} comprises a robust, filtered, compact and identified representation of the corresponding input data, which strengthens the reliability of the PF in the multimodal estimation task pursued.</p>
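<p>A highly simplified, single-frame sketch of this clustering idea is shown below: each measurement is assigned to the nearest predicted centroid if it lies within a distance threshold, and otherwise it seeds a new cluster. The identifier handling and the window-based validation of Figure 8 are deliberately omitted, and the threshold value is an assumption:</p>
<preformat>
# Hedged sketch of a sequential, threshold-based K-Means pass (one frame).
# points: (m, 3) array of 3D measurements; prev_centroids / prev_updates:
# centroids and updating vectors carried over from the previous frame.
import numpy as np

def sequential_kmeans(points, prev_centroids, prev_updates, dist_max=400.0):
    # predict initial centroids from the previous frame: g0 = g + u
    centroids = [g + u for g, u in zip(prev_centroids, prev_updates)]
    members = [[] for _ in centroids]
    labels = np.empty(len(points), dtype=int)
    for i, p in enumerate(points):
        if centroids:
            d = [np.linalg.norm(p - c) for c in centroids]
            j = int(np.argmin(d))
        if not centroids or d[j] > dist_max:
            centroids.append(p.astype(float))          # seed a new cluster
            members.append([])
            j = len(centroids) - 1
        members[j].append(p)
        labels[i] = j
        centroids[j] = np.mean(members[j], axis=0)     # running centroid update
    # centroid updating vectors for the next frame prediction
    n_new = len(centroids) - len(prev_centroids)
    updates = [c - g for c, g in zip(centroids, prev_centroids)]
    updates += [np.zeros(points.shape[1])] * n_new
    return centroids, labels, updates
</preformat>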
</sec>
<sec>
<label>3.2.2.</label>
<title>Re-Initialization</title>
<p>The main aim of adding the re-initialization step to the standard Bootstrap PF is to insert
<italic>n</italic>
<sub>
<italic>m,t</italic>
−1</sub>
new particles to the discrete belief
<italic>S</italic>
<sub>
<italic>t</italic>
−1</sub>
<italic>p</italic>
(
<italic>x⃗</italic>
<sub>
<italic>t</italic>
−1</sub>
|
<italic>y⃗</italic>
<sub>1:
<italic>t</italic>
−1</sub>
) from time
<italic>t</italic>
− 1. In this way, new tracking events (the appearance or loss of any object in the scene) are quickly reflected in the estimation process.</p>
<p>Particles inserted in this new step are obtained by randomly sampling among the members of all
<italic>k</italic>
<sub>
<italic>in,t</italic>
−1</sub>
clusters G
<sub>1:
<italic>k,t</italic>
−1|
<italic>in</italic>
</sub>
, segmented from the input data set of obstacles’ features
<italic>Y</italic>
<sub>
<italic>obstacles,t</italic>
−1</sub>
. Therefore, the re-initialization step generates the discrete density
<italic></italic>
<sub>
<italic>t</italic>
−1</sub>
<italic></italic>
(
<italic>x⃗</italic>
<sub>
<italic>t</italic>
−1</sub>
|
<italic>y⃗</italic>
<sub>1:
<italic>t</italic>
−1</sub>
), which is a modification of
<italic>S</italic>
<sub>
<italic>t</italic>
−1</sub>
<italic>p</italic>
(
<italic>x⃗</italic>
<sub>
<italic>t</italic>
−1</sub>
|
<italic>y⃗</italic>
<sub>1:
<italic>t</italic>
−1</sub>
) described by
<xref ref-type="disp-formula" rid="FD3">equation (3)</xref>
:
<disp-formula id="FD3">
<label>(3)</label>
<mml:math id="M14">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>S</mml:mi>
<mml:mo></mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo></mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:munderover>
<mml:mo></mml:mo>
<mml:mrow>
<mml:mi>j</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>k</mml:mi>
<mml:mrow>
<mml:mi mathvariant="italic">in</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo></mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:munderover>
<mml:mrow>
<mml:msub>
<mml:mi>S</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo></mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mi></mml:mi>
<mml:mi>f</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mi>G</mml:mi>
<mml:mrow>
<mml:mi>j</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo></mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>|</mml:mo>
<mml:mi mathvariant="italic">in</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:math>
</disp-formula>
</p>
<p>This process ensures that all observation hypotheses modeled by the density
<italic>p</italic>
(
<italic>G</italic>
<sub>1:
<italic>k,t</italic>
−1|
<italic>in</italic>
</sub>
) are considered equally in the re-initialization process.</p>
<p>In order to increase the probability of newly sensed objects in
<italic></italic>
<sub>
<italic>t</italic>
−1</sub>
, a specific number of particles
<italic>n</italic>
<sub>
<italic>m</italic>
|
<italic>i t</italic>
−1</sub>
is defined for each cluster
<italic>j</italic>
= 1:
<italic>k</italic>
<sub>
<italic>in,t</italic>
−1</sub>
to be inserted at this step, as shown in
<xref ref-type="disp-formula" rid="FD4">equation (4)</xref>
:
<disp-formula id="FD4">
<label>(4)</label>
<mml:math id="M15">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo></mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:munderover>
<mml:mo></mml:mo>
<mml:mrow>
<mml:mi>j</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>k</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">in</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo></mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:munderover>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
<mml:mo>|</mml:mo>
<mml:mi>j</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo></mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:munderover>
<mml:mo></mml:mo>
<mml:mrow>
<mml:mi>j</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>k</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>n</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo></mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:munderover>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
<mml:mi>m</mml:mi>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mi>n</mml:mi>
<mml:mi mathvariant="italic">init</mml:mi>
</mml:msub>
<mml:mo></mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>α</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">init</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>j</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo></mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
where
<italic>α</italic>
<sub>
<italic>init,j,t</italic>
−1</sub>
is a Boolean parameter indicating the novelty of the cluster
<italic>G</italic>
<sub>
<italic>j,t</italic>
−1|
<italic>in</italic>
</sub>
in the set
<italic>G</italic>
<sub>1:
<italic>k,t</italic>
−1|
<italic>in</italic>
</sub>
;
<italic>n
<sub>init</sub>
</italic>
is the number of particles to append for each new cluster;
<italic>n
<sub>m</sub>
</italic>
is the minimum number of particles per cluster to be included; and
<italic>n</italic>
<sub>
<italic>m,t</italic>
−1</sub>
is the total number of particles inserted at this step in
<italic>S</italic>
<sub>
<italic>t</italic>
−1</sub>
to get
<italic></italic>
<sub>
<italic>t</italic>
−1</sub>
.</p>
<p>Besides,
<inline-formula>
<mml:math id="M16">
<mml:mrow>
<mml:msub>
<mml:mi>γ</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mfrac bevelled="true">
<mml:mrow>
<mml:msub>
<mml:mi>n</mml:mi>
<mml:mrow>
<mml:mi>m</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo></mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mi>n</mml:mi>
</mml:mfrac>
</mml:mrow>
</mml:math>
</inline-formula>
relates the number of particles inserted at the re-initialization step
<italic>n</italic>
<sub>
<italic>m,t</italic>
−1</sub>
with the number
<italic>n</italic>
of them obtained at the output of this step. Using
<italic>γ
<sub>t</sub>
</italic>
, a continuous version of
<xref ref-type="disp-formula" rid="FD3">equation (3)</xref>
can be expressed as shown in
<xref ref-type="disp-formula" rid="FD4">equation (4)</xref>
and in
<xref ref-type="fig" rid="f7-sensors-10-08865">Figure 7</xref>
:
<disp-formula id="FD5">
<label>(5)</label>
<mml:math id="M17">
<mml:mrow>
<mml:mover accent="true">
<mml:mi>p</mml:mi>
<mml:mo></mml:mo>
</mml:mover>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo></mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo></mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>|</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>y</mml:mi>
<mml:mo></mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>:</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo></mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
<mml:mo>=</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>γ</mml:mi>
</mml:mrow>
<mml:mi>t</mml:mi>
</mml:msub>
<mml:mo></mml:mo>
<mml:mi>p</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo></mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo></mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>|</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>y</mml:mi>
<mml:mo>¯</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>:</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo></mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
<mml:mo>+</mml:mo>
<mml:mo stretchy="false">(</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo></mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>γ</mml:mi>
</mml:mrow>
<mml:mi>t</mml:mi>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
<mml:mo></mml:mo>
<mml:mi>p</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>G</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>:</mml:mo>
<mml:mi>k</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo></mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>|</mml:mo>
<mml:mi>i</mml:mi>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:math>
</disp-formula>
</p>
<p>The deterministic specification of
<italic>n</italic>
<sub>
<italic>m|j,t</italic>
−1</sub>
for each
<italic>j</italic>
= 1:
<italic>k</italic>
<sub>
<italic>in,t</italic>
−1</sub>
helps overcome the impoverishment problem of the PF in its multimodal application. This process ensures particle diversification among all tracking hypotheses in the density estimated by the PF and increases the probability of the newest ones, which otherwise would disappear as the filter evolves. Results included in section 4 demonstrate this assertion for a quite low value of γ
<italic>
<sub>t</sub>
</italic>
, which maintains the mathematical recursive rigor of the Bayesian algorithm.</p>
<p>This re-initialization step behaves similarly to the MCMC step (used
<italic>e.g.</italic>
, in [
<xref ref-type="bibr" rid="b15-sensors-10-08865">15</xref>
]) which moves the discrete density
<italic></italic>
(
<italic>x⃗</italic>
<sub>
<italic>t</italic>
−1</sub>
|
<italic>y⃗</italic>
<sub>1:
<italic>t</italic>
</sub>
<sub>−1</sub>
) towards high-likelihood areas of the probability space. In order to keep the number of particles in
<italic>S
<sub>t</sub>
</italic>
constant over time (and thus keep the XPFCP execution time constant), the
<italic>n</italic>
<sub>
<italic>m,t</italic>
−1</sub>
particles that are to be inserted at the re-initialization step at time <italic>t</italic> are deliberately removed at the selection step at time
<italic>t</italic>
− 1.</p>
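<p>A minimal sketch of this re-initialization rule is given below; the per-cluster counts follow the spirit of equation (4), while the parameter values, the zero-speed seeding and the data layout are assumptions:</p>
<preformat>
# Hedged sketch of the re-initialization step: every cluster contributes n_m
# particles, and clusters flagged as new contribute n_init additional ones.
# Each inserted particle copies the 3D position of a randomly chosen cluster
# member and starts with zero speed.
import numpy as np

def reinitialize(clusters, is_new, n_m=10, n_init=40, seed=0):
    rng = np.random.default_rng(seed)
    inserted = []
    for members, new_flag in zip(clusters, is_new):
        count = n_m + (n_init if new_flag else 0)
        idx = rng.integers(0, len(members), size=count)
        for i in idx:
            x, y, z = members[i]
            inserted.append([x, y, z, 0.0, 0.0])   # state [x, y, z, vx, vz]
    return np.array(inserted)
</preformat>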
</sec>
<sec>
<label>3.2.3.</label>
<title>Prediction</title>
<p>The set of
<italic>n</italic>
particles generated by the re-initialization step
<italic></italic>
<sub>
<italic>t</italic>
−1</sub>
<italic></italic>
(
<italic>x⃗</italic>
<sub>
<italic>t</italic>
−1</sub>
|
<italic>y⃗</italic>
<sub>1:
<italic>t</italic>
</sub>
<sub>−1</sub>
) is updated through the actuation model, to obtain a discrete version of the prior
<italic>S</italic>
<sub>
<italic>t|t</italic>
−1</sub>
<italic>p</italic>
(
<italic>x⃗
<sub>t</sub>
</italic>
|
<italic>y⃗</italic>
<sub>1:
<italic>t</italic>
−1</sub>
).
<disp-formula id="FD6">
<label>(6)</label>
<mml:math id="M18">
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mi>p</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo></mml:mo>
</mml:mover>
</mml:mrow>
<mml:mi>t</mml:mi>
</mml:msub>
<mml:mo>|</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>y</mml:mi>
<mml:mo></mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>:</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo></mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
<mml:mo>=</mml:mo>
<mml:mrow>
<mml:mo></mml:mo>
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo></mml:mo>
</mml:mover>
</mml:mrow>
<mml:mi>t</mml:mi>
</mml:msub>
<mml:mo>|</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo></mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo></mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
<mml:mo></mml:mo>
<mml:mover accent="true">
<mml:mi>p</mml:mi>
<mml:mo></mml:mo>
</mml:mover>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo></mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo></mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>|</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>y</mml:mi>
<mml:mo></mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>:</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo></mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
<mml:mo></mml:mo>
<mml:mo></mml:mo>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo></mml:mo>
</mml:mover>
<mml:mo></mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>|</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo></mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:msub>
<mml:mrow>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>|</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo></mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mrow>
<mml:mo>{</mml:mo>
<mml:mrow>
<mml:msubsup>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo></mml:mo>
</mml:mover>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>|</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo></mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:msubsup>
<mml:mo>,</mml:mo>
<mml:mfrac bevelled="true">
<mml:mn>1</mml:mn>
<mml:mi>n</mml:mi>
</mml:mfrac>
</mml:mrow>
<mml:mo>}</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>n</mml:mi>
</mml:msubsup>
</mml:mrow>
<mml:mo></mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo></mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>|</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo></mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:msubsup>
<mml:mo>=</mml:mo>
<mml:mi>p</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo></mml:mo>
</mml:mover>
</mml:mrow>
<mml:mi>t</mml:mi>
</mml:msub>
<mml:mo>|</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo></mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo></mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
<mml:mo></mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mover accent="true">
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo></mml:mo>
</mml:mover>
<mml:mo></mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo></mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:msubsup>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
</disp-formula>
</p>
<p>In this case, the actuation model used
<italic>p</italic>
(
<italic>x⃗
<sub>t</sub>
</italic>
|
<italic>x⃗</italic>
<sub>
<italic>t</italic>
−1</sub>
) is the one defined in section 3.1, so the last expression in
<xref ref-type="disp-formula" rid="FD6">equation (6)</xref>
can be replaced by
<xref ref-type="disp-formula" rid="FD1">equation (1)</xref>
.</p>
<p>Thus, the state noise component
<italic>v⃗</italic>
<sub>
<italic>t</italic>
−1</sub>
is included in the particles’ state prediction with two main objectives: to create a small dispersion of the particles in the state space (needed to avoid degeneracy problems of the set [
<xref ref-type="bibr" rid="b9-sensors-10-08865">9</xref>
]); and to slightly modify the speed components in the state vector (needed to provide movement to the tracking hypotheses when using the CV model [
<xref ref-type="bibr" rid="b27-sensors-10-08865">27</xref>
]).</p>
<p>The simplicity of the proposed CV model allows it to be used for every object to be tracked, regardless of its type or dynamics, and without the help of an association task. Each particle
<inline-formula>
<mml:math id="M19">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>s</mml:mi>
<mml:mo></mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo>{</mml:mo>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo></mml:mo>
</mml:mover>
</mml:mrow>
<mml:mi>t</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:msubsup>
<mml:mo>,</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mi>t</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
<mml:mo>}</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>n</mml:mi>
</mml:msubsup>
<mml:mo>/</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>:</mml:mo>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>
evolves according to the dynamics of the object it represents in the belief, as the related state vector includes the object’s speed components.</p>
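<p>The prediction step then reduces to propagating every particle with the CV model and adding a draw of the state noise, as in the following sketch (the sampling period and noise level are the same illustrative values used before):</p>
<preformat>
# Hedged sketch of the prediction step of equation (6): each particle
# [x, y, z, vx, vz] is propagated with the CV model and perturbed with v_t.
import numpy as np

def predict_particles(particles, ts=0.1, sigma_v=100.0, seed=0):
    rng = np.random.default_rng(seed)
    F = np.eye(5)
    F[0, 3] = ts                        # x += vx * ts
    F[2, 4] = ts                        # z += vz * ts
    noise = rng.normal(0.0, sigma_v, particles.shape)
    return particles @ F.T + noise
</preformat>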
</sec>
<sec>
<label>3.2.4.</label>
<title>Correction and Association</title>
<p>Particles’ weights
<inline-formula>
<mml:math id="M20">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>w</mml:mi>
<mml:mo></mml:mo>
</mml:mover>
</mml:mrow>
<mml:mi>t</mml:mi>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>w</mml:mi>
<mml:mo>˜</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mi>t</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
<mml:mo>]</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>n</mml:mi>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula>
are computed at the correction step, using the expressions in
<xref ref-type="disp-formula" rid="FD7">equation (7)</xref>
, including a final normalization:
<disp-formula id="FD7">
<label>(7)</label>
<mml:math id="M21">
<mml:mrow>
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:msubsup>
<mml:mi>w</mml:mi>
<mml:mi>t</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:msubsup>
<mml:mo>=</mml:mo>
<mml:msubsup>
<mml:mi>w</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo></mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:msubsup>
<mml:mo></mml:mo>
<mml:mi>p</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mover accent="true">
<mml:mi>g</mml:mi>
<mml:mo></mml:mo>
</mml:mover>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>:</mml:mo>
<mml:mi>k</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>|</mml:mo>
<mml:mi mathvariant="italic">in</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>|</mml:mo>
<mml:msubsup>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo></mml:mo>
</mml:mover>
<mml:mi>t</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:msubsup>
<mml:mo>=</mml:mo>
<mml:msubsup>
<mml:mi>w</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo></mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:msubsup>
<mml:mo></mml:mo>
<mml:msup>
<mml:mi>e</mml:mi>
<mml:mrow>
<mml:mfrac bevelled="true">
<mml:mrow>
<mml:mo></mml:mo>
<mml:msubsup>
<mml:mi>d</mml:mi>
<mml:mrow>
<mml:mtext>min</mml:mtext>
<mml:mo></mml:mo>
<mml:mo>,</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mo></mml:mo>
<mml:mi>O</mml:mi>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:msup>
<mml:mo>/</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>:</mml:mo>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:msubsup>
<mml:mover accent="true">
<mml:mi>w</mml:mi>
<mml:mo>˜</mml:mo>
</mml:mover>
<mml:mi>t</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:msubsup>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msubsup>
<mml:mi>w</mml:mi>
<mml:mi>t</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
<mml:mrow>
<mml:mstyle displaystyle="true">
<mml:munderover>
<mml:mo></mml:mo>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>n</mml:mi>
</mml:munderover>
<mml:mrow>
<mml:msubsup>
<mml:mi>w</mml:mi>
<mml:mi>t</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mstyle>
</mml:mrow>
</mml:mfrac>
<mml:mo>/</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>:</mml:mo>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:msub>
<mml:mi>d</mml:mi>
<mml:mrow>
<mml:mtext>min</mml:mtext>
<mml:mo></mml:mo>
<mml:mo>,</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:munder>
<mml:mrow>
<mml:mtext>min</mml:mtext>
<mml:mo></mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>j</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>:</mml:mo>
<mml:mi>k</mml:mi>
</mml:mrow>
</mml:munder>
<mml:mrow>
<mml:mo>{</mml:mo>
<mml:mrow>
<mml:mi>d</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>h</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:msubsup>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo></mml:mo>
</mml:mover>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>|</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo></mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:msubsup>
<mml:mo stretchy="false">)</mml:mo>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mover accent="true">
<mml:mi>g</mml:mi>
<mml:mo></mml:mo>
</mml:mover>
<mml:mrow>
<mml:mi>j</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>|</mml:mo>
<mml:mi>i</mml:mi>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>}</mml:mo>
</mml:mrow>
<mml:mo>/</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>:</mml:mo>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:math>
</disp-formula>
where
<italic>d</italic>
<sub>min,
<italic>i,t</italic>
</sub>
is the shortest distance in the observation space (XYZ in this case), for particle
<italic>S⃗</italic>
<sub>
<italic>i,t|t</italic>
−1</sub>
, between the projection in this space of the predicted state vector represented by the particle
<inline-formula>
<mml:math id="M22">
<mml:mrow>
<mml:mi>h</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo></mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>|</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo></mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:msubsup>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula>
, and all centroids
<italic>g⃗</italic>
<sub>1:
<italic>k,t|in</italic>
</sub>
in the cluster set
<italic>G</italic>
<sub>1:
<italic>k,t|in</italic>
</sub>
, obtained from the objects’ observations set
<italic>Y
<sub>obstacles,t</sub>
</italic>
. The use of cluster centroids guarantees that the observation model applied is filtered, robust and accurate regardless of the reliability of each observed object.</p>
<p>As shown in
<xref ref-type="disp-formula" rid="FD7">equation (7)</xref>
, in order to obtain the likelihood
<inline-formula>
<mml:math id="M23">
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>g</mml:mi>
<mml:mo></mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>:</mml:mo>
<mml:mi>k</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>|</mml:mo>
<mml:mi>i</mml:mi>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>|</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo></mml:mo>
</mml:mover>
</mml:mrow>
<mml:mi>t</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:msubsup>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula>
used to compute the weights array
<italic>w⃗
<sub>t</sub>
</italic>
, the observation model defined by
<xref ref-type="disp-formula" rid="FD2">(2)</xref>
has to be utilized, as
<inline-formula>
<mml:math id="M24">
<mml:mrow>
<mml:mi>h</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo></mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>|</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo></mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>y</mml:mi>
<mml:mo></mml:mo>
</mml:mover>
</mml:mrow>
<mml:mi>t</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula>
. Besides,
<italic>O</italic>
is the covariance matrix that characterizes the observation noise defined in the same model. This noise models the shifts of the centroid
<italic>g⃗</italic>
<sub>
<italic>j,t|in</italic>
</sub>
of each cluster
<italic>G
<sub>j,t|in</sub>
</italic>
when tracking objects that are partially occluded.</p>
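<p>The sketch below summarizes this weight update under the simplifying assumption of an isotropic observation noise (a scalar standard deviation stands in for the covariance matrix O); the nearest-centroid association of equation (7) is kept:</p>
<preformat>
# Hedged sketch of the correction step of equation (7): each particle is
# associated with its nearest measurement-cluster centroid and re-weighted
# with a Gaussian-like likelihood; sigma_o approximates the O covariance.
import numpy as np

H = np.array([[1., 0., 0., 0., 0.],
              [0., 1., 0., 0., 0.],
              [0., 0., 1., 0., 0.]])

def correct(particles, weights, centroids, sigma_o=175.0):
    y_pred = particles @ H.T                              # h(x): predicted 3D position
    d = np.linalg.norm(y_pred[:, None, :] - centroids[None, :, :], axis=2)
    d_min = d.min(axis=1)                                 # NN association per particle
    w = weights * np.exp(-d_min**2 / (2.0 * sigma_o**2))
    return w / w.sum()                                    # final normalization
</preformat>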
<p>The equally weighted set
<inline-formula>
<mml:math id="M25">
<mml:mrow>
<mml:msub>
<mml:mi>S</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>|</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo></mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mrow>
<mml:mo>{</mml:mo>
<mml:mrow>
<mml:msubsup>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo></mml:mo>
</mml:mover>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>|</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo></mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:msubsup>
<mml:mo>,</mml:mo>
<mml:mfrac bevelled="true">
<mml:mn>1</mml:mn>
<mml:mi>n</mml:mi>
</mml:mfrac>
</mml:mrow>
<mml:mo>}</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>n</mml:mi>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula>
output from the prediction step is therefore converted into the set
<inline-formula>
<mml:math id="M26">
<mml:mrow>
<mml:msubsup>
<mml:mi>S</mml:mi>
<mml:mi>t</mml:mi>
<mml:mo></mml:mo>
</mml:msubsup>
<mml:mo>=</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo>{</mml:mo>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo></mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>|</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo></mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:msubsup>
<mml:mo>,</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>w</mml:mi>
<mml:mo>˜</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mi>t</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
<mml:mo>}</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>n</mml:mi>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula>
.</p>
<p>The mentioned definition of
<italic>d</italic>
<sub>min,
<italic>i,t</italic>
</sub>
involves a NN association between the cluster
<italic>G
<sub>j,t|in</sub>
</italic>
, whose centroid
<italic>g⃗
<sub>j,t|in</sub>
</italic>
is used in the particle’s weight
<inline-formula>
<mml:math id="M27">
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>w</mml:mi>
<mml:mo>˜</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mi>t</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula>
computation, and the tracking hypothesis represented by the particle
<italic>S⃗</italic>
<sub>
<italic>i,t|t</italic>
−1</sub>
itself. In fact, this association means that
<italic>g⃗</italic>
<sub>
<italic>j,t|in</italic>
</sub>
is obtained from the observations generated by the tracking hypothesis represented by
<italic>S⃗</italic>
<sub>
<italic>i,t|t</italic>
−1</sub>
.</p>
<p>This association procedure and the re-initialization step remove the impoverishment problem that appears when a single PF is used to estimate different state vector values: all particles tend to concentrate around the most probable value, leaving the rest without probabilistic representation in the output density. In [
<xref ref-type="bibr" rid="b17-sensors-10-08865">17</xref>
], the approximate number of effective particles
<italic>
<sub>eff</sub>
</italic>
is used as a quality factor to evaluate the efficiency of the particle set. According to this factor,
<italic>
<sub>eff</sub>
</italic>
should be above 66% in order to prevent the risk of impoverishment in the particle set. This parameter is included among the results presented in the next section in order to demonstrate how the XPFCP solves the impoverishment problem.</p>
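<p>A common approximation of this quality factor, used here only as an illustration, is shown below:</p>
<preformat>
# Hedged sketch: approximate number of effective particles, N_eff = 1 / sum(w^2)
# for normalized weights; the ratio N_eff / n can be compared against the 66%
# threshold mentioned above.
import numpy as np

def effective_particle_ratio(weights):
    w = weights / weights.sum()
    n_eff = 1.0 / np.sum(w ** 2)
    return n_eff / len(w)
</preformat>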
</sec>
<sec>
<label>3.2.5.</label>
<title>Selection</title>
<p>Each particle of the set
<inline-formula>
<mml:math id="M28">
<mml:mrow>
<mml:msubsup>
<mml:mi>S</mml:mi>
<mml:mi>t</mml:mi>
<mml:mo></mml:mo>
</mml:msubsup>
<mml:mo>=</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo>{</mml:mo>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo></mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>|</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo></mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:msubsup>
<mml:mo>,</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>w</mml:mi>
<mml:mo>˜</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mi>t</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
<mml:mo>}</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>n</mml:mi>
</mml:msubsup>
<mml:mo></mml:mo>
<mml:msup>
<mml:mi>p</mml:mi>
<mml:mo></mml:mo>
</mml:msup>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mi>t</mml:mi>
</mml:msub>
<mml:mo>|</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>:</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
output from the correction step is resampled at the selection step (also called resampling step) according to the generated weight. As a result, an equally weighted particle set
<inline-formula>
<mml:math id="M29">
<mml:mrow>
<mml:msub>
<mml:mi>S</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mrow>
<mml:mo>{</mml:mo>
<mml:mrow>
<mml:msubsup>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo></mml:mo>
</mml:mover>
<mml:mi>t</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:msubsup>
<mml:mo>,</mml:mo>
<mml:mfrac bevelled="true">
<mml:mn>1</mml:mn>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi>n</mml:mi>
<mml:mo></mml:mo>
<mml:msub>
<mml:mi>n</mml:mi>
<mml:mrow>
<mml:mi>m</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
<mml:mo>}</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
<mml:mo></mml:mo>
<mml:msub>
<mml:mi>n</mml:mi>
<mml:mrow>
<mml:mi>m</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula>
is obtained, representing a discrete version of the final belief estimated by the Bayes filter
<italic>p</italic>
(
<italic>x
<sub>t</sub>
</italic>
|
<italic>y</italic>
<sub>1:
<italic>t</italic>
</sub>
). This final set
<italic>S
<sub>t</sub>
</italic>
is formed by
<italic>n</italic>
<italic>n
<sub>m,t</sub>
</italic>
particles, in order to have
<italic>n
<sub>m,t</sub>
</italic>
inserted at the next re-initialization step.</p>
</sec>
<sec>
<label>3.2.6.</label>
<title>Clustering Particles</title>
<p>From the discrete probabilistic distribution
<italic>S
<sub>t</sub>
</italic>
<italic>p</italic>
(
<italic>x
<sub>t</sub>
</italic>
|
<italic>y</italic>
<sub>1:
<italic>t</italic>
</sub>
) output by the selection step, a deterministic solution has to be generated by the XPFCP. This problem consists of finding the different modes included in the multimodal density
<italic>p</italic>
(
<italic>x
<sub>t</sub>
</italic>
|
<italic>y</italic>
<sub>1:
<italic>t</italic>
</sub>
) represented by the particle set
<italic>S
<sub>t</sub>
</italic>
; this has no easy solution if those modes are not clearly differentiated in that distribution.</p>
<p>Diverse proposals have been included in the XPFCP in order to achieve this differentiation. This is because keeping this multimodality in
<italic>p</italic>
(
<italic>x
<sub>t</sub>
</italic>
|
<italic>y</italic>
<sub>1:
<italic>t</italic>
</sub>
), while avoiding impoverishment problems in it, is the principal aim of all the techniques proposed in this paper. The following section shows empirical results that demonstrate this.</p>
<p>Once this differentiation is ensured, a simple algorithm can be used to segment into clusters the belief
<italic>p</italic>
(
<italic>x
<sub>t</sub>
</italic>
|
<italic>y</italic>
<sub>1:
<italic>t</italic>
</sub>
) at the end of the XPFCP loop. Therefore, these groups
<italic>G</italic>
<sub>1:
<italic>k,t|out</italic>
</sub>
will become the deterministic representation of the multiple obstacles’ hypotheses
<italic>Y
<sub>obstacles,t</sub>
</italic>
detected by the stereo vision algorithm described in Section 2.</p>
<p>In this work, the same
<italic>Sequential K-Means with Validation</italic>
, described in
<xref ref-type="fig" rid="f8-sensors-10-08865">Figure 8</xref>
, is used in order to obtain
<italic>G</italic>
<sub>1:
<italic>k,t|out</italic>
</sub>
from
<italic>S
<sub>t</sub>
</italic>
. Therefore, the deterministic representation of each
<italic>j</italic>
= 1 :
<italic>k
<sub>out,t</sub>
</italic>
tracked hypothesis will be a cluster
<italic>G
<sub>j,t|out</sub>
</italic>
with centroid
<italic>g⃗
<sub>j,t|out</sub>
</italic>
, with the same components as the state vector defined in
<xref ref-type="disp-formula" rid="FD1">(1)</xref>
, and an identification parameter
<italic>τ
<sub>j|out</sub>
</italic>
.</p>
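<p>A minimal sketch of this output clustering is given below. It is a simplified, greedy variant and not the full <italic>Sequential K-Means with Validation</italic> of Figure 8; the distance threshold and the state layout are assumptions made for the example.</p>
<preformat>
import numpy as np

def cluster_particles(particles, dist_th=300.0):
    """Greedy sequential clustering of particle states.

    particles : (N, d) array of state vectors taken from S_t
    dist_th   : assumed distance threshold (e.g., millimeters in position space)
    Returns the list of centroids g_j and the particle-to-cluster labels.
    """
    centroids, counts = [], []
    labels = np.empty(len(particles), dtype=int)
    for i, x in enumerate(particles):
        if centroids:
            dists = [np.linalg.norm(x - c) for c in centroids]
            j = int(np.argmin(dists))
        if not centroids or dists[j] > dist_th:
            # Open a new cluster for a particle far from every existing centroid.
            centroids.append(x.astype(float).copy())
            counts.append(1)
            labels[i] = len(centroids) - 1
        else:
            # Assign to the closest cluster and update its centroid as a running mean.
            counts[j] += 1
            centroids[j] += (x - centroids[j]) / counts[j]
            labels[i] = j
    return centroids, labels
</preformat>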
</sec>
</sec>
</sec>
<sec>
<label>4.</label>
<title>Results</title>
<p>Different tests have been carried out in unstructured indoor environments, and their results are shown in this section. The stereo vision system used in the experiments is formed by two black-and-white digital cameras mounted on a static rig, with a baseline of 30 cm between them, at a height of around 1.5 m above the floor. Vision processes have been developed using the OpenCV libraries [
<xref ref-type="bibr" rid="b25-sensors-10-08865">25</xref>
] and run on a general-purpose computer (Intel Duo, 1.8 GHz).</p>
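<p>For reference, the 3D location of a matched edge point can be recovered from its disparity with a standard pinhole triangulation model, as in the sketch below. The focal length and principal point are assumed example values, not the calibration of the real system; the 30 cm baseline is the one described above.</p>
<preformat>
import numpy as np

# Illustrative triangulation for the described rig (baseline B = 0.30 m).
# f_px and the principal point (cx, cy) are hypothetical calibration values;
# the real system obtains them from its own stereo calibration.
B = 0.30          # baseline between the two cameras [m]
f_px = 800.0      # focal length [pixels] (assumed for the example)
cx, cy = 320.0, 240.0

def triangulate(u_left, v_left, disparity):
    """Back-project a matched edge point to 3D camera coordinates from its disparity."""
    Z = f_px * B / disparity          # depth grows as disparity shrinks
    X = (u_left - cx) * Z / f_px
    Y = (v_left - cy) * Z / f_px
    return np.array([X, Y, Z])
</preformat>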
<p>The global tracking algorithm described in this paper has been implemented on a four-wheeled mobile robot platform. Specifically, a Pioneer2AT from MobileRobots© [
<xref ref-type="bibr" rid="b29-sensors-10-08865">29</xref>
] has been used for the different tests. The robot includes a control interface so it can be guided around the environment, accessible through the GNU Player control software from the Player Project [
<xref ref-type="bibr" rid="b30-sensors-10-08865">30</xref>
].</p>
<p>
<xref ref-type="fig" rid="f9-sensors-10-08865">Figure 9</xref>
displays the functionality of the multi-tracking process in one of the tested situations. Three instants of the same experiment are shown in the figure. Each column presents the results obtained from a single capture; the upper row shows the input images, while the lower row shows 2D representations of the objects’ data on the XZ ground plane.</p>
<p>Different data from the detected objects are shown in each plot. According to the identification generated by the output clustering process, each group
<italic>G</italic>
<sub>1:
<italic>k,t|out</italic>
</sub>
is assigned a unique color. Each group is modeled as a cylinder, which is therefore shown as a rectangle in the images and as a circle in the ground projections. In both graphics, an arrow (with the same color as the corresponding group) shows the estimated speed of every obstacle being tracked at that instant, both in magnitude and in direction.</p>
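<p>The rectangle drawn for each group can be obtained by projecting its bounding cylinder with a pinhole model, as in the following sketch; the cylinder radius and height and the camera intrinsics are assumed example values, not parameters reported here.</p>
<preformat>
def cylinder_to_image_rect(g, radius, height, f_px, cx, cy):
    """Project the cylinder associated to a cluster centroid g = (X, Y, Z)
    onto the image plane as a bounding rectangle (illustrative pinhole model;
    radius, height and the intrinsics are assumed example values)."""
    X, Y, Z = g
    u = f_px * X / Z + cx               # projection of the cylinder axis
    v = f_px * Y / Z + cy
    half_w = f_px * radius / Z          # lateral extent of the cylinder
    half_h = f_px * (height / 2.0) / Z  # vertical extent
    return (u - half_w, v - half_h, u + half_w, v + half_h)  # (left, top, right, bottom)
</preformat>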
<p>Particles’ state
<inline-formula>
<mml:math id="M30">
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo>→</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mi>t</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi>n</mml:mi>
<mml:mo>−</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula>
(taken from the final set
<italic>S
<sub>t</sub>
</italic>
generated by the XPFCP) and 3D position of data set
<italic>Y
<sub>obstacles,t</sub>
</italic>
are represented by red and green dots, respectively, in each plot. In addition, the estimated position and speed (if non-zero) of each obstacle are also depicted below it in the top-row images.</p>
<p>Between the two plots in each column, a text row displays information about the results shown: the number of tracked obstacles (k); the execution time of the whole tracking application in ms (texe); the percentage of effective particles
<italic>n
<sub>eff</sub>
</italic>
(neff); and the frame number in the video sequence (iter). As can be noticed in
<xref ref-type="fig" rid="f9-sensors-10-08865">Figure 9</xref>
, the observation system proposed and described in Section 2 correctly performs its detection, classification and 3D localization tasks. Every object not belonging to the environmental structure is detected, localized and classified into the obstacle data set
<italic>Y
<sub>obstacles,t</sub>
</italic>
, in order to be tracked afterwards.</p>
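<p>The neff figure reported in the plots is consistent with the usual effective-sample-size estimate of a particle filter; the sketch below assumes this standard definition, which may differ in detail from the authors’ exact implementation.</p>
<preformat>
import numpy as np

def effective_particles_ratio(weights):
    """Standard effective-sample-size estimate N_eff = 1 / sum(w^2),
    expressed as a percentage of the total number of particles."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                    # make sure the weights are normalized
    n_eff = 1.0 / np.sum(w ** 2)
    return 100.0 * n_eff / len(w)
</preformat>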
<p>The multimodal algorithm also achieves the position estimation objective for all obstacles in the scene, regardless of the number, shape, dynamics and type of the objects. The XPFCP correctly tracks deformable and dynamic objects, such as persons, and static ones such as the paper bin, which can be seen beside the wall on the right.</p>
<p>Moreover, each tracked object characterized by the corresponding particles’ cluster
<italic>G</italic>
<sub>1:
<italic>k,t|out</italic>
</sub>
maintains its identity
<italic>τ</italic>
<sub>1:
<italic>k|out</italic>
</sub>
(shown with the same color in
<xref ref-type="fig" rid="f9-sensors-10-08865">Figure 9</xref>
) while the object stays in the scene, even if it is partially or totally occluded from the vision system for a certain time. This is possible thanks to the particles’ clustering algorithm, which includes a window-based validation process.</p>
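<p>The window-based validation mentioned above can be sketched as a simple per-cluster counter, as illustrated below; the window length is an assumed value, and the authors’ actual bookkeeping may differ.</p>
<preformat>
def validate_cluster(cluster, supported, max_missing=10):
    """Window-based validation (illustrative): a tracked cluster keeps its
    identity tau while the object is occluded, and is only discarded after
    it has gone unsupported for max_missing consecutive frames (assumed window).

    cluster   : dict with keys 'tau' (identity) and 'missing' (frame counter)
    supported : True if particles/measurements were assigned to the cluster
    """
    cluster["missing"] = 0 if supported else cluster["missing"] + 1
    return cluster["missing"] <= max_missing   # False means the track is dropped
</preformat>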
<p>In order to show the behavior of the identification task in detail,
<xref ref-type="fig" rid="f10-sensors-10-08865">Figure 10</xref>
shows the trajectories followed in the XZ plane by the four obstacles detected in another experiment. The robot remains stationary in front of the obstacles for the whole test.</p>
<p>Each colored spot represents, over consecutive iterations, the centroid position
<italic>g⃗</italic>
<sub>1:4|
<italic>out</italic>
</sub>
of the cluster related to the corresponding obstacle
<italic>G</italic>
<sub>1:4
<italic>,t|out</italic>
</sub>
; each color reflects the cluster identity
<italic>τ</italic>
<sub>1:4|
<italic>out</italic>
</sub>
. A dashed oriented arrow over each
<italic>g⃗</italic>
<sub>1:4|
<italic>out</italic>
</sub>
trace illustrates the ground truth of the path followed by the real obstacles. It can hence be concluded that the correct identification of each object
<italic>τ</italic>
<sub>1:4|
<italic>out</italic>
</sub>
is maintained with 100% reliability, even when partial and total occlusions occur; this is the case for the traces of obstacles three (in pink) and four (in light blue).</p>
<p>
<xref ref-type="fig" rid="f11-sensors-10-08865">Figure 11</xref>
graphically demonstrates the multimodal capability of the XPFCP proposal in a multi-tracking task. In this figure, the XPFCP functionality is compared with that of another multimodal multi-tracking proposal, described in [
<xref ref-type="bibr" rid="b18-sensors-10-08865">18</xref>
].</p>
<p>The bottom row of images in
<xref ref-type="fig" rid="f11-sensors-10-08865">Figure 11</xref>
shows the same particles and observation data set projections, as well as the tracking parameters texe, neff and iter, as described for
<xref ref-type="fig" rid="f9-sensors-10-08865">Figure 9</xref>
. In addition, the top row includes a plot of the density represented by the particle set output from the correction step of each of the two algorithms.</p>
<p>The information included in
<xref ref-type="fig" rid="f11-sensors-10-08865">Figure 11</xref>
allows concluding that the proposed XPFCP (left column) generates well-differentiated modes in the final belief, one per estimation hypothesis; this is shown by the four clear peaks in the belief distribution (top row). However, the PF-based multi-tracking proposal presented in [
<xref ref-type="bibr" rid="b18-sensors-10-08865">18</xref>
] does not achieve the multimodality objective with the same efficiency as the XPFCP, and therefore it cannot be used to robustly track multiple objects within a single estimator.</p>
<p>As asserted theoretically in previous sections, the measurement clustering algorithm used as a deterministic association process yields better results in the multimodal estimation task. Moreover, the results presented in
<xref ref-type="fig" rid="f11-sensors-10-08865">Figure 11</xref>
show that the multimodal density obtained with the XPFCP
<italic>S
<sub>t</sub>
</italic>
, which represents
<italic>p</italic>
(
<italic>x⃗
<sub>t</sub>
</italic>
|
<italic>y⃗</italic>
<sub>1:
<italic>t</italic>
</sub>
), can be easily segmented to generate a deterministic output
<italic>G</italic>
<sub>1:
<italic>k,t|t</italic>
</sub>
, which is not the case with the results generated by the proposal in [
<xref ref-type="bibr" rid="b18-sensors-10-08865">18</xref>
]. A fast clustering algorithm, like the K-Means-based one proposed in this work, is enough to fulfill this task robustly and with a low execution time. As can be seen in the figure, the execution time of the XPFCP (texe = 28 ms) is almost 17 times smaller than that of the other algorithm (texe = 474 ms); therefore, the Bayesian proposal presented in this paper is more appropriate for a real-time application than the proposal in [
<xref ref-type="bibr" rid="b18-sensors-10-08865">18</xref>
].</p>
<p>Finally, the data shown in
<xref ref-type="fig" rid="f12-sensors-10-08865">Figure 12</xref>
confirm that the impoverishment problem related to the Bootstrap filter is minimized by using the observation data set
<italic>Y
<sub>obstacles,t</sub>
</italic>
organized in clusters
<italic>G</italic>
<sub>1:
<italic>k,t|in</italic>
</sub>
at the re-initialization and correction steps. The bottom row of images in
<xref ref-type="fig" rid="f12-sensors-10-08865">Figure 12</xref>
shows the same information and parameters as the corresponding one in
<xref ref-type="fig" rid="f11-sensors-10-08865">Figure 11</xref>
. On the other hand, the upper row plots the weights array
<inline-formula>
<mml:math id="M31">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>w</mml:mi>
<mml:mo>→</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mi>t</mml:mi>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>w</mml:mi>
<mml:mo>˜</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mi>t</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
<mml:mo>]</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>n</mml:mi>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula>
output from the correction step. Analyzing the results included in
<xref ref-type="fig" rid="f12-sensors-10-08865">Figure 12</xref>
, it is concluded that if the proposed segmentation in
<italic>G</italic>
<sub>1:
<italic>k,t|in</italic>
</sub>
classes is not used (right column plots), the most poorly sensed object in the scene (the paper bin beside the wall on the right) has a reduced representation in the discrete distribution output from the correction step
<inline-formula>
<mml:math id="M32">
<mml:mrow>
<mml:msubsup>
<mml:mi>S</mml:mi>
<mml:mi>t</mml:mi>
<mml:mo></mml:mo>
</mml:msubsup>
<mml:mo>=</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo>{</mml:mo>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo>→</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>|</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>−</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:msubsup>
<mml:mo>,</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>w</mml:mi>
<mml:mo>˜</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mi>t</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
<mml:mo>}</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>n</mml:mi>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula>
. However, the results generated by the XPFCP in the same situation (left column plots) are much better. A visual comparison between both discrete distribution plots (top row) shows the claimed behavior.</p>
<p>In order to analyze this situation quantitatively,
<xref ref-type="table" rid="t1-sensors-10-08865">Table 1</xref>
shows the number of particles in the set (output from the selection step) assigned to each object in the scene in
<xref ref-type="fig" rid="f12-sensors-10-08865">Figure 12</xref>
, numbered according to its position in the image, from left to right.</p>
<p>From the figures shown in
<xref ref-type="table" rid="t1-sensors-10-08865">Table 1</xref>
, it can be seen that particles are more evenly distributed among all the tracking hypotheses when the clustered measurements
<italic>G</italic>
<sub>1:
<italic>k,t</italic>
−1|
<italic>in</italic>
</sub>
are used at the re-initialization and correction steps, avoiding the mentioned impoverishment problem.</p>
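<p>The effect quantified in Table 1 stems from dedicating the re-initialization particles to the measurement clusters. A sketch of that idea, with an assumed spread value and an even split among clusters, is shown below; it is not the authors’ exact implementation.</p>
<preformat>
import numpy as np

def reinit_from_clusters(cluster_centroids, n_m, sigma=150.0):
    """Draw the n_m re-initialization particles evenly from the measurement
    clusters G_1:k,t|in, so every detected object keeps a minimum
    representation in the set (a sketch of the idea, not the exact code).

    cluster_centroids : (k, d) array of measurement cluster centers
    sigma             : spread of the inserted particles [mm] (assumed value)
    """
    k = len(cluster_centroids)
    per_cluster = np.full(k, n_m // k)
    per_cluster[: n_m % k] += 1          # distribute the remainder evenly
    new_particles = [
        c + sigma * np.random.randn(per_cluster[j], len(c))
        for j, c in enumerate(cluster_centroids)
    ]
    return np.vstack(new_particles)
</preformat>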
<p>As a final analysis,
<xref ref-type="table" rid="t2-sensors-10-08865">Table 2</xref>
summarizes the results obtained with the proposed system (XPFCP with stereo vision data input) in a long experiment of 1,098 frames (a video sequence of 1 min 13 s) with complex situations similar to the ones presented in
<xref ref-type="fig" rid="f9-sensors-10-08865">Figure 9</xref>
. The number of obstacles in the scene varies from 0 to 5 along the sequence.</p>
<p>
<xref ref-type="table" rid="t2-sensors-10-08865">Table 2</xref>
data allow concluding that the multi-tracking proposal achieves the proposed objective reliably and robustly:
<list list-type="bullet">
<list-item>
<p>The low computational load of the tracking application enables its real time execution.</p>
</list-item>
<list-item>
<p>The impoverishment problem has been correctly solved, because the number of effective particles involved in the PF remains above the established threshold (66%).</p>
</list-item>
<list-item>
<p>The XPFCP shows high identification reliability and robustness against noise.</p>
</list-item>
<list-item>
<p>A detailed analysis of tracking reliability shows errors (missed, duplicated or displaced objects) in about 13% of the iterations.</p>
</list-item>
<list-item>
<p>Nevertheless, noticeable errors in the tracking application (those lasting more than three consecutive iterations) only reached 5.3% of the iterations in the whole experiment.</p>
</list-item>
</list>
</p>
</sec>
<sec>
<label>5.</label>
<title>Conclusions</title>
<p>A robust estimator of the movement of obstacles in unstructured indoor environments has been designed and tested. The proposed XPFCP is based on a probabilistic multimodal filter and is completed with a clustering process. The algorithm presented in this paper provides high accuracy and robustness in the tracking task in complex environments, and obtains better figures than other up-to-date proposals.</p>
<p>In addition, a specific detection, classification and 3D localization algorithm has been developed for a stereo vision observation system. This algorithm is able to handle these tasks in a dynamic and complex indoor environment, and it also separates, in real time, the measurements acquired from obstacles from those acquired from structural elements of the environment.</p>
<p>The input data to the detection and classification process are stereo vision images coming from a pair of synchronized cameras. The vision system has proven to be robust in different scenes and at distances of up to 20 m.</p>
<p>Results obtained with the proposed algorithm are shown throughout this article. They show that the stated objectives have been achieved robustly and efficiently. The reliability shown by these results is especially important, as the system is intended to be used in tracking applications for autonomous robot navigation.</p>
<p>To track a variable number of objects within a single algorithm, an estimator called XPFCP has been specified, developed and tested. In order to achieve this multimodal behavior, a combination of probabilistic and deterministic techniques has been successfully used.</p>
<p>The XPFCP includes a deterministic clustering process in order to increase the likelihood of hypotheses for new objects appearing in the scene. This clustering improves the robustness of the XPFCP compared with the behavior shown by other multimodal estimators.</p>
<p>Most tests have been run with a fixed number of 600 particles. This figure is kept constant so that the XPFCP execution time is also constant; this is very important in order to achieve real-time performance.</p>
<p>The designed XPFCP is based on simple observation and actuation models, and therefore it can be easily adapted to handle data coming from different kinds of sensors and different types of obstacles to be tracked. This makes our tracking proposal more flexible than other solutions found in the related literature, which are based on rigid models for the input data set.</p>
</sec>
<sec>
<title>Supplemental Information</title>
<supplementary-material content-type="local-data" id="SD1">
<media xlink:href="sensors-10-08865-s001.avi" xlink:type="simple" id="d32e5327" position="anchor" mimetype="video" mime-subtype="x-msvideo"></media>
</supplementary-material>
</sec>
</body>
<back>
<ack>
<p>This work has been supported by the Spanish Ministry of Science and Innovation under projects VISNU (ref. TIN2009-08984) and SDTEAM-UAH (ref. TIN2008-06856-C05-05).</p>
</ack>
<ref-list>
<title>References</title>
<ref id="b1-sensors-10-08865">
<label>1.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Jia</surname>
<given-names>Z</given-names>
</name>
<name>
<surname>Balasuriya</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Challa</surname>
<given-names>S</given-names>
</name>
</person-group>
<article-title>Autonomous vehicles navigation with visual target tracking: Technical approaches</article-title>
<source>Algorithms</source>
<year>2008</year>
<volume>1</volume>
<fpage>153</fpage>
<lpage>182</lpage>
</element-citation>
</ref>
<ref id="b2-sensors-10-08865">
<label>2.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Khan</surname>
<given-names>Z</given-names>
</name>
<name>
<surname>Balch</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Dellaert</surname>
<given-names>F</given-names>
</name>
</person-group>
<article-title>A Rao-Blackwellized particle filter for eigen tracking</article-title>
<conf-name>Proceedings of the Third IEEE Conference on Computer Vision and Pattern Recognition</conf-name>
<conf-loc>Washington, DC, USA</conf-loc>
<conf-date>June 2004</conf-date>
<fpage>980</fpage>
<lpage>986</lpage>
</element-citation>
</ref>
<ref id="b3-sensors-10-08865">
<label>3.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Isard</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Blake</surname>
<given-names>A</given-names>
</name>
</person-group>
<article-title>Icondensation: Unifying low-level and high-level tracking in a stochastic framework</article-title>
<conf-name>Proceedings of the Fifth European Conference on Computer Vision</conf-name>
<conf-loc>Freiburg, Germany</conf-loc>
<conf-date>June 1998</conf-date>
<volume>1</volume>
<fpage>893</fpage>
<lpage>908</lpage>
</element-citation>
</ref>
<ref id="b4-sensors-10-08865">
<label>4.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Chen</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Huang</surname>
<given-names>TS</given-names>
</name>
<name>
<surname>Rui</surname>
<given-names>Y</given-names>
</name>
</person-group>
<article-title>Mode-based multi-hypothesis head tracking using parametric contours</article-title>
<conf-name>Proceedings of the Fifth IEEE International Conference on Automatic Face and Gesture Recognition</conf-name>
<conf-loc>Washington, DC, USA</conf-loc>
<conf-date>May 2002</conf-date>
</element-citation>
</ref>
<ref id="b5-sensors-10-08865">
<label>5.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Odobez</surname>
<given-names>JM</given-names>
</name>
<name>
<surname>Gatica-Perez</surname>
<given-names>D</given-names>
</name>
</person-group>
<article-title>Embedding motion model-based stochastic tracking</article-title>
<conf-name>Proceedings of the Seventeenth International Conference on Pattern Recognition</conf-name>
<conf-loc>Cambridge, UK</conf-loc>
<conf-date>August 2004</conf-date>
<volume>2</volume>
<fpage>815</fpage>
<lpage>818</lpage>
</element-citation>
</ref>
<ref id="b6-sensors-10-08865">
<label>6.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Okuma</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Taleghani</surname>
<given-names>A</given-names>
</name>
<name>
<surname>De Freitas</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Little</surname>
<given-names>JJ</given-names>
</name>
<name>
<surname>Lowe</surname>
<given-names>DG</given-names>
</name>
</person-group>
<article-title>A boosted particle filter: Multi-target detection and tracking</article-title>
<conf-name>Proceedings of the Eighth European Conference on Computer Vision</conf-name>
<conf-loc>Prague, Czech Republic</conf-loc>
<conf-date>May 2004</conf-date>
<volume>3021</volume>
<comment>Part I</comment>
<fpage>28</fpage>
<lpage>39</lpage>
</element-citation>
</ref>
<ref id="b7-sensors-10-08865">
<label>7.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Thrun</surname>
<given-names>S</given-names>
</name>
</person-group>
<article-title>Probabilistic algorithms in robotics</article-title>
<source>AI Mag</source>
<year>2000</year>
<volume>21</volume>
<fpage>93</fpage>
<lpage>109</lpage>
</element-citation>
</ref>
<ref id="b8-sensors-10-08865">
<label>8.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Arulampalam</surname>
<given-names>MS</given-names>
</name>
<name>
<surname>Maskell</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Gordon</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Clapp</surname>
<given-names>T</given-names>
</name>
</person-group>
<article-title>A tutorial on particle filters for online nonlinear non-gaussian bayesian tracking</article-title>
<source>IEEE Trans. Signal. Proces</source>
<year>2002</year>
<volume>50</volume>
<fpage>174</fpage>
<lpage>188</lpage>
</element-citation>
</ref>
<ref id="b9-sensors-10-08865">
<label>9.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gordon</surname>
<given-names>NJ</given-names>
</name>
<name>
<surname>Salmond</surname>
<given-names>DJ</given-names>
</name>
<name>
<surname>Smith</surname>
<given-names>AFM</given-names>
</name>
</person-group>
<article-title>Novel approach to nonlinear/non-gaussian bayesian state estimation</article-title>
<source>IEEE Proc. F</source>
<year>1993</year>
<volume>140</volume>
<fpage>107</fpage>
<lpage>113</lpage>
</element-citation>
</ref>
<ref id="b10-sensors-10-08865">
<label>10.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wang</surname>
<given-names>X</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Ma</surname>
<given-names>J-J</given-names>
</name>
</person-group>
<article-title>An improved particle filter for target tracking in sensor systems</article-title>
<source>Sensors</source>
<year>2007</year>
<volume>7</volume>
<fpage>144</fpage>
<lpage>156</lpage>
</element-citation>
</ref>
<ref id="b11-sensors-10-08865">
<label>11.</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Welch</surname>
<given-names>G</given-names>
</name>
<name>
<surname>Bishop</surname>
<given-names>G</given-names>
</name>
</person-group>
<source>An Introduction to the Kalman Filter</source>
<comment>Technical Report: TR95-041</comment>
<publisher-name>ACM SIGGRAPH</publisher-name>
<publisher-loc>Los Angeles, CA, USA</publisher-loc>
<year>2001</year>
<comment>Available online:
<ext-link ext-link-type="uri" xlink:href="http://www.cs.unc.edu/~tracker/ref/s2001/kalman/">http://www.cs.unc.edu/~tracker/ref/s2001/kalman/</ext-link>
(accessed on 30 June 2010)</comment>
</element-citation>
</ref>
<ref id="b12-sensors-10-08865">
<label>12.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Reid</surname>
<given-names>DB</given-names>
</name>
</person-group>
<article-title>An algorithm for tracking multiple targets</article-title>
<source>IEEE Trans. Automat. Contr</source>
<year>1979</year>
<volume>24</volume>
<fpage>843</fpage>
<lpage>854</lpage>
</element-citation>
</ref>
<ref id="b13-sensors-10-08865">
<label>13.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Tweed</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Calway</surname>
<given-names>A</given-names>
</name>
</person-group>
<article-title>Tracking many objects using subordinated condensation</article-title>
<conf-name>Proceedings of the British Machine Vision Conference</conf-name>
<conf-loc>Cardiff, UK</conf-loc>
<conf-date>October 2002</conf-date>
<fpage>283</fpage>
<lpage>292</lpage>
</element-citation>
</ref>
<ref id="b14-sensors-10-08865">
<label>14.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Smith</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Gatica-Perez</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Odobez</surname>
<given-names>JM</given-names>
</name>
</person-group>
<article-title>Using particles to track varying numbers of interacting people</article-title>
<conf-name>Proceedings of the Fourth IEEE Conference on Computer Vision and Pattern Recognition</conf-name>
<conf-loc>San Diego, CA, USA</conf-loc>
<conf-date>June 2005</conf-date>
<fpage>962</fpage>
<lpage>969</lpage>
</element-citation>
</ref>
<ref id="b15-sensors-10-08865">
<label>15.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>MacCormick</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Blake</surname>
<given-names>A</given-names>
</name>
</person-group>
<article-title>A probabilistic exclusion principle for tracking multiple objects</article-title>
<conf-name>Proceedings of the Seventh IEEE International Conference on Computer Vision</conf-name>
<conf-loc>Corfu, Greece</conf-loc>
<conf-date>September 1999</conf-date>
<fpage>572</fpage>
<lpage>578</lpage>
</element-citation>
</ref>
<ref id="b16-sensors-10-08865">
<label>16.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Schulz</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Burgard</surname>
<given-names>W</given-names>
</name>
<name>
<surname>Fox</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Cremers</surname>
<given-names>AB</given-names>
</name>
</person-group>
<article-title>Tracking multiple moving targets with a mobile robot using particle filters and statistical data association</article-title>
<source>Int. J. Robot. Res</source>
<year>2003</year>
<volume>22</volume>
<fpage>99</fpage>
<lpage>116</lpage>
</element-citation>
</ref>
<ref id="b17-sensors-10-08865">
<label>17.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hue</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Le Cadre</surname>
<given-names>JP</given-names>
</name>
<name>
<surname>Pérez</surname>
<given-names>P</given-names>
</name>
</person-group>
<article-title>A particle filter to track multiple objects</article-title>
<source>IEEE Trans. Aero. Elec. Sys</source>
<year>2002</year>
<volume>38</volume>
<fpage>791</fpage>
<lpage>812</lpage>
</element-citation>
</ref>
<ref id="b18-sensors-10-08865">
<label>18.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Koller-Meier</surname>
<given-names>EB</given-names>
</name>
<name>
<surname>Ade</surname>
<given-names>F</given-names>
</name>
</person-group>
<article-title>Tracking multiple objects using a condensation algorithm</article-title>
<source>J. Robot. Auton. Syst</source>
<year>2001</year>
<volume>34</volume>
<fpage>93</fpage>
<lpage>105</lpage>
</element-citation>
</ref>
<ref id="b19-sensors-10-08865">
<label>19.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Schulz</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Burgard</surname>
<given-names>W</given-names>
</name>
<name>
<surname>Fox</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Cremers</surname>
<given-names>AB</given-names>
</name>
</person-group>
<article-title>People tracking with mobile robots using sample-based joint probabilistic data association filters</article-title>
<source>Int. J. Robot. Res</source>
<year>2003</year>
<volume>22</volume>
<fpage>99</fpage>
<lpage>116</lpage>
</element-citation>
</ref>
<ref id="b20-sensors-10-08865">
<label>20.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Bar-Shalom</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Fortmann</surname>
<given-names>T</given-names>
</name>
</person-group>
<source>Tracking and Data Association</source>
<publisher-name>Academic Press</publisher-name>
<publisher-loc>New York, NY, USA</publisher-loc>
<year>1988</year>
</element-citation>
</ref>
<ref id="b21-sensors-10-08865">
<label>21.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Burguera</surname>
<given-names>A</given-names>
</name>
<name>
<surname>González</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Oliver</surname>
<given-names>G</given-names>
</name>
</person-group>
<article-title>Sonar sensor models and their application to mobile robot localization</article-title>
<source>Sensors</source>
<year>2009</year>
<volume>9</volume>
<fpage>10217</fpage>
<lpage>10243</lpage>
</element-citation>
</ref>
<ref id="b22-sensors-10-08865">
<label>22.</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Boufama</surname>
<given-names>B</given-names>
</name>
</person-group>
<article-title>Reconstruction Tridimensionnelle en Vision par Ordinateur: Cas des Cameras non Etalonnees</article-title>
<comment>Ph.D. Thesis</comment>
<publisher-name>Institut National Polytechnique de Grenoble</publisher-name>
<publisher-loc>Grenoble, France</publisher-loc>
<year>1994</year>
</element-citation>
</ref>
<ref id="b23-sensors-10-08865">
<label>23.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Canny</surname>
<given-names>FJ</given-names>
</name>
</person-group>
<article-title>A computational approach to edge detection</article-title>
<source>IEEE Trans. Pattern Anal</source>
<year>1986</year>
<volume>8</volume>
<fpage>679</fpage>
<lpage>698</lpage>
</element-citation>
</ref>
<ref id="b24-sensors-10-08865">
<label>24.</label>
<mixed-citation publication-type="webpage">
<italic>Documentation of function cvHoughLines2.</italic>
Available online:
<ext-link ext-link-type="uri" xlink:href="http://opencv.willowgarage.com/documentation/feature_detection.html">http://opencv.willowgarage.com/documentation/feature_detection.html</ext-link>
(accessed on 27 August 2010).</mixed-citation>
</ref>
<ref id="b25-sensors-10-08865">
<label>25.</label>
<mixed-citation publication-type="webpage">
<italic>Project OpenCV.</italic>
Available online:
<ext-link ext-link-type="uri" xlink:href="http://sourceforge.net/projects/opencvlibrary/">http://sourceforge.net/projects/opencvlibrary/</ext-link>
(accessed on 27 August 2010).</mixed-citation>
</ref>
<ref id="b26-sensors-10-08865">
<label>26.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Vermaak</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Doucet</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Perez</surname>
<given-names>P</given-names>
</name>
</person-group>
<article-title>Maintaining multimodality through mixture tracking</article-title>
<conf-name>Proceedings of the Ninth IEEE International Conference on Computer Vision</conf-name>
<conf-loc>Nice, France</conf-loc>
<conf-date>June 2003</conf-date>
<fpage>1110</fpage>
<lpage>1116</lpage>
</element-citation>
</ref>
<ref id="b27-sensors-10-08865">
<label>27.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Marrón</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Sotelo</surname>
<given-names>MA</given-names>
</name>
<name>
<surname>García</surname>
<given-names>JC</given-names>
</name>
<name>
<surname>Broddfelt</surname>
<given-names>J</given-names>
</name>
</person-group>
<article-title>Comparing improved versions of ‘K-Means’ and ‘Subtractive’ clustering in a tracking application</article-title>
<conf-name>Proceedings of the Eleventh International Workshop on Computer Aided Systems Theory</conf-name>
<conf-loc>Las Palmas de Gran Canaria, Spain</conf-loc>
<conf-date>February 2007</conf-date>
<fpage>252</fpage>
<lpage>255</lpage>
</element-citation>
</ref>
<ref id="b28-sensors-10-08865">
<label>28.</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Bar Shalom</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>XR</given-names>
</name>
</person-group>
<source>Estimation and Tracking Principles Techniques and Software</source>
<publisher-name>Artech House</publisher-name>
<publisher-loc>Boston, MA, USA</publisher-loc>
<year>1993</year>
</element-citation>
</ref>
<ref id="b29-sensors-10-08865">
<label>29.</label>
<mixed-citation publication-type="webpage">
<italic>MobileRobots.</italic>
Available online:
<ext-link ext-link-type="uri" xlink:href="http://www.mobilerobots.com/Mobile_Robots.aspx">http://www.mobilerobots.com/Mobile_Robots.aspx</ext-link>
(accessed on 27 August 2010).</mixed-citation>
</ref>
<ref id="b30-sensors-10-08865">
<label>30.</label>
<mixed-citation publication-type="webpage">
<italic>The Player Project.</italic>
Available online:
<ext-link ext-link-type="uri" xlink:href="http://playerstage.sourceforge.net/">http://playerstage.sourceforge.net/</ext-link>
(accessed on 27 August 2010).</mixed-citation>
</ref>
</ref-list>
</back>
<floats-group>
<fig id="f1-sensors-10-08865" position="float">
<label>Figure 1.</label>
<caption>
<p>Framework and typical scenario: mobile robot navigation through complex and crowded indoor environments.</p>
</caption>
<graphic xlink:href="sensors-10-08865f1"></graphic>
</fig>
<fig id="f2-sensors-10-08865" position="float">
<label>Figure 2.</label>
<caption>
<p>General description of the global stereo vision based tracking system.</p>
</caption>
<graphic xlink:href="sensors-10-08865f2"></graphic>
</fig>
<fig id="f3-sensors-10-08865" position="float">
<label>Figure 3.</label>
<caption>
<p>Functional description of the stereo vision data extraction process.</p>
</caption>
<graphic xlink:href="sensors-10-08865f3"></graphic>
</fig>
<fig id="f4-sensors-10-08865" position="float">
<label>Figure 4.</label>
<caption>
<p>Flowchart of the data acquisition subsystem, based on a stereo vision process. Main tasks are: detection and classification (blocks at the top); and 3D localization (blocks at the bottom). Inner structure of each main task is highlighted and detailed.</p>
</caption>
<graphic xlink:href="sensors-10-08865f4"></graphic>
</fig>
<fig id="f5-sensors-10-08865" position="float">
<label>Figure 5.</label>
<caption>
<p>Results of the detection, classification and 3D location process in three frames of a real experiment. Detected structural features and related original images.</p>
</caption>
<graphic xlink:href="sensors-10-08865f5"></graphic>
</fig>
<fig id="f6-sensors-10-08865" position="float">
<label>Figure 6.</label>
<caption>
<p>Results of the detection, classification and 3D location process in four frames of a real experiment. Top row, detected edges; middle row, original images; bottom row, 2D ground projection of points classified as obstacles.</p>
</caption>
<graphic xlink:href="sensors-10-08865f6"></graphic>
</fig>
<fig id="f7-sensors-10-08865" position="float">
<label>Figure 7.</label>
<caption>
<p>Functional diagram of the multiple objects’ tracker based on the XPFCP. Deterministic tasks have a blue background while probabilistic tasks have a different color. Modified or new PF steps are highlighted with dashed lines.</p>
</caption>
<graphic xlink:href="sensors-10-08865f7"></graphic>
</fig>
<fig id="f8-sensors-10-08865" position="float">
<label>Figure 8.</label>
<caption>
<p>Functional diagram of the modified version of the Extended K-Means (second step, white background), used in the correction step of the XPFCP: the
<italic>Sequential K-Means with Validation</italic>
. New steps of this clustering algorithm are highlighted in yellow and green.</p>
</caption>
<graphic xlink:href="sensors-10-08865f8"></graphic>
</fig>
<fig id="f9-sensors-10-08865" position="float">
<label>Figure 9.</label>
<caption>
<p>Results of the multi-tracking process in a real experiment. They are organized in columns, where the upper image shows the tracking results generated by the XPFCP for each object, projected in the image plane, and the lower one shows the same results projected into the XZ plane.</p>
</caption>
<graphic xlink:href="sensors-10-08865f9"></graphic>
</fig>
<fig id="f10-sensors-10-08865" position="float">
<label>Figure 10.</label>
<caption>
<p>Trajectory followed in the ground plane (XZ) by four obstacles according to the XPFCP estimation results in a real experiment.</p>
</caption>
<graphic xlink:href="sensors-10-08865f10"></graphic>
</fig>
<fig id="f11-sensors-10-08865" position="float">
<label>Figure 11.</label>
<caption>
<p>Results of the multi-tracking process in a real experiment: left column shows the results generated by the XPFCP; the right column shows the results of the proposal presented in [
<xref ref-type="bibr" rid="b18-sensors-10-08865">18</xref>
].</p>
</caption>
<graphic xlink:href="sensors-10-08865f11"></graphic>
</fig>
<fig id="f12-sensors-10-08865" position="float">
<label>Figure 12.</label>
<caption>
<p>Results of the multi-tracking process in a real experiment using the proposed XPFCP (left column of images), and the same results using an input data set not segmented in classes at the re-initialization and correction steps (right column of images).</p>
</caption>
<graphic xlink:href="sensors-10-08865f12"></graphic>
</fig>
<table-wrap id="t1-sensors-10-08865" position="float">
<label>Table 1.</label>
<caption>
<p>Distribution percentage of particles in the set
<italic>S
<sub>t</sub>
</italic>
among the tracked hypotheses in the situations shown in
<xref ref-type="fig" rid="f12-sensors-10-08865">Figure 12</xref>
.</p>
</caption>
<table frame="box" rules="all">
<thead>
<tr>
<th align="center" valign="middle" rowspan="2" colspan="1">
<bold>Algorithm</bold>
</th>
<th colspan="4" align="center" valign="bottom" rowspan="1">
<bold>Object</bold>
</th>
</tr>
<tr>
<th align="center" valign="bottom" rowspan="1" colspan="1">
<bold>1</bold>
</th>
<th align="center" valign="bottom" rowspan="1" colspan="1">
<bold>2</bold>
</th>
<th align="center" valign="bottom" rowspan="1" colspan="1">
<bold>3</bold>
</th>
<th align="center" valign="bottom" rowspan="1" colspan="1">
<bold>4</bold>
</th>
</tr>
</thead>
<tbody>
<tr>
<td align="center" valign="top" rowspan="1" colspan="1">Using
<italic>G</italic>
<sub>1:
<italic>k,t</italic>
−1|
<italic>in</italic>
</sub>
(left column plots)</td>
<td align="center" valign="top" rowspan="1" colspan="1">28.5</td>
<td align="center" valign="top" rowspan="1" colspan="1">28.1</td>
<td align="center" valign="top" rowspan="1" colspan="1">31.5</td>
<td align="center" valign="top" rowspan="1" colspan="1">10.9</td>
</tr>
<tr>
<td align="center" valign="top" rowspan="1" colspan="1">Not using
<italic>G</italic>
<sub>1:
<italic>k,t</italic>
−1|
<italic>in</italic>
</sub>
(right column plots)</td>
<td align="center" valign="top" rowspan="1" colspan="1">31.2</td>
<td align="center" valign="top" rowspan="1" colspan="1">42.2</td>
<td align="center" valign="top" rowspan="1" colspan="1">24.4</td>
<td align="center" valign="top" rowspan="1" colspan="1">2.2</td>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap id="t2-sensors-10-08865" position="float">
<label>Table 2.</label>
<caption>
<p>Summary of the results obtained with the multi-tracking proposal in a long and complex experiment. The most relevant parameters in the XPFCP are tuned to the values:
<italic>n</italic>
= 600,
<italic>γ
<sub>t</sub>
</italic>
= 0.2,
<inline-formula>
<mml:math id="M33">
<mml:mrow>
<mml:mfrac bevelled="true">
<mml:mrow>
<mml:msub>
<mml:mi>n</mml:mi>
<mml:mi mathvariant="italic">init</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mi>n</mml:mi>
</mml:mfrac>
<mml:mo>=</mml:mo>
<mml:mn>5</mml:mn>
<mml:mi>%</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>
, σ
<italic>
<sub>v,i</sub>
</italic>
= 100 /
<italic>i</italic>
= {
<italic>x,y,z,vx,vz</italic>
}, σ
<italic>
<sub>o,i</sub>
</italic>
= 150
<italic>mm</italic>
/
<italic>i</italic>
= {
<italic>x,y,z</italic>
}.</p>
</caption>
<table frame="box" rules="cols">
<thead>
<tr>
<th align="center" valign="middle" rowspan="1" colspan="1">
<bold>Parameter</bold>
</th>
<th align="center" valign="middle" rowspan="1" colspan="1">
<bold>Value</bold>
</th>
</tr>
<tr>
<th align="center" valign="middle" colspan="2" rowspan="1">
<hr></hr>
</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top" rowspan="1" colspan="1">Mean execution time</td>
<td align="center" valign="top" rowspan="1" colspan="1">40 ms (25 FPS)</td>
</tr>
<tr>
<td align="left" valign="top" rowspan="1" colspan="1">Number of effective particles,
<italic>n
<sub>eff</sub>
</italic>
</td>
<td align="center" valign="top" rowspan="1" colspan="1">69.8%</td>
</tr>
<tr>
<td align="left" valign="top" rowspan="1" colspan="1">Mismatch identification (% frames)</td>
<td align="center" valign="top" rowspan="1" colspan="1">0%</td>
</tr>
<tr>
<td align="left" valign="top" rowspan="1" colspan="1">Outliers rejection (% frames)</td>
<td align="center" valign="top" rowspan="1" colspan="1">99.9%</td>
</tr>
<tr>
<td align="left" valign="top" rowspan="1" colspan="1">Missed objects (% frames)</td>
<td align="center" valign="top" rowspan="1" colspan="1">9.2%</td>
</tr>
<tr>
<td align="left" valign="top" rowspan="1" colspan="1">Duplicated objects (% frames)</td>
<td align="center" valign="top" rowspan="1" colspan="1">3.3%</td>
</tr>
<tr>
<td align="left" valign="top" rowspan="1" colspan="1">Displaced objects (% frames)</td>
<td align="center" valign="top" rowspan="1" colspan="1">0.4%</td>
</tr>
<tr>
<td align="left" valign="top" rowspan="1" colspan="1">Reliability in long term errors (% frames)</td>
<td align="center" valign="top" rowspan="1" colspan="1">Δ
<italic>t</italic>
> 0.6s → 3.5%, Δ
<italic>t</italic>
> 0.8s → 1.8%</td>
</tr>
</tbody>
</table>
</table-wrap>
</floats-group>
</pmc>
</record>
