Exploration server on haptic devices


Visual Sensor Technology for Advanced Surveillance Systems: Historical View, Technological Aspects and Research Activities in Italy

Internal identifier: 001F99 (Pmc/Checkpoint); previous: 001F98; next: 002000

Authors: Gian Luca Foresti; Christian Micheloni; Claudio Piciarelli; Lauro Snidaro


RBID : PMC:3348842

Abstract

The paper is a survey of the main technological aspects of advanced visual-based surveillance systems. A brief historical view of such systems, from their origins to the present day, is given, together with a short description of the main research projects on surveillance applications carried out in Italy over the last twenty years. The paper then describes the main characteristics of an advanced visual sensor network that (a) directly processes locally acquired digital data, (b) automatically modifies intrinsic (focus, iris) and extrinsic (pan, tilt, zoom) parameters to increase the quality of the acquired data and (c) automatically selects the best subset of sensors in order to monitor a given moving object in the observed environment.


Url:
DOI: 10.3390/s90402252
PubMed: 22574011
PubMed Central: 3348842



The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Visual Sensor Technology for Advanced Surveillance Systems: Historical View, Technological Aspects and Research Activities in Italy</title>
<author>
<name sortKey="Foresti, Gian Luca" sort="Foresti, Gian Luca" uniqKey="Foresti G" first="Gian Luca" last="Foresti">Gian Luca Foresti</name>
</author>
<author>
<name sortKey="Micheloni, Christian" sort="Micheloni, Christian" uniqKey="Micheloni C" first="Christian" last="Micheloni">Christian Micheloni</name>
</author>
<author>
<name sortKey="Piciarelli, Claudio" sort="Piciarelli, Claudio" uniqKey="Piciarelli C" first="Claudio" last="Piciarelli">Claudio Piciarelli</name>
</author>
<author>
<name sortKey="Snidaro, Lauro" sort="Snidaro, Lauro" uniqKey="Snidaro L" first="Lauro" last="Snidaro">Lauro Snidaro</name>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">22574011</idno>
<idno type="pmc">3348842</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3348842</idno>
<idno type="RBID">PMC:3348842</idno>
<idno type="doi">10.3390/s90402252</idno>
<date when="2009">2009</date>
<idno type="wicri:Area/Pmc/Corpus">002811</idno>
<idno type="wicri:Area/Pmc/Curation">002811</idno>
<idno type="wicri:Area/Pmc/Checkpoint">001F99</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Visual Sensor Technology for Advanced Surveillance Systems: Historical View, Technological Aspects and Research Activities in Italy</title>
<author>
<name sortKey="Foresti, Gian Luca" sort="Foresti, Gian Luca" uniqKey="Foresti G" first="Gian Luca" last="Foresti">Gian Luca Foresti</name>
</author>
<author>
<name sortKey="Micheloni, Christian" sort="Micheloni, Christian" uniqKey="Micheloni C" first="Christian" last="Micheloni">Christian Micheloni</name>
</author>
<author>
<name sortKey="Piciarelli, Claudio" sort="Piciarelli, Claudio" uniqKey="Piciarelli C" first="Claudio" last="Piciarelli">Claudio Piciarelli</name>
</author>
<author>
<name sortKey="Snidaro, Lauro" sort="Snidaro, Lauro" uniqKey="Snidaro L" first="Lauro" last="Snidaro">Lauro Snidaro</name>
</author>
</analytic>
<series>
<title level="j">Sensors (Basel, Switzerland)</title>
<idno type="eISSN">1424-8220</idno>
<imprint>
<date when="2009">2009</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>The paper is a survey of the main technological aspects of advanced visual-based surveillance systems. A brief historical view of such systems, from their origins to the present day, is given, together with a short description of the main research projects on surveillance applications carried out in Italy over the last twenty years. The paper then describes the main characteristics of an advanced visual sensor network that (a) directly processes locally acquired digital data, (b) automatically modifies intrinsic (focus, iris) and extrinsic (pan, tilt, zoom) parameters to increase the quality of the acquired data and (c) automatically selects the best subset of sensors in order to monitor a given moving object in the observed environment.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Regazzoni, C S" uniqKey="Regazzoni C">C. S. Regazzoni</name>
</author>
<author>
<name sortKey="Visvanathan, R" uniqKey="Visvanathan R">R. Visvanathan</name>
</author>
<author>
<name sortKey="Foresti, G L" uniqKey="Foresti G">G. L. Foresti</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Donold, C H M" uniqKey="Donold C">C. H. M. Donold</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Pahlavan, K" uniqKey="Pahlavan K">K. Pahlavan</name>
</author>
<author>
<name sortKey="Levesque, A H" uniqKey="Levesque A">A. H. Levesque</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Yilmaz, A" uniqKey="Yilmaz A">A. Yilmaz</name>
</author>
<author>
<name sortKey="Javed, O" uniqKey="Javed O">O. Javed</name>
</author>
<author>
<name sortKey="Shah, M" uniqKey="Shah M">M. Shah</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Pantic, M" uniqKey="Pantic M">M. Pantic</name>
</author>
<author>
<name sortKey="Pentland, A" uniqKey="Pentland A">A. Pentland</name>
</author>
<author>
<name sortKey="Nijholt, A" uniqKey="Nijholt A">A. Nijholt</name>
</author>
<author>
<name sortKey="Huang, T" uniqKey="Huang T">T. Huang</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Haritaoglu, I" uniqKey="Haritaoglu I">I. Haritaoglu</name>
</author>
<author>
<name sortKey="Harwood, D" uniqKey="Harwood D">D. Harwood</name>
</author>
<author>
<name sortKey="Davis, L" uniqKey="Davis L">L. Davis</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Oliver, N M" uniqKey="Oliver N">N. M. Oliver</name>
</author>
<author>
<name sortKey="Rosario, B" uniqKey="Rosario B">B. Rosario</name>
</author>
<author>
<name sortKey="Pentland, A P" uniqKey="Pentland A">A. P. Pentland</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ricquebourg, Y" uniqKey="Ricquebourg Y">Y. Ricquebourg</name>
</author>
<author>
<name sortKey="Bouthemy, P" uniqKey="Bouthemy P">P. Bouthemy</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bremond, F" uniqKey="Bremond F">F. Bremond</name>
</author>
<author>
<name sortKey="Thonnat, M" uniqKey="Thonnat M">M. Thonnat</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Faulus, D" uniqKey="Faulus D">D. Faulus</name>
</author>
<author>
<name sortKey="Ng, R" uniqKey="Ng R">R. Ng</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Haanpaa, D P" uniqKey="Haanpaa D">D. P. Haanpaa</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kurana, M S" uniqKey="Kurana M">M. S. Kurana</name>
</author>
<author>
<name sortKey="Tugcu, T" uniqKey="Tugcu T">T. Tugcu</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stringa, E" uniqKey="Stringa E">E. Stringa</name>
</author>
<author>
<name sortKey="Regazzoni, C S" uniqKey="Regazzoni C">C. S. Regazzoni</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kimura, N" uniqKey="Kimura N">N. Kimura</name>
</author>
<author>
<name sortKey="Latifi, S" uniqKey="Latifi S">S. Latifi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lu, W" uniqKey="Lu W">W. Lu</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fazel, K" uniqKey="Fazel K">K. Fazel</name>
</author>
<author>
<name sortKey="Robertson, P" uniqKey="Robertson P">P. Robertson</name>
</author>
<author>
<name sortKey="Klank, O" uniqKey="Klank O">O. Klank</name>
</author>
<author>
<name sortKey="Vanselow, F" uniqKey="Vanselow F">F. Vanselow</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Batra, P" uniqKey="Batra P">P. Batra</name>
</author>
<author>
<name sortKey="Chang, S" uniqKey="Chang S">S. Chang</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Manjunath, B S" uniqKey="Manjunath B">B. S. Manjunath</name>
</author>
<author>
<name sortKey="Huang, T" uniqKey="Huang T">T. Huang</name>
</author>
<author>
<name sortKey="Tekalp, A M" uniqKey="Tekalp A">A. M. Tekalp</name>
</author>
<author>
<name sortKey="Zhang, H J" uniqKey="Zhang H">H. J. Zhang</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bjontegaard, G" uniqKey="Bjontegaard G">G. Bjontegaard</name>
</author>
<author>
<name sortKey="Lillevold, K" uniqKey="Lillevold K">K. Lillevold</name>
</author>
<author>
<name sortKey="Danielsen, R" uniqKey="Danielsen R">R. Danielsen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Cheng, H" uniqKey="Cheng H">H. Cheng</name>
</author>
<author>
<name sortKey="Li, X" uniqKey="Li X">X. Li</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Benoispineau, J" uniqKey="Benoispineau J">J. Benoispineau</name>
</author>
<author>
<name sortKey="Morier, F" uniqKey="Morier F">F. Morier</name>
</author>
<author>
<name sortKey="Barba, D" uniqKey="Barba D">D. Barba</name>
</author>
<author>
<name sortKey="Sanson, H" uniqKey="Sanson H">H. Sanson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Vasconcelos, N" uniqKey="Vasconcelos N">N. Vasconcelos</name>
</author>
<author>
<name sortKey="Lippman, A" uniqKey="Lippman A">A. Lippman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ebrahimi, T" uniqKey="Ebrahimi T">T. Ebrahimi</name>
</author>
<author>
<name sortKey="Salembier, P" uniqKey="Salembier P">P. Salembier</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Akyildiz, I F" uniqKey="Akyildiz I">I. F. Akyildiz</name>
</author>
<author>
<name sortKey="Su, W" uniqKey="Su W">W. Su</name>
</author>
<author>
<name sortKey="Sankarasubramaniam, Y" uniqKey="Sankarasubramaniam Y">Y. Sankarasubramaniam</name>
</author>
<author>
<name sortKey="Cayirci, E" uniqKey="Cayirci E">E. Cayirci</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kersey, C" uniqKey="Kersey C">C. Kersey</name>
</author>
<author>
<name sortKey="Yu, Z" uniqKey="Yu Z">Z. Yu</name>
</author>
<author>
<name sortKey="Tsai, J" uniqKey="Tsai J">J. Tsai</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sluzek, A" uniqKey="Sluzek A">A. Sluzek</name>
</author>
<author>
<name sortKey="Palaniappan, A" uniqKey="Palaniappan A">A. Palaniappan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Margi, C" uniqKey="Margi C">C. Margi</name>
</author>
<author>
<name sortKey="Petkov, V" uniqKey="Petkov V">V. Petkov</name>
</author>
<author>
<name sortKey="Obraczka, K" uniqKey="Obraczka K">K. Obraczka</name>
</author>
<author>
<name sortKey="Manduchi, R" uniqKey="Manduchi R">R. Manduchi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rahimi, M" uniqKey="Rahimi M">M. Rahimi</name>
</author>
<author>
<name sortKey="Baer, R" uniqKey="Baer R">R. Baer</name>
</author>
<author>
<name sortKey="Iroezi, O I" uniqKey="Iroezi O">O. I. Iroezi</name>
</author>
<author>
<name sortKey="Garcia, J C" uniqKey="Garcia J">J. C. Garcia</name>
</author>
<author>
<name sortKey="Warrior, J" uniqKey="Warrior J">J. Warrior</name>
</author>
<author>
<name sortKey="Estrin, D" uniqKey="Estrin D">D. Estrin</name>
</author>
<author>
<name sortKey="Srivastava, M" uniqKey="Srivastava M">M. Srivastava</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Regazzoni, C S" uniqKey="Regazzoni C">C. S. Regazzoni</name>
</author>
<author>
<name sortKey="Tesei, A" uniqKey="Tesei A">A. Tesei</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Foresti, G L" uniqKey="Foresti G">G. L. Foresti</name>
</author>
<author>
<name sortKey="Regazzoni, C S" uniqKey="Regazzoni C">C. S. Regazzoni</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bogaert, M" uniqKey="Bogaert M">M. Bogaert</name>
</author>
<author>
<name sortKey="Chelq, N" uniqKey="Chelq N">N. Chelq</name>
</author>
<author>
<name sortKey="Cornez, P" uniqKey="Cornez P">P. Cornez</name>
</author>
<author>
<name sortKey="Regazzoni, C" uniqKey="Regazzoni C">C. Regazzoni</name>
</author>
<author>
<name sortKey="Teschioni, A" uniqKey="Teschioni A">A. Teschioni</name>
</author>
<author>
<name sortKey="Thonnat, M" uniqKey="Thonnat M">M. Thonnat</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Cucchiara, R" uniqKey="Cucchiara R">R. Cucchiara</name>
</author>
<author>
<name sortKey="Grana, C" uniqKey="Grana C">C. Grana</name>
</author>
<author>
<name sortKey="Piccardi, M" uniqKey="Piccardi M">M. Piccardi</name>
</author>
<author>
<name sortKey="Prati, A" uniqKey="Prati A">A. Prati</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Foresti, G" uniqKey="Foresti G">G. Foresti</name>
</author>
<author>
<name sortKey="Micheloni, C" uniqKey="Micheloni C">C. Micheloni</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Piciarelli, C" uniqKey="Piciarelli C">C. Piciarelli</name>
</author>
<author>
<name sortKey="Micheloni, C" uniqKey="Micheloni C">C. Micheloni</name>
</author>
<author>
<name sortKey="Foresti, G" uniqKey="Foresti G">G. Foresti</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Carincotte, C" uniqKey="Carincotte C">C. Carincotte</name>
</author>
<author>
<name sortKey="Desurmont, X" uniqKey="Desurmont X">X. Desurmont</name>
</author>
<author>
<name sortKey="Ravera, B" uniqKey="Ravera B">B. Ravera</name>
</author>
<author>
<name sortKey="Bremond, F" uniqKey="Bremond F">F. Bremond</name>
</author>
<author>
<name sortKey="Orwell, J" uniqKey="Orwell J">J. Orwell</name>
</author>
<author>
<name sortKey="Velastin, S" uniqKey="Velastin S">S. Velastin</name>
</author>
<author>
<name sortKey="Odobez, J" uniqKey="Odobez J">J. Odobez</name>
</author>
<author>
<name sortKey="Corbucci, B" uniqKey="Corbucci B">B. Corbucci</name>
</author>
<author>
<name sortKey="Palo, J" uniqKey="Palo J">J. Palo</name>
</author>
<author>
<name sortKey="Cernocky, J" uniqKey="Cernocky J">J. Cernocky</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fan, J" uniqKey="Fan J">J. Fan</name>
</author>
<author>
<name sortKey="Wang, R" uniqKey="Wang R">R. Wang</name>
</author>
<author>
<name sortKey="Zhang, L" uniqKey="Zhang L">L. Zhang</name>
</author>
<author>
<name sortKey="Xing, D" uniqKey="Xing D">D. Xing</name>
</author>
<author>
<name sortKey="Gan, F" uniqKey="Gan F">F. Gan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tsai, D" uniqKey="Tsai D">D. Tsai</name>
</author>
<author>
<name sortKey="Lin, C" uniqKey="Lin C">C. Lin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wang, D" uniqKey="Wang D">D. Wang</name>
</author>
<author>
<name sortKey="Feng, T" uniqKey="Feng T">T. Feng</name>
</author>
<author>
<name sortKey="Shum, H" uniqKey="Shum H">H. Shum</name>
</author>
<author>
<name sortKey="Ma, S" uniqKey="Ma S">S. Ma</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stauffer, C" uniqKey="Stauffer C">C. Stauffer</name>
</author>
<author>
<name sortKey="Grimson, W E L" uniqKey="Grimson W">W. E. L. Grimson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tsai, W H" uniqKey="Tsai W">W.-H. Tsai</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rosin, P L" uniqKey="Rosin P">P. L. Rosin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Snidaro, L" uniqKey="Snidaro L">L. Snidaro</name>
</author>
<author>
<name sortKey="Foresti, G" uniqKey="Foresti G">G. Foresti</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Yuille, A" uniqKey="Yuille A">A. Yuille</name>
</author>
<author>
<name sortKey="Vincent, L" uniqKey="Vincent L">L. Vincent</name>
</author>
<author>
<name sortKey="Geiger, D" uniqKey="Geiger D">D. Geiger</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Serra, J" uniqKey="Serra J">J. Serra</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mikolajczyk, K" uniqKey="Mikolajczyk K">K. Mikolajczyk</name>
</author>
<author>
<name sortKey="Schmid, C" uniqKey="Schmid C">C. Schmid</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Micheloni, C" uniqKey="Micheloni C">C. Micheloni</name>
</author>
<author>
<name sortKey="Foresti, G" uniqKey="Foresti G">G. Foresti</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Micheloni, C" uniqKey="Micheloni C">C. Micheloni</name>
</author>
<author>
<name sortKey="Foresti, G" uniqKey="Foresti G">G. Foresti</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Micheloni, C" uniqKey="Micheloni C">C. Micheloni</name>
</author>
<author>
<name sortKey="Foresti, G" uniqKey="Foresti G">G. Foresti</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Crowley, J" uniqKey="Crowley J">J. Crowley</name>
</author>
<author>
<name sortKey="Hall, D" uniqKey="Hall D">D. Hall</name>
</author>
<author>
<name sortKey="Emonet, R" uniqKey="Emonet R">R. Emonet</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Snidaro, L" uniqKey="Snidaro L">L. Snidaro</name>
</author>
<author>
<name sortKey="Niu, R" uniqKey="Niu R">R. Niu</name>
</author>
<author>
<name sortKey="Foresti, G" uniqKey="Foresti G">G. Foresti</name>
</author>
<author>
<name sortKey="Varshney, P" uniqKey="Varshney P">P. Varshney</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Snidaro, L" uniqKey="Snidaro L">L. Snidaro</name>
</author>
<author>
<name sortKey="Foresti, G" uniqKey="Foresti G">G. Foresti</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Avciba, I" uniqKey="Avciba I">I. Avcibaş</name>
</author>
<author>
<name sortKey="Sankur, B" uniqKey="Sankur B">B. Sankur</name>
</author>
<author>
<name sortKey="Sayood, K" uniqKey="Sayood K">K. Sayood</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Collins, R T" uniqKey="Collins R">R. T. Collins</name>
</author>
<author>
<name sortKey="Lipton, A J" uniqKey="Lipton A">A. J. Lipton</name>
</author>
<author>
<name sortKey="Kanade, T" uniqKey="Kanade T">T. Kanade</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Erdem, C E" uniqKey="Erdem C">Ç. E. Erdem</name>
</author>
<author>
<name sortKey="Sankur, B" uniqKey="Sankur B">B. Sankur</name>
</author>
<author>
<name sortKey="Tekalp, A M" uniqKey="Tekalp A">A. M. Tekalp</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Correia, P L" uniqKey="Correia P">P. L. Correia</name>
</author>
<author>
<name sortKey="Pereira, F" uniqKey="Pereira F">F. Pereira</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Collins, R T" uniqKey="Collins R">R. T. Collins</name>
</author>
<author>
<name sortKey="Liu, Y" uniqKey="Liu Y">Y. Liu</name>
</author>
<author>
<name sortKey="Leordeanu, M" uniqKey="Leordeanu M">M. Leordeanu</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nghiem, A" uniqKey="Nghiem A">A. Nghiem</name>
</author>
<author>
<name sortKey="Bremond, F" uniqKey="Bremond F">F. Bremond</name>
</author>
<author>
<name sortKey="Thonnat, M" uniqKey="Thonnat M">M. Thonnat</name>
</author>
<author>
<name sortKey="Ma, R" uniqKey="Ma R">R. Ma</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="review-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">Sensors (Basel)</journal-id>
<journal-id journal-id-type="iso-abbrev">Sensors (Basel)</journal-id>
<journal-title-group>
<journal-title>Sensors (Basel, Switzerland)</journal-title>
</journal-title-group>
<issn pub-type="epub">1424-8220</issn>
<publisher>
<publisher-name>Molecular Diversity Preservation International (MDPI)</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">22574011</article-id>
<article-id pub-id-type="pmc">3348842</article-id>
<article-id pub-id-type="doi">10.3390/s90402252</article-id>
<article-id pub-id-type="publisher-id">sensors-09-02252</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Review</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Visual Sensor Technology for Advanced Surveillance Systems: Historical View, Technological Aspects and Research Activities in Italy</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Foresti</surname>
<given-names>Gian Luca</given-names>
</name>
<xref ref-type="corresp" rid="c1-sensors-09-02252">
<sup></sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Micheloni</surname>
<given-names>Christian</given-names>
</name>
<xref ref-type="author-notes" rid="fn1-sensors-09-02252"></xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Piciarelli</surname>
<given-names>Claudio</given-names>
</name>
<xref ref-type="author-notes" rid="fn1-sensors-09-02252"></xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Snidaro</surname>
<given-names>Lauro</given-names>
</name>
<xref ref-type="author-notes" rid="fn1-sensors-09-02252"></xref>
</contrib>
<aff id="af1-sensors-09-02252">Department of Mathematics and Computer Science University of Udine, via delle Scienze, 206, 33100 Udine, Italy</aff>
</contrib-group>
<author-notes>
<fn id="fn1-sensors-09-02252">
<p>E-mails:
<email>christian.micheloni@dimi.uniud.it</email>
(C.M.),
<email>claudio.piciarelli@dimi.uniud.it</email>
(C.P.),
<email>lauro.snidaro@dimi.uniud.it</email>
(L.S.)</p>
</fn>
<corresp id="c1-sensors-09-02252">
<label></label>
Author to whom correspondence should be addressed; E-Mail:
<email>gianluca.foresti@dimi.uniud.it</email>
</corresp>
</author-notes>
<pub-date pub-type="collection">
<year>2009</year>
</pub-date>
<pub-date pub-type="epub">
<day>30</day>
<month>3</month>
<year>2009</year>
</pub-date>
<volume>9</volume>
<issue>4</issue>
<fpage>2252</fpage>
<lpage>2270</lpage>
<history>
<date date-type="received">
<day>9</day>
<month>1</month>
<year>2009</year>
</date>
<date date-type="rev-recd">
<day>25</day>
<month>3</month>
<year>2009</year>
</date>
<date date-type="accepted">
<day>26</day>
<month>3</month>
<year>2009</year>
</date>
</history>
<permissions>
<copyright-statement>© 2009 by the authors; licensee MDPI, Basel, Switzerland</copyright-statement>
<copyright-year>2009</copyright-year>
<license>
<license-p>
<pmc-comment>CREATIVE COMMONS</pmc-comment>
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (
<ext-link ext-link-type="uri" xlink:href="http://creativecommons.org/licenses/by/3.0/">http://creativecommons.org/licenses/by/3.0/</ext-link>
).</license-p>
</license>
</permissions>
<abstract>
<p>The paper is a survey of the main technological aspects of advanced visual-based surveillance systems. A brief historical view of such systems, from their origins to the present day, is given, together with a short description of the main research projects on surveillance applications carried out in Italy over the last twenty years. The paper then describes the main characteristics of an advanced visual sensor network that (a) directly processes locally acquired digital data, (b) automatically modifies intrinsic (focus, iris) and extrinsic (pan, tilt, zoom) parameters to increase the quality of the acquired data and (c) automatically selects the best subset of sensors in order to monitor a given moving object in the observed environment.</p>
</abstract>
<kwd-group>
<kwd>Advanced visual surveillance</kwd>
<kwd>visual sensor networks</kwd>
<kwd>active vision</kwd>
<kwd>object detection</kwd>
<kwd>tracking</kwd>
<kwd>human behaviour understanding</kwd>
</kwd-group>
</article-meta>
</front>
<body>
<sec>
<label>1.</label>
<title>Introduction</title>
<p>In recent years, there has been growing interest in surveillance applications, due to the increasing availability of cheap visual sensors (e.g., optical and infrared cameras) and processors. In addition, after the events of September 11th, 2001, citizens are demanding much more safety and security in urban environments. These facts, together with the increasing maturity of algorithms and techniques, are making the use of surveillance systems possible in various application domains such as security, transportation and the automotive industry.</p>
<p>The surveillance of remote and often unattended environments (e.g., metro lines and railway platforms, highways, airport waiting rooms or taxiways, nuclear plants, public areas, etc.) is a complex problem requiring the cooperative use of multiple sensors. Surveillance systems have provided several degrees of assistance to operators and have evolved incrementally according to the progress in technology and sensors. Several kinds of sensors are nowadays available for advanced surveillance systems: they range from tactile or pressure sensors (e.g., border surveillance) to chemical sensors (e.g., industrial plant surveillance or counter-terrorism activities) to audio and visual sensors.</p>
<p>For monitoring wide outdoor areas, the most informative and versatile sensors are the visual ones. The information they provide can be used to classify different kinds of objects (e.g., pedestrians, groups of people, motorcycles, cars, vans, lorries, buses, etc.) moving in the observed scene, to understand their behaviours and to detect anomalous events. Useful information (e.g., the classification of the suspicious event, information about the class of detected objects in the scene, b/w or colour blobs of the detected objects, etc.) can be transmitted to a remote operator to augment his or her monitoring capabilities and, if necessary, to support appropriate decisions.</p>
<p>The main objective of this paper is to analyze the technological aspects of advanced visual-based surveillance systems, with particular emphasis on advanced visual sensor networks that (a) directly process locally acquired digital data, (b) automatically modify intrinsic (focus, iris, etc.) and extrinsic (pan, tilt, zoom, etc.) parameters to increase the quality of acquired data and (c) automatically select the best subset of sensors in order to monitor (detect, track and recognize) a given moving object in the observed environment.</p>
<p>A major requirement of automated systems is the ability to self-diagnose when the video data are not usable for analysis purposes. For instance, when video cameras are used in an outdoor application, it is often the case that at certain times of the day sunlight falls directly on the camera lens, a situation that renders the video useless for monitoring purposes. Another example is a weather condition such as heavy snowfall, during which contrast levels make people detection at a distance rather difficult. In these scenarios, it is therefore useful to have a system diagnostic that alerts the end-user to the unavailability of the automated intelligence functions. Ideally, the function that evaluates the unavailability of a given system should estimate whether the input data are such that the system performance can be guaranteed to meet given user-defined specifications. In addition, the system should degrade gracefully in performance as the complexity of the data increases. This is still an open research issue, crucial to the deployment of these systems.</p>
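As an illustration only (not from the paper; the thresholds and the usability criterion are assumptions), a self-diagnostic of this kind could start from a crude per-frame quality check such as the following Python sketch:

```python
# Naive frame-quality check: flag frames that are too dark, blown out,
# or too low-contrast to be worth analyzing. Thresholds are illustrative.
import numpy as np

def frame_is_usable(gray: np.ndarray,
                    min_contrast: float = 20.0,
                    bad_pixel_limit: float = 0.25) -> bool:
    """gray: 2-D uint8 grayscale frame."""
    contrast = float(gray.std())             # global contrast proxy
    saturated = float(np.mean(gray >= 250))  # fraction of blown-out pixels
    too_dark = float(np.mean(gray <= 5))     # fraction of underexposed pixels
    return contrast >= min_contrast and max(saturated, too_dark) < bad_pixel_limit
```

A real diagnostic would of course relate such measurements to the user-defined performance specifications mentioned above, rather than to fixed thresholds.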
<p>The paper is organized as follows. Section 2 presents a brief historical view of visual-based surveillance systems, from their origins to the present day, and a short description of the main research projects on surveillance applications carried out in Italy over the last twenty years. Section 3 describes the main technological aspects of the latest generation of intelligent visual-based surveillance systems.</p>
</sec>
<sec>
<label>2.</label>
<title>Advanced Visual-based Surveillance Systems</title>
<p>Visual surveillance systems were born in the 1960s, when CCTVs providing data of acceptable quality became available on the market. According to the classification proposed by Regazzoni et al. [
<xref ref-type="bibr" rid="b1-sensors-09-02252">1</xref>
], the video surveillance systems proposed in the literature can be classified, from a technological perspective, into three successive generations.</p>
<sec>
<label>2.1.</label>
<title>Evolution of Visual-Based Surveillance Systems</title>
<p>The first video surveillance systems (1960–80) used multiple analog video cameras (sensor level) to monitor indoor or outdoor environments, transmitting and displaying analog visual signals in a remote control room (
<xref ref-type="fig" rid="f1-sensors-09-02252">Figure 1</xref>
). The multiple video signals were presented to the human operator, after analog communication (local processing level), on a large set of monitors. Video streams were normally stored on analog devices (e.g., VHS tapes).</p>
<p>These systems suffered from (a) the limited attention span of the operators, which may result in a significant rate of missed events of interest or alarms [
<xref ref-type="bibr" rid="b2-sensors-09-02252">2</xref>
], (b) high bandwidth requirements that limit the number of sensors that can be used [
<xref ref-type="bibr" rid="b3-sensors-09-02252">3</xref>
] and (c) the large number of tapes to be stored, which turns the off-line archival and retrieval of video frames containing events of interest into a complex operation. During the period 1980–90, the fast development of electronic systems increased the performance of video cameras, personal computers and communication technologies. In particular, advanced video cameras with higher image resolution, low-cost personal computers and more robust, less expensive communication links became available on the market. In this period, second generation surveillance systems (1980–2000) became a reality (
<xref ref-type="fig" rid="f2-sensors-09-02252">Figure 2</xref>
). The main characteristics of these systems were the use of digital video communications and of simple automatic video processing procedures able to help the operator detect simple events of interest. Several important research papers were published in that period, describing results in real-time detection and tracking of moving objects in complex scenes [
<xref ref-type="bibr" rid="b4-sensors-09-02252">4</xref>
], human behaviour understanding [
<xref ref-type="bibr" rid="b5-sensors-09-02252">5</xref>
,
<xref ref-type="bibr" rid="b6-sensors-09-02252">6</xref>
,
<xref ref-type="bibr" rid="b7-sensors-09-02252">7</xref>
,
<xref ref-type="bibr" rid="b8-sensors-09-02252">8</xref>
,
<xref ref-type="bibr" rid="b9-sensors-09-02252">9</xref>
], intelligent man-machine interfaces [
<xref ref-type="bibr" rid="b10-sensors-09-02252">10</xref>
,
<xref ref-type="bibr" rid="b11-sensors-09-02252">11</xref>
], wireless and wired broadband access networks [
<xref ref-type="bibr" rid="b12-sensors-09-02252">12</xref>
], video compression and multimedia transmission for video based surveillance systems [
<xref ref-type="bibr" rid="b13-sensors-09-02252">13</xref>
,
<xref ref-type="bibr" rid="b14-sensors-09-02252">14</xref>
].</p>
<p>The second generation of surveillance systems only partially achieved full digital video signal transmission and processing [
<xref ref-type="bibr" rid="b13-sensors-09-02252">13</xref>
,
<xref ref-type="bibr" rid="b15-sensors-09-02252">15</xref>
,
<xref ref-type="bibr" rid="b16-sensors-09-02252">16</xref>
,
<xref ref-type="bibr" rid="b17-sensors-09-02252">17</xref>
,
<xref ref-type="bibr" rid="b18-sensors-09-02252">18</xref>
,
<xref ref-type="bibr" rid="b19-sensors-09-02252">19</xref>
,
<xref ref-type="bibr" rid="b20-sensors-09-02252">20</xref>
,
<xref ref-type="bibr" rid="b21-sensors-09-02252">21</xref>
,
<xref ref-type="bibr" rid="b22-sensors-09-02252">22</xref>
,
<xref ref-type="bibr" rid="b23-sensors-09-02252">23</xref>
]. Only a few system subparts used digital methods to solve communication and processing problems. In the first years of the third millennium, studies began on a “full digital” design of video-based surveillance systems, ranging from the sensor level up to the presentation of adequate visual information to the operators (
<xref ref-type="fig" rid="f3-sensors-09-02252">Figure 3</xref>
). In this new architecture model, advanced video cameras constitute the sensor layer, while different advanced transmission devices using digital compression form the local processing layer. An intelligent hub able to integrate data coming from multiple low-level layers constitutes the main component of the network layer, where all communications are in digital form. Finally, an advanced Man-Machine Interface (MMI) assists the operator by focusing his attention on a subset of interesting events and possible pre-alarms.</p>
<p>Research activity in the field of advanced visual-based surveillance systems is today mainly focused in three directions: (a) the design and development of new embedded digital video sensors and advanced video processing/understanding algorithms, (b) the design and development of new sensor selection and data fusion algorithms, and (c) the design and development of new high-bandwidth access networks.</p>
<p>The design of new embedded digital video sensors is moving in the direction of directly processing locally acquired digital data at the sensor level [
<xref ref-type="bibr" rid="b24-sensors-09-02252">24</xref>
]. Hundreds of video sensors are organized in wireless networks [
<xref ref-type="bibr" rid="b25-sensors-09-02252">25</xref>
,
<xref ref-type="bibr" rid="b26-sensors-09-02252">26</xref>
] that must collect data in real time and send to the control centre only the video streams of interesting events. Due to the relatively high power consumption of cameras, adequate procedures must be developed for energy-aware resource management [
<xref ref-type="bibr" rid="b27-sensors-09-02252">27</xref>
]. Recently, power-hungry camera nodes integrating a CMOS camera and a micro-controller have been studied, but their image resolution, memory and computational power are not yet fully adequate for the visual tasks [
<xref ref-type="bibr" rid="b28-sensors-09-02252">28</xref>
]. The design and development of new sensor selection and data fusion algorithms make it possible to choose, for a given event, the best set of sensors from a large sensor network. Moreover, high-bandwidth access networks make it possible to integrate both homogeneous and heterogeneous data coming from sensors spatially distributed over wide areas. To reduce bandwidth requirements, the amount of visual data to be transmitted can be reduced by encoding methods that exploit spatio-temporal information (e.g., MPEG, M-JPEG, H.263) or that are applied separately to each video frame (e.g., JPEG). However, this operation requires more processing, thus increasing the net energy consumption.</p>
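As a small illustration of this trade-off (our sketch, not the paper's implementation; OpenCV is assumed to be available), per-frame JPEG encoding already shows how processing is spent to save bandwidth:

```python
# Encode one frame as JPEG and compare raw vs. transmitted size.
import cv2
import numpy as np

frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)  # stand-in frame

ok, buf = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, 75])
assert ok
print(f"raw: {frame.nbytes} bytes, encoded: {buf.size} bytes")
# buf.tobytes() is what a sensor node would transmit to the control centre,
# ideally only when an interesting event has been detected.
```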
</sec>
<sec>
<label>2.2.</label>
<title>Visual-Based Surveillance Systems in Italy</title>
<p>Starting from 1990, interesting second generation video-based surveillance systems have been studied in the context of different international (e.g., the VSAM Program, USA) and European research programs (e.g., the ESPRIT Program, European Union). Some Italian industries and universities have participated in these research programs and have produced prototypical demonstrators installed in real environments. The first international project on visual-based surveillance systems with the participation of Italian partners (IRST and the University of Genoa) was the EEC-ESPRIT II P5345 DIMUS (Data Integration in Multisensor Systems) project, in 1990–92. The aim of the project was the development of a multisensor surveillance system for the remote monitoring of metro line stations. Video cameras and acoustic sensors were integrated into a common framework able to help the operator of a remote control centre detect interesting events (e.g., people beyond the yellow line on the platform, a crowded platform when the train is arriving, gunshots, etc.). A demonstrator was installed in the Genoa-DiNegro metro line station [
<xref ref-type="bibr" rid="b29-sensors-09-02252">29</xref>
]. Another interesting international project on visual-based surveillance systems with the participation of Italian partners (Technopolis Csata, Assolari Nuove Tecnologie, University of Genoa) was the CEC-ESPRIT 6068 ATHENA (Advanced Teleoperation for Earthwork Equipment) project, developed in 1994–1998. The aim of the project was the automation of earthwork operations within waste disposal sites. The objective was to operate an unmanned vehicle with autonomous capabilities, and to equip the control station with advanced teleoperation capabilities. In particular, the visual surveillance system was in charge of the following tasks: (a) intruder detection in the sanitary landfill (monitored area) using image processing techniques applied to b/w images acquired by multiple video cameras, (b) real-time tracking of detected intruders inside the monitored area and (c) collision avoidance.
<xref ref-type="fig" rid="f4-sensors-09-02252">Figure 4</xref>
shows a frame of the MMI of the ATHENA visual surveillance system.</p>
<p>An exhaustive description of the visual-based surveillance system developed in the ATHENA project can be found in [
<xref ref-type="bibr" rid="b30-sensors-09-02252">30</xref>
].</p>
<p>Italian participation in international projects on visual surveillance can also be noted in the ESPRIT Project 2152 VIEWS (
<italic>Visual Inspection and Evaluation of Wide-area Scenes</italic>
), in the ESPRIT Project 8483 PASSWORDS (
<italic>Parallel and real-time Advanced Surveillance System With Operator assistance for Revealing Dangerous Situations</italic>
), in the IST-1999-11287 project ADVISOR (
<italic>Annotated Digital Video for Surveillance and Optimised Retrieval</italic>
), in the IST-1999-10808 project VISOR BASE (
<italic>VIdeo Sensor Object Request Broker open Architecture for Distributed Services</italic>
) and in the IST-2007-045547 project VIDIVideo (
<italic>Interactive semantic video search with a large thesaurus of machine-learned audio-visual concepts</italic>
).</p>
<p>The VIEWS project was a reliable application of real-time monitoring and surveillance of outdoor dynamic scenes in constrained situations. The project had two driving applications: ground traffic surveillance of civil airports and traffic surveillance of public roads. Its main goal was the detection, tracking and classification of static or moving objects in the observed scene. The goal of the PASSWORDS project was to design and develop an innovative prototype of a real-time image analysis system for visual surveillance and security applications, based on low-cost hardware and distributed software. The main functional objectives consisted of: (a) detection of motion in specific areas; (b) detection of stopped objects; (c) crowd (people, cars, etc.) flow measurement and quantitative estimation; (d) crowd shape-and-structure analysis (e.g., behaviour of persons or groups, etc.). Two pilot applications were used to demonstrate the functionalities of the system: the surveillance of an outdoor light metro network and the surveillance of a supermarket and its surroundings [
<xref ref-type="bibr" rid="b31-sensors-09-02252">31</xref>
]. The ADVISOR project addressed the environmental and economic pressures for increased use of public transport. Metro operators need to develop efficient management systems for their networks, using CCTV for computer-assisted automatic incident detection, content-based annotation of video recordings, behaviour pattern analysis of crowds and individuals, and ergonomic human-computer interfaces. The VISOR BASE project aimed at creating a CORBA adaptation of the requirements of digital video monitoring systems, applied to the development of applications with artificial vision functionality. Examples of VISOR-compliant video processing modules are: a motion detector, a people counter, a number plate reader, a face recognizer, and a people tracker. During the period 1990–2000, some interesting second generation video-based surveillance systems were developed within the context of Italian projects. In the Progetto Finalizzato Trasporti II (PFT2) (1993–1996), the University of Genoa developed a vision system for railway level crossing surveillance, supporting a human operator in a remote control room. The system was able to focus the attention of the operator on significant scenes provided by a b/w camera. The surveillance system prototype installed at the Genoa-Rivarolo railway level crossing had the following functionalities: (a) information acquisition from the environment using various sensors, (b) information processing to identify dangerous situations in the monitored area (the level crossing), (c) presentation of alarm situations to the human operator in the control room. In particular, some innovative system modules were implemented: (a) a change-detection module, which identifies changed areas in the current frame with respect to a reference image (the background), (b) a localization module, which identifies and localizes on a 2D map all possible objects (e.g., cars, motorcycles, persons) in the monitored area, and (c) an interpretation module, which generates an alarm signal when anomalous situations are detected.
<xref ref-type="fig" rid="f5-sensors-09-02252">Figure 5</xref>
shows the man-machine interface of the PFT2 visual surveillance system.</p>
<p>In the last decade, some other video-surveillance systems have been developed in Italy for guarding remote environments in order to detect and prevent dangerous situations. Among the research activities financed by the Italian Ministry of University and Scientific Research (MURST), several projects are worth mentioning. The Sakbot (Statistical And Knowledge-Based Object deTector) project [
<xref ref-type="bibr" rid="b32-sensors-09-02252">32</xref>
], developed in 2003 by the ImageLab group at the University of Modena and Reggio Emilia, applies computer vision techniques for outdoor (traffic control) and indoor surveillance. Object detection is performed by background suppression and the background is updated statistically. Shadows are detected and removed to increase the accuracy of the object detection process. The
<italic>PER</italic>
<sup>2</sup>
project, developed in 2003 by a group of four Italian Universities (Genoa, Cagliari, Udine and the Polytechnic of Milan), realized a distributed system for multisensor recognition with augmented perception for ambient security and customization [
<xref ref-type="bibr" rid="b33-sensors-09-02252">33</xref>
]. The aim of this project was to explore innovative methodologies for equipping advanced multi-sensor surveillance systems with features such as increased perception and customized communications, which are indispensable for increasing ambient and user security. The augmented degree of perception in the system, achieved by efficient processing of multi-sensor data, proved indispensable for detecting dangerous situations in complex scenes in real time. Moreover, the augmented degree of interaction between the system and the user, obtained through customized information transmission, also made it possible to manage situations of real danger in highly dynamic contexts. In this project, the adoption of heterogeneous sensors (see
<xref ref-type="fig" rid="f6-sensors-09-02252">Figure 6</xref>
) made it possible to address a wider range of situations, thanks to the active control of the fields of view of the surveillance network.</p>
<p>The studies in the field of active vision, started within the
<italic>PER</italic>
<sup>2</sup>
project, continued in a follow-up project for the development of Ambient Intelligence techniques for event analysis, sensor reconfiguration and multimodal interfaces. In the period 2007–09, four Italian Universities (Udine, Padova, Pavia and Rome “La Sapienza”) contributed to the development of the project “Ambient Intelligence: Event Analysis, Sensor Reconfiguration and Multimodal Interfaces” [
<xref ref-type="bibr" rid="b34-sensors-09-02252">34</xref>
]. The aim was the study and development of new algorithms and techniques for the design of a network of heterogeneous sensors for automatic monitoring of public environments. The main architecture, presented in
<xref ref-type="fig" rid="f7-sensors-09-02252">Figure 7</xref>
, manages data coming from sensors and alarms from selected scenarios of interest, as well as alerting operators for ground checks (collecting live data) via mobile devices carried by guards. For instance, the automatic detection of undesired human behaviour can trigger reconfiguration of the sensor network (to improve further detection or recognition) or request that a guard check the scene and record a high-quality image of the subject’s face. In the period 2006–08, the Rome Public Transportation Company (ATAC) participated in the CARETAKER (Content Analysis and Retrieval Technologies to Apply Knowledge Extraction to massive Recording) [
<xref ref-type="bibr" rid="b35-sensors-09-02252">35</xref>
] IST European project. The project developed and assessed multimedia knowledge-based content analysis for automatic situation awareness, diagnosis and decision support in the context of a metro-line environment. Recent advances in Italian research on visual-based surveillance systems can be found in the Proceedings of the First Workshop on VIdeoSurveillance projects in ITaly (VISIT 2008).</p>
</sec>
</sec>
<sec>
<label>3.</label>
<title>Intelligent Visual Sensors</title>
<sec sec-type="methods">
<label>3.1.</label>
<title>Visual Data Processing at Sensor Level</title>
<p>The processing of image sequences acquired by video sensors can be structured into several abstraction layers, ranging from low-level processing routines, in which each image is treated as a group of pixels from which basic features must be extracted (e.g., image edges, moving objects), up to the highest abstraction level, in which semantic labels are associated with images and parts of images in order to give a meaningful description of the actions, events and behaviours detected in the monitored scene. Even though the lowest processing level has been widely studied since the beginning of computer vision research, it is still affected by many open problems; indeed, it is a common belief that the major limitation for high-level techniques is the lack of proper low-level algorithms for robust feature extraction. One of the most common low-level problems is the detection of moving objects within the scene observed by the sensor, a problem often referred to as change detection, motion detection or background/foreground segmentation. The basic idea is to compare the current frame with the previous ones in order to detect changes, but several problems must be faced, e.g.:
<list list-type="bullet">
<list-item>
<p>camouflage effects are caused by moving objects similar in appearance to the background (changes in the scene do not imply changes in the image, e.g.,
<xref ref-type="fig" rid="f8-sensors-09-02252">Figure 8 (a) and (b)</xref>
)</p>
</list-item>
<list-item>
<p>light changes can lead to changes in the images that are not associated with real foreground objects (changes in the image do not imply changes in the scene, e.g.,
<xref ref-type="fig" rid="f8-sensors-09-02252">Figure 8 (c) and (d)</xref>
)</p>
</list-item>
<list-item>
<p>foreground aperture is a problem affecting the detection of moving objects with uniform appearance, so that motion can be detected only on the borders of the object (e.g.,
<xref ref-type="fig" rid="f8-sensors-09-02252">Figure 8 (e) and (f)</xref>
)</p>
</list-item>
<list-item>
<p>ghosting refers to the detection of false objects due to motion of elements initially considered as a part of the background (e.g.,
<xref ref-type="fig" rid="f8-sensors-09-02252">Figure 8 (g) and (h)</xref>
)</p>
</list-item>
</list>
</p>
<p>Change detection algorithms can be roughly classified into two main categories, depending on which elements are compared in order to detect changes:
<list list-type="bullet">
<list-item>
<p>frame-by-frame algorithms</p>
</list-item>
<list-item>
<p>frame-background algorithms (with reference background image or with background models).</p>
</list-item>
</list>
</p>
<p>In the first case, moving objects are detected by searching for changes between two or more adjacent frames of the video sequence [
<xref ref-type="bibr" rid="b36-sensors-09-02252">36</xref>
]: if
<italic>F
<sub>t</sub>
</italic>
(
<italic>x, y</italic>
) is a frame at time
<italic>t</italic>
, the change detection image
<italic>D</italic>
(
<italic>x, y</italic>
) is defined as
<disp-formula id="FD1">
<label>(1)</label>
<mml:math id="M1">
<mml:mrow>
<mml:mi>D</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>y</mml:mi>
<mml:mo stretchy="false">)</mml:mo>
<mml:mo>=</mml:mo>
<mml:mi></mml:mi>
<mml:mo stretchy="false">|</mml:mo>
<mml:msub>
<mml:mi>F</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>y</mml:mi>
<mml:mo stretchy="false">)</mml:mo>
<mml:mo></mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>F</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo></mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>y</mml:mi>
<mml:mo stretchy="false">)</mml:mo>
<mml:mo stretchy="false">|</mml:mo>
</mml:mrow>
</mml:math>
</disp-formula>
</p>
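A minimal NumPy rendering of Equation (1), assuming 8-bit grayscale frames, might look as follows:

```python
import numpy as np

def frame_difference(frame_t: np.ndarray, frame_t_minus_1: np.ndarray) -> np.ndarray:
    """Equation (1): D(x, y) = |F_t(x, y) - F_{t-1}(x, y)|."""
    # Cast to a signed type first so the subtraction cannot wrap around
    # in unsigned 8-bit arithmetic.
    diff = frame_t.astype(np.int16) - frame_t_minus_1.astype(np.int16)
    return np.abs(diff).astype(np.uint8)
```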
<p>This technique is typically robust to ghosting effects, but it is generally affected by foreground aperture problems as in
<xref ref-type="fig" rid="f8-sensors-09-02252">Figure 8(f)</xref>
, since two frames both containing the moving object are compared. Frame-background algorithms instead rely on a model representing the background scene without any moving object, and each frame is compared to the model. Background models can be simple images or more complex models containing, for example, statistical information on the temporal evolution of each background pixel. When using background images, let
<italic>B
<sub>t</sub>
</italic>
(
<italic>x, y</italic>
) be a background frame at time
<italic>t</italic>
, objects can be detected by image difference:
<disp-formula id="FD2">
<label>(2)</label>
<mml:math id="M2">
<mml:mrow>
<mml:mi>D</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>y</mml:mi>
<mml:mo stretchy="false">)</mml:mo>
<mml:mo>=</mml:mo>
<mml:mi></mml:mi>
<mml:mo stretchy="false">|</mml:mo>
<mml:msub>
<mml:mi>F</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>y</mml:mi>
<mml:mo stretchy="false">)</mml:mo>
<mml:mo></mml:mo>
<mml:msub>
<mml:mi>B</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>y</mml:mi>
<mml:mo stretchy="false">)</mml:mo>
<mml:mo stretchy="false">|</mml:mo>
</mml:mrow>
</mml:math>
</disp-formula>
or by more complex image comparison techniques, such as Normalized Cross-Correlation [
<xref ref-type="bibr" rid="b37-sensors-09-02252">37</xref>
]. The background image also needs to be constantly updated in order to reflect small changes in the background appearance, for example due to slow light changes in outdoor environments. A typical approach is to apply a running average with exponential forgetting to each pixel value; this is a mean of the measured pixel values that gives more weight to the more recent measurements [
<xref ref-type="bibr" rid="b38-sensors-09-02252">38</xref>
]. More complex background models can also be used; this is the case of the popular mixture-of-Gaussians background model proposed by Stauffer and Grimson [
<xref ref-type="bibr" rid="b39-sensors-09-02252">39</xref>
], in which each pixel of the scene is represented by a mixture of several Gaussians, in order to give a proper statistical model for those pixels with multimodal appearance (e.g., flickering screens, waving leaves, etc.).</p>
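The following sketch combines Equation (2) with the running-average update described above; the class name and the value of alpha are illustrative choices, not taken from the paper:

```python
import numpy as np

class RunningAverageBackground:
    """Background B updated with exponential forgetting:
    B_{t+1} = alpha * F_t + (1 - alpha) * B_t."""

    def __init__(self, first_frame: np.ndarray, alpha: float = 0.05):
        self.bg = first_frame.astype(np.float32)
        self.alpha = alpha  # larger alpha forgets the past faster

    def detect_and_update(self, frame: np.ndarray) -> np.ndarray:
        """Equation (2): return D = |F_t - B_t|, then blend F_t into B."""
        f = frame.astype(np.float32)
        diff = np.abs(f - self.bg).astype(np.uint8)
        self.bg = self.alpha * f + (1.0 - self.alpha) * self.bg
        return diff
```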
<p>Moreover, as the output image
<italic>D</italic>
(
<italic>x, y</italic>
) of the change detection techniques is a gray-level difference image, thresholding algorithms (e.g., the techniques proposed by Tsai [
<xref ref-type="bibr" rid="b40-sensors-09-02252">40</xref>
], Rosin [
<xref ref-type="bibr" rid="b41-sensors-09-02252">41</xref>
] and Snidaro [
<xref ref-type="bibr" rid="b42-sensors-09-02252">42</xref>
]) must be used to obtain a binary foreground/background image, where changed pixels are set to 1 and background pixels are set to 0. In practice, isolated points represent noise, while compact regions (blobs) of changed pixels represent possible moving objects in the scene. Noise, artificial illumination, reflections and other effects can create a non-uniform difference image, in which a single threshold cannot locate all object pixels. In order to reduce the noise and to obtain uniform, compact regions, blob images can be filtered using mathematical morphology operators such as erosion and dilation [
<xref ref-type="bibr" rid="b43-sensors-09-02252">43</xref>
]. Mathematical morphology describes images as sets and image processing operators as transformations among sets [
<xref ref-type="bibr" rid="b44-sensors-09-02252">44</xref>
].
<xref ref-type="fig" rid="f9-sensors-09-02252">Figure 9(a)</xref>
shows the output of the change detection operation performed on the input and background images of
<xref ref-type="fig" rid="f5-sensors-09-02252">Figure 5</xref>
.
<xref ref-type="fig" rid="f9-sensors-09-02252">Figure 9(b)</xref>
shows the output of the morphological operation.</p>
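A minimal sketch of this thresholding-plus-morphology step, using a fixed threshold in place of the adaptive techniques of [40–42] and SciPy's binary morphology operators:

```python
import numpy as np
from scipy import ndimage

def difference_to_blobs(diff: np.ndarray, thresh: int = 30) -> np.ndarray:
    """diff: gray-level difference image D(x, y). Returns a cleaned binary mask."""
    binary = diff > thresh  # changed pixels -> 1, background -> 0
    # Opening (erosion then dilation) removes isolated noise points;
    # closing fills small holes so blobs become compact regions.
    kernel = np.ones((3, 3), dtype=bool)
    opened = ndimage.binary_opening(binary, structure=kernel)
    closed = ndimage.binary_closing(opened, structure=kernel)
    return closed.astype(np.uint8)
```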
<p>Detected blobs can be further analyzed in order to assign them to predefined object categories. Powerful local features (e.g., SIFT) computed for each blob have proven very successful in object classification, since they are distinctive, invariant to image transformations and robust to occlusions. An exhaustive comparison among different descriptors, interest regions and matching approaches can be found in [
<xref ref-type="bibr" rid="b45-sensors-09-02252">45</xref>
].</p>
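For illustration, the sketch below extracts SIFT descriptors from a detected blob so that it can be matched against class templates; the bounding-box interface is a hypothetical convenience, and OpenCV >= 4.4 is assumed:

```python
import cv2

def blob_descriptors(gray_frame, blob_bbox):
    """blob_bbox: hypothetical (x, y, w, h) box around a detected blob."""
    x, y, w, h = blob_bbox
    patch = gray_frame[y:y + h, x:x + w]
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(patch, None)
    return keypoints, descriptors  # descriptors: N x 128 float array
```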
</sec>
<sec>
<label>3.2.</label>
<title>Automatic Camera Parameter Regulation</title>
<p>Video sensors typically consider only a restricted area around the centre of the image when computing the optimal focus or iris position. Moreover, in the context of visual surveillance applications, the objective is usually to improve the quality as perceived by a human operator. This does not necessarily mean that such quality is optimal for image processing techniques. The method developed in [
<xref ref-type="bibr" rid="b46-sensors-09-02252">46</xref>
] adaptively regulates the acquisition parameters (i.e., focus and iris) by applying quality operators to the object of interest. The control strategy is based on a hierarchy of neural networks trained on suitable quality functions and on the camera parameters (see
<xref ref-type="fig" rid="f10-sensors-09-02252">Figure 10</xref>
).</p>
<p>Such a solution makes it possible to drive the regulation of the image acquisition parameters on the basis of the target quality. It is interesting to notice that two different hierarchies are involved, depending on the desired task. If the object of interest is out of focus, the hierarchy responsible for focus tuning is activated. Depending on the degree of defocusing, the system requires only four steps (for severely out-of-focus objects) or two steps (for slightly out-of-focus objects) to bring the object inside an optimal focus range.</p>
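<p>The iterative structure of such a focus loop can be pictured as follows. This sketch replaces the neural hierarchy of [46] with a simple sharpness score (variance of the Laplacian) and a hypothetical camera interface (grab_roi, get_focus, set_focus), so it conveys only the stepwise regulation idea, not the published method:</p>
<preformat>
import cv2

def sharpness(gray_roi):
    """Sharpness proxy: variance of the Laplacian (higher = better focused)."""
    return cv2.Laplacian(gray_roi, cv2.CV_64F).var()

def regulate_focus(camera, roi, steps=4, delta=10):
    """Nudge the focus motor toward higher sharpness of the target region.
    Four steps suffice for badly defocused targets, two for mild cases."""
    best = sharpness(camera.grab_roi(roi))
    for _ in range(steps):
        for direction in (+delta, -delta):
            camera.set_focus(camera.get_focus() + direction)
            score = sharpness(camera.grab_roi(roi))
            if score > best:
                best = score
                break  # keep moving in this direction
            camera.set_focus(camera.get_focus() - direction)  # undo the move
    return best
</preformat>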
<p>In
<xref ref-type="fig" rid="f11-sensors-09-02252">Figure 11</xref>
, some results of the strategy proposed in [
<xref ref-type="bibr" rid="b46-sensors-09-02252">46</xref>
] for the focus regulation are presented. Starting from a frame in which the object of interest is severely out of focus, the four-step strategy brings it into focus in only four steps. Such a result is even more interesting when applied to tracked features [
<xref ref-type="bibr" rid="b47-sensors-09-02252">47</xref>
] for egomotion compensation [
<xref ref-type="bibr" rid="b48-sensors-09-02252">48</xref>
]. On the other hand, when the strategy requires adjusting the brightness of the target (
<xref ref-type="fig" rid="f12-sensors-09-02252">Figure 12</xref>
), the proposed solution first decides whether the iris must be opened or closed, and then the amount of adjustment in the selected direction. This process has no natural termination point: an optimal brightness value cannot be determined, and it is only possible to check whether the new value is better than the previous one. For this reason the strategy allows just one regulation step, after which the system can decide whether another regulation is necessary or whether it is better to adjust the focus.</p>
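<p>A minimal sketch of this single-step iris policy, again written against a hypothetical camera interface (grab_gray, open_iris, close_iris) and with an assumed acceptable brightness range, could be:</p>
<preformat>
def regulate_iris_once(camera, target_mask, low=90, high=160):
    """One regulation step: choose the direction (open/close) and the amount
    from the target brightness, then let the system re-evaluate before
    deciding on any further iris or focus regulation."""
    frame = camera.grab_gray()                     # gray-level frame (NumPy array)
    brightness = float(frame[target_mask].mean())  # brightness of the object only
    if low > brightness:
        camera.open_iris(amount=low - brightness)
    elif brightness > high:
        camera.close_iris(amount=brightness - high)
    # otherwise the target brightness is acceptable; focus may be adjusted instead
</preformat>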
<p>In [
<xref ref-type="bibr" rid="b49-sensors-09-02252">49</xref>
], the problem of variations in operational conditions has been analyzed. Coping with such variations normally requires long set-up operations and frequent intervention by specialized personnel. An autonomic computing system has therefore been developed to reduce installation costs by regulating internal parameters and by introducing self-configuration and self-repair of vision systems.</p>
</sec>
<sec>
<label>3.3.</label>
<title>Sensor Selection</title>
<p>The sensor selection problem is well known in the wireless sensor network domain. When a large number of devices is deployed in vast environments, those devices are generally very inexpensive and have limited battery power. The sensor selection task consists in choosing the subset of sensors that optimizes the trade-off between the utility of the data transmitted while observing a given phenomenon and the cost (e.g., battery power consumed). In this section we concentrate on video sensors and ignore constraints such as power consumption or bandwidth occupancy, considering only the utility of the data transmitted, not its cost. This means defining a way to measure the performance of a camera at a certain task. For example, if multiple sensors are monitoring the same scene, redundant data (i.e., positions of the objects) can be fused to improve tracking [
<xref ref-type="bibr" rid="b50-sensors-09-02252">50</xref>
]. The fusion process necessarily has to take into account a quality factor for each sensor, so that the fused result is not swayed by unreliable data. Evaluating the performance of a video sensor can be a difficult task, though: it depends on the application and on the type of information that needs to be extracted from the sensors [
<xref ref-type="bibr" rid="b51-sensors-09-02252">51</xref>
]. Until recently, most of the work has concentrated on metrics used to estimate the quality of source images corrupted by noise or compression when the flawless original image is available [
<xref ref-type="bibr" rid="b50-sensors-09-02252">50</xref>
,
<xref ref-type="bibr" rid="b52-sensors-09-02252">52</xref>
]. When dealing specifically with surveillance applications, the evaluation of the results obtained after the source images have been processed (e.g., to perform change detection [
<xref ref-type="bibr" rid="b1-sensors-09-02252">1</xref>
,
<xref ref-type="bibr" rid="b53-sensors-09-02252">53</xref>
]) is an important step in assessing the performance of a sensor. Segmentation algorithms can be tested off-line on sequences for which a reference segmentation is available [
<xref ref-type="bibr" rid="b54-sensors-09-02252">54</xref>
]. However, for on-line systems, since no reference segmentation is available at run-time, the quality of the results must be estimated in absolute terms. Only recently has the problem of estimating segmentation quality without ground truth been addressed [
<xref ref-type="bibr" rid="b50-sensors-09-02252">50</xref>
,
<xref ref-type="bibr" rid="b54-sensors-09-02252">54</xref>
,
<xref ref-type="bibr" rid="b55-sensors-09-02252">55</xref>
]. Since no reference segmentation is used, the measures rely on several comparisons between the pixels of the detected object and those of the background [
<xref ref-type="bibr" rid="b50-sensors-09-02252">50</xref>
]. Other techniques may involve comparing the color histogram of the object, or its motion vectors, between two consecutive frames [
<xref ref-type="bibr" rid="b55-sensors-09-02252">55</xref>
]. The evaluation of the segmentation can be performed globally on the entire scene or individually for each detected object [
<xref ref-type="bibr" rid="b55-sensors-09-02252">55</xref>
]. The former computes a global quality index for all the blobs detected in the scene. The latter expresses a quality figure for each one of them.</p>
<p>In
<xref ref-type="fig" rid="f13-sensors-09-02252">Figure 13</xref>
, an individual segmentation quality was computed according to the metric used in [
<xref ref-type="bibr" rid="b50-sensors-09-02252">50</xref>
], which is based on the frame-background difference. The quality is expressed as an index ranging from 0 (worst) to 1 (best). In the figure, two sensors are observing the same scene from different view angles. The images produced by the second sensor (b) have more contrast, and the walking man can be better discriminated from the background than with the first sensor (a). This condition is reflected by the quality indexes shown below the bounding boxes. This quality measure was used in [
<xref ref-type="bibr" rid="b50-sensors-09-02252">50</xref>
] to assess the uncertainty related to the detected object, and therefore the performance of the sensor in detecting it. This information was exploited in the fusion process to obtain robust position estimation and tracking of the target. A feature selection mechanism such as the one presented in [
<xref ref-type="bibr" rid="b56-sensors-09-02252">56</xref>
] can also be used to estimate the performance of the sensor in detecting a given target. The approach selects the most discriminative features for separating the target from the background by applying a two-class variance ratio to log-likelihood distributions computed from samples of object and background pixels.</p>
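<p>The core of that criterion can be written compactly. The sketch below follows the two-class variance-ratio formulation of [56], applied to histograms of object and background pixel values; the smoothing constant is an assumption:</p>
<preformat>
import numpy as np

def variance_ratio(obj_hist, bg_hist, eps=1e-3):
    """Two-class variance ratio of the log-likelihood, as in [56].
    High values mean the feature separates object from background well."""
    p = obj_hist / obj_hist.sum()      # object class distribution
    q = bg_hist / bg_hist.sum()        # background class distribution
    L = np.log((p + eps) / (q + eps))  # per-bin log-likelihood
    def var(a):                        # variance of L under distribution a
        return (a * L * L).sum() - ((a * L).sum()) ** 2
    return var(0.5 * (p + q)) / (var(p) + var(q) + eps)
</preformat>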
</sec>
<sec>
<label>3.4.</label>
<title>Performance Evaluation</title>
<p>To complete the analysis of visual sensor technology for advanced surveillance systems, it is necessary to briefly describe evaluation methods that can be used to measure video processing performance. Standard evaluation methods depend heavily on the test video sequences, which can contain different video processing problems such as difficult environmental conditions (e.g., fog, heavy rain, snow), illumination changes, occlusions, etc. In [
<xref ref-type="bibr" rid="b57-sensors-09-02252">57</xref>
], a new evaluation methodology has been presented that isolates each video processing problem and defines quantitative measures of the difficulty of processing a given video. Specific metrics have also been proposed to evaluate algorithm performance with respect to the problems of handling weakly contrasted objects and shadows.</p>
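<p>Without reproducing the measures of [57], one plausible, much simplified indicator of the weak-contrast problem is the normalized gray-level separation between an object and its background; values close to 1 mark sequences that are hard for change detection. The helper below is purely illustrative:</p>
<preformat>
import numpy as np

def contrast_difficulty(frame_gray, object_mask):
    """Simplified difficulty score for weakly contrasted objects:
    1 minus the normalized distance between the mean gray level of the
    object and that of the rest of the frame (not the metric of [57])."""
    obj = float(np.mean(frame_gray[object_mask]))
    bg = float(np.mean(frame_gray[~object_mask]))
    return 1.0 - abs(obj - bg) / 255.0  # close to 1 = weak contrast = hard
</preformat>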
</sec>
</sec>
<sec sec-type="conclusions">
<label>4.</label>
<title>Conclusions</title>
<p>In this paper, a brief historical view of visual-based surveillance systems from their origins to the present day has been given, together with a short description of the main research projects on surveillance applications carried out in Italy over the last twenty years. The principal technological aspects of advanced visual-based surveillance systems have been analyzed, with particular emphasis on advanced visual sensor networks and the main activities in this research field. In addition, recent trends in visual sensor technology and recent processing techniques able to increase the quality of acquired data have been described. In particular, advanced visual-based procedures able to automatically modify intrinsic and extrinsic camera parameters and to automatically select the best subset of sensors to monitor a given object moving in the observed environment have been presented.</p>
</sec>
</body>
<back>
<ack>
<p>This work was supported in part by the Italian Ministry of University and Scientific Research (PRIN06) within the project “Ambient Intelligence: Event Analysis, Sensor Reconfiguration and Multimodal Interfaces” (2007–2008).</p>
</ack>
<ref-list>
<title>References and Notes</title>
<ref id="b1-sensors-09-02252">
<label>1.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Regazzoni</surname>
<given-names>C. S.</given-names>
</name>
<name>
<surname>Visvanathan</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Foresti</surname>
<given-names>G. L.</given-names>
</name>
</person-group>
<article-title>Scanning the issue / technology - Special Issue on Video Communications, processing and understanding for third generation surveillance systems</article-title>
<source>Proceedings of the IEEE</source>
<month>10</month>
<year>2001</year>
<volume>89</volume>
<fpage>1355</fpage>
<lpage>1367</lpage>
</element-citation>
</ref>
<ref id="b2-sensors-09-02252">
<label>2.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Donald</surname>
<given-names>C. H. M.</given-names>
</name>
</person-group>
<article-title>Assessing the human vigilance capacity of control room operators</article-title>
<conf-name>Proceedings of the International Conference on Human Interfaces in Control Rooms</conf-name>
<conf-loc>Cockpits and Command Centres</conf-loc>
<conf-date>1999</conf-date>
<fpage>7</fpage>
<lpage>11</lpage>
</element-citation>
</ref>
<ref id="b3-sensors-09-02252">
<label>3.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Pahlavan</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Levesque</surname>
<given-names>A. H.</given-names>
</name>
</person-group>
<article-title>Wireless data communications</article-title>
<source>Proceedings of the IEEE</source>
<year>1994</year>
<volume>82</volume>
<fpage>1398</fpage>
<lpage>1430</lpage>
</element-citation>
</ref>
<ref id="b4-sensors-09-02252">
<label>4.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Yilmaz</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Javed</surname>
<given-names>O.</given-names>
</name>
<name>
<surname>Shah</surname>
<given-names>M.</given-names>
</name>
</person-group>
<article-title>Object tracking: A survey</article-title>
<source>ACM Computing Surveys</source>
<year>2006</year>
<volume>38</volume>
<fpage>1</fpage>
<lpage>45</lpage>
</element-citation>
</ref>
<ref id="b5-sensors-09-02252">
<label>5.</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Pantic</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Pentland</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Nijholt</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Huang</surname>
<given-names>T.</given-names>
</name>
</person-group>
<source>Artificial Intelligence for Human Computing</source>
<comment>chapter Human Computing and Machine Understanding of Human Behavior: A Survey,</comment>
<fpage>47</fpage>
<lpage>71</lpage>
<comment>Lecture Notes in Computer Science.</comment>
<publisher-name>Springer</publisher-name>
<year>2007</year>
</element-citation>
</ref>
<ref id="b6-sensors-09-02252">
<label>6.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Haritaoglu</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Harwood</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Davis</surname>
<given-names>L.</given-names>
</name>
</person-group>
<article-title>
<italic>W</italic>
<sup>4</sup>
: Real-time surveillance of people and their activities</article-title>
<source>IEEE Transactions on Pattern Analysis and Machine Intelligence</source>
<month>8</month>
<year>2000</year>
<volume>22</volume>
<fpage>809</fpage>
<lpage>830</lpage>
</element-citation>
</ref>
<ref id="b7-sensors-09-02252">
<label>7.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Oliver</surname>
<given-names>N. M.</given-names>
</name>
<name>
<surname>Rosario</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Pentland</surname>
<given-names>A. P.</given-names>
</name>
</person-group>
<article-title>A Bayesian computer vision system for modeling human interactions</article-title>
<source>IEEE Transactions on Pattern Analysis and Machine Intelligence</source>
<year>2000</year>
<volume>22</volume>
<fpage>831</fpage>
<lpage>843</lpage>
</element-citation>
</ref>
<ref id="b8-sensors-09-02252">
<label>8.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ricquebourg</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Bouthemy</surname>
<given-names>P.</given-names>
</name>
</person-group>
<article-title>Real-time tracking of moving persons by exploiting spatio-temporal image slices</article-title>
<month>8</month>
<year>2000</year>
<volume>22</volume>
<fpage>797</fpage>
<lpage>808</lpage>
</element-citation>
</ref>
<ref id="b9-sensors-09-02252">
<label>9.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bremond</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Thonnat</surname>
<given-names>M.</given-names>
</name>
</person-group>
<article-title>Tracking multiple nonrigid objects in video sequences</article-title>
<month>9</month>
<year>1998</year>
<volume>8</volume>
<fpage>585</fpage>
<lpage>591</lpage>
</element-citation>
</ref>
<ref id="b10-sensors-09-02252">
<label>10.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Faulus</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Ng</surname>
<given-names>R.</given-names>
</name>
</person-group>
<article-title>An expressive language and interface for image querying</article-title>
<source>Machine Vision and Applications</source>
<month>6</month>
<year>1997</year>
<volume>10</volume>
<fpage>74</fpage>
<lpage>85</lpage>
</element-citation>
</ref>
<ref id="b11-sensors-09-02252">
<label>11.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Haanpaa</surname>
<given-names>D. P.</given-names>
</name>
</person-group>
<article-title>An advanced haptic system for improving man-machine interfaces</article-title>
<source>Computer Graphics</source>
<year>1997</year>
<volume>21</volume>
<fpage>443</fpage>
<lpage>449</lpage>
</element-citation>
</ref>
<ref id="b12-sensors-09-02252">
<label>12.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kuran</surname>
<given-names>M. S.</given-names>
</name>
<name>
<surname>Tugcu</surname>
<given-names>T.</given-names>
</name>
</person-group>
<article-title>A survey on emerging broadband wireless access technologies</article-title>
<source>Computer Networks</source>
<year>2007</year>
<volume>51</volume>
<fpage>3013</fpage>
<lpage>3046</lpage>
</element-citation>
</ref>
<ref id="b13-sensors-09-02252">
<label>13.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Stringa</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Regazzoni</surname>
<given-names>C. S.</given-names>
</name>
</person-group>
<article-title>Real-time video-shot detection for scene surveillance applications</article-title>
<year>2000</year>
<volume>9</volume>
<fpage>69</fpage>
<lpage>79</lpage>
</element-citation>
</ref>
<ref id="b14-sensors-09-02252">
<label>14.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Kimura</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Latifi</surname>
<given-names>S.</given-names>
</name>
</person-group>
<article-title>A survey on data compression in wireless sensor networks</article-title>
<conf-name>Proceedings of the International Conference on Information Technology: Coding and Computing</conf-name>
<conf-loc>USA</conf-loc>
<conf-date>4–6 April 2005</conf-date>
<fpage>8</fpage>
<lpage>13</lpage>
</element-citation>
</ref>
<ref id="b15-sensors-09-02252">
<label>15.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lu</surname>
<given-names>W.</given-names>
</name>
</person-group>
<article-title>Scanning the issue - special issue on multidimensional broad-band wireless technologies and services</article-title>
<source>Proceedings of the IEEE</source>
<month>1</month>
<year>2001</year>
<volume>89</volume>
<fpage>3</fpage>
<lpage>5</lpage>
</element-citation>
</ref>
<ref id="b16-sensors-09-02252">
<label>16.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fazel</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Robertson</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Klank</surname>
<given-names>O.</given-names>
</name>
<name>
<surname>Vanselow</surname>
<given-names>F.</given-names>
</name>
</person-group>
<article-title>Concept of a wireless indoor video communications system</article-title>
<source>Signal Processing: Image Communication</source>
<year>1998</year>
<volume>12</volume>
<fpage>193</fpage>
<lpage>208</lpage>
</element-citation>
</ref>
<ref id="b17-sensors-09-02252">
<label>17.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Batra</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Chang</surname>
<given-names>S.</given-names>
</name>
</person-group>
<article-title>Effective algorithms for video transmission over wireless channels</article-title>
<source>Signal Processing: Image Communication</source>
<month>4</month>
<year>1998</year>
<volume>12</volume>
<fpage>147</fpage>
<lpage>166</lpage>
</element-citation>
</ref>
<ref id="b18-sensors-09-02252">
<label>18.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Manjunath</surname>
<given-names>B. S.</given-names>
</name>
<name>
<surname>Huang</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Tekalp</surname>
<given-names>A. M.</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>H. J.</given-names>
</name>
</person-group>
<article-title>Introduction to the special issue on image and video processing for digital libraries</article-title>
<year>2000</year>
<volume>9</volume>
<fpage>1</fpage>
<lpage>2</lpage>
</element-citation>
</ref>
<ref id="b19-sensors-09-02252">
<label>19.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bjontegaard</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Lillevold</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Danielsen</surname>
<given-names>R.</given-names>
</name>
</person-group>
<article-title>A comparison of different coding formats for digital coding of video using mpeg-2</article-title>
<year>1996</year>
<volume>5</volume>
<fpage>1271</fpage>
<lpage>1276</lpage>
</element-citation>
</ref>
<ref id="b20-sensors-09-02252">
<label>20.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Cheng</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>X.</given-names>
</name>
</person-group>
<article-title>Partial encryption of compressed images and videos</article-title>
<month>8</month>
<year>2000</year>
<volume>48</volume>
<fpage>2439</fpage>
<lpage>2451</lpage>
</element-citation>
</ref>
<ref id="b21-sensors-09-02252">
<label>21.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Benois-Pineau</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Morier</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Barba</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Sanson</surname>
<given-names>H.</given-names>
</name>
</person-group>
<article-title>Hierarchical segmentation of video sequences for content manipulation and adaptive coding</article-title>
<source>Signal Processing</source>
<month>4</month>
<year>1998</year>
<volume>66</volume>
<fpage>181</fpage>
<lpage>201</lpage>
</element-citation>
</ref>
<ref id="b22-sensors-09-02252">
<label>22.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Vasconcelos</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Lippman</surname>
<given-names>A.</given-names>
</name>
</person-group>
<article-title>Statistical models of video structure for content analysis and characterization</article-title>
<month>1</month>
<year>2000</year>
<volume>9</volume>
<fpage>3</fpage>
<lpage>19</lpage>
</element-citation>
</ref>
<ref id="b23-sensors-09-02252">
<label>23.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ebrahimi</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Salembier</surname>
<given-names>P.</given-names>
</name>
</person-group>
<article-title>Special issue on video sequence segmentation for content-based processing and manipulation</article-title>
<source>Signal Processing</source>
<year>1998</year>
<volume>66</volume>
<fpage>3</fpage>
<lpage>19</lpage>
</element-citation>
</ref>
<ref id="b24-sensors-09-02252">
<label>24.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Akyildiz</surname>
<given-names>I. F.</given-names>
</name>
<name>
<surname>Su</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Sankarasubramaniam</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Cayirci</surname>
<given-names>E.</given-names>
</name>
</person-group>
<article-title>Wireless sensor networks: a survey</article-title>
<source>Computer Networks</source>
<year>2002</year>
<volume>38</volume>
<fpage>393</fpage>
<lpage>422</lpage>
</element-citation>
</ref>
<ref id="b25-sensors-09-02252">
<label>25.</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Kersey</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Yu</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Tsai</surname>
<given-names>J.</given-names>
</name>
</person-group>
<source>Wireless Ad Hoc Networking: Personal-Area, Local-Area, and the Sensory-Area Networks</source>
<comment>chapter Intrusion Detection for Wireless Network,</comment>
<fpage>505</fpage>
<lpage>533</lpage>
<publisher-name>CRC Press</publisher-name>
<year>2007</year>
</element-citation>
</ref>
<ref id="b26-sensors-09-02252">
<label>26.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Sluzek</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Palaniappan</surname>
<given-names>A.</given-names>
</name>
</person-group>
<article-title>Development of a reconfigurable sensor network for intrusion detection</article-title>
<conf-name>Proceedings of the International Conference on Military and Aerospace Application of Programmable Logic Devices (MAPLD)</conf-name>
<conf-loc>Washington, D.C., USA</conf-loc>
<conf-date>September 2005</conf-date>
</element-citation>
</ref>
<ref id="b27-sensors-09-02252">
<label>27.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Margi</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Petkov</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Obraczka</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Manduchi</surname>
<given-names>R.</given-names>
</name>
</person-group>
<article-title>Characterizing energy consumption in a visual sensor network testbed</article-title>
<conf-name>Proceedings on the 2nd International IEEE/Create-Net Conference on Testbeds and Research Infrastructures for the Development of Networks and Communities</conf-name>
<conf-loc>Barcelona, Spain</conf-loc>
<conf-date>2006</conf-date>
</element-citation>
</ref>
<ref id="b28-sensors-09-02252">
<label>28.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Rahimi</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Baer</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Iroezi</surname>
<given-names>O. I.</given-names>
</name>
<name>
<surname>Garcia</surname>
<given-names>J. C.</given-names>
</name>
<name>
<surname>Warrior</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Estrin</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Srivastava</surname>
<given-names>M.</given-names>
</name>
</person-group>
<article-title>Cyclops: In situ image sensing and interpretation in wireless sensor networks</article-title>
<conf-name>Proceedings of the International Conference on Embedded Networked Sensor Systems</conf-name>
<conf-loc>San Diego, California, USA</conf-loc>
<conf-date>November 24 2005</conf-date>
<fpage>192</fpage>
<lpage>204</lpage>
</element-citation>
</ref>
<ref id="b29-sensors-09-02252">
<label>29.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Regazzoni</surname>
<given-names>C. S.</given-names>
</name>
<name>
<surname>Tesei</surname>
<given-names>A.</given-names>
</name>
</person-group>
<article-title>Distributed data-fusion for real-time crowding estimation</article-title>
<source>Signal Processing</source>
<month>8</month>
<year>1996</year>
<volume>53</volume>
<fpage>47</fpage>
<lpage>63</lpage>
</element-citation>
</ref>
<ref id="b30-sensors-09-02252">
<label>30.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Foresti</surname>
<given-names>G. L.</given-names>
</name>
<name>
<surname>Regazzoni</surname>
<given-names>C. S.</given-names>
</name>
</person-group>
<article-title>Multisensor data fusion for driving autonomous vehicles in risky environments</article-title>
<source>IEEE Transactions on Vehicular Technology</source>
<month>9</month>
<year>2002</year>
<volume>51</volume>
<fpage>1165</fpage>
<lpage>1185</lpage>
</element-citation>
</ref>
<ref id="b31-sensors-09-02252">
<label>31.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Bogaert</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Chelq</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Cornez</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Regazzoni</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Teschioni</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Thonnat</surname>
<given-names>M.</given-names>
</name>
</person-group>
<article-title>The password project</article-title>
<conf-name>Proceedings of the International Conference on Image Processing</conf-name>
<conf-loc>Chicago, USA</conf-loc>
<year>1996</year>
<fpage>675</fpage>
<lpage>678</lpage>
</element-citation>
</ref>
<ref id="b32-sensors-09-02252">
<label>32.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Cucchiara</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Grana</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Piccardi</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Prati</surname>
<given-names>A.</given-names>
</name>
</person-group>
<article-title>Detecting moving objects, ghosts, and shadows in video streams</article-title>
<month>10</month>
<year>2003</year>
<volume>25</volume>
<fpage>1337</fpage>
<lpage>1342</lpage>
</element-citation>
</ref>
<ref id="b33-sensors-09-02252">
<label>33.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Foresti</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Micheloni</surname>
<given-names>C.</given-names>
</name>
</person-group>
<article-title>A robust feature tracker for active surveillance of outdoor scenes</article-title>
<source>Electronic Letters on Computer Vision and Image Analysis</source>
<year>2003</year>
<volume>1</volume>
<fpage>21</fpage>
<lpage>34</lpage>
</element-citation>
</ref>
<ref id="b34-sensors-09-02252">
<label>34.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Piciarelli</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Micheloni</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Foresti</surname>
<given-names>G.</given-names>
</name>
</person-group>
<article-title>Trajectory-based anomalous event detection</article-title>
<source>IEEE Transaction on Circuits and Systems for Video Technology</source>
<month>11</month>
<year>2008</year>
<volume>18</volume>
<fpage>1544</fpage>
<lpage>1554</lpage>
</element-citation>
</ref>
<ref id="b35-sensors-09-02252">
<label>35.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Carincotte</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Desurmont</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Ravera</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Bremond</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Orwell</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Velastin</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Odobez</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Corbucci</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Palo</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Cernocky</surname>
<given-names>J.</given-names>
</name>
</person-group>
<article-title>Toward generic intelligent knowledge extraction from video and audio: the eu-funded CARETAKER project</article-title>
<conf-name>Proceedings of the IEE Conference on Imaging for Crime Detection and Prevention (ICDP)</conf-name>
<conf-loc>London, UK</conf-loc>
<conf-date>13–14 June 2006</conf-date>
<fpage>470</fpage>
<lpage>475</lpage>
</element-citation>
</ref>
<ref id="b36-sensors-09-02252">
<label>36.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fan</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Xing</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Gan</surname>
<given-names>F.</given-names>
</name>
</person-group>
<article-title>Image sequence segmentation based on 2d temporal entropic thresholding</article-title>
<source>Pattern Recognition Letters</source>
<year>1996</year>
<volume>17</volume>
<fpage>1101</fpage>
<lpage>1107</lpage>
</element-citation>
</ref>
<ref id="b37-sensors-09-02252">
<label>37.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tsai</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Lin</surname>
<given-names>C.</given-names>
</name>
</person-group>
<article-title>Fast normalized cross correlation for defect detection</article-title>
<source>Pattern Recognition Letters</source>
<year>2003</year>
<volume>24</volume>
<fpage>2625</fpage>
<lpage>2631</lpage>
</element-citation>
</ref>
<ref id="b38-sensors-09-02252">
<label>38.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Wang</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Feng</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Shum</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Ma</surname>
<given-names>S.</given-names>
</name>
</person-group>
<article-title>A novel probability model for background maintenance and subtraction</article-title>
<conf-name>Proceedings of the 15th International Conference on Vision Interface</conf-name>
<conf-loc>Calgary, Canada</conf-loc>
<conf-date>2002</conf-date>
<fpage>109</fpage>
<lpage>117</lpage>
</element-citation>
</ref>
<ref id="b39-sensors-09-02252">
<label>39.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Stauffer</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Grimson</surname>
<given-names>W. E. L.</given-names>
</name>
</person-group>
<article-title>Learning patterns of activity using real-time tracking</article-title>
<source>IEEE Pattern Analysis and Machine Intelligence</source>
<month>8</month>
<year>2000</year>
<volume>22</volume>
<fpage>747</fpage>
<lpage>757</lpage>
</element-citation>
</ref>
<ref id="b40-sensors-09-02252">
<label>40.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tsai</surname>
<given-names>W.-H.</given-names>
</name>
</person-group>
<article-title>Moment-preserving thresholding: A new approach</article-title>
<source>Computer Vision, Graphics, and Image Processing</source>
<year>1985</year>
<volume>29</volume>
<fpage>377</fpage>
<lpage>393</lpage>
</element-citation>
</ref>
<ref id="b41-sensors-09-02252">
<label>41.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rosin</surname>
<given-names>P. L.</given-names>
</name>
</person-group>
<article-title>Unimodal thresholding</article-title>
<source>Pattern Recognition</source>
<year>2001</year>
<volume>34</volume>
<fpage>2083</fpage>
<lpage>2096</lpage>
</element-citation>
</ref>
<ref id="b42-sensors-09-02252">
<label>42.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Snidaro</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Foresti</surname>
<given-names>G.</given-names>
</name>
</person-group>
<article-title>Real-time thresholding with Euler numbers</article-title>
<source>Pattern Recognition Letters</source>
<month>6</month>
<year>2003</year>
<volume>24</volume>
<fpage>1533</fpage>
<lpage>1544</lpage>
</element-citation>
</ref>
<ref id="b43-sensors-09-02252">
<label>43.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Yuille</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Vincent</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Geiger</surname>
<given-names>D.</given-names>
</name>
</person-group>
<article-title>Statistical morphology and Bayesian reconstruction</article-title>
<source>Journal of Mathematical Imaging and Vision</source>
<year>1992</year>
<volume>1</volume>
<fpage>223</fpage>
<lpage>238</lpage>
</element-citation>
</ref>
<ref id="b44-sensors-09-02252">
<label>44.</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Serra</surname>
<given-names>J.</given-names>
</name>
</person-group>
<source>Image Analysis and Mathematical Morphology</source>
<publisher-name>Academic Press</publisher-name>
<publisher-loc>London</publisher-loc>
<year>1982</year>
</element-citation>
</ref>
<ref id="b45-sensors-09-02252">
<label>45.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mikolajczyk</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Schmid</surname>
<given-names>C.</given-names>
</name>
</person-group>
<article-title>A performance evaluation of local descriptors</article-title>
<month>10</month>
<year>2005</year>
<volume>27</volume>
<fpage>1615</fpage>
<lpage>1630</lpage>
</element-citation>
</ref>
<ref id="b46-sensors-09-02252">
<label>46.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Micheloni</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Foresti</surname>
<given-names>G.</given-names>
</name>
</person-group>
<article-title>Image acquisition enhancement for active video surveillance</article-title>
<conf-name>Proceedings of the International Conference on Pattern Recognition (ICPR)</conf-name>
<conf-loc>Cambridge, U.K.</conf-loc>
<conf-date>22–26 August 2004</conf-date>
<volume>3</volume>
<fpage>326</fpage>
<lpage>329</lpage>
</element-citation>
</ref>
<ref id="b47-sensors-09-02252">
<label>47.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Micheloni</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Foresti</surname>
<given-names>G.</given-names>
</name>
</person-group>
<article-title>Fast good features selection for wide area monitoring</article-title>
<conf-name>Proceedings of the IEEE Conference on Advanced Video and Signal Based Surveillance</conf-name>
<conf-loc>Miami, FL, USA</conf-loc>
<conf-date>July 2003</conf-date>
<fpage>271</fpage>
<lpage>276</lpage>
</element-citation>
</ref>
<ref id="b48-sensors-09-02252">
<label>48.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Micheloni</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Foresti</surname>
<given-names>G.</given-names>
</name>
</person-group>
<article-title>Focusing on target’s features while tracking</article-title>
<conf-name>Proc. 18th International Conference on Pattern Recognition (ICPR)</conf-name>
<conf-loc>Hong Kong</conf-loc>
<conf-date>July 21–22, 2006</conf-date>
<volume>1</volume>
<fpage>836</fpage>
<lpage>839</lpage>
</element-citation>
</ref>
<ref id="b49-sensors-09-02252">
<label>49.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Crowley</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Hall</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Emonet</surname>
<given-names>R.</given-names>
</name>
</person-group>
<article-title>Autonomic computer vision systems</article-title>
<conf-name>Proceedings of the 5th International Conference on Computer Vision Systems (ICVS07)</conf-name>
<conf-loc>Bielefeld, Germany</conf-loc>
<conf-date>21–24 March 2007</conf-date>
</element-citation>
</ref>
<ref id="b50-sensors-09-02252">
<label>50.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Snidaro</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Niu</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Foresti</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Varshney</surname>
<given-names>P.</given-names>
</name>
</person-group>
<article-title>Quality-Based Fusion of Multiple Video Sensors for Video Surveillance</article-title>
<month>8</month>
<year>2007</year>
<volume>37</volume>
<fpage>1044</fpage>
<lpage>1051</lpage>
</element-citation>
</ref>
<ref id="b51-sensors-09-02252">
<label>51.</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Snidaro</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Foresti</surname>
<given-names>G.</given-names>
</name>
</person-group>
<source>Advances and Challenges in Multisensor Data and Information Processing</source>
<comment>chapter Sensor Performance Estimation for Multi-camera Ambient Security Systems: a Review,</comment>
<fpage>331</fpage>
<lpage>338</lpage>
<comment>NATO Security through Science Series, D: Information and Communication Security.</comment>
<publisher-name>IOS Press</publisher-name>
<year>2007</year>
</element-citation>
</ref>
<ref id="b52-sensors-09-02252">
<label>52.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Avcibaş</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Sankur</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Sayood</surname>
<given-names>K.</given-names>
</name>
</person-group>
<article-title>Statistical evaluation of image quality measures</article-title>
<source>Journal of Electronic Imaging</source>
<year>2002</year>
<volume>11</volume>
<fpage>206</fpage>
<lpage>223</lpage>
</element-citation>
</ref>
<ref id="b53-sensors-09-02252">
<label>53.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Collins</surname>
<given-names>R. T.</given-names>
</name>
<name>
<surname>Lipton</surname>
<given-names>A. J.</given-names>
</name>
<name>
<surname>Kanade</surname>
<given-names>T.</given-names>
</name>
</person-group>
<article-title>Introduction to the special section on video surveillance</article-title>
<month>8</month>
<year>2000</year>
<volume>22</volume>
<fpage>745</fpage>
<lpage>746</lpage>
</element-citation>
</ref>
<ref id="b54-sensors-09-02252">
<label>54.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Erdem</surname>
<given-names>Ç. E.</given-names>
</name>
<name>
<surname>Sankur</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Tekalp</surname>
<given-names>A. M.</given-names>
</name>
</person-group>
<article-title>Performance measures for video object segmentation and tracking</article-title>
<month>7</month>
<year>2004</year>
<volume>13</volume>
<fpage>937</fpage>
<lpage>951</lpage>
</element-citation>
</ref>
<ref id="b55-sensors-09-02252">
<label>55.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Correia</surname>
<given-names>P. L.</given-names>
</name>
<name>
<surname>Pereira</surname>
<given-names>F.</given-names>
</name>
</person-group>
<article-title>Objective evaluation of video segmentation quality</article-title>
<month>2</month>
<year>2003</year>
<volume>12</volume>
<fpage>186</fpage>
<lpage>200</lpage>
</element-citation>
</ref>
<ref id="b56-sensors-09-02252">
<label>56.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Collins</surname>
<given-names>R. T.</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Leordeanu</surname>
<given-names>M.</given-names>
</name>
</person-group>
<article-title>Online selection of discriminative tracking features</article-title>
<month>10</month>
<year>2005</year>
<volume>27</volume>
<fpage>1631</fpage>
<lpage>1643</lpage>
</element-citation>
</ref>
<ref id="b57-sensors-09-02252">
<label>57.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Nghiem</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Bremond</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Thonnat</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Ma</surname>
<given-names>R.</given-names>
</name>
</person-group>
<article-title>New evaluation approach for video processing algorithms</article-title>
<conf-name>Proceedings of the IEEE Workshop on Motion and Video Computing (WMVC07)</conf-name>
<conf-loc>Austin, Texas, USA</conf-loc>
<conf-date>February 23–24 2007</conf-date>
</element-citation>
</ref>
</ref-list>
</back>
<floats-group>
<fig id="f1-sensors-09-02252" position="float">
<label>Figure 1.</label>
<caption>
<p>Example of the general architecture of a video-based surveillance system of the first generation (1960–1980).</p>
</caption>
<graphic xlink:href="sensors-09-02252f1"></graphic>
</fig>
<fig id="f2-sensors-09-02252" position="float">
<label>Figure 2.</label>
<caption>
<p>Example of the general architecture of a video-based surveillance system of the second generation (1990–2000).</p>
</caption>
<graphic xlink:href="sensors-09-02252f2"></graphic>
</fig>
<fig id="f3-sensors-09-02252" position="float">
<label>Figure 3.</label>
<caption>
<p>Example of the general architecture of a video-based surveillance system of the third generation.</p>
</caption>
<graphic xlink:href="sensors-09-02252f3"></graphic>
</fig>
<fig id="f4-sensors-09-02252" position="float">
<label>Figure 4.</label>
<caption>
<p>Man-machine interface of the visual-based surveillance system developed within the ATHENA project (1994–1998).</p>
</caption>
<graphic xlink:href="sensors-09-02252f4"></graphic>
</fig>
<fig id="f5-sensors-09-02252" position="float">
<label>Figure 5.</label>
<caption>
<p>Man-machine interface of the visual-based surveillance system developed within the Italian PFT2 project (1993–1996).</p>
</caption>
<graphic xlink:href="sensors-09-02252f5"></graphic>
</fig>
<fig id="f6-sensors-09-02252" position="float">
<label>Figure 6.</label>
<caption>
<p>The
<italic>PER</italic>
<sup>2</sup>
project made use of dynamic video surveillance. In particular, static cameras have been supported by PTZ cameras which, in the context of active vision, are able to provide higher detail and quality than static ones.</p>
</caption>
<graphic xlink:href="sensors-09-02252f6"></graphic>
</fig>
<fig id="f7-sensors-09-02252" position="float">
<label>Figure 7.</label>
<caption>
<p>Architecture of the system for Ambient Intelligence.</p>
</caption>
<graphic xlink:href="sensors-09-02252f7"></graphic>
</fig>
<fig id="f8-sensors-09-02252" position="float">
<label>Figure 8.</label>
<caption>
<p>Examples of typical motion detection problems: (a,b) camouflage, (c,d) light changes, (e,f) foreground aperture, (g,h) ghosting.</p>
</caption>
<graphic xlink:href="sensors-09-02252f8"></graphic>
</fig>
<fig id="f9-sensors-09-02252" position="float">
<label>Figure 9.</label>
<caption>
<p>(a) Change detection operation performed on the images in
<xref ref-type="fig" rid="f5-sensors-09-02252">Figure 5</xref>
, (b) output of the morphological operation.</p>
</caption>
<graphic xlink:href="sensors-09-02252f9"></graphic>
</fig>
<fig id="f10-sensors-09-02252" position="float">
<label>Figure 10.</label>
<caption>
<p>The neural network hierarchy proposed by Micheloni and Foresti in [
<xref ref-type="bibr" rid="b46-sensors-09-02252">46</xref>
].</p>
</caption>
<graphic xlink:href="sensors-09-02252f10"></graphic>
</fig>
<fig id="f11-sensors-09-02252" position="float">
<label>Figure 11.</label>
<caption>
<p>Example of focusing of a moving object. In the first frame (a), the acquisition quality does not allow the object to be accurately detected and recognized (e) (smooth contours). After four steps, the quality of the object of interest is much higher (d) and its detection is also improved (h) (sharp contours).</p>
</caption>
<graphic xlink:href="sensors-09-02252f11"></graphic>
</fig>
<fig id="f12-sensors-09-02252" position="float">
<label>Figure 12.</label>
<caption>
<p>Example of brightness control. As the target walks in a dark area (a), the system requests two consecutive openings of the iris on the basis of the target's brightness only (b) and (c). While the brightness of the object is considered appropriate by the system (d), the rest of the image is overexposed from a human point of view.</p>
</caption>
<graphic xlink:href="sensors-09-02252f12"></graphic>
</fig>
<fig id="f13-sensors-09-02252" position="float">
<label>Figure 13.</label>
<caption>
<p>Individual segmentation quality of a moving person detected by two sensors (bounding boxes and segmentation scores are visualized on the source frames). According to the metric proposed in [
<xref ref-type="bibr" rid="b50-sensors-09-02252">50</xref>
], the second sensor (b) provides a better detection, as the target and the background are more strongly contrasted.</p>
</caption>
<graphic xlink:href="sensors-09-02252f13"></graphic>
</fig>
</floats-group>
</pmc>
<affiliations>
<list></list>
<tree>
<noCountry>
<name sortKey="Foresti, Gian Luca" sort="Foresti, Gian Luca" uniqKey="Foresti G" first="Gian Luca" last="Foresti">Gian Luca Foresti</name>
<name sortKey="Micheloni, Christian" sort="Micheloni, Christian" uniqKey="Micheloni C" first="Christian" last="Micheloni">Christian Micheloni</name>
<name sortKey="Piciarelli, Claudio" sort="Piciarelli, Claudio" uniqKey="Piciarelli C" first="Claudio" last="Piciarelli">Claudio Piciarelli</name>
<name sortKey="Snidaro, Lauro" sort="Snidaro, Lauro" uniqKey="Snidaro L" first="Lauro" last="Snidaro">Lauro Snidaro</name>
</noCountry>
</tree>
</affiliations>
</record>
