Telematics exploration server

Warning: this site is under development!
Warning: this site is generated automatically from raw corpora; the information has therefore not been validated.

Acoustic Biometric System Based on Preprocessing Techniques and Linear Support Vector Machines

Internal identifier: 000041 (Pmc/Corpus); previous: 000040; next: 000042


Authors: Lara Del Val; Alberto Izquierdo-Fuente; Juan J. Villacorta; Mariano Raboso

Source:

RBID : PMC:4507697

Abstract

Drawing on the results of an acoustic biometric system based on an MSE classifier, a new biometric system has been implemented. This new system preprocesses acoustic images, extracts several parameters and finally classifies them using a Support Vector Machine (SVM). The preprocessing techniques used are spatial filtering; segmentation, based on a Gaussian Mixture Model (GMM), to separate the person from the background; masking, to reduce the dimensions of the images; and binarization, to reduce the size of each image. An analysis of the classification error and a study of the sensitivity of the error versus the computational burden of each implemented algorithm are presented. This allows the selection of the most relevant algorithms, according to the performance required by the system. A significant improvement of the biometric system has been achieved by reducing the classification error, the computational burden and the storage requirements.


URL:
DOI: 10.3390/s150614241
PubMed: 26091392
PubMed Central: 4507697

Links to Exploration step

PMC:4507697

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Acoustic Biometric System Based on Preprocessing Techniques and Linear Support Vector Machines</title>
<author>
<name sortKey="Del Val, Lara" sort="Del Val, Lara" uniqKey="Del Val L" first="Lara" last="Del Val">Lara Del Val</name>
<affiliation>
<nlm:aff id="af1-sensors-15-14241">Departamento de Ciencia de los Materiales e Ingeniería Metalúrgica, Expresión Gráfica de la Ingeniería, Ingeniería Cartográfica, Geodesia y Fotogrametría, Ingeniería Mecánica e Ingeniería de los Procesos de Fabricación, Área de Ingeniería Mecánica, Universidad de Valladolid, Paseo del Cauce 59, 47011 Valladolid, Spain</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Izquierdo Fuente, Alberto" sort="Izquierdo Fuente, Alberto" uniqKey="Izquierdo Fuente A" first="Alberto" last="Izquierdo-Fuente">Alberto Izquierdo-Fuente</name>
<affiliation>
<nlm:aff id="af2-sensors-15-14241">Departamento de Teoría de la Señal y Comunicaciones e Ingeniería Telemática, Universidad de Valladolid, Paseo Belén 15, 47011 Valladolid, Spain; E-Mails:
<email>alberto.izquierdo@tel.uva.es</email>
(A.I.-F.);
<email>juavil@tel.uva.es</email>
(J.J.V.)</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Villacorta, Juan J" sort="Villacorta, Juan J" uniqKey="Villacorta J" first="Juan J." last="Villacorta">Juan J. Villacorta</name>
<affiliation>
<nlm:aff id="af2-sensors-15-14241">Departamento de Teoría de la Señal y Comunicaciones e Ingeniería Telemática, Universidad de Valladolid, Paseo Belén 15, 47011 Valladolid, Spain; E-Mails:
<email>alberto.izquierdo@tel.uva.es</email>
(A.I.-F.);
<email>juavil@tel.uva.es</email>
(J.J.V.)</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Raboso, Mariano" sort="Raboso, Mariano" uniqKey="Raboso M" first="Mariano" last="Raboso">Mariano Raboso</name>
<affiliation>
<nlm:aff id="af3-sensors-15-14241">E.U. Informática, Universidad Pontificia de Salamanca, Calle Compañía 5, 37002 Salamanca, Spain; E-Mail:
<email>mrabosoma@upsa.es</email>
</nlm:aff>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">26091392</idno>
<idno type="pmc">4507697</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4507697</idno>
<idno type="RBID">PMC:4507697</idno>
<idno type="doi">10.3390/s150614241</idno>
<date when="2015">2015</date>
<idno type="wicri:Area/Pmc/Corpus">000041</idno>
<idno type="wicri:explorRef" wicri:stream="Pmc" wicri:step="Corpus" wicri:corpus="PMC">000041</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Acoustic Biometric System Based on Preprocessing Techniques and Linear Support Vector Machines</title>
<author>
<name sortKey="Del Val, Lara" sort="Del Val, Lara" uniqKey="Del Val L" first="Lara" last="Del Val">Lara Del Val</name>
<affiliation>
<nlm:aff id="af1-sensors-15-14241">Departamento de Ciencia de los Materiales e Ingeniería Metalúrgica, Expresión Gráfica de la Ingeniería, Ingeniería Cartográfica, Geodesia y Fotogrametría, Ingeniería Mecánica e Ingeniería de los Procesos de Fabricación, Área de Ingeniería Mecánica, Universidad de Valladolid, Paseo del Cauce 59, 47011 Valladolid, Spain</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Izquierdo Fuente, Alberto" sort="Izquierdo Fuente, Alberto" uniqKey="Izquierdo Fuente A" first="Alberto" last="Izquierdo-Fuente">Alberto Izquierdo-Fuente</name>
<affiliation>
<nlm:aff id="af2-sensors-15-14241">Departamento de Teoría de la Señal y Comunicaciones e Ingeniería Telemática, Universidad de Valladolid, Paseo Belén 15, 47011 Valladolid, Spain; E-Mails:
<email>alberto.izquierdo@tel.uva.es</email>
(A.I.-F.);
<email>juavil@tel.uva.es</email>
(J.J.V.)</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Villacorta, Juan J" sort="Villacorta, Juan J" uniqKey="Villacorta J" first="Juan J." last="Villacorta">Juan J. Villacorta</name>
<affiliation>
<nlm:aff id="af2-sensors-15-14241">Departamento de Teoría de la Señal y Comunicaciones e Ingeniería Telemática, Universidad de Valladolid, Paseo Belén 15, 47011 Valladolid, Spain; E-Mails:
<email>alberto.izquierdo@tel.uva.es</email>
(A.I.-F.);
<email>juavil@tel.uva.es</email>
(J.J.V.)</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Raboso, Mariano" sort="Raboso, Mariano" uniqKey="Raboso M" first="Mariano" last="Raboso">Mariano Raboso</name>
<affiliation>
<nlm:aff id="af3-sensors-15-14241">E.U. Informática, Universidad Pontificia de Salamanca, Calle Compañía 5, 37002 Salamanca, Spain; E-Mail:
<email>mrabosoma@upsa.es</email>
</nlm:aff>
</affiliation>
</author>
</analytic>
<series>
<title level="j">Sensors (Basel, Switzerland)</title>
<idno type="eISSN">1424-8220</idno>
<imprint>
<date when="2015">2015</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>Drawing on the results of an acoustic biometric system based on an MSE classifier, a new biometric system has been implemented. This new system preprocesses acoustic images, extracts several parameters and finally classifies them using a Support Vector Machine (SVM). The preprocessing techniques used are spatial filtering; segmentation, based on a Gaussian Mixture Model (GMM), to separate the person from the background; masking, to reduce the dimensions of the images; and binarization, to reduce the size of each image. An analysis of the classification error and a study of the sensitivity of the error
<italic>versus</italic>
the computational burden of each implemented algorithm are presented. This allows the selection of the most relevant algorithms, according to the performance required by the system. A significant improvement of the biometric system has been achieved by reducing the classification error, the computational burden and the storage requirements.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Jain, A" uniqKey="Jain A">A. Jain</name>
</author>
<author>
<name sortKey="Bolle, R" uniqKey="Bolle R">R. Bolle</name>
</author>
<author>
<name sortKey="Pankanti, S" uniqKey="Pankanti S">S. Pankanti</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Crispin, J" uniqKey="Crispin J">J. Crispin</name>
</author>
<author>
<name sortKey="Maffett, A" uniqKey="Maffett A">A. Maffett</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Neubauer, W" uniqKey="Neubauer W">W. Neubauer</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Baker, C" uniqKey="Baker C">C. Baker</name>
</author>
<author>
<name sortKey="Vespe, M" uniqKey="Vespe M">M. Vespe</name>
</author>
<author>
<name sortKey="Jones, G" uniqKey="Jones G">G. Jones</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Balleri, A" uniqKey="Balleri A">A. Balleri</name>
</author>
<author>
<name sortKey="Woodbridge, K" uniqKey="Woodbridge K">K. Woodbridge</name>
</author>
<author>
<name sortKey="Baker, C J" uniqKey="Baker C">C.J. Baker</name>
</author>
<author>
<name sortKey="Holderied, M W" uniqKey="Holderied M">M.W. Holderied</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Helversen, D" uniqKey="Helversen D">D. Helversen</name>
</author>
<author>
<name sortKey="Holderied, M W" uniqKey="Holderied M">M.W. Holderied</name>
</author>
<author>
<name sortKey="Helversen, O" uniqKey="Helversen O">O. Helversen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Chevalier, L F" uniqKey="Chevalier L">L.F. Chevalier</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ricker, D W" uniqKey="Ricker D">D.W. Ricker</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Moebus, M" uniqKey="Moebus M">M. Moebus</name>
</author>
<author>
<name sortKey="Zoubir, A M" uniqKey="Zoubir A">A.M. Zoubir</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Moebus, M" uniqKey="Moebus M">M. Moebus</name>
</author>
<author>
<name sortKey="Zoubir, A M" uniqKey="Zoubir A">A.M. Zoubir</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Duran, J D" uniqKey="Duran J">J.D. Duran</name>
</author>
<author>
<name sortKey="Fuente, A I" uniqKey="Fuente A">A.I. Fuente</name>
</author>
<author>
<name sortKey="Calvo, J J V" uniqKey="Calvo J">J.J.V. Calvo</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Izquierdo Fuente, A" uniqKey="Izquierdo Fuente A">A. Izquierdo-Fuente</name>
</author>
<author>
<name sortKey="Villacorta Calvo, J" uniqKey="Villacorta Calvo J">J. Villacorta-Calvo</name>
</author>
<author>
<name sortKey="Raboso Mateos, M" uniqKey="Raboso Mateos M">M. Raboso-Mateos</name>
</author>
<author>
<name sortKey="Martinez Arribas, A" uniqKey="Martinez Arribas A">A. Martinez-Arribas</name>
</author>
<author>
<name sortKey="Rodriguez Merino, D" uniqKey="Rodriguez Merino D">D. Rodriguez-Merino</name>
</author>
<author>
<name sortKey="Del Val Puente, L" uniqKey="Del Val Puente L">L. del Val-Puente</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Izquierdo Fuente, A" uniqKey="Izquierdo Fuente A">A. Izquierdo-Fuente</name>
</author>
<author>
<name sortKey="Del Val Puente, L" uniqKey="Del Val Puente L">L. del Val-Puente</name>
</author>
<author>
<name sortKey="Jimenez G Mez, M I" uniqKey="Jimenez G Mez M">M.I. Jiménez-Gómez</name>
</author>
<author>
<name sortKey="Villacorta Calvo, J" uniqKey="Villacorta Calvo J">J. Villacorta-Calvo</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Jain, A K" uniqKey="Jain A">A.K. Jain</name>
</author>
<author>
<name sortKey="Nandakumar, K" uniqKey="Nandakumar K">K. Nandakumar</name>
</author>
<author>
<name sortKey="Ross, A" uniqKey="Ross A">A. Ross</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lee, E C" uniqKey="Lee E">E.C. Lee</name>
</author>
<author>
<name sortKey="Jung, H" uniqKey="Jung H">H. Jung</name>
</author>
<author>
<name sortKey="Kim, D" uniqKey="Kim D">D. Kim</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lee, E C" uniqKey="Lee E">E.C. Lee</name>
</author>
<author>
<name sortKey="Park, K R" uniqKey="Park K">K.R. Park</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hamdy, O" uniqKey="Hamdy O">O. Hamdy</name>
</author>
<author>
<name sortKey="Traore, I" uniqKey="Traore I">I. Traoré</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Izquierdo Fuente, A" uniqKey="Izquierdo Fuente A">A. Izquierdo-Fuente</name>
</author>
<author>
<name sortKey="Del Val Puente, L" uniqKey="Del Val Puente L">L. del Val-Puente</name>
</author>
<author>
<name sortKey="Villacorta Calvo, J" uniqKey="Villacorta Calvo J">J. Villacorta-Calvo</name>
</author>
<author>
<name sortKey="Raboso Mateos, M" uniqKey="Raboso Mateos M">M. Raboso-Mateos</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Cristianini, N" uniqKey="Cristianini N">N. Cristianini</name>
</author>
<author>
<name sortKey="Shawe Taylor, J" uniqKey="Shawe Taylor J">J. Shawe-Taylor</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Theodoridis, S" uniqKey="Theodoridis S">S. Theodoridis</name>
</author>
<author>
<name sortKey="Koutroumbas, K" uniqKey="Koutroumbas K">K. Koutroumbas</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bishop, C M" uniqKey="Bishop C">C.M. Bishop</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Skolnik, M I" uniqKey="Skolnik M">M.I. Skolnik</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Izquierdo Fuente, A" uniqKey="Izquierdo Fuente A">A. Izquierdo-Fuente</name>
</author>
<author>
<name sortKey="Villacorta Calvo, J J" uniqKey="Villacorta Calvo J">J.J. Villacorta-Calvo</name>
</author>
<author>
<name sortKey="Val Puente, L" uniqKey="Val Puente L">L. Val-Puente</name>
</author>
<author>
<name sortKey="Jimenez Gomez, M I" uniqKey="Jimenez Gomez M">M.I. Jiménez-Gomez</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wirth, W D" uniqKey="Wirth W">W.D. Wirth</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hsu, C W" uniqKey="Hsu C">C.W. Hsu</name>
</author>
<author>
<name sortKey="Chang, C C" uniqKey="Chang C">C.C. Chang</name>
</author>
<author>
<name sortKey="Lin, C J" uniqKey="Lin C">C.J. Lin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Chang, C C" uniqKey="Chang C">C.C. Chang</name>
</author>
<author>
<name sortKey="Lin, C J" uniqKey="Lin C">C.J. Lin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Cherkassky, V" uniqKey="Cherkassky V">V. Cherkassky</name>
</author>
<author>
<name sortKey="Ma, Y" uniqKey="Ma Y">Y. Ma</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">Sensors (Basel)</journal-id>
<journal-id journal-id-type="iso-abbrev">Sensors (Basel)</journal-id>
<journal-id journal-id-type="publisher-id">sensors</journal-id>
<journal-title-group>
<journal-title>Sensors (Basel, Switzerland)</journal-title>
</journal-title-group>
<issn pub-type="epub">1424-8220</issn>
<publisher>
<publisher-name>MDPI</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">26091392</article-id>
<article-id pub-id-type="pmc">4507697</article-id>
<article-id pub-id-type="doi">10.3390/s150614241</article-id>
<article-id pub-id-type="publisher-id">sensors-15-14241</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Acoustic Biometric System Based on Preprocessing Techniques and Linear Support Vector Machines</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>del Val</surname>
<given-names>Lara</given-names>
</name>
<xref ref-type="aff" rid="af1-sensors-15-14241">1</xref>
<xref rid="c1-sensors-15-14241" ref-type="corresp">*</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Izquierdo-Fuente</surname>
<given-names>Alberto</given-names>
</name>
<xref ref-type="aff" rid="af2-sensors-15-14241">2</xref>
<xref ref-type="author-notes" rid="fn1-sensors-15-14241"></xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Villacorta</surname>
<given-names>Juan J.</given-names>
</name>
<xref ref-type="aff" rid="af2-sensors-15-14241">2</xref>
<xref ref-type="author-notes" rid="fn1-sensors-15-14241"></xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Raboso</surname>
<given-names>Mariano</given-names>
</name>
<xref ref-type="aff" rid="af3-sensors-15-14241">3</xref>
<xref ref-type="author-notes" rid="fn1-sensors-15-14241"></xref>
</contrib>
</contrib-group>
<contrib-group>
<contrib contrib-type="editor">
<name>
<surname>Passaro</surname>
<given-names>Vittorio M.N.</given-names>
</name>
<role>Academic Editor</role>
</contrib>
</contrib-group>
<aff id="af1-sensors-15-14241">
<label>1</label>
Departamento de Ciencia de los Materiales e Ingeniería Metalúrgica, Expresión Gráfica de la Ingeniería, Ingeniería Cartográfica, Geodesia y Fotogrametría, Ingeniería Mecánica e Ingeniería de los Procesos de Fabricación, Área de Ingeniería Mecánica, Universidad de Valladolid, Paseo del Cauce 59, 47011 Valladolid, Spain</aff>
<aff id="af2-sensors-15-14241">
<label>2</label>
Departamento de Teoría de la Señal y Comunicaciones e Ingeniería Telemática, Universidad de Valladolid, Paseo Belén 15, 47011 Valladolid, Spain; E-Mails:
<email>alberto.izquierdo@tel.uva.es</email>
(A.I.-F.);
<email>juavil@tel.uva.es</email>
(J.J.V.)</aff>
<aff id="af3-sensors-15-14241">
<label>3</label>
E.U. Informática, Universidad Pontificia de Salamanca, Calle Compañía 5, 37002 Salamanca, Spain; E-Mail:
<email>mrabosoma@upsa.es</email>
</aff>
<author-notes>
<fn id="fn1-sensors-15-14241">
<label></label>
<p>These authors contributed equally to this work.</p>
</fn>
<corresp id="c1-sensors-15-14241">
<label>*</label>
Author to whom correspondence should be addressed; E-Mail:
<email>lvalpue@eii.uva.es</email>
; Tel.: +34-983-184-443.</corresp>
</author-notes>
<pub-date pub-type="epub">
<day>17</day>
<month>6</month>
<year>2015</year>
</pub-date>
<pub-date pub-type="collection">
<month>6</month>
<year>2015</year>
</pub-date>
<volume>15</volume>
<issue>6</issue>
<fpage>14241</fpage>
<lpage>14260</lpage>
<history>
<date date-type="received">
<day>06</day>
<month>5</month>
<year>2015</year>
</date>
<date date-type="accepted">
<day>10</day>
<month>6</month>
<year>2015</year>
</date>
</history>
<permissions>
<copyright-statement>© 2015 by the authors; licensee MDPI, Basel, Switzerland.</copyright-statement>
<copyright-year>2015</copyright-year>
<license>
<license-p>
<pmc-comment>CREATIVE COMMONS</pmc-comment>
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (
<ext-link ext-link-type="uri" xlink:href="http://creativecommons.org/licenses/by/4.0/">http://creativecommons.org/licenses/by/4.0/</ext-link>
).</license-p>
</license>
</permissions>
<abstract>
<p>Drawing on the results of an acoustic biometric system based on an MSE classifier, a new biometric system has been implemented. This new system preprocesses acoustic images, extracts several parameters and finally classifies them using a Support Vector Machine (SVM). The preprocessing techniques used are spatial filtering; segmentation, based on a Gaussian Mixture Model (GMM), to separate the person from the background; masking, to reduce the dimensions of the images; and binarization, to reduce the size of each image. An analysis of the classification error and a study of the sensitivity of the error
<italic>versus</italic>
the computational burden of each implemented algorithm are presented. This allows the selection of the most relevant algorithms, according to the performance required by the system. A significant improvement of the biometric system has been achieved by reducing the classification error, the computational burden and the storage requirements.</p>
</abstract>
<kwd-group>
<kwd>acoustic biometric system</kwd>
<kwd>acoustic images</kwd>
<kwd>preprocessing techniques</kwd>
<kwd>support vector machine</kwd>
</kwd-group>
</article-meta>
</front>
<body>
<sec>
<title>1. Introduction</title>
<p>Biometric systems rely on a subject’s characteristics to allow his or her identification [
<xref rid="B1-sensors-15-14241" ref-type="bibr">1</xref>
]. The main biometric systems use elements such as fingerprints, retina, face, voice,
<italic>etc</italic>
. to characterize people and then classify them for subsequent identification and validation. Each of these systems requires specific sensors to obtain the desired characteristics of the subject. Video cameras are often used as sensors to identify subjects or property, although a radar could also be used to obtain the shape of a subject through reflection [
<xref rid="B2-sensors-15-14241" ref-type="bibr">2</xref>
,
<xref rid="B3-sensors-15-14241" ref-type="bibr">3</xref>
]. There are accurate and reliable classification systems based on acoustic radars:
<list list-type="bullet">
<list-item>
<p>Animal echolocation, developed by mammals such as bats, whales and dolphins through specific waveforms [
<xref rid="B4-sensors-15-14241" ref-type="bibr">4</xref>
,
<xref rid="B5-sensors-15-14241" ref-type="bibr">5</xref>
], or the identification of different types of flowers by other species [
<xref rid="B6-sensors-15-14241" ref-type="bibr">6</xref>
].</p>
</list-item>
<list-item>
<p>Acoustic signatures used in passive sonar systems [
<xref rid="B7-sensors-15-14241" ref-type="bibr">7</xref>
,
<xref rid="B8-sensors-15-14241" ref-type="bibr">8</xref>
], which analyze the signal received from a target in the time-frequency domain.</p>
</list-item>
</list>
</p>
<p>There is little literature on the use of an acoustic radar as a biometric system for human identification, and an ultrasonic band, rather than the audible frequency band, is usually employed [
<xref rid="B9-sensors-15-14241" ref-type="bibr">9</xref>
,
<xref rid="B10-sensors-15-14241" ref-type="bibr">10</xref>
]. In previous works, the authors developed multisensor surveillance and tracking systems based on acoustic arrays and image sensors [
<xref rid="B11-sensors-15-14241" ref-type="bibr">11</xref>
,
<xref rid="B12-sensors-15-14241" ref-type="bibr">12</xref>
]. In another line of work, building on the experience acquired with acoustic arrays and image sensors, the authors developed a biometric identification system based on the acoustic images acquired with an electronically scanned array [
<xref rid="B13-sensors-15-14241" ref-type="bibr">13</xref>
]. The system tries to discriminate subjects in terms of their acoustic image, which is directly related to the subject’s shape, height and geometrical characteristics. These characteristics are considered “soft biometrics” and are usually used along with “hard biometrics” (e.g., fingerprints) in order to uniquely identify a person.</p>
<p>The system obtained acoustic images by scanning the subjects at four frequencies of the acoustic band and in four different positions, defining an acoustic profile that comprises all of these images. Subsequently, the acoustic profile was compared to previously stored profiles to identify the subject. In this first system, the Mean Square Error (MSE) between two images of the same frequency and position was used to compare the acoustic profiles, defining a global error as the sum of the errors associated with each image of the profile. Using the Equal Error Rate (EER) as a quality indicator, this system obtained an EER value of 6.22%, comparable to that of other emerging biometric identification systems [
<xref rid="B14-sensors-15-14241" ref-type="bibr">14</xref>
,
<xref rid="B15-sensors-15-14241" ref-type="bibr">15</xref>
,
<xref rid="B16-sensors-15-14241" ref-type="bibr">16</xref>
,
<xref rid="B17-sensors-15-14241" ref-type="bibr">17</xref>
].</p>
<p>In a later work [
<xref rid="B18-sensors-15-14241" ref-type="bibr">18</xref>
], the authors analyzed the contribution of each acoustic image—associated with a frequency and position—to the performance of the biometric system, finding that each image provides different degrees of information. Two main conclusions were obtained:
<list list-type="bullet">
<list-item>
<p>Each set of images associated with a certain frequency provides different information, improving system performance; thus, the number of frequencies used should be increased.</p>
</list-item>
<list-item>
<p>The images associated with certain subject positions only provide redundant information and do not improve the quality of the system; thus, the number of positions used should be decreased.</p>
</list-item>
</list>
</p>
<p>In a second stage of the analysis, a new global error function was proposed by weighting the MSE of each image in proportion to the information that it provides. In this case, an EER value of 4% was obtained. The use of more efficient classification algorithms should yield a further improvement in the classification error, and hence in the EER.</p>
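The weighted global error described above can be sketched in a few lines. This is an illustrative reading of the description, not the authors' code; the weights, image sizes and values below are made up.

```python
# Sketch of the weighted global error: each acoustic image of a profile is
# compared with its stored counterpart via MSE, and the per-image errors are
# weighted by the information that image provides. All values are illustrative.
def mse(img_a, img_b):
    """Mean square error between two equally sized (flattened) images."""
    return sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a)

def global_error(profile_a, profile_b, weights):
    """Weighted sum of per-image MSEs over a whole acoustic profile."""
    return sum(w * mse(a, b)
               for w, a, b in zip(weights, profile_a, profile_b))

# Two toy profiles of two images each, compared with equal weights:
err = global_error([[0.0, 0.0], [1.0, 1.0]],
                   [[1.0, 1.0], [1.0, 1.0]],
                   [0.5, 0.5])
```

The unweighted global error of the first system is the special case where every weight is 1.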
<p>Since Support Vector Machines (SVMs) are among the algorithms that currently define Machine Learning [
<xref rid="B19-sensors-15-14241" ref-type="bibr">19</xref>
], it was decided to use them for the classification tasks. Furthermore, among the classification algorithms considered, SVMs are the only ones capable of working with high-dimensional data such as the acoustic profiles used here.</p>
<p>This paper presents an improved biometric system that uses an SVM algorithm for the classification and identification of subjects. Since the high dimensionality of the acoustic profiles exponentially increases the computational burden of SVM classifiers, preprocessing and feature extraction techniques have been designed and implemented to improve classifier performance. This new system is based on the results obtained in previous studies [
<xref rid="B18-sensors-15-14241" ref-type="bibr">18</xref>
].</p>
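As a rough illustration of two of the preprocessing steps named in the abstract, masking and binarization could look like the following. The region bounds and threshold are hypothetical, not the paper's values.

```python
# Hypothetical sketch of two preprocessing steps: masking crops the image to
# the region containing the subject (reducing dimensions), and binarization
# thresholds each pixel to 0/1 (reducing the size of each image).
def mask(image, rows, cols):
    """Crop a 2-D image (list of rows) to the given (start, stop) ranges."""
    r0, r1 = rows
    c0, c1 = cols
    return [row[c0:c1] for row in image[r0:r1]]

def binarize(image, threshold):
    """Map each pixel to 1 if it reaches the threshold, else 0."""
    return [[1 if px >= threshold else 0 for px in row] for row in image]

acoustic_image = [[0.1, 0.9, 0.2],
                  [0.8, 0.7, 0.1],
                  [0.0, 0.3, 0.0]]
roi = mask(acoustic_image, (0, 2), (0, 2))   # 2x2 region of interest
bits = binarize(roi, 0.5)
```

Both steps shrink the input that the SVM classifier must handle, which is the point of the preprocessing stage.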
<p>In
<xref ref-type="sec" rid="sec2-sensors-15-14241">Section 2</xref>
, SVM classification algorithms and associated training techniques are explained.
<xref ref-type="sec" rid="sec3-sensors-15-14241">Section 3</xref>
describes the biometric system, including acquisition, preprocessing and classification systems. In
<xref ref-type="sec" rid="sec4-sensors-15-14241">Section 4</xref>
, an analysis of the results is presented and, finally,
<xref ref-type="sec" rid="sec5-sensors-15-14241">Section 5</xref>
presents the final conclusions.</p>
</sec>
<sec id="sec2-sensors-15-14241">
<title>2. Support Vector Machines</title>
<p>SVMs carry out binary classification by constructing a hyperplane defined by the weight vector
<bold>w</bold>
and the bias term
<italic>b</italic>
, as shown in
<xref ref-type="fig" rid="sensors-15-14241-f001">Figure 1</xref>
, so that samples of different classes are divided by a separation as wide as possible. For this reason, SVM algorithms are called maximum margin classifiers, with γ being the margin of separation.</p>
<fig id="sensors-15-14241-f001" position="float">
<label>Figure 1</label>
<caption>
<p>Hyperplane for binary classification.</p>
</caption>
<graphic xlink:href="sensors-15-14241-g001"></graphic>
</fig>
<p>Based on a training set of
<italic>l</italic>
known samples formed by data vectors
<italic>x
<sub>i</sub>
</italic>
and the corresponding class labels
<italic>y
<sub>i</sub>
</italic>
to which they belong:
<disp-formula id="FD1">
<label>(1)</label>
<mml:math id="mm1">
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>,</mml:mo>
<mml:mo></mml:mo>
<mml:mo>,</mml:mo>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>l</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mi>l</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo></mml:mo>
<mml:msup>
<mml:mi>R</mml:mi>
<mml:mi>N</mml:mi>
</mml:msup>
<mml:mo>×</mml:mo>
<mml:mrow>
<mml:mo>{</mml:mo>
<mml:mrow>
<mml:mo></mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>,</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mo>}</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
</p>
<p>Machine Learning algorithms obtain the hyperplane according to an optimization criterion, which must be validated subsequently.</p>
<p>In the validation phase, the class label of a new data vector
<italic>x</italic>
can be predicted by projecting
<bold>x</bold>
onto the weight vector
<bold>w</bold>
:
<disp-formula>f(<bold>x</bold>) = <bold>w</bold> · <bold>x</bold> +
<italic>b</italic>
<label>(2)</label>
</disp-formula>
</p>
<p>The sign of this projection reveals the predicted class label. Thus, new samples are mapped into the N-dimensional space and a class is assigned to them depending on which side of the hyperplane they fall on.</p>
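A minimal sketch of this prediction step, Equation (2) followed by the sign test. The weight vector and bias below are made-up values, not a trained model.

```python
# Sketch of SVM prediction: project sample x onto the weight vector w, add
# the bias b (Equation (2)), and read the class label off the sign.
def svm_predict(w, b, x):
    """Return the predicted class label, +1 or -1, for sample x."""
    f = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if f >= 0.0 else -1

# Made-up hyperplane w·x + b = 0 with w = (1, -2), b = 0.5:
label_pos = svm_predict([1.0, -2.0], 0.5, [2.0, 0.0])   # f = 2.5  -> +1
label_neg = svm_predict([1.0, -2.0], 0.5, [0.0, 2.0])   # f = -3.5 -> -1
```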
<p>There are different possible hyperplanes that divide the data space into two subsets. Typically, the maximum margin criterion is used to obtain the hyperplane with the greatest margin of separation
<italic>γ</italic>
(see
<xref ref-type="fig" rid="sensors-15-14241-f001">Figure 1</xref>
). Only the vectors (or samples) positioned on the margin—which are called support vectors and that in
<xref ref-type="fig" rid="sensors-15-14241-f001">Figure 1</xref>
are surrounded by a circle—are necessary to describe this hyperplane.</p>
<p>For a canonical representation of the hyperplane, the constraints
<italic>y
<sub>i</sub>
</italic>
(
<bold>w</bold>
·
<bold>x</bold>
<sub>i</sub>
+
<italic>b</italic>
) ≥ 1 must be met to find the margin γ = 2/||
<bold>w</bold>
||. The maximization of margin γ is equivalent to the minimization of (1/2) ||
<bold>w</bold>
||
<sup>2</sup>
, subject to the same restrictions.</p>
<p>Allowing violations of these constraints involves the introduction of the slack variables ξ
<sub>i</sub>
, giving rise to the so-called soft-margin SVM optimization problem:
<disp-formula id="FD2">
<label>(3)</label>
<mml:math id="mm2">
<mml:mrow>
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mi>min</mml:mi>
</mml:mrow>
</mml:mtd>
<mml:mtd>
<mml:mrow>
<mml:mfrac>
<mml:mn>1</mml:mn>
<mml:mn>2</mml:mn>
</mml:mfrac>
<mml:msup>
<mml:mrow>
<mml:mrow>
<mml:mo></mml:mo>
<mml:mi>w</mml:mi>
<mml:mo></mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mo>+</mml:mo>
<mml:mi>C</mml:mi>
<mml:mstyle displaystyle="true">
<mml:munder>
<mml:mo></mml:mo>
<mml:mi>i</mml:mi>
</mml:munder>
<mml:mrow>
<mml:msub>
<mml:mi>ξ</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mstyle>
</mml:mrow>
</mml:mtd>
<mml:mtd>
<mml:mrow>
<mml:mi>s</mml:mi>
<mml:mo>.</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>.</mml:mo>
</mml:mrow>
</mml:mtd>
<mml:mtd>
<mml:mrow>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi>w</mml:mi>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:mi>b</mml:mi>
<mml:mo stretchy="false">)</mml:mo>
<mml:mo></mml:mo>
<mml:mn>1</mml:mn>
<mml:mo></mml:mo>
<mml:msub>
<mml:mi>ξ</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
</mml:mrow>
</mml:mtd>
<mml:mtd>
<mml:mrow>
<mml:msub>
<mml:mi>ξ</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>></mml:mo>
<mml:mn>0</mml:mn>
<mml:mo></mml:mo>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:math>
</disp-formula>
</p>
<p>C is the regularization parameter: higher values of C impose stronger penalties on constraint violations.</p>
<p>To solve the problem shown in Equation (3), it is rewritten in terms of positive Lagrange multipliers α
<sub>i</sub>
. In this way, it is required to maximize the following expression:
<disp-formula id="FD3">
<label>(4)</label>
<mml:math id="mm3">
<mml:mrow>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>D</mml:mi>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mstyle displaystyle="true">
<mml:munderover>
<mml:mo>∑</mml:mo>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>l</mml:mi>
</mml:munderover>
<mml:mrow>
<mml:msub>
<mml:mi>α</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>−</mml:mo>
<mml:mfrac>
<mml:mn>1</mml:mn>
<mml:mn>2</mml:mn>
</mml:mfrac>
<mml:mstyle displaystyle="true">
<mml:munderover>
<mml:mo>∑</mml:mo>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>j</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>l</mml:mi>
</mml:munderover>
<mml:mrow>
<mml:msub>
<mml:mi>α</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:msub>
<mml:mi>α</mml:mi>
<mml:mi>j</mml:mi>
</mml:msub>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mi>j</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>⋅</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>j</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mstyle>
</mml:mrow>
</mml:mstyle>
</mml:mrow>
</mml:math>
</disp-formula>
subject to restrictions 0 ≤ α
<sub>i</sub>
≤ C and ∑
<sub>i</sub>
α
<sub>i</sub>
y
<sub>i</sub>
= 0, given the relation:
<disp-formula id="FD4">
<label>(5)</label>
<mml:math id="mm4">
<mml:mrow>
<mml:mi>w</mml:mi>
<mml:mo>=</mml:mo>
<mml:mstyle displaystyle="true">
<mml:munderover>
<mml:mo>∑</mml:mo>
<mml:mi>i</mml:mi>
<mml:mrow>
<mml:msub>
<mml:mi>N</mml:mi>
<mml:mi>s</mml:mi>
</mml:msub>
</mml:mrow>
</mml:munderover>
<mml:mrow>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:msub>
<mml:mi>α</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mstyle>
</mml:mrow>
</mml:math>
</disp-formula>
where N<sub>s</sub> denotes the number of resulting support vectors. The discriminant function on which the SVM optimization is based is obtained by substituting
<bold>w</bold>
in Equation (2):
<disp-formula id="FD5">
<label>(6)</label>
<mml:math id="mm5">
<mml:mrow>
<mml:mi>f</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>x</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:mstyle displaystyle="true">
<mml:munderover>
<mml:mo>∑</mml:mo>
<mml:mi>i</mml:mi>
<mml:mrow>
<mml:msub>
<mml:mi>N</mml:mi>
<mml:mi>s</mml:mi>
</mml:msub>
</mml:mrow>
</mml:munderover>
<mml:mrow>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:msub>
<mml:mi>α</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>·</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:mi>b</mml:mi>
</mml:mrow>
</mml:mstyle>
</mml:mrow>
</mml:math>
</disp-formula>
</p>
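The discriminant of Equation (6) can be sketched directly from a set of support vectors. The toy support vectors, labels, multipliers and bias below are hypothetical illustrative values, not a trained model:

```python
# Sketch of the linear SVM discriminant of Equation (6):
# f(x) = sum_i y_i * alpha_i * (x . x_i) + b, over the Ns support vectors.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def svm_decision(x, support_vectors, labels, alphas, b):
    """Evaluate f(x); the predicted class is the sign of the result."""
    return sum(y_i * a_i * dot(x, x_i)
               for x_i, y_i, a_i in zip(support_vectors, labels, alphas)) + b

# Toy model: one support vector on each side of the hyperplane x1 = 0.
sv = [(1.0, 0.0), (-1.0, 0.0)]
ys = [+1, -1]
alphas = [0.5, 0.5]
b = 0.0

print(svm_decision((2.0, 3.0), sv, ys, alphas, b))   # 2.0, positive side
```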
<p>Essentially, an SVM is a two-class classifier but, in practice, problems with
<italic>K</italic>
> 2 classes are very common. In these cases, a multiclass classifier is needed. Several methods combine multiple two-class SVMs to obtain a multiclass classifier. The most widespread are the one-
<italic>versus</italic>
-all and the one-
<italic>versus</italic>
-one [
<xref rid="B19-sensors-15-14241" ref-type="bibr">19</xref>
].</p>
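The one-versus-one combination can be sketched as a vote among the K(K−1)/2 pairwise classifiers. The classifier interface and the toy classifiers below are hypothetical:

```python
from itertools import combinations
from collections import Counter

def one_vs_one_predict(x, binary_classifiers, classes):
    """binary_classifiers[(a, b)] returns a or b for sample x; the class
    collecting the most pairwise votes wins."""
    votes = Counter()
    for a, b in combinations(classes, 2):
        votes[binary_classifiers[(a, b)](x)] += 1
    return votes.most_common(1)[0][0]

# Toy example with 3 classes: every pairwise classifier involving "B" picks "B".
clfs = {
    ("A", "B"): lambda x: "B",
    ("A", "C"): lambda x: "A",
    ("B", "C"): lambda x: "B",
}
print(one_vs_one_predict(None, clfs, ["A", "B", "C"]))  # "B" wins with 2 votes
```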
<p>Training and Validation</p>
<p>The classifier learns from a training set—samples whose class labels are known—and defines a hyperplane. Then, this hyperplane is used to classify the samples from the validation set—whose classes are unknown. After that, the predicted classes are compared with the true classes and the error rate of the classifier is assessed, as shown in
<xref ref-type="fig" rid="sensors-15-14241-f002">Figure 2</xref>
.</p>
<fig id="sensors-15-14241-f002" position="float">
<label>Figure 2</label>
<caption>
<p>Classifier training.</p>
</caption>
<graphic xlink:href="sensors-15-14241-g002"></graphic>
</fig>
<p>The number of available samples is finite (
<italic>N</italic>
samples) and must be divided between the training set and the validation set. In this work, the classification algorithm is trained using the two most common training methods:
<list list-type="bullet">
<list-item>
<p>Leave-One-Out (LOO)</p>
</list-item>
<list-item>
<p>Cross Validation (CV)</p>
</list-item>
</list>
</p>
<p>In LOO [
<xref rid="B20-sensors-15-14241" ref-type="bibr">20</xref>
], training is carried out using
<italic>N</italic>
− 1 samples, and validation is performed on the excluded sample. An error is counted whenever the classification is wrong. This process is repeated
<italic>N</italic>
times, each time excluding a different sample. The total number of errors gives an estimation of the classification error rate.</p>
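The LOO loop can be sketched as follows; the 1-nearest-neighbour learner is only an illustrative stand-in, not the SVM used in this work:

```python
def loo_error_rate(samples, labels, train_fn, predict_fn):
    """Leave-One-Out: train on N-1 samples, test on the held-out one, N times."""
    errors = 0
    n = len(samples)
    for i in range(n):
        train_x = samples[:i] + samples[i + 1:]
        train_y = labels[:i] + labels[i + 1:]
        model = train_fn(train_x, train_y)
        if predict_fn(model, samples[i]) != labels[i]:
            errors += 1
    return errors / n

# Toy 1-nearest-neighbour classifier as the stand-in learner.
def train_1nn(xs, ys):
    return list(zip(xs, ys))

def predict_1nn(model, x):
    return min(model, key=lambda p: abs(p[0] - x))[1]

print(loo_error_rate([1.0, 1.1, 5.0, 5.1], ["a", "a", "b", "b"],
                     train_1nn, predict_1nn))  # 0.0 on this separable toy set
```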
<p>On the other hand, the CV method involves taking the available data samples and dividing them into
<italic>S</italic>
groups (named folds) [
<xref rid="B21-sensors-15-14241" ref-type="bibr">21</xref>
].
<italic>S</italic>
− 1 folds are used to train the model, and the remaining fold is used for validation. This procedure is repeated
<italic>S</italic>
times, taking a different fold each time to validate the model. Finally, the classification error rate is the average of the errors that have been obtained in each of the
<italic>S</italic>
runs. An example of a 5-fold cross-validation (
<italic>S</italic>
= 5) is shown in
<xref ref-type="fig" rid="sensors-15-14241-f003">Figure 3</xref>
, where the fold used to validate is highlighted.</p>
<fig id="sensors-15-14241-f003" position="float">
<label>Figure 3</label>
<caption>
<p>5-fold Cross Validation.</p>
</caption>
<graphic xlink:href="sensors-15-14241-g003"></graphic>
</fig>
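The S-fold procedure of Figure 3 can be sketched as follows, with a toy 1-nearest-neighbour learner standing in for the SVM (LOO is the special case S = N):

```python
def kfold_error_rate(samples, labels, S, train_fn, predict_fn):
    """S-fold cross-validation: each fold is validated once and the
    per-fold error rates are averaged."""
    n = len(samples)
    sizes = [n // S + (1 if i < n % S else 0) for i in range(S)]
    rates, start = [], 0
    for size in sizes:
        stop = start + size
        model = train_fn(samples[:start] + samples[stop:],
                         labels[:start] + labels[stop:])
        wrong = sum(predict_fn(model, x) != y
                    for x, y in zip(samples[start:stop], labels[start:stop]))
        rates.append(wrong / size)
        start = stop
    return sum(rates) / S

# Toy 1-nearest-neighbour learner, for illustration only.
def train_1nn(xs, ys):
    return list(zip(xs, ys))

def predict_1nn(model, x):
    return min(model, key=lambda p: abs(p[0] - x))[1]

samples = [1.0, 1.1, 1.2, 1.3, 1.4, 5.0, 5.1, 5.2, 5.3, 5.4]
labels = ["a"] * 5 + ["b"] * 5
err = kfold_error_rate(samples, labels, 5, train_1nn, predict_1nn)
print(err)  # 0.0 on this well-separated toy set
```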
</sec>
<sec id="sec3-sensors-15-14241">
<title>3. System Description</title>
<p>Based on basic radar/sonar principles [
<xref rid="B22-sensors-15-14241" ref-type="bibr">22</xref>
,
<xref rid="B23-sensors-15-14241" ref-type="bibr">23</xref>
], an acoustic detection and ranging system for biometric identification was proposed [
<xref rid="B24-sensors-15-14241" ref-type="bibr">24</xref>
], according to the block diagram in
<xref ref-type="fig" rid="sensors-15-14241-f004">Figure 4</xref>
.</p>
<fig id="sensors-15-14241-f004" position="float">
<label>Figure 4</label>
<caption>
<p>Functional description block diagram.</p>
</caption>
<graphic xlink:href="sensors-15-14241-g004"></graphic>
</fig>
<p>This system performs four main tasks: (i) subject scanning; (ii) acoustic images acquisition; (iii) images preprocessing and (iv) subject identification, based on classification algorithms.</p>
<sec>
<title>3.1. Acquisition System</title>
<p>The subject is electronically scanned in the azimuth coordinate using two linear arrays. For each steering angle the system performs: (i) transmission beamforming; (ii) reception beamforming and (iii) matched filtering in the range coordinate. After processing all the steering angles, a two-dimensional matrix representing the acoustic image is formed and stored.
<xref ref-type="fig" rid="sensors-15-14241-f005">Figure 5</xref>
shows the block diagram for the acquisition system.</p>
<fig id="sensors-15-14241-f005" position="float">
<label>Figure 5</label>
<caption>
<p>Acquisition system block diagram.</p>
</caption>
<graphic xlink:href="sensors-15-14241-g005"></graphic>
</fig>
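As a rough illustration of the reception beamforming step, a narrowband delay-and-sum beamformer for a uniform linear array can be sketched as below. The array size, spacing and angles are illustrative assumptions, not the system's actual parameters:

```python
import cmath
import math

def steering_weights(n_sensors, d_over_lambda, theta_rad):
    """Narrowband delay-and-sum weights for a uniform linear array."""
    return [cmath.exp(-2j * math.pi * d_over_lambda * k * math.sin(theta_rad))
            for k in range(n_sensors)]

def beamform(snapshot, weights):
    """One beamformer output sample: weighted sum of the sensor snapshot."""
    return sum(w.conjugate() * s for w, s in zip(weights, snapshot))

# A plane wave arriving from 20 degrees; steering to 20 degrees adds coherently.
n, d = 8, 0.5
arrival = steering_weights(n, d, math.radians(20))   # signal phases on the array
on_target = abs(beamform(arrival, steering_weights(n, d, math.radians(20))))
off_target = abs(beamform(arrival, steering_weights(n, d, math.radians(-40))))
print(on_target, off_target)  # on-target gain is n = 8, off-target much smaller
```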
<p>
<xref ref-type="fig" rid="sensors-15-14241-f006">Figure 6</xref>
shows an example of an acoustic image, where the x axis represents the azimuth angle and the y axis the range.</p>
<fig id="sensors-15-14241-f006" position="float">
<label>Figure 6</label>
<caption>
<p>Acoustic image example.</p>
</caption>
<graphic xlink:href="sensors-15-14241-g006"></graphic>
</fig>
<p>Based on the conclusions of previous works [
<xref rid="B18-sensors-15-14241" ref-type="bibr">18</xref>
], a new system that employs P = 3 spatial positions and F = 9 frequencies is defined. This system generates P
<sub>i</sub>
acoustic profiles, associated with subject i and each formed by P·F = 27 images.</p>
<p>The selected positions for the subject under analysis are: front view with arms outstretched (
<italic>p
<sub>1</sub>
</italic>
), back view (
<italic>p
<sub>2</sub>
</italic>
) and side view (
<italic>p
<sub>3</sub>
</italic>
). The nine frequencies are 500 Hz-spaced, from 8 kHz (
<italic>f
<sub>1</sub>
</italic>
) to 12 kHz (
<italic>f
<sub>9</sub>
</italic>
). The number of beams used for each frequency is shown in
<xref ref-type="table" rid="sensors-15-14241-t001">Table 1</xref>
. </p>
<table-wrap id="sensors-15-14241-t001" position="float">
<object-id pub-id-type="pii">sensors-15-14241-t001_Table 1</object-id>
<label>Table 1</label>
<caption>
<p>Number of beams
<italic>vs.</italic>
frequency.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="center" valign="top" rowspan="1" colspan="1">f
<sub>1</sub>
</th>
<th align="center" valign="top" rowspan="1" colspan="1">f
<sub>2</sub>
</th>
<th align="center" valign="top" rowspan="1" colspan="1">f
<sub>3</sub>
</th>
<th align="center" valign="top" rowspan="1" colspan="1">f
<sub>4</sub>
</th>
<th align="center" valign="top" rowspan="1" colspan="1">f
<sub>5</sub>
</th>
<th align="center" valign="top" rowspan="1" colspan="1">f
<sub>6</sub>
</th>
<th align="center" valign="top" rowspan="1" colspan="1">f
<sub>7</sub>
</th>
<th align="center" valign="top" rowspan="1" colspan="1">f
<sub>8</sub>
</th>
<th align="center" valign="top" rowspan="1" colspan="1">f
<sub>9</sub>
</th>
</tr>
</thead>
<tbody>
<tr>
<td align="center" valign="top" rowspan="1" colspan="1">13</td>
<td align="center" valign="top" rowspan="1" colspan="1">15</td>
<td align="center" valign="top" rowspan="1" colspan="1">15</td>
<td align="center" valign="top" rowspan="1" colspan="1">17</td>
<td align="center" valign="top" rowspan="1" colspan="1">17</td>
<td align="center" valign="top" rowspan="1" colspan="1">17</td>
<td align="center" valign="top" rowspan="1" colspan="1">19</td>
<td align="center" valign="top" rowspan="1" colspan="1">19</td>
<td align="center" valign="top" rowspan="1" colspan="1">21</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
<sec id="sec3dot2-sensors-15-14241">
<title>3.2. Preprocessing and Parametrization Techniques</title>
<p>To reduce the dimension of the acoustic profiles, eliminate redundant or non-significant information and thus lower the associated computational burden, several preprocessing and parametrization techniques were evaluated on the acoustic images. The following preprocessing techniques are implemented:
<list list-type="bullet">
<list-item>
<p>spatial filtering</p>
</list-item>
<list-item>
<p>segmentation using Gaussian Mixture Models algorithms</p>
</list-item>
<list-item>
<p>masking</p>
</list-item>
<list-item>
<p>binarization</p>
</list-item>
</list>
</p>
<p>On the other hand, a reduced set of parameters was extracted from the acoustic images in order to characterize them. Two families of algorithms were analyzed:
<list list-type="bullet">
<list-item>
<p>line-based image coding</p>
</list-item>
<list-item>
<p>geometric feature extraction</p>
</list-item>
</list>
</p>
<p>
<xref ref-type="fig" rid="sensors-15-14241-f007">Figure 7</xref>
shows the processing scheme.</p>
<fig id="sensors-15-14241-f007" position="float">
<label>Figure 7</label>
<caption>
<p>Preprocessing and parametrization techniques.</p>
</caption>
<graphic xlink:href="sensors-15-14241-g007"></graphic>
</fig>
<p>First, a spatial filter was implemented to smooth the images, reducing the multimodality of torso echoes and improving the segmentation process [
<xref rid="B10-sensors-15-14241" ref-type="bibr">10</xref>
]. Then, a segmentation algorithm was used to differentiate the pixels associated with the subject from those associated with the background.</p>
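A simple stand-in for the spatial smoothing filter can be sketched as below; the paper does not specify the kernel, so a 3 × 3 moving average with edge clamping is assumed here:

```python
def mean_filter(img):
    """3x3 moving-average smoothing with edge clamping (assumed kernel)."""
    rows, cols = len(img), len(img[0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            vals = [img[rr][cc]
                    for rr in range(max(0, r - 1), min(rows, r + 2))
                    for cc in range(max(0, c - 1), min(cols, c + 2))]
            out[r][c] = sum(vals) / len(vals)
    return out

img = [[0, 0, 0],
       [0, 9, 0],
       [0, 0, 0]]
print(mean_filter(img)[1][1])  # 1.0: the spike is spread over its 3x3 block
```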
<p>The Expectation-Maximization (EM) algorithm is used to fit a Gaussian Mixture Model (GMM) formed by two Gaussians, associated with the foreground and the background, respectively. The pixels associated with the background are zeroed.</p>
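A minimal one-dimensional sketch of this EM fit, treating pixel values as scalar samples and using a naive quartile initialization (an assumption, not the paper's actual implementation):

```python
import math

def em_gmm2(xs, iters=50):
    """EM fit of a two-Gaussian mixture to scalar pixel values
    (foreground/background)."""
    xs = sorted(xs)
    mu = [xs[len(xs) // 4], xs[3 * len(xs) // 4]]   # spread initial means
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each sample
        resp = []
        for x in xs:
            p = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: re-estimate weights, means and variances
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = max(1e-6, sum(r[k] * (x - mu[k]) ** 2
                                   for r, x in zip(resp, xs)) / nk)
    return mu, var, pi

background = [0.1, 0.2, 0.15, 0.05, 0.1, 0.2]
foreground = [5.0, 5.2, 4.9, 5.1, 5.0, 4.8]
mu, var, pi = em_gmm2(background + foreground)
print(sorted(mu))  # one mean near the background level, one near the foreground
```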
<p>The dimensions of the images N × M—where N is the number of rows (dimension in range) and M is the number of columns (dimension in azimuth)—are detailed for each frequency and position in
<xref ref-type="table" rid="sensors-15-14241-t002">Table 2</xref>
. </p>
<table-wrap id="sensors-15-14241-t002" position="float">
<object-id pub-id-type="pii">sensors-15-14241-t002_Table 2</object-id>
<label>Table 2</label>
<caption>
<p>Image sizes.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="center" valign="top" rowspan="1" colspan="1">N × M</th>
<th align="center" valign="top" rowspan="1" colspan="1">f
<sub>1</sub>
</th>
<th align="center" valign="top" rowspan="1" colspan="1">f
<sub>2</sub>
</th>
<th align="center" valign="top" rowspan="1" colspan="1">f
<sub>3</sub>
</th>
<th align="center" valign="top" rowspan="1" colspan="1">f
<sub>4</sub>
</th>
<th align="center" valign="top" rowspan="1" colspan="1">f
<sub>5</sub>
</th>
<th align="center" valign="top" rowspan="1" colspan="1">f
<sub>6</sub>
</th>
<th align="center" valign="top" rowspan="1" colspan="1">f
<sub>7</sub>
</th>
<th align="center" valign="top" rowspan="1" colspan="1">f
<sub>8</sub>
</th>
<th align="center" valign="top" rowspan="1" colspan="1">f
<sub>9</sub>
</th>
</tr>
</thead>
<tbody>
<tr>
<td align="center" valign="top" rowspan="1" colspan="1">
<bold>p
<sub>1</sub>
, p
<sub>2</sub>
, p
<sub>3</sub>
</bold>
</td>
<td align="center" valign="top" rowspan="1" colspan="1">245 × 13</td>
<td align="center" valign="top" rowspan="1" colspan="1">245 × 15</td>
<td align="center" valign="top" rowspan="1" colspan="1">245 × 15</td>
<td align="center" valign="top" rowspan="1" colspan="1">245 × 17</td>
<td align="center" valign="top" rowspan="1" colspan="1">245 × 17</td>
<td align="center" valign="top" rowspan="1" colspan="1">245 × 17</td>
<td align="center" valign="top" rowspan="1" colspan="1">245 × 19</td>
<td align="center" valign="top" rowspan="1" colspan="1">245 × 19</td>
<td align="center" valign="top" rowspan="1" colspan="1">245 × 21</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>The profiles formed by the acoustic images are stored for later processing, so the size of each pixel has to be defined. The final size of the profiles determines the required storage space and is related to the computational burden of the system. In this case, the value of each pixel is stored in memory using B = 32 bits.</p>
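Following Tables 1 and 2, the raw storage of one profile can be checked with a short calculation (P = 3 positions, N = 245 range cells, B = 32 bits per pixel):

```python
# Storage required by one raw acoustic profile, from Tables 1 and 2.
N, B = 245, 32
M_per_freq = [13, 15, 15, 17, 17, 17, 19, 19, 21]   # beams per frequency

pixels_per_position = sum(N * M for M in M_per_freq)
total_bits = 3 * pixels_per_position * B            # P = 3 positions
print(pixels_per_position, total_bits // 8)  # pixels per position, bytes/profile
```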
<p>Using masking techniques, the size of the images is reduced by cropping them to the area that the subjects occupy in the image. A statistical analysis of the acquired images was performed to determine the common area for each position and frequency. The sizes of the images obtained with this technique for each frequency and position are detailed in
<xref ref-type="table" rid="sensors-15-14241-t003">Table 3</xref>
.</p>
<table-wrap id="sensors-15-14241-t003" position="float">
<object-id pub-id-type="pii">sensors-15-14241-t003_Table 3</object-id>
<label>Table 3</label>
<caption>
<p>Masked image sizes.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="center" valign="top" rowspan="1" colspan="1">N × M</th>
<th align="center" valign="top" rowspan="1" colspan="1">f
<sub>1</sub>
</th>
<th align="center" valign="top" rowspan="1" colspan="1">f
<sub>2</sub>
</th>
<th align="center" valign="top" rowspan="1" colspan="1">f
<sub>3</sub>
</th>
<th align="center" valign="top" rowspan="1" colspan="1">f
<sub>4</sub>
</th>
<th align="center" valign="top" rowspan="1" colspan="1">f
<sub>5</sub>
</th>
<th align="center" valign="top" rowspan="1" colspan="1">f
<sub>6</sub>
</th>
<th align="center" valign="top" rowspan="1" colspan="1">f
<sub>7</sub>
</th>
<th align="center" valign="top" rowspan="1" colspan="1">f
<sub>8</sub>
</th>
<th align="center" valign="top" rowspan="1" colspan="1">f
<sub>9</sub>
</th>
</tr>
</thead>
<tbody>
<tr>
<td align="center" valign="top" rowspan="1" colspan="1">
<bold>p
<sub>1</sub>
</bold>
</td>
<td align="center" valign="top" rowspan="1" colspan="1">145 × 13</td>
<td align="center" valign="top" rowspan="1" colspan="1">145 × 15</td>
<td align="center" valign="top" rowspan="1" colspan="1">145 × 15</td>
<td align="center" valign="top" rowspan="1" colspan="1">145 × 17</td>
<td align="center" valign="top" rowspan="1" colspan="1">145 × 17</td>
<td align="center" valign="top" rowspan="1" colspan="1">145 × 17</td>
<td align="center" valign="top" rowspan="1" colspan="1">145 × 19</td>
<td align="center" valign="top" rowspan="1" colspan="1">145 × 19</td>
<td align="center" valign="top" rowspan="1" colspan="1">145 × 21</td>
</tr>
<tr>
<td align="center" valign="top" rowspan="1" colspan="1">
<bold>p
<sub>2</sub>
</bold>
</td>
<td align="center" valign="top" rowspan="1" colspan="1">155 × 11</td>
<td align="center" valign="top" rowspan="1" colspan="1">155 × 11</td>
<td align="center" valign="top" rowspan="1" colspan="1">155 × 11</td>
<td align="center" valign="top" rowspan="1" colspan="1">155 × 11</td>
<td align="center" valign="top" rowspan="1" colspan="1">155 × 11</td>
<td align="center" valign="top" rowspan="1" colspan="1">155 × 11</td>
<td align="center" valign="top" rowspan="1" colspan="1">155 × 11</td>
<td align="center" valign="top" rowspan="1" colspan="1">155 × 11</td>
<td align="center" valign="top" rowspan="1" colspan="1">155 × 11</td>
</tr>
<tr>
<td align="center" valign="top" rowspan="1" colspan="1">
<bold>p
<sub>3</sub>
</bold>
</td>
<td align="center" valign="top" rowspan="1" colspan="1">171 × 9</td>
<td align="center" valign="top" rowspan="1" colspan="1">171 × 9</td>
<td align="center" valign="top" rowspan="1" colspan="1">171 × 9</td>
<td align="center" valign="top" rowspan="1" colspan="1">171 × 9</td>
<td align="center" valign="top" rowspan="1" colspan="1">171 × 9</td>
<td align="center" valign="top" rowspan="1" colspan="1">171 × 9</td>
<td align="center" valign="top" rowspan="1" colspan="1">171 × 9</td>
<td align="center" valign="top" rowspan="1" colspan="1">171 × 9</td>
<td align="center" valign="top" rowspan="1" colspan="1">171 × 9</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>Finally, the value of the pixels—encoded with 32 bits—is reduced to 1 bit by image binarization. This significantly reduces the storage space required. This operation is equivalent to assigning a unit value to the foreground pixels.
<xref ref-type="fig" rid="sensors-15-14241-f008">Figure 8</xref>
illustrates the preprocessing techniques with an original
<xref ref-type="fig" rid="sensors-15-14241-f008">Figure 8</xref>
a, a segmented
<xref ref-type="fig" rid="sensors-15-14241-f008">Figure 8</xref>
b, a masked
<xref ref-type="fig" rid="sensors-15-14241-f008">Figure 8</xref>
c and a binarized
<xref ref-type="fig" rid="sensors-15-14241-f008">Figure 8</xref>
d image.</p>
<fig id="sensors-15-14241-f008" position="float">
<label>Figure 8</label>
<caption>
<p>Pre-processed images: (
<bold>a</bold>
) original; (
<bold>b</bold>
) segmented; (
<bold>c</bold>
) masked; (
<bold>d</bold>
) binarized.</p>
</caption>
<graphic xlink:href="sensors-15-14241-g008"></graphic>
</fig>
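A minimal sketch of the binarization step, assuming segmented images with a zeroed background (the threshold choice is illustrative):

```python
def binarize(img, threshold=0.0):
    """Binarization after segmentation: background pixels are zero, so any
    value above the threshold is foreground and becomes 1."""
    return [[1 if v > threshold else 0 for v in row] for row in img]

segmented = [[0.0, 2.5, 0.0],
             [1.2, 3.1, 0.0]]
binary = binarize(segmented)
print(binary)  # [[0, 1, 0], [1, 1, 0]]
# Each pixel now needs 1 bit instead of B = 32, a 32x storage reduction.
```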
<table-wrap id="sensors-15-14241-t004" position="float">
<object-id pub-id-type="pii">sensors-15-14241-t004_Table 4</object-id>
<label>Table 4</label>
<caption>
<p>Image sizes using Row-based Image Coding.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="center" valign="top" rowspan="1" colspan="1">L</th>
<th align="center" valign="top" rowspan="1" colspan="1">f
<sub>1</sub>
</th>
<th align="center" valign="top" rowspan="1" colspan="1">f
<sub>2</sub>
</th>
<th align="center" valign="top" rowspan="1" colspan="1">f
<sub>3</sub>
</th>
<th align="center" valign="top" rowspan="1" colspan="1">f
<sub>4</sub>
</th>
<th align="center" valign="top" rowspan="1" colspan="1">f
<sub>5</sub>
</th>
<th align="center" valign="top" rowspan="1" colspan="1">f
<sub>6</sub>
</th>
<th align="center" valign="top" rowspan="1" colspan="1">f
<sub>7</sub>
</th>
<th align="center" valign="top" rowspan="1" colspan="1">f
<sub>8</sub>
</th>
<th align="center" valign="top" rowspan="1" colspan="1">f
<sub>9</sub>
</th>
</tr>
</thead>
<tbody>
<tr>
<td align="center" valign="top" rowspan="1" colspan="1">
<bold>p
<sub>1</sub>
</bold>
</td>
<td align="center" valign="top" rowspan="1" colspan="1">145</td>
<td align="center" valign="top" rowspan="1" colspan="1">145</td>
<td align="center" valign="top" rowspan="1" colspan="1">145</td>
<td align="center" valign="top" rowspan="1" colspan="1">145</td>
<td align="center" valign="top" rowspan="1" colspan="1">145</td>
<td align="center" valign="top" rowspan="1" colspan="1">145</td>
<td align="center" valign="top" rowspan="1" colspan="1">145</td>
<td align="center" valign="top" rowspan="1" colspan="1">145</td>
<td align="center" valign="top" rowspan="1" colspan="1">145</td>
</tr>
<tr>
<td align="center" valign="top" rowspan="1" colspan="1">
<bold>p
<sub>2</sub>
</bold>
</td>
<td align="center" valign="top" rowspan="1" colspan="1">155</td>
<td align="center" valign="top" rowspan="1" colspan="1">155</td>
<td align="center" valign="top" rowspan="1" colspan="1">155</td>
<td align="center" valign="top" rowspan="1" colspan="1">155</td>
<td align="center" valign="top" rowspan="1" colspan="1">155</td>
<td align="center" valign="top" rowspan="1" colspan="1">155</td>
<td align="center" valign="top" rowspan="1" colspan="1">155</td>
<td align="center" valign="top" rowspan="1" colspan="1">155</td>
<td align="center" valign="top" rowspan="1" colspan="1">155</td>
</tr>
<tr>
<td align="center" valign="top" rowspan="1" colspan="1">
<bold>p
<sub>3</sub>
</bold>
</td>
<td align="center" valign="top" rowspan="1" colspan="1">171</td>
<td align="center" valign="top" rowspan="1" colspan="1">171</td>
<td align="center" valign="top" rowspan="1" colspan="1">171</td>
<td align="center" valign="top" rowspan="1" colspan="1">171</td>
<td align="center" valign="top" rowspan="1" colspan="1">171</td>
<td align="center" valign="top" rowspan="1" colspan="1">171</td>
<td align="center" valign="top" rowspan="1" colspan="1">171</td>
<td align="center" valign="top" rowspan="1" colspan="1">171</td>
<td align="center" valign="top" rowspan="1" colspan="1">171</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>Starting from the binarized images, two feature extraction techniques were applied, significantly reducing the size of the acoustic images. First, Line-based Image Coding algorithms were analyzed. The images are broken down into a set of lines, which can be rows or columns. For each line, the number of pixels with unit value is encoded. In this way, the size of each image is significantly reduced, from N·M to L, where L is N or M depending on whether encoding is performed by rows or by columns, respectively. The value of each parameter is stored in memory using B = 8 bits. For row coding, the sizes of the images obtained at each position and frequency are shown in
<xref ref-type="table" rid="sensors-15-14241-t004">Table 4</xref>
.</p>
<p>For column coding, the sizes of the images obtained for each position and frequency are detailed in
<xref ref-type="table" rid="sensors-15-14241-t005">Table 5</xref>
.</p>
<table-wrap id="sensors-15-14241-t005" position="float">
<object-id pub-id-type="pii">sensors-15-14241-t005_Table 5</object-id>
<label>Table 5</label>
<caption>
<p>Image sizes using Column-based Image Coding.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="center" valign="middle" rowspan="1" colspan="1">L</th>
<th align="center" valign="middle" rowspan="1" colspan="1">f
<sub>1</sub>
</th>
<th align="center" valign="middle" rowspan="1" colspan="1">f
<sub>2</sub>
</th>
<th align="center" valign="middle" rowspan="1" colspan="1">f
<sub>3</sub>
</th>
<th align="center" valign="middle" rowspan="1" colspan="1">f
<sub>4</sub>
</th>
<th align="center" valign="middle" rowspan="1" colspan="1">f
<sub>5</sub>
</th>
<th align="center" valign="middle" rowspan="1" colspan="1">f
<sub>6</sub>
</th>
<th align="center" valign="middle" rowspan="1" colspan="1">f
<sub>7</sub>
</th>
<th align="center" valign="middle" rowspan="1" colspan="1">f
<sub>8</sub>
</th>
<th align="center" valign="middle" rowspan="1" colspan="1">f
<sub>9</sub>
</th>
</tr>
</thead>
<tbody>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">
<bold>p
<sub>1</sub>
</bold>
</td>
<td align="center" valign="middle" rowspan="1" colspan="1">13</td>
<td align="center" valign="middle" rowspan="1" colspan="1">15</td>
<td align="center" valign="middle" rowspan="1" colspan="1">15</td>
<td align="center" valign="middle" rowspan="1" colspan="1">17</td>
<td align="center" valign="middle" rowspan="1" colspan="1">17</td>
<td align="center" valign="middle" rowspan="1" colspan="1">17</td>
<td align="center" valign="middle" rowspan="1" colspan="1">19</td>
<td align="center" valign="middle" rowspan="1" colspan="1">19</td>
<td align="center" valign="middle" rowspan="1" colspan="1">21</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">
<bold>p
<sub>2</sub>
</bold>
</td>
<td align="center" valign="middle" rowspan="1" colspan="1">11</td>
<td align="center" valign="middle" rowspan="1" colspan="1">11</td>
<td align="center" valign="middle" rowspan="1" colspan="1">11</td>
<td align="center" valign="middle" rowspan="1" colspan="1">11</td>
<td align="center" valign="middle" rowspan="1" colspan="1">11</td>
<td align="center" valign="middle" rowspan="1" colspan="1">11</td>
<td align="center" valign="middle" rowspan="1" colspan="1">11</td>
<td align="center" valign="middle" rowspan="1" colspan="1">11</td>
<td align="center" valign="middle" rowspan="1" colspan="1">11</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">
<bold>p
<sub>3</sub>
</bold>
</td>
<td align="center" valign="middle" rowspan="1" colspan="1">9</td>
<td align="center" valign="middle" rowspan="1" colspan="1">9</td>
<td align="center" valign="middle" rowspan="1" colspan="1">9</td>
<td align="center" valign="middle" rowspan="1" colspan="1">9</td>
<td align="center" valign="middle" rowspan="1" colspan="1">9</td>
<td align="center" valign="middle" rowspan="1" colspan="1">9</td>
<td align="center" valign="middle" rowspan="1" colspan="1">9</td>
<td align="center" valign="middle" rowspan="1" colspan="1">9</td>
<td align="center" valign="middle" rowspan="1" colspan="1">9</td>
</tr>
</tbody>
</table>
</table-wrap>
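The row- and column-based coding just described can be sketched as follows; the 3 × 4 toy binary image is illustrative:

```python
def line_code(binary_img, by_rows=True):
    """Line-based Image Coding: count of unit pixels per row (or per column)."""
    if by_rows:
        return [sum(row) for row in binary_img]
    return [sum(col) for col in zip(*binary_img)]

img = [[0, 1, 1, 0],
       [1, 1, 1, 1],
       [0, 0, 1, 0]]
print(line_code(img))                  # row coding: [2, 4, 1]
print(line_code(img, by_rows=False))   # column coding: [1, 2, 3, 1]
```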
<p>As an example,
<xref ref-type="fig" rid="sensors-15-14241-f009">Figure 9</xref>
represents Line-based Image Coding using rows and columns of an acoustic image of size 6 × 12.</p>
<fig id="sensors-15-14241-f009" position="float">
<label>Figure 9</label>
<caption>
<p>Line-based Image Coding.</p>
</caption>
<graphic xlink:href="sensors-15-14241-g009"></graphic>
</fig>
<p>In
<xref ref-type="fig" rid="sensors-15-14241-f009">Figure 9</xref>
, it can be observed that rows 3 and 4 yield an identical encoding, although the rows themselves differ. To enrich the information of each line and avoid ambiguous encodings, a second parameter that stores the position of the first nonzero pixel in each line is added. With this improvement, the image dimension is doubled. As an example,
<xref ref-type="fig" rid="sensors-15-14241-f010">Figure 10</xref>
represents the new encoding methods for the previous image.</p>
<fig id="sensors-15-14241-f010" position="float">
<label>Figure 10</label>
<caption>
<p>Line-based Image Coding with position.</p>
</caption>
<graphic xlink:href="sensors-15-14241-g010"></graphic>
</fig>
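The count-plus-start-position encoding can be sketched as below; the sentinel value used for empty rows is an assumption, not specified in the text:

```python
def line_code_with_position(binary_img):
    """Per-row (count, start) pairs: unit-pixel count plus the index of the
    first nonzero pixel, which disambiguates rows with equal counts."""
    coded = []
    for row in binary_img:
        count = sum(row)
        start = row.index(1) if count else -1   # -1 marks an empty row
        coded.append((count, start))
    return coded

# Two rows with the same count but different layouts now encode differently.
img = [[0, 1, 1, 0],
       [1, 1, 0, 0]]
print(line_code_with_position(img))  # [(2, 1), (2, 0)]
```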
<p>Secondly, geometric feature extraction algorithms were analyzed. They extract the following properties of the images:
<list list-type="bullet">
<list-item>
<p>Area: A</p>
</list-item>
<list-item>
<p>Centroid: (c
<sub>x</sub>
, c
<sub>y</sub>
)</p>
</list-item>
<list-item>
<p>Perimeter: P</p>
</list-item>
</list>
</p>
<p>In this case, one parameter each for the area and the perimeter, and two parameters for the centroid, are extracted from each image. The value of each parameter is stored in memory using B = 32 bits.
<xref ref-type="fig" rid="sensors-15-14241-f011">Figure 11</xref>
shows the geometric features extracted from
<xref ref-type="fig" rid="sensors-15-14241-f008">Figure 8</xref>
d.</p>
<fig id="sensors-15-14241-f011" position="float">
<label>Figure 11</label>
<caption>
<p>Geometric feature extraction.</p>
</caption>
<graphic xlink:href="sensors-15-14241-g011"></graphic>
</fig>
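A sketch of the three geometric features on a binary image; the discrete perimeter definition used here (foreground pixels with at least one 4-neighbour outside the foreground) is one common choice and may differ from the authors' implementation:

```python
def geometric_features(binary_img):
    """Area, centroid and perimeter of the foreground in a binary image."""
    rows, cols = len(binary_img), len(binary_img[0])
    fg = [(r, c) for r in range(rows) for c in range(cols) if binary_img[r][c]]
    area = len(fg)
    cx = sum(c for _, c in fg) / area     # centroid column
    cy = sum(r for r, _ in fg) / area     # centroid row
    def is_bg(r, c):
        return not (0 <= r < rows and 0 <= c < cols and binary_img[r][c])
    perimeter = sum(1 for r, c in fg
                    if any(is_bg(r + dr, c + dc)
                           for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))))
    return area, (cx, cy), perimeter

img = [[1, 1, 1],
       [1, 1, 1],
       [1, 1, 1]]
print(geometric_features(img))  # area 9, centroid (1.0, 1.0), perimeter 8
```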
</sec>
<sec>
<title>3.3. Classification</title>
<p>In this work, tests based on linear SVM algorithms were performed. A linear SVM was used because, when the number of features is large, mapping the data to a higher-dimensional space may be unnecessary, and the linear kernel is good enough [
<xref rid="B25-sensors-15-14241" ref-type="bibr">25</xref>
]. Besides, although the dimension of the acoustic images is reduced by the preprocessing techniques, it is still too high for SVMs based on a Gaussian kernel, which would increase the processing time and the computational burden without improving the classification error rate.</p>
<p>The classifier was implemented in Matlab using the LIBSVM library, which supports multiclass SVM classification according to the
<italic>one-versus-one</italic>
algorithm [
<xref rid="B26-sensors-15-14241" ref-type="bibr">26</xref>
]. The LOO and CV methods, with 10, 5 and 4 folds, were used for training.</p>
<p>In the linear SVM, the regularization parameter C was set to 5000, since a usual practice is to set C to the range of output values of the SVM algorithm [
<xref rid="B27-sensors-15-14241" ref-type="bibr">27</xref>
],
<italic>i.e.</italic>
, the maximum number of possible errors, which coincides with the total number of samples.</p>
</sec>
</sec>
<sec id="sec4-sensors-15-14241">
<title>4. Analysis of Results</title>
<sec>
<title>4.1. Scenario Definition </title>
<p>This study assumes that the system is used for access control to a laboratory, where only five subjects are authorized. The SVM algorithm must be able to classify the subjects who try to access the laboratory into six different classes:
<list list-type="bullet">
<list-item>
<p>a class for each of the five authorized subjects</p>
</list-item>
<list-item>
<p>a class associated with all other people, who are considered intruders.</p>
</list-item>
</list>
</p>
<p>To evaluate the performance of the SVM classification algorithm in the acoustic biometric system, 5000 profiles were used. They are divided into six classes according to the following distribution:
<list list-type="bullet">
<list-item>
<p>500 acoustic profiles for each of the five authorized people.</p>
</list-item>
<list-item>
<p>2500 acoustic profiles for 25 intruders.</p>
</list-item>
</list>
</p>
<p>In order to have a population sample as general as possible, the subjects whose acoustic images were used have different morphological characteristics, as shown in
<xref ref-type="table" rid="sensors-15-14241-t006">Table 6</xref>
. In this case, unlike previous tests [
<xref rid="B13-sensors-15-14241" ref-type="bibr">13</xref>
,
<xref rid="B18-sensors-15-14241" ref-type="bibr">18</xref>
], the acoustic images of each subject were obtained on different days and with the subjects wearing different clothes. In this way, the system can classify subjects according to who they actually are, without clothing being a distinctive factor.</p>
<table-wrap id="sensors-15-14241-t006" position="float">
<object-id pub-id-type="pii">sensors-15-14241-t006_Table 6</object-id>
<label>Table 6</label>
<caption>
<p>Morphological features.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="center" valign="middle" rowspan="1" colspan="1">Id</th>
<th align="center" valign="middle" rowspan="1" colspan="1"># Signatures</th>
<th align="center" valign="middle" rowspan="1" colspan="1">Gender</th>
<th align="center" valign="middle" rowspan="1" colspan="1">Constitution</th>
<th align="center" valign="middle" rowspan="1" colspan="1">Height</th>
</tr>
</thead>
<tbody>
<tr>
<td colspan="5" align="center" valign="middle" rowspan="1">
<bold>Authorized</bold>
</td>
</tr>
<tr style="border-top: solid thin">
<td align="center" valign="middle" rowspan="1" colspan="1">00</td>
<td align="center" valign="middle" rowspan="1" colspan="1">500</td>
<td align="center" valign="middle" rowspan="1" colspan="1">male</td>
<td align="center" valign="middle" rowspan="1" colspan="1">thin</td>
<td align="center" valign="middle" rowspan="1" colspan="1">average</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">01</td>
<td align="center" valign="middle" rowspan="1" colspan="1">500</td>
<td align="center" valign="middle" rowspan="1" colspan="1">female</td>
<td align="center" valign="middle" rowspan="1" colspan="1">normal</td>
<td align="center" valign="middle" rowspan="1" colspan="1">average</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">02</td>
<td align="center" valign="middle" rowspan="1" colspan="1">500</td>
<td align="center" valign="middle" rowspan="1" colspan="1">female</td>
<td align="center" valign="middle" rowspan="1" colspan="1">normal</td>
<td align="center" valign="middle" rowspan="1" colspan="1">small</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">03</td>
<td align="center" valign="middle" rowspan="1" colspan="1">500</td>
<td align="center" valign="middle" rowspan="1" colspan="1">male</td>
<td align="center" valign="middle" rowspan="1" colspan="1">strong</td>
<td align="center" valign="middle" rowspan="1" colspan="1">tall</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">04</td>
<td align="center" valign="middle" rowspan="1" colspan="1">500</td>
<td align="center" valign="middle" rowspan="1" colspan="1">male</td>
<td align="center" valign="middle" rowspan="1" colspan="1">very strong</td>
<td align="center" valign="middle" rowspan="1" colspan="1">tall</td>
</tr>
<tr style="border-top: solid thin">
<td colspan="5" align="center" valign="middle" rowspan="1">
<bold>Intruders</bold>
</td>
</tr>
<tr style="border-top: solid thin">
<td rowspan="13" align="center" valign="middle" colspan="1">05-29</td>
<td align="center" valign="middle" rowspan="1" colspan="1">125</td>
<td align="center" valign="middle" rowspan="1" colspan="1">male</td>
<td align="center" valign="middle" rowspan="1" colspan="1">strong</td>
<td align="center" valign="middle" rowspan="1" colspan="1">average</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">150</td>
<td align="center" valign="middle" rowspan="1" colspan="1">female</td>
<td align="center" valign="middle" rowspan="1" colspan="1">thin</td>
<td align="center" valign="middle" rowspan="1" colspan="1">average</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">150</td>
<td align="center" valign="middle" rowspan="1" colspan="1">male</td>
<td align="center" valign="middle" rowspan="1" colspan="1">thin</td>
<td align="center" valign="middle" rowspan="1" colspan="1">average</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">300</td>
<td align="center" valign="middle" rowspan="1" colspan="1">female</td>
<td align="center" valign="middle" rowspan="1" colspan="1">normal</td>
<td align="center" valign="middle" rowspan="1" colspan="1">average</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">100</td>
<td align="center" valign="middle" rowspan="1" colspan="1">male</td>
<td align="center" valign="middle" rowspan="1" colspan="1">very strong</td>
<td align="center" valign="middle" rowspan="1" colspan="1">tall</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">125</td>
<td align="center" valign="middle" rowspan="1" colspan="1">male</td>
<td align="center" valign="middle" rowspan="1" colspan="1">normal</td>
<td align="center" valign="middle" rowspan="1" colspan="1">average</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">125</td>
<td align="center" valign="middle" rowspan="1" colspan="1">male</td>
<td align="center" valign="middle" rowspan="1" colspan="1">strong</td>
<td align="center" valign="middle" rowspan="1" colspan="1">tall</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">100</td>
<td align="center" valign="middle" rowspan="1" colspan="1">female</td>
<td align="center" valign="middle" rowspan="1" colspan="1">normal</td>
<td align="center" valign="middle" rowspan="1" colspan="1">small</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">125</td>
<td align="center" valign="middle" rowspan="1" colspan="1">female</td>
<td align="center" valign="middle" rowspan="1" colspan="1">strong</td>
<td align="center" valign="middle" rowspan="1" colspan="1">small</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">125</td>
<td align="center" valign="middle" rowspan="1" colspan="1">female</td>
<td align="center" valign="middle" rowspan="1" colspan="1">strong</td>
<td align="center" valign="middle" rowspan="1" colspan="1">tall</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">325</td>
<td align="center" valign="middle" rowspan="1" colspan="1">male</td>
<td align="center" valign="middle" rowspan="1" colspan="1">strong</td>
<td align="center" valign="middle" rowspan="1" colspan="1">average</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">350</td>
<td align="center" valign="middle" rowspan="1" colspan="1">male</td>
<td align="center" valign="middle" rowspan="1" colspan="1">thin</td>
<td align="center" valign="middle" rowspan="1" colspan="1">small</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">400</td>
<td align="center" valign="middle" rowspan="1" colspan="1">female</td>
<td align="center" valign="middle" rowspan="1" colspan="1">thin</td>
<td align="center" valign="middle" rowspan="1" colspan="1">small</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>Based on this scenario, a set of experiments was conducted to analyze the performance of the proposed classification algorithm using different acoustic profiles.
<xref ref-type="fig" rid="sensors-15-14241-f012">Figure 12</xref>
shows the experiments carried out in this study. The system was tested with raw profiles, preprocessed profiles, binarized profiles, line-encoded profiles and profiles built from extracted geometric features.</p>
<fig id="sensors-15-14241-f012" position="float">
<label>Figure 12</label>
<caption>
<p>Experiments.</p>
</caption>
<graphic xlink:href="sensors-15-14241-g012"></graphic>
</fig>
</sec>
<sec>
<title>4.2. Raw Profiles</title>
<p>These acoustic profiles use raw images, without preprocessing. Each profile has a size on the order of 3.5 × 10
<sup>6</sup>
; this value is obtained from Equation (7):
<disp-formula id="FD6">
<label>(7)</label>
<mml:math id="mm6">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mtext>size</mml:mtext>
</mml:mrow>
<mml:mrow>
<mml:mtext>profile</mml:mtext>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mi>B</mml:mi>
<mml:mo>·</mml:mo>
<mml:munderover>
<mml:mstyle mathsize="140%" displaystyle="true">
<mml:mo></mml:mo>
</mml:mstyle>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>P</mml:mi>
</mml:munderover>
<mml:msub>
<mml:mi>N</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>·</mml:mo>
<mml:munderover>
<mml:mstyle mathsize="140%" displaystyle="true">
<mml:mo></mml:mo>
</mml:mstyle>
<mml:mrow>
<mml:mi>j</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>F</mml:mi>
</mml:munderover>
<mml:msub>
<mml:mi>M</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</disp-formula>
where B is the number of bits used to store the value of each pixel of the acoustic images, P is the number of positions used in the system, N
<sub>i</sub>
is the number of rows for each position, F is the number of frequencies and M
<sub>i,j</sub>
is the number of columns for each position and frequency. The specific values of these variables are shown in
<xref ref-type="sec" rid="sec3dot2-sensors-15-14241">Section 3.2</xref>
.</p>
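<p>Equation (7) translates directly into code. The sketch below uses hypothetical placeholder values for B, N and M; the system's actual values are given in Section 3.2.</p>

```python
# Sketch of Equation (7): storage size of a raw acoustic profile.
# B, N and M are illustrative placeholders, not the paper's values.
def profile_size(B, N, M):
    """B: bits per pixel; N[i]: rows at position i;
    M[i][j]: columns at position i and frequency j."""
    return B * sum(N[i] * sum(M[i]) for i in range(len(N)))

B = 8                              # bits per pixel (assumed)
N = [40, 50]                       # rows per position (P = 2, assumed)
M = [[30, 30, 30], [25, 25, 25]]   # columns per position/frequency (F = 3)
print(profile_size(B, N, M))  # 8 * (40*90 + 50*75) = 58800
```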
<p>In this test, an average classification error rate of 0.46% with a standard deviation of 0.120 was obtained. Comparing the error rate obtained using SVMs, which represents an Equal Error Rate (EER), with the EER obtained with the classifier based on mean squared error (MSE), the classification error rate was reduced significantly,
<italic>i.e.</italic>
, from 4% [
<xref rid="B18-sensors-15-14241" ref-type="bibr">18</xref>
] to 0.46%.</p>
</sec>
<sec>
<title>4.3. Preprocessed Profiles</title>
<p>In this case, raw profiles were first filtered, then segmented via GMM and, finally, masked, as explained in
<xref ref-type="sec" rid="sec3dot2-sensors-15-14241">Section 3.2</xref>
. Now, each preprocessed profile has a storage size on the order of 1.6 × 10
<sup>6</sup>
. This value was obtained using Equation (7).</p>
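<p>The GMM segmentation step can be illustrated with a toy example. The paper's exact formulation is described in Section 3.2; the sketch below is a generic stand-in that fits a 2-component mixture to pixel intensities and takes the brighter component as the subject.</p>

```python
# Toy sketch of GMM-based foreground/background segmentation; the
# synthetic image and the use of scikit-learn are assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
image = rng.normal(0.1, 0.05, size=(60, 40))   # synthetic background
image[15:45, 10:30] += 0.8                     # synthetic "person" region

gmm = GaussianMixture(n_components=2, random_state=0)
labels = gmm.fit_predict(image.reshape(-1, 1)).reshape(image.shape)

fg = labels == np.argmax(gmm.means_.ravel())   # brighter component
print(fg.sum())  # roughly the 30 x 20 = 600 subject pixels
```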
<p>A mean classification error rate of 0.46% with a standard deviation of 0.121 was obtained. Comparing these results with those obtained using raw profiles, it can be observed that when the profile size is reduced (which also reduces the computational burden), the error rate does not change. This shows that the eliminated data does not provide relevant information for the classification task.</p>
<p>After that, the preprocessed profiles were binarized. Each profile now has a size on the order of 4 × 10
<sup>5</sup>
. In this case, an error rate of 0.75% with a standard deviation of 0.255 was obtained. Compared with the preprocessed profiles without binarization, the size reduction slightly increased the classification error rate, although proportionally far less than the reduction in size.</p>
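<p>The binarization step itself is a simple per-pixel threshold; a minimal sketch, with an assumed threshold value chosen only for illustration:</p>

```python
# Minimal binarization sketch: keep only the subject's silhouette by
# thresholding each pixel to one bit. The threshold is an assumption.
import numpy as np

def binarize(img, threshold):
    return (img > threshold).astype(np.uint8)

img = np.array([[0.0, 0.6], [0.9, 0.2]])
print(binarize(img, 0.5))  # rows: [0 1] and [1 0]
```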
<p>This indicates that the most relevant data in the profiles relate to the shape of the subjects, not to the specific values of their pixels. Working with binarized profiles is therefore the natural next step towards size reduction.
<xref ref-type="table" rid="sensors-15-14241-t007">Table 7</xref>
summarizes the results obtained in these tests, corresponding to the classification based on raw, preprocessed and binarized acoustic profiles.</p>
<table-wrap id="sensors-15-14241-t007" position="float">
<object-id pub-id-type="pii">sensors-15-14241-t007_Table 7</object-id>
<label>Table 7</label>
<caption>
<p>Classification error rates for raw, preprocessed and binarized acoustic profiles.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" valign="top" rowspan="1" colspan="1">Acoustic Profile</th>
<th align="center" valign="top" rowspan="1" colspan="1">Error Rate</th>
<th align="center" valign="top" rowspan="1" colspan="1">σ (Error Rate)</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top" rowspan="1" colspan="1">
<bold>Raw</bold>
</td>
<td align="center" valign="top" rowspan="1" colspan="1">0.46%</td>
<td align="center" valign="top" rowspan="1" colspan="1">0.120</td>
</tr>
<tr>
<td align="left" valign="top" rowspan="1" colspan="1">
<bold>Preprocessed</bold>
</td>
<td align="center" valign="top" rowspan="1" colspan="1">0.46%</td>
<td align="center" valign="top" rowspan="1" colspan="1">0.121</td>
</tr>
<tr>
<td align="left" valign="top" rowspan="1" colspan="1">
<bold>Binarized</bold>
</td>
<td align="center" valign="top" rowspan="1" colspan="1">0.75%</td>
<td align="center" valign="top" rowspan="1" colspan="1">0.255</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
<sec>
<title>4.4. Parameter Extraction</title>
<sec>
<title>Line-Based Image Coding</title>
<p>First, line-based image coding was performed using the lengths of the rows or columns of the acoustic images. The algorithm was then improved by also encoding the initial position of each row or column. The size of the acoustic profiles obtained through this line coding is calculated using Equation (8):
<disp-formula id="FD7">
<label>(8)</label>
<mml:math id="mm7">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mtext>size</mml:mtext>
</mml:mrow>
<mml:mrow>
<mml:mtext>profile</mml:mtext>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mi>B</mml:mi>
<mml:mo>·</mml:mo>
<mml:mi>K</mml:mi>
<mml:mo>·</mml:mo>
<mml:munderover>
<mml:mstyle mathsize="140%" displaystyle="true">
<mml:mo></mml:mo>
</mml:mstyle>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>P</mml:mi>
</mml:munderover>
<mml:munderover>
<mml:mstyle mathsize="140%" displaystyle="true">
<mml:mo></mml:mo>
</mml:mstyle>
<mml:mrow>
<mml:mi>j</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>F</mml:mi>
</mml:munderover>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</disp-formula>
where B is the number of bits used to store the value of each parameter, K is the number of parameters used to encode each line (the possible values are shown in
<xref ref-type="table" rid="sensors-15-14241-t008">Table 8</xref>
), P and F are the number of positions and frequencies, respectively, and L
<sub>i,j</sub>
is the number of lines,
<italic>i.e.</italic>
, rows, columns or the sum of both. The specific values of these variables are shown in
<xref ref-type="sec" rid="sec3dot2-sensors-15-14241">Section 3.2</xref>
.</p>
<table-wrap id="sensors-15-14241-t008" position="float">
<object-id pub-id-type="pii">sensors-15-14241-t008_Table 8</object-id>
<label>Table 8</label>
<caption>
<p>Number of parameters per line.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" valign="top" rowspan="1" colspan="1">Line-Based Image Coding</th>
<th align="center" valign="top" rowspan="1" colspan="1">K</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top" rowspan="1" colspan="1">
<bold>Based on Line Length</bold>
</td>
<td align="center" valign="top" rowspan="1" colspan="1">1</td>
</tr>
<tr>
<td align="left" valign="top" rowspan="1" colspan="1">
<bold>Based on Line Length and Position</bold>
</td>
<td align="center" valign="top" rowspan="1" colspan="1">2</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>In the first case, profiles have sizes on the order of 3 × 10
<sup>4</sup>
, 2.6 × 10
<sup>3</sup>
 and 3.6 × 10
<sup>4</sup>
 when rows, columns, or both rows and columns are encoded, respectively. The results obtained in these tests are shown in
<xref ref-type="table" rid="sensors-15-14241-t009">Table 9</xref>
. Line coding based on length considerably reduces the size of the acoustic profile, although the classification error rate increases. Even so, the error remains acceptably low. The lowest error rate is obtained when both rows and columns are encoded, with a mean value of 1.43% and a standard deviation of 0.390.</p>
<table-wrap id="sensors-15-14241-t009" position="float">
<object-id pub-id-type="pii">sensors-15-14241-t009_Table 9</object-id>
<label>Table 9</label>
<caption>
<p>Classification error rates for line coding based on line length.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" valign="top" rowspan="1" colspan="1">Line Coding</th>
<th align="center" valign="top" rowspan="1" colspan="1">Error Rate</th>
<th align="center" valign="top" rowspan="1" colspan="1">σ (Error Rate)</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top" rowspan="1" colspan="1">
<bold>Row</bold>
</td>
<td align="center" valign="top" rowspan="1" colspan="1">1.93%</td>
<td align="center" valign="top" rowspan="1" colspan="1">0.498</td>
</tr>
<tr>
<td align="left" valign="top" rowspan="1" colspan="1">
<bold>Column</bold>
</td>
<td align="center" valign="top" rowspan="1" colspan="1">1.97%</td>
<td align="center" valign="top" rowspan="1" colspan="1">0.546</td>
</tr>
<tr>
<td align="left" valign="top" rowspan="1" colspan="1">
<bold>Row + Column</bold>
</td>
<td align="center" valign="top" rowspan="1" colspan="1">1.43%</td>
<td align="center" valign="top" rowspan="1" colspan="1">0.390</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>In the second case, profiles are twice as large as those in the first case, since the initial position of each line is also encoded. These sizes are on the order of 6.8 × 10
<sup>4</sup>
, 5.3 × 10
<sup>3</sup>
 and 7.3 × 10
<sup>4</sup>
 when rows, columns, or both rows and columns are encoded, respectively. In this case, although the profile dimension increases, the classification error rate decreases. If both rows and columns are encoded according to length and position, an error rate of 0.46% is obtained, the same value as with raw profiles. The obtained results are shown in
<xref ref-type="table" rid="sensors-15-14241-t010">Table 10</xref>
.</p>
<table-wrap id="sensors-15-14241-t010" position="float">
<object-id pub-id-type="pii">sensors-15-14241-t010_Table 10</object-id>
<label>Table 10</label>
<caption>
<p>Classification error rates for line coding based on length and position.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" valign="top" rowspan="1" colspan="1">Line Coding</th>
<th align="center" valign="top" rowspan="1" colspan="1">Error Rate</th>
<th align="center" valign="top" rowspan="1" colspan="1">σ (Error Rate)</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top" rowspan="1" colspan="1">
<bold>Row + Position</bold>
</td>
<td align="center" valign="top" rowspan="1" colspan="1">1.47%</td>
<td align="center" valign="top" rowspan="1" colspan="1">0.407</td>
</tr>
<tr>
<td align="left" valign="top" rowspan="1" colspan="1">
<bold>Column + Position</bold>
</td>
<td align="center" valign="top" rowspan="1" colspan="1">1.86%</td>
<td align="center" valign="top" rowspan="1" colspan="1">0.391</td>
</tr>
<tr>
<td align="left" valign="top" rowspan="1" colspan="1">
<bold>Row + Column + Position</bold>
</td>
<td align="center" valign="top" rowspan="1" colspan="1">0.46%</td>
<td align="center" valign="top" rowspan="1" colspan="1">0.061</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
</sec>
<sec>
<title>4.5. Geometric Feature Extraction</title>
<p>For this set of tests, the size of the profiles obtained by extracting geometric features of the acoustic images is calculated using Equation (9):
<disp-formula id="FD8">
<label>(9)</label>
<mml:math id="mm8">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mtext>size</mml:mtext>
</mml:mrow>
<mml:mrow>
<mml:mtext>profile</mml:mtext>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mi>B</mml:mi>
<mml:mo>·</mml:mo>
<mml:mi>K</mml:mi>
<mml:mo>·</mml:mo>
<mml:mi>P</mml:mi>
<mml:mo>·</mml:mo>
<mml:mi>F</mml:mi>
</mml:mrow>
</mml:math>
</disp-formula>
</p>
<p>where B is the number of bits needed to store the value of each parameter, K is the number of extracted features, and P and F are the number of positions and frequencies, respectively.</p>
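<p>The three geometric features can be computed from a binarized silhouette as in the sketch below. The exact definitions used in the paper may differ; here the perimeter counts foreground pixels that touch the background in the 4-neighbourhood, an illustrative choice.</p>

```python
# Illustrative computation of area, centroid and perimeter from a 0/1
# silhouette image; definitions are assumptions for this sketch.
import numpy as np

def geometric_features(binary_img):
    ys, xs = np.nonzero(binary_img)
    area = ys.size
    centroid = (float(ys.mean()), float(xs.mean()))
    # A foreground pixel lies on the perimeter if any 4-neighbour is 0.
    padded = np.pad(binary_img, 1)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    perimeter = int((binary_img & ~interior.astype(bool)).sum())
    return area, centroid, perimeter

img = np.zeros((5, 5), dtype=np.uint8)
img[1:4, 1:4] = 1  # a 3x3 square silhouette
print(geometric_features(img))  # (9, (2.0, 2.0), 8)
```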
<p>The sizes of these acoustic profiles are on the order of 8.6 × 10
<sup>2</sup>
 if the area or the perimeter is used as the geometric feature, and 1.7 × 10
<sup>3</sup>
 if the centroid is employed. Using geometric features reduces the profile size, but the classification error rate increases excessively, as shown in
<xref ref-type="table" rid="sensors-15-14241-t011">Table 11</xref>
. The obtained error rates are between 11% and 15%. These values are considerably higher than the reference error rate of 4% [
<xref rid="B18-sensors-15-14241" ref-type="bibr">18</xref>
]. </p>
<table-wrap id="sensors-15-14241-t011" position="float">
<object-id pub-id-type="pii">sensors-15-14241-t011_Table 11</object-id>
<label>Table 11</label>
<caption>
<p>Classification error rates using geometric features.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" valign="top" rowspan="1" colspan="1">Geometric Parameters</th>
<th align="center" valign="top" rowspan="1" colspan="1">Error Rate</th>
<th align="center" valign="top" rowspan="1" colspan="1">σ (Error Rate)</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top" rowspan="1" colspan="1">
<bold>Area</bold>
</td>
<td align="center" valign="top" rowspan="1" colspan="1">11.07%</td>
<td align="center" valign="top" rowspan="1" colspan="1">0.297</td>
</tr>
<tr>
<td align="left" valign="top" rowspan="1" colspan="1">
<bold>Centroid</bold>
</td>
<td align="center" valign="top" rowspan="1" colspan="1">12.04%</td>
<td align="center" valign="top" rowspan="1" colspan="1">0.310</td>
</tr>
<tr>
<td align="left" valign="top" rowspan="1" colspan="1">
<bold>Perimeter</bold>
</td>
<td align="center" valign="top" rowspan="1" colspan="1">15.04%</td>
<td align="center" valign="top" rowspan="1" colspan="1">0.325</td>
</tr>
<tr style="border-top: solid thin">
<td align="left" valign="top" rowspan="1" colspan="1">
<bold>Area + Centroid + Perimeter</bold>
</td>
<td align="center" valign="top" rowspan="1" colspan="1">6.86%</td>
<td align="center" valign="top" rowspan="1" colspan="1">0.430</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>To reduce the error rate, the three features were combined to form new profiles. In this case, although the profile size increases, the obtained error rate is reduced to 6.86%, as
<xref ref-type="table" rid="sensors-15-14241-t011">Table 11</xref>
shows. However, this error rate is still above the reference value.</p>
</sec>
<sec>
<title>4.6. Results Discussion</title>
<p>
<xref ref-type="fig" rid="sensors-15-14241-f013">Figure 13</xref>
shows the classification error rate obtained for each test, and
<xref ref-type="fig" rid="sensors-15-14241-f014">Figure 14</xref>
shows the corresponding computational burden. This burden is calculated as the product of the number of support vectors used by the SVM classifier and their size.</p>
<fig id="sensors-15-14241-f013" position="float">
<label>Figure 13</label>
<caption>
<p>Classification error rates.</p>
</caption>
<graphic xlink:href="sensors-15-14241-g013"></graphic>
</fig>
<fig id="sensors-15-14241-f014" position="float">
<label>Figure 14</label>
<caption>
<p>Computational burden.</p>
</caption>
<graphic xlink:href="sensors-15-14241-g014"></graphic>
</fig>
<p>In order to analyze these parameters and their relationship, classification error and computational burden sensitivities were defined as follows:
<list list-type="bullet">
<list-item>
<p>Classification error sensitivity
<disp-formula id="FD9">
<label>(10)</label>
<mml:math id="mm9">
<mml:mrow>
<mml:msub>
<mml:mtext>S</mml:mtext>
<mml:mtext>e</mml:mtext>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>e</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mi>p</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>c</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>s</mml:mi>
<mml:mi>s</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>d</mml:mi>
<mml:mo>_</mml:mo>
<mml:mi>e</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>r</mml:mi>
<mml:mi>a</mml:mi>
<mml:mi>w</mml:mi>
<mml:mo>_</mml:mo>
<mml:mi>e</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:math>
</disp-formula>
with
<italic>raw</italic>
_
<italic>error</italic>
as the error using raw profiles.</p>
</list-item>
<list-item>
<p>Computational burden sensitivity
<disp-formula id="FD10">
<label>(11)</label>
<mml:math id="mm10">
<mml:mrow>
<mml:msub>
<mml:mtext>S</mml:mtext>
<mml:mtext>b</mml:mtext>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>e</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mi>p</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>c</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>s</mml:mi>
<mml:mi>s</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>d</mml:mi>
<mml:mo>_</mml:mo>
<mml:mi>b</mml:mi>
<mml:mi>u</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>d</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>n</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>r</mml:mi>
<mml:mi>a</mml:mi>
<mml:mi>w</mml:mi>
<mml:mo>_</mml:mo>
<mml:mi>b</mml:mi>
<mml:mi>u</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>d</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:math>
</disp-formula>
with
<italic>raw_burden</italic>
as the burden using raw profiles.</p>
</list-item>
</list>
</p>
<p>Error sensitivity shows how the error rate increases as the profile size is reduced; likewise, burden sensitivity shows how the computational burden decreases. Given that S
<sub>b</sub>
values are always lower than 1, 1/S
<sub>b</sub>
has been analyzed in order to compare both sensitivities in a similar way. Sensitivity values are shown in
<xref ref-type="table" rid="sensors-15-14241-t012">Table 12</xref>
.</p>
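<p>Equations (10) and (11) can be evaluated directly. The sketch below uses the binarized-profile figures from Tables 7 and 12 (error 0.75% vs. raw 0.46%, relative burden 1/7.85, with the raw burden normalized to 1); the small difference from the tabulated S<sub>e</sub> = 1.62 presumably comes from unrounded error values.</p>

```python
# Sketch of Equations (10) and (11): sensitivities relative to the raw
# profiles. Inputs are the paper's binarized-profile numbers; the
# normalization of raw_burden to 1 is an illustrative convention.
def sensitivities(error, burden, raw_error, raw_burden):
    s_e = error / raw_error      # Equation (10)
    s_b = burden / raw_burden    # Equation (11)
    return s_e, 1.0 / s_b

s_e, inv_s_b = sensitivities(0.75, 1.0 / 7.85, 0.46, 1.0)
print(round(s_e, 2), round(inv_s_b, 2))  # prints 1.63 7.85
```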
<table-wrap id="sensors-15-14241-t012" position="float">
<object-id pub-id-type="pii">sensors-15-14241-t012_Table 12</object-id>
<label>Table 12</label>
<caption>
<p>Classification error and computational burden sensitivities.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="center" valign="middle" rowspan="1" colspan="1"></th>
<th align="center" valign="middle" rowspan="1" colspan="1">S
<sub>e</sub>
</th>
<th align="center" valign="middle" rowspan="1" colspan="1">1/S
<sub>b</sub>
</th>
</tr>
</thead>
<tbody>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">
<bold>Raw Profiles</bold>
</td>
<td align="center" valign="middle" rowspan="1" colspan="1">1.00</td>
<td align="center" valign="middle" rowspan="1" colspan="1">1.00</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">
<bold>Preprocessed</bold>
</td>
<td align="center" valign="middle" rowspan="1" colspan="1">1.00</td>
<td align="center" valign="middle" rowspan="1" colspan="1">2.13</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">
<bold>Binarized</bold>
</td>
<td align="center" valign="middle" rowspan="1" colspan="1">1.62</td>
<td align="center" valign="middle" rowspan="1" colspan="1">7.85</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">
<bold>Row</bold>
</td>
<td align="center" valign="middle" rowspan="1" colspan="1">4.17</td>
<td align="center" valign="middle" rowspan="1" colspan="1">109.72</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">
<bold>Column</bold>
</td>
<td align="center" valign="middle" rowspan="1" colspan="1">4.25</td>
<td align="center" valign="middle" rowspan="1" colspan="1">1297.28</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">
<bold>Row + Column</bold>
</td>
<td align="center" valign="middle" rowspan="1" colspan="1">3.09</td>
<td align="center" valign="middle" rowspan="1" colspan="1">95.02</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">
<bold>Row + Position</bold>
</td>
<td align="center" valign="middle" rowspan="1" colspan="1">3.17</td>
<td align="center" valign="middle" rowspan="1" colspan="1">51.43</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">
<bold>Col. + Position</bold>
</td>
<td align="center" valign="middle" rowspan="1" colspan="1">4.02</td>
<td align="center" valign="middle" rowspan="1" colspan="1">599.31</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">
<bold>Row + Col. + Pos.</bold>
</td>
<td align="center" valign="middle" rowspan="1" colspan="1">1.00</td>
<td align="center" valign="middle" rowspan="1" colspan="1">43.90</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">
<bold>Area</bold>
</td>
<td align="center" valign="middle" rowspan="1" colspan="1">23.94</td>
<td align="center" valign="middle" rowspan="1" colspan="1">5581.96</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">
<bold>Centroid</bold>
</td>
<td align="center" valign="middle" rowspan="1" colspan="1">26.02</td>
<td align="center" valign="middle" rowspan="1" colspan="1">2883.10</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">
<bold>Perimeter</bold>
</td>
<td align="center" valign="middle" rowspan="1" colspan="1">32.51</td>
<td align="center" valign="middle" rowspan="1" colspan="1">5242.42</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">
<bold>Area + Cent. + Peri.</bold>
</td>
<td align="center" valign="middle" rowspan="1" colspan="1">14.83</td>
<td align="center" valign="middle" rowspan="1" colspan="1">1382.36</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>
<xref ref-type="fig" rid="sensors-15-14241-f015">Figure 15</xref>
shows 1/S
<sub>b</sub>
versus S
<sub>e</sub>
in order to evaluate the relationship between error increment and burden reduction.</p>
<fig id="sensors-15-14241-f015" position="float">
<label>Figure 15</label>
<caption>
<p>Error increment vs. burden reduction.</p>
</caption>
<graphic xlink:href="sensors-15-14241-g015"></graphic>
</fig>
<p>The dashed line in
<xref ref-type="fig" rid="sensors-15-14241-f015">Figure 15</xref>
represents a reference S
<sub>e</sub>
value of 9.30, which corresponds to the reference error rate of 4% obtained in previous works. Algorithms whose error sensitivity exceeds this reference value should therefore not be considered. The use of geometric features of the acoustic images reduces the computational burden by a factor close to 1400, but the classification error grows by a factor of about 15, so these features must be discarded.</p>
<p>The optimal working area, located in the lower right part of
<xref ref-type="fig" rid="sensors-15-14241-f015">Figure 15</xref>
, corresponds to a large reduction of the computational burden (high 1/S
<sub>b</sub>
values) and a small increase of the classification error rate (low S
<sub>e</sub>
values). However, reducing the computational burden means reducing the amount of data that represents the acoustic profile, which may eliminate information relevant to the biometric classification and thus increase the classification error.</p>
<p>On the other hand, algorithms that eliminate only non-relevant information from the acoustic profiles reduce the computational burden without increasing the classification error rate. The preprocessing algorithm and the line coding algorithm based on the length and position of rows and columns yield the same classification error rate as raw profiles, while reducing the computational burden by factors of 2.13 and 43.90, respectively. Therefore, the line coding algorithm based on length and position of rows and columns shows the best performance among the assessed algorithms.</p>
</sec>
</sec>
<sec id="sec5-sensors-15-14241">
<title>5. Conclusions</title>
<p>An innovative biometric system, which significantly improves the performance of previous systems developed by the research group, is presented in this paper. This improvement has been achieved by increasing the number of analyzed frequencies, reducing the number of scanning positions, applying preprocessing techniques to the acoustic images and using SVM algorithms for the classification task. The reliability and robustness of the system were improved by employing a large set of additional subjects acting as intruders, by increasing the number of acoustic profiles captured for each subject and by varying the clothes worn during the tests so that clothing does not affect the classification.</p>
<p>It has been verified that, as the size of the acoustic profiles decreases through the preprocessing techniques, the classification error increases, because relevant information is removed. However, the line coding algorithm based on the length and position of rows and columns reduces the computational burden by several orders of magnitude without increasing the classification error rate; in this case, the profile information removed by the algorithm is not relevant to the classifier. Thus, this preprocessing algorithm has been selected for the improved biometric system.</p>
<p>On the other hand, the fact that this line coding algorithm is based on binarized images shows that the information relevant to the classifier is associated with the contour of the image. Finally, it was observed that the geometric features extracted from the acoustic images do not provide enough information for the classifier.</p>
<p>Our research group is currently working on improving the biometric system by using bidimensional arrays, employing new algorithms based on Gaussian Mixture Models and creating a large database of acoustic profiles.</p>
</sec>
</body>
<back>
<ack>
<title>Acknowledgments</title>
<p>The authors thank Anibal Figueiras for his technical support with Support Vector Machines.</p>
</ack>
<notes>
<title>Author Contributions</title>
<p>All authors contributed equally to the reported research and to the writing of this paper.</p>
</notes>
<notes>
<title>Conflicts of Interest</title>
<p>The authors declare no conflict of interest.</p>
</notes>
<ref-list>
<title>References</title>
<ref id="B1-sensors-15-14241">
<label>1.</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Jain</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Bolle</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Pankanti</surname>
<given-names>S.</given-names>
</name>
</person-group>
<source>Introduction to Biometrics</source>
<edition>1st ed.</edition>
<publisher-name>Springer</publisher-name>
<publisher-loc>New York, NY, USA</publisher-loc>
<year>1996</year>
<fpage>1</fpage>
<lpage>41</lpage>
</element-citation>
</ref>
<ref id="B2-sensors-15-14241">
<label>2.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Crispin</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Maffett</surname>
<given-names>A.</given-names>
</name>
</person-group>
<article-title>Radar cross-section estimation for complex shapes</article-title>
<source>Proc. IEEE</source>
<year>1965</year>
<volume>53</volume>
<fpage>972</fpage>
<lpage>982</lpage>
<pub-id pub-id-type="doi">10.1109/PROC.1965.4076</pub-id>
</element-citation>
</ref>
<ref id="B3-sensors-15-14241">
<label>3.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Neubauer</surname>
<given-names>W.</given-names>
</name>
</person-group>
<article-title>A summation formula for use in determining the reflection from irregular bodies</article-title>
<source>J. Acoust. Soc. Am.</source>
<year>1963</year>
<volume>35</volume>
<fpage>279</fpage>
<lpage>285</lpage>
<pub-id pub-id-type="doi">10.1121/1.1918450</pub-id>
</element-citation>
</ref>
<ref id="B4-sensors-15-14241">
<label>4.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Baker</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Vespe</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Jones</surname>
<given-names>G.</given-names>
</name>
</person-group>
<article-title>50 million years of waveform design</article-title>
<source>Proceedings of Forum on Engineering and Technology</source>
<conf-loc>London, UK</conf-loc>
<conf-date>22 November 2006</conf-date>
<fpage>7</fpage>
<lpage>21</lpage>
</element-citation>
</ref>
<ref id="B5-sensors-15-14241">
<label>5.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Balleri</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Woodbridge</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Baker</surname>
<given-names>C.J.</given-names>
</name>
<name>
<surname>Holderied</surname>
<given-names>M.W.</given-names>
</name>
</person-group>
<article-title>Flower Classification by bats: Radar comparisons</article-title>
<source>IEEE Aerosp. Electron. Syst. Mag.</source>
<year>2009</year>
<volume>5</volume>
<fpage>4</fpage>
<lpage>7</lpage>
<pub-id pub-id-type="doi">10.1109/MAES.2009.5109946</pub-id>
</element-citation>
</ref>
<ref id="B6-sensors-15-14241">
<label>6.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Helversen</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Holderied</surname>
<given-names>M.W.</given-names>
</name>
<name>
<surname>Helversen</surname>
<given-names>O.</given-names>
</name>
</person-group>
<article-title>Echoes of bat-pollinated bell-shaped flowers: Conspicuous for nectar-feeding bats</article-title>
<source>J. Exp. Biol.</source>
<year>2003</year>
<volume>6</volume>
<fpage>1025</fpage>
<lpage>1034</lpage>
<pub-id pub-id-type="doi">10.1242/jeb.00203</pub-id>
</element-citation>
</ref>
<ref id="B7-sensors-15-14241">
<label>7.</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Chevalier</surname>
<given-names>L.F.</given-names>
</name>
</person-group>
<source>Principles of Radar and Sonar Signal Processing</source>
<edition>1st ed.</edition>
<publisher-name>Artech House</publisher-name>
<publisher-loc>Boston, MA, USA</publisher-loc>
<year>2002</year>
</element-citation>
</ref>
<ref id="B8-sensors-15-14241">
<label>8.</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Ricker</surname>
<given-names>D.W.</given-names>
</name>
</person-group>
<source>Echo Signal Processing</source>
<edition>1st ed.</edition>
<publisher-name>Kluwer</publisher-name>
<publisher-loc>Dordrecht, The Netherlands</publisher-loc>
<year>2003</year>
</element-citation>
</ref>
<ref id="B9-sensors-15-14241">
<label>9.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Moebus</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Zoubir</surname>
<given-names>A.M.</given-names>
</name>
</person-group>
<article-title>Three-dimensional ultrasound imaging in air using a 2D array on a fixed platform</article-title>
<source>Proceedings of IEEE International Conference on Acoustic, Speech and Signal Processing</source>
<conf-loc>Honolulu, HI, USA</conf-loc>
<conf-date>15–20 April 2007</conf-date>
<fpage>961</fpage>
<lpage>964</lpage>
</element-citation>
</ref>
<ref id="B10-sensors-15-14241">
<label>10.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Moebus</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Zoubir</surname>
<given-names>A.M.</given-names>
</name>
</person-group>
<article-title>Parameterization of acoustic images for the detection of human presence by mobile platforms</article-title>
<source>Proceedings of IEEE International Conference on Acoustic, Speech and Signal Processing</source>
<conf-loc>Dallas, TX, USA</conf-loc>
<conf-date>14–19 March 2010</conf-date>
<fpage>3538</fpage>
<lpage>3541</lpage>
</element-citation>
</ref>
<ref id="B11-sensors-15-14241">
<label>11.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Duran</surname>
<given-names>J.D.</given-names>
</name>
<name>
<surname>Fuente</surname>
<given-names>A.I.</given-names>
</name>
<name>
<surname>Calvo</surname>
<given-names>J.J.V.</given-names>
</name>
</person-group>
<article-title>Multisensorial modular system of monitoring and tracking with information fusion techniques and neural networks</article-title>
<source>Proceedings of IEEE International Carnahan Conference on Security Technology</source>
<conf-loc>Madrid, Spain</conf-loc>
<conf-date>5–7 October 1999</conf-date>
<fpage>59</fpage>
<lpage>66</lpage>
</element-citation>
</ref>
<ref id="B12-sensors-15-14241">
<label>12.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Izquierdo-Fuente</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Villacorta-Calvo</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Raboso-Mateos</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Martinez-Arribas</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Rodriguez-Merino</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>del Val-Puente</surname>
<given-names>L.</given-names>
</name>
</person-group>
<article-title>A human classification system for a video-acoustic detection platform</article-title>
<source>Proceedings of International Carnahan Conference on Security Technology</source>
<conf-loc>Albuquerque, NM, USA</conf-loc>
<conf-date>12–15 October 2004</conf-date>
<fpage>145</fpage>
<lpage>152</lpage>
</element-citation>
</ref>
<ref id="B13-sensors-15-14241">
<label>13.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Izquierdo-Fuente</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>del Val-Puente</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Jiménez-Gómez</surname>
<given-names>M.I.</given-names>
</name>
<name>
<surname>Villacorta-Calvo</surname>
<given-names>J.</given-names>
</name>
</person-group>
<article-title>Performance evaluation of a biometric system based on acoustic images</article-title>
<source>Sensors</source>
<year>2011</year>
<volume>11</volume>
<fpage>9499</fpage>
<lpage>9519</lpage>
<pub-id pub-id-type="doi">10.3390/s111009499</pub-id>
<pub-id pub-id-type="pmid">22163708</pub-id>
</element-citation>
</ref>
<ref id="B14-sensors-15-14241">
<label>14.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Jain</surname>
<given-names>A.K.</given-names>
</name>
<name>
<surname>Nandakumar</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Ross</surname>
<given-names>A.</given-names>
</name>
</person-group>
<article-title>Score normalization in multimodal biometric systems</article-title>
<source>Pattern Recogn.</source>
<year>2005</year>
<volume>38</volume>
<fpage>2270</fpage>
<lpage>2285</lpage>
<pub-id pub-id-type="doi">10.1016/j.patcog.2005.01.012</pub-id>
</element-citation>
</ref>
<ref id="B15-sensors-15-14241">
<label>15.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lee</surname>
<given-names>E.C.</given-names>
</name>
<name>
<surname>Jung</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Kim</surname>
<given-names>D.</given-names>
</name>
</person-group>
<article-title>New finger biometric method using near infrared imaging</article-title>
<source>Sensors</source>
<year>2011</year>
<volume>11</volume>
<fpage>2319</fpage>
<lpage>2333</lpage>
<pub-id pub-id-type="doi">10.3390/s110302319</pub-id>
<pub-id pub-id-type="pmid">22163741</pub-id>
</element-citation>
</ref>
<ref id="B16-sensors-15-14241">
<label>16.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lee</surname>
<given-names>E.C.</given-names>
</name>
<name>
<surname>Park</surname>
<given-names>K.R.</given-names>
</name>
</person-group>
<article-title>Image restoration of skin scattering and optical blurring for finger vein recognition</article-title>
<source>Opt. Laser Eng.</source>
<year>2011</year>
<volume>49</volume>
<fpage>816</fpage>
<lpage>828</lpage>
<pub-id pub-id-type="doi">10.1016/j.optlaseng.2011.03.004</pub-id>
</element-citation>
</ref>
<ref id="B17-sensors-15-14241">
<label>17.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Hamdy</surname>
<given-names>O.</given-names>
</name>
<name>
<surname>Traoré</surname>
<given-names>I.</given-names>
</name>
</person-group>
<article-title>Cognitive-based biometrics system for static user authentication</article-title>
<source>Proceedings of 4th International Conference on Internet Monitoring and Protection</source>
<conf-loc>Venice, Italy</conf-loc>
<conf-date>24–28 May 2009</conf-date>
<fpage>90</fpage>
<lpage>97</lpage>
</element-citation>
</ref>
<ref id="B18-sensors-15-14241">
<label>18.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Izquierdo-Fuente</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>del Val-Puente</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Villacorta-Calvo</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Raboso-Mateos</surname>
<given-names>M.</given-names>
</name>
</person-group>
<article-title>Optimization of a biometric system based on acoustic images</article-title>
<source>Sci. World J.</source>
<year>2014</year>
<volume>2014</volume>
<pub-id pub-id-type="doi">10.1155/2014/780835</pub-id>
<pub-id pub-id-type="pmid">24616643</pub-id>
</element-citation>
</ref>
<ref id="B19-sensors-15-14241">
<label>19.</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Cristianini</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Shawe-Taylor</surname>
<given-names>J.</given-names>
</name>
</person-group>
<source>Support Vector Machines and Other Kernel-Based Learning Methods</source>
<edition>1st ed.</edition>
<publisher-name>Cambridge University Press</publisher-name>
<publisher-loc>Cambridge, UK</publisher-loc>
<year>2000</year>
</element-citation>
</ref>
<ref id="B20-sensors-15-14241">
<label>20.</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Theodoridis</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Koutroumbas</surname>
<given-names>K.</given-names>
</name>
</person-group>
<source>Pattern Recognition</source>
<edition>1st ed.</edition>
<publisher-name>Academic Press</publisher-name>
<publisher-loc>Melbourne, Australia</publisher-loc>
<year>2008</year>
</element-citation>
</ref>
<ref id="B21-sensors-15-14241">
<label>21.</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Bishop</surname>
<given-names>C.M.</given-names>
</name>
</person-group>
<source>Pattern Recognition and Machine Learning</source>
<edition>1st ed.</edition>
<publisher-name>Springer</publisher-name>
<publisher-loc>New York, NY, USA</publisher-loc>
<year>2006</year>
</element-citation>
</ref>
<ref id="B22-sensors-15-14241">
<label>22.</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Skolnik</surname>
<given-names>M.I.</given-names>
</name>
</person-group>
<source>Introduction to Radar Systems</source>
<edition>3rd ed.</edition>
<publisher-name>McGraw Hill</publisher-name>
<publisher-loc>New York, NY, USA</publisher-loc>
<year>2001</year>
</element-citation>
</ref>
<ref id="B23-sensors-15-14241">
<label>23.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Izquierdo-Fuente</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Villacorta-Calvo</surname>
<given-names>J.J.</given-names>
</name>
<name>
<surname>Val-Puente</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Jiménez-Gomez</surname>
<given-names>M.I.</given-names>
</name>
</person-group>
<article-title>A simple methodology of calibration for sensor arrays for acoustical radar system</article-title>
<source>Proceedings of 118th Convention Audio Engineering Society</source>
<conf-loc>Barcelona, Spain</conf-loc>
<conf-date>28–31 May 2005</conf-date>
</element-citation>
</ref>
<ref id="B24-sensors-15-14241">
<label>24.</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Wirth</surname>
<given-names>W.D.</given-names>
</name>
</person-group>
<article-title>Radar Techniques Using Array Antennas</article-title>
<source>IEE Radar, Sonar, Navigation and Avionics Series 10</source>
<edition>1st ed.</edition>
<publisher-name>The Institution of Electrical Engineers</publisher-name>
<publisher-loc>London, UK</publisher-loc>
<year>2001</year>
</element-citation>
</ref>
<ref id="B25-sensors-15-14241">
<label>25.</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Hsu</surname>
<given-names>C.W.</given-names>
</name>
<name>
<surname>Chang</surname>
<given-names>C.C.</given-names>
</name>
<name>
<surname>Lin</surname>
<given-names>C.J.</given-names>
</name>
</person-group>
<source>A Practical Guide to Support Vector Classification. Technical Report</source>
<publisher-name>Department of Computer Science, National Taiwan University</publisher-name>
<publisher-loc>Taipei, Taiwan</publisher-loc>
<year>2003</year>
</element-citation>
</ref>
<ref id="B26-sensors-15-14241">
<label>26.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Chang</surname>
<given-names>C.C.</given-names>
</name>
<name>
<surname>Lin</surname>
<given-names>C.J.</given-names>
</name>
</person-group>
<article-title>LIBSVM: A library for support vector machines</article-title>
<source>ACM Trans. Intell. Syst. Technol.</source>
<year>2011</year>
<volume>2</volume>
<fpage>1</fpage>
<lpage>27</lpage>
<pub-id pub-id-type="doi">10.1145/1961189.1961199</pub-id>
</element-citation>
</ref>
<ref id="B27-sensors-15-14241">
<label>27.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Cherkassky</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Ma</surname>
<given-names>Y.</given-names>
</name>
</person-group>
<article-title>Practical selection of SVM parameters and noise estimation for SVM regression</article-title>
<source>Neural Netw.</source>
<year>2004</year>
<volume>17</volume>
<fpage>113</fpage>
<lpage>126</lpage>
<pub-id pub-id-type="doi">10.1016/S0893-6080(03)00169-2</pub-id>
<pub-id pub-id-type="pmid">14690712</pub-id>
</element-citation>
</ref>
</ref-list>
</back>
</pmc>
</record>
