Server on medical data and libraries in the Maghreb (final version)

Warning: this site is under development!
Warning: this site is generated by automated means from raw corpora.
The information is therefore not validated.

Internal identifier: 000317 (Pmc/Corpus); previous: 000316; next: 000318



The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Face Recognition Systems: A Survey</title>
<author>
<name sortKey="Kortli, Yassin" sort="Kortli, Yassin" uniqKey="Kortli Y" first="Yassin" last="Kortli">Yassin Kortli</name>
<affiliation>
<nlm:aff id="af1-sensors-20-00342">AI-ED Department, Yncrea Ouest, 20 rue du Cuirassé de Bretagne, 29200 Brest, France;
<email>maher.jridi@isen-ouest.yncrea.fr</email>
(M.J.);
<email>ayman.alfalou@isen-ouest.yncrea.fr</email>
(A.A.F.)</nlm:aff>
</affiliation>
<affiliation>
<nlm:aff id="af2-sensors-20-00342">Electronic and Micro-electronic Laboratory, Faculty of Sciences of Monastir, University of Monastir, Monastir 5000, Tunisia</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Jridi, Maher" sort="Jridi, Maher" uniqKey="Jridi M" first="Maher" last="Jridi">Maher Jridi</name>
<affiliation>
<nlm:aff id="af1-sensors-20-00342">AI-ED Department, Yncrea Ouest, 20 rue du Cuirassé de Bretagne, 29200 Brest, France;
<email>maher.jridi@isen-ouest.yncrea.fr</email>
(M.J.);
<email>ayman.alfalou@isen-ouest.yncrea.fr</email>
(A.A.F.)</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Al Falou, Ayman" sort="Al Falou, Ayman" uniqKey="Al Falou A" first="Ayman" last="Al Falou">Ayman Al Falou</name>
<affiliation>
<nlm:aff id="af1-sensors-20-00342">AI-ED Department, Yncrea Ouest, 20 rue du Cuirassé de Bretagne, 29200 Brest, France;
<email>maher.jridi@isen-ouest.yncrea.fr</email>
(M.J.);
<email>ayman.alfalou@isen-ouest.yncrea.fr</email>
(A.A.F.)</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Atri, Mohamed" sort="Atri, Mohamed" uniqKey="Atri M" first="Mohamed" last="Atri">Mohamed Atri</name>
<affiliation>
<nlm:aff id="af3-sensors-20-00342">College of Computer Science, King Khalid University, Abha 61421, Saudi Arabia;
<email>matri@kku.edu.sa</email>
</nlm:aff>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">31936089</idno>
<idno type="pmc">7013584</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7013584</idno>
<idno type="RBID">PMC:7013584</idno>
<idno type="doi">10.3390/s20020342</idno>
<date when="2020">2020</date>
<idno type="wicri:Area/Pmc/Corpus">000317</idno>
<idno type="wicri:explorRef" wicri:stream="Pmc" wicri:step="Corpus" wicri:corpus="PMC">000317</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Face Recognition Systems: A Survey</title>
<author>
<name sortKey="Kortli, Yassin" sort="Kortli, Yassin" uniqKey="Kortli Y" first="Yassin" last="Kortli">Yassin Kortli</name>
<affiliation>
<nlm:aff id="af1-sensors-20-00342">AI-ED Department, Yncrea Ouest, 20 rue du Cuirassé de Bretagne, 29200 Brest, France;
<email>maher.jridi@isen-ouest.yncrea.fr</email>
(M.J.);
<email>ayman.alfalou@isen-ouest.yncrea.fr</email>
(A.A.F.)</nlm:aff>
</affiliation>
<affiliation>
<nlm:aff id="af2-sensors-20-00342">Electronic and Micro-electronic Laboratory, Faculty of Sciences of Monastir, University of Monastir, Monastir 5000, Tunisia</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Jridi, Maher" sort="Jridi, Maher" uniqKey="Jridi M" first="Maher" last="Jridi">Maher Jridi</name>
<affiliation>
<nlm:aff id="af1-sensors-20-00342">AI-ED Department, Yncrea Ouest, 20 rue du Cuirassé de Bretagne, 29200 Brest, France;
<email>maher.jridi@isen-ouest.yncrea.fr</email>
(M.J.);
<email>ayman.alfalou@isen-ouest.yncrea.fr</email>
(A.A.F.)</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Al Falou, Ayman" sort="Al Falou, Ayman" uniqKey="Al Falou A" first="Ayman" last="Al Falou">Ayman Al Falou</name>
<affiliation>
<nlm:aff id="af1-sensors-20-00342">AI-ED Department, Yncrea Ouest, 20 rue du Cuirassé de Bretagne, 29200 Brest, France;
<email>maher.jridi@isen-ouest.yncrea.fr</email>
(M.J.);
<email>ayman.alfalou@isen-ouest.yncrea.fr</email>
(A.A.F.)</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Atri, Mohamed" sort="Atri, Mohamed" uniqKey="Atri M" first="Mohamed" last="Atri">Mohamed Atri</name>
<affiliation>
<nlm:aff id="af3-sensors-20-00342">College of Computer Science, King Khalid University, Abha 61421, Saudi Arabia;
<email>matri@kku.edu.sa</email>
</nlm:aff>
</affiliation>
</author>
</analytic>
<series>
<title level="j">Sensors (Basel, Switzerland)</title>
<idno type="eISSN">1424-8220</idno>
<imprint>
<date when="2020">2020</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>Over the past few decades, interest in theories and algorithms for face recognition has grown rapidly. Video surveillance, criminal identification, building access control, and unmanned and autonomous vehicles are just a few examples of concrete applications that are gaining traction in industry. Various techniques are being developed, including local, holistic, and hybrid approaches, which describe a face image using either a few selected facial features or the face as a whole. The main contribution of this survey is to review well-known techniques for each approach and to give a taxonomy of their categories. A detailed comparison between these techniques is presented by listing the advantages and disadvantages of their schemes in terms of robustness, accuracy, complexity, and discrimination. Another point addressed in the paper is the databases used for face recognition: an overview of the most commonly used databases, including those for supervised and unsupervised learning, is given. Numerical results of the most interesting techniques are reported along with the experimental context and the challenges handled by these techniques. Finally, the paper provides a thorough discussion of future directions in terms of techniques to be used for face recognition.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Liao, S" uniqKey="Liao S">S. Liao</name>
</author>
<author>
<name sortKey="Jain, A K" uniqKey="Jain A">A.K. Jain</name>
</author>
<author>
<name sortKey="Li, S Z" uniqKey="Li S">S.Z. Li</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Jridi, M" uniqKey="Jridi M">M. Jridi</name>
</author>
<author>
<name sortKey="Napoleon, T" uniqKey="Napoleon T">T. Napoléon</name>
</author>
<author>
<name sortKey="Alfalou, A" uniqKey="Alfalou A">A. Alfalou</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Napoleon, T" uniqKey="Napoleon T">T. Napoléon</name>
</author>
<author>
<name sortKey="Alfalou, A" uniqKey="Alfalou A">A. Alfalou</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ouerhani, Y" uniqKey="Ouerhani Y">Y. Ouerhani</name>
</author>
<author>
<name sortKey="Jridi, M" uniqKey="Jridi M">M. Jridi</name>
</author>
<author>
<name sortKey="Alfalou, A" uniqKey="Alfalou A">A. Alfalou</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Yang, W" uniqKey="Yang W">W. Yang</name>
</author>
<author>
<name sortKey="Wang, S" uniqKey="Wang S">S. Wang</name>
</author>
<author>
<name sortKey="Hu, J" uniqKey="Hu J">J. Hu</name>
</author>
<author>
<name sortKey="Zheng, G" uniqKey="Zheng G">G. Zheng</name>
</author>
<author>
<name sortKey="Valli, C" uniqKey="Valli C">C. Valli</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Patel, N P" uniqKey="Patel N">N.P. Patel</name>
</author>
<author>
<name sortKey="Kale, A" uniqKey="Kale A">A. Kale</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wang, Q" uniqKey="Wang Q">Q. Wang</name>
</author>
<author>
<name sortKey="Alfalou, A" uniqKey="Alfalou A">A. Alfalou</name>
</author>
<author>
<name sortKey="Brosseau, C" uniqKey="Brosseau C">C. Brosseau</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Alfalou, A" uniqKey="Alfalou A">A. Alfalou</name>
</author>
<author>
<name sortKey="Brosseau, C" uniqKey="Brosseau C">C. Brosseau</name>
</author>
<author>
<name sortKey="Kaddah, W" uniqKey="Kaddah W">W. Kaddah</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zhao, C" uniqKey="Zhao C">C. Zhao</name>
</author>
<author>
<name sortKey="Li, X" uniqKey="Li X">X. Li</name>
</author>
<author>
<name sortKey="Cang, Y" uniqKey="Cang Y">Y. Cang</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hajirassouliha, A" uniqKey="Hajirassouliha A">A. HajiRassouliha</name>
</author>
<author>
<name sortKey="Gamage, T P B" uniqKey="Gamage T">T.P.B. Gamage</name>
</author>
<author>
<name sortKey="Parker, M D" uniqKey="Parker M">M.D. Parker</name>
</author>
<author>
<name sortKey="Nash, M P" uniqKey="Nash M">M.P. Nash</name>
</author>
<author>
<name sortKey="Taberner, A J" uniqKey="Taberner A">A.J. Taberner</name>
</author>
<author>
<name sortKey="Nielsen, P M" uniqKey="Nielsen P">P.M. Nielsen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kortli, Y" uniqKey="Kortli Y">Y. Kortli</name>
</author>
<author>
<name sortKey="Jridi, M" uniqKey="Jridi M">M. Jridi</name>
</author>
<author>
<name sortKey="Al Falou, A" uniqKey="Al Falou A">A. Al Falou</name>
</author>
<author>
<name sortKey="Atri, M" uniqKey="Atri M">M. Atri</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Dehai, Z" uniqKey="Dehai Z">Z. Dehai</name>
</author>
<author>
<name sortKey="Da, D" uniqKey="Da D">D. Da</name>
</author>
<author>
<name sortKey="Jin, L" uniqKey="Jin L">L. Jin</name>
</author>
<author>
<name sortKey="Qing, L" uniqKey="Qing L">L. Qing</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ouerhani, Y" uniqKey="Ouerhani Y">Y. Ouerhani</name>
</author>
<author>
<name sortKey="Alfalou, A" uniqKey="Alfalou A">A. Alfalou</name>
</author>
<author>
<name sortKey="Brosseau, C" uniqKey="Brosseau C">C. Brosseau</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Liu, W" uniqKey="Liu W">W. Liu</name>
</author>
<author>
<name sortKey="Wang, Z" uniqKey="Wang Z">Z. Wang</name>
</author>
<author>
<name sortKey="Liu, X" uniqKey="Liu X">X. Liu</name>
</author>
<author>
<name sortKey="Zeng, N" uniqKey="Zeng N">N. Zeng</name>
</author>
<author>
<name sortKey="Liu, Y" uniqKey="Liu Y">Y. Liu</name>
</author>
<author>
<name sortKey="Alsaadi, F E" uniqKey="Alsaadi F">F.E. Alsaadi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Xi, M" uniqKey="Xi M">M. Xi</name>
</author>
<author>
<name sortKey="Chen, L" uniqKey="Chen L">L. Chen</name>
</author>
<author>
<name sortKey="Polajnar, D" uniqKey="Polajnar D">D. Polajnar</name>
</author>
<author>
<name sortKey="Tong, W" uniqKey="Tong W">W. Tong</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ojala, T" uniqKey="Ojala T">T. Ojala</name>
</author>
<author>
<name sortKey="Pietik Inen, M" uniqKey="Pietik Inen M">M. Pietikäinen</name>
</author>
<author>
<name sortKey="Harwood, D" uniqKey="Harwood D">D. Harwood</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gowda, H D S" uniqKey="Gowda H">H.D.S. Gowda</name>
</author>
<author>
<name sortKey="Kumar, G H" uniqKey="Kumar G">G.H. Kumar</name>
</author>
<author>
<name sortKey="Imran, M" uniqKey="Imran M">M. Imran</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ouerhani, Y" uniqKey="Ouerhani Y">Y. Ouerhani</name>
</author>
<author>
<name sortKey="Jridi, M" uniqKey="Jridi M">M. Jridi</name>
</author>
<author>
<name sortKey="Alfalou, A" uniqKey="Alfalou A">A. Alfalou</name>
</author>
<author>
<name sortKey="Brosseau, C" uniqKey="Brosseau C">C. Brosseau</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mousa Pasandi, M E" uniqKey="Mousa Pasandi M">M.E. Mousa Pasandi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Khoi, P" uniqKey="Khoi P">P. Khoi</name>
</author>
<author>
<name sortKey="Thien, L H" uniqKey="Thien L">L.H. Thien</name>
</author>
<author>
<name sortKey="Viet, V H" uniqKey="Viet V">V.H. Viet</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zeppelzauer, M" uniqKey="Zeppelzauer M">M. Zeppelzauer</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Parmar, D N" uniqKey="Parmar D">D.N. Parmar</name>
</author>
<author>
<name sortKey="Mehta, B B" uniqKey="Mehta B">B.B. Mehta</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Vinay, A" uniqKey="Vinay A">A. Vinay</name>
</author>
<author>
<name sortKey="Hebbar, D" uniqKey="Hebbar D">D. Hebbar</name>
</author>
<author>
<name sortKey="Shekhar, V S" uniqKey="Shekhar V">V.S. Shekhar</name>
</author>
<author>
<name sortKey="Murthy, K B" uniqKey="Murthy K">K.B. Murthy</name>
</author>
<author>
<name sortKey="Natarajan, S" uniqKey="Natarajan S">S. Natarajan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Yang, H" uniqKey="Yang H">H. Yang</name>
</author>
<author>
<name sortKey="Wang, X A" uniqKey="Wang X">X.A. Wang</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Viola, P" uniqKey="Viola P">P. Viola</name>
</author>
<author>
<name sortKey="Jones, M" uniqKey="Jones M">M. Jones</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rettkowski, J" uniqKey="Rettkowski J">J. Rettkowski</name>
</author>
<author>
<name sortKey="Boutros, A" uniqKey="Boutros A">A. Boutros</name>
</author>
<author>
<name sortKey="Gohringer, D" uniqKey="Gohringer D">D. Göhringer</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Seo, H J" uniqKey="Seo H">H.J. Seo</name>
</author>
<author>
<name sortKey="Milanfar, P" uniqKey="Milanfar P">P. Milanfar</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Shah, J H" uniqKey="Shah J">J.H. Shah</name>
</author>
<author>
<name sortKey="Sharif, M" uniqKey="Sharif M">M. Sharif</name>
</author>
<author>
<name sortKey="Raza, M" uniqKey="Raza M">M. Raza</name>
</author>
<author>
<name sortKey="Azeem, A" uniqKey="Azeem A">A. Azeem</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Du, G" uniqKey="Du G">G. Du</name>
</author>
<author>
<name sortKey="Su, F" uniqKey="Su F">F. Su</name>
</author>
<author>
<name sortKey="Cai, A" uniqKey="Cai A">A. Cai</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Calonder, M" uniqKey="Calonder M">M. Calonder</name>
</author>
<author>
<name sortKey="Lepetit, V" uniqKey="Lepetit V">V. Lepetit</name>
</author>
<author>
<name sortKey="Ozuysal, M" uniqKey="Ozuysal M">M. Ozuysal</name>
</author>
<author>
<name sortKey="Trzcinski, T" uniqKey="Trzcinski T">T. Trzcinski</name>
</author>
<author>
<name sortKey="Strecha, C" uniqKey="Strecha C">C. Strecha</name>
</author>
<author>
<name sortKey="Fua, P" uniqKey="Fua P">P. Fua</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Smach, F" uniqKey="Smach F">F. Smach</name>
</author>
<author>
<name sortKey="Miteran, J" uniqKey="Miteran J">J. Miteran</name>
</author>
<author>
<name sortKey="Atri, M" uniqKey="Atri M">M. Atri</name>
</author>
<author>
<name sortKey="Dubois, J" uniqKey="Dubois J">J. Dubois</name>
</author>
<author>
<name sortKey="Abid, M" uniqKey="Abid M">M. Abid</name>
</author>
<author>
<name sortKey="Gauthier, J P" uniqKey="Gauthier J">J.P. Gauthier</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kortli, Y" uniqKey="Kortli Y">Y. Kortli</name>
</author>
<author>
<name sortKey="Jridi, M" uniqKey="Jridi M">M. Jridi</name>
</author>
<author>
<name sortKey="Al Falou, A" uniqKey="Al Falou A">A. Al Falou</name>
</author>
<author>
<name sortKey="Atri, M" uniqKey="Atri M">M. Atri</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wang, Q" uniqKey="Wang Q">Q. Wang</name>
</author>
<author>
<name sortKey="Xiong, D" uniqKey="Xiong D">D. Xiong</name>
</author>
<author>
<name sortKey="Alfalou, A" uniqKey="Alfalou A">A. Alfalou</name>
</author>
<author>
<name sortKey="Brosseau, C" uniqKey="Brosseau C">C. Brosseau</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Turk, M" uniqKey="Turk M">M. Turk</name>
</author>
<author>
<name sortKey="Pentland, A" uniqKey="Pentland A">A. Pentland</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Annalakshmi, M" uniqKey="Annalakshmi M">M. Annalakshmi</name>
</author>
<author>
<name sortKey="Roomi, S M M" uniqKey="Roomi S">S.M.M. Roomi</name>
</author>
<author>
<name sortKey="Naveedh, A S" uniqKey="Naveedh A">A.S. Naveedh</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hussain, S U" uniqKey="Hussain S">S.U. Hussain</name>
</author>
<author>
<name sortKey="Napoleon, T" uniqKey="Napoleon T">T. Napoléon</name>
</author>
<author>
<name sortKey="Jurie, F" uniqKey="Jurie F">F. Jurie</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Alfalou, A" uniqKey="Alfalou A">A. Alfalou</name>
</author>
<author>
<name sortKey="Brosseau, C" uniqKey="Brosseau C">C. Brosseau</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Napoleon, T" uniqKey="Napoleon T">T. Napoléon</name>
</author>
<author>
<name sortKey="Alfalou, A" uniqKey="Alfalou A">A. Alfalou</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schroff, F" uniqKey="Schroff F">F. Schroff</name>
</author>
<author>
<name sortKey="Kalenichenko, D" uniqKey="Kalenichenko D">D. Kalenichenko</name>
</author>
<author>
<name sortKey="Philbin, J" uniqKey="Philbin J">J. Philbin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kambi Beli, I" uniqKey="Kambi Beli I">I. Kambi Beli</name>
</author>
<author>
<name sortKey="Guo, C" uniqKey="Guo C">C. Guo</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Benarab, D" uniqKey="Benarab D">D. Benarab</name>
</author>
<author>
<name sortKey="Napoleon, T" uniqKey="Napoleon T">T. Napoléon</name>
</author>
<author>
<name sortKey="Alfalou, A" uniqKey="Alfalou A">A. Alfalou</name>
</author>
<author>
<name sortKey="Verney, A" uniqKey="Verney A">A. Verney</name>
</author>
<author>
<name sortKey="Hellard, P" uniqKey="Hellard P">P. Hellard</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bonnen, K" uniqKey="Bonnen K">K. Bonnen</name>
</author>
<author>
<name sortKey="Klare, B F" uniqKey="Klare B">B.F. Klare</name>
</author>
<author>
<name sortKey="Jain, A K" uniqKey="Jain A">A.K. Jain</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ren, J" uniqKey="Ren J">J. Ren</name>
</author>
<author>
<name sortKey="Jiang, X" uniqKey="Jiang X">X. Jiang</name>
</author>
<author>
<name sortKey="Yuan, J" uniqKey="Yuan J">J. Yuan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Karaaba, M" uniqKey="Karaaba M">M. Karaaba</name>
</author>
<author>
<name sortKey="Surinta, O" uniqKey="Surinta O">O. Surinta</name>
</author>
<author>
<name sortKey="Schomaker, L" uniqKey="Schomaker L">L. Schomaker</name>
</author>
<author>
<name sortKey="Wiering, M A" uniqKey="Wiering M">M.A. Wiering</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Huang, C" uniqKey="Huang C">C. Huang</name>
</author>
<author>
<name sortKey="Huang, J" uniqKey="Huang J">J. Huang</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Arigbabu, O A" uniqKey="Arigbabu O">O.A. Arigbabu</name>
</author>
<author>
<name sortKey="Ahmad, S M S" uniqKey="Ahmad S">S.M.S. Ahmad</name>
</author>
<author>
<name sortKey="Adnan, W A W" uniqKey="Adnan W">W.A.W. Adnan</name>
</author>
<author>
<name sortKey="Yussof, S" uniqKey="Yussof S">S. Yussof</name>
</author>
<author>
<name sortKey="Mahmood, S" uniqKey="Mahmood S">S. Mahmood</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lugh, A V" uniqKey="Lugh A">A.V. Lugh</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Weaver, C S" uniqKey="Weaver C">C.S. Weaver</name>
</author>
<author>
<name sortKey="Goodman, J W" uniqKey="Goodman J">J.W. Goodman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Horner, J L" uniqKey="Horner J">J.L. Horner</name>
</author>
<author>
<name sortKey="Gianino, P D" uniqKey="Gianino P">P.D. Gianino</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Leonard, I" uniqKey="Leonard I">I. Leonard</name>
</author>
<author>
<name sortKey="Alfalou, A" uniqKey="Alfalou A">A. Alfalou</name>
</author>
<author>
<name sortKey="Brosseau, C" uniqKey="Brosseau C">C. Brosseau</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Katz, P" uniqKey="Katz P">P. Katz</name>
</author>
<author>
<name sortKey="Aron, M" uniqKey="Aron M">M. Aron</name>
</author>
<author>
<name sortKey="Alfalou, A" uniqKey="Alfalou A">A. Alfalou</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Alfalou, A" uniqKey="Alfalou A">A. Alfalou</name>
</author>
<author>
<name sortKey="Brosseau, C" uniqKey="Brosseau C">C. Brosseau</name>
</author>
<author>
<name sortKey="Katz, P" uniqKey="Katz P">P. Katz</name>
</author>
<author>
<name sortKey="Alam, M S" uniqKey="Alam M">M.S. Alam</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Elbouz, M" uniqKey="Elbouz M">M. Elbouz</name>
</author>
<author>
<name sortKey="Bouzidi, F" uniqKey="Bouzidi F">F. Bouzidi</name>
</author>
<author>
<name sortKey="Alfalou, A" uniqKey="Alfalou A">A. Alfalou</name>
</author>
<author>
<name sortKey="Brosseau, C" uniqKey="Brosseau C">C. Brosseau</name>
</author>
<author>
<name sortKey="Leonard, I" uniqKey="Leonard I">I. Leonard</name>
</author>
<author>
<name sortKey="Benkelfat, B E" uniqKey="Benkelfat B">B.E. Benkelfat</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Heflin, B" uniqKey="Heflin B">B. Heflin</name>
</author>
<author>
<name sortKey="Scheirer, W" uniqKey="Scheirer W">W. Scheirer</name>
</author>
<author>
<name sortKey="Boult, T E" uniqKey="Boult T">T.E. Boult</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zhu, X" uniqKey="Zhu X">X. Zhu</name>
</author>
<author>
<name sortKey="Liao, S" uniqKey="Liao S">S. Liao</name>
</author>
<author>
<name sortKey="Lei, Z" uniqKey="Lei Z">Z. Lei</name>
</author>
<author>
<name sortKey="Liu, R" uniqKey="Liu R">R. Liu</name>
</author>
<author>
<name sortKey="Li, S Z" uniqKey="Li S">S.Z. Li</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lenc, L" uniqKey="Lenc L">L. Lenc</name>
</author>
<author>
<name sortKey="Kral, P" uniqKey="Kral P">P. Král</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="I K, " uniqKey="I K ">Ş. Işık</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mahier, J" uniqKey="Mahier J">J. Mahier</name>
</author>
<author>
<name sortKey="Hemery, B" uniqKey="Hemery B">B. Hemery</name>
</author>
<author>
<name sortKey="El Abed, M" uniqKey="El Abed M">M. El-Abed</name>
</author>
<author>
<name sortKey="El Allam, M" uniqKey="El Allam M">M. El-Allam</name>
</author>
<author>
<name sortKey="Bouhaddaoui, M" uniqKey="Bouhaddaoui M">M. Bouhaddaoui</name>
</author>
<author>
<name sortKey="Rosenberger, C" uniqKey="Rosenberger C">C. Rosenberger</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Alahi, A" uniqKey="Alahi A">A. Alahi</name>
</author>
<author>
<name sortKey="Ortiz, R" uniqKey="Ortiz R">R. Ortiz</name>
</author>
<author>
<name sortKey="Vandergheynst, P" uniqKey="Vandergheynst P">P. Vandergheynst</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Arashloo, S R" uniqKey="Arashloo S">S.R. Arashloo</name>
</author>
<author>
<name sortKey="Kittler, J" uniqKey="Kittler J">J. Kittler</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ghorbel, A" uniqKey="Ghorbel A">A. Ghorbel</name>
</author>
<author>
<name sortKey="Tajouri, I" uniqKey="Tajouri I">I. Tajouri</name>
</author>
<author>
<name sortKey="Aydi, W" uniqKey="Aydi W">W. Aydi</name>
</author>
<author>
<name sortKey="Masmoudi, N" uniqKey="Masmoudi N">N. Masmoudi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lima, A" uniqKey="Lima A">A. Lima</name>
</author>
<author>
<name sortKey="Zen, H" uniqKey="Zen H">H. Zen</name>
</author>
<author>
<name sortKey="Nankaku, Y" uniqKey="Nankaku Y">Y. Nankaku</name>
</author>
<author>
<name sortKey="Miyajima, C" uniqKey="Miyajima C">C. Miyajima</name>
</author>
<author>
<name sortKey="Tokuda, K" uniqKey="Tokuda K">K. Tokuda</name>
</author>
<author>
<name sortKey="Kitamura, T" uniqKey="Kitamura T">T. Kitamura</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Devi, B J" uniqKey="Devi B">B.J. Devi</name>
</author>
<author>
<name sortKey="Veeranjaneyulu, N" uniqKey="Veeranjaneyulu N">N. Veeranjaneyulu</name>
</author>
<author>
<name sortKey="Kishore, K V K" uniqKey="Kishore K">K.V.K. Kishore</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Simonyan, K" uniqKey="Simonyan K">K. Simonyan</name>
</author>
<author>
<name sortKey="Parkhi, O M" uniqKey="Parkhi O">O.M. Parkhi</name>
</author>
<author>
<name sortKey="Vedaldi, A" uniqKey="Vedaldi A">A. Vedaldi</name>
</author>
<author>
<name sortKey="Zisserman, A" uniqKey="Zisserman A">A. Zisserman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Li, B" uniqKey="Li B">B. Li</name>
</author>
<author>
<name sortKey="Ma, K K" uniqKey="Ma K">K.K. Ma</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Agarwal, R" uniqKey="Agarwal R">R. Agarwal</name>
</author>
<author>
<name sortKey="Jain, R" uniqKey="Jain R">R. Jain</name>
</author>
<author>
<name sortKey="Regunathan, R" uniqKey="Regunathan R">R. Regunathan</name>
</author>
<author>
<name sortKey="Kumar, C P" uniqKey="Kumar C">C.P. Kumar</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Cui, Z" uniqKey="Cui Z">Z. Cui</name>
</author>
<author>
<name sortKey="Li, W" uniqKey="Li W">W. Li</name>
</author>
<author>
<name sortKey="Xu, D" uniqKey="Xu D">D. Xu</name>
</author>
<author>
<name sortKey="Shan, S" uniqKey="Shan S">S. Shan</name>
</author>
<author>
<name sortKey="Chen, X" uniqKey="Chen X">X. Chen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Prince, S" uniqKey="Prince S">S. Prince</name>
</author>
<author>
<name sortKey="Li, P" uniqKey="Li P">P. Li</name>
</author>
<author>
<name sortKey="Fu, Y" uniqKey="Fu Y">Y. Fu</name>
</author>
<author>
<name sortKey="Mohammed, U" uniqKey="Mohammed U">U. Mohammed</name>
</author>
<author>
<name sortKey="Elder, J" uniqKey="Elder J">J. Elder</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Perlibakas, V" uniqKey="Perlibakas V">V. Perlibakas</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Huang, Z H" uniqKey="Huang Z">Z.H. Huang</name>
</author>
<author>
<name sortKey="Li, W J" uniqKey="Li W">W.J. Li</name>
</author>
<author>
<name sortKey="Shang, J" uniqKey="Shang J">J. Shang</name>
</author>
<author>
<name sortKey="Wang, J" uniqKey="Wang J">J. Wang</name>
</author>
<author>
<name sortKey="Zhang, T" uniqKey="Zhang T">T. Zhang</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sufyanu, Z" uniqKey="Sufyanu Z">Z. Sufyanu</name>
</author>
<author>
<name sortKey="Mohamad, F S" uniqKey="Mohamad F">F.S. Mohamad</name>
</author>
<author>
<name sortKey="Yusuf, A A" uniqKey="Yusuf A">A.A. Yusuf</name>
</author>
<author>
<name sortKey="Mamat, M B" uniqKey="Mamat M">M.B. Mamat</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hoffmann, H" uniqKey="Hoffmann H">H. Hoffmann</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Arashloo, S R" uniqKey="Arashloo S">S.R. Arashloo</name>
</author>
<author>
<name sortKey="Kittler, J" uniqKey="Kittler J">J. Kittler</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Vinay, A" uniqKey="Vinay A">A. Vinay</name>
</author>
<author>
<name sortKey="Shekhar, V S" uniqKey="Shekhar V">V.S. Shekhar</name>
</author>
<author>
<name sortKey="Murthy, K B" uniqKey="Murthy K">K.B. Murthy</name>
</author>
<author>
<name sortKey="Natarajan, S" uniqKey="Natarajan S">S. Natarajan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sivasathya, M" uniqKey="Sivasathya M">M. Sivasathya</name>
</author>
<author>
<name sortKey="Joans, S M" uniqKey="Joans S">S.M. Joans</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zhang, B" uniqKey="Zhang B">B. Zhang</name>
</author>
<author>
<name sortKey="Chen, X" uniqKey="Chen X">X. Chen</name>
</author>
<author>
<name sortKey="Shan, S" uniqKey="Shan S">S. Shan</name>
</author>
<author>
<name sortKey="Gao, W" uniqKey="Gao W">W. Gao</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Vankayalapati, H D" uniqKey="Vankayalapati H">H.D. Vankayalapati</name>
</author>
<author>
<name sortKey="Kyamakya, K" uniqKey="Kyamakya K">K. Kyamakya</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Javidi, B" uniqKey="Javidi B">B. Javidi</name>
</author>
<author>
<name sortKey="Li, J" uniqKey="Li J">J. Li</name>
</author>
<author>
<name sortKey="Tang, Q" uniqKey="Tang Q">Q. Tang</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Yang, J" uniqKey="Yang J">J. Yang</name>
</author>
<author>
<name sortKey="Frangi, A F" uniqKey="Frangi A">A.F. Frangi</name>
</author>
<author>
<name sortKey="Yang, J Y" uniqKey="Yang J">J.Y. Yang</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Pang, Y" uniqKey="Pang Y">Y. Pang</name>
</author>
<author>
<name sortKey="Liu, Z" uniqKey="Liu Z">Z. Liu</name>
</author>
<author>
<name sortKey="Yu, N" uniqKey="Yu N">N. Yu</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wang, Y" uniqKey="Wang Y">Y. Wang</name>
</author>
<author>
<name sortKey="Fei, P" uniqKey="Fei P">P. Fei</name>
</author>
<author>
<name sortKey="Fan, X" uniqKey="Fan X">X. Fan</name>
</author>
<author>
<name sortKey="Li, H" uniqKey="Li H">H. Li</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Li, S" uniqKey="Li S">S. Li</name>
</author>
<author>
<name sortKey="Yao, Y F" uniqKey="Yao Y">Y.F. Yao</name>
</author>
<author>
<name sortKey="Jing, X Y" uniqKey="Jing X">X.Y. Jing</name>
</author>
<author>
<name sortKey="Chang, H" uniqKey="Chang H">H. Chang</name>
</author>
<author>
<name sortKey="Gao, S Q" uniqKey="Gao S">S.Q. Gao</name>
</author>
<author>
<name sortKey="Zhang, D" uniqKey="Zhang D">D. Zhang</name>
</author>
<author>
<name sortKey="Yang, J Y" uniqKey="Yang J">J.Y. Yang</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Khan, S A" uniqKey="Khan S">S.A. Khan</name>
</author>
<author>
<name sortKey="Ishtiaq, M" uniqKey="Ishtiaq M">M. Ishtiaq</name>
</author>
<author>
<name sortKey="Nazir, M" uniqKey="Nazir M">M. Nazir</name>
</author>
<author>
<name sortKey="Shaheen, M" uniqKey="Shaheen M">M. Shaheen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hafez, S F" uniqKey="Hafez S">S.F. Hafez</name>
</author>
<author>
<name sortKey="Selim, M M" uniqKey="Selim M">M.M. Selim</name>
</author>
<author>
<name sortKey="Zayed, H H" uniqKey="Zayed H">H.H. Zayed</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Shanbhag, S S" uniqKey="Shanbhag S">S.S. Shanbhag</name>
</author>
<author>
<name sortKey="Bargi, S" uniqKey="Bargi S">S. Bargi</name>
</author>
<author>
<name sortKey="Manikantan, K" uniqKey="Manikantan K">K. Manikantan</name>
</author>
<author>
<name sortKey="Ramachandran, S" uniqKey="Ramachandran S">S. Ramachandran</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fan, J" uniqKey="Fan J">J. Fan</name>
</author>
<author>
<name sortKey="Chow, T W" uniqKey="Chow T">T.W. Chow</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Vinay, A" uniqKey="Vinay A">A. Vinay</name>
</author>
<author>
<name sortKey="Cholin, A S" uniqKey="Cholin A">A.S. Cholin</name>
</author>
<author>
<name sortKey="Bhat, A D" uniqKey="Bhat A">A.D. Bhat</name>
</author>
<author>
<name sortKey="Murthy, K B" uniqKey="Murthy K">K.B. Murthy</name>
</author>
<author>
<name sortKey="Natarajan, S" uniqKey="Natarajan S">S. Natarajan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lu, J" uniqKey="Lu J">J. Lu</name>
</author>
<author>
<name sortKey="Plataniotis, K N" uniqKey="Plataniotis K">K.N. Plataniotis</name>
</author>
<author>
<name sortKey="Venetsanopoulos, A N" uniqKey="Venetsanopoulos A">A.N. Venetsanopoulos</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Yang, W J" uniqKey="Yang W">W.J. Yang</name>
</author>
<author>
<name sortKey="Chen, Y C" uniqKey="Chen Y">Y.C. Chen</name>
</author>
<author>
<name sortKey="Chung, P C" uniqKey="Chung P">P.C. Chung</name>
</author>
<author>
<name sortKey="Yang, J F" uniqKey="Yang J">J.F. Yang</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ouanan, H" uniqKey="Ouanan H">H. Ouanan</name>
</author>
<author>
<name sortKey="Ouanan, M" uniqKey="Ouanan M">M. Ouanan</name>
</author>
<author>
<name sortKey="Aksasse, B" uniqKey="Aksasse B">B. Aksasse</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fathima, A A" uniqKey="Fathima A">A.A. Fathima</name>
</author>
<author>
<name sortKey="Ajitha, S" uniqKey="Ajitha S">S. Ajitha</name>
</author>
<author>
<name sortKey="Vaidehi, V" uniqKey="Vaidehi V">V. Vaidehi</name>
</author>
<author>
<name sortKey="Hemalatha, M" uniqKey="Hemalatha M">M. Hemalatha</name>
</author>
<author>
<name sortKey="Karthigaiveni, R" uniqKey="Karthigaiveni R">R. Karthigaiveni</name>
</author>
<author>
<name sortKey="Kumar, R" uniqKey="Kumar R">R. Kumar</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Barkan, O" uniqKey="Barkan O">O. Barkan</name>
</author>
<author>
<name sortKey="Weill, J" uniqKey="Weill J">J. Weill</name>
</author>
<author>
<name sortKey="Wolf, L" uniqKey="Wolf L">L. Wolf</name>
</author>
<author>
<name sortKey="Aronowitz, H" uniqKey="Aronowitz H">H. Aronowitz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Juefei Xu, F" uniqKey="Juefei Xu F">F. Juefei-Xu</name>
</author>
<author>
<name sortKey="Luu, K" uniqKey="Luu K">K. Luu</name>
</author>
<author>
<name sortKey="Savvides, M" uniqKey="Savvides M">M. Savvides</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Yan, Y" uniqKey="Yan Y">Y. Yan</name>
</author>
<author>
<name sortKey="Wang, H" uniqKey="Wang H">H. Wang</name>
</author>
<author>
<name sortKey="Suter, D" uniqKey="Suter D">D. Suter</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ding, C" uniqKey="Ding C">C. Ding</name>
</author>
<author>
<name sortKey="Tao, D" uniqKey="Tao D">D. Tao</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sharma, R" uniqKey="Sharma R">R. Sharma</name>
</author>
<author>
<name sortKey="Patterh, M S" uniqKey="Patterh M">M.S. Patterh</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Moussa, M" uniqKey="Moussa M">M. Moussa</name>
</author>
<author>
<name sortKey="Hmila, M" uniqKey="Hmila M">M. Hmila</name>
</author>
<author>
<name sortKey="Douik, A" uniqKey="Douik A">A. Douik</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mian, A" uniqKey="Mian A">A. Mian</name>
</author>
<author>
<name sortKey="Bennamoun, M" uniqKey="Bennamoun M">M. Bennamoun</name>
</author>
<author>
<name sortKey="Owens, R" uniqKey="Owens R">R. Owens</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Cho, H" uniqKey="Cho H">H. Cho</name>
</author>
<author>
<name sortKey="Roberts, R" uniqKey="Roberts R">R. Roberts</name>
</author>
<author>
<name sortKey="Jung, B" uniqKey="Jung B">B. Jung</name>
</author>
<author>
<name sortKey="Choi, O" uniqKey="Choi O">O. Choi</name>
</author>
<author>
<name sortKey="Moon, S" uniqKey="Moon S">S. Moon</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Guru, D S" uniqKey="Guru D">D.S. Guru</name>
</author>
<author>
<name sortKey="Suraj, M G" uniqKey="Suraj M">M.G. Suraj</name>
</author>
<author>
<name sortKey="Manjunath, S" uniqKey="Manjunath S">S. Manjunath</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sing, J K" uniqKey="Sing J">J.K. Sing</name>
</author>
<author>
<name sortKey="Chowdhury, S" uniqKey="Chowdhury S">S. Chowdhury</name>
</author>
<author>
<name sortKey="Basu, D K" uniqKey="Basu D">D.K. Basu</name>
</author>
<author>
<name sortKey="Nasipuri, M" uniqKey="Nasipuri M">M. Nasipuri</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kamencay, P" uniqKey="Kamencay P">P. Kamencay</name>
</author>
<author>
<name sortKey="Zachariasova, M" uniqKey="Zachariasova M">M. Zachariasova</name>
</author>
<author>
<name sortKey="Hudec, R" uniqKey="Hudec R">R. Hudec</name>
</author>
<author>
<name sortKey="Jarina, R" uniqKey="Jarina R">R. Jarina</name>
</author>
<author>
<name sortKey="Benco, M" uniqKey="Benco M">M. Benco</name>
</author>
<author>
<name sortKey="Hlubik, J" uniqKey="Hlubik J">J. Hlubik</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sun, J" uniqKey="Sun J">J. Sun</name>
</author>
<author>
<name sortKey="Fu, Y" uniqKey="Fu Y">Y. Fu</name>
</author>
<author>
<name sortKey="Li, S" uniqKey="Li S">S. Li</name>
</author>
<author>
<name sortKey="He, J" uniqKey="He J">J. He</name>
</author>
<author>
<name sortKey="Xu, C" uniqKey="Xu C">C. Xu</name>
</author>
<author>
<name sortKey="Tan, L" uniqKey="Tan L">L. Tan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Soltanpour, S" uniqKey="Soltanpour S">S. Soltanpour</name>
</author>
<author>
<name sortKey="Boufama, B" uniqKey="Boufama B">B. Boufama</name>
</author>
<author>
<name sortKey="Wu, Q J" uniqKey="Wu Q">Q.J. Wu</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sharma, G" uniqKey="Sharma G">G. Sharma</name>
</author>
<author>
<name sortKey="Ul Hussain, S" uniqKey="Ul Hussain S">S. ul Hussain</name>
</author>
<author>
<name sortKey="Jurie, F" uniqKey="Jurie F">F. Jurie</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zhang, J" uniqKey="Zhang J">J. Zhang</name>
</author>
<author>
<name sortKey="Marszalek, M" uniqKey="Marszalek M">M. Marszałek</name>
</author>
<author>
<name sortKey="Lazebnik, S" uniqKey="Lazebnik S">S. Lazebnik</name>
</author>
<author>
<name sortKey="Schmid, C" uniqKey="Schmid C">C. Schmid</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Leonard, I" uniqKey="Leonard I">I. Leonard</name>
</author>
<author>
<name sortKey="Alfalou, A" uniqKey="Alfalou A">A. Alfalou</name>
</author>
<author>
<name sortKey="Brosseau, C" uniqKey="Brosseau C">C. Brosseau</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Shen, L" uniqKey="Shen L">L. Shen</name>
</author>
<author>
<name sortKey="Bai, L" uniqKey="Bai L">L. Bai</name>
</author>
<author>
<name sortKey="Ji, Z" uniqKey="Ji Z">Z. Ji</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Pratima, D" uniqKey="Pratima D">D. Pratima</name>
</author>
<author>
<name sortKey="Nimmakanti, N" uniqKey="Nimmakanti N">N. Nimmakanti</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zhang, C" uniqKey="Zhang C">C. Zhang</name>
</author>
<author>
<name sortKey="Prasanna, V" uniqKey="Prasanna V">V. Prasanna</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nguyen, D T" uniqKey="Nguyen D">D.T. Nguyen</name>
</author>
<author>
<name sortKey="Pham, T D" uniqKey="Pham T">T.D. Pham</name>
</author>
<author>
<name sortKey="Lee, M B" uniqKey="Lee M">M.B. Lee</name>
</author>
<author>
<name sortKey="Park, K R" uniqKey="Park K">K.R. Park</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Parkhi, O M" uniqKey="Parkhi O">O.M. Parkhi</name>
</author>
<author>
<name sortKey="Vedaldi, A" uniqKey="Vedaldi A">A. Vedaldi</name>
</author>
<author>
<name sortKey="Zisserman, A" uniqKey="Zisserman A">A. Zisserman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wen, Y" uniqKey="Wen Y">Y. Wen</name>
</author>
<author>
<name sortKey="Zhang, K" uniqKey="Zhang K">K. Zhang</name>
</author>
<author>
<name sortKey="Li, Z" uniqKey="Li Z">Z. Li</name>
</author>
<author>
<name sortKey="Qiao, Y" uniqKey="Qiao Y">Y. Qiao</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Passalis, N" uniqKey="Passalis N">N. Passalis</name>
</author>
<author>
<name sortKey="Tefas, A" uniqKey="Tefas A">A. Tefas</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Liu, W" uniqKey="Liu W">W. Liu</name>
</author>
<author>
<name sortKey="Wen, Y" uniqKey="Wen Y">Y. Wen</name>
</author>
<author>
<name sortKey="Yu, Z" uniqKey="Yu Z">Z. Yu</name>
</author>
<author>
<name sortKey="Li, M" uniqKey="Li M">M. Li</name>
</author>
<author>
<name sortKey="Raj, B" uniqKey="Raj B">B. Raj</name>
</author>
<author>
<name sortKey="Song, L" uniqKey="Song L">L. Song</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Amato, G" uniqKey="Amato G">G. Amato</name>
</author>
<author>
<name sortKey="Falchi, F" uniqKey="Falchi F">F. Falchi</name>
</author>
<author>
<name sortKey="Gennaro, C" uniqKey="Gennaro C">C. Gennaro</name>
</author>
<author>
<name sortKey="Massoli, F V" uniqKey="Massoli F">F.V. Massoli</name>
</author>
<author>
<name sortKey="Passalis, N" uniqKey="Passalis N">N. Passalis</name>
</author>
<author>
<name sortKey="Tefas, A" uniqKey="Tefas A">A. Tefas</name>
</author>
<author>
<name sortKey="Vairo, C" uniqKey="Vairo C">C. Vairo</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Taigman, Y" uniqKey="Taigman Y">Y. Taigman</name>
</author>
<author>
<name sortKey="Yang, M" uniqKey="Yang M">M. Yang</name>
</author>
<author>
<name sortKey="Ranzato, M A" uniqKey="Ranzato M">M.A. Ranzato</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ma, Z" uniqKey="Ma Z">Z. Ma</name>
</author>
<author>
<name sortKey="Ding, Y" uniqKey="Ding Y">Y. Ding</name>
</author>
<author>
<name sortKey="Li, B" uniqKey="Li B">B. Li</name>
</author>
<author>
<name sortKey="Yuan, X" uniqKey="Yuan X">X. Yuan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Koo, J" uniqKey="Koo J">J. Koo</name>
</author>
<author>
<name sortKey="Cho, S" uniqKey="Cho S">S. Cho</name>
</author>
<author>
<name sortKey="Baek, N" uniqKey="Baek N">N. Baek</name>
</author>
<author>
<name sortKey="Kim, M" uniqKey="Kim M">M. Kim</name>
</author>
<author>
<name sortKey="Park, K" uniqKey="Park K">K. Park</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Cho, S" uniqKey="Cho S">S. Cho</name>
</author>
<author>
<name sortKey="Baek, N" uniqKey="Baek N">N. Baek</name>
</author>
<author>
<name sortKey="Kim, M" uniqKey="Kim M">M. Kim</name>
</author>
<author>
<name sortKey="Koo, J" uniqKey="Koo J">J. Koo</name>
</author>
<author>
<name sortKey="Kim, J" uniqKey="Kim J">J. Kim</name>
</author>
<author>
<name sortKey="Park, K" uniqKey="Park K">K. Park</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Koshy, R" uniqKey="Koshy R">R. Koshy</name>
</author>
<author>
<name sortKey="Mahmood, A" uniqKey="Mahmood A">A. Mahmood</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Elmahmudi, A" uniqKey="Elmahmudi A">A. Elmahmudi</name>
</author>
<author>
<name sortKey="Ugail, H" uniqKey="Ugail H">H. Ugail</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Seibold, C" uniqKey="Seibold C">C. Seibold</name>
</author>
<author>
<name sortKey="Samek, W" uniqKey="Samek W">W. Samek</name>
</author>
<author>
<name sortKey="Hilsmann, A" uniqKey="Hilsmann A">A. Hilsmann</name>
</author>
<author>
<name sortKey="Eisert, P" uniqKey="Eisert P">P. Eisert</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Yim, J" uniqKey="Yim J">J. Yim</name>
</author>
<author>
<name sortKey="Jung, H" uniqKey="Jung H">H. Jung</name>
</author>
<author>
<name sortKey="Yoo, B" uniqKey="Yoo B">B. Yoo</name>
</author>
<author>
<name sortKey="Choi, C" uniqKey="Choi C">C. Choi</name>
</author>
<author>
<name sortKey="Park, D" uniqKey="Park D">D. Park</name>
</author>
<author>
<name sortKey="Kim, J" uniqKey="Kim J">J. Kim</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bajrami, X" uniqKey="Bajrami X">X. Bajrami</name>
</author>
<author>
<name sortKey="Gashi, B" uniqKey="Gashi B">B. Gashi</name>
</author>
<author>
<name sortKey="Murturi, I" uniqKey="Murturi I">I. Murturi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gourier, N" uniqKey="Gourier N">N. Gourier</name>
</author>
<author>
<name sortKey="Hall, D" uniqKey="Hall D">D. Hall</name>
</author>
<author>
<name sortKey="Crowley, J L" uniqKey="Crowley J">J.L. Crowley</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gonzalez Sosa, E" uniqKey="Gonzalez Sosa E">E. Gonzalez-Sosa</name>
</author>
<author>
<name sortKey="Fierrez, J" uniqKey="Fierrez J">J. Fierrez</name>
</author>
<author>
<name sortKey="Vera Rodriguez, R" uniqKey="Vera Rodriguez R">R. Vera-Rodriguez</name>
</author>
<author>
<name sortKey="Alonso Fernandez, F" uniqKey="Alonso Fernandez F">F. Alonso-Fernandez</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Boukamcha, H" uniqKey="Boukamcha H">H. Boukamcha</name>
</author>
<author>
<name sortKey="Hallek, M" uniqKey="Hallek M">M. Hallek</name>
</author>
<author>
<name sortKey="Smach, F" uniqKey="Smach F">F. Smach</name>
</author>
<author>
<name sortKey="Atri, M" uniqKey="Atri M">M. Atri</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ouerhani, Y" uniqKey="Ouerhani Y">Y. Ouerhani</name>
</author>
<author>
<name sortKey="Jridi, M" uniqKey="Jridi M">M. Jridi</name>
</author>
<author>
<name sortKey="Alfalou, A" uniqKey="Alfalou A">A. Alfalou</name>
</author>
<author>
<name sortKey="Brosseau, C" uniqKey="Brosseau C">C. Brosseau</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Su, C" uniqKey="Su C">C. Su</name>
</author>
<author>
<name sortKey="Yan, Y" uniqKey="Yan Y">Y. Yan</name>
</author>
<author>
<name sortKey="Chen, S" uniqKey="Chen S">S. Chen</name>
</author>
<author>
<name sortKey="Wang, H" uniqKey="Wang H">H. Wang</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Co Kun, M" uniqKey="Co Kun M">M. Coşkun</name>
</author>
<author>
<name sortKey="Ucar, A" uniqKey="Ucar A">A. Uçar</name>
</author>
<author>
<name sortKey="Yildirim, O" uniqKey="Yildirim O">Ö. Yildirim</name>
</author>
<author>
<name sortKey="Demir, Y" uniqKey="Demir Y">Y. Demir</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="review-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">Sensors (Basel)</journal-id>
<journal-id journal-id-type="iso-abbrev">Sensors (Basel)</journal-id>
<journal-id journal-id-type="publisher-id">sensors</journal-id>
<journal-title-group>
<journal-title>Sensors (Basel, Switzerland)</journal-title>
</journal-title-group>
<issn pub-type="epub">1424-8220</issn>
<publisher>
<publisher-name>MDPI</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">31936089</article-id>
<article-id pub-id-type="pmc">7013584</article-id>
<article-id pub-id-type="doi">10.3390/s20020342</article-id>
<article-id pub-id-type="publisher-id">sensors-20-00342</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Review</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Face Recognition Systems: A Survey</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Kortli</surname>
<given-names>Yassin</given-names>
</name>
<xref ref-type="aff" rid="af1-sensors-20-00342">1</xref>
<xref ref-type="aff" rid="af2-sensors-20-00342">2</xref>
<xref rid="c1-sensors-20-00342" ref-type="corresp">*</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Jridi</surname>
<given-names>Maher</given-names>
</name>
<xref ref-type="aff" rid="af1-sensors-20-00342">1</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Al Falou</surname>
<given-names>Ayman</given-names>
</name>
<xref ref-type="aff" rid="af1-sensors-20-00342">1</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Atri</surname>
<given-names>Mohamed</given-names>
</name>
<xref ref-type="aff" rid="af3-sensors-20-00342">3</xref>
</contrib>
</contrib-group>
<aff id="af1-sensors-20-00342">
<label>1</label>
AI-ED Department, Yncrea Ouest, 20 rue du Cuirassé de Bretagne, 29200 Brest, France;
<email>maher.jridi@isen-ouest.yncrea.fr</email>
(M.J.);
<email>ayman.alfalou@isen-ouest.yncrea.fr</email>
(A.A.F.)</aff>
<aff id="af2-sensors-20-00342">
<label>2</label>
Electronic and Micro-electronic Laboratory, Faculty of Sciences of Monastir, University of Monastir, Monastir 5000, Tunisia</aff>
<aff id="af3-sensors-20-00342">
<label>3</label>
College of Computer Science, King Khalid University, Abha 61421, Saudi Arabia;
<email>matri@kku.edu.sa</email>
</aff>
<author-notes>
<corresp id="c1-sensors-20-00342">
<label>*</label>
Correspondence:
<email>yassin.kortli@isen-ouest.yncrea.fr</email>
</corresp>
</author-notes>
<pub-date pub-type="epub">
<day>07</day>
<month>1</month>
<year>2020</year>
</pub-date>
<pub-date pub-type="collection">
<month>1</month>
<year>2020</year>
</pub-date>
<volume>20</volume>
<issue>2</issue>
<elocation-id>342</elocation-id>
<history>
<date date-type="received">
<day>15</day>
<month>10</month>
<year>2019</year>
</date>
<date date-type="accepted">
<day>15</day>
<month>12</month>
<year>2019</year>
</date>
</history>
<permissions>
<copyright-statement>© 2020 by the authors.</copyright-statement>
<copyright-year>2020</copyright-year>
<license license-type="open-access">
<license-p>Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (
<ext-link ext-link-type="uri" xlink:href="http://creativecommons.org/licenses/by/4.0/">http://creativecommons.org/licenses/by/4.0/</ext-link>
).</license-p>
</license>
</permissions>
<abstract>
<p>Over the past few decades, interest in theories and algorithms for face recognition has grown rapidly. Video surveillance, criminal identification, building access control, and unmanned and autonomous vehicles are just a few examples of concrete applications that are gaining traction in industry. Various techniques are being developed, including local, holistic, and hybrid approaches, which describe a face image using either a few selected facial features or the face as a whole. The main contribution of this survey is to review well-known techniques for each approach and to give a taxonomy of their categories. A detailed comparison between these techniques is presented by listing the advantages and disadvantages of their schemes in terms of robustness, accuracy, complexity, and discrimination. Another point addressed in the paper is the databases used for face recognition: an overview of the most commonly used databases, including those for supervised and unsupervised learning, is given. Numerical results of the most interesting techniques are reported along with the experimental context and the challenges handled by these techniques. Finally, the paper provides a thorough discussion of future directions in terms of techniques to be used for face recognition.</p>
</abstract>
<kwd-group>
<kwd>face recognition systems</kwd>
<kwd>person identification</kwd>
<kwd>biometric systems</kwd>
<kwd>survey</kwd>
</kwd-group>
</article-meta>
</front>
<body>
<sec sec-type="intro" id="sec1-sensors-20-00342">
<title>1. Introduction</title>
<p>Developing biometric applications, such as facial recognition, has recently become important in smart cities. Many scientists and engineers around the world have focused on establishing increasingly robust and accurate algorithms and methods for these systems and their application in everyday life. Every security system must protect personal data. The most commonly used credential for recognition is the password. However, with the development of information technologies and security algorithms, many systems have begun to use biometric factors for the recognition task [
<xref rid="B1-sensors-20-00342" ref-type="bibr">1</xref>
,
<xref rid="B2-sensors-20-00342" ref-type="bibr">2</xref>
,
<xref rid="B3-sensors-20-00342" ref-type="bibr">3</xref>
,
<xref rid="B4-sensors-20-00342" ref-type="bibr">4</xref>
]. These biometric factors make it possible to establish a person’s identity from their physiological or behavioral characteristics. They also offer several advantages: for example, the presence of the person in front of the sensor is sufficient, and there is no longer any need to remember several passwords or confidential codes. In this context, many recognition systems based on different biometric factors, such as the iris, fingerprints [
<xref rid="B5-sensors-20-00342" ref-type="bibr">5</xref>
], voice [
<xref rid="B6-sensors-20-00342" ref-type="bibr">6</xref>
], and face have been deployed in recent years.</p>
<p>Systems that identify people based on their biological characteristics are very attractive because they are easy to use. The human face is composed of distinctive structures and characteristics. For this reason, it has in recent years become one of the most widely used biometric modalities, given its potential in many applications and fields (surveillance, home security, border control, and so on) [
<xref rid="B7-sensors-20-00342" ref-type="bibr">7</xref>
,
<xref rid="B8-sensors-20-00342" ref-type="bibr">8</xref>
,
<xref rid="B9-sensors-20-00342" ref-type="bibr">9</xref>
]. Facial recognition as a form of ID (identity) is already being offered to consumers beyond phones, including at airport check-ins, sports stadiums, and concerts. Moreover, this kind of system does not require human intervention to operate, as it can identify people directly from camera images. Many biometric systems developed along different lines of research already provide good identification accuracy. However, it remains worthwhile to develop new face recognition systems that also meet real-time constraints.</p>
<p>Owing to the huge volume of data generated and rapid advancement in artificial intelligence techniques, traditional computing models have become inadequate to process data, especially for complex applications like those related to feature extraction. Graphics processing units (GPUs) [
<xref rid="B4-sensors-20-00342" ref-type="bibr">4</xref>
], central processing unit (CPU) [
<xref rid="B3-sensors-20-00342" ref-type="bibr">3</xref>
], and field-programmable gate arrays (FPGAs) [
<xref rid="B10-sensors-20-00342" ref-type="bibr">10</xref>
] are required to perform complex computing tasks efficiently. GPUs have several orders of magnitude more computing cores than traditional CPUs and therefore offer far greater capacity for parallel computing. Unlike GPUs, FPGAs have a flexible hardware configuration and offer better energy efficiency than GPUs. However, FPGAs present a major drawback related to development (programming) time, which is higher than that of CPUs and GPUs.</p>
<p>There are many computer vision approaches proposed to address face detection or recognition tasks with high robustness and discrimination, such as local, subspace, and hybrid approaches [
<xref rid="B10-sensors-20-00342" ref-type="bibr">10</xref>
,
<xref rid="B11-sensors-20-00342" ref-type="bibr">11</xref>
,
<xref rid="B12-sensors-20-00342" ref-type="bibr">12</xref>
,
<xref rid="B13-sensors-20-00342" ref-type="bibr">13</xref>
,
<xref rid="B14-sensors-20-00342" ref-type="bibr">14</xref>
,
<xref rid="B15-sensors-20-00342" ref-type="bibr">15</xref>
,
<xref rid="B16-sensors-20-00342" ref-type="bibr">16</xref>
]. However, several issues still need to be addressed owing to various challenges, such as head orientation, lighting conditions, and facial expression. The most interesting techniques are designed to handle all these challenges and thus yield reliable face recognition systems. Nevertheless, they require long processing times, consume a lot of memory, and are relatively complex.</p>
<p>Rapid advances in technologies such as digital cameras, portable devices, and increased demand for security make the face recognition system one of the primary biometric technologies.</p>
<p>To sum up, the contributions of this review paper are as follows:
<list list-type="order">
<list-item>
<p>We first introduced face recognition as a biometric technique.</p>
</list-item>
<list-item>
<p>We presented the state of the art of the existing face recognition techniques classified into three approaches: local, holistic, and hybrid.</p>
</list-item>
<list-item>
<p>The surveyed approaches were summarized and compared under different conditions.</p>
</list-item>
<list-item>
<p>We presented the most popular face databases used to test these approaches.</p>
</list-item>
<list-item>
<p>We highlighted some new promising research directions.</p>
</list-item>
</list>
</p>
</sec>
<sec id="sec2-sensors-20-00342">
<title>2. Face Recognition Systems Survey</title>
<sec id="sec2dot1-sensors-20-00342">
<title>2.1. Essential Steps of Face Recognition Systems</title>
<p>Before detailing the techniques used, it is necessary to make a brief description of the problems that must be faced and solved in order to perform the face recognition task correctly. For several security applications, as detailed in the works of [
<xref rid="B17-sensors-20-00342" ref-type="bibr">17</xref>
,
<xref rid="B18-sensors-20-00342" ref-type="bibr">18</xref>
,
<xref rid="B19-sensors-20-00342" ref-type="bibr">19</xref>
,
<xref rid="B20-sensors-20-00342" ref-type="bibr">20</xref>
,
<xref rid="B21-sensors-20-00342" ref-type="bibr">21</xref>
,
<xref rid="B22-sensors-20-00342" ref-type="bibr">22</xref>
], the characteristics that make a face recognition system useful are the following: its ability to work with both videos and images, to process in real time, to be robust in different lighting conditions, to be independent of the person (regardless of hair, ethnicity, or gender), and to be able to work with faces from different angles. Different types of sensors, including RGB, depth, EEG, thermal, and wearable inertial sensors, are used to obtain data. These sensors may provide extra information and help the face recognition systems to identify face images in both static images and video sequences. Moreover, three categories of sensors may improve the reliability and accuracy of a face recognition system by tackling challenges of pure image/video processing such as illumination variation, head pose, and facial expression. The first group is non-visual sensors, such as audio, depth, and EEG sensors, which provide extra information in addition to the visual dimension and improve recognition reliability, for example, under illumination variation or position shifts. The second is detailed-face sensors, such as eye-trackers, which detect small dynamic changes of a face component and may help separate background noise from the face images. The last is target-focused sensors, such as infrared thermal sensors, which can help face recognition systems filter out useless visual content and resist illumination variation.</p>
<p>Three basic steps are used to develop a robust face recognition system: (1) face detection, (2) feature extraction, and (3) face recognition (shown in
<xref ref-type="fig" rid="sensors-20-00342-f001">Figure 1</xref>
) [
<xref rid="B3-sensors-20-00342" ref-type="bibr">3</xref>
,
<xref rid="B23-sensors-20-00342" ref-type="bibr">23</xref>
]. The face detection step detects and locates the human face in the image obtained by the system. The feature extraction step extracts a feature vector for each human face located in the first step. Finally, the face recognition step compares the features extracted from the human face with all the templates stored in the face database in order to decide the identity of the face (a minimal pipeline sketch is given after the list below).
<list list-type="bullet">
<list-item>
<p>
<italic>Face Detection</italic>
: The face recognition system begins with the localization of the human faces in a particular image. The purpose of this step is to determine whether the input image contains human faces or not. Variations in illumination and facial expression can prevent proper face detection. In order to facilitate the design of the subsequent face recognition stages and make them more robust, pre-processing steps are performed. Many techniques are used to detect and locate the human face image, for example, the Viola–Jones detector [
<xref rid="B24-sensors-20-00342" ref-type="bibr">24</xref>
,
<xref rid="B25-sensors-20-00342" ref-type="bibr">25</xref>
], histogram of oriented gradient (HOG) [
<xref rid="B13-sensors-20-00342" ref-type="bibr">13</xref>
,
<xref rid="B26-sensors-20-00342" ref-type="bibr">26</xref>
], and principal component analysis (PCA) [
<xref rid="B27-sensors-20-00342" ref-type="bibr">27</xref>
,
<xref rid="B28-sensors-20-00342" ref-type="bibr">28</xref>
]. Also, the face detection step can be used for video and image classification, object detection [
<xref rid="B29-sensors-20-00342" ref-type="bibr">29</xref>
], region-of-interest detection [
<xref rid="B30-sensors-20-00342" ref-type="bibr">30</xref>
], and so on.</p>
</list-item>
<list-item>
<p>
<italic>Feature Extraction</italic>
: The main function of this step is to extract the features of the face images detected in the detection step. This step represents a face with a feature vector called a “signature”, which describes the prominent features of the face image, such as the mouth, nose, and eyes, with their geometric distribution [
<xref rid="B31-sensors-20-00342" ref-type="bibr">31</xref>
,
<xref rid="B32-sensors-20-00342" ref-type="bibr">32</xref>
]. Each face is characterized by its structure, size, and shape, which allow it to be identified. Several techniques involve extracting the shape of the mouth, eyes, or nose to identify the face using the size and distance [
<xref rid="B3-sensors-20-00342" ref-type="bibr">3</xref>
]. HOG [
<xref rid="B33-sensors-20-00342" ref-type="bibr">33</xref>
], Eigenface [
<xref rid="B34-sensors-20-00342" ref-type="bibr">34</xref>
], independent component analysis (ICA), linear discriminant analysis (LDA) [
<xref rid="B27-sensors-20-00342" ref-type="bibr">27</xref>
,
<xref rid="B35-sensors-20-00342" ref-type="bibr">35</xref>
], scale-invariant feature transform (SIFT) [
<xref rid="B23-sensors-20-00342" ref-type="bibr">23</xref>
], gabor filter, local phase quantization (LPQ) [
<xref rid="B36-sensors-20-00342" ref-type="bibr">36</xref>
], Haar wavelets, Fourier transforms [
<xref rid="B31-sensors-20-00342" ref-type="bibr">31</xref>
], and local binary pattern (LBP) [
<xref rid="B3-sensors-20-00342" ref-type="bibr">3</xref>
,
<xref rid="B10-sensors-20-00342" ref-type="bibr">10</xref>
] techniques are widely used to extract the face features.</p>
</list-item>
<list-item>
<p>
<italic>Face Recognition</italic>
: This step takes the features extracted during the feature extraction step and compares them with the known faces stored in a specific database. There are two general applications of face recognition: one is called identification and the other verification. During identification, a test face is compared with a set of faces with the aim of finding the most likely match. During verification, a test face is compared with a known face in the database in order to make an acceptance or rejection decision [
<xref rid="B7-sensors-20-00342" ref-type="bibr">7</xref>
,
<xref rid="B19-sensors-20-00342" ref-type="bibr">19</xref>
]. Correlation filters (CFs) [
<xref rid="B18-sensors-20-00342" ref-type="bibr">18</xref>
,
<xref rid="B37-sensors-20-00342" ref-type="bibr">37</xref>
,
<xref rid="B38-sensors-20-00342" ref-type="bibr">38</xref>
], convolutional neural network (CNN) [
<xref rid="B39-sensors-20-00342" ref-type="bibr">39</xref>
], and also k-nearest neighbor (K-NN) [
<xref rid="B40-sensors-20-00342" ref-type="bibr">40</xref>
] are known to effectively address this task.</p>
</list-item>
</list>
</p>
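<p>A minimal sketch of how these three steps fit together is given below. It is only illustrative and is not the implementation of any surveyed work: the detector is an OpenCV Viola–Jones (Haar cascade) detector, the “signature” is a plain grey-level histogram standing in for the richer descriptors discussed later, the matcher is a simple nearest-neighbour search, and all function names and parameter values are placeholders chosen for this sketch.</p>
```python
# Illustrative three-stage pipeline: detection -> feature extraction -> recognition.
# The detector, feature, and matcher choices are placeholders, not the surveyed methods.
import cv2
import numpy as np

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")  # Viola-Jones detector

def detect_face(gray_image):
    """Step 1: detect and crop the first face found in a grayscale image."""
    boxes = detector.detectMultiScale(gray_image, scaleFactor=1.1, minNeighbors=5)
    if len(boxes) == 0:
        return None
    x, y, w, h = boxes[0]
    return cv2.resize(gray_image[y:y + h, x:x + w], (100, 100))

def extract_features(face):
    """Step 2: build a simple grey-level histogram as a stand-in 'signature'."""
    hist = cv2.calcHist([face], [0], None, [64], [0, 256]).ravel()
    return hist / (hist.sum() + 1e-8)

def recognize(signature, gallery):
    """Step 3: nearest-neighbour comparison against enrolled template signatures."""
    names = list(gallery.keys())
    distances = [np.linalg.norm(signature - gallery[name]) for name in names]
    return names[int(np.argmin(distances))]
```
<p>A gallery would be built by running detect_face and extract_features on enrolled images and storing the resulting signatures under each person’s name.</p>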
</sec>
<sec id="sec2dot2-sensors-20-00342">
<title>2.2. Classification of Face Recognition Systems</title>
<p>Compared with other biometric systems such as the eye, iris, or fingerprint recognition systems, the face recognition system is not the most efficient and reliable [
<xref rid="B5-sensors-20-00342" ref-type="bibr">5</xref>
]. Moreover, despite all the above advantages, this biometric system has many constraints resulting from numerous challenges. Recognition performance in controlled environments has essentially reached saturation. Nevertheless, in uncontrolled environments, the problem remains open owing to large variations in lighting conditions, facial expressions, age, dynamic backgrounds, and so on. In this survey paper, we review the most advanced face recognition techniques proposed for controlled/uncontrolled environments using different databases.</p>
<p>Several systems are implemented to identify a human face in 2D or 3D images. In this review paper, we will classify these systems into three approaches based on their detection and recognition method (
<xref ref-type="fig" rid="sensors-20-00342-f002">Figure 2</xref>
): (1) local, (2) holistic (subspace), and (3) hybrid approaches. The first approach works with certain facial features rather than the whole face. The second approach employs the entire face as input data and then projects it onto a small subspace or a correlation plane. The third approach combines local and global features in order to improve face recognition accuracy.</p>
</sec>
</sec>
<sec id="sec3-sensors-20-00342">
<title>3. Local Approaches</title>
<p>In the context of face recognition, local approaches treat only some facial features. They are more sensitive to facial expressions, occlusions, and pose [
<xref rid="B1-sensors-20-00342" ref-type="bibr">1</xref>
]. The main objective of these approaches is to discover distinctive features. Generally, these approaches can be divided into two categories: (1) local appearance-based techniques, which extract local features after the face image is divided into small regions (patches) [
<xref rid="B3-sensors-20-00342" ref-type="bibr">3</xref>
,
<xref rid="B32-sensors-20-00342" ref-type="bibr">32</xref>
]; and (2) key-points-based techniques, which detect points of interest in the face image and then extract the features localized at these points.</p>
<sec id="sec3dot1-sensors-20-00342">
<title>3.1. Local Appearance-Based Techniques</title>
<p>These are geometrical techniques, also called feature-based or analytic techniques. In this case, the face image is represented by a set of low-dimensional distinctive vectors or small regions (patches). Local appearance-based techniques focus on critical points of the face, such as the nose, mouth, and eyes, to generate more detail. They also exploit the particular structure of the face as a natural form to identify it with a reduced number of parameters. In addition, these techniques describe the local features through pixel orientations, histograms [
<xref rid="B13-sensors-20-00342" ref-type="bibr">13</xref>
,
<xref rid="B26-sensors-20-00342" ref-type="bibr">26</xref>
], geometric properties, and correlation planes [
<xref rid="B3-sensors-20-00342" ref-type="bibr">3</xref>
,
<xref rid="B33-sensors-20-00342" ref-type="bibr">33</xref>
,
<xref rid="B41-sensors-20-00342" ref-type="bibr">41</xref>
].
<list list-type="bullet">
<list-item>
<p>Local binary pattern (LBP) and its variants: LBP is a general texture descriptor used to extract features from any object [
<xref rid="B16-sensors-20-00342" ref-type="bibr">16</xref>
]. It has been widely used in many applications such as face recognition [
<xref rid="B3-sensors-20-00342" ref-type="bibr">3</xref>
], facial expression recognition, texture segmentation, and texture classification. The LBP technique first divides the facial image into spatial arrays. Next, within each array square, a
<inline-formula>
<mml:math id="mm1">
<mml:mrow>
<mml:mrow>
<mml:mn>3</mml:mn>
<mml:mo>×</mml:mo>
<mml:mn>3</mml:mn>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
pixel matrix
<inline-formula>
<mml:math id="mm2">
<mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi mathvariant="normal">p</mml:mi>
<mml:mn>1</mml:mn>
<mml:mo>…</mml:mo>
<mml:mi mathvariant="normal">p</mml:mi>
<mml:mn>8</mml:mn>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
) is mapped across the square. Each pixel of this matrix is thresholded against the value of the center pixel
<inline-formula>
<mml:math id="mm3">
<mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mi mathvariant="normal">p</mml:mi>
<mml:mrow>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
(i.e., use the intensity value of the center pixel
<inline-formula>
<mml:math id="mm4">
<mml:mrow>
<mml:mrow>
<mml:mi mathvariant="normal">i</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="normal">p</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
as a reference for thresholding) to produce the binary code. If a neighbor pixel’s value is lower than the center pixel value, it is given a zero; otherwise, it is given one. The binary code contains information about the local texture. Finally, for each array square, a histogram of these codes is built, and the histograms are concatenated to form the feature vector. The LBP is defined in a matrix of size 3 × 3, as shown in Equation (1).
<disp-formula id="FD1-sensors-20-00342">
<label>(1)</label>
<mml:math id="mm5">
<mml:mrow>
<mml:mrow>
<mml:mi>LBP</mml:mi>
<mml:mo>=</mml:mo>
<mml:munderover>
<mml:mstyle mathsize="100%" displaystyle="true">
<mml:mo>∑</mml:mo>
</mml:mstyle>
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mn>8</mml:mn>
</mml:munderover>
<mml:msup>
<mml:mn>2</mml:mn>
<mml:mi>p</mml:mi>
</mml:msup>
<mml:mi>s</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>i</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mo>−</mml:mo>
<mml:msub>
<mml:mi>i</mml:mi>
<mml:mi>p</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>,</mml:mo>
<mml:mtext>    </mml:mtext>
<mml:mi>w</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mtext> </mml:mtext>
<mml:mi>s</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>x</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:mrow>
<mml:mo>{</mml:mo>
<mml:mrow>
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mn>1</mml:mn>
</mml:mtd>
<mml:mtd>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>≥</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mn>0</mml:mn>
</mml:mtd>
<mml:mtd>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo><</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:mrow>
<mml:mo>,</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
where
<inline-formula>
<mml:math id="mm6">
<mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>i</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
and
<inline-formula>
<mml:math id="mm7">
<mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>i</mml:mi>
<mml:mi>p</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
are the intensity values of the center pixel and the neighboring pixels, respectively.
<xref ref-type="fig" rid="sensors-20-00342-f003">Figure 3</xref>
illustrates the procedure of the LBP technique; a minimal code sketch of this procedure is given after this list.</p>
<p>Khoi et al. [
<xref rid="B20-sensors-20-00342" ref-type="bibr">20</xref>
] proposed a fast face recognition system based on LBP, the pyramid of local binary patterns (PLBP), and the rotation-invariant local binary pattern (RI-LBP). Xi et al. [
<xref rid="B15-sensors-20-00342" ref-type="bibr">15</xref>
] have introduced a new unsupervised deep learning-based technique, called local binary pattern network (LBPNet), to extract hierarchical representations of data. The LBPNet maintains the same topology as the convolutional neural network (CNN). The experimental results obtained using the public benchmarks (i.e., LFW and FERET) have shown that LBPNet is comparable to other unsupervised techniques. Laure et al. [
<xref rid="B40-sensors-20-00342" ref-type="bibr">40</xref>
] have implemented a method that helps solve face recognition issues with large variations in parameters such as expression, illumination, and pose. This method is based on two techniques: LBP and K-NN. Owing to its invariance to rotation of the target image, LBP has become one of the important techniques used for face recognition. Bonnen et al. [
<xref rid="B42-sensors-20-00342" ref-type="bibr">42</xref>
] proposed a variant of the LBP technique named “multiscale local binary pattern (MLBP)” for features’ extraction. Another LBP extension is the local ternary pattern (LTP) technique [
<xref rid="B43-sensors-20-00342" ref-type="bibr">43</xref>
], which is less sensitive to the noise than the original LBP technique. This technique uses three steps to compute the differences between the neighboring ones and the central pixel. Hussain et al. [
<xref rid="B36-sensors-20-00342" ref-type="bibr">36</xref>
] develop a local quantized pattern (LQP) technique for face representation. LQP is a generalization of local pattern features and is intrinsically robust to illumination conditions. The LQP features use the disk layout to sample pixels from the local neighborhood and obtain a pair of binary codes using ternary split coding. These codes are quantized, with each one using a separately learned codebook.</p>
</list-item>
<list-item>
<p>Histogram of oriented gradients (HOG) [
<xref rid="B44-sensors-20-00342" ref-type="bibr">44</xref>
]: The HOG is one of the best descriptors for shape and edge description. The HOG technique describes the face shape using the distribution of edge directions or light-intensity gradients. The process consists of dividing the whole face image into cells (small regions or areas); a histogram of pixel edge directions or gradient directions is generated for each cell; and, finally, the histograms of all the cells are combined to extract the features of the face image. The feature vector computation by the HOG descriptor proceeds as follows [
<xref rid="B10-sensors-20-00342" ref-type="bibr">10</xref>
,
<xref rid="B13-sensors-20-00342" ref-type="bibr">13</xref>
,
<xref rid="B26-sensors-20-00342" ref-type="bibr">26</xref>
,
<xref rid="B45-sensors-20-00342" ref-type="bibr">45</xref>
]: firstly, divide the local image into regions called cells, and then calculate the amplitude of the first-order gradients of each cell in both the horizontal and vertical direction. The most common method is to apply a 1D mask, [–1 0 1].
<disp-formula id="FD2-sensors-20-00342">
<label>(2)</label>
<mml:math id="mm8">
<mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>G</mml:mi>
<mml:mi>x</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mtext> </mml:mtext>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:mi>I</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>+</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>,</mml:mo>
<mml:mtext> </mml:mtext>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>−</mml:mo>
<mml:mi>I</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>−</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>,</mml:mo>
<mml:mtext> </mml:mtext>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>,</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
<disp-formula id="FD3-sensors-20-00342">
<label>(3)</label>
<mml:math id="mm9">
<mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>G</mml:mi>
<mml:mi>y</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mtext> </mml:mtext>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:mi>I</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mtext> </mml:mtext>
<mml:mi>y</mml:mi>
<mml:mo>+</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>−</mml:mo>
<mml:mi>I</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mtext> </mml:mtext>
<mml:mi>y</mml:mi>
<mml:mo>−</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>,</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
where
<inline-formula>
<mml:math id="mm10">
<mml:mrow>
<mml:mrow>
<mml:mi>I</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mtext> </mml:mtext>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
is the pixel value of the point
<inline-formula>
<mml:math id="mm11">
<mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mtext> </mml:mtext>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
and
<inline-formula>
<mml:math id="mm12">
<mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>G</mml:mi>
<mml:mi>x</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mtext> </mml:mtext>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
and
<inline-formula>
<mml:math id="mm13">
<mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>G</mml:mi>
<mml:mi>y</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mtext> </mml:mtext>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
denote the horizontal gradient amplitude and the vertical gradient amplitude, respectively. The magnitude of the gradient and the orientation of each pixel (
<italic>x</italic>
,
<italic>y</italic>
) are computed as follows:
<disp-formula id="FD4-sensors-20-00342">
<label>(4)</label>
<mml:math id="mm14">
<mml:mrow>
<mml:mrow>
<mml:mi>G</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mtext> </mml:mtext>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:msqrt>
<mml:mrow>
<mml:msub>
<mml:mi>G</mml:mi>
<mml:mi>x</mml:mi>
</mml:msub>
<mml:msup>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mtext> </mml:mtext>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mi>G</mml:mi>
<mml:mi>y</mml:mi>
</mml:msub>
<mml:msup>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mtext> </mml:mtext>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:msqrt>
<mml:mo>,</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
<disp-formula id="FD5-sensors-20-00342">
<label>(5)</label>
<mml:math id="mm15">
<mml:mrow>
<mml:mrow>
<mml:mi>θ</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mtext> </mml:mtext>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi>tan</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>−</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msup>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:msub>
<mml:mi>G</mml:mi>
<mml:mi>y</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mtext> </mml:mtext>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>G</mml:mi>
<mml:mi>x</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mtext> </mml:mtext>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>.</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
</p>
<p>The gradient magnitude and orientation of each pixel in a cell are voted into nine orientation bins using tri-linear interpolation. The histogram of each cell is built from these pixel gradient directions and, finally, the histograms of all the cells are concatenated to extract the feature vector of the face image (see the gradient-computation sketch after this list). Karaaba et al. [
<xref rid="B44-sensors-20-00342" ref-type="bibr">44</xref>
] proposed a combination of different histograms of oriented gradients (HOG) to build a robust face recognition system. This technique is named “multi-HOG”.</p>
<p>The authors create a vector of distances between the target and the reference face images for identification. Arigbabu et al. [
<xref rid="B46-sensors-20-00342" ref-type="bibr">46</xref>
] proposed a novel face recognition system based on the Laplacian filter and the pyramid histogram of gradient (PHOG) descriptor. In addition, to investigate the face recognition problem, support vector machine (SVM) is used with different kernel functions.</p>
</list-item>
<list-item>
<p>Correlation filters: Face recognition systems based on the correlation filter (CF) have given good results in terms of robustness, location accuracy, efficiency, and discrimination. In the field of facial recognition, the correlation techniques have attracted great interest since the first use of an optical correlator [
<xref rid="B47-sensors-20-00342" ref-type="bibr">47</xref>
]. These techniques provide the following advantages: high ability for discrimination, desired noise robustness, shift-invariance, and inherent parallelism. On the basis of these advantages, many optoelectronic hybrid solutions of correlation filters (CFs) have been introduced such as the joint transform correlator (JTC) [
<xref rid="B48-sensors-20-00342" ref-type="bibr">48</xref>
] and VanderLugt correlator (VLC) [
<xref rid="B47-sensors-20-00342" ref-type="bibr">47</xref>
] techniques. The purpose of these techniques is to calculate the degree of similarity between target and reference images. The decision is taken by the detection of a correlation peak. Both techniques (VLC and JTC) are based on the “
<inline-formula>
<mml:math id="mm16">
<mml:mrow>
<mml:mrow>
<mml:mn>4</mml:mn>
<mml:mi>f</mml:mi>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
” optical configuration [
<xref rid="B37-sensors-20-00342" ref-type="bibr">37</xref>
]. This configuration is created by two convergent lenses (
<xref ref-type="fig" rid="sensors-20-00342-f004">Figure 4</xref>
). The face image
<inline-formula>
<mml:math id="mm17">
<mml:mrow>
<mml:mi>F</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>
is processed by the fast Fourier transform (FFT), realized by the first lens, to give the Fourier plane
<inline-formula>
<mml:math id="mm18">
<mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>S</mml:mi>
<mml:mi>F</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
. In this Fourier plane, a specific filter
<inline-formula>
<mml:math id="mm19">
<mml:mrow>
<mml:mi mathvariant="normal">P</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>
is applied (for example, the phase-only filter (POF) [
<xref rid="B2-sensors-20-00342" ref-type="bibr">2</xref>
]) using optoelectronic interfaces. Finally, to obtain the filtered face image
<inline-formula>
<mml:math id="mm20">
<mml:mrow>
<mml:mrow>
<mml:msup>
<mml:mi>F</mml:mi>
<mml:mo></mml:mo>
</mml:msup>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
(or the correlation plane), the inverse FFT (IFFT) is performed by the second lens in the output plane.</p>
<p>For example, the VLC technique is implemented with two cascaded Fourier transform stages realized by two lenses [
<xref rid="B4-sensors-20-00342" ref-type="bibr">4</xref>
], as presented in
<xref ref-type="fig" rid="sensors-20-00342-f005">Figure 5</xref>
. The VLC technique is presented as follows: firstly, a 2D-FFT is applied to the target image to get a target spectrum
<inline-formula>
<mml:math id="mm21">
<mml:mrow>
<mml:mi>S</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>
. After that, the target spectrum is multiplied by the filter obtained from the 2D-FFT of a reference image, and this product is placed in the Fourier plane. Next, the correlation result, recorded in the correlation plane, is obtained by applying an inverse FFT to this product (a numerical sketch of this correlation is given after this list).</p>
<p>The correlation result, described by the peak intensity, is used to determine the similarity degree between the target and reference images.
<disp-formula id="FD6-sensors-20-00342">
<label>(6)</label>
<mml:math id="mm22">
<mml:mrow>
<mml:mrow>
<mml:mi>C</mml:mi>
<mml:mo>=</mml:mo>
<mml:mi>F</mml:mi>
<mml:mi>F</mml:mi>
<mml:msup>
<mml:mi>T</mml:mi>
<mml:mrow>
<mml:mo>−</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msup>
<mml:mrow>
<mml:mo>{</mml:mo>
<mml:mrow>
<mml:msup>
<mml:mi>S</mml:mi>
<mml:mo>∗</mml:mo>
</mml:msup>
<mml:mo>∘</mml:mo>
<mml:mi>P</mml:mi>
<mml:mi>O</mml:mi>
<mml:mi>F</mml:mi>
</mml:mrow>
<mml:mo>}</mml:mo>
</mml:mrow>
<mml:mo>,</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
where
<inline-formula>
<mml:math id="mm23">
<mml:mrow>
<mml:mrow>
<mml:mi>F</mml:mi>
<mml:mi>F</mml:mi>
<mml:msup>
<mml:mi>T</mml:mi>
<mml:mrow>
<mml:mo>−</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
stands for the inverse fast Fourier transform (IFFT) operation, * represents the conjugate operation, and ∘ denotes the element-wise array multiplication. To enhance the matching process, Horner and Gianino [
<xref rid="B49-sensors-20-00342" ref-type="bibr">49</xref>
] proposed a phase-only filter (POF). The POF filter produces sharp correlation peaks with enhanced discrimination capability. The POF is an optimized filter defined as follows:
<disp-formula id="FD7-sensors-20-00342">
<label>(7)</label>
<mml:math id="mm24">
<mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>H</mml:mi>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mi>O</mml:mi>
<mml:mi>F</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>u</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>v</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msup>
<mml:mi>S</mml:mi>
<mml:mo>∗</mml:mo>
</mml:msup>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>u</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>v</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo>|</mml:mo>
<mml:mrow>
<mml:mi>S</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>u</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>v</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>|</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mfrac>
<mml:mo>,</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
where
<inline-formula>
<mml:math id="mm25">
<mml:mrow>
<mml:mrow>
<mml:msup>
<mml:mi>S</mml:mi>
<mml:mo>∗</mml:mo>
</mml:msup>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>u</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>v</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
is the complex conjugate of the 2D-FFT of the reference image. To evaluate the decision, the peak-to-correlation energy (PCE) is defined as the energy of the correlation peak normalized by the overall energy of the correlation plane.
<disp-formula id="FD8-sensors-20-00342">
<label>(8)</label>
<mml:math id="mm26">
<mml:mrow>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mi>C</mml:mi>
<mml:mi>E</mml:mi>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msubsup>
<mml:mstyle mathsize="100%" displaystyle="false">
<mml:mo>∑</mml:mo>
</mml:mstyle>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>j</mml:mi>
</mml:mrow>
<mml:mi>N</mml:mi>
</mml:msubsup>
<mml:msub>
<mml:mi>E</mml:mi>
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>a</mml:mi>
<mml:mi>k</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>j</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mstyle mathsize="100%" displaystyle="false">
<mml:mo>∑</mml:mo>
</mml:mstyle>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>j</mml:mi>
</mml:mrow>
<mml:mi>M</mml:mi>
</mml:msubsup>
<mml:msub>
<mml:mi>E</mml:mi>
<mml:mrow>
<mml:mi>c</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>l</mml:mi>
<mml:mi>a</mml:mi>
<mml:mi>t</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>n</mml:mi>
<mml:mo></mml:mo>
<mml:mi>p</mml:mi>
<mml:mi>l</mml:mi>
<mml:mi>a</mml:mi>
<mml:mi>n</mml:mi>
<mml:mi>e</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>j</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mfrac>
<mml:mo>,</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
where
<inline-formula>
<mml:math id="mm27">
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>
,
<inline-formula>
<mml:math id="mm28">
<mml:mrow>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>
are the coefficient coordinates;
<inline-formula>
<mml:math id="mm29">
<mml:mrow>
<mml:mi>M</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>
and
<inline-formula>
<mml:math id="mm30">
<mml:mrow>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>
are the size of the correlation plane and the size of the peak correlation spot, respectively;
<inline-formula>
<mml:math id="mm31">
<mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>E</mml:mi>
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>a</mml:mi>
<mml:mi>k</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
is the energy in the correlation peaks; and
<inline-formula>
<mml:math id="mm32">
<mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>E</mml:mi>
<mml:mrow>
<mml:mi>c</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>l</mml:mi>
<mml:mi>a</mml:mi>
<mml:mi>t</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>n</mml:mi>
<mml:mo></mml:mo>
<mml:mi>p</mml:mi>
<mml:mi>l</mml:mi>
<mml:mi>a</mml:mi>
<mml:mi>n</mml:mi>
<mml:mi>e</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
is the overall energy of the correlation plane. Correlation techniques are widely applied in recognition and identification applications [
<xref rid="B4-sensors-20-00342" ref-type="bibr">4</xref>
,
<xref rid="B37-sensors-20-00342" ref-type="bibr">37</xref>
,
<xref rid="B50-sensors-20-00342" ref-type="bibr">50</xref>
,
<xref rid="B51-sensors-20-00342" ref-type="bibr">51</xref>
,
<xref rid="B52-sensors-20-00342" ref-type="bibr">52</xref>
,
<xref rid="B53-sensors-20-00342" ref-type="bibr">53</xref>
]. For example, in the work of [
<xref rid="B4-sensors-20-00342" ref-type="bibr">4</xref>
], the authors demonstrated the efficiency of the VLC technique based on the “4f” configuration for identification using an Nvidia GeForce 8400 GS GPU. The POF filter is used for the decision. Another important work in this area of research is presented by Leonard et al. [
<xref rid="B50-sensors-20-00342" ref-type="bibr">50</xref>
], which showed the good performance and simplicity of correlation filters for the field of face recognition. In addition, many specific filters, such as POF, BPOF, Ad, and IF, are used in order to select the best filter based on its sensitivity to rotation, scale, and noise. Napoléon et al. [
<xref rid="B3-sensors-20-00342" ref-type="bibr">3</xref>
] introduced a novel system for identification and verification based on optimized 3D modeling under different illumination conditions, which allows faces to be reconstructed in different poses. In particular, to deform the synthetic model, an active shape model for detecting a set of key points on the face is proposed, as shown in
<xref ref-type="fig" rid="sensors-20-00342-f006">Figure 6</xref>
. The VanderLugt correlator is proposed to perform the identification, and the LBP descriptor is used to optimize the performance of the correlation technique under different illumination conditions. The experiments are performed on the Pointing Head Pose Image Database (PHPID) with an elevation ranging from −30° to +30°.</p>
</list-item>
</list>
</p>
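<p>The sketches below are minimal NumPy illustrations of the formulas presented in this subsection; the function names, cell sizes, and other parameter values are assumptions made for the sketches rather than the settings used in the cited works. The first sketch computes the LBP codes of Equation (1), following the convention described in the text (a neighbour greater than or equal to the centre pixel contributes a one), and concatenates per-cell histograms of these codes into a feature vector.</p>
```python
import numpy as np

def lbp_image(gray):
    """8-neighbour LBP code (Equation (1)) for every interior pixel."""
    g = gray.astype(np.int32)
    center = g[1:-1, 1:-1]
    # the 8 neighbours p1..p8, ordered clockwise from the top-left corner
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        codes += (neigh >= center).astype(np.int32) << bit  # s(.) thresholding
    return codes

def lbp_histogram(gray, grid=(8, 8)):
    """Concatenate per-cell histograms of LBP codes into one feature vector."""
    codes = lbp_image(gray)
    features = []
    for rows in np.array_split(codes, grid[0], axis=0):
        for cell in np.array_split(rows, grid[1], axis=1):
            hist, _ = np.histogram(cell, bins=256, range=(0, 256))
            features.append(hist / (hist.sum() + 1e-8))
    return np.concatenate(features)
```
<p>The second sketch computes the [−1 0 1] first-order gradients of Equations (2) and (3), the gradient magnitude of Equation (4), and the orientation of Equation (5); the orientation binning and block normalization of a complete HOG descriptor are omitted.</p>
```python
import numpy as np

def hog_gradients(image):
    """Per-pixel gradients, magnitude, and orientation (Equations (2)-(5))."""
    I = image.astype(np.float64)
    gx = np.zeros_like(I)
    gy = np.zeros_like(I)
    gx[:, 1:-1] = I[:, 2:] - I[:, :-2]      # Gx(x, y) = I(x+1, y) - I(x-1, y)
    gy[1:-1, :] = I[2:, :] - I[:-2, :]      # Gy(x, y) = I(x, y+1) - I(x, y-1)
    magnitude = np.sqrt(gx ** 2 + gy ** 2)  # Equation (4)
    orientation = np.arctan2(gy, gx)        # Equation (5); arctan2 handles Gx = 0
    return magnitude, orientation
```
<p>The third sketch is a purely numerical version of the VLC/POF correlation of Equations (6) and (7) and of the PCE criterion of Equation (8): the optical “4f” setup is replaced by FFT calls, and the size of the window around the correlation peak is an arbitrary choice for the sketch.</p>
```python
import numpy as np

def pof_filter(reference):
    """Phase-only filter (Equation (7)): conjugate spectrum over its modulus."""
    R = np.fft.fft2(reference)
    return np.conj(R) / (np.abs(R) + 1e-12)

def correlation_plane(target, reference):
    """Equation (6): multiply the target spectrum by the POF filter built from
    the reference image, then inverse-FFT to obtain the correlation plane."""
    S = np.fft.fft2(target)
    C = np.fft.ifft2(S * pof_filter(reference))
    return np.abs(np.fft.fftshift(C)) ** 2  # correlation-plane energy

def pce(plane, half_width=2):
    """Equation (8): energy of the correlation peak over the total plane energy."""
    py, px = np.unravel_index(np.argmax(plane), plane.shape)
    peak = plane[max(py - half_width, 0):py + half_width + 1,
                 max(px - half_width, 0):px + half_width + 1]
    return peak.sum() / plane.sum()
```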
</sec>
<sec id="sec3dot2-sensors-20-00342">
<title>3.2. Key-Points-Based Techniques</title>
<p>The key-points-based techniques are used to detect specific geometric features, according to some geometric information of the face surface (e.g., the distance between the eyes, the width of the head). These techniques can be defined by two significant steps, key-point detection and feature extraction [
<xref rid="B3-sensors-20-00342" ref-type="bibr">3</xref>
,
<xref rid="B30-sensors-20-00342" ref-type="bibr">30</xref>
,
<xref rid="B54-sensors-20-00342" ref-type="bibr">54</xref>
,
<xref rid="B55-sensors-20-00342" ref-type="bibr">55</xref>
]. The first step focuses on detecting the key-point features of the face image. The second step focuses on representing the information carried by these key-point features. These techniques can cope with missing parts and occlusions; the scale-invariant feature transform (SIFT), binary robust independent elementary features (BRIEF), and speeded-up robust features (SURF) techniques are widely used to describe the features of the face image.
<list list-type="bullet">
<list-item>
<p>Scale invariant feature transform (SIFT) [
<xref rid="B56-sensors-20-00342" ref-type="bibr">56</xref>
,
<xref rid="B57-sensors-20-00342" ref-type="bibr">57</xref>
]: SIFT is an algorithm used to detect and describe the local features of an image. It is widely used to link two images through their local descriptors, which contain the information needed to match them. The main idea of the SIFT descriptor is to convert the image into a representation composed of points of interest that carry the characteristic information of the face image. SIFT is invariant to scale and rotation; it is fast and commonly used today, which matters for real-time applications, but one of its disadvantages is the matching time of the key points. The algorithm consists of four steps: (1) detection of scale-space extrema, (2) localization of characteristic points, (3) orientation assignment, and (4) computation of a descriptor for each characteristic point (a minimal matching sketch is given after this list). A framework to detect the key points based on the SIFT descriptor was proposed by L. Lenc et al. [
<xref rid="B56-sensors-20-00342" ref-type="bibr">56</xref>
], where they use the SIFT technique in combination with a Kepenekci approach for the face recognition.</p>
</list-item>
<list-item>
<p>Speeded-up robust features (SURF) [
<xref rid="B29-sensors-20-00342" ref-type="bibr">29</xref>
,
<xref rid="B57-sensors-20-00342" ref-type="bibr">57</xref>
]: the SURF technique is inspired by SIFT, but uses wavelets and an approximation of the Hessian determinant to achieve better performance [
<xref rid="B29-sensors-20-00342" ref-type="bibr">29</xref>
]. SURF is a detector and descriptor that claims to achieve the same, or even better, results than SIFT in terms of repeatability, distinctiveness, and robustness. The main advantage of SURF is its execution time, which is lower than that of the SIFT descriptor. Besides, the SIFT descriptor is better adapted to describing faces affected by illumination changes, scaling, translation, and rotation [
<xref rid="B57-sensors-20-00342" ref-type="bibr">57</xref>
]. To detect feature points, SURF seeks to find the maximum of an approximation of the Hessian matrix using integral images to dramatically reduce the processing computational time.
<xref ref-type="fig" rid="sensors-20-00342-f007">Figure 7</xref>
shows an example of SURF descriptor for face recognition using AR face datasets [
<xref rid="B58-sensors-20-00342" ref-type="bibr">58</xref>
].</p>
</list-item>
<list-item>
<p>Binary robust independent elementary features (BRIEF) [
<xref rid="B30-sensors-20-00342" ref-type="bibr">30</xref>
,
<xref rid="B57-sensors-20-00342" ref-type="bibr">57</xref>
]: BRIEF is a binary descriptor that is simple and fast to compute. It is based on differences between pixel intensities and, in terms of evaluation, is similar to the family of binary descriptors such as binary robust invariant scalable keypoints (BRISK) and fast retina keypoint (FREAK). To reduce noise, the BRIEF descriptor first smooths the image patches; after that, the differences between pixel intensities are used to build the descriptor. This descriptor has achieved the best performance and accuracy in pattern recognition.
</list-item>
<list-item>
<p>Fast retina keypoint (FREAK) [
<xref rid="B57-sensors-20-00342" ref-type="bibr">57</xref>
,
<xref rid="B59-sensors-20-00342" ref-type="bibr">59</xref>
]: the FREAK descriptor proposed by Alahi et al. [
<xref rid="B59-sensors-20-00342" ref-type="bibr">59</xref>
] uses a retinal sampling circular grid. This descriptor uses 43 sampling patterns based on retinal receptive fields that are shown in
<xref ref-type="fig" rid="sensors-20-00342-f008">Figure 8</xref>
). To extract a binary descriptor, these 43 receptive fields are sampled with a density that decreases with the distance from the patch’s center, which yields about a thousand potential pairs. Each pair is smoothed with Gaussian functions. Finally, the binary descriptor is obtained by setting a threshold and considering the sign of the differences between pairs.</p>
</list-item>
</list>
</p>
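<p>As noted in the SIFT item above, key-point descriptors are compared by matching them between a target and a reference image. The sketch below shows one possible matching scheme using OpenCV and Lowe’s ratio test; it assumes an OpenCV build in which SIFT is available (cv2.SIFT_create), and the image paths, the ratio threshold, and the use of the surviving match count as a similarity score are assumptions made for this illustration.</p>
```python
import cv2

def count_keypoint_matches(path_a, path_b, ratio=0.75):
    """Detect SIFT key points in two face images and count ratio-test matches."""
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    _, des_a = sift.detectAndCompute(img_a, None)
    _, des_b = sift.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des_a, des_b, k=2)
            if m.distance < ratio * n.distance]  # Lowe's ratio test
    return len(good)  # more surviving matches suggests the same identity
```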
</sec>
<sec id="sec3dot3-sensors-20-00342">
<title>3.3. Summary of Local Approaches</title>
<p>
<xref rid="sensors-20-00342-t001" ref-type="table">Table 1</xref>
summarizes the local approaches that we presented in this section. Various techniques are introduced to locate and identify human faces based on some regions of the face, geometric features, and facial expressions. These techniques provide robust recognition under different illumination conditions and facial expressions. However, they are sensitive to noise, while being invariant to translations and rotations.</p>
</sec>
</sec>
<sec id="sec4-sensors-20-00342">
<title>4. Holistic Approach</title>
<p>Holistic or subspace approaches process the whole face; that is, they do not require extracting face regions or feature points (eyes, mouth, nose, and so on). These approaches represent the face image by a matrix of pixels, and this matrix is often converted into feature vectors to facilitate processing; the feature vectors are then embedded in a low-dimensional space. Although holistic or subspace techniques are sensitive to variations (facial expressions, illumination, and poses), the fact that no local feature extraction is needed makes these approaches widely used. Moreover, these approaches can be divided into categories, namely linear and non-linear techniques, based on the method used to represent the subspace.</p>
<sec id="sec4dot1-sensors-20-00342">
<title>4.1. Linear Techniques</title>
<p>The most popular linear techniques used for face recognition systems are the Eigenfaces (principal component analysis; PCA) technique, the Fisherfaces (linear discriminant analysis; LDA) technique, and independent component analysis (ICA).
<list list-type="bullet">
<list-item>
<p>Eigenface [
<xref rid="B34-sensors-20-00342" ref-type="bibr">34</xref>
] and principal component analysis (PCA) [
<xref rid="B27-sensors-20-00342" ref-type="bibr">27</xref>
,
<xref rid="B62-sensors-20-00342" ref-type="bibr">62</xref>
]: Eigenfaces is one of the popular holistic methods used to extract feature points of the face image. This approach is based on the principal component analysis (PCA) technique. The principal components created by the PCA technique are used as Eigenfaces or face templates. The PCA technique transforms a number of possibly correlated variables into a small number of uncorrelated variables called “principal components”. The purpose of PCA is to reduce the large dimensionality of the data space (observed variables) to the smaller intrinsic dimensionality of the feature space (independent variables), which is needed to describe the data economically.
<xref ref-type="fig" rid="sensors-20-00342-f009">Figure 9</xref>
shows how the face can be represented by a small number of features. PCA calculates the Eigenvectors of the covariance matrix and projects the original data onto the lower-dimensional feature space defined by the Eigenvectors with the largest Eigenvalues. PCA has been used in face representation and recognition, where the computed Eigenvectors are referred to as Eigenfaces (as shown in
<xref ref-type="fig" rid="sensors-20-00342-f010">Figure 10</xref>
).</p>
<p>An image may also be considered as a vector of dimension
<inline-formula>
<mml:math id="mm33">
<mml:mrow>
<mml:mrow>
<mml:mi>M</mml:mi>
<mml:mo>×</mml:mo>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
, so that a typical image of size 4 × 4 becomes a vector of dimension 16. Let the training set of images be
<inline-formula>
<mml:math id="mm34">
<mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo>{</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mtext> </mml:mtext>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mn>3</mml:mn>
</mml:msub>
<mml:mo>…</mml:mo>
<mml:mtext> </mml:mtext>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mi>N</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>}</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
. The average face of the set is defined by the following:
<disp-formula id="FD9-sensors-20-00342">
<label>(9)</label>
<mml:math id="mm35">
<mml:mrow>
<mml:mrow>
<mml:mover accent="false">
<mml:mi>X</mml:mi>
<mml:mo>¯</mml:mo>
</mml:mover>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mn>1</mml:mn>
<mml:mi>N</mml:mi>
</mml:mfrac>
<mml:munderover>
<mml:mstyle mathsize="100%" displaystyle="true">
<mml:mo>∑</mml:mo>
</mml:mstyle>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>N</mml:mi>
</mml:munderover>
<mml:mi>X</mml:mi>
<mml:msub>
<mml:mtext> </mml:mtext>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>.</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
</p>
<p>Then calculate the covariance matrix, which represents the scatter of all feature vectors around the average vector. The covariance matrix
<inline-formula>
<mml:math id="mm36">
<mml:mrow>
<mml:mi>Q</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>
is defined by the following:
<disp-formula id="FD10-sensors-20-00342">
<label>(10)</label>
<mml:math id="mm37">
<mml:mrow>
<mml:mrow>
<mml:mi>Q</mml:mi>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mn>1</mml:mn>
<mml:mi>N</mml:mi>
</mml:mfrac>
<mml:munderover>
<mml:mstyle mathsize="100%" displaystyle="true">
<mml:mo>∑</mml:mo>
</mml:mstyle>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>N</mml:mi>
</mml:munderover>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mover accent="false">
<mml:mi>X</mml:mi>
<mml:mo>¯</mml:mo>
</mml:mover>
<mml:mo>−</mml:mo>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mover accent="false">
<mml:mi>X</mml:mi>
<mml:mo>¯</mml:mo>
</mml:mover>
<mml:mo>−</mml:mo>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mi mathvariant="normal">T</mml:mi>
</mml:msup>
<mml:mo>.</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
</p>
<p>The Eigenvectors and corresponding Eigen-values are computed using
<disp-formula id="FD11-sensors-20-00342">
<label>(11)</label>
<mml:math id="mm38">
<mml:mrow>
<mml:mrow>
<mml:mi>C</mml:mi>
<mml:mi>V</mml:mi>
<mml:mo>=</mml:mo>
<mml:mi>λ</mml:mi>
<mml:mi>V</mml:mi>
<mml:mo>,</mml:mo>
<mml:mtext>     </mml:mtext>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>V</mml:mi>
<mml:mi>ϵ</mml:mi>
<mml:msub>
<mml:mi>R</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mtext> </mml:mtext>
<mml:mi>V</mml:mi>
<mml:mo>≠</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>,</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
where
<inline-formula>
<mml:math id="mm39">
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>
is the set of eigenvectors of the matrix
<inline-formula>
<mml:math id="mm40">
<mml:mrow>
<mml:mi>Q</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>
associated with its eigenvalue
<inline-formula>
<mml:math id="mm41">
<mml:mrow>
<mml:mi>λ</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>
. Project all the training images of
<inline-formula>
<mml:math id="mm42">
<mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>i</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
person onto the corresponding Eigen-subspace:
<disp-formula id="FD12-sensors-20-00342">
<label>(12)</label>
<mml:math id="mm43">
<mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mi>y</mml:mi>
<mml:mi>k</mml:mi>
<mml:mi>i</mml:mi>
</mml:msubsup>
</mml:mrow>
<mml:mrow>
<mml:mo>=</mml:mo>
<mml:msup>
<mml:mi>w</mml:mi>
<mml:mi>T</mml:mi>
</mml:msup>
<mml:mtext>  </mml:mtext>
</mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>)</mml:mo>
<mml:mo>,</mml:mo>
<mml:mtext>     </mml:mtext>
<mml:mo>(</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>,</mml:mo>
<mml:mtext> </mml:mtext>
<mml:mn>2</mml:mn>
<mml:mo>,</mml:mo>
<mml:mtext> </mml:mtext>
<mml:mn>3</mml:mn>
<mml:mtext> </mml:mtext>
<mml:mo>…</mml:mo>
<mml:mtext> </mml:mtext>
<mml:mi>N</mml:mi>
<mml:mo>)</mml:mo>
<mml:mo>,</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
where the
<inline-formula>
<mml:math id="mm44">
<mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mi>y</mml:mi>
<mml:mi>k</mml:mi>
<mml:mi>i</mml:mi>
</mml:msubsup>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
are the projections of
<inline-formula>
<mml:math id="mm45">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>
and are called the principal components, also known as Eigenfaces. The face images are represented as a linear combination of these “principal component” vectors. PCA and LDA are two different algorithms used to extract facial features; wavelet fusion and neural networks are applied to classify the facial features, and the ORL database is used for evaluation.
<xref ref-type="fig" rid="sensors-20-00342-f010">Figure 10</xref>
shows the first five Eigenfaces constructed from the ORL database [
<xref rid="B63-sensors-20-00342" ref-type="bibr">63</xref>
].</p>
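<p>A compact numerical sketch of Equations (9)–(12) is given below. It computes the mean face, obtains the principal components through an SVD of the centred training matrix (which spans the same subspace as the Eigenvectors of the covariance matrix without forming that matrix explicitly), and projects faces onto this subspace. The variable names, the number of retained components, and the use of SVD are choices made for this sketch, not the exact procedure of the cited works.</p>
```python
import numpy as np

def eigenfaces(X, k=20):
    """Mean face, top-k Eigenfaces, and projections (Equations (9)-(12)).

    X is an (N, d) array holding N flattened training face images."""
    mean_face = X.mean(axis=0)                 # Equation (9)
    centered = X - mean_face
    # SVD of the centred data: rows of Vt are the Eigenvectors of the
    # covariance matrix of Equation (10), ordered by decreasing Eigenvalue.
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    components = Vt[:k]                        # the first k Eigenfaces
    projections = centered @ components.T      # Equation (12)
    return mean_face, components, projections

def project(face, mean_face, components):
    """Project a new flattened face image onto the Eigenface subspace."""
    return (face - mean_face) @ components.T
```
<p>Recognition can then proceed, for example, by a nearest-neighbour search among the projected training faces.</p>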
</list-item>
<list-item>
<p>Fisherface and linear discriminative analysis (LDA) [
<xref rid="B64-sensors-20-00342" ref-type="bibr">64</xref>
,
<xref rid="B65-sensors-20-00342" ref-type="bibr">65</xref>
]: The Fisherface method is based on the same principle of similarity as the Eigenfaces method. The objective of this method is to reduce the high dimensional image space based on the linear discriminant analysis (LDA) technique instead of the PCA technique. The LDA technique is commonly used for dimensionality reduction and face recognition [
<xref rid="B66-sensors-20-00342" ref-type="bibr">66</xref>
]. PCA is an unsupervised technique, while LDA is a supervised learning technique that uses the class information of the data. For all samples of all classes, the within-class scatter matrix
<inline-formula>
<mml:math id="mm46">
<mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>S</mml:mi>
<mml:mi>W</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
and the between-class scatter matrix
<inline-formula>
<mml:math id="mm47">
<mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>S</mml:mi>
<mml:mi>B</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
are defined as follows:
<disp-formula id="FD15-sensors-20-00342">
<label>(13)</label>
<mml:math id="mm48">
<mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>S</mml:mi>
<mml:mi>B</mml:mi>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:msubsup>
<mml:mstyle mathsize="100%" displaystyle="true">
<mml:mo>∑</mml:mo>
</mml:mstyle>
<mml:mrow>
<mml:mi>I</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>C</mml:mi>
</mml:msubsup>
<mml:msub>
<mml:mi>M</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>−</mml:mo>
<mml:mi>μ</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>−</mml:mo>
<mml:mi>μ</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mi>T</mml:mi>
</mml:msup>
<mml:mo>,</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
<disp-formula id="FD16-sensors-20-00342">
<label>(14)</label>
<mml:math id="mm49">
<mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>S</mml:mi>
<mml:mi>w</mml:mi>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:msubsup>
<mml:mstyle mathsize="100%" displaystyle="true">
<mml:mo>∑</mml:mo>
</mml:mstyle>
<mml:mrow>
<mml:mi>I</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>C</mml:mi>
</mml:msubsup>
<mml:munder>
<mml:mstyle mathsize="100%" displaystyle="true">
<mml:mo>∑</mml:mo>
</mml:mstyle>
<mml:mrow>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>k</mml:mi>
</mml:msub>
<mml:mi>ϵ</mml:mi>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
</mml:munder>
<mml:msub>
<mml:mi>M</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>k</mml:mi>
</mml:msub>
<mml:mo></mml:mo>
<mml:mi>μ</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>k</mml:mi>
</mml:msub>
<mml:mo></mml:mo>
<mml:mi>μ</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mi>T</mml:mi>
</mml:msup>
<mml:mo>,</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
where
<inline-formula>
<mml:math id="mm50">
<mml:mrow>
<mml:mi>μ</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>
is the mean vector of samples belonging to class
<inline-formula>
<mml:math id="mm51">
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>
,
<inline-formula>
<mml:math id="mm52">
<mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
represents the set of samples belonging to class
<inline-formula>
<mml:math id="mm53">
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>
with
<inline-formula>
<mml:math id="mm54">
<mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>k</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
being the k-th image of that class,
<inline-formula>
<mml:math id="mm55">
<mml:mrow>
<mml:mi>c</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>
is the number of distinct classes, and
<inline-formula>
<mml:math id="mm56">
<mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>M</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
is the number of training samples in class
<inline-formula>
<mml:math id="mm57">
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>
.
<inline-formula>
<mml:math id="mm58">
<mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>S</mml:mi>
<mml:mi>B</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
describes the scatter of features around the overall mean for all face classes and
<inline-formula>
<mml:math id="mm59">
<mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>S</mml:mi>
<mml:mi>w</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
describes the scatter of features around the mean of each face class. The goal is to maximize the ratio det|S_B| / det|S_W|, in other words, to minimize S_W while maximizing S_B (a NumPy sketch of the PCA and LDA computations is given after this list).
<xref ref-type="fig" rid="sensors-20-00342-f011">Figure 11</xref>
shows the first five Eigenfaces and Fisherfaces obtained from the ORL database [
<xref rid="B63-sensors-20-00342" ref-type="bibr">63</xref>
].</p>
</list-item>
<list-item>
<p>Independent component analysis (ICA) [
<xref rid="B35-sensors-20-00342" ref-type="bibr">35</xref>
]: The ICA technique is used to compute the basis vectors of a given space. The goal is to find a linear transformation that minimizes the statistical dependence between these basis vectors, which makes it possible to analyze independent components; unlike in PCA, the resulting vectors are not required to be orthogonal to each other. Moreover, ICA represents the images in terms of statistically independent (rather than merely uncorrelated) variables, which can yield a more efficient representation when the images are acquired from different sources.</p>
</list-item>
<list-item>
<p>Improvements of the PCA, LDA, and ICA techniques: To improve the linear subspace techniques, many research works have been developed. Z. Cui et al. [
<xref rid="B67-sensors-20-00342" ref-type="bibr">67</xref>
] proposed a new spatial face region descriptor (SFRD) method to extract the face region and to deal with noise variation. This method proceeds as follows: divide each face image into several spatial regions, and extract token-frequency (TF) features from each region by sum-pooling the reconstruction coefficients over the patches within that region. Finally, extract the SFRD for face images by applying a variant of the PCA technique called “whitened principal component analysis (WPCA)” to reduce the feature dimension and remove the noise in the leading eigenvectors. In addition, the authors in [
<xref rid="B68-sensors-20-00342" ref-type="bibr">68</xref>
] proposed a variant of the LDA called probabilistic linear discriminant analysis (PLDA) to seek directions in space that have maximum discriminability, and are hence most suitable for both face recognition and frontal face recognition under varying pose.</p>
</list-item>
<list-item>
<p>Gabor filters: Gabor filters are spatial sinusoids located by a Gaussian window that allows for extracting the features from images by selecting their frequency, orientation, and scale. To enhance the performance under unconstrained environments for face recognition, Gabor filters are transformed according to the shape and pose to extract the feature vectors of face image combined with the PCA in the work of [
<xref rid="B69-sensors-20-00342" ref-type="bibr">69</xref>
]. The PCA is applied to the Gabor features to remove the redundancies and to get the best face images description. Finally, the cosine metric is used to evaluate the similarity.</p>
</list-item>
<list-item>
<p>Frequency domain analysis [
<xref rid="B70-sensors-20-00342" ref-type="bibr">70</xref>
,
<xref rid="B71-sensors-20-00342" ref-type="bibr">71</xref>
]: Finally, the analysis techniques in the frequency domain offer a representation of the human face as a function of low-frequency components that present high energy. The discrete Fourier transform (DFT), discrete cosine transform (DCT), or discrete wavelet transform (DWT) techniques are independent of the data, and thus do not require training.</p>
</list-item>
<list-item>
<p>Discrete wavelet transform (DWT): Another linear technique used for face recognition. In the work of [
<xref rid="B70-sensors-20-00342" ref-type="bibr">70</xref>
], the authors used a two-dimensional discrete wavelet transform (2D-DWT) method for face recognition using a new patch strategy. A non-uniform patch strategy for the top-level’s low-frequency sub-band is proposed by using an integral projection technique for two top-level high-frequency sub-bands of 2D-DWT based on the average image of all training samples. This patch strategy is better for retaining the integrity of local information, and is more suitable to reflect the structure feature of the face image. When constructing the patching strategy using the testing and training samples, the decision is performed using the neighbor classifier. Many databases are used to evaluate this method, including Labeled Faces in Wild (LFW), Extended Yale B, Face Recognition Technology (FERET), and AR.</p>
</list-item>
<list-item>
<p>Discrete cosine transform (DCT) [
<xref rid="B71-sensors-20-00342" ref-type="bibr">71</xref>
] can be used for global and local face recognition systems. DCT is a transformation that represents a finite sequence of data as the sum of a series of cosine functions oscillating at different frequencies. Beyond face recognition systems [
<xref rid="B71-sensors-20-00342" ref-type="bibr">71</xref>
], this transform is widely used in applications ranging from audio and image compression to spectral methods for the numerical solution of differential equations. The required steps to implement the DCT technique are presented in Algorithm 1 (a direct numerical sketch is given after the algorithm).</p>
</list-item>
</list>
</p>
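
As a concrete illustration of the holistic computations above, the following NumPy sketch (not taken from the surveyed works; the array names X, y, the class-mean form of the scatter matrices, and the use of a pseudo-inverse are illustrative assumptions) computes Eigenfaces by PCA, the scatter matrices of Equations (13) and (14), and Fisherface directions that maximize the between- to within-class scatter ratio.

import numpy as np

def eigenfaces(X, k):
    """PCA: mean face and top-k principal components of row-stacked face vectors X."""
    mu = X.mean(axis=0)
    Xc = X - mu
    # the right singular vectors of the centred data are the eigenvectors
    # of the covariance matrix, i.e. the eigenfaces
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return mu, Vt[:k]

def scatter_matrices(X, y):
    """Within-class S_W and between-class S_B scatter (cf. Equations (13)-(14),
    written here with the class means, as in the standard LDA formulation)."""
    mu = X.mean(axis=0)
    d = X.shape[1]
    S_W = np.zeros((d, d))
    S_B = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mu_c = Xc.mean(axis=0)
        S_W += (Xc - mu_c).T @ (Xc - mu_c)
        diff = (mu_c - mu).reshape(-1, 1)
        S_B += len(Xc) * (diff @ diff.T)
    return S_W, S_B

def fisherfaces(X, y, k):
    """Directions maximizing the between- to within-class scatter ratio."""
    S_W, S_B = scatter_matrices(X, y)
    # a pseudo-inverse keeps the sketch usable when S_W is singular (few samples)
    evals, evecs = np.linalg.eig(np.linalg.pinv(S_W) @ S_B)
    order = np.argsort(-evals.real)[:k]
    return evecs.real[:, order].T

In practice, PCA is usually applied first to reduce the dimensionality so that the within-class scatter matrix is well conditioned before the LDA step.
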
<p>Owing to their limited ability to handle non-linearity, the subspace or holistic techniques cannot represent the exact details of the geometric variations of face images. Linear techniques offer a faithful description of face images when the underlying data structures are linear. However, when the face image data structures are non-linear, many works use a so-called “kernel” function to map the data into a larger space in which the problem becomes linear. The required steps to implement the DCT technique are presented as Algorithm 1.
<array orientation="portrait">
<tbody>
<tr>
<td align="left" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1" colspan="1">
<bold>Algorithm 1.</bold>
DCT Algorithm</td>
</tr>
<tr>
<td align="left" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">
<list list-type="simple">
<list-item>
<label>
<italic>1.</italic>
 </label>
<p>The input image is N by M;</p>
</list-item>
<list-item>
<label>
<italic>2.</italic>
 </label>
<p>f(i,j) is the intensity of the pixel in row i and column j;</p>
</list-item>
<list-item>
<label>
<italic>3.</italic>
 </label>
<p>
<italic>F(u,v) is the DCT coefficient in row u and column v of the DCT matrix:</italic>
<disp-formula>
F(u,v) = \frac{2\,C(u)\,C(v)}{N} \sum_{i=1}^{N} \sum_{j=1}^{N} f(i,j)\, \cos\left(\frac{(2i-1)(u-1)\pi}{2N}\right) \cos\left(\frac{(2j-1)(v-1)\pi}{2N}\right)
</disp-formula>
<disp-formula>
= \frac{2\,C(u)\,C(v)}{N} \sum_{i=0}^{N-1} \sum_{j=0}^{N-1} f(i,j)\, \cos\left(\frac{(2i+1)u\pi}{2N}\right) \cos\left(\frac{(2j+1)v\pi}{2N}\right)
</disp-formula>
<italic>where</italic> 0 \le u, v \le N-1 <italic>and</italic> C(n) = \begin{cases} 1/\sqrt{2} & (n = 0) \\ 1 & (n \neq 0) \end{cases}
</p>
</list-item>
<list-item>
<label>
<italic>4.</italic>
 </label>
<p>For most images, much of the signal energy lies at low frequencies; these appear in the upper left corner of the DCT.</p>
</list-item>
<list-item>
<label>
<italic>5.</italic>
 </label>
<p>Compression is achieved because the lower-right values represent higher frequencies and are often small enough to be neglected with little visible distortion.</p>
</list-item>
<list-item>
<label>
<italic>6.</italic>
 </label>
<p>The DCT input is an 8 by 8 array of integers. This array contains each pixel’s grayscale level;</p>
</list-item>
<list-item>
<label>
<italic>7.</italic>
 </label>
<p>8-bit pixels have levels from 0 to 255.</p>
</list-item>
</list>
</td>
</tr>
</tbody>
</array>
</p>
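
To make Algorithm 1 concrete, here is a minimal NumPy sketch (an illustration, not the authors' implementation) of the direct 2-D DCT-II of an N×N block using the 0-based form of the cosine sum; the 8×8 block and the 4×4 low-frequency crop are illustrative choices.

import numpy as np

def dct2(block):
    """Direct 2-D DCT-II of an N x N block, mirroring the double cosine sum of
    Algorithm 1 (0-based indices).  O(N^4), so only meant for small blocks."""
    N = block.shape[0]
    C = lambda n: 1.0 / np.sqrt(2.0) if n == 0 else 1.0
    F = np.zeros((N, N))
    for u in range(N):
        for v in range(N):
            s = 0.0
            for i in range(N):
                for j in range(N):
                    s += (block[i, j]
                          * np.cos((2 * i + 1) * u * np.pi / (2 * N))
                          * np.cos((2 * j + 1) * v * np.pi / (2 * N)))
            F[u, v] = 2.0 * C(u) * C(v) * s / N
    return F

# usage: keep the low-frequency coefficients (upper-left corner) as features
block = np.random.randint(0, 256, (8, 8)).astype(float)   # one 8x8 grayscale patch
features = dct2(block)[:4, :4].ravel()                     # 16 low-frequency features

A fast separable implementation (e.g., scipy.fft.dctn) would normally replace the quadruple loop; the direct form is shown only to mirror the formula.
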
</sec>
<sec id="sec4dot2-sensors-20-00342">
<title>4.2. Nonlinear Techniques</title>
<p>
<list list-type="bullet">
<list-item>
<p>Kernel PCA (KPCA) [
<xref rid="B28-sensors-20-00342" ref-type="bibr">28</xref>
]: KPCA is an improved version of PCA that uses kernel methods. KPCA computes the Eigenvectors (Eigenfaces) of the kernel matrix, whereas PCA computes those of the covariance matrix. In other words, KPCA applies the PCA technique in the high-dimensional feature space induced by the associated kernel function. Three significant steps of the KPCA algorithm are used to calculate the kernel matrix
<inline-formula>
<mml:math id="mm68">
<mml:mrow>
<mml:mi mathvariant="normal">K</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>
of a distribution consisting of
<inline-formula>
<mml:math id="mm69">
<mml:mrow>
<mml:mi mathvariant="normal">n</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>
data points
<inline-formula>
<mml:math id="mm70">
<mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="normal">x</mml:mi>
<mml:mi mathvariant="normal">i</mml:mi>
</mml:msub>
<mml:mo>∈</mml:mo>
<mml:msup>
<mml:mi mathvariant="normal">R</mml:mi>
<mml:mi mathvariant="normal">d</mml:mi>
</mml:msup>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
, after which the data points are mapped into a high-dimensional feature space
<inline-formula>
<mml:math id="mm71">
<mml:mrow>
<mml:mi mathvariant="normal">F</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>
, as shown in Algorithm 2.
<array orientation="portrait">
<tbody>
<tr>
<td align="left" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1" colspan="1">
<bold>Algorithm 2.</bold>
Kernel PCA Algorithm</td>
</tr>
<tr>
<td align="left" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">
<list list-type="bullet">
<list-item>
<p>
<italic>Step 1: Determine the dot product of the matrix</italic>
<inline-formula>
<mml:math id="mm72">
<mml:mrow>
<mml:mi>K</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>
<italic>using kernel function:</italic>
<inline-formula>
<mml:math id="mm73">
<mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>K</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mi>k</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>j</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
.</p>
</list-item>
<list-item>
<p>
<italic>Step 2: Calculate the Eigenvectors from the resultant matrix</italic>
<inline-formula>
<mml:math id="mm74">
<mml:mrow>
<mml:mi>K</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>
<italic>and normalize with the function:</italic>
<inline-formula>
<mml:math id="mm75">
<mml:mrow>
<mml:mrow>
<mml:mi>γ</mml:mi>
<mml:mi>k</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>α</mml:mi>
<mml:mi>k</mml:mi>
<mml:mi>α</mml:mi>
<mml:mi>k</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
.</p>
</list-item>
<list-item>
<p>
<italic>Step 3: Calculate the test point projection on to Eigenvectors</italic>
<inline-formula>
<mml:math id="mm76">
<mml:mrow>
<mml:mrow>
<mml:mi>V</mml:mi>
<mml:mi>k</mml:mi>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
<italic>using kernel function:</italic>
<inline-formula>
<mml:math id="mm77">
<mml:mrow>
<mml:mrow>
<mml:mi>k</mml:mi>
<mml:mi>P</mml:mi>
<mml:mi>C</mml:mi>
<mml:mi>k</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>x</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>V</mml:mi>
<mml:mi>k</mml:mi>
<mml:mi>φ</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>x</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:msubsup>
<mml:mstyle mathsize="100%" displaystyle="false">
<mml:mo>∑</mml:mo>
</mml:mstyle>
<mml:mi>i</mml:mi>
<mml:mi>m</mml:mi>
</mml:msubsup>
<mml:mi>α</mml:mi>
<mml:msub>
<mml:mi>k</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mtext> </mml:mtext>
<mml:mi>k</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
</p>
</list-item>
</list>
</td>
</tr>
</tbody>
</array>
</p>
<p>The performance of the KPCA technique depends on the choice of the kernel function K. Typically used kernels include the linear, Gaussian, and polynomial kernels (a NumPy sketch of the steps of Algorithm 2 is given after this list). KPCA has been successfully used for novelty detection [
<xref rid="B72-sensors-20-00342" ref-type="bibr">72</xref>
] or for speech recognition [
<xref rid="B62-sensors-20-00342" ref-type="bibr">62</xref>
].</p>
</list-item>
<list-item>
<p>Kernel linear discriminant analysis (KDA) [
<xref rid="B73-sensors-20-00342" ref-type="bibr">73</xref>
]: the KLDA technique is a kernel extension of the linear LDA technique, in the same way that KPCA is a kernel extension of PCA. Arashloo et al. [
<xref rid="B73-sensors-20-00342" ref-type="bibr">73</xref>
] proposed a nonlinear binary class-specific kernel discriminant analysis classifier (CS-KDA) based on the spectral regression kernel discriminant analysis. Other nonlinear techniques have also been used in the context of facial recognition:</p>
</list-item>
<list-item>
<p>Gabor-KLDA [
<xref rid="B74-sensors-20-00342" ref-type="bibr">74</xref>
].</p>
</list-item>
<list-item>
<p>Evolutionary weighted principal component analysis (EWPCA) [
<xref rid="B75-sensors-20-00342" ref-type="bibr">75</xref>
].</p>
</list-item>
<list-item>
<p>Kernelized maximum average margin criterion (KMAMC), SVM, and kernel Fisher discriminant analysis (KFD) [
<xref rid="B76-sensors-20-00342" ref-type="bibr">76</xref>
].</p>
</list-item>
<list-item>
<p>Wavelet transform (WT), radon transform (RT), and cellular neural networks (CNN) [
<xref rid="B77-sensors-20-00342" ref-type="bibr">77</xref>
].</p>
</list-item>
<list-item>
<p>Joint transform correlator-based two-layer neural network [
<xref rid="B78-sensors-20-00342" ref-type="bibr">78</xref>
].</p>
</list-item>
<list-item>
<p>Kernel Fisher discriminant analysis (KFD) and KPCA [
<xref rid="B79-sensors-20-00342" ref-type="bibr">79</xref>
].</p>
</list-item>
<list-item>
<p>Locally linear embedding (LLE) and LDA [
<xref rid="B80-sensors-20-00342" ref-type="bibr">80</xref>
].</p>
</list-item>
<list-item>
<p>Nonlinear locality preserving with deep networks [
<xref rid="B81-sensors-20-00342" ref-type="bibr">81</xref>
].</p>
</list-item>
<list-item>
<p>Nonlinear DCT and kernel discriminative common vector (KDCV) [
<xref rid="B82-sensors-20-00342" ref-type="bibr">82</xref>
].</p>
</list-item>
</list>
</p>
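
The following NumPy sketch illustrates the three steps of Algorithm 2 with a Gaussian kernel; it is a simplified illustration, and the kernel width gamma, the explicit feature-space centring, and the omission of centring for the test kernel row are assumptions not spelled out in the algorithm.

import numpy as np

def rbf_kernel(A, B, gamma=1e-3):
    """Gaussian kernel k(a, b) = exp(-gamma * ||a - b||^2)."""
    d2 = np.square(A).sum(1)[:, None] + np.square(B).sum(1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * d2)

def kpca_fit(X, n_components, gamma=1e-3):
    """Steps 1-2 of Algorithm 2: kernel matrix, eigendecomposition, and
    normalization of the coefficient vectors alpha_k."""
    n = len(X)
    K = rbf_kernel(X, X, gamma)
    one = np.full((n, n), 1.0 / n)
    K = K - one @ K - K @ one + one @ K @ one          # centring in feature space
    evals, evecs = np.linalg.eigh(K)
    idx = np.argsort(-evals)[:n_components]
    alphas = evecs[:, idx] / np.sqrt(np.maximum(evals[idx], 1e-12))
    return alphas

def kpca_project(x_new, X_train, alphas, gamma=1e-3):
    """Step 3: projection of test points onto the kernel principal components
    (centring of the test kernel row is omitted for brevity)."""
    k = rbf_kernel(np.atleast_2d(x_new), X_train, gamma)
    return k @ alphas
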
</sec>
<sec id="sec4dot3-sensors-20-00342">
<title>4.3. Summary of Holistic Approaches</title>
<p>
<xref rid="sensors-20-00342-t002" ref-type="table">Table 2</xref>
summarizes the different subspace techniques discussed in this section, which are introduced to reduce the dimensionality and the complexity of the detection or recognition steps. Linear and non-linear techniques offer robust recognition under different lighting conditions and facial expressions. Although these techniques (linear and non-linear) allow a better reduction in dimensionality and improve the recognition rate, they are not invariant to translations and rotations compared with local techniques.</p>
</sec>
</sec>
<sec id="sec5-sensors-20-00342">
<title>5. Hybrid Approach</title>
<sec id="sec5dot1-sensors-20-00342">
<title>5.1. Technique Presentation</title>
<p>The hybrid approaches are based on local and subspace features in order to use the benefits of both subspace and local techniques, which have the potential to offer better performance for face recognition systems.
<list list-type="bullet">
<list-item>
<p>Gabor wavelet and linear discriminant analysis (GW-LDA) [
<xref rid="B91-sensors-20-00342" ref-type="bibr">91</xref>
]: Fathima et al. [
<xref rid="B91-sensors-20-00342" ref-type="bibr">91</xref>
] proposed a hybrid approach combining Gabor wavelets and linear discriminant analysis (HGWLDA) for face recognition. The grayscale face image is first approximated and reduced in dimension, and then convolved with a bank of Gabor filters with varying orientations and scales. After that, the 2D-LDA subspace technique is used to maximize the inter-class separation and reduce the intra-class variation. To classify and recognize a test face image, the k-nearest neighbour (k-NN) classifier is used, comparing the test face features with each of the training set features. The experimental results show the robustness of this approach under different lighting conditions (a simplified sketch of such a Gabor-plus-LDA-plus-k-NN pipeline is given after this list).</p>
</list-item>
<list-item>
<p>Over-complete LBP (OCLBP), LDA, and within class covariance normalization (WCCN): Barkan et al. [
<xref rid="B92-sensors-20-00342" ref-type="bibr">92</xref>
] proposed a new face image representation based on over-complete LBP (OCLBP). This representation is a multi-scale modified version of the LBP technique. The LDA technique is performed to reduce the high-dimensional representations. Finally, within-class covariance normalization (WCCN) is the metric learning technique used for face recognition.</p>
</list-item>
<list-item>
<p>Advanced correlation filters and Walsh LBP (WLBP): Juefei et al. [
<xref rid="B93-sensors-20-00342" ref-type="bibr">93</xref>
] implemented a single-sample periocular-based alignment-robust face recognition technique based on high-dimensional Walsh LBP (WLBP). This technique utilizes only one sample per subject class and generates new face images under a wide range of 3D rotations using the 3D generic elastic model, which is both accurate and computationally inexpensive. The LFW database is used for evaluation, and the proposed method outperformed the state-of-the-art algorithms under four evaluation protocols with a high accuracy of 89.69%.</p>
</list-item>
<list-item>
<p>Multi-sub-region-based correlation filter bank (MS-CFB): Yan et al. [
<xref rid="B94-sensors-20-00342" ref-type="bibr">94</xref>
] propose an effective feature extraction technique for robust face recognition, named multi-sub-region-based correlation filter bank (MS-CFB). MS-CFB extracts the local features independently for each face sub-region. After that, the different face sub-regions are concatenated to give optimal overall correlation outputs. This technique reduces the complexity, achieves higher recognition rates, and provides a better feature representation for recognition compared with several state-of-the-art techniques on various public face databases.</p>
</list-item>
<list-item>
<p>SIFT features, Fisher vectors, and PCA: Simonyan et al. [
<xref rid="B64-sensors-20-00342" ref-type="bibr">64</xref>
] have developed a novel method for face recognition based on the SIFT descriptor and Fisher vectors. The authors propose a discriminative dimensionality reduction owing to the high dimensionality of the Fisher vectors. After that, these vectors are projected into a low dimensional subspace with a linear projection. The objective of this methodology is to describe the image based on dense SIFT features and Fisher vectors encoding to achieve high performance on the challenging LFW dataset in both restricted and unrestricted settings.</p>
</list-item>
<list-item>
<p>CNNs and stacked auto-encoder (SAE) techniques: Ding et al. [
<xref rid="B95-sensors-20-00342" ref-type="bibr">95</xref>
] proposed a multimodal deep face representation (MM-DFR) framework based on convolutional neural networks (CNNs), which are applied to the original holistic face image, a frontal face rendered by a 3D face model, and uniformly sampled image patches (standing for holistic facial features and local facial features, respectively). The proposed MM-DFR framework has two steps: a CNN is used to extract the features, and a three-layer stacked auto-encoder (SAE) is employed to compress the high-dimensional deep features into a compact face signature. The LFW database is used to evaluate the identification performance of MM-DFR. The flowchart of the proposed MM-DFR framework is shown in
<xref ref-type="fig" rid="sensors-20-00342-f012">Figure 12</xref>
.</p>
</list-item>
<list-item>
<p>PCA and ANFIS: Sharma et al. [
<xref rid="B96-sensors-20-00342" ref-type="bibr">96</xref>
] propose an efficient pose-invariant face recognition system based on PCA technique and ANFIS classifier. The PCA technique is employed to extract the features of an image, and the ANFIS classifier is developed for identification under a variety of pose conditions. The performance of the proposed system based on PCA–ANFIS is better than ICA–ANFIS and LDA–ANFIS for the face recognition task. The ORL database is used for evaluation.</p>
</list-item>
<list-item>
<p>DCT and PCA: Ojala et al. [
<xref rid="B97-sensors-20-00342" ref-type="bibr">97</xref>
] develop a fast face recognition system based on DCT and PCA techniques. Genetic algorithm (GA) technique is used to extract facial features, which allows to remove irrelevant features and reduces the number of features. In addition, the DCT–PCA technique is used to extract the features and reduce the dimensionality. The minimum Euclidian distance (ED) as a measurement is used for the decision. Various face databases are used to demonstrate the effectiveness of this system.</p>
</list-item>
<list-item>
<p>PCA, SIFT, and iterative closest point (ICP): Mian et al. [
<xref rid="B98-sensors-20-00342" ref-type="bibr">98</xref>
] present a multimodal (2D and 3D) face recognition system based on hybrid matching to achieve efficiency and robustness to facial expressions. The Hotelling transform is performed to automatically correct the pose of a 3D face using its texture. After that, in order to form a rejection classifier, a novel 3D spherical face representation (SFR) in conjunction with the SIFT descriptor is used, which provide efficient recognition in the case of large galleries by eliminating a large number of candidates’ faces. A modified iterative closest point (ICP) algorithm is used for the decision. This system is less sensitive and robust to facial expressions, which achieved a 98.6% verification rate and 96.1% identification rate on the complete FRGC v2 database.</p>
</list-item>
<list-item>
<p>PCA, local Gabor binary pattern histogram sequence (LGBPHS), and GABOR wavelets: Cho et al. [
<xref rid="B99-sensors-20-00342" ref-type="bibr">99</xref>
] proposed a computationally efficient hybrid face recognition system that employs both holistic and local features. The PCA technique is used to reduce the dimensionality. After that, the local Gabor binary pattern histogram sequence (LGBPHS) technique is employed to realize the recognition stage, which proposed to reduce the complexity caused by the Gabor filters. The experimental results show a better recognition rate compared with the PCA and Gabor wavelet techniques under illumination variations. The Extended Yale Face Database B is used to demonstrate the effectiveness of this system.</p>
</list-item>
<list-item>
<p>PCA and Fisher linear discriminant (FLD) [
<xref rid="B100-sensors-20-00342" ref-type="bibr">100</xref>
,
<xref rid="B101-sensors-20-00342" ref-type="bibr">101</xref>
]: Sing et al. [
<xref rid="B101-sensors-20-00342" ref-type="bibr">101</xref>
] propose a novel hybrid technique for face representation and recognition, which exploits both local and subspace features. In order to extract the local features, the whole image is divided into sub-regions, while the global features are extracted directly from the whole image. After that, PCA and Fisher linear discriminant (FLD) techniques are applied to the fused feature vector to reduce the dimensionality. The CMU-PIE, FERET, and AR face databases are used for the evaluation.</p>
</list-item>
<list-item>
<p>SPCA–KNN [
<xref rid="B102-sensors-20-00342" ref-type="bibr">102</xref>
]: Kamencay et al. [
<xref rid="B102-sensors-20-00342" ref-type="bibr">102</xref>
] develop a new face recognition method based on SIFT features, as well as PCA and KNN techniques. The Hessian–Laplace detector along with SPCA descriptor is performed to extract the local features. SPCA is introduced to identify the human face. KNN classifier is introduced to identify the closest human faces from the trained features. The results of the experiment have a recognition rate of 92% for the unsegmented ESSEX database and 96% for the segmented database (700 training images).</p>
</list-item>
<list-item>
<p>Convolution operations, LSTM recurrent units, and ELM classifier [
<xref rid="B103-sensors-20-00342" ref-type="bibr">103</xref>
]: Sun et al. [
<xref rid="B103-sensors-20-00342" ref-type="bibr">103</xref>
] propose a hybrid deep structure called CNN–LSTM–ELM in order to achieve sequential human activity recognition (HAR). Their proposed CNN–LSTM–ELM structure is evaluated using the OPPORTUNITY dataset, which contains 46,495 training samples and 9894 testing samples, and each sample is a sequence. The model training and testing runs on a GPU with 1536 cores, 1050 MHz clock speed, and 8 GB RAM. The flowchart of the proposed CNN–LSTM–ELM structure is shown in
<xref ref-type="fig" rid="sensors-20-00342-f013">Figure 13</xref>
[
<xref rid="B103-sensors-20-00342" ref-type="bibr">103</xref>
].</p>
</list-item>
</list>
</p>
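
As an illustration of how such hybrid pipelines are typically assembled, the sketch below builds a small bank of Gabor filters and concatenates their down-sampled responses into a feature vector; the filter parameters, the 8-pixel sub-sampling step, and the use of SciPy are illustrative assumptions. The resulting vectors would then be projected with LDA (Section 4.1) and matched with a k-NN classifier (Section 6.2), in the spirit of the HGWLDA approach.

import numpy as np
from scipy.signal import convolve2d    # assumption: SciPy for the 2-D convolution

def gabor_kernel(ksize=15, sigma=3.0, theta=0.0, lambd=8.0, gamma=0.5):
    """Real part of a Gabor filter: a sinusoid under a Gaussian envelope."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2)) * np.cos(2 * np.pi * xr / lambd)

def gabor_features(img, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Concatenate coarsely sub-sampled responses of a small Gabor filter bank."""
    feats = []
    for th in thetas:
        resp = convolve2d(img, gabor_kernel(theta=th), mode="same")
        feats.append(np.abs(resp)[::8, ::8].ravel())    # 8-pixel sub-sampling
    return np.concatenate(feats)
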
</sec>
<sec id="sec5dot2-sensors-20-00342">
<title>5.2. Summary of Hybrid Approaches</title>
<p>
<xref rid="sensors-20-00342-t003" ref-type="table">Table 3</xref>
summarizes the hybrid approaches presented in this section. Various techniques are introduced to improve the performance and accuracy of recognition systems. Combining local and subspace approaches provides robust recognition and dimensionality reduction under different illumination conditions and facial expressions. However, these techniques are reported to remain sensitive to noise, while being invariant to translations and rotations.</p>
</sec>
</sec>
<sec id="sec6-sensors-20-00342">
<title>6. Assessment of Face Recognition Approaches</title>
<p>In the last step of recognition, the face extracted from the background during the face detection step is compared with known faces stored in a specific database. To make the decision, several techniques of comparison are used. This section describes the most common techniques used to make the decision and comparison.</p>
<sec id="sec6dot1-sensors-20-00342">
<title>6.1. Measures of Similarity or Distances</title>
<p>
<list list-type="bullet">
<list-item>
<p>Peak-to-correlation energy (PCE) or peak-to-sidelobe ratio (PSR) [
<xref rid="B18-sensors-20-00342" ref-type="bibr">18</xref>
]: The PCE was introduced in (8).</p>
</list-item>
<list-item>
<p>Euclidean distance [
<xref rid="B54-sensors-20-00342" ref-type="bibr">54</xref>
]: The Euclidean distance is one of the most basic measures used to compute the direct distance between two points in a plane. If we have two points
<inline-formula>
<mml:math id="mm78">
<mml:mrow>
<mml:mrow>
<mml:mi mathvariant="normal">P</mml:mi>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
and
<inline-formula>
<mml:math id="mm79">
<mml:mrow>
<mml:mrow>
<mml:mi mathvariant="normal">P</mml:mi>
<mml:mn>2</mml:mn>
<mml:mo>,</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
with the coordinates
<inline-formula>
<mml:math id="mm80">
<mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mn>1</mml:mn>
<mml:mo>,</mml:mo>
<mml:mtext> </mml:mtext>
<mml:mi>y</mml:mi>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
and
<inline-formula>
<mml:math id="mm81">
<mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mn>2</mml:mn>
<mml:mo>,</mml:mo>
<mml:mtext> </mml:mtext>
<mml:mi>y</mml:mi>
<mml:mn>2</mml:mn>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>,</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
respectively, the calculation of the Euclidean distance between them would be as follows:
<disp-formula id="FD17-sensors-20-00342">
<label>(15)</label>
d_E(P_1, P_2) = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2}.
</disp-formula>
</p>
<p>In general, the Euclidean distance between two points
P = (p_1, p_2, ..., p_n) and Q = (q_1, q_2, ..., q_n)
in the n-dimensional space would be defined by the following:
<disp-formula id="FD18-sensors-20-00342">
<label>(16)</label>
d_E(P, Q) = \sqrt{\sum_{i=1}^{n} (p_i - q_i)^2}.
</disp-formula>
</p>
</list-item>
<list-item>
<p>Bhattacharyya distance [
<xref rid="B104-sensors-20-00342" ref-type="bibr">104</xref>
,
<xref rid="B105-sensors-20-00342" ref-type="bibr">105</xref>
]: The Bhattacharyya distance is a statistical measure that quantifies the similarity between two discrete or continuous probability distributions. This distance is particularly known for its low processing time and its low sensitivity to noise. For the probability distributions
<italic>p</italic>
and
<italic>q</italic>
defined on the same domain, the distance of Bhattacharyya is defined as follows:
<disp-formula id="FD19-sensors-20-00342">
<label>(17)</label>
D_B(p, q) = -\ln\big(BC(p, q)\big),
</disp-formula>
<disp-formula id="FD20-sensors-20-00342">
<label>(18)</label>
BC(p, q) = \sum_{x \in X} \sqrt{p(x)\,q(x)} \quad \text{(a)}; \qquad BC(p, q) = \int \sqrt{p(x)\,q(x)}\, dx \quad \text{(b)},
</disp-formula>
where
<inline-formula>
<mml:math id="mm88">
<mml:mrow>
<mml:mrow>
<mml:mi>B</mml:mi>
<mml:mi>C</mml:mi>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
is the Bhattacharyya coefficient, defined as Equation (18a) for discrete probability distributions and as Equation (18b) for continuous probability distributions. In both cases, 0 ≤
<italic>BC</italic>
≤ 1 and 0 ≤
<italic>DB</italic>
≤ ∞. In its simplest formulation, the Bhattacharyya distance between two classes that follow a normal distribution can be calculated from a mean (
<inline-formula>
<mml:math id="mm89">
<mml:mrow>
<mml:mi>μ</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>
) and the variance (
<inline-formula>
<mml:math id="mm90">
<mml:mrow>
<mml:mrow>
<mml:msup>
<mml:mi>σ</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
):
<disp-formula id="FD22-sensors-20-00342">
<label>(19)</label>
D_B(p, q) = \frac{1}{4} \ln\left( \frac{1}{4} \left( \frac{\sigma_p^2}{\sigma_q^2} + \frac{\sigma_q^2}{\sigma_p^2} + 2 \right) \right) + \frac{1}{4} \left( \frac{(\mu_p - \mu_q)^2}{\sigma_p^2 + \sigma_q^2} \right).
</disp-formula>
</p>
</list-item>
<list-item>
<p>Chi-squared distance [
<xref rid="B106-sensors-20-00342" ref-type="bibr">106</xref>
]: The Chi-squared
<inline-formula>
<mml:math id="mm92">
<mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msup>
<mml:mi>X</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
distance weights each bin difference by the bin values, so that differences in bins with few occurrences receive the same relevance as differences in bins with many occurrences (NumPy sketches of the distances in this list are given after the list). To compare two histograms
S_1 = (u_1, ..., u_m) and S_2 = (w_1, ..., w_m)
, the Chi-squared
<inline-formula>
<mml:math id="mm95">
<mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msup>
<mml:mi>X</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
distance can be defined as follows:
<disp-formula id="FD23-sensors-20-00342">
<label>(20)</label>
X^2 = D(S_1, S_2) = \frac{1}{2} \sum_{i=1}^{m} \frac{(u_i - w_i)^2}{u_i + w_i}.
</disp-formula>
</p>
</list-item>
</list>
</p>
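
The three distance measures above can be written compactly; the following NumPy sketch (illustrative only, with a small epsilon added as an assumption to avoid division by zero in empty histogram bins) implements Equations (16), (19), and (20).

import numpy as np

def euclidean(p, q):
    """Equation (16): straight-line distance between two feature vectors."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return np.sqrt(np.sum((p - q) ** 2))

def bhattacharyya_normal(mu_p, var_p, mu_q, var_q):
    """Equation (19): Bhattacharyya distance between two normal classes."""
    return (0.25 * np.log(0.25 * (var_p / var_q + var_q / var_p + 2.0))
            + 0.25 * (mu_p - mu_q) ** 2 / (var_p + var_q))

def chi_squared(h1, h2, eps=1e-12):
    """Equation (20): Chi-squared distance between two histograms."""
    h1, h2 = np.asarray(h1, float), np.asarray(h2, float)
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))
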
</sec>
<sec id="sec6dot2-sensors-20-00342">
<title>6.2. Classifiers</title>
<p>There are many face classification techniques in the literature that allow selecting, from a few examples, the group or class to which an object belongs. Some of them are based on statistics, such as the Bayesian classifier and correlation [
<xref rid="B18-sensors-20-00342" ref-type="bibr">18</xref>
], and so on, and others based on the regions that generate the different classes in the decision space, such as K-means [
<xref rid="B9-sensors-20-00342" ref-type="bibr">9</xref>
], CNN [
<xref rid="B103-sensors-20-00342" ref-type="bibr">103</xref>
], artificial neural networks (ANNs) [
<xref rid="B37-sensors-20-00342" ref-type="bibr">37</xref>
], support vector machines (SVMs) [
<xref rid="B26-sensors-20-00342" ref-type="bibr">26</xref>
,
<xref rid="B107-sensors-20-00342" ref-type="bibr">107</xref>
], k-nearest neighbors (K-NNs), decision trees (DTs), and so on.
<list list-type="bullet">
<list-item>
<p>Support vector machines (SVMs) [
<xref rid="B13-sensors-20-00342" ref-type="bibr">13</xref>
,
<xref rid="B26-sensors-20-00342" ref-type="bibr">26</xref>
]: The feature vectors extracted by any descriptor can be classified by a linear or nonlinear SVM. The SVM classifier separates the classes with an optimal hyperplane. To determine this hyperplane, only the closest points of the whole training set are needed; these points are called support vectors (
<xref ref-type="fig" rid="sensors-20-00342-f014">Figure 14</xref>
).</p>
<p>There is an infinite number of hyperplanes capable of perfectly separating two classes, so the SVM selects the hyperplane that maximizes the minimal distance between the training examples and the hyperplane (i.e., the distance between the support vectors and the hyperplane). This distance is called the “margin”. The SVM classifier thus computes the optimal hyperplane that assigns a set of labeled training data to the correct classes. The training data are given as follows:
<disp-formula id="FD24-sensors-20-00342">
<label>(21)</label>
D = \left\{ (x_i, y_i) \;\middle|\; x_i \in R^n,\; y_i \in \{-1, 1\},\; i = 1, \ldots, l \right\}.
</disp-formula>
</p>
<p>Here,
<inline-formula>
<mml:math id="mm98">
<mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
are the training feature vectors and
<inline-formula>
<mml:math id="mm99">
<mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
are the corresponding set of
<inline-formula>
<mml:math id="mm100">
<mml:mrow>
<mml:mi>l</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>
labels (1 or −1). An SVM tries to find the hyperplane that separates the samples with the smallest error. The classification function is obtained by computing the distance between the input vector and the hyperplane:
<disp-formula id="FD25-sensors-20-00342">
<label>(22)</label>
w \cdot x_i - b = C_f,
</disp-formula>
where
<inline-formula>
<mml:math id="mm102">
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>
and
<inline-formula>
<mml:math id="mm103">
<mml:mrow>
<mml:mi>b</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>
are the parameters of the model. Shen et al. [
<xref rid="B108-sensors-20-00342" ref-type="bibr">108</xref>
] proposed the Gabor filter to extract the face features and applied the SVM for classification. The proposed FaceNet method achieves a record accuracy of 99.63% and 95.12% on the LFW and YouTube Faces DB datasets, respectively (a minimal classification sketch using a linear SVM is given after this list).</p>
</list-item>
<list-item>
<p>k-nearest neighbor (k-NN) [
<xref rid="B17-sensors-20-00342" ref-type="bibr">17</xref>
,
<xref rid="B91-sensors-20-00342" ref-type="bibr">91</xref>
]: k-NN is a lazy learning algorithm: during training it simply stores the training samples and does not build an explicit model, unlike, for example, decision trees.</p>
</list-item>
<list-item>
<p>K-means [
<xref rid="B9-sensors-20-00342" ref-type="bibr">9</xref>
,
<xref rid="B109-sensors-20-00342" ref-type="bibr">109</xref>
]: It is called K-means because it represents each of the groups by the average (or weighted average) of its points, called the centroid. In the K-means algorithm, it is necessary to specify a priori the number of clusters k that one wishes to form in order to start the process.</p>
</list-item>
<list-item>
<p>Deep learning (DL): A machine learning technique that uses neural network architectures. The term “deep” refers to the number of hidden layers in the neural network. While conventional neural networks have only one or two hidden layers, deep neural networks (DNNs) contain several, as presented in
<xref ref-type="fig" rid="sensors-20-00342-f015">Figure 15</xref>
.</p>
</list-item>
</list>
</p>
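
A minimal classification sketch follows; it assumes scikit-learn (which is not used in the surveyed works) and random placeholder feature vectors, and simply shows how descriptor outputs could be fed to a linear SVM of the maximum-margin form discussed above.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC   # assumption: scikit-learn, purely for illustration

# X: one feature vector per face (e.g. LBP histograms or PCA projections),
# y: integer identity labels -- random placeholders here, purely illustrative.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 128))
y = rng.integers(0, 10, size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# a linear SVM searches for the maximum-margin hyperplane w.x - b of Equation (22);
# a kernel (e.g. 'rbf') can be substituted when the classes are not linearly separable
clf = SVC(kernel="linear", C=1.0).fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
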
<p>Various variants of neural networks have been developed in the last years, such as convolutional neural networks (CNN) [
<xref rid="B14-sensors-20-00342" ref-type="bibr">14</xref>
,
<xref rid="B110-sensors-20-00342" ref-type="bibr">110</xref>
] and recurrent neural networks (RNN) [
<xref rid="B111-sensors-20-00342" ref-type="bibr">111</xref>
], which are very effective for image detection and recognition tasks. CNNs are a very successful deep model and are used today in many applications [
<xref rid="B112-sensors-20-00342" ref-type="bibr">112</xref>
]. From a structural point of view, CNNs are made up of three different types of layers: convolution layers, pooling layers, and fully-connected layers; a minimal sketch of such a network is given after the following list.
<list list-type="order">
<list-item>
<p>
<italic>Convolutional layer</italic>
: sometimes called the feature extractor layer because the features of the image are extracted within this layer. Convolution preserves the spatial relationship between pixels by learning image features over small squares of the input image. The input image is convolved with a set of learnable filters, which produces feature maps (activation maps) that are fed as input to the next convolutional layer. The convolutional layer is typically followed by a rectified linear unit (ReLU) activation that converts all negative values to zero, which keeps the computation efficient since only part of the neurons are activated at a time.</p>
</list-item>
<list-item>
<p>
<italic>Pooling layer:</italic>
used to reduce dimensions, with the aim of reducing processing times by retaining the most important information after convolution. This layer basically reduces the number of parameters and computation in the network, controlling overfitting by progressively reducing the spatial size of the network. There are two operations in this layer: average pooling and maximum pooling:
<list list-type="simple">
<list-item>
<label>-</label>
<p>Average-pooling takes all the elements of the sub-matrix, calculates their average, and stores the value in the output matrix.</p>
</list-item>
<list-item>
<label>-</label>
<p>Max-pooling searches for the highest value found in the sub-matrix and saves it in the output matrix.</p>
</list-item>
</list>
</p>
</list-item>
<list-item>
<p>
<italic>Fully-connected layer</italic>
: in this layer, each neuron is connected to all the activations of the previous layer. It is used to classify images into different categories by training.</p>
</list-item>
</list>
</p>
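<p>As a concrete illustration of the three layer types listed above, the following minimal PyTorch sketch stacks convolution + ReLU, max- and average-pooling, and a fully-connected classifier; the input size (one-channel 112 × 92 faces, ORL-like), channel counts, and number of identities are assumptions, not values taken from the surveyed works.</p>
<preformat>
# Minimal, hypothetical CNN sketch combining the three layer types described above.
import torch
import torch.nn as nn

class TinyFaceCNN(nn.Module):
    def __init__(self, num_identities: int = 40):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # convolutional layer (feature extractor)
            nn.ReLU(),                                   # sets negative activations to zero
            nn.MaxPool2d(2),                             # max-pooling: largest value per 2x2 window
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AvgPool2d(2),                             # average-pooling: mean of each 2x2 window
        )
        self.classifier = nn.Linear(32 * 28 * 23, num_identities)  # fully-connected layer

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(torch.flatten(x, start_dim=1))

logits = TinyFaceCNN()(torch.randn(1, 1, 112, 92))  # one grayscale 112 x 92 face
print(logits.shape)                                 # torch.Size([1, 40])
</preformat>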
<p>Wen et al. [
<xref rid="B113-sensors-20-00342" ref-type="bibr">113</xref>
] introduce a new supervision signal, called center loss, for the face recognition task in order to improve the discriminative power of the deeply learned features. Specifically, the proposed center loss function is trainable and easy to optimize in the CNNs. Several important face recognition benchmarks are used for evaluation including LFW, YTF, and MegaFace Challenge. Passalis and Tefas [
<xref rid="B114-sensors-20-00342" ref-type="bibr">114</xref>
] propose a supervised codebook learning method for the bag-of-features representation that is able to learn face retrieval-oriented codebooks. This allows the use of significantly smaller codebooks, improving both retrieval time and storage requirements. Liu et al. [
<xref rid="B115-sensors-20-00342" ref-type="bibr">115</xref>
] and Amato et al. [
<xref rid="B116-sensors-20-00342" ref-type="bibr">116</xref>
] propose a deep face recognition technique under an open-set protocol based on the CNN technique. A face dataset composed of 39,037 face images belonging to 42 different identities is used to perform the experiments. Taigman et al. [
<xref rid="B117-sensors-20-00342" ref-type="bibr">117</xref>
] present a system (DeepFace) able to outperform existing systems with only very minimal adaptation. It is trained on a large dataset of faces acquired from a population vastly different than the one used to construct the evaluation benchmarks. This technique achieves an accuracy of 97.35% on the LFW. Ma et al. [
<xref rid="B118-sensors-20-00342" ref-type="bibr">118</xref>
] introduce a robust local binary pattern (LBP) guiding pooling (G-RLBP) mechanism to improve the recognition rates of the CNN models, which can successfully lower the noise impact. Koo et al. [
<xref rid="B119-sensors-20-00342" ref-type="bibr">119</xref>
] propose a multimodal human recognition method that uses both the face and body and is based on a deep CNN. Cho et al. [
<xref rid="B120-sensors-20-00342" ref-type="bibr">120</xref>
] propose a nighttime face detection method based on CNN technique for visible-light images. Koshy and Mahmood [
<xref rid="B121-sensors-20-00342" ref-type="bibr">121</xref>
] develop deep architectures for face liveness detection that uses a combination of texture analysis and a CNN technique to classify the captured image as real or fake. Elmahmudi and Ugail [
<xref rid="B122-sensors-20-00342" ref-type="bibr">122</xref>
] present the performance of machine learning for face recognition using partial faces and other manipulations of the face, such as rotation and zooming, which are used as training and recognition cues. The experimental results on the tasks of face verification and face identification show that the model obtained by the proposed DNN training framework achieves 97.3% accuracy on the LFW database with low training complexity. Seibold et al. [
<xref rid="B123-sensors-20-00342" ref-type="bibr">123</xref>
] proposed a morphing attack detection method based on DNNs. A fully automatic face image morphing pipeline with exchangeable components was used to generate morphing attacks, train neural networks based on these data, and analyze their accuracy. Yim et al. [
<xref rid="B124-sensors-20-00342" ref-type="bibr">124</xref>
] propose a new deep architecture based on a novel type of multitask learning, which can achieve superior performance in rotating to a target-pose face image from an arbitrary pose and illumination image while preserving identity. Nguyen et al. [
<xref rid="B111-sensors-20-00342" ref-type="bibr">111</xref>
] propose a new approach for detecting presentation attack face images to enhance the security level of a face recognition system. The objective of this study was the use of a very deep stacked CNN–RNN network to learn the discrimination features from a sequence of face images. Finally, Bajrami et al. [
<xref rid="B125-sensors-20-00342" ref-type="bibr">125</xref>
] present experiment results with LDA and DNN for face recognition, while their efficiency and performance are tested on the LFW dataset. The experimental results show that the DNN method achieves better recognition accuracy, and the recognition time is much faster than that of the LDA method in large-scale datasets.</p>
</sec>
<sec id="sec6dot3-sensors-20-00342">
<title>6.3. Databases Used</title>
<p>The most commonly used databases for face recognition systems under different conditions are Pointing Head Pose Image Database (PHPID) [
<xref rid="B126-sensors-20-00342" ref-type="bibr">126</xref>
], Labeled Faces in Wild (LFW) [
<xref rid="B127-sensors-20-00342" ref-type="bibr">127</xref>
], FERET [
<xref rid="B15-sensors-20-00342" ref-type="bibr">15</xref>
,
<xref rid="B16-sensors-20-00342" ref-type="bibr">16</xref>
], ORL, and Yale. These databases provide face images acquired under different conditions and supply data for both supervised and unsupervised learning. Supervised learning relies on two training configurations: the image-restricted setting, in which only “same” or “not same” binary labels are available in the training splits, and the image-unrestricted setting, in which the identities of the people in each pair are also provided.
<list list-type="bullet">
<list-item>
<p>LFW (Labeled Faces in the Wild) database was created in October 2007. It contains 13,233 images of 5749 subjects, of whom 1680 have at least two images and the rest have a single image. The face images were collected from the Internet, then pre-processed and localized by the Viola–Jones detector at a resolution of 250 × 250 pixels. Most of them are in color, although some are in grayscale; they are stored in JPG format and organized in folders (a loading sketch is given after this list).</p>
</list-item>
<list-item>
<p>FERET (Face Recognition Technology) database was created in 15 sessions in a semi-controlled environment between August 1993 and July 1996. It contains 1564 sets of images, for a total of 14,126 images. The duplicate sets belong to subjects already present in the individual image sets and were generally captured on a different day. Some images of the same subject were taken up to a few years apart and can be used to study facial changes that appear over time. The images are 24-bit RGB color images with a resolution of 512 × 768 pixels.</p>
</list-item>
<list-item>
<p>AR face database was created by Aleix Martínez and Robert Benavente at the Computer Vision Center (CVC) of the Autonomous University of Barcelona in June 1998. It contains more than 4000 images of 126 subjects, including 70 men and 56 women, captured at the CVC under a controlled environment. The images are frontal views of the subjects with different facial expressions, three different lighting conditions, and several accessories: scarves, glasses, or sunglasses. Two imaging sessions were performed with the same subjects, 14 days apart. The images have a resolution of 576 × 768 pixels and a depth of 24 bits, in RGB RAW format.</p>
</list-item>
<list-item>
<p>ORL Database of Faces was collected between April 1992 and April 1994 at the AT & T laboratory in Cambridge. It consists of 10 images per subject for 40 subjects, giving a total of 400 images. For some subjects, the images were taken at different times, with varying illumination and facial expressions: eyes open/closed, smiling/not smiling, with or without glasses. The images were taken against a dark homogeneous background, with the subjects in an upright, frontal position, allowing some small rotation. They are grayscale images with a resolution of 92 × 112 pixels.</p>
</list-item>
<list-item>
<p>Extended Yale Face B database contains 16,128 grayscale images of 640 × 480 pixels covering 28 individuals under 9 poses and 64 different lighting conditions. It also includes a set of cropped images containing only the face region.</p>
</list-item>
<list-item>
<p>Pointing Head Pose Image Database (PHPID) is one of the most widely used databases for face recognition. It contains 2790 monocular face images of 15 persons, with tilt angles from −90° to +90° and variations in pan. Every person has two series of 93 different poses (93 images each). The subjects have various skin colors, and images were taken with or without glasses.</p>
</list-item>
</list>
</p>
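<p>As a small practical note, the LFW database described above can be downloaded and loaded directly through scikit-learn; the sketch below is an illustration only, and the resize factor and min_faces_per_person threshold are arbitrary choices (the call downloads the data on first use).</p>
<preformat>
# Minimal, hypothetical sketch: loading the LFW face database with scikit-learn.
from sklearn.datasets import fetch_lfw_people

lfw = fetch_lfw_people(min_faces_per_person=20, resize=0.5)
print(lfw.images.shape)       # (n_samples, height, width) grayscale face crops
print(len(lfw.target_names))  # number of identities with at least 20 images
</preformat>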
</sec>
<sec id="sec6dot4-sensors-20-00342">
<title>6.4. Comparison between Holistic, Local, and Hybrid Techniques</title>
<p>In this section, we present some advantages and disadvantages of the holistic, local, and hybrid approaches used to identify faces during the last 20 years. DL approaches can be considered a statistical (holistic) approach, because the training procedure usually searches for statistical structure in the input patterns.
<xref rid="sensors-20-00342-t004" ref-type="table">Table 4</xref>
presents a brief summary of the three approaches.</p>
</sec>
</sec>
<sec sec-type="discussion" id="sec7-sensors-20-00342">
<title>7. Discussion about Future Directions and Conclusions</title>
<sec id="sec7dot1-sensors-20-00342">
<title>7.1. Discussion</title>
<p>In the past decade, face recognition has become one of the most important biometric authentication methods. Many techniques have been used to develop face recognition systems based on facial information. Generally, the existing techniques can be classified into three approaches, depending on the type of desired features.
<list list-type="bullet">
<list-item>
<p>Local approaches: use features in which the face is described only partially. For example, a system could extract local features such as the eyes, mouth, and nose. The feature values are calculated from the lines or points that can be represented on the face image for the recognition step.</p>
</list-item>
<list-item>
<p>Holistic approaches: use features that globally describe the complete face as a model, including the background (although it is desirable that the background occupy the smallest possible area).</p>
</list-item>
<list-item>
<p>Hybrid approaches: combine local and holistic approaches.</p>
</list-item>
</list>
</p>
<p>In particular, recognition methods applied to static images produce good results under different lighting and expression conditions. However, in most cases, only face images of the same size and scale are processed. Many methods also require numerous training images, which limits their use in real-time systems, where response time is an important aspect.</p>
<p>The main purpose of techniques such as HOG, LBP, Gabor filters, BRIEF, SURF, and SIFT is to discover distinctive features, and they can be divided into two families: (1) local appearance-based techniques, which extract local features after the face image is divided into small regions (including HOG, LBP, Gabor filters, and correlation filters); and (2) key-points-based techniques, which detect points of interest in the face image and then extract features locally around these points (including BRIEF, SURF, and SIFT); a minimal sketch of both families is given after this paragraph. In the context of face recognition, local techniques only treat certain facial features, which makes them very sensitive to facial expressions and occlusions [
<xref rid="B4-sensors-20-00342" ref-type="bibr">4</xref>
,
<xref rid="B14-sensors-20-00342" ref-type="bibr">14</xref>
,
<xref rid="B37-sensors-20-00342" ref-type="bibr">37</xref>
,
<xref rid="B50-sensors-20-00342" ref-type="bibr">50</xref>
,
<xref rid="B51-sensors-20-00342" ref-type="bibr">51</xref>
,
<xref rid="B52-sensors-20-00342" ref-type="bibr">52</xref>
,
<xref rid="B53-sensors-20-00342" ref-type="bibr">53</xref>
]. The relative robustness of these feature-based local techniques is their main advantage. Additionally, they take into account the peculiarity of the face as a natural form and can recognize it from a reduced number of parameters. Another advantage is that they have a high compaction capacity and a high comparison speed. The main disadvantages of these methods are the difficulty of automating the detection of facial features and the fact that the person implementing such a system must make an arbitrary decision about which points are really important.</p>
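<p>The following minimal sketch (an illustration under assumed parameters, not code from the surveyed works) contrasts the two families described above: dense local appearance descriptors (HOG and LBP, via scikit-image) and key-point detection with local description (SIFT, via OpenCV, shown commented out since it requires an extra dependency).</p>
<preformat>
# Minimal, hypothetical sketch of local appearance-based vs. key-points-based features.
import numpy as np
from skimage.feature import hog, local_binary_pattern

face = (np.random.rand(112, 92) * 255).astype("uint8")  # stand-in grayscale face crop

# (1) Local appearance-based: dense descriptors computed over small regions.
hog_vec = hog(face, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
lbp_map = local_binary_pattern(face, P=8, R=1, method="uniform")
lbp_hist, _ = np.histogram(lbp_map, bins=10, range=(0, 10))
print(hog_vec.shape, lbp_hist)

# (2) Key-points-based: detect interest points, then describe patches around them
# (requires opencv-python >= 4.4 for SIFT):
# import cv2
# sift = cv2.SIFT_create()
# keypoints, descriptors = sift.detectAndCompute(face, None)
</preformat>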
<p>Unlike local approaches, holistic approaches treat the whole face image and do not require extracting face regions or feature points (eyes, mouth, nose, and so on). The main principle of these approaches is to represent the face image as a matrix of pixels, which is often converted into a feature vector to facilitate its processing. The feature vectors are then projected into a low-dimensional space. Subspace techniques are sensitive to different variations (facial expressions, illumination, and pose), but they are easy to implement. Many subspace techniques have been used to represent faces, such as Eigenfaces, Fisherfaces, PCA, and LDA, which can be divided into two categories: linear and non-linear techniques; a minimal eigenfaces-style sketch is given after this paragraph. The main advantage of holistic approaches is that they do not destroy image information by focusing only on regions or points of interest. However, this property also represents a disadvantage, because it assumes that all the pixels of the image have the same importance. As a result, these techniques are not only computationally expensive, but also require a high degree of correlation between the test and training images. In addition, these approaches generally ignore local details, which means they are rarely used to identify faces.</p>
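<p>The following minimal eigenfaces-style sketch (synthetic data, assumed dimensions) illustrates the holistic pipeline just described: flatten face images into pixel vectors, project them into a low-dimensional PCA subspace, and match faces in that space.</p>
<preformat>
# Minimal, hypothetical eigenfaces-style sketch of the holistic (subspace) approach.
import numpy as np
from sklearn.decomposition import PCA

faces = np.random.rand(100, 112 * 92)  # 100 flattened face images (pixel vectors)
pca = PCA(n_components=20).fit(faces)  # learn a 20-D "eigenface" subspace
embeddings = pca.transform(faces)      # holistic descriptors used for matching

# Nearest-neighbour matching in the subspace (Euclidean distance).
query = embeddings[0]
distances = np.linalg.norm(embeddings - query, axis=1)
print(int(np.argsort(distances)[1]))   # closest gallery face other than the query itself
</preformat>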
<p>Hybrid approaches are based on both local and global features to exploit the benefits of the two techniques. They combine the two approaches described above into a single system to improve recognition performance and accuracy. The choice of method must take into account the application in which it will be used. For example, in face recognition systems that use very small images, methods based on local features are a poor choice. Another consideration in the algorithm selection process is the number of training examples needed. Finally, we note that the current tendency is to develop hybrid methods that combine the advantages of local and holistic approaches, but these methods are more complex and require more processing time.</p>
<p>A notable limitation that we found in all the publications reviewed is methodological: although 2D facial recognition has reached a significant level of maturity and a high success rate, it continues to be one of the most active research areas in computer vision. Considering the results published to date, in the opinion of the authors, three particularly promising directions for further development of this area stand out: (i) the development of 3D face recognition methods; (ii) the use of multimodal fusion methods combining complementary data types, in particular those based on visible and infrared images; and (iii) the use of DL methods.
<list list-type="order">
<list-item>
<p>Three-dimensional face recognition: In 2D image-based techniques, some features are lost owing to the 3D structure of the face. Lighting and pose variations are two major unresolved problems of 2D face recognition. Recently, 3D facial recognition has been widely studied by the scientific community to overcome these unresolved problems and to achieve significantly higher accuracy by measuring the geometry of rigid features on the face. For this reason, several recent systems based on 3D data have been developed [
<xref rid="B3-sensors-20-00342" ref-type="bibr">3</xref>
,
<xref rid="B93-sensors-20-00342" ref-type="bibr">93</xref>
,
<xref rid="B95-sensors-20-00342" ref-type="bibr">95</xref>
,
<xref rid="B128-sensors-20-00342" ref-type="bibr">128</xref>
,
<xref rid="B129-sensors-20-00342" ref-type="bibr">129</xref>
].</p>
</list-item>
<list-item>
<p>Multimodal facial recognition: sensors developed in recent years have a proven ability to acquire not only two-dimensional texture information, but also facial shape, that is, three-dimensional information. For this reason, some recent studies have merged 2D and 3D information to take advantage of each modality and obtain a hybrid system that improves recognition compared with using a single modality [
<xref rid="B98-sensors-20-00342" ref-type="bibr">98</xref>
].</p>
</list-item>
<list-item>
<p>Deep learning (DL): a very broad concept with no single exact definition, but studies [
<xref rid="B14-sensors-20-00342" ref-type="bibr">14</xref>
,
<xref rid="B110-sensors-20-00342" ref-type="bibr">110</xref>
,
<xref rid="B111-sensors-20-00342" ref-type="bibr">111</xref>
,
<xref rid="B112-sensors-20-00342" ref-type="bibr">112</xref>
,
<xref rid="B113-sensors-20-00342" ref-type="bibr">113</xref>
,
<xref rid="B121-sensors-20-00342" ref-type="bibr">121</xref>
,
<xref rid="B130-sensors-20-00342" ref-type="bibr">130</xref>
,
<xref rid="B131-sensors-20-00342" ref-type="bibr">131</xref>
] agree that DL includes a set of algorithms that attempt to model high-level abstractions through multiple processing layers. This field of research began in the 1980s and is a branch of machine learning in which algorithms are used to train deep neural networks (DNN) to achieve greater accuracy than other classical techniques. Recently, a point has been reached where DL performs better than humans in some tasks, for example, recognizing objects in images.</p>
</list-item>
</list>
</p>
<p>Finally, researchers have gone further by using multimodal and DL facial recognition systems.</p>
</sec>
<sec id="sec7dot2-sensors-20-00342">
<title>7.2. Conclusions</title>
<p>Face recognition is a popular research topic in the field of image processing and computer vision, owing to its potentially enormous range of applications as well as its theoretical value. Such systems are widely deployed in many real-world applications such as security, surveillance, homeland security, access control, image search, human-machine interaction, and entertainment. However, these applications pose different challenges, such as lighting conditions and facial expressions. This paper highlights recent research on 2D and 3D face recognition systems, focusing mainly on approaches based on local, holistic (subspace), and hybrid features. A comparative study between these approaches in terms of processing time, complexity, discrimination, and robustness was carried out. We conclude that local feature techniques are the best choice concerning discrimination, rotation, translation, complexity, and accuracy. We hope that this survey will further encourage researchers in this field to pay more attention to the use of local techniques for face recognition systems.</p>
</sec>
</sec>
</body>
<back>
<notes>
<title>Author Contributions</title>
<p>Y.K. highlights the recent research on the 2D or 3D face recognition system, focusing mainly on approaches based on local, holistic, and hybrid features. M.J., A.A.F. and M.A. supervised the research and helped in the revision processes. All authors have read and agreed to the published version of the manuscript.</p>
</notes>
<notes>
<title>Funding</title>
<p>The paper is co-financed by L@bISEN of ISEN Yncrea Ouest Brest, France, Dept Ai-DE, Team Vision-AD, and by FSM University of Monastir, Tunisia, in collaboration with the Ministry of Higher Education and Scientific Research of Tunisia. The context of the paper is the PhD project of Yassin Kortli.</p>
</notes>
<notes notes-type="COI-statement">
<title>Conflicts of Interest</title>
<p>The authors declare no conflict of interest. </p>
</notes>
<ref-list>
<title>References</title>
<ref id="B1-sensors-20-00342">
<label>1.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Liao</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Jain</surname>
<given-names>A.K.</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>S.Z.</given-names>
</name>
</person-group>
<article-title>Partial face recognition: Alignment-free approach</article-title>
<source>IEEE Trans. Pattern Anal. Mach. Intell.</source>
<year>2012</year>
<volume>35</volume>
<fpage>1193</fpage>
<lpage>1205</lpage>
<pub-id pub-id-type="doi">10.1109/TPAMI.2012.191</pub-id>
<pub-id pub-id-type="pmid">23520259</pub-id>
</element-citation>
</ref>
<ref id="B2-sensors-20-00342">
<label>2.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Jridi</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Napoléon</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Alfalou</surname>
<given-names>A.</given-names>
</name>
</person-group>
<article-title>One lens optical correlation: Application to face recognition</article-title>
<source>Appl. Opt.</source>
<year>2018</year>
<volume>57</volume>
<fpage>2087</fpage>
<lpage>2095</lpage>
<pub-id pub-id-type="doi">10.1364/AO.57.002087</pub-id>
<pub-id pub-id-type="pmid">29603998</pub-id>
</element-citation>
</ref>
<ref id="B3-sensors-20-00342">
<label>3.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Napoléon</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Alfalou</surname>
<given-names>A.</given-names>
</name>
</person-group>
<article-title>Pose invariant face recognition: 3D model from single photo</article-title>
<source>Opt. Lasers Eng.</source>
<year>2017</year>
<volume>89</volume>
<fpage>150</fpage>
<lpage>161</lpage>
<pub-id pub-id-type="doi">10.1016/j.optlaseng.2016.06.019</pub-id>
</element-citation>
</ref>
<ref id="B4-sensors-20-00342">
<label>4.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Ouerhani</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Jridi</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Alfalou</surname>
<given-names>A.</given-names>
</name>
</person-group>
<article-title>Fast face recognition approach using a graphical processing unit “GPU”</article-title>
<source>Proceedings of the 2010 IEEE International Conference on Imaging Systems and Techniques</source>
<conf-loc>Thessaloniki, Greece</conf-loc>
<conf-date>1–2 July 2010</conf-date>
<publisher-name>IEEE</publisher-name>
<publisher-loc>Piscataway, NJ, USA</publisher-loc>
<year>2010</year>
<fpage>80</fpage>
<lpage>84</lpage>
</element-citation>
</ref>
<ref id="B5-sensors-20-00342">
<label>5.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Yang</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Hu</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Zheng</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Valli</surname>
<given-names>C.</given-names>
</name>
</person-group>
<article-title>A fingerprint and finger-vein based cancelable multi-biometric system</article-title>
<source>Pattern Recognit.</source>
<year>2018</year>
<volume>78</volume>
<fpage>242</fpage>
<lpage>251</lpage>
<pub-id pub-id-type="doi">10.1016/j.patcog.2018.01.026</pub-id>
</element-citation>
</ref>
<ref id="B6-sensors-20-00342">
<label>6.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Patel</surname>
<given-names>N.P.</given-names>
</name>
<name>
<surname>Kale</surname>
<given-names>A.</given-names>
</name>
</person-group>
<article-title>Optimize Approach to Voice Recognition Using IoT</article-title>
<source>Proceedings of the 2018 International Conference on Advances in Communication and Computing Technology (ICACCT)</source>
<conf-loc>Sangamner, India</conf-loc>
<conf-date>8–9 February 2018</conf-date>
<publisher-name>IEEE</publisher-name>
<publisher-loc>Piscataway, NJ, USA</publisher-loc>
<year>2018</year>
<fpage>251</fpage>
<lpage>256</lpage>
</element-citation>
</ref>
<ref id="B7-sensors-20-00342">
<label>7.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wang</surname>
<given-names>Q.</given-names>
</name>
<name>
<surname>Alfalou</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Brosseau</surname>
<given-names>C.</given-names>
</name>
</person-group>
<article-title>New perspectives in face correlation research: A tutorial</article-title>
<source>Adv. Opt. Photonics</source>
<year>2017</year>
<volume>9</volume>
<fpage>1</fpage>
<lpage>78</lpage>
<pub-id pub-id-type="doi">10.1364/AOP.9.000001</pub-id>
</element-citation>
</ref>
<ref id="B8-sensors-20-00342">
<label>8.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Alfalou</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Brosseau</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Kaddah</surname>
<given-names>W.</given-names>
</name>
</person-group>
<article-title>Optimization of decision making for face recognition based on nonlinear correlation plane</article-title>
<source>Opt. Commun.</source>
<year>2015</year>
<volume>343</volume>
<fpage>22</fpage>
<lpage>27</lpage>
<pub-id pub-id-type="doi">10.1016/j.optcom.2015.01.017</pub-id>
</element-citation>
</ref>
<ref id="B9-sensors-20-00342">
<label>9.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zhao</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Cang</surname>
<given-names>Y.</given-names>
</name>
</person-group>
<article-title>Bisecting k-means clustering based face recognition using block-based bag of words model</article-title>
<source>Opt. Int. J. Light Electron Opt.</source>
<year>2015</year>
<volume>126</volume>
<fpage>1761</fpage>
<lpage>1766</lpage>
<pub-id pub-id-type="doi">10.1016/j.ijleo.2015.04.068</pub-id>
</element-citation>
</ref>
<ref id="B10-sensors-20-00342">
<label>10.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>HajiRassouliha</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Gamage</surname>
<given-names>T.P.B.</given-names>
</name>
<name>
<surname>Parker</surname>
<given-names>M.D.</given-names>
</name>
<name>
<surname>Nash</surname>
<given-names>M.P.</given-names>
</name>
<name>
<surname>Taberner</surname>
<given-names>A.J.</given-names>
</name>
<name>
<surname>Nielsen</surname>
<given-names>P.M.</given-names>
</name>
</person-group>
<article-title>FPGA implementation of 2D cross-correlation for real-time 3D tracking of deformable surfaces</article-title>
<source>Proceedings of the 2013 28th International Conference on Image and Vision Computing New Zealand (IVCNZ 2013)</source>
<conf-loc>Wellington, New Zealand</conf-loc>
<conf-date>27–29 November 2013</conf-date>
<publisher-name>IEEE</publisher-name>
<publisher-loc>Piscataway, NJ, USA</publisher-loc>
<year>2013</year>
<fpage>352</fpage>
<lpage>357</lpage>
</element-citation>
</ref>
<ref id="B11-sensors-20-00342">
<label>11.</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Kortli</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Jridi</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Al Falou</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Atri</surname>
<given-names>M.</given-names>
</name>
</person-group>
<article-title>A comparative study of CFs, LBP, HOG, SIFT, SURF, and BRIEF techniques for face recognition</article-title>
<source>Pattern Recognition and Tracking XXIX</source>
<comment>International Society for Optics and Photonics</comment>
<publisher-name>SPIE</publisher-name>
<publisher-loc>Bellingham, WA, USA</publisher-loc>
<year>2018</year>
<volume>Volume 10649</volume>
<fpage>106490M</fpage>
</element-citation>
</ref>
<ref id="B12-sensors-20-00342">
<label>12.</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Dehai</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Da</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Jin</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Qing</surname>
<given-names>L.</given-names>
</name>
</person-group>
<article-title>A pca-based face recognition method by applying fast fourier transform in pre-processing</article-title>
<source>3rd International Conference on Multimedia Technology (ICMT-13)</source>
<publisher-name>Atlantis Press</publisher-name>
<publisher-loc>Paris, France</publisher-loc>
<year>2013</year>
</element-citation>
</ref>
<ref id="B13-sensors-20-00342">
<label>13.</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Ouerhani</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Alfalou</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Brosseau</surname>
<given-names>C.</given-names>
</name>
</person-group>
<article-title>Road mark recognition using HOG-SVM and correlation</article-title>
<source>Optics and Photonics for Information Processing XI</source>
<comment>International Society for Optics and Photonics</comment>
<publisher-name>SPIE</publisher-name>
<publisher-loc>Bellingham, WA, USA</publisher-loc>
<year>2017</year>
<volume>Volume 10395</volume>
<fpage>103950Q</fpage>
</element-citation>
</ref>
<ref id="B14-sensors-20-00342">
<label>14.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Liu</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Zeng</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Alsaadi</surname>
<given-names>F.E.</given-names>
</name>
</person-group>
<article-title>A survey of deep neural network architectures and their applications</article-title>
<source>Neurocomputing</source>
<year>2017</year>
<volume>234</volume>
<fpage>11</fpage>
<lpage>26</lpage>
<pub-id pub-id-type="doi">10.1016/j.neucom.2016.12.038</pub-id>
</element-citation>
</ref>
<ref id="B15-sensors-20-00342">
<label>15.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Xi</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Polajnar</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Tong</surname>
<given-names>W.</given-names>
</name>
</person-group>
<article-title>Local binary pattern network: A deep learning approach for face recognition</article-title>
<source>Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP)</source>
<conf-loc>Phoenix, AZ, USA</conf-loc>
<conf-date>25–28 September 2016</conf-date>
<publisher-name>IEEE</publisher-name>
<publisher-loc>Piscataway, NJ, USA</publisher-loc>
<year>2016</year>
<fpage>3224</fpage>
<lpage>3228</lpage>
</element-citation>
</ref>
<ref id="B16-sensors-20-00342">
<label>16.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ojala</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Pietikäinen</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Harwood</surname>
<given-names>D.</given-names>
</name>
</person-group>
<article-title>A comparative study of texture measures with classification based on featured distributions</article-title>
<source>Pattern Recognit.</source>
<year>1996</year>
<volume>29</volume>
<fpage>51</fpage>
<lpage>59</lpage>
<pub-id pub-id-type="doi">10.1016/0031-3203(95)00067-4</pub-id>
</element-citation>
</ref>
<ref id="B17-sensors-20-00342">
<label>17.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gowda</surname>
<given-names>H.D.S.</given-names>
</name>
<name>
<surname>Kumar</surname>
<given-names>G.H.</given-names>
</name>
<name>
<surname>Imran</surname>
<given-names>M.</given-names>
</name>
</person-group>
<article-title>Multimodal Biometric Recognition System Based on Nonparametric Classifiers</article-title>
<source>Data Anal. Learn.</source>
<year>2018</year>
<volume>43</volume>
<fpage>269</fpage>
<lpage>278</lpage>
</element-citation>
</ref>
<ref id="B18-sensors-20-00342">
<label>18.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ouerhani</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Jridi</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Alfalou</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Brosseau</surname>
<given-names>C.</given-names>
</name>
</person-group>
<article-title>Optimized pre-processing input plane GPU implementation of an optical face recognition technique using a segmented phase only composite filter</article-title>
<source>Opt. Commun.</source>
<year>2013</year>
<volume>289</volume>
<fpage>33</fpage>
<lpage>44</lpage>
<pub-id pub-id-type="doi">10.1016/j.optcom.2012.09.074</pub-id>
</element-citation>
</ref>
<ref id="B19-sensors-20-00342">
<label>19.</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Mousa Pasandi</surname>
<given-names>M.E.</given-names>
</name>
</person-group>
<article-title>Face, Age and Gender Recognition Using Local Descriptors</article-title>
<source>Ph.D. Thesis</source>
<publisher-name>Université d’Ottawa/University of Ottawa</publisher-name>
<publisher-loc>Ottawa, ON, Canada</publisher-loc>
<year>2014</year>
</element-citation>
</ref>
<ref id="B20-sensors-20-00342">
<label>20.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Khoi</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Thien</surname>
<given-names>L.H.</given-names>
</name>
<name>
<surname>Viet</surname>
<given-names>V.H.</given-names>
</name>
</person-group>
<article-title>Face Retrieval Based on Local Binary Pattern and Its Variants: A Comprehensive Study</article-title>
<source>Int. J. Adv. Comput. Sci. Appl.</source>
<year>2016</year>
<volume>7</volume>
<fpage>249</fpage>
<lpage>258</lpage>
<pub-id pub-id-type="doi">10.14569/IJACSA.2016.070632</pub-id>
</element-citation>
</ref>
<ref id="B21-sensors-20-00342">
<label>21.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zeppelzauer</surname>
<given-names>M.</given-names>
</name>
</person-group>
<article-title>Automated detection of elephants in wildlife video</article-title>
<source>EURASIP J. Image Video Process.</source>
<year>2013</year>
<volume>46</volume>
<fpage>2013</fpage>
<pub-id pub-id-type="doi">10.1186/1687-5281-2013-46</pub-id>
<pub-id pub-id-type="pmid">25902006</pub-id>
</element-citation>
</ref>
<ref id="B22-sensors-20-00342">
<label>22.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Parmar</surname>
<given-names>D.N.</given-names>
</name>
<name>
<surname>Mehta</surname>
<given-names>B.B.</given-names>
</name>
</person-group>
<article-title>Face recognition methods & applications</article-title>
<source>arXiv</source>
<year>2014</year>
<pub-id pub-id-type="arxiv">1403.0485</pub-id>
</element-citation>
</ref>
<ref id="B23-sensors-20-00342">
<label>23.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Vinay</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Hebbar</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Shekhar</surname>
<given-names>V.S.</given-names>
</name>
<name>
<surname>Murthy</surname>
<given-names>K.B.</given-names>
</name>
<name>
<surname>Natarajan</surname>
<given-names>S.</given-names>
</name>
</person-group>
<article-title>Two novel detector-descriptor based approaches for face recognition using sift and surf</article-title>
<source>Procedia Comput. Sci.</source>
<year>2015</year>
<volume>70</volume>
<fpage>185</fpage>
<lpage>197</lpage>
</element-citation>
</ref>
<ref id="B24-sensors-20-00342">
<label>24.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Yang</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>X.A.</given-names>
</name>
</person-group>
<article-title>Cascade classifier for face detection</article-title>
<source>J. Algorithms Comput. Technol.</source>
<year>2016</year>
<volume>10</volume>
<fpage>187</fpage>
<lpage>197</lpage>
<pub-id pub-id-type="doi">10.1177/1748301816649073</pub-id>
</element-citation>
</ref>
<ref id="B25-sensors-20-00342">
<label>25.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Viola</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Jones</surname>
<given-names>M.</given-names>
</name>
</person-group>
<article-title>Rapid object detection using a boosted cascade of simple features</article-title>
<source>Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition</source>
<conf-loc>Kauai, HI, USA</conf-loc>
<conf-date>8–14 December 2001</conf-date>
</element-citation>
</ref>
<ref id="B26-sensors-20-00342">
<label>26.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rettkowski</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Boutros</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Göhringer</surname>
<given-names>D.</given-names>
</name>
</person-group>
<article-title>HW/SW Co-Design of the HOG algorithm on a Xilinx Zynq SoC</article-title>
<source>J. Parallel Distrib. Comput.</source>
<year>2017</year>
<volume>109</volume>
<fpage>50</fpage>
<lpage>62</lpage>
<pub-id pub-id-type="doi">10.1016/j.jpdc.2017.05.005</pub-id>
</element-citation>
</ref>
<ref id="B27-sensors-20-00342">
<label>27.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Seo</surname>
<given-names>H.J.</given-names>
</name>
<name>
<surname>Milanfar</surname>
<given-names>P.</given-names>
</name>
</person-group>
<article-title>Face verification using the lark representation</article-title>
<source>IEEE Trans. Inf. Forensics Secur.</source>
<year>2011</year>
<volume>6</volume>
<fpage>1275</fpage>
<lpage>1286</lpage>
<pub-id pub-id-type="doi">10.1109/TIFS.2011.2159205</pub-id>
</element-citation>
</ref>
<ref id="B28-sensors-20-00342">
<label>28.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Shah</surname>
<given-names>J.H.</given-names>
</name>
<name>
<surname>Sharif</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Raza</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Azeem</surname>
<given-names>A.</given-names>
</name>
</person-group>
<article-title>A Survey: Linear and Nonlinear PCA Based Face Recognition Techniques</article-title>
<source>Int. Arab J. Inf. Technol.</source>
<year>2013</year>
<volume>10</volume>
<fpage>536</fpage>
<lpage>545</lpage>
</element-citation>
</ref>
<ref id="B29-sensors-20-00342">
<label>29.</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Du</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Su</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Cai</surname>
<given-names>A.</given-names>
</name>
</person-group>
<article-title>Face recognition using SURF features</article-title>
<source>MIPPR 2009: Pattern Recognition and Computer Vision</source>
<comment>International Society for Optics and Photonics</comment>
<publisher-name>SPIE</publisher-name>
<publisher-loc>Bellingham, WA, USA</publisher-loc>
<year>2009</year>
<volume>Volume 7496</volume>
<fpage>749628</fpage>
</element-citation>
</ref>
<ref id="B30-sensors-20-00342">
<label>30.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Calonder</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Lepetit</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Ozuysal</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Trzcinski</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Strecha</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Fua</surname>
<given-names>P.</given-names>
</name>
</person-group>
<article-title>BRIEF: Computing a local binary descriptor very fast</article-title>
<source>IEEE Trans. Pattern Anal. Mach. Intell.</source>
<year>2011</year>
<volume>34</volume>
<fpage>1281</fpage>
<lpage>1298</lpage>
<pub-id pub-id-type="doi">10.1109/TPAMI.2011.222</pub-id>
<pub-id pub-id-type="pmid">22084141</pub-id>
</element-citation>
</ref>
<ref id="B31-sensors-20-00342">
<label>31.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Smach</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Miteran</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Atri</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Dubois</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Abid</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Gauthier</surname>
<given-names>J.P.</given-names>
</name>
</person-group>
<article-title>An FPGA-based accelerator for Fourier Descriptors computing for color object recognition using SVM</article-title>
<source>J. Real-Time Image Process.</source>
<year>2007</year>
<volume>2</volume>
<fpage>249</fpage>
<lpage>258</lpage>
<pub-id pub-id-type="doi">10.1007/s11554-007-0065-6</pub-id>
</element-citation>
</ref>
<ref id="B32-sensors-20-00342">
<label>32.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Kortli</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Jridi</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Al Falou</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Atri</surname>
<given-names>M.</given-names>
</name>
</person-group>
<article-title>A novel face detection approach using local binary pattern histogram and support vector machine</article-title>
<source>Proceedings of the 2018 International Conference on Advanced Systems and Electric Technologies (IC_ASET)</source>
<conf-loc>Hammamet, Tunisia</conf-loc>
<conf-date>22–25 March 2018</conf-date>
<publisher-name>IEEE</publisher-name>
<publisher-loc>Piscataway, NJ, USA</publisher-loc>
<year>2018</year>
<fpage>28</fpage>
<lpage>33</lpage>
</element-citation>
</ref>
<ref id="B33-sensors-20-00342">
<label>33.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wang</surname>
<given-names>Q.</given-names>
</name>
<name>
<surname>Xiong</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Alfalou</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Brosseau</surname>
<given-names>C.</given-names>
</name>
</person-group>
<article-title>Optical image authentication scheme using dual polarization decoding configuration</article-title>
<source>Opt. Lasers Eng.</source>
<year>2019</year>
<volume>112</volume>
<fpage>151</fpage>
<lpage>161</lpage>
<pub-id pub-id-type="doi">10.1016/j.optlaseng.2018.09.008</pub-id>
</element-citation>
</ref>
<ref id="B34-sensors-20-00342">
<label>34.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Turk</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Pentland</surname>
<given-names>A.</given-names>
</name>
</person-group>
<article-title>Eigenfaces for recognition</article-title>
<source>J. Cogn. Neurosci.</source>
<year>1991</year>
<volume>3</volume>
<fpage>71</fpage>
<lpage>86</lpage>
<pub-id pub-id-type="doi">10.1162/jocn.1991.3.1.71</pub-id>
<pub-id pub-id-type="pmid">23964806</pub-id>
</element-citation>
</ref>
<ref id="B35-sensors-20-00342">
<label>35.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Annalakshmi</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Roomi</surname>
<given-names>S.M.M.</given-names>
</name>
<name>
<surname>Naveedh</surname>
<given-names>A.S.</given-names>
</name>
</person-group>
<article-title>A hybrid technique for gender classification with SLBP and HOG features</article-title>
<source>Clust. Comput.</source>
<year>2019</year>
<volume>22</volume>
<fpage>11</fpage>
<lpage>20</lpage>
<pub-id pub-id-type="doi">10.1007/s10586-017-1585-x</pub-id>
</element-citation>
</ref>
<ref id="B36-sensors-20-00342">
<label>36.</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Hussain</surname>
<given-names>S.U.</given-names>
</name>
<name>
<surname>Napoléon</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Jurie</surname>
<given-names>F.</given-names>
</name>
</person-group>
<source>Face Recognition Using Local Quantized Patterns</source>
<publisher-name>HAL</publisher-name>
<publisher-loc>Bengaluru, India</publisher-loc>
<year>2012</year>
</element-citation>
</ref>
<ref id="B37-sensors-20-00342">
<label>37.</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Alfalou</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Brosseau</surname>
<given-names>C.</given-names>
</name>
</person-group>
<article-title>Understanding Correlation Techniques for Face Recognition: From Basics to Applications</article-title>
<source>Face Recognition</source>
<person-group person-group-type="editor">
<name>
<surname>Oravec</surname>
<given-names>M.</given-names>
</name>
</person-group>
<publisher-name>IntechOpen</publisher-name>
<publisher-loc>Rijeka, Croatia</publisher-loc>
<year>2010</year>
</element-citation>
</ref>
<ref id="B38-sensors-20-00342">
<label>38.</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Napoléon</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Alfalou</surname>
<given-names>A.</given-names>
</name>
</person-group>
<article-title>Local binary patterns preprocessing for face identification/verification using the VanderLugt correlator</article-title>
<source>Optical Pattern Recognition XXV</source>
<comment>International Society for Optics and Photonics</comment>
<publisher-name>SPIE</publisher-name>
<publisher-loc>Bellingham, WA, USA</publisher-loc>
<year>2014</year>
<volume>Volume 9094</volume>
<fpage>909408</fpage>
</element-citation>
</ref>
<ref id="B39-sensors-20-00342">
<label>39.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Schroff</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Kalenichenko</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Philbin</surname>
<given-names>J.</given-names>
</name>
</person-group>
<article-title>Facenet: A unified embedding for face recognition and clustering</article-title>
<source>Proceedings of the IEEE conference on computer vision and pattern recognition</source>
<conf-loc>Boston, MA, USA</conf-loc>
<conf-date>7–12 June 2015</conf-date>
<fpage>815</fpage>
<lpage>823</lpage>
</element-citation>
</ref>
<ref id="B40-sensors-20-00342">
<label>40.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kambi Beli</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Guo</surname>
<given-names>C.</given-names>
</name>
</person-group>
<article-title>Enhancing face identification using local binary patterns and k-nearest neighbors</article-title>
<source>J. Imaging</source>
<year>2017</year>
<volume>3</volume>
<elocation-id>37</elocation-id>
<pub-id pub-id-type="doi">10.3390/jimaging3030037</pub-id>
</element-citation>
</ref>
<ref id="B41-sensors-20-00342">
<label>41.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Benarab</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Napoléon</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Alfalou</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Verney</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Hellard</surname>
<given-names>P.</given-names>
</name>
</person-group>
<article-title>Optimized swimmer tracking system by a dynamic fusion of correlation and color histogram techniques</article-title>
<source>Opt. Commun.</source>
<year>2015</year>
<volume>356</volume>
<fpage>256</fpage>
<lpage>268</lpage>
<pub-id pub-id-type="doi">10.1016/j.optcom.2015.07.056</pub-id>
</element-citation>
</ref>
<ref id="B42-sensors-20-00342">
<label>42.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bonnen</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Klare</surname>
<given-names>B.F.</given-names>
</name>
<name>
<surname>Jain</surname>
<given-names>A.K.</given-names>
</name>
</person-group>
<article-title>Component-based representation in automated face recognition</article-title>
<source>IEEE Trans. Inf. Forensics Secur.</source>
<year>2012</year>
<volume>8</volume>
<fpage>239</fpage>
<lpage>253</lpage>
<pub-id pub-id-type="doi">10.1109/TIFS.2012.2226580</pub-id>
</element-citation>
</ref>
<ref id="B43-sensors-20-00342">
<label>43.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Ren</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Jiang</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Yuan</surname>
<given-names>J.</given-names>
</name>
</person-group>
<article-title>Relaxed local ternary pattern for face recognition</article-title>
<source>Proceedings of the 2013 IEEE International Conference on Image Processing</source>
<conf-loc>Melbourne, Australia</conf-loc>
<conf-date>15–18 September 2013</conf-date>
<publisher-name>IEEE</publisher-name>
<publisher-loc>Piscataway, NJ, USA</publisher-loc>
<year>2013</year>
<fpage>3680</fpage>
<lpage>3684</lpage>
</element-citation>
</ref>
<ref id="B44-sensors-20-00342">
<label>44.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Karaaba</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Surinta</surname>
<given-names>O.</given-names>
</name>
<name>
<surname>Schomaker</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Wiering</surname>
<given-names>M.A.</given-names>
</name>
</person-group>
<article-title>Robust face recognition by computing distances from multiple histograms of oriented gradients</article-title>
<source>Proceedings of the 2015 IEEE Symposium Series on Computational Intelligence</source>
<conf-loc>Cape Town, South Africa</conf-loc>
<conf-date>7–10 December 2015</conf-date>
<publisher-name>IEEE</publisher-name>
<publisher-loc>Piscataway, NJ, USA</publisher-loc>
<year>2015</year>
<fpage>203</fpage>
<lpage>209</lpage>
</element-citation>
</ref>
<ref id="B45-sensors-20-00342">
<label>45.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Huang</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Huang</surname>
<given-names>J.</given-names>
</name>
</person-group>
<article-title>A fast HOG descriptor using lookup table and integral image</article-title>
<source>arXiv</source>
<year>2017</year>
<pub-id pub-id-type="arxiv">1703.06256</pub-id>
</element-citation>
</ref>
<ref id="B46-sensors-20-00342">
<label>46.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Arigbabu</surname>
<given-names>O.A.</given-names>
</name>
<name>
<surname>Ahmad</surname>
<given-names>S.M.S.</given-names>
</name>
<name>
<surname>Adnan</surname>
<given-names>W.A.W.</given-names>
</name>
<name>
<surname>Yussof</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Mahmood</surname>
<given-names>S.</given-names>
</name>
</person-group>
<article-title>Soft biometrics: Gender recognition from unconstrained face images using local feature descriptor</article-title>
<source>arXiv</source>
<year>2017</year>
<pub-id pub-id-type="arxiv">1702.02537</pub-id>
</element-citation>
</ref>
<ref id="B47-sensors-20-00342">
<label>47.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Vander Lugt</surname>
<given-names>A.</given-names>
</name>
</person-group>
<article-title>Signal detection by complex spatial filtering</article-title>
<source>IEEE Trans. Inf. Theory</source>
<year>1964</year>
<volume>10</volume>
<fpage>139</fpage>
</element-citation>
</ref>
<ref id="B48-sensors-20-00342">
<label>48.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Weaver</surname>
<given-names>C.S.</given-names>
</name>
<name>
<surname>Goodman</surname>
<given-names>J.W.</given-names>
</name>
</person-group>
<article-title>A technique for optically convolving two functions</article-title>
<source>Appl. Opt.</source>
<year>1966</year>
<volume>5</volume>
<fpage>1248</fpage>
<lpage>1249</lpage>
<pub-id pub-id-type="doi">10.1364/AO.5.001248</pub-id>
<pub-id pub-id-type="pmid">20049063</pub-id>
</element-citation>
</ref>
<ref id="B49-sensors-20-00342">
<label>49.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Horner</surname>
<given-names>J.L.</given-names>
</name>
<name>
<surname>Gianino</surname>
<given-names>P.D.</given-names>
</name>
</person-group>
<article-title>Phase-only matched filtering</article-title>
<source>Appl. Opt.</source>
<year>1984</year>
<volume>23</volume>
<fpage>812</fpage>
<lpage>816</lpage>
<pub-id pub-id-type="doi">10.1364/AO.23.000812</pub-id>
<pub-id pub-id-type="pmid">18204645</pub-id>
</element-citation>
</ref>
<ref id="B50-sensors-20-00342">
<label>50.</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Leonard</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Alfalou</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Brosseau</surname>
<given-names>C.</given-names>
</name>
</person-group>
<article-title>Face recognition based on composite correlation filters: Analysis of their performances</article-title>
<source>Face Recognition: Methods, Applications and Technology</source>
<publisher-name>Nova Science Pub Inc.</publisher-name>
<publisher-loc>London, UK</publisher-loc>
<year>2012</year>
</element-citation>
</ref>
<ref id="B51-sensors-20-00342">
<label>51.</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Katz</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Aron</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Alfalou</surname>
<given-names>A.</given-names>
</name>
</person-group>
<source>A Face-Tracking System to Detect Falls in the Elderly</source>
<comment>SPIE Newsroom</comment>
<publisher-name>SPIE</publisher-name>
<publisher-loc>Bellingham, WA, USA</publisher-loc>
<year>2013</year>
</element-citation>
</ref>
<ref id="B52-sensors-20-00342">
<label>52.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Alfalou</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Brosseau</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Katz</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Alam</surname>
<given-names>M.S.</given-names>
</name>
</person-group>
<article-title>Decision optimization for face recognition based on an alternate correlation plane quantification metric</article-title>
<source>Opt. Lett.</source>
<year>2012</year>
<volume>37</volume>
<fpage>1562</fpage>
<lpage>1564</lpage>
<pub-id pub-id-type="doi">10.1364/OL.37.001562</pub-id>
<pub-id pub-id-type="pmid">22555738</pub-id>
</element-citation>
</ref>
<ref id="B53-sensors-20-00342">
<label>53.</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Elbouz</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Bouzidi</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Alfalou</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Brosseau</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Leonard</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Benkelfat</surname>
<given-names>B.E.</given-names>
</name>
</person-group>
<article-title>Adapted all-numerical correlator for face recognition applications</article-title>
<source>Optical Pattern Recognition XXIV</source>
<comment>International Society for Optics and Photonics</comment>
<publisher-name>SPIE</publisher-name>
<publisher-loc>Bellingham, WA, USA</publisher-loc>
<year>2013</year>
<volume>Volume 8748</volume>
<fpage>874807</fpage>
</element-citation>
</ref>
<ref id="B54-sensors-20-00342">
<label>54.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Heflin</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Scheirer</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Boult</surname>
<given-names>T.E.</given-names>
</name>
</person-group>
<article-title>For your eyes only</article-title>
<source>Proceedings of the 2012 IEEE Workshop on the Applications of Computer Vision (WACV)</source>
<conf-loc>Breckenridge, CO, USA</conf-loc>
<conf-date>9–11 January 2012</conf-date>
<fpage>193</fpage>
<lpage>200</lpage>
</element-citation>
</ref>
<ref id="B55-sensors-20-00342">
<label>55.</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Zhu</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Liao</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Lei</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>S.Z.</given-names>
</name>
</person-group>
<article-title>Feature correlation filter for face recognition</article-title>
<source>Advances in Biometrics, Proceedings of the International Conference on Biometrics, Seoul, Korea, 27–29 August 2007</source>
<publisher-name>Springer</publisher-name>
<publisher-loc>Berlin/Heidelberg, Germany</publisher-loc>
<year>2007</year>
<volume>Volume 4642</volume>
<fpage>77</fpage>
<lpage>86</lpage>
</element-citation>
</ref>
<ref id="B56-sensors-20-00342">
<label>56.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lenc</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Král</surname>
<given-names>P.</given-names>
</name>
</person-group>
<article-title>Automatic face recognition system based on the SIFT features</article-title>
<source>Comput. Electr. Eng.</source>
<year>2015</year>
<volume>46</volume>
<fpage>256</fpage>
<lpage>272</lpage>
<pub-id pub-id-type="doi">10.1016/j.compeleceng.2015.01.014</pub-id>
</element-citation>
</ref>
<ref id="B57-sensors-20-00342">
<label>57.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Işık</surname>
<given-names>Ş.</given-names>
</name>
</person-group>
<article-title>A comparative evaluation of well-known feature detectors and descriptors</article-title>
<source>Int. J. Appl. Math. Electron. Comput.</source>
<year>2014</year>
<volume>3</volume>
<fpage>1</fpage>
<lpage>6</lpage>
<pub-id pub-id-type="doi">10.18100/ijamec.60004</pub-id>
</element-citation>
</ref>
<ref id="B58-sensors-20-00342">
<label>58.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mahier</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Hemery</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>El-Abed</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>El-Allam</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Bouhaddaoui</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Rosenberger</surname>
<given-names>C.</given-names>
</name>
</person-group>
<article-title>Computation EvaBio: A tool for performance evaluation in biometrics</article-title>
<source>Int. J. Autom. Identif. Technol.</source>
<year>2011</year>
<volume>24</volume>
<comment>hal-00984026</comment>
</element-citation>
</ref>
<ref id="B59-sensors-20-00342">
<label>59.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Alahi</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Ortiz</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Vandergheynst</surname>
<given-names>P.</given-names>
</name>
</person-group>
<article-title>Freak: Fast retina keypoint</article-title>
<source>Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition</source>
<conf-loc>Providence, RI, USA</conf-loc>
<conf-date>16–21 June 2012</conf-date>
<fpage>510</fpage>
<lpage>517</lpage>
</element-citation>
</ref>
<ref id="B60-sensors-20-00342">
<label>60.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Arashloo</surname>
<given-names>S.R.</given-names>
</name>
<name>
<surname>Kittler</surname>
<given-names>J.</given-names>
</name>
</person-group>
<article-title>Efficient processing of MRFs for unconstrained-pose face recognition</article-title>
<source>Proceedings of the 2013 IEEE Sixth International Conference on Biometrics: Theory, Applications and Systems (BTAS)</source>
<conf-loc>Arlington, VA, USA</conf-loc>
<conf-date>29 September–2 October 2013</conf-date>
<publisher-name>IEEE</publisher-name>
<publisher-loc>Piscataway, NJ, USA</publisher-loc>
<year>2013</year>
<fpage>1</fpage>
<lpage>8</lpage>
</element-citation>
</ref>
<ref id="B61-sensors-20-00342">
<label>61.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Ghorbel</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Tajouri</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Aydi</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Masmoudi</surname>
<given-names>N.</given-names>
</name>
</person-group>
<article-title>A comparative study of GOM, uLBP, VLC and fractional Eigenfaces for face recognition</article-title>
<source>Proceedings of the 2016 International Image Processing, Applications and Systems (IPAS)</source>
<conf-loc>Hammamet, Tunisia</conf-loc>
<conf-date>5–7 November 2016</conf-date>
<publisher-name>IEEE</publisher-name>
<publisher-loc>Piscataway, NJ, USA</publisher-loc>
<year>2016</year>
<fpage>1</fpage>
<lpage>5</lpage>
</element-citation>
</ref>
<ref id="B62-sensors-20-00342">
<label>62.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lima</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Zen</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Nankaku</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Miyajima</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Tokuda</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Kitamura</surname>
<given-names>T.</given-names>
</name>
</person-group>
<article-title>On the use of kernel PCA for feature extraction in speech recognition</article-title>
<source>IEICE Trans. Inf. Syst.</source>
<year>2004</year>
<volume>87</volume>
<fpage>2802</fpage>
<lpage>2811</lpage>
</element-citation>
</ref>
<ref id="B63-sensors-20-00342">
<label>63.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Devi</surname>
<given-names>B.J.</given-names>
</name>
<name>
<surname>Veeranjaneyulu</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Kishore</surname>
<given-names>K.V.K.</given-names>
</name>
</person-group>
<article-title>A novel face recognition system based on combining eigenfaces with fisher faces using wavelets</article-title>
<source>Procedia Comput. Sci.</source>
<year>2010</year>
<volume>2</volume>
<fpage>44</fpage>
<lpage>51</lpage>
<pub-id pub-id-type="doi">10.1016/j.procs.2010.11.007</pub-id>
</element-citation>
</ref>
<ref id="B64-sensors-20-00342">
<label>64.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Simonyan</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Parkhi</surname>
<given-names>O.M.</given-names>
</name>
<name>
<surname>Vedaldi</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Zisserman</surname>
<given-names>A.</given-names>
</name>
</person-group>
<article-title>Fisher vector faces in the wild</article-title>
<source>Proceedings of the BMVC 2013—British Machine Vision Conference</source>
<conf-loc>Bristol, UK</conf-loc>
<conf-date>9–13 September 2013</conf-date>
</element-citation>
</ref>
<ref id="B65-sensors-20-00342">
<label>65.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Li</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Ma</surname>
<given-names>K.K.</given-names>
</name>
</person-group>
<article-title>Fisherface vs. eigenface in the dual-tree complex wavelet domain</article-title>
<source>Proceedings of the 2009 Fifth International Conference on Intelligent Information Hiding and Multimedia Signal Processing</source>
<conf-loc>Kyoto, Japan</conf-loc>
<conf-date>12–14 September 2009</conf-date>
<publisher-name>IEEE</publisher-name>
<publisher-loc>Piscataway, NJ, USA</publisher-loc>
<year>2009</year>
<fpage>30</fpage>
<lpage>33</lpage>
</element-citation>
</ref>
<ref id="B66-sensors-20-00342">
<label>66.</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Agarwal</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Jain</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Regunathan</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Kumar</surname>
<given-names>C.P.</given-names>
</name>
</person-group>
<article-title>Automatic Attendance System Using Face Recognition Technique</article-title>
<source>Proceedings of the 2nd International Conference on Data Engineering and Communication Technology</source>
<publisher-name>Springer</publisher-name>
<publisher-loc>Singapore</publisher-loc>
<year>2019</year>
<fpage>525</fpage>
<lpage>533</lpage>
</element-citation>
</ref>
<ref id="B67-sensors-20-00342">
<label>67.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Cui</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Xu</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Shan</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>X.</given-names>
</name>
</person-group>
<article-title>Fusing robust face region descriptors via multiple metric learning for face recognition in the wild</article-title>
<source>Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition</source>
<conf-loc>Portland, OR, USA</conf-loc>
<conf-date>23–28 June 2013</conf-date>
<fpage>3554</fpage>
<lpage>3561</lpage>
</element-citation>
</ref>
<ref id="B68-sensors-20-00342">
<label>68.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Prince</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Fu</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Mohammed</surname>
<given-names>U.</given-names>
</name>
<name>
<surname>Elder</surname>
<given-names>J.</given-names>
</name>
</person-group>
<article-title>Probabilistic models for inference about identity</article-title>
<source>IEEE Trans. Pattern Anal. Mach. Intell.</source>
<year>2011</year>
<volume>34</volume>
<fpage>144</fpage>
<lpage>157</lpage>
<pub-id pub-id-type="pmid">21576751</pub-id>
</element-citation>
</ref>
<ref id="B69-sensors-20-00342">
<label>69.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Perlibakas</surname>
<given-names>V.</given-names>
</name>
</person-group>
<article-title>Face recognition using principal component analysis and log-gabor filters</article-title>
<source>arXiv</source>
<year>2006</year>
<pub-id pub-id-type="arxiv">cs/0605025</pub-id>
</element-citation>
</ref>
<ref id="B70-sensors-20-00342">
<label>70.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Huang</surname>
<given-names>Z.H.</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>W.J.</given-names>
</name>
<name>
<surname>Shang</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>T.</given-names>
</name>
</person-group>
<article-title>Non-uniform patch based face recognition via 2D-DWT</article-title>
<source>Image Vision Comput.</source>
<year>2015</year>
<volume>37</volume>
<fpage>12</fpage>
<lpage>19</lpage>
<pub-id pub-id-type="doi">10.1016/j.imavis.2014.12.005</pub-id>
</element-citation>
</ref>
<ref id="B71-sensors-20-00342">
<label>71.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sufyanu</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Mohamad</surname>
<given-names>F.S.</given-names>
</name>
<name>
<surname>Yusuf</surname>
<given-names>A.A.</given-names>
</name>
<name>
<surname>Mamat</surname>
<given-names>M.B.</given-names>
</name>
</person-group>
<article-title>Enhanced Face Recognition Using Discrete Cosine Transform</article-title>
<source>Eng. Lett.</source>
<year>2016</year>
<volume>24</volume>
<fpage>52</fpage>
<lpage>61</lpage>
</element-citation>
</ref>
<ref id="B72-sensors-20-00342">
<label>72.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hoffmann</surname>
<given-names>H.</given-names>
</name>
</person-group>
<article-title>Kernel PCA for novelty detection</article-title>
<source>Pattern Recognit.</source>
<year>2007</year>
<volume>40</volume>
<fpage>863</fpage>
<lpage>874</lpage>
<pub-id pub-id-type="doi">10.1016/j.patcog.2006.07.009</pub-id>
</element-citation>
</ref>
<ref id="B73-sensors-20-00342">
<label>73.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Arashloo</surname>
<given-names>S.R.</given-names>
</name>
<name>
<surname>Kittler</surname>
<given-names>J.</given-names>
</name>
</person-group>
<article-title>Class-specific kernel fusion of multiple descriptors for face verification using multiscale binarised statistical image features</article-title>
<source>IEEE Trans. Inf. Forensics Secur.</source>
<year>2014</year>
<volume>9</volume>
<fpage>2100</fpage>
<lpage>2109</lpage>
<pub-id pub-id-type="doi">10.1109/TIFS.2014.2359587</pub-id>
</element-citation>
</ref>
<ref id="B74-sensors-20-00342">
<label>74.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Vinay</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Shekhar</surname>
<given-names>V.S.</given-names>
</name>
<name>
<surname>Murthy</surname>
<given-names>K.B.</given-names>
</name>
<name>
<surname>Natarajan</surname>
<given-names>S.</given-names>
</name>
</person-group>
<article-title>Performance study of LDA and KFA for Gabor based face recognition system</article-title>
<source>Procedia Comput. Sci.</source>
<year>2015</year>
<volume>57</volume>
<fpage>960</fpage>
<lpage>969</lpage>
<pub-id pub-id-type="doi">10.1016/j.procs.2015.07.493</pub-id>
</element-citation>
</ref>
<ref id="B75-sensors-20-00342">
<label>75.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sivasathya</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Joans</surname>
<given-names>S.M.</given-names>
</name>
</person-group>
<article-title>Image Feature Extraction using Non Linear Principle Component Analysis</article-title>
<source>Procedia Eng.</source>
<year>2012</year>
<volume>38</volume>
<fpage>911</fpage>
<lpage>917</lpage>
<pub-id pub-id-type="doi">10.1016/j.proeng.2012.06.114</pub-id>
</element-citation>
</ref>
<ref id="B76-sensors-20-00342">
<label>76.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Zhang</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Shan</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Gao</surname>
<given-names>W.</given-names>
</name>
</person-group>
<article-title>Nonlinear face recognition based on maximum average margin criterion</article-title>
<source>Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05)</source>
<conf-loc>San Diego, CA, USA</conf-loc>
<conf-date>20–25 June 2005</conf-date>
<publisher-name>IEEE</publisher-name>
<publisher-loc>Piscataway, NJ, USA</publisher-loc>
<year>2005</year>
<volume>Volume 1</volume>
<fpage>554</fpage>
<lpage>559</lpage>
</element-citation>
</ref>
<ref id="B77-sensors-20-00342">
<label>77.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Vankayalapati</surname>
<given-names>H.D.</given-names>
</name>
<name>
<surname>Kyamakya</surname>
<given-names>K.</given-names>
</name>
</person-group>
<article-title>Nonlinear feature extraction approaches with application to face recognition over large databases</article-title>
<source>Proceedings of the 2009 2nd International Workshop on Nonlinear Dynamics and Synchronization</source>
<conf-loc>Klagenfurt, Austria</conf-loc>
<conf-date>20–21 July 2009</conf-date>
<publisher-name>IEEE</publisher-name>
<publisher-loc>Piscataway, NJ, USA</publisher-loc>
<year>2009</year>
<fpage>44</fpage>
<lpage>48</lpage>
</element-citation>
</ref>
<ref id="B78-sensors-20-00342">
<label>78.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Javidi</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Tang</surname>
<given-names>Q.</given-names>
</name>
</person-group>
<article-title>Optical implementation of neural networks for face recognition by the use of nonlinear joint transform correlators</article-title>
<source>Appl. Opt.</source>
<year>1995</year>
<volume>34</volume>
<fpage>3950</fpage>
<lpage>3962</lpage>
<pub-id pub-id-type="doi">10.1364/AO.34.003950</pub-id>
<pub-id pub-id-type="pmid">21052218</pub-id>
</element-citation>
</ref>
<ref id="B79-sensors-20-00342">
<label>79.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Yang</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Frangi</surname>
<given-names>A.F.</given-names>
</name>
<name>
<surname>Yang</surname>
<given-names>J.Y.</given-names>
</name>
</person-group>
<article-title>A new kernel Fisher discriminant algorithm with application to face recognition</article-title>
<source>Neurocomputing</source>
<year>2004</year>
<volume>56</volume>
<fpage>415</fpage>
<lpage>421</lpage>
<pub-id pub-id-type="doi">10.1016/S0925-2312(03)00444-2</pub-id>
</element-citation>
</ref>
<ref id="B80-sensors-20-00342">
<label>80.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Pang</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Yu</surname>
<given-names>N.</given-names>
</name>
</person-group>
<article-title>A new nonlinear feature extraction method for face recognition</article-title>
<source>Neurocomputing</source>
<year>2006</year>
<volume>69</volume>
<fpage>949</fpage>
<lpage>953</lpage>
<pub-id pub-id-type="doi">10.1016/j.neucom.2005.07.005</pub-id>
</element-citation>
</ref>
<ref id="B81-sensors-20-00342">
<label>81.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Wang</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Fei</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Fan</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>H.</given-names>
</name>
</person-group>
<article-title>Face recognition using nonlinear locality preserving with deep networks</article-title>
<source>Proceedings of the 7th International Conference on Internet Multimedia Computing and Service</source>
<conf-loc>Hunan, China</conf-loc>
<conf-date>19–21 August 2015</conf-date>
<publisher-name>ACM</publisher-name>
<publisher-loc>New York, NY, USA</publisher-loc>
<year>2015</year>
<fpage>66</fpage>
</element-citation>
</ref>
<ref id="B82-sensors-20-00342">
<label>82.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Li</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Yao</surname>
<given-names>Y.F.</given-names>
</name>
<name>
<surname>Jing</surname>
<given-names>X.Y.</given-names>
</name>
<name>
<surname>Chang</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Gao</surname>
<given-names>S.Q.</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Yang</surname>
<given-names>J.Y.</given-names>
</name>
</person-group>
<article-title>Face recognition based on nonlinear DCT discriminant feature extraction using improved kernel DCV</article-title>
<source>IEICE Trans. Inf. Syst.</source>
<year>2009</year>
<volume>92</volume>
<fpage>2527</fpage>
<lpage>2530</lpage>
<pub-id pub-id-type="doi">10.1587/transinf.E92.D.2527</pub-id>
</element-citation>
</ref>
<ref id="B83-sensors-20-00342">
<label>83.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Khan</surname>
<given-names>S.A.</given-names>
</name>
<name>
<surname>Ishtiaq</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Nazir</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Shaheen</surname>
<given-names>M.</given-names>
</name>
</person-group>
<article-title>Face recognition under varying expressions and illumination using particle swarm optimization</article-title>
<source>J. Comput. Sci.</source>
<year>2018</year>
<volume>28</volume>
<fpage>94</fpage>
<lpage>100</lpage>
<pub-id pub-id-type="doi">10.1016/j.jocs.2018.08.005</pub-id>
</element-citation>
</ref>
<ref id="B84-sensors-20-00342">
<label>84.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hafez</surname>
<given-names>S.F.</given-names>
</name>
<name>
<surname>Selim</surname>
<given-names>M.M.</given-names>
</name>
<name>
<surname>Zayed</surname>
<given-names>H.H.</given-names>
</name>
</person-group>
<article-title>2D face recognition system based on selected Gabor filters and linear discriminant analysis LDA</article-title>
<source>arXiv</source>
<year>2015</year>
<pub-id pub-id-type="arxiv">1503.03741</pub-id>
</element-citation>
</ref>
<ref id="B85-sensors-20-00342">
<label>85.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Shanbhag</surname>
<given-names>S.S.</given-names>
</name>
<name>
<surname>Bargi</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Manikantan</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Ramachandran</surname>
<given-names>S.</given-names>
</name>
</person-group>
<article-title>Face recognition using wavelet transforms-based feature extraction and spatial differentiation-based pre-processing</article-title>
<source>Proceedings of the 2014 International Conference on Science Engineering and Management Research (ICSEMR)</source>
<conf-loc>Chennai, India</conf-loc>
<conf-date>27–29 November 2014</conf-date>
<publisher-name>IEEE</publisher-name>
<publisher-loc>Piscataway, NJ, USA</publisher-loc>
<year>2014</year>
<fpage>1</fpage>
<lpage>8</lpage>
</element-citation>
</ref>
<ref id="B86-sensors-20-00342">
<label>86.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fan</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Chow</surname>
<given-names>T.W.</given-names>
</name>
</person-group>
<article-title>Exactly Robust Kernel Principal Component Analysis</article-title>
<source>IEEE Trans. Neural Netw. Learn. Syst.</source>
<year>2019</year>
<pub-id pub-id-type="doi">10.1109/TNNLS.2019.2909686</pub-id>
<pub-id pub-id-type="pmid">31034425</pub-id>
</element-citation>
</ref>
<ref id="B87-sensors-20-00342">
<label>87.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Vinay</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Cholin</surname>
<given-names>A.S.</given-names>
</name>
<name>
<surname>Bhat</surname>
<given-names>A.D.</given-names>
</name>
<name>
<surname>Murthy</surname>
<given-names>K.B.</given-names>
</name>
<name>
<surname>Natarajan</surname>
<given-names>S.</given-names>
</name>
</person-group>
<article-title>An Efficient ORB based Face Recognition framework for Human-Robot Interaction</article-title>
<source>Procedia Comput. Sci.</source>
<year>2018</year>
<volume>133</volume>
<fpage>913</fpage>
<lpage>923</lpage>
</element-citation>
</ref>
<ref id="B88-sensors-20-00342">
<label>88.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lu</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Plataniotis</surname>
<given-names>K.N.</given-names>
</name>
<name>
<surname>Venetsanopoulos</surname>
<given-names>A.N.</given-names>
</name>
</person-group>
<article-title>Face recognition using kernel direct discriminant analysis algorithms</article-title>
<source>IEEE Trans. Neural Netw.</source>
<year>2003</year>
<volume>14</volume>
<fpage>117</fpage>
<lpage>126</lpage>
<pub-id pub-id-type="pmid">18237995</pub-id>
</element-citation>
</ref>
<ref id="B89-sensors-20-00342">
<label>89.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Yang</surname>
<given-names>W.J.</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>Y.C.</given-names>
</name>
<name>
<surname>Chung</surname>
<given-names>P.C.</given-names>
</name>
<name>
<surname>Yang</surname>
<given-names>J.F.</given-names>
</name>
</person-group>
<article-title>Multi-feature shape regression for face alignment</article-title>
<source>EURASIP J. Adv. Signal Process.</source>
<year>2018</year>
<volume>2018</volume>
<fpage>51</fpage>
<pub-id pub-id-type="doi">10.1186/s13634-018-0572-6</pub-id>
</element-citation>
</ref>
<ref id="B90-sensors-20-00342">
<label>90.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ouanan</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Ouanan</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Aksasse</surname>
<given-names>B.</given-names>
</name>
</person-group>
<article-title>Non-linear dictionary representation of deep features for face recognition from a single sample per person</article-title>
<source>Procedia Comput. Sci.</source>
<year>2018</year>
<volume>127</volume>
<fpage>114</fpage>
<lpage>122</lpage>
<pub-id pub-id-type="doi">10.1016/j.procs.2018.01.105</pub-id>
</element-citation>
</ref>
<ref id="B91-sensors-20-00342">
<label>91.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Fathima</surname>
<given-names>A.A.</given-names>
</name>
<name>
<surname>Ajitha</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Vaidehi</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Hemalatha</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Karthigaiveni</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Kumar</surname>
<given-names>R.</given-names>
</name>
</person-group>
<article-title>Hybrid approach for face recognition combining Gabor Wavelet and Linear Discriminant Analysis</article-title>
<source>Proceedings of the 2015 IEEE International Conference on Computer Graphics, Vision and Information Security (CGVIS)</source>
<conf-loc>Bhubaneswar, India</conf-loc>
<conf-date>2–3 November 2015</conf-date>
<publisher-name>IEEE</publisher-name>
<publisher-loc>Piscataway, NJ, USA</publisher-loc>
<year>2015</year>
<fpage>220</fpage>
<lpage>225</lpage>
</element-citation>
</ref>
<ref id="B92-sensors-20-00342">
<label>92.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Barkan</surname>
<given-names>O.</given-names>
</name>
<name>
<surname>Weill</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Wolf</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Aronowitz</surname>
<given-names>H.</given-names>
</name>
</person-group>
<article-title>Fast high dimensional vector multiplication face recognition</article-title>
<source>Proceedings of the IEEE International Conference on Computer Vision</source>
<conf-loc>Sydney, Australia</conf-loc>
<conf-date>1–8 December 2013</conf-date>
<fpage>1960</fpage>
<lpage>1967</lpage>
</element-citation>
</ref>
<ref id="B93-sensors-20-00342">
<label>93.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Juefei-Xu</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Luu</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Savvides</surname>
<given-names>M.</given-names>
</name>
</person-group>
<article-title>Spartans: Single-sample periocular-based alignment-robust recognition technique applied to non-frontal scenarios</article-title>
<source>IEEE Trans. Image Process.</source>
<year>2015</year>
<volume>24</volume>
<fpage>4780</fpage>
<lpage>4795</lpage>
<pub-id pub-id-type="doi">10.1109/TIP.2015.2468173</pub-id>
<pub-id pub-id-type="pmid">26285149</pub-id>
</element-citation>
</ref>
<ref id="B94-sensors-20-00342">
<label>94.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Yan</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Suter</surname>
<given-names>D.</given-names>
</name>
</person-group>
<article-title>Multi-subregion based correlation filter bank for robust face recognition</article-title>
<source>Pattern Recognit.</source>
<year>2014</year>
<volume>47</volume>
<fpage>3487</fpage>
<lpage>3501</lpage>
<pub-id pub-id-type="doi">10.1016/j.patcog.2014.05.004</pub-id>
</element-citation>
</ref>
<ref id="B95-sensors-20-00342">
<label>95.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ding</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Tao</surname>
<given-names>D.</given-names>
</name>
</person-group>
<article-title>Robust face recognition via multimodal deep face representation</article-title>
<source>IEEE Trans. Multimed.</source>
<year>2015</year>
<volume>17</volume>
<fpage>2049</fpage>
<lpage>2058</lpage>
<pub-id pub-id-type="doi">10.1109/TMM.2015.2477042</pub-id>
</element-citation>
</ref>
<ref id="B96-sensors-20-00342">
<label>96.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sharma</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Patterh</surname>
<given-names>M.S.</given-names>
</name>
</person-group>
<article-title>A new pose invariant face recognition system using PCA and ANFIS</article-title>
<source>Optik</source>
<year>2015</year>
<volume>126</volume>
<fpage>3483</fpage>
<lpage>3487</lpage>
<pub-id pub-id-type="doi">10.1016/j.ijleo.2015.08.205</pub-id>
</element-citation>
</ref>
<ref id="B97-sensors-20-00342">
<label>97.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Moussa</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Hmila</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Douik</surname>
<given-names>A.</given-names>
</name>
</person-group>
<article-title>A Novel Face Recognition Approach Based on Genetic Algorithm Optimization</article-title>
<source>Stud. Inform. Control</source>
<year>2018</year>
<volume>27</volume>
<fpage>127</fpage>
<lpage>134</lpage>
<pub-id pub-id-type="doi">10.24846/v27i1y201813</pub-id>
</element-citation>
</ref>
<ref id="B98-sensors-20-00342">
<label>98.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mian</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Bennamoun</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Owens</surname>
<given-names>R.</given-names>
</name>
</person-group>
<article-title>An efficient multimodal 2D-3D hybrid approach to automatic face recognition</article-title>
<source>IEEE Trans. Pattern Anal. Mach. Intell.</source>
<year>2007</year>
<volume>29</volume>
<fpage>1927</fpage>
<lpage>1943</lpage>
<pub-id pub-id-type="doi">10.1109/TPAMI.2007.1105</pub-id>
<pub-id pub-id-type="pmid">17848775</pub-id>
</element-citation>
</ref>
<ref id="B99-sensors-20-00342">
<label>99.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Cho</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Roberts</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Jung</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Choi</surname>
<given-names>O.</given-names>
</name>
<name>
<surname>Moon</surname>
<given-names>S.</given-names>
</name>
</person-group>
<article-title>An efficient hybrid face recognition algorithm using PCA and GABOR wavelets</article-title>
<source>Int. J. Adv. Robot. Syst.</source>
<year>2014</year>
<volume>11</volume>
<fpage>59</fpage>
<pub-id pub-id-type="doi">10.5772/58473</pub-id>
</element-citation>
</ref>
<ref id="B100-sensors-20-00342">
<label>100.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Guru</surname>
<given-names>D.S.</given-names>
</name>
<name>
<surname>Suraj</surname>
<given-names>M.G.</given-names>
</name>
<name>
<surname>Manjunath</surname>
<given-names>S.</given-names>
</name>
</person-group>
<article-title>Fusion of covariance matrices of PCA and FLD</article-title>
<source>Pattern Recognit. Lett.</source>
<year>2011</year>
<volume>32</volume>
<fpage>432</fpage>
<lpage>440</lpage>
<pub-id pub-id-type="doi">10.1016/j.patrec.2010.10.006</pub-id>
</element-citation>
</ref>
<ref id="B101-sensors-20-00342">
<label>101.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sing</surname>
<given-names>J.K.</given-names>
</name>
<name>
<surname>Chowdhury</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Basu</surname>
<given-names>D.K.</given-names>
</name>
<name>
<surname>Nasipuri</surname>
<given-names>M.</given-names>
</name>
</person-group>
<article-title>An improved hybrid approach to face recognition by fusing local and global discriminant features</article-title>
<source>Int. J. Biom.</source>
<year>2012</year>
<volume>4</volume>
<fpage>144</fpage>
<lpage>164</lpage>
<pub-id pub-id-type="doi">10.1504/IJBM.2012.046245</pub-id>
</element-citation>
</ref>
<ref id="B102-sensors-20-00342">
<label>102.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kamencay</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Zachariasova</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Hudec</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Jarina</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Benco</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Hlubik</surname>
<given-names>J.</given-names>
</name>
</person-group>
<article-title>A novel approach to face recognition using image segmentation based on SPCA-KNN method</article-title>
<source>Radioengineering</source>
<year>2013</year>
<volume>22</volume>
<fpage>92</fpage>
<lpage>99</lpage>
</element-citation>
</ref>
<ref id="B103-sensors-20-00342">
<label>103.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sun</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Fu</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>He</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Xu</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Tan</surname>
<given-names>L.</given-names>
</name>
</person-group>
<article-title>Sequential Human Activity Recognition Based on Deep Convolutional Network and Extreme Learning Machine Using Wearable Sensors</article-title>
<source>J. Sens.</source>
<year>2018</year>
<volume>2018</volume>
<fpage>10</fpage>
<pub-id pub-id-type="doi">10.1155/2018/8580959</pub-id>
</element-citation>
</ref>
<ref id="B104-sensors-20-00342">
<label>104.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Soltanpour</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Boufama</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Wu</surname>
<given-names>Q.J.</given-names>
</name>
</person-group>
<article-title>A survey of local feature methods for 3D face recognition</article-title>
<source>Pattern Recognit.</source>
<year>2017</year>
<volume>72</volume>
<fpage>391</fpage>
<lpage>406</lpage>
<pub-id pub-id-type="doi">10.1016/j.patcog.2017.08.003</pub-id>
</element-citation>
</ref>
<ref id="B105-sensors-20-00342">
<label>105.</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Sharma</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>ul Hussain</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Jurie</surname>
<given-names>F.</given-names>
</name>
</person-group>
<article-title>Local higher-order statistics (LHS) for texture categorization and facial analysis</article-title>
<source>European Conference on Computer Vision</source>
<publisher-name>Springer</publisher-name>
<publisher-loc>Berlin/Heidelberg, Germany</publisher-loc>
<year>2012</year>
<fpage>1</fpage>
<lpage>12</lpage>
</element-citation>
</ref>
<ref id="B106-sensors-20-00342">
<label>106.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zhang</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Marszałek</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Lazebnik</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Schmid</surname>
<given-names>C.</given-names>
</name>
</person-group>
<article-title>Local features and kernels for classification of texture and object categories: A comprehensive study</article-title>
<source>Int. J. Comput. Vis.</source>
<year>2007</year>
<volume>73</volume>
<fpage>213</fpage>
<lpage>238</lpage>
<pub-id pub-id-type="doi">10.1007/s11263-006-9794-4</pub-id>
</element-citation>
</ref>
<ref id="B107-sensors-20-00342">
<label>107.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Leonard</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Alfalou</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Brosseau</surname>
<given-names>C.</given-names>
</name>
</person-group>
<article-title>Spectral optimized asymmetric segmented phase-only correlation filter</article-title>
<source>Appl. Opt.</source>
<year>2012</year>
<volume>51</volume>
<fpage>2638</fpage>
<lpage>2650</lpage>
<pub-id pub-id-type="doi">10.1364/AO.51.002638</pub-id>
<pub-id pub-id-type="pmid">22614484</pub-id>
</element-citation>
</ref>
<ref id="B108-sensors-20-00342">
<label>108.</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Shen</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Bai</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Ji</surname>
<given-names>Z.</given-names>
</name>
</person-group>
<article-title>A SVM face recognition method based on optimized Gabor features</article-title>
<source>International Conference on Advances in Visual Information Systems</source>
<publisher-name>Springer</publisher-name>
<publisher-loc>Berlin/Heidelberg, Germany</publisher-loc>
<year>2007</year>
<fpage>165</fpage>
<lpage>174</lpage>
</element-citation>
</ref>
<ref id="B109-sensors-20-00342">
<label>109.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Pratima</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Nimmakanti</surname>
<given-names>N.</given-names>
</name>
</person-group>
<article-title>Pattern Recognition Algorithms for Cluster Identification Problem</article-title>
<source>Int. J. Comput. Sci. Inform.</source>
<year>2012</year>
<volume>1</volume>
<fpage>2231</fpage>
<lpage>5292</lpage>
</element-citation>
</ref>
<ref id="B110-sensors-20-00342">
<label>110.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Zhang</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Prasanna</surname>
<given-names>V.</given-names>
</name>
</person-group>
<article-title>Frequency domain acceleration of convolutional neural networks on CPU-FPGA shared memory system</article-title>
<source>Proceedings of the 2017 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays</source>
<conf-loc>Monterey, CA, USA</conf-loc>
<conf-date>22–24 February 2017</conf-date>
<publisher-name>ACM</publisher-name>
<publisher-loc>New York, NY, USA</publisher-loc>
<year>2017</year>
<fpage>35</fpage>
<lpage>44</lpage>
</element-citation>
</ref>
<ref id="B111-sensors-20-00342">
<label>111.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Nguyen</surname>
<given-names>D.T.</given-names>
</name>
<name>
<surname>Pham</surname>
<given-names>T.D.</given-names>
</name>
<name>
<surname>Lee</surname>
<given-names>M.B.</given-names>
</name>
<name>
<surname>Park</surname>
<given-names>K.R.</given-names>
</name>
</person-group>
<article-title>Visible-Light Camera Sensor-Based Presentation Attack Detection for Face Recognition by Combining Spatial and Temporal Information</article-title>
<source>Sensors</source>
<year>2019</year>
<volume>19</volume>
<elocation-id>410</elocation-id>
<pub-id pub-id-type="doi">10.3390/s19020410</pub-id>
<pub-id pub-id-type="pmid">30669531</pub-id>
</element-citation>
</ref>
<ref id="B112-sensors-20-00342">
<label>112.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Parkhi</surname>
<given-names>O.M.</given-names>
</name>
<name>
<surname>Vedaldi</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Zisserman</surname>
<given-names>A.</given-names>
</name>
</person-group>
<article-title>Deep face recognition</article-title>
<source>Proceedings of the BMVC 2015—British Machine Vision Conference</source>
<conf-loc>Swansea, UK</conf-loc>
<conf-date>7–10 September 2015</conf-date>
</element-citation>
</ref>
<ref id="B113-sensors-20-00342">
<label>113.</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Wen</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Qiao</surname>
<given-names>Y.</given-names>
</name>
</person-group>
<article-title>A discriminative feature learning approach for deep face recognition</article-title>
<source>European Conference on Computer Vision</source>
<publisher-name>Springer</publisher-name>
<publisher-loc>Berlin/Heidelberg, Germany</publisher-loc>
<year>2016</year>
<fpage>499</fpage>
<lpage>515</lpage>
</element-citation>
</ref>
<ref id="B114-sensors-20-00342">
<label>114.</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Passalis</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Tefas</surname>
<given-names>A.</given-names>
</name>
</person-group>
<article-title>Spatial bag of features learning for large scale face image retrieval</article-title>
<source>INNS Conference on Big Data</source>
<publisher-name>Springer</publisher-name>
<publisher-loc>Berlin/Heidelberg, Germany</publisher-loc>
<year>2016</year>
<fpage>8</fpage>
<lpage>17</lpage>
</element-citation>
</ref>
<ref id="B115-sensors-20-00342">
<label>115.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Liu</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Wen</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Yu</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Raj</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Song</surname>
<given-names>L.</given-names>
</name>
</person-group>
<article-title>Sphereface: Deep hypersphere embedding for face recognition</article-title>
<source>Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition</source>
<conf-loc>Honolulu, HI, USA</conf-loc>
<conf-date>21–26 July 2017</conf-date>
<fpage>212</fpage>
<lpage>220</lpage>
</element-citation>
</ref>
<ref id="B116-sensors-20-00342">
<label>116.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Amato</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Falchi</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Gennaro</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Massoli</surname>
<given-names>F.V.</given-names>
</name>
<name>
<surname>Passalis</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Tefas</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Vairo</surname>
<given-names>C.</given-names>
</name>
</person-group>
<article-title>Face Verification and Recognition for Digital Forensics and Information Security</article-title>
<source>Proceedings of the 2019 7th International Symposium on Digital Forensics and Security (ISDFS)</source>
<conf-loc>Barcelos, Portugal</conf-loc>
<conf-date>10–12 June 2019</conf-date>
<publisher-name>IEEE</publisher-name>
<publisher-loc>Piscataway, NJ, USA</publisher-loc>
<year>2019</year>
<fpage>1</fpage>
<lpage>6</lpage>
</element-citation>
</ref>
<ref id="B117-sensors-20-00342">
<label>117.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Taigman</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Yang</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Ranzato</surname>
<given-names>M.A.</given-names>
</name>
<name>
<surname>Wolf</surname>
<given-names>L.</given-names>
</name>
</person-group>
<article-title>Deepface: Closing the gap to human-level performance in face verification</article-title>
<source>Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition</source>
<conf-loc>Washington, DC, USA</conf-loc>
<conf-date>23–28 June 2014</conf-date>
<fpage>1701</fpage>
<lpage>1708</lpage>
</element-citation>
</ref>
<ref id="B118-sensors-20-00342">
<label>118.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ma</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Ding</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Yuan</surname>
<given-names>X.</given-names>
</name>
</person-group>
<article-title>Deep CNNs with Robust LBP Guiding Pooling for Face Recognition</article-title>
<source>Sensors</source>
<year>2018</year>
<volume>18</volume>
<elocation-id>3876</elocation-id>
<pub-id pub-id-type="doi">10.3390/s18113876</pub-id>
<pub-id pub-id-type="pmid">30423850</pub-id>
</element-citation>
</ref>
<ref id="B119-sensors-20-00342">
<label>119.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Koo</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Cho</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Baek</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Kim</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Park</surname>
<given-names>K.</given-names>
</name>
</person-group>
<article-title>CNN-Based Multimodal Human Recognition in Surveillance Environments</article-title>
<source>Sensors</source>
<year>2018</year>
<volume>18</volume>
<elocation-id>3040</elocation-id>
<pub-id pub-id-type="doi">10.3390/s18093040</pub-id>
</element-citation>
</ref>
<ref id="B120-sensors-20-00342">
<label>120.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Cho</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Baek</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Kim</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Koo</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Kim</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Park</surname>
<given-names>K.</given-names>
</name>
</person-group>
<article-title>Face Detection in Nighttime Images Using Visible-Light Camera Sensors with Two-Step Faster Region-Based Convolutional Neural Network</article-title>
<source>Sensors</source>
<year>2018</year>
<volume>18</volume>
<elocation-id>2995</elocation-id>
<pub-id pub-id-type="doi">10.3390/s18092995</pub-id>
</element-citation>
</ref>
<ref id="B121-sensors-20-00342">
<label>121.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Koshy</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Mahmood</surname>
<given-names>A.</given-names>
</name>
</person-group>
<article-title>Optimizing Deep CNN Architectures for Face Liveness Detection</article-title>
<source>Entropy</source>
<year>2019</year>
<volume>21</volume>
<elocation-id>423</elocation-id>
<pub-id pub-id-type="doi">10.3390/e21040423</pub-id>
</element-citation>
</ref>
<ref id="B122-sensors-20-00342">
<label>122.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Elmahmudi</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Ugail</surname>
<given-names>H.</given-names>
</name>
</person-group>
<article-title>Deep face recognition using imperfect facial data</article-title>
<source>Future Gener. Comput. Syst.</source>
<year>2019</year>
<volume>99</volume>
<fpage>213</fpage>
<lpage>225</lpage>
<pub-id pub-id-type="doi">10.1016/j.future.2019.04.025</pub-id>
</element-citation>
</ref>
<ref id="B123-sensors-20-00342">
<label>123.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Seibold</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Samek</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Hilsmann</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Eisert</surname>
<given-names>P.</given-names>
</name>
</person-group>
<article-title>Accurate and robust neural networks for security related applications exampled by face morphing attacks</article-title>
<source>arXiv</source>
<year>2018</year>
<pub-id pub-id-type="arxiv">1806.04265</pub-id>
</element-citation>
</ref>
<ref id="B124-sensors-20-00342">
<label>124.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Yim</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Jung</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Yoo</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Choi</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Park</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Kim</surname>
<given-names>J.</given-names>
</name>
</person-group>
<article-title>Rotating your face using multi-task deep neural network</article-title>
<source>Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition</source>
<conf-loc>Boston, MA, USA</conf-loc>
<conf-date>7–12 June 2015</conf-date>
<fpage>676</fpage>
<lpage>684</lpage>
</element-citation>
</ref>
<ref id="B125-sensors-20-00342">
<label>125.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bajrami</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Gashi</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Murturi</surname>
<given-names>I.</given-names>
</name>
</person-group>
<article-title>Face recognition performance using linear discriminant analysis and deep neural networks</article-title>
<source>Int. J. Appl. Pattern Recognit.</source>
<year>2018</year>
<volume>5</volume>
<fpage>240</fpage>
<lpage>250</lpage>
<pub-id pub-id-type="doi">10.1504/IJAPR.2018.094818</pub-id>
</element-citation>
</ref>
<ref id="B126-sensors-20-00342">
<label>126.</label>
<element-citation publication-type="web">
<person-group person-group-type="author">
<name>
<surname>Gourier</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Hall</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Crowley</surname>
<given-names>J.L.</given-names>
</name>
</person-group>
<article-title>Estimating Face Orientation from Robust Detection of Salient Facial Structures</article-title>
<comment>Available online:
<ext-link ext-link-type="uri" xlink:href="venus.inrialpes.fr/jlc/papers/Pointing04-Gourier.pdf">venus.inrialpes.fr/jlc/papers/Pointing04-Gourier.pdf</ext-link>
</comment>
<date-in-citation content-type="access-date" iso-8601-date="2019-12-15">(accessed on 15 December 2019)</date-in-citation>
</element-citation>
</ref>
<ref id="B127-sensors-20-00342">
<label>127.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gonzalez-Sosa</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Fierrez</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Vera-Rodriguez</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Alonso-Fernandez</surname>
<given-names>F.</given-names>
</name>
</person-group>
<article-title>Facial soft biometrics for recognition in the wild: Recent works, annotation, and COTS evaluation</article-title>
<source>IEEE Trans. Inf. Forensics Secur.</source>
<year>2018</year>
<volume>13</volume>
<fpage>2001</fpage>
<lpage>2014</lpage>
<pub-id pub-id-type="doi">10.1109/TIFS.2018.2807791</pub-id>
</element-citation>
</ref>
<ref id="B128-sensors-20-00342">
<label>128.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Boukamcha</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Hallek</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Smach</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Atri</surname>
<given-names>M.</given-names>
</name>
</person-group>
<article-title>Automatic landmark detection and 3D Face data extraction</article-title>
<source>J. Comput. Sci.</source>
<year>2017</year>
<volume>21</volume>
<fpage>340</fpage>
<lpage>348</lpage>
<pub-id pub-id-type="doi">10.1016/j.jocs.2016.11.015</pub-id>
</element-citation>
</ref>
<ref id="B129-sensors-20-00342">
<label>129.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ouerhani</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Jridi</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Alfalou</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Brosseau</surname>
<given-names>C.</given-names>
</name>
</person-group>
<article-title>Graphics processor unit implementation of correlation technique using a segmented phase only composite filter</article-title>
<source>Opt. Commun.</source>
<year>2013</year>
<volume>289</volume>
<fpage>33</fpage>
<lpage>44</lpage>
<pub-id pub-id-type="doi">10.1016/j.optcom.2012.09.074</pub-id>
</element-citation>
</ref>
<ref id="B130-sensors-20-00342">
<label>130.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Su</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Yan</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>H.</given-names>
</name>
</person-group>
<article-title>An efficient deep neural networks training framework for robust face recognition</article-title>
<source>Proceedings of the 2017 IEEE International Conference on Image Processing (ICIP)</source>
<conf-loc>Beijing, China</conf-loc>
<conf-date>17–20 September 2017</conf-date>
<fpage>3800</fpage>
<lpage>3804</lpage>
</element-citation>
</ref>
<ref id="B131-sensors-20-00342">
<label>131.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Coşkun</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Uçar</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Yildirim</surname>
<given-names>Ö.</given-names>
</name>
<name>
<surname>Demir</surname>
<given-names>Y.</given-names>
</name>
</person-group>
<article-title>Face recognition based on convolutional neural network</article-title>
<source>Proceedings of the 2017 International Conference on Modern Electrical and Energy Systems (MEES)</source>
<conf-loc>Kremenchuk, Ukraine</conf-loc>
<conf-date>15–17 November 2017</conf-date>
<publisher-name>IEEE</publisher-name>
<publisher-loc>Piscataway, NJ, USA</publisher-loc>
<year>2017</year>
<fpage>376</fpage>
<lpage>379</lpage>
</element-citation>
</ref>
</ref-list>
</back>
<floats-group>
<fig id="sensors-20-00342-f001" orientation="portrait" position="float">
<label>Figure 1</label>
<caption>
<p>Face recognition structure [
<xref rid="B3-sensors-20-00342" ref-type="bibr">3</xref>
,
<xref rid="B23-sensors-20-00342" ref-type="bibr">23</xref>
].</p>
</caption>
<graphic xlink:href="sensors-20-00342-g001"></graphic>
</fig>
<fig id="sensors-20-00342-f002" orientation="portrait" position="float">
<label>Figure 2</label>
<caption>
<p>Face recognition methods. SIFT, scale-invariant feature transform; SURF, speeded-up robust features; BRIEF, binary robust independent elementary features; LBP, local binary pattern; HOG, histogram of oriented gradients; LPQ, local phase quantization; PCA, principal component analysis; LDA, linear discriminant analysis; KPCA, kernel PCA; CNN, convolutional neural network; SVM, support vector machine.</p>
</caption>
<graphic xlink:href="sensors-20-00342-g002"></graphic>
</fig>
<fig id="sensors-20-00342-f003" orientation="portrait" position="float">
<label>Figure 3</label>
<caption>
<p>The local binary pattern (LBP) descriptor [
<xref rid="B19-sensors-20-00342" ref-type="bibr">19</xref>
].</p>
</caption>
<graphic xlink:href="sensors-20-00342-g003"></graphic>
</fig>
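A note on the operator illustrated in Figure 3: each pixel is compared with its eight neighbours, the comparison bits are packed into one byte (0–255), and the histogram of these codes over the image (or over image blocks) is the LBP feature vector that is later matched, e.g., with a chi-square distance. A minimal NumPy sketch of this basic 8-neighbour variant is given below; the function names and the clockwise bit ordering are illustrative assumptions, not the exact implementation used in the works surveyed here.

import numpy as np

def lbp_code(img, r, c):
    # Basic 8-neighbour LBP code for pixel (r, c) of a grayscale image:
    # neighbours >= centre contribute a 1-bit, packed into one byte.
    center = img[r, c]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]  # assumed clockwise order
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr, c + dc] >= center:
            code |= 1 << bit
    return code

def lbp_histogram(img):
    # Normalized 256-bin histogram of LBP codes over the interior pixels,
    # i.e., the descriptor typically compared with a chi-square distance.
    h, w = img.shape
    codes = [lbp_code(img, r, c)
             for r in range(1, h - 1) for c in range(1, w - 1)]
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()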
<fig id="sensors-20-00342-f004" orientation="portrait" position="float">
<label>Figure 4</label>
<caption>
<p>The all-optical “
<inline-formula>
<mml:math id="mm104">
<mml:mrow>
<mml:mrow>
<mml:mn>4</mml:mn>
<mml:mi mathvariant="normal">f</mml:mi>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
” configuration [
<xref rid="B37-sensors-20-00342" ref-type="bibr">37</xref>
].</p>
</caption>
<graphic xlink:href="sensors-20-00342-g004"></graphic>
</fig>
<fig id="sensors-20-00342-f005" orientation="portrait" position="float">
<label>Figure 5</label>
<caption>
<p>Flowchart of the VanderLugt correlator (VLC) technique [
<xref rid="B4-sensors-20-00342" ref-type="bibr">4</xref>
]. FFT, fast Fourier transform; POF, phase-only filter.</p>
</caption>
<graphic xlink:href="sensors-20-00342-g005"></graphic>
</fig>
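Figure 5 can also be read as a numerical recipe: Fourier-transform the target image, multiply by a phase-only filter (POF) built from the reference spectrum, inverse-transform, and decide from the sharpness of the correlation peak (e.g., the PCE criterion that appears in Table 1). The sketch below is a minimal digital emulation with NumPy FFTs, assuming two same-size grayscale arrays and one common PCE definition; it is an illustration, not the optical implementation described in the cited reference.

import numpy as np

def pof_correlation_plane(target, reference, eps=1e-12):
    # VLC-style correlation plane with a phase-only filter:
    # H = conj(R) / |R|, correlation = IFFT(T * H).
    T = np.fft.fft2(target)
    R = np.fft.fft2(reference)
    pof = np.conj(R) / (np.abs(R) + eps)  # keep only the phase of the reference
    return np.fft.ifft2(T * pof)

def pce(corr_plane):
    # Peak-to-correlation-energy: energy of the highest peak divided by the
    # total energy of the plane (one of several definitions in the literature).
    energy = np.abs(corr_plane) ** 2
    return energy.max() / energy.sum()

A high, narrow peak (large PCE) indicates that the target matches the reference; in practice the decision threshold is tuned on a validation set.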
<fig id="sensors-20-00342-f006" orientation="portrait" position="float">
<label>Figure 6</label>
<caption>
<p>(
<bold>a</bold>
) Creation of the 3D face of a person, (
<bold>b</bold>
) results of the detection of 29 landmarks of a face using the active shape model, (
<bold>c</bold>
) results of the detection of 26 landmarks of a face [
<xref rid="B3-sensors-20-00342" ref-type="bibr">3</xref>
].</p>
</caption>
<graphic xlink:href="sensors-20-00342-g006"></graphic>
</fig>
<fig id="sensors-20-00342-f007" orientation="portrait" position="float">
<label>Figure 7</label>
<caption>
<p>Face recognition based on the speeded-up robust features (SURF) descriptor [
<xref rid="B58-sensors-20-00342" ref-type="bibr">58</xref>
]: recognition using fast library for approximate nearest neighbors (FLANN) distance.</p>
</caption>
<graphic xlink:href="sensors-20-00342-g007"></graphic>
</fig>
<fig id="sensors-20-00342-f008" orientation="portrait" position="float">
<label>Figure 8</label>
<caption>
<p>The fast retina keypoint (FREAK) descriptor, built on 43 sampling points [
<xref rid="B19-sensors-20-00342" ref-type="bibr">19</xref>
].</p>
</caption>
<graphic xlink:href="sensors-20-00342-g008"></graphic>
</fig>
<fig id="sensors-20-00342-f009" orientation="portrait" position="float">
<label>Figure 9</label>
<caption>
<p>Example of dimensional reduction when applying principal component analysis (PCA) [
<xref rid="B62-sensors-20-00342" ref-type="bibr">62</xref>
].</p>
</caption>
<graphic xlink:href="sensors-20-00342-g009"></graphic>
</fig>
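As Figure 9 suggests, PCA reduces dimensionality by projecting mean-centred face vectors onto the leading eigenvectors of their covariance matrix (the Eigenfaces shown in Figure 10). A minimal sketch via the SVD is given below, assuming each row of X is one flattened face image; it is a generic illustration rather than the exact pipeline of the cited work.

import numpy as np

def pca_project(X, k):
    # Project the rows of X (n_samples x n_pixels) onto the first k
    # principal components, obtained from the SVD of the centred data.
    mean = X.mean(axis=0)
    Xc = X - mean
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]          # k x n_pixels; reshaped rows are "eigenfaces"
    return Xc @ components.T, components, mean

Recognition then amounts to comparing the k-dimensional projections of probe and gallery images with a simple distance (Euclidean, cosine, etc.).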
<fig id="sensors-20-00342-f010" orientation="portrait" position="float">
<label>Figure 10</label>
<caption>
<p>The first five Eigenfaces built from the ORL database [
<xref rid="B63-sensors-20-00342" ref-type="bibr">63</xref>
].</p>
</caption>
<graphic xlink:href="sensors-20-00342-g010"></graphic>
</fig>
<fig id="sensors-20-00342-f011" orientation="portrait" position="float">
<label>Figure 11</label>
<caption>
<p>The first five Fisherfaces obtained from the ORL database [
<xref rid="B63-sensors-20-00342" ref-type="bibr">63</xref>
].</p>
</caption>
<graphic xlink:href="sensors-20-00342-g011"></graphic>
</fig>
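Fisherfaces such as those in Figure 11 are usually computed by applying LDA after an initial PCA step, so that the within-class scatter matrix is not singular when there are far more pixels than training images. A brief sketch with scikit-learn follows; the library choice, function name, and parameters are assumptions made for illustration only.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def fisherface_projection(X, y, n_pca, n_lda):
    # Classic Fisherface pipeline: PCA first (to avoid a singular within-class
    # scatter matrix), then LDA on the reduced vectors.
    # n_lda must be at most (number of identities - 1).
    pca = PCA(n_components=n_pca).fit(X)
    lda = LinearDiscriminantAnalysis(n_components=n_lda)
    Z = lda.fit_transform(pca.transform(X), y)  # discriminant projections
    return pca, lda, Z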
<fig id="sensors-20-00342-f012" orientation="portrait" position="float">
<label>Figure 12</label>
<caption>
<p>Flowchart of the proposed multimodal deep face representation (MM-DFR) technique [
<xref rid="B95-sensors-20-00342" ref-type="bibr">95</xref>
]. CNN, convolutional neural network.</p>
</caption>
<graphic xlink:href="sensors-20-00342-g012"></graphic>
</fig>
<fig id="sensors-20-00342-f013" orientation="portrait" position="float">
<label>Figure 13</label>
<caption>
<p>The proposed CNN–LSTM–ELM [
<xref rid="B103-sensors-20-00342" ref-type="bibr">103</xref>
].</p>
</caption>
<graphic xlink:href="sensors-20-00342-g013"></graphic>
</fig>
<fig id="sensors-20-00342-f014" orientation="portrait" position="float">
<label>Figure 14</label>
<caption>
<p>Optimal hyperplane, support vectors, and maximum margin.</p>
</caption>
<graphic xlink:href="sensors-20-00342-g014"></graphic>
</fig>
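Figure 14 depicts the linear SVM decision rule sign(w·x + b), where w and b are chosen so that the geometric margin 2/||w|| between the two classes is maximal and the support vectors lie on the margin boundaries. The short sketch below uses scikit-learn's SVC as an assumed, generic setup; the surveyed works use various kernels, regularization values, and feature inputs.

import numpy as np
from sklearn.svm import SVC  # assumption: scikit-learn is available

def train_linear_svm(features, labels):
    # features: (n_samples, n_dims) descriptors; labels: two classes,
    # e.g., genuine vs. impostor comparisons.
    clf = SVC(kernel="linear", C=1.0)
    clf.fit(features, labels)
    w = clf.coef_.ravel()             # normal of the separating hyperplane
    b = clf.intercept_[0]
    margin = 2.0 / np.linalg.norm(w)  # the maximum margin of Figure 14
    return clf, w, b, margin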
<fig id="sensors-20-00342-f015" orientation="portrait" position="float">
<label>Figure 15</label>
<caption>
<p>Artificial neural network.</p>
</caption>
<graphic xlink:href="sensors-20-00342-g015"></graphic>
</fig>
<table-wrap id="sensors-20-00342-t001" orientation="portrait" position="float">
<object-id pub-id-type="pii">sensors-20-00342-t001_Table 1</object-id>
<label>Table 1</label>
<caption>
<p>Summary of local approaches. SIFT, scale-invariant feature transform; SURF, speeded-up robust features; BRIEF, binary robust independent elementary features; LBP, local binary pattern; HOG, histogram of oriented gradients; LPQ, local phase quantization; PCA, principal component analysis; LDA, linear discriminant analysis; KPCA, kernel PCA; CNN, convolutional neural network; SVM, support vector machine; PLBP, pyramid of LBP; KNN, k-nearest neighbor; MLBP, multiscale LBP; LTP, local ternary pattern; PHOG, pyramid HOG; VLC, VanderLugt correlator; LFW, Labeled Faces in the Wild; FERET, Face Recognition Technology; PHPID, Pointing Head Pose Image Database; PCE, peak to correlation energy; POF, phase-only filter; PSR, peak-to-sidelobe ratio.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th colspan="2" align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1">Author/Technique Used</th>
<th align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1" colspan="1">Database</th>
<th align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1" colspan="1">Matching</th>
<th align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1" colspan="1">Limitation</th>
<th align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1" colspan="1">Advantage</th>
<th align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1" colspan="1">Result</th>
</tr>
<tr>
<th colspan="7" align="center" valign="middle" style="border-bottom:solid thin" rowspan="1">Local Appearance-Based Techniques</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="3" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Khoi et al. [
<xref rid="B20-sensors-20-00342" ref-type="bibr">20</xref>
]</td>
<td rowspan="3" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">LBP</td>
<td align="center" valign="middle" rowspan="1" colspan="1">TDF</td>
<td rowspan="3" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">MAP</td>
<td rowspan="3" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Skewness in face image</td>
<td rowspan="3" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Robust feature in fontal face</td>
<td align="center" valign="middle" rowspan="1" colspan="1">5%</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">CF1999</td>
<td align="center" valign="middle" rowspan="1" colspan="1">13.03%</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">LFW</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">90.95%</td>
</tr>
<tr>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Xi et al. [
<xref rid="B15-sensors-20-00342" ref-type="bibr">15</xref>
]</td>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">LBPNet</td>
<td align="center" valign="middle" rowspan="1" colspan="1">FERET</td>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Cosine similarity</td>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Complexities of CNN</td>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">High recognition accuracy</td>
<td align="center" valign="middle" rowspan="1" colspan="1">97.80%</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">LFW</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">94.04%</td>
</tr>
<tr>
<td rowspan="3" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Khoi et al. [
<xref rid="B20-sensors-20-00342" ref-type="bibr">20</xref>
]</td>
<td rowspan="3" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">PLBP</td>
<td align="center" valign="middle" rowspan="1" colspan="1">TDF</td>
<td rowspan="3" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">MAP</td>
<td rowspan="3" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Skewness in face image</td>
<td rowspan="3" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Robust feature in fontal face</td>
<td align="center" valign="middle" rowspan="1" colspan="1">5.50%</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">CF</td>
<td align="center" valign="middle" rowspan="1" colspan="1">9.70%</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">LFW</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">91.97%</td>
</tr>
<tr>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Laure et al. [
<xref rid="B40-sensors-20-00342" ref-type="bibr">40</xref>
]</td>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">LBP and KNN</td>
<td align="center" valign="middle" rowspan="1" colspan="1">LFW</td>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">KNN</td>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Illumination conditions</td>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Robust</td>
<td align="center" valign="middle" rowspan="1" colspan="1">85.71%</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">CMU-PIE</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">99.26%</td>
</tr>
<tr>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Bonnen et al. [
<xref rid="B42-sensors-20-00342" ref-type="bibr">42</xref>
]</td>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">MRF and MLBP</td>
<td align="center" valign="middle" rowspan="1" colspan="1">AR (Scream)</td>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Cosine similarity</td>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Landmark extraction fails or is not ideal</td>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Robust to changes in facial expression</td>
<td align="center" valign="middle" rowspan="1" colspan="1">86.10%</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">FERET (Wearing sunglasses) </td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">95%</td>
</tr>
<tr>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Ren et al. [
<xref rid="B43-sensors-20-00342" ref-type="bibr">43</xref>
]</td>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Relaxed LTP</td>
<td align="center" valign="middle" rowspan="1" colspan="1">CMU-PIE</td>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Chisquare distance</td>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Noise level</td>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Superior performance compared with LBP, LTP</td>
<td align="center" valign="middle" rowspan="1" colspan="1">95.75%</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Yale B</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">98.71%</td>
</tr>
<tr>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Hussain et al. [
<xref rid="B60-sensors-20-00342" ref-type="bibr">60</xref>
]</td>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">LPQ</td>
<td align="center" valign="middle" rowspan="1" colspan="1">FERET/</td>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Cosine similarity</td>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Lot of discriminative information</td>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Robust to illumination variations</td>
<td align="center" valign="middle" rowspan="1" colspan="1">99.20%</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">LFW</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">75.30%</td>
</tr>
<tr>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Karaaba et al. [
<xref rid="B44-sensors-20-00342" ref-type="bibr">44</xref>
]</td>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">HOG and MMD</td>
<td align="center" valign="middle" rowspan="1" colspan="1">FERET</td>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">MMD/MLPD</td>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Low recognition accuracy</td>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Aligning difficulties</td>
<td align="center" valign="middle" rowspan="1" colspan="1">68.59%</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">LFW</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">23.49%</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Arigbabu et al. [
<xref rid="B46-sensors-20-00342" ref-type="bibr">46</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">PHOG and SVM</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">LFW</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">SVM</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Complexity and time of computation</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Head pose variation</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">88.50%</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Leonard et al. [
<xref rid="B50-sensors-20-00342" ref-type="bibr">50</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">VLC correlator</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">PHPID</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">ASPOF</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">The low number of the reference image used</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Robustness to noise</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">92%</td>
</tr>
<tr>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Napoléon et al. [
<xref rid="B38-sensors-20-00342" ref-type="bibr">38</xref>
]</td>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">LBP and VLC</td>
<td align="center" valign="middle" rowspan="1" colspan="1">YaleB</td>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">POF</td>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Illumination</td>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Rotation + Translation</td>
<td align="center" valign="middle" rowspan="1" colspan="1">98.40%</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">YaleB Extended</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">95.80%</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Heflin et al. [
<xref rid="B54-sensors-20-00342" ref-type="bibr">54</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">correlation filter</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">LFW/PHPID</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">PSR</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Some pre-processing steps </td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">More effort on the eye localization stage</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">39.48%</td>
</tr>
<tr>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Zhu et al. [
<xref rid="B55-sensors-20-00342" ref-type="bibr">55</xref>
]</td>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">PCA–FCF</td>
<td align="center" valign="middle" rowspan="1" colspan="1">CMU-PIE</td>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Correlation filter</td>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Use only linear method</td>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Occlusion-insensitive</td>
<td align="center" valign="middle" rowspan="1" colspan="1">96.60%</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">FRGC2.0</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">91.92%</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Seo et al. [
<xref rid="B27-sensors-20-00342" ref-type="bibr">27</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">LARK + PCA</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">LFW</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Cosine similarity</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Face detection</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Reducing computational complexity</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">78.90%</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Ghorbel et al. [
<xref rid="B61-sensors-20-00342" ref-type="bibr">61</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">VLC + DoG</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">FERET</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">PCE</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Low recognition rate</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Robustness</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">81.51%</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Ghorbel et al. [
<xref rid="B61-sensors-20-00342" ref-type="bibr">61</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">uLBP + DoG</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">FERET</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">chi-square distance</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Robustness</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Processing time</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">93.39%</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Ouerhani et al. [
<xref rid="B18-sensors-20-00342" ref-type="bibr">18</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">VLC</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">PHPID</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">PCE</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Power</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Processing time</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">77%</td>
</tr>
<tr>
<td colspan="7" align="center" valign="middle" style="border-bottom:solid thin" rowspan="1">
<bold>Key-Points-Based Techniques</bold>
</td>
</tr>
<tr>
<td rowspan="3" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Lenc et al. [
<xref rid="B56-sensors-20-00342" ref-type="bibr">56</xref>
]</td>
<td rowspan="3" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">SIFT</td>
<td align="center" valign="middle" rowspan="1" colspan="1">FERET</td>
<td rowspan="3" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">a posterior probability</td>
<td rowspan="3" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Still far to be perfect</td>
<td rowspan="3" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Sufficiently robust on lower quality real data</td>
<td align="center" valign="middle" rowspan="1" colspan="1">97.30%</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">AR</td>
<td align="center" valign="middle" rowspan="1" colspan="1">95.80%</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">LFW</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">98.04%</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Du et al. [
<xref rid="B29-sensors-20-00342" ref-type="bibr">29</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">SURF</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">LFW</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">FLANN distance</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Processing time</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Robustness and distinctiveness</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">95.60%</td>
</tr>
<tr>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Vinay et al. [
<xref rid="B23-sensors-20-00342" ref-type="bibr">23</xref>
]</td>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">SURF + SIFT</td>
<td align="center" valign="middle" rowspan="1" colspan="1">LFW</td>
<td align="center" valign="middle" rowspan="1" colspan="1">FLANN</td>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Processing time</td>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Robust in unconstrained scenarios</td>
<td align="center" valign="middle" rowspan="1" colspan="1">78.86%</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Face94</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">distance</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">96.67%</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Calonder et al. [
<xref rid="B30-sensors-20-00342" ref-type="bibr">30</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">BRIEF</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">_</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">KNN</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Low recognition rate</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Low processing time</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">48%</td>
</tr>
</tbody>
</table>
</table-wrap>
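Several of the matching criteria listed in Table 1, in particular the cosine similarity and the chi-square distance between descriptor histograms, reduce to one-line formulas. A minimal NumPy sketch is given below; the small eps term is only there to avoid division by zero and is not part of the original definitions.

import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two feature vectors (1 = identical direction).
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def chi_square_distance(h1, h2, eps=1e-12):
    # Chi-square distance between two (normalized) descriptor histograms.
    return float(0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps)))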
<table-wrap id="sensors-20-00342-t002" orientation="portrait" position="float">
<object-id pub-id-type="pii">sensors-20-00342-t002_Table 2</object-id>
<label>Table 2</label>
<caption>
<p>Subspace approaches. ICA, independent component analysis; DWT, discrete wavelet transform; FFT, fast Fourier transform; DCT, discrete cosine transform.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th colspan="2" align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1">Author/Techniques Used</th>
<th align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1" colspan="1">Databases </th>
<th align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1" colspan="1">Matching</th>
<th align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1" colspan="1">Limitation</th>
<th align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1" colspan="1">Advantage </th>
<th align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1" colspan="1">Result</th>
</tr>
<tr>
<th colspan="2" align="center" valign="middle" style="border-bottom:solid thin" rowspan="1"></th>
<th colspan="5" align="center" valign="middle" style="border-bottom:solid thin" rowspan="1">Linear Techniques</th>
</tr>
</thead>
<tbody>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Seo et al. [
<xref rid="B27-sensors-20-00342" ref-type="bibr">27</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">LARK and PCA</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">LFW</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">L2 distance</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Detection accuracy</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Reducing computational complexity</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">85.10%</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Annalakshmi et al. [
<xref rid="B35-sensors-20-00342" ref-type="bibr">35</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">ICA and LDA</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">LFW</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Bayesian Classifier</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Sensitivity </td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Good accuracy</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">88%</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Annalakshmi et al. [
<xref rid="B35-sensors-20-00342" ref-type="bibr">35</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">PCA and LDA</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">LFW</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Bayesian Classifier</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Sensitivity </td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Specificity</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">59%</td>
</tr>
<tr>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Hussain et al. [
<xref rid="B36-sensors-20-00342" ref-type="bibr">36</xref>
]</td>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">LQP and Gabor</td>
<td align="center" valign="middle" rowspan="1" colspan="1">FERET</td>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Cosine similarity</td>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Lot of discriminative information</td>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Robust to illumination variations</td>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">99.2%
<break></break>
75.3%</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">LFW</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Gowda et al. [
<xref rid="B17-sensors-20-00342" ref-type="bibr">17</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">LPQ and LDA</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">MEPCO</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">SVM </td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Computation time</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Good accuracy</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">99.13%</td>
</tr>
<tr>
<td rowspan="3" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Z. Cui et al. [
<xref rid="B67-sensors-20-00342" ref-type="bibr">67</xref>
]</td>
<td rowspan="3" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">BoW</td>
<td align="center" valign="middle" rowspan="1" colspan="1">AR</td>
<td rowspan="3" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">ASM</td>
<td rowspan="3" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Occlusions</td>
<td rowspan="3" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Robust</td>
<td align="center" valign="middle" rowspan="1" colspan="1">99.43%</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">ORL </td>
<td align="center" valign="middle" rowspan="1" colspan="1">99.50%</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">FERET</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">82.30%</td>
</tr>
<tr>
<td rowspan="3" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Khan et al. [
<xref rid="B83-sensors-20-00342" ref-type="bibr">83</xref>
]</td>
<td rowspan="3" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">PSO and DWT</td>
<td align="center" valign="middle" rowspan="1" colspan="1">CK</td>
<td rowspan="3" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Euclidienne distance</td>
<td rowspan="3" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Noise</td>
<td rowspan="3" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Robust to illumination</td>
<td align="center" valign="middle" rowspan="1" colspan="1">98.60%</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">MMI</td>
<td align="center" valign="middle" rowspan="1" colspan="1">95.50%</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">JAFFE</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">98.80%</td>
</tr>
<tr>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Huang et al. [
<xref rid="B70-sensors-20-00342" ref-type="bibr">70</xref>
]</td>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">2D-DWT</td>
<td align="center" valign="middle" rowspan="1" colspan="1">FERET</td>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">KNN</td>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Pose</td>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Frontal or near-frontal facial images</td>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">90.63%
<break></break>
97.10%</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">LFW</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Perlibakas and Vytautas [
<xref rid="B69-sensors-20-00342" ref-type="bibr">69</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">PCA and Gabor filter</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">FERET</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Cosine metric</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Precision</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Pose</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">87.77%</td>
</tr>
<tr>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Hafez et al. [
<xref rid="B84-sensors-20-00342" ref-type="bibr">84</xref>
]</td>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Gabor filter and LDA</td>
<td align="center" valign="middle" rowspan="1" colspan="1">ORL</td>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">2DNCC </td>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Pose</td>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Good recognition performance</td>
<td align="center" valign="middle" rowspan="1" colspan="1">98.33%</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">C. YaleB</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">99.33%</td>
</tr>
<tr>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Sufyanu et al. [
<xref rid="B71-sensors-20-00342" ref-type="bibr">71</xref>
]</td>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">DCT</td>
<td align="center" valign="middle" rowspan="1" colspan="1">ORL</td>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">NCC</td>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">High memory</td>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Controlled and uncontrolled databases</td>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">93.40%</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Yale</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Shanbhag et al. [
<xref rid="B85-sensors-20-00342" ref-type="bibr">85</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">DWT and BPSO</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">_ _</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">_ _</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Rotation</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Significant reduction in the number of features</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">88.44%</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Ghorbel et al. [
<xref rid="B61-sensors-20-00342" ref-type="bibr">61</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Eigenfaces and DoG filter</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">FERET</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Chi-square distance</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Processing time</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Reduce the representation</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">84.26%</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Zhang et al. [
<xref rid="B12-sensors-20-00342" ref-type="bibr">12</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">PCA and FFT</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">YALE</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">SVM</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Complexity</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Discrimination</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">93.42%</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Zhang et al. [
<xref rid="B12-sensors-20-00342" ref-type="bibr">12</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">PCA</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">YALE</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">SVM</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Recognition rate</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Reduce the dimensionality </td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">84.21%</td>
</tr>
<tr>
<td colspan="2" align="center" valign="middle" style="border-bottom:solid thin" rowspan="1"></td>
<td colspan="5" align="center" valign="middle" style="border-bottom:solid thin" rowspan="1">
<bold>Nonlinear Techniques</bold>
</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Fan et al. [
<xref rid="B86-sensors-20-00342" ref-type="bibr">86</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">RKPCA</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">MNIST ORL </td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">RBF kernel</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Complexity</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Robust to sparse noises</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">_</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Vinay et al. [
<xref rid="B87-sensors-20-00342" ref-type="bibr">87</xref>
] </td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">ORB and KPCA</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">ORL</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">FLANN Matching</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Processing time</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Robust</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">87.30%</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Vinay et al. [
<xref rid="B87-sensors-20-00342" ref-type="bibr">87</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">SURF and KPCA</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">ORL</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">FLANN Matching</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Processing time</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Reduce the dimensionality</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">80.34%</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Vinay et al. [
<xref rid="B87-sensors-20-00342" ref-type="bibr">87</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">SIFT and KPCA</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">ORL</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">FLANN Matching</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Low recognition rate</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Complexity</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">69.20%</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Lu et al. [
<xref rid="B88-sensors-20-00342" ref-type="bibr">88</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">KPCA and GDA</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">UMIST face</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">SVM</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">High error rate </td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Excellent performance</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">48%</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Yang et al. [
<xref rid="B89-sensors-20-00342" ref-type="bibr">89</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">PCA and MSR</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">HELEN face</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">ESR</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Complexity</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Utilizes color, gradient, and regional information</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">98.00%</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Yang et al. [
<xref rid="B89-sensors-20-00342" ref-type="bibr">89</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">LDA and MSR</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">FRGC</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">ESR</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Low performances</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Utilizes color, gradient, and regional information</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">90.75%</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Ouanan et al. [
<xref rid="B90-sensors-20-00342" ref-type="bibr">90</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">FDDL </td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">AR</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">CNN</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Occlusion</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Orientations, expressions</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">98.00%</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Vankayalapati and Kyamakya [
<xref rid="B77-sensors-20-00342" ref-type="bibr">77</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">CNN</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">ORL</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">_ _</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Poses</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">High recognition rate</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">95%</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Devi et al. [
<xref rid="B63-sensors-20-00342" ref-type="bibr">63</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">2FNN</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">ORL</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">_ _</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Complexity</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Low error rate</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">98.5</td>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap id="sensors-20-00342-t003" orientation="portrait" position="float">
<object-id pub-id-type="pii">sensors-20-00342-t003_Table 3</object-id>
<label>Table 3</label>
<caption>
<p>Hybrid approaches. GW, Gabor wavelet; OCLBP, over-complete LBP; WCCN, within class covariance normalization; WLBP, Walsh LBP; ICP, iterative closest point; LGBPHS, local Gabor binary pattern histogram sequence; FLD, Fisher linear discriminant; SAE, stacked auto-encoder.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th colspan="2" align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1">Author/Technique Used</th>
<th align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1" colspan="1">Database</th>
<th align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1" colspan="1">Matching</th>
<th align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1" colspan="1">Limitation</th>
<th align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1" colspan="1">Advantage </th>
<th align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1" colspan="1">Result</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="3" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Fathima et al. [
<xref rid="B91-sensors-20-00342" ref-type="bibr">91</xref>
]</td>
<td rowspan="3" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">GW-LDA</td>
<td align="center" valign="middle" rowspan="1" colspan="1">AT&T</td>
<td rowspan="3" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">k-NN</td>
<td rowspan="3" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">High processing time</td>
<td rowspan="3" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Illumination invariant and reduce the dimensionality</td>
<td align="center" valign="middle" rowspan="1" colspan="1">88%</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">FACES94</td>
<td align="center" valign="middle" rowspan="1" colspan="1">94.02%</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">MITINDIA</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">88.12%</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Barkan et al., [
<xref rid="B92-sensors-20-00342" ref-type="bibr">92</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">OCLBP, LDA, and WCCN</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">LFW</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">WCCN</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">_</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Reduce the dimensionality</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">87.85%</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Juefei et al. [
<xref rid="B93-sensors-20-00342" ref-type="bibr">93</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">ACF and WLBP</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">LFW</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1"></td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Complexity</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Pose conditions</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">89.69%</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Simonyan et al. [
<xref rid="B64-sensors-20-00342" ref-type="bibr">64</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Fisher + SIFT</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">LFW</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Mahalanobis matrix</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Single feature type</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Robust</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">87.47%</td>
</tr>
<tr>
<td rowspan="3" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Sharma et al. [
<xref rid="B96-sensors-20-00342" ref-type="bibr">96</xref>
]</td>
<td align="center" valign="middle" rowspan="1" colspan="1">PCA–ANFIS</td>
<td rowspan="3" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">ORL</td>
<td align="center" valign="middle" rowspan="1" colspan="1">ANFIS</td>
<td rowspan="3" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Sensitivity-specificity</td>
<td align="center" valign="middle" rowspan="1" colspan="1"></td>
<td align="center" valign="middle" rowspan="1" colspan="1">96.66%</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">ICA–ANFIS</td>
<td align="center" valign="middle" rowspan="1" colspan="1">ANFIS</td>
<td align="center" valign="middle" rowspan="1" colspan="1">Pose conditions</td>
<td align="center" valign="middle" rowspan="1" colspan="1">71.30%</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">LDA–ANFIS</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">ANFIS</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1"></td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">68%</td>
</tr>
<tr>
<td rowspan="3" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Ojala et al. [
<xref rid="B97-sensors-20-00342" ref-type="bibr">97</xref>
] </td>
<td rowspan="3" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">DCT–PCA</td>
<td align="center" valign="middle" rowspan="1" colspan="1">ORL</td>
<td rowspan="3" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Euclidian distance</td>
<td rowspan="3" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Complexity</td>
<td rowspan="3" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Reduce the dimensionality</td>
<td align="center" valign="middle" rowspan="1" colspan="1">92.62%</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">UMIST</td>
<td align="center" valign="middle" rowspan="1" colspan="1">99.40%</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">YALE</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">95.50%</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Mian et al. [
<xref rid="B98-sensors-20-00342" ref-type="bibr">98</xref>
] </td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Hotelling transform, SIFT, and ICP</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">FRGC</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">ICP</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Processing time</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Facial expressions</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">99.74%</td>
</tr>
<tr>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Cho et al. [
<xref rid="B99-sensors-20-00342" ref-type="bibr">99</xref>
]</td>
<td align="center" valign="middle" rowspan="1" colspan="1">PCA–LGBPHS</td>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Extended Yale Face</td>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Bhattacharyya distance</td>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Illumination condition</td>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Complexity</td>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">95%</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">PCA–GABOR Wavelets</td>
</tr>
<tr>
<td rowspan="3" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Sing et al. [
<xref rid="B101-sensors-20-00342" ref-type="bibr">101</xref>
]</td>
<td rowspan="3" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">PCA–FLD</td>
<td align="center" valign="middle" rowspan="1" colspan="1">CMU</td>
<td rowspan="3" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">SVM</td>
<td rowspan="3" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Robustness</td>
<td rowspan="3" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Pose, illumination, and expression</td>
<td align="center" valign="middle" rowspan="1" colspan="1">71.98%</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">FERET</td>
<td align="center" valign="middle" rowspan="1" colspan="1">94.73%</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">AR</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">68.65%</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Kamencay et al. [
<xref rid="B102-sensors-20-00342" ref-type="bibr">102</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">SPCA-KNN</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">ESSEX</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">KNN</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Processing time</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Expression variation</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">96.80%</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Sun et al. [
<xref rid="B103-sensors-20-00342" ref-type="bibr">103</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">CNN–LSTM–ELM</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">OPPORTUNITY</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">LSTM/ELM</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">High processing time</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Automatically learn feature representations</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">90.60%</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Ding et al. [
<xref rid="B95-sensors-20-00342" ref-type="bibr">95</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">CNNs and SAE</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">LFW</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">_ _</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Complexity</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">High recognition rate</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">99%</td>
</tr>
</tbody>
</table>
</table-wrap>
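Several of the hybrid pipelines in Table 3 combine a linear projection for dimensionality reduction with a simple distance-based matcher (e.g., DCT–PCA matched with a Euclidean distance, or SPCA-KNN). The lines below are a minimal sketch of that general pattern on the Olivetti/ORL faces, assuming scikit-learn is available; the number of components and neighbours are illustrative assumptions, not values taken from the cited papers.

# Minimal PCA + k-NN sketch of the "projection + distance matcher" hybrids listed in Table 3.
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

faces = fetch_olivetti_faces()
X_tr, X_te, y_tr, y_te = train_test_split(
    faces.data, faces.target, test_size=0.25, stratify=faces.target, random_state=0)

# 1) PCA keeps the leading eigenfaces (dimensionality reduction);
# 2) a 1-NN matcher assigns each probe to its nearest gallery projection (Euclidean metric).
model = make_pipeline(PCA(n_components=60, whiten=True),
                      KNeighborsClassifier(n_neighbors=1, metric="euclidean"))
model.fit(X_tr, y_tr)
print("recognition rate: %.2f%%" % (100 * model.score(X_te, y_te)))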
<table-wrap id="sensors-20-00342-t004" orientation="portrait" position="float">
<object-id pub-id-type="pii">sensors-20-00342-t004_Table 4</object-id>
<label>Table 4</label>
<caption>
<p>General performance of face recognition approaches.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th colspan="2" align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1">Approaches</th>
<th align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1" colspan="1">Databases Used</th>
<th align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1" colspan="1">Advantages</th>
<th align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1" colspan="1">Disadvantages</th>
<th align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1" colspan="1">Performances</th>
<th align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1" colspan="1">Challenges Handled</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">
<bold>Local</bold>
</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">
<bold>Local Appearance</bold>
</td>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">TDF, CF1999,
<break></break>
LFW, FERET,
<break></break>
CMU-PIE, AR,
<break></break>
Yale B, PHPID,
<break></break>
YaleB Extended, FRGC2.0, Face94.</td>
<td align="left" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">
<list list-type="bullet">
<list-item>
<p>Easy to implement, allowing an analysis of images in a difficult environment in real-time [
<xref rid="B38-sensors-20-00342" ref-type="bibr">38</xref>
].</p>
</list-item>
<list-item>
<p>Invariant to size, orientation, and lighting [
<xref rid="B47-sensors-20-00342" ref-type="bibr">47</xref>
,
<xref rid="B48-sensors-20-00342" ref-type="bibr">48</xref>
].</p>
</list-item>
</list>
</td>
<td align="left" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">
<list list-type="bullet">
<list-item>
<p>Lack discrimination ability.</p>
</list-item>
<list-item>
<p>It is difficult to automatically detect features with this approach.</p>
</list-item>
</list>
</td>
<td align="left" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">
<list list-type="bullet">
<list-item>
<p>High performance in terms of processing time and recognition rate [
<xref rid="B15-sensors-20-00342" ref-type="bibr">15</xref>
,
<xref rid="B38-sensors-20-00342" ref-type="bibr">38</xref>
].</p>
</list-item>
</list>
</td>
<td align="left" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">
<list list-type="bullet">
<list-item>
<p>Pose variations [
<xref rid="B42-sensors-20-00342" ref-type="bibr">42</xref>
], various lighting conditions [
<xref rid="B60-sensors-20-00342" ref-type="bibr">60</xref>
], facial expressions [
<xref rid="B38-sensors-20-00342" ref-type="bibr">38</xref>
], and low resolution.</p>
</list-item>
</list>
</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">
<bold>Key-Points</bold>
</td>
<td align="left" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">
<list list-type="bullet">
<list-item>
<p>Does not require prior knowledge of the images [
<xref rid="B56-sensors-20-00342" ref-type="bibr">56</xref>
].</p>
</list-item>
<list-item>
<p>Different illumination conditions, scaling, aging effects, facial expressions, face occlusions, and noisy images [
<xref rid="B57-sensors-20-00342" ref-type="bibr">57</xref>
].</p>
</list-item>
</list>
</td>
<td align="left" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">
<list list-type="bullet">
<list-item>
<p>More affected by orientation changes or the expression of the face [
<xref rid="B23-sensors-20-00342" ref-type="bibr">23</xref>
].</p>
</list-item>
</list>
</td>
<td align="left" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">
<list list-type="bullet">
<list-item>
<p>High processing time [
<xref rid="B29-sensors-20-00342" ref-type="bibr">29</xref>
].</p>
</list-item>
<list-item>
<p>Low recognition rate [
<xref rid="B30-sensors-20-00342" ref-type="bibr">30</xref>
].</p>
</list-item>
</list>
</td>
<td align="left" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">
<list list-type="bullet">
<list-item>
<p>Different illumination conditions, facial expressions, aging effects, scaling, face occlusions and noisy images [
<xref rid="B56-sensors-20-00342" ref-type="bibr">56</xref>
].</p>
</list-item>
</list>
</td>
</tr>
<tr>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">
<bold>Holistic</bold>
</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">
<bold>Linear</bold>
</td>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">LFW, FERET, MEPCO, AR, ORL, CK, MMI, JAFFE,
<break></break>
C. Yale B, Yale, MNIST, ORL, UMIST face, HELEN face, FRGC.</td>
<td align="left" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">
<list list-type="bullet">
<list-item>
<p>When frontal views of faces are used, these techniques provide good performance [
<xref rid="B35-sensors-20-00342" ref-type="bibr">35</xref>
,
<xref rid="B70-sensors-20-00342" ref-type="bibr">70</xref>
].</p>
</list-item>
<list-item>
<p>Recognition is effective and simple.</p>
</list-item>
<list-item>
<p>Dimensionality reduction, represent global information [
<xref rid="B17-sensors-20-00342" ref-type="bibr">17</xref>
,
<xref rid="B27-sensors-20-00342" ref-type="bibr">27</xref>
,
<xref rid="B67-sensors-20-00342" ref-type="bibr">67</xref>
,
<xref rid="B70-sensors-20-00342" ref-type="bibr">70</xref>
].</p>
</list-item>
</list>
</td>
<td align="left" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">
<list list-type="bullet">
<list-item>
<p>Sensitive to the rotation and the translation of the face images.</p>
</list-item>
<list-item>
<p>Can only classify a face that is “known” to the database.</p>
</list-item>
<list-item>
<p>Low recognition speed caused by long feature vectors [
<xref rid="B36-sensors-20-00342" ref-type="bibr">36</xref>
].</p>
</list-item>
</list>
</td>
<td align="left" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">
<list list-type="bullet">
<list-item>
<p>Processing involves large feature vectors.</p>
</list-item>
<list-item>
<p>High processing time [
<xref rid="B17-sensors-20-00342" ref-type="bibr">17</xref>
].</p>
</list-item>
<list-item>
<p>High performance in terms of recognition rate [
<xref rid="B67-sensors-20-00342" ref-type="bibr">67</xref>
].</p>
</list-item>
</list>
</td>
<td align="left" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">
<list list-type="bullet">
<list-item>
<p>Different illumination conditions [
<xref rid="B36-sensors-20-00342" ref-type="bibr">36</xref>
,
<xref rid="B83-sensors-20-00342" ref-type="bibr">83</xref>
], scaling, facial expressions.</p>
</list-item>
</list>
</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">
<bold>Non-Linear</bold>
</td>
<td align="left" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">
<list list-type="bullet">
<list-item>
<p>Dimensionality reduction [
<xref rid="B86-sensors-20-00342" ref-type="bibr">86</xref>
,
<xref rid="B87-sensors-20-00342" ref-type="bibr">87</xref>
,
<xref rid="B88-sensors-20-00342" ref-type="bibr">88</xref>
].</p>
</list-item>
<list-item>
<p>They are well suited to supervised classification problems.</p>
</list-item>
<list-item>
<p>Automatically detect feature in this approach (CNN and RNN) [
<xref rid="B63-sensors-20-00342" ref-type="bibr">63</xref>
,
<xref rid="B77-sensors-20-00342" ref-type="bibr">77</xref>
,
<xref rid="B90-sensors-20-00342" ref-type="bibr">90</xref>
].</p>
</list-item>
</list>
</td>
<td align="left" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">
<list list-type="bullet">
<list-item>
<p>The recognition performance depends on the chosen kernel [
<xref rid="B88-sensors-20-00342" ref-type="bibr">88</xref>
].</p>
</list-item>
<list-item>
<p>More difficult to implement than the local technique.</p>
</list-item>
<list-item>
<p>Unsatisfying recognition rate [
<xref rid="B87-sensors-20-00342" ref-type="bibr">87</xref>
,
<xref rid="B88-sensors-20-00342" ref-type="bibr">88</xref>
].</p>
</list-item>
</list>
</td>
<td align="left" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">
<list list-type="bullet">
<list-item>
<p>Complexity [
<xref rid="B88-sensors-20-00342" ref-type="bibr">88</xref>
].</p>
</list-item>
<list-item>
<p>Computationally expensive and require a high degree of correlation between the test and training images (SVM, CNN) [
<xref rid="B88-sensors-20-00342" ref-type="bibr">88</xref>
,
<xref rid="B90-sensors-20-00342" ref-type="bibr">90</xref>
].</p>
</list-item>
</list>
</td>
<td align="left" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">
<list list-type="bullet">
<list-item>
<p>Different illumination [
<xref rid="B36-sensors-20-00342" ref-type="bibr">36</xref>
,
<xref rid="B83-sensors-20-00342" ref-type="bibr">83</xref>
], poses [
<xref rid="B70-sensors-20-00342" ref-type="bibr">70</xref>
], conditions, scaling, facial expressions.</p>
</list-item>
</list>
</td>
</tr>
<tr>
<td colspan="2" align="center" valign="middle" style="border-bottom:solid thin" rowspan="1">
<bold>Hybrid</bold>
</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">AT&T, FACES94,
<break></break>
MITINDIA, LFW, ORL, UMIST, YALE, FRGC, Extended Yale, CMU, FERET, AR, ESSEX. </td>
<td align="left" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">
<list list-type="bullet">
<list-item>
<p>Provides faster systems and efficient recognition [
<xref rid="B95-sensors-20-00342" ref-type="bibr">95</xref>
].</p>
</list-item>
</list>
</td>
<td align="left" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">
<list list-type="bullet">
<list-item>
<p>More difficult to implement.</p>
</list-item>
<list-item>
<p>Complex, with high computational cost [
<xref rid="B93-sensors-20-00342" ref-type="bibr">93</xref>
,
<xref rid="B95-sensors-20-00342" ref-type="bibr">95</xref>
,
<xref rid="B97-sensors-20-00342" ref-type="bibr">97</xref>
].</p>
</list-item>
</list>
</td>
<td align="left" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">
<list list-type="bullet">
<list-item>
<p>High recognition rate [
<xref rid="B95-sensors-20-00342" ref-type="bibr">95</xref>
].</p>
</list-item>
<list-item>
<p>High computational complexity [
<xref rid="B97-sensors-20-00342" ref-type="bibr">97</xref>
].</p>
</list-item>
</list>
</td>
<td align="left" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">
<list list-type="bullet">
<list-item>
<p>Pose, illumination conditions, and facial expressions [
<xref rid="B101-sensors-20-00342" ref-type="bibr">101</xref>
,
<xref rid="B102-sensors-20-00342" ref-type="bibr">102</xref>
].</p>
</list-item>
</list>
</td>
</tr>
</tbody>
</table>
</table-wrap>
</floats-group>
</pmc>
</record>
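Table 4 in the record above contrasts local-appearance descriptors (such as LBP histograms) with holistic projections. The lines below, under the assumption that scikit-image and scikit-learn are installed, compute a uniform LBP histogram per face and compare histograms with a chi-square distance; the radius, number of sampling points, and choice of probe images are illustrative assumptions, not settings from any cited work.

# Minimal local-appearance sketch: uniform LBP histogram per face + chi-square matching.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.datasets import fetch_olivetti_faces

P, R = 8, 1                                    # sampling points and radius (assumed values)
N_BINS = P + 2                                 # number of "uniform" LBP codes

def lbp_histogram(img):
    # Encode each pixel's neighbourhood, then summarise the face as a normalised histogram.
    codes = local_binary_pattern(img, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=N_BINS, range=(0, N_BINS), density=True)
    return hist

def chi_square(h1, h2, eps=1e-10):
    # Standard chi-square distance between two histograms (smaller = more similar).
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

faces = fetch_olivetti_faces()                 # images are ordered 10 per subject
gallery = lbp_histogram(faces.images[0])       # an image of subject 0
probe_same = lbp_histogram(faces.images[1])    # another image of subject 0
probe_other = lbp_histogram(faces.images[10])  # an image of subject 1
print("same-subject distance:     ", chi_square(gallery, probe_same))
print("different-subject distance:", chi_square(gallery, probe_other))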

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Wicri/Sante/explor/MaghrebDataLibMedV2/Data/Pmc/Corpus
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000317  | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Pmc/Corpus/biblio.hfd -nk 000317  | SxmlIndent | more
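The HfdSelect commands above extract record 000317 from the biblio.hfd base. A small Python wrapper around the same command line could look like the sketch below; it assumes HfdSelect is on the PATH and that EXPLOR_STEP is set as shown above, and the function name is hypothetical.

# Hypothetical helper around the HfdSelect invocation shown above.
import os
import subprocess

def fetch_record(key="000317"):
    """Run HfdSelect on the biblio.hfd base and return the record's XML as text."""
    explor_step = os.environ["EXPLOR_STEP"]    # e.g. .../MaghrebDataLibMedV2/Data/Pmc/Corpus
    cmd = ["HfdSelect", "-h", os.path.join(explor_step, "biblio.hfd"), "-nk", key]
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

if __name__ == "__main__":
    print(fetch_record("000317"))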

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Wicri/Sante
   |area=    MaghrebDataLibMedV2
   |flux=    Pmc
   |étape=   Corpus
   |type=    RBID
   |clé=     
   |texte=   
}}

Wicri

This area was generated with Dilib version V0.6.38.
Data generation: Wed Jun 30 18:27:05 2021. Site generation: Wed Jun 30 18:34:21 2021