Server on medical data and libraries in the Maghreb (final version)


Internal identifier: 000283 (Pmc/Corpus); previous: 000282; next: 000284



The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Multi-Block Color-Binarized Statistical Images for Single-Sample Face Recognition</title>
<author>
<name sortKey="Adjabi, Insaf" sort="Adjabi, Insaf" uniqKey="Adjabi I" first="Insaf" last="Adjabi">Insaf Adjabi</name>
<affiliation>
<nlm:aff id="af1-sensors-21-00728">Department of Computer Science, LIMPAF, University of Bouira, Bouira 10000, Algeria;
<email>i.adjabi@univ-bouira.dz</email>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Ouahabi, Abdeldjalil" sort="Ouahabi, Abdeldjalil" uniqKey="Ouahabi A" first="Abdeldjalil" last="Ouahabi">Abdeldjalil Ouahabi</name>
<affiliation>
<nlm:aff id="af1-sensors-21-00728">Department of Computer Science, LIMPAF, University of Bouira, Bouira 10000, Algeria;
<email>i.adjabi@univ-bouira.dz</email>
</nlm:aff>
</affiliation>
<affiliation>
<nlm:aff id="af2-sensors-21-00728">Polytech Tours, Imaging and Brain, INSERM U930, University of Tours, 37200 Tours, France</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Benzaoui, Amir" sort="Benzaoui, Amir" uniqKey="Benzaoui A" first="Amir" last="Benzaoui">Amir Benzaoui</name>
<affiliation>
<nlm:aff id="af3-sensors-21-00728">Department of Electrical Engineering, University of Bouira, Bouira 10000, Algeria;
<email>a.benzaoui@univ-bouira.dz</email>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Jacques, Sebastien" sort="Jacques, Sebastien" uniqKey="Jacques S" first="Sébastien" last="Jacques">Sébastien Jacques</name>
<affiliation>
<nlm:aff id="af4-sensors-21-00728">GREMAN UMR 7347, University of Tours, CNRS, INSA Centre Val-de-Loire, 37200 Tours, France;
<email>sebastien.jacques@univ-tours.fr</email>
</nlm:aff>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">33494516</idno>
<idno type="pmc">7865363</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7865363</idno>
<idno type="RBID">PMC:7865363</idno>
<idno type="doi">10.3390/s21030728</idno>
<date when="2021">2021</date>
<idno type="wicri:Area/Pmc/Corpus">000283</idno>
<idno type="wicri:explorRef" wicri:stream="Pmc" wicri:step="Corpus" wicri:corpus="PMC">000283</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Multi-Block Color-Binarized Statistical Images for Single-Sample Face Recognition</title>
<author>
<name sortKey="Adjabi, Insaf" sort="Adjabi, Insaf" uniqKey="Adjabi I" first="Insaf" last="Adjabi">Insaf Adjabi</name>
<affiliation>
<nlm:aff id="af1-sensors-21-00728">Department of Computer Science, LIMPAF, University of Bouira, Bouira 10000, Algeria;
<email>i.adjabi@univ-bouira.dz</email>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Ouahabi, Abdeldjalil" sort="Ouahabi, Abdeldjalil" uniqKey="Ouahabi A" first="Abdeldjalil" last="Ouahabi">Abdeldjalil Ouahabi</name>
<affiliation>
<nlm:aff id="af1-sensors-21-00728">Department of Computer Science, LIMPAF, University of Bouira, Bouira 10000, Algeria;
<email>i.adjabi@univ-bouira.dz</email>
</nlm:aff>
</affiliation>
<affiliation>
<nlm:aff id="af2-sensors-21-00728">Polytech Tours, Imaging and Brain, INSERM U930, University of Tours, 37200 Tours, France</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Benzaoui, Amir" sort="Benzaoui, Amir" uniqKey="Benzaoui A" first="Amir" last="Benzaoui">Amir Benzaoui</name>
<affiliation>
<nlm:aff id="af3-sensors-21-00728">Department of Electrical Engineering, University of Bouira, Bouira 10000, Algeria;
<email>a.benzaoui@univ-bouira.dz</email>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Jacques, Sebastien" sort="Jacques, Sebastien" uniqKey="Jacques S" first="Sébastien" last="Jacques">Sébastien Jacques</name>
<affiliation>
<nlm:aff id="af4-sensors-21-00728">GREMAN UMR 7347, University of Tours, CNRS, INSA Centre Val-de-Loire, 37200 Tours, France;
<email>sebastien.jacques@univ-tours.fr</email>
</nlm:aff>
</affiliation>
</author>
</analytic>
<series>
<title level="j">Sensors (Basel, Switzerland)</title>
<idno type="eISSN">1424-8220</idno>
<imprint>
<date when="2021">2021</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>Single-Sample Face Recognition (SSFR) is a computer vision challenge. In this scenario, there is only one example from each individual on which to train the system, making it difficult to identify persons in unconstrained environments, mainly when dealing with changes in facial expression, posture, lighting, and occlusion. This paper discusses the relevance of an original method for SSFR, called Multi-Block Color-Binarized Statistical Image Features (MB-C-BSIF), which exploits several kinds of features, namely, local, regional, global, and textured-color characteristics. First, the MB-C-BSIF method decomposes a facial image into three channels (e.g., red, green, and blue), then it divides each channel into equal non-overlapping blocks to select the local facial characteristics that are consequently employed in the classification phase. Finally, the identity is determined by calculating the similarities among the characteristic vectors adopting a distance measurement of the K-nearest neighbors (K-NN) classifier. Extensive experiments on several subsets of the unconstrained Alex and Robert (AR) and Labeled Faces in the Wild (LFW) databases show that the MB-C-BSIF achieves superior and competitive results in unconstrained situations when compared to current state-of-the-art methods, especially when dealing with changes in facial expression, lighting, and occlusion. The average classification accuracies are 96.17% and 99% for the AR database with two specific protocols (i.e., Protocols I and II, respectively), and 38.01% for the challenging LFW database. These performances are clearly superior to those obtained by state-of-the-art methods. Furthermore, the proposed method uses algorithms based only on simple and elementary image processing operations that do not imply higher computational costs as in holistic, sparse or deep learning methods, making it ideal for real-time identification.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Alay, N" uniqKey="Alay N">N. Alay</name>
</author>
<author>
<name sortKey="Al Baity, H H" uniqKey="Al Baity H">H.H. Al-Baity</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Pagnin, E" uniqKey="Pagnin E">E. Pagnin</name>
</author>
<author>
<name sortKey="Mitrokotsa, A" uniqKey="Mitrokotsa A">A. Mitrokotsa</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mahfouz, A" uniqKey="Mahfouz A">A. Mahfouz</name>
</author>
<author>
<name sortKey="Mahmoud, T M" uniqKey="Mahmoud T">T.M. Mahmoud</name>
</author>
<author>
<name sortKey="Sharaf Eldin, A" uniqKey="Sharaf Eldin A">A. Sharaf Eldin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ferrara, M" uniqKey="Ferrara M">M. Ferrara</name>
</author>
<author>
<name sortKey="Cappelli, R" uniqKey="Cappelli R">R. Cappelli</name>
</author>
<author>
<name sortKey="Maltoni, D" uniqKey="Maltoni D">D. Maltoni</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Thompson, J" uniqKey="Thompson J">J. Thompson</name>
</author>
<author>
<name sortKey="Flynn, P" uniqKey="Flynn P">P. Flynn</name>
</author>
<author>
<name sortKey="Boehnen, C" uniqKey="Boehnen C">C. Boehnen</name>
</author>
<author>
<name sortKey="Santos Villalobos, H" uniqKey="Santos Villalobos H">H. Santos-Villalobos</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Benzaoui, A" uniqKey="Benzaoui A">A. Benzaoui</name>
</author>
<author>
<name sortKey="Bourouba, H" uniqKey="Bourouba H">H. Bourouba</name>
</author>
<author>
<name sortKey="Boukrouche, A" uniqKey="Boukrouche A">A. Boukrouche</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Phillips, P J" uniqKey="Phillips P">P.J. Phillips</name>
</author>
<author>
<name sortKey="Flynn, P J" uniqKey="Flynn P">P.J. Flynn</name>
</author>
<author>
<name sortKey="Scruggs, T" uniqKey="Scruggs T">T. Scruggs</name>
</author>
<author>
<name sortKey="Bowyer, K W" uniqKey="Bowyer K">K.W. Bowyer</name>
</author>
<author>
<name sortKey="Chang, J" uniqKey="Chang J">J. Chang</name>
</author>
<author>
<name sortKey="Hoffman, K" uniqKey="Hoffman K">K. Hoffman</name>
</author>
<author>
<name sortKey="Marques, J" uniqKey="Marques J">J. Marques</name>
</author>
<author>
<name sortKey="Min, J" uniqKey="Min J">J. Min</name>
</author>
<author>
<name sortKey="Worek, W" uniqKey="Worek W">W. Worek</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Femmam, S" uniqKey="Femmam S">S. Femmam</name>
</author>
<author>
<name sortKey="M Irdi, N K" uniqKey="M Irdi N">N.K. M’Sirdi</name>
</author>
<author>
<name sortKey="Ouahabi, A" uniqKey="Ouahabi A">A. Ouahabi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ring, T" uniqKey="Ring T">T. Ring</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Phillips, P J" uniqKey="Phillips P">P.J. Phillips</name>
</author>
<author>
<name sortKey="Yates, A N" uniqKey="Yates A">A.N. Yates</name>
</author>
<author>
<name sortKey="Hu, Y" uniqKey="Hu Y">Y. Hu</name>
</author>
<author>
<name sortKey="Hahn, A C" uniqKey="Hahn A">A.C. Hahn</name>
</author>
<author>
<name sortKey="Noyes, E" uniqKey="Noyes E">E. Noyes</name>
</author>
<author>
<name sortKey="Jackson, K" uniqKey="Jackson K">K. Jackson</name>
</author>
<author>
<name sortKey="Cavazos, J G" uniqKey="Cavazos J">J.G. Cavazos</name>
</author>
<author>
<name sortKey="Jeckeln, G" uniqKey="Jeckeln G">G. Jeckeln</name>
</author>
<author>
<name sortKey="Ranjan, R" uniqKey="Ranjan R">R. Ranjan</name>
</author>
<author>
<name sortKey="Sankaranarayanan, S" uniqKey="Sankaranarayanan S">S. Sankaranarayanan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kortli, Y" uniqKey="Kortli Y">Y. Kortli</name>
</author>
<author>
<name sortKey="Jridi, M" uniqKey="Jridi M">M. Jridi</name>
</author>
<author>
<name sortKey="Al Falou, A" uniqKey="Al Falou A">A. Al Falou</name>
</author>
<author>
<name sortKey="Atri, M" uniqKey="Atri M">M. Atri</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ouahabi, A" uniqKey="Ouahabi A">A. Ouahabi</name>
</author>
<author>
<name sortKey="Taleb Ahmed, A" uniqKey="Taleb Ahmed A">A. Taleb-Ahmed</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rahman, J U" uniqKey="Rahman J">J.U. Rahman</name>
</author>
<author>
<name sortKey="Chen, Q" uniqKey="Chen Q">Q. Chen</name>
</author>
<author>
<name sortKey="Yang, Z" uniqKey="Yang Z">Z. Yang</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fan, Z" uniqKey="Fan Z">Z. Fan</name>
</author>
<author>
<name sortKey="Jamil, M" uniqKey="Jamil M">M. Jamil</name>
</author>
<author>
<name sortKey="Sadiq, M T" uniqKey="Sadiq M">M.T. Sadiq</name>
</author>
<author>
<name sortKey="Huang, X" uniqKey="Huang X">X. Huang</name>
</author>
<author>
<name sortKey="Yu, X" uniqKey="Yu X">X. Yu</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Benzaoui, A" uniqKey="Benzaoui A">A. Benzaoui</name>
</author>
<author>
<name sortKey="Boukrouche, A" uniqKey="Boukrouche A">A. Boukrouche</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Vapnik, V N" uniqKey="Vapnik V">V.N. Vapnik</name>
</author>
<author>
<name sortKey="Chervonenkis, A" uniqKey="Chervonenkis A">A. Chervonenkis</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Vezzetti, E" uniqKey="Vezzetti E">E. Vezzetti</name>
</author>
<author>
<name sortKey="Marcolin, F" uniqKey="Marcolin F">F. Marcolin</name>
</author>
<author>
<name sortKey="Tornincasa, S" uniqKey="Tornincasa S">S. Tornincasa</name>
</author>
<author>
<name sortKey="Ulrich, L" uniqKey="Ulrich L">L. Ulrich</name>
</author>
<author>
<name sortKey="Dagnes, N" uniqKey="Dagnes N">N. Dagnes</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Echeagaray Patron, B A" uniqKey="Echeagaray Patron B">B.A. Echeagaray-Patron</name>
</author>
<author>
<name sortKey="Miramontes Jaramillo, D" uniqKey="Miramontes Jaramillo D">D. Miramontes-Jaramillo</name>
</author>
<author>
<name sortKey="Kober, V" uniqKey="Kober V">V. Kober</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kannala, J" uniqKey="Kannala J">J. Kannala</name>
</author>
<author>
<name sortKey="Rahtu, E" uniqKey="Rahtu E">E. Rahtu</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Djeddi, M" uniqKey="Djeddi M">M. Djeddi</name>
</author>
<author>
<name sortKey="Ouahabi, A" uniqKey="Ouahabi A">A. Ouahabi</name>
</author>
<author>
<name sortKey="Batatia, H" uniqKey="Batatia H">H. Batatia</name>
</author>
<author>
<name sortKey="Basarab, A" uniqKey="Basarab A">A. Basarab</name>
</author>
<author>
<name sortKey="Kouame, D" uniqKey="Kouame D">D. Kouamé</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ouahabi, A" uniqKey="Ouahabi A">A. Ouahabi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ouahabi, A" uniqKey="Ouahabi A">A. Ouahabi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ouahabi, A" uniqKey="Ouahabi A">A. Ouahabi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sidahmed, S" uniqKey="Sidahmed S">S. Sidahmed</name>
</author>
<author>
<name sortKey="Messali, Z" uniqKey="Messali Z">Z. Messali</name>
</author>
<author>
<name sortKey="Ouahabi, A" uniqKey="Ouahabi A">A. Ouahabi</name>
</author>
<author>
<name sortKey="Trepout, S" uniqKey="Trepout S">S. Trépout</name>
</author>
<author>
<name sortKey="Messaoudi, C" uniqKey="Messaoudi C">C. Messaoudi</name>
</author>
<author>
<name sortKey="Marco, S" uniqKey="Marco S">S. Marco</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kumar, N" uniqKey="Kumar N">N. Kumar</name>
</author>
<author>
<name sortKey="Garg, V" uniqKey="Garg V">V. Garg</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Vetter, T" uniqKey="Vetter T">T. Vetter</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zhang, D" uniqKey="Zhang D">D. Zhang</name>
</author>
<author>
<name sortKey="Chen, S" uniqKey="Chen S">S. Chen</name>
</author>
<author>
<name sortKey="Zhou, Z H" uniqKey="Zhou Z">Z.H. Zhou</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gao, Q X" uniqKey="Gao Q">Q.X. Gao</name>
</author>
<author>
<name sortKey="Zhang, L" uniqKey="Zhang L">L. Zhang</name>
</author>
<author>
<name sortKey="Zhang, D" uniqKey="Zhang D">D. Zhang</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hu, C" uniqKey="Hu C">C. Hu</name>
</author>
<author>
<name sortKey="Ye, M" uniqKey="Ye M">M. Ye</name>
</author>
<author>
<name sortKey="Ji, S" uniqKey="Ji S">S. Ji</name>
</author>
<author>
<name sortKey="Zeng, W" uniqKey="Zeng W">W. Zeng</name>
</author>
<author>
<name sortKey="Lu, X" uniqKey="Lu X">X. Lu</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Dong, X" uniqKey="Dong X">X. Dong</name>
</author>
<author>
<name sortKey="Wu, F" uniqKey="Wu F">F. Wu</name>
</author>
<author>
<name sortKey="Jing, X Y" uniqKey="Jing X">X.Y. Jing</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Deng, W" uniqKey="Deng W">W. Deng</name>
</author>
<author>
<name sortKey="Hu, J" uniqKey="Hu J">J. Hu</name>
</author>
<author>
<name sortKey="Guo, J" uniqKey="Guo J">J. Guo</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Yang, M" uniqKey="Yang M">M. Yang</name>
</author>
<author>
<name sortKey="Van, L V" uniqKey="Van L">L.V. Van</name>
</author>
<author>
<name sortKey="Zhang, L" uniqKey="Zhang L">L. Zhang</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zhu, P" uniqKey="Zhu P">P. Zhu</name>
</author>
<author>
<name sortKey="Yang, M" uniqKey="Yang M">M. Yang</name>
</author>
<author>
<name sortKey="Zhang, L" uniqKey="Zhang L">L. Zhang</name>
</author>
<author>
<name sortKey="Lee, L" uniqKey="Lee L">L. Lee</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zhu, P" uniqKey="Zhu P">P. Zhu</name>
</author>
<author>
<name sortKey="Zhang, L" uniqKey="Zhang L">L. Zhang</name>
</author>
<author>
<name sortKey="Hu, Q" uniqKey="Hu Q">Q. Hu</name>
</author>
<author>
<name sortKey="Shiu, S C K" uniqKey="Shiu S">S.C.K. Shiu</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zhang, L" uniqKey="Zhang L">L. Zhang</name>
</author>
<author>
<name sortKey="Yang, M" uniqKey="Yang M">M. Yang</name>
</author>
<author>
<name sortKey="Feng, X" uniqKey="Feng X">X. Feng</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lu, J" uniqKey="Lu J">J. Lu</name>
</author>
<author>
<name sortKey="Tan, Y P" uniqKey="Tan Y">Y.P. Tan</name>
</author>
<author>
<name sortKey="Wang, G" uniqKey="Wang G">G. Wang</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zhang, W" uniqKey="Zhang W">W. Zhang</name>
</author>
<author>
<name sortKey="Xu, Z" uniqKey="Xu Z">Z. Xu</name>
</author>
<author>
<name sortKey="Wang, Y" uniqKey="Wang Y">Y. Wang</name>
</author>
<author>
<name sortKey="Lu, Z" uniqKey="Lu Z">Z. Lu</name>
</author>
<author>
<name sortKey="Li, W" uniqKey="Li W">W. Li</name>
</author>
<author>
<name sortKey="Liao, Q" uniqKey="Liao Q">Q. Liao</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gu, J" uniqKey="Gu J">J. Gu</name>
</author>
<author>
<name sortKey="Hu, H" uniqKey="Hu H">H. Hu</name>
</author>
<author>
<name sortKey="Li, H" uniqKey="Li H">H. Li</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zhang, Z" uniqKey="Zhang Z">Z. Zhang</name>
</author>
<author>
<name sortKey="Zhang, L" uniqKey="Zhang L">L. Zhang</name>
</author>
<author>
<name sortKey="Zhang, M" uniqKey="Zhang M">M. Zhang</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mimouna, A" uniqKey="Mimouna A">A. Mimouna</name>
</author>
<author>
<name sortKey="Alouani, I" uniqKey="Alouani I">I. Alouani</name>
</author>
<author>
<name sortKey="Ben Khalifa, A" uniqKey="Ben Khalifa A">A. Ben Khalifa</name>
</author>
<author>
<name sortKey="El Hillali, Y" uniqKey="El Hillali Y">Y. El Hillali</name>
</author>
<author>
<name sortKey="Taleb Ahmed, A" uniqKey="Taleb Ahmed A">A. Taleb-Ahmed</name>
</author>
<author>
<name sortKey="Menhaj, A" uniqKey="Menhaj A">A. Menhaj</name>
</author>
<author>
<name sortKey="Ouahabi, A" uniqKey="Ouahabi A">A. Ouahabi</name>
</author>
<author>
<name sortKey="Ben Amara, N E" uniqKey="Ben Amara N">N.E. Ben Amara</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zeng, J" uniqKey="Zeng J">J. Zeng</name>
</author>
<author>
<name sortKey="Zhao, X" uniqKey="Zhao X">X. Zhao</name>
</author>
<author>
<name sortKey="Qin, C" uniqKey="Qin C">C. Qin</name>
</author>
<author>
<name sortKey="Lin, Z" uniqKey="Lin Z">Z. Lin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ding, C" uniqKey="Ding C">C. Ding</name>
</author>
<author>
<name sortKey="Bao, T" uniqKey="Bao T">T. Bao</name>
</author>
<author>
<name sortKey="Karmoshi, S" uniqKey="Karmoshi S">S. Karmoshi</name>
</author>
<author>
<name sortKey="Zhu, M" uniqKey="Zhu M">M. Zhu</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zhang, Y" uniqKey="Zhang Y">Y. Zhang</name>
</author>
<author>
<name sortKey="Peng, H" uniqKey="Peng H">H. Peng</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Du, Q" uniqKey="Du Q">Q. Du</name>
</author>
<author>
<name sortKey="Da, F" uniqKey="Da F">F. Da</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stone, J V" uniqKey="Stone J">J.V. Stone</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ataman, E" uniqKey="Ataman E">E. Ataman</name>
</author>
<author>
<name sortKey="Aatre, V" uniqKey="Aatre V">V. Aatre</name>
</author>
<author>
<name sortKey="Wong, K" uniqKey="Wong K">K. Wong</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Benzaoui, A" uniqKey="Benzaoui A">A. Benzaoui</name>
</author>
<author>
<name sortKey="Hadid, A" uniqKey="Hadid A">A. Hadid</name>
</author>
<author>
<name sortKey="Boukrouche, A" uniqKey="Boukrouche A">A. Boukrouche</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zehani, S" uniqKey="Zehani S">S. Zehani</name>
</author>
<author>
<name sortKey="Ouahabi, A" uniqKey="Ouahabi A">A. Ouahabi</name>
</author>
<author>
<name sortKey="Oussalah, M" uniqKey="Oussalah M">M. Oussalah</name>
</author>
<author>
<name sortKey="Mimi, M" uniqKey="Mimi M">M. Mimi</name>
</author>
<author>
<name sortKey="Taleb Ahmed, A" uniqKey="Taleb Ahmed A">A. Taleb-Ahmed</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ojala, T" uniqKey="Ojala T">T. Ojala</name>
</author>
<author>
<name sortKey="Pietikainen, M" uniqKey="Pietikainen M">M. Pietikainen</name>
</author>
<author>
<name sortKey="Maenpaa, T" uniqKey="Maenpaa T">T. Maenpaa</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ojansivu, V" uniqKey="Ojansivu V">V. Ojansivu</name>
</author>
<author>
<name sortKey="Heikkil, J" uniqKey="Heikkil J">J. Heikkil</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Martinez, A M" uniqKey="Martinez A">A.M. Martinez</name>
</author>
<author>
<name sortKey="Benavente, R" uniqKey="Benavente R">R. Benavente</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Huang, G B" uniqKey="Huang G">G.B. Huang</name>
</author>
<author>
<name sortKey="Mattar, M" uniqKey="Mattar M">M. Mattar</name>
</author>
<author>
<name sortKey="Berg, T" uniqKey="Berg T">T. Berg</name>
</author>
<author>
<name sortKey="Learned Miller, E" uniqKey="Learned Miller E">E. Learned-Miller</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mehrasa, N" uniqKey="Mehrasa N">N. Mehrasa</name>
</author>
<author>
<name sortKey="Ali, A" uniqKey="Ali A">A. Ali</name>
</author>
<author>
<name sortKey="Homayun, M" uniqKey="Homayun M">M. Homayun</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ji, H K" uniqKey="Ji H">H.K. Ji</name>
</author>
<author>
<name sortKey="Sun, Q S" uniqKey="Sun Q">Q.S. Sun</name>
</author>
<author>
<name sortKey="Ji, Z X" uniqKey="Ji Z">Z.X. Ji</name>
</author>
<author>
<name sortKey="Yuan, Y H" uniqKey="Yuan Y">Y.H. Yuan</name>
</author>
<author>
<name sortKey="Zhang, G Q" uniqKey="Zhang G">G.Q. Zhang</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Turk, M" uniqKey="Turk M">M. Turk</name>
</author>
<author>
<name sortKey="Pentland, A" uniqKey="Pentland A">A. Pentland</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wu, J" uniqKey="Wu J">J. Wu</name>
</author>
<author>
<name sortKey="Zhou, Z H" uniqKey="Zhou Z">Z.H. Zhou</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Chen, S" uniqKey="Chen S">S. Chen</name>
</author>
<author>
<name sortKey="Zhang, D" uniqKey="Zhang D">D. Zhang</name>
</author>
<author>
<name sortKey="Zhou, Z H" uniqKey="Zhou Z">Z.H. Zhou</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Yang, J" uniqKey="Yang J">J. Yang</name>
</author>
<author>
<name sortKey="Zhang, D" uniqKey="Zhang D">D. Zhang</name>
</author>
<author>
<name sortKey="Frangi, A F" uniqKey="Frangi A">A.F. Frangi</name>
</author>
<author>
<name sortKey="Yang, J Y" uniqKey="Yang J">J.Y. Yang</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gottumukkal, R" uniqKey="Gottumukkal R">R. Gottumukkal</name>
</author>
<author>
<name sortKey="Asari, V K" uniqKey="Asari V">V.K. Asari</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Chen, S" uniqKey="Chen S">S. Chen</name>
</author>
<author>
<name sortKey="Liu, J" uniqKey="Liu J">J. Liu</name>
</author>
<author>
<name sortKey="Zhou, Z H" uniqKey="Zhou Z">Z.H. Zhou</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zhang, D" uniqKey="Zhang D">D. Zhang</name>
</author>
<author>
<name sortKey="Zhou, Z H" uniqKey="Zhou Z">Z.H. Zhou</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tan, X" uniqKey="Tan X">X. Tan</name>
</author>
<author>
<name sortKey="Chen, S" uniqKey="Chen S">S. Chen</name>
</author>
<author>
<name sortKey="Zhou, Z H" uniqKey="Zhou Z">Z.H. Zhou</name>
</author>
<author>
<name sortKey="Zhang, F" uniqKey="Zhang F">F. Zhang</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="He, X" uniqKey="He X">X. He</name>
</author>
<author>
<name sortKey="Yan, S" uniqKey="Yan S">S. Yan</name>
</author>
<author>
<name sortKey="Hu, Y" uniqKey="Hu Y">Y. Hu</name>
</author>
<author>
<name sortKey="Niyogi, P" uniqKey="Niyogi P">P. Niyogi</name>
</author>
<author>
<name sortKey="Zhang, H J" uniqKey="Zhang H">H.J. Zhang</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Deng, W" uniqKey="Deng W">W. Deng</name>
</author>
<author>
<name sortKey="Hu, J" uniqKey="Hu J">J. Hu</name>
</author>
<author>
<name sortKey="Guo, J" uniqKey="Guo J">J. Guo</name>
</author>
<author>
<name sortKey="Cai, W" uniqKey="Cai W">W. Cai</name>
</author>
<author>
<name sortKey="Fenf, D" uniqKey="Fenf D">D. Fenf</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Chu, Y" uniqKey="Chu Y">Y. Chu</name>
</author>
<author>
<name sortKey="Zhao, L" uniqKey="Zhao L">L. Zhao</name>
</author>
<author>
<name sortKey="Ahmad, T" uniqKey="Ahmad T">T. Ahmad</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Pang, M" uniqKey="Pang M">M. Pang</name>
</author>
<author>
<name sortKey="Cheung, Y" uniqKey="Cheung Y">Y. Cheung</name>
</author>
<author>
<name sortKey="Wang, B" uniqKey="Wang B">B. Wang</name>
</author>
<author>
<name sortKey="Liu, R" uniqKey="Liu R">R. Liu</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Cuculo, V" uniqKey="Cuculo V">V. Cuculo</name>
</author>
<author>
<name sortKey="D Melio, A" uniqKey="D Melio A">A. D’Amelio</name>
</author>
<author>
<name sortKey="Grossi, G" uniqKey="Grossi G">G. Grossi</name>
</author>
<author>
<name sortKey="Lanzarotti, R" uniqKey="Lanzarotti R">R. Lanzarotti</name>
</author>
<author>
<name sortKey="Lin, J" uniqKey="Lin J">J. Lin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wright, J" uniqKey="Wright J">J. Wright</name>
</author>
<author>
<name sortKey="Yang, A Y" uniqKey="Yang A">A.Y. Yang</name>
</author>
<author>
<name sortKey="Ganesh, A" uniqKey="Ganesh A">A. Ganesh</name>
</author>
<author>
<name sortKey="Sastry, S S" uniqKey="Sastry S">S.S. Sastry</name>
</author>
<author>
<name sortKey="Ma, Y" uniqKey="Ma Y">Y. Ma</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Su, Y" uniqKey="Su Y">Y. Su</name>
</author>
<author>
<name sortKey="Shan, S" uniqKey="Shan S">S. Shan</name>
</author>
<author>
<name sortKey="Chen, X" uniqKey="Chen X">X. Chen</name>
</author>
<author>
<name sortKey="Gao, W" uniqKey="Gao W">W. Gao</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zhou, D" uniqKey="Zhou D">D. Zhou</name>
</author>
<author>
<name sortKey="Yang, D" uniqKey="Yang D">D. Yang</name>
</author>
<author>
<name sortKey="Zhang, X" uniqKey="Zhang X">X. Zhang</name>
</author>
<author>
<name sortKey="Huang, S" uniqKey="Huang S">S. Huang</name>
</author>
<author>
<name sortKey="Feng, S" uniqKey="Feng S">S. Feng</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zeng, J" uniqKey="Zeng J">J. Zeng</name>
</author>
<author>
<name sortKey="Zhao, X" uniqKey="Zhao X">X. Zhao</name>
</author>
<author>
<name sortKey="Gan, J" uniqKey="Gan J">J. Gan</name>
</author>
<author>
<name sortKey="Mai, C" uniqKey="Mai C">C. Mai</name>
</author>
<author>
<name sortKey="Zhai, Y" uniqKey="Zhai Y">Y. Zhai</name>
</author>
<author>
<name sortKey="Wang, F" uniqKey="Wang F">F. Wang</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Adjabi, I" uniqKey="Adjabi I">I. Adjabi</name>
</author>
<author>
<name sortKey="Ouahabi, A" uniqKey="Ouahabi A">A. Ouahabi</name>
</author>
<author>
<name sortKey="Benzaoui, A" uniqKey="Benzaoui A">A. Benzaoui</name>
</author>
<author>
<name sortKey="Taleb Ahmed, A" uniqKey="Taleb Ahmed A">A. Taleb-Ahmed</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sadiq, M T" uniqKey="Sadiq M">M.T. Sadiq</name>
</author>
<author>
<name sortKey="Yu, X" uniqKey="Yu X">X. Yu</name>
</author>
<author>
<name sortKey="Yuan, Z" uniqKey="Yuan Z">Z. Yuan</name>
</author>
<author>
<name sortKey="Aziz, M Z" uniqKey="Aziz M">M.Z. Aziz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sadiq, M T" uniqKey="Sadiq M">M.T. Sadiq</name>
</author>
<author>
<name sortKey="Yu, X" uniqKey="Yu X">X. Yu</name>
</author>
<author>
<name sortKey="Yuan, Z" uniqKey="Yuan Z">Z. Yuan</name>
</author>
<author>
<name sortKey="Fan, Z" uniqKey="Fan Z">Z. Fan</name>
</author>
<author>
<name sortKey="Rehman, A U" uniqKey="Rehman A">A.U. Rehman</name>
</author>
<author>
<name sortKey="Li, G" uniqKey="Li G">G. Li</name>
</author>
<author>
<name sortKey="Xiao, G" uniqKey="Xiao G">G. Xiao</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sadiq, M T" uniqKey="Sadiq M">M.T. Sadiq</name>
</author>
<author>
<name sortKey="Yu, X" uniqKey="Yu X">X. Yu</name>
</author>
<author>
<name sortKey="Yuan, Z" uniqKey="Yuan Z">Z. Yuan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Khaldi, Y" uniqKey="Khaldi Y">Y. Khaldi</name>
</author>
<author>
<name sortKey="Benzaoui, A" uniqKey="Benzaoui A">A. Benzaoui</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nguyen, B P" uniqKey="Nguyen B">B.P. Nguyen</name>
</author>
<author>
<name sortKey="Tay, W L" uniqKey="Tay W">W.L. Tay</name>
</author>
<author>
<name sortKey="Chui, C K" uniqKey="Chui C">C.K. Chui</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">Sensors (Basel)</journal-id>
<journal-id journal-id-type="iso-abbrev">Sensors (Basel)</journal-id>
<journal-id journal-id-type="publisher-id">sensors</journal-id>
<journal-title-group>
<journal-title>Sensors (Basel, Switzerland)</journal-title>
</journal-title-group>
<issn pub-type="epub">1424-8220</issn>
<publisher>
<publisher-name>MDPI</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">33494516</article-id>
<article-id pub-id-type="pmc">7865363</article-id>
<article-id pub-id-type="doi">10.3390/s21030728</article-id>
<article-id pub-id-type="publisher-id">sensors-21-00728</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Multi-Block Color-Binarized Statistical Images for Single-Sample Face Recognition</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Adjabi</surname>
<given-names>Insaf</given-names>
</name>
<xref ref-type="aff" rid="af1-sensors-21-00728">1</xref>
</contrib>
<contrib contrib-type="author">
<contrib-id contrib-id-type="orcid" authenticated="true">https://orcid.org/0000-0002-6392-7693</contrib-id>
<name>
<surname>Ouahabi</surname>
<given-names>Abdeldjalil</given-names>
</name>
<xref ref-type="aff" rid="af1-sensors-21-00728">1</xref>
<xref ref-type="aff" rid="af2-sensors-21-00728">2</xref>
<xref rid="c1-sensors-21-00728" ref-type="corresp">*</xref>
</contrib>
<contrib contrib-type="author">
<contrib-id contrib-id-type="orcid" authenticated="true">https://orcid.org/0000-0003-0437-1143</contrib-id>
<name>
<surname>Benzaoui</surname>
<given-names>Amir</given-names>
</name>
<xref ref-type="aff" rid="af3-sensors-21-00728">3</xref>
</contrib>
<contrib contrib-type="author">
<contrib-id contrib-id-type="orcid" authenticated="true">https://orcid.org/0000-0002-6798-6914</contrib-id>
<name>
<surname>Jacques</surname>
<given-names>Sébastien</given-names>
</name>
<xref ref-type="aff" rid="af4-sensors-21-00728">4</xref>
</contrib>
</contrib-group>
<contrib-group>
<contrib contrib-type="editor">
<name>
<surname>Park</surname>
<given-names>Kang Ryoung</given-names>
</name>
<role>Academic Editor</role>
</contrib>
</contrib-group>
<aff id="af1-sensors-21-00728">
<label>1</label>
Department of Computer Science, LIMPAF, University of Bouira, Bouira 10000, Algeria;
<email>i.adjabi@univ-bouira.dz</email>
</aff>
<aff id="af2-sensors-21-00728">
<label>2</label>
Polytech Tours, Imaging and Brain, INSERM U930, University of Tours, 37200 Tours, France</aff>
<aff id="af3-sensors-21-00728">
<label>3</label>
Department of Electrical Engineering, University of Bouira, Bouira 10000, Algeria;
<email>a.benzaoui@univ-bouira.dz</email>
</aff>
<aff id="af4-sensors-21-00728">
<label>4</label>
GREMAN UMR 7347, University of Tours, CNRS, INSA Centre Val-de-Loire, 37200 Tours, France;
<email>sebastien.jacques@univ-tours.fr</email>
</aff>
<author-notes>
<corresp id="c1-sensors-21-00728">
<label>*</label>
Correspondence:
<email>abdeldjalil.ouahabi@univ-tours.fr</email>
; Tel.: +33-2-4736-1323</corresp>
</author-notes>
<pub-date pub-type="epub">
<day>21</day>
<month>1</month>
<year>2021</year>
</pub-date>
<pub-date pub-type="collection">
<month>2</month>
<year>2021</year>
</pub-date>
<volume>21</volume>
<issue>3</issue>
<elocation-id>728</elocation-id>
<history>
<date date-type="received">
<day>08</day>
<month>12</month>
<year>2020</year>
</date>
<date date-type="accepted">
<day>19</day>
<month>1</month>
<year>2021</year>
</date>
</history>
<permissions>
<copyright-statement>© 2021 by the authors.</copyright-statement>
<copyright-year>2021</copyright-year>
<license license-type="open-access">
<license-p>Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (
<ext-link ext-link-type="uri" xlink:href="http://creativecommons.org/licenses/by/4.0/">http://creativecommons.org/licenses/by/4.0/</ext-link>
).</license-p>
</license>
</permissions>
<abstract>
<p>Single-Sample Face Recognition (SSFR) is a computer vision challenge. In this scenario, there is only one example from each individual on which to train the system, making it difficult to identify persons in unconstrained environments, mainly when dealing with changes in facial expression, posture, lighting, and occlusion. This paper discusses the relevance of an original method for SSFR, called Multi-Block Color-Binarized Statistical Image Features (MB-C-BSIF), which exploits several kinds of features, namely, local, regional, global, and textured-color characteristics. First, the MB-C-BSIF method decomposes a facial image into three channels (e.g., red, green, and blue), then it divides each channel into equal non-overlapping blocks to select the local facial characteristics that are consequently employed in the classification phase. Finally, the identity is determined by calculating the similarities among the characteristic vectors adopting a distance measurement of the K-nearest neighbors (K-NN) classifier. Extensive experiments on several subsets of the unconstrained Alex and Robert (AR) and Labeled Faces in the Wild (LFW) databases show that the MB-C-BSIF achieves superior and competitive results in unconstrained situations when compared to current state-of-the-art methods, especially when dealing with changes in facial expression, lighting, and occlusion. The average classification accuracies are 96.17% and 99% for the AR database with two specific protocols (i.e., Protocols I and II, respectively), and 38.01% for the challenging LFW database. These performances are clearly superior to those obtained by state-of-the-art methods. Furthermore, the proposed method uses algorithms based only on simple and elementary image processing operations that do not imply higher computational costs as in holistic, sparse or deep learning methods, making it ideal for real-time identification.</p>
</abstract>
<kwd-group>
<kwd>biometrics</kwd>
<kwd>face recognition</kwd>
<kwd>single-sample face recognition</kwd>
<kwd>binarized statistical image features</kwd>
<kwd>K-nearest neighbors</kwd>
</kwd-group>
</article-meta>
</front>
<body>
<sec sec-type="intro" id="sec1-sensors-21-00728">
<title>1. Introduction</title>
<p>Generally speaking, biometrics aims to identify or verify an individual’s identity according to some physical or behavioral characteristics [
<xref rid="B1-sensors-21-00728" ref-type="bibr">1</xref>
]. Biometric practices replace conventional knowledge-based solutions, such as passwords or PINs, and possession-based strategies, such as ID cards or badges [
<xref rid="B2-sensors-21-00728" ref-type="bibr">2</xref>
]. Several biometric methods have been developed to varying degrees and are being implemented and used in numerous commercial applications [
<xref rid="B3-sensors-21-00728" ref-type="bibr">3</xref>
].</p>
<p>Fingerprints are the biometric features most commonly used to identify criminals [
<xref rid="B4-sensors-21-00728" ref-type="bibr">4</xref>
]. The first automated fingerprint authentication device was commercialized in the early 1960s. Multiple studies have shown that the iris of the eye is the most accurate modality since its texture remains stable throughout a person’s life [
<xref rid="B5-sensors-21-00728" ref-type="bibr">5</xref>
]. However, those techniques have the significant drawback of being invasive, which significantly restricts their applications. Besides, iris recognition remains problematic for users who do not wish to put their eyes in front of a sensor. On the contrary, biometric recognition based on facial analysis does not pose any such user constraints. In contrast to other biometric modalities, face recognition is a modality that can be employed without any user–sensor co-operation and can be applied discreetly in surveillance applications. Face recognition has many advantages: the sensor device (i.e., the camera) is simple to mount; it is not costly; it does not require subject co-operation; there are no hygiene issues; and, being passive, people much prefer this modality [
<xref rid="B6-sensors-21-00728" ref-type="bibr">6</xref>
].</p>
<p>Two-dimensional face recognition with Single-Sample Face Recognition (SSFR) (i.e., using a Single-Sample Per Person (SSPP) in the training set) has already matured as a technology. Although the latest studies on the Face Recognition Grand Challenge (FRGC) [
<xref rid="B7-sensors-21-00728" ref-type="bibr">7</xref>
] project have shown that computer vision systems [
<xref rid="B8-sensors-21-00728" ref-type="bibr">8</xref>
] offer better performance than human visual systems in controlled conditions [
<xref rid="B9-sensors-21-00728" ref-type="bibr">9</xref>
], research into face recognition needs to be geared towards more realistic, uncontrolled conditions. In an uncontrolled scenario, human visual systems are more robust when dealing with the numerous factors that can impact the recognition process [
<xref rid="B10-sensors-21-00728" ref-type="bibr">10</xref>
], such as variations in lighting, facial orientation, facial expression, and facial appearance due to the presence of sunglasses, a scarf, a beard, or makeup. Solving these challenges will make 2D face recognition techniques a much more important technology for identification or identity verification.</p>
<p>Several methods and algorithms have been suggested in the face recognition literature. They can be subdivided into four fundamental approaches depending on the method used for feature extraction and classification: holistic, local, hybrid, and deep learning approaches [
<xref rid="B11-sensors-21-00728" ref-type="bibr">11</xref>
]. The deep learning class [
<xref rid="B12-sensors-21-00728" ref-type="bibr">12</xref>
], which applies consecutive layers of information processing arranged hierarchically for representation, learning, and classification, has dramatically increased state-of-the-art performance, especially with unconstrained large-scale databases, and encouraged real-world applications [
<xref rid="B13-sensors-21-00728" ref-type="bibr">13</xref>
,
<xref rid="B14-sensors-21-00728" ref-type="bibr">14</xref>
].</p>
<p>Most current methods in the literature use several facial images (samples) per person in the training set. Nevertheless, in real-world systems (e.g., in fugitive tracking, identity cards, immigration management, or passports), only SSFR systems are used (due to the limited storage and privacy policy), which employ a single sample per person in the training stage (generally neutral images acquired in controlled conditions), i.e., just one example of the person to be recognized is recorded in the database and accessible for the recognition task [
<xref rid="B15-sensors-21-00728" ref-type="bibr">15</xref>
]. Since there are insufficient data (i.e., we do not have several samples per person) to perform supervised learning, many well-known algorithms may not work particularly well. For instance, Deep Neural Networks (DNNs) [
<xref rid="B13-sensors-21-00728" ref-type="bibr">13</xref>
] can be used in powerful face recognition techniques. Nonetheless, they necessitate a considerable volume of training data to work well. Vapnik and Chervonenkis [
<xref rid="B16-sensors-21-00728" ref-type="bibr">16</xref>
] showed, in their statistical learning theory, that vast amounts of training data are required to ensure the generalization of learning systems. In addition, the use of three-dimensional (3D) imaging instead of two-dimensional (2D) representation has made it possible to address several issues related to image acquisition conditions, in particular pose, lighting, and make-up variations. While 3D models offer a better representation of the face shape for a clear distinction between persons [
<xref rid="B17-sensors-21-00728" ref-type="bibr">17</xref>
,
<xref rid="B18-sensors-21-00728" ref-type="bibr">18</xref>
], they are often not suitable for real-time applications because they require expensive and sophisticated computations and specific sensors. We infer that SSFR remains an unsolved issue in academia and industry, despite the major efforts and growth in face recognition.</p>
<p>In this paper, we tackle the SSFR issue in unconstrained conditions by proposing an efficient method based on a variant of the local texture operator Binarized Statistical Image Features (BSIF) [
<xref rid="B19-sensors-21-00728" ref-type="bibr">19</xref>
] called Multi-Block Color-Binarized Statistical Image Features (MB-C-BSIF). It employs local color-texture information to obtain a faithful and precise representation. The BSIF descriptor has been widely used in texture analysis [
<xref rid="B20-sensors-21-00728" ref-type="bibr">20</xref>
,
<xref rid="B21-sensors-21-00728" ref-type="bibr">21</xref>
] and has proven its utility in many computer vision tasks. In the first step, the proposed method uses preprocessing to enhance the quality of facial photographs and remove noise [
<xref rid="B22-sensors-21-00728" ref-type="bibr">22</xref>
,
<xref rid="B23-sensors-21-00728" ref-type="bibr">23</xref>
,
<xref rid="B24-sensors-21-00728" ref-type="bibr">24</xref>
]. The color image is then decomposed into three channels (e.g., red, green, and blue for the RGB color-space). Next, to find the optimum configuration, several multi-block decompositions are checked and examined under various color-spaces (i.e., we tested RGB, Hue Saturation Value (HSV), and YCbCr color-spaces, where Y is the luma component and Cb and Cr are the blue-difference and red-difference chroma components, respectively). Finally, classification is undertaken using the distance measurement of the K-nearest neighbors (K-NN) classifier. Compared to several related works, the advantage of our method lies in exploiting several kinds of information: local, regional, global, and color-texture. Besides, the algorithm of our method is simple and does not require greater complexity, which makes it suitable for real-time applications (e.g., surveillance systems or real-time identification). Our system is based on only basic and simple image processing operations (e.g., median filtering, a simple convolution, or histogram calculation), involving a much lower computational cost than existing systems: (1) subspace- or sparse-representation-based methods require many calculations and considerable time for dimensionality reduction, and (2) deep learning methods have a very high computational cost. The very need for GPUs in such systems shows that many calculations must be done in parallel: GPUs are designed with thousands of processor cores running concurrently, providing massive parallelism in which each core focuses on efficient calculations. With a standard CPU, a deep learning system would need a considerable amount of time for training and testing.</p>
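To make the pipeline above concrete, here is a minimal sketch of the whole identification flow, assuming a bsif_histogram feature extractor such as the one outlined in Section 3.2; the function and parameter names are illustrative, not the authors' code, and the grid size and distance metric are assumptions.

import numpy as np

def extract_mb_c_bsif(image_rgb, bsif_histogram, grid=(4, 4)):
    """Illustrative MB-C-BSIF descriptor: per-channel, per-block BSIF histograms."""
    h, w, _ = image_rgb.shape
    bh, bw = h // grid[0], w // grid[1]
    feats = []
    for c in range(3):                        # the three color channels (e.g., R, G, B)
        channel = image_rgb[:, :, c]
        for i in range(grid[0]):              # equal non-overlapping blocks
            for j in range(grid[1]):
                block = channel[i*bh:(i+1)*bh, j*bw:(j+1)*bw]
                feats.append(bsif_histogram(block))
    return np.concatenate(feats)              # one global feature vector per face

def identify(probe_feat, gallery_feats, gallery_ids):
    """Nearest-neighbor identification with a city-block (L1) distance."""
    dists = [np.sum(np.abs(probe_feat - g)) for g in gallery_feats]
    return gallery_ids[int(np.argmin(dists))]

Since SSFR stores a single sample per person, the gallery holds one feature vector per identity and the K-NN classifier effectively reduces to 1-NN.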
<p>The rest of the paper is structured as follows. We discuss relevant research about SSFR in
<xref ref-type="sec" rid="sec2-sensors-21-00728">Section 2</xref>
.
<xref ref-type="sec" rid="sec3-sensors-21-00728">Section 3</xref>
describes our suggested method. In
<xref ref-type="sec" rid="sec4-sensors-21-00728">Section 4</xref>
, the experimental study, key findings, and comparisons are performed and presented to show our method’s superiority.
<xref ref-type="sec" rid="sec5-sensors-21-00728">Section 5</xref>
of the paper presents key findings and discusses research perspectives.</p>
</sec>
<sec id="sec2-sensors-21-00728">
<title>2. Related Work</title>
<p>Current methods designed to resolve the SSFR issue can be categorized into four fundamental classes [
<xref rid="B25-sensors-21-00728" ref-type="bibr">25</xref>
], namely: virtual sample generating, generic learning, image partitioning, and deep learning methods.</p>
<sec id="sec2dot1-sensors-21-00728">
<title>2.1. Virtual Sample Generating Methods</title>
<p>The methods in this category produce some additional virtual training samples for each individual to augment the gallery (i.e., data augmentation), so that discriminative sub-space learning can be employed to extract features. For example, Vetter (1998) [
<xref rid="B26-sensors-21-00728" ref-type="bibr">26</xref>
] proposed a robust SSFR algorithm by generating 3D facial models through the recovery of high-fidelity reflectance and geometry. Zhang et al. (2005) [
<xref rid="B27-sensors-21-00728" ref-type="bibr">27</xref>
] and Gao et al. (2008) [
<xref rid="B28-sensors-21-00728" ref-type="bibr">28</xref>
] developed two techniques to tackle the issue of SSFR based on the singular value decomposition (SVD). Hu et al. (2015) [
<xref rid="B29-sensors-21-00728" ref-type="bibr">29</xref>
] suggested a different SSFR system based on the lower-upper (LU) algorithm. In their approach, each single subject was decomposed and transposed employing the LU procedure and each raw image was rearranged according to its energy. Dong et al. (2018) [
<xref rid="B30-sensors-21-00728" ref-type="bibr">30</xref>
] proposed an effective method for the completion of SSFR tasks called K-Nearest Neighbors virtual image set-based Multi-manifold Discriminant Learning (KNNMMDL). They also suggested an algorithm named K-Nearest Neighbor-based Virtual Sample Generating (KNNVSG) to augment the intra-class variation information in the training samples, and proposed the Image Set-based Multi-manifold Discriminant Learning algorithm (ISMMDL) to exploit that information. While these methods can somewhat alleviate the SSFR problem, their main disadvantage lies in the strong correlation between the virtual images, which cannot be regarded as independent examples for the selection of features.</p>
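As a rough, hedged illustration of the SVD-based idea in this category (the exact algorithms of Zhang et al. and Gao et al. differ), a virtual sample can be synthesized by rebuilding the face from its most energetic singular components:

import numpy as np

def svd_virtual_sample(face, keep=0.9):
    """Virtual training sample: reconstruct from the leading singular values only."""
    U, s, Vt = np.linalg.svd(face.astype(float), full_matrices=False)
    k = max(1, int(keep * len(s)))   # fraction of singular values to retain (assumed)
    s[k:] = 0.0                      # discard the weakest components
    return U @ np.diag(s) @ Vt       # a smoothed variant, strongly correlated with the original

The final comment in the sketch echoes the drawback noted above: such virtual images remain strongly correlated with the source image.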
</sec>
<sec id="sec2dot2-sensors-21-00728">
<title>2.2. Generic Learning Methods</title>
<p>The methods in this category first extract discriminant characteristics from a supplementary generic training set that includes several examples per individual and then use those characteristics for SSFR tasks. Deng et al. (2012) [
<xref rid="B31-sensors-21-00728" ref-type="bibr">31</xref>
] developed the Extended Sparse Representation Classifier (ESRC) technique in which the intra-class variant dictionary is created from generic persons not incorporated in the gallery set to increase the efficiency of the identification process. In a method called Sparse Variation Dictionary Learning (SVDL), Yang et al. (2013) [
<xref rid="B32-sensors-21-00728" ref-type="bibr">32</xref>
] trained a sparse variation dictionary by considering the relation between the training set and the outside generic set, disregarding the distinctive features of various organs of the human face. Zhu et al. (2014) [
<xref rid="B33-sensors-21-00728" ref-type="bibr">33</xref>
] suggested a system for SSFR based on Local Generic Representation (LGR), which leverages the benefits of both image partitioning and generic learning and takes into account the fact that the intra-class face variation can be spread among various subjects.</p>
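The common flavor of these generic-learning methods can be sketched as follows: the probe is coded over the gallery augmented with an intra-class variation dictionary built from an outside generic set. This is a simplified least-squares stand-in for the l1-regularized coding actually used in ESRC; all names are illustrative.

import numpy as np

def esrc_like_identify(y, gallery, variations, gallery_ids):
    """Code probe y over [gallery | generic variations]; decide by class residual."""
    D = np.hstack([gallery, variations])           # columns are atoms (d x (n+m))
    coef, *_ = np.linalg.lstsq(D, y, rcond=None)   # ESRC uses sparse (l1) coding here
    a, b = coef[:gallery.shape[1]], coef[gallery.shape[1]:]
    residuals = []
    for i in range(gallery.shape[1]):              # one gallery column per identity (SSFR)
        a_i = np.zeros_like(a)
        a_i[i] = a[i]                              # keep only class i's coefficient
        residuals.append(np.linalg.norm(y - gallery @ a_i - variations @ b))
    return gallery_ids[int(np.argmin(residuals))]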
</sec>
<sec id="sec2dot3-sensors-21-00728">
<title>2.3. Image Partitioning Methods</title>
<p>The methods in this category divide each person’s images into local blocks, extract the discriminant characteristics, and, finally, perform classifications based on the selected discriminant characteristics. Zhu et al. (2012) [
<xref rid="B34-sensors-21-00728" ref-type="bibr">34</xref>
] developed a Patch-based CRC (PCRC) algorithm that applies the original method proposed by Zhang et al. (2011) [
<xref rid="B35-sensors-21-00728" ref-type="bibr">35</xref>
], named Collaborative Representation-based Classification (CRC), to each block. Lu et al. (2012) [
<xref rid="B36-sensors-21-00728" ref-type="bibr">36</xref>
] suggested a technique called Discriminant Multi-manifold Analysis (DMMA) that divides any registered image into multiple non-overlapping blocks and then learns several feature spaces to optimize the various margins of different individuals. Zhang et al. (2018) [
<xref rid="B37-sensors-21-00728" ref-type="bibr">37</xref>
] developed local histogram-based face image operators. They decomposed each image into different non-overlapping blocks. Next, they tried to derive a matrix to project the blocks into an optimal subspace to maximize the different margins of different individuals. Each column was then redesigned to an image filter to treat facial images and the filter responses were binarized using a fixed threshold. Gu et al. (2018) [
<xref rid="B38-sensors-21-00728" ref-type="bibr">38</xref>
] proposed a method called Local Robust Sparse Representation (LRSR). The main idea of this technique is to merge a local sparse representation model with a block-based generic variation dictionary learning model to determine the possible facial intra-class variations of the test images. Zhang et al. (2020) [
<xref rid="B39-sensors-21-00728" ref-type="bibr">39</xref>
] introduced a novel Nearest Neighbor Classifier (NNC) distance measurement to resolve SSFR problems. The suggested technique, entitled Dissimilarity-based Nearest Neighbor Classifier (DNNC), divides all images into equal non-overlapping blocks and produces an organized image block-set. The dissimilarities among the given query image block-set and the training image block-sets are calculated and considered by the NNC distance metric.</p>
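A minimal sketch of the partitioning idea shared by these methods, in the spirit of DNNC: probe and training images are cut into the same grid of equal non-overlapping blocks, and the nearest-neighbor decision aggregates block-wise dissimilarities (DNNC's actual dissimilarity measure differs; the Euclidean block distance below is an assumption).

import numpy as np

def to_blocks(img, grid=(4, 4)):
    """Split a 2D image into equal non-overlapping blocks."""
    h, w = img.shape
    bh, bw = h // grid[0], w // grid[1]
    return [img[i*bh:(i+1)*bh, j*bw:(j+1)*bw]
            for i in range(grid[0]) for j in range(grid[1])]

def block_dissimilarity(a, b, grid=(4, 4)):
    """Aggregate per-block distances between two images of identical size."""
    return sum(np.linalg.norm(pa.astype(float) - pb.astype(float))
               for pa, pb in zip(to_blocks(a, grid), to_blocks(b, grid)))

def nn_identify(probe, train_images, train_ids, grid=(4, 4)):
    dists = [block_dissimilarity(probe, t, grid) for t in train_images]
    return train_ids[int(np.argmin(dists))]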
</sec>
<sec id="sec2dot4-sensors-21-00728">
<title>2.4. Deep Learning Methods</title>
<p>The methods in this category employ consecutive hidden layers of information-processing arranged hierarchically for representation, learning, and classification. They can automatically determine complex non-linear data structures [
<xref rid="B40-sensors-21-00728" ref-type="bibr">40</xref>
]. Zeng et al. (2017) [
<xref rid="B41-sensors-21-00728" ref-type="bibr">41</xref>
] proposed a method that uses Deep Convolutional Neural Networks (DCNNs). Firstly, they propose using an expanding sample technique to augment the training sample set, and then a trained DCNN model is implemented and fine-tuned by those expanding samples to be used in the classification process. Ding et al. (2017) [
<xref rid="B42-sensors-21-00728" ref-type="bibr">42</xref>
] developed a deep learning technique centered on a Kernel Principal Component Analysis Network (KPCANet) and a novel weighted voting technique. First, the aligned facial image is segmented into multiple non-overlapping blocks to create the training set. Then, a KPCANet is employed to obtain the filters and feature banks. Lastly, recognition of the unlabeled probe is achieved by applying the weighted voting scheme. Zhang and Peng (2018) [
<xref rid="B43-sensors-21-00728" ref-type="bibr">43</xref>
] introduced a different method to generate intra-class variances using a deep auto-encoder. They then used these intra-class variations to expand the new examples. First, a generalized deep auto-encoder is used to train facial images in the gallery. Second, a Class-specific Deep Auto-encoder (CDA) is fine-tuned with a single example. Finally, the corresponding CDA is employed to expand the new samples. Du and Da (2020) [
<xref rid="B44-sensors-21-00728" ref-type="bibr">44</xref>
] proposed a method entitled Block Dictionary Learning (BDL) that fuses Sparse Representation (SR) with CNNs. SR is implemented to augment CNN efficiency by improving the inter-class feature variations and creating a global-to-local dictionary learning process to increase the method’s robustness.</p>
<p>It is clear that the deep learning approach for face recognition has gained particular attention in recent years, but it suffers considerably with SSFR systems as they still require a significant amount of information in the training set.</p>
<p>Motivated by the successes of the third approach, “image partitioning”, and the reliability of the local texture descriptor BSIF, in this paper, we propose an image partitioning method to address the problems of SSFR. The proposed method, called MB-C-BSIF, decomposes each image into several color channels, divides each color component into various equal non-overlapping blocks, and applies the BSIF descriptor to each block-component to extract the discriminative features. In the following section, the framework of the MB-C-BSIF is explained in detail.</p>
</sec>
</sec>
<sec id="sec3-sensors-21-00728">
<title>3. Proposed Method</title>
<p>This section details the MB-C-BSIF method (see
<xref ref-type="fig" rid="sensors-21-00728-f001">Figure 1</xref>
) proposed in this article to solve the SSFR problem. MB-C-BSIF is an approach based on image partitioning and consists of three key steps: image pre-processing, feature extraction based on MB-C-BSIF, and classification. In the following subsections, we present these three phases in detail.</p>
<sec id="sec3dot1-sensors-21-00728">
<title>3.1. Preprocessing</title>
<p>Feature extraction and classification constitute the essential steps of our proposed SSFR. However, before these two steps, pre-processing is necessary to improve the visual quality of the captured image. The facial image is enhanced by applying histogram normalization and is then filtered with a non-linear filter. The median filter [
<xref rid="B45-sensors-21-00728" ref-type="bibr">45</xref>
] was adopted to minimize noise while preserving the facial appearance and enhancing the operational outcomes [
<xref rid="B46-sensors-21-00728" ref-type="bibr">46</xref>
].</p>
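<p>To make this step concrete, the following is a minimal sketch of the pre-processing stage in Python (NumPy, scikit-image, SciPy). The 3 × 3 median window and the use of intensity rescaling for "histogram normalization" are our assumptions; the paper does not specify either.</p>
```python
import numpy as np
from skimage import exposure
from scipy.ndimage import median_filter

def preprocess(gray_image: np.ndarray, kernel_size: int = 3) -> np.ndarray:
    """Histogram normalization followed by non-linear median filtering."""
    # Stretch the intensity histogram over the full dynamic range.
    normalized = exposure.rescale_intensity(gray_image.astype(float),
                                            out_range=(0.0, 1.0))
    # Median filtering removes impulsive noise while preserving edges
    # and the overall facial appearance.
    return median_filter(normalized, size=kernel_size)
```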
</sec>
<sec id="sec3dot2-sensors-21-00728">
<title>3.2. MB-C-BSIF-Based Feature Extraction</title>
<p>Our advanced feature extraction technique is based on the multi-block color representation of the BSIF descriptor, entitled Multi-Block Color BSIF (MB-C-BSIF). The BSIF operator proposed by Kannala and Rahtu [
<xref rid="B16-sensors-21-00728" ref-type="bibr">16</xref>
] is an efficient and robust descriptor for texture analysis [
<xref rid="B47-sensors-21-00728" ref-type="bibr">47</xref>
,
<xref rid="B48-sensors-21-00728" ref-type="bibr">48</xref>
]. BSIF focuses on creating local image descriptors that powerfully encode texture information and are appropriate for describing image regions in the form of histograms. The method calculates a binary code for all pixels by linearly projecting local image blocks onto a subspace whose basis vectors are learned from natural pictures through Independent Component Analysis (ICA) [
<xref rid="B45-sensors-21-00728" ref-type="bibr">45</xref>
] and by binarizing the coordinates through thresholding. The number of basis vectors defines the length of the binary code string. Image regions can be conveniently represented with histograms of the pixels’ binary codes. Other descriptors that generate binary codes, such as the Local Binary Pattern (LBP) [
<xref rid="B49-sensors-21-00728" ref-type="bibr">49</xref>
] and the Local Phase Quantization (LPQ) [
<xref rid="B50-sensors-21-00728" ref-type="bibr">50</xref>
], have inspired the BSIF process. However, the BSIF is based on natural image statistics rather than heuristic or handcrafted code constructions, enhancing its modeling capabilities.</p>
<p>Technically speaking, the filter response $s_i$ is calculated, for a given image patch $X$ of size $l \times l$ pixels and a linear filter $W_i$ of the same size, by:
$$s_i = \sum_{u,v} W_i(u, v)\, X(u, v) \qquad (1)$$
where the index $i$ in $W_i$ indicates the $i$-th filter.</p>
<p>The binarized feature $b_i$ is calculated as follows:
$$b_i = \begin{cases} 1 & \text{if } s_i > 0 \\ 0 & \text{otherwise} \end{cases} \qquad (2)$$
</p>
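<p>As a concrete illustration of Equations (1) and (2), the sketch below computes per-pixel BSIF codes given a bank of pre-learned ICA filters. The filter bank itself is assumed to be available (it is learned from natural image patches, as described next); the helper name and the symmetric boundary handling are our own choices.</p>
```python
import numpy as np
from scipy.signal import convolve2d

def bsif_codes(image: np.ndarray, filters: np.ndarray) -> np.ndarray:
    """Per-pixel BSIF codes from a stack of n filters of shape (n, l, l)."""
    n = filters.shape[0]
    codes = np.zeros(image.shape, dtype=np.int64)
    for i in range(n):
        # Eq. (1): linear response s_i = sum_{u,v} W_i(u,v) X(u,v)
        s_i = convolve2d(image, filters[i], mode="same", boundary="symm")
        # Eq. (2): b_i = 1 if s_i > 0, else 0; packed as bit i of the code
        codes |= (s_i > 0).astype(np.int64) << i
    return codes  # integer codes in [0, 2**n - 1]
```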
<p>The BSIF descriptor has two key parameters: the filter size $l \times l$ and the bit-string length $n$. Using ICA, the filters $W_i$ are trained by maximizing the statistical independence of the responses $s_i$. Filter sets are trained for different choices of these parameter values; in particular, each filter set was trained using 50,000 image patches.
<xref ref-type="fig" rid="sensors-21-00728-f002">Figure 2</xref>
displays some examples of the filters obtained with $l \times l = 7 \times 7$ and $n = 8$.
<xref ref-type="fig" rid="sensors-21-00728-f003">Figure 3</xref>
provides some examples of facial images and their respective BSIF representations (with $l \times l = 7 \times 7$ and $n = 8$).</p>
<p>As with the LBP and LPQ methodologies, the occurrences of the BSIF codes are accumulated in a histogram $H_1$, which is employed as the feature vector.</p>
<p>However, the simple BSIF operator computed over a single block does not capture the spatial layout of the texture characteristics, which matters for robustness to occlusion and rotation. To address these limitations, an extension of the basic BSIF, the Multi-Block BSIF (MB-BSIF), is used. The concept is based on partitioning the original image into non-overlapping blocks: a given facial image can be split equally along the horizontal and vertical directions. As an illustration, we can derive 1, 4, or 16 blocks by segmenting the image into grids of 1 × 1, 2 × 2, or 4 × 4, as shown in
<xref ref-type="fig" rid="sensors-21-00728-f004">Figure 4</xref>
. Each block carries details about its content, such as the nose, eyes, or eyebrows, and together the blocks provide information about positional relationships, such as nose to mouth or eye to eye. The blocks and the relations between them are thus essential for SSFR tasks.</p>
<p>Our idea was to segment the image into equal non-overlapping blocks and to calculate the BSIF histograms of the different blocks. The histogram $H_2$ is the concatenation of the individual histograms calculated for the different blocks, as shown in
<xref ref-type="fig" rid="sensors-21-00728-f005">Figure 5</xref>
.</p>
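<p>Assuming the per-pixel codes from the sketch above, the MB-BSIF histogram $H_2$ can be obtained by splitting the code image into a regular grid and concatenating the per-block histograms $H_1$; the per-block normalization below is our own choice, not specified in the paper.</p>
```python
import numpy as np

def mb_bsif_histogram(codes: np.ndarray, grid: int, n_bits: int) -> np.ndarray:
    """Concatenated per-block BSIF histograms (histogram H2)."""
    h, w = codes.shape
    n_codes = 2 ** n_bits
    hists = []
    for r in range(grid):
        for c in range(grid):
            block = codes[r * h // grid:(r + 1) * h // grid,
                          c * w // grid:(c + 1) * w // grid]
            # Histogram H1 of one block: one bin per possible BSIF code.
            hist, _ = np.histogram(block, bins=n_codes, range=(0, n_codes))
            hists.append(hist / max(hist.sum(), 1))  # L1-normalize per block
    return np.concatenate(hists)
```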
<p>In the face recognition literature, some works have concentrated solely on analyzing the luminance information of facial images (i.e., grayscale). This paper adopts a different technique that exploits color texture information and shows that analyzing chrominance can benefit SSFR systems. To demonstrate this idea, we separate the RGB facial image into its three channels (red, green, and blue) and compute the MB-BSIF separately for each channel. The final feature vector is the concatenation of the three channel histograms into a global histogram $H_3$. This approach is called Multi-Block Color BSIF (MB-C-BSIF).
<xref ref-type="fig" rid="sensors-21-00728-f005">Figure 5</xref>
provides a schematic illustration of the proposed MB-C-BSIF framework.</p>
<p>We note that RGB is the most commonly employed color-space for capturing, modeling, and displaying color images. Nevertheless, its use in image analysis is limited by the strong correlation between its three channels (red, green, and blue) and by its poor separation of luminance and chrominance information. Still, the individual color channels can be highly discriminative for identifying objects and offer good contrast for several visual cues of natural skin tones. In addition to RGB, we studied and tested two additional color-spaces, HSV and YCbCr, to exploit color texture details. These color-spaces separate the chrominance and luminance components. In the HSV color-space, the hue and saturation dimensions encode the image's chrominance, while the value (V) dimension matches the luminance. The YCbCr color-space divides the RGB components into luminance (Y), chrominance blue (Cb), and chrominance red (Cr). The chrominance components are represented differently in the HSV and YCbCr domains, so they can offer complementary color texture descriptions for SSFR systems.</p>
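<p>A minimal sketch of the full MB-C-BSIF feature vector (histogram $H_3$), reusing the two helpers above; the scikit-image conversions are one possible way to obtain the HSV and YCbCr variants, and the function name is ours.</p>
```python
import numpy as np
from skimage import color

def mb_c_bsif(rgb_image: np.ndarray, filters: np.ndarray,
              grid: int = 4, space: str = "RGB") -> np.ndarray:
    """MB-C-BSIF: concatenation of the three per-channel MB-BSIF histograms."""
    if space == "HSV":
        img = color.rgb2hsv(rgb_image)
    elif space == "YCbCr":
        img = color.rgb2ycbcr(rgb_image)
    else:
        img = rgb_image.astype(float)
    n_bits = filters.shape[0]
    # Histogram H3: channel-wise H2 histograms, concatenated.
    return np.concatenate([
        mb_bsif_histogram(bsif_codes(img[..., ch], filters), grid, n_bits)
        for ch in range(3)
    ])
```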
</sec>
<sec id="sec3dot3-sensors-21-00728">
<title>3.3. K-Nearest Neighbors (K-NN) Classifier</title>
<p>During the classification process, each tested facial image is compared with those stored in the dataset. To assign the corresponding label (i.e., identity) to the tested image, we used the K-NN classifier associated with a distance metric. The K-NN classifier is simple and flexible and has proven effective in a wide range of applications.</p>
<p>Technically speaking, given a training set $\{(x_i, y_i) \mid i = 1, 2, \ldots, s\}$, where $x_i \in \mathbb{R}^D$ denotes the $i$-th person's feature vector, $y_i$ denotes this person's label, $D$ is the dimension of the feature vector, and $s$ is the number of persons, the K-NN classifier determines, for a test sample $x' \in \mathbb{R}^D$ to be classified, the training sample $x_{i^*}$ closest to $x'$ according to the distance metric, and then assigns the label of $x_{i^*}$ to $x'$.</p>
<p>K-NN can be implemented with various distance measurements. We evaluated and compared three widely used distance metrics in this work: Hamming, Euclidean, and city block (also called Manhattan distance).</p>
<p>The Hamming distance between $x'$ and $x_i$ is calculated as follows:
$$d(x', x_i) = \sum_{j=1}^{D} \left( x'_j - x_{ij} \right)^2 \qquad (3)$$
</p>
<p>The Euclidean distance between $x'$ and $x_i$ is formulated as follows:
$$d(x', x_i) = \sqrt{\sum_{j=1}^{D} \left( x'_j - x_{ij} \right)^2} \qquad (4)$$
</p>
<p>The city block distance between $x'$ and $x_i$ is measured as follows:
$$d(x', x_i) = \sum_{j=1}^{D} \left| x'_j - x_{ij} \right| \qquad (5)$$
where $x'$ and $x_i$ are two vectors of dimension $D$, $x_{ij}$ is the $j$-th feature of $x_i$, and $x'_j$ is the $j$-th feature of $x'$.</p>
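<p>The three distances translate directly into code; note that Equation (3), as written, sums squared component differences, which coincides with the classical Hamming distance when the compared features are binary. A sketch:</p>
```python
import numpy as np

def hamming(x, xi):
    # Eq. (3): sum of squared component differences
    return np.sum((x - xi) ** 2)

def euclidean(x, xi):
    # Eq. (4): square root of the sum of squared differences
    return np.sqrt(np.sum((x - xi) ** 2))

def city_block(x, xi):
    # Eq. (5): sum of absolute differences (Manhattan distance)
    return np.sum(np.abs(x - xi))
```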
<p>The corresponding label of $x'$ is determined by:
$$y' = y_{i^*} \qquad (6)$$
where
$$i^* = \arg\min_{i = 1, \ldots, s} d(x', x_i) \qquad (7)$$
</p>
<p>In SSFR, the distance metric thus measures the similarity between the test example and each training example.</p>
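<p>Equations (6) and (7) then reduce to a single arg-min over the gallery. A minimal sketch, reusing the city block helper above as the default metric:</p>
```python
import numpy as np

def nearest_neighbor_label(x_test, gallery_features, labels,
                           distance=city_block):
    """Assign the label of the closest gallery sample (Eqs. (6)-(7))."""
    distances = [distance(x_test, xi) for xi in gallery_features]
    i_star = int(np.argmin(distances))  # Eq. (7)
    return labels[i_star]               # Eq. (6)
```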
<p>Algorithm 1 summarizes our proposed SSFR method.</p>
<array orientation="portrait">
<tbody>
<tr>
<td align="left" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1" colspan="1">
<bold>Algorithm 1</bold> SSFR based on MB-C-BSIF and K-NN</td>
</tr>
<tr>
<td align="left" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">
<bold>Input:</bold> Facial image $X$
<break></break>
1. Apply histogram normalization to $X$
<break></break>
2. Apply median filtering to $X$
<break></break>
3. Split $X$ into three color components (red, green, blue): $C^n$, $n = 1, 2, 3$
<break></break>
4. <bold>for</bold> $n = 1$ to $3$
<break></break>
5.  Divide $C^n$ into $K$ equal blocks: $C_k^n$, $k = 1, \ldots, K$
<break></break>
6.  <bold>for</bold> $k = 1$ to $K$
<break></break>
7.   Compute BSIF on the block-component $C_k^n$: $H1_{(k)}^{(n)}$
<break></break>
8.  <bold>end for</bold>
<break></break>
9.  Concatenate the computed MB-BSIF features of the component $C^n$:
<break></break>
10.  $H2^{(n)} = H1_{(1)}^{(n)} + H1_{(2)}^{(n)} + \ldots + H1_{(K)}^{(n)}$
<break></break>
11. <bold>end for</bold>
<break></break>
12. Concatenate the computed MB-C-BSIF features: $H3 = H2^{(1)} + H2^{(2)} + H2^{(3)}$
<break></break>
13. Apply K-NN associated with a distance metric
<break></break>
<bold>Output:</bold> Identification decision</td>
</tr>
</tbody>
</array>
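<p>Putting the pieces together, the following sketch follows Algorithm 1 end-to-end, reusing the helpers introduced above; how the pre-learned BSIF filters are obtained is left abstract here (Kannala and Rahtu distribute ready-made filter banks).</p>
```python
import numpy as np

def ssfr_identify(test_rgb, gallery_images, gallery_labels, filters, grid=4):
    """Algorithm 1: MB-C-BSIF features + K-NN (K = 1, city block distance)."""
    def features(rgb):
        # Steps 1-2: per-channel histogram normalization + median filtering
        pre = np.stack([preprocess(rgb[..., ch]) for ch in range(3)], axis=-1)
        # Steps 3-12: MB-C-BSIF histogram H3
        return mb_c_bsif(pre, filters, grid=grid)

    gallery_features = [features(img) for img in gallery_images]
    # Step 13: nearest neighbor with the city block distance
    return nearest_neighbor_label(features(test_rgb), gallery_features,
                                  gallery_labels)
```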
</sec>
</sec>
<sec id="sec4-sensors-21-00728">
<title>4. Experimental Analysis</title>
<p>The proposed SSFR was evaluated using the unconstrained Alex and Robert (AR) [
<xref rid="B51-sensors-21-00728" ref-type="bibr">51</xref>
] and Labeled Faces in the Wild (LFW) [
<xref rid="B52-sensors-21-00728" ref-type="bibr">52</xref>
] databases. In this section, we present the specifications of each utilized database and their experimental setups. Furthermore, we analyze the findings obtained from our proposed SSFR method and compare the accuracy of recognition with other current state-of-the-art approaches.</p>
<sec id="sec4dot1-sensors-21-00728">
<title>4.1. Experiments on the AR Database</title>
<sec id="sec4dot1dot1-sensors-21-00728">
<title>4.1.1. Database Description</title>
<p>The Alex and Robert (AR) face database [
<xref rid="B51-sensors-21-00728" ref-type="bibr">51</xref>
] includes more than 4000 color facial photographs of 126 individuals (56 females and 70 males); each individual has 26 frontal-face images taken with several facial expressions, lighting conditions, and occlusions. The photographs were acquired in two sessions (shots 1 and 2) separated by a two-week interval, with 13 facial photographs per subject in each session. A subset of facial photographs of 100 distinct individuals (50 males and 50 females) was selected for the subsequent experiments.
<xref ref-type="fig" rid="sensors-21-00728-f006">Figure 6</xref>
displays the 26 facial images of the first individual from the AR database, along with detailed descriptions of them.</p>
</sec>
<sec id="sec4dot1dot2-sensors-21-00728">
<title>4.1.2. Setups</title>
<p>To determine the efficiency of the proposed MB-C-BSIF in dealing with changes in facial expression, subset A (normal-1) was used as the training set and subsets B (smiling-1), C (angry-1), D (screaming-1), N (normal-2), O (smiling-2), P (angry-2), and Q (screaming-2) were employed for the test set. The facial images from the eight subsets displayed different facial expressions and were used in two different sessions. For the training set, we employed 100 images of the normal-1 type (100 images for 100 persons, i.e., one image per person). Moreover, we employed 700 images in the test set (smiling-1, angry-1, screaming-1, normal-2, smiling-2, angry-2, and screaming-2). These 700 images were divided into seven subsets for testing, with each subset containing 100 images.</p>
<p>As shown in
<xref ref-type="fig" rid="sensors-21-00728-f006">Figure 6</xref>
, two forms of occlusion are found in 12 subsets. The first is occlusion by sunglasses, as seen in subsets H, I, J, U, V, and W, while the second is occlusion by a scarf, in subsets K, L, M, X, Y, and Z. In these 12 subsets, each individual's photographs have various illumination conditions and were acquired in two distinct sessions. Each subset contains 100 images, so the total number of facial photographs used in the test set was 1200. To examine the performance of the suggested MB-C-BSIF under conditions of object occlusion, we considered subset A as the training set and the 12 occlusion subsets as the test set, similar to the initial setup.</p>
</sec>
<sec id="sec4dot1dot3-sensors-21-00728">
<title>4.1.3. Experiment #1 (Effects of BSIF Parameters)</title>
<p>As stated in
<xref ref-type="sec" rid="sec3dot2-sensors-21-00728">Section 3.2</xref>
, the BSIF operator is based on two parameters: the filter kernel size $l \times l$ and the bit-string length $n$. In this test, we assessed the proposed method with various BSIF parameters to find the configuration yielding the best recognition accuracy. The image was converted to grayscale, no block segmentation was applied (i.e., a single 1 × 1 block), and the city block distance was used with K-NN.
<xref rid="sensors-21-00728-t001" ref-type="table">Table 1</xref>
,
<xref rid="sensors-21-00728-t002" ref-type="table">Table 2</xref>
and
<xref rid="sensors-21-00728-t003" ref-type="table">Table 3</xref>
show comprehensive details and comparisons of results obtained using some (key) BSIF configurations for facial expression variation subsets, occlusion subsets for sunglasses, and occlusion subsets for scarfs, respectively. The best results are in bold.</p>
<p>We note that the parameters $l \times l = 17 \times 17$ and $n = 12$ for the BSIF operator achieve the best identification performance among the configurations considered in this experiment. Furthermore, the identification rate increases as the values of $l$ or $n$ grow. The selected configuration achieves good accuracy under changes in facial expression for all seven subsets. However, for subset Q, which is characterized by considerable variation in facial expression, recognition accuracy was low (71%). Lastly, the performance of this configuration under object occlusion is unsatisfactory, especially with occlusion by a scarf, and needs further improvement.</p>
</sec>
<sec id="sec4dot1dot4-sensors-21-00728">
<title>4.1.4. Experiment #2 (Effects of Distance)</title>
<p>In this experiment, we evaluated the previous configuration (i.e., grayscale image, a single 1 × 1 block, $l \times l = 17 \times 17$, and $n = 12$) with various distances associated with K-NN for classification.
<xref rid="sensors-21-00728-t004" ref-type="table">Table 4</xref>
,
<xref rid="sensors-21-00728-t005" ref-type="table">Table 5</xref>
and
<xref rid="sensors-21-00728-t006" ref-type="table">Table 6</xref>
compare the results achieved by adopting the city block distance and other well-known distances with facial expression variation subsets, occlusion subsets for sunglasses, and occlusion subsets for scarfs, respectively. The best results are in bold.</p>
<p>We note that the city block distance produced the most reliable recognition performance of the distances analyzed in this test (Hamming and Euclidean being the others). The city block distance is therefore the most suitable for our method.</p>
</sec>
<sec id="sec4dot1dot5-sensors-21-00728">
<title>4.1.5. Experiment #3 (Effects of Image Segmentation)</title>
<p>To improve recognition accuracy, especially under conditions of occlusion, we proposed decomposing the image into several non-overlapping blocks, as discussed in
<xref ref-type="sec" rid="sec3dot2-sensors-21-00728">Section 3.2</xref>
. The objective of this test was to assess identification performance when MB-BSIF features are used instead of a single global BSIF computed over the entire image. Three image segmentations are considered and compared: each original image was divided into 1 × 1 (i.e., global information), 2 × 2, and 4 × 4 blocks (i.e., local information); in other words, into 1 block (the original image), 4 blocks, or 16 blocks. For the last two cases, the feature vectors (i.e., histograms H1) derived from each block were concatenated to form the feature vector of the entire image (histogram H2).
<xref rid="sensors-21-00728-t007" ref-type="table">Table 7</xref>
,
<xref rid="sensors-21-00728-t008" ref-type="table">Table 8</xref>
and
<xref rid="sensors-21-00728-t009" ref-type="table">Table 9</xref>
present and compare the recognition accuracy of the tested MB-BSIF for various block configurations with subsets of facial expression variation, occlusion by sunglasses, and occlusion by a scarf, respectively (with grayscale images, city block distance, $l \times l = 17 \times 17$, and $n = 12$). The best results are in bold.</p>
<p>From the resulting outputs, we can observe that:
<list list-type="simple">
<list-item>
<label>-</label>
<p>For subsets of facial expression variation, a small change arises because the results of the previous experiment were already reasonable (e.g., subsets A, B, D, N, and P). However, the accuracy rises from 71% to 76% for subset Q, which is characterized by significant changes in facial expression.</p>
</list-item>
<list-item>
<label>-</label>
<p>For occluded subsets, there was a significant increase in recognition accuracy as the number of blocks was increased. As an illustration, when going from 1 to 16 blocks, the accuracy grew from 31% to 71% for subset Z, from 46% to 79% for subset W, and from 48% to 84% for subset Y.</p>
</list-item>
<list-item>
<label>-</label>
<p>As such, in the case of partial occlusion, we may conclude that local information is essential. It allows relevant facial information to be extracted, such as details of the facial structure (nose, eyes, mouth) and positional relationships (nose to mouth, eye to eye, and so on).</p>
</list-item>
<list-item>
<label>-</label>
<p>Finally, we note that the 4 × 4 blocks provided the optimum configuration with the best accuracy for subsets of facial expression, occlusion by sunglasses, and scarf occlusion.</p>
</list-item>
</list>
</p>
</sec>
<sec id="sec4dot1dot6-sensors-21-00728">
<title>4.1.6. Experiment #4 (Effects of Color Texture Information)</title>
<p>For this analysis, we evaluated the performance of the previous configuration (i.e., segmentation of the image into 4 × 4 blocks, K-NN with the city block distance, $l \times l = 17 \times 17$, and $n = 12$) using three color-spaces, namely RGB, HSV, and YCbCr, instead of converting the image to grayscale. This feature extraction method is called MB-C-BSIF, as described in
<xref ref-type="sec" rid="sec3dot2-sensors-21-00728">Section 3.2</xref>
. The AR database images are already in RGB, so no conversion is required for the first color-space; for the other two, the images are converted from RGB to HSV and from RGB to YCbCr.
<xref rid="sensors-21-00728-t010" ref-type="table">Table 10</xref>
,
<xref rid="sensors-21-00728-t011" ref-type="table">Table 11</xref>
and
<xref rid="sensors-21-00728-t012" ref-type="table">Table 12</xref>
display and compare the recognition accuracy of the MB-C-BSIF using several color-spaces with subsets of facial expression variations, occlusion by sunglasses, and occlusion by a scarf, respectively. The best results are in bold.</p>
<p>From the resulting outputs, we can see that:
<list list-type="simple">
<list-item>
<label>-</label>
<p>The results are almost identical for subsets of facial expression variation with all checked color-spaces. In fact, with the HSV color-space, a slight improvement is reported, although slight degradations are observed with both RGB and YCbCr color-spaces.</p>
</list-item>
<list-item>
<label>-</label>
<p>All color-spaces see enhanced recognition accuracy compared to the grayscale standard for sunglasses occlusion subsets. RGB is the color-space with the highest output, seeing an increase from 91.83% to 93.50% in terms of average accuracy.</p>
</list-item>
<list-item>
<label>-</label>
<p>HSV shows some regression for scarf occlusion subsets, but both the RGB and YCbCr color-spaces display some progress compared to the grayscale norm. Additionally, RGB remains the color-space with the highest output.</p>
</list-item>
<list-item>
<label>-</label>
<p>The most significant observation is that the RGB color-space saw significantly improved performance in the V, W, Y, and Z subsets (from 81% to 85% with V; 79% to 84% with W; 84% to 88% with Y; and 77% to 87% with Z). Note that images of these occluded subsets are characterized by light degradation (either to the right or left, as shown in
<xref ref-type="fig" rid="sensors-21-00728-f006">Figure 6</xref>
).</p>
</list-item>
<list-item>
<label>-</label>
<p>Finally, we note that the optimum color-space, providing the best balance between robustness to lighting variations and identification accuracy, was RGB.</p>
</list-item>
</list>
</p>
</sec>
<sec id="sec4dot1dot7-sensors-21-00728">
<title>4.1.7. Comparison #1 (Protocol I)</title>
<p>To confirm that our suggested method produces superior recognition performance with variations in facial expression, we compared the collected results with several state-of-the-art methods recently employed to tackle the SSFR issue.
<xref rid="sensors-21-00728-t013" ref-type="table">Table 13</xref>
presents the highest accuracies obtained using the same subsets and the same assessment protocol with Subset A as the training set and subsets of facial expression variations B, C, D, N, O, and P constituting the test set. The results presented in
<xref rid="sensors-21-00728-t013" ref-type="table">Table 13</xref>
are taken from several references [
<xref rid="B36-sensors-21-00728" ref-type="bibr">36</xref>
,
<xref rid="B39-sensors-21-00728" ref-type="bibr">39</xref>
,
<xref rid="B53-sensors-21-00728" ref-type="bibr">53</xref>
,
<xref rid="B54-sensors-21-00728" ref-type="bibr">54</xref>
]. “- -” signifies that the considered method has no experimental results. The best results are in bold.</p>
<p>The outcomes obtained validate the robustness and reliability of our proposed SSFR system compared to state-of-the-art methods assessed on identical subsets. The proposed technique achieves high identification accuracy on the six subsets: 100.00% for B and C, 95.00% for D, 97.00% for N, 92.00% for O, and 93.00% for P.</p>
<p>For all subsets, our suggested technique surpasses the state-of-the-art methods analyzed in this paper, i.e., the proposed MB-C-BSIF can achieve excellent identification performance under the condition of variation in facial expression.</p>
</sec>
<sec id="sec4dot1dot8-sensors-21-00728">
<title>4.1.8. Comparison #2 (Protocol II)</title>
<p>To further demonstrate the efficacy of our proposed SSFR system, we also compared the best configuration of the MB-C-BSIF (i.e., RGB color-space, segmentation of the image into 4 × 4 blocks, city block distance, $l \times l = 17 \times 17$, and $n = 12$) with recently published work under unconstrained conditions. We followed the same experimental protocol described in [
<xref rid="B33-sensors-21-00728" ref-type="bibr">33</xref>
,
<xref rid="B39-sensors-21-00728" ref-type="bibr">39</xref>
].
<xref rid="sensors-21-00728-t014" ref-type="table">Table 14</xref>
displays the accuracies of the works compared on the tested subsets H + K (i.e., occlusion by sunglasses and scarf) and subsets J + M (i.e., occlusion by sunglasses and scarf with variations in lighting). The best results are in bold.</p>
<p>In
<xref rid="sensors-21-00728-t014" ref-type="table">Table 14</xref>
, we can observe that the work presented by Zhu et al. [
<xref rid="B33-sensors-21-00728" ref-type="bibr">33</xref>
], called LGR, reaches a comparable level, but the identification accuracy of our MB-C-BSIF procedure is higher than that of all the methods considered, for both test sessions.</p>
<p>Compared to related SSFRs, which can be categorized as either generic learning methods (e.g., ESRC [
<xref rid="B31-sensors-21-00728" ref-type="bibr">31</xref>
], SVDL [
<xref rid="B32-sensors-21-00728" ref-type="bibr">32</xref>
], and LGR [
<xref rid="B33-sensors-21-00728" ref-type="bibr">33</xref>
]), image partitioning methods (e.g., CRC [
<xref rid="B35-sensors-21-00728" ref-type="bibr">35</xref>
], PCRC [
<xref rid="B34-sensors-21-00728" ref-type="bibr">34</xref>
], and DNNC [
<xref rid="B39-sensors-21-00728" ref-type="bibr">39</xref>
]), or deep learning methods (e.g., DCNN [
<xref rid="B41-sensors-21-00728" ref-type="bibr">41</xref>
] and BDL [
<xref rid="B44-sensors-21-00728" ref-type="bibr">44</xref>
]), the capabilities of our method can be explained in terms of its exploitation of different forms of information. This can be summarized as follows:
<list list-type="simple">
<list-item>
<label>-</label>
<p>The BSIF descriptor scans the image pixel by pixel, i.e., we consider the benefits of local information.</p>
</list-item>
<list-item>
<label>-</label>
<p>The image is decomposed into several blocks, i.e., we exploit regional information.</p>
</list-item>
<list-item>
<label>-</label>
<p>BSIF descriptor occurrences are accumulated in a global histogram, i.e., we manipulate global information.</p>
</list-item>
<list-item>
<label>-</label>
<p>The MB-BSIF is applied to all RGB image components, i.e., color texture information is exploited.</p>
</list-item>
</list>
</p>
<table-wrap id="sensors-21-00728-t014" orientation="portrait" position="float">
<object-id pub-id-type="pii">sensors-21-00728-t014_Table 14</object-id>
<label>Table 14</label>
<caption>
<p>Comparison of 12 methods on occlusion and lighting-occlusion sessions.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1" colspan="1">Authors</th>
<th align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1" colspan="1">Year</th>
<th align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1" colspan="1">Method</th>
<th align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1" colspan="1">Occlusion (H + K) (%)</th>
<th align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1" colspan="1">Lighting + Occlusion (J + M) (%)</th>
<th align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1" colspan="1">Average Accuracy (%)</th>
</tr>
</thead>
<tbody>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Zhang et al. [
<xref rid="B35-sensors-21-00728" ref-type="bibr">35</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">2011</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">CRC</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">58.10</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">23.80</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">40.95</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Deng et al. [
<xref rid="B31-sensors-21-00728" ref-type="bibr">31</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">2012</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">ESRC</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">83.10</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">68.60</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">75.85</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Zhu et al. [
<xref rid="B34-sensors-21-00728" ref-type="bibr">34</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">2012</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">PCRC</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">95.60</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">81.30</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">88.45</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Yang et al. [
<xref rid="B32-sensors-21-00728" ref-type="bibr">32</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">2013</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">SVDL</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">86.30</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">79.40</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">82.85</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Lu et al. [
<xref rid="B36-sensors-21-00728" ref-type="bibr">36</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">2012</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">DMMA</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">46.90</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">30.90</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">38.90</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Zhu et al. [
<xref rid="B33-sensors-21-00728" ref-type="bibr">33</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">2014</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">LGR</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">98.80</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">96.30</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">97.55</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Ref. [
<xref rid="B67-sensors-21-00728" ref-type="bibr">67</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">2016</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">SeetaFace</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">63.13</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">55.63</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">59.39</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Zeng et al. [
<xref rid="B41-sensors-21-00728" ref-type="bibr">41</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">2017</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">DCNN</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">96.5</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">88.3</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">92.20</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Chu et al. [
<xref rid="B65-sensors-21-00728" ref-type="bibr">65</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">2019</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">MFSA+</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">91.3</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">79.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">85.20</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Cuculo et al. [
<xref rid="B68-sensors-21-00728" ref-type="bibr">68</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">2019</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">SSLD</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">90.18</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">82.02</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">86.10</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Zhang et al. [
<xref rid="B39-sensors-21-00728" ref-type="bibr">39</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">2020</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">DNNC</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">92.50</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">79.50</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">86.00</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Du and Da [
<xref rid="B44-sensors-21-00728" ref-type="bibr">44</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">2020</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">BDL</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">93.03</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">91.55</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">92.29</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">
<bold>Our method</bold>
</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">
<bold>2021</bold>
</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">
<bold>MB-C-BSIF</bold>
</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">
<bold>99.50</bold>
</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">
<bold>98.50</bold>
</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">
<bold>99.00</bold>
</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>To summarize this first experiment, the performance of the proposed approach was evaluated on the AR database. The issues studied were changes in facial expression, lighting, and occlusion by sunglasses and headscarf, which are among the most common cases in real-world applications. As presented in
<xref rid="sensors-21-00728-t013" ref-type="table">Table 13</xref>
and
<xref rid="sensors-21-00728-t014" ref-type="table">Table 14</xref>
, our system obtained very good results (96.17% with Protocol I and 99% with Protocol II) that surpass all the compared approaches (both handcrafted and deep-learning-based), demonstrating that the proposed approach is appropriate and effective in the presence of the problems mentioned above.</p>
</sec>
</sec>
<sec id="sec4dot2-sensors-21-00728">
<title>4.2. Experiments on the LFW Database</title>
<sec id="sec4dot2dot1-sensors-21-00728">
<title>4.2.1. Database Description</title>
<p>The Labeled Faces in the Wild (LFW) database [
<xref rid="B52-sensors-21-00728" ref-type="bibr">52</xref>
] comprises more than 13,000 photos of 5749 diverse subjects collected from the World Wide Web in challenging situations; 1680 of these subjects have two or more shots. Our tests employed LFW-a, a variant of the standard LFW in which the facial images are aligned with a commercial normalization tool. The intra-class differences in this database are very high compared to the well-known constrained databases, even after face normalization has been carried out. Each image is 250 × 250 pixels and is stored in JPEG format. LFW is a very challenging database: it targets the unconstrained issues of face recognition, such as changes in lighting, age, clothing, focus, facial expression, color saturation, posture, race, hairstyle, background, camera quality, gender, ethnicity, and other factors, as presented in
<xref ref-type="fig" rid="sensors-21-00728-f007">Figure 7</xref>
.</p>
</sec>
<sec id="sec4dot2dot2-sensors-21-00728">
<title>4.2.2. Experimental Protocol</title>
<p>This study followed the experimental protocol presented in [
<xref rid="B30-sensors-21-00728" ref-type="bibr">30</xref>
,
<xref rid="B32-sensors-21-00728" ref-type="bibr">32</xref>
,
<xref rid="B33-sensors-21-00728" ref-type="bibr">33</xref>
,
<xref rid="B34-sensors-21-00728" ref-type="bibr">34</xref>
]. From the LFW-a database, we selected only those subjects with more than 10 images, obtaining a subset containing the facial images of 158 individuals. We cropped each image to 120 × 120 pixels and then resized it to 80 × 80 pixels. The facial photographs of the first 50 subjects were used to build the training and test sets: one shot per subject was randomly selected for the training set, while the remaining images were used as the test set. This process was repeated for five random permutations, and the average result was taken into consideration.</p>
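<p>For illustration, the following minimal Python sketch reproduces this protocol. It is not the authors’ code: the dataset directory layout, the center-crop choice, and the evaluate() placeholder are assumptions.</p>
<preformat>
# Minimal sketch of the evaluation protocol above (illustrative; the
# directory layout, the center crop, and evaluate() are assumptions,
# not the authors' code).
import os
import random
from PIL import Image

LFW_A_DIR = "lfw-a"  # assumed local copy of the aligned LFW-a images

def load_subjects(root, min_images=11):
    """Keep only subjects with more than 10 images (158 expected)."""
    subjects = {}
    for name in sorted(os.listdir(root)):
        paths = sorted(os.listdir(os.path.join(root, name)))
        if len(paths) >= min_images:
            subjects[name] = [os.path.join(root, name, p) for p in paths]
    return subjects

def preprocess(path):
    """Crop the central 120 x 120 region, then resize to 80 x 80 pixels."""
    img = Image.open(path).convert("RGB")
    w, h = img.size
    left, top = (w - 120) // 2, (h - 120) // 2
    return img.crop((left, top, left + 120, top + 120)).resize((80, 80))

subjects = dict(list(load_subjects(LFW_A_DIR).items())[:50])  # first 50 subjects
accuracies = []
for seed in range(5):  # five random permutations
    rng = random.Random(seed)
    gallery, probes = {}, []
    for name, paths in subjects.items():
        shot = rng.choice(paths)  # one training shot per subject
        gallery[name] = preprocess(shot)
        probes += [(name, preprocess(p)) for p in paths if p != shot]
    # accuracies.append(evaluate(gallery, probes))  # MB-C-BSIF + K-NN matching
# the final score is the mean of the five runs
</preformat>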
</sec>
<sec id="sec4dot2dot3-sensors-21-00728">
<title>4.2.3. Limitations of SSFR Systems</title>
<p>In this section, SSFR systems, and particularly the method we propose, are deliberately tested in a situation that is not suited to their application: they are designed for the case where only one sample is available and, very often, this sample is captured under very poor conditions.</p>
<p>Here, by contrast, we consider cases where hundreds of samples are available, as in the LFW database, or where the training stage is based on millions of samples. In such situations, deep learning approaches are obviously the better choice.</p>
<p>Therefore, the objective of this section is to assess the limitations of our approach.</p>
<p>
<xref rid="sensors-21-00728-t015" ref-type="table">Table 15</xref>
summarizes the performance of several rival approaches in terms of identification accuracy. Our best result was obtained by adopting the following configuration (a minimal illustrative sketch follows the list):
<list list-type="simple">
<list-item>
<label>-</label>
<p>BSIF descriptor with filter size
<inline-formula>
<mml:math id="mm80">
<mml:mrow>
<mml:mrow>
<mml:mi>l</mml:mi>
<mml:mo>×</mml:mo>
<mml:mi>l</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>17</mml:mn>
<mml:mo>×</mml:mo>
<mml:mn>17</mml:mn>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
and bit string length
<inline-formula>
<mml:math id="mm81">
<mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>12</mml:mn>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
.</p>
</list-item>
<list-item>
<label>-</label>
<p>K-NN classifier associated with the city-block (L1) distance.</p>
</list-item>
<list-item>
<label>-</label>
<p>Segmentation of the image into blocks of 40 × 40 and 20 × 20 pixels.</p>
</list-item>
<list-item>
<label>-</label>
<p>RGB color-space.</p>
</list-item>
</list>
</p>
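<p>For illustration, a minimal Python sketch of this retained configuration is given below. It is not the authors’ implementation: genuine BSIF uses the 17 × 17 ICA-learned filters of Kannala and Rahtu [
<xref rid="B19-sensors-21-00728" ref-type="bibr">19</xref>
], for which random filters stand in here, and knn_identify() is a hypothetical helper name.</p>
<preformat>
# Minimal sketch of the retained MB-C-BSIF configuration (illustrative
# only: genuine BSIF uses ICA-learned filters; random filters stand in
# here so the sketch stays self-contained, and knn_identify() is a
# hypothetical helper name).
import numpy as np
from scipy.ndimage import convolve
from scipy.spatial.distance import cityblock

FILTER_SIZE, N_BITS = 17, 12  # filter size l x l = 17 x 17, bit string n = 12
FILTERS = np.random.randn(N_BITS, FILTER_SIZE, FILTER_SIZE)

def bsif_hist(channel):
    """Binarize the n filter responses into an n-bit code per pixel,
    then histogram the codes over 2**n bins."""
    code = np.zeros(channel.shape, dtype=np.int32)
    for i, f in enumerate(FILTERS):
        code += (convolve(channel.astype(float), f) > 0).astype(np.int32) * (2 ** i)
    return np.bincount(code.ravel(), minlength=2 ** N_BITS)

def mb_c_bsif(img_rgb):
    """Concatenate BSIF histograms computed on each RGB channel and on
    multi-block grids of 40 x 40 and 20 x 20 pixels (80 x 80 input)."""
    feats = []
    for c in range(3):  # RGB color space
        chan = img_rgb[:, :, c]
        for block in (40, 20):  # multi-block decomposition
            for y in range(0, chan.shape[0], block):
                for x in range(0, chan.shape[1], block):
                    feats.append(bsif_hist(chan[y:y + block, x:x + block]))
    return np.concatenate(feats)

def knn_identify(probe_feat, gallery_feats):
    """1-NN over the gallery with the city-block (L1) distance."""
    return min(gallery_feats, key=lambda name: cityblock(probe_feat, gallery_feats[name]))
</preformat>
<p>With real BSIF filters, the concatenated histograms capture the local (pixel code), regional (block), global (whole image), and color (per-channel) information discussed below.</p>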
<p>We can observe that the traditional approaches did not achieve particularly good identification accuracies. This is primarily because the photographs in the LFW database were taken in unconstrained conditions, which produces facial images with rich intra-class differences and increases face recognition complexity. As a consequence, the efficiency of the SSFR procedure is reduced. Nevertheless, our proposed solution outperforms the other competing traditional approaches. Its superiority can be explained by its exploitation of different forms of information, namely local, regional, global, and color texture information. SVDL [
<xref rid="B32-sensors-21-00728" ref-type="bibr">32</xref>
] and LGR [
<xref rid="B33-sensors-21-00728" ref-type="bibr">33</xref>
] also achieved success in SSFR because the intra-class variance information obtained from other subjects in the standardized training set (i.e., augmenting the training data) helped boost the performance of the system. Additionally, KNNMMDL [
<xref rid="B30-sensors-21-00728" ref-type="bibr">30</xref>
] achieved good performance because it uses the Weber-face algorithm in the preprocessing step, which handles the illumination variation issue and employs data augmentation to enrich the intra-class variation in the training set.</p>
<p>In another experiment, we implemented and tested the successful DeepFace algorithm [
<xref rid="B12-sensors-21-00728" ref-type="bibr">12</xref>
], whose weights were trained on millions of images from the ImageNet database, images that are close to real-life situations. As presented in
<xref rid="sensors-21-00728-t015" ref-type="table">Table 15</xref>
, the DeepFace algorithm shows a clear superiority over the compared methods. This success is due to the deep, task-specific training of its weights and to the large number of images employed in its operation.</p>
<p>In a recent work by Zeng et al. [
<xref rid="B72-sensors-21-00728" ref-type="bibr">72</xref>
], the authors combined traditional (handcrafted) and deep learning (TDL) features to overcome the limitations of each class. They reached an identification accuracy of nearly 74%, a substantial step forward on this challenging topic.</p>
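<p>A generic sketch of such a hybrid combination is shown below. It is illustrative only and is not the TDL method of Zeng et al. [
<xref rid="B72-sensors-21-00728" ref-type="bibr">72</xref>
]: the ResNet-18 backbone, the L2 normalization, and the plain concatenation are assumptions.</p>
<preformat>
# Generic sketch of a hybrid handcrafted + deep feature (illustrative;
# NOT the TDL method of [72]: the ResNet-18 backbone, the L2
# normalization, and the plain concatenation are assumptions).
import numpy as np
import torch
from torchvision import models, transforms

cnn = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
cnn.fc = torch.nn.Identity()  # expose the 512-D embedding
cnn.eval()

to_input = transforms.Compose([
    transforms.ToTensor(),          # HWC uint8 to CHW float in [0, 1]
    transforms.Resize((224, 224)),
])

def hybrid_feature(img_rgb):
    """Concatenate an L2-normalized handcrafted descriptor with an
    L2-normalized CNN embedding."""
    handcrafted = mb_c_bsif(img_rgb).astype(float)  # from the sketch above
    handcrafted /= np.linalg.norm(handcrafted) + 1e-12
    with torch.no_grad():
        deep = cnn(to_input(img_rgb).unsqueeze(0)).squeeze(0).numpy()
    deep /= np.linalg.norm(deep) + 1e-12
    return np.concatenate([handcrafted, deep])
</preformat>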
<p>In the comparative study presented in [
<xref rid="B73-sensors-21-00728" ref-type="bibr">73</xref>
], we can see that current face recognition systems employing several examples per subject in the training set achieve very high accuracy on the LFW database, especially the deep-learning-based methods. However, SSFR systems suffer considerably on the challenging LFW database, and further research is required to improve their reliability.</p>
<p>When the learning stage is based on millions of images, the proposed SSFR technique is not appropriate. In such a situation, References [
<xref rid="B12-sensors-21-00728" ref-type="bibr">12</xref>
,
<xref rid="B72-sensors-21-00728" ref-type="bibr">72</xref>
], which use deep learning techniques with data augmentation [
<xref rid="B12-sensors-21-00728" ref-type="bibr">12</xref>
] or deep learning features combined with handcrafted features [
<xref rid="B72-sensors-21-00728" ref-type="bibr">72</xref>
], allow one to obtain better accuracy.</p>
<p>Finally, the proposed SSFR method is reserved for the case where only one sample per person is available, which is the most common case in the real world, as in images from remote surveillance or unmanned aerial vehicles. In these applications, faces are most often captured under harsh conditions, such as changes in lighting or posture, or when the person is wearing accessories such as glasses, masks, or disguises. In these cases, the method proposed here is by far the most accurate. In future work, it would be interesting to explore some proven approaches that have shown good performance in solving real-world problems and to evaluate them using the same protocol and database, such as multi-scale principal component analysis (MSPCA) [
<xref rid="B74-sensors-21-00728" ref-type="bibr">74</xref>
], signal decomposition methods [
<xref rid="B75-sensors-21-00728" ref-type="bibr">75</xref>
,
<xref rid="B76-sensors-21-00728" ref-type="bibr">76</xref>
], generative adversarial neural networks (GAN) [
<xref rid="B77-sensors-21-00728" ref-type="bibr">77</xref>
], and centroid-displacement-based-K-NN [
<xref rid="B78-sensors-21-00728" ref-type="bibr">78</xref>
].</p>
</sec>
</sec>
</sec>
<sec id="sec5-sensors-21-00728">
<title>5. Conclusions and Perspectives</title>
<p>In this paper, we have presented an original method for Single-Sample Face Recognition (SSFR) based on the Multi-Block Color-Binarized Statistical Image Features (MB-C-BSIF) descriptor. It extracts features that are then classified with the K-nearest neighbors (K-NN) method. The proposed method exploits several kinds of information, including local, regional, global, and color texture information. In our experiments, the MB-C-BSIF was evaluated on several subsets of images from the AR and LFW databases. Experiments conducted on the AR database showed that our method significantly improves SSFR classification when dealing with the variations encountered in face recognition (expression, illumination, and occlusion), achieving average accuracies of 96.17% and 99% under Protocols I and II, respectively. These significant results validate the effectiveness of the proposed method compared with state-of-the-art methods. The potential applications of the method are oriented towards computer-aided technology for real-time identification.</p>
<p>In the future, we aim to explore the effectiveness of combining deep learning and traditional methods to address the SSFR issue. Hybrid features combine handcrafted features with deep characteristics to collect richer information than that obtained by a single feature extraction method, thus improving recognition. In addition, we plan to develop a deep learning method based on semantic information, such as age, gender, and ethnicity, to solve the SSFR problem, an area that deserves further study. We also aim to investigate and analyze the SSFR issue in unconstrained environments using large-scale databases holding millions of facial images.</p>
</sec>
</body>
<back>
<fn-group>
<fn>
<p>
<bold>Publisher’s Note:</bold>
MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.</p>
</fn>
</fn-group>
<notes>
<title>Author Contributions</title>
<p>Investigation, software, writing original draft, I.A.; project administration, supervision, validation, writing, review and editing, A.O.; methodology, validation, writing, review and editing, A.B.; validation, writing, review and editing, S.J. All authors have read and agreed to the published version of the manuscript.</p>
</notes>
<notes>
<title>Funding</title>
<p>This research received no external funding.</p>
</notes>
<notes>
<title>Institutional Review Board Statement</title>
<p>Not applicable.</p>
</notes>
<notes>
<title>Informed Consent Statement</title>
<p>Not applicable.</p>
</notes>
<notes notes-type="data-availability">
<title>Data Availability Statement</title>
<p>Data sharing not applicable.</p>
</notes>
<notes notes-type="COI-statement">
<title>Conflicts of Interest</title>
<p>The authors declare no conflict of interest.</p>
</notes>
<ref-list>
<title>References</title>
<ref id="B1-sensors-21-00728">
<label>1.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Alay</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Al-Baity</surname>
<given-names>H.H.</given-names>
</name>
</person-group>
<article-title>Deep Learning Approach for Multimodal Biometric Recognition System Based on Fusion of Iris, Face, and Finger Vein Traits</article-title>
<source>Sensors</source>
<year>2020</year>
<volume>20</volume>
<elocation-id>5523</elocation-id>
<pub-id pub-id-type="doi">10.3390/s20195523</pub-id>
<pub-id pub-id-type="pmid">32992524</pub-id>
</element-citation>
</ref>
<ref id="B2-sensors-21-00728">
<label>2.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Pagnin</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Mitrokotsa</surname>
<given-names>A.</given-names>
</name>
</person-group>
<article-title>Privacy-Preserving Biometric Authentication: Challenges and Directions</article-title>
<source>Secur. Commun. Netw.</source>
<year>2017</year>
<volume>2017</volume>
<fpage>1</fpage>
<lpage>9</lpage>
<pub-id pub-id-type="doi">10.1155/2017/7129505</pub-id>
</element-citation>
</ref>
<ref id="B3-sensors-21-00728">
<label>3.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mahfouz</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Mahmoud</surname>
<given-names>T.M.</given-names>
</name>
<name>
<surname>Sharaf Eldin</surname>
<given-names>A.</given-names>
</name>
</person-group>
<article-title>A Survey on Behavioral Biometric Authentication on Smartphones</article-title>
<source>J. Inf. Secur. Appl.</source>
<year>2017</year>
<volume>37</volume>
<fpage>28</fpage>
<lpage>37</lpage>
<pub-id pub-id-type="doi">10.1016/j.jisa.2017.10.002</pub-id>
</element-citation>
</ref>
<ref id="B4-sensors-21-00728">
<label>4.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ferrara</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Cappelli</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Maltoni</surname>
<given-names>D.</given-names>
</name>
</person-group>
<article-title>On the Feasibility of Creating Double-Identity Fingerprints</article-title>
<source>IEEE Trans. Inf. Forensics Secur.</source>
<year>2017</year>
<volume>12</volume>
<fpage>892</fpage>
<lpage>900</lpage>
<pub-id pub-id-type="doi">10.1109/TIFS.2016.2639345</pub-id>
</element-citation>
</ref>
<ref id="B5-sensors-21-00728">
<label>5.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Thompson</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Flynn</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Boehnen</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Santos-Villalobos</surname>
<given-names>H.</given-names>
</name>
</person-group>
<article-title>Assessing the Impact of Corneal Refraction and Iris Tissue Non-Planarity on Iris Recognition</article-title>
<source>IEEE Trans. Inf. Forensics Secur.</source>
<year>2019</year>
<volume>14</volume>
<fpage>2102</fpage>
<lpage>2112</lpage>
<pub-id pub-id-type="doi">10.1109/TIFS.2018.2869342</pub-id>
</element-citation>
</ref>
<ref id="B6-sensors-21-00728">
<label>6.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Benzaoui</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Bourouba</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Boukrouche</surname>
<given-names>A.</given-names>
</name>
</person-group>
<article-title>System for Automatic Faces Detection</article-title>
<source>Proceedings of the 3rd International Conference on Image Processing, Theory, Tools, and Applications (IPTA)</source>
<conf-loc>Istanbul, Turkey</conf-loc>
<conf-date>15–18 October 2012</conf-date>
<fpage>354</fpage>
<lpage>358</lpage>
</element-citation>
</ref>
<ref id="B7-sensors-21-00728">
<label>7.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Phillips</surname>
<given-names>P.J.</given-names>
</name>
<name>
<surname>Flynn</surname>
<given-names>P.J.</given-names>
</name>
<name>
<surname>Scruggs</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Bowyer</surname>
<given-names>K.W.</given-names>
</name>
<name>
<surname>Chang</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Hoffman</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Marques</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Min</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Worek</surname>
<given-names>W.</given-names>
</name>
</person-group>
<article-title>Overview of the Face Recognition Grand Challenge</article-title>
<source>Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR)</source>
<conf-loc>San Diego, CA, USA</conf-loc>
<conf-date>20–26 June 2005</conf-date>
<fpage>947</fpage>
<lpage>954</lpage>
</element-citation>
</ref>
<ref id="B8-sensors-21-00728">
<label>8.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Femmam</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>M’Sirdi</surname>
<given-names>N.K.</given-names>
</name>
<name>
<surname>Ouahabi</surname>
<given-names>A.</given-names>
</name>
</person-group>
<article-title>Perception and Characterization of Materials Using Signal Processing Techniques</article-title>
<source>IEEE Trans. Instrum. Meas.</source>
<year>2001</year>
<volume>50</volume>
<fpage>1203</fpage>
<lpage>1211</lpage>
<pub-id pub-id-type="doi">10.1109/19.963184</pub-id>
</element-citation>
</ref>
<ref id="B9-sensors-21-00728">
<label>9.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ring</surname>
<given-names>T.</given-names>
</name>
</person-group>
<article-title>Humans vs Machines: The Future of Facial Recognition</article-title>
<source>Biom. Technol. Today</source>
<year>2016</year>
<volume>4</volume>
<fpage>5</fpage>
<lpage>8</lpage>
<pub-id pub-id-type="doi">10.1016/S0969-4765(16)30067-4</pub-id>
</element-citation>
</ref>
<ref id="B10-sensors-21-00728">
<label>10.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Phillips</surname>
<given-names>P.J.</given-names>
</name>
<name>
<surname>Yates</surname>
<given-names>A.N.</given-names>
</name>
<name>
<surname>Hu</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Hahn</surname>
<given-names>A.C.</given-names>
</name>
<name>
<surname>Noyes</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Jackson</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Cavazos</surname>
<given-names>J.G.</given-names>
</name>
<name>
<surname>Jeckeln</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Ranjan</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Sankaranarayanan</surname>
<given-names>S.</given-names>
</name>
<etal></etal>
</person-group>
<article-title>Face Recognition Accuracy of Forensic Examiners, Superrecognizers, and Face Recognition Algorithms</article-title>
<source>Proc. Natl. Acad. Sci. USA</source>
<year>2018</year>
<volume>115</volume>
<fpage>6171</fpage>
<lpage>6176</lpage>
<pub-id pub-id-type="doi">10.1073/pnas.1721355115</pub-id>
<pub-id pub-id-type="pmid">29844174</pub-id>
</element-citation>
</ref>
<ref id="B11-sensors-21-00728">
<label>11.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kortli</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Jridi</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Al Falou</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Atri</surname>
<given-names>M.</given-names>
</name>
</person-group>
<article-title>Face Recognition Systems: A Survey</article-title>
<source>Sensors</source>
<year>2020</year>
<volume>20</volume>
<elocation-id>342</elocation-id>
<pub-id pub-id-type="doi">10.3390/s20020342</pub-id>
</element-citation>
</ref>
<ref id="B12-sensors-21-00728">
<label>12.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ouahabi</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Taleb-Ahmed</surname>
<given-names>A.</given-names>
</name>
</person-group>
<article-title>Deep learning for real-time semantic segmentation: Application in ultrasound imaging</article-title>
<source>Pattern Recognit. Lett.</source>
<year>2021</year>
<volume>144</volume>
<fpage>27</fpage>
<lpage>34</lpage>
</element-citation>
</ref>
<ref id="B13-sensors-21-00728">
<label>13.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rahman</surname>
<given-names>J.U.</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>Q.</given-names>
</name>
<name>
<surname>Yang</surname>
<given-names>Z.</given-names>
</name>
</person-group>
<article-title>Additive Parameter for Deep Face Recognition</article-title>
<source>Commun. Math. Stat.</source>
<year>2019</year>
<volume>8</volume>
<fpage>203</fpage>
<lpage>217</lpage>
<pub-id pub-id-type="doi">10.1007/s40304-019-00198-z</pub-id>
</element-citation>
</ref>
<ref id="B14-sensors-21-00728">
<label>14.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fan</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Jamil</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Sadiq</surname>
<given-names>M.T.</given-names>
</name>
<name>
<surname>Huang</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Yu</surname>
<given-names>X.</given-names>
</name>
</person-group>
<article-title>Exploiting Multiple Optimizers with Transfer Learning Techniques for the Identification of COVID-19 Patients</article-title>
<source>J. Healthc. Eng.</source>
<year>2020</year>
<volume>2020</volume>
<fpage>1</fpage>
<lpage>13</lpage>
</element-citation>
</ref>
<ref id="B15-sensors-21-00728">
<label>15.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Benzaoui</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Boukrouche</surname>
<given-names>A.</given-names>
</name>
</person-group>
<article-title>Ear Recognition Using Local Color Texture Descriptors from One Sample Image Per Person</article-title>
<source>Proceedings of the 4th International Conference on Control, Decision and Information Technologies (CoDIT)</source>
<conf-loc>Barcelona, Spain</conf-loc>
<conf-date>5–7 April 2017</conf-date>
<fpage>827</fpage>
<lpage>832</lpage>
</element-citation>
</ref>
<ref id="B16-sensors-21-00728">
<label>16.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Vapnik</surname>
<given-names>V.N.</given-names>
</name>
<name>
<surname>Chervonenkis</surname>
<given-names>A.</given-names>
</name>
</person-group>
<article-title>Learning Theory and Its Applications</article-title>
<source>IEEE Trans. Neural Netw.</source>
<year>1999</year>
<volume>10</volume>
<fpage>985</fpage>
<lpage>987</lpage>
</element-citation>
</ref>
<ref id="B17-sensors-21-00728">
<label>17.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Vezzetti</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Marcolin</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Tornincasa</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Ulrich</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Dagnes</surname>
<given-names>N.</given-names>
</name>
</person-group>
<article-title>3D Geometry-Based Automatic Landmark Localization in Presence of Facial Occlusions</article-title>
<source>Multimed. Tools Appl.</source>
<year>2017</year>
<volume>77</volume>
<fpage>14177</fpage>
<lpage>14205</lpage>
<pub-id pub-id-type="doi">10.1007/s11042-017-5025-y</pub-id>
</element-citation>
</ref>
<ref id="B18-sensors-21-00728">
<label>18.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Echeagaray-Patron</surname>
<given-names>B.A.</given-names>
</name>
<name>
<surname>Miramontes-Jaramillo</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Kober</surname>
<given-names>V.</given-names>
</name>
</person-group>
<article-title>Conformal Parameterization and Curvature Analysis for 3D Facial Recognition</article-title>
<source>Proceedings of the 2015 International Conference on Computational Science and Computational Intelligence (CSCI)</source>
<conf-loc>Las Vegas, NV, USA</conf-loc>
<conf-date>7–9 December 2015</conf-date>
<fpage>843</fpage>
<lpage>844</lpage>
</element-citation>
</ref>
<ref id="B19-sensors-21-00728">
<label>19.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Kannala</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Rahtu</surname>
<given-names>E.</given-names>
</name>
</person-group>
<article-title>BSIF: Binarized Statistical Image Features</article-title>
<source>Proceedings of the 21th International Conference on Pattern Recognition (ICPR)</source>
<conf-loc>Tsukuba, Japan</conf-loc>
<conf-date>11–15 November 2012</conf-date>
<fpage>1363</fpage>
<lpage>1366</lpage>
</element-citation>
</ref>
<ref id="B20-sensors-21-00728">
<label>20.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Djeddi</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Ouahabi</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Batatia</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Basarab</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Kouamé</surname>
<given-names>D.</given-names>
</name>
</person-group>
<article-title>Discrete Wavelet for Multifractal Texture Classification: Application to Medical Ultrasound Imaging</article-title>
<source>Proceedings of the 2010 IEEE International Conference on Image Processing</source>
<conf-loc>Hong Kong, China</conf-loc>
<conf-date>26–29 September 2010</conf-date>
<fpage>637</fpage>
<lpage>640</lpage>
</element-citation>
</ref>
<ref id="B21-sensors-21-00728">
<label>21.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Ouahabi</surname>
<given-names>A.</given-names>
</name>
</person-group>
<article-title>Multifractal Analysis for Texture Characterization: A New Approach Based on DWT</article-title>
<source>Proceedings of the 10th International Conference on Information Science, Signal Processing and their Applications (ISSPA 2010)</source>
<conf-loc>Kuala Lumpur, Malaysia</conf-loc>
<conf-date>10–13 May 2010</conf-date>
<fpage>698</fpage>
<lpage>703</lpage>
</element-citation>
</ref>
<ref id="B22-sensors-21-00728">
<label>22.</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Ouahabi</surname>
<given-names>A.</given-names>
</name>
</person-group>
<source>Signal and Image Multiresolution Analysis</source>
<edition>1st ed.</edition>
<publisher-name>ISTE-Wiley</publisher-name>
<publisher-loc>London, UK</publisher-loc>
<year>2012</year>
</element-citation>
</ref>
<ref id="B23-sensors-21-00728">
<label>23.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Ouahabi</surname>
<given-names>A.</given-names>
</name>
</person-group>
<article-title>A Review of Wavelet Denoising in Medical Imaging</article-title>
<source>Proceedings of the 8th International Workshop on Systems, Signal Processing and their Applications (WoSSPA)</source>
<conf-loc>Tipaza, Algeria</conf-loc>
<conf-date>12–15 May 2013</conf-date>
<fpage>19</fpage>
<lpage>26</lpage>
</element-citation>
</ref>
<ref id="B24-sensors-21-00728">
<label>24.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sidahmed</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Messali</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Ouahabi</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Trépout</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Messaoudi</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Marco</surname>
<given-names>S.</given-names>
</name>
</person-group>
<article-title>Nonparametric Denoising Methods Based on Contourlet Transform with Sharp Frequency Localization: Application to Low Exposure Time Electron Microscopy Images</article-title>
<source>Entropy</source>
<year>2015</year>
<volume>17</volume>
<fpage>3461</fpage>
<lpage>3478</lpage>
</element-citation>
</ref>
<ref id="B25-sensors-21-00728">
<label>25.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kumar</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Garg</surname>
<given-names>V.</given-names>
</name>
</person-group>
<article-title>Single Sample Face Recognition in the Last Decade: A Survey</article-title>
<source>Int. J. Pattern Recognit. Artif. Intell.</source>
<year>2019</year>
<volume>33</volume>
<fpage>1956009</fpage>
<pub-id pub-id-type="doi">10.1142/S0218001419560093</pub-id>
</element-citation>
</ref>
<ref id="B26-sensors-21-00728">
<label>26.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Vetter</surname>
<given-names>T.</given-names>
</name>
</person-group>
<article-title>Synthesis of Novel Views from a Single Face Image</article-title>
<source>Int. J. Comput. Vis.</source>
<year>1998</year>
<volume>28</volume>
<fpage>103</fpage>
<lpage>116</lpage>
<pub-id pub-id-type="doi">10.1023/A:1008058932445</pub-id>
</element-citation>
</ref>
<ref id="B27-sensors-21-00728">
<label>27.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zhang</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Zhou</surname>
<given-names>Z.H.</given-names>
</name>
</person-group>
<article-title>A New Face Recognition Method Based on SVD Perturbation for Single Example Image per Person</article-title>
<source>Appl. Math. Comput.</source>
<year>2005</year>
<volume>163</volume>
<fpage>895</fpage>
<lpage>907</lpage>
<pub-id pub-id-type="doi">10.1016/j.amc.2004.04.016</pub-id>
</element-citation>
</ref>
<ref id="B28-sensors-21-00728">
<label>28.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gao</surname>
<given-names>Q.X.</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>D.</given-names>
</name>
</person-group>
<article-title>Face Recognition Using FLDA with Single Training Image per Person</article-title>
<source>Appl. Math. Comput.</source>
<year>2008</year>
<volume>205</volume>
<fpage>726</fpage>
<lpage>734</lpage>
<pub-id pub-id-type="doi">10.1016/j.amc.2008.05.019</pub-id>
</element-citation>
</ref>
<ref id="B29-sensors-21-00728">
<label>29.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hu</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Ye</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Ji</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Zeng</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Lu</surname>
<given-names>X.</given-names>
</name>
</person-group>
<article-title>A New Face Recognition Method Based on Image Decomposition for Single Sample per Person Problem</article-title>
<source>Neurocomputing</source>
<year>2015</year>
<volume>160</volume>
<fpage>287</fpage>
<lpage>299</lpage>
<pub-id pub-id-type="doi">10.1016/j.neucom.2015.02.032</pub-id>
</element-citation>
</ref>
<ref id="B30-sensors-21-00728">
<label>30.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Dong</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Wu</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Jing</surname>
<given-names>X.Y.</given-names>
</name>
</person-group>
<article-title>Generic Training Set Based Multimanifold Discriminant Learning for Single Sample Face Recognition</article-title>
<source>KSII Trans. Internet Inf. Syst.</source>
<year>2018</year>
<volume>12</volume>
<fpage>368</fpage>
<lpage>391</lpage>
</element-citation>
</ref>
<ref id="B31-sensors-21-00728">
<label>31.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Deng</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Hu</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Guo</surname>
<given-names>J.</given-names>
</name>
</person-group>
<article-title>Extended SRC: Undersampled Face Recognition via Intraclass Variant Dictionary</article-title>
<source>IEEE Trans. Pattern Anal. Mach. Intell.</source>
<year>2012</year>
<volume>34</volume>
<fpage>1864</fpage>
<lpage>1870</lpage>
<pub-id pub-id-type="doi">10.1109/TPAMI.2012.30</pub-id>
<pub-id pub-id-type="pmid">22813959</pub-id>
</element-citation>
</ref>
<ref id="B32-sensors-21-00728">
<label>32.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Yang</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Van Gool</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>L.</given-names>
</name>
</person-group>
<article-title>Sparse Variation Dictionary Learning for Face Recognition with a Single Training Sample per Person</article-title>
<source>Proceedings of the IEEE International Conference on Computer Vision (ICCV)</source>
<conf-loc>Sydney, Australia</conf-loc>
<conf-date>1–8 December 2013</conf-date>
<fpage>689</fpage>
<lpage>696</lpage>
</element-citation>
</ref>
<ref id="B33-sensors-21-00728">
<label>33.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Zhu</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Yang</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Lee</surname>
<given-names>L.</given-names>
</name>
</person-group>
<article-title>Local Generic Representation for Face Recognition with Single Sample per Person</article-title>
<source>Proceedings of the Asian Conference on Computer Vision (ACCV)</source>
<conf-loc>Singapore</conf-loc>
<conf-date>1–5 November 2014</conf-date>
<fpage>34</fpage>
<lpage>50</lpage>
</element-citation>
</ref>
<ref id="B34-sensors-21-00728">
<label>34.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Zhu</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Hu</surname>
<given-names>Q.</given-names>
</name>
<name>
<surname>Shiu</surname>
<given-names>S.C.K.</given-names>
</name>
</person-group>
<article-title>Multi-Scale Patch Based Collaborative Representation for Face Recognition with Margin Distribution Optimization</article-title>
<source>Proceedings of the European Conference on Computer Vision (ECCV)</source>
<conf-loc>Florence, Italy</conf-loc>
<conf-date>7–13 October 2012</conf-date>
<fpage>822</fpage>
<lpage>835</lpage>
</element-citation>
</ref>
<ref id="B35-sensors-21-00728">
<label>35.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Zhang</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Yang</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Feng</surname>
<given-names>X.</given-names>
</name>
</person-group>
<article-title>Sparse Representation or Collaborative Representation: Which Helps Face Recognition?</article-title>
<source>Proceedings of the International Conference on Computer Vision (ICCV)</source>
<conf-loc>Barcelona, Spain</conf-loc>
<conf-date>6–13 November 2011</conf-date>
<fpage>471</fpage>
<lpage>478</lpage>
</element-citation>
</ref>
<ref id="B36-sensors-21-00728">
<label>36.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lu</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Tan</surname>
<given-names>Y.P.</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>G.</given-names>
</name>
</person-group>
<article-title>Discriminative Multimanifold Analysis for Face Recognition from a Single Training Sample per Person</article-title>
<source>IEEE Trans. Pattern Anal. Mach. Intell.</source>
<year>2012</year>
<volume>35</volume>
<fpage>39</fpage>
<lpage>51</lpage>
<pub-id pub-id-type="doi">10.1109/TPAMI.2012.70</pub-id>
<pub-id pub-id-type="pmid">22431525</pub-id>
</element-citation>
</ref>
<ref id="B37-sensors-21-00728">
<label>37.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zhang</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Xu</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Lu</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Liao</surname>
<given-names>Q.</given-names>
</name>
</person-group>
<article-title>Binarized Features with Discriminant Manifold Filters for Robust Single-Sample Face Recognition</article-title>
<source>Signal Process. Image Commun.</source>
<year>2018</year>
<volume>65</volume>
<fpage>1</fpage>
<lpage>10</lpage>
<pub-id pub-id-type="doi">10.1016/j.image.2018.03.003</pub-id>
</element-citation>
</ref>
<ref id="B38-sensors-21-00728">
<label>38.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gu</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Hu</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>H.</given-names>
</name>
</person-group>
<article-title>Local Robust Sparse Representation for Face Recognition with Single Sample per Person</article-title>
<source>IEEE/CAA J. Autom. Sin.</source>
<year>2018</year>
<volume>5</volume>
<fpage>547</fpage>
<lpage>554</lpage>
<pub-id pub-id-type="doi">10.1109/JAS.2017.7510658</pub-id>
</element-citation>
</ref>
<ref id="B39-sensors-21-00728">
<label>39.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zhang</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>M.</given-names>
</name>
</person-group>
<article-title>Dissimilarity-Based Nearest Neighbor Classifier for Single-Sample Face Recognition</article-title>
<source>Vis. Comput.</source>
<year>2020</year>
<fpage>1</fpage>
<lpage>12</lpage>
<pub-id pub-id-type="doi">10.1007/s00371-020-01827-3</pub-id>
</element-citation>
</ref>
<ref id="B40-sensors-21-00728">
<label>40.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mimouna</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Alouani</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Ben Khalifa</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>El Hillali</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Taleb-Ahmed</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Menhaj</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Ouahabi</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Ben Amara</surname>
<given-names>N.E.</given-names>
</name>
</person-group>
<article-title>OLIMP: A Heterogeneous Multimodal Dataset for Advanced Environment Perception</article-title>
<source>Electronics</source>
<year>2020</year>
<volume>9</volume>
<elocation-id>560</elocation-id>
<pub-id pub-id-type="doi">10.3390/electronics9040560</pub-id>
</element-citation>
</ref>
<ref id="B41-sensors-21-00728">
<label>41.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Zeng</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Zhao</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Qin</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Lin</surname>
<given-names>Z.</given-names>
</name>
</person-group>
<article-title>Single Sample per Person Face Recognition Based on Deep Convolutional Neural Network</article-title>
<source>Proceedings of the 3rd IEEE International Conference on Computer and Communications (ICCC)</source>
<conf-loc>Chengdu, China</conf-loc>
<conf-date>13–16 December 2017</conf-date>
<fpage>1647</fpage>
<lpage>1651</lpage>
</element-citation>
</ref>
<ref id="B42-sensors-21-00728">
<label>42.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ding</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Bao</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Karmoshi</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Zhu</surname>
<given-names>M.</given-names>
</name>
</person-group>
<article-title>Single Sample per Person Face Recognition with KPCANet and a Weighted Voting Scheme</article-title>
<source>Signal Image Video Process.</source>
<year>2017</year>
<volume>11</volume>
<fpage>1213</fpage>
<lpage>1220</lpage>
<pub-id pub-id-type="doi">10.1007/s11760-017-1077-8</pub-id>
</element-citation>
</ref>
<ref id="B43-sensors-21-00728">
<label>43.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zhang</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Peng</surname>
<given-names>H.</given-names>
</name>
</person-group>
<article-title>Sample Reconstruction with Deep Autoencoder for One Sample per Person Face Recognition</article-title>
<source>IET Comput. Vis.</source>
<year>2018</year>
<volume>11</volume>
<fpage>471</fpage>
<lpage>478</lpage>
<pub-id pub-id-type="doi">10.1049/iet-cvi.2016.0322</pub-id>
</element-citation>
</ref>
<ref id="B44-sensors-21-00728">
<label>44.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Du</surname>
<given-names>Q.</given-names>
</name>
<name>
<surname>Da</surname>
<given-names>F.</given-names>
</name>
</person-group>
<article-title>Block Dictionary Learning-Driven Convolutional Neural Networks for Few-Shot Face Recognition</article-title>
<source>Vis. Comput.</source>
<year>2020</year>
<fpage>1</fpage>
<lpage>10</lpage>
</element-citation>
</ref>
<ref id="B45-sensors-21-00728">
<label>45.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Stone</surname>
<given-names>J.V.</given-names>
</name>
</person-group>
<article-title>Independent Component Analysis: An Introduction</article-title>
<source>Trends Cogn. Sci.</source>
<year>2002</year>
<volume>6</volume>
<fpage>59</fpage>
<lpage>64</lpage>
<pub-id pub-id-type="doi">10.1016/S1364-6613(00)01813-1</pub-id>
<pub-id pub-id-type="pmid">15866182</pub-id>
</element-citation>
</ref>
<ref id="B46-sensors-21-00728">
<label>46.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ataman</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Aatre</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Wong</surname>
<given-names>K.</given-names>
</name>
</person-group>
<article-title>A Fast Method for Real-Time Median Filtering</article-title>
<source>IEEE Trans. Acoust. Speech Signal Process.</source>
<year>1980</year>
<volume>28</volume>
<fpage>415</fpage>
<lpage>421</lpage>
<pub-id pub-id-type="doi">10.1109/TASSP.1980.1163426</pub-id>
</element-citation>
</ref>
<ref id="B47-sensors-21-00728">
<label>47.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Benzaoui</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Hadid</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Boukrouche</surname>
<given-names>A.</given-names>
</name>
</person-group>
<article-title>Ear Biometric Recognition Using Local Texture Descriptors</article-title>
<source>J. Electron. Imaging</source>
<year>2014</year>
<volume>23</volume>
<fpage>053008</fpage>
<pub-id pub-id-type="doi">10.1117/1.JEI.23.5.053008</pub-id>
</element-citation>
</ref>
<ref id="B48-sensors-21-00728">
<label>48.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zehani</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Ouahabi</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Oussalah</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Mimi</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Taleb-Ahmed</surname>
<given-names>A.</given-names>
</name>
</person-group>
<article-title>Bone Microarchitecture Characterization Based on Fractal Analysis in Spatial Frequency Domain Imaging</article-title>
<source>Int. J. Imaging Syst. Technol.</source>
<year>2020</year>
<fpage>1</fpage>
<lpage>19</lpage>
</element-citation>
</ref>
<ref id="B49-sensors-21-00728">
<label>49.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ojala</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Pietikainen</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Maenpaa</surname>
<given-names>T.</given-names>
</name>
</person-group>
<article-title>Multiresolution Gray-Scale and Rotation Invariant Texture Classification with Local Binary Patterns</article-title>
<source>IEEE Trans. Pattern Anal. Mach. Intell.</source>
<year>2002</year>
<volume>24</volume>
<fpage>971</fpage>
<lpage>987</lpage>
<pub-id pub-id-type="doi">10.1109/TPAMI.2002.1017623</pub-id>
</element-citation>
</ref>
<ref id="B50-sensors-21-00728">
<label>50.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Ojansivu</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Heikkilä</surname>
<given-names>J.</given-names>
</name>
</person-group>
<article-title>Blur Insensitive Texture Classification Using Local Phase Quantization</article-title>
<source>Proceedings of the 3rd International Conference on Image and Signal Processing (ICISP)</source>
<conf-loc>Cergy-Pontoise, France</conf-loc>
<conf-date>1–3 July 2008</conf-date>
<fpage>236</fpage>
<lpage>243</lpage>
</element-citation>
</ref>
<ref id="B51-sensors-21-00728">
<label>51.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Martinez</surname>
<given-names>A.M.</given-names>
</name>
<name>
<surname>Benavente</surname>
<given-names>R.</given-names>
</name>
</person-group>
<article-title>The AR Face Database</article-title>
<source>CVC Tech. Rep.</source>
<year>1998</year>
<volume>24</volume>
<fpage>1</fpage>
<lpage>10</lpage>
</element-citation>
</ref>
<ref id="B52-sensors-21-00728">
<label>52.</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Huang</surname>
<given-names>G.B.</given-names>
</name>
<name>
<surname>Mattar</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Berg</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Learned-Miller</surname>
<given-names>E.</given-names>
</name>
</person-group>
<source>Labeled Faces in the Wild: A Database for Studying Face Recognition in Unconstrained Environments</source>
<comment>Technical Report 07-49</comment>
<publisher-name>University of Massachusetts</publisher-name>
<publisher-loc>Amherst, MA, USA</publisher-loc>
<year>2007</year>
</element-citation>
</ref>
<ref id="B53-sensors-21-00728">
<label>53.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mehrasa</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Ali</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Homayun</surname>
<given-names>M.</given-names>
</name>
</person-group>
<article-title>A Supervised Multimanifold Method with Locality Preserving for Face Recognition Using Single Sample per Person</article-title>
<source>J. Cent. South Univ.</source>
<year>2017</year>
<volume>24</volume>
<fpage>2853</fpage>
<lpage>2861</lpage>
<pub-id pub-id-type="doi">10.1007/s11771-017-3700-9</pub-id>
</element-citation>
</ref>
<ref id="B54-sensors-21-00728">
<label>54.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ji</surname>
<given-names>H.K.</given-names>
</name>
<name>
<surname>Sun</surname>
<given-names>Q.S.</given-names>
</name>
<name>
<surname>Ji</surname>
<given-names>Z.X.</given-names>
</name>
<name>
<surname>Yuan</surname>
<given-names>Y.H.</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>G.Q.</given-names>
</name>
</person-group>
<article-title>Collaborative Probabilistic Labels for Face Recognition from Single Sample per Person</article-title>
<source>Pattern Recognit.</source>
<year>2017</year>
<volume>62</volume>
<fpage>125</fpage>
<lpage>134</lpage>
<pub-id pub-id-type="doi">10.1016/j.patcog.2016.08.007</pub-id>
</element-citation>
</ref>
<ref id="B55-sensors-21-00728">
<label>55.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Turk</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Pentland</surname>
<given-names>A.</given-names>
</name>
</person-group>
<article-title>Eigenfaces for Recognition</article-title>
<source>J. Cogn. Neurosci.</source>
<year>1991</year>
<volume>3</volume>
<fpage>71</fpage>
<lpage>86</lpage>
<pub-id pub-id-type="doi">10.1162/jocn.1991.3.1.71</pub-id>
<pub-id pub-id-type="pmid">23964806</pub-id>
</element-citation>
</ref>
<ref id="B56-sensors-21-00728">
<label>56.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wu</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Zhou</surname>
<given-names>Z.H.</given-names>
</name>
</person-group>
<article-title>Face Recognition with One Training Image per Person</article-title>
<source>Pattern Recognit. Lett.</source>
<year>2002</year>
<volume>23</volume>
<fpage>1711</fpage>
<lpage>1719</lpage>
<pub-id pub-id-type="doi">10.1016/S0167-8655(02)00134-4</pub-id>
</element-citation>
</ref>
<ref id="B57-sensors-21-00728">
<label>57.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Chen</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Zhou</surname>
<given-names>Z.H.</given-names>
</name>
</person-group>
<article-title>Enhanced (PC)2A for Face Recognition with One Training Image per Person</article-title>
<source>Pattern Recognit. Lett.</source>
<year>2004</year>
<volume>25</volume>
<fpage>1173</fpage>
<lpage>1181</lpage>
<pub-id pub-id-type="doi">10.1016/j.patrec.2004.03.012</pub-id>
</element-citation>
</ref>
<ref id="B58-sensors-21-00728">
<label>58.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Yang</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Frangi</surname>
<given-names>A.F.</given-names>
</name>
<name>
<surname>Yang</surname>
<given-names>J.Y.</given-names>
</name>
</person-group>
<article-title>Two-Dimensional PCA: A New Approach to Appearance-Based Face Representation and Recognition</article-title>
<source>IEEE Trans. Pattern Anal. Mach. Intell.</source>
<year>2004</year>
<volume>26</volume>
<fpage>131</fpage>
<lpage>137</lpage>
<pub-id pub-id-type="doi">10.1109/TPAMI.2004.1261097</pub-id>
<pub-id pub-id-type="pmid">15382693</pub-id>
</element-citation>
</ref>
<ref id="B59-sensors-21-00728">
<label>59.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gottumukkal</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Asari</surname>
<given-names>V.K.</given-names>
</name>
</person-group>
<article-title>An Improved Face Recognition Technique Based on Modular PCA Approach</article-title>
<source>Pattern Recognit. Lett.</source>
<year>2004</year>
<volume>25</volume>
<fpage>429</fpage>
<lpage>436</lpage>
<pub-id pub-id-type="doi">10.1016/j.patrec.2003.11.005</pub-id>
</element-citation>
</ref>
<ref id="B60-sensors-21-00728">
<label>60.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Chen</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Zhou</surname>
<given-names>Z.H.</given-names>
</name>
</person-group>
<article-title>Making FLDA Applicable to Face Recognition with One Sample per Person</article-title>
<source>Pattern Recognit.</source>
<year>2004</year>
<volume>37</volume>
<fpage>1553</fpage>
<lpage>1555</lpage>
<pub-id pub-id-type="doi">10.1016/j.patcog.2003.12.010</pub-id>
</element-citation>
</ref>
<ref id="B61-sensors-21-00728">
<label>61.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zhang</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Zhou</surname>
<given-names>Z.H.</given-names>
</name>
</person-group>
<article-title>(2D)2PCA: Two-Directional Two-Dimensional PCA for Efficient Face Representation and Recognition</article-title>
<source>Neurocomputing</source>
<year>2005</year>
<volume>69</volume>
<fpage>224</fpage>
<lpage>231</lpage>
<pub-id pub-id-type="doi">10.1016/j.neucom.2005.06.004</pub-id>
</element-citation>
</ref>
<ref id="B62-sensors-21-00728">
<label>62.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tan</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Zhou</surname>
<given-names>Z.H.</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>F.</given-names>
</name>
</person-group>
<article-title>Recognizing Partially Occluded, Expression Variant Faces from Single Training Image per Person with SOM and Soft K-NN Ensemble</article-title>
<source>IEEE Trans. Neural Netw.</source>
<year>2005</year>
<volume>16</volume>
<fpage>875</fpage>
<lpage>886</lpage>
<pub-id pub-id-type="doi">10.1109/TNN.2005.849817</pub-id>
<pub-id pub-id-type="pmid">16121729</pub-id>
</element-citation>
</ref>
<ref id="B63-sensors-21-00728">
<label>63.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>He</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Yan</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Hu</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Niyogi</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>H.J.</given-names>
</name>
</person-group>
<article-title>Face Recognition Using Laplacianfaces</article-title>
<source>IEEE Trans. Pattern Anal. Mach. Intell.</source>
<year>2005</year>
<volume>27</volume>
<fpage>328</fpage>
<lpage>340</lpage>
<pub-id pub-id-type="pmid">15747789</pub-id>
</element-citation>
</ref>
<ref id="B64-sensors-21-00728">
<label>64.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Deng</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Hu</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Guo</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Cai</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Feng</surname>
<given-names>D.</given-names>
</name>
</person-group>
<article-title>Robust, Accurate and Efficient Face Recognition from a Single Training Image: A Uniform Pursuit Approach</article-title>
<source>Pattern Recognit.</source>
<year>2010</year>
<volume>43</volume>
<fpage>1748</fpage>
<lpage>1762</lpage>
<pub-id pub-id-type="doi">10.1016/j.patcog.2009.12.004</pub-id>
</element-citation>
</ref>
<ref id="B65-sensors-21-00728">
<label>65.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Chu</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Zhao</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Ahmad</surname>
<given-names>T.</given-names>
</name>
</person-group>
<article-title>Multiple Feature Subspaces Analysis for Single Sample per Person Face Recognition</article-title>
<source>Vis. Comput.</source>
<year>2019</year>
<volume>35</volume>
<fpage>239</fpage>
<lpage>256</lpage>
<pub-id pub-id-type="doi">10.1007/s00371-017-1468-4</pub-id>
</element-citation>
</ref>
<ref id="B66-sensors-21-00728">
<label>66.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Pang</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Cheung</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>R.</given-names>
</name>
</person-group>
<article-title>Robust Heterogeneous Discriminative Analysis for Face Recognition with Single Sample per Person</article-title>
<source>Pattern Recognit.</source>
<year>2019</year>
<volume>89</volume>
<fpage>91</fpage>
<lpage>107</lpage>
<pub-id pub-id-type="doi">10.1016/j.patcog.2019.01.005</pub-id>
</element-citation>
</ref>
<ref id="B67-sensors-21-00728">
<label>67.</label>
<element-citation publication-type="web">
<article-title>SeetaFaceEngine</article-title>
<year>2016</year>
<comment>Available online:
<ext-link ext-link-type="uri" xlink:href="https://github.com/seetaface/SeetaFaceEngine">https://github.com/seetaface/SeetaFaceEngine</ext-link>
</comment>
<date-in-citation content-type="access-date" iso-8601-date="2020-09-01">(accessed on 1 September 2020)</date-in-citation>
</element-citation>
</ref>
<ref id="B68-sensors-21-00728">
<label>68.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Cuculo</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>D’Amelio</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Grossi</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Lanzarotti</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Lin</surname>
<given-names>J.</given-names>
</name>
</person-group>
<article-title>Robust Single-Sample Face Recognition by Sparsity-Driven Sub-Dictionary Learning Using Deep Features</article-title>
<source>Sensors</source>
<year>2019</year>
<volume>19</volume>
<elocation-id>146</elocation-id>
<pub-id pub-id-type="doi">10.3390/s19010146</pub-id>
<pub-id pub-id-type="pmid">30609846</pub-id>
</element-citation>
</ref>
<ref id="B69-sensors-21-00728">
<label>69.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wright</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Yang</surname>
<given-names>A.Y.</given-names>
</name>
<name>
<surname>Ganesh</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Sastry</surname>
<given-names>S.S.</given-names>
</name>
<name>
<surname>Ma</surname>
<given-names>Y.</given-names>
</name>
</person-group>
<article-title>Robust Face Recognition via Sparse Representation</article-title>
<source>IEEE Trans. Pattern Anal. Mach. Intell.</source>
<year>2009</year>
<volume>31</volume>
<fpage>210</fpage>
<lpage>227</lpage>
<pub-id pub-id-type="doi">10.1109/TPAMI.2008.79</pub-id>
<pub-id pub-id-type="pmid">19110489</pub-id>
</element-citation>
</ref>
<ref id="B70-sensors-21-00728">
<label>70.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Su</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Shan</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Gao</surname>
<given-names>W.</given-names>
</name>
</person-group>
<article-title>Adaptive Generic Learning for Face Recognition from a Single Sample per Person</article-title>
<source>Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition</source>
<conf-loc>San Francisco, CA, USA</conf-loc>
<conf-date>13–18 June 2010</conf-date>
<fpage>2699</fpage>
<lpage>2706</lpage>
</element-citation>
</ref>
<ref id="B71-sensors-21-00728">
<label>71.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zhou</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Yang</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Huang</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Feng</surname>
<given-names>S.</given-names>
</name>
</person-group>
<article-title>Discriminative Probabilistic Latent Semantic Analysis with Application to Single Sample Face Recognition</article-title>
<source>Neural Process. Lett.</source>
<year>2019</year>
<volume>49</volume>
<fpage>1273</fpage>
<lpage>1298</lpage>
<pub-id pub-id-type="doi">10.1007/s11063-018-9852-2</pub-id>
</element-citation>
</ref>
<ref id="B72-sensors-21-00728">
<label>72.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zeng</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Zhao</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Gan</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Mai</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Zhai</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>F.</given-names>
</name>
</person-group>
<article-title>Deep Convolutional Neural Network Used in Single Sample per Person Face Recognition</article-title>
<source>Comput. Intell. Neurosci.</source>
<year>2018</year>
<volume>2018</volume>
<fpage>1</fpage>
<lpage>11</lpage>
<pub-id pub-id-type="doi">10.1155/2018/3803627</pub-id>
</element-citation>
</ref>
<ref id="B73-sensors-21-00728">
<label>73.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Adjabi</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Ouahabi</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Benzaoui</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Taleb-Ahmed</surname>
<given-names>A.</given-names>
</name>
</person-group>
<article-title>Past, Present, and Future of Face Recognition: A Review</article-title>
<source>Electronics</source>
<year>2020</year>
<volume>9</volume>
<elocation-id>1188</elocation-id>
<pub-id pub-id-type="doi">10.3390/electronics9081188</pub-id>
</element-citation>
</ref>
<ref id="B74-sensors-21-00728">
<label>74.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sadiq</surname>
<given-names>M.T.</given-names>
</name>
<name>
<surname>Yu</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Yuan</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Aziz</surname>
<given-names>M.Z.</given-names>
</name>
</person-group>
<article-title>Motor Imagery BCI Classification Based on Novel Two-Dimensional Modelling in Empirical Wavelet Transform</article-title>
<source>Electron. Lett.</source>
<year>2020</year>
<volume>56</volume>
<fpage>1367</fpage>
<lpage>1369</lpage>
<pub-id pub-id-type="doi">10.1049/el.2020.2509</pub-id>
</element-citation>
</ref>
<ref id="B75-sensors-21-00728">
<label>75.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sadiq</surname>
<given-names>M.T.</given-names>
</name>
<name>
<surname>Yu</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Yuan</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Fan</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Rehman</surname>
<given-names>A.U.</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Xiao</surname>
<given-names>G.</given-names>
</name>
</person-group>
<article-title>Motor Imagery EEG Signals Classification Based on Mode Amplitude and Frequency Components Using Empirical Wavelet Transform</article-title>
<source>IEEE Access</source>
<year>2019</year>
<volume>7</volume>
<fpage>127678</fpage>
<lpage>127692</lpage>
<pub-id pub-id-type="doi">10.1109/ACCESS.2019.2939623</pub-id>
</element-citation>
</ref>
<ref id="B76-sensors-21-00728">
<label>76.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sadiq</surname>
<given-names>M.T.</given-names>
</name>
<name>
<surname>Yu</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Yuan</surname>
<given-names>Z.</given-names>
</name>
</person-group>
<article-title>Exploiting Dimensionality Reduction and Neural Network Techniques for the Development of Expert Brain-Computer Interfaces</article-title>
<source>Expert Syst. Appl.</source>
<year>2021</year>
<volume>164</volume>
<fpage>114031</fpage>
<pub-id pub-id-type="doi">10.1016/j.eswa.2020.114031</pub-id>
</element-citation>
</ref>
<ref id="B77-sensors-21-00728">
<label>77.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Khaldi</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Benzaoui</surname>
<given-names>A.</given-names>
</name>
</person-group>
<article-title>A New Framework for Grayscale Ear Images Recognition Using Generative Adversarial Networks under Unconstrained Conditions</article-title>
<source>Evol. Syst.</source>
<year>2020</year>
<pub-id pub-id-type="doi">10.1007/s12530-020-09346-1</pub-id>
</element-citation>
</ref>
<ref id="B78-sensors-21-00728">
<label>78.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Nguyen</surname>
<given-names>B.P.</given-names>
</name>
<name>
<surname>Tay</surname>
<given-names>W.L.</given-names>
</name>
<name>
<surname>Chui</surname>
<given-names>C.K.</given-names>
</name>
</person-group>
<article-title>Robust Biometric Recognition from Palm Depth Images for Gloved Hands</article-title>
<source>IEEE Trans. Hum. Mach. Syst.</source>
<year>2015</year>
<volume>45</volume>
<fpage>799</fpage>
<lpage>804</lpage>
<pub-id pub-id-type="doi">10.1109/THMS.2015.2453203</pub-id>
</element-citation>
</ref>
</ref-list>
</back>
<floats-group>
<fig id="sensors-21-00728-f001" orientation="portrait" position="float">
<label>Figure 1</label>
<caption>
<p>Schematic of the proposed Single-Sample Face Recognition (SSFR) system based on the Multi-Block Color-Binarized Statistical Image Features (MB-C-BSIF) descriptor.</p>
</caption>
<graphic xlink:href="sensors-21-00728-g001"></graphic>
</fig>
<fig id="sensors-21-00728-f002" orientation="portrait" position="float">
<label>Figure 2</label>
<caption>
<p>Examples of 7 × 7 BSIF filter banks learned from natural images.</p>
</caption>
<graphic xlink:href="sensors-21-00728-g002"></graphic>
</fig>
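<boxed-text position="float">
<caption>
<p>A hedged sketch of how a learned BSIF filter bank such as the one in Figure 2 produces a code image: each filter response is binarized at zero and the sign bits are packed into an integer code per pixel. The filter array and the boundary handling here are assumptions, not the authors' implementation.</p>
</caption>
<preformat>
import numpy as np
from scipy.signal import convolve2d

def bsif_code(image, filters):
    """Binarized Statistical Image Features (BSIF) code image.

    image:   2-D grayscale array.
    filters: array of shape (n, l, l) holding n learned l x l filters
             (assumed given; real banks are learned by ICA from
             natural-image patches).
    Returns an integer code image with values in [0, 2**n - 1].
    """
    n = filters.shape[0]
    code = np.zeros(image.shape, dtype=np.int64)
    for i in range(n):
        # Each filter response contributes one bit: 1 where positive.
        response = convolve2d(image, filters[i], mode="same", boundary="symm")
        code += (response > 0).astype(np.int64) * (2 ** i)
    return code
</preformat>
</boxed-text>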
<fig id="sensors-21-00728-f003" orientation="portrait" position="float">
<label>Figure 3</label>
<caption>
<p>(
<bold>a</bold>
) Examples of facial images, and (
<bold>b</bold>
) their parallel BSIF representations.</p>
</caption>
<graphic xlink:href="sensors-21-00728-g003"></graphic>
</fig>
<fig id="sensors-21-00728-f004" orientation="portrait" position="float">
<label>Figure 4</label>
<caption>
<p>Examples of multi-block (MB) image decomposition: (
<bold>a</bold>
) 1 × 1, (
<bold>b</bold>
) 2 × 2, and (
<bold>c</bold>
) 4 × 4.</p>
</caption>
<graphic xlink:href="sensors-21-00728-g004"></graphic>
</fig>
<fig id="sensors-21-00728-f005" orientation="portrait" position="float">
<label>Figure 5</label>
<caption>
<p>Structure of the proposed feature extraction approach: MB-C-BSIF.</p>
</caption>
<graphic xlink:href="sensors-21-00728-g005"></graphic>
</fig>
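<boxed-text position="float">
<caption>
<p>Figure 5 combines the pieces: each color channel is BSIF-encoded, the code image is decomposed into blocks as in Figure 4, and the per-block histograms are concatenated into one global descriptor. A minimal sketch, reusing the hypothetical bsif_code above and assuming image dimensions divisible by the block grid:</p>
</caption>
<preformat>
import numpy as np

def mb_c_bsif(image_rgb, filters, grid=4, n_bits=12):
    """Multi-Block Color-BSIF descriptor (illustrative sketch).

    image_rgb: (H, W, 3) color image; H and W are assumed divisible
               by `grid` for simplicity.
    grid:      blocks per side, e.g. 4 for the 4 x 4 decomposition.
    n_bits:    number of BSIF filters, which fixes the histogram length.
    """
    h, w, _ = image_rgb.shape
    bh, bw = h // grid, w // grid
    histograms = []
    for c in range(3):  # encode each color channel independently
        code = bsif_code(image_rgb[:, :, c].astype(float), filters)
        for by in range(grid):
            for bx in range(grid):
                block = code[by * bh:(by + 1) * bh, bx * bw:(bx + 1) * bw]
                hist, _ = np.histogram(block, bins=2 ** n_bits,
                                       range=(0, 2 ** n_bits))
                histograms.append(hist)
    # Global descriptor: concatenation of all per-block histograms.
    return np.concatenate(histograms).astype(float)
</preformat>
</boxed-text>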
<fig id="sensors-21-00728-f006" orientation="portrait" position="float">
<label>Figure 6</label>
<caption>
<p>The 26 facial images of the first individual from the AR database and their detailed descriptions.</p>
</caption>
<graphic xlink:href="sensors-21-00728-g006"></graphic>
</fig>
<fig id="sensors-21-00728-f007" orientation="portrait" position="float">
<label>Figure 7</label>
<caption>
<p>Examples of two different subjects from the Labeled Faces in the Wild-a (LFW-a) database.</p>
</caption>
<graphic xlink:href="sensors-21-00728-g007"></graphic>
</fig>
<table-wrap id="sensors-21-00728-t001" orientation="portrait" position="float">
<object-id pub-id-type="pii">sensors-21-00728-t001_Table 1</object-id>
<label>Table 1</label>
<caption>
<p>Comparison of the results obtained using six BSIF configurations with changes in facial expression.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th rowspan="2" align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" colspan="1">
<inline-formula>
<mml:math id="mm82">
<mml:mrow>
<mml:mstyle mathvariant="bold">
<mml:mrow>
<mml:mi mathvariant="bold-italic">l</mml:mi>
<mml:mo>×</mml:mo>
<mml:mi mathvariant="bold-italic">l</mml:mi>
<mml:mo> </mml:mo>
<mml:mo>(</mml:mo>
<mml:mi mathvariant="bold">Pixels</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mstyle>
</mml:mrow>
</mml:math>
</inline-formula>
</th>
<th rowspan="2" align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" colspan="1">
<inline-formula>
<mml:math id="mm83">
<mml:mrow>
<mml:mstyle mathvariant="bold">
<mml:mrow>
<mml:mi mathvariant="bold-italic">n</mml:mi>
<mml:mo> </mml:mo>
<mml:mfenced>
<mml:mrow>
<mml:mstyle mathvariant="bold" mathsize="normal">
<mml:mi mathvariant="bold">Bits</mml:mi>
</mml:mstyle>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mstyle>
</mml:mrow>
</mml:math>
</inline-formula>
</th>
<th colspan="7" align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1">Accuracy (%) </th>
<th rowspan="2" align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" colspan="1">Average Accuracy (%) </th>
</tr>
<tr>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">B </th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">C </th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">D </th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">N </th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">O </th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">P </th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Q </th>
</tr>
</thead>
<tbody>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">3 × 3</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">5 </td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">70 </td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">72 </td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">38 </td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">36 </td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">20 </td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">24 </td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">14 </td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">39.14</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">5 × 5</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">9 </td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">94 </td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">97 </td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">59 </td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">75 </td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">60 </td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">66 </td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">30 </td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">68.71</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">9 × 9</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">12 </td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">100 </td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">100 </td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">91 </td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">95 </td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">90 </td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">92 </td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">53 </td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">88.71</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">11 × 11</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">8 </td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">97 </td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">99 </td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">74 </td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">85 </td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">70 </td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">75 </td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">43 </td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">77.57</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">15 × 15</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">12 </td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">100 </td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">100 </td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">96 </td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">97 </td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">96 </td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">96 </td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">73 </td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">94.00</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">17 × 17</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">12 </td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">100 </td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">100 </td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">98 </td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">97 </td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">96 </td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">97 </td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">71 </td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">94.14</td>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap id="sensors-21-00728-t002" orientation="portrait" position="float">
<object-id pub-id-type="pii">sensors-21-00728-t002_Table 2</object-id>
<label>Table 2</label>
<caption>
<p>Comparison of the results obtained using six BSIF configurations with occlusion by sunglasses.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th rowspan="2" align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" colspan="1">
<inline-formula>
<mml:math id="mm84">
<mml:mrow>
<mml:mstyle mathvariant="bold">
<mml:mrow>
<mml:mi mathvariant="bold-italic">l</mml:mi>
<mml:mo>×</mml:mo>
<mml:mi mathvariant="bold-italic">l</mml:mi>
<mml:mo> </mml:mo>
<mml:mo>(</mml:mo>
<mml:mi mathvariant="bold">Pixels</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mstyle>
</mml:mrow>
</mml:math>
</inline-formula>
</th>
<th rowspan="2" align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" colspan="1">
<inline-formula>
<mml:math id="mm85">
<mml:mrow>
<mml:mstyle mathvariant="bold">
<mml:mrow>
<mml:mi mathvariant="bold-italic">n</mml:mi>
<mml:mo> </mml:mo>
<mml:mfenced>
<mml:mrow>
<mml:mstyle mathvariant="bold" mathsize="normal">
<mml:mi mathvariant="bold">Bits</mml:mi>
</mml:mstyle>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mstyle>
</mml:mrow>
</mml:math>
</inline-formula>
</th>
<th colspan="6" align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1">Accuracy (%)</th>
<th rowspan="2" align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" colspan="1">Average Accuracy (%)</th>
</tr>
<tr>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">H</th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">I</th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">J</th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">U</th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">V</th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">W</th>
</tr>
</thead>
<tbody>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">3 × 3</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">5</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">29</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">8</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">4</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">12</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">4</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">3</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">10.00</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">5 × 5</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">9</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">70</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">24</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">14</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">28</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">14</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">8</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">26.50</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">9 × 9</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">12</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">98</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">80</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">61</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">80</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">38</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">30</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">61.50</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">11 × 11</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">8</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">78</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">34</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">23</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">48</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">26</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">15</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">37.33</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">15 × 15</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">12</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">100</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">84</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">85</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">87</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">50</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">46</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">75.33</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">17 × 17</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">12</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">100</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">91</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">87</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">89</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">58</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">46</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">78.50</td>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap id="sensors-21-00728-t003" orientation="portrait" position="float">
<object-id pub-id-type="pii">sensors-21-00728-t003_Table 3</object-id>
<label>Table 3</label>
<caption>
<p>Comparison of the results obtained using six BSIF configurations with occlusion by scarf.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th rowspan="2" align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" colspan="1">
<inline-formula>
<mml:math id="mm86">
<mml:mrow>
<mml:mstyle mathvariant="bold">
<mml:mrow>
<mml:mi mathvariant="bold-italic">l</mml:mi>
<mml:mo>×</mml:mo>
<mml:mi mathvariant="bold-italic">l</mml:mi>
<mml:mo> </mml:mo>
<mml:mo>(</mml:mo>
<mml:mi mathvariant="bold">Pixels</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mstyle>
</mml:mrow>
</mml:math>
</inline-formula>
</th>
<th rowspan="2" align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" colspan="1">
<inline-formula>
<mml:math id="mm87">
<mml:mrow>
<mml:mstyle mathvariant="bold">
<mml:mrow>
<mml:mi mathvariant="bold-italic">n</mml:mi>
<mml:mo> </mml:mo>
<mml:mfenced>
<mml:mrow>
<mml:mstyle mathvariant="bold" mathsize="normal">
<mml:mi mathvariant="bold">Bits</mml:mi>
</mml:mstyle>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mstyle>
</mml:mrow>
</mml:math>
</inline-formula>
</th>
<th colspan="6" align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1">Accuracy (%)</th>
<th rowspan="2" align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" colspan="1">Average Accuracy (%)</th>
</tr>
<tr>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">K</th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">L</th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">M</th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">X</th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Y</th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Z</th>
</tr>
</thead>
<tbody>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">3 × 3</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">5</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">7</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">4</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">2</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">3</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">2</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">2</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">3.33</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">5 × 5</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">9</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">22</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">9</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">6</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">12</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">6</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">2</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">9.50</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">9 × 9</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">12</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">88</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">54</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">34</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">52</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">31</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">15</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">45.67</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">11 × 11</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">8</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">52</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">12</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">90</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">22</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">9</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">7</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">32.00</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">15 × 15</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">12</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">97</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">69</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">64</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">79</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">48</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">37</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">65.67</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">17 × 17</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">12</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">98</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">80</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">63</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">90</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">48</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">31</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">68.33</td>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap id="sensors-21-00728-t004" orientation="portrait" position="float">
<object-id pub-id-type="pii">sensors-21-00728-t004_Table 4</object-id>
<label>Table 4</label>
<caption>
<p>Comparison of the results obtained using different distances with changes in facial expression.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th rowspan="2" align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" colspan="1">Distance</th>
<th colspan="7" align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1">Accuracy (%)</th>
<th rowspan="2" align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" colspan="1">Average Accuracy (%)</th>
</tr>
<tr>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">B</th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">C</th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">D</th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">N</th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">O</th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">P</th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Q</th>
</tr>
</thead>
<tbody>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Hamming</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">63</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">79</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">9</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">69</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">23</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">40</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">6</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">41.29</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Euclidean</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">99</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">100</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">80</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">90</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">83</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">82</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">43</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">82.43</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">City block</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">100</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">100</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">98</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">97</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">96</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">97</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">71</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">94.14</td>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap id="sensors-21-00728-t005" orientation="portrait" position="float">
<object-id pub-id-type="pii">sensors-21-00728-t005_Table 5</object-id>
<label>Table 5</label>
<caption>
<p>Comparison of the results obtained using different distances with occlusion by sunglasses.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th rowspan="2" align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" colspan="1">Distance</th>
<th colspan="6" align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1">Accuracy (%)</th>
<th rowspan="2" align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" colspan="1">Average Accuracy (%)</th>
</tr>
<tr>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">H</th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">I</th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">J</th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">U</th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">V</th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">W</th>
</tr>
</thead>
<tbody>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Hamming</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">37</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">5</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">6</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">11</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">4</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">2</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">10.83</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Euclidean</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">96</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">68</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">42</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">68</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">31</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">17</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">53.67</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">City block</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">100</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">91</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">87</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">89</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">58</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">46</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">78.50</td>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap id="sensors-21-00728-t006" orientation="portrait" position="float">
<object-id pub-id-type="pii">sensors-21-00728-t006_Table 6</object-id>
<label>Table 6</label>
<caption>
<p>Comparison of the results obtained using different distances with occlusion by scarf.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th rowspan="2" align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" colspan="1">Distance</th>
<th colspan="6" align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1">Accuracy (%)</th>
<th rowspan="2" align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" colspan="1">Average Accuracy (%)</th>
</tr>
<tr>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">K</th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">L</th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">M</th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">X</th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Y</th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Z</th>
</tr>
</thead>
<tbody>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Hamming</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">34</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">5</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">8</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">20</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">4</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">4</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">12.50</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Euclidean</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">79</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">32</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">16</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">41</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">22</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">5</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">32.50</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">City block</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">98</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">80</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">63</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">90</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">48</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">31</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">68.33</td>
</tr>
</tbody>
</table>
</table-wrap>
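<boxed-text position="float">
<caption>
<p>Tables 4-6 compare Hamming, Euclidean, and city-block distances for matching descriptors, with the city-block (L1) distance consistently strongest. For reference, a sketch of the three measures on feature vectors; applying Hamming to binarized vectors is an assumption about the protocol:</p>
</caption>
<preformat>
import numpy as np

def euclidean(u, v):
    return np.sqrt(np.sum((u - v) ** 2))

def city_block(u, v):
    # L1 (Manhattan) distance: the best performer in Tables 4-6.
    return np.sum(np.abs(u - v))

def hamming(u, v):
    # Fraction of differing positions after binarizing the features
    # (one plausible way to apply Hamming to histogram vectors).
    return np.mean((u > 0) != (v > 0))
</preformat>
<p>With a single gallery sample per person, identification then reduces to a 1-nearest-neighbor search under the chosen distance.</p>
</boxed-text>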
<table-wrap id="sensors-21-00728-t007" orientation="portrait" position="float">
<object-id pub-id-type="pii">sensors-21-00728-t007_Table 7</object-id>
<label>Table 7</label>
<caption>
<p>Comparison of the results obtained using different divided blocks with changes in facial expression.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th rowspan="2" align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" colspan="1">Segmentation</th>
<th colspan="7" align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1">Accuracy (%)</th>
<th rowspan="2" align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" colspan="1">Average Accuracy (%)</th>
</tr>
<tr>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">B</th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">C</th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">D</th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">N</th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">O</th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">P</th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Q</th>
</tr>
</thead>
<tbody>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">(1 × 1)</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">100</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">100</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">98</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">97</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">96</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">97</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">71</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">94.14</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">(2 × 2)</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">100</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">100</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">95</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">98</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">92</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">91</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">60</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">90.86</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">(4 × 4)</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">100</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">100</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">99</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">98</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">92</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">97</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">76</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">94.57</td>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap id="sensors-21-00728-t008" orientation="portrait" position="float">
<object-id pub-id-type="pii">sensors-21-00728-t008_Table 8</object-id>
<label>Table 8</label>
<caption>
<p>Comparison of the results obtained using different divided blocks with occlusion by sunglasses.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th rowspan="2" align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" colspan="1">Segmentation</th>
<th colspan="6" align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1">Accuracy (%)</th>
<th rowspan="2" align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" colspan="1">Average Accuracy (%)</th>
</tr>
<tr>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">H</th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">I</th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">J</th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">U</th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">V</th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">W</th>
</tr>
</thead>
<tbody>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">(1 × 1)</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">100</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">91</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">87</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">89</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">58</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">46</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">78.50</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">(2 × 2)</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">100</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">99</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">98</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">91</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">83</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">71</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">90.33</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">(4 × 4)</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">100</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">99</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">99</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">93</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">81</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">79</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">91.83</td>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap id="sensors-21-00728-t009" orientation="portrait" position="float">
<object-id pub-id-type="pii">sensors-21-00728-t009_Table 9</object-id>
<label>Table 9</label>
<caption>
<p>Comparison of the results obtained using different block divisions under occlusion by a scarf.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th rowspan="2" align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" colspan="1">Segmentation</th>
<th colspan="6" align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1">Accuracy (%)</th>
<th rowspan="2" align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" colspan="1">Average Accuracy (%)</th>
</tr>
<tr>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">K</th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">L</th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">M</th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">X</th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Y</th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Z</th>
</tr>
</thead>
<tbody>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">(1 × 1)</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">98</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">80</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">63</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">90</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">48</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">31</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">68.33</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">(2 × 2)</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">98</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">95</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">92</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">92</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">79</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">72</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">88.00</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">(4 × 4)</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">99</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">98</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">95</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">93</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">84</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">77</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">91.00</td>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap id="sensors-21-00728-t010" orientation="portrait" position="float">
<object-id pub-id-type="pii">sensors-21-00728-t010_Table 10</object-id>
<label>Table 10</label>
<caption>
<p>Comparison of the results obtained using different color-spaces with changes in facial expression.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th rowspan="2" align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" colspan="1">Color-Space</th>
<th colspan="7" align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1">Accuracy (%)</th>
<th rowspan="2" align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" colspan="1">Average Accuracy (%)</th>
</tr>
<tr>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">B</th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">C</th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">D</th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">N</th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">O</th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">P</th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Q</th>
</tr>
</thead>
<tbody>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Gray Scale</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">100</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">100</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">99</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">98</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">92</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">97</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">76</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">94.57</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">RGB</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">100</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">100</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">95</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">97</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">92</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">93</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">67</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">92.00</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">HSV</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">100</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">100</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">99</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">97</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">96</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">95</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">77</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">94.86</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">YCbCr</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">100</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">100</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">96</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">98</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">93</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">93</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">73</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">93.29</td>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap id="sensors-21-00728-t011" orientation="portrait" position="float">
<object-id pub-id-type="pii">sensors-21-00728-t011_Table 11</object-id>
<label>Table 11</label>
<caption>
<p>Comparison of the results obtained using different color-spaces under occlusion by sunglasses.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th rowspan="2" align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" colspan="1">Color-Space</th>
<th colspan="6" align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1">Accuracy (%)</th>
<th rowspan="2" align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" colspan="1">Average Accuracy (%)</th>
</tr>
<tr>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">H</th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">I</th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">J</th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">U</th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">V</th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">W</th>
</tr>
</thead>
<tbody>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Gray Scale</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">100</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">99</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">99</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">93</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">81</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">79</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">91.83</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">RGB</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">100</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">99</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">100</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">93</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">85</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">84</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">93.50</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">HSV</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">100</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">97</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">99</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">96</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">82</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">80</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">92.33</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">YCbCr</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">100</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">99</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">98</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">93</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">81</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">80</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">91.83</td>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap id="sensors-21-00728-t012" orientation="portrait" position="float">
<object-id pub-id-type="pii">sensors-21-00728-t012_Table 12</object-id>
<label>Table 12</label>
<caption>
<p>Comparison of the results obtained using different color-spaces under occlusion by a scarf.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th rowspan="2" align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" colspan="1">Color-Space</th>
<th colspan="6" align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1">Accuracy (%)</th>
<th rowspan="2" align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" colspan="1">Average Accuracy (%)</th>
</tr>
<tr>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">K</th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">L</th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">M</th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">X</th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Y</th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Z</th>
</tr>
</thead>
<tbody>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Gray Scale</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">99</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">98</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">95</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">93</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">84</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">77</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">91.00</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">RGB</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">99</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">97</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">97</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">94</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">88</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">81</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">92.67</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">HSV</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">99</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">96</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">90</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">95</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">75</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">74</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">88.17</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">YCbCr</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">98</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">98</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">96</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">93</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">87</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">78</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">91.67</td>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap id="sensors-21-00728-t013" orientation="portrait" position="float">
<object-id pub-id-type="pii">sensors-21-00728-t013_Table 13</object-id>
<label>Table 13</label>
<caption>
<p>Comparison of 18 methods on the facial expression variation subsets.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th rowspan="2" align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" colspan="1">Authors</th>
<th rowspan="2" align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" colspan="1">Year</th>
<th rowspan="2" align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" colspan="1">Method</th>
<th colspan="6" align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1">Accuracy</th>
<th rowspan="2" align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" colspan="1">Average Accuracy (%)</th>
</tr>
<tr>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">B</th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">C</th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">D</th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">N</th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">O</th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">P</th>
</tr>
</thead>
<tbody>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Turk, Pentland [
<xref rid="B55-sensors-21-00728" ref-type="bibr">55</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">1991</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">PCA</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">97.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">87.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">60.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">77.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">76.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">67.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">77.33</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Wu and Zhou [
<xref rid="B56-sensors-21-00728" ref-type="bibr">56</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">2002</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">(PC)
<sup>2</sup>
A</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">97.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">87.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">62.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">77.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">74.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">67.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">77.33</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Chen et al. [
<xref rid="B57-sensors-21-00728" ref-type="bibr">57</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">2004</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">E(PC)
<sup>2</sup>
A</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">97.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">87.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">63.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">77.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">75.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">68.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">77.83</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Yang et al. [
<xref rid="B58-sensors-21-00728" ref-type="bibr">58</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">2004</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">2DPCA</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">97.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">87.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">60.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">76.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">76.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">67.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">77.17</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Gottumukkal and Asari [
<xref rid="B59-sensors-21-00728" ref-type="bibr">59</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">2004</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Block-PCA</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">97.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">87.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">60.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">77.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">76.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">67.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">77.33</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Chen et al. [
<xref rid="B60-sensors-21-00728" ref-type="bibr">60</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">2004</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Block-LDA</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">85.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">79.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">29.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">73.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">59.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">59.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">64.00</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Zhang and Zhou [
<xref rid="B61-sensors-21-00728" ref-type="bibr">61</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">2005</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">(2D)
<sup>2</sup>
PCA</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">98.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">89.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">60.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">71.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">76.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">66.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">76.70</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Tan et al. [
<xref rid="B62-sensors-21-00728" ref-type="bibr">62</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">2005</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">SOM</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">98.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">88.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">64.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">73.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">77.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">70.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">78.30</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">He et al. [
<xref rid="B63-sensors-21-00728" ref-type="bibr">63</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">2005</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">LPP</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">94.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">87.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">36.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">86.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">74.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">78.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">75.83</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Zhang et al. [
<xref rid="B27-sensors-21-00728" ref-type="bibr">27</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">2005</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">SVD-LDA</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">73.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">75.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">29.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">75.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">56.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">58.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">61.00</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Deng et al. [
<xref rid="B64-sensors-21-00728" ref-type="bibr">64</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">2010</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">UP</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">98.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">88.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">59.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">77.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">74.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">66.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">77.00</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Lu et al. [
<xref rid="B36-sensors-21-00728" ref-type="bibr">36</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">2012</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">DMMA</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">99.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">93.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">69.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">88.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">85.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">85.50</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">79.00</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Mehrasa et al. [
<xref rid="B53-sensors-21-00728" ref-type="bibr">53</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">2017</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">SLPMM</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">99.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">94.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">65.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">- -</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">- -</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">- -</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">- -</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Ji et al. [
<xref rid="B54-sensors-21-00728" ref-type="bibr">54</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">2017</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">CPL</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">92.22</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">88.06</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">83.61</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">83.59</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">77.95</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">72.82</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">83.04</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Zhang et al. [
<xref rid="B37-sensors-21-00728" ref-type="bibr">37</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">2018</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">DMF</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">100.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">99.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">66.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">- -</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">- -</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">- -</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">- -</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Chu et al. [
<xref rid="B65-sensors-21-00728" ref-type="bibr">65</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">2019</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">MFSA+</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">100.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">100.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">74.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">93.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">85.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">86.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">89.66</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Pang et al. [
<xref rid="B66-sensors-21-00728" ref-type="bibr">66</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">2019</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">RHDA</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">97.08</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">97.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">96.25</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">- -</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">- -</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">- -</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">- -</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Zhang et al. [
<xref rid="B39-sensors-21-00728" ref-type="bibr">39</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">2020</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">DNNC</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">100.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">98.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">69.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">92.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">76.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">85.00</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">86.67</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">
<bold>Our method</bold>
</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">
<bold>2021</bold>
</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">
<bold>MB-C-BSIF</bold>
</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">
<bold>100.00</bold>
</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">
<bold>100.00</bold>
</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">
<bold>95.00</bold>
</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">
<bold>97.00</bold>
</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">
<bold>92.00</bold>
</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">
<bold>93.00</bold>
</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">
<bold>96.17</bold>
</td>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap id="sensors-21-00728-t015" orientation="portrait" position="float">
<object-id pub-id-type="pii">sensors-21-00728-t015_Table 15</object-id>
<label>Table 15</label>
<caption>
<p>Identification accuracies using the LFW database.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1" colspan="1">Authors</th>
<th align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1" colspan="1">Year</th>
<th align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1" colspan="1">Method</th>
<th align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1" colspan="1">Accuracy (%)</th>
</tr>
</thead>
<tbody>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Chen et al. [
<xref rid="B60-sensors-21-00728" ref-type="bibr">60</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">2004</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Block LDA</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">16.40</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Zhang et al. [
<xref rid="B27-sensors-21-00728" ref-type="bibr">27</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">2005</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">SVD-FLDA</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">15.50</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Wright et al. [
<xref rid="B69-sensors-21-00728" ref-type="bibr">69</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">2009</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">SRC</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">20.40</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Su et al. [
<xref rid="B70-sensors-21-00728" ref-type="bibr">70</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">2010</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">AGL</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">19.20</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Zhang et al. [
<xref rid="B35-sensors-21-00728" ref-type="bibr">35</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">2011</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">CRC</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">19.80</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Deng et al. [
<xref rid="B31-sensors-21-00728" ref-type="bibr">31</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">2012</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">ESRC</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">27.30</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Zhu et al. [
<xref rid="B34-sensors-21-00728" ref-type="bibr">34</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">2012</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">PCRC</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">24.20</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Yang et al. [
<xref rid="B32-sensors-21-00728" ref-type="bibr">32</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">2013</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">SVDL</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">28.60</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Lu et al. [
<xref rid="B36-sensors-21-00728" ref-type="bibr">36</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">2012</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">DMMA</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">17.80</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Zhu et al. [
<xref rid="B33-sensors-21-00728" ref-type="bibr">33</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">2014</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">LGR</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">30.40</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Ji et al. [
<xref rid="B54-sensors-21-00728" ref-type="bibr">54</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">2017</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">CPL</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">25.20</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Dong et al. [
<xref rid="B30-sensors-21-00728" ref-type="bibr">30</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">2018</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">KNNMMDL</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">32.30</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Chu et al. [
<xref rid="B65-sensors-21-00728" ref-type="bibr">65</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">2019</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">MFSA+</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">26.23</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Pang et al. [
<xref rid="B66-sensors-21-00728" ref-type="bibr">66</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">2019</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">RHDA</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">32.89</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Zhou et al. [
<xref rid="B71-sensors-21-00728" ref-type="bibr">71</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">2019</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">DpLSA</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">37.55</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Our method</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">2021</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">MB-C-BSIF</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">38.01</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Parkhi et al. [
<xref rid="B12-sensors-21-00728" ref-type="bibr">12</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">2015</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Deep-Face</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">62.63</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Zeng et al. [
<xref rid="B72-sensors-21-00728" ref-type="bibr">72</xref>
]</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">2018</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">TDL</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">74.00</td>
</tr>
</tbody>
</table>
</table-wrap>
</floats-group>
</pmc>
</record>
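
Note on the tables above: in each table, the Average Accuracy column is the arithmetic mean of the per-subset accuracies in the same row. As a worked check, for the (1 × 1) segmentation under occlusion by sunglasses (Table 8):

Average Accuracy = (100 + 91 + 87 + 89 + 58 + 46) / 6 = 471 / 6 = 78.50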

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Wicri/Sante/explor/MaghrebDataLibMedV2/Data/Pmc/Corpus
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000283  | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Pmc/Corpus/biblio.hfd -nk 000283  | SxmlIndent | more
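
For instance, the same selection can be redirected to a file for further processing. A minimal sketch, assuming HfdSelect and SxmlIndent are on the PATH, that EXPLOR_STEP is set as above, and that SxmlIndent places each element on its own line (the file name record-000283.xml is only illustrative):

# Save the indented record to a file instead of paging it
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000283 | SxmlIndent > record-000283.xml

# List the table captions it contains (standard grep; prints each <caption> line and the one after it)
grep -A1 '<caption>' record-000283.xml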

To link to this page within the Wicri network

{{Explor lien
   |wiki=    Wicri/Sante
   |area=    MaghrebDataLibMedV2
   |flux=    Pmc
   |étape=   Corpus
   |type=    RBID
   |clé=     
   |texte=   
}}

Wicri

This area was generated with Dilib version V0.6.38.
Data generation: Wed Jun 30 18:27:05 2021. Site generation: Wed Jun 30 18:34:21 2021