Server on medical data and libraries in the Maghreb (final version)

Warning: this site is under development!
Warning: this site was generated automatically from raw corpora.
The information has therefore not been validated.
***** Access problem to record *****

Internal identifier: 000166 (Pmc/Corpus); previous: 000165; next: 000167 ***** probable XML problem with record *****

Links to Exploration step


The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Optimum Feature Selection with Particle Swarm Optimization to Face Recognition System Using Gabor Wavelet Transform and Deep Learning</title>
<author>
<name sortKey="Ahmed, Sulayman" sort="Ahmed, Sulayman" uniqKey="Ahmed S" first="Sulayman" last="Ahmed">Sulayman Ahmed</name>
<affiliation>
<nlm:aff id="I1">ENETCOM, Universite de Sfax, Tunisia</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Frikha, Mondher" sort="Frikha, Mondher" uniqKey="Frikha M" first="Mondher" last="Frikha">Mondher Frikha</name>
<affiliation>
<nlm:aff id="I1">ENETCOM, Universite de Sfax, Tunisia</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Hussein, Taha Darwassh Hanawy" sort="Hussein, Taha Darwassh Hanawy" uniqKey="Hussein T" first="Taha Darwassh Hanawy" last="Hussein">Taha Darwassh Hanawy Hussein</name>
<affiliation>
<nlm:aff id="I2">Kirkuk University, Kirkuk, Iraq</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Rahebi, Javad" sort="Rahebi, Javad" uniqKey="Rahebi J" first="Javad" last="Rahebi">Javad Rahebi</name>
<affiliation>
<nlm:aff id="I3">Department of Software Engineering, Istanbul Ayvansaray University, Istanbul, Turkey</nlm:aff>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">33778071</idno>
<idno type="pmc">7969091</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7969091</idno>
<idno type="RBID">PMC:7969091</idno>
<idno type="doi">10.1155/2021/6621540</idno>
<date when="2021">2021</date>
<idno type="wicri:Area/Pmc/Corpus">000166</idno>
<idno type="wicri:explorRef" wicri:stream="Pmc" wicri:step="Corpus" wicri:corpus="PMC">000166</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Optimum Feature Selection with Particle Swarm Optimization to Face Recognition System Using Gabor Wavelet Transform and Deep Learning</title>
<author>
<name sortKey="Ahmed, Sulayman" sort="Ahmed, Sulayman" uniqKey="Ahmed S" first="Sulayman" last="Ahmed">Sulayman Ahmed</name>
<affiliation>
<nlm:aff id="I1">ENETCOM, Universite de Sfax, Tunisia</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Frikha, Mondher" sort="Frikha, Mondher" uniqKey="Frikha M" first="Mondher" last="Frikha">Mondher Frikha</name>
<affiliation>
<nlm:aff id="I1">ENETCOM, Universite de Sfax, Tunisia</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Hussein, Taha Darwassh Hanawy" sort="Hussein, Taha Darwassh Hanawy" uniqKey="Hussein T" first="Taha Darwassh Hanawy" last="Hussein">Taha Darwassh Hanawy Hussein</name>
<affiliation>
<nlm:aff id="I2">Kirkuk University, Kirkuk, Iraq</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Rahebi, Javad" sort="Rahebi, Javad" uniqKey="Rahebi J" first="Javad" last="Rahebi">Javad Rahebi</name>
<affiliation>
<nlm:aff id="I3">Department of Software Engineering, Istanbul Ayvansaray University, Istanbul, Turkey</nlm:aff>
</affiliation>
</author>
</analytic>
<series>
<title level="j">BioMed Research International</title>
<idno type="ISSN">2314-6133</idno>
<idno type="eISSN">2314-6141</idno>
<imprint>
<date when="2021">2021</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>In this study, we present the Gabor wavelet transform combined with deep learning as a new approach for the symmetry face database. The proposed face recognition system was developed for use in different applications. We used the Gabor wavelet transform for feature extraction from the symmetry face training data and then applied a deep learning method for recognition. We implemented and evaluated the proposed method on the ORL and YALE databases with MATLAB 2020a. The same experiments were also conducted with particle swarm optimization (PSO) as the feature selection approach. In our study, Gabor wavelet feature extraction with a high number of training image samples proved more effective than the other methods. Without the PSO method, the recognition rate is 85.42% on the ORL database, while it is 92% with the three methods on the YALE database. The use of the PSO algorithm increased the accuracy to 96.22% for the ORL database and 94.66% for the YALE database.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Bouguila, J" uniqKey="Bouguila J">J. Bouguila</name>
</author>
<author>
<name sortKey="Khochtali, H" uniqKey="Khochtali H">H. Khochtali</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bledsoe, W W" uniqKey="Bledsoe W">W. W. Bledsoe</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Yazdi, M" uniqKey="Yazdi M">M. Yazdi</name>
</author>
<author>
<name sortKey="Mardani Samani, S" uniqKey="Mardani Samani S">S. Mardani-Samani</name>
</author>
<author>
<name sortKey="Bordbar, M" uniqKey="Bordbar M">M. Bordbar</name>
</author>
<author>
<name sortKey="Mobaraki, R" uniqKey="Mobaraki R">R. Mobaraki</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Horng, W B" uniqKey="Horng W">W.-B. Horng</name>
</author>
<author>
<name sortKey="Lee, C P" uniqKey="Lee C">C.-P. Lee</name>
</author>
<author>
<name sortKey="Chen, C W" uniqKey="Chen C">C.-W. Chen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ahonen, T" uniqKey="Ahonen T">T. Ahonen</name>
</author>
<author>
<name sortKey="Hadid, A" uniqKey="Hadid A">A. Hadid</name>
</author>
<author>
<name sortKey="Pietik Inen, M" uniqKey="Pietik Inen M">M. Pietikäinen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zhao, G" uniqKey="Zhao G">G. Zhao</name>
</author>
<author>
<name sortKey="Pietikainen, M" uniqKey="Pietikainen M">M. Pietikainen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Chandra Mohan, M" uniqKey="Chandra Mohan M">M. Chandra Mohan</name>
</author>
<author>
<name sortKey="Vijaya Kumar, V" uniqKey="Vijaya Kumar V">V. Vijaya Kumar</name>
</author>
<author>
<name sortKey="Damodaram, A" uniqKey="Damodaram A">A. Damodaram</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kumar, V V" uniqKey="Kumar V">V. V. Kumar</name>
</author>
<author>
<name sortKey="Murty, G S" uniqKey="Murty G">G. S. Murty</name>
</author>
<author>
<name sortKey="Kumar, P S" uniqKey="Kumar P">P. S. Kumar</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kroeker, K L" uniqKey="Kroeker K">K. L. Kroeker</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ming Zhang" uniqKey="Ming Zhang">Ming Zhang</name>
</author>
<author>
<name sortKey="Fulcher, J" uniqKey="Fulcher J">J. Fulcher</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Feng, X" uniqKey="Feng X">X. Feng</name>
</author>
<author>
<name sortKey="Pietikainen, M" uniqKey="Pietikainen M">M. Pietikainen</name>
</author>
<author>
<name sortKey="Hadid, A" uniqKey="Hadid A">A. Hadid</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Elad, M" uniqKey="Elad M">M. Elad</name>
</author>
<author>
<name sortKey="Goldenberg, R" uniqKey="Goldenberg R">R. Goldenberg</name>
</author>
<author>
<name sortKey="Kimmel, R" uniqKey="Kimmel R">R. Kimmel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Skodras, A" uniqKey="Skodras A">A. Skodras</name>
</author>
<author>
<name sortKey="Christopoulos, C" uniqKey="Christopoulos C">C. Christopoulos</name>
</author>
<author>
<name sortKey="Ebrahimi, T" uniqKey="Ebrahimi T">T. Ebrahimi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rakshit, S" uniqKey="Rakshit S">S. Rakshit</name>
</author>
<author>
<name sortKey="Monro, D M" uniqKey="Monro D">D. M. Monro</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lu, J" uniqKey="Lu J">J. Lu</name>
</author>
<author>
<name sortKey="Plataniotis, K N" uniqKey="Plataniotis K">K. N. Plataniotis</name>
</author>
<author>
<name sortKey="Venetsanopoulos, A N" uniqKey="Venetsanopoulos A">A. N. Venetsanopoulos</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lu, Z" uniqKey="Lu Z">Z. Lu</name>
</author>
<author>
<name sortKey="Linghua Zhang" uniqKey="Linghua Zhang">Linghua Zhang</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Chen, J" uniqKey="Chen J">J. Chen</name>
</author>
<author>
<name sortKey="Patel, V M" uniqKey="Patel V">V. M. Patel</name>
</author>
<author>
<name sortKey="Liu, L" uniqKey="Liu L">L. Liu</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zhao, W" uniqKey="Zhao W">W. Zhao</name>
</author>
<author>
<name sortKey="Chellappa, R" uniqKey="Chellappa R">R. Chellappa</name>
</author>
<author>
<name sortKey="Phillips, P J" uniqKey="Phillips P">P. J. Phillips</name>
</author>
<author>
<name sortKey="Rosenfeld, A" uniqKey="Rosenfeld A">A. Rosenfeld</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wiskott, L" uniqKey="Wiskott L">L. Wiskott</name>
</author>
<author>
<name sortKey="Fellous, J M" uniqKey="Fellous J">J.-M. Fellous</name>
</author>
<author>
<name sortKey="Kuiger, N" uniqKey="Kuiger N">N. Kuiger</name>
</author>
<author>
<name sortKey="Von Der Malsburg, C" uniqKey="Von Der Malsburg C">C. Von Der Malsburg</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Georghiades, A S" uniqKey="Georghiades A">A. S. Georghiades</name>
</author>
<author>
<name sortKey="Belhumeur, P N" uniqKey="Belhumeur P">P. N. Belhumeur</name>
</author>
<author>
<name sortKey="Kriegman, D J" uniqKey="Kriegman D">D. J. Kriegman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Belhumeur, P N" uniqKey="Belhumeur P">P. N. Belhumeur</name>
</author>
<author>
<name sortKey="Hespanha, J P" uniqKey="Hespanha J">J. P. Hespanha</name>
</author>
<author>
<name sortKey="Kriegman, D J" uniqKey="Kriegman D">D. J. Kriegman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Turk, M A" uniqKey="Turk M">M. A. Turk</name>
</author>
<author>
<name sortKey="Pentland, A P" uniqKey="Pentland A">A. P. Pentland</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Yang, M" uniqKey="Yang M">M. Yang</name>
</author>
<author>
<name sortKey="Zhang, L" uniqKey="Zhang L">L. Zhang</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Guo, G" uniqKey="Guo G">G. Guo</name>
</author>
<author>
<name sortKey="Zhang, N" uniqKey="Zhang N">N. Zhang</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Massoli, F V" uniqKey="Massoli F">F. V. Massoli</name>
</author>
<author>
<name sortKey="Amato, G" uniqKey="Amato G">G. Amato</name>
</author>
<author>
<name sortKey="Falchi, F" uniqKey="Falchi F">F. Falchi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Iqbal, M" uniqKey="Iqbal M">M. Iqbal</name>
</author>
<author>
<name sortKey="Sameem, M S I" uniqKey="Sameem M">M. S. I. Sameem</name>
</author>
<author>
<name sortKey="Naqvi, N" uniqKey="Naqvi N">N. Naqvi</name>
</author>
<author>
<name sortKey="Kanwal, S" uniqKey="Kanwal S">S. Kanwal</name>
</author>
<author>
<name sortKey="Ye, Z" uniqKey="Ye Z">Z. Ye</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Xu, C" uniqKey="Xu C">C. Xu</name>
</author>
<author>
<name sortKey="Liu, Q" uniqKey="Liu Q">Q. Liu</name>
</author>
<author>
<name sortKey="Ye, M" uniqKey="Ye M">M. Ye</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tran, C K" uniqKey="Tran C">C.-K. Tran</name>
</author>
<author>
<name sortKey="Tseng, C D" uniqKey="Tseng C">C.-D. Tseng</name>
</author>
<author>
<name sortKey="Lee, T F" uniqKey="Lee T">T.-F. Lee</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nikan, S" uniqKey="Nikan S">S. Nikan</name>
</author>
<author>
<name sortKey="Ahmadi, M" uniqKey="Ahmadi M">M. Ahmadi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kim, T K" uniqKey="Kim T">T.-K. Kim</name>
</author>
<author>
<name sortKey="Kittler, J" uniqKey="Kittler J">J. Kittler</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="He, X" uniqKey="He X">X. He</name>
</author>
<author>
<name sortKey="Yan, S" uniqKey="Yan S">S. Yan</name>
</author>
<author>
<name sortKey="Hu, Y" uniqKey="Hu Y">Y. Hu</name>
</author>
<author>
<name sortKey="Niyogi, P" uniqKey="Niyogi P">P. Niyogi</name>
</author>
<author>
<name sortKey="Zhang, H J" uniqKey="Zhang H">H.-J. Zhang</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Pentland, A" uniqKey="Pentland A">A. Pentland</name>
</author>
<author>
<name sortKey="Moghaddam, B" uniqKey="Moghaddam B">B. Moghaddam</name>
</author>
<author>
<name sortKey="Starner, T" uniqKey="Starner T">T. Starner</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gross, R" uniqKey="Gross R">R. Gross</name>
</author>
<author>
<name sortKey="Matthews, I" uniqKey="Matthews I">I. Matthews</name>
</author>
<author>
<name sortKey="Baker, S" uniqKey="Baker S">S. Baker</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zhao, C" uniqKey="Zhao C">C. Zhao</name>
</author>
<author>
<name sortKey="Li, X" uniqKey="Li X">X. Li</name>
</author>
<author>
<name sortKey="Dong, Y" uniqKey="Dong Y">Y. Dong</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mairal, J" uniqKey="Mairal J">J. Mairal</name>
</author>
<author>
<name sortKey="Ponce, J" uniqKey="Ponce J">J. Ponce</name>
</author>
<author>
<name sortKey="Sapiro, G" uniqKey="Sapiro G">G. Sapiro</name>
</author>
<author>
<name sortKey="Zisserman, A" uniqKey="Zisserman A">A. Zisserman</name>
</author>
<author>
<name sortKey="Bach, F R" uniqKey="Bach F">F. R. Bach</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sunday, M A" uniqKey="Sunday M">M. A. Sunday</name>
</author>
<author>
<name sortKey="Patel, P A" uniqKey="Patel P">P. A. Patel</name>
</author>
<author>
<name sortKey="Dodd, M D" uniqKey="Dodd M">M. D. Dodd</name>
</author>
<author>
<name sortKey="Gauthier, I" uniqKey="Gauthier I">I. Gauthier</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Inamizu, S" uniqKey="Inamizu S">S. Inamizu</name>
</author>
<author>
<name sortKey="Yamada, E" uniqKey="Yamada E">E. Yamada</name>
</author>
<author>
<name sortKey="Ogata, K" uniqKey="Ogata K">K. Ogata</name>
</author>
<author>
<name sortKey="Uehara, T" uniqKey="Uehara T">T. Uehara</name>
</author>
<author>
<name sortKey="Kira, J I" uniqKey="Kira J">J.-i. Kira</name>
</author>
<author>
<name sortKey="Tobimatsu, S" uniqKey="Tobimatsu S">S. Tobimatsu</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zhou, S K" uniqKey="Zhou S">S. K. Zhou</name>
</author>
<author>
<name sortKey="Chellappa, R" uniqKey="Chellappa R">R. Chellappa</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Blanz, V" uniqKey="Blanz V">V. Blanz</name>
</author>
<author>
<name sortKey="Vetter, T" uniqKey="Vetter T">T. Vetter</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zhang, L" uniqKey="Zhang L">L. Zhang</name>
</author>
<author>
<name sortKey="Samaras, D" uniqKey="Samaras D">D. Samaras</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Blanz, V" uniqKey="Blanz V">V. Blanz</name>
</author>
<author>
<name sortKey="Scherbaum, K" uniqKey="Scherbaum K">K. Scherbaum</name>
</author>
<author>
<name sortKey="Vetter, T" uniqKey="Vetter T">T. Vetter</name>
</author>
<author>
<name sortKey="Seidel, H P" uniqKey="Seidel H">H. P. Seidel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Royer, J" uniqKey="Royer J">J. Royer</name>
</author>
<author>
<name sortKey="Blais, C" uniqKey="Blais C">C. Blais</name>
</author>
<author>
<name sortKey="Charbonneau, I" uniqKey="Charbonneau I">I. Charbonneau</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kas, M" uniqKey="Kas M">M. Kas</name>
</author>
<author>
<name sortKey="El Merabet, Y" uniqKey="El Merabet Y">Y. el merabet</name>
</author>
<author>
<name sortKey="Ruichek, Y" uniqKey="Ruichek Y">Y. Ruichek</name>
</author>
<author>
<name sortKey="Messoussi, R" uniqKey="Messoussi R">R. Messoussi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Shashua, A" uniqKey="Shashua A">A. Shashua</name>
</author>
<author>
<name sortKey="Riklin Raviv, T" uniqKey="Riklin Raviv T">T. Riklin-Raviv</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zhou, S" uniqKey="Zhou S">S. Zhou</name>
</author>
<author>
<name sortKey="Chellappa, R" uniqKey="Chellappa R">R. Chellappa</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Basri, R" uniqKey="Basri R">R. Basri</name>
</author>
<author>
<name sortKey="Jacobs, D W" uniqKey="Jacobs D">D. W. Jacobs</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ramamoorthi, R" uniqKey="Ramamoorthi R">R. Ramamoorthi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gao, Y" uniqKey="Gao Y">Y. Gao</name>
</author>
<author>
<name sortKey="Lee, H J" uniqKey="Lee H">H. J. Lee</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Liu, Z" uniqKey="Liu Z">Z. Liu</name>
</author>
<author>
<name sortKey="Pu, J" uniqKey="Pu J">J. Pu</name>
</author>
<author>
<name sortKey="Wu, Q" uniqKey="Wu Q">Q. Wu</name>
</author>
<author>
<name sortKey="Zhao, X" uniqKey="Zhao X">X. Zhao</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Vasilescu, M A O" uniqKey="Vasilescu M">M. A. O. Vasilescu</name>
</author>
<author>
<name sortKey="Terzopoulos, D" uniqKey="Terzopoulos D">D. Terzopoulos</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wang, R" uniqKey="Wang R">R. Wang</name>
</author>
<author>
<name sortKey="Shan, S" uniqKey="Shan S">S. Shan</name>
</author>
<author>
<name sortKey="Chen, X" uniqKey="Chen X">X. Chen</name>
</author>
<author>
<name sortKey="Gao, W" uniqKey="Gao W">W. Gao</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tenenbaum, J B" uniqKey="Tenenbaum J">J. B. Tenenbaum</name>
</author>
<author>
<name sortKey="Freeman, W T" uniqKey="Freeman W">W. T. Freeman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Shin, D" uniqKey="Shin D">D. Shin</name>
</author>
<author>
<name sortKey="Lee, H S" uniqKey="Lee H">H.-S. Lee</name>
</author>
<author>
<name sortKey="Kim, D" uniqKey="Kim D">D. Kim</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Prince, S J" uniqKey="Prince S">S. J. Prince</name>
</author>
<author>
<name sortKey="Warrell, J" uniqKey="Warrell J">J. Warrell</name>
</author>
<author>
<name sortKey="Elder, J H" uniqKey="Elder J">J. H. Elder</name>
</author>
<author>
<name sortKey="Felisberti, F M" uniqKey="Felisberti F">F. M. Felisberti</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Elgammal, A" uniqKey="Elgammal A">A. Elgammal</name>
</author>
<author>
<name sortKey="Lee, C S" uniqKey="Lee C">C.-S. Lee</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wright, J" uniqKey="Wright J">J. Wright</name>
</author>
<author>
<name sortKey="Yang, A Y" uniqKey="Yang A">A. Y. Yang</name>
</author>
<author>
<name sortKey="Ganesh, A" uniqKey="Ganesh A">A. Ganesh</name>
</author>
<author>
<name sortKey="Sastry, S S" uniqKey="Sastry S">S. S. Sastry</name>
</author>
<author>
<name sortKey="Yi Ma" uniqKey="Yi Ma">Yi Ma</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Allagwail, S" uniqKey="Allagwail S">S. Allagwail</name>
</author>
<author>
<name sortKey="Gedik, O S" uniqKey="Gedik O">O. S. Gedik</name>
</author>
<author>
<name sortKey="Rahebi, J" uniqKey="Rahebi J">J. Rahebi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kamarainen, J K" uniqKey="Kamarainen J">J.-K. Kamarainen</name>
</author>
<author>
<name sortKey="Kyrki, V" uniqKey="Kyrki V">V. Kyrki</name>
</author>
<author>
<name sortKey="Kalviainen, H" uniqKey="Kalviainen H">H. Kalviainen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Meshgini, S" uniqKey="Meshgini S">S. Meshgini</name>
</author>
<author>
<name sortKey="Aghagolzadeh, A" uniqKey="Aghagolzadeh A">A. Aghagolzadeh</name>
</author>
<author>
<name sortKey="Seyedarabi, H" uniqKey="Seyedarabi H">H. Seyedarabi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Haghighat, M" uniqKey="Haghighat M">M. Haghighat</name>
</author>
<author>
<name sortKey="Zonouz, S" uniqKey="Zonouz S">S. Zonouz</name>
</author>
<author>
<name sortKey="Abdel Mottaleb, M" uniqKey="Abdel Mottaleb M">M. Abdel-Mottaleb</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Banks, A" uniqKey="Banks A">A. Banks</name>
</author>
<author>
<name sortKey="Vincent, J" uniqKey="Vincent J">J. Vincent</name>
</author>
<author>
<name sortKey="Anyakoha, C" uniqKey="Anyakoha C">C. Anyakoha</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kennedy, J" uniqKey="Kennedy J">J. Kennedy</name>
</author>
<author>
<name sortKey="Eberhart, R" uniqKey="Eberhart R">R. Eberhart</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hafiz, F" uniqKey="Hafiz F">F. Hafiz</name>
</author>
<author>
<name sortKey="Swain, A" uniqKey="Swain A">A. Swain</name>
</author>
<author>
<name sortKey="Patel, N" uniqKey="Patel N">N. Patel</name>
</author>
<author>
<name sortKey="Naik, C" uniqKey="Naik C">C. Naik</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kennedy, J" uniqKey="Kennedy J">J. Kennedy</name>
</author>
<author>
<name sortKey="Eberhart, R C" uniqKey="Eberhart R">R. C. Eberhart</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Shi, Y" uniqKey="Shi Y">Y. Shi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Shi, Y" uniqKey="Shi Y">Y. Shi</name>
</author>
<author>
<name sortKey="Eberhart, R" uniqKey="Eberhart R">R. Eberhart</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Unler, A" uniqKey="Unler A">A. Unler</name>
</author>
<author>
<name sortKey="Murat, A" uniqKey="Murat A">A. Murat</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Too, J" uniqKey="Too J">J. Too</name>
</author>
<author>
<name sortKey="Abdullah, A R" uniqKey="Abdullah A">A. R. Abdullah</name>
</author>
<author>
<name sortKey="Mohd Saad, N" uniqKey="Mohd Saad N">N. Mohd Saad</name>
</author>
<author>
<name sortKey="Tee, W" uniqKey="Tee W">W. Tee</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zhang, L" uniqKey="Zhang L">L. Zhang</name>
</author>
<author>
<name sortKey="Yang, M" uniqKey="Yang M">M. Yang</name>
</author>
<author>
<name sortKey="Feng, X" uniqKey="Feng X">X. Feng</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">Biomed Res Int</journal-id>
<journal-id journal-id-type="iso-abbrev">Biomed Res Int</journal-id>
<journal-id journal-id-type="publisher-id">BMRI</journal-id>
<journal-title-group>
<journal-title>BioMed Research International</journal-title>
</journal-title-group>
<issn pub-type="ppub">2314-6133</issn>
<issn pub-type="epub">2314-6141</issn>
<publisher>
<publisher-name>Hindawi</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">33778071</article-id>
<article-id pub-id-type="pmc">7969091</article-id>
<article-id pub-id-type="doi">10.1155/2021/6621540</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Research Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Optimum Feature Selection with Particle Swarm Optimization to Face Recognition System Using Gabor Wavelet Transform and Deep Learning</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Ahmed</surname>
<given-names>Sulayman</given-names>
</name>
<xref ref-type="aff" rid="I1">
<sup>1</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Frikha</surname>
<given-names>Mondher</given-names>
</name>
<xref ref-type="aff" rid="I1">
<sup>1</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Hussein</surname>
<given-names>Taha Darwassh Hanawy</given-names>
</name>
<xref ref-type="aff" rid="I2">
<sup>2</sup>
</xref>
</contrib>
<contrib contrib-type="author" corresp="yes">
<contrib-id contrib-id-type="orcid" authenticated="false">https://orcid.org/0000-0001-9875-4860</contrib-id>
<name>
<surname>Rahebi</surname>
<given-names>Javad</given-names>
</name>
<email>cevatrahebi@ayvansaray.edu.tr</email>
<xref ref-type="aff" rid="I3">
<sup>3</sup>
</xref>
</contrib>
</contrib-group>
<aff id="I1">
<sup>1</sup>
ENETCOM, Universite de Sfax, Tunisia</aff>
<aff id="I2">
<sup>2</sup>
Kirkuk University, Kirkuk, Iraq</aff>
<aff id="I3">
<sup>3</sup>
Department of Software Engineering, Istanbul Ayvansaray University, Istanbul, Turkey</aff>
<author-notes>
<fn fn-type="other">
<p>Academic Editor: B. D. Parameshachari</p>
</fn>
</author-notes>
<pub-date pub-type="collection">
<year>2021</year>
</pub-date>
<pub-date pub-type="epub">
<day>10</day>
<month>3</month>
<year>2021</year>
</pub-date>
<volume>2021</volume>
<elocation-id>6621540</elocation-id>
<history>
<date date-type="received">
<day>1</day>
<month>1</month>
<year>2021</year>
</date>
<date date-type="rev-recd">
<day>6</day>
<month>2</month>
<year>2021</year>
</date>
<date date-type="accepted">
<day>24</day>
<month>2</month>
<year>2021</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright © 2021 Sulayman Ahmed et al.</copyright-statement>
<copyright-year>2021</copyright-year>
<license xlink:href="https://creativecommons.org/licenses/by/4.0/">
<license-p>This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.</license-p>
</license>
</permissions>
<abstract>
<p>In this study, we present the Gabor wavelet transform combined with deep learning as a new approach for the symmetry face database. The proposed face recognition system was developed for use in different applications. We used the Gabor wavelet transform for feature extraction from the symmetry face training data and then applied a deep learning method for recognition. We implemented and evaluated the proposed method on the ORL and YALE databases with MATLAB 2020a. The same experiments were also conducted with particle swarm optimization (PSO) as the feature selection approach. In our study, Gabor wavelet feature extraction with a high number of training image samples proved more effective than the other methods. Without the PSO method, the recognition rate is 85.42% on the ORL database, while it is 92% with the three methods on the YALE database. The use of the PSO algorithm increased the accuracy to 96.22% for the ORL database and 94.66% for the YALE database.</p>
</abstract>
</article-meta>
</front>
<body>
<sec id="sec1">
<title>1. Introduction</title>
<p>Face recognition has attracted a lot of interest in recent years [
<xref rid="B1" ref-type="bibr">1</xref>
]. It has become one of the main areas of study in machine vision, pattern recognition, and machine learning. In face recognition, the system selects the trained face that is most similar to the query face and returns it as the final answer.</p>
<p>Facial recognition was first proposed in the 1960s. The first semiautomatic facial recognition system was produced by Woody Bledsoe, Helen Chan Wolf, and Charles Bisson [
<xref rid="B2" ref-type="bibr">2</xref>
]. However, the human face includes a number of details that have been used in many systems, such as artificial age classification [
<xref rid="B3" ref-type="bibr">3</xref>
,
<xref rid="B4" ref-type="bibr">4</xref>
], facial identification [
<xref rid="B5" ref-type="bibr">5</xref>
], forecasting images and restoration apps [
<xref rid="B6" ref-type="bibr">6</xref>
,
<xref rid="B7" ref-type="bibr">7</xref>
], description of gender and gestures [
<xref rid="B8" ref-type="bibr">8</xref>
], human-computer interaction (HCI), consumer experience management, audience recording, and security camera tracking. Applications of face recognition include surveillance, forensic and medical applications, security, banking, identification of persons at international transit centers, access control, and several other fields. Recently, facial recognition technologies have been widely used, in particular, in areas requiring strict security measures (airports, police stations, banks, sports venues, and monitoring of entry to and exit from business premises).</p>
<p>Computer security is considered to be important in the world today [
<xref rid="B9" ref-type="bibr">9</xref>
]. Face recognition remains an important subject in computer vision. Current systems perform well under relatively controlled conditions but tend to fail when the facial images are degraded, for example, when a particular face varies in pose, position, occlusion, lighting, or make-up, or when the image is damaged by noise and blur. Although researchers have developed many technologies and attempted multiple solutions, changing environmental conditions remain the main challenge for facial recognition. The difficulty of the face recognition problem derives from the fact that faces tend to be approximately similar in their most typical view (i.e., the frontal view), and the variations between them are very slight. As a consequence, frontal images form a dense cluster in image space, which makes it nearly impossible for typical pattern recognition methods to recognize faces correctly with a high degree of success [
<xref rid="B10" ref-type="bibr">10</xref>
]. Another concern is the database images [
<xref rid="B11" ref-type="bibr">11</xref>
]. They must contain sufficient information for effective face recognition, so that recognition is possible when dealing with a test image. It is also difficult to determine whether the stored images carry enough information for the relevant features to be extracted from the databases. Often, unnecessary information is also present in the database images, resulting in higher storage consumption and longer processing times. In addition, images of an optimal size need to be stored in the databases for effective results [
<xref rid="B12" ref-type="bibr">12</xref>
,
<xref rid="B13" ref-type="bibr">13</xref>
]. The image size can be compressed to the required size before storage in the databases. Compression entails some loss of features, but it allows large numbers of images to be stored and transmitted quickly through the network [
<xref rid="B14" ref-type="bibr">14</xref>
].</p>
<p>In this paper, we used the Gabor wavelet transform for feature extraction and then reduced the feature set. The PSO method is used to find the best features, and a deep learning method with 6 layers is used for face recognition.</p>
</sec>
<sec id="sec2">
<title>2. Literature Review</title>
<p>Facial recognition is currently divided into two general categories: appearance-based methods, which statistically process the face, and model-based methods that operate geometrically [
<xref rid="B15" ref-type="bibr">15</xref>
]. For face recognition [
<xref rid="B16" ref-type="bibr">16</xref>
], discriminative dictionary learning and sparse representation are used. In that method, the Gabor amplitude images are computed with a bank of Gabor filters. Furthermore, the local binary pattern (LBP) is used for feature extraction [
<xref rid="B17" ref-type="bibr">17</xref>
]. Face recognition can be considered one of the most significant applications in the image processing domain [
<xref rid="B18" ref-type="bibr">18</xref>
]. However, illumination- and pose-invariant recognition remains the most obvious problem. Viewpoint and illumination are vital to the efficiency of the recognition system because these two factors vary when face images are taken in an uncontrolled environment. Elastic bunch graph matching [
<xref rid="B19" ref-type="bibr">19</xref>
], one of the feature-based methods, has long been known to be robust to several factors such as illumination and viewpoint [
<xref rid="B18" ref-type="bibr">18</xref>
]. However, the excessive sensitivity of feature-based methods to feature extraction and to the measurement of the extracted features [
<xref rid="B20" ref-type="bibr">20</xref>
] makes them unreliable. As a result, appearance-based methods have become the dominant approach in the literature.</p>
<p>Ahonen et al. proposed a face recognition model with local binary patterns (LBP) [
<xref rid="B5" ref-type="bibr">5</xref>
]. The robustness of their approach stems from the algorithm's insensitivity to lighting.</p>
<p>The fisherface [
<xref rid="B21" ref-type="bibr">21</xref>
] technique is one of the milestones of face recognition under variations. Linear discriminant analysis (LDA) constructs a subspace in which interperson variation is large and intraperson variation is small [
<xref rid="B21" ref-type="bibr">21</xref>
]. Like the PCA [
<xref rid="B22" ref-type="bibr">22</xref>
], the main disadvantage of this technique is that it assumes a Euclidean data space. The method fails on multimodally distributed face images, whose data points lie in a nonlinear subspace.</p>
<p>The sparse representation algorithm based on the Gabor feature is proposed by Yang and Zhang [
<xref rid="B23" ref-type="bibr">23</xref>
]. In their method, the SRC and Gabor features are combined. Using this technique, they improved the human face recognition rate and reduced the complexity of computation.</p>
<p>The deep learning approaches are investigated [
<xref rid="B24" ref-type="bibr">24</xref>
]. A cross-resolution face recognition scenario based on deep learning is studied in [
<xref rid="B25" ref-type="bibr">25</xref>
]. The authors robustly extracted features using deep representations in that cross-resolution scenario. In [
<xref rid="B26" ref-type="bibr">26</xref>
], angularly discriminative features based on deep learning are used for face recognition.</p>
<p>Xu et al. presented a new artificial neural network for face recognition, called coupled autoencoder networks (CAN), which helps to overcome the difficulties of age-invariant face recognition [
<xref rid="B27" ref-type="bibr">27</xref>
].</p>
<p>The effect of variations in condition on face recognition has been investigated by authors [
<xref rid="B28" ref-type="bibr">28</xref>
]. Consequently, the dominant method has been the appearance-based method. Nikan and Ahmadi [
<xref rid="B29" ref-type="bibr">29</xref>
] introduced a new procedure that fuses global and local structures.</p>
<p>In [
<xref rid="B30" ref-type="bibr">30</xref>
], local linear transformations were used in place of one global transformation, which is a clear improvement. The technique assigns different mapping functions to different pose classes. When a probe image is examined, its pose is determined by soft clustering. Deciding the number of pose clusters is a difficult task, as in all clustering algorithms. Moreover, novel poses cannot be treated in case of critical variations. In [
<xref rid="B31" ref-type="bibr">31</xref>
], the authors used the neighborhood structure of the input space to determine the underlying nonlinear manifold of multimodal face images. The basis set is computed using Locality Preserving Projections (LPP), yielding the so-called Laplacianfaces. When examining face images with varying poses, facial expressions, and illumination conditions, their recognition performance was higher than that of fisherfaces or eigenfaces. In [
<xref rid="B32" ref-type="bibr">32</xref>
], pose variation using view-based eigenfaces was studied. For every view, a separate eigenface transformation onto a standard low-dimensional subspace was computed. In addition, a feature-based scheme is included within the eigenfeatures introduced by the authors. As in [
<xref rid="B33" ref-type="bibr">33</xref>
], their performance depends highly on decoupling. Here, the eigen light-field technique was used to identify the subspace of poses. Zhao et al. [
<xref rid="B34" ref-type="bibr">34</xref>
] proposed a blur-invariant binary descriptor for face recognition. They maximized the correlation between the binary codes of sharp and blurred face images in positive image pairs to learn a projection matrix. They then used the learned projection matrix to obtain blur-robust binary codes by quantizing projected pixel difference vectors (PDVs) in the test phase. A discriminative dictionary learning (DL) method that trains a classifier on the coding coefficients is proposed by Mairal et al. [
<xref rid="B35" ref-type="bibr">35</xref>
]. They verified their method on texture classification and digit recognition. In [
<xref rid="B36" ref-type="bibr">36</xref>
], sex and country population density are shown to interact in predicting face recognition ability. Face and word recognition based on the neuromagnetic correlates of hemispheric specialization is presented in [
<xref rid="B37" ref-type="bibr">37</xref>
].</p>
<p>A method that is insensitive to illumination changes was produced by the authors in [
<xref rid="B38" ref-type="bibr">38</xref>
] through combining the generalized concept of photometric stereo and eigenlight field. 3D morphable face models were used in [
<xref rid="B20" ref-type="bibr">20</xref>
,
<xref rid="B39" ref-type="bibr">39</xref>
,
<xref rid="B40" ref-type="bibr">40</xref>
, 41] to handle novel poses, which achieve performance higher than that of previous research works. Rendering ability for new poses and illumination conditions is exceptional with 3D morphable models [
<xref rid="B41" ref-type="bibr">41</xref>
]. However, the computational cost of generating 3D models from 2D images or using laser scanners to access 3D models decreases the efficiency of the recognition system.</p>
<p>Royer et al. [
<xref rid="B42" ref-type="bibr">42</xref>
] used the eye region to identify a face accurately. A mixed neighborhood topology with cross-decoded patterns is presented in [
<xref rid="B43" ref-type="bibr">43</xref>
].</p>
<p>Illumination variance was studied in [
<xref rid="B44" ref-type="bibr">44</xref>
]. The quotient image was suggested by the authors as an identity signature that is insensitive to illumination. The approximation does not work well when the probe image contains unexpected shadows. Probe images can be identified even when their illumination differs from that of the gallery images. The technique requires only one gallery image per subject. The technique in [
<xref rid="B45" ref-type="bibr">45</xref>
] introduced additional constraints on the albedo and the surface normal to solve the shadow problem. An illumination cone model was proposed in [
<xref rid="B20" ref-type="bibr">20</xref>
]. The authors showed that the set of images of an object in a fixed pose under all lighting conditions forms a convex cone. The method needs several images of each test identity to estimate its surface geometry and albedo map. They defined different illumination cones for each sampled viewpoint to deal with pose variations. The authors discussed in [
<xref rid="B46" ref-type="bibr">46</xref>
,
<xref rid="B47" ref-type="bibr">47</xref>
] the use of Lambertian reflectance functions to model all kinds of illumination conditions for Lambertian objects. The researchers showed that a large portion of the illumination variation can be approximated using only nine spherical harmonics. Multiple virtual views and alignment errors are considered in [
<xref rid="B48" ref-type="bibr">48</xref>
]. They used them in a cross-pose face recognition method.</p>
<p>A methodology for recognition was also used in [
<xref rid="B46" ref-type="bibr">46</xref>
]. In [
<xref rid="B40" ref-type="bibr">40</xref>
], a spherical harmonics approach was exploited, and good recognition results were presented. They designed a 3D morphable model to achieve pose invariance, and this needs to generate 3D face models from 2D images.</p>
<p>Original and symmetrical face training samples were used [
<xref rid="B49" ref-type="bibr">49</xref>
] to perform collaborative representation for face recognition.</p>
<p>A nonlinear subspace approach was introduced using the tensor representation of faces under variations such as facial expressions, illumination, and poses [
<xref rid="B50" ref-type="bibr">50</xref>
]. The
<italic>n</italic>
-mode tensor Singular Value Decomposition (SVD) forms the basis of an image. In this technique, multiple images under different variations are required for each training identity. In [
<xref rid="B51" ref-type="bibr">51</xref>
], another nonlinear assumption is made for each identity in the database, and a gallery manifold is stored. When a test identity with several new poses needs to be identified, its probe manifold is first constructed; the manifold-to-manifold distance then determines its identity.</p>
<p>The main drawback is the requirement of multiple images of the test person. The authors in [
<xref rid="B52" ref-type="bibr">52</xref>
] introduced the notable idea of using bilinear generative models to decompose orthogonal factors. They showed a separable bilinear mapping between the input space and the lower dimensional subspace. After determining all the parameters of the mappings, identity and pose information can be separated explicitly. The recognition and synthesis capabilities of the technique were analyzed, and the results were encouraging. In [
<xref rid="B53" ref-type="bibr">53</xref>
], illumination invariance was examined using a similar framework. In addition, a ridge regression technique was designed to handle the matrix inversion required in the symmetric bilinear model. A modified asymmetric model in [
<xref rid="B54" ref-type="bibr">54</xref>
] is aimed at overcoming pose variations. One of the most important factors affecting performance is the discretization of the pose space. The authors in [
<xref rid="B55" ref-type="bibr">55</xref>
] incorporated nonlinearity into the generative models. They recommended a nonlinear scheme combined with the bilinear model and tried to remove the linearity constraint of the classical generative models. Wright et al. [
<xref rid="B56" ref-type="bibr">56</xref>
] presented a robust method for face recognition. They used sparse representation for feature extraction.</p>
</sec>
<sec id="sec3">
<title>3. Proposed Method</title>
<p>In this paper, the face recognition system comprises three stages: feature extraction using the Gabor wavelet transform, selection of the best features with the PSO method, and face recognition with a deep learning method.</p>
<sec id="sec3.1">
<title>3.1. Feature Extraction Using Gabor Wavelet Transform</title>
<p>A very useful tool in image processing, especially in image identification, is the Gabor filter. In the two-dimensional spatial domain, the Gabor filter is a Gaussian kernel function modulated by a complex sinusoidal plane wave, as given below:
<disp-formula id="eq1">
<label>(1)</label>
<mml:math id="M1">
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mi>G</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>y</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mi>f</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
<mml:mrow>
<mml:mi>π</mml:mi>
<mml:mi>γ</mml:mi>
<mml:mi>η</mml:mi>
</mml:mrow>
</mml:mfrac>
<mml:mi mathvariant="normal">exp</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mo></mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo></mml:mo>
</mml:mrow>
</mml:msup>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msup>
<mml:mo>+</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi>γ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msup>
<mml:msup>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo></mml:mo>
</mml:mrow>
</mml:msup>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:msup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:mfenced>
<mml:mi mathvariant="normal">exp</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>j</mml:mi>
<mml:mn>2</mml:mn>
<mml:mi>π</mml:mi>
<mml:mi>f</mml:mi>
<mml:msup>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo></mml:mo>
</mml:mrow>
</mml:msup>
<mml:mo>+</mml:mo>
<mml:mi>ϕ</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>.</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
</disp-formula>
</p>
<p>Here,
<italic>f</italic>
represents the sinusoid's frequency,
<italic>θ</italic>
is the orientation of the normal to the parallel Gabor function's stripes,
<italic>ϕ</italic>
is the phase offset,
<italic>σ</italic>
is the Gaussian envelope's standard deviation, and
<italic>γ</italic>
is the spatial aspect ratio that determines the elliptic support for the function of Gabor.</p>
<p>
<italic>x</italic>
′ and
<italic>y</italic>
′ can be calculated as the following equations:
<disp-formula id="eq2">
<label>(2)</label>
<mml:math id="M2">
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:msup>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo></mml:mo>
</mml:mrow>
</mml:msup>
<mml:mo>=</mml:mo>
<mml:mi>x</mml:mi>
<mml:mi mathvariant="normal">cos</mml:mi>
<mml:mi>θ</mml:mi>
<mml:mo>+</mml:mo>
<mml:mi>y</mml:mi>
<mml:mi mathvariant="normal">sin</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>θ</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>,</mml:mo>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:msup>
<mml:mrow>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo></mml:mo>
</mml:mrow>
</mml:msup>
<mml:mo>=</mml:mo>
<mml:mrow>
<mml:mo></mml:mo>
</mml:mrow>
<mml:mi>x</mml:mi>
<mml:mi mathvariant="normal">sin</mml:mi>
<mml:mi>θ</mml:mi>
<mml:mo>+</mml:mo>
<mml:mi>y</mml:mi>
<mml:mi mathvariant="normal">cos</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>θ</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>.</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
</disp-formula>
</p>
<p>
<xref ref-type="fig" rid="fig1">Figure 1</xref>
shows the influence of changing some parameters for Gabor's function.</p>
<p>Some of the benefits of Gabor filters are invariance to rotation, scaling, and translation, as well as resistance to image distortions such as illumination change [
<xref rid="B58" ref-type="bibr">58</xref>
,
<xref rid="B59" ref-type="bibr">59</xref>
]. They are especially suitable for texture representation and discrimination.</p>
<p>A bank of Gabor filters with different frequencies and orientations can be used to extract many features, e.g., for texture analysis and segmentation of an image [
<xref rid="B60" ref-type="bibr">60</xref>
]. By varying the orientation, we can look for texture oriented in a specific direction. By varying the standard deviation of the Gaussian envelope, we change the support of the basis, i.e., the size of the image region being analyzed.</p>
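As a minimal illustration of equations (1) and (2) and of such a filter bank (a NumPy sketch, not the paper's MATLAB implementation; the parameter values and image size here are arbitrary):

```python
import numpy as np

def gabor_kernel(f, theta, phi=0.0, gamma=0.5, eta=0.5, sigma=2.0, size=11):
    """Complex 2D Gabor kernel: Gaussian envelope times a sinusoidal carrier, Eq. (1)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xp = x * np.cos(theta) + y * np.sin(theta)    # rotated coordinates, Eq. (2)
    yp = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xp**2 + gamma**2 * yp**2) / (2 * sigma**2))
    carrier = np.exp(1j * (2 * np.pi * f * xp + phi))
    return (f**2 / (np.pi * gamma * eta)) * envelope * carrier

def gabor_features(image, freqs=(0.1, 0.2),
                   thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Magnitude responses of a small Gabor filter bank, flattened into one
    feature vector; filtering is done in the Fourier domain for brevity."""
    feats = []
    for f in freqs:
        for th in thetas:
            k = gabor_kernel(f, th)
            pad = np.zeros(image.shape, dtype=complex)
            pad[:k.shape[0], :k.shape[1]] = k
            resp = np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(pad))
            feats.append(np.abs(resp).ravel())
    return np.concatenate(feats)

img = np.random.rand(32, 32)
vec = gabor_features(img)
print(vec.shape)  # (8192,): 8 filters x 32*32 magnitude responses
```

Feature vectors of this size are exactly what motivates the PSO-based selection stage that follows.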
<p>Once the features are extracted, the most relevant subset of features is selected using the PSO method to build a flexible face recognition system.</p>
</sec>
<sec id="sec3.2">
<title>3.2. Feature Selection with Particle Swarm Optimization</title>
<p>Particle swarm optimization (PSO), also known as the bird swarm algorithm, was initially created in 1995 by Kennedy and Eberhart [
<xref rid="B61" ref-type="bibr">61</xref>
]. PSO is a mathematical method for solving optimization problems. For each problem, particles (solutions) fly through the problem space according to mathematical update rules for the velocity and position of each particle. Each particle has a fitness value, measured by the fitness function to be optimized, and a velocity that guides its flight [
<xref rid="B62" ref-type="bibr">62</xref>
].</p>
<p>In computational techniques, PSO is used as a stochastic optimization algorithm for feature selection and classification. This is done by iteratively selecting the most relevant and useful set of features to improve or maintain the classification performance of a robust facial recognition system [
<xref rid="B63" ref-type="bibr">63</xref>
].</p>
<p>The basic idea behind this algorithm is the coevolution of different classes of birds rather than a focus on one particular class. This gives the algorithm effective search abilities [
<xref rid="B64" ref-type="bibr">64</xref>
]. The PSO algorithm is illustrated in
<xref ref-type="fig" rid="fig2">Figure 2</xref>
.</p>
<p>First, all the particles are assigned initial values; after that, the fitness value of each particle is evaluated. The current fitness value is then compared with the previous one: if it is better, the best value is updated to the current one; otherwise, the old value is kept [
<xref rid="B65" ref-type="bibr">65</xref>
]. This process is repeated until the best solution is obtained, and then the algorithm ends.</p>
<p>The equation of the PSO algorithm is demonstrated below:
<disp-formula id="eq3">
<label>(3)</label>
<mml:math id="M3">
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:msubsup>
<mml:mrow>
<mml:mi>v</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>d</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>+</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
<mml:mo>=</mml:mo>
<mml:mi>w</mml:mi>
<mml:msubsup>
<mml:mrow>
<mml:mi>v</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>d</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:msub>
<mml:mrow>
<mml:mi>r</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:msubsup>
<mml:mrow>
<mml:mtext>best</mml:mtext>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>d</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo></mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>d</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfenced>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
<mml:msub>
<mml:mrow>
<mml:mi>r</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>g</mml:mi>
<mml:msup>
<mml:mrow>
<mml:mtext>best</mml:mtext>
</mml:mrow>
<mml:mrow>
<mml:mi>d</mml:mi>
</mml:mrow>
</mml:msup>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo></mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>d</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfenced>
<mml:mo>.</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
</disp-formula>
</p>
<p>Each particle is upgraded with two “best” values in each iteration. Here,
<italic>v</italic>
denotes the velocity,
<italic>w</italic>
is the inertia weight, bounded between
<italic>w</italic>
<sub>max</sub>
and
<italic>w</italic>
<sub>min</sub>
, and
<italic>x</italic>
is the solution [
<xref rid="B66" ref-type="bibr">66</xref>
,
<xref rid="B67" ref-type="bibr">67</xref>
]. Continuing,
<italic>t</italic>
refers to the iteration number,
<italic>i</italic>
to the index of the particle in the population, and
<italic>d</italic>
to the dimension of the search space.
<italic>c</italic>
<sub>1</sub>
and
<italic>c</italic>
<sub>2</sub>
are acceleration factors;
<italic>r</italic>
<sub>1</sub>
and
<italic>r</italic>
<sub>2</sub>
are two independent random numbers in [0, 1].
<italic>p</italic>
best denotes the personal best solution (the best solution found so far by the particle), while
<italic>g</italic>
best denotes the global best solution recorded by the particle swarm optimizer, i.e., the best value achieved so far by any particle in the entire population.</p>
<p>Afterwards, the velocity is mapped to a probability value, as demonstrated in the following equation:
<disp-formula id="eq4">
<label>(4)</label>
<mml:math id="M4">
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mi>s</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>v</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>d</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>+</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfenced>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>+</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi>e</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mo></mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>v</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>d</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>+</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:mfrac>
<mml:mo>.</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
</disp-formula>
</p>
<p>The particle position,
<italic>p</italic>
best, and
<italic>g</italic>
best are then updated according to the following equations:
<disp-formula id="eq5">
<label>(5)</label>
<mml:math id="M5">
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:msubsup>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>d</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>+</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
<mml:mo>=</mml:mo>
<mml:mfenced open="{" close="">
<mml:mrow>
<mml:mtable>
<mml:mtr>
<mml:mtd columnalign="left">
<mml:mn>1</mml:mn>
<mml:mo>,</mml:mo>
</mml:mtd>
<mml:mtd columnalign="left">
<mml:mtext>if </mml:mtext>
<mml:mi mathvariant="normal">rand</mml:mi>
<mml:mo><</mml:mo>
<mml:mi>s</mml:mi>
<mml:mtext></mml:mtext>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>v</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>d</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>+</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfenced>
<mml:mo>,</mml:mo>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd columnalign="left">
<mml:mn>0</mml:mn>
<mml:mo>,</mml:mo>
</mml:mtd>
<mml:mtd columnalign="left">
<mml:mtext>otherwise</mml:mtext>
<mml:mo>,</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:mfenced>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
</disp-formula>
where rand is a random number between 0 and 1.
<disp-formula id="eq6">
<label>(6)</label>
<mml:math id="M6">
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:msub>
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mtext>best</mml:mtext>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>+</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
<mml:mo>=</mml:mo>
<mml:mfenced open="{" close="">
<mml:mrow>
<mml:mtable>
<mml:mtr>
<mml:mtd columnalign="left">
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>+</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
<mml:mo>,</mml:mo>
</mml:mtd>
<mml:mtd columnalign="left">
<mml:mtext>if </mml:mtext>
<mml:mi>F</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>+</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfenced>
<mml:mo><</mml:mo>
<mml:mi>F</mml:mi>
<mml:mtext></mml:mtext>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mtext>best</mml:mtext>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfenced>
<mml:mo>,</mml:mo>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd columnalign="left">
<mml:msub>
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mtext>best</mml:mtext>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>,</mml:mo>
</mml:mtd>
<mml:mtd columnalign="left">
<mml:mtext>otherwise</mml:mtext>
<mml:mo>,</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:mfenced>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:msub>
<mml:mrow>
<mml:mi>g</mml:mi>
<mml:mtext>best</mml:mtext>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>+</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
<mml:mo>=</mml:mo>
<mml:mfenced open="{" close="">
<mml:mrow>
<mml:mtable>
<mml:mtr>
<mml:mtd columnalign="left">
<mml:msub>
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mtext>best</mml:mtext>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>+</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
<mml:mo>,</mml:mo>
</mml:mtd>
<mml:mtd columnalign="left">
<mml:mtext>if </mml:mtext>
<mml:mi>F</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mtext>best</mml:mtext>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>+</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfenced>
<mml:mo><</mml:mo>
<mml:mi>F</mml:mi>
<mml:mtext></mml:mtext>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>g</mml:mi>
<mml:mtext>best</mml:mtext>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfenced>
<mml:mo>,</mml:mo>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd columnalign="left">
<mml:msub>
<mml:mrow>
<mml:mi>g</mml:mi>
<mml:mtext>best</mml:mtext>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>,</mml:mo>
</mml:mtd>
<mml:mtd columnalign="left">
<mml:mtext>otherwise</mml:mtext>
<mml:mo>,</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:mfenced>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
</disp-formula>
where
<italic>F</italic>
is the fitness function. The inertia weight
<italic>w</italic>
decreases linearly with the iteration count:
<disp-formula id="eq7">
<label>(7)</label>
<mml:math id="M7">
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mi>w</mml:mi>
<mml:mo>=</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="normal">max</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo></mml:mo>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="normal">max</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo></mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="normal">min</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>T</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="normal">max</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:mfenced>
<mml:mo>.</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
</disp-formula>
</p>
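The update rules of equations (3)-(7) can be sketched as a binary PSO for feature selection. This is an illustrative NumPy sketch, not the paper's code: the toy fitness function below is a hypothetical stand-in for the classification error, and the swarm sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
D, N, T = 20, 10, 50                # features, particles, iterations
c1 = c2 = 2.0                       # acceleration factors
w_max, w_min = 0.9, 0.4             # inertia weight bounds

# Toy fitness to MINIMIZE (stand-in for classification error):
# only the first 5 features are "useful" in this synthetic setup.
useful = np.zeros(D, dtype=bool)
useful[:5] = True
def fitness(mask):
    return int(np.sum(mask != useful))

x = rng.integers(0, 2, (N, D)).astype(float)   # binary positions (feature masks)
v = rng.uniform(-1, 1, (N, D))                 # velocities
pbest = x.copy()
pbest_f = np.array([fitness(p.astype(bool)) for p in pbest])
gbest = pbest[pbest_f.argmin()].copy()
init_f = pbest_f.min()

for t in range(T):
    w = w_max - (w_max - w_min) * (t / T)                      # Eq. (7)
    r1, r2 = rng.random((N, D)), rng.random((N, D))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # Eq. (3)
    s = 1.0 / (1.0 + np.exp(-v))                               # Eq. (4), sigmoid
    x = (rng.random((N, D)) < s).astype(float)                 # Eq. (5)
    f = np.array([fitness(p.astype(bool)) for p in x])
    better = f < pbest_f                                       # Eq. (6)
    pbest[better] = x[better]
    pbest_f[better] = f[better]
    gbest = pbest[pbest_f.argmin()].copy()

print(fitness(gbest.astype(bool)), "mismatched features in the best mask")
```

By construction, the best fitness is monotonically nonincreasing over the iterations, mirroring the "keep the old value if it is better" rule described above.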
<p>The parameters used for particle swarm optimization are shown in
<xref rid="tab1" ref-type="table">Table 1</xref>
.</p>
<p>We obtained these parameters experimentally.</p>
</sec>
<sec id="sec3.3">
<title>3.3. Convolutional Neural Network</title>
<p>The main component of a convolutional neural network (CNN) is the convolution layer. The idea behind a convolution layer is that a feature learned locally in one part of a given input (for example, a 2D image) should also be helpful in other regions of that same input. For example, an edge-detection feature that proved useful in one part of the image might be helpful in other regions of the image at a general feature extraction stage. Other features of an image, such as edges oriented at an angle or curves, are learned by sliding the filters across the image with a step or stride size that is constant for a given convolution layer.</p>
<p>A CNN consists of one or more convolutional and subsampling layers, optionally followed by fully connected layers.
<italic>m</italic>
is the height and width of the image, and
<italic>r</italic>
is the number of channels, so the input of a convolutional layer is an image of size
<italic>m</italic>
×
<italic>m</italic>
×
<italic>r</italic>
, e.g., an RGB image has
<italic>r</italic>
= 3. Each convolutional layer has its own set of kernels: it has
<italic>k</italic>
kernels or filters of size
<italic>n</italic>
×
<italic>n</italic>
×
<italic>q</italic>
, where
<italic>n</italic>
is much smaller than the size of the image and
<italic>q</italic>
could be smaller than the number of channels.
<xref ref-type="fig" rid="fig3"> Figure 3</xref>
shows the general topology of a CNN.</p>
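<p>As a rough sketch of the dimensions involved, the spatial size of the feature map produced by a convolution layer can be computed as follows; the stride and padding parameters are generic assumptions, not values taken from the paper.</p>

```python
def conv_output_size(m, n, stride=1, padding=0):
    """Spatial side length of the feature map produced by sliding an
    n x n kernel over an m x m input with the given stride and padding."""
    return (m - n + 2 * padding) // stride + 1

# An m x m x r input convolved with k kernels of size n x n x q (q <= r)
# yields k feature maps, each of this spatial size.
```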
</sec>
</sec>
<sec id="sec4">
<title>4. Experimental Result and Discussion</title>
<p>This section presents the results derived from simulations in MATLAB 2020a. The recognition system consists of three stages. The first is
<italic>the feature extraction</italic>
; in this stage, we used Gabor wavelet transform. The second is
<italic>feature selection</italic>
. In this stage, we applied particle swarm optimization (PSO) to the features obtained from the Gabor wavelet transform. In the final stage,
<italic>the classification</italic>
, we used deep learning with 6 layers.</p>
<p>The database used in this study is the ORL (Olivetti Research Laboratory) face database, which contains 400 images of 40 different people: ten different grayscale images of each of the 40 distinct persons. The images were captured at various times and show variations in facial expression (closed/open eyes, not smiling/smiling) and in facial details (with/without glasses). Images were taken with a tolerance for some tilting and rotation of the face of up to 20 degrees [
<xref rid="B49" ref-type="bibr">49</xref>
].</p>
<p>Some face images from the ORL database are shown in
<xref ref-type="fig" rid="fig4">Figure 4</xref>
.</p>
<p>A simulation on the first face image was implemented in MATLAB 2020a, and the results are shown in
<xref ref-type="fig" rid="fig5">Figure 5</xref>
.</p>
<p>For evaluating the proposed method, we used the mean squared error (MSE), mean absolute percentage error (MAPE), and
<italic>R</italic>
-squared metrics. The mean squared error (MSE) is given by
<disp-formula id="eq8">
<label>(8)</label>
<mml:math id="M8">
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mtext>MSE</mml:mtext>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:mfrac>
<mml:munderover>
<mml:mrow>
<mml:mo>∑</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:munderover>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>Y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>−</mml:mo>
<mml:mover>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>Y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:mo>^</mml:mo>
</mml:mrow>
</mml:mover>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
<mml:mo>.</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
</disp-formula>
</p>
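<p>Equation (8) can be sketched directly in code; here, n is the number of samples and the predicted values play the role of the estimates Ŷ<sub>i</sub>.</p>

```python
def mse(y_true, y_pred):
    """Mean squared error of equation (8)."""
    n = len(y_true)
    return sum((y - yh) ** 2 for y, yh in zip(y_true, y_pred)) / n
```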
<p>The mean absolute percentage error (MAPE) is shown in the following equation:
<disp-formula id="eq9">
<label>(9)</label>
<mml:math id="M9">
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mtext>MAPE</mml:mtext>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:mfrac>
<mml:munderover>
<mml:mrow>
<mml:mo>∑</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:munderover>
<mml:mfenced open="|" close="|">
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>−</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>F</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:mfenced>
<mml:mo>.</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
</disp-formula>
</p>
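<p>Equation (9) translates to a few lines of code; the sketch below returns MAPE as a fraction (multiply by 100 for a percentage) and assumes no actual value A<sub>t</sub> is zero.</p>

```python
def mape(actual, forecast):
    """Mean absolute percentage error of equation (9), as a fraction."""
    n = len(actual)
    return sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / n
```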
<p>For
<italic>R</italic>
-squared, the mean of the observed data is first computed as
<disp-formula id="eq10">
<label>(10)</label>
<mml:math id="M10">
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mover accent="false">
<mml:mrow>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>¯</mml:mo>
</mml:mrow>
</mml:mover>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:mfrac>
<mml:munderover>
<mml:mrow>
<mml:mo>∑</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:munderover>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>.</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
</disp-formula>
</p>
<p>Then, the variability of the data set can be measured using three sums of squares formulas. The total sum of squares is proportional to the variance of the data:
<disp-formula id="eq11">
<label>(11)</label>
<mml:math id="M11">
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mi>S</mml:mi>
<mml:msub>
<mml:mrow>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mtext>tot</mml:mtext>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:munder>
<mml:mrow>
<mml:mo>∑</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:munder>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>−</mml:mo>
<mml:mover accent="false">
<mml:mrow>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>¯</mml:mo>
</mml:mrow>
</mml:mover>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
<mml:mo>.</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
</disp-formula>
</p>
<p>The regression sum of squares is also called the explained sum of squares:
<disp-formula id="eq12">
<label>(12)</label>
<mml:math id="M12">
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mi>S</mml:mi>
<mml:msub>
<mml:mrow>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mtext>reg</mml:mtext>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:munder>
<mml:mrow>
<mml:mo>∑</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:munder>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>f</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>−</mml:mo>
<mml:mover accent="false">
<mml:mrow>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>¯</mml:mo>
</mml:mrow>
</mml:mover>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
<mml:mo>.</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
</disp-formula>
</p>
<p>The sum of squares of residuals is the residual sum of squares:
<disp-formula id="eq13">
<label>(13)</label>
<mml:math id="M13">
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mi>S</mml:mi>
<mml:msub>
<mml:mrow>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mtext>res</mml:mtext>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:munder>
<mml:mrow>
<mml:mo>∑</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:munder>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>−</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>f</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:munder>
<mml:mrow>
<mml:mo>∑</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:munder>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>e</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
<mml:mo>.</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
</disp-formula>
</p>
<p>The most general definition of the coefficient of determination is
<disp-formula id="eq14">
<label>(14)</label>
<mml:math id="M14">
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:msup>
<mml:mrow>
<mml:mi>R</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msup>
<mml:mo>≡</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>−</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mi>S</mml:mi>
<mml:msub>
<mml:mrow>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mtext>res</mml:mtext>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:mi>S</mml:mi>
<mml:msub>
<mml:mrow>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mtext>tot</mml:mtext>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfrac>
<mml:mo>.</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
</disp-formula>
</p>
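<p>Equations (10), (11), (13), and (14) combine into a short computation of the coefficient of determination; this is a generic sketch, not the authors' MATLAB code.</p>

```python
def r_squared(y, f):
    """Coefficient of determination of equation (14): 1 - SS_res / SS_tot."""
    y_bar = sum(y) / len(y)                               # equation (10)
    ss_tot = sum((yi - y_bar) ** 2 for yi in y)           # equation (11)
    ss_res = sum((yi - fi) ** 2 for yi, fi in zip(y, f))  # equation (13)
    return 1 - ss_res / ss_tot
```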
<p>
<xref rid="tab2" ref-type="table">Table 2</xref>
shows the specifications of the layer that is used in deep learning.</p>
<p>The procedure and tests were performed using original and symmetrical face samples from the ORL and YALE datasets. The results are shown in Figures
<xref ref-type="fig" rid="fig6">6</xref>
<xref ref-type="fig" rid="fig7"></xref>
<xref ref-type="fig" rid="fig8"></xref>
<xref ref-type="fig" rid="fig9">9</xref>
. The results on the ORL dataset show how the preprocessing stage improves the accuracy. They also indicate how two methods of feature extraction can be merged, or fused, into a more powerful third method that accomplishes the task.</p>
<p>The comparison of the MSE, RMSE, MAPE, and
<italic>R</italic>
for train data is shown in
<xref ref-type="fig" rid="fig7">Figure 7</xref>
.</p>
<p>The result using the PSO is shown in Figures
<xref ref-type="fig" rid="fig10">10</xref>
<xref ref-type="fig" rid="fig11"></xref>
<xref ref-type="fig" rid="fig12"></xref>
<xref ref-type="fig" rid="fig13">13</xref>
.</p>
<p>We have observed that acceptable recognition rates and accuracy cannot be achieved by feeding the Gabor wavelet features directly to deep learning, because the large variation in the feature values corrupts the classification step: the Gabor wavelet features range roughly from −14 to 254. Therefore, an optimum subset of features must be selected.</p>
<p>PSO addresses this problem by selecting only the optimum features from the Gabor wavelet. The performance of the classifier depends on the number of features: too few features, or too many redundant ones, can reduce the accuracy rate, so the number of features must be chosen carefully. In PSO, a number of particles fly through the problem space, each searching partly at random, guided by its own previous best solution and by the global best solution of the whole swarm. The velocity is modified at each iteration, defining a particle movement that is more or less random, so that the algorithm converges. A similar approach was used in the literature [
<xref rid="B68" ref-type="bibr">68</xref>
], where the PSO method was applied to select the best features.</p>
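<p>A toy binary PSO for feature selection, in the spirit of the description above: bits drift toward the global best mask with occasional random flips, and the fitness function is supplied by the caller. The update rule is a deliberate simplification for illustration, not the authors' implementation.</p>

```python
import random

def binary_pso_select(n_features, fitness, n_particles=8, iters=30, seed=0):
    """Return the best binary feature mask found by a toy binary PSO.
    `fitness` maps a 0/1 mask to a score to be maximized."""
    rng = random.Random(seed)
    swarm = [[rng.randint(0, 1) for _ in range(n_features)]
             for _ in range(n_particles)]
    best = max(swarm, key=fitness)[:]
    for _ in range(iters):
        for p in swarm:
            for j in range(n_features):
                # With small probability, move the bit toward the global
                # best (exploitation) or flip it at random (exploration).
                if rng.random() < 0.1:
                    p[j] = best[j] if rng.random() < 0.7 else 1 - p[j]
        cand = max(swarm, key=fitness)
        if fitness(cand) > fitness(best):
            best = cand[:]
    return best
```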
<p>In our experiments, we used the Gabor wavelet for feature extraction, obtaining 10304 features. Once the features were extracted, applying PSO reduced them to 5142. The best features are selected by eliminating the highest and lowest feature values using the fitness function, which favors features whose values are closest to each other. With the proposed method, the experimental results reached a 96% recognition rate on the ORL database.</p>
<p>For the sake of completeness, we compare the performance of PCA [
<xref rid="B22" ref-type="bibr">22</xref>
], SRC [
<xref rid="B56" ref-type="bibr">56</xref>
], CRC [
<xref rid="B69" ref-type="bibr">69</xref>
], Gabor wavelet with Euclidian method [
<xref rid="B57" ref-type="bibr">57</xref>
], symmetrical face sample method [
<xref rid="B49" ref-type="bibr">49</xref>
], and the proposed method.</p>
<p>The comparison of other methods with the proposed methods is shown in
<xref rid="tab3" ref-type="table">Table 3</xref>
.</p>
</sec>
<sec id="sec5">
<title>5. Conclusion</title>
<p>Exploiting the symmetry property of the face is an efficient way to increase the performance of face recognition systems. In this study, a new method for face recognition is provided that takes advantage of the symmetry of the face data. The symmetry procedure can be applied either in the image space or in the feature space; although many feature extraction methods exist, none of them handles the symmetry procedure in the feature space. The suggested method can perform the symmetry procedure in either space. The introduced method is examined and tested for face recognition using data from the ORL and YALE datasets.</p>
</sec>
</body>
<back>
<sec sec-type="data-availability">
<title>Data Availability</title>
<p>All data available for readers are included within the article.</p>
</sec>
<sec sec-type="COI-statement">
<title>Conflicts of Interest</title>
<p>The authors declare that they have no conflicts of interest.</p>
</sec>
<ref-list>
<ref id="B1">
<label>1</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bouguila</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Khochtali</surname>
<given-names>H.</given-names>
</name>
</person-group>
<article-title>Facial plastic surgery and face recognition algorithms: interaction and challenges. A scoping review and future directions</article-title>
<source>
<italic toggle="yes">Journal of Stomatology, Oral and Maxillofacial Surgery</italic>
</source>
<year>2020</year>
<volume>121</volume>
<issue>6</issue>
<fpage>696</fpage>
<lpage>703</lpage>
<pub-id pub-id-type="doi">10.1016/j.jormas.2020.06.007</pub-id>
</element-citation>
</ref>
<ref id="B2">
<label>2</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Bledsoe</surname>
<given-names>W. W.</given-names>
</name>
</person-group>
<article-title>Semiautomatic facial recognition</article-title>
<year>1968</year>
<publisher-name>Technical Report SRI Project 6693</publisher-name>
</element-citation>
</ref>
<ref id="B3">
<label>3</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Yazdi</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Mardani-Samani</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Bordbar</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Mobaraki</surname>
<given-names>R.</given-names>
</name>
</person-group>
<article-title>Age classification based on RBF neural network</article-title>
<source>
<italic toggle="yes">Canadian Journal on Image Processing and Computer Vision</italic>
</source>
<year>2012</year>
<volume>3</volume>
<issue>2</issue>
<fpage>38</fpage>
<lpage>42</lpage>
</element-citation>
</ref>
<ref id="B4">
<label>4</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Horng</surname>
<given-names>W.-B.</given-names>
</name>
<name>
<surname>Lee</surname>
<given-names>C.-P.</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>C.-W.</given-names>
</name>
</person-group>
<article-title>Classification of age groups based on facial features</article-title>
<source>
<italic toggle="yes">Journal of Applied Science and Engineering</italic>
</source>
<year>2001</year>
<volume>4</volume>
<issue>3</issue>
<fpage>183</fpage>
<lpage>192</lpage>
</element-citation>
</ref>
<ref id="B5">
<label>5</label>
<element-citation publication-type="other">
<person-group person-group-type="author">
<name>
<surname>Ahonen</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Hadid</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Pietikäinen</surname>
<given-names>M.</given-names>
</name>
</person-group>
<article-title>Face recognition with local binary patterns</article-title>
<source>
<italic toggle="yes">European conference on computer vision</italic>
</source>
<year>2004</year>
<fpage>469</fpage>
<lpage>481</lpage>
</element-citation>
</ref>
<ref id="B6">
<label>6</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zhao</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Pietikainen</surname>
<given-names>M.</given-names>
</name>
</person-group>
<article-title>Dynamic texture recognition using local binary patterns with an application to facial expressions</article-title>
<source>
<italic toggle="yes">IEEE Transactions on Pattern Analysis and Machine Intelligence</italic>
</source>
<year>2007</year>
<volume>29</volume>
<issue>6</issue>
<fpage>915</fpage>
<lpage>928</lpage>
<pub-id pub-id-type="doi">10.1109/TPAMI.2007.1110</pub-id>
<pub-id pub-id-type="other">2-s2.0-34247557079</pub-id>
<pub-id pub-id-type="pmid">17431293</pub-id>
</element-citation>
</ref>
<ref id="B7">
<label>7</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Chandra Mohan</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Vijaya Kumar</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Damodaram</surname>
<given-names>A.</given-names>
</name>
</person-group>
<article-title>Novel Method of Adulthood Classification Based on Geometrical Features of Face</article-title>
<source>
<italic toggle="yes">GVIP Journal of Graphics, Vision and Image Processing</italic>
</source>
<year>2010</year>
</element-citation>
</ref>
<ref id="B8">
<label>8</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kumar</surname>
<given-names>V. V.</given-names>
</name>
<name>
<surname>Murty</surname>
<given-names>G. S.</given-names>
</name>
<name>
<surname>Kumar</surname>
<given-names>P. S.</given-names>
</name>
</person-group>
<article-title>Classification of facial expressions based on transitions derived from third order neighborhood LBP</article-title>
<source>
<italic toggle="yes">Global Journal of Computer Science and Technology</italic>
</source>
<year>2014</year>
<volume>14</volume>
<issue>1-F</issue>
</element-citation>
</ref>
<ref id="B9">
<label>9</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kroeker</surname>
<given-names>K. L.</given-names>
</name>
</person-group>
<article-title>Face recognition breakthrough</article-title>
<source>
<italic toggle="yes">Communications of the ACM</italic>
</source>
<year>2009</year>
<volume>52</volume>
<issue>8</issue>
<fpage>18</fpage>
<lpage>19</lpage>
</element-citation>
</ref>
<ref id="B10">
<label>10</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zhang</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Fulcher</surname>
<given-names>J.</given-names>
</name>
</person-group>
<article-title>Face recognition using artificial neural network group-based adaptive tolerance (GAT) trees</article-title>
<source>
<italic toggle="yes">IEEE Transactions on Neural Networks</italic>
</source>
<year>1996</year>
<volume>7</volume>
<issue>3</issue>
<fpage>555</fpage>
<lpage>567</lpage>
<pub-id pub-id-type="doi">10.1109/72.501715</pub-id>
<pub-id pub-id-type="other">2-s2.0-0030141869</pub-id>
<pub-id pub-id-type="pmid">18263454</pub-id>
</element-citation>
</ref>
<ref id="B11">
<label>11</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Feng</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Pietikainen</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Hadid</surname>
<given-names>A.</given-names>
</name>
</person-group>
<article-title>Facial expression recognition with local binary patterns and linear programming</article-title>
<source>
<italic toggle="yes">Pattern Recognition And Image Analysis C/C of Raspoznavaniye Obrazov I Analiz Izobrazhenii</italic>
</source>
<year>2005</year>
<volume>15</volume>
<issue>2</issue>
<fpage>p. 546</fpage>
</element-citation>
</ref>
<ref id="B12">
<label>12</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Elad</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Goldenberg</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Kimmel</surname>
<given-names>R.</given-names>
</name>
</person-group>
<article-title>Low bit-rate compression of facial images</article-title>
<source>
<italic toggle="yes">IEEE Transactions on Image Processing</italic>
</source>
<year>2007</year>
<volume>16</volume>
<issue>9</issue>
<fpage>2379</fpage>
<lpage>2383</lpage>
<pub-id pub-id-type="doi">10.1109/TIP.2007.903259</pub-id>
<pub-id pub-id-type="other">2-s2.0-34548277575</pub-id>
<pub-id pub-id-type="pmid">17784610</pub-id>
</element-citation>
</ref>
<ref id="B13">
<label>13</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Skodras</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Christopoulos</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Ebrahimi</surname>
<given-names>T.</given-names>
</name>
</person-group>
<article-title>The jpeg 2000 still image compression standard</article-title>
<source>
<italic toggle="yes">IEEE Signal Processing Magazine</italic>
</source>
<year>2001</year>
<volume>18</volume>
<issue>5</issue>
<fpage>36</fpage>
<lpage>58</lpage>
<pub-id pub-id-type="doi">10.1109/79.952804</pub-id>
<pub-id pub-id-type="other">2-s2.0-0035445526</pub-id>
</element-citation>
</ref>
<ref id="B14">
<label>14</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rakshit</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Monro</surname>
<given-names>D. M.</given-names>
</name>
</person-group>
<article-title>An evaluation of image sampling and compression for human iris recognition</article-title>
<source>
<italic toggle="yes">IEEE Transactions on Information Forensics and Security</italic>
</source>
<year>2007</year>
<volume>2</volume>
<issue>3</issue>
<fpage>605</fpage>
<lpage>612</lpage>
<pub-id pub-id-type="doi">10.1109/TIFS.2007.902401</pub-id>
<pub-id pub-id-type="other">2-s2.0-34548188067</pub-id>
</element-citation>
</ref>
<ref id="B15">
<label>15</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lu</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Plataniotis</surname>
<given-names>K. N.</given-names>
</name>
<name>
<surname>Venetsanopoulos</surname>
<given-names>A. N.</given-names>
</name>
</person-group>
<article-title>Face recognition using LDA-based algorithms</article-title>
<source>
<italic toggle="yes">IEEE Transactions on Neural Networks</italic>
</source>
<year>2003</year>
<volume>14</volume>
<issue>1</issue>
<fpage>195</fpage>
<lpage>200</lpage>
<pub-id pub-id-type="doi">10.1109/TNN.2002.806647</pub-id>
<pub-id pub-id-type="other">2-s2.0-0037274506</pub-id>
<pub-id pub-id-type="pmid">18238001</pub-id>
</element-citation>
</ref>
<ref id="B16">
<label>16</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lu</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>L.</given-names>
</name>
</person-group>
<article-title>Face recognition algorithm based on discriminative dictionary learning and sparse representation</article-title>
<source>
<italic toggle="yes">Neurocomputing</italic>
</source>
<year>2016</year>
<volume>174</volume>
<fpage>749</fpage>
<lpage>755</lpage>
<pub-id pub-id-type="doi">10.1016/j.neucom.2015.09.091</pub-id>
<pub-id pub-id-type="other">2-s2.0-84949664098</pub-id>
</element-citation>
</ref>
<ref id="B17">
<label>17</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Chen</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Patel</surname>
<given-names>V. M.</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>L.</given-names>
</name>
<etal></etal>
</person-group>
<article-title>Robust local features for remote face recognition</article-title>
<source>
<italic toggle="yes">Image and Vision Computing</italic>
</source>
<year>2017</year>
<volume>64</volume>
<fpage>34</fpage>
<lpage>46</lpage>
<pub-id pub-id-type="doi">10.1016/j.imavis.2017.05.006</pub-id>
<pub-id pub-id-type="other">2-s2.0-85020919834</pub-id>
</element-citation>
</ref>
<ref id="B18">
<label>18</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zhao</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Chellappa</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Phillips</surname>
<given-names>P. J.</given-names>
</name>
<name>
<surname>Rosenfeld</surname>
<given-names>A.</given-names>
</name>
</person-group>
<article-title>Face recognition: a literature survey</article-title>
<source>
<italic toggle="yes">ACM Computing Surveys (CSUR)</italic>
</source>
<year>2003</year>
<volume>35</volume>
<issue>4</issue>
<fpage>399</fpage>
<lpage>458</lpage>
</element-citation>
</ref>
<ref id="B19">
<label>19</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wiskott</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Fellous</surname>
<given-names>J.-M.</given-names>
</name>
<name>
<surname>Kuiger</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Von Der Malsburg</surname>
<given-names>C.</given-names>
</name>
</person-group>
<article-title>Face recognition by elastic bunch graph matching</article-title>
<source>
<italic toggle="yes">IEEE Transactions on Pattern Analysis and Machine Intelligence</italic>
</source>
<year>1997</year>
<volume>19</volume>
<issue>7</issue>
<fpage>775</fpage>
<lpage>779</lpage>
</element-citation>
</ref>
<ref id="B20">
<label>20</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Georghiades</surname>
<given-names>A. S.</given-names>
</name>
<name>
<surname>Belhumeur</surname>
<given-names>P. N.</given-names>
</name>
<name>
<surname>Kriegman</surname>
<given-names>D. J.</given-names>
</name>
</person-group>
<article-title>From few to many: illumination cone models for face recognition under variable lighting and pose</article-title>
<source>
<italic toggle="yes">Transactions on Pattern Analysis and Machine Intelligence</italic>
</source>
<year>2001</year>
<volume>23</volume>
<issue>6</issue>
<fpage>643</fpage>
<lpage>660</lpage>
</element-citation>
</ref>
<ref id="B21">
<label>21</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Belhumeur</surname>
<given-names>P. N.</given-names>
</name>
<name>
<surname>Hespanha</surname>
<given-names>J. P.</given-names>
</name>
<name>
<surname>Kriegman</surname>
<given-names>D. J.</given-names>
</name>
</person-group>
<article-title>Eigenfaces vs. fisherfaces: recognition using class specific linear projection</article-title>
<source>
<italic toggle="yes">Transactions on Pattern Analysis and Machine Intelligence</italic>
</source>
<year>1997</year>
<volume>19</volume>
<issue>7</issue>
<fpage>711</fpage>
<lpage>720</lpage>
</element-citation>
</ref>
<ref id="B22">
<label>22</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Turk</surname>
<given-names>M. A.</given-names>
</name>
<name>
<surname>Pentland</surname>
<given-names>A. P.</given-names>
</name>
</person-group>
<article-title>Face recognition using eigenfaces</article-title>
<conf-name>Proceedings 1991 IEEE Computer Society Conference on Computer Vision and Pattern Recognition</conf-name>
<conf-date>1991</conf-date>
<conf-loc>Maui, HI, USA</conf-loc>
<fpage>586</fpage>
<lpage>591</lpage>
</element-citation>
</ref>
<ref id="B23">
<label>23</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Yang</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>L.</given-names>
</name>
</person-group>
<article-title>Gabor feature based sparse representation for face recognition with Gabor occlusion dictionary</article-title>
<source>
<italic toggle="yes">Computer Vision–ECCV 2010</italic>
</source>
<year>2010</year>
<fpage>448</fpage>
<lpage>461</lpage>
<series>Lecture Notes in Computer Science</series>
</element-citation>
</ref>
<ref id="B24">
<label>24</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Guo</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>N.</given-names>
</name>
</person-group>
<article-title>A survey on deep learning based face recognition</article-title>
<source>
<italic toggle="yes">Computer Vision and Image Understanding</italic>
</source>
<year>2019</year>
<volume>189, article 102805</volume>
<pub-id pub-id-type="doi">10.1016/j.cviu.2019.102805</pub-id>
<pub-id pub-id-type="other">2-s2.0-85071677641</pub-id>
</element-citation>
</ref>
<ref id="B25">
<label>25</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Massoli</surname>
<given-names>F. V.</given-names>
</name>
<name>
<surname>Amato</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Falchi</surname>
<given-names>F.</given-names>
</name>
</person-group>
<article-title>Cross-resolution learning for face recognition</article-title>
<source>
<italic toggle="yes">Image and Vision Computing</italic>
</source>
<year>2020</year>
<volume>99, article 103927</volume>
<pub-id pub-id-type="doi">10.1016/j.imavis.2020.103927</pub-id>
</element-citation>
</ref>
<ref id="B26">
<label>26</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Iqbal</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Sameem</surname>
<given-names>M. S. I.</given-names>
</name>
<name>
<surname>Naqvi</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Kanwal</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Ye</surname>
<given-names>Z.</given-names>
</name>
</person-group>
<article-title>A deep learning approach for face recognition based on angularly discriminative features</article-title>
<source>
<italic toggle="yes">Pattern Recognition Letters</italic>
</source>
<year>2019</year>
<volume>128</volume>
<fpage>414</fpage>
<lpage>419</lpage>
<pub-id pub-id-type="doi">10.1016/j.patrec.2019.10.002</pub-id>
<pub-id pub-id-type="other">2-s2.0-85073234605</pub-id>
</element-citation>
</ref>
<ref id="B27">
<label>27</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Xu</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>Q.</given-names>
</name>
<name>
<surname>Ye</surname>
<given-names>M.</given-names>
</name>
</person-group>
<article-title>Age invariant face recognition and retrieval by coupled auto-encoder networks</article-title>
<source>
<italic toggle="yes">Neurocomputing</italic>
</source>
<year>2017</year>
<volume>222</volume>
<fpage>62</fpage>
<lpage>71</lpage>
<pub-id pub-id-type="doi">10.1016/j.neucom.2016.10.010</pub-id>
<pub-id pub-id-type="other">2-s2.0-84997541719</pub-id>
</element-citation>
</ref>
<ref id="B28">
<label>28</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Tran</surname>
<given-names>C.-K.</given-names>
</name>
<name>
<surname>Tseng</surname>
<given-names>C.-D.</given-names>
</name>
<name>
<surname>Lee</surname>
<given-names>T.-F.</given-names>
</name>
</person-group>
<article-title>Improving the face recognition accuracy under varying illumination conditions for local binary patterns and local ternary patterns based on Weber-face and singular value decomposition</article-title>
<conf-name>2016 3rd International Conference on Green Technology and Sustainable Development (GTSD)</conf-name>
<conf-date>November 2016</conf-date>
<conf-loc>Kaohsiung, Taiwan</conf-loc>
<fpage>5</fpage>
<lpage>9</lpage>
</element-citation>
</ref>
<ref id="B29">
<label>29</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Nikan</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Ahmadi</surname>
<given-names>M.</given-names>
</name>
</person-group>
<article-title>A modified technique for face recognition under degraded conditions</article-title>
<source>
<italic toggle="yes">Journal of Visual Communication and Image Representation</italic>
</source>
<year>2018</year>
<volume>55</volume>
<fpage>742</fpage>
<lpage>755</lpage>
<pub-id pub-id-type="doi">10.1016/j.jvcir.2018.08.007</pub-id>
<pub-id pub-id-type="other">2-s2.0-85051636539</pub-id>
</element-citation>
</ref>
<ref id="B30">
<label>30</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kim</surname>
<given-names>T.-K.</given-names>
</name>
<name>
<surname>Kittler</surname>
<given-names>J.</given-names>
</name>
</person-group>
<article-title>Locally linear discriminant analysis for multimodally distributed classes for face recognition with a single model image</article-title>
<source>
<italic toggle="yes">IEEE Transactions on Pattern Analysis and Machine Intelligence</italic>
</source>
<year>2005</year>
<volume>27</volume>
<issue>3</issue>
<fpage>318</fpage>
<lpage>327</lpage>
</element-citation>
</ref>
<ref id="B31">
<label>31</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>He</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Yan</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Hu</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Niyogi</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>H.-J.</given-names>
</name>
</person-group>
<article-title>Face recognition using Laplacianfaces</article-title>
<source>
<italic toggle="yes">IEEE Transactions on Pattern Analysis and Machine Intelligence</italic>
</source>
<year>2005</year>
<volume>27</volume>
<issue>3</issue>
<fpage>328</fpage>
<lpage>340</lpage>
<comment>
<ext-link ext-link-type="uri" xlink:href="http://ieeexplore.ieee.org/ielx5/34/30209/01388260.pdf?tp=&amp;arnumber=1388260&amp;isnumber=30209">http://ieeexplore.ieee.org/ielx5/34/30209/01388260.pdf?tp=&amp;arnumber=1388260&amp;isnumber=30209</ext-link>
</comment>
<pub-id pub-id-type="pmid">15747789</pub-id>
</element-citation>
</ref>
<ref id="B32">
<label>32</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Pentland</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Moghaddam</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Starner</surname>
<given-names>T.</given-names>
</name>
</person-group>
<article-title>View-based and modular eigenspaces for face recognition</article-title>
<conf-name>1994 Proceedings of IEEE Conference on Computer Vision and Pattern Recognition</conf-name>
<conf-date>June 1994</conf-date>
<conf-loc>Seattle, WA, USA</conf-loc>
<fpage>84</fpage>
<lpage>91</lpage>
</element-citation>
</ref>
<ref id="B33">
<label>33</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Gross</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Matthews</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Baker</surname>
<given-names>S.</given-names>
</name>
</person-group>
<article-title>Eigen light-fields and face recognition across pose</article-title>
<conf-name>Proceedings of Fifth IEEE International Conference on Automatic Face Gesture Recognition</conf-name>
<conf-date>May 2002</conf-date>
<conf-loc>Washington, DC, USA</conf-loc>
<fpage>1</fpage>
<lpage>7</lpage>
</element-citation>
</ref>
<ref id="B34">
<label>34</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zhao</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Dong</surname>
<given-names>Y.</given-names>
</name>
</person-group>
<article-title>Learning blur invariant binary descriptor for face recognition</article-title>
<source>
<italic toggle="yes">Neurocomputing</italic>
</source>
<year>2020</year>
<volume>404</volume>
<fpage>34</fpage>
<lpage>40</lpage>
</element-citation>
</ref>
<ref id="B35">
<label>35</label>
<element-citation publication-type="other">
<person-group person-group-type="author">
<name>
<surname>Mairal</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Ponce</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Sapiro</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Zisserman</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Bach</surname>
<given-names>F. R.</given-names>
</name>
</person-group>
<article-title>Supervised dictionary learning</article-title>
<source>
<italic toggle="yes">Advances in Neural Information Processing Systems</italic>
</source>
<year>2009</year>
<fpage>1033</fpage>
<lpage>1040</lpage>
</element-citation>
</ref>
<ref id="B36">
<label>36</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sunday</surname>
<given-names>M. A.</given-names>
</name>
<name>
<surname>Patel</surname>
<given-names>P. A.</given-names>
</name>
<name>
<surname>Dodd</surname>
<given-names>M. D.</given-names>
</name>
<name>
<surname>Gauthier</surname>
<given-names>I.</given-names>
</name>
</person-group>
<article-title>Gender and hometown population density interact to predict face recognition ability</article-title>
<source>
<italic toggle="yes">Vision Research</italic>
</source>
<year>2019</year>
<volume>163</volume>
<fpage>14</fpage>
<lpage>23</lpage>
<pub-id pub-id-type="doi">10.1016/j.visres.2019.08.006</pub-id>
<pub-id pub-id-type="other">2-s2.0-85071284092</pub-id>
<pub-id pub-id-type="pmid">31472340</pub-id>
</element-citation>
</ref>
<ref id="B37">
<label>37</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Inamizu</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Yamada</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Ogata</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Uehara</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Kira</surname>
<given-names>J.-i.</given-names>
</name>
<name>
<surname>Tobimatsu</surname>
<given-names>S.</given-names>
</name>
</person-group>
<article-title>Neuromagnetic correlates of hemispheric specialization for face and word recognition</article-title>
<source>
<italic toggle="yes">Neuroscience Research</italic>
</source>
<year>2019</year>
<volume>156</volume>
<fpage>108</fpage>
<lpage>116</lpage>
<pub-id pub-id-type="pmid">31730780</pub-id>
</element-citation>
</ref>
<ref id="B38">
<label>38</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Zhou</surname>
<given-names>S. K.</given-names>
</name>
<name>
<surname>Chellappa</surname>
<given-names>R.</given-names>
</name>
</person-group>
<article-title>Illuminating light field: image-based face recognition across illuminations and poses</article-title>
<conf-name>Sixth IEEE International Conference on Automatic Face and Gesture Recognition, 2004. Proceedings</conf-name>
<conf-date>May 2004</conf-date>
<conf-loc>Seoul, Korea (South)</conf-loc>
<fpage>229</fpage>
<lpage>234</lpage>
</element-citation>
</ref>
<ref id="B39">
<label>39</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Blanz</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Vetter</surname>
<given-names>T.</given-names>
</name>
</person-group>
<article-title>Face recognition based on fitting a 3D morphable model</article-title>
<source>
<italic toggle="yes">IEEE Transactions on Pattern Analysis and Machine Intelligence</italic>
</source>
<year>2003</year>
<volume>25</volume>
<issue>9</issue>
<fpage>1063</fpage>
<lpage>1074</lpage>
<pub-id pub-id-type="doi">10.1109/TPAMI.2003.1227983</pub-id>
<pub-id pub-id-type="other">2-s2.0-0141502062</pub-id>
</element-citation>
</ref>
<ref id="B40">
<label>40</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zhang</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Samaras</surname>
<given-names>D.</given-names>
</name>
</person-group>
<article-title>Face recognition from a single training image under arbitrary unknown lighting using spherical harmonics</article-title>
<source>
<italic toggle="yes">IEEE Transactions on Pattern Analysis and Machine Intelligence</italic>
</source>
<year>2006</year>
<volume>28</volume>
<issue>3</issue>
<fpage>351</fpage>
<lpage>363</lpage>
<pub-id pub-id-type="pmid">16526422</pub-id>
</element-citation>
</ref>
<ref id="B41">
<label>41</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Blanz</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Scherbaum</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Vetter</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Seidel</surname>
<given-names>H. P.</given-names>
</name>
</person-group>
<article-title>Exchanging faces in images</article-title>
<source>
<italic toggle="yes">Computer Graphics Forum</italic>
</source>
<year>2004</year>
<volume>23</volume>
<issue>3</issue>
<publisher-name>Wiley Online Library</publisher-name>
<fpage>669</fpage>
<lpage>676</lpage>
</element-citation>
</ref>
<ref id="B42">
<label>42</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Royer</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Blais</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Charbonneau</surname>
<given-names>I.</given-names>
</name>
<etal></etal>
</person-group>
<article-title>Greater reliance on the eye region predicts better face recognition ability</article-title>
<source>
<italic toggle="yes">Cognition</italic>
</source>
<year>2018</year>
<volume>181</volume>
<fpage>12</fpage>
<lpage>20</lpage>
<pub-id pub-id-type="doi">10.1016/j.cognition.2018.08.004</pub-id>
<pub-id pub-id-type="other">2-s2.0-85051252949</pub-id>
<pub-id pub-id-type="pmid">30103033</pub-id>
</element-citation>
</ref>
<ref id="B43">
<label>43</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kas</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>El Merabet</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Ruichek</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Messoussi</surname>
<given-names>R.</given-names>
</name>
</person-group>
<article-title>Mixed neighborhood topology cross decoded patterns for image-based face recognition</article-title>
<source>
<italic toggle="yes">Expert Systems with Applications</italic>
</source>
<year>2018</year>
<volume>114</volume>
<fpage>119</fpage>
<lpage>142</lpage>
<pub-id pub-id-type="doi">10.1016/j.eswa.2018.07.035</pub-id>
<pub-id pub-id-type="other">2-s2.0-85050763442</pub-id>
</element-citation>
</ref>
<ref id="B44">
<label>44</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Shashua</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Riklin-Raviv</surname>
<given-names>T.</given-names>
</name>
</person-group>
<article-title>The quotient image: class-based re-rendering and recognition with varying illuminations</article-title>
<source>
<italic toggle="yes">IEEE Transactions on Pattern Analysis and Machine Intelligence</italic>
</source>
<year>2001</year>
<volume>23</volume>
<issue>2</issue>
<fpage>129</fpage>
<lpage>139</lpage>
</element-citation>
</ref>
<ref id="B45">
<label>45</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Zhou</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Chellappa</surname>
<given-names>R.</given-names>
</name>
</person-group>
<article-title>Rank constrained recognition under unknown illuminations</article-title>
<conf-name>IEEE International Workshop on Analysis and Modeling of Faces and Gestures, 2003. AMFG 2003</conf-name>
<conf-date>October 2003</conf-date>
<conf-loc>Nice, France</conf-loc>
<fpage>11</fpage>
<lpage>18</lpage>
</element-citation>
</ref>
<ref id="B46">
<label>46</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Basri</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Jacobs</surname>
<given-names>D. W.</given-names>
</name>
</person-group>
<article-title>Lambertian reflectance and linear subspaces</article-title>
<source>
<italic toggle="yes">IEEE Transactions on Pattern Analysis and Machine Intelligence</italic>
</source>
<year>2003</year>
<volume>25</volume>
<issue>2</issue>
<fpage>218</fpage>
<lpage>233</lpage>
<pub-id pub-id-type="doi">10.1109/TPAMI.2003.1177153</pub-id>
<pub-id pub-id-type="other">2-s2.0-0037328517</pub-id>
</element-citation>
</ref>
<ref id="B47">
<label>47</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ramamoorthi</surname>
<given-names>R.</given-names>
</name>
</person-group>
<article-title>Analytic PCA construction for theoretical analysis of lighting variability in images of a Lambertian object</article-title>
<source>
<italic toggle="yes">IEEE Transactions on Pattern Analysis and Machine Intelligence</italic>
</source>
<year>2002</year>
<volume>24</volume>
<issue>10</issue>
<fpage>1322</fpage>
<lpage>1333</lpage>
</element-citation>
</ref>
<ref id="B48">
<label>48</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gao</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Lee</surname>
<given-names>H. J.</given-names>
</name>
</person-group>
<article-title>Cross-pose face recognition based on multiple virtual views and alignment error</article-title>
<source>
<italic toggle="yes">Pattern Recognition Letters</italic>
</source>
<year>2015</year>
<volume>65</volume>
<fpage>170</fpage>
<lpage>176</lpage>
<pub-id pub-id-type="doi">10.1016/j.patrec.2015.07.018</pub-id>
<pub-id pub-id-type="other">2-s2.0-84940389987</pub-id>
</element-citation>
</ref>
<ref id="B49">
<label>49</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Liu</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Pu</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Wu</surname>
<given-names>Q.</given-names>
</name>
<name>
<surname>Zhao</surname>
<given-names>X.</given-names>
</name>
</person-group>
<article-title>Using the original and symmetrical face training samples to perform collaborative representation for face recognition</article-title>
<source>
<italic toggle="yes">Optik-International Journal for Light and Electron Optics</italic>
</source>
<year>2016</year>
<volume>127</volume>
<issue>4</issue>
<fpage>1900</fpage>
<lpage>1904</lpage>
<pub-id pub-id-type="doi">10.1016/j.ijleo.2015.09.142</pub-id>
<pub-id pub-id-type="other">2-s2.0-84954042795</pub-id>
</element-citation>
</ref>
<ref id="B50">
<label>50</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Vasilescu</surname>
<given-names>M. A. O.</given-names>
</name>
<name>
<surname>Terzopoulos</surname>
<given-names>D.</given-names>
</name>
</person-group>
<article-title>Multilinear subspace analysis of image ensembles</article-title>
<conf-name>2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2003. Proceedings</conf-name>
<conf-date>June 2003</conf-date>
<conf-loc>Madison, WI, USA</conf-loc>
</element-citation>
</ref>
<ref id="B51">
<label>51</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Wang</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Shan</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Gao</surname>
<given-names>W.</given-names>
</name>
</person-group>
<article-title>Manifold-manifold distance with application to face recognition based on image set</article-title>
<conf-name>2008 IEEE Conference on Computer Vision and Pattern Recognition</conf-name>
<conf-date>June 2008</conf-date>
<conf-loc>Anchorage, AK, USA</conf-loc>
<fpage>1</fpage>
<lpage>8</lpage>
</element-citation>
</ref>
<ref id="B52">
<label>52</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tenenbaum</surname>
<given-names>J. B.</given-names>
</name>
<name>
<surname>Freeman</surname>
<given-names>W. T.</given-names>
</name>
</person-group>
<article-title>Separating style and content with bilinear models</article-title>
<source>
<italic toggle="yes">Neural Computation</italic>
</source>
<year>2000</year>
<volume>12</volume>
<issue>6</issue>
<fpage>1247</fpage>
<lpage>1283</lpage>
<pub-id pub-id-type="doi">10.1162/089976600300015349</pub-id>
<pub-id pub-id-type="other">2-s2.0-0034202338</pub-id>
<pub-id pub-id-type="pmid">10935711</pub-id>
</element-citation>
</ref>
<ref id="B53">
<label>53</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Shin</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Lee</surname>
<given-names>H.-S.</given-names>
</name>
<name>
<surname>Kim</surname>
<given-names>D.</given-names>
</name>
</person-group>
<article-title>Illumination-robust face recognition using ridge regressive bilinear models</article-title>
<source>
<italic toggle="yes">Pattern Recognition Letters</italic>
</source>
<year>2008</year>
<volume>29</volume>
<issue>1</issue>
<fpage>49</fpage>
<lpage>58</lpage>
</element-citation>
</ref>
<ref id="B54">
<label>54</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Prince</surname>
<given-names>S. J.</given-names>
</name>
<name>
<surname>Warrell</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Elder</surname>
<given-names>J. H.</given-names>
</name>
<name>
<surname>Felisberti</surname>
<given-names>F. M.</given-names>
</name>
</person-group>
<article-title>Tied factor analysis for face recognition across large pose differences</article-title>
<source>
<italic toggle="yes">IEEE Transactions on Pattern Analysis and Machine Intelligence</italic>
</source>
<year>2008</year>
<volume>30</volume>
<issue>6</issue>
<fpage>970</fpage>
<lpage>984</lpage>
<pub-id pub-id-type="doi">10.1109/TPAMI.2008.48</pub-id>
<pub-id pub-id-type="other">2-s2.0-43249098320</pub-id>
<pub-id pub-id-type="pmid">18421104</pub-id>
</element-citation>
</ref>
<ref id="B55">
<label>55</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Elgammal</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Lee</surname>
<given-names>C.-S.</given-names>
</name>
</person-group>
<article-title>Separating style and content on a nonlinear manifold</article-title>
<conf-name>Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2004. CVPR 2004</conf-name>
<conf-date>July 2004</conf-date>
<conf-loc>Washington, DC, USA</conf-loc>
<fpage>I-478</fpage>
<lpage>I-485</lpage>
</element-citation>
</ref>
<ref id="B56">
<label>56</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wright</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Yang</surname>
<given-names>A. Y.</given-names>
</name>
<name>
<surname>Ganesh</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Sastry</surname>
<given-names>S. S.</given-names>
</name>
<name>
<surname>Ma</surname>
<given-names>Y.</given-names>
</name>
</person-group>
<article-title>Robust face recognition via sparse representation</article-title>
<source>
<italic toggle="yes">IEEE Transactions on Pattern Analysis and Machine Intelligence</italic>
</source>
<year>2009</year>
<volume>31</volume>
<issue>2</issue>
<fpage>210</fpage>
<lpage>227</lpage>
<pub-id pub-id-type="doi">10.1109/TPAMI.2008.79</pub-id>
<pub-id pub-id-type="other">2-s2.0-61549128441</pub-id>
<pub-id pub-id-type="pmid">19110489</pub-id>
</element-citation>
</ref>
<ref id="B57">
<label>57</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Allagwail</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Gedik</surname>
<given-names>O. S.</given-names>
</name>
<name>
<surname>Rahebi</surname>
<given-names>J.</given-names>
</name>
</person-group>
<article-title>Face recognition with symmetrical face training samples based on local binary patterns and the Gabor filter</article-title>
<source>
<italic toggle="yes">Symmetry</italic>
</source>
<year>2019</year>
<volume>11</volume>
<issue>2</issue>
<fpage>157</fpage>
<pub-id pub-id-type="doi">10.3390/sym11020157</pub-id>
<pub-id pub-id-type="other">2-s2.0-85061859258</pub-id>
</element-citation>
</ref>
<ref id="B58">
<label>58</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kamarainen</surname>
<given-names>J.-K.</given-names>
</name>
<name>
<surname>Kyrki</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Kalviainen</surname>
<given-names>H.</given-names>
</name>
</person-group>
<article-title>Invariance properties of Gabor filter-based features-overview and applications</article-title>
<source>
<italic toggle="yes">IEEE Transactions on Image Processing</italic>
</source>
<year>2006</year>
<volume>15</volume>
<issue>5</issue>
<fpage>1088</fpage>
<lpage>1099</lpage>
<pub-id pub-id-type="doi">10.1109/TIP.2005.864174</pub-id>
<pub-id pub-id-type="other">2-s2.0-33646005199</pub-id>
<pub-id pub-id-type="pmid">16671290</pub-id>
</element-citation>
</ref>
<ref id="B59">
<label>59</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Meshgini</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Aghagolzadeh</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Seyedarabi</surname>
<given-names>H.</given-names>
</name>
</person-group>
<article-title>Face recognition using Gabor-based direct linear discriminant analysis and support vector machine</article-title>
<source>
<italic toggle="yes">Computers &amp; Electrical Engineering</italic>
</source>
<year>2013</year>
<volume>39</volume>
<issue>3</issue>
<fpage>727</fpage>
<lpage>745</lpage>
<pub-id pub-id-type="doi">10.1016/j.compeleceng.2012.12.011</pub-id>
<pub-id pub-id-type="other">2-s2.0-84879206983</pub-id>
</element-citation>
</ref>
<ref id="B60">
<label>60</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Haghighat</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Zonouz</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Abdel-Mottaleb</surname>
<given-names>M.</given-names>
</name>
</person-group>
<article-title>Identification using encrypted biometrics</article-title>
<source>
<italic toggle="yes">International Conference on Computer Analysis of Images and Patterns</italic>
</source>
<year>2013</year>
<publisher-name>Springer</publisher-name>
<fpage>440</fpage>
<lpage>448</lpage>
<series>Lecture Notes in Computer Science</series>
</element-citation>
</ref>
<ref id="B61">
<label>61</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Banks</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Vincent</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Anyakoha</surname>
<given-names>C.</given-names>
</name>
</person-group>
<article-title>A review of particle swarm optimization. Part II: hybridisation, combinatorial, multicriteria and constrained optimization, and indicative applications</article-title>
<source>
<italic toggle="yes">Natural Computing</italic>
</source>
<year>2008</year>
<volume>7</volume>
<issue>1</issue>
<fpage>109</fpage>
<lpage>124</lpage>
<pub-id pub-id-type="doi">10.1007/s11047-007-9050-z</pub-id>
<pub-id pub-id-type="other">2-s2.0-39049136085</pub-id>
</element-citation>
</ref>
<ref id="B62">
<label>62</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Kennedy</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Eberhart</surname>
<given-names>R.</given-names>
</name>
</person-group>
<article-title>Particle swarm optimization</article-title>
<conf-name>Proceedings of ICNN'95-International Conference on Neural Networks</conf-name>
<conf-date>December 1995</conf-date>
<conf-loc>Perth, WA, Australia</conf-loc>
<fpage>1942</fpage>
<lpage>1948</lpage>
</element-citation>
</ref>
<ref id="B63">
<label>63</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hafiz</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Swain</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Patel</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Naik</surname>
<given-names>C.</given-names>
</name>
</person-group>
<article-title>A two-dimensional (2-D) learning framework for particle swarm based feature selection</article-title>
<source>
<italic toggle="yes">Pattern Recognition</italic>
</source>
<year>2018</year>
<volume>76</volume>
<fpage>416</fpage>
<lpage>433</lpage>
<pub-id pub-id-type="doi">10.1016/j.patcog.2017.11.027</pub-id>
<pub-id pub-id-type="other">2-s2.0-85040372862</pub-id>
</element-citation>
</ref>
<ref id="B64">
<label>64</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Kennedy</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Eberhart</surname>
<given-names>R. C.</given-names>
</name>
</person-group>
<article-title>A discrete binary version of the particle swarm algorithm</article-title>
<conf-name>1997 IEEE International Conference on Systems, Man, and Cybernetics. Computational Cybernetics and Simulation</conf-name>
<conf-date>October 1997</conf-date>
<conf-loc>Orlando, FL, USA</conf-loc>
<fpage>4104</fpage>
<lpage>4108</lpage>
</element-citation>
</ref>
<ref id="B65">
<label>65</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Shi</surname>
<given-names>Y.</given-names>
</name>
</person-group>
<article-title>Particle swarm optimization: developments, applications and resources</article-title>
<conf-name>Proceedings of the 2001 Congress on Evolutionary Computation (IEEE Cat. No. 01TH8546)</conf-name>
<conf-date>May 2001</conf-date>
<conf-loc>Seoul, Korea (South)</conf-loc>
<fpage>81</fpage>
<lpage>86</lpage>
</element-citation>
</ref>
<ref id="B66">
<label>66</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Shi</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Eberhart</surname>
<given-names>R.</given-names>
</name>
</person-group>
<article-title>A modified particle swarm optimizer</article-title>
<conf-name>1998 IEEE International Conference on Evolutionary Computation Proceedings. IEEE World Congress on Computational Intelligence (Cat. No. 98TH8360)</conf-name>
<conf-date>May 1998</conf-date>
<conf-loc>Anchorage, AK, USA</conf-loc>
<fpage>69</fpage>
<lpage>73</lpage>
</element-citation>
</ref>
<ref id="B67">
<label>67</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Unler</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Murat</surname>
<given-names>A.</given-names>
</name>
</person-group>
<article-title>A discrete particle swarm optimization method for feature selection in binary classification problems</article-title>
<source>
<italic toggle="yes">European Journal of Operational Research</italic>
</source>
<year>2010</year>
<volume>206</volume>
<issue>3</issue>
<fpage>528</fpage>
<lpage>539</lpage>
<pub-id pub-id-type="doi">10.1016/j.ejor.2010.02.032</pub-id>
<pub-id pub-id-type="other">2-s2.0-77951139898</pub-id>
</element-citation>
</ref>
<ref id="B68">
<label>68</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Too</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Abdullah</surname>
<given-names>A. R.</given-names>
</name>
<name>
<surname>Mohd Saad</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Tee</surname>
<given-names>W.</given-names>
</name>
</person-group>
<article-title>EMG feature selection and classification using a Pbest-guide binary particle swarm optimization</article-title>
<source>
<italic toggle="yes">Computation</italic>
</source>
<year>2019</year>
<volume>7</volume>
<issue>1</issue>
<fpage>12</fpage>
<pub-id pub-id-type="doi">10.3390/computation7010012</pub-id>
<pub-id pub-id-type="other">2-s2.0-85064111883</pub-id>
</element-citation>
</ref>
<ref id="B69">
<label>69</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Zhang</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Yang</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Feng</surname>
<given-names>X.</given-names>
</name>
</person-group>
<article-title>Sparse representation or collaborative representation: which helps face recognition?</article-title>
<conf-name>2011 International Conference on Computer Vision</conf-name>
<conf-date>November 2011</conf-date>
<conf-loc>Barcelona, Spain</conf-loc>
<fpage>471</fpage>
<lpage>478</lpage>
</element-citation>
</ref>
</ref-list>
</back>
<floats-group>
<fig id="fig1" orientation="portrait" position="float">
<label>Figure 1</label>
<caption>
<p>(a) Different values of wavelength (left: 25, right: 50), (b) different orientation (left: 0, right: 45), (c) changing the values of phase shift (left: 180, right: 90), (d) aspect ratio very large (left) and very small (right), and (e) different bandwidth values (left: large, right: small) [
<xref rid="B57" ref-type="bibr">57</xref>
].</p>
</caption>
<graphic xlink:href="BMRI2021-6621540.001"></graphic>
</fig>
<fig id="fig2" orientation="portrait" position="float">
<label>Figure 2</label>
<caption>
<p>Particles searching for the best and optimum solution: (a) first iteration and (b) last iteration.</p>
</caption>
<graphic xlink:href="BMRI2021-6621540.002"></graphic>
</fig>
<fig id="fig3" orientation="portrait" position="float">
<label>Figure 3</label>
<caption>
<p>CNN's topology.</p>
</caption>
<graphic xlink:href="BMRI2021-6621540.003"></graphic>
</fig>
<fig id="fig4" orientation="portrait" position="float">
<label>Figure 4</label>
<caption>
<p>ORL database.</p>
</caption>
<graphic xlink:href="BMRI2021-6621540.004"></graphic>
</fig>
<fig id="fig5" orientation="portrait" position="float">
<label>Figure 5</label>
<caption>
<p>(a) Real image, (b) left side, (c) right side, (d) left side's mirror, (e) right side's mirror, (f) integration of left side with mirrors, and (g) integration of right side with mirrors.</p>
</caption>
<graphic xlink:href="BMRI2021-6621540.005"></graphic>
</fig>
<fig id="fig6" orientation="portrait" position="float">
<label>Figure 6</label>
<caption>
<p>Train result.</p>
</caption>
<graphic xlink:href="BMRI2021-6621540.006"></graphic>
</fig>
<fig id="fig7" orientation="portrait" position="float">
<label>Figure 7</label>
<caption>
<p>The comparison of the MSE, RMSE, MAPE, and
<italic>R</italic>
for train data.</p>
</caption>
<graphic xlink:href="BMRI2021-6621540.007"></graphic>
</fig>
<fig id="fig8" orientation="portrait" position="float">
<label>Figure 8</label>
<caption>
<p>Test result.</p>
</caption>
<graphic xlink:href="BMRI2021-6621540.008"></graphic>
</fig>
<fig id="fig9" orientation="portrait" position="float">
<label>Figure 9</label>
<caption>
<p>The comparison of the MSE, RMSE, MAPE, and
<italic>R</italic>
for test data.</p>
</caption>
<graphic xlink:href="BMRI2021-6621540.009"></graphic>
</fig>
<fig id="fig10" orientation="portrait" position="float">
<label>Figure 10</label>
<caption>
<p>Train result with PSO.</p>
</caption>
<graphic xlink:href="BMRI2021-6621540.010"></graphic>
</fig>
<fig id="fig11" orientation="portrait" position="float">
<label>Figure 11</label>
<caption>
<p>The comparison of the MSE, RMSE, and
<italic>R</italic>
for train data.</p>
</caption>
<graphic xlink:href="BMRI2021-6621540.011"></graphic>
</fig>
<fig id="fig12" orientation="portrait" position="float">
<label>Figure 12</label>
<caption>
<p>Test result with PSO.</p>
</caption>
<graphic xlink:href="BMRI2021-6621540.012"></graphic>
</fig>
<fig id="fig13" orientation="portrait" position="float">
<label>Figure 13</label>
<caption>
<p>The comparison of the MSE, RMSE, and
<italic>R</italic>
for test data.</p>
</caption>
<graphic xlink:href="BMRI2021-6621540.013"></graphic>
</fig>
<table-wrap id="tab1" orientation="portrait" position="float">
<label>Table 1</label>
<caption>
<p>Parameter for particle swarm optimization to select the best features.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" rowspan="1" colspan="1">Parameter</th>
<th align="center" rowspan="1" colspan="1">Description</th>
<th align="center" rowspan="1" colspan="1">Value</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>N</italic>
</td>
<td align="center" rowspan="1" colspan="1">Number of particles (population size)</td>
<td align="center" rowspan="1" colspan="1">40</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>T</italic>
</td>
<td align="center" rowspan="1" colspan="1">Maximum number of iterations</td>
<td align="center" rowspan="1" colspan="1">15</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>c</italic>
<sub>1</sub>
</td>
<td align="center" rowspan="1" colspan="1">Cognitive factor</td>
<td align="center" rowspan="1" colspan="1">3</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>c</italic>
<sub>2</sub>
</td>
<td align="center" rowspan="1" colspan="1">Social factor</td>
<td align="center" rowspan="1" colspan="1">2.5</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>w</italic>
<sub>max</sub>
</td>
<td align="center" rowspan="1" colspan="1">Maximum bound on inertia weight</td>
<td align="center" rowspan="1" colspan="1">0.8</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>w</italic>
<sub>min</sub>
</td>
<td align="center" rowspan="1" colspan="1">Minimum bound on inertia weight</td>
<td align="center" rowspan="1" colspan="1">0.5</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>V</italic>
<sub>max</sub>
</td>
<td align="center" rowspan="1" colspan="1">Maximum velocity</td>
<td align="center" rowspan="1" colspan="1">5</td>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap id="tab2" orientation="portrait" position="float">
<label>Table 2</label>
<caption>
<p>Specifications of the deep learning network.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th rowspan="1" colspan="1"></th>
<th align="center" rowspan="1" colspan="1">Type</th>
<th align="center" rowspan="1" colspan="1">Activation</th>
<th align="center" rowspan="1" colspan="1">Learnable</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" rowspan="1" colspan="1">1</td>
<td align="center" rowspan="1" colspan="1">Sequence input</td>
<td align="center" rowspan="1" colspan="1">5141</td>
<td align="center" rowspan="1" colspan="1"></td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">2</td>
<td align="center" rowspan="1" colspan="1">LSTM</td>
<td align="center" rowspan="1" colspan="1">200</td>
<td align="center" rowspan="1" colspan="1">Input weights (800 × 5141)
<break></break>
Recurrent weights (800 × 200)
<break></break>
Bias (800 × 1)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">3</td>
<td align="center" rowspan="1" colspan="1">Fully connected</td>
<td align="center" rowspan="1" colspan="1">50</td>
<td align="center" rowspan="1" colspan="1">Weights 50 × 200
<break></break>
Bias 50 × 1</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">4</td>
<td align="center" rowspan="1" colspan="1">Dropout</td>
<td align="center" rowspan="1" colspan="1">50</td>
<td align="center" rowspan="1" colspan="1"></td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">5</td>
<td align="center" rowspan="1" colspan="1">Fully connected</td>
<td align="center" rowspan="1" colspan="1">1</td>
<td align="center" rowspan="1" colspan="1">Weights 1 × 50
<break></break>
Bias 1 × 1</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">6</td>
<td align="center" rowspan="1" colspan="1">Regression output</td>
<td align="center" rowspan="1" colspan="1"></td>
<td align="center" rowspan="1" colspan="1"></td>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap id="tab3" orientation="portrait" position="float">
<label>Table 3</label>
<caption>
<p>Comparison of other methods with the proposed method.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" rowspan="1" colspan="1">Method</th>
<th align="center" rowspan="1" colspan="1">Recognition rate</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" rowspan="1" colspan="1">PCA [
<xref rid="B22" ref-type="bibr">22</xref>
]</td>
<td align="center" rowspan="1" colspan="1">53.2%</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">SRC [
<xref rid="B56" ref-type="bibr">56</xref>
]</td>
<td align="center" rowspan="1" colspan="1">75.12%</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">CRC [
<xref rid="B69" ref-type="bibr">69</xref>
]</td>
<td align="center" rowspan="1" colspan="1">79.4%</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Gabor wavelet with Euclidean method [
<xref rid="B57" ref-type="bibr">57</xref>
]</td>
<td align="center" rowspan="1" colspan="1">83.44%</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Symmetrical face sample method [
<xref rid="B49" ref-type="bibr">49</xref>
]</td>
<td align="center" rowspan="1" colspan="1">81.43%</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Proposed method</td>
<td align="center" rowspan="1" colspan="1">85.25%</td>
</tr>
</tbody>
</table>
</table-wrap>
</floats-group>
</pmc>
</record>
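Table 1 in the record above gives the PSO hyperparameters used for feature selection (40 particles, 15 iterations, c1 = 3, c2 = 2.5, inertia weight decreasing from 0.8 to 0.5, velocity clamped at 5). The sketch below shows how such a configuration drives a standard PSO update loop; the `fitness` function is a hypothetical placeholder (a sphere function), not the paper's Gabor-feature selection objective, and the linear inertia-weight schedule is an assumption.

```python
import random

# PSO hyperparameters from Table 1 of the record.
N, T = 40, 15            # particles, iterations
C1, C2 = 3.0, 2.5        # cognitive and social factors
W_MAX, W_MIN = 0.8, 0.5  # inertia weight bounds (assumed linearly decreased)
V_MAX = 5.0              # velocity clamp
DIM = 10                 # problem dimension (illustrative only)

def fitness(x):
    # Placeholder objective to minimize; the paper's actual objective
    # scores subsets of Gabor-wavelet features, not a sphere function.
    return sum(v * v for v in x)

def pso(seed=0):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(DIM)] for _ in range(N)]
    vel = [[0.0] * DIM for _ in range(N)]
    pbest = [p[:] for p in pos]
    pbest_f = [fitness(p) for p in pos]
    g = min(range(N), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for t in range(T):
        # Inertia weight decreases linearly from W_MAX to W_MIN.
        w = W_MAX - (W_MAX - W_MIN) * t / max(T - 1, 1)
        for i in range(N):
            for d in range(DIM):
                r1, r2 = rng.random(), rng.random()
                v = (w * vel[i][d]
                     + C1 * r1 * (pbest[i][d] - pos[i][d])
                     + C2 * r2 * (gbest[d] - pos[i][d]))
                vel[i][d] = max(-V_MAX, min(V_MAX, v))  # clamp velocity
                pos[i][d] += vel[i][d]
            f = fitness(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f
```

For the binary feature-selection variant referenced in the citation to Too et al. above, positions are bit vectors and a transfer function maps velocities to flip probabilities; the continuous form shown here only illustrates how the tabulated parameters interact.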

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Wicri/Sante/explor/MaghrebDataLibMedV2/Data/Pmc/Corpus
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000166  | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Pmc/Corpus/biblio.hfd -nk 000166  | SxmlIndent | more

To put a link to this page in the Wicri network

{{Explor lien
   |wiki=    Wicri/Sante
   |area=    MaghrebDataLibMedV2
   |flux=    Pmc
   |étape=   Corpus
   |type=    RBID
   |clé=     
   |texte=   
}}

Wicri

This area was generated with Dilib version V0.6.38.
Data generation: Wed Jun 30 18:27:05 2021. Site generation: Wed Jun 30 18:34:21 2021